Exadata Database Machine Overview
This article provides an overview of Oracle's Exadata Database Machine and the benefits of this engineered system. The Database Machine is fully integrated with Oracle Database and uses Exadata storage servers. It provides high performance and high availability for all types of database workloads, and it eliminates the usual bottlenecks, particularly around IOPS, so you can simply consolidate multiple databases onto a single Database Machine. These machines are also very easy to deploy, since a single script does the job for you with the help of the Oracle Exadata Deployment Assistant (which generates the XML input file).
Why do we need a database machine?
Data warehousing issues: The Database Machine supports large, complex queries. Because the storage is connected over a high-speed InfiniBand network, you get more than enough I/O throughput to support massive scans. The Smart Scan feature reduces unproductive I/O, parallel processing improves system performance, and Hybrid Columnar Compression reduces the storage footprint.
OLTP issues: The Database Machine addresses the typical OLTP pain points. It supports large user populations and transaction volumes by providing enough I/Os per second and by caching frequently accessed data. It delivers consistent performance across all tables while minimizing I/O latency.
Consolidation issues: To reduce datacenter space, most companies have turned to virtualization for small and mid-range workloads. How does the Database Machine address consolidation? You can accommodate multiple workloads on the same box instead of running multiple database servers in your environment. You can also prioritize workloads, but that requires proper analysis and planning before implementation.
Configuration issues:
The Database Machine eliminates configuration issues almost entirely. Since Oracle alone supports the complete set of Database Machine components, all hardware and firmware are guaranteed to be compatible with the Oracle Database software. Oracle also ships a well-balanced configuration across the machine to eliminate bottlenecks.
The Database Machine consists of the following components:
1. Exadata Storage Servers (cells)
2. Compute nodes (database servers; typically Oracle Linux x86_64 servers)
3. InfiniBand switches (internal networking)
4. Cisco switch (external networking)
5. Power distribution units
Exadata Database Machine X3-2 Full Rack

Rack           Exadata Storage Cells   Database Servers   InfiniBand Switches
Full Rack      14                      8                  3
Half Rack      7                       4                  3
Quarter Rack   3                       2                  2
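For quick reference, the rack configurations can also be captured as a small lookup table. The following Python sketch is purely illustrative; the figures come from Oracle's X3-2 data sheet (which lists 4 database servers in the half rack):

```python
# Illustrative lookup table of Exadata X3-2 rack configurations
# (figures per Oracle's X3-2 data sheet).
X3_2_RACKS = {
    "full":    {"cells": 14, "db_servers": 8, "ib_switches": 3},
    "half":    {"cells": 7,  "db_servers": 4, "ib_switches": 3},
    "quarter": {"cells": 3,  "db_servers": 2, "ib_switches": 2},
}

def rack_summary(rack: str) -> str:
    """Return a one-line summary of a rack configuration."""
    cfg = X3_2_RACKS[rack]
    return (f"{rack} rack: {cfg['cells']} cells, "
            f"{cfg['db_servers']} database servers, "
            f"{cfg['ib_switches']} InfiniBand switches")

print(rack_summary("quarter"))
```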
Exadata Storage Servers (cells): The Exadata storage server is designed exclusively for Oracle Database. It is a self-contained storage platform that runs the Exadata Storage Server software. Databases are typically deployed across multiple Exadata storage servers to deliver the best performance. The database servers (compute nodes) and the cells communicate with each other over the InfiniBand network (40 Gb/s). Each storage server runs Oracle Linux x86_64, and its storage is managed by the Exadata cell software.
You cannot allocate Exadata storage to non-Oracle database servers; the Exadata storage servers are designed exclusively to provide storage to Oracle databases within the rack.
Exadata – Quarter Rack Example
Exadata Storage Server X3-2 – Hardware Overview
Processors 12 Intel CPU cores
System Memory 64GB
Disk Drives (If HPD) 12×600 GB 15K RPM
Disk Drives (If HCD) 12x3TB 7.2K RPM
Flash 1.6 TB
Disk Controller: Disk Controller Host Bus Adapter with 512 MB Battery-Backed Write Cache
InfiniBand Network Dual-Port QDR (40Gb/s) InfiniBand Host Channel Adapter
Remote Management Integrated Lights Out Manager (ILOM) Ethernet Port
Power Supplies 2 x Redundant Hot-Swappable Power Supplies
Exadata Storage Server X3-2 Configuration Options
Exadata storage servers are available in two configurations: 1. HP (High Performance) disks and 2. HC (High Capacity) disks. If you are looking for more storage space, choose the high-capacity disks (for example, for data warehousing). If you need high performance (for example, OLTP), choose the high-performance disks. See the table below for the differences between the two configurations.
                      High Performance Disks   High Capacity Disks
Raw Disk Capacity     7.2 TB                   36 TB
Raw Disk Throughput   1.8 GB/sec               1.3 GB/sec
Flash Throughput      7.25 GB/sec              6.75 GB/sec
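The raw capacity figures in the table follow directly from the per-server disk configuration (12 x 600 GB HP disks, or 12 x 3 TB HC disks). A quick sanity check in Python:

```python
# Sanity-check the raw capacity figures: each X3-2 storage cell has 12 disks.
DISKS_PER_CELL = 12

def raw_capacity_tb(disk_size_gb: float) -> float:
    """Raw capacity of one storage cell in TB (decimal units, as in the data sheet)."""
    return DISKS_PER_CELL * disk_size_gb / 1000.0

hp_tb = raw_capacity_tb(600)    # 12 x 600 GB high-performance disks
hc_tb = raw_capacity_tb(3000)   # 12 x 3 TB high-capacity disks
print(hp_tb, hc_tb)  # 7.2 36.0
```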
X3-2 Database Server Hardware Overview
Configuration:
Processors 16 Intel CPU Cores
System Memory 256GB
Disk Drives 4 x 300 GB 10K RPM Disk Drives
Disk Controller: Disk Controller Host Bus Adapter with 512 MB Battery-Backed Write Cache
Remote Management Integrated Lights Out Manager (ILOM) Ethernet Port
Power Supplies 2 x Redundant Hot-Swappable Power Supplies
Network Interfaces:
• Dual-Port QDR (40Gb/s) InfiniBand Host Channel Adapter
• Four 1/10 Gb Ethernet Ports (copper)
• Two 10 Gb Ethernet Ports (optical)
Database Machine X3-8 is only offered in a Full Rack.
Exadata – Database machine x3-8 Full Rack
Both the X3-2 and X3-8 full racks contain 14 Exadata X3-2 cells, 3 InfiniBand switches, 2 power distribution units (PDUs) and an Ethernet switch. The difference is in the database server configuration: the X3-8 has just 2 compute nodes, whereas the X3-2 has 8. However, each X3-8 database server has far more CPU cores and physical memory.
X3-8 Database Machine server configuration:
Processors 80 Intel CPU Cores
System Memory 2TB
Disk Drives 4 x 300 GB 10K RPM Disk Drives
Disk Controller: Disk Controller Host Bus Adapter with 512 MB Battery-Backed Write Cache
Remote Management Integrated Lights Out Manager (ILOM) Ethernet Port
Power Supplies 4x Redundant Hot-Swappable Power Supplies
Network Interfaces:
• Dual-Port QDR (40Gb/s) InfiniBand Host Channel Adapter
• Four 1/10 Gb Ethernet Ports (copper)
• Two 10 Gb Ethernet Ports (optical)
Exadata Storage Rack Expansions
For example, suppose you have fully utilized a full-rack Database Machine and are running out of Exadata storage cell space. How do you scale up the environment? Do you need to order another Database Machine? No. You just need more Exadata storage servers, so you order an Exadata Storage Expansion Rack, which contains Exadata storage servers and InfiniBand switches.
A full Exadata Storage Expansion Rack can accommodate 18 Exadata storage servers and 3 InfiniBand switches. A half rack accommodates 9 storage servers and 3 InfiniBand switches, and a quarter rack holds 4 storage servers and 2 InfiniBand switches.
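To get a feel for how much raw capacity each expansion rack size adds, here is an illustrative calculation in Python, assuming high-capacity cells (12 x 3 TB disks, i.e. 36 TB raw per storage server, as in the X3-2 HC configuration):

```python
# Illustrative arithmetic: raw capacity added by each Exadata Storage
# Expansion Rack size, assuming high-capacity storage servers
# (12 disks x 3 TB = 36 TB raw per server).
TB_PER_SERVER = 12 * 3  # raw TB per high-capacity storage server

EXPANSION_RACKS = {"full": 18, "half": 9, "quarter": 4}  # storage servers

def raw_expansion_tb(rack: str) -> int:
    """Total raw TB contributed by an expansion rack of the given size."""
    return EXPANSION_RACKS[rack] * TB_PER_SERVER

for rack in EXPANSION_RACKS:
    print(f"{rack}: {raw_expansion_tb(rack)} TB raw")
```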
InfiniBand Network Overview
InfiniBand provides the interconnect between the database servers and the Exadata storage servers at 40 Gb/s. It is used for storage networking, the RAC interconnect and high-performance external connectivity. It uses the ZDP protocol (Zero-loss Zero-copy Datagram Protocol), which requires very little CPU overhead.
To explore the X3-2 Exadata Database Machine in a 3D view, check out the link below: http://oracle.com.edgesuite.net/producttours/3d/exadata-x3-2/index.html
Architecture of Exadata Database Machine
The Exadata Database Machine provides a high-performance, highly available platform with plenty of storage space for Oracle Database. High-availability clustering is provided by Oracle RAC, and ASM is responsible for storage mirroring. InfiniBand technology provides the high-bandwidth, low-latency cluster interconnect and storage network. The powerful compute nodes join the RAC cluster to deliver great performance.
In this article, we will cover:
Exadata Database Machine Network architecture
Exadata Database Machine Storage architecture
Exadata Database Machine Software architecture.
How to scale up the Exadata Database Machine
Key components of the Exadata Database Machine
Shared storage – Exadata Storage Servers: The Database Machine provides intelligent, high-performance shared storage to both single-instance and RAC implementations of Oracle Database using Exadata Storage Server technology. The Exadata storage servers are designed to provide storage to Oracle Database through ASM (Automatic Storage Management). ASM keeps redundant copies of the data on separate Exadata storage servers, which protects against data loss if you lose a disk or an entire storage server.
Shared Network – Infiniband
The Database Machine uses the InfiniBand network for the interconnect between the database servers and the Exadata storage servers. The InfiniBand network runs at 40 Gb/s, so latency is very low and bandwidth is high. In the Database Machine, multiple InfiniBand switches and interface bonding are used to provide network redundancy.
Shared cache: In the Database Machine's RAC environment, the database instances' buffer caches are shared. If one instance holds data in its cache that another instance needs, the data is shipped to the requesting node over the InfiniBand cluster interconnect. This improves performance, since the transfer happens memory to memory across the interconnect.
Database server cluster: A full-rack Exadata Database Machine contains 8 compute nodes, and you can build an 8-node cluster using Oracle RAC. Each X3-2 compute node has 16 CPU cores and 256 GB of memory.
Cluster interconnect: By default, the Database Machine is configured to use the InfiniBand storage network as the cluster interconnect.
Database Machine – Network Architecture
The diagram above shows three different networks.
Management network – ILOM: ILOM (Integrated Lights Out Manager) is the standard remote hardware management interface on all Oracle servers. It uses the traditional Ethernet network to manage the Exadata Database Machine remotely. ILOM provides a graphical remote administration facility and helps system administrators monitor the hardware remotely.
Client access: The database servers are accessed by application servers over the Ethernet network. Bonding is created across multiple Ethernet adapters for network redundancy and load balancing. Note: the Database Machine includes a Cisco switch to provide connectivity to the Ethernet networks.
InfiniBand Network Architecture
The diagram below shows how the InfiniBand links are connected to the different components in an X3-2 Half/Full Rack setup.
infiniband switch x3-2 half-full rack
The spine switch exists only in the half-rack and full-rack Exadata configurations. The spine switch helps you scale the environment by providing InfiniBand links to additional racks. In the quarter-rack X3-2, you get only leaf switches. You can scale up to 18 racks by cabling the InfiniBand switches together.
How can we interconnect two racks? Have a close look at the diagram below. A single InfiniBand network is formed based on a fat-tree topology.
Scale two Racks
Six ports on each leaf switch are reserved for external connectivity. These ports are used for connecting to media servers for tape backup, connecting to external ETL servers, and client or application access, including Oracle Exalogic Elastic Cloud.
Database Machine Software Architecture
Software architecture- exadata
CELLSRV, MS, RS and IORM are the important processes of the Exadata storage cell servers. On the database servers, the cells' grid disks are used to create the ASM disk groups. Each database server also has a special library called LIBCELL; in combination with the database kernel and ASM, LIBCELL transparently maps database I/O to the Exadata storage servers.
No other filesystems are allowed to be created on the Exadata storage cells. Oracle Database must use ASM as the volume manager and filesystem.
Customers can choose between Oracle Linux and Oracle Solaris x86 as the database servers' operating system. Exadata supports Oracle Database 11g Release 2 and later versions.
Database Machine Storage Architecture
Exadata Storage cell
The Exadata storage servers include the software components shown above. Oracle Linux is the operating system for the Exadata storage cell software. CELLSRV is the core Exadata storage component and provides most of the services. The Management Server (MS) provides Exadata cell management and configuration; MS is responsible for sending alerts and collects some statistics in addition to those collected by CELLSRV. The Restart Server (RS) is used to start up and shut down the CELLSRV and MS services, and it monitors these services to restart them automatically if required.
How are disks mapped to the database from the Exadata storage servers?
Exadata Disks overview
If you look at the image below, you can see that the database servers treat each cell node as a failure group.
Exadata DG
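The failure-group idea can be sketched conceptually: with ASM normal redundancy, the primary and mirror copies of each extent must land in different failure groups, and on Exadata each storage cell is its own failure group. The round-robin placement below is a hypothetical toy, not ASM's real allocation algorithm:

```python
# Conceptual sketch (NOT ASM's real allocator): with ASM normal redundancy,
# primary and mirror extent copies are placed in different failure groups.
# On Exadata, each storage cell is a failure group.
CELLS = ["cell01", "cell02", "cell03"]  # e.g. the three cells of a quarter rack

def place_extent(extent_id: int) -> tuple[str, str]:
    """Toy round-robin placement: the mirror always lands on a different cell."""
    primary = CELLS[extent_id % len(CELLS)]
    mirror = CELLS[(extent_id + 1) % len(CELLS)]
    return primary, mirror

for eid in range(6):
    p, m = place_extent(eid)
    assert p != m  # losing a single cell never loses both copies
    print(f"extent {eid}: primary={p} mirror={m}")
```

Because no extent ever has both of its copies on the same cell, the loss of one disk or one entire storage server leaves a surviving copy elsewhere.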
Exploring the Exadata Storage Cell Processes
The Exadata storage cell is new to the industry, and only Oracle offers such a customized storage platform for Oracle Database. Unlike traditional SAN storage, Exadata storage helps reduce processing at the DB node level: since each storage cell has its own processors and 64 GB of physical memory, it can easily offload work from the DB nodes. It also has a huge amount of flash storage to speed up I/O; the default flash cache setting is write-through. The flash can also be used as regular storage (like a hard drive), and flash devices can give 10x better performance than normal hard drives.
Examine the Exadata Storage Cell Processes
1. Log in to the Exadata storage cell.
login as: root
[email protected]'s password:
Last login: Sat Nov 15 01:50:58 2014
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# uname -a
Linux uaexacell1 2.6.39-300.26.1.el5uek #1 SMP Thu Jan 3 18:31:38 PST 2013 x86_64 x86_64 x86_64
GNU/Linux
[root@uaexacell1 ~]#
2. List the Exadata cell Restart Server (RS) processes.
[root@uaexacell1 ~]# ps -ef |grep cellrs
root 10001 1 0 14:23 ? 00:00:00 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrssrm
-ms 1 -cellsrv 1
root 10009 10001 0 14:23 ? 00:00:00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsmmt -ms 1 -cellsrv 1
root 10010 10001 0 14:23 ? 00:00:00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsomt -ms 1 -cellsrv 1
root 10011 10001 0 14:23 ? 00:00:00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsbmt -ms 1 -cellsrv 1
root 10012 10011 0 14:23 ? 00:00:00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrsbkm -rs_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora -ms_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsms.state -cellsrv_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsos.state -debug 0
root 10022 10012 0 14:23 ? 00:00:00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellrssmt -rs_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellinit.ora -ms_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsms.state -cellsrv_conf
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/config/cellrsos.state -debug 0
root 12992 12945 0 14:48 pts/2 00:00:00 grep cellrs
[root@uaexacell1 ~]#
RS – the Restart Server process is responsible for keeping the CELLSRV and MS processes up at all times. If these processes stop responding or terminate, RS automatically restarts them.
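The supervision pattern that RS implements (watch the services, restart any that die) can be sketched as a small conceptual toy in Python. This is a hypothetical illustration of the idea, not Oracle's implementation:

```python
# Hypothetical sketch of the Restart Server idea: monitor services and
# restart any that have died. This is NOT Oracle's RS implementation.
class Service:
    def __init__(self, name: str):
        self.name = name
        self.alive = True
        self.restarts = 0

    def start(self):
        self.alive = True
        self.restarts += 1

def restart_server_pass(services):
    """One monitoring pass: restart every service found dead."""
    restarted = []
    for svc in services:
        if not svc.alive:
            svc.start()
            restarted.append(svc.name)
    return restarted

cellsrv, ms = Service("CELLSRV"), Service("MS")
ms.alive = False                           # simulate 'kill -9' on MS
print(restart_server_pass([cellsrv, ms]))  # ['MS']
```

The kill-and-watch experiment in step 5 below demonstrates exactly this behavior on a real cell.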
3. List the MS (Management Server) process. MS maintains the cell configuration with the help of CellCLI (the command-line utility). It is also responsible for sending alerts and collecting the Exadata cell statistics.
[root@uaexacell1 ~]# ps -ef | grep ms.err
root 10013 10009 1 14:23 ? 00:00:21 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -
Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -
jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 13945 12945 0 14:56 pts/2 00:00:00 grep ms.err
[root@uaexacell1 ~]#
The MS (Management Server) process's parent process is RS (Restart Server). RS will restart MS if it crashes or terminates abnormally.
4. CELLSRV is a multi-threaded process that provides the storage services to the database nodes. CELLSRV communicates with the Oracle database to serve simple block requests, such as database buffer cache reads, as well as Smart Scan requests. You can list the CELLSRV process using the command below.
[root@uaexacell1 ~]# ps -ef | grep "/cellsrv "
root 5705 10010 8 19:13 ? 00:08:20 /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/bin/cellsrv
100 5000 9 5042
1000 8390 4457 0 20:57 pts/1 00:00:00 grep /cellsrv
[root@uaexacell1 ~]#
The CELLSRV process's parent process is RS (Restart Server). RS will restart CELLSRV if it crashes or terminates abnormally.
5. Let's kill the MS process and see whether it restarts automatically.
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 10013 10009 0 14:23 ? 00:00:23 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -
Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -
jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 15220 12945 0 15:06 pts/2 00:00:00 grep ms.err
[root@uaexacell1 ~]# kill -9 10013
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15245 12945 0 15:07 pts/2 00:00:00 grep ms.err
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15249 12945 0 15:07 pts/2 00:00:00 grep ms.err
[root@uaexacell1 ~]#
Within a few seconds, a new MS process has started with a new PID.
[root@uaexacell1 ~]# ps -ef |grep ms.err
root 15366 10009 74 15:07 ? 00:00:00 /usr/java/jdk1.5.0_15/bin/java -Xms256m -Xmx512m -
Djava.library.path=/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/lib -Ddisable.checkForUpdate=true -
jar /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/oc4j/ms/j2ee/home/oc4j.jar -out
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.lst -err
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log/ms.err
root 15379 12945 0 15:07 pts/2 00:00:00 grep ms.err
[root@uaexacell1 ~]#
6. How do you stop and start the services on the Exadata storage cell using the init scripts? Like other startup scripts, the celld script is located in /etc/init.d, and a link has been added in /etc/rc3.d to bring up the cell processes at startup.
[root@uaexacell1 ~]# cd /etc/init.d
[root@uaexacell1 init.d]# ls -lrt |grep cell
lrwxrwxrwx 1 root root 50 Nov 15 01:15 celld -> /opt/oracle/cell/cellsrv/deploy/scripts/unix/celld
[root@uaexacell1 init.d]# cd /etc/rc3.d
[root@uaexacell1 rc3.d]# ls -lrt |grep cell
lrwxrwxrwx 1 root root 15 Nov 15 01:15 S99celld -> ../init.d/celld
[root@uaexacell1 rc3.d]#
This script can be used to start, stop, restart the exadata cell software.
To stop the cell software
[root@uaexacell1 rc3.d]# ./S99celld stop
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
[root@uaexacell1 rc3.d]#
To start the cell software
[root@uaexacell1 rc3.d]# ./S99celld start
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
[root@uaexacell1 rc3.d]#
To restart the cell software,
[root@uaexacell1 rc3.d]# ./S99celld restart
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
[root@uaexacell1 rc3.d]#
The cell software services are managed using the celladmin user and the CellCLI utility. You can also start, stop and restart the services using CellCLI; we will cover CellCLI in the next article.
Hopefully this article has given you an overview of the Exadata storage cell processes.
Exadata – CellCLI Command-Line Utility
Exadata storage is managed with the CellCLI command-line utility. The Management Server (MS) process works with CellCLI to maintain the configuration on the system. The CellCLI utility can be launched by the "celladmin" or "root" user. In this article, we will see how to list the storage objects and how to stop and start the cell services using CellCLI. At the end of the article, we will see how to use the help command to find the command syntax.
1. Log in to the Exadata storage cell as the celladmin user and start the CellCLI utility.
[celladmin@uaexacell1 ~]$ id
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@uaexacell1 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sun Nov 16 16:05:27 GMT+05:30 2014
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1
CellCLI>
Note: CellCLI is case-insensitive, so you can use either upper or lower case.
2. List the cell information (the Exadata storage box).
CellCLI> list cell
uaexacell1 online
CellCLI> list cell detail
name: uaexacell1
bbuTempThreshold: 60
bbuChargeThreshold: 800
bmcType: absent
cellVersion: OSS_11.2.3.2.1_LINUX.X64_130109
cpuCount: 1
diagHistoryDays: 7
fanCount: 1/1
fanStatus: normal
flashCacheMode: WriteThrough
id: a3c87541-4d0e-478a-9ec9-8a4bea3eeaac
interconnectCount: 2
interconnect1: eth1
iormBoost: 0.0
ipaddress1: 192.168.1.5/24
kernelVersion: 2.6.39-300.26.1.el5uek
makeModel: Fake hardware
metricHistoryDays: 7
offloadEfficiency: 1.0
powerCount: 1/1
powerStatus: normal
releaseVersion: 11.2.3.2.1
releaseTrackingBug: 14522699
status: online
temperatureReading: 0.0
temperatureStatus: normal
upTime: 0 days, 2:24
cellsrvStatus: running
msStatus: running
rsStatus: running
CellCLI>
3. List the available storage devices on the system. This lists both hard drives and flash disks.
CellCLI> LIST LUN
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13 normal
CellCLI>
My Exadata storage is running on virtual hardware; that's why the storage devices are listed with a full path. On real hardware, you will see just the controller and disk numbers (e.g., 0_0 normal). Note: the Exadata VM is used by Oracle for training purposes only.
4. The command below lists only the hard disks attached to the Exadata server.
CellCLI> list lun where disktype=harddisk
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13 normal
CellCLI>
5. The command below lists only the flash devices attached to the Exadata storage server.
CellCLI> list lun where disktype=flashdisk
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13 normal
CellCLI>
6.List the celldisks
CellCLI> list celldisk
CD_DISK00_uaexacell1 normal
CD_DISK01_uaexacell1 normal
CD_DISK02_uaexacell1 normal
CD_DISK03_uaexacell1 normal
CD_DISK04_uaexacell1 normal
CD_DISK05_uaexacell1 normal
CD_DISK06_uaexacell1 normal
CD_DISK07_uaexacell1 normal
CD_DISK08_uaexacell1 normal
CD_DISK09_uaexacell1 normal
CD_DISK10_uaexacell1 normal
CD_DISK11_uaexacell1 normal
CD_DISK12_uaexacell1 normal
CD_DISK13_uaexacell1 normal
FD_00_uaexacell1 normal
FD_01_uaexacell1 normal
FD_02_uaexacell1 normal
FD_03_uaexacell1 normal
FD_04_uaexacell1 normal
FD_05_uaexacell1 normal
FD_06_uaexacell1 normal
FD_07_uaexacell1 normal
FD_08_uaexacell1 normal
FD_09_uaexacell1 normal
FD_10_uaexacell1 normal
FD_11_uaexacell1 normal
FD_12_uaexacell1 normal
FD_13_uaexacell1 normal
CellCLI>
7. List grid disks
CellCLI> list griddisk
DATA01_CD_DISK00_uaexacell1 active
DATA01_CD_DISK01_uaexacell1 active
DATA01_CD_DISK02_uaexacell1 active
DATA01_CD_DISK03_uaexacell1 active
DATA01_CD_DISK04_uaexacell1 active
DATA01_CD_DISK05_uaexacell1 active
DATA01_CD_DISK06_uaexacell1 active
DATA01_CD_DISK07_uaexacell1 active
DATA01_CD_DISK08_uaexacell1 active
DATA01_CD_DISK09_uaexacell1 active
DATA01_CD_DISK10_uaexacell1 active
DATA01_CD_DISK11_uaexacell1 active
DATA01_CD_DISK12_uaexacell1 active
DATA01_CD_DISK13_uaexacell1 active
CellCLI>
8. List the flash disks that are configured as flash cache.
CellCLI> list flashcache detail
name: uaexacell1_FLASHCACHE
cellDisk:
FD_05_uaexacell1,FD_02_uaexacell1,FD_04_uaexacell1,FD_03_uaexacell1,FD_01_uaexacell1,FD_12_uaexacell1
creationTime: 2014-11-16T18:57:54+05:30
degradedCelldisks:
effectiveCacheSize: 4.3125G
id: f972c16a-5fcc-4cc7-8083-a06b026f662b
size: 4.3125G
status: normal
CellCLI>
9. List the flash disks that are configured as flash log.
CellCLI> list flashlog detail
name: uaexacell1_FLASHLOG
cellDisk: FD_13_uaexacell1
creationTime: 2014-11-16T16:31:23+05:30
degradedCelldisks:
effectiveSize: 512M
efficiency: 100.0
id: 1fbc893b-4ab1-4861-b6cc-0b86bd45376d
size: 512M
status: normal
CellCLI>
10. List the status of the RS, MS, and CELLSRV services.
CellCLI> list cell attributes rsStatus, msStatus, cellsrvStatus detail
rsStatus: running
msStatus: running
cellsrvStatus: running
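Because each attribute is printed as a simple `attribute: value` pair, this output is easy to consume from a monitoring script. The sketch below is a hypothetical Python helper (not part of the Exadata toolset) that parses the status block into a dictionary:

```python
def parse_service_status(output: str) -> dict:
    """Parse 'list cell attributes rsStatus, msStatus, cellsrvStatus detail'
    output (one 'attribute: value' pair per line) into a dict."""
    status = {}
    for line in output.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

sample = """\
rsStatus:         running
msStatus:         running
cellsrvStatus:    running
"""
statuses = parse_service_status(sample)
# On a healthy cell all three services report 'running'.
healthy = all(v == "running" for v in statuses.values())
```

A wrapper like this could feed a Nagios-style check that alerts whenever `healthy` is false.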
11. To stop the services using CellCLI:
CellCLI> alter cell shutdown services all
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
CellCLI>
12. To start the services using CellCLI:
CellCLI> alter cell startup services all
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
CellCLI>
13. To restart the services forcefully using CellCLI:
CellCLI> alter cell restart services all force
Stopping the RS, CELLSRV, and MS services...
The SHUTDOWN of services was successful.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
CellCLI>
In the same way, you can shut down the services forcefully by replacing “restart” with “shutdown”.
14. How do you get command syntax help in Exadata CellCLI?
Execute the “help” command to get the list of commands.
CellCLI> help
HELP [topic]
Available Topics:
ALTER
ALTER ALERTHISTORY
ALTER CELL
ALTER CELLDISK
ALTER FLASHCACHE
ALTER GRIDDISK
ALTER IBPORT
ALTER IORMPLAN
ALTER LUN
ALTER PHYSICALDISK
ALTER QUARANTINE
ALTER THRESHOLD
ASSIGN KEY
CALIBRATE
CREATE
CREATE CELL
CREATE CELLDISK
CREATE FLASHCACHE
CREATE FLASHLOG
CREATE GRIDDISK
CREATE KEY
CREATE QUARANTINE
CREATE THRESHOLD
DESCRIBE
DROP
DROP ALERTHISTORY
DROP CELL
DROP CELLDISK
DROP FLASHCACHE
DROP FLASHLOG
DROP GRIDDISK
DROP QUARANTINE
DROP THRESHOLD
EXPORT CELLDISK
IMPORT CELLDISK
LIST
LIST ACTIVEREQUEST
LIST ALERTDEFINITION
LIST ALERTHISTORY
LIST CELL
LIST CELLDISK
LIST FLASHCACHE
LIST FLASHCACHECONTENT
LIST FLASHLOG
LIST GRIDDISK
LIST IBPORT
LIST IORMPLAN
LIST KEY
LIST LUN
LIST METRICCURRENT
LIST METRICDEFINITION
LIST METRICHISTORY
LIST PHYSICALDISK
LIST QUARANTINE
LIST THRESHOLD
SET
SPOOL
START
CellCLI>
15. To get help for a specific topic, use the HELP <TOPIC> command.
CellCLI> HELP LIST
Enter HELP LIST <object_type> for specific help syntax.
<object_type>: {ACTIVEREQUEST | ALERTHISTORY | ALERTDEFINITION | CELL
| CELLDISK | FLASHCACHE | FLASHLOG | FLASHCACHECONTENT | GRIDDISK
| IBPORT | IORMPLAN | KEY | LUN
| METRICCURRENT | METRICDEFINITION | METRICHISTORY
| PHYSICALDISK | QUARANTINE | THRESHOLD }
CellCLI>
16. To get help for a specific command, use the syntax below.
CellCLI> HELP LIST CELLDISK
Usage: LIST CELLDISK [<name> | <filters>] [<attribute_list>] [DETAIL]
Purpose: Displays specified attributes for cell disks.
Arguments:
<name>: The name of the cell disk to be displayed.
<filters>: an expression which determines which cell disks should
be displayed.
<attribute_list>: The attributes that are to be displayed.
ATTRIBUTES {ALL | attr1 [, attr2]... }
Options:
[DETAIL]: Formats the display as an attribute on each line, with
an attribute descriptor preceding each value.
Examples:
LIST CELLDISK cd1 DETAIL
LIST CELLDISK where freespace > 100M
CellCLI>
You can check the Exadata storage cell alerts using the command below.
CellCLI> list alerthistory
1_1 2014-11-15T01:17:14+05:30 critical "File system "/" is 84% full, which is above the 80%
threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/"
becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr :
2.35G /tmp : 1.37G /opt : 593.27M"
1_2 2014-11-15T01:25:44+05:30 critical "File system "/" is 84% full, which is above the 80%
threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/"
becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr :
2.35G /tmp : 1.37G /opt : 593.36M"
1_3 2014-11-15T01:36:51+05:30 critical "File system "/" is 84% full, which is above the 80%
threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/"
becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr :
2.35G /tmp : 1.37G /opt : 593.38M"
1_4 2014-11-15T01:44:27+05:30 critical "File system "/" is 84% full, which is above the 80%
threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/"
becomes less than 75% full. Top three directories ordered by total space usage are as follows: /usr :
2.35G /tmp : 1.37G /opt : 593.39M"
1_5 2014-11-16T15:00:21+05:30 clear "File system "/" is 62% full, which is below the 75%
threshold. Normal space reclamation will resume."
2 2014-11-16T14:47:28+05:30 critical "RS-7445 [Serv CELLSRV hang detected] [It will be
restarted] [] [] [] [] [] [] [] [] [] []"
3 2014-11-16T15:07:05+05:30 critical "RS-7445 [Serv MS is absent] [It will be restarted] []
[] [] [] [] [] [] [] [] []"
4 2014-11-16T16:31:51+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
5 2014-11-16T16:32:57+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
6 2014-11-16T16:34:42+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
7 2014-11-16T16:36:15+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
8 2014-11-16T16:44:28+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
9 2014-11-16T16:49:00+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
10 2014-11-16T16:52:32+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
11 2014-11-16T16:58:42+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
12 2014-11-16T16:59:48+05:30 critical "RS-7445 [CELLSRV monitor disabled] [Detected a
flood of restarts] [] [] [] [] [] [] [] [] [] []"
13 2014-11-16T17:07:04+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
14 2014-11-16T18:31:17+05:30 critical "ORA-07445: exception encountered: core dump
[_ZN14FlashCacheCore20fcDeleteMemoryForFciEj()+45] [11] [0x000000000] [] [] []"
CellCLI>
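When the alert history grows long, it helps to summarize it by severity. The following is a hypothetical helper sketch, assuming each alert record begins with an ID, an ISO timestamp, and a severity keyword as in the output above; wrapped continuation lines carry no timestamp and are skipped:

```python
import re
from collections import Counter

def alert_severity_counts(output: str) -> Counter:
    """Tally alert severities from 'list alerthistory' output.
    Assumes each record starts: <id> <ISO timestamp> <severity> "<message>"."""
    pattern = re.compile(r"^\s*\S+\s+\d{4}-\d{2}-\d{2}T\S+\s+(\w+)\s")
    counts = Counter()
    for line in output.splitlines():
        m = pattern.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = '''1_5   2014-11-16T15:00:21+05:30   clear      "File system / is 62% full"
2     2014-11-16T14:47:28+05:30   critical   "RS-7445 [Serv CELLSRV hang detected]"
3     2014-11-16T15:07:05+05:30   critical   "RS-7445 [Serv MS is absent]"'''
counts = alert_severity_counts(sample)
```

Running this over the full history above would immediately surface the flood of ORA-07445 criticals without reading every entry.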
Exadata Storage Cell – Administering the Disks
The Exadata storage server uses cell software to manage its disks. Like a volume manager, we need to build a couple of virtual layers to get to the grid disks. These grid disks are then used to create ASM disk groups at the database level. In this article, we will see how to create and delete cell disks, grid disks, flash cache, and flash log using the cellcli utility as well as the Linux command line. As mentioned earlier, flash disks can also be used to create grid disks for highly write-intensive databases, but in most cases the flash disks are used for flash cache and flash log due to storage limitations.
Exadata Storage Architecture
The diagram below explains how the virtual storage objects are built on the Exadata storage server.
Exadata storage cell disks
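To make the layering concrete, here is a toy model (illustrative only, not Oracle's implementation) of how grid disks are carved out of a cell disk at increasing offsets. The 48M starting offset mirrors the metadata area visible in the `list griddisk detail` output later in this article, but treating it as a fixed reservation is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class CellDisk:
    """Toy model: a cell disk sits on a LUN; grid disks are carved from it."""
    name: str
    size_mb: int
    reserved_mb: int = 48  # assumed metadata area at the start of the cell disk
    grid_disks: list = field(default_factory=list)

    def create_griddisk(self, name: str, size_mb: int) -> dict:
        # Each new grid disk starts where the previous one ended.
        offset = self.reserved_mb + sum(g["size_mb"] for g in self.grid_disks)
        if offset + size_mb > self.size_mb:
            raise ValueError("not enough free space on cell disk")
        gd = {"name": name, "offset_mb": offset, "size_mb": size_mb}
        self.grid_disks.append(gd)
        return gd

cd = CellDisk("CD_DISK00_uaexacell1", size_mb=1000)
g1 = cd.create_griddisk("DATA01_CD_DISK00_uaexacell1", 96)
g2 = cd.create_griddisk("RECO01_CD_DISK00_uaexacell1", 96)
```

ASM then consumes these grid disks as its candidate disks, which is why their names and sizes matter across cells.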
1. Log in to the Exadata storage server as celladmin and start the cellcli utility.
[celladmin@uaexacell1 ~]$ id
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@uaexacell1 ~]$ cellcli
CellCLI: Release 11.2.3.2.1 - Production on Sun Nov 16 22:19:23 GMT+05:30 2014
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1
CellCLI>
2. List the physical disks. This shows all the attached hard disks and flash drives.
CellCLI> list physicaldisk
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/DISK13 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH00 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH05 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH06 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH07 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH08 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH09 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH10 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH11 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH12 normal
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13
/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw/FLASH13 normal
CellCLI>
3. Check the existing cell disks.
CellCLI> LIST CELLDISK
CellCLI>
4. Create the cell disks on all disks (this is the usual practice).
CellCLI> CREATE CELLDISK ALL
CellDisk CD_DISK00_uaexacell1 successfully created
CellDisk CD_DISK01_uaexacell1 successfully created
CellDisk CD_DISK02_uaexacell1 successfully created
CellDisk CD_DISK03_uaexacell1 successfully created
CellDisk CD_DISK04_uaexacell1 successfully created
CellDisk CD_DISK05_uaexacell1 successfully created
CellDisk CD_DISK06_uaexacell1 successfully created
CellDisk CD_DISK07_uaexacell1 successfully created
CellDisk CD_DISK08_uaexacell1 successfully created
CellDisk CD_DISK09_uaexacell1 successfully created
CellDisk CD_DISK10_uaexacell1 successfully created
CellDisk CD_DISK11_uaexacell1 successfully created
CellDisk CD_DISK12_uaexacell1 successfully created
CellDisk CD_DISK13_uaexacell1 successfully created
CellDisk FD_00_uaexacell1 successfully created
CellDisk FD_01_uaexacell1 successfully created
CellDisk FD_02_uaexacell1 successfully created
CellDisk FD_03_uaexacell1 successfully created
CellDisk FD_04_uaexacell1 successfully created
CellDisk FD_05_uaexacell1 successfully created
CellDisk FD_06_uaexacell1 successfully created
CellDisk FD_07_uaexacell1 successfully created
CellDisk FD_08_uaexacell1 successfully created
CellDisk FD_09_uaexacell1 successfully created
CellDisk FD_10_uaexacell1 successfully created
CellDisk FD_11_uaexacell1 successfully created
CellDisk FD_12_uaexacell1 successfully created
CellDisk FD_13_uaexacell1 successfully created
CellCLI> LIST CELLDISK
CD_DISK00_uaexacell1 normal
CD_DISK01_uaexacell1 normal
CD_DISK02_uaexacell1 normal
CD_DISK03_uaexacell1 normal
CD_DISK04_uaexacell1 normal
CD_DISK05_uaexacell1 normal
CD_DISK06_uaexacell1 normal
CD_DISK07_uaexacell1 normal
CD_DISK08_uaexacell1 normal
CD_DISK09_uaexacell1 normal
CD_DISK10_uaexacell1 normal
CD_DISK11_uaexacell1 normal
CD_DISK12_uaexacell1 normal
CD_DISK13_uaexacell1 normal
FD_00_uaexacell1 normal
FD_01_uaexacell1 normal
FD_02_uaexacell1 normal
FD_03_uaexacell1 normal
FD_04_uaexacell1 normal
FD_05_uaexacell1 normal
FD_06_uaexacell1 normal
FD_07_uaexacell1 normal
FD_08_uaexacell1 normal
FD_09_uaexacell1 normal
FD_10_uaexacell1 normal
FD_11_uaexacell1 normal
FD_12_uaexacell1 normal
FD_13_uaexacell1 normal
CellCLI>
We have successfully created cell disks on all the hard disks and flash disks. This is a one-time activity; you do not need to create the cell disks again unless you replace a faulty drive.
5. To create grid disks on all the hard disks, use the command below.
CellCLI> create griddisk ALL HARDDISK PREFIX=CD_DISK
GridDisk CD_DISK_CD_DISK00_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK01_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK02_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK03_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK04_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK05_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK06_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK07_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK08_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK09_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK10_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK11_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK12_uaexacell1 successfully created
GridDisk CD_DISK_CD_DISK13_uaexacell1 successfully created
CellCLI>
6. To create a grid disk with a specific size and name, use the syntax below.
CellCLI> CREATE GRIDDISK DATA01_DG celldisk = CD_DISK00_uaexacell1, size =100M
GridDisk DATA01_DG successfully created
CellCLI> list griddisk
DATA01_DG active
CellCLI> list griddisk detail
name: DATA01_DG
availableTo:
cachingPolicy: default
cellDisk: CD_DISK00_uaexacell1
comment:
creationTime: 2014-11-16T22:27:50+05:30
diskType: HardDisk
errorCount: 0
id: d681708b-9717-41fc-afad-78d61ca2f476
offset: 48M
size: 96M
status: active
CellCLI>
If you have an Exadata quarter rack, you need to create grid disks of the same size on all the Exadata storage cells. Oracle ASM mirrors data across the cell nodes for redundancy. When a database requires additional space, it is highly recommended to create the new grid disks with the same size as the existing ones.
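A quick way to sanity-check this is to compare the size each cell reports for its grid disks. The helper below is a hypothetical sketch, assuming dcli-style output lines of the form `<cell>: <griddisk> <size>`:

```python
def sizes_uniform(dcli_output: str) -> bool:
    """Return True when every cell reports the same grid disk size.
    Assumes lines formatted as '<cell>: <griddisk_name> <size>'."""
    sizes = set()
    for line in dcli_output.strip().splitlines():
        parts = line.split()
        if parts:
            sizes.add(parts[-1])  # last field is the size, e.g. '96M'
    return len(sizes) == 1

sample = """uaexacell1: DATA01_CD_DISK00_uaexacell1 96M
uaexacell2: DATA01_CD_DISK00_uaexacell1 96M
uaexacell3: DATA01_CD_DISK00_uaexacell1 96M"""
```

Running such a check before adding disks to an ASM disk group avoids the classic mistake of mixing disk sizes within a group.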
7. How do you delete a grid disk? Drop the specific grid disk using the syntax below.
CellCLI> list griddisk DATA01_DG
DATA01_DG active
CellCLI> drop griddisk DATA01_DG
GridDisk DATA01_DG successfully dropped
CellCLI> list griddisk DATA01_DG
CELL-02007: Grid disk does not exist: DATA01_DG
CellCLI>
8. You can also drop a group of grid disks using a prefix. See the syntax below.
CellCLI> list griddisk
CD_DISK_CD_DISK00_uaexacell1 active
CD_DISK_CD_DISK01_uaexacell1 active
CD_DISK_CD_DISK02_uaexacell1 active
CD_DISK_CD_DISK03_uaexacell1 active
CD_DISK_CD_DISK04_uaexacell1 active
CD_DISK_CD_DISK05_uaexacell1 active
CD_DISK_CD_DISK06_uaexacell1 active
CD_DISK_CD_DISK07_uaexacell1 active
CD_DISK_CD_DISK08_uaexacell1 active
CD_DISK_CD_DISK09_uaexacell1 active
CD_DISK_CD_DISK10_uaexacell1 active
CD_DISK_CD_DISK11_uaexacell1 active
CD_DISK_CD_DISK12_uaexacell1 active
CD_DISK_CD_DISK13_uaexacell1 active
CellCLI> drop griddisk all prefix=CD_DISK
GridDisk CD_DISK_CD_DISK00_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK01_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK02_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK03_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK04_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK05_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK06_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK07_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK08_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK09_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK10_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK11_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK12_uaexacell1 successfully dropped
GridDisk CD_DISK_CD_DISK13_uaexacell1 successfully dropped
CellCLI>
The above command drops all the grid disks whose names start with “CD_DISK”.
9. How do you drop a specific cell disk? Use the syntax below.
CellCLI> list celldisk CD_DISK00_uaexacell1
CD_DISK00_uaexacell1 normal
CellCLI> drop celldisk CD_DISK00_uaexacell1
CellDisk CD_DISK00_uaexacell1 successfully dropped
CellCLI> list celldisk CD_DISK00_uaexacell1
CELL-02525: Unknown cell disk: CD_DISK00_uaexacell1
CellCLI>
Playing with the Flash Disks
1. List the flash disks.
CellCLI> LIST CELLDISK where disktype=flashdisk
FD_00_uaexacell1 normal
FD_01_uaexacell1 normal
FD_02_uaexacell1 normal
FD_03_uaexacell1 normal
FD_04_uaexacell1 normal
FD_05_uaexacell1 normal
FD_06_uaexacell1 normal
FD_07_uaexacell1 normal
FD_08_uaexacell1 normal
FD_09_uaexacell1 normal
FD_10_uaexacell1 normal
FD_11_uaexacell1 normal
FD_12_uaexacell1 normal
FD_13_uaexacell1 normal
CellCLI>
Flash disks are commonly used to create the flash cache and flash log.
Exadata Flashdisk
2. Configure specific flash disks as flash log.
CellCLI> CREATE FLASHLOG celldisk='FD_00_uaexacell1,FD_01_uaexacell1' , SIZE=100M
Flash log uaexacell1_FLASHLOG successfully created
CellCLI> LIST FLASHLOG
uaexacell1_FLASHLOG normal
CellCLI> LIST FLASHLOG DETAIL
name: uaexacell1_FLASHLOG
cellDisk: FD_00_uaexacell1,FD_01_uaexacell1
creationTime: 2014-11-16T23:02:50+05:30
degradedCelldisks:
effectiveSize: 96M
efficiency: 100.0
id: a12265f9-f80b-491b-a0e5-518b2143eede
size: 96M
status: normal
CellCLI>
3. Configure flash cache on specific flash disks.
CellCLI> CREATE FLASHCACHE celldisk='FD_03_uaexacell1,FD_04_uaexacell1' , SIZE=100M
Flash cache uaexacell1_FLASHCACHE successfully created
CellCLI> LIST FLASHCACHE
uaexacell1_FLASHCACHE normal
CellCLI> LIST FLASHCACHE DETAIL
name: uaexacell1_FLASHCACHE
cellDisk: FD_04_uaexacell1,FD_03_uaexacell1
creationTime: 2014-11-16T23:04:50+05:30
degradedCelldisks:
effectiveCacheSize: 96M
id: fe936779-abfc-4b70-a0d0-5146523cef48
size: 96M
status: normal
CellCLI>
4. Delete the flash log.
CellCLI> DROP FLASHLOG
Flash log uaexacell1_FLASHLOG successfully dropped
CellCLI> LIST FLASHLOG
CellCLI>
5. Delete the flash cache.
CellCLI> LIST FLASHCACHE
uaexacell1_FLASHCACHE normal
CellCLI> DROP FLASHCACHE
Flash cache uaexacell1_FLASHCACHE successfully dropped
CellCLI> LIST FLASHCACHE
CellCLI>
So far we have invoked the cellcli utility interactively to manage the virtual storage objects. Is it possible to manage the storage directly from the command line? Yes. The example below shows that any cellcli command can be executed from the Linux command line by passing it with “cellcli -e”.
[celladmin@uaexacell1 ~]$ cellcli -e create griddisk all harddisk prefix=UADB
GridDisk UADB_CD_DISK01_uaexacell1 successfully created
GridDisk UADB_CD_DISK02_uaexacell1 successfully created
GridDisk UADB_CD_DISK03_uaexacell1 successfully created
GridDisk UADB_CD_DISK04_uaexacell1 successfully created
GridDisk UADB_CD_DISK05_uaexacell1 successfully created
GridDisk UADB_CD_DISK06_uaexacell1 successfully created
GridDisk UADB_CD_DISK07_uaexacell1 successfully created
GridDisk UADB_CD_DISK08_uaexacell1 successfully created
GridDisk UADB_CD_DISK09_uaexacell1 successfully created
GridDisk UADB_CD_DISK10_uaexacell1 successfully created
GridDisk UADB_CD_DISK11_uaexacell1 successfully created
GridDisk UADB_CD_DISK12_uaexacell1 successfully created
GridDisk UADB_CD_DISK13_uaexacell1 successfully created
[celladmin@uaexacell1 ~]$ cellcli -e list griddisk where disktype=harddisk
UADB_CD_DISK01_uaexacell1 active
UADB_CD_DISK02_uaexacell1 active
UADB_CD_DISK03_uaexacell1 active
UADB_CD_DISK04_uaexacell1 active
UADB_CD_DISK05_uaexacell1 active
UADB_CD_DISK06_uaexacell1 active
UADB_CD_DISK07_uaexacell1 active
UADB_CD_DISK08_uaexacell1 active
UADB_CD_DISK09_uaexacell1 active
UADB_CD_DISK10_uaexacell1 active
UADB_CD_DISK11_uaexacell1 active
UADB_CD_DISK12_uaexacell1 active
UADB_CD_DISK13_uaexacell1 active
[celladmin@uaexacell1 ~]$
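From a script, the same `cellcli -e` pattern can be wrapped in a small helper. The sketch below is a hypothetical Python wrapper; it only does something useful when run on a storage cell where the `cellcli` binary is on the PATH:

```python
import subprocess

def build_cellcli_argv(command: str) -> list:
    """Build the argument vector for a non-interactive CellCLI call."""
    return ["cellcli", "-e", command]

def cellcli(command: str, timeout: int = 60) -> str:
    """Run a single CellCLI command via 'cellcli -e' and return its output.
    Raises CalledProcessError if the command exits non-zero."""
    result = subprocess.run(
        build_cellcli_argv(command),
        capture_output=True, text=True, timeout=timeout, check=True,
    )
    return result.stdout

# Example (on a cell): cellcli("list griddisk where disktype=harddisk")
```

Combining this with the parsing helpers shown earlier gives a simple building block for cell automation.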
Exadata – Distributed Command-Line Utility (dcli)
The distributed command-line utility (dcli) provides a way to execute monitoring and administration commands on multiple servers simultaneously. On an Exadata database machine, you may frequently need to create grid disks on all the Exadata storage cells; without dcli, you would have to log in to each storage cell and create the grid disks manually. Once you have configured all the storage cells from one storage cell or from the database node, dcli makes this much easier. In this article, we will see how to configure dcli for multiple storage cells.
It is good to configure dcli on the database server, so that you do not need to log in to the Exadata storage cells for each grid disk creation or drop.
1. Log in to the database server or to any one of the Exadata storage cells. Make sure all the Exadata storage cells have been added to the /etc/hosts file.
[root@uaexacell1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.2.50 uaexacell1
192.168.2.51 uaexacell2
192.168.2.52 uaexacell3
[root@uaexacell1 ~]#
2. Create a file listing all the Exadata storage cells.
[root@uaexacell1 ~]# cat << END >> exacells
> uaexacell1
> uaexacell2
> uaexacell3
> END
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# cat exacells
uaexacell1
uaexacell2
uaexacell3
[root@uaexacell1 ~]#
3. Create the SSH key for the host.
[root@uaexacell1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
15:ac:fb:66:8b:5f:32:09:dd:b9:e7:ca:6c:ef:6b:b4 root@uaexacell1
[root@uaexacell1 ~]#
4. Execute the command below to enable password-less login for all the hosts added to the exacells file. The dcli utility configures password-less authentication across the nodes using SSH keys.
[root@uaexacell1 ~]# dcli -g exacells -k
The authenticity of host 'uaexacell1 (192.168.2.50)' can't be established.
RSA key fingerprint is e6:e9:4f:d1:a0:05:eb:38:d5:bf:5b:fb:2a:5f:2c:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uaexacell1,192.168.2.50' (RSA) to the list of known hosts.
celladmin@uaexacell1's password:
celladmin@uaexacell2's password:
celladmin@uaexacell3's password:
uaexacell1: ssh key added
uaexacell2: ssh key added
uaexacell3: ssh key added
[root@uaexacell1 ~]#
We have successfully configured the dcli utility for all the Exadata storage cells. Now we can monitor and administer the cell nodes from the current host.
5. Check the status of all the Exadata cells.
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list cell
uaexacell1: uaexacell1 online
uaexacell2: uaexacell1 online
uaexacell3: uaexacell1 online
[root@uaexacell1 ~]#
6. Create a grid disk on all the Exadata storage nodes using the dcli utility.
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list celldisk where disktype=harddisk
uaexacell1: CD_DISK01_uaexacell1 normal
uaexacell1: CD_DISK02_uaexacell1 normal
uaexacell1: CD_DISK03_uaexacell1 normal
uaexacell1: CD_DISK04_uaexacell1 normal
uaexacell1: CD_DISK05_uaexacell1 normal
uaexacell1: CD_DISK06_uaexacell1 normal
uaexacell1: CD_DISK07_uaexacell1 normal
uaexacell1: CD_DISK08_uaexacell1 normal
uaexacell1: CD_DISK09_uaexacell1 normal
uaexacell1: CD_DISK10_uaexacell1 normal
uaexacell1: CD_DISK11_uaexacell1 normal
uaexacell1: CD_DISK12_uaexacell1 normal
uaexacell1: CD_DISK13_uaexacell1 normal
uaexacell2: CD_DISK01_uaexacell1 normal
uaexacell2: CD_DISK02_uaexacell1 normal
uaexacell2: CD_DISK03_uaexacell1 normal
uaexacell2: CD_DISK04_uaexacell1 normal
uaexacell2: CD_DISK05_uaexacell1 normal
uaexacell2: CD_DISK06_uaexacell1 normal
uaexacell2: CD_DISK07_uaexacell1 normal
uaexacell2: CD_DISK08_uaexacell1 normal
uaexacell2: CD_DISK09_uaexacell1 normal
uaexacell2: CD_DISK10_uaexacell1 normal
uaexacell2: CD_DISK11_uaexacell1 normal
uaexacell2: CD_DISK12_uaexacell1 normal
uaexacell2: CD_DISK13_uaexacell1 normal
uaexacell3: CD_DISK01_uaexacell1 normal
uaexacell3: CD_DISK02_uaexacell1 normal
uaexacell3: CD_DISK03_uaexacell1 normal
uaexacell3: CD_DISK04_uaexacell1 normal
uaexacell3: CD_DISK05_uaexacell1 normal
uaexacell3: CD_DISK06_uaexacell1 normal
uaexacell3: CD_DISK07_uaexacell1 normal
uaexacell3: CD_DISK08_uaexacell1 normal
uaexacell3: CD_DISK09_uaexacell1 normal
uaexacell3: CD_DISK10_uaexacell1 normal
uaexacell3: CD_DISK11_uaexacell1 normal
uaexacell3: CD_DISK12_uaexacell1 normal
uaexacell3: CD_DISK13_uaexacell1 normal
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# dcli -g exacells cellcli -e create griddisk HRDB celldisk=CD_DISK01_uaexacell1,
size=100M
uaexacell1: GridDisk HRDB successfully created
uaexacell2: GridDisk HRDB successfully created
uaexacell3: GridDisk HRDB successfully created
[root@uaexacell1 ~]#
[root@uaexacell1 ~]# dcli -g exacells cellcli -e list griddisk HRDB detail
uaexacell1: name: HRDB
uaexacell1: availableTo:
uaexacell1: cachingPolicy: default
uaexacell1: cellDisk: CD_DISK01_uaexacell1
uaexacell1: comment:
uaexacell1: creationTime: 2014-11-17T15:46:43+05:30
uaexacell1: diskType: HardDisk
uaexacell1: errorCount: 0
uaexacell1: id: 3bf213a3-dafc-41b7-b133-5580dd04c334
uaexacell1: offset: 48M
uaexacell1: size: 96M
uaexacell1: status: active
uaexacell2: name: HRDB
uaexacell2: availableTo:
uaexacell2: cachingPolicy: default
uaexacell2: cellDisk: CD_DISK01_uaexacell1
uaexacell2: comment:
uaexacell2: creationTime: 2014-11-17T15:46:43+05:30
uaexacell2: diskType: HardDisk
uaexacell2: errorCount: 0
uaexacell2: id: 21014da6-6e17-4ca1-a7dc-cc059bd75654
uaexacell2: offset: 48M
uaexacell2: size: 96M
uaexacell2: status: active
uaexacell3: name: HRDB
uaexacell3: availableTo:
uaexacell3: cachingPolicy: default
uaexacell3: cellDisk: CD_DISK01_uaexacell1
uaexacell3: comment:
uaexacell3: creationTime: 2014-11-17T15:46:43+05:30
uaexacell3: diskType: HardDisk
uaexacell3: errorCount: 0
uaexacell3: id: 3821ce2c-4376-4674-8cb4-6c8868b5b1f9
uaexacell3: offset: 48M
uaexacell3: size: 96M
uaexacell3: status: active
[root@uaexacell1 ~]#
You can also use dcli without a hosts file by passing the cell names directly with -c.
[root@uaexacell1 ~]# dcli -c uaexacell1,uaexacell2,uaexacell3 cellcli -e drop griddisk HRDB
uaexacell1: GridDisk HRDB successfully dropped
uaexacell2: GridDisk HRDB successfully dropped
uaexacell3: GridDisk HRDB successfully dropped
[root@uaexacell1 ~]#
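Conceptually, dcli just runs the same command on every host and prefixes each output line with the host name. The sketch below is an illustrative mini version of that idea, not the real dcli; the `runner` argument is injectable so the logic can be demonstrated locally, while the default would shell out over ssh:

```python
import subprocess

def dcli_run(hosts, command, runner=None):
    """Run 'command' on every host and prefix each output line with the
    host name (minimal dcli-like sketch, illustrative only)."""
    if runner is None:
        # Default: execute remotely over ssh (requires key-based login).
        runner = lambda host, cmd: subprocess.run(
            ["ssh", host, cmd], capture_output=True, text=True
        ).stdout
    lines = []
    for host in hosts:
        for line in runner(host, command).splitlines():
            lines.append(f"{host}: {line}")
    return "\n".join(lines)

# Local demo with a fake runner instead of real ssh:
fake = lambda host, cmd: "uaexacell1 online\n"
out = dcli_run(["uaexacell1", "uaexacell2"], "cellcli -e list cell", runner=fake)
```

This mirrors the `dcli -g exacells cellcli -e list cell` output format shown in step 5 above.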
Exadata Storage Cell Commands Cheat Sheet
It is not easy to remember all these commands, since most UNIX administrators work across multiple operating systems and OS flavors. Exadata and the ZFS appliance add further responsibilities, and administrators need to remember those appliance commands as well. This article provides a reference for the Exadata storage cell commands, with examples for some of the more complex command options.
All of the commands listed below work only at the cellcli prompt.
Listing the Exadata Storage Cell Objects (LIST)

cellcli - To manage the Exadata cell storage
[root@uaexacell1 init.d]# cellcli
CellCLI: Release 11.2.3.2.1 - Production on Tue Nov 18 02:16:03 GMT+05:30 2014
Copyright (c) 2007, 2012, Oracle. All rights reserved.
Cell Efficiency Ratio: 1
CellCLI>

LIST CELL - List the cell status
CellCLI> LIST CELL
uaexacell1 online
CellCLI>

LIST LUN - To list all the LUNs on the physical drives and flash drives

LIST PHYSICALDISK - To list all the physical drives and flash drives

LIST LUN where celldisk = <celldisk> - To list the LUN which is mapped to a specific cell disk
CellCLI> LIST LUN where celldisk = FD_13_uaexacell1
FLASH13 FLASH13 normal
LIST CELL DETAIL - List the cell status with all attributes
CellCLI> LIST CELL DETAIL
name: uaexacell1
bbuTempThreshold: 60
bbuChargeThreshold: 800
bmcType: absent

LIST CELL attributes <attribute> - To list a specific cell attribute
CellCLI> LIST CELL attributes flashCacheMode
WriteThrough

LIST CELLDISK - List all the cell disks
CellCLI> LIST CELLDISK
CD_DISK00_uaexacell1 normal
CD_DISK01_uaexacell1 normal

LIST CELLDISK DETAIL - List all the cell disks with detailed information
CellCLI> LIST CELLDISK detail
name: FD_13_uaexacell1
comment:
creationTime: 2014-11-15T01:46:57+05:30
deviceName: 0_0
devicePartition: 0_0
diskType: FlashDisk

LIST CELLDISK <CELLDISK> detail - To list a specific cell disk in detail
CellCLI> LIST CELLDISK FD_00_uaexacell1 detail
name: FD_00_uaexacell1
comment:
creationTime: 2014-11-15T01:46:56+05:30
LIST CELLDISK where disktype=harddisk - To list the cell disks which are created on hard disks
CellCLI> LIST CELLDISK where disktype=harddisk
CD_DISK00_uaexacell1 normal
CD_DISK01_uaexacell1 normal
CD_DISK02_uaexacell1 normal

LIST CELLDISK where disktype=flashdisk - To list the cell disks which are created on flash disks
CellCLI> LIST CELLDISK where disktype=flashdisk
FD_00_uaexacell1 normal
FD_01_uaexacell1 normal
FD_02_uaexacell1 normal

LIST CELLDISK where freespace > SIZE - To list the cell disks which have more than the specified free space
CellCLI> LIST CELLDISK where freespace > 50M
FD_00_uaexacell1 normal
FD_01_uaexacell1 normal
LIST FLASHCACHE To list the configured FLASHCACHE
LIST FLASHCACHE DETAIL To list the configured FLASHCACHE in detail
LIST FLASHLOG To list the configured FLASHLOG
LIST FLASHLOG DETAIL To list the configured FLASHLOG in detail
LIST FLASHCACHECONTENT To list the Flashcache content
LIST GRIDDISK To list the griddisksCellCLI> LIST GRIDDISKDATA01_CD_DISK00_uaexacell1 DATA01_CD_DISK01_uaexacell1
LIST GRIDDISK DETAIL To list the griddisks in detail CellCLI> LIST GRIDDISK DETAIL
name: DATA01_CD_DISK00_uaexacell1availableTo:cachingPolicy: defaultcellDisk: CD_DISK00_uaexacell1
LIST GRIDDISK <GRIDDISK_NAME> To list the specific GriddiskCellCLI> LIST GRIDDISK DATA01_CD_DISK00_uaexacell1DATA01_CD_DISK00_uaexacell1
LIST GRIDDISK <GRIDDISK_NAME> detail
To list the specific Griddisk in detail
CellCLI> LIST GRIDDISK DATA01_CD_DISK00_uaexacell1 detailname: DATA01_CD_DISK00_uaexacell1availableTo:cachingPolicy: defaultcellDisk: CD_DISK00_uaexacell1
LIST GRIDDISK where size > SIZE
To list the grid disks whose size is larger than the specified value.
CellCLI> LIST GRIDDISK where size > 750M
DATA01_CD_DISK00_uaexacell1
LIST IBPORT To list the InfiniBand ports
LIST IORMPLAN
To list the IORM plan.
CellCLI> LIST IORMPLAN
uaexacell1_IORMPLAN active

LIST IORMPLAN DETAIL
To list the IORM plan in detail.
CellCLI> LIST IORMPLAN DETAIL
name: uaexacell1_IORMPLAN
catPlan:
dbPlan:
objective: basic
status: active
LIST METRICCURRENT
To get the current metric values (such as I/Os per second) for all the objects.
CellCLI> LIST METRICCURRENT
CD_BY_FC_DIRTY CD_DISK00_uaexacell1 MB
CD_BY_FC_DIRTY CD_DISK01_uaexacell1 MB
CD_BY_FC_DIRTY CD_DISK02_uaexacell1 MB
CD_BY_FC_DIRTY CD_DISK03_uaexacell1 MB
LIST METRICCURRENT cl_cput, cl_runq detail
To list the current CPU utilization and run queue metrics.
CellCLI> LIST METRICCURRENT cl_cput, cl_runq detail
name: CL_CPUT
alertState: normal
collectionTime: 2014-11-18T02:42:26+05:30
metricObjectName: uaexacell1
metricType: Instantaneous
metricValue: 4.7 %
objectType: CELL

name: CL_RUNQ
alertState: normal
collectionTime: 2014-11-18T02:42:26+05:30
metricObjectName: uaexacell1
metricType: Instantaneous
metricValue: 12.2
objectType: CELL
LIST QUARANTINE To list the quarantine entries
LIST QUARANTINE detail To list the quarantine entries in detail
LIST THRESHOLD To list the threshold limits
LIST THRESHOLD DETAIL To list the threshold limits in detail
LIST ACTIVEREQUEST To list the active Requests
LIST ALERTHISTORY To list the alerts
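The LIST commands above can also be run non-interactively from the cell's operating system with cellcli -e, which makes them easy to script. Below is a minimal monitoring sketch (the check_celldisks helper name and the sample "critical" status are illustrative assumptions, not from the original article): it filters the two-column LIST CELLDISK output shown earlier and flags any cell disk whose status is not "normal".

```shell
#!/bin/sh
# Sketch: flag any cell disk whose status column is not "normal".
# On a real storage cell you would feed live output into the filter, e.g.:
#   cellcli -e "LIST CELLDISK" | check_celldisks
check_celldisks() {
  # Column 1 is the cell disk name; everything after it is the status.
  awk '$2 != "normal" { print "WARNING: " $0 }'
}

# Demo with sample lines in the same format as the LIST CELLDISK output above:
printf 'CD_DISK00_uaexacell1 normal\nCD_DISK01_uaexacell1 critical\n' | check_celldisks
```

A cron job wrapping this kind of filter is a common lightweight alternative to watching the alert history by hand.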
Creating the Exadata Storage Cell Objects (CREATE)
The below commands are the most commonly used on the Exadata storage cell to create the virtual objects.
CREATE CELL <CELL_NAME> interconnect1=<ethX>
Configures the cell network and creates the cell.
CellCLI> CREATE CELL uaexacell1
Cell uaexacell1 successfully created
Starting CELLSRV services…
The STARTUP of CELLSRV services was successful.
Flash cell disks, FlashCache, and FlashLog will be created.
CREATE CELLDISK <CELLDISK_NAME> <LUN>
Creates a cell disk according to the attributes provided.
CellCLI> CREATE CELLDISK UADBG1

CREATE CELLDISK ALL HARDDISK
Creates cell disks on all the hard disks.
CellCLI> CREATE CELLDISK ALL HARDDISK
CellDisk CD_DISK00_uaexacell1 successfully created
CellDisk CD_DISK01_uaexacell1 successfully created
CellDisk CD_DISK02_uaexacell1 successfully created
CREATE CELLDISK ALL
Creates cell disks on all the hard disks and flash disks.
CellCLI> CREATE CELLDISK ALL
CellDisk CD_DISK00_uaexacell1 successfully created
CREATE CELLDISK ALL FLASHDISK
Creates cell disks on all the flash disks.
CellCLI> CREATE CELLDISK ALL FLASHDISK
CellDisk FD_00_uaexacell1 successfully created
CREATE FLASHCACHE celldisk='<flash_celldisk1>,<flash_celldisk2>'
Creates the flash cache for I/O requests on the specified flash disks.
CellCLI> CREATE FLASHCACHE celldisk='FD_00_uaexacell1,FD_01_uaexacell1', size=500M
CREATE FLASHCACHE ALL size=<size>
Creates the flash cache for I/O requests on all flash devices with the specified size.
CellCLI> CREATE FLASHCACHE ALL size=10G
CREATE FLASHLOG celldisk='<flash_celldisk1>,<flash_celldisk2>'
Creates the flash log for logging requests on the specified flash disks.
CellCLI> CREATE FLASHLOG celldisk='FD_00_uaexacell1,FD_01_uaexacell1', size=500M

CREATE FLASHLOG ALL size=<size>
Creates the flash log for logging requests on all flash devices with the specified size.
CellCLI> CREATE FLASHLOG ALL size=252M
CREATE GRIDDISK <GRIDDISK_NAME> CELLDISK=<celldisk>
Creates a grid disk on the specific cell disk.
CellCLI> CREATE GRIDDISK UADBDK1 CELLDISK=CD_DISK00_uaexacell1
GridDisk UADBDK1 successfully created

CREATE GRIDDISK <GRIDDISK_NAME> CELLDISK=<celldisk>, size=<size>
Creates a grid disk on the specific cell disk with the specified size.
CellCLI> CREATE GRIDDISK UADBDK2 CELLDISK=CD_DISK02_uaexacell1, SIZE=100M
GridDisk UADBDK2 successfully created
CREATE GRIDDISK ALL HARDDISK PREFIX=<Disk_Name>, size=<size>
Creates grid disks on all the hard disks with the specified size.
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=UADBPROD, size=100M
Cell disks were skipped because they had no freespace for grid disks: CD_DISK00_uaexacell1.
GridDisk UADBPROD_CD_DISK01_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK02_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK03_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK04_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK05_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK06_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK07_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK08_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK09_uaexacell1 successfully created
GridDisk UADBPROD_CD_DISK10_uaexacell1 successfully created
CREATE GRIDDISK ALL FLASHDISK PREFIX=<Disk_Name>, size=<size>
Creates grid disks on all the flash disks with the specified size.
CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX=UAFLSHDB, size=100M
GridDisk UAFLSHDB_FD_00_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_01_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_02_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_03_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_04_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_05_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_06_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_07_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_08_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_09_uaexacell1 successfully created
GridDisk UAFLSHDB_FD_10_uaexacell1 successfully created
CREATE KEY
Creates and displays a random key for use in assigning client keys.
CellCLI> CREATE KEY
1820ef8f9c2bafcd12e15ebfe267abad
CREATE QUARANTINE quarantineType=<"SQLID" | "DISK REGION" | "SQL PLAN" | "CELL OFFLOAD"> attributename=value
Defines the attributes for a new quarantine entity.
CellCLI> CREATE QUARANTINE quarantineType="SQLID", sqlid="5xnjp4cutc1s8"
Quarantine successfully created.
CREATE THRESHOLD <threshold_name> attributename=value
Defines conditions for the generation of a metric alert.
CellCLI> CREATE THRESHOLD db_io_rq_sm_sec.db123 comparison='>', critical=120
Threshold db_io_rq_sm_sec.db123 successfully created
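Taken together, the CREATE commands above are normally issued in a fixed order when a new cell is built: cell first, then cell disks, then flash log and flash cache, and finally the grid disks. A minimal sketch of that sequence, reusing the example names and sizes from this section (illustrative only, not a transcript from a real build):

```
CellCLI> CREATE CELL uaexacell1
CellCLI> CREATE CELLDISK ALL
CellCLI> CREATE FLASHLOG ALL size=252M
CellCLI> CREATE FLASHCACHE ALL
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=UADBPROD, size=100M
```

Creating the flash log before the flash cache matters, because CREATE FLASHCACHE ALL consumes the remaining free space on the flash cell disks.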
Deleting the Exadata Storage Cell Objects (DROP)
The below cellcli commands will help you to remove the various objects on the Exadata storage cell. Be careful with the "FORCE" option, since it can remove an object even when it is in use.
DROP ALERTHISTORY <ALERT1>, <ALERT2>
Removes specific alerts from the cell's alert history.
CellCLI> DROP ALERTHISTORY 2
Alert 2 successfully dropped
DROP ALERTHISTORY ALL
Removes all the alerts from the cell's alert history.
CellCLI> DROP ALERTHISTORY ALL
Alert 1_1 successfully dropped
Alert 1_2 successfully dropped
Alert 1_3 successfully dropped
Alert 1_4 successfully dropped
Alert 1_5 successfully dropped
Alert 1_6 successfully dropped
DROP THRESHOLD <threshold_name>
Removes a specific threshold from the cell.
CellCLI> DROP THRESHOLD db_io_rq_sm_sec.db123
Threshold db_io_rq_sm_sec.db123 successfully dropped
DROP THRESHOLD ALL
Removes all the thresholds from the cell.
CellCLI> DROP THRESHOLD ALL

DROP QUARANTINE <quarantine1>
Removes a specific quarantine from the cell.
CellCLI> DROP QUARANTINE QADB1
DROP QUARANTINE ALL
Removes all the quarantines from the cell.
CellCLI> DROP QUARANTINE ALL
DROP GRIDDISK <Griddisk_Name>
Removes the specific grid disk from the cell.
CellCLI> DROP GRIDDISK UADBDK1
GridDisk UADBDK1 successfully dropped
DROP GRIDDISK ALL PREFIX=<GRIDDISK_STARTNAME>
Removes a set of grid disks from the cell by using the prefix.
CellCLI> DROP GRIDDISK ALL PREFIX=UAFLSHDB
GridDisk UAFLSHDB_FD_00_uaexacell1 successfully dropped
GridDisk UAFLSHDB_FD_01_uaexacell1 successfully dropped
GridDisk UAFLSHDB_FD_02_uaexacell1 successfully dropped
GridDisk UAFLSHDB_FD_03_uaexacell1 successfully dropped
GridDisk UAFLSHDB_FD_04_uaexacell1 successfully dropped
GridDisk UAFLSHDB_FD_05_uaexacell1 successfully dropped
DROP GRIDDISK <GRIDDISK> ERASE=1pass
Removes the specific grid disk from the cell and performs secure data deletion on the grid disk.
CellCLI> DROP GRIDDISK UADBPROD_CD_DISK10_uaexacell1 ERASE=1pass
GridDisk UADBPROD_CD_DISK10_uaexacell1 successfully dropped
DROP GRIDDISK <GRIDDISK> FORCE
Drops the grid disk even if it is currently active.
CellCLI> DROP GRIDDISK UADBPROD_CD_DISK08_uaexacell1 FORCE
GridDisk UADBPROD_CD_DISK08_uaexacell1 successfully dropped
DROP GRIDDISK ALL HARDDISK
Drops the grid disks that were created on top of the hard disks.
CellCLI> DROP GRIDDISK ALL HARDDISK
Modifying the Exadata Storage Cell Objects (ALTER)
The below commands will help you to modify the cell attributes and various object settings. The ALTER command is also used to start/stop/restart the MS/RS/CELLSRV services.
ALTER ALERTHISTORY <alert_id> examinedBy=<user_name>
Sets the examinedBy attribute of an alert.
CellCLI> ALTER ALERTHISTORY 123 examinedBy=lingesh
ALTER CELL RESTART SERVICES ALL
All (RS + CELLSRV + MS) services are restarted.
CellCLI> ALTER CELL RESTART SERVICES ALL
ALTER CELL RESTART SERVICES < RS | MS | CELLSRV >
To restart specific services
CellCLI> ALTER CELL RESTART SERVICES RS
CellCLI> ALTER CELL RESTART SERVICES MS
CellCLI> ALTER CELL RESTART SERVICES CELLSRV
ALTER CELL SHUTDOWN SERVICES ALL
All (RS + CELLSRV + MS) services will be halted.
CellCLI> ALTER CELL SHUTDOWN SERVICES ALL
ALTER CELL SHUTDOWN SERVICES < RS | MS | CELLSRV >
To shut down a specific service
CellCLI> ALTER CELL SHUTDOWN SERVICES RS
CellCLI> ALTER CELL SHUTDOWN SERVICES MS
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
ALTER CELL STARTUP SERVICES ALL
All (RS + CELLSRV + MS) services will be started.
CellCLI> ALTER CELL STARTUP SERVICES ALL

ALTER CELL STARTUP SERVICES < RS | MS | CELLSRV >
To start a specific service.
CellCLI> ALTER CELL STARTUP SERVICES RS
CellCLI> ALTER CELL STARTUP SERVICES MS
CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
ALTER CELL NAME=<Name>
To set or rename the name of the Exadata storage cell.
CellCLI> ALTER CELL NAME=UAEXACELL1
Cell UAEXACELL1 successfully altered
CellCLI>
ALTER CELL flashCacheMode=WriteBack
To change the flash cache mode from WriteThrough to WriteBack. To perform this, you need to drop the flash cache and stop CELLSRV; then you can change the mode and create the new flash cache.
CellCLI> DROP flashcache
Flash cache UAEXACELL1_FLASHCACHE successfully dropped
CellCLI>
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
Stopping CELLSRV services…
The SHUTDOWN of CELLSRV services was successful.
CellCLI>
CellCLI> ALTER CELL flashCacheMode=WriteBack
Cell UAEXACELL1 successfully altered
CellCLI>
CellCLI> CREATE FLASHCACHE celldisk=”FD_00_uaexacell1,FD_01_uaexacell1″, size=500M
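After re-creating the flash cache, it is worth confirming that the mode change took effect. LIST CELL accepts an attribute list, so a quick check looks like this (the output layout is indicative):

```
CellCLI> LIST CELL ATTRIBUTES name, flashCacheMode
UAEXACELL1 WriteBack
```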
ALTER CELL interconnect1=<Network_Interface>
To set the network interface for the cell storage.
CellCLI> ALTER CELL INTERCONNECT1=eth1
A restart of all services is required to put the new network configuration into effect. MS-CELLSRV communication may be hampered until restart.
Cell UAEXACELL1 successfully altered
ALTER CELL LED OFF
The chassis LED is turned off.
CellCLI> ALTER CELL LED OFF

ALTER CELL LED ON
The chassis LED is turned on.
CellCLI> ALTER CELL LED ON
ALTER CELL smtpServer='<SMTP_SERVER>'
Sets the SMTP server.
CellCLI> ALTER CELL smtpServer='myrelay.unixarena.com'

ALTER CELL smtpFromAddr='<[email protected]>'
Sets the email From address.
CellCLI> ALTER CELL smtpFromAddr='[email protected]'

ALTER CELL smtpToAddr='<[email protected]>'
Sends the alerts to this email address.
CellCLI> ALTER CELL smtpToAddr='[email protected]'

ALTER CELL smtpFrom='<myhostname>'
Sets the alias host name for email.
CellCLI> ALTER CELL smtpFrom='uaexacell1'

ALTER CELL smtpPort='25'
Sets the SMTP port.
CellCLI> ALTER CELL smtpPort='25'

ALTER CELL smtpUseSSL='TRUE'
Makes SMTP use SSL.
CellCLI> ALTER CELL smtpUseSSL='TRUE'
ALTER CELL notificationPolicy='critical,warning,clear'
Sends the alerts for critical, warning, and clear events.
CellCLI> ALTER CELL notificationPolicy='critical,warning,clear'

ALTER CELL notificationMethod='mail'
Sets the notification method to email.
CellCLI> ALTER CELL notificationMethod='mail'
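The email-related attributes above can also be set in a single ALTER CELL call (CellCLI treats a trailing hyphen as a line continuation), and the mail settings can then be verified with ALTER CELL VALIDATE MAIL, which sends a test message. A sketch using the example values from this section:

```
CellCLI> ALTER CELL smtpServer='myrelay.unixarena.com', -
         smtpPort='25', -
         notificationMethod='mail', -
         notificationPolicy='critical,warning,clear'
CellCLI> ALTER CELL VALIDATE MAIL
```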
ALTER CELLDISK <existing_celldisk_name> name='<new_name>', comment='<comments>'
Modifies the cell disk name.
CellCLI> ALTER CELLDISK CD_DISK00_uaexacell1 name='UACELLD', comment='Re-named for UnixArena'
CellDisk UACELLD successfully altered
ALTER CELLDISK ALL HARDDISK FLUSH
Dirty blocks for all the hard disks will be flushed.
CellCLI> ALTER CELLDISK ALL HARDDISK FLUSH
ALTER CELLDISK ALL HARDDISK FLUSH NOWAIT
Allows the alter command to complete while the flush operation continues on all the hard disks.
CellCLI> ALTER CELLDISK ALL HARDDISK FLUSH NOWAIT
Flash cache flush is in progress
CellCLI>
ALTER CELLDISK ALL HARDDISK CANCEL FLUSH
The previous flush operation on all the hard disks will be terminated.
CellCLI> ALTER CELLDISK ALL HARDDISK CANCEL FLUSH
CellDisk CD_DISK02_uaexacell1 successfully altered
CellDisk CD_DISK03_uaexacell1 successfully altered
CellDisk CD_DISK04_uaexacell1 successfully altered
CellDisk CD_DISK05_uaexacell1 successfully altered
ALTER CELLDISK <CELLDISK> FLUSH
Dirty blocks for the specific cell disk will be flushed.
CellCLI> ALTER CELLDISK <CELLDISK> FLUSH
ALTER CELLDISK <CELLDISK> FLUSH NOWAIT
Allows the alter command to complete while the flush operation continues on the specific cell disk.
CellCLI> ALTER CELLDISK <CELLDISK> FLUSH NOWAIT
Flash cache flush is in progress
ALTER FLASHCACHE ALL size=<size>
Resizes the flash cache on all the flash cell disks to the specified total size.
CellCLI> ALTER FLASHCACHE ALL size=100G
ALTER FLASHCACHE ALL
All the flash disks will be assigned to the flash cache.
CellCLI> ALTER FLASHCACHE ALL
Flash cache uaexacell1_FLASHCACHE altered successfully
ALTER FLASHCACHE CELLDISK='<Flashcelldisk1>,<Flashcelldisk2>'
The specified flash cell disks will be assigned to the flash cache, and the other flash disks will be removed from it.
CellCLI> ALTER FLASHCACHE CELLDISK='FD_09_uaexacell1,FD_04_uaexacell1'
Flash cache uaexacell1_FLASHCACHE altered successfully
ALTER FLASHCACHE ALL FLUSH
Dirty blocks for all the flash disks will be flushed.
CellCLI> ALTER FLASHCACHE ALL FLUSH
ALTER FLASHCACHE ALL CANCEL FLUSH
The previous flush operation on all the flash disks will be terminated.
CellCLI> ALTER FLASHCACHE ALL CANCEL FLUSH
Flash cache uaexacell1_FLASHCACHE altered successfully
ALTER FLASHCACHE ALL FLUSH NOWAIT
Allows the alter command to complete while the flush operation continues on all the flash cell disks.
CellCLI> ALTER FLASHCACHE ALL FLUSH NOWAIT
Flash cache flush is in progress
ALTER FLASHCACHE CELLDISK=<FLASH-CELLDISK> FLUSH
Dirty blocks for specific flash celldisk will be flushed
CellCLI> ALTER FLASHCACHE CELLDISK=FD_04_uaexacell1 FLUSH
Flash cache uaexacell1_FLASHCACHE altered successfully
ALTER FLASHCACHE CELLDISK=<FLASH-CELLDISK> CANCEL FLUSH
The previous flush operation on the specific flash cell disk will be terminated.
CellCLI> ALTER FLASHCACHE CELLDISK=FD_04_uaexacell1 CANCEL FLUSH
Flash cache uaexacell1_FLASHCACHE altered successfully
Do not modify the Exadata storage cell configuration without notifying Oracle Support.