Copyright © 2008, Oracle. All rights reserved.
Oracle Clusterware and Private Network Considerations
Much of this presentation is attributed to Michael Zoll and to work done by the RAC Performance Development group.
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
Agenda
• Architectural Overview
• RAC and Cache Fusion Performance
• Infrastructure
• Common Problems and Resolution
• Aggregation and VLANs
Oracle Clusterware
[Diagram: each cluster node runs the Clusterware daemons EVMD, CRSD, OPROCD, ONS, and CSSD on top of the OS, with CSSD running at real-time priority. Node VIPs (VIP1 … VIPn) sit on the public network. The OCR and voting disks reside on raw devices in shared storage. The nodes communicate over a cluster private high-speed network through an L2/L3 switch.]
Under the Covers
[Diagram: Node 1 … Node n each host one RAC instance. Every instance's SGA contains a buffer cache, library cache, dictionary cache, log buffer, and its share of the Global Resource Directory, and runs the background processes LMS0, LMON, LMD0, DIAG, VKTM, LGWR, DBW0, SMON, and PMON, with LMS running at real-time priority. Each instance has its own redo log files; all instances share the data files and control files. The instances are joined by the cluster private high-speed network through an L2/L3 switch.]
Global Cache Service (GCS)
• Manages coherent access to data in the buffer caches of all instances in the cluster
• Minimizes access time to data that is not in the local cache
– access to data in the global cache is faster than disk access
• Implements fast direct memory access over high-speed interconnects
– for all data blocks and types
• Uses an efficient and scalable messaging protocol
– never more than 3 hops
• New optimizations for read-mostly applications
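The bounded messaging above can be sketched as follows. This is an illustrative model, not Oracle internals: the function and instance names are invented, and it only counts hops.

```python
# Hypothetical sketch of Cache Fusion's bounded messaging: a block request
# resolves in at most three hops regardless of cluster size.

def request_block(requester, master, holder):
    """Count message hops for one block request.

    requester -> master            (hop 1: ask the resource master)
    master -> requester            (hop 2: master holds the block, 2-way)
      or
    master -> holder -> requester  (hops 2-3: holder ships the block, 3-way)
    """
    hops = 1                      # requester asks the master
    if master == holder:
        hops += 1                 # master ships the block itself: 2-way
    else:
        hops += 2                 # master forwards; holder ships: 3-way
    return hops

# The hop count never depends on the number of instances in the cluster.
assert request_block("inst1", "inst2", "inst2") == 2   # 2-way transfer
assert request_block("inst1", "inst2", "inst3") == 3   # 3-way transfer
```

Whether the master also holds the block decides 2-way versus 3-way; no case requires more than three hops.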
Cache Hierarchy: Data in Remote Cache
[Diagram: a local cache miss sends a data block request to the remote instance; a remote cache hit returns the data block to the requester.]
Cache Hierarchy: Data On Disk
[Diagram: a local cache miss sends a data block request; a remote cache miss returns a grant instead of the block, and the requester reads the block from disk.]
Cache Hierarchy: Read Mostly
[Diagram: for read-mostly data, a local cache miss goes straight to a disk read; no message is required.]
11.1
• CPU optimizations for read-intensive operations
– Read-only access: no messages, direct reads
– Read-mostly access: message reductions, latency improvements
• Significant gains
– from 50-70% reductions measured
Performance of Cache Fusion
[Diagram: the requesting process initiates a send (a ~200-byte message) and waits; the LMS process on the holding instance receives the message, processes the block, and sends it back (e.g. an 8K block); the requester receives it. Wire times: 200 bytes/(1 Gb/sec) for the message, 8192 bytes/(1 Gb/sec) for the block.]
Total access time: e.g. ~360 microseconds (UDP over GbE). Network propagation delay ("wire time") is a minor factor in the roundtrip time (approx. 6%, vs. 52% in the OS and network stack).
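A quick back-of-the-envelope check of the wire-time components in the diagram. This is a sketch using nominal rates, not a measurement:

```python
# Serialization times for the Cache Fusion round trip above (illustrative).

GBE_BITS_PER_SEC = 1e9            # nominal GbE signalling rate

def serialization_us(nbytes, bits_per_sec=GBE_BITS_PER_SEC):
    """Time to clock nbytes onto the wire, in microseconds."""
    return nbytes * 8 / bits_per_sec * 1e6

msg_us   = serialization_us(200)     # ~1.6 us for the request message
block_us = serialization_us(8192)    # ~65.5 us for an 8K block

# Against a ~360 us total round trip, time on the wire is a minor share;
# most of the latency is spent in the OS and network stack.
assert round(msg_us, 1) == 1.6
assert round(block_us, 1) == 65.5
```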
Fundamentals: Minimum Latency (*), UDP/GBE and RDS/IB
RT (ms)    Block size:   2K     4K     8K     16K
UDP/GE                   0.30   0.31   0.36   0.46
RDS/IB                   0.12   0.13   0.16   0.20

(*) Roundtrip, blocks are not "busy", i.e. no log flush, no serialization ("buffer busy"). AWR and Statspack reports show averages as if they were normally distributed; the session wait history (included in Statspack in 10.2 and AWR in 11g) shows the actual quantiles. The minimum values in this table are the optimal values for 2-way and 3-way block transfers, but can be assumed to be the expected values (i.e. 10 ms for a 2-way block would be very high).
Infrastructure: Network Packet Processing
[Diagram: on each node a process (FG/LMS*) passes data through the socket layer (tx/rx buffers), the TCP/UDP and IP layers, and the interface layer; frames travel TX/RX through an L2/L3 switch between the nodes.]
Infrastructure: Interconnect Bandwidth
• Bandwidth requirements depend on several factors (e.g. buffer cache size, # of CPUs per node, access patterns) and cannot be predicted precisely for every application
• Typical utilization is approx. 10-30% in OLTP
– 10000-12000 8K blocks per sec to saturate 1 x Gb Ethernet (75-80% of theoretical bandwidth)
• Generally, 1 Gb/sec is sufficient for performance and scalability in OLTP
• DSS/DW systems should be designed with > 1 Gb/sec capacity
• A sizing approach with rules of thumb is described in Project MegaGrid: Capacity Planning for Large Commodity Clusters (http://otn.oracle.com/rac)
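The saturation figure above follows directly from the slide's 75-80% usable-bandwidth assumption; a sketch of the arithmetic:

```python
# Sizing arithmetic behind "10,000-12,000 8K blocks/sec to saturate 1 x GbE".
# The 75-80% usable fraction is the slide's figure, not a measurement here.

LINK_BYTES_PER_SEC = 125_000_000     # 1 Gb/s expressed in bytes/sec
BLOCK = 8192                         # 8K database block

def blocks_per_sec(usable_fraction):
    """8K blocks/sec that fit in the usable share of a GbE link."""
    return int(LINK_BYTES_PER_SEC * usable_fraction / BLOCK)

lo = blocks_per_sec(0.75)            # ~11,400 blocks/sec
hi = blocks_per_sec(0.80)            # ~12,200 blocks/sec
assert 10_000 < lo < hi < 13_000     # matches the slide's 10,000-12,000 range
```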
Infrastructure: Private Interconnect
• The network between the nodes of a RAC cluster MUST be private
• Supported links: GbE, IB (IPoIB: 10.2)
• Supported transport protocols: UDP, RDS (10.2.0.3)
• Use multiple or dual-ported NICs for redundancy, and increase bandwidth with NIC bonding
• Large (Jumbo) Frames for GbE are recommended if the global cache workload requires it
– global cache block shipping versus small lock message passing
Network Packet Processing: Layers, Queues and Buffers
[Diagram: the previous picture annotated with the queues and buffers at each layer. At the user/kernel boundary, the socket layer with socket queues and buffers (recv()); the TCP/UDP and IP layers with TX IP queues, the RX IP input queue, and software interrupts; the interface layer with TX/RX queues, RX buffers, and hardware interrupts delivered to a CPU; and the L2/L3 switch with ingress and egress buffers exerting backplane pressure.]
Infrastructure: IPC configuration
Important settings:
• Negotiated top bit rate and full duplex mode
• NIC ring buffers
• Ethernet flow control settings
• CPU(s) receiving network interrupts
Verify your setup:
• CVU does checking
• Load testing eliminates potential for problems
• AWR and ADDM give estimations of link utilization
Buffer overflows, congested links and flow control can have severe consequences for performance.
Infrastructure: Operating System
• Block access latencies increase when CPU(s) are busy and run queues are long
– immediate LMS scheduling is critical for predictable block access latencies when CPUs are > 80% busy
• Fewer and busier LMS processes may be more efficient
– monitor their CPU utilization
– caveat: 1 LMS can be good for runtime performance but may impact cluster reconfiguration and instance recovery time
– the default is good for most requirements
• Higher priority for LMS is the default
– the implementation is platform-specific
Common Problems and Symptoms
• "Lost Blocks": interconnect or switch problems
• System load and scheduling
• Contention
• Unexpectedly high global cache latencies
Misconfigured or Faulty Interconnect Can Cause:
• Dropped packets/fragments
• Buffer overflows
• Packet reassembly failures or timeouts
• Ethernet flow control kicking in
• TX/RX errors
"Lost blocks" at the RDBMS level, responsible for 64% of escalations
“Lost Blocks”: NIC Receive Errors
db_block_size = 8K
ifconfig -a:
eth0  Link encap:Ethernet  HWaddr 00:0B:DB:4B:A2:04
      inet addr:130.35.25.110  Bcast:130.35.27.255  Mask:255.255.252.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:21721236 errors:135 dropped:0 overruns:0 frame:95
      TX packets:273120 errors:0 dropped:0 overruns:0 carrier:0
…
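A hedged helper for spotting the problem in output like the above. The parsing assumes the classic net-tools ifconfig format shown on the slide; field layout varies by platform:

```python
import re

# Scan ifconfig-style output for non-zero RX error-class counters,
# which point at a NIC, driver, or cabling problem.

IFCONFIG = """\
eth0 Link encap:Ethernet HWaddr 00:0B:DB:4B:A2:04
 RX packets:21721236 errors:135 dropped:0 overruns:0 frame:95
 TX packets:273120 errors:0 dropped:0 overruns:0 carrier:0
"""

def rx_errors(text):
    """Return {counter: value} for the RX line's error-class counters."""
    m = re.search(r"RX packets:\d+ errors:(\d+) dropped:(\d+) "
                  r"overruns:(\d+) frame:(\d+)", text)
    keys = ("errors", "dropped", "overruns", "frame")
    return dict(zip(keys, map(int, m.groups())))

bad = {k: v for k, v in rx_errors(IFCONFIG).items() if v > 0}
assert bad == {"errors": 135, "frame": 95}   # both counters demand attention
```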
“Lost Blocks”: IP Packet Reassembly Failures
netstat -s
Ip:
    84884742 total packets received
    …
    1201 fragments dropped after timeout
    …
    3384 packet reassembles failed
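The same check can be scripted. A sketch that pulls the IP reassembly counters out of `netstat -s` output, assuming the Linux field labels shown on the slide (names vary by platform):

```python
import re

# Extract the IP fragment/reassembly failure counters from `netstat -s` text.

NETSTAT = """\
Ip:
 84884742 total packets received
 1201 fragments dropped after timeout
 3384 packet reassembles failed
"""

def reassembly_stats(text):
    stats = {}
    for count, label in re.findall(r"(\d+) (fragments dropped after timeout|"
                                   r"packet reassembles failed)", text):
        stats[label] = int(count)
    return stats

s = reassembly_stats(NETSTAT)
# Non-zero values here usually mean undersized reassembly buffers or an
# MTU mismatch somewhere on the interconnect path.
assert s["packet reassembles failed"] == 3384
assert s["fragments dropped after timeout"] == 1201
```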
Finding a Problem with the Interconnect or IPC

Top 5 Timed Events                                 Avg wait  % Total
Event               Waits     Time(s)      (ms)  Call Time  Wait Class
------------------- --------- -------- -------- ---------- ----------
log file sync         286,038   49,872      174       41.7  Commit
gc buffer busy        177,315   29,021      164       24.3  Cluster
gc cr block busy      110,348    5,703       52        4.8  Cluster
gc cr block lost        4,272    4,953     1159        4.1  Cluster   <- should never be here
cr request retry        6,316    4,668      739        3.9  Other
Global Cache Lost block handling
• Detection time reduced in 11g
– 500 ms (around 5 secs in 10g)
– can be lowered if necessary
– robust (no false positives)
– no extra overhead
• The cr request retry event is related to lost blocks
– it is highly likely to appear when gc cr blocks lost show up
Interconnect Statistics: Automatic Workload Repository (AWR)

Target     Avg Latency  Stddev    Avg Latency  Stddev
Instance   500B msg     500B msg  8K msg       8K msg
-----------------------------------------------------
   1          .79          .65       1.04        1.06
   2          .75          .57        .95         .78
   3          .55          .59        .53         .59
   4         1.59         3.16       1.46        1.82
-----------------------------------------------------

• Latency probes for different message sizes
• Exact throughput measurements (not shown)
• Send and receive errors, dropped packets (not shown)
"Blocks Lost": Solution
• Fix interconnect NICs and switches
• Tune IPC buffer sizes
CPU Saturation or Long Run Queues

Top 5 Timed Events                                   Avg wait  % Total
Event                     Waits      Time(s)   (ms)  Call Time  Wait Class
------------------------- ---------- -------- ----- ---------- ----------
db file sequential read    1,312,840   21,590    16       21.8  User I/O
gc current block congested   275,004   21,054    77       21.3  Cluster
gc cr grant congested        177,044   13,495    76       13.6  Cluster
gc current block 2-way     1,192,113    9,931     8       10.0  Cluster
gc cr block congested         85,975    8,917   104        9.0  Cluster

"Congested": LMS could not dequeue messages fast enough
Cause: long run queue, CPU starvation
High CPU Load: Solution
• Run LMS at higher priority (default)
• Start more LMS processes
– never use more LMS processes than CPUs
• Reduce the number of user processes
• Find the cause of high CPU consumption
Contention

Event                   Waits    Time (s)  Avg (ms)  % Call Time
----------------------- -------- --------- --------- -----------
gc cr block 2-way        317,062     5,767        18        19.0
gc current block 2-way   201,663     4,063        20        13.4
gc buffer busy           111,372     3,970        36        13.1
CPU time                             2,938                   9.7
gc cr block busy          40,688     1,670        41         5.5
----------------------------------------------------------------

Global contention on data: serialization. It is very likely that gc cr block busy and gc buffer busy are related.
Contention: Solution
• Identify "hot" blocks in the application
• Reduce concurrency on hot blocks
High Latencies

Event                   Waits    Time (s)  Avg (ms)  % Call Time
----------------------- -------- --------- --------- -----------
gc cr block 2-way        317,062     5,767        18        19.0
gc current block 2-way   201,663     4,063        20        13.4
gc buffer busy           111,372     3,970        36        13.1
CPU time                             2,938                   9.7
gc cr block busy          40,688     1,670        41         5.5
----------------------------------------------------------------

• Tackle latency first, then tackle busy events
• Expected: to see 2-way and 3-way events
• Unexpected: to see > 1 ms (Avg ms should be around 1 ms)
High Latencies: Solution
• Check the network configuration
– private
– running at the expected bit rate
• Find the cause of high CPU consumption
– runaway or spinning processes
Health Check
Look for:
• Unexpected events
– gc cr block lost             1159 ms
• Unexpected "hints"
– contention and serialization: gc cr/current block busy     52 ms
– load and scheduling: gc current block congested            14 ms
• Unexpectedly high averages
– gc cr/current block 2-way    36 ms
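These checks are mechanical enough to script against AWR-style averages. The thresholds below are illustrative assumptions derived from this slide, not Oracle-published limits:

```python
# Illustrative health check over AWR-style average latencies (ms).
# Thresholds are assumptions based on this slide's rough expectations.

EXPECTED_MAX_MS = {
    "gc cr block lost": 0,             # should not appear at all
    "gc cr/current block busy": 30,    # contention / serialization hint
    "gc current block congested": 10,  # load / scheduling hint
    "gc cr/current block 2-way": 1,    # normal transfers should be ~1 ms
}

def flag_events(observed_ms):
    """Return the events whose observed average exceeds expectations."""
    return {e: ms for e, ms in observed_ms.items()
            if ms > EXPECTED_MAX_MS.get(e, float("inf"))}

observed = {
    "gc cr block lost": 1159,
    "gc cr/current block busy": 52,
    "gc current block congested": 14,
    "gc cr/current block 2-way": 36,
}
assert set(flag_events(observed)) == set(observed)  # every value is a red flag
```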
Gigabit Ethernet Definition
• Max bandwidth: 1000 Mbits = 125 MB per sec; excluding headers and pause frames, 118 MB per sec
• Equates to 85000 Clusterware/RAC messages or 14000 8K blocks per sec
• A RAC workload has a mix of short messages of 256 bytes and long messages of db_block_size
• For a real-life workload only 60-70% of the bandwidth can be sustained
• For a RAC-type workload 40 MB per sec per interface is the optimal load
• For additional bandwidth more interfaces can be aggregated
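The block figure follows from the slide's own 118 MB/s usable rate; a sketch of the arithmetic (approximate, not a benchmark):

```python
# Arithmetic behind the slide's GbE figures.

USABLE_BYTES_PER_SEC = 118_000_000   # 125 MB/s minus headers and pause frames
BLOCK = 8192                         # 8K database block

blocks = USABLE_BYTES_PER_SEC // BLOCK      # ~14,400 8K blocks/sec
assert 14_000 <= blocks <= 15_000           # matches the slide's ~14000

# The slide's recommended sustained load per interface:
optimal_mb_per_sec = 40
assert optimal_mb_per_sec / 118 < 0.35      # well under the usable ceiling
```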
Aggregation Active/Standby (single switch)
[Diagram: two NICs, ce2 and ce4, in one IPMP group connected to a single switch; ce4:1 carries the data address.]

$ ifconfig -a
ce2: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 3
     inet 192.168.83.36 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 8
     inet 192.168.83.35 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 8
     inet 192.168.83.37 netmask ffffff00 broadcast 192.168.83.255
Aggregation Active/Active (single switch)
[Diagram: two NICs, ce2 and ce4, in one IPMP group connected to a single switch; both interfaces are active, and ce4:1 carries the data address.]

$ ifconfig -a
ce2: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
     inet 192.168.83.36 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 8
     inet 192.168.83.35 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
     inet 192.168.83.37 netmask ffffff00 broadcast 192.168.83.255
Aggregation Active/Standby (switch redundancy)
[Diagram: each node has four interfaces (ce2, ce4, ce8, ce10) split across two switches; ce4:1 carries the data address.]

$ ifconfig -a
ce10: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 3
     inet 192.168.83.36 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 8
     inet 192.168.83.35 netmask ffffff00 broadcast 192.168.83.255
     groupname private
ce4:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
     inet 192.168.83.37 netmask ffffff00 broadcast 192.168.83.255
Aggregation Solutions
• Cisco Etherchannel based on 802.3ad
• AIX Etherchannel
• HP-UX Auto Port Aggregation
• Sun Trunking, IPMP, GLD
• Linux bonding (only certain modes)
• Windows NIC teaming
Aggregation Methods
• Load balance / failover / load spreading
– spread on sends / serialize on receives
• Active/Standby
Oracle interconnect requirements:
• Both send- and receive-side load balancing
• NIC and switch port failure detection
General Interconnect Recommendations
• For OLTP workloads
– normally 1 Gbit Ethernet with redundancy (active/standby or load-balance) is sufficient
• For DW workloads
– multiple GigE aggregated, 10 GigE, or InfiniBand
Oracle RAC Cluster Interconnect Network Selection
• Oracle Clusterware
– IP address associated with the private hostname (provided during the install interview)
• Oracle RAC database
– private network specified during the install interview
– IP address provided by the cluster_interconnect parameter
Jumbo Frames
• Not an IEEE standard
• Useful for NAS/iSCSI storage
• Network device interoperability issues
• Configure with care and test rigorously

Excerpt from alert.log:
Maximum Tranmission Unit (mtu) of the ether adapter is different on the node running instance 4, and this node. Ether adapters connecting the cluster nodes must be configured with identical mtu on all the nodes, for Oracle. Please ensure the mtu attribute of the ether adapter on all nodes [and switch ports] are identical, before running Oracle.
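The rule the alert-log message enforces is simple to state in code. A sketch with hypothetical node names and values:

```python
# The consistency rule behind the alert: the MTU must match on every node's
# interconnect NIC (and on the switch ports). Node names are hypothetical.

def mtu_consistent(mtus):
    """mtus: {node: mtu}. True only when every node agrees."""
    return len(set(mtus.values())) == 1

assert mtu_consistent({"node1": 9000, "node2": 9000, "node3": 9000})
assert not mtu_consistent({"node1": 9000, "node4": 1500})  # triggers the alert
```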
UDP Socket Buffer (rx)
• Default settings are adequate for the majority of customers
• May need to increase the allocated buffer size when:
– the MTU size increases
– netstat reports fragmentation and/or reassembly errors
– ifconfig reports dropped packets or overflow
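For reference, a hedged example of how a UDP receive buffer is raised at the socket level with SO_RCVBUF. The 4 MB figure is illustrative; the kernel caps the grant (e.g. net.core.rmem_max on Linux), so always read the value back rather than trusting the request:

```python
import socket

# Request a larger UDP receive buffer and verify what the kernel granted.
REQUESTED = 4 * 1024 * 1024   # 4 MB, an illustrative figure

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

# Linux typically reports double the requested value (bookkeeping overhead)
# and silently clamps to rmem_max; either way the call itself succeeds.
assert granted > 0
```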
Cluster Interconnect NIC Settings
• NIC driver dependent - defaults are generally satisfactory
• Changes can occur between OS versions
– Linux 2.4 => 2.6 kernels, flow control on e1000 drivers
– NAPI interrupt coalescence in 2.6
• Confirm flow control: rx=on, tx=off
• Confirm full bit rate (1000) for the NICs
• Confirm full duplex auto-negotiate
• Ensure NIC names/slots are identical on all nodes
• Configure interconnect NICs on the fastest PCI bus
• Ensure compatible switch settings
– 802.3ad on NICs = 802.3ad on switch ports
– MTU=9000 on NICs = MTU=9000 on switch ports

FAILURE TO CONFIGURE THE NICS AND SWITCHES CORRECTLY WILL RESULT IN SEVERE PERFORMANCE DEGRADATION AND NODE FENCING
The Interconnect and VLANs
• The interconnect should be a dedicated, non-routable subnet mapped to a single dedicated, non-shared VLAN
• If VLANs are 'trunked', the interconnect VLAN traffic should not exceed the access switch layer
• Minimize the impact of Spanning Tree events
• Monitor the switch(es) for congestion
• Avoid QoS definitions that may negatively impact interconnect performance
Q & A