Introduction to Intel® SSR212MA Storage System (Condensed), Q3 2005
© 2005 Intel Corporation.*Other names and brands may be claimed as the property of others.
• Learning About Intel® Platform SSR212MA
– Understanding the Chassis Architecture
– Understanding the Board Architecture
– Understanding the RAID Architecture
– Support for SSR212MA
• System Features and Implementation
– Understanding SAN Management Software
 Introduction to IP SAN Management
 Connecting and Configuring the SSM
 Planning for Disk Management
– Provisioning Using SAN Management Software
 Using Management Groups
 Using Clusters
 Using Volumes
Introduction to Intel® SSR212MA Storage System
Agenda
Introduction to Intel® SSR212MA Storage System
Agenda (cont.)
• System Features and Implementation (cont.)
– Configuring Data Transmission Protocols
iSCSI
– Understanding Data Protection
 Using Snapshots
 Using Remote Copy
 Planning for Disaster Recovery
– Performance Tuning
Introduction to Intel® SSR212MA Storage System
Learning More About Intel Products
• Intel provides a special website for product support:
– http://support.intel.com
• Site allows you to
– Download software, drivers, plug-ins, and other utilities for Intel® server products
– Access product-related documentation
– Gain contact information for technical support
Learning About the Intel® Storage System SSR212MA
• Introduction
• Storage System Features
– Hardware
– Software
• Storage System Contents
• SAN Software Accessories
• Spares
Learning About Intel® Storage System SSR212MA
Introduction
• Integrated storage area network (SAN) system that includes a system board, two SATA Host Bus Adapter cards, a backplane, a 500W power supply, and a 2U chassis
• Flexible design supports up to 12 hard drives
• Software is easy to use, with a user-friendly GUI and a wealth of features
• Configured with iSCSI connectivity
Learning About Intel® Storage System SSR212MA
Features: Hardware
• Support for up to 12 hot-swap SATA hard drives
• Single Intel® Xeon™ processor
• Two 500W power supply modules in a 1+1 redundant configuration (unit ships with one power supply)
• System cooling
– Five 40-mm hot-swappable redundant fans
 Three dual-rotor
 Two single-rotor
– One 60-mm fan in the power supply enclosure
– Processor wind tunnel assembly
• Includes two SATA Host Bus Adapter cards
• Includes an IDE Disk On Module (DOM) containing the OS and SAN application software
Learning About Intel® Storage System SSR212MA
Features: Software
• Product suite provides sophisticated storage management
– Storage System Module
– Storage System Engine
– Storage System Console
• Overview of features– Volume management and provisioning
– Over-subscription (thin provisioning)
– Snapshots
– Security
Learning About Intel® Storage System SSR212MA
Features: Software (cont.)
• Volume management and provisioning
– Configure and provision volumes
– Manage disk volumes on the fly, including hardware and software RAID controllers
– Pool drives within the enclosure for high availability and scalability
• Over-subscription
– Create volumes that are larger than actual drive capacity
– Save upfront disk investment until actually required
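The over-subscription idea above can be sketched in a few lines. This is an illustrative model only, not the product's actual software; the class name and sizes are invented for the example:

```python
# Minimal sketch of over-subscription (thin provisioning): volumes may be
# provisioned beyond physical capacity; physical blocks are consumed only
# when data is actually written.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.provisioned_gb = 0   # sum of logical volume sizes
        self.used_gb = 0          # space actually written

    def create_volume(self, size_gb):
        # Logical size may exceed physical capacity (over-subscription)
        self.provisioned_gb += size_gb

    def write(self, gb):
        # Physical space is consumed only on write
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool out of physical space")
        self.used_gb += gb

pool = ThinPool(physical_gb=3000)   # e.g. 12 x 250-GB drives, RAID 0
pool.create_volume(2000)
pool.create_volume(2000)            # 4000 GB provisioned on 3000 GB physical
print(pool.provisioned_gb, pool.used_gb)
```

The disk investment is deferred: extra physical drives are needed only once writes approach the pool's real capacity.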
• Snapshots
– Manually create single or multiple-volume snapshots
– Can be upgraded to allow for automatic snapshots
Learning About Intel® Storage System SSR212MA
Storage System Contents
• 2U chassis integrated with
– System board
– Two SATA Host Bus Adapter cards with battery backup units
– Single Disk On Module (DOM) attached to IDE
– Control panel assembly
– One 500W power supply module
– Power distribution board
– System cooling module (five system fans)
– Control panel cable
– SATA hard drive cables
– SATA hot-swap backplane
• Serial configuration cable
• Hard drive carriers
• Hard drive labels
• Rail kit
• Quick Start Guide
• Resource CD
Learning About Intel® Storage System SSR212MA
SAN Software Accessories
• Scalability package
• Configurable snapshot package
• Remote data protection package
Learning About Intel® Storage System SSR212MA
Spares
• Cable kit (SATA bundle of 4)
• Chassis (including power supply enclosure, backplane, SATA cables, interposer board, and front ops panel)
• 10x drive carriers (including Torx driver and HDD label set)
• Interposer board (connects fan tray assembly to backplane)
• Power supply cage (includes power distribution board)
• Fan tray assembly (includes fan distribution board)
• Packaging (flattened external box, no internal foam or inserts)
• Intel® Server Board SE7520JR2
• Intel® Xeon™ processor
• 2U three-slot active PCI-X riser for Intel Server Board SE7520JR2
• Intel RAID Controller SRCS28X
• Battery backup unit for Intel RAID Controller SRCS28X
• 500W power supply
Understanding the Intel® Storage System SSR212MA Chassis Architecture
• Chassis design characteristics
– Electromagnetic compatibility
– Cooling subsystem design
• Overview of the architecture
Understanding the Chassis Architecture
Chassis Design Characteristics
• Worldwide regulatory compliance
– Electromagnetic compatibility
– Safety
• Cooling subsystem design
– Thermals
– Acoustics
– Environmental limits
– Server management
– Airflow
Understanding the Chassis Architecture
Overview of the Architecture: Common Chassis Components
• Enclosures
– Internal components
– Conceptual diagram
– Front view
– Rear view
• Control panel
– Buttons
– LEDs
• Hard drive bays
• Hard drive carrier
• Hot-swap SATA backplane
• Disk On Module (DOM)
• PCI expansion slot
• Cooling subsystem
• Power subsystem
• Security and management
Understanding the Chassis Architecture
Enclosures: Internal Components
Understanding the Chassis Architecture
Enclosures: Front View
Understanding the Chassis Architecture
Front Panel: LEDs
• Power On
• Fault
• Enclosure ID
Understanding the Chassis Architecture
Hard Drive Bays
• Supports up to 12 hot-swap SATA hard drives
• Drive bays must be populated in order, from left to right and top to bottom
• Drives are installed into carriers, which ship with the system
• Once a RAID configuration is set up, any drives that are removed must be reinstalled in the same locations
• Drive labels are included with the system
Understanding the Chassis Architecture
Hot-Swap SATA Backplane
• Slots provided for docking up to 12 hot-swap SATA hard drives at 1.5 Gb/s or 3.0 Gb/s
• Two SATA host I2C interfaces
• SATA Specification version 1.00 compatible
• Supports SAF-TE commands
• 256-KB external SRAM
• Drive status LEDs
Understanding the Chassis Architecture
Disk On Module (DOM)
• Ships with one 512-MB Disk On Module (DOM) installed directly to the SSR212MA baseboard’s IDE connector
• The DOM includes the OS and SAN application software
• System must be powered down before the DOM is removed, or data corruption could occur
Understanding the Chassis Architecture
PCI Expansion Slot
• One PCI expansion slot – slot 3
• Supports PCI-X 133-MHz cards
• Provides support for expanded Ethernet client connectivity
Understanding the Chassis Architecture
Cooling Subsystem
• Five 40-mm redundant hot-swappable system fans**
– Three dual-rotor
– Two single-rotor
• One 60-mm fan in the power supply enclosure
• Temperature sensor
• Cooling vents in drive carriers and at chassis rear
• Processor wind tunnel installed for processor cooling
** See TPS for proper hot-swap procedure.
Understanding the Chassis Architecture
Power Subsystem
• 500W 1+1 redundant hot-swappable power supply module (system ships with one power supply)
– One AC line connector per module
– Power factor correction
– Auto-ranging for either 100-127 VAC or 200-240 VAC
– Each module has a single bi-color LED
• Power supply enclosure
– One 60-mm fan
– DC-to-DC converter
Understanding the Chassis Architecture
Security and Management
• Security
– Chassis intrusion detection
• Storage System software module
– Temperature sensor
– Chassis intrusion sensor
– Managed chassis fans
Understanding the Chassis Architecture
Rack Shelf Mounting
• Supports cabinets that are 19 inches wide by up to 36 inches deep
• Rack mounting rails are included with system
• Brackets attach to the front and the rear of the rack
• Designed to conform to the SSI Server Rack spec and EIA 310-D
The Intel® Storage System SSR212MA Board Architecture
Tour of the functional subsystems
• Intel® E7520 Chipset
• Processor Subsystem (single processor only)
• Memory Subsystem (up to 16 GB, but only 4 GB used)
• Onboard Devices:
– Video
– Network
– Parallel ATA
– USB
– Legacy I/O
– Serial port for the text user interface
• Riser Slots
• Power Connectors
• Server Management
– BMCs
– LEDs
– Headers
• Jumpers and additional headers
Understanding the Intel® RAID Controller SRCS28X Architecture
Architecture overview
• Conceptual diagram
• Summary of components
Intel® RAID Controller SRCS28X
Conceptual Diagram
Intel® RAID Controller SRCS28X
Summary of Components
• I/O processor
• SATA chipset
• SATA connectors - 8
• Cache memory
• Battery backup module
• PCI-X interface
• Flash memory
• Jumpers and connectors
• Audible alarm
• Physical characteristics
Service and Support for the Intel® Storage System SSR212MA
• Installation guides
• Platform Confidence Test
• SMaRT Tool
• Technical support
– Collateral
Service and Support
Installation Guides
Available on the Resource CD, http://support.intel.com/, and your Intel® Business Link account (iBL)
• Interposer Board
• Disk On Module (DOM)
• Rail Kit
• Power Supply Cage
• Cooling Module
Service and Support
Platform Confidence Test
• Executable program available via Intel® Support http://support.intel.com/ and your Intel Business Link account (iBL)
• Can be used to troubleshoot the system board only
• Customer must create an image on a bootable USB key
– Boot from the USB key
– Follow the prompts
Service and Support
SMaRT Tool
• SMaRT Tool system modules consist of:
– Virtual System Tour (VST)
– “How To Replace” procedures
– Parts database
• New features for the SSR212MA
– Detailed procedure for performing client SAN configuration routines
– Visuals for VST composed of CAD models rather than traditional photography
– To decrease cost and TTM, “How To Replace” procedures for “simple” FRUs such as memory and CPU will be linked to the online manual
Service and Support
Technical Support Collateral
• Product Brief
• Technical Product Specification (TPS)
• Quick Start User’s Guide (QSG)
• Product Guide
• Specification Update
• Tested Hardware and OS List (THOL)
• Tested Memory List (TML)
• Software User’s Manual
• Software Release Notes
• Interposer Board Installation Guide
• Disk On Module (DOM) Installation Guide
• Rail Kit Installation Guide
• Power Supply Cage Installation Guide
• Fan Tray Installation Guide
• Product Change Notifications
Available via Intel® Support (http://support.intel.com/) and your Intel Business Link account (iBL)
Introduction to IP SAN Management
• IP SAN overview
• Product suite
Introduction to IP SAN Management
Intel® Storage System IP SAN Benefits
• Implementing a full-featured IP SAN
– Utilizes the existing Ethernet infrastructure
– Eliminates the learning curve, expensive components, and complexity associated with Fibre Channel (FC) SANs
– Is ideal for Microsoft Exchange or Microsoft SQL Server
– Allows server consolidation, local or geographic disaster recovery, general file services, and disk-to-disk backup projects
Introduction to IP SAN Management
Network and Storage Layouts
[Diagram: Site 1 and Site 2 storage layouts connected via Ethernet or FC]
Introduction to IP SAN Management
Introducing the Product Suite
• Storage System Module (SSM)
– Uses Ethernet or Fibre Channel-based storage devices
• Storage System Engine Software
– Combines with SSMs to provide an IP or FC SAN solution
• Centralized Management Console
– A centralized storage management facility
• Standard iSCSI support
– Provides access to storage
• Fibre Channel support (QLogic only in first release)
– Provides access to storage
Introduction to IP SAN Management
Storage System Module (SSM)
• High-availability, high-reliability hardware
– Dual redundant, hot-swappable power supplies
– Hot-swappable drives
– Dual NIC connectivity
– Robust power and cooling diagnostics
• Modular hardware package for granular scalability
• Pooled storage with centralized management
Introduction to IP SAN Management
Storage System Software
• Virtualizes multiple storage modules into a storage pool
– Scales from four terabytes to hundreds of terabytes
• Supports an easy-to-use Graphical User Interface (GUI)
• Provides sophisticated data management features, such as:– Replication
– Snapshot
– Automated failover
– Storage pooling
– Virtualization
– Remote Copy
Introduction to IP SAN Management
Storage System Console
Introduction to IP SAN Management
iSCSI Client Driver for Windows
• Software that connects the server to the volumes created on the SAN
Introduction to IP SAN Management
Add-On Modules and Extensions
• Scalability Package (Clustering)
• Configurable Snapshot Package
• Disaster Recovery Package (Remote Copy)
Introduction to IP SAN Management
Summary
• Implementing an IP SAN
– Allows server consolidation and flexible allocation of storage through the use of the SSC
– Additional storage can be added quickly and easily with no downtime to users
– Data protection and disaster recovery functions can be managed at the volume level
– Survives the loss of a single SSM because all data is replicated at the volume level
Connecting and Configuring the SSM
• Connecting to the Storage System Module (SSM)
• Configuring the SSM
Connecting and Configuring the SSM
Installation Documentation
• Quick Start Guide
• Software User's Manual
• Technical Product Specification
• Release Notes
Connecting and Configuring the SSM
Network Configuration
• Use the Management Console to set up the SSM IP address manually, as summarized below
– Install the hardware and cables
– Power up the module
– Attach a PC or laptop to the SSM serial port with the provided serial cable
Connecting and Configuring the SSM
Connecting to the Management Console - Windows
• On the PC or laptop attached to the SSM, open a session with the HyperTerminal or ProComm Plus terminal emulation program
• Configure the following settings in the Windows terminal emulation program
– Bits per second = 19200
– Data bits = 8
– Parity = None
– Stop bits = 1
– Flow or hardware control = None
– Backspace key sends = Del
– Emulation = ANSI
• Press <Enter> when the terminal emulation session is established
– A prompt appears asking you to type “start”
• Type start and press <Enter>. When the session is connected to the SSM, the Management Console window opens
Connecting and Configuring the SSM
Log In Using the Management Console
1. Press <Enter> on the Management Console Login screen. The Management Console opens to the Login prompt
2. Use the Tab key to move through the fields and menu options in the Management Console
3. Type in the User Name and Password found in the Quick Start Guide
Connecting and Configuring the SSM
Setting Up IP Address and Gateway
1. Press <Tab> to navigate to the Network Settings window
2. The cursor should be in the Hostname field. If not, tab to the field. Press <Backspace> to delete the existing text and type the desired host name. This name displays in the Storage System Console and on the network
3. Tab to the third choice, Set IP Address Manually, and press <Enter>. The IP Address, Netmask, and Gateway list opens
– Enter the IP address from the planning sheet in the IP Address field
– Enter the netmask from the planning sheet in the Mask field.
– Enter the default gateway or, if there is no gateway, use 0.0.0.0.
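Before typing the planning-sheet values into the console, they can be sanity-checked with Python's standard `ipaddress` module. The addresses below are documentation examples (RFC 5737), not values from the product:

```python
import ipaddress

# Hypothetical planning-sheet values for illustration only
ip = "192.0.2.50"
netmask = "255.255.255.0"
gateway = "192.0.2.1"          # use 0.0.0.0 when there is no gateway

# IPv4Interface accepts the address/netmask form directly
iface = ipaddress.IPv4Interface(f"{ip}/{netmask}")
if gateway != "0.0.0.0":
    gw = ipaddress.IPv4Address(gateway)
    # The gateway must sit on the SSM's own subnet to be reachable
    assert gw in iface.network, "gateway is not on the SSM's subnet"
print(iface.ip, iface.network)
```

A gateway outside the SSM's subnet is a common planning-sheet mistake that this check catches before any console typing.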
Connecting and Configuring the SSM
Changing the Default Password
1. Press <Tab> to General Settings and press <Enter>. The General window opens
2. Tab to Change Password and press <Enter>. The Change Password window opens
3. Type in the new password and tab to the Password Confirm field
4. Retype the new password
5. Tab to OK and press <Enter>. The General window opens
6. Tab to Done and press <Enter>. The Management Console window opens
7. Tab to Log Out and press <Enter>. The Management Console entry window is displayed again.
8. Exit the terminal emulation session
9. Proceed to the installation of the Storage System Console
Connecting and Configuring the SSM
Storage System Console Main Window
Network View
• Displays all the SSMs on the network
Tab View
• Presents the functions associated with the selected item in the Network View
Alerts View – Not Shown
• Provides a message area where alerts arrive from the active monitoring of the SSM
Connecting and Configuring the SSM
Storage System Console Items
• Primary Volume
• Remote Volume
• Snapshot
• Cluster
• Management Group
Connecting and Configuring the SSM
Storage System Console States
• SSM is manager
• SSM is virtual manager
• SSM is logged in
• SSM is hot spare
• Virtual manager not started
• Volume schedule(s) has failed
• Snapshot is primary
• Snapshot is remote
• Remote snapshot pair
Connecting and Configuring the SSM
Storage System Console – Hierarchical Structure
Available storage modules - Are either available or part of a management group
Management Groups - Are groups of SSMs within which one or more SSMs are designated as managers
Clusters - Are sub-groupings of SSMs within a management group
Volumes - Are data storage areas created on clusters
Snapshots - Are read-only copies of volumes created at specific points in time
Connecting and Configuring the SSM
Configuration Categories
• Under Edit Configuration
– Provides detailed information on a specific module
Connecting and Configuring the SSM
Using Module Functions
• Module Information
• Power Off
• Feature Registration
• Boot Devices
• Backup & Restore
Connecting and Configuring the SSM
Using Storage Functions
Connecting and Configuring the SSM
Monitoring RAID Status
There are four RAID states
• Normal
• Rebuild
• Degraded
• Off
Connecting and Configuring the SSM
Storage - Powering Drives On/Off
• Powering On
– Select the disk
– Click OK
– Add disk to RAID, if disk is new
• Powering Off
– Select the disk
– Remove disk from RAID, if disk is Active
– Click OK
Connecting and Configuring the SSM
Using Time Functions – NTP Mode Off
Connecting and Configuring the SSM
Using Time Functions – NTP Mode On
• Preferred
– Can be contacted reliably
• Not Preferred
– Used as backup
Connecting and Configuring the SSM
Using Network Functions
Connecting and Configuring the SSM
Network - TCP Status
Connecting and Configuring the SSM
Network Interface Failover
Connecting and Configuring the SSM
Network Bonding
• NIC Aggregation and Bonding
• Failover
– Different switches
• Aggregation
– 802.3ad - same switch, combined bandwidth, NIC fault tolerance
Connecting and Configuring the SSM
Network - Jumbo Frames
• Support for jumbo frames
– Larger packets = fewer packets = lower CPU load
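The jumbo-frame claim above is back-of-the-envelope arithmetic: a larger MTU means far fewer packets (and so less per-packet CPU work) for the same transfer. The 40-byte TCP/IP header overhead and the 1-GiB transfer size below are illustrative assumptions:

```python
import math

def packets_needed(total_bytes, mtu):
    # Assumes 40 bytes of TCP/IP header per packet (illustrative)
    payload = mtu - 40
    return math.ceil(total_bytes / payload)

gib = 1024 ** 3
std = packets_needed(gib, 1500)    # standard Ethernet frames
jumbo = packets_needed(gib, 9000)  # jumbo frames
print(std, jumbo, round(std / jumbo, 1))
```

With these assumptions a 9000-byte MTU cuts the packet count by roughly a factor of six, which is where the lower CPU load comes from.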
Connecting and Configuring the SSM
SSM Administration - Users and Groups
Connecting and Configuring the SSM
Using SNMP
• Enable SNMP agent
• Set up access control
– Client IP address, or
– Resolvable host name
• Add SNMP trap recipients
Connecting and Configuring the SSM
Using the SNMP MIB
• Provides read-only access to the SSM and supports MIB-II compliant objects
• Optionally, load SSM MIB files into the system. They display
– Model number
– Serial number
– Hard disk capacity
– Network parameters
– RAID configuration
– DNS server configuration details
Connecting and Configuring the SSM
Using Reports
Connecting and Configuring the SSM
Passive Report - RAID Information
• Module Name - The aggregated IDE drives
• RAID Level – RAID 0, 1/10 or 5/50
• No. of Disks - The number of disks in the RAID device
• Build/rebuild status - % complete, time remaining
• Chunk Size - The read/write block size for the RAID device
Connecting and Configuring the SSM
Reporting - Log Files
If technical support requests that you send a copy of a log file, use the Log Files tab to save the log file as a text file.
Connecting and Configuring the SSM
Reporting - Active Monitoring
1. Choose monitoring variable
2. Choose action(s)
Connecting and Configuring the SSM
Summary
• Set up a HyperTerminal configuration to access the SSM
– Connect with a serial cable
– Assign the naming convention
– Assign IP addresses and subnet masks
– Verify network connectivity
• Use the configuration categories to configure the SSM
– Verify SSM RAID and disks
– Set date/time on the SSM
– Recognize default administrative groups and users
– Configure and use SNMP
– Set monitoring parameters and view reports
– Back up/restore the SSM configuration
Planning for Disk Management
• Disk configuration
• Disk management for RAID levels
• Recommended memory configuration
Planning for Disk Management
Disk Configuration – SSR316MJ2
• Maximum 16 drives (SATA)
• Drives populated from left to right
• 2 Intel® SRCS28X RAID controller cards in system
• Minimum configuration– 2 drives
– 512 MB of RAM
– 2 Intel RAID controller cards
Planning for Disk Management
Disk Configuration – SSR212MA
• Maximum 12 drives (SATA)
• Drives populated from left to right, top to bottom
• 2 Intel® SRCS28X RAID controller cards in system
• Minimum configuration– 2 drives
– 512 MB of RAM
– 2 Intel RAID controller cards
Planning for Disk Management
Disk Status Indication
• Drive LED states
– Solid amber - drive is OFF or faulty
– Fast blink green (~2.5 Hz) - I/O, Identify Drive state, or rebuilding stopped
– OFF - drive is idle
– Slow blink green (~1 Hz) - predicted drive fault or rebuilding
Planning for Disk Management
Replacing a Disk Drive
1. Power down the drive first, using the Storage System Console (SSC)
2. Pull the handle out fully
3. Pull the drive tray out of the SSM
4. Replace drive
5. Power on the drive using the SSC
6. Add disk to array
Warning: If you remove a drive whose light is not amber, you may lose data.
Planning for Disk Management
Disk Management
• Supported RAID levels
– RAID 0 using software
– RAID 1/10 using software
– RAID 5/50 using software and hardware
Planning for Disk Management
Disk Management for RAID 0
• RAID 0
– Minimum drive requirement = 2
– SSM storage capacity = n x Drive Capacity (where n is the total number of drives in the system)
• Disk management
– Add 1 drive at a time, from left to right, in contiguous bays
– System allows adding multiple drives in contiguous bays
Planning for Disk Management
Disk Management for RAID 1/10
• RAID 1
– Minimum drive requirement = 2
– System can lose 1 drive and still ensure data availability and accessibility
• RAID 10
– Minimum drive requirement = 4
– System can lose at most n/2 drives in an n-drive system and still ensure data availability and accessibility
– Loss of drives 1 & 2 results in data loss; loss of drives 2 & 3 would not
– Note: when n/2 disks are lost, the system will allow I/O as long as the SSM is up and running. If recycled in this state, the system will come up with RAID in the OFF state and will have to be reconfigured
• SSM storage capacity = n/2 x Drive Capacity, where n is the total number of drives in the system
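The RAID 10 failure rule above can be sketched by modeling adjacent drives as mirror pairs (1&2, 3&4, ...): data survives as long as no pair loses both members. This is an illustrative model of the rule as the slide states it, not product code; drive numbering is 1-based to match the slide:

```python
# Data on RAID 10 survives iff every mirror pair retains at least one member.

def raid10_survives(n_drives, failed):
    pairs = [(i, i + 1) for i in range(1, n_drives + 1, 2)]
    return all(not (a in failed and b in failed) for a, b in pairs)

print(raid10_survives(12, {1, 2}))  # both halves of one mirror: data lost
print(raid10_survives(12, {2, 3}))  # drives from different mirrors: survives
```

This also shows why "at most n/2 drives" is a best case: losing n/2 drives is survivable only when each lost drive comes from a different mirror pair.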
Planning for Disk Management
Disk Management for RAID 5/50
• RAID 5
– Minimum drive requirement = 4, with rotating parity
– System can lose 1 drive and still ensure data availability and accessibility
• RAID 50 (RAID 5 in hardware, RAID 0 in software)
– Minimum drive requirement = 8
– Multiple RAID 5 groups (drives 1-4, 5-8, 9-12, and 13-16)
– System can lose at most 1 drive in each RAID 5 group and still ensure data availability and accessibility
– Large drive support
• SSM storage capacity = 3/4 x n x Drive Capacity, where n is the total number of drives in the system
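The capacity formulas quoted on these RAID slides can be collected into one small sketch. The function below is illustrative, not product software; it assumes four-drive RAID 5 groups for RAID 50, as described above:

```python
# Usable capacity per RAID level, per the formulas on the slides.
# n = total drive count, drive_gb = per-drive capacity in GB.

def usable_gb(level, n, drive_gb):
    if level == "RAID 0":
        return n * drive_gb            # striping only, no redundancy
    if level == "RAID 1/10":
        return n // 2 * drive_gb       # mirrored pairs: half the raw space
    if level == "RAID 5/50":
        return 3 * n * drive_gb // 4   # one parity drive per 4-drive group
    raise ValueError(level)

for level in ("RAID 0", "RAID 1/10", "RAID 5/50"):
    print(level, usable_gb(level, 12, 250))   # fully populated SSR212MA
```

For a fully populated 12-drive system with 250-GB drives this gives 3000, 1500, and 2250 GB respectively, which makes the capacity/protection trade-off between the levels concrete.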
Planning for Disk Management
Recommended Memory Configuration
• For optimum performance - 128 MB per 250-GB drive installed
– Up to 4 drives – 512 MB RAM
– 5 to 8 drives – 1 GB RAM
– 9 to 12 drives – 1.5 GB RAM
– 13 to 16 drives – 2 GB RAM
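The guideline above amounts to roughly 128 MB of RAM per drive, rounded up to the four-drive tiers the slide lists. The helper below is an illustrative restatement of that table, not a sizing tool from the product:

```python
import math

def recommended_ram_mb(drive_count):
    # 128 MB per drive, rounded up to four-drive tiers (512 MB steps)
    return 128 * 4 * math.ceil(drive_count / 4)

print(recommended_ram_mb(4))   # 512 MB
print(recommended_ram_mb(7))   # 1024 MB (1 GB)
print(recommended_ram_mb(12))  # 1536 MB (1.5 GB)
```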
Planning for Disk Management
Summary
• The RAID configuration you choose depends upon how you plan to use the SSM
• Supported RAID levels
– RAID 0 using software
– RAID 1/10 using software
– RAID 5/50 using software and hardware
• The RAID configuration you choose for the SSM depends on your plans for data safety, data availability, and capacity growth
• Use 128 MB of memory per 250-GB disk drive
Using Management Groups
• Managers and virtual managers
• Management groups
Using Management Groups
Management Group Overview
• Is the container within which you cluster SSMs and create volumes for storage
• Can be used to
– Organize your SSMs into different functional groups (e.g., Oracle Applications)
– Set up clustering managers and virtual managers
– Provide added administrative security
– Prevent unintentional use of storage resources
– Separate sites geographically
Using Management Groups
Manager Functions
• Control data replication
• Keep track of system status
• Coordinate reconfigurations as SSMs are brought up and taken offline
• Resynchronize data when SSMs fail and recover
Management Group Tab View
Using Management Groups
Stopping Managers
• Is a normal part of removing an SSM from a management group
• Implications of stopping managers include
– Quorum may be decreased
– Fewer copies of configuration data are available
– Fault tolerance may be lost
– Data integrity and security may be compromised
Using Management Groups
Running Managers
• To achieve optimum fault tolerance in a management group, use the following guidelines for running managers
– For 1 or 2 SSMs, run 1 manager
– For 3 or 4 SSMs, run 3 managers
– For 5 or more SSMs, run 5 managers
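The guideline keeps an odd number of managers so a majority (quorum) survives failures. A minimal sketch, with the mapping taken from the bullets above (function names are illustrative):

```python
def recommended_managers(n_ssms):
    """Manager-count guideline: odd counts preserve a clear quorum majority."""
    if n_ssms <= 2:
        return 1
    if n_ssms <= 4:
        return 3
    return 5

def quorum(n_managers):
    """Majority of managers required for the group to keep operating."""
    return n_managers // 2 + 1

print(recommended_managers(4), quorum(3))  # 3 2
```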
Using Management Groups
Virtual Manager
• Is a manager that is configured in the management group, but remains unassigned to any SSM until needed to regain quorum (majority of managers)
• Supports disaster recovery scenarios by recovering a quorum, ensuring data remains accessible
• Use in the following system configurations– A management group across two sites
– A management group in a single location with two SSMs
Using Management Groups
Creating a Management Group
Under Add to New or Current Management Group
• Type the management group name
• First unit in group will be a Manager
• Dialog box will show IP addresses of all Managers
Using Management Groups
Creating a Management Group (cont.)
• Progress bar on the console tracks the creation process
• Manager is automatically started on the first SSM assigned
New management group with 1st SSM
Using Management Groups
Editing a Management Group
• Set/edit the synchronization bandwidth– Maximum rate per second that a manager will devote to
non-application processing
16384 Kb/sec = 2 MB/sec
32768 Kb/sec = 4 MB/sec
327680 Kb/sec = 40 MB/sec
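The console values convert as follows, assuming Kb means kilobits (8 bits per byte, 1024 KB per MB). A small sketch; the function name is illustrative:

```python
def kb_per_sec_to_mb_per_sec(kbits_per_sec):
    """Convert the console's Kb/sec bandwidth setting to MB/sec (kilobits -> megabytes)."""
    return kbits_per_sec / 8 / 1024

print(kb_per_sec_to_mb_per_sec(16384))   # 2.0
print(kb_per_sec_to_mb_per_sec(327680))  # 40.0
```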
Using Management Groups
Other Configuration Considerations
• Reset the management group time if you change
the time on an SSM running a manager
• Check the Disk Setup window to verify that all the disks in the SSM are active and participating in RAID before adding it to a management group
• SSMs in the management group must have static IP addresses or reserved IP addresses if using DHCP
• Deleting a management group may cause the loss of all data stored on the SSMs in that group!
Using Management Groups Summary
Management groups allow you to
• Cluster SSMs
• Create volumes for storage
• Create application groups (e.g., Oracle Apps)
• Enforce added administrative security
• Prevent unintentional use of storage resources
• Separate sites geographically
Using Clusters
• Overview
• Prerequisites
• Creating a cluster
• Using iSCSI failover
• Using a virtual IP
• Adding an SSM to a cluster
• Deleting a cluster
• Hot spare in a cluster
• Summary
Using Clusters
Cluster Overview
• Is a grouping of SSMs, within a management group, from which you create volumes
• Consider a cluster as a “storage pool” from which you carve volumes
Using Clusters
Cluster Prerequisites
• Requires a management group
• At least one manager
• All the SSMs in a cluster must be configured alike – Same RAID level
• Can contain SSMs with different capacities
– Cluster will operate at the capacity of the smallest-capacity SSM
Using Clusters
Creating a Cluster
• Cluster name
• Description (optional)
• Add from available SSMs
• Hot spare (optional) – Time out value
• iSCSI virtual IP information
Using Clusters
Using iSCSI Failover
• Configure a Virtual IP when using Microsoft* iSCSI initiator
• Allows access to volumes through other SSMs in the cluster
• Virtual IP– IP Address
– Subnet Mask
– Default Gateway
Using Clusters
Using a Virtual IP
• Required for iSCSI failover
• Configured on a per-cluster basis
• Must be unique among all SSMs on the network – it is an "extra" IP
• A single SSM in a cluster will acquire the virtual IP – it will then retain its own IP plus host the VIP
• When an SSM failure occurs, the VIP moves to the next available SSM in the cluster within 30 seconds
• Initiators must be configured to connect to the VIP for failover to work
Using Clusters
Adding an SSM to a Cluster
• Expands the storage for a cluster
• The SSM must be a member of the management group that already contains the cluster
• Capacity of the SSM should match the capacity of the other SSMs in the cluster– Adding a smaller capacity SSM reduces the capacity of
the entire cluster
Using Clusters
Deleting a Cluster
1. Log in to the Management Group that contains the cluster you want to delete
2. Select the cluster you want to delete
3. Click Delete Cluster on the Details tab. A confirmation message opens. If the message says that the cluster is in use, you must delete the snapshots and volumes on the cluster first
4. Click OK. The cluster is deleted and the SSMs return to the management group as available
Using Clusters
Hot Spare in a Cluster
• Is not used for data storage – ‘stands by’ in case an SSM in the cluster goes down
• Is automatically activated by a manager and the data starts to migrate to the new SSM if an SSM in the cluster goes down
• Only 1 per cluster
• The hot spare SSM must be equal to or greater in size than the other SSMs in the cluster
Hot Spare Exchange Cluster
A cluster must contain at least 3 SSMs to have one SSM designated as a hot spare.
Using Clusters
Summary
• Consists of a sub-grouping of SSMs within a management group
• Consider a cluster as a storage pool from which you carve volumes
• For iSCSI failover use a virtual IP
• Can use an SSM as a hot spare
Using Volumes
• Volume architecture
• Volume planning
• Volume configuration
Using Volumes
Overview
• Logical entity comprising storage on one or more SSMs; synonymous with
– LUN
– Virtual disk
– Disk target
• Volume prerequisites
– A management group must exist
– At least one cluster exists in the given management group
Using Volumes
Creating a Volume
• Type
– Primary
– Disaster recovery
• Cluster
• Replication level
– The number of copies of the data to create on SSMs in the cluster
• Replication priority
– Sets volume availability if the replication level cannot be maintained
– Note: Volumes set to Availability stay online even if replication is not possible
• Size
– Defaults to the total capacity available on the cluster
– The logical storage size of the volume
Using Volumes
Creating a Volume (cont.)
• Hard threshold
– The amount of physical space allocated for actual data storage
• Soft threshold
– The amount of space used on the volume that triggers a warning alert
• Auto Grow
– Volume grows as needed, by Auto or Manual
• Checksum
– Whether to use checksumming to verify data transmissions
Using Volumes
Planning Volume Size
• Volume size is the size of the virtual device communicated to the operating system and the applications
• Several categories
– Volumes that are smaller than the storage capacity of the cluster
– Volumes that are equal in size to the storage capacity of the cluster
– Volumes that are larger than the storage capacity of the cluster
• An assigned hard threshold cannot be exceeded
Using Volumes
Types of Volumes
Basic
• Size=Hard Threshold, no snapshots, no auto grow
Typical
• Size=Hard Threshold, snapshot schedule enabled, auto grow
Advanced
• Size>Hard Threshold, snapshot schedule enabled, auto grow
Using Volumes
Planning Data Replication
• Creates redundant copies of blocks that reside on different SSMs
• Up to 3-way replication
• Replication level can not exceed the number of SSMs in the cluster
• Recommended default – 2-way replication
Number of SSMs   Replication Levels Allowed
1                None
2                2-way or none
3 or 4           3-way, 2-way (recommended), or none
5 or more        3-way, 2-way (recommended), or none
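The table above amounts to a simple rule keyed on cluster size, since the replication level cannot exceed the number of SSMs and is capped at 3-way. A sketch (function name illustrative):

```python
def allowed_replication_levels(n_ssms):
    """Replication levels allowed for a cluster, per the table above (2-way recommended)."""
    if n_ssms <= 0:
        return []
    if n_ssms == 1:
        return ["none"]
    if n_ssms == 2:
        return ["2-way", "none"]
    return ["3-way", "2-way", "none"]   # 3 or more SSMs

print(allowed_replication_levels(2))  # ['2-way', 'none']
```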
Using Volumes
How Replication Works
The order in which SSMs are added to the cluster determines the order in which copies of data are written to the volume.
[Diagram: four blocks of data (B1–B4) written across four RAID 0 pool members of a dynamic pool at replication levels 1, 2, and 3 – each additional replication level places another copy of each block on a different SSM.]
Using Volumes
Managing Volume Growth Capacity
• Set the soft threshold value to help manage capacity growth by– Creating the volume and designating the size
– Setting the hard threshold to the same size as the actual volume size (best practice if not using snapshots)
– Setting the soft threshold lower than the hard threshold
– Increasing the volume size, as well as the hard and soft threshold, when you receive an alert
– Using Auto Grow to adjust size, when needed
Using Volumes
Auto Grow Considerations
Auto
• Increments start small, get large rapidly
• After a maximum of 10 auto-grows, the volume is full size
Manual
• Hard and soft thresholds are incremented by the same, customer-specified amount
• Increment must be at least 1% of the volume length or 1 MB, whichever is greater
• Avoid the use of tiny increments
[Diagram: auto-grow increments as fractions of the volume length – 1/64, 1/8, 1/4, 1/2, 3/4.]
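The manual-increment rule above can be checked with a one-liner. A sketch (names illustrative):

```python
def manual_increment_ok(increment_mb, volume_mb):
    """Manual auto-grow rule: increment must be >= 1% of volume length or 1 MB, whichever is greater."""
    return increment_mb >= max(volume_mb * 0.01, 1.0)

print(manual_increment_ok(2, 100))   # True  (2 MB >= max(1 MB, 1 MB))
print(manual_increment_ok(5, 1000))  # False (needs at least 10 MB)
```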
Using Volumes
Volume Considerations
• Changing a cluster– Must reside in the same management group
– Must have sufficient unallocated space for the hard threshold and replication level of the volume being moved
• Changing replication level– Must have sufficient SSMs and unallocated space to support a
new replication level
• Changing volume size
– There must be sufficient unallocated space in the cluster; if there is not, either
– Move the volume to a cluster that has enough unallocated space, or
– Add an SSM to the cluster
Using Volumes
Authentication Groups
• Are sets of application servers identified by
– Name
– IP address
– Subnet
• Can have a volume list assigned
• Are associated with a management group
• Equivalent to hosts in a router
Using Volumes
Creating an Authentication Group
• Name – case sensitive
• Description - optional
• iSCSI– Choose access level for
group
• CHAP mode
• Fibre Channel– Requires separate group
for each volume
– Choose access level for group
Using Volumes
Authentication Group Requirements
• Prerequisites
– Create one or more authentication groups at the management group level
– Create one or more clusters in that management group– Create one or more volumes in that cluster
• Requirements for associations to volumes
– Only one authentication group should have read/write access to a volume. Multiple can be assigned, but only one will have control
– Multiple authentication groups can have simultaneous read-only access
SQL_VOL
Backup_App – read/write
DB_Report – read-only
Accounting – read-only
Authentication Groups
Using Volumes
Volume Lists
• Are lists of volumes, each with its own security level
• Have one or more authentication groups assigned
• Are associated with a single management group
• Assign access to volumes and grant – No access
– Read-only access
– Read and Write access
• Equivalent to custom maps in a router
Using Volumes
Creating a Volume List
• Name
• Description (optional)
• Add volumes to the volume list
Using Volumes
Creating a Volume List (cont.)
• Set permission level
Using Volumes
Associate With an Existing Authentication Group
• Select the Authentication Groups tab
• Select authentication groups to add to this volume list
Using Volumes
Associate With an Existing Authentication Group (cont.)
Using Volumes
Deleting a Volume
• Delete a volume to remove that data from the Module and make the space available
• Delete all snapshots associated with the volume
• Stop applications from accessing the volume
• Disable the drives on the host
Warnings
– Deleting a volume removes that volume's data permanently from the SSM
– Deleting a Fibre Channel volume requires a reboot of Windows* 2000 Server and Windows* Server 2003 systems
Using Volumes
Volume Summary
• A volume is a logical entity that is made up of storage on one or more SSMs
• Volumes can be used as raw data storage or can be formatted with a file system and used by a host or file server
• You create volumes on clusters of one or more modules
• Volumes are assigned a size, hard threshold, and soft threshold
iSCSI
• iSCSI architecture
• iSCSI security
iSCSI
Overview
• End-to-end protocol for transporting storage I/O block data over an IP network
• Expands and extends SAN storage
• One network platform for LANs, NAS, SANs
• Includes – Servers (initiators)– Storage devices (targets)– Protocol transfer gateway devices– Ethernet switches– Ethernet routers
iSCSI
Ecosystem
• iSCSI chips – Adaptec, Alacritech, Intel, Siliquent, QLogic, Silverback – TCP/IP & iSCSI protocol on a chip; required for 10 Gig-E
• Software initiators – Microsoft, Cisco, HP, IBM – Microsoft and HP are single-OS only; Cisco and IBM support multiple OSs
• Hardware/firmware adapters (NICs, HBAs, TOEs) – Adaptec, QLogic, Emulex, Intel – Most of these HBAs are new and support only Windows and Linux; QLogic also supports Solaris
• Switches with iSCSI blades – McData, Cisco – Protocol conversion blades on FC switches for IP connectivity to FC SANs
• Routers or gateways – SANRAD, Cisco, McData – Protocol conversion gateways for bridging FC SANs over IP and providing IP connectivity to FC SANs
• iSCSI concentrators – Stonefly, Intransa – Used for connecting servers with iSCSI to FC or SCSI disk arrays
• iSCSI-attached storage arrays – LeftHand Networks, EMC, Network Appliance, EqualLogic – Competitive storage arrays in the market
iSCSI
Architecture
• Application servers with software or hardware iSCSI initiators
• iSCSI initiators send data to any SSM running iSCSI target software
• The target SSM writes data to all SSMs in the cluster
[Diagram: iSCSI target data map]
iSCSI
Features
• Multiple initiator access to a volume
– Target allows connections from any number of initiators
– Volume access is controlled by initiator; this can be overridden using the Challenge Handshake Authentication Protocol (CHAP)
– Can be used to host cluster servers such as Oracle* Real Application Clusters
• iSNS
iSCSI
CHAP Concepts
• Target secret
– Associated with the volume
– Target authenticates the initiator
– Required for initiator use of the volume
• Initiator secret
– Associated with the iSCSI client
– Initiator authenticates the target
– Required for target authentication
iSCSI
CHAP Secrets and Security
• A secret is essentially a password
– Target secret: system password
– Initiator secret: client authenticates the system
• Security features
– Secrets are not sent over the network
  Challenges are encoded binary strings
  Responses are binary MD5 hash values
– The secret must be known at both ends
  Security lies in the secret management policies
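The challenge/response exchange can be sketched as follows. CHAP's standard construction (RFC 1994) hashes an identifier byte, the shared secret, and the challenge, so the secret itself never crosses the wire. This illustrates the mechanism only, not the SSM's actual implementation; all names are illustrative:

```python
import hashlib
import os

def chap_response(ident, secret, challenge):
    """CHAP response: MD5 over identifier byte, shared secret, and challenge (RFC 1994)."""
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

# The target issues a random challenge; the initiator proves knowledge of the
# target secret without ever transmitting it.
challenge = os.urandom(16)
resp = chap_response(1, b"my-target-secret", challenge)
print(resp == chap_response(1, b"my-target-secret", challenge))  # True - target verifies
print(resp == chap_response(1, b"wrong-secret-xx", challenge))   # False
```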
iSCSI
Types of CHAP Authentication
• None
– No CHAP processing
– Disallowed if CHAP is configured for the target
• One-way
– Only the target authenticates the initiator
– Fails if no target secret is set
• Two-way
– Initiator and target authenticate each other
– Fails if no initiator secret is set
• No provision for the initiator authenticating the target only
iSCSI
Single Host Authentication with CHAP Not Required
iSCSI
Single Host Authentication with One-way CHAP
iSCSI
Single Host Authentication with Two-way CHAP
iSCSI
Set-up Target and Initiator Secret – SSC
• Target secrets have to be between 12 and 24 characters
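That length rule is easy to validate up front. A sketch (the function name is illustrative):

```python
def valid_target_secret(secret):
    """SSC rule above: target secrets must be between 12 and 24 characters."""
    return 12 <= len(secret) <= 24

print(valid_target_secret("short"))            # False
print(valid_target_secret("a-proper-secret"))  # True (15 characters)
```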
iSCSI
MS iSCSI Portal Set-up
iSCSI
Authentication Groups
iSCSI Authentication Groups (cont.)
iSCSI
Set-up Target Secret: MS Initiator
iSCSI
Set-up Initiator Secret: SCC
iSCSI
Set-up Initiator Secret: MS Initiator
iSCSI
iSCSI Volume Identification
iSCSI
Summary
• Is an open, standards-based protocol to access a SAN
– Used over Ethernet LANs, WANs, and SANs
• Capability to enable several layers of security– One-way CHAP
– Two-way CHAP
Using Snapshots
• Snapshot architecture
• Creating and configuring snapshots
• Scheduling snapshots
• Scripting snapshots
Using Snapshots
Overview
• Provides a fixed version of a volume for read-only access
• Snapshot Requirements– Management group
– Cluster
– Created volumes
• Snapshots are used for– Data mining or backup
– Data or file system preservation
– Protection against data or file system corruption
– File level restore without tape or backup software
Using Snapshots Snapshot Layered Architecture
• The volume/snapshot relationship can be thought of conceptually as "layers"
• Space is allocated from the storage pool to accommodate snapshots
• User_Data_ss1 is the baseline snapshot; the volume is read-only
• User_Data_ss2 contains the deltas since User_Data_ss1 was created
• User_Data contains the deltas since User_Data_ss2 was created
• The User_Data volume is the combination of User_Data + User_Data_ss2 + User_Data_ss1
[Diagram: layer stack – User_Data on top of User_Data_ss2 on top of User_Data_ss1]
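The layer stack can be modeled as a chain of copy-on-write block maps. This is a conceptual sketch of the mechanism described above, not the SSM's on-disk format; all names are illustrative:

```python
class Layer:
    """One snapshot layer: holds only the blocks written while it was current."""
    def __init__(self, below=None):
        self.blocks = {}     # block number -> data
        self.below = below   # next-older layer, or None for the baseline

    def write(self, n, data):
        self.blocks[n] = data            # changes land only in the top (live) layer

    def read(self, n):
        layer = self
        while layer is not None:         # walk down until some layer has the block
            if n in layer.blocks:
                return layer.blocks[n]
            layer = layer.below
        return None

ss1 = Layer()                            # baseline snapshot
ss1.write(0, "A"); ss1.write(1, "B"); ss1.write(2, "C")
ss2 = Layer(below=ss1)                   # deltas since ss1
ss2.write(0, "A'")
user_data = Layer(below=ss2)             # live volume: deltas since ss2
user_data.write(1, "B'")

# The visible volume is User_Data + User_Data_ss2 + User_Data_ss1
print(user_data.read(0), user_data.read(1), user_data.read(2))  # A' B' C
```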
Using Snapshots
How a Snapshot Works
[Diagram: User_Data mounted as drive U. When a snapshot is taken, a new volume is created to hold any changed blocks; the previous volume becomes the snapshot volume (User_Data_ss1) and is read-only. The original User_Data volume was composed of disk blocks A, B, C; the new User_Data volume is composed of the modified disk blocks.]
All operations are transparent to the operating system and the user community.
Using Snapshots
How a Snapshot Works (cont.)
[Diagram: User_Data mounted as drive U. A second snapshot makes the previous volume read-only (User_Data_ss2); a new User_Data volume is created to hold any changed blocks. User_Data_ss1 is composed of disk blocks A, B, C; User_Data_ss2 is composed of modified disk blocks A', D, E; the new User_Data volume is composed of subsequently modified disk blocks.]
All operations are transparent to the operating system and the user community.
Using Snapshots
Using a Snapshot
An executive comes to the IT department and says they accidentally deleted a file. The file from yesterday is the one they need.
With a snapshot, you can mount the snapshot on a server and allow the executive to access a previous version of the file.
[Diagram: User_Data mounted as drive U; snapshot User_Data_ss2 mounted as drive Z.]
Using Snapshots Writing to a Snapshot
Volume Branching
• Non-persistent changes
• Created upon client connect
• Deleted upon deleting the snapshot or deleting the writable space
[Diagram: a writable snapshot branching from the snapshot chain beneath the volume]
Using Snapshots
Single Versus Scheduled Snapshot
• Single snapshots
– Deleted when no longer needed
• Scheduled snapshots
– Series of snapshots up to a specified number
– For a specified time period; the earliest snapshot is deleted when a new one is created
Using Snapshots
How a Snapshot Works – Sizing a Snapshot
• Creates a new volume
– Holds all the changed blocks
• Allocates space in the storage pool
• Use the auto grow feature to keep snapshot volume usage close to the allocation
Original volume: Size – 100 GB, Hard threshold – 100 GB, Soft threshold – 80 GB
Snapshot volume: Size – 100 GB, Hard threshold – 5 GB, Soft threshold – 2.5 GB
Using Snapshots
Managing Capacity
• Set hard thresholds
– Volumes
– Snapshots
• Compensate for snapshot deletion
– 20% for changes
Using Snapshots
Auto-Grow Snapshots
• Used in conjunction with scheduled snapshots to manage threshold allocation more efficiently
• Hard and soft thresholds automatically grow by a specified number of megabytes
Snapshot volume: Size – 100 GB, Hard threshold – 5 GB, Soft threshold – 2.5 GB
Auto grow is set to grow the snapshot volume by 500 MB when the soft threshold is reached
After growth: Size – 100 GB, Hard threshold – 5.5 GB, Soft threshold – 3 GB
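The growth step in the example above can be sketched as (names illustrative):

```python
def auto_grow(hard_gb, soft_gb, used_gb, increment_gb=0.5):
    """Grow the hard and soft thresholds by the same increment when usage hits the soft threshold."""
    if used_gb >= soft_gb:
        hard_gb += increment_gb
        soft_gb += increment_gb
    return hard_gb, soft_gb

print(auto_grow(5.0, 2.5, 2.6))  # (5.5, 3.0)
```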
Using Snapshots
Data Mining or Backup Planning
• Applications– Source volume for data mining and backup
– Data preservation before software upgrade
• Planning– Sufficient space in cluster to create the snapshot
– Compensating for snapshot deletions
Using Snapshots
Protection Against Data Corruption
• Applications– Business continuance
– Data protection
• Planning– Sufficient space in cluster to create the snapshots
– Compensating for snapshot deletions
– Minimum hard threshold size
Using Snapshots
Creating a Snapshot
• Volume
• Snapshot name
• Description (optional)
• Hard threshold– Setting it smaller than the
size of the original volume allows you to create snapshots that require less space on the cluster
• Soft threshold
• Auto Grow
Using Snapshots
Rolling Back a Volume
• Replaces the original volume with a read/write copy of the selected snapshot
• Offers DR support where snapshot is last “known good” state of data
• Snapshot must be verified
• Requires volume name change
Using Snapshots
How a Rollback Works
[Diagram: User_Data mounted as a local drive. User_Data_ss1 (read-only) is composed of disk blocks A, B, C; User_Data_ss2 (read-only) is composed of modified disk blocks A', D, E; the current User_Data volume, created to hold any changed blocks, is composed of modified disk blocks A'', B', E', F.]
• Unplug and eject the current volume
• Select the rollback volume
• Roll back the volume twice to preserve the naming convention
Using Snapshots
Mounting Snapshots of Basic Disks
• Create an authentication group
• Associate authentication group to snapshot– Read only
• Configure client access to snapshot– Read only
• Read only volumes and snapshots appear to be read/write
Using Snapshots
Mounting Snapshots of Dynamic Disks
• Create an authentication group
• Associate the authentication group to the snapshot
– Read only
• Configure client access to the snapshot
– Read only
• Snapshot must meet one of the following
– Mounted on a separate server from the parent volume
– Parent volume must be disabled/deleted
• Import foreign disks if the volume is shown as foreign
Using Snapshots
Scheduling Snapshots
Retention Policy
Using Snapshots
Scheduling Snapshots Considerations
• Coordinate retention of snapshots with cluster capacity
• Sufficient capacity is needed in the cluster for the snapshots; otherwise
– The scheduled snapshot will not be created
– The schedule will not continue until an existing snapshot is deleted
• Observe the best-practices guidelines in the application note for your target application
Using Snapshots
Summary
• Snapshots are used for– Data mining or backup
– Data or file system preservation before upgrading software
– Protection against data or file system corruption
– File level restore without tape or backup software
• Snapshots can be– Created
– Deleted
– Edited
– Scripted
– Scheduled
Using Remote Copy
• Creating a remote snapshot
• Scheduling a remote snapshot
Using Remote Copy
Overview
• Utilizes the existing volume and snapshot features along with asynchronous replication across geographic distance to create remote snapshots
• Use Remote Copy for– Business continuance/disaster recovery
– Off-site backup and recovery
– Split mirror
– Data migration
– Content distribution
Using Remote Copy
Concepts
• Primary volume
– Accessed by application server
– Backed up with remote IP copy
• Primary snapshot
– Located on the primary volume's cluster
– Created with the remote snapshot
• Remote volume
– Pointer for the copy of the primary snapshot
– Contains no data
• Remote snapshot
– Located on the remote volume's cluster
– Copy of the primary snapshot
Using Remote Copy
Concepts (cont.)
• Remote copy pair– Primary and associated remote volume
• Failover– Transfer of application server operation from primary to
remote volume
• Failback– Restoration of primary volume– Failover volume converted back to remote volume
• Failover recovery– User choice for designating primary volume
• Synchronize– Copy of latest snapshot from primary to remote volume
Using Remote Copy
Procedure
• Create the primary snapshot for a selected volume at production location.
• Create a remote volume at the remote location and then create a remote snapshot linked to the primary snapshot.
• System copies data from the primary snapshot to the remote snapshot.
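The three-step procedure above can be sketched as a toy data flow. None of these names are real product APIs, and the real system copies block data over the IP network rather than in memory; this only illustrates the ordering of the steps.

```python
def remote_copy(primary_volume_data):
    # Step 1: snapshot the primary volume at the production location
    primary_snapshot = dict(primary_volume_data)
    # Step 2: create a remote volume/snapshot at the remote location,
    # initially empty and linked to the primary snapshot
    remote_snapshot = {}
    # Step 3: the system copies data from primary to remote snapshot
    remote_snapshot.update(primary_snapshot)
    return primary_snapshot, remote_snapshot

primary, remote = remote_copy({"block0": b"data", "block1": b"more"})
assert remote == primary   # the remote snapshot is a full copy
```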
Prerequisites
• Log into both management groups
– Primary volume
– Target cluster for the remote snapshot
• Set up the bandwidth for copying the remote snapshot
• Designate or create a remote volume
• Ensure enough space is available for the remote snapshot
Setting the Bandwidth
Designating or Creating a Remote Volume
• Make an existing volume into a remote volume
– A snapshot of all existing data is created for that volume
– All data in the volume is then deleted, so the remote volume has zero length and zero hard and soft thresholds
• Create a remote volume
Viewing Remote Snapshots
• Management group
• Primary snapshot
• Remote management group
• Remote snapshot
• Status of the copying (% complete)
Other Remote Snapshot Tasks
• Canceling a remote snapshot
– If you cancel a remote snapshot in progress, the remote snapshot is deleted but the primary snapshot remains
• Editing a remote snapshot
– Description
– Hard and soft thresholds
• Deleting a remote snapshot
• Note: When the connection between a primary and remote snapshot is lost, the copy pauses. When the connection is restored, it resumes from where it left off
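The pause-and-resume behavior described in the note can be modeled with a checkpointed copy loop. This is a generic sketch of resumable copying, not the product's actual implementation.

```python
def copy_with_resume(source, checkpoint=0, chunk=4, fail_at=None):
    """Copy `source` in chunks from `checkpoint`, returning
    (bytes_copied_this_call, new_checkpoint). If the 'connection'
    drops at offset `fail_at`, pause and report how far we got."""
    copied = bytearray()
    offset = checkpoint
    while offset < len(source):
        if fail_at is not None and offset >= fail_at:
            return bytes(copied), offset      # paused; resume later
        copied += source[offset:offset + chunk]
        offset += chunk
    return bytes(copied), offset              # copy complete

data = b"0123456789abcdef"
part, mark = copy_with_resume(data, fail_at=8)       # connection lost
rest, _ = copy_with_resume(data, checkpoint=mark)    # resumes at offset 8
assert part + rest == data
```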
Planning for Remote Snapshot Schedules
• Recurrence
– When to create snapshots
• Thresholds
– Sufficient space for remote snapshots
• Retention
– How long to retain primary and remote snapshots
• Time Required
– Keeping two remote snapshots of the primary volume is recommended
– Allow for the increased copy time this requires
Remote Snapshot Schedule Checklist
• Set snapshot schedule
– Start time
– Recurrence – yes or no
• Primary setup
– Set hard threshold
– Set soft threshold
– Retention
Set either the maximum number of snapshots or the period of time to retain them
• Remote setup
– Choose management group to contain the remote snapshot
– Choose volume for the remote snapshots
– Retention
Set either the maximum number of snapshots or the period of time to retain them
Creating a Remote Snapshot Schedule
Primary Setup
Remote Snapshot Schedule
Creating a Remote Snapshot Schedule (cont.)
Remote Setup
Summary
• Use the existing volume and snapshot features along with asynchronous replication across geographic distance to create remote snapshots
• Creating a remote copy follows a three-step process
– Create a snapshot of the primary volume
– Create a volume at the remote location for a snapshot linked to the primary snapshot
– Copy data from the primary snapshot to the remote snapshot
Planning for Disaster Recovery
• Virtual manager
• Cluster configuration
• Best practices
Virtual Manager Overview
• Minimum requirements for a fault-tolerant storage solution
– Two-way replication for data redundancy
– Management group with three configured managers
• Virtual manager is used for two configurations
– Synchronous replication between two sites
– Two SSMs in the management group
Synchronous Replication – Two Sites
(Diagram: primary site and disaster recovery site)
• Configure an even number of managers in the management group
• Configure an equal number per site
• For this example, configure:
– Four (4) managers, two in each location
– One (1) virtual manager
Two SSM Configuration
• Configure both SSMs to be managers
• Configure a virtual manager in the management group
• Quorum value for this configuration is three (3)
Quorum Value = 3 (1 Manager + 1 Manager + Virtual Manager)
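The quorum arithmetic behind these numbers can be sketched in Python. This assumes the "quorum value" is the total count of configured managers (regular plus virtual, here 3) and that a strict majority of them must be running — a common quorum design; the function name is illustrative, not part of the product.

```python
def quorum_needed(regular_managers, virtual_managers=0):
    """Strict majority of all configured managers (regular + virtual)."""
    total = regular_managers + virtual_managers
    return total // 2 + 1

# Two-SSM configuration: 2 regular managers + 1 virtual manager = 3 votes
assert quorum_needed(2, virtual_managers=1) == 2   # majority of 3
# Two-site configuration: 4 regular managers + 1 virtual manager = 5 votes
assert quorum_needed(4, virtual_managers=1) == 3   # majority of 5
```

The odd vote total is the point: with the virtual manager as a tie-breaker, one surviving site plus the virtual manager can still form a majority after the other site fails.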
When Not To Use a Virtual Manager
• Management group is located within a single site with reliable and redundant network connectivity
• Management group is spread across three (3) or more sites with reliable, redundant WAN/MAN links and no single point of failure on the network
• Using asynchronous replication between two or more sites
Virtual Manager Requirements
• Is configured in the management group but is not started on an SSM until the management group experiences a failure and a loss of quorum
• Cannot be added after quorum has been lost
• Must be started in a management group, or geographical location, that still has at least one regular manager running
Add/Delete Virtual Manager
• The Add or Delete dialog box appears, depending on whether a virtual manager has already been added
• Use virtual manager icon (black triangle) to verify it has been added to the management group
Virtual Manager icon
Verifying Virtual Manager
Start/Stop Virtual Manager
• Click the Start Virtual Manager control button
• Select the SSM to run the virtual manager on
Configuring a Cluster for Disaster Recovery
• Best practice
– Configure the cluster to have all data replicated at each site
– Configure the managers to correctly handle disaster recovery
Sample two-site configuration: Denver and Boulder
DR Cluster Configuration Scenario
• Naming SSMs
– Use host names that identify, in the Storage System Console, where each SSM is located
• Planning Managers/Virtual Manager
– Plan to start two managers per site
– Add a virtual manager to the management group
(Diagram: Denver site with SSMs Denver_1 and Denver_2; Boulder site with SSMs Boulder_1 and Boulder_2)
DR Cluster Configuration Scenario (cont.)
• Create the cluster in the desired management group
• Add the SSMs to the cluster in alternating order
– First SSM – Denver_1
– Second SSM – Boulder_1
– Third SSM – Denver_2
– Fourth SSM – Boulder_2
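The alternating order above can be produced by interleaving the per-site SSM lists. A minimal sketch using the host names from the scenario:

```python
from itertools import chain

denver = ["Denver_1", "Denver_2"]
boulder = ["Boulder_1", "Boulder_2"]

# Interleave the sites so consecutive SSMs alternate locations
cluster_order = list(chain.from_iterable(zip(denver, boulder)))
assert cluster_order == ["Denver_1", "Boulder_1", "Denver_2", "Boulder_2"]
# With 2-way replication, the two copies of each block land on adjacent
# SSMs in this order -- i.e. one copy per site.
```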
(Diagram: cluster order Denver_1, Boulder_1, Denver_2, Boulder_2 with 2-way volume replication – one copy in Denver, one copy in Boulder)
Planning for Disaster Recovery
Achieving Cost-Effective Disaster Recovery
• Using remote copy capability
– Maintain copies of data volumes at remote sites
– Set up a schedule for creating remote copies
– Access the latest remote copy of your data volumes if a site becomes unavailable
(Figure: the Fibre Channel DR challenge – a 10 km distance limit)
Best Practices for Remote Copy and DR
• Select a recurrence schedule that minimizes potential for data loss
• Use remote snapshots in conjunction with local synchronous volume replication
Sample Configuration
• Create a cluster of three SSMs, all with Managers, in the production location.
• Create volumes on the cluster and set the replication level to 3.
• Schedule remote snapshots for: recurrence = 4 hours and retention = 2 days.
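The sample schedule implies a predictable steady-state snapshot count: with a 4-hour recurrence and 2-day retention, 12 remote snapshots are on hand once the retention window fills. A quick check (the helper function is illustrative, not a product API):

```python
def snapshots_retained(recurrence_hours, retention_days):
    """Snapshots on hand once the retention window is full."""
    return retention_days * 24 // recurrence_hours

assert snapshots_retained(recurrence_hours=4, retention_days=2) == 12
```

The recurrence interval is also the worst-case data loss window, which is why the best practice above is to pick the shortest recurrence the copy bandwidth can sustain.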
Summary
• Explanation of disaster recovery
• Using a Virtual Manager
• Cluster disaster recovery planning considerations
• Achieving cost-effective disaster recovery using remote copy capabilities
Performance Tuning
• Performance factors
• Best practices
Performance Factors
The performance factors include:
– Replication level
– RAID Level
– Drive buffering
– Number of drives per Storage System Module (SSM)
– Number of SSMs per cluster
Replication Level
• One-way (none) replication for volume
– Highest performance
– No data redundancy
• Two-way replication for volume
– Optimal performance
– Data redundancy
• Three-way replication for volume
– Reduced performance
– Maximum data redundancy
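The tradeoff behind these replication levels can be made concrete: each write is performed once per replica, and usable capacity shrinks by the replication factor. This is a simplified model (it ignores RAID and metadata overheads), and the function name is illustrative.

```python
def replication_cost(raw_capacity_gb, replication_level):
    """Usable capacity and write amplification under n-way replication
    (simplified: ignores RAID and metadata overheads)."""
    usable = raw_capacity_gb / replication_level
    write_amplification = replication_level
    return usable, write_amplification

assert replication_cost(3000, 1) == (3000.0, 1)  # one-way: fastest, no redundancy
assert replication_cost(3000, 2) == (1500.0, 2)  # two-way: optimal balance
assert replication_cost(3000, 3) == (1000.0, 3)  # three-way: max redundancy
```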
Performance Configurations
– RAID 0 on each SSM, multiple SSMs, 2-way replication
– RAID 10 on each SSM, multiple SSMs, no replication
– RAID 5 on each SSM, multiple SSMs, no replication
– RAID 10 on each SSM, single SSM, no replication
– RAID 0 on each SSM, multiple SSMs, 3-way replication
– RAID 5 on each SSM, single SSM, no replication
RAID 0 with two-way replication and a minimum of two SSMs provides optimal performance and redundancy. Three SSMs is the optimal base configuration for a balance of performance and redundancy.
Drive Buffering
• Ability to enable drive buffer, or cache, within the EBSD client driver
• Take advantage of Gigabit Ethernet throughput capabilities
Number of SSMs in Cluster
• Modular approach to scaling
– Storage is added to a cluster in discrete chunks
– Each discrete chunk provides more resources
• Performance of the storage pool scales up as one adds more SSMs to a storage pool
• The SSR212MA (Mount Adams) does not scale with added SSMs for sequential workloads; random I/O workloads do scale
Other Performance Factors
• Network performance
• Multi-threaded applications
• Server hardware
• Block size
• File system characteristics
Best Practice for Replication
Strategy            Performance   Redundancy   Default
Replication Level   One-way       Three-way    Two-way
Redundancy Level    None          Maximum      Optimal
Performance Level   Maximum       Reduced      Optimal
Best Practice for Drive Buffering
• Advantages
– Ideal for write-intensive applications
– Full use of Gigabit Ethernet
– Up to 4x performance increase
• Disadvantages
– Risk of data loss
– Added infrastructure costs
Uninterruptible power supply (UPS)
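The throughput/durability tradeoff behind drive buffering can be illustrated with a toy write-coalescing model (not the EBSD driver's actual cache): buffered writes reach the "drive" in fewer, larger physical I/Os, but anything still in the buffer is lost on power failure — which is why a UPS is recommended.

```python
class BufferedWriter:
    """Toy write-coalescing model: writes accumulate in memory and are
    flushed to the 'drive' in batches of `flush_threshold`."""
    def __init__(self, flush_threshold=4):
        self.buffer = []        # volatile: lost on power failure
        self.flushed = []       # durable
        self.io_ops = 0
        self.flush_threshold = flush_threshold

    def write(self, block):
        self.buffer.append(block)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.extend(self.buffer)   # one physical I/O per batch
            self.buffer.clear()
            self.io_ops += 1

w = BufferedWriter(flush_threshold=4)
for block in range(8):
    w.write(block)
assert w.io_ops == 2   # 8 logical writes coalesced into 2 physical I/Os
```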
Performance Monitor
Summary
Performance tuning involves
• Choosing RAID level
• Deciding whether to use drive buffering
• Selecting the number of SSMs
• Selecting the right strategy
– Performance
– Redundancy
• Knowing your existing infrastructure