SAN EMC Clariion Foundations


Transcript of SAN EMC Clariion Foundations

Page 1: SAN EMC Clariion Foundations

SAN Foundations: EMC Clariion Administration

Page 2: SAN EMC Clariion Foundations

2

Course Objectives

Upon completion of this program you will be able to:
Differentiate between SAN, NAS, and DAS
Understand and describe SAN components
Administer the CLARiiON as an L1+ resource

Page 3: SAN EMC Clariion Foundations

SAN Foundations

Page 4: SAN EMC Clariion Foundations

4

Network

A network is a defined area in which hosts and clients communicate with each other.

Components of a network:

Page 5: SAN EMC Clariion Foundations

5

Local Area Network

Diagram: a local area network in which a switch / hub / router connects a testing server, an Exchange server, and five Windows clients.

Page 6: SAN EMC Clariion Foundations

RAID 0 - Striped set without parity (striping).
RAID 1 - Mirrored set without parity (mirroring).

Page 7: SAN EMC Clariion Foundations

RAID 3 - Striped set with dedicated parity (byte-level parity).
RAID 4 - Striped set with dedicated parity (block-level parity).

Page 8: SAN EMC Clariion Foundations

RAID 5 - Striped set with distributed (interleaved) parity.
RAID 6 - Striped set with dual distributed parity.
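To illustrate how parity protects a striped set, consider a stripe with three data blocks and one parity block (values shortened to four bits): if D1 = 0110, D2 = 1010, and D3 = 1100, the parity is P = D1 XOR D2 XOR D3 = 0000. If the disk holding D2 fails, its contents are rebuilt from the survivors: D2 = P XOR D1 XOR D3 = 0000 XOR 0110 XOR 1100 = 1010. RAID 6 adds a second, independently computed parity block so that any two simultaneous drive failures remain recoverable.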

Page 9: SAN EMC Clariion Foundations

Nested (hybrid) RAID

RAID 0+1: striped sets in a mirrored set (minimum four disks; even number of disks) provides fault tolerance and improved performance but increases complexity. The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror the data on the RAID system is lost.

RAID 1+0: mirrored sets in a striped set (minimum two disks but more commonly four disks to take advantage of speed benefits; even number of disks) provides fault tolerance and improved performance but increases complexity. The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. In a failed disk situation, RAID 1+0 performs better because all the remaining disks continue to be used. The array can sustain multiple drive losses so long as no mirror loses all its drives.

Page 10: SAN EMC Clariion Foundations

10

Storage Area Network

A Storage Area Network (SAN) is a network of hosts and storage arrays, typically connected over Fibre Channel fabrics.

Components of a SAN:
1. Host
2. FC cables
3. HBA (Host Bus Adapter)
4. FC switch
5. Storage array
6. FCP (Fibre Channel Protocol)

Page 11: SAN EMC Clariion Foundations

11

Fibre Channel

Fibre Channel is a gigabit-speed network technology primarily used for Storage Networking.

Fibre Channel Protocol (FCP) is the interface protocol of SCSI on the Fibre Channel.

A fabric is formed when a host is connected to a storage array via at least one FC switch.

Page 12: SAN EMC Clariion Foundations

12

SAN Fabric

Diagram: a SAN fabric. The source is the FC initiator (the host's HBA); the target is the FC responder (CLARiiON SP ports or HP EVA controller ports); the two are connected through an FC switch.

Page 13: SAN EMC Clariion Foundations

13

SAN

Switch / Hub / Router

Page 15: SAN EMC Clariion Foundations

15

HBA (Host Bus Adapter)

An HBA is the term used for the Fibre Channel interface card installed in a host.

An HBA is an I/O adapter that sits between the host computer's bus and the Fibre Channel loop, and manages the transfer of information between the two channels. HBAs also support fail-over and load balancing.

Major HBA manufacturers are Emulex, QLogic, JNI, LSI, and ATTO Technology.

Page 16: SAN EMC Clariion Foundations

16

Fibre Channel

Fibre Channel is a serial data transfer interface intended for connecting high-speed storage devices to computers.

Page 17: SAN EMC Clariion Foundations

17

World Wide Name

A World Wide Name (WWN) is a unique 64-bit address used to identify elements of a SAN.

The WWN is assigned to a Host Bus Adapter or switch port by the vendor at the time of manufacture.

A WWN is to an HBA what a MAC address is to a NIC.

Every FC manufacturer is identified by an Organizationally Unique Identifier (OUI) registered with the IEEE; SNIA is the storage industry's governing body, similar in role to the IEEE.

Page 18: SAN EMC Clariion Foundations

18

World Wide Name

Example: Emulex HBA World Wide Name: 10:00:00:00:C9:20:CD:40

Example: EMC CLARiiON SP port World Wide Name: 50:06:01:60:00:60:01:B2

Example: QLogic HBA World Wide Name: 20:00:00:20:37:E2:88:BE

Example: HP EVA controller port World Wide Name: 50:06:0B:00:00:C2:62:02
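In these examples the vendor can be read out of the address itself: after the leading NAA digits, each WWN embeds the manufacturer's IEEE-registered OUI (00:00:C9 for Emulex and 00:60:16 for EMC CLARiiON above). On a Linux host, for instance, the HBA's WWPN can typically be read from sysfs (the host number varies per system):

  # Print the WWPN of the first Fibre Channel HBA
  cat /sys/class/fc_host/host0/port_name
  0x10000000c920cd40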

Page 19: SAN EMC Clariion Foundations

19

Switches

A Fibre Channel switch is a device that routes data between host bus adapters and Fibre Channel ports on storage systems.

Examples: Brocade 4900, Cisco MDS 9020.

Page 20: SAN EMC Clariion Foundations

20

Switch Ports

The following port types are defined by Fibre Channel:

E_port: the connection between two Fibre Channel switches. Also known as an Expansion port. When E_ports between two switches form a link, that link is referred to as an Inter-Switch Link (ISL).

F_port: a fabric connection in a switched fabric topology. Also known as a Fabric port. An F_port is not loop capable.

N_port: the node connection pertaining to hosts or storage devices in a point-to-point or switched fabric topology. Also known as a Node port.

TE_port: a term used for multiple E_ports trunked together to create high bandwidth between switches. Also known as a Trunking Expansion port.

Page 21: SAN EMC Clariion Foundations

21

Directors

Directors are considered to be more highly available than switches.

Examples: Brocade 48000 Director, Cisco MDS 9509.

Page 22: SAN EMC Clariion Foundations

22

Fibre Channel SAN Switches and Directors

Switches:
Redundant fans and power supplies
High availability through redundant deployment
Departmental and data-center deployment
Lower number of ports
High performance
Web-based management features

Directors:
"Redundant everything" provides optimal serviceability and highest availability
Data-center deployment
Maximum scalability; large fabrics
Highest port count
Highest performance
Web-based and/or console-based management features

Page 23: SAN EMC Clariion Foundations

23

Storage

Storage can be internal or external:

Internal storage: internal storage consists of disks located within the host server, attached to a basic RAID controller. The disks themselves are, in most cases, the same as those used in external storage shelves, using SCSI and Fibre Channel technologies.

Page 24: SAN EMC Clariion Foundations

24

External Storage Array

External storage: the host connects to a physically separate storage cabinet or shelf. The interface is an HBA located in the host server, normally using a Fibre Channel or SCSI interface.

Examples: EMC CLARiiON, HP EVA.

Page 25: SAN EMC Clariion Foundations

25

Physical and Logical Topologies

The Fibre Channel environment consists of a physical topology and a logical topology.

Diagram: a Fibre Channel switch connects a Windows server, an Exchange server, and storage; the physical topology is the cabling from each device to the switch, while the logical topology is the resulting host-to-storage connections across it.

Page 26: SAN EMC Clariion Foundations

26

Physical Topology

SANs are scalable from two to 14 million ports in one system, with multiple topology choices such as:

Point-to-point: a dedicated, direct connection exists between two SAN devices.
Arbitrated loop: SAN devices are connected in the form of a ring.
Switched fabric: SAN devices are connected using a fabric switch. This enables a SAN device to connect and communicate with multiple SAN devices simultaneously.

Page 27: SAN EMC Clariion Foundations

27

Zoning

Partitions a Fibre Channel switched Fabric into subsets of logical devices

Zones contain a set of members that are permitted to access each other

A member can be identified by its Source ID (SID), its World Wide Name (WWN), or a combination of both

Page 28: SAN EMC Clariion Foundations

28

WWN Zoning

Diagram: a host connects through two cascaded FC switches (the fabric) to storage.
Host WWPN = 10:00:00:00:C9:20:DC:40
Switch 1 WWPN = 10:00:00:60:69:40:8E:41 (Domain ID = 21, Port = 1)
Switch 2 WWPN = 10:00:00:60:69:40:DD:A1 (Domain ID = 25, Port = 3)
Storage WWPN = 50:06:04:82:E8:91:2B:9E

WWN Zone 1 = 10:00:00:00:C9:20:DC:40; 50:06:04:82:E8:91:2B:9E

Page 29: SAN EMC Clariion Foundations

29

Port Zoning

Diagram: the same fabric as before, zoned by switch port instead of WWN.
Host WWPN = 10:00:00:00:C9:20:DC:40
Switch 1 WWPN = 10:00:00:60:69:40:8E:41 (Domain ID = 21, Port = 1)
Switch 2 WWPN = 10:00:00:60:69:40:DD:A1 (Domain ID = 25, Port = 3)
Storage WWPN = 50:06:04:82:E8:91:2B:9E

Port Zone 1 = 21,1; 25,3
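The practical difference between the two styles: a WWN zone follows the device, so the host's cable can move to another switch port without breaking the zone, but a replaced HBA (new WWPN) requires a zoning update; a port zone follows the cabling, so an HBA swap needs no change, but moving the cable to a different port breaks the zone.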

Page 30: SAN EMC Clariion Foundations

30

RAID

RAID 0 (data striping): no parity protection; least-expensive storage; applications using read-only data that require quick access, such as data downloading.
RAID 1 (mirroring between two disks): excellent availability, but expensive storage; transaction, logging, or record-keeping applications.
RAID 1/0 (data striping with mirroring): excellent availability, but expensive storage; provides the best balance of performance and availability.
RAID 3 (data striping with a dedicated parity disk).
RAID 5 (data striping with parity spread across all drives): very good availability and inexpensive storage.
Mixed RAID types are supported in the same chassis.

Page 31: SAN EMC Clariion Foundations

31

LUN

A logical unit (LUN) is a grouping of one or more disks into one span of disk storage space. A LUN looks like an individual disk to the server’s OS. It has a RAID type and properties that define it.

Diagram: a 5-disk RAID-5 group and a 4-disk RAID-1/0 group.

Page 32: SAN EMC Clariion Foundations

32

SAN

Switch / Hub / Router

Page 33: SAN EMC Clariion Foundations

33

SAN Vendors

Chart: SAN vendor landscape (source: www.byteandswitch.com).

Note: Inrange was acquired by CNT, which was in turn acquired by McData.

Page 34: SAN EMC Clariion Foundations

34

Data Storage Solutions

Direct-attached storage (DAS)
Network-attached storage (NAS)
Storage Area Network (SAN)

Page 35: SAN EMC Clariion Foundations

35

DAS – Direct Attach Storage

DAS is storage connected to a server. The storage itself can be external to the server, connected by a cable from a controller with an external port, or it can be internal to the server. Some internal storage devices add high-availability features such as redundant components.

The DAS configuration starts with a server. An HBA is installed in the server so that the server can communicate with the external storage. Disk drives are installed in a storage subsystem, and the server and storage are connected with cables.

Page 36: SAN EMC Clariion Foundations

36

NAS – Network Attached Storage

NAS is storage that resides on the LAN behind the servers. NAS storage devices require special storage cabinets providing specialized file access, security, and network connectivity.

Requires network connectivity.
Requires a network interface card (NIC) on the server to access the storage.
Provides client access at the file level using network protocols.
Does not require the server to have a SCSI HBA and cable for storage access.
Supports FAT, NTFS, and NFS file systems.

Page 37: SAN EMC Clariion Foundations

37

SAN – Storage Area Network

SAN is a high-speed network with heterogeneous (mixed-vendor or mixed-platform) servers accessing a common or shared pool of heterogeneous storage devices. SAN environments provide any-to-any communication between servers and storage resources, including multiple paths.

Page 38: SAN EMC Clariion Foundations

38

SAN Benefits

SAN benefits provide high return on investment (ROI) and reduce the total cost of ownership (TCO) by increasing performance, manageability, and scalability.

Some key benefits of SANs are:
Reduced data center rack and floor space: because you do not need to buy big servers with room for many disks, you can buy fewer, smaller servers, which take less room in the data center.
Disaster recovery capabilities: SAN devices can mirror the data on the disk to another location.
Increased I/O performance: SANs operate faster than internal drives or devices attached to a LAN.
Modular scalability: enables changes to the infrastructure as business needs evolve.
Consolidated storage: reduces the cost of storage management and improves utilization of available resources.

Page 39: SAN EMC Clariion Foundations

39

Storage Solution Comparison Table

Category | DAS | NAS | SAN
Applications | Any | File serving | Storage for application servers
Server and operating systems | General purpose | Optimized | General purpose
Storage devices | Internal or external dedicated | External direct-attached | External shared
Management | Labor intensive | Centralized | Centralized
Data centers | Workgroup or departmental | Workgroup or departmental | Small workgroup to enterprise data centers
Performance | Network traffic | Increased network performance | Higher bandwidth
Distance | None | Limited distances | Greater distances
Speed | Bottlenecks | Improved bottlenecks | Greater speeds
High availability | Limited | Limited | No-single-point-of-failure storage and data path protection
Cost | Low cost | Affordable | Higher cost, but greater benefits

Page 40: SAN EMC Clariion Foundations

40

Course Summary

Key points covered in this course:
SAN
Fibre Channel layer
World Wide Name
SAN topologies
Zoning

Page 41: SAN EMC Clariion Foundations

41

Course Summary

Key points covered in this course:
SAN topologies: FC-AL, FC-SW
Zoning: single initiator, port, WWN

Page 42: SAN EMC Clariion Foundations

CLARiiON

Page 43: SAN EMC Clariion Foundations

43

CLARiiON Foundations

CLARiiON RANGE

Page 44: SAN EMC Clariion Foundations

44

CLARiiON Foundations

Page 45: SAN EMC Clariion Foundations

45

CLARiiON Timeline

Pre-1997: SCSI CLARiiONs
1997: FC5500
1998: FC5700
1999: FC5300
2000: FC4500
2001: FC4700
2002: CX200, CX400, CX600
2003: CX300, CX500, CX700
2005: CX300i, CX500i
2006: CX3-20, CX3-40, CX3-80

Page 46: SAN EMC Clariion Foundations

46

High-End Storage: The New Definition

High-End Then:
Simple redundancy: automated fail-over
Benchmark performance (IOPS and MB/s): single and/or simple workloads
Basic local and remote data replication: disaster recovery
Scalability: capacity
Manage the storage system: easy configuration, simple operation, minimal tuning

High-End Today:
Non-disruptive everything: upgrades, operation, and service
Predictable performance in an unpredictable world: complex, dynamic workloads
Replicate any amount, any time, anywhere: replicate any amount of data, across any distance, without impact to service levels
Flexibility: capacity, performance, multi-protocol connectivity, workloads, etc.
Manage service levels: centralized management of the storage environment

Page 47: SAN EMC Clariion Foundations

47

Flexible, High Availability Design

Fully redundant architecture: power, cooling, data paths, SPS
Non-stop operation: online software upgrades, online hardware changes
Continuous diagnostics: data and system integrity, CLARalert phone home
Dual I/O paths with non-disruptive failover
Leader in data integrity: mirrored write cache, SNiiFF Verify, Background Verify per RAID group
SnapView and MirrorView replication software; SAN Copy
No single points of failure; modular architecture
Fibre Channel and ATA RAID; from 5 to 240 disks
Flexibility: individual disk RAID levels 0, 1, 1/0, 3, 5; mix drive types; mix RAID levels
Up to 16 GB of memory (8 GB per Storage Processor); configurable read and write cache size

Page 48: SAN EMC Clariion Foundations

48

CLARiiON CX Series

Sixth generation, full Fibre Channel networked storage running the FLARE operating environment
Flexible connectivity and bandwidth: up to 8 FC-AL or FC-SW host ports; 1 Gb/2 Gb Fibre Channel host connections
Scalable processing power: dual or quad processors supporting advanced storage-based functionality
Industry-leading performance and availability: 1 GB, 2 GB, 4 GB, 8 GB, or 16 GB memory and dual or quad redundant 2 Gb back-end storage connections
Cross-generational software support
Non-disruptive hardware replacement and software upgrades

Page 49: SAN EMC Clariion Foundations

49

CLARiiON Foundations

CLARiiON COMPONENTS

Page 50: SAN EMC Clariion Foundations

50

Modular Building Blocks

Disk Array Enclosure (DAE):
CX family uses the DAE2 with up to fifteen 2 Gb FC drives
FC family uses the DAE with up to ten 1 Gb FC drives
DAE2-ATA contains up to 15 ATA (Advanced Technology Attachment) drives

Disk Array Enclosure (DAE2P):
CX family only, on the 300, 500, and 700 series
Replacement for the DAE2 with a code upgrade
Houses up to fifteen 2 Gb FC drives

Disk Array Enclosure (DAE3P):
CX3 series
Replacement for the DAE2 with a code upgrade
Houses up to fifteen 2 Gb or 4 Gb FC drives

Disk Processor Enclosure (DPE):
Some CX series use the DPE2, which contains the Storage Processors and up to fifteen 2 Gb FC drives
FC family uses the DPE, which contains the Storage Processors and up to ten 1 Gb FC drives

Storage Processor Enclosure (SPE):
Contains two dual-CPU Storage Processors and no drives

Standby Power Supply (SPS):
Provides battery backup protection

Page 51: SAN EMC Clariion Foundations

51

CLARiiON ATA (Advanced Technology Attachment)

Lower $/MB for backup or bulk storage
Alternative to Fibre Channel HDAs: same software capability as FC; uses the FC interconnect
Mix FC and ATA enclosures: the first shelf must be Fibre Channel
Full HA features: dual-ported access, redundant power and LCCs, hot-swap capability

Page 52: SAN EMC Clariion Foundations

52

CX 600 Architecture

Storage Processor based architecture; modular design.

Diagram: two Storage Processors linked by the CLARiiON Messaging Interface (CMI), with a 2 Gb Fibre Channel front end to hosts and a 2 Gb Fibre Channel back end to the disk enclosures through redundant Link Control Cards (LCCs).

Page 53: SAN EMC Clariion Foundations

53

CX 3 Architecture

Up to 480 drives maximum per storage system (CX3-80).

Diagram: two UltraScale Storage Processors, each with dual CPUs, mirrored cache, and Fibre Channel front-end ports, linked by the CLARiiON Messaging Interface (CMI) over a multi-lane PCI-Express bridge; 1/2/4 Gb/s Fibre Channel front end, dual 2/4 Gb/s Fibre Channel back ends through 4 Gb LCCs, and redundant power supplies, fans, and SPS units.

Page 54: SAN EMC Clariion Foundations

54

Storage Processor Introduction

Storage Processors are configured in pairs for maximum availability.
One or two processors per Storage Processor board.
Two or four Fibre Channel front-end ports for host connectivity: 1 Gb, 2 Gb, or 4 Gb; arbitrated loop or switched fabric.
Dual-ported Fibre Channel disk drives at the back end: two or four arbitrated loop connections.
Maximum of 8 GB of memory per SP.
Write cache is mirrored between Storage Processors for availability using the CMI (CLARiiON Messaging Interface).
Write caching accelerates host writes.
Ethernet connection for management.

Diagram: a Storage Processor with four Fibre Channel ports, two CPUs, mirrored cache, a CMI link to its peer, and dual FC-AL connections to the LCCs.

Page 55: SAN EMC Clariion Foundations

55

Persistent Storage Manager (PSM)

PSM is a hidden LUN that records configuration information.
Both SPs access a single PSM, so environmental records are in sync.
If one SP receives new configuration info, that info is written to the PSM and the other SP instantaneously updates itself.
If one SP needs to be replaced, the new one can easily find the unique environmental information on the PSM.
This enables SPs to be completely field-replaceable.

Page 56: SAN EMC Clariion Foundations

56

Data Units on a Disk

A sector holds 520 bytes: 512 bytes of user data plus 8 bytes of administrative data.

Diagram: a disk's sectors (sector 0 through sector 127 shown) grouped into stripe elements; user-data elements s.0 through s.5 each carry 512 bytes of user data per sector, alongside the parity element for the stripe.

Page 57: SAN EMC Clariion Foundations

57

CLARiiON Foundations

DATA AVAILABILITY AND DATA PROTECTION

Page 58: SAN EMC Clariion Foundations

58

Mirrored Write Caching

Write cache size is user configurable and is allocated in pages

How much write cache is used by each SP is dynamically adjusted based on workload

All write requests to a given SP are copied to the other SP

Data integrity ensured through hardware failure events

CLARiiON Messaging Interface (CMI) used to communicate between SPs

SP-A SP-B

Storage System

Base Software

SP-AWrite Cache

SP-BWrite Cache

Mirror

Read Cache

Base SoftwareSP-A

Write CacheMirror

SP-BWrite Cache

Read Cache

CMI
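As a quick command-line illustration (the SP address is a placeholder), Navisphere CLI can report the current cache state and sizes; the companion setcache command changes them:

  # Report read/write cache sizes, page size, and enabled state for the array
  navicli -h <SP-IP> getcache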

Page 59: SAN EMC Clariion Foundations

59

Advanced Availability: LUN Ownership Model

Only one Storage Processor “owns” a LUN at any point in time.
Ownership is assigned when the LUN is created but can also be changed using Navisphere Manager or the CLI.
If the Storage Processor, Host Bus Adapter, cable, or any component in the I/O path fails, ownership of the LUN can be moved to the surviving SP; this process is called LUN trespassing.
CLARiiON originally used host-based ATF (Application Transparent Failover) to automate path failover; today, EMC PowerPath provides this function.
For maximum availability, careful design of the I/O path for no single point of failure is required.

Page 60: SAN EMC Clariion Foundations

60

Write Cache Protected by “Vault”

The “vault” is a reserved area found on specific protected disks At the first sign of an event which could potentially compromise

the integrity of the data in write cache, cache data is dumped to the vault area

After the data is dumped to the vault, it will be migrated to the LUNs where it belongs When power is restored data is migrated from the vault back to

cache (if an SPS has a charged battery) Other failures such as a SP or Cache failure will disable write

cache

Page 61: SAN EMC Clariion Foundations

61

Host Connectivity Redundancy: PowerPath Failover Software

Diagram: an application host with two HBAs, each cabled to a different FC switch, with paths to both SP A and SP B; PowerPath sits between the application and the HBAs and chooses the path for each request.

Host-resident program for automatic detection and management of failed paths.
A host will typically be configured with multiple paths to a LUN.
If an HBA, cable, or switch fails, PowerPath redirects I/O over a surviving path.
If a Storage Processor fails, PowerPath “trespasses” the LUN to the surviving Storage Processor and redirects I/O.
Dynamic load balancing across HBAs and the fabric (not across Storage Processors).
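On a host running PowerPath, path health can be checked from its CLI; for example (device output varies per host):

  # List every managed LUN with the state of each path (HBA to SP port)
  powermt display dev=all
  # Re-test paths and restore any previously marked dead
  powermt restore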

Page 62: SAN EMC Clariion Foundations

62

Course Summary

Key points covered in this course:
The basic architecture of a CLARiiON disk array
The architectures of the various CLARiiON models
The data protection options available on the CLARiiON Storage Processor
The relationship between CLARiiON physical disk drives and LUNs
High availability features of the CLARiiON and how they potentially impact data availability

Page 63: SAN EMC Clariion Foundations

63

CLARiiON Foundations

SOFTWARE AND MANAGEMENT ENVIRONMENT

Page 64: SAN EMC Clariion Foundations

64

FLARE Operating Environment

The FLARE operating environment is the “base software” that runs in the CLARiiON Storage Processor: I/O handling, the RAID algorithms, end-to-end data protection, and the cache implementation.
Access Logix provides the LUN masking that allows sharing of the storage system.
Navisphere middleware provides a common interface for managing the CLARiiON.
Optional CLARiiON software includes MirrorView, SnapView, and SAN Copy.
EMC ControlCenter provides end-to-end management of a CLARiiON.

Diagram: the software stack, from the CLARiiON hardware up through the FLARE operating environment and Navisphere to EMC ControlCenter and the CLARiiON-based applications.

Page 65: SAN EMC Clariion Foundations

FLARE Versions

Generation 1: CX200, CX400, CX600
Generation 2: CX300, CX500, CX700 (including the iSCSI flavors)
Generation 3: CX3-10, CX3-20, CX3-40, CX3-80
Generation 4: CX4-120, CX4-240, CX4-480, CX4-960

FLARE code numbers break down as follows:
1.14.600.5.022 (32-bit)
2.16.700.5.031 (32-bit)
2.24.700.5.031 (32-bit)
3.26.020.5.011 (32-bit)
4.28.480.5.010 (64-bit)

The first digit (1, 2, 3, or 4) indicates the generation of machine this code level can be installed on; these numbers increase as new generations of CLARiiON machines are added.
The next two digits are the release number, 28 being the latest FLARE code version.
The next three digits are the model number of the CLARiiON, such as CX600, CX700, CX3-20, or CX4-480 (on the CX4 series, these digits also match the number of drives the array can support).
The meaning of the 5 is unknown.
The last three digits are the patch level of the FLARE environment.
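Working through one of the examples above: 2.24.700.5.031 decodes as generation 2 hardware, release 24, model CX700, patch 031; likewise, 4.28.480.5.010 is a generation 4, release 28 image for the CX4-480 at patch 010.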

Page 66: SAN EMC Clariion Foundations

66

CLARiiON Management Options

There are two CLARiiON management interfaces:
CLI (Command Line Interface): navicli commands can be entered from the command line and can perform all management functions.
GUI (Graphical User Interface): Navisphere Manager is the graphical interface for all management functions on the CLARiiON array.
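For example, a first sanity check that the CLI can reach an SP (the address is a placeholder):

  # Report the array model, FLARE revision, and agent information from the SP
  navicli -h <SP-IP> getagent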

Page 67: SAN EMC Clariion Foundations

67

EMC Navisphere Management Software

Centralized management for CLARiiON storage throughout the enterprise:
Centralized management means a more effective staff
Allows users to quickly adapt to business changes
Keeps business-critical applications available

Key features:
Java-based interface with a familiar look and feel
Multiple server support
EMC ControlCenter integration
Management framework integration

Navisphere Software Suite: Navisphere Manager, Navisphere Analyzer, Navisphere CLI

Page 68: SAN EMC Clariion Foundations

68

Navisphere Manager

Discover: discovers all managed CLARiiON systems.
Monitor: shows the status of storage systems, Storage Processors, disks, snapshots, remote mirrors, and other components; centralized alerting.
Apply and provision: configure volumes and assign storage to hosts; configure snapshots and remote mirrors; set system parameters; customize views via Navisphere Organizer.
Report: provides extensive performance statistics via Navisphere Analyzer.

Page 69: SAN EMC Clariion Foundations

69

Storage Configuration and Provisioning

Understanding application and server requirements and planning the configuration is critical!

A RAID group is a collection of physical disks; the RAID protection level is assigned to all disks within the RAID group.
Binding LUNs is the creation of logical units from space within a RAID group.
Storage groups are collections of LUNs that a host or group of hosts have access to.

The workflow (a navicli sketch of these steps follows the list):
Step 0 - Planning
Step 1 - Create RAID Groups
Step 2 - Bind LUNs
Step 3 - Create Storage Groups
Step 4 - Add LUNs to Storage Groups
Step 5 - Connect Hosts to Storage Groups
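A minimal navicli sketch of steps 1 through 5 (all names, IDs, and disk positions below are hypothetical, and exact options vary by FLARE/Navisphere release):

  # Step 1 - create RAID group 10 from five disks (Bus_Enclosure_Disk notation)
  navicli -h <SP-IP> createrg 10 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
  # Step 2 - bind LUN 20 as RAID 5 in RAID group 10
  navicli -h <SP-IP> bind r5 20 -rg 10
  # Step 3 - create a storage group
  navicli -h <SP-IP> storagegroup -create -gname WebServers
  # Step 4 - add LUN 20 to the group (the host sees it as HLU 0)
  navicli -h <SP-IP> storagegroup -addhlu -gname WebServers -hlu 0 -alu 20
  # Step 5 - connect the host to the storage group (-o suppresses the prompt)
  navicli -h <SP-IP> storagegroup -connecthost -host web01 -gname WebServers -o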

Page 70: SAN EMC Clariion Foundations

70

CLARiiON RAID Options

Disk: no protection (JBOD)

RAID-0 (stripe): no protection; performance

RAID-1 (mirroring): some performance gain by splitting read operations; protection against a single disk failure; minimal performance hit during failure

Step 0 - Planning

Page 71: SAN EMC Clariion Foundations

71

CLARiiON RAID Options

RAID-1/0 (striped mirrors): the performance of stripes combined with split read operations; protection against a single disk failure; minimal performance hit in failure mode

RAID-3 (striped elements; sequential IOPS): each data element is striped across the disks, with parity kept on the last disk in the RAID group; extremely fast read access from disk; used for streaming media; parity protection against a single disk failure; performance penalty during failure

Step 0 - Planning

Page 72: SAN EMC Clariion Foundations

72

CLARiiON RAID Options

RAID-5 (striping with parity; random IOPS): the performance of striping; protection from a single disk failure; parity distributed across the member drives within the RAID group; write performance penalty; performance impact if a disk fails in the RAID group

Hot spare: takes the place of a failed disk within a RAID group; must have equal or greater capacity than the disk it replaces; can be located anywhere except on the vault disks; when the failed disk is replaced, the hot spare restores the data to the replacement disk and returns to the hot spare pool

Step 0 - Planning

Page 73: SAN EMC Clariion Foundations

73

Which RAID Level Is Right

RAID 0 (data striping): no parity protection; least-expensive storage; applications using read-only data that require quick access, such as data downloading.
RAID 1 (mirroring between two disks): excellent availability, but expensive storage; transaction, logging, or record-keeping applications.
RAID 1/0 (data striping with mirroring): excellent availability, but expensive storage; provides the best balance of performance and availability.
RAID 3 (data striping with a dedicated parity disk).
RAID 5 (data striping with parity spread across all drives): very good availability and inexpensive storage.
Mixed RAID types are supported in the same chassis.

Step 0 - Planning

Page 74: SAN EMC Clariion Foundations

74

Creating RAID Groups

RAID protection levels are set through a RAID group.
Physical disks are part of one RAID group only.
Drive types cannot be mixed in a RAID group.
A RAID group may include disks from any enclosure.
RAID types may be mixed in an array.
RAID groups may be expanded.
Users do not access RAID groups directly.

Diagram: a 5-disk RAID-5 group and a 4-disk RAID-1/0 group.

Step 1 – Create RAID Groups

Page 75: SAN EMC Clariion Foundations

75

Creating a RAID Group

Step 1 – Create RAID Groups

Page 76: SAN EMC Clariion Foundations

76

Binding a LUN

Binding is the process of building LUNs on RAID groups.

There may be up to 128 LUNs in a RAID group, and up to 2048 LUNs per CLARiiON array.
LUNs are assigned to one SP at a time: that SP owns the LUN, manages the RAID protection of the LUN, and manages access to the LUN.
The LUN uses part of each disk in the RAID group (the same sectors on each disk).

Step 2 – Bind LUNs

Page 77: SAN EMC Clariion Foundations

77

Bind Operation - Setting Parameters

Fixed bind parameters: disk numbers, RAID type, LUN number, element size; these cannot be changed without unbinding and rebinding.

Variable parameters: cache enable, rebuild time, verify time, auto assignment; these can be changed without unbinding.

Bind operation: Fastbind is the almost instantaneous bind achieved on a factory system.

Step 2 – Bind LUNs

Page 78: SAN EMC Clariion Foundations

78

Binding a LUN

Step 2 – Bind LUNs

Page 79: SAN EMC Clariion Foundations

79

LUN Properties - General

Step 2 – Bind LUNs

Page 80: SAN EMC Clariion Foundations

80

metaLUNs

A metaLUN is created by combining LUNs:
Dynamically increases LUN capacity; can be done online while host I/O is in progress.
A LUN can be expanded to create a metaLUN, and a metaLUN can be further expanded by adding additional LUNs.
Striped or concatenated; data is restriped when a striped metaLUN is created.
Appears to the host as a single LUN and is added to a storage group like any other LUN.
Can be used with MirrorView, SnapView, or SAN Copy.
Supported only on the CX family with Navisphere 6.5+.

Diagram: LUN + LUN + LUN = metaLUN.
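As a hedged illustration (the LUN numbers are hypothetical, and the metalun CLI verb is available only on releases that support metaLUNs), a striped expansion might look like:

  # Expand base LUN 20 by striping it together with LUN 21 to form a metaLUN
  navicli -h <SP-IP> metalun -expand -base 20 -lus 21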

Page 81: SAN EMC Clariion Foundations
Page 82: SAN EMC Clariion Foundations

82

Storage Groups

Storage Groups are a feature of Access Logix and are used to implement LUN masking.
Storage Groups define the LUNs each host can access: a Storage Group contains a subset of LUNs grouped for access by one or more hosts and inaccessible to other hosts. Without Storage Groups, all hosts can access all LUNs.
A Storage Group can be viewed as a “bucket” of dedicated and/or shared LUNs accessible by a server or servers.
Access Logix controls which hosts have access to a Storage Group; hosts access the array and provide their information through the Initiator Registration Records process.
Storage Group planning is required.

Step 3 – Create Storage Groups

Page 83: SAN EMC Clariion Foundations

83

Creating a Storage Group

Step 3 – Create Storage Groups

Page 84: SAN EMC Clariion Foundations

84

Storage Group Properties - LUNs

Step 4 – Add LUNs to Storage Groups

Page 85: SAN EMC Clariion Foundations

85

Storage Group Properties - Hosts

Step 5 – Connect Hosts to Storage Groups

Page 86: SAN EMC Clariion Foundations

LUN Migration

Migration moves data from one LUN to another LUN:
CX series CLARiiONs only
Any RAID type to any RAID type; FC to ATA or ATA to FC
Neither LUN may be a private LUN or a hot spare
Neither LUN may be binding, expanding, or migrating
Either or both may be metaLUNs
The destination LUN may not be in a Storage Group
The destination LUN may not be part of SnapView or MirrorView operations
The destination LUN may be larger than the source LUN
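A sketch of starting and monitoring a migration from the Navisphere CLI (the LUN numbers are hypothetical; the migrate verb exists on FLARE releases that support LUN migration):

  # Start migrating LUN 20 onto LUN 21 at medium rate, then check progress
  naviseccli -h <SP-IP> migrate -start -source 20 -dest 21 -rate medium
  naviseccli -h <SP-IP> migrate -list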

Page 87: SAN EMC Clariion Foundations

Data is copied from the source LUN to the destination LUN:
The source stays online and accepts I/O
The destination assumes the identity of the source when the copy completes: LUN ID, WWN, Storage Group membership
The source LUN is unbound after the copy completes
The migration process is non-disruptive, though there may be a performance impact
A LUN migration may be cancelled at any point; the storage system returns to its previous state

Page 88: SAN EMC Clariion Foundations

LUN Migration Limits

Page 89: SAN EMC Clariion Foundations

89

Course Summary

Key points covered in this course:
The operating environment of a CLARiiON disk array
Storage configuration and provisioning
Storage administration
Troubleshooting and diagnosing problems

Page 90: SAN EMC Clariion Foundations

90

Brocade Zoning

Page 91: SAN EMC Clariion Foundations

91

Switch Zoning

Understanding requirements and planning a resilient architecture is critical!

Alias: a recognizable name for the WWN of a host or storage port.
Zone: maps initiator and target aliases to each other.
Config: the configuration file which holds the zoning information.

The workflow:
Step 0 - Planning
Step 1 - Create Alias, Add Member
Step 2 - Create Zone, Add Member
Step 3 - Add Zones to Config Members Pane
Step 4 - Re-Check and Enable Config

Page 92: SAN EMC Clariion Foundations

92

Switch Zoning

zoneshow: displays the cfg, zone, and alias info
zonecreate: create a zone
switchshow: displays the fabric logins (FLOGIs) in the switch
alishow: displays aliases
alicreate: create an alias
configupload: upload the switch configuration to a host file
configdownload: download the switch configuration file from a host
switchdisable: disable the switch
switchenable: enable the switch
cfgclear: clear the current configuration
cfgdisable: disable a cfg
cfgenable: enable a cfg

A worked sequence using these commands follows below.
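Tying the workflow steps together, a minimal Brocade Fabric OS sequence (the alias, zone, and config names are hypothetical; cfgcreate, not listed above, is the FOS command that builds the config which cfgenable then activates):

  alicreate "Host1_HBA0", "10:00:00:00:c9:20:dc:40"
  alicreate "CX_SPA0", "50:06:04:82:e8:91:2b:9e"
  zonecreate "z_Host1_CX", "Host1_HBA0; CX_SPA0"
  cfgcreate "PROD_CFG", "z_Host1_CX"
  cfgenable "PROD_CFG"
  zoneshow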

Page 93: SAN EMC Clariion Foundations

93

BUSINESS CONTINUITY

Page 94: SAN EMC Clariion Foundations

94

Data Copy

Page 95: SAN EMC Clariion Foundations

95

Optional array-based software

Creates an instant point-in-time copy of a LUN:
Copy- and pointer-based design using the “copy-on-first-write” technique: the original contents of a data chunk are preserved only the first time that chunk is overwritten after the session starts.
Operates only on the array; no host cycles are expended.
8 snapshot sessions per LUN.

A SnapView session resides on the SP; a session can contain multiple snapshots, and one source LUN can have multiple sessions.

SnapView

Page 96: SAN EMC Clariion Foundations

96

Access to Snapshot

Diagram: the production server retains primary access to the LUN, while the snapshot presents a separate point-in-time view of the same LUN.

SnapView

Page 97: SAN EMC Clariion Foundations

97

• Access Logix required
• A separate server is used for snapshot visibility
• The snap is at the LUN level
• Make a snapshot via the Navisphere Manager GUI, NaviCLI, or admsnap

Diagram: a server accesses the source LUN while a backup host mounts the snapshot and writes it to a backup unit.

SnapView
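A rough sketch of the host-side flow (the session name and device are hypothetical, and exact arguments vary by release): the session is started against the source LUN, then admsnap on the backup host activates the point-in-time view:

  # On the production host: flush buffers and start the session
  admsnap start -s backup_session -o <device>
  # On the backup host: make the point-in-time view visible
  admsnap activate -s backup_session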

Page 98: SAN EMC Clariion Foundations

98

SnapView

Page 99: SAN EMC Clariion Foundations

99


SnapView

Page 100: SAN EMC Clariion Foundations

100

SnapView summary:
Point-in-time view.
Reduces the time that application data is unavailable to users.

Page 101: SAN EMC Clariion Foundations

101

Disaster Recovery

Page 102: SAN EMC Clariion Foundations

102

Optional array-based software

Allows array-to-array mirroring, focused on disaster recovery.
MirrorView integration: off-site backup, application testing.

MirrorView

Page 103: SAN EMC Clariion Foundations

103

Two Fibre Channel connections are required between the associated storage systems.
MirrorView runs in synchronous mode to ensure the highest data integrity: a byte-for-byte image copy.
Total number of mirrors per array: 50 primary images are supported.

MirrorView Configuration

Page 104: SAN EMC Clariion Foundations

104

• MirrorView setup: requires the MirrorView software; the secondary LUN must be the same size as the primary LUN, but can be a different RAID type.
• Navisphere provides ease of management; the GUI and CLI interfaces support all operations.

MirrorView Configuration

Page 105: SAN EMC Clariion Foundations

105

Diagram: Site A holds the production host and the primary array (Production A plus Mirror B); Site B holds a standby host and the secondary array (Mirror A plus Production B); a synchronous, bi-directional mirror links the two arrays.

Distance: 300 m directly, or up to 60 km with long-wave GBICs. Extenders are supported (DWDM: Optera 5200, CNT UltraNet, ADVA FSP 2000); IP extends the distance beyond 60 km. Check the EMC Support Matrix.

Page 106: SAN EMC Clariion Foundations

106

Navisphere View

Page 107: SAN EMC Clariion Foundations

107

MirrorView

Focus on disaster recovery

Direct or extended connection

Summary

Page 108: SAN EMC Clariion Foundations

108

Data Migration

Page 109: SAN EMC Clariion Foundations

109

Optional array-based software

Allows array-to-array data migration, focused on off-loading host traffic and on content distribution.

SAN Copy

Page 110: SAN EMC Clariion Foundations

110

SAN Copy is a storage-system-based data-mover application:
Uses the SAN to copy data between storage systems.
Data migration takes place on the SAN; the host is not involved in the copy process.
Eliminates the need to move data to and from attached hosts, reserving host processing resources for users and applications.

Off-loads traffic from the host.

Page 111: SAN EMC Clariion Foundations

111

FC over IP

Diagram: two SANs linked over IP through FC-over-IP extenders.

In today’s business environment, it is common for a company to have multiple data centers in different regions. Customers frequently need to distribute data from headquarters to regional offices and to collect data from local offices back to headquarters. Such applications are defined as content distribution and are supported by EMC SAN Copy. Web content distribution is also in this category; it involves distributing content to multiple servers on an internal or external website.

Content Distribution

Page 112: SAN EMC Clariion Foundations

112

Types of data migration

CLARiiON to CLARiiON; Symmetrix to CLARiiON; internally within a CLARiiON; Compaq StorageWorks to CLARiiON.

There are four different migration types. The most likely scenarios are CLARiiON to CLARiiON, Symmetrix to CLARiiON, and internally within a CLARiiON.

Internal migration is covered in the SAN Copy Administrator’s Guide. The Symmetrix-to-CLARiiON setup procedure is in the release notes and manual. Compaq StorageWorks migration is done only by PS; escalations go to Engineering.

Check the EMC Support Matrix or eLab Navigator for the latest supported configurations.

Page 113: SAN EMC Clariion Foundations

113

Simultaneous Sessions

Storage System Type | Maximum concurrent sessions per system | Maximum destination logical units per session
CX400 | 8 | 50
CX600 | 16 | 100
FC4700 | 16 | 100

See the latest eLab Navigator or EMC Support Matrix for information regarding newer model arrays.

SAN Copy lets you have more than one session active at the same time. The number of supported concurrent active sessions and the number of logical units per session depend on the storage system type.

Page 114: SAN EMC Clariion Foundations

114

SAN Copy Features

Concurrent copy sessions: multiple source LUNs can simultaneously transfer data to multiple destination LUNs.
Queued copy sessions: queued sessions are sessions that have been created but are not active or paused.
Create/modify copy sessions: the management tools allow full control to create and modify sessions as seen fit.
Multiple destinations: each source LUN may have multiple destinations (up to 50 per session on a CX400 and 100 per session on the CX600 and FC4700; see eLab Navigator or the EMC Support Matrix for newer model arrays).
Pause/resume/abort: control over an active session is in the hands of the administrator; a session can be paused and later resumed, or aborted before completion.
Throttle: the resources used by SAN Copy sessions can be controlled through a throttle value.
Checkpoint/restart: an admin-defined time interval lets SAN Copy resume an interrupted session from the last checkpoint, rather than having to start the session over.

Page 115: SAN EMC Clariion Foundations

115

SAN Copy Operation

The SP becomes an initiator (it looks like a host to the SAN).
The source is read-only during the copy.
Data is read from the source and written to the destination(s): SAN Copy starts n reads from the source (n = the number of buffers); when any read completes, it writes to a destination; when any write completes, it starts another read.

For SAN Copy to operate, the SP port must become an initiator and register with the non-SAN-Copy storage.

While a session is operational, the source LUN is put into read-only mode. If this is unacceptable, a snapshot, clone, or (in the case of Symmetrix) BCV can be created from the source LUN and used as the source for the SAN Copy session.

Data is read from the source and written to the destinations. SAN Copy initiates a number of reads equal to the number of buffers allocated for the session. When any read into a buffer completes, SAN Copy writes that data to the target LUN. When the write is complete and the buffer is empty, SAN Copy refills the buffer with another read.

Page 116: SAN EMC Clariion Foundations

116

SAN Copy Create Session Process Flow

Make the SAN Copy connections (not needed for a local copy).
Create the session: designate the source LUN(s) and the target LUN(s).
Set the parameters: session name and throttle value.

A SAN Copy session can be set up to copy data between two LUNs in a single array, between arrays, or between a CLARiiON array and a Symmetrix array. While there are many similarities when setting up the different session types, there are also some differences, so each session type is covered in full. The creation of a SAN Copy session involves a number of steps.

If the source and destination LUN(s) are located in different arrays, the source array must be connected to the destination array(s) as an initiator. The source and destination LUNs are easily selected; the destination must be at least as large as the source. Each session requires a unique name, and the priority of the copy traffic can be set with the throttle value.

Page 117: SAN EMC Clariion Foundations

117

Local SAN Copy

Diagram: a source LUN copied to a target LUN within the same array.

A local SAN Copy copies the data from one LUN in an array to one or more LUNs in the same array. Because this transfer is entirely self-contained, connecting and verifying connections between the SAN Copy array and a remote array need not be performed.

Page 118: SAN EMC Clariion Foundations

118

Course Summary

Key points covered in this course: the layered applications.
SnapView: data copy
MirrorView: disaster recovery
SAN Copy: data migration

Page 119: SAN EMC Clariion Foundations

119

Thank You