Fundamentals of Storage Area Networks - SAN


WZIB-2030: Fundamentals of Storage Area Networks

Module 1: Data Transport Protocols and Storage Models

This module contains the following topics. Use the course menus button to view the course table of contents.

Module Introduction Topic 1: Data Transport Protocols Topic 2: Evolution and Function of Fibre Channel Protocol Topic 3: Fibre Channel Organizations and Standards Topic 4: DAS Storage Model Topic 5: NAS Storage Model Topic 6: SAN Storage Model Review Questions Module Summary

Module Introduction

Before you can understand SANs, you need to appreciate their evolution from earlier storage models. You also need to understand the protocols used to transport data.

This module gives you an introduction to the primary protocols used to transport data to and from storage. It also introduces you to the storage models as they developed from DAS to NAS and finally to SAN. You should learn the characteristics and limitations of each storage model. Your business may have one, two, or all three of these storage models, depending on your requirements.

Although the storage models are distinctly different, they share common goals. These goals are:

Data integrity - Data is considered to be the most valuable asset of an organization. Integrity of this data is critical to any storage model.

Data availability - All storage models can be configured for high availability by using a highly available hardware and software framework that eliminates single points of failure.

Leveraging existing investments - Existing storage arrays can be incorporated into more complex storage models. This process is especially critical for large tape libraries that may be deployed within an enterprise.

Upon completion of this module, you should be able to:

Identify the differences between Small Computer System Interface (SCSI) and Fibre Channel (FC) protocols

Define the evolution and function of the FC protocol

Describe the functions of the FC standards organizations

Describe the characteristics and limitations of DAS

Describe the characteristics and limitations of NAS

Describe the characteristics and limitations of SANs

TOPIC 1. Objective

Data Transport Protocols

Historical Perspective

In the early 1970s, the paradigm for storage shifted from mainframes to open storage systems. There was a short period during which many proprietary disk systems were introduced. The industry recognized the need for a standard, so the American National Standards Institute (ANSI) formed a working group to define the new storage standard. The new standard was called the Small Computer System Interface (SCSI). SCSI was based on a parallel wire connection, which limited connection distance but offered relatively high speeds of 10 to 40 megabytes per second (Mbytes/sec).

As storage needs expanded and prices dropped on storage hardware, applications demanded more flexibility and performance. ANSI saw an opportunity to introduce a new transport that could meet storage needs into the future, and introduced the Fibre Channel (FC) specification. FC offered longer distances: 500 meters (m) with copper cables and up to 3 kilometers (km) with optical cables. It also offered greater speed, 100 Mbytes/sec, with the flexibility to increase distance and speed as new technologies emerged.

SCSI Protocol

The SCSI protocol is a method for accessing data on disk drives physically attached to a server. SCSI was initially designed to support a small number of disks attached to a single interface on the host. The SCSI protocol has matured from its original standard that supported a few low-speed devices. It has gone through several iterations that improved access speed, increased the number of devices, and defined a wider range of supported devices.

SCSI is limited by the number of devices that can be attached to one SCSI chain (up to 15). Its speed is also limited due to electrical interference and signal timing between individual wires in the copper cabling.

Fibre Channel Protocol

The FC protocol is a layered protocol that defines a set of standards for the efficient transfer of information.

The FC protocol is characterized by the following features:

Uses a synchronous serial transfer protocol

Simplifies traditional cable plants with cables using only transmit and receive

Allows extended distance between devices (kilometers, rather than meters)

Allows the connectivity of thousands and, potentially, millions of devices

The FC transport can use both fiber-optic cable and copper wire (either twisted pair or coaxial). Because copper is also a valid transport, the spelling of fiber was replaced with fibre when referring to the FC protocol, to remove the assumed association with optical technology.

TOPIC 2. Objective

Evolution and Function of Fibre Channel Protocol

Evolution of Fibre Channel

Development of FC began in 1988. The primary design goals included:

Support for multiple physical interface types (copper and fiber-optic cables)

Greater bandwidth than that offered by SCSI

Support for multiple upper-layer software protocols (ULPs) over a common physical transport layer. Upper-layer protocol support includes SCSI, IP, asynchronous transfer mode (ATM), High-Performance Parallel Interface-Fibre Protocol (HiPPI-FP), and Single Byte Command Code Set (SBCCS).

Adoption as an industry standard

Fibre Channel Frames

Although FC does not have an integrated command set, it provides a means to encapsulate other protocols (such as SCSI or IP) onto the FC carrier.

FC data packets, known as FC frames, encapsulate ULP commands from the appropriate command set (for example, SCSI commands). FC transports these frames to the correct destination without processing the encapsulated command (for example, SCSI read, SCSI write). The destination accepts the command from the frame and acts upon it.

The following diagram illustrates the elements of an FC frame.

Key elements of an FC frame are found in the frame header and the payload.

The 24-byte header includes a 24-bit destination address and a 24-bit source address for the frame. These 24-bit addresses can identify up to 16.7 million (2^24) unique addresses.

The payload of the frame contains the encapsulated ULP command or application data. The payload can be from 0 to 2112 bytes.
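To make the header layout concrete, the following minimal Python sketch pulls the 24-bit destination and source addresses out of a 24-byte header. The field offsets follow the standard FC frame header layout (R_CTL, D_ID, CS_CTL, S_ID, TYPE, and so on); the sample bytes are invented for illustration.

```python
# A minimal sketch, assuming the standard 24-byte FC frame header layout.
def parse_fc_header(header: bytes):
    assert len(header) == 24, "an FC frame header is always 24 bytes"
    r_ctl = header[0]                          # routing control
    d_id = int.from_bytes(header[1:4], "big")  # 24-bit destination address
    s_id = int.from_bytes(header[5:8], "big")  # 24-bit source address
    ulp_type = header[8]                       # encapsulated ULP (0x08 = SCSI-FCP)
    return r_ctl, d_id, s_id, ulp_type

# Invented sample header: only the fields above are populated.
hdr = bytes([0x06, 0x01, 0x02, 0x03, 0x00, 0x04, 0x05, 0x06, 0x08])
hdr += bytes(24 - len(hdr))                    # zero-fill the remaining fields
print(parse_fc_header(hdr))                    # (6, 66051, 263430, 8)

# A 24-bit address space gives 2**24 = 16,777,216 unique addresses.
print(2 ** 24)
```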

Fibre Channel Layers

The FC model is based on the Open Systems Interconnection (OSI) reference model. The OSI reference model generically defines layers that are commonly referred to as the stack. The lowest layer in the stack is the physical hardware layer. The highest layer in the stack is the application layer (that is, the end-user application running on the computing device). Intervening layers define how to move data reliably between the physical hardware layer and the application layer, in either direction.

The FC model is called a stack because, to transfer a data packet from an application running on one device to an application running on another device, data moves down the layers on one side and up the layers on the other side. The FC stack includes five layers, commonly labeled FC-0 (physical interface and media), FC-1 (transmission encoding and decoding), FC-2 (framing and flow control), FC-3 (common services), and FC-4 (ULP mapping).

Click View Demo for more information on each of the FC layers.

Fibre Channel Topologies

FC devices can be connected in a variety of ways. The connection models are known as topologies. The basic FC topologies are point-to-point, Fibre Channel Arbitrated Loop (FC-AL), and switched fabric.

The point-to-point topology provides a dedicated connection between two devices.

The FC-AL topology provides shared bandwidth among devices on a loop. Only two devices can communicate at one time, so the devices must "arbitrate" for access to the loop.

The switched fabric topology connects devices through switches.

TOPIC 3. Objective

Fibre Channel Organizations and Standards

Telecommunications Industry Association

The Telecommunications Industry Association (TIA) is the leading United States (U.S.) non-profit trade association serving the communications and information technology industry, with proven strengths in the following:

Market development

Trade shows

Domestic and international advocacy

Standards development

Enablement of e-business

Through its worldwide activities, the association facilitates business development opportunities and a competitive market environment. TIA provides a market-focused forum for its member companies, which manufacture or supply the products and services used in global communications.

Fibre Channel Organizations and Standards

Storage Networking Industry Association

As the world computer systems market embarks on the evolutionary journey called storage networking, the Storage Networking Industry Association (SNIA) is the point of cohesion for:

Developers of storage and networking products

System integrators

Application vendors

Service providers

The SNIA is uniquely committed to delivering architectures, education, and services that propel storage networking solutions into the broader market. Storage networking represents the next step of technological evolution for the networking and storage industries. It is an opportunity to fundamentally improve the effectiveness and efficiency of the storage resources employed by the Information Technology (IT) community.

http://www.snia.org

Fibre Channel Industry Association

The Fibre Channel Industry Association (FCIA) is an international organization of:

Manufacturers

Systems integrators

Developers

Systems vendors

Industry professionals

End users

FCIA is committed to delivering a broad base of FC infrastructure to support a wide array of industry applications within the mass storage and IT-based arenas. FCIA working groups focus on specific aspects of the technology, which target both vertical and horizontal markets, including:

Storage

Video

Networking

SAN management

http://www.fibrechannel.org

TOPIC 4. Objective

DAS Storage Model

Definition of DAS

One of the earliest storage models, after mainframe storage, is direct attached storage (DAS). With DAS, a storage device is directly attached to a dedicated server. DAS devices provide flexibility in managing and allocating storage to a server. External devices can be shut down and maintained without necessarily affecting the server to which they are attached. DAS devices have some intelligence, which allows them to offload some of the overhead, such as managing RAID volumes, from the server.

In the DAS model:

Storage devices are directly attached to dedicated servers.

These storage devices are referred to as direct attached storage devices, also known as DASD. Access to data is directly controlled by the host.

File systems are not readily available to other hosts unless they are NFS mounted, thereby providing fairly strong physical data security.

Application, file, and file system data can be made available to clients over local area and wide area networks by using file access and network protocols, such as Network File System (NFS) and Common Internet File System (CIFS).

Click View Example to see an example of a DAS Configuration. Notice the DAS devices attached directly to the servers.

Limitations of DAS

DAS devices present challenges to the system administrator. New tool sets are required to manage intelligent DAS boxes. Troubleshooting becomes more complex as the number of devices increases. When a server uses up the available space within an array, additional arrays can be added. However, the storage needs can increase beyond the ability of the server hardware to accommodate the added devices.

DAS has the following limitations:

File systems are not readily available to other hosts unless they are NFS mounted.

For SCSI arrays, only a limited number of disks are supported on the SCSI chain, thereby limiting the addition of new drives.

For FC arrays, large numbers of disks in the loop contribute to poor performance for lower priority devices.

Servers have limited slots available, thereby restricting the total number of disks that can be attached.

Failure of a storage device can require system downtime for repair.

TOPIC 5. Objective

NAS Storage Model

Definition of NAS

A number of storage vendors have improved upon file servers and DAS by introducing NAS devices. NAS devices plug directly into a network, and are often referred to as NAS appliances. The term appliance often refers to a computer device that can be plugged into the network and begin providing services with minimal configuration.

NAS appliances provide a level of flexibility to the system and the storage administrator. By using network protocols, such as NFS, file systems can be made available to any server or host attached to the network.

NAS devices can be added or removed from the network without directly impacting the servers attached to the network.

Storage can be centralized and shared between a number of heterogeneous servers and desktops.

Storage management requirements are reduced as storage is more centralized.

Backups can be handled efficiently because the storage is clustered in groups.

Characteristics of NAS

NAS appliances incorporate a file server and disk array storage within a single physical unit. The file server integrated into the NAS appliance, which is generally available as a LAN-attached device, usually runs a cut-down or thin operating system (OS). This OS is tuned specifically for the purpose of file management and logical volume management.

As with the DAS model, application, file, and file system data are made available to clients over local area and wide area networks using file access and network protocols, such as NFS and CIFS. Access to data is limited to LAN speeds, and availability of data is limited to the availability of the LAN/WAN.

The NAS model:

Is a file-centric model. All transfers must be at the file or record level, rather than at the block or track level.

Makes a storage array a network addressable device.

Treats NAS devices as modules that can be attached to and removed from the network with minimum disruption to network activity or other network attached devices.

One industry trend is to replace several smaller file servers, which use DAS, with one or more larger NAS appliances. The larger NAS appliances use redundant components, such as redundant power and logical volume RAID levels.

Click View Example to see a typical NAS model. The figure shows a NAS appliance that has been used to provide access to all of the clients in the LAN.

Limitations of NAS

The NAS model is limited by network bandwidth issues. Each network packet contains headers and trailers that must be managed individually by the LAN. File access protocols, such as NFS, add further overhead.

LAN/WAN technology was never designed as a network for the transport of sustained, sequential, high bandwidth I/O that the current storage environment often demands.

TOPIC 6. Objective

SAN Storage Model

Definition of SAN

A SAN is a dedicated network for the attachment and management of storage devices and for the movement of data between those storage devices. The storage is accessed through interconnecting devices called hubs or switches. While most SANs use an FC transport, other mechanisms, such as iSCSI, can also be used.

Storage that is directly attached to a server using fiber optic cables is not a SAN, even when it uses FC transport. A more complex SAN configuration could include DAS, NAS, and FC-attached storage devices. The overall environment is known as a SAN.

Some additional definitions:

The Storage Networking Industry Association (SNIA) technical dictionary defines a SAN as follows:

"A network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. Abbreviated SAN. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file access services. A storage system consisting of storage elements, storage devices, computer systems, and/or appliances, plus all control software, communicating over a network."

In Designing Storage Area Networks, Tom Clark offers the following definition:

"Storage area networks; a network linking servers or workstations to disk arrays, tape-backup subsystems, and other devices, typically over FC"

Note: Although a SAN storage network is typically implemented by using FC technology, general definitions of SAN do not mandate the use of FC. For example, an Ethernet network with the primary (or dedicated) function of providing storage services, could be considered a SAN. When discussing a SAN that is implemented using FC technology, the SAN is usually referred to as an FC SAN.

According to the first definition, SANs are generally considered to be device-centric, as opposed to file-centric. Data is written directly to a device rather than to a file system. This reduces the overhead and increases efficiency.
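The device-centric versus file-centric distinction can be shown in code. In the following hedged sketch, the NAS-style path asks a (possibly remote) file system for a named file, while the SAN-style path addresses numbered blocks on the device itself. The paths /mnt/nas/report.txt and /dev/sdb and the 512-byte block size are assumptions for illustration only.

```python
import os

# File-centric (NAS-style): name a file; the file server's file system
# locates the underlying blocks on the client's behalf.
with open("/mnt/nas/report.txt", "rb") as f:   # assumed NFS/CIFS mount point
    data = f.read(4096)

# Device-centric (SAN-style): address blocks on the device directly;
# any file system logic runs on this host.
fd = os.open("/dev/sdb", os.O_RDONLY)          # assumed SAN-attached block device
os.lseek(fd, 2048 * 512, os.SEEK_SET)          # seek to logical block 2048
block = os.read(fd, 512)                       # read one 512-byte block
os.close(fd)
```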

Characteristics of a SAN

A SAN:

Is a network dedicated to storage needs that uses serial transport protocols

Is scalable through the addition of new components

Uses pooled storage that can, potentially, be accessed by any host on the SAN

Does not increase traffic on the LAN or WAN

The single most important feature of the SAN model is the replacement of DAS storage configurations with a dedicated storage network that can share storage resources. This network makes use of transport protocols that are optimized for data movement and data access. Storage resources are not directly attached to any one host. All of the benefits and advantages of a SAN evolve from this one feature.

The most common data transport mechanism used within a SAN is FC. FC is a serial transport protocol: the physical cabling mechanism uses just two lines, one for data transmit and one for data receive. This serial transport mechanism replaces the more traditional SCSI transport, which is a parallel transport mechanism limited by length and connections.

Click View Example to see an example of the SAN model (sometimes referred to as networking behind the server).

Advantages of SAN

SANs have the potential to solve many problems encountered in both storage device management and data management. SANs have the advantage of combining the existing investment in storage devices as well as incorporating newer storage strategies as they evolve.

SANs lend themselves to storage consolidation efforts, thereby eliminating poorly utilized pools of storage.

A SAN is a highly available, redundant network storage infrastructure that seeks to eliminate single points of failure.

SANs can be used to eliminate distance barriers of other storage models. NAS is attached to the same LAN/WAN backbone. DAS is attached directly to the server.

Performance can be managed more effectively in a SAN through the use of multiple routes between the application servers and their data.

Limitations of SAN

The development of the FC SAN has revolutionized the storage industry and greatly improved the availability and accessibility of data for Enterprise IT resources. It has also brought new challenges.

Interoperability between hardware vendors is problematic. As a result, many SAN installations are still single vendor. Organizations like SNIA are working to help alleviate the management problem and hone the standard to reduce limitations on interoperability.

Troubleshooting failures in a SAN requires a high level of expertise. SAN administrators must deal with a wide variety of servers, arrays, and volume managers in order to properly diagnose and correct errors and performance concerns.

Management of a SAN introduces additional complexity. Few products exist that can present a single picture of the SAN and allow all devices to be monitored and managed. This process typically requires the administrator to be familiar with many different configuration and management software tools. Organizations like the Distributed Management Task Force (DMTF) and SNIA are working to solve this industry-wide problem.

Health monitoring tools are needed to predict or notify the administrator of emerging problems. Many of the underlying metrics to support health monitoring are still being developed by equipment manufacturers.

Historical Perspective

Storage networks that used DAS and NAS devices resulted from the desire to maintain legacy technologies, such as SCSI, and to extract every dollar of value from the more expensive, older technologies. Standards and practices changed with the availability of FC technology, but the computer room hardware did not always change.

The success of FC presented some new problems:

Disk arrays were getting larger, containing hundreds of disks, with lots of data.

Backups were becoming more difficult to accomplish during the nighttime window due to the surge in storage capacity.

Customers wanted to attach more arrays to servers than the servers were designed to support (servers have a limited number of slots for interface cards).

Customers wanted to start sharing their large storage arrays among different servers.

The logical solution was to eliminate direct attached storage and share the storage over a network. The birth of storage area networks (SANs) provided the answer to these demands.

Review Questions - Data Transport Protocols and Storage Models

Congratulations! You have completed this module.

The following review has multiple-choice questions that are designed to both check your understanding and enhance what you learned, by reinforcing important module concepts.

If you need to review the question topic, click the Review icon available at the top corner of some question pages. With every response, you should receive feedback in the area at the bottom of the review question screen.

Click the forward arrow button to begin the review.

Which data transport protocol is limited by the number of devices that can be attached?

SCSI

FC

DAS devices are readily available to other hosts.

True

False

NAS devices can be added to networks without downtime on the application servers.

True

False

A correctly configured SAN improves availability by eliminating single points of failure.

True

False

Which storage model includes storage directly attached to dedicated servers?

DAS

NAS

SAN

Which storage model is a file centric model?

DAS

NAS

SAN

Which storage model requires a high level of expertise to troubleshoot?

DAS

NAS

SAN

Identify the technology illustrated by this storage topology diagram.

DAS

NAS

SAN

Identify the technology illustrated by this storage topology diagram.

DAS

NAS

SAN

Module Summary

This module provided an overview of the primary data transport protocols used in data storage. It also introduced the three primary storage models included in a storage design: DAS, NAS, and SAN. The main goal of this module was to give you an understanding of the evolution of the SAN model from earlier storage models.

Now that you have completed this module, you should be able to:

Identify the differences between SCSI and FC protocols

Define the evolution and function of the FC protocol

Describe the functions of the FC standards organizations

Describe the characteristics and limitations of DAS

Describe the characteristics and limitations of NAS

Describe the characteristics and limitations of SANs

Module 2: Business Issues Addressed by a SAN

This module contains the following topics. Use the course menus button to view the course table of contents.

Module Introduction Topic 1: IT Infrastructure Return on Investment Maximized Topic 2: SAN Support of Backup Solutions Topic 3: SAN Support of Business Continuity Review Questions Module Summary

Module Introduction

SANs have the potential to solve many problems businesses encounter in both storage device management and data management. This module covers the business issues addressed by a SAN.

Upon completion of this module, you should be able to:

Identify how the return on IT infrastructure investments can be maximized in a SAN environment

Identify how a SAN supports backup solutions

Identify how a SAN supports business continuity

TOPIC 1. Objective

IT Infrastructure Return on Investment Maximized

Storage Consolidation

Storage consolidation refers to the ability to efficiently use a large pool of storage over many, possibly heterogeneous, hosts. Industry demand for storage consolidation is increasing, driven primarily by recent increases in available storage densities.

Storage consolidation is sometimes referred to as storage pooling, which allows the aggregation of storage resources into a single and logically versatile storage pool.

Management issues which must be considered with storage consolidation include:

Ensuring that storage resources, such as disks and tapes, are only seen by those server resources that should have access to them

Understanding, controlling, and managing I/O rates that are issued to a single array that has been consolidated from several DAS arrays

Click View Demo to see how storage can be consolidated.

Heterogeneous Connectivity

Heterogeneous connectivity refers to the ability to attach host processors and storage devices from several different vendors to the same SAN. For example, several different OS's (such as Solaris OS, AIX, and Microsoft Windows NT) can potentially share the same storage array.

When different OS's share the same storage array, it is very important to manage data access so that different hosts do not have access to the data owned by the other hosts. Zoning is one technique that can be used to manage the isolation of heterogeneous hosts.

Click View Demo to see how zoning can be used to manage data access across heterogeneous hosts.

Data Sharing

Data sharing refers to the access to a single shared data set by multiple hosts. Although the concept of data sharing might seem straightforward at first, it is a complex subject.

In the case of heterogeneous hosts sharing the same data set, imagine the potential technical complexity of translating data formats between different flavors of OS's. Although it is generally more straightforward for homogenous hosts to share the same data set, this process can still be problematic and therefore needs to be carefully designed and managed.

It might seem straightforward to allow two homogenous hosts to access a single shared data set, with one host having read/write access and the other host having read-only access. However, most file systems use some form of caching to improve performance. Writes issued from the host with read/write access modify file system data that might be cached in memory. This cached data is not visible to the host that has read-only access.

Even if you use techniques to bypass the file system buffer cache, metadata is still cached. Metadata refers to data about data; in other words, data relating to file system structure, such as file sizes, access times, modification times, and so on. To share file systems between homogenous hosts, it is necessary to manage data access.
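The following toy model (invented for illustration, not a real file system) shows why cached metadata causes the problem described above: each host keeps a private in-memory copy, so a write by the read/write host is invisible to the read-only host until some coordination mechanism invalidates the cache.

```python
class CachedMetadata:
    """Toy per-host metadata cache; all names here are illustrative."""
    def __init__(self, shared_store):
        self.store = shared_store          # the shared on-disk metadata
        self.cache = dict(shared_store)    # this host's private cached copy

    def read(self, key):
        return self.cache[key]             # served from cache, never re-read

    def write(self, key, value):
        self.cache[key] = value            # update this host's cache...
        self.store[key] = value            # ...and the shared on-disk copy

disk = {"report.txt_size": 1024}
writer = CachedMetadata(disk)              # host with read/write access
reader = CachedMetadata(disk)              # host with read-only access

writer.write("report.txt_size", 2048)      # the file grows on disk
print(reader.read("report.txt_size"))      # 1024: the reader sees stale metadata
```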

Massive Scalability

The implementation of a SAN reduces the limitations imposed by the number of slots in the servers and the number of interconnect devices. This process results in a more scalable architecture. You can increase storage without adding servers. If you take this concept to its extreme, you can imagine one server that can access any storage device in the SAN.

Although SANs do support massive scalability, you still need to understand and manage multiple connections from servers to the SAN for reasons of redundancy, performance, and maintenance.

Click View Demo to see how storage can be scaled in a SAN.

TOPIC 2. Objective

SAN Support of Backup Solutions

LAN-Free Backup

LAN-free backup is sometimes referred to as LAN-less backup. The single most important feature of the SAN model is the replacement of dedicated DAS storage, and dedicated file servers, with a dedicated storage network. This storage network has transport protocols optimized for data movement and data access and storage resources not directly attached to any one host.

The benefits of the LAN-free backup come from the fact that the SAN does not use LAN resources for the transport of backup data. Instead, you use the dedicated storage network. The use of a dedicated storage network eliminates the limitations of the more traditional model, in which tape devices are directly attached to individual server resources. In the traditional model, backup operations are limited to server-attached tapes or transport of backup data over the LAN or WAN.

The traditional model also drives the need to perform backups during quiet periods of LAN activity (for example, 1 A.M. to 5 A.M.). This available backup window might not be wide enough to perform full or even incremental backups. The contention between the preferred backup window and the time needed for the backup process is removed to a great extent in a SAN model.

Click View Image to see an illustration of a LAN-free backup configuration.

Server-Free Backup

Server-free backup is sometimes referred to as server-less backup. This term can be somewhat confusing because backups must, essentially, involve a server at some point. The server-free backup uses a backup data mover that is able to copy data directly from storage device to storage device. This data mover can typically reside on Fibre Channel switches or Fibre Channel-to-SCSI bridges.

The backup data mover is a processing system that recognizes file, file system, and application data semantics, as well as backup policies. For example, on Friday evenings you back up all files that have been modified in the last 24 hours, and on Saturday mornings you perform a full backup.

You still require a backup server to identify what data files or blocks need to be archived. This backup server provides the list of blocks to the data mover. The data mover copies data blocks directly from disk storage to the backup tape device. The backup data is not processed through the I/O stack of a host or backup server.
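A minimal sketch of that flow appears below. The Disk and Tape classes and the block-list format are invented stand-ins, not a real data-mover API; the point is that the copy loop touches only the storage devices, never a host's I/O stack.

```python
class Disk:
    """Stand-in for disk storage addressed by logical block number."""
    def __init__(self, blocks):
        self.blocks = blocks                 # block number -> data

    def read_block(self, lba):
        return self.blocks[lba]

class Tape:
    """Stand-in for a sequential tape device."""
    def __init__(self):
        self.records = []

    def write_record(self, data):
        self.records.append(data)

def data_mover(block_list, disk, tape):
    # The backup server decided *what* to archive; the data mover only
    # copies the listed blocks straight from disk to tape.
    for lba in block_list:
        tape.write_record(disk.read_block(lba))

disk = Disk({0: b"boot", 7: b"payroll", 9: b"ledger"})
tape = Tape()
data_mover([7, 9], disk, tape)               # block list from the backup server
print(tape.records)                          # [b'payroll', b'ledger']
```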

Click View Demo to see how data can be backed up without going through a server.

TOPIC 3. Objective

SAN Support of Business Continuity

Configurations Over Extended Distances

Some regulatory agencies require their constituents, such as banks and stock exchanges, to implement business continuity plans that include remote data sites and remote mirroring of data flow to ensure no loss of service. Many other companies do so for their own protection. Keeping copies of data at sites that are remote from servers and from each other is important for:

Disaster recovery - Ability to reconstruct the data over an acceptable time interval during which the business cannot conduct its affairs.

Business continuity - Ability to continue to operate after an outage occurs by switching processing sites quickly and efficiently, making the operation of the business appear seamless in spite of the outage.

A configuration over an extended distance, known as a campus or short-haul metropolitan distance, generally spans several kilometers and is made possible by FC technology. The technical challenges that FC vendors address in such configurations include:

Signal integrity over extended distance

Signal latency over extended distance

Troubleshooting communication problems over extended distance

The use of FC technology, optical fiber cable, and the high coherence of a laser light source enables engineers to maintain signal integrity over extended distances.

Click View Image to see an illustration of an extended distance configuration. This configuration supports disaster recovery, but not business continuity.

Server Clustering for High Availability

The SAN model implements a network topology for storage. This model enables the highly available configurations that are so characteristic of networking technology.

Vendors of FC disks implement dual-ported drives. These drives have two interfaces through which you can read and write data. If one interface fails, you should still be able to access the data through the remaining interface. This process illustrates the redundant connections through the dual-ported drive interfaces.

Although there is no physical single point of failure in this configuration, it still needs to be carefully managed through a software framework, such as Sun Cluster hardware and software. Such frameworks:

Implement logical volume RAID levels on the storage

Manage multiple TCP/IP network interfaces for client connections to the servers

Automatically move logical data volumes from the control of a host that might have a hardware fault to the control of a healthy host.

Click View Demo to see an example of a generic high availability configuration. This demonstration shows redundant servers, switches, and cable connections to dual-ported storage. There should be no single point of failure in such a configuration.

Review Questions - Business Issues Addressed by a SAN

Congratulations! You have completed this module.

The following review has multiple-choice questions that are designed to both check your understanding and enhance what you learned, by reinforcing important module concepts.

If you need to review the question topic, click the Review icon available at the top corner of some question pages. With every response, you should receive feedback in the area at the bottom of the review question screen.

Storage consolidation refers to grouping arrays together into pools of storage that can be centrally managed.

True

False

A SAN can support only a single OS across the servers attached to it.

True

False

When a server is attached to a SAN, only one storage device can be accessed through that server.

True

False

Which of the following business functions can a SAN support?

Disaster recovery

Business continuance

Reduced Voice Costs

Module Summary

The return on investment in IT infrastructure can be maximized in a SAN environment by using storage consolidation, heterogeneous connectivity, data sharing, and massive scalability.

A SAN supports backup solutions in the areas of LAN-free and server-free backups. Business needs drive the requirement to support backup and recovery strategies in the SAN environment.

Business continuity has been a driving force in the incorporation of configurations over extended distances. The clustering of servers for high availability in recent years has brought key technical components into the SAN environment.

Now that you have completed this module, you should be able to:

Identify how the return on IT infrastructure investments can be maximized in a SAN environment

Identify how a SAN supports backup solutions

Identify how a SAN supports business continuity

Multipathing

The SAN supports multipathing for fast, redundant access to critical data located on high capacity arrays. This module defines multipathing and identifies the business needs supported by multipathing. It also describes the features and technical benefits of multipathing.

Upon completion of this module, you should be able to define multipathing and identify its features and technical benefits.

TOPIC 1. Objective

Features and Technical Benefits of Multipathing

Need for Redundant Paths

Redundant paths to components reduce the possibility of data loss or loss of access to data. In the event of a single component failure, system administrators can perform maintenance with the OS running and the data accessible through the remaining active paths. This can prevent system downtime, which improves availability.

Click View Demo to see an example of how multipathing improves availability.

Features of Multipathing

With multipathing, you can attach a dual-ported DAS device to multiple ports on a server. Multipathing is an improvement over SCSI-attached devices in the FC environment. This improvement is due to the multipathed FC environment's ability to:

Provide a failover or redundant path to the DAS device to overcome potential hardware failures.

Improve performance and throughput by using dual active paths to the DAS device.

Seamlessly integrate into the OS and driver stack. This process allows automatic failover and path selection to occur without having to modify the applications that use the DAS resources.

Benefits of Multipathing

Multipathing supports high availability and concurrent maintenance to improve system performance. By having multiple paths connected directly to a single array, individual component failures do not prevent access to critical data.

You can also use redundant paths to gain greater throughput to an array. The ability to spread I/O transactions over more than one path means that the data can be delivered more effectively to the array. This results in greatly improved application and system performance, allowing IT managers to meet increasing numbers of Service Level Agreements (SLAs) with their customers.

Load balancing also benefits from the ability to access a disk array over two paths at the same time: with two paths, you can have two transactions in progress simultaneously.

Click View Demo to see an example of how multipathing can balance loads at the server.
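In place of the demo, here is a minimal round-robin sketch of active/active load balancing with failover. The path names are invented, and real multipathing drivers such as SSTM work inside the kernel rather than in application code; this only models the selection logic.

```python
import itertools

class MultipathDevice:
    """Toy model of round-robin path selection across active paths."""
    def __init__(self, paths):
        self.paths = list(paths)
        self._rr = itertools.cycle(self.paths)

    def fail_path(self, path):
        self.paths.remove(path)                  # drop the failed path...
        self._rr = itertools.cycle(self.paths)   # ...and continue on the rest

    def submit_io(self, request):
        return f"{request} via {next(self._rr)}"  # alternate active paths

dev = MultipathDevice(["hba0:port1", "hba1:port2"])
for i in range(4):
    print(dev.submit_io(f"write {i}"))           # alternates hba0, hba1, hba0, hba1

dev.fail_path("hba0:port1")                      # single component failure
print(dev.submit_io("write 4"))                  # I/O continues via hba1:port2
```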

Review Questions - Multipathing

Congratulations! You have completed this module.

The following review has multiple-choice questions that are designed to both check your understanding and enhance what you learned, by reinforcing important module concepts.

If you need to review the question topic, click the Review icon available at the top corner of some question pages. With every response, you should receive feedback in the area at the bottom of the review question screen.


Multipathing reduces the impact of single component failures.

True

False


Multipathing can improve performance through load balancing.

True

False


All storage devices support multipathing.

True

False


Multipathing is supported in a SAN.

True

False

Module Summary

When a SAN is multipathed, the multiple paths to an array increase the availability of that array. In the event of the failure of one of the redundant paths to a device, the device is still available through the other path or paths. Multipathing also supports concurrent maintenance because there is still an open path to the device. Multipathing provides the added benefit of balancing loads to improve performance.

Now that you have completed this module, you should be able to define multipathing and identify its features and technical benefits.

Course Summary

This course provided you with the fundamental knowledge to begin understanding the evolution of the storage models found in a SAN environment.

You were introduced to different storage models and their key features, benefits, and limitations. You were also shown the business issues addressed by a SAN. You were given an overview of the features and functionality you can expect from a multipathed SAN.

For more information about additional Sun Educational Services courses, visit the SunTM Web Learning Center web site at http://suned.sun.com/WLC/registration.

Thank you for taking this Sun Educational Services Web-based course. We trust your time was both productive and enjoyable. Sun dedicates many resources to providing quality training to our customers. In order to continue to provide exceptional training experiences, we need your help.

Please take a few minutes to complete the survey to inform us of your experience with the Sun Web Learning Center. Click the Go icon to begin.

SUN RESPECT(S) YOUR DESIRE FOR PRIVACY AND WILL USE ANY PERSONAL INFORMATION CONNECTED TO THIS SURVEY FOR PURPOSES OF EVALUATING SUN'S ONLINE COURSES AND POSSIBLY TO CONTACT YOU REGARDING YOUR SATISFACTION. FOR MORE INFORMATION, SEE THE SUN PRIVACY POLICY AT HTTP://WWW.SUN.COM/PRIVACY/.

WE-2032: Storage Area Network (SAN) Components

Module 1: SAN Server Hardware and Software Module 2: SAN Storage Module 3: Switches and Hubs Module 4: Other Interconnect Devices Module 5: Port Types Module 6: Exercises Course Summary

Module 1: SAN Server Hardware and Software

This module contains the following topics. Use the course menus button to view the course table of contents.

Module Introduction Topic 1: Typical SAN Components Topic 2: SAN Servers Topic 3: SAN Device Firmware and Drivers Topic 4: Device Drivers Topic 5: Multipathing and the Leadville Driver Review Questions Module Summary

Module Introduction

This module introduces the primary hardware and software components that are implemented in a SAN environment. It also provides detailed information on server hardware and software.

Upon completion of this module, you should be able to:

Identify the components of a typical SAN

Define the role of servers in a SAN

Define the role of device firmware in a SAN

Define the role of device drivers in a SAN

Define the role of the Leadville driver to support multipathing

TOPIC 1. Objective

Typical SAN Components

Key Features of the Fibre Channel (FC) SAN

Some of the key features of a Fibre Channel (FC) SAN include:

Dedicated FC serial storage network (replacing the legacy parallel Small Computer Systems Interface (SCSI)).

New FC components, that is, FC switches and FC-to-SCSI bridges.

Replacement of dedicated DAS storage and dedicated file servers with a storage pool that can, potentially, be accessed by any host on the SAN.

No reliance on traditional LAN/WAN protocols for the movement of data.

The most common data transport protocol within a SAN is FC. FC is a serial transport protocol. The physical cabling mechanism uses just two lines, one for data transmit and one for data receive. This serial transport mechanism replaces the more traditional SCSI transport, which is a parallel transport.

Typical SAN Components

Components in SAN Environments

SAN environments include a mixture of hardware and software components. These components are implemented to meet a common goal: accessing the needed resources in a timely manner. The core component of an FC SAN is the FC switch, as shown in the following diagram.

Hardware Components

A typical FC SAN can include any of the hardware components in the following list.

Servers:

File servers

Database servers

Backup servers

Application servers

Storage devices:

RAID controller disk arrays, or hardware RAID arrays

Simple disk arrays

Tape libraries

Legacy SCSI devices

Interconnect hardware:

FC hubs

FC switches, also called fabric switches

FC host bus adapters (HBAs)

FC cables

Gigabit interface converters (GBICs), gigabit link modules (GLMs), and media interface adapters (MIAs)

FC-to-SCSI bridges

Software Components

A typical FC SAN can include any of the software components in the following list.

Device drivers and device firmware

SAN and storage management software:

SAN management software

Switch management software

Device and data management software

Diagnostic/troubleshooting software

Click View Example to see an illustration of software components.

Data Servers

A data server in a SAN environment is a computer, accessed by LAN clients through the LAN, that has access to storage on the SAN.

Data servers in a SAN environment perform the same function as data servers in a legacy LAN/DAS environment; that is, they are the systems that issue I/O requests to read data from, and write data to, storage devices. Examples of servers in a SAN include file servers, database servers, application servers, and backup servers.

Click View Example to see an example of a data server in a SAN.

TOPIC 2. Objective

SAN Servers

Servers in a SAN Environment

In a SAN environment, just as in legacy network environments, servers still have the role of initiating the I/O for networking applications. As a result, servers and their capabilities are integral to SAN design. Although a few server models offer on-board FC connectivity, most servers still require at least one host bus adapter (HBA) for FC access.

Server Operating Systems

Numerous server-class operating systems can be used in a SAN through a compatible server with FC access. Some of these include the following:

SolarisTM Operating System (Solaris OS)

HP-UX

Linux

AIX

Microsoft Windows NT, 2000, and XP

These operating systems (OS's) can manage several terabytes of FC storage and support the FC products of multiple storage vendors.

HP-UX, AIX, and the Solaris OS are all based on the AT&T System V version of the UNIX operating system and afford true multiuser, multithreaded operation in a multi-CPU environment. Many other operating systems are rooted in single-user, multitasking architectures, which cannot exploit the full bandwidth capabilities of FC.

Click View Example to see an example of heterogeneous servers in a SAN.

Mixed Operating Systems

You must be careful when adding Microsoft NT servers to an existing SAN. The NT servers should not have access to arrays used by other systems. An NT server automatically detects all storage arrays in the SAN and checks to see if they are owned by another NT server by looking for a Common Internet File System (CIFS) entry on the mounted volume. If none is discovered, the NT server writes one. The result of this action is the corruption of any existing UNIX volume. To ensure that NT servers do not corrupt non-NT fabric arrays, those LUNs must be mapped out (masked) or defined in a separate zone.

Although each previously mentioned OS has individual strengths and weaknesses, each shares common requirements for FC-based storage. Each OS requires specific HBAs or FC controller chips on the motherboard with the appropriate firmware and drivers to communicate to the FC devices. The OS's could require configuration file modifications to handle the FC disks.

Click View Example to see how zones can be used to isolate arrays for Windows and Solaris OS's.
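The visibility rule behind zoning can be modeled in a few lines. In the following toy sketch (zone names and WWNs are invented), a host can reach a storage port only if the two share at least one zone, which is what keeps the NT server away from the UNIX array.

```python
# Toy zone database: zone name -> set of member WWNs (all values invented).
zones = {
    "solaris_zone": {"wwn:host_sol", "wwn:array_a"},
    "nt_zone":      {"wwn:host_nt",  "wwn:array_b"},
}

def can_see(initiator, target):
    """True only if some zone contains both the host and the storage port."""
    return any(initiator in members and target in members
               for members in zones.values())

print(can_see("wwn:host_nt", "wwn:array_b"))   # True: same zone
print(can_see("wwn:host_nt", "wwn:array_a"))   # False: the NT server never
                                               # detects, or writes to, the UNIX array
```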

TOPIC 3. Objective

SAN Device Firmware and Drivers

Device Firmware in a SAN

The role of device firmware and drivers in a SAN is similar to their role in other storage solutions. Device firmware and drivers are the communications connection between hardware devices and the software.

HBAs, storage devices, switches, hubs, and bridges all require firmware to allow the software to communicate with the hardware. Device firmware is usually provided by the vendor that provided the hardware device.

The device firmware for a SAN has functionality similar to that of other storage models. Device firmware versions determine the level of functionality and support for a device within a SAN. It is important to keep like products at similar firmware levels to allow for specific features, troubleshooting, and performance considerations. Not all possible combinations of component firmware may be supported. Device firmware levels for different vendor products must also be compatible.

Click View Example to see an illustration of the locations of device firmware in a SAN.

Device Firmware

Device firmware represents low-level software code that is I/O controller (that is, hardware) specific. Device firmware is executed on the I/O controller. It is written and optimized for the particular hardware circuitry implemented on the controller. Device firmware controls the fundamental operations of the I/O controller, including power-on-self-test (POST), transmission and reception of signals, and error detection.

When a new HBA is installed into a host server, you must manage the following tasks:

Ensure that the correct device driver kernel module is installed on the operating system, and is successfully loaded at boot time.

Check, and upgrade if necessary, the firmware version of the HBA.

Ensure that the HBA is maintained correctly when subsequent firmware versions are released by the HBA vendor.

Devices That May Include Firmware

Device firmware runs on the following devices, to name a few:

SCSI HBAs

FC HBAs

Sun StorEdgeTM A5x00 interface boards

Sun StorEdge A3x00 RAID controllers

Sun StorEdge T3 array RAID controllers

FC disk drives

FC switches

TOPIC 4. Objective

Device Drivers

Device Drivers in a SAN

You need device drivers to communicate with storage devices and HBAs. Device drivers are the bridge between the device-specific hardware and firmware and the Upper Layer Protocols (ULPs). Device drivers dictate what a device does with requests that come through the firmware.

Device drivers are usually provided by the vendor that manufactures the hardware component, an HBA for example. Sun provides a unique driver stack for its FC HBAs. Other vendors, such as JNI, provide a different driver for use with their HBAs, even though both may be used in a Solaris OS server.

Operating System Device Drivers

OS device drivers are software components that are generally loaded into host kernel memory during system boot/startup. On the UNIX operating system, device drivers are usually written in the C programming language and compiled into the binary code appropriate to the host CPU instruction set.

Device drivers direct and manage I/O requests (such as, read, write, inquiry, and status) that are issued to host bus adapters and I/O controllers. Device drivers are host resident software modules, because they run in host kernel memory.

Driver Stacks

Driver stacks represent the various functions and data flows that are performed by elements of the I/O subsystem. Driver stacks are provided by the manufacturer of the HBA (native) or by the manufacturer of the OS that is running on the server (OS dependent).

The HBA manufacturer writes native driver stacks and ports them to various operating systems to support specific requirements that may be unique to that manufacturer or may be missing in the host based OS. Native driver stacks are associated with a specific manufacturer's HBA and are maintained and supported by that company. Many UNIX-based OS's support native driver stacks from various vendors. Native driver stacks support specific HBAs.

The manufacturer typically writes OS-specific driver stacks for the OS that is resident on the server. These stacks have typically been optimized to provide kernel-level integration with the vendor's other services, to improve the manageability and performance of the server. OS-specific stacks support specific servers.

TOPIC 5. Objective

Multipathing and the Leadville Driver

Leadville Driver

In the Solaris OS, the SAN Foundation Software (SFS), commonly known as the Leadville driver, is available as of the Solaris 8 OS 4/01 and later releases. Leadville has been written to support several Solaris OS functions, including Sun StorEdge Traffic Manager (SSTM) and Dynamic Reconfiguration (DR). Leadville uses the supported Fibre Channel HBAs from Sun, such as the Sun StorEdge 2-gigabit (Gb) PCI Fibre Channel Network Adapter.

The following diagram illustrates the role of the Leadville driver.

Resolving Limitations of the Operating System

Historically, many OS's did not have a means to recognize that the same storage array could be seen down multiple physical paths. In the SCSI realm there were many efforts to handle multi-initiators (multiple HBAs). Implementation proved to be quite problematic and was typically proprietary.

With the advent of FC, arrays can now uniquely identify themselves through the FC protocol. New features added to the OS allow these unique identifiers to be recognized and resolved back to a single array. There can be a single virtual path to the applications, even though there are multiple physical paths.
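A minimal sketch of that resolution step, with invented device paths and WWNs: because every path to the same array reports the same unique identifier, grouping paths by WWN collapses them into one virtual device.

```python
from collections import defaultdict

# Invented (path, WWN) pairs as an OS might discover them at boot.
paths = [
    ("/dev/dsk/c2t0d0", "wwn:50060e8000c3a104"),
    ("/dev/dsk/c4t0d0", "wwn:50060e8000c3a104"),  # same array, second HBA
    ("/dev/dsk/c5t1d0", "wwn:50060e8000ff129b"),
]

virtual = defaultdict(list)
for path, wwn in paths:
    virtual[wwn].append(path)                 # group physical paths by WWN

for wwn, physical in virtual.items():
    print(f"{wwn}: one virtual device, {len(physical)} physical path(s)")
```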

Sun StorEdge Traffic Manager (SSTM)

Sun StorEdge Traffic Manager (SSTM), from Sun Microsystems, allows the OS to resolve multiple names for the same device. (This product was formerly named MPxIO.) It supports multiple paths to the device without conflict.

SSTM is available with the Solaris OS 8 4/01 and higher. It is also available for other OS's. SSTM provides kernel-level multipathing components to ensure compatibility and minimal overhead. SSTM uses the Leadville driver stack to support multipathing in an FC environment. Currently, SSTM supports two physical paths to an array. Both links can load balance the traffic for the array.

How and Why Businesses Use Multipathing

Multipathing allows fast, redundant access to critical data located on high capacity storage arrays that use FC technology. To support multiple paths, many OS's were modified to allow the storage array to appear as a single entity. In some cases, drivers and kernel updates were required. In others, storage or HBA vendors provided specific software tools to enable multipathing.

Volume manager products provide multipathing as a layer on top of the OS. Multipathing is turned on by default and handles multipathing only for file systems under the control of the volume manager product. The paths may be active/active, where both links to the array are used to load balance traffic using a round-robin methodology. The paths may also be active/passive, where the active path is always used unless there is a failure; then a switch-over to the passive or standby path is required to maintain access to the array. Volume manager multipathing components are a kernel-level tool. Veritas Volume Manager is compatible with Sun's SSTM and is supported on several disk arrays and OS's.

Multipathing in a SAN introduces additional opportunities for redundant paths and components, eliminating single points of failure. As multipathing products mature, additional path choices will be available to administrators.

Troubleshooting a fabric-based multipath environment introduces its own challenges. Diagnostic tool sets must be able to discern the route within the fabric that a given set of components use to access stored data. This presents a challenge as SAN environments continue to grow.

Review Questions - SAN Server Hardware and Software

Congratulations! You have completed this module.

The following review has multiple-choice questions that are designed to both check your understanding and enhance what you learned, by reinforcing important module concepts.

If you need to review the question topic, click the Review icon available at the top corner of some question pages. With every response, you should receive feedback in the area at the bottom of the review question screen.

Click the forward arrow button to begin the review.


What is a key element of an FC SAN?

LAN/WAN

FC Switch

Parallel SCSI


What is the primary role of a data server in a SAN?

Support for legacy applications

Initiating I/O for networked applications.

SAN management

Which components require firmware to allow the software to communicate to the hardware?

HBAs

Storage Devices

Switches

Bridges

What is the function of SSTM?

Monitors system performance.

Allows the server to communicate with storage devices and HBAs.

Allows the OS to resolve multiple names for the same device.

Module Summary

This module introduced you to the primary hardware and software components in a SAN. It also provided more information on device firmware and drivers.

Now that you have completed this module, you should be able to:

Identify the components of a typical SAN

Define the role of servers in a SAN

Define the role of device firmware in a SAN

Define the role of device drivers in a SAN

Define the role of the Leadville driver to support multipathing

