Cisco MDS/Nexus SAN Portfolio: Next phase of Storage Networking
Mark Allen – Technical Leader
BRKARC-1222
Agenda
• Data Center Trends
• Fibre Channel Storage Networking
• Converged Storage Networking
• Storage Deployments
• Storage Networking Management
• Wrap Up
© 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public
Purpose of Storage
One Main Job: “Give me back the correct bit I asked you to hold on for me.”
Everything we do in storage (including storage networking) is based around completing that job safely, securely, reliably, and without error.
Data Center Trends
DATA CENTER TRENDS: IT Environments Experiencing Accelerating Transitions
• Server Virtualization: 85% by 2018
• Software Defined Infrastructure: 65% growth by 2017
• Increased Flash Usage: 7x SSD growth by 2018

INDUSTRY DYNAMICS
• Internet of Things: 40% CAGR by 2017
• Cloud: 69% CAGR by 2017
• Analytics: 69% CAGR 2014-19

STORAGE GROWTH
• 10x growth in information created by 2020
• Higher demand on multiprotocol storage

EFFECT ON STORAGE INFRASTRUCTURE (CAGR 2015-18)
• Integrated Platform: 6%
• Integrated Infrastructure: 14%
• Hyperconverged: 58%

Infrastructure agility is needed to enable greater speed of business.

Sources: Cisco Visual Networking Index (VNI), Cisco Cloud Index, IDC, Gartner, IDC WW Integrated Systems Forecast 2014-2018 (Nov ’14), IDC WW Hyperconverged Systems 2015-2019 Forecast (April ’15)
Cisco’s Family of Storage Networking Solutions
• Nexus: LAN/SAN
• MDS: SAN
• UCS: Compute
Cisco Multi-Protocol Portfolio: SAN, LAN & Compute

LAN/SAN (Cisco Nexus)
• Nexus 9000, Nexus 7000, Nexus 5600, Nexus 5500, Nexus 3000, Nexus 2000
• Nexus 5672UP-16G, Nexus 2348UPQ

SAN (Cisco MDS)
• MDS 9706/9710 and MDS 9718 directors, with 48x16G line-rate FC, 48x10G line-rate FCoE, and 24x40G line-rate FCoE modules
• MDS 9148S and MDS 9396S fabric switches
• MDS 9250i multiservice fabric switch

COMPUTE (Cisco UCS)
• UCS C-Series Rack Servers, UCS B-Series Blade Servers
• UCS 6200 Series and 6300 Series Fabric Interconnects

Common OS, Common Management
Fibre Channel Storage Networking
Deploy Small, Medium, Large FC SANs with the Cisco MDS 9000 16G Fibre Channel Family

Fibre Channel scale (ports):
• MDS 9148S (Cisco 16G Multilayer Fabric Switch Series): 12-48
• MDS 9396S (Cisco 16G Multilayer Fabric Switch Series): 48-96
• MDS 9250i (Cisco Multiservice Fabric Switch): 20-40
• MDS 9706 (Cisco 16G Multilayer Director Series): 48-192
• MDS 9710 (Cisco 16G Multilayer Director Series): 48-384
• MDS 9718 (Cisco 16G Multilayer Director Series): 48-768
MDS 9700 Series Multilayer Directors
Cisco MDS 9706, 9710 and 9718 Product Portfolio
• Directors: Cisco MDS 9706, MDS 9710, and MDS 9718
• Modules: Cisco 48x16G line-rate FC module, 48x10GE line-rate FCoE module, and 24x40GE line-rate FCoE module
• 16G FC and 10/40GE FCoE today; 32G FC ready
Cisco MDS 9710 Multilayer Director
Launched May 2013; 14 RU

• Industry’s highest performance and capacity: 1.5 Tbps/slot, 384 line-rate 16G FC ports
• Industry’s most reliable storage director: N+1 fabric
• Unmatched flexibility with multi-protocol connectivity
• Up to 8 line cards
• Up to 6 fabric modules
• Dual supervisors

Investment protection for the next decade.
Cisco MDS 9706 Multilayer Director: Extending MDS 9710 Director Qualities to a Smaller Form Factor
Launched September 2014; 9 RU; front-to-back airflow

• Industry’s most reliable compact director
• 3x the performance of any compact director, and 15x the performance of the current MDS 9506 director
• Maintain performance: 1.5 Tbps/slot switching capacity; scale up to 192 line-rate ports (16G FC or 10G FCoE)
• Eliminate loss of bandwidth: N+1 fabric redundancy
• Eliminate downtime: In-Service Software Upgrade, dual redundant supervisors, redundant power supplies/fans, reduced failure domains
• Grow without forklift: investment protection for the future
• Preserve IT operations and knowledge: ease of migration with NX-OS and DCNM
Cisco MDS 9718 Multilayer Director: Industry’s First Ultra-High Density Director
Designed for consolidation and high-scale SAN fabrics

Same features and functionality as MDS 9706 and 9710:
• High availability: redundant supervisors, power supplies and fabric modules
• Performance: 1.5 Tbps bandwidth per slot = 768 x 32Gbps line-rate ports
• Investment protection: same NX-OS, line cards, and power supplies as other MDS 9700 directors

Plus:
• New Supervisor 1E (Enhanced) with 2x the compute and 4x the memory of Supervisor 1
• Fully populated fabric modules: 32Gbps-ready at FCS
Comparison: 9706, 9710 and 9718
Hardware Feature                          MDS 9706        MDS 9710        MDS 9718
Line Card slots                           4               8               16
RU                                        9 RU            14 RU           26 RU
Fabric Module slots (default/available)   3 / 6           3 / 6           6 / 6
Sup slots                                 2               2               2
Fabric Module location                    Back            Back            Back
Airflow                                   Front to Back   Front to Back   Front to Back
Power Supply slots                        4               8               16
Cisco MDS 9700: Designed for Scale
With the FC and FCoE modules, host-port bandwidth per slot scales with the number of fabric modules installed:

Number of Fabric Cards   FC Bandwidth per Slot   FCoE Bandwidth per Slot
1                        256 Gbps                220 Gbps
2                        512 Gbps                440 Gbps
3                        768 Gbps                660 Gbps
4                        1024 Gbps               880 Gbps
5                        1280 Gbps               1100 Gbps
6                        1536 Gbps               1320 Gbps
The Benefits of Multiple Fabrics (Cisco MDS 9700)

Two key design benefits:
1. Operational redundancy: uptime at full performance
2. Investment protection: scale up with addition rather than replacement (32G FC)

Number of Fabric Cards   Front-Panel FC Bandwidth / Slot
1                        256 Gbps
2                        512 Gbps
3                        768 Gbps
4                        1024 Gbps
5                        1280 Gbps
6                        1536 Gbps

(Chart: possible front-panel bandwidth per slot versus number of fabric cards, marking N+1 redundancy at 8G FC, N+1 redundancy at 16G FC, and the remaining headroom as future-proof growth.)
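The scaling rule above can be sketched in a few lines: each fabric module contributes 256 Gbps of front-panel FC bandwidth per slot (from the table), so a 48-port card’s fabric-module count follows directly from its port speed. A minimal sketch; the helper names are our own:

```python
# Sketch: how many MDS 9700 fabric modules a 48-port line card needs for
# line rate, and with N+1 redundancy. The 256 Gbps per-module figure comes
# from the table above; the arithmetic is illustrative.
import math

FC_GBPS_PER_FABRIC_MODULE = 256  # front-panel FC bandwidth per slot, per module

def modules_for_line_rate(ports: int, port_speed_gbps: int) -> int:
    """Fabric modules needed so every port on one slot runs at line rate."""
    needed_gbps = ports * port_speed_gbps
    return math.ceil(needed_gbps / FC_GBPS_PER_FABRIC_MODULE)

def modules_with_n_plus_1(ports: int, port_speed_gbps: int) -> int:
    """One spare module on top: losing any single module preserves line rate."""
    return modules_for_line_rate(ports, port_speed_gbps) + 1

# 48 ports of 16G FC need 768 Gbps: 3 modules, or 4 with N+1 redundancy.
# 48 ports of 8G FC need 384 Gbps: 2 modules, or 3 with N+1 redundancy.
```

This is why the chassis ship with 3 modules by default (line rate at 16G FC) and accept up to 6 (N+1 today, headroom for 32G FC later).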
MDS 9710 Supervisor Modules
• New half-width form factor
• 10/100/1000 Management port, RJ45 Console port, 2 x USB 2.0 ports
• More powerful supervisors allow greater scalability

                  MDS 9500 Sup 2A   MDS 9706/9710 Sup 1   MDS 9718 Sup 1E
Memory            2 GB              8 GB                  32 GB
# of Cores        1                 4                     8
Clock Speed       1.4 GHz           2.1 GHz               2.1 GHz
Instruction Set   32-bit            64-bit                64-bit
MDS 9700 Power Supplies
(Diagram: redundant power feeds, with Power Distribution Unit-A on Grid A and Power Distribution Unit-B on Grid B.)
• New form factor 3000W power supply module
• Autosensing voltage detection
• 80Plus Platinum Certified (>94% efficiency)
• Both AC and DC power supplies available
• Can mix AC and DC power supplies in same chassis
• Grid Redundancy for all chassis
• Four Power Supplies – MDS 9706
• Six to Eight Power Supplies – MDS 9710
• Twelve to Sixteen Power Supplies – MDS 9718
• N+N:N+N Power Redundancy optional for MDS 9710 and 9718 chassis
MDS 9700 48-Port 16G Fibre Channel Line Card

Performance                48 x 16 Gbps ports, 768 Gbps (3 fabric modules for line-rate speed)
Port Speeds                2, 4, 8, 10 and 16 Gbps Fibre Channel
Optics (SFP+)              2/4/8G FC, 4/8/16G FC, 10G FC, 10GE (FC with 10GE clock)
Port Types                 F, FL, TF, E, TE, SD, ST
Port Groups                Twelve 4-port port-groups
Intelligent Capabilities   VSAN, IVR, FC Redirect
Buffer-to-buffer credits   Up to 500 per port, 4095 with Enterprise License (510 km @ 16G)
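The credit-to-distance figure above follows from a common rule of thumb: at 1G FC, one buffer-to-buffer credit keeps roughly 2 km of fiber full with maximum-size frames, and the distance per credit scales inversely with link speed. A rough sketch under that assumption (the function name is ours):

```python
# Rule-of-thumb sketch: maximum distance sustainable at line rate, given a
# buffer-to-buffer credit count. Assumes ~2 km per credit at 1G FC for
# full-size frames, scaled inversely with speed -- an approximation, not
# a vendor formula.
def max_distance_km(credits: int, speed_gbps: float) -> float:
    """Approximate line-rate distance (km) with full-size frames."""
    km_per_credit = 2.0 / speed_gbps  # ~2 km/credit at 1G, less at higher speeds
    return credits * km_per_credit

# 4095 credits at 16G gives ~512 km, in line with the "510 km @ 16G" above.
# The default 500 credits at 16G covers only ~62 km.
```

Small frames consume credits faster, so real deployments size credits against the actual frame-size mix.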
MDS 9700 48-Port 10GE FCoE Line Card

Performance                48 x 10 Gbps ports, 480 Gbps (3 fabric modules for line-rate speed)
Port Speed                 10 Gbps Ethernet
Port Types                 VF, VE, VTE
Port Groups                Twelve 4-port port-groups
Intelligent Capabilities   VSAN, IVR
MDS 9700 24-Port 40GE FCoE Line Card

Performance                24 x 40 Gbps ports, 960 Gbps (5 fabric modules for line-rate speed)
Port Speed                 40 Gbps Ethernet
Port Types                 VF, VE, VTE
Port Groups                Four 6-port port-groups
Intelligent Capabilities   VSAN, IVR
MDS 9000 Series Fabric Switches
Cisco MDS 9148S Fabric Switch
High-performance, easy-to-deploy, enterprise-class fabric switch (1 RU)

VERSATILE
• Line-rate 16/8/4/2G FC ports
• Industry-leading port range: 12-port base, scale up with 12-port licenses, full 48-port option available
• Expand from 12 to 48 ports in 12-port increments

EASY TO USE
• Automated provisioning
• Quick Configuration Wizard
• Same OS and management across the industry’s broadest SAN portfolio

ENTERPRISE-CLASS
• Non-disruptive software upgrades
• Up to 32 Virtual SANs (VSANs)
• Inter-VSAN Routing (IVR), QoS, PortChannels, N-Port ID Virtualization (NPIV), N-Port Virtualization (NPV), comprehensive security
• Hardware-based slow-drain detection and recovery
• 48 x 16G FC line-rate performance
• Dual power supplies and fans for enterprise-class availability
MDS 9396S Fabric Switch: 2RU 96-Port 16G Line-Rate Switch
A blend of fabric switch and director capabilities; deploy as a stand-alone switch or as an edge switch for an edge-core network.

Hardware Platform
• 2RU 96-port 16G FC fixed fabric switch
• 2/4/8 and 16-Gbps line-rate Fibre Channel speeds
• Redundant power supplies and fans, with port-side exhaust or port-side intake airflow

Enterprise Class
• Up to 4095 B2B credits per port
• FC TrustSec encryption
• Forward Error Correction

Fabric Switch-like Capabilities
• On-demand port licensing (48-port base with 12-port incremental licenses)
• NPV mode
• Power-On Auto-Provisioning (POAP), Quick Configuration Wizard
MDS 9396S: Performance of a Director
Based on the architecture of the 48-port 16G FC line card of the MDS 9700 director:
• Same Port ASICs
• Same Crossbar ASIC
Versatile, High-Performance FC Solutions
Use MDS 16G FC fabric switches to deploy state-of-the-art SAN solutions for small to mid-tier SANs.

Entry-Level SANs: MDS 9148S
• Start with a 12-port 16G FC base; grow in 12-port increments (12, 24, 36, 48 ports)
• Two base configurations: 12 and 48 ports

Mid-Tier Standalone SANs: MDS 9396S
• Start with a 48-port 16G FC base; grow in 12-port increments (48, 60, 72, 84, 96 ports)
• Two base configurations: 48 and 96 ports

Mid-Tier Private Cloud SANs: MDS 9396S with MDS 9700
• Deploy the MDS 9396S across two racks as a Middle-of-Row (MoR) switch, connected to an end-of-row MDS 9700 director
• Deploy in N-Port Virtualization (NPV) mode to reduce the number of managed switches
MDS 9000 Series Multiservice Fabric Switch
MDS 9250i Multiservice Fabric Switch
Next-generation storage services platform for Unified Fabric

Hardware: 40 ports 16G FC, 8 ports 10GE FCoE, 2 ports 10GE FCIP/iSCSI; 1+1 redundant fans, 2+1 redundant power supplies

Features
• Line-rate performance for 16G FC, 10GE FCoE, 10GE FCIP, FICON, iSCSI
• Rich set of storage services for FC and FCoE SANs: FCIP, I/O Accelerator (IOA), Data Mobility Manager (DMM)
• Integrated management via Data Center Network Manager (DCNM)

Benefits: a single platform for deploying storage services across FC and FCoE Storage Area Networks (SANs)
• High-bandwidth SAN extension across MAN/WAN
• Migrate data between FC and FCoE arrays
MDS 9250i Multiservice Fabric Switch: One SAN Appliance, Multiple Use-Cases
• Business continuity/disaster recovery: extend FC/FCoE SANs between a production DC and a disaster-recovery DC across the IP WAN
• Data migration: migrate data between heterogeneous storage across FC/FCoE SANs
• FC SAN gateway: connect an FCoE converged fabric to an FC SAN
• FC SAN switch: native FC switching within the SAN
MDS Architecture
Cisco MDS Architecture Overview
MDS Family: Arbitrated Crossbar Architecture

(Diagram: crossbar switch with central arbitration; ingress and egress ports connected through the crossbar, with control and scheduling logic.)

• The crossbar may be external (MDS 9700 Directors, MDS 9396S) or integrated into the ASIC (MDS 9250i and MDS 9148S)
• The crossbar establishes a temporary connection between input and output port for the duration of the frame exchange
• Frames are not transmitted unless there is an available path
• The scheduler uses arbiter grants to provide traffic fairness and QoS
• Virtual Output Queues are used to eliminate HOL blocking
Cisco MDS Frame Flow
• Frames always follow the same logic from ingress to egress (consistent switching latency)
• Newer-generation line cards use fewer components than first-generation line cards; the frame forwarding logic, however, is the same

The stages along the line-card path:
• PHY: optical-to-electrical interface, SFP+ (2/4/8/10/16 Gbps)
• MAC: error checking, timestamp, VSAN header; port types/modes include F_Port, loop port (FL_Port), ISL (E_Port), EISL (TE_Port), and SPAN port (SD_Port)
• Forwarding: access control, forwarding and load balancing, Inter-VSAN Routing, interface statistics
• Queuing: buffering and Virtual Output Queues, QoS, arbitration requests
• Redundant central arbiters (on the supervisors)
• Redundant crossbar switch fabrics
• Egress: access control and QoS; transmit, loopback or expire the frame; remove the VSAN header (end device) or prepend an EISL header (EISL to a Cisco switch)
Frame Flows: 1. Frame arrives on ingress PHY/MAC
• PHY
  • Optical-to-electrical conversion
  • 2/4/8/10/16-Gbps SFP
• MAC
  • Decipher FC frames from the incoming bit stream using SOF/EOF primitives
  • Issue R_RDYs (return buffer credits)
  • Check incoming frames for errors (CRC check)
  • Prepend the switch-internal header: ingress port, ingress VSAN, QoS markings, frame arrival time, etc.
Frame Flows: 2. Ingress forwarding logic
• H/W-based frame forwarding distributed on each line card:
  • VSAN-aware (forwarding based on VSAN & D_ID: “virtual fabric aware”)
  • Ingress ACLs (hard zoning)
  • Load balancing (FSPF and/or Port Channels)
  • Inter-VSAN Routing including FC-NAT (integrated, with no loss of performance)

The ingress pipeline for an incoming frame: a per-VSAN ingress ACL lookup (the frame is dropped unless permitted), a per-VSAN forwarding table lookup, a check for multiple destinations, FC-NAT if the source/destination pair requires Inter-VSAN Routing, load balancing, an update of flow, interface and VSAN statistics, and hand-off to queuing.
Frame Flows: 3. Ingress queuing / request-to-send logic
• Virtual output queuing
• Interface to centralized arbitration
• Buffering/queuing (in case of congestion)

On the ingress FC port, an incoming frame is classified into the transmit queues (one priority queue plus DWRR queues 2-4), its destination Virtual Output Queue is determined, and the frame is queued on the ingress-port VoQ. The port then requests a grant from the arbiter, waits for the grant, chooses a crossbar channel, transmits the frame to the crossbar switch fabric, and signals the freed destination queue slot.
Frame Flows: 4a. Centralized arbiters
• Responsible for scheduling frames from ingress to egress
• Capable of scheduling more than a billion frames/sec
• Ensure fairness when there is congestion on output ports: congestion does not result in blocking, because of VoQs

Switch without VOQ: a single input queue at port 1 holds frames for ports 4, 5 and 6 in arrival order, so a congested destination at the top of the queue blocks every frame behind it.
VOQ model: the input at port 1 keeps a separate virtual output queue per destination, so a congested destination delays only its own queue.
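The VOQ-versus-single-FIFO comparison can be made concrete with a toy model. This is an illustrative sketch, not switch firmware; the frame list and the congested port are invented for the example:

```python
# Toy model of head-of-line (HOL) blocking and why Virtual Output Queues
# eliminate it. Frames at input port 1 are destined to ports 4, 5 and 6;
# port 6 accepts nothing during this interval (congested).
from collections import deque, defaultdict

frames = [6, 4, 6, 6, 4, 4, 5, 5]   # destination port of each queued frame
congested = {6}

def drain_single_fifo(frames, congested):
    """One FIFO: a blocked frame at the head stalls everything behind it."""
    q = deque(frames)
    sent = []
    while q and q[0] not in congested:
        sent.append(q.popleft())
    return sent

def drain_voq(frames, congested):
    """Per-destination queues: only the congested destination has to wait."""
    voq = defaultdict(deque)
    for dst in frames:
        voq[dst].append(dst)
    sent = []
    for dst, q in voq.items():
        if dst not in congested:
            sent.extend(q)
    return sent

# Single FIFO delivers nothing (a port-6 frame sits at the head), while
# VOQs still deliver all five frames destined to ports 4 and 5.
```

In the real hardware the central arbiter only grants a VoQ when the egress has buffer available, which is what makes the fairness guarantee possible.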
Frame Flows: 4b. Crossbar switch fabrics
• Low-latency, high-throughput, non-blocking, non-oversubscribed crossbar switch fabric
• Guaranteed bandwidth for frames being transmitted
• No over-subscription in MDS 16G platforms
Frame Flows: 5. Egress forwarding logic
• Check frame integrity (again)
• Egress ACLs
• Egress QoS

An incoming frame from the crossbar switch fabric is first checked for errors (bad frames are dropped), then run through a per-VSAN egress ACL lookup (denied frames are dropped), given any IVR FC-NAT rewrite, and placed in the per-port class-of-service output queue.
Frame Flows: 6. Egress MAC/PHY logic
• Remove the internal switch header
• If the output port is a trunked E_Port (VSAN trunking), prepend the EISL header
• Check the frame timestamp (drop expired frames)
• Transmit the frame onto the wire with appropriate encoding and primitives
VSANs – The Technical Details
EISL Header – What’s Inside?

• Several fields in the EISL header provide a variety of functions
• Each frame on a VSAN trunk carries up to an extra 8 bytes of header, which includes:
  • User priority (3 bits): used for QoS functionality to designate the priority of the frame
  • VSAN ID (12 bits): marks the frame as part of a particular VSAN; supports up to 4096 VSANs
  • MPLS flag (1 bit): designates whether the frame is subject to Multi-Protocol Label Switching processing (future use)
  • Time-to-live, TTL (8 bits): used to help avoid routing loops
  • Other miscellaneous fields, including version, frame type, and other reserved fields

Frame layout on an EISL: SOF (4B) | EISL Header (8B) | FC Header (24B) | Payload (up to 2112B) | CRC (4B) | EOF (4B)
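The field widths above (3-bit priority, 12-bit VSAN ID, 1-bit MPLS flag, 8-bit TTL) can be illustrated with simple bit packing. The widths come from the slide; the bit positions, the padding, and both function names are assumptions purely for illustration, not the actual EISL wire layout:

```python
# Illustrative packing of the EISL header fields described above.
# Field WIDTHS are from the slide; bit POSITIONS and the use of the
# remaining header bytes are hypothetical.
def pack_eisl(priority: int, vsan_id: int, mpls: int, ttl: int) -> bytes:
    assert 0 <= priority < 2**3 and 0 <= vsan_id < 2**12  # 4096 VSANs max
    assert mpls in (0, 1) and 0 <= ttl < 2**8
    word = (priority << 21) | (vsan_id << 9) | (mpls << 8) | ttl
    return word.to_bytes(3, "big") + bytes(5)  # pad out to the 8-byte header

def unpack_vsan(header: bytes) -> int:
    word = int.from_bytes(header[:3], "big")
    return (word >> 9) & 0xFFF  # recover the 12-bit VSAN ID

hdr = pack_eisl(priority=3, vsan_id=100, mpls=0, ttl=16)
# unpack_vsan(hdr) == 100; the 12-bit field is why VSAN IDs top out at 4095
```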
Designed for Fabric Resiliency
CRC detection and 16 Gbps FC Forward Error Correction (FEC)

• Lossy media, such as loose SFPs or dirty cables, can result in packets getting corrupted
• CRC checking drops corrupted frames from end devices (ingress CRC checking) and internally corrupted frames (internal CRC checking)
• FEC corrects frames corrupted in-flight, preserving frames instead of dropping them
Converged Storage Networking
Industry’s Broadest Converged Ethernet Portfolio
Use 10/40G Ethernet to converge IP and Fibre Channel storage traffic.

Max FCoE / IP ports per chassis:

       Nexus 7700/7000         Nexus 5600   Nexus 2300   Nexus 9000 (ACI)   MDS 9000                  UCS FI 6200/6300
10GE   1536 (IP), 768 (FCoE)   384          48           2048               768 (FCoE only), 2 (IP)   96
40GE   384                     96           6            512                384                       32
Cisco Nexus 5672UP-16G: Enhanced 5672UP for SAN
All features of the Nexus 5600 and more.

• 24 fixed 1/10G SFP+ ports
• 24 Unified Ports providing 2/4/8/16G FC or 10G Ethernet/FCoE
• 6 x 40G QSFP+ ports, with the flexibility to use 4x10G or 40G
• Flexible: traditional Ethernet plus storage (file, iSCSI, FCoE and FC)
• Reduced cost: deploy once, implement solutions as needed
Cisco Nexus 2348UPQ Fabric Extender
The first FEX solution designed for all storage connectivity.

• Unified Ports providing 2/4/8/16G FC or 10G Ethernet/FCoE: up to 24 x 16G FC ports or up to 48 x 2/4/8G FC ports
• 6 x 40G QSFP+ ports, with the flexibility to use 4x10G or 40G
• Flexible: deploy ports as required, LAN or SAN
• Reduced cost: lower cost than an Ethernet switch
• Reduced management: configuration and OS handled on the parent switch
Next-Generation UCS Fabric Interconnects: UCS FI 6332, UCS FI 6332-16UP, IOM 2304
Enabling a high-performance, low-latency and lossless fabric.

• FI 6332: 32 x 40GbE QSFP+ ports
• FI 6332-16UP: 24 x 40GbE QSFP+ ports and 16 x UP ports (1/10GbE or 4/8/16G FC)
• IOM 2304: 8 x 40GbE server links and 4 x 40GbE QSFP+ uplinks
• High-density 40GbE ports enable a 40G end-to-end fabric
• 2.6x increase in throughput, 3x lower latency
Storage Deployments
MDS 9718 Use Case: Switch Consolidation

CORE-EDGE DESIGN
• 6 x 384-port directors
• 256 ISL ports
• 1792 ports deployed
• 704 server ports per fabric
• 64 target ports per fabric

COLLAPSED CORE DESIGN
• 2 x 768-port directors
• 0 ISL ports
• 1536 ports deployed
• 704 server ports per fabric
• 64 target ports per fabric

BENEFITS
• Reduced management: fewer switches to manage, no ISLs to manage
• Reduced power: fewer switches, fewer ports deployed (no ports used by ISLs)
• Reduced cabling: elimination of ISL cables
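The port counts in the comparison above follow a simple formula: deployed ports are the server and target ports on both redundant fabrics, plus any ISL ports. A quick sketch of that arithmetic (helper name is ours):

```python
# Sketch of the port math behind the consolidation comparison: deployed
# ports = (server + target ports per fabric) x fabrics + ISL ports.
def deployed_ports(server_per_fabric: int, target_per_fabric: int,
                   isl_ports: int, fabrics: int = 2) -> int:
    return (server_per_fabric + target_per_fabric) * fabrics + isl_ports

core_edge      = deployed_ports(704, 64, isl_ports=256)  # 6 x 384-port directors
collapsed_core = deployed_ports(704, 64, isl_ports=0)    # 2 x 768-port directors
# core_edge == 1792 and collapsed_core == 1536: the same end devices are
# served with 256 fewer switch ports, optics and cables.
```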
MDS 9718 Use Case: Scaled Growth

EDGE-CORE-EDGE DESIGN
• 26 x 384-port directors
• 3072 ISL ports
• 9216 ports deployed
• Core at maximum capacity
• 384 target ports per fabric
• 2688 host ports per fabric

CORE-EDGE DESIGN
• 4 x 768-port directors in the core, 16 x 384-port directors at the edge
• 1536 ISL ports
• 7680 ports deployed
• Core can continue to grow storage and edges
• 384 target ports per fabric
• 2688 host ports per fabric

BENEFITS
• Reduced OPEX: fewer switches to manage, fewer ISLs to manage
• Reduced CAPEX: fewer switches deployed, fewer ports deployed (fewer ISLs)
• Reduced cabling: elimination of ISL ports/cables
Cisco 40G QSFP BiDi Transceivers: Utilizing Existing Duplex Fiber
No need to upgrade the fiber plant.

• LC cable with 40G QSFP-BiDi: duplex LC fiber, 20 Gbps in each direction (receive and transmit on two different wavelengths); OM3 MMF: 100m, OM4 MMF: 150m
• MPO-12 cable with 40G QSFP-SR4: MPO-12 fiber, 8 fiber strands carry 10 Gbps each, 4 strands unused; OM3 MMF: 100m, OM4 MMF: 150m
• LC cable with SFP+: duplex LC fiber, each strand carries 16G/10 Gbps; OM3 MMF: 300m (10G), 100m (16G); OM4 MMF: 400m (10G), 125m (16G)
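The fiber-plant saving is easy to quantify: BiDi uses the same two strands as existing duplex LC links, while SR4 consumes eight strands of an MPO-12 trunk per link. A small sketch (the 96-link example is illustrative):

```python
# Strands of multimode fiber consumed per 40G link, per optic type.
# Strand counts are from the slide; the 96-ISL example is illustrative.
STRANDS_PER_LINK = {
    "QSFP-BiDi": 2,  # duplex LC, two wavelengths per strand
    "QSFP-SR4": 8,   # MPO-12, 8 of 12 strands carry 10 Gbps each
    "SFP+": 2,       # duplex LC (10G/16G, shown for comparison)
}

def strands_needed(links: int, optic: str) -> int:
    return links * STRANDS_PER_LINK[optic]

# 96 x 40G ISLs: 192 strands with BiDi (reusing the existing duplex LC
# plant) versus 768 strands with SR4 over new MPO-12 trunks.
```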
40GE ISL Use Case: ISL Consolidation

16G FC ISLs
• 6 x 384-port directors
• 256 ISL ports
• 1792 ports deployed
• 704 server ports per fabric
• 64 target ports per fabric

40GE FCoE ISLs
• 6 x 384-port directors
• 96 ISL ports
• 1632 ports deployed
• 704 server ports per fabric
• 64 target ports per fabric

BENEFITS
• Reduced management: fewer ISLs to manage
• Reuse cabling: BiDi optics allow use of existing LC cabling
40GE ISL Use Cases: Converged Networks to Fibre Channel SAN
• Nexus fixed/modular switches: 10GE FCoE at the access, 40GE FCoE uplinks
• Nexus with FEX: 10GE FCoE at the access, 40GE FCoE uplinks
• UCS with 6332 Fabric Interconnect: 40GE FCoE uplinks
Cisco Nexus 5672UP-16G Use Cases
• Ethernet and FC connectivity: the 5672UP-16 provides 8/16G FC to servers and 16G FC to the SAN, alongside the Ethernet LAN
• Converged access to FC and IP storage: 10GE FCoE server access, 16G FC to the FC SAN, 10GE to IP storage, and 40GE uplinks to a Nexus spine and the Ethernet LAN
Cisco Nexus 2348UPQ Use Case: LAN/SAN Access Convergence

TRADITIONAL RACK ARCHITECTURE
• 2 x TOR LAN switches uplinked to the LAN
• 2 x TOR SAN switches uplinked to the SAN
• Rack-mount servers cabled to both pairs

UP FEX RACK ARCHITECTURE
• 2 x LAN/SAN 2348UPQ FEX uplinked to the LAN/SAN parent switch
• Rack-mount servers cabled to the FEX pair
Data Mobility Manager (DMM)
Cisco Data Mobility Manager (DMM): Migrating data across heterogeneous storage arrays with minimum disruption
The server continues host I/Os to the existing storage pool on SAN “A” and SAN “B” while DMM migrates the data to the new storage pool.
DMM Synchronous
• Supports a dual-fabric topology, with one DMM node in each fabric
• The DMM node performs mirroring for server WRITE I/Os and data movement for the migration
• The server HBA port and the existing/new storage ports need to be in the same VSAN (per fabric)

(Topology: server, existing storage pool and new storage pool attached to redundant MDS 9700 fabrics, each fabric with an MDS 9250i w/DMM.)
DMM Asynchronous
• Additional Replication Fabric added for “out-of-band” copy
• One DMM node in each Production Fabric and one DMM node for Replication Fabric
• Replication DMM node copies data from existing Storage to new Storage with information provided by production DMM nodes.
• Replication SAN can be local (same Data Center) or extended between Data Centers (DWDM or FCIP)
(Topology: server, existing storage pool and new storage pool on the production fabrics, with an MDS 9250i w/DMM on the Replication SAN.)
MDS DMM: Meeting SLAs
Fast time to completion is key.
• Up to 16 TB LUN size
• Up to 7680 LUNs
• 3.7 TB/hr data migration throughput
• Save time: resume a job after a WAN link failure
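A quick sanity check on those numbers: at the quoted 3.7 TB/hr, even the largest supported LUN moves in a few hours, which is why losing a partially complete job to a WAN failure used to be so costly. An illustrative calculation:

```python
# Back-of-envelope migration time at the quoted DMM throughput.
# 3.7 TB/hr is the figure from the slide; the function name is ours.
def migration_hours(tb_to_move: float, tb_per_hour: float = 3.7) -> float:
    return tb_to_move / tb_per_hour

# A full 16 TB LUN takes roughly 4.3 hours at the quoted rate, so job
# resumption after a WAN failure avoids repaying hours of copy time.
```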
DMM Job Resumption after Failure
• A WAN link fails due to failed nodes, bad links or CRC errors
• Previously, the DMM migration job needed to be restarted from the beginning, losing time
• With the NX-OS 6.2(9) release, the job resumes after WAN link recovery
(Topology: server with existing and new storage pools, connected over the Replication SAN.)
Save time and effort, and meet SLA requirements.
I/O Acceleration (IOA)
Application Acceleration
• Distance between data centers impacts performance of disk replication and tape backups
• Latency introduced by distance is compounded by multiple round trips per command
• Different acceleration methods are available to accelerate data over distance:
  • I/O Accelerator (IOA) for disk and tape over FC or FCIP
  • Write Acceleration for disk over FCIP (FCIP-WA)
  • Tape Acceleration for tape over FCIP (FCIP-TA)
Acceleration Data Flow Concepts
• Synchronous replication and tape backup are similar: both have one outstanding I/O
• Tape drives are further impacted by limited buffering and physical media
• Write Acceleration spoofs the Transfer Ready only; Tape Acceleration also spoofs the Command Status

(Sequence diagrams: with Write Acceleration, the local WA engine answers XFER_RDY immediately so the host sends DATA without waiting, and the reduction in I/O latency is roughly equal to one round-trip time (RTT). With Tape Acceleration, the local TA engine also returns STATUS for WRITE-1, WRITE-2 and the write-file-mark commands, pipelining successive tape writes across the link.)
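The saving of "roughly one RTT per I/O" can be modeled in a few lines. This assumes the simplified two-round-trip SCSI write described above and the standard ~5 µs/km one-way fiber propagation approximation; the function names are ours:

```python
# Model of the Write Acceleration latency saving: a SCSI write normally
# costs two round trips (command/XFER_RDY, then DATA/STATUS); WA spoofs
# XFER_RDY locally, so each write costs roughly one round trip.
# Assumes ~5 us per km one-way fiber propagation; transfer time ignored.
def rtt_ms(distance_km: float) -> float:
    return 2 * distance_km * 5e-6 * 1000  # out and back, 5 us per km

def write_latency_ms(distance_km: float, accelerated: bool) -> float:
    round_trips = 1 if accelerated else 2
    return round_trips * rtt_ms(distance_km)

# On a 100 km link: ~2.0 ms per write unaccelerated versus ~1.0 ms with
# WA -- a saving of about one RTT per I/O, as the slide states.
```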
Replication IO Acceleration Use Cases
• Acceleration only works for replication or back-up protocols that use a SCSI write or SCSI-like write sequence with two round-trips
• EMC SRDF/s, SRDF/A, SRDF/AC, Mirrorview and SANcopy
• HDS TrueCopy
• HP CA-XP, CA-MVA
• IBM FlashCopy, FastT, XIV
• Protocols that use a single round trip do not require IO Acceleration
• EMC SRDF/S with SiRT enabled
• HDS Universal Replicator (HUR)
• HP CA-EVA
• IBM PPRC, PPRC-XD, XRC/Global Mirror, Metro Mirror
• NetApp Metro Cluster
Port vs. Network-Based Acceleration
• Port-based FCIP-WA and FCIP-TA only accelerate I/Os egressing through their bound 1/10GE port
  • Only works with FCIP
  • Restrictions on HA topologies
  • Accelerates all flows over a given FCIP interface
• Network-based I/O Acceleration (IOA) accelerates any I/O in the fabric over any ISL type
  • No restrictions on HA topologies
  • Can selectively accelerate flows based on the PWWN of the devices
• Both methods function the same at the SCSI layer
Comparison of Acceleration Methods

                        IOA-WA and IOA-TA    FCIP-WA              FCIP-TA
Attached Devices        1/2/4/8/16G          1/2/4/8/16G          1/2/4/8/16G
ISLs supported          FC and FCIP          FCIP                 FCIP
ISL Speed               1/2/4/8/10/16G       1/10GE               1/10GE
Port Channels           Yes, up to 16 ISLs   Yes, up to 16 ISLs   No
Equal Cost Multi-Path   Yes, up to 16        No                   No
Disk Acceleration       Yes                  Yes                  No
Tape Acceleration       Yes                  No                   Yes
IOA Operation
• When IOA is configured, Virtual Initiators (VI) and Virtual Targets (VT) are created and distributed through the fabric
• Virtual Initiators and Virtual Targets are transparent to the end devices
• No changes to end devices or zoning are required

The accelerated flow, step by step:
1. The server writes to disk
2. The local array (I) writes to the remote array (T)
3. The FC Redirect switch redirects the original flow (I, T) to the IOA engine (I, VT1)
4. IOA accelerates the flow and sends it to the remote IOA engine over the normal routing path (VI1, VT2)
5. The remote IOA engine changes the flow to (VI2, T)
6. The FC Redirect switch changes the flow back to (I, T)
IOA Sites and Engines
• IOA Site: a local set of switches within the fabric (e.g. sjc-bldg6, rtp-bldg8)
• IOA Engine (interface): represents a Storage Service Engine within the MDS 9250i, MDS 9222i, SSN-16 or MSM-18/4 (e.g. ioa2/1, ioa3/1-3/4)
• Note: a separate IOA license is required for each engine
IOA Clusters
• IOA Cluster
A set of IOA engines in a pair of IOA sites that operates as one group to provide the IOA service
E.g. switches S1, S2, S3 and S4 in sites sjc-bldg6 and rtp-bldg8
Automatic load balancing among engine pairs
A site can be in multiple clusters (e.g. bunker sites)
A switch can be in multiple clusters
Note: an engine is bound to only one cluster
IOA Flows
• IOA Flow
A flow accelerated within an IOA cluster
Each flow is identified by {Initiator PWWN, Target PWWN, VSAN ID}
E.g. (DI1, DT1, V1) or (BI1, TT1, V1)
• IOA Flow Group
A set of IOA flows classified for a given purpose
E.g. SRDF flow group and TSM flow group
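The flow-classification model above can be captured in a few lines. This is an illustrative sketch only (the PWWN placeholders DI1/BI1/TT1 follow the slide's example; the helper names are invented):

```python
from collections import namedtuple

# Each IOA flow is keyed by {initiator PWWN, target PWWN, VSAN ID};
# flows are grouped by purpose (e.g. SRDF disk replication vs. TSM tape).
FlowKey = namedtuple("FlowKey", ["initiator_pwwn", "target_pwwn", "vsan_id"])

flow_groups = {
    "SRDF": {FlowKey("DI1", "DT1", 1)},           # disk replication flows
    "TSM":  {FlowKey("BI1", "TT1", 1),            # backup-to-tape flows
             FlowKey("BI2", "TT1", 1)},
}

def group_of(key):
    """Find which flow group (if any) a given flow key belongs to."""
    for name, flows in flow_groups.items():
        if key in flows:
            return name
    return None

assert group_of(FlowKey("BI1", "TT1", 1)) == "TSM"
```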
Ethernet Storage Networking
IP Storage Trends: A New Application Design is Emerging, and IP storage is tracking these changes
Design Considerations for IP SANs
• BANDWIDTH: Resource sharing amongst multiple application workloads
• LATENCY: Low latency, high throughput networks for sensitive workloads
• SCALABILITY: Meet the increasing demands of network and data without compromising fan-out and performance
• HIGH AVAILABILITY: Hardware and software redundancy needed at the Server, Fabric and Storage levels
Data Center Bandwidth Requirements
• Workloads are increasing demand on networks
Increased virtualization
Hyper-convergence
• Future technologies will drive requirements further
NVMe over fabrics
RoCE
iWARP
RDMA
Image courtesy of the Ethernet Alliance: http://www.ethernetalliance.org/wp-content/uploads/2015/03/Front-of-Map-04-28-15.jpg
Bandwidth = Better Efficiency: Higher Speed Links Improve Multi-Path Efficiency
• 20 x 10Gbps uplinks carrying 11 x 10Gbps flows (55% load): probability of 100% throughput = 3.27%
• 2 x 100Gbps uplinks carrying the same load: probability of 100% throughput = 99.95%
Source: On the Data Path Performance of Leaf-Spine Datacenter Fabrics - M. Alizadeh, T. Edsall: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627738
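The 3.27% figure can be reproduced with a back-of-the-envelope model (an assumption on my part, not taken from the slide): each 10G flow is hashed independently and uniformly onto one uplink, and throughput is 100% only if no 10G uplink carries two flows.

```python
from math import prod

def p_no_collision(n_links, n_flows):
    """P(all flows hash to distinct links) = n!/(n-k)! / n^k."""
    return prod((n_links - i) / n_links for i in range(n_flows))

# 11 x 10G flows over 20 x 10G uplinks: any two flows sharing a link
# oversubscribe it, so 100% throughput needs all flows on distinct links.
p_20x10g = p_no_collision(20, 11)
print(f"20 x 10G uplinks: {p_20x10g:.2%}")  # ~3.27%, matching the slide

# Over 2 x 100G uplinks each link absorbs up to ten 10G flows, so 100%
# throughput fails only if all 11 flows hash onto the same link:
p_2x100g = 1 - 2 * (0.5 ** 11)
print(f"2 x 100G uplinks: {p_2x100g:.2%}")
```

The 100G result under this simple model comes out near 99.9%; the slide's 99.95% presumably uses a slightly different flow model, but the orders of magnitude agree.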
Network Design Consideration for Latency
• 3-Tier Network: separate L2 and L3 tiers
• Spine-Leaf Network: combined L2/L3 fabric
Flexible Scalability
• Future data centers need to be flexible
Grow vertically (racks)
Grow horizontally (rows)
Scale performance
• Solution architectures and infrastructure need to be designed to meet these needs from the start
Typical High Availability
• Typical hierarchical design: dual networks for redundancy
• If a Network A path breaks, all traffic fails over to Network B
• If a core switch goes down, the entire Network A goes down
• The impact of a failure is high, so there is a greater requirement for equipment-level HA
Rethinking High Availability
• A Spine/Leaf topology reduces the impact of failure
• Reduces dependency on hardware redundancy
• Loss of a path reduces edge switch bandwidth only fractionally
• Loss of a Spine does not take down the entire fabric; only fractional capacity is lost
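The arithmetic behind "only fractional capacity is lost" is simple enough to state explicitly (the function name is mine, for illustration): in a dual-network design, losing one core removes an entire network, while in a spine/leaf fabric the same failure removes only one spine's share of the equal-cost paths.

```python
def capacity_after_failure(n_paths, n_failed):
    """Fraction of fabric capacity remaining after n_failed equal-cost paths fail."""
    return (n_paths - n_failed) / n_paths

assert capacity_after_failure(2, 1) == 0.5    # dual-core: half the capacity gone
assert capacity_after_failure(8, 1) == 0.875  # 8-spine fabric: only 12.5% lost
```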
Storage Networking Management
Data Center Components
• Network: Nexus and MDS Fabrics
• Virtual Compute: VMware
• Storage Arrays: FC Block Based Vendors
• Compute: Cisco UCS
Nexus and MDS Fabric Management Solution
• Automate, Configure, Visualize, Diagnose, Measure
• Scalable and Programmable
• Simplified Operations of NX-OS
Slowdrain Analytics Automates Troubleshooting
• Automates collection: collects the whole fabric at once, cutting collection time from hours to minutes
• Shows fluctuations in counters: graphs counters and lets the user zero in on specific counters
• Reduces false positives by prioritizing the ports with the highest severity counters
SAN Host Path Redundancy Analysis
On-Demand and Automated Redundancy Checks
• Reduce Mean Time to Repair
• Reduce risk during switch and array upgrades and hardware maintenance activities
Root causes detected include:
• Enclosure port down on redundant path
• VSAN initiator-to-target mismatch
• VSAN segmentation
• Both paths on the same line card
• No physical fabric separation
• Port LUN masking mismatch
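The checks above amount to set comparisons over a host's discovered paths. Here is a hedged sketch of two of them (the field names and function are invented for illustration, not the DCNM implementation):

```python
def check_redundancy(paths):
    """Return warnings for a host's paths, each a dict of fabric/switch/line_card."""
    warnings = []
    if len(paths) < 2:
        warnings.append("only one path discovered")
        return warnings
    # Physical fabric separation: paths must not all land in one fabric
    if len({p["fabric"] for p in paths}) < 2:
        warnings.append("no physical fabric separation")
    # Line-card diversity: paths must not all share one (switch, line card)
    if len({(p["switch"], p["line_card"]) for p in paths}) < 2:
        warnings.append("both paths on same line card")
    return warnings

paths = [
    {"fabric": "A", "switch": "MDS9710-FabA", "line_card": 1},
    {"fabric": "A", "switch": "MDS9710-FabA", "line_card": 1},  # not redundant
]
assert "no physical fabric separation" in check_redundancy(paths)
```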
Visualize Hypervisor: VMware
End-to-End Visibility from VMs and ESX hosts, through UCS or generic servers (FC + FICON, FCoE + iSCSI CNA adapters) and MDS or Nexus switches over ISL/FCIP, to the storage port and LUN(s)
• VM and ESX inventory
• Monitor VM and ESX CPU/Memory
• Monitor storage and network traffic & events
• Max datastore latency
• VM to datastore mapping
Visualize Compute: Cisco UCS
End-to-End Visibility from VMs and ESX hosts, through UCS Fabric Interconnects (e.g. UCS-FI-6120XP) and MDS or Nexus switches, to the storage port and LUN(s)
• Service Profile mapping
• Blade inventory
• Module and port view
Visualize Virtual and Physical Fabrics
End-to-End Visibility across the fabric (e.g. N5K-access-70/71 through MDS 9700s to the storage array)
• Switch and port events
• Average & peak TX/RX
• Switch CPU/Memory
• Performance monitoring of ISLs, Trunks, FCIP
• Discards and errors
Visualize Storage Systems
End-to-End Visibility down to the storage array
• Array port mapping
• Host to LUN mapping
• Array capacity view
• Array inventory
• Capacity report
Configure: Wizards and Templates
Fabric provisioning via wizards & templates:
• Provision Virtual Port Channel
• Provision Zone
• OS Upgrade Wizard
• Backup/Roll Back
• Data Replication
• FCIP IO Acceleration
Diagnose
• Database crashes
• Event filtering and de-duplication
• Misconfiguration analysis
• Rule-based fault notification
• Health score
Measure
View, report, and forecast capacity and performance:
• Identify bottlenecks
• Port capacity and array capacity
• Identify orphaned ports
• Top array ports & hosts
• Port inventory
DCNM Resources
http://cisco.com/go/dcnm
• Videos
• Release Notes
• Datasheets
• Configuration Guides
• Installation and Licensing Guide
• Programmable Guide
http://cisco.com/go/license
• Evaluation and Permanent Licensing
• License transfer to another DCNM server
Wrap Up
Storage Networking for Agile Data Centers
• Deploy Cisco MDS for Fibre Channel storage networks with superior performance, scale, and architectural flexibility
• Enable seamless multi-protocol fabrics with Nexus and MDS for highly efficient architectures that can adapt to changing storage requirements
• Provision, monitor, and automate physical and virtual datacenters using integrated and third-party cloud software platforms
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or from the Session Catalog on CiscoLive.com/us.
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
• BRKSAN-2883 - Advanced Storage Area Network Design
• BRKSAN-3101 - Troubleshooting Cisco MDS 9000 Fibre Channel Fabrics
• BRKSAN-3446 - SAN Congestion! Understanding, Troubleshooting, Mitigating in a Cisco Fabric
• PSOSAN-1089 - Cisco Storage Networking: Linking Technology and Innovation to Solve Business Problems
• BRKVIR-1121 - Fiber Channel Networking for the IP Network Engineer and SAN Core Edge Design Best Practices
Please join us for the Service Provider Innovation Talk featuring:
Yvette Kanouff | Senior Vice President and General Manager, SP Business
Joe Cozzolino | Senior Vice President, Cisco Services
Thursday, July 14th, 2016
11:30 am - 12:30 pm, in the Oceanside A room
What to expect from this innovation talk
• Insights on market trends and forecasts
• Preview of key technologies and capabilities
• Innovative demonstrations of the latest and greatest products
• Better understanding of how Cisco can help you succeed
Register to attend the session live now or
watch the broadcast on cisco.com
Data Center / Virtualization Cisco Education Offerings

Course: Introducing Cisco Data Center Networking (DCICN); Introducing Cisco Data Center Technologies (DCICT)
Description: Learn basic data center technologies and skills to build a data center infrastructure.
Certification: CCNA® Data Center

Course: Implementing Cisco Data Center Unified Fabric (DCUFI); Implementing Cisco Data Center Unified Computing (DCUCI); Designing Cisco Data Center Unified Computing (DCUDC); Designing Cisco Data Center Unified Fabric (DCUFD); Troubleshooting Cisco Data Center Unified Computing (DCUCT); Troubleshooting Cisco Data Center Unified Fabric (DCUFT)
Description: Obtain professional-level skills to design, configure, implement, and troubleshoot data center network infrastructure.
Certification: CCNP® Data Center

Course: Product Training Portfolio: DCNMM, DCAC9K, DCINX9K, DCMDS, DCUCS, DCNX1K, DCNX5K, DCNX7K
Description: Gain hands-on skills using Cisco solutions to configure, deploy, manage and troubleshoot unified computing, policy-driven and virtualized data center network infrastructure.

Course: Designing the FlexPod® Solution (FPDESIGN); Implementing and Administering the FlexPod® Solution (FPIMPADM)
Description: Learn how to design, implement and administer FlexPod solutions.
Certification: Cisco and NetApp Certified FlexPod® Specialist

For more details, please visit: http://learningnetwork.cisco.com
Questions? Visit the Learning@Cisco Booth or contact [email protected]
Thank you