
  • Intercloud Data Center ACI 1.0 Implementation Guide

    February 20, 2015

    Building Architectures to Solve Business Problems

  • CCDE, CCENT, CCSI, Cisco Eos, Cisco Explorer, Cisco HealthPresence, Cisco IronPort, the Cisco logo, Cisco Nurse Connect, Cisco Pulse, Cisco SensorBase, Cisco StackPower, Cisco StadiumVision, Cisco TelePresence, Cisco TrustSec, Cisco Unified Computing System, Cisco WebEx, DCE, Flip Channels, Flip for Good, Flip Mino, Flipshare (Design), Flip Ultra, Flip Video, Flip Video (Design), Instant Broadband, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn, Cisco Capital, Cisco Capital (Design), Cisco:Financed (Stylized), Cisco Store, Flip Gift Card, and One Million Acts of Green are service marks; and Access Registrar, Aironet, AllTouch, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Lumin, Cisco Nexus, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, Continuum, EtherFast, EtherSwitch, Event Center, Explorer, Follow Me Browsing, GainMaker, iLYNX, IOS, iPhone, IronPort, the IronPort logo, Laser Link, LightStream, Linksys, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, PCNow, PIX, PowerKEY, PowerPanels, PowerTV, PowerTV (Design), PowerVu, Prisma, ProConnect, ROSA, SenderBase, SMARTnet, Spectrum Expert, StackWise, WebEx, and the WebEx logo are registered trademarks of Cisco and/or its affiliates in the United States and certain other countries.

    All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1002R)

    THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

    The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California.

    NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

    IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

    Cisco Confidential Partners ONLY

    Intercloud Data Center ACI 1.0, Implementation Guide

    Service Provider Segment. © 2015 Cisco Systems, Inc. All rights reserved.

  • Contents

    Preface
      Audience

    Chapter 1  Solution Overview
      Implementation Overview
      Solution Architecture
      Mapping ACI Concepts to IaaS
      Service Tiers
        Reference IaaS Tenant Network Containers
          Bronze
          Silver
          Expanded Gold Container (E-Gold)
          Copper
      Solution Components

    Chapter 2  ACI Policy Model
      Accessing Managed Object Data through REST API
      Authenticating and Maintaining an API Session
      Layer 4 to Layer 7 Service Insertion
        L4 to L7 Service Parameters

    Chapter 3  Data Center Fabric Implementation with ACI
      Fabric Implementation Highlights
      APIC Attachment Points
      Fabric Load Balancing
      External Connectivity to PE
        vPC Connectivity to PE for L2 Extension
        vPC Configuration in ACI Fabric
        Port Channel with SVI for L3 Extension
        Port Channel Configuration on Border Leaf
      Connectivity to Compute
        vPC to Fabric Interconnects
        FEX to Bare Metal
        Attaching FEX to the ACI Fabric
        Profile Configuration
        Bare Metal Server Attachment to FEX
        Bare Metal and KVM Direct to 93128 and 9396
      Connectivity to Services Appliances
        ASA 5585 Active/Active Cluster Implementation
        ASA 5585 for Expanded-Gold and Copper using vPC
      Connectivity to Storage
        NetApp Cluster Connectivity for NFS
        vPC Configuration
        Storage Tenant Configuration
        Storage Multi-Tenancy Considerations
        High Availability Implications
      Data Center Fabric Management Out of Band (OOB)
        Connectivity to OOB Ports on all Fabric Switches
        Connectivity to APIC OOB Ports
        Connectivity from APIC to VMMs
        Connectivity from APIC to Services Appliances (ASA 5585)
      Deployment Considerations

    Chapter 4  VMware ICS Compute and Storage Implementation
      VMware Based FlexPod Aligned ICS
        Reference Architecture
        UCS Fabric Interconnects and B-Series Setup
        Cisco Application Virtual Switch (AVS)
          Forwarding Modes
          Cisco AVS Integration with VMware vCenter
          Cisco AVS Installation
          AVS Virtual Machine Kernel (VMK) NIC Connectivity
        NetApp NFS Storage

    Chapter 5  OpenStack Compute and Storage Implementation
      Physical Connectivity Layout
        C-Series Server Attachment
        C-Series Server NIC Layout
      OpenStack Services and Access Implementation
        MaaS and Juju Servers
        OpenStack Horizon Dashboard Access
        OpenStack SWIFT/RADOS Gateway Object Store Access
        OpenStack Host Access to NFS Storage
      Canonical OpenStack Implementation
        Metal as a Service (MaaS)
        Juju
        Charms
      Nexus 1000v for KVM Implementation
        Nexus 1000v High Availability Model
        Nexus 1000v Architecture
          Virtual Supervisor Module (VSM)
          Virtual Ethernet Module (VEM)
          OpenStack Nexus 1000v Components
          VXLAN Gateway (VXGW)
        Nexus 1000v Packet Flow
        Nexus 1000v Charms
          VSM Charm
          VEM Charm
          Quantum Gateway (Neutron) Charm
          Nova Cloud Controller Charm
          Nova Compute Charm
          OpenStack Dashboard Charm
        Nexus 1000v for KVM Work Flow
          OpenStack to Nexus 1000v Object Mapping
          Configuration Work Flow
      OpenStack Installation
        High Availability
        High Availability Components
        Ubuntu MaaS Installation
        Ubuntu Juju Installation
        Installation of OpenStack Icehouse using Ubuntu MaaS/Juju
          Juju-Deployer and Configuration File
          Deploying OpenStack Charms
          Post Juju-Deployer
          Troubleshooting
        Install Python OpenStack Clients
      OpenStack Configuration
        Tenant Configurations
        Networking Configuration
          Server Networking Configuration
          Tenant Networking Configuration
        Additional Nexus 1000v Configurations
      Storage Configuration and Implementation
        Block Storage with Ceph
        Block Storage with NetApp
        Image Storage
        Object Storage
      Instance Migration
        Cold Migration
        Live Migration
      Host Failure Scenarios
        Compute Nodes
        Control Nodes

    Chapter 6  WAN Edge Implementation with ASR9K
      Network Virtualization Edge on the ASR 9000
        Benefits
        Requirements
        Restrictions
        Control-Plane Extension
        Data Plane Extension
        Link Distribution
      ASR 9000 as the Data Center Provider Edge Router
        ASR 9000 Data Center Provider Edge Implementation Toward MPLS Core
        Provider Edge and Customer Edge BGP to Tenant
        L3 Bronze Configuration
        Provider Edge to Customer Edge using Static Routing
      ASR 9000 as Internet Router
        E-Gold Tenant Internet Connection Configuration on ASR 9000 Data Center Provider Edge
        Interface Configuration
        Routing Configuration
      Deployment Considerations

    Chapter 7  End-to-End QoS Implementation
      QoS Domains and Trust Boundaries
        QoS Transparency
        Trust Boundaries
        QoS per Service Tier
        Tenant Type Mapping to QoS Traffic Classes
      ACI Fabric QoS
        Classification
        Trust
        Marking
      UCS QoS
        AVS Encapsulation
        QoS System Class
        QoS Policy
      ASR 9000 Data Center PE QoS
      Deployment Considerations

    Chapter 8  Expanded Gold Tenant Container
      Dual Zones Layout for Workload VMs
      High Availability
      Traffic Flows
        Private Zone
        Demilitarized Zone
      Expanded Gold Tenant Container Configuration
        Prerequisites
        Summary of Steps
        Detailed Steps
        Decommission the Expanded Gold Tenant Container
      Expanded Gold Tenant Container with ASAv

    Chapter 9  Silver Tenant Container
      Silver Tenant Container Layout
        Physical Topology
        Logical Topology
        APIC Tenant Construction
          User Roles and Security Domain
          Create Tenant
          Private Network
          Bridge Domain
          Application Profile
          End Point Groups
          Filters
          Contracts
          External Routed Networks
        Traffic Flow Paths
      Server Load Balancing using NetScaler 1000v
        NetScaler 1000v Overview
        NetScaler 1000v Implementation
          One-Arm Mode
          High Availability (HA) Configuration
          Network Setup
        NetScaler 1000v L4-7 Load Balancing Policies
          Server
          Services / Service Groups
          Load Balanced Virtual Server
          Health Monitoring
        NetScaler 1000v Implementation using Service Graph
          Citrix NetScaler Device Package
          L4-L7 Devices Implementation
            Device Cluster (Logical Devices)
            Concrete Devices
            Logical Interfaces
          Service Graph
            Service Graph Configuration
            Configuring Device/Function Parameters under Service Graph
            Configuring L4-L7 Parameters under EPG
            Device Selection Policies
            Deploying Service Graph
          Network Parameter Configuration
        Load-Balancing Implementation
          Application HTTP
          Application FTP
          Application MySQL
        SSL Offload Implementation
      References

    Chapter 10  Bronze Tenant Container
      Overview
      Layer 3 Bronze
        Physical Topology
        Logical Topology
        Prerequisites
        L3 Bronze Tenant Configuration Procedure
        Verify Configuration
      L2 Bronze with Default Gateway on ASR 9000 nV Edge
        Physical Topology
        Logical Topology
        L2 Bronze Tenant Configuration Procedure
        Verify Configuration
      Deployment Considerations

    Chapter 11  Copper Tenant Container
      Copper Tenant Logical Layout
        Logical Topology
        Copper Container Traffic Flow
      ACI Fabric Configuration
        Overview
        ACI Link Configuration
        ACI Tenant Configuration
          Base Tenant Configuration
          Server-to-ASA Configuration
          ASA-to-ASR 9000 Configuration
          Object Storage (Swift/RADOS GW) Access Configuration
          NFS Storage Access Configuration
      ASA Firewall Configuration
        ASA System Context
          Interface Configuration
          BGP Configuration
        ASA Copper Context
          Base Configuration
          BGP Configuration
          NAT Configuration
      Deployment Considerations

  • Preface

    The Cisco Intercloud Data Center ACI 1.0 (ICDC ACI 1.0) system provides design and implementation guidance for building cloud infrastructures, for both enterprises deploying private cloud services and service providers building public cloud and virtual public cloud services. With the goal of providing an end-to-end system architecture, ICDC ACI 1.0 integrates Cisco and third-party products in the cloud computing ecosystem. This preface explains the objectives and intended audience of the Cisco Intercloud Data Center ACI 1.0 solution and this implementation guide.

    The Intercloud Data Center system is a continuation of the Virtualized Multi-Service Data Center (VMDC) systems, and this implementation guide is based on Application Centric Infrastructure (ACI) technology that Cisco has just released. In this first release of the implementation guide, focus is placed on showing how to build complex tenancy constructs using ACI.

    Product screen shots and other similar material in this guide are used for illustrative purposes only and show trademarks of EMC Corporation (VMAX), NetApp, Inc. (NetApp FAS3250), and VMware, Inc. (vSphere). All other marks and names mentioned herein may be trademarks of their respective companies.

    Use of the word partner or partnership does not imply a legal partnership relationship between Cisco and any other company.

    Audience

    This guide is intended for, but not limited to, system architects, network design engineers, system engineers, field consultants, advanced services specialists, and customers who want to understand how to deploy a public or private cloud data center infrastructure using ACI. This guide assumes that you are familiar with the basic concepts of Infrastructure as a Service (IaaS), the Cisco Virtualized Multi-Service Data Center (VMDC) solution, IP protocols, Quality of Service (QoS), and High Availability (HA), and that you are aware of general system requirements and data center technologies.

    This implementation guide provides guidance for cloud service providers building cloud infrastructures using Cisco Application Centric Infrastructure (ACI) technology. It is part of the Cisco reference design for cloud infrastructures, the Cisco Intercloud Data Center ACI 1.0 release.


  • Chapter 1  Solution Overview

    The goal of implementing cloud infrastructures is to provide highly scalable, efficient, and elastic services accessed on demand over the Internet or intranet. In the cloud, compute, storage, and network hardware are abstracted and delivered as a service to run the workloads that provide value to its users. The end users, also called tenants, use the functionality and value provided by the service as and when needed, without having to build and manage the underlying data center infrastructure. A cloud deployment model differs from traditional deployments in that the focus is on deploying applications by consuming a service from a provider; this results in business agility and lower cost, because only the resources needed are consumed, and only for the duration needed. For the provider of the cloud infrastructure, the compute, storage, networking, and services infrastructure in the data center is pooled together as a common shared fabric of resources, hosted at the provider's facility, and consumed by tenants through automation via APIs or portals. The key requirements for cloud service providers are multi-tenancy, high scale, automation to deploy tenant services, and operational ease.

    With the availability of ACI, powerful technology is now available to build highly scalable and programmable data center infrastructures. ACI brings software-defined networking principles, using a centralized policy controller to configure, deploy, and manage the data center infrastructure, including services appliances. It scales vastly by implementing overlay technology in hardware, yielding high performance and enhanced visibility into the network. It also introduces a different paradigm for designing and running applications in a multi-tenant data center environment, with enhanced security.

    ACI is supported on the new Nexus 9000 series switches, and the centralized policy controller is called the Application Policy Infrastructure Controller (APIC). A limited First Customer Shipment (FCS) release of this software was made in the summer of 2014, with General Availability (GA) in November 2014.

    This guide documents the implementation of reference Infrastructure as a Service (IaaS) containers using the FCS ACI software release and includes detailed configurations and findings based on solution validation in Cisco labs. The focus is to show the use of ACI-based constructs to build reference IaaS containers similar to those shown in past Cisco Virtualized Multi-Service Data Center (VMDC) Cisco Validated Designs (CVDs), to enable Cisco customers to understand how ACI can be applied to build cloud infrastructures. This is the first release of this system, and hence the focus has been on showing the functional capabilities of ACI and how they apply to building the reference containers. Validation of ACI scalability will be covered in subsequent updates or releases of this solution and implementation guide.

    This release of Intercloud Data Center ACI 1.0 includes the VMware vSphere-based hypervisor and uses the Cisco Application Virtual Switch (AVS) to extend ACI integration all the way to the virtual access layer; configuration of the virtual switch port-groups is also done via APIC. Additionally, an OpenStack-based compute pod is validated using the Nexus 1000V for KVM platform; this implementation is targeted at providing lower-cost hosting services. It was implemented using the Canonical distribution of the OpenStack Icehouse release with Ubuntu 14.04 LTS, and does not include APIC integration.

    Previous Cisco cloud reference designs were named Virtualized Multi-Service Data Center (VMDC), and going forward, the naming of these systems has been changed to Intercloud Data Center starting with this system release. For reference purposes, details are provided here about the previously released VMDC design and implementation guides. There have been several iterations of the VMDC solution, with each phase encompassing new platforms, versions, and technologies.

    VMDC Virtual Services Architecture 1.0/1.0.1/1.0.2

    VMDC 2.3

    VMDC Data Center Interconnect (DCI) 1.0/1.0.1

    VMDC Security 1.0

    This implementation guide introduces several ACI-based design elements and technologies:

    Scaling with VXLAN-based overlays: ACI uses VXLANs internally to scale beyond the 4000 VLAN limit when implementing Layer 2 (L2) segments.

    The data center fabric uses a Clos design, allowing for large cross-sectional bandwidth and smaller failure domains using dedicated spines. All servers and external networks attach to the leaf nodes.

    Centralized policy control: SDN and a programmable data center network. The whole ACI fabric can be configured using the APIC GUI or the REST API.

    Multi-tenant configuration model: ACI configuration is multi-tenant by design and allows tenant elements to be configured using role-based access control (RBAC).

    Application centric deployment models and application security.

    Integration with Virtual Machine Managers (VMMs): vSphere 5.1 using the Application Virtual Switch.

    Service integration of Firewall and Server Load Balancer using ACI Service Graphing technology.

    The Intercloud ACI 1.0 solution addresses the following key requirements for cloud infrastructure providers:

    1. Tenancy Scale: Multi-tenant cloud infrastructures require the use of multiple Layer 2 segments per tenant, and each tenant needs Layer 3 contexts for isolation, to support security as well as overlapping IP address spaces. These are typically implemented as VLANs and VRFs on the data center access and aggregation layers, with the Layer 3 isolation extended all the way to the DC provider edge. Due to the 4000 VLAN limit, overlays are required, and ACI uses VXLAN technology within the fabric to scale to a very high number of bridge domains. The number of tenants is similarly very high, with plans to support 64,000 tenants in future releases. The implementation of VXLAN in hardware provides large scale, high performance and throughput, innovative visibility into tenant traffic, and new security models.

    2. Programmable DC Network: The data center network is configured using APIC, which is the central policy control element. The DC fabric and tenant configurations can be created via the APIC GUI or via REST API calls, allowing for a highly programmable and automatable data center (a minimal REST sketch follows this list). There is integration with the Virtual Machine Manager, currently VMware vSphere 5.1 using the Application Virtual Switch (AVS), so that tenant L2 segments can be created via APIC.

    3. Integration of Services: Deploying services for tenants, such as firewalls and server load balancers, normally requires separate configuration of these devices via orchestration tools. With ACI, these devices can also be configured via APIC, allowing a single point of configuration for the data center services. Each service platform publishes its supported configuration items via a device package, which APIC then exposes via its user interface. Currently Cisco ASA firewalls and Citrix NetScaler server load balancers (SLBs) are among the supported devices, and a number of other vendors are building their own device packages to allow integration with ACI.
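    The following minimal Python sketch (not taken from this guide) illustrates the REST-based configuration model described in item 2: it authenticates to the APIC and then creates a tenant. The APIC address, credentials, and tenant name are placeholder assumptions, and a production script would verify certificates and handle errors.

```python
# Minimal APIC REST API sketch: log in, then create a tenant.
# Hypothetical values: APIC address, credentials, and tenant name.
import requests

APIC = "https://apic.example.com"
session = requests.Session()

# Authenticate with aaaLogin; the returned token is kept by the session
# as the APIC-cookie and sent on subsequent requests.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)
resp.raise_for_status()

# Create (or update) a tenant by posting an fvTenant managed object to the
# policy universe (uni); the same POST pattern applies to all ACI objects.
tenant = {"fvTenant": {"attributes": {"name": "Bronze001"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
resp.raise_for_status()

# Sessions expire; periodically GET /api/aaaRefresh.json to keep them alive.
```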


    In summary, the Intercloud ACI 1.0 solution provides the following benefits to cloud providers:

    Increased tenancy scale: up to 64,000 tenants (in future releases).

    Increased L2 segment scale: VXLAN overlays in the fabric provide higher L2 scale and also normalize the encapsulation on the wire.

    A single Clos-based data center fabric that scales horizontally by adding more leafs.

    Large cross-sectional bandwidth using the Clos fabric, smaller failure domains, and enhanced HA using ACI virtual port channels from two leaf nodes to an external device.

    SDN: software-defined network and services integration, with all configuration through a centralized policy controller.

    Improved agility and elasticity due to programmability of the network.

    Enhanced security and application centric deployment.

    Multi-tenancy and RBAC built-in.

    APIC provides integration with the virtual access layer using the Application Virtual Switch for VMware vSphere 5.1 hypervisor-based virtual machines; no additional configuration is required.

    OpenStack Icehouse-based compute with a Nexus 1000V for KVM virtual switch implementation to support tenants that need OpenStack-based IaaS service. Both traditional storage and software-defined storage using Red Hat Ceph are covered as storage options.

    The Intercloud ACI 1.0 solution (as validated) is built around Cisco UCS, AVS, Nexus 9000 ACI switches, APIC, ASR 9000, Adaptive Security Appliance (ASA), Cisco NetScaler 1000V, VMware vSphere 5.1, Canonical OpenStack, KVM, Nexus 1000V, NetApp FAS storage arrays and Ceph storage.

    Figure 1-1 shows the functional infrastructure components comprising the Intercloud ACI 1.0 solution.

    Figure 1-1 Intercloud ACI 1.0 Infrastructure Components

    (Figure 1-1 components: Data Center PE with Cisco ASR 9000 or ASR 1000; Data Center Network with ACI, consisting of Nexus 9508 spines with 9736PQ line cards, Nexus 9396PX and 93128TX leafs, Nexus 2232/2248 FEX, and APIC; Services with NetScaler 1000V, ASA 5585-X, and ASAv; Compute with UCS 6200 Fabric Interconnects and UCS B-Series blade servers plus UCS C-Series rack servers; Hypervisors with VMware vSphere 5.1 and OpenStack KVM; Virtual Access with the Application Virtual Switch; Storage with NetApp FAS and EMC VMAX, VNX, or other arrays; Management with APIC, VMware vCenter, OpenStack Horizon, and Cisco UCSM.)

    Implementation Overview

    The Intercloud ACI 1.0 solution utilizes a Clos design for a large-capacity DC fabric with High Availability (HA) and scalability. All external devices are connected to the leaf nodes. This design uses multiple Nexus 9500 series spine switches; at least two spine switches are required, with four spines preferred to provide smaller failure domains. Each Nexus 9300 series leaf node is connected to all spines using 40-Gbps connections, and the path between leafs is highly available via any of the spines.

    The external devices are attached to leaf nodes; these include integrated compute stacks, services appliances such as firewalls and server load balancers, and the WAN routers that form the DC provider edge. These devices are attached to two Nexus 9300 series leaf nodes using virtual port channels to protect against a single leaf or link failure. Each service appliance also supports high availability using redundant appliances, in either active/standby or active/active cluster mode, to provide HA and scale. The fabric normalizes the encapsulation used toward each external device and re-encapsulates using enhanced VXLAN within the fabric; this allows for highly flexible connectivity options and horizontal scaling. By allowing all types of devices to connect to a common fabric, interconnected using overlays, data centers can be built in a highly scalable and flexible manner and expanded by adding more leaf nodes as needed.

    Using the Application Virtual Switch (AVS) for VMware vSphere-based workloads extends the ACI fabric to the virtual compute workloads, with the port-groups for the different tenant segments and endpoint groups created via the APIC.

    BGP or static routing is used to connect the ACI fabric to the ASR 9000 DC Edge for Layer-3 external connectivity models, while L2 external connectivity to the ASR 9000 is used for some tenant containers.

    Solution Architecture

    The Intercloud Data Center ACI 1.0 architecture comprises the ACI fabric, the WAN layer, and the compute and services layers. All of the layers attach to the ACI fabric leaf nodes, and the choice of which devices attach to which leafs is driven by physical considerations as well as per-leaf scale considerations.

    Figure 1-2 shows a logical representation of the Intercloud ACI 1.0 solution architecture.


    Figure 1-2 Logical Representation of Intercloud ACI 1.0 Solution Architecture

    The layers of the architecture are briefly described below.

    WAN/Edge: The WAN or DC edge layer connects the DC to the WAN. Typically, this provides IP or Multiprotocol Label Switching (MPLS) based connectivity to the Internet or intranet. The ASR 9010 is used as an MPLS provider edge router in this design, providing L3VPN connectivity to the provider IP/MPLS network as well as the Internet gateway function. It also aggregates all of the data center pods, which connect directly to the ASR 9010 provider edge; each pod can be an ACI fabric or a legacy data center pod in a brownfield deployment scenario. The ASR 9010 is utilized in Network Virtualization (nV) mode, where two physical ASR 9000 devices share a single control plane and appear as a single logical device to adjacent nodes. The ACI fabric can connect to the ASR 9000 using external Layer 3 routed connections or using Layer 2 extension with vPC. Tenant separation is done using VRFs on the ASR 9000 series routers.

    ACI Fabric

    Spine: Nexus 9508 switches with 9736PQ line cards are used as the ACI fabric spine. In this implementation, four Nexus 9508 spine nodes are used. Each 9736PQ line card has 36 40-Gbps ports, and up to eight such line cards can be installed in each Nexus 9508 chassis. Only leaf nodes attach to the spines. A leaf can be connected to each spine via a single 40G link or via multiple 40G links. Since every leaf connects to every spine, the number of spine ports determines the total size of the fabric, and additional line cards can be added to the spine nodes to increase the number of leaf nodes supported. Additional form factors of the Nexus 9500 ACI spine node will be released in the future.

    (Figure 1-2 notes: tenant routes are carried over L3VPN in the WAN/MPLS core toward the tenant sites; per tenant, iBGP, static routing, or L2 external is used between the ACI fabric and the DC PE, and any leaf can be a border leaf connecting to the DC PE; L2 external is over vPC; tenant VM default gateways are on the ACI fabric or on an ASA firewall; the ASR 9000 nV pair forms the DC edge; compute includes the VMware-based ICS, OpenStack-based compute, and bare metal servers, with Ceph nodes for storage, OOB management, and vCenter/MaaS/Juju in the management layer.)

    Leafs: Nexus 9396PX or Nexus 93128TX leaf switches can be used. These switches have 12 40G ports for connecting to the spines. All connections external to the ACI fabric are made using the edge 1G/10GE ports on the leaf nodes; this includes connections to the ICS, the WAN/edge provider edge, services appliances, and storage devices. Consumption of hardware resources per leaf node determines the per-leaf scale for MAC addresses, endpoint groups, bridge domains, and security policy filters. The fabric allows for very high scaling by adding more leaf nodes as needed.

    Services: Network and security services, such as firewalls, server load balancers, intrusion prevention systems, application-based firewalls, and network analysis modules, attach directly to the Nexus 9300 series leaf switches. Virtual port channels are used to connect to two different leaf nodes for HA. In this implementation, ASA 5585-X physical firewalls are used and connected via vPC to a pair of Nexus 9300 top-of-rack switches. Virtual appliances such as the ASAv virtual firewall and the NetScaler 1000V virtual SLB are also used, but these run on the VMware vSphere hypervisor on the integrated compute stack.

    Integrated Compute Stack using VMware vSphere: This is an ICS stack such as FlexPod or Vblock. These typically consist of racks of UCS-based compute and storage devices, and they attach to a pair of Nexus 9300 series leafs. Storage can be via IP transport such as NFS, iSCSI, or CIFS. Alternatively, FC/FCoE-based SANs can be used by connecting the UCS 6200 Fabric Interconnects to a pair of SAN fabrics implemented with MDS switches. The compute and storage layer in the Intercloud ACI 1.0 solution has been validated with a FlexPod-aligned implementation using the following components:

    Compute: Cisco UCS 6296 Fabric Interconnect switches with UCS 5108 blade chassis populated with UCS B200 and B230 half-width blades. VMware vSphere 5.1 ESXi is the hypervisor for virtualizing the UCS blade servers.

    Storage: IP-based storage connected directly to the Nexus 9300 series leaf switches. NetApp FAS storage devices (10G interfaces) are connected directly to the leaf nodes, and NFS-based storage is used for tenant workloads.

    Virtual Access: The Cisco Application Virtual Switch (AVS) is used on VMware vSphere 5.1 with full APIC integration. APIC creates a port-group for each EPG and maps it to a VLAN on the wire.

    OpenStack Compute Pod: OpenStack is set up as an alternative for tenants that want OpenStack-based virtualization. The Canonical OpenStack Icehouse release with Ubuntu 14.04 LTS Linux is utilized, with a three-node high-availability configuration. Both control nodes and compute nodes are Cisco UCS C-Series servers connected to the ACI fabric using virtual port channels. The virtual access switch is the Nexus 1000V for KVM, using the Nexus 1000V Neutron plugin. For this implementation, OpenStack compute is validated with the Copper container only, and hence the default gateway for all tenant VMs is the ASA firewall. Each tenant gets an ASA sub-interface, which is extended via the ACI fabric to the compute layer hosting the tenant VMs. This release with OpenStack Icehouse does not include integration between APIC and OpenStack, and the tenant EPGs are statically mapped to VLANs.

    Compute: Cisco UCS C-Series servers. These are also Ceph nodes; local disks are configured and used by Ceph as OSDs. The compute nodes also have access to traditional storage using NetApp.

    Storage

    Traditional storage using NetApp NFS shares. Cinder is set up to mount the NFS shares on the compute nodes and use them for running instances.

    Software-defined storage using Ceph. Compute nodes use the built-in RBD client to access the Ceph OSDs.


    Swift service for tenants is provided via the RADOS Gateway.

    Glance is set up with the RBD client so that image storage also uses Ceph.

    Virtual Access: The Nexus 1000V for KVM is used as the virtual switch. Networks are created on the Horizon dashboard and published to the Nexus 1000V VSM. The Nexus 1000V Neutron plugin is used.

    Figure 1-3 provides a logical representation of the OpenStack Pod.

    Figure 1-3 Logical Representation of OpenStack Pod

    The OpenStack implementation targets smaller deployments of up to 256 hosts, per the current Nexus 1000V for KVM verified scalability, and can scale higher in future releases. High availability for the OpenStack control plane is implemented with three nodes running all OpenStack services in an active/active cluster configuration. Canonical-recommended High Availability (HA) designs call for running each OpenStack service on a separate node for production and scaled-up environments, or alternatively running services on independent virtual machines during staging. For this implementation, a three-node HA cluster was set up, and Linux containers (LXC) are used to isolate the individual OpenStack services on these nodes (Figure 1-4).

    Figure 1-4 OS Services Mapping to Small HA Model

    (Figure 1-3 and Figure 1-4 notes: the management PoD hosts the MaaS and Juju build nodes, the OpenStack control nodes with RADOS Gateway, and the Nexus 1000V VSM nodes on UCS C220/C240-M3 servers; the workload PoDs host the compute nodes, which are also Ceph OSD/MON nodes, with NetApp NFS via Cinder. The management network is via a Nexus 5000 pair, with the UCS C-Series servers attached via bonded or dual Ethernet; the data network is via the ACI fabric, with control and compute nodes attached via bonded 2x10G links to an ACI leaf pair, and the storage network is one of the VLANs on those bonded links. The three control nodes run HAProxy, a RabbitMQ cluster, and a MySQL Percona/Galera cluster, plus the Keystone, Glance, Neutron, Nova, Cinder, Horizon, and RADOS GW services.)

    Mapping ACI Concepts to IaaS

    In this section, a brief review of key ACI concepts is followed by considerations for their use in cloud service provider deployments of IaaS services.

    Note Refer to the following document for more details on ACI terminology. http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731960.html

    End Point Group (EPG): A set of endpoints, either VMs or hosts, to be treated the same from a policy perspective. From the perspective of the ACI fabric, each endpoint is a MAC address and IP address. In virtualized environments (currently only VMware vSphere), the EPGs are extended all the way to the virtual switch with the Cisco Application Virtual Switch, and port-groups are created by the APIC on vCenter; VMs are then attached to the port-group for the specific EPG. Currently, EPGs can be mapped to VMM domains, wherein the APIC automatically assigns a VLAN (from a pool) and creates a port-group with the name of the EPG, indicating to server admins which port-group to attach VMs to. The alternative for non-integrated external devices is to statically map an EPG to a certain VLAN on an interface. Multiple such VLANs are allowed at different points in the fabric, allowing flexibility in stitching together a tenant container.

    Contracts: A whitelist policy that opens specific TCP/UDP ports to allow communication between EPGs. By default, communication between EPGs is not allowed (deny everything). Using contracts, specific protocols and services are permitted between EPGs.

    Note: Within an EPG, all communication is allowed without restriction.

    Note: Some protocols are not filtered by contracts; see the following URL: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/release/notes/aci_nxos_rn_1102.html

    Application Profile: A set of EPGs and the contracts between them, implementing a specific multi-tier application. For example, a three-tier web/app/db application might have three EPGs, with contracts for outside-to-web, web-to-app, and app-to-db traffic. Together these form the application profile.

    APIC Tenants: APIC is multi-tenant by design, and policies and configuration are created on a per-tenant basis. Role-based access control allows each tenant admin to configure policies for that specific tenant only.

    Bridge Domains: Bridge domains are L2 segments overlaid on the fabric. At the edges, tenant bridge domains are mapped to VLANs or VXLANs on the wire and carried across the fabric with enhanced VXLAN encapsulation.

    Private Networks: Private networks are similar to VRFs on traditional routers. Each private network has its own addressing and routing space.

    Subnets: Subnets are IP subnets attached to bridge domains. One or more subnets can be attached to a bridge domain, similar to primary and secondary addresses. SVIs are created on the fabric for these subnets and exist on all of the leaf nodes where the bridge domain exists, providing a default gateway for these subnets at the local leaf.


    External Routing Options: Currently, iBGP sessions or static routing can be used between a border leaf and an external router, on a per-tenant basis. The scale of external routing adjacencies per leaf is currently 32 iBGP sessions, and only one session per tenant is allowed per leaf. A contract is required to allow destinations outside of the fabric to be reached from inside, and an external EPG is created to represent the outside destinations.

    L2 External: When Layer 2 connections are extended outside of the ACI fabric, L2 external connections can be configured, with a contract to secure traffic between external and internal endpoints.
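    As a concrete illustration of how these constructs fit together, the following hedged sketch (not from this guide) posts a simple one-segment tenant container to the APIC: a private network (fvCtx), a bridge domain with a subnet (fvBD/fvSubnet), an application profile with one EPG bound to that bridge domain (fvAp/fvAEPg), and a contract with one subject (vzBrCP/vzSubj). It assumes the session and APIC variables from the earlier login sketch; all names and addresses are placeholder assumptions.

```python
# Hypothetical one-segment tenant container posted as a single MO tree.
# Assumes `session` and `APIC` from the earlier aaaLogin sketch.
container = {
    "fvTenant": {
        "attributes": {"name": "Bronze001"},
        "children": [
            # Private network: VRF-like routing and addressing space
            {"fvCtx": {"attributes": {"name": "Bronze001_VRF"}}},
            # Bridge domain (L2 segment) in that private network, with one subnet
            {"fvBD": {
                "attributes": {"name": "Bronze001_BD"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "Bronze001_VRF"}}},
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
                ],
            }},
            # Application profile with one EPG mapped to the bridge domain;
            # the EPG provides the contract so outside consumers can reach it.
            {"fvAp": {
                "attributes": {"name": "Bronze001_AP"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "Bronze001_EPG"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "Bronze001_BD"}}},
                            {"fvRsProv": {"attributes": {"tnVzBrCPName": "Bronze001_allowed"}}},
                        ],
                    }},
                ],
            }},
            # Contract whitelisting traffic; the subject references the
            # predefined "default" filter here only for brevity.
            {"vzBrCP": {
                "attributes": {"name": "Bronze001_allowed"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "any"},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "default"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=container, verify=False)
resp.raise_for_status()
```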

    From the perspective of IaaS services offered by cloud service providers, the following considerations apply in this implementation:

    1. CSPs can use APIC tenancy constructs to provide multi-tenant, role-based access control to the configuration. The APIC and ACI fabric can scale to a large number of tenants; however, the currently released software has a verified scalability of 100 tenants.

    2. IaaS cloud service providers want to provide logical containers for hosting VMs without being aware of application specifics. On ACI, this maps to the CSP providing bridge domains (L2 segments) to tenants and creating one EPG per bridge domain to host any number of applications. The contracts then need to allow access to all application services hosted in that L2 segment. While multiple EPGs can be mapped to the same BD to save hardware resources in the leaf VLAN table, a separate BD per EPG is used in this implementation to isolate multicast and broadcast traffic.

    3. Use of L3 versus L2 based containers: Currently the ACI fabric verified scalability is 100 VRFs (called private networks in ACI), and hence using a VRF per tenant allows for that many tenants. To scale beyond that limit, for some tenancy models, instead of creating a per-tenant APIC tenant and VRF, just an L2 segment is created and the default gateway is set up on an external device. This is a particularly good choice for low-end tenants with no features/services, such as the Bronze and Copper tenancy models, and it allows the number of such tenants to scale very high.

    4. Use of service graphing: Service graphing allows APIC to configure services devices such as firewalls and load balancers. In the current software release there is no redirection capability, so all traffic has to be routed or switched to the services appliance explicitly. Additionally, there is no routing within the fabric, which restricts the stitching of services to a subset of scenarios. In this implementation, one-arm routed mode is used for the server load balancer, with the default gateway on the ACI fabric. For the ASA firewall in routed mode, however, the default gateway has to be on the ASA firewall and not on the ACI fabric, and hence that model is implemented in this release (a minimal query sketch follows this list).

    5. Additional restrictions on service graphing are covered in detail in later chapters of this implementation guide.
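    As a small, read-only illustration of the service graphing objects mentioned in item 4, the hedged sketch below lists the L4-L7 logical device clusters (vnsLDevVip) and service graph templates (vnsAbsGraph) known to the APIC via the class-level query API. It again assumes the session and APIC variables from the earlier login sketch and is not taken from this guide.

```python
# Read-only class queries (assumes `session`/`APIC` from the login sketch):
# vnsLDevVip = L4-L7 logical device cluster, vnsAbsGraph = service graph template.
for cls in ("vnsLDevVip", "vnsAbsGraph"):
    resp = session.get(f"{APIC}/api/node/class/{cls}.json", verify=False)
    resp.raise_for_status()
    for obj in resp.json()["imdata"]:
        attrs = obj[cls]["attributes"]
        print(cls, attrs["dn"], attrs.get("name", ""))
```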

    Service Tiers

    Cloud providers, whether service providers or enterprises, want an IaaS offering that has multiple feature tiers and pricing levels. To tailor workload or application requirements to specific customer needs, the cloud provider can differentiate services with a multi-tiered service infrastructure and Quality of Service (QoS) settings. The Cisco Intercloud architecture allows customers to build differentiated service tiers and service level agreements that support their tenant or application requirements. Such services can be used and purchased under a variable pricing model. Infrastructure and resource pools can be designed so that end users can add or expand services by requesting additional compute, storage, or network capacity. This elasticity allows the provider to maximize the user experience by offering a custom, private data center in virtual form.


    The Intercloud ACI 1.0 solution supports a reference multi-tier IaaS service model of Gold, Silver, Bronze, and Copper tiers, very similar to what was shown in the previous Cisco VMDC reference designs. These service tiers (or network containers) define resource and service levels for compute, storage, and network performance. This is not meant to be a strict definition of appliance and resource allocation, but to demonstrate how differentiated service tiers could be built. The tiers are differentiated based on the following features:

    Network Resources: Differentiation based on network resources and features.

    Application Tiers: Service tiers can provide differentiated support for application hosting. In some instances, applications may require several tiers of VMs (for example, web, application, and database). Intercloud ACI 1.0 Gold and Silver class tenant containers are defined with three application tiers on three separate bridge domains and three separate EPGs, to host web, application, and database services on different VMs. The Bronze and Copper services are defined with one bridge domain and one EPG only, so multi-tiered applications must reside on the same L2 segment, or potentially on the same VM (a LAMP or WAMP stack: Linux or Windows with Apache, MySQL, and PHP/Perl/Python).

    Access Methods and Security: The Gold and Silver service tiers are defined with separate per-tenant service appliances to provide security and isolation. The Gold tier offers the most flexible access methods: Internet, L3VPN, and secure VPN access over the Internet. The Gold tier also has multiple security zones for each tenant. The Silver and Bronze tiers do not support any perimeter firewall service and provide access through L3VPN only. The Copper tier supports access over the Internet only, along with perimeter firewall service and NAT. In this release, the goal was to implement all of the services through the APIC using the service graphing feature; however, device package support was not yet available for certain functionality at the time of testing, notably NAT and RA-VPN/secure VPN access in the ASA device package. These services can still be implemented by configuring the service appliance directly, and in the future they will be supported via APIC.

    Stateful Services: Tenant workloads can also be differentiated by the services applied to each tier. The Expanded Gold tier is defined with an ASA-based perimeter firewall and dual security zones (a PVT zone and a DMZ zone). Both the physical ASA 5585-X and the ASAv were validated, and either option can be used depending on customer requirements. The ASA 5585-X based implementation uses multi-context mode, with each tenant getting a context on a pair of physical ASAs, whereas with the ASAv, each tenant gets a pair of dedicated single-context ASAv instances. Support for configuring policies inside an ASA context on a multi-context ASA through APIC will come in a future release; in this implementation, beta code was used to validate this functionality. The Gold and Silver tiers are defined with a NetScaler 1000V SLB service. The Bronze tier is defined with no firewall or SLB services. The Copper tier provides NAT and perimeter firewall services with a context shared among all Copper tenants on the ASA 5585 firewall.

    QoS: Bandwidth guarantees and traffic treatment can be a key differentiator. QoS policies can provide different traffic classes to different tenant types and prioritize bandwidth by service tier. The Gold tier supports VoIP/real-time traffic, call signaling, and data classes, while the Silver, Bronze, and Copper tiers have only a data class. Additionally, Gold and Silver tenants are guaranteed bandwidth, with Gold getting more bandwidth than Silver. In this release, ACI does not support rate limiting. Also, deploying different classes of traffic for the same tenant requires either separating the traffic by EPGs or trusting the DSCP set by the tenant VM.

    VM Resources: Service tiers can vary based on specific VM attributes such as CPU, memory, and storage capacity. The Gold service tier is defined with VMs of 4 vCPUs and 16 GB of memory, the Silver tier with VMs of 2 vCPUs and 8 GB, and the Bronze and Copper tiers with VMs of 1 vCPU and 4 GB each.


    Storage Resources: Storage multi-tenancy on NetApp FAS storage arrays using clustered Data ONTAP was implemented to provide dedicated NetApp Storage Virtual Machines (SVMs) to Gold class tenants, whereas Silver tenants share a single SVM but use dedicated volumes, and Bronze and Copper tenants share volumes as well. Storage performance can also be differentiated; for example, the Gold tier is defined on 15,000-rpm FC disks, the Silver tier on 10,000-rpm FC disks, and the Bronze tier on Serial ATA (SATA) disks. Additionally, to meet data store protection, recovery point, or recovery time objectives, service tiers can vary based on provided storage features such as Redundant Array of Independent Disks (RAID) levels, disk types and speeds, and backup and snapshot capabilities.

    Table 1-1 lists the four service tiers, or network container models, defined and validated in the Intercloud ACI 1.0 solution. Cloud providers can use this as a basis to define their own custom service tiers based on their own deployment requirements. For similarly differentiated compute and storage offerings, reference service tiers can be found in the previously published Cisco VMDC VSA 1.0 Implementation Guide.

    Table 1-1 Service Tiers

    Secure Zones: E-Gold: two (PVT and DMZ). Silver: 1. Bronze: 1. Copper: 1.

    Perimeter Firewalls: E-Gold: two. Silver: none. Bronze: none. Copper: one, shared with other Copper tenants.

    Access Methods: E-Gold: Internet, L3VPN, RA-VPN. Silver: L3VPN. Bronze: L3VPN. Copper: Internet.

    Public IP/NAT: E-Gold: yes. Silver: n/a. Bronze: n/a. Copper: yes.

    VM L2 Segments (1 segment = 1 BD and 1 EPG): E-Gold: 3 in PVT zone, 1 in DMZ zone. Silver: 3 in PVT. Bronze: 1 in PVT. Copper: 1.

    External Routing: E-Gold: static. Silver: iBGP or static. Bronze: iBGP or static. Copper: eBGP or static.

    Default Gateway: E-Gold: ASA. Silver: ACI fabric. Bronze: ACI fabric. Copper: ASA.

    Security between L2 Segments: E-Gold: ASA. Silver: ACI fabric. Bronze: not available. Copper: OpenStack security groups.

    Services: E-Gold: ASA or ASAv based perimeter firewall, ASA or ASAv based firewall between L2 segments, DMZ zone, NetScaler 1000V based SLB (one per zone), NAT on ASA (not via service graphs), RA-VPN with ASAv (not tested). Silver: NetScaler 1000V based SLB. Bronze: none. Copper: ASA based Internet firewall, NAT (not via service graphs).

    QoS: E-Gold: three traffic classes allowed: (1) dscp=ef real-time with low-latency switching, (2) dscp=cs3 call signaling (lumped with tenant data inside the ACI fabric), (3) tenant data mapped to the premium data class (bandwidth guaranteed). Silver: all tenant data mapped to the premium data class (bandwidth guaranteed). Bronze: standard data class, available bandwidth service (best effort). Copper: standard data class, available bandwidth service (best effort).

    Reference IaaS Tenant Network Containers

    The tenant network container is a logical (virtual) slice of the shared physical network resources, end to end through the data center, that carries a specific tenant's traffic. The physical infrastructure is common to all tenants, but by utilizing ACI multi-tenancy constructs, each tenant gets its own L2 segments and L3 routing instances. These connect the tenant compute, through segregated overlay networks isolated from other tenants, to the data center provider edge routers, where each tenant is isolated using VRFs and further extended via L3VPN to the tenant sites. Hence the tenants appear to have their own isolated network with independent IP addressing and security policies. Service appliances such as ASA firewalls are either multi-context, with each tenant getting a context, or virtual appliances, with each tenant getting its own dedicated ASAv and NetScaler 1000V VMs.

    Figure 1-5 shows the reference IaaS Tenant containers defined in different versions of the Cisco VMDC reference architecture.


    Figure 1-5 IaaS Tenant Containers

    In this document, implementation details of Expanded-Gold, Silver, Bronze and Copper containers are provided. A high level overview of implementing these containers with ACI is provided here, and the specific implementation and configuration details are provided in subsequent chapters on each of the container types.

    First the simplest container, Bronze, is explained, followed by Silver and E-Gold. Lastly, the Copper container is described, which has a shared firewall and Internet-based access for a low-cost tenancy model.

    Bronze

    The Bronze reference container is a simple, low-cost tenancy container.

    Each Bronze tenant container has one Layer 2 segment for tenant VMs, implemented with one ACI BD/EPG. There is one VRF on the Data Center provider edge for each Bronze tenant, and tenants access their cloud service over L3VPN.

    The Bronze Tenant traffic is mapped into the standard data class and can use available bandwidth (best effort), that is, no bandwidth guarantee.

    There are two options to implement Bronze with ACI, with different scaling considerations.

    L3-Bronze: The default gateway for the VMs is on the ACI fabric. L3 external routing, either iBGP or static, is used between the ACI fabric and the DC PE for each L3-Bronze container. On the data center provider edge router, a VRF for each L3-Bronze tenant is used, with a sub-interface toward the ACI fabric. Two independent L3 links are configured to two different leafs to provide redundancy for high availability. Each leaf runs an iBGP session or has static routing configured. (A minimal sketch of the corresponding external routed network object follows.)
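    The hedged sketch below shows roughly how the per-tenant L3 external routed network for such a container could be expressed as an l3extOut object, again assuming the session and APIC variables from the earlier sketches. The node ID, interface, VLAN, and addressing values are placeholders, the BGP peer or static route settings under the node and interface profiles are omitted, and the detailed procedure is in the Bronze Tenant Container chapter.

```python
# Hypothetical per-tenant L3Out skeleton for an L3-Bronze container.
# Assumes `session`/`APIC` from the login sketch; node, port, VLAN, and
# addresses are placeholders, and BGP peer/static route details are omitted.
l3out = {
    "l3extOut": {
        "attributes": {"name": "Bronze001_L3out"},
        "children": [
            # Bind the L3Out to the tenant's private network (VRF)
            {"l3extRsEctx": {"attributes": {"tnFvCtxName": "Bronze001_VRF"}}},
            # Enable BGP on this L3Out (omit for a static-only design)
            {"bgpExtP": {"attributes": {}}},
            # Border leaf node profile and routed sub-interface toward the DC PE
            {"l3extLNodeP": {
                "attributes": {"name": "borderLeaf"},
                "children": [
                    {"l3extRsNodeL3OutAtt": {"attributes": {
                        "tDn": "topology/pod-1/node-101",
                        "rtrId": "10.0.0.101"}}},
                    {"l3extLIfP": {
                        "attributes": {"name": "toPE"},
                        "children": [
                            {"l3extRsPathL3OutAtt": {"attributes": {
                                "tDn": "topology/pod-1/paths-101/pathep-[eth1/1]",
                                "ifInstT": "sub-interface",
                                "encap": "vlan-101",
                                "addr": "192.168.1.2/30"}}},
                        ],
                    }},
                ],
            }},
            # External EPG representing all outside destinations
            {"l3extInstP": {
                "attributes": {"name": "Bronze001_ext"},
                "children": [
                    {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}},
                ],
            }},
        ],
    }
}
# POST under the tenant's DN (uni/tn-Bronze001)
resp = session.post(f"{APIC}/api/mo/uni/tn-Bronze001.json",
                    json=l3out, verify=False)
resp.raise_for_status()
```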

    (Figure 1-5 depicts the reference containers: Bronze, Silver, Gold, Expanded Gold, Expanded Palladium, Copper, and Zinc, each showing its VMs, virtual firewall and load balancer placement, and L2/L3 boundaries, including the public/private and protected front-end/back-end zones, the shared firewall context for Copper, and the dedicated virtual firewall for Zinc.)

    Figure 1-6 Layer 3 Bronze Container Logical Topology

    L2-Bronze: In this design the ACI fabric provides only a BD/EPG for each tenant, and the BD is configured without unicast routing. Tenant VMs have their default gateway on the data center provider edge ASR 9000 tenant VRF. L2 external configuration on the BD is used, and ACI contracts can be set up to protect the tenant VMs for outside-to-inside traffic. The connection between the data center provider edge ASR 9000 nV cluster and the ACI fabric is a virtual port channel (vPC) connecting to two different ACI leaf nodes and to two different chassis on the ASR 9000 nV side.

    Silver

    Figure 1-7 shows the Silver container logical topology. The Silver tenant accesses its cloud service via L3VPN. Each Silver tenant container has three EPGs for tenant workloads, mapped to three different BDs, allowing for three-tier applications. Additionally, the Silver tenant has a server load balancer implemented using the NetScaler 1000V; this is configured via the APIC using service graphing with the NetScaler 1000V device package. Contracts on the ACI fabric can be used to enforce security policy between tiers as well as between the outside and the tiers.

    This Silver service tier provides the following services:

    Routing (iBGP) from the ACI fabric to the data center edge ASR 9000 router.

    Access from MPLS-VPN to tenant container (virtual data center).

    One zone (PVT) to place workloads, with 3 BD/EPGs.

    ACI Fabric default Gateway, and contracts and filters to implement policy between tiers.

    SLB on the NetScaler 1000V to provide L4-L7 load balancing and SSL offload services to tenant workloads.

    Medium QoS SLA with one traffic class: premium data class for in-contract traffic.

    Redundant virtual appliances for HA.

    (Figure 1-6 notes, per L3-Bronze IaaS tenant: 1 APIC tenant, 1 ACI VRF (private network), 1 DC-PE VRF, 1 L3 external connection (iBGP or static), 1 BD, 1 subnet, 1 EPG plus 1 external EPG, 1 contract, 2 server leaf VLANs (EPG + BD), and 1 border leaf VLAN. Tenant VMs attach via AVS port-groups created by APIC; the default gateway is on the ACI fabric, with the customer VRF on the ASR 9000 nV; redundant devices are not shown.)

  • Chapter 1 Solution OverviewService TiersFigure 1-7 Silver Container LogicalTopology

    Expanded Gold Container (E-Gold): Figure 1-8 shows the Expanded-Gold container logical topology. The Expanded Gold tenant gets two security zones to place workloads into, with two Firewall instances to protect each zone and traffic between the zones. The Internet-facing connection has a DMZ Firewall instance and a DMZ Bridge Domain, where there is one EPG to host applications that run in the DMZ. This DMZ Firewall also has a connection to the PVT Firewall instance, which is another independent Firewall instance for this tenant that protects the private L2 segments hosting the secure back-end applications. The PVT Zone has 3 Bridge Domains, and each BD has an EPG for the endpoints in that BD. The connectivity via L3VPN is from the PVT Firewall instance.

    (Figure 1-7 shows the Silver tenant in ACI: tenant VMs on AVS in T1/T2/T3 EPGs and subnets, a NetScaler 1000V with a VIP/SNIP subnet, and an L3-ext to the customer VRF on the ASR 9000; redundant boxes are not shown. Per Silver IaaS tenant: APIC tenant: 1; ACI VRF (private network): 1; DC-PE VRF: 1; L3-ext: 1 (iBGP or static); BDs: 4; subnets: 5 (3 tenant tiers, 1/1 VIP/SNIP); EPGs: 4 + 1 (external EPG); contracts: 3 (out to t1, t1 to t2, t2 to t3); server leaf VLANs: 8 (EPG + BD); border leaf VLANs: 1; service graphs: 1, with 3 instances (SG-lb: SG-lb-t1, SG-lb-t2, SG-lb-t3).)

    Figure 1-8 Expanded-Gold Container Logical Topology

    This E-Gold service tier provides the highest level of sophistication by including the following services:

    Default gateway for the VMs is on their respective zone Firewall; that is, for the PVT zone BD/EPGs the default gateway is on the PVT Firewall instance of the tenant, and the DMZ BD/EPG VMs have their default gateway on the DMZ FW instance of the tenant. Default gateway on the ASA is required in this design to use APIC integration for configuring the ASA Firewalls, with the Firewalls in routed mode.

    Access from Internet or MPLS-VPN to tenant container (virtual data center).

    2 Zones (PVT and DMZ) to place workloads. Each zone has its own BD/EPGs, which are basically L2 segments.

    Either a physical ASA 5585-X in multi-context mode (with each tenant getting dedicated contexts) or a dedicated virtual ASAv can be used.

    IPsec Remote-Access VPN using the ASA or ASAv, to provide Internet-based secure connectivity for end users to their virtual data center resources. This was not implemented because the device package support to configure it via APIC is not yet available.

    Stateful perimeter and inter-zone Firewall services to protect the tenant workloads via ASA or ASAv.

    Network Address Translation (NAT) on the ASA/ASAv, to provide Static and Dynamic NAT services to RFC1918-addressed VMs. However, configuring NAT via the APIC/device package has limitations that don't allow it at this time; enhancements are in progress and will be supported in future releases.

    SLB on the NetScaler 1000V to provide L4-7 load balancing and SSL Offload services to the tenant. One NetScaler 1000V instance for each zone.

    Higher QoS SLA and three traffic classes: real-time (VoIP), call signaling, and premium data. Note that within the data center, call signaling and premium data travel in the same ACI class; in the MPLS WAN, three separate classes are used, one each for VoIP, call signaling, and data.

    Redundant virtual appliances for HA.

    (Figure 1-8 shows the E-Gold tenant in ACI: private zone tenant VMs in T1/T2/T3 EPGs and DMZ tenant VMs in a DMZ EPG, a NetScaler 1000V per zone, L2-ext connections on the pubout BDs/EPGs to the customer VRF and to the global (Internet) table on the ASR 9000, and service graphs SG-fw-slb, SG-fw-fw, and SG-fw; redundant boxes are not shown. Per E-Gold IaaS tenant: APIC tenant: 1; ACI VRF (private network): 2 (reserved for future); DC-PE VRF: 1; L2-ext: 2; BDs: 9; subnets: 0 (L2-only model used); EPGs: 6 + 2 (external EPGs); server leaf VLANs: 8 (EPG + BD); border leaf VLANs: 2; service graphs: 3, with instances SG-fw-slb: 2 (SG-fw-slb-pvt, SG-fw-slb-dmz), SG-fw-fw: 1, and SG-fw: 4 (SG-fw-pvt-t1, SG-fw-pvt-t2, SG-fw-pvt-t3, SG-fw-dmz).)

    The two zones can be used to host different types of applications to be accessed through different network paths.

    The two zones are discussed in detail below.

    PVT Zone: The PVT, or Private Zone, and its VMs can be used for cloud services that are accessed through the customer MPLS-VPN network. The customer sites connect to the provider MPLS core, and the customer has their own MPLS-VPN (Customer-VRF). The Data Center Edge router (ASR 9000 provider edge) connects to the customer sites through the MPLS-VPN (via the Customer-VRF). This Customer-VRF is connected through a VLAN on a virtual port channel to a pair of ACI leafs, and configured as an L2-external connection in ACI. This L2-external connection is an extension of a bridge domain that also has the EPG for the PVT Firewall ASA outside interface. From the perspective of the ASA, the next hop is the ASR 9000 sub-interface, which is in the Customer-VRF. The ASA is either a dedicated ASAv or an ASA 5585 context. PVT BDs are L2-only BDs, that is, no unicast routing, and the default gateway for the VMs in the BD/EPGs in the PVT zone is on the PVT ASA.

    DMZ: The Intercloud ACI 1.0 E-Gold container supports a DMZ for tenants to place VMs into a DMZ area, isolating and securing the DMZ workloads from the PVT workloads, and also enabling users on the Internet to access the DMZ-based cloud services. The ASR 9000 provider edge WAN router is also connected to the Internet, and a shared (common) VRF instance (usually the global routing table) exists for all E-Gold tenants to connect to (either encrypted or unencrypted). The ASR 9000 Internet table/VRF is connected via an ASR 9000 sub-interface to the tenant's dedicated DMZ Firewall; the sub-interface VLAN is trunked over vPC to the ACI fabric and is mapped to an L2-external on the DMZ-external BD. On this DMZ-external BD, an EPG exists that is mapped to the external interface of the DMZ ASA FW. Thus, the DMZ FW outside interface and the ASR 9000 sub-interface in the global table are L2 adjacent and IP peers. The ASR 9000 has a static route for the tenant public addresses pointing to the DMZ ASA FW outside interface address, and redistributes static into BGP for advertising towards the Internet. The DMZ ASA FW has a static default pointing back to the ASR 9000 sub-interface, as well as static routes towards the L3VPN and PVT subnets pointing back to the PVT FW.

    The DMZ can be used to host applications like proxy servers, Internet-facing web servers, email servers, etc. The DMZ consists of one L2 segment implemented using a BD and an EPG, and the default gateway is on the DMZ ASA FW. For SLB service in the DMZ, there is a NetScaler 1000V. For RA-VPN service, the integration with APIC to configure this service does not currently exist, so manual configuration of the ASAv is required.

    As an option, the E-Gold container may be deployed in a simplified manner with only one zone: either the PVT zone only, with the PVT Firewall and L3VPN connection (previous VMDC designs called this the Gold container), or the DMZ only, with the DMZ Firewall and access via the Internet only, plus additional secure access via RA-VPN (similar to the Zinc container in the previously released VMDC VSA 1.0 solution).

    Copper: Figure 1-9 shows the Copper container logical topology. The Copper tenant gets one zone to place workloads into and just one L2 segment for tenant VMs, implemented with one ACI BD/EPG, and the default gateway is on the ASA shared Firewall. Multiple Copper tenants share the same Firewall, with each tenant getting a different inside interface but sharing the same outside/Internet-facing interface. The ASA security policy restricts access to the tenant container from outside or from other tenants, and also provides NAT for reduced public address consumption.

    Routing (static or eBGP) from the ASA shared Firewall to the Data Center provider edge ASR 9000, to connect all of the Copper tenant virtual data centers to the global table (Internet) instance on the ASR 9000 router, and to advertise all the tenants' public IP addresses towards the Internet.


    Access from Internet to tenant container (virtual data center).

    ASA Firewall security policy allows only restricted services and public IPs to be accessed from outside.

    1 Zone (PVT) to place workloads, with 1 L2 segment in the zone.

    Lower QoS SLA with one traffic class, standard data.

    The shared ASA context is configured manually; that is, ACI service graphing is not utilized.

    Figure 1-9 Logical Copper Container

    Solution Components

    Table 1-2 and Table 1-3 list Cisco and third-party product components for this solution, respectively.

    (Figure 1-9 shows many Copper tenants, Cu1 to Cuxxxx, each with one L2 segment (a CuXX BD/EPG with static VLANs and manually configured Nexus 1000V-KVM port-groups, with no APIC integration) and VMs whose default gateway is a tenant-specific sub-interface on the shared ASA context. The shared ASA peers with the ASR 9000 nV over eBGP via a Cuout BD/EPG, a CuOS BD/EPG carries the management network for Horizon access, and no service graphing is used; redundant boxes are not shown.)

    Table 1-2 Cisco Products

    Product | Description | Hardware | Software
    ASR 9000 | Data Center Provider Edge | ASR9010-NV, A9K-RSP440-SE, A9K-24x10GE-SE, A9K-MOD80-SE, A9K-MPA-4X10GE | IOS-XR 5.1.2
    APIC | Centralized Policy Controller | APIC-CLUSTER-L | 1.0(2j)
    Nexus 9500 | ACI Fabric Spine | Nexus 9508, 9736PQ | 11.0(2j)
    Nexus 9300 | ACI Fabric Leaf | Nexus 9396, Nexus 93128 | 11.0(2j)
    UCS 6200 | UCS Fabric Interconnect | UCS-FI-6296UP | 2.2(1d)
    UCS B-Series | Blade Servers | UCS-5108, B200-M3, UCS VIC 1240/1280, UCS 2204XP | 2.2(1d)
    UCS C-Series | Rack Servers | C240-M3, C220-M3 | CIMC: 2.0(1a)
    Nexus 2000 | FEX | Nexus 2232PP | 11.0(2j)
    ASA-5585-X | ASA Firewall | ASA-5585-X w/ SSP60 | 9.3.1
    ASAv | ASA Virtual Firewall | - | 9.3.1; Device package 1.0.1
    NetScaler 1000V | Server Load Balancer, virtualized | - | 10.1; Device package 10.5
    Cisco AVS | Application Virtual Switch | - | 4.2(1)SV2(2.3)

    Table 1-3 Third Party Products

    Product | Description | Hardware | Software
    VMware ESXi | Hypervisor | N/A | 5.1.0 Build 1483097
    VMware vCenter | Management tool | N/A | 5.1.0 Build 1473063
    NetApp FAS3250 | Storage Array | FAS3250 | 8.2.2 cDoT
    Linux | Tenant VM | - | CentOS; Ubuntu 14.04 LTS
    Linux | OpenStack Nodes | - | Ubuntu 14.04 LTS
    OpenStack | Cloud Platform | - | Icehouse release
    Ceph | Software-defined storage | - | 0.80.5


    CHAPTER 2 ACI Policy Model

    The Cisco InterCloud Application Centric Infrastructure (ACI) fabric is a model-driven architecture. The policy model manages the entire fabric, including the infrastructure, authentication, security, services, applications, and diagnostics. Logical constructs in the policy model define how the fabric meets the needs of any fabric function. Figure 2-1 provides an overview of the ACI policy model logical constructs.

    Figure 2-1 ACI Policy Model Logical Constructs

    As a model-driven architecture, Cisco Application Policy Infrastructure Controller (APIC) maintains a complete representation of the administrative and operational state of the system (the model). The model applies uniformly to fabric, services, system behaviors, as well as virtual and physical devices attached to the network. The logical model itself consists of objects - configuration, policies and runtime states - and their attributes. In the ACI framework, this model is known as the management information tree (MIT). Each node in the MIT represents a managed object or group of objects. These objects are organized in a hierarchical way, creating logical object containers. Every managed object in the system can be identified by a unique distinguished name. Figure 2-2 depicts the logical hierarchy of the MIT object model.

    (Figure 2-1 shows APIC policy applied across Tenant1 through Tenantn, using the following logical constructs: Endpoints (servers, VMs, storage, Internet clients, etc.) forming a location-independent resource pool; Endpoint Groups (EPGs), named groups of related endpoints (for example, finance) with static or dynamic membership; Bridge Domains (BDs), providing L3 functions such as subnet and default gateway; Contracts, the rules that govern the interactions of EPGs and determine how applications use the network; and Contexts (VRFs), unique L3 forwarding domains with relations to application profiles and their policies.)

    Figure 2-2 Management Information Tree Overview

    In the ACI framework, a tenant is a logical container (or a unit of isolation from a policy perspective) for application policies that enable an administrator to exercise domain-based access control. Tenants can represent a customer in a service provider setting, an organization or domain in an enterprise setting, or just a convenient grouping of objects and policies. Figure 2-3 provides an overview of the tenant portion of the MIT. The tenant managed object is the basis for the Expanded Gold Tenant Container.
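    For example, a hypothetical tenant named Cust1 and its child objects would be addressed in the MIT with distinguished names similar to the following sketch; the object names are illustrative only.

    # Illustrative distinguished names (DNs), assuming a hypothetical tenant "Cust1"
    # with context "vrf1", bridge domain "bd1", application profile "web", and EPG
    # "app". Each DN is built from the relative names of its containers, starting
    # at the policy universe "uni".
    example_dns = {
        "tenant":              "uni/tn-Cust1",
        "context (VRF)":       "uni/tn-Cust1/ctx-vrf1",
        "bridge domain":       "uni/tn-Cust1/BD-bd1",
        "application profile": "uni/tn-Cust1/ap-web",
        "EPG":                 "uni/tn-Cust1/ap-web/epg-app",
    }

    for obj, dn in example_dns.items():
        print(f"{obj:20} {dn}")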

    Figure 2-3 ACI Tenant

    Accessing Managed Object Data through REST API

    Representational state transfer (REST) is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements within a distributed hypermedia system. REST-style architectures conventionally consist of clients and servers; clients initiate requests to servers, while servers process requests and return appropriate responses. The REST style builds requests and responses around the transfer of representations of resources. A resource can be any body of information, static or variable. A representation of a resource is typically a document that captures the current or intended state of a resource.

    APIC supports a REST Application Programming Interface (API) for programmatic access to the managed objects (MOs) on the ACI fabric. The API accepts and returns HTTP or HTTPS messages that contain JavaScript Object Notation (JSON) or Extensible Markup Language (XML) data structures and provides the essential information necessary to execute the command.

    (Figure 2-2 shows the objects contained under a Tenant in the MIT: Application Profiles containing Endpoint Groups, Bridge Domains containing Subnets, Contexts (VRFs), Outside Networks, and Contracts containing Subjects, with Filters related to Subjects. Solid lines indicate that objects contain the ones below; dotted lines indicate a relationship; 1:n indicates one to many; n:n indicates many to many.)


    Note In the JSON or XML data structure, the colon after the package name is omitted from class names and method names. For example, in the data structure for a managed object of class zzz:Object, label the class element as zzzObject.

    Managed objects can be accessed with their well-defined address, the REST URLs, using standard HTTP commands. The URL format used can be represented as follows:

    {http|https}://host[:port]/api/{mo|class}/{dn|className}.{json|xml}[?options]

    Where:

    host: Specifies the hostname or IP address of APIC.
    port: (Optionally) specifies the port number for communicating with APIC.
    api: Specifies that the message is directed to the API.
    mo|class: Specifies whether the target of the operation is a managed object or an object class.
    dn|className: Specifies the DN of the targeted managed object, or the name of the targeted class.
    json|xml: Specifies whether the encoding format of the command or response HTML body is JSON or XML.
    ?options: (Optionally) specifies one or more filters, selectors, or modifiers to a query. Multiple option statements are joined by an ampersand (&).

    Note By default, only HTTPS is enabled on APIC. HTTP or HTTP-to-HTTPS redirection, if desired, must be explicitly enabled and configured. HTTP and HTTPS can coexist on APIC.

    The API supports HTTP POST, GET, and DELETE request methods as follows:

    An API command to create or update a managed object, or to execute a method, is sent as an HTTP POST message.

    An API query to read the properties and status of a managed object, or to discover objects, is sent as an HTTP GET message.

    An API command to delete a managed object is sent as either an HTTP POST or DELETE message. In most cases, a managed object can be deleted by setting its status to deleted in a POST operation.

    The HTML body of a POST operation must contain a JSON or XML data structure that provides the essential information necessary to execute the command. No data structure is required with a GET or DELETE operation.

    Note The API is case sensitive. When sending an API command with the 'api' option in the URL, the maximum size of the HTML body for the POST request is 1 MB.

    The API model documentation is embedded within APIC, accessible at the following URL:

    https://{apic_ip_or_hostname}/doc/html/
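    The following sketch shows what such requests might look like from a Python client using the requests library; the APIC address, credentials, and tenant name are hypothetical, and certificate verification is disabled only for brevity.

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    s = requests.Session()
    s.verify = False                    # lab sketch only
    s.post(APIC + "/api/aaaLogin.xml", data='<aaaUser name="admin" pwd="password"/>')

    # GET one managed object by its DN, returned as JSON.
    print(s.get(APIC + "/api/mo/uni/tn-Cust1.json").text)

    # GET all objects of a class (all tenants), returned as XML, with a query option.
    print(s.get(APIC + "/api/class/fvTenant.xml?query-target=self").text)

    # Delete a managed object by POSTing its data structure with status="deleted".
    s.post(APIC + "/api/mo/uni.xml",
           data='<fvTenant name="Cust1" status="deleted"/>')

    Authentication itself is covered in the next section; the login call above simply establishes the session cookie that the later requests reuse.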



    Authenticating and Maintaining an API Session

    APIC requires user authentication before allowing access to the API. On APIC, when a login message is accepted, the API returns a data structure that includes a session timeout period and a token that represents the session. The session token is also returned as a cookie in the HTTP response header. The login refresh message allows the user to maintain the API session if no other messages are sent for a period longer than the session timeout period. The token changes each time the session is refreshed. These API methods manage session authentication:

    aaaLogin: Sent as a POST message to log in a user and open a session. The message body contains an aaa:User object with the name and password attributes, and the response contains a session token and cookie.

    aaaRefresh: Sent as a GET message with no message body or as a POST message with the aaaLogin message body, this method resets the session timer. The response contains a new session token and cookie.

    aaaLogout: Sent as a POST message, to log out the user and close the session. The message body contains an aaa:User object with the name attribute. The response contains an empty data structure.

    The example below shows a user login message that uses an XML data structure. The example makes use of a user ID with a login domain, in the following format:

    apic#{loginDomain}\{userID}

    POST https://{apic_ip_or_hostname}/api/aaaLogin.xml

    After the API session is authenticated and established, retrieve and send the token or cookie with all subsequent requests for the session.
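    A minimal sketch of this login/refresh/logout sequence, assuming a hypothetical APIC address and credentials and relying on a requests.Session object to carry the returned cookie:

    import requests

    APIC = "https://apic.example.com"      # hypothetical APIC address
    USER, PASSWORD = "admin", "password"   # hypothetical credentials
    # With a login domain, the user name takes the form apic#<loginDomain>\<userID>.

    s = requests.Session()                 # the Session stores the returned cookie
    s.verify = False                       # lab sketch only

    # aaaLogin: POST an aaa:User object with name and password attributes.
    login = s.post(APIC + "/api/aaaLogin.xml",
                   data=f'<aaaUser name="{USER}" pwd="{PASSWORD}"/>')
    login.raise_for_status()

    # The session cookie from the login response is sent automatically afterwards.
    tenants = s.get(APIC + "/api/class/fvTenant.json")
    print(tenants.status_code)

    # aaaRefresh: GET with no body to reset the session timer before it expires.
    s.get(APIC + "/api/aaaRefresh.json")

    # aaaLogout: POST an aaa:User object with the name attribute to close the session.
    s.post(APIC + "/api/aaaLogout.xml", data=f'<aaaUser name="{USER}"/>')

    Using a session object avoids handling the token and cookie by hand; the refresh call must simply be issued before the session timeout period expires.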

    Layer 4 to Layer 7 Service Insertion

    ACI treats services as an integral part of an application. Any services that are required are treated as a service graph that is instantiated on the ACI fabric from APIC. A service graph is represented as two or more tiers of an application with the appropriate service function inserted between. APIC provides the user with the ability to define a service graph with a chain of service functions such as application firewall, load balancer, SSL offload, and so on. The service graph defines these functions based on a user-defined policy for a particular application.

    Figure 2-4 Service Insertion Graph

    A service graph is inserted between source/provider EPG and destination/consumer EPG by a contract. After the service graph is configured on APIC, APIC automatically configures the services according to the service function requirements that are specified in the service graph. APIC also automatically configures the network according to the needs of the service function that is specified in the service graph. A physical or virtual service appliance/device performs the service function within the service graph. A service appliance, or several service appliances, render the services required by a service graph. A single service device can perform one or more service functions.

    APIC offers a centralized touch point for configuration management and automation of L4-L7 services deployment, using the device package to configure and monitor service devices via the southbound APIs. A device package manages a class of service devices, and provides APIC with information about the devices so that the APIC knows what the device is and what the device can do. A device package is a zip file that contains the following:

    Device Specification: The device specification is an XML file that provides a hierarchical description of the device, including the configuration of each function, and is mapped to a set of managed objects on APIC. The device specification defines the following:

    Model: Model of the device.

    Vendor: Vendor of the device.

    Version: Software version of the device.

    Functions provided by a device, such as firewall, L4-L7 load balancing, SSL offload, etc.

    Configuration parameters for the device.

    Interfaces and network connectivity information for each function.

    Service parameters for each function.

    Device Script: The device script, written in Python, manages communication between APIC and the service device. It defines the mapping between APIC events and the function calls that are defined in the device script. The device script converts the L4-L7 service parameters to the configuration that is downloaded onto the service device.

    Figure 2-5 shows the APIC service automation and insertion architecture through the device package.

    Figure 2-5 APIC Service Automation and Insertion Architecture via Device Package

    After a unique device package is uploaded on APIC, APIC creates a namespace for it. The content of the device package is unzipped and copied to the namespace. The device specification XML is parsed, and the managed objects defined in the XML are added to the APIC's managed object tree. The tree is maintained by the policy manager. The Python scripts that are defined in the device package are launched within a script wrapper process in the namespace. Access by the device script to the APIC file system is restricted.

    Multiple versions of a device package can coexist on the APIC, because each device package version runs in its own namespace. Administrators can select a specific version for managing a set of devices.

    The following REST request uploads the device package on APIC. The body of the POST request should contain the device package zip file being uploaded. Only one package is allowed in a POST request:

    POST https://{apic_ip_or_hostname}/ppi/mo.xml
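    A sketch of this upload from Python, following the description above; the APIC address, credentials, and package file name are hypothetical.

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    s = requests.Session()
    s.verify = False                    # lab sketch only
    s.post(APIC + "/api/aaaLogin.xml", data='<aaaUser name="admin" pwd="password"/>')

    # The body of the POST is the device package zip file itself; one package per
    # request, and the body must stay within the 10 MB limit noted below.
    with open("asa-device-pkg-1.0.1.zip", "rb") as pkg:   # hypothetical file name
        resp = s.post(APIC + "/ppi/mo.xml", data=pkg)
    print(resp.status_code, resp.text)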


    Note When uploading a device package file with the 'ppi' option in the URL, the maximum size of the HTML body for the POST request is 10 MB.

    L4 to L7 Service Parameters

    The XML file within the device package describes the specification for the service device. This specification includes device information as well as various functions provided by the service device. This XML specification contains the declaration for the L4-L7 service parameters needed by the service device. The L4-L7 service parameters are needed to configure various functions that are provided by the service device during service graph instantiation.

    You can configure the L4-L7 service parameters as part of the managed objects such as bridge domains, EPGs, application profiles, or tenant. When the service graph is instantiated, APIC passes the parameters to the device script that is within the device package. The device script converts the parameter data to the configuration that is downloaded onto the service device. Figure 2-6 shows the L4-L7 service parameters hierarchy within a managed object.

    Figure 2-6 L4-L7 Service Parameters

    The vnsFolderInst is a group of configuration items that can contain vnsParamInst and other nested vnsFolderInst. A vnsFolderInst has the following attributes:

    Key: Defines the type of the configuration item. The key is defined in the device package and can never be overwritten. The key is used as a matching criterion as well as for validation.

    Name: Defines the user-defined string value that identifies the folder instance.

    ctrctNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a vnsFolderInst to be used for parameter resolution, this attribute must match the name of the contract that is associated with the service graph. Otherwise, this vnsFolderInst is skipped and parameters are not used from this vnsFolderInst.

    The value of this field can be any to allow this vnsFolderInst to be used for all contracts.

    graphNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a vnsFolderInst to be used for parameter resolution, this attribute must match the service graph name. Otherwise, this vnsFolderInst is skipped and parameters are not used from this vnsFolderInst.

    The value of this field can be any to allow this vnsFolderInst to be used for all service graphs.

    nodeNameOrLbl: Finds a matching vnsFolderInst during parameter resolution. For a vnsFolderInst to be used for parameter resolution, this attribute must match the function node name. Otherwise, this vnsFolderInst is skipped and parameters are not used from this vnsFolderInst.

    The value of this field can be any to allow this vnsFolderInst to be used for all nodes in a service graph.

    The vnsParamInst is the basic unit of configuration parameters that defines a single configuration parameter. A vnsParamInst has the following attributes:

    Key: Defines the type of the configuration item. The key is defined in the device package and can never be overwritten. The key is used as a matching criterion as well as for validation.

    Name: Defines the user-defined string value that identifies the parameter instance.

    Value: Holds the value for a given configuration item. The value of this attribute is service-device specific and dependent on the Key. The value of this attribute is case sensitive.

    The vnsCfgRelInst allows one vnsFolderInst to refer to another vnsFolderInst. A vnsCfgRelInst has the following attributes:

    Key: Defines the type of the configuration item. The key is defined in the device package and can never be overwritten. The key is used as a matching criterion as well as for validation.

    Name: Defines the user-defined string value that identifies the config relationship/reference instance.

    targetName: Holds the path for the target vnsFolderInst. The value of this attribute is case sensitive.
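    Putting these objects together, an L4-L7 parameter block attached to a provider EPG might look like the following sketch. The folder and parameter keys shown here (Interface, InterfaceConfig, ipv4_address, and so on) are placeholders; the actual keys are dictated by the device package in use, and all names are hypothetical.

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    s = requests.Session()
    s.verify = False                    # lab sketch only
    s.post(APIC + "/api/aaaLogin.xml", data='<aaaUser name="admin" pwd="password"/>')

    # L4-L7 service parameters attached to a provider EPG. Folder and parameter
    # keys are placeholders; real keys come from the device package specification.
    # ctrctNameOrLbl, graphNameOrLbl, and nodeNameOrLbl scope the folder to a
    # contract, service graph, and function node ("any" matches all).
    l4l7_params = """
    <fvTenant name="Si01">
      <fvAp name="Si01_AP">
        <fvAEPg name="T1_EPG">
          <vnsFolderInst key="Interface" name="externalIf"
                         ctrctNameOrLbl="t1-to-t2" graphNameOrLbl="any" nodeNameOrLbl="any">
            <vnsParamInst key="ipv4_address" name="extAddr" value="192.0.2.10/24"/>
            <vnsFolderInst key="InterfaceConfig" name="extCfg"
                           ctrctNameOrLbl="t1-to-t2" graphNameOrLbl="any" nodeNameOrLbl="any">
              <vnsParamInst key="security_level" name="level" value="0"/>
            </vnsFolderInst>
            <vnsCfgRelInst key="AccessGroupRef" name="aclRef" targetName="externalIf/extAcl"/>
          </vnsFolderInst>
        </fvAEPg>
      </fvAp>
    </fvTenant>
    """
    print(s.post(APIC + "/api/mo/uni.xml", data=l4l7_params).status_code)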

    Note By default, if the L4-L7 service parameters are configured on an EPG, APIC only picks up the L4-L7 service parameters configured on the provider EPG; parameters configured on the consumer EPG are ignored. The vnsRsScopeToTerm relational attribute for a function node or a vnsFolderInst specifies the terminal node where APIC picks up the parameters.

    When a service graph is instantiated, APIC resolves the configuration parameters for a service graph by looking up the L4-L7 service parameters from various MOs. After resolution completes, the parameter values are passed to the device script. The device script uses these parameter values to configure the service on the service device. Figure 2-7 shows the L4-L7 service parameter resolution flow.

    Figure 2-7 L4-L7 Service Parameter Resolution Steps

    Note By default, the scopedBy attribute of an L4-L7 service parameter is set to epg; APIC starts the parameter resolution from the EPG, walking up the MIT to the application profile and then to the tenant to resolve the service parameter.

    The resolution steps shown in Figure 2-7 are:

    1. Look up the service parameters declared in the device package; these service parameters are used as the input for the resolution phase.

    2. Look up the service parameters configured on the function profile; use these configuration values as the default values for the service parameters.

    3. Look up the service parameters configured on the function node in the service graph; these values overwrite the defaults from the function profile.

    4. Use the scopedBy attribute to find the starting MO; resolution starts from this MO, walking up the tree towards the tenant to resolve the service parameters.

    5. Look up the service parameters configured on the EPG, application profile, tenant, or other MO; these values overwrite the values from the function nodes.


    The flexibility of being able to configure L4-L7 service parameters on various MOs allows an administrator to configure a single service graph and then use it as a template for instantiating different service graphs