CISCO CONFIDENTIAL

UMMT 3.0 Implementation Guide - Large Network, Multi-Area IGP Design with IP/MPLS Access

Design and Implementation Guide

SDU-6516
Version 1
September 11, 2012

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883


Cisco Validated Design

The Cisco Validated Design Program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit www.cisco.com/go/validateddesigns.

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

UMMT 3.0 Implementation Guide - Large Network, Multi-Area IGP Design with IP/MPLS Access Design and Implementation Guide © 1992-2012 Cisco Systems, Inc. All rights reserved.

CONTENTS

Preface ix
Document Version and System Release ix
Document Organization x
Obtaining Documentation, Obtaining Support, and Security Guidelines x

Chapter 1 Introduction 1-1

Chapter 2 Reference Topology 2-1

Chapter 3 Unified MPLS Transport 3-1
3.1 Multi-Area IGP Design with Labeled BGP Access 3-1
3.1.1 Core Route Reflector Configuration 3-2
3.1.2 Mobile Transport Gateway Configuration 3-5
3.1.3 Core Area Border Router Configuration 3-8
3.1.4 Pre-Aggregation Node Configuration 3-12
3.1.5 Cell Site Gateway Configuration 3-16
3.2 Multi-Area IGP Design with IGP/LDP Access 3-18
3.2.1 Pre-Aggregation Node Configuration 3-20
3.2.2 Cell Site Gateway Configuration 3-25

Chapter 4 Services 4-1
4.1 L3 MPLS VPN Service Model for LTE 4-1
4.1.1 MPLS VPN Transport for LTE S1 and X2 Interfaces 4-1
4.1.2 MPLS VPN Control Plane 4-5
4.2 L2 MPLS VPN Service Model for 2G and 3G 4-9
4.2.1 CESoPSN VPWS Service from CSG to MTG 4-9
4.2.2 SAToP VPWS Service from PAN to MTG 4-12
4.2.3 ATM Clear-Channel VPWS Service from PAN to MTG 4-15
4.2.4 ATM IMA VPWS Service from PAN to MTG 4-18
4.3 Fixed-Mobile Convergence Use Case 4-20

Chapter 5 Synchronization Distribution 5-1
5.1 Hybrid Model Configuration 5-1

Chapter 6 High Availability 6-1
6.1 Transport Network High Availability 6-1
6.1.1 Loop-Free Alternate Fast Reroute with BFD 6-1
6.1.2 BGP Fast Reroute 6-5
6.1.3 Multihop BFD for BGP 6-6
6.2 Service High Availability 6-8
6.2.1 MPLS VPN BGP FRR Edge Protection and VRRP 6-8
6.2.2 Pseudowire Redundancy for ATM and TDM Services 6-10

Chapter 7 Quality of Service 7-1

Chapter 8 Operations, Administration, and Maintenance 8-1
8.1 IP SLA Implementation 8-3

Chapter 9 Related Documents 9-1

FIGURES

Figure 1-1 UMMT Transport Models 1-1
Figure 2-1 Reference Topology - Large Network, Multi-Area IGP-based Design 2-1
Figure 3-1 Unified MPLS Transport for Multi-Area IGP Design with Labeled BGP Access 3-1
Figure 3-2 Centralized Core Route Reflector (CN-RR) 3-3
Figure 3-3 Mobile Transport Gateway (MTG) 3-6
Figure 3-4 Core Area Border Router (CN-ABR) 3-9
Figure 3-5 Pre-Aggregation Node (PAN) 3-12
Figure 3-6 Cell Site Gateway (CSG) 3-16
Figure 3-7 Unified MPLS Transport for Multi-Area IGP Design with IGP/LDP Access 3-19
Figure 3-8 Pre-Aggregation Node (PAN) 3-20
Figure 3-9 Cell Site Gateway (CSG) 3-25
Figure 4-1 MPLS VPN Service Implementation for LTE Backhaul 4-1
Figure 4-2 BGP Control Plane for MPLS VPN Service 4-5
Figure 4-3 CESoPSN Service Implementation for 2G and 3G Backhaul 4-9
Figure 4-4 SAToP VPWS Service Implementation for 2G Backhaul 4-12
Figure 4-5 ATM VPWS Service Implementation for 3G Backhaul 4-15
Figure 4-6 ATM VPWS Service Implementation for 3G Backhaul 4-18
Figure 5-1 Test Topology 5-2
Figure 6-1 Multihop BFD Topology 6-7
Figure 6-2 CESoPSN/SAToP Service Implementation for 2G and 3G Backhaul 6-11
Figure 6-3 ATM VPWS Service Implementation for 3G Backhaul 6-13
Figure 7-1 QoS Enforcement Points 7-1
Figure 8-1 OAM Implementation for Mobile RAN Service Transport 8-1

TABLES

Table i-1 Document Version ix
Table i-2 Document Organization x
Table 1-1 Approaches for Extending the Unified MPLS LSP 1-2
Table 5-1 Connectivity Table 5-2


Preface

This implementation guide is for the Large Network, Multi-Area IGP Design with IP/MPLS Access for UMMT 3.0.

The huge growth in mobile data traffic is challenging legacy network infrastructure capabilities and forcing transformation of mobile transport networks. Until now, mobile backhaul networks have been composed of a mixture of many legacy technologies that are operationally complex and have reached the end of their useful life. The market inflection point for mobile backhaul is the introduction of 4G/LTE. The majority of deployments require concurrent legacy (2G/3G radio) backhaul support while gracefully introducing Long Term Evolution (LTE) to the service mix, along with virtualization of the packet transport to deliver multiple services.

The Unified MPLS Mobile Transport (UMMT) System is a comprehensive RAN backhaul solution that forms the foundation for LTE backhaul, while integrating components necessary for continued support of legacy 2G GSM and existing 3G UMTS transport services. The UMMT System enables a comprehensive and flexible framework that integrates key technologies from Cisco's Unified MPLS suite of technologies to deliver a highly scalable and simple-to-operate MPLS-based RAN backhaul network.

This preface includes the following major topics:

• Document Version and System Release

• Document Organization

• Obtaining Documentation, Obtaining Support, and Security Guidelines

Document Version and System Release

This is the UMMT System Release 3.0 Large Network, Multi-Area IGP Design with IP/MPLS Access Implementation Guide.

Document Version

Table i-1 lists document version information.

Table i-1. Document Version

Version  Date       Notes
1        9/11/2012  Initial release.

System Release


UMMT System Release 3.0 covers Unified MPLS for Mobile Transport supporting LTE and the introduction of the ASR 901.

Document Organization

The chapters in this document are described in Table i-2.

Table i-2. Document Organization

Chapter 1, Introduction: Introduction to Large Network, Multi-Area IGP Design with IP/MPLS Access in the UMMT System.

Chapter 2, Reference Topology: Describes the UMMT 3.0 reference topology for the Large Network, Multi-Area IGP Design with IP/MPLS Access transport model.

Chapter 3, Unified MPLS Transport: Includes Multi-Area IGP Design with Labeled BGP Access and Multi-Area IGP Design with IGP/LDP Access configurations for the UMMT System.

Chapter 4, Services: Describes the services associated with the Large Network, Multi-Area IGP Design with IP/MPLS Access transport model.

Chapter 5, Synchronization Distribution: Includes configurations for the hybrid model (SyncE and IEEE 1588v2) in the UMMT System.

Chapter 6, High Availability: Describes high availability for this transport model at the transport network level and the service level.

Chapter 7, Quality of Service: Includes QoS configurations for the UMMT System.

Chapter 8, Operations, Administration, and Maintenance: Describes the IP SLA implementation in the UMMT System.

Chapter 9, Related Documents: Describes and provides links for all UMMT 3.0-related collateral.

Obtaining Documentation, Obtaining Support, and Security Guidelines

Specific information about UMMT can be obtained at the following locations:

• Cisco ASR 901 Series Aggregation Services Routers: http://www.cisco.com/en/US/products/ps12077/index.html

• Cisco ASR 903 Series Aggregation Services Routers: http://www.cisco.com/en/US/products/ps11610/index.html

• Cisco ME 3800X Series Carrier Ethernet Switch Routers: http://www.cisco.com/en/US/products/ps10965/index.html

• Cisco ASR 9000 Series Aggregation Services Routers: http://www.cisco.com/en/US/products/ps9853/index.html

• Cisco Carrier Routing System: http://www.cisco.com/en/US/products/ps5763/index.html


For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What's New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at:

http://www.cisco.com/en/US/docs/general/whatsnew/whatsnew.html

Subscribe to the What's New in Cisco Product Documentation as a Really Simple Syndication (RSS) feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service and Cisco currently supports RSS version 2.0.


Chapter 1: Introduction

The UMMT System provides the architectural baseline for creating a scalable, resilient, and manageable mobile backhaul infrastructure that is optimized to seamlessly interwork with the MPC. The system is designed to concurrently support multiple generations (2G/3G/4G) of mobile services on a single converged network infrastructure. The system supports graceful introduction of LTE with existing 2G/3G services with support for pseudowire emulation (PWE) for 2G GSM and 3G UMTS/ATM transport, L2VPNs for 3G UMTS/IP, and L3VPNs for 3G UMTS/IP and 4G LTE transport. It supports essential features like network synchronization (physical layer and packet based), H-QoS, OAM, performance management, and fast convergence. It is optimized to cater to:

• Advanced 4G requirements like IPSec and authentication.

• Direct eNodeB communication through the X2 interface.

• Multicast for optimized video transport.

• Virtualization for RAN sharing.

• Capability of distributing the EPC gateways.

• Traffic offload.

Figure 1-1. UMMT Transport Models

As described in the UMMT 3.0 Design Guide, structuring the transport architecture based on access type and network size leads to five architecture models that fit various customer deployments and operator preferences. This implementation guide of the UMMT System covers the transport model for a Large Network, Multi-Area IGP-based design with an IP/MPLS Access Network, and presents two approaches for extending the unified MPLS LSP into the mobile RAN access domain.


Table 1-1. Approaches for Extending the Unified MPLS LSP

Option-1, Multi-Area IGP Design with Labeled BGP Access: Utilizes a separate IGP area or level for the access domain, and extends the labeled BGP control plane to the Cell Site Gateway (CSG). This model is developed for operators that wish to employ a single control plane end-to-end for mobile service transport.

Option-2, Multi-Area IGP Design with IGP/LDP Access: Utilizes a separate IGP process for the access domain, and redistributes selected prefixes between this IGP and BGP at the pre-aggregation node (PAN). The labeled BGP control plane extends only to the PAN. This model is developed for operators that, due to operational factors, wish to deploy a transport network in this fashion for mobile service transport.
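The Option-2 interworking at the PAN can be sketched as a short IOS-XR fragment. The IGP instance name, route-policy names, and AS number below are illustrative assumptions, not configurations validated in this guide; the validated Option-2 configurations appear in Section 3.2.

```
! Sketch only: selective mutual redistribution at the PAN (Option-2).
! "ran", BGP_to_RAN, and RAN_Loopbacks are hypothetical names.
router isis ran
 address-family ipv4 unicast
  redistribute bgp 100 route-policy BGP_to_RAN    ! leak only required remote prefixes (e.g., MTG loopbacks) into the access IGP
 !
!
router bgp 100
 address-family ipv4 unicast
  redistribute isis ran route-policy RAN_Loopbacks    ! advertise selected access loopbacks into BGP labeled unicast
  allocate-label all
 !
!
```

The route policies keep the redistribution selective in both directions, which is what contains route scale in the access domain under this option.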


Chapter 2: Reference Topology

The network design follows a Unified MPLS Transport implementation where the organization between the core and aggregation domains is based on a single autonomous system, multi-area IGP design.

Figure 2-1 shows a detailed view of Metro-1 with access and aggregation domains interfacing with the core network.

Figure 2-1. Reference Topology - Large Network, Multi-Area IGP-based Design

In the core network, the mobile transport gateways (MTG) are provider edge (PE) devices terminating MPLS VPNs and/or AToM pseudowires to provide connectivity to the EPC gateways (SGW, PGW, MME) in the MPC. At each core PoP, the core nodes (CN-ABR) are area border routers (ABR) between the core and aggregation domains. The multi-area IGP organization between the core and aggregation is based on segmenting the core network in IS-IS Level 2 and the aggregation networks in IS-IS Level 1. The default IS-IS behavior of Level 1 to Level 2 route distribution is prevented to keep the aggregation and core domains isolated and contain the route scale within the respective domains.

The PANs are ABRs between the aggregation and RAN access domains. Each mobile access network subtending from a pair of PANs is based on a different IGP process. As shown in Figure 2-1, in addition to the "core-agg IGP" IS-IS process at Level-1, the PANs run another independent "ran IGP" IS-IS process at Level-1. All CSG access rings (or hub-and-spokes) subtending from the same pair of PANs are part of this RAN IGP IS-IS process.

Note IS-IS is shown as RAN IGP in Figure 2-1, but OSPF can also be used.

Partitioning these network layers into such independent and isolated IGP domains helps reduce the size of routing and forwarding tables on individual routers in these domains, which, in turn, leads to better stability and faster convergence within each of these domains. LDP is used for label distribution to build intra-domain LSPs within each independent access, aggregation, and core IGP domain. Inter-domain LSPs across the core, aggregation, and access domains are based on BGP labeled unicast, as described in the next section.
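The Level 1 to Level 2 isolation described above can be expressed with a short IOS-XR sketch at an ABR; the process and policy names here are illustrative assumptions rather than configurations captured from this testbed.

```
! Sketch only: suppress the default IS-IS L1-to-L2 route propagation at an ABR.
route-policy DROP-ALL
 drop
end-policy
!
router isis core-agg
 address-family ipv4 unicast
  propagate level 1 into level 2 route-policy DROP-ALL    ! keep aggregation (L1) routes out of the core (L2)
 !
!
```

With propagation dropped, only loopbacks explicitly advertised into BGP labeled unicast cross the domain boundary.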


Chapter 3: Unified MPLS Transport

This chapter includes the following major topics:

• Section 3.1 Multi-Area IGP Design with Labeled BGP Access

• Section 3.2 Multi-Area IGP Design with IGP/LDP Access

3.1 Multi-Area IGP Design with Labeled BGP Access

This option assumes an end-to-end labeled BGP transport where the access, aggregation, and core networks are integrated with unified MPLS LSPs by extending labeled BGP from the core all the way to the CSGs in the RAN access. Any node in the network that requires inter-domain LSPs to reach nodes in a remote domain acts as a labeled BGP PE and runs iBGP IPv4 unicast+label with its corresponding local route reflectors (RR).

Figure 3-1. Unified MPLS Transport for Multi-Area IGP Design with Labeled BGP Access

The CN-ABRs are labeled BGP ABRs between the core and aggregation domains. They peer with iBGP labeled-unicast sessions with the centralized core route reflector (CN-RR) in the core network, and act as inline-RRs for their local aggregation network PAN clients. The CN-ABRs insert themselves into the data path to enable inter-domain LSPs by setting next-hop-self (NHS) on all iBGP updates towards the CN-RR in the core network and their local aggregation network PAN clients. The MTGs residing in the core network are labeled BGP PEs. They peer with iBGP labeled-unicast sessions with the CN-RR, and advertise their loopbacks into iBGP labeled-unicast with a common BGP community (MPC BGP community) representing the MPC.

The PANs are labeled BGP ABRs between the aggregation and RAN access domains. They peer with iBGP labeled-unicast sessions with the higher-level CN-ABR inline-RRs in the aggregation network, and act as inline-RRs for their local RAN access network CSG clients. All the PANs in the aggregation network that require inter-domain LSPs to reach remote PANs in another aggregation network, or the core network (to reach the MTGs, for example), also act as labeled BGP PEs and advertise their loopbacks into BGP labeled-unicast with a common BGP community that represents the aggregation community. The PANs learn labeled BGP prefixes marked with the aggregation BGP community and the MPC BGP community. The PANs insert themselves into the data path to enable inter-domain LSPs by setting NHS on all iBGP updates towards the higher-level CN-ABR inline-RRs and their local RAN access CSG clients.

The CSGs in the RAN access networks are labeled BGP PEs. They peer with iBGP labeled-unicast sessions with their local PAN inline-RRs. The CSGs advertise their loopbacks into BGP labeled-unicast with a common BGP community that represents the RAN access community. They learn labeled BGP prefixes marked with the MPC BGP community for reachability to the MPC, and the adjacent RAN access BGP community if inter-access X2 connectivity is desired.

The MTGs in the core network are capable of handling large scale and will learn all BGP labeled-unicast prefixes, since they need connectivity to all the CSGs in the entire network. As described above, since all prefixes are colored with BGP communities, prefix filtering is performed on the CN-RRs to constrain IPv4+label routes from remote RAN access regions from proliferating into neighboring aggregation domains where they are not needed. The PANs only learn labeled BGP prefixes marked with the aggregation BGP community and the MPC BGP community. This allows the PANs to enable inter-metro wireline services across the core, and also to reflect the MPC prefix to their local access networks. Isolating the aggregation and RAN access domains by preventing the default redistribution enables the mobile access network to have limited route scale, since the CSGs only learn local IGP routes and labeled BGP prefixes marked with the MPC BGP community.
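As a hedged illustration of this community coloring scheme, a CSG could advertise its loopback into labeled BGP as shown below. The community value, policy name, and loopback address are hypothetical; the CSG configuration actually validated appears in Section 3.1.5.

```
! Sketch only: CSG colors its loopback with a RAN access community.
community-set RAN_Access_Community
 1003:1003        ! hypothetical value
end-set
!
route-policy CSG_Community
 set community RAN_Access_Community
end-policy
!
router bgp 100
 address-family ipv4 unicast
  network 100.111.30.1/32 route-policy CSG_Community    ! hypothetical CSG loopback
  allocate-label all
 !
!
```

The same pattern (a community-set plus a route policy attached to the network statement) is what the MTG configuration in Section 3.1.2 uses for the MPC prefixes.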

This section includes the following topics:

• Section 3.1.1 Core Route Reflector Configuration

• Section 3.1.2 Mobile Transport Gateway Configuration

• Section 3.1.3 Core Area Border Router Configuration

• Section 3.1.4 Pre-Aggregation Node Configuration

• Section 3.1.5 Cell Site Gateway Configuration

3.1.1 Core Route Reflector Configuration

This section shows the IGP/LDP configuration required to build intra-domain LSPs and the BGP configuration required to build the inter-domain LSPs on the centralized CN-RR.


Figure 3-2. Centralized Core Route Reflector (CN-RR)

Interface Configuration

interface Loopback0
 description Global Loopback
 ipv4 address 100.111.4.3 255.255.255.255
!
interface GigabitEthernet0/1/0/0   <<< Core interface
 description To CN-K0201 Gig0/2/0/0
 cdp
 ipv4 address 10.2.1.3 255.255.255.254
 negotiation auto
 load-interval 30
!
interface GigabitEthernet0/1/0/1   <<< Core interface
 description To CN-K0401 Gig0/1/0/1
 cdp
 ipv4 address 10.4.1.1 255.255.255.254
 negotiation auto
 load-interval 30
!

IGP/LDP Configuration

router isis core
 set-overload-bit on-startup 360
 net 49.0100.1001.1100.4003.00
 log adjacency changes
 lsp-gen-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
 lsp-refresh-interval 65000
 max-lsp-lifetime 65535
 address-family ipv4 unicast
  metric-style wide
  ispf
  spf-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
 !
 interface Loopback0
  passive
  address-family ipv4 unicast
  !
 !
 interface GigabitEthernet0/1/0/0   <<< Core interface
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  address-family ipv4 unicast
  !
 !
 interface GigabitEthernet0/1/0/1   <<< Core interface
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  address-family ipv4 unicast
  !
 !
!

mpls ldp
 router-id 100.111.4.3
 graceful-restart
 log
  neighbor
  graceful-restart
 !
 interface GigabitEthernet0/1/0/0
 !
 interface GigabitEthernet0/1/0/1
 !
!

BGP Configuration

router bgp 100
 nsr
 bgp router-id 100.111.4.3
 address-family ipv4 unicast
  additional-paths receive          <<< BGP add-path
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
  network 100.111.4.3/32
  allocate-label all
 !
 address-family vpnv4 unicast
  <SNIP>
 !
 session-group infra               <<< session group for iBGP clients (CN-ABRs and MTGs)
  remote-as 100
  password encrypted 082D4D4C
  update-source Loopback0
 !
 neighbor-group mtg                <<< MTG neighbor group
  use session-group infra
  address-family ipv4 labeled-unicast
   route-reflector-client
   maximum-prefix 150000 85 warning-only
  !
  address-family vpnv4 unicast
   <SNIP>
  !
 !
 neighbor-group cn-abr             <<< neighbor group for CN-ABRs in PoP-1, 2, 3, etc.
  use session-group infra
  address-family ipv4 labeled-unicast
   route-reflector-client
   route-policy BGP_Egress_Transport_Filter out   <<< Egress filter to drop unwanted RAN loopbacks towards neighboring aggregation regions
  !
 !
 neighbor 100.111.2.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.4.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.10.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.10.2
  use neighbor-group cn-abr
 !
 neighbor 100.111.15.1
  use neighbor-group mtg
 !
 neighbor 100.111.15.2
  use neighbor-group mtg
 !
!

route-policy BGP_Egress_Transport_Filter
 if community matches-any Deny_Transport_Community then
  drop
 else
  pass
 endif
end-policy

community-set Deny_Transport_Community
 10:10
end-set

Note Please refer to the "Prefix Filtering" section in the UMMT 3.0 Design Guide for a detailed explanation of how egress filtering is done at the CN-RR for constraining IPv4+label routes from remote RAN access regions.

3.1.2 Mobile Transport Gateway Configuration

This section shows the IGP/LDP configuration required to build the intra-domain LSPs, and the BGP configuration required to build the inter-domain LSPs from the MPC to the aggregation and access domains.


Figure 3-3. Mobile Transport Gateway (MTG)

Interface Configuration

interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
interface TenGigE0/0/0/0   <<< Core facing interface
 description To CN-K0201 Ten0/0/0/0
 cdp
 service-policy output PMAP-NNI-E
 ipv4 address 10.2.1.9 255.255.255.254
 carrier-delay up 2000 down 0
 load-interval 30
 transceiver permit pid all
!
interface TenGigE0/0/0/1   <<< Core facing interface
 description To CN-K0401 Ten0/0/0/1
 cdp
 service-policy output PMAP-NNI-E
 ipv4 address 10.4.1.5 255.255.255.254
 carrier-delay up 2000 down 0
 load-interval 30
 transceiver permit pid all
!

IGP Configuration

router isis core
 set-overload-bit on-startup 400
 net 49.0100.1001.1101.5001.00
 nsf cisco
 log adjacency changes
 lsp-gen-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
 lsp-refresh-interval 65000
 max-lsp-lifetime 600
 address-family ipv4 unicast
  metric-style wide
  ispf
  spf-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
 !
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
  !
 !
 interface TenGigE0/0/0/0
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  address-family ipv4 unicast
   fast-reroute per-prefix
   mpls ldp sync
  !
 !
 interface TenGigE0/0/0/1
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  address-family ipv4 unicast
   mpls ldp sync
  !
 !
!

mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
 nsr
 graceful-restart
 session protection
 log
  neighbor
  graceful-restart
  session-protection
  nsr
 !
 interface TenGigE0/0/0/0
 !
 interface TenGigE0/0/0/1
 !
!

BGP Configuration

router bgp 100 nsr bgp router-id 100.111.15.1 bgp redistribute-internal bgp graceful-restart ibgp policy out enforce-modifications address-family ipv4 unicast additional-paths receive <<< BGP add-path to receive multiple paths from CN-RR additional-paths selection route-policy add-path-to-ibgp network 100.111.15.1/32 route-policy MTG_Community <<< Color loopback prefix in BGP with MPC_Community allocate-label all ! address-family vpnv4 unicast !


 session-group infra
  remote-as 100
  password encrypted 011F0706
  update-source Loopback0
 !
 neighbor-group cn-rr
  use session-group infra
  address-family ipv4 labeled-unicast
   maximum-prefix 150000 85 warning-only
   next-hop-self
  !
  address-family vpnv4 unicast
  !
 !
 neighbor 100.111.4.3   <<< CN-RR
  use neighbor-group cn-rr
 !
 <SNIP>
!

route-policy MTG_Community
 set community MTG_Community
end-policy
!
community-set MTG_Community
 1001:1001
end-set
!

route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy

3.1.3 Core Area Border Router Configuration

This section shows the IGP/LDP configuration required to build intra-domain LSPs and the BGP configuration required to build the inter-domain LSPs on the core ABRs. The multi-level IGP organization between the core and aggregation is based on segmenting the core network in IS-IS Level 2 and the aggregation networks in IS-IS Level 1. The CN-ABR is an IS-IS L1/L2 node, where core-facing interfaces are IS-IS L2 links, local aggregation network-facing interfaces are IS-IS L1 links, and the interface connecting the redundant ABR is an IS-IS L1/L2 link.


Figure 3-4. Core Area Border Router (CN-ABR)

Interface Configuration

!
interface Loopback0
 ipv4 address 100.111.10.1 255.255.255.255
!
interface TenGigE0/0/0/0   <<< Interface connecting the redundant ABR
 cdp
 service-policy output PMAP-NNI-E
 ipv4 address 10.10.1.0 255.255.255.254
 load-interval 30
 frequency synchronization
 !
!
interface TenGigE0/0/0/1   <<< Core facing interface
 cdp
 service-policy output PMAP-NNI-E
 ipv4 address 10.6.1.1 255.255.255.254
 load-interval 30
!
interface TenGigE0/0/0/2   <<< Aggregation facing interface
 cdp
 service-policy output PMAP-NNI-E
 ipv4 address 10.5.1.5 255.255.255.254
 carrier-delay up 2000 down 0
 load-interval 30
 frequency synchronization
 !
!

IGP/LDP Configuration

router isis core-agg
 net 49.0100.1001.1101.0001.00
 nsf cisco
 log adjacency changes
 lsp-gen-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
 lsp-refresh-interval 65000
 max-lsp-lifetime 65535
 address-family ipv4 unicast
  metric-style wide
  spf-interval maximum-wait 5000 initial-wait 50 secondary-wait 200


  propagate level 1 into level 2 route-policy drop-all   <<< Disable default IS-IS L1 to L2 redistribution
 !
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
   tag 1000
  !
 !
 interface TenGigE0/0/0/0   <<< L1/L2 link
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   mpls ldp sync
  !
 !
 interface TenGigE0/0/0/1   <<< L2 link
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   mpls ldp sync
  !
 !
 interface TenGigE0/0/0/2   <<< L1 link
  circuit-type level-1
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   mpls ldp sync
  !
 <SNIP>
 !
!

route-policy drop-all   <<< Route policy to disable IS-IS L1 to L2 redistribution
 drop
end-policy

mpls ldp
 router-id 100.111.10.1
 nsr
 graceful-restart
 session protection
 !
 interface TenGigE0/0/0/0
 !
 interface TenGigE0/0/0/1
 !
 interface TenGigE0/0/0/2

BGP Configuration


router bgp 100
 nsr
 bgp router-id 100.111.10.1
 bgp cluster-id 1001   <<< Redundant ABRs K1001 and K1002 have a cluster ID 1001 to prevent loops
 bgp graceful-restart
 ibgp policy out enforce-modifications
 address-family ipv4 unicast
  additional-paths receive   <<< BGP add-path to receive multiple paths from CN-RR
  additional-paths selection route-policy add-path-to-ibgp
  network 100.111.10.1/32 route-policy CN_ABR_Community   <<< Color loopback prefix in BGP with CN_ABR_Community
  allocate-label all
 !
 address-family vpnv4 unicast
 !
 session-group infra
  remote-as 100
  password encrypted 03085A09
  cluster-id 1001
  update-source Loopback0
  graceful-restart
 !
 neighbor-group cn-rr   <<< iBGP neighbor group for CN-RR
  use session-group infra
  address-family ipv4 labeled-unicast   <<< Address family for RFC 3107 based transport
   maximum-prefix 150000 85 warning-only
   next-hop-self   <<< next-hop-self to insert into data path
  !
 !
 neighbor-group pan   <<< iBGP neighbor group for PANs in local aggregation network
  use session-group infra
  address-family ipv4 labeled-unicast   <<< Address family for RFC 3107 based transport
   route-reflector-client   <<< PANs are RR clients
   next-hop-self   <<< next-hop-self to insert into data path
  !
  address-family vpnv4 unicast
   <SNIP>
  !
 !
 neighbor 100.111.4.3   <<< CN-RR
  use neighbor-group cn-rr
 !
 neighbor 100.111.9.7   <<< PAN K0907
  use neighbor-group pan
 !
 neighbor 100.111.9.8   <<< PAN K0908
  use neighbor-group pan
 !
 neighbor 100.111.9.9   <<< PAN K0909
  use neighbor-group pan
 !
 neighbor 100.111.9.10   <<< PAN K0910
  use neighbor-group pan
 !
!

!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy


!

!
route-policy CN_ABR_Community
 set community CN_ABR_Community
end-policy
!
community-set CN_ABR_Community
 1000:1000
end-set
!

3.1.4 Pre-Aggregation Node Configuration

This section shows the IGP/LDP configuration required to build the intra-domain LSPs, and the BGP configuration required to build the inter-domain LSPs in the aggregation network. The PANs are ABRs between the aggregation and RAN access domains. The segmentation between the two domains is achieved by enabling two different IGP processes on the PANs. The first process is the core/aggregation IGP process, and the second process is an independent RAN IGP process. All CSGs subtending from the same pair of PANs are part of this RAN IGP process.

Figure 3-5. Pre-Aggregation Node (PAN)

Interface Configuration

!
interface Loopback0   <<< Loopback for core-agg IGP process
 ip address 100.111.9.7 255.255.255.255
!
interface Loopback100   <<< Loopback for RAN IGP process
 ip address 100.111.99.7 255.255.255.255
!

mpls ldp router-id Loopback0 force

!
interface TenGigabitEthernet0/1   <<< Interface facing Core network
 description To AG2-K0501
 no switchport
 ip address 10.5.1.3 255.255.255.254


 ip router isis core-agg
 mpls ip
 synchronous mode
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point
!
interface TenGigabitEthernet0/2   <<< Interface connecting redundant PAN; VLANed to close aggregation and RAN IGP processes
 description To AG1-K0908
 switchport trunk allowed vlan 10,20
 switchport mode trunk
 synchronous mode
!
interface Vlan10   <<< VLAN 10 used for Core-Aggregation IGP process
 ip address 10.9.7.0 255.255.255.254
 ip router isis core-agg
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point
!
interface Vlan20   <<< VLAN 20 used for RAN IS-IS IGP process
 ip address 10.9.7.4 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process
!
interface GigabitEthernet0/1   <<< RAN Access Ring facing interface
 description To CSG-K1317
 no switchport
 ip address 10.9.7.2 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 mpls ldp discovery transport-address 100.111.99.7   <<< LDP discovery transport address changed to Loopback100
 synchronous mode
 bfd interval 500 min_rx 500 multiplier 3
 isis circuit-type level-2-only   <<< Ignore if OSPF is used as RAN IGP process
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process
!

Note Interface TenGigabitEthernet0/2 is the link that connects to the redundant PAN. Since we are running two independent IGP processes on the PANs, we need to make sure we provide a closed path for both processes. This is enabled by VLAN'ing the common link between the two PANs. Here VLAN 10 is used to close the core-agg IGP process and VLAN 20 to close the RAN IGP process.

Note PAN K0907's LDP ID Loopback0 is in the core-aggregation IS-IS process and not in the RAN IS-IS IGP process. The command mpls ldp discovery transport-address 100.111.99.7 changes the transport address to Loopback100 for LDP discovery out of the RAN access ring-facing interface G0/1.

Core-Aggregation LDP/IGP Process Configuration

router isis core-agg


 net 49.0100.1001.1100.9007.00
 is-type level-1   <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 passive-interface Loopback0
 bfd all-interfaces
!

Note Depending on the choice of the operator, the RAN IGP process could either be an IS-IS process at Level 2, or an OSPF process in Area 0. The following sections describe configurations for both options.

Option-1: OSPF as RAN IGP Process

This section shows the configuration if OSPF is used as the RAN IGP process.

router ospf 1
 router-id 100.111.99.7
 ispf
 timers throttle spf 50 50 5000
 timers throttle lsa 10 20 5000
 timers lsa arrival 10
 timers pacing flood 5
 network 100.111.99.7 0.0.0.0 area 0   <<< Loopback100 interface for RAN IGP
 network 10.9.7.2 0.0.0.1 area 0   <<< RAN Access facing interface
 network 10.9.7.4 0.0.0.1 area 0   <<< Interface VLAN 20; trunked on redundant PAN facing link
 bfd all-interfaces
!

Option-2: IS-IS as RAN IGP Process

This section shows the configuration if IS-IS is used as the RAN IGP process.

router isis ran
 net 49.1100.1011.1100.9007.00
 is-type level-1   <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 passive-interface Loopback100   <<< Loopback100 interface for RAN IGP
 bfd all-interfaces
!

BGP Configuration


!
router bgp 100
 bgp router-id 100.111.9.7
 bgp cluster-id 907
 bgp log-neighbor-changes
 bgp graceful-restart restart-time 120
 bgp graceful-restart stalepath-time 360
 bgp graceful-restart
 no bgp default ipv4-unicast
 neighbor csg peer-group   <<< Peer group for CSGs in local RAN network
 neighbor csg remote-as 100
 neighbor csg password lab
 neighbor csg update-source Loopback100   <<< RAN IGP Loopback100 used as source
 neighbor abr peer-group   <<< Peer group for CN-ABRs
 neighbor abr remote-as 100
 neighbor abr password lab
 neighbor abr update-source Loopback0   <<< Core-Aggregation IGP Loopback0 used as source
 neighbor 100.111.10.1 peer-group abr
 neighbor 100.111.10.2 peer-group abr
 neighbor 100.111.13.17 peer-group csg
 neighbor 100.111.13.18 peer-group csg
 neighbor 100.111.13.19 peer-group csg
 !
 address-family ipv4   <<< Address family for RFC 3107 based transport
  bgp redistribute-internal
  network 100.111.9.7 mask 255.255.255.255 route-map AGG_Community   <<< Advertise Loopback0 with aggregation community
  neighbor abr send-community
  neighbor abr next-hop-self all   <<< Set NHS to insert into data path
  neighbor abr send-label   <<< Send labels with BGP routes
  neighbor csg send-community
  neighbor csg route-map BGP_Egress_Transport_Filter out   <<< Filter PAN community towards CSGs
  neighbor csg route-reflector-client
  neighbor csg next-hop-self all
  neighbor csg send-label
  neighbor 100.111.10.1 activate   <<< CN-ABR K1001
  neighbor 100.111.10.2 activate   <<< CN-ABR K1002
  neighbor 100.111.13.17 activate   <<< CSG K1317 in local RAN
  neighbor 100.111.13.18 activate   <<< CSG K1318 in local RAN
  neighbor 100.111.13.19 activate   <<< CSG K1319 in local RAN
 exit-address-family
 !
 address-family vpnv4
  <SNIP>
 !
 address-family rtfilter unicast
  <SNIP>
 !

!
route-map AGG_Community permit 10


 set community 100:100 100:101   <<< 100:100 is the common aggregation community; 100:101 identifies this PAN as being in metro-1, location-1
!
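The route map BGP_Egress_Transport_Filter referenced in the BGP configuration above is not shown in this excerpt. The following is a minimal sketch of what such a filter could look like, assuming the intent is simply to drop prefixes carrying the common aggregation community (100:100) towards the CSGs; the community-list name is illustrative:

ip community-list standard PAN_Community permit 100:100   <<< Hypothetical list name; 100:100 is the common aggregation community
!
route-map BGP_Egress_Transport_Filter deny 10   <<< Drop PAN/aggregation prefixes towards CSGs
 match community PAN_Community
!
route-map BGP_Egress_Transport_Filter permit 20   <<< Permit remaining prefixes (e.g., MPC prefixes)

The exact match criteria in a deployment depend on which communities the operator wants the CSGs to learn.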

3.1.5 Cell Site Gateway Configuration

The IGP organization between the aggregation and RAN access networks is based on running two different IGP processes on the PANs, as discussed in the previous section. The first process is the core-aggregation IGP process, and the second process is an independent RAN IGP process for the mobile RAN network. The CSGs in all access rings/hub-spokes subtending from the same pair of PANs are part of this RAN IGP process.

Depending on the operator, the RAN IGP process could either be an IS-IS process at Level 2, or an OSPF process in Area 0. Configurations for both options are shown below.

Figure 3-6. Cell Site Gateway (CSG)

Interface Configuration

!
interface Loopback0
 ip address 100.111.13.18 255.255.255.255
 isis tag 10   <<< Ignore if OSPF is used as RAN IGP process
!
interface GigabitEthernet0/10   <<< NNI: Ring Facing Interface
 description To CSG-K1317
 synchronous mode
 service instance 10 ethernet
  encapsulation untagged


  bridge-domain 10
!
interface GigabitEthernet0/11   <<< NNI: Ring Facing Interface
 description To CSG-K1319
 synchronous mode
 service instance 20 ethernet
  encapsulation untagged
  bridge-domain 20
!
interface Vlan10
 ip address 10.9.7.3 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process
!
interface Vlan20
 ip address 10.13.18.0 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process

Option-1: OSPF as RAN IGP Process

This section shows the CSG configuration if OSPF is used as the RAN IGP.

router ospf 1
 router-id 100.111.13.18
 ispf
 timers throttle spf 50 50 5000
 timers throttle lsa 10 20 5000
 timers lsa arrival 10
 timers pacing flood 5
 passive-interface Vlan200
 passive-interface Loopback0
 network 10.9.0.0 0.0.255.255 area 0
 network 10.13.0.0 0.0.255.255 area 0
 network 100.111.0.0 0.0.255.255 area 0
 bfd all-interfaces

Option-2: IS-IS as RAN IGP Process

This section shows the CSG configuration if IS-IS is used as the RAN IGP.

router isis ran
 net 49.1100.1011.1101.3018.00
 is-type level-1   <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 passive-interface Loopback0
 bfd all-interfaces

BGP Configuration

!
router bgp 100


 bgp router-id 100.111.13.18
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor pan peer-group
 neighbor pan remote-as 100
 neighbor pan password lab
 neighbor pan update-source Loopback0
 neighbor 100.111.9.7 peer-group pan
 neighbor 100.111.9.8 peer-group pan
 !
 address-family ipv4
  network 100.111.13.18 mask 255.255.255.255 route-map CSG_Community   <<< Advertise Loopback0 with CSG community
  redistribute connected
  neighbor pan send-community
  neighbor pan next-hop-self
  neighbor pan send-label
  neighbor 100.111.9.7 activate
  neighbor 100.111.9.8 activate
 exit-address-family
 !
 address-family vpnv4
  <SNIP>
 !
 address-family rtfilter unicast
  <SNIP>
 exit-address-family
 !

!
route-map CSG_Community permit 10
 set community 10:10 10:101   <<< 10:10 is the common CSG community; 10:101 identifies this CSG as being in metro-1, location-1
!

3.2 Multi-Area IGP Design with IGP/LDP Access

This option follows the approach of enabling labeled BGP across the core and aggregation networks and extends the unified MPLS LSP to the access by redistribution between labeled BGP and the RAN access domain IGP. Any node in the core and aggregation network that requires inter-domain LSPs to reach nodes in remote domains acts as a labeled BGP PE and runs iBGP IPv4 unicast+label with its corresponding local RR.


Figure 3-7. Unified MPLS Transport for Multi-Area IGP Design with IGP/LDP Access

The CN-ABRs are labeled BGP ABRs between the core and aggregation domains. They peer via iBGP labeled-unicast sessions with the CN-RR in the core network, and act as inline-RRs for their local aggregation network PAN clients. The CN-ABRs insert themselves into the data path to enable inter-domain LSPs by setting next-hop-self on all iBGP updates towards the CN-RR in the core network and their local aggregation network PAN clients. The MTGs residing in the core network are labeled BGP PEs. They peer via iBGP labeled-unicast sessions with the CN-RR, and advertise their loopbacks into iBGP labeled-unicast with a common BGP community (MPC BGP community) representing the mobile packet core.

All the PANs in the aggregation network that require inter-domain LSPs to reach remote PANs in another aggregation network, or the core network (to reach the MTGs, for example), act as labeled BGP PEs and peer via iBGP labeled-unicast sessions with the higher-level CN-ABR inline-RRs. The PANs advertise their loopbacks into BGP labeled-unicast with a common BGP community that represents the aggregation community. They learn labeled BGP prefixes marked with the aggregation BGP community and the MPC BGP community.

The inter-domain LSPs are extended to the MPLS/IP RAN access with a controlled redistribution based on IGP tags and BGP communities. Each mobile access network subtending from a pair of PANs is based on a different IGP process. At the PANs, the inter-domain core and aggregation LSPs are extended to the RAN access by redistributing between iBGP and the RAN IGP. In one direction, the RAN access node loopbacks (filtered based on IGP tags) are redistributed into iBGP labeled-unicast and tagged with a RAN access BGP community that is unique to that RAN access region. In the other direction, the MPC prefixes filtered based on MPC-marked BGP communities, and optionally, adjacent RAN access prefixes filtered based on RAN-region-marked BGP communities (if inter-access X2 connectivity is desired), are redistributed into the RAN access IGP process.
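As an illustration of the first direction, the RAN-to-iBGP redistribution route map on the PAN can match the IGP tag applied to the CSG loopbacks and set the RAN access communities. The following is a sketch only; the route-map name appears in the PAN BGP configuration later in this section, while the match and set values are taken from the CSG examples in this guide rather than a mandated policy:

route-map RAN_ACCESS_TO_BGP permit 10   <<< Sketch: redistribute only tagged CSG loopbacks into iBGP labeled-unicast
 match tag 10   <<< CSG loopbacks carry IS-IS tag 10 (see the CSG Loopback0 configuration)
 set community 10:10 10:101   <<< Common CSG community plus the community for this RAN access region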

The MTGs in the core network are capable of handling large scale and will learn all BGP-labeledunicast prefixes since they need connectivity to all the CGSs in the entire network. Simple prefixfiltering based on BGP communities is performed on the CN-RRs for constraining IPv4+label routesfrom remote RAN access regions from proliferating into neighboring aggregation domains, wherethey are not needed. The PANs only learn labeled BGP prefixes marked with the aggregation BGPcommunity and the MPC BGP community. This allows the PANs to enable inter metro Wirelineservices across the core, and also redistribute the mobile packet core prefix to their local accessnetworks. Using a separate IGP process for the RAN access enables the mobile access network tohave limited control plane scale, since the CSGs only learn local IGP routes and labeled BGP prefixesmarked with the MPC BGP community.
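The CN-RR filtering described above is not shown in this excerpt. A minimal IOS XR sketch is given below, assuming the common CSG community 10:10 marks the RAN access loopbacks; the community-set and policy names are illustrative, and a real deployment would scope the filter per neighboring aggregation domain:

community-set CSG_Community   <<< Hypothetical set name; 10:10 is the common CSG community
  10:10
end-set
!
route-policy CN_RR_TO_AGG   <<< Hypothetical policy applied towards neighboring aggregation domains
  if community matches-any CSG_Community then
    drop   <<< Remote RAN access loopbacks are not needed in other aggregation domains
  else
    pass
  endif
end-policy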


Note The network infrastructure organization of this model at the top layers of the network (namely, the core and aggregation domains) is identical to that defined in Section 3.1 Multi-Area IGP Design with Labeled BGP Access. The difference here is that labeled BGP spans only the core and aggregation networks and does not extend to the RAN access. Instead, the end-to-end unified MPLS LSP is extended into the RAN access with selective redistribution between labeled BGP and the RAN access domain IGP at the PAN. Please refer to the Multi-Area IGP Design with Labeled BGP Access section for configuration details on the Core Route Reflector, Mobile Transport Gateway, and Core Area Border Router, since the same configuration is also applied to this model.

This section includes the following major topics:

• Section 3.2.1 Pre-Aggregation Node Configuration

• Section 3.2.2 Cell Site Gateway Configuration

3.2.1 Pre-Aggregation Node Configuration

This section shows the IGP/LDP configuration required to build the intra-domain LSPs, and the BGP configuration required to build the inter-domain LSPs in the aggregation network. The segmentation between the aggregation and RAN access domains is achieved by enabling two different IGP processes on the PANs. The first process is the core/aggregation IGP process, and the second process is an independent RAN IGP process. All CSGs subtending from the same pair of PANs are part of this RAN IGP process.

Figure 3-8. Pre-Aggregation Node (PAN)

Interface Configuration

!
interface Loopback0   <<< Loopback for core-agg IGP process
 ip address 100.111.9.7 255.255.255.255
!
interface Loopback100   <<< Loopback for RAN IGP process
 ip address 100.111.99.7 255.255.255.255
!


mpls ldp router-id Loopback0 force

!
interface TenGigabitEthernet0/1   <<< Interface facing Core network
 description To AG2-K0501
 no switchport
 ip address 10.5.1.3 255.255.255.254
 ip router isis core-agg
 mpls ip
 synchronous mode
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point
!
interface TenGigabitEthernet0/2   <<< Interface connecting redundant PAN; VLANed to close aggregation and RAN IGP processes
 description To AG1-K0908
 switchport trunk allowed vlan 10,20
 switchport mode trunk
 synchronous mode
!
interface Vlan10   <<< VLAN 10 used for Core-Aggregation IGP process
 ip address 10.9.7.0 255.255.255.254
 ip router isis core-agg
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point
!
interface Vlan20   <<< VLAN 20 used for RAN IS-IS IGP process
 ip address 10.9.7.4 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process
!
interface GigabitEthernet0/1   <<< RAN Access Ring facing interface
 description To CSG-K1317
 no switchport
 ip address 10.9.7.2 255.255.255.254
 ip router isis ran   <<< Ignore if OSPF is used as RAN IGP process
 mpls ip
 mpls ldp discovery transport-address 100.111.99.7   <<< LDP discovery transport address changed to Loopback100
 synchronous mode
 bfd interval 500 min_rx 500 multiplier 3
 isis circuit-type level-2-only   <<< Ignore if OSPF is used as RAN IGP process
 isis network point-to-point   <<< Ignore if OSPF is used as RAN IGP process
!

Note Interface TenGigabitEthernet0/2 is the link that connects to the redundant PAN. Since we are running two independent IGP processes on the PANs, we need to make sure we provide a closed path for both processes. This is enabled by VLAN'ing the common link between the two PANs. Here, VLAN 10 is used to close the core-agg IGP process and VLAN 20 to close the RAN IGP process.


Note PAN K0907's LDP ID Loopback0 is in the core-aggregation IS-IS process and not in the RAN IS-IS IGP process. The command mpls ldp discovery transport-address 100.111.99.7 changes the transport address to Loopback100 for LDP discovery out of the RAN access ring-facing interface G0/1.

Core-Aggregation LDP/IGP Process Configuration

router isis core-agg
 net 49.0100.1001.1100.9007.00
 is-type level-1   <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 passive-interface Loopback0
 bfd all-interfaces
!

Note Depending on the operator, the RAN IGP process could either be an IS-IS process at Level 2 or an OSPF process in Area 0. The following sections describe configurations for both options.

Option-1: OSPF as RAN IGP Process

This section shows the configuration if OSPF is used as the RAN IGP process.

router ospf 1
 router-id 100.111.99.7
 ispf
 timers throttle spf 50 50 5000
 timers throttle lsa 10 20 5000
 timers lsa arrival 10
 timers pacing flood 5
 redistribute isis core-agg level-1 subnets route-map Local_ABR_Loopback   <<< Redistribute Core ABR loopback for Primary Reference Clock (PRC) source reachability
 redistribute bgp 100 subnets route-map BGP_TO_RAN_ACCESS   <<< Redistribute MPC/EPC prefixes based on BGP community filtering
 network 100.111.99.7 0.0.0.0 area 0   <<< Loopback100 interface for RAN IGP
 network 10.9.7.2 0.0.0.1 area 0   <<< RAN Access facing interface
 network 10.9.7.4 0.0.0.1 area 0   <<< Interface VLAN 20; trunked on redundant PAN facing link
 distribute-list route-map BGP_redistributed_prefixes in   <<< Ensure BGP labeled-unicast learnt prefixes are preferred despite dual redistribution
 bfd all-interfaces
!

route-map BGP_TO_RAN_ACCESS permit 10   <<< BGP community based filtering for iBGP to RAN IGP redistribution
 match community EPC_Community   <<< Only redistribute prefixes marked with MPC/EPC PE BGP community
 set tag 1000   <<< Mark redistributed prefixes with IGP tag 1000
!


ip community-list standard EPC_Community permit 1001:1001   <<< 1001:1001 is the BGP community assigned to the MPC/EPC PEs
!

route-map BGP_redistributed_prefixes deny 10   <<< OSPF inbound filtering based on IGP tags
 match tag 1000   <<< Drop prefixes marked with IGP tag 1000 during iBGP to RAN IGP redistribution
!
route-map BGP_redistributed_prefixes permit 20

Note Since we are dealing with a ring access network, we have a dual redistribution scenario between the aggregation network iBGP and the RAN IGP. In such a situation, even though the MPC prefixes are learned via iBGP on the PANs, the lower admin distance of the IGP leads to the IGP routes being preferred over the iBGP routes. The solution is to tag the routes during iBGP to RAN OSPF redistribution on one side of the access ring, and use OSPF inbound filtering with a route map and distribute list to filter out these routes when they are learnt from the other side of the access ring.

Option-2: IS-IS as RAN IGP Process

This section shows the configuration if IS-IS is used as the RAN IGP process.

router isis ran
 net 49.1100.1011.1100.9007.00
 is-type level-1   <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 redistribute isis core-agg ip route-map Local_ABR_Loopback   <<< Redistribute Core ABR loopback for Primary Reference Clock (PRC) source reachability
 redistribute bgp 100 route-map BGP_TO_RAN_ACCESS level-1   <<< Redistribute MPC/EPC prefixes based on BGP community filtering
 passive-interface Loopback100   <<< Loopback100 interface for RAN IGP
 distance 201 100.111.99.8 0.0.0.0 BGP_redistributed_prefixes   <<< Raise admin distance so that BGP labeled-unicast learnt prefixes are preferred despite dual redistribution
 bfd all-interfaces
!

route-map BGP_TO_RAN_ACCESS permit 10   <<< BGP community based filtering for iBGP to RAN IGP redistribution
 match community EPC_Community   <<< Only redistribute prefixes marked with MPC/EPC PE BGP community
 set tag 1000   <<< Mark redistributed prefixes with IGP tag 1000
!
ip community-list standard EPC_Community permit 1001:1001   <<< 1001:1001 is the BGP community assigned to the MPC/EPC PEs
!

ip access-list standard BGP_redistributed_prefixes   <<< Ensure BGP labeled-unicast learnt prefixes are preferred despite dual redistribution


deny 100.111.13.0 0.0.0.255 deny 10.0.0.0 0.255.255.255 permit any

Note Since we are dealing with a ring access network, there is a dual-redistribution scenario between the aggregation-network iBGP and the RAN IGP. In this situation, even though the MPC prefixes are learned via iBGP on the PANs, the lower administrative distance of the IGP causes the IGP routes to be preferred over the iBGP routes. Unlike OSPF, IS-IS does not support inbound filtering using route maps with a distribute list. The solution is instead to raise the IGP administrative distance, based on the IGP originator address, for the iBGP-to-RAN-IGP redistributed prefixes.

BGP Configuration

!
router bgp 100
 bgp router-id 100.111.9.7
 bgp cluster-id 907
 bgp log-neighbor-changes
 bgp graceful-restart restart-time 120
 bgp graceful-restart stalepath-time 360
 bgp graceful-restart
 no bgp default ipv4-unicast
 neighbor csg peer-group                  <<< Peer group for CSGs in the local RAN network - used only for the VPNv4 address family
 neighbor csg remote-as 100
 neighbor csg password lab
 neighbor csg update-source Loopback100   <<< RAN IGP Loopback100 used as source
 neighbor abr peer-group                  <<< Peer group for CN-ABRs
 neighbor abr remote-as 100
 neighbor abr password lab
 neighbor abr update-source Loopback0     <<< Core-aggregation IGP Loopback0 used as source
 neighbor 100.111.10.1 peer-group abr
 neighbor 100.111.10.2 peer-group abr
 neighbor 100.111.13.17 peer-group csg
 neighbor 100.111.13.18 peer-group csg
 neighbor 100.111.13.19 peer-group csg
 !
 address-family ipv4                      <<< Address family for RFC 3107-based transport
  bgp redistribute-internal
  network 100.111.9.7 mask 255.255.255.255 route-map AGG_Community   <<< Advertise Loopback0 with the Aggregation community
  redistribute ospf 1                                                <<< Option-1 (if OSPF is used as RAN IGP)
  redistribute isis ran level-1 route-map RAN_ACCESS_TO_BGP          <<< Option-2 (if IS-IS is used as RAN IGP)
  neighbor abr send-community
  neighbor abr next-hop-self              <<< Set NHS to assign local labels for RAN redistributed prefixes
  neighbor abr send-label                 <<< Send labels with BGP routes
  neighbor 100.111.10.1 activate          <<< CN-ABR K1001
  neighbor 100.111.10.2 activate          <<< CN-ABR K1002
 exit-address-family
 !
 address-family vpnv4
  <SNIP>
 !
 address-family rtfilter unicast
  <SNIP>
!

route-map AGG_Community permit 10    <<< Advertise Loopback0 with the Aggregation community
 set community 100:100               <<< Community assigned to all aggregation nodes
!

route-map RAN_ACCESS_TO_BGP permit 10    <<< Redistribute prefixes based on IGP tags
 match tag 10                            <<< Only redistribute prefixes from RAN access tagged with IGP tag 10
!

3.2.2 Cell Site Gateway Configuration

The IGP organization between the aggregation and RAN access networks is based on running two different IGP processes on the PANs, as discussed in the previous section. The first is the core-aggregation IGP process; the second is an independent RAN IGP process for the mobile RAN network. The CSGs in all access rings and hub-and-spoke topologies subtending from the same pair of PANs are part of this RAN IGP process.

Depending on the operator, the RAN IGP process can be either an IS-IS process at Level 1 or an OSPF process in Area 0. Configurations for both options are shown below.

Figure 3-9. Cell Site Gateway (CSG)

Interface Configuration


!
interface Loopback0
 ip address 100.111.13.18 255.255.255.255
 isis tag 10                     <<< Ignore if OSPF is used as the RAN IGP process
!
interface GigabitEthernet0/10    <<< NNI: ring-facing interface
 description To CSG-K1317
 synchronous mode
 service instance 10 ethernet
  encapsulation untagged
  bridge-domain 10
!
interface GigabitEthernet0/11    <<< NNI: ring-facing interface
 description To CSG-K1319
 synchronous mode
 service instance 20 ethernet
  encapsulation untagged
  bridge-domain 20
!
interface Vlan10
 ip address 10.9.7.3 255.255.255.254
 ip router isis ran              <<< Ignore if OSPF is used as the RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point     <<< Ignore if OSPF is used as the RAN IGP process
!
interface Vlan20
 ip address 10.13.18.0 255.255.255.254
 ip router isis ran              <<< Ignore if OSPF is used as the RAN IGP process
 mpls ip
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point     <<< Ignore if OSPF is used as the RAN IGP process

Option-1: OSPF as RAN IGP Process

This section shows the CSG configuration if OSPF is used as the RAN IGP.

router ospf 1
 router-id 100.111.13.18
 ispf
 timers throttle spf 50 50 5000
 timers throttle lsa 10 20 5000
 timers lsa arrival 10
 timers pacing flood 5
 passive-interface Vlan200
 passive-interface Loopback0
 network 10.9.0.0 0.0.255.255 area 0
 network 10.13.0.0 0.0.255.255 area 0
 network 100.111.0.0 0.0.255.255 area 0
 bfd all-interfaces

Option-2: IS-IS as RAN IGP Process

This section shows the CSG configuration if IS-IS is used as the RAN IGP.

router isis ran
 net 49.1100.1011.1101.3018.00
 is-type level-1    <<< IS-IS Level-1
 ispf level-1
 metric-style wide
 fast-flood
 max-lsp-lifetime 65535
 lsp-refresh-interval 65000
 spf-interval 5 50 200
 prc-interval 5 50 200
 lsp-gen-interval 5 50 200
 no hello padding
 log-adjacency-changes
 passive-interface Loopback0
 bfd all-interfaces


Chapter 4: Services

This chapter includes the following major topics:

• Section 4.1 L3 MPLS VPN Service Model for LTE

• Section 4.2 L2 MPLS VPN Service Model for 2G and 3G

• Section 4.3 Fixed-Mobile Convergence Use Case

4.1 L3 MPLS VPN Service Model for LTE

This section includes the following topics:

• Section 4.1.1 MPLS VPN Transport for LTE S1 and X2 Interfaces

• Section 4.1.2 MPLS VPN Control Plane

4.1.1 MPLS VPN Transport for LTE S1 and X2 Interfaces

This section describes the L3VPN configuration aspects on the CSGs in the RAN access and on the MTGs in the core network required for implementing the LTE backhaul service for the X2 and S1 interfaces.

Figure 4-1. MPLS VPN Service Implementation for LTE Backhaul

CSG MPLS VPN Configuration


The MPLS VPN configuration on the CSGs, which is minimal, is the same on all CSGs in a given RAN region. The configuration shown includes the required commands to enable BGP PIC protection support for MPLS VPNs.

eNodeB UNI Interface

interface GigabitEthernet0/1    <<< eNodeB UNI
 description Connected to eNodeB
 service-policy output PMAP-eNB-UNI-P-E
 service instance 100 ethernet
  encapsulation untagged
  service-policy input PMAP-eNB-UNI-I
  bridge-domain 100
 !
!
interface Vlan100
 vrf forwarding LTE2
 ip address 113.26.23.1 255.255.255.0
!

VRF Definition

vrf definition LTE2
 rd 1111:1111
 !
 address-family ipv4
  export map ADDITIVE
  route-target export 10:101     <<< Local RAN RT; same RT on all CSGs in a given RAN region
  route-target import 10:101     <<< Imported by the CSGs in the same RAN region to enable X2 communication
  route-target import 1001:1001  <<< MPC RT; imported by every CSG in the entire network
  route-target import 10:103     <<< Optional import of the adjacent RAN region RT; required only to enable inter-RAN-region X2 communication
 exit-address-family

• Route map to export common RT 1111:1111 in addition to Local RAN RT 10:101

route-map ADDITIVE permit 10
 set extcommunity rt 1111:1111 additive   <<< Common RAN RT; exported by every CSG in the entire network

VPNv4 BGP Configuration

router bgp 100
 !
 address-family ipv4 vrf LTE2
  bgp additional-paths install
  bgp nexthop trigger delay 0
  redistribute connected
 exit-address-family
 !
 address-family vpnv4
  bgp additional-paths install
  bgp nexthop trigger delay 1


Note The above route target 10:101 implies that this is a CSG in metro-1, RAN access region-1. Similarly, the route target 10:103 implies a CSG in metro-1, RAN access region-3. Please refer to the "L3 MPLS VPN Service Model for LTE" section in the UMMT 3.0 Design Guide for a detailed explanation of how inter-access X2 communication is enabled with labeled BGP using BGP communities.
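As an illustration of this numbering scheme, a CSG in metro-1 RAN access region-3 would mirror the VRF shown above with its own local RT. The values below simply follow the note's convention and are an assumption for illustration, not a validated configuration:

```
vrf definition LTE2
 rd 1111:1111
 !
 address-family ipv4
  export map ADDITIVE            <<< Still exports the common RT 1111:1111 additively
  route-target export 10:103     <<< Local RAN RT for region-3
  route-target import 10:103
  route-target import 1001:1001  <<< MPC RT
  route-target import 10:101     <<< Optional: adjacent region-1 RT for inter-region X2
 exit-address-family
```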

PAN BGP Configuration

The following BGP configuration is required on the PANs to facilitate BGP PIC resiliency for MPLS VPNs.

router bgp 100
 !
 address-family ipv4
  bgp additional-paths receive
  bgp additional-paths install
  bgp nexthop trigger delay 0
 !
 address-family vpnv4
  bgp additional-paths install
  bgp nexthop trigger delay 1

(Optional) PAN MPLS VPN Configuration

The following MPLS VPN configuration on the PANs is optional, and is only required in deployments where there are cell sites close to the pre-aggregation network, with eNBs directly connected to the PANs at the CO.

eNodeB UNI Interface

interface GigabitEthernet0/3/6
 vrf forwarding LTE2
 ip address 114.1.23.1 255.255.255.0
 load-interval 30
 negotiation auto
 ipv6 address 2001:114:1:23::1/64
end

VRF Definition

vrf definition LTE2
 rd 1111:1111
 !
 address-family ipv4
  export map ADDITIVE
  route-target export 10:101
  route-target import 10:101
  route-target import 1001:1001
 exit-address-family
 !
 address-family ipv6
  export map ADDITIVE
  route-target export 10:101
  route-target import 10:101
  route-target import 1001:1001
 exit-address-family

• Route map to export common RT 1111:1111 in addition to Local RAN RT 10:101

!
route-map ADDITIVE permit 10
 set extcommunity rt 1111:1111 additive
!

VPNv4/v6 BGP Configuration

!
router bgp 100
 bgp router-id 100.111.14.1
 <SNIP>
 !
 address-family ipv4 vrf LTE2
  redistribute connected
 exit-address-family
 !
 address-family ipv6 vrf LTE2
  redistribute connected
 exit-address-family

Mobile Transport Gateway MPLS VPN Configuration

This is a one-time MPLS VPN configuration done on the MTGs. No modifications are needed when additional CSGs in any RAN access region, or additional MTGs, are added to the network.

SAE GW UNI Interface

interface TenGigE0/0/0/2.1100
 description Connected to SAE Gateway
 vrf LTE2
 ipv4 address 115.1.23.3 255.255.255.0
 ipv6 nd dad attempts 0
 ipv6 address 2001:115:1:23::3/64
 encapsulation dot1q 1100

VRF Definition

vrf LTE2
 address-family ipv4 unicast
  import route-target
   1111:1111     <<< Common CSG RT imported by the MTG
   1001:1001     <<< Import MPC RT
  !
  export route-target
   1001:1001     <<< Export MPC RT; imported by every CSG in the entire network
  !
 !
 address-family ipv6 unicast
  import route-target
   1111:1111     <<< Common CSG RT imported by the MTG
   1001:1001     <<< Import MPC RT
  !
  export route-target
   1001:1001     <<< Export MPC RT; imported by every CSG in the entire network

MTG 1 VPNv4/v6 BGP Configuration

router bgp 100
 bgp router-id 100.111.15.1
 bgp update-delay 120
 <SNIP>
 !
 vrf LTE2
  address-family ipv4 unicast
   redistribute connected
  !
  address-family ipv6 unicast
   redistribute connected
  !

MTG 2 VPNv4/v6 BGP Configuration

router bgp 100
 bgp router-id 100.111.15.2
 <SNIP>
 !
 vrf LTE2
  address-family ipv4 unicast
   redistribute connected
  !
  address-family ipv6 unicast
   redistribute connected
  !

Note This version of the UMMT system release supports IPv6 MPLS VPNs using 6VPE functionality, as defined in RFC 4659, only on the ASR 903 PAN and the ASR 9000 MTG. 6VPE will be supported on the ASR 901, ME3800X, and ME3600X-24CX platforms in the next system release.

4.1.2 MPLS VPN Control Plane

This section describes the BGP control plane aspects for the VPNv4 LTE backhaul service.

Figure 4-2. BGP Control Plane for MPLS VPN Service

CSG LTE VPNv4 PE Configuration

router bgp 100
 bgp router-id 100.111.13.18
 <SNIP>
 neighbor 100.111.14.1 peer-group pan    <<< Inline-RR PAN K1401
 neighbor 100.111.14.2 peer-group pan    <<< Inline-RR PAN K1402
 !
 address-family vpnv4
  neighbor pan send-community extended
  neighbor 100.111.14.1 activate
  neighbor 100.111.14.2 activate
 exit-address-family
 !
 address-family rtfilter unicast         <<< RT constrained route distribution
  neighbor pan send-community extended
  neighbor 100.111.14.1 activate
  neighbor 100.111.14.2 activate
 exit-address-family
!

Note Please refer to the "Prefix Filtering" section in the UMMT 3.0 Design Guide for a detailed explanation of how RT-constrained route distribution is used to constrain VPNv4 routes from remote RAN access regions.

PAN Inline-RR Configuration

The BGP configuration for the inline-RR function on the PAN, shown below, requires only the small change of activating the neighbor session whenever a new CSG is added to the local access network.

router bgp 100
 bgp router-id 100.111.14.1
 <SNIP>
 neighbor 100.111.10.1 peer-group abr    <<< CN-ABR K1001
 neighbor 100.111.10.2 peer-group abr    <<< CN-ABR K1002
 neighbor 100.111.13.17 peer-group csg   <<< RR-client CSG K1317
 neighbor 100.111.13.18 peer-group csg   <<< RR-client CSG K1318
 neighbor 100.111.13.19 peer-group csg   <<< RR-client CSG K1319
 !
 address-family ipv4
  bgp nexthop trigger delay 2
  <SNIP>
 exit-address-family
 !
 address-family vpnv4                    <<< VPNv4 AF
  bgp nexthop trigger delay 3
  neighbor csg send-community extended
  neighbor csg route-reflector-client    <<< CSGs are RR clients
  neighbor abr send-community both       <<< CN-RR is the next-level RR
  neighbor 100.111.10.1 activate
  neighbor 100.111.10.2 activate
  neighbor 100.111.13.17 activate
  neighbor 100.111.13.18 activate
  neighbor 100.111.13.19 activate
 exit-address-family
 !
 address-family vpnv6                    <<< VPNv6 AF for 6VPE
  bgp nexthop trigger delay 3
  neighbor abr send-community both
  neighbor 100.111.10.1 activate
  neighbor 100.111.10.2 activate
 exit-address-family
 !
 address-family rtfilter unicast         <<< RT constrained route distribution towards CSGs
  neighbor csg send-community extended
  neighbor 100.111.13.17 activate
  neighbor 100.111.13.18 activate
  neighbor 100.111.13.19 activate
 exit-address-family
!
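As a quick sanity check of the inline-RR setup above, the standard IOS BGP show commands can confirm the VPNv4 sessions and what is being reflected to a given CSG client; this is a verification sketch with outputs omitted:

```
show ip bgp vpnv4 all summary                                     <<< VPNv4 sessions to the CSG RR clients and CN-ABRs
show ip bgp vpnv4 all neighbors 100.111.13.18 advertised-routes   <<< VPNv4 routes reflected to CSG K1318
```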

CN-ABR Inline-RR Configuration


router bgp 100
 bgp router-id 100.111.10.1
 <SNIP>
 !
 neighbor-group cn-rr           <<< iBGP neighbor-group for the CN-RR
  use session-group infra
  <SNIP>
  !
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
 !
 neighbor-group pan             <<< iBGP neighbor-group for PANs in the local aggregation network
  use session-group infra
  <SNIP>
  !
  address-family vpnv4 unicast
   route-reflector-client
  !
  address-family vpnv6 unicast
   route-reflector-client
  !
 !
 neighbor 100.111.4.3           <<< CN-RR
  use neighbor-group cn-rr
 !
 neighbor 100.111.9.7           <<< PAN K0907
  use neighbor-group pan
 !
 neighbor 100.111.9.8           <<< PAN K0908
  use neighbor-group pan
 !
 neighbor 100.111.9.9           <<< PAN K0909
  use neighbor-group pan
 !
 neighbor 100.111.9.10          <<< PAN K0910
  use neighbor-group pan
 !
!

CN-RR Configuration

router bgp 100
 bgp router-id 100.111.4.3
 <SNIP>
 !
 address-family vpnv4 unicast
  nexthop trigger-delay critical 2000
 !
 address-family vpnv6 unicast
  nexthop trigger-delay critical 2000
 !
 neighbor-group mtg             <<< neighbor-group for MTGs
  use session-group intra-as
  <SNIP>
  !
  address-family vpnv4 unicast
   route-reflector-client
  !
  address-family vpnv6 unicast
   route-reflector-client
  !
 !
 neighbor-group cn-abr          <<< neighbor-group for CN-ABRs
  use session-group intra-as
  <SNIP>
  !
  address-family vpnv4 unicast
   route-reflector-client
   route-policy BGP_Egress_RAN_Filter out
  !
  address-family vpnv6 unicast
   route-reflector-client
   route-policy BGP_Egress_RAN_Filter out
  !
 !
 neighbor 100.111.2.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.4.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.10.1
  use neighbor-group cn-abr
 !
 neighbor 100.111.10.2
  use neighbor-group cn-abr
 !
 neighbor 100.111.15.1
  use neighbor-group mtg
 !
 neighbor 100.111.15.2
  use neighbor-group mtg

route-policy BGP_Egress_RAN_Filter
  if extcommunity rt matches-any Deny_RAN_Community then
    drop
  else
    pass
  endif
end-policy

extcommunity-set rt Deny_RAN_Community
  1111:1111
end-set

Note Please refer to the "Prefix Filtering" section in the UMMT 3.0 Design Guide for a detailed explanation of how egress filtering is done at the CN-RR to constrain VPN routes from remote RAN access regions.

MTG LTE VPNv4/v6 PE Configuration

router bgp 100
 bgp router-id 100.111.15.1
 <SNIP>
 !
 neighbor-group cn-rr
  use session-group intra-as
  <SNIP>
  !
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
 !
 neighbor 100.111.4.3
  use neighbor-group cn-rr
!

Note This version of the UMMT system release supports IPv6 MPLS VPNs using 6VPE functionality, as defined in RFC 4659, only on the ASR 903 PAN and the ASR 9000 MTG. 6VPE will be supported on the ASR 901, ME3800X, and ME3600X-24CX platforms in the next system release.

4.2 L2 MPLS VPN Service Model for 2G and 3G

Layer 2 MPLS VPN service models provide TDM Circuit Emulation Services (CES) for 2G backhaul and ATM CES for 3G backhaul. The following services were validated as part of UMMT 3.0:

• TDM backhaul from the CSG to the MTG, utilizing the structured CESoPSN mechanism

• TDM backhaul from the PAN to the MTG, utilizing the unstructured SAToP mechanism

• ATM backhaul from the PAN to the MTG, supporting both ATM clear-channel and ATM IMAcircuits.

This section includes the following topics:

• Section 4.2.1 CESoPSN VPWS Service from CSG to MTG

• Section 4.2.2 SAToP VPWS Service from PAN to MTG

• Section 4.2.3 ATM Clear-channel VPWS Service from PAN to MTG

• Section 4.2.4 ATM IMA VPWS Service from PAN to MTG

4.2.1 CESoPSN VPWS Service from CSG to MTG

Circuit Emulation Services over Packet Switched Network (CESoPSN) provides structured transport of TDM circuits down to the DS0 level across an MPLS-based backhaul architecture. The configurations for the CSG and MTGs are outlined in this section, including an illustration of basic backup pseudowire configuration on the CSG to enable transport to redundant MTGs. Complete high availability configurations are available in the Service High Availability section.

Figure 4-3. CESoPSN Service Implementation for 2G and 3G Backhaul


Note Regarding the example shown:

• The ASR 901 motherboard with built-in 12 GE, 1 FE, and 16 T1/E1 ports (A901-12C-FT-D) is used to create the CEM interface for the TDM pseudowire.

• Both ASR 9000 MTGs utilize a 1-port channelized OC3/STM-1 ATM and circuit emulation SPA (SPA-1CHOC3-CE-ATM) in a SIP-700 card for the TDM interfaces.

• CESoPSN encapsulates T1/E1 structured (channelized) services. Structured mode identifies the framing and sends only the payload, which can be channelized T1s within a DS3 or DS0s within a T1; multiple DS0s can be bundled into the same packet. This mode is based on IETF RFC 5086.

• "mpls ldp discovery targeted-hello accept" is required because the LDP session for the pseudowire is between PEs that are not directly connected. Because targeted-hello responses are not explicitly configured, both sessions show as passive, which is normal.

ASR 901 Cell Site Gateway Configuration

card type t1 0 0
!
controller T1 0/0
 framing esf
 linecode b8zs
 cablelength short 133
 cem-group 0 timeslots 1-24
!
pseudowire-class CESoPSN
 encapsulation mpls
 control-word
!
interface CEM0/0
 no ip address
 load-interval 30
 cem 0
  xconnect 100.111.15.1 13261501 encapsulation mpls pw-class CESoPSN
   backup peer 100.111.15.2 13261502 pw-class CESoPSN
 !
 hold-queue 4096 in
 hold-queue 4096 out
!
interface Loopback0
 ip address 100.111.13.26 255.255.255.255
 isis tag 10
!
router isis agg-acc
 passive-interface Loopback0
!
mpls ldp discovery targeted-hello accept
!#
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with the remote PE so as to establish the AToM PW
!#

ASR 9000 Mobile Transport Gateway Configuration

The other MTG configuration is identical, except with a Loopback0 IP address of 100.111.15.2 and a pw-id of 13261502.

hw-module subslot 0/2/1 cardtype sonet
!
controller SONET0/2/1/0
 description To ONS15454-K1410 OC3 port 4/1
 ais-shut
 report lais
 report lrdi
 sts 1
  mode vt15-t1
  delay trigger 250
 !
 clock source line
!
controller T1 0/2/1/0/1/1/3
 cem-group framed 0 timeslots 1-24
 forward-alarm AIS
 forward-alarm RAI
 clock source line
!
interface CEM0/2/1/0/1/1/3:0
 load-interval 30
 l2transport
 !
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 !
 pw-class CESoPSN
  encapsulation mpls
   control-word
 !
 xconnect group TDM-K1326
  p2p T1-CESoPSN-01
   interface CEM0/2/1/0/1/1/3:0
   neighbor 100.111.13.26 pw-id 13261501
    pw-class CESoPSN
   !
  !
 !
!
router isis core
 !
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
  !
 !
router bgp 100
 bgp router-id 100.111.15.1
 address-family ipv4 unicast
  network 100.111.15.1/32 route-policy MTG_Community
 !
mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
!
!#
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with the remote PE so as to establish the AToM PW
!#
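With both endpoints configured, the pseudowire can be verified from either end. The following are standard platform show commands, given here as a verification sketch with outputs omitted:

```
! On the ASR 901 CSG (IOS)
show mpls l2transport vc 13261501 detail   <<< PW status and negotiated control word
show mpls ldp neighbor 100.111.15.1        <<< Targeted LDP session to the MTG

! On the ASR 9000 MTG (IOS XR)
show l2vpn xconnect group TDM-K1326 detail
```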


4.2.2 SAToP VPWS Service from PAN to MTG

Structure-Agnostic Time Division Multiplexing over Packet (SAToP) provides unstructured transport of TDM circuits across an MPLS-based backhaul architecture. The configurations for the PAN and MTGs are outlined in this section, including an illustration of backup pseudowire configuration on the PAN to enable transport to redundant MTGs.

Figure 4-4. SAToP VPWS Service Implementation for 2G Backhaul

Note Regarding the example shown:

• ASR 903 utilizes a 16 port T1/E1 Interface Module (A900-IMA16D) for TDM interfaces.

• ME 3600X 24CX utilizes on-board T1/E1 interfaces.

• Both ASR 9000 MTGs utilize a 1-port channelized OC3/STM-1 ATM and circuit emulation SPA (SPA-1CHOC3-CE-ATM) in a SIP-700 card for the TDM interfaces.

• SAToP encapsulates T1/E1 services, disregarding any structure that may be imposed on these streams, in particular the structure imposed by standard TDM framing. This mode is based on IETF RFC 4553.

• "mpls ldp discovery targeted-hello accept" is required because the LDP session for the pseudowire is between PEs that are not directly connected. Because targeted-hello responses are not explicitly configured, both sessions show as passive, which is normal.

ASR 903 Pre-Aggregation Node Configuration

card type t1 0 5
!
controller T1 0/5/0
 framing unframed
 clock source internal
 linecode b8zs
 cablelength short 110
 cem-group 0 unframed
!
pseudowire-class SAToP
 encapsulation mpls
 control-word
!
interface CEM0/5/0
 no ip address
 load-interval 30
 cem 0
  xconnect 100.111.15.1 14011501 encapsulation mpls pw-class SAToP
   backup peer 100.111.15.2 14011502 pw-class SAToP
 !
 hold-queue 4096 in
 hold-queue 4096 out
!


interface Loopback0
 ip address 100.111.14.1 255.255.255.255
!
router isis agg-acc
 passive-interface Loopback0
!
mpls ldp discovery targeted-hello accept
!#
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with the remote PE so as to establish the AToM PW
!#

ME 3600X 24CX Pre-Aggregation Node Configuration

card type t1 0 1
!
controller T1 0/1
 framing unframed
 clock source internal
 linecode b8zs
 cablelength short 110
 cem-group 0 unframed
!
pseudowire-class SAToP
 encapsulation mpls
 control-word
!
interface CEM0/1
 no ip address
 load-interval 30
 cem 0
  xconnect 100.111.15.1 9171501 encapsulation mpls pw-class SAToP
   backup peer 100.111.15.2 9171502 pw-class SAToP
 !
!
interface Loopback0
 ip address 100.111.9.17 255.255.255.255
!
router isis agg-acc
 passive-interface Loopback0
!
mpls ldp discovery targeted-hello accept
!#
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with the remote PE so as to establish the AToM PW
!#

ASR 9000 Mobile Transport Gateway Configuration

The other MTG configuration is identical, except with a Loopback0 IP address of 100.111.15.2 and pw-ids ending in 1502 instead of 1501.

hw-module subslot 0/2/1 cardtype sonet
!
controller SONET0/2/1/0
 description To ONS15454-K1410 OC3 port 4/1
 ais-shut
 report lais
 report lrdi
 sts 1
  mode vt15-t1
  delay trigger 250
 !
 clock source line
!
controller T1 0/2/1/0/1/1/1
 cem-group unframed
 forward-alarm AIS
 forward-alarm RAI
 clock source line
!
controller T1 0/2/1/0/1/1/2
 cem-group unframed
 forward-alarm AIS
 forward-alarm RAI
 clock source line
!
interface CEM0/2/1/0/1/1/1
 load-interval 30
 l2transport
 !
!
interface CEM0/2/1/0/1/1/2
 load-interval 30
 l2transport
 !
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 pw-class SAToP
  encapsulation mpls
   control-word
 !
 xconnect group TDM-K0917
  p2p T1-SAToP-01
   interface CEM0/2/1/0/1/5/1
   neighbor 100.111.9.17 pw-id 9171501
    pw-class SAToP
   !
  !
 !
 xconnect group TDM-K1401
  p2p T1-SAToP-01
   interface CEM0/2/1/0/1/1/2
   neighbor 100.111.14.1 pw-id 14011501
    pw-class SAToP
   !
  !
 !
!
router isis core
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
  !
 !
!
router bgp 100
 bgp router-id 100.111.15.1
 address-family ipv4 unicast
  network 100.111.15.1/32 route-policy MTG_Community
 !
mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
!
!#
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with the remote PE so as to establish the AToM PW
!#

4.2.3 ATM Clear-channel VPWS Service from PAN to MTG

Any Transport over MPLS (AToM) pseudowire circuits are utilized to provide ATM circuit transport across an MPLS-based backhaul architecture. The ATM interface is configured for AAL0 to allow for transparent transport of the entire PVC across the transport network. The configurations for the PAN and MTGs are outlined in this section. QoS implementation for ATM circuits is covered in the Quality of Service section, and resiliency via pseudowire redundancy and MR-APS is covered in the High Availability section.

Figure 4-5. ATM VPWS Service Implementation for 3G Backhaul

Note Regarding the example shown:

• ASR 903 utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for the ATM interfaces.

• Both ASR 9000 MTGs utilize a 1-port OC3/STM-1 ATM SPA (SPA-1XOC3-ATM-V2) in a SIP-700 card for the ATM interfaces.

• ATM transport via pseudowire over an MPLS infrastructure is detailed in IETF RFC 4447.

• The PE side of the ATM interface uses aal0 encapsulation, and the CE side uses aal5snap encapsulation.

• "mpls ldp discovery targeted-hello accept" is required because the LDP session for the pseudowire is between PEs that are not directly connected. Because targeted-hello responses are not explicitly configured, both sessions show as passive, which is normal.

CE Node Connected to PAN

!
card type t1 0 0
!
controller T1 0/3
 framing esf
 clock source line
 linecode b8zs
 cablelength long 0db
 mode atm
!
interface ATM0/3
 ip address 100.14.15.26 255.255.255.252
 load-interval 30
 no scrambling-payload
 no atm enable-ilmi-trap
 pvc 100/4011
  protocol ip 100.14.15.25 broadcast
  encapsulation aal5snap
 !
!
interface Vlan201
 ip address 214.14.6.17 255.255.255.252
 load-interval 30
 no ptp enable
!
interface GigabitEthernet0/2
 description Traffic Generator with IP 214.14.6.18
 switchport access vlan 201
 switchport mode access
 load-interval 30
!
ip route 214.15.3.16 255.255.255.252 100.14.15.25
!

ASR 903 Pre-Aggregation Node Configuration

card type t1 0 5
license feature atm
!
controller T1 0/5/2
 framing esf
 clock source internal
 linecode b8zs
 cablelength long 0db
 atm
!
interface ATM0/5/2
 no ip address
 no atm enable-ilmi-trap
!
interface ATM0/5/2.100 point-to-point
 no atm enable-ilmi-trap
 pvc 100/4011 l2transport
  encapsulation aal0
  xconnect 100.111.15.1 1401150115 encapsulation mpls
 !
!
interface Loopback0
 ip address 100.111.14.1 255.255.255.255
!
router isis agg-acc
 passive-interface Loopback0
!
mpls ldp discovery targeted-hello accept
!
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote PE so as to establish AToM PW
!#

ASR 9000 Mobile Transport Gateway Configuration

The other MTG configuration is identical, except with a Loopback 0 IP address of 100.111.15.2, and pw-ids ending in 1502 instead of 1501.

interface ATM0/2/3/0
 load-interval 30
!
interface ATM0/2/3/0.100 l2transport
 pvc 100/4011
  encapsulation aal0
  shape vbr-rt 20000 14000 7000
 !
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 pw-class ATM
  encapsulation mpls
 !
 xconnect group ATM-K1401
  p2p T1-ATM-01
   interface ATM0/2/3/0.100
   neighbor 100.111.14.1 pw-id 1401150115
    pw-class ATM
   !
  !
 !
!
router isis core
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
  !
 !
!
router bgp 1000
 bgp router-id 100.111.15.1
 address-family ipv4 unicast
  network 100.111.15.1/32 route-policy MTG_Community
 !
!
mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
!
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote PE so as to establish AToM PW
!#

CE device connected to MTGs

interface ATM3/1/0
 no ip address
 no atm ilmi-keepalive
 no atm enable-ilmi-trap
!
interface ATM3/1/0.100 point-to-point
 ip address 100.14.15.25 255.255.255.252
 no atm enable-ilmi-trap
 pvc 100/4011
  protocol ip 100.14.15.26 broadcast
  encapsulation aal5snap
 !
!
interface GigabitEthernet2/3/0
 description Traffic Generator with IP 214.15.3.18
 ip address 214.15.3.17 255.255.255.252
 load-interval 30
 speed 1000
 no negotiation auto
 cdp enable
!
ip route 214.14.6.16 255.255.255.252 ATM3/1/0.100


4.2.4 ATM IMA VPWS Service from PAN to MTG

Any Transport over MPLS (AToM) pseudowire circuits are utilized to provide ATM circuit transport across an MPLS-based backhaul architecture. The ATM interface in this example is configured for IMA. The configurations for the PAN and MTGs are outlined in this section. QoS implementation for ATM circuits is covered in the Quality of Service section, and resiliency via pseudowire redundancy and MR-APS is covered in the High Availability section.

Figure 4-6. ATM VPWS Service Implementation for 3G Backhaul

Regarding the example shown:

Note • The ASR 903 utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for ATM interfaces.

• Both ASR 9000 MTGs utilize a 1-port OC3/STM-1 ATM SPA (SPA-1XOC3-ATM-V2) in a SIP-700 card for the TDM interfaces.

• ATM transport via Pseudowire over an MPLS infrastructure is detailed in IETF RFC 4447.

• The PE side of the ATM interface uses aal0 encapsulation, and the CE side uses aal5snap encapsulation.

• "mpls ldp discovery targeted-hello accept" is required because the LDP session for the pseudowire runs between PEs that are not directly connected. Since a targeted-hello response is not explicitly configured, both sessions will show as passive, which is normal.

CE node connected to PAN

!
card type t1 0 0
!
controller T1 0/3
 framing esf
 clock source internal
 linecode b8zs
 cablelength short 110
 ima-group 0 no-scrambling-payload
!
interface ATM0/IMA0
 ip address 100.14.15.30 255.255.255.252
 ima group-id 0
 no atm ilmi-keepalive
 no atm enable-ilmi-trap
 pvc 200/4021
  protocol ip 100.14.15.29 broadcast
  encapsulation aal5snap
 !
!
interface Vlan201
 ip address 214.7.1.1 255.255.255.0
 no ptp enable
!
interface GigabitEthernet0/3
 switchport access vlan 201
 switchport mode access
!
ip route 215.3.1.0 255.255.255.0 100.14.15.29
!

ASR 903 Pre-Aggregation Node Configuration

card type t1 0 5
license feature atm
!
controller T1 0/5/3
 framing esf
 clock source internal
 linecode b8zs
 cablelength short 110
 ima-group 1
!
interface ATM0/5/ima0
 no ip address
 atm bandwidth dynamic
 no atm enable-ilmi-trap
 no atm ilmi-keepalive
 pvc 200/4021 l2transport
  encapsulation aal0
  xconnect 100.111.15.1 1402150116 encapsulation mpls
   backup peer 100.111.15.2 1402150216
 !
!
interface Loopback0
 ip address 100.111.14.1 255.255.255.255
!
router isis agg-acc
 passive-interface Loopback0
!
mpls ldp discovery targeted-hello accept
!
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote PE so as to establish AToM PW
!#

ASR 9000 Mobile Transport Gateway Configuration

The other MTG configuration is identical, except with a Loopback 0 IP address of 100.111.15.2, and pw-ids ending in 1502 instead of 1501.

interface ATM0/2/3/0
 load-interval 30
!
interface ATM0/2/3/0.200 l2transport
 pvc 200/4021
  encapsulation aal0
 !
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 pw-class ATM
  encapsulation mpls
 !
 xconnect group ATM-K1402
  p2p T1-ATM-IMA-01
   interface ATM0/2/3/0.200
   neighbor 100.111.14.1 pw-id 1402150116
    pw-class ATM
   !
  !
 !
!
router isis core
 interface Loopback0
  passive
  point-to-point
  address-family ipv4 unicast
  !
 !
!
router bgp 1000
 bgp router-id 100.111.15.1
 address-family ipv4 unicast
  network 100.111.15.1/32 route-policy MTG_Community
 !
!
mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
!
!# ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote PE so as to establish AToM PW
!#

CE device connected to MTGs

interface ATM3/1/0
 no ip address
 no atm ilmi-keepalive
 no atm enable-ilmi-trap
!
interface ATM3/1/0.200 point-to-point
 ip address 100.14.15.29 255.255.255.252
 no atm enable-ilmi-trap
 pvc 200/4021
  protocol ip 100.14.15.30 broadcast
  encapsulation aal5snap
 !
!
interface GigabitEthernet2/1/0
 ip address 215.3.1.1 255.255.255.0
!
ip route 214.7.1.0 255.255.255.0 100.14.15.30
!

4.3 Fixed-Mobile Convergence Use Case

The following example illustrates how an E-Line wireline service is transported end-to-end via an EoMPLS pseudowire between CSGs, passing through the respective aggregation networks and core network. Unlike mobile services, where the remote end of the service is either an MTG (S1 or ATM/TDM PW) or a CSG in the same access domain (X2), an E-Line wireline service will typically span the entire transport network and terminate in an arbitrary access domain. Accounting for all possible remote destinations would require significantly larger routing tables in the access domains, far exceeding the capacities of the CSGs. To avoid this, an on-demand mechanism is implemented in the CSGs to dynamically update the routing prefix-lists at the time of service activation and deactivation, adding only the specific routes of the remote nodes necessary to support the currently configured services. This mechanism is detailed at the end of this section.

CSG-901-K1323

route-map CSG_Community permit 10
 set community 10:10 10:203 20:20
!
router bgp 1000
 address-family ipv4
  network 100.111.13.23 mask 255.255.255.255 route-map CSG_Community
!
interface GigabitEthernet0/3
 service instance 500 ethernet
  encapsulation dot1q 500
  rewrite ingress tag pop 1 symmetric
  xconnect 100.111.13.26 500 encapsulation mpls
!

Core ABR K0601

route-policy BGP_Egress_Transport_Filter
  if community matches-any (20:*) then
    pass
  elseif community matches-any (10:*) then
    drop
  else
    pass
  endif
end-policy

Core Route Reflector K0403

community-set Pass_WireLine_Community
  20:20
end-set
!
community-set Deny_Transport_Community
  10:10
end-set
!
route-policy pass-all
  pass
end-policy
!
route-policy BGP_Egress_Transport_Filter
  if community matches-any Pass_WireLine_Community then
    pass
  elseif community matches-any Deny_Transport_Community then
    drop
  else
    pass
  endif
end-policy
!
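The decision logic of the BGP_Egress_Transport_Filter policy can be sketched in Python, as a minimal model of the matches-any semantics (the community strings are taken from the configurations in this section; this is an illustration, not device behavior):

```python
def egress_transport_filter(communities):
    """Model of BGP_Egress_Transport_Filter: wireline routes tagged
    20:20 pass, mobile transport routes tagged 10:* are dropped, and
    everything else passes."""
    if "20:20" in communities:                          # Pass_WireLine_Community
        return "pass"
    if any(c.startswith("10:") for c in communities):   # Deny_Transport_Community
        return "drop"
    return "pass"

# A CSG loopback carrying both 10:10 and 20:20 is a wireline endpoint,
# so it is advertised beyond its local access domain; a mobile-only
# loopback (10:* only) is filtered.
print(egress_transport_filter(["10:10", "10:203", "20:20"]))  # pass
print(egress_transport_filter(["10:10", "10:104"]))           # drop
```

Because the wireline check runs first, tagging a CSG loopback with 20:20 in addition to its mobile communities is what allows it to cross the core toward remote access domains.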

For brevity, only one aggregation network configuration is shown.


Pre-Aggregation Node K0917

router bgp 101
 address-family ipv4
  neighbor csg route-map BGP_Egress_Transport_Filter out
!
ip bgp-community new-format
ip community-list standard MTG_Community permit 1001:1001
ip community-list standard Inter-Access-X2 permit 10:101
ip community-list standard WL_Community permit 20:20
!
route-map L1intoL2 permit 10
 match ip address Pre-Agg
 set level level-2
!
route-map PRE_AGG_Community permit 10
 set community 100:100 100:104
!
route-map BGP_Egress_Transport_Filter permit 10
 match community MTG_Community
 set mpls-label
!
route-map BGP_Egress_Transport_Filter permit 20
 match community WL_Community
 set mpls-label

CSG-901-K1326

router bgp 101
 address-family ipv4
  network 100.111.13.26 mask 255.255.255.255 route-map CSG_Community
  network 213.26.22.0 mask 255.255.255.252
  neighbor 100.111.9.17 activate
  neighbor 100.111.9.17 route-map Inbound_Filter in
  neighbor 100.111.9.18 activate
  neighbor 100.111.9.18 route-map Inbound_Filter in
!
ip bgp-community new-format
ip community-list standard Allowed_Communities permit 1001:1001
!
ip prefix-list WL-Service-Destinations seq 10 permit 100.111.13.24/32
ip prefix-list WL-Service-Destinations seq 15 permit 100.111.13.26/32
ip prefix-list WL-Service-Destinations seq 20 permit 100.111.13.66/32
ip prefix-list WL-Service-Destinations seq 25 permit 100.111.13.23/32
!
route-map Inbound_Filter permit 10
 match community Allowed_Communities
!
route-map Inbound_Filter permit 20
 match ip address prefix-list WL-Service-Destinations
!
route-map CSG_Community permit 10
 set community 10:10 10:104 20:20
!
interface GigabitEthernet0/3
 service instance 500 ethernet
  encapsulation dot1q 500
  rewrite ingress tag pop 1 symmetric
  xconnect 100.111.13.23 500 encapsulation mpls
!


Each time a new wireline service is enabled on the CSG, the route-map for the inbound filter on the CSG needs to be updated to allow the remote destination loopback of the wireline service to be accepted. This process can be easily automated using the simple Embedded Event Manager (EEM) script shown below. With this EEM script in place on the CSG, when the operator configures a new VPWS service on the device using the "xconnect destination vc-id encapsulation mpls" command, the remote T-PE loopback corresponding to the destination argument will automatically be added to the "WL-Service-Destinations" prefix-list of allowed wireline destinations. The script will also trigger a dynamic inbound soft reset using the "clear ip bgp destination soft in" command to initiate a nondisruptive dynamic route refresh.

event manager applet UpdateInboundFilter
 event cli pattern ".*xconnect.*encapsulation.*mpls" sync no skip no
 action 10 regexp " [0-9.]+" "$_cli_msg" result
 action 20 cli command "enable"
 action 21 cli command "conf t"
 action 30 cli command "ip prefix-list WL-Service-Destinations permit $result/32"
 action 31 puts "Inbound Filter updated for $result/32"
 action 40 cli command "end"
 action 50 cli command "enable"
 action 60 cli command "clear ip bgp 100.111.9.17 soft in"
 action 61 cli command "clear ip bgp 100.111.9.18 soft in"
 action 80 puts "Triggered Dynamic Inbound Soft Reset towards PANs"
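The peer-address extraction performed by action 10 can be sketched in Python (the sample command line is illustrative; the applet applies the same regular expression to $_cli_msg):

```python
import re

def extract_peer(cli_msg):
    """Model of EEM action 10: match the first ' <dotted-decimal>'
    token in the typed command, which for an
    'xconnect <peer> <vc-id> encapsulation mpls' line is the
    pseudowire peer loopback."""
    m = re.search(r" [0-9.]+", cli_msg)
    return m.group(0).strip() if m else None

cmd = "xconnect 100.111.13.23 500 encapsulation mpls"
print(extract_peer(cmd))  # 100.111.13.23
```

Note that the pattern matches the first space-prefixed numeric token, which works here because the peer address is the first argument after the xconnect keyword.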

Similarly, when a wireline service is removed from the CSG, the route-map for the inbound filter on the CSG needs to be updated to remove the remote destination loopback of the deleted wireline service. The following two EEM scripts automate this process on the CSG through the following logic:

1. The operator removes the xconnect with the "no xconnect" command. This command does not contain the IP address that is required to remove the correct line from the prefix-list.

2. To obtain the IP address, the "tovariable" applet is used with an environment variable, $_int. This applet is triggered by the "interface" command, which signals a potential change in the configuration and stores the interface name in the variable.

3. The second applet, "UpdateInboundFilter2", is triggered by the "no xconnect" command, and uses the interface name captured by the "tovariable" applet to obtain the IP address and remove it from the prefix-list.

event manager applet tovariable
 event cli pattern "interface" sync no skip no
 action 10 cli command "enable"
 action 20 cli command "conf t"
 action 30 cli command "event manager environment _int $_cli_msg"
!
event manager applet UpdateInboundFilter2
 event cli pattern "no xconnect" sync no skip yes
 action 10 cli command "enable"
 action 20 cli command "show run $_int"
 action 30 regexp "xconnect.*" "$_cli_result" line
 action 40 regexp " [0-9.]+" "$line" result
 action 50 cli command "enable"
 action 60 cli command "conf t"
 action 70 cli command "no ip prefix-list WL-Service-Destinations permit $result/32"
 action 80 cli command "$_int"
 action 90 cli command "no xc"
 action 100 cli command "clear ip bgp 100.111.9.17 soft in"
 action 110 cli command "clear ip bgp 100.111.9.18 soft in"
 action 120 puts "Triggered Dynamic Inbound Soft Reset towards PANs"
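The two-step extraction in actions 30-40 of UpdateInboundFilter2 can be sketched in Python: first isolate the xconnect line from the interface's running configuration, then pull the peer address out of it (the sample configuration text is illustrative):

```python
import re

def peer_from_config(show_run_output):
    """Model of EEM actions 30-40: find the 'xconnect ...' line in the
    captured 'show run <interface>' output, then extract the first
    space-prefixed dotted-decimal token from that line."""
    line = re.search(r"xconnect.*", show_run_output)
    if not line:
        return None
    addr = re.search(r" [0-9.]+", line.group(0))
    return addr.group(0).strip() if addr else None

cfg = """interface GigabitEthernet0/3
 service instance 500 ethernet
  xconnect 100.111.13.23 500 encapsulation mpls"""
print(peer_from_config(cfg))  # 100.111.13.23
```

The second regex is scoped to the matched xconnect line rather than the whole configuration, which keeps other numeric tokens (VLAN IDs, interface numbers) from being picked up by mistake.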


Note Future system releases will include feature enhancements to the Unified MPLS toolbox to fully automate the scale control in the RAN access while enabling wireline deployments without needing manual prefix filtering and EEM scripting.


Chapter 5 Synchronization Distribution

Unified MPLS for Mobile Transport Release 3.0 implements a comprehensive hybrid synchronization distribution model, utilizing a combination of Synchronous Ethernet (SyncE) and IEEE 1588v2 Precision Time Protocol (PTP) to deliver Frequency, Phase, and Time of Day (ToD) distribution across the transport network. This model implements a hybrid PTP Boundary Clock (BC) function, which uses frequency from SyncE and phase from PTP to recover the clock locally, and regenerates the clock to deliver frequency, phase, and ToD to downstream devices. This hybrid model provides the most accurate method of synchronization distribution:

• SyncE is utilized for Frequency Distribution, and each hop in the transport network is configured to support SyncE. SyncE is implemented per the ITU-T G.8264 specification, which outlines the use of the Synchronization Status Message (SSM) protocol carried via the Ethernet synchronization messaging channel for passing Quality Level (QL) information between nodes. This method enables ring architectures with redundant SyncE sources while preventing synchronization loops.

• 1588v2 PTP is utilized for Phase and ToD Distribution. Hybrid BC functionality is implemented in the aggregation nodes, PANs, and CSGs to distribute synchronization to all end points without requiring an individual PTP stream from the PRC/PRTC to each end point. The Best Master Clock Algorithm (BMCA) implemented in each node allows redundant upstream PTP sources to be configured.

This model can be modified to suit certain deployment scenarios or limitations. For example, if the services delivered over the transport network require only frequency synchronization, then only SyncE need be deployed. If the end-to-end transport network contains devices that do not support SyncE, then 1588 may be utilized for frequency distribution in that portion of the network.

The following section includes the configuration for the hybrid synchronization model:

• Section 5.1 Hybrid Model Configuration

The configurations for each node are divided into SyncE and 1588v2 sections, so that one or the other can be configured separately if the particular deployment scenario only requires or permits one technology to be deployed.

5.1 Hybrid Model Configuration

This section shows the end-to-end configurations to implement the hybrid clocking model in the UMMT 3.0 System architecture. Note that while the topology reflects an Inter-AS deployment, the configurations are identical for a single-AS deployment.


Figure 5-1. Test Topology

Notes on Figure 5-1:

• The TP-5000 has two Ethernet connections, to MTG-K1501 and MTG-K1502, which provide PTP PRC source redundancy, marked green and blue (green with priority 100/105, blue with priority 110/115).

• The TP-5000 has two E1 outputs connected to the ASBR-AGG-K1001 and ASBR-AGG-K1002 RSP BITS front-panel ports. For redundancy under RSP switchover or node failure, it is recommended to use two IOC modules to provide a total of four E1 outputs, connecting to both RSPs of the ASBR-AGG nodes.

• Each ASBR-AGG node runs a 1588 PTPv2 hybrid BC, getting frequency from SyncE and phase from 1588. The green PTP source has higher priority, so both ASBR-AGG nodes synchronize with the green PRC source. The recovered PTP clock is regenerated on the ASBR-AGG WAN interface PTP master port, providing 1588 clock to the PAN Pre-AGG nodes.

• Each PAN runs a 1588 PTPv2 hybrid BC, getting frequency from SyncE and phase from 1588. Each PAN, based on its metric, chooses the closest ASBR-AGG as its primary PRC source and the other as backup.

• Access CSG nodes run a 1588 BC only, without BMCA; each picks up one of the PANs as its PRC source and provides clock to the downstream eNodeB from the recovered clock regenerated on the BC master port. Full hybrid BC functionality on the ASR 901 will be covered in an update.
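The redundant-source selection described above reduces, in the simplest case, to a priority comparison between the two configured masters. A minimal Python sketch, using the priority values from the AGN-ASBR-BC-Slave profile shown later in this section (a real BMCA also compares clock class, accuracy, and clock identity before falling back to priority):

```python
# Simplified model of choosing between two configured PTP masters by
# configured priority, as in the AGN-ASBR-BC-Slave profile (lower
# numeric priority wins). This ignores the other BMCA attributes.
masters = [
    {"addr": "200.10.1.9", "priority": 120},  # green PRC source
    {"addr": "200.10.2.9", "priority": 125},  # blue PRC source
]
best = min(masters, key=lambda m: m["priority"])
print(best["addr"])  # 200.10.1.9
```

This is why both ASBR-AGG nodes lock to the green source while it is healthy: only when the preferred master's announces time out does selection fall through to the blue source.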

Table 5-1. Connectivity Table

Role                Device  Interface          VLAN-ID  IP Address  Note

Master PTP Clock    TP5000  ioc1-1 (eth1)      300      200.10.1.9  In global of MTG-9006-K1501 GigabitEthernet0/1/0/0.300

                            ioc1-2 (eth2)      400      200.10.2.9  In global of MTG-9006-K1502 GigabitEthernet0/1/0/0.400

Master SyncE Clock  TP5000  Port1 (E1 output)  NA       NA          In ABR-K1001 BITS port

Backup SyncE Clock  TP5000  Port2 (E1 output)  NA       NA          In ABR-K1002 BITS port

Master Clock (TP5000)


*** TP5000 sync to lab GPS ***

tp5000> show gps
GPS Information
GPS Mode          - auto
GPS Mask          - 10
GPS Antenna Delay - 240
GPS Latitude      - N35:51:18.886
GPS Longitude     - W78:52:31.209
GPS Height        - 125.6
----------------------------------------------------------
|Index |No.      |SNR      |Health   |Azimuth  |Elevation|
|------|---------|---------|---------|---------|---------|
|1     |3        |34       |healthy  |128      |69       |
|2     |6        |33       |healthy  |84       |60       |
|3     |7        |33       |healthy  |306      |25       |
|4     |13       |35       |healthy  |299      |54       |
|5     |16       |33       |healthy  |35       |52       |
|6     |19       |31       |healthy  |169      |43       |
|7     |23       |35       |healthy  |222      |65       |
----------------------------------------------------------
tp5000>

*** TP5000 PTP Status per IO Controller / Interface ***

tp5000> show ptp-status ioc-1
Grandmaster status information:
Port Enabled   : yes
Clock Id       : 00:B0:AE:FF:FE:01:90:86
Profile        : unicast
Clock Class    : locked to reference
Clock Accuracy : within 100ns
Timescale      : PTP
Num clients    : 5
Client load    : 1%
Packet load    : 1%

tp5000> show ptp-status ioc-2
Grandmaster status information:
Port Enabled   : yes
Clock Id       : 00:B0:AE:FF:FE:01:90:87
Profile        : unicast
Clock Class    : locked to reference
Clock Accuracy : within 100ns
Timescale      : PTP
Num clients    : 2
Client load    : 0%
Packet load    : 0%


*** TP5000 VLAN configuration towards clients ***

tp5000> show vlan-config ioc1-1
Index  VLAN-ID  Pri  State   Address     Netmask          Gateway
1      100      5    enable  20.10.1.9   255.255.255.254  20.10.1.8
2      300      5    enable  200.10.1.9  255.255.255.254  200.10.1.8

tp5000> show vlan-config ioc1-2
Index  VLAN-ID  Pri  State   Address     Netmask          Gateway
1      200      5    enable  20.10.2.9   255.255.255.254  20.10.2.8
2      400      5    enable  200.10.2.9  255.255.255.254  200.10.2.8

tp5000> show ptp-config common ioc1-1
PTP Timescale               AUTO
PTP State                   enabled
PTP Max Number Clients      500
PTP Profile                 unicast
PTP ClockId                 00:B0:AE:FF:FE:01:90:86
PTP Priority 1              100
PTP Priority 2              105
PTP Domain                  0
PTP DSCP                    46
PTP DSCP State              enabled
PTP Sync Limit              -7
PTP Announce Limit          -3
PTP Delay Limit             -7
PTP Unicast Negotiation     enabled
PTP Unicast Lease Duration  300
PTP Dither                  disabled

tp5000> show ptp-config common ioc1-2
PTP Timescale               AUTO
PTP State                   enabled
PTP Max Number Clients      500
PTP Profile                 unicast
PTP ClockId                 00:B0:AE:FF:FE:01:90:87
PTP Priority 1              110
PTP Priority 2              115
PTP Domain                  0
PTP DSCP                    46
PTP DSCP State              enabled
PTP Sync Limit              -7
PTP Announce Limit          -3
PTP Delay Limit             -7
PTP Unicast Negotiation     enabled
PTP Unicast Lease Duration  300
PTP Dither                  disabled
tp5000>

MTG Connections to PTP Grandmaster Clock TP-5000 (provides the BMCA sources for downstream devices)

MTG-K1501

*** Interface To TP5000 Eth IOC-1 on MTG-K1501 ***
!
interface GigabitEthernet0/0/1/0.300
 service-policy output PMAP-NNI-E_PTP-Test
 ipv4 address 200.10.1.8 255.255.255.254
 load-interval 30
 encapsulation dot1q 300
!

*** Interface To TP5000 Eth IOC-1 on MTG-K1502 ***
!
interface GigabitEthernet0/0/1/0.400
 service-policy output PMAP-NNI-E_PTP-Test
 ipv4 address 200.10.2.8 255.255.255.254
 encapsulation dot1q 400
!

Aggregation Node Configurations: SyncE and 1588 PTPv2 Hybrid BC with BMCA (AGN-ASBR-K1001)

BITS Clock-Interface for SyncE frequency source (same for both nodes)

*** Clock Interface Configuration - BITS link to Master Clock TP5000 ***
!
clock-interface sync 0 location 0/RSP0/CPU0
 port-parameters
  bits-input e1 non-crc-4 hdb3
 !
 frequency synchronization
  selection input
  priority 1
  wait-to-restore 0
  quality receive exact itu-t option 1 PRC
 !
!
clock-interface sync 0 location 0/RSP1/CPU0
 port-parameters
  bits-input e1 non-crc-4 hdb3
 !
 frequency synchronization
  selection input
  priority 1
  wait-to-restore 0
  quality receive exact itu-t option 1 PRC
 !
!

AGN-ASBR K1001 SyncE Configuration

*** Global Configuration ***
!
frequency synchronization
 quality itu-t option 1
 log selection changes
!
*** Interface Configuration ***

RP/0/RSP0/CPU0:ASBR-9006-K1001#show run int ten0/0/0/0
interface TenGigE0/0/0/0
 description To AGN-ASBR-K1002 T0/0/0/2
 frequency synchronization
  selection input
  priority 1
 !
!

RP/0/RSP0/CPU0:ASBR-9006-K1001#show run int ten0/0/0/1
interface TenGigE0/0/0/1
 description To AGN-K0502::T0/0/0/1
 frequency synchronization
  selection input
  priority 2
 !
!

AGN-ASBR K1001 1588 Configuration

*** PTP profile ***

RP/0/RSP0/CPU0:ASBR-9006-K1001#show run ptp
ptp
 clock
  domain 0
  identity mac-address router
 !
 profile AGN-ASBR-BC-Slave
  dscp 46
  transport ipv4
  port state slave-only
  master ipv4 200.10.1.9
   priority 120
  !
  master ipv4 200.10.2.9
   priority 125
  !
  sync frequency 64
  clock operation one-step
  source ipv4 address 100.100.10.1
  delay-request frequency 64
 !
 profile AGN-ASBR-BC-Master
  dscp 46
  transport ipv4
  sync frequency 64
  clock operation one-step
  announce timeout 2
  source ipv4 address 10.5.2.1
  delay-request frequency 64
 !
!

*** PTP hybrid BC slave port ***

RP/0/RSP0/CPU0:ASBR-9006-K1001#show run int ten0/0/0/2
interface TenGigE0/0/0/2
 description To CN-ASBR-K0601::T0/0/0/2
 ptp
  profile AGN-ASBR-BC-Slave
  sync frequency 64
  delay-request frequency 64
 !
!

*** PTP hybrid BC master port ***

RP/0/RSP0/CPU0:ASBR-9006-K1001#show run int ten0/0/0/1
interface TenGigE0/0/0/1
 description To AGN-K0502::T0/0/0/1
 ptp
  profile AGN-ASBR-BC-Master
  sync frequency 64
  delay-request frequency 64
 !
!

AGN-ASBR K1002 SyncE Configuration

*** Global Config ***

frequency synchronization
 quality itu-t option 1
 log selection changes

*** Interface Config ***

RP/0/RSP0/CPU0:ASBR-9006-K1002#show run int ten0/0/0/0
Mon Aug 27 10:35:14.629 EDT
interface TenGigE0/0/0/0
 description To ASBR-K1001 T0/0/0/0
 frequency synchronization
  selection input
  priority 1
 !
!

RP/0/RSP0/CPU0:ASBR-9006-K1002#show run int ten0/0/0/1
Mon Aug 27 10:35:16.282 EDT
interface TenGigE0/0/0/1
 description To AGN-K0302::T0/0/0/1
 frequency synchronization
  selection input
  priority 2
 !
!

AGN-ASBR K1002 1588 Configuration

*** PTP profile ***

RP/0/RSP0/CPU0:ASBR-9006-K1002#show run ptp
ptp
 clock
  domain 0
  identity mac-address router
  timescale PTP
 !
 profile AGN-ASBR-BC-Slave
  dscp 46
  transport ipv4
  port state slave-only
  master ipv4 200.10.1.9
  !
  master ipv4 200.10.2.9
  !
  sync frequency 64
  clock operation one-step
  source ipv4 address 100.100.10.2
  delay-request frequency 64
 !
 profile AGN-ASBR-BC-Master
  dscp 46
  transport ipv4
  sync frequency 64
  clock operation one-step
  announce timeout 2
  source ipv4 address 10.3.2.3
  delay-request frequency 64
 !
!

*** PTP hybrid BC slave port ***

RP/0/RSP0/CPU0:ASBR-9006-K1002#show run int ten0/0/0/2
interface TenGigE0/0/0/2
 description To CN-ASBR-K1201::Te0/0/0/2
 ptp
  profile AGN-ASBR-BC-Slave
  sync frequency 64
  delay-request frequency 64
 !
!

*** PTP hybrid BC master port ***

RP/0/RSP0/CPU0:ASBR-9006-K1002#show run int ten0/0/0/1
interface TenGigE0/0/0/1
 description To AGN-K0302::T0/0/0/1
 ptp
  profile AGN-ASBR-BC-Master
  sync frequency 64
  delay-request frequency 64
 !

ASR 903 PAN Nodes: SyncE and 1588 PTPv2 Hybrid BC with BMCA (PAN-K1401)

PAN-K1401 SyncE Configuration

*** Global Config ***

network-clock revertive
network-clock synchronization automatic
network-clock synchronization mode QL-enabled
network-clock input-source 2 interface TenGigabitEthernet0/0/0
network-clock input-source 1 interface TenGigabitEthernet0/1/0
esmc process

*** Interface Config ***

interface TenGigabitEthernet0/0/0
 description To PAN-903-K1402::TenGigabitEthernet0/0/0
 synchronous mode
!
interface TenGigabitEthernet0/1/0
 description To AGN-K0301::T0/0/0/2
 synchronous mode

PAN-K1401 1588v2 Configuration

*** PTP configuration ***
!
interface Loopback1
 description PTP BC Slave to PRC transport intf
 ip address 100.100.14.1 255.255.255.255
!
interface Loopback2
 description PTP BC master to CSG transport intf
 ip address 100.101.14.1 255.255.255.255
!
ptp clock boundary domain 0 hybrid
 output 1pps R0
 clock-port BC_Slave_K1401 slave
  delay-req interval -6
  transport ipv4 unicast interface Lo1 negotiation
  clock source 10.3.2.3
  clock source 10.5.2.1 1
 clock-port BC_Master_K1401 master
  sync interval -6
  transport ipv4 unicast interface Lo2 negotiation
!
router isis agg-acc
 passive-interface Loopback1
 passive-interface Loopback2
!

PAN-K1402 SyncE Configuration

*** Global Config ***

network-clock revertive
network-clock synchronization automatic
network-clock synchronization mode QL-enabled
network-clock input-source 1 interface TenGigabitEthernet0/0/0
network-clock input-source 2 interface TenGigabitEthernet0/1/0
esmc process

*** Interface Config ***

interface TenGigabitEthernet0/0/0
 description To PAN-903-K1401::TenGigabitEthernet0/0/0
 synchronous mode
!
interface TenGigabitEthernet0/1/0
 description to PAN-901-K0912::Ten0/1
 synchronous mode

PAN-K1402 1588v2 Configuration

*** PTP configuration ***
!
interface Loopback1
 description PTP BC Slave to PRC transport intf
 ip address 100.100.14.2 255.255.255.255
!
interface Loopback2
 description PTP BC master to CSG transport intf
 ip address 100.101.14.2 255.255.255.255
!
ptp clock boundary domain 0 hybrid
 clock-port BC_Slave_K1402 slave
  delay-req interval -6
  transport ipv4 unicast interface Lo1 negotiation
  clock source 10.5.2.1
  clock source 10.3.2.3 1
 clock-port BC_Master_K1402 master
  sync interval -6
  transport ipv4 unicast interface Lo2 negotiation
!
router isis agg-acc
 net 49.0100.1001.1101.4002.00
 ispf level-1-2
 passive-interface Loopback1
 passive-interface Loopback2
!

Access Nodes (SyncE and 1588 BC (ASR901-K1330 shown as reference))

CSG-K1330 SyncE Configuration

interface GigabitEthernet0/10
 description To CSG-901-K1329::GigabitEthernet0/11
 synchronous mode
!
interface GigabitEthernet0/11
 description To CSG-901-K1331::GigabitEthernet0/10
 synchronous mode
!

CSG-K1330 1588v2 Configuration

*** PTP Configuration ***
!
interface Loopback1
 description PTP BC Slave to PAN transport intf
 ip address 100.100.13.30 255.255.255.255
!
interface Loopback2
 description PTP BC master to eNB transport intf
 ip address 100.101.13.30 255.255.255.255
!


ptp clock boundary domain 0
 1pps-out 1 4096 ns
 clock-port BC_Slave_K1330 slave
  transport ipv4 unicast interface Lo1 negotiation
  clock source 100.101.14.1
 clock-port BC_Master_K1330 master
  transport ipv4 unicast interface Lo2 negotiation
!
router isis agg-acc
 advertise passive-only
 passive-interface Loopback1
 passive-interface Loopback2
!
!
interface Vlan400
 description To K1407 1588 PTPv2 client
!
interface GigabitEthernet0/6
 description To K1407 1588 PTPv2 client
 service instance 400 ethernet
  encapsulation untagged
  bridge-domain 400
 !

Simulated IP-NodeB (CSG-2941-K1407)

Simulated IP-NodeB 1588v2 Client

*** PTP Configuration ***
!
interface Vlan400
 description UMMT3.0 1588v2 client
 ptp announce interval 3
 ptp announce timeout 2
 ptp sync interval -4
 ptp slave unicast negotiation
 ptp clock-source 100.101.13.30
 ptp enable
!
interface GigabitEthernet0/0
 description UMMT3.0 1588v2 client (K1330 model 2.3)
 switchport access vlan 400
!
network-clock-select hold-timeout infinite
network-clock-select mode nonrevert
network-clock-select 1 PACKET-TIMING
ptp mode ordinary
ptp priority1 128
ptp priority2 128
ptp domain 0
!
ip route 100.101.13.0 255.255.255.0 114.10.7.2
!


C H A P T E R 6

High Availability

As highlighted in the UMMT 3.0 Design Guide chapter on Redundancy and High Availability, the UMMT 3.0 System architecture implements high availability at both the transport network level and the service level. By utilizing these various technologies throughout the network, the UMMT design is capable of meeting the stringent NGMN requirement of 200 ms recovery times for LTE real-time services.

Implementation of the high availability technologies at both levels is covered in this chapter. Synchronization resiliency implementation is covered in the Synchronization Distribution chapter.

This chapter includes the following major topics:

• Section 6.1 Transport Network High Availability

• Section 6.2 Service High Availability

6.1 Transport Network High Availability

High availability at the transport network layer is provided through the combination of three technologies:

• Loop-Free Alternate Fast Reroute

• BGP Core and Edge Fast Reroute and Edge Protection

• Bidirectional Forwarding Detection (BFD) at the IGP

The configuration for each of these technologies is illustrated in the corresponding section:

• Section 6.1.1 Loop-Free Alternate Fast Reroute with BFD

• Section 6.1.2 BGP Fast Reroute

• Section 6.1.3 Multihop BFD for BGP

6.1.1 Loop-Free Alternate Fast Reroute with BFD

Loop-Free Alternate Fast Reroute (LFA FRR) was introduced in UMMT 1.0. For UMMT 3.0, remote LFA FRR functionality has now been integrated into the network design, extending LFA FRR functionality to ring networks and other topologies. LFA FRR pre-calculates a backup path for every prefix in the IGP routing table, allowing the node to rapidly switch to the backup path when a failure is encountered, with recovery times on the order of 50 msec. More information regarding LFA FRR can be found in IETF RFCs 5286, 5714, and 6571.
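The basic LFA computation can be sketched as follows. Per RFC 5286, a neighbor N of router S is a loop-free alternate for destination D when dist(N, D) < dist(N, S) + dist(S, D), i.e. N will not hand the packet back through S. This is an illustrative Python sketch over a made-up four-node topology, not router code:

```python
# Sketch of the RFC 5286 loop-free alternate condition. Topology and node
# names below are illustrative only.
import heapq

def spf(graph, src):
    """Plain Dijkstra: shortest distance from src to every node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def lfa_backups(graph, src, dest):
    """Neighbors of src satisfying dist(N,D) < dist(N,S) + dist(S,D)."""
    d_src = spf(graph, src)
    backups = []
    for n in graph[src]:
        d_n = spf(graph, n)
        if d_n[dest] < d_n[src] + d_src[dest]:
            backups.append(n)
    return backups

# Four-node ring-like example: S reaches D via A; B also passes the test.
topo = {
    "S": {"A": 1, "B": 1},
    "A": {"S": 1, "D": 1},
    "B": {"S": 1, "D": 2},
    "D": {"A": 1, "B": 2},
}
print(lfa_backups(topo, "S", "D"))  # ['A', 'B']
```

Remote LFA extends this idea by tunneling to a node (the "PQ node") for which the plain condition holds, which is what makes ring topologies protectable.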


Also integrated are BFD rapid failure detection and IS-IS/OSPF extensions for incremental SPF and LSA/SPF throttling. Configurations to enable BFD are shown below as well.

Currently, remote LFA FRR is supported on the ASR 901 and ASR 903 platforms. The following configuration illustrates how to enable remote LFA FRR on the ASR 901 with BFD enabled in the IGP:

ASR 901 Example

interface Vlan10   !<<< Interface towards upstream ASR 903
 ip router isis agg-acc
 mpls ldp igp sync delay 10
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
 isis network point-to-point
!
interface Vlan11   !<<< Interface towards adjacent ASR 901
 ip router isis agg-acc
 mpls ldp igp sync delay 10
 bfd interval 50 min_rx 50 multiplier 3
 isis network point-to-point
!
router isis agg-acc
 fast-reroute per-prefix level-1 all
 fast-reroute remote-lfa level-1 mpls-ldp
 bfd all-interfaces
!
remotelfa-frr enable
no l3-over-l2 flush buffers
!
mpls ldp discovery targeted-hello accept
!

ASR 903 Example

cef table output-chain build favor memory-utilization   !<<< Optimizes ASR 903 for LFA FRR
!
interface TenGigabitEthernet0/0/0   !<<< Interface towards adjacent ASR 903
 description To PAN-903-K1402::TenGigabitEthernet0/0/0
 ip router isis agg-acc
 mpls ldp igp sync delay 10
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
 isis network point-to-point
!
interface TenGigabitEthernet0/1/0   !<<< Interface towards upstream ASR 9000
 description To AGN-K0301::T0/0/0/2
 ip router isis agg-acc
 mpls ldp igp sync delay 10
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
 isis circuit-type level-2-only
 isis network point-to-point
!
interface BDI10   !<<< Interface towards downstream ASR 901
 description To K1314::Gi0/10
 ip router isis agg-acc
 mpls ldp igp sync delay 10
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
 isis network point-to-point


!
router isis agg-acc
 fast-reroute per-prefix level-1 all
 fast-reroute remote-lfa level-1 mpls-ldp
 bfd all-interfaces
!
remotelfa-frr enable
no l3-over-l2 flush buffers
!
mpls ldp discovery targeted-hello accept
!

The ASR 9000 and CRS-3 currently support LFA FRR, with remote LFA FRR support expected in XR 4.3.1. A workaround for ring support, involving a statically-configured MPLS-TE FRR tunnel implemented between routers, is shown below. The example illustrates the tunnel between core ABR routers, but the same configuration could be utilized between ASR 9000 aggregation nodes as well.

MTG K1501

router isis core
 interface TenGigE0/0/0/0   !<<< Interface to K0201 Ten0/0/0/0
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   fast-reroute per-prefix
   mpls ldp sync
 !
!
mpls ldp
 router-id 100.111.15.1
 discovery targeted-hello accept
 !
 interface TenGigE0/0/0/0
 !
!

Core Node K0201

explicit-path name TAKETHISPATH
 index 1 next-address strict ipv4 unicast 100.111.4.1
 index 2 next-address strict ipv4 unicast 100.111.12.1
!
interface Loopback0
 ipv4 address 100.111.2.1 255.255.255.255
!
interface tunnel-te212
 ipv4 unnumbered Loopback0
 autoroute announce
 destination 100.111.12.1
 path-option 1 explicit name TAKETHISPATH
 logging events link-status
!
router isis core
 address-family ipv4 unicast
  mpls traffic-eng level-1-2
  mpls traffic-eng router-id Loopback0
 interface TenGigE0/0/0/0
  circuit-type level-2-only
  bfd minimum-interval 15


  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   fast-reroute per-prefix
   mpls ldp sync
 interface HundredGigE0/1/0/0
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   fast-reroute per-prefix level 2
   fast-reroute per-prefix exclude interface TenGigE0/0/0/0
   fast-reroute per-prefix exclude interface TenGigE0/0/0/1
   fast-reroute per-prefix exclude interface TenGigE0/0/0/3
   fast-reroute per-prefix lfa-candidate interface tunnel-te212
   mpls ldp sync
  !
 !
!
mpls traffic-eng
 interface TenGigE0/0/0/3
  bfd fast-detect
 !
 interface HundredGigE0/1/0/0
  bfd fast-detect
 !
!
mpls ldp
 router-id 100.111.2.1
 igp sync delay 10
 !
 interface tunnel-te212
 !
 interface TenGigE0/0/0/0
 !
 interface TenGigE0/0/0/1
 !
 interface TenGigE0/0/0/3
 !
 interface HundredGigE0/1/0/0
  igp sync delay 5

Core ABR K1201

router isis core
 address-family ipv4 unicast
  mpls traffic-eng level-1-2
  mpls traffic-eng router-id Loopback0
 !
 interface TenGigE0/0/0/0
  circuit-type level-2-only
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  link-down fast-detect
  address-family ipv4 unicast
   mpls ldp sync
  !
 !
!
mpls traffic-eng


 interface TenGigE0/0/0/0
  bfd fast-detect
 !
 interface TenGigE0/0/0/5
  bfd fast-detect
 !
!
mpls ldp
 router-id 100.111.12.1
 discovery targeted-hello accept
 !
 interface TenGigE0/0/0/0
 !
 interface TenGigE0/0/0/5
 !

Note A ring with an odd number of nodes would require two MPLS TE-FRR tunnels to implement this workaround. Again, the need for any statically-defined tunnels will be eliminated with XR 4.3.1.

6.1.2 BGP Fast Reroute

BGP Fast Reroute (FRR) provides deterministic network reconvergence, even with the BGP prefix scale encountered in UMMT 3.0. BGP FRR is similar to remote LFA FRR in that it pre-calculates a backup path for every prefix in the BGP forwarding table, relying on a hierarchical labeled-FIB (LFIB) structure to allow multiple paths to be installed for a single BGP next hop. BGP FRR consists of two different functions: core and edge. Each function handles different failure scenarios within the transport network:

• FRR core protection is used when the BGP next hop is still active, but there is a failure in the path to that next hop. As soon as the IGP has reconverged, the pointer in BGP is updated to use the new IGP next hop and forwarding resumes. Thus, the reconvergence time for BGP is the same as the IGP reconvergence, regardless of the number of BGP prefixes in the RIB.

• FRR edge protection is used for redundant BGP next-hop nodes, as in the case of redundant ABRs. BGP additional-paths functionality is configured on the PE routers and RRs to install both ABR nodes' paths in the RIB and LFIB instead of just the best path. When there is a failure of the primary ABR, BGP forwarding simply switches to the path of the backup ABR instead of having to wait for BGP to reconverge at the time of the failure.
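The reason both functions recover in a time independent of prefix count can be sketched with a simple model of the hierarchical FIB: every BGP prefix holds a pointer to a shared next-hop object, so repair is one pointer rewrite rather than a per-prefix update. This is an illustrative Python sketch with made-up names, not platform code:

```python
# Illustrative model of prefix-independent convergence: 10,000 prefixes
# share one next-hop group, so failover touches one object, not 10,000.

class NextHopGroup:
    """Shared forwarding object holding a primary and a pre-installed backup."""
    def __init__(self, primary, backup):
        self.primary = primary
        self.backup = backup
        self.active = primary

    def repair(self):
        # One O(1) pointer swap protects every prefix referencing this group.
        self.active = self.backup

# 10,000 VPN prefixes all resolve through the same redundant ABR pair.
abr_pair = NextHopGroup(primary="ABR-primary", backup="ABR-backup")
fib = {f"10.{i >> 8}.{i & 255}.0/24": abr_pair for i in range(10_000)}

abr_pair.repair()  # primary next hop fails: a single update
print(all(g.active == "ABR-backup" for g in fib.values()))  # True
```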

The following example illustrates how to configure BGP FRR Core and Edge Protection from the MTG to the CSG across the reference topology:

Mobile Transport Gateway K1501

router bgp 1000
 address-family ipv4 unicast
  additional-paths receive
  additional-paths selection route-policy add-path-to-ibgp
 !
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy

Core Route Reflector K0403


router bgp 1000
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
route-policy add-path-to-ibgp
 set path-selection backup 1 advertise install
end-policy

Core ABR K0601

router bgp 1000
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
route-policy add-path-to-ibgp
 set path-selection backup 1 advertise install
end-policy

Core ABR K1001

router bgp 1000
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
route-policy add-path-to-ibgp
 set path-selection backup 1 advertise install
end-policy

PAN 903 K1401

cef table output-chain build favor convergence-speed   !<<< This command optimizes ASR 903 for BGP FRR
!
router bgp 1000
 address-family ipv4
  bgp redistribute-internal
  bgp additional-paths receive

CSG 901 K1314

router bgp 101
 address-family ipv4
  bgp additional-paths install

6.1.3 Multihop BFD for BGP

BFD multihop (BFD-MH) is a BFD session between two addresses that are not on the same interface. An example of BFD-MH is a BFD session between routers that are several TTL hops away. The first


application that makes use of BFD multihop is BGP. BFD multihop supports BFD on arbitrary paths, which can span multiple network hops.

The BFD-MH feature provides sub-second forwarding failure detection for a destination more than one hop, and up to 255 hops, away.
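As a quick reference, worst-case BFD detection time is simply the negotiated interval multiplied by the detect multiplier. This small illustrative helper (not router code) checks the timer values used in this guide against the sub-second target:

```python
# BFD detection time = negotiated interval x multiplier (worst case).

def bfd_detect_ms(interval_ms: int, multiplier: int) -> int:
    """Worst-case time to declare a neighbor down, in milliseconds."""
    return interval_ms * multiplier

# 50 ms Tx/Rx with multiplier 3, as in the multihop templates in this chapter.
print(bfd_detect_ms(50, 3))  # 150
# 15 ms with multiplier 3, as on the IOS-XR core links shown earlier.
print(bfd_detect_ms(15, 3))  # 45
```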

Note For IOS-XR based devices, the following command is needed to enable at least one linecard for the underlying mechanism used to send and receive multihop packets:

bfd multipath include location 0/0/CPU0

For IOS-based devices, this is not needed.

In the example below, a BFD session is created between CSG-K1323 and PAN-K1403/K1404.

Figure 6-1. Multihop BFD Topology

CSG-901-K1323#

key chain mhop-key
 key 0
  key-string cisco123
 key 1
  key-string cisco456
key chain mhop-key-xyz
 key 0
  key-string cisco
 key 1
  key-string lab
 key 2


  key-string lab1
!
!
bfd map ipv4 100.111.14.3/32 100.111.13.23/32 MBFD1
bfd map ipv4 100.111.14.4/32 100.111.13.23/32 MBFD1
bfd-template multi-hop MBFD1
 interval min-tx 50 min-rx 50 multiplier 3
 authentication sha-1 keychain mhop-key
router bgp 1000
 neighbor 100.111.14.3 fall-over bfd
 neighbor 100.111.14.4 fall-over bfd
!

PAN-903-K1403#

key chain mhop-key
 key 0
  key-string cisco123
 key 1
  key-string cisco456
key chain mhop-key-xyz
 key 0
  key-string cisco
 key 1
  key-string lab
 key 2
  key-string lab1
!
bfd map ipv4 100.111.13.23/32 100.111.14.3/32 MBFD1
bfd-template multi-hop MBFD1
 interval min-tx 50 min-rx 50 multiplier 3
 authentication sha-1 keychain mhop-key
router bgp 1000
 neighbor 100.111.13.23 fall-over bfd
!

6.2 Service High Availability

In addition to the technologies implemented at the transport network layer to provide high availability functionality, there are technologies implemented at the service layer as well. For MPLS VPN services, BGP edge protection and BGP FRR edge protection mechanisms are supported, and VRRP is enabled on the MTGs for redundant connectivity to the MPC. For ATM and TDM pseudowire-based services, pseudowire redundancy is supported, and Multi-Router Automatic Protection Switching (MR-APS) is enabled for redundant connectivity to the BSC or RNC. The protection mechanisms for each service are described in the corresponding section below.

6.2.1 MPLS VPN - BGP FRR Edge Protection and VRRP

As described in the Transport Network High Availability section, BGP Fast Reroute (FRR) provides deterministic network reconvergence, even with the BGP prefix scale encountered in UMMT 3.0. BGP FRR Edge functionality is supported at the MPLS VPN service level as well as the transport network level. The following example illustrates how to configure BGP FRR Edge from the MTG to the CSG across the reference topology:


Note The VRF configuration under the BGP process uses a unique route distinguisher (RD) per MTG. This unique RD configuration in each MTG, combined with the BGP and VRRP timer adjustments in MTG 1, enables the rest of the transport infrastructure to optimize MPLS VPN protection via BGP FRR. The RD does not have to match the route-target defined for the MPLS VPN VRF. The need for a unique RD will be eliminated once support for "bgp additional-paths receive" is implemented for the BGP vpnv4 address-family configuration, thus allowing information from multiple MTGs to be propagated for the MPLS VPN.
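The effect of the unique RD can be sketched as follows: a VPNv4 route is keyed by RD:prefix, so distinct RDs keep the two MTGs' advertisements distinct as they pass through a route reflector's best-path selection, while a shared RD collapses them into one route. This is an illustrative Python model of the table keying, not router behavior; the RD values mirror the configurations in this section and the prefix is made up:

```python
# Model a VPNv4 table keyed by "RD:prefix". With a shared RD the two MTG
# advertisements compete as one route; with unique RDs both survive.

def vpnv4_routes(advertisements):
    """Route-reflector view: one entry per unique RD:prefix key."""
    table = {}
    for rd, prefix, mtg in advertisements:
        table.setdefault(f"{rd}:{prefix}", []).append(mtg)
    return table

shared = vpnv4_routes([("1111:1111", "115.1.23.0/24", "MTG1"),
                       ("1111:1111", "115.1.23.0/24", "MTG2")])
unique = vpnv4_routes([("1111:1111", "115.1.23.0/24", "MTG1"),
                       ("1111:2222", "115.1.23.0/24", "MTG2")])

print(len(shared))  # 1 -> one NLRI; only the best path propagates
print(len(unique))  # 2 -> both MTG paths propagate, enabling BGP FRR edge
```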

Mobile Transport Gateway 1 VRRP Configuration

router vrrp
 interface TenGigE0/0/0/2.1100
  delay minimum 1 reload 240
  address-family ipv4
   vrrp 110
    priority 254
    preempt delay 900
    timer msec 100 force
    address 115.1.23.1
   !
  !
  address-family ipv6
   vrrp 100
    priority 254
    preempt delay 900
    timer msec 100 force
    address global 2001:115:1:23::1
    address linklocal autoconfig
   !
  !

BGP FRR Edge Configuration

router bgp 1000
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 vrf LTE2
  rd 1111:1111
!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy

Mobile Transport Gateway 2 VRRP Configuration

router vrrp
 interface TenGigE0/0/0/2.1100
  delay minimum 1 reload 240
  address-family ipv4
   vrrp 110
    priority 253
    timer msec 100 force
    address 115.1.23.1


   !
  !
  address-family ipv6
   vrrp 100
    priority 253
    timer msec 100 force
    address global 2001:115:1:23::1
    address linklocal autoconfig
   !
  !

BGP FRR Edge Configuration

router bgp 1000
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 vrf LTE2
  rd 1111:2222
!
route-policy add-path-to-ibgp
 set path-selection backup 1 install
end-policy

Core Route Reflector K0403

router bgp 1000
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
route-policy add-path-to-ibgp
 set path-selection backup 1 advertise install
end-policy

CSG 901 K1314

router bgp 101
 address-family vpnv4
  bgp additional-paths install

6.2.2 Pseudowire Redundancy for ATM and TDM services

High availability for ATM and TDM services transported via Circuit Emulation over Packet pseudowires is achieved through pseudowire redundancy over the transport network, where a backup pseudowire pointing to an alternate MTG is configured for each primary pseudowire. Corresponding to these redundant pseudowires is MR-APS functionality on the ATM/TDM side of the MTGs, which provides redundant ATM/TDM connectivity to the Mobile Packet Core equipment. Configurations for both technologies are illustrated below.

TDM Services


Figure 6-2. CESoPSN/SAToP Service Implementation for 2G and 3G Backhaul

ASR 901 Cell Site Gateway Configuration

Note that the only difference between CESoPSN and SAToP configuration is the absence of "control-word" in the pseudowire-class for SAToP configs.

pseudowire-class CESoPSN
 encapsulation mpls
 control-word
!
interface CEM0/0
 cem 0
  xconnect 100.111.15.1 13261501 encapsulation mpls pw-class CESoPSN
   backup peer 100.111.15.2 13261502 pw-class CESoPSN
!
interface Loopback0
 ip address 100.111.13.26 255.255.255.255
 isis tag 10
!

ASR 9000 Mobile Transport Gateway 1 Configuration

aps group 1
 timers 10 15
 channel 0 remote 100.111.15.2
 channel 1 local SONET0/2/1/0
!
controller SONET0/2/1/0
 description To BSC
 ais-shut
 report lais
 report lrdi
 sts 1
  mode vt15-t1
  delay trigger 250
 !
 clock source line
!
controller T1 0/2/1/0/1/1/3
 cem-group framed 0 timeslots 1-24
 forward-alarm AIS
 forward-alarm RAI
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 pw-class CESoPSN
  encapsulation mpls
   control-word


  !
 !
 xconnect group TDM-K1326
  p2p T1-CESoPSN-01
   interface CEM0/2/1/0/1/1/3:0
   neighbor 100.111.13.26 pw-id 13261501
    pw-class CESoPSN
   !
  !
 !
!

ASR 9000 Mobile Transport Gateway 2 Configuration

aps group 1
 revert 8
 timers 10 15
 channel 0 local SONET0/2/1/0
 channel 1 remote 100.111.15.1
 signaling sonet
!
controller SONET0/2/1/0
 description To ONS15454-K1410 OC3 port 4/1
 ais-shut
 report lais
 report lrdi
 sts 1
  mode vt15-t1
  delay trigger 250
 !
 clock source line
!
controller T1 0/2/1/0/1/1/3
 cem-group framed 0 timeslots 1-24
 forward-alarm AIS
 forward-alarm RAI
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.2 255.255.255.255
!
l2vpn
 pw-class CESoPSN
  encapsulation mpls
   control-word
  !
 !
 xconnect group TDM-K1326
  p2p T1-CESoPSN-01
   interface CEM0/2/1/0/1/1/3:0
   neighbor 100.111.13.26 pw-id 13261502
    pw-class CESoPSN
   !
  !
 !
!

ATM Services

This example illustrates ATM PW redundancy for a clear-channel implementation. The same configurations would be used for an IMA implementation as well, just using ATM IMA interfaces instead.


Figure 6-3. ATM VPWS Service Implementation for 3G Backhaul

ASR 903 Pre Aggregation Node Configuration

interface ATM0/5/2.100 point-to-point
 pvc 100/4011 l2transport
  encapsulation aal0
  xconnect 100.111.15.1 1401150115 encapsulation mpls
   backup peer 100.111.15.2 1401150215
!
interface Loopback0
 ip address 100.111.14.1 255.255.255.255
!

ASR 9000 Mobile Transport Gateway 1 Configuration

aps group 2
 channel 0 remote 100.111.15.2
 channel 1 local SONET0/2/3/0
!
controller SONET0/2/3/0
 path delay trigger 250
!
interface ATM0/2/3/0.100 l2transport
 pvc 100/4011
  encapsulation aal0
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
 pw-class ATM
  encapsulation mpls
 !
 xconnect group ATM-K1401
  p2p T1-ATM-01
   interface ATM0/2/3/0.100
   neighbor 100.111.14.1 pw-id 1401150115
    pw-class ATM
   !
  !
 !

ASR 9000 Mobile Transport Gateway 2 Configuration

aps group 2
 revert 8
 channel 0 local SONET0/2/3/0
 channel 1 remote 100.111.15.1


 signaling sonet
!
controller SONET0/2/3/0
 path delay trigger 250
!
interface Loopback0
 description Global Loopback
 ipv4 address 100.111.15.2 255.255.255.255
!
l2vpn
 pw-class ATM
  encapsulation mpls
 !
 xconnect group ATM-K1401
  p2p T1-ATM-01
   interface ATM0/2/3/0.100
   neighbor 100.111.14.1 pw-id 1401150215
    pw-class ATM
   !
  !
 !


C H A P T E R 7

Quality of Service

The UMMT System uses a DiffServ QoS model across all network layers of the transport network to guarantee proper treatment of all services being transported. This QoS model guarantees the SLA requirements of TDM circuits, ATM Classes of Service, and the various LTE QCI values that correspond to different traffic types (voice, video, etc.) with varying resource types (CBR, VBR-RT, VBR-NRT, and UBR for ATM; GBR or non-GBR for LTE). Note that IEEE 1588v2 PTP traffic is marked with a DSCP of 46 by the grandmaster, and thus receives EF PHB treatment across the transport network. QoS policy enforcement is accomplished with flat QoS policies with DiffServ queuing on all NNIs, and H-QoS policies with parent shaping and child queuing on the UNIs and interfaces connecting to microwave access links. The classification criteria used to implement the DiffServ PHBs are covered in the UMMT 3.0 Design Guide.
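As a side note on the EF marking mentioned above, the DSCP value occupies the top six bits of the IP ToS/Traffic Class byte, so DSCP 46 (EF) appears on the wire as ToS 0xB8. A two-line illustrative sketch of that mapping:

```python
# DSCP occupies bits 7..2 of the ToS/Traffic Class byte (ECN is bits 1..0).

def dscp_from_tos(tos: int) -> int:
    return tos >> 2

def tos_from_dscp(dscp: int) -> int:
    return dscp << 2

print(dscp_from_tos(0xB8))     # 46 -> the EF marking applied by the PTP grandmaster
print(hex(tos_from_dscp(46)))  # 0xb8
```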

Figure 7-1. QoS Enforcement Points

Note In Figure 7-1, the following elements are called out:

• (a) H-QoS policy map on CSG UNIs.

• (b) H-QoS policy map on pre-aggregation NNI connecting microwave access network.

• (1), (2) Flat QoS policy map on CSG and pre-aggregation NNIs in fiber access network.

• (3) Flat QoS policy map on aggregation and core network NNIs.

• (4) Flat QoS policy map on ingress for ATM and TDM UNIs.

Note QoS for ATM circuits is currently supported only on the ASR 9000. All ATM traffic on the ASR 901 and ASR 903 currently receives default markings of EXP 0. QoS for ATM circuits on the ASR 901 and ASR 903 will be covered in a future release.


Note QoS service policies for TDM circuits are currently supported on the ASR 9000 and ASR 901. The ASR 903 currently implements a workaround which tags all TDM PW traffic with an EXP of 5. The ASR 901 currently has no workaround, and all traffic is marked with an EXP of 0. QoS service policies for TDM circuits on the ASR 901 and ASR 903 will be covered in a future release.

CSG QoS Configuration

Class Maps

• QoS classification at the UNI in the ingress direction for upstream traffic is based on IP DSCP, with the marking done by the connected basestation.

class-map match-any CMAP-NMgmt-DSCP   << Network management traffic
 match dscp cs7
!
class-map match-all CMAP-RT-DSCP   << Voice/Real-Time traffic
 match dscp ef
!
class-map match-any CMAP-HVideo-DSCP   << Video traffic
 match dscp cs3

• QoS classification at the UNI in the egress direction for downstream traffic is based on QoS groups, with the QoS group mapping being done at the ingress NNI.

• QoS classification at the NNI in the egress direction is based on QoS groups, with:

– QoS group mapping for upstream traffic being done at the ingress UNI.

– QoS group mapping for traffic transiting the access ring being done at the ingress NNI.

class-map match-any CMAP-NMgmt-GRP   << Network management traffic
 match qos-group 7
!
class-map match-any CMAP-CTRL-GRP   << Network Control traffic
 match qos-group 6
!
class-map match-all CMAP-RT-GRP   << Voice/Real-Time traffic
 match qos-group 5
!
class-map match-any CMAP-HVideo-GRP   << Video traffic
 match qos-group 3

• QoS classification at the NNI in the ingress direction is based on MPLS EXP.

class-map match-any CMAP-NMgmt-EXP   << Network management traffic
 match mpls experimental topmost 7
!
class-map match-any CMAP-CTRL-EXP   << Network Control traffic
 match mpls experimental topmost 6
!
class-map match-all CMAP-RT-EXP   << Voice/Real-Time traffic
 match mpls experimental topmost 5
!
class-map match-any CMAP-HVideo-EXP   << Video traffic
 match mpls experimental topmost 3

eNodeB UNI QoS Policy Map

• Upstream traffic: Flat QoS policy map with policing applied in the ingress direction.


• Downstream traffic: H-QoS policy map with parent shaper and child queuing applied in the egress direction.

interface GigabitEthernet0/2   << QoS enforcement point (a). Interface connecting eNodeB.
 service-policy output PMAP-eNB-UNI-P-E
 service-policy input PMAP-eNB-UNI-I
!
policy-map PMAP-eNB-UNI-I
 class CMAP-RT-DSCP
  police cir 20000000
  set qos-group 5
 class CMAP-NMgmt-DSCP
  police 5000000
  set qos-group 7
 class CMAP-HVideo-DSCP
  police 100000000
  set qos-group 3
 class class-default
  police 200000000
!
policy-map PMAP-eNB-UNI-P-E
 class class-default
  shape average 425000000
  service-policy PMAP-eNB-UNI-C-E
!
policy-map PMAP-eNB-UNI-C-E
 class CMAP-RT-GRP
  priority percent 5
 class CMAP-NMgmt-GRP
  bandwidth percent 1
 class CMAP-HVideo-GRP
  bandwidth percent 25
 class class-default
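As a rough sanity check on the eNodeB UNI H-QoS policy above, the child-class percentages can be converted into absolute minimum-bandwidth guarantees against the 425 Mb/s parent shaper. This is a quick illustrative calculation, not a model of platform scheduling behavior:

```python
# Child-class guarantees under the 425 Mb/s parent shaper of the eNodeB UNI
# egress policy (the priority class is also implicitly policed at its rate).

PARENT_SHAPE_BPS = 425_000_000
CHILD_PCT = {"CMAP-RT-GRP": 5, "CMAP-NMgmt-GRP": 1, "CMAP-HVideo-GRP": 25}

guarantees = {cls: PARENT_SHAPE_BPS * pct // 100 for cls, pct in CHILD_PCT.items()}
for cls, bps in guarantees.items():
    print(f"{cls}: {bps / 1e6:.2f} Mb/s")
# CMAP-RT-GRP: 21.25 Mb/s, CMAP-NMgmt-GRP: 4.25 Mb/s, CMAP-HVideo-GRP: 106.25 Mb/s
```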

TDM CEM UNI QoS Policy Map

policy-map PMAP-TDM-UNI-I
 class class-default
  set qos-group 5
!
interface CEM0/0
 service-policy input PMAP-TDM-UNI-I

Fiber Ring NNI QoS Policy Maps

• The model assumes the microwave system has the ability to do DiffServ QoS that matches the CSG NNI EF and AF DiffServ classes.

• Downstream and transit traffic: Flat QoS policy map with group mapping applied in the ingress direction.

• Upstream and transit traffic: Flat QoS policy map with DiffServ queuing applied in the egress direction.

interface GigabitEthernet0/10   << QoS enforcement point (1). Interface connecting Fiber Access Ring.
 service-policy input PMAP-NNI-I
 service-policy output PMAP-NNI-E
 hold-queue 1500 in


 hold-queue 1500 out
!
policy-map PMAP-NNI-I
 class CMAP-RT-EXP
  set qos-group 5
 class CMAP-NMgmt-EXP
  set qos-group 7
 class CMAP-CTRL-EXP
  set qos-group 6
 class CMAP-HVideo-EXP
  set qos-group 3
 class class-default
!
policy-map PMAP-NNI-E
 class CMAP-RT-GRP
  priority percent 20
 class CMAP-NMgmt-GRP
  bandwidth percent 5
 class CMAP-CTRL-GRP
  bandwidth percent 2
 class CMAP-HVideo-GRP
  bandwidth percent 50
 class class-default

Note RT traffic mapped to the LLQ is implicitly policed at the configured rate.
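The note above follows from how "priority percent" works: under congestion, the LLQ class is policed to that percentage of the interface (or parent shaper) rate. A minimal sketch of the arithmetic, assuming an illustrative 1 Gb/s ring port (the port speed is an assumption, not stated in the configuration):

```python
def llq_implicit_policer_bps(link_bps: int, priority_percent: int) -> int:
    """Rate at which 'priority percent N' implicitly polices the LLQ class
    during congestion (simplified model)."""
    return link_bps * priority_percent // 100

# PMAP-NNI-E uses 'priority percent 20'; on an assumed 1 Gb/s port the
# RT class is therefore limited to 200 Mb/s under congestion.
rt_cap = llq_implicit_policer_bps(1_000_000_000, 20)
```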

Microwave Ring NNI QoS Policy Maps

• The model assumes the microwave system can perform DiffServ QoS matching the CSG NNI EF and AF DiffServ classes.

• Downstream and transit traffic: H-QoS policy map with parent shaper and child queuing applied in the UNI egress direction.

– MPLS EXP classification for CSN ring local traffic requires MPLS explicit-null (mpls ldp explicit-null) on all CSN routers. MPLS EXP must be marked indirectly by marking qos-group on ingress.

– QoS classification at the UNI in the egress direction for downstream traffic is based on QoS groups, with the QoS group mapping being done at the ingress NNI.

• Upstream and transit traffic: QoS policy map with DiffServ queuing based on QoS groups, with:

– QoS group mapping for upstream traffic being done at the ingress UNI.

– QoS group mapping for traffic transiting the access ring being done at the ingress NNI.

policy-map PMAP-eNB-UNI-I
 class CMAP-RT-DSCP
  police cir 20000000
  set qos-group 5
 class CMAP-NMgmt-DSCP
  police 5000000
  set qos-group 7
 class CMAP-HVideo-DSCP
  police 100000000
  set qos-group 3
 class class-default
  police 200000000
!
policy-map PMAP-eNB-UNI-C-E
 class CMAP-RT-GRP
  priority percent 5
 class CMAP-NMgmt-GRP
  bandwidth percent 1
 class CMAP-HVideo-GRP
  bandwidth percent 25
 class class-default
!
policy-map PMAP-eNB-UNI-P-E
 class class-default
  shape average 350000000
  service-policy PMAP-eNB-UNI-C-E
!
policy-map PMAP-NNI-I
 class CMAP-RT-EXP
  set qos-group 5
 class CMAP-NMgmt-EXP
  set qos-group 7
 class CMAP-CTRL-EXP
  set qos-group 6
 class CMAP-HVideo-EXP
  set qos-group 3
 class class-default
!
policy-map PMAP-NNI-E
 class CMAP-RT-GRP
  priority percent 20
 class CMAP-NMgmt-GRP
  bandwidth percent 5
 class CMAP-CTRL-GRP
  bandwidth percent 2
 class CMAP-HVideo-GRP
  bandwidth percent 50
 class class-default

Pre-Aggregation Node QoS Configuration

Class Maps

• QoS classification at the UNI in the ingress direction for upstream traffic is based on IP DSCP, with the marking done by the connected base station.

class-map match-any CMAP-NMgmt-DSCP << Network management traffic
 match dscp cs7
!
class-map match-all CMAP-RT-DSCP << Voice/Real-Time traffic
 match dscp ef
!
class-map match-any CMAP-HVideo-DSCP << Video traffic
 match dscp cs3

• QoS classification at the NNIs is based on MPLS EXP.

class-map match-any CMAP-NMgmt-EXP << Network management traffic
 match mpls experimental topmost 7
!
class-map match-any CMAP-CTRL-EXP << Network Control traffic
 match mpls experimental topmost 6
!
class-map match-all CMAP-RT-EXP << Voice/Real-Time traffic
 match mpls experimental topmost 5
!
class-map match-any CMAP-HVideo-EXP << Video traffic
 match mpls experimental topmost 3

Microwave Access NNI QoS Policy Map

• Downstream traffic: H-QoS policy map with parent shaper and child queuing applied in the egress direction to the 1G interface connecting the microwave access network. The 1G interface is shaped to the 400 Mbps microwave link capacity.

ASR 903-based Pre-Aggregation Node

interface GigabitEthernet0/22 << QoS enforcement point (b). Interface connecting uWave Access Network.
 service-policy output PMAP-NNI-uw-P-E
!
policy-map PMAP-NNI-uw-C-E
 class CMAP-RT-EXP
  priority
 class CMAP-CTRL-EXP
  bandwidth remaining ratio 1
 class CMAP-NMgmt-EXP
  bandwidth remaining ratio 2
 class CMAP-HVideo-EXP
  bandwidth remaining ratio 3
 class class-default
!
policy-map PMAP-NNI-uw-P-E
 class class-default
  shape average 400M
  service-policy PMAP-NNI-uw-C-E
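On the ASR 903, the child policy divides whatever the 400 Mb/s shaper leaves after the priority class according to the "bandwidth remaining ratio" values. A small sketch of that arithmetic, assuming an illustrative 80 Mb/s of priority traffic and the common platform default ratio of 1 for class-default (both assumptions, not values stated in this guide):

```python
def split_remaining(remaining_bps: int, ratios: dict) -> dict:
    """Divide post-priority bandwidth among classes in proportion to
    their 'bandwidth remaining ratio' values (simplified model)."""
    total = sum(ratios.values())
    return {name: remaining_bps * r // total for name, r in ratios.items()}

# Assumed: 400 Mb/s shaper minus 80 Mb/s of priority traffic = 320 Mb/s left;
# ratios from PMAP-NNI-uw-C-E, plus an assumed default ratio of 1 for class-default.
shares = split_remaining(
    320_000_000, {"CTRL": 1, "NMgmt": 2, "HVideo": 3, "default": 1}
)
```

With these ratios, HVideo receives 3/7 of the remaining bandwidth, roughly 137 Mb/s.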

ME3800X-based Pre-Aggregation Node

interface GigabitEthernet0/22 << QoS enforcement point (b). Interface connecting uWave Access Network.
 service-policy output PMAP-NNI-uw-P-E
!
policy-map PMAP-NNI-uw-C-E
 class CMAP-RT-EXP
  priority
  police 80m
 class CMAP-NMgmt-EXP
  bandwidth 10000
 class CMAP-HVideo-EXP
  bandwidth 80000
 class class-default
!
policy-map PMAP-NNI-uw-P-E
 class class-default
  shape average 400M
  service-policy PMAP-NNI-uw-C-E

Fiber Access NNI QoS Policy Map

• Downstream traffic: Flat QoS policy map with DiffServ queuing applied in the egress direction on the pre-aggregation NNI facing the 1G fiber access network.

ME3800X-based Pre-Aggregation Node

!
interface GigabitEthernet0/1 << QoS enforcement point (2). Interface connecting Fiber Access Node.
 service-policy output PMAP-PreAG-NNI-E
end
!
policy-map PMAP-PreAG-NNI-E
 class CMAP-RT-EXP
  priority
  police cir 200000000
 class CMAP-CTRL-EXP
  bandwidth 15000
 class CMAP-NMgmt-EXP
  bandwidth 50000
 class CMAP-HVideo-EXP
  bandwidth 200000
 class class-default

ASR 903-based Pre-Aggregation Node

• All configuration details not shown below are the same as in the ME3800X example above.

policy-map PMAP-NNI-E
 class CMAP-RT-EXP
  priority
 class CMAP-CTRL-EXP
  bandwidth remaining ratio 1
 class CMAP-NMgmt-EXP
  bandwidth remaining ratio 2
 class CMAP-HVideo-EXP
  bandwidth remaining ratio 3
 class class-default

Aggregation NNI QoS Policy Map

• Upstream and transit traffic: Flat QoS policy map with DiffServ queuing applied in the egress direction on the pre-aggregation NNI facing the 10G aggregation network.

!
interface TenGigabitEthernet0/1 << QoS enforcement point (3). Interface connecting Aggregation Network.
 service-policy output PMAP-NNI-E
end
!
policy-map PMAP-NNI-E
 class CMAP-RT-EXP
  police cir 1000000000
  priority
 class CMAP-CTRL-EXP
  bandwidth 150000
 class CMAP-NMgmt-EXP
  bandwidth 500000
 class CMAP-HVideo-EXP
  bandwidth 2000000
 class class-default

Aggregation and Core Network QoS Configuration

Class Maps

• QoS classification at the NNIs is based on MPLS EXP.

class-map match-any CMAP-RT-EXP
 match mpls experimental topmost 5
end-class-map
!
class-map match-any CMAP-CTRL-EXP
 match mpls experimental topmost 6
end-class-map
!
class-map match-any CMAP-NMgmt-EXP
 match mpls experimental topmost 7
end-class-map
!
class-map match-any CMAP-HVideo-EXP
 match mpls experimental topmost 3
end-class-map

NNI QoS Policy Maps

• (3) Flat QoS policy map with DiffServ queuing applied at all NNIs.

policy-map PMAP-NNI-E
 class CMAP-RT-EXP
  priority level 1
  police rate 1000000000 bps
  !
 !
 class CMAP-CTRL-EXP
  bandwidth 150000 kbps
 !
 class CMAP-NMgmt-EXP
  bandwidth 500000 kbps
 !
 class CMAP-HVideo-EXP
  bandwidth 2000000 kbps
 !
 class class-default
 !
end-policy-map

MTG Configuration for ATM and TDM UNIs

Class Maps

• The only ingress marking to match is the ATM Cell Loss Priority (CLP) bit on ATM UNIs, which indicates a discard preference for marked cells within a particular ATM Class of Service (CoS). This can be used to offer a bursting capability within a particular ATM CoS.

class-map match-any CMAP-ATM-CLP0-UNI-I
 match atm clp 0
end-class-map
!
class-map match-any CMAP-ATM-CLP1-UNI-I
 match atm clp 1
end-class-map

ATM UNI QoS Policy Maps

• Two ATM policy maps are shown. The first corresponds to an ATM VBR-RT service, where cells are marked with a CLP of 1 above a certain cell rate. The second corresponds to an ATM UBR service, again with cells marked CLP 1 above a certain cell rate. The appropriate map is applied to each ATM PVC according to the ATM CoS carried on that PVC.

policy-map PMAP-ATM-UNI-I
 class CMAP-ATM-CLP0-UNI-I
  set mpls experimental imposition 5
 !
 class CMAP-ATM-CLP1-UNI-I
  set mpls experimental imposition 4
 !
 class class-default
 !
end-policy-map
!
policy-map PMAP-ATM-UNI-DATA-I
 class CMAP-ATM-CLP0-UNI-I
  set mpls experimental imposition 4
 !
 class CMAP-ATM-CLP1-UNI-I
  set mpls experimental imposition 0
 !
 class class-default
 !
end-policy-map
!
interface ATM0/2/3/0
 load-interval 30
!
interface ATM0/2/3/0.100 l2transport
 pvc 100/4011
  service-policy input PMAP-ATM-UNI-I
  encapsulation aal0
  shape vbr-rt 20000 14000 7000
 !
!
interface ATM0/2/3/0.101 l2transport
 pvc 100/4012
  service-policy input PMAP-ATM-UNI-DATA-I
  shape ubr 40000
 !
!
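The CLP behavior described above (cells conforming to the sustained rate keep CLP 0, excess cells are marked CLP 1) can be modeled as a simple token bucket. The sketch below is purely illustrative; real ATM conformance checking uses GCRA and the PVC's actual shaping parameters.

```python
class ClpMarker:
    """Toy single-rate cell marker: conforming cells keep CLP=0,
    excess cells are marked CLP=1 (discard-eligible). Illustrative only."""

    def __init__(self, rate_cells_per_s: float, burst_cells: int):
        self.rate = rate_cells_per_s
        self.depth = float(burst_cells)   # bucket depth (burst tolerance)
        self.tokens = float(burst_cells)  # bucket starts full
        self.last = 0.0

    def clp(self, arrival_time: float) -> int:
        # Refill tokens for the time elapsed since the previous cell.
        self.tokens = min(self.depth,
                          self.tokens + (arrival_time - self.last) * self.rate)
        self.last = arrival_time
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0   # conforming: CLP stays 0
        return 1       # excess: CLP set to 1
```

For example, two back-to-back cells against a one-cell burst allowance leave the second cell marked CLP 1.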

TDM UNI QoS Policy Map

• The TDM UNI policy map simply marks all traffic on a CEM interface with an MPLS EXP of 5 to ensure that all traffic associated with the CEM interface receives EF treatment. This ensures that the emulated TDM circuit is transported with minimal Packet Delay Variation (PDV), guaranteeing the quality of the TDM circuit.

policy-map PMAP-TDM-UNI-I
 class class-default
  set mpls experimental imposition 5
 !
end-policy-map
!
interface CEM0/2/1/0/1/4/4
 service-policy input PMAP-TDM-UNI-I
 load-interval 30
 l2transport
 !
!


Chapter 8 Operations, Administration, and Maintenance

This chapter includes the following major topic:

• Section 8.1 IP SLA Implementation

This section describes the Operations, Administration, and Maintenance (OAM) and Performance Management (PM) implementation for mobile RAN service transport in the UMMT System.

Figure 8-1. OAM implementation for Mobile RAN Service Transport

Service OAM Implementation for LTE and 3G IP UMTS RAN Transport with MPLS Access

OAM and PM functions were validated between the following pairs of devices (initiator listed first, responder listed second):

• CSG to CSG, to monitor the performance of the microwave link connecting the two CSGs in the access network ring.

• PAN to CSG, to monitor the performance of the access network.


• MTG to PAN, to monitor the performance of the aggregation and core networks.

Note CSG to MTG OAM and PM functions were validated in UMMT 1.0, and may be found in the documentation for that release.

The following OAM and PM functions are enabled on the initiator and responder in each case:

• IP ping and traceroute operations for LTE deployment with native IP/MPLS support.

• VRF-aware IP ping operations for connectivity check for LTE deployment with MPLS VPNs.

• IP SLA responder enabled on the responder only, for both native IP/MPLS and MPLS VPN deployments.

• IP SLA UDP Echo Probes configured on initiator for round trip time (RTT) measurement.

• IP SLA UDP Jitter Probe configured on initiator for one-way latency, packet loss, and packet delay variation measurement.

Service OAM Implementation for ATM and TDM Circuit Emulation Pseudowires for 2G and 3G RAN Transport

OAM and PM functions were validated between the following pairs of devices (initiator listed first, responder listed second):

• MTG to PAN, to monitor the performance of the aggregation and core networks for ATM and TDM PWs.

• MTG to CSG, to monitor the performance of the access, aggregation, and core networks for TDM PWs.

The following OAM and PM functions are enabled on the initiator and responder in each case:

• MPLS pseudowire ping and traceroute operations for connectivity check for ATM and TDM PWs.

• IP SLA responder enabled on the responder for both ATM and TDM PW deployments. The loopback address of the node is used for IP SLA measurements.

• IP SLA UDP Echo Probes configured on initiator for round trip time (RTT) measurement.

• IP SLA UDP Jitter Probe configured on initiator for one-way latency, packet loss, and packet delay variation measurement.

Transport OAM

The MPLS transport for mobile backhaul is based on the unified MPLS approach. The unified MPLS deployment approach allows end-to-end LSPs to be built between the RAN access domain or PAN and the centralized MTG in the MPC:

• IP ping and traceroute operations for verifying the data plane against the control plane, and isolating faults within the inter-domain LSP in the MPLS/IP network between the CSGs or PANs and the MTG.

• MPLS LSP ping and traceroute operations for verifying the data plane against the control plane, and isolating faults within the intra-domain LSPs in the access, aggregation, and core domains.

• MPLS LSP ping and traceroute operations for inter-domain LSPs will be supported in a future release.


8.1 IP SLA Implementation

IP SLA Responder Configuration

The CSG and PAN act as IP SLA responders for different measurement scenarios. Minimal configuration is required to enable the responder function.

ip sla responder

Cell Site Gateway Initiator Configuration for IP SLA

The CSG is configured with IP SLA probes for initiating measurement of packet loss, packet delay, and packet delay variation (i.e., jitter) towards the PAN.

ip sla 1
 udp-jitter 100.111.13.16 59000 num-packets 100
 request-data-size 160
 tos 96
 verify-data
 frequency 30
ip sla schedule 1 life forever start-time now
ip sla reaction-configuration 1 react rtt threshold-value 1 1 threshold-type immediate action-type trapOnly
ip sla reaction-configuration 1 react jitterDSAvg threshold-value 1 1 threshold-type immediate action-type trapOnly
ip sla logging traps
ip sla enable reaction-alerts
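For capacity planning it can help to estimate the load one probe adds to the access link. The operation above sends 100 packets of 160 bytes every 30 seconds; a quick payload-only estimate (IP/UDP/L2 header overhead deliberately excluded, so the real on-wire rate is somewhat higher):

```python
def jitter_probe_payload_bps(num_packets: int, payload_bytes: int,
                             frequency_s: int) -> float:
    """Average payload-only load of one udp-jitter operation:
    packets-per-cycle * bytes * 8 bits, spread over the probe frequency."""
    return num_packets * payload_bytes * 8 / frequency_s

# The probe above: 100 packets of 160 bytes every 30 seconds.
rate = jitter_probe_payload_bps(100, 160, 30)
```

This works out to roughly 4.3 kb/s of payload per operation, so even a handful of concurrent probes is negligible next to the service traffic.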

Pre-Aggregation Node Initiator Configuration for IP SLA

The PAN is configured with IP SLA probes for initiating measurement of packet loss, packet delay, and packet delay variation (i.e., jitter) towards the CSG. It acts as a responder only to the MTG.

ip sla 1
 udp-jitter 100.111.13.26 59000 num-packets 100
 request-data-size 160
 verify-data
 frequency 30
ip sla schedule 1 life forever start-time now
ip sla 2
 udp-jitter 113.31.23.1 13031 num-packets 100
 request-data-size 160
 tos 192
 verify-data
 vrf LTE2 <<<< with vpn
 frequency 30
ip sla schedule 2 life forever start-time now
ip sla reaction-configuration 1 react rtt threshold-value 1 1 threshold-type immediate action-type trapOnly
ip sla reaction-configuration 1 react jitterDSAvg threshold-value 1 1 threshold-type immediate action-type trapOnly
ip sla logging traps
ip sla enable reaction-alerts

Mobile Transport Gateway Initiator Configuration for IP SLA

The MTG is configured with IP SLA probes for initiating measurement of packet loss, packet delay, and packet delay variation (i.e., jitter) towards the PAN.


Note The ToS values in IOS-XR are equal to 4x the desired DSCP value. Refer to Chapter 7, Quality of Service, for the LTE QCI to DSCP mapping.
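The relationship in the note (the DSCP occupies the top six bits of the ToS byte, so ToS = DSCP x 4) can be checked against the ToS values used by the probes in this chapter:

```python
def dscp_to_tos(dscp: int) -> int:
    """IOS-XR 'tos' value for a desired DSCP: DSCP shifted into the
    top 6 bits of the ToS byte (i.e., multiplied by 4)."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be 0-63")
    return dscp << 2

# ToS values appearing in this chapter's probe configurations:
assert dscp_to_tos(46) == 184  # EF  -> Real-Time class
assert dscp_to_tos(56) == 224  # CS7 -> Network Management class
assert dscp_to_tos(48) == 192  # CS6 -> Network Control class
assert dscp_to_tos(24) == 96   # CS3 -> Video class
```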

VPN: Jitter Probes

operation 11
 type udp jitter
  vrf CSN
  packet count 100
  tos 184 << Real-Time (EF) Class
  destination address 113.2.1.1
  destination port 58100
  frequency 30
 !
!
operation 12
 type udp jitter
  vrf CSN
  packet count 100
  tos 224 << Network Management Class
  destination address 113.2.1.1
  destination port 58110
  frequency 30
 !
!
operation 13
 type udp jitter
  vrf CSN
  packet count 100
  tos 192 << Network Control Class
  destination address 113.2.1.1
  destination port 58120
  frequency 30
 !
!
operation 14
 type udp jitter
  vrf CSN
  packet count 100
  tos 96 << Video Class
  destination address 113.2.1.1
  destination port 58130
  frequency 30
 !
!
operation 15
 type udp jitter
  vrf CSN
  packet count 100
  tos 0 << Best Effort Class
  destination address 113.2.1.1
  destination port 58140
  frequency 30
 !
!
schedule operation 11
 start-time now
 life forever
!
schedule operation 12
 start-time now
 life forever
!
schedule operation 13
 start-time now
 life forever
!
schedule operation 14
 start-time now
 life forever
!
schedule operation 15
 start-time now
 life forever
!

VPN: Echo Probes

operation 16
 type udp echo
  vrf CSN
  tos 184
  destination address 113.2.1.1
  destination port 58150
  frequency 30
 !
!
operation 17
 type udp echo
  vrf CSN
  tos 224
  destination address 113.2.1.1
  destination port 58160
  frequency 30
 !
!
operation 18
 type udp echo
  vrf CSN
  destination address 113.2.1.1
  destination port 58170
  frequency 30
 !
!
operation 19
 type udp echo
  vrf CSN
  tos 96
  destination address 113.2.1.1
  destination port 58180
  frequency 30
 !
!
operation 20
 type udp echo
  vrf CSN
  tos 0
  destination address 113.2.1.1
  destination port 58190
  frequency 30
 !
!
schedule operation 16
 start-time now
 life forever
!
schedule operation 17
 start-time now
 life forever
!
schedule operation 18
 start-time now
 life forever
!
schedule operation 19
 start-time now
 life forever
!
schedule operation 20
 start-time now
 life forever

Global: Jitter Probes

!
operation 21
 type udp jitter
  packet count 100
  tos 184
  source address 100.111.11.1
  source port 58201
  destination address 100.111.13.19
  destination port 58200
  frequency 30
 !
!
operation 22
 type udp jitter
  packet count 100
  tos 224
  source address 100.111.11.1
  source port 58211
  destination address 100.111.13.19
  destination port 58210
  frequency 30
 !
!
operation 23
 type udp jitter
  packet count 100
  tos 192
  source address 100.111.11.1
  source port 58221
  destination address 100.111.13.19
  destination port 58220
  frequency 30
 !
!
operation 24
 type udp jitter
  packet count 100
  tos 96
  source address 100.111.11.1
  source port 58231
  destination address 100.111.13.19
  destination port 58230
  frequency 30
 !
!
operation 25
 type udp jitter
  packet count 100
  tos 0
  source address 100.111.11.1
  source port 58241
  destination address 100.111.13.19
  destination port 58240
  frequency 30
 !
!
schedule operation 21
 start-time now
 life forever
!
schedule operation 22
 start-time now
 life forever
!
schedule operation 23
 start-time now
 life forever
!
schedule operation 24
 start-time now
 life forever
!
schedule operation 25
 start-time now
 life forever

Global: Echo Probes

!
operation 26
 type udp echo
  tos 184
  source address 100.111.11.1
  source port 56251
  destination address 100.111.13.19
  destination port 56250
  frequency 30
 !
!
operation 27
 type udp echo
  tos 224
  source address 100.111.11.1
  source port 56261
  destination address 100.111.13.19
  destination port 56260
  frequency 30
 !
!
operation 28
 type udp echo
  tos 192
  source address 100.111.11.1
  source port 56271
  destination address 100.111.13.19
  destination port 56270
  frequency 30
 !
!
operation 29
 type udp echo
  tos 96
  source address 100.111.11.1
  source port 56281
  destination address 100.111.13.19
  destination port 56280
  frequency 30
 !
!
operation 30
 type udp echo
  tos 0
  source address 100.111.11.1
  source port 56291
  destination address 100.111.13.19
  destination port 56290
  frequency 30
 !
!
schedule operation 26
 start-time now
 life forever
!
schedule operation 27
 start-time now
 life forever
!
schedule operation 28
 start-time now
 life forever
!
schedule operation 29
 start-time now
 life forever
!
schedule operation 30
 start-time now
 life forever
!
end

Reaction Configuration

The reaction configuration defines the thresholds for the previously configured probes, and the corresponding actions to be taken when those thresholds are exceeded. The following configuration shows a single example of a jitter probe reaction and an echo probe reaction.

reaction operation 11 << Jitter Probe
 react connection-loss
  action logging
  action trigger
  threshold type immediate
 !
 react jitter-average dest-to-source
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 10 upper-limit 15
 !
 react jitter-average source-to-dest
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 10 upper-limit 15
 !
 react packet-loss dest-to-source
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 3 upper-limit 5
 !
 react packet-loss source-to-dest
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 3 upper-limit 5
 !
 react rtt
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 5 upper-limit 15
!
reaction operation 16 << Echo Probe
 react connection-loss
  action logging
  action trigger
  threshold type immediate
 !
 react rtt
  action logging
  action trigger
  threshold type immediate
  threshold lower-limit 5 upper-limit 15
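Conceptually, each reaction raises a trigger when the monitored value crosses the upper limit and clears it when the value falls back below the lower limit. The sketch below is a simplified model of that hysteresis, not the exact device semantics (boundary handling on real platforms may differ):

```python
def react_immediate(value: float, lower: float, upper: float, raised: bool):
    """Simplified model of an IP SLA reaction with 'threshold type immediate':
    raise on crossing the upper limit, clear on dropping below the lower limit.
    Returns (new_raised_state, event_or_None). Illustrative only."""
    if not raised and value > upper:
        return True, "trigger"
    if raised and value < lower:
        return False, "clear"
    return raised, None
```

With the rtt limits above (lower 5, upper 15), an RTT of 20 ms raises the trigger, 10 ms leaves it latched, and 3 ms clears it.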


Chapter 9 Related Documents

The UMMT Implementation Guide - Large Network, Multi-Area IGP Design with IP/MPLS Access is part of a set of resources that comprise the UMMT System documentation suite. The resources include:

• UMMT 3.0 System Brochure: At a Glance brochure of the Unified MPLS for Mobile Transport System (http://sdu.cisco.com/publications/viewdoc.php?docid=6472)

• UMMT 3.0 Design Guide: Provides detailed information about Cisco's UMMT 3.0 system architecture, components, service models, and implementation details. (http://sdu.cisco.com/publications/viewdoc.php?docid=6432)

• UMMT Implementation Guide - Large Network, Multi-Area IGP Design with IP/MPLS Access: This document. Implementation guide with configurations for a transport model where the network organization between the core and aggregation domains is based on a single autonomous system, multi-area IGP design. The model follows the approach of enabling Unified MPLS LSPs using hierarchical labeled BGP LSPs across the core and aggregation network, and presents two approaches for extending the unified MPLS LSPs into the mobile RAN access domain. (http://sdu.cisco.com/publications/viewdoc.php?docid=6516)

• UMMT Implementation Guide - Large Network, Inter-AS Design with IP/MPLS Access: Implementation guide with configurations for a transport model where the core and aggregation networks are organized as different autonomous systems. The model follows the approach of enabling Unified MPLS LSPs using hierarchical labeled BGP LSPs based on iBGP labeled unicast within each AS, and eBGP labeled unicast to extend the LSP across AS boundaries. Two approaches are presented for extending the unified MPLS LSPs into the mobile RAN access domain. (http://sdu.cisco.com/publications/viewdoc.php?docid=6517)

• UMMT Implementation Guide - Large Network, Inter-AS Design with non-IP/MPLS Access: Implementation guide with configurations for a transport model where the core and aggregation networks are organized as different autonomous systems. The model follows the approach of enabling Unified MPLS LSPs using hierarchical labeled BGP LSPs based on iBGP labeled unicast within each AS, and eBGP labeled unicast to extend the LSP across AS boundaries. The model assumes a non-MPLS IP/Ethernet or TDM access where all mobile and potentially wireline services are enabled by the aggregation nodes. (http://sdu.cisco.com/publications/viewdoc.php?docid=6518)

• UMMT Implementation Guide - Small Network, Integrated Core and Aggregation with IP/MPLS Access: Implementation guide with configurations for a transport model in a small network where the core and aggregation networks are integrated into a single IGP/LDP domain. The aggregation nodes have subtending mobile RAN access networks that are MPLS enabled, and the unified MPLS LSPs are extended into the access network using hierarchical labeled BGP LSPs. (http://sdu.cisco.com/publications/viewdoc.php?docid=6519)

• UMMT Implementation Guide - NMS Testing: Implementation guide covering integration of Cisco Prime with the UMMT 3.0 system. (http://sdu.cisco.com/publications/viewdoc.php?docid=6520)