NFV in Telecom Operator Networks
Evgeny Bugakov
Senior Systems Engineer, JNCIE-SP
Juniper Networks Russia and CIS
Moscow, November 18, 2014
Virtualization strategy & Goals
[Diagram: end-to-end network - branch office and HQ CPE, carrier Ethernet switch, cell site router, aggregation/metro router, mobile & packet GWs, core, DC/CO edge router and service edge router - with virtualization targets per segment (Enterprise Edge/Mobile Edge, Aggregation/Metro/Metro Core, Service Provider Edge/Core and EPC, DC/CO): vCPE and Virtual Branch, Virtual PE (vPE), Virtual BNG (vBNG), Virtual Routing Engine (vRE), Virtual Route Reflector (vRR), MX SDN Gateway]
MX Virtualization strategy
Hardware Virtualization
SW Control plane and OS: Virtual JunOS, Forwarding plane: Virtualized Trio
Leverage development effort and JunOS feature velocity across all virtualization initiatives
vBNG, vPE, vCPE
Data center applications
Juniper Networks carrier-grade virtual router: vMX
VMX goals
Agile and scalable, orchestrated, leveraging JUNOS and Trio:
• Scale-out elasticity by spinning up new instances
• Faster time-to-market offering
• Ability to add new services via service chaining
• vMX treated similar to a cloud-based application
• Leverages the forwarding feature set of Trio
• Leverages the control plane features of JUNOS
VMX product overview
VMX: a scale-out router
Scale-up (Physical MX):
• Optimize for density in a single instance of the platform
• Innovate in ASIC, power and cooling technologies to drive density and the most efficient power footprint
Scale-out (Virtual MX):
• Virtualized platforms are not optimized to compete with physical routers with regards to capacity per instance
• Each instance is a router with its own dedicated control plane and data plane, allowing a smaller-footprint deployment with administrative separation per instance
Virtual and Physical MX
[Diagram: control plane and data plane on both platforms - the physical MX runs PFE microcode on the Trio ASIC, the virtual MX runs vPFE microcode on x86]
Virtualization techniques
[Diagram: guest VMs with applications and virtual NICs on a hypervisor (KVM, VMware ESXi) over the physical layer and physical NICs - device emulation and VirtIO drivers for para-virtualization versus PCI pass-through with SR-IOV]
Para-virtualization
• Guest and hypervisor work together to make emulation efficient
• Offers flexibility for multi-tenancy but with lower I/O performance
• NIC resource is not tied to any one application and can be shared across multiple applications
• vMotion-like functionality possible
PCI pass-through with SR-IOV
• Device drivers exist in user space
• Best for I/O performance but has a dependency on NIC type
• Direct I/O path between NIC and user-space application, bypassing the hypervisor
• vMotion-like functionality not possible
VMX Product
VCP (Virtualized Control Plane)
• Virtual JUNOS hosted on a VM
• Follows standard JUNOS release cycles
• Additional software licenses for different applications (vPE, vRR, vBNG)
VFP (Virtualized Forwarding Plane)
• Hosted on a VM, bare metal, or Linux containers
• Multi-core
• DPDK, SR-IOV, virtIO
VMX overview
Efficient separation of control and data-plane
– Data packets are switched within vTRIO
– Multi-threaded SMP implementation allows core elasticity
– Only control packets forwarded to JUNOS
– Feature parity with JUNOS (CLI, interface model, service configuration)
– NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)
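As an illustrative sketch (the addressing is hypothetical), the mapped interface is then configured with the same Junos CLI used on a physical MX:

set interfaces ge-0/0/0 description "maps to eth0 on the VFP"
set interfaces ge-0/0/0 unit 0 family inet address 192.0.2.1/30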
[Diagram: the VFP guest OS (Linux) runs the Virtual Trio forwarding plane with Intel DPDK and the LC kernel; the VCP guest OS (JUNOS) runs RPD, DCD, SNMP and CHASSISD; both sit on the hypervisor over x86 hardware]
vMX Performance
VMX Environment
• CPU assignments
  • Packet Processing Engine in VFP: variable, based on desired performance
  • Packet I/O: one core per 10G port
  • VCP - RE/control plane
  • VCP-VFP communication
  • Emulators
• 20 GB memory: 16 GB for RIOT [VFP], 4 GB for RE [VCP]
• 6x10G Intel NICs with DPDK
VMX Baseline Performance
[Test setup diagram: a tester connected to a single VMX instance over six 10G ports (Port0-Port5), each carrying about 8.9G of bidirectional traffic]
• Single instance of VMX with 6 ports of 10G sending bidirectional traffic
• 16 cores total
• Up to 60G of bidirectional (120G unidirectional) performance per VMX instance (1 VCP instance and 1 VFP instance) @ 1500 bytes
• No packet loss
• IPv4 Throughput testing only
VMX performance improvements
[Chart: forwarding performance versus Intel architecture generation (Sandy Bridge, Ivy Bridge, Haswell (current gen)) - Intel architecture changes and increase in number of cores per socket]
• VMX performance improvements will leverage the advancements in Intel Architecture
• Generational changes happen every 2 years and provide about a 1.5x-2x improvement in performance
• Iterative process to optimize efficiency of Trio ucode compiled as x86 instructions
• Streamline the forwarding plane with a reduced feature set to increase packets-per-second performance, i.e. Hypermode for vMX
[Chart: incremental improvements in virtualized forwarding-plane efficiency continue with Broadwell (next gen)]
Virtual Routing Engine
[Diagram: a single virtual routing engine controlling multiple VMX forwarding instances]
• Scale-out VMX deployment with multiple VMXs controlled by a single control plane
vMX use cases and deployment models
Service Provider VMX use case – virtual PE (vPE)
[Diagram: branch office and SMB CPEs attach via L2PE/L3PE over pseudowire, L3VPN or IPsec/overlay technology across the provider MPLS cloud to the DC/CO gateway and DC/CO fabric hosting the vPE, with peering to the Internet]
Market Requirement
• Scale-out deployment scenarios
• Low-bandwidth, high control-plane-scale customers
• Dedicated PE for new services and faster time-to-market
VMX Value Proposition
• VMX is a virtual extension of a physical MX PE
• Orchestration and management capabilities inherent to any virtualized application apply
Example VMX connectivity model – option 1
[Diagram: the CPE connects over a pseudowire via the L2PE and P routers in the provider MPLS cloud (LDP+IGP) to the DC/CO GW & ASBR; BGP-LU sessions through route reflectors (NHS / no NHS) carry labels over the DC/CO fabric L2/L3 overlay and a second LDP+IGP domain to the vPE]
• Extend the SP MPLS cloud to the vPE
• L2 backhaul from the CPE
• Scale the number of vPEs within the DC/CO by using concepts from Seamless MPLS
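For illustration, a minimal Junos sketch of the BGP labeled-unicast session used to extend label reachability toward the DC, assuming hypothetical group names and addresses; the NHS / no-NHS behaviour shown in the diagram would be applied with an export policy such as the one below:

set policy-options policy-statement nhs then next-hop self
set protocols bgp group infra-lu type internal
set protocols bgp group infra-lu local-address 10.0.0.1
set protocols bgp group infra-lu family inet labeled-unicast rib inet.3
set protocols bgp group infra-lu export nhs
set protocols bgp group infra-lu neighbor 10.0.0.11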
VMX as a DC Gateway
[Diagram: in the data center/central office, virtualized servers (VTEPs) with virtual networks A/B and a non-virtualized L2 environment behind ToR switches (L2/IP) connect through the VMX acting as VXLAN gateway (VTEP) and VPN gateway (L3VPN, VRF A/VRF B for VPN customers A/B) toward the MPLS cloud]
Market Requirement
• Service providers need a gateway router to connect the virtual networks to the physical network
• The gateway should be capable of supporting different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN
VMX Value Proposition
• VMX supports all the overlay, DCI and L2 technologies available on MX
• Scale-out control plane to scale up VRF instances and the number of VPN routes
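As a minimal sketch, one per-customer VRF on the VMX gateway could look like the following (names and values are illustrative; the overlay side, e.g. VXLAN or GRE termination, is omitted):

set routing-instances VPN-CustA instance-type vrf
set routing-instances VPN-CustA interface ge-0/0/1.100
set routing-instances VPN-CustA route-distinguisher 65000:101
set routing-instances VPN-CustA vrf-target target:65000:101
set routing-instances VPN-CustA vrf-table-label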
VMX to offer managed CPE/centralized CPE
MARKET REQUIREMENT
• Service providers want to offer a managed CPE service and centralize CPE functionality to avoid "truck rolls"
• Large enterprises want a centralized CPE offering to manage all their branch sites
• Both SPs and enterprises want the ability to offer new services without changing the CPE device
VMX VALUE PROPOSITION
• VMX with service chaining can offer best-of-breed routing and L4-L7 functionality
• Service chaining offers the flexibility to add new services in a scale-out manner
[Diagrams: (left) service provider managed virtual CPE - branch office switches backhauled via L2PE over the provider MPLS cloud to a DC/CO GW and DC/CO fabric with Contrail overlay, where vMX acts as vPE and as vCPE (IPsec, NAT) service-chained with vSRX (firewall) toward the Internet under a Contrail controller; (right) large enterprise centralized virtual CPE - branch offices over a private MPLS cloud and the Internet to the enterprise HQ and enterprise data center (storage & compute), where vMX acts as WAN router and as vCPE (IPsec, NAT) with vSRX (firewall) under a Contrail controller]
Example VMX connectivity model – option 2
[Diagram: the CPE connects over a pseudowire via the L2PE and P routers in the provider MPLS cloud (LDP+IGP) to the DC/CO GW, then over a VLAN/MPLS NNI across the DC/CO fabric to the vPE]
• Terminate the L2 connection from the CPE on the DC/CO GW
• Create an NNI connection from the DC/CO GW to the vPE instances
vMX FRS
VMX FRS product
• Official FRS for VMX Phase 1 is targeted for Q1 2015 with JUNOS release 14.1R5
• High-level overview of the FRS product:
  • DPDK integration; minimum 60G throughput per VMX instance
  • OpenStack integration
  • 1:1 mapping between VFP and VCP
  • Hypervisor support: KVM, VMware ESXi, Xen
• High-level feature support for FRS:
  • Full IP capabilities
  • MPLS: LDP, RSVP
  • MPLS applications: L3VPN, L2VPN, L2Circuit
  • IP and MPLS multicast
  • Tunneling: GRE, LT
  • OAM: BFD
  • QoS: Intel DPDK QoS feature set
[Diagram: VFP and VCP are the Juniper deliverable; the hypervisor/Linux, NIC drivers/DPDK and server/CPU/NIC layers are customer defined]
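As a small illustration of this feature set, enabling LDP-signaled MPLS on a core-facing vMX interface uses the standard Junos statements (interface name is illustrative):

set interfaces ge-0/0/0 unit 0 family mpls
set protocols mpls interface ge-0/0/0.0
set protocols ldp interface ge-0/0/0.0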
vMX Roadmap
VMX QoS model
[Diagram: the VFP with its virtual NICs and physical NICs carrying WAN traffic]
• Utilize the Intel DPDK QoS toolkit to implement the scheduler
• Existing JUNOS QoS configuration applies
• Destination queue + forwarding class are used to determine the scheduler queue
• Scheduler instance per virtual NIC
QoS scheduler implemented per VNIC instance:
• Port: shaping rate
• VLAN: shaping rate, 4k per IFD
• Queues: 6 queues, 3 priorities (1 high, 1 medium, 4 low)
• Priority-group scheduling follows strict priority for a given VLAN
• Queues of the same priority for a given VLAN use WRR
• High and medium queues are capped at the transmit rate
[Diagram: per-VLAN scheduler hierarchy - rate-limited high-priority and medium-priority queues plus low-priority queues (Queue 0-Queue 5) feed each VLAN, and VLANs feed the port]
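Because the existing JUNOS QoS configuration applies, a minimal illustrative class-of-service sketch might look as follows (scheduler names, rates and the interface are hypothetical):

set class-of-service schedulers data-sched transmit-rate percent 80
set class-of-service schedulers data-sched priority low
set class-of-service schedulers voice-sched transmit-rate percent 10
set class-of-service schedulers voice-sched priority high
set class-of-service scheduler-maps vmx-map forwarding-class best-effort scheduler data-sched
set class-of-service scheduler-maps vmx-map forwarding-class expedited-forwarding scheduler voice-sched
set class-of-service interfaces ge-0/0/0 scheduler-map vmx-map
set class-of-service interfaces ge-0/0/0 shaping-rate 1g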
VMX with vRouter and Orchestration
• vMX with vRouter integration
• VirtIO utilized for para-virtualized drivers
• Contrail OpenStack for:
  • VM management
  • Setting up the overlay network
• NFV orchestrator (potentially OpenStack Heat templates) utilized to easily create and replicate VMX instances
• Utilize OpenStack Ceilometer to determine VMX instance utilization for billing
[Diagram: the VFP guest VM (Linux + DPDK) and VCP guest VM (FreeBSD) connect through VirtIO to the Contrail vRouter and its vRouter agents on the physical layer (cores, memory, physical NICs, WAN traffic, OOB management); the Contrail controller and NFV orchestrator push template-based config - BW per instance, memory, number of WAN ports]
Physical & Virtual MX
• Offer a scale-out model across both physical and virtual resources
• Depending on the type of customer and service offering, the NFV orchestrator decides whether to provision the customer on a physical or virtual resource
[Diagram: physical forwarding resources and virtual forwarding resources (VMX1, VMX2 under a virtual routing engine) joined by an L2 interconnect, provisioned by the Contrail controller and NFV orchestrator using template-based config - BW per instance, memory, number of WAN ports]
vBNG (Virtual Unified Edge Solution)
vBNG, what is it?
• Runs on x86 inside virtual machine
• Two virtual machines needed, one for forwarding and one for control plane
• First iteration supports KVM for hypervisor and OpenStack for orchestration
• VMWARE support planned
• Based on the same code base and architecture as Juniper’s VMX
• Runs Junos
• Full featured and constantly improving
• Some features, scale and performance of vBNG will differ from pBNG
• Easy migration from pBNG
• Supports multiple BB models:
  • vLNS
  • BNG based on PPP, DHCP, C-VLAN and PWHT connection types
vBNG Value proposition
• Assumptions:
  • Highly utilized physical BNGs (pBNG) cost less (capex) than x86-based BNGs (vBNG)
  • Installation (rack and stack) of a pBNG costs more (opex) than installation of vBNGs
  • Capex cost of the cloud infrastructure (switches, servers and software) is spread over multiple applications (vRouters and other applications)
• vBNG is a candidate when:
  • a single pBNG serves 12,000 or fewer subscribers, or
  • pBNG peak utilization is about 20 Gb/s or less, or
  • BNG utilization and subscriber count fluctuate significantly over time, or
  • the application has many subscribers and small bandwidth
• pBNG is the best answer when:
  • BNGs are centralized and serve >12,000 subscribers or >20 Gb/s
Target use cases for vBNG
• vBNG for BNG near CO
• vLNS for business
• vBNG for lab testing new features or new releases
• vLNS for applications where the subscriber count fluctuates
vBNG for BNG near CO
[Diagram: vBNG deployment model - CPE in broadband homes connect over DSL or fiber (last mile) to OLTs/DSLAMs, which are aggregated by L2 switches into a central office with cloud infrastructure hosting the vBNG, which in turn connects to the SP core and the Internet]
• Business case is strongest when the vBNG aggregates 12K or fewer subscribers
• 1-10 OLTs/DSLAMs
vRR (Virtual Route Reflector)
Route Reflector PAIN POINTs addressed by VRR
Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O - determined by CPU speed).
Memory drives route reflector scaling
• Larger memory means that RRs can hold more RIB routes
• With more memory, an RR can control larger network segments - fewer RRs are required in the network
CPU speed drives faster BGP performance
• A faster CPU clock means faster convergence
• Faster RR CPUs allow larger network segments to be controlled by one RR - fewer RRs are required in the network
The vRR product addresses these pain points by running the Junos image as an RR application on faster CPUs and with more memory, on standard servers/appliances.
Juniper vRR DEVELOPMENT Strategy
• vRR development follows a three-pronged approach
1. Evolve platform capabilities using virtualization technologies
• Allow instantiation of the Junos image on non-RE hardware
• Any Intel Architecture Blade Server / Server
2. Evolve Junos OS and RPD capabilities
• 64 bit Junos kernel
• 64 bit RPD improvements for increased scale
• RPD modularity / multi-threading for better convergence performance
3. Evolve Junos BGP capabilities for RR application
• BGP Resilience and Reliability improvements
• BGP monitoring protocol
• BGP Driven Application control – DDoS prevention via FlowSpec
Virtual Route Reflector delivering
• Support network based as well as data center based RR design
• Easy deployment as scaling & flexibility is built into virtualization technology, while maintaining all essential product functionality
[Diagram: the Junos software image runs as a virtual RR on a generic x86 platform - any Intel server can be used to instantiate the vRR]
• JunosXXXX.img software image as vRR
• No hardware is included
• Includes ALL currently supported address families - IPv4/IPv6, VPN, L2, multicast AF (as today's product does)
• Exact same RR functionality as MX
• No forwarding plane
• Software SKUs for primary and standby RR
• Customer can choose any x86 platform
• Customer can choose CPU and memory size as per scaling needs
VRR: First implementation
• Junos Virtual RR
• Official release: 13.3R3
• 64-bit kernel; 64-bit RPD
• Scaling: driven by the memory allocated to the vRR instance
• Virtualization technology: QEMU-KVM
• Linux distribution: CentOS 6.4, Ubuntu 14.04 LTS
• Orchestration platform: libvirt 0.9.8, OpenStack (Icehouse), ESXi 5.5
vRR scaling
VRR: Reference hardware
• Juniper is testing vRR on the following reference hardware
• CPU: 16-core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
• Available RAM: 128G
• Only 32G per VM instance is being tested
• On-chip cache memory:
• L1 cache
• I-cache: 32KB, D-cache: 32KB
• L2 cache: 256KB
• L3 cache: 12MB
• Linux distribution: CentOS release 6.4 - KVM/QEMU
• Juniper will provide scaling guidance based on these HW specs
• Performance behavior might differ if different HW is chosen
64-bit FreeBSD does not work on Ivy Bridge due to known software bugs; please refrain from using Ivy Bridge.
VRR Scaling Results
* The convergence numbers also improve with higher clock CPU
Tested with 32G vRR instance
Address family | Advertising peers | Active routes | Total routes | Memory util. (all routes received) | Time to receive all routes | Receiving peers | Time to advertise routes (mem. util.)
IPv4  | 600 | 4.2 million | 42 Mil (10 paths) | 60% | 11 min | 600 | 20 min (62%)
IPv4  | 600 | 2 million   | 20 Mil (10 paths) | 33% | 6 min  | 600 | 6 min (33%)
IPv6  | 600 | 4 million   | 40 Mil (10 paths) | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2 Mil       | 4 Mil (2 paths)   | 13% | 3 min  | 600 | 3 min (13%)
VPNv4 | 600 | 4.2 Mil     | 8.4 Mil (2 paths) | 19% | 5 min  | 600 | 23 min (24%)
VPNv4 | 600 | 6 Mil       | 12 Mil (2 paths)  | 24% | 8 min  | 600 | 36 min (32%)
VPNv6 | 600 | 6 Mil       | 12 Mil (2 paths)  | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2 Mil     | 8.4 Mil (2 paths) | 22% | 8 min  | 600 | 8 min (22%)
vRR FRS
VRR: FEATURE Support
vRR feature | Support status
Support for all BGP address families | Supported today: same as chassis-based implementation
L3 unicast address families (IPv4, IPv6, VPNv4, VPNv6, BGP-LU) | Supported today: same as chassis-based implementation
L3 multicast address families (IPv4, IPv6, VPNv4, VPNv6) | Supported today: same as chassis-based implementation
L2VPN address families (RFC4761, RFC6074) | Supported today: same as chassis-based implementation
Route Target address family (RFC4684) | Supported today: same as chassis-based implementation
Support for the BGP ADD_PATH feature | Starting in 12.3 (IPv4, IPv6, labeled unicast v4 and labeled unicast v6)
Support for 4-byte AS numbers | Supported today
BGP neighbors | Supported today: same as chassis-based implementation
OSPF adjacencies | Supported today: same as chassis-based implementation
ISIS adjacencies | Supported today: same as chassis-based implementation
LDP adjacencies | Not supported at FRS
VRR: FEATURE Support – part 2
vRR feature | Support status
Ability to control BGP learning and advertising of routes based on any combination of the following attributes: prefix, prefix length, AS path, community | Supported today: same as chassis-based implementation
Interfaces must support 802.1Q VLAN encapsulation | Supported
Interfaces must support 802.1ad (QinQ) VLAN encapsulation | Supported
Ability to run at least two route reflectors as virtual routers in one physical router | Yes - by spawning different route reflector instances on different cores
Non-stop routing for all routing protocols and address families | Not at FRS; needs to be scheduled
Graceful restart for all routing protocols and address families | Supported today: same as chassis-based implementation
BFD for BGPv4 | Supported today - control-plane BFD implementation
BFD for BGPv6 | Supported today - control-plane BFD implementation
Multihop BFD for both BGPv4 and BGPv6 | Supported today
vRR Use cases and deployment models
Network based Virtual Route Reflector Design
[Diagram: clients 1 to n peer with Junos vRRs running on VMs on standard servers]
• vRRs can be deployed in the same locations in the network
• Same connectivity paradigm between vRRs and clients as between today's RRs and clients
• vRR instantiation and connectivity ("underlay") provided by OpenStack
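A minimal Junos sketch of the RR-side BGP configuration on the vRR (cluster ID and client addresses are illustrative); the same statements are used on a chassis-based RR:

set protocols bgp group rr-clients type internal
set protocols bgp group rr-clients local-address 10.255.0.1
set protocols bgp group rr-clients cluster 10.255.0.1
set protocols bgp group rr-clients family inet unicast
set protocols bgp group rr-clients neighbor 10.0.1.1
set protocols bgp group rr-clients neighbor 10.0.1.2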
Cloud-Based Virtual Route Reflector Design
Solving the best-path selection problem for a cloud virtual route reflector
[Diagram: vRRs hosted as applications in the data center reach clients in regional networks 1 and 2 through the cloud backbone (GRE, IGP) and a cloud overlay with Contrail or VMware; each vRR selects paths based on the view of the regional router (R1, R2) it is anchored to; clients peer with the vRR over iBGP]
• vRR as an "application" hosted in the DC
• The GRE tunnel is originated from gre.X (control plane interface)
• The vRR behaves as if it is locally attached to R1 (requires resolution RIB config)
Virtual CPE
“There is an App for That”
Evolving service delivery to bring cloud properties to managed business services
“30Mbps Firewall”
“Application Acceleration”
“Remote access for 40 employees”
“Application Reporting”
The concept of Cloud Based CPE
• A Simplified CPE
• Remove CPE barriers to service innovation
• Lower complexity & cost
[Diagram: today's CPE bundles DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice and MoCA/HPAV/HPNA3; with cloud-based CPE the on-premises device is simplified and DHCP, firewall, routing/IP forwarding and NAT move into the BNG/PE in the SP network]
• In-network CPE functions: leverage & integrate with other network services; centralize & consolidate; seamlessly integrate with mobile & cloud-based services
• Direct connect: extend reach & visibility into the home; per-device awareness & state; simplified user experience
• Simplify the device required on the customer premises
• Centralize key CPE functions & integrate them into the network edge
CLOUD CPE ARCHITECTURE HIGH LEVEL COMPONENTS
Onsite CPE (customer site):
• Simplified CPE device with switching, access point, upstream QoS and WAN interfaces
• Optional L3 and tunneling
vCPE context (edge router):
• CPE-specific context in the router
• Layer 3 services (addressing & routing)
• Basic value services (NAT, firewall, …)
Services:
• Advanced security services (UTM, IPS, firewall, …)
• Extensible to other value-added services (M2M, WLAN, hosting, business apps, …)
Management:
• Cloud-based provisioning
• Monitoring
• Customer self-care
Virtual CPE use cases
vCPE Models - Scenario A: Integrated v-Branch Router
[Diagram: an L2 CPE (Ethernet NID or switch with smart SFP, optionally with L3 awareness for QoS and assurance; LAG, VRRP, OAM, L2 filters) connects to a cloud CPE context in the edge router providing DHCP, addressing, routing, Internet & VPN, QoS, NAT, firewall and IDP, with statistics and monitoring per vCPE; vCPE instance = VPN routing instance]
Pros:
• Simplest onsite CPE
• Limited investments
• LAN extension
• Device visibility
Cons:
• Access network impact
• Limited services
• Management impact
Juniper:
• MX
• JS Self-Care App
• NID Partners
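Since a vCPE instance in this model is a VPN routing instance on the edge router, a minimal illustrative sketch (instance name, interface and values are hypothetical; the NAT/firewall services on the MS-DPC are omitted) parallels the gateway VRF sketch shown earlier:

set routing-instances vcpe-site1 instance-type vrf
set routing-instances vcpe-site1 interface ge-0/0/2.200
set routing-instances vcpe-site1 route-distinguisher 65000:200
set routing-instances vcpe-site1 vrf-target target:65000:200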
vCPE Models - Scenario B: Overlay v-Branch Router
[Diagram: lightweight L3 CPEs connect over a (un)secure tunnel across L2 or L3 transport to per-site VMs hosting the VPN; vCPE instance = VR on a VM; a VM can be shared across sites]
Pros:
• No domain constraint
• Operational isolation
• VM flexibility
• Transparent to existing network
Cons:
• Prerequisites on the CPE
• Blindsided edge
• Virtualization tax
Juniper:
• Firefly
• Virtual Director
BROADBAND DEVICE VISIBILITY EXAMPLE: PARENTAL CONTROL BASED ON DEVICE POLICIES
[Diagram: home network devices (laptop, tablet, Little Jimmy's desktop) behind an L2 bridge; a portal/mobile app provides self-care & reporting]
Activity reporting: volumes and content (Facebook.com, Twitter.com, Hulu.com, Wikipedia.com, Iwishiwere21.com)
Content filter: "You have tried to access www.iwishiwere21.com. This site is filtered in order to protect you."
Time of day: "Internet access from this device is not permitted between 7pm and 7am. Try again tomorrow."
CLOUD CPE APPLICATIONS CURRENT STATUS
Business Cloud CPE - Market:
• Benefits well understood
• Existing demand at the edge and in the data center
• NSP and government projects
• Driven by product, architecture, planning departments
Residential Cloud CPE - Market:
• Emerging concept, use cases under definition (which cloud services?)
• No short-term commercial demand
• Standardization at BBF (NERG)
• Driven by NSP R&D departments
Business Cloud CPE - Technology:
• Extension to MPLS VPN requirements
• Initial focus on routing and security services
• L2 CPE initially, L3 CPE coming
• Concern on redundancy
Residential Cloud CPE - Technology:
• Extension to BNG
• Focus on transforming complex CPE features into services
• Key areas: DHCP, NAT, UPnP, management, self-care
• Requires very high scale
Status:
• All individual components available and can run separately
• MX vCPE context based on a routing instance, with core CPE features and basic services on MS-DPC; L2 based
• "System integration" work in progress + roadmap for next steps
• Evangelization: working on a marketing demo (focused on use cases/benefits); involvement in NSP proofs of concept
• Standardization: design in progress
How to get this presentation?
• Scan it
• Download it!
• Join at Facebook: Juniper.CIS.SE (Juniper techpubs ru)
Questions?
Thank you for your attention!