Introduction to NFV (movilforum)
Transcript of Introduction to NFV (movilforum)
Introduction to NFV
Network Innovation & Virtualisation
Global CTO Unit
09.06.2015
FRANCISCO-JAVIER RAMÓN SALGUERO
Head of Network Virtualisation – GCTO Unit, Telefónica
Chair of ETSI NFV Testing, Experimentation and Open Source
[email protected] / @fjramons
DISCOVER, DISRUPT, DELIVER
Getting the most from disruptive technologies often
requires a trial and error phase…
REASONS FOR A TRANSFORMATION
Why evolution is not enough to stay in business
NFV promises to transform the Telco industry, with
operations as one of the key levers
• Making infrastructure uniform
• Facilitating interoperability
• Improving risk management in a
changing environment
• Fostering competition &
innovation
• Minimizing entry barriers
• Making capacity addition flexible
and easy
• Simplifying network operation
EVOLUTIONARY APPROACH, DRIVEN BY RELEVANT USE CASES
[Diagram: legacy infrastructure (switch, access point, modem; 2G/3G/4G, xDSL, FTTx) evolving into a virtual network infrastructure with a virtual CPE]
Adapt to:
- New specific needs of the different nodes
- Reuse of equipment still being amortised
- Leverage of newly planned elements in the architecture
Telefónica is fully committed to this view,
advancing through TCO-driven use cases
[Chart: TCO benefits vs. technical difficulty for the candidate use cases: vCPE, vBRAS, vEPC, vIMS]
OUR AMBITION: 30% of new network elements to be virtualized by 2017
There are relevant facts to take into account
for significant business impact
Source: Deloitte. Feb 2015
Source: Own, based on Ovum data
Integration with LEGACY SYSTEMS & INTEROPERABILITY is
perceived as the main hurdle…
…while the DATA PLANE (rather than the CONTROL PLANE) is a must to make the case
NFV Reference Lab
In fact, we are already beyond theory
Flagship trial: Residential vCPE
>45 VNFs, >25 vendors. Massive pre-commercial trial in Q1.
NETWORK FUNCTIONS VIRTUALISATION (NFV) implies the
separation of the FUNCTION from the CAPACITY
[Diagram: network functions such as BRAS, DPI, GGSN/SGSN, Firewall, CG-NAT and PE Router become VIRTUAL NETWORK FUNCTIONS running on COMMON HW (servers & switches); the FUNCTION (semantics) is decoupled from the CAPACITY (resource mgmt)]
Once Network Functions are SW-based, they might be
moved to the most convenient location
Avoiding “virtual NF stitching”… or how to combine
NFV with a common networking infrastructure
How can communications between VNFs
be smoothly controlled?
Software Defined Networking (SDN) & OpenFlow
come to the rescue!
SDN paradigm: Fully decouple data and control planes
Switches: Simple packet processing elements (per-packet rules)
Controllers: Software-based controlling components (high-level decisions)
OpenFlow provides an open interface between control and data plane, making
SDN possible
Much like a processor instruction set
Even bypassing conventional L2/L3 protocols
[Diagram: in the CURRENT PARADIGM, every box bundles its features with an operating system and specialized packet-forwarding hardware; with SDN, the features move into software controllers on top of simple forwarding hardware]
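To make the per-packet-rule split concrete, here is a minimal sketch of the match-action model in Python. It is a toy model of the SDN idea described above, not the real OpenFlow wire protocol; the names (`FlowTable`, the rule dictionaries, the action strings) are illustrative.

```python
# Minimal sketch of the SDN split: a "switch" that only matches
# per-packet rules, and a "controller" that installs them.
# Hypothetical model, not the actual OpenFlow protocol.

class FlowTable:
    def __init__(self):
        self.rules = []  # (match, action) pairs, in priority order

    def install(self, match, action):
        """Called by the controller: install a high-level rule."""
        self.rules.append((match, action))

    def process(self, packet):
        """Called per packet by the switch: first matching rule wins."""
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss: escalate the decision

table = FlowTable()
# Controller decisions, expressed as simple match/action rules:
table.install({"dst_ip": "10.0.0.2"}, "output:port2")
table.install({"eth_type": "arp"}, "flood")

print(table.process({"dst_ip": "10.0.0.2", "eth_type": "ipv4"}))  # output:port2
print(table.process({"dst_ip": "10.0.0.9", "eth_type": "arp"}))   # flood
print(table.process({"dst_ip": "10.0.0.9", "eth_type": "ipv4"}))  # send_to_controller
```

The table-miss branch is the essence of the paradigm: the switch stays simple, and anything it cannot decide per-packet goes up to the software controller.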
Capacity is SLICED in VMs, interconnected by switches
NFV: SW-defined Network Functions running in slices of the infrastructure, unaware of the particular server the slice was taken from.
SDN: interconnects the Virtual Network Functions, behaves as a backplane, and also provides connectivity to external physical nodes/networks.
[Diagram: a BNG control function (DHCP, UPnP, TR-069, IPv4/IPv6, session mgmt) and a CG-NAT (NAT, NAT ctrl., pool admin, pool mgmt) running in slices interconnected by switches]
VIRTUAL NETWORK FUNCTIONS (VNFs) are composed of a set of
interconnected VMs…
[Diagram: a VNF composed of several VMs, with internal interfaces between the VMs and external interfaces to the outside]
FLEXIBLE SCALING: add more VMs as you grow
SIMPLER ADDITION OF NEW FEATURES: new features can be isolated in new VMs
… and VIRTUALISED NETWORKS are built of a set of
interconnected VNFs…
[Diagram: several VNFs, each composed of VMs, interconnected through their external interfaces]
FULL NETWORK SCENARIOS CAN BE EASILY CLONED, MOVED, RESIZED, etc.
Fortunately, the NFV ORCHESTRATOR (NFV-O) not only automates
network deployments, but also hides that complexity
[Diagram: the same scenario seen as interconnected VNFs, with the individual VMs hidden]
OUR NETWORK NODES ARE BACK! No need to worry about VMs.
SCENARIOS CAN BE ABSTRACTED: parameters for E2E management can be exposed.
ETSI's NFV architecture provides a framework to apply
virtualisation technologies to a network environment
Execution environment:
• INFRASTRUCTURE HW (servers and switches): commodity servers and commodity switching infrastructure
• INFRASTRUCTURE SW (OS + hypervisor): allows server capacity to be sliced
• SW NODES RUNNING ON TOP: Virtual Network Functions, composed of VMs, etc.
Beside the execution environment, a management environment is needed. Which one?
ETSI's NFV architecture provides a framework to apply
virtualisation technologies to a network environment
Management environment:
• Virtualised Infrastructure Manager: manages the INFRASTRUCTURE (commodity servers, OS + hypervisor, commodity switching infrastructure) and the Virtual Machines sliced from it
• VNF Manager: handles the SW NODE LIFECYCLE of the Virtual Network Functions, each composed of VMs
• Orchestrator: composes NETWORK SERVICES (Network Scenarios) out of VNFs and integrates with legacy OSS/BSS
The target: to define a future-proof network
architecture…
[Diagram: HW and SW decoupling (COTS HW, OS + hypervisor, MPLS/SDN/Optical) at two kinds of infrastructure:
• LOCAL PoPs: the Data Plane must be distributed
• REGIONAL DATA CENTRES: the Control Plane can be centralised
Service Domain: CDN, Video, P-CSCF, SDP, CSFB, IMS, DHCP, PCRF, DNS, UDB, SRVCC, NGIN, M/SMSC
Network Domain: EPC, BRAS, CG-NAT, DPI, GGSN, PE, Security]
There will be two kinds of Virtualized Network Infrastructure: local PoPs and regional Data Centres
Network Virtualisation is not just cloud computing applied
to network environments
The network differs from the computing environment in two key factors:
1. Data plane workloads (which are huge!): a need for high and predictable performance (as with current equipment)
2. The network requires shape (+ E2E interconnection): a global network view is required for management
…which are big challenges for vanilla cloud computing.
Network Virtualisation is not Cloud Computing!!
Data plane performance is key for the most interesting
cases in terms of TCO
[Chart, BEFORE: bare metal reaches acceptable performance, while a VM @Cloud falls short by a large GAP]
[Chart, AFTER: bare metal reaches acceptable performance; a VM @Cloud sits roughly x10 below it; a VM @vPoP closes the gap. The improvements span both EXECUTION and MANAGEMENT.]
Going beyond the limits in bare metal
[Chart: bare metal vs. VM @Cloud (x10 GAP) vs. VM @vPoP, against the acceptable-performance line]
Why now?
BETTER PROCESSOR SUPPORT (FOR I/O)
• Direct cache access for I/O transactions
• Direct PCIe connection to the processor, without an I/O hub
• Large-page support for I/O transactions
BETTER SW SUPPORT
• Polling-mode drivers, able to poll the NIC, reducing the number of interrupts the Operating System must service
• Specialized SW libraries (e.g. DPDK) for data plane operations
NUMA awareness and CPU pinning to optimise cache usage
Direct cache access minimises RAM accesses in favour of
cache accesses, which are much quicker
[Diagram: two NUMA nodes, each with cores (T0/T1 threads), L1/L2 and L3 caches, memory and I/O devices, linked by QPI. Guidelines: minimise QPI usage; maximise cache sharing; minimise memory translations; enable hugepage usage; run polling-mode drivers with full core assignment to the process; exploit direct access to the last-level cache]
Direct PCIe connection to processor
Source: Intel® Xeon® E5 Processor Family
[Diagram: on the WESTMERE platform (Intel® Xeon® 5500/5600), PCIe hangs off a separate I/O hub (Intel 5500 Series IOH, up to 36 PCIe lanes; DDR3 up to 1333; QPI up to 6.4 GT/s). On the SANDY BRIDGE platform (E5-2600 product family), up to 40 PCIe lanes per socket connect directly to the processor (DDR3 up to 1600; QPI up to 8.0 GT/s)]
Direct PCIe connection improves I/O latency and
removes the bottleneck from the I/O hub
Some fundamentals on memory (and I/O) pagination
The Operating System uses paging to:
• Distribute and organize RAM among processes
• Provide protection
[Diagram: 4 KB logical pages of processes A and B mapped to physical memory through page tables, with read/write permissions and not-loaded pages]
Page tables are assisted by hardware (4 KB pages): filled by the OS, read by the hardware. And the page tables themselves are paged!!!
Large pages support for memory & I/O transactions
Every instruction requires a translation from a linear address to a
physical memory address, involving several lookups
[Diagram: a linear address split into indices A.B.C plus an offset D is resolved through a 1st-level translation table, a 2nd-level table (X) and a 3rd-level table (X.Y), ending at a 4 KB physical page X.Y.Z; the TLB caches recent translations]
Large pages support for memory & I/O transactions
Large pages minimize the number of translation lookups…
[Diagram: with 2 MB large pages, one translation level is skipped: the 2nd-level table (X) points directly at a 2 MB physical page X.Y, and each TLB entry covers far more memory]
Large pages support for memory & I/O transactions
[Diagram: with small pages (4 KB), 512 translations are needed to address 2 MB of memory; with large pages (2 MB), a single translation addresses the same 2 MB]
… and increases the hit probability in the translation cache (TLB)
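The arithmetic on this slide can be checked directly. A quick sketch (the 64-entry TLB in the last lines is an illustrative figure; real TLB sizes vary by processor):

```python
# Number of page translations needed to cover a 2 MB working set,
# for 4 KB small pages vs. 2 MB large pages.

region = 2 * 1024 * 1024          # 2 MB of memory to address

small_page = 4 * 1024             # 4 KB
large_page = 2 * 1024 * 1024      # 2 MB

print(region // small_page)       # 512 translations with small pages
print(region // large_page)       # 1 translation with large pages

# Assuming a TLB with 64 entries (illustrative; sizes vary by CPU),
# small pages cover 64 * 4 KB = 256 KB before misses start,
# while large pages cover 64 * 2 MB = 128 MB:
print(64 * small_page // 1024)            # 256 (KB covered)
print(64 * large_page // (1024 * 1024))   # 128 (MB covered)
```

The second pair of numbers is the real point of the slide: the same TLB suddenly covers orders of magnitude more memory, so translation misses become rare.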
Polling mode drivers
[Diagram: traffic flows from the hardware straight to the data-plane apps through a polling-mode driver, without passing through the Operating System]
Avoiding OS interrupts increases I/O performance and cache hits dramatically
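The polling idea can be pictured with a schematic sketch. This is pure illustration of the control flow, not a real driver: an actual polling-mode driver (e.g. DPDK's PMDs) busy-polls the NIC's RX descriptor rings from a dedicated, isolated core in user space; here a `deque` stands in for the ring.

```python
# Schematic contrast with interrupt-driven I/O: instead of the NIC
# interrupting the OS per packet, a dedicated core repeatedly drains
# the RX ring in bursts. Toy model only; names are illustrative.

from collections import deque

rx_ring = deque()          # stands in for the NIC's RX descriptor ring

def nic_delivers(packets):
    """The NIC writes received packets into the ring (via DMA in reality)."""
    rx_ring.extend(packets)

def poll_burst(max_burst=32):
    """Polling mode: drain up to a burst of packets. No interrupt,
    no kernel transition, and the cache on the polling core stays warm."""
    burst = []
    while rx_ring and len(burst) < max_burst:
        burst.append(rx_ring.popleft())
    return burst

nic_delivers([f"pkt{i}" for i in range(5)])
print(poll_burst())        # ['pkt0', 'pkt1', 'pkt2', 'pkt3', 'pkt4']
```

In a real deployment the `poll_burst` loop runs forever on its isolated core, which is exactly why that core must be removed from the general OS scheduler (see the `isolcpus` option on the next slide).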
Proper HW use by SW (DPDK enablement)
• DPDK is a set of libraries and drivers for fast packet processing
• Open Source, BSD licensed (dpdk.org)
• Allows exclusive use of CPUs, hugepages and NICs
[Diagram: the NFV application runs in user space on top of the DPDK libraries (Environment Abstraction Layer, multicore framework, hugepage memory, ring buffers, NIC poll-mode library); the poll-mode drivers bypass the kernel on the way to the hardware]
Example of Linux GRUB configuration:
default_hugepagesz=1G hugepagesz=1G
hugepages=120 isolcpus=2-23,26-47
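After booting with options like these, the result can be checked in `/proc/meminfo` on the host. A small sketch of such a check, run here over a sample snippet (the values are illustrative, matching the GRUB example; on a real host you would parse `open("/proc/meminfo").read()` instead):

```python
# Verify hugepage configuration by parsing /proc/meminfo-style output.
# `sample` holds illustrative values; on a real Linux host, read the
# actual file instead of this string.

sample = """\
HugePages_Total:     120
HugePages_Free:      118
Hugepagesize:    1048576 kB
"""

def parse_meminfo(text):
    """Turn 'Key:   value' lines into a dict of stripped strings."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key.strip()] = rest.strip()
    return info

info = parse_meminfo(sample)
print(info["HugePages_Total"])   # 120
print(info["Hugepagesize"])      # 1048576 kB, i.e. the 1 GB pages configured
```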
Line rate is feasible even for realistic BW and small packets
[Chart: BRAS performance on bare metal. Throughput (Mpps) vs. packet size (bytes, between CPE and BRAS), from 0 to 1600 bytes: switching capacity for 1 CPU socket (4 interfaces) and extrapolated to the full server (8 interfaces), with the IMIX and real average frame sizes marked]
Approaching the HW limits with virtual machines
[Chart: bare metal vs. VM @Cloud (x10 GAP) vs. VM @vPoP, against the acceptable-performance line]
Why now?
1st set of improvements (Intel® Sandy Bridge):
• Second-level address translations for memory R/W from the CPU (small and large pages)
• Second-level address translations for memory R/W from I/O (small pages)
• I/O interrupt remapping, to support NICs in passthrough to a VM
2nd set of improvements (Intel® Ivy Bridge):
• Second-level address translations for memory R/W from I/O (large pages)
Processor support for virtualisation allows hypervisor
intermediation to be minimised…
… while SR-IOV allows efficient NIC sharing, with
no effect on memory
Second-level address translation
[Diagram: BARE METAL: logical memory (program) maps directly to physical memory, with 4 KB or 2 MB pages. VIRTUALIZED: logical memory (program) maps to the VM's "physical" memory, which in turn maps to the host's physical memory, i.e. two levels of translation]
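The two translation levels can be pictured as two composed mappings. A minimal sketch, with made-up page numbers (the dicts stand in for the page tables the hardware actually walks):

```python
# Sketch of second-level address translation: inside a VM, every
# guest page number goes through two mappings instead of one.
# All page numbers below are invented for illustration.

guest_page_table = {0: 7, 1: 3, 2: 9}     # guest-virtual -> guest-"physical"
host_page_table  = {3: 40, 7: 12, 9: 25}  # guest-"physical" -> host-physical

def translate_bare_metal(vpn):
    """One mapping: logical page straight to physical page."""
    return guest_page_table[vpn]

def translate_virtualized(vpn):
    """Two chained mappings: the hypervisor's table is applied
    on top of the guest's table."""
    return host_page_table[guest_page_table[vpn]]

print(translate_bare_metal(0))    # 7
print(translate_virtualized(0))   # 12
# In real hardware each of these "lookups" is itself a multi-level
# page-table walk, so a TLB miss inside a VM is far more expensive
# than on bare metal; that is why hardware-assisted second-level
# translation and large pages matter so much here.
```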
Data plane performance requires a proper HW view
(Enhanced Platform Awareness)…
[Diagram: the CLOUD COMPUTING VIEW aggregates cores, memory and I/O devices into a single pool; the NETWORK VIRTUALISATION VIEW exposes the real topology (CPUs, QPI link, per-socket memory and I/O devices) so traffic can be placed to minimise QPI usage, maximise cache sharing, minimise memory translations, enable hugepage usage, and run polling-mode drivers with full core assignment to the process]
…while avoiding unintended contention…
[Diagram: two CPUs with their cores, memories and I/O devices; resources are partitioned among the host OS + hypervisor, VNF 1, VNF 2 and VNF 3, with some left unused]
• Dedicated resource allocation:
• Memory: huge pages
• CPUs: not oversubscribed, isolated from the host OS
• I/O devices: passthrough, SR-IOV
• Modern chipset families can even avoid cache memory contention
…and bypassing critical bottlenecks whenever needed
[Diagram: in CLOUD COMPUTING, upstream and downstream traffic traverse the hypervisor's vSwitch before reaching the virtual apps inside each VM; in NFV, the Virtual Network Functions manage the data plane directly and the vSwitch is bypassed]
Negligible gap between virtualised and bare metal
Results are expressed in kpps per interface
(x4 for 1 CPU socket, x8 for whole server extrapolation)
Use of small pages
Results are expressed in kpps per interface
(x4 for 1 CPU socket, x8 for whole server extrapolation)
I/O going through OS kernel
Relying on the OS kernel is not recommended for the data plane (roughly a x10 penalty)
No hyper-threading
Results are expressed in kpps per interface
(x4 for 1 CPU socket, x8 for whole server extrapolation)
Proper orchestration is needed
Why a vanilla CMS does not fulfil the expectations
Reaching HW limits thanks to proper orchestration
[Chart: bare metal vs. VM @Cloud (x10 GAP) vs. VM @vPoP, against the acceptable-performance line]
EPA must be coherent across the NFV elements,
including the MANO stack
[Diagram: EPA must hold at every layer:
• NFVI optimized for NFV (EPA-enabled): servers, switches, hypervisor
• Well-designed VNFs, leveraging EPA
• An EPA-enabled VIM
• Information Models that include EPA requirements
• An NFV Orchestrator that interprets the open Info Model and optimally deploys each VNF]
EXPERIENCE: two identical HW setups with different
MANO stacks will exhibit very different performance…
Both setups share the same VNFs, servers, switches and hypervisor. Then what's the difference?
• TRADITIONAL CLOUD: a CMS acting as VIM; no Enhanced Platform Awareness; networks based on the vSwitch; descriptors à la cloud
• NFV: an NFV-ready VIM (EPA-enabled), with CPU & NUMA pinning, PCI passthrough, hugepages, etc.; networks based on a ToR OpenFlow switch; EPA-enabled descriptors
… even with the same Network Scenario in both setups
[Diagram: vRouter A, vRouter B and vRouter C chained by 20 Gbps links]
Just some simple maths before going on…
Gbps = Mpps x frame_size
where Gbps = Gigabits per second, Mpps = Millions of packets per second, and frame_size = frame size (in kilobits).
Attention is often paid to the Gbps figure, but the performance limit is given by the Mpps value: by tweaking the frame size, higher Gbps can be 'advertised'.
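Plugging numbers into the formula above makes the point concrete. One assumption in this sketch: on the wire, each Ethernet frame also costs about 20 bytes of preamble plus inter-frame gap, so the function adds that by default (set `wire_overhead=0` to use the bare formula).

```python
# Packets per second needed to fill a link, per Gbps = Mpps x frame_size.

def mpps_for_line_rate(gbps, frame_bytes, wire_overhead=20):
    """Mpps needed to fill a `gbps` link with frames of `frame_bytes`.
    Each Ethernet frame also costs ~20 bytes of preamble + inter-frame
    gap on the wire; pass wire_overhead=0 to ignore that."""
    bits_per_frame = (frame_bytes + wire_overhead) * 8
    return gbps * 1e9 / bits_per_frame / 1e6

# 10 GbE with minimum-size 64-byte frames: the classic 14.88 Mpps figure
print(round(mpps_for_line_rate(10, 64), 2))    # 14.88
# 192-byte frames, as in the measurement later in the deck:
print(round(mpps_for_line_rate(10, 192), 2))   # 5.9
# Large 1500-byte frames need far fewer packets for the same Gbps:
print(round(mpps_for_line_rate(10, 1500), 2))  # 0.82
```

The last two lines are exactly the 'advertising' trick the slide warns about: quoting Gbps at large frame sizes hides an 18x difference in the packet rate the box must actually sustain.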
NFV+EPA vs. Vanilla Cloud
[Chart: NFV+EPA reaches line rate with a 192-byte frame size, roughly x100 the vanilla-cloud figures]
Having x100 better scalability should be sufficiently appealing!
With the right exposure of HW resources to the VNFs, critical bottlenecks like the vSwitch can be bypassed.
From Cloud Computing to Network Virtualisation
CLOUD COMPUTING vs. NETWORK VIRTUALISATION:
1. Performance bound to CPU vs. performance bound to I/O & memory access
2. Aggregated view of resources (CPU, memory, etc.) vs. NUMA view: the internal architecture is relevant for guests
3. Endpoints (applications need the OS) vs. middlepoints (data-plane network functions bypass the OS)
4. Node-centric (shapeless interconnection) vs. network-centric (the network has a shape)
5. Many and small VMs vs. few and large VMs
What NFV is not: Infrastructure as a Service (IaaS)
[Diagram: of the whole NFV stack (VIM, VNF Manager, Orchestrator, legacy OSS/BSS over commodity HW and switching), cloud IaaS covers only the Virtual Machines layer]
CLOUD IaaS OFFERS VMs ON DEMAND, FOR COMPUTING PURPOSES. ONLY VMs ARE CONSIDERED:
• No notion of network node/scenario
• Poor performance (low and unpredictable)
• BW, shape and QoS are ignored; a connection just implies "visibility"
• Difficult integration with external networks (especially L2)
UNRELIABLE BEHAVIOUR (GAPS NEED TO BE ADDRESSED)
What NFV is not: Platform as a Service (PaaS)
[Diagram: in the same stack, cloud PaaS automates application installation across VMs, but still knows nothing of VNFs or Network Scenarios]
CLOUD PaaS AUTOMATES APP INSTALLATION IN A SET OF VMs (e.g. deployment of a web server connected to a database):
• Inherits all the IaaS issues: low performance, shapeless, external networks, etc.
• No such thing as a VNF, so no reusable "nodes"
• "Scenarios" need to be composed of VMs from scratch, with no actual links
UNRELIABLE BEHAVIOUR. NO CONCEPT OF VNF.
Instead, the NFVO should reconcile the proper network-
level abstraction with the low-level topology
[Diagram: the NFVO maps the LOGICAL VIEW of the network onto the PHYSICAL VIEW of the infrastructure]
Network functionalities are fully defined by SW, minimising
dependence on HW constraints
Network Virtualisation provides a means to make the network more
flexible by minimising dependence on HW constraints…
Traditional Network Model: APPLIANCE APPROACH
• Network functionalities are based on specific HW, with specific SW linked to HW vendors
• One physical node per role (BRAS, DPI, GGSN/SGSN, Session Border Controller, Firewall, CG-NAT, PE Router, Imagenio STB)
Virtualised Network Model: VIRTUAL APPLIANCE APPROACH
• Network functionalities are SW-based, running as VIRTUAL APPLIANCES over COTS HW (standard high-volume servers & switches)
• Multiple roles over the same HW, with orchestrated, automatic & remote install
• Function and capacity are decoupled
… helping to reduce network management complexity, as
HW can be treated as a pool of resources
APPLIANCE APPROACH:
• Node sizing is determined by the bottleneck among its functionalities (e.g. session mgmt at 95% load while switching sits at 40%)
• Capacity growth often leads to node growth or silo HW purchase: session-mgmt limitations per node lead to a 2nd node purchase
Vs. VIRTUAL APPLIANCE APPROACH:
• HW becomes interchangeable and aggregatable (a pool): switching resources at 40% load, session-mgmt resources at 15%
• Resource assignment becomes fully flexible and dynamic, leaving spare capacity for extra growth in any functionality
PROCESSING CAPACITY BECOMES A COMMODITY, MANAGED AS A CONTINUUM