White Paper
The "Fit VNF" & Intelligent NFV
Infrastructure: Designing for
Service Agility
Prepared by
Gabriel Brown
Senior Analyst, Heavy Reading
www.heavyreading.com
on behalf of
www.conteXtream.com
December 2014
HEAVY READING | DECEMBER 2014 | WHITE PAPER | THE "FIT VNF" & INTELLIGENT NFV INFRASTRUCTURE 2
Service Agility in Telecom Networks
Network functions virtualization (NFV) and software-defined networking (SDN) are two of the most important trends in networking. Used together, these technologies offer service providers the opportunity to transform their networks to become more responsive to innovation in the service layer and, therefore, more economically sustainable. With the right software-centric network strategy, operators can maintain their role in the value chain and extend their addressable markets.
This white paper will argue that to realize the full benefits of SDN and NFV, operators
will need to go beyond straightforward replacement of hardware appliances with
virtualized network functions (VNFs) and adopt a "cloud-native" approach to VNF
design and to NFV networking. Specifically, it will discuss the case for the "Fit VNF" and
the need for a scalable SDN fabric to connect VNFs over distributed infrastructure.
NFV Development & Deployment
ETSI's formation of the NFV Industry Specification Group in 2012 served to accelerate industry adoption of VNFs, helped crystallize a common set of operator requirements and led to the creation of a multilateral proof-of-concept (PoC) program. Many operators now have "pre-NFV" virtualized functions live in their networks, and with regard to NFV specifically, there has been substantial progress over the past two years, with trials and deployment plans under way around the world. For example:
A significant number of completed or ongoing PoCs and trials – at the time of writing, 25 PoCs were ongoing within the ETSI framework.
Commitments to live deployments from global Tier 1 operators, including DoCoMo, AT&T and Telefonica.
Widespread expectations for deployment of "NFV cloud." In a Heavy Reading survey, 51% of operators said they expect to have many VNFs deployed on an advanced carrier cloud infrastructure within 2-5 years.
As a result of this progress, we now believe that NFV will become a mainstream deployment option in two or three years, particularly for less demanding use cases. More demanding applications may take longer to develop and "harden" into mass-market propositions; however, even here, progressive operators are already preparing for deployment – for example, of virtual EPC in the mobile core.
Heavy Reading operator surveys indicate commercial NFV deployments will be on shared-resource telco-cloud platforms that are largely agnostic to the VNF type and capable of supporting a wide range of workloads. This, in turn, drives the requirement for dynamic and scalable networking solutions to connect the VNFs needed to create services. Operators need a networking solution for NFV.
Service Agility Driving NFV
NFV was initially pitched as a way to reduce network capex and opex. By leveraging the high-volume data center ecosystem and the automation inherent to cloud networking, operators can reduce their costs of production to profitably meet the growing demand for data services.
However, operators also need to reestablish their relevancy in the service and application layer to maintain revenue and profitability. Since the advent of NFV, we have identified increasing support for the view that the real opportunity in the transformation to software-centric networks is greater service agility and flexibility. Figure 1, from a Heavy Reading operator survey, shows this is clearly the case.
The desire to be more agile is partly born of frustration with the classic operating model: introducing new equipment and services is very slow once procurement, testing, installation and so on are taken into account. Moreover, the resulting architectures are static and difficult to change, which leads to "network ossification" and, in turn, restricts operators' ability to participate in innovation in the service layer.
One challenge is that "service agility" is easy to talk about and aspire to, but harder to describe because it differs according to the network and customer context. However, some examples of service agility might include:
Ability to on-board new customers with short lead time
Deployment of customer-specific network instances or "slices," perhaps mapped to line-of-business cloud applications
User-programmable network services controlled via remote portal and self-
configured to the customer requirement
New services also come with a risk of failure, and it is important that the operational and financial requirements associated with service launch are kept to a minimum.
If operators are to have the confidence to experiment and pursue "devops-style" working methods, the business-case threshold for new service introduction must be lowered. Then, when services are successful, they should scale quickly; or when services fail, they can be withdrawn with minimal impact.
Figure 1: Importance of Factors Driving NFV Deployment
Source: Heavy Reading’s NFV Operator Survey (n=71), 2014
Introducing the "Fit VNF"
NFV will change the design of telecom application software (VNFs). A consequence of the desire for services that are more adaptable to network conditions and to end-user demand is greater granularity of service provision. More granular services lead, logically, to the idea of smaller, thinner or fitter VNFs.
Telecom network software is traditionally developed with the constraints of the hardware (e.g., the chassis or line card) in mind and focuses on maximizing the performance of that resource. Once lengthy development and testing processes have been satisfactorily completed, equipment is deployed for a multi-year lifecycle.
In a virtual environment, where applications are abstracted from hardware, VNF designers face different challenges and opportunities. If operators are to achieve a step change in service agility, it should be possible to provision VNFs on a quasi-on-demand basis. Fit VNFs can be dedicated to a single function and then connected together using traffic steering (or "chaining") mechanisms to create services.
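The chaining idea above can be sketched in a few lines of code. The Python below is purely illustrative – the class and function names are invented for this example and do not come from any NFV standard – but it shows the essential pattern: single-function "Fit VNFs" composed into an ordered service chain that traffic traverses.

```python
# Illustrative sketch: composing single-function "Fit VNFs" into a service
# chain via traffic steering. All names here are hypothetical, not drawn
# from any NFV specification.

class FitVNF:
    """A VNF dedicated to one function, applied to traffic in sequence."""
    def __init__(self, name, process):
        self.name = name
        self.process = process  # function applied to each packet

    def handle(self, packet):
        return self.process(packet)

def build_chain(*vnfs):
    """Return a function that steers a packet through each VNF in order."""
    def chain(packet):
        for vnf in vnfs:
            packet = vnf.handle(packet)
        return packet
    return chain

# Two minimal single-function VNFs: NAT rewrites the source address;
# a firewall drops packets addressed to a blocked port.
nat = FitVNF("nat", lambda p: {**p, "src": "192.0.2.1"})
firewall = FitVNF("fw", lambda p: None if p.get("dport") == 23 else p)

service = build_chain(nat, firewall)
print(service({"src": "10.0.0.5", "dport": 80}))  # forwarded, src rewritten
print(service({"src": "10.0.0.5", "dport": 23}))  # dropped (None)
```

The point of the sketch is that each VNF does exactly one thing; the service is defined by the chain, not by any one node, so swapping or reordering functions is a configuration change rather than a software change.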
VNFs Designed for the Cloud
VNFs themselves should be designed for the cloud. There is, to some extent, a problem of terminology: NFV refers to the process of abstracting telecom applications from hardware and running them in virtual machines (VMs) in the cloud. The base meaning is to take existing applications and transform (virtualize) them. This is not the wrong thing to do; indeed, it has potential benefits. However, experience from enterprise and Web services has shown that porting legacy functions to the cloud results in less than optimal architectures that are subject to disruption from "cloud-native" services.
The argument, therefore, is that telecom network software (VNFs) must be adapted
for – and, ideally, designed for – the cloud. Figure 2 compares and contrasts some
of the major differences in approach.
Figure 2: VNFs Designed for the Cloud vs. Hardware Appliances

Deployment
  Hardware appliance: Manual installation after extensive testing; long lifecycle
  Cloud-based VNF: VNF provisioning on-demand; shorter lifecycles

Connectivity/Networking
  Hardware appliance: Integrated into appliance
  Cloud-based VNF: Provided by NFV cloud platform

Resiliency
  Hardware appliance: 1+1 with state replication
  Cloud-based VNF: 1+N with rapid instantiation of new VNFs in event of failure

State Management
  Hardware appliance: Local to appliance
  Cloud-based VNF: Distributed across infrastructure

Scaling
  Hardware appliance: Increase capacity with new hardware and/or via pooling
  Cloud-based VNF: Scale out via new VNF instantiation (automated)

Methodology
  Hardware appliance: Waterfall
  Cloud-based VNF: Agile

Source: Heavy Reading
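The resiliency contrast in Figure 2 – 1+1 state replication versus 1+N rapid instantiation – can be illustrated with a toy failover loop. This is a hypothetical sketch, not code from any NFV platform: a pool of N active instances is health-checked, and a failed instance is simply replaced with a fresh one rather than failed over to a dedicated hot standby.

```python
# Toy sketch of 1+N resiliency: instead of a dedicated 1+1 standby with
# replicated state, the platform re-instantiates a failed VNF from its
# image. All names are hypothetical.

import itertools

_ids = itertools.count(1)

def instantiate(vnf_type):
    """Stand-in for the NFVI spinning up a new VM running this VNF."""
    return {"id": next(_ids), "type": vnf_type, "healthy": True}

def heal(pool, vnf_type):
    """Replace any failed instance with a freshly instantiated one."""
    return [inst if inst["healthy"] else instantiate(vnf_type)
            for inst in pool]

pool = [instantiate("firewall") for _ in range(3)]  # N active instances
pool[1]["healthy"] = False                          # simulate a failure
pool = heal(pool, "firewall")

print([inst["id"] for inst in pool])  # instance 2 replaced by new instance 4
```

The design point is that state lives in the platform (per Figure 2's "distributed across infrastructure"), so a replacement instance needs no warm standby of its own.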
Our research is very clear that operators do not view "quick and dirty" ports of appliance software to a VM environment as satisfactory. They are concerned about the performance of "Frankenstein" applications running in the cloud and about their ability to manage such functions in an automated way.
Defining the "Fit VNF"
Designers of VNFs that are "optimized for the cloud" will strip out unnecessary features that were built into the functional software when hardware optimization was paramount and when competition encouraged feature stacking by vendors seeking differentiation. The idea is to pare back the VNF to its core function and remove extraneous capabilities no longer needed in the cloud. The resulting application can be called the "Fit VNF."
There are several ways an application can be pared back to create a "Fit VNF," including:
Removing redundant software modules. At a basic level, this entails removing components such as chassis management from the VNF. Over time, more sophisticated functions, such as those relating to high availability and redundancy, may also no longer be needed in certain VNF types because, in principle, responsibility for failover can migrate to the cloud management layer.
Simplifying the VNF networking stack. Non-optimized VNFs, ported from legacy appliances, typically include sophisticated networking logic. However, the networking function required of a hardware appliance is no longer needed because that capability can be fulfilled by the network fabric that underpins the cloud infrastructure. How to separate "NFV networking" from VNFs is a subject of great interest to operators.
Deconstructing multi-function nodes. In practice, network equipment often incorporates more than one function; in other words, appliances are often multi-functional. For example, a firewall box may also support distributed denial-of-service (DDoS) protection, network address translation (NAT), deep packet inspection (DPI) and access control. In NFV there is no particular reason to combine these functions into one do-it-all node (a.k.a. a non-optimized "heavy" VNF). In fact, it is logical to separate them into discrete functions and deploy them as needed in service chains.
Enabling modular scaling. Even single-function applications are, in practice, made up of multiple modules integrated by the vendor to create the node. These applications can be decomposed into discrete software components that can be scaled independently according to service demand. One example from mobile networks might be to abstract the GTP user plane from packet core equipment and then deploy it either in-pool centrally or at the edge, according to the preferred architecture. This is disruptive and will take time to implement, but is potentially important.
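The modular-scaling idea in the last item above – sub-components that scale to their own demand rather than the whole node scaling as one unit – can be sketched as a per-module autoscaler. The module names (including the separated GTP user plane) and the thresholds are purely illustrative assumptions.

```python
# Illustrative per-module autoscaling for a decomposed VNF. Module names
# and capacity figures are hypothetical, chosen only to show the idea.

def desired_instances(load, capacity_per_instance):
    """Scale each module to its own demand, independently of the others."""
    return max(1, -(-load // capacity_per_instance))  # ceiling division

# A packet core decomposed into modules that scale separately: here the
# GTP user plane carries far more load than the control plane.
modules = {
    "gtp-user-plane": {"load": 950, "capacity": 100},
    "control-plane":  {"load": 120, "capacity": 100},
}

plan = {name: desired_instances(m["load"], m["capacity"])
        for name, m in modules.items()}
print(plan)  # {'gtp-user-plane': 10, 'control-plane': 2}
```

In the monolithic appliance model, meeting the user-plane demand would mean scaling the entire node tenfold; decomposition lets each component grow only as needed.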
Smarter NFV Infrastructure: Offloading to the Network
Deconstructing VNFs designed for a hardware appliance to create a "Fit VNF" obviously has implications – side effects, in other words. This is because the components to be removed served a useful function (or they wouldn't exist in the first place), and these functional needs must be met elsewhere, which, in this case, means in the NFV cloud networking platform.
The idea is to extract connectivity and management functions from the VNF and migrate them to the cloud platform that hosts and connects multiple VNFs. In this way operators can create a generic NFV cloud that can support services composed of the appropriate "Fit VNFs" placed in sequence. This platform is, in principle, agnostic to VNFs, which makes the introduction of new services much faster and experimentation far less risky. The "Fit VNF" and "Smart Platform" thus become important enablers of service agility.
Figure 3 shows the migration from the classic appliance model to the "Fit VNF" model. To the left of the diagram is the classic hardware appliance. In the middle is what could be called the "Heavy VNF": the application has been ported to a virtual environment and deployed on an NFVI, but it is not yet optimized for the cloud. To the right, the "Fit VNF" has handed over still more functionality to the NFV Infrastructure (NFVI) layer and, as a result, is leaner and more focused. Redundant functions are extracted from the application and migrated to the NFVI, with the result that the VNF becomes "fitter" and the platform becomes "smarter."
Ultimately, VNFs could become library, or "catalog," items that can be deployed on the NFV cloud platform, via instruction from the NFV service orchestrator, on an on-demand basis. This type of implementation is sometimes described as a "Real-time Cloud OS" for NFV. It will require sophisticated NFVI management and a high-performance, scalable NFV networking solution.
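The catalog-driven model described above can be sketched as an orchestrator instantiating VNFs from a library on demand. This is a conceptual sketch only; the catalog entries and the API are invented for illustration and do not correspond to the ETSI MANO interfaces.

```python
# Conceptual sketch of a VNF catalog plus on-demand instantiation by a
# service orchestrator. Names and data structures are hypothetical.

CATALOG = {
    "vFW":  {"vcpus": 2, "image": "vfw-1.0"},
    "vNAT": {"vcpus": 1, "image": "vnat-2.1"},
    "vDPI": {"vcpus": 4, "image": "vdpi-0.9"},
}

def deploy_service(chain):
    """Instantiate each catalog item in the requested service chain."""
    instances = []
    for vnf_name in chain:
        spec = CATALOG[vnf_name]  # look up the library item
        instances.append({"vnf": vnf_name, **spec, "state": "running"})
    return instances

deployed = deploy_service(["vFW", "vNAT"])
print([i["vnf"] for i in deployed])  # ['vFW', 'vNAT']
```

The service becomes a declarative request against the catalog; the "Real-time Cloud OS" framing refers to the platform resolving such requests into running, connected instances automatically.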
Figure 3: Migration to the “Fit VNF”
Source: Heavy Reading
"Smart" NFV Infrastructure
The transition from "Heavy VNFs" running on static infrastructure to "Fit VNFs" running on smart, dynamic infrastructure is a major undertaking and, in practice, we think operators, and the industry at large, will pursue a phased approach, such that there will be a co-evolution of the applications (VNFs) and the platform (NFVI). We expect that simpler "water-carrier" VNFs (e.g., middle boxes) will surrender autonomy to the platform before the "big beast" VNFs (e.g., EPC or IMS).
The Smart NFVI
Figure 4 identifies some of the differences between the two models. To the left is the current model of full-featured VNFs (ported from hardware) running on so-called "dumb NFVI." Because this is a relatively static platform, the VNF must be aware of factors such as subscriber state and load/availability, and should manage its own redundancy and resiliency. This logic is replicated for each VNF deployed, which increases complexity and makes it harder to scale and change services.
To the right is the "Smart Platform" approach, in which the "Fit VNF" has been pared back to its essential function, while the generic (yet sophisticated) capabilities needed to manage the VNF in its service context have been migrated to the platform. The smart NFVI platform provides enabling functions across multiple applications and dynamic connectivity between VNFs. This is important to service agility because, with a scalable, dynamic networking platform, operators can much more easily introduce new VNFs and combinations of VNFs into their networks. Such capability is the basis for new and differentiated services.

Figure 4: Migration to the "Fit VNF"

VNF layer
  Fat VNF + dumb NFVI: Network sub-functions in software, including dynamic virtual connectivity; scalability to run on multiple cores/servers; load balancing and health checks; internal SFC to connect sub-function instances; elasticity to change scale to needs; distribution to multiple data centers while maintaining state; analytics collection; service awareness; subscriber awareness; high availability and redundancy
  Fit VNF + smart NFVI: Network functions in software only; none of the above sub-functions

NFVI layer
  Fat VNF + dumb NFVI: Static virtual connectivity only
  Fit VNF + smart NFVI: Static and dynamic virtual connectivity; scalability to run on multiple cores/servers; load balancing and health checks; dynamic SFC to connect function instances; elasticity to change scale to needs; distribution to multiple data centers while maintaining state; analytics collection; service awareness; subscriber awareness; high availability and redundancy

Source: ConteXtream
Open Platform for NFV
VNF software vendors, of course, need to know what platform they should "write to"
when they design and optimize their products. One of the challenges with NFV currently is that there are not yet industry-standard implementations. Vendors have developed solutions with slightly different platform and integration requirements, which, again, is not scalable from an operator perspective. To address this, industry players have proposed the Open Platform for NFV (OPNFV).
OPNFV is an initiative hosted by the Linux Foundation and supported by a large number of operators and vendors. The intent is to establish a carrier-grade, open source NFVI reference platform to help improve consistency and interoperability between NFV components. The initial scope of OPNFV covers the NFVI, the Virtualized Infrastructure Manager (VIM) and application programming interfaces (APIs) to other NFV elements, such as management and orchestration (MANO). Certification programs to ensure interoperability between VNFs and OPNFV are already emerging.
OPNFV has the potential to be an important part of the "Smart NFVI." Collaboration with groups like ETSI, the ONF and the IETF indicates clearly that this is the direction the project is heading. The challenge is how to balance the common-denominator functions that will drive broad interoperability in a short timeframe against how quickly the NFVI (based on OPNFV) can be made "smart" and, at the same time, interoperable.
Interoperable NFVI
A core tenet of NFV is that the architecture should be open and interoperable, so
that operators can create networks and services using components from multiple
suppliers. A challenge is to identify the capabilities that need to be included in the
base NFVI platform, and those that should be "bespoke" because they are used for
niche or more difficult use cases. The biggest benefits to NFV will come when many
adjacent functions in an operator network are virtualized. This drives the need for
platforms that will support a wide variety of VNF types and for VNFs that will run on
many NFVI instances, as visualized in Figure 5.
Figure 5: Open Platform for NFV
Source: Heavy Reading
There is inevitably tension in this process, and a risk that the lowest common denominator, adopted in the interests of simplicity and expediency, does not support a sufficiently broad range of VNF types and associated service chains; or that services can be created on the platform, but at the cost of manual intervention to manage the connectivity between VNFs.
Where the NFVI is more capable ("intelligent"), and VNFs simpler, operators can
more quickly deploy functions and provision services. The challenge is how hard
and fast operators should push for the extra intelligence ultimately needed in the
platform. By over-specifying the platform upfront, operators will lose the agility they
desire and place undeliverable requirements on suppliers. Therefore, we expect
both the platform and VNFs to evolve in tandem. As more platform capability is
available, VNFs can be simplified accordingly.
This will likely result in a situation where less demanding functions will be the first to surrender their autonomy to the platform. An example might be in the mobile core, where it will be more attractive to "streamline" simpler functions from the Gi-LAN (load balancers, video optimizers, HTTP proxies, etc.) than to try to redesign the more critical and complex EPC functions too radically.
Programmable Networking Requirements
NFV networking is a topic of great importance. Quite clearly, operators want
networks that are more programmable and faster to change. Current network and
service configurations are associated with long and static life cycles, which means
operators cannot react to, or participate in, the rapid pace of innovation in the
application layer. This limits their addressable markets and is clearly a bad thing for
a sector with limited revenue growth.
Programmable, dynamic connectivity between VNFs is, therefore, valuable. If an NFV service orchestrator can push rules to a platform that can quickly provision the VNFs needed to support a service, and the associated service function paths, operators will move a big step closer to the agility they desire from NFV. This capability is a large component of what makes an NFV platform "smart" and is why SDN and network virtualization are important.
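As a rough illustration of what "pushing rules" for a service function path might look like, the sketch below turns an ordered list of VNF locations into hop-by-hop match/action forwarding rules, loosely in the spirit of SDN flow rules. Everything here is an invented simplification, not a real OpenFlow implementation.

```python
# Illustrative translation of a service function path into match/action
# steering rules, loosely inspired by SDN flow programming. Names and
# rule format are hypothetical.

def path_to_rules(flow_match, vnf_path):
    """Emit one steering rule per hop of the service function path."""
    rules = []
    hops = vnf_path + ["egress"]
    current = "ingress"
    for nxt in hops:
        rules.append({"at": current, "match": flow_match, "forward_to": nxt})
        current = nxt
    return rules

# Steer matching web traffic through a firewall, then DPI, then out.
rules = path_to_rules({"dst_port": 80}, ["vFW@dc1", "vDPI@dc2"])
for r in rules:
    print(r)
```

The orchestrator's job, in this framing, is exactly this translation: from a declared service chain to concrete forwarding state in the fabric, with no manual configuration per hop.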
In telecom networks there are additional requirements that make NFV networking more challenging than single-data-center network virtualization, including: the need to operate in a distributed mode; the need to maintain state across multiple data center locations; and the need to manage "subscriber-aware" traffic flows at scale.
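One way to make "subscriber-aware" steering scale across distributed sites is to map each subscriber deterministically to the VNF instance holding its state, so every site can compute the same mapping without querying a central per-flow database. The hashing scheme below is a simplification chosen for illustration, not a description of any vendor's design.

```python
# Illustrative subscriber-aware steering: a deterministic hash maps each
# subscriber to the instance that owns its state, so every site computes
# the same answer. A simplified stand-in for real consistent hashing.

import hashlib

INSTANCES = ["vEPC@nyc", "vEPC@lon", "vEPC@fra"]

def instance_for(subscriber_id):
    """Deterministically pick the instance that owns this subscriber."""
    digest = hashlib.sha256(subscriber_id.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

# The same subscriber always lands on the same instance, at every site.
assert instance_for("imsi-310150123456789") == instance_for("imsi-310150123456789")
print(instance_for("imsi-310150123456789"))
```

A production design would use true consistent hashing, so that adding or removing an instance remaps only a small fraction of subscribers rather than nearly all of them, as simple modulo hashing does.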
Figure 6 illustrates a network of distributed NFVI points of presence connected over
a physical network. VMs (running VNFs) are then connected using an SDN-controlled
network – either a virtual overlay or physical assets. The chart uses OpenFlow as an
example of an SDN protocol. There is debate about the extent to which networking
should be integrated with the NFV cloud platform, and the extent to which it is a
separate, specialist function.
In single data center environments, the argument for the NFV cloud management platform to incorporate networking is stronger. In telecom networks, where VNFs are likely to be distributed (placed according to performance requirements), the argument for a specialist SDN solution for NFV is stronger because of the need to manage application and subscriber state across locations. Conceivably, network services (and end-user services) will be created through the composition of VNFs hosted centrally and/or at the edge, depending on factors like the user context, the network and the content being consumed.
Figure 6: Distributed NFVI
Source: Heavy Reading
Background to This Paper
About the Author
Gabriel Brown
Senior Analyst, Heavy Reading
Gabriel covers mobile network system architecture, including evolution of the RAN, the mobile core, and service-layer platforms and applications. Key technologies in his coverage area include LTE Advanced, small cells, Evolved Packet Core, carrier WiFi and software-centric networking technologies such as NFV, SDN and service chaining. Gabriel has covered mobile networking since 1998 through published research, live events, operator surveys and custom consulting. Before moving to Heavy Reading, Gabriel was Chief Analyst of the monthly Insider Research Services, published by Heavy Reading's parent company Light Reading. Gabriel is based in the U.K. and can be reached at [email protected].
About Heavy Reading
Heavy Reading (www.heavyreading.com), the research division of Light Reading, offers deep analysis of emerging telecom trends to network operators, technology suppliers and investors. Its product portfolio includes in-depth reports that address critical next-generation technology and service issues, market trackers that focus on the telecom industry's most critical technology sectors, exclusive worldwide surveys of network operator decision-makers that identify future purchasing and deployment plans, and a rich array of custom and consulting services that give clients the market intelligence needed to compete successfully in the global telecom industry.
Heavy Reading
P.O. Box 1953
New York, NY 10156
Phone: +1 212-600-3000
www.heavyreading.com