THE SOFTWARE DEFINED DATA CENTER - THE THREE CABALLEROS FINALLY HAVE THEIR CLOUDY DAY
Paul Brant, Sr. Technical Education Consultant, EMC Corporation
2013 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Table of Figures ...................................................................................................5
Abstract ................................................................................................................6
Introduction .........................................................................................................7
Hitting Complexity Head On and the Rise of Orderliness ................................... 10
Coping with Complexity ..................................................................................... 11
Operations Are Being Transformed ................................................................... 14
Software Defined Storage.................................................................................. 14
Enterprise Applications are being Transformed ................................................. 15
Networks are Being Transformed ...................................................................... 16
The Consumerization of IT ................................................................................ 18
The Network Caballero ..................................................................................... 18
Today’s Network - An Artifact, Not a Discipline .................................................. 21
Best Practice – Extract simplicity and let go of mastering complexity ................ 21
Best Practice – Embrace the power of abstraction ............................................ 22
Network Evolution ............................................................................................. 23
Data and Control planes .................................................................................... 24
Drilling Down on the Network ............................................................................ 24
Software-Defined Networking (SDN) ................................................................. 26
Best Practice – Consider SDN’s Real-Time Decision making attributes............. 30
SDN USE CASES ............................................................................................. 30
OpenFlow .......................................................................................................... 31
Best Practice – Do not underestimate the “Network innovators dream” ..................... 32
OpenFlow Switch Components ................................................................................. 33
Best Practice – Consider using OpenFlow as a competitive differentiator ................. 36
Best Practice – Consider OpenFlow solutions from various vendors for the SDDC ... 38
The Storage Caballero ...................................................................................... 40
Software Defined Storage.................................................................................. 40
Best Practice – Consider Data Lifecycle Management as the new Software Defined Storage ...... 42
Best Practice – Consider legacy storage solutions in the SDDC ........................ 43
Best Practice – Consider that storage media itself is transforming ..................... 43
Best Practice – Do not underestimate how Flash Is Transforming Storage ........ 44
The Server Caballero ........................................................................................ 44
Network-aware applications ............................................................................... 45
Advantages of Application Awareness ............................................................... 47
Best Practice – Consider the infrastructure and Big Data as the emerging app for the SDDC ..... 47
Orchestration ..................................................................................................... 49
Best Practice – Understand the efficiencies of command and control in the SDDC .................... 51
Best Practice – Consider that Simple Orchestration does not mean simple solutions ................ 53
Security .............................................................................................................. 55
Traditional Network Vulnerabilities ..................................................................... 56
Best Practice – Consider implementing an OpenFlow Network Security Kernel architecture ..... 57
Best Practice – Consider implementing an upcoming SDN security solution - FortNOX ............ 59
Conclusion ......................................................................................................... 61
Author’s Biography ........................................................................................... 61
Appendix A – References ................................................................................. 62
Index ................................................................................................................... 64
Table of Figures
Figure 1 - The Three Caballeros ................................................................................................ 6
Figure 2 - Al Gore at his desk ..................................................................................................... 7
Figure 3 - The Three Caballeros singing .................................................................................... 7
Figure 4 - Cockpit of the Boeing 787 .......................................................................................... 8
Figure 5 - Confusion vs. Complicated ........................................................................................ 9
Figure 6 – Disorder and Complexity vs. time .............................................................................10
Figure 7 - Donald the Server, José the Network, and Panchito the storage ...............................12
Figure 8 - Siloed Applications ....................................................................................................12
Figure 9 - Single Logical Abstracted Pool ..................................................................................13
Figure 10 - IP Hourglass Protocol Stack....................................................................................21
Figure 11 - Network Technologies over the years .....................................................................23
Figure 12 - Traditional Network Switch ......................................................................................24
Figure 13 - Software Defined Network Switch Architecture .......................................................24
Figure 14 - Software Defined Network Layers ...........................................................................26
Figure 15 - Control Plane and Data Plane Visibilities ................................................................28
Figure 16 - Software Defined Network Virtualization Integration ..................................29
Figure 17 - How to Enhance Innovation ....................................................................................32
Figure 18 - OpenFlow Instruction Set Examples .......................................................................33
Figure 19 - OpenFlow Internal Switch Architecture ..................................................................33
Figure 20 - OpenFlow Pipeline Architecture ..............................................................................34
Figure 21 - Flow Table Entry layout ...........................................................................................34
Figure 22 - SDN Controller Application Integration ....................................................................47
Figure 23 - Application Acceleration using SDN .......................................................48
Figure 24 - The OODA Loop .....................................................................................................51
Figure 25 - SDN Defense Approaches ......................................................................................57
Figure 26 - FortNOX Architecture .............................................................................59
Disclaimer: The views, processes or methodologies published in this article are those of the
authors. They do not necessarily reflect EMC Corporation’s views, processes or
methodologies.
Abstract
When we think of the data center, visions of large storage arrays, servers, and network switches, lots of hardware, come to mind. Well, the data center is in major transformation mode. Data centers are going soft, meaning the trend is to define the data center not by its hardware components, even though hardware is still needed, but by the software that controls it. This transformation is the final step in allowing cloud services to be delivered most efficiently!
For many IT vendors, the traditional business model was to have their own software run on their own hardware. Well, things are changing. Enter the world of Software Defined Data Centers. With the world embracing the "cloud", IT vendors have two choices: they can try to whistle past the graveyard and hope to continue selling proprietary software/hardware solutions, or they can decide to become open and decouple the hardware from the software.
So, what is the software-defined data center? Every data center has three caballeros: server, network, and storage, and we all know the way to the cloud is through some sort of virtualization. However, one might ask: virtualization has been around for a while. Is what we have today good enough? If not, why do we need better virtualization? How do we get there?
The good news is that many companies are doing wonderful things to get the three caballeros on well-bred virtualized horses. Recent developments such as software-defined networking (SDN) technologies and other initiatives will be the cornerstone of creating a true software-defined data center.
Is the “software-defined data center" just another way of saying "the cloud"? No. The cloud truly
represents how customers procure resources, on demand, through Web forms. The software-
defined data center is something else that will be discussed in depth.
In summary, this Knowledge Sharing article will describe how data centers are going soft and why this is pivotal in transforming the data center into a true cloud delivery solution. It offers best practices that align with the most important goal: creating the next-generation data center, one that addresses the business challenges of today and tomorrow through business and technology transformation.
Figure 1 - The Three Caballeros
Introduction
I propose that the Software Defined Data Center (SDDC) has its roots in Al Gore and three Caballeros. One might ask, what do Al Gore and The Three Caballeros have to do with this new SDDC concept? The SDDC has its roots in:
1. The Internet
2. Orderliness
3. Managing complexity
4. Abstraction
5. Orchestration
6. Security
These six concepts all have a major
role in what will be the next wave in
what we are calling the SDDC.
Al Gore was the 45th Vice President of the United States (1993–2001), under President Bill
Clinton. In the 1970’s, he was the first elected official to grasp the vast potential of computer
communications. He understood that it could have a broader impact than just improving the
conduct of science and scholarship; it was the precursor to the Internet. The Internet, as we
know it today, was not deployed until 1983. This led to the ubiquitous IP (Internet Protocol),
which will be discussed later.
Another aspect of Al Gore is seen by looking at his office as shown in Figure 2 - Al Gore at his
desk. Al sits tranquil amid the apparent chaos of his desk. How does he cope with all that
complexity? He would explain that there is order and structure to the apparent complexity. It is
easy to test: if one asks the desk owner for something,
they know just where to go, and the item is often retrieved
much faster than from someone who keeps a neat and
orderly workplace. Those who are comfortable with
apparent chaos face the challenge that others are
continually trying to help them, and their biggest fear is
that one day they will return to their office and discover
someone has cleaned up all the piles and put things into
their “proper” places. Do that and the underlying order is
lost: “Please don’t try to clean up my desk,” they beg, “because if you do, it will make it impossible for me to find anything.” Despite the chaotic appearance, there is an underlying structure that only they are aware of. How does one cope with such apparent disorder? Why can’t things be simple, like The Three Caballeros singing in wonderful harmony as shown in Figure 3 - The Three Caballeros singing? The answer lies in the phrase “underlying structure and abstraction”. Once the structure is revealed and understood, with the right amount of abstraction, the complexity fades away.
Figure 2 - Al Gore at his desk
Figure 3 - The Three Caballeros singing
So it is with our data center technology, and
we will see that this is also one of the many
attributes of the SDDC. The reason our
technology is so complex is that life in
general is complex. Computer systems and
data centers are complex, as will be
discussed. For the average individual, an
airplane cockpit is complex as shown in
Figure 4 - Cockpit of the Boeing 787. It is
complex because it contains all that is
required to control the plane safely, navigate
the airline routes with accuracy, keep to the
schedule while making the flight comfortable for the passengers, and cope with whatever
mishap might occur enroute.
It is important that one distinguish between complex and complicated. The word “complex”
really describes the state of the world. The word “complicated” describes a state of mind. The
dictionary definition of “complex” suggests things with many intricate and interrelated parts. The
definition for “complicated” includes a secondary meaning: “confusing”.
The word “complex” is used to describe the state of the world, the tasks we do, and the tools we
use to deal with them. One can use the word “confused” to describe the psychological state of a
person in attempting to understand, use, or interact with something in the world. Modern
technology can be complex, but complexity in itself is neither good nor bad; it is confusion that is
bad. Forget the complaints against complexity; instead, complain about confusion. We should
complain about anything that makes us feel helpless, powerless in the face of mysterious forces
that take away control and understanding.
Figure 4 - Cockpit of the Boeing 787
Figure 5 - Confusion vs. Complicated (network protocols, "Does this confuse you?", vs. the software stack, "It's complicated, but it has structure")
For example, some of us are Cisco certified network engineers (CCNE) and we are very proud
of that fact. The reason we are proud is that it is very difficult to become certified because
networking, in general, is very
complex.
The Internet is based on a relatively simple protocol, IP. Networks, however, are very complicated, and many will tell you that they are also confusing. Figure 5 - Confusion vs. Complicated compares the network protocols with the application developer stack. Which one looks more confusing? As discussed, the Internet was built on a simple premise, but along the way we added all of these other protocols, which really complicated things. Many believe that today’s network is an artifact, not a discipline! For me, the network set of protocols is the clear winner when it comes to complexity and confusion! The software stack does not seem to be as confusing, and there is a reason why. Note the term “software” in the software stack. We will find out why software structures really make things less complicated.1
1 Jumbled protocol picture source: Nick McKeown, Stanford
Hitting Complexity Head On and the Rise of Orderliness
This section will explain why there is so much complexity in IT. It is a bit technical, so if you prefer to skip the nuts and bolts, you may want to jump to the next section. To understand why IT is complex, one needs to consider three things: Glass’s Law, emergent complexity, and increasing entropy in technology.
Introduced by Robert Glass in his book “Facts and Fallacies of Software Engineering”, Glass’s Law holds that for every 25% increase in functionality that vendors add to their devices, there is a 400% multiplying effect in terms of the complexity of that system. Many argue that just because hardware and software engineers can build a new feature or function, that does not necessarily make it a good idea. Adding functionality can often be a terrible idea, given the downstream impact in terms of complexity that the already over-burdened IT/network operations staff has to manage. This situation is only likely to be exacerbated by unknown multiples in the near future, if one considers the following trends:
1. Traffic – Global IP networking traffic will quadruple by 2015 from its current levels, reaching nearly a zettabyte of data.
2. FLASH – High-performance flash memory-based storage prices have come down dramatically in the last 20 years. According to Gartner research, one gigabyte of flash memory cost nearly $8,000 in 1997. Today, as of this publication's release, the same gigabyte costs 25 cents.
3. I/O throughput – Gartner states that by 2016, I/O throughput per rack in a data center will increase from current levels by an astounding 25x.
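As a rough check on the flash price trend above, here is a short calculation of the implied average annual price decline, using the figures quoted in the text ($8,000 per gigabyte in 1997, 25 cents today); the 16-year span is an assumption based on this article's 2013 publication date:

```python
# Implied compound annual decline in flash price per GB,
# from the Gartner figures quoted above (assumed 1997 -> 2013 span).
price_1997 = 8000.00   # USD per gigabyte in 1997
price_2013 = 0.25      # USD per gigabyte today
years = 2013 - 1997

annual_factor = (price_2013 / price_1997) ** (1 / years)
annual_decline = 1 - annual_factor

print(f"Average annual price decline: {annual_decline:.1%}")
```

At roughly 48% per year, the quoted figures imply the price per gigabyte has halved almost every year for a decade and a half.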
Figure 6 – Disorder and Complexity vs. time (axes: order/simplicity to disorder/complexity vs. time, from nascent to mature; curves: increasing entropy (disorder) and the evolution of complexity, annotated with the IP protocol, network protocols, and the SDDC)
Many believe that over-engineering and innovation are two very different things. Glass’s Law
endorses this belief.
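To see why the ratio quoted above is so alarming when compounded, here is a minimal sketch that applies it across several successive feature releases; the starting values are arbitrary illustrations:

```python
# Compounding Glass's Law as quoted above: each 25% functionality
# increase multiplies system complexity by 4.
functionality = 100.0   # arbitrary starting units (hypothetical)
complexity = 1.0        # relative complexity index (hypothetical)

for release in range(1, 4):  # three successive feature releases
    functionality *= 1.25
    complexity *= 4
    print(f"Release {release}: functionality {functionality:.0f}, "
          f"complexity x{complexity:.0f}")
# After three releases: ~95% more functionality, 64x the complexity.
```

Three modest feature releases nearly double the functionality but leave the operations staff with a system sixty-four times as complex, which is exactly the over-engineering trap the text warns about.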
The evolution of complexity within the data center is outlined in Figure 6. The Second Law of Thermodynamics2 states that the entropy of any closed system tends to increase with time until it reaches a maximum value, and that the entropy of the system never decreases. Here, “entropy” measures how “random”, “unstructured”, or “disordered” a system is. The point of interest in Figure 6 – Disorder and Complexity vs. time is that as emergent technology is added, the system becomes more entropic; as it matures and the environment becomes more homogeneous, the system moves back toward a more predictable and controllable world. In other words, at some point, order and simplicity return, driven by the demand for the system to continue to operate3. The red line shows the ever-increasing entropy, or complexity, of the system. The blue line, describing the varying levels of order and disorder, shows that the disorder of the system at first increases at a much faster rate; then, at some level of disorder, the system becomes more orderly in order to function properly. This increasing entropy, or disorder, is a natural process as data centers address the ever-growing task of supporting the needs of this growing digital universe.
The only real solution is to utilize the technologies and methodologies within the SDDC to
address the evolving complexity appropriately. Orderliness through abstraction is the key as will
be discussed in the next section. So, unless we lose interest in our smartphones and web
browsing, entropy (disorder) will continue. Therefore, a googol years from now, after the last
black holes have sputtered away in bursts of Hawking radiation, the Universe will be just a high-
entropy soup of low-energy particles. However, today, in between, the Universe contains
interesting structures such as galaxies and brains, hot-dog-shaped novelty vehicles, and data
centers. So, dealing with IT complexity will be here for a while.
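The "entropy as disorder" analogy can be made concrete with Shannon entropy, the information-theoretic cousin of the thermodynamic quantity discussed above: a data center where every workload lands on its own one-off stack has a near-uniform, high-entropy configuration distribution, while a homogenized, abstracted pool concentrates probability on a few standard configurations. A minimal sketch (the two distributions are hypothetical):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Eight workloads, each on its own hand-built stack: uniform, maximal disorder.
siloed = [1 / 8] * 8

# The same workloads consolidated onto two standard templates: far more orderly.
standardized = [0.75, 0.25]

print(f"siloed:       {shannon_entropy(siloed):.2f} bits")
print(f"standardized: {shannon_entropy(standardized):.2f} bits")
```

The siloed distribution measures 3.00 bits against 0.81 bits for the standardized one; in this sense, abstraction and homogenization literally lower the entropy an operator has to manage.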
Coping with Complexity
Whether a SAN administrator, storage manager, application developer, or any other IT practitioner, how do “we” deal with complexity? Many believe that the keys to coping with
2 http://en.wikipedia.org/wiki/Second_law_of_thermodynamics
3 http://www.scottaaronson.com/blog/?p=762
complexity are found in two aspects of understanding. The questions that need to be answered
are:
1. Is it the design of the technology or process itself that determines its understandability?
Does it have an underlying logic to it, a foundation that, once mastered, makes
everything fall into place?
2. Is our own set of abilities and skills adequate? Have we taken the time and effort to
understand and master the structure?
Understandability and understanding are the two critical keys to mastery. Things we understand
are no longer complicated, no longer confusing. The
Boeing 787 airplane cockpit looks complex but is
understandable. It reflects the required complexity of a
technological device—the modern commercial jet aircraft—
tamed through three attributes: intelligent organization,
excellent modularization and structure, and the training of
the pilot. Al Gore also has been known to see the big
picture. With his work in environmental issues, his ability to
abstract ideas is well known. The SDDC does have a
limited impact on the environment, but as we will see, the
software defined data center’s core strength is in
the abstraction of the underlying infrastructure.
Lastly, as shown in Figure 7, the Amigos—Donald
Duck, José Carioca, the Brazilian parrot, and the
Mexican rooster Panchito Pistoles—so
marvelously depicted in Walt Disney’s full length
picture in 1944, are the Server, Network, and
Storage of this story. Their wonderful singing and
ability to work together in wonderful orchestration
allows beautiful things to happen. Using
orchestration and automation will drive even
further efficiency and responsiveness in the
SDDC. A part of what the SDDC is doing is
putting familiar application constructs into nice
virtual containers, and reconstructing the surrounding orchestration around new pools of resources, thus making it simpler and less complex.
Figure 7 - Donald the Server, José the Network, and Panchito the Storage
Figure 8 - Siloed Applications
The idea is simple yet powerful; take other familiar infrastructure entities and re-envision them
as virtualized capabilities that can be dynamically invoked and orchestrated. Now you are not
only expressing applications as virtualized constructs, you are also expressing the infrastructure
services they consume as virtualized constructs. Another way of looking at software-defined
data centers is to contrast against the familiar, historical approach. It is common, even today, to
walk into an enterprise IT environment and see multiple hard-wired stacks, one for each set of
applications that IT has to run; the SAP stack, the Exchange stack, etc. as shown in Figure 8.
There may be some resource commonality and pooling going on at different layers, but typically,
it’s neither ubiquitous nor architected. In this traditional model, resources, hardware, and
software capabilities are tightly coupled to the application objects they support. It seems
apparent that this approach is not ideal from either an efficiency or agility perspective, but
historically, this is the way data centers have evolved.
Compare that perspective with a simplified view of a software-defined data center as shown in
Figure 9 - Single Logical
Abstracted Pool. In this
model, the goal is to create
a single logical abstracted
pool of dynamically
instantiated and
orchestrated infrastructure
resources that are available
to “all” workloads,
regardless of their individual
characteristics as shown in Figure 9.
For example, one workload requires great transactional performance, advanced data protection, and advanced security. Another workload requires sufficient bandwidth, reasonable data protection, and moderate security. Still another workload starts out requiring one set of resources, but grows to a point where it needs something very different. Rather than building static stacks for each requirement, the goal is to dynamically provision the service levels and infrastructure resources as pure software constructs rather than specialized hardware and software.
Figure 9 - Single Logical Abstracted Pool
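The contrast between static per-application stacks and a single abstracted pool can be sketched in a few lines of Python; the service-level names, tiers, and matching logic below are hypothetical illustrations of the idea, not any vendor's API:

```python
# Hypothetical sketch: workloads declare service levels, and one shared pool
# provisions them dynamically instead of each getting a hard-wired stack.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    performance: str      # e.g. "transactional" or "bulk"
    protection: str       # e.g. "advanced" or "reasonable"
    security: str         # e.g. "advanced" or "moderate"

class AbstractedPool:
    """One logical pool serving all workloads, regardless of their needs."""
    def provision(self, w: Workload) -> dict:
        # Service levels are realized as software constructs by the pool,
        # not by a dedicated hardware stack per application.
        return {"workload": w.name,
                "tier": "flash" if w.performance == "transactional" else "capacity",
                "replicas": 3 if w.protection == "advanced" else 2,
                "encrypted": w.security == "advanced"}

pool = AbstractedPool()
oltp = Workload("orders-db", "transactional", "advanced", "advanced")
archive = Workload("log-archive", "bulk", "reasonable", "moderate")
print(pool.provision(oltp))
print(pool.provision(archive))
```

The point of the sketch is that both workloads, with very different requirements, are served by the same `provision` call against the same pool; changing a workload's declared service levels changes what it gets, with no new stack built.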
Operations Are Being Transformed
There is wide appreciation that the operations model for any cloud is considerably different from
the legacy one that preceded it. Not only is it distinct, it is the foundational basis for much of the
efficiency and agility advantages that can result from cloud infrastructure. Many of these
resource domains (server, storage, network, security) have rich, robust management
environments of their own, many using element managers. The predominant direction has been
to expose underlying resource management domains upwards to a higher-level integrator (such
as vCloud Director) to surface capabilities, drive workflows, and other management functions.
Compared with previous approaches, it is a vast improvement.
However, there is always room for improvement. If we go back to our SDDC construct, one
aspect of what's really happening here is that a good portion of the embedded intelligence and
functionality and the corresponding management constructs of each of these domains gets
abstracted and virtualized as well. This deeper abstraction of function creates the potential for
newer orchestration and operations constructs that are even more efficient, optimized and
responsive than the previous model. Examples might include dynamically realigning storage
service delivery levels against the same pool of assets, or dynamically reconfiguring network
assets based on transient workloads. The orchestration and the resources being orchestrated
are being brought even closer together through deeper levels of abstraction, creating the
capability for superior operational results: efficiency and agility. We will find that this is done
through enhancements in command and control. We will also find out how.
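The "dynamically realigning storage service delivery levels" example above is, at heart, a reconciliation loop: compare the service level each workload should receive against what the pool currently delivers, and issue actions to close the gap. A hypothetical sketch (the workload names and level names are illustrative):

```python
# Hypothetical reconciliation loop: desired vs. delivered service levels.
desired = {"orders-db": "gold", "log-archive": "bronze", "analytics": "silver"}
delivered = {"orders-db": "silver", "log-archive": "bronze"}

def reconcile(desired, delivered):
    """Return the realignment actions an orchestrator would issue."""
    actions = []
    for workload, level in desired.items():
        current = delivered.get(workload)
        if current is None:
            actions.append(f"provision {workload} at {level}")
        elif current != level:
            actions.append(f"realign {workload}: {current} -> {level}")
    return actions

for action in reconcile(desired, delivered):
    print(action)
# realign orders-db: silver -> gold
# provision analytics at silver
```

Because the loop acts on the same abstracted pool of assets rather than on per-application stacks, realignment is a pure software operation, which is where the efficiency and agility gains described above come from.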
Software Defined Storage
Software-defined storage is about creating key abstractions that "wrap" or abstract today's
world of purpose-built storage arrays and internal devices, and provide a consistent set of
storage services regardless of the underlying storage device. The goal is to create a single
control plane (passive reporting first, dynamic reconfiguration to follow) that is largely agnostic
to the underlying storage device, which is typically already intelligent.
The next step is to create abstracted data-plane presentations of those resource pools of the
familiar block, file, and object, regardless of whether the underlying device supported it natively.
A key goal would be to expose those capabilities via RESTful APIs to the orchestration layer (VMware's vCloud Suite, for example), which would then supply the necessary coordination and orchestration, delivering composite infrastructure services as well as presentation methodologies to the user.
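The idea of a single control plane that is largely agnostic to the underlying device can be sketched as a thin abstraction layer; the device classes, capability names, and report format below are hypothetical illustrations, not an actual array API:

```python
# Hypothetical sketch of a storage control plane that wraps dissimilar
# devices and presents one consistent reporting interface.
class StorageDevice:
    def __init__(self, name, native_presentations, capacity_gb):
        self.name = name
        self.native = set(native_presentations)   # e.g. {"block"}
        self.capacity_gb = capacity_gb

class ControlPlane:
    """Passive reporting first; dynamic reconfiguration would follow."""
    def __init__(self, devices):
        self.devices = devices

    def report(self):
        # One uniform view, regardless of what each device supports natively;
        # block, file, and object are presented via gateways if not native.
        return [{"device": d.name,
                 "capacity_gb": d.capacity_gb,
                 "presentations": sorted({"block", "file", "object"}),
                 "native": sorted(d.native)}
                for d in self.devices]

cp = ControlPlane([StorageDevice("array-a", ["block"], 50_000),
                   StorageDevice("filer-b", ["file"], 20_000)])
for entry in cp.report():
    print(entry)
```

The consumer of `report` never learns which backend is a block array and which is a filer; that is the sense in which the control plane is agnostic to the underlying, typically already intelligent, device.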
Inevitably, as more intelligence is found in today's storage arrays, advanced capabilities will most likely be implemented as pure virtual machines, offering many of the key features of virtualization, such as scalability, movement, elasticity, and availability, if desired. Virtual Storage Appliances (VSAs) are the start of this evolution, but there is much more that needs to be done.
One way of describing the trend is that traditional storage array functionality can "float upwards"
via a virtual machine running on a server farm. Conversely, an application function that is data
intensive (and not CPU intensive) that is encapsulated in a virtual machine can conceptually
"float downward" into the array itself, providing a potentially superior level of performance for
specific tasks. In both cases, virtualization serves as a convenient abstraction that enables
functionality to be run where it belongs, and not be rooted to a specific type of hardware device.
We will find out how this is being done through application-aware methodologies in subsequent sections.
Enterprise Applications are being Transformed
Enterprise applications are usually nothing more than instantiations of business logic. There is a
better way to run a business process, and the enterprise app embodies that thinking. If the
business process being discussed does not lead to a particular competitive advantage in the
business model, the choice is usually around packaged software, consumed via a traditional
App or SaaS fashion. However, the line of thinking becomes quite different when the business
process of choice is intended to contribute to some sort of differentiated competitive advantage.
Enter the world of application platforms and app factories. Indeed, ask most CIOs about which
part of their operation gets the lion's share of their attention, and it is inevitably the application
side of the organization. In many ways, that is where much of the business value of IT is
created. However, there has been a noticeable shift in the app world over the last few years.
Historically, the primary rationale for enterprise applications was improved efficiency: here is
what we are spending on process X today; here is what we could be saving if we invested in
automating process X. There is nothing really wrong with that approach, and it continues to
drive a fair share of enterprise application spending across IT.
However, the new wave of enterprise application development seems to have an entirely new
motivation; it is called “value creation”. It is less about doing familiar things better, it is about
doing entirely new things. These newer applications are quite often natively mobile, and not a
mere adaptation of a familiar desktop or web presentation. They are inherently social and
collaborative, not by linking to familiar external social services, but by embracing social,
community, and workflow into the application process itself. The more advanced examples
support real-time decision-making by users, hence a strong preference to deliver analytics in some capacity as part of the decision-making process being supported. Let us not forget a
sub-group that is starting to harness the power of big data behind those analytical capabilities.
In addition, the application is becoming more infrastructure-aware. As we will see, this
infrastructure-aware paradigm will be key in the SDDC. It is a new world! We will find out how
this is being done.
Networks are Being Transformed
Many believe that traditional network architectures are not equipped to meet the requirements of
today’s enterprises, carriers, and end users. Thanks to a broad industry effort, spearheaded by
the Open Networking Foundation (ONF) and Software-Defined Networking (SDN) standards,
networking architectures are being transformed.
In the SDN architecture, the control and data planes are decoupled, network intelligence and
state are logically centralized, and the underlying network infrastructure is abstracted from the
applications. As a result, enterprises and carriers gain unprecedented programmability,
automation, and network control, enabling them to build highly scalable, flexible networks that
readily adapt to changing business needs.
The ONF is a non-profit industry consortium that is leading the advancement of SDN and
standardizing critical elements of the SDN architecture such as the OpenFlow protocol, which
structures communication between the control and data planes of supported network devices.
OpenFlow is the first standard interface designed specifically for SDN, providing high-
performance, granular traffic control across multiple vendors’ network devices. OpenFlow-based
SDN is currently being rolled out in a variety of networking devices and software, delivering
substantial benefits to both enterprises and carriers, including:
1. Centralized management and control of networking devices from multiple vendors
2. Improved automation and management by using common APIs to abstract the
underlying networking details from the orchestration and provisioning systems and
applications
3. Rapid innovation through the ability to deliver new network capabilities and services
without the need to configure individual devices or wait for vendor releases
4. Programmability by operators, enterprises, independent software vendors, and users
(not just equipment manufacturers) using common programming environments, giving
all parties new opportunities to drive revenue and differentiation
5. Increased network reliability and security as a result of centralized and automated
management of network devices, uniform policy enforcement, and fewer configuration
errors
6. More granular network control with the ability to apply comprehensive and wide-ranging
policies at the session, user, device, and application levels
7. Better end-user experience as applications exploit centralized network state information
to seamlessly adapt network behavior to user needs
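The OpenFlow idea underlying these benefits — a controller pushing prioritized match/action rules down into switch flow tables — can be sketched in a few lines. This is a minimal illustration only; the class names and rule format are invented and are not the actual OpenFlow wire protocol:

```python
# Minimal sketch of OpenFlow-style match/action flow tables.
# Names and structures are illustrative, not the OpenFlow wire format.

class FlowEntry:
    def __init__(self, match, action, priority=0):
        self.match = match        # header fields to match, e.g. {"dst_ip": "10.0.0.2"}
        self.action = action      # e.g. ("output", port) or ("drop",)
        self.priority = priority

class FlowTable:
    def __init__(self):
        self.entries = []

    def install(self, entry):
        """The controller pushes rules down; the switch just stores them."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, packet):
        """Data-plane lookup: first (highest-priority) matching entry wins."""
        for e in self.entries:
            if all(packet.get(k) == v for k, v in e.match.items()):
                return e.action
        return ("controller",)    # table miss: punt to the controller

table = FlowTable()
table.install(FlowEntry({"dst_ip": "10.0.0.2"}, ("output", 3), priority=10))
table.install(FlowEntry({"dst_ip": "10.0.0.9"}, ("drop",), priority=10))

print(table.lookup({"dst_ip": "10.0.0.2", "src_ip": "10.0.0.1"}))  # ('output', 3)
print(table.lookup({"dst_ip": "192.168.1.1"}))                     # ('controller',)
```

The key point of the pattern is that the switch holds only passive state; all decision logic lives in the controller that installs the entries.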
SDN is a dynamic and flexible network architecture that protects existing investments while
future-proofing the network. With SDN, today’s static network can evolve into an extensible
service delivery platform capable of responding rapidly to changing business, end-user, and
market needs.
The explosion of mobile devices and content, server virtualization, and advent of cloud services
are among the trends driving the networking industry to reexamine traditional network
architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches
arranged in a tree structure. This design made sense when client-server computing was
dominant, but such a static architecture is ill-suited to the dynamic computing and storage
needs of today’s enterprise data centers, campuses, and carrier environments. Within the
enterprise data center, traffic patterns have changed significantly. In contrast to client-server
applications where the bulk of the communication occurs between one client and one server,
today’s applications access different databases and servers, creating a flurry of “east-west”
machine-to-machine traffic before returning data to the end user device in the classic “north-
south” traffic pattern.
At the same time, users are changing network traffic patterns as they push for access to
corporate content and applications from any type of device (including their own), connecting
from anywhere, at any time. Finally, many enterprise data center managers are contemplating a
utility computing model, which might include a private cloud, public cloud, or some mix of both,
resulting in additional traffic across the wide area network.
That is what I call Rigidity!
The Consumerization of IT
Users are increasingly employing mobile personal devices such as smartphones, tablets, and
notebooks to access the corporate network. IT is under pressure to accommodate these
personal devices in a fine-grained manner while protecting corporate data and intellectual
property and meeting compliance mandates through an SDDC construct. Examples include:
• The rise of cloud services: Enterprises have enthusiastically embraced both public and
private cloud services, resulting in unprecedented growth of these services. Enterprise
business units now want the agility to access applications, infrastructure, and other IT
resources on demand and à la carte. To add to the complexity, IT’s planning for cloud
services must be done in an environment of increased security, compliance, and
auditing requirements, along with business reorganizations, consolidations, and mergers
that can change assumptions overnight. Providing self-service provisioning, whether in a
private or public cloud, requires elastic scaling of computing, storage, and network
resources, ideally from a common viewpoint and with a common suite of tools.
• “Big data” means more bandwidth: Handling today’s “big data” or mega datasets
requires massive parallel processing on thousands of servers, all of which need direct
connections to each other. The rise of mega datasets is fueling a constant demand for
additional network capacity in the data center. Operators of hyperscale data center
networks face the daunting task of scaling the network to previously unimaginable size
while maintaining any-to-any connectivity.
The Network Caballero
Meeting current network market requirements is difficult with traditional network architectures for
many reasons. Faced with flat or reduced budgets, enterprise IT departments are trying to
squeeze the most from their networks using device-level management tools and manual
processes. Carriers face similar challenges as demand for mobility and bandwidth bursts.
Profits are being eroded by escalating capital equipment costs and flat or declining revenue.
Existing network architectures were not designed to meet the requirements of today’s users,
enterprises, and carriers; rather, network designers are constrained by the limitations of current
networks, which include:
• Complexity that leads to rigidity: Networking technology, to date, has consisted largely of
discrete sets of protocols designed to connect hosts reliably over arbitrary distances, link
speeds, and topologies.
To meet business and technical needs over the last few decades, the industry has
evolved networking protocols to deliver higher performance and reliability, broader
connectivity, and more stringent security. Protocols tend to be “defined” in isolation, with
each solving a specific problem and without the benefit of any fundamental abstractions.
This has resulted in one of the primary limitations of today’s networks: complexity. For
example, to add or move any device, IT must touch multiple switches, routers, firewalls,
Web authentication portals, and so on, and update ACLs, VLANs, quality of service (QoS),
and other protocol-based mechanisms using device-level management tools. In addition,
network topology, vendor switch models, and software versions all must be taken into
account. Due to this complexity, today’s networks are relatively static as IT seeks to
minimize the risk of service disruption.
The static nature of networks is in stark contrast to the dynamic nature of today’s server
environment, where server virtualization has greatly increased the number of hosts
requiring network connectivity and has fundamentally altered assumptions about the
physical location of hosts. Prior to virtualization, applications resided on a single server
and primarily exchanged traffic with select clients. Today, applications are distributed across
multiple virtual machines (VMs), which exchange traffic flows with each other. VMs
migrate to optimize and rebalance server workloads, causing the physical end-points of
existing flows to change (sometimes rapidly) over time. VM migration challenges many
aspects of traditional networking, from addressing schemes and namespaces to the
basic notion of a segmented, routing-based design.
In addition to adopting virtualization technologies, many enterprises today operate an IP-
converged network for voice, data, and video traffic. While existing networks can provide
differentiated QoS levels for different applications, the provisioning of those resources is
highly manual. IT must configure each vendor’s equipment separately, and adjust
parameters such as network bandwidth and QoS on a per-session, per-application
basis. Because of its static nature, the network cannot dynamically adapt to changing
traffic, application, and user demands.
• Inconsistent policies: To implement a network-wide policy, IT may have to configure
thousands of devices and mechanisms. For example, every time a new virtual machine
is brought up, it can take hours, in some cases days, for IT to reconfigure ACLs across
the entire network. The complexity of today’s networks makes it very difficult for IT to
apply a consistent set of access, security, QoS, and other policies to increasingly mobile
users, which leaves the enterprise vulnerable to security breaches, noncompliance with
regulations, and other negative consequences.
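The contrast with centralized control is easy to see in a sketch: a single controller applies one network-wide rule to every device, so no switch is missed when a new VM comes up. All class and rule names here are hypothetical:

```python
# Sketch: one network-wide ACL rule compiled to every device by a central
# controller, instead of hand-editing thousands of switches. Illustrative only.

class Switch:
    def __init__(self, name):
        self.name = name
        self.acls = []            # per-device rule store

def apply_policy_everywhere(switches, acl_rule):
    """A central controller enforces one policy uniformly across the fabric."""
    for sw in switches:
        sw.acls.append(acl_rule)

fabric = [Switch(f"leaf-{i}") for i in range(4)]
apply_policy_everywhere(fabric, {"deny": "tcp/23"})   # block telnet network-wide

# Every device now carries the identical rule, in one operation.
print(all({"deny": "tcp/23"} in sw.acls for sw in fabric))  # True
```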
• Inability to scale: As demands on the data center rapidly grow, so too must the network
grow. However, the network becomes vastly more complex with the addition of hundreds
or thousands of network devices that must be configured and managed. IT has also
relied on link oversubscription to scale the network, based on predictable traffic patterns.
However, in today’s virtualized data centers, traffic patterns are incredibly dynamic and
therefore unpredictable. Mega-operators, such as Google, Yahoo!, and Facebook, face
even more daunting scalability challenges. These service providers employ large-scale
parallel processing algorithms and associated datasets across their entire computing
pool. As the scope of end-user applications increases (for example, crawling and
indexing the entire World Wide Web to instantly return search results to users), the
number of computing elements explodes and data-set exchanges among compute
nodes can reach petabytes. These companies need so-called hyperscale networks that
can provide high-performance, low-cost connectivity among hundreds of thousands or
more of physical servers. Such scaling cannot be done with manual configuration.
To stay competitive, carriers must deliver higher value, better-differentiated services to
customers. Multi-tenancy further complicates their task, as the network must serve
groups of users with different applications and different performance needs. Key
operations that appear relatively straightforward, such as steering a customer’s traffic
flows to provide customized performance control or on-demand delivery, are very
complex to implement with existing networks, especially at carrier scale. They require
specialized devices at the network edge, thus increasing capital and operational
expenditure as well as time-to-market to introduce new services.
• Vendor dependence: Carriers and enterprises seek to deploy new capabilities and
services in rapid response to changing business needs or user demands. However, their
ability to respond is hindered by vendors’ equipment product cycles, which can stretch to
years or more. A lack of standards and open interfaces limits the ability of network
operators to tailor the network to their individual environments. This mismatch between
market requirements and network capabilities has brought the industry to a tipping point.
In response, the industry has created the Software-Defined Networking (SDN) architecture and
is developing associated standards with the goal of making network design more of a discipline
to achieve the SDDC vision.
Today’s Network - An Artifact, Not a Discipline
It is important to outline the genesis of the current IP network. Many argue that today’s IP
network is complex, but it is really based on simple principles. As the IP hourglass protocol
stack shown in Figure 10 illustrates, the key to the Internet’s success is a layered architecture.
Applications today are built on reliable or unreliable transports, on top of best-effort global and
local packet delivery and, finally, the physical transfer of bits. The neck of the hourglass is the
relatively simple Internet Protocol, the fundamental layer to which all other protocols are tied.
This structure permits independent but compatible innovation at each layer, and many believe
it is an amazing success.
However, many also believe the Internet infrastructure is an academic disappointment, and the
reason is the set of Internet protocols. In other fields, such as operating system and database
design, as well as programming languages and IDEs (integrated development environments),
basic principles are taught that, when applied well, yield easily managed solutions that continue
to evolve. Networking, on the other hand, is composed of many protocols that have been
extremely difficult to manage and that evolve very slowly. Many believe that networking is not a
discipline but an artifact of various decoupled protocol definitions. Networks built on artifacts
are a problem because networking lags behind server and storage development: networks
used to be simple, but new control requirements have led to great complexity. Fortunately, the
infrastructure still works, but only because of our collective ability to master complexity. This
ability to master complexity can be a blessing and a curse!
Best Practice – Extract simplicity and let go of mastering complexity
Networking architects have embraced mastering complexity. However, the ability to master
complexity is not the same as the ability to extract simplicity. As with many complex systems,
when first getting a system to work, complexity is part of the equation and focusing on
mastering it is required. The next step, however, is to make the system easy to use and
understand, which requires a focus on extracting simplicity.
Figure 10 - IP Hourglass Protocol Stack
Many argue that one will never succeed in extracting simplicity without recognizing it as
different from mastering complexity. Networking has never made the distinction and therefore
has never made the transition. Often, networking designers continue trying to master
complexity, with little emphasis on extracting simplicity from the control plane. Extracting
simplicity builds intellectual foundations and, as such, the outcome creates a discipline. This
transition has many precedents. In programming, machine languages have no abstractions, so
mastering complexity is crucial. Higher-level languages, operating systems, file systems, and
virtual memory abstract logical functions and data types. In today’s modern languages, with
object orientation and garbage collection, abstractions are the key to extracting simplicity.
Best Practice – Embrace the power of abstraction
In the world of the software-defined data center, it is important to embrace the power of
abstraction. As shown in the figure below, abstractions lead to standard interfaces, which in
turn lead to modularity and all of the efficiencies that go along with it.
Figure - The Power of Abstraction (Abstractions → Interfaces → Modularity)
Barbara Liskov, a leader in structured programming, states that “Modularity based on
abstraction is the way things get done”4. For example, in programming, what if programmers
had to specify where each bit was stored, explicitly deal with all internal communication errors,
and work in a language with limited expressiveness? Programmers would redefine the problem
by defining a higher-level abstraction for memory, building on reliable communication
abstractions, and using a more general language. In networking today, abstractions are limited.
The IP network layers deal only with the data plane, and there are no powerful control plane
abstractions! How do we find these abstractions? First, one needs to define the problem and
then decompose it. The attributes of the current network control plane and the requirements of
the next-generation network control plane are shown in Table 1. There is a need to develop a
high level of abstraction that simplifies configuration, allows for distributed state, and creates a
general forwarding model. A distributed state abstraction shields mechanisms from the
variances of distributed state while still allowing access to that state. Part of what comes out of
this is a natural abstraction model with a global network view that can be annotated on a
network graph provided through an API. The control mechanism can then program high-level
interfaces using this API and standard shortest-path algorithms such as Dijkstra or
Bellman-Ford.
4 http://www.znu.ac.ir/members/afsharchim/lectures/p50-liskov.pdf
Table 1 - Network Control Plane Attributes
Current Network Control Plane Attributes | New SDDC Network Control Plane Attributes
---------------------------------------- | -----------------------------------------
Computes the configuration of each physical device (e.g., forwarding tables, ACLs) | Needs an abstraction that simplifies configuration
Operates without communication guarantees | Needs an abstraction for distributed state
Operates within a given network-level protocol, with minimal interaction with the layer above or below it | Needs an abstraction for a general forwarding model
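As an illustration of programming against that global network view, a controller-side path computation might look like the following sketch, which runs Dijkstra's algorithm over an annotated topology graph. The graph format is an assumption for illustration, not any specific controller API:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path over the controller's global network view.
    graph: {node: {neighbor: link_cost}}, as annotated via a (hypothetical) API."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from dst to src to reconstruct the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

topology = {                      # global view: switches and link costs
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s3": 1, "s4": 5},
    "s3": {"s4": 1},
    "s4": {},
}
path, cost = dijkstra(topology, "s1", "s4")
print(path, cost)   # ['s1', 's2', 's3', 's4'] 3
```

Bellman-Ford would fit the same interface; a controller can swap algorithms without touching any switch, which is precisely the point of the abstraction.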
Network Evolution
The network, specifically the data center network, is an ever-evolving entity, as shown in the
graph5. Many disruptions have occurred over the last 20 years, from Storage Area Networking
(SAN) and InfiniBand to PCIe (Peripheral Component Interconnect Express) within the server
itself. The important thing to note in Figure 11 - Network Technologies over the years is that,
moving into the future, Ethernet will be the predominant protocol within the Software Defined
Data Center world.
VMware and Microsoft are both involved in developing tools that will eventually allow the world
to make a completely automated data center, which can be the basis for cloud solutions, be it
public, private, or hybrid. VMware calls the technology the Software-Defined Data Center
(SDDC) and is the genesis of the term. Microsoft and others speak of Software-Defined
Networking (SDN), believing that the network is the last piece of the data center puzzle that
needs automation. Computing, storage, and availability resources have been somewhat
automated for many years, but companies are still working on virtualizing and automating
networking and its associated policies and security. Before we continue, let us understand in
more detail, from a network perspective, what we have today.
5 http://opennetsummit.org/talks/ONS2012/recio-wed-enterprise.pdf
Figure 11 - Network Technologies over the years
Data and Control planes
The current network consists of two tightly coupled constructs for data transfer and control: the
data plane and the control plane, as shown in Figure 12 - Traditional Network Switch. The data
plane performs the processing and delivery of packets. Routing is based on state held in
routers and endpoints (IP, TCP, Ethernet, and so on), and the data plane operates on fast,
per-packet timescales. The control plane establishes the state in routers and determines how
and where packets are forwarded; its other functions include routing, traffic engineering, and
firewall state. Slow timescales, per control event, are typical for this plane, similar to other
management and control functions.
Drilling Down on the Network
Let us begin by taking a more detailed look at how traditional networking works. The most
important thing to notice in Figure 13 - Software Defined Network Switch Architecture is the
separate but linked control and data planes.
Figure 12 - Traditional Network Switch
Figure 13 - Software Defined Network Switch Architecture
Each plane has separate tasks that provide the
overall switching and routing functionality. The control plane is responsible for configuration of
the device and programming the paths that will be used for data flows. When you are managing
a switch, you are interacting with the control plane. Things like route tables and Spanning-Tree
Protocol (STP) are calculated in the control plane. This is done by accepting information frames
such as BPDUs or Hello messages and processing them to determine available paths. Once
these paths have been determined, they are pushed down to the data plane and typically stored
in hardware. The data plane then typically makes path decisions in hardware based on the
latest information provided by the control plane. This has traditionally been a very effective
method. The hardware decision-making process is very fast, reducing overall latency while the
control plane itself can handle the heavier processing and configuration requirements.
This construct is not without problems. One problem we will focus on is scalability. In order to
demonstrate the scalability issue, it is easiest to use Quality of Service (QoS) as an example.
QoS allows forwarding priority to be given to specific frames for scheduling purposes based on
characteristics in those frames. This allows network traffic to receive appropriate handling in
times of congestion. For example, latency-sensitive voice and video traffic is typically
engineered for high priority to ensure the best user experience. Traffic prioritization is typically
based on tags in the frame known as Class of Service (CoS) and/or Differentiated Services
Code Point (DSCP). These tags must be marked consistently for frames entering the network
and rules must then be applied consistently for their treatment on the network. This becomes
cumbersome in a traditional multi-switch network because the configuration must be duplicated
in some fashion on each individual switching device.
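The duplication problem can be sketched as follows: the same marking rules must be copied to every switch, and a single mis-typed device silently breaks the QoS contract. DSCP 46 (EF) for voice and 34 (AF41) for video are standard code points; the data structures are illustrative:

```python
# Sketch: per-switch QoS marking rules that must be kept identical everywhere.
# DSCP 46 (EF) and 34 (AF41) are real code points; everything else is invented.
MARKING_RULES = {"voice": 46, "video": 34, "best_effort": 0}

def classify(frame, rules):
    """Each switch tags ingress frames; every switch needs the same rules."""
    return rules.get(frame["traffic_type"], 0)

# The traditional pain point: N switches, N hand-maintained copies of the config.
switch_configs = {f"sw{i}": dict(MARKING_RULES) for i in range(3)}
switch_configs["sw2"]["voice"] = 26        # one mis-typed device...

dscp = [classify({"traffic_type": "voice"}, cfg)
        for cfg in switch_configs.values()]
print(dscp)   # [46, 46, 26] -- inconsistent marking across the network
```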
Another example of the current administrative challenges: consider each port in the network a
management point, meaning each port must be individually configured. This is both time
consuming and cumbersome. Additional challenges exist in properly classifying data and
routing traffic. A good example is two different traffic types, iSCSI and voice. iSCSI is storage
traffic, typically a full-size packet or even a jumbo frame, while voice data is typically
transmitted in a very small packet. They also have different requirements: voice is very
latency-sensitive in order to maintain call quality, while iSCSI is less latency-sensitive but
benefits from more bandwidth. Traditional networks have few if any tools to differentiate these
traffic types and send them down separate paths that are beneficial to both.
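A controller with a global view could steer these two traffic types onto different paths. A minimal sketch, in which the path names and metrics are invented:

```python
# Sketch: choosing different fabric paths per traffic class.
# Voice wants the lowest-latency path; iSCSI wants the highest-bandwidth one.
PATHS = [
    {"name": "path-A", "latency_ms": 2, "bandwidth_gbps": 10},
    {"name": "path-B", "latency_ms": 9, "bandwidth_gbps": 40},
]

def pick_path(traffic_type, paths):
    if traffic_type == "voice":                 # latency-sensitive
        return min(paths, key=lambda p: p["latency_ms"])
    if traffic_type == "iscsi":                 # bandwidth-hungry
        return max(paths, key=lambda p: p["bandwidth_gbps"])
    return paths[0]                             # default path for everything else

print(pick_path("voice", PATHS)["name"])   # path-A
print(pick_path("iscsi", PATHS)["name"])   # path-B
```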
Software-Defined Networking (SDN)
First, from a high level, what challenges does Software-Defined Networking try to solve? The
four key elements are:
1. Ability to manage the forwarding of frames/packets and applying policy
2. Ability to perform this at scale in a dynamic fashion
3. Ability to be programmed
4. Visibility and manageability through centralized control
SDN’s primary goals are not about network virtualization. They are about changing how we
design, build, and operate networks to achieve business agility. Allowing applications to be
rolled out in hours instead of weeks, and potentially using lower-cost hardware since each
device will not be doing its own control functions are among the many goals of SDN. This,
therefore, could be a big market transition!
Software-Defined Networking (SDN) is an emerging network architecture in which network
control is decoupled from forwarding and is directly programmable. This migration of control,
formerly tightly bound in individual network devices, into accessible computing devices enables
the underlying infrastructure to be abstracted for applications and network services, which can
treat the network as a logical or virtual entity. Figure 14 - Software Defined Network Layers
depicts a logical view of the SDN architecture, showing the application, control, and
infrastructure layers.
Figure 14 - Software Defined Network Layers
Network intelligence is logically centralized in software-based SDN controllers, which maintain a
global view of the network. As a result, the network appears to the applications and policy
engines as a single, logical switch. With SDN, enterprises and carriers gain vendor-independent
control over the entire network from a single logical point, which greatly simplifies the network
design and operation. SDN also greatly simplifies the network devices themselves, since they
no longer need to understand and process thousands of protocol standards but merely accept
instructions from the SDN controllers. Perhaps most importantly, network operators and
administrators can programmatically configure this simplified network abstraction rather than
having to hand-code tens of thousands of lines of configuration scattered among thousands of
devices. In addition, leveraging the SDN controller’s centralized intelligence, IT can alter
network behavior in real-time and deploy new applications and network services in a matter of
hours or days, rather than the weeks or months needed today.
By centralizing network state in the control layer, SDN gives network managers the flexibility to
configure, manage, secure, and optimize network resources via dynamic, automated SDN
programs. Network operators can write these programs themselves rather than wait for
features to be embedded in vendors’ proprietary and closed software environments. In addition
to abstracting
the network, SDN architectures support a set of APIs that make it possible to implement
common network services, including routing, multicast, security, access control, bandwidth
management, traffic engineering, quality of service, processor and storage optimization, energy
usage, and all forms of policy management, custom-tailored to meet business objectives.
These APIs are currently based on a standard called “OpenFlow”, which will be covered in the
following sections. For example, an SDN architecture makes it easy to define and enforce consistent
policies across both wired and wireless connections on a campus. Likewise, SDN makes it
possible to manage the entire network through intelligent orchestration and provisioning
systems (See the section “Orchestration” on page 49). The Open Networking Foundation is
studying open APIs to promote multi-vendor management, which opens the door for on-demand
resource allocation, self-service provisioning, truly virtualized networking, and secure cloud
services. Thus, with open APIs between the SDN control and applications layers, business
applications can operate on an abstraction of the network, leveraging network services and
capabilities without being tied to the details of their implementation. SDN makes the network not
so much “application-aware” as “application-customized” and applications not so much
“network-aware” as “network-capability-aware”. As a result, computing, storage, and network
resources can be optimized.
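A hypothetical northbound API illustrates the pattern: an application states its intent once, and the controller compiles it into per-device rules. Nothing below is a real controller API; it is a sketch of the idea under stated assumptions:

```python
# Sketch: an application programs the network through a controller's
# northbound API, never touching individual devices. The API is invented.

class SdnController:
    def __init__(self, devices):
        self.devices = devices    # the controller's global device inventory
        self.intents = []

    def request(self, app, service, **params):
        """Northbound API: apps state what they need, not how to configure it."""
        self.intents.append((app, service, params))
        # Southbound: compile the single intent into one rule per device.
        return {dev: f"{service}:{params}" for dev in self.devices}

ctl = SdnController(["leaf1", "leaf2", "spine1"])
rules = ctl.request("video-portal", "qos", min_mbps=50)
print(sorted(rules))   # ['leaf1', 'leaf2', 'spine1'] -- every device, one call
```

The design choice worth noting is that the application sees only the abstraction (`request`); which devices exist, and how the rule is expressed on each, stays entirely inside the controller.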
Note that software-defined networking does not necessarily need to be “open”, become a
standard, or be interoperable. A proprietary architecture can meet the definition and provide
the same benefits. An SDN architecture must be able to manipulate frame and packet flows
through the network at large scale, and do so in a programmable fashion. The hardware
plumbing of an SDN will typically be designed as a converged (capable of carrying all data
types, including desired forms of storage traffic) mesh of large, low-latency pipes commonly
called a fabric, as shown in Figure 15 - Control Plane and Data Plane Visibilities. The SDN
architecture itself will in turn provide a network-wide view and the ability to manage the network
and network flows centrally. This architecture leverages the separated control plane and data
plane devices and provides a programmable interface for that separated control plane. The
data plane devices receive forwarding rules from the separated control plane and apply those
rules in hardware ASICs [24]. These ASICs can be either commodity switching ASICs or
customized silicon, depending on the functionality and performance aspects required.
Again, Figure 15 - Control Plane and Data Plane Visibilities depicts this relationship. In this
model, the SDN controller (shown in green) provides the control plane, and the data plane is
composed of hardware switching devices. These devices can be either new hardware or
existing hardware with specialized firmware, depending on the vendor and deployment model.
One major advantage clearly shown in this example is the visibility provided to the control
plane. Rather than each individual data plane device relying on advertisements from other
devices to build its view of the network topology, a single control plane device has a view of
the entire network. This provides a platform from which advanced routing, security, and quality
decisions can be made, hence the need for programmability. With a centralized controller
device, it is also much easier to gain usable data about real-time flows on the network, and to
make decisions (automated or manual) based on that data.
Figure 15 - Control Plane and Data Plane Visibilities
Note that Figure 15 - Control Plane and Data Plane Visibilities shows only a portion of the
picture, as it is focused on physical infrastructure and servers. Another major benefit is the
integration of virtual server environments into SDN networks, which allows centralized
management of consistent policies for both virtual and physical resources. Integrating a virtual
network is done by having a Virtual Ethernet Bridge (VEB)6 in the hypervisor that can be
controlled by an SDN controller. Figure 16 - Software Defined Network Virtualization
Integration depicts the integration between virtual networking systems and physical networking
systems needed for cohesive, consistent control of the network. This plays a more important
role as virtual workloads migrate. Because both the virtual and physical data planes are
managed centrally by the control plane, when a VM migration happens, its network
configuration can move with it regardless of its destination in the fabric. This is a key benefit
for policy enforcement in virtualized environments because more granular controls can be
placed on the VM itself as an individual port, and those controls stick with the VM throughout
the environment.
Note: These diagrams are a generalized depiction of an SDN architecture. Methods other than
a single separated controller could be used, but this is the more common concept.
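The "controls stick with the VM" behavior can be sketched by keying policy to the VM rather than to a physical port. This is a generalized port-profile model, not any specific vendor's implementation:

```python
# Sketch: per-VM network policy stored centrally, so it survives migration.
# All names are illustrative.

class PortProfileController:
    def __init__(self):
        self.vm_policy = {}       # policy keyed by VM identity, not physical port
        self.location = {}        # which host/VEB the VM is attached to

    def set_policy(self, vm, policy):
        self.vm_policy[vm] = policy

    def attach(self, vm, host):
        """(Re)program the local VEB wherever the VM lands."""
        self.location[vm] = host

vm_ctl = PortProfileController()
vm_ctl.set_policy("vm-42", {"vlan": 100, "acl": "web-tier"})
vm_ctl.attach("vm-42", "host-A")
vm_ctl.attach("vm-42", "host-B")  # migration: policy did not need re-entry

print(vm_ctl.location["vm-42"], vm_ctl.vm_policy["vm-42"]["vlan"])  # host-B 100
```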
6 http://www.dmtf.org/sites/default/files/standards/documents/DSP2025_1.0.0b.pdf
Figure 16 - Software Defined Network Virtualization Integration
Best Practice – Consider SDN’s Real-Time Decision making attributes
With centralized command and control of the network in place through SDN and a programmable interface, more intelligent processes can be added to handle complex systems. Real-time decisions can be made for traffic optimization, security, outage handling, or maintenance. Separate traffic types can run side-by-side while receiving different paths and forwarding behavior that responds dynamically to network changes. This shift will provide significant benefits in flexibility, scalability, and traffic performance for data center networks. While not all aspects are defined, SDN projects such as OpenFlow (www.openflow.org) provide the tools to begin testing and developing SDN architectures on supported hardware.
SDN USE CASES
Service providers, systems and applications developers, software and computer companies, and semiconductor and networking vendors are all part of the Open Networking Foundation (ONF). This diverse cross-section
of the communications and computing industries is helping to ensure that SDN and associated
standards effectively address the needs of network operators in each segment of the
marketplace, including:
1. Campus – SDN’s centralized, automated control and provisioning model supports the
convergence of data, voice, and video as well as anytime, anywhere access by enabling
IT to enforce policies consistently across both wired and wireless infrastructures.
Likewise, SDN supports automated provisioning and management of network resources,
determined by individual user profiles and application requirements, to ensure an optimal
user experience within the enterprise’s constraints.
2. Data center – The SDN architecture facilitates network virtualization, which enables
hyper-scalability in the data center, automated VM migration, tighter integration with
storage, better server utilization, lower energy use, and bandwidth optimization.
3. Cloud – Whether used to support a private or hybrid cloud environment, SDN allows
network resources to be allocated in a highly elastic way, enabling rapid provisioning of
cloud services and more flexible hand-off to the external cloud provider. With tools to
safely manage their virtual networks, enterprises and business units will trust cloud
services more and more.
4. Carriers and Service Providers – SDN offers carriers, public cloud operators, and
other service providers the scalability and automation necessary to implement a utility
computing model for IT-as-a-Service, by simplifying the rollout of custom and on-
demand services, along with migration to a self-service paradigm. SDN’s centralized,
automated control and provisioning model makes it much easier to support multi-
tenancy; to ensure network resources are optimally deployed; to reduce both CapEx and
OpEx; and to increase service velocity and value.
OpenFlow
OpenFlow is the first standard communications interface defined between the control and forwarding layers of an SDN architecture. OpenFlow (OF) allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). It is the absence of an open interface to the forwarding plane that has led to the characterization of today’s networking devices as monolithic, closed, and mainframe-like. To ease implementation and interoperability, the SDN community has developed standards and open source OF control platforms called NOX and POX7. NOX is the original OpenFlow controller and facilitates development of fast C++ controllers on Linux. POX is well suited for diving into SDN using Python on Windows, Mac OS, or Linux. The value of POX is easy implementation and the ability to control OpenFlow switches within seconds of downloading it. It is targeted largely at research and education, where it is used for ongoing work on defining key abstractions and techniques for controller design.
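The reactive controller model that NOX and POX popularized can be sketched, without using any real POX APIs, as a self-contained learning switch: on each packet-in event the controller learns where the source MAC lives, then either installs a flow for a known destination or floods. The class and method names below are invented for illustration only.

```python
class LearningSwitchController:
    """Toy sketch of reactive, controller-driven L2 learning (not real POX code)."""

    def __init__(self):
        self.mac_to_port = {}       # learned MAC address -> switch port
        self.installed_flows = []   # flow entries "pushed" down to the switch

    def packet_in(self, src_mac, dst_mac, in_port):
        # Learn which port the source MAC is reachable on.
        self.mac_to_port[src_mac] = in_port
        if dst_mac in self.mac_to_port:
            out_port = self.mac_to_port[dst_mac]
            # Install a flow entry so future packets bypass the controller.
            self.installed_flows.append((dst_mac, out_port))
            return f"forward:{out_port}"
        return "flood"              # unknown destination: flood and stay reactive

ctrl = LearningSwitchController()
print(ctrl.packet_in("aa:aa", "bb:bb", in_port=1))  # flood (bb:bb not yet learned)
print(ctrl.packet_in("bb:bb", "aa:aa", in_port=2))  # forward:1
```

The point of the sketch is the division of labor: the switch only reports misses, while all forwarding intelligence lives in controller software.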
7 http://www.noxrepo.org/
Best Practice – Do not underestimate the “Network innovator's dream”
Networks have become part of the critical infrastructure of our businesses, homes, and schools.
This success has been both a blessing and a curse for networking researchers and innovators.
The problem is that, given the network’s ubiquity, the chance of any innovation making an impact has grown more remote. The real-world impact of any given network innovation is reduced by the enormous installed base of equipment and protocols and by the reluctance to experiment with production traffic. This has created an exceedingly high barrier to entry for new ideas. Today, there is almost no practical way to experiment with new network protocols (e.g., new routing protocols, or alternatives to IP) in sufficiently realistic settings (e.g., at scale, carrying real traffic) to gain the confidence needed for their widespread deployment. The result is that most new ideas from the networking research community go untried and untested; hence the commonly held belief that the network infrastructure has “ossified”. Virtualized programmable networks could lower the barrier to entry for new ideas, increasing the rate of innovation in the network infrastructure. OpenFlow could enhance the dreams of the experimenter and innovator and potentially change the landscape of vendor strength. Innovators can now unlock the proprietary ties and allow production and experimental networks to live together, as shown in Figure 17 - How to Enhance Innovation. No other standard protocol does what OpenFlow does, and a protocol like OpenFlow is needed to move network control out of the networking switches to logically centralized control software. OpenFlow can be compared to the instruction set of a CPU.
Figure 17 - How to Enhance Innovation
As shown in Figure 18 - OpenFlow Instruction Set Examples, the protocol specifies basic primitives that can be used by an external software application to program the forwarding plane of network devices, just as the instruction set of a CPU would program a computer system.
The OpenFlow protocol is implemented on both sides of the interface between network infrastructure devices and the SDN control software. OpenFlow uses the concept of flows to identify network traffic based on pre-defined match rules that can be statically or dynamically programmed by the SDN control software. It also allows IT to define how traffic should flow through network devices based on parameters such as usage patterns, applications, and cloud resources. Since OpenFlow allows the network to be programmed on a per-flow basis, an OpenFlow-based SDN architecture provides extremely granular control, enabling the network to respond to real-time changes at the application, user, and session levels.
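The per-flow match model described above can be sketched as an ordered rule list, where each rule pairs match fields with an action and an empty rule acts as a wildcard catch-all. The field names below mimic, but do not reproduce, the OpenFlow match structure; the queue names are invented.

```python
def matches(rule_fields, packet):
    """A rule matches when every field it specifies equals the packet's value."""
    return all(packet.get(k) == v for k, v in rule_fields.items())

# Ordered rules: web traffic, voice traffic, then a wildcard catch-all.
rules = [
    ({"ip_proto": "tcp", "tcp_dst": 80}, "queue:web"),
    ({"ip_proto": "udp", "udp_dst": 5060}, "queue:voice"),
    ({}, "queue:best-effort"),      # empty match fields = match everything
]

def classify(packet):
    for fields, action in rules:
        if matches(fields, packet):
            return action

print(classify({"ip_proto": "tcp", "tcp_dst": 80}))   # queue:web
print(classify({"ip_proto": "udp", "udp_dst": 53}))   # queue:best-effort
```

This is the granularity the text refers to: policy can key on individual flows rather than on whole ports or subnets.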
OpenFlow Switch Components
As shown in Figure 19 - OpenFlow Internal Switch Architecture, an OpenFlow switch consists of one or more flow tables and a group table, which perform packet lookups and forwarding, as well as an OpenFlow channel to an external controller. The controller manages the switch via the OpenFlow protocol. Using this protocol, the controller can add, update, and delete flow entries, both reactively (in response to packets) and proactively.
Figure 18 - OpenFlow Instruction Set Examples
Figure 19 - OpenFlow Internal Switch Architecture
Each flow table in the switch contains a set of flow entries; each flow entry (see Figure 20 - Flow Table Entry layout) consists of match fields, counters, and a set of instructions to apply to matching packets. Packet matching starts at the first flow table and may continue to additional flow tables. Flow entries match packets in priority order, with the first matching entry in each table being used. If a matching entry is found, the instructions associated with the specific flow entry are executed. If no match is found in a flow table, the outcome depends on switch configuration: the packet may be forwarded to the controller over the OpenFlow channel, dropped, or may continue to the next flow table.
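The lookup semantics just described (priority order, first match wins, configurable table-miss behavior) can be sketched as follows; the data structures are invented for illustration and are not the OpenFlow wire format.

```python
def lookup(flow_table, packet, miss_action="to-controller"):
    """flow_table: list of (priority, match_fields, instructions) tuples."""
    # Try entries in descending priority; the first match wins.
    for priority, fields, instructions in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(k) == v for k, v in fields.items()):
            return instructions
    # Table miss: drop, send to controller, or continue to the next table.
    return miss_action

table = [
    (200, {"dst": "10.0.0.2"}, "output:3"),
    (100, {"dst": "10.0.0.2", "proto": "udp"}, "drop"),  # lower priority, never reached here
]
print(lookup(table, {"dst": "10.0.0.2", "proto": "udp"}))  # output:3 (higher priority wins)
print(lookup(table, {"dst": "10.9.9.9"}))                  # to-controller (table miss)
```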
Instructions associated with each flow entry describe packet forwarding, packet modification, group table processing, and pipeline processing. Pipeline processing instructions allow packets to be sent to subsequent tables (see Figure 21 - OpenFlow Pipeline Architecture, below) for further processing and allow information, in the form of metadata, to be communicated between tables. Table pipeline processing stops when the instruction set associated with a matching flow entry does not specify a next table; at this point, the packet is usually modified and forwarded.
Figure 20 - Flow Table Entry layout
Figure 21 - OpenFlow Pipeline Architecture
Flow entries may forward to a port. This is usually a physical port, but it may also be a virtual port defined by the switch or a reserved virtual port defined by the specification. Reserved virtual ports may specify generic forwarding actions such as sending to the controller, flooding, or forwarding using non-OpenFlow methods, such as “normal” switch processing, while switch-defined virtual ports may specify link aggregation groups, tunnels, or loopback interfaces. Flow entries may also point to a group, which specifies additional processing. Groups represent sets of actions for flooding, as well as more complex forwarding semantics (e.g. multipath, fast reroute, and link aggregation). As a general layer of indirection, groups also enable multiple flows to forward to a single identifier (e.g. IP forwarding to a common next hop). This abstraction allows common output actions across flows to be changed efficiently. The group table contains group entries; each group entry contains a list of action buckets with specific semantics dependent on group type. The actions in one or more action buckets are applied to packets sent to the group.
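The group abstraction above can be sketched with two of the group semantics mentioned: an "all" group applies every action bucket (flooding/multicast), while a "select" group applies a single bucket (e.g. one leg of a multipath group). The bucket layout is illustrative, not the OpenFlow wire format, and a real switch would hash the flow to choose the select bucket rather than take an index.

```python
def apply_group(group_type, buckets, pick=0):
    """buckets: list of action lists; semantics depend on the group type."""
    if group_type == "all":
        # Flooding/multicast: every bucket's actions are applied.
        return [action for bucket in buckets for action in bucket]
    if group_type == "select":
        # Multipath: exactly one bucket is applied (real switches hash the flow).
        return buckets[pick]
    raise ValueError(f"unsupported group type: {group_type}")

buckets = [["output:1"], ["output:2"], ["output:3"]]
print(apply_group("all", buckets))        # flood: output:1, output:2, output:3
print(apply_group("select", buckets, 1))  # one multipath leg: output:2
```

Because many flow entries can point at one group, changing the group's buckets retargets all of those flows at once, which is the indirection benefit the text describes.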
Switch designers are free to implement the internals in any way convenient, provided that
correct match and instruction semantics are preserved. For example, while a flow may use an
all group to forward to multiple ports, a switch designer may choose to implement this as a
single bitmask within the hardware-forwarding table. Another example is matching; the pipeline
exposed by an OpenFlow switch may be physically implemented with a different number of
hardware tables. OpenFlow-compliant switches come in two types:
1. OpenFlow-only switches: These support only OpenFlow operation; all packets are processed by the OpenFlow pipeline and cannot be processed otherwise.
2. OpenFlow-hybrid switches: These support both OpenFlow operation and normal Ethernet switching operation (i.e. traditional L2 Ethernet switching, VLAN isolation, L3 routing, ACL, and QoS processing).
Those switches should provide a classification mechanism outside of OpenFlow that routes
traffic to either the OpenFlow pipeline or the normal pipeline. For example, a switch may use the
VLAN tag or input port of the packet to decide whether to process the packet using one pipeline
or the other, or it may direct all packets to the OpenFlow pipeline. This classification mechanism
is outside the scope of this specification. OpenFlow-hybrid switches may also allow a packet to
go from the OpenFlow pipeline to the normal pipeline through the NORMAL and FLOOD virtual
ports.
The OpenFlow pipeline of every OpenFlow switch contains multiple flow tables, each flow table containing multiple flow entries. The OpenFlow pipeline processing defines how packets interact with those flow tables (Figure 21 - OpenFlow Pipeline Architecture, above). An OpenFlow switch
with only a single flow table is valid; in this case, pipeline processing is greatly simplified.
The flow tables of an OpenFlow switch are sequentially numbered, starting at zero. Pipeline
processing always starts at the first flow table: the packet is first matched against entries of flow
table 0. Other flow tables may be used depending on the outcome of the match in the first table.
If the packet matches a flow entry in a flow table, the corresponding instruction set is executed.
The instructions in the flow entry may explicitly direct the packet to another flow table, where the
same process is repeated. A flow entry can only direct a packet to a flow table number
that is greater than its own flow table number. Pipeline processing can only go forward and not
backward.
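The forward-only rule can be sketched as a small pipeline driver that rejects any goto-table instruction pointing at the current or an earlier table; the table contents below are invented for illustration.

```python
def run_pipeline(tables, packet):
    """tables: dict of table number -> function(packet) returning the next
    table number, or None when the action set should be executed."""
    visited, current = [], 0          # processing always starts at table 0
    while current is not None:
        visited.append(current)
        next_table = tables[current](packet)
        if next_table is not None and next_table <= current:
            # The spec's forward-only guarantee: no loops in the pipeline.
            raise ValueError("goto-table must target a higher-numbered table")
        current = next_table
    return visited

tables = {
    0: lambda p: 2,       # table 0 sends everything to table 2 (skipping 1 is fine)
    2: lambda p: None,    # table 2 ends pipeline processing; action set executes
}
print(run_pipeline(tables, {}))   # [0, 2]
```

Because table numbers only increase, pipeline processing is guaranteed to terminate, which is why loops cannot form.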
Current IP-based routing does not provide this level of control, as all flows between two
endpoints must follow the same path through the network, regardless of their different
requirements. The OpenFlow protocol is a key enabler for software-defined networks and
currently is the only standardized SDN protocol that allows direct manipulation of the forwarding
plane of network devices. While initially applied to Ethernet-based networks, OpenFlow
switching can extend to a much broader set of use cases. OpenFlow-based SDNs can be
deployed on existing networks, both physical and virtual. Network devices can support
OpenFlow-based forwarding as well as traditional forwarding, which makes it very easy for
enterprises and carriers to progressively introduce OpenFlow-based SDN technologies, even in
multi-vendor network environments.
The Open Networking Foundation is chartered to standardize OpenFlow and does so through
technical working groups responsible for the protocol, configuration, interoperability testing, and
other activities, helping to ensure interoperability between network devices and control software
from different vendors. OpenFlow is being widely adopted by infrastructure vendors, who
typically have implemented it via a simple firmware or software upgrade. OpenFlow-based SDN
architecture can integrate seamlessly with an enterprise or carrier’s existing infrastructure and
provide a simple migration path for those segments of the network that need SDN functionality
the most.
Best Practice – Consider using OpenFlow as a competitive differentiator
For enterprises and carriers alike, SDN makes it possible for the network to be a competitive
differentiator, not just a mandatory cost center. OpenFlow-based SDN technologies enable IT to
address the high bandwidth, dynamic nature of today’s applications, adapt the network to ever-
changing business needs, and significantly reduce operations and management complexity.
The benefits that enterprises and carriers can achieve through an OpenFlow-based SDN
architecture include:
1. Centralized control of multi-vendor environments: SDN control software can control
any OpenFlow-enabled network device from any vendor, including switches, routers,
and virtual switches. Rather than having to manage groups of devices from individual
vendors, IT can use SDN-based orchestration and management tools to quickly deploy,
configure, and update devices across the entire network.
2. Reduced complexity through automation: OpenFlow-based SDN offers a flexible
network automation and management framework, which makes it possible to develop
tools that automate many management tasks that are done manually today. These
automation tools will reduce operational overhead, decrease network instability
introduced by operator error, and support emerging IT-as-a-Service and self-service
provisioning models. In addition, with SDN, cloud-based applications can be managed
through intelligent orchestration and provisioning systems, further reducing operational
overhead while increasing business agility.
3. Higher rate of innovation: SDN adoption accelerates business innovation by allowing
IT network operators to literally program—and reprogram—the network in real time to
meet specific business needs and user requirements as they arise. By virtualizing the
network infrastructure and abstracting it from individual network services, for example,
SDN and OpenFlow give IT— and potentially even users—the ability to tailor the
behavior of the network and introduce new services and network capabilities in a matter
of hours.
4. Increased network reliability and security: SDN makes it possible for IT to define
high-level configuration and policy statements, which are then translated down to the
infrastructure via OpenFlow. An OpenFlow-based SDN architecture eliminates the need
to individually configure network devices each time an end point, service, or application
is added or moved, or a policy changes, which reduces the likelihood of network failures
due to configuration or policy inconsistencies. Because SDN controllers provide
complete visibility and control over the network, they can ensure that access control,
traffic engineering, quality of service, security, and other policies are enforced
consistently across wired and wireless network infrastructures, including branch offices,
campuses, and data centers. Enterprises and carriers benefit from reduced operational
expenses, more dynamic configuration capabilities, fewer errors, and consistent
configuration and policy enforcement.
5. More granular network control: OpenFlow‘s flow-based control model allows IT to
apply policies at a very granular level, including the session, user, device, and
application levels, in a highly abstracted, automated fashion. This control enables cloud
operators to support multi-tenancy while maintaining traffic isolation, security, and elastic
resource management when customers share the same infrastructure.
6. Better user experience: By centralizing network control and making state information
available to higher-level applications, an SDN infrastructure can better adapt to dynamic
user needs. For instance, a carrier could introduce a video service that offers premium
subscribers the highest possible resolution in an automated and transparent manner.
Today, users must explicitly select a resolution setting, which the network may or may
not be able to support, resulting in delays and interruptions that degrade the user
experience. With OpenFlow-based SDN, the video application would be able to detect
the bandwidth available in the network in real time and automatically adjust the video
resolution accordingly.
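The video scenario above can be sketched as a simple policy that picks the highest resolution the bandwidth reported by the network can carry. The bitrate table is illustrative and not drawn from any real service; a production system would also smooth the bandwidth signal rather than react to a single sample.

```python
# Illustrative (resolution, required bandwidth in kbps) table, highest first.
RESOLUTION_BITRATE_KBPS = [
    ("1080p", 5000),
    ("720p", 2500),
    ("480p", 1000),
    ("240p", 400),
]

def pick_resolution(available_kbps):
    """Choose the best resolution the reported bandwidth can sustain."""
    for label, needed in RESOLUTION_BITRATE_KBPS:
        if available_kbps >= needed:
            return label
    return "240p"    # floor: always serve something rather than stall

print(pick_resolution(3000))   # 720p
print(pick_resolution(300))    # 240p
```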
Best Practice – Consider OpenFlow solutions from various vendors for the SDDC
A number of companies have announced solutions using OpenFlow. NEC had one of the first
switches. At the most recent InterOp show, HP introduced a broad line of OpenFlow products,
including a controller, and an application called Security Sentinel.
Big Switch: No company has been talking about SDN and OpenFlow more than Big Switch
Networks. A few weeks ago, it finally announced its product line, including its Big Network
Controller platform and a couple of applications that run on top of it. One difference, the company says, is that all of these products are generally available now.
Big Switch's controller is based in part on the FloodLight open-source OpenFlow kernel with
extensions, and the company describes it as essentially a platform for running OpenFlow
applications. Big Switch says its controller will work with products from 27 partners. These
include hypervisors from Citrix, Microsoft, Canonical, and Red Hat; and physical switches from
Arista, Brocade, Dell, Extreme Networks, and Juniper. In addition, as of this writing, Big Switch
indicates testing with IBM and HP switches, and says it has tested with VMware. (Big Switch
has only limited support for VMware today, though it says it expects to have more robust
support in 2013.).Big Switch's first two applications, which go on top of that, are Big Virtual
Switch, which provides data center network virtualization, and Big Tap, a network-monitoring
tool. SDN will go beyond virtual networks toward applications, and the company's Big Tap
unified network monitoring tool is an example of this.
Nicira, Midokura, and Other SDN Vendors
The other SDN company that is getting a lot of attention lately is Nicira, which was bought by
VMware in the summer of 2012. Nicira touts its Network Virtualization Platform (NVP),
essentially software designed to create an intelligent abstraction layer between hosts and the
existing network—in other words, an overlay network. This is meant to be a series of virtual
switches (which it calls Open vSwitch) and to be part of a Distributed Virtual Network
Infrastructure (DVNI) architecture. VMware has talked about Nicira being part of its "software-
defined data center" concept, which seems to fit in with its Pivotal Initiative focused on cloud
computing.
Another interesting approach comes from Midokura, a smaller Japanese-U.S. startup that is
offering MidoNet. This is aimed primarily at virtualized hosts in an Infrastructure as a Service
(IaaS) environment, and fits at the hypervisor layer. The concept is to provide network isolation
and fault-tolerant distributed environments without an intermediating network. The company
says this does not require any new hardware, but simply IP connectivity among network devices
and is "truly scalable." This was announced at the recent OpenStack conference.
There are other alternatives to OpenFlow for separating the data plane and control plane, and
cloud platforms such as OpenStack and CloudStack could work with or without OpenFlow
controllers. However, the concept is certainly pushing companies such as Cisco, Juniper, and
Alcatel-Lucent to provide more open access to their networking products via APIs.
Cisco
Cisco has itself promoted the concept of software-defined networking, saying its switches
support the concept of network virtualization and have for years, and will at some point support
OpenFlow. However, the company sees the big benefit in software-defined networking as
getting the applications to be more intelligent about the network and the resources it consumes.
SDN is often seen as a way of reducing the cost of networking, of making networks more open,
and of making managing virtual machines and virtual data centers much easier. As such, it both
provides opportunities and challenges for Cisco (the leading network vendor), VMware (the
leading virtual machine vendor), and for many smaller companies that seem more likely to
embrace the open-source and OpenFlow concepts more fully, believing it gives them a new
weapon against the more proprietary products that dominate the market today.
Right now, the big beneficiaries will be organizations that run big cloud data centers, whether
public or private, as they face the issues of dealing with network complexity and a need for
faster changes. Over time, though, we could see more network-aware applications, which could
be much more important. Placing the control with the applications rather than the administrators
is a big change. Teaching programmers how to create applications for this, and figuring out which applications will really be able to take advantage of it, seems like a challenge. Still, the concept has a lot of promise, even if it is still very early.
The Storage Caballero
Virtualization has radically changed the economics and operations of data centers. High-performance hypervisors have created an abstraction layer between the hardware and the application.
application. With this advancement, software deployment of entire application stacks has
replaced tedious and expensive manual provisioning. Implementing software control
revolutionized operations. The technologies on which modern data centers run, such as live
migration, VM fail-over, and self-provisioning were not possible without the hypervisor
abstraction layer and corresponding software management. Now moving, adding, and deleting
server resources independently is fast, seamless, and secure. Is it possible to expand this efficiency model to storage, where there is a growing problem with partitioned data and a clear need for comprehensive data lifecycle management?
Software Defined Storage
The second Caballero, storage, and his friends seem to be moving to abstractions like the rest of the industry. Before it became truly useful, server virtualization needed to be pervasive
and high-performance. Cooperation between
processor, server system, and software vendors quickly led to industry standards for
virtualization extensions and protocols. Today, the ecosystem enjoys a high degree of
interoperability and has thrived, to the benefit of both vendors and their customers. Similarly,
data center networking is poised to be changed by software-defined networking and the
adoption of OpenFlow as an industry standard, driven by the Open Networking Foundation and
its more than seventy member companies as previously discussed.
Both instances point to collaborative, open industry standards driving abstraction layers as the
solution for significant data center inadequacy. Today’s general data center inefficiencies are
clearly laid out by VMware CTO Steve Herrod who stated, “Specialized software will replace
specialized hardware. The other approach, specialized hardware-based systems, leads to silo
type systems and people8”.
The storage landscape is changing. Storage is unfortunately one of the most siloed
departments in the modern data center. Storage performance is closely tied to the physical
characteristics of the hardware components. The physics of spinning or spooling magnetic
media has defined storage performance. To work around limitations, specialized silicon,
firmware and protocols were developed to create storage systems optimized for specific
workloads and data patterns. Even as the hardware components have become more general
purpose, storage software features, such as tiering, snapshots and provisioning, have remained
tied tightly to a particular array.
Storage architects have typically responded to the need to scale storage with two strategies.
The first is to invest heavily in the storage of a single large OEM based on an over-provisioned,
scale-up chassis. At the high-end, these solutions provide software management within the
system to allocate different types of storage with a common management interface. The second
option is to isolate storage according to workload. This optimizes cost efficiency and utility, but
places a higher burden on administrators to manage multiple storage silos. Both strategies
suffer from the same core problem; application data access is tied explicitly to a particular
hardware platform. However, before we look for a solution, there are several key attributes of
the storage function that demand careful consideration:
1. The value of storage is the data it contains. Business continuity is dependent on the
availability of information.
2. Application performance is often dependent on storage performance. However, not all
applications have the same requirements. It may be the latency, bandwidth, or capacity
that is most important to a particular application.
8 Source: http://www.informationweek.com/news/hardware/virtual/240000054
3. The need for more storage performance is ubiquitous across businesses and
institutions. For most enterprises, growth rates exceed 40 percent year-on-year.
4. Finally, not all data is of equal value.
Moreover, advances in server and silicon technology have increased application dependency on
storage. CPU and network capabilities shifted the balance from local memory and dedicated
local disk to shared networked storage. With more contention on local memory, applications
become dependent upon storage for application data. Application density has changed storage
patterns from predictable sequential access to almost all random. Traditional storage is
challenged to keep up.
Adding further complexity is the evolving role of data in the enterprise. Many companies are
now using data from historically separated islands to create rich pictures of their customers and
their operations. Because the data needs to be accessed in different environments, storage is
seeing a far-reaching change in what it is being asked to do. Not only are its traditional
workloads like databases growing, but the dynamic nature of virtualization and an entirely new
class of information in the form of Big Data is impacting the storage function. Unlike server
virtualization or software-defined networking, to date, there hasn’t been a standard technology
that abstracts the applications’ use of data from the physical presence of the storage system.
Best Practice – Consider Data Lifecycle Management as the new Software Defined Storage
The roadblocks to efficient growth of storage go beyond the traditional use cases of explosive
data growth such as Big Data and its multi-stage, multi-user analysis model. Companies must
look to manage their overall data lifecycle, rather than focus on the more narrow function of
simple storage and retrieval. While the definition of this concept varies somewhat, in general, it
includes multiple stages of data movement throughout an organization such as creation, usage,
transport, maintenance, and archiving. Approaching all phases with a clear strategy presents a
challenge for storage practitioners. Consider the following:
1. Data is created in different environments such as databases, flat files, and applications
and each has a different performance and back-up profile.
2. Data ages and very little data can be deleted today. Some needs to be retained for
regulatory compliance reasons, while other information simply needs to be moved to the
lowest cost environment. Thus, we are seeing a natural lifecycle from creation, to
analysis, to final, accessible archiving.
3. Virtualization offers the opportunity for increased efficiency through storage mobility.
Theoretically, the next generation storage paradigm will include virtualization on the top
tier, providing an effective and powerful comingling of data, with balanced loads across a
unified, scalable, and interoperable computation and storage network.
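The lifecycle stages listed above can be sketched as a toy tiering policy that moves data from a performance tier toward an accessible archive as it ages, with a hold for data kept for regulatory compliance. The tier names and age thresholds below are invented for illustration, not from any real product.

```python
def choose_tier(age_days, retention_hold=False):
    """Map a dataset's age (and compliance status) to an illustrative tier."""
    if retention_hold:
        return "compliance-archive"    # regulated data: retained, never deleted
    if age_days <= 30:
        return "flash"                 # creation and active use
    if age_days <= 365:
        return "capacity-disk"         # analysis phase
    return "archive"                   # final, still-accessible archive

print(choose_tier(5))                        # flash
print(choose_tier(400))                      # archive
print(choose_tier(10, retention_hold=True))  # compliance-archive
```

A software-defined storage layer would apply a policy like this automatically as data ages, rather than leaving placement to per-array administration.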
These varied issues point to a need for IT to approach data in a new way with the
understanding it will move through a complete lifecycle and needs to move through the
organization in a much more dynamic and fluid way than ever before. The need for an industry-
wide strategy is obvious.
Best Practice – Consider legacy storage solutions in the SDDC
As the data center shifts to a more API-driven, abstracted view of storage resources, one might conclude that intelligent storage devices instantly become obsolete and/or commoditized. One can argue that abstraction simply creates an easier way to consume the
various underlying storage options that would be available to infrastructure architects as it
relates to offering various optimization points. It can also be argued that there is plenty of room
for differentiated capabilities both above and below the virtualization abstraction point. A case in
point is OpenStack vs. VMware's solutions. Each offers various levels of abstraction, but the
goals are the same. While storage consumers appreciate tight integration of their chosen
technologies from the vendor community, nobody wants to be locked in at one layer because
they've adopted another layer.
Best Practice – Consider that storage media itself is transforming
Storage is transforming in the following areas:
1. Storage Consumption: The consumption model is changing to easy-to-consume and
easy-to-control storage services, whether those are delivered internally by the IT
organization or externally by an IT service provider.
2. Rise of the Storage API: Domain-specific storage operations are being augmented by
services exposed by RESTful APIs such as EMC’s SRM suite.
3. Reduced distance limitations: Distance limitations are being minimized. Technologies
are being introduced at the storage layer, allowing newer active-active models for
resource balancing and failover, such as EMC’s VPLEX® solution.
4. Storage convergence: Convergence is causing physical storage to become tightly
integrated, managed, and supported with other infrastructure components such as
server and networks.
5. Scale-out proliferating: Scale-up architectures are giving way to scale-out approaches
as data volumes grow and simplicity becomes paramount.
6. Commoditization of storage: The underlying technology base is moving away from
proprietary ASICs and FPGAs toward industry-standard server components. Storage
software stacks can now be virtualized and potentially run on standard servers, and new
workloads can even migrate into the array itself.
7. Evolving media technology: The underlying media types are rapidly changing: from
tape to disk, and from disk to Flash.
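The consumption-model shift in item 1 can be illustrated with a short sketch. The class and method names below are hypothetical, not any vendor's actual API; the point is that consumers request capacity and a service level while placement details stay hidden behind the service boundary.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Volume:
    volume_id: int
    size_gb: int
    service_level: str   # e.g. "gold" (flash), "silver" (hybrid), "bronze" (archive)

class StorageService:
    """Easy-to-consume storage: callers state capacity and service level only."""
    def __init__(self):
        self._ids = count(1)
        self.volumes = {}

    def create_volume(self, size_gb: int, service_level: str = "silver") -> Volume:
        # Tier selection, placement, and protection would happen behind this call.
        vol = Volume(next(self._ids), size_gb, service_level)
        self.volumes[vol.volume_id] = vol
        return vol

    def delete_volume(self, volume_id: int) -> None:
        self.volumes.pop(volume_id)

svc = StorageService()
vol = svc.create_volume(500, service_level="gold")
```

Whether the service is delivered internally by IT or externally by a provider, the consumer-facing shape stays the same; only the implementation behind it differs.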
While any one of these topics is probably better served by a dedicated paper, it is worth
focusing on one factor in particular that is driving the SDDC.
Best Practice – Do not underestimate how Flash is transforming storage
Storage costs usually have to be evaluated against three criteria: cost per unit capacity, cost per
unit performance, and cost per degree of protection. Flash storage completely revolutionizes
that second metric (performance) in a substantial way. The bar has been raised dramatically as
to what is now achievable, and the cost to deliver a given measure of storage performance has
dropped accordingly. Yes, there are always exceptions, but we're focusing on the broad trends
here rather than on specific corner cases. There's no axiomatic "best place" to put Flash in the
storage hierarchy: in the array, in the storage network, in the server, and so on.
For many familiar use cases, there's a well-understood benefit for mixing in a bit of Flash with
more traditional disk drives via a hybrid storage array. Thanks to our good amigo LoR (locality of
reference, or data skew), a small amount of Flash combined with intelligent software can often
result in astonishing performance benefits. But if your need for speed is extreme, there's nothing
faster in the storage world than a server-based Flash pool accessed over the internal PCIe bus
vs. a traditional storage network I/O. The only thing faster would be terabytes of volatile server
DRAM, which is hard to consider storage since it isn't persistent. Whether that server Flash is
delivered via an in-server PCIe card, or perhaps pooled as an external shared caching device
via a low-latency interconnect, many believe that is close to the ultimate in storage speed.
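The locality-of-reference argument can be made concrete with a small simulation. The workload skew (90% of accesses hitting 10% of the blocks) and the cache sizing below are invented for illustration; the simple LRU policy stands in for the "intelligent software" that decides what stays in Flash.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny LRU cache standing in for a flash tier in front of disk."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def access(self, block: int) -> None:
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict to the disk tier

cache = LRUCache(capacity=100)   # "flash" sized at 10% of a 1,000-block set
for i in range(10_000):
    # 9 of every 10 accesses hit the hot blocks; the rest sweep the cold ones
    block = i % 100 if i % 10 else 100 + (i % 900)
    cache.access(block)

hit_rate = cache.hits / (cache.hits + cache.misses)
```

With this skew, a cache covering only a tenth of the blocks absorbs the large majority of accesses, which is exactly why a small amount of Flash in a hybrid array pays off so well.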
The Server Caballero
One of the many trends at the server is how software-defined
data centers can change how the application is developed,
provisioned, used, and managed. One major change is how applications communicate with
each other. Another major change is how applications can control the infrastructure dynamically,
to levels never achievable before. As we will see in the following sections, just about every networking
and storage vendor is at least nodding at the concept, particularly as a way of handling more
complex cloud-based environments. Configuring the networks themselves often requires lots of
manual tasks because each device on the network has separate policies and consoles, as
previously discussed. With the move toward server virtualization and cloud computing, this has
become even more complex, which is one of the reasons why a number of organizations and
companies have been focused on solving the issues of complexity that the SDDC is addressing.
Network-aware applications
Network engineers and researchers have long sought effective ways to make networks and the
infrastructure in general more “application-aware”. A variety of methods for optimizing the
network to improve application performance or availability have been considered. Some of these
approaches have been edge-based, for example tuning protocol parameters at end-hosts to
improve throughput, or choosing overlay nodes to direct traffic over application-optimized paths.
Examples of network-centric approaches include providing custom instances of routing
protocols to applications9, or even allowing applications to embed code in network devices to
perform application processing.
Recent emergence of the software-defined networking paradigm has created renewed interest
in tailoring the network to address the needs of the applications more effectively. By providing a
well-defined programming interface to the network (see page 31 for more on OpenFlow), SDN
provides an opportunity for more dynamic and flexible interaction with the network. Despite this
promising vision, the ability of SDDC to effectively configure or optimize the infrastructure to
improve application performance and availability is still nascent. Trends in data center
applications, network and storage aware architecture present a new opportunity to leverage the
capabilities of SDDC for truly application-aware infrastructure comingling.
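The kind of programming interface SDN exposes can be sketched as follows. This is not the OpenFlow wire protocol, and the class, switch, and parameter names are hypothetical; it only shows the shape of the interaction: an application expresses intent, and the controller translates it into per-switch flow-table entries.

```python
class SdnController:
    """Minimal sketch of a controller that applications program against."""
    def __init__(self, switches):
        self.flow_tables = {sw: [] for sw in switches}

    def install_path(self, match, path, priority=100):
        # Install a matching rule on every (switch, output port) hop of the path.
        for sw, out_port in path:
            self.flow_tables[sw].append(
                {"match": match, "out_port": out_port, "priority": priority}
            )

# An application asks for an optimized path for its traffic class:
ctrl = SdnController(["s1", "s2", "s3"])
ctrl.install_path(match={"tcp_dst": 50010}, path=[("s1", 2), ("s3", 1)], priority=200)
```

The application never touches individual device consoles; it states what it needs, and the controller owns the device-level configuration, which is the dynamic, flexible interaction the text describes.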
9 P. Chandra, A. Fisher, C. Kosak, T. S. E. Ng, P. Steenkiste, E. Takahashi, and H. Zhang.
Darwin: Resource management for value-added customizable network service. In IEEE
ICNP ’98, October 1998.
The trends are:
1. Software Defined Networking - see page 26 for details on Software-Defined Networking
(SDN)
2. Big Data Applications – There is a growing importance of big data applications, which
are used to extract value and insights efficiently from very large volumes of data. Many
of these applications process data according to well-defined computation patterns and
also have a centralized management structure, which makes it possible to leverage
application-level information to optimize the network.
3. Energy Consumption and Performance – There is a growing number of proposals10 for
data center network architectures that leverage optical switches to provide significantly
increased point-to-point bandwidth with low cabling complexity and energy consumption.
Some of this work has demonstrated how to collect network-level traffic data and
intelligently allocate optical circuits between endpoints (e.g. top-of-rack switches) to
improve application performance. However, without a true application-level view of traffic
demands and dependencies, circuit utilization and application performance can be less
than optimal.
Taken together, these three trends (software-defined networking, dynamically reconfigurable
optical circuits, and structured big data applications) will become more tightly coupled and rely
more on the SDN controller and its applications as we move into the future. This can be done by using a “cross-layer”
approach that configures the network based on big data application dynamics at run-time.
Methodologies such as topology construction and routing mechanisms for a number of
communication patterns are possible, including single aggregation, data shuffling, and partially
overlapping aggregation traffic patterns to improve application performance. However, a number
of challenges remain which have implications for SDN controller architectures. For example, in
contrast to common SDN use cases such as WAN traffic engineering and cloud network
provisioning, run-time network configuration for big data jobs requires more rapid and frequent
flow table updates. This imposes significant requirements on the scalability of the SDN
controller and how fast it can update state across the network. Another challenge is in
maintaining consistent network-wide routing updates with low latency, and in coordinating
network reconfiguration requests from different applications in multi-tenancy environments. This
10 G. Wang, D. Andersen, M. Kaminsky, K. Papagiannaki, T. S. E. Ng, M. Kozuch, and M. Ryan.
c-Through: Part-time optics in data centers. In ACM SIGCOMM, August 2010.
is a work-in-progress, and there is clearly more work to be done to realize and fully evaluate the
design. Nevertheless, this work shows early promise for achieving one of the major goals of
software-defined networking: tightly integrating applications with the network to improve
performance, utilization, and rapid configurability.
Advantages of Application Awareness
As will be discussed in the next section, for big data applications, an application-aware network
controller provides improved performance. By carefully allocating and scheduling high-
bandwidth links via optical paths, job completion time can be reduced significantly. Data center
operators also benefit from better utilization of the relatively limited set of high-bandwidth optical
links. Current approaches for allocating optical circuits in data centers typically rely on network
level statistics to estimate the traffic demand matrix in the data center. While these designs
show the potential to benefit applications, without a true application-level view of traffic demands
and dependencies, circuit utilization and application performance can be poor. The reason is that, in
many cases, it is difficult to estimate real application traffic demand based only on readings of
network level statistics. Without accurate information about application demand, optical circuits
may be configured between the wrong locations, or circuit flapping may occur from repeated
corrections. Second, blindly optimizing circuit throughput without considering application
structure could cause blocking among interdependent applications and poor application
performance.
Best Practice – Consider the infrastructure and Big Data as the emerging app for the SDDC
Given all that has been discussed about the three caballeros, Server, Network and Storage, one
common attribute is that they are closely tied to the infrastructure. Not to minimize them, but
they are important only to the extent that they support the delivery of valuable applications,
along with the data, of course. Applications produce results that have meaning and value.
Infrastructure is just the means to that end.
Figure 22 - SDN Controller Application Integration
The term "application" shows up a lot in SDDC conversations, and that's good,
because it more closely links the SDDC to where the real value is created. Consider the
canonical architecture diagram that has emerged around OpenFlow-based SDN, shown in
Figure 22 - SDN Controller Application Integration: functions such as firewalls and load
balancers are now becoming "applications".
Traditionally, Excel, Oracle Financials, and Hadoop are applications. With SDN, the network
infrastructure is now programmable, so firewalls and load balancers will be seen as applications
in the sense that they are software programs that instantiate functions in the network via the
SDN controller.
Firewalls and load balancers are perhaps better labeled "software-defined infrastructure". Many
consider that the SDDC will spark an IT revolution in application development, with the
infrastructure itself hosting the next killer app. Think of the PC. While PC hardware manufacturers
drove commodity platforms, it was the emergence of killer applications such as spreadsheets
and desktop publishing that really drove adoption. Many argue that another killer app for the
SDDC would be one with significant economic benefits to lots of companies, and one that
requires specifically SDN in order to be viable on a broad scale.
Big data may be one such application area. First, big data has unlocked big value for some
companies. For example, it's been reported that Google generates 95% of its revenue11 from
the ability to target ad placements based on search terms. Big data apps will be used
pervasively if they can drive even a few percentage points of revenue (or save equivalent costs)
for many organizations. Hadoop, for example, can leverage an integrated network control plane,
and utilize job-scheduling strategies to accommodate dynamic network
11 http://blogs.hbr.org/cs/2012/10/big_data_hype_and_reality.html
Figure 23 - Application Acceleration using SDN (Hadoop job completion time in seconds vs.
data size in MB, comparing static network queues without SDN against application-aware SDN
acceleration)
configuration. This is a perfect example of utilizing the power of the SDDC, specifically the
integral part of this architecture, the SDN.
In an analysis12 conducted by Infoblox, an SDN-aware version of Hadoop achieved a 40%
reduction on a key Hadoop benchmark when executed over an SDN network. The Hadoop
application set up prioritized queues (flows) in the underlying OpenFlow switches at
initialization, giving the highest priority to traffic from particular HTTP ports. The ports,
corresponding to the most latency-sensitive Hadoop traffic, received the highest throughput
using application aware SDN acceleration, as shown in Figure 23 - Application Acceleration
using SDN, above. As a result, critical Hadoop traffic was forwarded ahead of other, less critical
traffic, and so the job completed faster, even when executing on a congested network. The 40%
performance improvement is impressive, even more so when one considers that there was no
feedback or runtime optimization. Applications that speak SDN can dynamically monitor network
state along with app execution and make network optimizations dynamically.
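The prioritization mechanism described above can be sketched as a priority queue at the switch. The port numbers and priority values below are invented for illustration; in the actual analysis, the priorities were installed in the OpenFlow switches at job initialization.

```python
import heapq

# Hypothetical mapping: latency-sensitive Hadoop ports get the top priority class.
PRIORITY = {50010: 0, 50060: 0}   # lower number = higher priority

def enqueue(queue, seq, packet):
    # seq is a monotonic counter that preserves FIFO order within a class.
    prio = PRIORITY.get(packet["dst_port"], 10)
    heapq.heappush(queue, (prio, seq, packet))

queue, order = [], []
for seq, pkt in enumerate([
    {"dst_port": 80,    "flow": "bulk-1"},
    {"dst_port": 50010, "flow": "shuffle-1"},
    {"dst_port": 80,    "flow": "bulk-2"},
    {"dst_port": 50060, "flow": "shuffle-2"},
]):
    enqueue(queue, seq, pkt)

while queue:                      # the switch drains higher priorities first
    _, _, pkt = heapq.heappop(queue)
    order.append(pkt["flow"])
```

On a congested link, the critical shuffle traffic drains ahead of the bulk traffic, which is why the job completes faster even without any runtime feedback.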
The implications of this analysis could be significant for SDNs and big data. Applications like
Hadoop rely on distributed computing in order to achieve acceptable performance. Distributed
computing requires a network. If SDNs can enable applications to make better use of distributed
processing resources, then big data applications can deliver acceptable performance using
much less computing hardware, making them affordable for more organizations. That could
make big data a killer app for SDNs.
This is just one example. Delivery of rich media like video and audio also has potential. Just as
the PC (and now mobile devices) unleashed waves of programmer creativity, it will be exciting
to see what programmers will do when their applications can define and become the network.
Orchestration
As discussed previously, even though abstraction is a good thing, the concept of a software
defined data center is nothing new. The SDDC has been floating around for over a decade,
since the appearance of VMware. Many will remember "utility computing", or "N1"13 from Sun,
which one can argue is really a variation on the software-defined theme. Today though, the vision for
12 http://cloud.github.com/downloads/FlowForwarding/incubation/Infoblox_Hadoop_OF_NDM_1106.pdf
13 http://en.wikipedia.org/wiki/Oracle_Grid_Engine
IT has been laser-focused on reducing complexity and speeding the delivery of services, driven
for a long time by the cloud.
It’s clear that not only is the vision of the autonomic data center alive and well, but as an
industry, vendors are making progress on delivering that vision. Over the last decade the
servers have become largely abstracted, defined, not by the device, but in software. At the
same time, the cloud has changed the way IT professionals consume resources. The question
today is what’s next? How, now that the infrastructure stovepipes of the data center have been
defined in software, does the industry leverage all this effort? To what purpose and to what end
is this entire software-defined infrastructure applied?
As outlined by EMA14 (Enterprise Management Associates) recently, the “ultimate goal of the
Software-Defined Data Center (SDDC) is to centrally control all aspects of the data center,
compute, networking, and storage, through hardware-independent management and
virtualization software.” It seems that the end game of the SDDC is to orchestrate, coordinate,
and apply resources from server, storage, and networking pools to ensure that the applications
or services meet the capacity, availability, and response time SLAs that the business requires.
In addition, when we talk about cloud, what often comes to mind first is virtualization, and the
elasticity, manageability, and ease of use that virtual machines produce.
The problem is, in the real world, the data center is a diverse and heterogeneous place. Some
applications will always remain on bare metal hardware because, for performance, sizing, or
even licensing reasons, they are not well suited to be run in virtual machines. For those
applications that are well suited to virtualization, the cloud has become a mixture running many
different hypervisors. Even VMware is seeing that. Finally, physical hardware is, was and
forever will be the engine of the data center. Even hypervisors have to run on hardware.
Therefore, by definition, the world will have a mix of physical and virtual. As such, to use the
SDDC to provide and manage a cloud offering, you need a solution that can manage both
physical and virtual along with the network and storage. How does one address this diversity of
heterogeneous hardware and software solutions to orchestrate an effective solution? One can
argue that command and control that responds quickly and accurately to changes in the
environment enables an efficient and manageable approach to orchestration within the
software defined data center.
14 http://www.prweb.com/releases/2011/10/prweb8866515.htm
Best Practice – Understand the efficiencies of command and control in the SDDC
For orchestration to be effective, a best practice is to understand the importance and best
policies for command and control. The OODA loop (observe, orient, decide, and act), as
shown, is a concept originally applied to the combat operations process, often at the strategic
level in military operations. It is now also often applied to understanding commercial operations,
learning processes, and orchestration methods. The concept was developed by military
strategist and USAF Colonel John Boyd15, father of the F-16 jet fighter.
The OODA loop has become an important concept in both business and military strategy.
According to Boyd, decision-making occurs in a recurring cycle of observe-orient-decide-act.
An entity (whether an individual or an organization) that can process this cycle
quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby
"get inside" the opponent's decision cycle and gain the advantage. Boyd developed the concept
to explain how to direct one's energies to defeat an adversary and survive. Boyd emphasized
that "the loop" is actually a set of interacting loops that are to be kept in continuous operation
during combat. He also indicated that the phase of the battle has an important bearing on the
ideal allocation of one's energies.
Boyd’s diagram, shown in Figure 24 - The OODA Loop, shows that all decisions are based on
observations of the evolving situation tempered with implicit filtering of the problem being
addressed. These observations are the raw information on which decisions and actions are
based. The observed information must be processed to orient it for decision-making. In
notes from his talk “Organic Design for Command and Control”, Boyd said that the second O,
15 http://www.fastcompany.com/44983/strategy-fighter-pilot
Figure 24 - The OODA Loop
orientation is defined as the repository of our genetic heritage, cultural tradition, and previous
experiences. Many argue that this is the most important part of the O-O-D-A loop since it
shapes the way we observe, the way we decide, the way we act. As stated by Boyd and shown
in the “Orient” box, there is much filtering of the information through our culture, genetics, ability
to analyze and synthesize, and previous experience. Since the OODA Loop was designed to
describe a single decision maker, the situation is usually much worse than shown as most
business and technical decisions have a team of people observing and orienting, each bringing
their own cultural traditions, genetics, experience, and other information. It is here that decisions
often get stuck, which does not lead to winning, since, in order to win, we should operate at a
faster tempo or rhythm than our adversaries. This means that one needs to get inside the
adversary's Observation-Orientation-Decision-Action time cycle or loop.
The OODA loop can be used in the command and control of a Software Defined Data Center.
With the ability to observe the server, storage, and especially the network in a centralized
manner, the ability to orient, decide, and act will accelerate and ultimately lead to automation,
even in heterogeneous environments. Therefore, just as the VM was the container for server
virtualization, the Virtual Data Center (VDC) is the container for the SDDC. VDC deployments
are completely automated.
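A toy sketch of an OODA-style control cycle for SDDC command and control might look like the following. The metric names, thresholds, and actions are invented for illustration; the point is the shape of the loop, with each phase feeding the next.

```python
def observe(telemetry):
    # Raw readings collected centrally from server, storage, and network.
    return telemetry()

def orient(reading, baseline):
    # Orient the raw data against history: normalize each metric vs. its baseline.
    return {k: v / baseline[k] for k, v in reading.items()}

def decide(oriented, threshold=1.5):
    # Decide which resources have drifted far enough to need action.
    return [k for k, ratio in oriented.items() if ratio > threshold]

def act(hotspots, actions):
    # Act: e.g. rebalance a workload or reroute a flow for each hotspot.
    for k in hotspots:
        actions.append(f"rebalance:{k}")

baseline = {"net_util": 0.40, "storage_latency_ms": 5.0}
actions = []
reading = observe(lambda: {"net_util": 0.72, "storage_latency_ms": 4.0})
act(decide(orient(reading, baseline)), actions)
```

Run continuously, a loop like this is what lets a centralized controller "get inside" the tempo of changing conditions and move toward automation.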
An example of this OODA model is VMware’s vCloud Director. This solution can handle policy-
based deployment of compute and storage, delegating networking and security deployments to
the vCloud Networking & Security sub-system. A centralized command and control mechanism
(vShield Manager) takes inventory of all the abstractions and pools, and is responsible for
managing and mapping these pools into the needs of higher-level entities like tenants or apps,
and aligning with higher-level virtual containers. Notions of multi-tenancy, isolation, elasticity,
and programmability via RESTful interfaces are also handled here.
Software-defined networking (SDN) will redefine networking as we know it in the decade to
come. At the very core of the SDN architecture, the network control plane is separated from the
network-forwarding plane. In essence, the network of the future follows the path internal network
device design has taken for the last two decades. Here, high intelligence directs fast forwarding,
with multiple OODA-style decision loops achieving efficient command and control.
Best Practice – Consider that simple orchestration does not mean simple solutions
Simple orchestration is not as easy as it sounds. Some believe that, given the higher level of
abstraction that the SDDC offers, the underlying infrastructure will become simple and
commoditized. Server, network, and storage are similar in how one initiates command and
control methodologies; as such, the focus here will be on the network.
One can argue that the low-cost generic network device will not succeed in the SDN
environment since high intelligence typically has been a requirement for directed fast forwarding
and routing. So, does this mean that as the SDN model takes hold and SDN solutions solidify,
devices deployed in the network—switches, routers, access points, etc.—are relegated to fast
forwarding duties only and, therefore, grow to be less intelligent and far less costly over time?
The answer is simple. No!
On the contrary, the network device in the SDDC environment will grow more intelligent, not
less. The five main reasons for this are [25]:
1. The SDN network device needs to take and execute orders on the fly. With a central
command and control system operating at the core of a SDN environment, orders will come
fast and furious. Triggered by any number of events or directives such as changing network
conditions, policy enforcement, service activation, and electronic threats, new orders will be
passed along to individual or groups of network devices in bunches, at any time. The
expectation is for these devices to execute these orders immediately upon receipt.
Understanding that the device is already hard at work directing traffic across the network
according to the last set of orders, incorporating new orders received will be a delicate and
challenging task. This task will require intelligent processing, decision-making, and
ultimately, execution. The reality is, the network cannot stop and start all over again.
2. The SDN network device needs to provide constant yet controlled feedback to the central controller. The central command and control system within a SDN environment
depends on a timely stream of accurate and relevant information relating to the operating
condition of the network. Network devices are the primary source of this information. They
form the eyes and ears of the central command and control system. Without these network
devices providing this information, the central command and control system is operating in
the dark. It makes sense that the more information, the better the decision, and resulting
orders of the central command and control system. However, as we all know, more
information is not always better. Here, network devices will likely need to make intelligent
decisions with regard to what should and should not be provided to the central command
and control system. Without some intelligent filtering, information overload could easily
swamp the central command and control system. Here, the term swamp translates to
slowdowns or even failures. This would be catastrophic in a SDDC environment.
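The filtering idea in point 2 can be sketched as a simple change-threshold filter at the device. The 10% significance threshold and the metric name below are invented for illustration; a real device would apply far richer policy to decide what the controller needs to see.

```python
class DeviceTelemetry:
    """Report a metric to the controller only when it has moved significantly."""
    def __init__(self, min_delta=0.10):
        self.min_delta = min_delta       # relative change worth reporting
        self.last_reported = {}

    def maybe_report(self, metric, value):
        # Return the value if it should be sent upstream, else None (suppressed).
        prev = self.last_reported.get(metric)
        if prev is None or abs(value - prev) / max(prev, 1e-9) >= self.min_delta:
            self.last_reported[metric] = value
            return value
        return None

dev = DeviceTelemetry()
reports = [dev.maybe_report("link_util", v) for v in (0.50, 0.52, 0.61, 0.62)]
```

Small fluctuations (0.50 to 0.52, 0.61 to 0.62) are suppressed while significant moves get through, keeping the controller informed without swamping it.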
3. The SDN network device needs to be easily and highly programmable. Improved
network dynamics is one of the core reasons behind the transition to SDDC. While today's
networks are able to adapt to changing network conditions to some degree (e.g. link
redirects, hardware failovers, policy changes), true network autonomics are still more dream
than reality. While SDN will be no "hands-off" panacea with respect to network self-
management, it will allow our networks to be more self-aware and self-directed. The
network device in this environment needs to be able to "change its spots" on-demand.
Today, changing how a network device operates most often requires, at a minimum, staff
intervention and, in a worst-case scenario, device upgrades. Imagine a device that can take
on new responsibilities or even completely different roles at the click of an icon or even in
reaction to a new order from the central command and control system. This type of
programmability requires not only very sophisticated software, but also well-thought out
hardware designs. The generic hardwired processor, adapter, or device will not survive in
this ever-shifting brave new SDDC world.
4. The SDN network device needs to match service needs to traffic conditions. Within a
SDN, the network device, while provided continual direction from the central command and
control system, is responsible for controlling and moving traffic efficiently and effectively
across the network in real-time. The network device is the police officer directing traffic at a
busy intersection during rush hour. While orders from the city planner or the police captain
may provide overall direction to the traffic control officer, critical decisions still need to be
made by the officer on the fly as conditions change—for better or worse. So too must the
network device merge policy and execution as needed. While lines of communication with
the central command and control system should always be open, decisions that must be
made immediately will need to be made by the network device itself. While the SDN model
separates the control plane from the forwarding plane, there are still control decisions to be
made at the level of the forwarding plane. For example, a large spike in traffic requires
some fine-tuning of the traffic queues to maintain service levels for certain flows (e.g. video
or voice). This fine-tuning may go against central orders, but it provides the best possible
service at the moment the spike hits. The performance requirements of tomorrow's
networked users and applications will demand empowered network devices—devices that
will be required to make intelligent decisions quickly.
5. The SDN network device needs to be simple to deploy, operate, and enhance.
Leonardo da Vinci is credited with the statement, "Simplicity is the ultimate sophistication." In
networking, complexity is the enemy. And simple does not equate to dumb. On the contrary,
the simple network device does more, while demanding less. It requires less time. It
requires less staff. It requires less reinvestment. It drives less risk. Think how many
versions of IOS are active in your network. Think how difficult it was to accommodate that
last business application dropped on your network. In networking, it seems that nothing is
ever simple. Why is that? Because simple is hard. As da Vinci indicated, simple is
sophisticated. The simple network device operating within a SDN environment will be both
simple and sophisticated.
If you look at SDN as a way to "dumb down" your network and lower your cost of networking
through generic device purchases, you are not interpreting the future benefits of SDN properly.
The low-cost generic network device will not succeed in the SDN environment. Rather, the
network device grows to be even more capable, more responsible, and more decisive in the
SDDC future. If you, instead, look at SDN as a way to simplify your network, better utilize
networking and networked resources, and, last but certainly not least, improve network
dynamics and therefore network service levels, then you are more on track to the SDDC future
and more in line with future network device design.
Security
Security is a pivotal aspect in the overall architecture of the Software Defined Data Center.
One of the major changes in this architecture paradigm is security in the network. As such,
SDN and security are closely intertwined. This section will discuss the security vulnerabilities
and protection focus areas that are changing in this new "software" world.
What are the SDDC fabric requirements? They include a scalable fabric with multi-pathing for
virtual machines, large cross-section bandwidth, high availability (HA) with fast convergence,
switch clustering (fewer switches to manage), and secure fabric services. Physical and virtual
workloads need a converged network supporting storage (FCoE, iSCSI, NAS, and FC-attach),
cluster traffic (RDMA over Ethernet), and link-level services (flow control, bandwidth allocation,
congestion management). High-bandwidth links are also becoming a requirement, moving from
10 GE to 40 GE to 100 GE.
Traditional Network Vulnerabilities
The traditional attack targets are applications (network apps and general apps, including
databases); servers (transport, OS, and hypervisors); and the network (routers, switches,
virtual switches, and other middleboxes). Table 2 - Traditional Network Vulnerability Issues
shows typical vulnerability examples from the various layers of the network. From cross-site
scripting to physical link tapping, all of these examples are applicable within any data center.
Traditional network defenses or protection mechanisms include: physical (secure perimeter);
servers (security protocols, defensive programming, etc.); and network layers and apps
(firewalls, intrusion detection, intrusion prevention, security protocols, ACLs, etc.).
as shown in Table 3 - New Attack Vectors. Plus, new
attack targets are being defined as a result of the new
SDN architecture such as the SDN controller itself. In
addition, the new virtual infrastructure, application
enhancements relating to server attacks on the
hypervisor, virtual switch, and VM. The network itself
with the requirement to adhere to the OpenFlow
protocol for OF enabled devices opens potential
issues. Possible solutions for SDN defense
approaches include the traditional protection
mechanisms, plus one or more of the following as
shown in Figure 25 - SDN Defense Approaches:
Table 2 - Traditional Network Vulnerability Issues
Table 3 - New Attack Vectors
1. Virtual security appliance: push VM-to-VM traffic through a virtual appliance
2. Physical security appliance: push traffic through a physical appliance
3. SDN-hosted security appliance: push traffic through an SDN-based appliance
In all three cases, there are advantages and disadvantages. With software-defined methodologies addressing security, there are a number of potential solutions that should be considered.
Best Practice – Consider implementing an OpenFlow network security kernel architecture

Dynamic network orchestration, driven by the elasticity benefits of server and desktop virtualization, delivers computing resources and network services on demand, spawned and recycled in reaction to network service requests. Frameworks such as OpenFlow (OF), discussed in detail in the section titled “OpenFlow”, embrace the paradigm of a highly programmable switch infrastructure that computes optimal flow routing rules from remote clients to virtually spawned computing resources. The question is: what network security policy can be embodied across a set of OF switches to enable a set of OF applications that react securely to the incoming stream of flow requests?
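To make the question concrete, the toy model below sketches how two independently submitted flow rules can embody contradictory policy. The FlowRule shape, the wildcard convention, and the conflicts() predicate are illustrative assumptions made for this article, not part of the OpenFlow specification.

```python
from dataclasses import dataclass

# Simplified model of an OpenFlow rule: a (source, destination) match
# plus an action. Real OF matches span many more header fields.
@dataclass(frozen=True)
class FlowRule:
    src: str     # source address; "*" matches anything
    dst: str     # destination address; "*" matches anything
    action: str  # "ALLOW" or "DROP"

def fields_overlap(a: str, b: str) -> bool:
    """Two match fields can cover the same packet if either is a
    wildcard or they are identical."""
    return a == "*" or b == "*" or a == b

def conflicts(r1: FlowRule, r2: FlowRule) -> bool:
    """Rules conflict when they can match the same flow yet disagree
    on the action to take."""
    return (fields_overlap(r1.src, r2.src)
            and fields_overlap(r1.dst, r2.dst)
            and r1.action != r2.action)

# A security application drops all traffic to a sensitive server...
block = FlowRule(src="*", dst="10.0.0.5", action="DROP")
# ...while another application later opens a path to the same host.
open_path = FlowRule(src="10.0.0.9", dst="10.0.0.5", action="ALLOW")
print(conflicts(block, open_path))  # True: the two rules contradict
```

Answering the question for a real network means performing this kind of check, generalized across all header fields and rewrite actions, on every rule insertion.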
Because the state of an OF switch must be continually reprogrammed to address the current flows, the question of what policy was embodied in the switch five minutes prior is as elusive to discern as what the policy will be five minutes into the future. One proposed solution is virtual network slicing, as in FlowVisor16 and the Beacon OpenFlow controller17, which enables secure network operations by segmenting, or slicing, network control into independent virtual machines. Each network domain is governed by a self-consistent OF application, architected not to interfere with the OF applications that govern other network slices. In this sense, OpenFlow security is cast as a non-interference property. However, even within a given network
16 R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar. Can the Production Network Be the Testbed? In Proceedings of the USENIX Symposium on Operating System Design and Implementation (OSDI), 2010.
17 OpenFlowHub. BEACON. http://www.openflowhub.org/display/Beacon.
Figure 25 - SDN Defense Approaches
slice, the problem remains that a network operator may still want to instantiate network security constraints that must be enforced within the slice. This highlights not only the need for well-defined security policy enforcement within the emerging software-defined network paradigm, but also the opportunity this paradigm offers for radically new innovations in dynamic network defense.
The “FortNOX” enforcement kernel is a new security policy enforcement kernel implemented as an extension to the open source NOX OpenFlow controller18. FortNOX incorporates a live rule conflict detection engine, which mediates all OpenFlow rule insertion requests. A rule conflict is said to arise when a candidate OpenFlow rule enables or disables a network flow that is otherwise inversely prohibited (or allowed) by existing rules. Rule conflict analysis is performed using a novel algorithm, called alias set rule reduction, that detects rule contradictions even in the presence of dynamic flow tunneling using “set” and “goto” actions. When such conflicts are detected, FortNOX may accept or reject the new rule, depending on whether the rule insertion requester operates with a higher security authorization than the authors of the existing conflicting rules. FortNOX implements role-based authentication for determining the security authorization of each OF application (rule producer), and enforces the principle of least privilege to ensure the integrity of the mediation process.
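The accept-or-reject decision can be sketched as follows. This is a minimal illustration of the mediation idea, not FortNOX's actual algorithm: the role hierarchy, the rule representation, and the exact-match conflict test are assumptions made for brevity (FortNOX's alias set rule reduction handles far subtler cases, including header rewrites).

```python
# Illustrative role hierarchy: human operators outrank security
# applications, which outrank ordinary OF applications.
ROLE_PRIORITY = {"operator": 3, "security_app": 2, "of_app": 1}

def conflicts(r1, r2):
    """Rules are (match, action) pairs; the same match with a
    different action counts as a conflict in this simplified model."""
    return r1[0] == r2[0] and r1[1] != r2[1]

def mediate(candidate, cand_role, table):
    """table holds (rule, role) entries. Install the candidate only if
    no conflicting rule was produced at equal or higher authority."""
    clashing = [entry for entry in table if conflicts(candidate, entry[0])]
    if any(ROLE_PRIORITY[cand_role] <= ROLE_PRIORITY[role]
           for _, role in clashing):
        return False          # reject: conflict with equal/higher authority
    for entry in clashing:    # evict conflicting lower-authority rules
        table.remove(entry)
    table.append((candidate, cand_role))
    return True

table = []
mediate(("dst=10.0.0.5", "DROP"), "security_app", table)   # installed
ok = mediate(("dst=10.0.0.5", "ALLOW"), "of_app", table)   # rejected
print(ok)  # False: the security application's rule stands
```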
There are challenges in security policy enforcement in SDNs. Security policy in SDNs is a function of the connection requests received by OF applications. OF applications may compete, contradict, or override one another, incorporate vulnerabilities, or even be written by adversaries. In the worst case, an adversary can use a deterministic OF application to control the state of every OF switch in the network. The possibility of multiple (custom and third-party) OpenFlow applications running on a network controller introduces a unique policy enforcement challenge, since different applications may insert different control policies dynamically. How does the OF controller guarantee that they do not conflict with one another? The challenge is ensuring that all OF controller applications avoid violating security policies in large real-world (enterprise/cloud) networks with many OF switches, diverse OF applications, and complex security policies. Performing this job manually is clearly error-prone and challenging.
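The scale of that manual task is visible even in a toy model. The audit pass below is an illustrative sketch, assuming a simplified (app, match, action) rule shape and exact-match comparison; it reports every pair of rules from different applications that target the same flow but disagree on the action:

```python
from itertools import combinations

def audit(rules):
    """Report every pair of rules, inserted by different applications,
    that match the same flow but disagree on the action."""
    findings = []
    for (app1, m1, a1), (app2, m2, a2) in combinations(rules, 2):
        if app1 != app2 and m1 == m2 and a1 != a2:
            findings.append((app1, app2, m1))
    return findings

rules = [
    ("firewall_app", "dst=10.0.0.5", "DROP"),
    ("lb_app",       "dst=10.0.0.5", "FORWARD"),
    ("lb_app",       "dst=10.0.0.6", "FORWARD"),
]
print(audit(rules))  # [('firewall_app', 'lb_app', 'dst=10.0.0.5')]
```

Pairwise comparison grows quadratically with the rule count, and real OF rules overlap in far subtler ways than exact match equality, which is why automated, controller-resident enforcement is attractive.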
18 N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker. NOX: Towards an
Operating System for Networks. In Proceedings of ACM Computer Communications Review, July 2008.
Best Practice – Consider implementing an upcoming SDN security solution: FortNOX

FortNOX19 extends the NOX OpenFlow controller (see the section titled “OpenFlow” for NOX details) by providing non-bypassable, policy-based flow rule enforcement over flow rule insertion requests from OpenFlow applications. Its goal is to enhance NOX with the ability to enforce network flow constraints (expressed as flow rules) produced by OF-enabled security applications that wish to reprogram switches in response to perceived runtime operational threats. Once a flow rule is inserted into FortNOX by a security application, no peer OF application can insert flow rules into the OF network that conflict with it. Further, FortNOX enables a human administrator to define a strict network security policy that overrides the set of all dynamically derived flow rules. A conflict refers to one or more candidate flow rules that are determined to enable a communication flow otherwise prohibited by one or more existing flow rules.
FortNOX’s ability to prevent conflicts is substantially greater than the simple overlap detection commonly provided in switches. FortNOX comprehends conflicts among flow rules even when the conflict involves rules that use set operations to rewrite packet headers in ways that establish virtual tunnels between two endpoints. FortNOX resolves rule conflicts by deriving authorization roles from digitally signed flow rules, where each application can sign its flow rule insertion requests, resulting in a privilege assignment for the candidate flow rule.
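Signature-derived privilege can be sketched with a symmetric-key scheme. This is an assumption-laden illustration rather than FortNOX's actual signing mechanism (key management, serialization, and role names here are all invented for the example): each application signs its request with a key registered against a role, and the controller derives the rule's authorization from the validated signature.

```python
import hashlib
import hmac

# Hypothetical registry mapping each application to its signing key
# and the security role that key attests to.
APP_KEYS = {
    "ids_app": (b"secret-ids-key", "security_app"),
    "lb_app":  (b"secret-lb-key",  "of_app"),
}

def sign_rule(app: str, rule: str) -> bytes:
    """The application signs its flow rule insertion request."""
    key, _ = APP_KEYS[app]
    return hmac.new(key, rule.encode(), hashlib.sha256).digest()

def authorize(app: str, rule: str, signature: bytes) -> str:
    """Return the role the signature proves; unsigned or invalid
    requests fall back to the lowest authorization."""
    key, role = APP_KEYS.get(app, (b"", "of_app"))
    expected = hmac.new(key, rule.encode(), hashlib.sha256).digest()
    return role if hmac.compare_digest(expected, signature) else "of_app"

rule = "dst=10.0.0.5 action=DROP"
sig = sign_rule("ids_app", rule)
print(authorize("ids_app", rule, sig))        # security_app
print(authorize("ids_app", rule, b"forged"))  # of_app (lowest role)
```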
19 http://www.techrepublic.com/whitepapers/a-security-enforcement-kernel-for-openflow-
networks/12844380
Figure 26 - FortNOX Architecture
The diagram in Figure 26 - FortNOX Architecture illustrates the components that compose the FortNOX extension to NOX. At the center of NOX lies an interface called send_openflow_command(), which is responsible for relaying flow rules from an OF application to the switch. FortNOX extends this interface with four components. A Role-based Source Authentication module provides digital signature validation for each flow rule insertion request, assigning the appropriate priority to a candidate flow rule, or the lowest priority if no signature is provided. The Conflict Analyzer is responsible for evaluating each candidate flow rule against the current set of flow rules within the Aggregate Flow Table. If the Conflict Analyzer determines that the candidate flow rule is consistent with the current network flow rules, the candidate rule is forwarded to the switch and stored in the aggregate flow table, which is maintained by the State Table Manager.
FortNOX adds a flow rule timeout callback interface to NOX, which updates the aggregate flow table when switches perform rule expiration. Two additional interfaces enable FortNOX to provide enforced flow rule mediation. An IPC proxy enables a legacy native OF application to be instantiated as a separate process, ideally operated from a separate non-privileged account. The proxy interface adds a digital signature extension, enabling these applications to sign flow rule insertion requests, which in turn enables FortNOX to impose role separations based on these signatures. Through process separation, FortNOX can enforce a least-privilege principle in the operation of the control infrastructure. Through the proxy mechanism, OF applications may submit new flow rule insertion requests, but these requests are mediated separately and independently by the conflict resolution service operated within the controller. A security directive translator can also be developed, enabling security applications to express flow constraint policies at a higher layer of abstraction, agnostic to the OF controller, OF protocol version, or switch state. The translator receives security directives from a security application, translates each directive into applicable flow rules, digitally signs those rules, and forwards them to FortNOX.
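Such a translator might look like the sketch below. The directive names, the rule format, and the omission of the signing step are assumptions made for illustration; the point is only that one high-level security directive expands into several concrete, controller-agnostic flow rules.

```python
# Hypothetical security directive translator: expand a high-level
# directive from a security application into concrete flow rules.
# (Digitally signing the resulting rules is omitted for brevity.)
def translate(directive: str, target: str):
    if directive == "quarantine":
        # Isolate the host by blocking traffic in both directions.
        return [{"match": f"src={target}", "action": "DROP"},
                {"match": f"dst={target}", "action": "DROP"}]
    if directive == "redirect_to_inspection":
        # Steer traffic bound for the host through an inspection point.
        return [{"match": f"dst={target}", "action": "OUTPUT:inspect"}]
    raise ValueError(f"unknown directive: {directive}")

rules = translate("quarantine", "10.0.0.7")
print(len(rules))  # 2: one rule per direction
```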
FortNOX is still in development, and work continues. Prototype implementations for newer controllers such as Floodlight20 and POX are under way, along with support for security enforcement in multi-controller environments, improved error feedback to OF applications, and optimized rule conflict detection. Hopefully, this technology will not only allow the SDDC to achieve the goals outlined here, but also keep us safe.

20 http://floodlight.openflowhub.org/
Conclusion

The data center is in major transformation mode; data centers are going soft in a big way. This article has discussed the trend toward a more software-defined data center. This transformation is the final step in allowing cloud services to be delivered most efficiently. Vendors will need to address this change and understand that “lock-in”, from a solution perspective, may no longer be the best business model.

As IT practitioners, we all need to understand the importance of abstraction and deal with complexity head on. Through proper education venues, including EMC Proven Professional, developing the right skills and embracing abstraction are key, and confusion will become a thing of the past.

The three caballeros, on well-bred virtualized horses, will lead the charge toward this new way of operationalizing the data center. This Knowledge Sharing article described how data centers are going soft and why the Software Defined Data Center is so pivotal in transforming the data center into a true cloud delivery model. Solutions were discussed, offering best practices that align with the most important goal: creating the next-generation data center, which addresses the business challenges of today and tomorrow through business and technology transformation.
Author’s Biography

Paul Brant is a Senior Education Technology Consultant at EMC in the New Product Readiness Group, based in Franklin, MA. He has over 29 years of experience in semiconductor design, board-level hardware and software design, IT technical pre-sales solutions selling, marketing, and educational development and delivery. He also holds a number of patents in the data communication and semiconductor fields. Paul has a Bachelor’s degree (BSEE) and a Master’s degree (MSEE) in Electrical Engineering from New York University (NYU), located in downtown Manhattan, as well as a Master’s in Business Administration (MBA) from Dowling College, located in Suffolk County, Long Island, NY. In his spare time, he enjoys his family of five, bicycling, and various other endurance sports. Certifications include EMC Proven Cloud Architect Expert, Technology Architect, NAS Specialist, VMware VCP5, and Cisco CCDA.
Appendix A – References

[1] Onix: A Distributed Control Platform for Large-scale Production Networks, Teemu Koponen, Martin Casado, Natasha Gude, Jeremy Stribling, Leon Poutievsky, Min Zhu, Rajiv Ramanathan, Yuichiro Iwata, Hiroaki Inoue, Takayuki Hama, Scott Shenker
[2] Information-Centric Networking: Seeing the Forest for the Trees, Ali Ghodsi, KTH / UC Berkeley, Teemu Koponen, Nicira Networks, Barath Raghavan, ICSI, Scott Shenker, ICSI / UC Berkeley, Ankit Singla, UIUC, James Wilcox, Williams College
[3] 802.1ag - Connectivity Fault Management Standard.
[4] http://www.ieee802.org/1/pages/802.1ag.html
[5] BELARAMANI, N., DAHLIN, M., GAO, L., NAYATE, A., VENKATARAMANI, A.,
YALAGANDULA, P., AND ZHENG, J. PRACTI Replication. In Proc. NSDI (May 2006).
[6] CAESAR, M., CALDWELL, D., FEAMSTER, N., REXFORD, J., SHAIKH, A., AND VAN DER
MERWE, K. Design and Implementation of a Routing Control Platform. In Proc. NSDI (April
2005).
[7] CAI, Z., DINU, F., ZHENG, J., COX, A. L., AND NG, T. S. E. The Preliminary Design and
Implementation of the Maestro Network Control Platform. Tech. rep., Rice University,
Department of Computer Science, October 2008.
[8] CASADO, M., FREEDMAN, M. J., PETTIT, J., LUO, J., MCKEOWN, N., AND SHENKER, S.
Ethane: Taking Control of the Enterprise. In Proc. SIGCOMM (August 2007).
[9] CASADO, M., GARFINKEL, T., AKELLA, A., FREEDMAN, M. J., BONEH, D., MCKEOWN,
N., AND SHENKER, S. SANE: A Protection Architecture for Enterprise Networks. In Proc.
Usenix Security (August 2006).
[10] CASADO, M., KOPONEN, T., RAMANATHAN, R., AND SHENKER, S. Virtualizing the
Network Forwarding Plane. In Proc. PRESTO (November 2010).
[11] COOPER, B. F., RAMAKRISHNAN, R., SRIVASTAVA, U., SILBERSTEIN, A.,
BOHANNON, P., JACOBSEN, H.-A., PUZ, N., WEAVER, D., AND YERNENI, R. PNUTS:
Yahoo!’s Hosted Data Serving Platform. In Proc. VLDB (August 2008).
[12] DECANDIA, G., HASTORUN, D., JAMPANI, M., KAKULAPATI, G., LAKSHMAN, A.,
PILCHIN, A., SIVASUBRAMANIAN, S., VOSSHALL, P., AND VOGELS, W. Dynamo:
Amazon’s Highly Available Key-value Store. In Proc. SOSP (October 2007).
[13] DIXON, C., KRISHNAMURTHY, A., AND ANDERSON, T. An End to the Middle. In Proc.
HotOS (May 2009).
[14] DOBRESCU, M., EGI, N., ARGYRAKI, K., CHUN, B.-G., FALL, K., IANNACCONE, G.,
KNIES, A., MANESH, M., AND RATNASAMY, S. RouteBricks: Exploiting Parallelism To
Scale Software Routers. In Proc. SOSP (October 2009).
[15] FARREL, A., VASSEUR, J.-P., AND ASH, J. A Path Computation Element (PCE)-Based
Architecture, August 2006. RFC 4655.
[16] GODFREY, P. B., GANICHEV, I., SHENKER, S., AND STOICA, I.
[17] Pathlet Routing. In Proc. SIGCOMM (August 2009).
[18] GREENBERG, A., HAMILTON, J. R., JAIN, N., KANDULA, S., KIM, C., LAHIRI, P.,
MALTZ, D. A., PATEL, P., AND SENGUPTA, S. VL2: A Scalable and Flexible Data Center
Network. In Proc. SIGCOMM (August 2009).
[19] GREENBERG, A., HJALMTYSSON, G., MALTZ, D. A., MYERS, A., REXFORD, J., XIE,
G., YAN, H., ZHAN, J., AND ZHANG, H. A Clean Slate 4D Approach to Network Control and
Management. SIGCOMM CCR 35, 5 (2005), 41–54.
[20] GUDE, N., KOPONEN, T., PETTIT, J., PFAFF, B., CASADO, M., MCKEOWN, N., AND
SHENKER, S. NOX: Towards an Operating System for Networks. In SIGCOMM CCR (July
2008).
[21] HANDLEY, M., KOHLER, E., GHOSH, A., HODSON, O., AND RADOSLAVOV, P.
Designing Extensible IP Router Software. In Proc. NSDI (May 2005).
[22] HINRICHS, T. L., GUDE, N. S., CASADO, M., MITCHELL, J. C., AND SHENKER, S.
Practical Declarative Network Management. In Proc. of SIGCOMM WREN (August 2009).
[23] HUNT, P., KONAR, M., JUNQUEIRA, F. P., AND REED, B. ZooKeeper: Wait-free
Coordination for Internet-Scale Systems. In Proc. Usenix Annual Technical Conference
(June 2010).
[24] http://www.definethecloud.net/sdn-centralized-network-command-and-control
[25] http://www.bradreese.com/blog/8-29-2012.htm
Index

abstraction, 8, 11, 12, 14, 15, 22, 23, 27, 35, 39, 40, 41, 43, 49, 53, 60, 61
Al Gore, 7, 12
API, 23, 27, 43
Application, 41, 42, 47
Artifact, 21
attack, 56
Caballeros, 1, 7, 8
Carriers, 18, 20, 31
CCNE, 9
Cloud, 30, 61
cloud services, 6, 17, 27, 31, 61
complex, 8, 9, 12, 19, 20, 21, 30, 35, 45, 58
Consumerization, 17
Discipline, 21
EMC, 5
Entropy, 9
FLASH, 10
InfiniBand, 23
Integrated development environments, 21
IP, 7, 9, 10, 19, 21, 22, 24, 32, 35, 36, 39,
63
John Boyd, 51
mastering complexity, 21, 22
middleboxes, 56
military, 51
network, 6, 9, 10, 13, 14, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
32, 33, 36, 37, 38, 39, 40, 42, 43, 44, 45,
46, 47, 48, 49, 50, 52, 53, 54, 55, 56, 57,
58, 59, 60, 63
Network Caballero, 18
OODA, 51, 52, 53
OpenFlow, 16, 27, 30, 31, 32, 33, 35, 36,
37, 38, 39, 41, 48, 49, 56, 57, 58, 59
Orchestration, 7, 49, 53
Orderliness, 7, 9, 11
RESTful, 14, 43, 52
Robert Glass, 9
SAN, 11, 23
SDDC, 7, 8, 11, 12, 14, 16, 17, 20, 23, 38,
43, 44, 45, 47, 48, 49, 50, 51, 52, 53, 54,
55, 60
SDN, 6, 16, 17, 20, 23, 26, 27, 28, 29, 30,
31, 33, 36, 37, 38, 39, 45, 46, 48, 49, 52,
53, 54, 55, 56, 57, 58, 59
secure perimeter, 56
security, 13, 16, 18, 19, 24, 27, 28, 30, 37,
38, 52, 55, 56, 57, 58, 59, 60
Security, 7, 38, 52, 55, 57, 58, 62
server, 6, 13, 15, 17, 19, 21, 23, 29, 30, 40,
42, 43, 44, 50, 52, 56, 57
Service Providers, 31
simplicity, 11, 21, 22, 44
Software Defined Storage, 14, 40, 42
storage, 6, 10, 11, 13, 14, 17, 18, 21, 23,
25, 27, 28, 30, 40, 41, 42, 43, 44, 45, 50,
52, 53
vCloud, 14, 52
vCloud Director, 14, 52
virtual switch, 56
VMware, 14, 23, 38, 39, 41, 43, 49, 50, 52
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.