Networks as a Resource
Jerry Sobieski
NORDUnet A/S

Presented to: Workshop on Trans-Atlantic Networking for LHC, June 10-11, 2010, CERN, Geneva, CH
Overview
• “Network as a Resource” is a paradigm for framing network service capabilities as definable, quantifiable assets for an application.
– This concept requires that we think of network services not as a general-purpose cloud, but in ways that guarantee the user certain qualities – a certain performance level, predictability, schedulability, ownership, etc.
• This presentation is a somewhat meandering description of several similar concepts and several overlapping activities…
What’s different now?
• From the network perspective, best-effort IP service is no longer the only network service presented to the user:
– IPv6
– Ethernet services, VPN services
– wavelength services with a choice of framing
– dynamic provisioning
– How do we present these network capabilities to the user?
• From the application perspective, we no longer consider a single executable object module to be the application. Indeed, a broad set of interacting processes and facilities is now routinely considered to be – as a whole – the “application”
– LHC activities – High Energy Physics
– SKA eVLBI – Radio Astronomy
– Gene sequencing – Biomedical
• [High-end] applications are large-scale, long-term endeavors that involve many cooperative processes of gathering, analyzing, storing, and interpreting scientific information
Evolving
• From the network perspective, we still approach networking in essentially the same fashion we did in the 1990s: there is a 7-layer model, each layer acts independently of the others, and TCP/UDP are as high up the stack as we go.
– With only some exceptions at the very high end (LHCnet probably being a good emerging example), our network services model has not changed substantially since the Internet took off.
– We look at network services only at the flow level, measured in bandwidth provided/used, packets dropped, and latency.
• With the advent of Grid Technologies, these applications have become much more dynamic
• Some very ambitious applications are trying to emerge that combine large data flows with real-time requirements and/or very low packet loss rates
• While there are well established mechanisms for finding and allocating computation resources and storage resources, and for sharing and allocating access to expensive instruments and sensors, we still think of network resources as independent of these other IT resources
“Resources”
• A “resource” is a quantifiable capability that can be incorporated into a task or activity and that enables the task to be accomplished in a particular manner or timeframe
– Ex: computational resources, storage resources, sensors and instruments
– New: network transport capability… the “network resource”
• Resources are defined with respect to the anticipated manner in which they are to be used:
– For example: a “computational” resource may be characterized by its processor architecture, clock speed, L1/L2 cache, associated RAM, operating system, etc.
– A “storage” resource may be characterized by capacity, transfer rates, robustness, persistence, access mode(s), user interface, etc.
– Network resources are characterized by capacity, framing, end points, etc
“Resources”
• A resource is represented as a discrete object with a unique name and certain characteristics
• A resource also has a defined set of interfaces that allow it to be logically linked with other resources
• Resources such as CPUs, storage, etc. can be combined so as to create sophisticated information processing environments that are tailored to a particular need
– Example: 32 computational nodes may be combined to create a “cluster” for some computational process.
• Network resources can be similarly defined and manipulated
Network Resources
• The fundamental purpose of the network is to transport data from one location to another
• Circuit-based networks use a priori information to establish the transport path
• Conventional packet networks do this on the fly based upon packet headers
• The fundamental network resource is a “Connection” (or connectivity between source and sink):
– A transparent conduit that carries user data from an ingress point to an egress point.
• In the NaaR working concept, network connections are viewed as Resource Requests
• The resources are drawn from multiple pools of infrastructure that have been designed to meet certain types of transport capabilities – the Service Definition or the Service Specification
• Processes such as Path Finding are viewed as constraint based search across the resource pools (topology)
• This model is being developed in several activities:
– OGF Network Services Interface WG
– GEANT3 SA2-T1 Multi-Domain Service Architecture
– Global Lambda Integrated Facility (GLIF)
– FEDERICA
– MANTICORE
Network Resources vs Network Services
• A network resource is a quantifiable portion of the [network] infrastructure allocated to a particular purpose.
• In this NaaR model, the application is described at an abstracted logical level removed from particular hardware assignments
• This allows middleware to select the physical resources that will host the application.
– This works for network resources as well as for non-network resources
The Service Definition
• Network Resources are allocated from the service capabilities designed and engineered into the network
– Resource requests presented to that network must conform to the characteristics designed into the network
– This seems trite, but we build networks on an autonomous basis – and services do not always completely match.
• The Service Definition is a [proposed] mechanism for formally comparing service capabilities
The Service Definition
• The concept is that each network publishes a [machine-readable] description of its connection service capabilities:
– Service name: EtherBasic
• Capacity: 50 Mbps .. 1000 Mbps, in 50 Mbps increments
• Framing: 802.1, 802.1Q, 802.1ad
• MTU: 1500 bytes
• MaxFrameLossRate: 1x10^-6
• These service definitions can be duplicated in different networks, defined by consensus, or not
– Different SDs can be compared in an automated fashion to ensure compatibility with a Resource request
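The EtherBasic example above can be expressed as data plus a validation function. The following is a minimal sketch in Python; the field names (capacity_min, capacity_step, framing, mtu) are illustrative assumptions, not from any published Service Definition schema:

```python
# Sketch of a machine-readable Service Definition and an automated
# compatibility check against a resource request. Field names are
# illustrative assumptions, not from any published SD schema.

def compatible(sd, request):
    """Return True if every constraint in the request fits the SD."""
    if not (sd["capacity_min"] <= request["capacity"] <= sd["capacity_max"]):
        return False
    if request["capacity"] % sd["capacity_step"] != 0:
        return False                      # not on the allocation granularity
    if request["framing"] not in sd["framing"]:
        return False
    if request["mtu"] > sd["mtu"]:
        return False
    return True

ether_basic = {                           # the EtherBasic example above
    "name": "EtherBasic",
    "capacity_min": 50,                   # Mbps
    "capacity_max": 1000,                 # Mbps
    "capacity_step": 50,                  # Mbps granularity (an assumption)
    "framing": {"802.1", "802.1Q", "802.1ad"},
    "mtu": 1500,                          # bytes
    "max_frame_loss_rate": 1e-6,
}

print(compatible(ether_basic, {"capacity": 200, "framing": "802.1Q", "mtu": 1500}))  # True
```

The same check works in reverse for comparing two published SDs: each constraint range of one definition is tested for containment in the other.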
Comparing Service Definitions
[Figure: two example Service Definitions compared side by side]
• Network A Service Definition: Protocol_Bandwidth 1 Gbps, Max_BER < 1x10^-15, Max_MTU_Size 9000 B
• Network B Service Definition: Protocol_Bandwidth 1 Gbps, Max_BER < 1x10^-13, Max_MTU_Size 9000 B
Network Resource Requests
• The network Resource request is issued by a “requesting agent” to a “provider agent” (NSI terminology)
• The resource request is applied to the service definition to validate the request, i.e. to tell if the network has the capability to deliver the requested service
• The values specified in the resource request are then used to perform a constrained k-shortest path search of the topology to produce the infrastructure components that meet the resource request.
• The infrastructure constitutes the resource pool from which the resources are allocated.
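The constraint-based search idea can be sketched as follows: links that cannot satisfy the requested capacity are pruned before the path search. The topology, capacities, and unit link costs here are made-up assumptions, and a production implementation would use a true k-shortest-path algorithm (e.g. Yen's) over a richer topology model rather than this single-path Dijkstra variant:

```python
# Minimal sketch of constraint-based path finding: prune links below the
# requested capacity, then run a plain shortest-path search over what
# remains. Topology and capacities are invented for illustration.
import heapq

def find_path(links, src, dst, capacity):
    """links: {(a, b): available_mbps}; returns a hop list or None."""
    # Build an adjacency map using only links with enough spare capacity.
    adj = {}
    for (a, b), cap in links.items():
        if cap >= capacity:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    # Standard Dijkstra with unit link costs.
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + 1, nxt, path + [nxt]))
    return None                 # no path satisfies the constraint

links = {("A", "B"): 1000, ("B", "C"): 100, ("A", "D"): 1000, ("D", "C"): 1000}
print(find_path(links, "A", "C", 500))   # ['A', 'D', 'C'] -- the B-C link is too small
```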
Consensus
• These notions of how one defines a network service and manages the resource pool are still being “discussed”
– For instance: the NSI WG architecture has been using a unidirectional connection as the atomic resource unit
• Bi-directional connections are constructed from two unidirectional resources
• The GN3-SA2-T1 discussed this and decided that for other (practical) reasons it was better to define the connection resources as a bidirectional unit
– Consensus is preferred, but not required as long as the SDs indicate the differences in the allocatable resource unit.
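The two modelling choices above can be sketched side by side; the class and field names are illustrative assumptions, not taken from either specification:

```python
# Sketch of the two atomic-resource models discussed above: a
# bidirectional connection composed of two unidirectional resources
# (the NSI WG approach) vs. a single bidirectional atomic unit
# (the GN3-SA2-T1 approach). Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class UniConnection:              # atomic unit in the NSI WG model
    src: str
    dst: str
    capacity_mbps: int

@dataclass(frozen=True)
class BiConnection:               # atomic unit in the GN3 model
    a: str
    b: str
    capacity_mbps: int

def bidirectional_from_uni(a, b, capacity):
    """Compose a bidirectional service from two unidirectional resources."""
    return (UniConnection(a, b, capacity), UniConnection(b, a, capacity))

fwd, rev = bidirectional_from_uni("Aruba", "Bonaire", 1000)
print(fwd.src, "->", fwd.dst)     # Aruba -> Bonaire
print(rev.src, "->", rev.dst)     # Bonaire -> Aruba
```

Either model can describe the other's service, which is why the slide notes that consensus is preferred but not strictly required if the SDs state the unit.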
NSI Architecture
[Diagram: NSI architecture – a Requesting Agent speaks the NSI protocol to a Provider Agent; each Network Service Agent (NSA) contains a Resource Manager (RM). The RA/PA relationship repeats across domains, so a request traverses Networks A, B, and C via chained NSI protocol exchanges between endpoints A through E.]
Request Processing Tree

[Diagram: a resource request between the example users Ashley and Chuck is processed as a tree of sub-requests spanning the example networks Aruba, Bonaire, Curacao, and Dominica, shown at three levels of detail: the Abstracted NSI Topology Model, the Federated Transport Topology, and the Physical Transport Topology.]
A Network from the Resources
• Allocating and integrating network transport resources – connections – is generally not sufficient to realize the application’s interprocess communications environment
– IP services are desired, or necessary, to be mapped into the logical application specification
– An address block must be available (can often be private addresses)
– Subnets must be allocated from that block
– Addresses assigned to the interfaces
• Routing and forwarding requirements need to be considered, and the topology adjusted to incorporate these routers
– Logical routers can be allocated on existing infrastructure routing hardware
– An internal routing protocol (e.g. OSPF) must be selected and configured
– An external routing protocol (e.g. BGP) must be configured
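The IP-layer steps listed above (address block, subnet allocation, interface addressing) can be sketched with Python's standard ipaddress module; the connection names and the /30-per-link choice are illustrative assumptions:

```python
# Sketch of carving per-link subnets out of an allocated (private)
# address block and assigning interface addresses. Connection names
# and the /30 prefix length per point-to-point link are assumptions.
import ipaddress

block = ipaddress.ip_network("192.168.0.0/22")   # the allocated block
subnets = block.subnets(new_prefix=30)           # one /30 per p2p link

for conn in ["X-to-C", "Y-to-C", "Z-to-C"]:      # hypothetical connections
    net = next(subnets)
    a, b = list(net.hosts())                     # the two usable addresses
    print(f"{conn}: {a}/{net.prefixlen} <-> {b}/{net.prefixlen}")
```

A /22 block yields 256 such /30 subnets, so the same loop scales to the full set of connections in an application-specific topology.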
TNC 2008 MANTICORE Demo
• During the TERENA Networking Conference 2008 (Bruges, 19-22 May) at the Juniper booth, the following scenario was demonstrated (using 1 Juniper M7i router)
[Diagram: the demonstrated topology – five logical routers (router1.rediris.es through router5.rediris.es) on the single M7i. Routers 1-3 form OSPF AREA 0 inside AS1 (loopbacks 10.10.1.1-3/32, link subnets 192.168.0.x, 192.168.1.x, and 192.168.2.x on interfaces ge-0/0/0 through ge-2/0/0); router4 (AS10, lo0 10.10.10.4/32) and router5 (AS20, lo0 10.10.20.5/32) peer with AS1 over eBGP and a static route via 192.168.10.x and 192.168.20.x on the ge-3/0/0 interfaces.]
An Application Example: Electronic Very Long Baseline Interferometry (E-VLBI)
– The radio astronomy community shares its resources… an international collaborative effort:
– 25 to 30 antennae…
• 3000 sensors in SKA
– 10 to 15 correlator sites (special DSP hardware)
• Distributed software correlators are being developed (work being done at JIVE and UvA)
– Real-time or near-real-time processing desired for portions of the data, with post-processing for most data
– Approx. 10-30 Gbps today
• 5 Tbps ASKAP (today)
• 200 Tbps for full SKA (2015-2017)
The E-VLBI poster child example: a Real-time Application Specific Sensor Network

[Diagram: a simple first-phase e-VLBI application specific topology – telescopes feed a correlator and a visualization station across the global R&E hybrid infrastructure.]
The “Application Specific Network”

[Diagram: the logical e-VLBI topology – telescopes X, Y, and Z connected to correlator C – and two physical instantiations of the application specific topology: one with C at MIT Haystack (US) and telescopes at Kashima (JP), Westford (US)/NASA Goddard (US), and Onsala (SE); another with C at Dwingeloo (NL) and telescopes at Koke Park (HI), Seshan (CN), and Kashima (JP).]
Networks as a Resource

[Diagram: multiple application specific topologies – an E-VLBI AST, an HEP AST, and a BioInfo AST – mapped concurrently onto the shared infrastructure.]
Big Data…
• Point: while LHC is the 600 kg gorilla today…
– Other applications are emerging that are architecting much larger next-generation distributed applications
• These larger applications will not be linked to specific hardware, but will ebb and flow and morph to where the cyber-resources are available
• We should be pursuing these distributed virtualization models for large scale applications
Getting there from here…
• First, from the application perspective, we need a means of describing the application in a formal abstracted manner
• This must be able to describe all the key functional components of the application using a parsable syntax/semantics
• We must have appropriate middleware that can take the abstracted application description and locate and acquire the resources
– Grid middleware already does this for the computational processes
– But incorporating network resources (or other types of resources) injects resource dependencies,
• i.e. is there adequate network capacity between the selected computational resources?
– Directories, Resource Brokers, Resource Managers, …
Getting there from here…
• Since the network resources span multiple geographic and administrative boundaries, we need a common interface to request the resources from the network
• The Network Service Interface (NSI) recommendation is making its way through OGF.
– It describes a simple but flexible Requester/Provider process that allows a user to request resources in the form of a connection. (Later releases of the recommendation will define other types of NSI interactions to share topology, directory information, and possibly monitoring information.)
– The RA/PA relationship is duplicated at the inter-domain boundaries so that resource requests can be dissected, acquired, and assembled by making resource requests downstream.
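That recursive RA/PA pattern can be sketched as a toy chain of provider agents, where each agent serves the part of the request inside its own domain and acts as a requester toward the next domain. The domain names, border points, and message strings below are invented for illustration; this is not the actual NSI protocol or its state machine:

```python
# Toy sketch of the recursive RA/PA pattern: a provider agent either
# terminates a connection request in its own domain or reserves a local
# segment to a border point and issues a sub-request downstream.
# All names and message formats are illustrative assumptions.

class ProviderAgent:
    def __init__(self, domain, endpoints, downstream=None):
        self.domain = domain
        self.endpoints = set(endpoints)   # endpoints inside this domain
        self.downstream = downstream      # next PA toward other endpoints

    def reserve(self, src, dst, capacity):
        """Acts as PA for its own domain and as RA toward downstream."""
        if dst in self.endpoints:
            return [f"{self.domain}: {src}->{dst} @ {capacity}Mbps"]
        border = f"{self.domain}/border"  # hypothetical inter-domain point
        local = f"{self.domain}: {src}->{border} @ {capacity}Mbps"
        return [local] + self.downstream.reserve(border, dst, capacity)

pa_c = ProviderAgent("NetworkC", ["D", "E"])
pa_b = ProviderAgent("NetworkB", ["B", "C"], downstream=pa_c)
pa_a = ProviderAgent("NetworkA", ["A"], downstream=pa_b)

for segment in pa_a.reserve("A", "E", 500):
    print(segment)                        # one reserved segment per domain
```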
So, …Challenges
• Adoption vs Adaptation
– Interoperability, and a critical mass of networks deploying it, is key.
• We don’t really need any new provisioning tools until we have a working, ubiquitous global service environment
– We now need to adopt a single common interdomain interface protocol for requesting and reserving network resources
• Growth of hybrid services – and the resulting network capabilities such as application specific networks and NaaR – is being retarded by constantly changing software tools
– UCLP, DCN/OSCARS, AutoBahn, DRAC, Fenius, …
• Which of these posit interdomain interoperability?
Challenges
• Simplification
– Network engineers are not systems developers.
– The software suites we develop and expect to function in production environments must be easier to install, deploy, configure, and manage.
• This is particularly true of software interfaces for the user community – e.g. complex web-services-based tools are often (mostly) too difficult for the user community to incorporate.
• Follow the commercial example of simple download-and-install so that adoption is not impeded by the installation.
Challenges
• Integration
– We need to begin defining applications as logical constructs abstracted from the hardware
• Understand how the network transport requirements interact with the computational and other requirements of the overall application
– This will allow automated tools to more effectively manage the allocation and mapping of these applications to the hardware infrastructure we deploy
• This formalization will also help us understand how overall application performance can be improved, and better understand which applications are using the [network] facilities we engineer and operate.
Challenges
• Virtualization
– The ability to completely define an application in a manner that captures the essential functional requirements of the user without incorporating specific physical infrastructure.
• This allows the control plane/middleware software to find and incorporate resources that meet the needs of the user wherever they are available
– Recursive virtualization will provide better ways to allocate and sub-allocate resources.
• Application Specific Networks
– Given the ability to request and manipulate connections as well-defined objects, and then incorporate many such connections that link computational, storage, and sensor/instrument devices…
– How do we map a “network” onto these resources?
• What addressing scheme do we use? How do we allocate subnets? What routing protocols should we use? What switching or forwarding capabilities do we require? How do we advertise internal reachability to the external world?
• MANTICORE seems to stand alone in the R&E space for automating the IP layer configuration
Challenges
• Software Support
– As successful tools and BCPs emerge, how do we create reliable and supported software that we are willing to deploy on a production basis?
– Feature Roadmaps
• We are working too far ahead of the technology wave for the vendor community to see a market as soon as we do a demo…
• We need a mechanism that plans our software roadmap to identify, develop, and support a continuing stream of new features until there is enough traction in the commercial market to sustain the process.
Summary
• Look for more integration and abstraction of resources within applications
• Better and more formalized service architecting will be emerging in the next 2+ years via Service Definitions and standardization.
• The NSI 1.0 recommendation will form a common interdomain interface for dynamic network connections
– This is the single most important requirement for global adoption
The End
Thank You