Advanced Network Services Tomorrow October 3, 2011.


Transcript of Advanced Network Services Tomorrow October 3, 2011.

Page 1: Advanced Network Services Tomorrow October 3, 2011.

Advanced Network Services Tomorrow

October 3, 2011

Page 2: Advanced Network Services Tomorrow October 3, 2011.

Advanced Network Services – Today and Tomorrow

• Advanced Network Services – Today
  – Yesterday!
  – Current Services
  – Operations Status
  – Upgrade Overview
• Advanced Network Services – Tomorrow
  – Today!
  – Initiatives
  – Current development
  – Next steps

© 2009 Internet2

Page 3: Advanced Network Services Tomorrow October 3, 2011.

Seven strategic focus areas

© 2011 Internet2

• Advanced network and network services leadership
• Internet2 Net+: services “above the network”
• U.S. UCAN
• National/Regional collaboration
• Global reach and leadership
• Research community development and engagement
• Industry partnership development and engagement

Page 4: Advanced Network Services Tomorrow October 3, 2011.

The New Internet2 Network

• New 17,500-mile, community-owned, 20+ year IRU network
• 88-wave, 8.8 Tbps Ciena 6500 optronics with 55 add/drop sites
• Just completing from Sunnyvale to Chicago to Washington to New York
• Remainder of the network delivered by this time next year

• Upgraded 100 Gbps IP/MPLS/ION network with 10 Juniper T1600s
• Upgraded peering service network with 6 Juniper MX960s
• Deployment of a new Layer 2 service on the NDDI/OS3E network
• Enhanced research programs and support

Page 5: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 6: Advanced Network Services Tomorrow October 3, 2011.

Network Development and Deployment Initiative (NDDI)

A partnership that includes Internet2, Indiana University, and the Clean Slate Program at Stanford as contributing partners, with many global collaborators interested in interconnection and extension.

Builds on NSF's support for GENI and Internet2's BTOP-funded backbone upgrade

Seeks to create a software defined advanced-services-capable network substrate to support network and domain research [note, this is a work in progress]

Page 7: Advanced Network Services Tomorrow October 3, 2011.

Components of the NDDI Substrate

• 30+ high-speed Ethernet switches deployed across the upgraded Internet2 network and interconnected via 10G waves
• A common control plane being developed by IU, Stanford, and Internet2
• Production-level operational support
• Ability to support service layers & research slices

Switch hardware: 48 x 10G SFP+, 4 x 40G QSFP+, 1.28 Tbps non-blocking, 1 RU

Page 8: Advanced Network Services Tomorrow October 3, 2011.

The NDDI Control Plane

• The control plane is key to placing the forwarding behavior of the NDDI substrate under the control of the community and allowing SDN innovations
• Eventual goal is to fully virtualize the control plane to enable substrate slices for community control, research, and service development
• Will adopt open standards (e.g., OpenFlow)
• Available as open source (Apache 2.0 License)
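To illustrate the SDN idea behind the bullets above, that forwarding behavior lives in software-managed flow tables programmed by a central controller rather than in fixed switch logic, here is a toy sketch in Python. This is not the NDDI/OS3E code; the switch names, rule format, and controller method are invented for illustration only.

```python
# Toy model of SDN control: a controller programs per-switch flow tables,
# and switches forward purely by table lookup. Invented for illustration.

class ToySwitch:
    """A switch whose forwarding is entirely table-driven."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # (in_port, vlan) -> out_port

    def forward(self, in_port, vlan):
        # No matching rule: the frame is dropped (a real OpenFlow switch
        # would instead punt the packet to the controller).
        return self.flow_table.get((in_port, vlan))


class ToyController:
    """Central software that programs every switch's flow table."""

    def install_vlan_path(self, hops, vlan):
        # hops: list of (switch, in_port, out_port) along the path
        for sw, in_port, out_port in hops:
            sw.flow_table[(in_port, vlan)] = out_port


# Program a VLAN 2000 path through two switches, then "forward" a frame.
chic, newy = ToySwitch("chic"), ToySwitch("newy")
ctl = ToyController()
ctl.install_vlan_path([(chic, 1, 5), (newy, 5, 2)], vlan=2000)
print(chic.forward(1, 2000))   # rule installed by the controller
print(chic.forward(1, 3000))   # unknown VLAN: no rule, dropped
```

Virtualizing the control plane, as the slide proposes, amounts to letting different communities own different slices of these tables.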

Page 9: Advanced Network Services Tomorrow October 3, 2011.

[Figure: Today vs. Future. Today: separate Layer-2 WAN (OS³E), Layer-3 WAN (OSRF), and at-scale testbed (GENI). Future: new services, features, and experiments all built on a common NDDI substrate.]

Page 10: Advanced Network Services Tomorrow October 3, 2011.

Open Science, Scholarship and Services Exchange (OS3E)

• An example of a community-defined network service built on top of the NDDI substrate
• OS3E will connect users at Internet2 POPs with each other, existing exchange points, and other collaborators via a flexible, open Layer 2 network
• A nationwide distributed Layer 2 “exchange”
• Persistent Layer 2 VLANs with interdomain support
• Production services designed to support the needs of domain science (e.g., LHCONE)
• Will support open interdomain standards: initially IDC, eventually NSI
• Available as open source (Apache 2.0 License)

Page 11: Advanced Network Services Tomorrow October 3, 2011.

OS3E Service Description

• This service is being developed in response to the request from the community as expressed in the report from the NTAC and subsequent approval by the AOAC.
• Service Description
  – Best effort service
  – National Ethernet Exchange Service (Layer 2)
    • User-provisioned VLANs (WebUI or API)
    • Time-limited and persistent VLANs
  – Different price points for hairpin service and inter-node service
  – Open access policy
  – Underlying wave infrastructure will be augmented as needed, using the same general approach as used in the IP network
  – Inter-domain provisioning
  – Built on SDN: an open, flexible platform to innovate

Page 12: Advanced Network Services Tomorrow October 3, 2011.

OS3E Key Features

• Develop once, implement on many different switches
• VLAN provisioning time: < 1 sec
• Automated failover to backup path
• Controller redundancy
• Auto-discovery of new switches & circuits
• Automated monitoring
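As a sketch of what user-provisioned VLANs via an API might look like, the function below assembles a hypothetical circuit-provisioning request. The field names, the workgroup parameter (the deck mentions workgroups being created with the Internet2 NOC), and the backup-path flag are assumptions for illustration, not the actual OS3E API.

```python
# Hypothetical OS3E-style provisioning payload builder. All field names
# are invented for illustration; consult the real OS3E API documentation.

def build_circuit_request(workgroup, endpoints, vlan, description,
                          backup_path=True):
    """Assemble the body of a hypothetical 'provision circuit' call.

    endpoints: list of (node, interface) pairs where the VLAN should appear.
    """
    return {
        "workgroup": workgroup,
        "description": description,
        "vlan": vlan,
        "endpoints": [{"node": n, "interface": i} for n, i in endpoints],
        # Automated failover to a backup path is an advertised OS3E
        # feature; exposing it as a request flag here is an assumption.
        "provision_backup_path": backup_path,
    }


req = build_circuit_request(
    workgroup="physics-dept",
    endpoints=[("chic", "eth3/1"), ("newy", "eth1/2")],
    vlan=2000,
    description="LHC data transfer",
)
print(req["vlan"], len(req["endpoints"]))
```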


Page 13: Advanced Network Services Tomorrow October 3, 2011.

OS3E Use Cases

• Dedicated bandwidth for large file transfers or other applications
• Layer-2 connectivity between testbeds
• Redundancy / disaster recovery connectivity between Internet2 members
• Connectivity to other services (e.g., Net+ services)

Page 14: Advanced Network Services Tomorrow October 3, 2011.

OS3E / NDDI Timeline

April 2011          Early program announcement
May–September 2011  Hardware and controller selection; substrate development
October 2011        First deployment and national demo; link policy & funding discussion; next site group selection; iNDDI engagement
November 2011       Expanded deployment; inter-domain capabilities
December 2011       Initial release of NDDI software
January 2012        Large-scale national deployment

Page 15: Advanced Network Services Tomorrow October 3, 2011.

Support for Network Research

• OS3E: Layer-2 interconnect for testbeds and experiments
• OS3E: open platform for evolving the network
• NDDI substrate control plane is key to supporting network research
  – At-scale, high-performance, researcher-defined network forwarding behavior
  – Virtual control plane provides the researcher with the network “LEGOs” to build a custom topology employing a researcher-defined forwarding plane
  – NDDI substrate will have the capacity and reach to enable large testbeds

Page 16: Advanced Network Services Tomorrow October 3, 2011.

Making NDDI Global…

• Substrate will support IDC (i.e., it will be inter-domain capable)
• Expect interconnection with other OpenFlow testbeds as a first step (likely statically)
• While the initial investors are US-based, NDDI seeks global collaborators on the substrate infrastructure as well as control plane features
• Currently collecting contact information for those interested in being a part of NDDI: please send e-mail to [email protected]

Page 17: Advanced Network Services Tomorrow October 3, 2011.

“Open”

• Although it may be disruptive to existing business models, we are committed to extending a policy-free approach.
• Each individual node should function like an “exchange point” in terms of policy, cost, and capabilities.
• A fully distributed exchange will operate as close to an exchange point as possible given constraints (i.e., transport has additional associated costs):
  – Inter-node transport scalability and funding needs discussion
  – Initially, an open, best-effort service
  – Potential to add a dedicated priority-queuing feature
• Internet2 would like to position this service at the forefront of pushing “open” approaches in distributed networks.

Page 18: Advanced Network Services Tomorrow October 3, 2011.


NDDI & OS3E

Page 19: Advanced Network Services Tomorrow October 3, 2011.


NDDI & OS3E

Page 20: Advanced Network Services Tomorrow October 3, 2011.


NDDI & OS3E

Page 21: Advanced Network Services Tomorrow October 3, 2011.

NDDI / OS3E Implementation Status

• Deployment
  – NEC G8264 switch selected for initial deployment
  – 4 nodes installed (NYC, DC, Chicago, LA)
  – 5th node (Seattle) by SC
• Software
  – NOX OpenFlow controller selected for initial implementation
  – Software functional to demo the Layer 2 VLAN service (OS3E) over the OpenFlow substrate (NDDI) by FMM
  – Software functional to peer with ION (and other IDCs) by SC11
  – Software to peer with SRS OpenFlow demos at SC11
  – Open-source software package to be made available in 2012

Page 22: Advanced Network Services Tomorrow October 3, 2011.


Page 23: Advanced Network Services Tomorrow October 3, 2011.

OS3E / NDDI Demo

Getting connected:
• Deployment of additional switches on demand
• The Internet2 NOC will deploy the switch and circuits and tie them into the OS3E software
• After switch deployment, work with the Internet2 NOC to get OS3E account(s) and a workgroup created
• Begin creating circuits using the OS3E web interface or API

Page 24: Advanced Network Services Tomorrow October 3, 2011.

OS3E / NDDI Demo

Try out the OS3E UI: http://os3e.net.internet2.edu/ (user: os3e, password: os3edemo)

Page 25: Advanced Network Services Tomorrow October 3, 2011.

OS3E Costs and Fees

• We understand the costs.
• There will likely be graduated fees:
  – A fee for connectors only wishing to peer with other connectors on the same switch
  – A greater fee for connectors wishing to utilize the network interconnecting these exchange facilities
• It is hard at this point to suggest exact fees; they could be adapted depending on adoption levels.
• This discussion is more about gathering information from the community.

Page 26: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 27: Advanced Network Services Tomorrow October 3, 2011.

ION

• ION
  – Shared VLAN service across the Internet2 backbone
  – Implemented as a combination dedicated / scavenger service atop the Layer 3 infrastructure
  – Implements the IDC protocol
  – Implemented with OSCARS and perfSONAR-PS
• What’s new (October 2011)
  – Modified ION to be a persistent VLAN service
  – Integrated with ESnet SDN, GÉANT AutoBAHN, and USLHCnet as part of the DICE-Dynamic service
• What’s planned (late 2011)
  – As the DYNES service rolls out, ION is the backbone linking the various regional deployments
  – Peer ION with OS3E in a few locations; run both services in parallel
  – Campus / regional IDCs can also peer with OS3E

Page 28: Advanced Network Services Tomorrow October 3, 2011.

DYNES

• DYNES (NSF #0958998): enable dynamic circuit services end to end
  – Deploy equipment at the regional and campus levels
  – Based on OSCARS to control circuits, FDT to move data, and perfSONAR to monitor performance
  – Funding: May 2010 – May 2013
  – Emphasis on enabling this service for scientific use
• Current status
  – Through our first deployment group; into testing of the software and hardware
  – Configuring the second group; shipments have started
  – Third group in planning stages

Page 29: Advanced Network Services Tomorrow October 3, 2011.


DYNES Projected Topology (Fall 2011)

• Based on applications accepted
• Showing peerings to other Dynamic Circuit Networks (DCNs)

Page 30: Advanced Network Services Tomorrow October 3, 2011.

DYNES Deployment Status

• Group A – Fully deployed, still undergoing testing (Caltech, Vanderbilt, UMich, MAX, Rutgers, UDel, JHU, SOX, AMPATH)
• Group B – Configuring now, deployment expected in the fall (TTU, UTA, UTD, SMU, UH, Rice, LEARN, MAGPI, UPenn, MREN, UChicago, UWisc, UIUC, FIU)
• Group C – Late fall/winter configuration expected; deployment and testing into next year (UCSD, UCSC, UNL, OU, UIowa, NOX, BU, Harvard, Tufts, FRGP, UColorado)

Page 31: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Status

• LHCONE is a response to the changing dynamics of data movement in the LHC environment.
• It is composed of multiple parts:
  – North America, transatlantic links, Europe
  – Others?
• It is expected to be composed of multiple services:
  – Multipoint service
  – Point-to-point service
  – Monitoring service

Page 32: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Multipoint Service

• Initially created as a shared Layer 2 domain
• Uses two VLANs (2000 and 3000) on separate transatlantic routes in order to avoid loops
• Enables up to 25G on the transatlantic routes for LHC traffic
• Use of dual paths provides redundancy

Page 33: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Multipoint Service


Page 34: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Multipoint service in North America

Page 35: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Point-to-Point Service

• Planned point-to-point service
• Suggestion: build on the efforts of DYNES and the DICE-Dynamic service
• DICE-Dynamic service being rolled out by ESnet, GÉANT, Internet2, and USLHCnet
  – Remaining issues being worked out
  – Planned commencement of service: October 2011
  – Built on OSCARS (ESnet, Internet2, USLHCnet) and AutoBAHN (GÉANT), using the IDC protocol


Page 36: Advanced Network Services Tomorrow October 3, 2011.

LHCONE Monitoring Service

• Planned monitoring service
• Suggestion: build on the efforts of DYNES and the DICE-Diagnostic service
• DICE-Diagnostic service being rolled out by ESnet, GÉANT, and Internet2
  – Remaining issues being worked out
  – Planned commencement of service: October 2011
  – Built on perfSONAR


Page 37: Advanced Network Services Tomorrow October 3, 2011.

DYNES/ION and LHCONE

• Simple to integrate DYNES/ION and the LHCONE point-to-point service
• Possible to integrate DYNES/ION and the LHCONE multipoint service?
  – DYNES / LHCONE architecture team discussing ways to integrate DYNES functionality with LHCONE
  – It is expected that point-to-point connections through DYNES would work
  – Possible to position DYNES as an “onramp” or “gateway” to the multipoint service?
    • Glue a dynamic connection from a campus (through a regional) into VLAN 2000/3000
    • Requires some adjustments to the DYNES end-site addressing and routing configurations to integrate into the LHCONE multipoint Layer 2 environment
    • Would allow smaller T3 sites in the US instant access as soon as they get their DYNES gear

Page 38: Advanced Network Services Tomorrow October 3, 2011.

Campus Support for Data Intensive Science

• Current network regime: a big wall around a network of laptops administered by students
  – Breaks the end-to-end model
  – Performance tuned for small flows
  – Security in the net
• Augmented network regime:
  – An area in the network where science is supported
  – Reinstates the end-to-end model
  – Performance tuned for large flows
  – Security at the node, not in the net


Page 39: Advanced Network Services Tomorrow October 3, 2011.

Network Issues for Data Intensive Science

• Flow-type co-mingling
• “Fair” transport protocols (congestion control)
• Lack of network awareness by applications / middleware
• Firewall limitations
• Network elements with small buffers


Page 40: Advanced Network Services Tomorrow October 3, 2011.

Network Solutions for Data Intensive Science

• Dedicated transfer nodes
  – High-performance systems
  – Suite of software
  – Specialized performance-focused configurations
• Dedicated transfer facilities
  – Dedicated transfer nodes
  – Associated networking equipment
  – Networking connections
• Network solutions
  – Internet2 IP service, NDDI, OS3E, ION


Page 41: Advanced Network Services Tomorrow October 3, 2011.

Implementing Network Support for Data Intensive Science


[Figure 1: LSTI Solution Option]

Page 42: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 43: Advanced Network Services Tomorrow October 3, 2011.


Performance Architecture

Page 44: Advanced Network Services Tomorrow October 3, 2011.

Performance Infrastructure

• perfSONAR is performance middleware, designed to integrate performance monitoring tools across a wide range of networks
• perfSONAR is, or will soon be, widely deployed across campuses, regional networks, backbone networks, and transoceanic links
• Layer 2 and Layer 3 data-gathering tools are, or will soon be, widely deployed across multiple networks


Page 45: Advanced Network Services Tomorrow October 3, 2011.

Performance Use Cases

• A CIO/CEO wants to see a global view of network activity and performance comparisons with other networks.
• An end user wants to look at a network weather map to determine if there are currently “storms” in the area.
• An end user wants to evaluate whether a specific application they want to use is likely to work.
• A network engineer wants to diagnose local or inter-domain network performance problems.
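The inter-domain diagnosis use case can be made concrete with a small sketch: given per-segment throughput numbers of the kind a perfSONAR measurement archive exposes along an end-to-end path, find the segment most likely limiting performance. The path names and figures below are invented, and the simple minimum rule is an illustration, not perfSONAR logic.

```python
# Sketch of inter-domain bottleneck hunting over per-segment measurements.
# All data values are invented for illustration.

def limiting_segment(measurements):
    """measurements: list of (segment_name, throughput_mbps) along a path.

    End-to-end throughput can be no better than the slowest segment, so
    the minimum is the first place to look when debugging.
    """
    return min(measurements, key=lambda m: m[1])


path = [
    ("campus -> regional", 940.0),
    ("regional -> Internet2", 9200.0),
    ("Internet2 -> GEANT", 8700.0),
    ("GEANT -> remote campus", 94.0),  # suspicious ~100 Mbps bottleneck
]
name, rate = limiting_segment(path)
print(name, rate)
```

In practice each tuple would come from a different administrative domain, which is exactly why a common measurement framework such as perfSONAR matters.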


Page 46: Advanced Network Services Tomorrow October 3, 2011.

Performance: What’s Missing?

• Analysis and visualization tools
• Collective support for inter-domain performance problems
  – Help desk
  – Training classes


Page 47: Advanced Network Services Tomorrow October 3, 2011.

Performance: A Vision for the Future

• We intend to bring together the missing components in the form of a comprehensive performance program:
  – Performance portal
  – Monitoring as an integral component of Campus Support for Data Intensive Science
  – Data collection integrated into network services
  – Community engagement in the collective problem of end-to-end performance


Page 48: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 49: Advanced Network Services Tomorrow October 3, 2011.

Seven strategic focus areas


• Advanced network and network services leadership
• Internet2 Net+: services “above the network”
• U.S. UCAN
• National/Regional collaboration
• Global reach and leadership
• Research community development and engagement
• Industry partnership development and engagement

Page 50: Advanced Network Services Tomorrow October 3, 2011.

R&E Networking in the Global Arena

• Campuses and researchers feel the need to think globally, not locally
  – Requires a strategic focus by Internet2
• The global architecture needs to be more cohesive
  – Work intentionally in partnership on global network capacity
• Services need to be global in scope
  – Support for data-intensive science
  – Telepresence
  – International campuses

Page 51: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 52: Advanced Network Services Tomorrow October 3, 2011.

MAN LAN

• New York exchange point
• Ciena CoreDirector and Cisco 6513
• Current connections on the CoreDirector:
  – 11 × OC-192
  – 9 × 1 Gig
• Current connections on the 6513:
  – 16 × 10G Ethernet
  – 7 × 1G Ethernet

Page 53: Advanced Network Services Tomorrow October 3, 2011.

MAN LAN Roadmap

• Switch upgrade:
  – A Brocade MLXe-16 was purchased with:
    • 24 10G ports
    • 24 1G ports
    • 2 100G ports
  – Internet2 and ESnet will be connected at 100G.
• The Brocade will allow landing transatlantic circuits of greater than 10G.
• An IDC for dynamic circuits will be installed.
  – Comply with the GLIF GOLE definition

Page 54: Advanced Network Services Tomorrow October 3, 2011.

MAN LAN Services

• MAN LAN is an open exchange point.
• 1 Gbps, 10 Gbps, and 100 Gbps interfaces on the Brocade switch
  – 40 Gbps could be available by 2012.
• Map dedicated VLANs through for Layer 2 connectivity beyond the Ethernet switch.
• With the Brocade, the possibility of higher-layer services should there be a need
  – This would include OpenFlow being enabled on the Brocade.
• Dynamic services via an IDC.
• perfSONAR-PS instrumentation.

Page 55: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – International Exchange Points
  – NSF-Funded International Links (IRNC)

Page 56: Advanced Network Services Tomorrow October 3, 2011.

Agenda

• Research Partnership and Engagement
  – NDDI/OS3E
  – Campus Support for Data Intensive Science
  – Performance Initiatives / Performance Portal
• Global Reach and Leadership
  – R&E Networking in the Global Arena
  – Manhattan Landing Exchange Point (MAN LAN)
  – Washington DC International Exchange Point (WIX)
  – NSF-Funded International Links (IRNC)

Page 57: Advanced Network Services Tomorrow October 3, 2011.

IRNC Program

• International Research Network Connections (IRNC) is an NSF Office of Cyberinfrastructure program to enable collaboration in the research and education community on a global scale.
  – These grants facilitate hardware and software solutions to foster access to remote instruments, data, and computational resources located throughout the world.
• Internet2 was awarded 2 IRNC Special Projects awards in May 2010:
  – Integrate IRNC-funded links into the distributed monitoring framework available across R&E networks
  – Integrate IRNC-funded links into the dynamic circuit framework available across R&E networks

Page 58: Advanced Network Services Tomorrow October 3, 2011.


GLIF 2011 - Topology

Page 59: Advanced Network Services Tomorrow October 3, 2011.

Advanced Network Services Tomorrow
October 3, 2011, Internet2 Fall Member Meeting
Eric Boyd, Internet2

For more information, visit http://www.internet2.edu/


Page 60: Advanced Network Services Tomorrow October 3, 2011.

IRNC SP: IRIS

• NSF Grant #0962704
• IRIS will provide a software framework to simplify the task of end-to-end network performance monitoring and diagnostics
  – Based on the widely deployed perfSONAR-PS infrastructure and protocols
  – Facilitates broader deployment of perfSONAR-enabled resources, increasing the likelihood of diagnostic resources being available along end-to-end paths
• Integrates with existing deployments on R&E networks (ESnet, GÉANT, Internet2, regionals, and NRENs) as well as those maintained by scientific VOs (USATLAS, LHCOPN, eVLBI, REDDnet)
• Will work with IRNC ProNet awardees to customize deployment for target network functionality
• Target end date is April 2013

Page 61: Advanced Network Services Tomorrow October 3, 2011.

IRNC SP: DyGIR

• NSF Grant #0962705
• DyGIR will provide a component-based solution for scheduling dynamic circuits on IRNC ProNet infrastructure
  – Utilizes the OSCARS software suite, developed by ESnet
  – Integrates circuit statistics and network monitoring via the perfSONAR-PS framework
• Capabilities will integrate with existing backbone networks (ESnet SDN, GÉANT AutoBAHN, Internet2 ION), as well as emerging campus deployments (DYNES, an NSF MRI-funded effort)
• Will work with IRNC ProNet awardees to customize deployment for target network functionality
• Target end date is April 2013
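To make the scheduling problem DyGIR addresses concrete: a dynamic circuit reserves bandwidth on a link for a time window, so a new request must be checked against reservations already granted. The sketch below is a generic admission check illustrating the concept; it is not the OSCARS algorithm or API, and all names and numbers are invented.

```python
# Generic admission check for time-windowed bandwidth reservations.
# Invented for illustration; not OSCARS.

def fits(link_capacity_gbps, reservations, start, end, requested_gbps):
    """Check whether a new (start, end, bandwidth) request fits on a link.

    reservations: list of (start, end, gbps) tuples already granted.
    Times are any comparable values (e.g., epoch seconds).
    """
    # Naive bound: sum the bandwidth of every existing reservation whose
    # window overlaps the requested one, then see if the new demand still
    # fits under the link capacity.
    overlapping = sum(g for s, e, g in reservations if s < end and e > start)
    return overlapping + requested_gbps <= link_capacity_gbps


existing = [(0, 100, 4.0), (50, 150, 3.0)]
print(fits(10.0, existing, 60, 90, 2.0))   # 4 + 3 + 2 = 9  <= 10
print(fits(10.0, existing, 60, 90, 4.0))   # 4 + 3 + 4 = 11 >  10
```

A production scheduler also has to handle per-segment capacities along a multi-domain path and coordinate the check across domains, which is where the IDC protocol comes in.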

Page 62: Advanced Network Services Tomorrow October 3, 2011.

IRNC Outreach to ProNet Awardees

• ACE
  – perfSONAR-PS test points to be available, along with periodic monitoring to select locations within GÉANT
  – Exploring what makes sense from a dynamic circuit network perspective. Given proximity to MAN LAN, a MAN LAN-supported IDC may make an ACE-specific one unnecessary.
• AmLight
  – Participated in a joint demonstration of DYNES and RNP’s dynamic circuit infrastructure at GLIF
  – IDC available in the AMPATH exchange for use on international links
  – perfSONAR-PS monitoring available
• GLORIAD
  – Discussed current DyGIR and IRIS activities
  – Currently has the ability to create VLANs on its infrastructure; looking to integrate with dynamic circuit networks over the next couple of years
  – Exploring current GLORIAD peerings and heavy users to determine which use cases could benefit from dynamic circuits
  – Exploring the best way to capitalize on perfSONAR, specifically looking at ways to publish the passive flow data currently collected by GLORIAD using perfSONAR protocols
• TransLight/Pacific Wave
  – Established a working group to install perfSONAR-PS monitoring software at endpoints and participants in the region
  – Evaluating dynamic capabilities
• TransPAC3
  – Have traded topology information and are determining which switches in the infrastructure to put under dynamic control; will likely peer with ION and JGN2
  – Currently has perfSONAR-PS test points and periodic tests with APAN