SCinet: The Annual Convergence of Advanced Networking and High Performance Computing
Transcript of SCinet: The Annual Convergence of Advanced Networking and High Performance Computing
[Slide 1]
SCinet: The Annual Convergence of
Advanced Networking and High Performance Computing
Steve Corbató, Internet2
MasterWorks track
14 November 2001
[Slide 2]
SC99 GNAP Demo Network
15–18 November 1999
Portland, Oregon
[Slide 3]
Outline
• SCinet
• Wide area connectivity
• Fiber
• Wireless
• Infrastructure
• Operations, Measurement, & Security
• Events
  – Xnet, Bandwidth Challenge, SC Global
• Trends
• Q&A
[Slide 4]
SCinet is 4 networks
• Production commodity network
• Ubiquitous wireless network
• High-performance/availability exhibit floor network
• Bleeding-edge testbed - Xnet
[Slide 5]
SCinet is people (and employers)
• Basil Decina • Bill Iles • Bill Kramer • Bill Nickless • Bill Wing • Bob Stevens • Brad Pope • Brent Sweeny • Caren Litvanyi • Chris Wright • Chuck Fisher • Dave Koester • Davey Wheeler • David Mitchell • David Richardson • Debbie Montano • Dennis Duke • Doug Luce • Doug Nordwall • Eli Dart • Erik Plesset • Gayle Allen • Greg Goddard • Hal Edwards • Hoan Mai • James Patton • Janet Hull • Jeff Carrell • Jeff Mauth • Jerry Sobieski • Jim Rogers • John Dysert • John Jamison (JJ) • Jon Dugan • Kevin Oberman • Kevin Walsh • Kim Anderson
• Linda Winkler • Martin Swany • Marvin Drake • Matt Zekauskas • Paola Grosso • Patrick Dorn • Paul Daspit • Paul Love • Paul Reisinger • Rex Duncan • Rick Bagwell • Rick Mauer • Riki Kurihara • Rob Jaeger • Robert Riehl • Roland Gonzalez • Russ Wolf • Seth Viddal • Stanislav Shalunov • Steve Corbató • Steve Kapp • Steve Shultz • Steve Tenbrink • Thomas Hutton • Tim LeMaster • Tim Toole • Tom Kile • Tom Lehman • Tony Rimovsky • Tracey Wilson • Warren Birch • Will Murray • Derek Gassen • Paul Fernes • Steve Pollock
[Slide 6]

[Slide 7]
SC2001 Leadership
• Bill Wing, ORNL – chair
• Jim Rogers, CSC – vice chair
• Dennis Duke, FSU – incoming chair
• Chuck Fisher, ORNL – hardware
• Jeff Mauth, PNNL – fiber
• Martin Swany, UTK – monitoring
• Eli Dart, NERSC – security
• Bill Nickless, ANL – routing
• Tim Toole, SNL – wireless
• David Koester, Mitre – Xnet
• Jon Dugan, NCSA – net mgmt
• Bill Kramer, NERSC – Bandwidth Challenge
• Greg Goddard, UFl – monitoring
• Kevin Oberman, LBL – Denver fiber
• Steve Corbató, Internet2 – WAN
• Debbie Montano, Qwest – Denver connectivity
• Linda Winkler, ANL – SC Global
[Slide 8]
SCinet Committee process
• Conference calls – biweekly, then weekly
• Planning meetings (x3)
• Venue recon trips (fiber, wireless)
• Staging (~3 weeks before SCxy)
• Build (starts Monday before SCxy)
• Booth drops (~36 hours before gala reception)
• Operate network for ~6 days
• Tear down (starts Thursday 4:01p)
• Rest & do day job for four months, and then start again…
[Slide 9]
Staging
[Slide 10]
Wide area connectivity
• Denver: 15 Gbps
  – 2xOC-48c: Abilene (Denver)
  – 2xGigE: StarLight (Chicago)
  – 1xOC-48c: Pacific/Northwest Gigapop (Seattle)
  – 2xOC-48c: ESnet (Sunnyvale & Chicago)
• Level(3) provided wide area connectivity
• Qwest provided local dark fiber
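The Denver "15 Gbps" figure is the rounded sum of the circuits listed above. A quick sanity check, assuming the usual rounded line rates (OC-48c ≈ 2.5 Gbps, GigE = 1 Gbps; actual SONET payload rates are slightly lower):

```python
# Tallying the SC2001 wide-area circuits from the slide above.
RATES_GBPS = {"OC-48c": 2.5, "GigE": 1.0}  # rounded nominal line rates

links = [
    (2, "OC-48c"),  # Abilene (Denver)
    (2, "GigE"),    # StarLight (Chicago)
    (1, "OC-48c"),  # Pacific/Northwest Gigapop (Seattle)
    (2, "OC-48c"),  # ESnet (Sunnyvale & Chicago)
]

total = sum(count * RATES_GBPS[kind] for count, kind in links)
print(f"Aggregate WAN capacity: {total} Gbps")  # 14.5, quoted as "15 Gbps"
```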
[Slide 11]
WAN Bandwidth trends
• SC98 (Orlando): 200 Mbps
• SC99 (Portland): 13 Gbps
• SC2000 (Dallas): 10 Gbps
• SC2001 (Denver): 15 Gbps
• SC2002 (Baltimore): N×10-Gbps λ's??
• Increasing focus on BW utilization
[Slide 12]

[Slide 13]
Abilene & SCxy
• Escalating bandwidth
  – SC99 Portland: OC-12c SONET (622 Mbps)
  – SC2000 Dallas: OC-48c SONET (2.5 Gbps)
  – SC2001 Denver: 2xOC-48c SONET (5 Gbps)
• SCxy transit connectivity offered to domestic & international R&E nets
• Backbone MTU raised to 9K bytes
• Traffic engineering for SC2001
• End-to-end performance: GigaTCP testing
• SC2002 Baltimore: 10-Gbps (planned)
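The 9K-byte backbone MTU item above matters because jumbo frames enlarge the TCP segment and cut the packet rate needed at a given line speed. A rough illustration, assuming the standard minimum IPv4/TCP header sizes (20 bytes each) and the OC-48c rate from these slides:

```python
# Effect of a jumbo (9000-byte) MTU vs. the standard 1500-byte Ethernet MTU.
IP_HEADER = 20         # IPv4 header, no options
TCP_HEADER = 20        # TCP header, no options
LINE_RATE_GBPS = 2.5   # OC-48c, as on the SC2001 Abilene links

def mss(mtu: int) -> int:
    """Maximum TCP payload per packet for a given link MTU."""
    return mtu - IP_HEADER - TCP_HEADER

for mtu in (1500, 9000):
    pps = LINE_RATE_GBPS * 1e9 / 8 / mtu  # packets/s needed to fill the link
    print(f"MTU {mtu}: MSS = {mss(mtu)} bytes, ~{pps:,.0f} packets/s at {LINE_RATE_GBPS} Gbps")
```

Loss-limited TCP throughput also scales roughly linearly with MSS (the familiar MSS/(RTT·√p) model), which is one reason a jumbo backbone MTU helps end-to-end performance work such as the GigaTCP testing mentioned above.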
[Slide 14]
Abilene traffic engineering – SC2001
[Slide 15]
Fiber (Jeff Mauth)
• 60+ miles of fiber deployed in exhibit hall
  – 0.3+ FTE-year
  – ~1.5 fiber-miles/hour
• 120 fiber drops (90% multimode)
• Pirelli 24-strand MM fiber used since ’98
• Deployment custom engineered to the venue selected for SCxy
• ST fiber connectors standard
  – Will review choice for SC2002
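The fiber figures above can be cross-checked with simple arithmetic. This sketch assumes a nominal 2,000-hour FTE work year, which is an assumed round number rather than something stated on the slides:

```python
# Cross-checking the quoted fiber-deployment numbers:
# 60+ miles of fiber, ~1.5 fiber-miles/hour, 0.3+ FTE-year of labor.
fiber_miles = 60
lay_rate = 1.5             # fiber-miles per hour (aggregate crew rate)
hours_per_fte_year = 2000  # assumption: nominal work year

lay_hours = fiber_miles / lay_rate             # hours of actual pulling/laying
total_person_hours = 0.3 * hours_per_fte_year  # the whole 0.3 FTE-year effort

print(f"~{lay_hours:.0f} hours of laying at the quoted rate")
print(f"~{total_person_hours:.0f} person-hours across the whole effort")
```

Under these assumptions the laying itself accounts for roughly 40 hours, while the 0.3 FTE-year (~600 person-hours) also absorbs the scouting trips, patching, and booth drops described on the next slide.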
[Slide 16]
Fiber timeline – SC2001
• 5 scouting trips
• Tue 11/6 9p – gained access to 2/3 of hall
• Thu 11/8 6p – gained access to rest of hall
• Fri 11/9 a.m. – fiber done
• Sun 11/11 a.m. – equipment patching
• Sun 11/11 p.m. – booth drops start
  – wireless & HP Jornada
• Mon 11/12 noon – drops complete
• Mon 11/12 7p – gala opening (D-DAY)
• DANGER: carpet layers (20-30 cuts this year)
[Slide 17]
Wireless (Tom Hutton)
• Significant 802.11b effort this year
• 35 Cisco wireless access points (13 in exhibit hall)
  – One on DCC roof pointed at Embassy Suites
• Wireless still requires a lot of wires & work
  – 5000’ of wiring in exhibition hall
  – Several site surveys over the year
• Totally flat LAN (3.5 Gbps switched BW)
• Wireless really helps show set-up
  – Booth drop teams, booth connectivity prior to fiber
• Clients seen: 618 peak, 246 average
[Slide 18]
Infrastructure (Chuck Fisher)
• SC98
  – Core routing provided by traditional Cisco 7500 series routers
  – First "production" use of Gigabit Ethernet (only 1 customer drop requested)
  – Most booth service was 10Base-FL and 100Base-FX provided via Fore PowerHubs
  – Limited use of network monitoring and statistics
[Slide 19]
An earlier topology…
[Slide 20]
Infrastructure trends - II
• SC99
  – Core routing provided by Cisco GSR series routers
  – Concept of a routing core and a layer of L3 distribution switches adopted
  – Extensive use of DWDM hardware to provide WAN bandwidth
  – Xnet introduced as a showcase for "bleeding edge" hardware
[Slide 21]
Infrastructure trends - III
• SC2000
  – Core routing provided by Cisco and Juniper
  – Increased focus on network monitoring and statistics
  – First Xnet demonstration of 10 Gigabit Ethernet
  – Bandwidth Challenge introduced to SC
[Slide 22]
SCinet 2001 Network Topology
[Slide 23]
Infrastructure trends - IV
• SC2001 contributing hardware vendors
  – Cisco
  – Juniper
  – Marconi
  – Nortel
  – Spirent
  – Force10
  – Foundry
  – ONI
  – LuxN
• Equivalent to a 3-5 building advanced campus network on major R&E backbones
[Slide 24]
Operations
• Servers
  – DNS, DHCP, NTP, performance, beacons
• Database
• Network monitoring
• Help desk
• Trouble ticket system
• Routing support (unicast, multicast, v6)
[Slide 25]
Measurement and Security
[Slide 26]

[Slide 27]
Security monitoring

| Observation | Count |
| --- | --- |
| Local Nimda infections | 2 |
| Clear-text root logins | 68 |
| Clear-text passwords | 1,483 |
| Code Red 1 infections | 1 |
| External Nimda sources | 857 |
| Passwordless accounts | 59 |
| Scans | 137 |
[Slide 28]
Xnet
[Slide 29]
TeraGrid Distributed Backplane – NCSA, ANL, SDSC, Caltech

[Network map. Sites: NCSA/UIUC (Urbana), ANL, UIC, multiple carrier hubs, Starlight / Northwestern Univ, Illinois Inst of Tech, Univ of Chicago, Indianapolis (Abilene NOC), Chicago, Los Angeles, San Diego. StarLight is the international optical peering point (see www.startap.net). Link legend: DTF backplane (4 λ's: 40 Gbps); OC-48 (2.5 Gb/s, Abilene); multiple 10 GbE (Qwest); multiple 10 GbE (I-WIRE dark fiber).]

• Solid lines in place and/or available by October 2001
• Dashed I-WIRE lines planned for summer 2002

Source: Charlie Catlett, Argonne
[Slide 30]
Xnet
[Slide 31]
Trends
… or what we might see in Baltimore?
[Slide 32]
Optical networking
• Dense Wavelength Division Multiplexing (DWDM)
  – Current systems can support >160 10-Gbps λ's (1.6 Tbps!)
  – Optical growth can overwhelm Moore's Law (routers)
• Costs scale dramatically with distance
• Three possible scenarios for the future
  – Enhanced IP transport (higher BW and circuit multiplicity)
  – Fine-grained traffic engineering
    • p2p links between campuses, HPC centers, & Gigapops
  – Physical e2e switched circuits (à la ATM SVCs)
• Evolution of optical switching will be critical
  – Don't write off OEO
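The "1.6 Tbps" headline above is just the wavelength count times the per-λ rate, spelled out here for concreteness:

```python
# The DWDM capacity claim: >160 wavelengths (lambdas) at 10 Gbps each
# over a single fiber pair.
lambdas = 160
gbps_per_lambda = 10

total_gbps = lambdas * gbps_per_lambda
print(f"{lambdas} lambdas x {gbps_per_lambda} Gbps = {total_gbps / 1000} Tbps")  # 1.6 Tbps
```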
[Slide 33]
Future of Abilene
• Extension of Qwest’s original commitment to Abilene for another 5 years – to 10/01/2006
  – Originally expired March 2003
• Upgrade of Abilene backbone to optical transport capability – λ's
  – 4x increase in the core backbone bandwidth
    • OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM
  – Capability for flexible provisioning of 10-Gbps λ's to support future point-to-point experimentation & other projects
• Emphasis on v6 and network measurement capabilities
[Slide 34]
SC2002/Baltimore crystal ball
• Strong local networking community– MAX Gigapop (University of Maryland) – DARPA Supernet (ISI-East, NRL)
• Dark fiber & network presences in region• Abilene is aiming for 10-Gbps connectivity• Increased focus on e2e performance & multicast
reliability• More wireless (add 802.11a); less ATM?• 10 Gigabit Ethernet should be standardized• Optical switch in Xnet?
[Slide 35]
Conclusion
• SCinet is… a diverse group of very committed and talented people and companies, working very hard under extreme time constraints and trying conditions to make both the expected and the new-and-impossible in SCxy networking happen for one week in November, and then returning to do it again the next year.