DOE 2004 Network Research PI Meeting
Dantong Yu
RCF/USATLAS Technology Meeting
DOE Network Research PI Meeting
FNAL September 15-17, 2004 2
Outline
Network Applications, Production Network Infrastructure, Research Network Infrastructure
MPLS, QoS, and Traffic Engineering
Ultra-Science Net Testbed
High-Speed Transport Protocols & Storage Systems
Network Security
Past Research Projects
SBIR Programs
Discussion
Network Applications, Production Network Infrastructure, Research Network Infrastructure
CMS Challenge (skipped)
National Fusion Research
Astrophysics Applications on High Performance Networks
ESnet – DOE production network
Ultra Science Network – DOE research network
Anticipated Network Requirements For Magnetic Fusion Energy Science
Magnetic fusion energy science is a worldwide effort
Next generation experimental devices will be outside the US
Collaborative technology is critical to the success of the FES program
Experimental: fewer, larger machines in the future (ITER, KSTAR)
Computation: moving toward integrated simulation (FSP)
A capable network & network service infrastructure is critical
More than moving bits around (1000 Mb/s)
Services over the WAN: Grids, QoS, security, ...
MFE RESEARCH IS WORLDWIDE
The US can lead by example in fusion networking: an opportunity for a development project and a demonstrated capability that might allow the US to take a major lead in meeting future worldwide fusion networking requirements
Fusion Data Requirement
Test physics theory and extend performance
Pulsed: ~10 s duration plasma every ~20 minutes
20-40 people in the control room, more from remote locations
10,000 separate measurements per plasma, kHz to MHz sample rates
Between-pulse analysis
Not batch analysis and not a needle-in-a-haystack problem
Rapid "real-time" analysis of many measurements
On the order of 1 GB per plasma pulse
More informed decisions result in better experiments
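For scale, the figures above imply a sustained rate well beyond a commodity link. A quick back-of-envelope check (the 60-second transfer window is an assumed target, chosen to leave most of the ~20-minute cycle for between-pulse analysis):

```python
def required_mbps(data_gb: float, transfer_seconds: float) -> float:
    """Sustained rate in Mb/s needed to move data_gb gigabytes
    (decimal GB, as on the slide) in transfer_seconds seconds."""
    return data_gb * 8e9 / transfer_seconds / 1e6

# Moving ~1 GB per pulse within an assumed 60 s window:
print(required_mbps(1.0, 60.0))   # ~133 Mb/s sustained
# Moving it in 8 s needs the slide's "1000 Mb/s" class of network:
print(required_mbps(1.0, 8.0))
```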
Understanding Core-Collapse Supernovae
DOE SciDAC Project: ORNL & 8 universities
Multidisciplinary Collaboration: Astrophysics, Nuclear Physics, Transport, CFD, Applied Math, High-Energy Physics...
Large-Scale Simulations: NERSC, CCS
Network Demands (Dreams): monitor, analyze, visualize and collaborate
Collaborative Visualization
• Currently we post derived graphics products (e.g., EnSight enliten files) on a web page, then talk on the phone.
• Can we speed up the discovery process through interactive collaborative visualization?
New ESnet Architecture Needs to Accommodate OSC
The essential requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet.
The core ring has good capacity and resiliency against single-point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.
[Diagram: ESnet core ring with hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP), with tail circuits out to DOE sites.]
Basis for a New Architecture
Goals for each site:
• fully redundant connectivity
• high-speed access to the backbone
Meeting these goals requires a two part approach:
• Connecting to sites via a MAN ring topology to provide:
  • Dual site connectivity and much higher site bandwidth
• Employing a second backbone to provide:
  • Multiply connected MAN ring protection against hub failure
  • Extra backbone capacity
  • A platform for provisioned, guaranteed bandwidth circuits
  • An alternate path for production IP traffic
  • Access to carrier-neutral hubs
Bay Area, Chicago, New York MANs
[Diagram: existing ESnet core with hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP), with new MAN rings connecting DOE/OSC sites.]
Addition of Second Backbone
[Diagram: existing ESnet core plus a second backbone (USN), with new and existing hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP), DOE/OSC sites, and connections toward Europe and Asia-Pacific.]
The UltraScience Net Lambdas
• The only national-scale switched-circuit testbed with multiple lambdas
• This is both good news – and bad news
• You can't walk across the hall and reboot a switch
• You have to provide redundant remote control
• Even though it is a research network, it must have robust operational controls
UltraScience Net Data Plane
[Diagram: SONET data plane with CoreDirector switches at Sunnyvale, Chicago, Seattle, and ORNL; MSPPs fan out to Ethernet hosts and storage, with a link to PNNL.]
Need for Secure Control Plane
Security of the control plane is extremely important
TL1 or GMPLS commands in the "clear":
  Can be sniffed to profile the network
  Can be injected to take over the control plane
The following cyber attacks can then be easily launched:
Network-level attacks:
  Hijack the dedicated circuits: sniff to understand the configuration, inject crafted packets to hijack the channels
  Sustain a DoS flood to prevent recovery
Host attacks:
  Take over or flood UltraScienceNet end hosts and switching gear
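The deck's remedy is the VPN on the following slide; as a toy illustration of why cleartext commands are injectable, the sketch below adds message authentication to a TL1 string so an altered or injected command is rejected. The key and the wire format are hypothetical, not the testbed's actual mechanism.

```python
import hmac, hashlib

SECRET = b"shared-control-plane-key"   # hypothetical pre-shared key

def sign_tl1(command: str) -> bytes:
    """Append an HMAC-SHA256 tag so altered/injected commands fail verification."""
    tag = hmac.new(SECRET, command.encode(), hashlib.sha256).hexdigest()
    return f"{command}|{tag}".encode()

def verify_tl1(message: bytes) -> str:
    """Return the command if the tag checks out, else raise."""
    command, _, tag = message.decode().rpartition("|")
    expected = hmac.new(SECRET, command.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("rejected: bad authentication tag")
    return command

msg = sign_tl1("ACT-USER::admin:1;")
assert verify_tl1(msg) == "ACT-USER::admin:1;"
```

Authentication alone still leaves commands sniffable, which is why the Phase I design wraps the whole channel in an encrypted VPN.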
UltraScience Control-Plane: Phase I
[Diagram: Phase I control plane - CoreDirectors at Sunnyvale, Chicago, Seattle, and ORNL on OC-192 lambdas, each reached through a VPN device (NS-5/NS-50) over the IP network from an ORNL host. VPN: authorized access, encrypted traffic; supports TL1 commands, monitoring and management.]
Control-Plane
Phase I: Centralized VPN connectivity
  TL1-based communication with CoreDirectors and MSPPs
  User access via centralized web-based scheduler
Phase II: GMPLS direct enhancements and wrappers for TL1
  User access via GMPLS and web to bandwidth scheduler
  Inter-domain GMPLS-based interface
Summary
First connectivity tests scheduled for this Friday
Control-plane Phase I close to completion: TL1 over VPN (Oct-Nov); scheduler (Sept)
20 Gbps connectivity – this fall
Users interested in connecting to or using hub-located hosts – please contact Thomas Ndousse (and us)
MPLS, QoS, and Traffic Engineering
OSCARS
BNL MPLS Projects
Monitoring Projects: Datagrid WAN Network Monitoring Infrastructure
Network Quality of Service for Magnetic Fusion Research
Motivation:
• Service-sensitive applications (such as remotely controlled experiments, time-constrained massive data transfers, video-conferencing, etc.) require network guarantees.
Objective:
• To develop and deploy a new service that can provide secure, guaranteed-bandwidth circuits within ESnet.
Purpose of The ESnet On-Demand Secure Circuits and Advance Reservation System (OSCARS)
Issues That OSCARS Must Address (1/3)
Adopting The Appropriate Service Model and Protocol
Configuring Acceptable Availability Levels
Tracking Network Outages
Having Appropriate User Interfaces
Scheduling Bandwidth Reservations
Securing The System
Monitoring Usage
Usage Policies
Guaranteed Bandwidth Circuit
• Multi-Protocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) are used to create a Label Switched Path (LSP).
• A Quality-of-Service (QoS) level is assigned to the LSP to guarantee bandwidth.
Components That Make Up OSCARS (1/2)
[Diagram: an LSP between ESnet border routers from Source to Sink, with RSVP and MPLS enabled on internal interfaces. MPLS labels are attached to packets from the Source, which are placed in a separate interface queue to ensure guaranteed bandwidth, alongside the regular production traffic queue.]
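The separate-queue idea above can be sketched as a toy two-queue port where packets carrying the reserved MPLS label always jump ahead of best-effort traffic. Strict priority is a simplification (real routers police the reserved class and use weighted scheduling); the label value and packet layout are made up for illustration.

```python
from collections import deque

class TwoQueuePort:
    """Toy model of the slide's interface queues: packets carrying the
    reserved MPLS label get a dedicated queue that is served first, so
    the reservation is honored even under best-effort congestion."""

    def __init__(self, reserved_label: int):
        self.reserved_label = reserved_label
        self.guaranteed = deque()
        self.best_effort = deque()

    def enqueue(self, packet: dict) -> None:
        if packet.get("label") == self.reserved_label:
            self.guaranteed.append(packet)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        """Strict priority: drain the guaranteed queue before best effort."""
        if self.guaranteed:
            return self.guaranteed.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

port = TwoQueuePort(reserved_label=100)
port.enqueue({"label": None, "seq": 1})   # production traffic arrives first
port.enqueue({"label": 100, "seq": 2})    # reserved LSP traffic
first = port.dequeue()                    # the labeled packet goes out first
```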
Reservation Manager
• The Web-Based User Interface (WBUI) will prompt the user for a username/password and forward it to the AAAS.
• The Authentication, Authorization, and Auditing Subsystem (AAAS) will handle access, enforce policy, and generate usage records.
• The Bandwidth Scheduler Subsystem (BSS) will track reservations and map the state of the network (present and future).
• The Path Setup Subsystem (PSS) will set up and tear down the on-demand paths (LSPs).
Components That Make Up OSCARS (2/2)
[Diagram: user requests enter the Reservation Manager via the WBUI, and user-application requests via the AAAS; the Reservation Manager comprises the Web-Based User Interface, the Authentication, Authorization, and Auditing Subsystem, the Bandwidth Scheduler Subsystem, and the Path Setup Subsystem, which issues instructions to set up/tear down LSPs on routers; feedback is returned to the user.]
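The request flow above (authenticate, check the schedule, then set up the path) can be sketched end to end. Everything here is hypothetical: the credential store, the single-link capacity, and the admission rule (which conservatively sums all reservations overlapping the requested window) stand in for the real AAAS/BSS/PSS interfaces.

```python
USERS = {"alice": "s3cret"}        # toy AAAS credential store
RESERVATIONS = []                   # toy BSS state: (start, end, mbps)
LINK_CAPACITY_MBPS = 10_000         # assumed single-link capacity

def aaas_authenticate(user: str, password: str) -> bool:
    return USERS.get(user) == password

def bss_admit(start: float, end: float, mbps: int) -> bool:
    """Admit only if capacity is never exceeded. Summing every reservation
    that overlaps the window is conservative (they may not all coincide)."""
    overlapping = sum(m for s, e, m in RESERVATIONS if s < end and start < e)
    return overlapping + mbps <= LINK_CAPACITY_MBPS

def pss_setup(start: float, end: float, mbps: int) -> str:
    RESERVATIONS.append((start, end, mbps))
    return f"LSP:{mbps}Mbps:{start}-{end}"   # stand-in for router config

def reserve(user, password, start, end, mbps):
    if not aaas_authenticate(user, password):
        return "denied: authentication failed"
    if not bss_admit(start, end, mbps):
        return "denied: insufficient bandwidth"
    return pss_setup(start, end, mbps)

ticket = reserve("alice", "s3cret", start=0, end=3600, mbps=5000)
```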
What is Terapaths?
This project will investigate the integration and use of MPLS-based differentiated network services in the ATLAS data-intensive distributed computing environment as a way to manage the network as a critical resource.
The program intends to explore network configurations from common shared infrastructure (current IP networks) to dedicated point-to-point optical paths, using MPLS/QoS to span the intervening possibilities.
The collaboration includes:
  Brookhaven National Laboratory (US ATLAS Tier 1, ESnet)
  Univ. of Michigan (US ATLAS Candidate Tier 2 Center, Internet2, UltraLight)
Project Goal and Objectives
The primary goal of this project is to investigate the use of this technology in the ATLAS data-intensive distributed computing environment. In addition we intend to:
Develop expertise in MPLS-based QoS technology, which will be important to ATLAS and the LHC community more generally.
Dedicate fractions of the available WAN bandwidth via MPLS to ATLAS Tier 1 data movement and RHIC data replications, to assure adequate throughput and limit their disruptive impact upon each other.
Enhance technical contact between the ATLAS Tier 1 at BNL and its network partners, including the Tier 0 center at CERN, potential ATLAS Tier 2s, and other members of the Grid3+ (OSG-0) community of which it is a part.
Proposed Prototype/Primitive Infrastructure
[Diagram: prototype infrastructure - GridFTP & SRM, traffic identification (TCP syn/fin packets, addresses, port numbers), Grid AA, and a network usage policy translator feed a network resource manager, which issues MPLS bandwidth requests and releases to the ESnet OSCARS ingress for an MPLS path; monitoring and direct MPLS/bandwidth requests connect to storage elements (SE). Some components are slated for the second/third year.]
Work Plans
Terapaths envisions a multi-year program to deliver a high-performance, QoS-enabled network infrastructure for ATLAS/LHC computing. Each year will determine the following year's direction.
Phase I: Establish initial functionality (08/04 ~ 07/05).
  Helps steer the direction of the following two phases.
Phase II: Establish prototype production service (08/05 ~ 07/06).
  Depends on the success of Phase I.
Phase III: Establish full production service, extend scope and increase functionality (08/06 ~ 07/07).
  The level of service and its scope will depend on the available project funding and additional resources.
  Broaden deployment and capability to Tier 2s and partners.
Datagrid WAN Network Monitoring Infrastructure
Data-intensive science (e.g. HENP) needs to share data at high speeds
Needs high-performance, reliable end-to-end paths and the ability to use them
End users need long- and short-term estimates of network and application performance for planning, setting expectations, and troubleshooting
You can't manage what you can't measure
Based on IEPM-BW
Toolkit:Toolkit: Enables regular, E2E measurements with user selectable:
Tools: iperf (single & multi-stream), bbftp, bbcp, GridFTP, ping (RTT), traceroute
Periods (with randomization) Remote hosts (RH) to monitor from monitoring hosts (MH)
Hierarchical to match the tiered approach of BaBar, D0, CDF & LHC computation / collaboration infrastructures
Includes: Auto-clean up of hung processes at both ends Management tools to look for failures (unreachable hosts, failing tools etc.) Web navigation of results Visualization of data as time-series, histograms, scatter plots, tables Access to data in machine readable form Documentation on host etc. requirements, program logic manuals, methods
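The "periods with randomization" idea is worth a small sketch: jittering the measurement period keeps probes from many monitoring hosts from synchronizing and hammering the same path at once. The period, jitter fraction, and seed below are illustrative, not IEPM-BW's actual defaults.

```python
import random

def measurement_times(period_s: float, jitter_frac: float, n: int,
                      seed: int = 42) -> list:
    """Scheduled run times for a regular measurement whose period is
    randomized by +/- jitter_frac, to avoid synchronized probing."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += period_s * (1 + rng.uniform(-jitter_frac, jitter_frac))
        times.append(t)
    return times

# e.g. an iperf run roughly every 10 minutes, +/- 10%:
runs = measurement_times(period_s=600, jitter_frac=0.1, n=5)
```

Each scheduled time would then trigger the selected tool (iperf, ping, etc.) against the chosen remote host.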
Deliverables: Monitoring Hosts deployment
Focused on a critical target audience:
  SLAC (BaBar), FNAL (CDF, D0, CMS): in place, will upgrade to version 3 when ready
  BNL (ATLAS), CERN (LHC: ATLAS/CMS): following successful deployment of v3 to SLAC & FNAL
  ESnet, StarLight (networking sites)
  Caltech (CMS: Tier 2), UMich (ATLAS: Tier 2)
  Optional European sites: INFN, IN2P3, RAL, GEANT
ADVANCED RESERVATION COMPUTATION FOR DATA ANALYSIS TO SUPPORT EXPERIMENTAL SCIENCE
Long-term vision: run SciDAC code on a supercomputer between pulses
  Data management
  Network QoS: guaranteed bandwidth at particular times or with certain characteristics
  Visualization
  CPU scheduling
  Faster CPUs and algorithms
End-to-end agreement being prototyped in the NFC project
  CPU reservation
  Network transfer agreements based on simple prediction
Has the potential to greatly impact the quality of experimental science
Ultra-Science Net Testbed
Lambda Station
Ultrascale Network Research Testbed Enabling Computational Genomics
Enabling Supernova Computations by Integrated Transport
Towards Scalable Cost-Effective Survivability in Ultra High Speed Networks
ISOGA: Integrated Services Optical Grid Architecture for Emerging e-Science Collaborative Applications
Design and Analysis of a Dynamic DWDM Multi-Terabits/sec Packet Switch Fabric
LAMBDA STATION: Problem statement
Experiments and applications now running, or starting soon, will benefit from data movement capabilities now available only on bleeding-edge networks.
These systems are connected to production site networks. Duplicating site infrastructure to connect them to special-purpose networks is an expense to be avoided if possible.
Multihoming the endpoints to production and specialized networks is complicated and expensive, and it (nearly) precludes graceful failover when one path is lost.
Applications (and operating systems) should not have to be re-customized for every new network technology or high-performance path.
LambdaStation
Function:
  Schedule use of one or more reservable network paths
  Arrange for traffic to be forwarded onto such a path
Summary
LambdaStation's role in data-intensive science is to dynamically connect production end-systems to advanced high-performance wide-area networks.
  Bring the systems to the network
  Bring the network to the systems
Prototyping has shown the feasibility of using dynamically selected network paths for traffic between production site networks.
  Policy-based network availability
Ultrascale Network Research Testbed Enabling Computational Genomics
To research, design, and build a network research testbed over the DOE UltraScience Net to enable computational genomic applications. This testbed will enable researchers to test and demonstrate applications that cause significant problems for traditional TCP/IP networks.
We will deliver a proof of concept
Microscope prototype demonstration
Exert remote control of high speed microscope in real time
Near-real time visualization of the experiment
Send high quality data to a network cluster for 3D rendering
Store the high quality image for further analysis & replay
Issues / Design Considerations
Real time data transferReal time data transfer Will approach 2.5 Gbps
Remote Instrument OperationRemote Instrument Operation Need real time transfer of control signals No packet loss or delay jitter
Remote VisualizationRemote Visualization Need near real-time access to visualizations of running experiments to gain
collective insights into cellular responses, and to make immediate decisions regarding the future course of the experiment.
Bulk Data TransferBulk Data Transfer Currently at 15 terabytes a day 12 instruments x 24 runs = 2.8TB/day Overall datasets will be approaching petabyte sizes
Terascale Supernova Initiative - TSI
Science objective: understand supernova evolution
DOE SciDAC project: ORNL and 8 universities
Teams of field experts across the country collaborate on computations
  Experts in hydrodynamics, fusion energy, high energy physics
Massive computational code
  Terabyte/day generated currently
  Archived at nearby HPSS
  Visualized locally on clusters - only archival data
Current networking challenges
  Limited transfer throughput: hydro code takes 8 hours to generate and 14 hours to transfer out
  Runaway computations: find out after the fact that parameters needed adjustment
Current DOE ORNL-UVA Project:Complementary Roles
•Project Components:•Provisioning for UltraScience Net - GMPLS•File transfers for dedicated channels•Peering – DOE UltraScience Net and NSF CHEETAH•Network optimized visualizations for TSI•TSI application support over UltraScience Net + CHEETAH
Peering
ORNL UVA
VisualizationTSI Application
ProvisioningFile Transfers
This project leverages two projects•DOE UltraScience Net•NSF CHEETAH
Peered UltraScienceNet-CHEETAH
[Diagram: DOE Science UltraNet + NSF CHEETAH map - 10 Gbps UltraNet links through Seattle, Sunnyvale, Chicago, and Atlanta reaching DOE national labs (ORNL, ANL, FNAL, BNL, SLAC, LBL, NERSC, PNNL, JLab) and universities (CalTech, UVa, NCSU, CUNY), with future connections including CERN; CHEETAH peers with UltraNet.]
Enables coast-to-coast dedicated channels
Phase I: TL1-GMPLS cross conversion
Phase II: GMPLS-based
UVA work items
Provisioning across CHEETAH and UltraScience networks
Transport protocol for dedicated circuits: Fixed-Rate Transport Protocol (FRTP)
Extend the CHEETAH concept to enable heterogeneous connections - a "connection-oriented internet"
Towards Cost-Effective Provisioning and Survivability in Ultra High Speed Networks
Target the need of supporting the DOE science mission
Study the scheduled traffic model and its variations, and service provisioning, bandwidth scheduling, collaboration scheduling, as well as QoS provisioning under this traffic model, to support bandwidth on demand
Enhance the network's ability to survive faults by providing cost-effective survivability with fast recovery speed and a variety of survivability options
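One classic building block for the survivability goal is computing a backup path that shares no link with the working path, so a single fault cannot take out both. The toy below runs BFS twice (find a working path, prune its links, find a backup); real designs typically use Suurballe's algorithm, which guarantees a disjoint pair when one exists. The topology loosely echoes the testbed hub names but is made up.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest hop-count path from src to dst, or None."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None

def disjoint_pair(adj, src, dst):
    """Working path plus an edge-disjoint backup (greedy two-pass BFS)."""
    primary = bfs_path(adj, src, dst)
    if primary is None:
        return None, None
    used = set(zip(primary, primary[1:]))
    pruned = {u: [v for v in vs if (u, v) not in used and (v, u) not in used]
              for u, vs in adj.items()}
    return primary, bfs_path(pruned, src, dst)

# Ring-like toy topology (illustrative only):
net = {"ORNL": ["Atlanta", "Chicago"], "Atlanta": ["ORNL", "Chicago"],
       "Chicago": ["ORNL", "Atlanta", "Seattle"], "Seattle": ["Chicago"]}
work, backup = disjoint_pair(net, "ORNL", "Chicago")
```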
ISOGA: Integrated Services Optical Grid Architecture for Emerging Scientific Collaborative Applications
Intelligent control plane services over an optical-switched network:
• Enable user-centric or application-centric dynamic lightpath provisioning
• On-demand setup and advance scheduling of lightpaths
• Dynamic lightpath restoration or self-healing
• Enable multi-domain lightpath provisioning
• Interoperate heterogeneous networks
Advanced transport services over an optical-switched network:
• Enable multiple transport services:
  • Gigabit-rate stream traffic (single lambda per application)
  • Sub-gigabit-rate stream traffic (sub-lambda per application)
  • Terabit-rate stream traffic (multiple lambdas per application)
  • Variable burst traffic
  • Multicast stream traffic
• Enable optical network-aware middleware and transport protocols
Next-Generation Terabit/sec Fabric Architecture
[Diagram: N x N passive optical star fabric (10 Gbps/port) with K:1 packet multiplexers on the ingress ports, GCSR tunable lasers, virtual output queueing, and a central scheduler; each of the N ingress ports at rate R feeds the mux stage at R/K.]
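Virtual output queueing, named in the diagram above, is the standard fix for head-of-line blocking: each input keeps one queue per output, and a scheduler matches inputs to outputs each timeslot. The greedy one-pass matching below is a stand-in for a real scheduler (e.g. an iSLIP-style iterative matcher); the switch size and packets are illustrative.

```python
from collections import deque

class VOQSwitch:
    """Toy N x N switch with virtual output queues: a blocked output
    cannot head-of-line-block an input's traffic to other outputs."""

    def __init__(self, n: int):
        self.n = n
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]

    def arrive(self, inp: int, out: int, pkt) -> None:
        self.voq[inp][out].append(pkt)

    def timeslot(self) -> dict:
        """Greedy matching: at most one grant per input and per output.
        Returns {input: (output, packet)} for this slot."""
        taken_out, grants = set(), {}
        for i in range(self.n):
            for o in range(self.n):
                if o not in taken_out and self.voq[i][o]:
                    grants[i] = (o, self.voq[i][o].popleft())
                    taken_out.add(o)
                    break
        return grants

sw = VOQSwitch(3)
sw.arrive(0, 2, "a")
sw.arrive(1, 2, "b")   # contends with "a" for output 2
sw.arrive(1, 0, "c")   # but input 1 can still use output 0 this slot
slot1 = sw.timeslot()
```

The fixed input-order greedy pass is unfair under load, which is exactly why the project's roadmap includes proving scheduler stability under admissible traffic.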
Roadmap
1. Packet switching engine over bonded/aggregated lambdas
2. Design & analysis of multi-Terabit/sec DWDM-based switches
   Prove stability under admissible traffic scenarios
   Performance evaluation
   Study implementation considerations
3. Develop comprehensive simulation platform
4. FPGA prototyping
   Scheduling algorithm
High-Speed Transport Protocols & Storage Systems
Highly Scalable, UDT-Based Network Transport Protocols for Lambda and 10 GE Routed Networks
Overlay Transit Networking for Scalable, High-Performance Data Communication across Heterogeneous Infrastructure
Phoebus: Network Middleware for Next-Generation Network Computing
GridFTP Lite
Multi-Gbps OS Bypass System for Grid Computing
Network Modeling
Introduction - What is UDT?
UDT is a UDP-based Data Transfer protocol
It is an open-source C++ transport service library
Application level: no kernel recompilation, no root privilege needed to install the library or run applications
It is fast, friendly to TCP, and fair to other teraflows
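A core idea behind UDP-based transports like UDT is sender-side rate control: packets are paced at a computed inter-packet interval instead of relying on TCP's window. The sketch below shows only that pacing arithmetic; UDT's actual control loop (AIMD on the sending rate, plus window control) is considerably more involved, and the rates here are illustrative.

```python
class RatePacer:
    """Toy inter-packet pacing in the spirit of rate-based transports:
    given a target rate, compute when each datagram may be sent."""

    def __init__(self, rate_mbps: float, packet_bytes: int = 1472):
        # seconds between packets at the target rate
        self.interval = packet_bytes * 8 / (rate_mbps * 1e6)
        self.next_send = 0.0

    def schedule(self, now: float) -> float:
        """Return the send time for the next packet at or after `now`."""
        send_at = max(now, self.next_send)
        self.next_send = send_at + self.interval
        return send_at

pacer = RatePacer(rate_mbps=941)   # roughly the payload rate of a 1 GE link
t0 = pacer.schedule(0.0)           # first packet goes immediately
t1 = pacer.schedule(0.0)           # second is held back one interval
```

A real sender would sleep (or busy-wait, at these microsecond intervals) until `send_at` before handing each datagram to the UDP socket.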
Phoebus: Network Middleware for Next-Generation Network Computing
Another network service middleware
It deals with heterogeneous network infrastructure and domains
It handles hop-by-hop negotiation for an end-to-end path that may span multiple types of connections
It separates legacy applications from the transport interface, while continually improving performance and requiring minimal changes
GridFtp Lite: Feature enhancements to improve GridFTP’s utility in non-production testbed uses.
Network researchers and others are interested in using
GridFTP for a limited amount of time, in non-production, low
risk scenarios—for example, as part of research projects
exploring new network protocols. GSI can represent a time
investment that is not justified, due to the associated need
to establish, configure, and manage an appropriate PKI.
Thus, this community has expressed a strong interest in
seeing extensions to GridFTP that would allow for
alternative flexible security solutions.
Feedback is needed.
Network Security
Firewall Architectures for High Speed Networks
Game-Theoretic Approach to Cyber Security
Detecting and Blocking Network Attacks at Ultra-High Speed
Group Security
Firewall Architectures for High Speed Networks
Develop policy optimization techniques
  Formal models for rules and security policies
  Reduce the processing requirement per packet
  Low-impact solutions for current and future firewall systems
  Models used to distribute rules in parallel firewall designs
Investigate high-speed firewall designs
  Distributed firewalls that process packets in parallel
  Maintain QoS requirements and differentiation
  Scalable with increasing network volumes and speeds
  More robust (highly available) and able to survive DoS attacks
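One concrete form of the per-packet optimization above: in a first-match policy with non-overlapping rules, moving frequently hit rules toward the front cuts the average number of comparisons per packet. The rules, traffic mix, and hit counts below are made up; real policies contain overlapping rules, where any reordering must preserve match precedence.

```python
def first_match(rules, packet):
    """Return (action, comparisons_used) under a first-match policy."""
    for i, (match, action) in enumerate(rules, 1):
        if match == packet:
            return action, i
    return "deny", len(rules)        # implicit default-deny

rules = [(("tcp", 23), "deny"),      # rarely hit
         (("tcp", 22), "accept"),
         (("tcp", 80), "accept")]    # carries most of the traffic

traffic = [("tcp", 80)] * 8 + [("tcp", 22)] * 2
cost = sum(first_match(rules, p)[1] for p in traffic)          # 8*3 + 2*2 = 28

# Reorder the (disjoint) rules by observed hit frequency:
hits = {("tcp", 80): 8, ("tcp", 22): 2, ("tcp", 23): 0}
optimized = sorted(rules, key=lambda r: -hits[r[0]])
cost_opt = sum(first_match(optimized, p)[1] for p in traffic)  # 8*1 + 2*2 = 12
```

The same cost model is what motivates distributing rules across parallel firewall nodes: each node sees only a slice of the rule set, so the per-packet work shrinks further.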
Incentive-based Modeling and Inference of Attacker Intent, Objectives and Strategies
Our defenses need to be more intelligent tomorrow.
Game theoretic approaches may help:
-- Model and infer (predict) attacker intent and strategies
-- Measure the resilience of a “secured” network
-- Do proactive or predictive intrusion response
Our research goals
(Diagram: the attacker-vs-defender battle — attacks, attack actions and effects, attack strategy vs. defense strategy, defense actions, events, conditions, defense posture, response, intrusions, alerts; shaped by risk, cost, security metrics and vectors, security degradation, intent, incentives, constraints, motive, intelligence, knowledge, uncertainty, value systems, rationality, objectives, utilities, strategy spaces, security mechanisms, payoffs, vulnerability, threat, and preference.)
Our approach
Step 1: conceptual modeling of AIOS
Step 2: game-theoretic formalization
Step 3: solve the game, yielding AIOS inferences
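To make the game-theoretic formalization concrete, here is a minimal toy example (invented for illustration, not the project's actual model): a two-player zero-sum game between attacker strategies and defense postures, where each side picks its security-level (maximin/minimax) pure strategy.

```python
# Toy attacker-vs-defender game; each payoff is the defender's loss.
# Rows: attacker strategies, columns: defense postures. Numbers invented.
payoff = [
    # defend-A  defend-B
    [4,         1],    # attack-A
    [2,         3],    # attack-B
]

# Attacker (maximizer of loss) picks the row maximizing its worst case.
row_security = [min(row) for row in payoff]            # [1, 2]
best_attack = row_security.index(max(row_security))    # attack-B

# Defender (minimizer of loss) picks the column minimizing its worst case.
cols = list(zip(*payoff))                              # [(4, 2), (1, 3)]
col_security = [max(c) for c in cols]                  # [4, 3]
best_defense = col_security.index(min(col_security))   # defend-B

print(best_attack, best_defense)   # 1 1 -> attack-B vs. defend-B
```

In this toy game the maximin (2) and minimax (3) values differ, so a full solution would use mixed strategies; the point here is only how strategies, payoffs, and inference fit together.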
Shunting: Hardware to Support Intrusion Detection for Ultra-High-Speed Networks
Wanted: Inline, High-Rate, Scalable IDS
Inline:
All packets pass through the IDS
(Potentially) the IDS can block an attack packet as well as subsequent packets
The IDS can also normalize the flows it observes
High rate and scalable:
Running at full 1 Gbps?
Interrupt & bus-transfer overhead renders commodity PCs unsuitable: must move 4 Gbps across memory and peripheral busses for inline operation
Standard PCI bus provides only 1 Gbps peak bandwidth
Running at 40 Gbps?
Hardware design needs to smoothly scale
Software remains effectively unchanged
High capacity:
Simultaneously block 10s-100s of thousands of offending hosts
Proposal: Shunting
Couple existing software IDS to shunt hardware
Can integrate the shunt into the IDS host or run it standalone
All traffic passes through the shunt
Shunt consults a "flow" table to determine what to do:
Forward packet to destination ("cut through")
Drop packet
Mirror packet to IDS while forwarding
Shunt packet through the IDS
Default: shunt all packets to the IDS
IDS can manipulate shunted packets
Examine and then reinject through the shunt
Or: examine and then drop to prevent their delivery
IDS can add/alter/delete entries in the shunt's flow table
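The shunt's per-packet decision described above can be sketched as a simple table lookup; the flow keys, addresses, and actions below are illustrative placeholders, not the hardware's actual interface.

```python
# Sketch of the shunt's per-packet decision: consult a flow table and
# forward, drop, mirror, or divert to the IDS; unknown flows default to
# the IDS. Flow keys and table contents are invented for illustration.

FORWARD, DROP, MIRROR, SHUNT_TO_IDS = "forward", "drop", "mirror", "shunt"

flow_table = {
    ("10.0.0.5", "10.0.0.9", 2811): FORWARD,   # trusted bulk flow: cut through
    ("6.6.6.6",  "10.0.0.9", 22):   DROP,      # blocked offending host
    ("10.0.0.7", "10.0.0.9", 80):   MIRROR,    # forward, copy to IDS
}

def shunt_decide(src, dst, dport):
    """Return the shunt's action for a packet; default diverts to the IDS."""
    return flow_table.get((src, dst, dport), SHUNT_TO_IDS)

print(shunt_decide("10.0.0.5", "10.0.0.9", 2811))  # forward
print(shunt_decide("1.2.3.4", "10.0.0.9", 443))    # shunt (default)
```

Because heavy trusted flows hit the cut-through entry, only the residual traffic reaches the software IDS, which is what lets the design scale with line rate.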
ESG Authorization Model
Password | Username: MyProxy/GridLogon used for portal authentication
Username | UserDN: MyProxy/GridLogon used for UserDN mapping
UserDN | Group: group membership assignment
Group | Operation | LFile: access policy expressed with groups, actions, and logical file names
LFile | PFile: mapping of logical file names to physical file paths
Derived access decision statement: the user with "Username" is allowed to invoke "Operation" on physical file "PFile"
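The mapping chain above can be sketched as a sequence of dictionary lookups; the usernames, DNs, groups, and file names here are invented placeholders, not real ESG data.

```python
# Sketch of the ESG authorization chain: username -> userDN -> group,
# then check the (Group, Operation, LFile) policy and map the logical
# file name to a physical path. All entries are invented examples.

user_dn = {"alice": "/O=ESG/CN=Alice"}
dn_group = {"/O=ESG/CN=Alice": "climate"}
policy = {("climate", "read", "lfn:cam3-run1")}            # Group|Operation|LFile
lfile_pfile = {"lfn:cam3-run1": "/data/esg/cam3/run1.nc"}  # LFile|PFile

def authorize(username, operation, lfile):
    """Return the physical path if the derived decision allows it, else None."""
    dn = user_dn.get(username)
    group = dn_group.get(dn)
    if (group, operation, lfile) in policy:
        return lfile_pfile.get(lfile)
    return None

print(authorize("alice", "read", "lfn:cam3-run1"))   # /data/esg/cam3/run1.nc
print(authorize("alice", "write", "lfn:cam3-run1"))  # None
```

Keeping the policy in terms of groups and logical file names, as the model does, means neither user accounts nor physical paths need to change when the policy does.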
ESG Portal Access
(Diagram: the user logs in to the Portal with username/password; MyProxy validates the credentials and maps the username to a userDN; the userDN is mapped to a group; as the user browses, policy enforcement checks the Group | Action | LFile policy; the LFile is mapped to a PFile, which is retrieved from the FileServer and returned to the user.)
MPLS, QoS, and Traffic Engineering Discussion
Present: Thomas, David Schissel (GA – Fusion user), Dantong Yu (BNL – MPLS), Les Cottrell (SLAC – monitoring), Martin Swany (U Del – Phoebus short RTT depots)
Need to add Chin Guok (ESnet MPLS in WAN), others from SLAC, BNL, GA?
Discussions
ESnet provides MPLS tunnels over the WAN
What happens in the LAN?
Not determined by ESnet
But useful to recommend and educate
Possibilities include: overprovisioning, 10 GE, multiple aggregated Ether Channels
Policy-based QoS, need to understand
Solution may vary with time
Must be compatible/work with the WAN solution
Technology
Need to develop mechanisms for applications/users to be able to specify service and then to mark the packets so routers can select MPLS tunnels (classification)
For simplicity, start with only 2 classes of service, but must be scalable to more
Longer term, look at prioritizing classified traffic in the LAN, how it fits in with the WAN, and what the interface is
Need monitoring of MPLS paths
To select/reserve paths, provide guidance (before and during transfer)
Impact of MPLS on other traffic
How/can we use Phoebus to bridge to USN?
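One common way for an application to mark its own packets for classification is to set the IP TOS/DSCP byte on its sockets, which LAN-edge classifiers can then match to steer traffic into an MPLS tunnel. The sketch below shows this with two classes of service; the specific DSCP value (Expedited Forwarding) is an assumption for illustration, not a value the project chose.

```python
import socket

# Two illustrative classes of service. The EF code point (46) shifted
# into the TOS byte is an assumed example, not the project's choice.
DSCP_PREMIUM = 46 << 2   # Expedited Forwarding in the upper 6 TOS bits
DSCP_BEST_EFFORT = 0

def make_marked_socket(service_class):
    """Create a TCP socket whose packets carry the given TOS/DSCP mark,
    so a classifier at the LAN edge can map them to an MPLS tunnel."""
    tos = DSCP_PREMIUM if service_class == "premium" else DSCP_BEST_EFFORT
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s

s = make_marked_socket("premium")
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on typical Linux hosts
```

Starting with just these two marks keeps the router-side classification rules trivial, while leaving room to add code points as more classes are introduced.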
Other stuff
Current project is envisioned as 2 years; if successful, then an extended project for an extra 2-3 years
Possible NSF-funded peer project including PPPL, MIT, UMich, UCSD
Introduces MPLS inter-domain signalling issues – a major challenge
Need web site showing performance between sites
Look at installing monitoring at GA
Action Items
Set up MPLS mailing list
Establish initial teleconference
Ensure ESnet fully involved
Establish ongoing monthly teleconferences
Web site for monitoring results
Add monitoring for BNL and GA
Conclusion
QoS, MPLS, and Traffic Engineering
Ultra-Science Net test bed
High-Speed Transport Protocols and Storage Systems
Thomas gives the following, and would be willing to set aside funds for this:
Scalable high-speed file transfer protocols
Technology transfer of recent advances in TCP research to file transfers
Host and storage system issues
Network Security: 1) DOE cyber security policies. 2) Assessment of the MICS security portfolio. Maybe there is a need for a cyber security workshop. Thomas wonders if we could assess his MICS portfolio and see how it helps open science. Doing research is different than doing business. It is necessary to do this, but also to do it securely. Thomas would be willing to entertain a proposal to hold such a workshop.