
GEC17 Using ExoGENI

Ilya Baldin ibaldin@renci.org


ExoGENI Overview


Testbed

14 GPO-funded racks built by IBM
◦ Partnership between RENCI, Duke and IBM
◦ IBM x3650 M4 servers (X-series, 2U)
  • 1x 146 GB 10K SAS hard drive + 1x 500 GB secondary drive
  • 48 GB RAM (1333 MHz)
  • Dual-socket 8-core CPU
  • Dual 1 Gbps adapter (management network)
  • 10G dual-port Chelsio adapter (dataplane)
◦ BNT 8264 10G/40G OpenFlow switch
◦ DS3512 6 TB sliverable storage
  • iSCSI interface for head node image storage as well as experimenter slivering
◦ Cisco (UCS-B) and Dell configurations also exist

Each rack is a small networked cloud
◦ OpenStack-based with NEuca extensions
◦ xCAT for bare-metal node provisioning

http://wiki.exogeni.net


ExoGENI

ExoGENI is a collection of off-the-shelf institutional clouds
◦ With a GENI federation on top
◦ xCAT: an IBM product
◦ OpenStack: a Red Hat product

Operators decide how much capacity to delegate to GENI and how much to retain for themselves

Familiar industry-standard interfaces (EC2; see the sketch below), plus a GENI interface
◦ Mostly does what GENI experimenters expect
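Because each rack runs a stock OpenStack cloud exposing an EC2-compatible API, standard EC2 tooling can talk to it directly. A minimal sketch using the classic Python boto 2.x library; the endpoint host, credentials, port and path below are hypothetical placeholders, not real ExoGENI values:

```python
# Minimal sketch: list instances over an OpenStack EC2-compatible API
# with boto 2.x. The endpoint, credentials, port and path are
# hypothetical placeholders for a rack head node, not real values.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="nova", endpoint="headnode.example.exogeni.net")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,                 # common OpenStack nova EC2 API port
    path="/services/Cloud",
)

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```

The point of the EC2 path is exactly this: operators and experimenters can reuse familiar cloud tooling against the retained (non-GENI) share of a rack.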

ExoGENI at a glance


Rack Software Stack

Deployment structure

An ExoGENI cloud “rack site”


ExoGENI unique features

ExoGENI racks are separate aggregates but can also act as a single aggregate
◦ Transparent stitching of resources from multiple racks

ExoGENI is designed to bridge distributed experimentation, computational sciences and Big Data
◦ Already running HPC workflows linked to OSG and national supercomputers
◦ Newly introduced support for storage slivering
◦ Strong performance isolation is one of the key goals


ExoGENI experimenter tools

GENI tools: Flack, GENI Portal, omni (see the sketch after this slide)
◦ Give access to common GENI capabilities
◦ Also mostly compatible with
  • ExoGENI native stitching
  • ExoGENI automated resource binding

ExoGENI-specific tools: Flukes
◦ Accepts GENI credentials
◦ Gives access to ExoGENI-specific features
  • Elastic Cluster slices
  • Storage provisioning
  • Stitching to campus infrastructure
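On the GENI-tools path, a typical omni workflow is: create a slice, create a sliver at an aggregate, poll its status, and clean up. A minimal sketch driving omni.py from Python; it assumes the gcf/omni tools are installed and omni_config holds valid GENI credentials, and the aggregate URL, slice name and RSpec filename are hypothetical examples:

```python
# Minimal sketch of a GENI AM API v2 workflow driven through omni.py.
# Assumes gcf/omni is installed and configured (omni_config); the
# aggregate URL, slice name and RSpec file are hypothetical examples.
import subprocess

AM_URL = "https://rack.example.exogeni.net:11443/orca/xmlrpc"  # hypothetical ExoGENI AM
SLICE = "myslice"

subprocess.check_call(["omni.py", "createslice", SLICE])
subprocess.check_call(["omni.py", "-a", AM_URL, "createsliver", SLICE, "request.rspec"])
subprocess.check_call(["omni.py", "-a", AM_URL, "sliverstatus", SLICE])
# ...run the experiment...
subprocess.check_call(["omni.py", "-a", AM_URL, "deletesliver", SLICE])
```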


ExoGENI activities


Experimentation

Compute nodes
◦ Up to 100 VMs in each full rack
◦ A few (2) bare-metal nodes
◦ BYOI (Bring Your Own Image)

True Layer 2 slice topologies can be created (see the request sketch after this slide)
◦ Within individual racks
◦ Between racks
◦ With automatic and user-specified resource binding and slice topology embedding
◦ Stitching across I2, ESnet, NLR and regional providers; dynamic wherever possible

OpenFlow experimentation
◦ Within racks
◦ Between racks
◦ Including OpenFlow overlays in NLR (and I2)
◦ On-ramp to campus OpenFlow network (if available)

Experimenters are allowed and encouraged to use their own virtual appliance images

Since Dec 2012
◦ 2500+ slices
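A Layer 2 topology is requested through an RSpec. A minimal sketch of a two-node, single-link GENI v3 request RSpec built with the Python standard library; the client IDs are arbitrary, and the sliver_type name is a hypothetical example since the types accepted vary by aggregate:

```python
# Minimal sketch: a two-node, single-link GENI v3 request RSpec.
# Client IDs are arbitrary; the sliver_type name is hypothetical and
# the exact types accepted vary by aggregate.
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", NS)

rspec = ET.Element(f"{{{NS}}}rspec", {"type": "request"})

for name in ("node0", "node1"):
    node = ET.SubElement(rspec, f"{{{NS}}}node",
                         {"client_id": name, "exclusive": "false"})
    ET.SubElement(node, f"{{{NS}}}sliver_type", {"name": "m1.small"})  # hypothetical VM size
    ET.SubElement(node, f"{{{NS}}}interface", {"client_id": f"{name}:if0"})

# A single Layer 2 link joining the two interfaces
link = ET.SubElement(rspec, f"{{{NS}}}link", {"client_id": "link0"})
ET.SubElement(link, f"{{{NS}}}interface_ref", {"client_id": "node0:if0"})
ET.SubElement(link, f"{{{NS}}}interface_ref", {"client_id": "node1:if0"})

ET.ElementTree(rspec).write("request.rspec", xml_declaration=True, encoding="utf-8")
```

The resulting request.rspec is what a tool like omni submits to an aggregate; tools such as Flukes build the same kind of document graphically.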

[Figure: ExoGENI at a glance. Callouts: virtual network exchange; virtual colo (campus net to circuit fabric); multi-homed cloud hosts with network control; topology embedding and stitching; computed embedding; workflows, services, etc.]

Slice half-way around the world

ExoGENI rack in Sydney, Australia

Multiple VLAN tags on a pinned path from Sydney to LA

Internet2/OSCARS ORCA-provisioned dynamic circuit
◦ LA, Chicago

NSI statically pinned segment with multiple VLAN tags
◦ Chicago, NY, Amsterdam
◦ Planning to add a dynamic NSI interface

ExoGENI rack in Amsterdam
◦ ~14,000 miles
◦ 120 ms delay

ExoGENI slice isolation

Strong isolation is the goal

Compute instances are KVM-based and get a dedicated number of cores

VLANs are the basis of connectivity
◦ VLANs can be best-effort or bandwidth-provisioned, within and between racks (see the sketch below)
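In a GENI v3 request, the practical difference between a best-effort and a bandwidth-provisioned VLAN is typically a capacity property attached to the link. A minimal sketch under that assumption, building on the two-node request shown earlier; the capacity value is illustrative, and the unit convention (kb/s) is a common reading rather than something stated in this talk:

```python
# Sketch: requesting a bandwidth-provisioned (rather than best-effort)
# VLAN by adding capacity properties to a GENI v3 request link.
# The 100000 value (commonly read as kb/s, i.e. ~100 Mb/s) is illustrative.
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"
link = ET.Element(f"{{{NS}}}link", {"client_id": "link0"})
for src, dst in (("node0:if0", "node1:if0"), ("node1:if0", "node0:if0")):
    ET.SubElement(link, f"{{{NS}}}property",
                  {"source_id": src, "dest_id": dst, "capacity": "100000"})
print(ET.tostring(link, encoding="unicode"))
```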

Scientific Workflows

Workflow Management Systems
◦ Pegasus (see the DAX sketch after this slide), custom scripts, etc.

Lack of tools to integrate with dynamic infrastructures
◦ Orchestrate the infrastructure in response to the application
◦ Integrate data movement with workflows for optimized performance
◦ Manage the application in response to the infrastructure

Scenarios
◦ Computational, with varying demands
◦ Data-driven, with large static data set(s)
◦ Data-driven, with a large amount of input/output data
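Pegasus describes a workflow as an abstract DAG (a DAX) that the planner later maps onto concrete resources, such as an ExoGENI slice. A minimal sketch using the Pegasus 4.x DAX3 Python API; the "process" transformation and the file names are hypothetical, and newer Pegasus 5 releases replaced this module with Pegasus.api:

```python
# Minimal sketch: a one-job abstract workflow (DAX) with the Pegasus 4.x
# DAX3 Python API. The "process" transformation and file names are
# hypothetical examples.
from Pegasus.DAX3 import ADAG, Job, File, Link

dax = ADAG("exogeni-demo")

raw = File("input.txt")
out = File("output.txt")

job = Job(name="process")
job.addArguments("-i", raw, "-o", out)
job.uses(raw, link=Link.INPUT)
job.uses(out, link=Link.OUTPUT, transfer=True)
dax.addJob(job)

with open("workflow.dax", "w") as f:
    dax.writeXML(f)
```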

Dynamic Workflow Steps (Computational)


Workflow slices
• 462,969 Condor jobs since adopting the on-ramp to engage-submit3 (OSG); see the sketch below
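Those jobs flow through HTCondor. A minimal sketch of queuing one vanilla-universe job with the htcondor Python bindings (8.x/9.x-era API); it assumes the bindings are installed on a configured submit host, and the executable and file names are illustrative:

```python
# Minimal sketch: queue one vanilla-universe job via the HTCondor Python
# bindings. Assumes a configured submit host with the bindings installed.
import htcondor

sub = htcondor.Submit({
    "executable": "/bin/sleep",     # illustrative payload
    "arguments": "60",
    "output": "job.$(Cluster).out",
    "error": "job.$(Cluster).err",
    "log": "job.log",
})

schedd = htcondor.Schedd()
with schedd.transaction() as txn:
    cluster_id = sub.queue(txn, count=1)
print("submitted cluster", cluster_id)
```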

Dynamic Workflow Steps (On-Ramp)

Dynamic Workflow Steps (Data-driven)

Hardware-in-the-loop slices
• Hardware-in-the-Loop Facility using RTDS & PMUs (FREEDM Center, NCSU)

RENCI Data Center

Experimental PMU Data from RTDS

[Figures: PMU voltage magnitudes (pu) during the AEP Richview event vs. time with origin 12:38:00 EST (traces JF, KR, MF, RK, OR, OS); fast oscillations (pu) vs. time for Bus 1, Bus 2 and Midpoint; angle difference (deg) vs. time starting 04-Aug-2007 16:30:00 (traces tva, mac); interarea mode estimates from PMU measurements with zero/first-order hold]

[Figure: testbed topology with intra-cluster virtualization; PoPs at UNC Chapel Hill, Duke University and NC State connecting Cluster 1, Cluster 2 and Cluster 3]

Distributed Execution of Time-critical Synchrophasor Applications

Fiber-optic network for PMU data communication using IEEE C37.118

New GENI-WAMS testbed
◦ Latency & processing delays
◦ Packet loss
◦ Network jitter (see the sketch below)
◦ Cyber-security: man-in-the-middle attacks

Aranya Chakrabortty, Aaron Hurz (NCSU); Yufeng Xin (RENCI/UNC)
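Latency and jitter for a PMU-style stream can be estimated at the receiver from packet inter-arrival times. A minimal sketch assuming a UDP stream; port 4713 follows a common IEEE C37.118 convention, but that and the sample count are assumptions rather than details from this talk:

```python
# Minimal sketch: estimate inter-arrival jitter of a PMU-style UDP stream
# at the receiver. Port 4713 (a common C37.118 UDP convention) and the
# sample count are assumptions.
import socket
import statistics
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 4713))

arrivals = []
for _ in range(100):
    sock.recvfrom(2048)             # payload ignored; timing only
    arrivals.append(time.monotonic())

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print(f"mean inter-arrival: {statistics.mean(gaps) * 1e3:.2f} ms")
print(f"jitter (stdev):     {statistics.stdev(gaps) * 1e3:.2f} ms")
```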


http://www.exogeni.net
◦ ExoBlog: http://www.exogeni.net/category/exoblog/

http://wiki.exogeni.net

Thank you!