PlanetLab Applications and Federation
Kiyohide NAKAUCHI (NICT), [email protected]
Aki NAKAO (UTokyo / NICT), [email protected], [email protected]
23rd ITRC Symposium, 2008/05/16
(1) PlanetLab Applications
CoMon: monitoring slice-level statistics
2008/05/16 K.NAKAUCHI, NICT 2
http://summer.cs.princeton.edu/status/index_slice.html
Over 400 nodes
Typical Long-running Applications
- CDN: CoDeeN [Princeton], Coral [NYU], Coweb [Cornell]
- Large-file transfer: CoBlitz, CoDeploy [Princeton], SplitStream [Rice]
- Routing overlays: i3 [UCB], Pluto [Princeton]
- DHT / P2P middleware: Bamboo [UCB], Meridian [Cornell], Overlay Weaver [UWaseda]
- Brokerage service: Sirius [UGA]
- Measurement, monitoring: ScriptRoute [Maryland, UWash], S-cube [HP Labs], CoMon, CoTop, PlanetFlow [Princeton]
- DNS, anomaly detection, streaming, multicast, anycast, …
In addition, there are many short-term research projects on PlanetLab.
CoDeeN: Academic Content Distribution Network
- Improves web performance & reliability
- 100+ proxy servers on PlanetLab
- Running 24/7 since June 2003
- Roughly 3-4 million requests/day aggregate
- One of the highest-traffic projects on PlanetLab
How Does CoDeeN Work?
Each CoDeeN proxy is a forward proxy, reverse proxy, & redirector.
[Diagram: request/response flow through a CoDeeN proxy, showing the cache-hit path (respond from the local cache) and the cache-miss path (redirect to the responsible peer proxy, which responds from its cache or fetches on a further miss)]
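The hit/miss flow above can be sketched in a few lines. This is a toy model, not CoDeeN's actual code: the class, its peer-selection hash, and the `fetch_origin` callback are all illustrative assumptions; only the roles (forward proxy, redirector, reverse proxy) come from the slide.

```python
import hashlib

class CoDeeNStyleProxy:
    """Toy model of a CoDeeN-style node that acts as forward proxy,
    redirector, and reverse proxy at once (illustration only)."""

    def __init__(self, name, peers):
        self.name = name
        self.peers = peers   # shared list of all proxy nodes
        self.cache = {}      # url -> response body

    def pick_peer(self, url):
        # Redirector role: hash the URL to a stable peer, so every
        # node sends the same URL to the same reverse proxy.
        idx = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(self.peers)
        return self.peers[idx]

    def handle_request(self, url, fetch_origin):
        # Forward-proxy role: local cache hit is answered directly.
        if url in self.cache:
            return self.cache[url]
        # Local miss: redirect to the peer responsible for this URL.
        peer = self.pick_peer(url)
        body = peer.serve_as_reverse_proxy(url, fetch_origin)
        self.cache[url] = body
        return body

    def serve_as_reverse_proxy(self, url, fetch_origin):
        # Reverse-proxy role: only fetch from the origin on a miss.
        if url not in self.cache:
            self.cache[url] = fetch_origin(url)
        return self.cache[url]
```

Because the redirector hash is deterministic, repeated requests for the same URL from different nodes hit the same reverse proxy's cache, so the origin is contacted only once.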
CoBlitz: Scalable Large-file CDN
Faster than BitTorrent by 55-86% (~500%)
[Diagram: a client's agent resolves coblitz.codeen.org via DNS and issues HTTP RANGE QUERYs; the file is split into chunks (chunk 1, chunk 2, …) spread across CDN nodes, where CDN = Redirector + Reverse Proxy; only the reverse proxies (CDN) cache the chunks, fetching missing ones from the origin server]
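The chunking idea behind CoBlitz — split one large download into fixed-size pieces fetched via HTTP Range requests, so each reverse proxy only ever caches small chunks — can be sketched as follows. The chunk size and helper names are illustrative, not taken from CoBlitz itself.

```python
def range_headers(file_size, chunk_size=1 << 20):
    """Yield (start, end, Range header) for each chunk of a file,
    the way a CoBlitz-style agent would split one large download
    into many small HTTP range queries."""
    for start in range(0, file_size, chunk_size):
        end = min(start + chunk_size, file_size) - 1
        yield start, end, {"Range": f"bytes={start}-{end}"}

def reassemble(chunks):
    """Concatenate (offset, body) chunks, possibly returned by
    different proxies in any order, back into the original bytes."""
    return b"".join(body for _, body in sorted(chunks))
```

Each `Range` header uses the standard inclusive `bytes=start-end` form, so any chunk can be requested independently and the agent can reorder or retry chunks freely.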
How Does PlanetLab Behave? Node Availability
[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Live Slices
[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
50% of nodes have 5-10 live slices
Bandwidth
Bandwidth in
Bandwidth out
[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Median: 500-1000 Kbps
(2) Extending PlanetLab
- Federation: distributed operation/management
- Private PlanetLab: private use, original configuration; e.g., CORE [UTokyo, NICT]
- Hardware support (C/D separation): custom hardware such as Intel IXP, NetFPGA, 10GbE; e.g., Supercharging PlanetLab [UWash]
- Edge diversity: wireless technology integration [OneLab]; e.g., HSDPA, WiFi, Bluetooth, ZigBee, 3GPP LTE
- GENI, VINI
Federation
- Split PlanetLab: several regional PlanetLabs, each with its own policy
- Interconnection: share node resources among PlanetLabs
[Diagram: PlanetLab 1, PlanetLab 2, PlanetLab 3, … each with its own PLC, interconnected over the Internet; each node runs a VMM with a NodeMgr and virtual machines VM1..VMn; node resources are traded between the PlanetLabs]
PlanetLab-EU Starts Federation
- Emerging European portion of the public PlanetLab: 33 nodes today (migrated from PlanetLab)
- Supported by the OneLab project (UPMC, INRIA); control center in Paris
- PlanetLab-JP will also pursue federation
MyPLC for Your Own PlanetLab
- PlanetLab in a box: a complete, portable PlanetLab Central (PLC) package
- Easy to install and administer: all code is isolated in a chroot jail
- Single configuration file
[Diagram: MyPLC architecture — a /plc/ chroot jail on Linux containing the PLC components: Apache, OpenSSL, PostgreSQL, pl_db, plc_www, plc_api, bootmanager, bootcd_v3]
Resource Management
Resource sharing policy: by contributing 2 nodes to any one PlanetLab, a site can create 10 slices that span the federated PlanetLab.
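The sharing policy above can be expressed as a small quota function. Only the "2 nodes earn 10 federation-wide slices" ratio comes from the slide; the function name and the assumption that the quota scales proportionally with further contributions are hypothetical.

```python
def slice_quota(nodes_contributed, nodes_per_unit=2, slices_per_unit=10):
    """Hypothetical reading of the federation sharing policy:
    every 2 nodes contributed to one PlanetLab earn a site 10
    slices spanning the federated PlanetLab. Proportional scaling
    beyond the first 2 nodes is an assumption, not stated on the
    slide."""
    return (nodes_contributed // nodes_per_unit) * slices_per_unit
```

Under this reading, a site contributing 2 nodes gets 10 slices, and 4 nodes would get 20.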
RSpec: General, Extensible Resource Description
- Portals present a higher-level front-end view of resources
- Portals will use RSpec as part of the back-end
RSpec example:
<component type="virtual access point" requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <processing requestID="cpu1">
    <power units="CyclesPerSecond">
      <value>1000000000</value>
    </power>
    <function>Full</function>
  </processing>
  <storage requestID="disk1">
    <capacity units="GB">
      <value>10</value>
    </capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <medium>FreqShared</medium>
    <mediumtype>broadcast</mediumtype>
    <wireless:protocol>802.11g</wireless:protocol>
    <wireless:frequency type="802.11channel">16</wireless:frequency>
  </wireless:communication>
</component>
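To show how a portal back-end might consume such a description, here is a minimal parse of the slide's RSpec fragment with the standard library. The `urn:example:wireless` namespace URI is a placeholder of ours — the slide does not declare one for the `wireless:` prefix — and the `summarize` helper is purely illustrative.

```python
import xml.etree.ElementTree as ET

# The slide's RSpec fragment, with the "wireless" prefix bound to a
# placeholder URI (the real namespace URI is not given on the slide).
RSPEC = """\
<component xmlns:wireless="urn:example:wireless"
           type="virtual access point" requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <processing requestID="cpu1">
    <power units="CyclesPerSecond"><value>1000000000</value></power>
    <function>Full</function>
  </processing>
  <storage requestID="disk1">
    <capacity units="GB"><value>10</value></capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <medium>FreqShared</medium>
    <mediumtype>broadcast</mediumtype>
    <wireless:protocol>802.11g</wireless:protocol>
    <wireless:frequency type="802.11channel">16</wireless:frequency>
  </wireless:communication>
</component>
"""

def summarize(rspec_xml):
    """Pull a few fields out of an RSpec, as a portal back-end might."""
    ns = {"wireless": "urn:example:wireless"}
    root = ET.fromstring(rspec_xml)
    return {
        "physicalID": root.get("physicalID"),
        "cpu_hz": int(root.findtext("processing/power/value")),
        "disk_gb": int(root.findtext("storage/capacity/value")),
        "protocol": root.findtext(
            "wireless:communication/wireless:protocol", namespaces=ns),
    }
```

Because RSpec is plain XML, extensions like the wireless elements live in their own namespace and can be ignored by portals that do not understand them.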
Summary
- PlanetLab applications: 800+ network services, each running in its own slice
  - Long-running infrastructure services
  - Measurement using a set of useful monitoring tools reveals the extensive use of PlanetLab
- Federation: distributes operation and management
  - Future PlanetLab = current PL + PL-EU + PL-JP + …
Monitoring Tools
- CoTop: monitors which slices are consuming resources on each node, like "top"
- CoMon: monitors statistics for PlanetLab at both the node level and the slice level
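Since each PlanetLab slice runs under its own local account on a node, a top-like per-slice view boils down to aggregating resource usage per process owner. A rough, Linux-only sketch of that idea (this is our illustration, not CoTop's implementation):

```python
import os

def rss_by_owner(proc="/proc"):
    """Aggregate resident memory (kB) per process owner by walking
    /proc — roughly what a top-like, per-slice monitor does, given
    that each slice runs under its own local account. Linux-only
    sketch, not CoTop's actual code."""
    totals = {}
    if not os.path.isdir(proc):
        return totals  # not on Linux; nothing to sample
    for pid in os.listdir(proc):
        if not pid.isdigit():
            continue
        try:
            owner = os.stat(os.path.join(proc, pid)).st_uid
            with open(os.path.join(proc, pid, "status")) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        totals[owner] = totals.get(owner, 0) + int(line.split()[1])
                        break
        except (OSError, ValueError):
            continue  # process exited mid-walk, or no VmRSS field
    return totals
```

A real monitor would sample CPU and bandwidth the same way and map UIDs back to slice names before reporting.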