PlanetLab: An Overlay Testbed for Broad-Coverage Services
Bavier, Bowman, Chun, Culler, Peterson, Roscoe, Wawrzoniak
Presented by Jason Waddle
Overview
1. What is PlanetLab?
2. Architecture
   1. Local: Nodes
   2. Global: Network
3. Details
   1. Virtual Machines
   2. Maintenance
What Is PlanetLab?
• Geographically distributed overlay network
• Testbed for broad-coverage network services
PlanetLab Goal
“…to support seamless migration of an application from an early prototype, through multiple design iterations, to a popular service that continues to evolve.”
Priorities
• Diversity of Network
  – Geographic
  – Links
    • Edge-sites, co-location and routing centers, homes (DSL, cable-modem)
• Flexibility
  – Allow experimenters maximal control over PlanetLab nodes
  – Securely and fairly
PlanetLab Architecture
• Node-level
  – Several virtual machines on each node, each running a different service
    • Resources distributed fairly
    • Services are isolated from each other
• Network-level
  – Node managers, agents, brokers, and service managers provide the interface and maintain PlanetLab
Services Run in Slices
[Animated diagram: PlanetLab nodes host virtual machines; each service runs in its own slice of virtual machines spread across the nodes (Service/Slice A, Service/Slice B, Service/Slice C)]
Node Architecture Goals
• Provide a virtual machine for each service running on a node
• Isolate virtual machines
• Allow maximal control over virtual machines
• Fair allocation of resources
  – Network, CPU, memory, disk
One Extreme: Software Runtime (e.g., Java Virtual Machine)
• High-level API
• Depends on OS to provide protection and resource allocation
• Not flexible
Other Extreme: Complete Virtual Machine (e.g., VMware)
• Low-level API (hardware)
  – Maximum flexibility
• Excellent protection
• High CPU/memory overhead
  – Cannot share common resources among virtual machines
    • OS, common filesystem
Mainstream Operating System
• API and protection at same level (system calls)
• Simple implementation (e.g., Slice = process group; see the sketch after this list)
• Efficient use of resources (shared memory, common OS)
• Bad protection and isolation
• Maximum control and security?
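A toy illustration of the “slice = process group” idea (not PlanetLab’s actual code; the program path and timing are arbitrary): every process in a slice shares one process group, so the node can signal the whole slice with a single call.

    #include <signal.h>
    #include <unistd.h>

    /* Toy sketch: a slice's processes share one process group, so the
     * node can signal the entire slice with a single kill() call. */
    int main(void)
    {
        pid_t slice = fork();
        if (slice == 0) {              /* child: the slice's first process */
            setpgid(0, 0);             /* become leader of a new group     */
            execl("/bin/sleep", "sleep", "30", (char *)NULL);
            _exit(1);                  /* exec failed */
        }
        sleep(1);                      /* let the child set its group      */
        kill(-slice, SIGKILL);         /* negative PID signals the group   */
        return 0;
    }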
PlanetLab Virtualization: VServers
• Kernel patch to mainstream OS (Linux)
• Gives the appearance of a separate kernel for each virtual machine
  – Root privileges restricted to activities that do not affect other vservers
• Some modifications: resource control (e.g., file handles, port numbers) and protection facilities added
PlanetLab Network Architecture
• Node manager (one per node)
  – Creates slices for service managers
    • When service managers provide valid tickets
  – Allocates resources for vservers
• Resource monitor (one per node)
  – Tracks node’s available resources
  – Tells agents about available resources
PlanetLab Network Architecture
• Agents (centralized)
  – Track nodes’ free resources
  – Advertise resources to resource brokers
  – Issue tickets to resource brokers
    • Tickets may be redeemed with node managers to obtain the resource
PlanetLab Network Architecture
• Resource broker (per service)
  – Obtains tickets from agents on behalf of service managers
• Service managers (per service)
  – Obtain tickets from broker
  – Redeem tickets with node managers to acquire resources
  – If resources can be acquired, start the service (a sketch of this exchange follows)
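The exchange above can be sketched in a few lines of C. Every type, field, and function name here is hypothetical, standing in for the real, authenticated protocol; the boolean flag stands in for a cryptographic signature.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch of the ticket flow between agent, broker,
     * service manager, and node manager. */
    typedef struct {
        int  node_id;
        int  cpu_share;        /* promised CPU share             */
        int  bandwidth_kbps;   /* promised network bandwidth     */
        bool signed_by_agent;  /* stands in for a real signature */
    } Ticket;

    /* Agent: tracks free resources reported by resource monitors and
     * issues signed tickets (here, fetched by a broker). */
    static Ticket agent_issue_ticket(int node_id, int cpu, int bw)
    {
        Ticket t = { node_id, cpu, bw, true };
        return t;
    }

    /* Node manager: creates a vserver only for a valid ticket. */
    static bool node_manager_redeem(const Ticket *t)
    {
        if (!t->signed_by_agent)
            return false;      /* reject forged tickets */
        printf("node %d: creating vserver (cpu=%d, bw=%d kbps)\n",
               t->node_id, t->cpu_share, t->bandwidth_kbps);
        return true;
    }

    /* Service manager: obtains a ticket via its broker, redeems it
     * with the node manager, and starts the service on success. */
    int main(void)
    {
        Ticket t = agent_issue_ticket(42, 10, 1000);
        if (node_manager_redeem(&t))
            printf("slice acquired; starting service\n");
        return 0;
    }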
Obtaining a Slice
[Animated diagram: resource monitors report each node’s free resources to the agent; the agent issues tickets to the broker; the service manager obtains tickets from the broker and redeems them with the node managers to create its slice]
PlanetLab Virtual Machines: VServers
• Extends the idea of chroot(2) (see the sketch after this list)
  – New vserver created by system call
  – Descendant processes inherit vserver
  – Unique filesystem, SYSV IPC, UID/GID space
  – Limited root privilege
    • Can’t control host node
  – Irreversible
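A minimal sketch of the chroot(2) half of that idea (must run as root; “/vservers/slice-a” is a hypothetical path). The real vserver system call additionally switches the IPC namespace, UID/GID space, and security context, and makes the confinement irreversible.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Confine this process to the vserver's private tree. */
        if (chroot("/vservers/slice-a") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Descendants inherit the new root: this shell, and anything
         * it spawns, can no longer name files outside the tree. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }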
Scalability
• Reduce disk footprint using copy-on-write
  – Immutable flag provides file-level CoW
  – Vservers share a 508MB base filesystem
    • Each additional vserver takes only 29MB (so, e.g., 100 vservers need about 508MB + 100 × 29MB ≈ 3.4GB, rather than ~51GB of full copies)
• Increase limits on kernel resources (e.g., file descriptors)
  – Is the kernel designed to handle this? (Inefficient data structures?)
Protected Raw Sockets
• Services may need low-level network access
  – Cannot allow them access to other services’ packets
• Provide “protected” raw sockets
  – TCP/UDP socket bound to a local port
  – Incoming packets delivered only to the service with the corresponding port registered
  – Outgoing packets scanned to prevent spoofing (sketch below)
• ICMP also supported
  – 16-bit identifier placed in ICMP header
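A sketch of the outgoing-packet scan for the UDP case; the function is illustrative, not the actual kernel code. A packet is allowed out only if its source port matches the local port the slice registered.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative anti-spoofing check: the UDP source port is the
     * first two header bytes, in network byte order. */
    static bool outgoing_packet_allowed(uint16_t registered_port,
                                        const uint8_t *udp_header)
    {
        uint16_t src_port = (uint16_t)((udp_header[0] << 8) | udp_header[1]);
        return src_port == registered_port;
    }

    int main(void)
    {
        uint8_t hdr[8] = { 0x1F, 0x90 };   /* source port 8080 */
        printf("%s\n", outgoing_packet_allowed(8080, hdr) ? "pass" : "drop");
        printf("%s\n", outgoing_packet_allowed(9999, hdr) ? "pass" : "drop");
        return 0;
    }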
Resource Limits
• Node-wide cap on outgoing network bandwidth
  – Protect the world from PlanetLab services
• Isolation between vservers: two approaches
  – Fairness: each of N vservers gets 1/N of the resources during contention
  – Guarantees: each slice reserves a certain amount of resources (e.g., 1Mbps bandwidth, 10Mcps CPU)
    • Left-over resources distributed fairly (see the arithmetic sketch below)
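A small sketch of how the two approaches compose for bandwidth: guarantees are granted first, then the leftover capacity is split evenly. The numbers (four slices, a 10Mbps node cap, one 1Mbps reservation) are illustrative.

    #include <stdio.h>

    int main(void)
    {
        int capacity_kbps = 10000;
        int reserved[] = { 1000, 0, 0, 0 };    /* slice 0 reserves 1Mbps */
        int n = 4;

        int left = capacity_kbps;
        for (int i = 0; i < n; i++)
            left -= reserved[i];               /* grant guarantees first */

        for (int i = 0; i < n; i++)            /* share the remainder    */
            printf("slice %d: %d kbps\n", i, reserved[i] + left / n);
        return 0;
    }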
Linux and CPU Resource Management
• The Linux scheduler provides fairness per process, not per vserver
  – A vserver with many processes hogs the CPU (e.g., a vserver running 10 processes gets roughly 10 times the CPU of one running a single process)
• No current way for the scheduler to provide guaranteed slices of CPU time
PlanetLab Network Management
1. PlanetLab nodes boot a small Linux OS from CD, run on a RAM disk
2. Node contacts a bootserver
3. Bootserver sends a (signed) startup script (sketch below)
   • Boot normally, or
   • Write a new filesystem, or
   • Start sshd for remote PlanetLab admin login
• Nodes can be remotely power-cycled
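A sketch of that boot sequence; the helper functions are stubs standing in for the real network fetch and signature check, and the URL and paths are hypothetical.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Stubs standing in for the real boot logic: the node downloads a
     * startup script and verifies the bootserver's signature against
     * a key shipped on the boot CD. */
    static bool fetch_startup_script(const char *url) { (void)url; return true; }
    static bool signature_valid(const char *script)   { (void)script; return true; }

    int main(void)
    {
        /* 1. The RAM-disk Linux contacts the bootserver. */
        if (!fetch_startup_script("https://bootserver.example/startup"))
            return 1;
        /* 2. Refuse to run a script whose signature does not verify. */
        if (!signature_valid("/tmp/startup-script"))
            return 1;
        /* 3. The signed script then boots normally, writes a new
         *    filesystem, or starts sshd for remote admin login. */
        return system("/tmp/startup-script");
    }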
Dynamic Slice Creation
1. Node manager verifies tickets from service manager
2. Creates a new vserver
3. Creates an account on the node and on the vserver
User Logs in to PlanetLab Node
• /bin/vsh immediately:
  1. Switches to the account’s associated vserver
  2. Chroot()s to the associated root directory
  3. Relinquishes true root privileges
  4. Switches UID/GID to the account on the vserver
• The transition to the vserver is transparent: it appears the user just logged into the PlanetLab node directly (see the sketch below)
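A minimal sketch of those four steps (hypothetical path and IDs; the real vsh first switches the kernel’s vserver context before chrooting).

    #include <grp.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Steps 1-2: enter the account's vserver filesystem. */
        if (chroot("/vservers/slice-a") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Steps 3-4: irrevocably drop root, then become the vserver
         * account. Groups and GID must change before the UID does,
         * or the process would no longer be permitted to change them. */
        if (setgroups(0, NULL) != 0 || setgid(1000) != 0 || setuid(1000) != 0) {
            perror("drop privileges");
            return 1;
        }
        /* To the user, this looks like a direct login to the node. */
        execl("/bin/sh", "-sh", (char *)NULL);
        perror("execl");
        return 1;
    }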
PlanetLab Today
• More than 220 nodes
• Over 100 sites
• More than 200 research projects have used PlanetLab
• Goal: over 1000 geographically diverse nodes
www.planet-lab.org