Running large scale experimentation on Content-Centric Networking via the Grid'5000 platform
Massimo GALLO (Bell Labs, Alcatel-Lucent)
Joint work with: Luca Muscariello (Orange), Giovanna Carofiglio (Bell Labs, Alcatel-Lucent)
Agenda
ICN
Lurch
Experiments
Conclusions and future works
ICN
Today's Internet
- Ever-growing amount of digital information
- Point-to-point dissemination
- Mobility issues
- Waste of resources in content replication

ICN advantages
- Simplified management
- Traffic reduction and localization
- Seamless, ubiquitous connectivity
- Congestion reduction
- Effective security

ICN properties
- Names, not addresses
- Name-based routing/forwarding
- In-network storage
- Pull-based transport

[Figure: today's host-based retrieval (www.imdb.com/title/tt12242 resolved to a server at imdb.com) contrasted with ICN's named packets.]
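As a concrete illustration of the pull-based, name-oriented model (our addition, not from the slides), the CCNx prototype ships with utilities that publish and retrieve content purely by name; the content name and file names below are assumptions for the sketch:

  # Minimal sketch, assuming a local ccnd daemon and a CCNx repository are running
  ccndstart                                        # start the local CCNx daemon
  ccnputfile ccnx:/imdb/title/tt12242 page.html    # publish a file under a name
  ccngetfile ccnx:/imdb/title/tt12242 copy.html    # pull it back by name, not by host address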
LURCH
Lurch
From protocol design to large scale experimentation
- A newly designed protocol needs to be tested
- Event-driven simulation: limited in the number of events (hence in topology size); computation is hard to parallelize
- Large scale experiments: complex to manage
- We needed a test orchestrator
Lurch
Lurch is a test orchestrator for CCNx. It simplifies and automates the testing of ICN protocols over a list of interconnected servers (i.e. G5K), and it runs on a separate machine that controls the test.
[Figure: Lurch architecture. The controller runs Lurch; on each node the protocol stack comprises the Application, Control Plane, Management and Virtualized Data Plane (CCNx over TCP/UDP over a virtualized IP layer), on top of the physical IP and PHY layers.]
Architecture: the Lurch controller manages
- the Virtualized Data Plane
- the Control Plane
- the Application layer
Lurch
Topology management
- Creates virtual interfaces between nodes (i.e. G5K)
- Bash configuration file computed remotely by the orchestrator and transferred to the experiment nodes
- Network IP tunnels (iptunnel) build the virtualized interfaces
- One physical interface (eth0), multiple virtual interfaces (tap0, ...)
#!/bin/bash
sysctl -w net.ipv4.ip_forward=1
modprobe ipip
iptunnel add tap0 mode ipip local 172.16.49.50 remote 172.16.49.5
ifconfig tap0 10.0.0.2 netmask 255.255.255.255 up
route add 10.0.0.1 tap0
iptunnel add tap1 mode ipip local 172.16.49.50 remote 172.16.49.51
ifconfig tap1 10.0.0.3 netmask 255.255.255.255 up
route add 10.0.0.4 tap1
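A quick sanity check (our addition, not in the slides) confirms that the tunnels came up and that the neighbors' virtual addresses are reachable; the commands below assume the configuration above was just applied:

  ip tunnel show        # tap0 and tap1 should be listed as ipip tunnels
  ifconfig tap0         # virtual interface configured with 10.0.0.2
  ping -c 3 10.0.0.1    # reach the neighbor's virtual address over tap0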
[Figure: controller plus three nodes. Each node's physical eth0 interface (172.16.49.50, 172.16.49.5, 172.16.49.51) carries virtual tap0/tap1 interfaces with addresses 10.0.0.1-10.0.0.4.]
Lurch
Resource management
- Remotely assigns network resources to nodes, preserving physical bandwidth constraints
- Bash configuration file computed remotely by the orchestrator and transferred to the experiment nodes
- Traffic Control (tc), the Linux tool to limit bandwidth, add delay, packet loss, etc.
#!/bin/bash
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 10.0mbit ceil 10.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.5 flowid 1:1
tc class add dev eth0 parent 1: classid 1:2 htb rate 50.0mbit ceil 50.0mbit
tc filter add dev eth0 parent 1: prio 1 protocol ip u32 match ip dst 172.16.49.51 flowid 1:2
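To verify the shaping (our addition, assuming the tc configuration above and an iperf server already running on 172.16.49.5), one can inspect the HTB classes and measure the achieved rate:

  tc -s qdisc show dev eth0            # HTB root qdisc with handle 1:
  tc -s class show dev eth0            # per-class byte and packet counters
  tc filter show dev eth0 parent 1:    # u32 filters matching the two destinations
  iperf -c 172.16.49.5 -t 10           # throughput should cap near 10 Mbps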
[Figure: controller and nodes. The 1 Gbps physical links are shaped into 10 Mbps and 50 Mbps virtual links.]
Lurch
Name-based control plane
- Remotely controls the name-based forwarding tables
- Bash configuration file computed remotely by the orchestrator and transferred to the experiment nodes
- CCNx's FIB control command: ccndc
#!/bin/bash
ccndc add ccnx:/music UDP 10.0.0.1
ccndc add ccnx:/video UDP 10.0.0.4
FIB
Name prefix    Face
ccnx:/music    0
ccnx:/video    1
[Figure: requests for ccnx:/music leave on face 0 (towards 10.0.0.1) and requests for ccnx:/video on face 1 (towards 10.0.0.4), following the FIB installed by the controller.]
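The resulting FIB can also be inspected remotely; a minimal sketch (our addition, assuming ccnd is already running on the node):

  ccndstatus | grep ccnx:/              # list registered prefixes and their faces
  ccndc add ccnx:/music udp 10.0.0.1    # re-add a prefix if it is missing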
Lurch
Application Workload
- Remotely controls the experiment workload
- File download application started according to the experiment's needs
- Arrival process: Poisson, CBR
- File popularity: Zipf, Weibull, etc.
- Two ways (see the sketch below): centralized workload generation at the controller, or workload generation delegated to the clients for better performance
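The sketch below illustrates what a delegated, client-side generator could look like: Poisson arrivals via exponential inter-arrival times, and a Zipf-ranked catalog sampled by inverse CDF. It is a hypothetical example; the parameter names, the catalog prefix ccnx:/video and the use of ccngetfile as the download application are our assumptions, not Lurch's actual implementation.

  #!/bin/bash
  # Hypothetical client-side workload generator (assumed parameters)
  LAMBDA=2.0     # mean request rate [requests/s]
  CATALOG=100    # catalog size: ccnx:/video/1 .. ccnx:/video/100
  ALPHA=1.2      # Zipf exponent

  while true; do
    # exponential inter-arrival time => Poisson arrival process
    PAUSE=$(awk -v l="$LAMBDA" -v s="$RANDOM" 'BEGIN { srand(s); print -log(rand())/l }')
    sleep "$PAUSE"
    # Zipf-distributed content rank via inverse-CDF sampling
    RANK=$(awk -v n="$CATALOG" -v a="$ALPHA" -v s="$RANDOM" 'BEGIN {
      srand(s); norm = 0
      for (i = 1; i <= n; i++) norm += 1 / i^a    # normalization constant
      u = rand() * norm; c = 0
      for (i = 1; i <= n; i++) { c += 1 / i^a; if (c >= u) { print i; exit } }
    }')
    # start the file download in the background
    ccngetfile "ccnx:/video/$RANK" /dev/null &
  done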
Lurch
Measurements
- Remotely controls the experiment statistics: bash start/stop commands sent remotely
- CCNx's statistics (e.g. caching, forwarding) through logs
- top / vmstat: monitoring active processes' CPU usage (e.g. ccnd)
- ifstat: monitoring link rate
- At the end of the experiment, statistics are collected and transferred to the user (a per-node sketch follows)
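A minimal per-node measurement wrapper could look as follows; this is our sketch, with assumed file names, a placeholder duration and a controller:/results destination that are not from the slides:

  #!/bin/bash
  # Hypothetical per-node monitoring script
  ifstat -i eth0 1 > /tmp/linkrate.log &    # sample link rate every second
  IFSTAT_PID=$!
  top -b -d 1 > /tmp/cpu.log &              # batch-mode CPU samples (ccnd, etc.)
  TOP_PID=$!
  sleep 600                                 # placeholder experiment duration
  kill "$IFSTAT_PID" "$TOP_PID"             # stop the monitors
  # ship the logs back to the controller for the user
  scp /tmp/linkrate.log /tmp/cpu.log controller:/results/"$(hostname)"/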
EXPERIMENTS
Experiments
20 different, simultaneous content requests (flows)
1 name prefix in all the FIBs
[Figure: test topology with the single prefix ccnx:/ in every FIB; clients feed intermediate nodes 4 and 5, which connect to node 6; link capacities are 5M, 5M, 10M, 10M, 15M and 20M.]
Link i -> j    Measured / optimal rate [Mbps]
0 -> 4         4.7 / 5
1 -> 4         9.2 / 10
2 -> 5         2.4 / 2.5
3 -> 5         2.4 / 2.5
4 -> 6         13.9 / 15
5 -> 6         4.8 / 5
Experiments
- Large topologies: up to 100 physical nodes, more than 200 links
- Realistic scenarios: Mobile Backhaul
CONCLUSIONS AND FUTURE WORKS
Conclusions and future works
With Lurch, we tested multiple ICN mechanisms on a large, real test-bed: forwarding, caching strategies, and congestion control.

Ongoing: the project started within the Orange - Bell Labs collaboration and now continues under the SystemX "Architecture des Réseaux" project; an open source release is planned.

Future works:
- Extend single-site experiments to grid-wide experiments
- Exploit the power of the servers offered by the grid by using two or more virtual machines per server
- Adapt the tool to run different ICN prototypes (e.g. NDNx)