Deliverable D4.2 - Experiment Definition and Set Up



Transcript of Deliverable D4.2 - Experiment Definition and Set Up

Page 1: Deliverable D4.2 - Experiment Definition and Set Up

Version 1.0 Page 1 of 66 © Copyright 2015, the Members of the SmartenIT Consortium

Socially-aware Management of New Overlay Application Traffic with

Energy Efficiency in the Internet

European Seventh Framework Project FP7-2012-ICT- 317846-STREP

Deliverable D4.2 Experiments Definition and Set-up

The SmartenIT Consortium: University of Zürich, UZH, Switzerland; Athens University of Economics and Business - Research Center, AUEB-RC, Greece; Julius-Maximilians-Universität Würzburg, UniWue, Germany; Technische Universität Darmstadt, TUD, Germany; Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie, AGH, Poland; Intracom S.A. Telecom Solutions, ICOM, Greece; Alcatel-Lucent Bell Labs, ALBLF, France; Instytut Chemii Bioorganicznej PAN, PSNC, Poland; Interoute S.p.A., IRT, Italy; Telekom Deutschland GmbH, TDG, Germany

© Copyright 2015, the Members of the SmartenIT Consortium. For more information on this document or the SmartenIT project, please contact: Prof. Dr. Burkhard Stiller, Universität Zürich, CSG@IFI, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland. Phone: +41 44 635 4331, Fax: +41 44 635 6809, E-mail: [email protected]


D4.2 - Experiments Definition and Set-up Seventh Framework STREP No. 317846 Commercial in Confidence


Document Control

Title: Experiments Definition and Set-up

Type: Internal

Editor(s): Roman Łapacz

E-mail: [email protected]

Author(s): Jeremias Blendin, Valentin Burger, Paolo Cruschelli, David Hausheer, Fabian Kaup, Roman Łapacz, Łukasz Łopatowski, George Petropoulos, Grzegorz Rzym, Michael Seufert, Rafał Stankiewicz, Matthias Wichtlhuber, Piotr Wydrych, Zbigniew Duliński, Krzysztof Wajda

Doc ID: D4.2-v1.0

AMENDMENT HISTORY

Version Date Author Description/Comments

V0.5 April 24, 2014 Roman Łapacz First version, providing ToC

V0.6 June 6, 2014 David Hausheer Include experiment mapping to test-bed

V0.9 Jul 26, 2014 Roman Łapacz Initial information on experiments and show cases (copied from drafts of overall assessment cards)

V0.9 Jan 25, 2015 Michael Seufert RB-HORST experiments

V0.9.1 Mar 23, 2015 George Petropoulos RB-HORST experiments

V0.9.1-0.1 to V0.9.1-0.40 Mar 27, 2015 to Apr 24, 2015 Michael Seufert, George Petropoulos, Fabian Kaup, Łukasz Łopatowski, Grzegorz Rzym, Rafał Stankiewicz, Jeremias Blendin, Matthias Wichtlhuber, Roman Łapacz Input and updates from the partners.

V0.9.1-0.41 to V0.9.1-0.49 Apr 28, 2015 Michael Seufert, George Petropoulos, Fabian Kaup, Łukasz Łopatowski, Grzegorz Rzym, Rafał Stankiewicz, Jeremias Blendin, Matthias Wichtlhuber, Roman Łapacz Updates after the D4.2 internal review

V1.0 Apr 30, 2015 Roman Łapacz Final version submitted to the EC

Legal Notices The information in this document is subject to change without notice. The Members of the SmartenIT Consortium make no warranty of any kind with regard to this document, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The Members of the SmartenIT Consortium shall not be held liable for errors contained herein or direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material.


Table of Contents

1 Executive Summary
2 Introduction
   2.1 Purpose of the Document D4.2
   2.2 Document Outline
3 Experiments
   3.1 OFS Experiments
      3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case
      3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case
   3.2 EFS Experiments
      3.2.1 Evaluation of caching functionality in RB-HORST
      3.2.2 Large-scale RB-HORST++ Study
      3.2.3 Evaluation of data offloading functionality in RB-HORST
4 Showcases
   4.1 Multi-domain network traffic optimization in DTM
      4.1.1 Scenario topology
      4.1.2 Scenario assumptions
      4.1.3 Reference scenario
      4.1.4 Showcase scenario
   4.2 Locality, social awareness and WiFi offloading in RB-HORST
      4.2.1 Scenario topology
      4.2.2 Scenario assumptions
      4.2.3 Reference scenario
      4.2.4 Showcase scenario
   4.3 Mobile Internet Access Offloading in EEF/RB-HORST
      4.3.1 Scenario topology
      4.3.2 Scenario assumptions
      4.3.3 Reference scenario
      4.3.4 Showcase scenario
5 Summary
6 SMART Objectives
7 References
8 Abbreviations
9 Acknowledgements



1 Executive Summary

This deliverable D4.2 – “Experiments Definition and Set-up” presents a detailed description of experiments representing SmartenIT scenarios, both Operator Focused (OFS) and End-user Focused (EFS), as defined in WP1 and matching the use-cases proposed in WP2. WP3 selected and implemented two network traffic management mechanisms for SmartenIT, namely DTM and RB-HORST, hence the experiments described in this document are defined in order to evaluate these two solutions over the SmartenIT test-beds.

Each experiment definition contains the following parts:

Goal – the overall concept and purpose of an experiment,

Deployment infrastructure – network topology and configuration,

Parameters, measurements and metrics – details needed to evaluate the quality of SmartenIT mechanisms and the implementation,

Test procedures – actions to execute the implemented mechanisms.

Such a format has been formalized to hand over complete instructions on how to run the SmartenIT experiments and which metrics and parameters need to be collected from the prototype evaluation in order to properly assess the SmartenIT solutions.

The authors decided to focus on a small set of experiments covering the challenges addressed by the SmartenIT project. The experiments must clearly and accurately evaluate the project solutions and the quality of the pilot implementation.

This deliverable also reports on preliminary showcases. The project team demonstrated the running pilot implementation and its major functionalities during the second-year technical review with the EC. The showcases can be considered preliminary experiments aimed at showing the basic behaviour of the SmartenIT network traffic mechanisms in a test-bed environment. The experience collected while preparing the showcases was an important input to the work on the final, advanced experiments documented in this deliverable.


2 Introduction

The goal of this document is to provide definitions of the SmartenIT experiments for the evaluation of prototypes. The details presented instruct how to evaluate the network traffic management mechanisms of the SmartenIT scenarios, both the Operator Focused Scenario (OFS) and the End-user Focused Scenario (EFS), as proposed in WP2 of the SmartenIT project. Apart from test procedures, experimenters are equipped with sets of parameters, metrics, test-bed configurations and other information needed to properly execute the experiments. The key requirement of each experiment is the use of the prototype implementation created in WP3. This allows evaluating the algorithms of the network traffic management mechanisms as well as the quality of the software implementation.

At the end of year 2, the project team prepared the showcases presenting the behaviour of two network traffic mechanisms: DTM and RB-HORST. These are also reported in this document, since the experience gained during the preparation of the showcases was an important input to the further work on the experiment definitions.

2.1 Purpose of the Document D4.2

Deliverable D4.2 is a guide for those who are going to execute the SmartenIT experiments. The software pilot implementation deployed in the test-bed infrastructures must demonstrate whether the mechanisms developed by the project meet the requirements and thus address the challenges defined in the project. Execution of the experiments defined in this document will provide the set of data required to conduct the evaluation and assessment activities. Moreover, the descriptions of the showcases, which are simplified versions of the experiment scenarios, help the reader to easily understand how the pilot implementation and the selected SmartenIT network traffic management mechanisms work.

2.2 Document Outline

This document is organized as follows:

Section 3 provides detailed information about the experiments planned to be executed. The results of the experiments will be an input to the assessment process and the final project conclusions. This section gives experimenters all information needed to properly adjust a test-bed environment, configure the SmartenIT prototype, run the test procedures and collect the results. As the project is focused on two scenarios, OFS and EFS, each of them is represented by a set of experiments: the OFS experiments evaluate the DTM mechanism, while the EFS experiments focus on RB-HORST.

Section 4 describes the showcases of the SmartenIT pilot implementation which were presented in the Year 2 Technical Review with the EC. All details of the three showcases (one for DTM, one for RB-HORST and one for RB-HORST with the Energy Efficiency Measurement Framework) are reported.

Section 5 summarizes the deliverable and draws the major conclusions on the defined experiments and next steps of the evaluation process.

Section 6 reports on how SMART objectives, as described in SmartenIT’s Description of Work (DoW) [1], have been addressed by the work performed in WP4.


3 Experiments

In this section, the experiment definitions of the two types, OFS and EFS, are described in detail. The experiments are defined in such a way as to validate the pilot implementation developed in the SmartenIT project.

3.1 OFS Experiments

The OFS experiments adopt the "bulk data transfer for cloud operators" use-case. In all OFS experiments we assume that DTM is used to manage the traffic by the ISP which hosts a cloud or data center receiving traffic from one or more clouds/data centers. This ISP has two inter-domain links and strives to distribute the traffic among those links in such a way that the total traffic cost is minimized.

A set of experiments to evaluate functionality and performance of DTM is planned. The experiments may be classified according to the following two main settings:

the number of cloud/DCs sending or receiving the manageable traffic

the type of tariff used for calculation of cost on inter-domain links.

In the former dimension we can distinguish two groups of experiments:

Single-to-Single (S-to-S): traffic is generated by a single source (a single cloud/DC) and sent to a single receiver (a single cloud/DC). In this case there is a single ISP hosting the DC that receives the traffic. The ISP's domain is multi-homed and manages two inter-domain links. The manageable traffic is generated by a single DC located in a remote ISP's domain.

Multiple-to-Multiple (M-to-M): there are two ISPs' autonomous systems, each hosting a DC that receives the manageable traffic, i.e., there are two ASes that perform traffic management using DTM. Both ASes are multi-homed (each has two inter-domain links). Two DCs located in two distinct remote ISPs' domains serve as the sources of the manageable traffic.

The second main experiment classification criterion is the tariff used for billing the traffic on the inter-domain links. For each of the above two groups of experiments we plan to execute the performance evaluation separately for the volume-based tariff and the 95th percentile tariff.

Another dimension of experiment classification is the distinction between functionality tests and performance evaluation tests. The former are short experiments to evaluate the mechanism itself and to check whether the whole test-bed environment operates correctly. The latter are used to evaluate the performance of DTM in a particular test-bed configuration and under a few configuration settings to prove the benefits of using the mechanism. Both qualitative and quantitative metrics and KPIs will be carefully evaluated in this case.

The current status of the specification and implementation of DTM++ does not yet allow experiments to be defined. If possible, an experiment with the S-to-S topology and the 95th percentile tariff is planned and will be presented in D4.3.

Common tools and settings

In the OFS test-bed experiments we consider one or two domains (autonomous systems) that perform traffic cost optimization and traffic management using DTM. Since DTM operations are possible only if an AS is multi-homed, it was assumed that each domain performing DTM has two inter-domain links. Inbound traffic management is performed, i.e., the DCs receiving the traffic from remote DCs are hosted by ASes that use DTM. For simplicity, we assume that the domains hosting the DCs that are sources of the traffic are single-homed. This assumption on the test-bed construction does not impact the goals and scope of the experiments.

There are two types of traffic: background (non-manageable) traffic and manageable inter-DC traffic. The former is sent over the inter-domain links and dominates, but DTM does not influence it; background traffic is sent over default BGP paths. The manageable traffic is sent between remote data centers and consists of multiple flows of various sizes [2][3].

Physical test-bed configurations

Three physical test-bed environments are deployed: at the TUD, PSNC and IRT premises. All test-bed instances are compatible with the basic test-bed described in D4.1 and use some necessary extensions, as detailed in the following [4].

The basic test-bed instance installed at TUD uses three physical servers. Each machine is equipped with one Intel Xeon E5-1410 CPU @ 2.8 GHz (4 cores, 8 threads) and 16 GB of RAM. Two of them are provided with a 2 TB Toshiba SATA3 enterprise HDD, the last one with a 1 TB Seagate enterprise hard drive. The servers are interconnected with four physical 1 Gbps NICs as described in D4.1.

The mapping of the logical topology to physical machines in the TUD test-bed is presented in Figure 1, using the example of the most complex logical topology, i.e., the one for the M-to-M group of experiments. Mappings for S-to-S, S-to-M and M-to-S can be obtained by simply removing the unused devices in the logical topology and the respective virtual machines in the physical test-bed.

The test-bed installed at the PSNC premises uses only 2 powerful servers instead of the 3 physical machines proposed in the reference basic test-bed design. Server 1 is equipped with 2 CPUs @ 2.4 GHz with 6 cores each as well as 48 GB of RAM. Server 2 comprises the same number of CPUs and cores with a total of 64 GB of RAM. The servers are interconnected with two physical 1 Gb/s Ethernet links.

In total, 28 virtual machines are deployed in the test-bed (14 VMs on each server), which allows conducting the DTM M-to-M experiments. The VMs hosting the SmartenIT prototype software as well as the traffic generators and receivers (for both inter-DC and background traffic emulation) run the Ubuntu 14.04 64-bit operating system.

The IRT test-bed follows the same strategy as the PSNC test-bed, with a two-server deployment. The first server is equipped with four CPUs (X3210 @ 2.13 GHz), 8 GB of RAM and 1 TB of HDD disk space, while the second server is equipped with 32 CPUs (E5-2450 @ 2.10 GHz), 64 GB of RAM and 1 TB of HDD disk space. All SmartenIT-related VMs residing on the two servers have been created starting from Ubuntu 14.04 (x64). The servers are connected over two dedicated 1 Gbps links, while management is provided over a separate link.

Apart from being compatible with the basic test-bed design, each test-bed implements a set of required extensions described in D4.1.


Figure 1 Mapping of M-to-M experiments logical topology to physical test-bed configuration (TUD test-bed instance).

In order to emulate inter-DC traffic, a custom traffic generator was deployed on selected VMs. More detailed information about the generator application is provided later in this section.

In order to enable sophisticated networking configuration inside the test-bed, the software router test-bed extension is implemented. In total, 10 software router VMs running Vyatta VC6.6R1 are deployed for the full test-bed setup (the M-to-M case). The single-to-single scenario is obtained by powering off the VMs belonging to AS4 and AS5.

Moreover, the basic test-bed extension for an OpenFlow-enabled switch has been incorporated. For this purpose, Open vSwitch software has been set up on two dedicated VMs.

The mapping of the logical topology to physical machines in the PSNC test-bed is presented in Figure 2.

Traffic generator

During the experiments, an Internet-like traffic generator is used which is able to feed the network with distinct unidirectional UDP flows, each handled by a separate Java thread. The configuration of the generator comprises:

a definition of flow inter-arrival time distribution, and

an unlimited number of flow templates.



Figure 2 Mapping of M-to-M experiments logical topology to physical test-bed configuration (PSNC test-bed instance).

Each time a flow is started, a template is selected and applied. Each template is configured by:

its selection probability,

an unlimited number of flow destinations (i.e., IP address and port range),

a definition of flow length (i.e., time from start to last packet sending) distribution,

a definition of packet inter-arrival time distribution,

a definition of UDP payload size (in bytes) distribution,

a definition of UDP payload type (e.g., random bytes or zeros).

The advantages of the used generator over other available tools are stability, robustness, efficiency, and portability. It uses the stable Well19937c pseudo-random number generator. It has been tested for 24/7 operation stability with and without a receiver, and it was verified that each instance is able to generate dozens of Mbps of traffic. A sample configuration file is presented below:



<?xml version="1.0" encoding="UTF-8"?>
<!-- $Id: background-1.xml 246 2014-12-18 15:41:55Z wydrych $ -->
<flows>
  <flow-inter-arrival-time distribution="exponential">
    <param name="mean" value="219" />
  </flow-inter-arrival-time>
  <flow-template weight="0.6">
    <destinations>
      <destination weight="1" address="10.10.1.3" min-port="10000" max-port="39999" />
    </destinations>
    <flow-length distribution="pareto">
      <param name="scale" value="3400" />
      <param name="shape" value="1.5" />
    </flow-length>
    <packet-inter-arrival-time distribution="exponential">
      <param name="mean" value="35" />
    </packet-inter-arrival-time>
    <payload-size distribution="normal">
      <param name="mu" value="1358" />
      <param name="sigma" value="25" />
    </payload-size>
    <payload type="zero" />
  </flow-template>
  <flow-template weight="0.4">
    <destinations>
      <destination weight="1" address="10.10.1.3" min-port="10000" max-port="39999" />
    </destinations>
    <flow-length distribution="pareto">
      <param name="scale" value="3400" />
      <param name="shape" value="1.5" />
    </flow-length>
    <packet-inter-arrival-time distribution="exponential">
      <param name="mean" value="35" />
    </packet-inter-arrival-time>
    <payload-size distribution="normal">
      <param name="mu" value="158" />
      <param name="sigma" value="20" />
    </payload-size>
    <payload type="zero" />
  </flow-template>
</flows>
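To illustrate how such a configuration drives flow creation, the following minimal Python sketch mimics the generator's sampling logic (exponential flow inter-arrival times, weighted template selection, normally distributed payload sizes). The field names mirror the XML above, but the code is a hypothetical illustration, not the actual Java implementation:

```python
import random

# Illustrative template set mirroring the sample XML configuration above.
templates = [
    {"weight": 0.6, "payload_mu": 1358, "payload_sigma": 25},
    {"weight": 0.4, "payload_mu": 158, "payload_sigma": 20},
]

def next_flow_start(mean_ms, rng=random):
    """Draw an exponential flow inter-arrival time (milliseconds)."""
    return rng.expovariate(1.0 / mean_ms)

def pick_template(templates, rng=random):
    """Select a flow template with probability proportional to its weight."""
    return rng.choices(templates, weights=[t["weight"] for t in templates], k=1)[0]

def draw_payload_size(template, rng=random):
    """Draw a normally distributed UDP payload size, clamped to be positive."""
    return max(1, round(rng.gauss(template["payload_mu"], template["payload_sigma"])))
```

In the real generator each started flow would then run in its own thread, emitting packets with exponential packet inter-arrival times until the Pareto-distributed flow length elapses.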

Billing period and cost functions

Four different lengths of the billing period will be used in the experiments: 30 minutes, a few hours, 1 day, and a few days (up to one week). For billing periods shorter than 1 day the traffic envelope will be flat; in experiments with a billing period of 1 day or longer, a daily traffic envelope will be introduced. Short billing periods (30 minutes and a few hours) will be used mainly for functionality tests and possibly for basic performance tests. The main experiments for DTM performance evaluation will be conducted with a billing period of 1 day or longer.

Another important experiment setting is the selection of the cost functions used on the links. Generally, piecewise linear functions will be used, but the particular settings will be carefully selected for each experiment.
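For concreteness, a piecewise linear cost function of the kind mentioned above can be sketched as follows; the segment boundaries and unit prices are purely illustrative assumptions, not values used in the experiments:

```python
import math

def piecewise_linear_cost(volume, segments):
    """Cost of `volume` under a piecewise linear tariff.

    segments: list of (upper_bound, unit_price) pairs sorted by upper_bound;
    each unit of volume falling inside a segment is charged at that
    segment's unit price.
    """
    cost, lower = 0.0, 0.0
    for upper, price in segments:
        portion = min(volume, upper) - lower
        if portion <= 0:
            break
        cost += portion * price
        lower = upper
    return cost

# Illustrative tariff: 2.0 per GB up to 100 GB, then 1.0 per GB beyond.
f1 = lambda v: piecewise_linear_cost(v, [(100.0, 2.0), (math.inf, 1.0)])
```

With this tariff, 150 GB costs 100 * 2.0 + 50 * 1.0 = 250.0.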

Measurement points

For the purposes of the performance evaluation of DTM, adequate traffic measurements are needed. The measurement points are presented in Figure 3. For each autonomous system that receives the traffic and uses DTM for cost minimization, the traffic measurements must be done on each inter-domain link and each tunnel. On the input interfaces of the border gateway routers (BG) we measure the total (background + manageable) traffic passing a given link. On the DA router (Data Center Attachment point [5]) we measure the manageable traffic incoming via each tunnel. Therefore, assuming that an AS has two inter-domain links and two tunnels, in total four measurement points must be defined; this is the case for the S-to-S experiments. For more complex scenarios, where more than one domain generates the traffic and there are more tunnels, the number of measurement points increases. A detailed list of measurement points for each experiment is provided in the respective subsections below. The traffic measurements are realized by polling interface statistics via SNMP, using a dedicated application independent from the SmartenIT traffic management software. The frequency of measurements may vary between experiments. After the results are collected, they are correlated and analysed using both dedicated applications and generic data mining tools.
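The SNMP polling itself is done by the dedicated application; a sketch of the per-interval arithmetic such a poller must perform (deriving the transferred volume from two successive octet-counter readings, allowing for counter wrap) could look as follows. The function names are illustrative, and 64-bit ifHCInOctets-style counters are assumed:

```python
def counter_delta(prev, curr, width=64):
    """Octets transferred between two counter readings, modulo counter wrap."""
    return (curr - prev) % (2 ** width)

def rate_bps(prev_octets, curr_octets, interval_s, width=64):
    """Average bit rate over one polling interval."""
    return 8 * counter_delta(prev_octets, curr_octets, width) / interval_s
```

For example, a reading of 1000 octets one second after a reading of 0 corresponds to 8000 bit/s, and a counter that wraps around 2^64 between polls still yields the correct delta.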

Figure 3 Vantage points for traffic measurements in DTM test-bed.

Performance metrics and KPIs

This section extends the KPI definitions provided in deliverable D4.1. We define performance metrics and KPIs, as well as the notation for measured values, separately for experiments with the volume-based tariff and the 95th-percentile-based tariff. The common, general notation encompasses:

length of the billing period: T

total traffic on an inter-domain link (sum of background and manageable traffic): X

manageable traffic (sent via tunnels): Z



For simplicity of description we consider a single multi-homed autonomous system that manages the traffic using DTM (as presented in Figure 3). The description of the performance metrics in this section and the notation used adhere to a single AS having two inter-domain links and two tunnels for the manageable traffic. The performance metrics and KPIs introduced here are general; variations dependent on a specific experiment setting will be described in the respective sections dedicated to the experiment definitions.

We define two types of metrics:

point metrics, which represent the actual benefits of DTM and are calculated for an ended billing period,

live metrics, which are observed during the billing period and allow estimating the performance of DTM in real time.

Two important metrics are the expected absolute value of the inter-domain traffic cost on each link and the summary cost. These values are calculated as follows:

D_m^R = f_m(R_m)

D^R = \sum_m D_m^R

where R_m is the m-th component of the reference vector \vec{R}.

Some KPIs defined below will refer to this expected absolute value of cost incurred by transferred traffic.

Total volume based tariff

The basic metric is the total amount of traffic transferred via the inter-domain links during the billing period. The total traffic accumulated on link m is denoted by X_m^V (where m \in \{1, 2\} for an AS having two inter-domain links). Cost functions are defined for each inter-domain link and are denoted f_m(\cdot). The actual cost of the traffic sent via link m is calculated as:

D_m = f_m(X_m^V)

The total cost the ISP pays for inter-domain traffic in a billing period is D = \sum_m D_m.

The total amount of manageable traffic received via inter-domain link m is denoted by Z_m^V. If DTM were not used, all the manageable traffic from the DC serving as the source of the traffic to the DC receiving the traffic would pass the default BGP path. This observation leads to the definition of the first KPI:

\xi^{(1)} = \frac{f_1(X_1^V) + f_2(X_2^V)}{f_1(X_1^V + Z_2^V) + f_2(X_2^V - Z_2^V)}

\xi^{(2)} = \frac{f_1(X_1^V) + f_2(X_2^V)}{f_1(X_1^V - Z_1^V) + f_2(X_2^V + Z_1^V)}

KPI \xi^{(1)} denotes the relative monetary gain of using DTM. It is the ratio of the total cost with traffic management to the total cost without traffic management, i.e., the case when the default BGP path is used (all manageable traffic passes inter-domain link 1). In turn, \xi^{(2)} denotes the monetary gain of balancing the traffic with DTM instead of using link 2 as the default BGP path. If both values are lower than 1, the ISP benefits from using DTM regardless of which link would be used as the default BGP path. If, for instance, \xi^{(1)} is greater than or equal to 1, it is better for the ISP to transfer all manageable traffic via link 1, i.e., to use this link as the default BGP path.

In turn, the absolute cost benefit (or loss) from using DTM is expressed as:

\Delta D^{(1)} = f_1(X_1^V + Z_2^V) + f_2(X_2^V - Z_2^V) - f_1(X_1^V) - f_2(X_2^V)

or

\Delta D^{(2)} = f_1(X_1^V - Z_1^V) + f_2(X_2^V + Z_1^V) - f_1(X_1^V) - f_2(X_2^V)

if link 1 or link 2 is considered the default path, respectively.
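A small Python sketch computing \xi^{(1)}, \xi^{(2)}, \Delta D^{(1)} and \Delta D^{(2)} directly from these definitions; the cost functions and traffic volumes in the example are illustrative, not experiment values:

```python
def kpis(f1, f2, x1, x2, z1, z2):
    """Volume-tariff KPIs for one billing period.

    x1, x2: total volumes X_m^V on links 1 and 2 (with DTM active);
    z1, z2: manageable volumes Z_m^V received via links 1 and 2;
    f1, f2: per-link cost functions f_m."""
    d_with = f1(x1) + f2(x2)            # actual cost with DTM
    d_def1 = f1(x1 + z2) + f2(x2 - z2)  # all manageable traffic on link 1
    d_def2 = f1(x1 - z1) + f2(x2 + z1)  # all manageable traffic on link 2
    return {
        "xi1": d_with / d_def1,         # ξ(1)
        "xi2": d_with / d_def2,         # ξ(2)
        "dD1": d_def1 - d_with,         # ΔD(1): benefit vs. link-1 default
        "dD2": d_def2 - d_with,         # ΔD(2): benefit vs. link-2 default
    }

# Illustrative example: link 2 twice as expensive per unit as link 1.
result = kpis(lambda v: v, lambda v: 2 * v, 10.0, 10.0, 4.0, 4.0)
```

In this example moving manageable traffic to the cheaper link 1 would have been the better default (\xi^{(1)} > 1, \Delta D^{(1)} < 0), while DTM clearly beats a link-2 default.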

Another KPI represents the relation of the achieved cost to the cost expected if the achieved distribution of traffic among the links were exactly equal to the reference vector:

\rho = \frac{D}{D^R}

Live performance metrics are built on a periodic traffic measurements during the billing

period. Let assume that the billing period of length 𝑇 is divided into 𝑁 measurement

periods of length Δ𝑡, where 𝑚𝑜𝑑𝑇

Δ𝑡= 0. Let's denote by 𝜏𝑖 the time that elapsed from the

beginning of the current billing period to the moment of collection of i-th measurement,

where 𝑖 ∈ [1, 𝑁]. In other words, 𝜏𝑖 = 𝑖 ∗ Δ𝑡.

At each point of time 𝜏𝑖 the accumulated traffic volume denoted as 𝑥𝑚,𝑖𝑉 is measured. Given

traffic volume at 𝜏𝑖 and the length of the billing period 𝑇 the total volume on link 𝑚 expected by the end of the billing period is estimated by linear approximation as:

�̂�𝑚,𝑖 = 𝑥𝑚,𝑖𝑉

𝑇

𝜏𝑖

Then the cost of the traffic on link 𝑚 expected by the end of the billing period estimated at time 𝜏𝑖is calculated as

�̂�𝑚,𝑖 = 𝑓𝑚(�̂�𝑚,𝑖)

where 𝑓𝑚(∙) is a cost function on that link.

The idea of a linear approximation is presented at Figure 4. The same procedure is repeated for each inter-domain link.

Finally, for a multi-homed domain with m inter-domain links, the estimate at time τ_i of the total cost the ISP expects to pay is calculated as D̂_i = Σ_m D̂_{m,i}. This method was used for the presentation of cost estimation in the showcase during the second year project review.
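The linear extrapolation above can be sketched in a few lines. The billing period, measurement interval, cost function and the example volume are all hypothetical:

```python
# Live cost estimate for a volume-based tariff via linear extrapolation:
# X_hat = x * T / tau_i, D_hat = f(X_hat). All parameters are hypothetical.

T = 7 * 24 * 3600      # billing period length [s], one week
dt = 30                # measurement period [s]

def f(v):
    return 0.10 * v    # hypothetical linear cost function [currency per MB]

def estimated_cost(x_acc, i):
    """Cost expected at the end of the billing period, estimated from the
    accumulated volume x_acc [MB] measured at tau_i = i * dt."""
    tau_i = i * dt
    x_hat = x_acc * T / tau_i   # linear approximation of the final volume
    return f(x_hat)

# Example: 120 MB accumulated after exactly half of the billing period
i = (T // dt) // 2
print(estimated_cost(120.0, i))   # extrapolates to 240 MB -> cost 24.0
```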


Figure 4 Estimation of costs on an inter-domain link for volume based tariff.

95th percentile based tariff

In the case of the 95th percentile rule tariff, the whole billing period is divided into a number of 5-minute intervals. In each interval the amount of traffic transferred is measured. At the end of the billing period, the smallest sample (in the ordered list of samples) such that 95% of samples are less than or equal to that value is found. This traffic sample is used to calculate the cost of the traffic. Therefore, the actual cost remains unknown until the last sample in the billing period is collected.

Let us define 𝒳_m^A = {X_{m,1}, …, X_{m,i}, …, X_{m,N}} as the set of 5-minute samples, where m denotes the link number and i is the sample's sequential number in the order of collection, i.e., sample 1 is the first sample collected in the billing period and sample i is the sample collected when time i · t_s has elapsed from the beginning of the billing period. The cardinality of set 𝒳_m^A equals N = T / t_s. Define also a set 𝒵_m^A = {Z_{m,1}, …, Z_{m,i}, …, Z_{m,N}} that contains the samples of manageable traffic. Element Z_{m,i} represents the amount of manageable traffic in sample X_{m,i}.

We introduce a sorting function S: [1, N] → [1, N] such that X_{m,S(l)} ≥ X_{m,S(l+1)}. We define a set 𝒳_m^H which contains the highest 5-minute samples collected during the billing period on link m:

𝒳_m^H = {X_{m,j} : X_{m,j} ∈ 𝒳_m^A ∧ j = S(l) ∧ l ∈ [1, K]},

where K = T/t_s − ⌈0.95 · T/t_s⌉ + 1. The set 𝒳_m^H is a subset of 𝒳_m^A. Each element of 𝒳_m^H is higher than any element of the set 𝒳_m^A \ 𝒳_m^H.

The smallest element of set 𝒳_m^H is the sample considered as the billing basis on link m, i.e., it represents the amount of traffic for which the ISP pays according to the 95th percentile rule. Therefore, the value taken for cost calculation is defined as X_m^95 = min 𝒳_m^H. Also, X_m^95 = X_{m,h}, where h is the sequential number of the smallest sample in set 𝒳_m^H. Finally, the cost of inter-domain traffic on link m is calculated as

D_m = f_m(X_m^95)

The total cost the ISP pays for inter-domain traffic in a billing period is D = Σ_m D_m.
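The selection of X_m^95 follows directly from these definitions and can be sketched as below; the toy sample list is synthetic.

```python
# Compute the 95th-percentile billing sample X^95 as the smallest of the
# K highest 5-minute samples, with K = N - ceil(0.95 * N) + 1.
import math

def percentile95_sample(samples):
    N = len(samples)
    K = N - math.ceil(0.95 * N) + 1
    highest = sorted(samples, reverse=True)[:K]   # the set X^H
    return min(highest)                           # billing basis X^95

# Toy check: for 100 samples 1..100, K = 6, so X^95 = 95
print(percentile95_sample(list(range(1, 101))))   # -> 95
```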



Since all 5-minute samples are collected (set 𝒳_m^A) and for each sample we know how much manageable traffic it contains (set 𝒵_m^A), it is possible to predict the cost of the traffic that would be incurred if DTM were not used. However, the procedure for finding this cost is more complex than in the case of the volume based tariff.

Let us assume that there are two inter-domain links and the default BGP path is link 1. As defined above, sets 𝒳_m^A and 𝒵_m^A contain samples for the case with DTM. By 𝒳_1^{A,(1)} and 𝒳_2^{A,(1)} we denote the expected sets of samples collected on link 1 and link 2, respectively, if DTM were not used and the default BGP path was link 1. These sets can be predicted as follows:

𝒳_1^{A,(1)} = {X_{1,1} + Z_{2,1}, …, X_{1,i} + Z_{2,i}, …, X_{1,N} + Z_{2,N}}

𝒳_2^{A,(1)} = {X_{2,1} − Z_{2,1}, …, X_{2,i} − Z_{2,i}, …, X_{2,N} − Z_{2,N}}

The next step is to find the corresponding sets of K highest samples on each link: 𝒳_1^{H,(1)} and 𝒳_2^{H,(1)}. Then the smallest sample in each set is found. The sizes of those samples are used to calculate the cost that an ISP would have to pay without using DTM. Similarly to the approach presented for the total volume based tariff, the following KPIs can be defined:

relative monetary gain of using DTM instead of using link 1 as a default BGP path:

ξ^(1) = (f_1(X_1^95) + f_2(X_2^95)) / (f_1(min 𝒳_1^{H,(1)}) + f_2(min 𝒳_2^{H,(1)}))

absolute cost benefit (or loss) from using DTM:

ΔD^(1) = f_1(min 𝒳_1^{H,(1)}) + f_2(min 𝒳_2^{H,(1)}) − f_1(X_1^95) − f_2(X_2^95)

If we assume that link 2 belongs to the default BGP path, then the expected traffic samples without DTM are defined as follows:

𝒳_1^{A,(2)} = {X_{1,1} − Z_{1,1}, …, X_{1,i} − Z_{1,i}, …, X_{1,N} − Z_{1,N}}

𝒳_2^{A,(2)} = {X_{2,1} + Z_{1,1}, …, X_{2,i} + Z_{1,i}, …, X_{2,N} + Z_{1,N}}

Consequently, we can define KPIs for that case:

relative monetary gain of using DTM instead of using link 2 as a default path:

ξ^(2) = (f_1(X_1^95) + f_2(X_2^95)) / (f_1(min 𝒳_1^{H,(2)}) + f_2(min 𝒳_2^{H,(2)}))

absolute cost benefit (or loss) from using DTM:

ΔD^(2) = f_1(min 𝒳_1^{H,(2)}) + f_2(min 𝒳_2^{H,(2)}) − f_1(X_1^95) − f_2(X_2^95)

The KPI representing the ratio of the cost achieved to the cost expected if the achieved distribution of traffic among links were exactly equal to the reference vector is also valid for the 95th percentile tariff: ρ = D / D_R

Similarly to the volume based tariff case, real-time estimation of the traffic cost will be performed during the billing period; however, the methodology for the 95th percentile tariff is different.


As mentioned at the beginning of this section, to calculate the cost of inter-domain traffic it is necessary to know the size of the smallest sample in the set of K = T/t_s − ⌈0.95 · T/t_s⌉ + 1 highest samples. Therefore, to estimate the expected traffic cost during a billing period we need to collect samples every 5 minutes and update the set of K highest samples. Let us define a temporary set of samples as

𝒳_{m,i}^A = {X_{m,1}, …, X_{m,i}}

where |𝒳_{m,i}^A| = i. 𝒳_{m,i}^A is the set of i samples collected on link m from the beginning of the current billing period until time i · t_s. Thus, the set is updated every 5 minutes. Additionally, the corresponding sets of manageable traffic samples on each link are collected: 𝒵_{m,i}^A = {Z_{m,1}, …, Z_{m,i}}. The first estimation of the expected cost is possible when at least K samples have been collected. Then, every 5 minutes, a set 𝒳_{m,i}^H of highest samples is found. We introduce a set of sorting functions S_i: [1, i] → [1, i] such that X_{m,S_i(l)} ≥ X_{m,S_i(l+1)}. We define a set 𝒳_{m,i}^H which contains the highest 5-minute samples collected on link m from the beginning of the current billing period until time i · t_s:

𝒳_{m,i}^H = {X_{m,j} : X_{m,j} ∈ 𝒳_{m,i}^A ∧ j = S_i(l) ∧ l ∈ [1, K]}

Then the smallest sample in set 𝒳_{m,i}^H is taken as the current (i-th) estimate of the traffic for which the operator will pay:

X̂_{m,i} = min 𝒳_{m,i}^H

Then the cost of the traffic on link m expected by the end of the billing period, estimated at time i · t_s, is calculated as

D̂_{m,i} = f_m(X̂_{m,i})

Note that after all N samples are collected (the end of the billing period) the estimated cost equals the actual one: D̂_{m,N} = D_m, since 𝒳_{m,N}^H = 𝒳_m^H.

Finally, for a multi-homed domain with m inter-domain links, the estimate at time i · t_s of the total cost the ISP expects to pay is calculated as D̂_i = Σ_m D̂_{m,i}.
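The running update of the K highest samples lends itself to a small min-heap sketch. The sample values and the value of K below are synthetic; in the experiment K follows from T and t_s.

```python
# Live 95th-percentile estimate: after each new 5-minute sample, keep the
# K highest samples seen so far; their minimum is the current estimate.
import heapq

def live_estimates(samples, K):
    """Yield X_hat_{m,i} = min of the K highest samples collected so far,
    starting once at least K samples are available."""
    heap = []                     # min-heap holding the K highest samples
    for x in samples:
        if len(heap) < K:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)   # drop current minimum, insert x
        if len(heap) == K:
            yield heap[0]         # smallest of the K highest so far

samples = [10, 40, 20, 50, 30, 60, 5, 70]   # synthetic 5-minute samples
print(list(live_estimates(samples, K=3)))   # -> [10, 20, 30, 40, 40, 50]
```

Keeping only a size-K heap makes each 5-minute update O(log K) instead of re-sorting all samples collected so far.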

3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case

The goal of this experiment is to evaluate DTM functionality and performance. The use-case considered is "Bulk data transfer for cloud operators". The logical topology for this experiment is presented in Figure 5. There are two domains hosting DCs: AS1 and AS3. The data center located in AS3 (DC-B) serves as the source of manageable traffic, while DC-A receives the traffic. AS1 performs management of inbound inter-domain traffic to reduce the cost of inter-domain traffic. Using DTM, it influences the distribution of manageable traffic among its two inter-domain links (L1 and L2) in a cost efficient way. This is achieved by selecting one of the tunnels (tun 1 or tun 2) for flows originating at DC-B.


Figure 5 Logical topology for S-to-S experiment.

Deployment infrastructure

The deployment infrastructure for the S-to-S experiment is presented in Figure 6. The addressing scheme is presented in Table 1. A description of all virtual machines can be found in Table 2.

Table 1 Detailed IP Address table for the production network for the DTM evaluation.

IP Address or Address Range Usage

10.0.1.0/24 Interconnection ISP1-ISP2

10.0.2.0/24 Interconnection ISP1-ISP2

10.1.1.0/30 Interconnection ISP2-ISP3

10.10.1.0/24 ISP1

10.10.2.0/24 ISP1

10.10.3.0/24 ISP1

10.1.2.0/24 ISP2

10.1.5.0/24 ISP2

10.1.6.0/24 ISP2

10.1.3.0/24 ISP3

10.1.4.0/24 ISP3



Figure 6 Deployment of virtual machines on three physical servers in test-bed environment.


Table 2 Hosts and the services running on them for DTM.

Host | Services | Comment
dtm-isp1-rtr-bg1 | ISP router, interconnection to AS2 (link L1), BG-1.1 | Vyatta software router
dtm-isp1-rtr-bg2 | ISP router, interconnection to AS2 (link L2), BG-1.2 | Vyatta software router
dtm-isp1-rtr-da | ISP router allowing connection with DC and S-Box, DA-1 | Vyatta software router
dtm-isp1-vmdc | Data Center receiving inter-domain traffic: DC-A
dtm-isp1-vmsbox | S-Box
dtm-isp1-vmtr1 | Receiver of background traffic passing inter-domain link L1
dtm-isp1-vmtr2 | Receiver of background traffic passing inter-domain link L2
dtm-isp2-rtr-bg1 | ISP router, interconnection to AS1 and AS3 | Vyatta software router
dtm-isp2-rtr-bg2 | ISP router, interconnection to AS1 | Vyatta software router
dtm-isp2-vmtg1 | Generator of background traffic passing inter-domain link L1
dtm-isp2-vmtg2 | Generator of background traffic passing inter-domain link L2
dtm-isp3-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp3-ofda | OVS connected to SDN controller
dtm-isp3-vmdc | DC generating inter-domain traffic: DC-B
dtm-isp3-vmsbox | S-Box
dtm-isp3-sdn | SDN Controller

Parameters, Measurement and Metrics

Measurement points and measured metrics for S-to-S experiments and for two types of tariff, volume based and 95th percentile based, are presented in Table 3 and Table 4, respectively. Performance metrics and KPIs are shown in Table 5.

Based on the collected statistics, the calculation of KPIs will be performed by an external application (e.g., MS Excel, Matlab, Mathematica). The achieved results will be presented in the form of graphs drawn in a well-known application (e.g., GNU Plot, Matlab, Mathematica, etc.).

Table 3 Measurement points and measured values: experiment with volume based tariff.

Measured value | Measurement point | Notation | Frequency
Temporary values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | x_{1,i}^V and x_{2,i}^V | Δt = 30 s
Temporary values of manageable traffic | DA-1 router at AS1 | z_{1,i}^V and z_{2,i}^V | Δt = 30 s
Achieved values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | X_1^V and X_2^V | End of billing period: T
Achieved values of manageable traffic | DA-1 router at AS1 | Z_1^V and Z_2^V | End of billing period: T
Compensation vector | s-Box at AS1 | C⃗ | Δt = 30 s
Reference vector | s-Box at AS1 | R⃗ | End of billing period: T

Table 4 Measurement points and measured values: experiment with 95th percentile based tariff.

Measured value | Measurement point | Notation | Frequency
5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements i of sets 𝒳_{1,i}^A and 𝒳_{2,i}^A | t_s = 5 min
Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements i of sets 𝒵_{1,i}^A and 𝒵_{2,i}^A | t_s = 5 min
Sets of samples on each inter-domain link by the end of billing period and the size of the sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets 𝒳_1^A and 𝒳_2^A and samples X_1^95 and X_2^95 | End of billing period: T
Sets of samples of manageable traffic by the end of billing period | DA-1 router at AS1 | 𝒵_1^A and 𝒵_2^A | End of billing period: T
Compensation vector | s-Box at AS1 | C⃗ | Δt = 30 s
Reference vector | s-Box at AS1 | R⃗ | End of billing period: T


Table 5 Performance metrics and KPIs.

Total cost with DTM | D, D_1 and D_2
Cost expected without DTM if link 1 was a default path, and absolute benefit | D^(1), D_1^(1), D_2^(1), ΔD^(1)
Cost expected without DTM if link 2 was a default path, and absolute benefit | D^(2), D_1^(2), D_2^(2), ΔD^(2)
Cost estimated during the billing period | D̂_i, D̂_{1,i} and D̂_{2,i}
Relative gain of using DTM | ξ^(1) and ξ^(2)
Ratio of the cost achieved to the cost expected according to the reference vector (accuracy of optimization) | ρ = D / D_R

Test procedures

The following three main stages of the experiment are defined.

Stage 1 – Functionality test

The main purpose of the functionality test is a coarse observation of the procedures of the DTM mechanism in order to validate the implementation. For this test the billing period should be set to one hour. The traffic envelope of the generators (both background and DC-DC traffic) should be flat. After two trial billing periods (system warm-up), during the third hour a burst of background traffic will be manually injected into the network in order to verify that it is properly compensated. The functionality test will be performed for both the volume based and the 95th percentile based tariff test-bed setups.

Stage 2 – Performance evaluation test for volume based tariff

Performance evaluation of DTM for the volume based tariff will be performed using the KPIs. The billing period should be set to one week. Since DTM will be started without any initial setup (initial values of the Reference and Compensation vectors), the first week is needed to collect traffic statistics in order to calculate the Reference and Compensation vectors. After that, statistics from the next two billing periods will be collected for evaluation purposes. The performance evaluation test will be started with a daily-envelope traffic pattern for both the DC-DC and background traffic generators.

Stage 3 – Performance evaluation test for 95th percentile based tariff

The performance evaluation test for the 95th percentile based tariff will be performed using the KPIs. Similarly to the volume based tariff test, a one-week billing period will be used. Measurements will be collected during the two billing periods following the initial billing period. DC-DC and background traffic profiles with a daily envelope pattern will be used.


3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case

The goal of this experiment is to evaluate DTM functionality and performance in a more complex network topology, with multiple DCs serving as traffic sources and receivers, and multiple cooperating ISPs running DTM. The use-case considered is again "Bulk data transfer for cloud operators".

The logical topology for this experiment is presented in Figure 7. There are two multi-homed domains performing traffic management with DTM: AS1 and AS4. They host the DCs serving as receivers of manageable traffic: DC-A and DC-C, respectively. There are also two domains, AS3 and AS5, which host the data centers acting as traffic sources. In total, 8 tunnels are established. The reference vector and compensation vector calculated at AS1 are sent to AS3 and AS5. On the basis of their values, AS3 chooses one of two tunnels (tun BA1 or tun BA2) for flows to be sent from DC-B to DC-A. Similarly, in AS5 one of two tunnels (tun DA1 or tun DA2) is chosen for traffic originating at DC-D and destined for DC-A. Analogously, AS4 performs traffic management and sends its reference and compensation vectors to AS3 and AS5 to manage traffic received by DC-C but generated by DC-B or DC-D, respectively.

Figure 7 Logical topology for M-to-M experiment.

Deployment infrastructure

The deployment infrastructure for the M-to-M experiment is presented in Figure 8. The addressing scheme is presented in Table 6. A description of all virtual machines can be found in Table 7.



Figure 8 Deployment of virtual machines on three physical servers in test-bed environment.


Table 6 Detailed IP Address table for the production network for the DTM evaluation.

IP Address or Address Range Usage

10.0.1.0/24 Interconnection ISP1-ISP2

10.0.2.0/24 Interconnection ISP1-ISP2

10.0.3.0/24 Interconnection ISP4-ISP2

10.0.4.0/24 Interconnection ISP4-ISP2

10.1.1.0/24 Interconnection ISP2-ISP3

10.1.12.0/24 Interconnection ISP2-ISP5

10.10.1.0/24 ISP1

10.10.2.0/24 ISP1

10.10.3.0/24 ISP1

10.1.2.0/24 ISP2

10.1.5.0/24 ISP2

10.1.6.0/24 ISP2

10.1.3.0/24 ISP3

10.1.4.0/24 ISP3

10.10.7.0/24 ISP4

10.10.9.0/24 ISP4

10.10.10.0/24 ISP4

10.1.10.0/24 ISP5

10.1.11.0/24 ISP5

Table 7 Hosts and the services running on them for DTM.

Host | Services | Comment
dtm-isp1-rtr-bg1 | ISP router, interconnection to AS2 (link LA1), BG-1.1 | Vyatta software router
dtm-isp1-rtr-bg2 | ISP router, interconnection to AS2 (link LA2), BG-1.2 | Vyatta software router
dtm-isp1-rtr-da | ISP router allowing connection with DC and S-Box, DA-1 | Vyatta software router
dtm-isp1-vmdc | Data Center receiving inter-domain traffic: DC-A
dtm-isp1-vmsbox | S-Box
dtm-isp1-vmtr1 | Receiver of background traffic passing inter-domain link LA1
dtm-isp1-vmtr2 | Receiver of background traffic passing inter-domain link LA2
dtm-isp2-rtr-bg1 | ISP router, interconnection to AS1, AS3 and AS4 | Vyatta software router
dtm-isp2-rtr-bg2 | ISP router, interconnection to AS1, AS4 and AS5 | Vyatta software router
dtm-isp2-vmtg1 | Generator of background traffic passing inter-domain links LA1 and LC1
dtm-isp2-vmtg2 | Generator of background traffic passing inter-domain links LA2 and LC2
dtm-isp3-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp3-ofda | OVS connected to SDN controller
dtm-isp3-vmdc | DC generating inter-domain traffic: DC-B
dtm-isp3-vmsbox | S-Box
dtm-isp3-sdn | SDN Controller
dtm-isp4-rtr-bg1 | ISP router, interconnection to AS2 (link LC1), BG-4.1 | Vyatta software router
dtm-isp4-rtr-bg2 | ISP router, interconnection to AS2 (link LC2), BG-4.2 | Vyatta software router
dtm-isp4-rtr-da | ISP router allowing connection with DC and S-Box, DA-4 | Vyatta software router
dtm-isp4-vmdc | Data Center receiving inter-domain traffic: DC-C
dtm-isp4-vmsbox | S-Box
dtm-isp4-vmtr1 | Receiver of background traffic passing inter-domain link LC1
dtm-isp4-vmtr2 | Receiver of background traffic passing inter-domain link LC2
dtm-isp5-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp5-ofda | OVS connected to SDN controller
dtm-isp5-vmdc | DC generating inter-domain traffic: DC-D
dtm-isp5-vmsbox | S-Box
dtm-isp5-sdn | SDN Controller

Parameters, Measurement and Metrics

Measurement points and sets of measured metrics are the same as for the S-to-S scenario. The same applications for measurement, data collection, and calculation of KPIs will be used. The only difference is that they are defined for the two domains that manage the inbound traffic, AS1 and AS4, instead of a single domain. Measurement points and measured values for the experiments with the volume based tariff and the 95th percentile tariff are juxtaposed in Table 8 and Table 9, respectively. Performance metrics and KPIs are also observed separately for AS1 and AS4, but the sets of metrics and KPIs are the same in each domain. They are presented in Table 10.


Table 8 Measurement points and measured values: experiment with volume based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | Temporary values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | x_{1,i}^V and x_{2,i}^V | Δt = 30 s
AS1 | Temporary values of manageable traffic in AS1 | DA-1 router at AS1 | z_{1,i}^V and z_{2,i}^V | Δt = 30 s
AS1 | Achieved values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | X_1^V and X_2^V | End of billing period: T
AS1 | Achieved values of manageable traffic in AS1 | DA-1 router at AS1 | Z_1^V and Z_2^V | End of billing period: T
AS1 | Compensation vector in AS1 | s-Box at AS1 | C⃗ | Δt = 30 s
AS1 | Reference vector in AS1 | s-Box at AS1 | R⃗ | End of billing period: T
AS4 | Temporary values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | x_{1,i}^V and x_{2,i}^V | Δt = 30 s
AS4 | Temporary values of manageable traffic in AS4 | DA-4 router at AS4 | z_{1,i}^V and z_{2,i}^V | Δt = 30 s
AS4 | Achieved values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | X_1^V and X_2^V | End of billing period: T
AS4 | Achieved values of manageable traffic in AS4 | DA-4 router at AS4 | Z_1^V and Z_2^V | End of billing period: T
AS4 | Compensation vector in AS4 | s-Box at AS4 | C⃗ | Δt = 30 s
AS4 | Reference vector in AS4 | s-Box at AS4 | R⃗ | End of billing period: T


Table 9 Measurement points and measured values: experiment with 95th percentile based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | 5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements i of sets 𝒳_{1,i}^A and 𝒳_{2,i}^A | t_s = 5 min
AS1 | Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements i of sets 𝒵_{1,i}^A and 𝒵_{2,i}^A | t_s = 5 min
AS1 | Sets of samples on each inter-domain link by the end of billing period and the size of the sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets 𝒳_1^A and 𝒳_2^A and samples X_1^95 and X_2^95 | End of billing period: T
AS1 | Sets of samples of manageable traffic by the end of billing period | DA-1 router at AS1 | 𝒵_1^A and 𝒵_2^A | End of billing period: T
AS1 | Compensation vector | s-Box at AS1 | C⃗ | Δt = 30 s
AS1 | Reference vector | s-Box at AS1 | R⃗ | End of billing period: T
AS4 | 5-minute samples of total traffic on inter-domain links | AS4 border routers: BG-4.1 and BG-4.2 | Elements i of sets 𝒳_{1,i}^A and 𝒳_{2,i}^A | t_s = 5 min
AS4 | Share of manageable traffic in 5-minute samples | DA-4 router at AS4 | Elements i of sets 𝒵_{1,i}^A and 𝒵_{2,i}^A | t_s = 5 min
AS4 | Sets of samples on each inter-domain link by the end of billing period and the size of the sample used for billing | AS4 border routers: BG-4.1 and BG-4.2 | Sets 𝒳_1^A and 𝒳_2^A and samples X_1^95 and X_2^95 | End of billing period: T
AS4 | Sets of samples of manageable traffic by the end of billing period | DA-4 router at AS4 | 𝒵_1^A and 𝒵_2^A | End of billing period: T
AS4 | Compensation vector | s-Box at AS4 | C⃗ | Δt = 30 s
AS4 | Reference vector | s-Box at AS4 | R⃗ | End of billing period: T

Table 10 Performance metrics and KPIs for AS1 and AS4.

Total cost with DTM | D, D_1 and D_2
Cost expected without DTM if link 1 was a default path, and absolute benefit | D^(1), D_1^(1), D_2^(1), ΔD^(1)
Cost expected without DTM if link 2 was a default path, and absolute benefit | D^(2), D_1^(2), D_2^(2), ΔD^(2)
Cost estimated during the billing period | D̂_i, D̂_{1,i} and D̂_{2,i}
Relative gain of using DTM | ξ^(1) and ξ^(2)
Ratio of the cost achieved to the cost expected according to the reference vector (accuracy of optimization) | ρ = D / D_R

Test procedures

As for the single-to-single case, three stages are proposed for the multi-to-multi case: stage 1 – functionality test, stage 2 – performance test with volume based tariff, stage 3 – performance test with 95th percentile based tariff. The description of each of them is an extension of the corresponding single-to-single test procedure.

Stage 1 – Functionality test

The main goal of the functionality test is a basic evaluation of the DTM mechanism in the multi-to-multi experiment configuration. As for the single-to-single experiment, the billing period will be set to one hour. Since a more complex topology will be used, more traffic generators and receivers will be utilized. Each traffic generator (background and DC) will be configured to generate a flat-envelope traffic pattern. The observation of DTM procedures will be done during two billing periods. In order to validate the DTM mechanism, bursts of background traffic affecting both receiving ISPs will be injected. The functionality test will be performed for both the volume based and the 95th percentile based tariff.

Stage 2 – Performance evaluation test for volume based tariff

The performance evaluation test for the volume based tariff will be performed in order to calculate the KPIs, computed separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario (billing period – one week, usable observation time – 2 billing periods, traffic envelope – daily profile).

Stage 3 – Performance evaluation test for 95th percentile based tariff

The performance evaluation test for the 95th percentile based tariff will be performed using the performance metrics and KPIs, computed separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario, i.e., billing period – one week, usable observation time – 2 billing periods, traffic envelope – daily profile.

3.2 EFS Experiments

The end-user focused (EFS) scenario aims at providing increased QoE and energy efficiency for end-users. In particular, this goal is achieved by applying in-network optimization strategies, such as content caching and prefetching, in a socially- and energy-aware fashion, while taking ISPs’ and application providers’ interests into account. In this context, the RB-HORST mechanism has been designed and implemented to support the aforementioned functionalities. It involves end-users and their devices’ resources directly in the service delivery chain and is based on the concept of the user-owned nano data center (uNaDa).

The EFS experiments will use the released and integrated prototype of RB-HORST to validate its set of functionalities, as well as to evaluate its performance and benefits for all involved stakeholders: end-users, ISPs, and service providers. For each experiment, goals and metrics are defined, aiming to quantify the performance of specific RB-HORST components; the deployment infrastructure and all test procedures, both functional and performance-related, are also provided, so that the people running the experiments have all the necessary information to execute them. The list of EFS experiments is briefly described below:

The caching experiment (Section 3.2.1) aims at validating and evaluating the performance of the RB-HORST mechanism’s caching and proxying functionality in a test-bed environment.

The large-scale study (Section 3.2.2) will test the RB-HORST platform in a real-world environment with real users, and will extract all the measurements required to evaluate the performance of the social and overlay prediction algorithms and the benefits of content prefetching for end-users and ISPs.

Finally, the mobile data offloading experiment (Section 3.2.3) will monitor the energy consumption of uNaDas and smartphones, and evaluate the bandwidth and energy consumption savings of WiFi offloading under realistic bandwidth conditions.

Figure 9 shows the basic topology of the caching and mobile data offloading experiments. It consists of 3 NSP domains, 2 access ones (AS1 and AS3) and 1 transit one (AS2), and 3 users (Andri, Sergios and George), each having Internet access through their respective ISP. The transit NSP provides access to the rest of the Internet, e.g., Facebook and Vimeo servers.

In each user’s premises there is a uNaDa, a Raspberry Pi hosting the RB-HORST software. Each uNaDa is assigned to its owner via his Facebook credentials and provides 2 SSIDs: one open but with no Internet access, and one private with full Internet access. In addition, each user owns an Android smartphone to access the Internet, with the RB-HORST Android application and at least a web browser installed. Of course, depending on the experiment, there could be multiple users, with their respective Android smartphones, accessing and connecting to the uNaDas.


Figure 9 Basic Topology of EFS experiments.

Figure 10 shows the mapping of the EFS basic topology to the actual test-bed. As indicated in the figure, PC/ISP1, 2 and 3 map to ASes 2, 1 and 3 respectively, meaning that ISP1 is the transit domain, and ISPs 2 and 3 are the access ones.

Figure 10: Mapping of the EFS basic topology to the SmartenIT test-bed

In addition, the following tables also provide the IP addresses and the services used to run the EFS experiments.

Table 11 Detailed IP Address table for the production network for the EFS experiments.

IP Address or Address Range Usage

10.201.0.0/18 Interconnection

10.201.50.0/30 Interconnection ISP1-ISP2

10.201.50.4/30 Interconnection ISP2-ISP3

10.201.50.8/30 Interconnection ISP1-ISP3

10.201.64.0/18 ISP1

10.201.100.0/27 ISP1 core network

10.201.128.0/18 ISP2

10.201.150.0/27 ISP2 core network

10.201.191.0/24 ISP2 RB-HORST network

10.201.192.0/18 ISP3

10.201.200.0/27 ISP3 core network

10.201.255.0/24 ISP3 RB-HORST network

Table 12 Hosts and the services running on them for EFS experiments.

Host Services Comment

isp1-rtr1 Uplink to Internet, Whois Proxy

isp1-rtr2 ISP router, interconnection to ISP 2 and ISP 3

isp2-rtr1 ISP router, interconnection to ISP1

isp2-un1 Hardware uNaDa located at ISP2 Hosts the HORST and RBH_Secured SSIDs.

isp3-rtr1 ISP router, interconnection to ISP1

isp3-un1 Hardware uNaDa located at ISP3 Hosts the HORST-DEMO and RBH_Secured_DEMO SSIDs.

isp3-un2 Headless, software uNaDa located at ISP3

3.2.1 Evaluation of caching functionality in RB-HORST

The goal of this experiment is to test the basic caching functionality of RB-HORST and to evaluate the cache performance. In order to make sure that the implemented prototype is capable of caching, the proxy functionality to intercept video requests and the capability to store/serve content to/from the cache of the home router have to be verified. The performance evaluation of the RB-HORST cache will quantify the performance depending on the content request rate, the content request strategy, the cache size, and the number of end devices. The results will be analyzed in terms of bandwidth utilization (saved traffic) and energy consumption, and mapped to subjective QoE.

Deployment infrastructure

This specific experiment focuses on access domain AS1 (ISP3), aiming to evaluate the caching capabilities of a single uNaDa. For this purpose, several Android smartphones with the RB-HORST Android application and a web browser installed, as well as the uNaDa hosting the RB-HORST software, are required. Figure 11 presents the test-bed segment required to run the experiment.

Figure 11 Test-bed segment for evaluation of caching functionality.

Parameters, Measurements and Metrics

Parameters:

A reference set-up will be used to assess the impact of each parameter. That is, one parameter is varied per test while the reference values are used for the other parameters.

Cache size: 0MB (no caching), 128MB, 256MB, 512MB, 1GB (reference will be selected after cache size performance study)

Number of end devices: 1 (reference), 2, 4, 8 devices

Video request rate: 1/16min, 1/8min (reference), 1/4min, 1/2min, 1/1min, 1/0.5min

Request generator: same video, random video (100 videos, uniform distribution), catalogue (reference, 100 videos, Zipf-distributed probability), avg. video length: 3min
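As an illustration of the catalogue request generator, a minimal Python sketch follows; the deliverable only fixes the catalogue size (100 videos) and the Zipf-distributed popularity, so the Zipf exponent s = 1.0 and all names are assumptions:

```python
import random

def zipf_weights(n, s=1.0):
    """Zipf popularity weights for a catalogue of n videos (rank 1 = most popular)."""
    raw = [1.0 / (rank ** s) for rank in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def request_video(catalogue, weights, rng=random):
    """Draw one video request according to the Zipf popularity distribution."""
    return rng.choices(catalogue, weights=weights, k=1)[0]

# Catalogue strategy: 100 videos with Zipf-distributed request probability.
catalogue = [f"video_{i:03d}" for i in range(100)]
weights = zipf_weights(len(catalogue))
```

The "same video" and "random video" strategies are the degenerate cases of always returning one fixed element or drawing with uniform weights, respectively.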

Measurements (to be conducted on end user device and/or uNaDa):

Content request time (request sent on end user device, request arrived at uNaDa)

uNaDa action upon request (cache hit/miss)

Content serve time (content sent from uNaDa, content arrived at end user device)

Up-/downlink traffic traces measured at uNaDa (end user device - uNaDa, uNaDa - Vimeo)

Energy consumption (end user device, uNaDa)

Metrics (computed from measured data):

Cache hit rate of uNaDa = #(requests served from cache) / #(requests arrived)

Requests served by uNaDa = #(content serve time) [compare to video request rate]

Bandwidth utilization from traffic traces (download bandwidth at end user device [uNaDa QoS], amount of traffic to Vimeo [inter-domain traffic saved])

Energy consumption

QoE (compute stalling events from download bandwidth and video bitrate)
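The first and last metrics can be computed from the measured events as sketched below; the function names and the simple buffer model for stalling detection are assumptions, not the project's actual evaluation code:

```python
def cache_hit_rate(served_from_cache, requests_arrived):
    """Cache hit rate of the uNaDa = #(requests served from cache) / #(requests arrived)."""
    return served_from_cache / requests_arrived if requests_arrived else 0.0

def stalling_events(download_kbps, video_kbps, buffer_s=2.0, chunk_s=1.0):
    """Rough stalling estimate: the playout buffer drains whenever download
    bandwidth is below the video bitrate; count the buffer-empty events."""
    buffered = buffer_s
    stalls = 0
    for bw in download_kbps:          # one bandwidth sample per chunk_s seconds
        buffered += chunk_s * (bw / video_kbps - 1.0)
        if buffered <= 0.0:
            stalls += 1
            buffered = buffer_s       # re-buffer before playback resumes
    return stalls
```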

Test procedures

The following tests are defined to validate functionality and estimate performance.

Functionality tests

The functionality tests ensure that the caching functionality works as expected. Therefore, home routers and end devices must be set up and the home router has to be registered in the overlay. It must be tested that content requests are sent from the end device and content can be consumed. The consumed content has to be cached on the home router and a subsequent request to that content must be served from the cache of the home router.

Figure 12 shows the topology of the caching functionality tests. A uNaDa is located in the AS of an access provider. The AS is connected to the Internet via AS2 which might be a transit provider. In the Internet, video content can be accessed from its providers.

The test is set up as follows: Google Nexus 5 smartphones have RB-HORST installed. uNaDa A has an Internet connection and the SSID HORST A_AP. Users A and C watch common videos and are connected via WiFi to A_AP.

In the functionality test reference scenario, both users A and C download the same video as usual, without RB-HORST functionality.

With RB-HORST enabled on the uNaDa, a video downloaded by user A is cached on the uNaDa. When user C downloads the same video, the request is intercepted by the uNaDa and served from its cache.

Figure 12 Topology of caching functionality tests.

Performance tests

The performance tests quantify the impact of different parameters on the caching functionality of RB-HORST. The reference parameters will be used for all but the respective investigated parameter:

Performance study cache size: reference set-up but change cache size (will provide a reference cache size for further performance tests)

Performance study number of end user devices: reference set-up but change number of end devices

Performance study inter-arrival time of requests: reference set-up but change video request rate

Performance study request strategies: reference set-up but change request strategy

3.2.2 Large-scale RB-HORST++ Study

The goal of the large-scale RB-HORST++ study is to show that the social-aware prefetching and WiFi offloading mechanisms are functional and that they improve the perceived network service for the end-user compared to conventional network access and simple caching approaches. In contrast to caching, where content is kept in local network storage after it has been downloaded once by a user, the social-aware prefetching mechanism proactively downloads content that a local user is likely to watch in the future. Furthermore, the operation of RB-HORST++ in a realistic usage environment with a large number of participants is demonstrated and evaluated.

The study will be conducted with at least 40 participants at TUD, UZH, UniWue, and AUEB. Additional nodes may be emulated in EmanicsLab (LXC containers) if needed. The goal of the study is to have participants in at least 4 different locations throughout Europe.

For the basic prefetching functionality, the messaging overlay, the content prediction, and the cache management have to be operable. Furthermore, the mobile offloading features of RB-HORST require a large number of participants with realistic social connections. The performance evaluation of prefetching has to consider home routers distributed over multiple ASes and a social network between the users. The impact of the content request rate, the content request strategy, and the home router locations will be investigated. The results will be analyzed in terms of prefetching efficiency, bandwidth utilization (saved inter-domain traffic), energy consumption, and subjective QoE. The usage patterns required by the mobile offloading features, including the interaction of trusted and untrusted users, are investigated by project members and associated university students.

Deployment infrastructure

Access points are used as home routers running RB-HORST. Each of them includes WiFi as well as a wired uplink port. The uplink port is used by the participants to connect the devices to their home Internet access. Furthermore, the participants use their smartphones as mobile devices.

Equipment set:

Home routers with WiFi access running RB-HORST

Internet access located in different ASes in at least 4 different European countries

More than 40 participants with their own mobile devices

ASes are connected to the public Internet

Parameters home router:

Hard disk size of home router: 16GB

Up- / downlink bandwidth of home router: Varying according to the connectivity provided by the participants

CPU, RAM: 4-core CPU, 1GB RAM

Parameters end-device:

Android smart phone with the RB-HORST app installed

Parameters, Measurements and Metrics

The detailed metrics of a measured characteristic are given in parentheses after the name of the characteristic. The trigger for the measurement is given in square brackets; the trigger can be the occurrence of an event or a periodic schedule. The list is grouped by measurement points in the experiment setup.

Measurements on the home router:

The data is logged in text files on the home router. These files are uploaded to the data collection server using HTTP and REST every hour.
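The hourly upload could look roughly as follows; the collector URL, endpoint path, and JSON payload shape are assumptions for illustration, not part of the RB-HORST specification:

```python
import json
import urllib.request

COLLECTOR_URL = "http://collector.example.org/api/logs"  # hypothetical endpoint

def build_upload_request(router_id, log_lines, url=COLLECTOR_URL):
    """Build the HTTP POST (REST-style) request that pushes one hour of
    text-file log lines to the data collection server."""
    payload = json.dumps({"router": router_id, "lines": log_lines}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

# A periodic scheduler (e.g. cron or a 1 h timer) would call
# urllib.request.urlopen(build_upload_request(...)) once per hour.
```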

Connectivity to monitoring server (bidirectional throughput, traceroute, RTT) [once after system setup]

Home router device status (CPU usage, network interface counters, available space on local storage) [periodically 1/s]

Data offload (date, time, duration, transferred volume) [Event]

Details on the RB-HORST operations on the device:

Dump friend list of home router owner [periodically 1/day]

Video Request Events (time, video ID, title, length, size, source, download time) [Event]

Video Prefetching Events (time, video ID, title, length, size, source, download time) [Event]

Video Serving Events (time, video ID, title, length, size, source, download time) [Event]

Predictions (raw data, predicted ranking) [Event]

Overlay neighbors (IP address, user) in fixed time intervals, e.g. every hour [Event]

Cache hits (time, content, time in cache) [Event]

Cache delete events (time, content) [Event]

Raw traceroutes to other RB-HORST instances (IP addresses of hops) [Event]

Measurements on the Mobile App:

The data is logged into a database and uploaded to the data collection server using HTTP and REST every hour.

Connection status to private RB-HORST WiFi (duration) [periodically 1/s]

WiFi on own home router [on user request, Event: connection change from cellular]

Connection to home server (bidirectional throughput, round-trip time, signal strength)

Connection to monitoring server (bidirectional throughput, traceroute, round-trip time)

Cellular [Event: connection change from WiFi]

Connected mobile network (cell ID, signal strength, wireless technology, operator)

Connection to monitoring server (bidirectional throughput, bidirectional traceroute, round-trip time)

Mobile device status [periodically 1/s]

CPU utilization

Network interface counters

Screen status

Power status (battery level, voltage, current)

Metrics:

The following metrics are calculated based on the data collected during the experiments. The calculations are conducted after the experiment on the data collection server.

Energy consumption of home router, based on the energy model for the device and the measured device status

Prefetching efficiency = #(prefetch time) / #(content serve time)

Cache hit rate = #(content serve time) / #(content request time)

Requests served = #(content serve time)

Bandwidth utilization from traffic traces

Inter-domain traffic produced by prefetching

Inter-domain traffic saved
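A sketch of these post-experiment calculations on the data collection server; the event-list representation and field names are assumptions made for illustration:

```python
def prefetching_efficiency(prefetch_events, serve_events):
    """Prefetching efficiency = #(prefetch events) / #(content serve events);
    values above 1 indicate content was prefetched but never served."""
    return len(prefetch_events) / len(serve_events) if serve_events else float("inf")

def cache_hit_rate(serve_events, request_events):
    """Cache hit rate = #(content serve events) / #(content request events)."""
    return len(serve_events) / len(request_events) if request_events else 0.0

def inter_domain_saved(serve_events):
    """Inter-domain traffic saved: bytes served locally instead of
    being fetched again over the inter-domain link."""
    return sum(e["size"] for e in serve_events)
```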

Test procedures

The test procedures are conducted by the participants under supervision. Each home router device is handed out to a group of at least two participants.

The home router is configured by one of the participants at home and connected to their home network. This network connection is used for Internet connectivity. This participant configures the home router to represent her/his Facebook identity in the RB-HORST system. All participants install the RB-HORST application on their Android smartphone and set it up with their credentials.

Relying on the participants’ existing Facebook relations, they are encouraged to interact and to use the RB-HORST system by visiting each other’s locations and posting RB-HORST-compatible content on their Facebook walls. The participants trigger measurements on the home router and on their smartphones. Finally, each participant fills out a survey on their experience and on their home network.

3.2.3 Evaluation of data offloading functionality in RB-HORST

The goal of this experiment is to compare the potential bandwidth and energy savings of WiFi offloading under realistic bandwidth conditions, as experienced by the end-user at home or while moving. For that purpose, the probability that users offload at uNaDas of social contacts is measured, and the achievable savings in overall bandwidth and energy are compared to a transmission via 3G/4G.

Figure 13: Architecture of the Energy Monitor and Analyzer.

Deployment infrastructure

The infrastructure required for the experiment consists of uNaDas participating in the overlay and configured for the respective users. Furthermore, smartphones capable of connecting both to the cellular network and, via WiFi, to the local uNaDas are required. On the smartphone, the RB-HORST Facebook App needs to be installed. The test-bed setup is similar to the experiment described in Section 3.2.1. Additionally, a cellular connection is required to allow accessing the video content independently of RB-HORST; this establishes a reference for the remaining tests. As in that experiment, content is retrieved via the uNaDa, once directly from the server and once from the local cache. For these data transfers, the energy consumption is computed.

The energy monitor and analyzer are integrated into the SmartenIT architecture. The energy monitor estimates the power consumption of the uNaDa and transfers the measurements to the energy analyzer, which aggregates the samples from multiple uNaDas. The refined data is then used by the traffic manager to adapt the routing to the current energy consumption of the participating devices. The architecture of the energy monitors and analyzer is visualized in Figure 13.

The energy monitoring on the uNaDas is model based, meaning that the instantaneous power consumption of the device is not measured directly, but derived using an energy model. This model is generated by simultaneously measuring the power consumption and monitoring the system utilization of the underlying hardware. The power model is then calculated using regression approaches.

The power model derived from the regression analysis is then used to convert system utilization samples on each uNaDa to power estimates, which are sent to the energy analyzer for aggregation. These models result in a low error (<5%) when applied to the same device type.
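A minimal sketch of this regression step, assuming a simple linear model P = a·u + b over a single utilization metric; the actual model may combine CPU utilization and per-interface traffic counters, so this is an illustration rather than the project's calibration code:

```python
def fit_power_model(utilization, power):
    """Least-squares fit of a linear power model  P = a * u + b,
    where u is the system utilization and P the measured power draw."""
    n = len(utilization)
    mean_u = sum(utilization) / n
    mean_p = sum(power) / n
    cov = sum((u - mean_u) * (p - mean_p) for u, p in zip(utilization, power))
    var = sum((u - mean_u) ** 2 for u in utilization)
    a = cov / var                     # slope: watts per unit of utilization
    b = mean_p - a * mean_u           # intercept: idle power
    return a, b

def estimate_power(model, utilization_sample):
    """Convert one utilization sample to a power estimate using the model."""
    a, b = model
    return a * utilization_sample + b
```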

This approach is also feasible for smartphones. Here, some additional effects have to be considered, as the user interacts with the device and interfaces are only active for some time. Still, on some devices (e.g. Nexus 5), it is possible to directly read the power consumption using low level system calls or the power API.

Using the power estimates from the participating devices, collected on the energy analyzer, allows deriving the cost of a data transmission between any two participating devices. To analyze the energy cost of the RB-HORST mechanism, the power consumption as recorded by the energy analyzer is correlated with the traffic caused by RB-HORST to determine the overall cost of the mechanism. Further, it is also possible to derive the cost of arbitrary data transfers and scheduling decisions of the mechanism.

For a derivation of the overall social mobile offloading potential, interactive experiments with users in the context of the large-scale user study as described in Section 3.2.2 are foreseen. Before the experiments, a survey is handed out to users to fill in basic data that cannot be logged easily automatically, i.e., the name of the person deploying the uNaDa, the location of the deployment (the address) and the nominal bandwidth of the DSL connection as well as the ISP. Moreover, a number of technical parameters will be logged by the RB-HORST uNaDa (social log) as well as the Mobile App (offloading log).

Parameters, Measurements and Metrics

To assess the energy cost of caching a video, the system utilization (i.e. network traffic on each interface in each direction, CPU utilization) on the uNaDas participating in the content distribution needs to be monitored. Based on the power model of the uNaDa, the power consumption for receiving this video is derived.

The cost of transferring a cached video to the mobile device consists of the energy consumption of the mobile device and the uNaDa using the respective power models.

This cost can then be compared with the cost of streaming a video directly from the server. For this, an energy model of the server must be assumed, while the power draw of the uNaDa and the smartphone can be calculated using the calibrated power models.

The parameters of the experiment are:

Size of the video

Number of peers (0:10:50)

SPS support as provided by SEConD (yes/no)

The required measurements are:

uNaDa

o Power

o Or:

Ethernet traffic in

Ethernet traffic out

WiFi traffic in

WiFi traffic out

CPU utilization

Smartphone

o Power

o Or:

CPU utilization

WiFi traffic in

WiFi traffic out

Cellular traffic in

Cellular traffic out

Display brightness

Other components

The metrics to be calculated are:

Cost for caching a video (J/MB)

Cost for transferring the video from cache to smartphone (J/MB)

Cost for streaming a video on the smartphone (J/MB)

To calculate the full cost of the mechanism, the derived measurements are then to be combined with the efficiency of the prefetching algorithm, including the cost of needlessly fetched videos. This is done in a post-processing step on the evaluation server.
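The J/MB metrics can be derived from the power estimates as sketched below; the sampling-based integration and the way needlessly prefetched videos are charged to the useful traffic are assumptions for illustration:

```python
def energy_cost_j_per_mb(power_samples_w, sample_interval_s, transferred_mb):
    """Energy cost of a transfer in J/MB: integrate the power samples over
    the transfer duration and divide by the transferred volume."""
    energy_j = sum(power_samples_w) * sample_interval_s
    return energy_j / transferred_mb

def mechanism_cost(cache_cost, transfer_cost, useless_fraction):
    """Full cost per useful MB: charge needlessly prefetched bytes
    (useless_fraction = share of prefetched bytes never served) to the
    useful ones, then add the cache-to-smartphone transfer cost."""
    return cache_cost / (1.0 - useless_fraction) + transfer_cost
```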

For the derivation of the overall potential of social mobile offloading, two logs are necessary. The social log (Table 13) is written once per day by RB-HORST on the uNaDa; its purpose is to dump the social connections of the access point owner.

Table 13 The social log structure of RB-HORST

1. line MD5(facebook_name_of_unada_owner), SSID used for social offloading

2. line MD5(unada_owners_friend_one)

3. line MD5(unada_owners_friend_two)

… …

n+1. line MD5(unada_owners_friend_n)

Second, the offloading log (Table 14) contains all offloading events and is logged by the App; the log is updated at a frequent sampling interval.

Table 14 The offloading log structure of RB-HORST

1. line MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume

… ..

n+1. line MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume

Both logs are pushed to a central measurement server regularly (once per hour). The MD5 hashes are necessary to maintain the privacy of users. The measured data is sufficient to reconstruct the social graph and all offloading events.
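A sketch of how the two logs could be produced and post-processed; the comma-separated field layout follows Tables 13 and 14, while the exact on-disk format and helper names are assumptions:

```python
import hashlib

def md5_id(name):
    """Privacy-preserving user identifier, as used in both logs."""
    return hashlib.md5(name.encode("utf-8")).hexdigest()

def parse_social_log(lines):
    """First line: owner hash and offloading SSID; remaining lines: friend
    hashes. Returns (owner, ssid, friends) for social-graph reconstruction."""
    owner, ssid = [field.strip() for field in lines[0].split(",")]
    friends = [line.strip() for line in lines[1:] if line.strip()]
    return owner, ssid, friends

def offloaded_volume(offload_lines, user_hash):
    """Sum of the offloaded volume (last field) for one smartphone owner."""
    total = 0
    for line in offload_lines:
        owner, _ssid, _timestamp, volume = [f.strip() for f in line.split(",")]
        if owner == user_hash:
            total += int(volume)
    return total
```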

Test procedures

Non-Interactive experiment for energy consumption:

1. Connect mobile phone to the cellular network

2. Stream video via the cellular interface and measure

3. Connect to the RB-HORST AP

4. Stream video via the RB-HORST AP and measure

5. Stream/load cached video via the RB-HORST AP and measure

6. Compare consumed energy for each option (2, 4, 5)

Interactive experiment for social mobile offloading:

1. Access point users are encouraged to use the RB-HORST system actively

2. The offloading events and social relations are logged to the social log and the offloading log, respectively

3. The logs are pushed to the logging server in regular intervals

4. The results of this experiment are obtained by post processing the acquired logs

Other relevant Information

The smartphone model used for the energy-related experiments should be the Nexus 5, as it simplifies the measurement procedure: it allows measuring the battery voltage and the current draw, and hence the power consumption of the device can be derived by multiplying the two. The highest accuracy of the measurements can be achieved using the Raspberry Pi as uNaDa, as calibrated power models are available.
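On such a device the power computation reduces to a multiplication; a sketch assuming the typical Android sysfs/power-API units of microamps and microvolts (the unit scaling is an assumption):

```python
def instantaneous_power_w(current_ua, voltage_uv):
    """Power draw of the device: battery current times battery voltage.
    Inputs are in microamps and microvolts (as typically exposed by the
    Android power API / sysfs), hence the 1e-12 scaling to watts."""
    return current_ua * voltage_uv * 1e-12

def transfer_energy_j(power_samples_w, sample_interval_s):
    """Energy consumed over a measured transfer (rectangle integration)."""
    return sum(power_samples_w) * sample_interval_s
```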

4 Showcases

During the Year 2 Technical Review with the EC the project team demonstrated a running pilot implementation and major functionalities. Showcases can be considered as preliminary experiments aimed at showing the basic behaviour of SmartenIT mechanisms in a test-bed environment.

The three following showcases have been designed to validate selected aspects of the SmartenIT implementation and to present the benefits of particular mechanisms:

Multi-domain network traffic optimization in DTM – presentation of DTM’s network traffic cost optimisation algorithm in a multi-domain network environment.

Locality, social awareness and WiFi offloading in RB-HORST – presentation of major RB-HORST functionalities, i.e. WiFi offloading, overlay prediction and social prediction.

Mobile Internet Access Offloading in EEF/RB-HORST – presentation of data offloading functionality in RB-HORST with the Energy Efficiency Measurement Framework (EEF).

4.1 Multi-domain network traffic optimization in DTM

In this section, a showcase of the DTM pilot implementation is documented, including the details required to set up and run the software in a test-bed network environment.

4.1.1 Scenario topology

The DTM showcase test-bed comprises four logically isolated network domains as shown in Figure 14. Inter-domain routing is configured by means of the BGP protocol. Two GRE tunnels are configured between routers located in AS3 and AS1; each of them enters AS1 on a different inter-domain link. In AS3 a machine emulating a data center acts as the source of inter-domain traffic. This traffic traverses one or two transit domains, AS2 and AS4, to reach the receiving data center located in AS1. SmartenIT software prototype v1.1 instances are deployed in AS3 (S-Box and SDN Controller) and AS1 (S-Box). Additional software traffic generators and receivers are deployed inside the network to emulate background traffic on links L1 and L2.

The DTM showcase logical test-bed topology is physically deployed on a set of virtual machines distributed over 3 PCs as shown in Figure 15. In the physical deployment, the routers in AS2 and AS3 are represented by one virtual machine.

Table 15 lists the required extensions of the SmartenIT test-bed.

Table 15 Overview on the SmartenIT test-bed extensions used with DTM.

Scenario 4 ISPs

Required test-bed extensions as defined in D4.1

5.1.1 Traffic Generator

5.2.2 VyOS Software Router

5.2.3 OpenFlow

Figure 14 DTM showcase test-bed logical topology

Table 16 and Table 17 present the IP address ranges used and the services deployed on each VM.

Table 16 Detailed IP Address table for the production network for the DTM evaluation.

IP Address or Address Range Usage

10.0.0.0/8 Interconnection

10.0.1.0/24 Interconnection ISP1-ISP2

10.0.2.0/24 Interconnection ISP1-ISP4

10.1.6.0/24 Interconnection ISP2-ISP4

10.1.1.0/30 Interconnection ISP2-ISP3

10.10.1.0/24 ISP1

10.10.2.0/24 ISP1

10.10.3.0/24 ISP1

10.1.5.0/24 ISP2

10.1.3.0/24 ISP3

10.1.4.0/24 ISP3

10.1.2.0/24 ISP4

Figure 15 Mapping of DTM showcase logical topology to physical test-bed.

Table 17 Hosts and services running on them for DTM.

Host Services Comment

dtm-isp1-rtr-bg1 ISP router, interconnection to AS2 Vyatta software router

dtm-isp1-rtr-bg2 ISP router, interconnection to AS4 Vyatta software router

dtm-isp1-rtr-da ISP router allowing connection with DC and S-Box

Vyatta software router

dtm-isp1-vmdc Data Center receiving inter-domain traffic

dtm-isp1-vmsbox S-Box

dtm-isp1-vmtr1 Receiver of background traffic passing inter-domain link AS2-AS1

dtm-isp1-vmtr2 Receiver of background traffic passing inter-domain link AS4-AS1

dtm-isp2-rtr-bg1 ISP router, interconnection to AS1, AS3 and AS4

Vyatta software router

dtm-isp2-vmtg1 Generator of background traffic passing inter-domain link AS2-AS1

dtm-isp3-rtr-bg ISP router, interconnection to AS2 Vyatta software router

dtm-isp3-ofda OVS connected to SDN controller

dtm-isp3-vmdc DC generating inter-domain traffic

dtm-isp3-vmsbox-sdn S-Box and SDN controller VM

dtm-isp4-rtr-bg2 ISP router, interconnection to AS1 and AS2 Vyatta software router

dtm-isp4-vmtg2 Generator of background traffic passing inter-domain link AS4-AS1

4.1.2 Scenario assumptions

In order to deploy DTM and enable it to perform efficient inter-domain traffic optimisation, a set of assumptions has to be met.

Network traffic managed by DTM is caused by data being exchanged between cloud resources located in more than one data center, running in distant network domains. The domain receiving the data traffic needs to have two inter-domain links between which inbound traffic can be dynamically distributed. For the showcase purposes, traffic originating from a data center is emulated using a custom software application which allows generating traffic patterns similar to the ones observed between data centers (i.e. with a large number of parallel flows). Additional traffic generators deployed within the test-bed generate the background, non-manageable traffic on the inter-domain links.

Before the showcase is run, the test-bed is completely configured and operational. Traffic generator instances (both senders and receivers) are running and traffic is being sent within the test-bed. S-Box instances and the SDN Controller are correctly configured and started beforehand.

4.1.3 Reference scenario

The reference scenario assumes a bulk data transfer between data centers placed in AS1 and AS3 (Figure 14) without DTM deployed in the network. In such a case, only one of AS1’s inter-domain links is always selected by BGP to transfer data between the DCs. The cost of traffic received on a particular link depends on the total volume of traffic received during a billing period; specific traffic cost functions are assumed on both links. Since many parameters of BGP may affect the link selection, the cost is calculated for both cases (selection of either link).

Knowing the background and manageable traffic volumes (measured during the scenario with DTM), the cost incurred by AS1 without DTM enabled can be easily calculated.
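The cost comparison described above can be sketched as follows; the piecewise-linear cost functions, rates, caps, and traffic volumes are illustrative assumptions, not values from the test-bed:

```python
def link_cost(volume_gb, rate, cap_gb, over_rate):
    """Illustrative piecewise-linear cost function: a base per-GB rate up to a
    contracted cap, and a higher rate for every GB above the cap."""
    return rate * min(volume_gb, cap_gb) + over_rate * max(0.0, volume_gb - cap_gb)

# Hypothetical traffic volumes (GB per billing period).
background = {"link1": 120.0, "link2": 80.0}
manageable_total = 200.0  # DC-to-DC traffic

def total_cost(share_link1):
    """Total inter-domain cost of AS1 for a given share of manageable traffic on link 1."""
    v1 = background["link1"] + share_link1 * manageable_total
    v2 = background["link2"] + (1.0 - share_link1) * manageable_total
    return link_cost(v1, 0.10, 250.0, 0.50) + link_cost(v2, 0.15, 250.0, 0.50)

cost_all_on_link1 = total_cost(1.0)   # BGP pins all manageable traffic to link 1
cost_all_on_link2 = total_cost(0.0)   # BGP pins all manageable traffic to link 2
cost_with_dtm = total_cost(0.65)      # split according to a reference vector
```

With nonlinear (e.g., capped) cost functions such as these, splitting the manageable traffic between both links can undercut either single-link case, which is the effect DTM exploits.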

4.1.4 Showcase scenario

The goal of the showcase is to present the benefit of running DTM inside the network, which allows for cost reduction resulting from an optimal distribution of inbound traffic among the two inter-domain links.

For the purposes of the real-time showcase, the billing period was set to 30 minutes (instead of the typical 1 month). During consecutive accounting periods it can be observed how the traffic is distributed over the links and how this distribution corresponds to the optimal distribution represented by the DTM reference vector values. The estimated benefit of using DTM is presented throughout the accounting period, and the real benefit gained in the current accounting period is known at its end.

During the showcase a custom visualization application was used. The application view presented in Figure 16 consists of 5 real-time plots and one static plot. In the upper row, the first two diagrams from the left present the total and background traffic volume passing a particular inter-domain link: link 1 (AS1-AS2) and link 2 (AS1-AS4), respectively. The last chart presents the real-time estimation of traffic cost in three cases: when DTM is used, and when DTM is disabled and i) DC-to-DC traffic passes entirely through link 1, or ii) DC-to-DC traffic passes entirely through link 2 (depending on the default route selected by BGP). As shown in Figure 17, the gap between the cost lines with and without DTM at the end of the billing period represents the achieved benefit (cost savings).

In the bottom row of Figure 16, the diagram on the left presents the real-time distribution of traffic between the inter-domain links (in all 3 previously mentioned cases) together with the calculated reference vector. The middle graph is a static copy of the left diagram captured at the end of the previous accounting period. The right chart presents in real time the current value of the compensation vector for link 1 (the compensation vector for the second link has the same value but the opposite sign). A positive value of the compensation vector means that at this particular moment new flows generated by the DC should be sent via tunnel 1 (traversing link 1); for a negative value, tunnel 2 (traversing link 2) should be selected.
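The tunnel selection rule described above can be sketched as follows; the compensation computation here is a simplified assumption of this sketch, while the exact DTM formula is specified in D2.4 [5]:

```python
def compensation_link1(reference_share_link1, bytes_link1, bytes_total):
    """Simplified sketch: deviation of the achieved traffic share on link 1
    from the reference vector (positive when link 1 is below its target)."""
    if bytes_total == 0:
        return 0.0
    return reference_share_link1 - bytes_link1 / bytes_total

def select_tunnel(comp_link1):
    """Positive compensation for link 1 -> steer new DC flows to tunnel 1,
    otherwise to tunnel 2 (the rule described in the text)."""
    return "tunnel1" if comp_link1 > 0 else "tunnel2"

# Link 1 is at a 60 % share but should carry 70 %: new flows go to tunnel 1.
decision = select_tunnel(compensation_link1(0.7, 60.0, 100.0))
```

Applied per new flow, this rule continuously pulls the observed distribution back toward the reference vector, which is the compensation behaviour visible in Figure 18.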


Figure 16 View of the visualization application for DTM showcase.

Figure 17 Graph from the visualization application presenting real-time cost estimation.

During the showcase presentation, a burst of background traffic on link 2 was generated (highlighted with a red circle in Figure 18). As could be observed, this sudden disruption in the traffic distribution caused new flows from the DC (manageable traffic) to be redirected to the tunnel passing link 1 (note the large difference between total and background traffic on link 1 in the top graph of Figure 18). In the bottom graph of Figure 18 a deviation from the reference vector can be easily observed; however, the execution of the DTM compensation procedure ensured that the reference vector was met again shortly afterwards.


Figure 18 Selected graphs from the visualization application with highlight on traffic burst compensation.

The DTM showcase clearly presents the benefits of implementing DTM inside the network in terms of inter-domain traffic cost reduction.


4.2 Locality, social awareness and WiFi offloading in RB-HORST

In this section, a showcase of the RB-HORST pilot implementation is documented, including the details required to set up and run the software in a test-bed network environment.

4.2.1 Scenario topology

Figure 19 shows the topology of the RB-HORST showcase, which is the same as the basic topology and mapping to the actual test-bed described for the EFS experiments (Figure 9, included in Section 3.2.1). It consists of 3 NSP domains, 2 access NSPs (AS1 and AS3) and 1 transit NSP (AS2), and 3 users, Andri, Sergios and George, each having Internet access through their respective ISP. This resembles a real-world scenario in which Andri is located in Zurich and has a contract with a local ISP, e.g. Swisscom, while Sergios and George are located in different suburbs of Athens but have a contract with the same ISP, e.g. Wind Telecom. These 2 ISPs are connected through a transit NSP and also provide access to the rest of the Internet, e.g. to the Facebook and Vimeo servers.

Figure 19 Topology of prefetching and social awareness showcase

In each user’s premises there is a uNaDa, which is a Raspberry Pi hosting the RB-HORST software. Each uNaDa is assigned to its owner via his Facebook credentials and provides 2 SSIDs: one open but with no Internet access, and one private with full Internet access. In addition, each user owns a smartphone to access the Internet.
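A two-SSID set-up of this kind could, for instance, be realised on the Raspberry Pi with a multi-BSS hostapd configuration; the following fragment is only a sketch under that assumption, and the interface names, channel, and passphrase are placeholders, not the actual uNaDa configuration:

```ini
# Sketch: one access point radio announcing both RB-HORST SSIDs.
interface=wlan0
driver=nl80211
hw_mode=g
channel=6

# First BSS: the open HORST SSID (no Internet access; used only for the
# HORST Android application to talk to the uNaDa).
ssid=HORST

# Second BSS: the private, WPA2-protected SSID with full Internet access.
bss=wlan0_1
ssid=RBH_Secured
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=change-me
```

Traffic separation between the two SSIDs (open vs. full Internet access) would then be enforced by firewall rules on the uNaDa, which are outside the scope of this fragment.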

The RB-HORST showcase logical test-bed topology is physically deployed on a set of virtual machines distributed over 3 PCs, as shown in Figure 10 (the test-bed topology and network configurations are presented in Section 3.2.1, as they are the same for the RB-HORST showcase and the RB-HORST experiment). Table 18 lists the required extensions of the SmartenIT test-bed; Table 11 and Table 12 present the used IP address ranges and the services deployed on particular VMs.

Table 18: Overview of the SmartenIT test-bed extensions used with RB-HORST.

Scenario: 3 ISPs
Required test-bed extensions (as defined in D4.1): Raspberry Pi (Section 5.3.1)

4.2.2 Scenario assumptions

The RB-HORST showcase execution has the following assumptions and required configuration:

George and Andri own a Google Nexus 5 smartphone, with at least Android OS v4.4.4, and HORST, Facebook and Firefox applications installed.

Andri’s and George’s uNaDas are Raspberry Pis, running the RB-HORST software and offering access-point capabilities, while Sergios’ uNaDa is a virtual machine, only hosting the RB-HORST software.

HORST* SSIDs are open, with no Internet access, and are used for the communication of the HORST Android application with the uNaDa.

RBH_Secured* SSIDs require authentication and provide full Internet access.

Each user is the owner of his uNaDa, meaning that he has logged in to the RB-HORST service with his Facebook credentials.

Andri and George are Facebook friends and watch similar videos. This means that their uNaDas’ caches share some common videos and the uNaDas are considered overlay neighbors. Thus, in the next iteration of overlay prediction, their newly-cached contents are likely to be prefetched to each other’s uNaDa.

Sergios is not a Facebook friend of the other 2 users, but belongs to the same domain as George. He has watched some content in the past, which is cached locally, and his uNaDa participates in the uNaDas’ overlay network.

4.2.3 Reference scenario

In the reference scenario (Figure 20), George is connected to Andri’s private SSID and browses the Internet. Eventually, he watches a Vimeo video (“Italy - A 1 Minute Journey”) which is not present in Andri’s uNaDa cache. The video is fetched from the Vimeo servers and buffers slowly during playback, resulting in low QoE.

However, the video is inserted into Andri’s uNaDa cache and can be served from there for future requests.


Figure 20: Reference scenario.

4.2.4 Showcase scenario

WiFi offloading

In the WiFi offloading showcase presented in Figure 21, George has visited Zurich and is located close to Andri’s uNaDa. Currently, he cannot access the Internet, because he has not enabled 3G roaming on his device.

To gain Internet access, George opens the HORST Android application, which transparently:

Connects to HORST SSID of Andri’s uNaDa,

Provides George’s Facebook ID, and,

Finally receives the private SSID (RBH_secured) credentials and connects to it.

In the meantime, Andri’s uNaDa has verified that George is one of Andri’s friends and can therefore be considered a trusted user, allowed to connect to the private SSID and browse the Internet.
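The uNaDa-side decision in this handshake can be sketched as follows; the function name, message shapes, and all identifiers are hypothetical, as the real RB-HORST protocol is defined by the pilot implementation:

```python
def authorize_guest(facebook_id, owner_friend_ids, private_ssid_credentials):
    """uNaDa side of the handshake sketched above: hand out the private SSID
    credentials only to users who are Facebook friends of the owner."""
    if facebook_id in owner_friend_ids:
        return {"ssid": private_ssid_credentials["ssid"],
                "passphrase": private_ssid_credentials["passphrase"]}
    return None  # not a friend: no credentials, no Internet access

# Andri's uNaDa configuration (illustrative values).
andri_friends = {"george.fb.id", "maria.fb.id"}
rbh_secured = {"ssid": "RBH_Secured_Andri", "passphrase": "example-passphrase"}

# George's phone, connected to the open HORST SSID, presents his Facebook ID.
grant = authorize_guest("george.fb.id", andri_friends, rbh_secured)
# A non-friend receives nothing.
denied = authorize_guest("stranger.fb.id", andri_friends, rbh_secured)
```

The phone would then use the returned credentials to associate with the private SSID, completing the third step of the list above.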

The outcomes of the showcase are presented in:

The HORST Android application, which shows the evolution of the authentication process and finally, the connection to the private SSID, and,

The uNaDa web interface, which presents the authentication attempt by a user different than Andri, and finally, the successful outcome.


Figure 21: WiFi offloading showcase.

Overlay prediction

The overlay prediction showcase assumes that the reference scenario has been completed and that newly cached content (“Italy - A 1 Minute Journey”) has been stored in Andri’s uNaDa cache. Now George returns home and connects to his home gateway; all the following prediction showcases occur at his uNaDa.

As stated in the showcase assumptions, Andri’s and George’s uNaDas are overlay neighbors, which means that they already share some common videos and their overlay prediction algorithms will predict that newly-watched/cached videos should be prefetched. Hence, the overlay prediction in George’s uNaDa predicts that the Vimeo video “Italy - A 1 Minute Journey” is likely to be watched again. Although the video exists in Andri’s uNaDa, it is preferably fetched from the Vimeo CDN servers, because they are closer than Andri’s uNaDa.

When George watches the video again, his video request is proxied by the uNaDa and the video is served from the uNaDa server, resulting in rapid video buffering and higher QoE than in the reference scenario.
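The proxy behaviour just described can be sketched as below; the function and the cache representation are illustrative assumptions, not the RB-HORST implementation:

```python
def serve_video(video_id, cache, fetch_upstream):
    """Sketch of the uNaDa proxy: serve prefetched content from the local
    cache, otherwise fetch it from the origin and cache it for next time."""
    if video_id in cache:
        return cache[video_id], "local-unada"   # fast path: rapid buffering
    data = fetch_upstream(video_id)             # slow path: origin servers
    cache[video_id] = data
    return data, "origin"

# The overlay prediction has already prefetched the video into the cache.
cache = {"italy-1min": b"<video bytes>"}
data, source = serve_video("italy-1min", cache, lambda vid: b"")
```

A request for a video that was not prefetched would take the origin path and only then populate the cache, which is exactly the difference between the showcase and the reference scenario.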


Figure 22: Overlay prediction showcase.

The results of the showcase are presented in:

George’s Firefox web browser, in which the video buffers instantly during playback, and,

George’s uNaDa, which shows that the content is prefetched from Vimeo servers instead of Andri’s uNaDa, and George’s request is intercepted and served by the local uNaDa content server.

Social Prediction

Andri watches another video “Brussels in 1 minute” from his home and posts it to his Facebook wall.

The social prediction in George’s uNaDa predicts that this video is likely to be watched by George and should be prefetched. The video has also been watched by Sergios and is cached in Sergios’ uNaDa, which belongs to the same domain as George’s. Hence, the video is fetched from Sergios’ uNaDa instead of the Vimeo servers, resulting in inter-domain traffic savings.

When George checks his Facebook News Feed, Andri’s post appears and George watches the posted Vimeo video. His uNaDa proxies his request and serves the video from the local content server, resulting in better QoE than in the reference scenario.
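A content-source selection rule consistent with both prediction showcases (a same-domain uNaDa beats the CDN, while the CDN beats a remote uNaDa) could be sketched as follows; this ranking is an assumption of the sketch, not the specified RB-HORST algorithm:

```python
def pick_source(candidates, local_domain):
    """Pick a prefetch source: prefer a uNaDa in the local domain (no
    inter-domain traffic), then the CDN (closer than a remote uNaDa),
    then a uNaDa in a remote domain."""
    def rank(c):
        if c["type"] == "unada" and c["domain"] == local_domain:
            return 0          # same-domain uNaDa: saves inter-domain traffic
        if c["type"] == "cdn":
            return 1          # CDN: closer than a remote uNaDa
        return 2              # remote uNaDa: last resort
    return min(candidates, key=rank)

candidates = [
    {"name": "andri",   "type": "unada", "domain": "AS1"},
    {"name": "vimeo",   "type": "cdn",   "domain": "internet"},
    {"name": "sergios", "type": "unada", "domain": "AS3"},
]
# From George's domain (AS3), Sergios' uNaDa wins, as in the social showcase.
best = pick_source(candidates, "AS3")
```

With Sergios’ uNaDa absent from the candidate list, the same rule selects the CDN over Andri’s remote uNaDa, matching the overlay prediction showcase.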


Figure 23: Social prediction showcase.

The results of the social prediction showcase are presented in:

George’s smartphone, in which the video buffers instantly, and,

George’s uNaDa web interface, which shows the outcome of the social prediction and the content source selection (Sergios’ uNaDa instead of Vimeo servers) and the content delivery by the local uNaDa content server.

4.3 Mobile Internet Access Offloading in EEF/RB-HORST

This showcase relates to the evaluation of the data offloading functionality in RB-HORST (and RB-HORST++, an extension integrating MoNA, vINCENT, and SDN-DC) with the Energy Efficiency Measurement Framework (EEF).

Motivation and Integration into SmartenIT

Today's overlay-based mobile cloud applications pose a challenge to operators and cloud providers in terms of increasing traffic demands and energy costs. Energy efficiency plays a major role as a key performance metric in both the OFS and the EFS scenario. Therefore, the SmartenIT consortium has defined energy efficiency as one of the key targets for the design and optimization of traffic management mechanisms for overlay networks and cloud-based applications.

This target is used as a design goal for emerging proposals, e.g., to establish content distribution systems that minimize energy consumption by exploiting the energy awareness of their architectural elements in an efficient cache structure, which minimizes the volume of transferred data. For example, the End-user Focused Scenario has the goal of energy efficiency for end users, achieved by distributing and moving content and services in an energy-aware way while taking the provider's interests into account. Finding the right content placement is an optimization problem to be solved in an energy-efficient manner. In order to optimize the placement with respect to energy efficiency, information on energy consumption is needed. This data can be derived from energy models providing an energy estimate of the placement; energy models which estimate energy consumption from network measurements exist in the literature. Moreover, modelling the energy efficiency of placements also allows for the prediction of future energy consumption. The Operator Focused Scenario (OFS) likewise has the goal of achieving the highest operating efficiency in terms of low energy consumption, besides other optimization criteria. For example, cloud federation enables collaboration so as to achieve both individual and overall improvement of cost and energy consumption. Moreover, data migration/re-location may often be imposed by the need to reduce the overall energy consumption within the federation by consolidating processes and jobs to a few DCs only. The OFS scenario therefore defines a series of interesting problems to be addressed by SmartenIT, specifically energy efficiency for DCOs, either individually or overall for all members of the federation. To this end, the EEF demo showcases the SmartenIT energy framework. It consists of the Energy Analyzer, which provides energy consumption estimates, thereby enabling an energy-efficient network management approach.

A precondition for optimizing energy consumption is the measurement of energy consumption. Thus, a measurement platform for energy consumption based on validated energy models is integrated into the RB-HORST showcase. The measurement of energy consumption without the need of additional measurement hardware is demonstrated and the measurements are visualized.

4.3.1 Scenario topology

The network configuration for the EEF demo is based on the multi-ISP scenario. Three ISPs are used with a total of three uNaDas: ISP2 hosts one uNaDa, and ISP3 hosts 2 uNaDas, one running on a Raspberry Pi and one running in a VM.

Figure 24: Topology and Scenario Assumptions.


The network topology and test-bed mapping is identical to the one used for the RB-HORST showcase as described in Section 3.2.1.

4.3.2 Scenario assumptions

The experiment setup and scenario assumptions are depicted in Figure 24. In particular, the following assumptions are made:

There are two Google Nexus 5 smartphones with RB-HORST and the EnergyMonitor installed.

Each smartphone is equipped with a tunnel using the 3G interface to the test-bed.

A uNaDa with Internet connection provides open WiFi access and also has RB-HORST and the EnergyMonitor installed.

The cache of the RB-HORST uNaDa is emptied before the demo.

Note: RB-HORST is considered to be one possible application under test for this demo; it is not claimed that RB-HORST interfaces with EEF. However, both RB-HORST and DTM can use the EEF framework to optimize for energy efficiency.

The EnergyMonitor constantly measures and delivers system parameters to the Energy Analyzer, which converts the measured data into energy estimates using validated models measured by TUD for the SmartenIT project. The energy estimates are visualized together with the topology shown above.
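As an illustration, converting monitored system parameters into a power estimate can follow a simple linear model of the shape below; the coefficients are placeholders, not the TUD-validated model parameters:

```python
def estimate_power_watts(idle_w, cpu_coef, cpu_util, wifi_coef, wifi_mbps):
    """Illustrative linear power model: an idle floor plus terms that scale
    with CPU utilisation and WiFi throughput, as derivable from system
    measurements without extra metering hardware."""
    return idle_w + cpu_coef * cpu_util + wifi_coef * wifi_mbps

# Raspberry Pi style uNaDa serving a video over WiFi (hypothetical values):
# 2.0 W idle, 40 % CPU utilisation, 20 Mbit/s on the WiFi interface.
power = estimate_power_watts(idle_w=2.0, cpu_coef=1.5, cpu_util=0.4,
                             wifi_coef=0.05, wifi_mbps=20.0)
```

Integrating such estimates over time yields the energy figures that the demo visualizes side by side for the smartphone and the uNaDa.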

4.3.3 Reference scenario

Mobile Internet Access Offloading in EEF/RB-HORST is compared to the case of regular downloads using a 3G or 4G cellular connection. The metrics to be compared are the download speed and the power consumption of the mobile device, combined with an estimate of the back-end power consumption.

The reference scenario is established as follows: the user downloads a video using the 3G connection, and the video is delivered via 3G. At the same time, the energy consumption of the smartphone is monitored and visualized. This is represented as the red connection marked in Figure 25, not making use of the uNaDa.

4.3.4 Showcase scenario

The showcase scenario is depicted in Figure 25.

Step 1 (Using WiFi offloading instead of 3G):

The user downloads the same video using the RB-HORST WiFi. While the video is viewed, it is cached in the local RB-HORST cache. At the same time, the energy consumption of the smartphone and the uNaDa is monitored and visualized side by side. The smartphone shows a better energy efficiency than in the reference case. At the same time, the uNaDa shows a higher energy consumption, as traffic on the WiFi interface is generated.


Figure 25: Mobile Internet access offloading in EEF/RB-HORST showcase.

Step 2 (Using WiFi offloading and RB-HORST caching):

The user downloads the same video using the RB-HORST WiFi, and the video is delivered from the RB-HORST cache. At the same time, the energy consumption of the smartphone and the uNaDa is monitored and visualized side by side. A result similar to step 1 is expected, with the RB-HORST access point showing better energy efficiency.


5 Summary

The core of this document is a set of experiment definitions which will be used to drive the prototype validation activities. As the project covers two complementary domains, one in which the end-user has the key role and one dominated by an ISP and a cloud service provider or a data center, the experiments represent the respective characteristics.

The OFS (Operator Focused Scenario) experiments have been designed to validate the prototype implementation of DTM. Two main experiment scenarios have been proposed to prove the optimization benefit of the DTM mechanism:

evaluation of multi-domain traffic cost reduction in DTM with data transfer between two locations (distant Data Center/cloud resources),

evaluation of multi-domain traffic cost reduction in DTM with data transfer between multiple locations (DCs/clouds serving as traffic sources and receivers).

The EFS (End-user Focused Scenario) experiments focus on the implementation of the RB-HORST mechanism. They mainly validate optimization techniques for caching and WiFi offloading with the use of social network information and energy efficiency measurements. The following three EFS experiments have been designed:

evaluation of caching functionality in RB-HORST,

large-scale RB-HORST study,

evaluation of data offloading functionality in RB-HORST.

The second EFS experiment is especially interesting and promising because it will be executed in cooperation with students who will be using the RB-HORST implementation.

Moreover, deliverable D4.2 includes the descriptions of the showcases organised during the second-year technical review with the EC. The SmartenIT project presented functions of the pilot implementation in a test-bed prepared for the event. The work on the showcases was a direct input to the further efforts on experiment definitions.

Each experiment definition and showcase contains a sufficient set of details for the software to be set up and executed by the project team responsible for validation. In the next stage, the results of the experiments will be analysed to assess the SmartenIT solutions.


6 SMART Objectives

Through this document, four SmartenIT SMART objectives defined in Section B1.1.2.4 of the SmartenIT Description of Work (DoW) have been partially addressed. Namely, one overall (O4, see Table 19) and three practical (O2.2, O3.1 and O3.4; see Table 20) SMART objectives were addressed.

The overall Objective 4 is defined in the DoW as follows:

Objective 4 SmartenIT will evaluate use cases selected out of three real-world scenarios (1) inter-cloud communication, (2) global service mobility, and (3) exploiting social networks information (QoE and social awareness) by theoretical simulations and on the basis of the prototype engineered.

This deliverable provides the definitions of experiments for the OFS and EFS scenarios (Section 3). Both scenarios were proposed as an outcome of the analysis and integration process conducted in WP2. The OFS and EFS experiments include the characteristics of all three scenarios listed in Objective 4. Initial evaluations of the use cases with the selected network traffic management solutions have already been performed during the Year 2 review meeting as the showcases (Section 4); mainly, they showed the available functionalities. The experiment definitions described in D4.2 provide the details required to run advanced functional and performance tests of the implemented SmartenIT solutions.

Table 19: Overall SmartenIT SMART objective addressed.

Objective No.: O4
Specific: Evaluation of use cases
Measurable: D4.1, D4.2, D4.3 (deliverable numbers)
Achievable: Implementation, evaluation
Relevant: Complex
Timely: MS4.3 (milestone number)

Table 20: Practical SmartenIT SMART objectives addressed.

Objective ID: O2.2
Specific: Which parameter settings are reasonable in a given scenario/application for the designed mechanisms to work effectively?
Measurable (metric): Number of parameters identified, where a reasonable value range is specified
Achievable: Design, simulation, prototyping (T2.2, T3.4, T4.2)
Relevant: Highly relevant output of relevance for providers and users
Timely (project month): M24

Objective ID: O3.1
Specific: Which techniques are to be used to retrieve management information from cloud platforms and OSNs?
Measurable (metric): Number of studied cloud providers, number of identified types of management information, number of compared retrieval techniques, number of studied OSNs, number of identified types of social information or meta-information related to users’ social behaviour
Achievable: Design (T1.1, T4.1, T4.2)
Relevant: Highly relevant output of relevance for providers
Timely (project month): M24

Objective ID: O3.4
Specific: How to monitor energy efficiency and take appropriate coordinated actions?
Measurable (metric): Number of options identified to monitor energy consumption on networking elements and end users’ mobile devices, investigation on which options perform best (yes/no)
Achievable: Design, simulation, prototyping (T1.3, T2.3, T4.1, T4.2, T4.4)
Relevant: Highly relevant output of relevance for users
Timely (project month): M36

This deliverable contributes to answering three specific practical questions:

Objective 2.2: Which parameter settings are reasonable in a given scenario/application for the designed mechanisms to work effectively?

The definitions of experiments in Section 3 provide information about the parameters, measurements, and metrics which must be taken into account during experiment execution. They have been selected so as to collect valuable and accurate results; the right selection is needed to properly evaluate the SmartenIT network traffic management solutions.

Objective 3.1: Which techniques are to be used to retrieve management information from cloud platforms and OSNs?

The RB-HORST experiments (EFS) obtain and use identities and social data from Facebook. This concrete OSN has been selected to show and evaluate how such a platform can support network traffic management and enhance traditional management approaches. Additionally, the OFS experiments present a network traffic cost optimisation technique in a cloud-based inter-domain network environment.


Objective 3.4: How to monitor energy efficiency and take appropriate coordinated actions?

The answer to the question about energy efficiency monitoring is provided in the experiment which utilises the Energy Efficiency Measurement Framework (EEF). It compares the energy consumption of WiFi and cellular data transmissions under realistic bandwidth conditions with the use of RB-HORST.


7 References

[1] The SmartenIT Consortium, “Grant Agreement for STREP: Annex I – Description of Work (DoW),” 2012.

[2] T. Benson, A. Akella, D. A. Maltz. Network traffic characteristics of data centers in the wild. 10th ACM SIGCOMM conference on Internet measurement (IMC '10). ACM, New York, NY, USA, 2010.

[3] T. Benson, A. Anand, A. Akella, M. Zhang. Understanding data center traffic characteristics, SIGCOMM Computer Communications Review, Vol. 40, No. 1, January 2010.

[4] The SmartenIT project: Deliverable D4.1 – Test-bed Set-up and Configuration; October 2014.

[5] The SmartenIT project: Deliverable D2.4 - Report on Final Specifications of Traffic Management Mechanisms and Evaluation Results; October 2014.


8 Abbreviations

3G Third Generation

AGH Akademia Gorniczo-Hutnicza im. Stanislawa Staszica w Krakowie

AS Autonomous System

BGP Border Gateway Protocol

CDN Content Delivery Network

CPU Central Processing Unit

DA Data Center Attachment Point

DC Data Center

DCO Data Center Operator

DoW Description of Work

DTM Dynamic Traffic Management

EEF The Energy Efficiency Measurement Framework

EFS End-user-Focused Scenario

GRE Generic Routing Encapsulation

HTTP Hyper Text Transfer Protocol

ICOM Intracom S.A. Telecom Solutions

IP Internet Protocol

IRT Interroute S.P.A

ISP Internet Service Provider

KPI Key Performance Indicator

MONA Mobile Network Assistant

M-to-M Multiple to Multiple

NSP Network Service Provider

OFS Operator-Focused Scenario

OSN Online Social Network

OVS Open vSwitch

PC Personal Computer

PSNC Instytut Chemii Bioorganicznej PAN

QoE Quality of Experience

QoS Quality of Service

RAM Random-Access Memory

REST REpresentational State Transfer


RB-HORST Replicating Balanced tracker - HOme Router Sharing based On truST

RTT Round Trip Time

S-to-S Single to Single

SDN Software Defined Networking

SEConD Socially-aware Efficient Content Delivery

SMART Specific Measurable Achievable Realistic And Timely

SNMP Simple Network Management Protocol

SSID Service Set Identifier

TUD Technische Universität Darmstadt, Germany

UDP User Datagram Protocol

uNaDa User-owned NAno DAtacenter

UniWue Julius-Maximilians Universität Würzburg

UZH University of Zürich

WiFi Wireless Fidelity

vINCENT Virtual Incentives

VM Virtual Machine

9 Acknowledgements

This deliverable was made possible due to the large and open help of the WP4 team of the SmartenIT project within this STREP, which includes, besides the deliverable authors as indicated in the document control, Krzysztof Wajda (AGH), Gino Carrozzo (IRT), and Burkhard Stiller (UZH), for providing valuable feedback and input.