Experimental research evaluation (initial) (felix.site.ac.upc.edu/Clommunity_deliverables/D4.3.pdf)


Project title: A Community networking Cloud in a box.

Experimental research evaluation (initial)

Deliverable number: D4.3

Version 1.0

This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No 317879.


Project Acronym: Clommunity
Project Full Title: A Community networking Cloud in a box
Type of contract: Small or medium-scale focused research project (STREP)
Contract No: 317879
Project URL: http://clommunity-project.eu

Editor: Roger Baig (Guifi.net), Felix Freitag (UPC)
Deliverable nature: Report (R)
Dissemination level: Public (PU)
Contractual Delivery Date: April 30, 2014
Actual Delivery Date: April 30, 2014
Suggested Readers: Project partners
Number of pages: 37
Keywords: WP4, experimental research, evaluation
Authors: Vladimir Vlassov (KTH), Hooman Peiro Sajjad (KTH), Paris Carbone (KTH), Jim Dowling (SICS), Lars Kroll (KTH, SICS), Alexandru-Adrian Ormenisan (KTH, SICS), Amin Khan (UPC), Mennan Selimi (UPC), Felix Freitag (UPC)

Peer review: Roger Pueyo (Guifi.net), Roc Meseguer (UPC)

Abstract

This document presents the work carried out in T4.3 during the first reporting period of the CLOMMUNITY project to perform experimental evaluations of research on community clouds.

The experimental evaluation addressed different levels of the community cloud system. Resource allocation with incentive-based mechanisms looked at the regulation of IaaS in community clouds. Distributed file system experimentation helped to understand the performance of these services for suitability in the Cloudy distribution. The experiments on distributed scalable storage solutions gave insights for further integration as SaaS in Cloudy. Distributed video streaming experiments explored this approach for provisioning such a service in community clouds.

While the first experiments were supported by simulation methodologies, along with the development and deployment of the community cloud testbed in WP4, more and more experiments were conducted on real deployed cloud systems, starting first with Community-Lab from the CONFINE project, but later on combining Community-Lab devices with our own testbed infrastructure.


Contents

1 Introduction
  1.1 Contents of the deliverable
  1.2 Relationship to other CLOMMUNITY deliverables
2 Research on Incentives-Based Resource Regulation
  2.1 Prototype Experiments
    2.1.1 Evaluation of Local Cloud
      2.1.1.1 Resource Utilization
      2.1.1.2 Credit Distribution
      2.1.1.3 Success Ratio
    2.1.2 Evaluation of Federated Cloud
  2.2 Simulation Experiments
    2.2.1 Experiment Setup
    2.2.2 Experimental Results
      2.2.2.1 Ratio of Successful Requests
      2.2.2.2 Breakdown of Request Responses
      2.2.2.3 Resource Utilization
      2.2.2.4 Nodes Selection Policies
3 Research on distributed file systems in communities
  3.1 Experiments with distributed file systems
  3.2 Tahoe-LAFS
    3.2.1 The Experiment Environment
    3.2.2 Results
      3.2.2.1 Two-community Network Evaluation
      3.2.2.2 Single-community Network Evaluation
      3.2.2.3 Evaluation in local setting
    3.2.3 Discussion
  3.3 XtreemFS
    3.3.1 Testing environment
    3.3.2 Results
  3.4 Conclusions
4 Research on Distributed Scalable Storage
  4.1 Experiments with Distributed Storage
  4.2 CaracalDB
    4.2.1 Setup
    4.2.2 Results
    4.2.3 Conclusions
  4.3 Datamodel
    4.3.1 Setup
    4.3.2 Results
    4.3.3 Conclusion
5 Research on Distributed Video Streaming
  5.1 Video-on-Demand (Gvod)
    5.1.1 Setup
    5.1.2 Results
    5.1.3 Conclusion
6 Conclusions and Outlook
Bibliography
Licence


List of Figures

2.1 Details of the VM request operation by an ON
2.2 Overall resource utilization of the four ONs
2.3 Distribution of credit among the four ONs
2.4 Ratio of fulfilled and rejected requests for each ON
2.5 Super and ordinary nodes in federated community cloud
2.6 Resource assigned over different SN zones
2.7 Breakdown of outcome of requests
2.8 Resource utilization along 24 minutes of the experiment
2.9 Success ratio comparison of provider ON selection strategies
3.1 Write/read performance of Tahoe-LAFS in two-community networks
3.2 Write operation counts of Tahoe-LAFS per node
3.3 Write/read performance of Tahoe-LAFS in single-community network
3.4 Write/read performance of Tahoe-LAFS in local environment
3.5 Three local clouds
3.6 Performance of XtreemFS for no replica
3.7 Performance of XtreemFS for 2 replicas
3.8 Performance of XtreemFS for 5 replicas
4.1 Reference Topology: 7 co-located VMs with full ethernet connectivity. Node #5 is designated as "bootstrap node" and node #7 runs only Yahoo! Cloud Serving Benchmark (YCSB). Round-trip latency 0.4 ms.
4.2 Mixed Topology: 4 virtual machines (VMs) with full ethernet connectivity in Barcelona and 3 VMs in Stockholm routing over Barcelona. Node #5 is designated as "bootstrap node" and node #7 runs only YCSB. Round-trip latency from Stockholm to Barcelona 78 ms and among Stockholm nodes 154 ms.
4.3 Cumulative read/scan latencies for all workloads on the reference topology (left) and the mixed topology (right)
4.4 Cumulative update/insert latencies for all workloads on the reference topology (left) and the mixed topology (right)
4.5 Datamodel: cumulative read/scan latencies for all workloads on the reference topology (left) and the mixed topology (right)
4.6 Datamodel: cumulative update latencies for all workloads on the reference topology (left) and the mixed topology (right)
5.1 Reference Topology: 7 co-located VMs with full ethernet connectivity. Round-trip latency 0.4 ms.
5.2 Mixed Topology: 7 VMs with full ethernet connectivity in Barcelona and 1 VM in Stockholm routing over Barcelona
5.3 Download rate: leecher and seeders are in same clouds
5.4 Gvod: leecher and seeders are in different clouds


List of Tables

2.1 Resource capacity of nodes
2.2 Two cases with different resource distribution between zones
2.3 Configuration for each node in a zone with shared and total instances
2.4 Success ratio of nodes for different configurations with effort- and contribution-based incentives
3.1 Nodes in the Tahoe-LAFS grid and space that is shared with the grid
3.2 RTT and bandwidth between client and storage nodes in the two-community network scenario
3.3 RTT and bandwidth between client and storage nodes in the single-community scenario
5.1 Download time: leecher and seeders are in same clouds
5.2 Download time: leecher and seeders are in different clouds


1 Introduction

1.1 Contents of the deliverable

This deliverable describes the work carried out in T4.3 of WP4 during the first reporting period of the CLOMMUNITY project. Task T4.3 supports the research work done in WP3 with experimental evaluations. This experimental evaluation can be performed on the experimental community cloud testbed, built and operated by task T4.2 over the duration of the project.

1.2 Relationship to other CLOMMUNITY deliverables

Deliverable D4.3 is closely related to D3.1 and D3.2 of WP3, since WP3 conducts the research tasks which require experimental evaluation. As such, while D4.3 focuses on reporting the experimental evaluation, D3.1 and D3.2 focus on explaining the corresponding research issues which originated the experiments. D4.3 is also related to D2.1 and D2.2, since the implementation of some of the components which are deployed and evaluated by T4.3 is developed in WP2 and reported in the corresponding deliverables. There is also a close relation with D4.2, which describes the community cloud testbed itself that has been deployed and operated, and on which several of the reported experiments were run. The results obtained in Task 4.1 in addition influenced the research issues of WP3 and how these were evaluated experimentally.



2 Research on Incentives-Based Resource Regulation

The participants in community networks are mainly volunteers, so the community cloud needs incentive mechanisms in place that encourage members to contribute their hardware, effort and time. When designing such mechanisms, one has to take into account the heterogeneity of the network, nodes and communication links, since each member brings a widely varying set of resources and physical capacity to the system.

A community network distinguishes between super nodes (SNs) and ordinary nodes (ONs). SNs have at least two wireless links, each to another SN. ONs only connect to an SN and do not route any traffic. The ONs of the community network host the virtual machine (VM) instances and constitute the hardware layer of the cloud architecture. The core layer, residing in the SNs, contains the software for managing, scheduling and monitoring the VMs running in the ONs.

As detailed in [1, 2, 3], we propose an incentive mechanism which applies reciprocity-based resource allocation. It is inspired by the Parecon economic model [4, 5], which focuses on social welfare by considering the inequality between nodes. In this model, a node's rewards are calculated based on its effort, which is a function of its capacity as well as its contribution to the system.

The criteria that an SN uses to evaluate requests from ONs are the following. When an ON asks an SN for a resource, which in this case is a number of VMs and the duration for which they are needed, the SN first checks whether the ON's credit is sufficient to cover the cost of the transaction. This cost is proportional to the number of VMs requested and the duration for which they are occupied. If the ON does not have sufficient credit, the request is rejected. However, the SN sometimes allows requests from ONs with zero or negative credit, so as to encourage them to participate in the system and earn credit by contributing more VMs. If an ON has enough credit, the SN searches for VMs provided by the ONs in its zone. If the demand cannot be met locally, the SN forwards the request to other SN zones. For each ON which provides VMs, the SN calculates the transaction cost and adds it to that ON's credit, while the cost is deducted from the consumer ON's credit. Once the operation is completed, the effort of each ON involved in the transaction is recalculated. The effort of a node expresses its relative contribution to the system, since the mechanism also considers the capacity of the node. The significance of this is that a node with low capacity has put in more effort than a node with higher capacity, even if both of them donated an equal number of VMs.
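To make the settlement step concrete, the credit and effort bookkeeping described above can be sketched in Python (the language of the prototype). This is an illustrative sketch, not the project's actual implementation: the unit cost, the effort formula (contribution normalised by capacity) and all names are assumptions consistent with the description.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """An ordinary node (ON) as seen by its super node (SN). Illustrative."""
    capacity: int          # total VM instances the node could host
    shared: int            # VM instances it contributes to the cloud
    credit: float = 0.0
    contributed: int = 0   # VM-seconds provided to others
    consumed: int = 0      # VM-seconds obtained from others

COST_PER_VM_SECOND = 1.0   # assumed unit price; the deliverable does not fix one

def transaction_cost(num_vms: int, duration: int) -> float:
    # Cost is proportional to the number of VMs and how long they are held.
    return COST_PER_VM_SECOND * num_vms * duration

def effort(node: Node) -> float:
    # Assumed formula: contribution normalised by capacity, so a small node
    # sharing everything scores higher than a large node sharing the same amount.
    return node.contributed / node.capacity if node.capacity else 0.0

def settle(consumer: Node, providers: list[Node], num_vms: int, duration: int) -> None:
    # Deduct the cost from the consumer and split it among the providers.
    cost = transaction_cost(num_vms, duration)
    consumer.credit -= cost
    consumer.consumed += num_vms * duration
    share = cost / len(providers)
    for p in providers:
        p.credit += share
        p.contributed += (num_vms * duration) // len(providers)
```

With this effort definition, a 1-VM node that donated 20 VM-seconds ranks above a 3-VM node that donated the same amount, which matches the fairness argument above.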

2.1 Prototype Implementation for Incentive Mechanisms

We have implemented a prototype of the incentive-based regulation mechanism that was proposed in [1, 2]. We implemented the components in the Python programming language and used CouchDB (http://couchdb.apache.org) as the database. We chose Python because the current host operating system installed on ONs is OpenWrt (http://openwrt.org), which supports Python but does not support many other languages, such as Java. We selected CouchDB because, among its advantages, it is lock-free and schema-less, provides a REST interface, and is also part of other components of the SN's cloud management software being developed. On the SNs, the Debian operating system is installed.

Figure 2.1: Details of the VM request operation by an ON

ONs use a remote procedure call (RPC) mechanism to connect to the SN. First of all, an ON assigns itself to a parent SN with a register message which includes metadata of that ON, such as its IP address, total capacity and number of VMs shared. This registration information is stored in the ON-List database of the parent SN by creating an entry for the corresponding ON. After that, the ON is ready to send request messages to its parent SN. Figure 2.1 shows the request processing algorithm followed by the SN [2]. When an ON requests any VMs from its parent SN, it specifies the duration for which it needs to use the VMs. This request is evaluated by performing the incentive and decision mechanisms explained above. If a request cannot be met locally, the corresponding parent SN checks its SN-List database to find another zone with available resources. The interactions between SNs are also made through an RPC mechanism.

In the SN controller software, there is a separate process which regularly checks the database for any updates. If a consumer ON's resource request duration has expired, it frees the VMs, makes them available again for the provider ON, and updates the metadata entries of the corresponding ONs in the ON-List database. The current implementation keeps track of the number of VMs contributed and consumed by each ON. The system copes with ONs connecting to and disconnecting from the SN at any time, since ONs periodically send heartbeat messages to the SN. The design allows us to include, in addition, values of metrics like CPU, memory and bandwidth usage, which in the future could be used for fine-grained decisions on resource assignments.
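The background pass that frees expired VM reservations can be sketched as follows. The dictionary is a hypothetical in-memory stand-in for the SN's CouchDB-backed ON-List database, and all field names are assumptions for illustration.

```python
import time

# Hypothetical stand-in for the ON-List database; the prototype keeps
# these entries in CouchDB. One expired lease is seeded for f101.
on_list = {
    "f101": {"shared": 1, "available": 0,
             "leases": [{"vms": 1, "expires_at": time.time() - 1}]},
    "f102": {"shared": 3, "available": 3, "leases": []},
}

def reclaim_expired_leases(db: dict, now: float) -> int:
    """Free VMs whose requested duration has passed, as the SN's separate
    checker process does on each pass. Returns the number of VMs reclaimed."""
    freed = 0
    for entry in db.values():
        still_active = []
        for lease in entry["leases"]:
            if lease["expires_at"] <= now:
                entry["available"] += lease["vms"]  # VM back to the provider
                freed += lease["vms"]
            else:
                still_active.append(lease)
        entry["leases"] = still_active
    return freed
```

In the real system this loop would also update the corresponding ON metadata documents over CouchDB's REST interface rather than mutate a local dictionary.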

We deploy the prototype [3] of the regulation component of the cloud coordinator from the community cloud manager in the Community-Lab testbed (http://community-lab.net) [6], which is provided by the CONFINE European project (http://confine-project.eu) [7]. The cloud coordinator components are installed on nodes of the Community-Lab testbed, which consist of devices of the model Jetway JBC372F36W. Depending on the experiment, one or two nodes operate as SNs, while each ON hosts between one and four VM instances. The objectives of the experiments are twofold:




Table 2.1: Resource capacity of nodes

  Node ID   Capacity   Shared VMs
  f101      2          1
  f102      3          3
  f103      3          1
  f104      1          1

1. Experiment 1: Assess the prototype operation regarding the incentive-based resource assignment algorithm in a local community cloud scenario.

2. Experiment 2: Study the coordination between SNs from different zones in the federated community cloud scenario with heterogeneous resource distribution.

2.1.1 Experiment 1: Resource Assignment in Local Community Cloud Scenario

In order to study the performance of the prototype in a real deployment of a local community cloud, we install our software components on five nodes of the Community-Lab testbed, which are connected to the Guifi.net community network. Each node behaves as an ON but with a different configuration, in order to have a heterogeneous set of cloud resources. Table 2.1 shows the capacity of the different nodes in terms of VM instances and the number of VMs made available for sharing with other nodes. For instance, node f102 shares all of its available capacity, while node f103 shares only one-third of its capacity with the other nodes.

In this experiment, one node acts as the SN while four nodes act as ONs which connect to the single SN. Each ON sends requests for VM instances to the SN at regular intervals. VMs are requested for a 20-second interval at a time. Each ON requests as many VMs as its total capacity; for example, node f101 always requests 2 VMs. If the request is accepted by the SN, the ON obtains the VMs for the next 20 seconds. If the request is rejected, the ON waits for 5 seconds before making further requests. The experiment is run with this setup for around 5 minutes. We analyse the different aspects of the system behaviour in the following.
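The per-ON request behaviour just described (request full capacity, hold for 20 s when granted, back off 5 s when rejected) can be sketched with simulated time; `sn_decide` is a hypothetical stand-in for the SN's RPC decision, not the prototype's actual interface.

```python
def on_request_loop(capacity: int, sn_decide, total_time: int = 300,
                    hold: int = 20, backoff: int = 5):
    """Simulate one ON's loop from Experiment 1: ask for as many VMs as its
    total capacity, hold them for 20 s when granted, wait 5 s when rejected.
    Time is advanced logically rather than with real sleeps."""
    t, granted, rejected = 0, 0, 0
    while t < total_time:
        if sn_decide(capacity):
            granted += 1
            t += hold       # VMs held for the 20-second interval
        else:
            rejected += 1
            t += backoff    # wait 5 s before the next request
    return granted, rejected
```

For example, an ON whose requests are always accepted completes 300 / 20 = 15 transactions over the roughly 5-minute run, while one that is always rejected retries 300 / 5 = 60 times.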

2.1.1.1 Resource Utilization

Figure 2.2 shows the level of resource utilization in the system in terms of the number of reserved VMs versus the total number of VMs. It can be seen that resource utilization varies widely, and 100% utilization, meaning all the VMs are occupied, occurs only for short intervals. This is because, as nodes obtain VMs, they spend their credit and can no longer request further VMs. At approximately second 80, the utilization gets very low. Nodes then need to earn credits by providing VMs to others before they can request VMs again. So even though VMs are available, they cannot be utilized due to the lack of credit in the system.

2.1.1.2 Credit Distribution

Figure 2.3 shows the credit distribution among the four ONs during the 5 minutes of the experiment. A node's credit is affected by how many VMs it shares and how much credit it spends to obtain VMs. When a node shares most of its capacity, like ON f102 providing all its 3 VMs, it earns more credits and so maintains a high credit level during the experiment. On the other hand, when a node continuously consumes VMs, like ONs f101 and f104, it keeps spending its credit, which does not go beyond a certain level. Of particular interest is the behaviour of ON f103, which earns credit at the start and gets a spike in credit level halfway through the experiment, but then quickly spends it as it requests VMs from others. Note that an ON's credit can be negative or higher than 100% of the total credit, because in the current implementation the SN can allow requests from ONs with zero or negative credit.

Figure 2.2: Overall resource utilization of the four ONs

Figure 2.3: Distribution of credit among the four ONs

2.1.1.3 Success Ratio

Figure 2.4 shows the ratio of fulfilled requests for each node, which is affected by the node's credit level and the amount of resources available in the system. ON f104 has the most success, since it requests only one VM at a time, while ON f103 has the least success, since it requests 3 VMs, which is half of the total shared VMs in the system. ON f101, on the other hand, gets requests rejected because of lack of credit, and therefore has to wait to gain the needed credits.

2.1.2 Experiment 2: Resource Assignment in Federated Community Cloud Scenario

In this experiment, we set up two local clouds, each with one SN and four ONs, to study the federated community cloud scenario, as illustrated in Figure 2.5. Table 2.2 shows the two cases with different numbers of VMs available in the two zones. In the case of scarce capacity (case 1), the nodes in the SN1 zone share very few VMs compared to the nodes in the SN2 zone. In the case of equal capacity (case 2), the nodes in both zones share the same number of VMs.




Figure 2.4: Ratio of fulfilled and rejected requests for each ON

Figure 2.5: Super and ordinary nodes in federated community cloud

Figure 2.6 shows the proportion of requests fulfilled by VMs provided by the other zone. With scarce capacity in the SN1 zone, around 50% of the requests are fulfilled by VMs provided by the SN2 zone. SN2, with sufficient capacity, is able to meet most of the requests with VMs from within its own zone, forwarding less than 15% of requests to the other zone. In the second case, when both zones have the same available capacity, most of the requests get processed within the same zone for both SNs. This shows that a federated community cloud scenario extends the resources available to zones with limited capacity.




Table 2.2: Two cases with different resource distribution between zones

                Case 1: Scarce Capacity     Case 2: Equal Capacity
  SN    ON      Total VMs   Shared VMs      Total VMs   Shared VMs
  SN1   ON1     3           1               3           2
        ON2     3           1               3           3
        ON3     3           1               3           2
        ON4     1           1               1           1
  SN2   ON1     3           2               3           2
        ON2     3           3               3           3
        ON3     3           2               3           2
        ON4     1           1               1           1

Figure 2.6: Resource assigned over different SN zones

2.2 Simulation Experiments for Incentive Mechanisms

We have studied incentive mechanisms for resource regulation within a single SN zone, which corresponds to a local community cloud scenario [1]. We have also extended our simulator to study resource regulation across multiple SN zones, covering both local and federated community cloud scenarios [2]. Even though we also implemented and deployed a prototype of the regulation component of the Cloud Coordinator on nodes of a real community network [3], only a handful of nodes are currently available, so analysing our proposed system at greater scale with the real prototype is too limited. We therefore focused on results from the simulation experiments, where our scenario could be extended to a community cloud consisting of 1,000 nodes.

Our results indicate the impact of incentive mechanisms on the efficiency of the system and on regulating the resource assignments. The understanding gained from the different experimental results helps in the design of the policies that such incentive mechanisms could follow in a future prototype of a real community cloud system.




Table 2.3: Configuration for each node in a zone with shared and total instances

  Behaviour    Shared   Small capacity   Medium capacity   Large capacity
  Selfish      33%      ON1 (1/3)        ON2 (2/6)         ON3 (3/9)
  Normal       66%      ON4 (2/3)        ON5 (4/6)         ON6 (6/9)
  Altruistic   100%     ON7 (3/3)        ON8 (6/6)         ON9 (9/9)

2.2.1 Experiment Setup

We simulate a community network comprising 1,000 nodes, divided into 100 zones, where each zone has one super node and nine ordinary nodes [2]. The zones are distributed in a small-world topology in which each zone is a neighbour of 10 other zones. This approximation holds well for real-world community networks: for example, topology analysis of Guifi.net [8] shows that the ratio of super nodes to ordinary nodes is approximately 1 to 10. Each ordinary node in the simulation can host a number of VM instances that allow users' applications to run in isolation. Nodes in a zone have two main attributes: capacity, which is the number of available VM instances, and sharing behaviour, which is how many instances are shared with other nodes.

Table 2.3 shows the different configurations for each of the nine ONs in each zone. Nodes with low, medium and high capacity host 3, 6 and 9 VM instances respectively, and they exhibit selfish, normal or altruistic behaviour, sharing one-third, two-thirds or all of their VM instances. For example, node ON2 has medium capacity with 6 instances and exhibits selfish behaviour, reserving 4 instances for itself and contributing only 2 to the system.

When the experiment runs, nodes make requests for resources proportional to their capacity, asking for two-thirds of their capacity. For instance, nodes with a capacity of 3, 6 and 9 VM instances request 2, 4 and 6 instances respectively. Nodes request instances for a fixed duration and, after a transaction is complete, wait briefly before making further requests.
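The simulated population above can be generated by crossing the three behaviour classes with the three capacity classes, as in Table 2.3. This is an illustrative sketch of the setup, not the project's simulator; all names are assumptions.

```python
import itertools

# Behaviour classes (name, fraction of capacity shared) and capacity classes
# taken from the experiment setup / Table 2.3.
BEHAVIOURS = [("selfish", 1 / 3), ("normal", 2 / 3), ("altruistic", 1.0)]
CAPACITIES = [3, 6, 9]

def build_zone() -> list[dict]:
    """One zone's nine ONs: each behaviour crossed with each capacity class."""
    zone = []
    for (behaviour, frac), cap in itertools.product(BEHAVIOURS, CAPACITIES):
        zone.append({
            "behaviour": behaviour,
            "capacity": cap,
            "shared": round(cap * frac),
            # Nodes request two-thirds of their capacity per the setup.
            "request_size": round(cap * 2 / 3),
        })
    return zone

# 100 zones of 9 ONs each; with one SN per zone this gives 1,000 nodes.
network = [build_zone() for _ in range(100)]
```

For instance, the selfish medium-capacity node comes out with capacity 6 and shared 2, matching ON2 in Table 2.3.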

2.2.2 Experimental Results

We evaluate the impact of the effort-based incentive mechanisms on the system in simulation experiments and discuss the results below [2]. We study the success ratio, i.e. the number of requests fulfilled versus total requests, and the overall resource utilization in the system.

2.2.2.1 Ratio of Successful Requests

Table 2.4 shows the success ratio for requests made by different nodes, analysed both with the effort-based and the contribution-based incentive mechanisms. We first notice that the success ratio values decrease as the capacity of the nodes increases. This is explained by the fact that nodes with greater capacity request more instances and so have a higher chance of getting rejected, either because there are not many resources available in the system or because the requesting nodes do not have sufficient credit.

Table 2.4: Success ratio of nodes for different configurations with effort- and contribution-based incentives

  Behaviour    Incentives           Small capacity   Medium capacity   Large capacity
  Selfish      effort-based         54%              53%               50%
               contribution-based   66%              59%               39%
  Normal       effort-based         90%              91%               86%
               contribution-based   97%              77%               66%
  Altruistic   effort-based         97%              94%               86%
               contribution-based   97%              85%               65%

Moreover, when we compare the success ratio of nodes as capacity increases, we observe greater variation in the case of contribution-based incentives. For instance, for the normal sharing behaviour the values range from 66% to 97% for contribution-based incentives, but only from 86% to 90% for effort-based incentives. This is explained by the fact that the contribution-based approach does not take the heterogeneity of nodes into account and penalizes nodes with low capacity, as they cannot contribute as much to the system as others. These results indicate that effort-based incentives ensure fairness in the system, since nodes with the same sharing behaviour are treated equally irrespective of their capacity.

2.2.2.2 Breakdown of Request Responses

Figure 2.7: Breakdown of outcome of requests with effort and contribution based mechanisms

Figure 2.7 shows the breakdown of successful and rejected requests. The success ratio is higher for effort-based incentives. Moreover, the contribution-based mechanism has a greater share of requests rejected because of lack of credit. This indicates that effort-based incentives result in better efficiency, as more resources remain utilized. Another observation is that the majority of requests are fulfilled using resources from the local zone, with very few requests forwarded to other zones.

2.2.2.3 Resource Utilization

Figure 2.8 shows the proportion of resources utilized in the system along the execution of a 24-minute experiment for the effort-based and the contribution-based approach. At the start all nodes have enough credit and the resource utilization is high; it drops to below 60% at around the 12th minute. Then, since most of the nodes have completed their transactions and consumed their credits, the utilization decreases significantly. The effort-based approach, though, achieves a higher resource utilization during that time.

Figure 2.8: Resource utilization along 24 minutes of the experiment

Figure 2.9: Success ratio comparison of provider ON selection strategies

2.2.2.4 Node Selection Policies

Policies for Node Selection: When an SN processes requests for resources, there may be multiple nodes that can act as providers, so the SN applies a selection policy for prioritizing which nodes to choose. Similarly, when the SN forwards requests to other SN zones, it also has to select between multiple zones that have resources available. We evaluated a number of selection criteria that can be employed in the above algorithm, and observed in experiments that the low-credit-first and high-score-first policies were better in terms of efficiency of the system. In the following we explain these different policies and discuss the motivation behind them.

Low Credit First Selection: When nodes consume resources, their credit gets spent, and with time their credit may be too low to request any resources. Such nodes can provide their resources to other nodes and earn credit, which allows them to participate in the system again. This policy gives priority to nodes with low credit, with the aim of ensuring that most nodes participate in the system and are not left out because of lack of credit.

When multiple SN zones participate in the system, the same problem exists, since nodes in a particular zone may have all their credit spent and cannot request any more resources. So the algorithm above gives preference to such zones by applying the low-credit-first policy when selecting other SNs to forward requests to.

High Score First Selection: One issue with the low-credit-first approach is that it does not differentiate among nodes with low credit. Some of the nodes may be inactive and not making any requests, while others may be getting their requests rejected because of inadequate credit. In this policy, the SN tracks unsuccessful attempts by each node and assigns it a score based on them. Nodes with a higher score get preference so they can recover their credits.

Other Policies: We also considered the following policies and compared their effect on the efficiency of the system.
- First-in-first-out (FIFO). In this simple policy, as soon as nodes have free resources, they register their availability with the SN, which keeps adding them to a queue. When processing requests, the SN selects the node that has been in the queue the longest.
- Random. In this policy, the SN picks a node at random from the queue.
- High credit first. This is the opposite of the low-credit-first policy: here nodes with more credit are chosen first.

Figure 2.9 shows the effect of the different node selection policies on the success ratio when using effort-based incentives. The high-credit-first and first-in-first-out policies perform poorly, since they do not consider the credits of the nodes and so fail to ensure a balanced distribution across the system. The low-credit-first and high-score-first policies perform better, since they give preference to nodes with low credit, allowing them to earn more so that they can be successful with their future requests.
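The selection policies compared above can be sketched as follows (our own illustration with hypothetical field names, not the simulator's code); the SN keeps a queue of candidate providers, oldest entry first, and picks one according to the active policy:

```python
import random

def select(providers, policy):
    """Pick one provider from the candidate queue. Each provider
    carries its remaining credit and a score counting its recently
    rejected requests (used by high-score-first)."""
    if policy == "fifo":
        return providers[0]                                # waited longest
    if policy == "random":
        return random.choice(providers)
    if policy == "high-credit-first":
        return max(providers, key=lambda p: p["credit"])
    if policy == "low-credit-first":
        return min(providers, key=lambda p: p["credit"])
    if policy == "high-score-first":
        return max(providers, key=lambda p: p["score"])    # most rejections
    raise ValueError(f"unknown policy: {policy}")

queue = [{"id": "n1", "credit": 9, "score": 0},
         {"id": "n2", "credit": 1, "score": 4},
         {"id": "n3", "credit": 3, "score": 1}]
assert select(queue, "fifo")["id"] == "n1"
assert select(queue, "low-credit-first")["id"] == "n2"
assert select(queue, "high-score-first")["id"] == "n2"
```

Note how low-credit-first and high-score-first both end up favouring n2, the node that is currently locked out of the system, which is why these two policies balance participation best.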


3 Research on distributed file systems in communities

3.1 Experiments with distributed file systems

We argue that community network conditions are too diverse to be modelled comprehensively in simulation environments. End users of community networks, who are actually the users of these applications, require quality-of-experience measurements that indicate the application performance they can expect, rather than measurements at lower technical levels and under unrealistic laboratory conditions. We conclude therefore that the understanding of the behaviour of applications in community networks needs to come from measurements in real deployments, which describe the true application behaviour in a realistic way.

The approach we propose for performance assessment of applications in community networks is to set the experimental conditions as seen from the end user: experiment in production community networks, focus on metrics that are of interest for end users, and deploy applications on real nodes integrated in community networks.

We look at two distributed secure storage applications, Tahoe-LAFS1 and XtreemFS2, to be used within and between these wireless community-owned networks. Tahoe-LAFS is an open-source distributed cloud storage system. Features of Tahoe-LAFS relevant for community networks are that it encrypts data at the client side, erasure-codes it and then disperses it among a set of storage nodes. This approach results in high availability: even if some of the storage nodes are down or taken over by an attacker, the entire file system continues to function correctly, while preserving privacy and security. XtreemFS is an open-source fault-tolerant distributed storage system. Its different Object Storage Devices (OSDs) and replica selection policies make it a very suitable distributed file system for community networks.
We host the Tahoe-LAFS and XtreemFS distributed applications on nodes of community clouds spread inside the community networks.

The contribution of this chapter consists in showing how the real use case of an application can serve as the experimental scenario, exemplified by studying the performance of the Tahoe-LAFS and XtreemFS distributed storage applications on clouds within community networks. Options like evaluating Tahoe-LAFS and XtreemFS under laboratory conditions cannot be considered valid alternatives, since our goal is to assess the applications' performance in the community networks, obtaining a performance characterization as seen by the end users. Since the results we obtain are aimed at users, we need to provide a valid assessment of the applications' performance as perceived by them. In order to achieve such results, we run our experiments on real infrastructures deployed in community networks.

Understanding the performance of Tahoe-LAFS and XtreemFS from an experimental scenario that represents the real use case is highly important because it reaches the end user and indicates which application performance he/she will perceive. These types of performance results will therefore pave the way for bringing applications such as Tahoe-LAFS, as well as other applications, into community networks. Applications are what will ultimately open up community networks and make them attractive to a large number of users.

1 http://www.tahoe-lafs.org/
2 http://www.xtreemfs.com/


3.2 Tahoe-LAFS

Tahoe-LAFS is a decentralised storage system with provider-independent security: the user is the only one who can view or modify the data. The storage service provider never has the ability to read or modify the data, thanks to standard cryptographic techniques. The general idea is that the client stores files on the Tahoe-LAFS cluster in encrypted form. The clients maintain the cryptographic keys needed to access the files. These keys are embedded in read/write/verify "capability strings". Without these keys, no entity is able to learn any information about the files in the storage cluster. The data and metadata in the cluster are distributed among servers using erasure coding and cryptography. The erasure coding parameters determine how many servers are used to store each file, denoted N, and how many of them are necessary for the files to be available, denoted K. The default parameters used in Tahoe-LAFS are K=3 and N=10, which means that each file is spread across 10 different servers, and the correct function of any 3 of those servers is sufficient to access the file. This makes Tahoe-LAFS tolerant to multiple storage server failures or attacks.

The Tahoe-LAFS grid or cluster consists of a set of storage nodes, client nodes and a single coordinator node called the Introducer. The main responsibility of the Introducer is to act as a kind of publish-subscribe hub. The storage nodes connect to the Introducer and announce their presence, and the client nodes connect to the Introducer to get the list of all connected storage nodes. The Introducer does not transfer data between clients and storage nodes; the transfer is done directly between them. The Introducer is a single point of failure for new clients or new storage peers, since they need it for joining the storage network.
We note that for a production environment the Introducer must be deployed on a stable server of the community network.

When the client uploads a file to the storage cluster, a unique public/private key pair is generated for that file, and the file is encrypted, erasure coded and distributed across storage nodes (with enough storage space). To download a file, the client asks all known storage nodes to list the number of shares of that file they hold, and in the subsequent round the client chooses which shares to request based on various heuristics such as latency and node load.
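The fault tolerance of the K-of-N scheme can be quantified with a short calculation (our own sketch; the 3-of-10 values are the Tahoe-LAFS defaults mentioned above, while the 80% per-server availability is an assumed figure for illustration):

```python
from math import comb

def availability(n: int, k: int, p_up: float) -> float:
    """Probability that at least k of n independent storage servers
    are reachable, each being up with probability p_up."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
               for i in range(k, n + 1))

# With the default 3-of-10 erasure coding, a file survives even if
# 7 of its 10 share holders are unreachable:
print(round(availability(10, 3, 0.8), 6))  # -> 0.999922
```

Erasure coding obtains this availability at a storage expansion of only N/K ≈ 3.3x, whereas plain replication would need 10 full copies to place a share on every one of the 10 servers.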

3.2.1 The Experiment Environment

For our experiment, three physical configurations are used. The main configuration includes nodes of two geographically distant community networks: Guifi.net in Spain and AWMN (Athens Wireless Metropolitan Network) in Greece. The second configuration is deployed on nodes of a single community network, Guifi.net in Spain. And the baseline configuration consists of a local deployment of Tahoe-LAFS, needed to better understand the network effects present in the other two community network scenarios. The connectivity between community network nodes varies significantly, and we observe that network characteristics are not symmetric. Both community networks (Guifi.net and AWMN) are connected on the IP layer via the FEDERICA (Federated E-infrastructure Dedicated to European Researchers) [9] infrastructure, enabling network federation.

The nodes of our experiments are real nodes, connected through wireless IEEE 802.11 a/b/g/n technology, using equipment from various manufacturers, while different routing protocols are used in some zones of the network where our nodes are located. The wireless links operate in the ISM frequency bands at 2.4GHz and 5GHz. While most links of the network are wireless, optical fibre also interconnects some zones.

For the two-community network configuration, our Tahoe-LAFS cluster consists of a total of 15


nodes. From the Guifi.net community network we use 10 nodes, where 8 of them are storage nodes, 1 is the client node and 1 is the Introducer node. From AWMN we use 5 nodes as storage nodes. The client node in our two-community experiment is located in the Guifi.net (Spain) community network. Most Community-Lab nodes are built with a Jetway device that is equipped with an Intel Atom N2600 CPU, 4GB of RAM and a 120GB SSD. Nodes run a custom Linux OS (based on OpenWrt) provided by CONFINE. We deploy Tahoe-LAFS in slivers of these nodes by using the Guifi-Clommunity-Distro (Cloudy) operating system image.

For the single-community network configuration, we also use a total of 15 nodes, 13 of them as storage nodes, 1 node as the Introducer node and 1 node as the client node. The client node in this second experiment is also located in Guifi.net, but on a node different from the client node used in the two-community experiment.

For the baseline configuration, we use 15 nodes in total (13 storage nodes, 1 Introducer and 1 client node), but deploy them on virtual machines in a single PC, in order to be able to compare the performance of Tahoe-LAFS in a scenario with reduced effects of network latency and bandwidth.

All storage nodes contribute 1GB to the Tahoe-LAFS cluster. Table 3.1 shows the geographical distribution of the nodes used for the first experiment.

Table 3.1: Nodes in the Tahoe-LAFS grid and space that is shared with the grid

Nr. of nodes  CN               Location          Shared space
5             Guifi.net        Catalunya, Spain  1GB
5             Guifi.net (UPC)  Barcelona, Spain  1GB
5             AWMN             Athens, Greece    1GB

In the following we evaluate the Tahoe-LAFS storage performance in community networks in order to assess the impact of WAN latency and bandwidth. We focus on the read/write performance of Tahoe-LAFS in community networks and ignore other features such as recovery performance, repair, maintainability, etc.

3.2.2 Results

We collect the measurements for Tahoe-LAFS in community networks by either reading or writing fixed-size objects (files). We ignore tests with concurrent reads and writes. In the tests we use workloads consisting of 10MB (small-file), 20MB (medium-file) and 50MB (large-file) objects. We use 5 consecutive write (upload) operations and 5 consecutive read (download) operations on objects of size 10MB, 20MB and 50MB. We present averages of read/write operations in minutes for the two-community and the single-community experiment and in seconds for the local experiment. The upload and download files that we use in the experiment are of different types, such as .jpg, .pdf, .zip, .mp4, etc. In every upload we use different file types and content, because if we upload the same file again and again, Tahoe-LAFS returns the same capability string for the file. The capability string is derived from two pieces of information: the content of the file and the convergence secret (a secret randomly generated by the node when it first starts up). We use the default 3-of-10 Tahoe-LAFS erasure coding parameters for our experiments.
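The dependence of the capability string on both the file content and the node-local convergence secret can be sketched as follows (our own simplification; Tahoe-LAFS's actual key derivation is more involved than a single hash):

```python
import hashlib
import os

# Node-local convergence secret, generated once at first start-up.
convergence_secret = os.urandom(32)

def capability_fingerprint(content: bytes) -> str:
    """Toy stand-in for a capability string: identical content
    uploaded by the same node maps to the same value, which is why
    our experiments upload different content in every run."""
    return hashlib.sha256(convergence_secret + content).hexdigest()

assert capability_fingerprint(b"sample-1") == capability_fingerprint(b"sample-1")
assert capability_fingerprint(b"sample-1") != capability_fingerprint(b"sample-2")
```

Repeated uploads of the same bytes would thus measure deduplicated no-ops rather than real write paths, which is the effect the varied workload files avoid.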


Figure 3.1: Write/read performance of Tahoe-LAFS in two-community networks

Figure 3.2: Write operation counts of Tahoe-LAFS per node

3.2.2.1 Two-community Network Evaluation

The results for the two community networks are shown in figure 3.1. Write and read latency is shown in minutes. The moderate write performance of Tahoe-LAFS can be attributed to the fact that the default stripe size is optimized for writing small objects (the stripe size determines the granularity at which data is encrypted and erasure coded). This results in a 2.8-minute write latency for the 50MB file using the 3-of-10 configuration. Another factor that contributes to this latency is the topological placement of the nodes. During a write operation, Tahoe-LAFS will try to distribute the shares as widely as possible, using a different pseudo-random permutation for each file, but Tahoe-LAFS is unaware of node properties like "location". If there are more free nodes than shares, it will only put one share on any given node, but we might end up with more shares in one location than in the others.

Figure 3.2 shows the frequency of write operations per node when writing files of size 50MB, 20MB and 10MB. As we can see, during the 5 consecutive write operations for every file size, the node located in AWMN, Greece receives the most shares, 14 in total. This means that the node is written to 14 times out of 15. This shows that Tahoe-LAFS is unaware of the location of the nodes, as it writes to the farthest node. Table 3.2 shows the bandwidth and round-trip time (RTT) from the client to the storage nodes. The client node which writes and reads is located in the Guifi.net community network. For write operations the average throughput was 2.2 Mbits/sec.

Table 3.2: RTT and bandwidth between client and storage nodes in the two-community network scenario

Nodes                                       RTT               Bandwidth
UPC-lab104-VM08, UPC-lab104-VM09,
UPC-lab104-VM02, UPC-D6-105-RD4             0.100 - 0.800 ms  100 - 150 Mbits/sec
HW-ermita11, LLUsbgTorre, GB-MNBufalvent,
GB-SFBDipositSanmarti, Local-i7,
LLUperafitaPriona                           5 - 25 ms         7 - 10 Mbits/sec
AWMN-CF-7bpm, AWMN-DA-Town Hall,
AWMN-HQ-LAB-02, AWMN-CF-djk604,
AWMN-CF-Wolfpack                            100 - 150 ms      3 - 5 Mbits/sec

Another factor which affects the write performance is that, when writing new objects, Tahoe-LAFS generates a new public/private key pair, which is a computationally expensive operation. Tahoe-LAFS also has to deal with the overhead of writing the file system metadata objects (i.e., the mutable directory objects) every time an object is accessed.

Read operations are accomplished by submitting the request to all storage nodes simultaneously; hence the relevant peers are found with one round trip to every node. A second round trip occurs after choosing the peers to read the file from: its intention is to select which peers to read from, after the initial negotiation phase, based on some heuristics. Read operations within the two-community networks resulted in better performance than write operations. This is expected in erasure-coded systems, since reads in fact transfer less data than writes. For 50MB of data the read operation took 42 seconds (0.7 minutes), a good result for the community network, with an average throughput of 12 Mbits/sec.
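The two-round read described above can be sketched like this (our own illustration, not Tahoe-LAFS's wire protocol; the node names and the latency heuristic are made up):

```python
K = 3  # shares needed to reconstruct a file (default 3-of-10)

def choose_reads(share_map, rtt):
    """Round 1 told us which shares each node holds (share_map);
    round 2 picks K distinct shares, preferring low-RTT peers."""
    picked, have = [], set()
    for node in sorted(share_map, key=rtt.get):      # nearest first
        for share in sorted(share_map[node] - have):
            picked.append((node, share))
            have.add(share)
            if len(have) == K:
                return picked
    raise RuntimeError("fewer than K distinct shares reachable")

shares = {"guifi-a": {0, 1}, "guifi-b": {1, 2}, "awmn-a": {3}}
latency_ms = {"guifi-a": 5, "guifi-b": 8, "awmn-a": 120}
print(choose_reads(shares, latency_ms))
# -> [('guifi-a', 0), ('guifi-a', 1), ('guifi-b', 2)]
```

With such a heuristic a reader can skip the remote AWMN node entirely as long as K distinct shares exist on nearby peers, which helps explain why reads outperform writes in our measurements.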

3.2.2.2 Single-community Network Evaluation

Figure 3.3 shows the write/read performance in a single community network (Guifi.net) comprising 15 nodes. As can be seen, the write/read performance is slightly better compared to the two-community network experiment, with a write operation latency of 2.1 minutes and a read operation latency of 0.5 minutes for the 50MB file. Table 3.3 shows the RTT and bandwidth between the client node and all the storage nodes.


Figure 3.3: Write/read performance of Tahoe-LAFS in single-community network

Table 3.3: RTT and bandwidth between client and storage nodes in the single-community scenario

Nodes                                         RTT               Bandwidth
UPC-lab104-RD1, UPC-lab104-VM10,
UPC-lab104-RD2, UPC-lab104-VM02,
UPC-lab104-VM09, UPC-lab104-VM08,
UPC-D6-105-RD4                                0.700 - 0.800 ms  90 - 100 Mbits/sec
VM1-i7-UB1304, VM2-i7-UB1304,
VM3-i7-UB1304, PC2-VM-i7-11, PC2-VM-i7-12,
PC2-VM-i7-13, PC2-VM-i7-14, PC2-VM-i7-15      6 - 10 ms         10 - 12 Mbits/sec

3.2.2.3 Evaluation in local setting

In order to gain insight into the relevance of RTT and bandwidth to the Tahoe-LAFS performance, we conduct the same experiment on nodes inside virtual machines of a single PC. Each virtual machine is assigned 1GB of RAM and 8GB of free disk space. The PC itself has an Intel Core i7-3770 CPU at 3.40GHz, 16GB of memory and 1TB of disk space. For these local tests we use, as before in the real community network, 15 nodes, where 13 are storage nodes, 1 is the Introducer node and 1 is the client node. The bandwidth between the client node and the storage nodes is 970 Mbits/sec. The measured round-trip time is 0.313 ms. Figure 3.4 shows the Tahoe-LAFS write and read performance in the local setting. Compared to the previous two experiments, the write and read times in the local context are reduced to the order of seconds, instead of the minutes we observed before in the WAN environment.

Figure 3.4: Write/read performance of Tahoe-LAFS in local environment

3.2.3 Discussion

There are certain limiting factors in terms of performance when deploying distributed storage systems in a WAN setting. Overall we observe that the WAN characteristics have a significant impact on the performance of Tahoe-LAFS, mainly due to several design choices of Tahoe-LAFS, like a stripe size not well optimized for larger objects, inefficient message passing, and expensive and frequent hash read/write seeks needed to reconstruct shares on the storage peers. Despite these performance limitations, however, the functional behaviour was correct. Our experiment therefore shows that Tahoe-LAFS provides convenient cloud storage services within community networks where data encryption and system reliability are important. The significant read (12 Mbits/sec) and reasonable write (2.2 Mbits/sec) throughput make Tahoe-LAFS a promising application that should be considered for privacy-preserving, secure and fault-tolerant storage in community networks.

3.3 XtreemFS

XtreemFS is an open-source object-based distributed file system for grid and cloud infrastructures. The file system replicates objects for fault tolerance and caches data and metadata to improve performance over high-latency links. As an object-based file system, XtreemFS stores the directory tree on the Metadata and Replica Catalog (MRC) and file content on Object Storage Devices (OSDs). The MRC uses an LSM-tree based database which can handle volumes that are larger than the main memory. OSDs can be added to the system as needed without any data re-balancing; empty OSDs are automatically used for newly created files and replicas. In addition to regular file replication, XtreemFS provides read-only replication. This replication mode works on immutable files and supports a large number of replicas. The read-only replication helps to quickly build a caching infrastructure on top of XtreemFS in order to reduce latency and bandwidth consumption between data centers.


Figure 3.5: Three local clouds

3.3.1 Testing environment

For our experiments, the main configuration includes 30 nodes of the Guifi.net community network, which is one of the biggest community networks worldwide. The nodes of our experiments are real nodes, connected through wireless IEEE 802.11 a/b/g/n technology, using equipment from various manufacturers, while different routing protocols are used in some zones of the network where our nodes are located. The wireless links operate in the ISM frequency bands at 2.4GHz and 5GHz. While most links of the network are wireless, optical fibre also interconnects some zones. We use 10 nodes from each of three different local clouds located in the Catalunya region of Spain, as shown in Figure 3.5 (the UPC local cloud, the HAN local cloud and the TAR local cloud). We observe that network characteristics are not symmetric between the three local clouds. The 10 nodes of each local cloud are located on one PC. Each PC is equipped with an Intel Core i7-3770 CPU at 3.40GHz, 16GB of RAM and a 1TB disk. The nodes inside the PCs are provided as VMs through a Proxmox cluster. Each VM has 1GB of memory and a 20GB disk. Each VM uses the Guifi-Clommunity-Distro (Cloudy) as its operating system image.

3.3.2 Results

Figure 3.6 shows the write performance of XtreemFS when no replication is used. As can be seen, it takes 2 seconds for XtreemFS to write a file of 1MB. XtreemFS uses no encryption at the client side; it only encrypts the data on the network. As we increase the replication factor to 2 and to 5 (figure 3.7 and figure 3.8), the write operation latency of XtreemFS increases, due to the fact that the probability of ending up writing to storage nodes with slower links or slower transfer rates is high.

3.4 Conclusions

XtreemFS, an open-source distributed file system, was evaluated on a very diverse, heterogeneous distributed community cloud infrastructure. The study aimed to provide an understanding of using distributed file systems on distributed heterogeneous cloud-based infrastructures in a production community network. XtreemFS was selected for the experiment according to its potential relevance for community network users, covering solutions for different requirements regarding fault tolerance, storage performance and privacy. In the experiments, the write times of XtreemFS were measured with three different file sizes and with different replication factors. Each experiment was repeated several times to assess the storage system under the dynamic conditions of a community network.

Figure 3.6: Performance of XtreemFS for no replica

Figure 3.7: Performance of XtreemFS for 2 replicas

Figure 3.8: Performance of XtreemFS for 5 replicas


4 Research on Distributed Scalable Storage

4.1 Experiments with Distributed Storage

As already noted in section 3.1, simulations will usually not give good predictions for system performance under the highly heterogeneous conditions seen in community networks. It is, hence, important to evaluate the behaviour of distributed storage systems in a real deployment on a testbed with conditions similar to the final deployment conditions. This chapter contributes such experiments for the distributed key-value store CaracalDB.

4.2 CaracalDB

Since CaracalDB was originally intended for data centre usage, and a community network over wireless links has very different properties from the gigabit ethernet network topologies that data centres usually provide, it was necessary to verify that the storage system would operate in such an environment. In its current state, CaracalDB's replication algorithms are optimised for latency rather than throughput, and latency is probably the area where wireless links have the greatest impact. Thus the following evaluation focuses on operation latency for different network topologies.

4.2.1 Setup

We used YCSB [10], a well-known benchmark standard for key-value stores, for generating load and collecting the results. Every experiment was run using a single YCSB instance with 32 workers concurrently running operations on random nodes in the CaracalDB cluster.

We considered two basic topologies to compare. The first one can be seen in figure 4.1 and contains seven geographically co-located VMs in Barcelona, each with 1GB of memory and 10GB of disk space. Machines one to six act as CaracalDB server nodes, with machine five being the designated bootstrap server. Machine seven is only used as a client to run YCSB against the CaracalDB cluster. This represents our reference topology for a standard cloud deployment.

The second topology aims to model the long latencies and congested links we usually see in community networks over wireless links. As seen in figure 4.2, we used four VMs in Barcelona (a subset of the ones from the reference topology) and three VMs in Stockholm which were connected to Guifi.net via a virtual private network (VPN). In this setup the ping round-trip time (RTT) from the Stockholm to the Barcelona VMs was 78ms. To achieve the desired effect of long latencies and easily congested links, we set up the routing of the Stockholm machines such that they would route via the VPN gateway in Barcelona even when communicating among themselves, giving a 154ms RTT, while keeping the Barcelona machines routing among each other as before, retaining their 0.4ms RTT. Additionally, the VMs in Stockholm have 2GB of memory, not just 1GB like the ones in Barcelona. Overall this topology is very heterogeneous and should serve well to derive performance estimates in adverse conditions.



Figure 4.1: Reference Topology: 7 co-located VMs with full ethernet connectivity. Node #5 isdesignated as “bootstrap node” and node #7 runs only YCSB. Round-trip latency 0.4ms.

For each topology we ran three YCSB workloads on the cluster. Workload A is a write-intensive workload with an equal amount of reads and updates, uniformly distributed over 10 000 keys, with 100 000 operations. Similarly, workload B uses the same keys but is read-heavy, with 95% reads and only 5% updates. Both workloads have all the keys pre-loaded into the database. Lastly, workload E is a scan-heavy workload with 95% scans, of lengths uniformly distributed between one and 100 items, and 5% inserts among 100 000 operations.
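Workload A as described above roughly corresponds to a YCSB core-workload configuration like the following sketch (the property names are YCSB's; the exact workload class path may differ between YCSB versions):

```properties
# Approximation of workload A: 50/50 reads and updates,
# uniformly distributed over 10 000 pre-loaded keys.
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=10000
operationcount=100000
readproportion=0.5
updateproportion=0.5
requestdistribution=uniform
```

Workloads B and E differ only in the proportion properties (and `scanproportion`/`insertproportion` for workload E).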

4.2.2 Results

The results of the experiments, as seen in figures 4.3 and 4.4, show that on the reference topology latencies for all operations are rather low, with averages around 25ms; only the scan workload shows a hint of a tail, with an average of 76ms. This is probably due to long scan operations keeping the nodes occupied for a longer time. As is typical for Paxos state machine replication (SMR) implementations without master leases or similar read optimisations, the latencies for reads, updates and inserts look the same, and only the scans, which keep the storage layer busy longer and result in more data being transferred, deviate slightly.

On the mixed topology things are more interesting, with averages between 70ms and 90ms and much more tail for all the operations. Three steps can be clearly seen for all operations, corresponding to different compositions of the replication groups. With our knowledge about the latencies between nodes, we can deduce that the first step corresponds to replication groups where at least two out of the three replicas are located in Barcelona, and hence requests can be answered without waiting for a potentially slow third replica in Stockholm. The second step reflects mixed setups where one replica in Barcelona and two in Stockholm participate. The request can then be answered by two trips between a Barcelona and a Stockholm node at 78ms each, instead of two trips between Stockholm nodes, which take 154ms each, as is the case for the third step.
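The three latency steps can be reproduced with a deliberately simplified quorum model (our own sketch, ignoring the client leg and protocol details): with three replicas, a Paxos-style majority is complete once the coordinating replica hears back from its nearest peer.

```python
def quorum_rtt(peer_rtts):
    """RTT until a majority is reached: the coordinator itself plus
    the fastest of its two peers (3 replicas, majority of 2)."""
    return min(peer_rtts)

# Step 1: coordinator and one peer both in Barcelona (0.4ms apart);
# the slow Stockholm replica need not be awaited.
assert quorum_rtt([0.4, 78]) == 0.4
# Steps 2/3: coordinator in Stockholm; its nearest peer is either a
# Barcelona node (78ms) or another Stockholm node routed via the
# Barcelona VPN gateway (154ms), giving the higher latency plateaus.
assert quorum_rtt([78, 154]) == 78
assert quorum_rtt([154, 154]) == 154
```

The model reproduces the ordering of the three steps, while the measured values are higher because each operation additionally pays the client-to-coordinator trip and processing time.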

Deliverable D4.3



Figure 4.2: Mixed Topology: 4 VMs with full ethernet connectivity in Barcelona and 3 VMs in Stockholm routing over Barcelona. Node #5 is designated as "bootstrap node" and node #7 runs only YCSB. Round-trip latency from Stockholm to Barcelona 78ms and among Stockholm nodes 154ms.


Figure 4.3: Cumulative read/scan latencies for all workloads on the reference topology (left) and the mixed topology (right).

4.2.3 Conclusions

As shown above, the operation latency suffers severely from such adverse conditions. However, it should be noted that for the mixed topology experiments the hypothetically best possible result would have been all operations completing in less than 156ms, which is the latency from YCSB to the most remote node and back. What we have seen, though, is 95% of all operations completing in under 360ms even in these very adverse conditions, which is barely more than double the "optimal". Clearly, locational awareness is the most important factor to improve these results for a real deployment on



Figure 4.4: Cumulative update/insert latencies for all workloads on the reference topology (left) and the mixed topology (right).

community networks and will certainly be a focus for future research.
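The bound quoted above follows directly from the link latencies (a quick sanity check; the numbers are taken from the experiment description):

```python
# Best case in the mixed topology: every operation costs one trip from the
# YCSB node to the most remote (Stockholm) node and back.
rtt_remote = 78.0                  # ms, Barcelona <-> Stockholm round trip
optimal = 2 * rtt_remote           # 156 ms
observed_p95 = 360.0               # ms, measured 95th percentile
slowdown = observed_p95 / optimal  # ~2.3x the hypothetical optimum
```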

4.3 Datamodel

4.3.1 Setup

The experiments in this section are performed on the Datamodel built on top of the key-value store CaracalDB. We used YCSB [10] for generating tests and collecting the results. The tested environments are the same as for the CaracalDB experiments: a topology with low latencies (figure 4.1) and a topology with high latency (figure 4.2). YCSB tests are used to check the latencies of the Datamodel's put, get and query operations. For each topology we run three YCSB workloads very similar to the CaracalDB workloads, with the only exception that we take into consideration the structure of the data while performing the new operations provided. All workloads perform 100 000 operations over 10 000 uniformly distributed items. As with CaracalDB, we have one read-intensive workload with 95% reads, one balanced workload with 50% reads and 50% writes, and an indexed-query-intensive workload performing 100% indexed queries. The indexed queries perform scans over the values of a field of the items.

4.3.2 Results

The results of the experiments can be seen in figures 4.5 and 4.6. As expected, in the case of low latency links we get an average latency of 35ms for reads, 50ms for inserts and updates, and 70ms for indexed queries returning around 10 items each. In the same environment CaracalDB performs key-value put and get operations with an average latency of 25ms, which means the datamodel overhead is on average 10ms for reads, 25ms for inserts and updates, and around 50ms for indexed queries. Since the datamodel operations involve more messages and underlying CaracalDB operations, this overhead was expected.

In the mixed topology experiments the average latency is 130ms for reads, 150ms for updates and inserts, and 280ms for indexed queries returning around 10 items each. In the same environment CaracalDB has an average of 70ms for reads and 90ms for writes, resulting in the following datamodel overhead: 60ms for reads, 60ms for inserts and updates, and around 200ms for the indexed queries. Similar to the results of CaracalDB, the datamodel results show several steps for each operation, depending on the machines that are involved in each of the CaracalDB sub-operations' quorums. The machines in the Barcelona cloud have lower latency links, while the machines in the Stockholm cloud have higher latency links, and since all CaracalDB operations are quorum-based operations, the composition of



Figure 4.5: Datamodel - cumulative read/scan latencies for all workloads on the reference topology (left) and the mixed topology (right).


Figure 4.6: Datamodel - cumulative update latencies for all workloads on the reference topology (left) and the mixed topology (right).

this quorum determines the steps seen in the results. For the datamodel's indexed query operations the steps are smoothed out, due to the high number of CaracalDB operations performed for each indexed query. For the slowest operation, indexed queries, 95% finish within 550ms, which is close to CaracalDB's 360ms 95th-percentile latency; the price we pay in the datamodel's overhead is small compared to the extra operations the system gains. With the high number of CaracalDB operations involved in each indexed query there is a higher chance of requesting data over the high latency links, which is the main reason for the high latency of indexed query operations.
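The overhead figures quoted above are simply the differences between the datamodel and CaracalDB averages (a sketch restating the reported numbers):

```python
# Average operation latencies (ms) reported in sections 4.2.2 and 4.3.2.
caracal   = {"reference": {"read": 25, "write": 25},
             "mixed":     {"read": 70, "write": 90}}
datamodel = {"reference": {"read": 35, "write": 50},
             "mixed":     {"read": 130, "write": 150}}

# Datamodel overhead = datamodel latency minus the underlying CaracalDB latency.
overhead = {topo: {op: datamodel[topo][op] - caracal[topo][op]
                   for op in ("read", "write")}
            for topo in ("reference", "mixed")}
```

This yields 10ms/25ms overhead on the reference topology and 60ms/60ms on the mixed one, matching the averages in the text.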

4.3.3 Conclusion

The results above show how the latency of datamodel operations varies depending on the latency of the underlying links. The worst-case communication latency between two nodes of around 150ms, coupled with datamodel operations requiring multiple CaracalDB operations, are the main elements that determine the higher latency of datamodel operations. Even under the adverse conditions of the underlying network, the overhead of the datamodel is small enough to allow for its use when applications require a richer interface than key-value operations.


5 Research on Distributed Video Streaming

5.1 Video-on-Demand (Gvod)

5.1.1 Setup

The Gvod experimental setup involves two topologies: a low latency links topology (figure 5.1) and a high latency links topology (figure 5.2). The first topology, figure 5.1, contains 7 VMs located in the same cloud in Barcelona. All VMs are identical, having 1GB of memory and 10GB of disk space. Since all the VMs are in the same cloud, the connections between the leecher and the seeders have very low latency, providing very good performance. In this topology one VM is used for the bootstrap server, one is used for the leecher and between 2 and 5 VMs are used for seeders.

In the second topology, figure 5.2, the Barcelona cloud contains 7 VMs and the Stockholm cloud contains 1 VM. The new VM in the Stockholm cloud has 2GB of RAM and 10GB of disk space. The latency between the Stockholm VM and any of the Barcelona VMs is around 70ms. The bootstrap server and a maximum of 6 seeders are placed on the Barcelona VMs and the leecher is located on a VM in the Stockholm cloud. This ensures that the latency between the leecher and any of the seeders is around 70ms, which is considerably slower than the intra-cloud latency of 0.4ms.

For both topologies the experiments consisted of a leecher downloading a 36MB video from 2 up to 6 concurrent seeders, measuring the leecher's total download time as well as the download rate.


Figure 5.1: Reference Topology: 7 co-located VMs with full ethernet connectivity. Round-trip latency 0.4ms.



Figure 5.2: Mixed Topology: 7 VMs with full ethernet connectivity in Barcelona and 1 VM in Stockholm routing over Barcelona. Round-trip latency from Stockholm to Barcelona 70ms.

Seeders   Download time
2         11.612s
3         11.609s
4         14.518s
5         14.595s

Table 5.1: Download time - leecher and seeders in the same cloud

5.1.2 Results

The results of the experiments performed in the first topology can be seen in table 5.1 and figure 5.3. In this experiment the latencies between the leecher and the seeders are very low (0.4ms), which allows for a high download rate of about 50Mbit/s. Figure 5.3 shows that Gvod's ramp-up period is around 3s and its download end-time period is around 4s. The main download time is similar for 2 or more seeders, which is to be expected: the low latencies between peers allow the leecher to maximise its bandwidth usage with even only 2 seeders.

For the experiments performed in the second topology, the results can be viewed in table 5.2 and figure 5.4. With the increase in latency between the leecher and the seeders, the leecher is no longer able to maximise its bandwidth usage, and we can see the improvement that additional seeders bring to the download speed of the leecher. The ramp-up period and the end-time period are very similar



Figure 5.3: Download rate - leecher and seeders in the same cloud

Seeders   Download time
2         95.531s
3         65.524s
4         53.534s
5         47.537s
6         44.536s

Table 5.2: Download time - leecher and seeders in different clouds


Figure 5.4: Gvod download rate - leecher and seeders in different clouds

to the values obtained in the first setup.

5.1.3 Conclusion

We evaluated Gvod on two different topologies, one with low latency links and one with higher latency links. In the case of the higher latency links, Gvod can take advantage of the multiple seeders in the network to improve its video streaming performance.

Gvod can deliver high quality video at both 720p and the higher quality 1080p, since a 720p video encoded with the H.264 standard requires a bitrate of 3Mbit/s and a 1080p video encoded with the same standard requires a bitrate of 7Mbit/s. Even with a low number of seeders and


high latency links, Gvod can provide this kind of performance, as can be seen in the results above.

However, the system requires improvements for the ramp-up period and the end-time period, because at the moment it does not employ any particular strategies for these special cases. The ramp-up period is noticed by the user as buffering time and should be as short as possible. The end-time period is due to slower or lost messages and can be noticed by the user if the download rate is very close to the video rendering rate. Providing solutions for these two problems will be the focus of future research.
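The bitrate argument can be checked against the measured download times (a sketch; the file size and times come from tables 5.1 and 5.2, the bitrates are those quoted above):

```python
# Effective average download rates in Mbit/s implied by the measurements.
FILE_MBIT = 36 * 8                  # 36 MB video = 288 Mbit

same_cloud  = FILE_MBIT / 11.609    # fastest run, Table 5.1 (~24.8 Mbit/s)
cross_cloud = FILE_MBIT / 44.536    # 6 seeders,   Table 5.2 (~6.5 Mbit/s)

H264 = {"720p": 3.0, "1080p": 7.0}  # required bitrates in Mbit/s
```

In the same cloud the average rate comfortably exceeds both bitrates; across clouds the 6-seeder average lands just below the 1080p requirement because it includes the ramp-up period, which underlines why shortening the ramp-up period matters.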


6 Conclusions and Outlook

Experimental evaluation of the research addressed in WP3 was carried out. The experimental evaluation studied different levels of the community cloud system. Resource allocation with incentive-based mechanisms is a means for IaaS regulation of contribution and demand in community clouds. The distributed file system experimentation contributed to shaping the offer of services in the Cloudy community distribution. The experiments on distributed scalable storage solutions helped to validate the proposed solutions for further integration as SaaS in Cloudy. The distributed video streaming experiments gave additional insights into this approach for provisioning such a service in community clouds.

While at the beginning of the project the experiments mainly applied simulation methodologies, along with the development and deployment of the community cloud testbed in T4.2 more and more experiments were conducted on real deployed cloud systems, starting with Community-Lab from the CONFINE project, and later on combining Community-Lab devices with our own community cloud testbed infrastructure.

The experimental environment achieved by the end of the first reporting period now offers, for the second phase of the project, the opportunity for unique kinds of experiments on a distributed and heterogeneous cloud testbed. Together with the permanent deployment of Cloudy on some cloud resources and user participation in the cloud-based services, we expect that extremely valuable experiments will be possible, beyond those possible in any other available testbed, not only from the technical perspective but also from the social perspective, by involving the external community in the goals of the project.


Bibliography

[1] Umit C. Buyuksahin, Amin M. Khan, and Felix Freitag, "Support Service for Reciprocal Computational Resource Sharing in Wireless Community Networks," in 5th International Workshop on Hot Topics in Mesh Networking (HotMESH 2013), within IEEE WoWMoM, Madrid, Spain, June 2013.

[2] Amin M. Khan, Umit C. Buyuksahin, and Felix Freitag, "Towards Incentive-Based Resource Assignment and Regulation in Clouds for Community Networks," in Economics of Grids, Clouds, Systems, and Services, Jorn Altmann, Kurt Vanmechelen, and Omer F. Rana, Eds., vol. 8193 of Lecture Notes in Computer Science, pp. 197–211, Springer International Publishing, Zaragoza, Spain, Sept. 2013.

[3] Amin M. Khan, Umit C. Buyuksahin, and Felix Freitag, "Prototyping Incentive-based Resource Assignment for Clouds in Community Networks," in 28th International Conference on Advanced Information Networking and Applications (AINA 2014), Victoria, Canada, May 2014, IEEE.

[4] R. Rahman, M. Meulpolder, D. Hales, J. Pouwelse, D. Epema, and H. Sips, "Improving Efficiency and Fairness in P2P Systems with Effort-Based Incentives," in 2010 IEEE International Conference on Communications, pp. 1–5, May 2010.

[5] Davide Vega, Roc Meseguer, Sergio F. Ochoa, and Felix Freitag, "Sharing hardware resources in heterogeneous computer-supported collaboration scenarios," Integrated Computer-Aided Engineering, vol. 20, no. 1, pp. 59–77, 2013.

[6] Axel Neumann, Ivan Vilata, Xavier Leon, Pau Escrich, Leandro Navarro, and Ester Lopez, "Community-Lab: Architecture of a Community Networking Testbed for the Future Internet," in 1st International Workshop on Community Networks and Bottom-up-Broadband (CNBuB 2012), within IEEE WiMob, Oct. 2012.

[7] Bart Braem, Roger Baig Vinas, Aaron L. Kaplan, Axel Neumann, Ivan Vilata i Balaguer, Blaine Tatum, Malcolm Matson, Chris Blondia, Christoph Barz, Henning Rogge, Felix Freitag, Leandro Navarro, Joseph Bonicioli, Stavros Papathanasiou, and Pau Escrich, "A case for research with and on community networks," ACM SIGCOMM Computer Communication Review, vol. 43, no. 3, pp. 68–73, July 2013.

[8] Davide Vega, Llorenc Cerda-Alabern, Leandro Navarro, and Roc Meseguer, "Topology patterns of a community network: Guifi.net," in 1st International Workshop on Community Networks and Bottom-up-Broadband (CNBuB 2012), within IEEE WiMob, Barcelona, Spain, Oct. 2012, pp. 612–619.

[9] "Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures."

[10] Yahoo, "YCSB."


Licence

The CLOMMUNITY project, April 2014, CLOMMUNITY-201404-D4.3:

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
