D6.3 Final release of 5GTANGO platform
Project Acronym: 5GTANGO
Project Title: 5G Development and Validation Platform for Global Industry-Specific Network Services and Apps
Project Number: 761493 (co-funded by the European Commission through Horizon 2020)
Instrument: Collaborative Innovation Action
Start Date: 01/06/2017
Duration: 32 months
Thematic Priority: H2020-ICT-2016-2017 - ICT-08-2017 - 5G PPP Convergent Technologies

Deliverable: D6.3 Final release of 5GTANGO platform
Workpackage: WP6
Due Date: M26
Submission Date: 15/8/2019
Version: 0.1
Status: To be approved by EC
Editors: George Xilouris (NCSRD) and Felipe Vicens (ATOS)
Contributors: Manuel Peuster (UPB), Rafael Schellenberg (UPB), Stefan Schneider (UPB), Raúl Muñoz (CTTC), Pol Alemany (CTTC), Ricard Vilalta (CTTC), Juan L. de la Cruz (CTTC), Eleni Fotopoulou (UBITECH), Anastasios Zafeiropoulos (UBITECH), Stavros Kolometsos (NCSRD), Santiago Rodríguez (OPT), Erik Briseid (OPT), Panos Trakadas (SYN), Panos Karkazis (SYN), Evgenia Kapasa (UPRC), Vrettos Moulos (UPRC), Dimitris Dres (UPRC), Ignacio Domínguez Gómez (ATOS)
Reviewers: Tasos Zafeiropoulos (UBI), Carlos Parada (ALB), Kevin McDonnell (HWIL), Sonia Castro (ATOS)
Keywords:
5GTANGO platform, Service Platform, V&V, SDK toolkit, functional, non-functional
Document: 5GTANGO/D6.3Date: August 12, 2019 Security: PublicStatus: To be approved by EC Version: 0.1
Deliverable Type
R - Document
DEM - Demonstrator, pilot, prototype (X)
DEC - Websites, patent filings, videos, etc.
OTHER

Dissemination Level
PU - Public (X)
CO - Confidential, only for members of the consortium (including the Commission Services)
Disclaimer: This document has been produced in the context of the 5GTANGO Project. The research leading to these results has received funding from the European Community's 5G-PPP under grant agreement n. 761493. All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission has no liability in respect of this document, which merely represents the authors' view.
Executive Summary:
This document is the final WP6 deliverable on the 5GTANGO platform. It updates and concludes the content of the previous two deliverables (D6.1 and D6.2) and, in addition, provides results from a large testing and validation campaign of the main 5GTANGO components, namely the Service Platform, the Validation and Verification (V&V) platform and the SDK. The tests were prepared and executed using a methodical approach that guarantees system-level testing and validation of the 5GTANGO platform operation.
The test campaigns are divided per component and then per test category: functional and non-functional. The functional tests include integration, smoke and sanity tests. They proved the mature and tight integration of the various 5GTANGO components, as well as their interoperability with other MANO platforms (e.g. OSM).
The results of the non-functional tests (i.e. testing for performance, scalability, load, etc.) are then analysed. The non-functional tests are more complex in their execution and involve the collection of metrics from various sub-components over multiple test iterations, so that the results are statistically significant. Among the non-functional tests, the most notable are those related to the performance of the Service Platform, and more specifically the service deployment time, which is measured for both container-based and virtual machine-based network functions.
The document concludes that the 5GTANGO software release constitutes a mature platform that is able to satisfy the requirements a DevOps process entails in the 5G environment.
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Document Structure
  1.2 Document Dependencies
  1.3 How to read this deliverable

2 Infrastructure and Environments Overview
  2.1 DevOps workflow for NS and VNF deployment
  2.2 Infrastructure Evolution
  2.3 Development Environments Evolution
    2.3.1 Overview
    2.3.2 Service Platform
    2.3.3 Validation and Verification Platform
    2.3.4 NS/VNF
    2.3.5 Mapping of 5GTANGO infrastructure to environments

3 Supporting Tools
  3.1 Testing Frameworks Survey
  3.2 Robot Framework
    3.2.1 Installation
    3.2.2 Examples
  3.3 Testing packages
    3.3.1 Workflow and automated packaging
    3.3.2 List of packages
  3.4 Python support library for 5GTANGO
    3.4.1 Installation
    3.4.2 Usage
    3.4.3 Examples
  3.5 DevOps Support tools
    3.5.1 Kubernetes
    3.5.2 Istio
    3.5.3 Kiali

4 Functional tests
  4.1 SDK
    4.1.1 Internal integration tests
    4.1.2 External integration tests
    4.1.3 List of tests
  4.2 V&V Platform
    4.2.1 Test Execution and Results
  4.3 Service Platform
    4.3.1 Networking and Slicing
    4.3.2 User Management
    4.3.3 Policy and Monitoring
    4.3.4 SLAs and Licensing

5 Non-Functional Tests
  5.1 SDK
    5.1.1 tng-sdk-project
    5.1.2 tng-sdk-package
    5.1.3 Validator
    5.1.4 Tangotest
    5.1.5 tng-sdk-sm
    5.1.6 VIM Emulator
    5.1.7 tng-sdk-benchmark
    5.1.8 tng-analytics-engine
  5.2 V&V Platform
    5.2.1 Multiple Parallel Probes
  5.3 Service Platform
    5.3.1 Service Platform Gatekeeper KPIs
    5.3.2 KPI Latency - Instantiation and Termination VNFs and CNFs
    5.3.3 KPI Latency - Instantiation and Termination of 1 to 50 CNFs
    5.3.4 Latency insights for Heat Wrapper and Kubernetes Wrapper

6 Conclusion

A Appendix - V&V Platform - Test Cases Details
  A.1 [VnV Executor] Multiple Parallel Probes
  A.2 [VnV Executor] Sequential execution of probes
  A.3 [VnV Planner] Mapping Strategy
  A.4 [VnV Planner] Retrigger a Test Manually
  A.5 [VnV Executor] Parser Multiple Cases
  A.6 [VnV SONATA SP Simple Test] Perform a PING test over VNF deployed by SONATA SP
  A.7 [VnV OSM SP Simple Test] Perform a PING test over VNF deployed by OSM SP
  A.8 [VnV SONATA SP Monitoring] Check if the monitoring data is collected during test execution
  A.9 [VnV Simple Test] Deploy test OSM cloud-init
  A.10 [VnV Simple Test] Deploy test Sonata Hybrid
  A.11 [VnV Analytics] Analyze VnV test monitoring metrics results

B Appendix - Service Platform - Test Cases Details
  B.1 [SP Slicing] Network Service Composition VNFs
  B.2 [SP Slicing] Network Service Composition CNFs
  B.3 [SP Slicing] SLA within Network Slices
  B.4 [SLAs and Licensing] Testing SLA E2E
  B.5 [User Management] Testing User Management Role
  B.6 [Policy and Monitoring] Test Service Migration State OpenStack
  B.7 [Policy and Monitoring] Test Service Migration State Kubernetes
  B.8 [Policy and Monitoring] Test Service Reconfiguration OpenStack
  B.9 [Policy and Monitoring] Test Service Reconfiguration Kubernetes
  B.10 [Policy and Monitoring] Test Monitoring VIM endpoints

C Appendix - Various scripts and files
  C.1 Kubernetes VIM configuration file

D Bibliography
List of Figures
2.1 DevOps Workflow for NS and VNF deployment
2.2 5GTANGO Environments Work-flow
2.3 5GTANGO Environments mapping to testbeds
3.1 Jmeter GUI for test development and analysis
3.2 SONATA microservice mesh view in Kiali dashboard
4.1 Example of direct dependencies between SDK tools
5.1 Project and descriptor generation runtimes for projects with 1 to 100 VNFs with the tng-sdk-project CLI; 30 repetitions for each of the 100 experiments
5.2 Project and descriptor generation memory footprint for projects with 1 to 100 VNFs with the tng-sdk-project CLI; 30 repetitions for each of the 100 experiments
5.3 Packaging runtimes using SDK projects with 1 to 100 VNFs for different configurations and platforms
5.4 Packager memory usage using SDK projects with 1 to 100 VNFs for different configurations and platforms
5.5 Validator runtimes
5.6 Validator memory usage
5.7 Validator runtimes
5.8 Validator memory usage
5.9 Service deployment and test execution times for different service scales
5.10 Duration of various lifecycle events
5.11 Scalability of the vim-emu platform with up to 1000 emulated PoPs
5.12 Service deployment times on different emulated topologies for services with up to 256 VNFs
5.13 Overall runtimes to collect the data sets of the SNDZoo library [48]
5.14 Example of the time taken per measurement round during a large set of runs, showing the stability of the tool
5.15 Analysis requests vs. memory usage
5.16 Analysis requests vs. CPU usage
5.17 Analysis requests vs. analysis process duration
5.18 Ruby rack-based applications
5.19 Gatekeeper Package endpoint requests per second
5.20 Gatekeeper Package endpoint latency from client
5.21 KPI Latency - Instantiation and Termination VNF and CNF
5.22 Instantiation and Termination time vs. number of CNFs
5.23 Wrapper times comparison HEAT vs. K8s
A.1 Multiple Parallel Probes - Test Two Parallel Instances Report
A.2 Multiple Parallel Probes - Test Two Parallel Probes Report
A.3 Sequential Execution of Probes Report
A.4 Testing Tags Don't Match Report
A.5 NS Testing Tag Matches With Multiple TD Testing Tag Report
A.6 TD Testing Tag Matches With Multiple NS Testing Tag
A.7 VnV Retrigger a Test Manually
A.8 VnV Parser Multiple Cases
A.9 VnV SONATA SP Test
A.10 VnV OSM Simple test result
A.11 V&V monitoring metrics test
A.12 VnV OSM Cloud-init Test
A.13 VnV Hybrid Package test
A.14 Analyze VnV test monitoring metrics results
B.1 Scenario Slice Test 3 VNFs
B.2 Slice Instantiation 3 VNFs Flow
B.3 Robot Report for Netslice Instantiation 3 VNFs test
B.4 Scenario Slice Test 3 CNFs
B.5 Slice Instantiation 3 CNFs Flow
B.6 Robot Report for Netslice Instantiation 3 CNFs test
B.7 Network Slice Architecture
B.8 Instantiation Testflow
B.9 Robot Report for SLA within Network Slices test
B.10 SLA testing scenario
B.11 Test flow sequence
B.12 SLAs Robot results
B.13 Admin Robot results
B.14 Service Migration State OpenStack
B.15 Service Migration State Kubernetes
B.16 Service Reconfiguration OpenStack
B.17 Service Reconfiguration Kubernetes
B.18 SP Monitoring Test
List of Tables
1.1 Document dependencies
2.1 Environment deployment matrix
2.2 NS/VNF Development Phases
3.1 Network service packages used for the system tests
3.2 V&V test packages used for the system tests
4.1 VNFD tests
4.2 NSD tests
4.3 Package descriptor tests
4.4 Policy descriptor tests
4.5 SLA template descriptor tests
4.6 Slice descriptor tests
4.7 Test descriptor tests
4.8 CLI-related tests
4.9 REST API tests
4.10 Packager tests
4.11 Validator tests
4.12 Validator tests
4.13 vim-emu tests
4.14 VIM interaction
4.15 VNF interaction
4.16 V&V migration tools
4.17 V&V Functional Tests
4.18 Sequential Execution Of Probes Test Configuration
4.19 Mapping Strategy Test Configuration
4.20 Retrigger Manually Test Configuration
4.21 Parser Test Configuration
4.22 Deploy test Service Platform Configuration
4.23 Deploy test OSM Configuration
4.24 Deploy test OSM Configuration
4.25 Deploy test Service Platform metrics Configuration
4.26 Deploy test Service Platform Hybrid Configuration
4.27 Analyse Test Results Configuration
4.28 Networking and Network Slicing Functional Tests
4.29 User Management and Rate Limit Functional Tests
4.30 Policy and Monitoring Functional Tests
4.31 SLA and Licensing Functional Tests
5.1 V&V Non-Functional Tests
5.2 Multiple Parallel Test Configuration
5.3 Stress Tests for KPI latency over GK API endpoints
5.4 KPI Latency - Instantiation and Termination VNFs
5.5 KPI Latency - Instantiation and Termination CNFs
5.6 KPI Latency Heat Wrapper
5.7 KPI Latency K8s Wrapper
A.1 Multiple Parallel Probes
A.2 Sequential execution of probes
A.3 Mapping Strategy
A.4 Retrigger a Test Manually
A.5 Parser Multiple Cases
A.6 Deploy SONATA Simple
A.7 Deploy OSM Simple
A.8 V&V Monitoring metrics
A.9 OSM Cloud-init
A.10 Test hybrid package
A.11 Analyze VnV test monitoring metrics results
B.1 Slicing test Instantiation and Termination
B.2 Slicing test network service composition CNFs
B.3 Slicing test with multiple networks and SLAs
B.4 Testing SLA End-to-End
B.5 Testing User Management Roles
B.6 Service Migration State OpenStack
B.7 Service Migration State Kubernetes
B.8 Test Service Lifecycle in OpenStack with elasticity policies
B.9 Test Service Lifecycle in Kubernetes with elasticity policies
B.10 Monitoring VIM integration
1 Introduction
This document is the last of the WP6 deliverables and concludes the effort devoted to the definition, design and implementation of: (i) Development Operations (DevOps); (ii) the infrastructure comprised of testbeds and demo sites; and (iii) the development and testing environments (i.e. Integration, Qualification, Staging). The content of this document updates deliverables D6.1 [14] and D6.2 [15]. However, the most valuable part of this deliverable is the testing and validation of all the developed software components as released in 5GTANGO Release 5.0.
In this context, this document focuses mostly on the validation of the SDK, the Service Platform and the Validation and Verification Platform. To support the testing campaigns, a solid methodology was used, as introduced by ETSI NFV in TST002 [40]. Two test categories were created: functional tests, for the validation of the proper operation and functionality of the components, and non-functional tests, for the assurance of proper performance.
Furthermore, test cases were defined in detail and expected outcomes were specified. Detailed descriptions of these tests are provided in the appendix of this deliverable. Each test case focuses on a specific feature or metric. To automate the testing procedure, an automation framework was selected (i.e. Robot Framework) and specific supporting libraries were developed to inter-operate with the 5GTANGO APIs. All the artefacts related to the system-level testing campaigns are available via GitHub as open-source contributions and are accessible to the community and to developers who may choose to deploy 5GTANGO on their premises and run these validations.
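To give a flavour of how such supporting libraries plug into Robot Framework, the sketch below shows a minimal Python keyword library for a 5GTANGO-like REST API. The class name, endpoint path and response field are illustrative assumptions for this example, not the project's actual library; Robot Framework exposes each public method of an imported Python class as a keyword.

```python
# Hypothetical sketch of a Robot Framework keyword library for a
# 5GTANGO-like REST API. GatekeeperLibrary, the /api/v3/... path and
# the "status" field are assumptions, not the project's real library.
import json
import urllib.request


class GatekeeperLibrary:
    """Public methods become Robot Framework keywords when imported
    with `Library  GatekeeperLibrary  <base_url>` in a test suite."""

    def __init__(self, base_url, opener=None):
        self.base_url = base_url.rstrip("/")
        # An injectable opener eases unit testing without a live platform.
        self._open = opener or urllib.request.urlopen

    def get_package_status(self, package_id):
        """Keyword: Get Package Status  <package_id>"""
        url = f"{self.base_url}/api/v3/packages/{package_id}/status"
        with self._open(url) as resp:
            return json.load(resp)

    def status_should_be(self, payload, expected):
        """Keyword: Status Should Be  <payload>  <expected>
        Fails the test case when the status differs."""
        actual = payload.get("status")
        if actual != expected:
            raise AssertionError(f"expected {expected!r}, got {actual!r}")
```

A Robot test case could then read `${pkg}=  Get Package Status  pkg-1` followed by `Status Should Be  ${pkg}  READY`, mirroring the quick pass/fail feedback described above.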
The functional test results are provided in this document as summary tables stating the objective and outcome of each conducted test. The non-functional tests provide experimental results for particular metrics. The detailed execution scenario, prerequisites, inputs and expected outputs are provided in the appendix.
1.1 Document Structure
The document initially provides an overview of the infrastructure and environments used by the project. Section 2 recapitulates the previously released deliverables, this time omitting implementation details: information on previously well-described environments and infrastructure is not repeated, while information on the new ones is provided. Section 3 provides validation and testing results for the 5GTANGO supporting tools and the automation framework used to obtain the results. The functional test campaign and its results are analysed in Section 4, and the non-functional ones in Section 5. In addition, Section 5 discusses the results in relation to the defined 5G-PPP KPIs. Conclusions are provided in Section 6.
1.2 Document Dependencies
This document integrates the work carried out so far within the other 5GTANGO technical WPs,and as such, contains either implicit or explicit references to specific deliverables as indicated inTbl. 1.1.
Table 1.1: Document dependencies
Deliverable: D2.3 - Updated requirements, architecture design and V&V elements
Description: The document discusses the evolution of the use cases, requirements and architectures described previously in Deliverables D2.1 [10] and D2.2 [11].
Reference: [12]

Deliverable: D3.3 - Enriched store for VNF/NS qualification
Description: This document discusses the features and architecture of the final release of the Validation and Verification (V&V) platform, as well as the overall DevOps testing life-cycle for service creation.
Reference: [27]

Deliverable: D4.2 - Final release of the service validation SDK tool-set
Description: This document updates the SDK tool-set as described in Deliverable D4.1 [13] and adds new features which require support from WP6.
Reference: [28]

Deliverable: D5.2 - Service Platform Final Release
Description: This document provides the final architecture and developments of the 5GTANGO Service Platform (SP) for the final release, corresponding to Release 5.0 of the SONATA open-source software.
Reference: [29]

Deliverable: D6.2 - Integrated Lab-based 5GTANGO Platform
Description: This document describes the CI/CD approach and the participating infrastructure, as well as a mapping of the development environments on top of the infrastructure. D6.3 is a continuation and update of this document.
Reference: [15]

Deliverable: D7.2 - Implementation of pilots and first evaluation
Description: This document describes the status of the three pilots, the development steps taken and the first results of the development process. The deliverable's input is used for the configuration of the testbed infrastructure.
Reference: [30]
1.3 How to read this deliverable
Contrary to the previous WP6 deliverables, this document should not be read as an information document on configurations, platforms and infrastructure variations. It focuses mainly on the presentation of the extensive system-level testing executed over all the released components comprising 5GTANGO and, as such, it provides experimental results and technical detail that may distract the reader from the main objective. Hence, each subsection that concludes a set of executed tests includes a table that quickly summarises the test results. The appendix at the end of the document contains more detailed information and results.
2 Infrastructure and Environments Overview
This section discusses updated information on the infrastructure deployment and environment configuration, based on the information already provided in D6.2 [15]. In particular, new environments were created to aid and speed up the development and release of the V&V platform. The rest of the environments remain operational with minor changes.
As previously discussed, the following developer roles are foreseen:
• 5GTANGO SDK Developer
• 5GTANGO Service Platform Developer
• 5GTANGO Validation and Verification Platform Developer
• NS/VNF Developer
Each of these roles is used within the project either by partners that contribute to the development of the actual 5GTANGO integrated platform components or by the supported verticals.
2.1 DevOps workflow for NS and VNF deployment
In 5GTANGO, we have designed a DevOps workflow for the development of the verticals' Network Services. The same workflow is applied in all the project development activities. To support the DevOps operations within the project, 5GTANGO provides an extensive set of tools and environments. To keep this document self-contained, the final view of the DevOps workflow is discussed here.
As illustrated in Fig. 2.1, the DevOps workflow is initiated in the sandbox environment wherethe VNF developers instantiate their VNFs, configure and install additional software componentsand create working snapshots of their VNFs. In this process, the VNF developer gathers all the
Figure 2.1: DevOps Workflow for NS and VNF deployment
information about the configuration parameters of the VNF, to be used later in the creation of the Function Specific Manager (FSM). It is worthwhile to mention that, in the VNF development process, additional VNFs may be developed or imported and composed, in an attempt to emulate the Network Service composition. The creation of the VNF finishes when the VNF developer takes the snapshot and converts the running VNF into a deployable image.
The next step in the workflow is to test the VNFs. To this end, the developer needs to create the descriptors making use of the 5GTANGO SDK tools. In this process, the DevOps cycle is appreciable, since the VNF developer receives quick feedback about the creation of descriptors during the development process, thanks to the SDK tools.
In order to deploy the created VNFs, the developer, at a minimum, needs to compose a simple Network Service containing the VNF. In that sense, he or she needs to create (using the 5GTANGO SDK) the NS descriptor, referencing the VNF descriptors and providing vital information for the deployment and instantiation of the NS. Once the VNF/NS descriptors are ready, he or she is able to deploy the 5GTANGO package containing the VNF images and the descriptors in a stable environment, continuing the DevOps cycle. That environment is prepared with the stable versions of the Service Platform and the V&V Platform and is known in 5GTANGO as Staging. Starting from the V&V, the VNF developer can develop and run a set of tests/validations for their VNFs, or even use tests already available in the V&V platform to validate certain aspects of the VNF, such as performance KPIs and functionalities. It should be noted that additional tests are also available in the SDK under the emulator environment.
It is important to highlight that the V&V is a step prior to the deployment in the Service Platform, since the VNF developer can modify the configuration of the VNF flavour, tune the software and also compare it with other software solutions to get the best performance of the VNF inside the Network Service. Moreover, hints from analytics can also be obtained with the use of the V&V in order to select the right flavour.
This workflow ends when the Network Service is deployed in the staging Service Platform and the VNF developer performs functional and acceptance tests, where he or she can do the final fine-tuning of the service just before promoting it to the demo environment.
2.2 Infrastructure Evolution
This section updates the relevant section of deliverable D6.1 [14] regarding the infrastructure and testbeds deployed and used within 5GTANGO activities. All infrastructure is interconnected via a central node (i.e. NCSRD at Athens) and virtual links in the form of VPN peer-to-peer connections across the physical network connectivity.
During the last year of operation, the peer-to-peer interconnections were maintained and monitored for proper operation. Issues have arisen mainly because of network failures, either at the main node interconnecting the rest of the testbeds with the Athens testbed or because of network problems in specific links. In order to address these issues, alternative connections directly to each testbed were also provided, such as a VPN to the Aveiro testbed and an SSH connection to a bastion host in Paderborn.
Regarding the resources (i.e. computing, networking etc.) that were provided by each testbed, updates were made in all testbeds, i.e. Athens, Aveiro and Barcelona, and new infrastructure was commissioned in Detmold (Weidmueller premises) for the Industrial Pilot and in Dublin (Huawei premises) for the ONAP interoperability tests. Technical details on the number of resources and their planning are not considered critical for this deliverable release.
2.3 Development Environments Evolution
The 5GTANGO ecosystem created on top of the deployed infrastructure comprises a variety of environments and systems that facilitate different DevOps phases and developer roles (see Sec. 2), as presented in deliverable D6.2 [15]. This section discusses the evolution of the deployment and configuration of these environments over the 5GTANGO infrastructure. As discussed previously, only additional/new information is presented here.
2.3.1 Overview
In deliverable D6.2 [15], the full ecosystem of 5GTANGO environments is presented. To make this deliverable self-contained, we briefly refer to these environments and also highlight the changes and their evolution.
Figure 2.2: 5GTANGO Environments Work-flow
Fig. 2.2 serves as a reminder of how the 5GTANGO environments cooperate to satisfy the requirements set by the 5GTANGO DevOps processes. The environments depicted as circles, i.e. Sandbox, Staging and Demonstration, are used by the NS/VNF developer at the various stages of the DevOps cycle, as explained previously. The environments depicted as rectangles are mainly used by the other two roles (i.e. SP and V&V developer).
Tbl. 2.1 presents all the currently deployed environments over the 5GTANGO infrastructure.
Table 2.1: Environment deployment matrix
Environment | OpenStack | Kubernetes | WIM (ODL) | Location | Comment
pre-int-sp-ath.5gtango.eu | mock | 10.200.16.2 | 10.30.0.13 | ATH |
int-sp-ath.5gtango.eu | 10.100.19.2 | 10.200.16.2 | | ATH |
sta-sp-ath.5gtango.eu | 10.100.19.2 | | | ATH |
sta-sp-ath-v4-0.5gtango.eu | 10.100.19.2 | | | ATH |
sta-vnv-ath-v4-0.5gtango.eu | | | | ATH | SP: sta-sp-ath-v4-0.5gtango.eu
OSM Athens | 10.200.0.12 | | | ATH | IP: 10.30.0.224
sta-sp-pad.5gtango.eu | 10.121.0.2 | | | PAD |
sta-vnv-pad.5gtango.eu | | | | PAD | SP: sta-sp-pad.5gtango.eu
openstack-paderborn | | | | PAD | IP: 10.121.0.3
k8s-paderborn | | | | PAD | IP: 10.121.0.2 (baremetal)
k8s-paderborn-2 | | | | PAD | IP: 10.121.0.58
qual-sp-bcn.5gtango.eu | 10.120.0.18 | 10.200.16.2 | 10.30.0.13 | BCN |
demo-comm-sp.5gtango.eu | bcn-internal | | 10.30.0.13 | BCN |
demo-industrial-sp.5gtango.eu | | 10.200.16.2 | 10.30.0.13 | BCN |
pre-int-vnv-bcn.5gtango.eu | | | | BCN | SP: qual-sp-bcn.5gtango.eu
int-vnv.5gtango.eu | | | | AVE | SP: sta-sp-ave.5gtango.eu
openstack-athens-single-pop | | | | ATH | IP: 10.100.19.2
openstack-athens-multi-pop | | | | ATH | IP: 10.200.0.12 - GPU Capabilities
k8s-athens | | | | DET | IP: 10.200.16.2 - GPU Capabilities
openstack-PRD-aveiro | | | | AVE | IP: 172.31.8.16
openstack-INT-aveiro | | | | AVE | IP: 172.31.12.5
k8s-aveiro | | | | AVE | IP: 172.31.13.2
OSM Aveiro | 172.31.12.5 | | | AVE | IP: 172.31.8.163
openstack-detmold | | | | DET | IP: 10.220.0.2
k8s-detmold | | | | DET | IP: 10.220.0.81
ONAP Lab Dublin | 10.254.7.13 | | | DUB | ONAP 3.0.0 (Casablanca)
2.3.2 Service Platform
The Service Platform development is based on three environments:
• SP PRE-Integration (SP PreIntEnv)
– Purpose: Infrastructure and software tools used for SP component manual functional tests.
– Components: Jenkins, GitHub, DockerHub
– Lifecycle: Each Pull request to Github - unstable
• SP Integration (SP IntEnv)
– Purpose: Infrastructure and software tools used for SP component integration and smoke tests
– Components: Jenkins, GitHub, DockerHub
– Lifecycle: Each Merge - unstable
• SP Qualification (SP QEnv)
– Purpose: Infrastructure and software tools used for SP qualification testing of NS deployment over network infrastructure with VIMs/NFVI-PoPs and WIM
– Components: Jenkins, DockerHub, NFVI-PoP, VIM, WIM, SFC, test NS/VNFs
– Lifecycle: Weekly release deployment and bug fixes - stable
The SP developer works mainly in his or her local environment, developing software components for the Service Platform. When a specific feature or bug fix is ready for integration, the SP developer needs to update that part of the code and run integration tests. This purpose is fulfilled by the SP Pre-integration and Integration environments. The deployment and integration process is initiated by the developer when he or she issues a pull request. Then, a chain of actions is automatically executed in the pre-integration environment, including integration tests and code validation. Functional tests are also performed by the developers in pre-integration, since they can check the behaviour of their code working with the rest of the components in the Service Platform. When a code merge occurs, the containers are automatically promoted from the pre-integration to the integration environment in order to perform the automated integration tests. Subsequently, after a week of tests, Jenkins promotes a stable version of the containers from the integration to the qualification environment and deploys it in a setup suitable for executing qualification tests. Specifically for the Service Platform, the qualification tests involve NS and VNF instantiation tests over dedicated NFV Infrastructure (NFVI-PoP). When all the tests have passed (including possible issue handling and bug fixes), the resulting qualified Service Platform code is ready to be tagged as a release.
2.3.2.1 Updated SP Qualification environments
During the last year of 5GTANGO, a new qualification environment has been deployed at the Aveiro testbed (QUAL-SP-AVE). The SP platform for the qualification environment is a VM running on top of OpenStack at the Aveiro PoP. The minimum resources required to run it on a VM are 4 vCPU, 8 GB RAM and 80 GB disk, equivalent to selecting the 'm1.large' or a higher OpenStack VM flavour.
The qualification platform in Aveiro (see Tbl. 2.1) has three VIMs, of which two are OpenStack (openstack-PRD-aveiro and openstack-INT-aveiro) and one is Kubernetes (k8s-aveiro). It is important to highlight that the role definition for the Kubernetes cluster (see [1]) indicates that the tango service account has restricted capabilities. This means that, if a specific Network Service requires access to other resources, the appropriate privileges must be configured in the role.
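As an illustration, extending the service account's permissions amounts to adding rules to a Kubernetes Role (and binding it to the account). The sketch below is illustrative only; the namespace, role name and resource list are hypothetical and not taken from the actual 5GTANGO configuration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tango          # hypothetical namespace
  name: tango-extra-access  # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["configmaps", "services"]  # resources the NS would need
  verbs: ["get", "list", "watch"]
```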
Another characteristic of the Aveiro Kubernetes VIM is that it runs the Calico CNI plugin [46]. Calico requires the configuration of network policies for Network Services that need access to low-level resources [2]. This requirement arises because Calico is a layer-three network plugin with built-in dynamic routing protocols (e.g. BGP).
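For example, a Network Service pod that must accept traffic from other pods in its namespace would need a NetworkPolicy along these lines; the policy name, labels and port are hypothetical placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cnf-ingress   # hypothetical policy name
  namespace: tango          # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: example-cnf      # hypothetical CNF label
  ingress:
  - from:
    - podSelector: {}       # any pod in the same namespace
    ports:
    - protocol: TCP
      port: 8080            # hypothetical service port
```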
2.3.3 Validation and Verification Platform
Similarly to the Service Platform development, the V&V development is based on two environments:
• V&V Pre-Integration (VnV PreIntEnv)
– Purpose: Infrastructure and software tools used for V&V component manual functional tests.
– Components: Jenkins, GitHub, DockerHub
– Lifecycle: Each Pull request to Github - unstable
• V&V Integration (VnV IntEnv)
– Purpose: Infrastructure and software tools used for V&V component integration and smoke tests
– Components: Jenkins, GitHub, DockerHub, SP (in SP QEnv), NFVI-PoP, VIM, WIM, SFC, example NS/VNFs
– Lifecycle: Hourly updates - unstable
• V&V Qualification (VnV QEnv)
– Purpose: Infrastructure and software tools used for V&V qualification testing using example NSs along with SW/HW probes over network infrastructure with VIMs/NFVI-PoPs and WIM
– Components: Jenkins, DockerHub, SP (in SP QEnv), NFVI-PoP, VIM, WIM, SFC, example NS/VNFs
– Lifecycle: Weekly release deployment and bug fixes
The development approach followed by the V&V developer is the same as for the SP developer. In this context, the V&V developer pushes his or her changes to GitHub; these are deployed to the pre-integration environment, where automated and manual tests are executed. When the tests pass, the changes are merged, resulting in a new integrated version deployed in the integration environment. Then the V&V dev-team tests the integrated version in the qualification environment. Successfully executed tests tag the qualified V&V code as a release. Evidently, each new release has a specific list of features and enhancements.
2.3.3.1 Updated V&V Environments
For the V&V development, a new V&V integration environment (INT-VNV) was configured at the Aveiro PoP. The integration environment is a VM running on top of OpenStack at the Aveiro testbed. The minimum resources required to run the V&V on a VM are 2 vCPU, 4 GB RAM and 40 GB disk, presuming an 'm1.medium' or higher OpenStack flavour.
This V&V integration environment is connected to the Service Platform in the Aveiro staging environment, sta-sp-ave, which in turn is attached to openstack-PRD-aveiro, openstack-INT-aveiro and k8s-aveiro (see Tbl. 2.1). Additionally, this V&V integration environment is connected to OSM Aveiro for tests that require the OSM platform.
2.3.4 NS/VNF
5GTANGO is a DevOps-enabling platform; as such, it provides environments that facilitate NS and VNF development. The supported development process is structured into 3 phases, namely:
• Development phase: the developer creates the initial code for the VNFs and performs functional/non-functional tests. This phase also involves the write-up of the descriptors for the VNFs, NSs and tests, and local validation using the 5GTANGO SDK.
• Qualification phase: the developer deploys the NS/VNFs in a V&V environment in order to validate the VNFs, verify their correct operation and benchmark and/or validate the KPIs associated with different deployment configurations over real NFV-enabled end-to-end infrastructure.
• Deployment phase: deployment of the developed and validated NS/VNFs over actual pre-production or demonstration-ready infrastructure.
Table 2.2: NS/VNF Development Phases
Phase | Environment
Development | Sandbox (Sbox Env)
Qualification | Staging (Stage Env)
Deployment | Demonstration (Demo Env)
Tbl. 2.2 provides the mapping of the above phases to the development environments for NSs and VNFs.
The environments that are used for facilitating the activities of the NS/VNF developer are:
• Sandbox Environment (Sbox Env)
– Purpose: Infrastructure used by NS/VNF developers for the development and initial testing of VNFs and manually deployed NSs
– Components: OpenStack / NFVI-PoP nodes and SW/HW probes, monitoring
– Lifecycle: One version back from the official OpenStack Release
• Staging Environment (Stg Env)
– Purpose: Infrastructure with deployed stable releases of all 5GTANGO components for NS deployment
– Components: SP, V&V, multiple NFVI-PoPs and WAN for transport links
– Lifecycle: Stable versions
• Demonstration Environment (Demo Env)
– Purpose: Pre-production infrastructure with stable versions of SP, V&V and SDK for vertical cases demonstration
– Components: SP, V&V, SDK, probes, interconnected
– Lifecycle: Updated as required
2.3.5 Mapping of 5GTANGO infrastructure to environments
Fig. 2.3 illustrates the current mapping of environments to the actually available 5GTANGO infrastructure, per testbed.
As can be observed, all available testbeds comprise at least one NFVI-PoP and share 5GTANGO components and artefacts.
Figure 2.3: 5GTANGO Environments mapping to testbeds
3 Supporting Tools
This section discusses the new tools that were used to support the DevOps CI/CD workflow and to help the team not only monitor the environments but also gain a certain degree of observability. These tools include available open-source tools and frameworks used for automation, validation and testing, as well as tools developed for specific purposes (i.e. the Tango Library). During the last year of the project, we introduced a set of new tools that are presented in the following sections. Most of the tools require expertise and involve a learning curve before being adopted by the team members. Nevertheless, all the tools incorporated into the project were well received and appreciated by the consortium.
3.1 Testing Frameworks Survey
As part of the followed methodology, several test automation frameworks were considered. This section presents a brief overview of the frameworks and tools considered and the ones finally used. Only open-source tools and frameworks were considered.
The considered frameworks were:
• Watir [3] - stands for 'Web Application Testing in Ruby'. It is an open-source Ruby library for automating tests. Watir interacts with a browser the same way people do: clicking links, filling out forms and validating text.
• Robot [39] - is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has an easy-to-use tabular test data syntax and utilizes the keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented in either Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases.
• pyTest [44] - is a Python-based test framework for testing applications and Python libraries. It is used from the command line and requires tests to be formatted in a specific way so that the framework can identify and execute them.
• Shell - UNIX shell scripting may be used to create testing scripts that use the available APIsto make integration and validation tests.
• jmeter [38] - is a 100% pure Java application with an Ubuntu installer, and can be used to perform tests either from the command line or via a Graphical User Interface. It may be used to test performance both on static and dynamic resources. It can be used to simulate a heavy load on a server, a group of servers, a network or an object, in order to test its strength or to analyse overall performance under different load types.
From the above testing frameworks, we selected two, depending on the type of test: Robot and JMeter. The justification for this decision is the fact that Robot extensively supports Python, which is the main programming language used in the 5GTANGO components. In addition, it is the framework selected by other open-source projects that work on
Figure 3.1: Jmeter GUI for test development and analysis
the same fields (i.e. OPNFV, ETSI); hence, there is already a plethora of available testing libraries. Finally, it provides visual reports and logs. In addition, JMeter was used to perform functional load testing and to measure performance. With this tool, we generated traffic in a controlled way from an easy configuration interface. It was used to develop the test plans and test suites for all the KPI latency measurements described in Sec. 5.3.1. Fig. 3.1 illustrates the test development environment we used to generate the tests.
3.2 Robot Framework
It was decided that a common testing framework should be used for the system tests in order to enhance efficiency and consistency among the tests. After careful consideration of a few testing tools and frameworks, the Robot testing framework was chosen.
Robot Framework has an easy-to-use tabular test data syntax and utilizes the keyword-driven testing approach. This section presents information on its installation and usage examples.
3.2.1 Installation
The installation of Robot is quite straightforward:
pip install robotframework
Note that the libraries and tools to be used alongside should be installed separately.
3.2.2 Examples
In order to use the Robot framework to perform tests with the 5GTANGO platforms, it is recommended to take advantage of the Python library developed for the project, described in Sec. 3.4.
In the following, an example template for tests on the Service Platform is provided. The example initiates a test that uploads a 5GTANGO package to the Service Platform, deploys the service, terminates the service and deletes the package.
*** Settings ***
Documentation     Test suite for uploading a package to the SP platform
Library           tnglib
Library           Collections
Library           DateTime

*** Variables ***
# the name of the SP we want to use
${HOST}                     http://pre-int-sp-ath.5gtango.eu
${READY}                    READY
# to be modified accordingly if the package is
# not in the same folder as the test
${FILE_SOURCE_DIR}          ./packages
# The package to be uploaded and tested
${FILE_NAME}                eu.5gtango.test-ns-nsid1c.0.1.tgo
${NS_PACKAGE_SHORT_NAME}    test-ns-nsid1c

*** Test Cases ***
Setting the SP Path
    # From date to obtain GrayLogs
    ${from_date} =    Get Current Date
    Set Global Variable    ${from_date}
    Set SP Path    ${HOST}
    ${result} =    Sp Health Check
    Should Be True    ${result}

Clean the Package Before Uploading
    @{PACKAGES} =    Get Packages
    FOR    ${PACKAGE}    IN    @{PACKAGES[1]}
        Run Keyword If    '${PACKAGE['name']}' == '${NS_PACKAGE_SHORT_NAME}'
        ...    Remove Package    ${PACKAGE['package_uuid']}
    END

Upload the Package
    ${result} =    Upload Package    ${FILE_SOURCE_DIR}/${FILE_NAME}
    Should Be True    ${result[0]}
    ${service} =    Map Package On Service    ${result[1]}
    Should Be True    ${service[0]}
    Set Suite Variable    ${SERVICE_UUID}    ${service[1]}
    Log    ${SERVICE_UUID}

Deploying Service
    ${init} =    Service Instantiate    ${SERVICE_UUID}
    Log    ${init}
    Set Suite Variable    ${REQUEST}    ${init[1]}
    Log    ${REQUEST}

Wait For Ready
    Wait Until Keyword Succeeds    3 min    5 sec    Check Status
    Set SIU

Terminate Service
    ${ter} =    Service Terminate    ${TERMINATE}
    Log    ${ter}
    Set Suite Variable    ${TERM_REQ}    ${ter[1]}
    Wait Until Keyword Succeeds    2 min    5 sec    Check Terminate

Delete Package
    ${result} =    Remove Package    package_uuid=${PACKAGE_UUID}

Obtain GrayLogs
    ${to_date} =    Get Current Date
    Set Suite Variable    ${param_file}    True
    Get Logs    ${from_date}    ${to_date}    ${HOST}    ${param_file}

*** Keywords ***
Check Status
    ${status} =    Get Request    ${REQUEST}
    Should Be Equal    ${READY}    ${status[1]['status']}

Set SIU
    ${status} =    Get Request    ${REQUEST}
    Set Suite Variable    ${TERMINATE}    ${status[1]['instance_uuid']}

Check Terminate
    ${status} =    Get Request    ${TERM_REQ}
    Should Be Equal    ${READY}    ${status[1]['status']}
3.3 Testing packages
To perform the system tests on 5GTANGO's SDK, V&V and SP in an end-to-end fashion, a set of test packages is needed. These packages are not only NS packages, which contain different flavours of example NSs that can be deployed, but also test packages containing example tests that can be executed by the V&V. They can be used on the SDK side, e.g. to test the packager and validator, on the V&V side, e.g. to test the execution of a test against an NS, and on the SP side, e.g. to test the on-boarding of different NSs. This section describes the packages used for the system tests presented in this deliverable.
3.3.1 Workflow and automated packaging
Instead of hosting each package in its final binary format on some server, we decided to store the raw SDK projects from which the packages can be created. The benefit of this is twofold. First, the SDK projects, which consist of source files instead of binaries, can be directly hosted and version-controlled in the test repositories on GitHub. This always allows us to clearly track any
change in the test packages and ensures repeatability of the test cases. Second, the use of SDK projects implies that the SDK tools need to be used to create the final packages that are used to test the V&V and SP. To this end, we created a continuous integration (CI) job within our Jenkins setup, which builds the packages whenever a change is committed to the source projects. With this, we immediately have a first set of system tests in place, because the entire SDK workflow to generate packages is tested whenever a change is applied to the source projects.
Further, this automated CI setup ensures that no outdated packages are used and that all used packages have always been generated with the latest version of the SDK tools, meaning that we implicitly test the compatibility between the SDK and the V&V as well as the SP. This setup also allows us to directly test our SDK tools in a DevOps-like environment in which new packages are built on every code commit.
3.3.2 List of packages
Tbl. 3.1 shows the service packages that are used for our system tests and are available on GitHub. Some of our tests, e.g. the non-functional tests of the SDK tools, might use slightly changed versions of these packages, e.g. to test the behaviour of a 5GTANGO tool with a package that has a high number of VNFs. Such custom packages are always based on the packages presented here.
Table 3.1: Network service packages used for the system tests
Name | Description
NSID1C | A generic service package with one CNF
NSID1V | A generic service package with one VNF
NSID2C | A generic service package with two CNFs
NSID2V | A generic service package with two VNFs
NSID1V cirros OSM | An OSM-specific service package
NSID1V cirros OSM cloud init | An OSM-specific service package with cloud-init scripts
NSID1V cirros SONATA | A SONATA-specific service package
NSIMPSP | Package to test specifics of the media pilot
NSINDP1C | Package to test specifics of the smart manufacturing pilot
NSTD | A generic network slice template composed of 3 NSs (all of them NSID1V)
NSSQHA | A proxy service that combines the open source HAProxy load balancer with a pool of squid back-end servers
Tbl. 3.2 shows the test packages that contain example tests to be executed on the V&V. These packages are exclusively used by system tests that target the V&V. However, they are also used by the SDK tools, such as the packager and validator, during the automated packaging procedure:
Table 3.2: V&V test packages used for the system tests
Name | Description
TSTGNRPRB | Generic test package with a ping probe, a telnet probe, a service-checking probe and an HTTP benchmarking (wrk) probe
TSTIMHLS | Immersive Media test package with four probes to test the correct generation of the HTTP live streaming video playlists
TSTIMPSP | Immersive Media test package with five probes to test the performance of the video streaming, checking the lost-frames value
TSTINDP | Smart manufacturing test package that contains probes to test MQTT messaging systems, e.g. Message Queuing Telemetry Transport (MQTT) brokers
TSTPING OSM | An example ping probe to use with NSID1V cirros OSM
TSTPING SONATA | An example ping probe to use with NSID1V cirros SONATA
3.4 Python support library for 5GTANGO
In order to facilitate automated testing, the 5GTANGO consortium designed and developed a Python library that wraps around the various APIs that are exposed by the 5GTANGO SP and V&V. By importing this library in any Python 3.x script (or derivatives such as Robot), the developer has a wide range of methods available which make requests to these APIs and return the formatted response. Using this library aims to make the test automation process less cumbersome and error-prone.
3.4.1 Installation
The library is available on Github [16] and can be installed in two ways. To install it manually, with the appropriate permissions:
git clone https://github.com/sonata-nfv/tng-cli.git
cd tng-cli
python3 setup.py install
Automated installation using pip3, with permissions:
pip3 install git+https://github.com/sonata-nfv/tng-cli
Both ways install the library and also install the CLI client tng-cli.
3.4.2 Usage
To use the library, add import tnglib to your Python script. Documentation on all supported function calls can be found on readthedocs.io [49].
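The library's helpers follow a common pattern, visible in the Robot example of Sec. 3.2.2: each call returns a tuple whose first element signals success and whose second carries the payload or an error message. Below is a minimal, self-contained sketch of this wrapper pattern; the function name, endpoint path and behaviour are illustrative assumptions, not the actual tnglib code:

```python
import json
from urllib import request, error

def get_packages(sp_path, timeout=5):
    """tnglib-style wrapper sketch: returns (success, payload).

    On success, payload is the decoded JSON package list; on failure,
    payload is an error string. The endpoint path below is an
    assumption made for illustration only.
    """
    url = sp_path.rstrip("/") + "/api/v3/packages"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return True, json.loads(resp.read())
    except (error.URLError, ValueError) as exc:
        return False, str(exc)
```

A caller can then branch on the first tuple element, exactly as the Robot keywords do with ${result[0]} and ${result[1]}.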
To use the CLI client, see its help for detailed information about all the arguments:
tng-cli -h
The tool supports a set of sub-commands. For their usage, see the local sub-command help:
tng-cli <subcommand> -h
The tool needs to know which 5GTANGO Service Platform or V&V you want to interface with. For this, you should use the -u argument:
tng-cli -u <URL_TO_SP> package --list
As it is cumbersome to specify this argument for every command, you can make the URL available through the SP_PATH environment variable:
export SP_PATH=<URL_TO_SP>
It will then persist throughout your terminal session.
3.4.3 Examples
Some examples of how to use the CLI client follow. To obtain a list of all available packages:
tng-cli package --list
To obtain a list of all running network services:
tng-cli service --instances
To instantiate a new network service:
tng-cli service --instantiate <SERVICE_UUID>
To upload a new policy descriptor:
tng-cli policy --create <PATH_TO_DESCRIPTOR>
3.5 DevOps Support tools
3.5.1 Kubernetes
Initially, Kubernetes was introduced to 5GTANGO as an additional VIM to support the deployment of Cloud-Native Network Functions (CNFs). Through this effort, however, the consortium gained valuable knowledge about Kubernetes' capabilities and also recognised the importance of supporting the SONATA Service Platform on a Kubernetes platform. Since the SONATA SP and V&V architectures are based on microservices, it was straightforward to generate the Kubernetes objects needed to deploy the platforms on Kubernetes.
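As an example of such objects, a single SP microservice can be described with a standard Kubernetes Deployment. The sketch below is illustrative only; the image name and port are assumptions and do not reproduce the project's actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tng-api-gtw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tng-api-gtw
  template:
    metadata:
      labels:
        app: tng-api-gtw
    spec:
      containers:
      - name: tng-api-gtw
        image: sonatanfv/tng-api-gtw:latest  # assumed image name
        ports:
        - containerPort: 5000                # assumed port
```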
There are many good tools available that assist in deploying to Kubernetes. The consortium selected two such tools to take advantage of the SP deployed on top of Kubernetes. The decision was driven by the need to have a broad view of what was happening among the microservices and to gather information on rates, errors and latency at a glance.
3.5.2 Istio
Istio is an open-source project [42] that works with Kubernetes and enables a service mesh among the microservices. In 5GTANGO, Istio is used to gain a real understanding of what impacts a service's performance when we perform benchmarking tests. Additionally, we installed Kiali, which in conjunction with Istio provides observability of the microservices.
3.5.3 Kiali
Kiali is an open-source tool [43] installed in Kubernetes clusters that run Istio, providing observability and service mesh configuration. 5GTANGO uses Kiali during benchmarking tests to detect issues in the microservice topology and anomalous behaviour of microservices, bottlenecks and errors. Fig. 3.2 shows the Kiali dashboard with the SONATA Service Platform microservice deployment in Kubernetes while being exercised by a performance test. As shown in the figure, the response time is displayed between the components. The Service Platform's internal communications are built around the message broker, and all connections to the message broker are TCP connections. This is illustrated in Fig. 3.2 by the blue lines, with several components communicating with it. Around the broker, we have the HTTP communication, which has its entry point at tng-api-gtw, from where requests are distributed to the rest of the components.
Figure 3.2: SONATA microservice mesh view in kiali dashboard
4 Functional tests
Functional tests intend to validate the main end-to-end functions implemented by the system (the Service Platform as a whole), which represent common actions that users perform with the system and require the complex interaction of multiple components. Functional tests come after integration tests, which cover key subsets of functional actions, which in turn come after component unit tests. This section presents the executed functional tests at the platform level. A subsection is devoted to each main component-platform comprising 5GTANGO, namely the SDK toolkit, the V&V and the SP.
4.1 SDK
The 5GTANGO SDK consists of a set of various tools. Each of these tools can be used independently but also in combination with any other. This means that each SDK tool must not only be tested in isolation but also in combination with the other tools. This section describes how we did this testing throughout the project.
First, there are a series of SDK-internal integration tests that rely on the fact that there are dependencies between the code of the SDK tools, which makes it easy to test multiple tools at once when a test on a single tool is executed. Second, there are SDK-external integration tests, which, e.g., verify that a package generated by the SDK is compatible with the 5GTANGO V&V and SP.
4.1.1 Internal integration tests
To test the integration of the SDK tools among each other, we exploit the fact that many of the tools have direct relationships in terms of shared code, i.e., one tool directly calls the code of another tool. Fig. 4.1 shows an example of such dependencies: the benchmarker internally uses the packager to generate packages for the experiments. The packager can internally use the validator to validate the packages it creates, and the validator uses the project management tool to inspect the validated artefacts. Considering this example, it becomes clear that a well-designed end-to-end test of the benchmarker automatically also tests the other SDK tools as well as the integration between them.
We exploit this in our CI/CD setup, which always triggers the CI/CD jobs of the dependent tools whenever a job for one of these SDK tools is executed. As a result, developers directly notice if one of their changes, e.g., an update of the packager, breaks the integration with another tool, e.g., the benchmarker. This setup turned out to work very well throughout the development phase of the SDK tools. Based on this, many SDK tools have good end-to-end test cases, which makes additional integration tests obsolete. Sec. 4.1.3 describes the implemented tests in more detail.
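The triggering logic can be sketched as a transitive closure over the reverse dependency graph of Fig. 4.1. The following Python sketch is illustrative only: the actual setup lives in the CI/CD pipeline configuration, and the `DEPENDS_ON` table and `jobs_to_trigger` helper are hypothetical names.

```python
# Illustrative sketch of the dependency-driven CI/CD triggering.
# "A depends on B" means a change to B must also re-run A's job.
# The graph mirrors Fig. 4.1; names are hypothetical, not the real jobs.
DEPENDS_ON = {
    "tng-sdk-benchmark": ["tng-sdk-package"],
    "tng-sdk-package": ["tng-sdk-validation"],
    "tng-sdk-validation": ["tng-sdk-project"],
    "tng-sdk-project": [],
}

def jobs_to_trigger(changed_tool):
    """Return the tool itself plus every tool that (transitively)
    depends on it, i.e., all CI jobs that must be re-run."""
    dependents = {changed_tool}
    changed = True
    while changed:  # fixed-point iteration over reverse dependencies
        changed = False
        for tool, deps in DEPENDS_ON.items():
            if tool not in dependents and dependents.intersection(deps):
                dependents.add(tool)
                changed = True
    return sorted(dependents)
```

For example, a change to tng-sdk-project would re-run all four jobs, while a change to the benchmarker re-runs only its own job.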
Figure 4.1: Example of direct dependencies between SDK tools
4.1.2 External integration tests
Besides integration tests among the SDK tools, we also test the integration of the SDK tools with other components of the 5GTANGO platform. Those tests are done implicitly, as the SDK tools are used to automatically generate the test packages for the SP and V&V, as described in Sec. 3.3. As a result, the integration between SDK, V&V and SP is implicitly tested: e.g., if the SDK produced malformed test packages, the V&V and SP test cases would immediately fail. This ensures good test coverage among all 5GTANGO tools while limiting the implementation effort and avoiding code repetition.
4.1.3 List of tests
Tbl. 4.1 and the following tables list the functional tests implemented for the SDK tools.
4.1.3.1 tng-schema
Even the schemas have a series of functional tests. Those tests ensure that the descriptor examples given in the schemas' repository always correctly reflect the models specified in the latest schemas. In total, there are 27 tests implemented, which are automatically executed by the test.sh script in the repository's main directory [5].
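What such a consistency test checks can be illustrated with a minimal sketch. The field names below follow common 5GTANGO descriptor conventions, but they are assumptions for illustration; the authoritative models are the JSON schemas in the tng-schema repository [5].

```python
# Minimal sketch of a descriptor/schema consistency check.
# REQUIRED_VNFD_FIELDS is an illustrative subset; the real required
# fields are defined by the VNFD schema in tng-schema.
REQUIRED_VNFD_FIELDS = {"descriptor_version", "vendor", "name", "version"}

def check_required_fields(descriptor, required=REQUIRED_VNFD_FIELDS):
    """Return the set of required fields missing from the descriptor."""
    return required - descriptor.keys()

# Example descriptor resembling the "default-vnfd" case (values invented).
example_vnfd = {
    "descriptor_version": "vnfd-schema-01",
    "vendor": "eu.5gtango",
    "name": "default-vnf",
    "version": "0.1",
}

assert check_required_fields(example_vnfd) == set()  # well-formed
```

A real schema test would instead load the example YAML files and validate them against the full JSON schema, which additionally checks value formats and forbids unknown properties.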
Table 4.1: VNFD tests
Name Description Status
test default-vnfd   Tests the default VNFD against the VNFD schema for consistency   OK
test firewall-vnfd   Tests a VNFD describing a firewall VNF against the VNFD schema   OK
test cnf-vnfd   Tests a VNFD describing a cloud-native network function (CNF) against the VNFD schema   OK
test pnf-vnfd   Tests a VNFD describing a legacy physical network function (PNF) against the VNFD schema   OK
test hnf-vnfd   Tests a VNFD describing a hybrid network function (HNF), containing both physical and cloud-native deployment units, against the VNFD schema   OK
test iperf-vnfd   Tests a VNFD describing an iperf-based VNF against the VNFD schema   OK
test multi-flavour-vnfd   Tests a VNFD describing a VNF with multiple flavours (for different SLAs) against the VNFD schema   OK
test multi-image-vnfd   Tests a VNFD describing a VNF with multiple VDU images (for different architectures) against the VNFD schema   OK
test qos-vnfd   Tests a VNFD describing a VNF with specifically defined QoS requirements against the VNFD schema   OK
test tcpdump-vnfd   Tests a VNFD describing a tcpdump-based VNF against the VNFD schema   OK
test vtc-vnfd   Tests a VNFD describing an example VNF against the VNFD schema   OK
Table 4.2: NSD tests
Name Description Status
test default-nsd   Tests the simple, default NSD with one VNF against the NSD schema   OK
test multi-flavour-nsd   Tests an NSD describing a service with multiple flavours (for different SLAs) against the NSD schema   OK
test recursive-nsd   Tests an NSD describing a recursive service, i.e., with another network service inside, against the NSD schema   OK
test simplest-nsd   Tests an NSD describing the simplest possible service, without any VNF, against the NSD schema   OK
test sonata-demo   Tests an NSD describing a service used for a demonstration against the NSD schema   OK
test sonata-demo-ssm   Tests an NSD describing the demonstration service with an SSM against the NSD schema   OK
Table 4.3: Package descriptor tests
Name Description Status
test 5gtango-ns-package-example   Tests a package description of a 5GTANGO network service against the package schema   OK
test 5gtango-test-package-example   Tests a package description of a 5GTANGO test package against the package schema   OK
test 5gtango-vnf-package-example   Tests a package description of a 5GTANGO VNF against the package schema   OK
Table 4.4: Policy descriptor tests
Name Description Status
test ns-elasticity-policy-average   Tests a policy description of an elasticity policy (average) against the policy schema   OK
test ns-elasticity-policy-besteffort   Tests a policy description of an elasticity policy (best effort) against the policy schema   OK
test ns-elasticity-policy-premium   Tests a policy description of an elasticity policy (premium) against the policy schema   OK
test policy-example   Tests a policy description of a simple example policy against the policy schema   OK
Table 4.5: SLA template descriptor tests
Name Description Status
test sla   Tests an SLA description of an example SLA against the SLA schema   OK
Table 4.6: Slice descriptor tests
Name Description Status
test slice   Tests a slice description of an example slice against the slice schema   OK
Table 4.7: Test descriptor tests
Name Description Status
test test-descriptor Tests a test description of an example test against the test schema OK
4.1.3.2 tng-sdk-project
There are 21 functional tests implemented for tng-sdk-project. All tests can be found in the tests folder of the tng-sdk-project repository [8].
Table 4.8: CLI-related tests
Name Description Status
test generated descriptors   Tests the CLI functionality for creating a new project and generating VNFD and NSD descriptors inside using default values.   OK
test generate custom descriptors   Tests the CLI functionality for creating a new project and generating VNFD and NSD descriptors inside using custom generation values (e.g., author and description).   OK
test generate multiple descriptors   Tests the CLI functionality for creating a new project and generating VNFD and NSD descriptors inside for multiple VNFs.   OK
test generate tango descriptors   Tests the CLI functionality for creating a new project and generating only 5GTANGO VNFD and NSD descriptors inside using default values.   OK
test generate osm descriptors   Tests the CLI functionality for creating a new project and generating only OSM VNFD and NSD descriptors inside using default values.   OK
test load example project   Tests the CLI functionality for loading the existing example project inside the repository.   OK
test add remove file   Tests the CLI functionality for adding and removing files to/from an existing project.   OK
test init   Tests the CLI functionality for initializing a new workspace.   OK
test create dirs   Tests the CLI functionality for creating the correct workspace directory structure.   OK
test create from descriptor   Performs several tests to ensure that workspaces are correctly created from a configuration descriptor.   OK
test create ws descriptor   Tests the function that generates workspace configuration files. Verifies that a workspace can be re-created using the generated file.   OK
Table 4.9: REST API tests
Name Description Status
Test /pings endpoint (health check)   Sends a GET request to the /pings REST API endpoint to test if the service is alive.   OK
Test /projects endpoint   Series of sequential tests that test the /projects REST API endpoint for managing projects. The different tests/steps are described below.   OK
4.1.3.3 tng-sdk-package
There is a total of 64 tests implemented for the packager. All tests can be found in the tests folder of the tng-sdk-package repository [7].
Table 4.10: Packager tests
Name Description Status
test cli package auto name   Tests the automated name generation for new packages.   OK
test cli package fixed name   Tests the generation of packages with a custom name.   OK
test cli unpackage   Tests unpackaging using the CLI.   OK
test cli unpackage invalid   Tests unpackaging of a malformed package using the CLI.   OK
test parse block based meta file   Checks the parser for block files, e.g., ETSI.   OK
test autoversion   Tests the automated versioning feature.   OK
test instantiation default   Checks the correct instantiation of the default packager.   OK
test instantiation etsi   Checks the correct instantiation of the ETSI packager.   OK
test instantiation tango   Checks the correct instantiation of the 5GTANGO packager.   OK
test package async   Tests the asynchronous packaging process.   OK
test package sync   Tests the synchronous packaging process.   OK
test store autoversion   Tests auto versioning in the storage backend.   OK
test unpackage async   Tests the asynchronous unpackaging process.   OK
test unpackage sync   Tests the synchronous unpackaging process.   OK
test attach files   Tests the functionality to attach files.   OK
test create temp dir   Tests the creation of a temporary directory.   OK
test create temp dirs   Tests the creation of temporary directories.   OK
test do package   Tests package creation.   OK
test pack packages   Tests (internal) package creation.   OK
test sort files   Tests the files sorter.   OK
test collect metadata csar   Tests the compatibility to the CSAR format.   OK
test collect metadata etsi   Tests the compatibility to the ETSI format.   OK
test collect metadata tango   Tests the compatibility to the 5GTANGO format.   OK
test do unpackage bad checksum   Tests the detection of bad checksums.   OK
test do unpackage bad metadata   Tests the detection of bad metadata.   OK
test do unpackage good package   Tests the unpackaging of a well-formed package.   OK
test do unpackage missing file   Tests unpackaging of a broken package.   OK
test read correct metadata   Tests metadata read functionality.   OK
test read malformed metadata   Tests metadata read functionality for bad metadata.   OK
test read missing metadata   Tests metadata read functionality for missing metadata.   OK
test do package good project   Tests packaging of a well-formed project.   OK
test do package good project with au. . .   Tests packaging with auto versioning enabled.   OK
test cli package cli unpackage   Tests unpackaging using the CLI.   OK
test cli package rest unpackage   Tests unpackaging using the REST API.   OK
test direct call package direct call. . .   Tests packaging using the Python API.   OK
test rest package cli unpackage   Tests packaging with REST and unpackaging using the CLI.   OK
test rest package rest unpackage   Tests REST packaging and REST unpackaging.   OK
test pyapi package auto name   Tests automated naming using the Python API.   OK
test pyapi unpackage   Tests unpackaging using the Python API.   OK
test on packaging done   Tests the packaging callback.   OK
test on unpackaging done   Tests the unpackaging callback.   OK
test package v1 endpoint   Tests the REST endpoint for packaging.   OK
test package v1 endpoint with userna. . .   Tests multi-user integration of the REST API.   OK
test packager v1 status endpoint   Tests the REST endpoint to retrieve status information.   OK
test ping v1 endpoint   Tests the REST ping endpoint.   OK
test project package project   Tests multiple calls of packaging.   OK
test project project download v1 get. . .   Tests the project download endpoint.   OK
test file match   Tests the file matching helper.   OK
test file not match   Tests the file not matching helper.   OK
test init   Tests packager initialization.   OK
test mime to pltfrm   Tests platform extraction from MIME types.   OK
test store   Tests the storage backend.   OK
test store idempotent   Tests if packaging and unpackaging is idempotent.   OK
test do package bad project   Tests packaging of a malformed project.   OK
test do package good project   Tests packaging of a well-formed project.   OK
test do unpackage bad package   Tests unpackaging of a bad package.   OK
test do unpackage good package   Tests unpackaging of a good package.   OK
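Several of the tests above (e.g., the bad-checksum detection and the round-trip cases) revolve around a pack/unpack cycle. The following is a minimal illustrative sketch assuming a simplified zip-based layout with per-file SHA-256 checksums, not the real 5GTANGO package format or the tng-sdk-package API.

```python
import hashlib
import io
import zipfile

# Illustrative sketch only: a simplified zip-based "package" with
# per-file SHA-256 checksums, loosely mimicking what the packager's
# round-trip and bad-checksum tests exercise.

def pack(files):
    """Pack {name: bytes} into an in-memory zip; return (blob, checksums)."""
    checksums = {n: hashlib.sha256(c).hexdigest() for n, c in files.items()}
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue(), checksums

def unpack(blob, checksums):
    """Unpack and verify every file; raise ValueError on a bad checksum."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        files = {n: zf.read(n) for n in zf.namelist()}
    for name, content in files.items():
        if hashlib.sha256(content).hexdigest() != checksums.get(name):
            raise ValueError("bad checksum for %s" % name)
    return files
```

An idempotence test like `test store idempotent` then simply asserts that `unpack(pack(files))` returns the original files, while the bad-checksum test tampers with one checksum and expects a failure.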
4.1.3.4 tng-sdk-validation
There is a total of 68 tests implemented for the validation tool. All of them can be found in the tests folder of the tng-sdk-validation repository [9].
Table 4.11: Validator tests
Name Description Status
test cli validation function syntax . . .   Test to validate the syntax of a well-formed VNFD.   OK
test cli validation function syntax . . .   Test to validate the syntax of a malformed VNFD with additional non-existing properties.   OK
test cli validation function syntax . . .   Test to validate the syntax of a malformed VNFD with unrecognized values in the properties (e.g. gB instead of GB was used).   OK
test cli validation function syntax . . .   Test to validate the syntax of a malformed VNFD using forbidden fields, misspelling some fields and including symbols like '?'.   OK
test cli validation function syntax . . .   Test to validate the syntax of a well-formed VNFD specifying the document extension as yml.   OK
test cli validation service syntax o. . .   Test to validate the syntax of a simple well-formed NSD.   OK
test cli validation service syntax o. . .   Test to validate the syntax of a more complex well-formed NSD.   OK
test cli validation service syntax n. . .   Test to validate the syntax of an NSD which does not exist.   OK
test cli validation service syntax k. . .   Test to validate the syntax without a compulsory field.   OK
test cli validation function integri. . .   Test to validate the integrity of a well-formed function.   OK
test cli validation function integri. . .   Test to validate the integrity of a well-formed function specifying the document extension as yml.   OK
test cli validation function integri. . .   Test to validate the integrity of a malformed VNFD that includes an undeclared connection point.   OK
test cli validation function integri. . .   Test to validate the integrity of a malformed VNFD that has several undeclared connection points.   OK
test cli validation service integrit. . .   Test to validate the integrity of a well-formed NSD.   OK
test cli validation service integrit. . .   Test to validate the integrity with incomplete input parameters (e.g. dpath not indicated as an input).   OK
test cli validation service topology. . .   Test to validate the topology of a well-formed service.   OK
test cli validation service topology. . .   Test to validate the topology with incomplete input parameters (e.g. dpath is not indicated as an input).   OK
test cli validation function topolog. . .   Test to validate the topology of a well-formed function.   OK
test cli validation function topolog. . .   Test to validate the topology of a well-formed function specifying the document extension as yml.   OK
test cli validation function custom . . .   Test to validate custom rules of a VNFD without including the rules file.   OK
test cli validation function custom . . .   Test to validate the custom rules of a well-formed VNFD.   OK
test cli validation function custom . . .   Test to validate the custom rules of a VNFD with incorrect parameters such as a non-desired number of CPUs, RAM, storage or bandwidth.   OK
test validate function valid   Test to completely validate a well-formed VNFD.   OK
test validate service topology valid   Test to validate the topology of a well-formed NSD.   OK
test validate function topology vali. . .   Test to validate the topology of a well-formed VNFD.   OK
test validate service integrity vali. . .   Test to validate the integrity of a well-formed NSD.   OK
test validate service integrity inva. . .   Test to validate the integrity of a malformed NSD with several undeclared connection points.   OK
test validate function integrity val. . .   Test to validate the integrity of a well-formed VNFD.   OK
test validate function integrity inv. . .   Test to validate the integrity of a malformed VNFD with several undeclared connection points.   OK
test validate service syntax valid   Test to validate the syntax of a well-formed NSD.   OK
test validate service syntax valid s. . .   Test to validate the syntax of a simple well-formed NSD.   OK
test validate service syntax nonexis. . .   Test to validate the syntax of a non-existing descriptor.   OK
test validate service syntax invalid. . .   Test to validate the syntax of a descriptor that does not have a compulsory field.   OK
test validate function syntax valid   Test to validate the syntax of a well-formed VNFD.   OK
test validate function syntax invali. . .   Test to validate the syntax of a VNFD including not allowed additional properties, non-defined values for a field (e.g. gB) and forbidden symbols like '?'.   OK
test rest validation function syntax. . .   Test the syntax validation of a well-formed VNFD through the REST API.   OK
test rest validation function syntax. . .   Test the syntax validation of a VNFD with an additional non-existing field through the REST API.   OK
test rest validation service syntax . . .   Test the syntax validation of a well-formed NSD through the REST API.   OK
test rest validation service syntax . . .   Test the syntax validation of a non-valid NSD through the REST API.   OK
test rest validation function integr. . .   Test the integrity validation of a well-formed NSD through the REST API.   OK
test rest validation function integr. . .   Test the integrity validation of an NSD with undeclared connection points through the REST API.   OK
test rest validation service integri. . .   Test the integrity validation of a well-formed NSD through the REST API.   OK
test rest validation service integri. . .   Test the integrity validation of an NSD with several undeclared connection points through the REST API.   OK
test rest validation function topolo. . .   Test the topology validation of a well-formed VNFD through the REST API.   OK
test rest validation service topolog. . .   Test the topology validation of a well-formed NSD through the REST API.   OK
test rest validation ko many argumen. . .   Test the topology validation of a VNFD with incorrect argument parameters through the REST API.   OK
test rest validation ko no path   Test a complete validation of a VNFD without including the path through the REST API.   OK
test rest validation ko no descripto. . .   Test the topology validation of a VNFD without including the descriptor through the REST API.   OK
test rest validation ko no dpath dex. . .   Test the integrity validation of an NSD without indicating the path through the REST API.   OK
test rest validation ko no cfile   Test the custom rules validation without indicating the rules file through the REST API.   OK
test rest validation ok embedded des. . .   Test the topology validation of a VNFD through the REST API.   OK
test rest validation ok embedded des. . .   Test the custom rules validation of a VNFD through the REST API.   OK
test rest validation ok get validati. . .   Test obtaining one validation result through the REST API using an ID.   OK
test rest validation ok get validati. . .   Test obtaining all the validation results through the REST API.   OK
test rest validation get validations. . .   Test obtaining non-existing validation results through the REST API.   OK
test rest validation get resources w. . .   Test obtaining non-stored resources through the REST API.   OK
test rest validation ok get resource. . .   Test obtaining the validation results through the REST API.   OK
test rest validation delete validati. . .   Test the removal of the cached validation results.   OK
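The integrity tests dealing with undeclared connection points can be sketched as follows. The descriptor layout is simplified and illustrative; the real models and field names are defined by the tng-schema repository.

```python
# Illustrative integrity check: every connection point referenced by a
# virtual link must be declared in the descriptor. The dict layout and
# field names are assumptions for illustration, not the real schema.

def undeclared_connection_points(descriptor):
    """Return the set of referenced but undeclared connection points."""
    declared = {cp["id"] for cp in descriptor.get("connection_points", [])}
    referenced = set()
    for link in descriptor.get("virtual_links", []):
        referenced.update(link.get("connection_points_reference", []))
    return referenced - declared
```

A malformed VNFD such as the one in the "integrity invalid" cases above would yield a non-empty set, which the validator reports as an integrity error.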
4.1.3.5 vim-emu
There are 30 functional tests implemented for the emulator. The tests are implemented in the tests folder of the official vim-emu repository [45].
Table 4.12: vim-emu tests
Name Description Status
testMultipleDatacenterDirect   Tests the inter-data center connectivity and routing.   OK
testMultipleDatacenterWithIntermedia. . .   Tests the inter-data center connectivity and routing with multiple intermediate SDN switches between the data centers.   OK
testSingleDatacenter   Tests the instantiation of a single emulated data center.   OK
testSDNChainingMultiService   Tests the chaining of multiple service instances.   OK
testSDNChainingSingleService   Tests the chaining of a single service instance.   OK
testSDNChainingSingleService withLea. . .   Tests chaining of a single service instance with enabled learning switch controller.   OK
testAddSingleComputeSingleDC   Adds a single compute instance to a single data center and verifies the deployment.   OK
testConnectivityMultiDC   Checks the basic connectivity between multiple data centers.   OK
testGetStatusSingleComputeSingleDC   Tests if the status of a compute instance can be fetched correctly.   OK
testInterleavedAddRemoveMultiDC   Checks if the add and remove functionality for compute instances works in the multi-data center case.   OK
testRemoveSingleComputeSingleDC   Tests if a single compute instance can be removed correctly.   OK
test empty flow classifier to match . . .   Tests the internal flow classification used by the SFC mechanism.   OK
test tcp ip flow classifier to match. . .   Tests the flow classification for TCP/IP flows.   OK
testHeatDummy   Tests the OpenStack Heat interfaces provided by the emulator.   OK
testKeystoneDummy   Tests the OpenStack Keystone interfaces provided by the emulator.   OK
testNeutronDummy   Tests the OpenStack Neutron interfaces provided by the emulator.   OK
testNeutronSFC   Tests the OpenStack Neutron SFC interfaces provided by the emulator.   OK
testNovaDummy   Tests the OpenStack Nova interfaces provided by the emulator.   OK
testAddRmToDc   Tests the resource model API of the emulator. Adds an RM.   OK
testBaseResourceModelApi   Tests the resource model API of the emulator. Instantiates the API.   OK
testAllocationComputations   Tests the resource model API of the emulator. Checks the allocation of compute resources.   OK
testAllocationCpuLimit   Tests the resource model API of the emulator. Checks the application of CPU limits.   OK
testAllocationMemLimit   Tests the resource model API of the emulator. Checks the application of memory limits.   OK
testFree   Tests the resource model API of the emulator. Tests if limits are freed correctly.   OK
testInRealTopo   Simple scenario with a realistic topology testing the emulator topology API.   OK
testRestApi   Tests the REST API of the emulator.   OK
test tango llcm start service   Tests the 5GTANGO LLCM. Checks service instantiation.   OK
test tango llcm stop service   Tests the 5GTANGO LLCM. Checks the service termination.   OK
4.1.3.6 tng-sdk-benchmark
There are 5 functional tests implemented for the benchmarker. All tests can be found in the tests folder of the tng-sdk-benchmark repository [6].
Table 4.13: tng-sdk-benchmark tests
Name Description Status
test generate experiment specificati. . .   Tests the generation of 5GTANGO packages according to a given PED.   OK
test load and validate ped   Tests the PED validation functionality.   OK
test cartesian product   Tests the correct computation of the cartesian product of possible configuration parameters to be benchmarked.   OK
test generate projects   Tests the generation of 5GTANGO projects according to a given PED.   OK
test unpack   Tests the integration with the packager to unpack the seed packages.   OK
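The cartesian-product expansion verified by `test cartesian product` can be sketched with the standard library. The parameter names below are illustrative, not taken from a real PED.

```python
import itertools

# Illustrative sketch: expand every combination of configuration
# parameters into one experiment each, as the benchmarker does for a
# PED. Parameter names and values are invented for illustration.

def expand_experiments(parameter_space):
    """Yield one {parameter: value} dict per point in the space."""
    names = sorted(parameter_space)
    for values in itertools.product(*(parameter_space[n] for n in names)):
        yield dict(zip(names, values))

space = {"cpu_cores": [1, 2], "mem_mb": [256, 512], "io_bw": ["1Gbps"]}
experiments = list(expand_experiments(space))
assert len(experiments) == 4  # 2 * 2 * 1 combinations
```

Each resulting dict parameterizes one benchmarking run, so the number of runs grows multiplicatively with the parameter lists.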
4.1.3.7 tng-sdk-test
There are 14 functional tests implemented for the local functional testing library:
Table 4.14: VIM interaction
Name Description Status
test add instance from image   Tests VNF instantiation from an image.   OK
test add instance from source   Tests image building and VNF instantiation.   OK
test add instances from package   Tests network service instantiation from a package.   OK
test add link   Tests adding a network link between VNFs.   OK
test get traffic   Tests traffic sniffer functionality.   OK
Table 4.15: VNF interaction
Name Description Status
test execute   Tests command execution on VNF instances.   OK
test get ip   Tests that the IP address of an interface is correctly retrieved.   OK
test get file   Tests file retrieving functionality.   OK
Table 4.16: V&V migration tools
Name Description Status
test vnv checker add instances from . . .   Tests that the vnv checker correctly verifies that exactly one network package is used in a test.   OK
test vnv checker add instance from i. . .   Tests that the vnv checker raises an exception if network configuration is used when VNFs are instantiated using images.   OK
test vnv checker add instance from s. . .   Tests that the vnv checker raises an exception if network configuration is used when VNFs are built before instantiation.   OK
test vnv checker add link not called   Tests that the vnv checker raises an exception when link instantiation is requested.   OK
test package parser   Tests package processing functionality.   OK
test probe builder   Tests probe building functionality.   OK
4.2 V&V Platform
The V&V Platform is the mechanism ensuring that uploaded services can be tested on the appropriate target Service Platform, so that a service can be considered fit for purpose.

The V&V platform is currently able to:
1. Identify the appropriate tests (via testing tags) for the target service.
2. Prepare the target Service Platform and corresponding test environment.
3. Execute the sequence of tests via a test plan on the target service platform.
4. Determine the success or failure of the test.
5. Publish the results for further analysis.
6. Analyse the test results so as to come up with insights for the design and implementation ofefficient orchestration (deployment and runtime) policies.
Tbl. 4.17 summarizes the executed functional tests implemented for the V&V Platform.
Table 4.17: V&V Functional Tests
Test Case Name Description Status
Sequential execution of probes [37]   To ensure that one probe can start after another. The TD will be based on the PING test but using two probe sections: one with no dependencies and the other depending on the first one.   OK
Mapping Strategy [32]   To ensure that the mapping strategy of TD-NSD based on testing tags works as expected. The TD will be a simple test based on the PING one using several testing tags to cover all mapping possibilities.   OK
Re-trigger a test manually [36]   To ensure that a test can be relaunched manually.   OK
Parser Multiple Cases [35]   To ensure that the parser can extract the verdicts of the test result in multiple cases. The TD will use two probes: one that generates a JSON result file and another that generates a TXT result file. This TD will contain several validations from both probes and file types: a TXT that contains "String", a TXT that does not contain "String", a JSON field validation and a second JSON field validation.   OK
Deploy test Service Platform [79]   This test deploys in the Service Platform the package that contains an NS comprised of one VNF with an external network. After the deployment, the V&V uses the ping probe to check the availability of the VNF via the external IP.   OK
Deploy test OSM [77]   This test deploys in OSM the package that contains an NS comprised of one VNF with an external network. After the deployment, the V&V uses the ping probe to check the availability of the VNF via the external IP.   OK
Deploy test OSM cloud-init [78]   OSM is able to run cloud-init scripts for the VNFs it deploys. This test checks the ability to run a cloud-init script installing an Nginx service during the boot of the Ubuntu VNF. The service check probe is used to ensure that the Nginx service is running after the deployment.   OK
Deploy test Service Platform metrics [33]   Deploys a package that includes a test and a service and checks that monitoring metrics are collected and stored successfully in the V&V monitoring framework.   OK
Deploy test Service Platform Hybrid [75]   To verify the possibility of including a test package inside a package that contains an NS, this test uses a package that is composed of both an NSD and a TD with the corresponding testing tags. The network service is deployed in the Service Platform and tested by the ping test inside the same NS package.   OK
Analyse the test results [51]   Get the data results of a V&V test and perform an analysis of their quality.   OK
4.2.1 Test Execution and Results
The sections below discuss the results of each functional test. The reader is referred to the appendix for the full details of each test execution. Here we enumerate the tests and discuss
briefly the outcome of each test.
4.2.1.1 Sequential execution of probes
This test launches two probes sequentially: PING and NETCAT. The Test Descriptor's "dependencies" field is used to configure the test, forcing the second probe to start only when the first one finishes. To check that this sequence is followed, the start timestamp of the second probe (NETCAT) is compared with the end timestamp of the first probe (PING); the former should be greater.
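The timestamp comparison performed by this check can be sketched as follows; the probe-result fields and epoch-second values are illustrative.

```python
# Illustrative sketch of the sequential-execution check: the dependent
# probe (NETCAT) may only start after the first probe (PING) has ended.
# The result dicts and timestamps are invented for illustration.

def ran_sequentially(first_probe, second_probe):
    """True if the dependent probe started only after the first ended."""
    return second_probe["start"] > first_probe["end"]

ping = {"name": "ping", "start": 100.0, "end": 105.5}
netcat = {"name": "netcat", "start": 105.9, "end": 110.2}
assert ran_sequentially(ping, netcat)  # verdict: OK
```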
Table 4.18: Sequential Execution Of Probes Test Configuration
Scenario: One probe starts after another
Robot Test: test probe can start after another [65]
Network Service Packages: NSID1V cirros Service Platform no tags [56]
Test Packages: TSTPING dependency2 probes [89]
Fig. A.3 is the Robot Report that shows the step-by-step test results.
4.2.1.2 Mapping Strategy
This test contemplates three different scenarios:
• Test package’s testing tags and Network Service Package’s testing tags don’t match (Notmatch case), so no test plan is generated in the V&V platform. Fig. A.4 is the Robot Reportthat shows the step by step test result.
• One Network Service package’s testing tag matches with multiple Test packages’ testing tags(single NS multiple TD case). The Service used in this test is related to two different TestDescriptors by testing tag. The Service testing tag is contained in the test descriptors. Toexecute this test, the tests packages are loaded in the platform before the first service. Oncethe service package is uploaded, two different tests are generated using the same service.Fig. A.5 is the Robot Report that shows the step by step test result.
• One Test package’s testing tag matches with multiple Network Service packages’ testing tags(single TD multiple NS case). The Test Descriptor package used in this case uses a testing tagcontained by two different services. This test launches the same test against those differentservices. Fig. A.6 is the Robot Report that shows the step by step test results.
Table 4.19: Mapping Strategy Test Configuration
Scenario: Testing tags do not match
Robot Test: test NS TD testing tags not match [62]
Network Service Packages: NSID1V cirros Service Platform no tags [56]
Test Packages: TSTPING testing tag not match [93]

Scenario: Single NS testing tag matches multiple TDs' testing tags
Robot Test: test NS testing tag matches multiple TD testing tag [63]
Network Service Packages: NSID1V cirros Service Platform NS testing tag matches multiple TD testing tag [57]
Test Packages: TSTPING NS testing tag matches multiple TD testing tag 1 [90]; TSTPING NS testing tag matches multiple TD testing tag 2 [91]

Scenario: Single TD testing tag matches multiple NSs' testing tags
Robot Test: test TD testing tag matches multiple NS testing tag [74]
Network Service Packages: NSID1V cirros Service Platform TD testing tag matches multiple NS testing tag 1 [58]
Test Packages: TSTPING TD testing tag matches multiple NS testing tag [92]
5GTANGO Public 27
Document: 5GTANGO/D6.3, Date: August 12, 2019, Security: Public, Status: To be approved by EC, Version: 0.1
4.2.1.3 Re-trigger a test manually
This test uses the PING test. The scope of this scenario is to relaunch an already executed test. To do that, and to keep this test independent, the first step is to execute a new PING test; once it completes, the test plan status is modified to “RETRIED”, forcing the V&V to relaunch the test and generate a new test plan.
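The flow can be sketched as a small state transition; this is a hypothetical in-memory model of the test plan store, not the real V&V API:

```python
class TestPlanStore:
    """Hypothetical in-memory model of V&V test plans."""

    def __init__(self):
        self.plans = []

    def execute(self, test_name: str) -> dict:
        plan = {"uuid": len(self.plans) + 1, "test": test_name, "status": "COMPLETED"}
        self.plans.append(plan)
        return plan

    def retrigger(self, plan: dict) -> dict:
        plan["status"] = "RETRIED"          # mark the already executed plan
        return self.execute(plan["test"])   # the V&V generates a new test plan

store = TestPlanStore()
first = store.execute("PING")
second = store.retrigger(first)
assert first["status"] == "RETRIED"
assert second["status"] == "COMPLETED" and second["uuid"] != first["uuid"]
```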
Table 4.20: Re-trigger Manually Test Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Re-trigger a test manually | test retrigger a test manually [66] | NSID1V cirros Service Platform no tags [56] | TSTPING 2 instances probes [88] |

Fig. A.7 is the Robot Report that shows the step-by-step test result.
4.2.1.4 Parser Multiple Cases
This scenario uses a Test Descriptor that executes two probes: one that generates a JSON result file and a second that generates a text result file. Several validations are configured in the descriptor:
• Text results file contains a specific String
• Text results file doesn’t contain a specific String
• JSON results file has a specific field that matches a condition
• JSON results file has another specific field that matches a second condition
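The four validations amount to two substring checks on the text file and two conditions on JSON fields; the sketch below uses hypothetical file contents, field names and conditions, not the ones configured in the actual descriptor:

```python
import json

def validate(text_results: str, json_results: str) -> bool:
    """Apply the four configured validations to the two result files."""
    data = json.loads(json_results)
    return ("expected-line" in text_results           # text contains a string
            and "forbidden-line" not in text_results  # text lacks another string
            and data["packet_loss"] == 0              # JSON field matches a condition
            and data["rtt_avg_ms"] < 100)             # second JSON field condition

text = "expected-line\nsome other output\n"
result = json.dumps({"packet_loss": 0, "rtt_avg_ms": 0.42})
assert validate(text, result)
```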
Table 4.21: Parser Test Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Parser multiple cases | test parser multiple cases [64] | NSIMPSP no tags [61] | TSTIMPSP parser multiple cases [86] |

Fig. A.8 is the Robot Report that shows the step-by-step test result.
4.2.1.5 Deploy test Service Platform
Table 4.22: Deploy test Service Platform Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Deploy test Service Platform | test [83] | NSID1V [53] | TSTPING [87] |

Fig. A.9 is the Robot Report that shows the step-by-step test result.
4.2.1.6 Deploy test OSM
Table 4.23: Deploy test OSM Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Deploy test OSM | test [82] | NSID1V cirros OSM [55] | TSTPING [87] |

Fig. A.10 is the Robot Report that shows the step-by-step test result.
4.2.1.7 Deploy test OSM cloud-init
Table 4.24: Deploy test OSM cloud-init Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Deploy test OSM with cloud-init | test [78] | NSID1V ubuntu OSM cloud-init [59] | TSTTELNET osm cloud init [94] |

Fig. A.12 is the Robot Report that shows the step-by-step test result.
4.2.1.8 Deploy test Service Platform metrics
Table 4.25: Deploy test Service Platform metrics Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Service Platform metrics | test [81] | NSIMPSP [60] | TSTIMPSP [85] |

Fig. A.11 is the Robot Report that shows the step-by-step test result.
4.2.1.9 Deploy test Service Platform Hybrid
Table 4.26: Deploy test Service Platform Hybrid Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Service Platform hybrid | test [80] | NSID1V AND TSTPING cirros Service Platform [54] | TSTPING [87] |

Fig. A.13 is the Robot Report that shows the step-by-step test result.
4.2.1.10 Analyze the test results
Table 4.27: Analyse Test Results Configuration

| Scenario | Robot Test | Network Service Packages | Test Packages |
| Analyse test results | test [76] | Not needed | Not needed |

Fig. A.14 is the Robot Report that shows the step-by-step test result.
4.3 Service Platform
The Service Platform (SP) is a Management and Orchestration (MANO) tool responsible for managing the life cycle of Virtual Network Functions (VNFs) and orchestrating Network Services (NSs) in accordance with the ETSI NFV standards. The SP extends the ETSI standards with features such as SLAs, Policies and Licensing, and implements other features, such as Network Slicing, that are usually located in other management layers (the OSS layer). The SP is composed of multiple internal components that work together as defined in the SP architecture in [29].
In line with the 5GTANGO DevOps strategy, the Service Platform was put under a number of integration tests during development. Integration tests validate the execution of particular key actions that require the intervention of multiple components. The actions validated in these tests are partial functions, usually corresponding to a subset of the end-to-end actions the system is required to implement. Integration tests usually come before functional tests, but after unit tests, which in turn validate the behaviour of a single component. The list of integration tests has already been reported in Deliverable D6.2 [15] (see that document for further details). The list of the integration tests already validated is provided below as a reference, to give the complete view of the tests related to the Service Platform.
• SP.int.1: Valid package is stored
• SP.int.2: Invalid package is not stored
• SP.int.3: Query available services
• SP.int.4: Add an SLA Template to a Service in the Catalogue
• SP.int.5: Add a Policy to a Service in the Catalogue
• SP.int.6: Add a Network Slice Template (NST) to the Catalogue
• SP.int.7: Instantiate a service
• SP.int.8: Instantiate a Network Slice
• SP.int.9: Gatekeeper read functions and services records
• SP.int.10: Terminate a service
• SP.int.11: Terminate a Network Slice
This section defines and executes functional tests that focus on end-to-end Service Platform functionalities involving more than one Service Platform component. The tests are organized into different groups according to the set of features they intend to test.
4.3.1 Networking and Slicing
Networking and Slicing tests validate functions related to networking (single/multi-PoP) and Network Slicing. Tbl. 4.28 summarizes those tests (for more details see Appendix Sec. B).
Table 4.28: Networking and Network Slicing Functional Tests.

| Name | Description | Status |
| test-slice-3.1.1 [72] | This test checks the Network Service Composition functionality within a Network Slice by instantiating a Network Slice Instance (NSI) based on a predefined Network Slice Template Descriptor (NSTD). The main steps are the creation of the slice virtual link descriptors (slice-vld), requested to the IA, and the deployment of the Network Services (NSs) composing the Network Slice, requested to the MANO. The test finishes after terminating the Network Slice and removing all the uploaded descriptors. | OK |
| test-slice-3.2.1 [71] | This test checks the functionality to share Network Services among multiple Network Slices by instantiating two Network Slice instances based on a predefined NSTD in which one of the services has the share variable activated. The main steps are similar to the previous test, but now two Network Slices are instantiated and only the slice-vlds and NSs that are not shared are created and attached to the already existing shared NS. The test finishes after terminating the Network Slices and removing all the uploaded descriptors. | OK |
| test-slice-3.3.1 [73] | This test checks that an association between Network Services and a Service Level Agreement (SLA) is created in order to ensure a certain level of Quality of Service (QoS). The test creates an SLA Template and assigns it to each of the NSs composing the NSTD. Once the Network Slice instance is created, it validates that the agreements between the NSs and the SLA are active. The test finishes after deactivating the SLA agreements, terminating the Network Slice and removing all the uploaded descriptors. | OK |
4.3.2 User Management
The User Management test validates functions related to the management of users with admin permissions. Tbl. 4.29 summarizes this test (for more details see Appendix Sec. B).
Table 4.29: User Management and Rate Limit Functional Tests.

| Name | Description | Status |
| test admin role [26] | This test checks the user’s permissions with the admin role. The admin has permission to obtain all kinds of descriptors and to register another admin. | OK |
4.3.3 Policy and Monitoring
The Policy and Monitoring tests validate functions related to Runtime Policy and Monitoring.Tbl. 4.30 summarizes those tests (for more details see Appendix Sec. B).
Table 4.30: Policy and Monitoring Functional Tests.

| Name | Description | Status |
| test service migration state os [70] | The goal of this test is to evaluate whether stateful migration works correctly for migrated VNFs. The test also evaluates whether the SSM and FSM mechanisms work correctly. | OK |
| test service migration k8s [69] | The goal of this test is to evaluate whether migration works correctly for CNFs. | OK |
| test service reconfiguration state os [68] | The goal of this test is to evaluate whether a network service can be successfully deployed at an OpenStack VIM. The test also evaluates whether an elasticity policy is enforced correctly upon the deployed network service. | OK |
| test service reconfiguration state k8s [67] | The goal of this test is to evaluate whether a network service can be successfully deployed at a k8s VIM. The test also evaluates whether an elasticity policy is enforced correctly upon the deployed network service. | OK |
| test monitoring vim integration [31] | The goal of this test is to evaluate whether the core monitoring servers of the SP (Monitoring Manager, Prometheus) successfully collect monitoring data from the registered VIMs. | OK |
4.3.4 SLAs and Licensing
The SLA and Licensing tests validate functions related to end-to-end Service Level Agreements (SLAs) and Licensing. Tbl. 4.31 summarizes those tests (for more details see Appendix Sec. B).

Table 4.31: SLA and Licensing Functional Tests.

| Name | Description | Status |
| test sla e2e 4.1 [23] | This test checks the creation of an SLA with a public license as an SLO (Service Level Objective), its attachment to an NS, the instantiation of the service and the enforcement of the SLA, and finally the termination of the NS with the deactivation of the corresponding agreement. | OK |
| test sla e2e 4.2 [24] | This test checks the creation of an SLA with a trial license as an SLO, its correlation to an NS, the instantiation of the service and the enforcement of the SLA in case the trial license is valid, and finally the termination of the NS with the deactivation of the corresponding agreement. | OK |
| test sla e2e 4.3 [25] | This test checks the creation of an SLA with a private license as an SLO, its correlation to an NS, the instantiation of the service and the enforcement of the SLA in case the private license has been bought by the user and is valid, and finally the termination of the NS with the deactivation of the corresponding agreement. | OK |
5 Non-Functional Tests
Non-functional tests measure aspects of a system not directly related to its functions, such as performance, usability and reliability. These tests usually produce numbers that characterize the system, enabling the comparison of different systems or the execution of profiling and benchmarking studies. Non-functional tests are essential to ensure that a system can successfully cope with production environments.
This section presents the non-functional tests that were run during the validation campaign. All three components comprising 5GTANGO were validated via non-functional tests. The details of the executed tests are available in the appendix of this deliverable (see Sec. A and Sec. B).
5.1 SDK
Even though most of the 5GTANGO SDK tools run on a developer’s laptop and thus do not have to deal with thousands of requests, there is still a series of non-functional tests and performance metrics that are important. We describe those non-functional tests, mainly focusing on performance metrics, in the following sections to give the reader an insight into the general performance behaviour of the 5GTANGO SDK tools.
5.1.1 tng-sdk-project
The 5GTANGO SDK project tool, tng-sdk-project, plays an important part in the development workflow. It not only allows managing existing NFV projects, e.g. adding or removing files, but also quickly creating new projects, automatically generating suitable descriptor files. Among other parameters, users can configure the tool to generate a new project for a network service with a varying number of chained VNFs. The tool then automatically generates the corresponding VNFDs, as well as the vLinks and forwarding graphs in the NSD. The tool supports the generation of both 5GTANGO and OSM descriptors.
As NFV matures further, it is conceivable that more and more complex network services will be realized with an increasing number of VNFs. To this end, it is important that the project generation runtime and memory footprint stay small even for larger network services with many VNFs.
5.1.1.1 Test setup
To evaluate the runtime and memory footprint of generating NFV projects with tng-sdk-project, we performed a range of systematic performance tests on a machine with an Intel(R) Xeon(TM) E5-1660 v3 CPU with 8 cores @ 3.0 GHz and 32 GB memory. Specifically, we generated new NFV projects with both 5GTANGO and OSM descriptors for an increasing number of VNFs, starting with a network service that has a single VNF up to a huge service with 100 chained VNFs (increased in steps of 1). For each of these 100 configurations, we measured the absolute runtimes and the memory footprint over 30 independent repetitions to obtain statistically significant results. The code of the described tests is implemented in [19].
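A measurement loop of this kind can be sketched as follows; `generate_project` is a stand-in placeholder for the real tng-sdk-project call, not its actual API:

```python
import resource
import time

def generate_project(num_vnfs: int) -> list:
    """Placeholder for the real descriptor generation step."""
    return [{"vnf_id": f"vnf-{i}", "vnfd": {"name": f"vnf-{i}"}} for i in range(num_vnfs)]

def measure(num_vnfs: int, repetitions: int = 30):
    """Average runtime over repeated runs plus the process's peak memory."""
    runtimes = []
    for _ in range(repetitions):
        start = time.perf_counter()
        generate_project(num_vnfs)
        runtimes.append(time.perf_counter() - start)
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS)
    peak_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return sum(runtimes) / len(runtimes), peak_mem

avg_runtime, peak_mem = measure(100)
print(f"avg runtime: {avg_runtime:.6f} s, peak memory: {peak_mem}")
```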
Figure 5.1: Project and descriptor generation runtimes for projects with 1 to 100 VNFs with the tng-sdk-project CLI. 30 repetitions for each of the 100 experiments.
5.1.1.2 Results
Fig. 5.1 shows the measured absolute runtimes in seconds for generating NFV projects and descriptors with an increasing number of VNFs. The points represent the absolute measurements, which are all below 5 seconds, even for very large NFV projects with 100 VNFs. For typical network services with fewer than 20 VNFs, the runtimes are below one second, ensuring a smooth user experience.
The blue line in Fig. 5.1 is fit through the measurement points using linear regression. It illustrates that the measured runtimes are close to linear with respect to the number of VNFs. This indicates that the performance of the tool scales well with increasing load (i.e., the generation of larger NFV projects).
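A least-squares fit like the blue line in Fig. 5.1 can be computed directly; the runtimes below are synthetic, perfectly linear stand-ins for the measured data:

```python
def least_squares(xs, ys):
    """Ordinary least squares: return (slope, intercept) of the best-fit line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs = list(range(1, 101))            # 1 to 100 VNFs
ys = [0.05 * x + 0.2 for x in xs]   # synthetic runtimes in seconds
slope, intercept = least_squares(xs, ys)
print(round(slope, 3), round(intercept, 3))  # -> 0.05 0.2
```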
Fig. 5.2 visualizes the memory footprint, in particular the maximum consumed memory in MB, for running tng-sdk-project with an increasing number of VNFs. The maximum memory consumption is very stable until around 60 VNFs and then starts increasing roughly linearly. Nevertheless, the memory consumption stays below 38 MB even for the largest scenarios with 100 VNFs; the total increase in memory consumption from 1 to 100 VNFs is only around 1.5 MB. This indicates that the resource (here, memory) usage of the tool is low.
5.1.2 tng-sdk-package
5GTANGO’s packaging component, called tng-sdk-package, is developed as part of the 5GTANGO SDK. Besides its use as part of the command line tools offered by the SDK, it is also part of the 5GTANGO Service Platform and V&V. In those platforms, it is deployed as a microservice and is used to unpackage the on-boarded artefacts. This makes it an important component in the overall workflow of 5GTANGO, since almost every artefact is processed by this component. Furthermore, the packager is the central component to ensure compatibility with other platforms; it, for example, allows packaging 5GTANGO SDK projects as native OSM packages. As a result, good packaging performance is of great interest.
Figure 5.2: Project and descriptor generation memory footprint for projects with 1 to 100 VNFs with the tng-sdk-project CLI. 30 repetitions for each of the 100 experiments.
5.1.2.1 Test setup
To analyse the performance of the packager, we executed a series of tests on a machine with an Intel(R) Xeon(TM) E5-1660 v3 CPU with 8 cores @ 3.0 GHz and 32 GB memory. During these tests, we packaged 5GTANGO SDK projects of different sizes (1 to 100 VNFs) and with different validation options (skip/topology). Those projects were packaged in the 5GTANGO advanced package format as well as in the native package format of OSM, both aligned with the SOL004 ETSI NFV package format [41]. During the execution, we captured the packaging time (runtime) as well as the memory footprint of tng-sdk-package. Each test was repeated 10 times. The code of the described tests is implemented in [18].
5.1.2.2 Results
Fig. 5.3 shows the average runtimes of the packager when processing projects of different sizes. It can be seen that the tool scales linearly with the size of the projects. The used validation level (skipped or topology) has a big effect on the runtimes, and skipping the validation can heavily reduce the packaging time. In general, packaging of 5GTANGO packages takes slightly longer when validation is enabled, which is caused by the fact that the validation tool checks many more details than in the OSM case. In any case, the absolute runtimes are always in the order of several seconds, even for very large projects with 100 VNFs, which will barely be seen in practical deployments.
The memory footprint of tng-sdk-package is reported in Fig. 5.4. It can be observed that the memory usage of the tool scales linearly with the size of the processed projects. The validation has a noticeable effect on the memory consumption, which can be explained by the fact that the validator needs to be executed in the background. Interestingly, processing 5GTANGO packages needs noticeably (about 8.9%) more memory than processing and creating OSM packages. This may be caused by the fact that 5GTANGO packages are more complex and hold more metadata than OSM packages. However, the absolute numbers are still close, and the tool needs less than 55 MB of memory to process large projects.
Figure 5.3: Packaging runtimes using SDK projects with 1 to 100 VNFs for different configurations and platforms
Figure 5.4: Packager memory usage using SDK projects with 1 to 100 VNFs for different configurations and platforms
5.1.3 Validator
The Validation tool of the SDK is the component in charge of verifying the validity of the different descriptor files throughout a 5GTANGO project. The tool provides different levels of validation: syntax, integrity, topology and a recently added one called custom rules validation. All of them exist to ensure the complete usability of the packages that will be uploaded to the platforms.
In order to analyse the performance and usefulness of the validator, we executed a set of tests covering the integrity and topology validations. This section presents the test setup required to obtain these results as well as the results themselves.
The performance of the validation tool has already been indirectly tested in Sec. 5.1.2, since the tng-sdk-package tool performs a validation during the packaging process. It has proven efficient enough for production environments without adding relevant delays.
5.1.3.1 Test setup
The experiment to study the validator’s behaviour was performed over a range of 1 to 100 VNFDs. The experiment was executed on a single physical machine with an Intel(R) Core(TM) i5-6200U CPU with 4 cores @ 2.3 GHz and 8 GB memory, and was repeated 20 times. In this case the tests were performed on a common laptop, similar to the ones used by developers.
The tests were carried out in two rounds. In the first, both syntax and integrity were validated. The second round of tests also validated the topology. The code of the described tests is implemented in [21].
5.1.3.2 Results
As introduced above, the tests covered the integrity and topology validations, since they are the most complex ones. Syntax validation could be omitted, since these two already include it as a first step of the validation process.
Therefore, the results presented for this tool are split into the following sections: integrity and topology tests.
Integrity tests
Fig. 5.5 shows the average runtime of the validator while analysing an increasing number of descriptors. According to the graph, the amount of resources consumed by the tool can be considered predictable and stable, since the results are nearly linear.
Fig. 5.6 represents the amount of memory in use while the validator analyses an increasing number of descriptors. The result is close to linear growth, so we can assume that the required memory can be predicted and that the system is stable.
Topology tests
Fig. 5.7 summarizes the average runtime of the tool while analysing an increasing number of descriptors. In terms of the time spent for each validation, there is an extra delay compared to the previous case, but it is low, so the topology validation can be safely added without any noticeable impact.
Fig. 5.8 displays the amount of memory in use by the tool while the analysis is applied to a larger number of descriptors. The tests show that the topology test is more memory-intensive, but it stays within reasonable limits for any service.
Figure 5.5: Validator runtimes (integrity validation)
Figure 5.6: Validator memory usage (integrity validation)
Figure 5.7: Validator runtimes (topology validation)
Figure 5.8: Validator memory usage (topology validation)
Figure 5.9: Service deployment and test execution times for different service scales
5.1.4 Tangotest
Tangotest is a Python library for the automated functional testing of VNFs and network services. It is built on top of the Emulator and can be used to run tests locally or in CI/CD environments. In favour of a seamless development and testing workflow, the library provides functionality to prepare tests for uploading to the V&V platform. In order to quantify the performance of the library, we measured service deployment and test execution times for different scales of a service.
5.1.4.1 Test setup
In this experiment we wrote a script to compose and test a simple web service which analyses incoming traffic and logs requests to restricted content. We used the Python built-in HTTPServer module as a web server, HAProxy as a load balancer, Snort as an IDS and curl as a traffic generator. The IDS scale (number of instances) is used as a parameter to measure the performance of the library. The test code sends HTTP requests to the web server for both normal and restricted content, then verifies the responses and that the log records are equally distributed among the IDS instances. The number of these requests is equal to the number of IDS instances. The experiment was executed on a laptop with an Intel(R) Core(TM) i3-6100H CPU @ 2.70 GHz in a virtual machine with 2 processors and 2 GB of memory allocated. It was executed 10 times for each scale and the average values are shown in the plots. The code of the described tests is implemented in [20].
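The balance check at the end of the test can be sketched as follows; the instance names and record format are illustrative, not the actual Tangotest data structures:

```python
from collections import Counter

def evenly_distributed(log_records, num_instances):
    """True if every IDS instance logged the same number of requests."""
    counts = Counter(record["instance"] for record in log_records)
    return len(counts) == num_instances and len(set(counts.values())) == 1

# 8 requests spread over 4 IDS instances: two records per instance.
records = [{"instance": f"ids-{i % 4}", "request": "/restricted"} for i in range(8)]
assert evenly_distributed(records, 4)
```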
5.1.4.2 Results
The plots show a linear growth of time with an increasing number of instances. These results demonstrate that, in addition to regression testing, the library can also be used for ad-hoc testing and test-driven development thanks to the fast deployment of Network Services and fast test execution. Tests can be executed locally after every change in the service implementation, and the results can immediately be used to find and fix bugs.
5.1.5 tng-sdk-sm
5GTANGO’s specific manager component, referenced as tng-sdk-sm, is part of the SDK and aims to aid network service and VNF developers when developing and testing specific managers. Service Specific Managers (SSMs) and Function Specific Managers (FSMs) are used by developers
to customize the behaviour of the MANO Framework for their components during runtime. As designing, developing and testing such specific managers has proven cumbersome and error-prone in the past, the consortium decided that there was a need for a tool such as tng-sdk-sm to facilitate this work.
The tool can be used to generate templates for SSMs and FSMs, and it provides a framework to test them. In order to be tested properly, they need to be executed in a MANO-like context. For this reason, the tool makes use of a standalone version of the 5GTANGO MANO Framework that orchestrates on top of the Emulator. When specific managers need to be tested, their associated life cycle workflow (instantiation, migration, scaling, termination) is triggered on this standalone MANO Framework. Once the workflow has finished, the status of the specific manager is evaluated and reported to the developer. The performance of tng-sdk-sm is defined by the performance of this standalone MANO and Emulator setup, i.e. how long this setup takes to execute the various life cycle workflows.
5.1.5.1 Test setup
To analyse the performance of tng-sdk-sm, we executed a series of tests on a machine with an Intel(R) Core(TM) i5-5300U CPU with 8 cores @ 2.30 GHz and 8 GB memory. During these tests, we evaluated the time required by the standalone MANO Framework to execute various life cycle workflows on top of the Emulator. The duration of the instantiation and termination workflows was measured for services with a number of VNFs ranging from 1 to 20, as was the duration of the scale-in and scale-out workflows. Each VNF is an Ubuntu Trusty Docker container. Each test was repeated 5 times.
5.1.5.2 Results
Fig. 5.10 shows the results of the tests. The duration of each life cycle workflow increases linearly with the number of involved VNFs. It also shows that the instantiation and scale-out workflows on the one hand, and the termination and scale-in workflows on the other, yield similar results. This indicates that, for every workflow, the duration is mostly defined by the time it takes to deploy or kill a VNF.
5.1.6 VIM Emulator
5GTANGO’s vim-emu is one of the key components of the SDK toolkit and provides an easy-to-use prototyping platform for VNF and service developers. One of its outstanding features is the capability to emulate arbitrary network topologies with many NFVI PoPs on a single machine. In order to quantify this key feature, as well as the general usefulness of the emulator for prototyping, e.g. in terms of fast deployment times, we executed a set of experiments. This section presents a short summary of the results. Further results can be found in our corresponding journal paper [47], which was published during the project.
5.1.6.1 Test setup
We performed a set of experiments to study vim-emu’s behaviour when topologies with many PoPs are emulated or when hundreds of service instances are deployed on the emulated infrastructure. The experiments were executed on a single physical machine with an Intel(R) Xeon(TM) E5-1660 v3 CPU with 8 cores @ 3.0 GHz and 32 GB memory, and were repeated 10 times. The code of the described tests is implemented in [22].
Figure 5.10: Duration of various lifecycle events
5.1.6.2 Results: Platform scalability
We analysed the startup and configuration time of the emulation platform for synthetic topologies with different numbers of PoPs. Fig. 5.11 shows the setup time breakdown for up to 1024 PoPs using four topologies. The linear topology connects all PoPs into a long chain, the star topology connects all PoPs to a single central PoP, and the two randomized (rnd) topologies take the number of PoPs |V| and a factor k as inputs and then interconnect the PoPs with |E| = k|V| links, where |E| is the number of created links [47].
The results show that in all topologies, 128 PoPs can be set up in between 91.8 s and 197.7 s. Even the maximum tested number of 1024 PoPs can, on average, be created in 3,704.0 s using the rnd(k=0.5) topology. The plots indicate a non-linear relationship between the number of PoPs and the total setup times, which is mostly caused by the setup time of the emulated links. We identified the Open vSwitch daemon, which always runs on a single CPU core, as the bottleneck in large deployments, since it has to manage one vSwitch instance per PoP [47].
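Our reading of the rnd(k) topology construction from [47] can be sketched as follows (the actual vim-emu code may differ):

```python
import random

def rnd_topology(num_pops: int, k: float, seed: int = 42):
    """Create |V| PoPs and |E| = k*|V| distinct random inter-PoP links."""
    rng = random.Random(seed)
    pops = list(range(num_pops))
    num_links = int(k * num_pops)
    links = set()
    while len(links) < num_links:
        a, b = rng.sample(pops, 2)           # two distinct PoPs
        links.add((min(a, b), max(a, b)))    # undirected, deduplicated
    return pops, links

pops, links = rnd_topology(128, k=0.5)
print(len(pops), len(links))  # -> 128 64
```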
We also analysed the memory consumption for these four topologies (see Fig. 5.11). The figure shows that the total memory used by the test environment increases proportionally to the number of PoPs in the topology. In general, no more than 5.0 GB of memory is used, even with large topologies, which verifies that our emulation platform can easily be executed on existing test nodes or locally on a developer’s laptop [47].
5.1.6.3 Results: Service deployment times
We also studied the time required to deploy a large number of VNFs on top of the emulated infrastructure. We used the same topologies as before and configured them with either 8 or 128 PoPs. On top of them we deployed up to 256 VNFs (randomly placed). The used VNFs are based on the default Docker image (i.e. ubuntu:trusty) and do not run any additional software, since we are only interested in the bare instantiation times. Fig. 5.12 shows that the instantiation times scale proportionally with the number of VNFs and that the instantiation process takes longer in larger topologies and is also influenced by the number of links in a topology. With our platform, hundreds of VNFs can be quickly deployed on a single machine, enabling fast tests of large deployment scenarios [47].

Figure 5.11: Scalability of the vim-emu platform with up to 1000 emulated PoPs.
Figure 5.12: Service deployment times on different emulated topologies for services with up to 256 VNFs
5.1.7 tng-sdk-benchmark
The benchmarking tool (benchmarker) is part of 5GTANGO’s SDK and provides VNF and NS developers with a flexible framework to build and execute fully automated benchmarking experiments of single VNFs or complex NSs. Those experiments are executed in rounds. In each round, the system under test (SUT) is freshly deployed on a service platform (e.g. vim-emu) using a different configuration (e.g. a different number of CPU cores). After that, the SUT is stimulated by probes, similar to the workflow of the V&V, and the resulting performance is measured. We used tng-sdk-benchmark to collect a series of data sets from generic VNFs, such as IDSs, proxies and MQTT brokers, and published those data sets as part of the SNDZoo library [48]. We produced a total of 8 data sets: SEC01-SEC03, WEB01-WEB03 and IOT01-IOT02.
5.1.7.1 Test setup
To quantify the performance of the benchmarker, we compare the runtimes of different experiments used to collect data sets of different sizes. To collect those data sets, two machines with an Intel(R) Xeon(TM) E5-1660 v3 CPU with 8 cores @ 3.0 GHz and 32 GB memory were used. One of the machines was used to execute the benchmarker itself and the other machine was used to run the SUT on top of vim-emu. For each data set, between 40 and 80 different configurations were tested and the measurement for each configuration was repeated 20 times. During the experiments, up to 281 experiment metrics and up to 593 time series metrics were recorded, showing the large amount of data that can easily be collected with this tool. The code of the described tests is implemented in [17].
5.1.7.2 Results
First, we report the overall experiment runtimes in Fig. 5.13. The different experiments (SEC01-IOT02) resulted in data sets that contain between 4.6 and 28.7 million data points. The experiments took between 19.2 h (SEC03) and 79.8 h (IOT02), as reported in the figure. These long-term runs verify that tng-sdk-benchmark provides the maturity and stability required to execute long-lasting, complex experiments.
Second, we investigated how stably the system runs. To do so, we investigated the runtimes of each of the measurement rounds of a single experiment (SEC01), which had been configured to last 120 s. Fig. 5.14 shows that all measurement rounds took at least 120 s, and most of them took about 0.13 s more. This additional time is used by the benchmarker for configuration and management tasks and does not cause further problems. The important takeaway from the presented figure is that tng-sdk-benchmark works reliably and executes a large number of measurement rounds uniformly without getting slower, i.e. it correctly clears the environment after each measurement round.
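The per-round overhead analysis above amounts to subtracting the configured duration from each observed round duration. The round durations below are made-up illustrative values, not the measured SEC01 data:

```python
from statistics import mean

CONFIGURED = 120.0  # configured round duration in seconds
# Hypothetical per-round wall-clock durations as logged by the benchmarker
rounds = [120.13, 120.12, 120.14, 120.13, 120.15]

# Management/configuration overhead per round
overheads = [r - CONFIGURED for r in rounds]
assert all(o >= 0 for o in overheads)  # no round may undercut 120 s
print(f"mean management overhead: {mean(overheads):.3f} s")
```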
5.1.8 tng-analytics-engine
5GTANGO's analytics engine package, called tng-analytics-engine, is developed as a tool that can be interconnected with the SDK as well as the V&V. On the SDK side, experiment results produced by the tng-sdk-benchmark tool are used for analysis purposes. Users are able to select the algorithm to be executed along with the set of input monitoring metrics and the start and end time of the considered time series data. The tool supports the production of analysis results in various formats. Such results can lead to insights for the design and implementation of efficient orchestration (deployment and runtime) policies.
Figure 5.13: Overall runtimes to collect the data sets of the SNDZoo library [48]
Figure 5.14: Example of the time taken per measurement round during a large set of runs, showing the stability of the tool
Figure 5.15: Analysis requests vs Memory usage.
5.1.8.1 Test setup
The main performance metrics considered in the evaluation concern resource efficiency, namely the trends in the consumption of computational resources by the analytics engine when executing numerous analysis processes. To evaluate these indicators, we performed a range of systematic performance tests on an Ubuntu 18.04.1 Long Term Support (LTS) virtual machine with 4 vCPUs, 8 GB of RAM and 80 GB of storage space. Specifically, we prepared a set of scripts issuing numerous parallel analysis requests, ranging from 1 to 1000 requests. Such requests had to be accommodated by the analytics engine and scheduled for execution by the analysis server. We evaluated the impact of the increase in the number of requests on CPU and memory usage and on the time required for the realization of an analysis process (service execution time). The analyzed data set consists of time series data of 11 metrics with 3352 data points in total. The executed analysis process is a correlogram analysis, in which all statistically significant (positive and negative) correlations between the provided input metrics are calculated.
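The correlogram step can be sketched as a pairwise Pearson correlation over the input metrics. Here a plain magnitude threshold stands in for the statistical significance test the engine performs, and all metric names and values are illustrative:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlogram(metrics, threshold=0.8):
    """Return all metric pairs whose |r| exceeds the threshold
    (a simple proxy for the engine's significance screening)."""
    out = {}
    for (na, xa), (nb, xb) in combinations(metrics.items(), 2):
        r = pearson(xa, xb)
        if abs(r) >= threshold:
            out[(na, nb)] = round(r, 3)
    return out

metrics = {
    "cpu": [10, 20, 30, 40, 50],
    "mem": [11, 19, 31, 39, 52],   # tracks cpu closely
    "noise": [5, 3, 9, 1, 7],      # unrelated
}
print(correlogram(metrics))  # only the (cpu, mem) pair survives
```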
5.1.8.2 Results
The produced performance evaluation results are depicted below. Fig. 5.15 and Fig. 5.16 show the effect of serving numerous parallel requests on the memory and CPU usage, along with the obtained time series data with regard to memory usage, CPU usage and the execution duration of the requests. The horizontal axis in the figures corresponds to the number of server requests per second, provided as an average value within a 10-second time period. It should be noted that this effect depends on the computational intensiveness of the executed algorithm. In the case of the correlogram (i.e. an image of correlation statistics), there is a slightly increasing trend in the required memory resources and a somewhat larger increasing trend in the required CPU resources when serving multiple requests in parallel. Thus, the allocation of extra CPU resources should mainly be planned for cases where the execution of multiple analyses in the same time period is required.
Fig. 5.17 shows the effect of serving numerous parallel requests on the duration of the execution of the analysis process, along with the obtained time series data. It is shown that the increase in the number of requests has a small impact (less than 1 second) on the execution of the analysis processes. No severe limitation is identified with regard to this performance metric.
Figure 5.16: Analysis requests vs CPU usage.
Figure 5.17: Analysis requests vs Analysis process duration.
5.2 V&V Platform
The following section lists the non-functional tests implemented for the V&V Platform. Tbl. 5.1 summarizes these non-functional tests:
Table 5.1: V&V Non-Functional Tests

Test Case Name | Description | Status
Multiple Parallel Probes [34] | To ensure that multiple probes can run in parallel. The Test Descriptor (TD) will be based on the PING test but using multiple instances/probes sections with no dependencies to execute all probes in parallel. | OK
5.2.1 Multiple Parallel Probes
The parallel execution can be performed in two different ways:
• Launching two instances of the same test probe. To execute probes in parallel, two instances of the PING test can be launched at the same time. To do that, the Test Descriptor's "instances" parameter is increased to 2. Fig. A.1 is the Robot report that shows the step-by-step test result.
• Launching two different test probes. To execute probes in parallel, two probes are defined to launch the PING test and the NETCAT test. In this case, there is one validation condition per probe (ping: it should not lose any packet; netcat: port 80 should be open). The Robot test checks whether the probes are executed in parallel, i.e. the difference between the end time and the start time of each probe must be less than 60 seconds. Fig. A.2 is the Robot report that shows the step-by-step test result.
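The parallelism condition described above (the gap between each probe's end and start time must stay under 60 seconds) can be sketched as a simple check; probe names and timestamps are illustrative:

```python
from datetime import datetime

def probes_ran_in_parallel(probes, limit_s=60):
    """Robot-style check: each probe's end minus start time must stay
    under limit_s; sequential execution of the probes would exceed it."""
    for p in probes:
        start = datetime.fromisoformat(p["start"])
        end = datetime.fromisoformat(p["end"])
        if (end - start).total_seconds() >= limit_s:
            return False
    return True

probes = [
    {"name": "ping",   "start": "2019-08-12T10:00:00", "end": "2019-08-12T10:00:35"},
    {"name": "netcat", "start": "2019-08-12T10:00:01", "end": "2019-08-12T10:00:40"},
]
print(probes_ran_in_parallel(probes))  # True
```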
Table 5.2: Multiple Parallel Test Configuration

Scenario | Robot Test | Network Service Packages | Test Packages
Two Instances | test 2 instances probes | NSID1V cirros SONATA no tags | TST PING 2 instances probes
Two Probes | test 2 parallel probes | NSID1V cirros SONATA no tags | TST PING 2 parallel probes
5.3 Service Platform
This section discusses the Service Platform non-functional tests executed for the most relevant KPIs.
5.3.1 Service Platform Gatekeeper KPIs
This section describes the Key Performance Indicators (KPIs) we have collected for the Service Platform.

• Number of requests, which counts the number of HTTP requests made to the platform's API.

• Time taken by each request, which measures the time needed to answer each of those requests.
Figure 5.18: Ruby rack-based applications.
It should be noted that the second KPI above measures only the time of synchronous requests. The time an asynchronous request takes has to be calculated based on the fields of the request. The Gatekeeper router is a (Ruby) rack-based application (see [11]), which means that every HTTP request goes through a stack of applications, as shown in Fig. 5.18 (from the wiki in [4]).
In one of the first of such apps (not the very first one, since we first need to protect the API with rate limiting), when the response is received, we increase a counter in a Prometheus database, thus making this KPI available. Each HTTP request that is submitted to the platform goes through the Gatekeeper, thus allowing a uniform and consistent way of measuring the time it takes the platform to answer those requests. As is often the case in platforms such as 5GTANGO's V&V and SP, some requests are answered synchronously and others are just registered and answered (later) asynchronously.
• Synchronous requests: The time each synchronous request takes, e.g. the queries made for packages, services, slices, functions, etc. that exist in the Catalogue, is measured in a way similar to counting the requests (described above). We save the time the request arrives (see Fig. 5.18) and later, when the response to that request is ready, we subtract the arrival time from the current one, sending the result in an HTTP header we have called X-Timing. We therefore make available the time every synchronous request takes to be answered.
• Asynchronous requests: The time each asynchronous request takes, e.g. the instantiation or termination of services or slices, the scaling-out or -in of a function, etc., has to be measured differently. We do it by taking two fields every request has in its metadata:
– created_at, which is set when the request is created;

– updated_at, which is set when the request is somehow changed.
In our case, we register every change of status, so the time taken by any request is simply updated_at - created_at once the status is READY.
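The asynchronous timing rule above reduces to a timestamp subtraction once the request reaches READY. The helper and the sample timestamps below are illustrative, not the Gatekeeper's actual code:

```python
from datetime import datetime

def async_duration(request):
    """Duration of an asynchronous request: updated_at - created_at,
    meaningful only once the request has reached the READY status."""
    if request["status"] != "READY":
        return None  # still in progress; duration not yet defined
    created = datetime.fromisoformat(request["created_at"])
    updated = datetime.fromisoformat(request["updated_at"])
    return (updated - created).total_seconds()

req = {
    "status": "READY",
    "created_at": "2019-08-12T10:00:00+00:00",
    "updated_at": "2019-08-12T10:01:17+00:00",
}
print(async_duration(req))  # 77.0 seconds
```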
5.3.1.1 Test Setup
The platform used to perform the KPI latency tests is located in Paderborn. The server is a bare-metal server with a 16-core Intel(R) Xeon(R) CPU E5-1660 v3 @ 3.00 GHz, 32 GB RAM and 256 GB of storage. The server that supports the installation of the SONATA Service Platform runs Kubernetes v1.15, Docker 18.09.6 and Istio 1.2.2. To perform the stress test, the JMeter application was installed on
the server for the execution of the tests. The test plan used for the following tests is available in the GitHub repository sonata-nfv/tng-test [52].
The process used to obtain the metrics was the following. Having selected a benchmarking tool, we chose endpoints that exercise different internal components, such as calls between microservices, queries to databases, enrichment of results or caching of content. Starting from this selection of endpoints, we generated a test suite per API endpoint to stress the system, using four groups of 10, 20, 30 and 40 simultaneous users. Tbl. 5.3 summarizes the results of the stress tests that were applied to the SONATA SP (for more details see Appendix Sec. B). The latency is calculated internally by the entrypoint tng-gtk-api using the methodology described previously. The result column is the average over all requests made to the endpoints.
5.3.1.2 Test Results
As shown in Tbl. 5.3, the average time taken by the gatekeeper is between 257 ms and 398 ms per request processed by a thread. The difference between the quickest endpoint and the slowest endpoint is about 141 ms. This difference between endpoints is back-end processing latency: the slowest endpoint has the slowest back-end chain. The package, service, function and slices endpoints use the chain of microservices tng-sec-gtw, tng-gtk-api, tng-gtk-common, tng-cat, mongodb. For policies, the chain is different and uses the microservices tng-sec-gtw, tng-gtk-api, tng-gtk-sp, tng-policy-mngr, mysql. The SLA Manager follows a chain similar to the Policy Manager, comprising the tng-sec-gtw, tng-gtk-api, tng-gtk-sp, tng-sla-mngr and MySQL microservices. The VIM resources endpoint uses the rabbitmq microservice with asynchronous messaging; its chain is tng-sec-gtw, tng-gtk-api, tng-gtk-sp, rabbitmq, ia-nbi, PostgreSQL. Moreover, the requests endpoint uses PostgreSQL via the GK, with the microservices chain tng-sec-gtw, tng-gtk-api, tng-gtk-sp, PostgreSQL.
Table 5.3: Stress tests for KPI latency over GK API endpoints

Name | Description | Result (ms)
test kpi latency get packages | Time to get a package list from catalogue | 278.561
test kpi latency get services | Time to get a service list from catalogue | 280.587
test kpi latency get functions | Time to get the function list from catalogue | 290.781
test kpi latency get policies | Time to get policies from database | 257.715
test kpi latency get requests | Time to get the requests list | 398.839
test kpi latency get slas | Time to get the SLAs list from database | 277.41
test kpi latency get slices | Time to get the Network Slice Templates list from database | 278.881
test kpi latency get vims resources | Time to get the VIM resources list from VIMs | 312.134
An additional test was created to check the impact of scaling the gatekeeper on the reply latency. Fig. 5.19 shows the requests per second that the Service Platform can serve per user along two dimensions: the number of users and the number of gatekeeper replicas. We used 10, 20, 30, 40 and 50 users in parallel and tested with 1, 2 and 4 gatekeeper replicas. The result is that the requests per second increase when there are more gatekeeper replicas. As shown in Fig. 5.20, with more replicas the latency observed under concurrency decreases: the gatekeeper internal times are the same, but each user has to wait until the gatekeeper has processed the pending requests before its own request is served.
5.3.2 KPI Latency - Instantiation and Termination VNFs and CNFs
One of the most important KPIs for measuring the performance of the Service Platform is the time the orchestrator spends instantiating and terminating a Network Service. This test measures the instantiation and termination times of the SONATA NFV Orchestrator.
Figure 5.19: Gatekeeper Package endpoint request per second
Figure 5.20: Gatekeeper Package endpoint latency from client
5.3.2.1 Test Setup
For these measurements, we used the Aveiro testbed, sta-sp-ave.5gtango.eu (see Sec. 2.3). The Service Platform in Aveiro has the recommended resources specified in the SONATA quick guide [50] (4 vCPUs, 8 GB RAM and 80 GB disk). The OpenStack attached to this Service Platform is also located in Aveiro and its specifications are described in Sec. 2.3.
For the test, six Network Services were used. They differ in the number of VNFs or CNFs, in order to calculate the impact on deployment time of increasing the number of VNFs in the NS: more VNFs in a NS make it more complex for the Service Platform to deploy. The package eu.5gtango.test-ns-nsid1v.0.1.tgo contains one cirros VNF and one FSM that configures it. The package eu.5gtango.test-ns-nsid2v.0.1.tgo contains two cirros VNFs and two FSMs that configure them, and the package eu.5gtango.test-ns-nsid3v.0.1.tgo contains three VNFs and three FSMs that configure the service. All three packages also contain one SSM for service reconfiguration.
In the case of CNFs we used three packages, eu.5gtango.test-ns-nsid1c.0.1.tgo, eu.5gtango.test-ns-nsid2c.0.1.tgo and eu.5gtango.test-ns-nsid3c.0.1.tgo, each package with the corresponding number of CNFs in the NS. The CNF of each package contains an ubuntu container. The 5GTANGO packages used for this test are available on GitHub in the tng-tests repository [84].
5.3.2.2 Test results
The results were extracted from four consecutive hours of executing the instantiation and termination of the same Network Service. No other Network Service was running or being instantiated in the Service Platform during the test, in order to obtain precise results.
Table 5.4: KPI Latency - Instantiation and Termination VNFs

Test Name | Description | Result (s)
test kpi service instantiation 1VNF | Time to execute a service instantiation of a NS composed of 1 VNF | 76.871
test kpi service instantiation 2VNFs | Time to execute a service instantiation of a NS composed of 2 VNFs deployed in the same VIM | 92.411
test kpi service instantiation 3VNFs | Time to execute a service instantiation of a NS composed of 3 VNFs deployed in the same VIM | 134.501
test kpi service termination 1VNF | Time to execute a service termination of a NS composed of 1 VNF | 23.417
test kpi service termination 2VNFs | Time to execute a service termination of a NS composed of 2 VNFs deployed in the same VIM | 26.250
test kpi service termination 3VNFs | Time to execute a service termination of a NS composed of 3 VNFs deployed in the same VIM | 28.272
Table 5.5: KPI Latency - Instantiation and Termination CNFs

Test Name | Description | Result (s)
test kpi service instantiation 1CNF | Time to execute a service instantiation of a NS composed of 1 CNF | 8.84
test kpi service instantiation 2CNFs | Time to execute a service instantiation of a NS composed of 2 CNFs deployed in the same VIM | 13.16
test kpi service instantiation 3CNFs | Time to execute a service instantiation of a NS composed of 3 CNFs deployed in the same VIM | 19.37
test kpi service termination 1CNF | Time to execute a service termination of a NS composed of 1 CNF | 6.68
test kpi service termination 2CNFs | Time to execute a service termination of a NS composed of 2 CNFs deployed in the same VIM | 6.69
Figure 5.21: KPI Latency Instantiation and Termination VNF and CNF
Table 5.5 (continued)

Test Name | Description | Result (s)
test kpi service termination 3CNFs | Time to execute a service termination of a NS composed of 3 CNFs deployed in the same VIM | 6.77
Tbl. 5.4 summarizes the results obtained from the instantiations and terminations of the Network Services for VNFs. The simplest Network Service (one cirros VNF inside a NS) requires 76 seconds to be instantiated in SONATA, while the NS with two VNFs requires 92 seconds. Considering that three VNFs take 134 seconds, we can conclude that around 30 seconds are needed on average per additional VNF. These results match the ones obtained in the test below, [OpenStackWrapper]FunctionDeploy-time, where the time is 33 seconds per VNF. The reason is that the Service Platform orchestrates VNFs sequentially: the requests are queued and processed in a First In First Out (FIFO) fashion. For CNFs, the results obtained in Tbl. 5.5 show that CNFs are faster in terms of instantiation time, requiring around five seconds per CNF in the Network Service.
Fig. 5.21 shows the comparison between instantiation and termination for each type of Network Service used in the test. The instantiation of CNFs is around seven times faster than the instantiation of VNFs, while the termination is three times faster for CNFs.
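The per-VNF increment quoted above follows directly from the instantiation times in Tbl. 5.4:

```python
# Instantiation times from Tbl. 5.4, in seconds, keyed by number of VNFs
inst = {1: 76.871, 2: 92.411, 3: 134.501}

# Average additional time per extra VNF between the 1-VNF and 3-VNF cases
per_vnf = (inst[3] - inst[1]) / (3 - 1)
print(f"{per_vnf:.1f} s per additional VNF")  # ~28.8 s, i.e. roughly 30 s
```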
5.3.3 KPI Latency - Instantiation and Termination of 1 to 50 CNFs
In this test, we took advantage of the lightweight deployments over Kubernetes to trigger more than 250 instantiations and terminations (five repetitions for NSs composed of 1 to 50 CNFs).
5.3.3.1 Test Setup
For these measurements, we used the Aveiro testbed, sta-sp-ave.5gtango.eu (see Sec. 2.3), i.e. the same environment used above for the Instantiation and Termination KPI.
For the test, fifty Network Services were used. They differ in the number of CNFs, in order to calculate the impact on deployment time of adding more CNFs for the orchestrator. The package eu.5gtango.cnf-XX-sm.0.1.tgo contains one ubuntu CNF. The 5GTANGO packages used for this test are available on GitHub in the tng-tests repository [84].
Figure 5.22: Instantiation and termination time vs. number of CNFs
5.3.3.2 Test results
The results were extracted from the execution of 250 instantiations and terminations of the Network Services. No other Network Service was running or being instantiated in the Service Platform during the test, in order to obtain precise results. Fig. 5.22 shows the time spent by the SONATA SP instantiating and terminating a Network Service composed of a set of CNFs. The blue line is the time in seconds to instantiate the NS and the orange line is the time in seconds to terminate it. The horizontal axis shows the number of CNFs that compose the Network Service. We can conclude from this graph that the instantiation time grows at a constant rate of about 5.5 s per CNF added to the NS. The termination time also grows at a constant rate, with steps of about 0.42 s per CNF.
The latency added by each additional CNF arises because the MANO sends the functions sequentially to the wrapper and waits for completion. The wrapper generates the Kubernetes objects and schedules them in the k8s cluster. The rate is consistent with the deployment times obtained above for the Kubernetes wrapper.
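A minimal linear model of the constant-rate growth reported above; the slopes come from the text, while the intercepts are illustrative fits rather than measured values:

```python
def predict_times(n_cnfs, inst_slope=5.5, term_slope=0.42,
                  inst_base=3.3, term_base=6.3):
    """Linear model: NS instantiation/termination time grows by a fixed
    step per CNF, since MANO hands CNFs to the wrapper one at a time."""
    return inst_base + inst_slope * n_cnfs, term_base + term_slope * n_cnfs

inst, term = predict_times(50)
print(f"50 CNFs: ~{inst:.0f} s to instantiate, ~{term:.0f} s to terminate")
```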
5.3.4 Latency insights for Heat Wrapper and Kubernetes Wrapper
The times measured inside the wrappers help to understand the behaviour of the orchestrator when performing multiple instantiations and terminations of an NS. This test extracts the average times inside the Heat and Kubernetes wrappers of the SONATA NFV Service Platform.
5.3.4.1 Test Setup
For the collection of KPIs, the selected environment was the staging platform in Aveiro. We used the tests above ("KPI Latency - Instantiation and Termination VNFs and CNFs") to collect the times spent by the wrappers deploying NSs. The VNF contains a cirros image and the CNF contains an ubuntu image. The times were collected from Graylog (a centralized logging system used in the 5GTANGO CI/CD environments) using the time marks recorded by the wrappers in the logs.
5.3.4.2 Test Results
For each test, the average latency (in ms) was calculated and is presented in Tbl. 5.6 and Tbl. 5.7. The comparable tests between Heat and K8s are:
1. [OpenStackWrapper]FunctionDeploy-time to [KubernetesWrapper]CreateFunction-time
2. [OpenStackWrapper]RemoveService-time to [KubernetesWrapper]RemoveService-time
3. [OpenStackWrapper]NetworkCreate-time to [KubernetesWrapper]K8sCreatingService-time.
This comparison is represented in Fig. 5.23, where it can be observed that the OpenStack times are larger than the Kubernetes times. The use of CNFs accelerates the deployment of Network Functions by roughly a factor of 7. This result was expected due to the generally small footprint of containers.
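The factor-of-seven claim can be checked directly against the deployment times in Tbl. 5.6 and Tbl. 5.7:

```python
# Per-function deployment times taken from Tbl. 5.6 and Tbl. 5.7, in ms
heat_deploy = 33745.04   # [OpenStackWrapper]FunctionDeploy-time
k8s_deploy = 4814.06     # [KubernetesWrapper]CreateFunction-time

speedup = heat_deploy / k8s_deploy
print(f"CNF deployment is ~{speedup:.1f}x faster than VNF deployment")
```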
Table 5.6: KPI Latency Heat Wrapper

Test Name | Description | Result (ms)
[OpenStackWrapper]getResourceUtilisation-time | Query to OpenStack to get resources | 730.5
[OpenStackWrapper]NetworkCreate-time | Creation of new networks for a VNF (3 nets: management, internal, external) | 19610.35
[OpenStackWrapper]isImageStored-time | Query to OpenStack to check image availability | 853.35
[OpenStackWrapper]FunctionDeploy-time | Time spent by OpenStack to deploy one Network Function (cirros image) | 33745.04
[OpenStackWrapper]RemoveService-time | Time spent by OpenStack deleting the stack | 18344.35
Table 5.7: KPI Latency K8s Wrapper

Test Name | Description | Result (ms)
[KubernetesWrapper]K8sCreatingResourcesObject-time | Time spent by Kubernetes Wrapper creating the resource utilization object | 125.45
[KubernetesWrapper]K8sMonitoringMetrics-time | Query to Kubernetes to get the metrics about resource usage | 58.26
[KubernetesWrapper]ListResource-time | Query to Kubernetes to get metrics about resource availability | 274.31
[KubernetesWrapper]K8sCreatingConfigmap-time | Request to Kubernetes Wrapper to create the Configmap object | 67.81
[KubernetesWrapper]K8sCreatingDeployment-time | Request to Kubernetes Wrapper to create the Deployment object | 4110.42
[KubernetesWrapper]CreatingDeploymentObject-time | Time spent by Kubernetes Wrapper creating the deployment object | 200.55
[KubernetesWrapper]K8sCreatingService-time | Request to Kubernetes Wrapper to create the Service object | 346.43
[KubernetesWrapper]DeployCNF-time | Time spent by Kubernetes Wrapper creating all objects needed to deploy a CNF and its creation in the Kubernetes cluster | 4797.3
[KubernetesWrapper]CreateFunction-time | Time spent by Kubernetes Wrapper scheduling the creation of a function | 4814.06
[KubernetesWrapper]K8sGetDeployment-time | Query to Kubernetes to list the deployments in the cluster | 64.61
[KubernetesWrapper]K8sGetConfigmap-time | Query to Kubernetes to get a configmap | 68.47
[KubernetesWrapper]K8sOverwriteConfigmap-time | Request to Kubernetes to patch a configmap | 68.26
[KubernetesWrapper]K8sCreatingPatch-time | Request to Kubernetes to patch a configmap and trigger a rolling update | 69.18
[KubernetesWrapper]ConfigureFunction-time | Request to Kubernetes to perform a rolling update of a function | 418.75
[KubernetesWrapper]RemoveService-time | Time spent by Kubernetes Wrapper scheduling the termination of a function | 337
Figure 5.23: Wrapper times comparison HEAT vs K8S
6 Conclusion
This deliverable concludes the effort in WP6 regarding infrastructure (deployment, configuration and maintenance), the DevOps environment (configuration and maintenance) and system-level testing of the 5GTANGO software release. The focus of this deliverable is the system-level validation of the three components of the 5GTANGO platform, namely the SDK toolkit, the Validation and Verification framework and the Service Platform. The system-level test campaign tested each of the aforementioned components with both functional and non-functional tests. The methodology used in the definition of these tests follows the guidelines of the ETSI NFV TST002 document and is fully aligned with testing best practices. The achieved results prove that the 5GTANGO platform software release has reached a very good level of stability and operability, with rich features and functionalities. The capability of the platform to enable and foster DevOps processes for NFV developer teams on a variety of NFV-capable infrastructures has been verified. In addition, the non-functional tests provide insights on specific KPIs that relate to the 5G KPI specifications defined in 5G-PPP deliverables. The open-source definition of the tests, the tools that are used and their automated execution allow the developer community that is built around the 5GTANGO code base to monitor the progress and efficiency of the available code and to progress beyond the current code base.
A Appendix - V&V Platform - Test Cases Details
The description of the test cases is based on previous work of ETSI NFV, in particular ETSI NFV-TST 001.
A.1 [VnV Executor] Multiple Parallel Probes
Tbl. A.1 shows the test case defined to test Multiple Parallel Probes in the V&V.
Table A.1: Multiple Parallel Probes

Test Case Name: Multiple Parallel Probes
Test Purpose: To ensure that multiple probes can run in parallel
Configuration: The TD will be based on the PING and NETCAT tests, but using multiple instances/probes sections with no dependencies to execute all probes in parallel.
Test Tool: Robot Framework
Metric: No metric (ping and netcat tests)
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: Variations of this test case can be performed by modifying the TD: defining two probes sections with different names but using the same probe, or using the same probe but setting the instances to 2.
Pre-test conditions: The packages that contain the NS and Tests will be created before the test execution.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue
3. Check Service Instantiation: Service instance is up and running
4. Check Test Execution: VnV launches and executes the test
5. Check Test Completion: VnV test execution is completed
6. Check Stored Results: The test results are stored in the test results repository
7. Check No Running Instances In SP: After the test, the instantiated service must be deleted from the Service Platform
Test Verdict: The results will show content from the probes
Additional resources: none
Fig. A.1 shows the report of the Multiple Parallel Probes test with two instances.
Figure A.1: Multiple Parallel Probes - Test Two Parallel Instances Report
Fig. A.2 shows the report of the Multiple Parallel Probes test with two probes.
Figure A.2: Multiple Parallel Probes - Test Two Parallel Probes Report
A.2 [VnV Executor] Sequential execution of probes
Tbl. A.2 shows the test case defined to test the sequential execution of probes in the V&V.
Table A.2: Sequential execution of probes

Test Case Name: Sequential execution of probes
Test Purpose: To ensure that one probe can start after another
Configuration: The TD will be based on the PING test but using two probes sections: one with no dependencies and the other depending on the first one
Test Tool: Robot Framework
Metric: No metric (ping test)
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: none
Pre-test conditions: The packages that contain the NS and Tests will be created before the test execution.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue
3. Check Service Instantiation: Service instance is up and running
4. Check Test Execution: VnV launches and executes the test
5. Check Test Completion: VnV test execution is completed
6. Check No Running Instances In SP: After the test, the instantiated service must be deleted from the Service Platform
7. Check Stored Results: The test results are stored in the test results repository
Test Verdict: The results will show content from the probes
Additional resources: none
Fig. A.3 shows the report of the Robot Framework sequential execution of probes test.
Figure A.3: Sequential Execution of Probes Report
A.3 [VnV Planner] Mapping Strategy
Tbl. A.3 shows the test case defined to test the Mapping Strategy in the V&V.
Table A.3: Mapping Strategy
Test Case Name: Mapping Strategy
Test Purpose: To ensure that the mapping strategy between TDs and NSDs based on testing tags works as expected.
Configuration: The TD is a simple test based on the PING one, using several testing tags to cover all mapping possibilities.
Test Tool: Robot Framework
Metric: No metric (ping test)
References: https://github.com/sonata-nfv/tng-vnv-planner/
Applicability: Variations of this test case can be performed by modifying the TD and NSD to cover all scenarios: the NSD and TD do not match; a single NSD testing tag matches multiple TDs, so multiple test plans are generated; a single TD testing tag matches multiple NSDs, so multiple test plans are generated. In the first scenario, where the testing tags of the TD and NSD do not match, no test plan is executed. In the other two scenarios, a test plan is executed for every TD and NSD whose testing tags match.
Pre-test conditions: The packages that contain the NS and the Tests will be created before the test execution.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
3. Check Service Instantiation (Test Plan 1): Service instance is up and running.
4. Check Test Execution (Test Plan 1): VnV launches and executes the test.
5. Check Test Completion (Test Plan 1): VnV test execution is completed.
6. Check No Running Instances In SP (Test Plan 1): After the test, the instantiated service must be deleted from the Service Platform.
7. Check Service Instantiation (Test Plan 2): Service instance is up and running.
8. Check Test Execution (Test Plan 2): VnV launches and executes the test.
9. Check Test Completion (Test Plan 2): VnV test execution is completed.
10. Check No Running Instances In SP (Test Plan 2): After the test, the instantiated service must be deleted from the Service Platform.
11. Check Stored Results: The test results are stored in the test results repository.
Test Verdict: The results will show content from the probes.
Additional resources: -
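The tag-matching behaviour this test case exercises can be summarised in a short sketch. The following Python model is illustrative only (the descriptor structure with a "testing_tags" list is an assumption; the actual logic lives in tng-vnv-planner): one test plan is generated for every (NSD, TD) pair whose testing tags intersect, and no plan when nothing matches.

```python
# Illustrative model of the planner's mapping strategy: generate one test
# plan per (NSD, TD) pair with at least one testing tag in common.

def build_test_plans(nsds, tds):
    """Return (nsd_name, td_name) pairs for every testing-tag match."""
    plans = []
    for nsd in nsds:
        for td in tds:
            if set(nsd["testing_tags"]) & set(td["testing_tags"]):
                plans.append((nsd["name"], td["name"]))
    return plans

# One NSD tagged "ping" against three TDs: two match, one does not.
nsds = [{"name": "ns-ping", "testing_tags": ["ping"]}]
tds = [
    {"name": "td-ping-a", "testing_tags": ["ping"]},
    {"name": "td-ping-b", "testing_tags": ["ping", "latency"]},
    {"name": "td-other", "testing_tags": ["throughput"]},
]
matched = build_test_plans(nsds, tds)
unmatched = build_test_plans([{"name": "ns-x", "testing_tags": ["x"]}], tds)
```

This mirrors the three applicability scenarios above: a matching tag yields one plan per matching pair, and disjoint tags yield no plan at all.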
Fig. A.4 shows the Robot Framework report for a test where the testing tags do not match.
Fig. A.5 shows the Robot Framework report for a test where the testing tag matches multiple Test Descriptors.
Fig. A.6 shows the Robot Framework report for a test where the testing tag matches multiple Network Service Descriptors.
Figure A.4: Testing Tags Don’t Match Report
Figure A.5: NS Testing Tag Matches With Multiple TD Testing Tag Report
Figure A.6: TD Testing Tag Matches With Multiple NS Testing Tag
A.4 [VnV Planner] Retrigger a Test Manually
Tbl. A.4 shows the test case defined to verify that a test can be retriggered manually in the V&V.
Table A.4: Retrigger a Test Manually
Test Case Name: Retrigger a test manually
Test Purpose: To ensure that a test can be relaunched manually.
Configuration: Not required
Test Tool: Robot Framework
Metric: No metric
References: https://github.com/sonata-nfv/tng-vnv-planner/
Applicability: A test plan is executed and reaches the status COMPLETED. The status of the test plan is then set to RETRIED, which causes the test plan to be retriggered and executed again.
Pre-test conditions: The test plan to relaunch will be present in the VnV platform.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
3. Check Service Instantiation: Service instance is up and running.
4. Check Test Execution: VnV launches and executes the test.
5. Check Test Completion: VnV test execution is completed.
6. Check No Running Instances In SP: After the test, the instantiated service must be deleted from the Service Platform.
7. Retrigger Test Execution: Using the REST API, set the test plan to status RETRIED. A new test plan will be generated.
8. Check Service Instantiation (Retriggered Test Plan): Service instance is up and running.
9. Check Test Execution (Retriggered Test Plan): VnV launches and executes the test.
10. Check Test Completion (Retriggered Test Plan): VnV test execution is completed.
11. Check Stored Results: The test results are stored in the test results repository. Two test plans exist, one with status RETRIED and another with status COMPLETED.
Test Verdict: The results will show content from the probes.
Additional resources: -
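Step 7 (the manual retrigger over the REST API) can be sketched as follows. The endpoint path and payload shape below are assumptions chosen for illustration; consult the tng-vnv-planner API documentation for the real route. Only the status value RETRIED comes from the test case itself.

```python
# Illustrative sketch of the retrigger call: build the HTTP request that
# flips a test plan's status to RETRIED. The "/api/v1/test-plans/<uuid>"
# route is a hypothetical placeholder, not taken from the deliverable.
import json
from urllib.request import Request

def retrigger_request(base_url, plan_uuid):
    """Build the request that sets a test plan's status to RETRIED."""
    body = json.dumps({"status": "RETRIED"}).encode()
    return Request(
        url=f"{base_url}/api/v1/test-plans/{plan_uuid}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = retrigger_request("http://vnv.example:6100", "plan-0001")
```

Sending this request would cause the planner to generate a new test plan, leaving the old one stored with status RETRIED, as step 11 verifies.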
Fig. A.7 shows the results of a manually triggered probe test.
Figure A.7: VnV Retrigger a Test Manually
A.5 [VnV Executor] Parser Multiple Cases
Tbl. A.5 shows the test case defined to verify the parsing of multiple result cases in the V&V.
Table A.5: Parser Multiple Cases
Test Case Name: Parser Multiple Cases
Test Purpose: To ensure that the parser can extract the verdicts of the test results in multiple cases.
Configuration: The TD uses two probes: one that generates a JSON result file and another that generates a TXT result file. The TD contains several validations covering both probes and file types: a TXT file that contains "String", a TXT file that does not contain "String", a JSON field validation and a second JSON field validation.
Test Tool: Robot Framework
Metric: No metric
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: -
Pre-test conditions: The packages that contain the NS and the Tests will be created before the test execution.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
3. Check Service Instantiation: Service instance is up and running.
4. Check Test Execution: VnV launches and executes the test.
5. Check Test Completion: VnV test execution is completed.
6. Check No Running Instances In SP: After the test, the instantiated service must be deleted from the Service Platform.
7. Check Stored Results: The test results are stored in the test results repository.
Test Verdict: The results will show content from all probes.
Additional resources: -
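The four validation kinds named in the configuration reduce to two checks: substring presence (or absence) in a TXT result and field equality in a JSON result. The sketch below is illustrative (the sample result contents are invented); it is not the executor's actual parser.

```python
# Illustrative verdict extraction for the four validation kinds above:
# TXT contains / TXT does not contain / two JSON field validations.
import json

def txt_contains(text, needle, expect=True):
    """True when the needle's presence matches the expectation."""
    return (needle in text) is expect

def json_field_equals(raw, field, expected):
    """True when a JSON field carries the expected value."""
    return json.loads(raw).get(field) == expected

# Hypothetical probe outputs standing in for the real result files.
txt_result = "PING ok: 0% packet loss"
json_result = '{"packet_loss": 0, "rtt_avg_ms": 0.42}'

verdicts = [
    txt_contains(txt_result, "0% packet loss"),           # TXT contains
    txt_contains(txt_result, "100% packet loss", False),  # TXT does not contain
    json_field_equals(json_result, "packet_loss", 0),     # JSON field validation
    json_field_equals(json_result, "rtt_avg_ms", 0.42),   # second JSON field
]
overall = all(verdicts)  # the case passes only if every validation holds
```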
Fig. A.8 shows the Robot Framework report for a test that uses multiple parsers of the results to report a verdict.
Figure A.8: VnV Parser Multiple Cases
A.6 [VnV SONATA SP Simple Test] Perform a PING test over VNF deployed by SONATA SP
This test uploads and deploys a network service at a SONATA SP. It also uploads and uses a test descriptor in the VnV. The test case for this test is shown in Tbl. A.6.
Table A.6: Deploy SONATA Simple
Test Case Name: Deploy SONATA Simple
Test Purpose: Deploy a service in SONATA
Configuration: The TD is based on the PING test
Test Tool: Robot Framework
Metric: No metric (ping test)
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: N/A
Pre-test conditions: The packages that contain the NS and the Tests will be created before the test execution.
Test sequence:
1. Setting variables: The VnV environment variables are set.
2. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
3. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
4. Check Test Execution: VnV launches and executes the test.
5. Obtain graylogs: Get the logs.
Test Verdict: The results will show content from the probes.
Additional resources: -
Fig. A.9 shows the test plan created for the simple test that uses a deployment in the SONATA SP.
Figure A.9: VnV SONATA SP Test
A.7 [VnV OSM SP Simple Test] Perform a PING test over VNF deployed by OSM SP
This test uploads and deploys a network service at an OSM SP. It also uploads and uses a test descriptor in the VnV. The test case for this test is shown in Tbl. A.7.
Table A.7: Deploy OSM Simple
Test Case Name: Deploy OSM Simple
Test Purpose: Deploy a network service in OSM
Configuration: The TD is based on the PING test
Test Tool: Robot Framework
Metric: No metric (ping test)
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: N/A
Pre-test conditions: The packages that contain the NS and the Tests will be created before the test execution.
Test sequence:
1. Setting variables: The VnV environment variables are set.
2. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
3. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
4. Check Test Execution: VnV launches and executes the test.
5. Obtain graylogs: Get the logs.
Test Verdict: The results will show content from the probes.
Additional resources: -
Fig. A.10 shows the test plan created for the simple test that uses a deployment in the OSM SP.
Figure A.10: VnV OSM Simple test result
A.8 [VnV SONATA SP Monitoring] Check if the monitoring data is collected during test execution
This test uploads a service package and the appropriate test descriptor on the platform. After the deployment of the service and the execution of the test, it checks whether the monitoring data have been collected and stored properly by the monitoring framework. The test case used for this test is defined in Tbl. A.8.
Table A.8: V&V Monitoring metrics
Test Case Name: Monitoring metrics
Test Purpose: Check that monitoring metrics from a newly deployed NS are available in the VnV monitoring framework.
Configuration: A NS composed of more than one VNF is deployed on the Service Platform.
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not), execution time
References: -
Applicability: This test case can be performed to verify that the monitoring framework has been set up correctly and collects custom and default metrics.
Pre-test conditions: The packages that contain the NS and the Tests will be created before the test execution.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
2. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
3. Check Service Instantiation: The Immersive Media Service instance is up and running.
4. Check Test Execution: VnV launches and executes the test.
5. Check monitoring metrics: Retrieve monitoring data for the specific NS and VNF.
6. Check Test Completion: VnV test execution is completed.
7. Retrieve monitoring metrics: Check that the monitoring data have been stored correctly.
Test Verdict: Monitoring data are collected, stored and retrieved correctly.
Additional resources: -
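Step 5 (retrieving monitoring data for a specific NS and VNF) can be sketched as a labelled metric query. A Prometheus-style query API is assumed here purely for illustration; the metric name and label keys below are hypothetical, not taken from the deliverable.

```python
# Illustrative sketch of a per-NS/per-VNF metric lookup against a
# Prometheus-style monitoring endpoint (names and labels are assumptions).
from urllib.parse import urlencode

def metrics_query_url(base_url, metric, ns_uuid, vnf_name):
    """Build the query URL selecting one metric for one NS/VNF pair."""
    query = '%s{ns_uuid="%s",vnf="%s"}' % (metric, ns_uuid, vnf_name)
    return base_url + "/api/v1/query?" + urlencode({"query": query})

url = metrics_query_url("http://monitoring.example:9090",
                        "vm_cpu_perc", "ns-1234", "vnf-media")
```

Steps 5 and 7 then amount to issuing such a query during and after the test run and checking that non-empty samples come back for every VNF of the service.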
Figure A.11: V&V monitoring metrics test
A.9 [VnV Simple Test] Deploy test OSM cloud-init
This test uploads an OSM package with an Ubuntu VNF and a cloud-init script that installs nginx. Additionally, the test descriptor contains a telnet probe that checks whether port 80 of nginx is exposed via the floating IP of the instance. If port 80 is available, the cloud-init script was executed successfully and the nginx service was installed in the VNF. The test case used for this test is represented in Tbl. A.9.
Table A.9: OSM Cloud-init
Test Case Name: Deploy OSM Cloud-Init
Test Purpose: Deploy a service that uses cloud-init in OSM
Configuration: The TD is based on the telnet test
Test Tool: Robot Framework
Metric: No metric (telnet, port 80)
References: https://github.com/sonata-nfv/tng-vnv-executor/
Applicability: N/A
Pre-test conditions: The package that contains the NS with one VNF (Ubuntu), together with a cloud-init script that installs the nginx service, and the Tests with the telnet probe will be created before the test execution.
Test sequence:
1. Setting variables: The VnV environment variables are set.
2. Service Package On-Boarding: Service Package (Ubuntu VNF with cloud-init) is on-boarded in the VnV catalogue.
3. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
4. Check Test Execution: VnV launches and executes the test.
5. Obtain graylogs: Get the logs.
Test Verdict: Exit code 0 from the test probe, which means the probe can connect to the service that was installed in the VNF.
Additional resources: -
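The telnet probe's pass/fail condition reduces to a TCP connect on port 80 of the floating IP. The sketch below is an illustrative stand-in for that check, not the probe shipped with the test package; it self-checks against a local listener instead of a real VNF.

```python
# Illustrative equivalent of the telnet probe: a TCP connect attempt.
# Success corresponds to the probe's exit code 0.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-check: a local listener stands in for the VNF's floating IP.
_srv = socket.socket()
_srv.bind(("127.0.0.1", 0))
_srv.listen(1)
_port = _srv.getsockname()[1]
open_result = port_open("127.0.0.1", _port)     # nginx reachable -> True
_srv.close()
closed_result = port_open("127.0.0.1", _port, timeout=0.5)  # refused -> False
```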
Fig. A.12 shows the result of the OSM cloud-init test performed by Robot Framework.
Figure A.12: VnV OSM Cloud-init Test
A.10 [VnV Simple Test] Deploy test Sonata Hybrid
Tbl. A.10 shows the test case defined for hybrid packages, that is, packages that contain both NS packages and test descriptors.
Table A.10: Test hybrid package
Test Case Name: Test hybrid package
Test Purpose: Validate the capability of the V&V to accept packages that contain NSs and TDs.
Configuration: The TD is based on the PING test
Test Tool: Robot Framework
Metric: No metric (ping test)
References: https://github.com/sonata-nfv/tng-hybrid-package/
Applicability: N/A
Pre-test conditions: The package that contains the NS and the TD is created before the test execution.
Test sequence:
1. Setting variables: The VnV environment variables are set.
2. Service Package On-Boarding: Service Package is on-boarded in the VnV catalogue.
3. Test Package On-Boarding: Test Package is on-boarded in the VnV catalogue.
4. Check Test Execution: VnV launches and executes the test.
5. Obtain graylogs: Get the logs.
Test Verdict: The results will show content from the probes.
Additional resources: -
Fig. A.13 shows the result of the hybrid package test performed by Robot Framework.
Figure A.13: VnV Hybrid Package test
A.11 [VnV Analytics] Analyze VnV test monitoring metrics results
Tbl. A.11 shows the test case defined for analyzing V&V monitoring test results.
Table A.11: Analyze VnV test monitoring metrics results
Test Case Name: Analyze VnV test monitoring metrics results
Test Purpose: Check that the analytics engine is correctly integrated in the VnV environment.
Configuration: A NS composed of three VNFs (CMS, MA, MSE) is deployed on the Service Platform. A defined test plan has been executed prior to this test and completed successfully.
Test Tool: Robot Framework
References: https://github.com/sonata-nfv/tng-analytics-engine/
Applicability: Variations of this test case can be performed by modifying the requested analytic service.
Pre-test conditions: A successful VnV test has to be executed prior to the analytics engine invocation.
Test sequence:
1. Fetch Results of latest successful Test execution: The results of the latest successful test execution are fetched from the test results repository.
2. Fetch All Available Analytic Services: All available analytic services are fetched.
3. Invoke a test analytic process: Identify test metrics that show a significant rate fluctuation. Report basic statistics about the health of the VnV test data results.
4. Check Analytic results: Check the results generated by the analytic process execution.
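Step 3 speaks of metrics with "a significant rate fluctuation" and of basic health statistics. The following sketch shows one simple way such a check could look; it is illustrative only (the threshold, the metric series and the flagging rule are assumptions, not the analytic services implemented in tng-analytics-engine).

```python
# Illustrative fluctuation detection: flag samples whose relative change
# from the previous sample exceeds a threshold, plus basic statistics.
from statistics import mean, pstdev

def rate_fluctuations(series, threshold=0.5):
    """Indices where a sample jumps by more than `threshold` relative to its predecessor."""
    return [i for i in range(1, len(series))
            if series[i - 1] != 0
            and abs(series[i] - series[i - 1]) / abs(series[i - 1]) > threshold]

def basic_stats(series):
    """Basic health statistics of one metric series."""
    return {"mean": mean(series), "stdev": pstdev(series),
            "min": min(series), "max": max(series)}

# Hypothetical CPU series: one significant jump at index 3 (10.5 -> 30.0).
cpu = [10.0, 11.0, 10.5, 30.0, 29.0]
flagged = rate_fluctuations(cpu)
stats = basic_stats(cpu)
```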
Fig. A.14 shows the result of the monitoring metrics analysis test performed by Robot Framework.
Figure A.14: Analyze VnV test monitoring metrics results
B Appendix - Service Platform - Test Cases Details
The description of the test cases is based on the previous work of ETSI NFV, in particular ETSI NFV-TST 001.
B.1 [SP Slicing] Network Service Composition VNFs
Tbl. B.1 shows the test case defined for the validation of the instantiation and termination processes of a Network Slice composed of VNF-based Network Services.
Table B.1: Slicing test Instantiation and Termination
Test Case Name: slice
Test Purpose: Validation of the instantiation and termination processes of a Network Slice composed of VNF-based Network Services.
Configuration: A Network Slice contains 3 VNF-based Network Services interconnected by 5 Virtual Links. The test uses a Network Slice Template descriptor (NSTd) which defines the Network Slice to instantiate and terminate.
Test Tool: -
Metric: -
References: https://github.com/sonata-nfv/tng-slice-mngr
Applicability: -
Pre-test conditions: -
Test sequence:
1. Setting Up Test Environment: Prepares the environment information to be used during the test.
2. Remove Previously Used Packages: Removes any package left over from other tests.
3. Service Package On-Boarding: The service (NSs and VNFs) package is on-boarded in the SP catalogue.
4. Network Slice Template On-Boarding: The Network Slice Template descriptor is on-boarded in the SP catalogue.
5. Network Slice Instantiation: An instantiation request reaches the Network Slice Manager through the GTK and starts the intra-process requests: NSIr creation, slice-VLD creation and NS instantiation.
6. Validate Instantiation Process: Validates that the whole process completed correctly by checking the status (INSTANTIATED) of the slice instantiation request.
7. Network Slice Termination: A termination request reaches the Network Slice Manager through the GTK and starts all the intra-process requests: terminate all NSs and remove the slice-VLDs.
8. Validate Termination Process: Validates that the whole process completed correctly by checking the status (TERMINATED) of the slice termination request.
9. Remove Network Slice Template: Deletes the previously on-boarded NST descriptor in order to leave the environment clean for other tests.
10. Remove Service Package: Deletes the previously on-boarded service package in order to leave the environment clean for other tests.
Test Verdict: The test passes if no error appeared in any action and the NSIr, together with the NS instances, finishes with status "Terminated".
Additional resources: -
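The two validation steps (6 and 8) amount to polling the slice request's status until it reaches a terminal value. A minimal sketch of that loop follows; it is illustrative, with the real check driven by tng-slice-mngr through the GTK, and the polling interface is an assumption.

```python
# Illustrative polling loop behind "Validate Instantiation/Termination":
# repeatedly read the request status until it reaches the expected value.

def wait_for_status(fetch_status, expected, attempts=30):
    """Poll fetch_status() up to `attempts` times for the expected status."""
    for _ in range(attempts):
        status = fetch_status()
        if status == expected:
            return True
        if status == "ERROR":
            break  # terminal failure: stop polling early
    return False

# Simulated slice manager: two transitional polls, then INSTANTIATED.
responses = iter(["INSTANTIATING", "INSTANTIATING", "INSTANTIATED"])
instantiated_ok = wait_for_status(lambda: next(responses), "INSTANTIATED")

# A request that errors out is reported as a failed validation.
failed = wait_for_status(iter(["ERROR"]).__next__, "INSTANTIATED")
```

The termination check in step 8 is the same loop with TERMINATED as the expected terminal status.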
Fig. B.1 shows the scenario used in the test.
Figure B.1: Scenario Slice Test 3VNFs
Fig. B.2 shows the message sequence chart used in the test.
Figure B.2: Slice Instantiation 3VNF Flow
Fig. B.3 shows the Robot Framework results of the slice instantiation test.
Figure B.3: Robot Report for Netslice Instantiation 3VNFs test
B.2 [SP Slicing] Network Service Composition CNFs
Tbl. B.2 shows the test case defined for the validation of the instantiation and termination processes of a Network Slice composed of CNF-based Network Services.
Table B.2: Slicing test network service composition CNFs
Test Case Name: slice
Test Purpose: Validation of the instantiation and termination processes of a Network Slice composed of CNF-based Network Services.
Configuration: A Network Slice contains 3 CNF-based Network Services interconnected by 5 Virtual Links. The test uses a Network Slice Template descriptor (NSTd) which defines the Network Slice to instantiate and terminate.
Test Tool: -
Metric: -
References: https://github.com/sonata-nfv/tng-slice-mngr
Applicability: -
Pre-test conditions: -
Test sequence:
1. Setting Up Test Environment: Prepares the environment information to be used during the test.
2. Remove Previously Used Packages: Removes any package left over from other tests.
3. Service Package On-Boarding: The service (NSs and CNFs) package is on-boarded in the SP catalogue.
4. Network Slice Template On-Boarding: The Network Slice Template descriptor is on-boarded in the SP catalogue.
5. Network Slice Instantiation: An instantiation request reaches the Network Slice Manager through the GTK and starts the intra-process requests: NSIr creation, slice-VLD creation and NS instantiation.
6. Validate Instantiation Process: Validates that the whole process completed correctly by checking the status (INSTANTIATED) of the slice instantiation request.
7. Network Slice Termination: A termination request reaches the Network Slice Manager through the GTK and starts all the intra-process requests: terminate all NSs and remove the slice-VLDs.
8. Validate Termination Process: Validates that the whole process completed correctly by checking the status (TERMINATED) of the slice termination request.
9. Remove Network Slice Template: Deletes the previously on-boarded NST descriptor in order to leave the environment clean for other tests.
10. Remove Service Package: Deletes the previously on-boarded service package in order to leave the environment clean for other tests.
Test Verdict: The test passes if no error appeared in any action and the NSIr, together with the NS instances, finishes with status "Terminated".
Additional resources: -
The slice test scenario is illustrated in Fig. B.4.
Figure B.4: Scenario Slice Test 3CNFs
The test flow is presented in Fig. B.5.
Figure B.5: Slice Instantiation 3CNFs Flow
The Robot Framework report is illustrated in Fig. B.6.
Figure B.6: Robot Report for Netslice Instantiation 3CNFs test
B.3 [SP Slicing] SLA within Network Slices
Tbl. B.3 shows the test case defined to validate the process of instantiating and terminating a Network Slice composed of multiple Network Services with SLAs.
Table B.3: Slicing test with multiple networks and SLAs
Test Case Name: slice
Test Purpose: Validate the process to instantiate and terminate a Network Slice composed of multiple Network Services.
Configuration: A Network Slice contains 3 Network Services interconnected by 5 Virtual Links. The test uses a Network Slice Template descriptor (NSTd) which defines the Network Slice to instantiate and terminate.
Test Tool: -
Metric: -
References: https://github.com/sonata-nfv/tng-slice-mngr
Applicability: -
Pre-test conditions: -
Test sequence:
1. Setting Up Test Environment & Clean Old Descriptors: Prepares the environment information to be used during the test and cleans possible remaining old descriptors.
2. Service Package On-Boarding: The service (NSs and VNFs) package is on-boarded in the SP catalogue.
3. Generate the SLA Template: Creates all the necessary SLA descriptors to be used by the next test steps.
4. On-Boarding Network Slice Template: The Network Slice Template descriptor is on-boarded in the SP catalogue.
5. Deploy Network Slice Instantiation: An instantiation request reaches the Network Slice Manager through the GTK and starts the intra-process requests: NSIr creation, slice-VLD creation and NS instantiation.
6. Validate Instantiation Process: Validates that the whole process completed correctly by checking the status (INSTANTIATED) of the slice instantiation request.
7. Get All SLA Agreements: Validates that all the instantiated NSs within the Network Slice Instance got the agreement with their associated SLA.
8. Request Network Slice Termination: A termination request reaches the Network Slice Manager through the GTK and starts all the intra-process requests: terminate all NSs and remove the slice-VLDs.
9. Validate Termination Process: Validates that the whole process completed correctly by checking the status (TERMINATED) of the slice termination request.
10. Remove Network Slice Template: Deletes the previously on-boarded NST descriptor in order to leave the environment clean for other tests.
11. Remove SLA Templates: Deletes the previously uploaded SLA templates used for this test.
12. Remove Service Package: Deletes the previously on-boarded service package in order to leave the environment clean for other tests.
Test Verdict: The test passes if no error appeared in any action and the NSIr, together with the NS instances, finishes with status "Terminated".
Additional resources: -
The scenario is depicted in Fig. B.7.
Figure B.7: Network Slice Architecture
The test flow is presented in Fig. B.8.
Figure B.8: Instantiation Testflow
Fig. B.9 shows the Robot Framework results for this test case.
Figure B.9: Robot Report for SLA within Network Slices test
B.4 [SLAs and Licensing] Testing SLA E2E
Tbl. B.4 shows the test case defined to create an end-to-end SLA test.
Table B.4: Testing SLA End-to-End
Test Case Name: Testing SLA E2E [23] [24] [25]
Test Purpose: Validate the process to formulate an SLA for a NS and automatically generate the final agreement.
Configuration: An SLA which includes licensing as an SLO is formulated for a NS package. The test uses the SLA and the NS in order to instantiate the latter, and checks whether the final agreement is generated after the successful instantiation of the NS.
Test Tool: Robot Framework & tnglib
Metric: -
References: https://github.com/sonata-nfv/tng-sla-mgmt
Applicability: Variations of this test case can be performed by modifying the TD, e.g. using a different license type.
Pre-test conditions: The packages that contain the NS will be created before the test execution.
Test sequence:
1. Upload a Service Package: The NS and VNF packages are on-boarded in the SP catalogue and the NS uuid is provided.
2. Generate an SLA for that service (uuid), choosing a license type as SLO: The SLA is formulated and on-boarded in the SP catalogue.
3. Request Network Service Instantiation: A request for instantiation reaches the MANO through the GTK, the service is instantiated and the final agreement is automatically generated.
4. Licensing status check: Instantiation of the NS is checked based on the license type selected in the SLA.
5. Terminate the Service: The SLA status should be changed to 'TERMINATED'.
6. Clean Package: The package that was used for the test should be deleted from the catalogues.
Test Verdict: If no error appeared in any action, the test is successfully passed.
Fig. B.10 depicts the test scenario of the current test case. The scenario checks the public, trial and private license types as SLOs inside the SLA templates, and the appropriate NS instantiation based on this type.
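The license-type gate exercised in step 4 can be sketched as a small decision function. This is an illustration only: the rule encoded here, with public always instantiable and trial or private requiring a license held by the user, is an assumption for the sketch, not the tng-sla-mgmt implementation.

```python
# Illustrative sketch of a licensing SLO gate over the three license types
# named in the scenario (public, trial, private). The gating rule itself
# is an assumption made for this example.

def may_instantiate(license_type, user_has_license=False):
    """Decide whether a NS instantiation is allowed under the SLA's license SLO."""
    if license_type == "public":
        return True  # assumed: public services need no license
    if license_type in ("trial", "private"):
        return user_has_license  # assumed: these require a held license
    raise ValueError("unknown license type: %s" % license_type)
```

Under this sketch, the test scenario's three template variants exercise all branches of the decision.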
Figure B.10: sla testing scenario
In order to test the end-to-end SLA and licensing flow, several steps should be performed with regard to the selected SLA with an attached license. The detailed test flow is depicted in Fig. B.11.
Figure B.11: test flow sequence
Fig. B.12 depicts the Robot Framework report with the test results of the SLAs and Licensing end-to-end test case.
Figure B.12: slas robot results
B.5 [User Management] Testing User Management Role
Tbl. B.5 shows the test case defined to perform the user management roles test.
Table B.5: Testing User Management Roles
Test Case Name: Testing User Management Roles [26]
Test Purpose: Validate the permissions of the different user roles.
Configuration: Register three different user roles, try to fetch the different 5GTANGO descriptors and validate the permissions.
Test Tool: Robot Framework & tnglib
Metric: -
References: https://github.com/sonata-nfv/tng-gtk-usr
Applicability: Variations of this test case can be performed by modifying the TD, e.g. using a different license type.
Pre-test conditions: At least one admin user should exist, with the following credentials: username: tango, password: admin.
Test sequence:
1. Login as admin user: One user should be logged in, in order to create and delete other users.
2. Register a new user with a role: The new user can be admin, developer or customer. NOTE: in order to register an admin, the logged-in user should be an admin.
3. Logout initial user.
4. Login the new user: Log in the new user to validate the permissions.
5. Obtain services: Validate the role permissions by trying to obtain the services.
6. Obtain SLAs: Validate the role permissions by trying to obtain the SLAs.
7. Obtain policies: Validate the role permissions by trying to obtain the policies.
8. Obtain slices: Validate the role permissions by trying to obtain the slices.
9. Log out the new user.
10. Delete the new user.
Test Verdict: If no error appeared in any action, the test is successfully passed.
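Steps 5 to 8 repeatedly check the same thing: whether a role may fetch a given resource type. That check can be sketched as a lookup in a role-to-resource matrix. The matrix below is an assumption made for illustration; the actual permission model lives in tng-gtk-usr.

```python
# Illustrative role/resource permission matrix for the checks in steps 5-8.
# Which role may fetch which resource is assumed here, not taken from tng-gtk-usr.
PERMISSIONS = {
    "admin":     {"services", "slas", "policies", "slices"},
    "developer": {"services", "slas", "policies"},
    "customer":  {"services", "slas"},
}

def can_fetch(role, resource):
    """True when the given role is allowed to fetch the given resource type."""
    return resource in PERMISSIONS.get(role, set())
```

The test then reduces to asserting, for the logged-in role, that each of the four fetches succeeds or is rejected exactly as the matrix predicts.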
Fig. B.13 depicts the Robot Framework report with the test results of the user management test case.
Figure B.13: admin-robot-results
B.6 [Policy and Monitoring] Test Service Migration State OpenStack
Tbl. B.6 shows the test case defined to perform the service migration state test in OpenStack.
Table B.6: Service Migration State Openstack
Test Case Name: test service migration state os
Test Purpose: Test the stateful migration of a VNF.
Configuration: A NS composed of one VNF is deployed with the Service Platform.
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not)
References: -
Applicability: Variations of this test case can be performed for different VNFs and multi-VNF services.
Pre-test conditions: A running Service Platform.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the SP.
2. Deploy Network Service: Network Service is deployed in the SP.
3. Check Network Service instantiation correctness: Confirm that the NS was deployed without errors.
4. Generate state: The VNF has altered state.
5. Migrate the VNF: The VNF is migrated from one OpenStack to the other.
6. Check VNF migration correctness: Confirm that the migration was executed without errors.
7. Check if state was migrated to the new VNF: The altered state is present or not.
8. Terminate Network Service: Delete the deployed NS.
Test Verdict: The Network Service's VNF was migrated successfully.
Additional resources: -
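Steps 4 and 7 bracket the migration: state is generated before it and verified afterwards. One simple way to sketch that verification is to fingerprint the state on both sides and compare; this is illustrative, as the real state transfer is performed by the MANO during the OpenStack-to-OpenStack migration, and the state contents here are invented.

```python
# Illustrative state-migration check: fingerprint the VNF state before the
# migration and compare it with the state read back afterwards.
import hashlib

def state_fingerprint(state_bytes):
    """Stable digest of a state snapshot for before/after comparison."""
    return hashlib.sha256(state_bytes).hexdigest()

before = state_fingerprint(b"session-table: 42 entries")
# Hypothetical read-back from the migrated VNF on the target OpenStack.
after = state_fingerprint(b"session-table: 42 entries")
migrated_ok = before == after
```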
Fig. B.14 shows the result of the test case execution in Robot Framework.
Figure B.14: Service Migration State Openstack
B.7 [Policy and Monitoring] Test Service Migration State Kubernetes
Tbl. B.7 shows the test case defined to perform the service migration test in Kubernetes.
Table B.7: Service Migration State Kubernetes
Test Case Name: test service migration k8s
Test Purpose: Test the migration of a CNF.
Configuration: A NS composed of one CNF is deployed with the Service Platform.
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not)
References: -
Applicability: Variations of this test case can be performed for different CNFs and multi-CNF services.
Pre-test conditions: A running Service Platform.
Test sequence:
1. Service Package On-Boarding: Service Package is on-boarded in the SP.
2. Deploy Network Service: Network Service is deployed in the SP.
3. Check Network Service instantiation correctness: Confirm that the NS was deployed without errors.
4. Migrate the CNF: The CNF is migrated from one k8s cluster to the other.
5. Check CNF migration correctness: Confirm that the migration was executed without errors.
6. Terminate Network Service: Delete the deployed NS.
Test Verdict: The Network Service's CNF was migrated successfully.
Additional resources: -
Fig. B.15 presents the Robot report.
Figure B.15: Service Migration State Kubernetes
5GTANGO Public 89
Document: 5GTANGO/D6.3Date: August 12, 2019 Security: PublicStatus: To be approved by EC Version: 0.1
B.8 [Policy and Monitoring] Test Service Reconfiguration OpenStack
This test deploys an NS on OpenStack infrastructure and reconfigures it on the fly by enforcing an elasticity policy that generates a scale-out action, which is executed by the MANO. Tbl. B.8 shows the test case defined to perform the Service Reconfiguration in OpenStack with elasticity policies.
Table B.8: Test Service Lifecycle in OpenStack with elasticity policies
Test Case Name: Test Service Lifecycle in OS with elasticity policies
Test Purpose: check the lifecycle of a Network Service with an elasticity policy activated; check the collaboration between the MANO, monitoring engine and policy engine components
Configuration: the ns-squid-haproxy NS, composed of one haproxy and one squid VNF, is deployed on the Service Platform upon OpenStack
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not), execution time
Applicability: variations of this test case can be performed to test multiple policy actions
Pre-test conditions: the packages that contain the NS are created before the test execution; the policy descriptor is also defined before the test execution
Test sequence:
1. Service Package On-Boarding: the Service Package is on-boarded in the SP
2. Runtime Policy Creation: the Runtime Policy is created in the SP
3. Define Runtime Policy as default: attach the Runtime Policy to the deployed Service Package
4. Deploy Network Service: the Network Service is deployed in the SP
5. Check Network Service instantiation: confirm that the NS was deployed without errors
6. Check Monitoring Rules: confirm that the NS monitoring rules are enabled without errors by the monitoring engine
7. Satisfy Monitoring Rule: generate a custom metric value that satisfies the monitoring rule
8. Demand Elasticity policy action: the policy manager requests a VNF scale-out from the MANO
9. Evaluate the outcome of the MANO scaling action enforcement: confirm that the requested action is successfully completed
10. Deactivate Runtime Policy: deactivate the runtime policy while the NS is still deployed
11. Terminate Network Service: the deployed NS is deleted
12. Delete Runtime Policy: the Runtime Policy is deleted
13. Remove NS package: the NS package is removed
Test Verdict: the Network Service is successfully deployed and un-deployed in an OS VIM environment; the Runtime Policies are enforced and deactivated successfully
Additional resources: Fig. B.16 shows the step-by-step test result
Figure B.16: Service Reconfiguration OS
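Steps 7 and 8 of Tbl. B.8 hinge on a monitoring rule being satisfied before the policy engine demands a scale-out from the MANO. The sketch below illustrates the kind of threshold evaluation involved; the operator table, the consecutive-sample grace period, and the function names are illustrative assumptions, not the policy engine's actual logic.

```python
import operator

# Comparison operators a simple elasticity rule might use (assumption: the
# real policy engine supports a richer expression language).
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def rule_satisfied(metric_value, threshold, op=">"):
    """Return True when the monitored value triggers the rule,
    e.g. a scale-out rule such as 'haproxy_sessions > 100'."""
    return OPS[op](metric_value, threshold)

def scaling_decision(samples, threshold, op=">", consecutive=3):
    """Request scale-out only after `consecutive` rule-satisfying samples,
    mirroring the grace period monitoring rules typically apply to avoid
    reacting to a single transient spike."""
    streak = 0
    for value in samples:
        streak = streak + 1 if rule_satisfied(value, threshold, op) else 0
        if streak >= consecutive:
            return "scale-out"
    return "no-action"
```

With a threshold of 100 sessions, `scaling_decision([50, 120, 130, 140], 100)` triggers a scale-out because three consecutive samples satisfy the rule, while an isolated spike does not.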
B.9 [Policy and Monitoring] Test Service Reconfiguration Kubernetes
This test deploys an NS on Kubernetes infrastructure and reconfigures it on the fly by enforcing an elasticity policy that generates a scale-out action, which is executed by the MANO. Tbl. B.9 shows the test case defined to perform the Service Reconfiguration in Kubernetes with elasticity policies.
Table B.9: Test Service Lifecycle in Kubernetes with elasticity policies
Test Case Name: Test Service Lifecycle in k8s with elasticity policies
Test Purpose: check the lifecycle of a Network Service with an elasticity policy activated; check the collaboration between the MANO, monitoring engine and policy engine components
Configuration: the ns-mediapilot-service pilot NS is deployed on the Service Platform upon k8s
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not), execution time
Applicability: variations of this test case can be performed to test multiple policy actions
Pre-test conditions: the packages that contain the NS are created before the test execution; the policy descriptor is also defined before the test execution
Test sequence:
1. Service Package On-Boarding: the Service Package is on-boarded in the SP
2. Runtime Policy Creation: the Runtime Policy is created in the SP
3. Define Runtime Policy as default: attach the Runtime Policy to the deployed Service Package
4. Deploy Network Service: the Network Service is deployed in the SP
5. Check Network Service instantiation: confirm that the NS was deployed without errors
6. Check Monitoring Rules: confirm that the NS monitoring rules are enabled without errors by the monitoring engine
7. Satisfy Monitoring Rule: generate a custom metric value that satisfies the monitoring rule
8. Demand Elasticity policy action: the policy manager requests a VNF scale-out from the MANO
9. Evaluate the outcome of the MANO scaling action enforcement: confirm that the requested action is successfully completed
10. Deactivate Runtime Policy: deactivate the runtime policy while the NS is still deployed
11. Terminate Network Service: the deployed NS is deleted
12. Delete Runtime Policy: the Runtime Policy is deleted
13. Remove NS package: the NS package is removed
Test Verdict: the Network Service is successfully deployed and un-deployed in a k8s VIM environment; the Runtime Policies are enforced and deactivated successfully
Additional resources: Fig. B.17 shows the step-by-step test result
Figure B.17: Service Reconfiguration Kubernetes
B.10 [Policy and Monitoring] Test Monitoring VIM endpoints
This test retrieves the registered VIMs from the SP and checks whether the monitoring framework successfully collects data from each of them. Tbl. B.10 shows the test case defined to perform the monitoring VIM endpoints test.
Table B.10: Monitoring VIM integration
Test Case Name: Monitoring VIM integration
Test Purpose: check the integration between the monitoring manager and the VIMs
Test Tool: Robot Framework, using tnglib
Metric: Boolean (success or not), execution time
Applicability: this test case can be performed to check that the monitoring framework is set up correctly
Pre-test conditions: at least one VIM has been registered to the SP
Test sequence:
1. GET the list of attached VIMs: the VIM list is received
2. GET the 'up' metric for the targets: the 'up' metric list is received
3. Check each target's status: the 'up' metric value should be '1'
Test Verdict: all VIMs are integrated correctly
Additional resources: Fig. B.18 shows the step-by-step test result
Figure B.18: SP Monitoring Test
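Step 3 of the sequence checks that every scrape target reports an 'up' value of '1'. The helper below shows how such a check can evaluate the JSON body of a Prometheus instant query (GET /api/v1/query?query=up), the metric format the SP monitoring framework builds on; the function name and the sample instances are illustrative.

```python
import json

def targets_up(prom_response):
    """Given the JSON body of a Prometheus instant query for the 'up' metric,
    return the list of instances whose value is not '1', i.e. targets the
    monitoring manager cannot currently scrape."""
    down = []
    for sample in prom_response["data"]["result"]:
        if sample["value"][1] != "1":
            down.append(sample["metric"].get("instance", "unknown"))
    return down

# Example response shaped like Prometheus's query API (instances are
# illustrative, not actual 5GTANGO VIM endpoints).
body = json.loads("""{
  "status": "success",
  "data": {"resultType": "vector", "result": [
    {"metric": {"__name__": "up", "instance": "10.0.0.1:9100"},
     "value": [1565000000, "1"]},
    {"metric": {"__name__": "up", "instance": "10.0.0.2:9100"},
     "value": [1565000000, "0"]}
  ]}
}""")
```

The test verdict passes only when the returned list is empty; here, `targets_up(body)` would flag the second target as down.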
C Appendix - Various scripts and files
C.1 Kubernetes VIM configuration file
An example of the file config.json:
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0=",
        "server": "https://172.31.13.2:6443"
      },
      "name": "kubernetes"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "kubernetes",
        "user": "tango",
        "namespace": "tango"
      },
      "name": "tango"
    }
  ],
  "current-context": "tango",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "tango",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.=",
        "client-key-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t="
      }
    }
  ]
}
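A client consuming this file resolves the current-context entry into the API-server endpoint and bearer token before issuing any request. A stdlib-only sketch of that resolution (the function name is illustrative; the lookup chain follows the kubeconfig structure shown above):

```python
def resolve_context(cfg):
    """Resolve the current-context of a kubeconfig-style dict into the
    (server, token) pair a client would use.

    Raises StopIteration if the referenced context, cluster, or user
    entry is missing from the file.
    """
    ctx_name = cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == ctx_name)
    cluster = next(c["cluster"] for c in cfg["clusters"]
                   if c["name"] == ctx["cluster"])
    user = next(u["user"] for u in cfg["users"] if u["name"] == ctx["user"])
    return cluster["server"], user["token"]
```

This mirrors what `kubectl config view --minify` shows for the active context.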