OpenStack Quantum Intro (OS Meetup 3-26-12)


This is the presentation I gave at the Bay Area OpenStack Meetup on 3-26-12.

Transcript of OpenStack Quantum Intro (OS Meetup 3-26-12)

Intro to OpenStack Quantum

Dan Wendlandt – Quantum Hacker & PTL
dan@nicira.com

Twitter: danwendlandt

Caveats

• “Contents may shift in flight” – Quantum is a young and rapidly evolving project. My focus will be on big-picture concepts, not on deploying it right now.

• “Handwave, handwave” – The target audience is future users of Quantum (cloud tenants, cloud operators). There is not enough time to cover the details developers would want.

• “One point of view” – Quantum is a community, and differences of opinion “sometimes” exist.

Outline

• Why Quantum?
• What is Quantum?
  – Basic Concepts & Demo
  – High-level System Architecture
• Current Project Status
• Future Directions
• Frequently Asked Questions

Why Quantum?

What is OpenStack?

• Open Source Cloud Software…
• A collection of “cloud services”
• Each service includes:
  – A tenant-facing API that exposes logical abstractions for consuming the service.
  – One or more backend implementations of that API.

In the beginning…

*-as-a-Service Capability → OpenStack Service
• Compute → Nova
• Storage → Swift (Objects), Glance (Images)
• Network → ?

Why Quantum?

• Networking was a sub-component of Nova
• Two Key Problems:
  #1: Limited technology “baked in” to the design.
  #2: No tenant control of network topology and addressing, and no way to insert advanced network services (e.g., a firewall).

Problem #1: Technology Limitations

• Cloud stresses networks like never before:
  – High-density multi-tenancy, massive scale
  – Strict uptime requirements
  – Integration with legacy hosting environments / remote data centers
  – Price pressure to use commodity gear
  – VM mobility

• Nova provides only basic technologies:
  – VLANs are the only option for multi-tenancy
  – Uses a simple Linux bridge (no advanced QoS, ACLs, or monitoring)
  – The “network controller” node is a centralized single point of failure for large networks.

“VLANs are Great!” – Stone Age Man

Why Quantum? Reason #1

• New networking technologies are emerging to try to tackle these challenges:
  – Software-Defined Networking (SDN) / OpenFlow
  – Overlay tunneling: VXLAN, NVGRE, STT
  – Fabric solutions: FabricPath, QFabric, etc.
  – [ insert other solution here ]

• Quantum provides a “plugin” mechanism that enables different technologies to implement calls made via the Quantum API.

• Choice is a good thing!

Problem #2: No Tenant Control

“You can have any color as long as it’s black.” – Henry Ford, about the Model T

• Cloud tenants want to replicate rich enterprise network topologies:
  – Ability to create “multi-tier” networks (e.g., web tier, app tier, db tier)
  – Control over IP addressing
  – Ability to insert and configure their own services (e.g., firewall, IPS)
  – VPN/bridge to remote physical hosting or customer premises

• Nova provides no tenant control:
  – No way to control topology
  – The cloud assigns IP prefixes + addresses
  – No generic service insertion

Why Quantum? Reason #2

• The base Quantum API lets tenants create multiple private networks and control IP addressing on them.

• Quantum API extensions enable additional control:
  – Security & Compliance Policies
  – Quality-of-Service
  – Monitoring + Troubleshooting

• “Advanced Network Services” such as firewall, intrusion detection, VPN, can be inserted either as VMs that route between networks, or as API extensions.

All is Right with the World…

*-as-a-Service Capability → OpenStack Service
• Compute → Nova
• Storage → Swift (Objects), Glance (Images)
• Network → Quantum

Why Quantum?

Questions?

What is Quantum?

Quantum Basics (by analogy to Nova)

• *-as-a-service — Nova: Compute. Quantum: Network.
• Major API abstractions — Nova: “virtual servers”, representing a host with CPU, memory, disk, and NICs. Quantum: “virtual networks”, a basic L2 network segment, and “virtual ports”, attachment points for devices connecting to virtual networks.
• Interactions with other OpenStack services — Nova: virtual servers use “virtual images” from Glance. Quantum: virtual ports are linked to vNICs on “virtual servers”.
• Support for different back-end technologies — Nova: “virt-drivers” for KVM, XenServer, Hyper-V, VMware ESX. Quantum: “plugins” for Open vSwitch, Cisco UCS, Linux Bridge, Nicira NVP, Ryu Controller.
• API extensibility for new or back-end-specific features — Nova: keypairs, instance rescue, volumes, etc. Quantum: quality-of-service, port statistics, security groups, etc.

API Abstractions

[Diagram: a Quantum virtual network “Net1” (10.0.0.0/24) with two virtual ports; Nova virtual servers VM1 (10.0.0.2) and VM2 (10.0.0.3) attach to those ports via virtual interfaces (VIFs).]

Quantum REST API Abstraction Details

• Virtual Networks:
  – Equivalent to a “virtual VLAN”, a dedicated L2 segment.
  – Example: quantum.foo.com/<tenant-id>/network/<network-id>

• Virtual Ports:
  – Where a virtual interface (e.g., a Nova vNIC) attaches to a network.
  – Ports expose configuration and monitoring state via extensions (e.g., ACLs, QoS policies, packet statistics).
  – Example: quantum.foo.com/<tenant-id>/network/<network-id>/port/<port-id>
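To make those two resources concrete, here is a minimal client-side sketch of creating a network and a port over plain HTTP. The host and URL paths follow the examples above; the JSON payloads and response keys are illustrative assumptions, not the exact v1.1 wire format.

```python
# Sketch of hitting the two REST resources above. Paths follow the slide's
# examples; payload/response shapes are assumptions, not the v1.1 wire format.
import requests

BASE = "http://quantum.foo.com"
TENANT = "<tenant-id>"  # placeholder tenant, as in the example URLs

# Create a virtual network (a dedicated L2 segment).
resp = requests.post(f"{BASE}/{TENANT}/network",
                     json={"network": {"name": "web-tier"}})
network_id = resp.json()["network"]["id"]

# Create a virtual port on that network; a vNIC will attach to it later.
resp = requests.post(f"{BASE}/{TENANT}/network/{network_id}/port",
                     json={"port": {"state": "ACTIVE"}})
port_id = resp.json()["port"]["id"]

print("network:", network_id, "port:", port_id)
```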

Old Model: Static Nova Networking

• A single network exists (per-project or global).
• VMs automatically get a vNIC on that single network at boot.
• Tenants have no control over IP addressing.

[Diagram: a single public network 88.0.0.0/18 with TenantA-VM1 (88.0.0.2), TenantB-VM1 (88.0.0.3), TenantA-VM2 (88.0.0.4), and TenantA-VM3 (88.0.0.5) all attached.]

Quantum Model: Dynamic Network Creation + Association

• A tenant can use the API to create many networks.
• When booting a VM, the tenant defines which network(s) it should connect to.
• Can even plug in instances from other services (e.g., a load-balancing service).

[Diagram: Tenant-A has two private networks, Net1 (10.0.0.0/24) and Net2 (9.0.0.0/24), plus the public network 88.0.0.0/18. TenantA-VM1 (10.0.0.2) and TenantA-VM3 (9.0.0.2) each sit on one private network, TenantA-VM2 (10.0.0.3, 9.0.0.3) spans both, and a load balancer connects the private networks to the public one.]

Demo!

What could possibly go wrong?

Questions on Quantum Basics or Demo?

• At no time during the demo did the tenant see the technology used to implement L2 isolation (VLANs, tunneling, etc.).

• Key tenet: an abstract logical API with a “pluggable” back-end gives the provider choice.

• Plugins will give operators choices in terms of:
  – Advanced Features
  – Cost
  – Scale
  – High Availability
  – Hypervisor + Network HW Compatibility
  – Manageability / Polish

Quantum Architecture Basics

Plugin Architecture

• Plugins perform two main tasks:
  – Process API calls: store the results of all network + port calls, while mapping abstract entities to plugin-specific identifiers (e.g., map a network UUID to a VLAN).
  – Manage virtual switches: learn about VIFs when they are attached to the network and configure network switches accordingly (e.g., assign a vswitch port to a particular VLAN).
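As a rough illustration of those two tasks, here is a minimal sketch of a plugin that records networks and ports and maps each network to a free VLAN. The class name, method names, and in-memory “database” are assumptions for illustration only, not the real Quantum plugin interface.

```python
# Skeletal sketch of a plugin's two jobs. Class/method names and the
# in-memory "DB" are illustrative assumptions, not the real plugin API.
import uuid


class FakeVlanPlugin:
    def __init__(self, vlan_range=range(100, 200)):
        self.free_vlans = list(vlan_range)  # toy pool of unused VLAN ids
        self.networks = {}                  # network UUID -> VLAN id
        self.ports = {}                     # port UUID -> attached interface id

    # Task 1: process API calls, mapping abstractions to plugin-specific ids.
    def create_network(self, tenant_id, name):
        net_id = str(uuid.uuid4())
        self.networks[net_id] = self.free_vlans.pop()
        return net_id

    def create_port(self, net_id):
        port_id = str(uuid.uuid4())
        self.ports[port_id] = None
        return port_id

    # Task 2: when a VIF attaches, configure the virtual switch accordingly.
    def plug_interface(self, net_id, port_id, interface_id):
        self.ports[port_id] = interface_id
        vlan = self.networks[net_id]
        # A real plugin's agent would tag the vswitch port here.
        print(f"assign vswitch port for {interface_id} to VLAN {vlan}")
```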

Quantum Architecture (simple)

[Diagram: API clients — tenant scripts, Horizon, and Nova — issue create-net / create-port calls against the Quantum API, which the Quantum Server exposes as a uniform API for all clients, backed by API extensions and a DB. Behind the server, the Quantum Plugin handles those calls and uses internal plugin communication to configure the virtual switches on the Nova Compute nodes; interfaces from a service like Nova plug into a switch managed by the Quantum plugin. API + Plugin = Quantum Service.]

Quantum Architecture (adv.)

[Diagram: as in the simple architecture, API clients — tenant scripts, Horizon, and Nova — issue create-net / create-port calls through the Quantum API to the Quantum Server (a uniform API for all clients, with extensions and a DB), and the Quantum Plugin configures the virtual switches on the Nova Compute nodes via internal plugin communication. The advanced variant adds an External Manager, with its own DB and API, that the plugin delegates to. API + Plugin = Quantum Service.]

Current Project Status

Project Status: Essex Cycle

• Started at the Diablo summit, “incubated” for Essex, “core” in Folsom.
• Available at: http://launchpad.net/quantum
• Docs at: http://docs.openstack.org/incubation/
• Current Capabilities:
  – v1.1 of the Quantum L2 API, with extension support
  – API client library and CLI
  – Nova integration via the QuantumManager
  – Plugin framework & several publicly available plugins:
    • Open vSwitch Plugin
    • Cisco UCS/Nexus Plugin
    • Linux Bridge Plugin
    • Nicira Network Virtualization Platform (NVP)
    • Ryu OpenFlow Controller
  – Integrated with “devstack” (see: http://wiki.openstack.org/QuantumDevstack)
  – Packaging for Ubuntu (Precise) / Fedora / Debian

Project Status: Who should use Quantum?

• “Early adopters” are already putting Quantum into trial & production OpenStack deployments.

• Caution: these deployments are by people at the cutting edge and require significant familiarity with Quantum.

• Folsom release will be first target for widespread adoption.

Future Directions

• More and more plugins
  – Already have a pipeline of additional plugins...
• Merge with the Melange IP Address Mgmt. project
• Beyond L2: Advanced Network Services
  – L3 routing + NAT/floating IPs
  – Firewall & security groups
  – QoS guarantees
  – VPN, DHCP, LB (may be part of Quantum, or separate projects that integrate with Quantum)
• Keystone: fine-grained API permissions
• Horizon: GUI for configuring networking

Play with Quantum

• Now integrated with DevStack: http://wiki.openstack.org/QuantumDevstack
• Use nova-manage to create networks
• Spin up VMs with the --nic option
• See the Quantum Administrator Guide for details:
  – http://docs.openstack.org/incubation/openstack-network/admin/content/

Frequently Asked Questions

• Is OpenFlow required for Quantum?
  – A: Nope! OpenFlow is just one technology that Quantum enables.

• Is Quantum “software-defined networking”?
  – A: It depends…

• How does Quantum compare to Amazon VPC?
  – A: They share the goal of enabling advanced networking in the cloud. Quantum will give cloud operators the ability to compete with (and go beyond) the VPC feature set.

Thanks! Questions / Comments?

Come join us: http://wiki.openstack.org/Quantum

netstack@lists.launchpad.net

Dan Wendlandt
dan@nicira.com

Twitter: danwendlandt
http://www.slideshare.net/danwent/

Bonus Slides

Basic Quantum + Nova API Flow (API Client ↔ Quantum Server / Nova Server)

1. Create Network (POST /tenant1/network) → Quantum returns Network UUID: ‘abc’
2. Create Server (POST /tenant1/server) → Nova returns Server UUID: ‘def’
3. Get Server Interface(s) (GET /tenant1/server/def/interface) → Nova returns Server Interface UUID list: [ ‘ghi’ ]
4. Create Port on Network (POST /tenant1/network/abc/port) → Quantum returns Port UUID: ‘jkl’
5. Attach Interface to Port (PUT /tenant1/network/abc/port/jkl with { ‘attachment’ : ‘ghi’ }) → Success
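A client-side sketch of that flow might look like the following. Only the HTTP verbs and paths come from the flow above; the base URLs, payload shapes, and response keys are illustrative assumptions.

```python
# Client-side sketch of the Nova + Quantum flow above. The HTTP verbs and
# paths come from the slide; base URLs, payloads, and response keys are
# illustrative assumptions.
import requests

QUANTUM = "http://quantum.example.com"
NOVA = "http://nova.example.com"

# 1. Create a Quantum network.
net = requests.post(f"{QUANTUM}/tenant1/network", json={}).json()
net_id = net["network"]["id"]            # e.g. 'abc'

# 2. Create a Nova server.
srv = requests.post(f"{NOVA}/tenant1/server", json={}).json()
srv_id = srv["server"]["id"]             # e.g. 'def'

# 3. List the server's interfaces (vNICs).
ifaces = requests.get(f"{NOVA}/tenant1/server/{srv_id}/interface").json()
iface_id = ifaces["interfaces"][0]       # e.g. 'ghi'

# 4. Create a port on the network.
port = requests.post(f"{QUANTUM}/tenant1/network/{net_id}/port", json={}).json()
port_id = port["port"]["id"]             # e.g. 'jkl'

# 5. Attach the interface to the port.
requests.put(f"{QUANTUM}/tenant1/network/{net_id}/port/{port_id}",
             json={"attachment": iface_id})
```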

Simple VLAN Plugin Example

• The plugin assumes all VLANs are trunked to all hypervisors (similar to nova-network).
• When a new q-network is created, it creates a DB entry mapping the network to a free VLAN.
• It stores port + attachment mappings in the DB.
• It runs an agent on each hypervisor to recognize new vswitch ports that represent Nova interfaces.
• When a new vswitch port appears, the agent finds the q-port + q-network associated with the interface-id and configures the vswitch port with the correct VLAN.
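A toy version of that hypervisor agent loop might look like the sketch below. The three helper callables (list_vswitch_ports, lookup_vlan_for_interface, set_port_vlan) are hypothetical stand-ins for real vswitch and plugin-DB operations, not actual Quantum code.

```python
# Toy sketch of the per-hypervisor agent described above. The three helper
# callables are hypothetical stand-ins for real vswitch/DB operations.
import time


def vlan_agent_loop(list_vswitch_ports, lookup_vlan_for_interface,
                    set_port_vlan, poll_interval=2):
    """Poll for new vswitch ports and tag each with its network's VLAN."""
    seen = set()
    while True:
        for vs_port in list_vswitch_ports():       # ports on the local vswitch
            if vs_port.name in seen:
                continue
            # Map the port's interface-id back to its q-network, then to the
            # VLAN the plugin recorded for that network in its DB.
            vlan = lookup_vlan_for_interface(vs_port.interface_id)
            if vlan is not None:
                set_port_vlan(vs_port.name, vlan)  # e.g. tag an OVS port
                seen.add(vs_port.name)
        time.sleep(poll_interval)
```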

Example Quantum + Nova Architecture

[Diagram: a dashboard / automation tools layer drives two tenant APIs — nova-api in the Nova service and the Quantum API in the Quantum service. Within Nova, nova-scheduler and nova-compute coordinate over internal Nova communication, and on each hypervisor (e.g., XenServer #1) nova-compute plugs interfaces into a vswitch. The Quantum Plugin configures that vswitch over internal plugin communication. Two plugins available: Open vSwitch and Cisco UCS/Nexus.]

Common Question: Can I run multiple plugins at once?

• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *
• A “plugin” is NOT a “driver” *

* Explained on next slide….

A plugin is not a driver

• A plugin registers to handle all Quantum API calls in a “group” (e.g., all network/port calls).

• Because Quantum only has one “group” of API calls right now, only one plugin runs at a time (this will change as APIs expand beyond L2).

• A single plugin may talk to multiple types of switches (i.e., it may have multiple “drivers”)

• “driver” code can be shared across plugins.

Why separate plugins + drivers?

• Plugins may make decisions that are technology-specific, but not device-specific (e.g., mapping q-network ‘foo’ to VLAN 99).

• That decision must be made by only a single entity… if multiple such decisions were made by different plugins, they likely would conflict.

• The plugin may use drivers to communicate the results of this decision to different devices (e.g., it may configure the VLAN on a vswitch port, and tell the upstream physical switch to trunk that VLAN).
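As a small illustration of that split, the sketch below has one plugin make the VLAN decision exactly once and then hand it to per-device drivers. The class names and the configure_vlan method are hypothetical stand-ins for device-specific code, not real Quantum drivers.

```python
# Illustrative sketch of the plugin/driver split: one plugin makes the
# technology-level decision (which VLAN), then delegates device-specific
# work to drivers. Class and method names here are hypothetical.

class OVSDriver:
    def configure_vlan(self, network_id, vlan):
        print(f"tag vswitch ports for network {network_id} with VLAN {vlan}")


class PhysicalSwitchDriver:
    def configure_vlan(self, network_id, vlan):
        print(f"trunk VLAN {vlan} for network {network_id} on the upstream switch")


class VlanPlugin:
    def __init__(self, drivers, first_vlan=99):
        self.drivers = drivers
        self.next_vlan = first_vlan
        self.net_to_vlan = {}

    def create_network(self, network_id):
        # The technology-level decision (q-network -> VLAN) is made exactly
        # once, here, so different devices can never disagree about it.
        if network_id not in self.net_to_vlan:
            self.net_to_vlan[network_id] = self.next_vlan
            self.next_vlan += 1
        vlan = self.net_to_vlan[network_id]
        # Each driver applies that single decision to its own device type.
        for driver in self.drivers:
            driver.configure_vlan(network_id, vlan)


plugin = VlanPlugin([OVSDriver(), PhysicalSwitchDriver()])
plugin.create_network("foo")  # maps q-network 'foo' to VLAN 99 on both devices
```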