OpenStack Deployment in the Enterprise
In partnership with:
Shannon McFarland – CCIE #5245 Principal Engineer @eyepv6
Housekeeping Notes
• Thank you for attending Cisco Connect Toronto 2015. Here are a few housekeeping notes to ensure we all enjoy the session today.
• Please set your cell phones and laptops to silent so no one is disturbed during the session
• A power bar is available under each desk in case you need to charge your laptop
• The slides you download contain more material than what I will present
Agenda
• Cloud Trends
• What is OpenStack?
• OpenStack Participation
• What are Enterprises doing with OpenStack?
• OpenStack Deployment
• Cisco Product Integration
• Conclusion
Cloud Trends
Enterprise Trends – Cloud
• Typical progression: Virtualization (server, storage, app, etc.) → Public/Hybrid Cloud → Public Cloud Retraction → Private Cloud
• Cost-driven moves to public cloud were often a mistake; expectations were missed on cost, HA, performance and operations
• Private cloud done their way: self-service, reset cost expectations, elastic, understand cloud HA, multi-tenancy, IT meets DevOps
• Some skip the public cloud step
Legacy IT Change Control is Diametrically Opposed to Cloud
• Cool and exciting technologies are borderline useless if IT process and change control don’t adapt
• Elastic, self-service, FastIT are all the enemy of legacy IT models
Continuous Integration/Continuous Deployment Operational Process for an Upgradeable OpenStack
• The biggest issue with OpenStack in the Enterprise is actually not ‘OpenStack in the Enterprise’ but the operational processes that surround it
• DevOps – Learn it, Live it, Love it: http://www.jedi.be/blog/2012/05/12/codifying-devops-area-practices/
• CI/CD – The make or break process that your customer has to understand
• Build the processes BEFORE building the OpenStack environment
• Remember, OpenStack was built for modern-day distributed web applications that are driven by developers
High-Level CI/CD Overview
• RCS: Subversion, Mercurial, CVS, Bazaar, Perforce, ClearCase, etc.
• Code Review: Gerrit, Git pull request, Phabricator, Barkeep, GitLab, etc.
• Code Repo: GitHub, Bitbucket, BitKeeper, Gitorious, etc.
• Integration Server: Jenkins/Hudson, Zuul, CloudBees, Go, Maven, etc.
• Test Jobs: Tempest, Rally, puppet-rspec, tox, etc.
• Artifacts: rpmbuild, Jenkins, Artifactory, Apache Archiva, etc.
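The CI/CD flow above can be sketched as a minimal job script. This is a sketch only: the stage names and ordering are illustrative, and each stage would invoke the real tool chosen from the lists above (Gerrit, tox, Tempest, rpmbuild, etc.).

```shell
#!/bin/sh
# Illustrative CI/CD job skeleton -- not from the deck.
# Each stage would call out to the real tool; here we only record
# the order in which a pipeline executes its stages.
set -e

run_stage() {
  echo "stage: $1"
}

run_stage "checkout"     # RCS: pull the change under review
run_stage "unit-test"    # Test jobs: tox / puppet-rspec
run_stage "integration"  # Integration server runs Tempest/Rally
run_stage "artifact"     # Artifact creation: rpmbuild -> repo manager
run_stage "deploy"       # CD: deployment jobs push the artifact out
```

In a real deployment an integration server such as Jenkins would run an equivalent sequence on every Gerrit change, gating the merge on the test stages.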
What is OpenStack?
“OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system” – openstack.org
OpenStack Releases
• Austin – October 2010
• Bexar – February 2011
• Cactus – April 2011
• Diablo – September 2011
• Essex – April 2012
• Folsom – September 2012
• Grizzly – April 2013
• Havana – October 2013
• Icehouse – April 2014
• Juno – October 2014
• Kilo – April 2015
Compute “Nova”
- Houses VMs - API driven - Support for multi-hypervisors
Storage Image, Object, Block
“Glance, Swift, Cinder”
- Instance/VM image storage - Cloud object storage - Persistent block level storage
Dashboard “Horizon”
- Web app for controlling OpenStack resources - Self-service portal
Identity “Keystone”
- Centralized policies - Tenant mgmt. - RBAC - Ext. integration (LDAP)
Networking “Neutron”
- Networking as a service - Multiple models - IP address mgmt. - Plugins to external HW
Telemetry “Ceilometer”
- Central collection point - Metering and monitoring
Orchestration “Heat”
- Template-based orchestration engine - More rapid deployment of applications
Database “Trove”
- DBaaS - Single-tenant DB within instance
Data Processing “Sahara”
- Fast provisioning of Hadoop clusters
OpenStack is “Project” Based – Core Projects Shown
Reference
More are added over time…
OpenStack Participation
Why Does OpenStack Matter?
• Choice
– There is no one-size-fits-all option for cloud computing
– There is no single vendor who can fill all needs of a cloud stack
– You will likely engage with multiple partners
• Community – Open Source – Community driven – Individual, organizational – Better time-to-market and faster feature velocity
• Commercialization – Start with the ‘baseline’ OpenStack components – Vendor opportunities for value-add integration on top of OpenStack baseline
• Design, deployment, automation, operation, high-availability, applications, etc…
Who is Involved in OpenStack?
• You name it – Compute, Storage, Networking vendors, Universities, Gov’t, massive pile of OpenStack-specific startups
• Traditional HW vendors – Cisco, HP, Dell, etc…
• Providers – Rackspace, AT&T, Comcast, etc…
• Startups – PistonCloud, SwiftStack and many, many more…
• Distributions & Support – Red Hat, Canonical, SUSE
• Some are focused on only small parts of OpenStack such as driving object storage features (SwiftStack) or high-performance block storage (SolidFire)
Cisco’s Focus on OpenStack – Today
• Start simple, build from there – Focus on automation and HA
• Baseline + Cisco Integration
• General education of what Cisco is doing – Thought Leadership – Help customers know What, When, Where & How
• Focus spans Engineering, Customers and Community:
– Product integration: Nexus/ACI, UCS/UCSM, CSR/ASR, Cisco Prime Network Registrar, co-developed solutions (Red Hat, Canonical, SUSE), Metacloud/Cisco Private Cloud
– Upstream contributions: Neutron (Network Service), Horizon (Dashboard), Keystone (Identity), Swift (Object Storage), Ceph/Cinder (Block Storage), automation, design/deployment
Cisco OpenStack® Private Cloud
• Design and Architect • Platform Installation • 24x7 Monitoring • Problem Mitigation • Maintenance Coordination • Platform Updates • Capacity Planning
• Remote private cloud engineering and operations
• Delivered “as a service”
• In your data center, on your hardware (that meets minimum specifications)
Cisco + Other Distributions/Vendors
• Cisco.com OpenStack: http://www.cisco.com/web/solutions/openstack/index.html
• Red Hat:
– UCSO: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/OpenStack/UCSO/Starter/1-0/UCSO.pdf
– http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/OpenStack/RHEL-UCS/Red-Hat-Openstack-Platform-UCS.pdf
– http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_rhos.pdf
– http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/wp_openstack.pdf
– http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-fabric/solution-brief-c22-729865.pdf
• Ubuntu: http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_ubuntu.pdf
Reference
To Automate or Not and How Much to Automate
• Single Shot – Manually set up everything:
– Deep appreciation for what installers do
– Best way to learn how the components of OpenStack communicate
• Semi-Automatic – Use automation for ‘some’ of the setup and maintain/modify manually:
– See slide on installers
• Automatic – Install > Operate > Upgrade
– CI/CD a huge part of this flow
Distro/Vendor Supported Installers
• Red Hat OpenStack (RHOS/RDO)
– PackStack and Foreman: http://www.redhat.com/openstack/ and https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/
– Spinal Stack: http://spinal-stack.readthedocs.org/en/latest/index.html
• Canonical/Ubuntu – MAAS and Juju: http://www.ubuntu.com/cloud
• SUSE: https://www.suse.com/products/suse-cloud/features/
• Mirantis Fuel: http://software.mirantis.com/main/
• Piston Cloud: http://www.pistoncloud.com/
• Others…
Reference
Red Hat – Packstack
• Meant for single/few host deployments in NON-production deployments:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Deploying_OpenStack_Proof_of_Concept_Environments/index.html
• Install Packstack:
yum install -y openstack-packstack
• Generate SSH keys (or let Packstack do it):
ssh-keygen
• Generate an answer file (or just run ‘packstack’ and follow the prompts):
packstack --gen-answer-file=~/answers.cfg
• Run the answer file:
packstack --answer-file=~/answers.cfg
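Once Packstack finishes, a quick sanity check might look like the sketch below. These commands assume a completed RDO install: `keystonerc_admin` is the credentials file Packstack writes to root’s home directory, and `openstack-status` comes from the RDO `openstack-utils` package.

```
source ~/keystonerc_admin   # admin credentials written by Packstack
openstack-status            # summary of OpenStack service states
nova service-list           # confirm compute/scheduler services are up
```

If a service shows as down, its log under /var/log/<service>/ is the first place to look.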
Reference
What are Enterprises doing with OpenStack?
Common Enterprise Use Cases
• OpenStack, at least today, is targeted at hosting modern-day distributed applications written for the cloud – This isn’t your grandpa’s server virtualization platform built for individual VM HA/mobility
• Sandbox environments – A place to research, learn and test CI/CD processes – PoC web applications along with ‘practicing’ the new DevOps methodology – A place to learn the whole cloud deployment framework, document, train, move to production
• Development environments – Using the lessons learned in the sandbox phase:
• Build Dev, QA and production environments
• Apply CI/CD processes
• Slow-roll web application deployment either on ‘standard’ OpenStack or in conjunction with a PaaS deployment
• Data Processing environments – Big Data clusters, etc.. • Training systems – Cheap and fast to build and tear down for each class • Revenue generating applications – Vertical applications
Shock-and-Awe: Dashboard is not where tenants do their work
Cloud Apps Deployment – Automate it
• Flow: Boot the Instance → Config Management → App is Deployed → Rinse & Repeat
– Cloud-init for Puppet/Chef/etc.
– Image already has agent/script
http://docs.openstack.org/user-guide/content/user-data.html
# Nodes for web server instances
node 'sales-web-01' {
include lamp
}
root@build-server:~# tree /etc/puppet/modules/lamp/
/etc/puppet/modules/lamp/
├── files
│ ├── apache2.conf
│ ├── index.php
│ └── php5.conf
└── manifests
└── init.pp
nova boot --user-data ./cloud-config-puppet.txt --image precise-x86_64 --flavor m1.tiny --key_name ctrl-key --nic net-id=42823c88-bb86-4e9a-9f7b-ef1c0631ee5e sales-web-01
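For the --user-data file passed to nova boot, a minimal cloud-config might look like the sketch below. The package list and the hosts entry are illustrative assumptions, not from the deck; the idea is simply to bootstrap the Puppet agent on first boot so the node definition shown above takes effect.

```
#cloud-config
# Illustrative user-data sketch: install the Puppet agent and run it
# once against a hypothetical puppet master.
packages:
  - puppet
runcmd:
  - [ sh, -c, "echo '192.168.100.5 puppet' >> /etc/hosts" ]  # hypothetical master IP
  - [ puppet, agent, --onetime, --no-daemonize ]
```

Alternatively, bake the agent into the image and use user-data only to point it at the master.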
Cloud Apps Deployment - Heat
• Growing interest in Heat-backed deployments • Today, Heat orchestrates resources inside a tenant space
• https://wiki.openstack.org/wiki/Heat • http://docwiki.cisco.com/wiki/OpenShift_Origin_Heat_Deployment_Guide
• http://blog.scottlowe.org/2014/05/01/an-introduction-to-openstack-heat/
• https://github.com/shmcfarl/my-heat-templates
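As a concrete illustration, a minimal Heat (HOT) template for a single instance might look like the sketch below. The image, flavor and key names reuse the earlier nova boot example; the parameter name is an assumption, and real templates would add outputs, security groups, and so on.

```
heat_template_version: 2013-05-23
description: Minimal single-instance sketch (illustrative)
parameters:
  private_net:
    type: string
    description: ID of an existing tenant network
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: precise-x86_64
      flavor: m1.tiny
      key_name: ctrl-key
      networks:
        - network: { get_param: private_net }
```

Launched Kilo-era style with something like: heat stack-create -f web.yaml -P private_net=<net-id> sales-web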
Baseline vs. Premium OpenStack Deployments
Common Baseline Components – Example:
• Network: Neutron with ML2 (OVS, Linux Bridge)
• Infrastructure: HAProxy/Keepalived
• Compute: Nova (KVM, Xen)
• Storage: Swift (Ceph Object GW), Cinder (Ceph Block RBD), Glance
• Orchestration, etc.
Common Premium Components – Example:
• Network: Neutron with ML2 (OVS, Cisco Nexus, Linux Bridge)
• Infrastructure
• Compute: Nova (KVM, Xen)
• Storage: Swift, Cinder (Ceph Block RBD), Glance
• Orchestration, etc.
OpenStack Deployment Overview – Rack/Node Scale
• Start with an all-in-one (AIO) node running controller, compute and storage roles, then split out dedicated compute and storage nodes as you scale
• The AIO controller runs:
– MySQL, MariaDB, etc.
– RabbitMQ, Qpid, etc.
– API endpoints: Keystone, Glance, Nova, Neutron, Cinder, Heat, Swift
All-in-One (AIO) – Getting Started
• Data center infrastructure: spine/agg layers, ToRs, OOB management, SLB
• Per pod: AIO controller, compute nodes, network node(s), block storage, object storage
• AIO controllers run: Galera/MySQL, RabbitMQ, and API endpoints (Keystone, Glance, Nova, Neutron, Cinder, Heat, Swift)
• Infrastructure services: Build/PXE, Automation, DNS, DHCP, NTP, Logging
All-in-One (AIO) Compressed HA
• Data center infrastructure: spine/agg layers, ToRs, OOB management
• Three controllers, each running RabbitMQ, API endpoints and Galera, co-located with compute, block storage and object storage roles (“compressed” HA)
• Dedicated Swift proxies in front of the object storage nodes
• Network node(s) alongside the compute pool
What’s a Service Cloud?
• Service Cloud + Tenant Cloud
• It’s the ‘under cloud’
• Used as a hosting platform for tenant cloud services – usually in a large cloud (1000s of instances with 100–1000s of tenants)
• It is an OpenStack deployment that will host (virtually) the OpenStack control functions used by each tenant
• Example: the service cloud hosts one set of AIO controllers for Tenant 1 and another set for Tenant 2, alongside shared compute
OpenStack Deployment Overview - Network
What Really Changes in my Data Center?
• OpenStack components live South of the Top-of-Rack switch
• Your existing DC, Internet Edge and BN architecture stays the same
• It’s about the compute, storage and orchestration/management tiers
• Your apps go largely unchanged
• Topology (bottom to top): UCS C-Series / UCS B-Series servers at the Access Layer, then Services, Agg Layer and Core Layer, out to the Enterprise/Internet
• OpenStack lives at the access/compute layer
Network Decisions
• OpenStack Networking
– http://docs.openstack.org/admin-guide-cloud/content/section_networking-scenarios.html
– Many vendor plugins
• ML2/OVS, ML2/Linux Bridge
• Cisco Nexus mechanism driver for VLAN and VXLAN – http://docwiki.cisco.com/wiki/OpenStack/ML2NexusMechanismDriver
• Cisco Nexus mechanism driver for APIC – https://techzone.cisco.com/t5/Application-Centric/APIC-OpenStack-Driver-Installation/ta-p/764781
• Cisco Nexus 1000v – http://www.cisco.com/c/en/us/products/switches/nexus-1000v-kvm/index.html
– VLAN Trunking, GRE, VXLAN
• Scale
– VLAN number limitations for large tenant + networking environments
– GRE/VXLAN – throughput impact, especially on older releases
• Network Tuning – Linux kernel, networking and vSwitch-specific (OVS) tuning is critical:
– vhost-net (‘modprobe vhost-net’): http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net and https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/
– Test offload settings: ‘ethtool -K eth1 gro off’ – http://www.linuxcommand.org/man_pages/ethtool8.html
Nexus Plugin Example Topology – VLANs
• Trunk links from each compute node to ToR/Access Layer
• Each tenant uses one or more VLANs for tenant isolation
• Very basic and very fast
• Cisco Nexus ML2 driver allows for auto-configuration of ToR trunk links facing the compute nodes
• Example topology: control-server plus compute-server01/02, each with eth0 on the management network and eth1 trunked toward the agg layer (switch ports e1/8, e1/9); trunk links carry VLANs 500–600; provider networks: VLAN500 = 192.168.250.0/24, VLAN501 = 192.168.251.0/24, …
Spine/Leaf – VXLAN Examples
• Host-based VXLAN: VXLAN tunnels run host-to-host between compute nodes, across the leaf/spine fabric
• Mixed VLAN/VXLAN: VLANs host-to-leaf, VXLAN leaf-to-leaf
The Hard Stuff – IPv6 + Cloud • Inside of a private cloud stack you have a lot of moving parts and they all ride on IP:
– API endpoints – Provisioning, Orchestration and Management services – Boatload of protocols and databases and high-availability components – Virtual networking services <> Physical networking
• IPv6 has been available with OpenStack for a while, but it has depended on a lot of backports and custom patches to be functional
• Kilo offers the best ‘out-of-box’ support yet – but still needs more work • Two common approaches for IPv6 support:
– Dual-Stack everything (Service Tier + Tenant Access Tier [Tenant management interface along with VM network access])
– Conditional Dual stack (Tenant Access Tier only – API endpoints & DBs are still IPv4)
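For the tenant access tier, adding an IPv6 subnet in Kilo might look like the sketch below. The network name and prefix are illustrative, and SLAAC is just one of several ra-mode/address-mode combinations (DHCPv6 stateful/stateless being the others).

```
# Illustrative only: attach an IPv6 subnet to an existing tenant network
neutron subnet-create --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    tenant-net 2001:db8:1:1000::/64
```

Instances on tenant-net would then autoconfigure addresses from the /64 via router advertisements.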
Tenant IPv6 Address Options
(In each option, every tenant runs a web server and app server, with hosts ::1, ::2, ::A per subnet.)
• Option 1 – Cloud Provider-assigned Addressing: tenant subnets are carved from the provider block (e.g. 2001:420::/32); Tenant 1 gets :BAD:BEEF::/64 and :BAD:FACE::/64, Tenant 2 gets :DEAD:BEEF::/64 and :DEAD:FACE::/64
• Option 2 – Tenant Brings Addressing: Tenant 1 = 2001:DB8:1::/48 (subnets :1000::/64, :1001::/64), Tenant 2 = 2001:DB8:2::/48 (subnets :2000::/64, :2001::/64)
• Option 3 – Prefix Translation: tenants use ULA blocks internally (e.g. FD9C:58ED:7D73:1::/64, FDDE:50EE:79DA:1::/64) behind an XLATE/proxy mapping to the tenants’ global prefixes (Tenant 1 = 2001:DB8:1::/48, Tenant 2 = 2001:DB8:2::/48) – Don’t do this
Cloud Stack – IP Version Options
• Dual-Stack Everything: the Service Tier (API endpoints, database(s), automation, admin interface [GUI, CLI]) and the Tenant Access Tier (VM operating system, virtual networking [L2/L3], virtual network services [SLB/FW], tenant interface [GUI, CLI]) all run IPv4/IPv6
• Conditional Dual-Stack: the Service Tier stays IPv4-only (API endpoints, database(s), automation), while each tenant access tier is enabled independently – e.g. Tenant 1 Access Tier runs IPv4/IPv6 while Tenant 2 Access Tier runs IPv6-only
OpenStack Deployment Overview – High Availability
High Availability Decisions
• Know what you don’t know
• Pick your release – Major changes in HA across all parts of OpenStack have landed with each release
• Many components have HA options:
– Databases: options include MySQL-WSREP and Galera
– Message queue: RabbitMQ clustering and RabbitMQ mirrored queues
– API/Web services: HAProxy, Keepalived, traditional SLB
– Swift proxy nodes: HAProxy, Keepalived, traditional SLB
– Swift nodes: architecturally designed to be available (i.e. multiple copies of objects)
– Compute node: nothing directly HA, but can use migration for planned maintenance windows
• Puppet HA: search “puppet master redundancy” or “masterless puppet” – you will find plenty of reading choices ;-)
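For the API/Web services option above, an HAProxy front end might look like this fragment. The VIP, node addresses and server names are hypothetical; the pattern is a VIP (floated between load balancer nodes by Keepalived) balancing across the API service on each controller.

```
# Illustrative haproxy.cfg fragment: Keystone public API behind a VIP
listen keystone_public
  bind 192.168.100.10:5000
  balance roundrobin
  option tcpka
  server ctrl01 192.168.100.11:5000 check inter 2000 rise 2 fall 5
  server ctrl02 192.168.100.12:5000 check inter 2000 rise 2 fall 5
  server ctrl03 192.168.100.13:5000 check inter 2000 rise 2 fall 5
```

The same listen-block pattern repeats for each API endpoint (Glance, Nova, Neutron, etc.) and for the Swift proxies.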
L3 High-Availability (HA) • New in Juno release: https://wiki.openstack.org/wiki/ReleaseNotes/Juno • Helps resolve issue of single tenant router going down and isolating the tenant instances • Can be configured manually via neutron client by an Admin: ‘neutron router-create --ha True|False’
• Or set as a system default within the neutron.conf/l3_agent.ini files • Uses Keepalived for VRRP between L3 agents • Existing non-HA enabled routers can be updated to HA: ‘neutron router-update <router-name> --ha=True’
• In Juno, Distributed Virtual Router (DVR) and L3 HA cannot be enabled at the same time • Requires a minimum of two network nodes (or controllers) that have L3 agents running
L3 HA – Tenant View
• Tenant sees one router with a single gateway IP address
• non-Admin users cannot control if the router is HA or non-HA
• From the tenant’s perspective the router behaves the same in HA or non-HA mode
L3 HA – Routing View
• Tenant network has 10.10.30.x/24 assigned
• VRRP uses 169.254.x.x over a dedicated HA-only network that traverses the same tenant network type
• Router (L3 agent) on the left is the VRRP master and is the tenant GW (10.10.30.1)
• Figure: two routers, each bridged to the external networks; the HA network carries the VRRP VIP (169.254.0.1); the master holds the tenant GW (10.10.30.1) on the internal network (10.10.30.x/24); the other router is backup
L3 HA – Host View
• Figure: one compute node (VMs on br-int, with br-tun via eth0 to the management/underlay network and br-eth1 via eth1) plus two network nodes; each network node runs a qrouter namespace with keepalived and qr-xxxx (tenant), qg-xxxx (external, to the public network) and ha-xxxx (HA network) interfaces, wired through br-int/br-tun/br-eth1 with the patch-int/patch-tun and int-br-eth1/phy-br-eth1 patch ports
L3 HA – Traffic Flow
• Figure: same topology as the host view; the master (Network Node 1) sends VRRPv2 advertisements from 169.254.192.x to 224.0.0.18 over the HA network while Network Node 2 stays in backup; tenant traffic to the public network flows only through the master’s qrouter (qg-xxxx)
Enabling L3 HA – Neutron Server • On node running Neutron server – Edit the /etc/neutron/neutron.conf file:
• Restart neutron server (i.e. systemctl restart neutron-server.service)
router_distributed = False
# =========== items for l3 extension ==============
# Enable high availability for virtual routers.
l3_ha = True
#
# Maximum number of l3 agents which a HA router will be scheduled on. If it
# is set to 0 the router will be scheduled on every agent.
max_l3_agents_per_router = 3
#
# Minimum number of l3 agents which a HA router will be scheduled on. The
# default value is 2.
min_l3_agents_per_router = 2
#
# CIDR of the administrative network if HA mode is enabled
l3_ha_net_cidr = 169.254.192.0/18
Enabling L3 HA – L3 Agents (on each network node) • On nodes running L3 Agent – Edit the /etc/neutron/l3_agent.ini file:
• Restart the L3 Agent service on each node (i.e. systemctl restart neutron-l3-agent.service)
# Location to store keepalived and all HA configurations
ha_confs_path = $state_path/ha_confs
# VRRP authentication type AH/PASS
ha_vrrp_auth_type = PASS
# VRRP authentication password
ha_vrrp_auth_password = cisco123
# The advertisement interval in seconds
ha_vrrp_advert_int = 2
Example: neutron router-create
• Note: Once the neutron.conf and l3_agent.ini configs are done, you no longer need to use the --ha True flag to enable HA – it is on by default
• If you want to create a non-HA enabled router, use --ha False
• Remember that admins are the only ones who can use the flags
[root@net1 ~]# neutron router-create --ha True test1
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| distributed | False |
| external_gateway_info | |
| ha | True |
| id | 1fe9e406-2bb5-42c4-af62-3daef314e181 |
| name | test1 |
| routes | |
| status | ACTIVE |
| tenant_id | 45e1c2a0b3a244a3a9fad48f67e28ef4 |
+-----------------------+--------------------------------------+
Keepalived.conf
[root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/keepalived.conf
. . . output abbreviated
vrrp_instance VR_1 {
state BACKUP
interface ha-0d655b16-c6
virtual_router_id 1
priority 50
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass cisco123
}
track_interface {
ha-0d655b16-c6
}
virtual_ipaddress {
169.254.0.1/24 dev ha-0d655b16-c6
}
virtual_ipaddress_excluded {
10.10.30.1/24 dev qr-c3090bd6-1b
192.168.81.13/24 dev qg-4f163e63-c4
}
virtual_routes {
0.0.0.0/0 via 192.168.81.2 dev qg-4f163e63-c4
}
Callouts: ha-0d655b16-c6 is the L3 HA interface (also tracked via track_interface); 169.254.0.1 is the VRRP VIP; the virtual_ipaddress_excluded entries are IP addresses from the ‘real’ networks, not used for the VRRP VIP; virtual_routes carries the default route
Reference
VRRPv2 Advertisement
[root@net1 ~]# ip netns exec qrouter-719b853f-539e-420b-a76b-0440146f05de tcpdump -n -i ha-0d655b16-c6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-0d655b16-c6, link-type EN10MB (Ethernet), capture size 65535 bytes
14:00:03.123895 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:05.125386 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:07.128133 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:09.129421 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:11.130814 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:13.131529 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
Reference
Testing a failure [root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
master
[root@net2 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
backup
[root@net1 ~]# ip netns exec qrouter-719b853f-539e-420b-a76b-0440146f05de ifconfig ha-0d655b16-c6 down
[root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
fault
[root@net2 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
master
ubuntu@server1:~$ ping 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=20 ttl=127 time=65.4 ms
64 bytes from 8.8.8.8: icmp_seq=21 ttl=127 time=107 ms
64 bytes from 8.8.8.8: icmp_seq=22 ttl=127 time=64.5 ms
64 bytes from 8.8.8.8: icmp_seq=23 ttl=127 time=67.6 ms
Check who is master:
Simulate a failure by shutting down the HA interface (remember this was in the ‘track’ list):
Check that VRRP switched to the other node as master:
Ping – the ultimate test of HA :)
Increased delay but no loss
Storage
References for Storage Info
• OpenStack Storage: https://www.openstack.org/software/openstack-storage/
• Block Storage: http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-block-storage.html
• Object Storage: http://docs.openstack.org/havana/config-reference/content/ch_configuring-object-storage.html
• Cinder How-to: http://docwiki.cisco.com/wiki/OpenStack:Havana:Cinder-Volume-Test
• Cinder Deep Dive (Grizzly): https://wiki.openstack.org/wiki/File:Cinder-grizzly-deep-dive-pub.pdf
• CEPH Storage: http://ceph.com/docs/master/rados/
– http://www.inktank.com/resource/type/presentations/
– http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern
Reference
Cisco Integration
Product Integration Overview
• Nexus 1000v: http://www.cisco.com/c/en/us/products/switches/nexus-1000v-kvm/index.html
• Nexus 3000 and higher: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps11541/data_sheet_c78-727737.html
• Cisco Nexus + OpenStack Deployment: http://docwiki.cisco.com/wiki/OpenStack:_Havana:_2-Role_Nexus
• Cisco CSR 1000v: http://www.cisco.com/c/en/us/td/docs/routers/csr1000/software/configuration/csr1000Vswcfg/installkvm.html
• Cisco ACI with OpenStack: http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-fabric/solution-brief-c22-729865.pdf
• Cisco APIC driver for OpenStack Neutron ML2: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/guide-c07-732454.html
• UCS Mechanism Driver for ML2 – Kilo: http://docwiki.cisco.com/wiki/OpenStack/UCS_Mechanism_Driver_for_ML2_Plugin_Kilo
Support
• Community model is like any other open source community support model
– http://docs.openstack.org/grizzly/openstack-compute/admin/content/community-support.html
– http://ask.openstack.org
• Cisco AS - Assessments, plans, design, implement, support & optimize
• Cisco + Partnerships
• Channel Partners – Build a practice NOW!
Conclusion
• OpenStack is for real and maturing at a rapid pace
• Many different players are involved and it is evolving rapidly
• Align yourself with market leaders who have strong partnerships
• There is still a lot of focus on getting OpenStack deployed, but we are progressing rapidly towards true operational issues: – Scale – Application deployment – Upgrades
• Start now!
• Get involved in the community – open source enjoys the major advantage of feature velocity
§ Cisco dCloud is a self-service platform that can be accessed via a browser, a high-speed Internet connection, and a cisco.com account
§ Customers will have direct access to a subset of dCloud demos and labs
§ Restricted content must be brokered by an authorized user (Cisco or Partner) and then shared with the customers (cisco.com user).
§ Go to dcloud.cisco.com, select the location closest to you, and log in with your cisco.com credentials
§ Review the getting started videos and try Cisco dCloud today: https://dcloud-cms.cisco.com/help
dCloud
Customers now get full dCloud experience!
Thank you. Visit us in the World of Solutions