Optimizing Virtual Machines Using Hybrid Virtualization
Qian Lin, Shanghai Jiao Tong University
SAC 2011 -- OS track
Agenda
Background: system virtualization
Motivation
Design and optimization
Evaluation
Summary
Background: system virtualization
Goal: consolidate and maximize H/W platform resources
Way: multiple VMs run concurrently on a single machine
Motivation
Current virtualization types
Software-only full virtualization
  Dynamic binary translation technique
  For cross-platform development and debugging only
H/W-assisted full virtualization
  Leverages the H/W virtualization extensions of the CPU architecture
  Superior in CPU and memory virtualization, but not in I/O
Paravirtualization
  Ring compression → a compromised strategy
  Incurs significant overhead in system call execution
  Efficient in I/O event handling
Motivation (cont.)
Reduce overhead incurred by virtualization
Optimize to reduce the execution redundancy
Hybrid approach to merge advantages
64-bit HVM is on par with or faster than 64-bit PVM
Paravirtualization is helpful for enhancing HVM
  Simplicity, performance, efficiency, scalability, correctness
Design
Type I: Hybrid PVM
  Port a paravirtualized OS into the HVM container
  Add HAP support
Type II: Hybrid HVM
  Locally optimize HVM using PV strategies
  Import PV components
Common design goal: Run paravirtualized OS in the HVM container.
VM optimization (1)
Shorten the system call path
PVM: Kernel/User space share the same ring
Hybrid VM: Rings are assigned normally
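The cost of the system call path can be observed with a "null syscall" microbenchmark in the spirit of LMBench's lat_syscall. A minimal sketch (illustrative only: it includes Python interpreter overhead, so it is useful for relative comparison between VM types rather than absolute numbers):

```python
import os
import time

def null_syscall_latency(iters=200_000):
    """Estimate the average latency of a cheap system call (getpid),
    the same 'null syscall' that LMBench's lat_syscall measures."""
    start = time.perf_counter()
    for _ in range(iters):
        os.getpid()          # enters the kernel on each call
    elapsed = time.perf_counter() - start
    return elapsed / iters   # seconds per call

if __name__ == "__main__":
    lat = null_syscall_latency()
    print(f"null syscall: {lat * 1e9:.0f} ns")
```

Running the same probe inside a PVM and inside a hybrid VM exposes the extra cost of the PVM's ring-compressed system call path.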
VM optimization (2)
Shadow page table → complex and slow
H/W assisted paging → accelerated!
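Why HAP accelerates: with shadow paging the hypervisor must trap guest page-table updates and rebuild shadow entries, while hardware-assisted paging (Intel EPT / AMD NPT) lets the MMU walk guest and host tables directly, at the cost of a longer worst-case TLB-miss walk. A sketch of the standard worst-case reference count for a two-dimensional page walk:

```python
def nested_walk_refs(guest_levels=4, host_levels=4):
    """Worst-case memory references for one TLB miss under nested paging.

    Each of the guest_levels guest page-table entries holds a guest-physical
    address that needs a host walk (host_levels refs) before the entry itself
    can be read (+1), and the final guest-physical address needs one more
    host walk.
    """
    return guest_levels * (host_levels + 1) + host_levels

print(nested_walk_refs())      # 24 for 4-level guest and host tables
print(nested_walk_refs(4, 0))  # 4: a plain one-dimensional walk
```

The longer walk is amortized by large TLBs and paging-structure caches, and it removes the VM exits that dominate shadow-paging cost.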
VM optimization (3)
Local APIC is not used by Hybrid VM Linux
EOI (End of Interrupt) does not cause VM exit.
Use Xen API (event channel)
MSI/MSI-X handling is paravirtualized in Hybrid VM
MSI Mask/Unmask does not cause VM exit
No changes are made to device drivers
I/O intensive loads expose those virtualization overheads
About 12K interrupts per second (per VCPU) with 10 GbE
CPU utilization is >3% higher per VCPU on ordinary HVM Linux
As the number of VCPUs increases, the overhead increases
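The interrupt-rate and CPU-utilization figures above are consistent with a back-of-the-envelope model: each interrupt that forces a VM exit costs an exit/entry round trip plus emulation. A hypothetical estimate (the per-exit cost below is an assumption; real costs vary by CPU generation):

```python
def exit_overhead(exits_per_sec, cost_us, vcpus=1):
    """Fraction of one CPU consumed by VM exits.

    exits_per_sec: VM exits per second per VCPU (e.g. EOI writes, MSI mask/unmask)
    cost_us: ASSUMED cost of one exit/entry round trip, in microseconds
    """
    return exits_per_sec * cost_us * 1e-6 * vcpus

# ~12K interrupts/s, each EOI forcing an exit at an assumed ~3 us round trip
print(f"{exit_overhead(12_000, 3.0):.1%}")  # ~3.6% of a CPU per VCPU
```

Avoiding the exit on EOI and MSI mask/unmask removes this per-VCPU tax, which is why the saving grows with the number of VCPUs.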
Micro performance (1)
LMBench - Processor
Overhead contributed by fundamental operations
Factor: system call
Micro performance (2)
LMBench - Context switch
Inter-process, kernel/user mode, interrupt
Factor: address translation, TLB flush
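Context-switch cost can be probed the way LMBench's lat_ctx does: bounce a token between two processes so every round trip forces two switches (and, across address spaces, the TLB effects listed above). A rough Unix-only sketch, which also includes pipe I/O cost:

```python
import os
import time

def ctx_switch_latency(rounds=2000):
    """Approximate context-switch latency via a pipe ping-pong between
    a parent and a forked child (Unix only)."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo each byte back
        for _ in range(rounds):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / (2 * rounds)   # two switches per round trip
```

Comparing this figure across shadow paging and HAP makes the address-translation and TLB-flush factor directly visible.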
Micro performance (3)
LMBench - Local communication latency
Response time of I/O request
Factor: address translation
Macro performance: CPU intensive
Kernel compile benchmark
Factor: system call, context switch, address translation
Macro performance: I/O intensive
CPU utilization within Ethernet workload
Factor: interrupt handling
Summary
Minimize virtualization overheads and utilize new/advanced H/W features in VMs as much as possible for cloud computing
Combine the advantages of H/W-assisted virtualization and paravirtualization
Open-source implementation of the prototype
Also released to the Xen community
Thank you!
Trusted Computing Group @ SJTU
http://202.120.40.124/index.php/Project_TC