Redesigning Xen Memory Sharing (Grant) Mechanism
Kaushik Kumar Ram (Rice University), Jose Renato Santos (HP Labs), Yoshio Turner (HP Labs), Alan L. Cox (Rice University), Scott Rixner (Rice University)
Xen Summit, Aug 2nd 2011
This talk…
• Will make a case for redesigning the grant mechanism to achieve better I/O performance and other benefits
• Will propose an alternate design for the grant mechanism
• Will present an evaluation of a prototype of this new design
Outline
• Motivation
• Proposal
• A grant reuse scheme
• Evaluation
• Conclusion
Traditional I/O Virtualization
[Diagram: Guest Domain (frontend) connected to Driver Domain (backend, with the physical driver) over the Xen Hypervisor; the hardware device sits below]
• Guest domain – driver domain memory sharing (grant mechanism)
• Driver domain – device memory sharing (IOMMU)
• Two levels of memory sharing
Direct Device Assignment
[Diagram: Guest Domain (with the physical driver) over the Xen Hypervisor; the hardware device sits below]
• Guest domain – device memory sharing (IOMMU)
• One level of memory sharing
Grant Mechanism
• Controlled memory sharing between domains
• Source domain can share its memory pages with a specific destination domain
• Destination domain can validate, via the hypervisor, that the shared pages belong to the source domain
Creating Shared Memory using Grant Mechanism
• Source domain: creates a grant entry in its grant table and passes the grant reference to the destination domain
• Destination domain: issues the grant hypercall
• Hypervisor: validates the grant and maps the source page
[Diagram: Source Domain (grant table) passes a grant reference to the Destination Domain, which issues a hypercall to the Xen Hypervisor]
Revoking Shared Memory using Grant Mechanism
• Destination domain: issues the grant hypercall
• Hypervisor: unmaps the page
• Source domain: deletes the grant entry from its grant table
[Diagram: Destination Domain issues a hypercall to the Xen Hypervisor; the Source Domain removes the entry from the grant table]
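The create/map/unmap/delete cycle of the existing mechanism can be sketched as a toy model (illustrative Python, not real Xen code or hypercall signatures; the class and method names, e.g. `Hypervisor.grant_map`, are invented for this sketch):

```python
# Toy model of the existing grant mechanism lifecycle (illustration only).

class Hypervisor:
    def __init__(self):
        self.grant_tables = {}  # source domain -> {grant ref: (page, dest domain)}
        self.mappings = {}      # (dest domain, grant ref) -> mapped page

    # Source domain: create a grant entry in its grant table.
    def create_grant(self, src, ref, page, dest):
        self.grant_tables.setdefault(src, {})[ref] = (page, dest)

    # Destination domain: grant map hypercall. The hypervisor validates
    # that the grant exists and names this destination before mapping.
    def grant_map(self, dest, src, ref):
        entry = self.grant_tables.get(src, {}).get(ref)
        if entry is None or entry[1] != dest:
            raise PermissionError("invalid grant")
        page, _ = entry
        self.mappings[(dest, ref)] = page
        return page

    # Destination domain: grant unmap hypercall.
    def grant_unmap(self, dest, ref):
        del self.mappings[(dest, ref)]

    # Source domain: delete the grant entry (only after the unmap).
    def delete_grant(self, src, ref):
        del self.grant_tables[src][ref]

xen = Hypervisor()
xen.create_grant(src="guest", ref=7, page="io-buffer", dest="driver")
assert xen.grant_map("driver", "guest", 7) == "io-buffer"
xen.grant_unmap("driver", 7)
xen.delete_grant("guest", 7)
```

Note that the validation step in `grant_map` is what makes the sharing "controlled": a domain not named in the grant entry gets rejected by the hypervisor.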
IOMMU
• To safely share memory with I/O devices
• Maintain memory isolation between domains (direct device assignment)
• Protect against device driver bugs
• Protect against attacks exploiting device DMA
[Diagram: the I/O device issues an I/O address, which the IOMMU table translates to a machine address in memory]
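The translation step can be pictured with a minimal toy table (illustrative only; `IOMMU.translate` is an invented name, not a real API). The point it shows: a device access to an unmapped I/O address faults instead of reaching arbitrary machine memory.

```python
# Toy model of an IOMMU: device DMA addresses are translated through a
# table; unmapped I/O addresses fault. (Illustration only.)

class IOMMU:
    def __init__(self):
        self.table = {}  # I/O address -> machine address

    def map(self, io_addr, machine_addr):
        self.table[io_addr] = machine_addr

    def unmap(self, io_addr):
        del self.table[io_addr]

    # Called on every device DMA access in this toy model.
    def translate(self, io_addr):
        if io_addr not in self.table:
            raise PermissionError("IOMMU fault: address not mapped")
        return self.table[io_addr]

iommu = IOMMU()
iommu.map(0x17000, 0xABC000)                 # device may DMA here
assert iommu.translate(0x17000) == 0xABC000
iommu.unmap(0x17000)                         # after unmap, DMA to 0x17000 faults
```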
Sharing Memory via IOMMUs
• Para-virtualized I/O: fine-grained sharing – the IOMMU mapping is set up during the grant map hypercall and revoked during the grant unmap hypercall
• Direct device assignment: only coarse-grained sharing
High Memory Sharing Overhead
• An I/O page is shared only for the duration of a single I/O
• The high cost of grant hypercalls and mapping/unmapping is incurred in the driver domain on every I/O operation
Reuse Scheme to Reduce Overhead
• Take advantage of temporal and/or spatial locality in the use of I/O pages – reuse grants when I/O pages are reused
• Reduce grant issue and revoke operations
• Reduce grant hypercall and mapping/unmapping overheads in the driver domain
Reuse Under Existing Grant Mechanism
• A grant reuse scheme requires:
  • Not revoking grants after every I/O operation
  • Persistent mapping of guest I/O pages in the driver domain
• Grants can be revoked when pages are re-purposed for non-I/O operations
• Today, there is no way for a guest domain to revoke access while its page is still mapped in the driver domain
Goals
• Enable reuse to reduce memory-sharing overheads during I/O
• Support unilateral revocation of grants by source domains
• Support a unified interface to share memory with I/O devices via IOMMUs
Proposal
• Move the grant-related hypercalls to the guest domains
• Guest domains directly interact with the hypervisor to issue and revoke grants
[Diagram: both the Driver Domain and the Guest Domain issue hypercalls to the Xen Hypervisor, which holds the grant table]
Redesigned Grant Mechanism 1. Initialization
• INIT1 hypercall (para-virtualized I/O only): registers a virtual address range – base address(es) and size
• INIT2 hypercall: provides a “device_id”; returns the size of the “grant address space” (0 – size of address range)
[Diagram: INIT1 and INIT2 hypercalls issued to the Xen Hypervisor from the driver and guest domains]
Grant (I/O) Address Space
[Figure: the grant address space (0x0 up to the size of the registered range, 0x10000 in the example) corresponds to a registered range in the driver domain virtual address space (page table, 0x30000–0x40000) and in the I/O virtual address space (IOMMU table, 0x10000–0x20000)]
Redesigned Grant Mechanism 2. Creating Shared Memory
• Guest domain: picks a “grant reference” (an offset within the grant address space) and issues the grant MAP hypercall
• Hypervisor: validates the grant, maps the guest page, and sets up the IOMMU mapping
• Driver domain: translates the grant reference into a virtual address and an I/O address
[Diagram: the Guest Domain issues the MAP hypercall to the Xen Hypervisor and passes the grant reference to the Driver Domain; the hypervisor sets up the IOMMU mapping in hardware]
Grant Mapping
[Figure: the guest picks grant reference 0x7000, an offset into the grant address space (0x0–0x10000)]
Grant Mapping
[Figure: after the MAP hypercall, grant reference 0x7000 is mapped at driver domain virtual address 0x37000 (page table) and at I/O virtual address 0x17000 (IOMMU table) – the registered base addresses, 0x30000 and 0x10000, plus the grant reference]
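The figure's translation is plain offset arithmetic. A minimal sketch using the base addresses read off the figures (0x30000 for the registered driver-domain virtual range, 0x10000 for the I/O range; the function name is invented):

```python
# Grant reference -> driver domain virtual address and I/O address,
# using the example base addresses from the figure.
VA_BASE = 0x30000    # base of registered driver-domain virtual address range
IOVA_BASE = 0x10000  # base of registered I/O virtual address range (IOMMU)
RANGE_SIZE = 0x10000 # size of the registered range = size of grant space

def translate(grant_ref):
    # A grant reference is valid only inside the grant address space.
    assert 0 <= grant_ref < RANGE_SIZE, "reference outside grant address space"
    return VA_BASE + grant_ref, IOVA_BASE + grant_ref

va, iova = translate(0x7000)
assert va == 0x37000 and iova == 0x17000   # matches the figure
```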
Redesigned Grant Mechanism 3. Revoking Shared Memory
• Guest domain: issues the grant UNMAP hypercall, providing the grant reference
• Hypervisor: unmaps the page and removes the IOMMU mapping
[Diagram: the Guest Domain issues the UNMAP hypercall to the Xen Hypervisor, which removes the IOMMU mapping in hardware]
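Putting the MAP and UNMAP paths together, the redesigned flow can be modeled as a toy (illustrative Python, not Xen's actual hypercall interface; all names are invented). The key point the model shows: one guest-issued call installs or removes both the page-table entry and the IOMMU entry, which is what makes unilateral revocation safe.

```python
# Toy model of the redesigned mechanism: the *guest* issues MAP/UNMAP
# hypercalls; the hypervisor updates the driver-domain page table and
# the IOMMU table together. (Illustration only.)

class Hypervisor:
    def __init__(self, va_base, iova_base, size):
        self.va_base, self.iova_base, self.size = va_base, iova_base, size
        self.page_table = {}   # driver-domain virtual address -> guest page
        self.iommu = {}        # I/O address -> guest page

    # Guest domain: grant MAP hypercall with a chosen grant reference.
    def grant_map(self, ref, page):
        assert 0 <= ref < self.size
        self.page_table[self.va_base + ref] = page
        self.iommu[self.iova_base + ref] = page

    # Guest domain: grant UNMAP hypercall -- no driver-domain involvement.
    # Removing the IOMMU entry makes revocation safe even mid-I/O.
    def grant_unmap(self, ref):
        del self.page_table[self.va_base + ref]
        del self.iommu[self.iova_base + ref]

xen = Hypervisor(va_base=0x30000, iova_base=0x10000, size=0x10000)
xen.grant_map(0x7000, "guest-io-page")
assert xen.page_table[0x37000] == "guest-io-page"
assert xen.iommu[0x17000] == "guest-io-page"
xen.grant_unmap(0x7000)        # unilateral revocation by the guest
assert 0x17000 not in xen.iommu
```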
Unilateral Revocation
• Guest domains can revoke grants at any time by issuing the grant UNMAP hypercall
• No driver domain participation is required
• It is safe to revoke grants even when the I/O pages are in use, since the corresponding IOMMU mappings are also removed
Unified Interface
• The grant hypercall interface can be invoked from the guest DMA library
[Diagram: in the Guest Domain, netfront and an SR-IOV VF driver both sit on top of the DMA library, which invokes the Xen Hypervisor; the hardware includes the IOMMU]
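One way to picture the unified interface is a DMA-library shim whose map/unmap entry points issue the grant hypercalls, so a PV frontend and an SR-IOV VF driver share one path (a hypothetical sketch; `dma_map`/`dma_unmap` here are invented names, not the talk's actual library API):

```python
# Hypothetical DMA library: both netfront and a VF driver call dma_map /
# dma_unmap, which internally issue the (modeled) grant hypercalls.

class DMALibrary:
    def __init__(self, hypercall_map, hypercall_unmap):
        self._map, self._unmap = hypercall_map, hypercall_unmap
        self._next_ref = 0

    def dma_map(self, buffer):
        ref = self._next_ref          # pick a free grant reference
        self._next_ref += 1
        self._map(ref, buffer)        # grant MAP hypercall
        return ref                    # drivers use this as the DMA handle

    def dma_unmap(self, ref):
        self._unmap(ref)              # grant UNMAP hypercall

# Both driver types share the one interface:
issued = {}
lib = DMALibrary(lambda r, b: issued.__setitem__(r, b),
                 lambda r: issued.pop(r))
h = lib.dma_map("tx-packet")   # e.g. from netfront or an SR-IOV VF driver
assert issued[h] == "tx-packet"
lib.dma_unmap(h)
assert h not in issued
```

The design point: because the IOMMU mapping is installed by the same hypercall, the driver never needs device-specific grant handling.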
Grant Reuse
• Take advantage of temporal and/or spatial locality in the use of I/O pages – reuse grants when I/O pages are reused
• Reuse grants across multiple I/O operations:
  • Guest domain issues a grant
  • Driver domain uses the I/O page for multiple I/O operations
  • Guest domain revokes the grant
• Guest domains can implement any scheme to reuse grants
  • Relax safety constraints – security vs. performance trade-off
  • Shared mappings, delayed invalidations, optimistic tear-down, etc.
A Grant Reuse Scheme
• Security compromise – prevents corruption of non-I/O pages
  • Policy: never share a non-I/O read-write page
• Receive: read-write sharing
  • Allocate I/O buffers from a dedicated pool (e.g. a slab cache in Linux)
  • Revoke grants when pages are reaped from the pool
  • The I/O buffer pool also promotes temporal locality
• Transmit: read-only sharing
  • Persistent sharing – grants are revoked only when no more grant references are available (or pages are kept mapped always)
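The receive-side pool policy above can be sketched as a toy model (illustrative; `IOBufferPool` and its methods are invented names): a grant is issued once when a buffer first enters the pool and revoked only when it is reaped, so repeated I/Os on pooled buffers cost no hypercalls.

```python
# Toy model of the receive-side reuse scheme: I/O buffers come from a
# dedicated pool; grants live as long as the buffer stays in the pool.

class IOBufferPool:
    def __init__(self, grant_map, grant_unmap):
        self._map, self._unmap = grant_map, grant_unmap
        self._free = []                # granted, currently unused buffers

    def alloc(self):
        if self._free:
            return self._free.pop()    # reuse: grant already in place
        buf = object()                 # new dedicated I/O page
        self._map(buf)                 # issue grant once, on first use
        return buf

    def free(self, buf):
        self._free.append(buf)         # keep grant; likely reused soon

    def reap(self):
        while self._free:              # page leaves the I/O pool:
            self._unmap(self._free.pop())  # now revoke its grant

calls = {"map": 0, "unmap": 0}
pool = IOBufferPool(lambda b: calls.__setitem__("map", calls["map"] + 1),
                    lambda b: calls.__setitem__("unmap", calls["unmap"] + 1))
for _ in range(100):                   # 100 I/O operations...
    b = pool.alloc()
    pool.free(b)
assert calls["map"] == 1               # ...but only one grant issue
pool.reap()
assert calls["unmap"] == 1
```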
Evaluation – Setup and Methodology
• Server configuration: HP ProLiant BL460c G7 blade server
  • Intel Xeon X5670 – 6 CPU cores, 32 GB RAM, 2 embedded 10 GbE ports
• Domain configuration:
  • Domain0: linux 2.6.32.40 pvops kernel, 1 GB memory
  • Driver domain: linux-2.6.18.8-xen0 (modified), 512 MB memory
  • Guest domains: linux-2.6.18.8-xenU (modified), 512 MB memory
  • Driver and guest domains configured with one VCPU each (pinned)
• Netperf TCP streaming tests
Evaluation – Transmit Results
[Bar chart: transmit throughput in Mbps (scale 0–10000) for three configurations: baseline (without IOMMU), baseline (with IOMMU), and the new design with reuse (with IOMMU)]
• mapcount() logic significantly affects performance (baseline with IOMMU)
Evaluation – Receive Results
[Bar chart: receive throughput in Mbps (scale 0–10000) for the same three configurations]
• No IOMMU overhead during RX
• Driver domain bottleneck (baseline)
Evaluation – Inter-guest Results
[Bar chart: inter-guest throughput in Mbps (scale 0–30000) for the same three configurations]
• Driver domain bottleneck (baseline)
Discussion
• Supporting multiple mappings in the driver domain (e.g. the block tap interface)
  • The driver domain can register address ranges from multiple address spaces
  • Or use hardware-assisted memory virtualization
• Unilateral revocation cannot be supported without IOMMUs
  • Grants to in-use pages cannot be revoked
Conclusions
• Made a case for redesigning the grant mechanism
  • Enable grant reuse
  • Support unilateral revocation
  • Support a unified interface to program IOMMUs
• Proposed an alternate design where the source domain interacts directly with the hypervisor
• Implemented and evaluated a reuse scheme