1
Towards Cinematic Internet Video-on-Demand
Bin Cheng, Lex Stein, Hai Jin and Zheng Zhang
Huazhong University of Science & Technology (HUST) and Microsoft Research Asia (MSRA)
EuroSys 2008, Glasgow, Scotland, April 2-4, 2008
2
Motivation
VoD is popular and desirable, but costly
Peer-to-Peer has helped some applications:
― File downloading: Napster, BitTorrent
― Live streaming: CoolStreaming, PPLive, PPStream
Can it help VoD? Two challenges:
― High bandwidth with real-time constraints
― Users can join/leave, seek, or pause at any time
3
Related Work
Topology management
― Tree-, mesh-, or DHT-based
― Simulation-based
― Show the sharing potential for a single video
Deployed systems
― Joost, PPLive, PPStream
― Their details are closed
No one has implemented and deployed a system whose primary purpose is to openly and systematically evaluate P2P VoD
GridCast:
― A P2P VoD system deployed on CERNET
4
Questions for GridCast
1. What benefits can be obtained from P2P?
2. What are the limitations of P2P?
3. Where is the room for further optimizations?
5
Talk Outline
Basic design
― Overview of the GridCast architecture
― Key issues: peer management, scheduling policy
Deployment
― Single-video caching
― Multi-video caching
Evaluation and analysis
― From single-video caching to multi-video caching
Conclusions
― What have we learned?
6
What does GridCast look like?
http://www.gridcast.cn
7
Basic Design
Hybrid architecture (client-server + P2P)
― Tracker: indexes all joined peers
― Source server: stores a complete copy of every video
― Peer: fetches chunks from source servers or other peers
― Web portal: provides the video catalog
[Architecture diagram: web portal, tracker, source servers, and peers]
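The tracker's role above (indexing all joined peers per video so that new arrivals can find sharing partners) can be sketched as a simple in-memory index. This is a minimal illustration, not GridCast's actual implementation; the class and method names are invented for the example.

```python
from collections import defaultdict


class Tracker:
    """Minimal per-video peer index: who is currently watching what.

    A sketch of the tracker role described above; names are
    illustrative, not GridCast's real API.
    """

    def __init__(self):
        # video_id -> set of peer addresses currently holding that video
        self._index = defaultdict(set)

    def join(self, video_id, peer):
        """Register a peer that started watching (and caching) a video."""
        self._index[video_id].add(peer)

    def leave(self, video_id, peer):
        """Remove a peer that went offline or switched videos."""
        self._index[video_id].discard(peer)

    def lookup(self, video_id, limit=20):
        """Return up to `limit` candidate peers for neighbor selection."""
        return list(self._index[video_id])[:limit]


# usage: two peers join the same video, one leaves
t = Tracker()
t.join("v1", "10.0.0.1:9000")
t.join("v1", "10.0.0.2:9000")
t.leave("v1", "10.0.0.1:9000")
print(t.lookup("v1"))
```

In a real deployment the tracker would also have to expire entries for peers that crash without a leave message; a heartbeat or lease mechanism is the usual fix.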
8
Basic Design
Three major issues
―How to organize online peers for better sharing?
―How to schedule requests for smooth playback?
―How to use caching to maximize peer sharing and minimize source server load?
9
Deployment
GridCast has been deployed on CERNET since May 2006
― Network (CERNET)
• 1,500 universities, 20 million hosts
• Good bandwidth, 2 to 100 Mbps to the desktop (the core is complicated)
― Hardware
• 1 Windows Server 2003 machine, shared by the tracker and the web portal
• 2 source servers (sharing a 100 Mbps uplink)
― Content
• 2,000 videos
• 48 minutes long on average
• 400 to 800 Kbps, 600 Kbps on average
― Users
• 100,000 users (23% behind NATs)
• 400 concurrent users at peak time (limited by our current infrastructure)
― Logs (two logs: one for SVC, the other for MVC)
• 40 GB of logs (from Sep. 2006 to Oct. 2007)
10
Evaluation Model
Metrics
― Concurrency: number of users watching the same video
• Higher concurrency means better opportunities for sharing
― Chunk cost: # chunks fetched from source / # chunks played
• Lower chunk cost means higher scalability
― Continuity: total delay time (s) / # chunks played
• A lower value represents a better user experience
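The three metrics above are simple ratios; a short sketch makes them concrete, including the 1/concurrency "ideal model" used in the SVC evaluation on the next slides. Function names are my own, not from the paper.

```python
def chunk_cost(chunks_from_source, chunks_played):
    """Chunk cost = chunks fetched from the source server / chunks played.

    1.0 matches pure client-server (every chunk comes from the server);
    lower values mean more chunks were supplied by peers.
    """
    return chunks_from_source / chunks_played


def continuity(total_delay_seconds, chunks_played):
    """Delay per played chunk, in seconds; lower means smoother playback."""
    return total_delay_seconds / chunks_played


def ideal_chunk_cost(concurrency):
    """Perfect sharing: each chunk is fetched from the source exactly once
    and then shared among all `concurrency` viewers of the video."""
    return 1.0 / concurrency


# usage: a session that played 100 chunks, 56 of them from the source,
# with 5 seconds of total stall time
print(chunk_cost(56, 100))      # 0.56
print(continuity(5.0, 100))     # 0.05 s per chunk
print(ideal_chunk_cost(2))      # 0.5
```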
11
Evaluation: Single Video Caching (SVC)
SVC: cache only the currently playing video for sharing
― Higher concurrency, better sharing [at a concurrency of 2, chunk cost is 0.56, a 78% increase in supported users]
― GridCast is close to the ideal model [chunk cost of 1/concurrency]
― GridCast's chunk cost is sometimes even below the ideal model [e.g., at a concurrency of 7], because of:
• Pauses (a paused peer acts as a temporary source server)
• Prefetching
[Figure: chunk cost vs. concurrency (0 to 20), comparing the client-server model, the ideal model, and GridCast with single-video caching]
12
Motivation: from SVC to MVC
Overall performance
― With the same server load, the number of supported users increases [fluctuating from 0 to 50%, 28% on average over client-server]
― The 28% increase is far from the 78% seen at a concurrency of 2
― Why?
• 80% of viewing sessions happen at a concurrency of 1
How? Save watched videos for later sharing
[Figure: viewing time (normalized by total viewing time) vs. concurrency (1 to 5), under single-video caching]
13
Opportunity: from SVC to MVC
Do we have resources for further sharing?
― Bandwidth, disk
• 2.65 Mbps download, 2.25 Mbps upload
• 90% of users have over 90% unused upload and 60% unused download capacity
― Upper bound obtained from simulation
• Without any constraints
• "Cold cache"
• ~75% decrease in chunk cost from SVC to MVC
[Figure: unused bandwidth capacity (%) vs. users (normalized), for download and upload]
[Figure: source server load (Mbps) vs. day of week, comparing single-video caching with multi-video caching without resource constraints]
14
Evaluation: Multiple Video Caching (MVC)
Cache all recently watched videos in a fixed-size cache with LRU eviction
― Cache size: at most 1 GB
― Deployed in June 2007
Surprise: it improves both scalability and continuity
[Figure: chunk cost and continuity at scales of [50,100] and [150,200] users, comparing single-video caching with multi-video caching]
15
Evaluation: Multiple Video Caching (MVC)
― Higher concurrency, lower chunk cost
― Larger scale, better improvement (at most 26%, 15% on average)
― Still far from the 75% upper bound seen in simulation
[Figure: source server load (Mbps) vs. scale (number of users, 0 to 240), comparing single-video caching with multi-video caching]
16
Evaluation of MVC
Classify misses by their causes
Chunk X does not hit in the peer cache. Why?
― New content: never fetched by any peer
― Peer departed: fetched by some peers, but all of them are offline
― Peer evicted: fetched by an online peer, but evicted
― Cannot connect: cached by some online peer that is not in the neighborhood
― Insufficient bandwidth: cached by some neighbor, but it cannot be retrieved
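The taxonomy above is an ordered sequence of checks, each narrowing the set of peers that could have served the chunk. A sketch of that classification logic, with an invented peer-record format (the real system's bookkeeping is not described at this level):

```python
def classify_miss(chunk, peers):
    """Classify a peer-cache miss by cause, checked in the order above.

    `peers` is a list of dicts with illustrative fields:
      ever_fetched - set of chunks the peer ever downloaded
      online       - whether the peer is currently in the system
      cache        - set of chunks still in the peer's cache
      is_neighbor  - whether the requester is connected to this peer
    """
    fetchers = [p for p in peers if chunk in p["ever_fetched"]]
    if not fetchers:
        return "new content"            # never fetched by any peer
    online = [p for p in fetchers if p["online"]]
    if not online:
        return "peer departure"         # all fetchers have left
    holders = [p for p in online if chunk in p["cache"]]
    if not holders:
        return "peer eviction"          # fetched, then evicted from cache
    neighbors = [p for p in holders if p["is_neighbor"]]
    if not neighbors:
        return "cannot connect"         # cached, but outside the neighborhood
    return "insufficient bandwidth"     # a neighbor has it but cannot serve it


# usage: one peer fetched chunk "c1" but has since gone offline
peers = [{"ever_fetched": {"c1"}, "online": False,
          "cache": set(), "is_neighbor": False}]
print(classify_miss("c1", peers))  # peer departure
print(classify_miss("c2", peers))  # new content
```

The ordering matters: each cause is only charged with the misses not already explained by an earlier, more fundamental cause, so the five categories partition the misses cleanly.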
17
Evaluation: MVC
Understanding misses
― Fewer eviction misses: significantly reduced (down by about 30 percentage points)
― More insufficient-bandwidth misses (load imbalance, over-utilized peers)
― More connect misses (NATs, connection constraints)
― Peer departure becomes a big issue
[Figure: misses as a percentage of all played chunks, by cause (SVC / MVC): new content 8.2 / 5.3, peer departure 20.1 / 27.6, peer eviction 45.6 / 15.6, connection issue 2.4 / 11.3, insufficient bandwidth 1.9 / 4.0]
18
Conclusions
The first detailed design description of a live P2P VoD system
Improvements
― SVC (22%), MVC (15%), in terms of chunk-cost reduction
― In total, a 34% reduction in server load over client-server
― 51% more users with the same server load
― From SVC to MVC, both scalability and user experience improve
― Larger scale, better improvements [scalable]
Limitations
― Load imbalance: a larger cache creates hot spots and over-utilized peers
― Departure misses become a big issue (about 45% of misses in MVC)
19
Any questions?
Bin Cheng, Lex Stein, Hai Jin and Zheng Zhang
Huazhong University of Science & Technology (HUST) and Microsoft Research Asia (MSRA)
EuroSys 2008, Glasgow, Scotland