CS 414 - Spring 2011
CS 414 – Multimedia Systems Design Lecture 37 – P2P Streaming and P2P Applications/PPLive
Klara Nahrstedt
Spring 2011
Administrative
MP3 preview is on April 27
Sign-up sheet for the April 27 demonstration will be provided in class on April 27
Preview demonstrations on April 27, 7-9pm in 216 SC
Administration
Final MP3 delivery on April 29
Demonstration time slots on April 29 will be allocated on Thursday, April 28 (the TA will email each group)
Two demonstration intervals: students not in the Google competition demo 2-4pm in 216 SC; students in the Google competition demo 5-7pm in 216 SC
Pizza and Google Prizes announcement at 7:15pm, April 29 (room 3403 SC)
Homework 2 will be out on Wednesday, April 27; deadline May 4, 11:59pm, submission via Compass
Outline
P2P Streaming
Single tree (previous lecture)
Multiple trees
Mesh-based streaming
Case study: PPLive
Why P2P?
Every node is both a server and a client
Easier to deploy applications at endpoints
No need to build and maintain expensive infrastructure
Potential for both performance improvement and additional robustness
Additional clients create additional servers for scalability
Multiple Trees
Challenge: a peer must be an internal node in only one tree and a leaf in all the others
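As a toy illustration of this constraint (the round-robin rule and names here are invented for illustration, not an actual tree-construction protocol):

```python
def build_tree_roles(peers, k):
    """Return roles[t][p] = 'internal' or 'leaf' for tree t and peer p,
    so that every peer forwards data in exactly one of the k trees."""
    roles = {t: {} for t in range(k)}
    for i, p in enumerate(peers):
        home = i % k  # the one tree where this peer is an interior node
        for t in range(k):
            roles[t][p] = "internal" if t == home else "leaf"
    return roles

roles = build_tree_roles(["A", "B", "C", "D", "E", "F"], k=3)
# Every peer is internal in exactly one tree and a leaf in the other two:
for p in ["A", "B", "C", "D", "E", "F"]:
    assert sum(roles[t][p] == "internal" for t in range(3)) == 1
```

With such an assignment, each peer's upload capacity is consumed by only one tree, so losing one peer degrades only the description carried by its "home" tree.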
[Figure: the source streams to nodes 1, 2, and 3 over multiple trees, with each node in a different position in each tree. Are nodes 1, 2, 3 receiving the same data multiple times?]
Multiple Description Coding (MDC)
Each description can be independently decoded (only one is needed to reproduce audio/video)
More descriptions received result in higher quality
[Figure: an MDC coder splits each incoming frame (..., 30, 20, 10) into packets for description 0 through description n (..., 3n, 2n, 1n)]
Streaming in multiple trees using MDC
Assume odd-bit/even-bit encoding: description 0 is derived from each frame's odd bits, description 1 from its even bits
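The odd-bit/even-bit split can be sketched directly; this is a simplified illustration of the idea, not a real MDC codec:

```python
def split_odd_even_bits(frame_bytes):
    """Split each byte into its odd-position and even-position bits,
    producing two independently usable descriptions."""
    d0, d1 = [], []
    for b in frame_bytes:
        odd = even = 0
        for i in range(8):
            bit = (b >> i) & 1
            if i % 2:            # odd bit positions -> description 0
                odd |= bit << i
            else:                # even bit positions -> description 1
                even |= bit << i
        d0.append(odd)
        d1.append(even)
    return bytes(d0), bytes(d1)

def merge(d0, d1):
    """Full-quality reconstruction when both descriptions arrive."""
    return bytes(a | b for a, b in zip(d0, d1))

frame = bytes([0b10110101, 0b01001110])
d0, d1 = split_odd_even_bits(frame)
assert merge(d0, d1) == frame    # both descriptions -> exact frame
# Either description alone still yields a coarser but decodable frame.
```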
[Figure: two trees delivering the two descriptions to nodes 1-4 using RTP/UDP – one tree carries description 0 (packets 10, 20, 30), the other description 1 (packets 11, 21, 31)]
Multiple-Tree Issues
Complex procedure to locate a potential-parent peer with spare out-degree
Degraded quality until a parent is found in every tree
Static mapping in trees, instead of choosing parents based on their (and my) bandwidth
An internal node can be a bottleneck
Mesh-based streaming
Basic idea:
Report to peers the packets that you have
Ask peers for the packets that you are missing
Adjust connections depending on in/out bandwidth
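These steps can be sketched as a toy exchange (the class and method names are invented for illustration, not a real protocol):

```python
class Peer:
    """A mesh peer that periodically reports its packets and pulls missing ones."""
    def __init__(self, name, packets):
        self.name = name
        self.packets = set(packets)

    def report(self):
        return set(self.packets)          # "here is what I have"

    def pull_from(self, neighbor):
        """Ask a neighbor for the packets we are missing."""
        missing = neighbor.report() - self.packets
        self.packets |= missing           # "send me what I'm missing"
        return missing

a = Peer("A", {10, 20})
b = Peer("B", {10, 30})
a.pull_from(b)
assert a.packets == {10, 20, 30}
```

In a real mesh the report/pull cycle runs periodically, and a peer drops or adds neighbors when its inbound or outbound bandwidth is under- or over-used.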
Content delivery
[Figure: a mesh using MDC – nodes are randomly connected to their peers, instead of statically, and each node receives some subset of descriptions 0, 1, 2]
[Figure: nodes 1-17 arranged in levels determined by hops to the source, carrying descriptions 0, 1, and 2; delivery proceeds in (1) a diffusion phase and (2) a swarming phase]
Diffusion Phase
A new segment (a set of packets) of length L becomes available at the source every L seconds
Level 1 nodes pull data units from the source, then level 2 pulls from level 1, etc.
Recall that reporting and pulling are performed periodically
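A minimal model of this level-by-level pull, assuming one pull period per level (all names here are illustrative):

```python
def diffuse(levels, segment):
    """levels[0] holds the source; each later level pulls from the level above,
    so the segment reaches level d after d pull periods."""
    have = {node: set() for level in levels for node in level}
    for node in levels[0]:
        have[node].add(segment)        # the source produces the segment
    for level in levels[1:]:           # one pull period per level
        for node in level:
            have[node].add(segment)    # pulled from a parent one level up
    return have

have = diffuse([["src"], ["n1", "n2", "n3"], ["n10", "n11"]], segment=0)
assert all(0 in units for units in have.values())
```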
[Figure: diffusion example (drawings follow the previous example) – segment 0 consists of packets 10, 20, 30, 40 and segment 1 of packets 12, 22, 32, 42; during period 0 node 3 announces "Have segment 0" and is asked "Send me segment 0"; during period 1 node 9 pulls the segment the same way]
Swarming Phase
At the end of the diffusion phase, all nodes have at least one data unit of the segment
Pull missing data units from (swarm-parent) peers located at the same or a lower level
Can node 9 pull new data units from node 16? Node 9 cannot pull the data in a single swarm interval
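A sketch of one swarm interval, under the assumption that a node simply pulls whatever missing units its swarm-parents hold (function and variable names are invented):

```python
def swarm_step(node_units, parents_units, all_units):
    """One swarm interval: pull any missing data units the swarm-parents can supply."""
    missing = all_units - node_units
    available = set().union(*parents_units)
    return node_units | (missing & available)

all_units = {10, 11, 20, 21}                  # two descriptions, two frames
node9 = {11, 21}                              # node 9 got description 1 via diffusion
parents = [{10, 20}, {11, 21}]                # swarm-parents at the same or a lower level
assert swarm_step(node9, parents, all_units) == all_units
```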
(drawings follow the previous example)
[Figure: swarming example – nodes 2, 7, and 9 exchange data units 10, 20, 11, 21 with peers at the same or a lower level]
[Figure: an overlay tree rooted at the source (Purdue) connecting end hosts Stan1, Stan2, Berk1, Berk2, and Gatech on top of a "dumb network" of routers at Stanford, Berkeley, and Gatech]
Single Tree/Multi-tree/Mesh use Overlay P2P Multicast
Source: Sanjay Rao's lecture from Purdue
Overlay Performance
Even a well-designed overlay cannot be as efficient as IP Multicast, but the performance penalty can be kept low
Trade off some performance for other benefits
[Figure: overlay penalties on the same topology – increased delay from routing through end hosts, and duplicate packets on shared links wasting bandwidth across the Stanford, Berkeley, and Gatech sites]
Source: Sanjay Rao's lecture from Purdue
Traffic Distribution (2006) and New Trends (P4P)
P4P – ISPs and P2P traffic work together
Source: http://www.openp4p.net/
P2P Applications
Many P2P applications since the 1990s
File sharing: Napster, Gnutella, KaZaA, BitTorrent
Internet telephony: Skype
Internet television: PPLive, CoolStreaming
PPLive Current Viewers during Olympics 2008
Case Study: PPLive
Very popular P2P IPTV application
From Huazhong University of Science and Technology, China
Free for viewers
Over 100,000 simultaneous viewers and 400,000 viewers daily
Over 200 channels
Windows Media Video and Real Video formats
PPLive Overview
PPLive Design Characteristics
Gossip-based protocols: peer management, channel discovery; TCP used for signaling
Data-driven P2P streaming: TCP used for video streaming
A peer client contacts multiple active peers to download the media content of the channel
Cached content can be uploaded from a client peer to other peers watching the same channel
Received video chunks are reassembled in order and buffered in the queue of the PPLive TV Engine (local streaming)
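The in-order reassembly step can be sketched as a small buffer; the class and method names here are assumptions, since PPLive's TV Engine is not open source:

```python
import heapq

class ReassemblyBuffer:
    """Buffers chunks that arrive out of order; releases them in sequence."""
    def __init__(self):
        self.next_seq = 0
        self.heap = []          # min-heap of (sequence number, chunk)

    def add(self, seq, chunk):
        heapq.heappush(self.heap, (seq, chunk))

    def pop_ready(self):
        """Return the contiguous run of chunks starting at next_seq."""
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            _, chunk = heapq.heappop(self.heap)
            out.append(chunk)
            self.next_seq += 1
        return out

buf = ReassemblyBuffer()
buf.add(1, b"b"); buf.add(0, b"a"); buf.add(3, b"d")
assert buf.pop_ready() == [b"a", b"b"]        # chunk 2 still missing, so 3 waits
buf.add(2, b"c")
assert buf.pop_ready() == [b"c", b"d"]
```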
PPLive Architecture
1. Contact channel server for available channels
2. Retrieve list of peers watching selected channel
3. Find active peers on the channel to share video chunks
Source: “Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System” by Hei et al.
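The three join steps above can be sketched as a hypothetical client (the server objects, method names, and message shapes are all invented for illustration):

```python
class StubChannelServer:
    def list_channels(self):
        return {"CCTV3", "CCTV5"}          # made-up channel list

class StubPeer:
    def __init__(self, active):
        self.active = active
    def is_active(self):
        return self.active

class StubTracker:
    def peers_for(self, channel):
        return [StubPeer(True), StubPeer(False)]

def join_channel(channel_server, tracker, wanted):
    # 1. Contact the channel server for available channels
    if wanted not in channel_server.list_channels():
        return None
    # 2. Retrieve the list of peers watching the selected channel
    peers = tracker.peers_for(wanted)
    # 3. Keep the active peers to share video chunks with
    return [p for p in peers if p.is_active()]

active = join_channel(StubChannelServer(), StubTracker(), "CCTV3")
assert len(active) == 1
```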
P2P Streaming Process
TV Engine – responsible for:
downloading video chunks from the PPLive network
streaming the downloaded video to the local media player
Download and Upload Video Rate over Time at CCTV3 Campus
Evolution of active video peer connections on CCTV3 Network
PPLive Channel Size Analysis
Conclusion: Couple of Lessons Learned
The structure of the PPLive overlay is close to random
PPLive peers slightly prefer closer neighbors, and peers can attend simultaneous overlays, which improves streaming quality
Geometrically distributed session lengths of nodes can be used to accurately model node arrival and departure
Major differences between PPLive overlays and P2P file-sharing overlays!
Background
Large-scale video broadcast over the Internet (Internet TV such as PPLive, YouTube)
Real-time video streaming
Need to support large numbers of viewers:
AOL Live 8 broadcast peaked at 175,000 viewers (July 2005)
CBS NCAA broadcast peaked at 268,000 viewers (March 2006)
NBC Olympic Games in 2008 served 75.5 million streams in total
BBC served almost 40 million streams of the 2008 Olympic Games (http://newteevee.com/2008/08/28/final-tally-olympics-web-and-p2p-numbers/)
Very high data rate: TV-quality video encoded with MPEG-4 would require 1.5 Tbps aggregate capacity for 100 million viewers
NFL Super Bowl 2007 had 93 million viewers in the U.S. (Nielsen Media Research)
Reading
“Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast” by Liu et al.
“Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System” by Hei et al.
“Mapping the PPLive Network: Studying the Impacts of Media Streaming on P2P Overlays” by Vu et al.
Some lecture material borrowed from the following sources:
Sanjay Rao's lecture on P2P multicast in his ECE 695B course at Purdue
“Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System” by Hei et al.
“Mapping the PPLive Network: Studying the Impacts of Media Streaming on P2P Overlays” by Vu et al.