IEEE International Conference on Multimedia Computing and Systems

Issues in Multimedia Delivery Over Today's Internet

Kevin Jeffay
Department of Computer Science
University of North Carolina at Chapel Hill
[email protected]

June 28, 1998
http://www.cs.unc.edu/~jeffay
Issues in Multimedia Delivery on Today's Internet
Outline

u Domain of discourse
» Definitions, concepts, and objectives
u Performance of "naive" applications today
» What's "broken"?
u Media adaptations for best-effort multimedia delivery
» Can we fix "it"?
u Performance of best-effort applications today
» Fundamental challenges for tomorrow's Internet
Multimedia Delivery on Today's Internet
Domain of discourse

u A prototypical videoconferencing system
» Architecture
» Quality-of-service requirements
u Performance metrics
» End-to-end latency
» Packet loss
u Some typical experimental results
Interactive Multimedia Applications
Performance requirements

u Latency — the duration between acquisition of a signal and its display
u Videoconferencing latency requirements
» telephony literature — 100 ms roundtrip
» multimedia networking literature — 250 ms one-way
» CSCW literature — tolerance of latency as high as 400 ms
Latency in Computer-Based Video Systems
Canonical application structures

[Figure: sender's processing pipeline: digitize (0-33 ms), compress (33-66 ms), transmit over the network transport (66-100 ms)]
Latency in Computer-Based Video Systems
Canonical application structures

[Figure: receiver's processing pipeline: network reception, receive/synchronize, decompression, and display spanning t to t+100 ms]
Latency in Computer-Based Video Systems
Receiver synchronization

u In general, acquisition and playout clocks are not synchronized
u Therefore a buffer must be present at the receiver to adjust for phase-shift in sender's & receiver's media clocks

[Figure: sender and receiver timelines with display initiation points offset by the clock phase-shift]
Latency in Computer-Based Video Systems
Best case end-to-end latency

[Figure: combined sender processing, network, and receiver processing timeline: digitize, compress, transmit/synchronize, decompress, display, spanning 0 to 167 ms]
Latency in Computer-Based Audio Systems
How bad can audio latency be?

u Just as bad as video if lip-synchronization is required
u Otherwise, it depends on how one manages the network interface
» Video frames are typically too large to fit into a single network packet
» Multiple audio samples can be transmitted together
u Example: An audio codec generating 1 byte of data every 125 µs
» Building 500 byte packets requires 62.5 ms/packet
» Building 1,500 byte packets requires 187.5 ms/packet
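The packetization arithmetic above is easy to check. A minimal sketch (the function name and default rate are my own, matching the slide's codec producing 1 byte every 125 µs, i.e., 8 bytes per millisecond):

```python
def packetization_delay_ms(packet_bytes, bytes_per_ms=8.0):
    """Time to fill one network packet with codec output,
    for a codec producing `bytes_per_ms` bytes each millisecond."""
    return packet_bytes / bytes_per_ms

# 500-byte packets take 62.5 ms to fill; 1,500-byte packets take 187.5 ms,
# so larger packets alone can consume most of a 250 ms latency budget.
```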
Performance Requirements
Delay-jitter

u Latency
» 250 ms one-way
u Delay-jitter — Variation in end-to-end latency

[Figure: sender transmits a sample every p ms; under perfect delivery the receiver sees the same spacing, while delay-jitter perturbs the arrival times]
Performance requirements
The impact of delay-jitter

u Delay-jitter leads to "gaps" in the playout of media and increases playout latency

[Figure: sender and receiver timelines with playout gaps at the display initiation points]
Performance requirements
The impact of delay-jitter

u Delay-jitter increases playout latency

[Figure: sender and receiver timelines showing playout latency growing from p to 3p across the playout gaps]
Performance Requirements

u Latency
» 250 ms one-way
u Delay-jitter
u Throughput — the effective delivered frame or sample rate
» For video the issue is motion perception
» For audio the issue is comprehension
u Loss — the complement of throughput
Performance requirements
Loss

u Loss has the same effect as delay-jitter: gaps
» With the potentially beneficial side effect of lower latency

[Figure: delay-jitter induced playout gaps raise latency from p to 3p; a loss-induced playout gap lets it fall back to 2p]
Avoiding Loss in the End System
Real-time management of a processing pipeline

[Figure: overlapped sender and receiver pipelines for frames n, n+1, and n+2 (digitize, compress, transmit/synchronize, decompress, display) across a 0 to 200 ms timeline]
Performance requirements
Loss requirements

u Audio — 1-2% sample loss
» individual sample losses (depending on sample size) are noticeable
» 5-10 lost samples per minute are tolerable
(the distribution of loss is critical)
u Video — 10-15 frames/s required for minimal motion perception
» highly application dependent
» video loss raises issues of "network citizenship"
Performance Requirements

u Latency
» 250 ms one-way
u Delay-jitter
u Throughput — the effective delivered frame or sample rate
» For video the issue is motion perception
» For audio the issue is comprehension
u Loss
u Lip synchronization
» The temporal relationship between an audio and video stream representing a human speaking
Performance Requirements
Lip synchronization

u Perfect lip synchronization requires audio playout at time _____

[Figure: video processing pipeline (digitize, compress, transmit, decompress, display over 0 to 167 ms) alongside the audio acquisition/compression pipeline]
Performance Requirements
Lip synchronization

u Varying lip sync can be an effective technique in mitigating high video latency
u But... this is fundamentally unnatural!

[Figure: video and audio pipelines with audio transmission and playout delayed to track the slower video stream]
Interactive Multimedia Applications
Performance requirements

u No more than 250 ms end-to-end, one-way latency
u Continuous audio
u Minimum of 10 frames per second video throughput
u "Loosely synchronized" playout — ± 80 ms skew
Issues in Multimedia Delivery on Today's Internet
Outline

u Domain of discourse
» Definitions, concepts, and objectives
u Performance of "naive" applications today
» What's "broken"?
u Media adaptations for best-effort multimedia delivery
» Can we fix "it"?
u Performance of best-effort applications today
» Fundamental challenges for tomorrow's Internet
Videoconferencing on the Internet Today
ProShare™ performance on the Internet

[Figure: four plots over a 600-second conference: packet loss for video and audio, throughput (frames/sec, 0-40), video latency (0-800 ms), and audio latency (0-800 ms)]
Videoconferencing on the Internet Today
ProShare™ performance on the Internet

[Figure: a second set of packet-loss, throughput (0-40 frames/sec), video-latency, and audio-latency traces over a 600-second conference]
Videoconferencing on the Internet Today
What's the problem?

u Where is data being delayed and lost?
Videoconferencing on the Internet Today
What's the problem?

u Do we need more bandwidth or just better management of the existing bandwidth?

[Figure: hardware resources in year X vs. application requirements (performance, scale): insufficient resources circa 1980, sufficient but scarce resources circa 1990, abundant resources circa 2000]
Where do we go from here?
Two fundamental approaches

u Provide true quality-of-service through reservation of resources in the network
» Requires coordination amongst all parties
v admission control
v policing
v ...
u Provide "best-effort" service by adapting media streams
» Monitor & provide feedback on performance
» Bias transmission and processing of media to ameliorate the effects of congestion
Issues in Multimedia Delivery on Today's Internet
Outline

u Domain of discourse
» Definitions, concepts, and objectives
u Performance of "naive" applications today
» What's "broken"?
u Media adaptations for best-effort multimedia delivery
» Can we fix "it"?
u Performance of best-effort applications today
» Fundamental challenges for tomorrow's Internet
Best-Effort Multimedia Networking
Outline

u IP message delivery semantics
» The four common Internet pathologies
u Ameliorating the effects of delay-jitter
» "60 ways to queue & play your media samples"
u Ameliorating the effects of packet loss
» Recovery of lost samples through retransmission
» Recovery of lost samples through the addition of redundant information
u Congestion control
» Adaptive media scaling and packaging
Best-Effort Multimedia Networking
The four Internet pathologies

u Delay-jitter
» Managing a trade-off between end-to-end latency and continuous playout
u Loss
» Proactively control through forward error correction
» Reactively control through retransmission
u Out-of-order arrivals
» Assume out-of-order sample is lost
» Assume sample is late
u Duplicate arrivals
» Do we care?

[Figure: sender and receiver timelines illustrating perfect delivery vs. delay-jitter, loss, out-of-order arrivals, and duplicate arrivals]
Ameliorating the Effects of Delay-Jitter
Trading-off end-to-end latency for continuous playout

u When the first media sample arrives, should it be played or enqueued?

[Figure: receiver's processing pipeline: network transport, decompression, display synchronization, and a display queue feeding the display]
Ameliorating the Effects of Delay-Jitter
Trading-off end-to-end latency for continuous playout

u When the first media sample arrives, should it be played or enqueued?
» playing the sample ensures minimal end-to-end latency...

[Figure: samples a-h sent every p ms; playing on arrival keeps latency at p initially, but jitter stretches it through 2p to 3p while the display queue occasionally holds several samples]
Ameliorating the Effects of Delay-Jitter
Trading-off end-to-end latency for continuous playout

u Samples that arrive "too late" may be discarded

[Figure: samples a-h played with constant latency p; sample c arrives too late and is discarded]
Ameliorating the Effects of Delay-Jitter
Trading-off end-to-end latency for continuous playout

u Enqueueing the sample ensures continuous playout...

[Figure: samples a-h are buffered before playout begins; playout is continuous but latency is 3p]
Principles of Delay-Jitter Buffering
Sample discarding

u Purposefully throwing away samples reduces latency

[Figure: a buffered stream running at latency 3p; discarding one queued sample drops the playout latency to 2p]
Principles of Delay-Jitter Buffering
Two fundamental initial playout strategies

u Let naturally occurring network delays determine playout latency
u Avoid initial sequence of playout gaps by estimating network delay and setting playout delay accordingly

[Figure: two send/receive/playout timelines contrasting the two strategies]
Principles of Delay-Jitter Buffering
Two fundamental late arrival strategies

u Play media samples as they arrive
» Latency increases as delay-jitter increases
u Discard "late" samples
» Playout media with constant latency

[Figure: two send/receive/playout timelines contrasting the two strategies]
Principles of Delay-Jitter Buffering
Estimating network delay

u If network delay is bounded by a constant d:
» timestamp each packet at sender
» when a packet arrives, enqueue the packet and dequeue at time:
sender's_transmission_time + d

[Figure: samples a-h played with playout latency 3p under worst case network delay d]
Principles of Delay-Jitter Buffering
Estimating network delay

u Basic algorithm: playout latency = d + (k × v) where
» d is the average estimated network delay
» v is the estimated delay variation
» k is a "congestion estimator"

[Figure: samples a-h with the playout line offset from the send line by the computed playout latency]
Principles of Delay-Jitter Buffering
Estimating network delay

u The average network delay and variation can be estimated by:

d_new = d_old + α × (d_observed − d_old)
v_new = v_old + β × (|d_observed − d_old| − v_old)
or
d_new = α × d_old + (1 − α) × d_observed
v_new = β × v_old + (1 − β) × |d_observed − d_old|
or
d_new = MIN(d_observed in the recent past)

playout latency = d + (k × v)
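The first pair of update rules can be turned into a small estimator. A sketch; the α, β, and k values are illustrative defaults, not values from the talk:

```python
class PlayoutEstimator:
    """EWMA estimates of network delay (d) and its variation (v),
    following the first pair of update rules; the receiver schedules
    playout at latency d + k*v."""
    def __init__(self, alpha=0.125, beta=0.25, k=4.0):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.d = None   # average estimated network delay (ms)
        self.v = 0.0    # estimated delay variation (ms)

    def observe(self, delay_ms):
        if self.d is None:
            self.d = delay_ms                          # seed with first observation
        else:
            err = delay_ms - self.d
            self.d += self.alpha * err                 # d_new = d_old + a*(obs - d_old)
            self.v += self.beta * (abs(err) - self.v)  # v_new = v_old + b*(|obs - d_old| - v_old)
        return self.playout_latency()

    def playout_latency(self):
        return self.d + self.k * self.v
```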
Principles of Delay-Jitter Buffering
Estimating network delay

u All samples are scheduled for playout at time playout latency = d + (k × v)
u But when should playout latency be changed?

[Figure: samples a-h played at latency d + kv until a new, larger delay d_new > d_old + kv_old is observed]
Principles of Delay-Jitter Buffering
Voice transmission

u For voice transmission we can dynamically adapt playout times of audio samples using silent periods to "resync" the stream

[Figure: talkspurt i played at latency d + kv; after a silent period, talkspurt i+1 begins at the new latency d_new + kv_new]
Principles of Delay-Jitter Buffering
Continuous audio transmission

u Many forms of audio (and other media) must be transmitted continuously
» Music
» "Noisy" voice
» Mixed audio streams
» Video?
u Scheduling individual samples for playout based on estimates of network delay gives poor results
Principles of Delay-Jitter Buffering
Continuous audio transmission

u Let naturally occurring network delays determine playout latency

[Figure: samples a-h; playout latency drifts from p through 2p to 3p as network delay varies, with the display queue absorbing the jitter]
Principles of Delay-Jitter Buffering
Continuous audio transmission

u How do we determine if our delay-jitter buffer is too large?

[Figure: samples h-s played at a steady latency of 3p while the display queue persistently holds two to three samples]
Principles of Delay-Jitter Buffering
Continuous audio transmission

u Simulating packet loss by discarding samples at the receiver will reduce playout latency

[Figure: discarding one queued sample drops the playout latency from 3p to 2p]
Continuous Audio Transmission
Queue monitoring

u Rather than compute network delay, infer it from the length of the display queue
» If queue length grows, network delay is decreasing
» If queue length shrinks, network delay is increasing
» If queue length remains constant, network delay is stable

[Figure: display queue contents sampled at successive display times as samples h-s arrive]
Continuous Audio Transmission
Queue monitoring

u Keep count of the number of consecutive display initiation times at which the display queue contained n items
u When the count exceeds a threshold, the oldest sample in the queue is discarded
» queue locations near the head of the queue have large thresholds
» queue locations near the tail of the queue have small thresholds

[Figure: display queue contents at display times 1-11, with per-position counters advancing toward their thresholds]
Continuous Audio Transmission
Queue monitoring

u Example: Queue monitoring with thresholds = 3, 10
» sample g discarded at playout time 10

[Figure: samples a-j; after the threshold is exceeded, sample g is discarded and playout latency drops from 3p to 2p]
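The queue-monitoring rule can be sketched as a small class. This is a minimal illustration, not Stone's actual algorithm: the threshold map, the reset-on-discard policy, and the tick structure are my own simplifications:

```python
class QueueMonitor:
    """Discard the oldest queued sample when the display queue has
    held at least n items for more than thresholds[n] consecutive
    display initiation times."""
    def __init__(self, thresholds):
        self.thresholds = dict(thresholds)   # e.g. {3: 2}: allow length >= 3 for 2 ticks
        self.counts = {n: 0 for n in self.thresholds}
        self.queue = []

    def enqueue(self, sample):
        self.queue.append(sample)

    def display_time(self):
        """One display initiation: update counters, possibly discard a
        stale sample (reducing latency), then play the head of the queue."""
        for n, limit in sorted(self.thresholds.items(), reverse=True):
            if len(self.queue) >= n:
                self.counts[n] += 1
                if self.counts[n] > limit:
                    self.queue.pop(0)        # discard oldest: latency drops by one period
                    self.counts = {m: 0 for m in self.thresholds}
                break
            self.counts[n] = 0
        return self.queue.pop(0) if self.queue else None
```

Feeding it a steady stream with a three-deep backlog shows one sample being silently dropped once the backlog persists past its threshold.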
Queue Monitoring
Performance on a campus-area network

u How much delay-jitter can be accommodated in practice?
» What ranges of delay-jitter are observed?
» How well do these buffering schemes work in practice?
u Stone's delay-jitter study in the UNC CS department:
» A comparison of the effectiveness of three delay-jitter management policies on the playout of audio/video in a videoconferencing system:
I-Policy — playout media with fixed latency
E-Policy — playout media samples as they arrive
Queue Monitoring — adaptively set the playout delay
Queue Monitoring
Performance on a campus-area network

u What ranges of delay-jitter are observed?
» Stone measured the performance of 28 five-minute conferences during the course of a "typical" day

[Figure: audio end-to-end delay distributions (in ms) for morning and afternoon conferences: log-scale frame counts over delays from 0 to 250 ms]
Queue Monitoring
Performance on a campus-area network

[Figure: average gap rate and network delay for each conference across the day (6:03 through 0:38), comparing the I-2 Policy, I-3 Policy, E Policy, and Queue Monitoring]
Queue Monitoring
Performance on a campus-area network

[Figure: media latency (0-700 ms) over a 600-second conference, with and without Queue Monitoring]
Principles of Delay-Jitter Buffering
Non-real-time media transmission

u If the communication is non-real-time, doesn't simple static buffering solve the problem?
» Yes, but...
Adaptive, best-effort, multimedia networking
Outline

u IP message delivery semantics
» The four common Internet pathologies
u Ameliorating the effects of delay-jitter
» "60 ways to queue & play your media samples"
u Ameliorating the effects of packet loss
» Recovery of lost samples through retransmission
» Recovery of lost samples through the addition of redundant information
u Congestion control
» Adaptive media scaling and packaging
Dealing With Packet Loss
Application requirements

u Audio — 1-2% sample loss
» individual sample losses are noticeable (depending on the sample size)
» 5-10 lost samples per minute are tolerable
(the distribution of loss is critical)
u Video — 10-15 frames/s required for minimal motion perception
» highly application dependent
» video loss raises issues of "network citizenship"
Dealing With Packet Loss
Two basic approaches

u Traditional "reactive" approach
» Acknowledge transmissions and resend lost packets
v "Automatic Repeat Request" (ARQ)
u Two proactive approaches
» Introduce redundancy into streams to enable reconstruction of lost media samples
v "Forward error correction" (FEC)
» Dynamically adapt streams to the bandwidth perceived to be available at the current time
v Media scaling & packaging
Retransmission-Based Error Correction
Conventional wisdom

u Retransmission is silly...
» By the time you realize something is lost, it's too late to resend it
» Traditional sender-oriented retransmission techniques do not scale to multicast environments

[Figure: a lost packet costs at least a round trip (2d) to detect and resend]
Retransmission-Based Error Correction
The reality

u Retransmission is potentially beneficial...
» Since data is buffered at the receiver to ameliorate the effects of jitter, provide intermedia synchronization, etc., retransmission may work!

[Figure: samples c-o flowing through the display queue; the buffered samples give a lost packet time to be repaired before its playout point]
Retransmission-Based Error Correction
The retransmission process

1. Loss is detected
2. A retransmission request is issued
3. The requested packet is retransmitted

[Figure: the receiver detects a gap in the display queue, issues a request, and the retransmitted packet arrives before its playout point]
Retransmission-Based Error Correction
The retransmission "budget"

u If:
gap length + (3 × one-way transmission time) < playout latency
then retransmission is a possibility

[Figure: the samples buffered in the display queue define the budget available for detecting the loss and completing a retransmission]
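Reading the budget as three one-way trips (detect the gap, send the request, receive the retransmission — the 1.5 × RTT that reappears in the summary slide), the feasibility test is a one-liner. A sketch; the function name and parameters are mine:

```python
def retransmission_feasible(gap_length_ms, one_way_ms, playout_latency_ms):
    """True when a lost sample can be detected, requested, and
    resent (three one-way transmission times plus the gap length)
    before its scheduled playout point."""
    return gap_length_ms + 3 * one_way_ms < playout_latency_ms
```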
Is there likely to be enough time to retransmit?
The Dempsey et al. study

[Figure: probability of continuous playout vs. receiver buffering delay ("control time," in ms): one panel shows retransmission effectiveness for different average network delays (µ = 2, 10, 20, 30, 40 ms), the other for different loss patterns (0 to 3 consecutive lost packets)]
Retransmission-Based Error Correction
How can retransmission work in a multicast environment?

u Issues of scale
» Avoiding ACK/NACK implosions
» State requirements
Scalable Reliable MulticastPrinciples of operation
u Receivers are responsible for ensuring they receivethe data they care about» Repair requests are multicast to the group
u Any receiver is capable of acting as a sender andsending a repair response
InternetInternetSender
66
Scalable Reliable MulticastAvoiding repair and repair response implosions
u Hosts continually measure the distance to each other» Hosts periodically emit control messages as in RTCP
u When a receiver detects a loss, it sets a timer for emitting itsrepair request based on its estimate distance to the sender» Send repair requests quickly to nearby senders
Sender
9,8,7,6
98,6,
9,8, 7, 6
“NACK 7”
InternetInternet
67
Scalable Reliable MulticastAvoiding repair and repair response implosions
u If a host receives a repair request and it has the requestpacket, it similarly sets a timer for emitting itsresponse based on its estimated distance to the receiver
Sender
Requester
“NACK 7”
InternetInternet
“NACK 7”
“7”Responder
“7”
Scalable Reliable Multicast
Avoiding repair and repair response implosions

u Ideally a lost packet triggers only 1 repair request from a host just downstream from the point of failure & a single repair response from a host just upstream of the failure

[Figure: a single "NACK 7" from the requester and a single "7" from the responder traverse the multicast tree]
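The distance-based suppression timer can be sketched as a random delay proportional to estimated distance, in the spirit of SRM; the constants and function name here are illustrative assumptions, not the protocol's actual parameters:

```python
import random

def repair_request_delay(dist_to_sender, c1=1.0, c2=1.0, rng=random):
    """Draw a repair-request timer uniformly from
    [c1*d, (c1+c2)*d], where d is the estimated distance to the
    sender: nearby receivers tend to fire first, and their
    multicast NACK suppresses the timers still pending elsewhere."""
    return rng.uniform(c1 * dist_to_sender, (c1 + c2) * dist_to_sender)
```

A responder holding the requested packet would use the same scheme keyed on its distance to the requester.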
Scalable Reliable Multicast
Performance issues

u If losses are infrequent and correlated, then few repair/response messages are sent
» But every host will receive each message
u Otherwise, in the worst case the data traffic can double
Scalable Reliable Multicast
Performance issues

u What is the impact of having both the repair requester & responder delay before issuing their message?
» What is the likelihood that the resulting retransmission will be on time?
Scalable Reliable Multicast
Open issues

u How to limit the scope of repair/repair response messages?
u Managing the trade-off between keeping silent to avoid implosions and sending quickly to maximize (individual) performance
Scalable Reliable Multicast
Using TTL to limit the scope of repair/response messages

u TTL is not a good measure of locality
» Number of hosts reachable is not linear in TTL
u TTLs between two hosts are not symmetric

[Figure: number of hosts reachable for a given TTL vs. TTL (70-120), for SDR measured from CMU (Aug. 4, 1996) and from Berkeley (Jun. 24, 1996)]
Scalable Reliable Multicast
Using TTL to limit the scope of repair/response messages

u How can a repair responder ensure its reply reaches:
» the original requester
» all would-be requesters who suppressed their repair request

[Figure: a responder's TTL-limited reply may fail to reach a suppressed requester farther away]
Retransmission-Based Error Correction
Summary

u Retransmission will be an effective means of dealing with packet loss if...
» we can detect losses quickly
» average receiver buffering delay ≥ (1.5 × RTT) + gap length
u Retransmission can be made to scale if...
» we can avoid repair request and response implosions
» repairs can be performed locally
Dealing With Packet Loss
Two basic approaches

u Traditional "reactive" approach
» Acknowledge transmissions and resend lost packets
v "Automatic Repeat Request" (ARQ)
u Two proactive approaches
» Introduce redundancy into streams to enable reconstruction of lost media samples
v "Forward error correction" (FEC)
» Dynamically adapt streams to the bandwidth perceived to be available at the current time
v Media scaling & packaging
Forward Error Correction
Basic concepts

u We introduce redundancy into the stream to enable the receiver to recover from errors due to loss
u Forms of redundancy
» Simple replication and re-transmission of original data
» k-way XOR
» Replication, recoding, and re-transmission of original data

[Figure: an RTP packet (RTP header, header extension, RTP payload) with the FEC payload carried alongside]
Forward Error Correction
Simple replication and retransmission example

u Key issue: If a sample is lost, how do we ensure that the redundant information necessary for the repair arrives?
» How much bandwidth should we dedicate to FEC?
» Where should we place the redundant information in the stream?

[Figure: each packet carries the current sample plus a copy of the previous one (f,e; g,f; h,g; ...), letting the display queue repair isolated losses]
Forward Error Correction
Staggering original & redundant samples by two samples

u As before, the length of receiver's buffering delay is a critical performance parameter

[Figure: each packet carries the current sample plus the sample from two earlier (f,d; g,e; h,f; ...); a longer display queue is needed to exploit the stagger, but adjacent losses become repairable]
Forward Error Correction
k-way XOR

u Assume consecutive packet losses are rare and transmit the word-by-word XOR of groups of k samples
u Example: 3-way XOR
Sample 1          Sample 2          Sample 3          Repair Sample (= S1 ⊕ S2 ⊕ S3)
1010001101011111  1010001101010000  0000001111111111  0000001111110000
1010001101010011  1010001101000000  0000001111000001  0000001111010010
1010001100001111  1010111100000011  0000001100001010  0000111100000110
1010001000111010  1010111000101010  0000001000000000  0000111000010000
...               ...               ...               ...
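The repair and recovery steps of the 3-way XOR can be checked mechanically. A sketch operating on byte strings rather than 16-bit words (function names are mine):

```python
def xor_repair(samples):
    """Word-by-word (here byte-by-byte) XOR of equal-length samples;
    transmitted once per group of k samples as the repair packet."""
    out = bytearray(len(samples[0]))
    for s in samples:
        for i, byte in enumerate(s):
            out[i] ^= byte
    return bytes(out)

def recover_missing(received, repair):
    """XOR the k-1 received samples of a group with the repair
    packet to reconstruct the single lost sample: because
    a ^ a = 0, every present sample cancels out of the repair."""
    return xor_repair(list(received) + [repair])
```

Note the scheme recovers at most one loss per group, which is why the technique assumes consecutive packet losses are rare.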
Forward Error Correction
Recoding/transcoding of original sample

u If losses are infrequent, perhaps we can get by with lower quality repairs
u Example: UCL's Robust Audio Tool (RAT) recodes the stream using an LPC codec for error recovery
» Normal samples are generated by an ADPCM codec
» LPC codec generates a 4.8 kbps stream (12 bytes/20 ms sample)
» Redundant samples separated from originals by 1 sample
RAT LPC Redundancy Experiments
Intelligibility v. percentage of packet loss

u Conclusion: LPC redundancy is likely not warranted with small packets; it is worthwhile for large packets
» (This is due in large part to the quality of LPC coded speech)

[Figure: intelligibility (35-95) vs. packet loss (0-40%) for 20/40 ms and 80 ms packets, comparing ADPCM with no loss, LPC with no loss, repetition, LPC repair, and silence]
Forward Error Correction
Summary

u FEC will be an effective means of dealing with packet loss if...
» we can tolerate the overhead
» consecutive packet losses are rare or we can tolerate higher playout delays
The Incidence of Consecutive Packet Loss
The INRIA unicast IVS experiments

u Packet loss from INRIA to UCL

[Figure: number of consecutive losses vs. packet sequence number (10000-20000) for the unloaded (8 am) and loaded (4 pm) INRIA-UCL unicast path]
The Incidence of Consecutive Packet Loss
The INRIA unicast IVS experiments

u Frequency distribution of consecutive packet losses from INRIA to UCL

[Figure: number of occurrences of n consecutively lost packets vs. n (log scale, 1-1000), unloaded (8 am) and loaded (4 pm)]
The Incidence of Consecutive Packet Loss
The INRIA unicast IVS experiments

u Packet loss from INRIA to Maryland at 3 pm (9 am EST)

[Figure: number of consecutive losses vs. sequence number, and number of occurrences of n consecutively lost packets vs. n, for the medium-loaded INRIA-UMD unicast path]
The Incidence of Consecutive Packet Loss
The INRIA multicast IVS experiments

u Packet loss from INRIA to UCL

[Figure: number of consecutive losses vs. packet sequence number for the INRIA-UCL multicast path, unloaded (8 am) and loaded (4 pm)]
The Incidence of Consecutive Packet Loss
The INRIA multicast IVS experiments

u Frequency distribution of consecutive packet losses from INRIA to UCL

[Figure: number of occurrences of n consecutively lost packets vs. n (log scale, 1-10000) for the multicast path, unloaded (8 am) and loaded (4 pm)]
Packet Loss on the Internet Today
Audio packet loss for UNC-UW-UNC ProShare transmission

u Percentage of audio packet loss during a 5 minute interval
» Each line represents a 5 second average

[Figure: audio packet loss percentages (0-35%) in both directions of the UNC-UW conference]
Packet Loss on the Internet Today
Audio packet loss for UNC-UVa-UNC ProShare transmission

[Figure: audio packet loss percentages (0-25%) for four UNC-UVa conferences run during the 10-12, 12-14, 14-16, and 16-18 hour intervals]
Adaptive, best-effort, multimedia networking
Outline

u IP message delivery semantics
» The four common Internet pathologies
u Ameliorating the effects of delay-jitter
» "60 ways to queue & play your media samples"
u Ameliorating the effects of packet loss
» Recovery of lost samples through retransmission
» Recovery of lost samples through the addition of redundant information
u Congestion control
» Adaptive media scaling and packaging
Best-Effort Multimedia Networking
Congestion control

u Delay-jitter buffering, retransmission, and forward error correction ameliorate the effects of variation in end-to-end delay and packet loss
» They do not attempt to address the root cause
u Congestion control aims to eliminate or reduce these effects
Congestion ControlWhat is congestion?
SwitchFabric
InputLinks
OutputLinks
93
Congestion ControlThe nature of congestion
u What causes congestion?» Did our multimedia stream(s) cause the network to be
congested?» Are there simply too many connections competing for too
little bandwidth?
SwitchFabric
SwitchFabric
94
Congestion ControlThe adaptive, best-effort, congestion control problem
u How can we make the best use of the (time varying)bandwidth that is available to our streams?» How can we determine what this bandwidth is?
» How can we track how it changes over time?» How can we match our codec(s)’s output the bandwidth
“available” to our application?
SwitchFabric
SwitchFabric
95
Adaptive, Best-Effort Congestion Control
Principles of operation

u Receivers periodically report throughput & loss statistics
u Sender adapts to match the bandwidth available
» Assume sufficient bandwidth exists for some useful execution of the system
Canonical Adaptive Congestion ControlVideo bit-rate scaling
u Temporal scaling» Reduce the resolution of the stream
by reducing the frame rate
u Spatial scaling» Reduce the number of pixels in an
image
u Frequency scaling» Reduce the number of DCT
coefficients used in compression
u Amplitude scaling» Reduce the color depth of each
pixel in the image
u Color space scaling» Reduce the number of colors
available for displaying the image
[Figure: an 8×8 block of quantized DCT coefficients]
97
UNC Adaptive Congestion Control
2-Dimensional media scaling

u Canonical approach to congestion
  » Reduce (video) bit-rate
u Alternate approach
  » View congestion control as a search of a 2-dimensional bit-rate x packet-rate space
  » Scale bit- and packet-rates simultaneously to find a sustainable operating point
[Figure: operating points in the bit-rate (kbits/s) x packet-rate (packets/s) plane: high-quality video (8,000 bytes/frame), medium-quality video (4,000 bytes/frame), low-quality video (2,000 bytes/frame), and audio (256 bytes/sample)]
98
Bit- and Packet-Rate Scaling
An analytic model of media scaling

u Capacity constraints
  » The network is incapable of supporting the desired bit-rate in any form
u Access constraints
  » The network cannot support the desired bit-rate with the current packaging scheme
[Figure: outbound processing pipeline over time — service queuing, data movement & packet processing, adaptor queuing, and media access & transmission — with real-time traffic competing with non-real-time traffic and other outbound links]
99
Two Types of Congestion Constraints
Two dimensions of adaptation

u Reduce the packet-rate to adapt to an access constraint
  » Change the packaging or send fewer video frames
  » Primary trade-off: higher latency (potentially)
u Reduce the bit-rate to adapt to a capacity constraint
  » Send fewer video frames or fewer bits per video frame
  » Primary trade-off: lower fidelity
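The two dimensions can be made concrete with a small model: packaging more frames per message lowers the message (and hence packet) rate without changing the bit-rate, while shrinking frames lowers the bit-rate. A sketch with illustrative names:

```python
def stream_rates(frames_per_sec: float, bytes_per_frame: int,
                 frames_per_message: int) -> tuple:
    """(bit-rate in kbit/s, message-rate in messages/s) for a stream.

    Packing more frames per message is the response to an access
    constraint: it lowers the message rate at the cost of buffering
    latency.  Lowering bytes_per_frame is the response to a capacity
    constraint: it lowers the bit-rate at the cost of fidelity.
    """
    bit_rate_kbps = frames_per_sec * bytes_per_frame * 8 / 1000
    message_rate = frames_per_sec / frames_per_message
    return bit_rate_kbps, message_rate


# High-quality video, 30 fps at 8,000 bytes/frame, one frame/message:
# 1,920 kbit/s at 30 messages/s.  Packing two frames/message keeps the
# bit-rate at 1,920 kbit/s but halves the message rate to 15/s,
# adding one frame time (33 ms at 30 fps) of packaging latency.
```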
[Figure: the outbound processing pipeline, repeated from the previous slide]
100
2-Dimensional Scaling Example
The "Recent Success" heuristic

u Initial operating point: (high quality, 12 fps)
u First adaptation: (high quality, 10 fps)
  » congestion persists
u Second adaptation: (medium quality, 10 fps)
  » congestion relieved
u First probe: (medium quality, 12 fps)
u Second probe: (medium quality, 14 fps)
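The slide illustrates the heuristic by example; the following sketch is one guess at the underlying state machine, reproducing the trace above. The class, the step size of 2 fps, and the quality ladder are assumptions, not UNC's implementation.

```python
# Hypothetical reconstruction of the "Recent Success" search.
QUALITIES = ["low", "medium", "high"]


class RecentSuccess:
    """Search the quality x frame-rate space for a sustainable point."""

    def __init__(self, quality: str = "high", fps: int = 12):
        self.quality, self.fps = quality, fps
        self.cut_fps_last = False  # what the previous adaptation changed

    def on_congestion(self):
        # Adapt downward: first reduce the packet-rate (fps); if
        # congestion persists, reduce the bit-rate (quality) instead.
        if not self.cut_fps_last:
            self.fps -= 2
            self.cut_fps_last = True
        else:
            i = QUALITIES.index(self.quality)
            if i > 0:
                self.quality = QUALITIES[i - 1]
            self.cut_fps_last = False

    def on_success(self):
        # Congestion relieved: probe upward along the frame-rate axis,
        # which is where recent success was found.
        self.fps += 2
        self.cut_fps_last = False
```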
[Figure: the adaptation path plotted in the bit-rate (bits/s) x packet-rate (packets/s) plane]
101
2-Dimensional Media Scaling
Finding a sustainable operating point

u The search space can be pruned by eliminating
  » points that lead to inherently high latency
  » points that lead to high latency given the state of the network
[Figure: pruned regions of the bit-rate (kbits/s) x packet-rate (packets/s) search space]
102
2-Dimensional Media Scaling
Dealing with effects of fragmentation

u The problem
  » A sender can only (directly) affect the message rate, not the packet rate
u Does fragmentation render message-rate scaling obsolete?
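The relationship between the two rates is simple arithmetic: the network fragments each message into roughly ceil(size / MTU) packets, so scaling the message rate still scales the packet rate, just in coarser steps. A sketch (header overhead ignored; names illustrative):

```python
import math


def packet_rate(message_rate: float, bytes_per_message: int,
                mtu: int = 1500) -> float:
    """Packets/sec after IP fragmentation of each message.

    The sender directly controls only message_rate; the network
    fragments each message into ceil(size / MTU) packets.  Header
    overhead is ignored for simplicity.
    """
    return message_rate * math.ceil(bytes_per_message / mtu)


# 8,000-byte frames at 12 messages/s over a 1,500-byte MTU:
# each message becomes 6 packets, so 72 packets/s; halving the
# message rate to 6/s still halves the packet rate to 36/s.
```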
[Figure: packet-rate (packets/sec) as a function of message-rate (mesg/sec) and bit-rate (kbits/sec) for audio (stereo) and for video at 2,000, 4,000, and 8,000 bytes/frame]
103
Adaptive, 2-Dimensional Media Scaling
Does it work?

u Campus-sized internets — yes!
  » It "solves" the first-mile/last-mile problem
u The Internet? — well...
  » Does our necessary condition for success hold?
  » Does it hold often enough to be useful?
  » How much "room" is there for 2-D scaling in most codecs?

[Figure: endpoints communicating across the Internet]
104
Media Scaling Evaluation on the UNC Campus
Performance with no media scaling

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]
105
Media Scaling Evaluation on the UNC Campus
Performance with video scaling only

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]

106
Media Scaling Evaluation on the UNC Campus
Performance with 2-dimensional scaling

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]
107
Media Scaling Evaluation on the UNC Campus
2-dimensional adaptation over time

[Four-panel figure: Throughput (frames/sec), Audio Latency (ms, delivered vs. induced), Video Adaptation Over Time (frames/sec at high, medium, and low quality), and Audio Adaptation Over Time (audio frames/message, mono and stereo)]

108
Media Scaling Evaluation on the Internet
Media scaling in Intel's ProShare™ codec

[Figure: ProShare operating points in the bit-rate x packet-rate plane]
109
Media Scaling Evaluation on the Internet
ProShare with no media scaling

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]

110
Media Scaling Evaluation on the Internet
ProShare with 2-dimensional media scaling

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]
111
Media Scaling Evaluation on the Internet
ProShare with video scaling only

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]

112
Media Scaling Evaluation on the Internet
ProShare with 2-dimensional media scaling

[Four-panel figure: Throughput (frames/sec, video and audio), Packet Loss, Audio Latency (ms, delivered vs. induced), and Video Latency (ms) over the course of a run]
113
Sustainability Results
Adaptive methods on the Internet

u Results of an Internet performance study from UNC to UVa
  » Repeated trials from 10 AM to 7 PM weekdays
  » Trials separated by at least two hours
  » Scattered over three months

  Time Slot      Sustainable   Not Sustainable   % Sustainable
  10:00-12:00         6               3               67%
  12:00-14:00         4               4               50%
  14:00-16:00         1              11                8%
  16:00-18:00         3               9               25%
  18:00-20:00         4               5               44%
  Percentage         36%             64%
114
Real-time data delivery on the Internet Today
What's the problem?

u Do we need more bandwidth or just better management of the existing bandwidth?

[Figure: hardware resources in year X vs. requirements (performance, scale), 1980-2000, spanning eras of insufficient, abundant, and sufficient-but-scarce resources]
115
Real-time data delivery on the Internet Today
Where do we go from here?

u Provide "best-effort" service by adapting media streams
  » Monitor & provide feedback on performance
  » Bias transmission and processing of media to ameliorate the effects of congestion
u Provide true quality-of-service through reservation of resources in the network
  » Requires coordination amongst all parties
    v admission control
    v policing
    v ...