A Prefetching Protocol for Continuous Media Streaming in Wireless Environments

Frank H.P. Fitzek (Technical University Berlin) and Martin Reisslein (Arizona State University)
Abstract
Streaming of continuous media over wireless links is a notoriously difficult problem. This is due to the stringent Quality of Service requirements of continuous media and the unreliability of wireless links. We develop a streaming protocol for the real-time delivery of prerecorded continuous media from (to) a central base station to (from) multiple wireless clients within a wireless cell. Our protocol prefetches parts of the ongoing continuous media streams into prefetch buffers in the clients (base station). Our protocol prefetches according to a Join-the-Shortest-Queue policy. By exploiting rate adaptation techniques of wireless data packet protocols, the Join-the-Shortest-Queue policy dynamically allocates more transmission capacity to streams with small prefetched reserves. Our protocol uses channel probing to handle the location-dependent, time-varying, and bursty errors of wireless links. We evaluate our prefetching protocol through extensive simulations with VBR MPEG and H.263 encoded video traces. Our simulations indicate that for bursty VBR video with an average rate of 64 kbit/sec and typical wireless communication conditions our prefetching protocol achieves client starvation probabilities on the order of $10^{-4}$ and a bandwidth efficiency of 90% with prefetch buffers of 128 kBytes.
Keywords
CDMA, Channel Probing, Multimedia, Prefetching, Prerecorded Continuous Media, Rate Adaptation, Real-Time Streaming, Wireless Communication.
I. INTRODUCTION
Due to the popularity of the World Wide Web, retrievals from web servers dominate today's Internet. While most of the retrieved objects today are textual and image objects, web-based streaming of continuous media, such as video and audio, is becoming increasingly popular. It is expected that by 2003, continuous media will account for more than 50% of the data available on web servers [1]. This trend is reflected in a recent study [2], which found that the number of continuous media objects stored on web servers tripled in the first nine months of 1998. At the same time there is an increasing trend toward accessing the Internet and Web from wireless mobile devices. Analysts predict that
Manuscript received December 18, 2000; revised June 5, 2001. A shorter version of this article has appeared under the title A Prefetching Protocol for Streaming Prerecorded Continuous Media in Wireless Environments in Proceedings of SPIE ITCom 2001, Internet Performance and QoS, Denver, CO, August 2001.
F. Fitzek is with the Department of Electrical Engineering, Technical University Berlin, 10587 Berlin, Germany (e-mail: [email protected]).
Corresponding Author: Martin Reisslein, Dept. of Electrical Eng., Arizona State University, P.O. Box 877206, Tempe, AZ 85287-7206, phone: (480) 965-8593, fax: (480) 965-8325, e-mail: [email protected], web: www.eas.asu.edu/~mre.
there will be over one billion mobile phone users by 2003 and more people will access the Internet from wireless than wireline devices [3].
The stringent Quality of Service (QoS) requirements of continuous media and the unreliability of wireless links combine to make streaming over wireless links a notoriously difficult problem. For uninterrupted video playback the client has to decode and display a new video frame periodically; typically every 40 msec with MPEG encoded video. If the client has not completely received a video frame by its playback deadline, the client loses (a part or all of) the video frame and suffers playback starvation. A small probability of playback starvation is required for good perceived video quality. In addition to these stringent timing and loss constraints, the video frame sizes (in byte) of the more efficient Variable Bit Rate (VBR) encodings are highly variable, typically with peak-to-mean ratios of 4 to 10 [4]. These properties and requirements make the real-time streaming of video over packet-switched networks, even for the case of wireline networks, a challenging problem [5]. The problem is even more challenging when streaming video over wireless links. Wireless links are typically highly error prone. They introduce a significant number of bit errors which may render a transmitted packet undecodable. The tight timing constraints of real-time video streaming, however, allow only for limited re-transmissions [6]. Moreover, the wireless link errors are typically time-varying and bursty. An error burst (which may persist for hundreds of milliseconds) may make the transmission to the affected client temporarily impossible. All of these wireless link properties combine to make the timely delivery of the video frames very challenging.
In this article we develop a high performance streaming protocol for the real-time delivery of prerecorded continuous media over wireless links. Our protocol is equally well suited for streaming in the downlink (base station to clients) direction as well as the uplink (clients to base station) direction. We focus in the following discussion on the downlink direction (uplink streaming is discussed in Section V-C). Our protocol allows for immediate commencement of playback as well as near instantaneous client interactions, such as pause/resume and temporal jumps. Our protocol gives a constant perceptual media quality at the clients while achieving a very high bandwidth efficiency. Our protocol achieves this high performance by exploiting two special properties of prerecorded continuous media: (1) the client consumption rates over the duration of the playback are known before the streaming commences, and (2) while the continuous media stream is being played out at the client, parts of the stream can be prefetched into the client's memory. (These properties may be exploited to some extent when streaming live content with some prefetch delay budget; see Section V-B.) The prefetched reserves allow the clients to continue playback during periods of adverse transmission conditions on the wireless links. In addition, the prefetched reserves allow the clients to maintain a high perceptual media quality when retrieving bursty Variable Bit Rate (VBR) encoded streams.
The prerecorded continuous media streams are prefetched according to a specific Join-the-Shortest-Queue (JSQ) policy, which strives to balance the prefetched reserves in the wireless (and possibly mobile) clients within a wireless cell. The JSQ prefetch policy exploits rate adaptation techniques of wireless data packet protocols [7]. The rate adaptation techniques allow for the dynamic allocation of transmission capacities to the ongoing wireless connections. In the Code Division Multiple Access (CDMA) IS-95 (Revision B) standard, for instance, the rate adaptation is achieved by varying the number of codes (i.e., the number of parallel channels) used for the transmissions to the individual clients. (The total number of code channels used for continuous media streaming in the wireless cell may be constant.) The JSQ prefetch policy dynamically allocates more transmission capacity to wireless clients with small prefetched reserves while allocating less transmission capacity to the clients with large reserves. The ongoing streams within a wireless cell collaborate through this lending and borrowing of transmission capacities. Channel probing is used to judiciously utilize the transmission capacities of the wireless links, which typically experience location-dependent, time-varying, and bursty errors. Our extensive numerical studies indicate that this collaboration is highly effective in reducing playback starvation at the clients while achieving a high bandwidth efficiency. For bursty VBR video with an average rate of 64 kbit/sec and typical wireless communication conditions our prefetching protocol achieves client starvation probabilities on the order of $10^{-4}$ and a bandwidth efficiency of 90% with client buffers of 128 kBytes. Introducing a start-up latency of 0.4 sec reduces the client starvation probability further.

This article is organized as follows. In Section II we describe our architecture for the streaming of
prerecorded continuous media in the downlink direction. Our focus is on multi-rate CDMA systems. In Section III we develop the components of our JSQ prefetching protocol. Importantly, we introduce channel probing, a mechanism that handles the location-dependent, time-varying, and bursty errors of wireless environments. In Section IV we evaluate our prefetching protocol both without and with channel probing through extensive simulations. In Section V we discuss several extensions of the prefetching protocol. We consider streaming with client interactions, such as pause/resume and temporal jumps. We also consider the streaming of live content, such as the audio and video feed from a sporting event. We outline how to use the prefetching protocol for prefetching in the uplink (clients to base station) direction. Moreover, we outline how to deploy the prefetching protocol in third generation CDMA systems as well as TDMA and FDMA systems. We also discuss the deployment of the prefetching protocol on top of physical layer error control schemes. We discuss the related work in Section VI and conclude in Section VII.
Fig. 1. Architecture: A central base station streams prerecorded continuous media to wireless (and possibly mobile) clients within a wireless cell.
II. ARCHITECTURE
Figure 1 illustrates our architecture for continuous media streaming in the downlink direction. A central base station provides streaming services to multiple wireless (and possibly mobile) clients within a wireless cell. Let $J$ denote the number of clients serviced by the base station. We assume for the purpose of this study that each client receives one stream; thus there are $J$ streams in progress. (Although some clients might receive the same stream, the phases (i.e., starting times) are typically different.) The basic principle of our streaming protocol, exploiting rate adaptation techniques for prefetching, can be applied to any type of wireless communication system with a slotted Time Division Duplex (TDD) structure. The TDD structure provides alternating forward (base station to clients) and backward (clients to base station) transmission slots.
We initially consider a multi-code Code Division Multiple Access (CDMA) system. Such a system adapts rates for the synchronous transmissions in the forward direction by aggregating orthogonal code channels, that is, by varying the number of code channels used for transmissions to the individual clients. The second generation CDMA IS-95 (Rev. B) system [8] is an example of such a system; as is the third generation UMTS system [9] in TDD mode. Let $C$ denote the number of orthogonal codes used by the base station for transmitting the continuous media streams to the clients. Let $K(j)$, $j = 1, \ldots, J$, denote the number of parallel channels supported by the radio front-end of client $j$. The CDMA IS-95 (Rev. B) standard provides up to eight parallel channels per client; in the TDD mode of UMTS up to 15 parallel code channels can be assigned to an individual client. Let $R$ denote the data rate (in bit/sec) provided by one CDMA code channel in the forward direction.
Our streaming protocol is suitable for any type of prerecorded continuous media (an extension for live content is developed in Section V-B). To fix ideas we focus on video streams. A key feature of our protocol is that it accommodates any type of encoding; it accommodates Constant Bit Rate (CBR) and bursty Variable Bit Rate (VBR) encodings as well as encodings with a fixed frame rate (such as MPEG-1 and MPEG-4) and a variable frame rate (such as H.263). For the transmission over the wireless links the video frames are packetized into fixed length packets. The packet size is set such that one CDMA code channel accommodates exactly one packet in one forward slot; thus the base station can transmit $C$ packets on the orthogonal code channels in a forward slot.
Let $N(j)$, $j = 1, \ldots, J$, denote the length of video stream $j$ in frames. Let $P_f(j)$ denote the number of packets in the $f$th frame of video stream $j$. Note that for a CBR encoded video stream $j$ the $P_f(j)$'s are identical. Let $T_f(j)$ denote the interarrival time between the $f$th frame and the $(f+1)$th frame of video stream $j$ in seconds. Frame $f$ is displayed for a frame period of $T_f(j)$ seconds on the client's screen. For a constant frame rate encoding the frame periods $T_f(j)$ are identical. Because the video streams are prerecorded, the sequence of integers $(P_1(j), \ldots, P_{N(j)}(j))$ and the sequence of real numbers $(T_1(j), \ldots, T_{N(j)}(j))$ are fully known when the streaming commences.
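The per-frame metadata can be precomputed from an encoded trace; a minimal sketch (the 512-byte packet size and the example frame sizes are illustrative assumptions, not values from the article):

```python
import math

def trace_metadata(frame_sizes_bytes, frame_period_sec, packet_size_bytes):
    """Precompute the packet counts P_f and frame periods T_f for a
    prerecorded stream; both sequences are fully known in advance."""
    P = [max(1, math.ceil(s / packet_size_bytes)) for s in frame_sizes_bytes]
    T = [frame_period_sec] * len(frame_sizes_bytes)  # constant frame rate
    return P, T

# Example: three VBR frames, 25 frames/sec, 512-byte packets (assumed values).
P, T = trace_metadata([2048, 700, 1500], 0.04, 512)
print(P, T)  # P = [4, 2, 3]
```

For a variable frame rate encoding (such as H.263) the list T would instead hold the per-frame interarrival times taken from the trace.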
When a client requests a specific video the base station relays the request to the appropriate origin server or proxy server. If the request passes the admission tests the origin/proxy server immediately begins to stream the video via the base station to the client. Our focus in this article is on the streaming from the base station over the wireless link to the client. The streaming from the origin/proxy server to the base station is beyond the scope of this article. We assume for the purpose of this study that the video is delivered to the base station in a timely fashion. Upon granting the client's request the base station immediately commences streaming the video to the client. The packets arriving at the client are placed in the client's prefetch buffer. The video is displayed on the client's monitor as soon as a few frames have arrived at the client. Under normal circumstances the client displays frame $f$ of video stream $j$ for $T_f(j)$ seconds, then removes frame $f+1$ from its prefetch buffer, decodes it, and displays it for $T_{f+1}(j)$ seconds. If at one of these epochs there is no complete frame in the prefetch buffer the client suffers playback starvation and loses the current frame. The client will try to conceal the missing encoding information by applying error concealment techniques [10]. At the subsequent epoch the client will attempt to display the next frame of the video.

In our protocol the base station keeps track of the contents of the prefetch buffers in the clients. Toward this end, let $b(j)$, $j = 1, \ldots, J$, denote the number of packets in the prefetch buffer of client $j$. Furthermore, let $d(j)$, $j = 1, \ldots, J$, denote the length of the prefetched video segment in the prefetch buffer of client $j$ in seconds. The counters $b(j)$ and $d(j)$ are updated (1) when client $j$ acknowledges the reception of sent packets, and (2) when a frame is removed, decoded, and displayed at client $j$.
First, consider the update when packets are acknowledged. For the sake of illustration suppose that the $P_f(j)$ packets of frame $f$ of stream $j$ have been sent to client $j$ during the just expired forward slot. Suppose that all $P_f(j)$ packets are acknowledged during the subsequent backward slot. When the last of the $P_f(j)$ acknowledgments arrives at the base station, the counters are updated by setting $b(j) \leftarrow b(j) + P_f(j)$ and $d(j) \leftarrow d(j) + T_f(j)$.

Next, consider the update of the counters when a frame is removed from the prefetch buffer, decoded,
and displayed at the client. Given the sequence $T_f(j)$, $f = 1, \ldots, N(j)$, and the starting time of the video playback at the client, the base station keeps track of the removal of frames from the prefetch buffer of client $j$. Suppose that frame $m(j)$ is to be removed from the prefetch buffer of client $j$ at a particular instant in time. The base station tracks the prefetch buffer contents by updating
$$b(j) \leftarrow \left[ b(j) - P_{m(j)}(j) \right]^{+} \qquad (1)$$

and

$$d(j) \leftarrow \left[ d(j) - T_{m(j)}(j) \right]^{+}, \qquad (2)$$
where $[x]^{+} = \max(0, x)$. Note that the client suffers playback starvation when $b(j) - P_{m(j)}(j) < 0$, that is, when the frame that is supposed to be removed is not in the prefetch buffer. To ensure proper synchronization between the clients and the base station in practical systems, each client sends periodically (every few slots, say) an update of its buffer contents (along with a regular acknowledgement) to the base station. These buffer updates are used at the base station to re-validate (and possibly correct) the variables $b(\cdot)$ and $d(\cdot)$.

III. COMPONENTS OF PREFETCH PROTOCOL
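Before turning to the scheduling components, the buffer bookkeeping of Section II (counter growth on acknowledgment, clamped decrease on frame removal, starvation detection) can be sketched as follows; the class and field names are our own illustration, not from the article:

```python
class ClientState:
    """Base station's view of one client's prefetch buffer (Section II)."""
    def __init__(self, P, T):
        self.P, self.T = P, T    # packets per frame, frame periods (known a priori)
        self.b = 0               # packets in the prefetch buffer
        self.d = 0.0             # prefetched segment length in seconds
        self.starvations = 0

    def on_ack(self, f):
        # All P_f packets of frame f acknowledged: grow the reserve.
        self.b += self.P[f]
        self.d += self.T[f]

    def on_removal(self, m):
        # Frame m is consumed; starvation if it is not fully buffered.
        if self.b - self.P[m] < 0:
            self.starvations += 1
        self.b = max(0, self.b - self.P[m])      # update (1)
        self.d = max(0.0, self.d - self.T[m])    # update (2)

c = ClientState(P=[4, 2, 3], T=[0.04] * 3)
c.on_ack(0)          # frame 0 (4 packets) arrives
c.on_removal(0)      # frame 0 is played out: b back to 0
c.on_removal(1)      # frame 1 never arrived: starvation, counters clamp at 0
print(c.b, c.starvations)  # 0 1
```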
For each forward slot the base station must decide which packets to transmit from the $J$ ongoing streams. The prefetch policy is the rule that determines which packets are transmitted. The maximum number of packets that can be transmitted in a forward slot is $C$. The Join-the-Shortest-Queue (JSQ) prefetch policy strives to balance the lengths of the prefetched video segments across all of the clients serviced by the base station. The basic idea is to dynamically assign more codes (and thus transmit more packets in parallel) to clients that have only a small reserve of prefetched video in their prefetch buffers. The JSQ prefetch policy is inspired by the Earliest Deadline First (EDF) scheduling policy. The EDF scheduling policy is known to be optimal among the class of non-preemptive scheduling policies for a single wireline link [11]. It is therefore natural to base the prefetch policy on the EDF scheduling policy. With JSQ prefetching the base station selects the packet with the earliest playback deadline for transmission. In other words, the base station transmits the next packet to the playout queue with the shortest segment of prefetched video (i.e., the shortest queue).
In order to simplify the discussion and highlight the main points of our approach we first introduce a basic prefetch policy. This basic prefetch policy assumes that all clients (1) support $C$ parallel channels, and (2) have infinite prefetch buffer space. We shall address these two restrictions in a refined prefetch policy. Also, we initially exclude client interactions, such as pause/resume and temporal jumps; these are discussed in Section V-A.
A. Basic JSQ Prefetch Policy
Let $s(j)$, $j = 1, \ldots, J$, denote the length of the video segment (in seconds of video run time) that is scheduled for transmission to client $j$ in the current forward slot. The following scheduling procedure is executed for every forward slot. At the beginning of the scheduling procedure all $s(j)$'s are initialized to zero. The base station determines the client $j^*$ with the smallest $d(j) + s(j)$ (ties are broken arbitrarily). The base station schedules one packet for transmission (by assigning a code to it) and increments $s(j^*)$:

$$s(j^*) \leftarrow s(j^*) + \frac{T_{q(j^*)}(j^*)}{P_{q(j^*)}(j^*)},$$

where $q(j^*)$ is the frame (number) of stream $j^*$ that is carried (partially) by the scheduled packet. (Although the length of the prefetched video segment grows in increments of $T_f(j)$ seconds whenever the transmission of the $P_f(j)$ packets carrying frame $f$ of stream $j$ is completed, for simplicity we account for partially transmitted frames by incrementing the prefetched segment by $T_f(j)/P_f(j)$ for each transmitted packet. This approach is further justified by error concealment techniques that can decode partial frames [10].) The base station repeats this procedure $C$ times, that is, until the $C$ available codes are used up. At each iteration the base station determines the $j^*$ with the smallest $d(j) + s(j)$, schedules one packet for client $j^*$, and increments $s(j^*)$.
Throughout this scheduling procedure the base station skips packets from a frame that would miss its playback deadline at the client. (Specifically, if frame $m(j)$ is to be removed before the end of the upcoming forward slot and if $d(j) < T_{m(j)}(j)$, the base station skips frame $m(j)$ and prefetches for frame $m(j) + 1$. Moreover, frame $m(j)$ is skipped if $b(j) + v \cdot K(j) < P_{m(j)}(j)$, where $v$ is the number of forward slots before frame $m(j)$ is removed.)
During the subsequent backward slot the base station waits for the acknowledgments from the clients. (Typical hardware configurations of wireless communication systems allow the clients to acknowledge the packets received in a forward slot of the TDD timing structure immediately in the following backward slot [12].) If all packets sent to client $j$ are acknowledged by the end of the backward slot we set $d(j) \leftarrow d(j) + s(j)$. If some of the acknowledgments for a stream $j$ are missing at the end of the backward slot, $d(j)$ is left unchanged. At the end of the backward slot the scheduling procedure starts over. The $s(j)$'s are re-initialized to zero and the base station schedules packets for the clients with the smallest $d(j) + s(j)$.

Note that the acknowledgment procedure outlined above considers all of the packets sent to a client
in a forward slot as lost when one (or more) of the packets (or acknowledgments) is lost. We adopt this procedure for simplicity and to be conservative. A refined approach would be to selectively account for the acknowledged packets. However, this would lead to "gaps" in the prefetched video segments that take some effort to keep track of. We also note that because of the bursty error characteristics of wireless links typically either all or none of the packets sent to a client in a forward slot are lost. Also, since the acknowledgements can be sent with a large amount of forward error correction, the probability of losing an acknowledgement is typically small. Nevertheless, we point out the implications of the conservative acknowledgment procedure. Note that the buffer variables $b(j)$ and $d(j)$ maintained at the base station are the basis for the JSQ scheduling. It is therefore important that the buffer variables $b(j)$ and $d(j)$ correctly reflect the actual buffer contents at client $j$. Our conservative acknowledgment procedure clearly ensures that the actual buffer content at client $j$ is always larger than or equal to the buffer variables $b(j)$ and $d(j)$. Note that only (1) sporadic packet losses (i.e., loss of a subset of the packets sent to a client in a slot), or (2) the loss of an acknowledgement (of a packet that was correctly received) can cause the actual buffer content to be larger than the corresponding buffer variables. As pointed out above these two events are rather unlikely; therefore the actual buffer content is typically not larger than the buffer variables. Also, if the buffer content is larger than the buffer variables, then typically by a small margin. Now, whenever the actual buffer content at a client $j$ happens to be larger than the corresponding buffer variables $b(j)$ and $d(j)$, the buffer content and the buffer variables are reconciled when one of the following three events happens. ($i$) The packets that were considered lost by the base station (but actually received by the client) are re-transmitted (as part of the regular JSQ schedule) and their acknowledgments arrive at the base station. (The client acknowledges all successfully received packets, and then discards packets that are received in duplicate.) ($ii$) The frame (of which packets are considered lost at the base station) is consumed (before a re-transmission could take place). In this case the operations (1) and (2) set the buffer variables to zero. The client empties its prefetch buffer and decodes the part (if any) of the frame that it received. ($iii$) A buffer update message gives the base station the actual client buffer content. We finally note that in our performance evaluation in Section IV the losses are counted based on the buffer variables $d(j)$ and $b(j)$ at the base station. Thus, our performance results are on the conservative side.
We note that if $K(j) \geq C$ for all $j = 1, \ldots, J$, it is not necessary to signal the assigned codes to the clients, as each client is capable of monitoring all of the $C$ orthogonal channels. In case there are some clients with $K(j) < C$, the code assignment must be signaled to these clients; this could be done in a signalling time slot (right before the forward slot) on a common signalling channel, e.g., the Network Access and Connection Channel (NACCH) of the Integrated Broadband Mobile System (IBMS) [13].
With prefetching it is possible that all $N(j)$ frames of video stream $j$ have been prefetched into the client's prefetch buffer but not all frames have been displayed. When a stream reaches this state we no longer consider it in the above JSQ policy. From the base station's perspective it is as if the stream has terminated.
B. Refined JSQ Prefetch Policy
In this section we discuss important refinements of the JSQ prefetch policy. These refinements limit (1) the number of packets that are sent (in parallel) to a client in a forward slot, and (2) the number of packets that a client may have in its prefetch buffer. Suppose that the clients $j = 1, \ldots, J$, support at most $K(j)$ parallel channels, and have limited prefetch buffer capacities of $B(j)$ packets. Let $a(j)$, $j = 1, \ldots, J$, denote the number of packets scheduled for client $j$ in the upcoming forward slot. Recall that $b(j)$ is the current number of packets in the prefetch buffer of client $j$. The refinements work as follows. Suppose that the base station is considering scheduling a packet for transmission to client $j^*$. The base station schedules the packet only if
$$a(j^*) \leq K(j^*) - 1 \qquad (3)$$

and

$$b(j^*) \leq B(j^*) - 1. \qquad (4)$$
If one of these conditions is violated, that is, if the packet would exceed the number of parallel channels of client $j^*$ or the packet would overflow the prefetch buffer of client $j^*$, the base station removes connection $j^*$ from consideration. The base station next finds a new $j^*$ that minimizes $d(j) + s(j)$. If conditions (3) and (4) hold for the new client $j^*$, we schedule the packet, update $s(j^*)$ and $a(j^*)$, and continue the procedure of transmitting packets to the clients that minimize $d(j) + s(j)$. Whenever one of the conditions (3) or (4) (or both) is violated we skip the corresponding client and find a new $j^*$. This procedure stops when we have either (1) scheduled $C$ packets, or (2) skipped over all $J$ streams. The JSQ scheduling algorithm can be efficiently implemented with a sorted list (using $d(j)$ as the sorting key).
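A sketch of the refined procedure, using Python's heapq as the sorted list; the field names are our own, and as an assumption on our part we conservatively count packets already scheduled in the current slot against the buffer limit of condition (4):

```python
import heapq

def refined_jsq_slot(clients, C):
    """One forward slot of the refined JSQ policy.  clients maps
    j -> {'d': prefetched seconds, 'b': buffered packets, 'K': max
    parallel channels, 'B': buffer capacity in packets, 'inc': seconds
    of video per packet}.  Returns a(j), packets scheduled per client."""
    a = {j: 0 for j in clients}
    # Sorted list keyed on d(j) + s(j); s(j) starts at zero.
    heap = [(c['d'], j) for j, c in clients.items()]
    heapq.heapify(heap)
    scheduled = 0
    while heap and scheduled < C:
        key, j = heapq.heappop(heap)
        c = clients[j]
        # Conditions (3) and (4): channel and buffer limits (packets already
        # scheduled this slot are counted against the buffer, a conservative
        # reading).  A violating client is skipped for the rest of the slot.
        if a[j] > c['K'] - 1 or c['b'] + a[j] > c['B'] - 1:
            continue
        a[j] += 1
        scheduled += 1
        heapq.heappush(heap, (key + c['inc'], j))  # d(j) + s(j) grew
    return a

clients = {
    0: {'d': 0.0, 'b': 0, 'K': 2, 'B': 100, 'inc': 0.01},   # channel limited
    1: {'d': 0.5, 'b': 99, 'K': 8, 'B': 100, 'inc': 0.01},  # buffer limited
}
print(refined_jsq_slot(clients, C=15))  # {0: 2, 1: 1}
```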
C. Channel Probing
In this section we introduce a channel probing refinement designed to improve the performance of the purely JSQ based prefetch protocol. Note that the prefetch protocol introduced in the previous section does not directly take the physical characteristics of the wireless channels into consideration. The JSQ transmission schedule is based exclusively on the prefetch buffer contents at the clients (and the consumption rates of the video streams). Wireless channels, however, typically experience location-dependent, time-varying, and bursty errors, that is, periods of adverse transmission conditions during which all packets sent to a particular client are lost. Especially detrimental to the prefetch protocol's performance are the persistent bad channels of long-term shadowing that is caused by terrain configuration or obstacles. Long-term shadowing typically persists for hundreds of milliseconds, even up to seconds [14]. To see the need for the channel probing refinement consider a scenario where one of the clients experiences a persistent burst error on its wireless link to the base station. The burst error cuts the client off from the base station and the client continues video playback from its prefetched reserve. As its prefetched reserve decreases the JSQ prefetch policy allocates larger and larger transmission capacities to the affected client. The excessive transmission resources expended on the affected client, however, reduce the transmissions to the other clients in the wireless cell. As a result the prefetched reserves of all the other clients in the wireless cell are reduced. This makes playback starvation more likely, not only for the client experiencing the bad channel but for all the other clients as well.
To fix this shortcoming we introduce the channel probing refinement, which is inspired by recent work on channel probing for power saving [15]. The basic idea is to start probing the channel (client) when acknowledgment(s) are missing at the end of a backward slot. While probing the channel the base station sends at most one packet (probing packet) per forward slot to the affected client. The probing continues until an acknowledgment for a probing packet is received. More specifically, if the acknowledgment for at least one packet sent to client $j$ in a forward slot is missing at the end of the subsequent backward slot, we set $K(j) = 1$. This allows the JSQ algorithm to schedule at most one packet (probing packet) for client $j$ in the next forward slot. If the acknowledgment for the probing packet is returned by the end of the next backward slot, we set $K(j)$ back to its original value; otherwise we continue probing with $K(j) = 1$.
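The probing rule reduces to a small per-client state update after each backward slot; a minimal sketch with illustrative field names:

```python
def update_probing(client, acks_ok):
    """Channel probing refinement: after each backward slot, throttle a
    client with missing acknowledgments down to one probing packet per
    forward slot, and restore its full channel count K(j) once a probe
    gets through."""
    if acks_ok:
        client['K'] = client['K_full']   # leave (or stay out of) probing mode
    else:
        client['K'] = 1                  # send at most one probing packet

client = {'K': 8, 'K_full': 8}
update_probing(client, acks_ok=False)    # burst error: enter probing mode
print(client['K'])                       # 1
update_probing(client, acks_ok=True)     # probe acknowledged: full rate again
print(client['K'])                       # 8
```

The reduced K(j) simply feeds into conditions (3) and (4) of the refined JSQ policy, so no other part of the scheduler changes.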
IV. SIMULATION OF PREFETCH PROTOCOL
In this section we describe the simulations of our protocol for continuous media streaming in wireless environments. In our simulations we consider a generic wireless communication system with Time Division Duplex. We assume throughout that the base station allocates $C$ = 15 channels (e.g., orthogonal codes in CDMA or time slots in TDMA) to continuous media streaming. We assume that each channel provides a data rate of $R$ = 64 kbit/sec in the forward (downlink) direction. Throughout we consider scenarios where all $J$ clients have the same buffer capacity of $B$ packets and support the same number of parallel channels $K$, i.e., $B(j) = B$ and $K(j) = K$ for all $j = 1, \ldots, J$.
We evaluate our streaming protocol for three different encodings of video streams: ($i$) video streams encoded at a Constant Bit Rate (CBR) and a constant frame rate, ($ii$) video streams encoded at a Variable Bit Rate (VBR) and a constant frame rate (e.g., MPEG-1 encodings), and ($iii$) video streams encoded at a variable frame rate and a variable bit rate (e.g., H.263 encodings). In the CBR scenario we assume a video stream with a frame rate of 25 frames/sec and a bit rate (including packet headers and padding) of $\bar{r}$ = 64 kbit/sec.
For the VBR MPEG scenario we generated ten pseudo traces by scaling MPEG-1 traces obtained from the public domain [16], [17], [18] to an average rate of $\bar{r}$ = 64 kbps. The traces have a fixed frame rate of 25 frames/sec and are 40,000 frames long. The generated pseudo traces are highly bursty with peak-to-mean ratios in the range from 7 to 18; see [19] for details. For the H.263 scenario we generated ten H.263 encodings with an average rate of $\bar{r}$ = 64 kbps and one hour length each. The frame periods of the encodings are variable; they are multiples of the reference frame period of 40 msec. The H.263 encodings have peak-to-mean ratios in the range from 5 to 8; see [4] for details.
For each of the $J$ ongoing streams in the wireless cell we randomly select one of the MPEG (H.263) traces. We generate random starting phases $\phi(j)$, $j = 1, \ldots, J$, into the selected traces. The $\phi(j)$'s are independent and uniformly distributed over the lengths of the selected traces. The frame $\phi(j)$ is removed from the prefetch buffer of client $j$ at the end of the first frame period. All clients start with empty prefetch buffers. Furthermore, we generate random stream lengths $N(j)$, $j = 1, \ldots, J$. The $N(j)$'s are independent and are drawn from an exponential distribution with mean $\bar{N}$ frames (corresponding to the average video run time in seconds). (We chose the exponential distribution because it has been found to be a good model for stream lifetimes. We discuss the impact of the stream lifetime on the prefetch protocol performance in more detail in Section IV-B.) We initially assume that the client consumes without interruption $N(j)$ frames, starting at frame number $\phi(j)$ of the selected trace. The trace is wrapped around if $\phi(j) + N(j)$ extends beyond the end of the trace. When the $N(j)$th frame is removed from the prefetch buffer of client $j$, we assume that the client immediately requests a new video stream. For the new video stream we again randomly select one of the traces, a new independent random starting phase $\phi(j)$ into the trace, and a new independent random stream lifetime $N(j)$. Thus there are always $J$ streams in progress.
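The workload generation can be sketched as follows; the mean lifetime of 7,500 frames and the seed are illustrative choices, not values from the article:

```python
import random

def new_stream(traces, mean_frames, rng):
    """Draw one simulated stream: a uniformly random trace, a uniform
    starting phase into it, and an exponentially distributed lifetime
    in frames (the trace is wrapped around modulo its length during
    playback)."""
    trace = rng.choice(traces)
    phase = rng.randrange(len(trace))                       # uniform phase
    lifetime = max(1, round(rng.expovariate(1.0 / mean_frames)))
    return trace, phase, lifetime

rng = random.Random(42)
traces = [list(range(40000)) for _ in range(10)]  # stand-ins for the 10 traces
trace, phase, lifetime = new_stream(traces, mean_frames=7500, rng=rng)
print(0 <= phase < 40000, lifetime >= 1)  # True True
```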
In simulating the wireless links we follow the well-known Gilbert-Elliot model [20], [21]. We simulate each wireless link (consisting of up to $K(j)$ parallel code channels) as an independent discrete-time Markov chain with two states: "good" and "bad". (The two state model is a useful and accurate wireless channel characterization for the design of higher layer protocols [22], [23], [24]. It may be obtained from a more complex channel model, which may incorporate adaptive error control techniques (see Section V-E), using weak lumpability or stochastic bounding techniques [25].) We assume that all
parallelcodechannelsof a wirelesslink (to a particularclient) areeitherin thegoodstateor thebad
state. The Gilbert–Elliot channelmodel is typically characterizedby (1) the steadystateprobability
of beingin thegoodstate,denotedby ��� , ( ��� denotesthesteadystateprobabilityof beingin thebad
state),and(2) the averagesojourntime in the badchannelstate,denotedby � bad . For a flat fading
channelthe Gilbert–Elliot modelparameterscanbe expressedanalytically in termsof the Rayleigh
fadingparameters[26]. With � denotingthe ratio of the Rayleighfadingenvelopeto the local root
meansquarelevel, thesteadystateprobabilityof beingin thegoodstateis givenby ������������ (and
���S���`V������� ). Weconservatively assumeafademargin of 10dB (i.e., � = – 10dB) of theclient’s radio
front end(even20dB maybeassumed[27]). Thisgivesthesteadystateprobabilities��� = 0.99and ���= 0.01,which aretypical for burst–errorwirelesslinks [28]. Thesesteadystateprobabilitiesareused
throughout our simulations. The average sojourn time in the bad channel state, that is, the average time period for which the received signal is below a specified level, is given by t_bad = (e^{ρ²} - 1)/(ρ f_D √(2π)). Here, f_D denotes the maximum Doppler frequency, given by f_D = v/λ, where v denotes the speed of the wireless terminal and λ denotes the carrier wavelength. A UMTS system carrier wavelength of λ = 0.159 m (corresponding to a carrier frequency of 1.885 GHz) and a typical client speed of v = 0.2 m/s suggest t_bad = 32 msec. However, we chose a larger t_bad of one second for most of our simulations.
We did this for two reasons. First, to account for the detrimental effects of long-term shadowing, which may persist for hundreds of milliseconds or even up to seconds, as well as Rayleigh fading. Secondly, we observed in our simulations (see Section IV-B) that our prefetch protocol gives slightly larger client starvation probabilities for larger t_bad. Thus, a larger t_bad gives conservative performance results.
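The channel parameters above can be reproduced numerically. A minimal sketch (function and variable names are ours; we follow the paper's convention that a fade margin of M dB gives an envelope ratio ρ = 10^{-M/10}):

```python
import math

def gilbert_elliot_params(fade_margin_db, speed_mps, wavelength_m):
    """Derive two-state channel parameters from Rayleigh fading, cf. [26]."""
    rho = 10 ** (-fade_margin_db / 10.0)   # envelope-to-rms ratio (paper's convention)
    pi_g = math.exp(-rho ** 2)             # steady-state prob. of the good state
    pi_b = 1.0 - pi_g                      # steady-state prob. of the bad state
    f_d = speed_mps / wavelength_m         # maximum Doppler frequency [Hz]
    t_bad = (math.exp(rho ** 2) - 1.0) / (rho * f_d * math.sqrt(2.0 * math.pi))
    return pi_g, pi_b, t_bad

pi_g, pi_b, t_bad = gilbert_elliot_params(10.0, 0.2, 0.159)
print(round(pi_g, 2), round(pi_b, 2), round(t_bad * 1000))  # 0.99 0.01 32 (msec)
```

With these inputs the sketch reproduces π_g = 0.99, π_b = 0.01, and t_bad ≈ 32 msec as stated in the text.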
We set the channel error probabilities such that all the packets sent to a client in a slot are lost with probability q_g = 0.05 in the good channel state, and with probability q_b = 1 in the bad channel state. We assume that acknowledgments are never lost in the simulations.
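A minimal sketch of the two-state channel simulation; the per-slot transition probabilities are our own derivation from π_b and t_bad (geometric sojourn times plus global balance), using the 12-msec TDD slot length adopted later in the section:

```python
import random

SLOT = 0.012                           # TDD slot length [sec]
PI_B, T_BAD = 0.01, 1.0                # steady-state bad prob., mean bad sojourn [sec]
p_bg = SLOT / T_BAD                    # bad -> good (mean bad sojourn = T_BAD)
p_gb = p_bg * PI_B / (1.0 - PI_B)      # good -> bad (global balance pi_g*p_gb = pi_b*p_bg)

def simulate(num_slots, seed=1):
    """Return the fraction of slots the channel spends in the bad state."""
    rng = random.Random(seed)
    bad, bad_slots = False, 0
    for _ in range(num_slots):
        if bad:
            bad_slots += 1
            if rng.random() < p_bg:
                bad = False
        elif rng.random() < p_gb:
            bad = True
    return bad_slots / num_slots

print(simulate(2_000_000))  # close to PI_B = 0.01
```

Over a long run the empirical fraction of bad slots converges to π_b, confirming that the chosen transition probabilities realize the desired steady state.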
For our quantitative evaluation of the JSQ prefetch protocol we define two key measures of the performance of a streaming protocol. We define the bandwidth efficiency η of a wireless streaming protocol as the sum of the average rates of the streams supported by the base station divided by the total available effective transmission capacity of the base station, i.e.,

η = ( Σ_{j=1}^{J} r̄(j) ) / ( C · R · [ π_g (1 - q_g) + π_b (1 - q_b) ] ),

where r̄(j) denotes the average rate of stream j, C denotes the number of channels of the base station, and R denotes the transmission rate of one channel.
We define the client starvation probability P_loss as the long-run fraction of encoding information (packets) that misses its playback deadline at the clients. We conservatively consider all x_n(j) packets of frame n as deadline misses when at least one of the frame's packets misses its playback deadline. We warm up each simulation for a period determined with Schruben's test [29] and obtain confidence intervals on the client starvation probability P_loss using the method of batch means [30]. We run the
Fig. 2. Sample path plot: Prefetch buffer contents (in kBytes) of 3 clients as a function of time: Client 1 starts over.
Fig. 3. Sample path plot: Prefetch buffer contents (in kBytes) of 3 clients as a function of time: Client 1 experiences a bad channel.
simulations until the 90% confidence interval of P_loss is less than 10% of its point estimate.
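The bandwidth efficiency defined above can be evaluated for the scenarios considered below. A minimal sketch, assuming a per-channel transmission rate R = 64 kbit/sec (our assumption, chosen so that one channel carries one average-rate stream; the channel-state and loss probabilities are those of the simulation setup):

```python
def bandwidth_efficiency(num_streams, avg_rate, num_channels, channel_rate,
                         pi_g=0.99, pi_b=0.01, q_g=0.05, q_b=1.0):
    """eta = sum of average stream rates / effective base station capacity."""
    effective_capacity = num_channels * channel_rate * (
        pi_g * (1.0 - q_g) + pi_b * (1.0 - q_b))
    return num_streams * avg_rate / effective_capacity

# J = 13 streams of 64 kbit/sec over C = 15 channels of R = 64 kbit/sec each
print(round(bandwidth_efficiency(13, 64.0, 15, 64.0), 2))  # 0.92
```

Under these assumptions J = 13 streams yield η ≈ 0.92, matching the efficiency quoted for the 13-client experiments below.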
Unless stated otherwise, all the following experiments are conducted for the streaming of VBR MPEG video to clients with a buffer capacity of B = 32 kBytes and support for K = 15 parallel channels. The average lifetime of the video streams is set to T = 10 minutes unless stated otherwise. Throughout, the base station has a total of C = 15 channels available for video streaming.
A. Simulation of Refined Prefetch Protocol without Channel Probing
Figures 2 and 3 show typical sample path plots from the simulations. In this experiment we simulate the streaming of VBR MPEG-1 videos to clients with a buffer capacity of B = 32 kBytes. The figures show the prefetch buffer contents of three clients in kBytes. The plots illustrate the collaborative nature of the JSQ prefetch policy in conjunction with the rate adaptation of the wireless communication system. We observe from Figure 2 that at time t = 5.1 sec the buffer content of client 1 drops to zero. This is because the video stream of client 1 ends at this time; the client selects a new video stream and starts over with an empty prefetch buffer. Note that already at time t = 1.0 sec all frames of the "old" video stream have been prefetched into the client's buffer and the client continued to consume frames without receiving any transmissions. When the client starts over with an empty prefetch buffer, the JSQ prefetch policy gives priority to this client and quickly fills its prefetch buffer. While the prefetch buffer of client 1 is being filled, the JSQ prefetch policy reduces the transmissions to the other clients; they "live off" their prefetched reserves until client 1 catches up with them. Notice that the buffer of client 1 is then filled faster than the buffers of clients 2 and 3. This is because the JSQ prefetch policy strives to balance the lengths of the prefetched video segments (in seconds of video runtime) in the clients' buffers; clients 2 and 3 just happen to have video segments with lower bit rates in their buffers in the time period from 5.8 sec to 6.4 sec.
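The allocation behavior seen in these sample paths can be sketched in code. This is a simplified, hypothetical per-slot version of the JSQ policy: the reception constraint (3) and buffer constraint (4) are reduced to simple caps, and the runtime carried per packet is a made-up constant:

```python
def jsq_allocate(reserves, capacity, max_per_client, room):
    """Allocate `capacity` packets for one slot, one packet at a time,
    always to the client with the shortest prefetched reserve (in seconds
    of video runtime).  `room[j]` caps packets by remaining buffer space;
    `max_per_client` models the K parallel code channels per client."""
    runtime_per_packet = 0.01          # assumed seconds of runtime per packet
    reserves = list(reserves)
    alloc = [0] * len(reserves)
    for _ in range(capacity):
        # eligible clients: channel and buffer constraints not yet exhausted
        eligible = [j for j in range(len(reserves))
                    if alloc[j] < max_per_client and alloc[j] < room[j]]
        if not eligible:
            break
        j = min(eligible, key=lambda i: reserves[i])   # join the shortest queue
        alloc[j] += 1
        reserves[j] += runtime_per_packet
    return alloc

# client 0 just started over (empty buffer) and grabs the whole slot capacity
print(jsq_allocate([0.0, 4.0, 5.0], capacity=15, max_per_client=15,
                   room=[100, 100, 100]))  # [15, 0, 0]
```

The example mirrors the sample path of Figure 2: a client with an empty buffer receives all the transmission capacity while the others live off their reserves.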
Notice from Figure 3 that at time t = 0.4 sec the buffer occupancy of client 1 drops. This is because this client experiences a bad channel that persists for 2.1 sec (a rather long period chosen for illustration; in our numerical work we set the average sojourn time in the bad channel state to 1 sec), that is, the client is temporarily cut off from the base station. The prefetched reserves allow the client to continue playback during this period. As the prefetched reserves of client 1 dwindle, the JSQ prefetch policy allocates larger transmission capacities to it. This, however, cuts down on the transmissions to the other clients, causing their prefetched reserves to dwindle as well. This degrades the performance of the streaming protocol, as smaller prefetched reserves make client starvation more likely. The JSQ prefetch policy tends to waste transmission resources on clients that experience a bad channel. Adverse transmission conditions to just one client can decrease the prefetched reserves of all clients in the wireless cell. For this reason we have introduced the channel probing refinement in Section III-C. In the next section we conduct a detailed quantitative evaluation of the JSQ prefetch protocol with channel probing.
B. Simulation of Refined Prefetch Protocol with Channel Probing
To evaluate the performance of the simple channel probing scheme introduced in Section III-C we compare it with an ideal scenario where the base station has perfect knowledge of the states of the wireless channels. In the perfect knowledge scenario the base station schedules packets only for clients in the good channel state. The base station does not schedule any packet (not even a probing packet) for clients experiencing a bad channel. (Note that in this perfect knowledge scenario no packets are lost due to a bad channel; however, packets that are sent to clients with a good channel are lost with probability q_g.)
Figure 4 shows the client starvation probability P_loss without channel probing, with channel probing, and with perfect knowledge of the channel state as a function of the average sojourn time in the "bad" channel state. For this experiment the number of clients is fixed at J = 13 (and 12, respectively). We observe from the figure that over a wide range of channel conditions, channel probing is highly effective in reducing the probability of client starvation. For fast varying channels with an average sojourn time of t_bad = 12 msec in the bad channel state, prefetching with our simple channel probing scheme gives a loss probability of P_loss = 5.5 · 10^{-4}, while prefetching without channel probing gives P_loss = 1.1 · 10^{-3} (for J = 13). As t_bad increases, the client starvation probability for prefetching with channel probing increases only slightly; the gap between prefetching with channel probing and prefetching without channel probing, however, increases dramatically. For t_bad = 350 msec and larger the client starvation probability of prefetching with channel probing is over one order of magnitude smaller than the client starvation probability of prefetching without channel probing. We also observe that the client
Fig. 4. Client starvation probability P_loss as a function of the average sojourn time in the "bad" channel state for prefetching of VBR video without channel probing, with channel probing, and with perfect knowledge of the channel state.
Fig. 5. Client starvation probability P_loss as a function of the maximum number of channels K per client for prefetching of VBR video without channel probing, with channel probing, and with perfect knowledge of the channel state.
starvation probabilities achieved by our simple channel probing scheme are only slightly above the client starvation probabilities achieved with perfect knowledge of the channel state. This indicates that more sophisticated channel probing schemes could achieve only small reductions of the client starvation probability. A more sophisticated channel probing scheme could probe with multiple packets per slot when the affected client has a small prefetched reserve, and with one packet every n-th slot, n > 1, for clients with a large prefetched reserve. We also note that reducing the slot length of the TDD would make channel probing more effective at the expense of increased overhead; inspired by the UMTS standard [9] we used a short fixed TDD slot length of 12 msec throughout. We set the average sojourn time in the "bad" channel state to one second for all the following experiments.
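Our reading of the simple probing rule of Section III-C can be summarized in code. This is a minimal interpretation, not the paper's exact state machine, and all names are ours:

```python
class ProbedClient:
    """Track whether a client is scheduled normally or only probed.

    A client whose packets go unacknowledged is put into probing mode,
    where it is sent a single probing packet per slot; it returns to
    normal JSQ scheduling as soon as a probing packet is acknowledged.
    """
    def __init__(self):
        self.probing = False

    def packets_this_slot(self, jsq_allocation):
        # one packet per slot while probing, else the full JSQ allocation
        return 1 if self.probing else jsq_allocation

    def on_slot_result(self, acked):
        self.probing = not acked

c = ProbedClient()
print(c.packets_this_slot(5))   # 5: normal JSQ allocation
c.on_slot_result(acked=False)   # packets lost -> enter probing mode
print(c.packets_this_slot(5))   # 1: single probing packet
c.on_slot_result(acked=True)    # probe acknowledged -> resume scheduling
print(c.packets_this_slot(5))   # 5
```

The point of the rule is visible in the sketch: while a channel appears bad, at most one packet per slot can be wasted on it, which protects the reserves of the other clients.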
Figure 5 shows the client starvation probability P_loss as a function of the maximum number of parallel channels K that can be assigned to an individual client. The figure gives results for JSQ prefetching without channel probing, with channel probing, and with perfect knowledge of the channel state. We observe from Figure 5 that for all three approaches, P_loss drops by over one order of magnitude as K increases from one to two, allowing for collaborative prefetching through the lending and borrowing of channels. Now consider prefetching with J = 13 clients. For prefetching with channel probing and with perfect knowledge of the channel state, P_loss drops steadily as K increases. For prefetching without channel probing, however, P_loss increases as K grows larger than two; that is, allowing for more extensive lending and borrowing of channels is detrimental to performance in this scenario.
This is because JSQ prefetching without channel probing tends to waste transmission channels on a client experiencing a persistent bad channel. This reduces the prefetched reserves of all clients in the cell, thus increasing the likelihood of client starvation. The larger the number of parallel channels K
Fig. 6. Client starvation probability P_loss as a function of the bandwidth efficiency η (obtained by varying the number of clients J) for VBR video and prefetch buffer B = 32 kBytes.
Fig. 7. Client starvation probability P_loss as a function of the client buffer capacity B for VBR video.
that can be assigned to an individual client, the larger this waste of transmission channels (resulting in a larger P_loss). Now consider prefetching with J = 7 clients. P_loss drops both for prefetching without and with channel probing as K increases to nine. As K grows larger than nine, however, P_loss increases for prefetching without channel probing. This is because in this low load scenario a client experiencing a bad channel may occupy nine out of the 15 available channels without doing much harm to the prefetched reserves of the other six clients. (The noise level, however, is unnecessarily increased.) As K grows larger than nine, a client experiencing a persistent bad channel tends to reduce the prefetched reserves of the other clients. Another important observation from Figure 5 is that already a low-cost client with support for a few parallel channels allows for effective prefetching.
Figure 6 shows the client starvation probability P_loss as a function of the bandwidth efficiency η. The plots are obtained by varying the number of clients J. Noteworthy is again the effectiveness of the simple channel probing scheme. The client starvation probability P_loss achieved with channel probing is (i) generally over one order of magnitude smaller than without channel probing, and (ii) only slightly larger than with perfect knowledge of the channel state. Throughout the remainder of this article we use prefetching with channel probing. Importantly, the results in Figure 6 indicate that a crude admission control criterion that limits the bandwidth efficiency to less than 0.9, say, is highly effective in ensuring small client starvation probabilities. We note, however, that more research is needed on admission control for streaming in wireless environments.
Figure 7 shows the client starvation probability P_loss as a function of the client buffer capacity B. The results demonstrate the dramatic improvement in performance that comes from prefetching. For J
Fig. 8. Client starvation probability P_loss as a function of the bandwidth efficiency η (obtained by varying the number of clients J) for B = 32 kBytes for VBR MPEG-1, CBR, and H.263 video.
Fig. 9. Client starvation probability P_loss as a function of the client buffer capacity B for J = 13 streams of VBR MPEG-1, CBR, and H.263 video.
= 13 ongoing VBR streams the client starvation probability drops by over two orders of magnitude as the client buffers increase from 8 kBytes to 128 kBytes. (A buffer of 128 kBytes can hold on average 16-second segments of the VBR videos with an average rate of r̄ = 64 kbit/sec.) With client buffers of B = 128 kBytes and J = 13 ongoing streams our prefetch protocol achieves a client starvation probability of less than 10^{-4} and a bandwidth efficiency of 90%! Our protocol achieves this remarkable performance for the streaming of bursty VBR videos with a typical peak-to-mean ratio of the frame sizes of ten. In the long run each wireless link is in the "bad" state (where all packets are lost) for one percent of the time; the average sojourn time in the "bad" state is one second.
Figure 8 shows the client starvation probability P_loss as a function of the bandwidth efficiency η for the streaming of VBR MPEG-1 video, CBR video, and H.263 video. The prefetch buffer is fixed at B = 32 kBytes in this experiment. Figure 9 shows the client starvation probability P_loss as a function of the client buffer capacity B for the streaming of J = 13 streams of VBR MPEG-1 video, CBR video, and H.263 video. We observe from the plots that H.263 video gives generally smaller client starvation probabilities than VBR MPEG-1 video. This is primarily because H.263 video has, for a given average bit rate, larger frame sizes x_n(j) and correspondingly larger frame periods t_n(j) than MPEG video [4]. For H.263 video the prefetch protocol therefore has more freedom in scheduling the frames' packets, resulting in smaller client starvation probabilities. Moreover, the used H.263 traces are less variable than the VBR MPEG-1 traces. We also observe from the plots that CBR video gives smaller client starvation probabilities than the very variable VBR MPEG-1 video.
Table I gives the client starvation probability P_loss as a function of the average stream lifetime T for
TABLE I
CLIENT STARVATION PROBABILITY P_loss AS A FUNCTION OF THE AVERAGE STREAM LIFETIME T FOR VBR VIDEO.

T [sec]            10    50    100   600   1200
P_loss [10^{-4}]   45    7.3   4.6   4.4   2.1
TABLE II
CLIENT STARVATION PROBABILITY P_loss AS A FUNCTION OF THE START-UP LATENCY FOR VBR VIDEO.

start-up lat. [msec]   0     40    80    120   400
P_loss [10^{-5}]       7.7   5.0   3.6   2.2   0.7
the streaming of VBR MPEG-1 video to J = 13 clients, each with a buffer capacity of B = 64 kBytes. Our prefetching protocol performs very well for stream lifetimes on the order of minutes or longer. For stream lifetimes shorter than one minute P_loss increases considerably as the lifetime decreases. This is because stream lifetimes this short allow for very little time to build up prefetched reserves. Even for an average stream lifetime of T = 10 sec, however, prefetching reduces the client's starvation probability from 2.8 · 10^{-2} without any prefetching to 4.5 · 10^{-3} with prefetching. We note that (given a fixed mean) the distribution of the stream lifetime typically has little impact on the performance of the prefetch protocol. It is important, however, that the starting times of the streams do not collude too frequently. This is because the JSQ policy allocates more transmission resources to a new stream (with an empty prefetch buffer) to quickly build up its prefetched reserve (see Figure 2 for an illustration where the new stream's reserve is built up in less than one second). If several streams start at the same time it takes longer to build up their reserves. The longer build-up period makes losses more likely. This is because streams without sufficient reserves are more vulnerable to playback starvation due to wireless link failures (or bursts in the video traffic). In our simulations we draw random stream lifetimes from an exponential distribution and start up a new stream immediately after an ongoing stream has terminated. With this approach the starting times of the streams collude only infrequently. We believe that this is an appropriate model for an on-demand streaming service.
We observed in our simulations that losses typically occur right at the beginning of the video playback, when the client has no prefetched reserves. This motivates us to introduce a short start-up latency allowing the client to prefetch into its prefetch buffer for a short period of time without removing and displaying video frames. Table II gives the client starvation probability P_loss as a function of the start-up latency for J = 13 ongoing VBR streams and client buffers of B = 128 kBytes. We observe from Table II that very short start-up latencies reduce the client starvation probability significantly; a start-up latency of 400 msec, for instance, reduces the client starvation probability by roughly one
TABLE III
CLIENT STARVATION PROBABILITY P_loss AS A FUNCTION OF THE AVERAGE SPACING Δ BETWEEN CLIENT INTERACTIONS FOR VBR VIDEO.

Δ [sec]            10    50    100   250   500   1000   5000
P_loss [10^{-4}]   586   214   67    9.0   7.5   3.6    3.2
order of magnitude. With a start-up latency of 400 msec the client prefetches for 400 msec without removing video frames from its prefetch buffer; the first frame is removed and displayed at time 400 msec + t_{φ(j)}(j), where t_{φ(j)}(j) denotes the frame period of the first frame of the video stream.
V. EXTENSIONS OF PREFETCH PROTOCOL
A. Client Interactions
In this section we adapt our streaming protocol to allow for client interactions, such as pause/resume and temporal jumps. Suppose that the user for stream j pauses the video. Upon receiving notification of the action, the base station can simply remove stream j from consideration until it receives a resume message from the client. While the client is in the paused state, its prefetch buffer contents remain unchanged; a slightly more complex policy would be to fill the corresponding buffer once all the other ("unpaused") client buffers are full or reach a prespecified level.
Suppose that the user for stream j makes a temporal jump of f frames (corresponding to t_f seconds of video runtime) into the future. If t_f ≤ r(j) we discard f frames from the head of the prefetch buffer and set r(j) ← r(j) - t_f; the buffer occupancy b(j) is adjusted accordingly. If t_f > r(j) we set r(j) ← 0 and b(j) ← 0 and discard the prefetch buffer contents. Finally, suppose that the user for stream j makes a backward temporal jump. In this case we set r(j) ← 0 and b(j) ← 0 and discard the prefetch buffer contents.
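The update rules above can be written out directly. A sketch under our notation: the reserve r is tracked in seconds of runtime, the occupancy b in bytes, and the prefetch buffer is a hypothetical list of (runtime, size) pairs:

```python
def forward_jump(r, b, frames, t_f):
    """Forward temporal jump over t_f seconds of video runtime.
    `frames` is the prefetch buffer: a list of (runtime, size) pairs."""
    if t_f <= r:
        # discard the jumped-over frames from the head of the buffer
        while frames and t_f > 0:
            runtime, size = frames.pop(0)
            t_f -= runtime
            r -= runtime
            b -= size
        return max(r, 0.0), max(b, 0), frames
    return 0.0, 0, []          # jump beyond the reserve: flush the buffer

def backward_jump():
    return 0.0, 0, []          # always flush on a backward jump

# jump over 2 s of a 4 s reserve: 8 of 16 buffered frames are discarded
r, b, buf = forward_jump(4.0, 64_000, [(0.25, 4_000)] * 16, t_f=2.0)
print(r, b, len(buf))  # 2.0 32000 8
```

A jump that exceeds the reserve, like any backward jump, simply flushes the buffer, which is why frequent jumps degrade performance as noted below.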
In terms of performance, pauses actually improve performance because the video streams that remain active have more transmission capacity to share. Frequent temporal jumps, however, can degrade performance because prefetch buffers would frequently be set to zero. We now give some simulation results for client interactions. In our simulations we consider only forward and backward temporal jumps and ignore pauses, because pauses can only improve performance. We furthermore assume that t_f > r(j) for all forward temporal jumps. Thus, the prefetch buffer is set to zero whenever the corresponding user invokes such an interaction. Our results therefore give a conservative estimate of the actual performance. In our simulations, we assume that each user performs temporal jumps repeatedly, with the time between two successive jumps being exponentially distributed with mean Δ seconds. Table III gives the client starvation probability P_loss as a function of the average spacing Δ between temporal jumps. This experiment was conducted for the streaming of VBR MPEG video to clients with
buffers of B = 64 kBytes and support for K = 15 parallel channels. The bandwidth efficiency is fixed at η = 92% (J = 13). The average stream lifetime is fixed at T = 1200 sec. As we would expect, the loss probability increases as the rate of temporal jumps increases; however, the increase is not significant for a sensible number of temporal jumps.
B. Prefetching for Live Streams
Although we have developed our prefetching protocol primarily for the streaming of prerecorded continuous media, in this section we explore the prefetching of live (i.e., not prerecorded) continuous media. In particular, we focus on the transmission of the coverage of a live event, such as the video and audio feed from a conference, concert, or sporting event. In the case of a live video feed the delay budget from capturing a video frame to its display on the client's screen is typically on the order of hundreds of milliseconds. (One satellite hop, for instance, introduces a propagation delay of 240 msec.) We introduce an additional prefetch delay budget (on the order of a few hundred milliseconds) to allow for prefetching over the wireless link (i.e., the last hop). Specifically, let d(j) denote the prefetch delay budget for stream j in seconds. With prefetching, the beginning of the playback at the user is delayed by an additional d(j) seconds. However, we believe that an increase of the start-up delay of a few hundred milliseconds is acceptable to most users. (We note that the situation is different for interactive communication, e.g., video conferencing, where the total delay budget is typically limited to 250 msec.)
When commencing the transmission of a live stream the base station immediately starts to transmit the stream's packets. The client starts to play out the stream d(j) seconds after the stream transmission has commenced. The base station may support live streams and prerecorded streams at the same time. The base station schedules both the live streams and the prerecorded streams according to the refined JSQ prefetch policy with channel probing. Packets are scheduled for the stream j* with the smallest prefetched reserve r(j*) that satisfies (i) the client reception constraint (3) (which is set to one packet per slot if client j* is in probing mode) and (ii) the client buffer constraint (4). In addition, for a live stream the base station checks whether the packet considered for transmission exceeds the prefetch delay budget. Specifically, when considering a packet for transmission to client j* the base station verifies whether

r(j*) ≤ d(j*) - t_{F(j*)}(j*) / x_{F(j*)}(j*),     (5)

where F(j*) is the frame (number) of stream j* that is (partially) carried by the considered packet. This constraint ensures that the base station does not exceed the prefetch delay budget by scheduling a packet that has yet to be delivered from the live video source. The constraint (5) ensures that the base
TABLE IV
CLIENT STARVATION PROBABILITY P_loss AS A FUNCTION OF THE PREFETCH DELAY BUDGET d(j) FOR THE STREAMING OF live VBR MPEG-1 VIDEO.

d(j) [sec]         0.04   0.08   0.24   0.4   1.0   2.0   4.0    ∞
P_loss [10^{-3}]   17.8   15.5   12.3   9.4   5.9   1.9   0.56   0.46
station does not prefetch more than d(j) seconds "into the future".
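Constraint (5) amounts to a one-line check per candidate packet. A sketch under our reconstruction of the notation, where each packet of frame F advances the reserve by t_F / x_F seconds (the frame period divided by the frame's packet count):

```python
def within_delay_budget(reserve, budget, frame_period, frame_packets):
    """True if sending one more packet of the current frame keeps the
    prefetched reserve r(j*) within the prefetch delay budget d(j*),
    cf. constraint (5)."""
    return reserve <= budget - frame_period / frame_packets

# reserve 0.38 s, budget 0.4 s, a 40 msec frame split into 4 packets
print(within_delay_budget(0.38, 0.4, 0.04, 4))   # True  (0.38 <= 0.39)
print(within_delay_budget(0.395, 0.4, 0.04, 4))  # False
```

A packet failing this check would carry content the live source has not yet produced within the budget, so the scheduler simply skips that stream for the slot.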
In our simulation study we consider a scenario where all ongoing streams are live streams. (Note that our performance results are thus somewhat conservative compared to a more realistic scenario where the base station supports a mixture of live streams and prerecorded streams. This is because the base station is not constrained by the prefetch delay budget (5) when prefetching for the prerecorded streams.) We consider the streaming of live VBR MPEG-1 streams to J = 12 clients, each with a buffer capacity of B = 32 kBytes and support for K = 15 parallel channels. The average lifetime of the video streams is set to T = 10 minutes. The base station has a total of C = 15 channels available for video streaming; the bandwidth efficiency is fixed at η = 0.85. Table IV gives the client starvation probability P_loss as a function of the prefetch delay budget d(j) (which is the same for all streams). In the considered scenario, prefetching with a prefetch delay budget of 400 msec (the typical delay requirement for voice over IP) gives a client starvation probability of less than 10^{-2} (a typical loss requirement for MPEG-4 video). The client starvation probability for prefetching of live content with a prefetch delay budget of four seconds is almost as small as for the streaming of prerecorded content without a prefetch delay budget constraint.
C. Uplink Streaming
In this section we outline the streaming of prerecorded continuous media from multiple clients to a central base station. We assume a multi-rate CDMA system with a Time Division Duplex (TDD) timing structure. For every uplink stream a prefetch buffer is allocated in the base station. The base station tracks the prefetch buffer contents and schedules the uplink packet transmissions according to the JSQ policy. The base station sends out the uplink transmission schedule in the forward (downlink) slot. In the subsequent backward (uplink) slot the clients send the scheduled packets to the base station; pseudo-noise codes are used for these asynchronous transmissions. The base station processes the received packets, computes the next uplink transmission schedule, and sends it out in the following forward slot.
When packets are missing at the end of the backward slot (i.e., when the schedule or a packet was lost) the affected client is probed. While being probed, the affected client sends one (probing) packet per backward slot. The probing terminates when a probing packet is received by the base station.
D. Prefetching in Third Generation CDMA Systems and TDMA Systems
We have discussed the JSQ prefetch policy in the context of a second generation CDMA IS-95 (Rev. B) system, where rate adaptation is achieved through code aggregation, that is, by dynamically assigning multiple code channels to one particular client (stream). We have considered a scenario where the base station uses C orthogonal codes for the downlink streaming service and client j can receive (and process) K code channels in parallel. JSQ prefetching is equally well suited for the rate adaptation techniques used in third generation CDMA systems. The third generation North American CDMA standard, cdma2000, adapts data rates by employing code aggregation and variable spreading gains [31]. With a 5 MHz carrier, for instance, the data rate of one CDMA code channel can be adjusted from 9.6 kbit/sec to 614.4 kbit/sec by varying the spreading factor. With code aggregation a maximum data rate of 2,048 kbit/sec can be achieved. Similarly, the Universal Mobile Telecommunications System (UMTS) Wideband CDMA (WCDMA) standard for Europe and Japan adapts data rates by employing code aggregation in conjunction with variable spreading gains and code puncturing [9]. UMTS may run in the Frequency Division Duplex (FDD) or the Time Division Duplex (TDD) mode. In the TDD mode, which we assume throughout this article, time is divided into 10 msec frames, which are subdivided into 16 minislots of 625 µsec each. A minislot is spread with a unique code and may carry either forward or backward traffic. Each minislot has a total of 15 code channels, which may be dynamically assigned to the individual clients. To illustrate JSQ prefetching in these third generation CDMA systems, suppose that the forward (base station to clients) transmission capacity allocated to streaming services allows for the transmission of C packets in one forward slot of the TDD. The base station executes the JSQ scheduling algorithm to determine the number of packets c(j) scheduled for client j, j = 1, ..., J, in the upcoming forward slot. The spreading factors and code channels for the forward slot are assigned according to the transmission schedule c(j), j = 1, ..., J.
The JSQ prefetch policy is also suited for the rate adaptation techniques of wireless Time Division Multiple Access (TDMA) systems. In Generalized Packet Radio Service (GPRS) and Enhanced GPRS (EGPRS), the GSM standards for packet data services, rate adaptation is achieved through adaptive coding and time slot aggregation, that is, by assigning multiple time slots within a GSM frame to a particular client (stream) [32]. Up to eight time slots per GSM frame can be allocated to a client, giving maximum data rates of 160 kbit/sec in GPRS and 473 kbit/sec in EGPRS. Similarly, in GPRS-136, the IS-136 TDMA standard for packet data services, rate adaptation is achieved through adaptive modulation and time slot aggregation [33]. Up to three time slots per 20 msec TDMA frame can be allocated to a client, giving a maximum data rate of 44.4 kbit/sec. Suppose that the base station allocates C time slots of the forward portion of the TDMA frame to continuous media streaming. The base station executes the JSQ scheduling algorithm to determine the number of time slots (packets)
c(j) assigned to client j, j = 1, ..., J, in the upcoming TDMA frame. The base station then assigns c(j) time slots to client j in the forward portion of the TDMA frame (using, for instance, the Dynamic Slot Assignment (DSA++) protocol [34]). The prefetching protocol may be employed in a Frequency Division Multiple Access (FDMA) system with Time Division Duplex in an analogous fashion.
E. Physical Layer Refinements
In this section we outline how our prefetch protocol may be used in conjunction with physical layer techniques designed to improve the throughput over wireless channels. Recall from Section III that the prefetching protocol distinguishes two states of the wireless channel: a "good" state where packets are successfully received, decoded, and acknowledged, and a "bad" state where packets (and/or acknowledgments) are lost (due to an excessive number of bit errors or complete shadowing). The prefetch protocol may run on top of adaptive error control schemes employed at the physical layer. These schemes may adapt the forward error correction [35] or transmission power [8] based on pilot tone or signal to noise and interference measurements. As long as these techniques are able to successfully return an acknowledgment for a transmitted packet, the prefetching protocol "sees" a good channel and schedules packets according to the length (in runtime) of the prefetched reserve. When the physical layer techniques fail (due to severely deteriorated channel conditions or complete shadowing), the prefetching protocol switches to the probing mode.
An avenue for future research is to use interleaving schemes or turbo codes [36] in conjunction with the prefetching protocol. Interleaving or turbo codes could exploit the delay tolerance of clients with a large prefetched reserve to successfully decode packets under adverse channel conditions.
VI. RELATED WORK
There is a large body of literature on providing QoS in wireless environments. Much of the work in this area has focused on mechanisms for channel access; see Akyildiz et al. [37] for a survey. Choi and Shin [38] have recently proposed a comprehensive channel access and scheduling scheme for supporting real-time traffic and non-real-time traffic on the uplinks and downlinks of a wireless LAN.
Recently, packet fair scheduling algorithms that guarantee clients a fair portion of the shared transmission capacity have received a great deal of attention [39], [40], [41], [42]. These works adapt fair scheduling algorithms originally developed for wireline packet-switched networks to wireless environments. They address the key difference between scheduling and resource allocation in wireline and wireless environments: wireline links have a fixed transmission capacity, while wireless links experience location-dependent, time-varying, and bursty errors, which result in situations where the shared transmission capacity is temporarily available only to a subset of the clients. Another line of work
addressesthe efficiency of reliabledatatransferover wirelesslinks [43], [22]. Krunz andKim [44],
[45] studythepacket delayandlossdistributionsaswell astheeffective bandwidthof anon–off flow
over awirelesslink with automaticrepeatrequestandforwarderrorcorrection.
We note that to our knowledge none of the existing schemes for providing QoS in wireless environments takes advantage of the special properties (predictability and prefetchability) of prerecorded continuous media. Prerecorded continuous media, however, are expected to account for a large portion of future Internet traffic. There is an extensive literature on the streaming of prerecorded continuous media, in particular VBR video, over wireline packet-switched networks; see Krunz [46] as well as Reisslein and Ross [47] for a survey. In this literature a wide variety of smoothing and prefetching schemes is explored to efficiently accommodate VBR video on fixed bandwidth wireline links. The proposed approaches fall into two main categories: non-collaborative prefetching and collaborative prefetching. Non-collaborative prefetching schemes [48], [49], [50] smooth (i.e., reduce the peak rate and rate variability of) an individual VBR video stream. The smoothing is achieved by transmitting the video traffic according to a piecewise constant-rate transmission schedule. The piecewise constant-rate transmission schedule relies on prefetching some parts of the prerecorded video (also referred to as work-ahead) into the client buffer. The non-collaborative prefetching schemes compute (typically off-line) a transmission schedule that is as smooth as possible while ensuring that the client buffer neither overflows nor underflows. The video streams are then transmitted according to the individually precomputed transmission schedules. The statistical multiplexing of the individually smoothed video streams is studied in [51], [52].
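The buffer constraints that such smoothing schedules must respect can be illustrated with a small sketch. This is not any of the cited smoothing algorithms; it merely checks whether a single constant rate is feasible for a made-up frame-size trace (the sender idles when the client buffer is full, so only underflow can occur) and bisects for the smallest such rate.

```python
def feasible(rate, frames, buf):
    """True if sending at a constant `rate` (bits per frame period) never
    starves a client buffer of `buf` bits. The sender idles when the
    buffer is full, so overflow cannot occur by construction."""
    sent = consumed = 0.0
    for size in frames:
        sent = min(sent + rate, consumed + buf)  # cap at free buffer space
        consumed += size                         # client removes next frame
        if sent < consumed:                      # frame missed its deadline
            return False
    return True

# Illustrative (invented) frame sizes in bits and a 40 kbit client buffer.
frames = [8_000, 30_000, 12_000, 6_000, 25_000, 10_000]
buf = 40_000

# Feasibility is monotone in the rate, so bisect for the smallest
# feasible constant rate between 0 and the total stream size.
lo, hi = 0.0, float(sum(frames))
for _ in range(60):
    mid = (lo + hi) / 2.0
    lo, hi = (lo, mid) if feasible(mid, frames, buf) else (mid, hi)
```

For this trace the binding constraint is the large second frame: the cumulative demand after two frame periods is 38,000 bits, so the minimal constant rate converges to 19,000 bits per frame period, well below the 30,000-bit peak frame size.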
Collaborative prefetching schemes [53], [54], on the other hand, determine the transmission schedule of a video stream on-line as a function of all the other ongoing video streams. In particular, the collaborative prefetching schemes for wireline networks take the prefetch buffer contents at all the clients into consideration. We have built our prefetching protocol for wireless environments on the principle of collaborative prefetching for two main reasons. First, the time-varying, bursty, and location-dependent wireless link errors make it very difficult (if not impossible) to transmit according to a fixed transmission schedule that is pre-computed for an individual video stream by some non-collaborative prefetching scheme. Wireless systems experience situations where the shared transmission capacity is temporarily available only to a subset of the clients. At any given point in time there may be good transmission conditions on the wireless links to some clients while the transmission conditions are adverse on the wireless links to some other clients. It is therefore natural to build upon the principle of collaborative prefetching when designing a prefetching protocol for wireless environments. Secondly, it has been shown in the context of wireline networks that collaborative prefetching outperforms non-collaborative prefetching when multiple video streams share a single bottleneck link [47];
that is, collaborative prefetching achieves higher average link utilizations and smaller client starvation probabilities. Wireless networks and wireline networks have fundamentally different characteristics. However, streaming from a central base station to wireless clients is similar to streaming over a wireline bottleneck link. In both cases multiple video streams share a common resource. In the wireless setting this common resource is the downlink transmission capacity of the base station.
Among the collaborative prefetching schemes for wireline environments is a prefetching scheme based on the Join-the-Shortest-Queue (JSQ) principle developed by Reisslein and Ross [53]. Their scheme is designed for a Video on Demand service with VBR encoded fixed frame rate MPEG video over an ADSL network or the cable plant. The protocol proposed in this article differs from the protocol in [53] in two major aspects. First, the protocol in [53] is designed for a shared wireline link of fixed capacity. It does not handle the location-dependent, time-varying, and bursty errors that are typical for wireless environments. Secondly, the protocol in [53] is designed for fixed frame rate video. It assumes that all ongoing video streams have the same frame rate. Furthermore, it requires the synchronization of the frame periods of the ongoing streams. Our protocol in this article, on the other hand, does not require synchronization of the ongoing video streams. Our protocol accommodates video streams with different (and possibly time-varying) frame rates. It is thus well suited for H.263 encodings, which are expected to play an important role in video streaming in wireless environments.
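The core of a JSQ allocation step can be sketched as follows. This is a deliberate simplification (no channel probing, no per-client channel limit, invented reserve units), not the protocol of [53] or of this article: each packet slot in a frame period goes to the client whose prefetched reserve is currently smallest.

```python
def jsq_allocate(reserves, slots, gain=1):
    """One frame period of Join-the-Shortest-Queue allocation (sketch).

    `reserves` maps client id -> prefetched reserve (in arbitrary units
    of playback time); each of the `slots` packet transmissions goes to
    the client with the smallest reserve, and a delivered packet adds
    `gain` units to that client's reserve."""
    reserves = dict(reserves)  # work on a copy
    order = []
    for _ in range(slots):
        client = min(reserves, key=reserves.get)
        order.append(client)
        reserves[client] += gain
    return order, reserves
```

Starting from reserves a=5, b=1, c=3 and four packet slots, the policy serves b three times (its reserve climbing 1, 2, 3) before c becomes co-shortest and receives the fourth slot; this is the mechanism by which capacity flows to the clients closest to starvation.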
Elaoud and Ramanathan [55] propose a scheme for providing network level QoS to flows in a wireless CDMA system. Their scheme dynamically adjusts the signal to interference and noise ratio requirements of flows based on MAC packet deadlines and channel conditions. The Simultaneous MAC Packet Transmission (SMPT) scheme of Fitzek et al. [56] provides transport level QoS by exploiting rate adaptation techniques of CDMA systems. The SMPT scheme delivers transport layer segments (e.g., UDP or TCP segments, which are divided into several MAC packets) with high probability within a permissible delay bound. Our work in this article differs from the network/transport level schemes [55], [56] in several aspects. First, [55], [56] propose decentralized schemes for backward (uplink) transmissions, that is, the schemes are designed for uncoordinated transmissions from distributed clients to a central base station. Secondly, there is no prefetching in [55], [56]; the SMPT scheme resorts to rate adaptation (i.e., parallel packet transmissions) only to recover from gaps caused by errors on the wireless link within a given TCP or UDP segment. Moreover, [55], [56] do not take the characteristics of the application layer traffic into consideration; the scheme [55] operates on one MAC packet at a time and SMPT [56] operates on one TCP or UDP segment at a time. Our protocol in this article, on the other hand, exploits two special properties of the prerecorded continuous media streaming traffic: (1) the client consumption rates over the duration of the playback are known before the streaming commences, and (2) while the continuous media stream is being played out at the client, parts of the stream
can be prefetched into the client's memory.
We note in closing that Chaskar and Madhow [57] and Andrews et al. [58] study the sharing of the base station's downlink transmission capacity by multiple flows. Chaskar and Madhow [57] propose a link shaping scheme where the flows are assigned individual packet slots or entire rows in an interleaver block matrix. Andrews et al. [58] propose a Modified Largest Weighted Delay First (M-LWDF) scheme, which is designed to provide throughput and/or delay assurances. In the M-LWDF scheme the base station maintains a transmission queue for each ongoing downlink flow. Roughly speaking, packets are scheduled for the flow with (i) the largest transmission queue, and (ii) relatively good transmission conditions.
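A common way to combine criteria (i) and (ii) is to serve, in each slot, the flow maximizing a product of a per-flow weight, its head-of-line delay, and its current channel rate. The sketch below uses this product form with illustrative numbers and should not be read as the exact rule of [58].

```python
def mlwdf_pick(flows):
    """M-LWDF scheduling decision (illustrative sketch): serve the flow
    maximizing weight * head-of-line delay * current channel rate.

    `flows` is a list of tuples (name, weight, hol_delay_s, rate_bps);
    all numbers used with this function here are made up."""
    name, *_ = max(flows, key=lambda f: f[1] * f[2] * f[3])
    return name
```

Note how the rule trades off the two criteria: a flow with a moderate backlog but a very good channel can be preferred over a flow with a longer delay on a poor channel, which is exactly the bias toward clients in good channel states described above.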
VII. CONCLUSION
We have developed a high performance prefetching protocol for the streaming of prerecorded continuous media in a cellular wireless system. Our prefetching protocol can be employed on top of any of the rate adaptation techniques of wireless communication systems. Our protocol accommodates CBR and VBR encodings as well as fixed frame rate and variable frame rate encodings. Channel probing is crucial for the performance of our protocol. With channel probing the base station allocates transmission resources in a judicious manner, avoiding the allocation of large portions of the available transmission capacity to clients experiencing adverse transmission conditions.
In our ongoing work we study service differentiation among the ongoing streams. In a crude service differentiation scheme the base station prefetches for low priority clients only if the prefetched reserves of the high priority clients have reached prespecified levels. In our ongoing work we are also extending the prefetching protocol to scalable (layered) encoded video. With layered encoding the video is typically encoded into a base layer, which provides a basic video quality, and one (or more) enhancement layer(s), which improve the video quality. Layered encoded video has the decoding constraint that an enhancement layer can only be decoded if all lower layers are given. Layered encoded video makes it possible to provide different video qualities to clients with different decoding and display capabilities. Importantly, layered encoded video also allows for graceful degradation of the video quality. During periods of bad channel conditions the client may run out of the enhancement layer(s) and continue displaying the lower quality base layer video (provided a sufficiently long segment of the base layer has been prefetched). With layered encoded video there is a trade-off between prefetching (1) a shorter segment of the complete (base layer + enhancement layer(s)) stream, or (2) a longer base layer segment. We are exploring this trade-off and are developing policies that prefetch the enhancement layer(s) only if a minimum-length base layer segment has been prefetched.
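The threshold policy just outlined can be stated in one line; the 8-second minimum base layer segment below is purely illustrative, not a value from this article.

```python
def next_layer_to_fetch(base_reserve_s, min_base_s=8.0):
    """Layered prefetching policy (sketch of the threshold rule above):
    keep topping up the base layer until at least `min_base_s` seconds
    of it are buffered; only then spend transmission capacity on the
    enhancement layer(s). The threshold value is illustrative."""
    return "base" if base_reserve_s < min_base_s else "enhancement"
```

Under this rule a client dropping below the threshold during bad channel conditions automatically diverts capacity back to the base layer, preserving the graceful degradation property: playback continues at base quality while the enhancement reserve drains.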
Acknowledgment: We are grateful to Prof. Adam Wolisz for providing the environment that allowed us to pursue the work presented in this article.
REFERENCES
[1] G. A. Gibson, J. S. Vitter, and J. Wilkes, "Storage and I/O issues in large-scale computing," in ACM Workshop on Strategic Directions in Computing Research, ACM Computing Surveys, 1996.
[2] Inktomi Inc., "Streaming media caching white paper," Technical Report, Inktomi Corporation, 1999.
[3] L. Roberts and M. Tarsala, "Inktomi goes wireless; forms alliances," in CBS MarketWatch, March 14, 2000.
[4] F. Fitzek and M. Reisslein, "MPEG-4 and H.263 video traces for network performance evaluation," IEEE Network, accepted for publication, 2001. Preprint and video traces available at http://www.eas.asu.edu/~mre.
[5] G. Karlsson, "Asynchronous transfer of video," IEEE Communications Magazine, vol. 34, no. 8, pp. 106-113, Feb. 1996.
[6] H. Liu and M. El Zarki, "H.263 video transmission over wireless networks using hybrid ARQ," IEEE Journal on Selected Areas in Communications, vol. 15, no. 9, pp. 1775-1785, Dec. 1997.
[7] S. Nanda, K. Balachandran, and S. Kumar, "Adaptation techniques in wireless packet data services," IEEE Communications Magazine, vol. 38, no. 1, pp. 54-64, Jan. 2000.
[8] EIA/TIA-95 Rev. B, "Mobile station-base station compatibility standard for dual-mode wideband spread spectrum cellular systems," 1997.
[9] UMTS 30.03, "Universal Mobile Telecommunications System (UMTS); Selection procedures for the choice of radio transmission technologies of the UMTS."
[10] Y. Wang and Q. Zhu, "Error control and concealment for video communication: A review," Proceedings of the IEEE, vol. 86, no. 5, pp. 974-997, May 1998.
[11] L. Georgiadis, R. Guerin, and A. K. Parekh, "Optimal multiplexing on a single link: Delay and buffer requirements," in Proceedings of IEEE Infocom '94, Toronto, Canada, June 1994.
[12] F. Wegner, "Personal communication. Siemens AG, Mobile Radio Access Simulation Group, Berlin, Germany," May 2000.
[13] M. Bronzel et al., "Integrated broadband mobile system (IBMS) featuring wireless ATM," in Proceedings of ACTS Mobile Communication Summit, Aalborg, Denmark, Oct. 1998.
[14] M. D. Yacoub, Foundations of Mobile Radio Engineering, CRC Press, 1993.
[15] M. Zorzi and R. R. Rao, "Error control and energy consumption in communications for nomadic computing," IEEE Transactions on Computers, vol. 46, no. 3, pp. 279-289, Mar. 1997.
[16] M. W. Garret, Contributions toward Real-Time Services on Packet Networks, Ph.D. thesis, Columbia University, May 1993.
[17] M. Krunz, R. Sass, and H. Hughes, "Statistical characteristics and multiplexing of MPEG streams," in Proceedings of IEEE Infocom '95, April 1995, pp. 455-462.
[18] O. Rose, "Statistical properties of MPEG video traffic and their impact on traffic modelling in ATM systems," Tech. Rep. 101, University of Wuerzburg, Institute of Computer Science, Feb. 1995.
[19] F. Fitzek and M. Reisslein, "A prefetching protocol for continuous media streaming in wireless environments (extended version)," Tech. Rep. TKN-00-05, Technical University Berlin, Dept. of Electrical Eng., Germany, June 2001. Available at http://www-tkn.ee.tu-berlin.de/~fitzek and http://www.eas.asu.edu/~mre.
[20] E. N. Gilbert, "Capacity of a burst-noise channel," Bell Systems Technical Journal, vol. 39, pp. 1253-1266, Sept. 1960.
[21] E. O. Elliot, "Estimates of error rates for codes on burst-noise channels," Bell Systems Technical Journal, vol. 42, pp. 1977-1997, Sept. 1963.
[22] P. Bhagwat, P. Bhattacharya, A. Krishna, and S. K. Tripathi, "Using channel state dependent packet scheduling to improve TCP throughput over wireless LANs," ACM Wireless Networks, vol. 3, pp. 91-102, 1997.
[23] M. Zorzi, R. R. Rao, and L. B. Milstein, "On the accuracy of a first-order Markovian model for data block transmission on fading channels," in Proceedings of IEEE International Conference on Universal Personal Communications, Nov. 1995, pp. 211-215.
[24] H. S. Wang and N. Moayeri, "Finite-state Markov model: a useful model for radio communication channels," IEEE Transactions on Vehicular Technology, vol. 44, pp. 163-177, Feb. 1995.
[25] R. R. Rao, "Higher layer perspectives on modeling the wireless channel," in Proceedings of IEEE ITW, Killarney, Ireland, June 1998, pp. 137-138.
[26] G. L. Stuber, Principles of Mobile Communications, Kluwer Academic Publishers, second edition, 2001.
[27] C. Chien, M. B. Srivastava, R. Jain, P. Lettieri, V. Aggarwal, and R. Sternowski, "Adaptive radio for multimedia wireless links," IEEE Journal on Selected Areas in Communications, vol. 17, no. 5, pp. 793-813, May 1999.
[28] C. Hsu, A. Ortega, and M. Khansari, "Rate control for robust video transmission over burst-error wireless channels," IEEE Journal on Selected Areas in Communications, vol. 17, no. 5, pp. 756-773, May 1999.
[29] L. W. Schruben, "Detecting initialization bias in simulation output," Operations Research, vol. 30, pp. 569-590, 1982.
[30] G. S. Fishman, Principles of Discrete Event Simulation, Wiley, 1991.
[31] D. N. Knisely, S. Kumar, S. Laha, and S. D. Nanda, "Evolution of wireless data services: IS-95 to CDMA 2000," IEEE Communications Magazine, vol. 36, no. 10, pp. 140-149, Oct. 1998.
[32] ETSI GSM 03.60, "Digital cellular telecommunications system (Phase 2+); General Packet Radio Service (GPRS); Service description; Stage 2."
[33] K. Balachandran, R. Ejzak, S. Nanda, S. Vitebskiy, and S. Seth, "GPRS-136: High-rate packet data service for North American TDMA digital cellular systems," IEEE Personal Communications, vol. 6, no. 3, pp. 34-47, June 1999.
[34] G. Anastasi, L. Lenzini, E. Mingozzi, A. Hettich, and A. Kramling, "MAC protocols for wideband wireless local access: Evolution towards wireless ATM," IEEE Personal Communications, vol. 5, no. 5, pp. 53-64, Oct. 1998.
[35] H. Liu, H. Ma, M. El Zarki, and S. Gupta, "Error control schemes for networks: An overview," Mobile Networks and Applications, vol. 2, no. 2, pp. 167-182, Oct. 1997.
[36] T. Duman and M. Salehi, "New performance bounds for turbo codes," IEEE Transactions on Communications, vol. 46, no. 6, pp. 717-723, June 1998.
[37] I. F. Akyildiz, J. McNair, L. C. Martorell, R. Puigjaner, and Y. Yesha, "Medium access control protocols for multimedia traffic in wireless networks," IEEE Network, vol. 13, no. 4, pp. 39-47, July/August 1999.
[38] S. Choi and K. G. Shin, "A unified wireless LAN architecture for real-time and non-real-time communication services," IEEE/ACM Transactions on Networking, vol. 8, no. 1, pp. 44-59, Feb. 2000.
[39] C. Fragouli, V. Sivaraman, and M. B. Srivastava, "Controlled multimedia wireless link sharing via enhanced class-based queueing with channel-state-dependent packet scheduling," in Proceedings of IEEE Infocom '98, San Francisco, CA, Apr. 1998.
[40] T. S. E. Ng, I. Stoica, and H. Zhang, "Packet fair queueing algorithms for wireless networks with location dependent errors," in Proceedings of IEEE Infocom '98, San Francisco, CA, Apr. 1998, pp. 1103-1111.
[41] S. Lu, T. Nandagopal, and V. Bharghavan, "A wireless fair service algorithm for packet cellular networks," in Proceedings of ACM/IEEE MobiCom '98, Dallas, TX, Oct. 1998, pp. 10-20.
[42] P. Ramanathan and P. Agrawal, "Adapting packet fair queueing algorithms to wireless networks," in Proceedings of ACM MobiCom 1998, Dallas, TX, Oct. 1998.
[43] E. Amir, H. Balakrishnan, S. Seshan, and R. Katz, "Efficient TCP over networks with wireless links," in Proceedings of ACM Conference on Mobile Computing and Networking, Berkeley, CA, Dec. 1995.
[44] J. G. Kim and M. Krunz, "Bandwidth allocation in wireless networks with guaranteed packet loss performance," IEEE/ACM Transactions on Networking, vol. 8, no. 3, pp. 337-349, June 2000.
[45] M. M. Krunz and J. G. Kim, "Fluid analysis of delay and packet discard performance for QoS support in wireless networks," IEEE Journal on Selected Areas in Communications, vol. 19, no. 2, pp. 384-395, Feb. 2001.
[46] M. Krunz, "Bandwidth allocation strategies for transporting variable-bit-rate video traffic," IEEE Communications Magazine, vol. 37, no. 1, pp. 40-46, Jan. 1999.
[47] M. Reisslein and K. W. Ross, "High-performance prefetching protocols for VBR prerecorded video," IEEE Network, vol. 12, no. 6, pp. 46-55, Nov/Dec 1998.
[48] W. Feng and J. Rexford, "A comparison of bandwidth smoothing techniques for the transmission of prerecorded compressed video," in Proceedings of IEEE Infocom, Kobe, Japan, Apr. 1997, pp. 58-67.
[49] J. Salehi, Z. Zhang, J. Kurose, and D. Towsley, "Supporting stored video: Reducing rate variability and end-to-end resource requirements through optimal smoothing," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 397-410, Aug. 1998.
[50] Z. Jiang and L. Kleinrock, "A general optimal video smoothing algorithm," in Proceedings of IEEE Infocom '98, San Francisco, CA, Apr. 1998, pp. 676-684.
[51] M. Grossglauser, S. Keshav, and D. Tse, "RCBR: A simple and efficient service for multiple time-scale traffic," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 741-755, 1997.
[52] Z. Zhang, J. Kurose, J. Salehi, and D. Towsley, "Smoothing, statistical multiplexing and call admission control for stored video," IEEE Journal on Selected Areas in Communications, vol. 13, no. 6, pp. 1148-1166, Aug. 1997.
[53] M. Reisslein and K. W. Ross, "A join-the-shortest-queue prefetching protocol for VBR video on demand," in Proceedings of IEEE International Conference on Network Protocols (ICNP), Atlanta, GA, Oct. 1997, pp. 63-72.
[54] S. Bakiras and V. O. K. Li, "Smoothing and prefetching video from distributed servers," in Proceedings of IEEE International Conference on Network Protocols (ICNP), Toronto, Canada, Oct. 1999, pp. 311-318.
[55] M. Elaoud and P. Ramanathan, "Adaptive allocation of CDMA resources for network-level QoS assurance," in Proceedings of ACM MobiCom 2000, Boston, MA, Aug. 2000.
[56] F. H. P. Fitzek, B. Rathke, M. Schlager, and A. Wolisz, "Quality of service support for real-time multimedia applications over wireless links using the simultaneous MAC-packet transmission (SMPT) in a CDMA environment," in Proceedings of 5th International Workshop on Mobile Multimedia Communications (MoMuC), Berlin, Germany, Oct. 1998, pp. 367-378.
[57] H. M. Chaskar and U. Madhow, "Statistical multiplexing and QoS provisioning for real-time traffic on wireless downlinks," IEEE Journal on Selected Areas in Communications, vol. 19, no. 2, pp. 347-354, Feb. 2001.
[58] M. Andrews, K. Kumaran, K. Ramanan, A. Stolyar, P. Whiting, and R. Vijayakumar, "Providing quality of service over a shared wireless link," IEEE Communications Magazine, vol. 39, no. 2, pp. 150-154, Feb. 2001.
LIST OF FIGURES
1 Architecture: A central base station streams prerecorded continuous media to wireless (and possibly mobile) clients within a wireless cell.
2 Sample path plot: Prefetch buffer contents (in kBytes) of 3 clients as a function of time: Client 1 starts over.
3 Sample path plot: Prefetch buffer contents (in kBytes) of 3 clients as a function of time: Client 1 experiences a bad channel.
4 Client starvation probability P_loss as a function of the average sojourn time in the "bad" channel state for prefetching of VBR video without channel probing, with channel probing, and with perfect knowledge of the channel state.
5 Client starvation probability P_loss as a function of the maximum number of channels per client for prefetching of VBR video without channel probing, with channel probing, and with perfect knowledge of the channel state.
6 Client starvation probability P_loss as a function of the bandwidth efficiency (obtained by varying the number of clients) for VBR video and a prefetch buffer of 32 kByte.
7 Client starvation probability P_loss as a function of the client buffer capacity for VBR video.
8 Client starvation probability P_loss as a function of the bandwidth efficiency (obtained by varying the number of clients) for a prefetch buffer of 32 kBytes for VBR MPEG-1, CBR, and H.263 video.
9 Client starvation probability P_loss as a function of the client buffer capacity for 13 streams of VBR MPEG-1, CBR, and H.263 video.
LIST OF TABLES
I Client starvation probability P_loss as a function of the average stream lifetime for VBR video.
II Client starvation probability P_loss as a function of the start-up latency for VBR video.
III Client starvation probability P_loss as a function of the average spacing between client interactions for VBR video.
IV Client starvation probability P_loss as a function of the prefetch delay budget for the streaming of live VBR MPEG-1 video.
Frank Fitzek is working towards his Ph.D. in Electrical Engineering in the Telecommunication Networks Group at the Technical University Berlin, Germany. He received his Dipl.-Ing. degree in electrical engineering from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Germany, in 1997. His research interests are in the areas of multimedia streaming over wireless links and Quality of Service support in wireless CDMA systems.
Martin Reisslein is anAssistantProfessorin theDepartmentof ElectricalEngineeringatArizonaState
University, Tempe.He is affiliated with ASU’sTelecommunicationsResearchCenter. He receivedthe
Dipl.-Ing. (FH) degreefrom theFachhochschuleDieburg, Germany, in 1994,andtheM.S.E.degree
from theUniversityof Pennsylvania,Philadelphia,in 1996.Both in electricalengineering.Hereceived
his Ph.D.in systemsengineeringfrom theUniversity of Pennsylvania in 1998. During theacademic
year 1994-1995he visited the University of Pennsylvania as a Fulbright scholar. From July 1998
throughOctober2000he wasa scientistwith the GermanNationalResearchCenterfor Information
Technology(GMD FOKUS),Berlin. While in Berlin he wasteachingcourseson performanceeval-
uationandcomputernetworking at the TechnicalUniversity Berlin. He hasserved on the Technical
ProgramCommitteesof IEEE Infocom, IEEE Globecom,andthe IEEE InternationalSymposiumon
ComputerandCommunications.He hasorganizedsessionsat the IEEE ComputerCommunications
Workshop(CCW). His researchinterestsarein theareasof InternetQuality of Service,wirelessnet-
working, andoptical networking. He is particularlyinterestedin traffic managementfor multimedia
serviceswith statisticalQuality of Servicein theInternetandwirelesscommunicationsystems.