IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 24, NO. 5, OCTOBER 2016 3043

In-Network Congestion Control for Multirate Multicast

Georgios S. Paschos, Senior Member, IEEE, Chih-Ping Li, Eytan Modiano, Fellow, IEEE, Kostas Choumas, and Thanasis Korakis

Abstract—We present a novel control scheme that dynamically optimizes multirate multicast. By computing the differential backlog at every node, our scheme adaptively allocates transmission rates per session/user pair in order to maximize throughput. An important feature of the proposed scheme is that it does not require source cooperation or centralized calculations. This methodology leads to efficient and distributed algorithms that scale gracefully and can be embraced by low-cost wireless devices. Additionally, it is shown that maximization of sum utility is possible by the addition of a virtual queue at each destination node of the multicast groups. The virtual queue captures the desire of the individual user and helps in making the correct resource allocation to optimize total utility. Under the operation of the proposed schemes, backlog sizes are deterministically bounded, which provides delay guarantees on delivered packets. To illustrate its practicality, we present a prototype implementation in the NITOS wireless testbed. The experimental results verify that the proposed schemes achieve maximum performance while maintaining low complexity.

Index Terms—Congestion control, multicast, network utility maximization, stability, throughput.

Manuscript received January 18, 2015; revised August 27, 2015; accepted November 01, 2015; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor D. Manjunath. Date of publication December 11, 2015; date of current version October 13, 2016. This work was supported by the NSF under Grants CNS-0915988 and CNS-1116209 and the ONR under Grant N00014-12-1-0064. The work of G. Paschos was supported in part by the WiNC project of the Action: Supporting Postdoctoral Researchers, funded by national and community funds (European Social Fund). This work makes use of results produced by the SmartFIRE project, which is supported by the International Research & Development Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP, Korea) under Grant No. K2013078191 and the Seventh Framework Programme (FP7) funded by the European Commission under Grant No. 611165. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government or by Huawei and Qualcomm. A part of this work was published in IEEE INFOCOM 2014.

G. S. Paschos was with the Massachusetts Institute of Technology, Cambridge, MA 02139 USA. He is now with the Mathematical and Algorithmic Sciences Lab, France Research Center, Huawei Technologies Co., Ltd., 92100 Boulogne-Billancourt, France (e-mail: [email protected]).

C.-P. Li was with the Massachusetts Institute of Technology, Cambridge, MA 02139 USA. He is now with Qualcomm Research, San Diego, CA 92121 USA.

E. Modiano is with the Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology, Cambridge, MA 02139 USA.

K. Choumas is with the Department of Electrical and Computer Engineering, University of Thessaly, Volos 38221, Greece, and also with the Informatics & Telematics Institute, CERTH, Thermi 57001, Greece.

T. Korakis is with the Department of Electrical and Computer Engineering, University of Thessaly, Volos 38221, Greece.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNET.2015.2503261

I. INTRODUCTION

THE INCREASING demand for multimedia applications, such as real-time conferencing, multiview video, and video streaming, pushes data networks to their operational limits and motivates efficient resource allocation schemes. Multicast is a candidate method for delivering multimedia streams to multiple users across a network. To optimize individual user experience, it is desirable to employ multi-rate multicast transmissions and use layered multimedia coding schemes to adapt users' perceived quality to allowable data rates, see [2], [3]. Since different receivers may require different data rates, we study the problem of per-receiver Network Utility Maximization (NUM) in multi-rate multicast, where each receiver is assigned a potentially different utility function.

Controlling multicast streams is challenging; the optimal network resource allocation and stream rate adaptation depend on the network state, which includes channel quality, network congestion, user demand, and device capabilities. Current approaches collect network state information at the source and calculate the achievable stream rate per receiver, e.g., [4], [5]. Such a process can be overwhelming for the source, since a multicast stream may have thousands of receivers. In this work we develop a solution to per-receiver NUM in multi-rate multicast without source cooperation. This solution is highly desirable in situations where there is no coordination between stream inputs, and the network is utilized as a decoupled, neutral resource.

Our solution combines scheduling with intelligent packet dropping at intermediate nodes. Packets corresponding to all stream layers are initially injected into the network without any calculations. Progressively, some packets are dropped according to a dropping scheme which bases its decisions on local information. We show that the original stream is stripped of unnecessary packets so that each receiver obtains the exact amount of information that corresponds to maximum throughput. Moreover, we combine the above mechanism with receiver-end congestion control to produce a scheme that maximizes utility without source cooperation.

The proposed policies have the following attractive features. First, they dynamically track the optimal solution without explicitly exchanging information about time-varying system parameters such as stream rate, link quality, and network congestion. Second, they are based on neighbor backlog information which is available locally. Thus the policies are amenable to distributed implementation for wireless and heterogeneous network technologies. Third, they do not require source cooperation, i.e., the sources transmit stream packets

1063-6692 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


without calculating the achievable receiver rates; this simplifies multi-rate multicast for networks with a large number of receivers. Last, they yield deterministic bounds for the queue backlogs, which provides delay guarantees and facilitates implementation on systems with finite buffers. Our contribution is summarized in the following points:

• We present the Maximum Multicast Throughput policy, which achieves near-optimal throughput for multi-rate multicast. The policy uses backpressure-type scheduling and threshold-based packet dropping.

• We develop the Maximum Multicast Utility policy, which additionally includes a utility-based congestion controller at the receivers. This policy is shown to solve the per-receiver NUM problem.

• We propose a heuristic modification of the utility-maximizing policy for operation on IEEE 802.11-based wireless devices. We implement it in a wireless testbed and perform experiments. The results demonstrate the efficiency of the proposed schemes both in terms of achieved utility and computation complexity.

A. Related Work

The problem of Network Utility Maximization (NUM) has been extensively studied for the case of unicast sessions [6]. For multicast sessions, [4] provides a fair-utility offline solution. Maximizing network utility by solving offline optimization problems is less desirable in practice, since variability in the network renders this approach ineffective: every change requires re-solving the problem and enforcing new rules. A preferable methodology is to achieve the long-term goals by making adaptive real-time decisions, cf. [7].

In [8], a dynamic approach balances multicast streams across a selection of multicast trees. This approach provides maximum throughput, which for the special case of feasible arrivals coincides with the solution to the NUM problem. More generally, to address infeasible arrivals we need to design a congestion controller which works jointly with multicast routing and admits the precise amount of traffic from each multicast session that solves the NUM problem.

Many existing congestion control approaches have the users estimate their maximally allowable throughput and convey this information to the source [5], [9]. Utilizing such information, the source computes a set of stream layers, allocating the optimal rate per layer, and then implements a virtual multicast session for each layer. In a time-varying setting, this approach may be cumbersome, requiring frequent reports and changes of virtual sessions. Moreover, the complexity of source computations becomes prohibitive in large networks. Our approach differs because it is based on local dropping and it does not require end-to-end signaling or computations at the source.

An in-network congestion control approach is proposed in [10], where a credit-based flow controller is shown to achieve max-min fairness, i.e., it solves the NUM problem for a specific choice of utility functions. We generalize in-network flow control to per-receiver multirate multicast NUM. To maximize the per-receiver utility, we utilize a virtual queue at each receiver, inspired by the unicast method proposed in [11]. The core challenge of generalizing [11] to multirate multicast lies in identifying the correct Lyapunov function that uses the information of receiver location in the tree to correctly balance the queue lengths.

A related recent work [12] proposed a dynamic policy for the stochastic NUM problem using the concept of shadow backpressure routing. Virtual packets travel in the reverse direction in order to discover congestion and help route the actual data packets. Each receiver is equipped with a source-type flow controller, inspired by [13]. The combination of these elements is shown to maximize the sum of all receiver utilities under the assumption of infinite demand.

II. SYSTEM MODEL

Consider a network graph serving a set of multicast sessions. Each session consists of a source node¹ and a set of receivers, and is delivered over a given tree. We consider a wireline network in which all links can be used simultaneously (we discuss wireless networks in Section V).

Time is slotted, and in each slot new packets arrive at the source of each session. We assume that the arrivals are i.i.d. over timeslots with finite mean and take finite values.

A. Queueing Structure

Each node maintains one transmission queue for every outgoing link and session; we track its backlog at the beginning of each slot. For a given link, define its incoming (parent) link² and, for the source node, the set of its outgoing links. Each transmission queue evolves across slots according to

(1)

where the control variables are the allocated transmission rate³ and the number of packets that are dropped from the queue. The inequality in (1) is due to the fact that the actual number of packets transmitted over the parent link can be less than its allocated rate if there are not enough packets to be transmitted in the parent queue. We also denote the actual number of packets transmitted over each link in each slot. In addition, each link has a capacity, and the capacity constraint must be satisfied in every slot. Also, we impose an upper bound on the per-slot dropping decision, a system-defined parameter. Throughout the paper, we assume this bound is at least the sum of the maximum link capacity and the maximum arrivals, which upper-bounds the incoming data rate to a node; this choice ensures that the packet dropping rate is large enough

¹To simplify the notation we do not allow different sessions to have the same source. This limitation can be waived without affecting the results.

²Only one such incoming link exists since each session is delivered over a tree. Note that the value of the parent link depends on the multicast session under consideration, and we abuse the notation to simplify exposition.

3We assume that if .


PASCHOS et al.: IN-NETWORK CONGESTION CONTROL FOR MULTIRATE MULTICAST 3045

Fig. 1. The proposed queue structure at a node with one incoming and two outgoing links (we show one session and omit the session notation). Each link is associated with a transmission queue and a dropping queue.

so that all transmission queues can always be stabilized. We also track the actual number of packets dropped from each transmission queue in each slot, which can be smaller than the dropping decision if there are not enough packets to be dropped. To provide direct control over the amount of dropped packets, we set up a drop queue associated with each transmission queue. Before leaving the system, the dropped packets are "moved" to the drop queue, from which they are later discarded according to the control policy. The drop queue evolves across slots according to

(2)

where the discard decision variable determines the number of packets that are eventually removed from the network in each slot. Note that the packets in drop queues are not going to be transmitted, and therefore it suffices to keep track of the drop queue values only as counters.⁴

Focusing on a network node, our queueing mechanism works as follows. All arriving packets are replicated to each transmission queue; for an example see Fig. 1. In each slot, the scheduler determines the number of session packets transmitted on each link, the dropping decision determines the number of packets that are internally moved from a transmission queue to its drop queue, and the discard decision is the number of packets that are discarded from the drop queue and permanently removed from the network. A control policy chooses the values of these three decision variables at each slot.
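The queue pair described above can be sketched as follows; the class and variable names are illustrative assumptions, since the paper's notation did not survive this extraction, and integer packet counts are assumed.

```python
# Illustrative sketch of the per-link queue pair described above:
# a transmission queue fed by the parent link, and a drop queue
# kept only as a counter. Names and granularity are assumptions.

class LinkQueues:
    def __init__(self):
        self.u = 0  # transmission queue backlog (packets)
        self.d = 0  # drop queue counter (packets awaiting discard)

    def slot(self, arrivals, mu, mu_drop, mu_discard):
        """One slot of the dynamics in the spirit of (1)-(2).

        arrivals   -- packets replicated in from the parent link
        mu         -- allocated transmission rate on this link
        mu_drop    -- packets to move from the transmission queue to the drop queue
        mu_discard -- packets permanently removed from the drop queue
        """
        # Actual transmissions/drops cannot exceed what is queued
        # (the inequality in (1)).
        served = min(self.u, mu + mu_drop)
        sent = min(served, mu)
        dropped = served - sent
        self.u = self.u - served + arrivals              # cf. (1)
        self.d = max(self.d - mu_discard, 0) + dropped   # cf. (2)
        return sent, dropped
```

For example, a queue that starts empty transmits nothing in its first slot, and in later slots sends up to the allocated rate while surplus service is diverted to the drop counter.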

III. THROUGHPUT MAXIMIZATION

In order to introduce the concepts and the notation, we begin by considering the problem of maximizing the sum throughput of all receivers in multiple multicast sessions. In Section IV, we study the more general problem of utility maximization.

A. Flow-Level Characterization

Before we develop the dynamic network control policy, it is useful to provide a flow-level characterization of the optimal throughput vector and the optimal packet dropping rates,

⁴The drop queues are essential to our mathematical analysis, as they allow us to keep track of the actual number of dropped packets. In Section V-E, we study by simulations a simplified version of our scheme without drop queues.

presented as solutions to linear optimization problems. These flow-level solutions are useful for proving the optimality of our control policies. However, the proposed policies solve these problems in a distributed and dynamic manner, without the need to explicitly compute these solutions.

We define the average session data flow rate over each link and the average packet dropping rate at each queue. These flow variables must satisfy the flow conservation and link capacity constraints:

(3)

(4)

(5)

The packet dropping rate vector is said to be feasible if there exist flow variables that satisfy (3)–(5).

Consider the throughput of each receiver and the set of feasible throughput vectors. We have

(6)

where the flow in (6) is taken on the incoming link of the receiver in its session (the session index is omitted for simplicity), so (6) states that the throughput of a receiver is equal to its incoming flow rate.

The problem of maximizing the sum throughput of all receivers in the network is

subject to (7)

It is useful to consider an equivalent optimization problem that minimizes packet dropping rates. Consider the set of links that form the path from the source node to a receiver. Summing (4) over the links of this path, we have

(8)

which states that the throughput of a receiver is equal to the exogenous data arrival rate less the sum of packet dropping rates along the path to that receiver. Summing (8) over all receivers in a session, the total session throughput is

(9)

where each weight is the number of session receivers connected to their source via the corresponding link.⁵ From (9) we see that maximizing the total throughput of a session is equivalent to minimizing the weighted packet dropping rate. Consequently, the throughput maximization problem (7) is equivalent to the minimization problem

subject to feasible packet dropping rates (10)

5We assume that if .
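The accounting behind (8)–(9) can be checked on a toy instance; the node names, rates, and dictionary representation below are illustrative assumptions, not data from the paper.

```python
# Toy check of (8)-(9): per-receiver throughput equals arrivals minus
# drops along the source-to-receiver path, and summing over receivers
# weights each link's drop rate by the number of receivers behind it.

# Hypothetical session tree: source s -> a; a -> b; a -> c (receivers b, c).
paths = {"b": ["s-a", "a-b"], "c": ["s-a", "a-c"]}  # links to each receiver
arrival_rate = 1.0
drops = {"s-a": 0.2, "a-b": 0.1, "a-c": 0.3}  # average drop rate per link

# (8): throughput of each receiver.
thr = {u: arrival_rate - sum(drops[l] for l in paths[u]) for u in paths}

# (9): total throughput via n[l] = receivers reached through link l.
n = {l: sum(l in p for p in paths.values()) for l in drops}
total_via_9 = len(paths) * arrival_rate - sum(n[l] * drops[l] for l in drops)

assert abs(sum(thr.values()) - total_via_9) < 1e-12
```

The shared link "s-a" is counted twice in (9) because a packet dropped there is lost to both receivers, which is exactly why minimizing the weighted drop rate maximizes total throughput.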


Next, we design a control policy that stabilizes all queues in the network and achieves optimal packet dropping rates; from the equivalence of (7) and (10), our policy achieves the maximum total throughput as well.

B. Intuition for Packet-Level Control

To measure the degree of congestion in the network, we construct a strictly increasing function of the transmission and drop queue backlogs, i.e., we define a weighted quadratic Lyapunov function over them. The quadratic terms are weighted by the per-link receiver counts, because the importance of a queue is proportional to the number of receivers connected to that queue. Collecting the queue backlogs of a slot in a vector, define the Lyapunov drift

(11)

as the expected difference of the congestion measure over a slot. A control policy that minimizes the Lyapunov drift in every slot suffices to stabilize the network and keep the queue backlogs bounded [7].

Recall from (10) that we also seek to minimize the weighted time-average packet dropping rate

(12)

If a drop queue is stable, then from queueing theory its arrival rate must be less than or equal to its time-average service rate, i.e., from (2) we have

(13)
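In placeholder notation (the paper's own symbols did not survive this extraction, so $d_l(t)$ for the packets moved into a drop queue and $\mu^{D}_l(t)$ for the discard decision are illustrative names), (13) is the standard rate-stability inequality for a stable queue: the time-average arrival rate is at most the time-average service rate,

```latex
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[d_l(t)\big]
\;\le\;
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[\mu^{D}_l(t)\big].
```

Minimizing the right-hand side therefore drives down the left-hand side, which is the quantity appearing in (12).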

Our approach forces this bound to be tight, and hence minimizing (12) can be achieved by minimizing its upper bound in (13), provided all queues are stable. In fact, it suffices to minimize in every slot the conditional expectation of the per-slot discard sum, given the current backlogs.

Minimizing both the Lyapunov drift and this discard term induces a conflict, because the network becomes more congested when fewer packets are dropped. It is therefore natural to consider minimizing a weighted sum of these two metrics,

(14)

where a predefined parameter reflects the relative importance of minimum packet dropping to queue stability. As we will see, this parameter also controls a tradeoff between the performance gap of our policy from optimality and the required finite buffer size in the transmission queues.
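With the same caveat on notation (the weight symbol $V$ and the receiver-count weights $n_{lc}$ below are placeholders for the paper's lost symbols), (14) is the familiar drift-plus-penalty objective:

```latex
\Delta(t) \;+\; V\,\mathbb{E}\Big[\sum_{l,c} n_{lc}\, d_{lc}(t) \,\Big|\, \mathbf{U}(t)\Big],
```

where the first term favors queue stability and the second penalizes packet dropping; a larger $V$ trades a smaller drop rate for larger (but still bounded) backlogs.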

Fig. 2. Illustration of the differential backlog calculation in the proposed policy.

C. The Proposed Policy

Our policy arises from the minimization of (14). In Appendix A we derive the following bound

(15)

where the additive constant is finite and given in the Appendix, and

(16)

is the weighted differential backlog. Fig. 2 gives an example calculation of this quantity. Next, we propose a throughput-optimal policy that is designed to minimize the RHS of (15) at each slot.

Maximum Multicast Throughput Policy

Packet Dropping: Each transmission queue moves packets to its drop queue at the end of each slot according to the threshold rule

(17)

The drop queue removes packets from the network according to

(18)

Scheduling: Consider the set of multicast sessions that use a link. Over these sessions, take the maximum weighted differential backlog and let a maximizer session be chosen (ties are broken arbitrarily). We allocate the link rate

(19)

The allocated rate is zero for all the other sessions.
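The per-slot structure of the policy can be sketched as follows. The exact thresholds of (17)–(18) and the formula (16) did not survive this extraction, so the drop threshold, the differential-backlog expression, and all names below are assumptions in the spirit of threshold dropping plus backpressure scheduling, not the paper's exact rules.

```python
# Sketch of the per-link, per-slot decisions: threshold dropping and
# max-weight (backpressure) scheduling. The threshold and the weighted
# differential backlog are assumed forms, not the paper's exact (16)-(19).

def one_slot(u, u_down, n, cap, V, r_max):
    """u[c]: local backlog for session c on this link;
    u_down[c]: list of (receiver-weight, backlog) pairs of child queues;
    n[c]: receivers reached through this link for session c;
    cap: link capacity; V: dropping/stability tradeoff; r_max: max drop."""
    # Dropping (cf. (17)): drop at the maximum rate once the backlog
    # exceeds a V-dependent threshold, otherwise drop nothing.
    drops = {c: r_max if u[c] > V else 0 for c in u}
    # Weighted differential backlog (cf. (16)): local weighted backlog
    # minus the weighted backlogs of the child queues downstream.
    w = {c: n[c] * u[c] - sum(nk * uk for nk, uk in u_down[c]) for c in u}
    # Scheduling (cf. (19)): give the full link rate to a maximizer
    # session when its differential backlog is positive.
    best = max(w, key=w.get)
    rates = {c: (cap if c == best and w[c] > 0 else 0) for c in u}
    return drops, rates
```

With two sessions on a link, the one whose weighted backlog most exceeds its downstream backlogs captures the capacity, while dropping is triggered only at queues whose backlog exceeds the threshold.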


Observe that (17) minimizes the dropping term of (15), (18) minimizes the discarding term, and (19) minimizes the scheduling term. Since the first two terms in (15) are constant, we conclude that the proposed policy minimizes the RHS of (15).

We note that the policy operates in a distributed manner using only locally available information. For the computation of the weighted differential backlog, we require knowledge of the neighbor backlogs. As shown in prior work, this is not restrictive for practical applications, e.g., see [14]. Also, delayed backlog information is sufficient for throughput optimality, see [15, §4.7].

D. Performance Evaluation of the Proposed Policy

Due to the dropping mechanism in (17)–(18), the transmission and drop queue backlogs are deterministically bounded. Applying the approach of [11], we have the following result.

Lemma 1: All transmission and drop queues are deterministically bounded by

Hence, a finite buffer size is sufficient to avoid unexpected queue overflow at each queue. The policy achieves near-optimal total throughput, as the following theorem asserts.

Theorem 1 (Optimality): The policy yields total throughput satisfying

where

one quantity is the throughput of each receiver in its multicast session and the other is an optimal solution to (7). The performance gap can be made arbitrarily small by choosing a sufficiently large value of the tradeoff parameter in (14). Appendix B provides the proof of Theorem 1.

E. Simulation of the Proposed Policy

We illustrate how the policy adapts to changing conditions via simulations. Consider the multicast scenario of Fig. 3. Two multicast sessions share a common link; session 1 has two receivers and session 2 has three. Two of the downstream links have a varying capacity, while the rest of the links have unit capacities, i.e., 1 packet/slot. Both sessions have unit arrival rate. We seek to maximize total throughput.

Observe that throughput maximization depends crucially on the varying capacity. For example, when it is at its maximum, maximum throughput is achieved by allocating all the resources of the shared link to session 2, since session 2 has three receivers and session 1 has two. When it is low, maximum throughput is achieved by allocating all the resources of the shared link to session 1. In intermediate regimes, throughput is maximized by splitting the shared link's capacity between the two sessions. Note that the packet dropping decision at the branching node depends on the quality of the downstream links, information which is not directly available at that node.

Fig. 3. An example of multirate multicast with two sessions, each delivered over its own set of links. Receivers are indicated in the figure, and numbers on links indicate capacities.

Fig. 4. Performance when varying the quality of links in the topology of Fig. 3. The left figure compares the achieved throughput of receiver c to the optimal average throughput. The right figure shows the backlog of node b holding packets for transmission to receiver c.

In the simulation we vary the capacity of the two varying-quality links: initially it is at its maximum and gradually reduces in steps of 0.1. Fig. 4 (left) shows the receiver throughput. According to the above discussion, the optimal average throughput equals the varying capacity, shown in the figure with a gray line. The simulations showed that the average throughput of the policy is equal to the optimal. Hence, we show the instantaneous throughput averaged over moving windows of 100 slots. At each interval, the throughput converges quickly to the optimal, which shows how the policy adapts to changing conditions.

In Fig. 4 (right), we showcase the backlog at node b with packets destined to node c, for the same sample path. For the parameter values used in this simulation, Lemma 1 upper bounds the backlog by 35 packets. In the simulations, the backlog never exceeds 25 packets despite the link quality variations and the randomness of the arrivals.

IV. UTILITY MAXIMIZATION

Next we consider the per-receiver NUM problem. Solving this general problem allows us to use different utility functions to achieve several objectives, such as maximum throughput (studied separately in the previous section), α-fairness (which includes proportional fairness and max-min fairness as special cases), user priority, and satisfying user-specific quality requirements.


A. Per-Receiver NUM Problem Formulation

In each multicast session, a receiver has a utility function of its throughput, which is assumed to be concave, increasing, and continuously differentiable with bounded derivatives.⁶ Consider the per-receiver NUM problem:

subject to (20)

Define the auxiliary function

where the parameter will be decided later. Then, maximizing the total utility is equivalent to maximizing

(21)

where the last sum is an (unknown) constant. In what follows, we modify our mechanism so that, by controlling the auxiliary functions and a new virtual queue which we will introduce next, the system is driven to the solution of (21).

B. Receiver Virtual Queue

At each multicast receiver, we set up a virtual queue which tracks the deficit/surplus of session packets received at that user and whose arrivals are given by the flow on the incoming link of the receiver. The departures are controlled by the policy and chosen in a bounded interval that we specify below. The functionality of this virtual queue is to track the urgency of a receiver to obtain more packets: if the virtual backlog is small, the receiver must urgently obtain packets for the maximum utility to be preserved.

We also define the virtual pressure for each receiver, which is regulated by the virtual queue:

if ,
otherwise,

(22)

where are positive parameters whose values will be chosen later. Note that in normal backpressure the pressure of a destination node is zero, while in our policy it can take positive or even negative values. The sign of indicates the urgency of the particular receiver to obtain more or fewer packets, according to the requested objective. Indeed, the behavior of

6We assume . Utility functions that have unbounded derivatives as , such as , can be approximated by those with bounded derivatives. For example, we can approximate by for some small .

is controlled by , which, as we will see shortly, is chosen according to the utility function.
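The exact virtual-queue recursion is given above (its symbols are garbled in this copy). As a rough illustration only, a deficit-style virtual counter of this kind can be maintained as follows; the variable names and sign conventions here are our own assumptions, not the paper's exact equation:

```python
def update_virtual_queue(y, delivered, drain):
    # Hypothetical deficit-counter update: 'delivered' is the amount of
    # session data arriving at the receiver over its incoming link in
    # this slot, and 'drain' is the controller-chosen departure amount
    # (picked from a bounded interval by the policy). The counter is
    # floored at zero, like a physical queue.
    return max(y + delivered - drain, 0.0)
```

A small counter then signals that the receiver urgently needs packets, which is what the virtual pressure in (22) translates into scheduling weight.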

C. The Proposed Policy

Let be the joint queue backlog vector in slot . Define the Lyapunov function

Note that the Lyapunov function is composed of two terms: the quadratic term is identical to the Lyapunov function used in the throughput maximization section, while the exponential term is identical to the one used for receiver-based congestion control for unicast sessions in [16]. By using this form of Lyapunov function, we are able to center the virtual queue around the value .

Recall the definition of the Lyapunov drift from (11). In order to solve the problem in (21), we define the weighted objective:

(23)

Using standard drift derivation techniques, we obtain the following bound in Appendix C:

(24)

where is a parameter, is a large constant defined in the Appendix, and

Let be the indicator function of the event that the tail node of on is a receiver . Then define the weighted differential backlog as

(25)


PASCHOS et al.: IN-NETWORK CONGESTION CONTROL FOR MULTIRATE MULTICAST 3049

TABLE I
PARAMETER SELECTION GUIDELINES

Observe that the virtual pressure is applied only if the tail node of is a receiver for this session. By rearranging terms, we have

We design our Maximum Multicast Utility policy to minimize the RHS of (24). To achieve this, we add a receiver-end congestion controller, similar to the one used in [16].

Maximum Multicast Utility Policy

Parameter Selection: Choose positive parameters , , , , , and as summarized in Table I. For a discussion of these parameter choices, see [11], Section V-C. Initialize the queues with , and .

Packet Dropping: Same as in policy.

Receiver-End Congestion Control: Choose to be the solution to

(26)

subject to (27)

where is given in (22).

Scheduling: Same as in policy, except that we use (25) as the value of , instead of (16).

D. Performance Evaluation of

Lemma 2: Under the policy, all queues , , and are deterministically bounded by

Theorem 2 (Optimality of ): The policy achieves long-term utility satisfying

(28)

where is the utility-optimal throughput vector.

E. Achieving Throughput Requirements

We show how to use the policy to deliver a video stream to users with strict throughput requirements. Consider the optimization problem:

subject to (29)

where the inequality is element-wise and denotes the throughput requirement of session receiver . We assume problem (29) admits a feasible solution. In order to solve (29) using , we use the penalty method; see [17], Section IV-B. Define the penalty function

where is a vector with one element for every receiver-session pair. If all requirements are satisfied (i.e., , ), then . If some requirement is violated, then increases proportionally to and to the 1-norm distance of from the feasible set. Also note that is convex, and thus is concave. Next, consider the convex optimization problem:

subject to (30)

By letting , the solution of (30) converges to the solution of (29) [17]. A practical approach is to pick a "large" finite value for .
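As a concrete sketch of this penalty construction (illustrative names; the paper's exact symbols are not reproduced here), the penalty sums the per-receiver violations, so it grows with the 1-norm distance of the throughput vector from the feasible set and vanishes when every requirement is met:

```python
def penalty(rates, requirements, beta):
    # Sum of per-receiver violations max(req - rate, 0), scaled by the
    # penalty weight beta. Zero iff all requirements are satisfied;
    # convex in 'rates', so subtracting it from a concave utility keeps
    # the overall objective concave.
    return beta * sum(max(req - r, 0.0)
                      for r, req in zip(rates, requirements))
```

Maximizing total utility minus this penalty with a large finite beta approximates the constrained problem (29), as described in the text.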

F. Simulations: Prioritizing Base Layer Packets

In multimedia streaming with layered coding, the stream reconstruction requires the reception of specific data packets belonging to the base layer. The reception of additional enhancement layer packets then improves the quality of the stream. Therefore, a reasonable strategy is to maximize the number of enhancement layer packets subject to the correct reception of base layer packets at each receiver. We show next how to tune to obtain such behavior.

We revisit the example of Fig. 3 and set so that all links have unit capacities. Next, we tag the packets belonging to the base layer video to distinguish them from the enhancement layer packets. The video stream of each session is modeled by a superposition of two Poisson processes with and

. Using the penalty approach explained in the previous subsection, it is possible to achieve throughput maximization subject to rate 0.2 at each receiver. We choose , and for all , respectively. However, in addition to achieving a specific throughput rate requirement, we require the reception of specific packets. To cope with this added constraint, we impose a strict priority rule at all transmission queues : enhancement layer packets are



TABLE II
SIMULATION RESULTS FOR PRIORITIZING BASE LAYER PACKETS

TABLE III
BASIC CONFIGURATION OF NITOS NODES

dropped first and only served if there are no base layer packets left in the queue.

The results for this scenario are shown in Table II. The combination of with the priority rule provides a delivery ratio of base layer packets very close to the ideal 100%. The small loss is attributed to the randomness of the arrivals. Moreover, when the base layer packet delivery ratio is less than 100%, the enhancement layer counterpart is very small. In conclusion, our policy achieves the high-level goal of combining guaranteed delivery with optimal performance.
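The strict priority rule can be sketched as follows; this is a simplified single-queue model with our own names, not the testbed code. When n packets must be dropped, enhancement-layer packets go first, and base-layer packets are touched only if no enhancement packets remain:

```python
def drop_with_priority(queue, n_drop):
    # queue: list of ('base' | 'enh', payload) tuples.
    enh = [p for p in queue if p[0] == 'enh']
    base = [p for p in queue if p[0] == 'base']
    drop_enh = min(n_drop, len(enh))               # drop enhancement first
    drop_base = min(n_drop - drop_enh, len(base))  # then base, if needed
    # Surviving base-layer packets are served ahead of enhancement ones.
    return base[drop_base:] + enh[drop_enh:]
```

This mirrors the rule in the text: base-layer packets are protected both at the drop decision and at the service decision.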

V. EXPERIMENTATION IN WIRELESS TESTBED

To demonstrate the practicality of the policy, we develop a prototype implementation in the NITOS testbed [18]. NITOS is a heterogeneous outdoor testbed, where two types of networks are used: a wireless network using the IEEE 802.11a/b/g/n protocol and a wired network using Gbit Ethernet. Being partly deployed on a building roof, NITOS is a non-RF-isolated wireless testbed. To eliminate interference we employed 802.11a, which is not used by commercial 802.11 products in Greece. The NITOS nodes feature a 3.4 GHz Intel i7 processor and two Atheros wireless cards. The main hardware and software specifications of the nodes are given in Table III.

A. Implementation Framework

The implementation is based on the Click modular router framework [19]. Click facilitates experimentation and evaluation of scheduling and congestion control algorithms in real systems. It runs as a user-level daemon at each node and, via the libpcap library, provides full control over packet transmission. Our implemented framework includes mechanisms for estimating channel quality, forming a queue structure, exchanging queue backlog information, and splitting time into virtual slots.

Estimating Channel Quality: To evaluate channel quality, we adopted the ETT estimation algorithm of Roofnet [20]. Nodes periodically broadcast probes, which are used to estimate the successful transmission probability. With this process every node periodically obtains a table with the qualities for each channel rate/neighbor pair. Using this table, the parameters are determined. This mechanism is known to incur negligible throughput overhead [20], [21].

Queue Structure: We implement the transmission queues on each node and we create a counter for each

virtual queue. The counter may take non-integer values. Each of these internal queues/counters is created upon the arrival of the first packet of a new session. This allows session generation "on the fly". The queues are removed after a period of inactivity.

Exchanging Queue Backlog Information: To compute from (25), each node periodically broadcasts the backlog size of all its transmission queues . If a node is a receiver for some session , it broadcasts instead. The broadcast messaging is repeated once every second. Prior experiments suggest that more frequent broadcasts incur visible throughput overhead, while rarer broadcasts may affect the delay performance due to obsolete queue information.

In the proposed schemes, the routing is based on fixed multicast trees. Thus the parameters are predefined and known. However, in our implementation it is possible to use the backlog exchange mechanism to transport information about , should these be time-varying.

Virtual Slots: In order to simplify the implementation, we use

the concept of the virtual slot. Each node keeps an internal timer that expires once every slot. Upon timer expiration the policy selects the next queue to be served, and for the duration of the next slot the decision remains fixed. The slot duration is set to 100 ms, equal to 1/10 of the broadcast period. Small values of the slot duration improve delay and reduce throughput fluctuations, but burden the CPU of the device. We leave the investigation of the optimal slot duration for future work. We note that the implementation of is not tied to the idea of the virtual slot.
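The virtual-slot mechanism amounts to a simple loop: the scheduling decision is recomputed once per timer expiry and held fixed in between. A minimal sketch, with our own structure rather than the actual Click element:

```python
import time

def run_virtual_slots(pick_queue, n_slots, slot_sec=0.1):
    # pick_queue(t) returns the queue chosen for slot t. The decision
    # is made once at the slot boundary and held for slot_sec seconds
    # (100 ms in the prototype, 1/10 of the broadcast period).
    decisions = []
    for t in range(n_slots):
        decisions.append(pick_queue(t))
        time.sleep(slot_sec)  # stand-in for the node's internal timer
    return decisions
```

Shrinking slot_sec tracks queue changes more closely at the cost of more frequent scheduling computations, which is exactly the delay/CPU trade-off discussed in the text.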

B. Policy Implementation

We modify our proposed policy so that it can operate on a network with wireless channels. Due to interference, some wireless links cannot be activated simultaneously. A well-known link activation policy is the maxweight policy, proposed in [22] for stabilizing mobile packet networks. Maxweight activates in each slot the set of links that maximizes the sum of products , effectively preferring links with higher capacity. In our setting, the activation of the transmitting nodes is automatically selected by the IEEE 802.11 protocol. Thus, it remains to choose the activation of a session and a receiving link, subject to the activated nodes. Using intuition from the maxweight policy, we propose the following heuristic.



Fig. 5. Experiment topology with five NITOS nodes. Two sessions A and B are generated at Alice, forwarded to Bob via a wired connection, and then distributed to Carol, Dave, and Erin through wireless. The figure shows the rate requirement per receiver (in parentheses) and the physical rate per link.

Maximum Multicast Utility for Wireless Policy

Parameter Selection, Packet Dropping, Receiver-End Congestion Control, Scheduling on Wired Links: same as in .

Scheduling on Wireless Links: Calculate using (25). On a wireless node, choose the link-session pair

ties broken arbitrarily. Then, allocate the rate

if ,
otherwise.

Let for all the other link-session pairs.
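A sketch of this wireless scheduling step (the data-structure names are ours): among the link-session pairs at a node, pick the pair maximizing the product of its weighted differential backlog and the link's physical rate, and serve it at full rate only if the weight is positive; every other pair gets zero rate for the slot.

```python
def schedule_wireless(weights, capacity):
    # weights[(link, session)]: weighted differential backlog, as in (25).
    # capacity[link]: physical rate of the wireless link.
    best = max(weights, key=lambda ls: weights[ls] * capacity[ls[0]])
    rate = capacity[best[0]] if weights[best] > 0 else 0.0
    # All other link-session pairs are allocated rate 0 in this slot.
    return best, rate
```

This mirrors the maxweight intuition from [22]: higher-capacity links win ties in backlog, and no transmission is made when all differential backlogs are non-positive.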

C. Throughput Experiments and Results

We conduct experiments on the topology of Fig. 5. Five NITOS nodes are used: Alice and Bob are connected via Ethernet, while Bob is connected to the other three nodes via wireless. The nodes are configured to run the policy. The wireless links use fixed physical rates instead of the 802.11 rate adaptation scheme. In particular, we set the physical rates to 18 Mb/s, 6 Mb/s, and 6 Mb/s for the links to Carol, Dave, and Erin, respectively. The physical rate of the wired connection is 1 Gb/s.

We consider two sessions, A and B, each with traffic rate 14 Mb/s. The source node for both sessions is Alice, and the multicast receivers are {Bob, Carol} for A and {Dave, Erin} for B; see Fig. 5. To generate packets we use two UDP streams created with the iperf tool [23]. We run iperf on external nodes to avoid polluting the CPU measurements. The receiver rate requirements are 4.5 Mb/s for Bob, for Carol, 1.7 Mb/s for Dave, and for Erin, where the values are chosen differently per experiment. The objective is to satisfy all receiver rate requirements as well as to achieve maximum throughput.

We show the measured instantaneous and average throughput

for two scenarios. The instantaneous throughput is computed as the average over 1-s periods. In the first scenario we choose , see Fig. 6(a). The objective is achieved because all receiver requirements are satisfied and the excess wireless resource is allocated to the receiver with the highest capacity, i.e., Carol. We observed that the wireless medium was fully utilized. In the second scenario, we reverse the requirements of Carol and Erin, , see Fig. 6(b). The theoretical total throughput is smaller in this case due to Erin's low physical rate and high requirement.

D. CPU Occupancy Experiments and Results

Our framework is implemented in user-level Click. We observed the user-level CPU occupancy using the sigar library incorporated into the Click framework, as well as the valgrind tool for analytical profiling. The CPU occupancy at every node remained below 3% even when we increased the traffic rate of the two sessions, A and B, up to 200 Mb/s. We note that a kernel-level implementation could improve this figure further. As we see in Fig. 6(c), the CPU usage remains the same for each node in most cases, indicating that our policy does not incur extra burden on the sources. A slight difference between the source and the other nodes is attributed mainly to the high load and to the fact that, at optimality, this specific node must drop a lot of packets. However, we note that this difference is not comparable to the utilization variance caused by congestion control based on source cooperation. Additionally, the CPU utilization was largely independent of the data rates used: it increased by only about 2.5% when the traffic rate of the sessions was increased to 200 Mb/s, with more than 180 Mb/s being dropped.

This implies that packet operations and queue maintenance have a minor contribution to the CPU occupancy. This is also illustrated in Fig. 6(d), where we see that the percentage of the CPU occupancy of Click due to queue maintenance, reception/transmission operations, and message broadcasting is low (about 15%). Instead, most of the CPU occupancy is due to operations related to the underlying routing/forwarding scheme, which is only used as a demonstration vehicle here and is not necessary for our scheme.

E. Elimination of Dropping Queues

We use the drop queues to prove mathematically the performance of our scheme. Here, we consider an extension of the policy such that packets are directly discarded from the packet queues without ever visiting the drop queues.

without Drop Queues Policy

Parameter Selection, Receiver-End Congestion Control, Scheduling on Wired Links: same as in .

Packet Dropping: Each transmission queue removes packets at the end of slot , where

if ,
otherwise.
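The exact dropping rule is the piecewise expression above, whose symbols are garbled in this copy. As an illustration only (the threshold and cap names are our assumptions, not the paper's symbols), an end-of-slot drop of this shape looks like:

```python
def end_of_slot_drop(backlog, threshold, d_max):
    # Discard up to d_max packets directly from the transmission queue
    # when its backlog exceeds the threshold; otherwise drop nothing.
    # This removes the need for separate drop queues at runtime.
    if backlog > threshold:
        return min(d_max, backlog - threshold)
    return 0
```

The drop queues remain useful for the analysis, but operationally the two variants behave alike, as the single-link comparison below shows.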

In Figs. 7(a)–(b) we compare and using a one-hop path. From the experiments we observe that both



Fig. 6. Results of the experimentation in the wireless testbed, on the five-node topology of Fig. 5. (a) Instantaneous and average throughput (Mb/s) for . (b) Instantaneous and average throughput (Mb/s) for . (c) CPU utilization (%) for various traffic rates of the two sessions, A and B. (d) CPU utilization (%) of the main processes: queue maintenance, tx/rx operations, and broadcast messaging.

Fig. 7. Throughput and data queue length experimentation of and on a single link. Parameters used: , , ; the capacity of the link is set to 27 Mb/s. Two cases are examined: (left) the offered load is 30 Mb/s, or for packet size 1470 bytes (critical loading), and (right) the offered load is 100 Mb/s, or (overloading). The differences between the two policies are hardly visible. (a) Critical loading. (b) Overloading.

policies achieve the same throughput, while the data queues have a similar evolution, yielding the same average delay.

VI. CONCLUSION

We proposed a distributed control scheme that maximizes utility in multirate multicast. The performance is analyzed and shown to be near-optimal. Several enhancements of the policy are described, including a priority rule for base layer packets and a modification for 802.11 wireless devices. The scheme is implemented in a wireless testbed and its applicability is demonstrated. In future work, we plan to derive the optimal policy for general wireless networks and to experiment further in larger topologies, investigating delay and CPU occupancy.



APPENDIX A
DERIVATION OF BOUND IN (15)

From (1)–(2) and [11, Lemma 7], we obtain

(31)

(32)

where

(33)

Multiplying (31) and (32) by , summing over and , and taking conditional expectations, we have

(34)

where and is a predefined parameter. Rearranging terms and using the equality, for each session ,

(35)

we obtain (15).

APPENDIX B
PROOF OF THEOREM 1

We denote by the RHS of inequality (15) evaluated under a policy in slot , where can be rewritten as

(36)

It is clear that the decision variables in (36) are chosen according to policy . Let be the sum (14) evaluated under policy in slot . The inequality (15) under the policy is

Now, consider the optimal stationary policy, denoted by , that chooses and

for all , where and are the optimal flow variables that solve (10). This is a feasible policy. Since the policy minimizes the RHS of (36) in every slot, we have

(37)

Using the flow conservation (3)–(4), we have

(38)

Plugging (38) into (37) yields, under the policy,

Taking expectations, summing over slots , dividing by , and using , we have

Lemma 1 shows that is finite for all . Taking yields

Since all queues are bounded, we have

It follows that

In a multicast session , along the path from the source to a receiver , the traffic that arrives at the source in an interval must be equal to the sum of the total dropped packets and the data delivered to receiver in that interval, plus the total queue backlogs at time . In other words,

Taking expectations and time averages, and using the finiteness of the queues , we have as

(39)



Summing (39) over and yields

(40)

(41)

From (9), the maximum total throughput is

As a result,

The proof is complete.

APPENDIX C
DERIVATION OF BOUND IN (24)

Define the indicator function if , and 0 otherwise. Let . Define and . Then we have and . For each link , we obtain from [11, Lemma 7] that

where and are given in (33). Using the same analysis as in [11, Lemma 5], we get

Combining the above inequalities yields

where

(42)

By definition of we have

Using and rearranging terms, we obtain the desired bound.

REFERENCES

[1] G. S. Paschos, C.-P. Li, E. Modiano, K. Choumas, and T. Korakis, "Multirate multicast: Optimal algorithms and implementation," in Proc. IEEE INFOCOM, 2014, pp. 343–351.

[2] X. Li, S. Paul, and M. Ammar, "Layered video multicast with retransmissions (LVMR): Evaluation of hierarchical rate control," in Proc. IEEE INFOCOM, 1998, pp. 1062–1072.

[3] S. McCanne, V. Jacobson, and M. Vetterli, "Receiver-driven layered multicast," in Proc. ACM SIGCOMM, 1996, pp. 117–130.

[4] S. Sarkar and L. Tassiulas, "Fair allocation of utilities in multirate multicast networks: A framework for unifying diverse fairness objectives," IEEE Trans. Autom. Control, vol. 47, no. 6, pp. 931–944, Jun. 2002.

[5] S. Deb and R. Srikant, "Congestion control for fair resource allocation in networks with multicast flows," IEEE/ACM Trans. Netw., vol. 12, no. 2, pp. 274–285, Apr. 2004.

[6] S. Shakkottai and R. Srikant, "Network optimization and control," Found. Trends Netw., vol. 2, no. 3, pp. 271–379, Jan. 2007.

[7] M. J. Neely, "Stochastic network optimization with application to communication and queueing systems," in Synthesis Lectures on Communication Networks. San Francisco, CA, USA: Morgan & Claypool, 2010.

[8] S. Sarkar and L. Tassiulas, "A framework for routing and congestion control for multicast information flows," IEEE Trans. Inf. Theory, vol. 48, no. 10, pp. 2690–2708, Oct. 2002.

[9] K. Kar, S. Sarkar, and L. Tassiulas, "A scalable low-overhead rate control algorithm for multirate multicast sessions," IEEE J. Sel. Areas Commun., vol. 20, no. 8, pp. 1541–1557, Oct. 2002.

[10] S. Sarkar and L. Tassiulas, "Back pressure based multicast scheduling for fair bandwidth allocation," IEEE Trans. Neural Netw., vol. 16, no. 5, pp. 1279–1290, Sep. 2005.

[11] C.-P. Li and E. Modiano, "Receiver-based flow control for networks in overload," IEEE/ACM Trans. Netw., vol. 23, no. 2, pp. 616–630, Apr. 2015.

[12] L. Bui, R. Srikant, and A. Stolyar, "Optimal resource allocation for multicast flows in multihop wireless networks," in Proc. IEEE CDC, 2007, pp. 1134–1139.

[13] M. Neely, E. Modiano, and C.-P. Li, "Fairness and optimal stochastic control for heterogeneous networks," IEEE/ACM Trans. Netw., vol. 16, no. 2, pp. 396–409, Apr. 2008.

[14] R. Laufer, T. Salonidis, H. Lundgren, and P. Le Guyadec, "A cross-layer backpressure architecture for wireless multihop networks," IEEE/ACM Trans. Netw., vol. 22, no. 2, pp. 363–376, Apr. 2014.

[15] L. Georgiadis, M. J. Neely, and L. Tassiulas, "Resource allocation and cross-layer control in wireless networks," Found. Trends Netw., vol. 1, no. 1, pp. 1–144, Apr. 2006.

[16] C. Li and E. Modiano, "Receiver-based flow control for networks in overload," in Proc. IEEE INFOCOM, 2013, pp. 2949–2957.

[17] D. P. Bertsekas, Nonlinear Programming, 2nd ed. Belmont, MA, USA: Athena Scientific, 1999.

[18] "NITLab: Network Implementation Testbed Laboratory," [Online]. Available: http://nitlab.inf.uth.gr/NITlab/index.php/testbed

[19] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek, "The Click modular router," ACM Trans. Comput. Syst., vol. 18, no. 3, pp. 263–297, Aug. 2000.

[20] J. Bicket, D. Aguayo, S. Biswas, and R. Morris, "Architecture and evaluation of an unplanned 802.11b mesh network," in Proc. MobiCom, 2005, pp. 31–42.

[21] K. Choumas, T. Korakis, I. Koutsopoulos, and L. Tassiulas, "Implementation and end-to-end throughput evaluation of an IEEE 802.11 compliant version of the enhanced-backpressure algorithm," in Proc. Tridentcom, 2012, pp. 64–80.

[22] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Trans. Autom. Control, vol. 37, no. 12, pp. 1936–1948, Dec. 1992.

[23] "Iperf: The TCP/UDP bandwidth measurement tool," 2013 [Online]. Available: http://sourceforge.net/projects/iperf/

Georgios S. Paschos (S'01–M'06–SM'15) received his diploma in electrical and computer engineering in 2002 from Aristotle University of Thessaloniki, and the Ph.D. degree in 2006 from the ECE department, University of Patras, both in Greece. He was an ERCIM Postdoctoral Fellow for one year at VTT, Finland. From 2008 to 2014, he was affiliated with the Center of Research and Technology Hellas, Informatics & Telematics Institute (CERTH-ITI), Greece. He also taught as an Adjunct Lecturer in the Department of Computer and Communication Engineering of the University of Thessaly during the period 2009–2011. From 2012 to 2014, he was a Postdoctoral Associate at LIDS, Massachusetts Institute of Technology, USA. Since 2014, he has been a Principal Researcher at the French Research Center of Huawei Technologies, Paris, and the head of the Network Control and Resource Allocation team. His main interests are in the areas of wireless communications, networks, and stochastic modeling. He has served as a Technical Program Committee member for the INFOCOM, MobiHoc, and WiOPT conferences. Since September 2015, he serves as an Associate Editor for the IEEE/ACM TRANSACTIONS ON NETWORKING.

Chih-Ping Li received his B.S. degree in electrical engineering from the National Taiwan University in 2001, and his M.S. and Ph.D. degrees in electrical engineering from the University of Southern California in 2005 and 2011, respectively. He was a Postdoctoral Associate in the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology between 2011 and 2014. In 2014 he joined the Corporate R&D Department at Qualcomm, where he has been working on 5G system design and analysis. His research interests include stochastic control, resource allocation, and QoS in communication networks, wireless networks, and queueing systems.

Eytan Modiano (S'90–M'93–SM'00–F'12) received the B.S. degree in electrical engineering and computer science from the University of Connecticut, Storrs, CT, USA, in 1986, and the M.S. and Ph.D. degrees in electrical engineering from the University of Maryland, College Park, MD, USA, in 1989 and 1992, respectively.

He was a Naval Research Laboratory Fellow between 1987 and 1992 and a National Research Council Post-Doctoral Fellow from 1992 to 1993. Between 1993 and 1999, he was with the Massachusetts Institute of Technology (MIT) Lincoln Laboratory, Lexington, MA, USA, where he was a project leader for MIT Lincoln Laboratory's Next Generation Internet (NGI) project. Since 1999, he has been on the faculty at MIT, Cambridge, MA, USA, where he is a Professor with the Department of Aeronautics and Astronautics and the Laboratory for Information and Decision Systems (LIDS). His research is on communication networks and protocols with emphasis on satellite, wireless, and optical networks.

Prof. Modiano is an Associate Fellow of the AIAA. He is an Editor-at-Large for the IEEE/ACM TRANSACTIONS ON NETWORKING and served as an Associate Editor for the IEEE TRANSACTIONS ON INFORMATION THEORY and the IEEE/ACM TRANSACTIONS ON NETWORKING. He was the Technical Program Co-Chair for IEEE WiOpt 2006, IEEE INFOCOM 2007, and ACM MobiHoc 2007.

Kostas Choumas received the B.S. and M.S. degrees from the Electrical and Computer Engineering Department, University of Thessaly, Greece, in 2007 and 2008, respectively, and the Ph.D. degree from the same department in 2015, under the supervision of Prof. Leandros Tassiulas and Assistant Prof. Thanasis Korakis. Since 2009, he has been a Research Scientist working on multiple projects with the University of Thessaly and the Centre for Research and Technology Hellas. His current research interests are in the field of designing and implementing enhanced routing and access layer protocols for wireless ad hoc networks, as well as in experimentation with testbed platforms.

Thanasis Korakis received the B.S. and M.S. degrees from the Informatics and Telecommunications Department, University of Athens, in 1994 and 1997, respectively, and the Ph.D. degree in computer and communications engineering from the University of Thessaly, Greece, in 2005, under the supervision of Prof. Leandros Tassiulas. In the summer of 2004, he was a Visiting Researcher in the Computer Science and Engineering Department, University of California, Riverside. From 2005 to 2006, he was a Research Scientist with the Department of Electrical and Computer Engineering, Polytechnic University, NY. From 2006 to 2012, he was a Research Assistant Professor in the same department. Currently, he is an Assistant Professor in the Electrical and Computer Engineering Department, University of Thessaly. His research interests are in the field of wireless networks with emphasis on access layer protocols, cooperative networks, directional antennas, quality-of-service provisioning, and network management. From 2007 to 2012 he was a voting member of the IEEE 802.16 standardization group. He served as Publication Chair for WiOpt 2007, TPC Chair of WiNTECH 2010, TPC Chair of Tridentcom 2011, General Chair of Tridentcom 2012, Chair of EMUTools 2013, and Chair of WINMEE 2016.