
An Experimental Evaluation of Low Latency Congestion Control for mmWave Links

Ashutosh Srivastava, Fraida Fund, Shivendra S. Panwar
Department of Electrical and Computer Engineering, NYU Tandon School of Engineering

Emails: {ashusri, ffund, panwar}@nyu.edu

Abstract—Applications that require extremely low latency are expected to be a major driver of 5G and WLAN networks that include millimeter wave (mmWave) links. However, mmWave links can experience frequent, sudden changes in link capacity due to obstructions in the signal path. These dramatic variations in link capacity cause a temporary “bufferbloat” condition during which delay may increase by a factor of 2-10. Low latency congestion control protocols, which manage bufferbloat by minimizing queue occupancy, represent a potential solution to this problem; however, their behavior over links with dramatic variations in capacity is not well understood. In this paper, we explore the behavior of two major low latency congestion control protocols, TCP BBR and TCP Prague (as part of L4S), using link traces collected over mmWave links under various conditions. Our evaluation reveals potential problems associated with the use of these congestion control protocols for low latency applications over mmWave links.

Index Terms—congestion control, low latency, millimeter wave

I. INTRODUCTION

Low latency applications are envisioned to play a significant role in the long-term growth and economic impact of 5G and future WLAN networks [1]. These include augmented and virtual reality (AR/VR), remote surgery, and autonomous connected vehicles. As a result, stringent latency requirements have been established for 5G communication. The ITU [2] has set the minimum requirement for user plane latency at 4ms for enhanced mobile broadband and 1ms for ultra-reliable low latency communication; the minimum requirement for control plane latency is 20ms, and a lower (10ms) target is strongly encouraged.

To achieve high throughput as well as low latency, these wireless networks will rely heavily on millimeter wave frequency bands (30-300 GHz), due to the large amounts of spectrum available on those bands. However, mmWave links are highly susceptible to blockages such as buildings, vehicles, walls, doors, and even the human body [3], [4]. For example, Figure 1 shows the received signal strength over a 60GHz WLAN link with an occasional human blocker in the signal path. While the human blocks the signal path, the received signal strength decreases by about 10 dB. As a result of this susceptibility to blockages, mmWave links can experience frequent, sudden outages or changes in link capacity. This has been confirmed in live 5G deployments as well; a recent experimental study on Verizon's mmWave network deployed in Minneapolis and Chicago reported a high handover frequency due to frequent disruptions in mmWave connectivity [5].

Fig. 1: Effect of human blocker on received signal strength of a 60 GHz WLAN link (RSSI in dBm vs. time in seconds).

Fig. 2: Illustration of effect of variations in link capacity on queueing delay: the same five-packet queue produces 1ms of queuing delay at a link rate of 5 packets/ms, but 5ms of queuing delay at a link rate of 1 packet/ms.

This effect is of great concern to low-delay applications, because extreme variations in mmWave link capacity cause a temporary “bufferbloat” condition during which delay increases dramatically. The reason for this is illustrated in Figure 2, which shows the queuing delay experienced by a packet that arrives at a buffer with five packets already queued. When the egress rate of the queue is 5 packets/ms, the arriving packet sees 1ms of queuing delay. If the egress rate of the queue suddenly drops to 1 packet/ms, the arriving packet sees 5ms of queuing delay. In other words, with the same number of packets in the bottleneck queue, a sudden five-fold decrease in link capacity will give rise to a corresponding five-fold increase in the observed queueing delay. This effect is exacerbated by large buffers, since the delay at a given egress rate is proportional to buffer occupancy, and large buffers permit a greater buffer occupancy. However, because of frequent short-term link outages, buffers adjacent to a mmWave link may need to hold several seconds of data, which, because of the high link capacity, suggests the need for very large buffers. In fact, subscribers of the Verizon 5G Home mmWave fixed broadband service have reported extreme delay, especially under uplink saturation conditions, which may be due to bufferbloat [6], [7].
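To make the arithmetic concrete, the minimal sketch below (in Python) computes the queuing delay of Figure 2 as occupancy divided by egress rate, and adds a back-of-envelope buffer-sizing estimate. The 3Gbps rate is the typical unobstructed rate from Section IV; the one-second outage figure is our own illustrative assumption.

```python
def queuing_delay_ms(packets_queued, egress_rate_pkts_per_ms):
    """Delay seen by an arriving packet: queue occupancy / drain rate."""
    return packets_queued / egress_rate_pkts_per_ms

# Figure 2: five packets are already queued when the new packet arrives.
print(queuing_delay_ms(5, 5))  # 1.0 ms at a link rate of 5 packets/ms
print(queuing_delay_ms(5, 1))  # 5.0 ms after the rate drops to 1 packet/ms

# Back-of-envelope buffer sizing: riding out a 1-second outage at 3 Gbps
# would require buffering roughly 3 Gbit of data, i.e. about 375 MB.
outage_s, rate_bps = 1.0, 3e9
print(rate_bps * outage_s / 8 / 1e6)  # 375.0 (MB)
```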

Low-delay congestion controls have been proposed as a potential solution to the general problem of bufferbloat across the Internet. Traditional loss-based congestion control schemes fill up the bottleneck buffer and back off only when packet loss is observed. This results in high queueing delays in the presence of large buffers, i.e., bufferbloat [8]. Low latency congestion controls including BBR [9], PCC [10], COPA [11], and Sprout [12], as well as frameworks such as L4S [13] which include low latency congestion control, seek to minimize queueing delay by occupying as little of the bottleneck buffer as possible while utilizing the available capacity to the fullest.

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

However, the behavior of these low latency congestion controls over links with dramatic variations in capacity (as is characteristic of mmWave links) is not well understood. Early work in this area has been limited to ns-3 simulation studies [14]–[20], where the mmWave link dynamics and, in most cases, the implementation of the congestion control protocols are simulated. These results have not been validated experimentally using a real network stack.

In this paper, we use a testbed experiment to explore the behavior of several key low latency congestion control schemes when they operate in a network with a mmWave wireless link as the bottleneck. To promote reproducibility, we used a series of traces collected over 60 GHz mmWave links under different obstruction conditions, and then used emulation to play back the mmWave link characteristics on the CloudLab testbed [21]. This experiment can more closely represent the true behavior of congestion control protocols than simulation studies, while still allowing more control and reproducibility than experiments on an over-the-air mmWave testbed. The contributions of this work are as follows:

• We describe a reproducible testbed experiment to explore the behavior of low latency congestion control over mmWave links. Instructions to reproduce this experiment on CloudLab are provided at [22].

• We show that TCP BBR, the low latency congestion control scheme championed by Google, works well to limit the delay caused by sudden obstructions in the signal path. However, for applications that require both low delay and high throughput, BBR may not be a good candidate, because it frequently reduces its sending rate in order to drain the queue.

• We show that in TCP Prague (the L4S congestion control), the combination of slow convergence to a fair share of capacity and frequent disruptions in the mmWave link can lead to starvation of some flows.

The rest of this paper is organized as follows. We start with a brief introduction of the low latency congestion control schemes considered for evaluation in Section II. In Section III, we discuss previous work evaluating TCP congestion control over mmWave links. We describe the methodology of our experiments in Section IV before presenting our results in Section V. We present a more detailed analysis of these results in Section VI, and finally present some directions for future research based on our findings.

II. LOW LATENCY CONGESTION CONTROL SCHEMES

Several low latency congestion control algorithms have been proposed in the literature to overcome the bufferbloat problem.

In our experimental evaluation, we have focused on two proposals which are championed by major industry players: TCP BBR [9] and TCP Prague (which is not a standalone congestion control, but is part of the L4S architecture [13]). We will give a brief introduction to these schemes.

BBR (Bottleneck bandwidth and round-trip propagation time): BBR [9] tries to maintain low queueing delays by operating at the bandwidth-delay product (BDP) of the network path. BBR maintains and continuously updates estimates of the bottleneck bandwidth and minimum RTT of the path, and caps the number of packets in flight to a constant times the estimated BDP.

The first release of TCP BBR operates in four phases. In the first phase, called the startup phase, BBR seeks to quickly estimate the bottleneck bandwidth. Then, in the second phase (called the drain phase), it reduces its sending rate in order to drain the queue that is expected to have built up in the first phase. At this point, BBR enters steady state, where it moves back and forth between bandwidth probing and RTT probing phases. Most of BBR's time is spent in the bandwidth probing phase, in which it usually sends at a rate equal to its bottleneck bandwidth estimate, but occasionally probes just above this estimate to try to discover additional capacity. During this phase, the CWND is set to double the estimated BDP. Meanwhile, BBR keeps a running minimum of the RTT, and after 10 seconds have elapsed without updating this running minimum, it enters an RTT probing phase. During the RTT probing phase, BBR's CWND is set to 4 segments in order to drain the queue and find the base RTT of the link. After updating the minimum RTT estimate, BBR returns to the bandwidth probing phase.
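The phase logic just described can be summarized as a short state machine. The sketch below is a toy model under our own simplifying assumptions (per-sample plateau detection, the drain phase folded into the startup exit, and no pacing-gain cycling); it is not the Linux kernel implementation.

```python
import time

class BbrV1Sketch:
    """Toy model of the BBRv1 phases described above; not the kernel code."""

    MIN_RTT_WINDOW_S = 10.0  # min-RTT estimate expires after 10 seconds
    PROBE_RTT_CWND = 4       # CWND, in segments, during the RTT probing phase

    def __init__(self, mss_bytes=1500):
        self.mss = mss_bytes
        self.btlbw_bps = 0.0           # running max: bottleneck bandwidth estimate
        self.prev_btlbw_bps = 0.0
        self.plateau_rounds = 0
        self.min_rtt_s = float("inf")  # running min: base RTT estimate
        self.min_rtt_stamp = time.monotonic()
        self.phase = "STARTUP"

    def on_ack(self, delivery_rate_bps, rtt_s):
        """Update the path model and phase on each delivery-rate/RTT sample."""
        now = time.monotonic()
        self.btlbw_bps = max(self.btlbw_bps, delivery_rate_bps)
        if rtt_s <= self.min_rtt_s:
            self.min_rtt_s, self.min_rtt_stamp = rtt_s, now
        if self.phase == "STARTUP":
            # Leave startup once the bandwidth estimate stops growing ~25%
            # per sample (the real algorithm then drains the startup queue).
            if self.btlbw_bps < 1.25 * self.prev_btlbw_bps:
                self.plateau_rounds += 1
            else:
                self.plateau_rounds = 0
            self.prev_btlbw_bps = self.btlbw_bps
            if self.plateau_rounds >= 3:
                self.phase = "PROBE_BW"
        elif self.phase == "PROBE_BW" and now - self.min_rtt_stamp > self.MIN_RTT_WINDOW_S:
            self.phase = "PROBE_RTT"  # min-RTT sample is stale: drain and re-measure
        elif self.phase == "PROBE_RTT" and now - self.min_rtt_stamp < 1.0:
            self.phase = "PROBE_BW"   # fresh min-RTT sample obtained

    def cwnd_segments(self):
        """CWND cap: 4 segments in ProbeRTT, otherwise 2x the estimated BDP."""
        if self.phase == "PROBE_RTT":
            return self.PROBE_RTT_CWND
        if self.min_rtt_s == float("inf"):
            return 10  # initial window before any RTT sample
        bdp_segments = self.btlbw_bps * self.min_rtt_s / 8 / self.mss
        return max(self.PROBE_RTT_CWND, int(2 * bdp_segments))
```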

Version 2 of BBR introduces many changes, but operates according to the same principle of estimating the BDP, spending most of its time probing for more bandwidth, and occasionally reducing its sending rate to measure the minimum RTT.

L4S (low latency, low loss, scalable throughput): L4S [13] is an architecture for network service which includes two major components:

• TCP Prague congestion control: TCP Prague is similar to DCTCP [23], in that it uses accurate ECN feedback to control its sending rate (a sketch of this marked-fraction response follows this list). However, it includes several extensions and adaptations relative to DCTCP that make it more suitable for use over the Internet, where it may share links with loss-based congestion control flows.

• Dual-Q coupled AQM: To address the problem of unfairness with loss-based congestion control flows, and to preserve low queuing delay for flows that respond to ECN, the L4S architecture uses a Dual-Q coupled AQM. This queue separates the conventional loss-based flows from the ECN-responsive flows and handles them differently to maintain fairness and low latency for the ECN-responsive flows.
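As a concrete illustration of the DCTCP-style response that TCP Prague builds on [23], the sketch below scales the window cut by the smoothed fraction of ECN-marked packets, following the update rule of the DCTCP paper. It is not the actual Prague implementation, which adds the Internet-hardening extensions noted above.

```python
class DctcpStyleSender:
    """DCTCP-style reaction to ECN marks (the mechanism TCP Prague builds
    on [23]); an illustrative sketch, not the Prague implementation."""

    G = 1 / 16  # EWMA gain for the marked fraction, per the DCTCP paper

    def __init__(self, cwnd_segments=10.0):
        self.cwnd = cwnd_segments
        self.alpha = 0.0  # smoothed fraction of packets marked CE

    def on_round(self, acked, marked):
        """Called once per RTT with counts of ACKed and ECN-marked packets."""
        frac = marked / acked if acked else 0.0
        self.alpha = (1 - self.G) * self.alpha + self.G * frac
        if marked:
            # Cut in proportion to the congestion level: light marking gives
            # a small, smooth reduction rather than a multiplicative halving.
            self.cwnd = max(2.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0  # additive increase of one segment per RTT
```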

III. RELATED WORK

A number of simulation studies have explored the problem of latency for TCP congestion control over mmWave links.


Fig. 3: Traces of link capacity (Mbps) vs. time (s) collected over the 60GHz WLAN testbed in four representative scenarios: Static Link, Short Blockages, Long Blockages, and Mobility & Blockages. We play back these traces in our CloudLab experiment to replicate the frequent variation in mmWave link capacity.

In [14], an end-to-end 5G mmWave simulation in ns-3 is used to evaluate how TCP New Reno and TCP CUBIC react to a dynamic mmWave channel with short-term blockages. The authors show that large drops in link capacity during a LOS to NLOS transition cause extreme delays due to the bufferbloat problem for loss-based TCP congestion control. This work motivates the need to move away from classic loss-based congestion control for mmWave links. Building upon this work, [15] uses the same ns-3 framework to evaluate the use of Controlled Delay (CoDel) AQM as a potential solution to mitigate latency, and suggests that AQM will under-utilize the link capacity. As an alternative, they propose a cross-layer dynamic receive window adaptation scheme to manage queuing delay.

Several studies have followed this early work by exploring alternative congestion control schemes over mmWave links, all using the same end-to-end 5G mmWave simulation framework in ns-3. Multipath TCP (MP-TCP) for mmWave links is evaluated in [16], and a TCP proxy architecture for mmWave links is proposed in [24]. More recently, [19] simulates a wide range of congestion control protocols over mmWave links, including TCP BBR, but with a primary focus on comparison between TCP CUBIC and TCP YeAH. Finally, in [17], TCP BBR is compared to loss-based congestion control schemes in a high speed train scenario and a dense urban scenario. However, in this simulation the authors use a small buffer, so that the benefits of the low latency congestion control relative to the loss-based congestion control are not fully realized.

While the above-mentioned literature used internal ns-3 implementations of different TCP congestion control algorithms, [18] used real TCP stacks with the Direct Code Execution framework in ns-3, along with the ns-3 mmWave 5G module. The results showed a general tradeoff between capacity and latency when using purely loss-based congestion controls like CUBIC and Reno, versus using a CoDel AQM at the bottleneck. Other TCP variants like Scalable TCP and TCP Illinois were also considered, but did not show any major improvement in mmWave wireless conditions. In a follow-up study [25], they further investigate fairness problems that arise because of the frequent blocking events in mmWave links.

Finally, [26] investigates a problem with TCP BBR when there is extreme jitter on the link, a problem that was first observed by the authors over a mmWave link. This study uses the CloudLab [21] testbed to evaluate TCP BBR using its actual Linux kernel implementation, over a link with jitter produced by netem.

IV. EXPERIMENT SETUP

To replicate the frequent variation in mmWave link capacity in our experiment, we used link traces obtained from [27], which were collected using a 60GHz WLAN testbed. Link traces include received signal strength (RSSI), signal quality index (SQI), beamforming sector index, transmit modulation and coding scheme, transmit goodput, and the theoretical transmit capacity (given SQI and TX MCS), all as reported by the wil6210 driver in Linux.
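For readers reproducing the experiment, a trace can be loaded with a few lines of Python. The file layout and column names here ("time" and "capacity_mbps") are illustrative assumptions, not documented fields of the dataset in [27]; adapt them to the actual trace format.

```python
import csv

def load_capacity_trace(path):
    """Load (time_s, capacity_mbps) samples from a CSV link trace.

    The column names are assumed for illustration; adjust them to match
    the actual layout of the traces from [27].
    """
    samples = []
    with open(path) as f:
        for row in csv.DictReader(f):
            samples.append((float(row["time"]), float(row["capacity_mbps"])))
    return samples
```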

A link was established between two laptop devices equipped with 60GHz wireless cards by Qualcomm. One laptop was placed on a shelf and acted as a fixed access point, while the other acted as a client device. Then, link traces were captured under four different conditions, each lasting 120 seconds:

• Static link: in this scenario, the laptops are positioned in a static configuration across the room from one another, with no obstructions between them.

• Short blockages: in this scenario, the laptops were positioned in the same configuration as in the static link scenario. However, at approximately 15-second intervals, a human walked through the line of sight path between the laptops.

• Long blockages: this scenario is similar to the one with short blockages. In this one, however, the human paused for approximately four seconds while obstructing the signal path.

• Mobility and blockages: in the final scenario, the access point was static, but the client laptop moved around the room in a circle. At the same time, a human moved past the access point, obstructing its line of sight view of the client, at approximately 15-second intervals.

While collecting traces, iperf3 was used to send a single TCP flow in each direction between the client and the access point. The reason for these flows was that the behavior of the 60 GHz NIC is different when the link is not loaded versus when there is traffic on the link.


The theoretical capacity of the link in each scenario is shown in Figure 3. In general, we observe that the link capacity is reduced whenever there is a human obstruction in the signal path. In the scenario with mobility, the link capacity also increases and decreases as the client laptop moves closer to, then farther from, the fixed access point.

To conduct our congestion control experiments, we use the CloudLab [21] testbed. We configured a three-node, two-hop topology with a TCP sender and receiver connected by a bottleneck router, all with high-capacity links between them. At the bottleneck router, we configured a FIFO queue with 7.5MB capacity, representing approximately 20ms of delay when the queue is full and the link operates at the typical unobstructed rate of 3Gbps. To play back the link traces, we use the bandwidth shaping tools provided by tc in Linux to limit the egress rate of this queue. For experiments with TCP Prague, which requires ECN, we enabled accurate ECN at both the sender and the receiver, and configured the bottleneck queue to mark packets at a 5ms queuing delay threshold.
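The playback loop can be sketched as follows. The interface name and the choice of the tbf qdisc are our own assumptions for illustration; the setup above says only that tc bandwidth shaping is used to limit the egress rate.

```python
import subprocess, time

DEV = "eth1"            # bottleneck egress interface (assumed name)
BUF_BYTES = 7_500_000   # the 7.5MB FIFO used at the bottleneck router

def set_rate(mbps):
    """Re-shape the egress rate with a token bucket filter. This is one
    plausible tc configuration, not necessarily the exact one used here."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", DEV, "root", "tbf",
         "rate", f"{int(mbps)}mbit", "burst", "32kbit",
         "limit", str(BUF_BYTES)],
        check=True)

def play_back(samples):
    """Apply a list of (time_s, capacity_mbps) samples, e.g. a Fig. 3 trace."""
    start = time.monotonic()
    for t, mbps in samples:
        time.sleep(max(0.0, t - (time.monotonic() - start)))
        set_rate(mbps)
```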

In each experiment, we send ten bulk TCP flows of duration 120 seconds from the TCP sender to the receiver using iperf3. We record the throughput per flow reported by iperf3 and the smoothed RTT measurements from each TCP socket using ss. We consider four congestion control alternatives: TCP CUBIC (the current default in the Linux kernel, and a loss-based congestion control), TCP Prague (as part of L4S), TCP BBR v1, and TCP BBR v2. To support these congestion control algorithms, all hosts in our experiment run Ubuntu Linux 18.04, but with additional kernels including the low latency congestion control algorithms under consideration. For TCP BBR (both BBRv1 and BBRv2) experiments, we used a 5.2-rc3 kernel with the BBRv2 alpha implementation from the v2alpha branch of Google's BBR GitHub repository [28]. For TCP Prague (L4S) and TCP CUBIC experiments, we used a 5.3-rc3 kernel from the testing branch of the L4S team's GitHub repository [29].
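A minimal measurement harness along these lines might look like the sketch below. The server address is a placeholder, and the specific iperf3 and ss invocations are our reconstruction of the workflow described above, not the exact scripts used.

```python
import subprocess, time

SERVER = "10.0.0.2"  # placeholder address of the receiver node

# Ten parallel 120-second bulk flows; iperf3's -C flag selects the kernel
# congestion control module (e.g. cubic, bbr, prague), which must be
# available on both endpoints.
sender = subprocess.Popen(
    ["iperf3", "-c", SERVER, "-P", "10", "-t", "120", "-C", "bbr", "--json"],
    stdout=open("throughput.json", "w"))

# While the flows run, sample the smoothed RTT of every TCP socket once
# per second using ss (-t -i prints per-socket TCP internals).
with open("rtt.log", "w") as log:
    while sender.poll() is None:
        out = subprocess.run(["ss", "-ti"], capture_output=True, text=True)
        log.write(f"{time.time()}\n{out.stdout}\n")
        time.sleep(1)
```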

V. RESULTS

The results of our experiments, showing the throughput and transport RTT for four congestion control protocols in different mmWave link scenarios, are shown in Figure 4 and Figure 5. Instructions to reproduce these figures are provided at [22].

A. Latency

TCP CUBIC and other loss-based congestion control schemes fill the bottleneck buffer, and therefore experience the most queuing delay. Our RTT results for TCP CUBIC with a static link (Fig. 4) are in line with this expectation, as we observe around 20ms of queueing delay (7.5MB buffer / 3Gbps typical capacity). We also see spikes in the RTT when the line of sight path of the mmWave link is blocked and the capacity is reduced. The delay gets as high as 150ms, which is more than seven times the base delay with a static link.
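The 20ms figure follows directly from the buffer size and the link rate; a one-line check:

```python
buf_bytes, link_bps = 7.5e6, 3e9
print(buf_bytes * 8 / link_bps * 1e3)  # 20.0 ms to drain a full 7.5MB buffer at 3Gbps
```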

In contrast, an ECN-based congestion control is expected to maintain queueing delays at or below the marking threshold specified at the bottleneck routers. We observe this behavior with L4S's TCP Prague, where the delay stays near the 5ms ECN marking threshold. The delays increase in the presence of blockage events, but remain much lower compared to loss-based TCP. Both versions of TCP BBR maintain small delays which mostly stay in the range of 5-10ms for all mmWave link conditions. BBR achieves this without any ECN support, using its model-based algorithm at the sender side.

B. Throughput and Fairness

We begin by noting that the sum of the throughput of the ten flows in all scenarios and for all congestion control protocols was close to the link capacity (Figure 5). However, in TCP BBR, the sending rate is reduced at regular intervals (every 10 seconds for BBR v1, and more frequently for BBR v2) to drain the queue in order to measure the minimum RTT of the link. For applications that require consistent high throughput as well as low delay, this tradeoff will be unacceptable.

With respect to fairness, we observe that TCP CUBIC maintains a fair share of throughput between the 10 bulk flows. TCP Prague (L4S) has poor fairness, with some flows essentially being starved while others capture most of the link throughput. This unfairness seems to be exacerbated by variations in capacity on the mmWave link: in some cases, even when the flows converge to a fair share at first, the balance is “reset” when the link is blocked, and in the following interval some flows are starved. TCP BBR is able to maintain better fairness between the 10 competing flows in this experiment; however, BBR is known to have problems with fairness for flows with different RTTs [30] and for flows that share a link with loss-based congestion control flows [31].
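These fairness observations could be quantified with Jain's fairness index over the per-flow iperf3 throughputs. The paper reports per-flow curves rather than an index, so the snippet below is an illustrative add-on with made-up example numbers:

```python
def jains_index(throughputs):
    """Jain's fairness index: 1.0 for equal shares, close to 1/n when a
    single flow captures nearly all of the capacity."""
    s = sum(throughputs)
    n = len(throughputs)
    return s * s / (n * sum(x * x for x in throughputs)) if s else 0.0

print(jains_index([300] * 10))         # 1.0: ten flows share fairly
print(jains_index([2900] + [11] * 9))  # ~0.11: near-starvation of nine flows
```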

VI. DISCUSSION AND CONCLUSIONS

Our results confirm that classic loss-based congestion control like TCP CUBIC will not be able to provide low delays in mmWave networks. Queueing delays can get as high as 150ms when the line of sight path is temporarily blocked.

However, the low delay congestion control protocols we evaluated also had some problems. TCP Prague, which is used in the L4S architecture, manages queuing delay well. However, we observed problems with fairness and starvation of some flows, and this problem seems to become worse with blockages and mobility. Furthermore, TCP Prague requires accurate ECN negotiation at both the sender and the receiver, and for routers in the network path to have ECN marking capability. This makes it difficult to deploy the L4S architecture at a large scale, although in controlled environments it is much more practical. TCP BBR maintains low queueing delays in mmWave channel conditions and has better fairness compared to L4S. However, BBR may not be the ideal candidate for applications which need uninterrupted high-speed service, such as live streaming of high quality video. This is because BBR periodically enters its “ProbeRTT” phase, wherein all senders cut their CWND size drastically to drain the queue. This helps the BBR algorithm to get a correct estimate of the minimum RTT of the path, but these periodic dips in throughput will be a major cause of concern in many cases.


Fig. 4: Transport RTT (ms) vs. time (s) for 10 flows with different congestion control schemes (TCP CUBIC, TCP Prague (L4S), TCP BBR v1, TCP BBR v2) in the Static Link, Short Blockages, Long Blockages, and Mobility & Blockages scenarios. Each flow is represented by a different color.

Fig. 5: Throughput (Mbps) vs. time (s) for 10 flows with different congestion control schemes (TCP CUBIC, TCP Prague (L4S), TCP BBR v1, TCP BBR v2) in the same four scenarios. Each flow is represented by a different color.


Our experimental setup has enabled us to investigate the dynamic behavior of these low latency congestion control schemes in a realistic mmWave wireless scenario. The mmWave setting with blockages and/or mobility is highly dynamic, and hence a challenging environment for congestion control. This work is a first step towards finding a suitable congestion control algorithm which can minimize delay for mmWave wireless networks.

One major limitation of this experimental approach is that it does not allow us to investigate the behavior of flows coming from multiple mmWave clients, which are blocked independently but share the link capacity. We hope to collect additional link capacity traces in a multi-client setting, with which to extend this experiment.

Given the results of this paper, we hope to pursue several future research directions. First, we would like to move away from bulk flows to a more realistic traffic model with a mixture of long and short flows. We may consider evaluating other recently proposed low latency congestion control schemes. We hope to extend our experiment to include scenarios where flows have different RTTs, and to include other topologies in which the bottleneck may not be at the mmWave link. Finally, we would like to more thoroughly investigate the performance with a dynamic mmWave channel, and either propose improvements to existing low latency congestion control schemes, or propose a new one, to achieve consistent high throughput and low delay in this challenging environment.

ACKNOWLEDGEMENTS

This work was supported by the New York State Center for Advanced Technology in Telecommunications (CATT), NYU WIRELESS, and gifts from Cisco Systems and Futurewei. The authors would also like to acknowledge the contributions of Shreeshail Hingane and Youssef Azzam, who collected the mmWave link traces used in this research.

REFERENCES

[1] M. A. Lema, A. Laya, T. Mahmoodi, M. Cuevas, J. Sachs, J. Markendahl, and M. Dohler, “Business case and technology analysis for 5G low latency applications,” IEEE Access, vol. 5, pp. 5917–5935, 2017.

[2] ITU-R, “Minimum requirements related to technical performance for IMT-2020 radio interface(s),” Tech. Rep. ITU-R M.2410-0, 2017.

[3] T. Bai and R. W. Heath, “Coverage and rate analysis for millimeter-wave cellular networks,” IEEE Transactions on Wireless Communications, vol. 14, no. 2, pp. 1100–1114, 2014.

[4] G. R. MacCartney, T. S. Rappaport, and S. Rangan, “Rapid fading due to human blockage in pedestrian crowds at 5G millimeter-wave frequencies,” in IEEE Global Communications Conference (GLOBECOM 2017), 2017, pp. 1–7.

[5] A. Narayanan, J. Carpenter, E. Ramadan, Q. Liu, Y. Liu, F. Qian, and Z.-L. Zhang, “A first measurement study of commercial mmWave 5G performance on smartphones,” in WWW 2020, 2020.

[6] “Verizon community: Buffer bloat on Verizon 5G,” https://community.verizonwireless.com/thread/964495, accessed: 2019-02-20.

[7] “Reddit: I got 5G Home Internet installed this week,” https://www.reddit.com/r/verizon/comments/9lqp6n/i_got_5g_home_internet_installed_this_week_and_it/e7ancsf/, accessed: 2019-02-20.

[8] J. Gettys and K. Nichols, “Bufferbloat: Dark buffers in the Internet,” Queue, vol. 9, no. 11, pp. 40–54, 2011.

[9] N. Cardwell, Y. Cheng, C. S. Gunn, S. H. Yeganeh, and V. Jacobson, “BBR: Congestion-based congestion control,” Queue, vol. 14, no. 5, pp. 20–53, 2016.

[10] M. Dong, T. Meng, D. Zarchy, E. Arslan, Y. Gilad, B. Godfrey, and M. Schapira, “PCC Vivace: Online-learning congestion control,” in 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18), 2018, pp. 343–356.

[11] V. Arun and H. Balakrishnan, “Copa: Practical delay-based congestion control for the Internet,” in 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18), 2018, pp. 329–342.

[12] K. Winstein, A. Sivaraman, and H. Balakrishnan, “Stochastic forecasts achieve high throughput and low delay over cellular networks,” in 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13), 2013, pp. 459–471.

[13] B. Briscoe, K. D. Schepper, M. Bagnulo, and G. White, “Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture,” Internet Engineering Task Force, Internet-Draft draft-ietf-tsvwg-l4s-arch-05, Feb. 2020, work in progress. [Online]. Available: https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-l4s-arch-05

[14] M. Zhang, M. Mezzavilla, R. Ford, S. Rangan, S. Panwar, E. Mellios, D. Kong, A. Nix, and M. Zorzi, “Transport layer performance in 5G mmWave cellular,” in 2016 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2016, pp. 730–735.

[15] M. Zhang, M. Mezzavilla, J. Zhu, S. Rangan, and S. Panwar, “TCP dynamics over mmWave links,” in IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2017, pp. 1–6.

[16] M. Polese, R. Jana, and M. Zorzi, “TCP and MP-TCP in 5G mmWave networks,” IEEE Internet Computing, vol. 21, no. 5, pp. 12–19, 2017.

[17] M. Zhang, M. Polese, M. Mezzavilla, J. Zhu, S. Rangan, S. Panwar, and M. Zorzi, “Will TCP work in mmWave 5G cellular networks?” IEEE Communications Magazine, vol. 57, no. 1, pp. 65–71, 2019.

[18] M. Pieska and A. Kassler, “TCP performance over 5G mmWave links - tradeoff between capacity and latency,” in IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2017, pp. 385–394.

[19] P. J. Mateo, C. Fiandrino, and J. Widmer, “Analysis of TCP performance in 5G mm-wave mobile networks,” in IEEE International Conference on Communications (ICC 2019), 2019, pp. 1–7.

[20] R. Ford, M. Zhang, M. Mezzavilla, S. Dutta, S. Rangan, and M. Zorzi, “Achieving ultra-low latency in 5G millimeter wave cellular networks,” IEEE Communications Magazine, vol. 55, no. 3, pp. 196–203, 2017.

[21] R. Ricci, E. Eide, and the CloudLab Team, “Introducing CloudLab: Scientific infrastructure for advancing cloud architectures and applications,” ;login: the magazine of USENIX & SAGE, vol. 39, no. 6, pp. 36–38, 2014.

[22] A. Srivastava, “An experimental evaluation of low latency congestion control over mmWave links,” Run my experiment on GENI blog: https://witestlab.poly.edu/blog/tcp-mmwave, 2020.

[23] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan, “Data center TCP (DCTCP),” in Proceedings of the ACM SIGCOMM 2010 conference, 2010, pp. 63–74.

[24] M. Polese, M. Mezzavilla, M. Zhang, J. Zhu, S. Rangan, S. Panwar, and M. Zorzi, “milliProxy: A TCP proxy architecture for 5G mmWave cellular systems,” in 51st Asilomar Conference on Signals, Systems, and Computers, 2017, pp. 951–957.

[25] M. Pieska, A. J. Kassler, H. Lundqvist, and T. Cai, “Improving TCP fairness over latency controlled 5G mmWave communication links,” in 22nd International ITG Workshop on Smart Antennas (WSA ’18), 2018, pp. 1–8.

[26] R. Kumar, A. Koutsaftis, F. Fund, G. Naik, P. Liu, Y. Liu, and S. Panwar, “TCP BBR for ultra-low latency networking: challenges, analysis, and solutions,” in 2019 IFIP Networking Conference (IFIP Networking), 2019, pp. 1–9.

[27] S. Hingane, “Using AQM to manage ‘temporary bufferbloat’ on mmWave links,” Run my experiment on GENI blog: https://witestlab.poly.edu/blog/aqm-mmwave/, 2020.

[28] Google, “BBR repository,” https://github.com/google/bbr.

[29] L4S, “L4S Linux repository,” https://github.com/L4STeam/linux.

[30] S. Ma, J. Jiang, W. Wang, and B. Li, “Towards RTT fairness of congestion-based congestion control,” CoRR, vol. abs/1706.09115, 2017. [Online]. Available: http://arxiv.org/abs/1706.09115

[31] R. Ware, M. K. Mukerjee, S. Seshan, and J. Sherry, “Modeling BBR’s interactions with loss-based congestion control,” in Proceedings of the Internet Measurement Conference, 2019, pp. 137–143.