Predictable Performance Optimization for Wireless Networks
Transcript of Predictable Performance Optimization for Wireless Networks
Predictable Performance Optimization for Wireless Networks
Lili Qiu, University of Texas at Austin
[email protected]
Joint work with Yi Li, Yin Zhang, Ratul Mahajan, and Eric Rozner
ACM SIGCOMM 2008, August 21, 2008
2
Motivation
• Wireless networks are becoming ubiquitous
• Managing wireless networks is hard
• Our goal: develop systematic techniques to optimize the performance of wireless networks
– Predict if given sending rates are achievable
– Perform what-if analysis
– Optimize sending rates for different objectives
[Figure: wireline vs. wireless network management]
3
Unpredictability of wireless networks
[Plot: throughput (Kbps) vs. sending rate (Kbps), curves "bad-good" and "good-bad"]
Need predictable wireless performance optimization.
[Figure: two two-hop topologies S→R→D, with link delivery rates 50%/100% ("bad-good") and 100%/50% ("good-bad")]
4
Model-driven optimization framework
[Diagram: given network → network measurement → network model → constraints → optimization → optimized flow rates; additional inputs: traffic demands, routing, and performance objectives (maximize fairness, total throughput, …)]
5
Existing models are insufficient
• Models of asymptotic performance bounds
– Cannot model any specific network [GP00,LB+01,GT01,GV02]
• Conflict-graph-based models
– Assume perfect scheduling and over-estimate 802.11 performance [JPPQ03]
• 802.11 DCF models [Bianchi00,KA+05,GLC06,GSK05,QZWH+07,KDG07]
– Restricted topologies or traffic demands
– They aim to estimate performance and cannot be easily incorporated into an optimization procedure
Need a better 802.11 network model for optimization.
6
Our network model
• Provides a compact characterization of the feasible solution space to facilitate optimization
• Simple yet flexible and accurate
– Handles asymmetric link loss rates
– Handles asymmetric interference
– Handles hidden terminals
– Handles heterogeneous, multihop traffic demands
[Diagram: network measurement → network model → throughput constraints, loss rate constraints, sending rate constraints]
7
Throughput constraints
• Divide time into variable-length slots (VLS)
– 3 types of slots: idle slot, transmission slot, deferral slot
• Throughput constraint:

g_i = \frac{EP_i \, \tau_i \, (1 - p_i)}{T_{slot}\bigl(1 - \sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}}

– EP_i: expected payload transmission time
– \tau_i: probability of starting a transmission in a slot
– (1 - p_i): success probability
– denominator: expected duration of a variable-length slot
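The throughput constraint above can be sketched in code. This is a minimal version with variable names of our own choosing (EP, tau, p, T_slot, T, D mirror the slide's symbols but are assumptions, not the talk's implementation):

```python
def link_throughput(i, EP, tau, p, T_slot, T, D):
    """Estimate throughput of link i under the variable-length-slot model.

    EP[i]   : expected payload transmission time of link i
    tau[j]  : probability that link j starts transmitting in a slot
    p[i]    : loss rate of link i
    T_slot  : duration of an idle (backoff) slot
    T[j]    : transmission-slot duration of link j
    D[i][j] : deferral-slot duration link i observes when link j transmits
    """
    n = len(tau)
    idle = T_slot * (1.0 - sum(tau))                 # idle-slot contribution
    tx = sum(tau[j] * T[j] for j in range(n))        # transmission slots
    defer = sum(tau[j] * D[i][j] for j in range(n))  # deferral slots
    expected_slot = idle + tx + defer                # E[VLS duration]
    return EP[i] * tau[i] * (1.0 - p[i]) / expected_slot
```

With a single lossless link the formula reduces to payload time times attempt probability over the expected slot length.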
8
Loss rate constraints
• Inherent and collision losses are independent
• Inherent loss
– Based on one-sender broadcast measurement
• Collision loss
– Synchronous loss
• The two senders can carrier-sense each other
• Occurs when two transmissions start at the same time
– Asynchronous loss
• At least one sender cannot carrier-sense the other
• Occurs when two transmissions overlap
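Since the slide treats inherent and collision losses as independent, a link's overall loss rate composes multiplicatively. A one-line sketch (the function name is ours):

```python
def overall_loss(p_inherent, p_collision):
    """Overall loss rate of a link when inherent loss and collision loss
    occur independently: a packet survives only if it avoids both."""
    return 1.0 - (1.0 - p_inherent) * (1.0 - p_collision)
```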
9
Sending rate feasibility constraints
• 802.11 unicast
– Random backoff interval uniformly chosen from [0, CW]
– CW doubles after a failed transmission until CW_max, and restores to CW_min after a successful transmission or when the max retry count is reached
– CW(p_i): the expected contention window size under packet loss rate p_i [Bianchi00]
• Sending rate feasibility constraint:

0 \le \tau_i \le \frac{1}{1 + CW(p_i)/2}

[Timeline: DIFS → random backoff → data transmission → SIFS → ACK transmission]
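The feasibility bound can be checked once CW(p_i) is computed. Below is a deliberately simplified sketch of the expected contention window under loss rate p, a crude stand-in for the [Bianchi00] derivation; the defaults cw_min=32, cw_max=1024, max_retry=7 are our assumptions:

```python
def expected_cw(p, cw_min=32, cw_max=1024, max_retry=7):
    """Simplified expected contention window when each transmission
    attempt independently fails with probability p: average the window
    in effect at the stage where the retry chain ends."""
    total, reach = 0.0, 1.0          # reach = P(we get to stage k)
    for k in range(max_retry + 1):
        w = min(cw_min * (2 ** k), cw_max)   # window doubles per failure
        # the chain ends here on success, or unconditionally at max retry
        stop = reach if k == max_retry else reach * (1.0 - p)
        total += stop * w
        reach *= p
    return total

def tau_feasible(tau_i, p_i):
    """Check the slide's bound: 0 <= tau_i <= 1 / (1 + CW(p_i)/2)."""
    return 0.0 <= tau_i <= 1.0 / (1.0 + expected_cw(p_i) / 2.0)
```

With no loss, CW(0) = CW_min = 32, so the bound on τ_i is 1/17 ≈ 0.059.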
10
Extensions to the basic model
• RTS/CTS
– Add RTS and CTS delay to the VLS duration
– Add RTS- and CTS-related losses to the loss rate constraints
• Multihop traffic demands
– Link load = routing matrix × e2e demand
– The routing matrix gives the fraction of each e2e demand that traverses each link
• TCP traffic
– Update the routing matrix: R_TCP = R_data + α · R_ack, where α reflects the size & frequency of TCP ACKs
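The multihop and TCP extensions amount to matrix bookkeeping. A small sketch under the assumption that routing[l][d] holds the fraction of demand d crossing link l (names and list-of-lists representation are ours):

```python
def link_loads(routing, demands):
    """Link load = routing matrix x e2e demand vector.

    routing[l][d]: fraction of demand d that traverses link l
    demands[d]   : end-to-end rate of demand d
    """
    return [sum(row[d] * demands[d] for d in range(len(demands)))
            for row in routing]

def tcp_routing(r_data, r_ack, alpha):
    """TCP extension: fold reverse-path ACK traffic into the routing
    matrix, R_TCP = R_data + alpha * R_ack, where alpha reflects the
    relative size and frequency of TCP ACKs."""
    return [[r_data[l][d] + alpha * r_ack[l][d]
             for d in range(len(r_data[0]))]
            for l in range(len(r_data))]
```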
11
Model-driven optimization framework
[Diagram (recap): given network → network measurement → network model → constraints → optimization → optimized flow rates; additional inputs: traffic demands, routing, and performance objectives (maximize fairness, total throughput, …)]
12
Flow throughput feasibility testing
• Test if given flow throughputs are achievable
• Challenge: strong interdependency
• Our approach: iterative procedure
[Flowchart: input throughput → initialize τ = 0 and p = p_inherent → estimate τ from throughput and p → estimate p from throughput and τ → estimate throughput from p and τ → converged? (no: iterate; yes: check feasibility constraints) → output: feasible/infeasible]
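The iterative procedure might be sketched as follows; the three update functions and the constraint check are assumed oracles supplied by the model, not real implementations:

```python
def feasibility_test(throughput, p_inherent, est_tau, est_p, est_tput,
                     check, tol=1e-6, max_iter=100):
    """Iterate the model's estimation steps to a fixed point, then
    evaluate the feasibility constraints.

    est_tau(tput, p) -> tau, est_p(tput, tau) -> p, and
    est_tput(p, tau) -> tput are the model's update functions;
    check(tput, tau, p) -> bool evaluates the constraints.
    """
    p = list(p_inherent)               # start from inherent loss only
    tput = list(throughput)
    for _ in range(max_iter):
        tau = est_tau(tput, p)
        new_p = est_p(tput, tau)
        new_tput = est_tput(new_p, tau)
        if max(abs(a - b) for a, b in zip(new_tput, tput)) < tol:
            return check(new_tput, tau, new_p)   # feasible / infeasible
        p, tput = new_p, new_tput
    return False   # no convergence: conservatively report infeasible
```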
13
Fair rate allocation
[Flowchart:
1. Initialization: add all demands to unsatSet
2. Scale up all demands in unsatSet until some demand is saturated or scale ≥ 1
3. If scale ≥ 1, output X; otherwise move saturated demands from unsatSet to X
4. If unsatSet ≠ ∅, repeat from step 2; otherwise output X]
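One reading of the flowchart is a max-min style loop: scale unsatisfied demands until one saturates, freeze it, and repeat. In this sketch, max_scale is an assumed oracle returning the largest feasible common scale factor for the unsatisfied demands plus the set of demands it saturates:

```python
def fair_allocation(demands, max_scale):
    """Max-min style fair rate allocation (our reading of the flowchart).

    max_scale(demands, rates, unsat) -> (scale, saturated_set), where
    scale is the largest common factor the network still supports for
    demands in unsat, and saturated_set are the demands it saturates.
    """
    unsat = set(range(len(demands)))
    rates = [0.0] * len(demands)
    while unsat:
        s, saturated = max_scale(demands, rates, unsat)
        if s >= 1.0:                       # every remaining demand fits fully
            for d in unsat:
                rates[d] = demands[d]
            return rates
        for d in unsat:
            rates[d] = s * demands[d]      # scale up to the saturation point
        unsat -= saturated                 # freeze saturated demands
        if not saturated:
            break                          # safety: avoid an infinite loop
    return rates
```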
14
Total throughput maximization
• Formulate a non-linear optimization problem (NLP)
• Solve the NLP using iterative linear programming

\max \sum_d x_d   (maximize total throughput)
s.t. \sum_d R_{id}\,x_d \le \frac{EP_i \,\tau_i\,(1-p_i)}{T_{slot}\bigl(1-\sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}}   (link load is bounded by the throughput constraints)
0 \le \tau_i \le \frac{1}{1 + CW(p_i)/2}   (sending rate is feasible)
0 \le x_d \le x_d^*   (e2e throughput is bounded by demand)
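The "iterative linear programming" step can be sketched abstractly: freeze the nonlinear (τ- and p-dependent) terms at the current iterate, solve the resulting LP, and repeat until the rates converge. Both linearize and solve_lp are assumed oracles here, not the paper's solver:

```python
def iterative_lp(x0, linearize, solve_lp, tol=1e-6, max_iter=50):
    """Solve a non-linear program by repeated linearization.

    linearize(x) builds an LP with the nonlinear terms fixed at x;
    solve_lp(lp) maximizes total throughput under that LP and returns
    the new rate vector.
    """
    x = x0
    for _ in range(max_iter):
        lp = linearize(x)          # freeze tau/p-dependent terms
        x_new = solve_lp(lp)       # solve the linearized problem
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new           # rates converged
        x = x_new
    return x
```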
15
Evaluation methodology
• Testbed experiments
– Capture real-world complexities
– 19 mesh nodes in the UTCS building; up to 7 hops
• Qualnet simulation
– Controlled environment for a broad range of evaluations
• Rate optimization schemes
– No optimization
– Conflict graph (CG) model: assumes perfect scheduling
– Our scheme
• Traffic
– TCP and UDP; saturated and random demands
• Routing
– Hop count, ETX, MIC, and CG-based routing
16
Model validation: UDP traffic
[Scatter plot: actual vs. estimated throughput (Mbps), with y = x and y = 0.8x reference lines]
[CDF: fraction of runs vs. ratio between actual and estimated throughput, for scale = 1.0, 1.1, 1.2, 1.5]
1) Most estimated rates are achievable within 20%.
2) Rates scaled up by just 10% become unachievable.
y=0.8x
17
Model validation: TCP traffic
[Scatter plot: actual vs. estimated throughput (Mbps), with y = x and y = 0.8x reference lines]
[CDF: fraction of runs vs. ratio between actual and estimated throughput, for scale = 1.0, 1.1, 1.2, 1.5]
Our model is accurate for TCP traffic.
18
Model validation: conflict graph model
[Scatter plots (UDP and TCP): actual vs. estimated throughput (Mbps), with y = x and y = 0.8x reference lines]
CG model significantly over-estimates sending rates.
19
Maximizing fairness
[Plots (UDP and TCP): fairness index vs. number of flows, for no optimization, CG optimization, and our optimization]
Fairness index is close to 1 under our scheme, while it degrades quickly under the other schemes.
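The slides do not name the fairness index; assuming the commonly used Jain's index, it can be computed as:

```python
def jain_fairness(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1 when all rates are equal, and 1/n when one flow gets all."""
    n = len(rates)
    s = sum(rates)
    return s * s / (n * sum(x * x for x in rates))
```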
20
Maximizing total throughput
[Plots (UDP and TCP): total throughput vs. number of flows, for no optimization, CG optimization, and our optimization]
Our scheme significantly increases total throughput.
21
Impact on different routing schemes
[Plots (UDP and TCP): throughput (Mbps) vs. number of flows, with and without our optimization]
Our scheme helps all routing schemes considered.
22
Conclusions
• Main contributions
– Predictable wireless performance optimization
• A simple yet accurate wireless network model
• Effective model-driven optimization algorithms
– Demonstrated their effectiveness through testbed experiments and simulation
• Future work
– Handle dynamic traffic and topologies
– Use passive measurement to seed our model
Thank you!
24
TCP pathologies under no rate control
[Figure: topology with senders S1, S2, relay R, and destinations D1, D2]

No rate limit (Mbps): 0.805, 0.740
Rate limit (Mbps): 1.066, 1.064

TCP cannot set the rates that maximize throughput.
25
Sensitivity of wireless network throughput to bottleneck location (I)
[Figure: two-hop topologies S→R→D with a bad link and a good link in either order]
[Plots (simulation and testbed): throughput (Kbps) vs. link loss rate (%), curves "bad-good" and "good-bad"]
Performance degrades severely without rate limiting.
26
How to determine safe sending rates under wireless interference?
28
Throughput & VLS duration constraints
• Divide time into variable-length slots
– Idle slot, transmission slot, deferral slot
• Throughput constraint:

g_i = \frac{EP_i \,\tau_i\,(1-p_i)}{T_{slot}\bigl(1-\sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}}

• VLS duration constraint:

\mu_i = T_{slot}\bigl(1-\sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}

– EP_i: expected payload transmission time at link i
– \tau_i: probability of starting a transmission in a slot
– p_i: loss rate of link i
– \mu_i: expected VLS duration
29
Flow throughput feasibility testing
• Goal: test whether a given set of link throughputs is achievable
• Capture interdependence
– τ depends on link throughput and loss rate
– Loss rate depends on link active probabilities
– A link's active probability depends on the active probabilities of other links
[Flowchart: initialize τ = 0 and p = 0 → estimate τ from throughput and p → estimate p from throughput and τ → compute throughput → converged? (no: iterate; yes: check feasibility constraints)]
30
Related Work
• Interference modeling
– Asymptotic performance bounds
– Conflict-graph-based models
– 802.11 DCF models
• Simple but restrictive
– All nodes are within communication range of each other
– Restricted traffic demands
• General but expensive
• Both aim to predict performance and cannot facilitate optimization
• Rate control and scheduling
– Joint optimization of rate control and scheduling
– IFRC: fair rate control for sensor networks, specific to tree topology and workload
• Routing
– Least-cost-path models [HopCount, ETX, WCETT, MIC]
31
Motivation (Cont.)
• Vision: bring wireless network management on par with wireline network management
• This work provides answers to basic management questions:
– What traffic demands can be supported in a network?
– What is the impact of routing changes and the addition of new flows?
– What are safe sending rates for a given set of flows?
32
Throughput constraints
• EP_i: expected payload transmission time at link i
• \tau_i: probability of starting a transmission in a slot
• p_i: loss rate of link i
• Variable-length slots:
– Idle slot
– Transmission slot
– Deferral slot

g_i = \frac{EP_i \,\tau_i\,(1-p_i)}{T_{slot}\bigl(1-\sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}}
33
Lessons learned
• Rate limiting is necessary
• Proper rate limiting has to take interference into account
• Q: how to systematically estimate the safe sending rates that a network can support?
34
Throughput constraints

g_i = \frac{EP_i \,\tau_i\,(1-p_i)}{T_{slot}\bigl(1-\sum_j \tau_j\bigr) + \sum_j \tau_j T_j + \sum_j \tau_j D_{ij}}

– Numerator: expected payload transmission time × probability of starting a transmission in a slot × success probability
– Denominator: expected variable-length slot duration (idle, transmission, and deferral slot contributions)