Full Stack Load Testing

Posted on 17-Dec-2014


Description

A talk I gave at the Boston Web Performance Meetup in August 2014. Performance is one of the most challenging issues in modern web app design, in large part because modeling, testing, and validating performance before deploying to production is so difficult. While many ops teams have nailed down the problem of re-creating pre-production environments that closely mimic production, those environments frequently rely on known-good components beyond the application code itself: AWS ELB, F5 load balancers, CDNs, Varnish, and more. Testing plug-in components like these can be challenging, because their performance characteristics don't directly align with application metrics:

- How many simultaneous users can my load balancer support?
- What sort of network load will I put on my CDN (i.e., how much will it cost)?
- How do different user behavior patterns affect performance?

In this meetup, we'll introduce a novel tool for this toolbox: tcpreplay, an open-source tool for replaying packet capture files back at an application. By replaying user traffic to a staging environment, you can test the effects of:

- Network saturation to the load balancer
- High numbers of users / IPs
- Lots of traffic to your other monitoring tools!

Transcript of Full Stack Load Testing

Full Stack Load Testing

Boston Web Performance, August 27th, 2014

What do you mean, “testing”?

Unit

Functional

Integration

Performance

Security

Usability

YOUR CODE

YOUR CODE

ELB

RDS Stripe

Web Cache

Memcache

• Production-level load
• Production-level servers
• Production-level cost

• Production-level load
• Production-level servers
• Production-level cost
• Production-level bugs

DO IT LIVE

Yeah, well, that’s just like, your opinion, man

SCRIPT IT

…cURL?

$ for i in `seq 1 1000`; do curl http://example.com/login & done

Apache Bench

$ ab -n 100 -c 10 http://www.yahoo.com/

Concurrency Level:      10
Time taken for tests:   1.889 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      1003100 bytes
HTML transferred:       949000 bytes
Requests per second:    52.94 [#/sec] (mean)
Time per request:       188.883 [ms] (mean)
Time per request:       18.888 [ms] (mean, across all concurrent requests)
Transfer rate:          518.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       57   59    1.7     59     64
Processing:   117  126    7.5    124    162
Waiting:       57   62    7.0     60     98
Total:        175  186    8.0    184    224

Percentage of the requests served within a certain time (ms)
  50%    184
  66%    186
  75%    187
  80%    188
  90%    192
  95%    203
  98%    216
  99%    224
 100%    224 (longest request)
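ab builds that percentile table from the raw per-request timings. A minimal sketch of the same nearest-rank calculation, using synthetic timing data (not measurements from the run above):

```python
import math

# Synthetic request timings in ms, for illustration only.
timings_ms = sorted([175, 180, 182, 184, 184, 186, 187, 188, 192, 224])

def percentile(sorted_values, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct% of the samples at or below it."""
    rank = math.ceil(pct / 100 * len(sorted_values))
    return sorted_values[max(rank, 1) - 1]

for pct in (50, 90, 100):
    print(f"{pct:3d}%  {percentile(timings_ms, pct)}")
```

The 100th percentile is always the longest request, which is why ab labels that row "(longest request)".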

Selenium

“More realistic”

“Netflow is a feature that was introduced on Cisco routers that give the ability to collect IP network traffic as it enters or exits an interface.”

- Wikipedia

Netflow?

Source IP Source Port Destination IP Destination Port Protocol
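That five-tuple is the flow key: packets sharing all five values belong to one flow. A minimal sketch of grouping packets into flows, using made-up packet records rather than a real capture:

```python
from collections import Counter

# Each packet reduced to the NetFlow five-tuple
# (src IP, src port, dst IP, dst port, protocol) -- synthetic example data.
packets = [
    ("10.0.0.1", 49152, "93.184.216.34", 80, "TCP"),
    ("10.0.0.1", 49152, "93.184.216.34", 80, "TCP"),
    ("10.0.0.2", 53211, "93.184.216.34", 443, "TCP"),
    ("10.0.0.1", 50000, "8.8.8.8", 53, "UDP"),
]

# One flow per distinct five-tuple; the count is packets in that flow.
flows = Counter(packets)
print(len(flows))  # distinct flows seen
print(flows[("10.0.0.1", 49152, "93.184.216.34", 80, "TCP")])  # packets in that flow
```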

“Sooo…. what do you do?”

• Layer 3 (Standard) – IP, port, protocol

• Layer 7 (FlowView) – Application names

tcpreplay

root@pw29:~# tcpreplay -i eth7 -tK --loop 50000 --netmap --unique-ip smallFlows.pcap
Switching network driver for eth7 to netmap bypass mode... done!
File Cache is enabled
Actual: 713050000 packets (460826550000 bytes) sent in 385.07 seconds.
Rated: 1194660947.8 Bps, 9557.28 Mbps, 1848532.79 pps
Flows: 60450000 flows, 156712.44 fps, 712150000 flow packets, 900000 non-flow
Statistics for network device: eth7
        Attempted packets:         713050000
        Successful packets:        713050000
        Failed packets:            0
        Truncated packets:         0
        Retried packets (ENOBUFS): 0
        Retried packets (EAGAIN):  0
Switching network driver for eth7 to normal mode... done!

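The headline rates can be sanity-checked from the "Actual" line alone. A sketch, with the numbers copied from the output above (the small gap versus tcpreplay's own "Rated" line presumably comes from how it measures elapsed time internally):

```python
# Recompute throughput from the "Actual" line of the tcpreplay run above.
packets = 713_050_000
bytes_sent = 460_826_550_000
seconds = 385.07

mbps = bytes_sent * 8 / seconds / 1_000_000  # megabits per second
pps = packets / seconds                      # packets per second

print(f"{mbps:.2f} Mbps, {pps:.2f} pps")
```

Both come out within a fraction of a percent of the reported 9557.28 Mbps and 1848532.79 pps, i.e., this single box is saturating a 10 Gb link.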

So…?

Lesson #1: Control the load directly

Lesson #2: Use Real Traffic

Lesson #3: Push on the sticky points

Thanks!

tr@appneta.com / @_tr

appneta.com <-- pretty graphs! we’re hiring!