NGINX for Application Delivery & Acceleration

Post on 16-Jul-2015


NGINX for Application Delivery & Acceleration

Introduced by Andrew Alexeev

Presented by Owen Garrett

Nginx, Inc.

About this webinar

NGINX is an HTTP server and load balancer that powers many of the world's busiest websites. Learn why NGINX is such a popular choice, and see how it improves the capacity of web applications through HTTP intelligence and caching.

Why is page speed important?

• We used to talk about the ‘N second rule’:

– 10-second rule (Jakob Nielsen, March 1997)

– 8-second rule (Zona Research, June 2001)

– 4-second rule (Jupiter Research, June 2006)

– 3-second rule (PhocusWright, March 2010)

… then Google changed the game

“We want you to be able to get from one page to another as quickly as you turn the page on a book”

Urs Hölzle, Google

The costs of poor performance

• Google: search enhancements added 0.5s to page load time

– Ad click-through rate (CTR) dropped 20%

• Amazon: artificially increased page load time by 100ms

– Customer revenue dropped 1%

• Walmart, Yahoo, Shopzilla, Edmunds, Mozilla…

– All reported similar effects on revenue

• Google PageRank: page speed affects page rank

– Time to First Byte is what appears to count

What can you do?

INTRODUCING NGINX…

What is NGINX?

[Diagram: NGINX ("N") sits between the Internet and your application]

• Web Server – serves content from disk

• Application Server gateway – FastCGI, uWSGI, Passenger…

• Proxy – caching and load balancing of HTTP traffic
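In proxy mode, for example, load balancing HTTP traffic across a pool of application servers takes only a few lines of configuration; a minimal sketch, with illustrative addresses and ports:

```nginx
upstream app_servers {
    server 192.168.1.10:8080;   # backend application servers (illustrative)
    server 192.168.1.11:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;   # round-robin across the pool by default
    }
}
```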

• Application Acceleration

• SSL and SPDY termination

• Performance Monitoring

• High Availability

Advanced Features:

• Bandwidth Management

• Content-based Routing

• Request Manipulation

• Response Rewriting

• Authentication

• Video Delivery

• Mail Proxy

• GeoLocation

NGINX Accelerates

• 143,000,000 websites

• 22% of the top 1 million websites

• 37% of the top 1,000 websites

Three steps to a faster website

1. Offload HTTP "Heavy Lifting"

2. Cache common responses

3. Compress data to reduce bandwidth

1. Offload HTTP "Heavy Lifting"

Hundreds of concurrent connections…

require hundreds of heavyweight threads or processes…

competing for limited CPU and memory

What is the challenge with HTTP?

• Client-side: slow networks, multiple connections, HTTP keepalives

• Server-side: limited concurrency

Let’s try an example…

• Client opens 8 connections, 4 requests in each (32 req)

• Average request processing time is <10ms (single core)

• Average client write is 80ms, average read 160ms

• One client can occupy 8 concurrency slots for 60+ seconds

[Timeline: each of the 4 requests per connection takes ~250ms (read, process, write); afterwards all 8 connections sit in idle keepalive for 60 seconds or more]
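Each request ties up a slot for roughly 250ms (80ms read + <10ms processing + 160ms write), so the 4 requests per connection represent only about 1 second of work; the remainder of the 60+ seconds is idle keepalive time. On the server side that idle period is governed by the keepalive timeout, which in NGINX is a single directive (the value below is illustrative):

```nginx
http {
    keepalive_timeout 65;   # seconds an idle keepalive connection stays open
}
```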

Hundreds of concurrent connections…

handled by a small number of multiplexing processes,…

typically one process per core

NGINX architecture
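This event-driven architecture is configured with just a couple of directives; a minimal sketch (values are illustrative):

```nginx
# One single-threaded worker process per CPU core; each worker
# multiplexes many connections with a non-blocking event loop.
worker_processes auto;

events {
    worker_connections 1024;   # concurrent connections per worker
}
```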

NGINX transforms application performance

• NGINX has almost-unlimited concurrency

– Transforms worst-case traffic to best-case

– Maximizes application utilization

[Diagram: NGINX sits between slow, high-concurrency internet-side traffic and fast, efficient local-side traffic]

2. Cache common responses

[Diagram: repeated GET /logo.png requests are answered directly from NGINX's cache instead of the upstream server]

Hybrid on-disk and in-memory cache
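Enabling this cache for proxied responses takes only a couple of directives; a minimal sketch (the path, zone name, and timings are illustrative):

```nginx
# Cached response bodies live on disk; keys and metadata live in a
# shared-memory zone so all worker processes see the same cache.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;   # upstream application (illustrative)
        proxy_cache app_cache;              # answer repeat requests from the cache
        proxy_cache_valid 200 10m;          # keep successful responses for 10 minutes
    }
}
```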

What about dynamic content?

• Some content appears to be uncacheable

– Use client-side or server-side page assembly

– Use custom cache keys

– Use cache purging

– Use short cache times
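Two of these techniques — custom cache keys and very short cache times — can be sketched in a few lines, assuming a cache zone such as app_cache has already been declared with proxy_cache_path (the cookie name and timings are illustrative):

```nginx
location /app/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache app_cache;

    # Include the session cookie in the key so each user gets a private entry.
    proxy_cache_key "$scheme$host$request_uri$cookie_sessionid";

    # Very short ("micro") cache times: even 1 second of caching can
    # absorb a traffic spike on a busy dynamic page.
    proxy_cache_valid 200 1s;
}
```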

3. Compress data to reduce bandwidth

• Reduce bandwidth requirements per client

– Content compression reduces text and HTML

• Typically about a 70% reduction

– Image resampling reduces JPEG size
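Content compression is built into NGINX's gzip module; a minimal sketch (the level and size threshold are illustrative):

```nginx
gzip on;               # compress responses on the fly
gzip_comp_level 5;     # trade CPU time against compression ratio
gzip_min_length 256;   # skip responses too small to benefit
gzip_types text/plain text/css application/javascript application/json;
# (text/html is always compressed when gzip is on)
```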

Three steps to a faster website

1. Offload HTTP "Heavy Lifting"

2. Cache common responses

3. Compress data to reduce bandwidth

Closing thoughts

• 37% of the busiest websites use NGINX

– In most situations, it's a drop-in extension

• Check out the blogs on nginx.com

• Future webinars: nginx.com/webinars

Try NGINX F/OSS (nginx.org) or NGINX Plus (nginx.com)