mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems


Page 1: mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

EunYoung Jeong, Shinae Woo, Muhammad Jamshed, Haewon Jeong,
Sunghwan Ihm*, Dongsu Han, and KyoungSoo Park
KAIST    *Princeton University

Page 2: Needs for Handling Many Short Flows

End systems: web servers
Middleboxes: SSL proxies, network caches

[Chart: CDF of flow count vs. flow size (bytes) in commercial cellular traffic over 7 days, with 61% and 91% marks highlighting that most flows are small (up to 32K). Source: Comparison of Caching Strategies in Modern Cellular Backhaul Networks, MOBISYS 2013]

Page 3: Unsatisfactory Performance of Linux TCP

• Large flows: Easy to fill up 10 Gbps
• Small flows: Hard to fill up 10 Gbps regardless of the number of cores
  – Too many packets: 14.88 Mpps for 64B packets on a 10 Gbps link (worked out below)
  – The kernel is not designed well for multicore systems
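For reference, 14.88 Mpps is simply the 10 GbE line rate for minimum-size frames: a 64B frame occupies 64 + 8 (preamble) + 12 (inter-frame gap) = 84 bytes = 672 bits on the wire, and 10 x 10^9 bits/s / 672 bits per frame is about 14.88 x 10^6 packets/s.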

[Chart: TCP connection setup performance, connections/sec (y-axis up to 250,000) vs. number of CPU cores (1, 2, 4, 6, 8); the connection rate fails to scale with additional cores ("performance meltdown"). Setup: Linux 3.10.16, Intel Xeon E5-2690, Intel 10Gbps NIC]

Page 4: Kernel Uses the Most CPU Cycles

83% of CPU usage spent inside kernel!

Performance bottlenecks:
1. Shared resources
2. Broken locality
3. Per-packet processing

Bottlenecks removed by mTCP:
1) Efficient use of CPU cycles for TCP/IP processing: 2.35x more CPU cycles for the application
2) 3x ~ 25x better performance

[Chart: CPU usage breakdown of a web server (Lighttpd) serving a 64-byte file on Linux 3.10. Kernel (without TCP/IP): 45%, TCP/IP: 34%, Packet I/O: 4%, Application: 17%]

Page 5: Inefficiencies in Kernel from Shared FD

1. Shared resources
– Shared listening queue
– Shared file descriptor space

[Diagram: Receive-Side Scaling (H/W) feeds per-core packet queues on Cores 0-3, but all cores contend on a lock for the single shared listening queue, and finding an empty slot in the shared file descriptor space requires a linear search]

Page 6: Inefficiencies in Kernel from Broken Locality

2. Broken locality

[Diagram: Receive-Side Scaling (H/W) feeds per-core packet queues, but the core that handles a flow's interrupt is not the core that calls accept()/read()/write() on it (interrupt-handling core != accepting core)]

Page 7: Inefficiencies in Kernel from Lack of Support for Batching

3. Per-packet, per-system-call processing
– Inefficient per-packet processing: per-packet memory allocation
– Inefficient per-system-call processing (accept(), read(), write()): frequent mode switching and cache pollution

[Diagram: the application thread crosses the user/kernel boundary through the BSD socket and Linux epoll interfaces into kernel TCP and packet I/O]
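To make the per-system-call pattern above concrete, here is a minimal sketch of the conventional accept()/read()/write() loop (standard Linux sockets, not mTCP code); every call on every connection is a separate user/kernel crossing:

    /* Sketch: conventional per-connection, per-system-call server loop.
     * Each accept(), read(), write(), and close() below is its own kernel crossing. */
    #include <sys/socket.h>
    #include <unistd.h>

    void serve_one_at_a_time(int listen_fd)
    {
        char buf[4096];
        for (;;) {
            int c = accept(listen_fd, NULL, NULL);   /* one system call per new connection */
            if (c < 0)
                continue;
            ssize_t n = read(c, buf, sizeof(buf));   /* one system call per read */
            if (n > 0)
                write(c, buf, (size_t)n);            /* one system call per write */
            close(c);                                /* one more to tear the connection down */
        }
    }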

Page 8: Previous Works on Solving Kernel Complexity

                          Listening queue   Connection locality   App <-> TCP comm.     Packet I/O   API
Linux-2.6                 Shared            No                    Per system call       Per packet   BSD
Linux-3.9 (SO_REUSEPORT)  Per-core          No                    Per system call       Per packet   BSD
Affinity-Accept           Per-core          Yes                   Per system call       Per packet   BSD
MegaPipe                  Per-core          Yes                   Batched system call   Per packet   Custom

How much performance improvement can we get if we implement a user-level TCP stack with all optimizations?

Still, 78% of CPU cycles are used in kernel!
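As a concrete illustration of the SO_REUSEPORT row in the table above, the sketch below (plain Linux sockets on kernel 3.9+, not mTCP code) shows how each per-core thread can open its own listening socket on the same port, which is what provides a per-core listening queue:

    /* Sketch: per-core listening sockets via SO_REUSEPORT (Linux >= 3.9). */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    int open_per_core_listener(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;

        /* Every per-core thread sets SO_REUSEPORT and binds the same port,
         * so the kernel gives each socket its own accept queue. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 4096);
        return fd;   /* error handling omitted for brevity */
    }

As the table notes, this removes the shared listening queue but still leaves per-system-call, per-packet processing inside the kernel.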

Page 9: Clean-slate Design Principles of mTCP

• mTCP: a high-performance user-level TCP stack designed for multicore systems
• Clean-slate approach that frees the stack from the kernel's complexity

Problems and our contributions:
1. Shared resources, 2. Broken locality → each core works independently: no shared resources, resource affinity
3. Lack of support for batching → batching in flow processing, from packet I/O to the user API
Plus easily portable APIs for compatibility

Page 10: Overview of mTCP Architecture

1. Thread model: Pairwise, per-core threading
2. Batching from packet I/O to application
3. mTCP API: Easily portable API (BSD-like)

[Diagram: on each core (Core 0, Core 1), an application thread talks to its paired mTCP thread through the mTCP socket and mTCP epoll interfaces; the mTCP threads run on the user-level packet I/O library (PSIO), which sits above the kernel-level NIC device driver. The numbers 1-3 in the figure mark the three design points listed above.]

• [SIGCOMM’10] PacketShader: A GPU-accelerated software router, http://shader.kaist.edu/packetshader/io_engine/index.html

Page 11: 1. Thread Model: Pairwise, Per-core Threading

[Diagram: the same per-core pairing as before: on each core, the application thread and its mTCP thread communicate over mTCP socket / mTCP epoll, on top of the user-level packet I/O library (PSIO) and the kernel-level device driver. Symmetric Receive-Side Scaling (H/W) feeds per-core packet queues, and each core keeps its own per-core listening queue and per-core file descriptor space.]
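A minimal sketch of what the pairwise model looks like from the application side, spawning one worker per core; the mtcp_init / mtcp_core_affinitize / mtcp_create_context calls follow the public mTCP repository, and their exact signatures are assumptions here rather than something stated on the slides:

    /* Sketch: one application thread per core, each paired with its own mTCP context.
     * The mtcp_* signatures are assumed from the open-source mTCP repository. */
    #include <mtcp_api.h>
    #include <pthread.h>

    static void *per_core_worker(void *arg)
    {
        int core = (int)(long)arg;

        mtcp_core_affinitize(core);               /* pin this thread to its core   */
        mctx_t mctx = mtcp_create_context(core);  /* creates the paired mTCP thread */
        if (!mctx)
            return NULL;

        /* ... create sockets and run this core's event loop using mctx ... */

        mtcp_destroy_context(mctx);
        return NULL;
    }

    int main(void)
    {
        enum { NUM_CORES = 4 };                   /* example value */
        pthread_t tids[NUM_CORES];

        mtcp_init("mtcp.conf");                   /* global init from a config file */
        for (long i = 0; i < NUM_CORES; i++)
            pthread_create(&tids[i], NULL, per_core_worker, (void *)i);
        for (int i = 0; i < NUM_CORES; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

Because each worker only touches its own context, flows, and descriptors, the cores share nothing and connection state stays on the core that received the packets.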

Page 12: From System Call to Context Switching

[Diagram: in Linux TCP, the application thread crosses the user/kernel boundary through BSD socket / Linux epoll system calls to reach kernel TCP and packet I/O. In mTCP, the application thread talks to the user-level mTCP thread through mTCP socket / mTCP epoll, and only the user-level packet I/O library interacts with the kernel-level NIC device driver.]

What was a system call in Linux TCP becomes a context switch between the application thread and the mTCP thread. A context switch costs more than a single system call, but mTCP batches many events per switch to amortize that overhead.

Page 13: 2. Batching Process in the mTCP Thread

[Diagram: batching inside an mTCP thread. On receive, the Rx manager drains a batch of packets from the NIC Rx queue; SYN/RST/FIN control packets update connection state while the payload handler delivers data, and completed events are posted to the internal event queue behind the socket API (accept(), connect(), close(), epoll_wait()). On transmit, application write() calls go through per-socket write queues, and the TX manager batches the control list, ACK list, and data list into the NIC Tx queue; connect, accept, and close requests are likewise queued (connect queue, accept queue, close queue) and handled in batches.]

Page 14: 3. mTCP API: Similar to BSD Socket API

• Two goals: easy porting + keeping the popular event model
• Ease of porting
  • Just attach "mtcp_" to the BSD socket API: socket() → mtcp_socket(), accept() → mtcp_accept(), etc. (a short usage sketch follows the table below)
• Event notification: readiness model using epoll()
• Porting existing applications
  • Mostly less than 100 lines of code changed

Application    Description                               Modified lines / Total lines
Lighttpd       An event-driven web server                65 / 40K
ApacheBench    A web server performance benchmark tool   29 / 66K
SSLShader      A GPU-accelerated SSL proxy [NSDI '11]    43 / 6,618
WebReplay      A web log replayer                        81 / 3,366
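To make the "attach mtcp_" rule concrete, here is a hedged sketch of the readiness-model event loop after porting; the mctx argument, the mtcp_epoll_* names, and the MTCP_EPOLLIN flag follow the open-source mTCP repository and should be treated as assumptions rather than as text from the talk:

    /* Sketch: BSD-socket-style accept/echo loop written against mTCP.
     * Names such as mctx_t, mtcp_epoll_wait(), and MTCP_EPOLLIN are taken from
     * the public mTCP repository and are assumptions, not quotes from the slides. */
    #include <mtcp_api.h>
    #include <mtcp_epoll.h>

    #define MAX_EVENTS 1024

    static void run_echo_loop(mctx_t mctx, int listen_sock)
    {
        struct mtcp_epoll_event ev, events[MAX_EVENTS];
        int ep = mtcp_epoll_create(mctx, MAX_EVENTS);

        ev.events = MTCP_EPOLLIN;
        ev.data.sockid = listen_sock;
        mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, listen_sock, &ev);

        char buf[4096];
        for (;;) {
            /* One call returns a whole batch of ready sockets (readiness model). */
            int n = mtcp_epoll_wait(mctx, ep, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int s = events[i].data.sockid;
                if (s == listen_sock) {
                    int c = mtcp_accept(mctx, listen_sock, NULL, NULL);
                    ev.events = MTCP_EPOLLIN;
                    ev.data.sockid = c;
                    mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, c, &ev);
                } else {
                    int r = mtcp_read(mctx, s, buf, sizeof(buf));
                    if (r <= 0)
                        mtcp_close(mctx, s);
                    else
                        mtcp_write(mctx, s, buf, r);
                }
            }
        }
    }

Apart from the extra mctx argument and the mtcp_ prefixes, this is the same epoll loop the listed applications already had, which is why the ports stay within roughly 100 changed lines.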

Page 15: Optimizations for Performance

• Lock-free data structures
• Cache-friendly data structures
• Hugepages to avoid TLB misses (a sketch follows below)
• Efficient TCP timer management
• Priority-based packet queuing
• Lightweight connection setup
• …

Please refer to our paper
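As one illustration of the hugepage item above, this is a generic Linux sketch (mmap with MAP_HUGETLB) of backing a buffer pool with 2 MB hugepages to cut TLB misses; it uses a standard kernel facility and is not mTCP's actual allocator:

    /* Sketch: hugepage-backed memory pool to reduce TLB misses.
     * Generic Linux MAP_HUGETLB usage; not mTCP's actual allocator. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    #define POOL_SIZE (64UL << 20)   /* 64 MB, a multiple of the 2 MB hugepage size */

    void *alloc_hugepage_pool(void)
    {
        void *p = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            /* Fall back to normal 4 KB pages if no hugepages are reserved
             * (e.g. via /proc/sys/vm/nr_hugepages). */
            p = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        }
        return p == MAP_FAILED ? NULL : p;
    }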

Page 16: mTCP Implementation

• 11,473 lines of C code
  • Packet I/O, TCP flow management, user-level socket API, event system library
• 552 lines to patch the PSIO library
  • Supports event-driven packet I/O: ps_select()

• TCP implementation
  • Follows RFC 793
  • Congestion control algorithm: NewReno
• Passes correctness and stress tests against the Linux TCP stack

Page 17: Evaluation

• Multicore scalability
  • Performance comparison with previous solutions as the number of cores grows
• Performance improvement on ported applications
  • Web server (Lighttpd)
    – Performance under a real workload
  • SSL proxy (SSLShader, NSDI '11)
    – A TCP-bottlenecked application

Page 18: Multicore Scalability

[Chart: transactions/sec vs. number of CPU cores for Linux, REUSEPORT, MegaPipe, and mTCP (y-axis up to 1,500,000)]

• 64B ping/pong messages per connection
• Heavy connection overhead, small packet processing overhead
• 25x Linux, 5x SO_REUSEPORT* [LINUX3.9], 3x MegaPipe* [OSDI'12]

Chart annotations: the shared listen socket limits stock Linux, the still-shared fd space within a process limits REUSEPORT, and inefficient small-packet processing in the kernel limits all of the kernel-based stacks, including MegaPipe.

* [LINUX3.9] https://lwn.net/Articles/542629/
* [OSDI'12] MegaPipe: A New Programming Interface for Scalable Network I/O, Berkeley

Setup: Linux 3.10.12, Intel Xeon E5-2690, 32GB RAM, Intel 10Gbps NIC

Page 19: Performance Improvement on Ported Applications

Web Server (Lighttpd)
• Real traffic workload: static file workload from the SpecWeb2009 set
• 3.2x faster than Linux
• 1.5x faster than MegaPipe

SSL Proxy (SSLShader)
• Performance bottleneck in TCP
• Cipher suite: 1024-bit RSA, 128-bit AES, HMAC-SHA1
• Download a 1-byte object via HTTPS
• 18% ~ 33% better on SSL handshakes

[Chart: web server throughput (Gbps): Linux 1.24, REUSEPORT 1.79, MegaPipe 2.69, mTCP 4.02]

[Chart: SSL proxy transactions/sec vs. number of concurrent flows: Linux 26,762 (4K), 28,208 (8K), 27,725 (16K); mTCP 31,710 (4K), 36,505 (8K), 37,739 (16K)]

Page 20: Conclusion

• mTCP: a high-performance user-level TCP stack for multicore systems
  • Clean-slate user-level design to overcome inefficiencies in the kernel
• Makes full use of extreme parallelism and batch processing
  • Per-core resource management
  • Lock-free data structures and cache-aware threading
  • Eliminates system call overhead
  • Reduces context switch cost by event batching
• Achieves highly scalable performance
  • Small message transactions: 3x to 25x better
  • Existing applications: 33% (SSLShader) to 320% (Lighttpd)

Page 21: Thank You

Source code is available at http://shader.kaist.edu/mtcp/

https://github.com/eunyoung14/mtcp