The Performance Analysis of Linux
Networking – Packet Receiving
Wenji Wu, Matt Crawford
Fermilab
CHEP 2006
wenji@fnal.gov, crawdad@fnal.gov
Topics
Background
Problems
Linux Packet Receiving Process
NIC & Device Driver Processing
Linux Kernel Stack Processing
IP
TCP
UDP
Data Receiving Process
Performance Analysis
Experiments & Results
1. Background
Computing model in HEP
Globally distributed, grid-based
Challenges in HEP
To transfer physics data sets – now in the multi-petabyte (10^15 bytes) range and
expected to grow to exabytes within a decade – reliably and efficiently among
facilities and computation centers scattered around the world.
Technology Trends
Raw transmission speeds in networks are increasing rapidly, while the rate of
advancement of microprocessor technology has slowed.
Network protocol-processing overheads have risen sharply in comparison with the
time spent in packet transmission in the networks.
2. Problems
What, Where, and How are the bottlenecks
of Network Applications?
Networks?
Network End Systems?
We focus on the Linux 2.6 kernel.
3. Linux Packet Receiving Process
Linux Networking subsystem: Packet
Receiving Process
[Figure: packet receiving path – traffic source → NIC (hardware, DMA into ring buffer) → softirq-driven IP and TCP/UDP processing → socket receive buffer → data receiving process (SOCK RCV SYS_CALL, scheduler) → network application (traffic sink)]
Stage 1: NIC & Device Driver
Packet is transferred from network interface card to ring buffer
Stage 2: Kernel Protocol Stack
Packet is transferred from ring buffer to a socket receive buffer
Stage 3: Data Receiving Process
Packet is copied from the socket receive buffer to the application
NIC & Device Driver Processing
[Figure: NIC and device driver data flow – packets are DMAed from the NIC into the ring buffer of packet descriptors; netif_rx_schedule() adds the device to the per-CPU poll queue; the raised softirq (net_rx_action) calls dev->poll; used descriptors are refilled via alloc_skb()]

NIC & Device Driver Processing Steps
1. The packet is transferred from the NIC to the ring buffer through DMA.
2. The NIC raises a hardware interrupt.
3. The hardware interrupt handler schedules the packet receiving software interrupt (softirq).
4. The softirq checks its corresponding CPU's NIC device poll queue.
5. The softirq polls the corresponding NIC's ring buffer.
6. Packets are removed from the receive ring buffer for higher-layer processing;
   the corresponding slot in the ring buffer is reinitialized and refilled (alloc_skb()).

These steps implement the layer 1 & 2 functions of the OSI 7-layer network model.
The receive ring buffer consists of packet descriptors.
When there are no packet descriptors in the ready state, incoming packets will be discarded!
A minimal user-space model of this descriptor/refill mechanism is sketched below.
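To make the descriptor/refill behaviour concrete, here is a hedged user-space C model of steps 1–6 (it is not driver code): packets arriving when no descriptor is ready are dropped, and a later NAPI-style poll pass processes the queued packets and refills their descriptors. The ring size, burst size, and poll budget are arbitrary illustration values.

```c
/* User-space model of the receive ring buffer: each slot is a packet
 * descriptor that is either READY (can accept a packet) or USED (holds a
 * packet awaiting the softirq poll). Not kernel code; sizes and rates
 * below are arbitrary illustration values. */
#include <stdio.h>

#define RING_SIZE 8
enum slot_state { READY, USED };

static enum slot_state ring[RING_SIZE];   /* all slots start READY */
static int dropped;

/* Steps 1-2: DMA places a packet into a READY descriptor; with none
 * ready, the packet is discarded (the stage-1 bottleneck). */
static void packet_arrives(void)
{
    for (int i = 0; i < RING_SIZE; i++) {
        if (ring[i] == READY) {
            ring[i] = USED;
            return;
        }
    }
    dropped++;
}

/* Steps 4-6: the softirq polls the ring, hands packets to the protocol
 * stack, and reinitializes/refills the descriptors (alloc_skb() in Linux). */
static int poll_ring(int budget)
{
    int processed = 0;
    for (int i = 0; i < RING_SIZE && processed < budget; i++) {
        if (ring[i] == USED) {
            ring[i] = READY;            /* slot refilled with a fresh buffer */
            processed++;
        }
    }
    return processed;
}

int main(void)
{
    for (int tick = 0; tick < 5; tick++) {
        for (int k = 0; k < 6; k++)     /* burst of 6 arrivals per tick */
            packet_arrives();
        int done = poll_ring(4);        /* poll budget of 4 per tick */
        printf("tick %d: processed %d, dropped so far %d\n",
               tick, done, dropped);
    }
    return 0;
}
```

Because arrivals (6 per tick) outpace the poll budget (4 per tick), the ready-descriptor pool eventually empties and drops begin, which is exactly the failure mode highlighted above.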
Kernel Protocol Stack – IP
IP processing
IP packet integrity verification
Routing
Fragment reassembly
Preparing packets for higher layer processing.
Kernel Protocol Stack – TCP (1)
TCP processing
TCP Processing Contexts
Interrupt Context: Initiated by Softirq
Process Context: initiated by the data receiving process;
more efficient, fewer context switches
TCP Functions
Flow Control, Congestion Control, Acknowledgement, and Retransmission
TCP Queues
Prequeue
Tries to process packets in the process context instead of the interrupt
context.
Backlog Queue
Used when socket is locked.
Receive Queue
In order, acked, no holes, ready for delivery
Out-of-sequence Queue
Kernel Protocol Stack – TCP (2)
[Figure: TCP processing in the interrupt context and the process context.
Interrupt context: if the socket is locked, the incoming segment is placed on the backlog queue; else, if a receiving task exists, it is placed on the prequeue; otherwise tcp_v4_do_rcv() processes it – the fast path copies in-sequence data to the user iovec or to the receive queue, while the slow path handles out-of-sequence data via the out-of-sequence queue.
Process context: tcp_recvmsg() copies data from the receive queue to the user iovec, processes the prequeue via tcp_prequeue_process() and the backlog via sk_backlog_rcv() during release_sock(), and blocks in sk_wait_data() when all queues are empty.]

Except in the case of prequeue overflow, the prequeue and backlog queues are
processed within the process context!
A simplified sketch of this dispatch logic follows.
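The following user-space sketch models the dispatch decisions shown in the figure. It is not the kernel's code: the type and helper names (tcp_sock_model, rcv_dispatch, recvmsg_drain) are simplified stand-ins for the real sk_backlog / prequeue / sk_receive_queue machinery.

```c
/* Illustrative model of the TCP receive-side dispatch described above.
 * NOT the kernel's actual code: types and helpers are simplified stand-ins. */
#include <stdbool.h>
#include <stdio.h>

struct tcp_sock_model {
    bool locked;           /* a user process currently owns the socket     */
    bool receiving_task;   /* a process is blocked in recvmsg() on it      */
    int  backlog, prequeue, receive_queue, out_of_seq_queue; /* pkt counts */
};

/* Interrupt-context dispatch: decide which queue an arriving segment joins. */
static void rcv_dispatch(struct tcp_sock_model *sk, bool in_sequence)
{
    if (sk->locked)
        sk->backlog++;              /* socket busy: defer to process context */
    else if (sk->receiving_task)
        sk->prequeue++;             /* let the reader process it itself      */
    else if (in_sequence)
        sk->receive_queue++;        /* fast path: ready for delivery         */
    else
        sk->out_of_seq_queue++;     /* slow path: hole in the stream         */
}

/* Process-context side: recvmsg() drains receive queue, prequeue, backlog. */
static int recvmsg_drain(struct tcp_sock_model *sk)
{
    int copied = sk->receive_queue + sk->prequeue + sk->backlog;
    sk->receive_queue = sk->prequeue = sk->backlog = 0;
    return copied;                  /* packets copied to the user iovec      */
}

int main(void)
{
    struct tcp_sock_model sk = { .locked = false, .receiving_task = true };
    rcv_dispatch(&sk, true);        /* goes to the prequeue                  */
    sk.locked = true;
    rcv_dispatch(&sk, true);        /* goes to the backlog                   */
    printf("delivered %d packets in process context\n", recvmsg_drain(&sk));
    return 0;
}
```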
Kernel Protocol Stack – UDP
UDP Processing
Much simpler than TCP
UDP packet integrity verification
Incoming packets are queued in the socket receive buffer; when the
buffer is full, incoming packets are silently discarded.
Data Receiving Process
Copies packet data from the socket's receive buffer to user space through
struct iovec.
Socket-related system calls.
For a TCP stream, the data receiving process might also initiate TCP
processing in the process context.
A minimal recvmsg()/iovec example is sketched below.
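A hedged sketch of the user-space side of stage 3: a small TCP server that receives data into a struct iovec via recvmsg(). The listening port and buffer sizes are arbitrary illustration values, not taken from the slides.

```c
/* Minimal sketch of stage 3 from user space: data is copied out of the
 * socket receive buffer into a struct iovec by recvmsg(). Port and buffer
 * sizes are arbitrary illustration values. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(5001),   /* arbitrary */
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    if (bind(ls, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(ls, 1) < 0) {
        perror("bind/listen");
        return 1;
    }
    int cs = accept(ls, NULL, NULL);
    if (cs < 0) {
        perror("accept");
        return 1;
    }

    /* Two separate user buffers filled by one recvmsg() call: this is the
     * moment the kernel copies data from the socket receive buffer to the
     * iovec, and (for TCP) may also process the prequeue/backlog in the
     * process context. */
    char hdr[16], payload[4096];
    struct iovec iov[2] = {
        { .iov_base = hdr,     .iov_len = sizeof(hdr)     },
        { .iov_base = payload, .iov_len = sizeof(payload) },
    };
    struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 2 };

    ssize_t n = recvmsg(cs, &msg, 0);
    printf("received %zd bytes\n", n);
    close(cs);
    close(ls);
    return 0;
}
```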
4. Performance Analysis
Notation
[Table defining the notation used in the model slides that follow]
Mathematical Model
[Figure: two-stage model of the receiving process. Stage 1 – the NIC/driver ring buffer with a total of D packet descriptors, refill rate Rr, input rate RT (stream i contributes Ri), admitted rate RT' (stream i: Ri'), and packet discard when no descriptor is ready. Stages 2 & 3 – the protocol stack delivers packets at rate Rs, of which Rsi go to socket i's RCV buffer (the rest go to other sockets), drained by the data receiving process at rate Rdi.]

A token bucket algorithm models the NIC & device driver receiving process (stage 1).
A queuing process models stages 2 & 3 of the receiving process.
Token Bucket Algorithm – Stage 1
The reception ring buffer is represented as a token bucket with a depth of D tokens.
Each packet descriptor in the ready state is a token, granting the ability to accept one
incoming packet. Tokens are regenerated only when used packet descriptors are
reinitialized and refilled. If there is no token in the bucket, incoming packets will be
discarded.

$$\forall t > 0,\quad R_T'(t) = \begin{cases} R_T(t), & A(t) > 0 \\ 0, & A(t) = 0 \end{cases} \qquad (1)$$

To admit packets into the system without discarding, it should hold that:

$$\forall t > 0,\quad A(t) > 0 \qquad (2)$$

$$A(t) = D - \int_0^t R_T'(\tau)\,d\tau + \int_0^t R_r(\tau)\,d\tau, \quad \forall t > 0 \qquad (3)$$

The NIC & device driver might be a potential bottleneck!
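One way to read conditions (2) and (3): assuming no packet has yet been discarded (so that R_T'(t) = R_T(t)), substituting (1) into (3) shows how the no-drop condition bounds the ring size D from below:

$$A(t) = D - \int_0^t \bigl(R_T(\tau) - R_r(\tau)\bigr)\,d\tau > 0
\quad\Longleftrightarrow\quad
D > \sup_{t > 0} \int_0^t \bigl(R_T(\tau) - R_r(\tau)\bigr)\,d\tau$$

In other words, the descriptor pool D must cover the worst-case excess of packet arrivals over descriptor refills, which is what the measures on the next slide (raising the protocol packet service rate, and hence R_r, or raising D itself) aim at.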
Token Bucket Algorithm – Stage 1
To reduce the risk of the NIC & device driver becoming the bottleneck, what
measures could be taken?
• Raise the protocol packet service rate
• Increase system memory size
• Raise the NIC's ring buffer size D
• D is a design parameter of the NIC and driver.
• For a NAPI driver, D should meet the following condition to
avoid unnecessary packet drops:

$$D \ge \tau_{min} \times R_{max} \qquad (4)$$
Queuing process – Stage 2 & 3
For stream i, it holds that:

$$R_i'(t) \le R_i(t) \quad \text{and} \quad R_{si}(t) \le R_s(t) \qquad (5)$$

It can be derived that:

$$B_i(t) = \int_0^t R_{si}(\tau)\,d\tau - \int_0^t R_{di}(\tau)\,d\tau \qquad (6)$$

For UDP, when the receive buffer is full, incoming UDP packets are dropped.
For TCP, when the receive buffer is approaching full, flow control throttles the
sender's data rate.
For network applications, it is desirable to raise the quantity in (7), the free
space remaining in the receive buffer:

$$QB_i - \int_0^t R_{si}(\tau)\,d\tau + \int_0^t R_{di}(\tau)\,d\tau \qquad (7)$$

A full receive buffer is another potential bottleneck!
A discrete-time sketch of (6) and (7) follows.
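The sketch below evaluates the stage 2/3 queueing model of equations (6) and (7) in discrete time. The per-tick arrival and drain counts standing in for R_si and R_di, and the buffer size QB_i, are arbitrary illustration values; the drop-when-full rule models the UDP case described above.

```c
/* Discrete-time model of socket i's receive buffer, after (6) and (7).
 * Per-tick arrival/drain counts below are arbitrary illustration values. */
#include <stdio.h>

#define QBI   32          /* receive buffer size QB_i, in packets          */
#define TICKS 20

int main(void)
{
    int B = 0;            /* buffer occupancy B_i(t)                       */
    int dropped = 0;

    for (int t = 0; t < TICKS; t++) {
        int arrivals = (t < 12) ? 6 : 0;  /* R_si: a burst, then silence   */
        int drained  = 4;                 /* R_di: what the process copies */

        /* UDP-style admission: packets beyond the free space are dropped. */
        int free_space = QBI - B;                     /* quantity (7)      */
        int admitted   = arrivals < free_space ? arrivals : free_space;
        dropped       += arrivals - admitted;

        B += admitted;                                /* integral of R_si  */
        B -= (drained < B) ? drained : B;             /* integral of R_di  */

        printf("t=%2d  B_i=%2d  free=%2d  dropped so far=%d\n",
               t, B, QBI - B, dropped);
    }
    return 0;
}
```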
Queuing process – Stage 2 & 3
What measures can be taken?
Raise the socket's receive buffer size QBi
  Configurable, subject to system memory limits
Raise Rdi(t)
  Subject to system load and the data receiving process' nice value
  Raise the data receiving process' CPU share
    Raise its priority (a more negative nice value)
    Reduce system load

[Figure: scheduling of the data receiving process over cycles n and n+1 – the
process alternates between "running" and "expired" intervals at times 0, t1, t2, t3, t4]

Within one scheduling cycle, Rdi(t) can be modeled as:

$$R_{di}(t) = \begin{cases} D, & 0 < t < t_1 \\ 0, & t_1 < t < t_2 \end{cases} \qquad (8)$$

Both measures are illustrated in the sketch below.
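A hedged sketch of how an application might apply the two measures above on Linux: enlarging its socket receive buffer with SO_RCVBUF and raising its own CPU share with setpriority() (negative nice values normally require root / CAP_SYS_NICE). The 20 MB buffer and the -10 nice value mirror settings used later in the experiments; the buffer actually granted is capped by the net.core.rmem_max sysctl.

```c
/* Sketch: raise QB_i via SO_RCVBUF and the process' CPU share via nice. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <netinet/in.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    /* Measure 1: raise the socket receive buffer size QB_i. */
    int rcvbuf = 20 * 1024 * 1024;          /* 20 MB, as in the experiments */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int granted;
    socklen_t len = sizeof(granted);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("receive buffer granted: %d bytes\n", granted);

    /* Measure 2: raise R_di by giving the receiving process more CPU share
     * (a more negative nice value; needs root / CAP_SYS_NICE). */
    if (setpriority(PRIO_PROCESS, 0, -10) < 0)
        perror("setpriority");

    return 0;
}
```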
5. Experiments & Results
Experiment Settings
Run iperf to send data in one direction between two computer systems.
We have added instrumentation within the Linux packet receiving path.
The Linux kernel is compiled as background system load by running make -nj.
The receive buffer size is set to 20 MB.

[Figure: test topology – the sender is connected at 1 Gb/s to a Cisco 6509, a
10 Gb/s link crosses the Fermi test network to a second Cisco 6509, and the
receiver is connected at 1 Gb/s]

Sender & Receiver Features
                 Sender                                  Receiver
CPU              Two Intel Xeon CPUs (3.0 GHz)           One Intel Pentium II CPU (350 MHz)
System Memory    3829 MB                                 256 MB
NIC              Tigon, 64-bit PCI bus slot at 66 MHz,   Syskonnect, 32-bit PCI bus slot at 33 MHz,
                 1 Gb/s, twisted pair                    1 Gb/s, twisted pair
Experiment 1: receive ring buffer
[Figure 8 – annotations: running out of packet descriptors; TCP throttles its rate to avoid loss]

The total number of packet descriptors in the reception ring buffer of the NIC is 384.
The receive ring buffer could run out of packet descriptors: a performance bottleneck!
Experiment 2: Various TCP Receive Buffer Queues
[Figures 9 & 10: the various TCP receive buffer queues under background load 0 and background load 10, with a zoomed-in view]
Experiment 3: UDP Receive Buffer Queues
The experiments are run with three different cases:
(1) Sending rate: 200 Mb/s, receiver's background load: 0;
(2) Sending rate: 200 Mb/s, receiver's background load: 10;
(3) Sending rate: 400 Mb/s, receiver's background load: 0.
Transmission duration: 25 seconds; receive buffer size: 20 MB.

[Figures: UDP receive buffer queues (Figure 11); UDP receive buffer committed memory]

Both cases (1) and (2) are within the receiver's handling limit; the receive buffer is generally empty.
The effective data rate in case (3) is 88.1 Mb/s, with a packet drop rate of 670612/862066 (78%).
When the UDP receive buffer is full, incoming packets are dropped at the socket level!
Receive livelock problem!
Experiment 4: Data receiving process
[Figure: TCP bandwidth (Mb/s) versus background load (BL0, BL1, BL4, BL10) for data receiving process nice values of 0, -10, and -15]
The sender transmits one TCP stream to the receiver with a transmission duration of 25
seconds. In the receiver, both the data receiving process' nice value and the background
load are varied. The nice values used in the experiments are 0, -10, and -15.
Conclusion:
The reception ring buffer in NIC and
device driver can be the bottleneck for
packet receiving.
The data receiving process’ CPU share is
another limiting factor for packet receiving.