
Congestion Control – Routers

CSE 561 Lecture 8, Spring 2002
David Wetherall

Problems with FIFO/Drop tail

• Persistent queuing

• Synchronization

• Burst losses

• Lack of flow isolation

Persistent queuing

• Queues exist to absorb transient bursts
– Need a queue big enough to hold the bandwidth*delay product of the link
• We want the average queue length to be small
– Non-transient queuing unnecessarily increases latency
• TCP fills the queue until a loss occurs
– This naturally keeps the average queue length high
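To get a sense of scale for the bandwidth*delay rule above, here is a minimal worked example in Python; the 100 Mbit/s link speed and 100 ms delay are assumed for illustration, not taken from the slides:

    # Rule-of-thumb buffer sizing: one bandwidth*delay product of the link.
    # The link speed and delay below are illustrative assumptions.
    link_bw_bps = 100e6   # 100 Mbit/s link
    delay_s = 0.100       # 100 ms delay

    buffer_bytes = link_bw_bps * delay_s / 8
    print(f"BW*delay buffer: {buffer_bytes / 1e6:.2f} MB")  # ~1.25 MB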


Burst losses
• Real-life traffic is bursty and structured
– Lots of packets from the same flow arrive back-to-back
• If the queue is full, many packets from the same flow may be lost
• TCP will likely time out
– May not receive enough duplicate ACKs for fast retransmit
– TCP Reno doesn’t handle multiple losses within a window well

[Diagram: incoming packets arriving at a full queue; packets that overflow the queue are dropped]

Synchronization

• Multiple TCP flows pass through a router
• The queue fills up
• Arriving packets from all flows are dropped
• The drops cause all the TCP flows to slow down
• The queue empties
• The TCPs ramp up together, causing congestion again
• Repeat…


Flow Isolation

• FIFO/Drop Tail doesn’t differentiate among flows

• Scenario:
– 1 UDP flow sending at 100Mbps
– 99 TCP flows
– What happens?

Active Queue Management

• Idea: use buffer management to improve congestion signaling and hence queuing behavior
• Precursors
– IP Source Quench
• Router sends an ICMP packet to the host: “hey, partner, slow down”
– Early Random Drop
• When the buffer is beyond a drop level, drop incoming packets according to a drop probability
• Biased against bursty traffic
– DECbit
• Set a congestion-indication bit in packets when the average queue length exceeds a threshold
• Source reduces its window when it sees half of its packets arrive with the bit set
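The Early Random Drop precursor is simple enough to sketch directly. A minimal Python version of the rule described above, where the drop level and drop probability are illustrative assumptions:

    import random

    # Early Random Drop: beyond a drop level, drop arrivals with a
    # fixed probability. Both parameter values below are assumptions.
    DROP_LEVEL = 50    # queue length (packets) where dropping begins
    DROP_PROB = 0.02   # fixed per-packet drop probability

    def early_random_drop(queue_len: int) -> bool:
        """Return True if an arriving packet should be dropped."""
        return queue_len > DROP_LEVEL and random.random() < DROP_PROB

Intuitively, a bursty flow contributes more arrivals while the queue is over the drop level, so it absorbs proportionally more of the random drops; that is the bias against bursty traffic noted above.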


Random Early Detection (RED)

• Key ideas
– Use congestion avoidance to keep the average queue size low
– Detect congestion by monitoring the queue size
– Signal congestion probabilistically
• Drop packets (compatible with existing TCPs)
• Also supports packet marking (à la DECbit)
• Nice side effects
– Less synchronization (drops are randomized)
– Fewer burst losses

Basic RED algorithm I
• Set two static trigger parameters
– MinThresh, MaxThresh: define where the queue length should stay
• Calculate the average queue length
– AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
– EWMA: the Weight parameter decides how much new samples count

[Diagram: a queue annotated with the MinThresh and MaxThresh trigger levels and the running average AvgLen]
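A minimal sketch of the EWMA update above; the weight value of 0.002 is the commonly cited RED default, assumed here rather than taken from the slides:

    # EWMA of the instantaneous queue length, per the formula above.
    WEIGHT = 0.002  # commonly cited RED default (assumption)

    def update_avg_len(avg_len: float, sample_len: int) -> float:
        """Fold one queue-length sample into the running average."""
        return (1 - WEIGHT) * avg_len + WEIGHT * sample_len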

Basic RED algorithm II

• When a packet arrives:
– If AvgLen < MinThresh, do nothing (enqueue normally)
– If AvgLen > MaxThresh, drop the packet
– If MinThresh < AvgLen < MaxThresh, drop/mark the packet with probability P
• Where does P come from?

Basic RED algorithm III

• tmpP = MaxP * (AvgLen - MinThresh) / (MaxThresh - MinThresh)
• P = tmpP / (1 - count * tmpP)
• count = # of packets queued while AvgLen has stayed between the thresholds since the last drop/mark
[Plot: drop probability vs. average queue length; zero below MinThresh, rising linearly to MaxP at MaxThresh, then jumping to 1.0 above MaxThresh]
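Putting slides I-III together, here is a minimal Python sketch of the per-packet RED decision. The threshold and MaxP values are illustrative assumptions, and AvgLen would come from the EWMA update sketched earlier:

    import random

    # Illustrative RED parameters (assumptions, not from the slides).
    MIN_THRESH = 5    # packets
    MAX_THRESH = 15   # packets
    MAX_P = 0.02      # drop probability as AvgLen approaches MaxThresh

    count = 0  # packets queued since the last drop, between thresholds

    def red_should_drop(avg_len: float) -> bool:
        """Per-packet drop/mark decision from slides I-III."""
        global count
        if avg_len < MIN_THRESH:
            count = 0
            return False              # short queue: do nothing
        if avg_len > MAX_THRESH:
            count = 0
            return True               # long queue: always drop
        # Between the thresholds: drop with probability P.
        tmp_p = MAX_P * (avg_len - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        denom = 1 - count * tmp_p
        if denom <= 0:                # the formula forces a drop eventually
            count = 0
            return True
        if random.random() < tmp_p / denom:
            count = 0
            return True
        count += 1
        return False

The count term is what spreads drops out: the longer the router goes without dropping, the larger P grows, so drops land at roughly even intervals instead of in clumps.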

RED Marking

• With RED, you can either mark or drop packets
– When would it be better to mark instead of drop? Vice versa?
– Think about the assumptions you make about the hosts
• Marking represents another congestion signal to TCP
– A bit in the header: Explicit Congestion Notification (ECN)
• Proposed and RFCed, but not deployed
– Either way, RED forces a congestion signal before the queue fills up (cf. FIFO)

Implementation/Deployment issues

• RED was first introduced in the early 1990s with lots of academic/IETF political support
• Still not widely deployed…
• Three key issues
– “You’re going to drop my packets randomly?”
– A naïve implementation is slow
• Recalculates P for each forwarded packet
• Random number generation is slow
– How do you configure it?


What does RED accomplish?

• Keeps average queue length smaller

• Reduces likelihood of synchronization

• Reduces likelihood of burst losses

What doesn’t RED do?

• Flows still aren’t isolated
• RED doesn’t differentiate between flows
• RED depends on hosts being well behaved and backing off when their packets are dropped/marked
• Return to the UDP scenario…


The Role of the Network vs. End Host

• Problem: Hark! The sky is falling!
– Non-congestion-controlled traffic is going to cause another congestion collapse (streaming media, “accelerators”, etc.)
[Plot: “Page fetch from CNN.com”; sequence number (bytes) vs. time (sec), comparing a modified client against a normal client]

• Implication is that the network must help itself.


Network Protection/Isolation

• Assume the network must now participate in controlling its utilization, but that we still need E2E congestion control
– Network mechanisms provide isolation/protection
– Hosts must still adapt their own flow behavior

• Possibilities:
– RED “penalty box”: punish aggressive flows to incent good behavior
– Fair queuing: better isolate competing flows from one another
– Pricing: charge the user for packets sent during congestion!


Issue: What is “good behavior”?

• One answer is “TCP-friendly” flows
– A TCP-friendly flow has an arrival rate limited by VJ congestion avoidance (multiplicative decrease, additive increase)
– In steady state this has a known analytic upper bound:

BW ≈ (MSS / RTT) × (0.7 / √p)
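As a quick numeric check of the bound, a minimal Python sketch; the MSS, RTT, and loss-rate inputs are assumed for illustration:

    from math import sqrt

    # Evaluate BW ≈ (MSS / RTT) * (0.7 / sqrt(p)) for sample inputs.
    # All three input values below are illustrative assumptions.
    mss_bytes = 1460    # typical Ethernet MSS
    rtt_s = 0.100       # 100 ms round trip
    p = 0.01            # 1% drop/mark probability

    bw_bytes_per_s = (mss_bytes / rtt_s) * (0.7 / sqrt(p))
    print(f"TCP-friendly rate: {bw_bytes_per_s * 8 / 1e6:.2f} Mbit/s")  # ~0.82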

• Then drop the flow’s packets to throttle it to the expected bandwidth. There are some limitations, however:
– Need to know MSS, RTT, and the drop rate
– Bandwidth depends on RTT; need to use a low RTT estimate
– Flow length? Fragmentation? Bursty vs. smooth traffic?
– Unresponsive flows?

Fair Queuing
