Congestion Control

Congestion control is essential for managing network overload, where routers may either prevent new packets from entering or discard queued packets. Factors contributing to congestion include high packet arrival rates, insufficient memory, bursty traffic, and slow processors. Techniques for congestion control include warning bits, choke packets, load shedding, random early discard, and traffic shaping, with approaches focusing on prevention, avoidance, and management of congestion.


Congestion Control


When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results.

Because routers are receiving packets faster than
they can forward them, one of two things must happen:
1. The subnet must prevent additional packets from
entering the congested region until those already
present can be processed.
2. The congested routers can discard queued packets to
make room for those that are arriving.
Factors that Cause Congestion

Packet arrival rate exceeds the
outgoing link capacity.

Insufficient memory to store arriving
packets

Bursty traffic

Slow processor

Congestion Control vs Flow Control

Congestion control is a global issue –
involves every router and host within
the subnet

Flow control – scope is point-to-point;
involves just sender and receiver.

Three general approaches:

prevent it altogether

congestion avoidance

deal with it if it occurs
Congestion Control, cont.

Congestion Control is concerned with efficiently
using a network at high load.

Several techniques can be employed. These
include:
• Warning bit
• Choke packets
• Load shedding
• Random early discard
• Traffic shaping

The first 3 deal with congestion detection and
recovery. The last 2 deal with congestion
avoidance.
Warning Bit

A special bit in the packet header is set by
the router to warn the source when
congestion is detected.

The bit is copied and piggy-backed on the
ACK and sent to the sender.

The sender monitors the number of ACK
packets it receives with the warning bit set
and adjusts its transmission rate
accordingly.
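As a rough illustration, the sketch below shows a sender that backs off multiplicatively whenever an ACK arrives with the warning bit set, and otherwise increases its rate slightly. The class name, rates, and adjustment factors are illustrative assumptions, not part of any particular protocol.

class WarningBitSender:
    def __init__(self, initial_rate_pps=100.0):
        self.rate_pps = initial_rate_pps   # current sending rate (packets/sec)

    def on_ack(self, warning_bit_set):
        if warning_bit_set:
            # A router on the path warned of congestion: back off multiplicatively.
            self.rate_pps *= 0.875
        else:
            # No warning: probe for more bandwidth with a small additive increase.
            self.rate_pps += 1.0

sender = WarningBitSender()
for bit in (False, False, True, True, False):   # warning bits seen on successive ACKs
    sender.on_ack(bit)
print(f"adjusted rate: {sender.rate_pps:.1f} packets/sec")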
Piggybacking
In all practical situations, the transmission of data needs to be bi-directional.
This is called full-duplex transmission.

• One way to achieve full-duplex transmission is to use two separate channels:
one for the forward data transfer and the other for the reverse transfer, i.e.
the acknowledgements.

• A better solution is to use each channel to transmit frames both ways, with
both channels having the same capacity. If A and B are two users, then the data
frames from A to B are intermixed with the acknowledgements from A to B.

The major advantage of piggybacking is better use of available channel
bandwidth.
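A minimal sketch of the idea in Python: each outgoing data frame carries an acknowledgement field for traffic received in the other direction, so no separate ACK frame is needed. The Frame fields and the build_frame helper are hypothetical.

from dataclasses import dataclass

@dataclass
class Frame:
    seq: int        # sequence number of the data carried in this frame
    ack: int        # piggybacked ack: next sequence number expected from the peer
    payload: bytes

def build_frame(seq, last_received_seq, payload):
    # Ride the acknowledgement on an outgoing data frame instead of sending
    # a separate ACK frame.
    return Frame(seq=seq, ack=last_received_seq + 1, payload=payload)

# A data frame from A to B that also acknowledges B's frame number 6.
frame = build_frame(seq=3, last_received_seq=6, payload=b"hello")
print(frame)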

Choke Packets

A more direct way of telling the source to
slow down.

A choke packet is a control packet
generated at a congested node and
transmitted to restrict traffic flow.

The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage.
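A small illustrative sketch of the source's behaviour: the rate is cut by a fixed percentage on every choke packet and recovers slowly while no choke packets arrive. The 50% cut and the recovery step are assumptions made for illustration.

class ChokeAwareSource:
    def __init__(self, rate_pps=200.0, min_rate_pps=1.0):
        self.rate_pps = rate_pps
        self.min_rate_pps = min_rate_pps

    def on_choke_packet(self):
        # A congested router asked us to slow down: cut the rate by 50%.
        self.rate_pps = max(self.min_rate_pps, self.rate_pps * 0.5)

    def on_quiet_interval(self):
        # No choke packets for a while: cautiously increase the rate again.
        self.rate_pps *= 1.1

src = ChokeAwareSource()
src.on_choke_packet()
src.on_choke_packet()
src.on_quiet_interval()
print(f"current rate: {src.rate_pps:.1f} packets/sec")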

Load Shedding

When buffers become full, routers simply discard
packets.

Which packet is chosen to be the victim depends
on the application and on the error strategy used
in the data link layer.

For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data.

For real-time voice or video it is probably better to
throw away old data and keep new packets.

Another option is to have the application mark packets with a discard priority.
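The sketch below illustrates priority-aware load shedding: when the buffer is full, the router sheds the lowest-priority packet, which may be the arrival itself. The Packet and Router classes and the priority scheme are illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class Packet:
    data: bytes
    priority: int            # higher value = more important (marked by the application)

class Router:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue: List[Packet] = []

    def enqueue(self, pkt):
        if len(self.queue) < self.capacity:
            self.queue.append(pkt)
            return True
        # Buffer full: find the lowest-priority packet currently queued.
        victim = min(range(len(self.queue)), key=lambda i: self.queue[i].priority)
        if self.queue[victim].priority < pkt.priority:
            self.queue[victim] = pkt     # shed the old, low-priority packet
            return True
        return False                     # shed the arriving packet instead

r = Router(capacity=2)
r.enqueue(Packet(b"old-video", priority=1))
r.enqueue(Packet(b"file-chunk", priority=5))
print(r.enqueue(Packet(b"new-voice", priority=3)), [p.data for p in r.queue])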

Random Early Discard (RED)

This is a proactive approach in which the
router discards one or more packets before
the buffer becomes completely full.

Each time a packet arrives, the RED
algorithm computes the average queue
length, avg.

If avg is lower than some lower threshold,
congestion is assumed to be minimal or
non-existent and the packet is queued.

RED, cont.

If avg is greater than some upper threshold,
congestion is assumed to be serious and
the packet is discarded.

If avg is between the two thresholds, this might indicate the onset of congestion. A drop probability is then calculated, and the arriving packet is discarded with that probability.

Traffic Shaping

Another method of congestion control is to
“shape” the traffic before it enters the
network.

Traffic shaping controls the rate at which
packets are sent (not just how many).

Used in ATM and Integrated Services
networks.

At connection set-up time, the sender and
carrier negotiate a traffic pattern (shape).

Two traffic shaping algorithms are:
• Leaky Bucket
• Token Bucket
The Leaky Bucket Algorithm

The Leaky Bucket Algorithm is used to control the rate at which traffic enters the network.

It is implemented as a single-server queue
with constant service time.

If the bucket (buffer) overflows then packets
are discarded.

The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
Leaky Bucket Algorithm, cont.

The leaky bucket enforces a constant output rate
(average rate) regardless of the burstiness of the
input.

Does nothing when input is idle.

The host injects one packet per clock tick onto the
network.

This results in a uniform flow of packets, smoothing
out bursts and reducing congestion.

When all packets are the same size, sending one packet per tick works fine.

For variable-length packets, though, it is better to allow a fixed number of bytes per tick. For example, 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets per tick.
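A minimal byte-counting leaky bucket sketch along these lines, with an illustrative bucket size and per-tick byte budget:

from collections import deque

class LeakyBucket:
    def __init__(self, bucket_bytes=4096, bytes_per_tick=1024):
        self.bucket_bytes = bucket_bytes        # buffer (bucket) capacity
        self.bytes_per_tick = bytes_per_tick    # constant output rate
        self.queue = deque()
        self.queued_bytes = 0

    def arrive(self, size):
        if self.queued_bytes + size > self.bucket_bytes:
            return False                        # bucket overflows: packet is discarded
        self.queue.append(size)
        self.queued_bytes += size
        return True

    def tick(self):
        # Send packets until the per-tick byte budget is exhausted.
        sent, budget = [], self.bytes_per_tick
        while self.queue and self.queue[0] <= budget:
            size = self.queue.popleft()
            self.queued_bytes -= size
            budget -= size
            sent.append(size)
        return sent

lb = LeakyBucket()
for size in (1024, 512, 512, 256):
    lb.arrive(size)
print(lb.tick())    # [1024]     -- one 1024-byte packet fits in this tick
print(lb.tick())    # [512, 512] -- or two 512-byte packets in the next one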
Token Bucket Algorithm

In contrast to the LB, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst.

In the TB algorithm, the bucket holds tokens. To
transmit a packet, the host must capture and
destroy one token.

Tokens are generated by a clock at the rate of one
token every t sec.

Idle hosts can capture and save up tokens (up to
the max. size of the bucket) in order to send larger
bursts later.
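A minimal token bucket sketch, assuming one token per packet for simplicity; the capacity and token interval are illustrative:

import time

class TokenBucket:
    def __init__(self, capacity=8, token_interval=0.01):
        self.capacity = capacity                # max tokens that can be saved up
        self.token_interval = token_interval    # one new token every t seconds
        self.tokens = capacity                  # start with a full bucket
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        new_tokens = int((now - self.last_refill) / self.token_interval)
        if new_tokens:
            self.tokens = min(self.capacity, self.tokens + new_tokens)
            self.last_refill = now

    def try_send(self):
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1                    # capture and destroy one token
            return True
        return False                            # no token left: the packet must wait

tb = TokenBucket()
burst = sum(tb.try_send() for _ in range(12))
print(f"{burst} packets sent back-to-back from the saved-up tokens")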

The Token Bucket Algorithm

(a) Before. (b) After.


Leaky Bucket vs Token Bucket

LB discards packets; TB does not. TB
discards tokens.

With TB, a packet can only be transmitted if
there are enough tokens to cover its length
in bytes.

LB sends packets at an average rate. TB
allows for large bursts to be sent faster by
speeding up the output.

TB allows saving up tokens (permissions) to
send large bursts. LB does not allow saving.

Flow Control

Flow control is aimed at preventing a fast
sender from overwhelming a slow receiver.

Flow control can be helpful in reducing congestion, but it cannot really solve the congestion problem.


For example, suppose we connect a fast
sender and fast receiver using a 9.6 kbps line:
1. If the two machines use a sliding window
protocol, and the window is large, the link will
become congested in a hurry.
2. If the window size is small (e.g., 2 packets),
the link won’t become congested.

