
UNIT IV The Transport Layer


Connectionless Transport: UDP, The Internet Transport Protocols: TCP, Congestion Control
Difference between Connection-Oriented and Connectionless Services of the Protocols

1) Definition
   Connection-oriented: a virtual connection is created before the packet is sent over the internet.
   Connectionless: packets are sent without creating any virtual connection over the internet.

2) Authentication
   Connection-oriented: the destination node is authenticated before data is transferred.
   Connectionless: the data message is transferred without authenticating the destination.

3) Reliability
   Connection-oriented: more reliable, as the virtual connection is made before packets are sent and delivery of each packet to the destination is ensured.
   Connectionless: reliability of packet transmission is not ensured.

4) Handshaking
   Connection-oriented: handshaking is carried out to ensure both sender and receiver agree to the connection.
   Connectionless: no handshaking happens while sending a packet over the network.

5) Delay
   Connection-oriented: slower than the connectionless service, because creating the virtual connection before sending a packet adds extra delay.
   Connectionless: faster than the connection-oriented protocol service.

6) Protocol example
   Connection-oriented: TCP is a connection-oriented protocol.
   Connectionless: UDP is a connectionless protocol.
Importance of TCP Connection-Oriented and UDP Connectionless Protocol
TCP opens the connection and completes all the handshaking formalities before transferring the message to the other node; here, the client and server are the two nodes.

As UDP is a connectionless protocol, it does not require creating a connection, and the message is transferred without handshaking.


This is one of the main differences between the UDP and TCP networking protocols.

Advantages and Disadvantages of Connection-Oriented Service:


Advantages:

 It is reliable.
 All the packets follow the same path to the destination.
Disadvantages:

 Handshaking is required before sending an actual data packet over the internet.
 Requires additional header fields to ensure reliable communication between sender and receiver, so it has extra overhead.
 The header size of the packet is bigger than in a connectionless protocol.
Advantages and Disadvantages of Connectionless protocol:
Advantages:

 It sends the packet without handshaking.


 It is faster than connection-oriented protocol.
 The header size of the packet is smaller as compared to the packets in connection-
oriented services.
Disadvantages:

 It is not reliable and cannot ensure the data transmission to the destination.
 Each packet's route is decided during transmission, based on network congestion.
 It does not have a fixed path.
 Different packets do not necessarily follow the same path.
Use of Connection-Oriented Protocol:


If you need reliable communication between sender and receiver, connection-oriented services
are more useful.

Example: We use email for communication. If we are sending an email to another recipient, it
should be delivered. In this case, the connection-oriented protocol is more reliable to use.

When to use Connectionless protocol?


If we are more concerned about packet transmission speed than reliability, connectionless
service is more useful.

Example: If we are developing a video streaming website, we need a faster connection to stream
without buffering delay. In this case, the connectionless protocol is more useful.

The Domain Name System (DNS) uses a connectionless protocol (UDP) for domain and IP
resolution.

User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable, connectionless
protocol, so there is no need to establish a connection before data transfer. UDP helps
establish low-latency and loss-tolerating connections over the network and enables
process-to-process communication.

User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite.
It is a communication protocol used across the internet for time-sensitive transmissions such as
video playback or DNS lookups.

A DNS lookup is the process through which human-readable domain names (www.digicert.com)
are translated into computer-readable IP addresses.
Unlike Transmission Control Protocol (TCP), UDP is connectionless and does not guarantee
delivery, order, or error checking, making it a lightweight and efficient option for certain types
of data transmission.

UDP Header

The UDP header is a simple, fixed 8-byte header, while the TCP header varies from 20 to
60 bytes. The first 8 bytes contain all the necessary header information, and the remaining part
consists of data. The UDP port number fields are each 16 bits long, so the range for port
numbers is 0 to 65535; port number 0 is reserved. Port numbers help to
distinguish different user requests or processes.

Figure: the UDP header format.

 Source Port: a 2-byte field used to identify the port number of the source.
 Destination Port: a 2-byte field used to identify the destination port of the packet.
 Length: the length of UDP including the header and the data. It is a 16-bit field.
 Checksum: a 2-byte field. It is the 16-bit one's complement of the one's complement sum of the UDP header, the pseudo-header of information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
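
As a rough illustration of this layout, the sketch below packs and unpacks the four 16-bit header fields with Python's struct module. The port numbers and payload are made up for the example, and the checksum is simply set to 0, which IPv4 permits to mean "no checksum":

```python
import struct

# UDP header: four 16-bit fields in network (big-endian) byte order.
UDP_HEADER = struct.Struct("!HHHH")  # src port, dst port, length, checksum

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = UDP_HEADER.size + len(payload)  # header (8 bytes) + data
    checksum = 0                             # 0 = "no checksum" (IPv4 only)
    return UDP_HEADER.pack(src_port, dst_port, length, checksum)

header = build_udp_header(5000, 53, b"example-query")
src, dst, length, csum = UDP_HEADER.unpack(header)
print(src, dst, length, csum)   # -> 5000 53 21 0
```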

Applications of UDP
 Used for simple request-response communication when the amount of data is small, so
there is less concern about flow and error control.
 It is a suitable protocol for multicasting as UDP supports packet switching.
 UDP is used for some routing update protocols like RIP (Routing Information
Protocol).
 Normally used for real-time applications which cannot tolerate uneven delays
between sections of a received message.
 VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use
UDP for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to
ensure fast and efficient data transmission.
 DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a
suitable protocol for this application.
 DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network.
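
To make the request-response pattern above concrete, here is a minimal UDP echo exchange using Python's standard socket module (the loopback address and port 9999 are arbitrary choices for the example). Note that there is no connect step and no handshake:

```python
import socket

# Server: bind and answer one datagram (no connection setup, no handshake).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

# Client: just send; UDP gives no guarantee the datagram arrives.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))

data, addr = server.recvfrom(1024)   # receive the datagram and sender address
server.sendto(b"pong: " + data, addr)
print(client.recvfrom(1024)[0])      # -> b'pong: ping'
```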


TCP vs UDP

Type of Service
   TCP: a connection-oriented protocol. Connection orientation means that the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data.
   UDP: a datagram-oriented protocol. There is no overhead for opening a connection, maintaining a connection, or terminating a connection. UDP is efficient for broadcast and multicast types of network transmission.

Reliability
   TCP: reliable, as it guarantees the delivery of data to the destination.
   UDP: the delivery of data to the destination cannot be guaranteed.

Acknowledgment
   TCP: an acknowledgment segment is present.
   UDP: no acknowledgment segments.

Sequence
   TCP: sequencing of data is a feature of TCP; packets arrive in order at the receiver.
   UDP: there is no sequencing of data. If ordering is required, it has to be managed by the application layer.

Speed
   TCP: comparatively slower than UDP.
   UDP: faster, simpler, and more efficient than TCP.

Retransmission
   TCP: retransmission of lost packets is possible.
   UDP: there is no retransmission of lost packets.

Header Length
   TCP: a variable-length (20-60 bytes) header.
   UDP: an 8-byte fixed-length header.

Weight
   TCP: heavy-weight.
   UDP: lightweight.

Handshaking Techniques
   TCP: uses handshakes such as SYN, ACK, SYN-ACK.
   UDP: a connectionless protocol, i.e. no handshake.

Protocols
   TCP: used by HTTP, HTTPS, FTP, SMTP, and Telnet.
   UDP: used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.

Stream Type
   TCP: the TCP connection is a byte stream.
   UDP: the UDP connection is a message stream.

Advantages of UDP
 Speed: UDP is faster than TCP because it does not have the overhead of
establishing a connection and ensuring reliable data delivery.
 Lower latency: Since there is no connection establishment, there is lower latency
and faster response time.
 Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
 Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
 Smaller packet size: UDP packets carry less header overhead than TCP packets (8 bytes
versus 20-60 bytes), which can reduce network load and improve overall network performance.
Disadvantages of UDP
 No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
 No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.

TCP (Transmission Control Protocol) is one of the main protocols of the TCP/IP
suite. It lies between the Application and Network layers and provides
reliable delivery services. The Transmission Control Protocol (TCP) ensures reliable and
efficient data transmission over the internet.


TCP plays a crucial role in managing the flow of data between computers, guaranteeing that
information is delivered accurately and in the correct sequence.
This unit discusses the Transmission Control Protocol (TCP) in detail, along with the
Internet Protocol (IP), the difference between TCP and IP, and how IP works. Let's
proceed with the definition of TCP first.

Transmission Control Protocol (TCP):


Transmission Control Protocol (TCP) is a connection-oriented protocol for
communications that helps in the exchange of messages between different devices over a
network. The Internet Protocol (IP), which establishes the technique for sending data packets
between computers, works with TCP.
The position of TCP is at the transport layer of the OSI model. TCP also helps in ensuring that
information is transmitted accurately by establishing a virtual connection between the sender
and receiver.

Internet Protocol (IP):


Internet Protocol (IP) is the method for sending data from one device to another
across the internet. It is a set of rules governing how data is sent and received over the
internet. It is responsible for addressing and routing packets of data so they can travel from the
sender to the correct destination across multiple networks. Every device has a unique IP
address that helps it communicate and exchange data with other devices on the
internet.

Working of Transmission Control Protocol (TCP)


The Transmission Control Protocol (TCP) breaks data down into small bundles (packets) and
reassembles the bundles into the original message at the opposite end, to make sure
that each message reaches its target location intact. Sending the information in small
bundles makes it simpler to maintain efficiency, as opposed to sending everything in one
go.
After a message is broken down into bundles, the bundles may travel along
multiple routes if one route is jammed, while the destination remains the same.


For example: when a user requests a web page on the internet, somewhere in the world a
server processes that request and sends back an HTML page to that user. The server uses
a protocol called HTTP. HTTP then asks the TCP layer to set up the
required connection and send the HTML file.
TCP breaks the data into small packets and forwards them to the Internet Protocol
(IP) layer. The packets are then sent to the destination, possibly through different routes.
The TCP layer in the user's system waits for the transmission to finish and acknowledges
once all packets have been received.
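
The same flow can be observed from the application's side with a short Python sketch: the connect() call triggers TCP's handshake, while the kernel transparently handles segmentation, ordering, and acknowledgments (example.com is used purely as an illustrative host):

```python
import socket

# connect() triggers TCP's three-way handshake with the server.
sock = socket.create_connection(("example.com", 80), timeout=5)

# The application writes a byte stream; TCP splits it into segments,
# and the receiver's TCP reassembles them in order.
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

response = b""
while chunk := sock.recv(4096):   # read until the server closes the stream
    response += chunk
sock.close()
print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'
```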

Features of TCP/IP
Some of the most prominent features of the Transmission Control Protocol are mentioned below.
 Segment Numbering System: TCP keeps track of the segments being transmitted or received by assigning a number to each one. A specific byte number is assigned to the data bytes to be transferred, sequence numbers are assigned to segments, and acknowledgment numbers are assigned to received segments.
 Connection Oriented: the sender and receiver remain connected until the process completes, and the order of the data is maintained, i.e. the order remains the same before and after transmission.
 Full Duplex: in TCP, data can be transmitted from receiver to sender and vice versa at the same time, which increases the efficiency of data flow between sender and receiver.
 Flow Control: flow control limits the rate at which a sender transfers data, to ensure reliable delivery. The receiver continually hints to the sender how much data it can receive (using a sliding window).
 Error Control: TCP implements a byte-oriented error control mechanism for reliable data transfer. Segments are checked for errors; error control covers corrupted segments, lost segments, out-of-order segments, duplicate segments, etc.
 Congestion Control: TCP takes into account the level of congestion in the network; the congestion level is influenced by the amount of data sent by senders.
Advantages of TCP
 It is a reliable protocol.
 It provides an error-checking mechanism as well as one for recovery.

 It gives flow control.


 It makes sure that the data reaches the proper destination in the exact order that it
was sent.
 It is a well-documented and widely implemented protocol, maintained by standards
organizations like the IETF (Internet Engineering Task Force).
 It works in conjunction with IP (Internet Protocol) to establish connections between
devices on a network.
Disadvantages of TCP
 TCP is made for wide area networks, so its overhead can become an issue for small
networks with low resources.
 TCP's multiple layers of processing can slow down the network.
 It is not generic: it cannot represent any protocol stack other than the TCP/IP suite;
e.g., it cannot work over a Bluetooth connection.
 The core protocol has seen no major modifications since its development around 30 years ago.

Congestion Control in Computer Networks


Congestion:

Congestion in a computer network happens when there is too much data being sent at the same
time, causing the network to slow down. Just like traffic congestion on a busy road, network
congestion leads to delays and sometimes data loss.

Congestion control:

Congestion control is a crucial concept in computer networks. It refers to the methods used to
prevent network overload and ensure smooth data flow. When too much data is sent through
the network at once, it can cause delays and data loss. Congestion control techniques help
manage the traffic, so all users can enjoy a stable and efficient network connection. These
techniques are essential for maintaining the performance and reliability of modern networks.

Effects of Congestion Control in Computer Networks


 Improved Network Stability: Congestion control helps keep the network stable by
preventing it from getting overloaded. It manages the flow of data so the network
doesn’t crash or fail due to too much traffic.
 Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring fewer data
packets are lost, making data transfer faster and the network more responsive.
 Enhanced Throughput: By avoiding congestion, the network can use its resources
more effectively. This means more data can be sent in a shorter time, which is
important for handling large amounts of data and supporting high-speed
applications.


 Fairness in Resource Allocation: Congestion control ensures that network


resources are shared fairly among users. No single user or application can take up
all the bandwidth, allowing everyone to have a fair share.
 Better User Experience: When data flows smoothly and quickly, users have a
better experience. Websites, online services, and applications work more reliably
and without annoying delays.

Congestion Control Algorithm


 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestion-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
 There are two congestion control algorithms which are as follows:

Leaky Bucket Algorithm

To understand this, imagine a bucket with a small hole in the bottom. No matter at what rate
water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional
water entering spills over the sides and is lost.


Similarly, each network interface contains a leaky bucket, and the following steps are involved
in the leaky bucket algorithm:

 When a host wants to send a packet, the packet is thrown into the bucket.
 The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
 Bursty traffic is converted to uniform traffic by the leaky bucket.
 In practice, the bucket is a finite queue that outputs at a finite rate.
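
A minimal sketch of these steps, assuming the bucket is measured in packets and one clock tick "leaks" a fixed batch:

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity: int, out_rate: int):
        self.queue = deque()          # the bucket: a finite FIFO queue
        self.capacity = capacity      # bucket size in packets
        self.out_rate = out_rate      # packets transmitted per tick

    def arrive(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            return False              # bucket full: packet spills over (dropped)
        self.queue.append(packet)
        return True

    def tick(self) -> list:
        # Leak at a constant rate, regardless of how bursty arrivals were.
        n = min(self.out_rate, len(self.queue))
        return [self.queue.popleft() for _ in range(n)]

bucket = LeakyBucket(capacity=5, out_rate=2)
for i in range(8):                    # bursty arrival of 8 packets
    bucket.arrive(f"pkt{i}")          # pkt5..pkt7 spill over and are lost
print(bucket.tick())                  # -> ['pkt0', 'pkt1']  (uniform output)
```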

Token Bucket Algorithm

 The leaky bucket algorithm has a rigid output pattern at the average rate, independent of how bursty the traffic is.
 The token bucket is a control algorithm that indicates when traffic may be sent, based on the presence of tokens in the bucket.
 The bucket contains tokens, each of which represents a unit of data of predetermined size. Tokens are removed from the bucket in exchange for the ability to send a packet.
 When tokens are present, a flow is allowed to transmit traffic.
 No token means no flow may send its packets. Hence, a flow can transmit at up to its peak burst rate only if there are enough tokens in the bucket.

As a result, if tokens are available, a burst of packets can be transmitted at the peak rate,
giving the system some flexibility. The maximum burst duration S satisfies

M * S = C + ρ * S, and therefore S = C / (M − ρ)

where S denotes the length of the maximum-rate burst (the amount of time spent bursting),
M stands for the maximum output rate, C is the byte capacity of the token bucket, and
ρ is the rate at which tokens arrive in the bucket.
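
As an illustrative calculation with made-up numbers: for a bucket capacity C = 250 KB, a maximum output rate M = 25 MB/s, and a token arrival rate ρ = 2 MB/s, the maximum burst lasts S = C / (M − ρ) = 250 KB / 23 MB/s ≈ 11 ms.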

Need of Token Bucket Algorithm

The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how
bursty the traffic is. So, in order to deal with bursty traffic, we need a flexible algorithm
that does not lose data. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
 At regular intervals, tokens are thrown into the bucket.
 The bucket has a maximum capacity.
 If there is a ready packet, a token is removed from the bucket, and the packet is sent.
 If there is no token in the bucket, the packet cannot be sent.


Let's understand with an example. In figure (A), we see a bucket holding three tokens, with
five packets waiting to be transmitted. For a packet to be transmitted, it must capture and
destroy one token. In figure (B), we see that three of the five packets have gotten through, but
the other two are stuck waiting for more tokens to be generated.

Advantages of congestion control:


 Stable Network Operation: Congestion control ensures that networks remain
stable and operational by preventing them from becoming overloaded with too
much data traffic.
 Reduced Delays: It minimizes delays in data transmission by managing traffic flow
effectively, ensuring that data packets reach their destinations promptly.
 Less Data Loss: By regulating the amount of data in the network at any given time,
congestion control reduces the likelihood of data packets being lost or discarded.
 Optimal Resource Utilization: It helps networks use their resources efficiently,
allowing for better throughput and ensuring that users can access data and services
without interruptions.
 Scalability: Congestion control mechanisms are scalable, allowing networks to
handle increasing volumes of data traffic as they grow without compromising
performance.
Disadvantages of congestion control:
 Complexity: Implementing congestion control algorithms can add complexity to
network management, requiring sophisticated systems and configurations.
 Overhead: Some congestion control techniques introduce additional overhead,
which can consume network resources and affect overall performance.
 Algorithm Sensitivity: The effectiveness of congestion control algorithms can be
sensitive to network conditions and configurations, requiring fine-tuning for optimal
performance.
 Resource Allocation Issues: Fairness in resource allocation, while a benefit, can
also pose challenges when trying to prioritize critical applications over less
essential ones.
 Dependency on Network Infrastructure: Congestion control relies on the
underlying network infrastructure and may be less effective in environments with
outdated or unreliable equipment.

Working of Token Bucket Algorithm

The system removes one token for every cell of data sent. For each tick of the clock, the system
adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket collects
10,000 tokens; the host can then consume these tokens in a burst, for example at 10 cells per
tick for 1,000 ticks.
A token bucket can be easily implemented with a counter. The counter is initialized to zero;
each time a token is added, the counter is incremented by 1, and each time a unit of data is
sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.


Figure: process depicting how the token bucket algorithm works.

Steps Involved in Token Bucket Algorithm


Step 1: Creation of Bucket: An imaginary bucket is assigned a fixed capacity, known as the
"rate limit". It can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens.
Tokens are added to the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in the
bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This
means the request is allowed to proceed.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the
request is denied.
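
A minimal sketch of these five steps as a request rate limiter, with the refill rate and capacity chosen arbitrarily for the example:

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate                  # tokens added per second (Step 2)
        self.capacity = capacity          # fixed bucket capacity (Step 1)
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Step 2: refill at a fixed rate, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        # Steps 3-5: consume a token if one is present, otherwise deny.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s, bursts of up to 10
print([bucket.allow() for _ in range(12)])  # typically 10x True, then False
```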

Advantages of Token Bucket:


 If the bucket is full of tokens, newly arriving tokens are discarded, not packets; in
the leaky bucket algorithm, packets are discarded.
 Token bucket can send large bursts at a faster rate while leaky bucket always sends
packets at constant rate.
 Token bucket ensures predictable traffic shaping as it allows for setting token
arrival rate and maximum token count. In leaky bucket, such control may not be
present.
 Token bucket is suitable for high-speed data transfer or streaming video content as
it allows transmission of large bursts of data. As leaky bucket operates at a
constant rate, it can lead to less efficient bandwidth utilization.

Disadvantages of Token Bucket Algorithm


 The token bucket generates tokens at a fixed rate even when no network traffic is
present. This leads to an accumulation of unused tokens during idle periods, and hence
to wastage.


 Due to token accumulation, delays can be introduced in the packet delivery. If the
token bucket happens to be empty, packets will have to wait for new tokens, leading
to increased latency and potential packet loss.
 Token Bucket happens to be less flexible than leaky bucket when it comes to
network traffic shaping. The fixed token generation rate cannot be easily altered to
meet changing network requirements, unlike the adaptable nature of leaky bucket.

Flow Characteristics


Four types of characteristics are attributed to a flow: reliability, delay, jitter, and bandwidth.

Types of Characteristics for Quality of Service

Reliability
Reliability concerns whether a packet reaches its destination and whether information is lost.
Lack of reliability means losing a packet or acknowledgement, which entails re-transmission.
Reliability requirements differ from program to program: for example, reliable transmission
matters more for electronic mail, file transfer, and internet access than for telephony or
audio conferencing.
Delay
It denotes source-to-destination delay. Different applications can tolerate delay in different
degrees. Telephony, audio conferencing, video conferencing, and remote log-in need minimum
delay, while delay in file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging in same flow. High jitter means the
difference between delays is large; low jitter means the variation is small.
Bandwidth
Different applications need different bandwidths. In video conferencing we need to send
millions of bits per second to refresh a color screen while the total number of bits in an e-mail
may not reach even a million.


Techniques to Improve QoS

There are several ways to improve QoS, such as scheduling and traffic shaping; we will look
at each briefly.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Three scheduling
techniques are:
1. FIFO Queuing
2. Priority Queuing
3. Weighted Fair Queuing

Packet Queuing and Dropping in Routers


Routers are essential networking devices that direct the flow of data over a network. Routers
have one or more input and output interfaces which receive and transmit packets
respectively. Since the router’s memory is finite, a router can run out of space to accommodate
freshly arriving packets. This occurs if the rate of arrival of the packets is greater than the rate
at which packets exit from the router’s memory. In such a situation, new packets are
ignored or older packets are dropped. As part of the resource allocation mechanisms, routers
must implement some queuing discipline that governs how packets are buffered or dropped
when required.

Fig 1: Depiction of a router’s inbound and outbound traffic

Queue Congestion and Queuing Disciplines

Router queues are susceptible to congestion by virtue of the limited buffer memory available to
them. When the rate of ingress traffic becomes larger than the amounts that can be forwarded
on the output interface, congestion is observed. The potential causes of such a situation mainly
involve:
 Speed of incoming traffic surpasses the rate of outgoing traffic
 The combined traffic from all the input interfaces exceeds overall output capacity
 The router processor is incapable of handling the size of the forwarding table to
determine routing paths
To manage the allocation of router memory to the packets in such situations of congestion,
different disciplines might be followed by the routers to determine which packets to keep and


which packets to drop. Accordingly, we have the following important queuing disciplines in
routers:

First-In, First-Out Queuing (FIFO)

The default queuing scheme followed by most routers is FIFO. It generally requires little or
no configuration on the router. All packets in FIFO are serviced in the same order
as they arrive at the router. When the memory reaches saturation, new packets attempting
to enter the router are dropped (tail drop).
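
A FIFO queue with tail drop can be sketched in a few lines (the buffer size is an arbitrary choice for the example):

```python
from collections import deque

BUFFER_SIZE = 4
fifo = deque()

def enqueue(packet) -> bool:
    if len(fifo) >= BUFFER_SIZE:     # memory saturated
        return False                 # tail drop: the newest arrival is discarded
    fifo.append(packet)
    return True

for i in range(6):
    if not enqueue(f"pkt{i}"):
        print("dropped", f"pkt{i}")  # pkt4 and pkt5 are dropped
print(fifo.popleft())                # -> pkt0 (serviced in arrival order)
```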

Priority Queuing (PQ)


In Priority Queuing, instead of using a single queue, the router divides its memory into
multiple queues based on some measure of priority. Each queue is then handled in a
FIFO manner while the router cycles through the queues one by one. The queues are marked
as High, Medium, or Low based on priority. Packets from the High queue are always
processed before packets from the Medium queue; likewise, packets from the Medium queue
are always processed before packets in the Low queue. As long as some packets exist in
the High priority queue, no other queue's packets are processed. Thus, high-priority packets cut
to the front of the line and get serviced first; only once a higher-priority queue is emptied is
a lower-priority queue serviced.

Fig 2: Multiple sub-queues used in Priority Queuing Scheme

The obvious advantage of PQ is that higher-priority traffic is always processed first. A
significant disadvantage of the PQ scheme, however, is that the lower-priority queues can often
receive no service at all as a result of starvation: a constant stream of High priority traffic
can starve out the lower-priority queues.

Weighted Fair Queuing (WFQ)


Weighted Fair Queuing (WFQ) dynamically creates queues based on traffic flows and assigns
bandwidth to these flows based on priority. The sub-queues are assigned bandwidths
dynamically. Suppose 3 queues exist which have bandwidth percentages of 20%, 30%, and
50% when they are all active. Then, if the 20% queue is idle, the freed-up bandwidth is
allocated among the remaining queues, while preserving the original bandwidth ratios. Thus,


the 30% queue is now allotted (75/2) % and the 50% queue is now allotted (125/2) %
bandwidth.

Fig 3: Dynamically allocated bandwidths for sub-queues in WFQ

Unlike PQ schemes, the WFQ-queues are allotted differing bandwidths based on their queue
priorities. Packets with a higher priority are scheduled before lower-priority packets arriving at
the same time.

Effect of Queuing Disciplines on Network

The choice of queuing discipline impacts the performance of the network in terms of the
number of dropped packets, latency, etc. When analyzing the effect of choosing the different
schemes, we observe significant impacts on various parameters.


Fig 4: Number of packets dropped versus time for different queuing disciplines (simulation run on Riverbed Modeler)

Measuring the overall packet drop in the network for the three schemes points to the
following results:

 In all the mechanisms, there are no packet drops in the beginning, up to a certain
point. This is because it takes a finite time for the router's buffer memory to fill up;
since packet drops occur only after the buffer is full, there is an initial period with
no packet drops, while the buffer capacity has not yet been reached.
 In the FIFO scheme, packet drops start after PQ but before WFQ. More
prominently, the number of packets dropped is greatest in the case of FIFO.
This is because, once congested, all incoming traffic from all the applications is
dropped altogether without any discrimination.
 In the PQ scheme, packet drops start the earliest. Since PQ divides the queue based
on priority levels, the overall size of each individual queue is divided up. Assuming
a simple division of the memory into an "Important" queue and a "Less
Important" queue, each queue's size is halved. Thus, the packets being directed to
the sub-queues will fill each queue up earlier (because of the smaller capacity),
and hence packet drop will start earlier.

Traffic Shaping
It is a mechanism to control the amount and the rate of the traffic sent to the network. The
techniques used to shape traffic are: leaky bucket and token bucket.

Difference Between Token Bucket Algorithm and Leaky Bucket Algorithm


The differences between the leaky and token bucket algorithms are:

Token Bucket: it depends on tokens.
Leaky Bucket: it does not depend on tokens.

Token Bucket: if the bucket is full, the token is discarded, but not the packet.
Leaky Bucket: if the bucket is full, packets are discarded.

Token Bucket: packets can only be transmitted when there are enough tokens.
Leaky Bucket: packets are transmitted continuously.

Token Bucket: allows large bursts to be sent at a faster rate; the bucket has a maximum capacity.
Leaky Bucket: sends packets at a constant rate.

Token Bucket: the bucket holds tokens generated at regular intervals of time.
Leaky Bucket: when the host has to send a packet, the packet is thrown into the bucket.

Token Bucket: if there is a ready packet, a token is removed from the bucket and the packet is sent.
Leaky Bucket: bursty traffic is converted into uniform traffic by the leaky bucket.

Token Bucket: if there is no token in the bucket, the packet cannot be sent.
Leaky Bucket: in practice, the bucket is a finite queue that outputs at a finite rate.

For example, if a host does not send for a while, its bucket becomes empty. If the host then has
bursty data, the leaky bucket still allows only the average rate: the time when the host was
idle is not taken into account. The token bucket algorithm, on the other hand, allows idle
hosts to accumulate credit for the future in the form of tokens, and that is how it overcomes
this shortcoming of the leaky bucket algorithm.


Handshaking Basics

In simple terms, connecting parties with a handshaking protocol means defining how the
communication will occur. Specifically, handshaking can establish, before full communication
begins, the protocol for exchanging messages, the data encoding, and the maximum transfer
rates. Several applications require handshaking. Below, we briefly describe how this process
executes for particular applications:

 Transmission Control Protocol (TCP): Opening regular TCP connections requires a three-
way handshake. Thus, TCP aims to establish reliable communication by
synchronizing the message exchanging between two parties.
 Transport Layer Security (TLS): A TLS connection requires a series of agreements
between clients and servers to secure the communication. In this way, the handshake
protocol from TLS is designed to define security features for a specific connection,
such as encryption algorithms and keys.
 Simple Mail Transfer Protocol (SMTP): After establishing a TCP connection to exchange
messages, the SMTP servers and clients must identify themselves through a particular
handshake process. Hence, besides authenticating servers and users, this handshake
process also negotiates other communication features, such as the adopted
encryption protocol and the maximum message size.

Two-Way Handshake

The two-way handshake is a simple protocol to create a connection between two parties
that want to communicate. In order to do that, this protocol uses synchronization (SYN) and
acknowledgment (ACK) messages.
Briefly, a SYN message requests a connection and informs the other party of a sequence
number used to control the data exchange. In practice, the TCP sequence number is a counter of
bytes passing on a particular stream. ACK messages, in turn, are employed to confirm receipt
(using the sequence number) of incoming messages.
To accomplish the two-way handshake in a client/server model, the client sends a SYN
message to the server with a sequence number X. Then, the server acknowledges (ACK) the
SYN message, providing another sequence number Y and establishing the connection. Thus,
sequence number X is used to acknowledge messages from the client to the server, while
sequence number Y is used to acknowledge messages from the server to the client.
The following figure depicts the previously described two-way handshake process:


In particular, the two-way handshake presents potential problems when the ACK message
from the server is delayed too long. If a connection timeout occurs, the client sends another
SYN message with a new sequence number (Z, for example) to the server. However, if the server
previously sent an ACK (which is delayed), it will discard this new SYN message. The client, in
turn, receives the delayed ACK and assumes that it refers to the last sent SYN message. Here is
where the error happens: the client will send messages with the sequence number Z, while the
server expects messages following the sequence number X.
The figure below shows the outlined problem:

A three-way handshake process solves the described problem of two-way handshaking.

Three-Way Handshake

Like two-way handshaking, three-way handshaking also establishes connections between
two parties using SYN and ACK messages. However, besides providing their sequence
numbers, the server and client acknowledge each other's sequence numbers. This
sequence-number acknowledgment avoids SYN duplication errors.


So, let's consider a client that wants to communicate with a server employing the three-way
handshake protocol. First, the client sends a SYN message with its sequence number (X) to
the server. The server replies with a SYN-ACK containing its own sequence number (Y) and
acknowledging the client's sequence number (X). After that, the client sends an ACK message to
confirm the server's sequence number (Y), finally establishing the connection.
The following figure demonstrates the presented three-way handshake process:
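
The exchange can also be traced with a toy simulation; the sequence numbers are chosen at random, as a real TCP implementation would, and the message strings merely mimic the segments described above:

```python
import random

def three_way_handshake():
    # Client chooses an initial sequence number X and sends SYN.
    x = random.randint(0, 2**32 - 1)
    print(f"client -> server: SYN, seq={x}")

    # Server replies with its own sequence number Y and acknowledges X.
    y = random.randint(0, 2**32 - 1)
    print(f"server -> client: SYN-ACK, seq={y}, ack={x + 1}")

    # Client acknowledges Y; both sides now agree on both sequence numbers,
    # so a delayed duplicate SYN can no longer be confused with this one.
    print(f"client -> server: ACK, ack={y + 1}")
    return ("ESTABLISHED", x + 1, y + 1)

print(three_way_handshake())
```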

Congestion Control Techniques:

Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens. The
congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy:
This policy governs how retransmission of packets is handled. If the sender
feels that a sent packet is lost or corrupted, the packet needs to be retransmitted;
this retransmission may itself increase congestion in the network.
To prevent that, retransmission timers must be designed both to avoid
congestion and to optimize efficiency.

2. Window Policy:
The type of window at the sender's side may also affect congestion. In a
Go-Back-N window, several packets are re-sent even though some of them may have
been received successfully at the receiver side. This duplication may increase
congestion in the network and make it worse.


Therefore, a Selective Repeat window should be adopted, as it resends only the
specific packet that may have been lost.

3. Discarding Policy: Routers can discard less sensitive packets to prevent congestion
while maintaining the quality of the transmission (for example, of an audio stream).

4. Acknowledgment Policy:
Since acknowledgements are also part of the load on the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to acknowledgments:
the receiver can send one acknowledgement for N packets rather than acknowledging
each packet individually, or send an acknowledgment only when it has a packet to
send or a timer expires.

5. Admission Policy:
In an admission policy, a mechanism is used to prevent congestion before it starts.
Switches in a flow should first check the resource requirement of a network flow
before transmitting it further. If there is a chance of congestion, or congestion
already exists in the network, a router should deny establishing a new virtual-circuit
connection to prevent further congestion.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from its
upstream node. It is a node-to-node congestion control technique that propagates in the
opposite direction of the data flow.

In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the
2nd node may become congested due to the slowing of its output data flow. Similarly, the 1st
node may become congested and inform the source, telling it to slow down.

2. Choke Packet Technique:

A choke packet is a packet sent by a node directly to the source to inform it of congestion.


3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the source;
the source guesses that there is congestion in the network. For example, when a sender sends
several packets and receives no acknowledgment for a while, it may assume that the network
is congested.

4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the
source or destination to inform it of the congestion. The difference between the choke packet
technique and explicit signaling is that in explicit signaling the signal is included in the
packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.

Congestion Control Mechanisms:


Some of the congestion control mechanisms are as follows:

Traffic Monitoring and Measurement:

Congestion control begins with monitoring and measuring network traffic. Metrics such as
packet loss, delay, and throughput can be analyzed continuously by network administrators
to identify signs of congestion and take appropriate measures.

Traffic Regulation:

To manage congestion, traffic regulation mechanisms are implemented. As a result of these


mechanisms, data is transmitted across the network at a controlled rate, ensuring that
network resources are not overwhelmed. Among the most common traffic regulation
mechanisms are:


Traffic Shaping:

Traffic shaping slows down data transmission by smoothing the flow of traffic using
techniques such as buffering, queuing, and prioritization.

Traffic Policing:

Traffic policing enforces pre-defined traffic rates. It examines packets as they enter the
network and drops or marks packets that exceed the specified rate. This prevents excessive
traffic from congesting networks.

Congestion notification and feedback:

A congestion notification and feedback system is crucial for managing congestion


effectively. It is important that network devices inform the source of traffic about
congestion status so the source can adjust transmission behavior accordingly. Two important
mechanisms for this purpose are:

Explicit Congestion Notification (ECN):

ECN is a congestion notification mechanism that allows routers to mark packets to indicate
network congestion. Congestion detection is achieved by setting a bit in packet headers.
When the receiver or subsequent routers detect congestion, they can notify the sender,
triggering congestion control mechanisms.

Transmission Control Protocol (TCP) Congestion Control:

Using its congestion control mechanisms, TCP monitors network conditions by analyzing
acknowledgements and adjusts its transmission rate based on congestion indications. To
manage congestion, it uses algorithms such as TCP Reno, TCP Vegas, and TCP CUBIC.
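
The common idea underneath these variants is additive-increase/multiplicative-decrease (AIMD) of the congestion window. The toy sketch below illustrates it, with window sizes in segments and a single loss event invented for the example; it is a simplification, not a faithful model of any one TCP variant:

```python
# Toy AIMD trace: cwnd in segments, one loss event at RTT 8.
cwnd, ssthresh = 1.0, 16.0

for rtt, loss in enumerate([False] * 8 + [True] + [False] * 4):
    if loss:
        ssthresh = max(cwnd / 2, 1.0)  # multiplicative decrease on loss
        cwnd = ssthresh                # simplified Reno-style recovery
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: double each RTT
    else:
        cwnd += 1                      # congestion avoidance: +1 per RTT
    print(f"RTT {rtt:2d}: cwnd={cwnd:5.1f}  ssthresh={ssthresh:5.1f}")
```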

Quality of Service (QoS) Management:

The management of Quality of Service (QoS) means ensuring that critical applications
receive preferential treatment and are less likely to experience congestion-related issues by
prioritizing certain types of network traffic. A Quality of Service mechanism allocates
network resources according to predefined priorities and rules.


Routing Optimization and Load Balancing:

Network traffic is distributed across multiple paths and resources by load balancing and
routing optimization techniques. Congestion can be mitigated, and network resources can be
more effectively utilized, if the load is distributed evenly and efficiently.

Approaches Of Congestion Control


Network Provisioning: In network provisioning, the network is constructed in such a way that it
can handle a pre-determined amount of traffic.
 If a link has too little bandwidth, it causes congestion on the network; resources
are therefore added dynamically when severe congestion is detected.
 In cases of severe congestion, routers or lines originally intended as backups are
brought into service to relieve the congestion.
 In network provisioning, routers and links that are heavily loaded all the time are
regularly upgraded to maintain performance.
Traffic-aware routing: In this method, the path changes when the router finds traffic in the link.
The below diagram explains the traffic-aware routing approach.

 As shown in the figure, there are 12 routers on a network. Network-1 and


Network-2 are connected by two links, R5 to R11 and R6 to R12.
 The link from R5 to R11 is congested because the link is carrying packets beyond
its capacity. Therefore, Router-5 notices that there is a heavy load on the R5 to
R11 link, so it looks for another way to send the packets quickly.
 Router-5 finds that the path from R6 to R12 is free, and there is no traffic on the
link, so it will send packets using the path from R6 to R12.
 Router-5 continuously sends packets on path R6-R12 until it finds path R5-R11
available.


 From this example, we can say that the traffic is spread over both R5-R11 and R6-
R12.

Admission Control

Admission control is a mechanism used to prevent congestion in connection-oriented networks.

In this method, a new virtual circuit is not set up unless the network can handle the added
traffic. The leaky bucket and token bucket are techniques of admission control.

 In a leaky bucket, incoming packets fall into the bucket and leak out at a constant
rate. The incoming rate may vary, but the outgoing rate remains constant.
 In the leaky bucket technique, if the host does not send any packets for some time,
its bucket becomes empty. That idle time is wasted, because no packets make use
of it. In the token bucket, the host stores credit for the future in the form of
tokens.
 Leaky Bucket detects the average data rate of incoming traffic and converts it to
fixed-rate traffic. If the bucket on the network is full, it drops the packet.
 Admission control mechanisms can be combined with traffic-aware routing and
used across networks.

Traffic Throttling

When communication occurs between a sender and a receiver on a network, the sender sends as
much traffic to the receiver as possible. If the network becomes congested by receiving a large
number of packets, it notifies the sender to throttle back and slow down the transmission.

 The traffic throttling approach is used on both datagram networks and virtual-
circuit networks.
 Routers monitor the resources they have: the links over which packets arrive, the
buffer of queued packets, and the number of packets lost. Traffic can be throttled
using this information.
 In another approach to throttling traffic, the router looks at the delay experienced
by incoming packets. By measuring the delay, the router decides whether there is
congestion in the network; this is a common technique.
 But what does the router do if it detects congestion? It notifies the sender that
sent the packets and alerts it to the congestion.
 To respond to congestion, routers use different schemes. They are as follows:
o Choke Packets
o Explicit Congestion Notification
o Hop-by-Hop Backpressure


Traffic Throttling Schemes

Typically, the router uses choke packets, Explicit Congestion Notification (ECN), and hop-by-hop
backpressure mechanisms to notify the sender about congestion.

Choke Packets: In this method, the router directly informs the sender about the congestion.
The router generates a choke packet identifying the congested flow and sends it back to the sender.
 When the sender receives a choke packet from the router, it reduces the data
sending rate by 40-50% to prevent congestion on the network.
 If after the sender lowers the data rate, the network becomes congested then the
router sends choke packets again. The sender receives choke packets until the
network is balanced.
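
The repeated back-off can be pictured with a short loop; the starting rate, the link capacity, and the 50% cut are illustrative values taken loosely from the description above:

```python
rate = 100.0                  # sender's data rate in Mbps (illustrative)
LINK_CAPACITY = 30.0          # what the congested link can actually carry

while rate > LINK_CAPACITY:   # router keeps emitting choke packets
    print(f"choke packet received; sender at {rate:.1f} Mbps cuts back")
    rate *= 0.5               # sender reduces its rate by ~50%
print(f"network balanced at {rate:.1f} Mbps")
```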
Explicit Congestion Notification (ECN): Instead of sending choke packets to the source, the
router sets the Congestion Experienced (CE) bit in the packet and forwards it. When the sender
learns that the CE bit was set (via the receiver's echo), it knows that the packets it sends are
experiencing congestion. This mechanism is known as Explicit Congestion Notification (ECN).
The diagram below explains the explicit congestion notification method.

 As shown in the figure, PC-1 sends the packet to PC-2 on the network, it passes
through two routers.
 First, the packet is unmarked, meaning the Congestion Experienced (CE) bit is
not set. As the packet passes through Router-2, which is congested, the router sets
the CE bit in the packet.
 When PC-2 receives the packet, it checks whether the CE bit is set; if it is, PC-2
sends an Echo packet to the sender, informing it that its packets are experiencing
congestion at Router-2.


 Upon receiving an echo reply, the sender throttles the traffic and slows down the
transmission speed.
Hop-by-Hop Backpressure: In this method, choke packets propagate hop-by-hop from the
congested node back toward the source. Let us understand with an example.
The below diagram explains the hop-by-hop backpressure mechanism.

 As shown in the figure, Router-1 sends packets continuously at a speed of 100


Mbps. Now, Router-4 can only handle packets at 40 Mbps. So, it will generate a
choke packet and send it to Router-3.
 When Router-3 receives the choke packet, it reduces the flow of the packet to
Router-4. But, Router-3 is also receiving a large number of packets, so it will send
a choke packet to Router-2.
 As soon as Router-2 receives the choke packet, it also reduces the flow of data
and forwards the choke packet to Router-1.
 In this way, Router-1 also receives a choke packet and this reduces the flow of
data sending.

Load Shedding

When none of the above methods work to prevent congestion on the network, the router starts
discarding packets to balance the network. This is known as load shedding, in which routers
simply drop packets when they cannot handle them.

 The key point in load shedding is how the router discards the packets. This varies
according to the applications and services used by the network.


 For example, if two devices are transferring files using FTP, the old packets are
more important than the new packets.
 If a packet contains routing information, it is considered as an important packet
because the loss of this packet can result in network downtime.
 In the load shedding method, packets are marked with a priority, which the router
uses to decide how important each packet is to the network. This lets the router
enforce a sensible discard policy.
 Random early detection is a part of the load shedding method used to deal with
congestion.

Random Early Detection

Random Early Detection (RED) is an algorithm used to determine when to discard packets.
Using the RED mechanism, the router decides which packets to drop quickly before they cause
problems on the network.

 Senders send packets over the network at very high speed. In RED, the router
selects packets to drop at random; because the selection is random, the router does
not need to work out which sender is causing the most trouble in the network.
 When the sender does not receive an acknowledgment for its sent packets, it
knows that the sent packets experienced congestion and were discarded by the
router. In response, it reduces its data rate, and hence its throughput.
 Routers using the RED mechanism improve network performance because packets
are dropped early, before the buffer is completely full, which signals senders to
slow down in time.
 RED is similar to ECN but is used when senders cannot receive an explicit signal.
If ECN is available on the network, explicitly notifying the sender about congestion
is the preferred option.
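
A minimal sketch of RED's drop decision, assuming an exponentially weighted moving average of the queue length and a drop probability that rises linearly between two thresholds (all constants are illustrative):

```python
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.1, 0.2
avg = 0.0                                          # EWMA of the queue length

def red_should_drop(queue_len: int) -> bool:
    global avg
    avg = (1 - WEIGHT) * avg + WEIGHT * queue_len  # smooth out bursts
    if avg < MIN_TH:
        return False                               # no early drops while short
    if avg >= MAX_TH:
        return True                                # behave like tail drop
    # Between the thresholds, drop with probability rising toward MAX_P.
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

for qlen in [2, 6, 10, 14, 18, 20]:
    print(qlen, red_should_drop(qlen))
```

In practice, the averaging weight and the two thresholds are tuned to the link's speed and buffer size so that drops begin just as sustained congestion sets in.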
