UNIT-IV TRANSPORT LAYER
Process to Process Communication, User Datagram Protocol (UDP), Transmission Control Protocol (TCP),
SCTP Congestion Control; Quality of Service (QoS), QoS improving techniques - Leaky Bucket and Token
Bucket algorithms.
TRANSPORT LAYER
The transport layer is the fourth layer in the OSI model; in the TCP/IP model it sits directly
below the application layer. It is termed an end-to-end layer because it provides a logical
point-to-point connection between the source host and the destination host, rather than
hop-to-hop delivery, so that services can be delivered reliably. The unit of data encapsulation
in the transport layer is a segment.
Working of Transport Layer
The transport layer takes services from the Network layer and provides services to
the Application layer.
The transport layer ensures the reliable transmission of data between systems.
At the sender’s side: The transport layer receives the message from the Application layer,
performs segmentation (dividing the message into segments), adds the source and destination
port numbers to each segment’s header, and passes the segments to the Network layer.
At the receiver’s side: The transport layer receives data from the Network layer, reassembles the
segmented data, reads its header, identifies the port number, and forwards the message to the
appropriate port in the Application layer.
Responsibilities of a Transport Layer
The Process to Process Delivery
End-to-End Connection between Hosts
Multiplexing and Demultiplexing
Congestion Control
Data integrity and Error correction
Flow control
1. The Process to Process Delivery
While the Data Link layer uses the MAC address (a 48-bit address stored in the Network
Interface Card of every host machine) to deliver a frame to the correct machine, and the
Network layer uses the IP address to route packets appropriately, the Transport layer uses a
port number to deliver segments of data to the correct process among the multiple processes
running on a particular host. A port number is a 16-bit value used to uniquely identify a
client or server process.
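As a small illustration of port addressing (the function name and use of a UDP socket here are my own, not from the text), the sketch below checks the 16-bit port range and binds a socket so the OS can deliver datagrams for that port to this process:

```python
import socket

# A transport-layer endpoint is identified by an (IP address, port) pair.
# Ports are 16-bit values, so they range from 0 to 65535.
def is_valid_port(port):
    return 0 <= port <= 2**16 - 1

# Binding reserves a port for this process; port 0 asks the OS to pick
# a free one. Any assigned port must fit in 16 bits.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
addr, port = sock.getsockname()
print(is_valid_port(port))   # True
sock.close()
```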
2. End-to-end Connection between Hosts
The transport layer is also responsible for creating the end-to-end connection between hosts, for
which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol that uses a
handshake to establish a robust connection between two end hosts. TCP ensures the reliable
delivery of messages and is used in a wide range of applications. UDP, on the other hand, is a
stateless, unreliable protocol that provides best-effort delivery. It is suitable for applications
that need little flow or error control and must send bulk data, such as video conferencing, and
it is often used with multicasting protocols.
3. Multiplexing and Demultiplexing
Multiplexing (many-to-one) happens on the sender’s side: data from several processes is
accepted, wrapped in segments whose headers carry the port numbers, and passed down to a
single network connection. Multiplexing thus allows several processes running on one host to
use the network simultaneously; the processes are differentiated by their port numbers.
Demultiplexing (one-to-many) is the reverse operation at the receiver: the transport layer
receives segments from the network layer, reads the destination port in each header, and
delivers the data to the appropriate process running on the receiver’s machine.
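A toy sketch of demultiplexing (the port numbers and handler functions are illustrative inventions): the transport layer reads the destination port from each segment and hands the payload to the matching process.

```python
# Map destination port -> the "process" registered to receive that traffic.
# Ports 53 (DNS) and 80 (HTTP) are just illustrative.
handlers = {
    53: lambda data: f"DNS got {data!r}",
    80: lambda data: f"HTTP got {data!r}",
}

def demultiplex(segment):
    dst_port, payload = segment              # (header field, payload)
    handler = handlers.get(dst_port)
    if handler is None:                      # nobody bound to this port
        return "no process listening: drop segment"
    return handler(payload)                  # deliver to the right process

print(demultiplex((80, "GET /")))    # delivered to the HTTP handler
print(demultiplex((9999, "x")))      # no listener: dropped
```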
4. Congestion Control
Congestion is a situation in which too many sources over a network attempt to send data and the
router buffers start overflowing due to which loss of packets occurs. As a result, the
retransmission of packets from the sources increases the congestion further. In this situation, the
Transport layer provides congestion control in different ways. It uses open-loop congestion
control to prevent congestion and closed-loop congestion control to remove congestion once it
has occurred in the network. TCP uses AIMD (additive increase, multiplicative decrease)
together with techniques such as the leaky bucket for congestion control.
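The AIMD rule can be sketched in a few lines (a simplified model, ignoring slow start and timeouts; the parameter values are conventional defaults, not from the text):

```python
def aimd(cwnd, loss, alpha=1.0, beta=0.5):
    """One AIMD update: additive increase by alpha segments per RTT
    when there is no loss, multiplicative decrease by beta on loss."""
    return cwnd * beta if loss else cwnd + alpha

# Sawtooth behaviour: the window grows linearly, then halves on loss.
cwnd = 1.0
trace = []
for rtt in range(6):
    loss = (rtt == 4)            # assume a single loss event at RTT 4
    cwnd = aimd(cwnd, loss)
    trace.append(cwnd)
print(trace)   # [2.0, 3.0, 4.0, 5.0, 2.5, 3.5]
```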
5. Data integrity and Error Correction
The transport layer checks for errors in the messages coming from the application layer by using
error-detection codes such as checksums. It verifies whether the received data is corrupted, and
it uses ACK and NACK services to inform the sender whether the data has arrived intact,
thereby safeguarding data integrity.
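A minimal sketch of the Internet checksum idea (the 16-bit one’s-complement sum used in TCP and UDP headers; this is a simplified rendition of the RFC 1071 algorithm, not production code):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"                     # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

msg = b"data"
cks = internet_checksum(msg)
# The receiver recomputes the checksum over the data plus the received
# checksum field; a result of 0 means no error was detected.
print(internet_checksum(msg + cks.to_bytes(2, "big")) == 0)   # True
```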
6. Flow Control
The transport layer provides an end-to-end flow control mechanism between the sending and
receiving hosts. TCP prevents data loss due to a fast sender and a slow receiver by imposing
flow control through the sliding window protocol: the receiver sends a window advertisement
back to the sender informing it of the amount of data it can receive.
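The sliding-window idea can be illustrated with a small calculation (function name and numbers are my own): the sender may have at most the advertised window’s worth of unacknowledged bytes in flight.

```python
def sendable(next_seq, last_acked, rwnd):
    """Bytes the sender may still put on the wire: the advertised
    receive window minus what is already in flight (unacknowledged)."""
    in_flight = next_seq - last_acked
    return max(0, rwnd - in_flight)

# Receiver advertises a 4000-byte window; 2500 bytes are unacked.
print(sendable(next_seq=12500, last_acked=10000, rwnd=4000))  # 1500
# Window shrinks to 2000 while 2500 bytes are in flight: sender stalls.
print(sendable(next_seq=12500, last_acked=10000, rwnd=2000))  # 0
```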
TCP (Transmission Control Protocol)
TCP is a connection-oriented protocol that ensures reliable and ordered data delivery between
applications. It establishes a reliable, error-free communication channel through various
mechanisms, such as acknowledgment of data receipt, retransmission of lost packets, and flow
control. TCP guarantees data integrity but sacrifices speed and efficiency in the process. It is
commonly used for applications that require the reliable delivery of data, such as web
browsing, email transfer, and file transfer protocols (FTP).
Key Features of TCP
1. Reliability: TCP guarantees that all transmitted data is received by the destination and in the
correct order.
2. Flow Control: TCP regulates the data flow between sender and receiver, preventing overload
and congestion.
3. Congestion Control: TCP adjusts the transmission rate based on network conditions to avoid
network congestion.
4. Error Checking: TCP implements error detection and retransmission mechanisms to ensure
data integrity.
UDP (User Datagram Protocol)
Unlike TCP, UDP is a connectionless protocol that focuses on speed and low overhead rather
than reliability. It operates on a “best-effort” basis, meaning it does not guarantee data delivery,
ordering, or error recovery. UDP is ideal for applications that require fast transmission of data but
can tolerate occasional packet loss, such as real-time communication, video streaming, online
gaming, and DNS (Domain Name System) resolution.
Key Features of UDP
1. Speed: UDP is faster than TCP as it omits the overhead associated with reliability
mechanisms.
2. Low Overhead: UDP has a minimal header size, making it lightweight and efficient for
transmitting small amounts of data.
3. Broadcast and Multicast Support: UDP allows for the broadcasting of data to multiple
recipients simultaneously.
4. Real-Time Applications: UDP is commonly used in applications that require real-time data
delivery, such as VoIP (Voice over Internet Protocol) and video conferencing.
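A minimal connectionless exchange over the loopback interface illustrates UDP’s fire-and-forget style (delivery here is dependable only because both sockets are on the same machine):

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5)                           # don't block forever

# Sender: no handshake, no acknowledgement - just send a datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"ping", recv.getsockname())     # fire and forget

data, addr = recv.recvfrom(1024)             # receive the datagram
print(data)                                  # b'ping'
send.close()
recv.close()
```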
SCTP (Stream Control Transmission Protocol)
SCTP is a relatively newer transport layer protocol that combines the advantages of both TCP and
UDP. It offers the reliability of TCP while supporting message-oriented and real-time data
transmission like UDP. SCTP is primarily designed for applications that demand high reliability,
ordered data delivery, and congestion control while allowing multi-streaming and multi-homing
capabilities. It is often used in telecommunications, voice and video over IP, and signaling
transport in telecommunication networks.
Key Features of SCTP
1. Message-Oriented Delivery: SCTP enables the transmission of individual messages,
maintaining message boundaries during data exchange.
2. Multi-streaming: SCTP allows the simultaneous transmission of multiple streams of data
within a single connection.
3. Multi-homing: SCTP supports multiple IP addresses for a single endpoint, enhancing fault
tolerance and network resilience.
4. Congestion Control: SCTP implements congestion control mechanisms, similar to TCP, to
optimize network performance.
TCP vs UDP vs SCTP
| Protocol | TCP (Transmission Control Protocol) | UDP (User Datagram Protocol) | SCTP (Stream Control Transmission Protocol) |
|---|---|---|---|
| Reliability | Reliable data delivery with error detection, retransmission, and acknowledgement mechanisms | Unreliable data delivery without error recovery or acknowledgement | Reliable data delivery with error detection, retransmission, and acknowledgement mechanisms |
| Connection Type | Connection-oriented | Connectionless | Connection-oriented |
| Ordering | Guarantees ordered delivery of data packets | Does not guarantee the ordered delivery of data packets | Guarantees ordered delivery of data packets |
| Speed | Slower due to reliability mechanisms | Faster due to minimal overhead | Comparable to TCP; slower than UDP due to additional functionality |
| Overhead | Higher overhead due to additional headers and control mechanisms | Lower overhead due to minimal headers and control mechanisms | Moderate overhead due to additional headers and control mechanisms |
| Applications | Web browsing, email transfer, file transfer (FTP) | Real-time communication, video streaming, online gaming, DNS | Telecommunications, voice and video over IP, signalling transport |
| Congestion Control | Implements congestion control mechanisms to optimize network performance | No congestion control mechanisms | Implements congestion control mechanisms to optimize network performance |
| Error Recovery | Detects and retransmits lost or corrupted packets | No error recovery mechanisms | Detects and retransmits lost or corrupted packets |
| Message-Oriented Delivery | No | No | Yes, supports message-oriented delivery |
| Multi-streaming | No | No | Yes, supports the simultaneous transmission of multiple streams |
| Multi-homing | No | No | Yes, supports multiple IP addresses for fault tolerance and resilience |
The actual functionalities and capabilities may vary depending on the implementation and
specific protocol versions.TCP, UDP, and SCTP are essential protocols that serve distinct
purposes in the realm of computer networking. TCP prioritizes reliability and ordered data
delivery, making it suitable for applications that require error-free transmissions, such as web
browsing and file transfer. UDP, on the other hand, focuses on speed and low overhead, making it
ideal for real-time communication and multimedia streaming. SCTP strikes a balance between the
two, combining reliability, message-oriented delivery, and multi-streaming capabilities for
applications in telecommunications and signalling transport.
When choosing between TCP, UDP, and SCTP, it is crucial to consider the specific requirements
of the application at hand. By understanding the strengths and weaknesses of each protocol,
network engineers and developers can make informed decisions to optimize data transmission for
their intended use cases.
Quality of Service (QoS) in networks:
A stream of packets from a source to a destination is called a flow. Quality of Service is defined
as what a flow seeks to attain. In a connection-oriented network, all the packets belonging to a
flow follow the same route; in a connectionless network, they may follow different routes.
The needs of each flow can be characterized by four primary parameters:
Reliability: lack of reliability means a packet or acknowledgement is lost, which forces
retransmission.
Delay: increased delay means the destination receives the packet later than expected; the
importance of delay varies with the application.
Jitter: jitter is the variation in delay. If the delay does not stay at a constant rate, the result
may be poor quality.
Bandwidth: more bandwidth means more data can be transferred in a given amount of time;
the importance of bandwidth also varies with the application.
Application Reliability Delay Jitter Bandwidth
E-mail High Low Low Low
File transfer High Low Low Medium
Web access High Medium Low Medium
Remote login High Medium Medium Low
Audio on demand Low Low High Medium
Video on demand Low Low High High
Telephony Low High High Low
Videoconferencing Low High High High
Techniques for achieving good Quality of Service:
1. Overprovisioning –
The logic of overprovisioning is to provide greater router capacity, buffer space and
bandwidth than the traffic strictly needs. It is an expensive technique because these
resources are costly, e.g., the telephone system.
2. Buffering –
Flows can be buffered on the receiving side before being delivered at uniform intervals.
Buffering does not affect reliability or bandwidth, but it helps to smooth out jitter.
3. Traffic Shaping –
Traffic shaping regulates the average rate of data transmission. It smooths the traffic on the
server side rather than the client side. When a connection is set up, the user machine and the
subnet agree on a certain traffic pattern for that circuit, called a Service Level Agreement.
Traffic shaping reduces congestion and thus helps the carrier deliver the packets in the
agreed pattern.
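The buffering technique (2 above) can be sketched as a playout buffer: packets arriving at irregular times are held back and played at uniform intervals. The times below are illustrative milliseconds, not from the text.

```python
# Packet i arrives at arrivals[i] ms; arrivals are jittery.
arrivals = [0, 18, 45, 62, 95]
period, playout_delay = 25, 50      # nominal spacing and buffer delay

# Playback starts after the playout delay and then proceeds at a
# perfectly constant interval, hiding the arrival-time variation.
play_times = [playout_delay + i * period for i in range(len(arrivals))]
late = [a > p for a, p in zip(arrivals, play_times)]
print(play_times)   # [50, 75, 100, 125, 150] - perfectly regular
print(any(late))    # False - every packet arrived before its deadline
```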
Leaky bucket algorithm
In computer networks, congestion occurs when data traffic exceeds the available bandwidth and
leads to packet loss, delays, and reduced performance. Traffic shaping can prevent and reduce
congestion in a network. It is a technique used to regulate data flow by controlling the rate at
which packets are sent into the network. There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket into which we pour water at random points in time, but we need
water to flow out at a fixed rate. To achieve this, we make a hole at the bottom of the bucket,
which ensures that the water comes out at a constant rate. If the bucket becomes full, we stop
pouring water into it.
The input rate can vary but the output rate remains constant. Similarly, in networking, a technique
called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate.
Assume the network has committed a bandwidth of 3 Mbps for a host. The leaky bucket shapes
the input traffic to make it conform to this commitment. Suppose the host sends a burst of data
at a rate of 12 Mbps for 2 s, for a total of 24 Mbits; it is then silent for 5 s, and finally sends
data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits. In all, the host has sent 30 Mbits of data
in 10 s. The leaky bucket smooths out the traffic by sending the data at a steady 3 Mbps over
the same 10 s. Without the leaky bucket, the initial burst could have hurt the network by
consuming more bandwidth than was set aside for this host; in this way, the leaky bucket can
also prevent congestion.
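The arithmetic of this example can be checked directly:

```python
# A 12 Mbps burst for 2 s plus a 2 Mbps trickle for 3 s, observed over
# a 10 s window, averages out to the committed 3 Mbps.
bursts = [(12, 2), (2, 3)]                 # (rate in Mbps, duration in s)
total_mbits = sum(rate * dur for rate, dur in bursts)
window = 10                                # seconds of observation
print(total_mbits)            # 30 - total data produced by the host
print(total_mbits / window)   # 3.0 - average rate the bucket emits
```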
Working of Leaky Bucket Algorithm:
A simple leaky bucket algorithm can be implemented using a FIFO queue that holds the
packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists
of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. At each tick of the clock, initialize a counter to n.
2. While n is not smaller than the size of the packet at the head of the queue:
   - pop the packet P from the head of the queue;
   - send the packet P into the network;
   - decrement the counter by the size of P.
3. Reset the counter and go to step 1 at the next tick.
Note: In the below examples, the head of the queue is the rightmost position and the tail of the
queue is the leftmost position.
Example: Let n = 1000 and let the queue hold packets of sizes 200, 400 and 450 (from head to
tail).
Since n > size of the packet at the head of the Queue, i.e. n > 200
Therefore, n = 1000-200 = 800
Packet size of 200 is sent into the network.
Now, again n > size of the packet at the head of the Queue, i.e. n > 400
Therefore, n = 800-400 = 400
Packet size of 400 is sent into the network.
Since, n < size of the packet at the head of the Queue, i.e. n < 450
Therefore, the procedure is stopped.
Initialize n = 1000 on another tick of the clock.
This procedure is repeated until all the packets are sent into the network.
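The variable-length algorithm above can be sketched in Python (a minimal illustration; function and variable names are my own), replaying the worked example with n = 1000 and packets of 200, 400 and 450:

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-length leaky bucket: send packets
    from the head of the queue while the byte budget n still covers the
    packet at the head. Returns the list of packet sizes sent."""
    sent = []
    while queue and queue[0] <= n:
        size = queue.popleft()   # pop the packet at the head
        sent.append(size)        # "send" it into the network
        n -= size                # decrement the counter by its size
    return sent

q = deque([200, 400, 450])               # head of the queue first
print(leaky_bucket_tick(q, 1000))        # [200, 400] - 450 > remaining 400
print(leaky_bucket_tick(q, 1000))        # [450] goes out on the next tick
```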
Difference between Leaky and Token buckets
| Leaky Bucket | Token Bucket |
|---|---|
| When the host has to send a packet, the packet is thrown into the bucket. | The bucket holds tokens generated at regular intervals of time. |
| The bucket leaks at a constant rate. | The bucket has a maximum capacity. |
| Bursty traffic is converted into uniform traffic by the leaky bucket. | If there is a ready packet, a token is removed from the bucket and the packet is sent. |
| In practice, the bucket is a finite queue that outputs at a finite rate. | If there is no token in the bucket, the packet cannot be sent. |
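For contrast, a token bucket can be sketched like this (a minimal model with invented names; each tick adds `rate` tokens up to `capacity`, and a packet is sent only if enough tokens exist):

```python
class TokenBucket:
    """Tokens accumulate at `rate` per tick, capped at `capacity`;
    sending a packet of a given size spends that many tokens."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = 0

    def tick(self):
        # Token generation: accumulate, but never exceed the capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, size):
        if size <= self.tokens:     # enough tokens: spend them and send
            self.tokens -= size
            return True
        return False                # no tokens: the packet must wait

tb = TokenBucket(rate=100, capacity=300)
for _ in range(4):
    tb.tick()                 # tokens cap at 300, not 400
print(tb.try_send(250))       # True  - a burst up to the bucket size
print(tb.try_send(100))       # False - only 50 tokens remain
```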
Advantage of Leaky Bucket over Token bucket
Tokens may be wasted: in Token Bucket, tokens are generated at a fixed rate even if there is
no traffic on the network. This means that if no packets are sent, tokens accumulate in the
bucket and the reserved capacity goes unused. In contrast, the leaky bucket produces output
only when there is traffic queued, which helps to conserve resources.
Delay in packet delivery: Token Bucket may introduce delay in packet delivery due to the
accumulation of tokens. If the token bucket is empty, packets may need to wait for the arrival
of new tokens, which can lead to increased latency and packet loss.
Lack of flexibility: Token Bucket is less flexible compared to leaky bucket in terms of
shaping network traffic. This is because the token generation rate is fixed, and cannot be
changed easily to meet the changing needs of the network. In contrast, leaky bucket can be
adjusted more easily to adapt to changes in network traffic.
Complexity: Token Bucket can be more complex to implement compared to leaky bucket,
especially when different token generation rates are used for different types of traffic. This
can make it more difficult for network administrators to configure and manage the network.
Inefficient use of bandwidth: In some cases, Token Bucket may lead to inefficient use of
bandwidth. This is because Token Bucket allows for large bursts of data to be sent at once,
which can cause congestion and lead to packet loss. In contrast, leaky bucket helps to prevent
congestion by limiting the amount of data that can be sent at any given time.
Questions
1. What are TCP, UDP, and SCTP, and how do they differ?
TCP (Transmission Control Protocol), UDP (User Datagram Protocol), and SCTP (Stream
Control Transmission Protocol) are transport layer protocols used in computer networks. TCP
provides reliable, connection-oriented communication, UDP offers connectionless and unreliable
communication, while SCTP combines features of both TCP and UDP with added functionalities
like multistreaming and multihoming.
2. What are the main differences between TCP and UDP?
TCP provides reliable, ordered, and error-checked data delivery with features like
acknowledgement, retransmission, and congestion control. UDP, on the other hand, offers a
simple and faster connectionless communication method without error-checking or flow control.
3. In which scenarios is TCP preferred over UDP in network applications?
TCP is preferred in scenarios where data integrity, reliability, and ordered delivery are crucial,
such as file transfer, email communication, and web browsing. These applications require error-
free and sequential data transmission.
4. When would it be appropriate to use UDP instead of TCP?
UDP is appropriate when real-time communication is desired, and occasional packet loss or out-
of-order delivery is tolerable. Applications like live video streaming, online gaming, and DNS
(Domain Name System) typically use UDP for its low latency and reduced overhead.
5. How does SCTP differ from TCP and UDP, and where is it commonly used?
SCTP offers features like multistreaming, multihoming, and improved congestion control,
making it suitable for real-time communication, Voice over IP (VoIP), and telephony systems.
SCTP provides reliable and ordered data transmission with added resilience against network
failures compared to TCP and UDP.
6. What is QoS and how does it work?
Quality of Service (QoS) is a network management technique that prioritizes certain types of
traffic to ensure reliable performance, especially for latency-sensitive applications like VoIP,
video streaming, and gaming.
7. What is congestion control in TCP?
Congestion control in TCP is a mechanism that prevents network congestion by adjusting the data
transmission rate based on network conditions.
8. What are the parameters of leaky bucket?
The Leaky Bucket Algorithm has the following key parameters:
1. Bucket Size (B) – The maximum capacity of the bucket (queue), defining how much data can
be stored before overflow occurs.
2. Leak Rate (R) – The fixed rate at which packets (or tokens) are drained from the bucket,
controlling the output flow.
3. Incoming Traffic (I) – The rate at which data packets arrive into the bucket, which can be
bursty or steady.
4. Time Interval (T) – The duration over which the leak rate is applied, ensuring a consistent
outflow of packets.
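These parameters can be tied together in a small sketch (my own formulation, not from the text): over one interval T, the backlog grows by I·T, drains by R·T, and anything beyond B overflows.

```python
def leaky_bucket_step(stored, I, R, T, B):
    """One interval T with the parameters above: incoming rate I and
    leak rate R in packets/s, bucket size B in packets. Returns the new
    backlog and how many packets overflowed (were dropped)."""
    level = stored + I * T - R * T          # net change over the interval
    dropped = max(0, level - B)             # overflow beyond the bucket
    return min(max(level, 0), B), dropped

# Bucket of B=20 packets, leaking R=5 pkt/s, hit by I=15 pkt/s for T=2 s.
print(leaky_bucket_step(stored=0, I=15, R=5, T=2, B=20))   # (20, 0)
# A second identical interval overflows the bucket: 20 packets dropped.
print(leaky_bucket_step(stored=20, I=15, R=5, T=2, B=20))  # (20, 20)
```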