UNIT 2
TRANSPORT LAYER
1. INTRODUCTION
The transport layer is the fourth layer of the OSI model and the core of
the Internet model. It responds to service requests from the session layer and
issues service requests to the network layer. The transport layer provides
transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality
of service needed by the application program.
It is the first true end-to-end layer, implemented in all end systems (ES).
2. TRANSPORT SERVICES
The transport layer is located between the network layer and the application
layer. It is responsible for providing services to the application layer, using
the services it receives from the network layer. The services provided by the
transport layer are:
Process-to-Process Communication
Addressing: Port Numbers
Encapsulation and Decapsulation
Flow Control
Error Control
CS3591 UNIT 2
Congestion Control
Multiplexing and demultiplexing
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate
application process on the host computers. This involves multiplexing of data
from different application processes, i.e., forming data packets, and adding
source and destination port numbers in the header of each Transport Layer data
packet. Together with the source and destination IP address, the port numbers
constitute a network socket, i.e. an identification address of the process-to-
process communication.
Addressing: Port Numbers
Ports are the essential way to address multiple entities at the same
location. Using port addressing, it is possible to use more than one network-
based application at the same time.
Three types of port numbers are used:
Well-known ports - These are permanent port numbers, ranging from 0 to
1023. They are used by server processes.
Registered ports - The ports ranging from 1024 to 49,151 are not assigned
or controlled by IANA; they can only be registered with IANA to prevent
duplication.
Ephemeral ports (dynamic ports) - These are temporary port numbers,
ranging from 49,152 to 65,535. They are used by client processes.
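As a small illustration of these ranges, the sketch below (Python; the exact ephemeral range is OS-dependent) asks the operating system for a temporary port by binding to port 0, the way a client process implicitly does:

```python
import socket

# Binding to port 0 asks the OS to assign a temporary (ephemeral) port,
# as a client process would; a server instead binds to a fixed port number.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
port = client.getsockname()[1]   # the OS-chosen ephemeral port
assert port > 1023               # never inside the well-known range 0-1023
client.close()
```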
Encapsulation and Decapsulation
To send a message from one process to another, the transport-layer
protocol encapsulates and decapsulates messages.
Encapsulation happens at the sender site. The transport layer receives the data
and adds the transport-layer header. Decapsulation happens at the receiver site.
When the message arrives at the destination transport layer, the header is
dropped and the transport layer delivers the message to the process running at
the application layer.
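The two steps can be sketched in Python using the standard 8-byte UDP header layout; the port numbers and payload below are made-up illustrative values:

```python
import struct

def encapsulate(src_port: int, dst_port: int, payload: bytes) -> bytes:
    # Sender site: prepend a transport-layer header (UDP layout:
    # source port, destination port, total length, checksum left as 0).
    header = struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), 0)
    return header + payload

def decapsulate(datagram: bytes):
    # Receiver site: strip the header and deliver the payload to the
    # process identified by the destination port number.
    _src, dst, _length, _csum = struct.unpack("!HHHH", datagram[:8])
    return dst, datagram[8:]

dgram = encapsulate(52000, 53, b"query")
assert decapsulate(dgram) == (53, b"query")
```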
Multiplexing and Demultiplexing
Whenever an entity accepts items from more than one source, this is
referred to as multiplexing (many-to-one). Whenever an entity delivers items
to more than one destination, this is referred to as demultiplexing (one-to-
many). The transport layer at the source performs multiplexing; the transport
layer at the destination performs demultiplexing.
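A toy sketch of demultiplexing; the port-to-process table and process names below are hypothetical:

```python
# The transport layer at the destination uses the destination port number
# to deliver each arriving message to the correct process (one-to-many).
handlers = {53: "dns_process", 80: "web_process"}   # hypothetical table

def demultiplex(dst_port: int) -> str:
    return handlers.get(dst_port, "no such process")

assert demultiplex(80) == "web_process"
assert demultiplex(9999) == "no such process"
```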
USER DATAGRAM PROTOCOL (UDP)
UDP is a connectionless, unreliable transport protocol that adds
process-to-process communication to the service of IP. A UDP packet, called
a user datagram, has a fixed 8-byte header with four fields: source port
number, destination port number, length, and checksum. The source and
destination port numbers identify the sending and receiving processes; each is
16 bits long.
Length
This field denotes the total length of the UDP packet (header plus
data).
The total length of any UDP datagram can be from 8 bytes (a header
with no data) up to 65,535 bytes.
Checksum
UDP computes its checksum over the UDP header, the contents
of the message body, and a pseudo header.
The pseudo header consists of three fields from the IP header
(the protocol number, source IP address, and destination IP
address) plus the UDP length field.
Data
The data field carries the actual payload to be transmitted.
Its size is variable.
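The checksum computation over the pseudo header and segment can be sketched as follows; this is a simplified illustration of the standard Internet ones-complement sum, with made-up addresses and ports:

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    # Sum 16-bit words with end-around carry; pad odd-length data with a 0 byte.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    # Pseudo header: source IP, destination IP, zero byte, protocol 17, UDP length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(segment)))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF

# Build a segment with the checksum field zeroed, fill it in, then verify:
seg = struct.pack("!HHHH", 52000, 53, 13, 0) + b"hello"
c = udp_checksum("10.0.0.1", "10.0.0.2", seg)
filled = seg[:6] + struct.pack("!H", c) + seg[8:]
assert udp_checksum("10.0.0.1", "10.0.0.2", filled) == 0   # checksum verifies
```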
UDP SERVICES
Process-to-Process Communication
UDP provides a process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection
termination.
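A minimal sketch of this connectionless exchange, using Python sockets over loopback; note that no connection is established or torn down, and each datagram is handled independently:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                  # no listen/accept: connectionless

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", server.getsockname())   # each datagram stands alone
data, client_addr = server.recvfrom(1024)
server.sendto(data, client_addr)               # echo the datagram back
reply, _ = client.recvfrom(1024)
assert reply == b"ping"
client.close(); server.close()
```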
Checksum
UDP checksum calculation includes three sections: a pseudo
header, the UDP header, and the data coming from the application
layer.
The pseudo header is the part of the header of the IP packet in
which the user datagram is to be encapsulated, with some fields
filled with 0s.
Congestion Control
Since UDP is a connectionless protocol, it does not provide
congestion control.
UDP assumes that the packets sent are small and sporadic
(occasionally or at irregular intervals) and cannot create
congestion in the network.
This assumption may not be true when UDP is used for
interactive real-time transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number
from the operating system.
Some implementations create both an incoming and an outgoing
queue associated with each process.
Other implementations create only an incoming queue associated
with each process.
TCP SERVICES
TCP is a connection-oriented, reliable transport protocol that
provides the following services.
Full-Duplex Communication:
TCP offers full-duplex service, in which data can flow in both directions at
the same time. Each TCP endpoint has its own sending and receiving buffers,
and segments move in both directions.
Multiplexing and Demultiplexing:
TCP performs multiplexing at the sender and demultiplexing at the
receiver.
Connection-Oriented Service:
TCP is a connection-oriented protocol. A connection needs to be
established for each pair of processes. When a process at site A wants to send
data to and receive data from another process at site B, the following three
phases occur:
The two TCPs establish a logical connection between them.
Data are exchanged in both directions.
The connection is terminated.
Reliable Service: TCP is a reliable transport protocol. It uses an
acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment. Data unit exchanged between TCP
peers are called segments. A TCP segment encapsulates the data received from
the application layer. The TCP segment is encapsulated in an IP datagram,
which in turn is encapsulated in a frame at the data-link layer.
Connection Establishment
While opening a TCP connection, the two nodes (client and server) want to
agree on a set of parameters.
The parameters are the starting sequence numbers that are to be used for their
respective byte streams. Connection establishment in TCP is a three-way
handshake.
Client sends a SYN segment to the server containing its initial sequence
number (Flags = SYN, Sequence Num = x)
Server responds with a segment that acknowledges the client's
segment and specifies its own initial sequence number (Flags = SYN +
ACK, Ack = x + 1, Sequence Num = y).
Client completes the handshake by sending an ACK segment that
acknowledges the server's segment (Flags = ACK, Ack = y + 1).
Connection Termination
Client sends a FIN segment; the FIN segment can include the last chunk of
data. Server responds with a FIN + ACK segment to acknowledge the request
and announce its own closing. Finally, the client sends an ACK segment.
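In socket programming these phases are hidden inside a few calls; the loopback sketch below (a hypothetical echo exchange) triggers the three-way handshake inside connect()/accept() and the FIN exchange inside close():

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()              # connection establishment completes here
    conn.sendall(conn.recv(1024))       # data exchanged over the connection
    conn.close()                        # begins the FIN exchange

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())          # SYN -> SYN + ACK -> ACK
cli.sendall(b"hello")
echoed = cli.recv(1024)
cli.close()                             # FIN / FIN + ACK / ACK termination
t.join()
srv.close()
assert echoed == b"hello"
```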
6. CONGESTION CONTROL
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the
number of packets a network can handle). Congestion control refers to the
mechanisms and techniques that either prevent congestion before it happens
or remove congestion after it has happened, keeping the load below the
capacity. Congestion control mechanisms are divided into two categories:
1. Open loop
2. Closed loop
When the load is less than the network capacity, throughput increases
proportionally with the load. When the load exceeds the capacity, queues
become full, routers discard some packets, and throughput declines sharply.
When too many packets contend for the same link, the queue overflows,
packets get dropped, and the network is congested; the network should
provide a congestion control mechanism to deal with such a situation.
Additive Increase / Multiplicative Decrease (AIMD)
The TCP source initializes its Congestion Window based on the level of
congestion it perceives in the network. The source increases the Congestion
Window when the level of congestion goes down (additive increase: one
packet per RTT) and decreases it when the level of congestion goes up
(multiplicative decrease: halving the window). TCP interprets timeouts as a
sign of congestion and reduces its rate of transmission accordingly.
Slow Start
Slow start is used to increase the Congestion Window exponentially from a
cold start. The source TCP initializes the Congestion Window to one packet
and doubles the number of packets sent every RTT upon successful
transmission. When the ACK for the first packet arrives, TCP adds 1 packet
to the Congestion Window and sends two packets. When these two ACKs
arrive, TCP increments the Congestion Window by 2 packets and sends four
packets, and so on. Instead of sending all permissible packets at once (burst
traffic), packets are sent in a phased manner, i.e., slow start. Initially TCP
has no idea about congestion, so it increases the Congestion Window rapidly
until there is a timeout. On timeout:
CongestionThreshold = CongestionWindow / 2
CongestionWindow = 1
Slow start is repeated until the Congestion Window reaches the Congestion
Threshold; thereafter the window grows by only 1 packet per RTT. The
resulting congestion window trace shows exponential growth up to the
threshold followed by linear growth.
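The trace can be reproduced with a toy packet-counting simulation; the threshold of 8 and the single timeout at round 10 are made-up parameters:

```python
def next_cwnd(cwnd: int, ssthresh: int) -> int:
    # Exponential growth (slow start) below the threshold,
    # additive increase of one packet per RTT above it.
    return cwnd * 2 if cwnd < ssthresh else cwnd + 1

cwnd, ssthresh, trace = 1, 8, []
for rtt in range(12):
    if rtt == 10:                       # timeout: threshold = cwnd/2, cwnd = 1
        ssthresh, cwnd = cwnd // 2, 1
    trace.append(cwnd)
    cwnd = next_cwnd(cwnd, ssthresh)

assert trace == [1, 2, 4, 8, 9, 10, 11, 12, 13, 14, 1, 2]
```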
7. CONGESTION AVOIDANCE
Congestion avoidance mechanisms prevent congestion before it actually
occurs. These mechanisms predict when congestion is about to happen and
reduce the rate at which hosts send data just before packets start being
discarded. (TCP, by contrast, deliberately creates packet losses in order to
determine the available bandwidth of the connection.) With congestion
avoidance, routers help the end nodes by informing them when congestion is
likely to occur. The congestion-avoidance mechanisms are:
DEC bit - Destination Experiencing Congestion Bit
RED - Random Early Detection
DECbit
In the DECbit scheme, each router monitors the load on its outgoing link
and explicitly notifies the end nodes when congestion is about to occur by
setting a congestion bit in the packets that it forwards.
The destination host then copies this congestion bit into the ACK it sends
back to the source. The source counts how many ACKs for the previous
window of packets have the DECbit set. If less than 50% of the ACKs have
the bit set, the source increases its congestion window by 1 packet;
otherwise, it decreases the congestion window to 87.5% of its previous
value. The source thus adjusts its sending rate so as to avoid congestion.
The "increase by 1, decrease by 0.875" rule was based on AIMD for
stabilization. A single congestion bit is added to the packet header, with a
queue length of 1 used as the trigger for setting it: a router sets this bit in a
packet if its average queue length is greater than or equal to 1 at the time
the packet arrives.
The average queue length is measured over a time interval that spans the
last busy cycle plus the last idle cycle plus the current busy cycle,
computed by dividing the area under the queue-length curve by that time
interval.
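The source's decision rule can be sketched as follows, with the ACKs for a window represented as a list of 0/1 congestion bits (a simplification):

```python
def adjust_window(cwnd: float, dec_bits: list) -> float:
    # Fewer than 50% of ACKs congested: additive increase by one packet;
    # otherwise multiplicative decrease to 87.5% of the current window.
    if sum(dec_bits) < len(dec_bits) / 2:
        return cwnd + 1
    return cwnd * 0.875

assert adjust_window(10, [0, 0, 0, 1]) == 11    # 25% congested: grow
assert adjust_window(16, [1, 1, 0, 1]) == 14.0  # 75% congested: shrink
```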
RED - Random Early Detection
The second congestion-avoidance mechanism is called Random Early
Detection (RED). Each router is programmed to monitor its own queue
length, and when it detects that congestion is imminent, it notifies the source
to adjust its congestion window. RED differs from the DECbit scheme in two
ways: in DECbit, explicit notification about congestion is sent to the source,
whereas RED implicitly notifies the source by dropping a few packets; and
DECbit may lead to a tail-drop policy, whereas RED drops packets in a
random manner based on a drop probability. Each arriving packet is dropped
with some drop probability whenever the queue length exceeds some drop
level. This idea is called early random drop.
RED computes the average queue length as a weighted running average:
AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the queue length measured when a
sample is taken.
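The weighted average can be sketched directly from the formula; the weight 0.01 below is an arbitrary illustrative value (real routers typically use small weights such as 0.002):

```python
def update_avg(avg_len: float, sample_len: float, weight: float = 0.002) -> float:
    # AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
    return (1 - weight) * avg_len + weight * sample_len

# A transient burst barely moves the average, which is exactly the point:
avg = 2.0
avg_after_burst = update_avg(avg, 50, weight=0.01)   # 0.99*2 + 0.01*50 = 2.48
assert 2.0 < avg_after_burst < 3.0    # smoothed, not dominated by the spike
```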
STREAM CONTROL TRANSMISSION PROTOCOL (SCTP)
SCTP is a message-oriented, reliable transport protocol that combines
features of UDP and TCP. A connection in SCTP is called an association.
Multistream Service
SCTP allows multiple streams of messages in each association.
If one of the streams is blocked, the other streams can still deliver their
data.
Multihoming
An SCTP association supports multihoming service. The sending and receiving
host can define multiple IP addresses in each end for an association. In this
fault-tolerant approach, when one path fails, another interface can be used for
data delivery without interruption.
An SCTP packet has a mandatory general header and a set of blocks called
chunks.
General Header
Source port
Destination port
Verification tag
Checksum
Chunks
Types of Chunks
An SCTP association may send many packets, a packet may contain
several chunks, and chunks may belong to different streams. SCTP defines two
types of chunks: control chunks and data chunks. A control chunk controls and
maintains the association, whereas a data chunk carries user data.
Data Transfer
The whole purpose of an association is to transfer data between two
ends. After the association is established, bidirectional data transfer
can take place. The client and the server can both send data. SCTP
supports piggybacking. Types of SCTP data transfer:
Multihoming Data Transfer
Multistream Delivery
Association Termination
In SCTP, either of the two parties involved in exchanging data
(client or server) can close the connection. SCTP does not allow a
"half-closed" association. If one end closes the association, the other end
must stop sending new data. If any data are left over in the queue of the
recipient of the termination request, they are sent and the association is
closed. Association termination uses three packets.
Flow Control
Receiver Site:
The receiver has one buffer (queue) and three variables: cumTSN, winSize,
and lastAck. The following is the procedure used by the receiver.
1. When the site receives a data chunk, it stores it at the end of the buffer
(queue) and subtracts the size of the chunk from winSize. The TSN number
of the chunk is stored in the cumTSN variable.
2. When the process reads a chunk, it removes it from the queue and adds the
size of the removed chunk to winSize (recycling).
3. When the receiver decides to send a SACK, it checks the value of lastAck; if
it is less than cumTSN, it sends a SACK with a cumulative TSN number equal
to cumTSN. It also includes the value of winSize as the advertised window
size.
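These receiver rules can be modelled with a toy sketch; the variable names follow the text, while the 1000-byte window and 100-byte chunks are made-up values:

```python
# Receiver-site state: buffer queue, winSize, cumTSN, lastAck.
queue, win_size, cum_tsn, last_ack = [], 1000, 0, 0

def receive_chunk(tsn: int, size: int):
    global win_size, cum_tsn
    queue.append((tsn, size))           # rule 1: store at end of the buffer
    win_size -= size
    cum_tsn = tsn

def read_chunk():
    global win_size
    _tsn, size = queue.pop(0)           # rule 2: the process consumes a chunk
    win_size += size                    # recycle the buffer space

def make_sack():
    global last_ack
    if last_ack < cum_tsn:              # rule 3: only if there is news to report
        last_ack = cum_tsn
        return (cum_tsn, win_size)      # cumulative TSN + advertised window
    return None

receive_chunk(1, 100); receive_chunk(2, 100)
assert make_sack() == (2, 800)          # two 100-byte chunks buffered
read_chunk()
assert win_size == 900                  # space recycled after the read
```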
Sender Site:
The sender has one buffer (queue) and three variables: curTSN, rwnd, and
inTransit, as shown in the following figure. We assume each chunk is 100
bytes long. The buffer holds the chunks produced by the process that either
have been sent or are ready to be sent. The first variable, curTSN, refers to
the next chunk to be sent; all chunks in the queue with a TSN less than this
value have been sent but not acknowledged, i.e., they are outstanding. The
second variable, rwnd, holds the last value advertised by the receiver (in
bytes). The third variable, inTransit, holds the number of bytes in transit,
that is, bytes sent but not yet acknowledged. The following is the procedure
used by the sender.
1. A chunk pointed to by curTSN can be sent if the size of the data is less than
or equal to the quantity rwnd - inTransit. After sending the chunk, the value of
curTSN is incremented by 1 and now points to the next chunk to be sent. The
value of inTransit is incremented by the size of the data in the transmitted
chunk.
2. When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded. The
sender does not have to worry about them anymore. The value of inTransit is
reduced by the total size of the discarded chunks. The value of rwnd is updated
with the value of the advertised window in the SACK.
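The sender rules can be modelled the same way; the 100-byte chunks follow the text's assumption, while the 500-byte advertised window is a made-up value:

```python
rwnd, in_transit, cur_tsn = 500, 0, 1   # sender-site variables from the text
CHUNK = 100

sent = []
while CHUNK <= rwnd - in_transit:       # rule 1: send only if the chunk fits
    sent.append(cur_tsn)
    cur_tsn += 1
    in_transit += CHUNK
assert sent == [1, 2, 3, 4, 5]          # a 500-byte window holds five chunks

# Rule 2: a SACK with cumulative TSN 3 and advertised window 400 arrives.
in_transit -= 3 * CHUNK                 # acknowledged chunks leave the queue
rwnd = 400                              # updated from the SACK
assert CHUNK <= rwnd - in_transit       # sending can resume
```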
9. QUALITY OF SERVICE
Quality of service (QoS) is the use of mechanisms or technologies that work
on a network to control traffic and ensure the performance of critical
applications with limited network capacity. It enables organizations to adjust
their overall network traffic by prioritizing specific high-performance
applications.
QoS is typically applied to networks that carry traffic for resource-intensive
systems. Common services for which it is required include Internet protocol
television (IPTV), online gaming, streaming media, videoconferencing, video
on demand (VOD), and Voice over IP (VoIP).
Using QoS, organizations can optimize the performance of multiple
applications on their network and gain visibility into the bit rate, delay,
jitter, and packet rate of their network. This ensures they can engineer the
traffic on their network and change the way packets are routed to the Internet
or other networks to avoid transmission delay. It also ensures that the
organization achieves the expected service quality for applications and
delivers the expected user experience.
The key goal of QoS is to enable networks and organizations to prioritize
traffic, which includes offering dedicated bandwidth, controlled jitter, and
lower latency. The technologies used to ensure this are vital to enhancing the
performance of business applications, wide-area networks (WANs), and
service provider networks.
QoS networking technology works by marking packets to identify service
types, then configuring routers to create separate virtual queues for each
application based on its priority. As a result, bandwidth is reserved for
critical applications or websites that have been assigned priority access. The
main characteristics that QoS measures and controls are:
Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth.
For example, assigning a certain amount of bandwidth to different queues for
different traffic types.
Delay: The time it takes for a packet to go from its source to its final
destination. This is often affected by queuing delay, which occurs during
times of congestion, when a packet waits in a queue before being transmitted.
QoS enables organizations to avoid this by creating a priority queue for
certain types of traffic.
Loss: The amount of data lost as a result of packet loss, which typically occurs
due to network congestion. QoS enables organizations to decide which packets
to drop in this event.
Token Bucket Algorithm
The leaky bucket algorithm enforces a rigid output pattern at the average
rate, no matter how bursty the traffic is. To deal with bursty traffic we need
a more flexible algorithm, one that does not lose data. One such approach is
the token bucket algorithm.
The token bucket algorithm works as follows:
Tokens are added to the bucket at a constant rate.
The bucket has a fixed capacity; if it is full, newly arriving tokens are
discarded.
When a packet arrives, it is transmitted only if enough tokens are available,
and those tokens are removed from the bucket.
If not enough tokens are available, the packet waits; an idle host can thus
save up tokens and later send a burst.
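A minimal token bucket sketch; the rate and capacity (in tokens per tick) are arbitrary illustrative values:

```python
class TokenBucket:
    def __init__(self, rate: int, capacity: int):
        self.rate, self.capacity, self.tokens = rate, capacity, 0

    def tick(self):
        # Tokens arrive at a constant rate; extras beyond capacity are discarded.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, size: int) -> bool:
        # A packet is sent only if enough tokens are available.
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False                    # otherwise the packet must wait

bucket = TokenBucket(rate=2, capacity=6)
for _ in range(3):
    bucket.tick()                       # idle time saves up tokens (2, 4, 6)
assert bucket.try_send(5)               # so a burst of 5 can go at once
assert not bucket.try_send(2)           # only 1 token left: the packet waits
```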
Differentiated QoS
A meter measures the traffic to make sure that packets do not exceed their
traffic profiles. A marker marks or unmarks packets in order to keep track of
their situations in the DS node. A shaper delays any packet that is not compliant
with the traffic profile. Finally, a dropper discards any packet that violates its
traffic profile.
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the
availability of their business-critical applications. It is vital for delivering
differentiated bandwidth and ensuring data transmission takes place without
interrupting traffic flow or causing packet losses. Major advantages of
deploying QoS include: