UNIT IV-Transport Layer

UDP is a connectionless transport protocol that does not ensure reliable or ordered delivery of packets. It uses port numbers to identify sending and receiving processes. UDP packets contain a header with source and destination port numbers, length, and checksum. TCP is a connection-oriented protocol that provides reliable, ordered delivery of a byte stream. It uses a three-way handshake for connection establishment and termination. TCP segments contain a header with source and destination port numbers, sequence numbers, acknowledgments, flags, window size, checksum, and other fields.

Uploaded by Senthilkumar S

UNIT-IV

Illustrate and explain UDP and its packet format.


• User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
• It does not add anything to the services of IP except process-to-process communication.
• UDP is a simple multiplexer/demultiplexer that allows multiple processes on each host to share the network.
• UDP does not implement flow control or reliable/ordered delivery.
• UDP ensures correctness of the message by the use of a checksum.
• If a process wants to send a small message and does not require reliability, UDP is used.

Port Number
• Each process is assigned a unique 16-bit port number on that host.
• Processes are identified by a (host, port) pair.
• Processes can be classified as either client or server.
o Client process usually initiates exchange of information with the server.
o Server process is identified by a well-known port number (0–1023).
o Client process is assigned an ephemeral port number (49152–65535) by the OS.
o Some well-known UDP ports are 53 (DNS), 69 (TFTP), 161 (SNMP), and 520 (RIP).

• Ports are usually implemented as a message queue.


o When a message arrives, UDP appends the message to the end of the queue.
o When the queue is full, the message is discarded.
o When a message is read, it is removed from the queue.
o When the queue is empty, the process is blocked.
UDP Header

• UDP packets, called user datagrams, have a fixed-size header of 8 bytes.

• SrcPort and DstPort—Contain the port numbers of the sender (source) and receiver (destination) of the message.
• Length—This 16-bit field defines the total length of the user datagram, header plus data. The total length is less than 65,535 bytes, as the datagram is encapsulated in an IP datagram:
UDP length = IP length − IP header's length
• Checksum—Computed over the pseudoheader, UDP header, and message content to ensure that the message is correctly delivered to the exact recipient.
o The pseudoheader consists of three fields from the IP header (protocol number, i.e., 17, source and destination IP addresses), plus the UDP length field.
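As an illustrative sketch (not part of the notes), the fixed 8-byte UDP header can be packed and unpacked with Python's struct module; the port numbers and payload size below are arbitrary examples:

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the fixed 8-byte UDP header: SrcPort, DstPort, Length, Checksum.
    Length counts the 8 header bytes plus the data, per the field description."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

def parse_udp_header(raw):
    """Unpack the four 16-bit header fields (network byte order)."""
    src, dst, length, csum = struct.unpack("!HHHH", raw[:8])
    return {"SrcPort": src, "DstPort": dst, "Length": length, "Checksum": csum}

hdr = build_udp_header(49200, 53, 32)   # e.g., an ephemeral port sending to DNS
print(parse_udp_header(hdr))            # Length is 40: 8-byte header + 32 data bytes
```

Note that a real implementation would also compute the checksum over the pseudoheader; here it is left as 0 for brevity.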

Bring out the classification of port numbers.


• Well-known ports range from 0 to 1023 and are assigned and controlled by IANA.
• Registered ports range from 1024 to 49,151 and are not assigned or controlled by IANA. They can only be registered with IANA to prevent duplication.
• Ephemeral (dynamic) ports range from 49,152 to 65,535 and are neither controlled nor registered. An ephemeral port is usually assigned to a client process by the operating system.

Distinguish between network and transport layer

List some applications of UDP.


• UDP is used for management processes such as SNMP.
• UDP is used for some route updating protocols such as RIP.
• UDP is a suitable transport protocol for multicasting.
• UDP is suitable for a process with internal flow and error control mechanisms, such as the Trivial File Transfer Protocol (TFTP).

With a neat architecture, explain TCP in detail.


• Transmission Control Protocol (TCP) offers a reliable, connection-oriented, byte-stream service.
• TCP guarantees the reliable, in-order delivery of a stream of bytes.
• It is a full-duplex protocol.
• TCP supports a demultiplexing mechanism for process-to-process communication.
• TCP has a built-in congestion-control mechanism, i.e., the sender is prevented from overloading the network.

Process-to-Process Communication
• Like UDP, TCP provides process-to-process communication. A TCP connection is identified by a 4-tuple (SrcPort, SrcIPAddr, DstPort, DstIPAddr).
• Some well-known port numbers used by TCP are 21 (FTP), 23 (Telnet), 25 (SMTP), and 80 (HTTP).
Segment Format

• TCP is a byte-oriented protocol, i.e., the sender writes bytes into a TCP connection and the receiver reads bytes out of the TCP connection.
• TCP groups a number of bytes together into a packet called a segment and adds a header onto each segment. The segment is encapsulated in an IP datagram and transmitted.
• SrcPort and DstPort fields identify the source and destination ports.
• SequenceNum field contains the sequence number of the first byte of data in that segment.
• Acknowledgment defines the byte number the receiver expects next.
• HdrLen field specifies the number of 4-byte words in the TCP header.
• Flags field contains six control bits or flags. They are set to indicate:
o URG—indicates that the segment contains urgent data.
o ACK—the value of the acknowledgment field is valid.
o PSH—indicates the sender has invoked the push operation.
o RESET—signifies that the receiver wants to abort the connection.
o SYN—synchronize sequence numbers during connection establishment.
o FIN—terminates the connection.
• AdvertisedWindow field defines the receiver window and acts as flow control.
• Checksum field is computed over the TCP header, the TCP data, and the pseudoheader.
• UrgPtr field indicates where the non-urgent data contained in the segment begins.
• Optional information (max. 40 bytes) can be contained in the header.
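The fixed 20-byte part of the header described above can be decoded with a short sketch (illustrative, not from the notes; the flag bit positions follow the standard TCP header layout):

```python
import struct

# Flag bits, least significant first: bit 0 = FIN ... bit 5 = URG
FLAGS = ["FIN", "SYN", "RESET", "PSH", "ACK", "URG"]

def parse_tcp_header(seg):
    """Unpack the 20-byte fixed TCP header into the fields named above."""
    src, dst, seq, ack, off_flags, win, csum, urg = struct.unpack("!HHIIHHHH", seg[:20])
    hdr_len = (off_flags >> 12) * 4       # HdrLen counts 4-byte words
    flags = [name for i, name in enumerate(FLAGS) if off_flags & (1 << i)]
    return {"SrcPort": src, "DstPort": dst, "SequenceNum": seq,
            "Acknowledgment": ack, "HdrLen": hdr_len, "Flags": flags,
            "AdvertisedWindow": win, "Checksum": csum, "UrgPtr": urg}

# A SYN segment: HdrLen = 20 bytes (5 words), SYN bit set, window 65535
syn = struct.pack("!HHIIHHHH", 40000, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn)["Flags"])     # ['SYN']
```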
Connection Establishment
The connection establishment in TCP is called three-way handshaking as shown below:
1. The client (active participant) sends a segment to the server (passive participant) stating the initial sequence number it plans to use (Flags = SYN, SequenceNum = x).
2. The server responds with a single segment that both acknowledges the client's sequence number and states its own beginning sequence number (Flags = SYN + ACK, Ack = x + 1, SequenceNum = y).
3. Finally, the client responds with a segment that acknowledges the server's sequence number (Flags = ACK, Ack = y + 1).

State Transition Diagram


• The states involved in opening and closing a connection are shown above and below the ESTABLISHED state, respectively.
• The operation of the sliding window (i.e., retransmission) is not shown.
• The two events that trigger a state transition are:
o a segment arrives from the peer.
o the local application process invokes an operation on TCP.
• TCP's state transition diagram defines the semantics of both its peer-to-peer interface and its service interface.
The state transition involved in opening a connection is as follows:
1. The server first invokes a passive open on TCP, which causes TCP to move to LISTEN state
2. Later, the client does an active open, which causes its end of the connection to send a SYN
segment to the server and to move to the SYN_SENT state.
3. When the SYN segment arrives at the server, it moves to SYN_RCVD state and responds with
a SYN + ACK segment.
4. The arrival of this segment causes the client to move to the ESTABLISHED state and to send
an ACK back to the server.
5. When this ACK arrives, the server finally moves to the ESTABLISHED state.
a. Even if the client's ACK gets lost, the server will move to the ESTABLISHED state when the first data segment from the client arrives.
In TCP, the application process on both sides of the connection can independently close its half
of the connection. The combinations of transitions from the ESTABLISHED state to CLOSED
state are:
• ESTABLISHED → FIN_WAIT_1 → FIN_WAIT_2 → TIME_WAIT → CLOSED (this side closes first)
• ESTABLISHED → CLOSE_WAIT → LAST_ACK → CLOSED (other side closes first)
• ESTABLISHED → FIN_WAIT_1 → CLOSING → TIME_WAIT → CLOSED (both sides close simultaneously)

Connection Termination
Three-way Handshaking—Most implementations follow three-way handshaking as shown.
1. The client TCP, after receiving a Close command from the client process, sends a FIN segment. A FIN segment can include the last chunk of data.
2. The server TCP responds with a FIN + ACK segment to announce its own closing.
3. The client TCP finally sends an ACK segment.
Four-way Half-Close—In TCP, one end can stop sending data while still receiving data, known as half-close. For instance, a client can submit its data to the server for processing and half-close its connection; at a later time, the client receives the processed data from the server.
1. The client TCP half-closes the connection by sending a FIN segment.
2. The server TCP accepts the half-close by sending the ACK segment. The data transfer
from the client to the server stops.
3. The server can send data to the client and acknowledgement can come from the client.
4. When the server has sent all the processed data, it sends a FIN segment to the client.
5. The FIN segment is acknowledged by the client.
Write short notes on urgent data in TCP?
• TCP is a stream-oriented protocol, i.e., each byte of data has a position in the stream.
• At times an application may need to send urgent data, i.e., the sending process wants a piece of data to be read out of order by the receiving process. For example, aborting a process with the Ctrl + C keystroke.
• This scenario is handled by setting the URG bit.
• The sending TCP inserts the urgent data at the beginning of the segment.
• The urgent pointer field in the header defines where the normal data starts.
• When the receiving TCP receives a segment with the URG bit set, it delivers the urgent data out of order to the receiving application.

What is push operation in TCP?


• The receiving TCP buffers data as they arrive and delivers them to the application program when ready or when it is convenient for the receiving process.
• In interactive applications, delayed delivery of data is not acceptable.
• TCP handles this as follows:
o The application program at the sending site can request a push operation.
o This instructs the sending TCP not to wait for the window to be filled. It must create a segment and send it immediately.
o The sending TCP also sets the push bit (PSH) to let the receiving TCP know that the segment includes data that must be delivered to the receiving application program as soon as possible, without waiting for more data to come.

Explain TCPs adaptive control and its uses.


• TCP uses a variant of sliding window known as adaptive flow control that:
o guarantees the reliable delivery of data in an ordered manner
o enforces flow control at the sender
• The receiver advertises a window size to the sender using the AdvertisedWindow field in the TCP header.
• The sender cannot have more unacknowledged data in flight than the value of AdvertisedWindow.

Reliable and Ordered Delivery


• TCP on the sending side maintains a send buffer that is divided into three regions: acknowledged data, unacknowledged data, and data not yet transmitted.
• Similarly, TCP on the receiving side maintains a receive buffer to hold data even if it arrives out of order.
• The send buffer maintains three pointers, LastByteAcked, LastByteSent, and LastByteWritten, as shown above. The relations between them are:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
• The bytes to the left of LastByteAcked are not kept, as they have already been acknowledged.
• The receive buffer maintains three pointers, LastByteRead, NextByteExpected, and LastByteRcvd. The relations between them are:
LastByteRead < NextByteExpected ≤ LastByteRcvd + 1
• If data is received in order, NextByteExpected is the next byte after LastByteRcvd.
• Bytes to the left of LastByteRead are not buffered, as they have already been read by the application.

Flow Control
• The capacities of the send and receive buffers are MaxSendBuffer and MaxRcvBuffer, respectively.
• The sending TCP prevents overflowing its buffer by maintaining
LastByteWritten − LastByteAcked ≤ MaxSendBuffer
• The receiving TCP avoids overflowing its receive buffer by maintaining
LastByteRcvd − LastByteRead ≤ MaxRcvBuffer
• The receiver throttles the sender by advertising a window that is no larger than the amount of free space in its buffer:
AdvertisedWindow = MaxRcvBuffer − ((NextByteExpected − 1) − LastByteRead)
• When data arrives, the receiver acknowledges it as long as all preceding bytes have arrived.
o LastByteRcvd moves to the right (is incremented), and the advertised window shrinks.
• The advertised window expands when the data is read by the application.
o If data is read as fast as it arrives, then AdvertisedWindow = MaxRcvBuffer.
o If it is read slowly, the advertised window eventually shrinks to 0.
• The sending TCP adheres to the advertised window by computing the effective window, which limits how much data it can send:
EffectiveWindow = AdvertisedWindow − (LastByteSent − LastByteAcked)
• When an acknowledgement arrives for x bytes, LastByteAcked is incremented by x and the buffer space is freed accordingly.
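The two window formulas above can be sketched directly in code (an illustrative example; the buffer size and byte positions are arbitrary):

```python
def advertised_window(max_rcv_buffer, next_byte_expected, last_byte_read):
    """Receiver's advertised window: free buffer space, per the relation above."""
    return max_rcv_buffer - ((next_byte_expected - 1) - last_byte_read)

def effective_window(advertised, last_byte_sent, last_byte_acked):
    """How much new data the sender may still put in flight."""
    return advertised - (last_byte_sent - last_byte_acked)

# Receiver with a 16 KB buffer holding 4 KB received but not yet read
aw = advertised_window(16384, next_byte_expected=4097, last_byte_read=0)
print(aw)                                                       # 12288
# Sender has already sent 2 KB beyond what is acknowledged
print(effective_window(aw, last_byte_sent=6144, last_byte_acked=4096))  # 10240
```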

Fast Sender vs. Slow Receiver


• A slow receiver avoids being swamped with data from a fast sender by using the AdvertisedWindow field.
• Initially the fast sender transmits at a higher rate.
• The receiver's buffer fills up; hence AdvertisedWindow shrinks, eventually to 0.
• When the receiver advertises a window of size 0, the sender cannot transmit any further data. Therefore, the TCP at the sender blocks the sending process.
• When the receiving process reads some data, those bytes are acknowledged and the AdvertisedWindow expands.
• LastByteAcked is incremented and buffer space is freed to that extent.
• The sending process becomes unblocked and is allowed to fill up the free space.
Checking AdvertisedWindow status
• TCP always sends a segment in response that contains the latest values of the Acknowledgment and AdvertisedWindow fields, even if these values have not changed.
• Thus the sender can learn the status of AdvertisedWindow even after the receiver advertises a window of size 0.

AdvertisedWindow
• TCP's AdvertisedWindow field is 16 bits long, half the size of SequenceNum.
• The 32-bit SequenceNum must be large enough that sequence numbers do not wrap around while earlier segments may still be in the network.
• The AdvertisedWindow must be large enough to allow the sender to keep the pipe full.
• The window field must therefore accommodate the delay × bandwidth product. 16 bits is not big enough in the case of a T3 connection, but this is taken care of by TCP extension (window-scaling) options.

What is adaptive retransmission? Explain the algorithms used?


• TCP guarantees reliability through retransmission.
o Retransmission occurs on timeout before an ACK arrives.
o The timeout is a function of the RTT.
o The RTT is highly variable between any two hosts on the Internet.
o An appropriate timeout is chosen using adaptive retransmission.

Original Algorithm
• TCP estimates SampleRTT by computing the duration between the sending of a packet and the arrival of its ACK.
• TCP then computes EstimatedRTT as a weighted average between the previous estimate and the current sample:
EstimatedRTT = α × EstimatedRTT + (1 − α) × SampleRTT
where α is the smoothing factor, with a value in the range 0.8–0.9.
• The timeout is twice the EstimatedRTT:
TimeOut = 2 × EstimatedRTT
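A minimal sketch of the original weighted-average computation (the starting estimate, the sample, and α = 0.875 are illustrative values within the stated 0.8–0.9 range):

```python
def estimate_rtt(estimated_rtt, sample_rtt, alpha=0.875):
    """EstimatedRTT = alpha * EstimatedRTT + (1 - alpha) * SampleRTT"""
    return alpha * estimated_rtt + (1 - alpha) * sample_rtt

est = 100.0                        # ms, previous estimate (illustrative)
est = estimate_rtt(est, 180.0)     # one large sample nudges the estimate up only slightly
timeout = 2 * est                  # TimeOut = 2 * EstimatedRTT
print(est, timeout)                # 110.0 220.0
```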

Karn/Partridge Algorithm
• The flaw discovered in the original algorithm after years of use is an ACK ambiguity:
o it is unclear whether an ACK should be associated with the original transmission or the retransmission of a segment.
o If the ACK is associated with the original transmission, SampleRTT becomes too large.
o If the ACK is associated with the retransmission, SampleRTT becomes too small.
• Karn/Partridge proposed a solution: take SampleRTT measurements only for segments that have been sent exactly once, i.e., never retransmitted.
• In addition, each time TCP retransmits, it sets the next timeout to be twice the last timeout (exponential backoff).
o Loss of segments is mostly due to congestion, and hence the TCP source does not react aggressively to a timeout.
Jacobson/Karels Algorithm
• The main problem with the original algorithm is that the variance of the sample RTTs is not taken into account.
o If the variation among samples is small, then EstimatedRTT can be trusted.
o Otherwise, the timeout should not be tightly coupled to EstimatedRTT.
• In this new approach, the sender measures a new SampleRTT as before.
• The deviation among RTTs is computed as follows:
Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ × (|Difference| − Deviation)
where δ is a fraction between 0 and 1.
• TCP now computes TimeOut as a function of both EstimatedRTT and Deviation:
TimeOut = µ × EstimatedRTT + φ × Deviation
where typically µ = 1 and φ = 4.
• When the variance is small, the difference between TimeOut and EstimatedRTT is negligible.
• When the variance is large, Deviation plays a greater role in deciding TimeOut.
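One update step of these formulas can be sketched as follows (starting values and the sample are illustrative; δ = 0.125, µ = 1, φ = 4 as stated):

```python
def jacobson_karels(estimated, deviation, sample, delta=0.125, mu=1, phi=4):
    """One timeout update from a new SampleRTT, per the formulas above."""
    difference = sample - estimated
    estimated = estimated + delta * difference
    deviation = deviation + delta * (abs(difference) - deviation)
    timeout = mu * estimated + phi * deviation
    return estimated, deviation, timeout

est, dev = 100.0, 10.0                 # ms, illustrative starting values
est, dev, to = jacobson_karels(est, dev, sample=150.0)
print(est, dev, to)                    # 106.25 15.0 166.25
```

Note how the large deviation term (φ × 15 = 60 ms) dominates the timeout, exactly the behaviour the last two bullets describe.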

What is silly window syndrome? When should TCP transmit a segment?


• When an ACK arrives, the window opens up for transmission.
• Even if the window size is less than one MSS, an aggressive TCP could go ahead and transmit a half-full segment.
• This strategy of aggressively taking advantage of any available window leads to a situation now known as the silly window syndrome.
• If the sender aggressively fills each small opening, small segments are introduced into the system and remain there indefinitely, since segments cannot be combined with adjacent ones to create larger segments, as shown.

Nagle's Algorithm
Nagle's algorithm suggests what the sending TCP should do when there is data to send and the window size is less than one MSS. The algorithm is listed below:

When the application produces data to send
    if both the available data and the window ≥ MSS
        send a full segment
    else
        if there is unACKed data in flight
            buffer the new data until an ACK arrives
        else
            send all the new data now

• It is always OK to send a full segment if the window allows.
• It is also OK to immediately send a small amount of data if there are currently no segments in transit; but if there is anything in flight, the sender must wait for an ACK before transmitting the next segment.
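The pseudocode above can be turned into a small state machine (an illustrative sketch; the class and method names are not part of any real TCP implementation):

```python
class NagleSender:
    """Minimal model of Nagle's rule for deciding when to send a segment."""

    def __init__(self, mss, window):
        self.mss, self.window = mss, window
        self.buffer = b""
        self.unacked_in_flight = False

    def app_write(self, data):
        """Application produced data; return a segment to send, or None."""
        self.buffer += data
        return self._try_send()

    def ack_arrived(self):
        """An ACK drained the pipe; buffered small data may now go out."""
        self.unacked_in_flight = False
        return self._try_send()

    def _try_send(self):
        if len(self.buffer) >= self.mss and self.window >= self.mss:
            seg, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
        elif not self.unacked_in_flight and self.buffer:
            seg, self.buffer = self.buffer, b""      # small send, pipe is empty
        else:
            return None                              # hold data while an ACK is outstanding
        self.unacked_in_flight = True
        return seg

s = NagleSender(mss=1460, window=8192)
print(s.app_write(b"a"))      # b'a'  - nothing in flight, small send allowed
print(s.app_write(b"b"))      # None  - buffered until the ACK arrives
print(s.ack_arrived())        # b'b'  - released by the ACK
```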

Explain TCP congestion control techniques in detail.


• In TCP congestion control, each source has to determine the available capacity in the network, so that it can send packets without loss.
• By using ACKs to pace the transmission of packets, TCP is said to be self-clocking.
• TCP maintains a state variable CongestionWindow for each connection. Therefore:
MaxWindow = MIN(CongestionWindow, AdvertisedWindow)
EffectiveWindow = MaxWindow − (LastByteSent − LastByteAcked)
• Thus, a TCP source is allowed to send no faster than the slowest component, the network or the destination host, can accommodate.
• The problem is that the available bandwidth changes over time. The three congestion control mechanisms are:
o Additive Increase/Multiplicative Decrease
o Slow Start
o Fast Retransmit and Fast Recovery

Additive Increase/Multiplicative Decrease (AIMD)


• The TCP source sets CongestionWindow based on the level of congestion it perceives to exist in the network.
• The additive increase/multiplicative decrease (AIMD) mechanism works as follows:
o The source increases CongestionWindow when the level of congestion goes down and decreases CongestionWindow when the level of congestion goes up.
• TCP interprets timeouts as a sign of congestion and reduces the rate at which it is transmitting.
o Each time a timeout occurs, the source sets CongestionWindow to half of its previous value. This is known as multiplicative decrease.
o For example, if CongestionWindow is set to 16 packets, after a packet loss it is set to 8.
o CongestionWindow is not allowed to fall below the size of a single packet (one MSS), irrespective of the level of congestion.
• Every time the source successfully sends a packet, CongestionWindow is increased by a fraction of a packet (additive increase), so that the window grows by about one packet per RTT.
o Since an ACK acknowledges receipt of MSS bytes, the increment is computed as
Increment = MSS × (MSS/CongestionWindow)
CongestionWindow += Increment
• This pattern of continually increasing and decreasing the congestion window continues throughout the lifetime of the connection.
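The two AIMD rules can be sketched in a few lines (illustrative; the MSS value and window sizes are arbitrary examples, with the window kept in bytes):

```python
MSS = 1460  # bytes, an illustrative maximum segment size

def aimd_update(cwnd, timeout_occurred):
    """One AIMD step: halve the window on a timeout, otherwise grow it by
    MSS * (MSS / cwnd) per ACK, i.e., about one MSS per RTT."""
    if timeout_occurred:
        return max(MSS, cwnd / 2)         # multiplicative decrease, floor of one MSS
    return cwnd + MSS * (MSS / cwnd)      # additive increase per ACK

cwnd = 16 * MSS
cwnd = aimd_update(cwnd, timeout_occurred=True)
print(cwnd / MSS)                         # 8.0, matching the 16 -> 8 example above
```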

• Plotting the current value of CongestionWindow as a function of time yields a saw-tooth pattern.
• AIMD decreases CongestionWindow aggressively but increases it conservatively.
o The consequences of too large a window are much worse than those of too small a window; a small CongestionWindow only results in a lower probability of packets being dropped.
o Thus the congestion control mechanism remains stable.
• Since a timeout is the indication of congestion that triggers multiplicative decrease, TCP needs an accurate timeout mechanism.
• AIMD is appropriate only when the source is operating close to the network capacity.

Slow Start
• Slow start increases the congestion window exponentially, rather than linearly. It is typically used from a cold start.
• The source starts by setting CongestionWindow to one packet.
o When the ACK arrives, TCP adds 1 to CongestionWindow and sends two packets.
o Upon receiving two ACKs, TCP increments CongestionWindow by 2 and sends four packets.
o Thus TCP doubles the number of packets in transit every RTT, as shown.
• Slow start provides exponential growth and is designed to avoid sending a full window's worth of data as one burst.
• Initially TCP has no idea about the congestion level, so it increases CongestionWindow rapidly until there is a packet loss.
• When a packet is lost:
o TCP stores half the current value of CongestionWindow as CongestionThreshold (multiplicative decrease) and resets CongestionWindow to one packet.
o CongestionWindow is then incremented by one packet for each ACK that arrives (exponential growth) until it reaches CongestionThreshold, and thereafter by one packet per RTT (additive increase).
• In the initial stages, TCP loses more packets because it attempts to learn the available bandwidth quickly through exponential increase.
• An alternate strategy to slow start is known as packet pair:
o Send a pair of packets back to back and observe the timing of their ACKs.
o The spacing between the ACKs is taken as a measure of congestion in the network.
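The per-RTT growth pattern described above can be traced with a small sketch (illustrative; the threshold of 16 packets and the 7-RTT horizon are arbitrary choices):

```python
def next_window(cwnd, threshold):
    """Per-RTT window growth in packets: double (slow start) below
    CongestionThreshold, then add one packet per RTT (additive increase)."""
    return cwnd * 2 if cwnd < threshold else cwnd + 1

cwnd, trace = 1, []
for _ in range(7):                    # window size at the start of each RTT
    trace.append(cwnd)
    cwnd = next_window(cwnd, threshold=16)
print(trace)                          # [1, 2, 4, 8, 16, 17, 18]
```

The trace shows exponential doubling up to the threshold, then the linear AIMD regime.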

In the above example, the initial slow start causes CongestionWindow to increase to 34 KB. The trace then flattens at about 2 seconds due to loss of packets. CongestionThreshold is set to 17 KB (34/2) and CongestionWindow to 1 packet. Thereafter, additive increase is followed.
Fast Retransmit and Fast Recovery
• Fast retransmit is a heuristic that triggers the retransmission of a dropped packet sooner than the regular timeout mechanism. It does not replace regular timeouts.
• When a packet arrives out of order, the receiving TCP resends the same acknowledgment (a duplicate ACK) it sent the last time.
• The sending TCP waits for three duplicate ACKs, to confirm that the packet is lost, before retransmitting it. This is known as fast retransmit, and it signals congestion.
• Instead of setting CongestionWindow back to one packet, this method uses the ACKs that are still in the pipe to clock the sending of packets. This mechanism is called fast recovery.
• The fast recovery mechanism skips the slow start phase and goes straight to additive increase.
• Fast retransmit/recovery results in roughly a 20% increase in throughput.
The following example shows a transmission in which the third packet gets lost. The sender, on receiving three duplicate ACKs (ACK 2), retransmits the third packet as shown below. On receiving the lost packet, the receiver cumulatively acknowledges up to the highest packet number received.

In this strategy:
• Slow start is only used at the beginning of a connection and whenever a regular timeout occurs.
• At other times, the congestion window follows a pure additive increase/multiplicative decrease pattern.
• TCP's fast retransmit can detect up to three dropped packets per window.
Explain in detail about TCP congestion avoidance algorithms.
• Congestion avoidance refers to mechanisms that prevent congestion before it actually occurs.
• TCP increases the load, and when congestion is likely to occur, it decreases the load on the network.
• TCP induces loss of packets in order to determine the available bandwidth of the connection.
• The three congestion-avoidance mechanisms are:
o DECbit
o Random Early Detection (RED)
o Source-based congestion avoidance
DECbit
• DECbit was developed for use on the Digital Network Architecture.
• In DECbit, each router monitors the load it is experiencing and explicitly notifies the end nodes when congestion is about to occur by setting a binary congestion bit, called the DECbit, in packets that flow through it.
• The destination host copies the DECbit into the ACK and sends it back to the source.
• Eventually the source reduces its transmission rate and congestion is avoided.

Algorithm
• A single congestion bit is added to the packet header.
• A router sets this bit in a packet if its average queue length is ≥ 1 when the packet arrives.
• The average queue length is measured over a time interval that spans the last busy + idle cycle plus the current busy cycle, as shown below.
• The router calculates the average queue length by dividing the area under the curve by the time interval.

• The source computes how many ACKs of its previous window's worth of packets had the DECbit set.
o If less than 50% of the ACKs had the DECbit set, the source increases its congestion window by 1 packet.
o Otherwise, the source decreases the congestion window to 87.5% of its previous value (multiplies it by 0.875).
• "Increase by 1, decrease by 0.875" is its additive increase/multiplicative decrease strategy.
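The source's 50% rule can be sketched directly (illustrative; window sizes and ACK counts are arbitrary examples):

```python
def decbit_adjust(window, acks_with_bit, total_acks):
    """DECbit's 'increase by 1, decrease by 0.875' rule over the last window."""
    if acks_with_bit / total_acks < 0.5:
        return window + 1                # additive increase
    return window * 0.875                # multiplicative decrease to 87.5%

print(decbit_adjust(10, 3, 10))          # 11    (under 50% of ACKs marked)
print(decbit_adjust(10, 6, 10))          # 8.75  (60% marked: back off)
```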

Random Early Detection (RED)


• Proposed by Floyd and Jacobson.
• Each router monitors its own queue length.
• In RED, the router implicitly notifies the source that congestion is likely to occur by dropping one of its packets.
• The source is notified by a timeout or a duplicate ACK.
• The router drops a few packets early, before it runs out of buffer space, so that it need not drop many more packets later.
• Each incoming packet is dropped with a probability, known as the drop probability, when the queue length exceeds a drop level.
Algorithm
• RED computes the average queue length using a weighted running average as follows:
AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
o where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement is made.
o Because of the bursty nature of Internet traffic, queues can become full very quickly and then empty again.
o The weighted running average detects long-lived congestion, as shown below.

• RED has two queue length thresholds, MinThreshold and MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with these thresholds and decides whether to queue or drop the packet as follows:
if AvgLen ≤ MinThreshold
    queue the packet
if MinThreshold < AvgLen < MaxThreshold
    calculate probability P
    drop the arriving packet with probability P
if MaxThreshold ≤ AvgLen
    drop the arriving packet
o The probability of drop increases slowly when AvgLen is between the two thresholds, reaching MaxP at the upper threshold, at which point it jumps to unity, as shown in the figures (RED thresholds; drop probability function).
o P is a function of both AvgLen and how long it has been since the last packet was dropped. It is computed as
TempP = MaxP × (AvgLen − MinThreshold)/(MaxThreshold − MinThreshold)
P = TempP/(1 − count × TempP)
• Because RED drops packets randomly, the probability that RED drops a particular flow's packet(s) is roughly proportional to that flow's share of the bandwidth.
• MaxThreshold is typically set to twice MinThreshold, as this works well for Internet traffic.
• There should be enough free buffer space above MaxThreshold to absorb bursty traffic.
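The per-packet decision can be sketched as follows (illustrative; the threshold values and MaxP are arbitrary, and `count` is the number of packets queued since the last drop):

```python
def red_action(avg_len, count, min_th=5, max_th=10, max_p=0.02):
    """Return 'queue', 'drop', or the drop probability P for one arriving packet,
    following the RED threshold rules above."""
    if avg_len <= min_th:
        return "queue"
    if avg_len >= max_th:
        return "drop"
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    return temp_p / (1 - count * temp_p)   # drop the packet with this probability

print(red_action(3, 0))       # queue
print(red_action(12, 0))      # drop
print(red_action(7.5, 0))     # close to 0.01 (halfway between the thresholds)
```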
Source-Based Congestion Avoidance
• The source looks for signs of congestion in the network; for example, a considerable increase in the RTT indicates queuing at a router.

Some mechanisms
1. Every two round-trip delays, the source checks to see if the current RTT is greater than the average of the minimum and maximum RTTs seen so far.
a. If it is, the algorithm decreases the congestion window by one-eighth.
b. Otherwise, the window increases as in normal TCP.
2. The window is adjusted once every two round-trip delays based on the product
(CurrentWindow − OldWindow) × (CurrentRTT − OldRTT)
a. If the result is positive, the source decreases the window size by one-eighth.
b. Otherwise, the source increases the window by one maximum packet size.
3. Every RTT, the source increases the window size by one packet and compares the throughput achieved to the throughput when the window was one packet smaller.
a. If the difference is less than one-half the throughput achieved when only one packet was in transit, it decreases the window by one packet.

TCP Vegas
• In standard TCP, it was observed that throughput increases as the congestion window increases, but not beyond the available bandwidth.
• Any further increase in the window size only results in packets taking up buffer space at the bottleneck router.
• TCP Vegas uses this idea to measure and control the amount of extra data in transit.
• If a source is sending too much extra data, it will cause long delays and possibly lead to congestion.
• TCP Vegas's congestion-avoidance actions are based on changes in the estimated amount of extra data in the network.
• A flow's BaseRTT is set to the minimum of all measured RTTs; it is commonly the RTT of the first packet sent.
• The expected throughput is given by ExpectedRate = CongestionWindow/BaseRTT.
• The actual sending rate, ActualRate, is computed by dividing the number of bytes transmitted during an RTT by that RTT.
• The difference between the two rates is computed as Diff = ExpectedRate − ActualRate.
• Two thresholds α and β are defined such that α < β.
o When Diff < α, the congestion window is linearly increased during the next RTT.
o When Diff > β, the congestion window is linearly decreased during the next RTT.
o When α < Diff < β, the congestion window is unchanged.
• When the actual and expected throughput differ significantly, the congestion window is reduced, as this indicates congestion in the network.
• When the actual and expected throughput are almost the same, the congestion window is increased to utilize the available bandwidth.
• The overall goal is to keep between α and β extra bytes in the network. The expected and actual throughput with thresholds α and β (shaded region) are shown below.
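The Vegas decision can be sketched in a simplified form. Note the assumption: real Vegas states α and β in terms of extra buffers occupied in the network, while here they are expressed directly in bytes per second to keep the example short; all numeric values are illustrative.

```python
MSS = 1460  # bytes, illustrative

def vegas_adjust(cwnd, base_rtt, actual_rtt, bytes_in_last_rtt, alpha, beta):
    """One TCP Vegas decision: compare ExpectedRate with ActualRate and
    adjust the window linearly (simplified; thresholds in bytes/s here)."""
    expected_rate = cwnd / base_rtt              # CongestionWindow / BaseRTT
    actual_rate = bytes_in_last_rtt / actual_rtt
    diff = expected_rate - actual_rate
    if diff < alpha:
        return cwnd + MSS                        # linear increase next RTT
    if diff > beta:
        return cwnd - MSS                        # linear decrease next RTT
    return cwnd                                  # within the target band: hold

# Expected 300000 B/s vs actual ~200000 B/s: too much extra data, back off
w = vegas_adjust(30000, base_rtt=0.1, actual_rtt=0.12,
                 bytes_in_last_rtt=24000, alpha=20000, beta=60000)
print(w)    # 28540 (30000 - one MSS)
```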

What is meant by quality of service?


• QoS is defined as a set of attributes pertaining to the performance of a connection.
• The attributes may be either user-oriented or network-oriented.
• QoS on the Internet can be broadly classified into
o Integrated Services (IntServ)
o Differentiated Services (DiffServ)
Explain how QoS is provided through integrated services.
• Integrated Services (IntServ) is a flow-based QoS model: the user creates a flow from source to destination and informs all routers of the resource requirement.
Service Classes
• The two classes of service defined are:
o Guaranteed service, in which the network assures that delay will not exceed some maximum, provided the flow stays within its TSpec.
o Controlled load service, which meets the needs of tolerant, adaptive applications that request low-loss or no-loss service, such as file transfer and e-mail.

Flowspec
• The set of information given to the network for a given flow is called the flowspec. It has two parts:
o TSpec defines the traffic characterization of the flow.
o RSpec defines the resources that the flow needs to reserve (buffer, bandwidth, etc.).

TSpec
􀂾 The bandwidth required by most real-time applications varies constantly.
􀂾 The average rate alone cannot characterize a flow, as variable-bit-rate applications exceed
the average rate at times. This leads to queuing and subsequent delay/loss of packets.
Token Bucket
􀂾 The solution to manage varying bandwidth is to use token bucket filter that can describe
bandwidth characteristics of a source/flow.
􀂾 The two parameters used are token rate r and a bucket depth B
􀂾 A token is required to send a byte of data.
􀂾 A source can accumulate tokens at rate r/second, but not more than B tokens.
􀂾 A sustained rate of more than r bytes per second is not permitted; bursts above r are covered only
by accumulated tokens, so bursty data must otherwise be spread over a longer interval.
􀂾 The token bucket provides information that is used by admission control algorithm to
determine whether or not to consider the new request for service.
The following example shows two flows with equal average rates but different token bucket
descriptions.
􀂾 Flow A generates data at a steady rate of 1 Mbps, which is described using a token
bucket filter with rate r = 1 Mbps and a bucket depth B = 1 byte.
􀂾 Flow B sends at a rate of 0.5 Mbps for 2 seconds and then at 2 Mbps for 1 second,
which is described using a token bucket filter with rate r = 1 Mbps and a bucket depth
B = 1 Mb. The additional depth allows it to accumulate tokens while sending at 0.5
Mbps (2 s × 0.5 Mbps = 1 Mb) and use them for the 2 Mbps burst.
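A token bucket filter of this kind can be sketched as follows. This is a minimal illustration; the class and method names are invented for the example, and the bucket is assumed to start full.

```python
# Minimal token bucket sketch (names are illustrative). Tokens accumulate
# at rate r per second up to depth B; sending n units consumes n tokens.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate      # r: token accumulation rate (per second)
        self.depth = depth    # B: maximum tokens that can accumulate
        self.tokens = depth   # assume the bucket starts full
        self.last = 0.0       # time of the previous conformance check

    def conforms(self, now, n):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if n <= self.tokens:  # enough tokens: the data conforms
            self.tokens -= n
            return True
        return False          # would exceed the profile
```

A burst up to B units passes immediately, but a second burst at the same instant is rejected until tokens have accumulated again.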

Admission Control
􀂾 When a flow requests a level of service, admission control examines TSpec and RSpec of the
flow.
􀂾 It checks to see whether the desired service can be provided with currently available resources,
without causing any worse service to previously admitted flows.
o If it can provide the service, the flow is admitted otherwise denied.
􀂾 The decision to allow/deny a service can be heuristic such as "currently delays are within
bounds, therefore another service can be admitted."
􀂾 Admission control is closely related to policy. For example, a network administrator may allow the
CEO to make reservations and forbid requests from other employees.
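A minimal sketch of such an admission decision, assuming reservations are pure bandwidth amounts; real admission control also weighs delay bounds and the TSpec burst parameters.

```python
# Hedged sketch: admit a new flow only if the sum of already-reserved
# rates plus the requested rate still fits within the link capacity,
# so previously admitted flows see no worse service.
def admit(link_capacity, admitted_rates, requested_rate):
    return sum(admitted_rates) + requested_rate <= link_capacity

admitted = [3_000_000, 4_000_000]              # reserved rates in bps
print(admit(10_000_000, admitted, 2_000_000))  # True: 9 Mbps fits in 10 Mbps
print(admit(10_000_000, admitted, 4_000_000))  # False: 11 Mbps exceeds it
```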
Reservation Protocol (RSVP)
􀂾 The Resource Reservation Protocol (RSVP) is a signaling protocol to help IP create a flow and
make a resource reservation.
􀂾 RSVP provides resource reservations for all kinds of traffic including multimedia which uses
multicasting. RSVP supports both unicast and multicast flows.
􀂾 RSVP is a robust protocol that relies on soft state in the routers.
o Soft state, unlike hard state (as in an ATM virtual circuit), times out after a short period if it is
not refreshed; it does not need to be explicitly deleted.
o The default refresh interval is about 30 seconds.
􀂾 Since multicasting involves a larger number of receivers than senders, RSVP follows a receiver-
oriented approach that makes receivers keep track of their own requirements.

RSVP Messages
􀂾 To make a reservation, the receiver needs to know:
o What traffic the sender is likely to send so as to make an appropriate reservation, i.e., TSpec.
o Secondly, what path the packets will travel.
􀂾 The sender sends a PATH message to all receivers (downstream) containing TSpec.
􀂾 A PATH message stores necessary information for the receivers on the way. PATH messages
are sent about every 30 seconds.
􀂾 The receiver sends a reservation request as a RESV message back to the sender (upstream),
containing sender's TSpec and receiver requirement RSpec.
􀂾 Each router on the path looks at the RESV request and tries to allocate necessary resources to
satisfy and passes the request onto the next router.
o If allocation is not feasible, the router sends an error message to the receiver
􀂾 If a link fails, a new path is discovered between the sender and the receiver, and RESV messages
follow the new path thereafter.
􀂾 A router keeps resources reserved as long as it continues to receive RESV messages; otherwise
they are released.
􀂾 If a router does not support RSVP, then best-effort delivery is followed.
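The soft-state behavior can be sketched roughly as follows. The table layout and the 90-second expiry are assumptions for illustration; RSVP's actual timer rules are more elaborate.

```python
# Rough soft-state sketch: a reservation stays installed only while RESV
# refreshes keep arriving; unrefreshed entries time out and are released.
REFRESH_TIMEOUT = 90.0  # assumed: expire after a few missed 30 s refreshes

class SoftStateTable:
    def __init__(self):
        self.reservations = {}          # flow id -> time of last refresh

    def refresh(self, flow, now):
        self.reservations[flow] = now   # a RESV message re-installs state

    def expire(self, now):
        # Keep only entries refreshed within the timeout window.
        self.reservations = {f: t for f, t in self.reservations.items()
                             if now - t < REFRESH_TIMEOUT}
```

A flow last refreshed 100 seconds ago is silently dropped on the next expiry pass, with no explicit teardown message required.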

Reservation Merging
􀂾 In RSVP, the resources are not reserved for each receiver in a flow, but merged.
􀂾 When a RESV message travels from receiver up the multicast tree, it is likely to come across a
router where reservations have already been made for some other flow.
􀂾 If the new resource requirements can be met using existing allocations, then new allocations
need not be made.
o For example, receiver A has already made a request for a guaranteed delay of less than 100 ms.
If B comes with a new request for a delay of less than 200 ms, then no new reservations are
made.
o Another example shows router R3 merging requests from Rc1, Rc2 and Rc3 before making
bandwidth reservation.
􀂾 A router that handles multiple requests with one reservation is known as a merge point. Merging is
needed because different receivers may require different quality.
􀂾 Reservation merging meets the needs of all receivers downstream of the merge point.
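A rough sketch of the merging rule, with illustrative function names: the reservation forwarded upstream is simply the strictest requirement in each dimension.

```python
# Sketch of reservation merging at a merge point: one upstream
# reservation must satisfy every downstream receiver, so the strictest
# requirement in each dimension is forwarded.
def merge_delay_bounds(bounds_ms):
    return min(bounds_ms)   # the tightest delay bound covers everyone

def merge_bandwidth(rates_bps):
    return max(rates_bps)   # the largest bandwidth request covers everyone

print(merge_delay_bounds([100, 200]))  # 100: the 200 ms request adds no new state
```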
Packet Classifying and Scheduling
􀂾 Packet classification refers to the process of associating each packet with corresponding
reservation.
o This is done by examining the fields source address, destination address, protocol
number, source port and destination port in the packet header.
􀂾 Scheduling refers to the process of managing packets in queues to ensure that they get the
requested service.
o Weighted fair queuing or a combination of queuing disciplines can be used.
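A toy classifier over the five header fields listed above might look like this; the addresses, ports, and reservation table are invented for illustration.

```python
# Toy classifier over the 5-tuple named above; entries are invented.
reservations = {
    # (src addr, dst addr, protocol, src port, dst port) -> service class
    ("10.0.0.1", "10.0.0.2", 17, 5004, 5004): "guaranteed",
}

def classify(src, dst, proto, sport, dport):
    # Unmatched packets fall back to best-effort treatment.
    return reservations.get((src, dst, proto, sport, dport), "best-effort")

print(classify("10.0.0.1", "10.0.0.2", 17, 5004, 5004))  # guaranteed
print(classify("10.0.0.9", "10.0.0.2", 17, 5004, 5004))  # best-effort
```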

List the disadvantages of integrated services


􀂾 Scalability: IntServ requires each router to maintain information for every flow, which is not feasible
given today's Internet growth.
􀂾 Service type limitation: only two types of service are provided; certain applications may
require more than the offered services.

Explain how QoS is provided through differentiated services


Differentiated Services (DiffServ) is a class-based QoS model designed for IP.
Premium class
􀂾 The default best-effort model is enhanced with a new class called premium.
􀂾 The premium packets have bits set (marked) in the header by the organization gateway router
or by the ISP router.
􀂾 IETF has defined a set of behaviors for routers known as per-hop behaviors (PHB).
􀂾 IETF has replaced the TOS field in IPv4 (and the Traffic Class field in IPv6) with a 6-bit DiffServ
code point (DSCP); the remaining 2 bits are unused.

􀂾 The 6-bit DSCP can be used to define 64 PHBs that could be applied to a packet.
􀂾 The three PHBs defined are default PHB (DE PHB), expedited forwarding PHB (EF PHB)
and assured forwarding PHB (AF PHB).
􀂾 The DE PHB is the same as best-effort delivery and is compatible with TOS
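The DSCP placement in the old TOS byte can be shown with a little bit arithmetic; the function names are illustrative, and 0xB8 is the well-known EF code point.

```python
# The DSCP occupies the upper 6 bits of the old IPv4 TOS byte; the
# remaining 2 low-order bits are not used by DiffServ.
def dscp_of(tos_byte):
    return tos_byte >> 2                     # upper 6 bits: 64 possible PHBs

def set_dscp(tos_byte, dscp):
    return (dscp << 2) | (tos_byte & 0x03)   # keep the 2 low bits unchanged

print(dscp_of(0xB8))  # 46, the expedited forwarding (EF) code point
```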

Expedited Forwarding (EF PHB)


􀂾 Packets marked for EF treatment should be forwarded by the router with minimal delay
(latency) and loss by ensuring required bandwidth.
􀂾 A router can guarantee EF only if the arrival rate of EF packets is less than its forwarding rate.
􀂾 Rate limiting of EF packets is achieved by configuring routers at the edge of an
administrative domain to ensure that the aggregate EF rate is less than the bandwidth of the slowest link.
􀂾 Queuing can be either using strict priority or weighted fair queuing.
o In strict priority, EF packets are preferred over others, leaving less chance for other
packets to go through.
o In weighted fair queuing, other packets are given a chance, but there is a possibility of
EF packets being dropped, if there is excessive EF traffic.

Assured Forwarding
􀂾 The AF PHB is based on RED with In and Out (RIO) algorithm.
􀂾 In RIO, the drop probability increases as the average queue length increases.
The following example shows RIO with two classes named in and out.

􀂾 The out curve has a lower MinThreshold than in curve, therefore under low levels of
congestion, only packets marked out will be discarded.
􀂾 If the average queue length exceeds Minin, packets marked in are also dropped.
􀂾 The terms in and out are explained with the example "Customer X is allowed to send up to y
Mbps of assured traffic".
o If the customer sends packets less than y Mbps then packets are marked in.
o When the customer exceeds y Mbps, the excess packets are marked out.
􀂾 Thus the combination of a profile meter at the edge router and RIO in all routers assures (but does
not guarantee) the customer that packets within the profile will be delivered.
􀂾 RIO does not change the delivery order of in and out packets.
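The two RIO drop curves can be sketched as RED-style linear ramps; all threshold and probability values here are invented for illustration.

```python
# RED-style drop curves for RIO; thresholds and probabilities invented.
def drop_probability(avg_qlen, min_th, max_th, max_p):
    # Linear ramp from 0 to max_p between the two thresholds.
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def rio_drop_p(avg_qlen, marked_in):
    # The out curve has a lower MinThreshold than the in curve, so out
    # packets are discarded first under low levels of congestion.
    if marked_in:
        return drop_probability(avg_qlen, 40, 70, 0.02)
    return drop_probability(avg_qlen, 10, 40, 0.05)

print(rio_drop_p(25, marked_in=True))   # 0.0: still below Min_in
print(rio_drop_p(25, marked_in=False))  # 0.025: out packets already at risk
```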
􀂾 If weighted fair queuing is used, the weight for the premium queue is chosen based on the load of
premium packets, using the formula
Bpremium = Wpremium / (Wpremium + Wbest-effort)
o For example, if weight of premium queue is 1 and best-effort is 4, then only 20% of the
link is reserved for premium packets.

How do differentiated services overcome the limitations of integrated services?


1. The main processing is moved from the core of the network to the edge of the network
(scalability). Routers need not store information about individual flows; applications define the
type of service they need each time a packet is sent.
2. The per-flow service is changed to per-class service. The router forwards a packet based on the
class of service defined in the packet, not the flow. Different classes of service can be defined
based on the needs of applications.

Write short notes on ATM QoS.


The five ATM service classes are:
1. constant bit rate (CBR)
2. variable bit rate—real-time (VBR-rt)
3. variable bit rate—non-real-time (VBR-nrt)
4. available bit rate (ABR)
5. unspecified bit rate (UBR)

Constant Bit Rate


􀂾 Sources of CBR traffic are expected to send at a constant rate.
􀂾 The source’s peak rate and average rate of transmission are equal.
􀂾 CBR class is designed for customers who need real-time audio or video services.
􀂾 CBR is a relatively easy service to implement

Variable Bit Rate


􀂾 The VBR class is divided into two subclasses: real-time (VBR-rt) and non-real-time (VBR-nrt).
􀂾 VBR-rt is designed for users who need real-time services (such as voice and video
transmission) and use compression techniques to create a variable bit rate.
􀂾 The traffic generated by the source is characterized by a token bucket, and the maximum total
delay required through the network is specified.
􀂾 VBR-nrt bears some similarity to IP’s controlled load service. The source traffic is specified
by a token bucket.
􀂾 VBR-nrt is designed for users who do not need real-time services but use compression
techniques to create a variable bit rate

Unspecified Bit Rate


􀂾 UBR class is a best-effort delivery service that does not guarantee anything.
􀂾 UBR allows the source to specify a maximum rate at which it will send.
o Switches may use this information to decide whether to admit or reject the flow, or to negotiate
with the source for a lower peak rate.

Available Bit Rate


􀂾 ABR, apart from being a service class, also defines a set of congestion-control mechanisms.
􀂾 The ABR mechanisms operate over a virtual circuit by exchanging special ATM cells called
resource management (RM) cells between the source and destination.
􀂾 RM cells work as explicit congestion feedback mechanism as shown below.

􀂾 ABR allows a source to increase or decrease its allotted rate as conditions dictate.
􀂾 ABR class delivers cells at a minimum rate. If more network capacity is available, this
minimum rate can be exceeded.
􀂾 ABR is suitable for applications that are bursty in nature.

What is equation based congestion control?


􀂾 TCP’s congestion-control algorithm is not appropriate for real-time applications.
􀂾 A smooth transmission rate is obtained by ensuring that the flow's behavior adheres to an
equation that models TCP's behavior.

􀂾 To be TCP-friendly, the transmission rate must be inversely proportional to the round-trip
time (RTT) and to the square root of the loss rate (ρ):
Rate ∝ 1 / (RTT × √ρ)
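This relation can be checked numerically with the simplified TCP throughput model Rate ≈ 1.22 × MSS / (RTT × √ρ); the constant 1.22 comes from the standard simplified (Mathis) model, and the sketch only illustrates the proportionality.

```python
import math

# Simplified TCP throughput model: Rate ~ 1.22 * MSS / (RTT * sqrt(p)).
# Used here only to show the inverse proportionality to RTT and to the
# square root of the loss rate.
def tcp_friendly_rate(mss, rtt, loss_rate):
    return 1.22 * mss / (rtt * math.sqrt(loss_rate))

# Halving the loss rate raises the allowed rate by sqrt(2), not by 2:
r1 = tcp_friendly_rate(1460, 0.1, 0.02)
r2 = tcp_friendly_rate(1460, 0.1, 0.01)
print(round(r2 / r1, 3))  # 1.414
```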
