CN Unit 4
Connection-Oriented Protocol Services vs Connectionless Protocol Services
Characteristics
As UDP is a connectionless protocol, it does not require creating a connection, and the message is
transferred without handshaking.
This is one of the main differences between the UDP and TCP networking protocols.
Advantages and Disadvantages of Connection-Oriented Protocol:
Advantages:
It is reliable.
All the packets follow the same path to the destination.
Disadvantages:
Handshaking is required before sending an actual data packet over the internet.
It requires additional header parameters to ensure reliable communication between the sender
and receiver, so it has extra overhead.
The header size of the packet is bigger than in a connectionless protocol.
Advantages and Disadvantages of Connectionless Protocol:
Advantages:
No connection setup or handshaking is needed, so there is less overhead and lower latency.
Disadvantages:
It is not reliable and cannot ensure that data reaches the destination.
Packets choose their route during transmission based on network congestion.
It does not have a fixed path.
Different packets do not necessarily follow the same path.
Use of Connection-Oriented Protocol:
If you need reliable communication between sender and receiver, connection-oriented services
are more useful.
Example: We use email for communication. If we are sending an email to another recipient, it
should be delivered. In this case, the connection-oriented protocol is more reliable to use.
Use of Connectionless Protocol:
Example: If we are developing a video streaming website, we need a faster connection to stream
without buffering delay. In this case, the connectionless protocol is more useful.
Domain name server (DNS) uses connectionless service protocol (UDP) for the domain and IP
resolution.
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and connectionless
protocol. So, there is no need to establish a connection before data transfer. The UDP helps to
establish low-latency and loss-tolerating connections over the network. The UDP enables
process-to-process communication.
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite.
It is a communication protocol used across the internet for time-sensitive transmissions such as
video playback or DNS lookups.
A DNS lookup is the process through which a human-readable domain name (e.g., www.digicert.com)
is translated into a computer-readable IP address.
Unlike Transmission Control Protocol (TCP), UDP is connectionless and does not guarantee
delivery, order, or error checking, making it a lightweight and efficient option for certain types
of data transmission.
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes to
60 bytes. The first 8 Bytes contain all necessary header information and the remaining part
consists of data. UDP port number fields are each 16 bits long, therefore the range for port
numbers is defined from 0 to 65535; port number 0 is reserved. Port numbers help to
distinguish different user requests or processes.
Figure: UDP header format
Source Port: It is a 2-byte field used to identify the port number of the source.
Destination Port: It is a 2-byte field used to identify the destination port of the packet.
Length: It is a 16-bit field giving the length of the UDP datagram, including the header and the
data.
Checksum: It is a 2-byte field. It is the 16-bit one's complement of the one's complement sum of
the UDP header, the pseudo-header of information from the IP header, and the data, padded with
zero octets at the end (if necessary) to make a multiple of two octets.
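To make the byte layout concrete, here is a small Python sketch (not from the original notes) that packs and unpacks the four 16-bit header fields with struct and computes a 16-bit one's complement sum; the port numbers and payload are invented examples, and a real UDP checksum would also cover the IP pseudo-header as described above.

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """Return the 16-bit one's complement checksum of the given bytes."""
    if len(data) % 2:
        data += b"\x00"                     # pad with a zero octet if the length is odd
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into 16 bits
    return (~total) & 0xFFFF

# Hypothetical example: source port 5000, destination port 53, checksum field left at 0.
payload = b"example"
length = 8 + len(payload)                   # header (8 bytes) + data
header = struct.pack("!HHHH", 5000, 53, length, 0)   # four 16-bit fields, network byte order

src, dst, ln, chk = struct.unpack("!HHHH", header)
print(src, dst, ln, chk)                    # 5000 53 15 0

# A real UDP checksum is computed over pseudo-header + header + data;
# here we only demonstrate the one's complement sum over header + data.
print(hex(ones_complement_sum16(header + payload)))
```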
Applications of UDP
Used for simple request-response communication when the amount of data is small, and
hence there is less concern about flow and error control.
It is a suitable protocol for multicasting as UDP supports packet switching.
UDP is used for some routing update protocols like RIP (Routing Information
Protocol).
Normally used for real-time applications which cannot tolerate uneven delays
between sections of a received message.
VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use
UDP for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to
ensure fast and efficient data transmission.
DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a
suitable protocol for this application.
DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network.
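As a rough illustration of UDP's connectionless request/response style, the following Python sketch uses the standard socket module; the loopback address, port 9999, and the ping/pong messages are arbitrary example values.

```python
import socket

# Server side: bind to a local port and wait for a datagram (no connection is established).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))        # arbitrary example port

# Client side: send a datagram straight away -- no handshake, no delivery guarantee.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))

data, addr = server.recvfrom(1024)      # receive the datagram and the sender's address
server.sendto(b"pong", addr)            # reply to whoever sent it

reply, _ = client.recvfrom(1024)
print(reply)                            # b'pong'
client.close(); server.close()
```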
TCP vs UDP
Acknowledgment: TCP: an acknowledgment segment is present. UDP: there are no acknowledgment
segments.
Sequence: TCP: sequencing of data is a feature of TCP, so packets arrive in order at the
receiver. UDP: there is no sequencing of data; if the order is required, it has to be managed by
the application layer.
Retransmission: TCP: retransmission of lost packets is possible. UDP: there is no retransmission
of lost packets.
Header Length: TCP: a variable-length header of 20 to 60 bytes. UDP: an 8-byte fixed-length
header.
Advantages of UDP
Speed: UDP is faster than TCP because it does not have the overhead of
establishing a connection and ensuring reliable data delivery.
Lower latency: Since there is no connection establishment, there is lower latency
and faster response time.
Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
Smaller packet size: UDP uses smaller packet sizes than TCP, which can reduce
network congestion and improve overall network performance.
Disadvantages of UDP
No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.
TCP (Transmission Control Protocol) is one of the main protocols of the TCP/IP
suite. It lies between the Application and Network Layers and is used to provide a
reliable delivery service. Transmission Control Protocol (TCP) ensures reliable and
efficient data transmission over the internet.
TCP plays a crucial role in managing the flow of data between computers, guaranteeing that
information is delivered accurately and in the correct sequence.
TCP
For Example: When a user requests a web page on the internet, somewhere in the world, the
server processes that request and sends back an HTML Page to that user. The server makes use
of a protocol called HTTP. HTTP then requests the TCP layer to set up the
required connection and send the HTML file.
Now, the TCP breaks the data into small packets and forwards it toward the Internet Protocol
(IP) layer. The packets are then sent to the destination through different routes.
The TCP layer in the user’s system waits for the transmission to get finished and acknowledges
once all packets have been received.
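The request/response flow described above can be sketched with Python's socket module; connect() causes the operating system to perform the TCP three-way handshake, and TCP segments and reassembles the byte stream underneath. The host example.com is only an illustrative target.

```python
import socket

# create_connection() performs the TCP three-way handshake with the server.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # The HTTP request is handed to TCP, which splits it into numbered segments.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    # TCP reassembles the arriving segments into an ordered byte stream.
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # server closed the connection
            break
        response += chunk

print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'
```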
Features of TCP
Some of the most prominent features of the Transmission Control Protocol are mentioned below.
Segment Numbering System: TCP keeps track of the segments being transmitted or
received by assigning numbers to each and every single one of them. A specific
Byte Number is assigned to data bytes that are to be transferred while segments are
assigned sequence numbers. Acknowledgment Numbers are assigned to received
segments.
Connection Oriented: It means the sender and receiver are connected to each other till
the completion of the process. The order of the data is maintained, i.e. the order remains the
same before and after transmission.
Full Duplex: In TCP, data can be transmitted from the receiver to the sender or
vice versa at the same time. This increases the efficiency of data flow between sender
and receiver.
Flow Control: Flow control limits the rate at which a sender transfers data. This is
done to ensure reliable delivery. The receiver continually hints to the sender on how
much data can be received (using a sliding window); a small sketch of this idea follows the feature list below.
Error Control: TCP implements an error control mechanism for reliable data
transfer. Error control is byte-oriented. Segments are checked for error detection.
Error Control includes – Corrupted Segment & Lost Segment Management, Out-of-
order segments, Duplicate segments, etc.
Congestion Control: TCP takes into account the level of congestion in the
network. Congestion level is determined by the amount of data sent by a sender.
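Here is the small sliding-window sketch referred to in the Flow Control feature above; it is a simplified illustration (not TCP's real algorithm), and the window size, segment size, and message are made-up values.

```python
# Simplified sliding-window flow control: the sender never has more than
# `advertised_window` bytes outstanding (sent but not yet acknowledged).
def send_with_window(data: bytes, advertised_window: int, segment_size: int = 4):
    base = 0                      # first unacknowledged byte
    next_seq = 0                  # next byte to send
    while base < len(data):
        # send while the advertised window permits
        while next_seq < len(data) and next_seq - base < advertised_window:
            segment = data[next_seq:next_seq + segment_size]
            print(f"send bytes {next_seq}-{next_seq + len(segment) - 1}: {segment!r}")
            next_seq += len(segment)
        # pretend the receiver acknowledges everything sent so far,
        # which slides the window forward
        print(f"ACK up to byte {next_seq}")
        base = next_seq

send_with_window(b"flow control demo", advertised_window=8)
```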
Advantages of TCP
It is a reliable protocol.
It provides an error-checking mechanism as well as one for recovery.
Congestion in a computer network happens when there is too much data being sent at the same
time, causing the network to slow down. Just like traffic congestion on a busy road, network
congestion leads to delays and sometimes data loss
Congestion control:
Congestion control is a crucial concept in computer networks. It refers to the methods used to
prevent network overload and ensure smooth data flow. When too much data is sent through
the network at once, it can cause delays and data loss. Congestion control techniques help
manage the traffic, so all users can enjoy a stable and efficient network connection. These
techniques are essential for maintaining the performance and reliability of modern networks.
Let us consider an example to understand the leaky bucket algorithm. Imagine a bucket with a small hole in the bottom. No
matter the rate at which water enters the bucket, the outflow is at a constant rate. When the bucket is
full, any additional water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved
in leaky bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
Bursty traffic is converted into uniform traffic by the leaky bucket.
In practice, the bucket is a finite queue that outputs at a finite rate.
The leaky bucket algorithm has a rigid output design at an average rate independent
of the bursty traffic.
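A minimal Python sketch of the leaky bucket described above, assuming a finite queue as the bucket and a constant number of packets transmitted per clock tick; the bucket size, leak rate, and arrival pattern are invented numbers.

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size=4, leak_rate=1):
    """arrivals[t] = number of packets arriving at tick t."""
    bucket = deque()
    for t, burst in enumerate(arrivals):
        for _ in range(burst):
            if len(bucket) < bucket_size:
                bucket.append(t)              # packet accepted into the bucket
            else:
                print(f"tick {t}: packet dropped (bucket full)")
        # the bucket leaks at a constant rate, regardless of how bursty the input was
        sent = [bucket.popleft() for _ in range(min(leak_rate, len(bucket)))]
        print(f"tick {t}: transmitted {len(sent)} packet(s), {len(bucket)} queued")

# Bursty input (5 packets at once, then nothing) leaves the bucket at 1 packet per tick.
leaky_bucket([5, 0, 0, 0, 0, 0])
```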
Token Bucket Algorithm
The token bucket is a control algorithm that indicates when traffic can be sent, based on the
presence of tokens in the bucket.
The bucket contains tokens. Each token permits a packet of a predetermined size to be sent, and
a token is removed from the bucket for every packet that is transmitted.
When tokens are present, a flow is allowed to transmit traffic. If there are no tokens, the flow
cannot send its packets. Hence, a flow can transmit traffic up to its peak burst rate only if
there are adequate tokens in the bucket.
As a result, if tokens are available, a burst of packets can be transmitted at the full rate,
giving the system some flexibility. If the bucket capacity is C bytes, tokens accumulate at a
rate of ρ bytes per second, and the maximum output rate is M bytes per second, then the length S
of a maximum-rate burst satisfies M*S = C + ρ*S, i.e. S = C / (M - ρ) seconds.
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty
the traffic is. So, to deal with bursty traffic, we need a more flexible algorithm so that
data is not lost. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
In regular intervals, tokens are thrown into the bucket.
The bucket has a maximum capacity.
If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example, In figure (A) we see a bucket holding three tokens, with
five packets waiting to be transmitted. For a packet to be transmitted, it must capture and
destroy one token. In figure (B) We see that three of the five packets have gotten through, but
the other two are stuck waiting for more tokens to be generated.
The system removes one token for every cell (unit) of data sent. For each tick of the clock, the
system adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket
collects 10,000 tokens. The host can now consume all these tokens at once, or spread them out,
for example over 1,000 ticks at 10 cells per tick.
The token bucket can easily be implemented with a counter. The counter is initialized to zero.
Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent,
the counter is decremented by 1. When the counter is zero, the host cannot send data.
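Following the counter description above, this small Python sketch adds n tokens per tick up to the bucket capacity and removes one token per unit of data sent; the rate, capacity, and traffic pattern are example values.

```python
def token_bucket(arrivals, rate_n=1, capacity=4):
    """arrivals[t] = data units wanting to go out at tick t; one token sends one unit."""
    tokens = 0
    for t, want in enumerate(arrivals):
        tokens = min(capacity, tokens + rate_n)   # tokens added each tick, bounded by bucket size
        sent = min(want, tokens)                  # each unit sent consumes a token
        tokens -= sent
        print(f"tick {t}: wanted {want}, sent {sent}, tokens left {tokens}")

# An idle period lets tokens (credit) accumulate, so a later burst can go out at once.
token_bucket([0, 0, 0, 5, 0])
```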
Due to token accumulation, delays can be introduced in the packet delivery. If the
token bucket happens to be empty, packets will have to wait for new tokens, leading
to increased latency and potential packet loss.
Token Bucket happens to be less flexible than leaky bucket when it comes to
network traffic shaping. The fixed token generation rate cannot be easily altered to
meet changing network requirements, unlike the adaptable nature of leaky bucket.
Reliability
Reliability implies whether a packet reaches its destination or not, and whether information is
lost or not. Lack of reliability means losing a packet or an acknowledgement, which entails
retransmission. Reliability requirements differ from application to application. For example, it
is more important that electronic mail, file transfer, and internet access have reliable
transmission than telephony or audio conferencing.
Delay
It denotes source-to-destination delay. Different applications can tolerate delay in different
degrees. Telephony, audio conferencing, video conferencing, and remote log-in need minimum
delay, while delay in file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow. High jitter means the
difference between delays is large; low jitter means the variation is small.
Bandwidth
Different applications need different bandwidths. In video conferencing we need to send
millions of bits per second to refresh a color screen while the total number of bits in an e-mail
may not reach even a million.
There are several ways to improve QoS like Scheduling and Traffic shaping, we will see each
and every part of this in brief.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Three scheduling
techniques are:
1. FIFO Queuing
2. Priority Queuing
3. Weighted Fair Queuing
Router queues are susceptible to congestion by virtue of the limited buffer memory available to
them. When the rate of ingress traffic becomes larger than the amounts that can be forwarded
on the output interface, congestion is observed. The potential causes of such a situation mainly
involve:
Speed of incoming traffic surpasses the rate of outgoing traffic
The combined traffic from all the input interfaces exceeds overall output capacity
The router processor is incapable of handling the size of the forwarding table to
determine routing paths
To manage the allocation of router memory to the packets in such situations of congestion,
different disciplines might be followed by the routers to determine which packets to keep and
which packets to drop. Accordingly, we have the following important queuing disciplines in
routers:
FIFO Queuing: The default queuing scheme followed by most routers is FIFO, which generally
requires little or no configuration. All packets in FIFO are serviced in the same order in which
they arrive at the router. Once the buffer memory is saturated, new packets attempting to enter
the router are dropped (tail drop).
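A quick Python sketch of FIFO queuing with tail drop as described above; the buffer size and packet names are invented for illustration.

```python
from collections import deque

buffer_size = 3
queue = deque()

for packet in ["p1", "p2", "p3", "p4", "p5"]:
    if len(queue) < buffer_size:
        queue.append(packet)                  # accepted; serviced strictly in arrival order
    else:
        print(f"{packet} dropped (tail drop: buffer full)")

while queue:
    print("forwarding", queue.popleft())      # FIFO: first in, first out
```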
Priority Queuing (PQ): The obvious advantage of PQ is that higher-priority traffic is always
processed first. However, a significant disadvantage of the PQ scheme is that the lower-priority
queues can often receive no service at all as a result of starvation: a constant stream of
high-priority traffic can starve out the lower-priority queues.
Weighted Fair Queuing (WFQ): Unlike PQ schemes, WFQ queues are allotted differing bandwidths
based on their queue weights, and packets with a higher priority are scheduled before
lower-priority packets arriving at the same time. If a queue is empty, its share of the bandwidth
is redistributed among the remaining queues in proportion to their weights. For example, if three
queues are assigned 20%, 30% and 50% of the bandwidth and the 20% queue is idle, the 30% queue is
now allotted (75/2)% = 37.5% and the 50% queue is now allotted (125/2)% = 62.5% of the bandwidth.
The choice of queuing discipline impacts the performance of the network in terms of the
number of dropped packets, latency, etc. When analyzing the effect of choosing the different
schemes, we observe significant impacts on various parameters.
Fig 4: Number of packets dropped versus time for different queuing disciplines (simulation).
Measuring the overall packet drop in the network for the three schemes points to the following
results:
In all the mechanisms, there are no packet drops in the beginning, up to a certain
point. This is because it takes a finite time for router buffer memory to be filled up.
Since packet drops occur only after the buffer is full, thus there is an initial period
when there are no packet drops as the buffer capacity has not yet been
reached.
In FIFO scheme, the packet drop starts after PQ but before WFQ. More
prominently, the number of packets being dropped is the greatest in the case of
FIFO. This is by virtue of the fact that once congested, all incoming traffic from
all the apps is dropped altogether without any discrimination.
In PQ scheme, the packet drops start the earliest. Since PQ divides the queue based
on priority levels, the overall size of the individual queues is divided up. Assuming
a simple division of the memory into an “Important” Queue and a “Less
Important” Queue, the queue size is halved. Thus, the packets being directed to
the sub-queues will cause the queue to be filled up earlier (because of the smaller
capacity) and hence packet drop will start earlier
Traffic Shaping
It is a mechanism to control the amount and the rate of the traffic sent to the network. The
techniques used to shape traffic are: leaky bucket and token bucket.
Token Bucket:
The bucket holds tokens generated at regular intervals of time.
If there is a ready packet, a token is removed from the bucket and the packet is sent.
If there is no token in the bucket, the packet cannot be sent.
Leaky Bucket:
When the host has to send a packet, the packet is thrown into the bucket.
Bursty traffic is converted into uniform traffic by the leaky bucket.
In practice, the bucket is a finite queue that outputs at a finite rate.
For example, if a host is not sending for a while, its bucket becomes empty. If the host has
bursty data, the leaky bucket allows only an average rate; the time when the host is idle is not
taken into account. On the other hand, the token bucket algorithm allows idle hosts to accumulate
credit for the future in the form of tokens. That is how it overcomes the shortcoming of the
leaky bucket algorithm.
Handshaking Basics
In simple terms, connecting parties with a handshaking protocol means defining how the
communication will occur. Specifically, handshaking can establish, before full communication
begins, the protocol for exchanging messages, the data encoding, and the maximum transfer rates.
Several applications require handshaking. Next, we briefly describe how this process executes
for particular applications:
Transmission Control Protocol (TCP): Opening regular TCP connections requires a three-
way handshake. Thus, TCP aims to establish reliable communication by
synchronizing the message exchange between the two parties.
Transport Layer Security (TLS): A TLS connection requires a series of agreements
between clients and servers to secure the communication. In this way, the handshake
protocol from TLS is designed to define security features for a specific connection,
such as encryption algorithms and keys.
Simple Mail Transfer Protocol (SMTP): After establishing a TCP connection to exchange
messages, the SMTP servers and clients must identify themselves through a particular
handshake process. Hence, besides authenticating servers and users, this handshake
process also negotiates other communication features, such as the adopted
encryption protocol and the maximum message size.
Two-Way Handshake
The two-way handshake is a simple protocol to create a connection between two parties
that want to communicate. In order to do that, this protocol uses synchronization (SYN) and
acknowledgment (ACK) messages.
Briefly, an SYN message requests a connection and informs the other party of a sequence
number used to control the data exchange. In practice, the TCP sequence number is a counter of bytes
passing on a particular stream. ACK messages, in turn, are employed to confirm receipt (using
the sequence number) of incoming messages.
To accomplish the two-way handshaking considering a client/server model, the client sends an
SYN message to the server with a sequence number X. Then, the server should acknowledge
(ACK) the SYN message, providing another sequence number Y and establishing the
connection. Thus, sequence number X will acknowledge messages from the client to the server,
while sequence number Y will acknowledge messages from the server to the client.
The following figure depicts the previously described two-way handshake process:
Particularly, the two-way handshake presents potential problems when the ACK message
from the server delays too much. Thus, if a connection timeout occurs, the client sends another
SYN message with a new sequence number (Z, for example) to the server. However, if the server
previously sent an ACK (which is delayed), it’ll discard this new SYN message. The client, in
turn, receives the delayed ACK and assumes that it refers to the last sent SYN message. Here’s
where the error happens: the client will send messages with the sequence number Z, while the
server expects messages following the sequence number X.
The figure below shows the outlined problem:
Three-Way Handshake
So, let’s consider that a client wants to communicate with a server that employs the three-way
handshaking protocol. First, the client sends an SYN message with its sequence number (X) to
the server. The server replies with an SYN-ACK containing its own sequence number (Y) and
acknowledging the client’s sequence number (X). After that, the client sends an ACK message to
confirm the server’s sequence number (Y), finally establishing the connection.
The following figure demonstrates the presented three-way handshake process:
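The exchange above can be mimicked with a toy Python simulation that only prints the SYN, SYN-ACK, and ACK steps with invented sequence numbers X and Y; it is not a real TCP implementation.

```python
import random

# Client -> Server: SYN with the client's initial sequence number X
x = random.randint(0, 2**32 - 1)
print(f"client -> server: SYN, seq={x}")

# Server -> Client: SYN-ACK with its own sequence number Y, acknowledging X+1
y = random.randint(0, 2**32 - 1)
print(f"server -> client: SYN-ACK, seq={y}, ack={x + 1}")

# Client -> Server: ACK acknowledging Y+1 -- connection established
print(f"client -> server: ACK, ack={y + 1}")
print("connection established")
```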
Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories: open loop congestion control
(policies applied to prevent congestion before it happens) and closed loop congestion control
(techniques applied to remove congestion after it happens).
Open Loop Congestion Control
1. Retransmission Policy:
It is the policy in which retransmission of packets is taken care of. If the sender
feels that a sent packet is lost or corrupted, the packet needs to be retransmitted.
This retransmission may increase congestion in the network.
To prevent congestion, retransmission timers must be designed so that they prevent
congestion while still optimizing efficiency.
2. Window Policy:
The type of window at the sender’s side may also affect congestion. With a
Go-Back-N window, several packets are re-sent even though some of them may have been
received successfully at the receiver side. This duplication may increase congestion
in the network and make it worse. Therefore, a Selective Repeat window should be
adopted, since it retransmits only the specific packets that may have been lost or corrupted.
3. Discarding Policy: Routers can discard less sensitive packets to prevent congestion
while still maintaining the quality of the transmission; for example, in audio
transmission, less sensitive packets can be discarded without harming the quality of the audio.
4. Acknowledgment Policy:
Since acknowledgements are also part of the load on the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to reduce congestion related to acknowledgments:
the receiver can send a single acknowledgement for N packets rather than
acknowledging each packet, or send an acknowledgment only if it also has a packet
to send or a timer expires.
5. Admission Policy:
In the admission policy, a mechanism is used to prevent congestion before it occurs. Switches in
a flow should first check the resource requirements of a network flow before
transmitting it further. If there is a chance of congestion, or there is already congestion in
the network, the router should refuse to establish the virtual circuit connection to prevent
further congestion.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from the upstream
node. Backpressure is a node-to-node congestion control technique that propagates in the
opposite direction of the data flow.
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd
node may get congested due to the slowing down of the output data flow. Similarly, the 1st node
may get congested and inform the source to slow down.
2. Choke Packet:
A choke packet is a packet sent by a congested node directly to the source to inform it of the
congestion; the intermediate nodes through which the packet has travelled are not warned.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the source.
The source guesses that there is congestion in the network. For example, when a sender sends
several packets and there is no acknowledgment for a while, one assumption is that there is
congestion.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a signal to the
source or destination to inform it about the congestion. The difference between the choke packet
and explicit signaling methods is that in explicit signaling the signal is included in the
packets that carry data, rather than in a separate packet, as is the case with the choke packet
technique.
Explicit signaling can occur in either forward or backward direction.
Congestion control begins with network traffic monitoring and measurement. These metrics
include packet loss, delay, and throughput, which network administrators can analyze
continuously to identify signs of congestion and to take appropriate measures.
Traffic Regulation:
Traffic Shaping:
Traffic shaping slows down data transmission by smoothing the flow of traffic using
techniques such as buffering, queuing, and prioritization.
Traffic Policing:
Traffic policing enforces pre-defined traffic rates. It examines packets as they enter the
network and drops or marks packets that exceed the specified rate. This prevents excessive
traffic from congesting networks.
ECN is a congestion notification mechanism that allows routers to mark packets to indicate
network congestion. Congestion is signalled by setting a bit in the packet headers.
When the receiver or subsequent routers detect the congestion mark, they can notify the sender,
triggering congestion control mechanisms.
The management of Quality of Service (QoS) means ensuring that critical applications
receive preferential treatment and are less likely to experience congestion-related issues by
prioritizing certain types of network traffic. A Quality of Service mechanism allocates
network resources according to predefined priorities and rules.
Network traffic is distributed across multiple paths and resources by load balancing and
routing optimization techniques. Congestion can be mitigated, and network resources can be
more effectively utilized, if the load is distributed evenly and efficiently.
From this example, we can say that the traffic is spread over both R5-R11 and R6-
R12.
Admission Control
In a leaky bucket, incoming packets fall into the bucket and leak out at a constant
rate. The incoming rate may vary, but the outgoing rate remains constant.
In the leaky bucket technique, if the host does not send any packets for some time,
its bucket becomes empty. That time is wasted because none of the packets can use it.
In the token bucket, the host stores credit for the future in the form of tokens.
Leaky Bucket detects the average data rate of incoming traffic and converts it to
fixed-rate traffic. If the bucket on the network is full, it drops the packet.
Admission control mechanisms can be combined with traffic-aware routing and
used across networks.
Traffic Throttling
When communication occurs between a sender and a receiver on a network, the sender sends as
much traffic to the receiver as possible. If the network becomes congested by receiving a large
number of packets, it notifies the sender to throttle back and slow down the transmission.
The traffic throttling approach is used on both datagram networks and virtual-
circuit networks.
Routers monitor the resources they have, check the links over which packets arrive, and
check the buffer of queued packets and the number of packets lost. Traffic can be
throttled using this information.
In another approach to throttling traffic, the router looks at the delay experienced by the
incoming packets. By measuring the delay, the router decides whether there is any
congestion in the network. This is a common technique used on most networks.
But what does the router do if it experiences congestion? If congestion is experienced, the
router notifies the sender of the affected packets and alerts it to the congestion.
To respond to congestion, routers use different schemes. They are as follows:
o Choke Packets
o Explicit Congestion Notification
o Hop-by-Hop Backpressure
Typically, the router uses choke packets, explicit congestion notification (ECN), and hop-by-hop
backpressure mechanisms to notify the sender about congestion.
Choke Packets: In this method, the router directly informs the sender about the congestion. The
router generates a choke packet for a packet that experiences congestion and sends it back to the
sender.
When the sender receives a choke packet from the router, it reduces the data
sending rate by 40-50% to prevent congestion on the network.
If the network is still congested after the sender lowers the data rate, the
router sends choke packets again. The sender keeps receiving choke packets until the
network is balanced.
Explicit Congestion Notification (ECN): Instead of sending choke packets to the source, the
router sets the Congestion Experienced (CE) bit in the packets it forwards. When the receiver
sees the CE bit and echoes it back, the sender knows that the packets it sends are experiencing
congestion. This mechanism is known as Explicit Congestion Notification (ECN).
The diagram below explains the explicit congestion notification method.
As shown in the figure, when PC-1 sends a packet to PC-2 on the network, it passes
through two routers.
At first, the packet is unmarked, which means that the Congestion Experienced (CE)
bit is not set. As the packet enters Router-2, which is congested, the router sets
the CE bit in the packet.
When PC-2 receives the packet, it checks whether the CE bit is set; if it is, it
sends an Echo packet to the sender, informing it that the packet experienced
congestion at Router-2.
Upon receiving an echo reply, the sender throttles the traffic and slows down the
transmission speed.
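To make the CE marking concrete: in IP, the ECN field occupies the two low-order bits of the TOS/Traffic Class byte, and the codepoint 11 means Congestion Experienced (RFC 3168). The short Python sketch below, with example values, shows a router-side marking step and a receiver-side check.

```python
ECN_MASK = 0b11     # two low-order bits of the IPv4 TOS / IPv6 Traffic Class byte
ECN_CE   = 0b11     # "Congestion Experienced" codepoint (RFC 3168)

def mark_congestion(tos: int) -> int:
    """Router side: set the CE codepoint on a packet forwarded while congested."""
    return (tos & ~ECN_MASK) | ECN_CE

def congestion_experienced(tos: int) -> bool:
    """Receiver side: check whether the CE bit is set and an echo should be sent."""
    return (tos & ECN_MASK) == ECN_CE

tos = 0b0000_0010            # example: ECT(0) set by the sender, no congestion yet
tos = mark_congestion(tos)   # the congested router marks the packet
print(congestion_experienced(tos))   # True -> the receiver echoes congestion to the sender
```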
Hop-by-Hop Backpressure: In this method, choke packets take effect at every hop they pass through
on the way back toward the source, rather than only at the source. Let us understand with an example.
The below diagram explains the hop-by-hop backpressure mechanism.
Load Shedding
When none of the above methods work to prevent congestion on the network, the router starts
discarding packets to balance the network. This is known as load shedding, in which routers
simply drop packets when they cannot handle them.
The key point in load shedding is how the router discards the packets. This varies
according to the applications and services used by the network.
For example, if two devices are transferring files using FTP, the old packets are
more important than the new packets.
If a packet contains routing information, it is considered as an important packet
because the loss of this packet can result in network downtime.
In the load shedding method, packets are marked with their priority, which the router
uses to decide how important they are to the network. This allows the router to
enforce an intelligent discard policy.
Random early detection is a part of the load shedding method used to deal with
congestion.
Random Early Detection (RED) is an algorithm used to determine when to discard packets.
Using the RED mechanism, the router decides which packets to drop quickly before they cause
problems on the network.
Senders may be sending packets over the network at a very high rate. In RED, the
router selects packets at random and discards them before the queue becomes
completely full. Because the packets are selected randomly, the router does not need
to determine which sender is causing the most trouble in the network.
When the sender does not receive an acknowledgment for its sent packets, it knows
that the sent packets experienced congestion and were discarded by the router.
In response, it reduces its data rate and hence the throughput.
Routers using the RED mechanism improve network performance because packets are
dropped early, before the buffer is completely full, which makes senders slow down
before serious congestion sets in.
RED is similar to ECN but is used when senders cannot receive an explicit signal.
If the ECN is available on the network, notifying the sender about congestion is
the preferred option.
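As a simplified sketch of the RED idea described above (real RED implementations also track the count of packets since the last drop), the following Python class keeps an exponentially weighted average queue length and drops arriving packets with a probability that grows between a minimum and a maximum threshold; all thresholds and weights are example values.

```python
import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.queue = []
        self.avg = 0.0           # exponentially weighted average of the queue length
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def enqueue(self, packet) -> bool:
        # update the average queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(packet)          # no congestion: always accept
            return True
        if self.avg >= self.max_th:
            return False                       # heavy congestion: always drop
        # in between: drop with a probability that rises toward max_p
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True

q = REDQueue()
accepted = sum(q.enqueue(i) for i in range(30))
print(f"{accepted} of 30 packets accepted")
```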