
Unit-II

Link Layer
Framing: Data framing is the process of organizing data into frames for transmission
over a network. It's a key function of the data link layer in computer networks.

Data transmission in the physical layer is the synchronized transfer of bits from the source to the destination. The data link layer bundles these bits into frames. Particularly in computer networks and telecommunications, frames are the basic building blocks of digital transmission; on the physical medium their bits travel as electrical, optical, or radio signals. The term frame is also used in time division multiplexing, where it denotes a repeating group of time slots.
The data link layer creates frames by encapsulating the packets coming from the network layer. If a packet is too large for a single frame, it may be split across several smaller frames; flow control and error control are handled more effectively with smaller frames. These frames are transmitted bit by bit by the hardware. At the receiver's end, the data link layer gathers the signals from the hardware and reassembles them into frames.

Parts of a Frame
Frame Header − It contains the frame's source and destination addresses.
Payload field − It carries the message to be delivered.
Trailer − It contains the bits used for error detection and correction.
Flag − It marks the beginning and end of the frame.
Types of Framing
There are two different forms of framing: fixed size framing and variable size framing.

1. Fixed-sized Framing
Since the frame’s size is constant in the fixed-sized framing, its length serves as
its boundary. As a result, it is not necessary to use additional boundary bits to
specify the beginning and end of the frame. ATM cells are an example of this.

2. Variable-Size Framing
Our main discussion in this chapter concerns variable-size framing, prevalent
in local area networks. In variable-size framing, we need a way to define the
end of the frame and the beginning of the next. Historically, two approaches
were used for this purpose: a character-oriented approach and a bit-oriented
approach.

Framing Methods:
1. Character count
2. Starting and Ending character with Character Stuffing
3. Starting and ending flags with bit stuffing
4. Encoding Violation.
1. Character Count
The first framing method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow and hence where the frame ends.

For example, consider the data: 1 2 3 4 5 6 7 8 9 0 1 2 3

Explanation
Step 1 − The starting field of each frame carries the character count. The first frame carries the count 5, so it consists of 5 units including that number, i.e. the data 1, 2, 3, 4.

Step 2 − The second frame's header also carries the count 5, so the second frame consists of the data 5, 6, 7, 8; the count tells the receiver that 8 is the last character of this frame.

Step 3 − The third frame's header carries the count 6, which means the frame consists of 6 characters including the count itself, so the data in the third frame is 9, 0, 1, 2, 3.

Step 4 − If none of the counts is corrupted, the data is delivered to the receiver without framing errors; if a count is garbled, however, the receiver loses track of where frames begin and end.
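
As a quick illustration of this example, a receiver could use the count field to split the stream back into frames. The following Python sketch is illustrative only (the function names are invented for this example, not part of any standard library):

    # Character-count framing: the first unit of each frame is the count,
    # and the count includes itself.

    def frame_by_count(data, counts):
        """Build frames: each frame = [count] + the next (count - 1) data units."""
        frames, i = [], 0
        for c in counts:
            frames.append([c] + data[i:i + c - 1])
            i += c - 1
        return frames

    def deframe_by_count(stream):
        """Receiver side: read the count, then take the count - 1 following units."""
        frames, i = [], 0
        while i < len(stream):
            count = stream[i]
            frames.append(stream[i + 1:i + count])
            i += count
        return frames

    data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3]
    frames = frame_by_count(data, [5, 5, 6])
    stream = [u for f in frames for u in f]      # what travels on the wire
    print(frames)                                # [[5,1,2,3,4],[5,5,6,7,8],[6,9,0,1,2,3]]
    print(deframe_by_count(stream))              # [[1,2,3,4],[5,6,7,8],[9,0,1,2,3]]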


2. Starting and Ending Character with Character Stuffing: The problem of the character count method is solved here by using a starting character sequence before each frame and an ending character sequence after each frame. Each frame is preceded by the ASCII character sequence DLE STX (DLE stands for Data Link Escape and STX for Start of TeXt) and followed by the sequence DLE ETX (ETX stands for End of TeXt). Hence, if the receiver loses synchronization, it only has to search for the DLE STX or DLE ETX sequences to get back on track.
Character stuffing
In this method, each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of TeXt and ETX is End of TeXt). This overcomes the drawback of the character count method: if the destination ever loses synchronization, it only has to look for the DLE STX and DLE ETX sequences. If, however, binary data is being transmitted, the character patterns DLE STX and DLE ETX may occur within the data itself. Since this can interfere with the framing, a technique called character stuffing is used: the sender's data link layer inserts an extra ASCII DLE character just before each DLE character that occurs in the data. The receiver's data link layer removes this stuffed DLE before the data is given to the network layer. Character stuffing is, however, closely tied to 8-bit characters, which is a major hurdle when transmitting data of arbitrary bit lengths.
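
As a rough sketch of character stuffing, assuming DLE, STX and ETX are represented by their single-byte ASCII codes (illustrative Python, not a production framing implementation):

    DLE, STX, ETX = b'\x10', b'\x02', b'\x03'    # ASCII codes for DLE, STX, ETX

    def stuff(payload: bytes) -> bytes:
        """Double every DLE in the payload and wrap it in DLE STX ... DLE ETX."""
        body = payload.replace(DLE, DLE + DLE)
        return DLE + STX + body + DLE + ETX

    def unstuff(frame: bytes) -> bytes:
        """Strip the framing sequences and collapse doubled DLEs."""
        body = frame[2:-2]                       # drop DLE STX and DLE ETX
        return body.replace(DLE + DLE, DLE)

    data = b'A' + DLE + b'B'                     # binary data that contains a DLE
    frame = stuff(data)
    assert unstuff(frame) == data                # receiver recovers the original data
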
3. Starting and ending flags with bit stuffing:
1. Definition: This technique allows frames to contain an arbitrary number of bits and to use character codes other than ASCII. At the beginning and end of each frame, a specific bit pattern, 01111110, called the flag byte, is transmitted. Since this byte contains six consecutive 1s, a technique called bit stuffing, similar to character stuffing, is used to keep the pattern from appearing inside the data.

2. Bit Stuffing: Whenever the sender's data link layer detects five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This is called bit stuffing and is illustrated in the given figure.
De-stuffing: When the receiver detects five consecutive 1s in the received bit stream, it automatically deletes the 0 bit that follows them. This is called de-stuffing and is shown in the given figure. Because of bit stuffing, the flag pattern (0111 1110) can never appear inside the data, so the framing problem it would otherwise cause is eliminated.
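
A minimal sketch of bit stuffing and de-stuffing, operating on a string of '0'/'1' characters (illustrative Python; a real implementation works on raw bits in hardware or a tight loop):

    FLAG = "01111110"

    def bit_stuff(bits: str) -> str:
        """Insert a 0 after every run of five consecutive 1s."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                out.append("0")              # stuffed bit
                run = 0
        return "".join(out)

    def bit_destuff(bits: str) -> str:
        """Remove the 0 that follows every run of five consecutive 1s."""
        out, run, skip = [], 0, False
        for b in bits:
            if skip:                         # drop the stuffed 0
                skip = False
                run = 0
                continue
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                skip = True
        return "".join(out)

    data = "0111111101111100"                # contains a flag-like pattern
    framed = FLAG + bit_stuff(data) + FLAG   # stuffed payload between flags
    assert bit_destuff(bit_stuff(data)) == data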

Physical Layer Coding Violations: This method of framing is applicable only to networks in which the encoding on the physical medium contains some redundancy. Some LANs encode each bit of data using two physical bits; Manchester coding is generally used, in which a 1 bit is encoded as a 10 pair and a 0 bit as a 01 pair, as shown in the figure below. The combinations 00 and 11 are never used for data, so such invalid physical codes can be used to mark frame boundaries; this use of invalid codes is part of the 802 LAN standards.
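
A tiny sketch of the idea, assuming the 1 → 10 and 0 → 01 convention described above (illustrative Python; the delimiter pattern shown is hypothetical, and real 802 physical layers are considerably more involved):

    # Manchester-style encoding: each data bit becomes a pair of physical bits.
    ENCODE = {"1": "10", "0": "01"}

    def manchester_encode(bits: str) -> str:
        return "".join(ENCODE[b] for b in bits)

    # The pairs 00 and 11 never appear for data, so they are "coding violations"
    # that a frame delimiter can exploit.
    DELIMITER = "1100"                        # hypothetical invalid-code pattern

    frame_on_wire = DELIMITER + manchester_encode("1011") + DELIMITER
    print(frame_on_wire)                      # 1100 10 01 10 10 1100 (without spaces)
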
Error control in the Data Link Layer is a crucial aspect of ensuring reliable
communication between devices over a network. The Data Link Layer (Layer 2 of the OSI
model) is responsible for framing, addressing, and error detection and correction. Errors
can occur due to noise, interference, or collisions on the communication medium. To
manage these errors, various error control techniques are employed.
Here’s a breakdown of error control techniques used in the Data Link Layer:

In data communication, errors refer to the discrepancies or alterations in transmitted data due to various factors like noise, interference, or signal degradation. These errors can occur during transmission, reception, or processing of the data. Understanding the types of errors helps in selecting appropriate error detection and correction techniques.
Here are the main types of errors that can occur in data communication:
Error Detection
When data is transmitted from one device to another, there is no guarantee that the data received by the destination is identical to the data that was transmitted. An error is a situation in which the message received at the receiver end is not identical to the message transmitted.

Single-Bit Error: Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
In the above figure, the transmitted message is corrupted by a single-bit error: a 0 bit is changed to 1.
Single-bit errors are the least likely type of error in serial data transmission. For example, if the sender transmits at 10 Mbps, each bit lasts only 0.1 µs; for only a single bit to be corrupted, the noise would have to last no longer than 0.1 µs, and noise normally lasts much longer than that.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to send the eight bits of a byte and one of the wires is noisy, one bit is corrupted in each byte.
Burst Error: Two or more bits of the data unit are changed from 0 to 1 or from 1 to 0. This is known as a burst error.
A burst error is measured from the first corrupted bit to the last corrupted bit.
The duration of the noise in a burst error is longer than the duration of the noise in a single-bit error.
Burst errors are most likely to occur in serial data transmission.
The number of affected bits depends on the duration of the noise and the data rate.

Error Detecting Techniques:


The most popular Error Detecting Techniques are:
1. Single parity check
2. Two-dimensional parity check
3. Checksum
4. Cyclic redundancy check
1. Single Parity Check:
Single parity checking is the simplest and least expensive mechanism for detecting errors.
In this technique, a redundant bit, known as a parity bit, is appended to the end of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits is therefore 9.
If the number of 1s in the data is odd, a parity bit of 1 is appended; if the number of 1s is even, a parity bit of 0 is appended at the end of the data unit.
At the receiving end, the parity bit is recalculated from the received data bits and compared with the received parity bit.
Because this technique makes the total number of 1s even, it is known as even-parity checking.
Drawbacks of Single Parity Checking:
It can detect only errors that affect an odd number of bits; such single-bit errors are comparatively rare.
If two bits are flipped, the parity is unchanged and the error cannot be detected.
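
A minimal even-parity sketch (illustrative Python; real systems compute this with a chain of XOR gates):

    def add_even_parity(data_bits: str) -> str:
        """Append a parity bit so the total number of 1s is even."""
        parity = "1" if data_bits.count("1") % 2 == 1 else "0"
        return data_bits + parity

    def check_even_parity(unit: str) -> bool:
        """Accept the unit only if the total number of 1s is even."""
        return unit.count("1") % 2 == 0

    unit = add_even_parity("10110010")        # 8 data bits -> 9 transmitted bits
    assert check_even_parity(unit)            # no error: accepted
    corrupted = "0" + unit[1:]                # flip the first bit
    assert not check_even_parity(corrupted)   # single-bit error: detected
    two_flips = "01" + unit[2:]               # flip the first two bits
    print(check_even_parity(two_flips))       # True -> double error goes undetected
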
Two-Dimensional Parity Check
Performance can be improved by using a two-dimensional parity check, which organizes the data in the form of a table.
Parity bits are computed for each row, which is equivalent to the single parity check.
In the two-dimensional parity check, a block of bits is divided into rows, and a redundant row of column-parity bits is added to the whole block.
At the receiving end, the parity bits are compared with the parity bits computed from the received data.

Drawbacks of 2D Parity Check
If two bits in one data unit are corrupted and two bits in exactly the same positions in another data unit are also corrupted, the 2D parity checker will not be able to detect the error.
In some cases, this technique cannot detect errors of four or more bits.
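
A sketch of a two-dimensional parity check, assuming the data is organized as equal-length rows of bits (illustrative Python only):

    def row_parity(row):
        return sum(row) % 2                   # even-parity bit for one row

    def make_block(rows):
        """Append a parity bit to each row, then append a column-parity row."""
        with_row_parity = [row + [row_parity(row)] for row in rows]
        column_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
        return with_row_parity + [column_parity]

    def check_block(block):
        """Every row and every column of the received block must sum to even."""
        rows_ok = all(sum(r) % 2 == 0 for r in block)
        cols_ok = all(sum(c) % 2 == 0 for c in zip(*block))
        return rows_ok and cols_ok

    data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
    block = make_block(data)
    assert check_block(block)                 # clean block accepted
    block[0][0] ^= 1                          # corrupt one bit
    assert not check_block(block)             # error detected
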
Checksum
A checksum is an error detection technique based on the concept of redundancy.
It is divided into two parts:
Checksum Generator
A checksum is generated at the sending side. The checksum generator subdivides the data into equal segments of n bits each, and all these segments are added together using one's complement arithmetic. The sum is complemented and appended to the original data as the checksum field. The extended data is transmitted across the network.
Suppose L is the one's complement sum of the data segments; then the checksum is the complement of L, i.e. ~L.
The sender follows the given steps:
The block unit is divided into k sections, each of n bits.
All k sections are added together using one's complement arithmetic to get the sum.
The sum is complemented and becomes the checksum field.
The original data and the checksum field are sent across the network.

Checksum Checker
The checksum is verified at the receiving side. The receiver subdivides the incoming data into equal segments of n bits each, adds all these segments (including the checksum field) together, and then complements the sum. If the complement of the sum is zero, the data is accepted; otherwise the data is rejected.
The receiver follows the given steps:
The block unit is divided into k sections, each of n bits.
All k sections are added together using one's complement arithmetic to get the sum.
The sum is complemented.
If the result is zero, the data is accepted; otherwise the data is discarded.
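
A sketch of the generator and checker, assuming n = 8-bit segments for simplicity (illustrative Python; real protocols such as the Internet checksum use 16-bit words):

    MASK = 0xFF                                   # n = 8-bit segments (assumed)

    def ones_complement_sum(segments):
        """Add segments with end-around carry (one's complement arithmetic)."""
        total = 0
        for s in segments:
            total += s
            total = (total & MASK) + (total >> 8)    # wrap the carry back in
        return total

    def make_checksum(segments):
        return (~ones_complement_sum(segments)) & MASK

    def verify(segments_with_checksum):
        """At the receiver: sum everything and complement; zero means accept."""
        return (~ones_complement_sum(segments_with_checksum)) & MASK == 0

    data = [0x47, 0xA2, 0x3B, 0xFF]               # k = 4 sections of n = 8 bits
    checksum = make_checksum(data)
    assert verify(data + [checksum])              # clean data: accepted
    assert not verify([0x48, 0xA2, 0x3B, 0xFF, checksum])   # corrupted: rejected
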
Cyclic Redundancy Check (CRC)
CRC is a redundancy-based error detection technique.
Following are the steps used in CRC for error detection:
•In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined divisor, which is n + 1 bits long.
•Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
•Thirdly, the CRC remainder replaces the appended 0s at the end of the original data, and this newly generated unit is sent to the receiver.
•The receiver receives the data followed by the CRC remainder. The receiver treats this whole unit as a single unit and divides it by the same divisor that was used to find the CRC remainder.

If the remainder of this division is zero, the unit contains no detectable error and the data is accepted.
If the remainder of this division is not zero, the data contains an error and is therefore discarded.
Let's understand this concept through an example:
Suppose the original data is 11100 and divisor is 1001.
CRC Generator
A CRC generator uses modulo-2 division. First, three zeros are appended at the end of the data, because the divisor is 4 bits long and the number of 0s to be appended is always one less than the length of the divisor.
The string therefore becomes 11100000, and this string is divided by the divisor 1001.
The remainder generated from the binary division is the CRC remainder; here its value is 111.
The CRC remainder replaces the appended string of 0s at the end of the data unit, so the final string is 11100111, which is sent across the network.
CRC Checker
The functionality of the CRC checker is similar to that of the CRC generator.
When the string 11100111 is received at the receiving end, the CRC checker performs modulo-2 division: the string is divided by the same divisor, 1001.
In this case, the CRC checker generates a remainder of zero, so the data is accepted.
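
The modulo-2 division in this example can be reproduced with a short sketch (illustrative Python; practical CRC implementations use table-driven or shift-register logic):

    def mod2_div(dividend: str, divisor: str) -> str:
        """Modulo-2 (XOR) division; returns the remainder as a bit string."""
        bits = list(dividend)
        n = len(divisor) - 1
        for i in range(len(dividend) - n):
            if bits[i] == "1":                       # XOR the divisor in
                for j, d in enumerate(divisor):
                    bits[i + j] = str(int(bits[i + j]) ^ int(d))
        return "".join(bits[-n:])

    data, divisor = "11100", "1001"
    rem = mod2_div(data + "000", divisor)     # sender appends 3 zeros, then divides
    print(rem)                                # 111, as in the example above
    codeword = data + rem                     # 11100111 is transmitted
    print(mod2_div(codeword, divisor))        # 000 at the receiver: data accepted
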
Error Correction
Error correction is much more complex than error detection, as we said earlier. In error detection, the receiver only needs to realize that the received codeword is invalid; in error correction, the receiver must locate (or guess) the original codeword that was sent. We can therefore expect error correction to require more redundant bits than error detection.
Errors and error-correcting codes
In error correction, the encoder and decoder have different structures.
Bits can be corrupted while being transferred over a data network due to congestion and other network issues. Distorted bits cause the recipient to receive erroneous data. Within the limits of the algorithm, error-correcting codes determine the exact number of corrupted bits and where they occurred. Error-correcting codes (ECC) are sequences of redundant symbols generated by algorithms for detecting and correcting errors in data transmitted over a noisy medium.
Error-correcting codes can be divided into two categories.

Block codes − The message is broken down into fixed-size blocks of bits, and redundant bits are added for error detection and correction.
Convolution codes − Parity symbols are created from a data stream of arbitrary length by sliding a Boolean function over the stream.
Hamming Code
Hamming code is a block code that can identify and correct single-bit errors while detecting up
to two simultaneous bit errors. It was created by R.W. Hamming for the purpose of error
correction. The source encodes the message using this coding process by adding redundant bits
into the message. Extra bits are produced and placed at unique locations in the message to
allow error detection and correction. When the destination receives this message, it performs
recalculations in order to locate errors and determine which bit location is incorrect.
Hamming code is thus a method of encoding data with redundant bits.
The procedure used by the sender to encode the message encompasses the following steps −
Step 1 − Calculation of the number of redundant bits.
Step 2 − Positioning the redundant bits.
Step 3 − Calculating the values of each redundant bit.
Once the redundant bits are embedded within the message, it is sent to the receiver.
Decoding a Hamming Code message
When a message is received, the recipient performs recalculations to find and correct errors. The recalculation procedure is as follows:
Step 1: Determine how many redundant bits there are.
Step 2: Position the redundant bits in their proper places.
Step 3: Check the parity bits.
Step 4: Identify and correct errors.
Step 1 − Calculation of the number of redundant bits
Using the same formula as in encoding, the number of redundant bits is ascertained:
2^r ≥ m + r + 1, where m is the number of data bits and r is the number of redundant bits.
Step 2 − Positioning the redundant bits
The r redundant bits are placed at the bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.
Step 3 − Parity checking
Parity bits are calculated from the data bits and the redundant bits, using the same rule as during the generation of c1, c2, c3, c4, etc. Thus
c1 = parity(1, 3, 5, 7, 9, 11 and so on)
c2 = parity(2, 3, 6, 7, 10, 11 and so on)
c3 = parity(4-7, 12-15, 20-23 and so on)
Step 4 − Error detection and correction
The decimal equivalent of the parity bits' binary values is calculated (with c1 as the least significant bit). If it is 0, there is no error. Otherwise, the decimal value gives the bit position which is in error. For example, if c1c2c3c4 = 1001, the data bit at position 9 (the decimal equivalent of 1001) is in error. That bit is flipped to get the correct message.
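
The steps above can be sketched for a small case, m = 4 data bits and r = 3 redundant bits, which satisfies 2^r ≥ m + r + 1 (an illustrative Hamming(7,4) sketch in Python, not a general implementation):

    def hamming_encode(d):
        """d = [d1, d2, d3, d4]; returns a 7-bit codeword with parity at 1, 2, 4."""
        d1, d2, d3, d4 = d
        c1 = d1 ^ d2 ^ d4             # covers positions 1, 3, 5, 7
        c2 = d1 ^ d3 ^ d4             # covers positions 2, 3, 6, 7
        c3 = d2 ^ d3 ^ d4             # covers positions 4, 5, 6, 7
        return [c1, c2, d1, c3, d2, d3, d4]      # positions 1..7

    def hamming_check(code):
        """Recompute parities; the syndrome gives the 1-based error position."""
        p = lambda *idx: sum(code[i - 1] for i in idx) % 2
        s1 = p(1, 3, 5, 7)
        s2 = p(2, 3, 6, 7)
        s3 = p(4, 5, 6, 7)
        return s3 * 4 + s2 * 2 + s1 * 1          # 0 means no error

    code = hamming_encode([1, 0, 1, 1])
    assert hamming_check(code) == 0              # clean codeword
    code[4] ^= 1                                 # corrupt position 5 (data bit d2)
    pos = hamming_check(code)
    print(pos)                                   # 5
    code[pos - 1] ^= 1                           # flip it back: error corrected
    assert hamming_check(code) == 0
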
Data Link Control
Data link control and media access control are the two primary functions of the data link layer, as shown in the figure. The first, data link control, is concerned with the design and procedures for node-to-node communication between two adjacent nodes. The second function, media access control, deals with how the shared link is accessed. Framing, flow and error control, and the software-implemented protocols that provide smooth and reliable frame transfer between nodes are all data link control functions.
Flow Control and Error Control
Data transmission requires the cooperation of at least two devices, one that sends the data and one that receives it. Even such a simple arrangement requires a great deal of coordination for an intelligible exchange to take place. The two most critical functions of the data link layer are flow control and error control; collectively, these functions are referred to as data link control. The basic functions of the data link layer are framing, error control, and flow control, as seen in the given figure.
Types of Data Link Protocols
The protocols are divided into those that can be used on noiseless (error-free) channels and those that can be used on noisy (error-producing) channels. The protocols in the first group cannot be used in real life, but they provide a foundation for understanding the protocols for noisy channels. The classification is shown in the figure below. There is a distinction between the protocols we address here and the protocols used in actual networks. In all of the protocols we address, data frames pass from one node, referred to as the sender, to another node, referred to as the receiver.
Data flows in just one direction, although special frames called acknowledgment (ACK) and negative acknowledgment (NAK) can flow in the opposite direction for flow and error control. In a real-world network, data link protocols are bidirectional, allowing data to flow in both directions; in such protocols a method called piggybacking is used to carry flow and error control information, such as ACKs and NAKs, inside the data frames. Because bidirectional protocols are more complicated than unidirectional ones, we discuss the unidirectional case here; once it is understood, it can be generalized to bidirectional protocols.

NOISELESS CHANNELS

Simplest Protocol
It is very simple: the sender sends a sequence of frames without even thinking about the receiver. Data is transmitted in one direction only. Both sender and receiver are always ready, processing time can be ignored, infinite buffer space is available, and, best of all, the communication channel between the data link layers never damages or loses frames. This thoroughly unrealistic protocol, which we nickname "Utopia", is unrealistic because it handles neither flow control nor error control.
Stop-and-wait Protocol
It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. It is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other direction. This adds flow control to our previous protocol.

NOISY CHANNELS
Although the Stop-and-Wait Protocol shows how to add flow control to its predecessor, noiseless channels do not exist in practice. We can either ignore errors (as we sometimes do) or add error control to our protocols. In this section we discuss three protocols that use error control.
Sliding Window Protocols
1 Stop-and-Wait Automatic Repeat Request
2 Go-Back-N Automatic Repeat Request
3 Selective Repeat Automatic Repeat Request
1 Stop-and-Wait Automatic Repeat Request
To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the frame arrives at the receiver site, it is checked, and if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the receiver.
Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way to identify a frame: the received frame could be the correct one, a duplicate, or a frame out of order. The solution is to number the frames. When the receiver receives a data frame that is out of order, this means that frames were either lost or duplicated.
The lost frames need to be resent in this protocol. If the receiver does not respond when there is an error, how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the sent frame and, at the same time, starts a timer. If the timer expires and there is no ACK for the sent frame, the frame is resent, the copy is kept, and the timer is restarted. Since the protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK. Error correction in Stop-and-Wait ARQ is thus done by keeping a copy of the sent frame and retransmitting it when the timer expires.
In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic.
In Stop-and-Wait ARQ, the acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.
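
A highly simplified sketch of the sender-side behaviour described above (illustrative Python; send_frame and wait_for_ack are hypothetical stand-ins for the real channel, and the toy channel below never loses frames):

    TIMEOUT = 2.0                                   # assumed timeout value, seconds

    def send_reliably(frames, send_frame, wait_for_ack):
        """Stop-and-Wait ARQ sender: one outstanding frame, modulo-2 numbering."""
        seq = 0
        for payload in frames:
            copy = (seq, payload)                   # keep a copy of the sent frame
            while True:
                send_frame(copy)                    # (re)transmit
                ack = wait_for_ack(TIMEOUT)         # None would model an expired timer
                if ack == (seq + 1) % 2:            # ACK announces next expected seq
                    break                           # acknowledged: stop the timer
                # otherwise: timeout or wrong ACK -> resend the stored copy
            seq = (seq + 1) % 2

    # Toy in-memory "channel" so the sketch can be exercised:
    received, expected = [], 0
    def send_frame(frame):
        global expected
        seq, payload = frame
        if seq == expected:                         # in-order frame: deliver it
            received.append(payload)
            expected = (expected + 1) % 2
    def wait_for_ack(timeout):
        return expected                             # receiver names the next frame

    send_reliably(["F0", "F1", "F2"], send_frame, wait_for_ack)
    print(received)                                 # ['F0', 'F1', 'F2']
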
Bandwidth-Delay Product:
Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?

The bandwidth-delay product is (1 × 10^6 bits/s) × (20 × 10^-3 s) = 20,000 bits. The sender can send only 1000 bits (one frame) and must then wait for the acknowledgment, so the link utilization is only 1000/20,000, or 5 percent. For this reason, on a link with high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.
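
The same arithmetic in a few lines of Python, using the simplified utilization formula of this example:

    bandwidth = 1_000_000        # bits per second
    rtt = 20e-3                  # round-trip time in seconds
    frame_size = 1000            # bits

    bdp = bandwidth * rtt                    # bits the link can hold per round trip
    utilization = frame_size / bdp           # only one frame outstanding at a time
    print(bdp, utilization)                  # 20000.0 0.05 -> 5 percent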

2. Go-Back-N Automatic Repeat Request


To improve the efficiency of transmission (filling the pipe), multiple frames must be in transit while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment. The first such protocol is called Go-Back-N Automatic Repeat Request. In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
The sequence numbers range from 0 to 2^m − 1. For example, if m is 4, the sequence numbers are 0 through 15 inclusive.
The sender window at any time divides the possible sequence numbers into four regions.
The first region, from the far left to the left wall of the window, defines the sequence
numbers belonging to frames that are already acknowledged. The sender does not worry
about these frames and keeps no copies of them. The second region, colored in Figure (a),
defines the range of sequence numbers belonging to the frames that are sent and have an
unknown status. The sender needs to wait to find out if these frames have been received
or were lost. We call these outstanding frames. The third range, white in the figure, defines
the range of sequence
numbers for frames that can be sent; however, the corresponding data packets have not
yet been received from the network layer. Finally, the fourth region defines sequence
numbers that cannot be used until the window slides
The send window is an abstract concept defining an imaginary box of maximum size 2^m − 1 with three variables: Sf, Sn, and Ssize.
The variable Sf defines the sequence number of the first (oldest) outstanding frame. The variable Sn holds the sequence number that will be assigned to the next frame to be sent. Finally, the variable Ssize defines the size of the window. Figure (b) shows how a send window can slide one or more slots to the right when an acknowledgment arrives from the other end. The acknowledgments in this protocol are cumulative, meaning that more than one frame can be acknowledged by a single ACK frame. In the figure, frames 0, 1, and 2 are acknowledged, so the window has slid three slots to the right. Note that the value of Sf is now 3 because frame 3 is the first outstanding frame.
The send window can slide one or more slots when a valid acknowledgment arrives
Receiver window:
The receive window is defined by a single variable, Rn (the sequence number of the next frame expected).
The sequence numbers to the left of the window belong to frames already received and acknowledged; the sequence numbers to the right of this window define the frames that cannot be received. Any received frame with a sequence number in these two regions is discarded. Only a frame with a sequence number matching the value of Rn is accepted and acknowledged. The receive window also slides, but only one slot at a time: when a correct frame is received (and frames are received only one at a time), the window slides (see the figure below for the receive window). The receive window is thus an abstract concept defining an imaginary box of size 1 with one single variable, Rn; the window slides when a correct frame has arrived, and sliding occurs one slot at a time.
Timers
Although there can be a timer for each frame that is sent, in our protocol we use only one.
The reason is that the timer for the first outstanding frame always expires first; we send all
outstanding frames when this timer expires.
Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and sound and
in order. If a frame is damaged or is received out of order, the receiver is silent and will
discard all subsequent frames until it receives the one it is expecting. The silence of the
receiver causes the timer of the unacknowledged frame at the sender side to expire. This,
in turn, causes the sender to go back and resend all frames, beginning with the one with
the expired timer. The receiver does not have to acknowledge each frame received. It can
send one cumulative acknowledgment for several frames.
Resending a Frame When the timer expires, the sender resends all outstanding frames.
For example, suppose the sender has already sent frame 6, but the timer for frame 3
expires. This means that frame 3 has not been acknowledged; the sender goes back and
sends frames 3,4,5, and 6 again. That is why the protocol is called
Go-Back-N ARQ.
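
A short sketch of the send-window bookkeeping and go-back behaviour described above (illustrative Python; the class and method names are invented for this sketch):

    M = 3                                    # sequence number field size in bits
    WINDOW = 2 ** M - 1                      # maximum send window size for Go-Back-N

    class GoBackNSender:
        def __init__(self):
            self.Sf = 0                      # first (oldest) outstanding frame
            self.Sn = 0                      # next sequence number to assign
            self.outstanding = {}            # copies of unacknowledged frames

        def can_send(self):
            return len(self.outstanding) < WINDOW

        def send(self, payload, transmit):
            assert self.can_send()
            self.outstanding[self.Sn] = payload
            transmit(self.Sn, payload)
            self.Sn = (self.Sn + 1) % (2 ** M)

        def on_ack(self, ack):               # cumulative ACK: next frame expected
            while self.Sf != ack and self.outstanding:
                del self.outstanding[self.Sf]        # slide the window
                self.Sf = (self.Sf + 1) % (2 ** M)

        def on_timeout(self, transmit):      # resend ALL outstanding frames
            seq = self.Sf
            for _ in range(len(self.outstanding)):
                transmit(seq, self.outstanding[seq])
                seq = (seq + 1) % (2 ** M)

    sender = GoBackNSender()
    log = []
    for i in range(5):                       # send frames 0..4
        sender.send(f"frame{i}", lambda s, p: log.append(("tx", s)))
    sender.on_ack(3)                         # frames 0, 1, 2 acknowledged; Sf becomes 3
    sender.on_timeout(lambda s, p: log.append(("re", s)))   # resends frames 3 and 4
    print(log[-2:])                          # [('re', 3), ('re', 4)]
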
3 Selective Repeat Automatic Repeat Request
In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded. However, this protocol is very inefficient on a noisy link. On a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This resending uses up the bandwidth and slows down the transmission. For noisy links, there is another mechanism that does not resend N frames when just one frame is damaged; only the damaged frame is resent. This mechanism is called Selective Repeat ARQ. It is more efficient for noisy links, but the processing at the receiver is more complex.

Sender Window
The sender window works exactly as in Go-Back-N (before and after sliding); the only difference in the sender window between Go-Back-N and Selective Repeat is the window size.
Receiver window
The receiver window in Selective Repeat is totally different from the one in Go-Back-N. First, the size of the receive window is the same as the size of the send window, 2^(m−1). The Selective Repeat Protocol allows as many frames as the size of the receive window to arrive out of order and be kept until there is a set of in-order frames to be delivered to the network layer. Because the sizes of the send window and receive window are the same, all the frames in the send window can arrive out of order and be stored until they can be delivered. However, the receiver never delivers packets out of order to the network layer. The above figure shows the receive window: those slots inside the window that are colored define frames that have arrived out of order and are waiting for their neighbors to arrive before delivery to the network layer. In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Delivery of Data in Selective Repeat ARQ:


Flow Diagram
Differences between Go-Back N & Selective Repeat
One main difference is the number of timers. Here, each frame sent or resent needs a
timer, which means that the timers need to be numbered (0, 1,2, and 3). The timer for
frame 0 starts at the first request, but stops when the ACK for this frame arrives. There
are two conditions for the delivery of frames to the network layer: First, a set of
consecutive frames must have arrived. Second, the set starts from the beginning of the
window. After the first arrival, there was only one frame and it started from the
beginning of the window. After the last arrival, there are three frames and the first one
starts from the beginning of the window. Another important point is that a NAK is sent when the receiver discovers that a frame is missing (i.e. a frame arrives out of order), to speed up its retransmission. The next point is about the ACKs. Notice that only two ACKs are sent here: the first one acknowledges only the first frame, and the second one acknowledges three frames. In Selective Repeat, ACKs are sent when data are delivered to the network layer. If the data belonging to n frames are delivered in one shot, only one ACK is sent for all of them.

Piggybacking
A technique called piggybacking is used to improve the efficiency of bidirectional protocols. When a frame is carrying data from A to B, it can also carry control information about frames that arrived (or were lost) from B; when a frame is carrying data from B to A, it can also carry control information about frames that arrived (or were lost) from A.
