
UNIT-II

Data Link Layer

o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from
the bottom.
o The communication channels that connect adjacent nodes are known as links, and
in order to move a datagram from source to destination, the datagram must be
moved across each individual link.
o The main responsibility of the Data Link Layer is to transfer the datagram across an
individual link.
o The Data link layer protocol defines the format of the packet exchanged across the
nodes as well as the actions such as Error detection, retransmission, flow control,
and random access.
o Common Data Link Layer protocols include Ethernet, Token Ring, FDDI, and PPP.

o The Data Link Layer provides services such as framing, error control, and flow control,
described below:

FRAMING:

Framing is a function of the data link layer. It provides a way for a sender to transmit a set
of bits that are meaningful to the receiver.
Problems in Framing –
✓ Detecting the start of a frame: When a frame is transmitted, every station must be able to
detect it. Stations detect frames by looking for a special sequence of bits that marks the
beginning of the frame, i.e. the SFD (Starting Frame Delimiter).
✓ How does a station detect a frame: Every station listens to the link for the SFD pattern
through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station.
The station then checks the destination address to accept or reject the frame.
✓ Detecting the end of a frame: Knowing when to stop reading the frame.
Types of framing – There are two types of framing:

1. Fixed size – The frame is of fixed size and there is no need to provide boundaries to the
frame, length of the frame itself acts as delimiter.
✓ Drawback: It suffers from internal fragmentation if data size is less than frame size
✓ Solution: Padding

2. Variable size – Here we need to define the end of one frame and the beginning of the
next frame to distinguish them. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of the
frame. Used in Ethernet (802.3). The problem with this is that the length field
might get corrupted.
2. End Delimiter (ED) – We can introduce an ED (a special pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that the ED pattern can occur in the
data. This can be solved by:

1. Character/Byte Stuffing:

In this method a flag byte is used as both the start and the end of a frame, as shown in
the figure below.
Two consecutive flag bytes indicate the end of one frame and the start of the next frame.
If the flag byte occurs inside the data, the sender stuffs an escape byte (ESC) before it, and
the receiver removes the ESC before passing the data to the network layer.
If the receiver ever loses synchronization, it can simply search for two flag bytes to find the
end of the current frame and the start of the next frame.

Disadvantage – It is a costly and largely obsolete method.
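The flag-and-escape mechanism can be sketched in Python. The FLAG and ESC byte values below are illustrative assumptions, not values given in the text:

```python
FLAG = 0x7E  # assumed flag byte marking frame boundaries
ESC = 0x7D   # assumed escape byte stuffed before special bytes

def byte_stuff(data: bytes) -> bytes:
    """Wrap the payload in flag bytes, escaping any FLAG/ESC inside the data."""
    out = bytearray([FLAG])
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)  # stuff an escape byte before the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the flags and remove escape bytes to recover the payload."""
    body = frame[1:-1]  # drop the starting and ending flag bytes
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            i += 1  # skip the escape; the next byte is literal data
        out.append(body[i])
        i += 1
    return bytes(out)
```

A payload containing the flag byte round-trips cleanly: `byte_unstuff(byte_stuff(bytes([0x7E])))` returns `bytes([0x7E])`.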

2. Bit stuffing framing method:

In this method bit stuffing is used.


When the sender’s data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit.
At the receiver end, this stuffed 0 bit is automatically deleted, as shown in the figure below.
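A minimal Python sketch of bit stuffing (representing the bit stream as a string of '0'/'1' characters is an illustrative choice, not from the text):

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # stuffed 0 keeps the flag pattern out of the data
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: delete the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1  # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)
```

For example, `bit_stuff("0111111")` yields `"01111101"`, and unstuffing restores the original string.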
3. Character/byte count framing method:

The byte count framing method uses a field in the header to specify the number of bytes in
the frame.
The data link layer at the sender sends the byte count.
The data link layer at the receiver counts the bytes sent by the sender.
If there is a difference between the sender's and the receiver's byte counts, there is an error
in the received data; otherwise the received data is correct.
These points are shown in the diagram below.
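The byte count method can be sketched in Python. The one-byte count field that covers itself plus the payload is an assumption for illustration:

```python
def frame_with_count(payload: bytes) -> bytes:
    """Prepend a one-byte count that covers the count byte and the payload."""
    assert len(payload) <= 254, "payload must fit a one-byte count field"
    return bytes([len(payload) + 1]) + payload

def parse_frames(stream: bytes):
    """Split a byte stream back into payloads using the count fields."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]  # a corrupted count here desynchronizes everything after it
        frames.append(stream[i + 1:i + count])
        i += count
    return frames
```

`parse_frames(frame_with_count(b"abc") + frame_with_count(b"de"))` returns `[b"abc", b"de"]`; a corrupted count field, by contrast, throws off the boundaries of every following frame, which is the weakness of this method.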

Error Detection

When data is transmitted from one device to another, the system cannot guarantee that the
data received by the device is identical to the data transmitted by the other
device. An error is a situation in which the message received at the receiver end is not
identical to the message transmitted.

Types of Errors
Errors can be classified into two categories:

o Single-Bit Error
o Burst Error

Single-Bit Error:

Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the above figure, the transmitted message is corrupted by a single-bit error: a 0 bit is
changed to a 1.

A Single-Bit Error is the least likely type of error in Serial Data Transmission. For example, if
a sender sends data at 1 Mbps, each bit lasts only 1 µs, so for a single-bit error to occur, the
noise must have a duration of only about 1 µs, which is very rare.

Single-Bit Errors mainly occur in Parallel Data Transmission. For example, if eight wires are
used to send the eight bits of a byte and one of the wires is noisy, then a single bit is
corrupted per byte.

Burst Error:

A Burst Error means that two or more bits of the data unit are changed from 0 to 1 or from 1 to 0.

The Burst Error is determined from the first corrupted bit to the last corrupted bit.
The duration of noise in Burst Error is more than the duration of noise in Single-Bit.

Burst Errors are most likely to occur in Serial Data Transmission.

The number of affected bits depends on the duration of the noise and data rate.

Error Detecting Techniques:

The most popular Error Detecting Techniques are:

o Single parity check


o Two-dimensional parity check
o Checksum
o Cyclic redundancy check

Single Parity Check


o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at
the end of the data unit so that the number of 1s becomes even. For an 8-bit data
unit, the total number of transmitted bits would therefore be 9.
o If the number of 1s is odd, then parity bit 1 is appended; if the number of 1s
is even, then parity bit 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity
checking.
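The even-parity mechanism above can be sketched in Python (bits as a '0'/'1' string, an illustrative choice):

```python
def add_even_parity(data_bits: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    parity = '1' if data_bits.count('1') % 2 else '0'
    return data_bits + parity

def check_even_parity(unit: str) -> bool:
    """Receiver accepts the unit only if the total number of 1s is even."""
    return unit.count('1') % 2 == 0
```

For the 7-bit unit "1011001" (four 1s), the appended parity bit is 0. Flipping any single bit of the result makes the check fail, while flipping two bits goes undetected, which is the drawback noted next.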
Drawbacks Of Single Parity Checking

o It can only detect single-bit errors (or any odd number of bit errors).


o If two bits are interchanged, then it cannot detect the errors.

Two-Dimensional Parity Check


o Performance can be improved by using Two-Dimensional Parity Check which
organizes the data in the form of a table.
o Parity check bits are computed for each row, which is equivalent to the single-parity
check.
o In Two-Dimensional Parity check, a block of bits is divided into rows, and the
redundant row of bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed
from the received data.
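A sketch of the 2D parity computation in Python (rows of 0/1 integers, even parity per row and per column; the block layout is an illustrative assumption):

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then an even-parity row."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]
```

For the block [[1,0,1],[0,1,1]], every row and every column of the result sums to an even number, which is exactly what the receiver re-checks.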
Drawbacks Of 2D Parity Check

o If two bits in one data unit are corrupted and two bits in exactly the same positions in
another data unit are also corrupted, then the 2D parity checker will not be able to detect
the error.
o In some cases this technique cannot detect errors of 4 bits or more.

Checksum

A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:

Checksum Generator

A Checksum is generated at the sending side. Checksum generator subdivides the data into
equal segments of n bits each, and all these segments are added together by using one's
complement arithmetic. The sum is complemented and appended to the original data,
known as checksum field. The extended data is transmitted across the network.

Suppose L is the total sum of the data segments; then the checksum is the one's complement of L (written ~L).
The Sender follows these steps:
1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.

Checksum Checker

A Checksum is verified at the receiving side. The receiver subdivides the incoming data
into equal segments of n bits each, and all these segments are added together, and then this
sum is complemented. If the complement of the sum is zero, then the data is accepted
otherwise data is rejected.
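The generator and checker described above can be sketched in Python using 16-bit segments (the segment width n = 16 is an assumption; the text leaves n general):

```python
def ones_complement_sum(segments, n=16):
    """Add n-bit segments with end-around carry (one's complement arithmetic)."""
    mask = (1 << n) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> n)  # fold the carry back in
    return total

def make_checksum(segments, n=16):
    """Sender: checksum = one's complement of the sum of the segments."""
    return ((1 << n) - 1) ^ ones_complement_sum(segments, n)

def verify(segments_with_checksum, n=16) -> bool:
    """Receiver: data plus checksum must sum to all 1s, i.e. complement to zero."""
    return make_checksum(segments_with_checksum, n) == 0
```

With segments [0x4500, 0x0073] the checksum is 0xBA8C, and appending it makes `verify` return True.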

Cyclic Redundancy Check (CRC)

CRC is a redundancy error technique used to determine the error.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less
than the number of bits in a predetermined divisor of n+1 bits.
o Secondly, the newly extended data is divided by the divisor using a process known
as binary (modulo-2) division. The remainder generated from this division is known as the
CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data.
This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will
treat this whole unit as a single unit, and it is divided by the same divisor that was
used to find the CRC remainder.

If the result of this division is zero, the data has no error and is accepted.

If the result of this division is not zero, the data contains an error and is therefore
discarded.
Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator

o A CRC generator uses modulo-2 division. Firstly, three 0s are appended at the
end of the data, since the length of the divisor is 4 and the length of the
string of 0s to be appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor
1001.
o The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and
the final string would be 11100111 which is sent across the network.
CRC Checker

o The functionality of the CRC checker is similar to the CRC generator.


o When the string 11100111 is received at the receiving end, then CRC checker
performs the modulo-2 division.
o A string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is
accepted.
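The worked example above can be reproduced with a short modulo-2 division routine in Python:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros and return the modulo-2 division remainder."""
    n = len(divisor) - 1
    bits = [int(b) for b in data + '0' * n]
    div = [int(b) for b in divisor]
    for i in range(len(data)):
        if bits[i]:  # divide (XOR) only when the leading bit is 1
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return ''.join(str(b) for b in bits[-n:])

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: the whole received codeword must leave a zero remainder."""
    bits = [int(b) for b in codeword]
    div = [int(b) for b in divisor]
    for i in range(len(codeword) - len(div) + 1):
        if bits[i]:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return not any(bits)
```

`crc_remainder("11100", "1001")` returns `"111"`, so the transmitted unit is `"11100111"`, and `crc_check("11100111", "1001")` returns True, matching the example.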

Error Correction

Error Correction codes are used to detect and correct the errors when data is transmitted
from the sender to the receiver.

Error Correction can be handled in two ways:

o Backward error correction: Once the error is discovered, the receiver requests the
sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code
which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.

For correcting errors, one has to know the exact position of the error. For example, to
correct a single-bit error in a 7-bit unit, the error correction code must determine which of
the seven bits is in error. To achieve this, we have to add some additional redundant bits.

Suppose r is the number of redundant bits and d is the total number of the data bits. The
number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1

The value of r is calculated by using the above formula. For example, if the value of d is 4,
then the smallest value of r that satisfies the relation is 3 (since 2^3 = 8 >= 4 + 3 + 1).

To determine the position of the bit which is in error, a technique developed by R.W.
Hamming, known as the Hamming code, can be applied to data units of any length; it uses
the relationship between data bits and redundant bits.

Hamming Code

Parity bits: The bit which is appended to the original data of binary bits so that the total
number of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the value of the
parity bit is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is
1.

Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of
parity bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.

Algorithm of Hamming code:

o Information of d bits is combined with r redundant bits to form a unit of d+r bits.
o The location of each of the (d+r) digits is assigned a decimal value.
o The r bits are placed at positions 1, 2, 4, ..., 2^(k-1).
o At the receiving end, the parity bits are recalculated. The decimal value of the parity
bits determines the position of the error.

Relationship b/w Error position & binary number.

Let's understand the concept of Hamming code through an example:

Suppose the original data is 1010 which is to be sent.

Total number of data bits 'd' = 4


Number of redundant bits r: 2^r >= d + r + 1
2^3 = 8 >= 4 + 3 + 1 = 8
Therefore, r = 3 satisfies the above relation.
Total number of bits = d + r = 4 + 3 = 7.

Determining the position of the redundant bits

The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The positions
of the redundant bits correspond to powers of 2. Therefore,
their positions are 1 (2^0), 2 (2^1), and 4 (2^2).

1. The position of r1 = 1
2. The position of r2 = 2
3. The position of r4 = 4

Representation of Data on the addition of parity bits:

Determining the Parity bits

Determining the r1 bit

The r1 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the first position.

We observe from the above figure that the bit positions that include 1 in the first position
are 1, 3, 5, 7. Now we perform the even-parity check at these bit positions. The total
number of 1s at the bit positions covered by r1 is even; therefore, the value of the r1
bit is 0.

Determining r2 bit

The r2 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the second position.
We observe from the above figure that the bit positions that include 1 in the second
position are 2, 3, 6, 7. Now we perform the even-parity check at these bit positions. The
total number of 1s at the bit positions covered by r2 is odd; therefore, the value of
the r2 bit is 1.

Determining r4 bit

The r4 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the third position.

We observe from the above figure that the bit positions that include 1 in the third position
are 4, 5, 6, 7. Now we perform the even-parity check at these bit positions. The total
number of 1s at the bit positions covered by r4 is even; therefore, the value of the r4
bit is 0.

Data transferred is given below:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are
recalculated.

R1 bit

The bit positions of the r1 bit are 1,3,5,7


We observe from the above figure that the bits at the positions checked by r1 are 1100.
Now we perform the even-parity check: the total number of 1s in these positions is even.
Therefore, the value of r1 is 0.

R2 bit

The bit positions of r2 bit are 2,3,6,7.

We observe from the above figure that the bits at the positions checked by r2 are 1001.
Now we perform the even-parity check: the total number of 1s in these positions is even.
Therefore, the value of r2 is 0.

R4 bit

The bit positions of r4 bit are 4,5,6,7.

We observe from the above figure that the bits at the positions checked by r4 are 1011.
Now we perform the even-parity check: the total number of 1s in these positions is odd.
Therefore, the value of r4 is 1.
o The binary representation of the redundant bits, i.e., r4r2r1, is 100, and its corresponding
decimal value is 4. Therefore, the error is in the 4th bit position. That bit must be
changed from 1 to 0 to correct the error.
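The whole worked example can be reproduced in Python. The helpers below hard-code the 7-bit (d = 4, r = 3) layout used above; they are an illustration, not a general Hamming implementation:

```python
def hamming_encode(data: str) -> str:
    """Encode 4 data bits (for positions 7, 6, 5, 3) into a 7-bit codeword."""
    code = [0] * 8  # index 0 unused; code[i] is the bit at position i
    code[7], code[6], code[5], code[3] = (int(b) for b in data)
    code[1] = (code[3] + code[5] + code[7]) % 2  # r1 covers positions 1,3,5,7
    code[2] = (code[3] + code[6] + code[7]) % 2  # r2 covers positions 2,3,6,7
    code[4] = (code[5] + code[6] + code[7]) % 2  # r4 covers positions 4,5,6,7
    return ''.join(str(code[i]) for i in range(7, 0, -1))

def hamming_correct(word: str) -> str:
    """Recompute the parity checks; their value r4r2r1 is the error position."""
    code = [0] + [int(b) for b in reversed(word)]  # code[i] = bit at position i
    c1 = (code[1] + code[3] + code[5] + code[7]) % 2
    c2 = (code[2] + code[3] + code[6] + code[7]) % 2
    c4 = (code[4] + code[5] + code[6] + code[7]) % 2
    pos = c4 * 4 + c2 * 2 + c1
    if pos:
        code[pos] ^= 1  # flip the erroneous bit
    return ''.join(str(code[i]) for i in range(7, 0, -1))
```

Encoding 1010 gives the codeword 1010010; corrupting the 4th bit and running `hamming_correct` flips that bit back, exactly as in the walkthrough above.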

Noiseless Channels:

1. Simplest Protocol

It has no flow or error control. It is a unidirectional protocol in which data frames
travel in only one direction: from the sender to the receiver. The data link layer of the
receiver immediately removes the header from the frame and hands the data packet to its
network layer, which can also accept the packet immediately.

Design

The sender site cannot send a frame until its network layer has a data packet to send. The
receiver site cannot deliver a data packet to its network layer until a frame arrives.

Figure 2.6 The design of the simplest protocol with no flow or error control

If the protocol is implemented as a procedure, we need to introduce the idea of events in


the protocol. The procedure at the sender site is constantly running; there is no action until
there is a request from the network layer. The procedure at the receiver site is also
constantly running, but there is no action until notification from the physical layer arrives.

Example 2.1
It is very simple. The sender sends a sequence of frames without even thinking about the
receiver. To send three frames, three events occur at the sender site and three events at the
receiver site. Note that the data frames are shown by tilted boxes; the height of the box
defines the transmission time difference between the first bit and the last bit in the frame.

2. Stop-and-Wait Protocol
If data frames arrive at the receiver site faster than they can be processed, the frames must
be stored until they can be processed.
In Stop-and-Wait Protocol the sender sends one frame, stops until it receives confirmation
from the receiver (okay to go ahead), and then sends the next frame.
Design
Figure 2.8 illustrates the mechanism.
Comparing this figure with Figure 2.6, we can see the traffic on the forward channel (from
sender to receiver) and the reverse channel. At any time, there is either one data frame on
the forward channel or one ACK frame on the reverse channel. We therefore need a half-
duplex link.
Example 2.2

Figure 2.9 shows an example of communication using this protocol. It is still very simple.
The sender sends one frame and waits for feedback from the receiver. When the ACK
arrives, the sender sends the next frame. Note that sending two frames in the protocol
involves the sender in four events and the receiver in two events.

Noisy Channels:

1. Stop-and-Wait Automatic Repeat Request

The Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) adds a simple error
control mechanism to the Stop-and-Wait Protocol. To detect and correct corrupted frames,
we need to add redundancy bits to our data frame. When the frame arrives at the receiver
site, it is checked and if it is corrupted, it is silently discarded. The detection of errors in this
protocol is manifested by the silence of the receiver.

Sequence Numbers
The protocol specifies that frames need to be numbered. This is done by using sequence
numbers. A field is added to the data frame to hold the sequence number of that frame. For
example, if we decide that the field is m bits long, the sequence numbers start from 0, go
up to 2^m - 1, and then repeat.

Acknowledgment Numbers

Since the sequence numbers must be suitable for both data frames and ACK frames, we use
this convention: The acknowledgment numbers always announce the sequence number of
the next frame expected by the receiver. For example, if frame 0 has arrived safe and sound,
the receiver sends an ACK frame with acknowledgment 1 (meaning frame 1 is expected
next). If frame 1 has arrived safe and sound, the receiver sends an ACK frame with
acknowledgment 0 (meaning frame 0 is expected).

Design

Figure 2.10 shows the design of the Stop-and-Wait ARQ Protocol. The sending device keeps
a copy of the last frame transmitted until it receives an acknowledgment for that frame. A
data frame uses a seqNo (sequence number); an ACK frame uses an ackNo
(acknowledgment number). The sender has a control variable, which we call Sn (sender,
next frame to send), that holds the sequence number for the next frame to be sent (0 or 1).

The receiver has a control variable, which we call Rn (receiver, next frame expected), that
holds the number of the next frame expected. When a frame is sent, the value of Sn is
incremented (modulo-2), which means if it is 0, it becomes 1 and vice versa. When a frame
is received, the value of Rn is incremented (modulo-2), which means if it is 0, it becomes 1
and vice versa. Three events can happen at the sender site; one event can happen at the
receiver site. Variable Sn points to the slot that matches the sequence number of the frame
that has been sent, but not acknowledged; Rn points to the slot that matches the sequence
number of the expected frame.

Example 2.3

Frame 0 is sent and acknowledged. Frame 1 is lost and resent after the time-out. The resent
frame 1 is acknowledged and the timer stops. Frame 0 is sent and acknowledged, but the
acknowledgment is lost. The sender has no idea if the frame or the acknowledgment is lost,
so after the time-out, it resends frame 0, which is acknowledged.

Example 2.4
Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1
bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system
data frames are 1000 bits in length, what is the utilization percentage of the link?

Solution
The bandwidth-delay product is (1 × 10^6) × (20 × 10^-3) = 20,000 bits. The system can send
20,000 bits during the time it takes the data to make a round trip, but it sends only 1000
bits, so the link utilization is 1000/20,000, or 5 percent.
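The numbers in this example can be checked with a short script (values taken directly from the example):

```python
bandwidth = 1_000_000   # 1 Mbps line
round_trip = 20e-3      # 20 ms for one bit to make a round trip
frame_len = 1000        # data frame length in bits

bdp = bandwidth * round_trip   # bandwidth-delay product, in bits
utilization = frame_len / bdp  # Stop-and-Wait sends one frame per round trip
print(bdp, utilization)        # 20000.0 0.05
```

The link could carry 20,000 bits in one round-trip time, but Stop-and-Wait ARQ puts only one 1000-bit frame on it, so the utilization is 5 percent.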

Pipelining

In networking and in other areas, a task is often begun before the previous task has ended.
This is known as pipelining. There is no pipelining in Stop-and-Wait ARQ because we need
to wait for a frame to reach the destination and be acknowledged before the next frame can
be sent. However, pipelining does apply to our next two protocols because several frames
can be sent before we receive news about the previous frames. Pipelining improves the
efficiency of the transmission if the number of bits in transition is large with respect to the
bandwidth-delay product.

2. Go-Back-N Automatic Repeat Request

In this protocol we can send several frames before receiving acknowledgments; we keep a
copy of these frames until the acknowledgments arrive.

Sequence Numbers

Frames from a sending station are numbered sequentially. However, because we need to
include the sequence number of each frame in the header, we need to set a limit. If the
header of the frame allows m bits for the sequence number, the sequence numbers range
from 0 to 2^m - 1.

Sliding Window

The sliding window is an abstract concept that defines the range of sequence numbers that
is the concern of the sender and receiver. In other words, the sender and receiver need to
deal with only part of the possible sequence numbers. The range which is the concern of the
sender is called the send sliding window; the range that is the concern of the receiver is
called the receive sliding window.
In the send window, the sequence numbers to the left of the window belong to frames that
have already been acknowledged; the sender does not worry about these frames and keeps
no copies of them. The second region, colored in Figure 2.12a, defines the range of
sequence numbers belonging to frames that have been sent but have an unknown status.

The window itself is an abstraction; three variables define its size and location at any time.
We call these variables Sf (send window, the first outstanding frame), Sn (send window, the
next frame to be sent), and Ssize (send window, size). The variable Sf defines the sequence
number of the first (oldest) outstanding frame. The variable Sn holds the sequence number
that will be assigned to the next frame to be sent. Finally, the variable Ssize defines the size
of the window, which is fixed in our protocol.

The receive window makes sure that the correct data frames are received and that the
correct acknowledgments are sent. The size of the receive window is always 1.

Figure 2.13 Receive window for Go-Back-N ARQ


Figure 2.14 Design of Go-Back-N ARQ

Note that we need only one variable Rn (receive window, next frame expected) to define
this abstraction. The sequence numbers to the left of the window belong to the frames
already

received and acknowledged; the sequence numbers to the right of this window define the
frames that cannot be received. Any received frame with a sequence number in these two
regions is discarded. Only a frame with a sequence number matching the value of Rn is
accepted and acknowledged. The receive window also slides, but only one slot at a time.

Design

Figure 2.14 shows the design for this protocol. As we can see, multiple frames can be in
transit in the forward direction, and multiple acknowledgments in the reverse direction.
The idea is similar to Stop-and-Wait ARQ; the difference is that the send window allows us
to have as many frames in transition as there are slots in the send window.

Send Window Size


We can now show why the size of the send window must be less than 2^m. As an example,
we choose m = 2, which means the size of the window can be 2^m - 1, or 3. Figure 2.15
compares a window size of 3 against a window size of 4. If the size of the window is 3 (less
than 2^2) and all three acknowledgments are lost, the frame 0 timer expires and all three
frames are resent. The receiver is now expecting frame 3, not frame 0, so the duplicate
frames are correctly discarded. If the size of the window were 4 and all acknowledgments
were lost, the resent frame 0 would be wrongly accepted as the first frame of the next
cycle, which is an error.
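The wrap-around argument can be checked with a few lines of Python (a toy illustration, not from the text):

```python
m = 2
seq_space = 2 ** m  # sequence numbers cycle through 0..3

# Window size 3: frames 0,1,2 sent, all ACKs lost, frames resent.
# The receiver has delivered 0,1,2 and now expects sequence number 3,
# so the resent frame 0 is correctly rejected as a duplicate.
rn = 3 % seq_space
print(0 == rn)  # False: duplicate discarded

# Window size 4: frames 0,1,2,3 sent, all ACKs lost, frame 0 resent.
# The receiver now expects 4 mod 4 = 0, so the duplicate frame 0 is
# wrongly accepted as the first frame of the next cycle.
rn = 4 % seq_space
print(0 == rn)  # True: duplicate wrongly accepted
```

This is exactly why Go-Back-N requires the send window to be at most 2^m - 1.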

Figure 2.15 Window size for Go-Back-N ARQ

Example 2.4

Figure 2.16 shows an example of Go-Back-N. This is an example of a case where the
forward channel is reliable, but the reverse is not. No data frames are lost, but some ACKs
are delayed and one is lost. The example also shows how cumulative acknowledgments can
help if acknowledgments are delayed or lost.

After initialization, there are seven sender events. Request events are triggered by data
from the network layer; arrival events are triggered by acknowledgments from the physical
layer. There is no time-out event here because all outstanding frames are acknowledged
before the timer expires. Note that although ACK 2 is lost, ACK 3 serves as both ACK 2 and
ACK 3. There are four receiver events, all triggered by the arrival of frames from the
physical layer.
Figure 2.16 Flow diagrams for Example 2.4

3. Selective Repeat Automatic Repeat Request

Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps track of only
one variable, and there is no need to buffer out-of-order frames; they are simply discarded.
However, this protocol is very inefficient for a noisy link. In a noisy link a frame has a
higher probability of damage, which means the resending of multiple frames. This
resending uses up the bandwidth and slows down the transmission.

Windows

The Selective Repeat Protocol also uses two windows: a send window and a receive
window. First, the size of the send window is much smaller; it is 2^(m-1). Second, the receive
window is the same size as the send window. The send window maximum size can be
2^(m-1). For example, if m = 4, the sequence numbers go from 0 to 15, but the size of the window
is just 8 (it is 15 in the Go-Back-N Protocol). The smaller window size means less efficiency
in filling the pipe, but the fact that there are fewer duplicate frames can compensate for this.

Figure 2.17 Send window for Selective Repeat ARQ


The receive window in Selective Repeat is totally different from the one in Go-Back-N.
First, the size of the receive window is the same as the size of the send window (2^(m-1)).
Figure 2.18 shows the receive window in this protocol. Those slots inside the window that
are colored define frames that have arrived out of order and are waiting for their neighbors
to arrive before delivery to the network layer.

Figure 2.18 Receive window for Selective Repeat ARQ

Design

The design in this case is to some extent similar to the one we described for Go-Back-N,
but more complicated, as shown in Figure 2.19.

Window Sizes

We can now show why the size of the sender and receiver windows must be at most one
half of 2^m. As an example, we choose m = 2, which means the maximum size of the window
is 2^m/2, or 2. If the size of the window were 3 and all acknowledgments were lost, the timer
for frame 0 would expire and frame 0 would be resent. However, the receiver's window
would now include frame 0 (0 is part of the window), so it would accept frame 0, not as a
duplicate, but as the first frame of the next cycle. This is clearly an error. In Selective
Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Figure 2.19 Design of Selective Repeat ARQ

Figure 2.20 Selective Repeat ARQ, Window size

Example 2.5
Frame 1 is lost. We show how Selective Repeat behaves in this case.
Figure 2.21 Flow diagram for Example 2.5

One main difference is the number of timers. Here, each frame sent or resent needs a timer,
which means that the timers need to be numbered (0, 1, 2, and 3). The timer for frame 0
starts at the first request, but stops when the ACK for this frame arrives. The timer for
frame 1 starts at the second request, restarts when a NAK arrives, and finally stops when
the last ACK arrives. The other two timers start when the corresponding frames are sent
and stop at the last arrival event.

Piggybacking

The three protocols we discussed in this section are all unidirectional: data frames flow in
only one direction although control information such as ACK and NAK frames can travel in
the other direction. In real life, data frames are normally flowing in both directions: from
node A to node B and from node B to node A. This means that the control information also
needs to flow in both directions.
Figure 2.22 Design of Piggybacking in Go-Back-N ARQ

A technique called piggybacking is used to improve the efficiency of the bidirectional


protocols. When a frame is carrying data from A to B, it can also carry control information
about arrived (or lost) frames from B; when a frame is carrying data from B to A, it can also
carry control information about the arrived (or lost) frames from A.

Multiple access protocols

When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. Suppose there is no dedicated path to
communicate or transfer data between two devices. In that case, multiple stations access
the channel and may transmit data over it simultaneously, which can cause collisions and
cross talk. Hence, a multiple access protocol is required to reduce collisions and
avoid crosstalk between the channels.

For example, suppose that there is a classroom full of students. When a teacher asks a
question, all the students (small channels) in the class start answering the question at the
same time (transferring the data simultaneously). All the students respond at the same time
and the answers overlap or are lost. Therefore it is the responsibility of the teacher
(the multiple access protocol) to manage the students and have them answer one at a time.

Following are the types of multiple access protocols, subdivided into different processes:
A. Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a
random access protocol, no station depends on another station, nor does any
station control another. Depending on the channel's state (idle or busy), each station
transmits its data frame. However, if more than one station sends data over the channel,
there may be a collision or data conflict. Due to the collision, the data frame packets may
be lost or changed, and hence are not received correctly at the receiver end.

Following are the different methods of random-access protocols for broadcasting frames on
the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used in a shared
medium to transmit data. Using this method, any station can transmit data across the
network whenever a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions can occur, and data frames may be lost when multiple stations transmit at once.
4. Aloha relies on acknowledgment of frames; it performs no collision detection.
5. It requires retransmission of data after a random amount of time.

Pure Aloha

Pure Aloha is used whenever data is available for sending over the channel. In pure Aloha, each station transmits its data on the channel without first checking whether the channel is idle, so collisions may occur and data frames can be lost. After transmitting a data frame, the station waits for the receiver's acknowledgment. If no acknowledgment arrives within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and then retransmits the frame, repeating until the data is successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2, that is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).
As we can see in the figure above, four stations access a shared channel to transmit data frames. Some frames collide because several stations send their frames at the same time; only two frames, frame 1.1 and frame 2.2, are successfully delivered to the receiver, while the other frames are lost or destroyed. Whenever two frames overlap on the shared channel, a collision occurs and both suffer damage: if the first bit of a new frame enters the channel before the last bit of a frame already in progress has finished, both frames are completely destroyed and both stations must retransmit their data frames.
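The throughput figures above are easy to check numerically. The following sketch (the function name is ours, not from any library) evaluates S = G * e^(-2G) and confirms the peak at G = 1/2:

```python
import math

def pure_aloha_throughput(G):
    """Throughput S of pure Aloha for offered load G (frames per frame time)."""
    return G * math.exp(-2 * G)

# The maximum occurs at G = 1/2, giving S = 0.5 * e^-1, about 18.4%
peak = pure_aloha_throughput(0.5)
print(round(peak, 3))  # -> 0.184
```

Offered loads below or above G = 1/2 both yield lower throughput, which is why pure Aloha tops out at 18.4%.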

Slotted Aloha

Slotted Aloha was designed to improve on pure Aloha's efficiency, since pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. A station that wants to send a frame on the shared channel may begin only at the start of a slot, and only one frame may be sent in each slot. A station that misses the beginning of a slot must wait for the beginning of the next slot. However, a collision can still occur when two or more stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, that is 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
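Because a frame in slotted Aloha can only collide with frames sent in the very same slot, the vulnerable period is halved and the peak throughput doubles. A short sketch (function name is ours) of S = G * e^(-G):

```python
import math

def slotted_aloha_throughput(G):
    """Throughput S of slotted Aloha: S = G * e^(-G)."""
    return G * math.exp(-G)

# Peak at G = 1 is 1/e, about 36.8% -- double pure Aloha's 18.4% peak
print(round(slotted_aloha_throughput(1.0), 3))  # -> 0.368
```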
CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access is a media access protocol that senses the traffic on a channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel and, if the channel is idle, immediately sends its data. Otherwise, it continuously monitors the channel and transmits the frame unconditionally (with probability 1) as soon as the channel becomes idle.

Non-Persistent: In this mode of CSMA, each node senses the channel before transmitting and, if the channel is idle, immediately sends its data. Otherwise, the station waits for a random amount of time (rather than sensing continuously), then senses the channel again and transmits the frame when it is found idle.

P-Persistent: This mode combines the 1-persistent and non-persistent modes. Each node senses the channel and, if the channel is idle, sends a frame with probability p. With probability q = 1 - p, it instead defers to the next time slot and repeats the process.

O-Persistent: In this mode, a supervisory order (priority) is assigned to each station before transmission on the shared channel. When the channel is found idle, each station waits for its assigned turn before transmitting its data.
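The persistence strategies differ only in what a station does at each sensing step. A minimal sketch of the p-persistent decision (the function name and return values are illustrative, not from any standard API):

```python
import random

def p_persistent_step(channel_idle, p=0.1, rng=random.random):
    """One decision step of p-persistent CSMA (illustrative sketch)."""
    if not channel_idle:
        return "keep_sensing"        # channel busy: keep monitoring it
    if rng() < p:
        return "transmit"            # channel idle: transmit with probability p
    return "defer_one_slot"          # with probability q = 1 - p, wait one slot
```

Setting p = 1 recovers 1-persistent behavior; the non-persistent mode differs in that a busy channel triggers a random wait rather than continuous sensing.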
CSMA/CD

Carrier sense multiple access with collision detection (CSMA/CD) is a network protocol for transmitting data frames; it works in the medium access control (MAC) layer. A station first senses the shared channel and, if the channel is idle, transmits a frame while monitoring the channel to check whether the transmission succeeds. If the frame is received successfully, the station can send its next frame. If a collision is detected, the station sends a jam signal on the shared channel to terminate the transmission, and then waits a random amount of time before attempting to send again.
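After a collision, classic Ethernet CSMA/CD chooses the random waiting time with truncated binary exponential backoff. A sketch of that rule (the 10-round cap and 16-attempt limit follow the usual Ethernet convention; the function name is ours):

```python
import random

def backoff_slots(attempt, rng=random.randrange):
    """Slot times to wait after the attempt-th collision (binary exponential backoff).

    The wait is uniform in 0 .. 2**min(attempt, 10) - 1; after 16 failed
    attempts the frame is abandoned."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: give up on this frame")
    k = min(attempt, 10)
    return rng(2 ** k)
```

Doubling the backoff window on every collision spreads the retries out quickly, so repeated collisions among the same stations become increasingly unlikely.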
CSMA/CA

Carrier sense multiple access with collision avoidance (CSMA/CA) is a network protocol for transmitting data frames; it also works in the medium access control layer. After sending a data frame on the channel, the station waits for an acknowledgment to check whether the transmission succeeded. If the station receives only a single (its own) acknowledgment signal, the data frame has been successfully delivered to the receiver. But if it receives two signals (its own and another frame that collided with it), a collision has occurred on the shared channel. Thus, the sender detects the collision of the frame from the acknowledgment signal it receives.
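The send-and-wait-for-acknowledgment behavior described above can be sketched as a retry loop. Here `send_frame` and `wait_for_ack` are hypothetical callbacks standing in for the radio layer, not a real 802.11 driver API:

```python
def csma_ca_send(send_frame, wait_for_ack, max_attempts=7):
    """Illustrative CSMA/CA sender loop (sketch, not a real 802.11 stack)."""
    for attempt in range(max_attempts):
        send_frame()
        if wait_for_ack():
            return True          # lone acknowledgment: the frame got through
        # no clean ack: assume a collision occurred, back off and retry
    return False
```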

Difference between CSMA/CD and CSMA/CA

1. CSMA/CD is the type of CSMA that detects a collision on a shared channel, whereas CSMA/CA is the type that avoids a collision on a shared channel.
2. CSMA/CD is a collision detection protocol; CSMA/CA is a collision avoidance protocol.
3. CSMA/CD is used in 802.3 (wired Ethernet) networks; CSMA/CA is used in 802.11 (wireless) networks.
4. CSMA/CD works in wired networks, while CSMA/CA works in wireless networks.
5. CSMA/CD acts after a collision is detected on the network; CSMA/CA acts before a collision can occur.
6. Whenever a data frame suffers a collision on the shared channel, CSMA/CD resends the frame, whereas CSMA/CA waits until the channel is idle so that the collision is avoided in the first place.
7. CSMA/CD minimizes the recovery time; CSMA/CA minimizes the risk of collision.
8. The efficiency of CSMA/CD is higher than that of plain CSMA; the efficiency of CSMA/CA is similar to CSMA.
9. CSMA/CD is more popular than the CSMA/CA protocol.

COLLISION-FREE PROTOCOLS:

Although collisions are largely avoided in CSMA/CD, they can still occur during the contention period. Collisions during the contention period adversely affect system performance, particularly when the cable is long and the packets are short; this problem becomes more serious as fiber-optic networks come into use. Here we discuss some protocols that resolve contention without collisions.
• Bit-map Protocol
• Binary Countdown
• Limited Contention Protocols
• The Adaptive Tree Walk Protocol

1. Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In this method, each contention period consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in its own slot; for example, if station 2 has a frame to send, it transmits a 1 bit during the second slot.
In general, station j announces that it has a frame queued by inserting a 1 bit into slot j. In this way, every station gains complete knowledge of which stations wish to transmit, and there are never any collisions because everyone agrees on who goes next. Protocols like this, in which the desire to transmit is broadcast before the actual transmission, are called Reservation Protocols.

To analyze the performance of this protocol, we measure time in units of the contention bit slot, with a data frame consisting of d time units. Under low load, the bitmap is simply repeated over and over, for lack of data frames. Under high load, when all stations have something to send all the time, the N-bit contention period is prorated over N frames, yielding an overhead of only 1 bit per frame.
Under low load, a high-numbered station has to wait, on average, only half a scan (N/2 bit slots) before it can start to transmit, whereas a low-numbered station must wait on average 1.5N slots.
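One contention period of the bit-map protocol can be sketched as follows (the function name is ours):

```python
def bitmap_round(wants_to_send):
    """One N-slot contention period of the bit-map reservation protocol.

    wants_to_send[i] is True if station i has a frame queued. Each station
    writes a 1 bit in its own slot; afterwards the announced frames are
    transmitted in station-number order, with no collisions."""
    reservation = [1 if w else 0 for w in wants_to_send]
    transmit_order = [i for i, bit in enumerate(reservation) if bit]
    return reservation, transmit_order

bits, order = bitmap_round([False, True, False, True])
print(bits, order)  # -> [0, 1, 0, 1] [1, 3]
```

Here stations 1 and 3 reserve slots, so they transmit, in that order, with no possibility of collision.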

2. Binary Countdown:
The binary countdown protocol is used to overcome the bit-map protocol's overhead of 1 bit per station. In binary countdown, binary station addresses are used. A station wanting to use the channel broadcasts its address as a binary bit string, starting with the high-order bit; all addresses are assumed to be the same length. The address bits broadcast in each round by the different stations are ORed together, and this decides the priority for transmitting. Here is an example to illustrate the working of binary countdown.
Suppose stations 0001, 1001, 1100 and 1011 are all trying to seize the channel for transmission. In the first round, the stations broadcast their most significant address bits: 0, 1, 1, 1 respectively. These bits are ORed together. Station 0001 sees a 1 in the MSB position of the other station addresses and knows that a higher-numbered station is competing for the channel, so it gives up for the current round.
The other three stations, 1001, 1100 and 1011, continue. The next bit is 1 only at station 1100, so stations 1011 and 1001 give up. Station 1100 then transmits its frame, after which another bidding cycle starts.
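The bidding can be simulated round by round. The sketch below (names are ours) reproduces the example above, where 1100, the highest contending address, wins the channel:

```python
def binary_countdown(addresses, width=4):
    """Binary countdown arbitration: the highest contending address wins.

    Stations broadcast their address bits MSB-first; the bits of each round
    are ORed together, and a station that sent a 0 while the OR was 1 drops
    out of the current bidding cycle."""
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):
        round_or = max((a >> bit) & 1 for a in contenders)
        if round_or:  # anyone who sent a 0 this round gives up
            contenders = {a for a in contenders if (a >> bit) & 1}
    (winner,) = contenders
    return winner

print(format(binary_countdown([0b0001, 0b1001, 0b1100, 0b1011]), "04b"))  # -> 1100
```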
Limited Contention Protocols:
• Collision-based protocols (pure and slotted ALOHA, CSMA/CD) perform well when the network load is low.
• Collision-free protocols (bit-map, binary countdown) perform well when the load is high.
• A limited contention protocol combines their advantages:
1. It behaves like the ALOHA scheme under light load.
2. It behaves like the bit-map scheme under heavy load.

Adaptive Tree Walk Protocol:

• Partition the group of stations and limit the contention for each slot.
• Under light load, every station can try for each slot, as in ALOHA.
• Under heavy load, only one group of stations may try for each slot.
• How it works:
1. Treat every station as a leaf of a binary tree.
2. In the first slot (after a successful transmission), all stations (under the root node) may try to acquire the channel.
3. If there is no conflict, fine.
4. In case of conflict, only the stations under one subtree get to try for the next slot (depth-first search).

For Example:

• Slot-0: C*, E*, F*, H* (all ready stations under node 0 can try), conflict
• Slot-1: C* (all ready stations under node 1 can try), C sends
• Slot-2: E*, F*, H* (all ready stations under node 2 can try), conflict
• Slot-3: E*, F* (all ready stations under node 5 can try), conflict
• Slot-4: E* (only E can try), E sends
• Slot-5: F* (only F can try), F sends
• Slot-6: H* (all ready stations under node 6 can try), H sends
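The depth-first probing above can be reproduced with a small recursion. In the sketch below (our own names), stations C, E, F, H are numbered 2, 4, 5, 7 among eight leaves, and the resulting schedule matches the seven slots of the example:

```python
def tree_walk(ready, lo, hi, schedule=None):
    """Adaptive tree walk over stations in [lo, hi) (recursive sketch).

    ready is the set of station numbers with a frame queued. Each call uses
    one probe slot; a collision (two or more ready stations in the range)
    splits the range in half and probes each half, depth-first."""
    if schedule is None:
        schedule = []
    in_range = sorted(s for s in ready if lo <= s < hi)
    if len(in_range) <= 1:
        schedule.append(("send", in_range))        # a success (or an idle slot)
    else:
        schedule.append(("collision", in_range))   # conflict: split the range
        mid = (lo + hi) // 2
        tree_walk(ready, lo, mid, schedule)
        tree_walk(ready, mid, hi, schedule)
    return schedule

slots = tree_walk({2, 4, 5, 7}, 0, 8)   # C=2, E=4, F=5, H=7
print(len(slots))  # -> 7
```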

WIRELESS LANs (Wireless Local Area Networks)

Wireless LAN stands for Wireless Local Area Network. It is also called LAWN (Local Area
Wireless Network). WLAN is one in which a mobile user can connect to a Local Area
Network (LAN) through a wireless connection.

The IEEE 802.11 group of standards defines the technologies for wireless LANs. For medium sharing, the 802.11 standard uses CSMA/CA (carrier sense multiple access with collision avoidance). It also specifies an encryption method, the Wired Equivalent Privacy (WEP) algorithm.

Wireless LANs provide high-speed data communication in small areas such as a building or an office. WLANs allow users to move around within a confined area while remaining connected to the network.

In some instances, wireless LAN technology is used to save costs and avoid laying cable, while in other cases it is the only option for providing high-speed internet access to the public. Whatever the reason, wireless solutions are popping up everywhere.

Examples of WLANs that have been available are NCR's WaveLAN and Motorola's ALTAIR.

Advantages of WLANs
o Flexibility: Within radio coverage, nodes can communicate without further restriction. Radio waves can penetrate walls, so senders and receivers can be placed anywhere, even out of sight (e.g., within devices, inside walls, etc.).
o Planning: Only wireless ad-hoc networks allow communication without previous planning; any wired network needs wiring plans.
o Design: Wireless networks allow for the design of independent, small devices which
can for example be put into a pocket. Cables not only restrict users but also designers
of small notepads, PDAs, etc.
o Robustness: Wireless networks can handle disasters, e.g., earthquakes, flood etc.
whereas, networks requiring a wired infrastructure will usually break down
completely in disasters.
o Cost: The cost of installing and maintaining a wireless LAN is on average lower than
the cost of installing and maintaining a traditional wired LAN, for two reasons. First,
after providing wireless access to the wireless network via an access point for the
first user, adding additional users to a network will not increase the cost. And
second, wireless LAN eliminates the direct costs of cabling and the labor associated
with installing and repairing it.
o Ease of Use: Wireless LAN is easy to use and the users need very little new
information to take advantage of WLANs.

Disadvantages of WLANs
o Quality of Service: The quality of wireless LAN service is typically lower than that of wired networks. The main reasons are the lower bandwidth due to limitations in radio transmission, higher error rates due to interference, and higher delay/delay variation due to extensive error detection and correction mechanisms.
o Proprietary Solutions: Due to slow standardization procedures, many companies have come up with proprietary solutions offering standard functionality plus many enhanced features. Most components today adhere to the basic standards IEEE 802.11a or 802.11b.
o Restrictions: Several govt. and non-govt. institutions world-wide regulate the
operation and restrict frequencies to minimize interference.
o Global operation: Wireless LAN products are sold in all countries so, national and
international frequency regulations have to be considered.
o Low Power: Devices communicating via a wireless LAN typically consume significant power, and many wireless devices run on battery power. The WLAN design should therefore take this into account and implement special power-saving modes and power-management functions.
o License free operation: LAN operators don't want to apply for a special license to be
able to use the product. The equipment must operate in a license free band, such as
the 2.4 GHz ISM band.
o Robust transmission technology: If a wireless LAN uses radio transmission, many other electrical devices can interfere with it (such as vacuum cleaners, train engines, hair dryers, etc.). Wireless LAN transceivers cannot be adjusted for perfect transmission in a standard office or production environment.

What is Data Link Layer Switching


Network switching is the process of forwarding data frames or packets from one port to another, leading to data transmission from source to destination. The data link layer is the second layer of the Open Systems Interconnection (OSI) model; its function is to divide the stream of bits from the physical layer into data frames and transmit the frames according to switching requirements. Switching in the data link layer is done by network devices called bridges.
Bridges
A data link layer bridge connects multiple LANs (local area networks) together to form a
larger LAN. This process of aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts of a single network.
The following diagram shows connection by a bridge −

Switching by Bridges
When a data frame arrives at a particular port of a bridge, the bridge examines the frame's data link address, or more specifically, the MAC address. If the destination address is valid and the frame needs to be switched, the bridge forwards the frame to the destination port; otherwise, the frame is discarded.
The bridge is not responsible for end-to-end data transfer; it is concerned only with transmitting the data frame from one hop to the next. Hence, bridges do not examine the payload field of the frame, and can therefore switch packets of any network layer protocol.
Bridges also connect virtual LANs (VLANs) to make a larger VLAN.
If any segment of the bridged network is wireless, a wireless bridge is used to perform the
switching.
There are three main ways for bridging −

• simple bridging
• multi-port bridging
• learning or transparent bridging
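A transparent (learning) bridge builds its forwarding table from the source addresses of the frames it sees. A minimal sketch of one forwarding decision (function and variable names are ours, not a real switch API):

```python
def bridge_forward(table, port_in, src_mac, dst_mac, all_ports):
    """One forwarding decision of a learning bridge (illustrative sketch).

    The bridge first learns that src_mac is reachable via port_in, then
    forwards the frame: out the known port for dst_mac, filtered (dropped)
    if that is the arrival port, or flooded to every other port if dst_mac
    is unknown."""
    table[src_mac] = port_in                 # backward learning
    out = table.get(dst_mac)
    if out is None:
        return [p for p in all_ports if p != port_in]   # unknown: flood
    if out == port_in:
        return []                            # same segment: filter (drop)
    return [out]

table = {}
print(bridge_forward(table, 1, "A", "B", [1, 2, 3]))  # -> [2, 3] (flood)
print(bridge_forward(table, 2, "B", "A", [1, 2, 3]))  # -> [1] (learned)
```

Note how the second frame is no longer flooded: the bridge learned A's port from the first frame, so it forwards directly.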
