
DCN Unit 2

The Data Link Layer is the second layer of the OSI model, providing data reliability and establishing connections among network nodes. It performs functions such as framing, error control, flow control, and access control, and is divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). The layer offers services to the Network Layer, including unacknowledged and acknowledged connectionless services, as well as acknowledged connection-oriented services, with various protocols like HDLC used for data transmission.

Uploaded by

seevaranjinee.s

What is Data-Link layer?

– Definition
By Dinesh Thakur

The data link layer is the second layer in the OSI reference model and lies above the
physical layer. The physical layer provides only a raw bitstream service between
computers; the data link layer adds reliability and provides the tools to establish,
maintain, and release data link connections among network nodes.
A data link connection between nodes can consist of one or more physical lines,
including copper-wire cable, optical fiber cable, microwave links, and satellite
channels. The devices located at the network nodes may be terminals, computers,
switches, or other communicating equipment. The information exchanged between
the nodes can take any form, including link-control functions, user data, or remote
function calls.
The data link layer performs the following functions.
• It receives data from the network layer and divides it into manageable units
called frames.
• It provides addressing information by adding a header to each frame.
The physical addresses of the source and destination machines are added to each frame.

• It provides a flow control mechanism to ensure that the sender does not transmit
data faster than the receiver can process it.
• It provides an error control mechanism to detect and retransmit damaged, duplicate,
or lost frames, thus adding reliability to the physical layer.
• Another function of the data link layer is access control. When two or more devices are
attached to the same link, data link layer protocols determine which device has control
over the link at any given time.
• Initialization: This function establishes an active connection over an already
existing transmission path; it is called link initialization. The data link layer is not
concerned with how the path is set up or how bits are moved; the physical layer takes
care of those things. To initialize the link, the peer data link layers exchange service
request and indication primitives.
• Information segmenting (framing): The process of breaking up a long bit stream
into several small slices is known as message segmentation or framing. When data
is transmitted as one long bit stream over a noisy channel, a single error forces
retransmission of the entire message. Shorter messages have a lower probability of
error and take less time to retransmit; this is the essential reason for framing.
Apart from framing, the data link layer also attaches control information to identify the
start and end of each frame and to check for errors.
• Error control: Since errors inevitably occur during transmission, the data link
layer must be able to detect and correct them to maintain a high degree of
information integrity.
• Data synchronization: The synchronization between transmitter and receiver is
essential to ensure that the information received is correct. The data link layer must
align the character-decoding mechanism of the receiver to the character-encoding
mechanism of the transmitter.
• Flow control: Consider a transmission process in which the sender is faster than
the receiver. It is possible for the sender to overwhelm the receiver. Hence, there is
a need to control the speed of transfer. Flow control is used at the data link layer to
control the data transfer process between speed incompatible nodes of a network.
• Abnormal condition recovery: Abnormal conditions, such as loss of response or
failure of a transfer, are handled by special functions at the data link layer. These
functions detect such problems and recover the transmission.
• Termination: After the data transfer is complete, the data link layer relinquishes
control of the link. This activity is known as termination; the data link layer uses
special functions for it.
In LANs, the data link layer is divided into two sublayers:

Logical Link Control Sublayer: The uppermost sublayer is Logical Link Control
(LLC). This sublayer multiplexes protocols running atop the data link layer, and
optionally provides flow control, acknowledgment, and error recovery. The LLC
provides addressing and control of the data link. It specifies which mechanisms are
to be used for addressing stations over the transmission medium and for controlling
the data exchanged between the originator and recipient machines.
Media Access Control Sublayer: The sublayer below it is Media Access Control
(MAC). Sometimes this term refers to the sublayer that determines who is allowed to
access the medium at any one time; other times it refers to a frame structure
containing MAC addresses. There are generally two forms of media access control,
distributed and centralized; both may be compared to how people take turns speaking
in a conversation.
The Media Access Control sublayer also determines where one frame of data ends
and the next one starts. There are four ways of doing that: time-based framing,
character counting, byte stuffing, and bit stuffing.
We’ll be covering the following topics in this tutorial:

• Services Provided To Network Layer
• Data Link Layer Protocols
Services Provided To Network Layer

• The network layer is layer 3 of the OSI model and lies above the data link layer.
The data link layer provides several services to the network layer.
• One of the major services provided is transferring data from the network layer
on the source machine to the network layer on the destination machine.
• On the source machine the data link layer receives data from the network layer, and
on the destination machine it passes this data up to the network layer, as shown in the figure.
• The path shown in fig (a) is the virtual path. The actual path is network layer ->
data link layer -> physical layer on the source machine, then over the physical
medium, and then physical layer -> data link layer -> network layer on the destination machine.

The three major types of services offered by data link layer are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection oriented service.
1. Unacknowledged Connectionless Service

(a) In this type of service the source machine sends frames to the destination machine,
but the destination machine does not send back any acknowledgement of these frames.
Hence it is called an unacknowledged service.
(b) There is no connection establishment between source and destination machine
before data transfer or release after data transfer. Therefore it is known as
connectionless service.
(c) There is no error control i.e. if any frame is lost due to noise on the line, no attempt
is made to recover it.
(d) This type of service is used when error rate is low.
(e) It is suitable for real time traffic such as speech.
2. Acknowledged Connectionless Service

(a) In this service, no connection is established before the data transfer, and none
is released after the data transfer between source and destination.
(b) When the sender sends data frames to the destination, the destination machine sends
back an acknowledgement for these frames.
(c) This type of service provides additional reliability because the source machine
retransmits a frame if it does not receive an acknowledgement for it within a
specified time.
(d) This service is useful over unreliable channels, such as wireless systems.
3. Acknowledged Connection-Oriented Service

(a) This service is the most sophisticated service provided by data link layer to
network layer.
(b) It is connection-oriented. This means that a connection is established between
source and destination before any data is transferred.
(c) In this service, data transfer has three distinct phases:-
(i) Connection establishment
(ii) Actual data transfer
(iii) Connection release
(d) Here, each frame transmitted from source to destination is given a specific
number and is acknowledged by the destination machine.
(e) All frames are received by the destination in the same order in which they are
sent by the source.
Data Link Layer Protocols

Two major classes of protocols, widely used by the data link layer are bit-oriented
protocols and character-oriented protocols.
Character-oriented Protocols

Character-oriented protocols use a predefined set of characters to convey
information. A special character set is used for formatting data and supervising
the transfer of information across the link. Different character codes have been
developed; these codes differ in the number of bits used per character. There
are graphics characters, which represent symbols; control characters, which are used
to control a remote terminal; and communication characters, which
control computer functions such as synchronization and message handling.
The most common character-oriented code is the seven-bit ASCII code. ASCII stands
for American Standard Code for Information Interchange. Every character is identified
by a 7-bit binary code. One extra bit, known as a parity bit, is added for error
detection; its value is determined by the number of 1s in the character. With odd
parity, if the number of 1s is even, the parity bit is set to 1 to make the total
number of 1s odd; otherwise it is set to 0. With even parity, if the number of 1s in
the character is odd, the parity bit is set to 1 to make the total number of 1s even;
otherwise a 0 is used. The ASCII character table is shown in Table 3.1. Another
widely used character code is the Extended Binary Coded Decimal Interchange Code
(EBCDIC), an eight-bit code similar to ASCII.
Even though character-oriented protocols were used extensively in many applications,
they have a number of shortcomings. Because of this, they are seldom used in modern
applications.
Bit-oriented Protocols

Out of many bit-oriented protocols, some important protocols and the organizations
that developed them are given below.
• High-level Data Link Control (HDLC) – International Organization for Standardization (ISO)
• Advanced Data Communication Control Procedure (ADCCP) – American
National Standards Institute (ANSI)
• Synchronous Data Link Control (SDLC) – IBM
HDLC Protocol Features
The following are the features of HDLC protocol.
• The protocol and data are totally independent. This property is known as
transparency.
• It is suitable for networks with a variety of configurations, including
point-to-point, multidrop, and loop configurations.
• It supports half and full duplex operations.
• It efficiently works over links with long propagation delay and links with high data
rates.
• It provides high reliability. Problems like data loss, data duplication, and data
corruption do not occur.
The HDLC Protocol
Three modes of operations are defined for the HDLC protocol. They are:
• Normal Response Mode (NRM)
• Asynchronous Response Mode (ARM)
• Asynchronous Balanced Mode (ABM)
The first two modes are useful for point-to-point and multidrop configurations. In
these two modes, there is one primary station and one or more secondary stations.
The primary station is responsible for link initialization, controlling the data
flow between primary and secondary stations, error control, and logical
disconnection of the secondary stations.
In NRM, the primary station polls the secondary stations before allowing them to
start transmission. In ARM, a secondary station may start transmission without a
poll. NRM is most suitable for multidrop environments.
The ABM mode is suitable only for point-to-point configurations. Each station assumes
the role of primary or secondary, depending on need; there is no polling in
this mode. In the HDLC protocol, data is transmitted in the form of frames. The frame
consists of six fields as shown in the figure.
FLAG: This field indicates the start and end of a frame. The special eight-bit
sequence 01111110 is referred to as a flag; every frame starts and ends with one.
Since this bit sequence could accidentally occur in the data of the information field,
care must be taken to avoid the occurrence of this pattern in the useful data. In HDLC,
a technique called bit stuffing performs this.
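To make the flag avoidance concrete, the bit stuffing rule can be sketched in Python (the function names here are our own, for illustration only): after any run of five consecutive 1s in the payload, a 0 is inserted, so the flag pattern 01111110 can never appear inside the data; the receiver removes these stuffed 0s.

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")   # stuffed bit breaks the run
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                skip = True
        else:
            run = 0
    return "".join(out)
```

The stuffed stream never contains six consecutive 1s, so the flag 01111110 is unambiguous as a frame delimiter.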
ADDRESS: This field contains the address of the secondary station and may
contain one or more 8-bit addresses. A single 8-bit address is sufficient when the
total number of stations does not exceed 256; otherwise an extended (multi-byte)
address is used.
CONTROL This field identifies the function and purpose of the frame. Depending on
the protocol, an 8-bit or a 16-bit control field is used.
DATA This field contains the user data to be transmitted. It can be arbitrarily long or
empty.
Frame Check Sequence (FCS): Every frame is checked for validity by the 16-bit FCS,
which identifies errors that occur in the data during transit. It uses a 16-bit
Cyclic Redundancy Check (CRC) code. The CRC code is derived from the data and
control fields at the transmitter and sent to the receiver. The receiver derives
the CRC again from the received data and control fields and compares it with the
received CRC; if they differ, an error is assumed.
Three types of frames are used with different control fields. They are:
• Information frames
• Supervisory frames
• Unnumbered frames
The information frames carry data. Supervisory frames perform basic link control
functions and the unnumbered frames are used to perform supplementary link control
function.
Information Frames
The first bit of an information frame is 0. Bits 2 to 4 provide error control and
flow control; this field is known as Seq. The transmitting station sends a 3-bit
modulo-8 sequence number in the Seq field along with the data. When a receiving
station receives this frame, it sends an acknowledgement in bit positions 6 to 8,
a field known as Next. The number in the Next field denotes the next frame number
expected by the receiver. This kind of acknowledgement is known as piggybacked
acknowledgement; an acknowledgement may also be sent in a supervisory frame.
Bit 5 is the Poll/Final bit, also written P/F. The P (poll) function is used by the
primary station to poll terminals for data; terminals set F (final) on their last
frame. The supervisory and unnumbered frames also use this bit.
Supervisory Frames
The following are some of the supervisory frame functions.
Flow Control Once a station has completed the transmission of seven frames, no
more frames are transmitted, until the acknowledgment for the first frame is received.
Error Control: If a received frame contains an error, a negative acknowledgement
(NAK) is sent back to the transmitter via a supervisory frame. Two types of
protocols are used for this: the go-back-N protocol and the selective repeat
protocol. In go-back-N, the garbled frame and all frames sent after it are
retransmitted. In selective repeat, the sending station retransmits only the
frames that contain errors.
Pipelining More than one frame may be in transit at a time. This allows efficient use
of links and reduces the average propagation delay.
Types of Supervisory Frames
The first bit of a supervisory frame has the value 1 and the second bit the value 0.
Bits 3 and 4 denote the type, bit 5 is the P/F field, and bits 6, 7 and 8 form the Next field.

There are four types of supervisory frames. The details of these are given below.
Type 0: This type of supervisory frame is known as receive ready. It is an
acknowledgement frame, used to indicate the number of the next frame expected.
It is sent when there is no outgoing data frame to carry a piggybacked
acknowledgement. The type field of this frame contains the bit pattern '00'.
Type 1: It is a negative acknowledgment frame known as reject. It is used to indicate
that an error has occurred. The Next field indicates that the sender is requested to
retransmit all the frames starting from the number specified in the Next field. The type
field contains ’01’.
Type 2: This supervisory frame is named receive not ready. It informs the sender
that all frames up to, but not including, the one specified in the Next field are
acknowledged, and asks the sender to stop sending. It is transmitted to indicate
some temporary problem. The type field contains '10'.
Type 3: This frame is known as selective reject. It calls for the retransmission of only
the specified frame. The sequence number of the frame to be retransmitted is
specified in the next field.
The third class of frame, whose format is given in the figure, is the unnumbered
frame. It is used for control purposes.
A specific unnumbered frame is the unnumbered acknowledgement frame, sent as an
acknowledgement for lost or damaged control frames.
The Sliding Window Protocol

One important protocol used by the data link layer is the sliding window protocol.
In its simplest form, with a window of size one, it reduces to a stop-and-wait
approach: the sender transmits a frame and transmits the second frame only after
the acknowledgement for the first frame is received. The acknowledgement, or Next
field, of a frame contains the number of the next frame the receiver expects. If
this number matches the number of the frame the sender is trying to send next, the
sender discards the first frame from its buffer; otherwise, it retransmits the same
frame. The sender maintains a list of consecutive sequence numbers corresponding to
the frames it is prepared to send, known as the sending window. Similarly, the
receiver maintains a window for the frames it is prepared to receive, known as the
receiving window. The two windows need not have the same upper and lower bounds, or
even the same size.
The sequence numbers within the sender's window represent frames already sent
but not yet acknowledged. Whenever a new packet arrives from the network layer, it
is given the next highest sequence number and the upper edge of the window advances
by one. When an acknowledgement arrives, the lower edge of the window advances by one.

It is possible that frames within the sender's window will be damaged or lost in
transit. Hence, the sender must keep copies of all of them in memory for possible
retransmission. When the window reaches its maximum size, the sender stops
accepting new frames for transmission.
The receiving data link layer's window consists of the sequence numbers of the
frames it is prepared to accept. Any frame received with a sequence number outside
this range is discarded. When a frame whose sequence number equals the lower edge
of the window is received, the window moves forward by one step and an
acknowledgement frame is generated. Unlike the sender's window, the receiver's
window always maintains its initial size. A receiver window of size 1 means the
receiver accepts frames only in strict sequential order.
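The sender-side bookkeeping described above can be sketched as follows. This is an illustrative model only (the class and method names are our own, not part of any standard): frames are buffered until a cumulative acknowledgement, carrying the receiver's next expected sequence number, releases them.

```python
class SenderWindow:
    """Bookkeeping for a sliding-window sender (illustrative sketch)."""

    def __init__(self, size=7, modulus=8):
        self.size, self.modulus = size, modulus
        self.unacked = {}        # seq -> buffered frame, in sending order
        self.next_seq = 0

    def can_send(self):
        # When the window is full, stop accepting new frames for transmission.
        return len(self.unacked) < self.size

    def send(self, frame):
        assert self.can_send(), "window full"
        seq = self.next_seq
        self.unacked[seq] = frame             # keep a copy for retransmission
        self.next_seq = (seq + 1) % self.modulus
        return seq

    def ack(self, next_expected):
        # Cumulative acknowledgement: the receiver's Next field says every
        # frame before next_expected arrived, so those copies can be dropped.
        while self.unacked and next(iter(self.unacked)) != next_expected:
            del self.unacked[next(iter(self.unacked))]
```

With `size=7` and `modulus=8` this mirrors the 3-bit sequence numbers of HDLC information frames, where at most seven frames may be outstanding at once.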

What is Error Correction and Detection?


By Dinesh Thakur

Error detection and correction has great practical importance in maintaining data
(information) integrity across noisy communication channels and less-than-reliable
storage media.
Error Correction : Send additional information so incorrect data can be corrected and
accepted. Error correction is the additional ability to reconstruct the original, error-free
data.
There are two basic ways to design the channel code and protocol for an error correcting
system:

• Automatic Repeat-Request (ARQ) : The transmitter sends the data and also an error
detection code, which the receiver uses to check for errors, and request re-transmission
of erroneous data. In many cases, the request is implicit; the receiver sends an
acknowledgement (ACK) of correctly received data, and the transmitter re-sends anything
not acknowledged within a reasonable period of time.

• Forward Error Correction (FEC) : The transmitter encodes the data with an error-
correcting code (ECC) and sends the coded message. The receiver never sends any
messages back to the transmitter. The receiver decodes what it receives into the “most
likely” data. The codes are designed so that it would take an “unreasonable” amount of
noise to trick the receiver into misinterpreting the data.

Error Detection : Send additional information so incorrect data can be detected and
rejected. Error detection is the ability to detect the presence of errors caused by noise or
other impairments during transmission from the transmitter to the receiver.
Error Detection Schemes : In telecommunication, a redundancy check is extra data
added to a message for the purposes of error detection. Several schemes exist to achieve
error detection, and are generally quite simple. All error detection codes transmit more
bits than were in the original data. Most codes are "systematic": the transmitter sends a
fixed number of original data bits, followed by a fixed number of check bits (usually
referred to as redundancy) which are derived from the data bits by some deterministic algorithm.
The receiver applies the same algorithm to the received data bits and compares its output
to the received check bits; if the values do not match, an error has occurred at some point
during the transmission. In a system that uses a “non-systematic” code, such as some
raptor codes, data bits are transformed into at least as many code bits, and the transmitter
sends only the code bits.

Repetition Schemes : Variations on this theme exist. Given a stream of data that is to
be sent, the data is broken up into blocks of bits, and in sending, each block is sent some
predetermined number of times. For example, if we want to send “1011”, we may repeat
this block three times each. Suppose we send “1011 1011 1011”, and this is received as
“1010 1011 1011”.
As one group is not the same as the other two, we can determine that an error has
occurred. This scheme is not very efficient, and can be susceptible to problems if the error
occurs in exactly the same place for each group e.g. “1010 1010 1010” in the example
above will be detected as correct in this scheme. The scheme however is extremely
simple, and is in fact used in some transmissions of numbers stations.
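A minimal sketch of the repetition scheme described above, assuming blocks are simply concatenated copies (the function names are hypothetical):

```python
def encode_repetition(block: str, copies: int = 3) -> str:
    """Send the same block a predetermined number of times."""
    return block * copies

def check_repetition(received: str, copies: int = 3) -> bool:
    """Detect an error if the received copies disagree with one another."""
    n = len(received) // copies
    groups = [received[i * n:(i + 1) * n] for i in range(copies)]
    return all(g == groups[0] for g in groups)
```

As the text notes, this detects the "1010 1011 1011" corruption but is fooled when every copy is corrupted identically.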

Parity Schemes : A parity bit is an error detection mechanism. A parity bit is an extra
bit transmitted with a data item, chosen to give the resulting data even or odd
parity. Parity refers to the number of bits set to 1 in the data item. There are 2 types of
parity:

• Even parity – an even number of bits are 1. Example – data: 10010001,
parity bit 1
• Odd parity – an odd number of bits are 1. Example – data: 10010111,
parity bit 0
The stream of data is broken up into blocks of bits, and the number of 1 bits is counted.
Then, a “parity bit” is set (or cleared) if the number of one bits is odd (or even). This
scheme is called even parity; odd parity can also be used. There is a limitation to parity
schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three,
five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity
bit appears to be correct, even though the data is corrupt. For example

• Original data and parity: 10010001+1 (even parity)


• Incorrect data: 10110011+1 (even parity!)
Parity is usually used to catch one-bit errors.
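The parity rules above can be sketched as follows (illustrative helper functions, not a standard API); note that the two-bit corruption from the example passes the check, exactly as the text warns.

```python
def parity_bit(data: str, even: bool = True) -> str:
    """Choose the extra bit so the total number of 1s is even (or odd)."""
    ones = data.count("1")
    if even:
        return "1" if ones % 2 else "0"
    return "0" if ones % 2 else "1"

def check_parity(codeword: str, even: bool = True) -> bool:
    """Verify the parity of data plus its appended parity bit."""
    ones = codeword.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)
```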
Checksum : A checksum of a message is an arithmetic sum of message code words of
a certain word length, for example byte values, and their carry value. The sum is negated
by means of ones-complement, and stored or transferred as an extra code word extending
the message. On the receiver side, a new checksum may be calculated, from the
extended message.
If the new checksum is not 0, an error is detected. Checksum schemes include parity bits,
check digits, and longitudinal redundancy checks. Suppose we have a fairly long message,
which can reasonably be divided into shorter words (a 128-byte message, for instance).
We can introduce an accumulator with the same width as a word (one byte, for instance),
and as each word comes in, add it to the accumulator.
When the last word has been added, the contents of the accumulator are appended to
the message (as a 129th byte, in this case). The added word is called a checksum. Now,
the receiver performs the same operation, and checks the checksum. If the checksums
agree, we assume the message was sent without error.
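A sketch of a ones-complement checksum over 8-bit words, following the description above (the function names are our own): the carry out of each addition is wrapped back into the sum, the result is complemented and appended, and the receiver accepts the message if the ones-complement sum of the extended message is all ones.

```python
def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (ones-complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # wrap the carry back in
    return total

def make_checksum(words, bits=8):
    """Negate the sum by ones-complement; this word extends the message."""
    return (~ones_complement_sum(words, bits)) & ((1 << bits) - 1)

def verify(words_with_checksum, bits=8):
    """Message plus checksum should sum to all ones if no error occurred."""
    mask = (1 << bits) - 1
    return ones_complement_sum(words_with_checksum, bits) == mask
```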
Hamming Distance Based Checks : If we want to detect d bit errors in an n-bit word, we
can map every n-bit word into a bigger word so that the minimum Hamming distance
between valid mappings is d+1. This way, if one receives a word that does not match
any valid word in the mapping, it can successfully be detected as an errored word.
Moreover, d or fewer errors will never transform one valid word into another, because
the Hamming distance between valid words is at least d+1; such errors only lead to
invalid words, which are detected correctly.

Given a stream of m*n bits, we can detect up to d bit errors in every n-bit word using
the above method; in fact, we can detect a maximum of m*d errors if every n-bit word
is transmitted with at most d errors. The Hamming distance between two bit strings is
the number of bit positions in which they differ. The basic idea of an error-correcting
code is to use extra bits to increase the dimensionality of the hypercube and make
sure the Hamming distance between any two valid points is greater than one.
• If the Hamming distance between valid strings is only one, a single bit error results in
another valid string. This means we can’t detect an error.
• If it’s two, then changing one bit results in an invalid string, and can be detected as an
error. Unfortunately, changing just one more bit can result in another valid string, which
means we can’t detect which bit was wrong; so we can detect an error but not correct it.
• If the Hamming distance between valid strings is three, then changing one bit leaves us
only one bit away from the original error, but two bits away from any other valid string.
This means if we have a one-bit error, we can figure out which bit is the error; but if we
have a two-bit error, it looks like one bit from the other direction. So we can have single
bit correction, but that’s all.
• Finally, if the Hamming distance is four, then we can correct a single-bit error and detect
a double-bit error. This is frequently referred to as a SECDED (Single Error Correct,
Double Error Detect) scheme.
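Computing the Hamming distance between two equal-length bit strings is straightforward; this small helper is purely illustrative:

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the bit positions in which two equal-length strings differ."""
    assert len(a) == len(b), "strings must be the same length"
    return sum(x != y for x, y in zip(a, b))
```

For example, a distance of at least 4 between all valid codewords is what the SECDED scheme above relies on.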
Cyclic Redundancy Checks : For CRC we follow some of Peterson and Brown's notation here.

• k is the length of the message we want to send, i.e., the number of
information bits.
• n is the total length of the message we will end up sending: the information
bits followed by the check bits. Peterson and Brown call this a code
polynomial.
• n-k is the number of check bits. It is also the degree of the generating
polynomial.
The basic (mathematical) idea is that we pick the n-k check digits in such a way
that the code polynomial is divisible by the generating polynomial. Then we send
the data, and at the other end we check whether it is still divisible by the
generating polynomial; if it is not, we know we have an error, and if it is, we
hope there was no error. To calculate a CRC, we establish some predefined
(n-k+1)-bit number P (called the polynomial, because modulo-2 arithmetic is a
special case of polynomial arithmetic). We append n-k 0s to our message and divide
the result by P using modulo-2 arithmetic. The remainder is called the Frame Check
Sequence (FCS). We then ship off the message with the remainder appended in place
of the 0s. The receiver can either recompute the FCS and see if it gets the same
answer, or it can divide the whole message (including the FCS) by P and see if it
gets a remainder of 0. As an example, let us take the 5-bit polynomial 11001 and
compute the CRC of a 16-bit message:
Message:           1001110101010110
Generator P:       11001              (n-k = 4 check bits)
Append four 0s:    10011101010101100000
Modulo-2 division of 10011101010101100000 by 11001 leaves remainder 0101.
Transmitted frame: 10011101010101100101
In division don’t bother to keep track of the quotient; we don’t care about the quotient. Our
only goal here is to get the remainder (0101), which is the FCS. CRC’s can actually be
computed in hardware using a shift register and some number of exclusive-or gates.
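The modulo-2 long division described above can be sketched in software as follows. This is a straightforward, unoptimized illustration with our own function names; real implementations use table-driven or shift-register forms.

```python
def crc_remainder(message: str, poly: str) -> str:
    """Append n-k zeros and divide modulo 2; return the FCS."""
    k = len(poly) - 1                          # degree of the generator
    bits = [int(b) for b in message + "0" * k]
    p = [int(b) for b in poly]
    for i in range(len(message)):
        if bits[i]:                            # leading 1: subtract (XOR) P
            for j in range(len(poly)):
                bits[i + j] ^= p[j]
    return "".join(str(b) for b in bits[-k:])

def crc_check(codeword: str, poly: str) -> bool:
    """Divide message plus FCS by P; a zero remainder means no error seen."""
    k = len(poly) - 1
    bits = [int(b) for b in codeword]
    p = [int(b) for b in poly]
    for i in range(len(codeword) - k):
        if bits[i]:
            for j in range(len(poly)):
                bits[i + j] ^= p[j]
    return not any(bits[-k:])
```

Running this on the worked example, the 16-bit message with generator 11001 yields the FCS 0101, and the transmitted frame divides evenly.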

Hamming Code
By Dinesh Thakur

Hamming code is a technique developed by R. W. Hamming for error correction. The
method corrects an error by finding the position at which it has occurred.
We’ll be covering the following topics in this tutorial:

• Determining the positions of redundancy bits


• Generating parity information
• Example of Hamming Code Generation
• Error Detection & Correction
Determining the positions of redundancy bits

So far, we know the exact number of redundancy bits required to be embedded with
a particular data unit.
We know that to correct a single-bit error in 7 data bits, 4 redundant bits are required.
The next task is to determine the positions at which these redundancy bits will
be placed within the data unit.
• The redundancy bits are placed at the positions that correspond to powers
of 2.
• For example, in the case of 7 data bits, 4 redundancy bits are required, making
the total number of bits 11. The redundancy bits are placed at positions 1, 2, 4 and 8,
as shown in the figure.
Generating parity information

• In Hamming code, each r bit is the VRC (parity bit) for one combination of data
bits: r1 is the VRC bit for one combination of data bits, r2 is the VRC for another
combination, and so on.
• Each data bit may be included in more than one VRC calculation.
• The r1 bit is calculated using all bit positions whose binary representation
includes a 1 in the rightmost position.
• The r2 bit is calculated using all bit positions with a 1 in the second position,
and so on.
• Therefore, the various r bits are parity bits over different combinations of bits.
The various combinations are:
r1 : bits 1, 3, 5, 7, 9, 11
r2 : bits 2, 3, 6, 7, 10, 11
r4 : bits 4, 5, 6, 7
r8 : bits 8, 9, 10, 11
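These coverage sets follow directly from the binary representations of the positions: the parity bit placed at a power-of-2 position covers every position whose binary form has that bit set. A short sketch (the function name is illustrative):

```python
def covered_positions(r_pos, total=11):
    """All 1-based positions covered by the parity bit at position r_pos
    (a power of 2): those whose binary representation has that bit set."""
    return [p for p in range(1, total + 1) if p & r_pos]

# Reproduces the four combinations listed above for an 11-bit codeword
print(covered_positions(1))   # [1, 3, 5, 7, 9, 11]
print(covered_positions(2))   # [2, 3, 6, 7, 10, 11]
print(covered_positions(4))   # [4, 5, 6, 7]
print(covered_positions(8))   # [8, 9, 10, 11]
```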
Example of Hamming Code Generation

Suppose the binary data 1001101 is to be transmitted. To implement the Hamming code
for it, the following steps are used:
1. Calculate the number of redundancy bits required. Since the number of data bits
m is 7, the value of r is found from
2^r ≥ m + r + 1
2^4 ≥ 7 + 4 + 1
Therefore, the number of redundancy bits is 4.
2. Determine the positions of the data bits and redundancy bits. The r bits are
placed at the positions that correspond to powers of 2, i.e. 1, 2, 4, 8.
3. Calculate each parity bit over its combination of bit positions, as listed above.
4. Thus the codeword 1 0 0 1 1 1 0 0 1 0 1 will be transmitted.
Error Detection & Correction

Considering the example discussed above, suppose bit number 7 has been changed
from 1 to 0. The received data will be erroneous.
Data sent: 1 0 0 1 1 1 0 0 1 0 1
Data received: 1 0 0 1 0 1 0 0 1 0 1 (seventh bit changed)
The receiver takes the transmission and recalculates four new VRCs using the same
sets of bits used by the sender, plus the relevant parity (r) bit for each set, as
shown in the figure. It then assembles the new parity values into a binary number in
order of r position (r8, r4, r2, r1).
In this example, this step gives the binary number 0111, which corresponds to
decimal 7. Therefore, bit number 7 contains an error. To correct this error, bit 7 is
reversed from 0 to 1.
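The whole encode/detect/correct cycle of this example can be sketched in code. The implementation below is illustrative; it assumes even parity and positions numbered 1..11 from the right, consistent with the worked example:

```python
def hamming_encode(data: str) -> str:
    """Encode a 7-bit data string (written MSB-first, as in the text) into an
    11-bit even-parity Hamming codeword. Positions are numbered 1..11 from the
    right; parity bits sit at positions 1, 2, 4 and 8."""
    n = 11
    bits = [0] * (n + 1)                         # bits[p] = bit at position p
    data_positions = [p for p in range(1, n + 1) if p not in (1, 2, 4, 8)]
    for bit, pos in zip(data, sorted(data_positions, reverse=True)):
        bits[pos] = int(bit)                     # leftmost data bit -> position 11
    for r in (1, 2, 4, 8):                       # even parity over covered positions
        bits[r] = sum(bits[p] for p in range(1, n + 1) if p & r and p != r) % 2
    return ''.join(str(bits[p]) for p in range(n, 0, -1))

def hamming_correct(code: str) -> tuple:
    """Return (error_position, corrected_codeword); position 0 means no error."""
    n = len(code)
    bits = [0] + [int(code[n - p]) for p in range(1, n + 1)]
    syndrome = 0
    for r in (1, 2, 4, 8):                       # failed check contributes r
        if sum(bits[p] for p in range(1, n + 1) if p & r) % 2:
            syndrome += r
    if syndrome:
        bits[syndrome] ^= 1                      # flip the erroneous bit
    return syndrome, ''.join(str(bits[p]) for p in range(n, 0, -1))

print(hamming_encode("1001101"))                 # 10011100101
print(hamming_correct("10010100101"))            # (7, '10011100101')
```

Running it reproduces the transmitted codeword 10011100101, and feeding in the corrupted word (seventh bit flipped) yields the syndrome 7 and the corrected codeword.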
Random Access Methods in Computer Networks
By Dinesh Thakur

Random access, in which a station transmits at a completely arbitrary time, derives
from the Aloha method. The latter takes its name from an experiment performed on a
network connecting the various islands of the Hawaiian archipelago in the early
1970s. In this method, when a coupler has information to transmit, it sends it
without worrying about other users. If there is a collision, that is to say a
superposition of the signals of two or more users, the signals become indecipherable
and are lost. They are retransmitted later, as shown in the figure, in which couplers
1, 2 and 3 collide. Coupler 1 transmits its frame first because it drew the smallest
timer value. Then coupler 2 transmits, and its signals collide with those of coupler
1; both draw a random retransmission delay. Coupler 3 transmits while couplers 1 and
2 are silent, so the frame of coupler 3 passes successfully. The Aloha technique is
the origin of all random access methods.
In addition to its extreme simplicity, Aloha has the advantage of not requiring any
synchronization and of being completely decentralized. Its main drawbacks are the
loss of information resulting from a collision and its lack of efficiency, since the
transmission of colliding frames is not interrupted.
The throughput of such a system becomes very small when the number of couplers
increases. It can be shown mathematically that if the number of stations goes to
infinity, the throughput tends to zero: beyond a certain point, the system is no
longer stable. To reduce the likelihood of conflict between users, various
improvements of this technique have been proposed.

Slotted Aloha

An improvement of the Aloha technique was to divide time into time slices, or slots,
and to authorize frame transmissions only at the beginning of a slot, the
transmission time of a frame occupying exactly one slot. In this way, there is no
collision if a single frame is transmitted at the beginning of a slot. However, if
several frames start transmitting at the beginning of the same slot, their emissions
are superimposed along the slot; in that case, each is retransmitted after a random
time.
This method improves the throughput but remains unstable under heavy load. In
addition, there is an extra cost from a complication of the devices, since all
emissions must be synchronized.
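The gain from slotting can be quantified with the classical throughput curves: with offered load G (frames per frame time), pure Aloha achieves S = G·e^(-2G) because a frame is vulnerable for two frame times, while slotted Aloha achieves S = G·e^(-G). A quick numerical check of the well-known maxima:

```python
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)    # vulnerable period = 2 frame times

def slotted_aloha_throughput(G):
    return G * math.exp(-G)        # vulnerable period = 1 slot

# Maxima of the classical curves: about 18.4% and 36.8% of channel capacity
print(round(pure_aloha_throughput(0.5), 3))     # 0.184, at G = 0.5
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368, at G = 1.0
```

Slotting thus doubles the best achievable throughput, at the price of the synchronization mentioned above.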
CSMA, or carrier sense with random access

The CSMA (Carrier Sense Multiple Access) technique consists of listening to the
channel before transmitting. If the coupler detects a signal on the line, it defers
its transmission to a later time. This significantly reduces the risk of collision
but does not eliminate it completely: if, during the propagation time between the
two most distant stations (the vulnerability period), a coupler has not yet detected
the transmission of a frame, signals may still be superimposed. Therefore, lost
frames must subsequently be retransmitted.
Numerous variations of this technique have been proposed, which differ in three
features:
• The strategy followed by the coupler after detecting the channel status.
• The way collisions are detected.
• The policy for retransmitting messages after a collision.
Its main variants are:
• Non-persistent CSMA – The coupler listens to the channel when a frame is ready to
be sent. If the channel is free, the coupler transmits. Otherwise, it restarts the
same process after a random delay.
• Persistent CSMA – A coupler ready to transmit first listens to the channel and
transmits immediately if it is free. If it detects that the carrier is busy, it
continues to listen until the channel becomes free and transmits at that moment.
This technique wastes less time than the previous one, but it has the disadvantage
of increasing the likelihood of collision, since the frames that accumulate during
the busy period are all transmitted simultaneously.
• P-persistent CSMA – The algorithm is the same as before, but when the channel
becomes free, the coupler transmits with probability p; in other words, it defers
its transmission with probability 1 – p. This algorithm reduces the likelihood of
collision. Suppose two terminals are ready to transmit: in the standard persistent
case, a collision is inevitable. With the new algorithm, there is a probability
1 – p that each terminal does not transmit, thereby avoiding the collision. However,
it increases the delay before transmission, since a terminal may choose not to
transmit, with probability 1 – p, while the channel is free.
• CSMA/CD (Carrier Sense Multiple Access / Collision Detection) – This random
access technique, standardized by the IEEE 802.3 working group, is currently the
most widely used. To the preliminary listening to the network is added listening
during transmission. A coupler ready to transmit that detects a free channel
transmits and continues to listen to the channel, which is why it is sometimes
described as persistent CSMA/CD. If there is a collision, the coupler interrupts its
transmission as soon as possible and sends special signals, called jam bits, so
that all couplers are notified of the collision. It then retries its transmission
later, using an algorithm presented below.
The figure shows CSMA/CD. In this example, couplers 2 and 3 attempt to transmit
while coupler 1 is transmitting its own frame. Couplers 2 and 3 begin to listen and,
give or take the propagation delay, transmit at the same time, at the end of the
Ethernet frame transmitted by coupler 1. A collision ensues. As couplers 2 and 3
continue to listen to the physical medium, they detect the collision, stop their
transmission and draw a random delay to start the retransmission process.

CSMA/CD provides an efficiency gain compared with the other random access
techniques, because collisions are detected immediately and the current transmission
is interrupted. Transmitting couplers recognize a collision by comparing the
transmitted signal with the one passing on the line. Collisions are no longer
recognized by the absence of an acknowledgment but by the detection of interference.
This conflict detection method is relatively simple, but it requires coding
techniques efficient enough to recognize a superposition of signals easily. A
differential coding technique, such as the differential Manchester code, is
generally used for this purpose.
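The retransmission algorithm used by IEEE 802.3 after a collision is truncated binary exponential backoff: after the nth successive collision, the coupler waits a random number of slot times drawn uniformly from [0, 2^min(n,10) − 1], and gives up after 16 attempts. A sketch of the delay selection:

```python
import random

def backoff_slots(collision_count):
    """Number of slot times to wait after the nth successive collision,
    following truncated binary exponential backoff as in IEEE 802.3."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame is dropped")
    k = min(collision_count, 10)       # exponent is capped at 10
    return random.randrange(2 ** k)    # uniform over [0, 2**k - 1]
```

Doubling the range at each attempt spreads retransmissions out as contention grows, which is what stabilizes the channel after a burst of collisions.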

• CSMA/CA – Less well known than CSMA/CD, CSMA/CA (Carrier Sense Multiple Access /
Collision Avoidance) is becoming heavily used in Wi-Fi networks, that is to say the
wireless Ethernet of IEEE 802.11. It is a variation of CSMA/CD that allows the CSMA
method to run when collision detection is not possible, as over radio. Its operating
principle is to resolve contention before the data are transmitted, using
acknowledgments and timers.
Couplers wishing to transmit test the channel several times to ensure that no
activity is detected. Every message received must be acknowledged immediately by the
receiver. New messages are sent only after a certain period, so as to ensure
transport without loss of information. The non-arrival of an acknowledgment after a
predetermined time interval makes it possible to detect whether there was a
collision. This strategy not only makes it possible to implement an in-frame
acknowledgment mechanism but has the advantage of being simple and economical, since
it does not require a collision detection circuit, unlike CSMA/CD.
There are various CSMA techniques with collision resolution, including CSMA/CR
(Carrier Sense Multiple Access / Collision Resolution). Some CSMA variants that may
come under this term also use priority mechanisms, which avoid collisions by
assigning separate priority levels to the different stations connected to the
network.

Code Division Multiple Access (CDMA).


CDMA (Code Division Multiple Access), also called spread spectrum and code division
multiplexing, is one of the competing transmission technologies for digital mobile
phones. The transmitter mixes the packets constituting a message into the digital
signal stream in an order determined by a pseudo-random number sequence that is also
known to the intended receiver, which uses it to extract those parts of the signal
intended for itself. Hence each different random sequence corresponds to a separate
communication channel. CDMA is most used in the USA.
• Unlike TDMA, in CDMA all stations can transmit data simultaneously; there is no
timesharing.
• CDMA allows each station to transmit over the entire frequency spectrum all the
time.
• Multiple simultaneous transmissions are separated using coding theory.
• In CDMA each user is given a unique code sequence.
• The basic idea of CDMA is explained below:

1. Assume we have four stations 1, 2, 3 and 4 connected to the same channel. The
data from station 1 are d1, from station 2 are d2, and so on.
2. The code assigned to the first station is C1, to the second C2, and so on.
3. These assigned codes have two properties:
(a) If we multiply each code by another, we get 0.
(b) If we multiply each code by itself, we get 4 (the number of stations).
4. When the four stations send data on the same channel, station 1 multiplies its
data by its code, i.e. d1 · C1, station 2 multiplies its data by its code, i.e.
d2 · C2, and so on.
5. The data that go on the channel are the sum of all these terms, as shown in the
figure.
6. Any station that wants to receive data from one of the other three stations
multiplies the data on the channel by the code of the sender. For example, suppose
stations 1 and 2 are talking to each other, and station 2 wants to hear what station
1 is saying. It multiplies the data on the channel by C1 (the code of station 1).
7. Because C1 · C1 is 4, while C2 · C1, C3 · C1 and C4 · C1 are all zero, station 2
divides the result by 4 to get the data from station 1:
data = (d1 · C1 + d2 · C2 + d3 · C3 + d4 · C4) · C1
     = d1 · C1 · C1 + d2 · C2 · C1 + d3 · C3 · C1 + d4 · C4 · C1 = 4 · d1
• The code assigned to each station is a sequence of numbers called chips. These
chip sequences are orthogonal sequences with the following properties:
1. Each sequence is made of N elements, where N is the number of stations, as shown
in the figure.
2. If we multiply a sequence by a number, every element in the sequence is
multiplied by that number. This is called multiplication of a sequence by a scalar.
For example:
2 · [+1 +1 -1 -1] = [+2 +2 -2 -2]
3. If we multiply two equal sequences, element by element, and add the results, we
get N, where N is the number of elements in each sequence. This is called the inner
product of two equal sequences. For example:
[+1 +1 -1 -1] · [+1 +1 -1 -1] = 1 + 1 + 1 + 1 = 4
4. If we multiply two different sequences, element by element, and add the results,
we get 0. This is called the inner product of two different sequences. For example:
[+1 +1 -1 -1] · [+1 +1 +1 +1] = 1 + 1 - 1 - 1 = 0
5. Adding two sequences means adding the corresponding elements. The result is
another sequence. For example:
[+1 +1 -1 -1] + [+1 +1 +1 +1] = [+2 +2 0 0]
• Data representation and encoding are done by the stations in the following manner:
1. If a station needs to send a 0 bit, it encodes it as -1.
2. If it needs to send a 1 bit, it encodes it as +1.
3. When a station is idle, it sends no signal, which is interpreted as 0.
• For example, if station 1 and station 2 are each sending a 0 bit, station 3 is
silent and station 4 is sending a 1 bit, the data at the sender sites are
represented as -1, -1, 0 and +1 respectively.
• Each station multiplies its number by its chip sequence, which is unique to that
station.
• Each station sends the resulting sequence onto the channel; the sequence on the
channel is the sum of all four sequences, as shown in the figure.
Suppose station 3, which was silent, is listening to station 2. Station 3 multiplies
the total data on the channel by the code of station 2, which is [+1 -1 +1 -1]:
[-1 -1 -3 +1] · [+1 -1 +1 -1] = -4; -4/4 = -1 → bit 0
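This worked example can be reproduced end to end in code. The sketch below assumes the four length-4 orthogonal (Walsh) codes commonly used in textbook presentations; the code of station 2 and the channel sum match the figures quoted above:

```python
# Orthogonal chip sequences of length 4, one per station (assumed Walsh codes,
# consistent with C2 = [+1 -1 +1 -1] and the channel sum in the example)
C = {
    1: [+1, +1, +1, +1],
    2: [+1, -1, +1, -1],
    3: [+1, +1, -1, -1],
    4: [+1, -1, -1, +1],
}

def channel(symbols):
    """Sum of every station's data symbol times its chip sequence.
    symbols maps station -> +1 (bit 1), -1 (bit 0); absent stations are silent."""
    return [sum(symbols.get(s, 0) * C[s][i] for s in C) for i in range(4)]

def decode(chan, station):
    """Inner product with the sender's code, divided by N, recovers its symbol."""
    return sum(x * c for x, c in zip(chan, C[station])) / 4

# Stations 1 and 2 send bit 0 (-1), station 3 is silent, station 4 sends bit 1 (+1)
chan = channel({1: -1, 2: -1, 4: +1})
print(chan)               # [-1, -1, -3, 1], the channel sequence from the example
print(decode(chan, 2))    # -1.0  -> station 2 sent bit 0
```

Decoding for the silent station 3 yields 0, confirming that orthogonality separates the transmissions.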
Frequency Division Multiple Access (FDMA).
• In FDMA, the available bandwidth is divided into various frequency bands.
• Each station is allocated a band to send its data. This band is reserved for that
station at all times.
• The frequency bands of different stations are separated by small bands of unused
frequency. These unused bands, called guard bands, prevent interference between
stations.

• FDMA is different from frequency division multiplexing (FDM).
• FDM is a physical layer technique, whereas FDMA is an access method in the data
link layer.
• FDM combines loads from different low-bandwidth channels and transmits them using
a high-bandwidth channel. The channels that are combined are low-pass. The
multiplexer modulates the signals, combines them and creates a bandpass signal. The
bandwidth of each channel is shifted by the multiplexer.
• In FDMA, the data link layer in each station tells its physical layer to make a
bandpass signal from the data passed to it. The signal must be created in the
allocated band. There is no physical multiplexer at the physical layer.
Time Division Multiple Access (TDMA).
• In TDMA, the bandwidth of the channel is divided among the various stations on the
basis of time.
• Each station is allocated a time slot during which it can send its data, i.e. each
station can transmit only within its allocated time slot.
• Each station must know the beginning of its slot and the location of its slot.
• TDMA requires synchronization between different stations.
• Synchronization is achieved by using some synchronization bits (preamble bits) at
the beginning of each slot.
• TDMA is different from TDM, although they are conceptually similar.
• TDM is a physical layer technique that combines the data from slower channels and
transmits them using a faster channel. This process uses a physical multiplexer.
• TDMA, on the other hand, is an access method in the data link layer. The data link
layer in each station tells its physical layer to use the allocated time slot. There
is no physical multiplexer at the physical layer.
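A simple round-robin TDMA frame can be sketched with a one-line slot computation. Names and slot sizes below are illustrative; times are in milliseconds to keep the arithmetic exact:

```python
def slot_owner(t_ms, n_stations, slot_ms):
    """Which station (1..n_stations) owns the TDMA slot containing time t_ms,
    assuming a round-robin frame of n_stations equal slots."""
    return (t_ms // slot_ms) % n_stations + 1

# Four stations, 5 ms slots: the 20 ms frame repeats 1, 2, 3, 4, 1, 2, ...
print(slot_owner(0, 4, 5))    # 1
print(slot_owner(11, 4, 5))   # 3
print(slot_owner(20, 4, 5))   # 1  (frame wraps around)
```

This is exactly the knowledge each station needs: the beginning and length of its own slot, kept aligned by the preamble-based synchronization described above.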
