CN MCA Unit-5

The Data Link Layer, the second layer from the bottom in the OSI model, is responsible for transferring datagrams across individual links and includes protocols like Ethernet and PPP. It provides services such as framing, reliable delivery, flow control, error detection, and correction. Error detection techniques include single parity check, two-dimensional parity check, checksum, and cyclic redundancy check (CRC), while error correction can be achieved through backward and forward error correction methods.

Unit-5

→ Data Link Layer


o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The communication channels that connect adjacent nodes are known as links, and in order to
move a datagram from source to destination, the datagram must be moved across each
individual link in the path.
o The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
o A data link layer protocol defines the format of the packet exchanged between the nodes as
well as actions such as error detection, retransmission, flow control, and random access.
o Common Data Link Layer protocols are Ethernet, Token Ring, FDDI, and PPP.
o An important characteristic of the Data Link Layer is that a datagram can be handled by different
link-layer protocols on different links in a path. For example, a datagram may be handled by
Ethernet on the first link and by PPP on the second link.

Following services are provided by the Data Link Layer:

o Framing & Link access: Data Link Layer protocols encapsulate each network-layer datagram within a
link-layer frame before transmission across the link. A frame consists of a data field, in
which the network-layer datagram is inserted, and a number of header fields. The protocol specifies
the structure of the frame as well as a channel access protocol by which the frame is to be
transmitted over the link.
o Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e., transmit the
network-layer datagram without error. A reliable delivery service is accomplished with
retransmissions and acknowledgements. The data link layer mainly provides the reliable delivery
service over links with high error rates, so that errors can be corrected locally, on the link at
which the error occurs, rather than forcing an end-to-end retransmission of the data.
o Flow control: A receiving node can receive frames at a faster rate than it can process them.
Without flow control, the receiver's buffer can overflow and frames can get lost. To
overcome this problem, the data link layer uses flow control to prevent the sending node
on one side of the link from overwhelming the receiving node on the other side of the link.
o Error detection: Errors can be introduced by signal attenuation and noise. Data Link Layer
protocols provide a mechanism to detect one or more errors. This is achieved by adding error-
detection bits to the frame, which the receiving node can then check.
o Error correction: Error correction is similar to error detection, except that the receiving node
not only detects the errors but also determines where in the frame the errors have occurred.
o Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at
the same time. In a Half-Duplex mode, only one node can transmit the data at the same time.

➔ Error Detection
When data is transmitted from one device to another, the system does not guarantee
that the data received is identical to the data transmitted.
An Error is a situation in which the message received at the receiver end is not identical to the message
transmitted.
Types Of Errors

Errors can be classified into two categories:


o Single-Bit Error
o Burst Error
Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the above figure, the message which is sent is corrupted by a single-bit error, i.e., a 0 bit is changed to 1.

Single-bit errors are least likely to appear in serial data transmission. For example, if a sender sends
data at 10 Mbps, each bit lasts only 0.1 µs, and for a single-bit error to occur, the noise must last
only 0.1 µs, which is very rare.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires are used to
send the eight bits of a byte and one of the wires is noisy, then one bit is corrupted per byte.
Burst Error:
A Burst Error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.
The length of a Burst Error is measured from the first corrupted bit to the last corrupted bit.

The duration of the noise causing a Burst Error is longer than that causing a Single-Bit Error.
Burst Errors are most likely to occur in serial data transmission.
The number of affected bits depends on the duration of the noise and data rate.
Error Detecting Techniques:
The most popular Error Detecting Techniques are:
o Single parity check
o Two-dimensional parity check
o Checksum
o Cyclic redundancy check
Single Parity Check
o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end
of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the total number of
transmitted bits would therefore be 9.
o If the number of 1s is odd, a parity bit of 1 is appended; if the number of 1s is
even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and compared
with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity checking.
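The even-parity scheme described above can be sketched in a few lines of Python; the 8-bit sample data unit is an illustrative choice:

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2          # 1 if the count of 1s is odd
    return data_bits + [parity]

def check_even_parity(received):
    """Receiver side: accept only if the number of 1s is even."""
    return sum(received) % 2 == 0

codeword = add_even_parity([1, 0, 1, 1, 0, 0, 1, 0])  # four 1s -> parity 0
print(codeword)                          # 9 bits are transmitted in total

corrupted = codeword.copy()
corrupted[2] ^= 1                        # flip a single bit
print(check_even_parity(codeword))       # True  (accepted)
print(check_even_parity(corrupted))      # False (single-bit error detected)
```

Flipping any one bit changes the count of 1s from even to odd, which is exactly what the receiver's check catches.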

Drawbacks Of Single Parity Checking
o It can only detect errors affecting an odd number of bits.
o If two bits are interchanged, then it cannot detect the errors.

Two-Dimensional Parity Check


o Performance can be improved by using Two-Dimensional Parity Check which organizes the
data in the form of a table.
o Parity check bits are computed for each row, which is equivalent to the single-parity check.
o In Two-Dimensional Parity check, a block of bits is divided into rows, and the redundant row of
bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed from the
received data.

Drawbacks Of 2D Parity Check
o If two bits in one data unit are corrupted and two bits in exactly the same positions in another
data unit are also corrupted, then the 2D parity checker will not be able to detect the error.
o In some cases, this technique cannot detect errors of 4 bits or more.
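The row-and-column parity idea can be sketched as below; the sample block of bits is hypothetical:

```python
def two_d_parity(block):
    """block: list of equal-length rows of bits. Appends an even-parity
    bit to each row, then a column-parity row to the whole block."""
    with_row_parity = [row + [sum(row) % 2] for row in block]
    parity_row = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

data = [[1, 0, 1, 1],
        [0, 1, 0, 1],
        [1, 1, 1, 0]]
for row in two_d_parity(data):
    print(row)
```

The receiver recomputes both the row parities and the column-parity row from the received data and compares them with the received parity bits.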
Checksum
A Checksum is an error detection technique based on the concept of redundancy.
The checksum process is divided into two parts:
Checksum Generator
A Checksum is generated at the sending side. Checksum generator subdivides the data into
equal segments of n bits each, and all these segments are added together by using one's complement
arithmetic. The sum is complemented and appended to the original data, known as checksum field. The
extended data is transmitted across the network.
Suppose L is the total sum of the data segments; then the checksum is the one's complement of L.

The sender follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.
Checksum Checker
A Checksum is verified at the receiving side. The receiver subdivides the incoming data into equal
segments of n bits each, and all these segments are added together, and then this sum is complemented.
If the complement of the sum is zero, then the data is accepted otherwise data is rejected.
The receiver follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise, the data is discarded.
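The sender and receiver procedures above can be sketched as follows; the 4-bit segment size and the sample data are illustrative assumptions:

```python
def ones_complement_sum(segments, n):
    """Add n-bit segments in one's complement (wrap carries around)."""
    total, mask = 0, (1 << n) - 1
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # fold the carry back in
    return total

def make_checksum(segments, n):
    """Sender: complement of the one's-complement sum."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

def verify(segments_with_checksum, n):
    """Receiver: accept only if the complemented sum of all segments is zero."""
    s = ones_complement_sum(segments_with_checksum, n)
    return (s ^ ((1 << n) - 1)) == 0

data = [0b1001, 0b1110, 0b0011]                 # three 4-bit segments
ck = make_checksum(data, 4)
print(verify(data + [ck], 4))                   # True: data accepted
print(verify([0b1001, 0b1111, 0b0011, ck], 4))  # False: a bit was flipped
```

Appending the complement of the sum makes the receiver's total come out as all 1s, whose complement is zero, whenever no error occurred.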

Cyclic Redundancy Check (CRC)
CRC is a redundancy-based technique used to detect errors.
Following are the steps used in CRC for error detection:
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than
the number of bits in a predetermined binary number, known as the divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary
(modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver treats this
whole unit as a single unit and divides it by the same divisor that was used to find the CRC
remainder.
If the result of this division is zero, the data has no error and is accepted.
If the result of this division is not zero, the data contains an error and is therefore
discarded.

Let's understand this concept through an example:


Suppose the original data is 11100 and divisor is 1001.
CRC Generator
o A CRC generator uses a modulo-2 division. Firstly, three zeroes are appended at the end of
the data as the length of the divisor is 4 and we know that the length of the string 0s to be
appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor 1001.
o The remainder generated from the binary division is known as CRC remainder. The generated
value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string would be 11100111 which is sent across the network.

CRC Checker
o The functionality of the CRC checker is similar to the CRC generator.
o When the string 11100111 is received at the receiving end, then CRC checker performs the
modulo-2 division.
o A string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
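The generator and checker can be sketched with plain modulo-2 (XOR) division; this example reuses the document's data 11100 and divisor 1001:

```python
def crc_remainder(bits, divisor):
    """Sender: append len(divisor)-1 zeros, divide modulo-2, return remainder."""
    n = len(divisor) - 1
    dividend = list(map(int, bits + "0" * n))
    div = list(map(int, divisor))
    for i in range(len(bits)):
        if dividend[i] == 1:                 # XOR only when the leading bit is 1
            for j in range(len(div)):
                dividend[i + j] ^= div[j]
    return "".join(map(str, dividend[-n:]))

def crc_check(codeword, divisor):
    """Receiver: divide the whole unit; a zero remainder means accept."""
    n = len(divisor) - 1
    bits = list(map(int, codeword))
    div = list(map(int, divisor))
    for i in range(len(bits) - n):
        if bits[i] == 1:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return all(b == 0 for b in bits[-n:])

rem = crc_remainder("11100", "1001")
print(rem)                                   # 111 -> transmitted unit 11100111
print(crc_check("11100" + rem, "1001"))      # True: remainder zero, accepted
print(crc_check("11100101", "1001"))         # False: corrupted unit rejected
```

The remainder 111 matches the worked example, and dividing the transmitted unit 11100111 by 1001 leaves a zero remainder.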

➔ Error Correction
Error Correction codes are used to detect and correct the errors when data is transmitted from the
sender to the receiver.
Error Correction can be handled in two ways:
o Backward error correction: Once the error is discovered, the receiver requests the sender to
retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code which
automatically corrects the errors.
A single additional bit can detect the error, but cannot correct it.
For correcting errors, one has to know the exact position of the error. For example, to correct a
single-bit error in a 7-bit unit, the error-correction code must determine which one of the seven bits is in error.
To achieve this, we have to add some additional redundant bits.
Suppose r is the number of redundant bits and d is the number of data bits. The number of
redundant bits r can be calculated using the formula:

2^r >= d + r + 1

The value of r is calculated using the above formula. For example, if the value of d is 4, then the
smallest value of r that satisfies the above relation is 3.
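The relation can be checked directly by trying increasing values of r; a small sketch:

```python
def redundant_bits(d):
    """Smallest r with 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

for d in (4, 7, 11):
    print(d, redundant_bits(d))   # 4 -> 3, 7 -> 4, 11 -> 4
```

For d = 4 the loop stops at r = 3, since 2^3 = 8 >= 4 + 3 + 1, matching the example above.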
To determine the position of the bit which is in error, a technique developed by R.W Hamming is
Hamming code which can be applied to any length of the data unit and uses the relationship between
data units and redundant units.
Hamming Code
Parity bits: The bit which is appended to the original data of binary bits so that the total number of
1s is even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of the parity
bit is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is 1.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of parity bit is
1. If the total number of 1s is odd, then the value of parity bit is 0.
Algorithm of Hamming code:
o Information of 'd' bits is combined with the redundant bits 'r' to form a unit of d+r bits.
o The location of each of the (d+r) digits is assigned a decimal position value.
o The 'r' bits are placed at the positions that are powers of 2: 1, 2, 4, ..., 2^(k-1).
o At the receiving end, the parity bits are recalculated. The decimal value of the parity bits
determines the position of an error.
Relationship between error position and binary number:

Let's understand the concept of Hamming code through an example:


Suppose the original data is 1010 which is to be sent.
Total number of data bits 'd' = 4
Number of redundant bits r: 2^r >= d + r + 1

2^r >= 4 + r + 1

Therefore, the value of r is 3, which satisfies the above relation.

Total number of bits = d+r = 4+3 = 7;
Determining the position of the redundant bits
The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The positions of the
redundant bits correspond to powers of 2; therefore, their positions are 2^0, 2^1, 2^2, i.e., 1, 2, 4.
1. The position of r1 = 1
2. The position of r2 = 2
3. The position of r4 = 4
Representation of Data on the addition of parity bits:

Determining the Parity bits


Determining the r1 bit
The r1 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the first position.

We observe from the above figure that the bit positions that include 1 in the first position are 1, 3,
5, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit
positions corresponding to r1 is even; therefore, the value of the r1 bit is 0.
Determining r2 bit
The r2 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the second position.

We observe from the above figure that the bit positions that include 1 in the second position are 2,
3, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these
bit positions corresponding to r2 is odd; therefore, the value of the r2 bit is 1.
Determining r4 bit
The r4 bit is calculated by performing a parity check on the bit positions whose binary representation
includes 1 in the third position.

We observe from the above figure that the bit positions that include 1 in the third position are 4, 5,
6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1s at these bit
positions corresponding to r4 is even; therefore, the value of the r4 bit is 0.
Data transferred is given below:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are recalculated.

R1 bit
The bit positions of the r1 bit are 1,3,5,7

We observe from the above figure that the received bits at the r1 positions are 1100. Now, we perform
the even-parity check: the total number of 1s at these positions is even. Therefore,
the value of r1 is 0.
R2 bit
The bit positions of r2 bit are 2,3,6,7.

We observe from the above figure that the received bits at the r2 positions are 1001. Now, we perform
the even-parity check: the total number of 1s at these positions is even. Therefore,
the value of r2 is 0.
R4 bit
The bit positions of r4 bit are 4,5,6,7.

We observe from the above figure that the received bits at the r4 positions are 1011. Now, we perform
the even-parity check: the total number of 1s at these positions is odd. Therefore, the
value of r4 is 1.
o The binary value of the recomputed parity bits, r4r2r1, is 100, and its corresponding decimal
value is 4. Therefore, the error occurred at the 4th bit position. That bit must be changed from 1
to 0 to correct the error.
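The whole procedure, placing parity bits at power-of-two positions, computing even parity, and locating an error from the recomputed parity bits, can be sketched as below. The bit ordering (lowest position first) is an assumption made for illustration; with the document's data 1010 placed at positions 3, 5, 6, 7, the sketch reproduces r1 = 0, r2 = 1, r4 = 0 and locates the flipped 4th bit:

```python
def hamming_encode(data_bits):
    """Even-parity Hamming encode; data_bits listed lowest position first.
    Parity bits occupy the power-of-two positions 1, 2, 4, ..."""
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)                 # index 0 unused; positions 1..n
    j = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):              # not a power of two -> data bit
            code[pos] = data_bits[j]
            j += 1
    for i in range(r):
        p = 2 ** i
        code[p] = sum(code[pos] for pos in range(1, n + 1)
                      if pos & p and pos != p) % 2
    return code[1:]

def hamming_error_position(received):
    """Return 0 if no error, else the 1-based position of the flipped bit."""
    n = len(received)
    code = [0] + received
    syndrome, i = 0, 0
    while 2 ** i <= n:
        p = 2 ** i
        if sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p
        i += 1
    return syndrome

codeword = hamming_encode([0, 1, 0, 1])   # data 1010, lowest position first
print(codeword)                           # positions 1..7: r1 r2 d r4 d d d
received = codeword.copy()
received[3] ^= 1                          # flip the bit at position 4
print(hamming_error_position(received))   # 4 -> error at the 4th position
```

Flipping the located bit back restores the original codeword, which is exactly the correction step described above.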
➔ Multiple access protocol- ALOHA, CSMA, CSMA/CA and CSMA/CD
Data Link Layer
The data link layer is used in a computer network to transmit data between two devices or
nodes. It is divided into two sublayers: data link control and multiple access
resolution/protocol. The upper sublayer is responsible for flow control and error control,
and hence is termed logical link control (LLC). The lower sublayer
is used to handle and reduce collisions when multiple stations access a shared channel; hence it is termed
media access control (MAC), or multiple access resolution.
Data Link Control
A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer network.
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, data link control
is enough to handle the channel. Suppose, however, that there is no dedicated path between two
devices. In that case, multiple stations access the channel and may transmit
data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access
protocol is required to reduce collisions and avoid crosstalk between the channels.
For example, suppose there is a classroom full of students. When a teacher asks a question,
all the students (small channels) in the class start answering at the same time
(transferring data simultaneously). Because all the students respond at once, the answers
overlap and information is lost. It is therefore the responsibility of the teacher (the multiple
access protocol) to manage the students and let them answer one at a time.
Following are the types of multiple access protocols, subdivided into different processes:

A. Random Access Protocol


In this protocol, all stations have equal priority to send data over the channel. In a random
access protocol, no station depends on another station, nor does any station control
another. Depending on the channel's state (idle or busy), each station transmits its data
frame. However, if more than one station sends data over the channel at the same time, there may be a
collision or data conflict. Due to the collision, data frame packets may be lost or corrupted,
and hence not received by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on the
channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
ALOHA Random Access Protocol
ALOHA was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to
transmit data. Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.

Aloha Rules
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit at the same time.
4. ALOHA relies on acknowledgment of frames; there is no collision detection.
5. It requires retransmission of the data after some random amount of time.

Pure Aloha
Pure ALOHA is used whenever data is available for sending over the channel. In pure
ALOHA, each station transmits data to the channel without checking whether the channel is idle or
busy, so collisions may occur and data frames may be lost. When a station transmits a
data frame to the channel, it waits for the receiver's acknowledgment. If the acknowledgment does not
arrive within the specified time, the station assumes the frame has been lost or destroyed and waits for
a random amount of time, called the backoff time (Tb).
It then retransmits the frame until the data is successfully delivered to the receiver.
1. The total vulnerable time of pure ALOHA is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2 and is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).

As we can see in the figure above, there are four stations accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end;
the other frames are lost or destroyed. Whenever two frames occupy the shared channel
simultaneously, a collision occurs and both suffer damage: even if only the first bit of a new frame
overlaps with the last bit of a frame that is almost finished, both frames are destroyed, and
both stations must retransmit their data frames.
Slotted Aloha
Slotted ALOHA was designed to improve on pure ALOHA's efficiency, because pure ALOHA has
a very high probability of frame collision. In slotted ALOHA, the shared channel is divided into fixed time
intervals called slots. If a station wants to send a frame on the shared channel, the frame can
only be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If
a station misses the beginning of a slot, it must wait until the
beginning of the next slot. However, the possibility of a collision remains when two or more stations
try to send a frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted ALOHA when G = 1 and is 36.8% (about 37%).
2. The probability of successfully transmitting a data frame in slotted ALOHA is S = G * e^(-G).
3. The total vulnerable time required in slotted ALOHA is Tfr.
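The throughput formulas for both variants (S = G * e^(-2G) for pure ALOHA, S = G * e^(-G) for slotted ALOHA) can be evaluated directly to recover the quoted maxima:

```python
import math

def pure_aloha_throughput(G):
    """Pure ALOHA: S = G * e^(-2G); maximum at G = 1/2."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slotted ALOHA: S = G * e^(-G); maximum at G = 1."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 -> about 37%
```

Halving the vulnerable time (Tfr instead of 2 * Tfr) is exactly what doubles the achievable maximum throughput.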

CSMA (Carrier Sense Multiple Access)


Carrier Sense Multiple Access (CSMA) is a media access protocol that senses the traffic on a
channel (idle or busy) before transmitting data. If the channel is idle, the station can
send data to the channel; otherwise, it must wait until the channel becomes idle. This reduces the
chance of a collision on the transmission medium.
CSMA Access Modes
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel
and, if the channel is idle, immediately sends the data. Otherwise, it keeps sensing the channel
continuously and transmits the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: In this mode of CSMA, each node senses the channel before transmitting,
and if the channel is idle, it immediately sends the data. Otherwise, the
station waits for a random time (it does not sense continuously), and when it then finds the channel
idle, it transmits the frame.
Non-persistence reduces collisions at the cost of leaving the channel idle more often. P-Persistent: This mode is a combination of the 1-persistent and non-persistent modes. Each node
senses the channel, and if the channel is idle, it sends a frame with
probability p. If the frame is not transmitted (with probability q = 1 - p), the station waits for
the next time slot and repeats the process.
O-Persistent: In the O-persistent method, a transmission order among the stations is defined before
transmission on the shared channel. When the channel is found idle, each station
waits for its assigned turn to transmit the data.
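The per-slot decision a p-persistent station makes can be sketched as below; the function name, return labels, and the injectable random source are illustrative choices:

```python
import random

def p_persistent_decision(channel_idle, p, rng=random.random):
    """One time slot of p-persistent CSMA for a station with a frame queued.

    Returns 'wait_channel' while the channel is busy, 'transmit' with
    probability p when it is idle, and 'defer_to_next_slot' with
    probability q = 1 - p."""
    if not channel_idle:
        return "wait_channel"            # keep sensing until the channel is idle
    if rng() < p:
        return "transmit"                # send with probability p
    return "defer_to_next_slot"          # wait for the next slot, then retry

# Deterministic demonstration via an injected "random" draw:
print(p_persistent_decision(False, 0.5, lambda: 0.2))  # wait_channel
print(p_persistent_decision(True, 0.5, lambda: 0.2))   # transmit
print(p_persistent_decision(True, 0.5, lambda: 0.9))   # defer_to_next_slot
```

Injecting the random draw makes the three branches easy to exercise; in practice `random.random()` supplies the draw each slot.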

CSMA/ CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for transmitting
data frames that works within the medium access control layer. A station first senses the shared
channel before broadcasting a frame; if the channel is idle, it transmits the frame while monitoring
whether the transmission is successful. If the frame is received successfully, the station can send its next frame. If
any collision is detected in CSMA/CD, the station sends a jam/stop signal to the shared channel
to terminate the data transmission. After that, it waits for a random time before sending the frame
again.
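The "random time" waited after a collision is, in classic Ethernet, chosen by binary exponential backoff; the text does not specify the scheme, so this is a standard-practice sketch rather than something stated above:

```python
import random

def backoff_slots(attempt, rng=random.randrange):
    """Binary exponential backoff (classic Ethernet rule): after the
    n-th collision, wait k slot times with k drawn uniformly from
    0 .. 2**min(n, 10) - 1."""
    k_max = 2 ** min(attempt, 10)
    return rng(k_max)

# With a worst-case draw, the 3rd collision yields a wait of up to 7 slots.
print(backoff_slots(3, lambda n: n - 1))   # 7
```

Doubling the window after each collision spreads retransmissions out, so repeated collisions between the same stations become increasingly unlikely.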
CSMA/ CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for carrier
transmission of data frames that works within the medium access control layer. When a data frame is sent
to the channel, the sender listens to the medium to determine whether the channel is clear. If the station
receives only a single signal (its own), the data frame has been successfully transmitted to
the receiver. But if it detects two signals (its own and one more, in which frames collided), a collision
has occurred on the shared channel. The sender thus infers a collision from the absence of the
acknowledgment signal.
Following are the methods used in the CSMA/ CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to become idle, and even when it finds the
channel idle, it does not immediately send the data. Instead, it waits for some time; this
period is called the interframe space, or IFS. The IFS duration is also often used to define the
priority of a station.
Contention window: In the contention window method, the total time is divided into slots. When the
station/sender is ready to transmit a data frame, it chooses a random number of slots as its wait
time. If the channel becomes busy, it does not restart the entire process; it merely pauses the timer
and resumes it when the channel is idle again, sending the data packet when the timer expires.
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame if the
acknowledgment is not received before the timer expires.
B. Controlled Access Protocol
Controlled access is a method of reducing data-frame collisions on a shared channel. In the controlled access method,
the stations consult one another, and a station sends a data frame only when it is authorized by the other
stations. In other words, a single station cannot send data frames unless it is approved by all other
stations. There are three types of controlled access: Reservation, Polling, and Token Passing.
C. Channelization Protocols
A channelization protocol allows the total usable bandwidth of a shared channel to be shared
across multiple stations based on time, frequency, or code. All stations can access the channel at the
same time to send their data frames.

Following are the various methods of accessing the channel based on time, frequency, and code:
1. FDMA (Frequency Division Multiple Access)
2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)
FDMA
Frequency Division Multiple Access (FDMA) divides the available bandwidth into
equal frequency bands so that multiple users can send data simultaneously, each through a different subchannel.
Each station is reserved a particular band to prevent crosstalk between the channels and
interference between stations.

TDMA
Time Division Multiple Access (TDMA) is a channel access method. It allows the same frequency
bandwidth to be shared by multiple stations. To avoid collisions on the shared channel, it divides
the channel into time slots and allocates the slots to stations for transmitting their data frames. Each
station thus uses the full frequency bandwidth of the shared channel, but only during its assigned time
slot. However, TDMA has a synchronization overhead: synchronization bits are added to each slot so that
each station knows its time slot.
CDMA
Code Division Multiple Access (CDMA) is a channel access method. In CDMA, all stations
can simultaneously send data over the same channel. It means that each station may transmit
data frames at the full frequency of the shared channel at all times; no division
of the bandwidth into frequency bands or time slots is required. If multiple stations send data on the channel
simultaneously, their data frames are separated by unique code sequences: each station has a
different code for transmitting data over the shared channel. For example, consider a room full of
people who are all speaking at once. Two people can still understand each other if they converse
in a language the others do not share. Similarly, in the network, different stations can communicate
with each other simultaneously, each pair using a different code "language".
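The code-separation idea can be sketched with two stations and 2-chip orthogonal (Walsh) codes; the station names and code choices are illustrative:

```python
# Orthogonal chip sequences (2x2 Walsh codes) for two stations.
codes = {"A": [1, 1], "B": [1, -1]}

def cdma_encode(bit, code):
    """Map bit 1 -> +1 and bit 0 -> -1, then spread by the chip sequence."""
    level = 1 if bit else -1
    return [level * c for c in code]

def cdma_decode(channel, code):
    """Correlate the summed channel signal with one station's code."""
    inner = sum(s * c for s, c in zip(channel, code))
    return 1 if inner > 0 else 0

# Station A sends 1 and station B sends 0 at the same time on one channel:
signal = [a + b for a, b in zip(cdma_encode(1, codes["A"]),
                                cdma_encode(0, codes["B"]))]
print(cdma_decode(signal, codes["A"]))   # 1 -- A's bit recovered
print(cdma_decode(signal, codes["B"]))   # 0 -- B's bit recovered
```

Because the two codes are orthogonal, correlating the combined signal with one station's code cancels the other station's contribution, which is the "different language" analogy in miniature.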
➔ Switching
o When a user accesses the internet or another computer network outside their immediate
location, messages are sent through the network of transmission media. This technique of
transferring the information from one computer network to another network is known
as switching.
o Switching in a computer network is achieved by using switches. A switch is a small hardware
device which is used to join multiple computers together within one local area network (LAN).
o Network switches operate at layer 2 (Data link layer) in the OSI model.
o Switching is transparent to the user and does not require any configuration in the home network.
o Switches are used to forward the packets based on MAC addresses.
o A Switch is used to transfer the data only to the device that has been addressed. It verifies the
destination address to route the packet appropriately.
o It is operated in full duplex mode.
o Packet collision is minimum as it directly communicates between source and destination.
o It does not broadcast messages, which conserves the limited bandwidth.
Why is Switching Concept required?
Switching concept is developed because of the following reasons:
o Bandwidth: It is defined as the maximum transfer rate of a cable. It is a very critical and
expensive resource. Therefore, switching techniques are used for the effective utilization of the
bandwidth of a network.
o Collision: Collision is the effect that occurs when more than one device transmits the message
over the same physical media, and they collide with each other. To overcome this problem,
switching technology is implemented so that packets do not collide with each other.
Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only that device which
has been addressed.
o It increases the overall performance of the network by reducing the traffic on the network.
o There will be fewer frame collisions, as a switch creates a separate collision domain for each connection.
Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o A Switch cannot determine the network connectivity issues easily.
o Proper designing and configuration of the switch are required to handle multicast packets.

➔ Switching Modes
o The layer 2 switches are used for transmitting the data on the data link layer, and it also
performs error checking on transmitted and received frames.
o The layer 2 switches forward the packets with the help of MAC address.
o Different modes are used for forwarding the packets known as Switching modes.
o In switching mode, different parts of a frame are recognized. The frame consists of several
parts such as the preamble, destination MAC address, source MAC address, user's data, and FCS.

There are three types of switching modes:


o Store-and-forward
o Cut-through
o Fragment-free

Store-and-forward

o Store-and-forward is a technique in which the intermediate nodes store the received frame
and then check for errors before forwarding the packets to the next node.
o The layer 2 switch waits until the entire frame has been received. On receiving the entire frame,
the switch stores the frame in its buffer memory. This process is known as storing the frame.

o When the frame has been stored, it is checked for errors. If any error is found, the
frame is discarded; otherwise it is forwarded to the next node. This process is
known as forwarding the frame.
o CRC (Cyclic Redundancy Check) technique is implemented that uses a number of bits to check
for the errors on the received frame.
o The store-and-forward technique ensures a high level of security as the destination network
will not be affected by corrupted frames.
o Store-and-forward switches are highly reliable as they do not forward collided frames.
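The buffer-then-verify behaviour can be sketched as below, assuming a CRC-32 trailer as a stand-in for the real Ethernet FCS computation; the helper names are illustrative.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    # Sender appends a 4-byte CRC-32 checksum (stand-in for the Ethernet FCS).
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def store_and_forward(frame: bytes):
    # The switch buffers the ENTIRE frame, then verifies the CRC before forwarding.
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None          # error found: discard the frame
    return frame             # error-free: forward to the next node

good = make_frame(b"hello")
bad = bytearray(good)
bad[0] ^= 0xFF               # simulate a bit error on the link
print(store_and_forward(good) is not None)   # True: forwarded
print(store_and_forward(bytes(bad)))         # None: discarded
```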
Cut-through Switching

o Cut-through switching is a technique in which the switch forwards a frame as soon as the
destination address has been identified, without waiting for the entire frame to be received.
o As soon as the first six bytes following the preamble (the destination MAC address) have
arrived, the switch looks up the destination in its switching table to determine the outgoing
interface port and forwards the frame to the destination.
o It has a low latency as the switch does not wait for the entire frame to be received before
sending the packets to the destination.
o It has no error-checking mechanism; therefore, frames may be delivered to the receiver with
errors.
o The Cut-through switching technique has a low wait time as it forwards a frame as soon as it
identifies the destination MAC address.
o In this technique, collisions are not detected; frames that have collided are also forwarded.
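The early forwarding decision can be sketched as follows: as soon as the six destination-MAC bytes have arrived, the output port is chosen from the switching table. The table entries here are made-up example values.

```python
# Hypothetical switching table: destination MAC -> output port
switching_table = {"00:16:d3:23:68:8a": 3, "00:22:6b:45:1f:1b": 1}

def cut_through_port(frame_so_far: bytes):
    # In an Ethernet frame (after the preamble) the first 6 bytes are the
    # destination MAC address; cut-through decides as soon as they arrive.
    if len(frame_so_far) < 6:
        return None                      # still waiting for the address
    dst = ":".join(f"{b:02x}" for b in frame_so_far[:6])
    return switching_table.get(dst)      # outgoing port (None if unknown)

frame_start = bytes.fromhex("0016d323688a") + b"rest-of-frame..."
print(cut_through_port(frame_start))     # 3
```

Note that nothing after those six bytes is inspected, which is exactly why errors and collision fragments pass through unchecked.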
Fragment-free Switching

o Fragment-free switching is an advanced form of Cut-through Switching.

o Fragment-free switching reads at least the first 64 bytes of a frame before
forwarding it to the next node, in order to provide error-free transmission.
o It combines the speed of Cut-through Switching with the error checking functionality.

o This technique checks the first 64 bytes of the Ethernet frame, where the addressing
information is available.
o Since a collision is detected within the first 64 bytes of a frame, frames that have collided
are not forwarded further.
Differences b/w Store-and-forward and Cut-through Switching

o Store-and-forward Switching is a technique that waits until the entire frame is received;
Cut-through Switching is a technique that checks the first 6 bytes following the preamble to
identify the destination address.
o Store-and-forward Switching performs error checking: if any error is found in the frame, the
frame is discarded, otherwise it is forwarded to the next node. Cut-through Switching does
not perform any error checking; the frame is forwarded with or without errors.
o Store-and-forward Switching has a high latency as it waits for the entire frame to be
received before forwarding it to the next node. Cut-through Switching has a low latency as
it checks only six bytes of the frame to determine the destination address.
o Store-and-forward Switching is highly reliable as it forwards only error-free packets;
Cut-through Switching is less reliable as it forwards error-prone packets as well.
o Store-and-forward Switching has a high wait time as it waits for the entire frame to be
received before taking any forwarding decision; cut-through switches have a low wait time
as they do not store the whole frame or packets.

➔ Switching techniques
In large networks, there can be multiple paths from sender to receiver. The switching technique will
decide the best route for data transmission.
Switching technique is used to connect the systems for making one-to-one communication.
Classification Of Switching Techniques

Circuit Switching
o Circuit switching is a switching technique that establishes a dedicated path between sender and
receiver.
o In the Circuit Switching Technique, once the connection is established then the dedicated path
will remain to exist until the connection is terminated.
o Circuit switching in a network operates in a similar way as the telephone works.
o A complete end-to-end path must exist before the communication takes place.
o In the circuit switching technique, when a user wants to send data, voice, or video, a
request signal is sent to the receiver, and the receiver sends back an acknowledgment to confirm
the availability of the dedicated path. Only after receiving the acknowledgment is data
transferred over the dedicated path.
o Circuit switching is used in the public telephone network for voice transmission.
o A fixed amount of bandwidth is reserved for the connection in circuit switching technology.
Communication through circuit switching has 3 phases:
o Circuit establishment
o Data transfer
o Circuit Disconnect

Circuit Switching can use either of the two technologies:


Space Division Switches:
o Space Division Switching is a circuit switching technology in which a single transmission path is
accomplished in a switch by using a physically separate set of crosspoints.
o Space Division Switching can be achieved by using crossbar switch. A crossbar switch is a
metallic crosspoint or semiconductor gate that can be enabled or disabled by a control unit.
o Crossbar switches are built using semiconductors; for example, Xilinx implements crossbar
switches using FPGAs.
o Space Division Switching has high speed, high capacity, and nonblocking switches.
Space Division Switches can be categorized in two ways:
o Crossbar Switch
o Multistage Switch

Crossbar Switch
The Crossbar switch is a switch that has n input lines and n output lines. The crossbar switch has
n² intersection points known as crosspoints.
Disadvantage of Crossbar switch:
The number of crosspoints increases as the number of stations is increased. Therefore, it becomes very
expensive for a large switch. The solution to this is to use a multistage switch.
Multistage Switch
o Multistage Switch is made by splitting the crossbar switch into the smaller units and then
interconnecting them.
o It reduces the number of crosspoints.
o If one path fails, then there will be an availability of another path.
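The crosspoint savings can be illustrated numerically. The three-stage crosspoint count 2kN + k(N/n)² used below is a standard textbook formula (N inputs split into groups of n, with k middle-stage switches); it is not derived in the text above, so treat the figures as an illustrative sketch.

```python
def crossbar_crosspoints(n: int) -> int:
    # A single n x n crossbar needs one crosspoint per input/output pair.
    return n * n

def multistage_crosspoints(N: int, n: int, k: int) -> int:
    # Standard three-stage design: N/n first-stage (n x k) switches,
    # k middle ((N/n) x (N/n)) switches, and N/n third-stage (k x n) switches.
    return 2 * k * N + k * (N // n) ** 2

N = 1000
print(crossbar_crosspoints(N))                 # 1000000 crosspoints
print(multistage_crosspoints(N, n=50, k=10))   # 24000 crosspoints
```

For 1000 stations, the multistage design here needs roughly 2% of the crosspoints of a single crossbar, at the cost of possible internal blocking.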
Advantages Of Circuit Switching:
o In the case of Circuit Switching technique, the communication channel is dedicated.
o It has fixed bandwidth.
Disadvantages Of Circuit Switching:
o Once the dedicated path is established, the only remaining delay is the data transmission time.
o It takes a long time to establish a connection (approximately 10 seconds), during which no data
can be transmitted.
o It is more expensive than other switching techniques as a dedicated path is required for each
connection.
o It is inefficient to use because once the path is established and no data is transferred, then the
capacity of the path is wasted.
o In this case, the connection is dedicated therefore no other data can be transferred even if the
channel is free.
Message Switching
o Message Switching is a switching technique in which a message is transferred as a complete
unit and routed through intermediate nodes at which it is stored and forwarded.
o In Message Switching technique, there is no establishment of a dedicated path between the
sender and receiver.
o The destination address is appended to the message. Message Switching provides a dynamic
routing as the message is routed through the intermediate nodes based on the information
available in the message.
o Message switches are programmed in such a way so that they can provide the most efficient
routes.
o Each and every node stores the entire message and then forwards it to the next node. This type
of network is known as a store-and-forward network.
o Message switching treats each message as an independent entity.

Advantages Of Message Switching
o Data channels are shared among the communicating devices that improve the efficiency of
using available bandwidth.
o Traffic congestion can be reduced because the message is temporarily stored in the nodes.
o Message priority can be used to manage the network.
o The size of the message which is sent over the network can be varied. Therefore, it supports
the data of unlimited size.
Disadvantages Of Message Switching
o The message switches must be equipped with sufficient storage to enable them to store the
messages until the message is forwarded.
o The Long delay can occur due to the storing and forwarding facility provided by the message
switching technique.
Packet Switching
o Packet switching is a switching technique in which the whole message is transmitted, but it is
divided into smaller pieces that are sent individually.
o The message is split into smaller pieces known as packets, and packets are given a unique
sequence number to identify their order at the receiving end.
o Every packet contains some information in its headers, such as the source address, destination
address, and sequence number.
o Packets travel across the network, taking the shortest available path.
o All the packets are reassembled at the receiving end in the correct order.
o If any packet is missing or corrupted, a message is sent asking the sender to resend it.
o If all the packets arrive in the correct order, an acknowledgment message is sent.
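The splitting, numbering, and in-order reassembly described above can be sketched as follows; the (sequence number, payload) tuple is a simplified stand-in for a real packet header.

```python
def packetize(message: bytes, size: int):
    # Split the message into numbered packets: (sequence number, payload).
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive out of order; sort by sequence number, then join.
    return b"".join(payload for _, payload in sorted(packets))

msg = b"packet switching splits a message into pieces"
pkts = packetize(msg, 8)
pkts.reverse()                      # simulate out-of-order arrival
print(reassemble(pkts) == msg)      # True
```

The sequence numbers are what allow the receiver to rebuild the message even when individual packets took different routes.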

Approaches Of Packet Switching:
There are two approaches to Packet Switching:
Datagram Packet switching:
o It is a packet switching technique in which each packet, known as a datagram, is treated as
an independent entity. Each packet carries information about its destination, and the switch
uses this information to forward the packet to the correct destination.
o The packets are reassembled at the receiving end in correct order.
o In Datagram Packet Switching technique, the path is not fixed.
o Intermediate nodes take the routing decisions to forward the packets.
o Datagram Packet Switching is also known as connectionless switching.
Virtual Circuit Switching
o Virtual Circuit Switching is also known as connection-oriented switching.
o In the case of Virtual circuit switching, a preplanned route is established before the messages
are sent.
o Call request and call accept packets are used to establish the connection between sender and
receiver.
o In this case, the path is fixed for the duration of a logical connection.
Let's understand the concept of virtual circuit switching through a diagram:

o In the above diagram, A and B are the sender and receiver respectively. 1 and 2 are the
nodes.
o Call request and call accept packets are used to establish a connection between the sender
and receiver.
o When a route is established, data will be transferred.
o After transmission of data, an acknowledgment signal is sent by the receiver that the message
has been received.
o If the user wants to terminate the connection, a clear signal is sent for the termination.

Differences b/w Datagram approach and Virtual Circuit approach

o In the Datagram approach, each node takes routing decisions to forward the packets; in the
Virtual Circuit approach, nodes do not take any routing decision.
o In the Datagram approach, congestion cannot occur as the packets travel in different
directions; in the Virtual Circuit approach, congestion can occur when a node is busy and
does not allow other packets to pass through.
o The Datagram approach is more flexible as all the packets are treated as independent
entities; the Virtual Circuit approach is not very flexible.

Advantages Of Packet Switching:


o Cost-effective: In packet switching technique, switching devices do not require massive
secondary storage to store the packets, so cost is minimized to some extent. Therefore, we can
say that the packet switching technique is a cost-effective technique.
o Reliable: If any node is busy, then the packets can be rerouted. This ensures that the Packet
Switching technique provides reliable communication.
o Efficient: Packet Switching is an efficient technique. It does not require any established path
prior to the transmission, and many users can use the same communication channel
simultaneously, hence makes use of available bandwidth very efficiently.
Disadvantages Of Packet Switching:
o Packet Switching technique cannot be implemented in those applications that require low delay
and high-quality services.
o The protocols used in the packet switching technique are very complex and require a high
implementation cost.
o If the network is overloaded or corrupted, lost packets must be retransmitted. It can
also lead to the loss of critical information if errors are not recovered.
➔ Data Virtualization
Data virtualization is the process of retrieving data from various resources without knowing its type
or the physical location where it is stored. It collects heterogeneous data from different resources
and allows data users across the organization to access this data according to their work
requirements. This heterogeneous data can be accessed through any application, such as web portals,
web services, E-commerce, Software as a Service (SaaS), and mobile applications.
We can use Data Virtualization in the field of data integration, business intelligence, and cloud
computing.
Advantages of Data Virtualization
There are the following advantages of data virtualization -
o It allows users to access the data without worrying about where it physically resides.
o It offers better customer satisfaction, retention, and revenue growth.
o It provides various security mechanisms that allow users to safely store their personal and
professional information.
o It reduces costs by removing data replication.
o It provides a user-friendly interface to develop customized views.
o It provides various simple and fast deployment resources.
o It increases business user efficiency by providing data in real-time.
o It is used to perform tasks such as data integration, business integration, Service-Oriented
Architecture (SOA) data services, and enterprise search.
Disadvantages of Data Virtualization
o It creates availability and scalability issues, because availability is maintained by third-party
providers.
o It requires a high implementation cost.
o Although it saves time during the implementation phase of virtualization, it consumes more
time to generate the appropriate result.
Uses of Data Virtualization
There are the following uses of Data Virtualization -
1. Analyze performance
Data virtualization is used to analyze the performance of the organization compared to previous
years.
2. Search and discover interrelated data
Data Virtualization (DV) provides a mechanism to easily search the data which is similar and internally
related to each other.
3. Agile Business Intelligence
It is one of the most common uses of Data Virtualization. It is used in agile reporting and real-time
dashboards that require timely aggregation, analysis, and presentation of the relevant data from
multiple resources. Both individuals and managers use this to monitor performance, which helps in
daily operational decision processes such as sales, support, finance, logistics, legal, and compliance.
4. Data Management
Data virtualization provides a secure centralized layer to search, discover, and govern the unified
data and its relationships.
Data Virtualization Tools
There are the following Data Virtualization tools -
1. Red Hat JBoss data virtualization
Red Hat virtualization is the best choice for developers and those who are using micro services and
containers. It is written in Java.

2. TIBCO data virtualization
TIBCO helps administrators and users create a data virtualization platform for accessing multiple
data sources and data sets. It provides a built-in transformation engine to combine non-relational and
unstructured data sources.
3. Oracle data service integrator
It is a very popular and powerful data integration tool which mainly works with Oracle products. It
allows organizations to quickly develop and manage data services to access a single view of data.
4. SAS Federation Server
SAS Federation Server provides various technologies such as scalable, multi-user, and standards-based
data access to access data from multiple data services. It mainly focuses on securing data.
5. Denodo
Denodo is one of the best data virtualization tools which allows organizations to minimize the network
traffic load and improve response time for large data sets. It is suitable for both small as well as large
organizations.
Industries that use Data Virtualization
o Communication & Technology
In Communication & Technology industry, data virtualization is used to increase revenue per
customer, create a real-time ODS for marketing, manage customers, improve customer insights,
and optimize customer care, etc.
o Finance
In the field of finance, DV is used to improve trade reconciliation, empowering data democracy,
addressing data complexity, and managing fixed-risk income.
o Government
In the government sector, DV is used for protecting the environment.
o Healthcare
Data virtualization plays a very important role in the field of healthcare. In healthcare, DV
helps to improve patient care, drive new product innovation, accelerating M&A synergies, and
provide a more efficient claims analysis.
o Manufacturing
In manufacturing industry, data virtualization is used to optimize a global supply chain,
optimize factories, and improve IT assets utilization.
➔ What Is a Data Centre?
At the most basic level, a data centre is a physical location where companies store their mission-
critical software and data. The data centre is made up of a network of computing and
storage resources that enable the delivery of shared applications and data.
Data exists and is linked through various data centres, the edge, and public and private clouds in
today's world. The data centre must be able to connect with all of these different locations, on-

premises and in the cloud. The public cloud, too, is made up of data centres. When apps are hosted
in the cloud, the cloud provider's data centre services are used.
Why are data centres important?
Data centres are involved in almost every type of activity done on the Internet, such as:
o Email and file sharing
o Productivity applications
o Customer relationship management (CRM), etc.
Components of a Data centre
Data centre protection is important in data centre architecture because these components store and
handle business-critical data and applications. The components can include routers, switches,
firewalls, storage devices, servers, and application delivery controllers, all of which are part of the
data centre architecture.
Combined in a meaningful way, they provide the following kinds of services:
Infrastructure for the network.
This links end-user locations to physical and virtualized servers, data centre facilities, storage, and
external connectivity.
The storage infrastructure of the data centre.
Data is the modern data centre's lifeblood. This valuable asset is kept in storage systems, which
are often very large in number.
Computing resources.
Applications are a data centre's engines. Servers provide the processing, memory, local storage,
and network connectivity that power applications during their execution.
Facility
The amount of room available for IT equipment. Data centres are some of the world's most
energy-intensive facilities because they operate around the clock. Both architecture and
environmental management are emphasized in order to keep equipment within specified
temperature and humidity ranges.
Operations done in a data centre
Data centre facilities are usually safeguarded in order to protect the performance and integrity of
the data centre's core components and its precious data.
Application resiliency and availability are also maintained through automatic failover and load
balancing in order to preserve application efficiency in the data centre.

What are the data centre infrastructure standards?
ANSI/TIA-942 is the most commonly used specification for data centre architecture and infrastructure.
It includes ANSI/TIA-942-ready certification requirements, which ensure compliance with one of four
data centre tiers based on redundancy and fault tolerance levels.
Tier 1: Infrastructure for the site. Physical incidents are only partly covered in a Tier 1 data centre. It
has a single, nonredundant delivery path and single-capacity components.
Tier 2: Component site facilities with high redundancy. This data centre provides better security against
natural disasters.
Tier 3: Infrastructure that can be maintained at the same time. This data centre provides redundant-
capacity modules and numerous separate delivery paths to defend against nearly all physical
incidents. Each part may be substituted or withdrawn without affecting end-user services.
Tier 4: Site infrastructure that is fault-tolerant. This data centre offers the highest degree of
redundancy and fault tolerance. It provides multiple independent delivery paths so that service
continues in the event of a failure.
Types
We have different modes and services from which we can choose our data centre. Their classification
is determined by whether they are operated by a single entity or a group of organizations, how they
fit (if at all) into the topology of other data centers, the processing and storage technologies they
employ, and even their energy efficiency. Data centers are classified into four categories:

Enterprise data centres


Companies create, own, and manage these, which are tailored for their end users. The majority of the
time, they are located on the corporate campus.

Managed services data centres
On behalf of a corporation, these data centres are operated by a third party (or a managed services
provider). Instead of purchasing the equipment and facilities, the corporation rents them.

Colocation data centres


An organization leases space in a data centre owned by others and located off-site in colocation
("colo") data centres.

Cloud data centres


Data and applications are hosted by a cloud services provider such as Amazon Web Services (AWS),
Microsoft (Azure), IBM Cloud, or another public cloud provider in this off-premises data centre.
There is a lot more to learn about data centres and what the future holds for them and for our
networks.
From mainframes to cloud systems, infrastructure has grown.
Over the last 65 years, computing infrastructure has evolved in three major waves:
o The first wave saw the transition from proprietary mainframes to on-premises, x86-based
servers run by internal IT teams.
o The technology that served applications was widely virtualized in the second wave. This made
for better resource utilization and workload versatility across pools of physical infrastructure.
o We are now in the third wave, which is characterized by the adoption of cloud, hybrid cloud,
and cloud-native technologies; the last of these refers to applications built in and for the cloud.

A data centre is now a distributed network of software.
Distributed computing is the product of this evolution. Data and applications are spread
through several networks, which are then linked and integrated using network services and
interoperability principles to form a single setting. As a result, the word "data centre" is now used to
refer to the agency in charge of these systems, regardless of their location.
Organizations have the option of building and maintaining their own hybrid cloud data centres,
leasing space in colocation facilities (colos), consuming pooled compute and storage resources, or
using public cloud-based services. As a consequence, applications are no longer limited to a single
venue.
In this multicloud age, the data centre has increased in size and complexity, with the goal of
delivering the best possible user experience.
➔ A Day in the Life of a Web Page Request
Getting Started: DHCP, UDP, IP, and Ethernet
Let’s suppose that Bob boots up his laptop and then connects it to an Ethernet cable connected to
the school’s Ethernet switch, which in turn is connected to the school’s router, as shown in Figure
5.32. The school’s router is connected to an ISP, in this example, comcast.net. In this example,
comcast.net is providing the DNS service for the school; thus, the DNS server resides in the Comcast
network rather than the school network. We’ll assume that the DHCP server is running within the
router, as is often the case.
When Bob first connects his laptop to the network, he can’t do anything (e.g., download a Web
page) without an IP address. Thus, the first network-related action taken by Bob’s laptop is to run
the DHCP protocol to obtain an IP address, as well as other information, from the local DHCP
server:
1. The operating system on Bob’s laptop creates a DHCP request message (Section 4.4.2)
and puts this message within a UDP segment (Section 3.3) with destination port 67 (DHCP
server) and source port 68 (DHCP client). The UDP segment is then placed within an IP
datagram (Section 4.4.1) with a broadcast IP destination address (255.255.255.255) and
a source IP address of 0.0.0.0, since Bob’s laptop doesn’t yet have an IP address.
2. The IP datagram containing the DHCP request message is then placed within an Ethernet
frame (Section 5.4.2). The Ethernet frame has a destination MAC address of
FF:FF:FF:FF:FF:FF so that the frame will be broadcast to all devices connected to the switch
(hopefully including a DHCP server); the frame’s source MAC address is that of Bob’s laptop,
00:16:D3:23:68:8A.
3. The broadcast Ethernet frame containing the DHCP request is the first frame sent by Bob’s
laptop to the Ethernet switch. The switch broadcasts the incoming frame on all outgoing ports,
including the port connected to the router.
4. The router receives the broadcast Ethernet frame containing the DHCP request on its
interface with MAC address 00:22:6B:45:1F:1B and the IP datagram is extracted from the

Ethernet frame. The datagram’s broadcast IP destination address indicates that this IP
datagram should be processed by upper-layer protocols at this node, so the datagram’s
payload (a UDP segment) is thus demultiplexed (Section 3.2) up to UDP, and the DHCP
request message is extracted from the UDP segment. The DHCP server now has the DHCP
request message.
5. Let’s suppose that the DHCP server running within the router can allocate IP addresses in the
CIDR (Section 4.4.2) block 68.85.2.0/24. In this example, all IP addresses used within the
school are thus within Comcast’s address block. Let’s suppose the DHCP server allocates
address 68.85.2.101 to Bob’s laptop. The DHCP server creates a DHCP ACK message
(Section 4.4.2) containing this IP address, as well as the IP address of the DNS server
(68.87.71.226), the IP address for the default gateway router (68.85.2.1), and the subnet
block (68.85.2.0/24) (equivalently, the “network mask”). The DHCP message is put inside a
UDP segment, which is put inside an IP datagram, which is put inside an Ethernet frame. The
Ethernet frame has a source MAC address of the router’s interface to the home network
(00:22:6B:45:1F:1B) and a destination MAC address of Bob’s laptop (00:16:D3:23:68:8A).
6. The Ethernet frame containing the DHCP ACK is sent (unicast) by the router to the switch.
Because the switch is self-learning (Section 5.4.3) and previously received an Ethernet frame
(containing the DHCP request) from Bob’s laptop, the switch knows to forward a frame
addressed to 00:16:D3:23:68:8A only to the output port leading to Bob’s laptop.
7. Bob’s laptop receives the Ethernet frame containing the DHCP ACK, extracts the IP datagram
from the Ethernet frame, extracts the UDP segment from the IP datagram, and extracts the
DHCP ACK message from the UDP segment. Bob’s DHCP client then records its IP address
and the IP address of its DNS server. It also installs the address of the default gateway into
its IP forwarding table (Section 4.1). Bob’s laptop will send all datagrams with destination
address outside of its subnet 68.85.2.0/24 to the default gateway. At this point, Bob’s
laptop has initialized its networking components and is ready to begin processing the Web
page fetch. (Note that only the last two DHCP steps of the four presented in Chapter 4 are
actually necessary.)
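The encapsulation in steps 1 and 2 above can be sketched as nested layers. This is an illustrative model using only the field values mentioned in the steps, not the real DHCP/UDP/IP/Ethernet wire formats, which carry many more fields.

```python
# Sketch of the DHCP request encapsulation from steps 1-2 (simplified layering).
dhcp_request = {"type": "DHCPDISCOVER"}

udp_segment = {"src_port": 68, "dst_port": 67,        # DHCP client -> server
               "payload": dhcp_request}

ip_datagram = {"src_ip": "0.0.0.0",                   # no address assigned yet
               "dst_ip": "255.255.255.255",           # IP broadcast
               "payload": udp_segment}

ethernet_frame = {"src_mac": "00:16:D3:23:68:8A",     # Bob's laptop
                  "dst_mac": "FF:FF:FF:FF:FF:FF",     # link-layer broadcast
                  "payload": ip_datagram}

# The switch floods broadcast frames on all ports, so the DHCP server sees it.
print(ethernet_frame["payload"]["payload"]["dst_port"])  # 67
```

Peeling the layers back in the opposite order is exactly the demultiplexing the router performs in step 4.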
Still Getting Started: DNS and ARP
When Bob types the URL for www.google.com into his Web browser, he begins the long
chain of events that will eventually result in Google’s home page being displayed by his
Web browser. Bob’s Web browser begins the process by creating a TCP socket (Section
2.7) that will be used to send the HTTP request (Section 2.2) to www.google.com. In order
to create the socket, Bob’s laptop will need to know the IP address of www.google.com. We
learned in Section 2.5 that the DNS protocol is used to provide this name-to-IP-address
translation service.
8. The operating system on Bob’s laptop thus creates a DNS query message (Section 2.5.3),
putting the string “www.google.com” in the question section of the DNS message. This DNS

message is then placed within a UDP segment with a destination port of 53 (DNS server).
The UDP segment is then placed within an IP datagram with an IP destination address of
68.87.71.226 (the address of the DNS server returned in the DHCP ACK in step 5) and a
source IP address of 68.85.2.101.
9. Bob’s laptop then places the datagram containing the DNS query message in an Ethernet
frame. This frame will be sent (addressed, at the link layer) to the gateway router in Bob’s
school’s network. However, even though Bob’s laptop knows the IP address of the school’s
gateway router (68.85.2.1) via the DHCP ACK message in step 5 above, it doesn’t know
the gateway router’s MAC address. In order to obtain the MAC address of the gateway
router, Bob’s lap- top will need to use the ARP protocol (Section 5.4.1).
10. Bob’s laptop creates an ARP query message with a target IP address of 68.85.2.1 (the
default gateway), places the ARP message within an Ethernet frame with a broadcast
destination address (FF:FF:FF:FF:FF:FF) and sends the Ethernet frame to the switch, which
delivers the frame to all connected devices, including the gateway router.
11. The gateway router receives the frame containing the ARP query message on the interface
to the school network, and finds that the target IP address of 68.85.2.1 in the ARP message
matches the IP address of its interface. The gateway router thus prepares an ARP reply,
indicating that its MAC address of 00:22:6B:45:1F:1B corresponds to IP address 68.85.2.1.
It places the ARP reply message in an Ethernet frame, with a destination address of
00:16:D3:23:68:8A (Bob’s laptop) and sends the frame to the switch, which delivers the
frame to Bob’s laptop.
12. Bob’s laptop receives the frame containing the ARP reply message and extracts the MAC
address of the gateway router (00:22:6B:45:1F:1B) from the ARP reply message.
13. Bob’s laptop can now (finally!) address the Ether net frame containing the DNS query to the
gateway router’s MAC address. Note that the IP datagram in this frame has an IP destination
address of 68.87.71.226 (the DNS server), while the frame has a destination address of
00:22:6B:45:1F:1B (the gateway router). Bob’s laptop sends this frame to the switch, which
delivers the frame to the gateway router.
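The ARP query of steps 10-12 can be sketched by packing the key fields of a standard ARP request (the 28-byte Ethernet/IPv4 ARP packet). Take this as an illustrative sketch rather than a full implementation; there is no Ethernet framing or network I/O here.

```python
import struct

def mac_bytes(mac: str) -> bytes:
    return bytes.fromhex(mac.replace(":", ""))

def ip_bytes(ip: str) -> bytes:
    return bytes(int(p) for p in ip.split("."))

def arp_request(sender_mac: str, sender_ip: str, target_ip: str) -> bytes:
    # Standard ARP packet: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, op=1 (request); target MAC is all zeros (unknown).
    return (struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
            + mac_bytes(sender_mac) + ip_bytes(sender_ip)
            + b"\x00" * 6 + ip_bytes(target_ip))

pkt = arp_request("00:16:D3:23:68:8A", "68.85.2.101", "68.85.2.1")
print(len(pkt))   # 28: the size of an ARP packet for Ethernet/IPv4
```

The zeroed target-MAC field is precisely what the gateway router fills in when it sends its ARP reply in step 11.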
Still Getting Started: Intra-Domain Routing to the DNS Server
14. The gateway router receives the frame and extracts the IP datagram containing the DNS
query. The router looks up the destination address of this datagram (68.87.71.226) and
determines from its forwarding table that the datagram should be sent to the leftmost router
in the Comcast network in Figure 5.32. The IP datagram is placed inside a link-layer frame
appropriate for the link connecting the school’s router to the leftmost Comcast router and
the frame is sent over this link.
15. The leftmost router in the Comcast network receives the frame, extracts the IP datagram,
examines the datagram’s destination address (68.87.71.226) and determines the outgoing
interface on which to forward the datagram towards the DNS server from its forwarding
Page 34 of 36
table, which has been filled in by Comcast’s intra-domain protocol (such as RIP, OSPF or
IS-IS, Section 4.6) as well as the Internet’s inter-domain protocol, BGP.
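The forwarding-table lookups in steps 14 and 15 use longest-prefix matching. A toy sketch with Python’s ipaddress module follows; the prefixes and interface names here are invented for illustration, and real forwarding tables are far larger:

```python
import ipaddress

# Hypothetical forwarding table for the school's gateway router:
# each entry maps a destination prefix to an outgoing interface.
forwarding_table = [
    (ipaddress.ip_network("68.85.2.0/24"), "eth0"),   # school LAN
    (ipaddress.ip_network("68.87.64.0/19"), "eth1"),  # toward Comcast
    (ipaddress.ip_network("0.0.0.0/0"), "eth1"),      # default route
]

def lookup(dst_ip):
    """Return the interface for the longest matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, iface) for net, iface in forwarding_table if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("68.87.71.226"))  # DNS server -> "eth1"
print(lookup("68.85.2.101"))   # Bob's laptop -> "eth0"
```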
16. Eventually the IP datagram containing the DNS query arrives at the DNS server. The DNS
server extracts the DNS query message, looks up the name www.google.com in its DNS
database (Section 2.5), and finds the DNS resource record that contains the IP address
(64.233.169.105) for www.google.com (assuming that it is currently cached in the DNS
server). Recall that this cached data originated in the authoritative DNS server (Section
2.5.2) for google.com. The DNS server forms a DNS reply message containing this
hostname-to-IP-address mapping, places the DNS reply message in a UDP segment, and the
segment within an IP datagram addressed to Bob’s laptop (68.85.2.101). This datagram
will be forwarded back through the Comcast network to the school’s router and from there,
via the Ethernet switch to Bob’s laptop.
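The DNS query carried through steps 9–16 is a small binary message. Below is a minimal sketch of building such a query for an A record with the standard library; the header and question layout follow RFC 1035, and the query ID is an arbitrary value chosen for illustration:

```python
import struct

def encode_qname(hostname):
    """Encode a hostname as DNS labels: length byte + label, ending in a zero byte."""
    out = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split("."))
    return out + b"\x00"

def build_dns_query(hostname, query_id=0x1234):
    """12-byte DNS header plus one question asking for an A record (RFC 1035)."""
    header = struct.pack("!HHHHHH",
                         query_id,
                         0x0100,    # flags: standard query, recursion desired
                         1,         # QDCOUNT: one question
                         0, 0, 0)   # no answer/authority/additional records
    question = encode_qname(hostname) + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

msg = build_dns_query("www.google.com")
print(len(msg))  # header (12) + qname (16) + qtype/qclass (4)
```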
17. Bob’s laptop extracts the IP address of the server www.google.com from the DNS message.
Finally, after a lot of work, Bob’s laptop is now ready to contact the www.google.com server!
Web Client-Server Interaction: TCP and HTTP
18. Now that Bob’s laptop has the IP address of www.google.com, it can create the TCP socket
(Section 2.7) that will be used to send the HTTP GET message (Section 2.2.3) to
www.google.com. When Bob creates the TCP socket, the TCP in Bob’s laptop must first
perform a three-way handshake (Section 3.5.6) with the TCP in www.google.com. Bob’s
laptop thus first creates a TCP SYN segment with destination port 80 (for HTTP), places the
TCP segment inside an IP datagram with a destination IP address of 64.233.169.105
(www.google.com), places the datagram inside a frame with a destination MAC address of
00:22:6B:45:1F:1B (the gateway router) and sends the frame to the switch.
19. The routers in the school network, Comcast’s network, and Google’s network forward the
datagram containing the TCP SYN towards www.google.com, using the forwarding table in
each router, as in steps 14–16 above. Recall that the router forwarding table entries
governing forwarding of packets over the inter-domain link between the Comcast and
Google networks are determined by the BGP protocol (Section 4.6.3).
20. Eventually, the datagram containing the TCP SYN arrives at www.google.com. The TCP SYN
message is extracted from the datagram and demultiplexed to the welcome socket
associated with port 80. A connection socket (Section 2.7) is created for the TCP connection
between the Google HTTP server and Bob’s laptop. A TCP SYNACK (Section 3.5.6) segment
is generated, placed inside a datagram addressed to Bob’s laptop, and finally placed
inside a link-layer frame appropriate for the link connecting www.google.com to its first-
hop router.
21. The datagram containing the TCP SYNACK segment is forwarded through the Google,
Comcast, and school networks, eventually arriving at the Ethernet card in Bob’s laptop. The
datagram is demultiplexed within the operating system to the TCP socket created in step
18, which enters the connected state.
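The demultiplexing in steps 20 and 21 amounts to a table lookup keyed on the connection 4-tuple. A toy sketch follows; the ephemeral client port 50001 and the string standing in for a socket object are invented for illustration:

```python
# Toy connection table: the OS maps the 4-tuple carried in an arriving TCP
# segment to the socket created for that connection in step 18.
connections = {
    # (remote IP, remote port, local IP, local port) -> socket
    ("64.233.169.105", 80, "68.85.2.101", 50001): "socket-from-step-18",
}

def demultiplex(remote_ip, remote_port, local_ip, local_port):
    """Deliver an arriving segment to the matching connection socket (or None)."""
    return connections.get((remote_ip, remote_port, local_ip, local_port))

# The SYNACK from www.google.com:80 is delivered to the socket from step 18.
print(demultiplex("64.233.169.105", 80, "68.85.2.101", 50001))
```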
22. With the socket on Bob’s laptop now (finally!) ready to send bytes to www.google.com, Bob’s
browser creates the HTTP GET message (Section 2.2.3) containing the URL to be fetched.
The HTTP GET message is then written into the socket, with the GET message becoming the
payload of a TCP segment. The TCP segment is placed in a datagram and sent and
delivered to www.google.com as in steps 18–20 above.
23. The HTTP server at www.google.com reads the HTTP GET message from the TCP socket,
creates an HTTP response message (Section 2.2), places the requested Web page content
in the body of the HTTP response message, and sends the message into the TCP socket.
24. The datagram containing the HTTP reply message is forwarded through the Google,
Comcast, and school networks, and arrives at Bob’s laptop. Bob’s Web browser program
reads the HTTP response from the socket, extracts the html for the Web page from the body
of the HTTP response, and finally (finally!) displays the Web page!
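Steps 18–24 as seen from application code: connect() triggers the SYN/SYNACK/ACK handshake of steps 18–21 under the hood, and the GET/response exchange of steps 22–24 follows. The sketch below stands up a local, hypothetical stand-in for www.google.com so it can run without network access:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Toy stand-in for the HTTP server at www.google.com (steps 23-24)."""
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 -> pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Steps 18-21: connect() performs the TCP three-way handshake.
sock = socket.create_connection((host, port))
# Step 22: write the HTTP GET message into the socket.
sock.sendall(b"GET / HTTP/1.1\r\nHost: example\r\nConnection: close\r\n\r\n")
# Step 24: read the HTTP response until the server closes the connection.
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
server.shutdown()

print(response.split(b"\r\n")[0])  # status line of the HTTP response
```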
***WISH YOU ALL THE VERY BEST***