Networking Basics for Students
UNIT-1
Q.1 What is the difference between a host and an end system? List several different types of end
systems. [SUMMER 2022 (3 MARKS)]
Q.2 Draw the layered architecture of OSI reference model and write at least two services provided by
each layer of the model. [SUMMER 2022, WINTER 2021 (7 MARKS)]
OSI stands for Open Systems Interconnection. It was developed by the ISO (International Organization for Standardization) in 1984. It is a 7-layer architecture, with each layer having a specific functionality to perform. All seven layers work collaboratively to transmit data from one person to another across the globe.
CN December 30, 1899
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form of bits. It
is responsible for transmitting individual bits from one node to the next. When receiving data, this
layer will get the signal received and convert it into 0s and 1s and send them to the Data Link layer,
which will put the frame back together.
1. Bit synchronization: The physical layer provides the synchronization of the bits by providing a
clock. This clock controls both sender and receiver thus providing synchronization at bit level.
2. Bit rate control: The Physical layer also defines the transmission rate i.e. the number of bits sent
per second.
3. Physical topologies: The physical layer specifies the way in which the different devices/nodes are arranged in a network, i.e. bus, star, or mesh topology.
4. Transmission mode: Physical layer also defines the way in which the data flows between the two
connected devices. The various transmission modes possible are Simplex, half-duplex and full-
duplex.
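The bit-rate control function above determines how long it takes to push a block of bits onto the link. A minimal sketch of that arithmetic, with illustrative values (the frame size and link rate are assumptions for the example):

```python
# Transmission delay at the physical layer: time to place L bits on a
# link whose transmission rate is R bits per second.

def transmission_delay(packet_bits: int, bit_rate_bps: float) -> float:
    """Seconds needed to push `packet_bits` onto a link of rate `bit_rate_bps`."""
    return packet_bits / bit_rate_bps

# A 1500-byte Ethernet frame on a 100 Mbps link:
delay = transmission_delay(1500 * 8, 100e6)
print(f"{delay * 1e6:.1f} microseconds")  # 120.0 microseconds
```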
** Network Layer, Data Link Layer, and Physical Layer are also known as Lower Layers or Hardware
Layers.
The data link layer is responsible for the node-to-node delivery of the message. The main function of this
layer is to make sure data transfer is error-free from one node to another, over the physical layer. When
a packet arrives in a network, it is the responsibility of DLL to transmit it to the Host using its MAC
address.
Data Link Layer is divided into two sublayers:
1. Logical Link Control (LLC)
2. Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on the frame size of
NIC(Network Interface Card). DLL also encapsulates Sender and Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution Protocol) request onto the
wire asking “Who has that IP address?” and the destination host will reply with its MAC address.
1. Framing: Framing is a function of the data link layer. It provides a way for a sender to transmit a
set of bits that are meaningful to the receiver. This can be accomplished by attaching special bit
patterns to the beginning and end of the frame.
2. Physical addressing: After creating frames, the Data link layer adds physical addresses (MAC
address) of the sender and/or receiver in the header of each frame.
3. Error control: Data link layer provides the mechanism of error control in which it detects and
retransmits damaged or lost frames.
4. Flow Control: The data rate must be constant on both sides else the data may get corrupted thus,
flow control coordinates the amount of data that can be sent before receiving acknowledgement.
5. Access control: When a single communication channel is shared by multiple devices, the MAC
sub-layer of the data link layer helps to determine which device has control over the channel at a
given time.
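The framing function described above can be sketched as a toy byte-stuffing routine: flag bytes mark the frame boundaries, and an escape byte protects any flag-valued bytes inside the payload. The FLAG/ESC values are illustrative choices for this sketch, not a claim about any particular standard:

```python
# Toy framing with flag bytes and byte stuffing (data link layer idea).

FLAG = 0x7E  # marks the start and end of a frame
ESC = 0x7D   # escape byte placed before FLAG/ESC bytes inside the payload

def frame(payload: bytes) -> bytes:
    """Wrap payload in FLAG bytes, escaping any FLAG/ESC bytes inside it."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def unframe(data: bytes) -> bytes:
    """Strip the flags and undo the byte stuffing."""
    body = data[1:-1]
    out, escaped = bytearray(), False
    for b in body:
        if not escaped and b == ESC:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)

msg = bytes([0x01, FLAG, 0x02])
assert unframe(frame(msg)) == msg   # round-trip recovers the payload
```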
** Data Link layer is handled by the NIC (Network Interface Card) and device drivers of host machines.
The network layer works for the transmission of data from one host to the other located in different
networks. It also takes care of packet routing i.e. selection of the shortest path to transmit the packet,
from the number of routes available. The sender & receiver’s IP addresses are placed in the header by the
network layer.
1. Routing: The network layer protocols determine which route is suitable from source to
destination. This function of the network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork uniquely, the network layer
defines an addressing scheme. The sender & receiver’s IP addresses are placed in the header by
the network layer. Such an address distinguishes each device uniquely and universally.
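The logical-addressing idea above can be sketched with Python's standard ipaddress module: deciding whether a destination lies on the local network is exactly the decision that determines whether a router is needed. The addresses below are illustrative private-range values:

```python
# Logical (IP) addressing sketch: same-network test using ipaddress.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
a = ipaddress.ip_address("192.168.1.10")
b = ipaddress.ip_address("192.168.2.10")

print(a in net)  # True  -> deliverable directly on the local network
print(b in net)  # False -> must be forwarded through a router
```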
The transport layer provides services to the application layer and takes services from the network layer.
The data in the transport layer is referred to as Segments. It is responsible for the End to End Delivery of
the complete message. The transport layer also provides the acknowledgement of the successful data
transmission and re-transmits the data if an error is found.
At sender’s side: Transport layer receives the formatted data from the upper layers, performs
Segmentation, and also implements Flow & Error control to ensure proper data transmission. It
also adds Source and Destination port numbers in its header and forwards the segmented data to
the Network Layer.
Note: The sender needs to know the port number associated with the receiver’s application.
Generally, this destination port number is configured, either by default or manually. For example, when a
web application makes a request to a web server, it typically uses port number 80, because this is the
default port assigned to web applications. Many applications have default ports assigned.
At receiver’s side: Transport Layer reads the port number from its header and forwards the Data which it
has received to the respective application. It also performs sequencing and reassembling of the
segmented data.
1. Segmentation and Reassembly: This layer accepts the message from the (session) layer, and
breaks the message into smaller units. Each of the segments produced has a header associated
with it. The transport layer at the destination station reassembles the message.
2. Service Point Addressing: In order to deliver the message to the correct process, the transport
layer header includes a type of address called service point address or port address. Thus by
specifying this address, the transport layer makes sure that the message is delivered to the
correct process.
A. Connection-oriented service: It is a three-phase process:
– Connection Establishment
– Data Transfer
– Termination / disconnection
In this type of transmission, the receiving device sends an acknowledgement back to the source after a packet or group of packets is received. This type of transmission is reliable and secure.
B. Connectionless service: It is a one-phase process consisting only of Data Transfer. In this type of transmission, the receiver does not acknowledge receipt of a packet. This approach allows for much faster communication between devices. Connection-oriented service is more reliable than connectionless service.
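The connectionless case can be sketched with a UDP socket on the loopback interface: there is no connection establishment or termination phase and no acknowledgement, just the data transfer itself (the port is chosen by the OS for the example):

```python
# Minimal connectionless (UDP) transfer over loopback: one-phase, no
# handshake, no acknowledgement from the receiver.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))           # let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)           # data transfer is the only phase

data, src = recv.recvfrom(1024)
print(data)  # b'hello'
send.close()
recv.close()
```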
The session layer is responsible for the establishment of connections, maintenance of sessions, and authentication. Its functions are:
1. Session establishment, maintenance, and termination: The layer allows the two processes to establish, use and terminate a connection.
2. Synchronization: This layer allows a process to add checkpoints which are considered
synchronization points into the data. These synchronization points help to identify the error so
that the data is re-synchronized properly, and ends of the messages are not cut prematurely and
data loss is avoided.
3. Dialog Controller: The session layer allows two systems to start communication with each other
in half-duplex or full-duplex.
**All the below 3 layers(including Session Layer) are integrated as a single layer in the TCP/IP model as
“Application Layer”.
**Implementation of these 3 layers is done by the network application itself. These are also known as
Upper Layers or Software Layers.
Scenario:
Let us consider a scenario where a user wants to send a message through some Messenger application
running in his browser. The “Messenger” here acts as the application layer which provides the user with
an interface to create the data. This message or so-called Data is compressed, encrypted (if any secure
data), and converted into bits (0’s and 1’s) so that it can be transmitted.
The presentation layer is also called the Translation layer. The data from the application layer is
extracted here and manipulated as per the required format to transmit over the network. The
functions of the presentation layer are :
• Encryption/ Decryption: Data encryption translates the data into another form or code. The
encrypted data is known as the ciphertext and the decrypted data is known as plain text. A key
value is used for encrypting as well as decrypting data.
• Compression: Reduces the number of bits that need to be transmitted on the network.
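Both functions above can be sketched in a few lines: a toy XOR "cipher" (illustrative only, not a real encryption scheme) and lossless compression with the standard zlib module. The key value and message are made-up examples:

```python
# Presentation-layer transformations: toy encryption and real compression.
import zlib

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: the same call encrypts and decrypts."""
    return bytes(b ^ key for b in data)

plain = b"attack at dawn " * 20           # repetitive, so it compresses well
cipher = xor_cipher(plain, 0x5A)          # ciphertext
assert xor_cipher(cipher, 0x5A) == plain  # decryption restores plain text

compressed = zlib.compress(plain)
assert zlib.decompress(compressed) == plain
print(len(plain), "->", len(compressed), "bytes")  # fewer bits on the wire
```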
At the very top of the OSI Reference Model stack of layers, we find the Application layer which is
implemented by the network applications. These applications produce the data, which has to be
transferred over the network. This layer also serves as a window for the application services to access the
network and for displaying the received information to the user.
The services provided by the application layer are:
1. Network Virtual Terminal
2. FTAM (File Transfer, Access, and Management)
3. Mail Services
4. Directory Services
OSI model acts as a reference model and is not implemented on the Internet because of its late invention.
The current model being used is the TCP/IP model.
Q.3 Why the data encryption is necessary at the presentation layer of OSI reference model? [SUMMER
2021]
Presentation Layer is the 6th layer in the Open System Interconnection (OSI) model. This layer is also
known as Translation layer, as this layer serves as a data translator for the network. The data which this
layer receives from the Application Layer is extracted and manipulated here as per the required format to
transmit over the network. The main responsibility of this layer is to provide or define the data format
and encryption. The presentation layer is also called the Syntax layer, since it is responsible for maintaining the proper syntax of the data which it either receives or transmits to other layer(s).
Application Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
Data from Application Layer <=> Presentation layer <=> Data from Session Layer
The presentation layer, being the 6th layer in the OSI model, performs several types of functions, which
are described below-
• The presentation layer formats and encrypts data to be sent across the network.
• This layer takes care that the data is sent in such a way that the receiver will understand the
information (data) and will be able to use the data efficiently and effectively.
• This layer manages abstract data structures and allows high-level data structures (for example, banking records) to be defined and exchanged.
• This layer carries out the encryption at the transmitter and decryption at the receiver.
• This layer carries out data compression to reduce the bandwidth of the data to be transmitted
(the primary goal of data compression is to reduce the number of bits which is to be transmitted).
• This layer is responsible for interoperability (ability of computers to exchange and make use of
information) between encoding methods as different computers use different encoding methods.
• This layer basically deals with the presentation part of the data.
• The presentation layer carries out data compression (reducing the number of bits during transmission), which in turn improves data throughput.
• The presentation layer is also responsible for integrating all the formats into a standardized
format for efficient and effective communication.
• This layer encodes the message from the user-dependent format to the common format and
vice-versa for communication between dissimilar systems.
• This layer deals with the syntax and semantics of the messages.
• This layer also ensures that the messages which are to be presented to the upper as well as the
lower layer should be standardized as well as in an accurate format too.
• Presentation layer is also responsible for translation, formatting, and delivery of information for
processing or display.
• This layer also performs serialization (process of translating a data structure or an object into a
format that can be stored or transmitted easily).
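The serialization function above can be sketched with the standard json module: a data structure is translated into a text form that can be stored or transmitted, then reconstructed on the other side. The record fields are illustrative:

```python
# Serialization sketch: object -> transmittable text -> object again.
import json

record = {"account": 12345, "owner": "Alice", "balance": 250.75}
wire = json.dumps(record)     # serialize the data structure
restored = json.loads(wire)   # deserialize on the receiving side
assert restored == record
print(wire)
```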
Features of Presentation Layer in the OSI model: Presentation layer, being the 6th layer in the OSI model,
plays a vital role while communication is taking place between two devices in a network.
• Presentation layer could apply certain sophisticated compression techniques, so fewer bytes of
data are required to represent the information when it is sent over the network.
• If two or more devices are communicating over an encrypted connection, the presentation layer is responsible for adding encryption on the sender’s end as well as decoding the encryption on the receiver’s end, so that it can present the application layer with unencrypted, readable data.
• This layer formats and encrypts data to be sent over a network, providing freedom from
compatibility problems.
• This presentation layer is also responsible for compressing data it receives from the application
layer before delivering it to the session layer (which is the 5th layer in the OSI model) and thus
improves the speed as well as the efficiency of communication by minimizing the amount of the
data to be transferred.
The presentation layer, being the 6th layer in the OSI model, performs several types of functionality which make sure that the data being transferred or received is accurate and clear to all the devices in the network.
Presentation Layer, for performing translations or other specified functions, needs to use certain
protocols which are defined below –
• Apple Filing Protocol (AFP): Apple Filing Protocol is the proprietary network protocol
(communications protocol) that offers services to macOS or the classic macOS. This is basically
the network file control protocol specifically designed for Mac-based platforms.
• Lightweight Presentation Protocol (LPP): Lightweight Presentation Protocol is that protocol which
is used to provide ISO presentation services on the top of TCP/IP based protocol stacks.
• NetWare Core Protocol (NCP): NetWare Core Protocol is the network protocol which is used to
access file, print, directory, clock synchronization, messaging, remote command execution and
other network service functions.
• External Data Representation (XDR): External Data Representation (XDR) is the standard for the
description and encoding of data. It is useful for transferring data between computer
architectures and has been used to communicate data between very diverse machines.
Converting from local representation to XDR is called encoding, whereas converting XDR into
local representation is called decoding.
• Secure Socket Layer (SSL): The Secure Socket Layer protocol provides security to the data that is
being transferred between the web browser and the server. SSL encrypts the link between a web
server and a browser, which ensures that all data passed between them remains private and free
from attacks.
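The XDR idea above — encoding values in a fixed, big-endian ("network byte order") layout so that different architectures interpret them identically — can be sketched with Python's standard struct module. This mimics the concept; it is not the real XDR wire format, and the field layout is an assumption for the example:

```python
# XDR-style fixed big-endian encoding sketch using struct.
import struct

FMT = ">i f 8s"   # big-endian: 32-bit int, 32-bit float, 8-byte string

def encode(n: int, x: float, name: str) -> bytes:
    """Pack values into a fixed big-endian layout (the 'encoding' step)."""
    return struct.pack(FMT, n, x, name.encode().ljust(8, b"\0"))

def decode(blob: bytes):
    """Unpack the layout back into local values (the 'decoding' step)."""
    n, x, raw = struct.unpack(FMT, blob)
    return n, x, raw.rstrip(b"\0").decode()

blob = encode(42, 2.5, "alpha")
assert decode(blob) == (42, 2.5, "alpha")
```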
Q. 4 Explain functionality of Repeater, HUB, Bridge, Switch, Router and Gateway [SUMMER 2022(4
MARKS), WINTER 2021(7 MARKS)]
Network Devices: Network devices, also known as networking hardware, are physical devices that allow
hardware on a computer network to communicate and interact with one another. For example Repeater,
Hub, Bridge, Switch, Routers, Gateway, Brouter, and NIC, etc.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same network before the signal becomes too weak or corrupted, so as to extend the length over which the signal can be transmitted on the same network. An important point to be noted about repeaters is that they do not amplify the signal. When the signal becomes weak, they copy it bit by bit and regenerate it at its original strength. It is a 2-port device.
2. Hub – A hub is basically a multi-port repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects different stations. Hubs
cannot filter data, so data packets are sent to all connected devices. In other words, the collision domain
of all hosts connected through Hub remains one. Also, they do not have the intelligence to find out the
best path for data packets which leads to inefficiencies and wastage.
Types of Hub
• Active Hub:- These are hubs that have their own power supply and can clean, boost, and relay the signal along the network. They serve both as repeaters and as wiring centers, and are used to extend the maximum distance between nodes.
• Passive Hub:- These are the hubs that collect wiring from nodes and power supply from the active
hub. These hubs relay signals onto the network without cleaning and boosting them and can’t be
used to extend the distance between nodes.
• Intelligent Hub:- It works like an active hub and includes remote management capabilities. They
also provide flexible data rates to network devices. It also enables an administrator to monitor
the traffic passing through the hub and to configure each port in the hub.
3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering content by reading the MAC addresses of the source and destination. It is also used for interconnecting two LANs working on the same protocol. It has a single input and single output port, thus making it a 2-port device.
Types of Bridges
• Transparent Bridges:- These are bridges in which the stations are completely unaware of the bridge’s existence, i.e. whether or not a bridge is added or deleted from the network, reconfiguration of the stations is unnecessary. These bridges make use of two processes: bridge forwarding and bridge learning.
• Source Routing Bridges:- In these bridges, the routing operation is performed by the source station and the frame specifies which route to follow. The host can discover the route by sending a special frame called the discovery frame, which spreads through the entire network using all possible paths to the destination.
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency(a
large number of ports imply less traffic) and performance. A switch is a data link layer device. The switch
can perform error checking before forwarding data, which makes it very efficient as it does not forward
packets that have errors and forwards good packets selectively to the correct port only. In other words,
the switch divides the collision domain of hosts, but the broadcast domain remains the same.
5. Routers – A router is a device like a switch that routes data packets based on their IP addresses.
The router is mainly a Network Layer device. Routers normally connect LANs and WANs and have a
dynamically updating routing table based on which they make decisions on routing the data packets. The
router divides the broadcast domains of hosts connected through it.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks that may work upon different networking models. Gateways work as messenger agents that take data from one system, interpret it, and transfer it to another system. Gateways are also called protocol converters and can operate at any layer of the network model. Gateways are generally more complex than switches or routers.
7. Brouter – A brouter, also known as a bridging router, is a device that combines features of both a bridge and a router. It can work either at the data link layer or at the network layer. Working as a router, it is capable of routing packets across networks; working as a bridge, it is capable of filtering local area network traffic.
8. NIC – NIC or network interface card is a network adapter that is used to connect the computer to
the network. It is installed in the computer to establish a LAN. It has a unique id that is written on the
chip, and it has a connector to connect the cable to it. The cable acts as an interface between the
computer and the router or modem. NIC card is a layer 2 device which means that it works on both the
physical and data link layers of the network model.
Q.5 What is throughput?
Throughput is a measure of how many units of information a system can process in a given amount of
time. It is applied broadly to systems ranging from various aspects of computer and network systems to
organizations.
Related measures of system productivity include the speed with which a specific workload can be
completed and response time, which is the amount of time between a single interactive user request and
receipt of the response.
An early throughput measure was the number of batch jobs completed in a day. More recent measures
assume either a more complicated mixture of work or focus on a particular aspect of computer operation.
Units like trillion floating-point operations per second (teraflops) provide a metric to compare the cost of
raw computing over time or by manufacturer.
In data transmission, network throughput is the amount of data moved successfully from one place to
another in a given time period. Network throughput is typically measured in bits per second (bps), as in
megabits per second (Mbps) or gigabits per second (Gbps).
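The definition above is just data moved per unit time; a minimal sketch of the arithmetic, with illustrative figures (the file size and duration are assumptions for the example):

```python
# Network throughput: bits moved successfully per second.

def throughput_mbps(bits_transferred: float, seconds: float) -> float:
    """Average throughput in megabits per second."""
    return bits_transferred / seconds / 1e6

# A 500 MB file transferred in 40 seconds:
rate = throughput_mbps(500 * 8e6, 40)
print(f"{rate:.0f} Mbps")  # 100 Mbps
```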
Q.6 Differentiate TCP/IP protocol stack and OSI Reference model of the computer network. [SUMMER
2021 (3 marks)]
OSI vs TCP/IP:
1. The OSI model was developed first, and then protocols were created to fit the network architecture’s needs. In TCP/IP, the protocols were created first and then the model was built around them.
2. The OSI model clearly defines which services, interfaces, and protocols belong to which layer. The TCP/IP model does not clearly distinguish between services, interfaces, and protocols.
3. The protocols of the OSI model are better hidden and can be replaced with another appropriate protocol easily. The TCP/IP model protocols are not hidden, and we cannot fit a new protocol stack into it.
4. The smallest size of the OSI header is 5 bytes; the smallest size of the TCP/IP header is 20 bytes.
UNIT – 2
Q.1 Discriminate fully qualified domain name from partially qualified domain name.
Technically, if a top-level domain “A” contains a subdomain “B” that in turn contains subdomain “C”, the
full domain name for “C” is “C.B.A.”. This is called the fully-qualified domain name (FQDN) for the node.
Here, the word “qualified” is synonymous with “specified”. The domain name “C.B.A.” is fully-qualified
because it gives the full location of the specific domain that bears its name within the whole DNS name
space.
Fully-qualified domain names are also sometimes called absolute domain names. This term reflects the
fact that one can refer unambiguously to the name of any device using its FQDN from any other portion
of the name space. Using the FQDN always instructs the person or software interpreting the name to
start at the root and then follow the sequence of domain labels from right to left, going top to bottom
within the tree.
There are also some situations in which we may refer to a device using an incomplete name specification.
This is called a partially-qualified domain name (PQDN), which means that the name only partially
specifies the location of the device. By definition, a PQDN is ambiguous, because it doesn't give the full
path to the domain. Thus, one can only use a PQDN within the context of a particular parent domain,
whose absolute domain name is known. We can then find the FQDN of a partially-specified domain name
by appending the partial name to the absolute name of the parent domain. For example, if we have the
PQDN “Z” within the context of the FQDN “Y.X.”, we know the FQDN for “Z” is “Z.Y.X.”
Why bother with this? The answer is convenience. An administrator for a domain can use relative names
as a short-hand to refer to devices or subdomains without having to repeat the entire full name. For
example, suppose you are in charge of the computer science department at the University of Widgetopia.
The domain name for the department as a whole is “cs.widgetopia.edu.” and the individual hosts you
manage are named after fruit.
In the DNS files you maintain you could refer to each device by its FQDN every time; for example,
“apple.cs.widgetopia.edu.”, “banana.cs.widgetopia.edu.” and so on. But it's easier to tell the software “if
you see a name that is not fully qualified, assume it is in the ‘cs.widgetopia.edu’ domain”. Then you can
just call the machines “apple”, “banana”, etc. Whenever the DNS software sees a PQDN such as “kiwi” it
will treat it as “kiwi.cs.widgetopia.edu”.
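The qualification rule described above can be sketched as a small function: a name that does not end with a dot is treated as partially qualified, and the configured default domain is appended. The domain names are the ones from the example:

```python
# PQDN -> FQDN qualification, as DNS software does with a default domain.

def qualify(name: str, default_domain: str) -> str:
    """Return the FQDN for `name`, appending the default domain if needed."""
    if name.endswith("."):
        return name                      # already fully qualified
    return f"{name}.{default_domain}"

assert qualify("kiwi", "cs.widgetopia.edu.") == "kiwi.cs.widgetopia.edu."
assert qualify("apple.cs.widgetopia.edu.",
               "cs.widgetopia.edu.") == "apple.cs.widgetopia.edu."
```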
Q.2 What is HTTP? Differentiate its persistent and non-persistent types with request-response behavior of
HTTP. [WINTER 2021 4 MARKS]
The Hypertext Transfer Protocol (HTTP) is an application-level protocol that uses TCP as an underlying
transport and typically runs on port 80. HTTP is a stateless protocol i.e. server maintains no information
about past client requests.
HTTP Connections
1. Non-Persistent
2. Persistent
Before discussing persistent and non-persistent HTTP connections, let us first understand what RTT is.
RTT-> Time for a small packet to travel from client to server and back.
Non-Persistent Connection: a separate TCP connection is opened for each object and closed after that object is delivered, so every object pays its own connection-setup cost.
Persistent connection: a single TCP connection is kept open and reused for multiple objects between the same client and server. It comes in two variants:
1. Non-Pipelined
2. Pipelined
In a non-pipelined persistent connection, we first establish the connection, which takes 2 RTT, and then send all the objects (images/text files), which takes 1 RTT each (a separate TCP connection for each object is not required).
In a pipelined persistent connection, 2 RTT are spent on connection establishment and then 1 RTT (assuming no window limit) suffices for all the objects, i.e. images/text.
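The RTT accounting above can be written as a small calculation, following the text's own convention (2 RTT for connection establishment, transmission time ignored):

```python
# RTT totals for n objects over a persistent connection, per the
# accounting used in the text above.

def persistent_non_pipelined(n: int) -> int:
    return 2 + n      # 2 RTT to connect, then 1 RTT per object

def persistent_pipelined(n: int) -> int:
    return 2 + 1      # 2 RTT to connect, 1 RTT for all objects together

print(persistent_non_pipelined(10), "RTTs")  # 12 RTTs
print(persistent_pipelined(10), "RTTs")      # 3 RTTs
```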
Most of the modern browsers like Chrome, Firefox and Internet Explorer use persistent connections.
Q.3 Give differences between TCP and UDP. [WINTER 2021 3 MARKS]
Q.4 Explain the DNS name space hierarchy.
The DNS name space is organized into the following levels:
1) Root Level
2) Top-Level Domain (TLD)
3) Second-Level Domain
4) Sub-Domain
5) Host
The DNS root zone is the highest level in the DNS hierarchy tree. The root name server is the name server for the
root zone. It answers the requests for records in the root zone and answers other requests by providing a list of
authoritative name servers for the appropriate TLD (top-level domain). The root nameservers are very important
because they are the first step in resolving a domain name. These are the authoritative nameservers which serve
the DNS root zone. These servers contain the global list of the top-level domains. The root servers are operated by the following 12 organizations:
1) Verisign
2) University of Southern California (ISI)
3) Cogent
4) University of Maryland
5) NASA (Ames Research Center)
6) Internet Systems Consortium
7) US Department of Defense
8) US Army Research Lab
9) Netnod
10) RIPE NCC
11) ICANN
12) WIDE
The next level in the DNS hierarchy is Top level domains. There are many TLDs available at the moment. As we have
seen the TLDs are classified as two sub categories. They are organizational hierarchy and geographic hierarchy. Let
us see each in detail.
Organizational Hierarchy
Domain   Purpose
com      Commercial organizations
edu      Educational institutions
gov      Government institutions
int      International organizations
mil      Military groups
net      Network support centers
org      Nonprofit organizations
Geographic hierarchy
In the geographic hierarchy, each country is assigned with two letter codes. These codes are used to identify
countries.
Here, the “.com” is the top-level domain, called a TLD for short. This is the next component in the DNS hierarchy.
A TLD can have many domains under it. For example, the .com TLD contains linux.com, centos.com, ubuntu.com, etc.
Sometimes, there is a second level hierarchy to a tld. They deal with the type of entity intended to register an SLD
under it. For example, for the .uk tld, a college or other academic institution would register under the .ac.uk ccSLD,
while companies would register under .co.uk.
The next level in the DNS hierarchy is the Second-Level Domain. This is the domain directly below the TLD and is the main part of the domain name. It can vary according to the buyer; unlike TLDs, there is no fixed list. Once a domain is available, anyone can purchase it. If the domain is unavailable at the moment, the same second-level name with another TLD is the best option.
Sub-domain
The sub-domain is the next level in the DNS hierarchy. A sub-domain can be defined as a domain that is part of the main domain. The only domain that is not also a sub-domain is the root domain. Suppose two domains, one.example.com and two.example.com: both are sub-domains of the main domain example.com, and example.com is itself a sub-domain of the com top-level domain.
Q.5 What is POP3 protocol? How the limitations of POP3 protocols are overcome by IMAP? [SUMMER
2021 7 MARKS]
POP3 (Post Office Protocol version 3) is a simple mail access protocol: the user agent opens a TCP connection to the mail server on port 110. With the TCP connection established, POP3 progresses through three phases: authorization, transaction, and update.
During the first phase, authorization, the user agent sends a username and a password to
authenticate the user.
During the second phase, transaction, the user agent retrieves messages; also during this phase,
the user agent can mark messages for deletion, remove deletion marks and obtain mail
statistics.
The third phase, update, occurs after the client has issued the quit command, ending the POP3
session; at this time, the mail server deletes the messages that were marked for deletion.
POP3 is designed to delete mail on the server as soon as the user has downloaded it.
However, some implementations allow users or an administrator to specify that mail is saved for
some period of time. POP can be thought of as a "store-and-forward" service.
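The three phases above can be modeled as a toy in-memory mailbox: messages marked with DELE during the transaction phase are actually removed only in the update phase, after QUIT. This mimics the protocol's semantics; it is not a network implementation, and the class and message names are made up for the sketch:

```python
# Toy model of POP3's authorization / transaction / update phases.

class Pop3Mailbox:
    def __init__(self, messages):
        self.messages = list(messages)
        self.marked = set()            # indices marked for deletion
        self.authorized = False

    def user_pass(self, user, password):   # authorization phase
        self.authorized = True             # sketch: accept any credentials

    def retr(self, i):                     # transaction phase: retrieve
        return self.messages[i]

    def dele(self, i):                     # transaction phase: mark only
        self.marked.add(i)

    def quit(self):                        # update phase: apply deletions
        self.messages = [m for i, m in enumerate(self.messages)
                         if i not in self.marked]
        self.marked.clear()

box = Pop3Mailbox(["msg0", "msg1", "msg2"])
box.user_pass("alice", "secret")
box.dele(1)
assert box.messages == ["msg0", "msg1", "msg2"]  # deletion is deferred
box.quit()
assert box.messages == ["msg0", "msg2"]          # applied at update
```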
IMAP (Internet Message Access Protocol) overcomes these limitations: it keeps messages on the server, lets the user organize them into folders, and keeps the mailbox state consistent across multiple devices. IMAP is also useful when there is a low-bandwidth connection (e.g., a dial-up modem link) between the user agent and its mail server. With a low-bandwidth connection, the user may not want to download all of the messages in the mailbox, particularly avoiding long messages that might contain, for example, an audio or video clip; IMAP allows fetching just the message headers or selected parts of a message.
UNIT – 4
Q .1 Explain distance vector routing algorithm.
o Distributed: It is distributed in that each node receives information from one or more of its
directly attached neighbors, performs calculation and then distributes the result back to its
neighbors.
o Iterative: It is iterative in that its process continues until no more information is available to
be exchanged between neighbors.
o Asynchronous: It does not require all of its nodes to operate in lockstep with each other.
o Knowledge about the whole network: Each router shares its knowledge about the entire network with its neighbors. The router sends its collected knowledge about the network to its neighbors.
o Routing only to neighbors: The router sends its knowledge about the network to only those
routers which have direct links. The router sends whatever it has about the network through
the ports. The information is received by the router and uses the information to update its
own routing table.
o Information sharing at regular intervals: Within 30 seconds, the router sends the information
to the neighboring routers.
Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by the Bellman-Ford equation:
dx(y) = minv{ c(x,v) + dv(y) }
where the minimum is taken over all neighbors v of x. After traveling from x to a neighbor v, if we then take the least-cost path from v to y, the total path cost is c(x,v) + dv(y). The least cost from x to y is the minimum of c(x,v) + dv(y) taken over all neighbors v.
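The Bellman-Ford equation can be computed directly for one destination y. The link costs and neighbor estimates below are made-up illustrative values:

```python
# dx(y) = min over neighbors v of ( c(x,v) + dv(y) ).

def dx(cost_to_neighbor: dict, neighbor_est_to_y: dict) -> float:
    """Least cost from x to destination y via the best neighbor v."""
    return min(cost_to_neighbor[v] + neighbor_est_to_y[v]
               for v in cost_to_neighbor)

c_xv = {"a": 1, "b": 4}   # c(x,v): link cost from x to each neighbor
dv_y = {"a": 5, "b": 1}   # dv(y): each neighbor's least cost to y
print(dx(c_xv, dv_y))     # min(1+5, 4+1) = 5
```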
With the Distance Vector Routing algorithm, the node x contains the following routing information:
o For each neighbor v, the cost c(x,v) is the path cost from x to its directly attached neighbor v.
o The distance vector of x, i.e., Dx = [ Dx(y) : y in N ], containing its cost to all destinations y in N.
o The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ] for each neighbor v of x.
Distance vector routing is an asynchronous algorithm in which node x sends a copy of its distance vector to all its neighbors. When node x receives a new distance vector from one of its neighbors, v, it saves the distance vector of v and uses the Bellman-Ford equation to update its own distance vector:
Dx(y) = minv{ c(x,v) + Dv(y) } for each destination y in N
The node x has updated its own distance vector table by using the above equation and sends its
updated table to all its neighbors so that they can update their own distance vectors.
Algorithm
At each node x:

Initialization:
   for all destinations y in N:
      Dx(y) = c(x,y)            /* c(x,y) = ∞ if y is not a neighbor of x */
   for each neighbor w:
      Dw(y) = ? for all destinations y in N
   send distance vector Dx = [Dx(y) : y in N] to each neighbor w

Loop (forever):
   wait (until a link cost to some neighbor w changes, or
         until a distance vector arrives from some neighbor w)
   for each destination y in N:
      Dx(y) = minv{c(x,v) + Dv(y)}
   if Dx(y) changed for any destination y:
      send Dx = [Dx(y) : y in N] to all neighbors
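As a sketch (not part of the original notes), the Bellman-Ford update at a single node can be written in Python. The node name, dictionary layout, and example costs below are illustrative assumptions:

```python
def dv_update(x, cost, D):
    """One Bellman-Ford relaxation at node x.

    cost[v]  -- link cost c(x, v) to each directly attached neighbor v
    D[v][y]  -- latest distance vector received from each neighbor v
    Returns x's recomputed distance vector: Dx(y) = min over v of c(x,v) + Dv(y).
    """
    destinations = set().union(*(D[v].keys() for v in cost))
    Dx = {x: 0}  # cost to itself is zero
    for y in destinations:
        if y != x:
            Dx[y] = min(cost[v] + D[v][y] for v in cost)
    return Dx
```

For example, a node x with neighbors u (link cost 1) and w (link cost 5) would route to w through u whenever 1 + Du(w) beats the direct cost 5.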
Sharing Information
o In the above figure, each cloud represents the network, and the number inside the cloud
represents the network ID.
o All the LANs are connected by routers, and they are represented in boxes labeled as A, B, C,
D, E, F.
o Distance vector routing algorithm simplifies the routing process by assuming the cost of
every link is one unit. Therefore, the efficiency of transmission can be measured by the
number of links to reach the destination.
Routing Table
Initially, a routing table is created for each router containing at least three types of information:
the Network ID, the cost, and the next hop.
o NET ID: The Network ID defines the final destination of the packet.
o Cost: The cost is the number of hops the packet must take to get there.
o Next hop: It is the router to which the packet must be delivered next.
For Example:
Updating the Table
o When A receives a routing table from B, it uses that information to update its own
table.
o The routing table of B shows how packets can move to networks 1 and 4.
o Because B is a neighbor of A, packets from A can reach B in one hop. So 1 is added to all
the costs given in B's table, and the sum is the cost of reaching a particular network.
o After this adjustment, A combines B's table with its own to create a combined table.
o The combined table may contain some duplicate data. In the above figure, the combined
table of router A contains duplicates, so it keeps only the entries with the lowest cost.
For example, A can send data to network 1 in two ways: the first uses no next router, so
it costs one hop; the second requires two hops (A to B, then B to network 1). The first
option has the lower cost, so it is kept and the second is dropped.
o This process of building the routing table continues for all routers. Every router receives
information from its neighbors and updates its routing table.
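As an illustrative sketch (the entry format and router names are assumptions, not from the notes), the table-merging step described above can be expressed as:

```python
def merge_tables(own, received, neighbor, link_cost=1):
    """Merge a neighbor's routing table into our own.

    Each table maps a network ID to a (cost, next_hop) pair. Costs in the
    received table are increased by the cost of the link to the neighbor,
    and only the lower-cost route to each network is kept.
    """
    merged = dict(own)
    for net_id, (cost, _) in received.items():
        adjusted = cost + link_cost       # add one hop for the A-to-B link
        if net_id not in merged or adjusted < merged[net_id][0]:
            merged[net_id] = (adjusted, neighbor)
    return merged
```

With A's table {1: 1 hop, 2: 1 hop} and B's table {1, 2, 4}, the merge keeps A's own one-hop route to network 1 and learns a two-hop route to network 4 via B.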
1. IP address:
An Internet Protocol (IP) address is the logical address of our network hardware, by which
other devices identify it in a network. It is a unique number, or numerical representation, that
uniquely identifies a specific interface on the network. Every device connected to the Internet
is assigned an IP address for its unique identification.
12.244.233.165
2001:0db8:0000:0000:0000:ff00:0042:7879
2. Port Number :
Port number is the part of the addressing information used to identify the senders and receivers of
messages in computer networking. Different port numbers are used to determine what protocol
incoming traffic should be directed to. Port number identifies a specific process to which an Internet
or other network message is to be forwarded when it arrives at a server. Ports are identified for each
protocol and It is considered as a communication endpoint.
Ports are represented by 16-bit numbers, so there are 2^16 = 65,536 possible port numbers.
Ports 0 to 1023 are well-known port numbers, as they are used by well-known protocol services.
These are allocated to server services by the Internet Assigned Numbers Authority (IANA).
Ports 1024 to 49151 are registered port numbers, i.e., they can be registered to specific protocols
by software corporations.
Ports 49152 to 65535 are dynamic port numbers, and they can be used by anyone.
3. Physical address:
In computing, a physical address refers to a memory address, the location of a memory cell in
main memory. It is used by both hardware and software for accessing data. Software, however,
does not use physical addresses directly; instead, it accesses memory using a virtual address.
A hardware component known as the memory management unit (MMU) is responsible for
translating a virtual address to a physical address.
In networking, physical address refers to a computer's MAC address, which is a unique identifier
associated with a network adapter that is used for identifying a computer in a network.
UNIT-5
Q.1 Explain Link State routing protocol.
Link state routing is a technique in which each router shares the knowledge of its neighborhood with
every other router in the internetwork.
o Knowledge about the neighborhood: Instead of sending its entire routing table, a router sends
information about its neighborhood only. A router broadcasts the identities and costs of its
directly attached links to the other routers.
o Flooding: Each router sends its information to every other router on the internetwork
through a process known as flooding: the router sends the packet to its neighbors, and every
router that receives the packet forwards copies to all of its neighbors except the one it
arrived from. Finally, each and every router receives a copy of the same information.
o Information sharing: A router sends its information to every other router only when a
change occurs in the information of its neighbors.
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.
o The link state routing algorithm is also known as Dijkstra's algorithm, which is used to find
the shortest path from one node to every other node in the network.
o Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the
algorithm, the least-cost paths are known for k destination nodes.
o c(i, j): Link cost from node i to node j. If nodes i and j are not directly linked, then c(i, j) = ∞.
o D(v): The cost of the currently known least-cost path from the source node to destination v.
o P(v): The previous node (neighbor of v) along the current least-cost path from the source to v.
Algorithm

Initialization:
   N = {A}                        /* A is the source node */
   for all nodes v:
      if v is adjacent to A:
         D(v) = c(A,v)
      else:
         D(v) = ∞

Loop:
   find w not in N such that D(w) is a minimum
   add w to N
   update D(v) for each node v adjacent to w and not in N:
      D(v) = min( D(v), D(w) + c(w,v) )
until all nodes are in N
In the above algorithm, an initialization step is followed by the loop. The number of times the loop is
executed is equal to the total number of nodes available in the network.
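The loop can be sketched in Python with a priority queue. The graph below encodes the link costs implied by the worked example that follows; since the figure is not reproduced here, these costs are inferred from the step calculations and should be treated as an assumption:

```python
import heapq

# Link costs inferred from the worked example (bidirectional links).
graph = {
    'A': {'B': 2, 'C': 5, 'D': 1},
    'B': {'A': 2, 'C': 3},
    'C': {'A': 5, 'B': 3, 'D': 3, 'E': 1, 'F': 5},
    'D': {'A': 1, 'C': 3, 'E': 1},
    'E': {'C': 1, 'D': 1, 'F': 2},
    'F': {'C': 5, 'E': 2},
}

def dijkstra(graph, source):
    """Return (D, P): least costs and predecessors from the source node."""
    D = {v: float('inf') for v in graph}
    P = {}
    D[source] = 0
    pq = [(0, source)]          # min-heap of (cost, node)
    visited = set()             # the set N of finalized nodes
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue            # stale entry; a cheaper path was found
        visited.add(u)
        for v, cost in graph[u].items():
            if d + cost < D[v]:
                D[v] = d + cost
                P[v] = u
                heapq.heappush(pq, (D[v], v))
    return D, P
```

Running `dijkstra(graph, 'A')` reproduces the least costs derived step by step below: B=2, C=3, D=1, E=2, F=4.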
Step 1:
The first step is an initialization step. The currently known least cost path from A to its directly
attached neighbors, B, C, D are 2,5,1 respectively. The cost from A to B is set to 2, from A to D is set
to 1 and from A to C is set to 5. The costs from A to E and from A to F are set to infinity, as
they are not directly linked to A.
Step  N    D(B),P(B)   D(C),P(C)   D(D),P(D)   D(E),P(E)   D(F),P(F)
1     A    2,A         5,A         1,A         ∞           ∞
Step 2:
In the above table, we observe that vertex D has the least-cost path in step 1. Therefore, it is
added to N. Now we determine the least-cost paths of the remaining vertices through D.

a) v = B, w = D
   D(B) = min( 2, 1 + c(D,B) ) = 2, so the path through D does not improve it.
b) v = C, w = D
   D(C) = min( 5, 1+3 ) = min( 5, 4 ) = 4
   The minimum value is 4. Therefore, the currently shortest path from A to C is 4.
c) v = E, w = D
   D(E) = min( ∞, 1+1 ) = min( ∞, 2 ) = 2

Note: Vertex D has no direct link to vertex F. Therefore, the value of D(F) remains infinity.
Step 3:
In the above table, we observe that both E and B have the least-cost path (cost 2) in step 2.
Let's consider vertex E; it is added to N. Now we determine the least-cost paths of the
remaining vertices through E.

a) v = B, w = E
   D(B) = min( 2, 2+∞ ) = min( 2, ∞ ) = 2
b) v = C, w = E
   D(C) = min( 4, 2+1 ) = min( 4, 3 ) = 3
c) v = F, w = E
   D(F) = min( ∞, 2+2 ) = min( ∞, 4 ) = 4
Step 4:
In the above table, we observe that vertex B has the least-cost path in step 3. Therefore, it is
added to N. Now we determine the least-cost paths of the remaining vertices through B.

a) v = C, w = B
   D(C) = min( 3, 2+3 ) = min( 3, 5 ) = 3
b) v = F, w = B
   D(F) = min( 4, 2+∞ ) = min( 4, ∞ ) = 4
Step 5:
In the above table, we observe that vertex C has the least-cost path in step 4. Therefore, it is
added to N. Now we determine the least-cost path of the remaining vertex, F, through C.

a) v = F, w = C
   D(F) = min( 4, 3+5 ) = min( 4, 8 ) = 4

Final table:
Step  N        D(B),P(B)   D(C),P(C)   D(D),P(D)   D(E),P(E)   D(F),P(F)
1     A        2,A         5,A         1,A         ∞           ∞
2     AD       2,A         4,D                     2,D         ∞
3     ADE      2,A         3,E                                 4,E
4     ADEB                 3,E                                 4,E
5     ADEBC                                                    4,E
6     ADEBCF
Disadvantage:
Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite looping;
this problem can be solved by using the Time-to-live (TTL) field.
Error is a condition in which the output information does not match the input information. During
transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling
from one system to another: a 0 bit may change to 1, or a 1 bit may change to 0.
Error-Detecting codes
Whenever a message is transmitted, it may get scrambled by noise or data may get corrupted. To
avoid this, we use error-detecting codes which are additional data added to a given digital message
to help us detect if an error occurred during transmission of the message. A simple example of an
error-detecting code is the parity check.
Error-Correcting codes
Along with error-detecting code, we can also pass some data to figure out the original message from
the corrupt message that we received. This type of code is called an error-correcting code.
Error-correcting codes also deploy the same strategy as error-detecting codes, but additionally,
such codes can determine the exact location of the corrupt bit.
In error-correcting codes, the parity check offers a simple way to detect errors along with a
sophisticated mechanism to determine the corrupt bit's location. Once the corrupt bit is located,
its value is inverted (from 0 to 1, or 1 to 0) to recover the original message.
How to Detect and Correct Errors?
To detect and correct the errors, additional bits are added to the data bits at the time of
transmission.
• The additional bits are called parity bits. They allow detection or correction of the errors.
• The data bits along with the parity bits form a code word.
Parity Check
It is the simplest technique for detecting and correcting errors. The MSB of an 8-bit word is used as
the parity bit, and the remaining 7 bits are used as data or message bits. The parity of the 8-bit
transmitted word can be either even or odd.
Even parity -- Even parity means the number of 1's in the given word including the parity bit should
be even (2,4,6,....).
Odd parity -- Odd parity means the number of 1's in the given word including the parity bit should be
odd (1,3,5,....).
The parity bit can be set to 0 and 1 depending on the type of the parity required.
• For even parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is even.
Shown in fig. (a).
• For odd parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is odd.
Shown in fig. (b).
How Does Error Detection Take Place?
Parity checking at the receiver can detect the presence of an error if the parity of the received
signal differs from the expected parity. That is, if it is known that the parity of the transmitted
signal is always going to be "even" and the received signal has odd parity, the receiver can
conclude that the received signal is not correct. If an error is detected, the receiver ignores the
received byte and requests retransmission of the same byte from the transmitter.
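A minimal sketch of even-parity generation and checking (the function names are illustrative):

```python
def add_even_parity(data_bits):
    """Append a parity bit so the code word has an even number of 1s."""
    return data_bits + [sum(data_bits) % 2]

def check_even_parity(word):
    """Receiver side: the word is valid iff its count of 1s is even."""
    return sum(word) % 2 == 0
```

Flipping any single bit makes the check fail, but two flipped bits cancel out, which is why a single parity bit detects only odd numbers of bit errors.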
Before understanding the working of Go-Back-N ARQ, we first look at the sliding window
protocol. As we know that the sliding window protocol is different from the stop-and-wait
protocol. In the stop-and-wait protocol, the sender can send only one frame at a time and
cannot send the next frame without receiving the acknowledgment of the previously sent
frame, whereas, in the case of sliding window protocol, the multiple frames can be sent at a
time. The variations of sliding window protocol are Go-Back-N ARQ and Selective Repeat
ARQ. Let's understand 'what is Go-Back-N ARQ'.
In Go-Back-N ARQ, N is the sender's window size. Suppose we say that Go-Back-3, which
means that the three frames can be sent at a time before expecting the acknowledgment
from the receiver.
It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is
Go-Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2, frame
no 3 can be sent before expecting the acknowledgment of frame no 1.
In Go-Back-N ARQ, the frames are numbered sequentially: since Go-Back-N ARQ sends
multiple frames at a time, a numbering scheme is required to distinguish one frame from
another, and these numbers are known as sequence numbers.
The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an agreed-upon time period, then all
the frames available in the current window will be retransmitted. Suppose we have sent the
frame no 5, but we didn't receive the acknowledgment of frame no 5, and the current
window is holding three frames, then these three frames will be retransmitted.
The sequence numbers of the outbound frames depend on the size of the sender's
window. Suppose the number of bits in the sequence number is 2; then the sequence
numbers cycle through the binary values 00, 01, 10, 11 rather than running
1,2,3,4,5,6,7,8,9,10. Let's understand this through an example.
Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be
sent. These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence
numbers of the frames. Mainly, the sequence number is decided by the sender's window size.
But, for the better understanding, we took the running sequence numbers, i.e.,
0,1,2,3,4,5,6,7,8,9,10. Let's consider the window size as 4, which means that the four frames
can be sent at a time before expecting the acknowledgment of the first frame.
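To show how the wrapped sequence numbers relate to the running numbers used below, here is a small sketch (the helper name is illustrative):

```python
def seq_numbers(n_frames, bits=2):
    """Sequence numbers wrap around modulo 2**bits."""
    return [i % (2 ** bits) for i in range(n_frames)]
```

With 2 bits, the 11 frames would actually carry the numbers 0,1,2,3,0,1,2,3,0,1,2 on the wire.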
Step 1: First, the sender sends the first four frames to the receiver, i.e., 0, 1, 2, 3, and now
the sender expects to receive the acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for frame 0, and the sender
has successfully received it.
The sender will then send the next frame, i.e., 4, and the window slides containing four
frames (1,2,3,4).
The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will
slide having four frames (2,3,4,5).
Now, let's assume that the receiver is not acknowledging frame no 2: either the frame is
lost, or the acknowledgment is lost. Instead of sending frame no 6, the sender goes back
to 2, the first frame of the current window, and retransmits all the frames in the current
window, i.e., 2, 3, 4, 5.
o In Go-Back-N, N determines the sender's window size, and the size of the receiver's
window is always 1.
o It does not process corrupted frames; it simply discards them.
o It does not accept frames which are out of order and discards them.
o If the sender does not receive the acknowledgment, it retransmits all the frames in the
current window.
Example 1: In GB4, if every 6th packet being transmitted is lost and we have to send 10
packets, how many transmissions are required?
Solution: Here, GB4 means that N is equal to 4. The size of the sender's window is 4.
Step 1: As the window size is 4, so four packets are transferred at a time, i.e., packet no 1,
packet no 2, packet no 3, and packet no 4.
Step 2: Once the first window has been transferred, the sender receives the
acknowledgment of the first frame, i.e., packet no 1. On receiving the acknowledgment, the
sender sends the next packet, i.e., packet no 5. The window slides to hold four packets,
i.e., 2, 3, 4, 5, excluding packet 1, whose acknowledgment has been received
successfully.
Step 3: Now the sender receives the acknowledgment of packet 2. After receiving it, the
sender sends the next packet, i.e., packet no 6. As mentioned in the question, every 6th
transmission is lost, so this 6th packet is lost, but the sender does not know that it has
been lost.
Step 4: The sender receives the acknowledgment for packet no 3. After receiving it, the
sender sends the next packet, i.e., the 7th packet. The window slides to hold four packets,
i.e., 4, 5, 6, 7.
Step 5: When packet 7 has been sent, the sender receives the acknowledgment for
packet no 4. The sender then sends the next packet, i.e., the 8th packet. The window
slides to hold four packets, i.e., 5, 6, 7, 8.
Step 6: When packet 8 is sent, the sender receives the acknowledgment of packet 5.
On receiving it, the sender sends the next packet, i.e., the 9th packet. The window slides
to hold four packets, i.e., 6, 7, 8, 9.
Step 7: The current window holds four packets, i.e., 6, 7, 8, 9, where the 6th packet is the
first packet in the window. As we know, the 6th packet has been lost, so the sender
receives the negative acknowledgment NAK(6). Since every 6th transmission is lost, the
counter restarts from 1, so the counter values 1, 2, 3 are given to the 7th, 8th, and 9th
packets respectively.
Step 8: As it is Go-Back-N, the sender retransmits all the packets of the current window:
6, 7, 8, 9. Their counter values are 4, 5, 6, 1 respectively. In this case, the 8th packet is
lost, as it carries counter value 6, so the counter variable again restarts from 1.
Step 9: After the retransmission, the sender receives the acknowledgment of packet 6.
On receiving it, the sender sends the 10th packet. Now the current window holds four
packets, i.e., 7, 8, 9, 10.
Step 10: When the 10th packet is sent, the sender receives the acknowledgment of packet 7.
Now the current window holds three packets: 8, 9, and 10. The counter values of 8, 9, 10
are 6, 1, 2.
Step 11: As the 8th packet carries counter value 6, it has been lost, and the sender
receives NAK(8).
Step 12: Since the sender has received a negative acknowledgment for the 8th packet, it
resends all the packets of the current window, i.e., 8, 9, 10.
Step 13: The counter values of 8, 9, 10 are 3, 4, 5 respectively, so their acknowledgments
are received successfully.
We conclude from the above figure that a total of 17 transmissions are required.
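The count can be checked with a small simulation. This is a sketch: the loss model (every 6th transmission lost) and the function names are assumptions made to match the walkthrough above:

```python
def gbn_total_transmissions(n_packets=10, window=4, lose_every=6):
    """Count Go-Back-N transmissions when every `lose_every`-th send is lost."""
    tx = 0          # global (re)transmission counter
    arrived = {}    # packet -> did its most recent send survive?

    def send(p):
        nonlocal tx
        tx += 1
        arrived[p] = (tx % lose_every != 0)

    base, next_new = 1, 1
    while next_new <= n_packets and next_new < base + window:
        send(next_new)            # fill the initial window
        next_new += 1
    while base <= n_packets:
        if arrived[base]:
            base += 1             # cumulative ACK slides the window
            if next_new <= n_packets:
                send(next_new)    # one new packet per ACK
                next_new += 1
        else:
            for p in range(base, next_new):
                send(p)           # go back: resend the whole window
    return tx
```

For GB4 with 10 packets and every 6th transmission lost, the simulation agrees with the step-by-step count of 17 transmissions.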
Q.4 What do you mean by random access protocols? Explain slotted ALOHA in brief.
Random Access Protocols is a Multiple access protocol that is divided into four categories
which are ALOHA, CSMA, CSMA/CD, and CSMA/CA. In this article, we will cover all of these
Random Access Protocols in detail.
Have you ever been to a railway station? And noticed the ticket counter over there?
Above are the scenarios for approaching a ticket counter. Which one do you think is more
productive? The ordered one, right? And we all know the reason why. Just to get things
working and avoid problems we have some rules or protocols, like "please stand in the
queue", "do not push each other", "wait for your turn", etc. in the same way computer
network channels also have protocols like multiple access protocols, random access
protocols, etc.
Let's say you are talking to your friend using a mobile phone. This means there is a link
established between you and him. But the point to be remembered is that the
communication channel between you and him (the sender & the receiver or vice-versa) is not
always a dedicated link, which means the channels are not only providing service to you at
that time but to others as well. This means multiple users might be communicating through
the same channel.
How is that possible? The reason behind this is the multiple access protocols. If you refer to
the OSI model you will come across the data link layer. Now divide the layers into 2 parts, the
upper part of the layer will take care of the data link control, and the lower half will be taking
care in resolving the access to the shared media, as shown in the above diagram.
The following diagram classifies the multiple-access protocol. In this article, we are going to
cover Random Access Protocol.
Once again, let's use the example of mobile phone communication. Whenever you call
someone, a connection between you and the desired person is established, also anyone can
call anyone. So here we have all the users (stations) at equal priority, where any station
can send data depending on the medium's state, whether it is idle or busy. This means
that if your friend is talking to someone else on the mobile phone, its status is busy and
you cannot establish a connection; and since all users are assigned equal priority, you
cannot disconnect your friend's ongoing call and connect yours.
1. There is no time restriction for sending the data (you can talk to your friend without a
time restriction).
As in the above diagram you might have observed that the random-access protocol is further
divided into four categories, which are:
1. ALOHA
2. CSMA
3. CSMA/CD
4. CSMA/CA
The ALOHA protocol, also known as the ALOHA method, is a simple communication
scheme in which every transmitting station or source in a network sends data whenever
a frame is available for transmission. If the frame successfully reaches its destination,
the next frame is lined up for transmission. But remember, if the data frame is not received
by the receiver (perhaps due to a collision), then the frame is sent again until it
successfully reaches the receiver's end.
Whenever we talk about a wireless broadcast system or a half-duplex two-way link, the
ALOHA method works efficiently. But networks become more and more complex, e.g.
the Ethernet, where the system involves multiple sources and destinations that share a
common data path or channel; then conflicts occur because data frames collide and the
information is lost. Following is the flow chart of Pure ALOHA.
So, to minimize these collisions and to optimize network efficiency as well as to increase the
number of subscribers that can use a given network, the slotted ALOHA was developed. This
system consists of the signals termed as beacons which are sent at precise time intervals and
inform each source when the channel is clear to send the frame.
Now, as we came to know about ALOHA's 2 types i.e. Pure & Slotted ALOHA, the following is
the difference between both.
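One standard way to quantify the difference is throughput S as a function of offered load G. These are textbook formulas rather than something stated in these notes:

```python
import math

def pure_aloha_throughput(G):
    """Pure ALOHA: S = G * e^(-2G), peaking at about 18.4% when G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slotted ALOHA: S = G * e^(-G), peaking at about 36.8% when G = 1."""
    return G * math.exp(-G)
```

Slotting doubles the peak throughput because a frame can only collide with frames sent in the same slot, not with any frame overlapping its transmission time.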
CSMA stands for Carrier Sense Multiple Access. So far we have seen that when two or
more stations start sending data, a collision occurs; the CSMA method was developed to
decrease the chances of collisions when two or more stations start sending their
signals over the shared medium. But how do they do it? CSMA makes each station first
check the medium (whether it is busy or not) before sending any data packet.
o 1-persistent mode: In this, first the node checks the channel, if the channel is idle
then the node or station transmits data, otherwise it keeps on waiting and whenever
the channel is idle, the stations transmit the data-frame.
o Non-persistent mode: In this, the station checks the channel similarly as 1-persistent
mode, but the only difference is that when the channel is busy it checks it again after
a random amount of time, unlike the 1-persistent where the stations keep on
checking continuously.
o P-persistent mode: In this, the station checks the channel and if found idle then it
transmits the data frame with the probability of P and if the data is not transmitted
(1-P) then the station waits for a random amount of time and again transmits the
data with the probability P and this cycle goes on continuously until the data-frame is
successfully sent.
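A toy sketch of the p-persistent decision step (the function names and the slot abstraction are illustrative assumptions):

```python
import random

def p_persistent_step(channel_idle, p=0.1, rng=random.random):
    """One slot's decision in p-persistent CSMA.

    channel_idle -- callable returning True when the medium is free
    Returns 'transmit', 'defer' (wait a random backoff, then retry),
    or 'wait' (channel busy: sense again next slot).
    """
    if not channel_idle():
        return "wait"
    if rng() < p:
        return "transmit"       # transmit with probability p
    return "defer"              # with probability 1-p, back off and retry
```

Injecting a deterministic `rng` makes the three outcomes easy to exercise.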
In CSMA/CD, whenever a station transmits a data frame, it monitors the channel or medium to
learn the state of the transmission, i.e., successfully transmitted or failed. If the
transmission succeeds, it prepares for the next frame; otherwise, it resends the previously
failed data frame. The point to remember here is that the frame transmission time should be
at least twice the maximum propagation time, which corresponds to the case where the
distance between the two stations involved in a collision is maximum.
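This condition fixes a minimum frame size. As a sketch, the 10 Mbps / 25.6 µs figures below are the classic Ethernet parameters, used purely as an illustration:

```python
def min_frame_bits(bandwidth_bps, max_prop_delay_s):
    """CSMA/CD: transmission time Tt >= 2 * Tp, so L_min = 2 * Tp * B."""
    return 2 * max_prop_delay_s * bandwidth_bps

# Classic 10 Mbps Ethernet with a 25.6 microsecond worst-case propagation
# delay requires frames of at least 512 bits (64 bytes).
```

Any shorter frame could finish transmitting before a collision at the far end is detected.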
To detect possible collisions, the sender receives acknowledgments; if there is only one
acknowledgment present (its own), the data frame has been sent successfully. But if there
are two or more acknowledgment signals, a collision has occurred.
o Interframe space: in this case, assume that your station waits for the channel to
become idle and found that the channel is idle, then it will not send the data-frame
immediately (in order to avoid collision due to propagation delay) it rather waits for
some time called interframe space or IFS, and after this time the station again checks
the medium for being idle. But it should be kept in mind that the IFS duration
depends on the priority of the station.
o Contention Window: here, the time is divided into slots. Say, if the sender is ready for
transmission of the data, it then chooses a random number of slots as waiting time
which doubles every time whenever the channel is busy. But, if the channel is not idle
at that moment, then it does not restart the entire process but restarts the timer
when the channel is found idle again.
o Acknowledgment: as discussed above, the sender station retransmits the data if the
acknowledgment is not received before the timer expires.
Q.5 Discuss the principles of Reliable Data Transfer.
Transport layer protocols are a central piece of layered architectures; they provide the
logical communication between application processes. These processes use the logical
communication to transfer data from the transport layer to the network layer, and this transfer
of data should be reliable and secure. The data is transferred in the form of packets, but the
problem lies in the reliable transfer of that data.
The problem of transferring data reliably occurs not only at the transport layer, but also at the
application layer as well as the link layer. It arises whenever a reliable service runs on top of
an unreliable service. For example, TCP (Transmission Control Protocol) is a reliable data
transfer protocol implemented on top of an unreliable layer: the Internet Protocol (IP), an
end-to-end network layer protocol.
In this model, we design the sender and receiver sides of a protocol over a reliable
channel. In reliable data transfer, the layer receives the data from the layer above,
breaks the message into segments, puts a header on each segment, and transfers them.
The layer below receives the segments, removes the header from each segment, and
reassembles them into the original message.
With a reliable data transfer protocol, none of the transferred data bits are corrupted or lost,
and all are delivered in the same sequence in which they were sent to the layer below. This
service model is offered by TCP to the Internet applications that invoke this transfer of
data.
Similarly, over an unreliable channel we design the sending and receiving sides. On the
sending side, rdt_send() is called from the layer above; it passes the data that is to be
delivered to the application layer at the receiving side (here rdt_send() is the function for
sending data, where rdt stands for reliable data transfer protocol and _send() indicates
the sending side).
On the receiving side, rdt_rcv() (the function for receiving data, where _rcv() indicates
the receiving side) is called when a packet arrives from the receiving side of the
unreliable channel. When the rdt protocol wants to deliver data to the application layer, it
does so by calling deliver_data() (where deliver_data() is a function for delivering data to
the upper layer).
For a reliable data transfer protocol, we consider only the case of unidirectional data transfer,
that is, transfer of data from the sending side to the receiving side (i.e., only in one direction).
The bidirectional case (full duplex, or transfer of data on both sides) is conceptually more
difficult. Although we consider only unidirectional data transfer, it is important to note that
the sending and receiving sides of our protocol still need to transmit packets in both
directions, as shown in the above figure.
In order to exchange packets containing the data to be transferred, both the sending and
receiving sides of rdt also need to exchange control packets in both directions (i.e., back
and forth); both sides of rdt send packets to the other side by a call to udt_send()
(a function used for sending data to the other side, where udt stands for unreliable data
transfer protocol).
OR
The Reliable Data Transfer (RDT) 2.0 protocol provides reliable data transfer over a channel
with bit errors. It is a more realistic model in which the bits in a packet may be corrupted.
Such bit errors occur in the physical components of a network when a packet is transmitted,
propagated, or buffered. Here we assume that all transmitted packets are received in the
order in which they were sent (whether or not they are corrupted).
In this scheme, the receiver replies with an ACK (acknowledgement, i.e., the packet received
is correct and not corrupted) or a NAK (negative acknowledgement, i.e., the packet received
is corrupted). The protocol detects errors by using a checksum field; a checksum is a value
computed over the bits of a transmission message. If the checksum value calculated by the
end user differs even slightly from the original checksum value, the packet is corrupted. The
mechanism that allows the receiver to detect bit errors in a packet using a checksum is
called Error Detection.
These techniques allow the receiver to detect, and possibly correct, packet bit errors. For
now we only need to know that these techniques require extra bits (beyond the bits of the
original data to be transferred) to be sent from the sender to the receiver; these bits are
gathered into the packet checksum field of the RDT 2.0 data packet.
Another technique is Receiver Feedback. Since the sender and receiver execute on different
end systems, the only way for the sender to learn of the receiver's situation, i.e., whether or
not a packet was received correctly, is for the receiver to provide explicit feedback to the
sender. The positive (ACK) and negative (NAK) acknowledgement replies in the message-
dictation scenario are an example of such feedback. A value of 0 indicates a NAK and a
value of 1 indicates an ACK.
Sending Side:
The sending side of RDT 2.0 has two states. In one state, the send-side protocol is waiting
for data to be passed down from the upper layer. In the other state, the sender protocol is
waiting for an ACK or a NAK packet from the receiver (feedback). If an ACK packet is
received, i.e., rdt_rcv(rcvpkt) && isACK(rcvpkt), the sender knows that the most recently
transmitted packet has been received correctly, and thus the protocol returns to the state of
waiting for data from the upper layer.
If a NAK is received, the protocol retransmits the last packet and waits for an ACK or NAK to
be returned by the receiver in response to the retransmitted data packet. It is important to
note that while the sender is in the wait-for-ACK-or-NAK state, it cannot get more data from
the upper layer; that happens only after the sender receives an ACK and leaves this state.
Thus, the sender will not send a new piece of data until it is sure that the receiver has
correctly received the current packet. Because of this behavior, the protocol is also known
as a Stop-and-Wait protocol.
Receiving Side:
The receiver side has a single state. As soon as a packet arrives, the receiver replies with
either an ACK or a NAK, depending on whether or not the received packet is corrupted, i.e.,
using rdt_rcv(rcvpkt) && corrupt(rcvpkt) when a packet is received and found to be in error,
or rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) when the packet received is correct.
RDT 2.0 may look as if it works, but it has a fatal flaw: the ACK/NAK packets themselves may
be corrupted, and it is not clear how the protocol can recover from errors in ACK or NAK
packets. The difficulty is that if an ACK or NAK is corrupted, the sender has no way of
knowing whether or not the receiver has correctly received the last piece of transmitted data.
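The stop-and-wait loop can be sketched as a toy simulation. The byte-sum checksum and the channel model below are illustrative assumptions, not the protocol's actual finite-state-machine code:

```python
def checksum(data: bytes) -> int:
    """Toy checksum: sum of the bytes modulo 256 (a stand-in for the
    real Internet checksum)."""
    return sum(data) % 256

def make_pkt(data: bytes) -> dict:
    return {"data": data, "checksum": checksum(data)}

def is_corrupt(pkt: dict) -> bool:
    return checksum(pkt["data"]) != pkt["checksum"]

def receiver(pkt: dict) -> str:
    """Single-state receiver: ACK a clean packet, NAK a corrupted one."""
    return "NAK" if is_corrupt(pkt) else "ACK"

def rdt2_send(data: bytes, channel) -> int:
    """Stop-and-wait sender: retransmit until an ACK comes back.
    Returns the number of transmissions that were needed."""
    pkt = make_pkt(data)
    attempts = 0
    while True:
        attempts += 1
        if channel(pkt) == "ACK":   # udt_send + wait for feedback
            return attempts

def flaky_channel(corrupt_first_n: int):
    """Channel that garbles the payload of the first n transmissions."""
    seen = {"n": 0}
    def channel(pkt):
        seen["n"] += 1
        if seen["n"] <= corrupt_first_n:
            return receiver(dict(pkt, data=b"garbled!"))  # bit errors in transit
        return receiver(pkt)
    return channel
```

Note that this sketch assumes the ACK/NAK feedback itself arrives intact, which is exactly the flaw of RDT 2.0 discussed above; handling corrupted feedback requires sequence numbers (RDT 2.1).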