Unit 4 and 5

The document provides an overview of network layer concepts, including switching techniques (circuit switching, message switching and packet switching), Internet Protocol addressing (IPv4 and IPv6 address formats), and the Address Resolution Protocol and its role in mapping IP addresses to MAC addresses.

Unit 4 (Overview of Network Layer)

1. SWITCHING TECHNIQUES

Circuit Switching

✓ Circuit switching is a switching technique that establishes a dedicated path between sender
and receiver.
✓ In the circuit switching technique, once the connection is established, the dedicated path
remains in place until the connection is terminated.
✓ Circuit switching in a network operates in much the same way as the telephone system.
✓ A complete end-to-end path must exist before communication takes place.
✓ In circuit switching, when a user wants to send data, voice, or video, a request signal is sent
to the receiver, and the receiver sends back an acknowledgment to confirm the availability of
the dedicated path. After the acknowledgment is received, the dedicated path carries the data.
✓ Circuit switching is used in the public telephone network and for voice transmission.
✓ In circuit switching, data is transferred at a fixed rate.

Communication through circuit switching has 3 phases:

✓ Circuit establishment
✓ Data transfer
✓ Circuit Disconnect
Message Switching

✓ Message switching is a switching technique in which a message is transferred as a complete
unit and routed through intermediate nodes, at which it is stored and forwarded.
✓ In message switching, there is no establishment of a dedicated path between the sender and
receiver.
✓ The destination address is appended to the message. Message switching provides dynamic
routing, as the message is routed through the intermediate nodes based on the information it
carries.
✓ Message switches are programmed so that they can provide the most efficient routes.
✓ Each node stores the entire message and then forwards it to the next node. This type of
network is known as a store-and-forward network.
✓ Message switching treats each message as an independent entity.

Packet Switching

✓ Packet switching is a switching technique in which the message is divided into smaller
pieces that are sent individually.
✓ The pieces are known as packets, and each packet is given a sequence number so that its
order can be re-established at the receiving end.
✓ Every packet contains information in its header such as the source address, destination
address and sequence number.
✓ Packets travel across the network, taking the shortest available path.
✓ All the packets are reassembled at the receiving end in the correct order.
✓ If any packet is missing or corrupted, a message is sent asking the sender to resend it.
✓ If all packets arrive and are placed in the correct order, an acknowledgment message is sent
(see the sketch below).
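As a rough illustration of this idea (not part of any protocol standard; the packet structure and payload size here are invented for the example), the following Python sketch splits a message into sequence-numbered packets and reassembles it correctly even when the packets arrive out of order:

```python
import random

def packetize(message: bytes, payload_size: int = 4):
    """Split a message into packets that carry a sequence number (illustrative only)."""
    return [
        {"seq": i, "payload": message[i * payload_size:(i + 1) * payload_size]}
        for i in range((len(message) + payload_size - 1) // payload_size)
    ]

def reassemble(packets):
    """Sort packets by sequence number and join their payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"HELLO PACKET SWITCHING")
random.shuffle(packets)   # packets may take different paths and arrive out of order
assert reassemble(packets) == b"HELLO PACKET SWITCHING"
```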

Approaches of Packet Switching

There are two approaches to Packet Switching:

Datagram Packet switching

✓ It is a packet switching technique in which each packet, known as a datagram, is treated as
an independent entity. Each packet carries information about the destination, and the switch
uses this information to forward the packet to the correct destination.
✓ The packets are reassembled at the receiving end in correct order.
✓ In Datagram Packet Switching technique, the path is not fixed.
✓ Intermediate nodes take the routing decisions to forward the packets.
✓ Datagram Packet Switching is also known as connectionless switching.

Virtual Circuit Switching

✓ Virtual Circuit Switching is also known as connection-oriented switching.


✓ In the case of Virtual circuit switching, a preplanned route is established before the
messages are sent.
✓ Call request and call accept packets are used to establish the connection between sender
and receiver.
✓ In this case, the path is fixed for the duration of a logical connection.

Concept of virtual circuit switching through a diagram:

✓ In the above diagram, A and B are the sender and receiver respectively. 1 and 2 are the
nodes.
✓ Call request and call accept packets are used to establish a connection between the
sender and receiver.
✓ When a route is established, data will be transferred.
✓ After transmission of data, an acknowledgment signal is sent by the receiver that the
message has been received.
✓ If the user wants to terminate the connection, a clear signal is sent for the termination.

2. INTERNET PROTOCOL (IP)


IP stands for Internet Protocol. An IP address is assigned to each device connected to a
network, and each device uses its IP address for communication. The address also acts as an
identifier, since it is used to identify the device on a network. IP defines the technical format
of the packets. IP is usually used together with TCP, and the combination is referred to as
TCP/IP. It creates a virtual connection between the source and the destination.

✓ We can also define an IP address as a numeric address assigned to each device on a network,
so that each device can be identified uniquely. To facilitate the routing of packets, the TCP/IP
protocol suite uses a 32-bit logical address known as an IPv4 (Internet Protocol version 4)
address.
✓ An IP address consists of two parts: the first is the network address, and the other is the
host address.

There are two types of IP addresses:

 IPv4
 IPv6

IPv4
IPv4 is version 4 of IP. It is the current and most commonly used IP address format. It is a
32-bit address written as four numbers separated by periods (dots). This address is unique
for each device.

For example, 66.94.29.13

The above example represents the IP address in which each group of numbers separated by
periods is called an Octet. Each number in an octet is in the range from 0-255. This address can
produce 4,294,967,296 possible unique addresses.

Computers do not work with IP addresses in the dotted-decimal format directly; they
understand numbers only in binary form, where each digit is either 1 or 0. An IPv4 address
consists of four sets, and each set represents an octet. The bits in each octet represent a
number.

Each bit in an octet can be either 1 or 0. If a bit is 1, the value it represents is counted; if it
is 0, the value it represents is not counted.

Representation of an 8-bit octet

In an 8-bit octet, each bit position carries a place value: 128, 64, 32, 16, 8, 4, 2 and 1, from
the most significant bit to the least significant bit.

Now, we will see how to obtain the binary representation of the above IP address, i.e.,
66.94.29.13

Step 1: First, we find the binary number of 66.

To obtain 66, we put 1 under 64 and 2 as the sum of 64 and 2 is equal to 66 (64+2=66), and the
remaining bits will be zero, as shown above. Therefore, the binary bit version of 66 is
01000010.

Step 2: Now, we calculate the binary number of 94.

To obtain 94, we put 1 under 64, 16, 8, 4, and 2 as the sum of these numbers is equal to 94, and
the remaining bits will be zero. Therefore, the binary bit version of 94 is 01011110.

Step 3: The next number is 29.

To obtain 29, we put 1 under 16, 8, 4, and 1 as the sum of these numbers is equal to 29, and the
remaining bits will be zero. Therefore, the binary bit version of 29 is 00011101.

Step 4: The last number is 13.


To obtain 13, we put 1 under 8, 4, and 1 as the sum of these numbers is equal to 13, and the
remaining bits will be zero. Therefore, the binary bit version of 13 is 00001101.
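The four steps above can be double-checked with a short Python snippet that converts each octet of 66.94.29.13 into its 8-bit binary form (plain standard-library code, used here only as a verification aid):

```python
address = "66.94.29.13"

# Convert each decimal octet into an 8-bit binary string.
octets = [int(octet) for octet in address.split(".")]
binary_octets = [format(octet, "08b") for octet in octets]

print(binary_octets)            # ['01000010', '01011110', '00011101', '00001101']
print(".".join(binary_octets))  # 01000010.01011110.00011101.00001101
```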

IPv6
IPv4 provides about 4 billion addresses, which the developers once thought would be enough,
but they were wrong. IPv6 is the next generation of IP addressing. The main difference between
IPv4 and IPv6 is the address size of IP addresses. The IPv4 is a 32-bit address, whereas IPv6
is a 128-bit hexadecimal address. IPv6 provides a large address space, and it contains a simple
header as compared to IPv4.

It provides transition strategies that convert IPv4 into IPv6, and these strategies are as follows:

✓ Dual stacking: It allows us to have both the versions, i.e., IPv4 and IPv6, on the same
device.
✓ Tunneling: In this approach, two IPv6 hosts or networks communicate across an intervening
IPv4 network by encapsulating IPv6 packets inside IPv4 packets.
✓ Network Address Translation: The translation allows the communication between the
hosts having a different version of IP.

This hexadecimal address contains both digits and letters. Because of its much larger size,
IPv6 can produce over 340 undecillion (approximately 3.4 × 10^38) addresses.

IPv6 is a 128-bit hexadecimal address made up of 8 sets of 16 bits each, and these 8 sets are
separated by a colon. In IPv6, each hexadecimal character represents 4 bits. So, we need to
convert 4 bits to a hexadecimal number at a time.
Address format

The address format of IPv4

The address format of IPv6

The above diagram shows the address format of IPv4 and IPv6. An IPv4 is a 32-bit decimal
address. It contains 4 octets or fields separated by 'dot', and each field is 8-bit in size. The
number that each field contains should be in the range of 0-255. Whereas an IPv6 is a 128-bit
hexadecimal address. It contains 8 fields separated by a colon, and each field is 16-bit in size.
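For a quick check of the two formats, Python's standard ipaddress module can be used; the IPv6 address below is just a documentation-style example value:

```python
import ipaddress

v4 = ipaddress.ip_address("66.94.29.13")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version, v4.max_prefixlen)  # 4 32  -> IPv4 addresses are 32 bits
print(v6.version, v6.max_prefixlen)  # 6 128 -> IPv6 addresses are 128 bits
print(v6.exploded)                   # all 8 colon-separated groups of 16 bits
```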

3. Address Resolution Protocol (ARP)

The acronym ARP stands for Address Resolution Protocol, one of the most important
protocols of the data link layer in the OSI model. It is responsible for finding the hardware
address of a host from a known IP address. There are three related ARP variants: Reverse
ARP, Proxy ARP and Inverse ARP.
Note: ARP finds the hardware address, also known as the Media Access Control
(MAC) address, of a host from its known IP address.
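Conceptually, each host keeps an ARP cache that maps IP addresses to MAC addresses. The sketch below is a simplified, hypothetical model of that lookup (the addresses are made up, and no real ARP frames are sent):

```python
# Hypothetical ARP cache: IP address -> MAC address (illustrative values only).
arp_cache = {
    "192.168.1.1": "aa:bb:cc:dd:ee:01",
    "192.168.1.7": "aa:bb:cc:dd:ee:07",
}

def resolve(ip: str) -> str:
    """Return the cached MAC address, or simulate an ARP broadcast on a cache miss."""
    if ip in arp_cache:
        return arp_cache[ip]
    print(f"ARP request (broadcast): who has {ip}?")
    mac = "aa:bb:cc:dd:ee:ff"   # in a real network, this reply comes from the target host
    arp_cache[ip] = mac         # cache the answer for subsequent lookups
    return mac

print(resolve("192.168.1.7"))   # cache hit
print(resolve("192.168.1.9"))   # cache miss -> simulated broadcast, then cached
```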
Reverse ARP

Reverse Address Resolution Protocol is a protocol used in local area networks (LANs) by
client machines to request an IPv4 address from the router's ARP table. Whenever a new
machine that requires an IP address joins the network, it sends a RARP broadcast packet
containing its MAC address in both the sender and receiver hardware address fields.

Proxy ARP

Proxy Address Resolution Protocol enables devices that sit on separate network segments,
connected through a router but belonging to the same IP network, to resolve IP addresses to
MAC addresses. With Proxy ARP enabled, the 'proxy router' answers ARP broadcasts on the
local network with its own MAC address on behalf of the intended destination. When the
sender receives the MAC address of the proxy router, it sends the datagram to the proxy
router, which then forwards it to the destination device.

Inverse ARP
Inverse Address Resolution Protocol uses a known MAC address to find the corresponding IP
address; in other words, Inverse ARP is simply the inverse of ARP. It is used by default in
ATM (Asynchronous Transfer Mode) networks. Inverse ARP helps in finding Layer-3
addresses from Layer-2 addresses.

Comparison of RARP and ARP:

• Mapping: RARP maps a physical (MAC) address to an IP address; ARP maps an IP address
to a physical (MAC) address.
• Purpose: RARP obtains the IP address of a network device when only its MAC address is
known; ARP obtains the MAC address of a network device when only its IP address is known.
• Operation: In RARP, the client broadcasts its MAC address and requests an IP address, and
the server responds with the corresponding IP address; in ARP, the client broadcasts the IP
address it wants to reach and requests the MAC address, and the target responds with the
corresponding MAC address.
• Known information: RARP starts from MAC addresses; ARP starts from IP addresses.
• Usage: RARP is rarely used in modern networks, where devices obtain their IP addresses by
other means; ARP is widely used in modern networks to resolve IP addresses to MAC addresses.
• Standardization: RARP is defined in RFC 903; ARP is defined in RFC 826.
• Full form: RARP stands for Reverse Address Resolution Protocol; ARP stands for Address
Resolution Protocol.
• Scope: In RARP, a host finds its own IP address; in ARP, a host finds the MAC address of a
remote machine.
• Request/response: In RARP, the MAC address is known and the IP address is requested; in
ARP, the IP address is known and the MAC address is requested.
• Operation codes: RARP uses the value 3 for requests and 4 for responses; ARP uses 1 for
requests and 2 for responses.

4. BOOTSTRAP PROTOCOL (BOOTP)

✓ As soon as a device connects to the network, the Bootstrap Protocol (BOOTP) immediately
provides each member of the connection with a distinct IP address for authentication and
identification purposes. This helps the server accelerate connection requests and data
transfers.
✓ BOOTP employs a special IP address method to instantly assign a fully distinct IP
address to each system connected to the network. BOOTP is a broadcast protocol since
it must transmit messages to all of the network's active hosts in order to receive
responses or resources. The name BOOTP refers to the Bootstrap procedure that occurs
when a computer first starts up.
✓ The connection time between the server and the client is reduced as a result. Even with
very little information, the client can begin the process of downloading and updating its boot
code.
✓ BOOTP servers generally run the bootpd daemon to answer requests received from clients;
a BOOTP gateway (relay) can deliver the configuration data to clients without broadcasting.
The server's file /etc/inet/bootptab contains the BOOTP configuration database.
✓ In BOOTP, clients and servers exchange messages using the User Datagram Protocol (UDP)
to transmit, receive and manage data from the other nodes connected to the network; the later
Dynamic Host Configuration Protocol (DHCP) was developed as an extension of BOOTP.
✓ The server and client only require an IP address and a gateway address to successfully
connect in a BOOTP connection. The server and client often share the same LAN in a
BOOTP network, and the routers that are used in the network must enable BOOTP
bridging.
✓ A Bootstrap Protocol network is a good illustration of a network using a TCP/IP
configuration. Whenever a computer on the network sends a request to the server, BOOTP
uses that computer's individual IP address to resolve the request quickly.

5. UNICAST ROUTING – LINK STATE ROUTING

Unicast means the transmission from a single sender to a single receiver. It is a point-to-point
communication between the sender and receiver. There are various unicast protocols such as
TCP, HTTP, etc.

✓ TCP is the most commonly used unicast protocol. It is a connection-oriented protocol that
relies on acknowledgment from the receiver side.
✓ HTTP stands for HyperText Transfer Protocol. It is an object-oriented protocol
for communication.
Major Protocols of Unicast Routing

1. Distance Vector Routing: Distance-vector routers use a distributed algorithm to compute
their routing tables.
2. Link-State Routing: Link-State routing uses link-state routers to exchange
messages that allow each router to learn the entire network topology.
3. Path-Vector Routing: It is a routing protocol that maintains the path that is
updated dynamically.

Link State Routing

Link state routing is the second family of routing protocols. While distance-vector routers
use a distributed algorithm to compute their routing tables, link-state routing uses link-state
routers to exchange messages that allow each router to learn the entire network topology.
Based on this learned topology, each router is then able to compute its routing table by using
the shortest path computation.

Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork. There are three keys to
understanding the link state routing algorithm:
✓ Knowledge about the neighborhood: Instead of sending its routing table, a router sends
information about its neighborhood only. A router broadcasts the identities and costs of its
directly attached links to the other routers.

✓ Flooding: Each router sends the information about its neighborhood to every other router
on the internetwork. This process is known as flooding. Every router that receives the packet
sends copies to all of its neighbors, so that finally each and every router receives a copy of
the same information.

✓ Information Sharing: A router sends the information to every other router only when a
change occurs in its information.

Link state routing has two phases:

1. Reliable Flooding: Initial state – each node knows the cost of its neighbors.
Final state – each node knows the entire graph.
2. Route Calculation: Each node uses Dijkstra's algorithm on the graph to calculate the
optimal routes to all nodes. The link state routing algorithm is therefore also known as
Dijkstra's algorithm, which finds the shortest path from one node to every other node in
the network.

Features of Link State Routing Protocols

• Link State Packet: A small packet that contains routing information.
• Link-State Database: A collection of information gathered from the link-state packets.
• Shortest Path First Algorithm (Dijkstra algorithm): A calculation performed on the
database that results in the shortest path.
• Routing Table: A list of known paths and interfaces.

Calculation of Shortest Path

To find the shortest path, each node needs to run the famous Dijkstra algorithm. Let us
understand how we can find the shortest path using an example.
Illustration
To understand the Dijkstra Algorithm, let’s take a graph and find the shortest path from the
source to all nodes.
Note: We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a
value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used
to store the shortest distance values of all vertices.
Consider the below graph and src = 0.

Shortest Path Calculation – Step 1

STEP 1: The set sptSet is initially empty and the distances assigned to the vertices are {0,
INF, INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity. Now pick the vertex
with the minimum distance value. Vertex 0 is picked and included in sptSet, so sptSet
becomes {0}. After including 0 in sptSet, update the distance values of its adjacent vertices.
The adjacent vertices of 0 are 1 and 7, and their distance values are updated to 4 and 8.
The following subgraph shows the vertices and their distance values. Vertices included in the
SPT are shown in green.
Shortest Path Calculation – Step 2

STEP 2: Pick the vertex with the minimum distance value that is not already included in SPT
(not in sptSet). Vertex 1 is picked and added to sptSet, so sptSet now becomes {0, 1}. Update
the distance values of the adjacent vertices of 1. The distance value of vertex 2 becomes 12.

Shortest Path Calculation – Step 3

STEP 3: Pick the vertex with the minimum distance value that is not already included in SPT
(not in sptSet). Vertex 7 is picked, so sptSet now becomes {0, 1, 7}. Update the distance
values of the adjacent vertices of 7. The distance values of vertices 6 and 8 become finite
(9 and 15, respectively).

Shortest Path Calculation – Step 4


STEP 4: Pick the vertex with the minimum distance value that is not already included in SPT
(not in sptSet). Vertex 6 is picked, so sptSet now becomes {0, 1, 7, 6}. Update the distance
values of the adjacent vertices of 6. The distance values of vertices 5 and 8 are updated.

Shortest Path Calculation – Step 5

We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we
get the following Shortest Path Tree (SPT).

Shortest Path Calculation – Step 6

Characteristics of Link State Protocol

• It requires a large amount of memory.
• Shortest path computations require many CPU cycles.
• It uses relatively little network bandwidth and reacts quickly to topology changes.
• All items in the database must be sent to neighbors to form link-state packets.
• All neighbors must be trusted in the topology.
• Authentication mechanisms can be used to avoid undesired adjacencies and problems.
• No split-horizon techniques are possible in link-state routing.

Protocols of Link State Routing

1. Open Shortest Path First (OSPF)
2. Intermediate System to Intermediate System (IS-IS)

Open Shortest Path First (OSPF): Open Shortest Path First (OSPF) is a unicast routing
protocol developed by a working group of the Internet Engineering Task Force (IETF). It is
an intradomain routing protocol. It is an open, publicly specified protocol. It is similar to
Routing Information Protocol (RIP). OSPF is a classless routing protocol, which means that
its updates include the subnet mask of each route it knows about, thus enabling variable-length
subnet masks. With variable-length subnet masks, an IP network can be broken into many
subnets of various sizes, which gives network administrators extra configuration flexibility.
These updates are multicast to specific addresses (224.0.0.5 and 224.0.0.6). OSPF is
implemented as a program in the network layer using the services provided by the Internet
Protocol. The IP datagram that carries OSPF messages sets the value of the protocol field to
89. OSPF is based on the SPF algorithm, which is sometimes referred to as the Dijkstra
algorithm.

Intermediate System to Intermediate System (IS-IS): Intermediate System to Intermediate
System is a standardized link-state protocol that was developed as the definitive routing
protocol for the OSI Model. IS-IS uses System ID to identify a router on the network. IS-IS
doesn’t require IP connectivity between the routers as updates are sent via CLNS instead of
IP.
Numerical-1: A and B are the only two stations on an Ethernet. Each has a steady queue of
frames to send. Both A and B attempt to transmit a frame, collide, and A wins the first backoff
race. At the end of this successful transmission by A, both A and B attempt to transmit and
collide. What is the probability that A wins the second backoff race?

Solution:

For access control, Ethernet (802.3) uses CSMA/CD. In CSMA/CD, a collision is resolved by
the binary exponential backoff algorithm: after a collision, each station involved waits K slot
times before retransmitting, where K is chosen uniformly at random from {0, 1, ..., 2^n − 1}
and n is the number of collisions that station has experienced for its current frame.

A has successfully transmitted one frame, while B has already undergone its first collision.
When A and B collide again, it is the first collision for A's new frame but the second collision
for B's frame.

Therefore, A chooses its wait from {0, 1}, and B chooses its wait from {0, 1, 2, 3} (since this
is B's second collision).

Wait by A   Wait by B   Winner (who can transmit)
0           0           Collision
0           1           A
0           2           A
0           3           A
1           0           B
1           1           Collision
1           2           A
1           3           A

Among the 8 equally likely possibilities, A wins 5 times, so the probability is 5/8 = 0.625
(see the enumeration sketch below).
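The eight cases can also be enumerated directly; the snippet below simply repeats the counting argument in Python:

```python
from itertools import product

# A is on the 1st collision for its new frame: K_A is chosen from {0, 1}.
# B is on its 2nd collision:                   K_B is chosen from {0, 1, 2, 3}.
wins = sum(1 for ka, kb in product(range(2), range(4)) if ka < kb)
total = 2 * 4

print(f"{wins}/{total} = {wins / total}")   # 5/8 = 0.625
```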
Numerical-2: An organization is granted the block 16.0.0.0/8. The administrator wants to
create 500 fixed-length subnets. (i) Find the subnet mask. (ii) Find the number of addresses in
each subnet.

Solution:

(i) Mask = /17

(ii) 32,768 addresses per subnet.

(i) The administrator wants 500 fixed-length subnets, and the number of subnets must be a
power of 2. The number of extra 1s added to the prefix is s = ⌈log2 500⌉ = 9, because
2^9 = 512 is the smallest power of 2 that is at least 500.

So 9 extra 1s are added to the site prefix, giving 512 possible subnets. The subnet prefix is
then /8 + 9 = /17, and therefore the subnet mask is 255.255.128.0.

(ii) The site has 2^(32−8) = 2^24 = 16,777,216 addresses. Each subnet has
2^(32−17) = 2^15 = 32,768 addresses.
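The same arithmetic can be reproduced with a few lines of Python (ordinary integer arithmetic, nothing protocol-specific):

```python
import math

prefix = 8                    # original block: 16.0.0.0/8
subnets_needed = 500

extra_bits = math.ceil(math.log2(subnets_needed))   # 9, since 2**9 = 512 >= 500
subnet_prefix = prefix + extra_bits                  # /17
addresses_per_subnet = 2 ** (32 - subnet_prefix)     # 2**15 = 32768

# Build the dotted-decimal mask for the /17 prefix.
mask_int = (0xFFFFFFFF << (32 - subnet_prefix)) & 0xFFFFFFFF
mask = ".".join(str((mask_int >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(subnet_prefix, mask, addresses_per_subnet)     # 17 255.255.128.0 32768
```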
Numerical-3: Consider the network of Figure. Distance vector routing is used, and the
following vectors have just come in to router C: from B: (5, 0, 8, 12, 6, 2); from D: (16, 12,
6, 0, 9, 10); and from E: (7, 6, 3, 9, 0, 4). The cost of the links from C to B, D, and E,
are 6, 3, and 5, respectively. What is C’s new routing table? Give both the outgoing
line to use and the cost.

Solution:

C's table (-,6,0,3,5,-)

B's table (5,0,8,12,6,2)

Going from C via B [add 6 to B's table] gives (11,6,14,18,12,8)

D's table (16,12,6,0,9,10)

Going from C via D [add 3 to D's table] gives (19,15,9,3,12,13)

E's table (7,6,3,9,0,4)

Going from C via E [add 5 to E's table] gives (12,11,8,14,5,9)

Min from each table is taken

C's new table (11,6,0,3,5,8)

Outgoing line (B, B, -, D, E, B)
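The update rule behind this solution is simply "cost of the link to a neighbor plus the neighbor's advertised distance, then take the minimum over all neighbors." The sketch below repeats that computation with the numbers from the problem, assuming the vectors list destinations in the order A, B, C, D, E, F:

```python
# Distance vectors received at C, destination order A, B, C, D, E, F (assumed).
vectors = {
    "B": [5, 0, 8, 12, 6, 2],
    "D": [16, 12, 6, 0, 9, 10],
    "E": [7, 6, 3, 9, 0, 4],
}
link_cost = {"B": 6, "D": 3, "E": 5}   # cost of C's direct links to its neighbors

nodes = ["A", "B", "C", "D", "E", "F"]
new_table = []
for i, node in enumerate(nodes):
    if node == "C":                     # distance to itself is 0, no outgoing line
        new_table.append((0, "-"))
        continue
    cost, next_hop = min((link_cost[n] + vectors[n][i], n) for n in vectors)
    new_table.append((cost, next_hop))

print(new_table)
# [(11, 'B'), (6, 'B'), (0, '-'), (3, 'D'), (5, 'E'), (8, 'B')]
```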


Ethernet Frame Format (IEEE 802.3)

The IEEE 802.3 standard defines the fundamental frame format that is required for all MAC
implementations. However, several optional formats extend the protocol's core functionality.

Preamble and SFD, which operate at the physical layer, begin an Ethernet frame. The packet's
payload follows the Ethernet header, which includes the MAC addresses for the source and
destination. CRC, the final field, is utilized to find errors. Let's now examine each section of
the fundamental frame format.

1. PREAMBLE - Ethernet frames begin with a 7-byte preamble. This is a sequence of
alternating 0s and 1s that denotes the beginning of the frame and enables bit synchronization
between the sender and receiver. The preamble (PRE) was initially developed to accommodate
the loss of a few bits as a result of signal delays; however, in today's high-speed Ethernet the
frame bits are protected without the need for a preamble.
Before the actual frame begins, the PRE alerts the receiver that a frame is about to start and
enables the receiver to lock onto the data stream.
2. Start of Frame Delimiter (SFD) - This 1-byte field is always set to 10101011. SFD
indicates that the next set of bits begins the frame, starting with the destination address. The
preamble is frequently quoted as 8 bytes, since the SFD is sometimes treated as part of the
PRE. The SFD warns the station or stations that this is the last chance for synchronization.
3. Destination Address - This 6-Byte element contains the MAC address of the device
for which the data is intended.
4. Source Address - This 6-byte element contains the source machine's MAC address.
Since Source Address is always a unique address (Unicast), 0 is always the least
significant bit of the first byte.
5. Length - A 2-byte field that indicates the length of the frame's data field. In principle this
16-bit field could hold values from 0 to 65535, but due to some inherent constraints of
Ethernet, length values greater than 1500 are not permitted.
6. Data - This area, sometimes referred to as the Payload, is where the real data is placed.
If Internet Protocol is utilised via Ethernet, both the IP header and data will be placed
here. The longest possible piece of data might be 1500 bytes long. If the data length is
less than the minimum length, which is 46 bytes, padding 0's are appended to make up
the difference.
7. Cyclic Redundancy Check (CRC) - CRC is a field of 4 bytes. The data in this field is
a 32-bit hash code created using the fields for the destination address, source address,
length, and data. Data is damaged if the checksum calculated by the destination differs
from the checksum value supplied.

Note: Ethernet IEEE 802.3 frames range in size from 64 to 1518 bytes, with a data field of
46 to 1500 bytes. A small header-parsing sketch follows.
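To make the field layout concrete, the sketch below unpacks the 14-byte header (destination MAC, source MAC, length/EtherType) of a hand-crafted frame using Python's struct module; the preamble, SFD and CRC are omitted, and the addresses are invented:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Unpack destination MAC (6 B), source MAC (6 B) and the 2-byte length/type field."""
    dst, src, length_or_type = struct.unpack("!6s6sH", frame[:14])
    mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    # Values up to 1500 are interpreted as a length (IEEE 802.3); larger values as an EtherType.
    kind = "length" if length_or_type <= 1500 else "EtherType"
    return mac(dst), mac(src), kind, hex(length_or_type)

# Made-up frame: broadcast destination, arbitrary source, EtherType 0x0800 (IPv4), short payload.
frame = bytes.fromhex("ffffffffffff" + "001122334455" + "0800") + b"payload bytes..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', 'EtherType', '0x800')
```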

Complete illustration of Extended Ethernet Frame (Ethernet II frame):

A thorough explanation of the IEEE 802.3 basic frame format was provided above. Let's now
look at the expanded Ethernet frame header, which allows for a payload of even more than
1500 bytes.

DA [Destination MAC Address]: 6 bytes

SA [Source MAC Address]: 6 bytes

Type [0x8870 (Ethertype)]: 2 bytes

DSAP [802.2 Destination Service Access Point]: 1 byte

SSAP [802.2 Source Service Access Point]: 1 byte

Ctrl [802.2 Control Field]: 1 byte

Data [Protocol Data]: > 46 bytes

FCS [Frame Checksum]: 4 bytes

Although the Ethernet II frame lacks a length field, the network interface knows the frame
length because it accepts the frame.

Advantages of using Ethernet:


1. Simple to implement
2. Maintenance is Easy
3. Less cost
Flaws of Ethernet:
1. It can't be applied in real-time situations. Data delivery within a certain time frame is
necessary for real-time applications. Due to the high likelihood of collisions, Ethernet
is unreliable. The delivery of the data to its destination may be delayed due to an
increased number of collisions.
2. Applications requiring interaction cannot be utilized with it. Even extremely little
amounts of data must be delivered for interactive apps like chatting. The minimum data
length required by Ethernet is 46 bytes.
3. It is incompatible with client-server applications. Applications that use client-server
architecture demand that the server is prioritised over the client. Priorities cannot be set
in Ethernet.

Shortest Paths from Source to all Vertices using Dijkstra’s Algorithm (Link State
Routing protocol)

Numerical-4: Given a weighted graph and a source vertex, find the shortest paths from the
source to all the other vertices in the graph.

Note: The given graph does not contain any negative edge.
Input: src = 0, the graph is shown below.

Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8

Finally, we get the following Shortest Path Tree (SPT).
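A compact Dijkstra implementation is sketched below. The figure with the graph is not reproduced here, so the adjacency list is reconstructed from the distances and paths given in the explanation above (for example, edge 0-1 has weight 4 and edge 0-7 has weight 8); only the edges needed for those shortest paths are included, and any remaining edges of the original figure are not assumed. With that reconstruction the sketch prints the same output, 0 4 12 19 21 11 9 8 14:

```python
import heapq

def dijkstra(graph, src):
    """Shortest distances from src using a min-heap (lazily skipping stale entries)."""
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                 # stale entry: u was already settled with a shorter path
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Undirected edges reconstructed from the distances/paths listed above (assumed weights).
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (2, 3, 7), (2, 8, 2),
         (7, 6, 1), (6, 5, 2), (5, 4, 10)]
graph = {v: [] for v in range(9)}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

dist = dijkstra(graph, 0)
print(*[dist[v] for v in range(9)])     # 0 4 12 19 21 11 9 8 14
```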


TCP Segment Header Format

A TCP segment consists of data bytes to be sent and a header that is added to the data by
TCP as shown:

The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are used for
options. If there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes.
A parsing sketch follows the field list below.
Header fields:

• Source Port Address –
A 16-bit field that holds the port address of the application that is sending the data segment.

• Destination Port Address –
A 16-bit field that holds the port address of the application in the host that is receiving the
data segment.
• Sequence Number –
A 32-bit field that holds the sequence number, i.e, the byte number of the first
byte that is sent in that particular segment. It is used to reassemble the message
at the receiving end of the segments that are received out of order.

• Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e, the byte number that
the receiver expects to receive next. It is an acknowledgement for the previous
bytes being received successfully.

• Header Length (HLEN) –
A 4-bit field that indicates the length of the TCP header as a number of 4-byte words. If the
header is 20 bytes (the minimum TCP header length), this field holds 5 (because 5 × 4 = 20);
for the maximum header length of 60 bytes, it holds 15 (because 15 × 4 = 60). Hence, the
value of this field is always between 5 and 15.

• Control flags –
These are 6 1-bit control bits that control connection establishment, connection
termination, connection abortion, flow control, mode of transfer etc. Their
function is:
• URG: Urgent pointer is valid
• ACK: Acknowledgement number is valid (used in case of cumulative
acknowledgement)
• PSH: Request for push
• RST: Reset the connection
• SYN: Synchronize sequence numbers
• FIN: Terminate the connection
• Window size –
This field tells the window size of the sending TCP in bytes.

• Checksum –
This field holds the checksum for error control. It is mandatory in TCP as
opposed to UDP.

• Urgent pointer –
This field (valid only if the URG control flag is set) is used to point to data that
is urgently required that needs to reach the receiving process at the earliest. The
value of this field is added to the sequence number to get the byte number of the
last urgent byte.
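To make the layout concrete, the sketch below unpacks the fixed 20-byte part of a TCP header with Python's struct module; the field values in the example are invented:

```python
import struct

def parse_tcp_header(segment: bytes):
    """Unpack the fixed 20-byte TCP header; options (if any) follow these bytes."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # HLEN is counted in 4-byte words
    flags = offset_flags & 0x3F             # URG, ACK, PSH, RST, SYN, FIN bits
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": bin(flags),
            "window": window, "checksum": checksum, "urgent": urgent}

# Invented header: ports 443 -> 51000, HLEN = 5 (20 bytes), SYN and ACK flags set (0b010010).
header = struct.pack("!HHIIHHHH", 443, 51000, 1000, 2000, (5 << 12) | 0b010010,
                     65535, 0, 0)
print(parse_tcp_header(header))
```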

Token Bucket vs Leaky Bucket


Token Bucket and Leaky Bucket are two algorithms used for network traffic shaping and rate

limiting. They help manage the rate of traffic flow in a network, but they do so in slightly

different ways.

Token Bucket Algorithm

• Mechanism: The token bucket algorithm is based on tokens being added to a bucket at a
fixed rate. Each token represents permission to send a certain amount of data. When a packet
(data) needs to be sent, it can only be transmitted if a token is available, which is then
removed from the bucket (a minimal sketch follows this subsection).

• Characteristics:
• Burst Allowance: Can handle bursty traffic because the bucket can store tokens,

allowing for temporary bursts of data as long as there are tokens in the bucket.

• Flexibility: The rate of token addition and the size of the bucket can be adjusted to

control the data rate.

• Example: Think of a video streaming service. The service allows data bursts for fast

initial streaming (buffering) as long as tokens are available in the bucket. Once the

tokens are used up, the streaming rate is limited to the rate of token replenishment.

• Pros:

• Allows for flexibility in handling bursts of traffic.

• Useful for applications where occasional bursts are acceptable.

• Cons:

• Requires monitoring the number of available tokens, which might add complexity.
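A minimal token-bucket sketch is shown below; the class name and parameters are invented for illustration, and a real shaper would track elapsed time instead of taking an explicit refill call:

```python
class TokenBucket:
    """Illustrative token bucket: tokens arrive at a fixed rate; each byte sent costs one token."""

    def __init__(self, rate_tokens_per_sec: float, capacity: float):
        self.rate = rate_tokens_per_sec
        self.capacity = capacity
        self.tokens = capacity            # start full, so an initial burst is allowed

    def refill(self, elapsed_seconds: float):
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_seconds)

    def try_send(self, packet_bytes: int) -> bool:
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # enough tokens: the packet conforms and is sent
            return True
        return False                      # not enough tokens: delay or drop the packet

bucket = TokenBucket(rate_tokens_per_sec=1000, capacity=4000)
print(bucket.try_send(3000))   # True  - burst allowed while tokens remain
print(bucket.try_send(3000))   # False - bucket is nearly empty
bucket.refill(2.0)             # two seconds later, 2000 tokens have been added
print(bucket.try_send(3000))   # True
```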

Leaky Bucket Algorithm

• Mechanism: In the leaky bucket algorithm, packets are added to a queue (bucket) and are
released at a steady, constant rate. If the bucket (buffer) is full, incoming packets are
discarded or queued for later transmission (a minimal sketch follows this subsection).

• Characteristics:

• Smooth Traffic: Ensures a steady, uniform output rate regardless of the input

burstiness.

• Overflow: Can result in packet loss if the bucket overflows.


• Example: Imagine an ISP limiting internet speed. The ISP uses a leaky bucket to

smooth out the internet traffic. Regardless of how bursty the incoming traffic is, the

data flow to the user is at a consistent, predetermined rate. If the data comes in too

fast and the bucket fills up, excess packets are dropped.

• Pros:

• Simple to implement and understand.

• Ensures a steady, consistent flow of traffic.

• Cons:

• Does not allow for much flexibility in handling traffic bursts.

• Can lead to packet loss if incoming rate exceeds the bucket’s capacity.
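A matching leaky-bucket sketch (again with invented names and parameters): packets join a bounded queue and are released at a constant rate, and arrivals that find the queue full are dropped:

```python
from collections import deque

class LeakyBucket:
    """Illustrative leaky bucket: a bounded queue drained at a constant rate."""

    def __init__(self, capacity_packets: int, leak_rate_per_tick: int):
        self.queue = deque()
        self.capacity = capacity_packets
        self.leak_rate = leak_rate_per_tick   # packets released per tick

    def arrive(self, packet) -> bool:
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                          # bucket full: the packet is dropped

    def tick(self):
        """Release at most leak_rate packets, however bursty the arrivals were."""
        count = min(self.leak_rate, len(self.queue))
        return [self.queue.popleft() for _ in range(count)]

bucket = LeakyBucket(capacity_packets=3, leak_rate_per_tick=1)
print([bucket.arrive(p) for p in ("p1", "p2", "p3", "p4")])  # [True, True, True, False]
print(bucket.tick())   # ['p1'] - steady output of one packet per tick
print(bucket.tick())   # ['p2']
```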

Key Differences

• Traffic Burst Handling: Token bucket allows for bursts of data until the bucket’s

tokens are exhausted, making it suitable for applications where such bursts are
common. In contrast, the leaky bucket smooths out the data flow, releasing packets

at a steady, constant rate.

• Use Cases: Token bucket is ideal for applications that require flexibility and can

tolerate bursts, like video streaming. Leaky bucket is suited for scenarios where a

steady, continuous data flow is required, like voice over IP (VoIP) or real-time

streaming.
Conclusion

Choosing between Token Bucket and Leaky Bucket depends on the specific requirements for

traffic management in a network. Token Bucket offers more flexibility and is better suited for

bursty traffic scenarios, while Leaky Bucket is ideal for maintaining a uniform output rate.
