CN Unit III
Network layer
• The network layer is mainly concerned with getting packets from the source to the
destination; its core responsibilities are routing and congestion control.
• Before learning about design issues in the network layer, let’s first look at its various
functions.
• Addressing:
The network layer maintains the source and destination addresses in the packet header and
uses this addressing to identify the various devices in the network.
• Packetizing:
This is performed by the Internet Protocol (IP). The network layer encapsulates the data
received from its upper layer into packets.
• Routing:
This is the most important function. The network layer chooses the best
path for data transmission from source to destination.
• Internetworking:
It provides a logical connection across multiple networks and devices.
Design Issues in Network Layer:
The network layer must deal with several design issues; they are described as follows:
1. Store and Forward Packet Switching
2. Services provided to Transport Layer
3. Implementation of Connection Oriented service
4. Implementation of Connectionless Service
Working Principle
The node which has a packet to send delivers it to the nearest router. The packet is stored in
the router until it has fully arrived and its checksum has been verified for error detection. Once this
is done, the packet is transmitted to the next router. The same process continues in each router until
the packet reaches its destination.
In the above diagram, we can see that the Internet Service Provider (ISP) has six routers (A to F)
connected by transmission lines. There are two hosts, host H1 is connected to router A, while host
H2 is connected to router D. Suppose that H1 wants to send a data packet to H2. H1 sends the
packet to router A. The packet is stored in router A until it has arrived fully. Router A verifies the
checksum using CRC (cyclic redundancy check) code. If there is a CRC error, the packet is
discarded, otherwise it is transmitted to the next hop, here router F. The same process is followed
by router F which then transmits the packet to router D. Finally router D delivers the packet to host
H2.
The network layer provides services to its immediate upper layer, namely the transport layer,
through the network-transport layer interface. The two types of services provided are:
Connection-Oriented Service - In this service, a path is set up between the source and the
destination, and all the data packets belonging to a message are routed along this path.
Connectionless Service - In this service, each packet of the message is treated as an
independent entity and is routed individually from the source to the destination.
Routing Algorithms
• To transfer packets from the source to the destination, the network layer must
determine the best route over which the packets can be transmitted.
• Whether the network layer provides datagram service or virtual circuit service, its main job
is to provide the best route. The routing protocol performs this job.
• A routing protocol implements a routing algorithm that finds the best path from the source to
the destination. The best path is the least-cost path from source to destination.
• Routing is the process of forwarding packets from source to destination; the best
route for the packets is determined by the routing algorithm.
Classification of Routing Algorithms
Routing algorithms are divided into two categories:
Adaptive Routing algorithm
Non-adaptive Routing algorithm
Flooding
When a data packet arrives at a router, it is sent out on all the outgoing links except the one on
which it arrived.
Flooding obviously generates vast numbers of duplicate packets; in fact, an infinite number
unless some measures are taken to damp the process.
Using the flooding technique:
An incoming packet to A will be sent to B, C and D.
B will send the packet to C and E.
C will send the packet to B, D and F.
D will send the packet to C and F.
E will send the packet to F.
F will send the packet to C and E.
One such measure is to have a hop counter contained in the header of each packet, which is
decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally,
the hop counter should be initialized to the length of the path from source to destination. A variation
of flooding that is slightly more practical is selective flooding. In this algorithm the routers do not
send every incoming packet out on every line, only on those lines that are going approximately in
the right direction. Flooding is not practical in most applications.
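The hop-counter variant can be sketched in a few lines of Python. This is a minimal illustration, not a router implementation; it reuses the six-node topology from the example above, and the function name flood is hypothetical.

```python
# Minimal sketch of flooding with a hop counter (illustrative only).
# Topology from the example above: each node lists its outgoing links.
topology = {
    'A': ['B', 'C', 'D'],
    'B': ['C', 'E'],
    'C': ['B', 'D', 'F'],
    'D': ['C', 'F'],
    'E': ['F'],
    'F': ['C', 'E'],
}

def flood(node, arrived_from, hops_left):
    """Forward a packet on every link except the one it arrived on,
    decrementing the hop counter; discard when the counter reaches zero."""
    if hops_left == 0:
        return  # packet discarded: hop counter exhausted
    for neighbor in topology[node]:
        if neighbor == arrived_from:
            continue  # never send back on the incoming link
        print(f"{node} -> {neighbor} (hops left: {hops_left - 1})")
        flood(neighbor, node, hops_left - 1)

# A packet injected at A with a hop counter of 3:
flood('A', arrived_from=None, hops_left=3)
```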
Random Walk
• In a random walk, a packet is sent by the node to one of its neighbors chosen at random. An
advantage of using random walks is that they use alternative routes very efficiently.
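A minimal random-walk sketch follows, reusing the same example topology; the hop limit and function name are assumptions, and delivery within the limit is not guaranteed.

```python
import random

# Minimal random-walk sketch (illustrative only): each step forwards the
# packet to one randomly chosen neighbor until the destination is reached
# or a hop limit expires.
topology = {
    'A': ['B', 'C', 'D'], 'B': ['C', 'E'], 'C': ['B', 'D', 'F'],
    'D': ['C', 'F'], 'E': ['F'], 'F': ['C', 'E'],
}

def random_walk(source, destination, max_hops=20):
    node, path = source, [source]
    for _ in range(max_hops):
        if node == destination:
            return path
        node = random.choice(topology[node])  # pick any neighbor at random
        path.append(node)
    return path if node == destination else None  # None: hop limit reached

print(random_walk('A', 'F'))
```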
The Optimality Principle
One can make a statement about optimal routes without regard to network topology or traffic. The
optimality principle states that if router J is on the optimal path from router I to router K, then the
optimal path from J to K also falls along the same route.
As a consequence, the set of optimal routes from all sources to a given destination forms a tree
rooted at the destination. This is shown here using the number of hops as the distance metric. Note
that other trees with the same path length exist.
Such a tree is called a sink tree. Since it contains no loops, each packet will be delivered in a
bounded number of hops.
Shortest Path Routing
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this node and
make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative.
4. Among the list of tentative nodes:
– Find the node with the smallest cost and make it permanent.
– If a node can be reached by more than one route, then select the route with the smallest
cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent
Finally, destination D is relabeled as D(10, H). The path D-H-F-E-B-A is found by backtracking
the labels:
D(10, H) → H(8, F) → F(6, E) → E(4, B) → B(2, A) → A
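The relabeling procedure above is Dijkstra’s shortest path algorithm. Below is a compact Python sketch; the weighted graph is hypothetical, with link costs chosen so that the labels reproduce the trace above (each label is a (cost, previous node) pair).

```python
import heapq

def dijkstra(graph, root):
    """Return {node: (cost, previous)} labels, as in the trace above.
    graph maps each node to {neighbor: link_cost}."""
    labels = {root: (0, None)}   # best label found so far per node
    heap = [(0, root)]           # tentative nodes, ordered by cost
    permanent = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)      # smallest-cost tentative node becomes permanent
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            # Keep the route with the smaller cumulative cost.
            if neighbor not in labels or new_cost < labels[neighbor][0]:
                labels[neighbor] = (new_cost, node)
                heapq.heappush(heap, (new_cost, neighbor))
    return labels

# Hypothetical graph consistent with the trace (all costs assumed):
graph = {
    'A': {'B': 2, 'C': 5}, 'B': {'A': 2, 'E': 2}, 'C': {'A': 5, 'D': 9},
    'D': {'C': 9, 'H': 2}, 'E': {'B': 2, 'F': 2}, 'F': {'E': 2, 'H': 2},
    'H': {'D': 2, 'F': 2},
}
print(dijkstra(graph, 'A')['D'])  # (10, 'H'), matching D(10, H) above
```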
Hierarchical Routing
As networks grow in size, the routers' routing tables grow proportionally.
Not only is router memory consumed by ever-increasing tables, but more CPU time is
needed to scan them and more bandwidth is needed to send status reports about them.
At a certain point, the network may grow to the point where it is no longer feasible for every
router to have an entry for every other router, so the routing will have to be done
hierarchically, as it is in the telephone network.
When hierarchical routing is used, the routers are divided into what we will call regions.
Each router knows all the details about how to route packets to destinations within its own
region but knows nothing about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group
the regions into clusters, the clusters into zones, the zones into groups, and so on
When a single network becomes very large, an interesting question is ‘‘how many levels
should the hierarchy have?’’
For example, consider a network with 720 routers. If there is no hierarchy, each router needs
720 routing table entries.
If the network is partitioned into 24 regions of 30 routers each, each router needs 30 local
entries plus 23 remote entries for a total of 53 entries.
If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers,
each router needs 10 entries for local routers, 8 entries for routing to other regions within its
own cluster, and 7 entries for distant clusters, for a total of 25 entries
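The arithmetic behind these entry counts can be verified in a couple of lines (assuming, as above, one entry per local router, one per other region in the same cluster, and one per distant cluster):

```python
# Routing-table entries per router for the 720-router example above.

flat = 720                            # no hierarchy: one entry per router

# Two levels: 24 regions of 30 routers each.
two_level = 30 + (24 - 1)             # 30 local + 23 remote regions = 53

# Three levels: 8 clusters, each with 9 regions of 10 routers.
three_level = 10 + (9 - 1) + (8 - 1)  # 10 local + 8 regions + 7 clusters = 25

print(flat, two_level, three_level)   # 720 53 25
```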
Unicasting
In unicast communication, there is one source and one destination. The relationship between the
source and the destination is one-to-one. In this type of communication, both the source and
destination addresses, in the packet, are the unicast addresses assigned to the hosts (or host
interfaces, to be more exact). In the figure, a unicast packet starts from the source S1 and passes
through routers to reach the destination D1. We have shown the networks as links between the
routers to simplify the figure. Note that in unicasting, when a router receives a packet, it forwards
the packet through only one of its interfaces (the one belonging to the optimum path) as defined in
the routing table. The router may discard the packet if it cannot find the destination address in its
routing table.
Multicasting
In multicast communication, there is one source and a group of destinations. The relationship is
one-to-many. In this type of communication, the source address is a unicast address, but the
destination address is a group address, which defines one or more destinations. The group address
identifies the members of the group. Figure shows the idea behind multicasting.
In multicasting, the packet starts from the source S1 and goes to all destinations that belong to group G1. In
multicasting, when a router receives a packet, it may forward it through several of its interfaces.
Broadcast
In broadcast communication, the relationship between the source and the destination is one-to-all.
There is only one source, but all the other hosts are the destinations. The Internet does not explicitly
support broadcasting because of the huge amount of traffic it would create and because of the
bandwidth it would need. Imagine the traffic generated in the Internet if one person wanted to send
a message to everyone else connected to the Internet.
Key Points on Broadcasting
1. Data is sent to all the nodes/stations in the network domain.
2. A special broadcast address exists for every network, which is used to receive a broadcast
message.
3. It generates the most network traffic because the broadcasted message is sent to every node
in the network.
4. It is less secure. A sensitive message shouldn’t be sent to everyone and hence it should be
kept in mind before broadcasting a message.
5. Examples: Address Resolution Protocol (ARP) requests, Dynamic Host Configuration
Protocol (DHCP) requests.
In this type of communication, the source address is a unicast address, but the destination address is
a broadcast address, which defines all destinations. The broadcast address identifies all the
members. The figure shows the idea behind broadcasting. In broadcasting, the packet starts from the
source S1 and goes to all destinations. When a router receives a broadcast packet, it may forward it
through several of its interfaces.
Figure: Broadcasting
Distance Vector Routing
Initialization
The tables in Figure are stable; each node knows how to reach any other node and the cost. At the
beginning, however, this is not the case. Each node can know only the distance between itself and
its immediate neighbors, those directly connected to it. So for the moment, we assume that each
node can send a message to the immediate neighbors and find the distance between itself and these
neighbors. Figure shows the initial tables for each node. The distance for any entry that is not a
neighbor is marked as infinite (unreachable).
Sharing
The whole idea of distance vector routing is the sharing of information between neighbors. Although
node A does not know about node E, node C does. So if node C shares its routing table with A, node
A can also know how to reach node E. On the other hand, node C does not know how to reach node
D, but node A does. If node A shares its routing table with node C, node C also knows how to reach
node D. In other words, nodes A and C, as immediate neighbors, can improve their routing tables if
they help each other. Each node is to send its entire table to the neighbor and let the neighbor decide
what part to use and what part to discard. However, the third column of a table (next stop) is not
useful for the neighbor. When the neighbor receives a table, this column needs to be replaced with
the sender's name. If any of the rows can be used, the next node is the sender of the table. A node
therefore can send only the first two columns of its table to any neighbor. In other words, sharing
here means sharing only the first two columns.
Updating
When a node receives a two-column table from a neighbor, it needs to update its routing table.
1. The receiving node needs to add the cost between itself and the sending node to each value
in the second column.
2. The receiving node needs to add the name of the sending node to each row as the third
column if the receiving node uses information from any row. The sending node is the next
node in the route.
3. The receiving node needs to compare each row of its old table with the corresponding row
of the modified version of the received table. If the next-node entry is different, the receiving
node chooses the row with the smaller cost. If there is a tie, the old one is kept.
Figure shows how node A updates its routing table after receiving the partial table from
node C.
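These three rules can be sketched directly in Python. This is a minimal illustration with hypothetical costs, assuming a routing table maps each destination to a (cost, next hop) pair and a neighbor shares only its two-column table; the function name update_table is an assumption.

```python
INF = float('inf')

def update_table(my_table, neighbor, shared, link_cost):
    """Apply the three simplified update rules above.
    my_table: dest -> (cost, next_hop); shared: dest -> cost (two columns)."""
    for dest, cost in shared.items():
        new_cost = cost + link_cost          # Rule 1: add cost to the sender
        old_cost, _ = my_table.get(dest, (INF, None))
        # Rules 2 and 3: the sender becomes the next hop if its route is
        # cheaper; on a tie, the old row is kept.
        if new_cost < old_cost:
            my_table[dest] = (new_cost, neighbor)

# Node A's table and the two-column table received from C (A-C link = 2):
a_table = {'A': (0, None), 'B': (5, 'B'), 'C': (2, 'C'), 'D': (3, 'D')}
c_shared = {'A': 2, 'B': 4, 'E': 4}
update_table(a_table, 'C', c_shared, link_cost=2)
print(a_table)  # A now reaches E via C at cost 6; B stays at 5 (5 < 2+4)
```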
When to update?
Periodic Update: A node sends its routing table, normally every 30 s, in a periodic update.
The period depends on the protocol that is using distance vector routing.
Triggered Update: A node sends its two-column routing table to its neighbors whenever there
is a change in its routing table. This is called a triggered update.
The main issue with Distance Vector Routing (DVR) protocols is routing loops, since the Bellman-
Ford algorithm cannot prevent them. A routing loop in a DVR network causes the count-to-
infinity problem. Routing loops usually occur when an interface goes down or two routers send
updates at the same time.
Consider the above diagram; in this setup each router will have entries for the others. Router
A will infer that it can reach C at a cost of 2 units, and B will infer that it can reach C at a cost of 1
unit.
Consider the case in the above diagram, where the connection between B and C gets disconnected.
In this case, B will know that it cannot get to C at a cost of 1 anymore and update its table
accordingly. However, it can be possible that A sends some information to B that it is possible to
reach C from A at a cost of 2. Then, since B can reach A at a cost of 1, B will erroneously update its
table that it can reach C via A at a cost of 1 + 2 = 3 units. A will then receive updates from B and
update its costs to 4, and so on. Thus, the process enters into a loop of bad feedback and the cost
shoots towards infinity. This entire situation is called the Count to Infinity problem.
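The feedback loop described above can be reproduced in a few lines, using the costs from the example (the A-B link costs 1, and A’s stale route to C costs 2). Each iteration of the loop is one exchange of updates after the B-C link fails:

```python
# Count-to-infinity sketch: the B-C link has just failed.
# A still advertises a stale route to C at cost 2 (which went via B).
a_to_c = 2                  # A's believed cost to C
b_to_c = float('inf')       # B has lost its direct route to C

for round_no in range(1, 6):
    b_to_c = 1 + a_to_c     # B routes to C via A (A-B link cost 1)
    a_to_c = 1 + b_to_c     # A then updates its route to C via B
    print(f"round {round_no}: B->C = {b_to_c}, A->C = {a_to_c}")
# round 1: B->C = 3, A->C = 4, and the costs climb toward infinity.
```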
Link State Routing
Node A knows that it is connected to node B with metric 5, to node C with metric 2, and to
node D with metric 3.
Node C knows that it is connected to node A with metric 2, to node B with metric 4, and to
node E with metric 4.
Node D knows that it is connected only to node A with metric 3. And so on.
Building Routing Tables:
In link state routing, four sets of actions are required to ensure that each node has the routing table
showing the least-cost node to every other node.
a) Creation of the states of the links by each node, called the link state packet (LSP).
b) Dissemination of LSPs to every other router, called flooding, in an efficient and reliable way.
c) Formation of a shortest path tree for each node.
d) Calculation of a routing table based on the shortest path tree (Dijkstra algorithm)
The Dijkstra algorithm creates a shortest path tree from a graph. The algorithm divides the nodes
into two sets: tentative and permanent. It finds the neighbors of a current node, makes them
tentative, examines them, and if they pass the criteria, makes them permanent.
Let us apply the algorithm to node A of our sample graph in Figure below. To find the shortest path
in each step, we need the cumulative cost from the root to each node, which is shown next to the
node. The following shows the steps. At the end of each step, we show the permanent (filled
circles) and the tentative (open circles) nodes and lists with the cumulative costs.
Calculation of Routing Table from Shortest Path Tree
Each node uses its shortest path tree to construct its routing table. The routing table shows the cost
of reaching each node from the root. The table shows the routing table for node A.
Congestion Control
Congestion
Congestion is a situation in communication networks in which too many packets are present in a
part of the subnet, causing performance to degrade.
Congestion in a network may occur when the load on the network (i.e. the number of packets sent
to the network) is greater than the capacity of the network (i.e. the number of packets a network can
handle.)
Load on the network > Capacity of the network
In other words when too much traffic is offered, congestion sets in and performance
degrades sharply.
Open Loop Congestion Control
In open loop congestion control, policies are applied to prevent congestion before it happens. These
policies include the following.
Retransmission Policy
The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted.
However, retransmission in general may increase congestion in the network, so a good
retransmission policy is needed to prevent it.
An effective retransmission policy and well-designed retransmission timers help prevent
congestion.
Window Policy
The type of window at the sender’s side may also affect congestion.
In the Go-Back-N window, several packets are resent, although some packets may have been
received successfully at the receiver side.
This duplication may increase congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the specific
packet that may have been lost.
Acknowledgement Policy
Since acknowledgements are also part of the load on the network, the acknowledgment
policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to acknowledgment.
The receiver may send an acknowledgement for N packets rather than sending an
acknowledgement for each single packet.
The receiver should send an acknowledgment only if it has a packet to send or a timer
expires.
Discarding Policy
A router may discard corrupted or less sensitive packets when congestion is likely to
happen.
Such a discarding policy may prevent congestion and at the same time may not harm the
integrity of the transmission.
In case of audio file transmission, routers can discard less sensitive packets to prevent
congestion and also maintain the quality of the audio file.
Admission Policy
According to the admission policy, a new virtual-circuit connection request is not accepted if the
request may cause congestion.
A router can deny establishing a virtual circuit connection if there is congestion in the
network or if there is a possibility of future congestion.
Closed Loop Congestion Control
Closed loop congestion control mechanisms try to remove the congestion after it happens.
The various methods used for closed loop congestion control are:
Backpressure
Choke Packet
Implicit Signaling
Explicit Signaling
Backpressure
• Backpressure is a node-to-node congestion control technique that starts with a node and
propagates in the opposite direction of the data flow.
• The backpressure technique can be applied only to virtual circuit networks.
• In such a virtual circuit network, each node knows the upstream node from which a data flow
is coming.
• In this method of congestion control, the congested node stops receiving data from the
immediate upstream node or nodes.
• As shown in the figure, node 3 is congested; it stops receiving packets and informs its
upstream node 2 to slow down. Node 2, in turn, may become congested and inform node 1 to
slow down. Node 1 in turn may become congested and inform the source node to slow down.
In this way the congestion is alleviated: the pressure on node 3 is moved backward to
the source to remove the congestion.
Choke Packet
• In this method of congestion control, the congested router or node sends a special type of
packet called a choke packet to the source to inform it about the congestion.
• The choke packet technique can be applied to both virtual circuit networks and datagram
networks.
• Here, the congested node does not inform its upstream node about the congestion, as is done
in the backpressure method.
• In the choke packet method, the congested node sends a warning directly to the source station,
i.e. the intermediate nodes through which the packet has traveled are not warned.
Implicit Signaling
• In implicit signaling, there is no communication between the congested node or nodes
and the source.
• The source guesses that there is congestion somewhere in the network when it does not
receive any acknowledgment.
• Therefore the delay in receiving an acknowledgment is interpreted as congestion in the
network.
• On sensing this congestion, the source slows down.
Explicit Signaling
• In this method, the congested nodes explicitly send a signal to the source or destination
to inform about the congestion.
• Explicit signaling is different from the choke packet method.
• In the choke packet method, a separate packet is used for this purpose, whereas in the explicit
signaling method, the signal is included in the packets that carry data.
• Explicit signaling can occur in either the forward direction or the backward direction
In backward signaling, a bit is set in an acknowledgment moving in the direction opposite to
the congestion. This bit warns the source about the congestion and informs the source to slow
down.
In forward signaling, a bit is set in a packet moving in the direction of congestion. This bit
warns the destination about the congestion. The receiver in this case uses policies such as
slowing down the acknowledgements to remove the congestion.
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network.
Two techniques can shape traffic:
• Leaky Bucket
• Token Bucket
Leaky Bucket
• Suppose we have a bucket into which we pour water at a varying rate, but we need the water
to flow out at a fixed rate; for this we make a hole at the bottom of the bucket.
• The hole ensures that the water flows out at a fixed rate, and if the bucket is full, any
additional water spills over and is lost.
• The input rate can vary, but the output rate remains constant.
• Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic.
• Bursty chunks are stored in the bucket and sent out at an average rate.
• In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a
host.
• The use of the leaky bucket shapes the input traffic to make it conform to this
commitment.
• In Figure the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24
Mbits of data.
• The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data.
• In all, the host has sent 30 Mbits of data in 10 s.
• The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the
same 10 s.
• Without the leaky bucket, the beginning burst may have hurt the network by consuming
more bandwidth than is set aside for this host.
• We can also see that the leaky bucket may prevent congestion.
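The numbers above can be checked with a short simulation. This is a simplified sketch that ignores bucket capacity and packet boundaries, counting only Mbits per second:

```python
# Leaky bucket sketch with the numbers from the example above:
# the host sends 12 Mbps for 2 s, is silent for 5 s, then 2 Mbps for 3 s;
# the bucket leaks at a constant 3 Mbps.
arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]  # Mbits arriving each second
LEAK_RATE = 3                                 # Mbits sent per second

bucket = 0
for second, arriving in enumerate(arrivals):
    bucket += arriving                   # bursty input is stored
    sent = min(bucket, LEAK_RATE)        # output never exceeds the leak rate
    bucket -= sent
    print(f"t={second}s in={arriving} out={sent} queued={bucket}")
# Over the 10 s, all 30 Mbits go out at a smooth 3 Mbps.
```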
Token Bucket
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the
traffic is.
So, to deal with bursty traffic, we need a more flexible algorithm so that data is not lost.
One such approach is the token bucket algorithm, which proceeds in the following steps.
Step 1 − At regular intervals, tokens are added to the bucket.
Step 2 − The bucket has a maximum capacity; if a token arrives when the bucket is full, it is
discarded.
Step 3 − If a packet is ready to be sent, a token is removed from the bucket, and the packet is sent.
Step 4 − If there is no token in the bucket, the packet cannot be sent.
• In below Figure (a) the bucket holds two tokens, and three packets are waiting to be sent out
of the interface.
• In below Figure (b) two packets have been sent out by consuming two tokens, and one packet
is still left.
• Compared to the leaky bucket, the token bucket algorithm is less restrictive, which means it
allows more traffic.
• The limit of busyness is restricted by the number of tokens available in the bucket at a
particular instant of time.
• The implementation of the token bucket algorithm is easy − a variable is used to count the
tokens.
• For every t seconds the counter is incremented and then it is decremented whenever a packet
is sent.
• When the counter reaches zero, no further packet is sent out.
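A counter-based sketch of the token bucket follows, using the two-token, three-packet situation from the figure; the class name and the capacity value are assumptions:

```python
class TokenBucket:
    """Minimal token bucket sketch: a counter tracks the tokens."""

    def __init__(self, capacity, tokens=0):
        self.capacity = capacity
        self.tokens = tokens

    def tick(self):
        # Called every t seconds: the counter is incremented,
        # but tokens beyond the bucket capacity are discarded.
        self.tokens = min(self.capacity, self.tokens + 1)

    def try_send(self, queue):
        # A packet is sent only if a token can be removed for it.
        while queue and self.tokens > 0:
            self.tokens -= 1
            print("sent", queue.pop(0))

bucket = TokenBucket(capacity=2, tokens=2)  # two tokens, as in Figure (a)
waiting = ['p1', 'p2', 'p3']                # three packets waiting
bucket.try_send(waiting)  # p1 and p2 go out, consuming both tokens (Fig. b)
bucket.tick()             # after t seconds a new token is added
bucket.try_send(waiting)  # now p3 can be sent
```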
Congestion control for virtual circuit subnets
With admission control, once congestion has been signaled, no more virtual circuits are set up until
the problem has gone away. This is a crude approach, but it is simple and easy to implement.
Another way is to route new connections around the problem area. The subnet is "redrawn", leaving
the congested routers and all their lines out, and the best route for a new connection is then
determined in that redrawn subnet. A step further is to try to avoid routers that are directly
connected to the congested routers.
Another strategy is to negotiate an agreement between the host and the subnet when the connection
is set up. This agreement would specify the volume and shape of the traffic, the quality of service
and other parameters. To keep its part of the agreement, the subnet will typically reserve resources
along the path where the VC is set up. These can include table and buffer space in the routers and
bandwidth on the lines. In this way congestion can be avoided.
Congestion control for datagram subnets
1. Warning bit
2. Choke packets
3. Hop-by-hop choke packets
4. Load shedding
Warning bit
A special bit (warning bit) in the packet header is set by the router to warn the source when
congestion is detected.
• The bit is copied and piggy-backed on the ACK and sent to the sender.
• The sender monitors the number of ACK packets it receives with the warning bit set and adjusts
its transmission rate accordingly.
• Algorithm at the source:
As long as warning bits arrive, reduce traffic.
When fewer warning bits arrive, increase traffic.
Choke packets
A router can also directly send a choke packet back to the source host of a packet send out over a too busy
line. That packet is tagged so that it will not generate more choke packets on its way. When the source host
receives the choke packet, it reduces its rate to the specified destination by a certain fraction. If it receives a
choke packet in reaction to its new rate, it reduces again by that fraction. In figure A shows the concept of
choke packet
For high speeds or long distances, the reaction is too slow. With hop-by-hop choke packets, each
intermediate router also reacts to a choke packet by reducing its sending rate. For this it needs
sufficient buffers to store the packets that still arrive at too high a rate. The figure shows the
concept of hop-by-hop choke packets.
Load shedding
When routers are being overwhelmed by packets that they cannot handle, they just throw them
away.
• When buffers become full, routers simply discard packets.
• Which packet is chosen to be the victim depends on the application and on the error strategy used
in the data link layer.
• File transfer - older packets cannot be discarded, since doing so would cause a gap in the
received data.
• Real-time voice or video - throw away old data and keep new packets.
• Intelligent discard policy: applications must mark their packets in priority classes to indicate
how important they are.
Quality of Service (QoS)
Quality of service (QoS) is the use of mechanisms or technologies that work on a network to
control traffic flow and ensure the performance of critical applications with limited network
capacity. We can informally define quality of service as something a flow seeks to attain.
Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter, and
bandwidth, as shown in Figure
Reliability
Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission. However, the sensitivity of application programs to
reliability is not the same. For example, it is more important that electronic mail, file transfer, and
Internet access have reliable transmissions than telephony or audio conferencing.
Delay
Another characteristic of the flow is the delay in transmission between the source and destination.
Again applications can tolerate delay in different degrees. During audio conferencing, telephony,
video conferencing, and remote conferencing there should be a minimum delay.
Jitter
Jitter is the variation in delay for packets belonging to the same flow. For example, if four packets
depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time. On
the other hand, if the above four packets arrive at 21, 23, 21, and 28, they will have different delays:
21, 22, 19, and 25. For applications such as audio and video, the first case is completely acceptable;
the second case is not. For these applications, it does not matter whether the packets arrive with a
short or long delay, as long as the delay is the same for all packets; in the second case it is not.
Jitter is defined as the variation in the packet delay. High jitter means the difference
between delays is large; low jitter means the variation is small.
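A quick check of the delay arithmetic above (using max minus min delay as one simple measure of the jitter spread; the measure itself is an assumption for illustration):

```python
# Jitter check for the numbers above: four packets depart at 0, 1, 2, 3.
departures = [0, 1, 2, 3]

for arrivals in ([20, 21, 22, 23], [21, 23, 21, 28]):
    delays = [a - d for a, d in zip(arrivals, departures)]
    jitter_span = max(delays) - min(delays)   # spread between the delays
    print(delays, "jitter span:", jitter_span)
# First case:  delays [20, 20, 20, 20], span 0 (no jitter).
# Second case: delays [21, 22, 19, 25], span 6 (high jitter).
```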
Bandwidth
Different applications need different amounts of bandwidth. In video conferencing we need to send millions
of bits per second to refresh a color screen while the total number of bits in an e-mail may not reach
even a million.
Techniques to Improve QoS
Four common techniques can be used to improve the quality of service:
1. Scheduling,
2. Traffic shaping,
3. Admission control,
4. Resource reservation.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling techniques
are designed to improve the quality of service. We discuss three of them here:
1. FIFO queuing,
2. Priority queuing,
3. Weighted fair queuing.
FIFO Queuing:
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is familiar
to those who have had to wait for a bus at a bus stop. Figure below shows a conceptual view of a
FIFO queue.
Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-
priority queue are processed last. Note that the system does not stop serving a queue until it is
empty. Figure below shows priority queuing with two priority levels (for simplicity).
A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such
as multimedia, can reach the destination with less delay. However, there is a potential drawback. If
there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will
never have a chance to be processed. This is a condition called starvation.
Weighted Fair Queuing
A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are weighted
based on the priority of the queues; higher priority means a higher weight. The system processes
packets in each queue in a round-robin fashion with the number of packets selected from each
queue based on the corresponding weight. For example, if the weights are 3, 2, and 1, three packets
are processed from the first queue, two from the second queue, and one from the third queue. If the
system does not impose priority on the classes, all weights can be equal. In this way, we have fair
queuing with priority. Figure below shows the technique with three classes.
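The weighted round-robin behavior described above can be sketched as follows; the queue contents and sizes are hypothetical, while the weights 3, 2, 1 match the example:

```python
from collections import deque

# Weighted fair queuing sketch: queues are served round-robin, taking a
# number of packets from each queue equal to its weight per round.
queues = [deque(f"q{i}-p{j}" for j in range(5)) for i in range(3)]
weights = [3, 2, 1]   # weights from the example above

while any(queues):
    for queue, weight in zip(queues, weights):
        # Take up to `weight` packets from this queue in this round.
        for _ in range(weight):
            if not queue:
                break
            print("processing", queue.popleft())
```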
Traffic shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network.
Two techniques can shape traffic:
1. Leaky bucket
2. Token bucket.
Admission control
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow
based on predefined parameters called flow specifications. Before a router accepts a flow for
processing, it checks the flow specifications to see whether its capacity (in terms of bandwidth,
buffer size, CPU speed, etc.) can handle the new flow.
Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality of
service is improved if these resources are reserved beforehand. Two QoS models, called Integrated
Services and Differentiated Services, depend heavily on resource reservation to improve the
quality of service.