CN Section A
Lesson No.
Website : www.pbidde.org
Syllabus
The question paper will consist of three sections A, B & C. Sections A & B will have
four questions from the respective sections of the syllabus and will carry 30% marks
each. Section C will have 6-12 short answer type questions which will cover the entire
syllabus uniformly and will carry 40% marks in all.
1. Candidates are required to attempt two questions each from sections A & B of the
question paper and the entire section C.
Course Objectives
● Become familiar with the basics of computer networks
● Become familiar with network architectures
● Become familiar with fundamental protocols
● Become familiar with basic network computing techniques
Learning Outcomes
Upon completion of this module, students will be able to:
● Have a good understanding of the OSI Reference Model, and in particular a good
knowledge of Layers 1-3.
● Analyze the requirements for a given organizational structure and select the
most appropriate networking architecture and technologies
Section-A
Section-B
Network layer: Design issues, services to the transport layer, routing algorithms:
static/non-adaptive and dynamic/adaptive algorithms. Congestion control algorithms: the
leaky bucket algorithm, the token bucket algorithm.
Application layer: The DNS name space, electronic mail, the WWW. Network security:
introduction to cryptography, substitution ciphers, one-time pads, two fundamental
cryptographic principles, public key algorithms (RSA, other public key algorithms), digital
signatures (symmetric-key signatures, public-key signatures, message digests).
1.1.1 Objectives
1.1.2 Introduction
1.1.3 What Is Networking?
1.1.4 Uses of Computer Network
1.1.5 Disadvantages of Installing a School Network
1.1.6 Network Hardware
1.1.6.1 File Servers
1.1.6.2 Workstations
1.1.6.3 Network Interface Cards
1.1.6.4 Ethernet Cards
1.1.6.5 Token Ring Cards
1.1.6.6 Concentrators/Hubs
1.1.6.7 Repeaters
1.1.6.8 Bridges
1.1.6.9 Router
1.1.7 Network Operating Systems
1.1.8 Summary
1.1.9 Keywords
1.1.10 Short Answer Type Questions
1.1.11 Long Answer Type Questions
1.1.12 Suggested Readings
1.1.1 Objectives
After reading the lesson, you will be able to understand:
• Basics of Computer networks
• Uses of Computer networks
• Network Hardware
• Network Software
1.1.2 Introduction
In this day and age, networks are everywhere. The Internet has revolutionized
not only the computer world but also the lives of millions in a variety of ways,
even in the “real world”. We tend to take for granted that computers should be
BCA Sem-4 Paper: BCAB2202T
connected together. In fact, these days, whenever I have two computers in the same
room, I have a difficult time not connecting them together. In approaching any
discussion of networking, it is very useful to take a step back and look at networking
from a high level. What is it, exactly, and why is it now considered so important that it
is assumed that most PCs and other devices should be networked? In this section, we
will have a quick introduction to networking, discussing what it is all about in general
terms. I begin by defining networking in the most general terms. I then place
networking in an overall context by describing some of its advantages and benefits, as
well as some of its disadvantages and costs.
1.1.3 What Is Networking?
A network is simply a collection of computers or other hardware devices that are
connected together, either physically or logically, using special hardware and software,
to allow them to exchange information and cooperate. Networking is the term that
describes the processes involved in designing, implementing, upgrading, managing and
otherwise working with networks and network technologies.
Networks are used for an incredible array of different purposes. In fact, the
definitions above are so simple for the specific reason that networks can be used so
broadly, and can allow such a wide variety of tasks to be accomplished. While most
people learning about networking focus on the interconnection of PCs and other “true”
computers, you use various types of networks every day. Each time you pick up a
phone, use a credit card at a store, get cash from an ATM, or even plug in an
electrical appliance, you are using some type of network.
In fact, the definition can even be expanded beyond the world of technology
altogether: I'm sure you've heard the term “networking” used to describe the process of
finding an employer or employee by talking to friends and associates. In this case too,
the idea is that independent units are connected together to share information and
cooperate.
The widespread networking of personal computers is a relatively new
phenomenon. For the first decade or so of their existence, PCs were very much “islands
unto themselves”, and were rarely connected together. In the early 1990s, PC
networking began to grow in popularity as businesses realized the advantages that
networking could provide. By the late 1990s, networking in homes with two or more
PCs started to really take off as well.
This interconnection of small devices represents, in a way, a return to the “good
old days” of mainframe computers. Before computers were small and personal, they
were large and centralized machines that were shared by many users operating remote
terminals. While having all of the computer power in one place had many
disadvantages, one benefit was that all users were connected because they shared the
central computer.
Individualized PCs took away that advantage, in favor of the benefits of
independence. Networking attempts to move computing into the middle ground,
providing PC users with the best of both worlds: the independence and flexibility of
personal computers, and the connectivity and resource sharing of mainframes. In fact,
networking is today considered so vital that it’s hard to conceive of an organization
with two or more computers that would not want to connect them together.
1.1.4 Uses of Computer Network
Here are some of the specific advantages or uses generally associated with networking:
• Connectivity and Communication: Networks connect computers and the
users of those computers. Individuals within a building or work group can be
connected into local area networks (LANs); LANs in distant locations can be
interconnected into larger wide area networks (WANs). Once connected, it is
possible for network users to communicate with each other using technologies
such as electronic mail. This makes the transmission of business (or non-
business) information easier, more efficient and less expensive than it would be
without the network.
• Data Sharing: One of the most important uses of networking is to allow the
sharing of data. Before networking was common, an accounting employee who
wanted to prepare a report for her manager would have to produce it on her own PC,
put it on a floppy disk, and then walk it over to the manager, who would
transfer the data to his PC's hard disk. (This sort of “shoe-based network” was
sometimes sarcastically called a “sneakernet”.) True networking allows
thousands of employees to share data much more easily and quickly than this.
Moreover, it makes possible applications that rely on the ability of many people to
access and share the same data, such as databases, group software
development, and much more. Intranets and extranets can be used to distribute
corporate information between sites and to business partners.
• Hardware Sharing: Networks facilitate the sharing of hardware devices. For
example, instead of giving each of 10 employees in a department an expensive
color printer (or resorting to the “sneakernet” again), one printer can be placed
on the network for everyone to share.
• Internet Access: The Internet is itself an enormous network, so whenever you
access the Internet, you are using a network. The significance of the Internet to
modern society is hard to overstate, especially for those of us in technical
fields.
• Internet Access Sharing: Small computer networks allow multiple users to
share a single Internet connection. Special hardware devices allow the
bandwidth of the connection to be easily allocated to various individuals as they
need it, and permit an organization to purchase one high-speed connection
instead of many slower ones.
• Data Security and Management: In a business environment, a network allows
the administrators to much better manage the company's critical data. Instead
of having this data spread over dozens or even hundreds of small computers in
a haphazard fashion as their users create it, data can be centralized on shared
servers. This makes it easy for everyone to find the data, makes it possible for
the administrators to ensure that the data is regularly backed up, and also
allows for the implementation of security measures to control who can read or
change various pieces of critical information.
• Performance Enhancement and Balancing: Under some circumstances, a
network can be used to enhance the overall performance of some applications
by distributing the computation tasks to various computers on the network.
• Entertainment: Networks facilitate many types of games and entertainment.
The Internet itself offers many sources of entertainment, of course. In addition,
many multi-player games exist that operate over a local area network. Many
home networks are set up for this reason, and gaming across wide area
networks (including the Internet) has also become quite popular. Of course, if
you are running a business and have easily-amused employees, you might
insist that this is really a disadvantage of networking and not an advantage!
1.1.5 Disadvantages of Installing a School Network
Despite these advantages, networking also has disadvantages and costs:
• Data Security Concerns: If a network is implemented properly, it is possible to
greatly improve the security of important data. In contrast, a poorly-secured
network puts critical data at risk, exposing it to the potential problems
associated with hackers, unauthorized access and even sabotage.
• File Server May Fail: Although a file server is no more susceptible to failure
than any other computer, when the file server "goes down," the entire network
may come to a halt. When this happens, the entire school may lose access to
necessary programs and files.
1.1.6 Network Hardware
Networking hardware includes all computers, peripherals, interface cards and other
equipment needed to perform data-processing and communications within the
network. This section provides information on the following components:
• File Servers
• Workstations
• Network Interface Cards
• Concentrators/Hubs
• Repeaters
• Bridges
• Routers
1.1.6.1 File Servers
A file server stands at the heart of most networks. It is a very fast computer with
a large amount of RAM and storage space, along with a fast network interface card.
The network operating system software resides on this computer, along with any
software applications and data files that need to be shared.
The file server controls the communication of information between the nodes on a
network. For example, it may be asked to send a word processor program to one
workstation, receive a database file from another workstation, and store an e-mail
message during the same time period. This requires a computer that can store a lot of
information and share it very quickly. File servers should have at least the following
characteristics:
• 75 megahertz or faster microprocessor (Pentium, PowerPC)
• A fast hard drive with at least four gigabytes of storage
• A RAID (Redundant Array of Inexpensive Disks) to preserve data after a disk
failure
• A tape back-up unit
• Numerous expansion slots
• Fast network interface card
• At least 512 MB of RAM
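The request-and-store role described above — one machine receiving, keeping and serving shared files for many workstations — can be pictured with a toy in-memory model. The class and method names here are invented for illustration; a real network operating system is far more involved:

```python
class FileServer:
    """Toy model of a file server: it stores shared files and serves
    them back to any workstation on the network that asks."""

    def __init__(self):
        self.files = {}  # filename -> bytes, standing in for the server's disks

    def store(self, name, data):
        # e.g. a workstation saving a database file to the server
        self.files[name] = data

    def retrieve(self, name):
        # e.g. another workstation requesting a word-processor document
        return self.files.get(name)

# One server, many workstations sharing the same data
server = FileServer()
server.store("report.doc", b"quarterly figures")
print(server.retrieve("report.doc"))  # b'quarterly figures'
```

The point of the sketch is the asymmetry: all shared data lives in one place, so every workstation sees the same copy and backups cover everything at once.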
1.1.6.2 Workstations
All of the computers connected to the file server on a network are called
workstations. A typical workstation is a computer that is configured with a network
interface card, networking software, and the appropriate cables. Workstations do not
necessarily need floppy disk drives or hard drives because files can be saved on the file
server. Almost any computer can serve as a network workstation.
1.1.6.3 Network Interface Cards
The network interface card (NIC) provides the physical connection between the
network and the computer workstation. Most NICs are internal, with the card fitting
into an expansion slot inside the computer. Some computers, such as Mac Classics,
use external boxes which are attached to a serial port or a SCSI port. Laptop
computers generally use external LAN adapters connected to the parallel port or
network cards that slip into a PCMCIA slot. Network interface cards are a major factor
in determining the speed and performance of a network. It is a good idea to use the
fastest network card available for the type of workstation you are using.
The three most common network interface connections are Ethernet cards,
LocalTalk connectors, and Token Ring cards. According to an International Data
Corporation study, Ethernet is the most popular, followed by Token Ring and LocalTalk
(Sant'Angelo, R. (1995). NetWare Unleashed, Indianapolis, IN: Sams Publishing).
1.1.6.4 Ethernet Cards
Ethernet cards are usually purchased separately from a computer, although
many computers (such as the Macintosh) now include an option for a pre-installed
Ethernet card. Ethernet cards contain connections for either coaxial or twisted pair
cables (or both) (See fig. 1). If it is designed for coaxial cable, the connection will be
BNC. If it is designed for twisted pair, it will have an RJ-45 connection. Some Ethernet
cards also contain an AUI connector. This can be used to attach coaxial, twisted pair,
or fiber optics cable to an Ethernet card. When this method is used there is always an
external transceiver attached to the workstation.
1.1.6.5 Token Ring Cards
Token Ring network cards look similar to Ethernet cards. One visible difference
is the type of connector on the back end of the card. Token Ring cards generally have a
nine-pin DIN-type connector to attach the card to the network cable.
1.1.6.6 Concentrators/Hubs
A concentrator is a device that provides a central connection point for cables from
workstations, servers, and peripherals. In a star topology, twisted-pair wire is run from
each workstation to a central concentrator. Hubs are multislot concentrators into
which a number of multi-port cards can be plugged to provide additional access as the
network grows in size. Some concentrators are passive; that is, they allow the signal to
pass from one computer to another without any change. Most concentrators are active;
that is, they electrically amplify the signal as it moves from one device to another. Active
concentrators are used like repeaters to extend the length of a network. Concentrators
are:
• Usually configured with 8, 12, or 24 RJ-45 ports
• Often used in a star or star-wired ring topology
• Sold with specialized software for port management
• Also called hubs
• Usually installed in a standardized metal rack that also may store netmodems,
bridges, or routers
1.1.6.7 Repeaters
When a signal travels along a cable, it tends to lose strength. A repeater is a
device that boosts a network's signal as it passes through. The repeater does this by
electrically amplifying the signal it receives and rebroadcasting it. Repeaters can be
separate devices or they can be incorporated into a concentrator. They are used when
the total length of your network cable exceeds the standards set for the type of cable
being used.
A good example of the use of repeaters would be in a local area network using a
star topology with unshielded twisted-pair cabling. The length limit for unshielded
twisted-pair cable is 100 meters. The most common configuration is for each
workstation to be connected by twisted-pair cable to a multi-port active concentrator.
The concentrator regenerates all the signals that pass through it allowing for the total
length of cable on the network to exceed the 100 meter limit.
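As a toy numeric sketch (all values here are invented), the "receive, boost, rebroadcast" behaviour can be pictured by treating bits as signal levels that weaken along a cable segment and are restored to full strength by the repeater:

```python
def attenuate(bits, loss=0.4):
    # A 1-bit is sent at full strength (1.0); by the far end of the
    # cable segment its level has dropped by the given loss fraction.
    return [(1.0 - loss) * b for b in bits]

def regenerate(samples, threshold=0.3):
    # The repeater decides each bit afresh from the weakened sample and
    # rebroadcasts it at full strength on the next segment.
    return [1 if s > threshold else 0 for s in samples]

sent = [1, 0, 1, 1, 0]
weak = attenuate(sent)               # each 1 weakened to about 0.6
print(regenerate(weak) == sent)      # True: the original bits survive
```

This is why chaining cable through an active concentrator or repeater lets the total run exceed the per-segment length limit: each hop starts from a fresh, full-strength signal.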
1.1.6.8 Bridges
A bridge is a device that allows you to segment a large network into two smaller,
more efficient networks. If you are adding to an older wiring scheme and want the new
network to be up-to-date, a bridge can connect the two.
A bridge monitors the information traffic on both sides of the network so that it
can pass packets of information to the correct location. Most bridges can "listen" to the
network and automatically figure out the address of each computer on both sides of
the bridge. The bridge can inspect each message and, if necessary, broadcast it on the
other side of the network.
The bridge manages the traffic to maintain optimum performance on both sides
of the network. You might say that the bridge is like a traffic cop at a busy intersection
during rush hour. It keeps information flowing on both sides of the network, but it
does not allow unnecessary traffic through. Bridges can be used to connect different
types of cabling, or physical topologies. They must, however, be used between networks
with the same protocol.
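The "listen and learn" behaviour described above can be sketched as a toy forwarding table. The addresses and side labels are invented; a real bridge works on Ethernet MAC addresses in hardware:

```python
class LearningBridge:
    """Toy learning bridge: it notes which side each source address appears
    on, and forwards a frame across only when that is necessary."""

    def __init__(self):
        self.side_of = {}  # address -> side of the bridge ("A" or "B")

    def handle_frame(self, src, dst, arrived_on):
        self.side_of[src] = arrived_on            # "listen" and learn
        if self.side_of.get(dst) == arrived_on:
            return None                           # same segment: drop, don't forward
        return "B" if arrived_on == "A" else "A"  # forward (or flood if unknown)

bridge = LearningBridge()
bridge.handle_frame("pc1", "pc2", "A")         # pc2 unknown: flooded to side B
bridge.handle_frame("pc2", "pc1", "B")         # bridge learns pc2 lives on B
print(bridge.handle_frame("pc1", "pc2", "A"))  # forwarded to B -> prints B
```

Note how traffic between two machines on the same side never crosses the bridge, which is exactly the "does not allow unnecessary traffic through" behaviour of the traffic-cop analogy.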
1.1.6.9 Routers
A router translates information from one network to another; it is similar to a
super-intelligent bridge. Routers select the best path to route a message, based on the
destination address and origin. The router can direct traffic to prevent head-on
collisions, and is smart enough to know when to direct traffic along back roads and
shortcuts.
While bridges know the addresses of all computers on each side of the network,
routers know the addresses of computers, bridges, and other routers on the network.
Routers can even "listen" to the entire network to determine which sections are busiest
-- they can then redirect data around those sections until they clear up.
If you have a school LAN that you want to connect to the Internet, you will need to
purchase a router. In this case, the router serves as the translator between the
information on your LAN and the Internet. It also determines the best route to send the
data over the Internet. Routers can:
• Direct signal traffic efficiently
• Route messages between any two protocols
• Route messages between linear bus, star, and star-wired ring topologies
• Route messages across fiber optic, coaxial, and twisted-pair cabling
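The "select the best path" idea can be sketched with a toy routing table using longest-prefix matching, which is how IP routers choose among overlapping routes. The network prefixes and gateway names here are invented for illustration:

```python
import ipaddress

# Toy routing table: destination prefix -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-1",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-2",
    ipaddress.ip_network("0.0.0.0/0"): "isp-link",   # default route (the Internet)
}

def next_hop(addr):
    # Collect every route that covers the destination, then pick the most
    # specific one -- the longest prefix wins.
    dest = ipaddress.ip_address(addr)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # gateway-2 (the /16 is more specific than the /8)
print(next_hop("8.8.8.8"))   # isp-link (only the default route matches)
```

A school LAN's router does essentially this on every packet: traffic for local prefixes stays inside, and everything else falls through to the default route toward the Internet.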
1.1.7 Network Operating Systems
Unlike operating systems, such as DOS and Windows 95, that are designed for
single users to control one computer, network operating systems (NOS) coordinate the
activities of multiple computers across a network. The network operating system acts
as a director to keep the network running smoothly.
The two major types of network operating systems are:
• Peer-to-Peer
• Client/Server
Peer-to-Peer
Peer-to-peer network operating systems allow users to share resources and files located
on their computers and to access shared resources found on other computers.
However, they do not have a file server or a centralized management source (See fig. 1).
In a peer-to-peer network, all computers are considered equal; they all have the same
abilities to use the resources available on the network. Peer-to-peer networks are
designed primarily for small to medium local area networks. AppleShare and Windows
for Workgroups are examples of programs that can function as peer-to-peer network
operating systems.
Client/Server
Advantages of a client/server network:
• Flexibility - New technology can be easily integrated into system.
• Interoperability - All components (client/network/server) work together.
• Accessibility - Server can be accessed remotely and across multiple platforms.
Disadvantages of a client/server network:
• Expense - Requires initial investment in dedicated server.
• Maintenance - Large networks will require a staff to ensure efficient operation.
• Dependence - When server goes down, operations will cease across the network.
Examples of network operating systems
The following list includes some of the more popular peer-to-peer and client/server
network operating systems.
• Microsoft Windows
• Microsoft Windows NT Server
• Microsoft Windows NT Workstation
1.1.8 Summary
A network is simply a collection of computers or other hardware devices that are
connected together, either physically or logically, using special hardware and software,
to allow them to exchange information and cooperate. Some of the specific advantages
or uses generally associated with networking are: data sharing, hardware sharing,
internet access, etc. Networking hardware includes all computers, peripherals,
interface cards and other equipment needed to perform data-processing and
communications within the network. Unlike operating systems, such as DOS and
Windows 95, that are designed for single users to control one computer, network
operating systems (NOS) coordinate the activities of multiple computers across a
network. The network operating system acts as a director to keep the network running
smoothly. The two major types of network operating systems are: Peer-to-Peer and
Client/Server.
1.1.9 Keywords
Server: A central computer that controls and provides information to other computers
in a network
Workstations: A high-performance computer system that is basically designed for a
single user and has advanced graphics capabilities, large storage capacity, and a
powerful central processing unit.
Ethernet: It is a widely used networking technology that allows devices to
communicate and share data over a LAN.
Router: It is a device that connects two or more packet-switched networks or sub-
networks.
1.1.10 Short Answer Type Questions
1. What is the use of NIC?
2. Why do we use Bridge?
3. What is a Router?
1.1.11 Long Answer Type Questions
1. Why do we need a network? Explain in detail.
2. Explain the different types of network hardware.
3. What is a network Operating System? Explain different types of network
Operating systems.
1.1.12 Suggested Readings
1. Computer Networks by Andrew S. Tanenbaum
2. Data and Computer Communication by William Stallings
3. Data Communications and Networking by Behrouz A. Forouzan
Computer Networks
Lesson No. 1.2 Author : Dr. Kanwal Preet Singh
Converted into SLM by: Dr. Vishal Singh
Last Updated: March 2024
1.2.1 Objectives
1.2.2 Introduction
1.2.3 Network goals
1.2.4 Applications of Networks
1.2.5 Network Models
1.2.6 Network Topologies
1.2.6.1 Bus Topology
1.2.6.2 Ring Topology
1.2.6.3 Star Topology
1.2.6.4 Mesh Topology
1.2.6.5 Tree Topology
1.2.7 Summary
1.2.8 Keywords
1.2.9 Short Answer Type Questions
1.2.10 Long Answer Type Questions
1.2.11 Suggested Readings
1.2.1 Objectives
After reading the lesson, you will be able to understand:
• Network Goals
• Network Applications
• Network models
• Network Topologies
1.2.2 Introduction
The main goals of a computer network are resource sharing and providing
communication. Some other goals are also discussed in this lesson. The
major applications of networking are in marketing, finance, telecommunication, email,
cell phones, etc. Network models provide a standard framework to use when designing
complex communication systems. The two types of network models, peer-to-peer and
client/server, are discussed in the lesson. A topology refers to both the physical and
logical layout of a network. The different types of topologies are discussed in the lesson.
1.2.3 Network goals
• The main goal of networking is "resource sharing": to make all
programs, data and equipment available to anyone on the network without
regard to the physical location of the resource and the user.
• A second goal is to provide high reliability by having alternative sources of
supply. For example, all files could be replicated on two or three machines, so if
one of them is unavailable, the other copies could be used.
• Another goal is saving money. Small computers have a much better
price/performance ratio than larger ones. Mainframes are roughly ten
times faster than the fastest single-chip microprocessors, but they cost a
thousand times more. This imbalance has caused many system designers to
build systems consisting of powerful personal computers, one per user, with
data kept on one or more shared file server machines. This goal leads to
networks with many computers located in the same building. Such a network is
called a LAN (local area network).
• Another closely related goal is to increase the system's performance as the
workload increases by just adding more processors. With central mainframes, when
the system is full, it must be replaced by a larger one, usually at great expense
and with even greater disruption to the users.
• Computer networks provide a powerful communication medium. A file that was
updated/modified on a network, can be seen by the other users on the network
immediately.
1.2.4 Applications of Networks
Data networks have become an indispensable part of business, industry and
entertainment. Some of the applications of networks in different fields are as follows:
• Marketing and Sales: Computer networks are used extensively in both
marketing and sales organizations. Marketing professionals use them to collect,
exchange and analyze data relating to customer needs and product development
cycles. Sales applications include teleshopping, which uses order-entry
computers or telephones connected to an order-processing network, and on-line
reservation services for hotels, airlines, railways, etc.
• Financial Services: Nowadays financial services are totally dependent on
computer networks. Applications include credit history searches, foreign
exchange and investment services, and electronic funds transfer (EFT), which
allows a user to transfer money without going into a bank (e.g. ATMs).
• Manufacturing: Computer networks these days are used in many aspects of
manufacturing, including the manufacturing process itself. Two applications
that use networks to provide essential services are computer-assisted design
(CAD) and computer-assisted manufacturing (CAM), both of which allow
multiple users to work on a project simultaneously.
• Information Services: They provide connections to the Internet and other
information services, which include bulletin boards and data banks.
• Electronic Mail (e-mail or Email): E-mail is the forwarding of electronic files to
an electronic post office for the recipient to pick up.
• Groupware: It is the latest network application; it allows user groups to share
documents, schedules, databases, etc.
• Teleconferencing: It allows people in different regions to "attend" meetings
using telephone lines.
• Telecommuting: It allows employees to perform office work at home by "Remote
Access" to the network.
• Videotext: It is the capability of having a two-way transmission of picture and
sound. Games like Doom and Hearts, distance-education lectures, etc. use this
technology.
• Cellular Telephones: In the past, two parties wishing to use the services of the
telephone companies had to be linked by a fixed physical connection. Today’s
cellular networks make it possible to maintain wireless phone connections even
while travelling over large distances.
• Cable Television: Future services provided by cable television networks may
include video on request, as well as financial and communication services
currently provided by telephone companies and computer networks.
1.2.5 Network Models
The two main network models are:
1. Peer-to-peer networks
2. Client/server networks
Let us look into the details of each of these models now.
Peer-to-peer Networks
A peer-to-peer network is a decentralized network model offering no centralized
storage of data or centralized control over the sharing of files or resources. All systems
on a peer-to-peer network can share the resources on their local computer as well as
use resources of other systems.
Peer-to-peer networks are cheaper and easier to implement than client/server
networks, making them an ideal solution for environments in which budgets are a
concern. The peer-to-peer model does not work well with large numbers of computer
systems. As a peer-to-peer network grows, it becomes increasingly complicated to
navigate and access files and resources connected to each computer because they are
distributed throughout the network. Further, the lack of centralized data storage
makes it difficult to locate and back up key files.
Peer-to-peer networks are typically found in small offices or in residential
settings where only a limited number of computers will be attached and only a few files
and resources shared. A general rule of thumb is to have no more than 10 computers
connected to a peer-to-peer network.
Advantages:
• Easy to install, and setup costs are relatively low.
Disadvantages:
• They lack expandability and centralized management.
• There is no central repository for files and applications.
• They do not provide the level of security available in a client/server network.
Client/Server Networking Model
The client/server networking model is, without question, the most widely
implemented model and the one you are most likely to encounter when working in real-
world environments. The advantages of the client/server system stem from the fact
that it is a centralized model. It allows for centralized network management of all
network services, including user management, security, and backup procedures.
A client/server network often requires technically skilled personnel to
implement and manage the network. This, together with the cost of dedicated server
hardware and software, increases the cost of the client/server model. Despite this, the advantages
of the centralized management, data storage, administration, and security make it the
network model of choice.
Advantages:
• Centralized - Resources and data security can be controlled through the server.
• Interoperability - All components (client/network/server) work together.
• Accessibility - Server can be accessed remotely and across multiple Operating
systems, such as Windows XP, Windows NT, and Macintosh.
Disadvantages:
• Maintenance - They can support thousands of clients, and hence require a staff
to ensure efficient operation.
• Dependence - When server goes down, operations across the network will be
affected.
1.2.6 Network Topologies
1.2.6.1 Bus Topology
In a bus topology, all devices on the network are connected to a single shared
cable, or backbone, that acts as the common communication medium.
Bus Topology
Advantages and Disadvantages of the Bus Topology
Advantages
• Compared to other topologies, a bus is cheap and easy to implement.
• Does not use any specialized network equipment.
• Requires less cable than other topologies.
Disadvantages
• Because all systems on the network connect to a single backbone, a break in
the cable will prevent all systems from accessing the network.
• There might be network disruption when computers are added or removed.
• Difficult to troubleshoot.
1.2.6.2 Ring Topology
Ring topology is one of the older ways of building computer networks, and it
is pretty much obsolete. FDDI, SONET or Token Ring technologies are used to build
ring networks. It is not widely popular in terms of usability, but where you do find
it, it will mostly be in schools or office buildings. In a ring topology,
computers and other networking devices are attached to each other in such a way that
each device has a neighbour on either side (left and right). All messages travel in the
same direction, either clockwise or anticlockwise. If any device or
cable fails, the whole network goes down and communication is not possible.
Ring Topology
Advantages and Disadvantages of the Ring Topology:
Advantages :
• Cable faults are easily located, making troubleshooting easier.
• Ring networks are moderately easy to install.
Disadvantages :
• Expansion to the network can cause network disruption.
• A single break in the cable can disrupt the entire network.
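The one-direction travel described above can be sketched as a small simulation. The node names are invented; the point is that a message's path around the ring is fixed by the direction of travel:

```python
# Toy ring of four nodes arranged clockwise: each node passes a message
# to its single downstream neighbour until the message reaches its target.
nodes = ["A", "B", "C", "D"]

def hops_to(src, dst):
    # Messages travel in one direction only, so the path is fully determined.
    path = []
    i = nodes.index(src)
    while nodes[i] != dst:
        i = (i + 1) % len(nodes)   # hop to the next node clockwise
        path.append(nodes[i])
    return path

print(hops_to("A", "D"))  # ['B', 'C', 'D'] -- must pass through B and C
print(hops_to("D", "A"))  # ['A'] -- wraps around the end of the ring
```

The simulation also makes the ring's weakness visible: every message between distant nodes passes through the intermediate devices, so removing any one of them breaks every such path.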
1.2.6.3 Star Topology
This is the most commonly used network topology design you will come across
in LAN computer networks. In a star, all computers are connected to a central device
called a hub, switch or router using Unshielded Twisted Pair (UTP) or Shielded Twisted
Pair cables. In a star topology we require more connecting devices and cable than
in a bus topology, where the entire network is supported by a single backbone. All
peripheral nodes may thus communicate with all others by transmitting to, and
receiving from, the central node only. The most practical advantage of the star
topology is that the entire network does not go down when a single computer, cable or
device fails; only the computer whose wire failed is affected, and the rest of the
network keeps working. However, if the central communication device such as the hub,
router or switch fails, the entire network will collapse. Star topology is widely used in
homes, offices and in buildings because of its commercial success.
Star Topology
Advantages and Disadvantages of the Star Topology:
Advantages
• Star networks are easily expanded without disruption to the network.
• Easy to troubleshoot and isolate problems.
• Cable failure affects only a single user.
Disadvantages
• A central connecting device allows for a single point of failure.
• Requires more cable than most of the other topologies.
• More difficult than other topologies to implement.
1.2.6.4 Mesh Topology
Mesh topology is designed around the concept of routing: routers choose the shortest
available path to the destination. In topologies such as star and bus, a message is
broadcast to the entire network and only the intended computer accepts it, but in a
mesh the message is sent only towards the destination computer, finding its route with
the help of routers. The Internet is based on a mesh topology. Routers play the key role
in a mesh; they are responsible for routing each message to its destination address.
When every device is connected directly to every other device, the network is known as
a full mesh topology; when only some devices are connected directly and the rest reach
each other indirectly, it is called a partial mesh topology.
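The wiring cost is the practical limit on full mesh: each of the n devices links directly to the other n-1, and every link is shared by two devices, so a full mesh needs n(n-1)/2 links. A quick sketch:

```python
def full_mesh_links(n: int) -> int:
    # Each of the n devices connects to the other n - 1 devices; every
    # link gets counted twice that way, so divide by two.
    return n * (n - 1) // 2

# Link count grows roughly with the square of the device count.
for n in (3, 5, 10):
    print(n, "devices ->", full_mesh_links(n), "links")  # 3, 10, 45 links
```

A partial mesh sits anywhere between n-1 links (a bare tree) and this full-mesh maximum.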
Mesh Topology
Advantages and Disadvantages of the Mesh Topology
Advantages
• Provides redundant paths between devices
• The network can be expanded without disruption to current users.
Disadvantages
• Requires more cable than the other LAN topologies.
• Complicated implementation.
1.2.6.5 Tree Topology
As the name suggests, this network design can look a little confusing and complex at
first, but with a good understanding of the Star and Bus topologies, Tree is very
simple. A tree topology is essentially a number of star topologies connected together
using a bus. Devices such as hubs can be connected directly to the tree's bus, and
each hub then acts as the root of a tree of network devices. Tree topology is very
dynamic in nature and offers far better potential for network expansion than other
topologies like Bus and Star.
The physical topology of a network refers to the actual layout of the computer cables
and other network devices. The logical topology of a network, on the other hand, refers
to the way in which the network appears to the devices that use it. There are five main
networking topologies: Bus, Star, Ring, Mesh and Tree.
1.2.8 Keywords
Topology: It defines the structure of the network of how all the components are
interconnected to each other.
Bus: A type of network topology in which all devices are connected to a single cable
called a "bus".
Star: A topology for LAN in which all nodes are individually connected to a central
connection point, like a hub or a switch.
Ring: It is a type of network configuration where devices are connected in a circular
manner, forming a closed loop.
Mesh: It is a type of computer network in which each node (computer or other device)
is connected to every other node in the network.
Tree: It is a structure where devices are connected hierarchically
1.2.9 Short Answer Type Questions
1. What is a Network Topology?
2. What is Client/Server model?
1.2.10 Long Answer Type Questions
1. What are the goals of computer network?
2. Explain the major applications of Computer network.
3. Explain different types of network topologies.
1.2.11 Suggested Readings
• Computer Networks by Andrew S. Tanenbaum
• Data and Computer Communications by William Stallings
• Data Communications and Networking by Behrouz A. Forouzan
B.C.A Sem-4 Paper: BCAB2202T
Computer Networks
Lesson No. 1.3 Author : Dr. Kanwal Preet Singh
Converted into SLM by: Dr. Vishal Singh
Last Updated March, 2024
Reference Models
1.3.1 Objectives
1.3.2 Introduction
1.3.3 The Need for Standards
1.3.4 The OSI Reference Model
1.3.5 Protocol and Protocol Stack
1.3.6 OSI Reference Model Layers
1.3.6.1 Physical Layer
1.3.6.2 Data Link Layer
1.3.6.3 Network Layer
1.3.6.4 Transport Layer
1.3.6.5 Session Layer
1.3.6.6 Presentation Layer
1.3.6.7 Application Layer
1.3.7 How OSI Reference Model Works
1.3.8 TCP/IP Protocol Suite
1.3.9 TCP/IP Reference Model
1.3.9.1 Application Layer
1.3.9.2 Transport Layer
1.3.9.3 Internet Layer
1.3.9.4 Network Interface Layer
1.3.10 OSI vs TCP/IP Reference Model
1.3.11 Summary
1.3.12 Keywords
1.3.13 Short Answer Type Questions
1.3.14 Long Answer Type Questions
1.3.15 Suggested Readings
1.3.1 Objectives
After reading the lesson, you will be able to understand:
• Protocols and Protocol stack
• OSI Reference Model
• TCP/IP Protocol Suite
• TCP/IP Model
1.3.2 Introduction
As we know, many software and hardware manufacturers supply products for
linking computers in a network. Networking is fundamentally a form of
communication, so the need for manufacturers to take steps to ensure that their
products could interact became apparent early in the development of networking
technology. As networks and suppliers of networking products have spread across the
world, the need for standardization has only increased. To address the issues
surrounding standardization, several independent organizations have created standard
design specifications for computer-networking products. When these standards are
adhered to, communication is possible between hardware and software products
produced by a variety of vendors. This lesson explores these standards in detail.
1.3.3 The Need for Standards
Over the past couple of decades, many of the networks that were built used
different hardware and software implementations. As a result, they were incompatible,
and it became difficult for networks using different specifications to communicate with
each other. To address this problem, the International Organization for Standardization (ISO)
researched various network schemes. The ISO recognized there was a need to create a
NETWORK MODEL that would help vendors create interoperable network
implementations. The International Organization for Standardization (ISO) is an
International standards organization responsible for a wide range of standards,
including many that are relevant to networking. In 1984, in order to aid network
interconnection without necessarily requiring complete redesign, the Open Systems
Interconnection (OSI) reference model was approved as an international standard for
communications architecture.
1.3.4 The OSI Reference Model
The model was developed by the International Organization for Standardization
(ISO) in 1984. It is now considered the primary architectural model for inter-computer
communications. The Open Systems Interconnection (OSI) reference model is a
descriptive network scheme. It ensures greater compatibility and interoperability
between various types of network technologies. The OSI model describes how
information or data makes its way from application programs (such as spreadsheets)
through a network medium (such as wire) to another application program located on
another network. The OSI reference model divides the problem of moving information
between computers over a network medium into SEVEN smaller and more manageable
problems. This separation into smaller more manageable functions is known as
layering.
The OSI Reference Model is composed of seven layers, each specifying particular
network functions. The process of breaking up the functions or tasks of networking
into layers reduces complexity. Each layer provides a service to the layer above it in the
protocol specification. Each layer communicates with the same layer's software or
hardware on other computers. The lower four layers (transport, network, data link and
physical, i.e. Layers 4, 3, 2 and 1) are concerned with the flow of data from end to end
through the network. The upper three layers (application, presentation
and session, i.e. Layers 7, 6 and 5) are oriented more toward services to the
applications. Data is encapsulated with the necessary protocol information as it moves
down the layers before network transit.
1.3.5 Protocol and Protocol Stack
A protocol is a "language" used to transmit data over a network. For two
computers to talk to each other, they must use the same protocol (i.e. the same
language). When you send an e-mail from your computer, your e-mail program (called
an e-mail client) passes the data (your e-mail) to the protocol stack, which does a lot of
things we will be explaining in this lesson; the protocol stack then sends the data onto
the networking media (usually cable, or air in wireless networks). The protocol stack
on the computer at the other side (the e-mail server) receives the data, does some
processing, and hands the data (your e-mail) to the e-mail server program.
The protocol stack does a lot of things, and the role of the OSI model is to
standardize the order in which the protocol stack does them. Two different protocols
may be incompatible, but if they both follow the OSI model, they will do things in the
same order, making it easier for software developers to understand how they work.
You may have noticed that we used the word "stack". This is because protocols like
TCP/IP are not really a single protocol, but several protocols working together. So the
most appropriate name is not simply "protocol" but "protocol stack".
The OSI model is divided into seven layers. It is very interesting to note that
TCP/IP (probably the most used network protocol nowadays) and other “famous”
protocols like IPX/SPX (used by Novell Netware) and NetBEUI (used by Microsoft
products) don’t fully follow this model, corresponding only to part of the OSI model. On
the other hand, by studying the OSI model you will understand how protocols work in
a general fashion, meaning that it will be easier for you to understand how real-world
protocols like TCP/IP work.
The basic idea of the OSI reference model is this: each layer is in charge of some
kind of processing and each layer only talks to the layers immediately below and above
it. For example, the sixth layer will only talk to the seventh and fifth layers, and never
directly with the first layer. When your computer is transmitting data to the network,
a given layer receives data from the layer above, processes it, adds the control
information that this particular layer is in charge of, and sends the new data, with this
new control information added, to the layer below.
When your computer is receiving data, the reverse process occurs: a given layer
receives data from the layer below, processes it, removes the control information that
this particular layer is in charge of, and sends the new data, without the control
information, to the layer above. What is important to
keep in mind is that each layer will add (when your computer is sending data) or
remove (when your computer is receiving data) control information that it is in charge
of.
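The add-a-header-on-the-way-down, strip-it-on-the-way-up behaviour described above can be sketched with a toy three-layer stack; the layer names and header strings here are purely illustrative, not real protocol formats:

```python
# Toy illustration of OSI-style encapsulation: each layer prepends its own
# control information on the way down and removes exactly that information
# on the way up, never touching what the other layers added.
LAYERS = ["transport", "network", "data-link"]

def send(payload: str) -> str:
    for layer in LAYERS:                       # moving down the stack
        payload = f"[{layer}-hdr]" + payload   # each layer adds its header
    return payload                             # what goes on the wire

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):             # moving up the stack
        hdr = f"[{layer}-hdr]"
        assert frame.startswith(hdr)           # only the peer layer reads it
        frame = frame[len(hdr):]               # ... and removes it
    return frame

wire = send("hello")
print(wire)            # [data-link-hdr][network-hdr][transport-hdr]hello
print(receive(wire))   # hello
```

Note that the string each layer produced on the sending side is consumed only by the same layer on the receiving side, which is exactly the peer-layer conversation the text describes.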
Self Check Exercise-I
Q1. What is OSI Model?
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
Q2. What is Protocol?
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
on optical lines or amplified radio frequency transmissions. The information that enters
and exits the Physical Layer must be bits; either 0s or 1s in binary. The higher layers
are responsible for providing the Physical Layer with binary information. Since nearly
all information inside of a computer is already digital, this is not difficult to achieve.
The Physical Layer does not examine the binary information nor does it validate it or
make changes to it. The Physical Layer is simply intended to transport the binary
information between higher layers located at points A and B. The following are the
main responsibilities of the physical layer in the OSI Reference Model:
• Definition of Hardware Specifications: The details of operation of cables,
connectors, wireless radio transceivers, network interface cards and other
hardware devices are generally a function of the physical layer (although also
partially the data link layer).
• Encoding and Signaling: The physical layer is responsible for various encoding
and signaling functions that transform the data from bits that reside within a
computer or other device into signals that can be sent over the network.
• Data Transmission and Reception: After encoding the data appropriately, the
physical layer actually transmits the data, and of course, receives it. Note that
this applies equally to wired and wireless networks, even if there is no tangible
cable in a wireless network!
• Topology and Physical Network Design: The physical layer is also considered
the domain of many hardware-related network design issues, such as LAN and
WAN topology.
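As a toy illustration of the encoding-and-signaling function, here is a sketch of Manchester encoding, one common line code; the IEEE 802.3 convention used below (0 as high-then-low, 1 as low-then-high) is an assumption for the example, not something stated in the text:

```python
def manchester_encode(bits: str) -> list:
    # IEEE 802.3 Manchester convention: a 0 is sent as high-then-low,
    # a 1 as low-then-high, so every bit period contains a mid-bit
    # transition the receiver can use to recover the sender's clock.
    table = {"0": (1, 0), "1": (0, 1)}
    return [table[b] for b in bits]

print(manchester_encode("1011"))  # [(0, 1), (1, 0), (0, 1), (0, 1)]
```

The guaranteed transition in every bit period is the point of the scheme: it keeps sender and receiver synchronized at the cost of doubling the signalling rate.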
1.3.6.2 Data Link Layer
This layer takes the packets sent by the network layer and converts them into
frames that will be sent out onto the network media, adding the physical address of
your computer's network card, the physical address of the destination's network card,
control data and a checksum, also known as the CRC. The frame created by this layer
is sent to the physical layer, where it is converted into an electrical signal (or an
electromagnetic signal, if you are using a wireless network). The data link layer on the
receiving computer recalculates the checksum and checks whether the newly
calculated value matches the value sent. If they match, the receiving computer sends
an acknowledgement (ACK) to the transmitting computer; otherwise the transmitting
computer re-sends the frame, since it either did not arrive at the destination or arrived
with its data corrupted. The following are the main responsibilities of the Data
link layer in the OSI Reference Model:
• Data Framing: The data link layer is responsible for the final encapsulation of
higher-level messages into frames that are sent over the network at the physical
layer.
• Arbitration: Arbitration simply means determining how to negotiate access to a
single data channel when multiple hosts are attempting to use it at the same
time. In half-duplex baseband transmissions, arbitration is required because
only one device can be actively sending an electrical signal at a time. If two
devices attempt to access the medium at the same instant, then the signals
from each device will interfere, causing a collision.
• Physical Addressing: All devices must have a physical address. In LAN
technologies, this is normally a MAC address. The physical address is designed
to uniquely identify the device globally. A MAC address (also known as an
Ethernet address, LAN address, physical address, hardware address, and many
other names) is a 48-bit address usually written as 12 hexadecimal digits, such
as 01-02-03-AB-CD-EF. The first six hexadecimal digits identify the
manufacturer of the device, and the last six represent the individual device from
that manufacturer. These addresses were historically “burnt in,” making them
permanent. However, in rare cases, a MAC address is duplicated. Therefore, a
great many network devices today have configurable MAC addresses. One way
or another, however, a physical address of some type is a required component of
a packet.
• Error Detection: Another data link-layer function, error detection, determines
whether problems with a packet were introduced during transmission. It does
this by introducing a trailer, the FCS, before it sends the packet to the remote
machine. This FCS uses a Cyclic Redundancy Check (CRC) to generate a
mathematical value and places this value in the trailer of the packet. When the
packet arrives at its destination, the FCS is examined and the reverse of the
original algorithm that created the FCS is applied. If the frame was modified in
any way, the check will fail and the frame will be discarded. The FCS
does not provide error recovery, just error detection. Error recovery is the
responsibility of a higher layer, generally the transport layer.
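The FCS mechanism described above can be sketched with Python's standard-library CRC-32 routine (the same polynomial Ethernet uses for its FCS); the frame layout here is simplified to payload plus a 4-byte trailer:

```python
import binascii

def make_frame(payload: bytes) -> bytes:
    # Compute a CRC-32 over the payload and append it as a 4-byte
    # trailer -- a simplified stand-in for the Ethernet FCS.
    fcs = binascii.crc32(payload)
    return payload + fcs.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    # The receiver recomputes the CRC over the payload and compares it
    # with the trailer; a mismatch means the frame was corrupted in
    # transit, so it would be discarded (detection only, no recovery).
    payload, trailer = frame[:-4], frame[-4:]
    return binascii.crc32(payload) == int.from_bytes(trailer, "big")

frame = make_frame(b"hello, link layer")
print(check_frame(frame))                          # True
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit
print(check_frame(corrupted))                      # False
```

As the text notes, the frame is simply dropped on a mismatch; retransmission is left to a higher layer.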
1.3.6.3 Network Layer
The third-lowest layer of the OSI Reference Model is the network layer. If the data
link layer is the one that basically defines the boundaries of what is considered a
network, the network layer is the one that defines how internetworks (interconnected
networks) function. The network layer is the lowest one in the OSI model that is
concerned with actually getting data from one computer to another even if it is on a
remote network; in contrast, the data link layer only deals with devices that are local to
each other. It is responsible for addressing messages and translating logical addresses
and names into physical addresses. This layer also determines the route from the
source to the destination computer. It determines which path the data should take
based on network conditions, priority of service, and other factors. It also manages
traffic problems on the network, such as switching and routing of packets and
controlling the congestion of data. The following are the main responsibilities of the
Data link layer in the OSI Reference Model:
• Logical Addressing: Every device that communicates over a network has
associated with it a logical address, sometimes called a layer three address. For
example, on the Internet, the Internet Protocol (IP) is the network layer protocol
and every machine has an IP address. Note that addressing is done at the data
link layer as well, but those addresses refer to local physical devices. In
contrast, logical addresses are independent of particular hardware and must be
unique across an entire internetwork.
• Routing: Moving data across a series of interconnected networks is probably
the defining function of the network layer. It is the job of the devices and
software routines that function at the network layer to handle incoming packets
from various sources, determine their final destination, and then figure out
where they need to be sent to get them where they are supposed to go.
• Datagram Encapsulation: The network layer normally encapsulates messages
received from higher layers by placing them into datagrams (also called packets)
with a network layer header.
• Fragmentation and Reassembly: The network layer must send messages down
to the data link layer for transmission. Some data link layer technologies have
limits on the length of any message that can be sent. If the packet that the
network layer wants to send is too large, the network layer must split the packet
up, send each piece to the data link layer, and then have pieces reassembled
once they arrive at the network layer on the destination machine.
• Error Handling and Diagnostics: Special protocols are used at the network
layer to allow devices that are logically connected, or that are trying to route
traffic, to exchange information about the status of hosts on the network or the
devices themselves.
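The fragmentation-and-reassembly idea can be sketched as follows; the offset tags and the `mtu` parameter are illustrative stand-ins for the corresponding fields of a real network-layer header:

```python
def fragment(packet: bytes, mtu: int) -> list:
    # Split the packet into MTU-sized pieces, tagging each with its byte
    # offset so the destination can reassemble them in the right order.
    return [(off, packet[off:off + mtu]) for off in range(0, len(packet), mtu)]

def reassemble(fragments: list) -> bytes:
    # Sort by offset and concatenate; this works even if the fragments
    # arrived out of order.
    return b"".join(data for _, data in sorted(fragments))

pkt = b"a network-layer packet that is too big for the link"
frags = fragment(pkt, mtu=8)
frags.reverse()                      # simulate out-of-order arrival
print(reassemble(frags) == pkt)      # True
```

Real IP fragmentation additionally carries an identification field and a more-fragments flag, but the offset-based reordering shown here is the core of the mechanism.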
1.3.6.4 Transport Layer
On networks, data is divided into several packets. When you transfer a big file,
it is sliced into several small packets; the computer at the other end receives these
packets and puts the file back together. The transport layer is in charge of taking the
data sent by the session layer and dividing it into packets that will be transmitted over
the network. At the receiving computer, this layer is also in charge of putting the
packets back in order if they arrived out of order (a task known as sequencing), and of
checking data integrity, usually by sending the transmitter a control signal called an
acknowledgement, telling it that the packet arrived and the data is intact.
This layer separates the Application layers (layers 5 to 7) from the Network layers
(layers 1 to 3). Network layers are concerned with how data is transmitted and received
over the network, i.e. how data packets are transmitted, while Application layers are
concerned with what is inside the packets, i.e. the data itself. Layer 4, Transport,
makes the interface between these two groups.
The transport layer ensures that packets are delivered error free, in sequence, and
without losses or duplications. At the sending computer, this layer repackages
messages, dividing long messages into several packets and collecting small packets
together in one package. This process ensures that packets are transmitted efficiently
over the network. At the receiving computer, the transport layer opens the packets,
reassembles the original messages, and, typically, sends an acknowledgment that the
message was received. If a duplicate packet arrives, this layer will recognize the
duplicate and discard it. The following are the main responsibilities of the Transport
layer in the OSI Reference Model:
• Multiplexing and Demultiplexing: Transport layer protocols on a sending
device multiplex the data received from many application programs for
transport, combining them into a single stream of data to be sent. The same
protocols receive data and then demultiplex it from the incoming stream of
datagrams, and direct each package of data to the appropriate recipient
application processes.
• Segmentation, Packaging and Reassembly: The transport layer segments the
large amounts of data it sends over the network into smaller pieces on the
source machine, and then reassembles them on the destination machine. This
function is conceptually similar to the fragmentation function of the network
layer; just as the network layer fragments messages to fit the limits of the data
link layer, the transport layer segments messages to suit the requirements of
the underlying network layer.
• Connection Establishment, Management and Termination: Transport layer
connection-oriented protocols are responsible for the series of communications
required to establish a connection, maintain it as data is sent over it, and then
terminate the connection when it is no longer required.
• Acknowledgments and Retransmissions: As mentioned above, the transport
layer is where many protocols are implemented that guarantee reliable delivery
of data. This is done using a variety of techniques, most commonly the
combination of acknowledgments and retransmission timers. Each time data is
sent a timer is started; if it is received, the recipient sends back an
acknowledgment to the transmitter to indicate successful transmission. If no
acknowledgment comes back before the timer expires, the data is retransmitted.
Other algorithms and techniques are usually required to support this basic
process.
• Flow Control: Transport layer protocols that offer reliable delivery also often
implement flow control features. These features allow one device in a
communication to specify to another that it must "throttle back" the rate at
which it is sending data, to avoid bogging down the receiver with data. These
allow mismatches in speed between sender and receiver to be detected and dealt
with.
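The acknowledgment-and-retransmission loop described above can be sketched as a stop-and-wait sender over a simulated lossy channel; the 70% delivery rate, the fixed retry limit and the function names are all invented for the demonstration:

```python
import random

random.seed(7)  # make the simulated channel deterministic for the demo

def channel_send(segment):
    # Simulated lossy channel: the ACK comes back only 70% of the time;
    # otherwise the segment (or its ACK) is considered lost.
    return "ACK" if random.random() < 0.7 else None

def send_reliably(segment, max_tries=10):
    # Stop-and-wait: send, wait for the ACK, retransmit on timeout.
    # A real stack would start a retransmission timer rather than loop.
    for attempt in range(1, max_tries + 1):
        if channel_send(segment) == "ACK":
            return attempt            # delivered on this attempt
    raise TimeoutError("no ACK after %d attempts" % max_tries)

print("delivered after", send_reliably(b"segment-1"), "attempt(s)")
```

Real transport protocols improve on this basic scheme with sequence numbers (so duplicates caused by retransmission can be detected) and sliding windows (so the sender need not stop after every segment).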
the two nodes, the session layer performs the same function on behalf of the
application. The boundaries between layers start to get very fuzzy once you get to the
session layer, which makes it hard to categorize what exactly belongs at layer 5. Some
technologies really span layers 5 through 7, and especially in the world of TCP/IP, it is
not common to identify protocols that are specific to the OSI session layer.
1.3.6.6 Presentation Layer
Also called translation layer, this layer converts the data format received by the
Application layer to a common format used by the protocol stack. For example, if the
program is using a non-ASCII code page, this layer will be in charge of translating the
received data into ASCII. This layer can also be used to compress data and add
encryption. Data compression increases network speed, as less data will be sent to the
layer below (layer 5). If encryption is used, your data will be encrypted while in
transit between layers 5 and 1, and it will only be decrypted at layer 6 of the
computer at the other end. The following are the main responsibilities of the
Presentation layer in the OSI Reference Model:
• Translation: Networks can connect very different types of computers together:
PCs, Macintoshes, UNIX systems, AS/400 servers and mainframes can all exist
on the same network. These systems have many distinct characteristics and
represent data in different ways; they may use different character sets for
example. The presentation layer handles the job of hiding these differences
between machines.
• Compression: Compression (and decompression) may be done at the
presentation layer to improve the throughput of data.
• Encryption: Some types of encryption (and decryption) are performed at the
presentation layer. This ensures the security of the data as it travels down the
protocol stack. For example, one of the most popular encryption schemes that is
usually associated with the presentation layer is the Secure Sockets Layer (SSL)
protocol. Not all encryption is done at layer 6, however; some encryption is often
done at lower layers in the protocol stack.
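A minimal sketch of presentation-layer compression, using zlib as a stand-in for whatever scheme a real stack would actually negotiate:

```python
import zlib

# A repetitive "document" compresses well. Compression happens before the
# data is handed down the stack; the peer's presentation layer reverses it.
message = b"presentation layer example " * 40

compressed = zlib.compress(message)
print(len(message), "->", len(compressed), "bytes on the wire")

restored = zlib.decompress(compressed)   # done by the receiving layer 6
print(restored == message)               # True
```

Fewer bytes handed to the layers below means less time on the medium, which is exactly the throughput benefit the text describes.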
1.3.6.7 Application Layer
The application layer is responsible for interacting with your actual user
application. Note that it is not (generally) the user application itself, but, rather, the
network applications used by the user application. For instance, in web browsing, your
user application is your browser software, such as Microsoft Internet Explorer.
However, the network application being used in this case is HTTP, which is also used
by a number of other user applications (such as Netscape Navigator). Generally, the
application layer is responsible for the initial packet creation; so if a protocol seems to
create packets out of thin air, it is generally an application-layer protocol. While this is
not always the case (some protocols that exist in other layers create their own packets),
it’s not bad as a general guideline. In simple terms, the function of the application layer
is to take requests and data from the users and pass them to the lower layers of the
OSI model. Incoming information is passed to the application layer, which then
displays the information to the users. Some of the most basic application-layer services
include file and print capabilities.
Self Check Exercise-II
Q3. Define Logical Addressing.
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
Q4. What is Data Link Layer?
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
We can also say that each layer on the transmitting computer talks directly to
the same layer on the receiving computer. For example, the fourth layer on the
transmitting computer is talking directly to the fourth layer on the receiving computer.
We can say that because the control data added by each layer can only be interpreted
by the same layer on the receiving computer. This becomes clear in the following figure:
Each layer on the transmitting computer talks directly to the same layer on the
receiving computer
The Internet is a primary reason why TCP/IP is what it is today. In fact, the
Internet and TCP/IP are so closely related in their history that it is difficult to discuss
one without also talking about the other. They were developed together, with TCP/IP
providing the mechanism for implementing the Internet. TCP/IP has over the years
continued to evolve to meet the needs of the Internet and also smaller, private
networks that use the technology. I will provide a brief summary of the history of
TCP/IP here:
The TCP/IP protocols were initially developed as part of the research network
developed by the United States Defense Advanced Research Projects Agency (DARPA or
ARPA). Initially, this fledgling network, called the ARPAnet, was designed to use a
number of protocols that had been adapted from existing technologies. However, they
all had flaws or limitations, either in concept or in practical matters such as capacity,
when used on the ARPAnet. The developers of the new network recognized that trying
to use these existing protocols might eventually lead to problems as the ARPAnet
scaled to a larger size and was adapted for newer uses and applications.
In 1973, development of a full-fledged system of internetworking protocols for
the ARPAnet began. What many people don't realize is that in early versions of this
technology, there was only one core protocol: TCP. And in fact, these letters didn't even
stand for what they do today; they were for the Transmission Control Program. The first
version of this predecessor of modern TCP was written in 1973, then revised and
formally documented in RFC 675, Specification of Internet Transmission Control
Program, December 1974.
Testing and development of TCP continued for several years. In March 1977,
version 2 of TCP was documented. In August 1977, a significant turning point came in
TCP/IP’s development. Jon Postel, one of the most important pioneers of the Internet
and TCP/IP, published a set of comments on the state of TCP. What Postel was
essentially saying was that the version of TCP created in the mid-1970s was trying to
do too much. Specifically, it was encompassing both layer three and layer four
activities (in terms of OSI Reference Model layer numbers). His vision was prophetic,
because we now know that having TCP handle all of these activities would have indeed
led to problems down the road. Postel's observation led to the creation of TCP/IP
architecture, and the splitting of TCP into TCP at the transport layer and IP at the
network layer; thus the name “TCP/IP”. (As an aside, it is interesting, given this
history, that sometimes the entire TCP/IP suite is called just “IP”, even though TCP came first.)
The process of dividing TCP into two portions began in version 3 of TCP, written in
1978. The first formal standards for the versions of IP and TCP used in modern
networks (version 4) were created in 1980. This is why the first “real” version of IP is
version 4 and not version 1. TCP/IP quickly became the standard protocol set for
running the ARPAnet. In the 1980s, more and more machines and networks were
connected to the evolving ARPAnet using TCP/IP protocols, and the TCP/IP Internet
was born.
1.3.9 TCP/IP Reference Model
The developers of the TCP/IP protocol suite created their own architectural
model to help describe its components and functions. This model goes by different
names, including the TCP/IP model, the DARPA model (after the agency that was
largely responsible for developing TCP/IP) and the DOD model (after the United States
Department of Defense, the “D” in “DARPA”). We just call it the TCP/IP model since
this seems the simplest designation for modern times. The following figure shows the
four layers of TCP/IP along with corresponding protocols:
When you ask your e-mail program (called an e-mail client) to download e-mails
that are stored on an e-mail server, it hands this request to the TCP/IP Application
layer, where it is served by a mail-access protocol such as POP3 or IMAP (SMTP, in
contrast, is the protocol used to send e-mail). When you type a www address into your
web browser to open a web page, the browser hands the request to the TCP/IP
Application layer, where it is served by the HTTP protocol (that is why web addresses
start with “http://”). And so on.
The Application layer talks to the Transport layer through a port. Ports are
numbered, and standard applications always use the same ports. For example, the
SMTP protocol always uses port 25, the HTTP protocol always uses port 80, and the
FTP protocol always uses ports 20 (for data transmission) and 21 (for control). The
port number lets the Transport protocol (typically TCP) know what kind of content is
inside the packet (for example, that the data being transported is an e-mail), so that at
the receiving side it knows to which Application protocol it should deliver the received
data. So, when receiving a packet targeted at port 25, the TCP protocol knows it must
deliver the data to the protocol bound to that port, usually SMTP, which in turn
delivers the data to the application that requested it. In the following figure, we
illustrate how the Application layer works.
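As a sketch of this hand-off, the demultiplexing step can be pictured as a simple table lookup from destination port to application protocol. This is a toy illustration, not a real TCP implementation; the `demultiplex` helper and the table name are our own.

```python
# Toy sketch of port-based demultiplexing (not a real TCP stack).
# The port-to-protocol bindings mirror the well-known ports named in the text.
WELL_KNOWN_PORTS = {
    25: "SMTP",         # e-mail transfer
    80: "HTTP",         # web pages
    20: "FTP-DATA",     # FTP data transmission
    21: "FTP-CONTROL",  # FTP control
}

def demultiplex(destination_port):
    """Return the application protocol a received segment belongs to."""
    return WELL_KNOWN_PORTS.get(destination_port, "unknown")

print(demultiplex(25))  # SMTP
print(demultiplex(80))  # HTTP
```

A packet targeted at an unregistered port would simply have no bound application protocol, which is why the lookup falls back to "unknown".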
(User Datagram Protocol). Thus TCP is considered a reliable protocol, while UDP is
considered an unreliable protocol. UDP is typically used when no critical data is being
transmitted, as in DNS (Domain Name System) requests. Because it implements
neither reordering nor an acknowledgement system, UDP is faster than TCP. When
UDP is used, the application that requested the transmission is in charge of checking
whether the data arrived intact and of reordering the received packets, i.e. the
application does the work of TCP.
Both UDP and TCP get the data from the Application layer and add a header to
it when transmitting; when receiving, the header is removed before the data is sent to
the proper port. This header carries several pieces of control information, in particular
the source port number, the target port number, a sequence number (for the
acknowledgement and reordering systems used by TCP) and a checksum (a calculation
used to check whether the data arrived intact at the destination). The UDP header has
8 bytes, while the TCP header has 20 or 24 bytes (depending on whether the options
field is used). In the following figure, we illustrate the data packet generated on the
Transport layer. This data packet is sent down to the Internet layer (if we are
transmitting data) or arrives up from the Internet layer (if we are receiving data).
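To make the 8-byte figure concrete, the UDP header can be built with Python's `struct` module. The four 16-bit fields (source port, destination port, total length, checksum) follow the standard UDP header layout; the `udp_header` helper itself is our own illustration, and the checksum is left at zero rather than computed.

```python
import struct

# The 8-byte UDP header: source port, destination port, total length, and
# checksum, each a 16-bit field ("!" selects network byte order).
def udp_header(src_port, dst_port, payload_len, checksum=0):
    length = 8 + payload_len            # header plus data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(53, 33000, 12)         # e.g. a 12-byte DNS reply
print(len(hdr))                         # 8 -- matches the size quoted above
```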
hard task, but it also does not help packet routing, because MAC addresses do not
follow a tree-like structure (in other words, while computers on the same network have
sequential virtual addresses, the computer whose MAC address is next to yours may
be in the USA).
Routing is the path that a data packet takes in order to arrive at its
destination. When requesting data from an Internet server, for example, this data
passes through several devices (called routers) before arriving at your computer. If
you want to see this in action, click Start, Run, Cmd, and at the command prompt
type tracert www.google.com. The output is the path between your computer and
Google's web server. See how the data packet passes through several different routers
before arriving at its destination; each router along the way is also called a hop. On
every network that is connected to the Internet there is a device called a router, which
bridges the computers on your local area network and the Internet. Every router has a
table of its known networks and also a configuration called the default gateway,
pointing to another router on the Internet. When your computer sends a data packet
to the Internet, the router connected to your network first checks whether it knows
the target computer – in other words, whether the target computer is located on the
same network or on a network the router knows the path to. If it doesn't, it sends the
packet to its default gateway, i.e. to another router. The process repeats until the data
packet arrives at its destination.
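The routing decision just described can be sketched in a few lines. This is a toy model with a hand-written table and hypothetical actions; real routers use longest-prefix matching over much larger tables.

```python
import ipaddress

# Toy routing decision: known networks are checked first; anything the
# router does not know goes to the default gateway.
ROUTING_TABLE = {
    ipaddress.ip_network("192.168.1.0/24"): "deliver locally",
    ipaddress.ip_network("10.0.0.0/8"):     "forward to 10.0.0.1",
}
DEFAULT_GATEWAY = "forward to default gateway"

def route(destination):
    addr = ipaddress.ip_address(destination)
    for network, action in ROUTING_TABLE.items():
        if addr in network:            # is the target on a known network?
            return action
    return DEFAULT_GATEWAY             # otherwise, hand off to the next router

print(route("192.168.1.7"))   # deliver locally
print(route("8.8.8.8"))       # forward to default gateway
```

Each hop on a tracert trace is a router making exactly this kind of decision before passing the packet on.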
Several protocols work on the Internet layer: IP (Internet Protocol), ICMP
(Internet Control Message Protocol), ARP (Address Resolution Protocol) and RARP
(Reverse Address Resolution Protocol). Data packets are sent using the IP protocol, so
that is the protocol we will explain. The IP protocol gets the data packets received
from the Transport layer (from the TCP protocol if you are transmitting real data such
as e-mails or files) and divides them into datagrams. A datagram is a packet with no
acknowledgement system of any kind, meaning that IP does not implement
acknowledgements and is thus an unreliable protocol. Note that when transferring
data the TCP protocol is used on top, and TCP does implement an acknowledgement
system. Thus even though the IP protocol does not check whether the datagram
arrived at the destination, the TCP protocol does. The connection is then reliable,
even though the IP protocol alone isn't.
Each IP datagram can have a maximum size of 65,535 bytes, including its
header, which uses 20 or 24 bytes, depending on whether a field called “options” is
used. Thus IP datagrams can carry up to 65,515 or 65,511 bytes of data. If the data
packet received from the Transport layer is bigger than that, the IP protocol cuts it
into as many datagrams as necessary. In the following
figure, we illustrate the datagram generated on the Internet layer by the IP protocol. It
is interesting to notice that what the Internet layer sees as “data” is the whole packet it
got from the Transport layer, which includes the TCP or UDP header. This datagram
will be sent to the Network Interface layer (if we are transmitting data) or was sent from
the Network Interface layer (if we are receiving data). As we mentioned before, the
header added by the IP protocol includes the source IP address, the target IP address
and several other control information.
If you pay close attention, we didn't say that the IP datagram has 65,535
bytes, but that it can have up to 65,535 bytes. This means that the data field of the
datagram does not have a fixed size. Since datagrams are sent over the network
inside frames produced by the Network Interface layer, the operating system usually
configures the size of the IP datagram to match the maximum size of the data area of
the frames used on your network. The maximum size of the data field of the frames
sent over the network is called the MTU, Maximum Transmission Unit. Ethernet
networks – the most common type of network available, including their wireless
incarnation – can carry up to 1,500 bytes of data, i.e. their MTU is 1,500 bytes. Thus
the operating system usually configures the IP protocol to create IP datagrams that
are 1,500 bytes long, instead of 65,535 (which wouldn't fit the frame). Later we will
see that the real size is 1,497 or 1,492 bytes, as the LLC layer “eats” 3 or 8 bytes for
its header.
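The arithmetic above can be checked with a short sketch, assuming a 20-byte IP header (no options) and the 1,500-byte Ethernet MTU quoted in the text; the `datagrams_needed` helper is our own illustration.

```python
import math

# Sizes taken from the text: IP header of 20 bytes (no options field),
# Ethernet MTU of 1,500 bytes.
IP_HEADER = 20
MTU = 1500

def datagrams_needed(payload_bytes, mtu=MTU, header=IP_HEADER):
    """How many IP datagrams a Transport-layer packet is cut into."""
    per_datagram = mtu - header          # 1,480 data bytes fit per datagram
    return math.ceil(payload_bytes / per_datagram)

print(datagrams_needed(65515))  # a maximum-size packet -> 45 datagrams
```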
A clarification: you may wonder how a network can be classified as TCP/IP and
Ethernet at the same time. TCP/IP is a set of protocols that deals with layers 3 to 7 of
the OSI reference model, while Ethernet is a set of protocols that deals with layers 1
and 2 – meaning Ethernet handles the physical aspect of the data transmission. So
they complement each other, as we need all seven layers (or their equivalents) to
establish a network connection.
Another feature that the IP protocol allows is fragmentation. As we mentioned,
until arriving at its destination, the IP datagram will probably pass through several
other networks along the way. If all networks in the path between the transmitting
computer and the receiving one use the same kind of network (e.g. Ethernet), then
everything is fine, as all routers work with the same frame structure (i.e. the same
MTU size). However, if the intermediate networks are not Ethernet networks, they
may use a different MTU size. In that case, the router receiving the frames built for a
1,500-byte MTU cuts the IP datagram inside each frame into as many fragments as
necessary to cross the network with the smaller MTU. The fragments then travel
independently and are re-assembled into the original datagram at the final
destination.
1.3.9.4 Network Interface Layer
Datagrams generated on the Internet layer are sent down to the Network
Interface layer if we are sending data, or the Network Interface layer gets data from
the network and sends it up to the Internet layer if we are receiving data. This layer is
defined by the type of physical network your computer is connected to, which will
almost always be an Ethernet network. As we said earlier, TCP/IP deals with layers 3
to 7 of the OSI reference model, Ethernet deals with layers 1 and 2 – the physical
aspect of the data transmission – and the two complement each other.
Ethernet has three layers: Logical Link Control (LLC), Media Access Control
(MAC) and Physical. Together, the LLC and MAC layers correspond to the second
layer of the OSI reference model. You can see the Ethernet architecture in the
following figure:
Ethernet Architecture
The Logical Link Control (LLC) layer is in charge of recording which protocol on
the Internet layer handed down the data to be transmitted, so that when a frame is
received from the network, this layer on the receiving computer knows to which
Internet-layer protocol it should deliver the data. This layer is defined by the IEEE
802.2 standard.
The Media Access Control (MAC) layer is in charge of assembling the frame
that will be sent over the network. This layer adds the source MAC address and the
target MAC address – as we explained before, the MAC address is the physical
address of a network card. Frames targeted at another network use the router's MAC
address as the target address. This layer is defined by the IEEE 802.3 standard if a
cabled network is being used, or by the IEEE 802.11 standard if a wireless network is
being used.
The Physical layer is in charge of converting the frame generated by the MAC
layer into electrical signals (on a cabled network) or electromagnetic waves (on a
wireless network). This layer is likewise defined by IEEE 802.3 for cabled networks
and IEEE 802.11 for wireless networks.
The LLC and MAC layers add their own headers to the datagram they receive
from the Internet layer, so the complete structure of the frames generated by these
two layers can be seen in the following figure. Notice that the headers added by the
upper layers are seen as “data” by the LLC layer, and the header inserted by the LLC
layer is in turn seen as data by the MAC layer. The LLC layer adds a 3-byte or 8-byte
header, and its datagram has a maximum total size of 1,500 bytes, leaving a
maximum of 1,497 or 1,492 bytes for data. The MAC layer adds a 22-byte header and
a 4-byte CRC (error-detection) field at the end of the datagram received from the LLC
layer, forming the Ethernet frame. Thus the maximum size of an Ethernet frame is
1,526 bytes.
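The frame-size arithmetic can be verified directly from the figures quoted above; the variable names below are our own.

```python
# Overhead figures quoted in the text: the MAC layer wraps the (at most)
# 1,500-byte LLC datagram in a 22-byte header and a 4-byte CRC trailer.
MAC_HEADER = 22
CRC = 4
LLC_MAX = 1500

max_frame = MAC_HEADER + LLC_MAX + CRC
print(max_frame)   # 1526, the maximum Ethernet frame size given above
```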
Similarities
The main similarities between the two models include the following:
• They share a similar architecture, in that both models are constructed with
layers.
• They share a common "application layer", although in practice this layer
includes different services in each model.
• Both models have comparable transport and network layers: the functions
performed between the presentation and network layers of the OSI model (i.e.
at its transport layer) are performed at the Transport layer of the TCP/IP
model.
• Knowledge of both models is required by networking professionals.
• Both models assume that packets are switched. Basically this means that
individual packets may take differing paths in order to reach the same
destination.
Differences
The main differences between the two models are as follows:
• The OSI model consists of 7 architectural layers whereas the TCP/IP only has 4
layers.
• TCP/IP combines the presentation and session layer issues into its application
layer.
• TCP/IP combines the OSI data link and physical layers into the network access
layer.
• TCP/IP appears to be a simpler model and this is mainly due to the fact that it
has fewer layers.
• OSI introduced the concepts of services, interfaces, and protocols; these were
retrofitted to TCP/IP later. The OSI Reference Model was devised before the
protocols were invented. It was not designed for a particular set of protocols, which made it
quite general, but the designers did not have much experience with the subject
and did not have a good idea of which functionality to put in which layer. In
the case of TCP/IP, the protocols came first and the model was just a
description of the existing protocols, so there was no problem of the protocols
fitting the model.
• TCP/IP Protocols are considered to be standards around which the internet has
developed. The OSI model however is a "generic, protocol- independent
standard."
• TCP/IP is considered to be the more credible model, mainly because its
protocols are the standards around which the internet was developed. In
contrast, networks are not usually built around the OSI model; it is merely
used as a guidance tool.
• The OSI model supports both connectionless and connection-oriented
communication in the network layer, but only connection-oriented
communication in the transport layer. The TCP/IP supports only connectionless
communication in the network layer but support both modes in the transport
layer.
1.3.11 Summary
To address the problem of networks being incompatible and unable to
communicate with each other, the International Organization for Standardization (ISO)
researched various network schemes. The ISO recognized there was a need to create a
NETWORK MODEL that would help vendors create interoperable network
implementations. The OSI model was developed by the International Organization for
Standardization (ISO) in 1984. The OSI Reference Model is composed of seven layers,
each specifying particular network functions. The process of breaking up the functions
or tasks of networking into layers reduces complexity. Each layer provides a service to
the layer above it in the protocol specification. The seven layers of the OSI Reference
model are: Physical, Data link, Network, Transport, Session, Presentation and
Application. Modern internetworking is dominated by the protocol suite known as
TCP/IP. Named for two key protocols of the many that comprise it, TCP/IP has been in
continual development and use for about three decades. The developers of the TCP/IP
protocol suite created their own architectural model to help describe its components
and functions. This model goes by different names, including the TCP/IP model, the
DARPA model (after the agency that was largely responsible for developing TCP/IP) and
the DOD model (after the United States Department of Defense, the “D” in “DARPA”).
The TCP/IP model consists of four layers: Application, Transport, Internet and Network
Access Layers.
1.3.12 Keywords
OSI: It is a conceptual model created by the International Organization for
Standardization which enables diverse communication systems to communicate using
standard protocols.
TCP/IP: It is a suite of communication protocols used to interconnect network devices
on the internet.
Protocol: It is a set of rules that determine how data is transmitted between different
devices in the same network.
1.3.13 Short Answer Type Questions
1. What do you mean by protocol stack?
2. Why do we use Reference models?
1.3.14 Long Answer Type Questions
1. Explain OSI model in detail.
2. Explain TCP/IP model in detail.
3. Compare OSI and TCP/IP models.
1.3.15 Suggested Readings
• Computer Networks by Andrew S. Tanenbaum
• Data and Computer Communication by William Stallings
• Data Communications and Networking by Behrouz A. Forouzan
B.C.A Sem-4 Paper: BCAB2202T
Computer Networks
Lesson No. 1.4 Author : Dr. Dilraj Singh
Converted into SLM by: Dr. Vishal Singh
Last Updated March, 2024
1.4.1 Objectives
1.4.2 Introduction
1.4.3 Data Link Layer Design Issues
1.4.3.1 Services Provided to the Network Layer
1.4.3.2 Framing
1.4.3.3 Error Control
1.4.3.4 Flow Control
1.4.4 Elementary Data Link Protocols
1.4.4.1 NOISELESS CHANNELS
1.4.4.2 Simplest Protocol
1.4.4.3 Stop-and-Wait Protocol
1.4.5 NOISY CHANNELS
1.4.5.1 Stop-and-Wait Automatic Repeat Request
1.4.5.2 Go-Back-N Automatic Repeat Request
1.4.5.3 Selective Repeat Automatic Repeat Request
1.4.6 Summary
1.4.7 Keywords
1.4.8 Short Answer Type Questions
1.4.9 Long Answer Type Questions
1.4.10 Suggested Readings
1.4.1 Objectives
In this lesson we will study data link layer, its design issues and various data link layer
protocols. We will also study about noisy channels in detail.
1.4.2 Introduction
The data link layer transforms the physical layer, a raw transmission facility, to
a link responsible for node-to-node (hop-to-hop) communication. Specific
responsibilities of the data link layer include framing, addressing, flow control, error
control, and media access control. The data link layer divides the stream of bits
received from the network layer into manageable data units called frames. The data
link layer adds a header to the frame to define the addresses of the sender and receiver
of the frame.
The source network layer passes a stream of bits to the data link layer, which
packs them into frames and relies on the physical layer to do the actual transmission.
The receiving data link layer must recognize the start and end of a frame by its
frame delimiters. A good design makes it easy for a receiver to find the start of new
frames while using little of the channel bandwidth. We will look at four methods:
1. Byte count: The first framing method uses a field in the header to specify the
number of bytes in the frame. The trouble is that a transmission error can garble the
count itself, making the receiver lose frame synchronization, as Figure 3 shows.
Figure 3: A character stream. (a) Without errors. (b) With one error
2. Flag bytes with byte stuffing: Each frame starts and ends with special bytes.
Often the same byte, called a flag byte, is used as both the starting and ending
delimiter. This byte is shown in Figure 4 (a) as FLAG.
It may happen that the flag byte occurs in the data. One way to solve this
problem is to have the sender's data link layer insert a special escape byte (ESC) just
before each ‘‘accidental’’ flag byte in the data. This technique is called byte stuffing,
shown in Figure 4 (b).
3. Flag bits with bit stuffing: The frame flag must be a special bit pattern that
never appears anywhere else inside a frame. With bit stuffing, 01111110 is used as
the frame flag, and this pattern is made special: the sender adds a 0 bit whenever it
encounters five consecutive 1 bits in the data, and the receiver deletes the 0 bit that
follows five consecutive 1 bits in the received data.
4. Physical layer coding violations: Recall that line coding defines how 0 and 1
bits are transmitted as voltage pulses; deliberately violating the rule can be used to
signify something special.
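The byte-stuffing technique from method 2 can be sketched as follows. This is a minimal illustration, assuming FLAG and ESC values of 0x7E and 0x7D (the values PPP uses); the helper names are our own.

```python
FLAG = 0x7E   # flag byte delimiting each frame
ESC = 0x7D    # escape byte inserted before "accidental" flags or escapes

def byte_stuff(data: bytes) -> bytes:
    """Sender side: escape any flag or escape byte occurring in the payload."""
    out = bytearray([FLAG])
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)           # stuff an escape byte first
        out.append(b)
    out.append(FLAG)                  # frame = FLAG, stuffed payload, FLAG
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: strip the delimiters and remove the escape bytes."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                    # the next byte is literal data
        out.append(body[i])
        i += 1
    return bytes(out)
```

Whatever bytes appear in the payload, the receiver can always locate the frame boundaries, because a literal FLAG inside the data is always preceded by ESC on the wire.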
Figure 4: (a) A frame delimited by flag bytes. (b) Four examples of byte sequences
before and after stuffing.
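The bit-stuffing rule from method 3 can be sketched in the same spirit, with bits represented as a Python list of 0s and 1s for clarity; the helper names are our own.

```python
def bit_stuff(bits):
    """Sender side: insert a 0 after every run of five consecutive 1 bits."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            out.append(0)   # stuffed bit, removed again by the receiver
            ones = 0
    return out

def bit_unstuff(bits):
    """Receiver side: delete the 0 that follows five consecutive 1 bits."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:            # this is the stuffed 0: drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            skip = True     # the next bit must be a stuffed 0
            ones = 0
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]      # six 1s in a row
print(bit_stuff(data))                # [0, 1, 1, 1, 1, 1, 0, 1, 0]
```

Because the stuffed data can never contain six consecutive 1s, the flag pattern 01111110 can only ever appear at frame boundaries.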
Having solved the problem of marking the start and end of each frame, we come
to the next problem: how to make sure all frames are eventually delivered to the
network layer at the destination and in the proper order. Suppose that the sender just
kept outputting frames without regard to whether they were arriving properly. This
might be fine for unacknowledged connectionless service, but would most certainly not
be fine for reliable, connection-oriented service.
The usual way to ensure reliable delivery is to provide the sender with some
feedback about what is happening at the other end of the line. Typically, the protocol
calls for the receiver to send back special control frames bearing positive or negative
acknowledgements about the incoming frames. If the sender receives a positive
acknowledgement about a frame, it knows the frame has arrived safely. On the other
hand, a negative acknowledgement means that something has gone wrong, and the
frame must be transmitted again.
The possibility that a frame or an acknowledgement vanishes completely is
dealt with by introducing timers into the data link layer.
When the sender transmits a frame, it generally also starts a timer. The timer is set to
expire after an interval long enough for the frame to reach the destination, be
processed there, and have the acknowledgement propagate back to the sender.
Normally, the frame will be correctly received and the acknowledgement will get back
before the timer runs out, in which case the timer will be canceled.
However, if either the frame or the acknowledgement is lost, the timer will go off,
alerting the sender to a potential problem. The obvious solution is to just transmit the
frame again. However, when frames may be transmitted multiple times there is a
danger that the receiver will accept the same frame two or more times and pass it to
the network layer more than once. To prevent this from happening, it is generally
necessary to assign sequence numbers to outgoing frames, so that the receiver can
distinguish retransmissions from originals.
The whole issue of managing the timers and sequence numbers so as to ensure that
each frame is ultimately passed to the network layer at the destination exactly once, no
more and no less, is an important part of the data link layer's duties. Later in this
lesson, we will look at a series of increasingly sophisticated examples to see how this
management is done.
Another important design issue that occurs in the data link layer (and higher
layers as well) is what to do with a sender that systematically wants to transmit frames
faster than the receiver can accept them. This situation can easily occur when the
sender is running on a fast (or lightly loaded) computer and the receiver is running on
a slow (or heavily loaded) machine. The sender keeps pumping the frames out at a high
rate until the receiver is completely swamped. Even if the transmission is error free, at
a certain point the receiver will simply be unable to handle the frames as they arrive
and will start to lose some. Clearly, something has to be done to prevent this situation.
Two approaches are commonly used. In the first one, feedback-based flow
control, the receiver sends back information to the sender giving it permission to send
more data or at least telling the sender how the receiver is doing. In the second one,
rate-based flow control, the protocol has a built-in mechanism that limits the rate at
which senders may transmit data, without using feedback from the receiver. In this
lesson we will study feedback-based flow control schemes because rate-based schemes
are never used in the data link layer.
Various feedback-based flow control schemes are known, but most of them use
the same basic principle. The protocol contains well-defined rules about when a sender
may transmit the next frame. These rules often prohibit frames from being sent until
the receiver has granted permission, either implicitly or explicitly.
Self Check Exercise-I
Q1. What is Error control in Data Link Layer?
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
Q2. What is Flow control in Data Link Layer?
Ans………………………………………………………………………………………………………
……………………………………………………………………………………………………………
…………………………………………………………………………………………………………
(Figure: taxonomy of the protocols discussed in this lesson – Simplest,
Stop-and-Wait, Stop-and-Wait ARQ, Go-Back-N ARQ, Selective Repeat ARQ)
Let us first assume we have an ideal channel in which no frames are lost,
duplicated, or corrupted. We introduce two protocols for this type of channel. The first
is a protocol that does not use flow control; the second is the one that does. Of course,
neither has error control because we have assumed that the channel is a perfect
noiseless channel.
Our first protocol, which we call the Simplest Protocol for lack of any other
name, is one that has no flow or error control. Like other protocols we will discuss in
this lesson, it is a unidirectional protocol in which data frames travel in only one
direction – from the sender to the receiver. We assume that the receiver can immediately
handle any frame it receives with a processing time that is small enough to be
negligible. The data link layer of the receiver immediately removes the header from the
frame and hands the data packet to its network layer, which can also accept the packet
immediately. In other words, the receiver can never be overwhelmed with incoming
frames.
Design
There is no need for flow control in this scheme. The data link layer at the
sender site gets data from its network layer, makes a frame out of the data, and sends
it. The data link layer at the receiver site receives a frame from its physical layer,
extracts data from the frame, and delivers the data to its network layer. The data link
layers of the sender and receiver provide transmission services for their network layers.
The data link layers use the services provided by their physical layers (such as
signaling, multiplexing, and so on) for the physical transmission of bits. Figure 7 shows
a design.
Figure 7. The design of the simplest protocol with no flow or error control
We need to elaborate on the procedure used by both data link layers. The
sender site cannot send a frame until its network layer has a data packet to send. The
receiver site cannot deliver a data packet to its network layer until a frame arrives. If
the protocol is implemented as a procedure, we need to introduce the idea of events in
the protocol. The procedure at the sender site is constantly running; there is no action
until there is a request from the network layer. The procedure at the receiver site is
also constantly running, but there is no action until notification from the physical layer
arrives. Both procedures are constantly running because they do not know when the
corresponding events will occur.
If data frames arrive at the receiver site faster than they can be processed,
they must be stored until they are used. Normally, the receiver does not have enough
storage space, especially if it is receiving data from many sources. This may result in
either the discarding of frames or denial of service. To prevent the receiver from
becoming overwhelmed with frames, we somehow need to tell the sender to slow
down. There must be feedback from the receiver to the sender. The protocol we
discuss now is called the Stop-and-Wait Protocol because the sender sends one frame,
stops until it receives confirmation from the receiver (okay to go ahead), and then
sends the next frame. We still have unidirectional communication for data frames,
but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other
direction. We add flow control to our previous protocol.
Figure 8 illustrates the mechanism. Comparing this figure with Figure 6, we
can see the traffic on the forward channel (from sender to receiver) and the reverse
channel. At any time, there is either one data frame on the forward channel or one ACK
frame on the reverse channel. We therefore need a half-duplex link.
Although the Stop-and-Wait Protocol gives us an idea of how to add flow
control to its predecessor, noiseless channels are nonexistent. We can either ignore
errors (as we sometimes do), or we need to add error control to our protocols. We
discuss three protocols in this section that use error control.
Our first protocol, called the Stop-and-Wait Automatic Repeat Request (Stop-
and Wait ARQ), adds a simple error control mechanism to the Stop-and-Wait Protocol.
Let us see how this protocol detects and corrects errors.
To detect and correct corrupted frames, we need to add redundancy bits to our
data frame. When the frame arrives at the receiver site, it is checked and if it is
corrupted, it is silently discarded. The detection of errors in this protocol is manifested
by the silence of the receiver. Lost frames are more difficult to handle than corrupted
ones. In our previous protocols, there was no way to identify a frame. The received
frame could be the correct one, or a duplicate, or a frame out of order. The solution is
to number the frames. When the receiver receives a data frame that is out of order,
this means that frames were either lost or duplicated. The corrupted and lost frames
need to be resent in this protocol. If the receiver does not respond when there is an error,
how can the sender know which frame to resend? To remedy this problem, the sender
keeps a copy of the sent frame. At the same time, it starts a timer. If the timer expires
and there is no ACK for the sent frame, the frame is resent, the copy is held, and the
timer is restarted. Since the protocol uses the stop-and-wait mechanism, there is only
one specific frame that needs an ACK even though several copies of the same frame
can be in the network.
Since an ACK frame can also be corrupted and lost, it too needs redundancy
bits and a sequence number. The ACK frame for this protocol has a sequence number
field. In this protocol, the sender simply discards a corrupted ACK frame or ignores an
out-of-order one.
Sequence Numbers
Let us reason out the range of sequence numbers we need. Assume we have
used x as a sequence number; we only need to use x + 1 after that. There is no need for
x + 2. To show this, assume that the sender has sent the frame numbered x. Three
things can happen.
1. The frame arrives safe and sound at the receiver site; the receiver sends an
acknowledgment. The acknowledgment arrives at the sender site, causing the
sender to send the next frame numbered x + 1.
2. The frame arrives safe and sound at the receiver site; the receiver sends an
acknowledgment, but the acknowledgment is corrupted or lost. The sender
resends the frame (numbered x) after the time-out. Note that the frame here is a
duplicate. The receiver can recognize this fact because it expects frame x + 1 but
frame x was received.
3. The frame is corrupted or never arrives at the receiver site; the sender resends
the frame (numbered x) after the time-out. We can see that there is a need for
sequence numbers x and x + 1 because the receiver needs to distinguish
between case 1 and case 2. But there is no need for a frame to be numbered x +
2. In case 1, the frame can be numbered x again because frames x and x + 1 are
acknowledged and there is no ambiguity at either site. In cases 2 and 3, the new
frame is x + 1, not x + 2. If only x and x + 1 are needed, we can let x = 0 and x + 1
= 1. This means that the sequence is 0, 1, 0, 1, 0, and so on.
Acknowledgment Numbers
Since the sequence numbers must be suitable for both data frames and ACK
frames, we use this convention: The acknowledgment numbers always announce the
sequence number of the next frame expected by the receiver. For example, if frame 0
has arrived safe and sound, the receiver sends an ACK frame with acknowledgment 1
(meaning frame 1 is expected next). If frame 1 has arrived safe and sound, the receiver
sends an ACK frame with acknowledgment 0 (meaning frame 0 is expected).
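The alternating sequence and acknowledgment numbers described above can be sketched in Python as follows (the function names are illustrative, not part of any standard library):

```python
# A minimal sketch of Stop-and-Wait ARQ numbering: sequence numbers
# alternate 0, 1, 0, 1, ... (modulo-2), and each ACK announces the
# sequence number of the NEXT frame the receiver expects.

def next_seq(s):
    """The sequence number after s, modulo-2."""
    return (s + 1) % 2

def ack_for(received_seq):
    """The ACK number announcing the next expected frame."""
    return (received_seq + 1) % 2

# If frame 0 arrives safe and sound, the receiver acknowledges with 1:
assert ack_for(0) == 1
# If frame 1 arrives safe and sound, the receiver acknowledges with 0:
assert ack_for(1) == 0
```

Because only 0 and 1 are ever needed, a single bit in the frame header suffices for the sequence number field.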
Design
Figure 9 shows the design of the Stop-and-Wait ARQ Protocol. The sending
device keeps a copy of the last frame transmitted until it receives an acknowledgment
for that frame. A data frames uses a seqNo (sequence number); an ACK frame uses an
ackNo (acknowledgment number). The sender has a control variable, which we call Sn
(sender, next frame to send), that holds the sequence number for the next frame to be
sent (0 or 1).
The receiver has a control variable, which we call Rn (receiver, next frame
expected), that holds the number of the next frame expected. When a frame is sent, the
value of Sn is incremented (modulo-2), which means if it is 0, it becomes 1 and vice
versa. When a frame is received, the value of Rn is incremented (modulo-2), which
means if it is 0, it becomes 1 and vice versa. Three events can happen at the sender
site; one event can happen at the receiver site. Variable Sn points to the slot that
matches the sequence number of the frame that has been sent, but not acknowledged;
Rn points to the slot that matches the sequence number of the expected frame.
To improve the efficiency of transmission (filling the pipe), multiple frames must
be in transition while waiting for acknowledgment. In other words, we need to let more
than one frame be outstanding to keep the channel busy while the sender is waiting for
acknowledgment. In this section, we discuss one protocol that can achieve this goal.
It is called Go-Back-N Automatic Repeat Request (the rationale for the name will
become clear later). In this protocol we can send several frames before receiving
acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
Sequence Numbers
Frames from a sending station are numbered sequentially. However, because we
need to include the sequence number of each frame in the header, we need to set a
limit. If the header of the frame allows m bits for the sequence number, the sequence
numbers range from 0 to 2^m - 1. For example, if m is 4, the only sequence numbers
are 0 through 15 inclusive. However, we can repeat the sequence. So the sequence
numbers are:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...
In other words, the sequence numbers are modulo-2^m. In the Go-Back-N
Protocol, the sequence numbers are modulo-2^m, where m is the size of the sequence
number field in bits.
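The cycling sequence shown above can be generated directly (a minimal sketch; the variable names are illustrative):

```python
# Sketch: Go-Back-N sequence numbers are taken modulo 2^m.
# For m = 4 they cycle through 0..15 and then start over.
m = 4
seq_numbers = [i % (2 ** m) for i in range(20)]
print(seq_numbers[14:18])   # → [14, 15, 0, 1]
```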
Sliding Window
In this protocol (and the next), the sliding window is an abstract concept that
defines the range of sequence numbers that is the concern of the sender and receiver.
In other words, the sender and receiver need to deal with only part of the possible
sequence numbers. The range which is the concern of the sender is called the send
sliding window; the range that is the concern of the receiver is called the receive sliding
window. We discuss both here.
The send window is an imaginary box covering the sequence numbers of the
data frames which can be in transit. In each window position, some of these sequence
numbers define the frames that have been sent; others define those that can be sent.
The maximum size of the window is 2^m - 1 for reasons that we discuss later. In this
lesson, we let the size be fixed and set to the maximum value, but we will see in future
lessons that some protocols may have a variable window size. Figure 10 shows a
sliding window of size 15 (m = 4). The window at any time divides the possible sequence
numbers into four regions. The first region, from the far left to the left wall of the
window, defines the sequence
numbers belonging to frames that are already acknowledged. The sender does not
worry about these frames and keeps no copies of them. The second region, colored in
Figure 10a, defines the range of sequence numbers belonging to the frames that are
sent and have an unknown status. The sender needs to wait to find out if these frames
have been received or were lost. We call these outstanding frames. The third range,
white in the figure, defines the range of sequence numbers for frames that can be sent;
however, the corresponding data packets have not yet been received from the network
layer. Finally, the fourth region defines sequence numbers that cannot be used until
the window slides, as we see next.
The window itself is an abstraction; three variables define its size and location
at any time. We call these variables Sf (send window, the first outstanding frame), Sn
(send window, the next frame to be sent), and Ssize (send window, size). The variable Sf
defines the sequence number of the first (oldest) outstanding frame. The variable Sn
holds the sequence number that will be assigned to the next frame to be sent. Finally,
the variable Ssize defines the size of the window, which is fixed in our protocol.
Figure 10b shows how a send window can slide one or more slots to the right
when an acknowledgment arrives from the other end. As we will see shortly, the
acknowledgments in this protocol are cumulative, meaning that more than one frame
can be acknowledged by an ACK frame. In Figure 10b, frames 0, 1, and 2 are
acknowledged, so the window has slid to the right three slots. Note that the value of Sf
is 3 because frame 3 is now the first outstanding frame.
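The send-window variables and the effect of a cumulative ACK can be sketched in Python as follows (the class and method names follow the text's notation but are otherwise illustrative, not from any real protocol stack):

```python
# A sketch of the Go-Back-N send window variables Sf, Sn, and Ssize,
# and of how a cumulative ACK slides the window.
class SendWindow:
    def __init__(self, m=4):
        self.m = m
        self.Ssize = 2 ** m - 1      # maximum send window size: 2^m - 1
        self.Sf = 0                  # first (oldest) outstanding frame
        self.Sn = 0                  # next frame to send

    def send(self):
        """Sending a frame advances Sn, modulo 2^m."""
        self.Sn = (self.Sn + 1) % (2 ** self.m)

    def ack(self, ackNo):
        """A cumulative ACK slides Sf forward to ackNo."""
        self.Sf = ackNo % (2 ** self.m)

w = SendWindow()
for _ in range(3):
    w.send()                         # frames 0, 1, and 2 are sent
w.ack(3)                             # one ACK acknowledges all three
print(w.Sf, w.Sn)                    # → 3 3  (window slid three slots)
```

After the ACK, frame 3 is the first outstanding frame, matching the situation described for Figure 10b.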
The receive window makes sure that the correct data frames are received and
that the correct acknowledgments are sent. The size of the receive window is always 1.
The receiver is always looking for the arrival of a specific frame. Any frame arriving out
of order is discarded and needs to be resent. Figure 11 shows the receive window.
Note that we need only one variable Rn (receive window, next frame expected) to
define this abstraction. The sequence numbers to the left of the window belong to the
frames already received and acknowledged; the sequence numbers to the right of this
window define the frames that cannot be received. Any received frame with a sequence
number in these two regions is discarded. Only a frame with a sequence number
matching the value of Rn is accepted and acknowledged. The receive window also
slides, but only one slot at a time. When a correct frame is received (and a frame is
received only one at a time), the window slides.
Timers
Although there can be a timer for each frame that is sent, in our protocol we use only
one. The reason is that the timer for the first outstanding frame always expires first; we
send all outstanding frames when this timer expires.
Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and
sound and in order. If a frame is damaged or is received out of order, the receiver is
silent and will discard all subsequent frames until it receives the one it is expecting.
The silence of the receiver causes the timer of the unacknowledged frame at the sender
site to expire. This, in turn, causes the sender to go back and resend all frames,
beginning with the one with the expired timer. The receiver does not have to
acknowledge each frame received. It can send one cumulative acknowledgment for
several frames.
Resending a Frame
When the timer expires, the sender resends all outstanding frames. For
example, suppose the sender has already sent frame 6, but the timer for frame 3
expires. This means that frame 3 has not been acknowledged; the sender goes back
and sends frames 3, 4, 5, and 6 again. That is why the protocol is called Go-Back-N
ARQ.
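The resend rule can be sketched as a small Python function (a sketch only; the name and parameters are illustrative):

```python
# Sketch of the Go-Back-N resend rule: when the timer expires, ALL
# outstanding frames are resent, starting from the first unacknowledged
# one (Sf) up to, but not including, the next frame to send (Sn).
def frames_to_resend(Sf, Sn, m=3):
    """Outstanding frames are Sf, Sf+1, ..., Sn-1 (modulo 2^m)."""
    out = []
    i = Sf
    while i != Sn:
        out.append(i)
        i = (i + 1) % (2 ** m)
    return out

# The sender has sent up to frame 6, but the timer for frame 3 expires:
print(frames_to_resend(Sf=3, Sn=7))   # → [3, 4, 5, 6]
```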
Design
Figure 12 shows the design for this protocol. As we can see, multiple frames can
be in transit in the forward direction, and multiple acknowledgments in the reverse
direction.
The idea is similar to Stop-and-Wait ARQ; the difference is that the send
window allows us to have as many frames in transition as there are slots in the send
window.
Send Window Size
We can now show why the size of the send window must be less than 2^m. As an
example, we choose m = 2, which means the size of the window can be 2^m - 1, or 3.
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps
track of only one variable, and there is no need to buffer out-of-order frames; they are
simply discarded. However, this protocol is very inefficient for a noisy link. In a noisy
link a frame has a higher probability of damage, which means the resending of multiple
frames. This resending uses up the bandwidth and slows down the transmission. For
noisy links, there is another mechanism that does not resend N frames when just one
frame is damaged; only the damaged frame is resent. This mechanism is called
Selective Repeat ARQ. It is more efficient for noisy links, but the processing at the
receiver is more complex.
Windows
The Selective Repeat Protocol also uses two windows: a send window and a
receive window. However, there are differences between the windows in this protocol
and the ones in Go-Back-N. First, the size of the send window is much smaller; it is
2^(m-1). The reason for this will be discussed later. Second, the receive window is the
same size as the send window. The send window maximum size can be 2^(m-1). For
example, if m = 4, the sequence numbers go from 0 to 15, but the size of the window is
just 8 (it is 15 in the Go-Back-N Protocol). The smaller window size means less
efficiency in filling the pipe, but the fact that there are fewer duplicate frames can
compensate for this. The protocol uses the same variables as we discussed for Go-
Back-N. We show the Selective Repeat send window in Figure 13 to emphasize the size.
The receive window in Selective Repeat is totally different from the one in
Go-Back-N. First, the size of the receive window is the same as the size of the send
window (2^(m-1)). The Selective Repeat Protocol allows as many frames as the size of the
receive window to arrive out of order and be kept until there is a set of in-order frames
to be delivered to the network layer. Because the sizes of the send window and receive
window are the same, all the frames in the send window can arrive out of order and be
stored until they can be delivered. We need, however, to mention that the receiver
never delivers packets out of order to the network layer. Figure 15 shows the receive
window in this protocol. Those slots inside the window that are colored define frames
that have arrived out of order and are waiting for their neighbors to arrive before
delivery to the network layer.
Design
The design in this case is to some extent similar to the one we described for
Go-Back-N, but more complicated, as shown in Figure 16.
Window Sizes
We can now show why the size of the sender and receiver windows must be at
most one-half of 2^m. For an example, we choose m = 2, which means the size of the
window is 2^m/2, or 2. Figure 16 compares a window size of 2 with a window size of 3.
If the size of the window is 2 and all acknowledgments are lost, the timer for frame 0
expires and frame 0 is resent. However, the window of the receiver is now expecting
frame 2, not frame 0, so this duplicate frame is correctly discarded. When the size of
the window is 3 and all acknowledgments are lost, the sender sends a duplicate of
frame 0. However, this time, the window of the receiver expects to receive frame 0 (0 is
part of the window), so it accepts frame 0, not as a duplicate, but as the first frame in
the next cycle.
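The argument above can be checked with a short simulation (a sketch under the text's assumptions; the function name is illustrative). After the receiver accepts a full window of frames and every ACK is lost, the sender resends frame 0; we test whether that old frame falls inside the receiver's new window, where it would wrongly be accepted as new data:

```python
# Sketch of why the Selective Repeat window must be at most 2^m / 2.
def duplicate_accepted(window_size, m=2):
    seqs = 2 ** m                       # sequence numbers 0 .. 2^m - 1
    Rn = window_size % seqs             # frame the receiver now expects
    window = {(Rn + i) % seqs for i in range(window_size)}
    return 0 in window                  # is the duplicate frame 0 accepted?

# Window size 2^m/2 = 2: the duplicate of frame 0 is correctly discarded.
assert duplicate_accepted(2) is False
# Window size 3: old frame 0 is wrongly accepted as the first frame of
# the next cycle.
assert duplicate_accepted(3) is True
```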
1.4.6 Summary
In this lesson we have discussed the data link layer, which is situated above the
physical layer and is pivotal in ensuring reliable point-to-point and point-to-multipoint
communication within a network. It handles tasks like framing, error detection, and
flow control, facilitating the seamless transmission of data between adjacent nodes. We
have also discussed data link layer design issues and various data link layer
protocols.
1.4.7 Keywords
Error control: It ensures that the data received at the receiver end is the same as the
one sent by the sender.
Flow control: It is a technique used to regulate data transfer between computers or
other nodes in a network.
Framing: It is a process of dividing a stream of data into smaller, more manageable
units called frames.
1.4.8 Short Answer Type Questions
1. Define data link layer.
2. What is Stop-and-Wait Protocol?
3. What is Go-Back-N Automatic Repeat Request?
1.4.9 Long Answer Type Questions
1. Explain various Data Link Layer Design Issues in detail.
2. Discuss various Elementary Data Link Protocols.
3. Write a detailed note on Noisy Channels.
1.4.10 Suggested Readings
Computer Networks
Lesson No. 1.5 Author: Dr. Amandeep Kaur
Converted into SLM by: Dr. Vishal Singh
Last Updated March, 2024
1.5.1 Objectives
1.5.2 Introduction
1.5.3 Static Channel Allocation for LANs and MANs
1.5.4 Dynamic Channel Allocation for LANs and MANs
1.5.5 ALOHA protocol
1.5.6 LAN protocols
1.5.7 CSMA
1.5.8 CSMA/CD
1.5.9 Collision Free Protocol
1.5.9.1 Bit-Map Protocol
1.5.9.2 Broadcast Recognition with Alternating Priorities (BRAP)
1.5.9.3 The Multi-Level Multi-Access Protocol (MLMA)
1.5.9.4 Binary Countdown protocol
1.5.10 Limited Contention Protocol
1.5.10.1 Adaptive Tree Walk Protocol
1.5.10.2 URN Protocol
1.5.11 Summary
1.5.12 Keywords
1.5.13 Short Answer Type Questions
1.5.14 Long Answer Type Questions
1.5.15 Suggested Readings
1.5.1 Objectives
If the spectrum is cut up into N regions and fewer than N users are currently
interested in communicating, a large piece of valuable spectrum will be wasted. If more than N
users want to communicate, some of them will be denied permission for lack of
bandwidth, even if some of the users who have been assigned a frequency band hardly
ever transmit or receive anything.
However, even assuming that the number of users could somehow be held
constant at N, dividing the single available channel into static subchannels is
inherently inefficient. The basic problem is that when some users are quiescent, their
bandwidth is simply lost. They are not using it, and no one else is allowed to use it
either. Furthermore, in most computer systems, data traffic is extremely bursty (peak
traffic to mean traffic ratios of 1000:1 are common). Consequently, most of the
channels will be idle most of the time. Precisely the same arguments that apply to FDM
also apply to time division multiplexing (TDM). Each user is statically allocated every
Nth time slot. If a user does not use the allocated slot, it just lies fallow.
Before considering the dynamic channel allocation methods, we will consider five
key assumptions underlying all these methods.
o Slotted Time. Time is divided into discrete intervals (slots). Frame
transmissions always begin at the start of a slot. A slot may contain 0, 1,
or more frames, corresponding to an idle slot, a successful transmission,
or a collision, respectively.
➢ Stations may or may not have carrier sense capability:
o Carrier Sense. Stations can tell if the channel is in use before trying to
use it. If the channel is sensed as busy, no station will attempt to use it
until it goes idle.
o No Carrier Sense. Stations cannot sense the channel before trying to
use it. They just go ahead and transmit. Only later can they determine
whether the transmission was successful.
There are many algorithms for allocating multiple access channels. ALOHA is one
such protocol, which is discussed below.
1.5.5 ALOHA protocol
Many modern LANs evolved from a LAN known as Aloha which was one of the
first primitive LANs to be developed. Aloha was packet based and used radio as its
transmission medium. It was used for the first time in the Packet Radio System of the
University of Hawaii in 1970. It is a predecessor to the Ethernet. There are two versions
of Aloha, Pure Aloha and Slotted Aloha.
ALOHA Protocol:
Aloha, also called the Aloha method, refers to a simple communications scheme
in which each source transmits whenever there is data to send. If the frame
successfully reaches the destination (receiver), the next frame is sent. If there is a
collision, the colliding frames are destroyed and fail to be received at the
destination. Under this protocol the sender can find out, by listening to the channel,
whether or not its frame was destroyed; if it was, the frame is sent again.
Pure Aloha Protocol
With Pure Aloha, stations are allowed access to the channel whenever they have
data to transmit (fig. 1). Because the threat of data collision exists, each station must
either monitor its transmission on the rebroadcast or await an acknowledgment from
the destination station. By comparing the transmitted packet with the received packet
or by the lack of an acknowledgement, the transmitting station can determine the
success of the transmitted packet. If the transmission was unsuccessful, the packet is
resent after a random amount of time to reduce the probability of re-collision.
Consider S as the mean number of new frames generated by users per frame
time (which is the amount of time needed to transmit a standard, fixed-length frame).
Advantages:
➢ Superior to fixed assignment when there is a large number of bursty stations.
➢ Adapts to varying number of stations.
Disadvantages:
➢ Theoretically proven throughput maximum of 18.4%.
➢ Requires queueing buffers for retransmission of packets.
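The 18.4% figure quoted above comes from the standard throughput analysis of Pure ALOHA, S = G·e^(-2G), where G is the mean number of transmission attempts (new frames plus retransmissions) per frame time; the formula itself is not derived in this lesson, so the sketch below simply evaluates it:

```python
# Sketch: throughput of Pure ALOHA, S = G * e^(-2G). The maximum occurs
# at G = 0.5 and equals 1/(2e), about 18.4 percent of the channel.
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

print(round(pure_aloha_throughput(0.5), 3))   # → 0.184
```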
Slotted Aloha
The Slotted Aloha protocol is a contention based protocol. The channel
bandwidth is a continuous stream of slots whose length is the time necessary to
transmit one packet (fig. 2). A station with a packet to send will transmit on the next
available slot boundary. In the event of a collision, each station involved in the collision
retransmits at some random time in order to reduce the possibility of recollision.
Obviously the limits imposed which govern the random retransmission of the packet
will have an effect on the delay associated with successful packet delivery. If the limit is
too short, the probability of recollision is high. If the limit is too long the probability of
recollision lessens but there is unnecessary delay in the retransmission.
Another important simulation characteristic of the Slotted Aloha protocol is the
action which takes place on transmission of the packet. Methods include blocking (i.e.
prohibiting packet generation) until verification of successful transmission occurs. This
is known as "stop-and-wait". Another method known as "go-back-n" allows continual
transmission of queued packets, but on the detection of a collision, will retransmit all
packets from the point of the collision. This is done to preserve the order of the
packets. In other cases queued packets are continually sent and only the packets
involved in a collision are retransmitted. This is called "selective-repeat" and allows out
of order transmission of packets.
Slotted Aloha Protocol
By making a small restriction in the transmission freedom of the individual
stations, the throughput of the Aloha protocol can be doubled. Assuming constant
length packets, transmission time is broken into slots equivalent to the transmission
time of a single packet. Stations are only allowed to transmit at slot boundaries. When
packets collide they will overlap completely instead of partially. This has the effect of
doubling the efficiency of the Aloha protocol and has come to be known as Slotted
Aloha. It is known through theoretical analysis of the Slotted ALOHA protocol that the
maximum achievable throughput is 1/e, or about 0.368, for a Poisson distributed network
with uniform traffic.
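The Slotted ALOHA result quoted above follows from the standard throughput formula S = G·e^(-G) (not derived in this lesson), whose maximum at G = 1 is exactly 1/e, double the Pure ALOHA maximum:

```python
# Sketch: throughput of Slotted ALOHA, S = G * e^(-G). The maximum is
# at G = 1 and equals 1/e, about 0.368.
import math

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

print(round(slotted_aloha_throughput(1.0), 3))   # → 0.368
```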
1.5.6 LAN protocols
A LAN is a high-speed data network that covers a relatively small geographic
area. It typically connects workstations, personal computers, printers, servers and
other devices. LANs offer computer users many advantages, including shared access to
devices and applications, file exchange between connected users and communication
between users via electronic mail and other applications.
With slotted ALOHA the best channel utilization that can be achieved is 1/e:
since the stations transmit at will, without paying attention to what the
other stations are doing, there are bound to be many collisions. A simple improvement
which could be made to Aloha is to ‘listen’ to the presence of signal, prior to
transmitting a frame. If there is no signal present the medium may be assumed to be
idle and a station may then make an access. This is the carrier sense strategy. These
networks can achieve a much better utilization than 1/e. In this Lesson we will discuss
some protocols for improving performance. The random access methods which we will
study here have evolved from the ALOHA protocol, which used a very simple procedure
called multiple access (MA) (fig. 1). The method was improved with the addition of a
procedure that forces the station to sense the medium before transmitting. This was
called carrier sense multiple access (CSMA). This method later evolved into two parallel
methods: carrier sense multiple access with collision detection (CSMA/CD) and
carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CD tells the
station what to do when a collision is detected. CSMA/CA tries to avoid the collision.
The next section discusses various strategies or algorithms which, when used in
an optimal manner, improve the performance of CSMA in terms of throughput
compared with Aloha. These algorithms generally differ in terms of how they deal with
a station which discovers the medium to be busy.
Persistence algorithms
The first carrier sense protocol that we will study here is called 1-persistent
CSMA (Carrier Sense Multiple Access). When a station has data to send, it first listens
to the channel to see if anyone else is transmitting at that moment. If the channel is
busy, the station waits until it becomes idle. When the station detects an idle channel,
it transmits a frame. If a collision occurs, the station waits a random amount of time
and starts all over again. The protocol is called 1-persistent because the station
transmits with a probability of 1 when it finds the channel idle. The propagation delay
has an important effect on the performance of the protocol. There is a small chance
that just after a station begins sending, another station will become ready to send and
sense the channel. If the first station's signal has not yet reached the second one, the
latter will sense an idle channel and will also begin sending, resulting in a collision.
The longer the propagation delay, the more important this effect becomes, and the
worse the performance of the protocol. Even if the propagation delay is zero, there will
still be collisions. If two stations become ready in the middle of a third station's
transmission, both will wait politely until the transmission ends and then both will
begin transmitting exactly simultaneously, resulting in a collision. If they were not so
impatient, there would be fewer collisions. Even so, this protocol is far better than pure
ALOHA because both stations have the decency to desist from interfering with the third
station's frame. Intuitively, this approach will lead to a higher performance than pure
ALOHA. Exactly the same holds for slotted ALOHA.
Nonpersistent CSMA
In nonpersistent CSMA, a station senses the channel before sending. If the channel is
idle, the station transmits; if the channel is busy, the station does not keep sensing
continuously but waits a random period of time and then repeats the algorithm. This
reduces collisions at the cost of longer delays.
P-Persistent CSMA
P-persistent CSMA applies to slotted channels. When a station becomes ready to send,
it senses the channel. If the channel is idle, it transmits with probability p and defers
to the next slot with probability q = 1 - p; the process is repeated until the frame has
been transmitted or the channel becomes busy.
1.5.8 CSMA with Collision Detection (CSMA/CD)
CSMA/CD, as well as many other LAN protocols, use the conceptual model of
Fig. 3. At the point marked t0, a station has finished transmitting its frame. Any other
station having a frame to send may now attempt to do so. If two or more stations
decide to transmit simultaneously, there will be a collision. Collisions can be detected
by looking at the power or pulse width of the received signal and comparing it to the
transmitted signal.
The CSMA/CD algorithm may be summarized as follows:
1. Sense the channel.
2. If the channel is idle, transmit.
3. If busy, defer until the channel goes idle, then transmit.
4. Monitor the channel while transmitting.
5. If a collision is detected, abort the transmission and send a brief jamming signal.
6. Random back-off: wait a random amount of time and start again from step 1.
For CSMA/CD to work, we need a restriction on the frame size. Before sending
the last bit of the frame, the sending station must detect a collision, if any and abort
the transmission. This is so because the station, once the entire frame is sent, does not
keep a copy of the frame and does not monitor the line for collision detection.
Therefore, the frame transmission time Tfr must be at least two times the maximum
propagation time Tp. To understand the reason, let us think about the worst-case
scenario. If the two stations involved in a collision are the maximum distance apart,
the signal from the first takes time Tp to reach the second, and the effect of the collision
takes another time Tp to reach the first. So the requirement is that the first station must
still be transmitting after 2Tp. The concept becomes clearer from figure 4.
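The restriction Tfr >= 2·Tp translates into a minimum frame size once a bit rate is fixed. The numbers below (link length, propagation speed, bit rate) are assumed values chosen only for illustration:

```python
# A worked example of the CSMA/CD restriction Tfr >= 2 * Tp.
distance = 2500          # metres (assumed link length)
speed = 2e8              # signal propagation speed in m/s (assumed)
bit_rate = 10e6          # 10 Mbps (assumed)

Tp = distance / speed                  # one-way propagation time
min_frame_time = 2 * Tp                # Tfr must be at least 2 * Tp
min_frame_bits = min_frame_time * bit_rate
print(round(min_frame_bits))           # → 250  (bits)
```

Any frame shorter than this could finish transmitting before a collision at the far end is detected.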
Collisions adversely affect system performance, especially when the cable is long
(i.e., large τ) and the frames are short. And CSMA/CD is not universally
applicable. In this section, we will examine some protocols that resolve the contention
for the channel without any collisions at all, not even during the contention period.
Most of these are not currently used in major systems, but in a rapidly changing field,
having some protocols with excellent properties available for future systems is often a
good thing. In the protocols to be described, we assume that there are exactly N
stations, each with a unique address from 0 to N - 1 ''wired'' into it. It does not matter
that some stations may be inactive part of the time. We also assume that propagation
delay is negligible.
Since everyone agrees on who goes next, there will never be any collisions. After
the last ready station has transmitted its frame, an event all stations can easily
monitor, another N bit contention period is begun. If a station becomes ready just after
its bit slot has passed by, it is out of luck and must remain silent until every station
has had a chance and the bit map has come around again. Protocols like this in which
the desire to transmit is broadcast before the actual transmission are called
reservation protocols.
Let us analyze the performance. Consider the situation from the point of view of a low-
numbered station, such as 0 or 1. Typically, when it becomes ready to send, the
''current'' slot will be somewhere in the middle of the bit map. On average, the station
will have to wait N/2 slots for the current scan to finish and another full N slots for the
following scan to run to completion before it may begin transmitting. High-numbered
stations are luckier. Generally, these will only have to wait half a scan (N/2 bit slots)
before starting to transmit. High-numbered stations rarely have to wait for the next
scan. Since low-numbered stations must wait on average 1.5N slots and high
numbered stations must wait on average 0.5N slots, the mean for all stations is N slots.
The channel efficiency at low load is easy to compute. The overhead per frame is N bits,
and the amount of data is d bits, for an efficiency of d/(N + d). At high load, when all
the stations have something to send all the time, the N bit contention period is
prorated over N frames, yielding an overhead of only 1 bit per frame or an efficiency of
d/(d + 1). The mean delay for a frame is equal to the sum of the time it queues inside
its station, plus an additional N(d + 1)/2 once it gets to the head of its internal queue.
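The two efficiency formulas above can be evaluated directly (the values of d and N below are assumed, chosen only for illustration):

```python
# Sketch: channel efficiency of the basic bit-map protocol with N
# stations and d-bit data frames. At low load every frame pays the full
# N-bit contention period; at high load the N contention bits are
# prorated over N frames, i.e. 1 overhead bit per frame.
def efficiency_low_load(d, N):
    return d / (N + d)

def efficiency_high_load(d):
    return d / (d + 1)

print(round(efficiency_low_load(1000, 256), 3))   # → 0.796
print(round(efficiency_high_load(1000), 3))       # → 0.999
```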
The basic bit-map protocol has several drawbacks; the major drawback is the
asymmetry with respect to station numbers: higher numbered stations get better
service than the lower numbered ones. Another drawback is that under conditions of
light load a station must always wait for the current scan to be finished before it may
transmit. The BRAP protocol eliminates both these problems.
In BRAP, as soon as a station inserts a 1 bit into its slot, it begins transmission
of its frame immediately thereafter. In addition, instead of starting the bit scan with
station 0 each time, it is started with the station following the one that just transmitted.
The permission to send rotates among the stations in a round robin fashion. The
working of BRAP is explained through figure 6.
The problem with BRAP is not the channel utilization, which is excellent in the
case of high load, but with the delay when the system is lightly loaded. When no
stations are ready there are no data frames, and the N-bit headers just go on and on till
some station inserts a 1 bit in its bit slot. On average a station will have to wait
N/2 bit slots before it may begin sending. In the MLMA method, a station announces that it
wants to send by broadcasting its address in a particular format. If only one station
attempts to transmit during a frame slot, it uses the 30-bit header to announce itself
and then sends the frame. The trouble arises when more than one station tries to
insert their addresses into the same header. To disambiguate all the addresses, the
stations behave as follows:
The first decade in every frame slot corresponds to the hundreds place in the
station number. After the first decade is finished, stations that have not transmitted a
bit must remain silent until all the stations that did set a bit have transmitted their
data. Call the highest occupied bit position in the first decade x. In the second decade,
all stations with x as their leading digit announce their tens place. Call the highest
occupied bit here y. In the third decade, all the
stations whose addresses begin with xy may set the bit corresponding to their last
digits. There are at most 10 of them. Consider the following example:
Five stations with addresses 122, 125, 705, 722, and 725 want to transmit
data. Here x=7 and y=2. Finally the data is sent in numerical order of the station
addresses. Figure 7 shows how the stations are recognized and put into numerical
order.
A problem with the basic bit-map protocol is that the overhead is 1 bit per
station, so it does not scale well to networks with thousands of stations. We can do
better than that by using binary station addresses. A station wanting to use the
channel now broadcasts its address as a binary bit string, starting with the high-order
bit. All addresses are assumed to be the same length. The bits in each address position
from different stations are Boolean ORed together. We will call this protocol binary
countdown. It was used in Datakit (Fraser, 1987). It implicitly assumes that the
transmission delays are negligible so that all stations see asserted bits essentially
instantaneously. To avoid conflicts, an arbitration rule must be applied: as soon as a
station sees that a high-order bit position that is 0 in its address has been overwritten
with a 1, it gives up. For example, if stations 0010, 0100, 1001, and 1010 are all trying
to get the channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively.
These are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that
a higher-numbered station is competing for the channel, so they give up for the current
round. Stations 1001 and 1010 continue. The next bit is 0, and both stations continue.
The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has
the highest address. After winning the bidding, it may now transmit a frame, after
which another bidding cycle starts. The protocol is illustrated in Figure 8. It has the
property that higher-numbered stations have a higher priority than lower numbered
stations, which may be either good or bad, depending on the context.
The channel efficiency of this method is d/(d + log2 N), where d is the number of data bits per frame. If, however, the frame
format has been cleverly chosen so that the sender's address is the first field in the
frame, even these log2 N bits are not wasted and the efficiency is 100 percent.
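The arbitration just described can be sketched as follows. The function names binary_countdown and countdown_efficiency are illustrative, and the channel's wired-OR is modelled with Python's bitwise OR:

```python
import math

def binary_countdown(addresses, width=4):
    """Binary countdown arbitration: in each bit time every contender
    puts its current address bit on the channel, which ORs them
    together; a station that sent 0 but sees 1 drops out."""
    contenders = list(addresses)
    for i in range(width - 1, -1, -1):          # high-order bit first
        channel = 0
        for a in contenders:
            channel |= (a >> i) & 1             # wired-OR of the bits
        if channel:
            # stations whose bit here was 0 give up for this round
            contenders = [a for a in contenders if (a >> i) & 1]
    return contenders[0]                        # highest address wins

def countdown_efficiency(d, N):
    """Channel efficiency: d data bits sent per d + log2(N) total bits."""
    return d / (d + math.log2(N))

print(format(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010]), "04b"))  # 1010
print(round(countdown_efficiency(1000, 1024), 3))                         # 0.99
```

The trace matches the worked example: 0010 and 0100 drop out in the first bit time, 1001 in the third, leaving 1010 as the winner.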
We have now considered two basic strategies for channel acquisition in a cable
network: contention, as in CSMA, and collision-free methods. Each strategy can be
rated as to how well it does with respect to the two important performance measures,
delay at low load and channel efficiency at high load. Under conditions of light load,
contention (i.e., pure or slotted ALOHA) is preferable due to its low delay. As the load
increases, contention becomes increasingly less attractive, because the overhead
associated with channel arbitration becomes greater. Just the reverse is true for the
collision-free protocols. At low load, they have high delay, but as the load increases, the
channel efficiency improves rather than gets worse as it does for contention protocols.
Obviously, it would be nice if we could combine the best properties of the contention
and collision-free protocols, arriving at a new protocol that used contention at low load
to provide low delay, but used a collision-free technique at high load to provide good
channel efficiency. Such protocols, which we will call limited-contention protocols,
do, in fact, exist, and will conclude our study of carrier sense networks. Up to now the
only contention protocols we have studied have been symmetric, that is, each station
attempts to acquire the channel with some probability, p, with all stations using the
same p. Interestingly enough, the overall system performance can sometimes be
improved by using a protocol that assigns different probabilities to different stations.
Before looking at the asymmetric protocols, let us quickly review the performance of
the symmetric case. Suppose that k stations are contending for channel access. Each
has a probability p of transmitting during each slot. The probability that some station
successfully acquires the channel during a given slot is then kp(1 - p)k - 1. To find the
optimal value of p, we differentiate with respect to p, set the result to zero, and solve
for p. Doing so, we find that the best value of p is 1/k. Substituting p = 1/k, we get

Pr(success with optimal p) = ((k - 1)/k)^(k - 1)
For small numbers of stations, the chances of success are good, but as soon as
the number of stations reaches even five, the probability has dropped close to its
asymptotic value of 1/e. Thus the probability of some station acquiring the channel
can be increased only by decreasing the amount of competition. The limited contention
protocols do precisely that.
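The calculation above can be checked numerically; success_prob is a hypothetical helper evaluating kp(1 - p)^(k - 1):

```python
import math

def success_prob(k, p):
    """Probability that exactly one of k contending stations transmits
    in a given slot when each transmits with probability p."""
    return k * p * (1 - p) ** (k - 1)

# With the optimal p = 1/k this is ((k - 1)/k)^(k - 1), which drops
# toward the asymptote 1/e ~ 0.368 as k grows.
for k in [2, 5, 10, 100]:
    print(k, round(success_prob(k, 1 / k), 4))
```

Already at k = 5 the value is about 0.41, close to the asymptote, confirming that only reduced competition, not a better p, can raise it further.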
Figure 9. The tree for eight stations
In essence, under the adaptive tree walk protocol, if a collision occurs during slot 0, the entire tree is searched, depth
first, to locate all ready stations. Each bit slot is associated with some particular node
in the tree. If a collision occurs, the search continues recursively with the node's left
and right children. If a bit slot is idle or if only one station transmits in it, the
searching of its node can stop because all ready stations have been located. (Were
there more than one, there would have been a collision.) When the load on the system
is heavy, it is hardly worth the effort to dedicate slot 0 to node 1, because that makes
sense only in the unlikely event that precisely one station has a frame to send.
Similarly, one could argue that nodes 2 and 3 should be skipped as well for the same
reason. Put in more general terms, at what level in the tree should the search begin?
Clearly, the heavier the load, the farther down the tree the search should begin. We will
assume that each station has a good estimate of the number of ready stations, q, for
example, from monitoring recent traffic. To proceed, let us number the levels of the tree
from the top, with node 1 in Fig. 9 at level 0, nodes 2 and 3 at level 1, etc. Notice that
each node at level i has a fraction 2^(-i) of the stations below it. If the q ready stations
are uniformly distributed, the expected number of them below a specific node at level i
is just 2^(-i) q. Intuitively, we would expect the optimal level at which to begin
searching the tree to be the one at which the mean number of contending stations per
slot is 1, that is, the level at which 2^(-i) q = 1. Solving this equation, we find that
i = log2 q. Numerous
improvements to the basic algorithm have been discovered and are discussed in some
detail by Bertsekas and Gallager (1992).
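Assuming an idealized channel, the depth-first search described above might be sketched as follows. Here tree_walk is our own illustrative function; each call corresponds to one contention slot, probing the stations under one tree node:

```python
def tree_walk(ready, lo, hi):
    """Depth-first probe of the station range [lo, hi): one contention
    slot per tree node; recurse into both children only on a collision
    (two or more ready stations under the node)."""
    contenders = [s for s in sorted(ready) if lo <= s < hi]
    if len(contenders) <= 1:
        return contenders          # idle slot, or one clean transmission
    mid = (lo + hi) // 2           # collision: search left, then right child
    return tree_walk(ready, lo, mid) + tree_walk(ready, mid, hi)

# Of the eight stations 0..7 (the tree of Figure 9), stations 1 and 6
# are ready; the search locates them in order.
print(tree_walk({1, 6}, 0, 8))   # [1, 6]
```

Starting the search lower down, at level i = log2 q, simply means issuing one such call per node at that level instead of a single call at the root.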
The urn protocol is similar to the tree walk protocol, but it uses an urn rather than
a tree as its basis. Like the tree walk protocol, it limits the number of stations which
are allowed to transmit during each slot in such a way as to maximize the probability
of getting exactly one ready station per contention slot. In this protocol an analogy is
made between the stations and the balls in an urn.
When n balls are drawn from an urn containing k green and N - k red balls, the
probability of getting exactly x green balls is given by the hypergeometric
distribution:

P(x) = [C(k, x) * C(N - k, n - x)] / C(N, n)

The first factor in the numerator is the number of ways of selecting x green balls
from among the k green balls in the urn. The second term in the numerator is the
number of ways of selecting n - x red balls from among the N - k red balls in the urn.
The denominator is the number of ways of selecting n balls from the N balls in the urn.
Here we are interested in the probability of drawing exactly one green ball, since
that is the only way a successful transmission can occur. When x=1, the probability of
success is maximized by choosing n = N/k. The mean number of green balls in the
sample is equal to the sample size n times the probability k/N that a given ball is
green, i.e., nk/N, which is exactly 1 when n = N/k.
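A small numerical check of this claim, using Python's math.comb for the binomial coefficients (p_one_green is an illustrative name, and the values N = 16, k = 4 are a made-up example):

```python
from math import comb

def p_one_green(N, k, n):
    """Hypergeometric probability of drawing exactly one green (ready)
    station when n of the N stations, k of them ready, may contend."""
    return comb(k, 1) * comb(N - k, n - 1) / comb(N, n)

# With N = 16 stations of which k = 4 are ready, the probability of
# exactly one transmission peaks at the sample size n = N/k = 4.
N, k = 16, 4
best = max(range(1, N + 1), key=lambda n: p_one_green(N, k, n))
print(best)   # 4
```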
After determining what n should be, the next part relates to choosing the
stations. The decision is made in a distributed way to which all stations agree. Several
methods have been proposed in the literature. One method is outlined below: The
stations are arranged in numerical order around a hypothetical circle. A window of size
n rotates around the circle. During each slot those stations inside the window are given
permission to send. If there was a successful transmission or no transmission at all, the
window is advanced n positions. If there was a collision, the window is shrunk back to
half its size and the process is repeated until the collision ceases.
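The window rule above can be sketched as a single state-transition function. The name adjust_window and the outcome labels are our own, not part of the protocol:

```python
def adjust_window(start, size, N, outcome):
    """One step of the rotating-window rule: advance the window n
    positions on a success or an idle slot; shrink it to half its
    size (re-probing the same stations) on a collision."""
    if outcome == "collision":
        return start, max(1, size // 2)   # shrink back to half
    return (start + size) % N, size       # advance n positions, wrap around

# Hypothetical trace: N = 16 stations, window initially 8 wide.
state = (0, 8)
for outcome in ["success", "collision", "collision", "idle"]:
    state = adjust_window(*state, 16, outcome)
print(state)   # (10, 2)
```

Note the modulo in the advance step: it is what makes the window rotate around the hypothetical circle of stations.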
Now let us consider how the network works under the following two conditions:
Light Load
If the number of ready stations is one or fewer, the window size is N. In other words, the
window will go all the way around and all stations will be allowed to send at will. Under
light load its behavior is similar to slotted ALOHA protocol.
Heavy Load
Now consider k = 2: n will be N/2 and the stations will be partitioned into two
groups, with half the stations operating under slotted ALOHA in the odd slots and the
other half operating the same way in the even slots. Finally if k>N/2, the sample size
will be one. During each slot exactly one station will be given permission to send so
there will be no collisions. The position of the lucky station will rotate around the
circle. In this case the system becomes identical to synchronous time division
multiplexing.
All the limited-contention protocols assume that stations have an estimate of
the number of stations wanting to transmit.
1.5.11 Summary
In this lesson we have discussed the MAC sublayer, which resides within the data link
layer and governs access to the shared communication medium, ensuring efficient and
fair distribution of bandwidth among connected devices. It employs techniques like
CSMA/CD (Carrier Sense Multiple Access with Collision Detection) in wired networks
and CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) in wireless networks to
manage access and mitigate collisions. MAC addresses, unique identifiers assigned to
network interfaces, play a crucial role in facilitating communication between devices at
this layer. We have also studied various LAN protocols in this lesson.
1.5.12 Keywords
ALOHA: It is a multiple access protocol that allows data to be transmitted over a public
network channel.
CSMA: It is a network protocol for carrier transmission that operates in the Medium
Access Control layer.
Bit-Map Protocol: It is a collision free protocol that operates in the Medium Access
Control layer of the OSI model.
BRAP (Broadcast Recognition with Alternating Priorities): It is a collision-free protocol of the Medium Access Control layer in which stations transmit in turn, in order of their addresses.
1.5.13 Short Answer Type Questions
1. What were the drawbacks of static channel allocation?
2. What are the five key assumptions underlying almost all dynamic channel
allocation methods?
3. What is meant by multiple access protocols?
4. How is the Bit-map Protocol different from Binary Countdown?
1.5.14 Long Answer Type Questions
1. To which layer of the OSI reference model does the MAC sublayer belong? What
is its main function?
2. Explain the working of Pure ALOHA. What are its main disadvantages?
3. Explain the CSMA protocol and bring out how it is different from CSMA/CD.
4. Write a note on any one protocol that resolves contention for the channel
without any collisions at all.
5. Explain the Multi-Level Multi-Access Protocol with the help of an example.
6. How does the Urn Protocol behave when the load is light and when the load is
heavy?
1.5.15 Suggested Readings
1. Andrew S. Tanenbaum, Computer Networks, Prentice Hall India, Third Edition.
2. Forouzan, Data Communication and Networking, Tata McGraw Hill.
3. D.E. Comer and D.L. Stevens, Internetworking with TCP/IP, Vol. II: Design,
Implementation and Internals, Prentice Hall, 1990.
4. D.E. Comer, Computer Networks and Internets, 2nd Edition, Addison Wesley,
2000.
https://forms.gle/KS5CLhvpwrpgjwN98
Note: Students, kindly click this Google Form link and fill in
this feedback form once.