
OSI reference model

The Open Systems Interconnection (OSI) model is a seven-layer structure that specifies the requirements for
communications between two computers. The ISO (International Organization for Standardization) standard
7498-1 defines this model. The model allows all network elements to operate together, no matter who
created the protocols and which computer vendor supports them.

The main benefits of the OSI model include the following:

• Helps users understand the big picture of networking

• Helps users understand how hardware and software elements function together

• Makes troubleshooting easier by separating networks into manageable pieces

• Defines terms that networking professionals can use to compare basic functional relationships on different
networks

• Helps users understand new technologies as they are developed

Layer 1 – The Physical Layer

The physical layer of the OSI model defines connector and interface specifications, as well as the medium
(cable) requirements. Electrical, mechanical, functional, and procedural specifications are provided for sending
a bit stream on a computer network.

Components of the physical layer include:

• Cabling system components

• Adapters that connect media to physical interfaces


• Connector design and pin assignments

• Hub, repeater, and patch panel specifications

• Wireless system components

• Parallel SCSI (Small Computer System Interface)

• Network Interface Card (NIC)

Layer 2 – The Data Link Layer

Layer 2 of the OSI model provides point-to-point delivery of frames and the following functions:

• Allows a device to access the network to send and receive messages

• Offers a physical address (MAC address) so a device’s data can be sent on the network

• Works with a device’s networking software when sending and receiving messages

• Provides error control, flow control

Layer 3 – The Network Layer

• IP addressing

• Routing

• Fragmentation

• Diagnostics and the reporting of logical variations in normal network operation.

Layer 4 – The Transport Layer

Some of the functions offered by the transport layer include:

• Application identification

• Client-side entity identification

• Confirmation that the entire message arrived intact

• Segmentation and reassembly of data for network transport

• Error control, flow control and access control

• Establishment and maintenance of both ends of virtual circuits

• Realignment of segmented data in the correct order on the receiving side

• Multiplexing or sharing of multiple sessions over a single physical link

Layer 5 – The Session Layer


• Virtual connection between application entities

• Synchronization of data flow

• Creation of dialog units

• Connection parameter negotiations

• Partitioning of services into functional groups

• Acknowledgements of data received during a session

• Retransmission of data if it is not received by a device

Layer 6 – The Presentation Layer

• Encryption and decryption of a message for security

• Compression and expansion of a message so that it travels efficiently

• Graphics formatting

• Content translation

• System-specific translation

Layer 7 – The Application Layer

Provides an interface for the end user operating a device connected to a network.

This layer is what the user sees when loading an application (such as a Web browser or e-mail client); that is, the
application layer presents the data the user views while using these applications.

Examples of application layer functionality include:

• Support for file transfers

• Ability to print on a network

• Electronic mail

• Electronic messaging

• Browsing the World Wide Web

 Transport (port addressing) - end-to-end delivery of segments


 Network (IP addressing) - source-to-destination delivery of packets
 Data link (MAC addressing) - point-to-point delivery of frames
TCP PROTOCOL

TCP organizes data so that it can be transmitted between a server and a client. It guarantees the integrity of
the data being communicated over a network. Before it transmits data, TCP establishes a connection between
the source and its destination, which it keeps alive until the communication is complete. It then breaks large
amounts of data into smaller packets, while ensuring data integrity throughout the process.

 TCP is short for Transmission Control Protocol.


 It is a transport layer protocol.
 It has been designed to send data packets over the Internet.
 It establishes a reliable end to end connection before sending any data.

Characteristics Of TCP-

TCP is a reliable protocol.


 It guarantees the delivery of data packets to the correct destination.
 After receiving a data packet, the receiver sends an acknowledgement to the sender.
 This tells the sender whether the data packet has reached its destination safely or not.
 TCP employs retransmission to compensate for packet loss.

TCP is a connection oriented protocol.


 TCP establishes an end to end connection between the source and destination.
 The connection is established before exchanging the data.
 The connection is maintained until the application programs at each end have finished exchanging data.

TCP handles both congestion and flow control.


 TCP handles congestion and flow control by controlling the window size.
 TCP reacts to congestion by reducing the sender window size.

TCP ensures in-order delivery.


 TCP ensures that the data packets get delivered to the destination in the same order they are sent by the sender.
 Sequence Numbers are used to coordinate which data has been transmitted and received.

TCP connections are full duplex.


 A TCP connection allows data to be sent in both directions at the same time.
 So, TCP connections are full duplex.

(Note: the localhost address is 127.0.0.1.)
TCP works in collaboration with the Internet Protocol (IP).
 A TCP connection is uniquely identified by using-
Combination of port numbers and IP Address of sender and receiver.
 IP Addresses indicate which systems are communicating.
 Port numbers indicate which end to end sockets are communicating.
 Port numbers are contained in the TCP header and IP Addresses are contained in the IP header.
 TCP segments are encapsulated into an IP datagram.
 So, TCP header immediately follows the IP header during transmission.

TCP can use both selective and cumulative acknowledgements (SACK).


 TCP uses a combination of Selective Repeat and Go back N protocols.
 In TCP, sender window size = receiver window size.
 In TCP, out of order packets are accepted by the receiver.
 When receiver receives an out of order packet, it accepts that packet but sends an acknowledgement for the expected packet.
 Receiver may choose to send independent acknowledgements or cumulative acknowledgement.

 TCP is a byte stream protocol. (Streamed data transfer)


 TCP provides error checking & recovery mechanism.
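
To make the reliable, connection-oriented behaviour described above concrete, here is a minimal sketch of a TCP exchange using Python's standard socket module. The host, port, and message values are arbitrary examples and are not part of the original notes.

import socket

HOST, PORT = "127.0.0.1", 5000            # localhost and an example port

def tcp_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                      # wait for connection requests
        conn, addr = srv.accept()         # connection (three-way handshake) completes here
        with conn:
            data = conn.recv(1024)        # TCP delivers the bytes reliably and in order
            conn.sendall(data)            # echo them back over the same connection

def tcp_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))         # connection is established before any data is sent
        cli.sendall(b"hello over TCP")
        print(cli.recv(1024))

The client's connect() call completes before any application data is sent, matching the connection-oriented behaviour listed above; lost segments are retransmitted by TCP itself, not by this code.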

The 4 Layers of the TCP/IP Model

The TCP/IP model defines how devices should transmit data between them and enables communication over
networks and large distances. The model represents how data is exchanged and organized over networks. It is
split into four layers, which set the standards for data exchange and represent how data is handled and packaged
when being delivered between applications, devices, and servers.

The four layers of the TCP/IP model are as follows:

1. Datalink layer: The datalink layer defines how data should be sent, handles the physical act of sending and
receiving data, and is responsible for transmitting data between applications or devices on a network. This
includes defining how data should be signaled by hardware and other transmission devices on a network, such
as a computer’s device driver, an Ethernet cable, a network interface card (NIC), or a wireless network. It is also
referred to as the link layer, network access layer, network interface layer, or physical layer and is the
combination of the physical and data link layers of the Open Systems Interconnection (OSI) model, which
standardizes communications functions on computing and telecommunications systems.
2. Internet layer: The internet layer is responsible for sending packets from a network and controlling their
movement across a network to ensure they reach their destination. It provides the functions and procedures for
transferring data sequences between applications and devices across networks.
3. Transport layer: The transport layer is responsible for providing a solid and reliable data connection between the
original application or device and its intended destination. This is the level where data is divided into packets
and numbered to create a sequence. The transport layer then determines how much data must be sent, where
it should be sent to, and at what rate. It ensures that data packets are sent without errors and in sequence and
obtains the acknowledgment that the destination device has received the data packets.
4. Application layer: The application layer refers to programs that need TCP/IP to help them communicate with
each other. This is the level that users typically interact with, such as email systems and messaging platforms. It
combines the session, presentation, and application layers of the OSI model.

UDP

 UDP is short for User Datagram Protocol.


 It is the simplest transport layer protocol.
 It has been designed to send data packets over the Internet.
 It simply takes the message from the application layer, attaches its header and hands the resulting datagram to the network layer.

Characteristics of UDP-
 It is a connectionless protocol.
 It is a stateless protocol.
 It is an unreliable protocol.
 It is a fast protocol.
 It offers the minimal transport service.
 It is almost a null protocol.
 It does not guarantee in order delivery.
 It does not provide congestion control mechanism.
 It is a good protocol for data flowing in one direction.

Need of UDP-
 TCP proves to be an overhead for certain kinds of applications.
 The Connection Establishment Phase, Connection Termination Phase etc of TCP are time consuming.
 To avoid this overhead, certain applications which require fast speed and less overhead use UDP.

Applications Using UDP-


 Applications which require one response for one request use UDP. Example- DNS.
 Routing Protocols like RIP and OSPF use UDP because they have very small amount of data to be transmitted.
 Trivial File Transfer Protocol (TFTP) uses UDP to send very small sized files.
 Broadcasting and multicasting applications use UDP.
 Streaming applications like multimedia, video conferencing etc use UDP since they require speed over reliability.
 Real time applications like chatting and online games use UDP.
 Management protocols like SNMP (Simple Network Management Protocol) use UDP.
 Bootp / DHCP uses UDP.
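
For contrast with TCP, here is a minimal sketch of a UDP exchange using Python's standard socket module: there is no connection setup and no delivery or ordering guarantee. Host and port values are illustrative.

import socket

HOST, PORT = "127.0.0.1", 5300            # example values only

def udp_receiver():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        data, addr = sock.recvfrom(1024)  # each recvfrom() returns one whole datagram
        print(data, "from", addr)

def udp_sender():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # sendto() needs no prior connection; the datagram may be lost or reordered
        sock.sendto(b"hello over UDP", (HOST, PORT))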

Difference between TCP/IP and OSI Model


• Full form: TCP/IP stands for Transmission Control Protocol/Internet Protocol; OSI stands for Open Systems Interconnection.
• Nature: TCP/IP is a communication protocol suite, based on standard protocols, that allows the connection of hosts over a network; the OSI model is a structured model that describes the functioning of a network.
• Origin: In 1982, the TCP/IP model became the standard language of ARPANET; in 1984, the OSI model was introduced by the International Organization for Standardization (ISO).
• Layers: TCP/IP comprises four layers (Network Interface, Internet, Transport, Application); OSI comprises seven layers (Physical, Data Link, Network, Transport, Session, Presentation, Application).
• Approach: TCP/IP follows a horizontal approach; OSI follows a vertical approach.
• Role: TCP/IP is an implementation of the OSI model's concepts; the OSI model is a reference model based on which a network is created.
• Dependence: TCP/IP is protocol dependent; OSI is protocol independent.

IPv4 - Packet Structure

Internet Protocol, being a layer-3 (network layer) protocol in the OSI model, takes data segments from layer 4 (transport) and divides
them into packets. Each IP packet encapsulates the data unit received from the layer above and adds its own header
information.

IPv4, short for Internet Protocol version 4, is the fourth version of the Internet Protocol (IP).
IP is responsible for delivering data packets from the source host to the destination host.
This delivery is based solely on the IP addresses in the packet headers.

IPv4 is the first major version of IP.


IPv4 is a connectionless protocol for use on packet-switched networks.
Version-
 Version is a 4 bit field that indicates the IP version used.
 The most popularly used IP versions are version-4 (IPv4) and version-6 (IPv6).
 Only IPv4 uses the above header.
 So, this field always contains the decimal value 4.

Header Length-
 Header length is a 4 bit field that contains the length of the IP header.
 It helps in knowing from where the actual data begins.
 The length of IP header always lies in the range-[20 bytes , 60 bytes]

The initial 5 rows of the IP header (4 bytes each) are always present.


So, minimum length of IP header = 5 x 4 bytes = 20 bytes.
The size of the 6th row, representing the Options field, varies.
The size of the Options field can go up to 40 bytes.
So, maximum length of IP header = 20 bytes + 40 bytes = 60 bytes.
Header length = Header length field value x 4 bytes
NOTES
Header length and Header length field value are two different things.
The range of header length field value is always [5, 15].
The range of header length is always [20, 60].
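
As a quick worked check of the relation above (nothing beyond the formula already stated), a tiny Python helper:

def header_length_bytes(ihl_field_value):
    # Header length (bytes) = header length field value x 4
    assert 5 <= ihl_field_value <= 15, "IHL field value ranges from 5 to 15"
    return ihl_field_value * 4

print(header_length_bytes(5))    # 20 bytes: header with no Options
print(header_length_bytes(15))   # 60 bytes: header with the full 40-byte Options field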

Type of Service-
 Type of service is 8 bit field that is used for Quality of Service (QoS).

Total Length-
 Total length is a 16 bit field that contains the total length of the datagram (in bytes)
 Total length = Header length + Payload length
 Minimum total length of datagram = 20 bytes (20 bytes header + 0 bytes data)
 Maximum total length of datagram = Maximum value of 16 bit word = 65535 bytes

Identification-
 Identification is a 16 bit field.
 It is used for the identification of the fragments of an original IP datagram.

 When an IP datagram is fragmented, each fragmented datagram is assigned the same identification
number.
 This number is useful during the reassembly of fragmented datagrams. It helps identify which IP
datagram a fragment belongs to.

DF Bit-
 DF bit stands for Do Not Fragment bit.
 Its value may be 0 or 1.
 When DF bit is set to 0, it grants the permission to the intermediate devices to fragment the datagram
if required.

 When DF bit is set to 1, it indicates the intermediate devices not to fragment the IP datagram at any
cost.
 If the network requires the datagram to be fragmented to travel further but the DF bit does not allow its
fragmentation, then the datagram is discarded.
 An error message is sent to the sender saying that the datagram has been discarded due to its settings.

MF Bit-
 MF bit stands for More Fragments bit.
 Its value may be 0 or 1.

 When MF bit is set to 0, it indicates to the receiver that the current datagram is either the last
fragment in the set or that it is the only fragment.

 When MF bit is set to 1, it indicates to the receiver that the current datagram is a fragment of some
larger datagram. More fragments are following.
 MF bit is set to 1 on all the fragments except the last one.

Fragment Offset-
 Fragment Offset is a 13 bit field.
 It indicates the position of a fragmented datagram in the original unfragmented IP datagram.
 The first fragmented datagram has a fragment offset of zero.
 Fragment offset for a given fragmented datagram = number of data bytes ahead of it in the original
unfragmented datagram (the offset field stores this value divided by 8, i.e., in units of 8 bytes).

Time to Live-

 Time to live (TTL) is an 8 bit field.


 It indicates the maximum number of hops a datagram can take to reach the destination.
 The main purpose of TTL is to prevent the IP datagrams from looping around forever in a routing loop.
 The value of TTL is decremented by 1 when the datagram takes a hop to any intermediate device having a
network layer, or when it takes the hop to the destination.

 If the value of TTL becomes zero before reaching the destination, then datagram is discarded.
Protocol-

 Protocol is an 8 bit field.


 It tells the network layer at the destination host which protocol the IP datagram belongs to.
 In other words, it identifies the next-level protocol for the network layer at the destination side.
 Protocol number of ICMP is 1, IGMP is 2, TCP is 6 and UDP is 17.

Header Checksum-

 Header checksum is a 16 bit field.


 It contains the checksum value of the entire header.
 The checksum value is used for error checking of the header.

 At each hop, the header checksum is recomputed and compared with the value contained in this field.
 If header checksum is found to be mismatched, then the datagram is discarded.
 Router updates the checksum field whenever it modifies the datagram header.

The fields that may be modified are-


TTL
Options
Datagram Length
Header Length
Fragment Offset
NOTE
It is important to note-
Computation of header checksum includes IP header only.
Errors in the data field are handled by the encapsulated protocol.
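
As an illustration of how this checksum can be computed, here is a small Python sketch of the Internet checksum used by IPv4 (the 16-bit one's complement of the one's complement sum of the header words, with the checksum field treated as zero while computing). The function name is illustrative.

def ipv4_checksum(header: bytes) -> int:
    if len(header) % 2:                             # IP headers are a multiple of 4 bytes anyway
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)    # fold any carry back in (one's complement sum)
    return ~total & 0xFFFF                          # one's complement of the sum

# A router or receiver can verify a header by summing all words including the
# checksum field; the folded result is 0xFFFF for an intact header.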

Source IP Address-
 Source IP Address is a 32 bit field.
 It contains the logical address of the sender of the datagram.

Destination IP Address-
 Destination IP Address is a 32 bit field.
 It contains the logical address of the receiver of the datagram.

Options-
 Options is a field whose size varies from 0 bytes to 40 bytes.
 This field is used for several purposes such as-
o Record route
o Loose Source Routing
o Strict Source Routing
o Padding
The maximum number of IPv4 router addresses that can be recorded in the Record Route option field of
an IPv4 header is 9.
Explanation-

In IPv4, size of IP Addresses = 32 bits = 4 bytes.


Maximum size of Options field = 40 bytes.
So, it seems maximum number of IP Addresses that can be recorded = 40 / 4 = 10.
But some space is required for the option's own overhead (its type, length, and pointer fields).
So, 4 bytes of the Options field are effectively unavailable for recording addresses.
Therefore, the maximum number of IP addresses that can be recorded = 36 / 4 = 9.
Fragmentation
 IP Fragmentation is a process of dividing the datagram into fragments during its transmission.
 Fragmentation is done at the network layer by intermediary devices such as routers; reassembly is done only at the destination host.
 Each network has its maximum transmission unit (MTU).
 It dictates the maximum size of the packet that can be transmitted through it.
 Data packets of size greater than MTU cannot be transmitted through the network.
 So, datagrams are divided into fragments of size less than or equal to MTU.
 When router receives a datagram to transmit further, it examines the following-
o Size of the datagram
o MTU of the destination network
o DF bit value in the IP header
Case-01:
 Size of the datagram is found to be smaller than or equal to MTU.
 In this case, router transmits the datagram without any fragmentation.

Case-02:
 Size of the datagram is found to be greater than MTU and DF bit set to 1.
 In this case, router discards the datagram and sends an error message (ICMP) back to the original sender.

Case-03:
 Size of the datagram is found to be greater than MTU and DF bit set to 0.
 In this case, router divides the datagram into fragments of size less than or equal to MTU.
 Router attaches an IP header with each fragment making the following changes in it.
 Then, router transmits all the fragments of the datagram.

Changes Made By Router-
 It changes the value of total length field to the size of fragment.
 It sets the MF bit to 1 for all the fragments except the last one.
 For the last fragment, it sets the MF bit to 0.
 It sets the fragment offset field value.
 It recalculates the header checksum.
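
The arithmetic a router performs in Case-03 can be sketched in Python as below. This is an illustration under stated assumptions (a fixed 20-byte header on every fragment, offsets stored in 8-byte units as in the real IPv4 offset field); the function name and return format are invented for the example.

def fragment(payload_len, mtu, df=0):
    header = 20
    if payload_len + header <= mtu:
        return [(payload_len, 0, 0)]                 # (data bytes, MF bit, offset field)
    if df == 1:
        raise ValueError("DF=1: datagram discarded, error reported to the sender")
    step = ((mtu - header) // 8) * 8                 # data per fragment, multiple of 8 bytes
    frags, sent = [], 0
    while sent < payload_len:
        size = min(step, payload_len - sent)
        mf = 1 if sent + size < payload_len else 0   # MF = 0 only on the last fragment
        frags.append((size, mf, sent // 8))          # offset field = data bytes ahead / 8
        sent += size
    return frags

# Example: a 4000-byte payload over an MTU of 1500 gives fragments carrying
# 1480, 1480 and 1040 data bytes with offset fields 0, 185 and 370.
print(fragment(4000, 1500))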

CYCLIC REDUNDANCY CHECK CODES FOR ERROR DETECTION


The most commonly used method for detecting burst errors in a data stream is the Cyclic Redundancy Check (CRC) method. This
method is based on the use of polynomial codes. Polynomial codes are based on representing bit strings as polynomials
with coefficients of 0 and 1 only.

For example, the bit string 1110011 can be represented by the following polynomial:

1.x^6 + 1.x^5 + 1.x^4 + 0.x^3 + 0.x^2 + 1.x^1 + 1.x^0

This is equivalent to:

x^6 + x^5 + x^4 + x + 1
The polynomial is manipulated using modulo 2 arithmetic (which is equivalent to Exclusive OR or XOR).

Depending on the content of the frame a set of check digits is computed for each frame that is to be transmitted. Then the
receiver performs the same arithmetic as the sender on the frame and checks the digits. If the result after computation is
the same then the data received is error free.

A different answer after the computation by the receiver indicates that, some error is present in the data.
The computed check digits are called the frame check sequence (FCS) or the cyclic redundancy check (CRC).

The CRC method requires that:


The sender and receiver should agree upon a generator polynomial before the transmission process starts.
To compute the checksum for a frame with m bits, the frame must be longer than the generator polynomial.

Algorithm for Computing the Checksum is as follows as:


Let D(x) be the data and G(x) be the generator polynomial. Let r be the degree of the generator polynomial G(x).

Step 1: Multiply the data D(x) by x^r, appending r zeros at the low-order end of the frame.
Step 2: Divide the result obtained in step 1 by G(x), using modulo-2 division.
Step 3: Append the remainder from step 2 to D(x), thus placing r check bits in the low-order positions.
Example:
Data: 1011101
Generator polynomial G(x): x^4 + x^2 + 1 = 10101
Here the generator polynomial is 5 bits long, so we append 5 − 1 = 4 zeros at the low-order end of the data stream. So the
data becomes:
Data: 1011101 0000
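
The modulo-2 division of the algorithm above can be sketched in a few lines of Python; applied to this example it yields the remainder 0111, so the transmitted frame is 10111010111. The function name is illustrative.

def crc_remainder(data_bits, generator):
    r = len(generator) - 1                            # degree of G(x)
    work = list(data_bits + "0" * r)                  # Step 1: append r zeros
    for i in range(len(data_bits)):                   # Step 2: modulo-2 long division
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))   # XOR with the divisor
    return "".join(work[-r:])                         # Step 3: last r bits are the remainder

crc = crc_remainder("1011101", "10101")
print(crc)                   # -> 0111
print("1011101" + crc)       # transmitted frame: 10111010111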

MEDIA ACCESS PROTOCOLS


The Data Link Layer (DLL) is divided into two sub-layers: the Media Access Control (MAC) layer and the Logical Link
Control (LLC) layer. In a network, nodes are connected to, or use, a common transmission medium. Based on the connection
of nodes, a network can be divided into two categories, that is, point-to-point links and broadcast links.

If we talk about a broadcast network, then a control process for solving the problem of accessing a multi-access channel is
required. An important issue to be taken into consideration is who gets access to the channel when
many nodes are in competition.

The protocol which decides who will get access to the channel and who will go next on the channel belongs to MAC sub-
layer of DLL.

MAC sub layer’s primary function is to manage the allocation of one broadcast channel among N competing users. For
the same, many methods are available such as static, and dynamic allocation method.

In the static channel allocation method, allocating a single channel among N competing users can be done by either FDM
(frequency division multiplexing) or TDM (time division multiplexing). In dynamic channel allocation, the
important issues to be considered are whether time is continuous or slotted, whether the stations can sense the carrier, and
the station model, usually a large number of stations each with small and bursty traffic.

Many methods are available for multiple access channel like ALOHA, CSMA etc.
Pure Aloha
In an ALOHA network, one station works as the central controller and the other stations are connected to it. If any of the
stations wants to transmit data to another, it sends the data first to the central station, which broadcasts it to all the
stations.

Here, the medium is shared between the stations. So, if two stations transmit frames at overlapping times, a collision
will occur in the system. No station is constrained; any station that has a data frame to transmit can transmit at any
time. After sending a frame, a station listens for up to 2 times the maximum propagation time; if it receives its own frame
back within this limit, it assumes that the destination has received it. If the sender station does not receive its own frame
during this time limit, it retransmits the frame.

Let R be the bit rate of the transmission channel and L be the length of the frame. Here, we are assuming that the size of
frame will be constant and hence, it will take constant time t= L/R for transmission of each packet.

As in the Pure ALOHA protocol frames can be sent at any time, the probability of collision is very high.
Hence, to prevent a frame from colliding, no other frame should be sent within its transmission time.

Let t be the time required to transmit a frame, and consider a reference frame whose transmission starts at time t0 + t. If any
other station sends a frame between t0 and t0 + t, the end of that frame will collide with the beginning of the reference frame.
Similarly, if any other station transmits a frame in the interval t0 + t to t0 + 2t, it will again result in a garbled frame due to
collision with the reference frame. Hence, 2t is the vulnerable interval for the frame. In case a frame meets with a collision,
that frame is retransmitted after a random delay.

Its maximum throughput is 18%.

Hence, for a transmission to be successful, no additional frame should be transmitted in the vulnerable
interval 2t.

To find the probability of no collision with a reference frame, we assume that users generate new
frames according to a Poisson distribution. Let S be the arrival rate of new frames per frame time. Since we are finding the
probability of no collision, S represents the throughput of the system. Let G be the total arrival rate of frames, including
retransmitted frames (also called the load on the system). Frame arrivals are assumed to be Poisson distributed with an
average of G frames per frame time, so the average number of arrivals in the vulnerable interval of 2t seconds is 2G. The
probability of k frame transmissions in 2t seconds is given by the Poisson distribution as follows:

P[k] = (2G)^k * e^(-2G) / k!,  k = 0, 1, 2, 3, ...

The throughput of the system S is equal to the total arrival rate G times the probability of successful transmission with no
collision:

S = G * P(zero other frames in 2t seconds) = G * e^(-2G)

The relationship between S and G is shown in the figure.

As G increases, S also increases for small values of G. At G = 1/2, S attains its peak value, S = 1/(2e), i.e., approximately
0.18. After that, S starts decreasing for increasing values of G. Here, the average number of transmission attempts per
successful frame can be given as G/S = e^(2G).

The average number of unsuccessful transmission attempts per frame is G/S − 1 = e^(2G) − 1. From this, we know that the
performance of ALOHA is not good, as unsuccessful transmissions increase exponentially with the load G.

SLOTTED ALOHA
In slotted ALOHA, we can improve the performance by reducing the probability of collision. Stations are
allowed to transmit frames only in slots. If more than one station transmits in the same slot, a collision results, but restricting
transmissions to slot boundaries reduces the occurrence of collisions in the network. Every station has to maintain a record
of the time slots, and the process of transmission can be initiated by a station only at the beginning of a time slot. Here also,
frames are assumed to be of constant length and with the same transmission time. Here a frame will collide with the reference
frame only if it arrives in the interval t0 − t to t0. Hence, the vulnerable period is reduced to t seconds.

The throughput of the system S is equal to the total arrival rate G times the probability of successful transmission with
no collision. That is:

S = G * P(zero frame transmissions in t seconds)

The probability of k frame transmissions in t seconds is given by the Poisson distribution as follows:

P[k] = G^k * e^(-G) / k!,  k = 0, 1, 2, 3, ...

Here the average load in the vulnerable interval is G (one frame time). Hence, the probability of zero frames in t seconds = e^(-G), and

S = G * e^(-G)

Its maximum throughput is about 36% (at G = 1, S = 1/e).
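
A quick numeric check of the two throughput formulas quoted above, as a small Python sketch:

import math

def pure_aloha(G):
    return G * math.exp(-2 * G)       # S = G * e^(-2G)

def slotted_aloha(G):
    return G * math.exp(-G)           # S = G * e^(-G)

print(round(pure_aloha(0.5), 3))      # peak of pure ALOHA: ~0.184 at G = 1/2
print(round(slotted_aloha(1.0), 3))   # peak of slotted ALOHA: ~0.368 at G = 1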

CARRIER SENSE MULTIPLE ACCESS (CSMA)


Protocols in which a station senses the channel before starting transmission are known as CSMA protocols (also known as
listen-before-talk protocols).

CSMA has many variants, which differ in how a station that has frames to transmit behaves when it finds the channel busy,
i.e., when some transmission is already going on.

The following are some versions of CSMA protocols:

• 1-Persistent CSMA
In this protocol, a station that wants to transmit a frame senses the channel first. If the channel is found busy, meaning that
some transmission is going on on the medium, the station keeps sensing the channel continuously. As soon as the station
finds that the channel has become idle, it transmits its frame. But if more than one station is waiting and keeping track of
the channel, a collision will occur in the system because both waiting stations will transmit their frames at the same time.
Another possibility of collision arises when a frame has not yet reached a second station, so the channel appears free to
that station; the second station then also starts its transmission, and that leads to a collision. Thus, 1-persistent CSMA is a
greedy protocol, as it tries to capture the channel as soon as it finds it idle, and hence it has a high frequency of collisions
in the system. In case of collision, the station senses the channel again after a random delay.

• Non-Persistent CSMA
To reduce the frequency of collisions in the system, another version of CSMA, non-persistent CSMA, can be used. Here, a
station that has frames to transmit first senses whether the channel is busy or free. If the station finds the channel to be
free, it simply transmits its frame. Otherwise, it waits for a random amount of time and repeats the process after that time
span is over. As it does not continuously sense the channel, it is less greedy in comparison to 1-persistent CSMA. It reduces
the probability of collision because the waiting stations will not transmit their frames at the same time: each station waits
a random amount of time before restarting the process, and the random times will generally differ between stations, so the
likelihood that waiting stations start their transmissions at the same time is reduced. However, it can lead to longer delays
than 1-persistent CSMA.

• p-Persistent CSMA
This category of CSMA combines features of the above versions, that is, 1-persistent CSMA and non-persistent CSMA.
This version is applicable to slotted channels. The station that has frames to transmit senses the channel, and if the channel
is found free it transmits the frame with probability p and defers to the next slot with probability 1 − p. If the channel is
found busy, the station keeps sensing the channel until it becomes idle. Here the value of p is the controlling parameter.

After studying the behavior of throughput versus load for the CSMA variants, it is found that non-persistent CSMA has the
maximum throughput. But we can improve upon this and achieve higher throughput in the system by using a collision
detection mechanism; for the same, we will discuss CSMA/CD in the next section.

CSMA WITH COLLISION DETECTION (CSMA/CD)


In CSMA/CD, a station aborts its transmission as soon as it detects a collision. If two stations sense that the channel is free
at the same time, then both start the transmission process immediately. After that, both stations get the information that a
collision has occurred in the system, and on detecting the collision each station aborts its transmission. In this way, time is
saved and the utilization of bandwidth is improved. This protocol is known as CSMA/CD, and this scheme is commonly
used in LANs.

Collisions are detected by looking at the strength of the electric pulse or signal received during transmission. After a
station detects a collision, it aborts the transmission process, waits for some random amount of time and tries the
transmission again, with the assumption that no other station has started its transmission in the interval of propagation
time. Hence, in CSMA/CD the channel can be in any of three states (contention, transmission, or idle), as shown in the figure.
CSMA with Collision Avoidance (CSMA/CA)
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for carrier
transmission that operates in the Medium Access Control (MAC) layer. In contrast to CSMA/CD (Carrier Sense
Multiple Access/Collision Detection) that deals with collisions after their occurrence, CSMA/CA prevents
collisions prior to their occurrence.
 When a frame is ready, the transmitting station checks whether the channel is idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station waits for an Inter-frame gap (IFG) amount of time and then sends the
frame.
 After sending the frame, it sets a timer.
 The station then waits for acknowledgement from the receiver. If it receives the acknowledgement
before expiry of timer, it marks a successful transmission.
 Otherwise, it waits for a back-off time period and restarts the algorithm.
 CSMA/CA prevents collisions.
 Due to acknowledgements, data is not lost unnecessarily.
 It avoids wasteful transmissions.
 It is very well suited for wireless transmission.

CSMA/CD (Carrier-sense multiple access with collision detection):-

Carrier sense multiple access with collision detection or CSMA/CD is a protocol (or rule) used by
computer ethernet networks. It stops computers from sending information on the same ethernet wire at
the same time. With this rule, a computer will check that the wire is not being used before it sends
information. This ability to check is called "carrier sense." This rule is used when many computers can
use the same connection. This is called "multiple access." If computers do send information at exactly
the same time, the computers can tell a mistake has been made and stop sending. This is called
"collision detection." When this collision occurs, the computers stop sending information, wait for a
random amount of time, and then check before resending the information.

The following procedure is used to initiate a transmission. The procedure is complete when the frame is
transmitted successfully or a collision is detected during transmission.

1. Is a frame ready for transmission? If not, wait for a frame.


2. Is medium idle? If not, wait until it becomes ready.
3. Start transmitting and monitor for collision during transmission.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and complete frame transmission.

The following procedure is used to resolve a detected collision. The procedure is complete when retransmission
is initiated or the retransmission is aborted due to numerous collisions.

1. Continue transmission (with a jam signal instead of frame header/data/CRC) until minimum packet time is
reached to ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait the random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.
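
Step 4 above ("calculate and wait the random backoff period") is classically done with truncated binary exponential backoff. The sketch below assumes the usual Ethernet-style parameters (a 51.2 microsecond slot time for 10 Mbps Ethernet and a 16-attempt limit); the names and constants are illustrative, not taken from the text above.

import random

SLOT_TIME = 51.2e-6      # seconds; slot time of classic 10 Mbps Ethernet
MAX_ATTEMPTS = 16

def backoff_delay(collision_count):
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: abort transmission")
    k = min(collision_count, 10)              # exponent is truncated at 10
    slots = random.randint(0, 2 ** k - 1)     # wait a random number of slot times
    return slots * SLOT_TIME

print(backoff_delay(1))   # 0 or 1 slot time
print(backoff_delay(3))   # 0 .. 7 slot times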

CRC

What Is a Cyclic Redundancy Check (CRC)?

The CRC is a network method designed to detect errors in the data and information transmitted over the
network. This is done by performing a binary division on the transmitted data at the sender's side and
verifying the result at the receiver's side.

CRC Terms and Attributes

CRC is performed both at the sender and the receiver side. CRC applies the CRC Generator and CRC Checker
at the sender and receiver sides, respectively.

The CRC is an error-detection algorithm in the same family as the checksum, using modulo-2 arithmetic as the basis of its
operation. It is based on the values of polynomial coefficients in binary format for performing the calculations.

To understand the working of the CRC method, we will divide the steps into two parts:

Sender Side (CRC Generator and Modulo Division):

1. The first step is to append zeros to the data to be sent; the number of zeros is k − 1, where k is the number of bits in the
divisor obtained from the generator polynomial.
2. Apply modulo-2 binary division to the padded data (using XOR) and obtain the remainder of the division.
3. The last step is to append the remainder to the end of the original data and share the result with the receiver.

Receiver Side (Checking for errors in the received data):

To check for errors, the receiver performs the modulo-2 division again and checks whether the remainder is 0 or not:

1. If the remainder is 0, the data bit received is correct, without any errors.
2. If the remainder is not 0, the data received is corrupted during transmission.

NETWORK TOPOLOGY
Topology refers to the shape of a network, or the network’s layout. How different nodes in a network are connected to
each other and how they communicate with each other is determined by the network's topology.

Some of the most common network topologies are:

• Bus topology
• Star topology
• Ring topology
• Tree topology
• Mesh topology
• Cellular Topology

The parameters that are to be considered while selecting a physical topology are:

• Ease of installation.
• Ease of reconfiguration.
• Ease of troubleshooting.

Bus Topology

In Bus topology, all devices are connected to a central cable, called the bus or backbone. The bus topology connects
workstations using a single cable. Each workstation is connected to the next workstation in a point-to-point fashion.

In this type of topology, if one workstation goes faulty all workstations may be affected as all workstations share the
same cable for the sending and receiving of information. The cabling cost of bus systems is the least of all the different
topologies. Each end of the cable is terminated using a special terminator.

Advantages of Bus Topology

• Installation is easy and cheap when compared to other topologies.


• Connections are simple and this topology is easy to use.
• Less cabling is required.

Disadvantages of Bus Topology

• Used only in comparatively small networks.


• As all computers share the same bus, the performance of the network deteriorates when we increase the number of
computers beyond a certain limit.
• Fault identification is difficult.
• A single fault in the cable stops all transmission.

Star Topology
Star topology uses a central hub through which, all components are connected. In a Star topology, the central hub is the
host computer, and at the end of each connection is a terminal as shown in Figure.

Nodes communicate across the network by passing data through the hub. A star network uses a significant amount of
cable as each terminal is wired back to the central hub, even if two terminals are side by side but several hundred
meters away from the host. The central hub makes all routing decisions, and all other workstations can be simple.

An advantage of the star topology is that failure of one of the terminals does not affect any other terminal; however,
failure of the central hub affects all terminals.

This type of topology is frequently used to connect terminals to a large time-sharing host computer.

Advantages of Star Topology

• Installation and configuration of network is easy.


• Less expensive when compared to mesh topology.
• Faults in the network can be easily traced.
• Expansion and modification of star network is easy.
• Single computer failure does not affect the network.
• Supports multiple cable types like shielded twisted pair cable, unshielded twisted pair cable, ordinary telephone cable
etc.
Disadvantages of Star Topology

• Failure in the central hub brings the entire network to a halt.


• More cabling is required in comparison to tree or bus topology because each node is connected to the central hub.

Ring Topology

In Ring Topology all devices are connected to one another in the shape of a closed loop, so that each device is connected
directly to two other devices, one on either side of it. Data is transmitted around the ring in one direction only; each
station passing on the data to the next station till it reaches its destination.

Information travels around the ring from one workstation to the next. Each packet of data sent on the ring is prefixed by
the address of the station to which it is being sent. When a packet of data arrives, the workstation checks to see if the
packet address is the same as its own, if it is, it grabs the data in the packet. If the packet does not belong to it, it sends
the packet to the next workstation in the ring.

The common implementation of this topology is token ring. A break in the ring causes the entire network to fail.
Individual workstations can be isolated from the ring.

Advantages of Ring Topology

• Easy to install and modify the network.


• Fault isolation is simplified.
• Unlike Bus topology, there is no signal loss in Ring topology because the tokens are data packets that are re-generated
at each node.

Disadvantages of Ring Topology

• Adding or removing computers disrupts the entire network.


• A break in the ring can stop the transmission in the entire network.
• Finding fault is difficult.
• Expensive when compared to other topologies.

Tree Topology

Tree topology is a LAN topology in which only one route exists between any two nodes on the network. The pattern of
connection resembles a tree in which all branches spring from one root. Tree topology is a hybrid topology, it is similar
to the star topology but the nodes are connected to the secondary hub, which in turn is connected to the central hub. In
this topology groups of star-configured networks are connected to a linear bus backbone.

Advantages of Tree Topology

• Installation and configuration of network is easy.


• Less expensive when compared to mesh topology.
• Faults in the network can be easily detected and traced.
• The addition of the secondary hub allows more devices to be attached to the central hub.
• Supports multiple cable types like shielded twisted pair cable, unshielded twisted pair cable, ordinary telephone cable
etc.

Disadvantages of Tree Topology

• Failure in the central hub brings the entire network to a halt.


• More cabling is required when compared to bus topology because each node is connected to the central hub.

Mesh Topology
Devices are connected with many redundant interconnections between network nodes. In a fully connected mesh topology,
every node has a connection to every other node in the network. The cable requirements are high, but there are
redundant paths built in. Failure in one of the computers does not cause the network to break down, as they have
alternative paths to other computers.

Mesh topologies are used for critical connections between host computers (typically telephone exchanges). Alternate paths allow
each computer to balance the load to other computer systems in the network by using more than one of the connection
paths available. A fully connected mesh network therefore has n (n-1)/2 physical channels to link n devices. To
accommodate these, every device on the network must have (n-1) input/output ports
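
A tiny worked example of the figures above, n(n-1)/2 links and (n-1) ports per device, in Python:

def mesh_requirements(n):
    links = n * (n - 1) // 2          # number of point-to-point links
    ports_per_device = n - 1          # I/O ports needed on every device
    return links, ports_per_device

print(mesh_requirements(5))    # (10, 4)
print(mesh_requirements(10))   # (45, 9)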

Advantages of Mesh Topology

• Use of dedicated links eliminates traffic problems.
• Failure in one of the computers does not affect the entire network.
• Point-to-point links make fault isolation easy.
• It is robust.
• Privacy between computers is maintained as messages travel along dedicated paths.

Disadvantages of Mesh Topology

• The amount of cabling required is high.
• A large number of I/O (input/output) ports are required.

Cellular Topology

Cellular topology divides the area being serviced into cells. In wireless media,
each point transmits within a certain geographical area called a cell; each cell represents a portion of the total network area.
Figure shows computers using Cellular Topology. Devices that are present within the cell, communicate through a
central hub. Hubs in different cells are interconnected and hubs are responsible for routing data across the network.
They provide a complete network infrastructure.

Cellular topology is applicable only in case of wireless media that does not require cable connection.

Advantages of Cellular Topology

• If the hubs maintain a point-to-point link with devices, trouble shooting is easy.
• Hub-to-hub fault tracking is more complicated, but allows simple fault isolation.

Disadvantages of Cellular Topology

• When a hub fails, all devices serviced by the hub lose service (are affected).
What is Multiplexing?
 George Owen Squier developed the telephone carrier multiplexing in 1910.

Multiplexing is a technique used to combine and send multiple data streams over a single medium. The
process of combining the data streams is known as multiplexing, and the hardware used for multiplexing is known
as a multiplexer.

Multiplexing is achieved by using a device called Multiplexer (MUX) that combines n input lines to generate a
single output line. Multiplexing follows many-to-one, i.e., n input lines and one output line.

Demultiplexing is achieved by using a device called Demultiplexer (DEMUX) available at the receiving end.
DEMUX separates a signal into its component signals (one input and n outputs). Therefore, we can say that
demultiplexing follows the one-to-many approach.

Why Multiplexing?

 The transmission medium is used to send the signal from sender to receiver. The medium can only have one
signal at a time.
 If there are multiple signals to share one medium, then the medium must be divided in such a way that each
signal is given some portion of the available bandwidth. For example, if there are 10 signals and the bandwidth of the
medium is 100 units, then each signal gets 10 units.
 When multiple signals share the common medium, there is a possibility of collision. Multiplexing concept is used
to avoid such collision.
 Transmission services are very expensive.

Concept of Multiplexing
 The 'n' input lines are transmitted through a multiplexer and multiplexer combines the signals to form a
composite signal.

 The composite signal is passed through a Demultiplexer and demultiplexer separates a signal to component
signals and transfers them to their respective destinations

Frequency Division Multiplexing

Suppose, it is human voice that has to be transmitted, over a telephone. This has frequencies that are mostly within the
range of 300 Hz to 3400 Hz. We can modulate this on a bearer or carrier channel, such as one at 300 kHz. Another
transmission that has to be made can be modulated to a different frequency, such as, 304 kHz, and yet another
transmission could be made simultaneously at 308 kHz. We are thus, dividing up the channel from 300 kHz up to 312 kHz
into different frequencies for sending data. This is Frequency Division Multiplexing (FDM) because all the different
transmissions are happening at the same time – it is only the frequencies that are divided up.

The composite signal to be transmitted over the medium of our choice is obtained by summing up the different signals
to be multiplexed. The transmission is received at the other end, the destination and there, it has to be separated into
its original components, by demultiplexing. Frequency overlap is a serious issue when it comes to frequency division
multiplexing and it must be completely avoided. Two frequency ranges can be separated by using some narrow
frequency ranges called guard bands. The guard bands avoid signal interference and enhance the quality of
communication
Time Division Multiplexing


In frequency division multiplexing, all signals operate at the same time with different frequencies, but in time-division
multiplexing all signals operate with the same frequency at different times. This is a baseband transmission system, in which
all data sources are sampled sequentially and combined to form a composite baseband signal, which travels through
the medium and is demultiplexed into the appropriate independent message signals at the receiving end.

The composite signal has some dead space between the successive sampled pulses, which is essential to prevent
inter-channel crosstalk. Along with the sampled pulses, one synchronizing pulse is sent in each cycle. These data pulses
along with the control information form a frame. Each of these frames contains a cycle of time slots, and in each frame,
one or more slots are dedicated to each data source. The maximum bandwidth (data rate) of a TDM system should be at
least equal to the sum of the data rates of the sources.

Synchronous Time Division Multiplexing

Synchronous TDM is called synchronous mainly because each time slot is preassigned to a fixed source. The time slots
are transmitted irrespective of whether the sources have any data to send or not. Hence, for the sake of simplicity of
implementation, channel capacity is wasted. Although fixed assignment is used, TDM devices can handle sources of
different data rates. This is done by assigning fewer slots per cycle to the slower input devices than to the faster devices.
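
A minimal Python sketch of the synchronous TDM behaviour just described: each source owns a fixed slot in every frame, and the slot is transmitted (as filler) even when its source has nothing to send. The data and fill values are illustrative.

def synchronous_tdm(sources, fill=b"-"):
    # sources: list of per-source queues (lists of single-byte values)
    frames = []
    while any(sources):
        frame = []
        for q in sources:                           # one slot per source, fixed order
            frame.append(q.pop(0) if q else fill)   # an empty slot wastes channel capacity
        frames.append(b"".join(frame))
    return frames

A = [b"a", b"a", b"a"]
B = [b"b"]                          # slow source: its slots go mostly unused
C = [b"c", b"c"]
print(synchronous_tdm([A, B, C]))   # [b'abc', b'a-c', b'a--']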

Asynchronous Time Division Multiplexing

One drawback of the TDM approach, as discussed earlier, is that many of the time slots in the frame are wasted. It is
because, if a particular terminal has no data to transmit at particular instant of time, an empty time slot will be
transmitted. An efficient alternative to this synchronous TDM is statistical TDM, also known as asynchronous TDM or
Intelligent TDM. It dynamically allocates the time slots on demand to separate input channels, thus saving the channel
capacity.

Switching:

1. Message Switching
2. Packet switching
a. Datagram packet switching(UDP/Connectionless communication)
b. Virtual circuit packet switching(TCP/ Connection oriented
communication)
i. SVC(connect, transmit, idle, disconnect)
ii. PVC(transmit, idle)
3. Circuit switching.
Shorts:-

MODES OF DATA TRANSMISSION / DATA TRANSFER MODE :-


Simplex, Half Duplex and Full Duplex Communication

Simplex

The simplest signal flow technique is the simplex configuration. In Simplex transmission, one of the communicating
devices can only send data, whereas the other can only receive it. Here, communication is only in one direction
(unidirectional) where one party is the transmitter and the other is the receiver. Examples of simplex communication are
the simple radio, and Public broadcast television where, you can receive data from stations but can’t transmit data back.
The television station sends out electromagnetic signals. The station does not expect and does not monitor for a return
signal from the television set. This type of channel design is easy and inexpensive to set up.

Half Duplex

Half duplex refers to two-way communication where only one party can transmit data at a time. Unlike the simplex
mode, here both devices can transmit data, though not at the same time; that is, half duplex provides simplex
communication in both directions over a single channel. When one device is sending data, the other device must only
receive it and vice versa. Thus, both sides take turns at sending data. This requires a definite turnaround time during
which, the device changes from the receiving mode to the transmitting mode. Due to this delay, half duplex
communication is slower than simplex communication. However, it is more convenient than simplex communication as
both the devices can send and receive data.

Note the difference between simplex and half-duplex. Half-duplex refers to two-way communication where only one
party can transmit data at a time. Simplex refers to one-way communication where one party is the transmitter and the
other is the receiver. For example, a walkie-talkie is a half-duplex device because only one party can talk at a time.

Full Duplex

Full duplex refers to the transmission of data in two directions simultaneously. Here, both devices are capable of
sending as well as receiving data at the same time, as shown in the figure. Since simultaneous bi-directional
communication is possible, this configuration requires full and independent transmitting and
receiving capabilities at both ends of the communication channel. Sharing the same channel and moving signals in both
directions increases the channel throughput without increasing its bandwidth. For example, a telephone is a full-duplex
device because both parties can talk to each other simultaneously. In contrast, a walkie-talkie is a half-duplex device
because only one party can transmit at a time. Most modems have a switch that lets you choose between full-duplex
and half-duplex modes. The choice depends on which communications program you are running.

Serial and Parallel Communication

Serial Communication

In Serial data transmission, bits are transmitted serially, one after the other. The least significant bit (LSB) is usually
transmitted first. While sending data serially, characters or bytes have to be separated and sent bit by bit. Thus, some
hardware is required to convert the data from parallel to serial. At the destination, all the bits are collected, measured
and put together as bytes in the memory of the destination. This requires conversion from serial to parallel.
As compared to parallel transmission, serial transmission requires only one circuit interconnecting the two devices.
Therefore, serial transmission is suitable for transmission over long distances.

Parallel Communication

In Parallel transmission, all the bits of a byte are transmitted simultaneously on separate wires. Here, multiple
connections between the two devices are therefore, required. This is a very fast method of transmitting data from one
place to another. The disadvantage of Parallel transmission is that it is very expensive, as it requires several wires for
both sending, as well as receiving equipment. Secondly, it demands extraordinary accuracy that cannot be guaranteed
over long distances.

NETWORK GOALS
One of the main goals of a computer network is to enable its users to share resources, to provide low cost facilities and
easy addition of new processing services. The computer network thus, creates a global environment for its users and
computers.

Some of the basic goals that a Computer network should satisfy are:
• Cost reduction by sharing hardware and software resources.
• Provide high reliability by having multiple sources of supply.
• Provide an efficient means of transport for large volumes of data among various locations (High throughput).

• Provide inter-process communication among users and processors.

• Reduction in delay during data transport.

• Increase productivity by making it easier to share data amongst users.

• Repairs, upgrades, expansions, and changes to the network should be performed with minimal impact on the majority
of network users.
• Standards and protocols should be supported to allow many types of equipment from different vendors to share the
network (Interoperability).

• Provide centralized/distributed management and allocation of network resources like host processors, transmission
facilities etc.

CLASSIFICATION OF NETWORKS
According to scope of network : PAN, LAN, MAN, WAN

According to network service : Connection oriented, connectionless

According to transmission technology i.e., whether the network contains switching elements or not, we have two types
of networks:

• Broadcast networks.
• Point-to-point or Switched networks.

*****************************************************************************
Network Devices
Layer: Devices

Application: Gateway

Network: Layer-3 switch, Router

Data Link: Switch, Bridge

Physical: Hub, Repeater

HUB
A hub, also called a network hub, is a common connection point for devices in a network. Hubs are devices commonly
used to connect segments of a LAN. The hub contains multiple ports. When a packet arrives at one port, it is copied to
the other ports so that all segments of the LAN can see all packets.

In a hub, a frame is passed along or "broadcast" to every one of its ports. It doesn't matter that the frame is only
destined for one port. The hub has no way of distinguishing which port a frame should be sent to. Passing it along to
every port ensures that it will reach its intended destination. This places a lot of traffic on the network and can lead to
poor network response times.
Compared to a standard switch, a hub is slower because it cannot send and receive information at the same time, while
a switch typically costs more than a hub.
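To make the flooding behaviour concrete, here is a small, hypothetical Python sketch of a hub: every frame that arrives on one port is copied out of every other port, regardless of its destination. The class and its methods are invented for illustration only.

class Hub:
    """Hypothetical model of a hub: frames are repeated out of every port
    except the one they arrived on, with no knowledge of MAC addresses."""

    def __init__(self, port_count: int):
        self.ports = list(range(port_count))

    def receive(self, in_port: int, frame: bytes) -> list[tuple[int, bytes]]:
        # The hub cannot tell which port the intended destination is on,
        # so it copies the frame to all other ports.
        return [(port, frame) for port in self.ports if port != in_port]

hub = Hub(port_count=4)
print(hub.receive(in_port=1, frame=b"hello"))   # sent out of ports 0, 2 and 3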
Passive and Intelligent Hubs
A passive hub serves simply as a conduit for the data, enabling it to go from one device (or segment) to another. So-
called intelligent hubs include additional features that enable an administrator to monitor the traffic passing through
the hub and to configure each port in the hub. An intelligent hub also has the capability of a repeater. Intelligent hubs
are also called manageable hubs.

Repeaters
As signals travel along a network cable (or any other medium of transmission), they degrade and become distorted in a
process that is called attenuation. If a cable is long enough, the attenuation will finally make a signal unrecognizable by
the receiver.

A Repeater enables signals to travel longer distances over a network. Repeaters work at the OSI's Physical layer. A
repeater regenerates the received signals and then retransmits the regenerated (or conditioned) signals on other
segments.
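As an illustration only (real repeaters regenerate analogue signals in hardware), the sketch below models regeneration as deciding each attenuated sample against a threshold and retransmitting a clean bit at full amplitude; the values and threshold are invented for the example.

def regenerate(samples: list[float], threshold: float = 0.5) -> list[int]:
    """Return a clean bit stream recovered from degraded (attenuated/noisy) samples."""
    return [1 if s >= threshold else 0 for s in samples]

received = [0.92, 0.61, 0.08, 0.55, 0.13]   # degraded versions of 1, 1, 0, 1, 0
print(regenerate(received))                  # [1, 1, 0, 1, 0]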

Bridges
Like a repeater, a bridge can join segments or workgroup LANs. However, a bridge can also divide a network to isolate
traffic or problems. For example, if the volume of traffic from one or two computers or a single department is flooding
the network with data and slowing down entire operation, a bridge can isolate those computers or that department.
In the following figure, a bridge is used to connect two segments, segment 1 and segment 2.

Bridges can be used to:

i. Expand the distance of a segment.

ii. Provide for an increased number of computers on the network.

iii. Reduce traffic bottlenecks resulting from an excessive number of attached computers.

Bridges work at the Data Link Layer of the OSI model. Because they work at this layer, all information contained in the
higher levels of the OSI model is unavailable to them. Therefore, they do not distinguish between one protocol and
another.

Bridges simply pass all protocols along the network. Because all protocols pass across the bridges, it is up to the
individual computers to determine which protocols they can recognize.

A bridge works on the principle that each network node has its own address. A bridge forwards the packets based on the
address of the particular destination node.

As traffic passes through the bridge, information about the computers' addresses is stored in the bridge's RAM. The
bridge then uses this RAM to build a forwarding table (MAC address table) based on the source addresses it sees.
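A minimal sketch of this learning process, assuming a simple multi-port bridge, might look like the following Python code: the table maps each source MAC address to the port it was seen on, frames for known destinations are forwarded out of that port only, and frames for unknown destinations are flooded. The class and addresses are hypothetical.

class LearningBridge:
    """Learning bridge: records which port each source MAC address was seen on,
    filters frames local to a segment, and floods unknown destinations."""

    def __init__(self, ports: list[int]):
        self.ports = ports
        self.table: dict[str, int] = {}   # MAC address -> port, built from source addresses

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        self.table[src_mac] = in_port                     # learn the source address
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return [] if out == in_port else [out]        # filter frames local to the segment
        return [p for p in self.ports if p != in_port]    # flood unknown destinations

bridge = LearningBridge(ports=[1, 2])
print(bridge.handle_frame(1, "aa:aa", "bb:bb"))   # source learned on port 1; destination unknown -> flooded to [2]
print(bridge.handle_frame(2, "bb:bb", "aa:aa"))   # destination already learned -> forwarded to [1]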
Layer 2 Switch

Layer 2 switches basically do switching only, which means they operate using devices' MAC addresses to redirect
frames from the source port to the destination port. They do this by maintaining a MAC address table that remembers
which MAC addresses have been seen on which ports. A MAC address operates within Layer 2 of the OSI reference
model and simply differentiates one device from another, with each device being assigned a unique MAC address. A
Layer 2 switch utilizes hardware-based switching techniques to manage traffic in a LAN (Local Area Network). Because
switching occurs at Layer 2, the process is very fast: all the switch does is look up MAC addresses at the data link layer.
In simple terms, a Layer 2 switch acts as a multiport bridge between devices.

Layer 3 Switch

A Layer 3 switch extends what a Layer 2 switch does. Layer 2 switches cannot route data packets between networks at
Layer 3; a Layer 3 switch, in contrast, performs routing using IP addresses. It is a specialized hardware device used to
route data packets. Layer 3 switches have fast switching capabilities and a higher port density. They are significant
upgrades over traditional routers in terms of performance, and their main advantage is that they can route data packets
without making extra network hops, which makes them faster than routers for this purpose. However, they lack some of
the added functionality of a router. Layer 3 switches are commonly used in large-scale enterprise networks. Simply put,
a Layer 3 switch is essentially a high-speed router without WAN connectivity.

Routers
In an environment consisting of several network segments with different protocols and architecture, a bridge may not
be adequate for ensuring fast communication among all of the segments. A complex network needs a device, which not
only knows the address of each segment, but also can determine the best path for sending data and filtering broadcast
traffic to the local segment. Such a device is called a Router.

Routers work at the Network layer of the OSI model, which means they can switch and route packets across
multiple networks. They do this by exchanging protocol-specific information between separate networks. Routers have
access to more information in packets than bridges, and they use this information to improve packet deliveries. Routers
are usually used in complex network situations because they provide better traffic management than bridges and do
not pass broadcast traffic.

Routers can share status and routing information with one another and use this information to bypass slow or
malfunctioning connections.

Routers do not look at the destination node address; they only look at the network address, and they pass the
information on only if the network address is known. This ability to control the data passing through the router reduces
the amount of traffic between networks and allows routers to use these links more efficiently than bridges.
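The sketch below illustrates the idea that a router forwards on network addresses rather than node addresses. It uses Python's standard ipaddress module for a simple longest-prefix-match lookup; the routing table entries and interface names are hypothetical.

import ipaddress

# Hypothetical routing table: (destination network, outgoing interface)
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),     "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"),    "eth1"),
    (ipaddress.ip_network("192.168.1.0/24"), "eth2"),
]

def lookup(destination: str):
    """Return the interface for the most specific matching network, or None."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in routing_table if addr in net]
    if not matches:
        return None                                   # unknown network address: packet not forwarded
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))      # eth1 (the /16 is more specific than the /8)
print(lookup("8.8.8.8"))       # None (no matching network address)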
Gateways
Gateways make communication possible between different architectures and environments. They repackage and
convert data going from one environment to another so that each environment can understand the other environment's
data.

A gateway repackages information to match the requirements of the destination system. Gateways can change the
format of a message so that it will conform to the application program at the receiving end of the transfer.

A gateway links two systems that do not use the same:

i. Communication protocols

ii. Data formatting structures

iii. Languages

iv. Architecture

For example, an electronic mail gateway, such as an X.400 gateway, receives messages in one format, translates them,
and forwards them in the X.400 format used by the receiver, and vice versa.

To process the data, the gateway:

i. Decapsulates incoming data through the complete protocol stack of one network.

ii. Encapsulates the outgoing data in the complete protocol stack of the other network to allow transmission.
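As a rough illustration of this repackaging, the sketch below converts a message from one hypothetical mail format to another. The field names and formats are invented for the example and do not correspond to the actual X.400 specification.

def translate_message(internet_msg: dict) -> dict:
    """Decapsulate an 'internet-style' message and re-encapsulate it in a
    different, invented target format expected by the receiving system."""
    return {
        "originator": internet_msg["from"],
        "recipient": internet_msg["to"],
        "subject": internet_msg["subject"],
        "body": internet_msg["body"].encode("utf-8"),   # target system expects bytes
    }

incoming = {"from": "alice@example.com", "to": "bob@example.com",
            "subject": "Hello", "body": "Meeting at 10."}
print(translate_message(incoming))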

NIC
A NIC, or Network Interface Card, is a circuit board or chip that allows a computer to communicate with other
computers on a network. When connected to a cable, or to another transmission medium such as infrared, this board
lets the computer share resources, information, and hardware. Local and wide area networks are commonly used by
large businesses and are increasingly found in homes as home users come to have more than one computer. Using
network cards to connect to a network allows users to share data (for example, a company database that many users
can access at the same time), to send and receive e-mail internally within the company, and to share hardware devices
such as printers.

AP (Access Point)

An Access Point (AP) is a device that allows wireless clients (such as Wi-Fi devices) to connect to a wired network; it
works mainly at the physical and data link layers, bridging the wireless medium to an Ethernet LAN.

RJ45 connector

The RJ45 connector is the 8-pin modular connector used to terminate twisted-pair cables (such as Ethernet cables) and
plug them into NICs, hubs, switches, and routers.
