
[ASSIGNMENT]

NAME MD. MEHAR IMAM

ROLL 2314511487

PROGRAM BACHELOR OF COMPUTER APPLICATION

SEMESTER 4

COURSE NAME COMPUTER NETWORKING

COURSE CODE DCA2201


(SET 1)
Q.1)
Ans. The Open Systems Interconnection (OSI) reference model was developed by the International
Organization for Standardization (ISO) as a model for computer network architecture and as a
framework for developing protocol standards. The OSI model consists of seven layers.
1 Application Layer
2 Presentation Layer
3 Session Layer
4 Transport Layer
5 Network Layer
6 Data Link Layer
7 Physical Layer
Application Layer: This layer provides the interface through which user applications access
network services. It supports functions such as file transfer, electronic mail, remote login,
and other distributed services, passing user data down to the presentation layer.
Presentation Layer: This layer focuses on the meaning and interpretation of the data exchanged
between different systems. Its key functions include translation, security through encryption,
and data compression. Essentially, it receives information from the application layer and
transforms it into a format that lower layers can understand and process.
Session Layer: Acting as the connection manager, this layer allows applications on different
devices to establish, maintain, and terminate communication sessions. The session layer
provides services such as inserting synchronization points so that data can be resent from the
last checkpoint if part of a transfer is lost, keeping the data streams synchronized, and
managing the dialog between the two sides during a session.
Transport Layer: This layer maintains flow control of data and provides error checking and
recovery for data exchanged between devices.
Network Layer: This layer provides the mechanisms for routing and sequencing data. It is
responsible for end-to-end (source-to-destination) delivery of packets across multiple
networks. In contrast, the data link layer manages the transfer of frames between two directly
connected systems on the same network segment. If two systems are attached to the same link,
the network layer's routing functions are not needed.
Data Link Layer: This layer transforms the physical layer's basic transmission capabilities into
a dependable connection. It makes the underlying physical link appear error-free to the layer
above it (the Network layer). Additionally, it manages several crucial functions, including
framing data, error detection and correction, flow control, physical addressing, and media
access control.
Physical Layer: This layer provides the functions required to transmit a bit stream over a
physical medium. It deals with the electrical characteristics of the interface and the
transmission medium, and it specifies the procedures and functions that physical devices and
interfaces, including the hardware and cables, must perform for transmission to occur.
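To make the layering concrete, here is a rough Python sketch of encapsulation: each layer prepends its own header before handing the unit to the layer below. The header strings and the function name are invented purely for illustration, not part of any standard.

```python
# Minimal sketch of layered encapsulation: each layer prepends its own
# (invented) header before passing the data unit down the stack.

def encapsulate(payload: str) -> str:
    headers = [
        "APP|",    # application-layer data marker (hypothetical)
        "PRES|",   # presentation: encoding/encryption info
        "SESS|",   # session: dialog identifier
        "TCP|",    # transport: ports, sequence numbers
        "IP|",     # network: source/destination addresses
        "ETH|",    # data link: MAC addresses, frame check
    ]
    unit = payload
    for h in headers:
        unit = h + unit          # wrap the unit from the layer above
    return unit                  # the physical layer then transmits the bits

if __name__ == "__main__":
    print(encapsulate("hello"))  # ETH|IP|TCP|SESS|PRES|APP|hello
```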
Q2.)
Ans. This data link protocol deals with the normal situation in which errors arise on the
communication channel. The channel may corrupt frames or lose them entirely. If a frame is
damaged in transit, the receiving hardware detects the error when it computes the checksum.
Unlike the protocol for an error-free channel, a timer is added here: the sender transmits a
frame and considers it delivered only after the receiver acknowledges correct reception. The
receiver simply discards any frame that arrives damaged. When the sender's timer expires, the
sender retransmits the frame, and this continues until the frame arrives correctly at the
other end.

Consider the situation where sender A sends packet 1 to receiver B, B receives it correctly,
and B sends an acknowledgement frame back to A. If that acknowledgement is lost, A receives
nothing and its timer eventually expires. Having received no acknowledgement, A assumes that
the frame was lost or damaged and re-sends the packet 1 frame. The second copy also reaches B,
which passes it to the network layer for processing, so a duplicate frame arrives at B. A
mechanism for detecting duplicate frames is therefore needed. The natural solution is for the
sender to place a sequence number in the header of each data frame. The receiver checks the
sequence number of each incoming frame to distinguish new packets from packets it has already
received. For this purpose a one-bit sequence number, 0 or 1, is sufficient.

At any given time the receiver expects a particular sequence number. When a frame with the
correct sequence number arrives, it is accepted, passed to the network layer, and
acknowledged. The expected sequence number is then incremented modulo 2, so 0 becomes 1 and 1
becomes 0. A frame arriving with the wrong sequence number is discarded as a duplicate.

After transmitting a frame, the sender starts the timer; if the timer was already running, it
is restarted for another full interval. The interval must be long enough for the frame to
reach the receiver, for the acknowledgement to travel back through the network, and for the
acknowledgement to be received by the sender. If that interval elapses without an
acknowledgement, it is reasonable to assume that either the data frame or the acknowledgement
has been lost, and the frame is re-sent. After sending a frame and starting the timer, the
sender waits for one of three events: a valid acknowledgement frame arrives, a damaged
acknowledgement frame arrives, or the timer expires. If a valid acknowledgement arrives, the
sender fetches the next packet from the network layer, places it in the buffer in place of the
previous packet, and advances to the next sequence number. If a damaged frame gets through or
the timer expires, the sender retransmits the same frame without changing the buffer or the
sequence number. When the receiver gets a valid frame, it checks the sequence number to see
whether it is a duplicate.
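A minimal Python sketch of this stop-and-wait behaviour with a one-bit sequence number is given below. The lossy channel, the timeout, and the function name are simulated inventions for illustration; a real implementation works on frames, checksums, and hardware timers.

```python
import random

# Minimal simulation of stop-and-wait ARQ with a 1-bit sequence number.
# Both data frames and acknowledgements may be "lost"; the sender retransmits
# after a (simulated) timeout and the receiver discards duplicates.

def simulate(packets, loss_rate=0.3, seed=7):
    rng = random.Random(seed)
    delivered = []          # frames passed up to the receiver's network layer
    expected = 0            # receiver: sequence number it is waiting for
    seq = 0                 # sender: sequence number of the current frame
    for data in packets:
        acked = False
        while not acked:
            if rng.random() < loss_rate:          # data frame lost in transit
                print(f"frame {seq} lost; timeout, retransmit")
                continue
            if seq == expected:                   # new frame: accept it
                delivered.append(data)
                expected ^= 1                     # increment modulo 2
            else:                                 # duplicate: discard it
                print(f"duplicate frame {seq} discarded")
            if rng.random() < loss_rate:          # acknowledgement lost
                print(f"ack {seq} lost; timeout, retransmit")
                continue
            acked = True                          # valid ack received
        seq ^= 1                                  # sender moves to the next number
    return delivered

if __name__ == "__main__":
    print(simulate(["pkt0", "pkt1", "pkt2"]))
```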
Q3.)
Ans. Internet standards define the following types of IPv4 addresses:
Unicast: Assigned to a single network interface located on a specific subnet; used for one-to-
one communication.
Multicast: Assigned to one or more network interfaces located on various subnets; used for
one-to-many communication.
Broadcast: Assigned to all network interfaces located on a subnet; used for one-to-everyone-
on-a-subnet communication.
An IPv4 unicast address consists of a subnet prefix and a unique host ID. The subnet prefix
(also called the network identifier or network address) portion of an IPv4 unicast address
indicates which group of interfaces is attached to the same physical or logical network, whose
boundaries are defined by IPv4 routers. A network segment in a TCP/IP network is also called a
subnet or a link. All nodes on the same physical or logical subnet must use the same subnet
prefix, and that prefix must be unique throughout the TCP/IP network. The host ID portion of
an IPv4 unicast address identifies a node's interface on the subnet, and each host ID must be
unique within the same network segment.
The address class determined which bits were used for the subnet prefix and which were
reserved for the host ID. The classes differed in both the maximum number of networks and the
number of hosts allowed per network. Of the five address classes, classes A, B, and C are
reserved for unicast IPv4 addresses. IPv4 multicast addresses are allocated from the class D
range, and class E addresses remain reserved for experimental use. Class A address prefixes
are allocated to networks with very large numbers of hosts. With a prefix length of eight bits
for class A address prefixes, the remaining 24 bits allow up to 16,777,214 hosts to be
identified. However, because the prefix is so short, only 126 networks can be assigned class A
address prefixes. First, the most significant bit in a class A address prefix is always set to
0, which reduces the number of possible class A prefixes from 256 to 128. Second, no address
whose most significant eight bits are all zero can be assigned, because these are reserved for
special addresses. Third, addresses whose first eight bits are 01111111 (127 in decimal) are
reserved for loopback and cannot be used for general assignment. These conventions reduce the
total number of class A address prefixes from 128 to 126.
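As a quick illustration of these class boundaries, the sketch below classifies an address by its first octet; the helper name and the sample addresses are invented for illustration.

```python
# Sketch: determine the classful category of an IPv4 address from its first octet.
# Class A: leading bit 0 (0-127), B: 10 (128-191), C: 110 (192-223),
# D (multicast): 1110 (224-239), E (experimental): 1111 (240-255).

def address_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet == 127:
        return "A (reserved for loopback)"
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    if first_octet < 224:
        return "C"
    if first_octet < 240:
        return "D (multicast)"
    return "E (experimental)"

if __name__ == "__main__":
    for ip in ["10.0.0.1", "127.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5"]:
        print(ip, "->", address_class(ip))
```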
Internet address classes eventually fell out of use because they made unicast address
assignment inefficient. For example, a very large organization holding a class A address
prefix can accommodate up to 16,777,214 hosts, but if that organization uses only about 70,000
host IDs, roughly 16,707,214 IPv4 unicast addresses go unused on the Internet. Since 1993,
IPv4 address prefixes have been allocated to organizations based on their actual demand for
Internet-reachable IPv4 unicast addresses. This approach is called Classless Inter-Domain
Routing (CIDR). CIDR permits more flexible allocation and specification of Internet addresses
than the original class-based system, and as a result the number of usable Internet addresses
has greatly increased. For example, suppose an organization determines that it needs 2,000
IPv4 unicast addresses reachable on the Internet. The Internet Corporation for Assigned Names
and Numbers (ICANN) or an Internet service provider (ISP) allocates an IPv4 address prefix in
which 21 bits are fixed and 11 bits are available for host IDs. With 11 bits for host IDs,
2,046 distinct IPv4 unicast addresses can be formed.
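The arithmetic in this example (a /21 prefix leaves 11 host bits, giving 2^11 - 2 = 2,046 usable addresses) can be checked with Python's standard ipaddress module; the sample network below is invented for illustration.

```python
import ipaddress

# Check the CIDR arithmetic from the example: a /21 prefix leaves 11 host bits.
network = ipaddress.ip_network("10.1.96.0/21")   # example network, chosen arbitrarily

host_bits = 32 - network.prefixlen               # 11 bits left for host IDs
usable = network.num_addresses - 2               # exclude network and broadcast addresses
print(host_bits, usable)                         # 11 2046
```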
(SET 2)
Q.4)
Ans. Routing algorithms can be divided into two broad categories: nonadaptive and adaptive
routing algorithms. As the name implies, nonadaptive algorithms do not base their routing
decisions on measurements or estimates of the current topology and traffic. The routing
decision (the choice of route) is computed in advance and downloaded to each router. This
procedure is also called static routing, and it is useful in situations where the correct path
is clear in advance.
Distance vector routing: In distance vector routing, each router's table lists both the
minimum distance and the preferred link for reaching each destination. The tables are updated
by having each router exchange information with its neighbours. Distance vector routing is
also known as distributed Bellman-Ford, in honour of the researchers who developed it
(Bellman, 1957; Ford and Fulkerson, 1962). Under this algorithm, each router maintains a table
with one entry for each router in the network. Each entry has two main parts: the preferred
outgoing link to use for that destination and an estimate of the distance to that destination.
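A rough sketch of this table update (the distributed Bellman-Ford relaxation performed at one router) is shown below; the link costs, neighbour vectors, and function name are invented for illustration.

```python
# Sketch of one distance-vector update at a single router.
# neighbour_vectors maps each neighbour to the distance vector it advertised,
# and link_cost maps each neighbour to the cost of the direct link to it.

def update_table(link_cost, neighbour_vectors):
    table = {}   # destination -> (estimated distance, next hop)
    destinations = set()
    for vector in neighbour_vectors.values():
        destinations.update(vector)
    for dest in destinations:
        best = (float("inf"), None)
        for neighbour, vector in neighbour_vectors.items():
            if dest in vector:
                cost = link_cost[neighbour] + vector[dest]   # Bellman-Ford step
                if cost < best[0]:
                    best = (cost, neighbour)
        table[dest] = best
    return table

if __name__ == "__main__":
    link_cost = {"B": 1, "C": 4}
    neighbour_vectors = {
        "B": {"B": 0, "C": 2, "D": 5},    # vector advertised by neighbour B
        "C": {"B": 2, "C": 0, "D": 1},    # vector advertised by neighbour C
    }
    for dest, (dist, hop) in update_table(link_cost, neighbour_vectors).items():
        print(f"to {dest}: distance {dist} via {hop}")
```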
Link State Routing: As noted earlier, distance vector routing does not converge quickly enough
after changes in the topology, primarily because of the count-to-infinity problem. An
alternative technique, link state routing, is therefore used to address this problem. The idea
behind link state routing can be stated as five essential steps that every router must
complete: (1) discover its neighbours and learn their network addresses; (2) measure the delay
or cost to each of its neighbours; (3) construct a packet containing all it has just learned;
(4) send this packet to, and receive packets from, all other routers; and (5) compute the
shortest path to every other router.
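Step five is a shortest-path computation over the topology learned from the link state packets; a brief Dijkstra sketch over an invented graph is shown below (the function name and the costs are assumptions for illustration).

```python
import heapq

# Sketch of step 5 of link state routing: compute shortest paths (Dijkstra)
# over the full topology learned from link state packets. The graph is invented.

def shortest_paths(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

if __name__ == "__main__":
    graph = {
        "A": {"B": 2, "C": 5},
        "B": {"A": 2, "C": 1, "D": 4},
        "C": {"A": 5, "B": 1, "D": 1},
        "D": {"B": 4, "C": 1},
    }
    print(shortest_paths(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```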
Hierarchical Routing: The expansion of router routing tables is directly associated with the
growth of large networks. More than just router memory is used as tables enlarge; scanning
these tables also takes more CPU time and transmitting status information takes up more
bandwidth. At some stage, the network may become so large that it is infeasible for every
router to maintain information on every other router, so routing will instead have to conform
to a hierarchical structure, much like that of the telephone network. Routers in a hierarchical
routing configuration are grouped into individual regions. While each router comprehends its
region’s routing requirements, it maintains no awareness of the interconnections within other
regions. When several networks are connected, it is logical to treat each separately as a region
so routers within one network do not need to learn the infrastructure of other networks.
Broadcast Routing: Broadcasting works best for applications such as weather updates or live
radio that are sent to every machine so that anyone interested can receive the data. Sending
information so that it reaches every possible machine is referred to as broadcasting. Among
broadcasting methods, flooding is a good candidate, and reverse path forwarding is also widely
used for broadcast routing. When a router receives a broadcast packet, it checks whether the
packet arrived on the link that the router would normally use to send packets toward the
broadcast source. If so, the packet most likely followed the best route from the source and is
therefore probably the first copy to arrive at this router. In that case, the router forwards
the packet on every other link, but not on the link it arrived on. If instead the broadcast
arrives at the router over a different link, it is discarded as a likely duplicate.
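A small sketch of this reverse path forwarding check is given below; the link names and the unicast routing table are invented for illustration.

```python
# Sketch of the reverse path forwarding (RPF) check at one router.
# unicast_next_hop maps a source address to the link this router would use
# to send packets *toward* that source; the names are invented for illustration.

def handle_broadcast(source, arrival_link, unicast_next_hop, all_links):
    if arrival_link == unicast_next_hop[source]:
        # Packet arrived on the link used to reach the source: most likely the
        # first copy, so forward it on every other link.
        return [link for link in all_links if link != arrival_link]
    # Arrived on an unexpected link: treat it as a likely duplicate and drop it.
    return []

if __name__ == "__main__":
    links = ["L1", "L2", "L3"]
    next_hop = {"S": "L1"}
    print(handle_broadcast("S", "L1", next_hop, links))  # ['L2', 'L3']
    print(handle_broadcast("S", "L2", next_hop, links))  # [] (discard)
```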
Q.5)
Ans. Too many packets travelling through the network lead to packet delays and losses that
degrade performance. This situation is called congestion. The most effective way to control
congestion is to reduce the load that the transport layer places on the network. The simplest
way to avoid congestion is to build a network that is well matched to the traffic it is
expected to carry. Several approaches to controlling congestion are discussed below.
Network Provisioning: Routing most traffic over low-bandwidth links usually results in
congestion. In response to serious congestion, spare lines or extra routers can be brought
into service dynamically. Links and routers that are regularly heavily utilized are upgraded
as soon as is practical. This process is called provisioning; it takes place on a time scale
of months and is driven by long-term traffic trends.
Traffic-aware routing: Routing protocols respond to topology changes, but not to changes in
link utilization. The idea is to incorporate load information into the route computation so
that traffic is shifted away from heavily loaded links and toward lightly loaded ones. Network
congestion generally appears first at such hotspots.
Admission Control: Virtual-circuit networks may refuse to set up new connections when adding
them could congest the network. In other words, the system checks that sufficient resources
are available before establishing a connection; if resources are scarce, new connections may
be refused. This is called admission control. Its purpose is to prevent new virtual circuits
from being established unless the network can comfortably carry the additional traffic.
Traffic is commonly characterized by measuring its rate and its shape. A well-known algorithm
used to describe and police traffic for admission control is the leaky bucket, with the token
bucket as a closely related variant. The leaky bucket is based on the idea that, regardless of
how fast traffic enters the bucket, the output always drains at a fixed rate.
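A toy sketch of the leaky bucket idea, draining at a fixed rate however bursty the arrivals, is shown below; the capacity, drain rate, and arrival pattern are invented for illustration.

```python
# Sketch of the leaky bucket idea: however bursty the input, packets leave
# the bucket at a fixed rate; arrivals that would overflow the bucket are dropped.

def leaky_bucket(arrivals, capacity=4, drain_per_tick=1):
    queued = 0
    for tick, arriving in enumerate(arrivals):
        dropped = max(0, queued + arriving - capacity)   # overflow is discarded
        queued = min(capacity, queued + arriving)
        sent = min(drain_per_tick, queued)               # constant output rate
        queued -= sent
        print(f"tick {tick}: in={arriving} sent={sent} queued={queued} dropped={dropped}")

if __name__ == "__main__":
    leaky_bucket([5, 0, 0, 3, 0, 0])   # a burst of 5 still drains at 1 per tick
```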
Traffic throttling: On the Internet and in many other networks, senders try to keep the amount
of traffic they transmit at a level the network can comfortably carry. In these networks, the
aim is to operate the network just before the point at which congestion sets in. As congestion
approaches, the network must tell senders to throttle back their traffic. Delivering this
feedback requires the router to identify the senders responsible.
To control network congestion, a choke packet is used to notify the particular nodes or
senders that are contributing to it. The clearest way to inform a sender about congestion is
to tell it directly: the router selects a congested packet and sends a choke packet back to
the source host, carrying the destination taken from the original packet. The original packet
may be marked so that it does not generate further choke packets as it travels on, and it is
then forwarded as normal traffic. To avoid adding to the existing congestion, the router may
send choke packets only at a low rate, even when the network is heavily loaded.
Q.6)
Ans. Multimedia data comprises audio, video, and images, which by many measures make up the
majority of traffic on the Internet. In this section we discuss some of the main approaches to
representing and compressing multimedia data. Multimedia data is usually compressed with lossy
compression. Compression is not limited to multimedia data: we also zip or compress ordinary
data files, but in that case a lossless technique is generally used, so that no information is
lost from the file. Data recovered from lossy compression is not guaranteed to be identical to
the original, but the lossy method produces a smaller file than the lossless method does. Data
encoding cannot be separated from data compression: we try to encode data using the fewest
possible bits. Suppose a data block is made up of the 26 symbols A through Z and every symbol
is equally likely; then using 5 bits per symbol is the best encoding we can achieve. If,
however, some symbols appear more often than others, fewer bits can be spent on the common
symbols. In particular, if R appears 50% of the time, it makes sense to use fewer bits to
encode R than to encode the other symbols. Provided we know how frequently each symbol occurs
in the data, we can assign each symbol a specific number of bits so that the whole block uses
the fewest bits possible. This is the idea behind Huffman codes, one of the first significant
approaches to data compression.
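A compact sketch of Huffman code construction is shown below; the symbol frequencies are invented, with R made the most common symbol so that it receives the shortest codeword.

```python
import heapq

# Sketch of Huffman code construction: symbols that occur more often receive
# shorter codewords. The frequencies here are invented for illustration.

def huffman_codes(frequencies):
    # Each heap entry: (total frequency, tie-breaker, {symbol: code-so-far})
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    freqs = {"R": 50, "E": 20, "T": 15, "X": 10, "Q": 5}
    for sym, code in sorted(huffman_codes(freqs).items(), key=lambda kv: len(kv[1])):
        print(sym, code)   # R gets the shortest code, rare symbols the longest
```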
Run-length encoding (RLE) is a simple compression method. The essence of the technique is to
encode a long uninterrupted run of the same character as a count followed by that character,
which is where the name comes from. For example, the string AAABBCDDDD would be represented as
3A2B1C4D by RLE. For images, RLE can be applied by comparing neighbouring pixels and encoding
only the differences between them; when an image has large uniform regions, this method is
quite effective. Scanned text images, for example, often achieve compression ratios of around
eight to one with RLE. Files containing a great deal of white space lend themselves to RLE
because such runs can be represented very compactly. Fax transmission relies heavily on RLE as
its key compression method.
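A small run-length encoder and decoder matching the AAABBCDDDD example is sketched below (the function names are invented for illustration).

```python
from itertools import groupby

# Sketch of run-length encoding: each run of an identical symbol is replaced
# by its length followed by the symbol, so "AAABBCDDDD" becomes "3A2B1C4D".

def rle_encode(text: str) -> str:
    return "".join(f"{len(list(run))}{symbol}" for symbol, run in groupby(text))

def rle_decode(encoded: str) -> str:
    out, count = [], ""
    for ch in encoded:
        if ch.isdigit():
            count += ch                  # accumulate multi-digit run lengths
        else:
            out.append(ch * int(count))
            count = ""
    return "".join(out)

if __name__ == "__main__":
    print(rle_encode("AAABBCDDDD"))              # 3A2B1C4D
    print(rle_decode(rle_encode("AAABBCDDDD")))  # AAABBCDDDD
```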
MPEG can be used both for video stored on disk and for video streamed over a network, because
MPEG does not specify how the video stream is broken into network packets. An advantage of the
MPEG format is that the encoder can adjust the encoding afterwards: the frame rate, the
resolution, the mix of frame types in a GOP (group of pictures), the quantization table, and
the encoding level for each macroblock can all be changed. As a consequence, the data rate of
the video can be regulated by trading image quality against the bandwidth required for network
transmission. In addition, considerable attention must be paid to how the MPEG stream is
packetized as it is transmitted across a network. With TCP, packetization is not a concern,
because TCP handles segmentation and packages the video data into IP datagrams as space
becomes available. However, interactive video is rarely sent over TCP. When UDP is used for
video transmission, it makes sense to break the stream at macroblock boundaries to reduce the
impact of packet loss, so that a lost packet damages only one macroblock rather than several.
