
TULSIRAMJI GAIKWAD-PATIL College of Engineering and Technology

Wardha Road, Nagpur - 441108


Accredited with NAAC A+ Grade
Approved by AICTE, New Delhi, Govt. of Maharashtra
(An Autonomous Institute Affiliated to RTM Nagpur University)
Department of Information Technology
Session 2023-2024

Sub: TCP/IP Unit 3

Resource Reservation:

RSVP is a resource reservation setup protocol that is used by both network hosts and routers.
Hosts use RSVP to request a specific class of service (CoS) from the network for particular
application flows. Routers use RSVP to deliver CoS requests to all routers along the data
path. RSVP also can maintain and refresh states for a requested CoS application flow.

RSVP treats an application flow as a simplex connection. That is, the CoS request travels
only in one direction—from the sender to the receiver. RSVP is a transport layer protocol that
uses IP as its network layer. However, RSVP does not transport application flows. Rather, it
is more of an Internet control protocol, similar to the Internet Control Message Protocol
(ICMP) and Internet Group Management Protocol (IGMP). RSVP runs as a separate software
process in the Junos OS and is not in the packet forwarding path.

RSVP is not a routing protocol, but rather is designed to operate with current and future
unicast and multicast routing protocols. The routing protocols are responsible for choosing
the routes to use to forward packets, and RSVP consults local routing tables to obtain routes.
RSVP only ensures the CoS of packets traveling along a data path.

The receiver in an application flow requests the preferred CoS from the sender. To do this,
the receiver issues an RSVP CoS request on behalf of the local application. The request
propagates to all routers in the reverse direction of the data path, toward the sender. In this
process, RSVP requests might be merged, resulting in a protocol that scales well when there
are a large number of receivers.

Because the number of receivers in an application flow is likely to change and the
delivery paths might change during the life of an application flow, RSVP takes a soft-state
approach in its design, creating and removing the protocol states in routers and hosts
incrementally over time. RSVP sends periodic refresh messages to maintain its state and to
recover from occasional lost messages. In the absence of refresh messages, RSVP states
automatically time out and are deleted.
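
To make the soft-state idea concrete, here is a minimal sketch, assuming a hypothetical reservation table keyed by flow ID; it illustrates only the timeout-and-refresh behavior, not RSVP's actual message processing:

import time

REFRESH_TIMEOUT = 30.0  # seconds; hypothetical lifetime of a reservation state

class SoftStateTable:
    """Reservation states that expire unless periodically refreshed."""
    def __init__(self):
        self.entries = {}  # flow_id -> expiry timestamp

    def refresh(self, flow_id):
        # A periodic refresh message re-arms the timer for this flow.
        self.entries[flow_id] = time.time() + REFRESH_TIMEOUT

    def expire_stale(self):
        # Run periodically: states whose refresh never arrived are deleted.
        now = time.time()
        for flow_id in [f for f, t in self.entries.items() if t < now]:
            del self.entries[flow_id]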

RSVP is used for data flows and provides QoS to the network agents and devices involved. By
using RSVP, a client may request a quality of service from the network for a data flow. Network
devices such as routers use RSVP to deliver the request to all the nodes along the path. As RSVP
is not a routing protocol, it obtains the data path and routing information from neighboring
routers. Applications on a network send requests for QoS, and the routers on the network then
provide the requested service. RSVP keeps records of the information being exchanged, and it
also carries traffic and policy control information.

Traffic Shaping:
Controlling Traffic Flow:

Traffic shaping is like traffic control for data on a network. It helps manage the speed and
amount of data that can move through the network.

Preventing Congestion:

It prevents the network from getting congested by regulating how fast data can be sent. This
ensures a smooth and efficient flow of information.

Token Bucket or Leaky Bucket:

Imagine a bucket that holds tokens. These tokens represent permission to send data. In the
token bucket approach, devices can only send data when they have tokens. In the leaky
bucket approach, data is released at a steady rate.
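
Here is a minimal token-bucket sketch of the idea above, assuming a fill rate in tokens per second and one token per byte of data; the names and parameters are illustrative:

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # bucket size; bounds the largest burst
        self.tokens = capacity
        self.last = time.time()

    def allow(self, size):
        # Top up the tokens earned since the last call, capped at capacity.
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size   # spend tokens: the packet may be sent now
            return True
        return False              # not enough tokens: delay (shape) the packet

The leaky bucket differs in that data drains at one fixed rate regardless of how it arrived; the token bucket instead permits bursts up to the bucket's capacity while enforcing the same average rate.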

Traffic Policing:

Traffic policing monitors the data flow against an agreed rate. If it's too fast, some data
might be delayed, dropped, or marked for lower priority to keep things in check.

Queue Management:

Queues are like waiting lines for data. Different types of data might have different priorities.
Queue management decides which data gets sent first, helping prioritize important
information.

TCP Window Size:

For TCP (a common protocol on the internet), traffic shaping influences the "window" of
data that can be sent before waiting for acknowledgment. It affects how much data can be in
transit at a time.

Congestion Avoidance:

Traffic shaping helps TCP react appropriately to network congestion. It adjusts how much
data can be sent and how quickly, preventing the network from becoming overloaded.

Quality of Service (QoS):

QoS is like giving VIP treatment to certain types of data. Traffic shaping can be part of QoS,
making sure important data gets priority over less critical information.

Hardware and Software Control:

Implementing traffic shaping involves both hardware (like routers and switches) and software
(configurations and settings). It's like having the right traffic signals and rules on the roads.

Test and Configure Carefully:

To avoid problems, it's crucial to carefully set up and test traffic shaping. The goal is to
improve network performance without causing new issues.

In essence, traffic shaping is about making sure data moves smoothly on the network,
preventing jams, and ensuring important information gets where it needs to go without
unnecessary delays.

Scheduling:
Scheduling in TCP/IP refers to the process of managing and prioritizing the transmission of
data packets over a network. It involves deciding the order in which packets are sent to
optimize network performance. Here's a simple point-wise explanation:

1.Definition: Scheduling in TCP/IP is the method of determining the order in which data
packets are transmitted over a network to ensure efficient and fair use of network resources.
2.Packet Prioritization: Scheduling involves assigning priorities to different types of
packets based on factors such as application requirements, Quality of Service (QoS) policies,
and network conditions.
3.Fairness: Scheduling algorithms aim to provide fair access to the network for all connected
devices or users. This prevents a single user or application from monopolizing network
resources to the detriment of others.
4.Queue Management: In a network, data packets are often placed in queues before being
transmitted. Scheduling includes managing these queues, deciding which packets to send
next, and ensuring that high-priority packets are transmitted promptly.
5.Service Differentiation: Scheduling allows for service differentiation, meaning that critical
or time-sensitive applications can be given preferential treatment over less time-sensitive
traffic. This is essential for applications such as voice and video streaming.
6.Round Robin Scheduling: One common scheduling algorithm is Round Robin, where
each device or connection takes turns sending its next packet. This helps in sharing resources
fairly among connected devices (a sketch of this algorithm follows the list).
7.Weighted Fair Queuing (WFQ): WFQ is a more advanced scheduling algorithm that
assigns weights to different queues based on priority or bandwidth requirements. It ensures
that higher-priority queues receive a larger share of network resources.
8.First Come First Serve (FCFS): FCFS is a simple scheduling algorithm that transmits
packets in the order they arrive in the queue. While straightforward, it may not prioritize
critical or time-sensitive traffic.
9.Priority Queues: Scheduling often involves the use of priority queues, where packets are
organized based on their priority levels. Higher-priority packets are dequeued and transmitted
before lower-priority ones.
10.Adaptive Scheduling: Some scheduling algorithms adapt to changing network
conditions. For example, during periods of high congestion, the scheduling algorithm might
dynamically adjust priorities to ensure efficient use of available bandwidth.
11.Congestion Avoidance: Scheduling plays a role in congestion avoidance by managing the
rate at which packets are transmitted. This helps prevent network congestion, ensuring that
the network operates smoothly.
12.Flow Control: Scheduling also interacts with flow control mechanisms in TCP/IP to
manage the pace at which data is transmitted between sender and receiver, preventing
overload and ensuring reliable communication.
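
Here is a minimal sketch of the Round Robin algorithm from point 6, assuming one FIFO queue per flow; each non-empty queue transmits one packet per cycle:

from collections import deque

def round_robin(queues):
    """Cycle over per-flow queues, sending one packet from each in turn."""
    order = deque(queues)             # rotation order of the flow queues
    while any(q for q in queues):     # run until every queue is drained
        q = order[0]
        order.rotate(-1)              # the next flow gets the following turn
        if q:                         # skip queues that are currently empty
            yield q.popleft()         # "transmit" the head-of-line packet

flows = [deque(["a1", "a2"]), deque(["b1"]), deque(["c1", "c2", "c3"])]
print(list(round_robin(flows)))       # ['a1', 'b1', 'c1', 'a2', 'c2', 'c3']
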
Admission Control:

Admission control in TCP/IP refers to the process of managing and controlling the entry of
new connections into a network to ensure optimal performance and resource utilization.

1.Definition: Admission control is a network management mechanism that regulates the
acceptance of new connections or sessions based on the available resources and the network's
capacity.
2.Preventing Overload: The primary goal of admission control is to prevent network
congestion and overload by carefully allowing only a manageable number of connections that
the network can handle effectively.
3.Resource Evaluation: Before accepting a new connection, admission control evaluates the
available network resources, such as bandwidth, processing capacity, and memory, to ensure
that adding a new connection will not degrade the overall performance (a sketch of this check
follows the list).
4.Quality of Service (QoS): Admission control plays a crucial role in maintaining the quality
of service for existing connections. It ensures that accepting a new connection will not
compromise the performance and reliability of already established connections.
5.Traffic Prioritization: Some admission control mechanisms prioritize certain types of
traffic over others. For example, critical applications or services may be given priority access
to network resources over less critical ones.
6.Policy Enforcement: Admission control enforces network policies by determining whether
a new connection complies with predefined rules and thresholds. These policies could be
based on security requirements, service-level agreements (SLAs), or other network
management criteria.
7.Dynamic Adjustment: Admission control is often dynamic and may adjust its decisions
based on real-time changes in network conditions. For instance, during periods of high traffic,
admission control may become more restrictive to prevent congestion.
8.Feedback Mechanisms: Some admission control systems incorporate feedback
mechanisms that continuously monitor network performance and adjust admission decisions
accordingly. This adaptability helps in responding to changing network conditions.
9.Connection Rejection: If admission control determines that accepting a new connection
would exceed the network's capacity or violate established policies, it rejects the connection
request, preventing potential network degradation.
10.Scalability: Admission control mechanisms are designed to scale with the size and
complexity of the network. They should be able to handle varying levels of traffic and adapt
to changes in the network environment.
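
Here is a minimal sketch of the resource-evaluation step from point 3, assuming that link capacity and per-connection bandwidth demands are known in advance; real admission control would weigh several resources and policies:

class AdmissionController:
    def __init__(self, capacity_bps):
        self.capacity = capacity_bps  # total bandwidth the link can commit
        self.reserved = 0             # bandwidth already promised to flows

    def admit(self, requested_bps):
        # Accept only if the new reservation still fits within capacity.
        if self.reserved + requested_bps <= self.capacity:
            self.reserved += requested_bps
            return True
        return False                  # reject: admitting would degrade existing flows

ac = AdmissionController(capacity_bps=100_000_000)  # a 100 Mbps link
print(ac.admit(60_000_000))   # True:  60 Mbps fits
print(ac.admit(50_000_000))   # False: would exceed the remaining capacity
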
Data Traffic:
Network traffic is the amount of data that moves across a network during any given time. Network
traffic may also be referred to as data traffic, or just traffic.

In search engine optimization, traffic to a website can be characterized as being direct, organic or
paid. Direct traffic occurs when someone enters a website's URL in a browser. Organic traffic is the
direct result of someone using a search engine to locate content. Paid traffic means someone has
clicked on a sponsored advertisement.
In data center administration, network traffic can be characterized as being either north-south or east-
west. North-south describes client-to-server traffic that moves between the data center and a location
outside the network. North-south traffic is typically depicted vertically to illustrate traffic that flows in
and out of the data center. East-west describes server-to-server traffic that stays within the data center
and is typically depicted horizontally. In the early days of the internet, most network traffic was
north-south.

Common network traffic problems

Common network traffic issues include the following:

component failures, such as server, router or firewall failures; and

traffic failures, such as bottlenecks and high latency.

Bottlenecks can occur when there is not enough data handling capacity to process the current traffic
volume. Latency, or the delay from input into a system to its outcome, can be caused by components
in the data center relaying information to each other, increasing network traffic. High latency can
occur more commonly with east-west traffic.

Monitoring network traffic

To help ensure network quality, network administrators should analyze, monitor and secure network
traffic. Network monitoring enables the oversight of a computer network for failures and deficiencies
to ensure continued network performance.

Tools made to aid network monitoring also commonly notify users if there are any significant or
troublesome changes to network performance. Network monitoring enables administrators and IT
teams to react quickly to any network issues.

It's essential for IT and network administrators to monitor network traffic and take steps to maintain
the network, to ensure its integrity, so it remains operational for those working across it. A smooth
flow of network packets ensures users can access and share data without any issues and keeps the
nodes of the network connected for further communication.
Quality of Service:
Quality of service (QoS) is the use of mechanisms or technologies that work on a network to control
traffic and ensure the performance of critical applications with limited network capacity. It enables
organizations to adjust their overall network traffic by prioritizing specific high-performance
applications.

QoS is typically applied to networks that carry traffic for resource-intensive systems. Common
services for which it is required include internet protocol television (IPTV), online gaming, streaming
media, videoconferencing, video on demand (VOD), and Voice over IP (VoIP).

Using QoS in networking, organizations have the ability to optimize the performance of multiple
applications on their network and gain visibility into the bit rate, delay, jitter, and packet rate of their
network. This ensures they can engineer the traffic on their network and change the way that packets
are routed to the internet or other networks to avoid transmission delay. This also ensures that the
organization achieves the expected service quality for applications and delivers expected user
experiences.

As per the QoS meaning, the key goal is to enable networks and organizations to prioritize traffic,
which includes offering dedicated bandwidth, controlled jitter, and lower latency. The technologies
used to ensure this are vital to enhancing the performance of business applications, wide-area
networks (WANs), and service provider networks.

How Does QoS Work?

QoS networking technology works by marking packets to identify service types, then configuring
routers to create separate virtual queues for each application, based on their priority. As a result,
bandwidth is reserved for critical applications or websites that have been assigned priority access.

QoS technologies provide capacity and handling allocation to specific flows in network traffic. This
enables the network administrator to assign the order in which packets are handled and provide the
appropriate amount of bandwidth to each application or traffic flow.

Types of Network Traffic

Understanding how QoS network software works is reliant on defining the various types of traffic that
it measures. These are:

1.Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.

2.Delay: The time it takes for a packet to go from its source to its end destination. This can often be
affected by queuing delay, which occurs during times of congestion and a packet waits in a queue
before being transmitted. QoS enables organizations to avoid this by creating a priority queue for
certain types of traffic.

3.Loss: The amount of data lost as a result of packet loss, which typically occurs due to network
congestion. QoS enables organizations to decide which packets to drop in this event.

4.Jitter: The irregular timing of packets arriving on a network as a result of congestion, which can
result in packets arriving late and out of sequence. This can cause distortion or gaps in audio and
video being delivered (a small jitter calculation is sketched after this list).
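
Here is a small sketch of the jitter calculation mentioned in point 4, assuming packets were sent at a fixed interval; jitter is taken here as the average change between consecutive inter-arrival gaps, a simplification of the RFC 3550 estimator:

def mean_jitter(arrival_times):
    """Average absolute change between consecutive inter-arrival gaps."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    changes = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]
    return sum(changes) / len(changes) if changes else 0.0

# Packets sent every 20 ms; congestion makes some arrive late.
arrivals = [0.000, 0.020, 0.041, 0.075, 0.095]  # seconds
print(f"mean jitter: {mean_jitter(arrivals) * 1000:.1f} ms")  # ~9.3 ms
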
Congestion Control in TCP:

Networked systems commonly face congestion due to traffic overload on links beyond their
capacity. It results in the loss of packets in the network and, as a result, can severely damage
the user's experience. Although it is impossible to avoid congestion entirely, mechanisms
exist to manage it proactively to prevent damaging effects.

A congestion control protocol must have the following features:

•It must avoid congestion. As the primary goal of the algorithm, it must ensure that the
bandwidth allocated to a particular host does not exceed the bandwidth of the bottleneck link,
which could be responsible for congestion on the network.

•It should be fair. The network resources should be allocated fairly between different hosts.

•The scheme must be efficient. It should ensure that the sender efficiently utilizes the
bandwidth: the sending rate should neither stay far below the bandwidth of the bottleneck
link nor consume it entirely.

Congestion is an important factor in packet-switched networks. It refers to the state of a
network where the message traffic becomes so heavy that the network response time slows
down, leading to packet loss. Because of this, it is necessary to control congestion in the
network, even though it cannot be avoided entirely.

TCP congestion control refers to the mechanism that prevents congestion from happening or
removes it after congestion takes place. When congestion takes place in the network, TCP
handles it by reducing the size of the sender's window. The window size of the sender is
determined by the following two factors:

•Receiver window size

•Congestion window size

Receiver Window Size

It indicates how much data, in bytes, the receiver can accept without sending an
acknowledgment.

Things to remember for the receiver window size:

1.The sender should not send more data than the size of the receiver window.
2.If more data is sent than the receiver window can hold, TCP segments are dropped and
must be retransmitted.
3.Hence the sender should always send an amount of data less than or equal to the size of the
receiver's window.
4.The receiver advertises its window size to the sender in the TCP header.
Congestion Window
It is the TCP state variable that limits the amount of data the sender may inject into the
network before receiving an acknowledgment.
Following are the things to remember for the congestion window:
1.Different variants of TCP use different methods to calculate the size of the congestion
window.
2.Only the sender knows the congestion window and its size; it is not sent over the link or
network. The formula for determining the sender's window size is:
Sender window size = Minimum (Receiver window size, Congestion window size)
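
The following sketch shows this formula in action alongside a simplified slow-start/congestion-avoidance window, with sizes in maximum-segment-size (MSS) units; the starting values and threshold are illustrative:

def sender_window(rwnd, cwnd):
    # The sender may never have more unacknowledged data in flight than
    # either the receiver (rwnd) or the network (cwnd) can absorb.
    return min(rwnd, cwnd)

rwnd, cwnd, ssthresh = 64, 1, 32
for rtt in range(8):
    print(f"RTT {rtt}: cwnd={cwnd}, effective window={sender_window(rwnd, cwnd)}")
    if cwnd < ssthresh:
        cwnd *= 2   # slow start: exponential growth per round trip
    else:
        cwnd += 1   # congestion avoidance: additive increase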

Congestion Control in Frame Relay:


Frame Relay is a standardized wide area network (WAN) technology that specifies the
physical and data link layers of digital telecommunications channels using a packet-switching
methodology. Originally designed for transport across Integrated Services Digital Network
(ISDN) infrastructure, it may be used today in the context of many other network interfaces.

Frame Relay is a packet-switching telecommunications service designed for cost-efficient
data transmission for intermittent traffic between local area networks (LANs) and between
endpoints in wide area networks (WANs). Frame Relay services are usually used either for
transferring data between geographically separated LANs or across a WAN. Frame Relay is a
cost-effective alternative to point-to-point circuits, which are dedicated leased lines between
LANs or in a WAN. Frame Relay is cheaper because, rather than paying for the bandwidth of
one or more leased lines, each router in the network shares the single, multi-access network
provided by the Frame Relay virtual circuit.
Congestion control in Frame Relay is essential to maintain network performance and prevent
the degradation of service quality during periods of high traffic. Below are some key aspects
of congestion control in Frame Relay:

 Frame Relay is like a system for sending data over long distances, and it needs a way
to manage traffic to avoid problems when there's too much data. Here's how it works
in plain language:

 Speed Limit: Frame Relay sets a speed limit for how much data you can send. It's like
saying, "You can send this much data every second." This limit is called the
Committed Information Rate (CIR).

 Extra Data: Sometimes, you might need to send a little more data, like when you have
a burst of traffic. Frame Relay allows for a small extra amount of data for short
periods. This extra amount is called the Excess Burst.

 Importance Tag: Each piece of data you send can have a tag that says how expendable it
is. This tag, called Discard Eligibility (DE), marks data that the network may drop first if it
has to.

 When There's a Traffic Jam: If there's too much data and the network gets congested
(like a traffic jam), Frame Relay may start to drop some data to relieve the congestion.

 First, it looks at the tags. Data marked Discard Eligible (DE) is dropped first, while
untagged data is kept as long as possible.

 If the congestion is very bad, even important data might be dropped. This means it
can get lost.

 Warnings: The network can send warnings (the FECN and BECN bits, for Forward and
Backward Explicit Congestion Notification) to tell the endpoints that it is getting too
congested. When the sender gets this warning, it can slow down to avoid making the
congestion worse.

 Policing Traffic: Devices in the network might also check whether data is following the
speed limit (CIR). If not, they may stop some data from going through.
 Shaping Traffic: Sometimes they shape traffic to make sure it follows the speed limit,
preventing big bursts of data (a policing sketch follows this list).

 Priority: Certain data, like important business information, might be given special
treatment, so it's less likely to be dropped when there's congestion.
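
Here is a minimal sketch of the CIR policing and DE-marking behaviour described above, assuming simple per-interval byte counters: traffic within the CIR is forwarded, traffic within the excess burst is forwarded but marked DE, and anything beyond is dropped:

def police(frame_bytes, sent_so_far, cir_bytes, excess_bytes):
    """Classify one frame against the committed rate for this interval."""
    total = sent_so_far + frame_bytes
    if total <= cir_bytes:
        return "send"                     # within the committed rate (CIR)
    if total <= cir_bytes + excess_bytes:
        return "mark_de"                  # burst: forward, but set the DE bit
    return "drop"                         # beyond CIR + excess burst

# 1000-byte frames against CIR = 4000 bytes/interval, excess burst = 2000 bytes.
sent = 0
for i in range(7):
    action = police(1000, sent, cir_bytes=4000, excess_bytes=2000)
    if action != "drop":
        sent += 1000
    print(f"frame {i}: {action}")   # frames 0-3 send, 4-5 mark_de, 6 drop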

Integrated Services and Differentiated Services:

In computer networking, integrated services or IntServ is an architecture that specifies the
elements to guarantee quality of service (QoS) on networks. IntServ can, for example, be used
to allow video and sound to reach the receiver without interruption.

In the DiffServ model a packet's "class" can be marked directly in the packet, which contrasts
with the IntServ model, where a signaling protocol is required to tell the routers which flows
of packets require special QoS treatment. DiffServ achieves better QoS scalability, while
IntServ provides a tighter QoS mechanism for real-time traffic. These approaches are
complementary and not mutually exclusive.

The IntServ architecture model (RFC 1633, June 1994) was motivated by the needs of real-
time applications such as remote video, multimedia conferencing, visualization, and virtual
reality. It provides a way to deliver the end-to-end Quality of Service (QoS) that real-time
applications require by explicitly managing network resources to provide QoS to specific user
packet streams (flows). It uses "resource reservation" and "admission control" mechanisms as
key building blocks to establish and maintain QoS.

IntServ uses Resource Reservation Protocol (RSVP) to explicitly signal the QoS needs of an
application's traffic along the devices in the end-to-end path through the network. If every
network device along the path can reserve the necessary bandwidth, the originating
application can begin transmitting.

What is DiffServ?

DiffServ is a model for providing QoS in the Internet by differentiating the traffic. The best-
effort method used in the Internet tries to provide the best possible service under the varying
traffic load, rather than trying to differentiate the flows and provide a higher level of service
to some of the traffic. DiffServ tries to provide an improved level of service in the existing
best-effort environment by differentiating the traffic flows. For example, DiffServ will
reduce the latency of traffic containing voice or streaming video, while providing best-effort
service to traffic containing file transfers. Packets are marked by the DiffServ devices at the
borders of the network with information about the level of service they require. Other nodes
in the network read this information and respond accordingly to provide the requested level
of service.
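
Here is a minimal sketch of DiffServ-style marking from an end host, assuming a Linux-like socket API; the DSCP value occupies the upper six bits of the old IP Type-of-Service byte, so it is shifted left by two:

import socket

EF = 46  # Expedited Forwarding: the low-loss, low-latency class used for voice

def open_marked_socket(dscp):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Write the DSCP class into the IP header of every outgoing packet.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

sock = open_marked_socket(EF)
sock.sendto(b"voice frame", ("198.51.100.7", 5004))  # illustrative address
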
What is IntServ?

IntServ is another model for providing QoS in networks. IntServ is based on building a
virtual circuit in the internet using a bandwidth reservation technique. Requests for
reserving bandwidth come from the applications that require a certain level of service.
According to this model, each router in the network has to implement IntServ, and each
application that requires a service guarantee has to make a reservation. When bandwidth
is reserved for a certain application, it cannot be reassigned to another application. Routers
between the sender and the receiver determine whether they can support the reservation made
by the application. If they cannot support it, they notify the receiver; otherwise, they route
the traffic to the receiver. Therefore, in this method, routers remember the properties of the
traffic flow and also supervise it. The task of reserving paths would be very tedious in a busy
network such as the Internet.

Source Based Congestion Avoidance:

While "Source-Based Congestion Avoidance" may not be a widely recognized term, it seems

to suggest an approach to managing network congestion by focusing on the source of data

transmission. In the context of networking, congestion occurs when there is more data trying

to traverse a network than the network can handle efficiently.

Congestion avoidance mechanisms are designed to prevent or alleviate congestion to

maintain a stable and efficient network. In a source-based approach, the emphasis is likely on
the behavior and actions taken by the source of the data transmission. This could involve

adjusting the rate at which data is sent, implementing algorithms to detect and respond to

congestion signals, or employing other strategies to ensure that the source doesn't overwhelm

the network.The specifics of source-based congestion avoidance would depend on the context

in which the term is used.
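
Here is a minimal sketch of this idea, loosely in the spirit of TCP Vegas: the sender compares its expected and actual throughput and backs off when the gap suggests queues are building. The thresholds and units are illustrative assumptions:

def adjust_window(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """Source-based adjustment: shrink the window before loss occurs."""
    expected = cwnd / base_rtt             # throughput if queues were empty
    actual = cwnd / current_rtt            # throughput actually achieved
    diff = (expected - actual) * base_rtt  # estimate of packets queued in the network
    if diff < alpha:
        return cwnd + 1   # little queuing observed: probe for more bandwidth
    if diff > beta:
        return cwnd - 1   # queues building: back off before packets are lost
    return cwnd           # within the comfort zone: hold steady

print(adjust_window(cwnd=20, base_rtt=0.100, current_rtt=0.105))  # 21 (increase)
print(adjust_window(cwnd=20, base_rtt=0.100, current_rtt=0.140))  # 19 (decrease)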

DEC Bit Scheme:

The DECbit scheme is a congestion-avoidance technique, developed at Digital Equipment
Corporation (DEC), in which routers cooperate with TCP senders to predict possible
congestion and prevent it.

When a router wants to signal congestion to the sender, it sets a bit in the header of the
packets it forwards. When a packet arrives at the router, the router calculates the average
queue length over the last (busy + idle) period plus the current busy period. (The router is
busy when it is transmitting packets, and idle otherwise.) When the average queue length
exceeds 1, the router sets the congestion-indication bit in the packet header of arriving
packets.

The destination copies the congestion bit into the corresponding ACK. The sender receives
the ACKs and counts how many packets in the last window arrived with the congestion-
indication bit set. If fewer than half of the packets in the last window had the bit set, the
window is increased linearly (additive increase); otherwise, the window is decreased
multiplicatively. This technique dynamically manages the window to avoid congestion and
tries to balance throughput against delay. The same idea of routers marking packets rather
than dropping them later evolved into Explicit Congestion Notification (ECN) in TCP/IP.
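
Here is a minimal sketch of the sender-side DECbit rule described above, using the constants from the original scheme (increase by one, decrease to 7/8):

def decbit_update(window, marked_acks, total_acks):
    """Adjust the window from the congestion bits seen in the last window."""
    if total_acks == 0:
        return window
    if marked_acks / total_acks < 0.5:
        return window + 1                  # under half marked: additive increase
    return max(1, int(window * 0.875))     # half or more: multiplicative decrease

print(decbit_update(window=16, marked_acks=3, total_acks=16))   # 17
print(decbit_update(window=16, marked_acks=10, total_acks=16))  # 14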
