CN UNIT-4
Transport Layer [09 Periods]: Connection-Oriented and Connectionless Protocols - Process-to-Process Delivery, UDP and TCP protocols, SCTP.
Congestion Control - Data Traffic, Congestion, Congestion Control, QoS, Integrated Services, Differentiated Services, QoS in Switched Networks.
1. Transport Layer and Process-to-Process Delivery
    Process-to-Process Delivery:
       The transport layer delivers data between application processes, not just between
       hosts.
       Each process is identified by a unique combination of IP address and port number
       (the socket address).
    Client/Server Paradigm:
       Most process-to-process communication uses the client/server model.
       A process on the local host is the client; the process on the remote host providing the
       service is the server.
    Multiplexing/Demultiplexing:
       Multiplexing: Combines data from multiple processes into one stream at the sender.
       Demultiplexing: Directs incoming data to the correct process at the receiver, as
       sketched below.
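A minimal sketch of demultiplexing, assuming a toy dispatch table keyed on destination port (in practice the operating system performs this step when delivering segments to bound sockets; the ports and handlers here are illustrative):

```python
# Toy demultiplexer: deliver an incoming segment's payload to the
# process "bound" to the destination port.
handlers = {
    53: lambda data: print("DNS process got", data),
    80: lambda data: print("HTTP process got", data),
}

def demultiplex(dest_port, payload):
    handler = handlers.get(dest_port)
    if handler is None:
        print(f"no process bound to port {dest_port}; segment discarded")
    else:
        handler(payload)

demultiplex(53, b"query example.com")   # -> DNS process got b'query example.com'
demultiplex(8080, b"stray segment")     # -> no process bound to port 8080
```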
2. UDP (User Datagram Protocol)
    Connectionless & Unreliable:
       UDP does not establish a connection before sending data.
       It is unreliable: does not guarantee delivery, order, or error correction.
       No acknowledgments or retransmissions; data packets are called datagrams.
    Lightweight & Fast:
       Adds minimal overhead; suitable for small, time-sensitive messages.
       Used in real-time applications (e.g., video streaming, DNS, VoIP) where speed is more
       important than reliability.
3. TCP (Transmission Control Protocol)
    Connection-Oriented & Reliable:
      Establishes a connection before data transfer.
      Ensures reliable, ordered, and error-checked delivery.
      Uses handshake, flow control, and error control mechanisms.
    Sequencing & Retransmission:
      Assigns sequence numbers to data bytes.
      Retransmits lost or corrupted data.
      Used in applications requiring high reliability (e.g., HTTP, HTTPS, FTP, SMTP).
4. Key Differences: UDP vs. TCP
 Feature            UDP (User Datagram Protocol)    TCP (Transmission Control Protocol)
 Connection Type    Connectionless                  Connection-oriented
 Reliability        Unreliable                      Reliable
 Speed              Faster                          Slower
 Data Order         No guarantee                    Guarantees order
 Error Checking     Minimal (checksum only)         Extensive (handshake, retransmission)
 Use Cases          Real-time, small, fast apps     Applications requiring reliability
User Datagram Protocol (UDP):
   Definition and Development
      UDP stands for User Datagram Protocol.
       Developed by David P. Reed in 1980.
       Operates at the transport layer for process-to-process communication.
   Protocol Characteristics
       Connectionless: No connection is established before sending data.
       Unreliable: Does not guarantee delivery or order of data.
       No Acknowledgments: Receiver does not confirm receipt of data.
       Datagrams: Data packets are called datagrams.
       Stateless: Each datagram is independent.
   UDP Header Format
        Header Size: 8 bytes (64 bits).
        Fields:
           Source Port (16 bits): Identifies the sender's process.
           Destination Port (16 bits): Identifies the receiver's process.
           Length (16 bits): Total length of the UDP packet (header + data).
              UDP length = IP datagram length - IP header length
           Checksum (16 bits): Optional in IPv4; detects errors in the datagram.
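As a rough illustration of the fixed 8-byte layout above, the header can be packed and unpacked with Python's struct module (the port numbers and payload are arbitrary; a checksum of 0 means "not computed" in IPv4):

```python
import struct

payload = b"hello"
src_port, dst_port = 5000, 53
length = 8 + len(payload)            # header (8 bytes) + data
checksum = 0                         # 0 = checksum not computed (IPv4 only)

# "!HHHH" = four big-endian 16-bit fields, matching the layout above.
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

sp, dp, ln, ck = struct.unpack("!HHHH", datagram[:8])
print(sp, dp, ln, datagram[8:ln])    # -> 5000 53 13 b'hello'
```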
    Importance and Uses
        Speed Over Reliability: Used when speed is more important than reliability.
        One-Way Data Flow: Ideal for data flowing in one direction.
        Streaming and Real-Time: Used for video streaming, online gaming, VoIP.
        Faster than TCP: Lower overhead and latency.
    Advantages
        Fast: No connection setup or handshake.
        Low Overhead: Small header (8 bytes).
        Supports Broadcast/Multicast: Easy to send to multiple destinations.
        Less Memory Usage: Simple and lightweight.
    Disadvantages
        Unreliable: No guarantee of delivery or order.
        No Acknowledgments: Cannot confirm whether data was received.
        No Congestion Control: Can overwhelm networks.
        No Retransmission: Lost datagrams are not resent by the protocol.
    Applications
       Domain Name System (DNS)
       Simple Network Management Protocol (SNMP)
       Routing Information Protocol (RIP)
        Trivial File Transfer Protocol (TFTP)
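A minimal UDP exchange using Python's standard socket API, shown here as a sketch (the loopback address and port 9999 are arbitrary choices). Note there is no connect/accept step: each sendto() is an independent datagram.

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))              # receiver's socket address

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9999))   # fire-and-forget datagram

data, addr = server.recvfrom(1024)            # blocks until a datagram arrives
print(f"server got {data!r} from {addr}")
server.close(); client.close()
```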
    Comparison Table: UDP vs TCP
 Service/Feature                  UDP                           TCP
 Connection-oriented              No                            Yes
 Connection-less                  Yes                           No
 Reliable delivery                No                            Yes
 Ordered delivery                 No                            Yes
 Congestion control               No                            Yes
 Acknowledgments                  No                            Yes
Transmission Control Protocol (TCP):
Definition and History
   TCP stands for Transmission Control Protocol.
    Introduced in 1974 by Vint Cerf and Bob Kahn as a core protocol of the Internet
    protocol suite.
   Standardized in 1981 as RFC 793.
Protocol Characteristics
   Connection-Oriented: Establishes a connection before data transfer.
   Reliable: Guarantees delivery, order, and error correction.
   Flow and Congestion Control: Manages data flow and prevents network overload.
   Data Units: Data packets are called segments.
Importance
  Accuracy and Reliability: Used where data integrity is critical (e.g., web browsing,
  email).
  Error Correction: Can fix errors by retransmitting lost or corrupted data.
TCP Segment Format
   Header Size: 20–60 bytes.
   Key Fields:
     Source/Destination Port (16 bits each): Identify the sending and receiving
     processes.
     Sequence/Acknowledgment Number (32 bits each): Track data order and confirm
     receipt.
     Header Length (4 bits): Indicates header size.
     Control Flags (6 bits): Control connection states (SYN, ACK, FIN, etc.).
      Window Size (16 bits): Manages flow control.
      Checksum (16 bits): Ensures error detection.
      Urgent Pointer (16 bits): For urgent data.
      Options and Padding: Optional fields for advanced features.
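A sketch of parsing the 20-byte fixed header described above with Python's struct module (the synthetic SYN segment at the end uses arbitrary port and sequence values):

```python
import struct

# "!HHIIBBHHH" = ports (16+16), seq (32), ack (32), data offset/flags (8+8),
# window (16), checksum (16), urgent pointer (16) -- 20 bytes in total.
def parse_tcp_header(segment: bytes):
    (src, dst, seq, ack, offset_byte, flags,
     window, checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
        "header_len": (offset_byte >> 4) * 4,     # offset is in 32-bit words
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10),
        "FIN": bool(flags & 0x01), "window": window,
    }

# Synthetic SYN segment: ports 1234 -> 80, initial sequence number 100.
syn = struct.pack("!HHIIBBHHH", 1234, 80, 100, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```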
Advantages
   Reliable Delivery: Data is received in the same order as sent.
   Error Detection and Correction: Retransmits lost or corrupted data.
   Flow and Congestion Control: Prevents network congestion and ensures smooth data
   flow.
Disadvantages
   Slower than UDP: More overhead due to connection setup and control mechanisms.
   No Broadcast/Multicast: Only supports unicast communication.
TCP Handshake Process (Three-Way Handshake)
1. SYN: Client initiates connection with a SYN packet.
2. SYN-ACK: Server responds with SYN-ACK.
3. ACK: Client sends ACK to establish the connection.
   Connection Termination: FIN packets are used to close the connection.
  Message Types
    SYN: Initiate and synchronize connection.
    ACK: Acknowledge received data.
    SYN-ACK: Combined SYN and ACK.
    FIN: Terminate connection.
  Real-World Use
     Applications: HTTP, HTTPS, Telnet, FTP, SMTP, etc.
      Daily Life: Used for web browsing, email, file transfer, and more.
TCP 3-Way Handshake: Connection Establishment
  Step 1: SYN
      The client sends a SYN (synchronize) packet to the server, initiating the connection
      and specifying its initial sequence number.
  Step 2: SYN-ACK
      The server responds with a SYN-ACK (synchronize-acknowledge) packet,
      acknowledging the client's SYN and providing its own sequence number.
  Step 3: ACK
      The client sends an ACK (acknowledge) packet to the server, confirming receipt of the
      server's SYN-ACK and completing the connection setup.
  Result:
     A reliable, full-duplex connection is established between client and server, allowing
     data transfer.
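In socket programming the handshake is carried out by the operating system: it happens inside connect() on the client and is completed by accept() on the server. A self-contained sketch (port 8888 is an arbitrary choice):

```python
import socket, threading

ready = threading.Event()

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 8888))
    s.listen(1)
    ready.set()                     # now safe for the client to connect
    conn, addr = s.accept()         # handshake completed by the OS here
    print("server: connection from", addr)
    conn.close(); s.close()         # close() triggers the FIN exchange

t = threading.Thread(target=server); t.start()
ready.wait()
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", 8888))      # SYN -> SYN-ACK -> ACK happen inside connect()
print("client: connected")
c.close(); t.join()
```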
TCP 3-Way Handshake: Connection Termination
  Step 1: FIN
      The client sends a FIN (finish) packet to request connection termination.
  Step 2: FIN-ACK
      The server responds with a FIN-ACK (finish-acknowledge) packet, acknowledging the
      request and sending its own FIN.
  Step 3: ACK
      The client sends an ACK to the server, confirming the termination, and the connection
      is closed.
Stream Control Transmission Protocol (SCTP)
  Overview
     SCTP is a reliable, message-oriented transport protocol developed by IETF.
     Designed for applications needing reliability and multi-streaming, such as multimedia.
  Key Features
     Process-to-process communication: Like TCP and UDP.
     Multi-streaming: Multiple independent streams within one connection (association).
     Multi-homing: Supports multiple IP addresses for redundancy.
     Full-duplex: Data can flow in both directions simultaneously.
     Acknowledgment (ACK) mechanism: Ensures reliable data delivery.
  Packet Format
    Header: Contains source/destination port (16 bits each), verification tag (32 bits),
    checksum (32 bits).
    Chunks: Packets are divided into control chunks (manage connection) and data
    chunks (carry user data).
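A sketch of unpacking the 12-byte SCTP common header described above (the sample values are arbitrary; real SCTP computes the checksum with CRC32c):

```python
import struct

# "!HHII" = source port (16), destination port (16),
# verification tag (32), checksum (32) -- the 12-byte common header.
def parse_sctp_common_header(packet: bytes):
    src, dst, vtag, checksum = struct.unpack("!HHII", packet[:12])
    return {"src_port": src, "dst_port": dst,
            "verification_tag": vtag, "checksum": checksum}

sample = struct.pack("!HHII", 5060, 5060, 0xDEADBEEF, 0)
print(parse_sctp_common_header(sample))
```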
Comparison Table: SCTP vs TCP vs UDP
 Service/Feature            UDP     TCP     SCTP
 Sequenced data delivery    No      Yes     Yes
 Multi-streaming            No      No      Yes
 Multi-homing               No      No      Yes
 Connection-oriented        No      Yes     Yes
 Connectionless             Yes     No      No
 Half-closed connection     N/A     Yes     No
 Congestion control         No      Yes     Yes
 Partially reliable data    No      No      Optional
1. Congestion Definition
   Congestion occurs when the network load exceeds the network capacity, leading to
   degraded performance or data loss.
2. Congestion Control Overview
   Congestion control refers to mechanisms that manage or prevent congestion, keeping
   traffic below network capacity.
   Two main categories: Open-loop and Closed-loop congestion control.
3. Open-Loop Congestion Control
   Goal: Prevent congestion before it happens.
   Policies:
   1. Retransmission Policy: Carefully manage retransmission timers to avoid excessive
      resending and increased congestion.
   2. Window Policy: Use selective repeat windows instead of Go-Back-N to avoid resending
      already-received packets.
   3. Discarding Policy: Routers discard less important or corrupted packets to reduce
      congestion while maintaining acceptable quality.
   4. Acknowledgment Policy: Reduce acknowledgment traffic by sending
      acknowledgments for multiple packets or only when necessary.
   5. Admission Policy: Deny new virtual circuit requests if resources are insufficient,
      preventing further congestion.
4. Closed-Loop Congestion Control
   Goal: Treat or alleviate congestion after it occurs.
   Techniques:
   1. Backpressure: Congested nodes stop accepting packets from upstream nodes,
      causing congestion to propagate backward and slow the source.
   2. Choke Packet Technique: Routers send choke packets to the source, requesting it to
      reduce traffic.
   3. Implicit Signaling: The source detects congestion indirectly, such as by missing
      acknowledgments.
   4. Explicit Signaling: Congested nodes send explicit signals (forward or backward) to
      inform sources or destinations about congestion.
QoS Techniques
  Traffic Prioritization
      Delay-sensitive traffic (e.g., VoIP, video) is given higher priority at routers and switches.
      Ensures critical applications are not affected by congestion.
  Resource Reservation
      Protocols like RSVP reserve bandwidth for specific applications or streams.
      Guarantees minimum performance for prioritized traffic.
  Queuing
      Packets are placed in different queues based on their priority.
      High-priority queues are processed first, reducing delays for critical data (see the
      sketch after this list).
  Traffic Marking
      Packets are tagged (e.g., using DSCP or CoS) to indicate their importance.
      Helps network devices recognize and prioritize marked traffic.
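A minimal strict-priority queuing sketch, assuming three illustrative classes (0 = voice, 1 = video, 2 = best effort); real routers implement this with multiple physical queues:

```python
import heapq

queue = []
counter = 0                          # tie-breaker keeps FIFO order per class

def enqueue(priority, packet):
    global counter
    heapq.heappush(queue, (priority, counter, packet))
    counter += 1

def dequeue():
    return heapq.heappop(queue)[2]   # always serves the highest priority first

enqueue(2, "web page"); enqueue(0, "VoIP frame"); enqueue(1, "video chunk")
print(dequeue(), "|", dequeue(), "|", dequeue())
# -> VoIP frame | video chunk | web page
```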
QoS Best Practices
   Avoid Low Bandwidth Limits: Set sufficient bandwidth to prevent excessive packet
   drops.
   Proper Queue Distribution: Allocate queues so high-priority traffic gets preferential
   treatment.
   Bandwidth Guarantees: Reserve bandwidth only for specific services, not all traffic.
   Consistent Prioritization: Use either service-based or security policy-based prioritization,
   not both, to simplify management.
   Minimize Complexity: Keep QoS configuration simple for easier management and
   troubleshooting.
   Accurate Testing: Use UDP for testing and avoid oversubscribing bandwidth during
   tests.
Advantages of QoS
  Application Prioritization: Ensures critical applications always get necessary resources.
  Better Resource Management: Optimizes use of network resources and reduces costs.
  Enhanced User Experience: Improves performance of key applications for end users.
  Point-to-Point Traffic Management: Delivers reliable, ordered data transmission.
  Packet Loss Prevention: Reduces dropped packets, especially for high-priority traffic.
   Latency Reduction: Speeds up network requests by prioritizing critical data.
Types of Network Traffic (QoS Perspective)
  Bandwidth:
    The maximum data transfer rate of a network link.
       QoS assigns specific bandwidth amounts to different traffic types or queues.
   Delay (Latency):
       The time taken for a packet to travel from source to destination.
       Queuing delay can occur during congestion; QoS uses priority queues to minimize
       delays for important traffic.
   Loss:
       Packet loss happens when data is dropped due to congestion or errors.
       QoS helps decide which packets to drop to protect critical applications.
   Jitter:
       The variation in packet arrival times, causing out-of-order packets.
       High jitter affects real-time applications like voice and video; QoS reduces jitter by
       prioritizing such traffic.
Getting Started with QoS
  Identify Critical Traffic:
      Determine which applications or services are important, bandwidth-heavy, or
      sensitive to latency and loss.
   Traffic Classification:
      Classify traffic by port, IP, application, or user to assign priorities.
   Bandwidth Management:
      Set bandwidth limits and use queuing to ensure critical traffic gets priority.
   Traffic Shaping and Scheduling:
      Use traffic shaping and scheduling algorithms to control flow and prevent
      congestion.
   Policy Deployment:
      Deploy policies that classify and prioritize traffic, ensuring availability and
      performance for key applications.
Congestion:
     Occurs when too many packets are present in the network, causing delays and packet
     loss.
     Degrades network performance.
     Both network and transport layers work together to manage congestion by reducing
     the load on the network.
    Leaky Bucket Algorithm:
   Purpose:
      Controls and smooths the rate and amount of data sent to the network.
      Prevents bursts of traffic from causing congestion.
   How it works:
   1. Bucket Analogy:
         Imagine a bucket with a small hole at the bottom.
         Water (data) is poured into the bucket at a variable rate.
         Water leaks out of the bucket at a constant rate, no matter how fast it is poured in.
   2. Overflow:
         If the bucket is full, extra water (data) spills over and is lost.
   3. Network Application:
         Data packets enter the bucket (buffer) at varying speeds.
         Packets leave the bucket at a steady, fixed rate.
         If the bucket is full, new packets are dropped.
   4. Result:
         Bursty traffic is converted into a smooth, constant flow.
          Example: If data is sent at 10 Mbps for 4 sec (40 Mb), then 0 Mbps for 3 sec, then
          8 Mbps for 2 sec (16 Mb), the 56 Mb total is smoothed to a steady 8 Mbps output
          lasting 7 sec.
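A small simulation of this behavior, using the per-tick numbers from the example above (the capacity and leak rate are assumed parameters):

```python
def leaky_bucket(arrivals, capacity, leak_rate):
    level, output, dropped = 0, [], 0
    for burst in arrivals:               # one entry per time tick
        if level + burst <= capacity:
            level += burst
        else:
            dropped += burst - (capacity - level)
            level = capacity             # bucket full; excess spills over
        sent = min(level, leak_rate)     # constant outflow per tick
        output.append(sent)
        level -= sent
    return output, dropped

# 10 Mb/tick for 4 ticks, idle for 3, 8 Mb/tick for 2 (as in the example):
out, lost = leaky_bucket([10, 10, 10, 10, 0, 0, 0, 8, 8],
                         capacity=16, leak_rate=8)
print(out, "dropped:", lost)   # output never exceeds the 8 Mb/tick leak rate
```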
Token bucket algorithm:
   Purpose:
       Controls the rate and burstiness of data sent over a network by using tokens.
      Designed to overcome the limitations of the leaky bucket algorithm, which only allows
      a constant average rate and does not benefit from idle periods.
   How It Works:
   1. Tokens are added to a bucket at a fixed rate (e.g., every second).
   2. The bucket has a maximum capacity (cannot hold more than a certain number of
      tokens).
   3. To send a packet, a token must be removed from the bucket.
   4. If there are no tokens, packets must wait or are dropped.
   5. If the host is idle, tokens accumulate in the bucket.
   6. When the host wants to send a burst of data, it can use all accumulated tokens at
      once.
   Example:
     If 10 tokens are added per tick and the host is idle for 10 ticks, it can send a burst of
     100 packets (if the bucket can hold that many tokens).
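A compact rate-limiter sketch of this idea (the rate and capacity values are illustrative). Tokens accrue continuously up to the bucket capacity, so idle time is banked as burst allowance:

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # spend one token per packet
            return True
        return False                  # no tokens: wait or drop

tb = TokenBucket(rate=5, capacity=10)
print([tb.allow() for _ in range(12)])  # first 10 pass as a burst, then throttled
```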
   Advantages:
      Allows for bursts of traffic as long as tokens are available.
     Makes use of idle time by letting hosts accumulate tokens for future use.
     More flexible than the leaky bucket algorithm for real-world traffic patterns.
Congestion:
   Definition:
       A state in the network layer where message traffic is so heavy that it slows down
       network response time.
   Effects:
      Increased delay: Performance decreases as packets take longer to reach their
      destination.
       Retransmission: Higher delay leads to more retransmissions, worsening the
       congestion.
Congestion Control Algorithms:
   Leaky Bucket Algorithm
      Concept:
         Imagine a bucket with a small hole at the bottom.
         Water (data) enters at any rate but leaks out at a constant rate.
         If the bucket is full, extra water (packets) spills over and is lost.
      How it works:
      1. Host sends packets into the bucket (buffer).
      2. Packets exit the bucket at a steady, fixed rate.
      3. Converts bursty traffic into a smooth, uniform flow.
      4. If the bucket is full, new packets are dropped.
      Result:
         Prevents sudden bursts from overwhelming the network.
   Token Bucket Algorithm
      Purpose:
         Overcomes the inflexibility of leaky bucket by allowing bursts and using idle time.
      How it works:
      1. Tokens are added to the bucket at regular intervals, up to a maximum.
      2. To send a packet, a token is removed from the bucket.
      3. If there are no tokens, packets must wait or are dropped.
      4. If the host is idle, tokens accumulate for future use.
      Superiority:
         Allows for bursts of data if tokens are available.
         More flexible than leaky bucket for real-world traffic patterns.
    Formula (Token Bucket):
       Let C = bucket capacity (bits), ρ = token arrival rate (bits/sec), and M = maximum
       output rate (bits/sec). During the longest possible burst of length S seconds, the
       data sent (C + ρS) equals what the line can carry (MS), so:
          S = C / (M - ρ)
       Example: with C = 1 Mbit, ρ = 2 Mbps, and M = 10 Mbps, S = 1 / (10 - 2) = 0.125 sec.
Integrated Services (IntServ):
   Definition:
     Integrated Services (IntServ) is a network architecture for ensuring Quality of Service
     (QoS) by reserving resources for specific data flows.
      Designed to allow applications like video and audio to be delivered without
      interruption.
Key Features:
   End-to-End Resource Reservation:
      Each router along the path must support IntServ and reserve resources for each
      flow.
       Uses signaling protocols (like RSVP) to request and reserve bandwidth, delay, and
       loss guarantees.
   Explicit Signaling:
      Applications signal their requirements to routers, which then reserve resources if
      available.
      If a router cannot support the reservation, it sends a reject message.
   Soft State:
       Reservations are maintained in "soft state" and time out if not refreshed, so
       resources are released if the application stops or fails.
Service Classes:
   Guaranteed Service:
       Provides strict guarantees for bandwidth, delay, and packet loss; ideal for real-time,
       intolerant applications like VoIP.
   Controlled Load Service:
       Mimics best-effort service in a lightly loaded network; suitable for adaptive
       applications that can tolerate some delay.
   Best-Effort Service:
       Standard Internet service with no guarantees; further divided into interactive
       burst, interactive bulk, and asynchronous categories.
Implementation Components:
   Signaling Protocol (RSVP):
      Used to request and reserve resources.
   Admission Control:
      Determines if requested resources are available.
   Classifier and Packet Scheduler:
      Ensures packets are handled according to their reserved service class.
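A toy admission-control sketch (illustrative only, not RSVP itself): a router accepts a new flow's reservation only while enough unreserved bandwidth remains on the outgoing link.

```python
class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, flow_id, request_mbps):
        # Accept only if the reservation still fits within link capacity.
        if self.reserved + request_mbps <= self.capacity:
            self.reserved += request_mbps
            return f"{flow_id}: accepted ({self.reserved}/{self.capacity} Mbps reserved)"
        return f"{flow_id}: rejected (insufficient resources)"

link = Link(capacity_mbps=100)
print(link.admit("voip-1", 2))
print(link.admit("video-1", 80))
print(link.admit("video-2", 30))      # exceeds remaining capacity -> rejected
```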
Scalability Issues:
   Resource Intensive:
       Requires every router to track and maintain reservations for each flow, making it
       difficult to scale in large networks.
   Limited Deployment:
         Not widely used in large networks due to scalability concerns; more common in
         small or controlled environments.
Differentiated Services (DiffServ):
     Definition:
        A networking architecture for classifying and managing network traffic to provide
        Quality of Service (QoS) on modern IP networks.
       Designed to be simple and scalable, supporting multiple mission-critical
       applications.
     How it works:
       Traffic Conditioning: Ensures traffic entering the DiffServ domain complies with
       agreed policies.
       Packet Classification: Groups packets into specific classes using traffic descriptors.
        Packet Marking: Assigns priority by marking packets (using DSCP in the IP
        header), as sketched after this list.
       Congestion Management: Uses queuing and scheduling to manage traffic.
       Congestion Avoidance: Monitors and controls traffic to minimize congestion,
       including selective packet dropping.
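A sketch of classification plus DSCP marking. EF (46) and AF41 (34) are the standard codepoints for voice and interactive video; the application-to-class mapping and the UDP socket are assumptions for illustration. On most Unix-like systems the DSCP value is set through the IP TOS byte:

```python
import socket

DSCP = {"voip": 46, "video": 34, "best_effort": 0}   # EF, AF41, default

def mark_socket(sock, traffic_class):
    tos = DSCP[traffic_class] << 2    # DSCP occupies the upper 6 bits of TOS
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, "voip")                # routers in the domain can now prioritize
print("TOS byte:", s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
s.close()
```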
     Benefits:
       Scalability: Handles large networks efficiently by treating traffic in aggregated
       classes rather than per flow.
       Reduced Device Burden: Minimizes processing overhead for network devices.
        Domain Scope: QoS is managed within a network domain, not necessarily end-to-end.
   Comparison: Integrated Services vs. Differentiated Services
 Feature          Integrated Services (IntServ)           Differentiated Services (DiffServ)
 Definition       Per-flow resource reservation           Packet classification and marking
 Functionality    Requires prior resource reservation     Marks packet priority; no reservation needed
 Scalability      Not scalable (per-flow state)           Highly scalable (per-class state)
 Setup            Per-flow setup                          Long-term, class-based setup
 Service Scope    End-to-end                              Domain-based
Conclusion:
 DiffServ is a scalable, efficient way to provide QoS by classifying and marking traffic, while
 IntServ requires per-flow resource reservation and is less scalable.