
Foundation: Building a Network – Key Concepts

1. Network Requirements

 Performance Metrics:
o Bandwidth: Maximum data transfer rate.
o Latency: Delay in data transfer.
o Throughput: Actual transfer rate.
o Jitter: Variation in delay.
 Scalability: Network’s ability to grow.
 Reliability: Redundancy and fault tolerance.
 Security: Data protection, authentication, and authorization.

2. Network Models

 OSI Model (7 Layers):


o Physical, Data Link, Network, Transport, Session, Presentation, Application.
 TCP/IP Model (4 Layers):
o Network Interface, Internet, Transport, Application.

3. Network Topologies

 Physical: Star, Ring, Bus, Mesh


 Logical: How data flows regardless of physical layout

4. Network Components

 End Devices: PCs, phones, servers


 Intermediary Devices: Routers, Switches, Access Points, Firewalls
 Media: Twisted pair, fiber optics, wireless

5. IP Addressing & Subnetting

 IPv4 vs IPv6
 Subnetting: Divide networks for efficient routing
 CIDR (Classless Inter-Domain Routing)
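
A quick illustration of subnetting and CIDR using Python's standard ipaddress module (the 192.168.10.0/24 block and the /26 split are arbitrary example values):

import ipaddress

# Take an example /24 block and split it into four /26 subnets.
network = ipaddress.ip_network("192.168.10.0/24")
print(network.num_addresses)                     # 256 addresses in a /24

for subnet in network.subnets(new_prefix=26):    # CIDR-style division of the block
    print(subnet, "usable hosts:", subnet.num_addresses - 2)

# CIDR membership test: does this address fall inside the block?
print(ipaddress.ip_address("192.168.10.77") in network)   # True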

6. Routing & Switching Basics

 Switching:
o MAC address tables, VLANs
 Routing:
o Static vs Dynamic (RIP, OSPF, EIGRP, BGP)
o Routing tables and algorithms

7. Protocols

 Application Layer: HTTP, FTP, DNS, DHCP


 Transport Layer: TCP vs UDP
 Network Layer: IP, ICMP
 Link Layer: Ethernet, ARP

8. Network Design Principles


 Core, Distribution, Access layers
 Redundancy (STP, HSRP)
 Security Considerations
 Modular Design: Easy scaling and troubleshooting

9. Network Tools & Simulation

 Packet Tracer, GNS3, Wireshark


 Use these to design and simulate networks

10. Network Implementation Steps

1. Requirement Analysis
2. Design: Logical & Physical
3. Addressing & Subnetting
4. Device Configuration
5. Testing and Validation
6. Deployment
7. Monitoring & Maintenance

Requirements – Foundation: Building a Network (VTU M.Tech)

This topic typically comes under Module 1 in the subject Foundation: Building a Network.
Below are detailed notes on Network Requirements, which define the goals and constraints
while designing or building a network.

1. Key Network Requirements

When designing a network, the following major requirements must be considered:

1.1 Functionality

The network must meet the needs of users, such as:

 Resource sharing (files, printers)


 Communication (email, messaging, VoIP)
 Access to the Internet
 Scalability for future growth

1.2 Performance Metrics

Performance refers to how well a network can transmit data. Key metrics:
Metric Description
Bandwidth Maximum data transfer rate (bits/sec)
Throughput Actual data transferred per second
Latency Time delay between sender and receiver
Jitter Variation in delay
Error Rate Number of corrupted bits over time

1.3 Reliability

 The ability of the network to maintain service despite failures


 Measured using:
o Mean Time Between Failures (MTBF)
o Mean Time To Repair (MTTR)
 Requires redundancy (backup links, multiple routers)

1.4 Security
Essential to protect data from threats like eavesdropping, tampering, or DoS attacks.
Key areas:

 Confidentiality: Data should not be accessed by unauthorized users


 Integrity: Data must not be altered during transit
 Authentication: Identity verification of users/devices
 Authorization: Control of resource access

1.5 Scalability

 The network should handle:


o Growing number of devices/users
o Increased data traffic
 Solutions: VLANs, subnetting, hierarchical network design

1.6 Cost Efficiency

 Optimize initial setup cost (hardware, cables, labor)


 Reduce operational cost (maintenance, power)
 Balance between cost vs performance vs scalability

📋 Summary Table

Requirement Description
Functionality Meet user demands for communication, data sharing
Performance High throughput, low latency/jitter
Reliability Fault tolerance, backup systems
Security Confidentiality, integrity, authentication
Scalability Easy expansion without redesign
Cost Minimize setup and operational costs
Topic 2

Here’s a detailed breakdown of the "Requirements" topic under the Foundation: Building a
Network subject (VTU M.Tech):

Requirements for Building a Network


When designing or building a computer network, several key requirements must be considered
to ensure it meets the expected performance, scalability, and security standards.

1. Performance Requirements

These define how well the network performs under certain conditions.

Key Metrics:

 Bandwidth: The maximum data rate (bps) that the network can handle.
 Throughput: Actual rate of successful message delivery over the communication
channel.
 Latency (Delay): Time taken for a data packet to reach the destination.
 Jitter: Variation in packet arrival times (important for real-time applications like VoIP).
 Packet Loss: Number of packets that never reach the destination.

➤ Examples:

 A VoIP call needs low latency and jitter.


 A cloud backup system needs high throughput.

2. Scalability

 Ability of a network to grow in size (more users, devices, and services) without
significant performance degradation.
 Consider use of hierarchical design, modular components, and dynamic routing
protocols.

3. Availability & Reliability

 Availability: Network should be accessible when needed.


 Reliability: Network should work consistently without failure.

Techniques to ensure these:

 Redundant links and devices (failover mechanisms)


 Load balancing
 Use of RAID in storage
 UPS for power continuity

4. Security

Protect the network from unauthorized access and attacks.

Key Security Measures:

 Authentication & Authorization


 Firewalls, Intrusion Detection/Prevention Systems (IDS/IPS)
 VPNs, Encryption protocols (SSL/TLS, IPsec)
 Access control lists (ACLs)

5. Manageability
The ease with which the network can be monitored, maintained, and configured.

Tools and Features:

 SNMP (Simple Network Management Protocol)


 Network Monitoring Tools: SolarWinds, Nagios, Wireshark
 Remote Management: SSH, RDP
 Documentation and configuration backups

6. Cost Considerations

Balancing between budget constraints and technical needs:

 Initial setup cost vs. operational cost


 Equipment cost vs. performance
 Licensing (Cisco, Juniper, etc.)

7. Compliance & Standards

Ensure that the network adheres to:

 Industry standards (IEEE, IETF)


 Regulatory requirements (HIPAA, GDPR, etc.)
 Vendor-specific compatibility

Perspectives – Foundation: Building a Network (VTU M.Tech)

In the context of this subject, "Perspectives" refers to the different viewpoints or considerations
taken into account when designing, analyzing, or managing a computer network. These
perspectives guide network architects in building efficient, secure, and scalable systems.

Key Perspectives in Network Design

1. Service Perspective

Focuses on what the network should provide to users:

 End-to-end communication
 Application support (email, video, file transfer)
 Quality of Service (QoS)
 Security & Privacy

Example: A VoIP application needs low latency and jitter-free communication

2. Performance Perspective

Evaluates how well the network meets its operational goals.

Metrics include:

 Latency: Delay in data transfer


 Throughput: Volume of data moved successfully
 Packet loss: Data lost in transit
 Jitter: Fluctuation in packet arrival time

Goal: Optimize performance based on application needs (e.g., video streaming vs file
download).

3. Implementation Perspective

Deals with how the network is built and configured:

 Physical topology: Star, mesh, bus, hybrid


 Logical topology: VLANs, subnets
 Routing & switching methods
 Protocol stack used (TCP/IP, OSI)

Example: Choosing between centralized or distributed network control (e.g., SDN vs traditional).

4. Economic Perspective

Addresses cost-related concerns:

 Initial setup costs (equipment, cabling)


 Operational costs (maintenance, power)
 Scalability cost (future upgrades)

Design must balance performance and cost effectively.

5. Security Perspective

Ensures confidentiality, integrity, and availability of data.

 Encryption, authentication, firewalls


 Protection from:
o DDoS
o Data tampering
o Unauthorized access

Network design must include security protocols (e.g., HTTPS, VPN, IPSec).

6. Manageability Perspective

Focuses on ease of monitoring, configuring, and troubleshooting the network.

 Centralized management tools
 Fault detection and logging
 Remote configuration and updates

Helps reduce downtime and ensures efficient operations.

Summary Table

Perspective Focus Area

Service What services the network should support

Performance How efficiently data flows through the network

Implementation How the network is physically/logically built

Economic Cost efficiency and ROI

Security Protecting network data and devices

Manageability Monitoring and maintenance

These perspectives are often interdependent. For instance, improving performance may
increase cost, or adding security could affect latency.


Application Programmer, Network Operator, Network Designer

Here's a clear breakdown of the roles and perspectives of key stakeholders in building and
maintaining a computer network: the Application Programmer, Network Operator, and
Network Designer.

1. Application Programmer

Focus: Develops software applications that use the network to communicate.

Responsibilities:

 Design applications that use protocols like HTTP, FTP, or custom protocols.
 Ensure reliable data transmission using services like TCP or UDP.
 Handle error detection, retransmission logic (if not handled by lower layers).
 Optimize for performance, especially in real-time apps (e.g., chat, video streaming).

Key Questions:

 Does the network provide enough bandwidth?


 Is latency low enough for real-time operations?
 Should the application use TCP or UDP?

2. Network Operator

Focus: Day-to-day management, operation, and troubleshooting of the network.


Responsibilities:

 Monitor and maintain network devices (routers, switches, firewalls).


 Configure and troubleshoot routing protocols, VLANs, and ACLs.
 Ensure availability, security, and performance.
 Use tools like Wireshark, SNMP, Nagios, or SolarWinds.

Key Questions:

 Is the network meeting performance and uptime SLAs?


 Are there bottlenecks, packet loss, or security threats?
 Are all devices configured and updated correctly?

3. Network Designer (Architect)

Focus: Planning and designing the structure and layout of the network.

Responsibilities:

 Design topology (e.g., star, mesh), IP addressing scheme, subnets.


 Choose appropriate routing protocols and network technologies.
 Incorporate redundancy, scalability, and security.
 Document network architecture and create network diagrams.

Cost-Effective Resource Sharing

Definition:
Sharing resources in a way that minimizes overall cost while maximizing the utilization and
efficiency of those resources.

Why It Matters:

 Reduces Capital and Operational Expenses: Instead of buying or maintaining individual
resources for each user or department, shared resources cut down on redundant spending.
 Improves Utilization: Avoids idle resources by distributing them among multiple users
or processes.
 Enhances Collaboration: Shared resources can be accessed by multiple stakeholders,
promoting better teamwork and communication.
 Scales with Demand: Enables flexible allocation based on current needs, avoiding over-
provisioning.

How to Achieve Cost-Effective Resource Sharing:

1. Virtualization
Running multiple virtual machines or applications on a single physical device to
maximize hardware use.
2. Cloud Computing
Using cloud services where resources like computing power and storage are shared and
billed based on usage.
3. Time-Sharing Systems
Multiple users share system resources by accessing them at different times, common in
early computer systems and still relevant in some environments.
4. Network Resource Sharing
Sharing bandwidth and connectivity among multiple users instead of individual dedicated
connections.
5. Load Balancing
Distributing workloads efficiently across shared resources to prevent bottlenecks and
overuse.
6. Centralized Management
Using management tools to monitor and allocate shared resources dynamically and fairly.
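
Load balancing (point 5 above), for example, can be sketched as a simple round-robin rotation over a shared server pool; the server names and requests below are hypothetical:

from itertools import cycle

# Minimal round-robin load balancer sketch: requests are spread evenly across a
# shared pool of servers instead of dedicating a server to each client.
servers = cycle(["server-a", "server-b", "server-c"])

def dispatch(request: str) -> str:
    target = next(servers)              # pick the next server in rotation
    return f"{request} -> {target}"

for req in ["req1", "req2", "req3", "req4", "req5"]:
    print(dispatch(req))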

Support for Common Services

Definition:
The infrastructure, tools, and mechanisms provided to facilitate and enhance shared services that
multiple users or applications rely on for essential functions.

What Are Common Services?

Common services are standard, reusable functionalities used across different applications or
systems, such as:

 Authentication and Authorization: Managing user login and permissions.


 Directory Services: Centralized storage of user and resource information.
 Communication Services: Email, messaging, notifications.
 Data Storage: Shared databases and file systems.
 Logging and Monitoring: Tracking system events and health.
 Transaction Management: Ensuring data integrity in multi-step operations.

Support Includes:

1. Standardized APIs and Protocols


Ensures different systems can access services consistently and easily.
2. Middleware Platforms
Software layers that manage communication and integration between services and
applications.
3. Security Features
Protect common services from unauthorized access or data breaches.
4. Service Level Agreements (SLAs)
Define availability, reliability, and performance standards.
5. Scalability and Load Balancing
Allows services to handle varying amounts of usage efficiently.
6. Fault Tolerance and Backup
Maintains service continuity even when some components fail.
7. Centralized Monitoring and Management
Tools to oversee usage, performance, and issues of common services.

Benefits of Supporting Common Services

 Cost Reduction: Avoids duplicating similar services across multiple systems.


 Consistency: Ensures uniform behavior and policies.
 Simplified Development: Developers can rely on existing services instead of building
from scratch.
 Easier Maintenance: Centralized updates and security patches.
 Improved Interoperability: Facilitates smooth communication among systems.

Manageability

Definition:
Manageability refers to how easily and effectively a system, network, or resource can be
monitored, controlled, configured, and maintained to ensure smooth operation.

Key Aspects of Manageability:

1. Monitoring
Continuously observing system performance, resource usage, and health status to detect
issues early.
2. Configuration Management
Ability to set up, modify, and maintain system settings and policies in an organized way.
3. Fault Management
Detecting, diagnosing, and resolving errors or failures quickly to minimize downtime.
4. Performance Management
Ensuring the system meets performance benchmarks and optimizing resource usage.
5. Security Management
Managing access control, authentication, and protecting systems from threats.
6. Automation
Using tools and scripts to automate repetitive management tasks, reducing manual effort
and errors.
7. Scalability
Managing the system’s growth efficiently without compromising performance or control.
8. User-Friendly Interfaces
Providing intuitive dashboards and tools that simplify system management for
administrators.

Why is Manageability Important?

 Reduces Operational Costs: Easier management saves time and labor.


 Improves Reliability: Quick detection and resolution of issues prevent service
disruption.
 Enhances Security: Better control reduces vulnerabilities.
 Supports Growth: Efficiently manages expanding resources or users.
 Simplifies Compliance: Easier to enforce policies and meet regulations.

Examples of Manageability Features:

 Network management software like SNMP (Simple Network Management Protocol) tools.
 Centralized dashboards for cloud resource monitoring.
 Automated backup and recovery systems.
 Role-based access control (RBAC) systems.

Protocol Layering

Definition:
Protocol layering is a design approach in computer networking where the communication
process is divided into distinct layers, each responsible for specific functions. Each layer
interacts with the layers directly above and below it, enabling modularity and simplifying
network design and troubleshooting.

Why Use Protocol Layering?

 Modularity: Changes in one layer don’t affect others, making updates easier.
 Interoperability: Standard layers allow different systems and vendors to communicate.
 Simplifies Design: Complex communication tasks are broken down into manageable
parts.
 Troubleshooting: Problems can be isolated to specific layers.

Common Protocol Layer Models

1. OSI Model (Open Systems Interconnection)


o 7 Layers:
1. Physical Layer: Transmission of raw bits over a physical medium.
2. Data Link Layer: Node-to-node data transfer and error
detection/correction.
3. Network Layer: Routing and forwarding of packets.
4. Transport Layer: End-to-end communication, reliability, flow control.
5. Session Layer: Managing sessions or connections between applications.
6. Presentation Layer: Data translation, encryption, compression.
7. Application Layer: Network services to applications (e.g., HTTP, FTP).
2. TCP/IP Model (Internet Protocol Suite)
o 4 Layers:
1. Network Interface Layer: Combines OSI’s physical and data link layers.
2. Internet Layer: Routing and addressing (e.g., IP).
3. Transport Layer: Reliable or unreliable delivery (e.g., TCP, UDP).
4. Application Layer: Protocols for applications (e.g., HTTP, SMTP).

How It Works

 Each layer adds its own header (and sometimes trailer) to the data unit received from the
layer above, creating a Protocol Data Unit (PDU).
 When data is sent, it passes down through the layers on the sender side, getting
encapsulated at each layer.
 On the receiver side, data moves up through layers, each removing its respective header,
a process called decapsulation.
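
A toy Python sketch of this encapsulation/decapsulation flow (not a real protocol stack; the bracketed header strings are made up for illustration):

# Each layer prepends its own header on the way down and strips it on the way up.
LAYERS = ["Application", "Transport", "Network", "Data Link"]

def encapsulate(payload: bytes) -> bytes:
    pdu = payload
    for layer in LAYERS:                       # Application wraps first, so the
        pdu = f"[{layer} hdr]".encode() + pdu  # Data Link header ends up outermost
    return pdu

def decapsulate(pdu: bytes) -> bytes:
    for layer in reversed(LAYERS):             # receiver strips headers outermost-first
        header = f"[{layer} hdr]".encode()
        assert pdu.startswith(header), f"missing {layer} header"
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate(b"GET /index.html")
print(frame)                # payload wrapped in four nested headers
print(decapsulate(frame))   # original application data recovered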

Performance (in Computer Networks)

Definition:
Performance in computer networks refers to how efficiently and effectively a network can
transfer data between devices. It is measured using various parameters that indicate speed,
reliability, and capacity.

Key Performance Metrics:

1. Bandwidth (Throughput)
o The maximum rate at which data can be transmitted over a network.
o Measured in bits per second (bps), e.g., Mbps, Gbps.
2. Latency (Delay)
o The time it takes for a data packet to travel from source to destination.
o Measured in milliseconds (ms).
o Includes transmission delay, propagation delay, processing delay, and queuing
delay.
3. Jitter
o Variation in packet arrival time.
o Important for real-time applications like VoIP and video conferencing.
4. Packet Loss
o The percentage of packets that are lost during transmission.
o Affects data integrity and quality of service (QoS).
5. Reliability
o The ability of the network to consistently perform well and recover from errors or
failures.
6. Scalability
o How well the network performs as the number of users or devices increases.

Factors Affecting Network Performance:

 Network congestion (too many users or devices)


 Hardware limitations (old routers, switches)
 Protocol overhead (data added for control and management)
 Distance and media (e.g., fiber is faster than copper)
 Network configuration and design

Improving Network Performance:

 Increase bandwidth or upgrade to higher-speed media.


 Use load balancers to distribute traffic.
 Optimize routing protocols and configurations.
 Use Quality of Service (QoS) settings to prioritize critical traffic.
 Implement caching and content delivery networks (CDNs) for faster access.

Conclusion:

Network performance is crucial for ensuring fast, reliable, and smooth communication between
systems. Monitoring and optimizing key metrics helps maintain good user experience and system
efficiency.

Bandwidth

Definition:

Bandwidth refers to the maximum amount of data that can be transmitted over a network
connection in a given amount of time.

Measured in:

 Bits per second (bps)


 Common units: Kbps, Mbps, Gbps

Example:

If a network has a bandwidth of 100 Mbps, it can theoretically transmit 100 megabits of data
every second.
Key Points:

 Think of bandwidth as the width of a highway — the wider it is, the more cars (data) can
travel at the same time.
 Higher bandwidth = more data can be transferred at once
 Important for downloading/uploading large files, video streaming, and online gaming

Latency

Definition:

Latency is the time delay it takes for a data packet to travel from the source to the destination.

Measured in:

 Milliseconds (ms)

Example:

A latency of 20 ms means it takes 20 milliseconds for a packet to go from sender to receiver.

Key Points:

 Think of latency as the travel time — how long it takes a car (data) to reach its
destination.
 Lower latency = faster response times
 Critical for real-time applications like voice/video calls, online gaming, and remote
control systems

Bandwidth vs. Latency – Quick Comparison

Feature          | Bandwidth              | Latency
What it measures | Data transfer capacity | Time delay in data transfer
Units            | Mbps, Gbps             | ms (milliseconds)
High value means | More data can be sent  | Slower communication
Affects          | Download/upload speeds | Real-time interaction
Example concern  | Video buffering        | Gaming lag or voice delay

Real-Life Analogy:

 Bandwidth = Size of a water pipe (how much water flows per second)
 Latency = Time it takes for the first drop to reach the faucet after turning it on

Bandwidth and Latency – Short Notes for M.Tech

Bandwidth
Definition:
Bandwidth is the maximum rate at which data can be transmitted over a network in a given
amount of time.

Units:

 Bits per second (bps)


 Common: Kbps, Mbps, Gbps

Key Points:

 Indicates capacity of the network.


 Higher bandwidth = more data per second.
 Affects: Download/upload speed, video streaming, file transfers.

Example:
A 100 Mbps link can transfer 100 megabits every second.

Latency

Definition:
Latency is the delay between sending and receiving data across a network.

Units:

 Milliseconds (ms)

Key Points:

 Indicates speed of response.


 Lower latency = faster communication.
 Affects: Gaming, VoIP, video calls, remote control apps.

Example:
If latency is 50 ms, it takes 50 milliseconds for a packet to reach its destination.

Difference Between Bandwidth and Latency (Table)

Feature          | Bandwidth                 | Latency
Definition       | Max data transfer rate    | Delay in data transmission
Unit             | bps (e.g., Mbps)          | ms (milliseconds)
Indicates        | Quantity of data flow     | Speed of data travel
High value means | More data at once         | Slower response
Affects          | File transfers, streaming | Gaming, video calls

Topic 10

Delay × Bandwidth Product


Definition:

The Delay × Bandwidth Product is a measure of the amount of data (in bits) that can exist (or
be in transit) on a network link at any given time.

It tells us how much "data is in flight" on the link — that is, data that has been transmitted but
not yet received.

Formula:

Delay × Bandwidth Product = Propagation Delay × Bandwidth

 Delay = Time taken by a bit to travel from sender to receiver (in seconds)
 Bandwidth = Data transmission rate (in bits per second)

Unit:

 Bits

Example:

 Bandwidth = 10 Mbps
 Propagation delay = 0.1 seconds

Delay × Bandwidth = 10 × 10^6 bps × 0.1 s = 1,000,000 bits

So, at any moment, 1 million bits are in transit on the link.

Interpretation:

 It represents the capacity of the "pipe" — how much data is in transit at once.
 Useful for designing buffer sizes and window sizes in high-speed networks.

Applications:

 TCP Window Size: To fully utilize a link, the sender must be able to send at least Delay
× Bandwidth bits before needing an acknowledgment.
 High-Speed Networks: In long-distance, high-bandwidth networks (like satellite links),
this product becomes very large, requiring large buffers.
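
The example above can be re-checked with a few lines of Python:

# 10 Mbps link with 100 ms (0.1 s) propagation delay.
bandwidth_bps = 10 * 10**6
propagation_delay_s = 0.1

bits_in_flight = bandwidth_bps * propagation_delay_s
print(f"Delay x Bandwidth product: {bits_in_flight:,.0f} bits")           # 1,000,000 bits
print(f"Equivalent buffer/window size: {bits_in_flight / 8:,.0f} bytes")  # 125,000 bytes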

Q: Define and Explain Delay × Bandwidth Product. Calculate it for a given link. Explain its
importance in TCP or Network Design.

1. Definition:

The Delay × Bandwidth Product is a measure of the maximum amount of data (in bits) that
can be in transit on a network link at any given time.

It is calculated by multiplying:

 the propagation delay (in seconds), and


 the bandwidth (in bits per second) of the link.

2. Formula:

Delay × Bandwidth Product = Propagation Delay (s) × Bandwidth (bps)

3. Explanation:

 It tells us how much data is "in flight" — i.e., data that has been sent but not yet
received.
 Think of it as the amount of water in a pipe at any moment:
o Bandwidth = how wide the pipe is
o Delay = how long the pipe is

4. Example Calculation:

Given:

 Bandwidth = 10 Mbps = 10 × 10^6 bps
 Propagation delay = 100 ms = 0.1 seconds

Delay × Bandwidth Product = 10 × 10^6 × 0.1 = 1,000,000 bits

✅ So, 1 million bits can be in transit at any time on this link.

5. Importance in TCP / Network Design:

 In TCP, to fully utilize the link, the TCP window size (amount of unacknowledged data
the sender can transmit) must be at least equal to the Delay × Bandwidth Product.
 If the window size is too small, the sender has to wait for ACKs before sending more
data — under-utilizing the link.
 In high-bandwidth, long-delay networks (e.g., satellite links), the product is large, so
large buffers and larger TCP window sizes are needed.
 Helps in buffer sizing, window scaling, and flow control design.

✅ Final Summary:

 Delay × Bandwidth Product = Bits in transit


 Helps determine TCP window size and buffer requirements
 Crucial for designing efficient, high-speed networks

Topic 11

✅ Perspectives on Connecting
Definition:

"Perspectives on Connecting" refers to the different approaches or viewpoints on how devices or
systems are connected and communicate over a network. These perspectives help us understand
the design, functionality, and goals of a networked system.

🔹 Key Perspectives:

1. Hardware Perspective
o Focuses on physical components like cables, switches, routers, NICs.
o Deals with signal transmission, media types (wired/wireless), and hardware
compatibility.
o Concerned with topology, bandwidth, and latency at the physical level.
2. Software Perspective
o Emphasizes protocols, network services, and applications.
o Involves network programming, protocol stacks (TCP/IP, OSI), and security
mechanisms.
o Looks at how software enables communication, handles errors, and manages
resources.
3. Service/User Perspective
o Focuses on what users expect from the network: speed, reliability, security, and
ease of access.
o Includes Quality of Service (QoS), availability, latency, and application
support.
o Important in cloud services, streaming, VoIP, etc.
4. Economic/Business Perspective
o Considers cost-efficiency, scalability, manageability, and ROI.
o Focuses on resource sharing, operational cost, and deployment strategy.
o Helps design networks that are affordable and maintainable over time.

Topic 12

✅ Classes of Links

In computer networks, links refer to the communication channels that connect devices (nodes)
for data transmission. Based on the number of devices connected and how they communicate,
links are classified into the following types:

🔹 1. Point-to-Point Link

 Connects exactly two devices.


 Direct and dedicated connection.
 Most common in wired connections (e.g., between a PC and a router).

Example: Ethernet cable between a computer and a switch.

🔹 2. Multipoint Link (Multidrop)


 A single link is shared by more than two devices.
 Devices must coordinate to avoid collisions.
 More cost-effective than multiple point-to-point links.

Example: Shared bus in LANs or wireless medium in Wi-Fi networks.

🔹 3. Simplex Link

 Data flows in only one direction.


 No return path for acknowledgment or data.

Example: TV broadcast (from station to viewer).

🔹 4. Half-Duplex Link

 Data flows in both directions, but only one direction at a time.


 Devices take turns to transmit.

Example: Walkie-talkie communication.

🔹 5. Full-Duplex Link

 Data flows in both directions simultaneously.


 Uses separate channels or frequencies for each direction.

Example: Telephone conversation or full-duplex Ethernet.

📝 Summary Table:

Link Class     | Description                          | Direction        | Example
Point-to-Point | Two devices only                     | Bi/Uni-direction | PC to router
Multipoint     | Shared among multiple devices        | Bi-directional   | Shared LAN/Wi-Fi
Simplex        | One-way communication                | One-way          | TV broadcast
Half-Duplex    | Two-way, but one direction at a time | Alternating      | Walkie-talkies
Full-Duplex    | Two-way, simultaneous                | Simultaneous     | Telephone, modern LANs

Topic 13

Reliable Transmission

🔹 Definition:
Reliable transmission ensures that data sent from the sender is delivered to the receiver
accurately, in the correct order, and without loss or duplication — even in the presence of
network errors.

🔹 Why It’s Needed:

 Networks can lose, duplicate, or reorder packets.


 Bit errors may occur during transmission.
 Reliability is critical for applications like file transfer, emails, and web browsing.

🔹 Techniques Used for Reliable Transmission:

1. Acknowledgments (ACKs)
o Receiver sends back an ACK when a packet is successfully received.
2. Timeouts and Retransmissions
o If ACK is not received within a timeout period, sender retransmits the packet.
3. Sequence Numbers
o Each packet is given a unique number to detect lost or out-of-order packets.
4. Checksums
o Error-detection mechanism to ensure data integrity during transmission.
5. Sliding Window Protocol
o Allows multiple packets to be in transit before requiring an ACK.
o Efficient flow control and retransmission management.
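
As a concrete example of technique 4 (checksums), here is a minimal sketch of the 16-bit ones'-complement Internet checksum used in IP, TCP, and UDP headers; the payload is an arbitrary even-length example:

def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum over the data, as used in IP/TCP/UDP."""
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the sum
    return ~total & 0xFFFF                         # ones' complement of the folded sum

payload = b"HELLO, NETWORK!!"                      # even-length example payload
chk = internet_checksum(payload)
print(f"checksum: 0x{chk:04x}")

# Receiver check: recomputing over payload + checksum gives 0 if nothing changed.
print(internet_checksum(payload + chk.to_bytes(2, "big")) == 0)               # True
print(internet_checksum(b"HELLO, NETWORK!?" + chk.to_bytes(2, "big")) == 0)   # False (corrupted)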

🔹 Example Protocols That Ensure Reliability:

Protocol                            | Reliability Mechanism
TCP (Transmission Control Protocol) | Provides full reliability with ACKs, retransmissions, sequencing
FTP, HTTP over TCP                  | Rely on TCP for reliable data delivery
UDP (User Datagram Protocol)        | No reliability; suitable for real-time apps

🔹 Comparison: TCP vs UDP

Feature  | TCP (Reliable)         | UDP (Unreliable)
Delivery | Guaranteed             | Not guaranteed
Order    | Maintains packet order | No order control
Use Case | File transfer, web     | Streaming, gaming
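
A minimal runnable sketch of this difference using Python's standard socket module on one machine (the loopback address and "port 0 = any free port" choices are just for illustration):

import socket

# UDP (SOCK_DGRAM): connectionless, no delivery guarantee at the transport layer.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                              # OS picks a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", receiver.getsockname())     # no handshake, no ACKs
print("UDP received:", receiver.recvfrom(1024)[0])

# TCP (SOCK_STREAM): connection-oriented; the kernel handles ACKs, ordering,
# and retransmission on behalf of the application.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())                       # three-way handshake happens here
conn, _ = listener.accept()
client.sendall(b"hello over TCP")                            # delivered reliably and in order
print("TCP received:", conn.recv(1024))

for s in (sender, receiver, client, conn, listener):
    s.close()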

✅ Summary:

 Reliable transmission ensures no data is lost, duplicated, or corrupted.


 Achieved using ACKs, sequence numbers, checksums, and retransmissions.
 TCP is the most common protocol for reliable transmission.
📝 In Exams, You May Be Asked:

 What is reliable transmission?


 Explain mechanisms for reliable delivery.
 Compare reliable and unreliable protocols (TCP vs UDP).

Topic 14

Stop-and-Wait Protocol

🔹 Definition:

The Stop-and-Wait protocol is a simple method of error control and reliable data
transmission in which the sender sends one data packet at a time and waits for an
acknowledgment (ACK) from the receiver before sending the next packet.

🔹 Working Mechanism:

1. Sender transmits one packet.


2. It waits for an ACK from the receiver.
3. If ACK is received, the sender sends the next packet.
4. If no ACK is received within a timeout period, the packet is retransmitted.
5. Process repeats for each packet.

🔹 Diagram:

Sender Receiver
| ---Frame1---> |
| <---ACK1----- |
| ---Frame2---> |
| <---ACK2----- |
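
A toy Python simulation of this behaviour (not real networking; the 30% loss rate and frame names are arbitrary assumptions):

import random

random.seed(1)   # make the run repeatable

def lossy_channel(frame):
    # Deliver a frame and return its ACK, or None to model a lost frame/ACK.
    return None if random.random() < 0.3 else f"ACK{frame['seq']}"

def stop_and_wait_send(frames):
    for seq, data in enumerate(frames):
        frame = {"seq": seq, "data": data}
        while True:                          # send one frame, wait, retransmit on timeout
            ack = lossy_channel(frame)
            if ack == f"ACK{seq}":
                print(f"frame {seq} acknowledged")
                break
            print(f"frame {seq} timed out, retransmitting")

stop_and_wait_send(["pkt-A", "pkt-B", "pkt-C"])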

🔹 Advantages:

 Simple to implement
 Ensures reliable and ordered delivery
 Works well for low-speed or low-latency networks

🔹 Disadvantages:

 Inefficient for high-latency or high-bandwidth networks


 Wastes bandwidth, as the sender is idle while waiting for ACK
 Only one packet in transmission at a time → low throughput
🔹 Performance:

Efficiency = Transmission Time / (Transmission Time + Round Trip Time (RTT))

If RTT is high compared to transmission time → efficiency drops.
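
A worked instance of this formula (the 10,000-bit frame, 1 Mbps link, and 100 ms RTT are assumed values):

frame_bits = 10_000
bandwidth_bps = 1_000_000
rtt_s = 0.1

transmission_time = frame_bits / bandwidth_bps                 # 0.01 s to push the frame onto the link
efficiency = transmission_time / (transmission_time + rtt_s)   # fraction of time the link is busy
print(f"Link utilization: {efficiency:.1%}")                   # about 9.1%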

🔹 Use Cases:

 Used in simpler protocols or teaching basic principles of reliable transmission.


 Not used in modern high-speed networks due to inefficiency.

✅ Summary:

Feature        | Stop-and-Wait Protocol
Packets Sent   | One at a time
Waits for ACK  | Yes
Retransmission | If no ACK (timeout)
Efficiency     | Low for high-latency networks
Complexity     | Very simple

📝 Exam Tip:

You may be asked:

 Explain the Stop-and-Wait protocol with a diagram.


 Discuss its advantages and limitations.
 Compare Stop-and-Wait with Sliding Window Protocol.

Topic 15

Sliding Window Protocol

🔹 Definition:

The Sliding Window Protocol is a flow control and reliable data transmission method where
multiple frames can be sent before receiving acknowledgments, thus improving efficiency
compared to Stop-and-Wait.

🔹 Key Concepts:

 The "window" refers to the range of frame sequence numbers that can be sent or
received without waiting.
 The window slides forward as acknowledgments are received.
 Both sender and receiver maintain their own sliding windows.

🔸 Types of Sliding Window Protocols:

1. ✅ Go-Back-N (GBN):

 Sender can send N frames before needing an ACK.


 If an error is detected or a frame is lost, the receiver discards all subsequent frames.
 The sender retransmits from the lost frame onward.

Pros: Simpler receiver


Cons: Wastes bandwidth on retransmission

2. ✅ Selective Repeat (SR):

 Sender can send multiple frames up to window size.


 Receiver accepts out-of-order frames and stores them temporarily.
 Only the specific lost/damaged frame is retransmitted.

Pros: Efficient, avoids unnecessary retransmissions


Cons: More complex receiver and buffer management

🔹 Terminology:

 Window Size (N): Number of frames that can be sent before requiring an ACK.
 ACK: Acknowledgment from receiver (can be cumulative or individual).
 Sequence Number: Uniquely identifies each frame; wraps around after max value.

🔹 Illustration:

Sender Window (Go-Back-N):

[Frame3][Frame4][Frame5][Frame6] --> sent, waiting for ACK

ACK received for Frame3 → window slides forward to allow Frame7.
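
A toy Go-Back-N sender loop in Python (the window size of 4, 20% ACK-loss rate, and 8 frames are arbitrary assumptions), showing the window sliding forward on ACKs and going back after a timeout:

import random

random.seed(7)                      # make the run repeatable
WINDOW = 4
frames = [f"F{i}" for i in range(8)]

base = 0                            # oldest unacknowledged frame
next_seq = 0                        # next frame to transmit

while base < len(frames):
    # Fill the window with unsent frames.
    while next_seq < len(frames) and next_seq < base + WINDOW:
        print(f"send {frames[next_seq]}")
        next_seq += 1
    # Wait for the cumulative ACK of the oldest outstanding frame.
    if random.random() < 0.2:       # ACK lost, timer expires
        print(f"timeout for {frames[base]} -> go back and resend from {frames[base]}")
        next_seq = base             # retransmit everything from base onward
    else:
        print(f"ACK for {frames[base]} received -> window slides forward")
        base += 1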

🔹 Comparison Table:
Feature            | Go-Back-N             | Selective Repeat
ACK Type           | Cumulative            | Individual
Receiver Buffering | Discards out-of-order | Stores out-of-order frames
Retransmission     | From error onward     | Only specific lost frames
Efficiency         | Moderate              | High
Complexity         | Simple                | Complex

🔹 Advantages of Sliding Window:

 Improves efficiency over Stop-and-Wait


 Enables pipelining (multiple frames in transit)
 Allows better utilization of bandwidth

✅ Summary:

Feature       | Sliding Window Protocol
Flow Control  | Yes
Error Control | Yes (with retransmission)
Types         | Go-Back-N, Selective Repeat
Efficiency    | Higher than Stop-and-Wait

📝 Exam Tips:

You may be asked:

 Explain Sliding Window Protocol with a diagram.


 Differentiate Go-Back-N and Selective Repeat.
 Calculate window size for given bandwidth and delay.


Here’s a numerical example illustrating the Sliding Window Protocol:

🔢 Numerical Example: Sliding Window Protocol

Problem:

 Link bandwidth = 1 Mbps (1 million bits per second)


 Round Trip Time (RTT) = 100 ms (0.1 seconds)
 Frame size = 10,000 bits
 Assume Go-Back-N protocol.

Questions:

1. What is the window size required to fully utilize the link?


2. How many frames can be sent before waiting for acknowledgment?

Step 1: Calculate the Bandwidth-Delay Product

Bandwidth × RTT = 1,000,000 bps × 0.1 s = 100,000 bits

Step 2: Calculate Number of Frames in Flight (Window Size)

Window Size = Bandwidth-Delay Product / Frame Size = 100,000 / 10,000 = 10 frames

Step 3: Interpretation

 To fully utilize the link, the sender should have 10 frames in transit before receiving
ACKs.
 That means the sender's window size should be at least 10.

Summary:
Parameter Value

Bandwidth 1 Mbps

RTT 100 ms

Frame Size 10,000 bits

Bandwidth-Delay Product 100,000 bits

Window Size 10 frames

Answer:

 The sender must use a window size of 10 frames in Go-Back-N to keep the link fully
utilized.

✅ Concurrent Logical Channels

🔹 Definition:

Concurrent Logical Channels refer to multiple independent communication paths or logical
connections that exist simultaneously over a single physical network link.

🔹 Key Points:

 A single physical link can carry multiple logical channels concurrently.


 Each logical channel behaves like an independent communication session.
 Enables multiplexing — sharing the physical medium efficiently among multiple users
or data streams.
 Helps support multiple applications, protocols, or sessions at the same time.

🔹 Example:

 In telephone networks, a single physical line can carry multiple conversations using
logical channels (e.g., via time-division multiplexing).
 In packet-switched networks, multiple TCP connections run concurrently over the same
physical link.

🔹 Benefits:

 Resource Sharing: Efficient use of bandwidth.


 Isolation: Faults or congestion on one channel don’t affect others.
 Flexibility: Supports diverse services and applications concurrently.
✅ In summary:

Feature                  | Description
Physical vs Logical Link | One physical link, multiple logical channels
Enables                  | Multiplexing, concurrent sessions
Used in                  | Telephone, packet-switched networks
