Computer Networks
        No part of this eBook may be used or reproduced in any manner whatsoever without the publisher’s
        prior written consent.
        This eBook may or may not include all assets that were part of the print version. The publisher reserves
        the right to remove any material present in this eBook at any time.
        ISBN 9788131761274
        eISBN 9788131776209
        Head Office: A-8(A), Sector 62, Knowledge Boulevard, 7th Floor, NOIDA 201 309, India
        Registered Office: 11 Local Shopping Centre, Panchsheel Park, New Delhi 110 017, India
         Preface
         Unit I   Introduction
            1. Overview of Data Communications and Networking
            2. Reference Models and Network Devices
         Index
         Data communications and computer networks are two aspects of a multifarious field that caters to both
         the telecommunications and computing industries. This field has grown enormously over the past 10 years.
         Nowadays, when we talk about communications and networking, we are not restricted to just traditional
         telephone networks or wired LANs such as Ethernet; rather, there are plentiful new developments,
         including Wi-Fi, 2G and 3G mobile networks, Bluetooth, WAP and wireless local loops.
         Because of these technological advancements, the demand for courses on data communications
         and computer networks has been continuously increasing. Keeping pace with this trend, almost all
         universities have integrated the study of data communications and computer networks in B.Tech. (CSE
         and IT), M.C.A. and M.B.A. courses. This book, Data Communications and Computer Networks, in its
         unique easy-to-understand question-and-answer format, directly addresses the need of students enrolled
         in these courses.
              The questions and corresponding answers in this book have been designed and selected to cover all
         the necessary material in data communications and networking as per the syllabi of most universities.
         The text has been designed to make it particularly easy for students having little or even no background
         in data communications and networking to grasp the concepts. The students can use it as a textbook for
         a one- or two-semester course, while interested professionals can use it as a self-study guide. The
         organized and accessible format allows students to quickly find the questions on a specific topic.
              This book is a part of the Express Learning Series, which comprises a number of books designed
         as quick reference guides.
         Unique Features
            1. This book is designed as a student-friendly and self-learning guide. In addition, it is written in a
               clear, concise and lucid manner.
            2. It has been prepared in an easy-to-understand question-and-answer format.
            3. The chapters of this book include previously asked as well as new questions.
            4. The chapters cover various types of questions, such as multiple-choice, short-answer and
               long-answer questions.
            5. Solutions to numerical questions asked in various examinations are provided in this book.
            6. All ideas and concepts included in this book are presented with clear examples.
            7. All concepts are well structured and supported with suitable illustrations.
            8. To help readers, the inter-chapter dependencies are kept to a minimum.
            9. A comprehensive index, placed at the end of the book, helps readers locate the desired topics
                quickly.
         Chapter Organization
         All questions and answers are organized into seven units with each unit consisting of one or more
         chapters. A brief description of these units is as follows:
           •    Unit I: Introduction
                This unit covers the introductory concepts of data communications and computer networks in
                two chapters. Chapter 1 discusses the differences between communication and transmission,
                the components of data communications, the categories and applications of computer networks,
                network topologies, protocols and standards and the general idea of the Internet. Chapter 2
                introduces two standard networking models, which are open systems interconnection (OSI) and
                the Internet model (TCP/IP), along with a brief idea about the different layers of these models. It
                also describes various network devices such as switch, hub, bridge and gateway. This unit serves
                as a prelude to the rest of the book.
           •    Unit II: Physical Layer
                This unit focuses on the physical layer of reference models and consists of three chapters. Chapter 3
                deals with analog and digital transmissions. It describes various techniques to convert digital/
                analog data to analog or digital signals. Chapter 4 spells out the characteristics of transmission
                media and various guided and unguided media such as twisted-pair cables, coaxial cables, fibre
                optic cables, radio waves, microwaves, infrared waves and satellite communication. Chapter 5
                discusses how the available bandwidth of a channel can be utilized efficiently using multiplexing
                and spreading. It also explains the concept of switching which is not only related to this layer but
                also to several layers. The use of two common public networks, telephone and cable TV networks,
                for data transfer is also covered in the chapter.
           •    Unit III: Data Link Layer
                This unit discusses, in detail, the data link layer of reference models and consists of four chapters.
                Chapter 6 outlines various design issues and types of services provided by the data link layer. It also
                discusses how to detect and correct the errors that occur due to certain transmission impairments.
                Chapter 7 spells out two main responsibilities of data link layer: flow control and error control. It
                first discusses the protocols needed to implement flow and error control and then the discussion
                moves on to bit-oriented protocols such as high-level data link control (HDLC) protocol and
                byte-oriented protocols such as point-to-point protocol (PPP). Chapter 8 familiarizes the reader
                with a sub-layer of data link layer named media access control (MAC) layer that is responsible
                for resolving access to shared media. It discusses a number of multiple-access protocols such as
                ALOHA, CSMA, CSMA/CD, CSMA/CA, reservation, polling, token passing, FDMA, TDMA
                and CDMA that have been devised to control access to shared media. Chapter 9 throws light on
                various IEEE 802 standards specified for wired and wireless LANs. It discusses, in detail, the
                Ethernet—a wired LAN, Bluetooth—a short-range wireless technology, X.25—a virtual circuit
                network, frame relay and asynchronous transfer mode (ATM)—switched WANs, and synchronous
                 optical network (SONET)—a high-speed WAN that uses fibre-optic technology.
           •    Unit IV: Network Layer
                This unit is all about the network layer of reference models and consists of two chapters.
                Chapter 10 explores major design issues in switched data networks and Internet which are routing
                and congestion control. It includes discussion on IP addressing and various routing algorithms for
                designing optimal routes through the network, so that the network can be used efficiently. It also
                describes what congestion is, when it occurs, what its effects are and how it can be controlled.
               Chapter 11 familiarizes the reader with the meaning of quality of service (QoS) of a network and
               the ways to help improve the QoS. It includes discussion on the major protocol defined at the
               network layer which is Internet protocol (IP). It also discusses other network layer protocols such
               as ARP and ICMP that assist IP to perform its function.
           • Unit V: Transport Layer
         		 This unit comprises a single chapter detailing the transport layer of reference models. Chapter 12
            provides an overview of transport layer and explains the duties and services of this layer. It also
            discusses two transport layer protocols, which are user datagram protocol (UDP) and transmission
            control protocol (TCP).
           • Unit VI: Application Layer
         		 This unit revolves around the application layer of reference models and consists of two chapters.
            Chapter 13 describes a client/server application named domain name system (DNS) that is
            responsible for providing name services for other applications. It also expounds on three common
            applications in the Internet, which are TELNET, e-mail and file transfer protocol (FTP). The chapter
            concludes with a brief discussion on the famous World Wide Web (www), hypertext transfer
            protocol (HTTP), which is used to access the Web, and the network management protocol named
            as simple network management protocol (SNMP). Chapter 14 details multimedia and some new
            protocols that have been developed to deal with specific problems related to multimedia in other
            layers.
           • Unit VII: Security
         		 This unit consists of a single chapter that discusses, in detail, the security in the network. Chapter 15
            describes the importance of security in communications and networking. It discusses cryptography,
            different cryptographic algorithms, such as data encryption standard (DES), triple-DES and RSA,
            hash functions and digital signatures. It also discusses the role of firewall in the network and the
            means of user authentication and message authentication.
         Acknowledgements
         We would like to thank the publisher, Pearson Education, their editorial team and panel of reviewers
         for their valuable contributions towards content enrichment. We are indebted to the technical and edito-
         rial consultants for devoting their precious time to improve the quality of the book. In addition, we are
         grateful to the members of our research and development team who have put in their sincere efforts to
         bring out a high-quality book.
         Feedback
         For any suggestions and comments about this book, please send an e-mail to itlesl@rediffmail.com.
             We hope that you will enjoy reading this book as much as we have enjoyed writing it.
                                                                                            Rohit Khurana
                                                                                           Founder and CEO
                                                                                                    ITL ESL
           Communication
           • It refers to the exchange of meaningful information between two communicating devices.
           • It is a two-way scheme.
           Transmission
           • It refers to the physical movement of information.
           • It is a one-way scheme.
               2. What is meant by data communication? What are the characteristics of an efficient data
         communication system?
           Ans: Data communication refers to the exchange of data between two devices through some form
        of wired or wireless transmission medium. It includes the transfer of data, the method of transfer and the
        preservation of the data during the transfer process. To initiate data communication, the communicating
        devices should be a part of a data communication system that is formed by the collection of physical
         equipment (hardware) and programs (software). The characteristics of an efficient data communication
        system are as follows:
          Reliable Delivery: Data sent from a source across the communication system must be delivered
             only to the intended destination.
          Accuracy: Data must be delivered at the destination without any alteration. If the data is altered
             or changed during its transmission, it may become unusable.
           Timely Delivery: Data must be delivered on time, without significant lag; otherwise, it may
              be useless for the receiver. In the case of video and audio transmissions, timely delivery means
              delivering data at the same time it is produced, in the same order in which it is produced and
              without any delay.
           Jitter: It refers to the differences in the delays experienced during the arrival of packets, that is,
              the uneven delay in the arrival time of audio or video data packets. These packets must be delivered
              at a constant rate; otherwise, the quality of the audio or video will be poor.
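         To make the idea concrete, the following minimal Python sketch computes a simple jitter figure from a
         list of packet arrival times; the timestamps are hypothetical example values, not measurements from any
         particular network.

         # Illustrative sketch: jitter as the variation in packet inter-arrival times.
         # The arrival timestamps below are hypothetical example values (in milliseconds).
         arrival_times_ms = [0, 20, 41, 59, 83, 100]

         # Inter-arrival delays between consecutive packets
         delays = [t2 - t1 for t1, t2 in zip(arrival_times_ms, arrival_times_ms[1:])]

         # A simple measure of jitter: how much each delay deviates from the average delay
         average_delay = sum(delays) / len(delays)
         jitter = sum(abs(d - average_delay) for d in delays) / len(delays)

         print("Inter-arrival delays (ms):", delays)   # [20, 21, 18, 24, 17]
         print("Average delay (ms):", average_delay)   # 20.0
         print("Jitter (ms):", jitter)                 # 2.0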
                   3. What are the components of a data communication system?
              Ans: There are five basic components of a data communication system (Figure 1.1):
             Message: It refers to the information that is to be communicated. It can be in the form of text,
               numbers, images, audio or video.
             Sender: It refers to the device, such as a computer, video camera and workstation, which sends
               the message.
             Receiver: It refers to the device, such as a computer, video camera and workstation, for which the
               message is intended.
              Transmission Medium: It refers to the path over which the message travels from the sender to the
                receiver. It can be wired, such as twisted-pair cable and coaxial cable, or wireless, such as satellite.
             Protocol: It refers to a set of rules (agreed upon by the sender and the receiver) that coordi-
                nates the exchange of information. Both sender and receiver should follow the same protocol to
                communicate with each other. Without the protocol, the sender and the receiver cannot communi-
                 cate. For example, consider two persons; one of which speaks only English while another speaks
                 only Hindi. Now, these persons can communicate with each other only if they use a translator
                 (protocol) that converts the messages in English to Hindi and vice versa.
                 [Figure 1.1   Components of a Data Communication System: a sender and a receiver, each following
                 a common protocol, connected by a transmission medium that carries the message]
               4. What are the different forms of data representation? Explain in detail any two coding
        schemes used for data representation.
          Ans: Data can be available in various forms such as text, numbers, images, audio and video.
        In networking, data has to be transmitted from source to destination in binary form. Thus, information
        such as alphabets (a–z, A–Z), numbers (0, 1, 2, …, 9), special symbols (!, @, #, $ etc.) and images
        (in the form of pixels/picture elements) has to be converted into sequences of bits, that is, 0 and 1.
        Audio and video data have to be converted from analog to digital signal for transmission with the help
         of different encoding schemes (explained in Chapter 3). There are various coding schemes including
        Unicode, ASCII and EBCDIC which are used these days to represent the data. Here, we discuss only
        ASCII and EBCDIC.
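            The coding charts below give the actual codes; as a quick illustration of the idea, the following Python
         sketch (the sample string is arbitrary) prints the 7-bit ASCII code of each character of a piece of text:

         # Illustrative sketch: converting text to 7-bit ASCII bit sequences.
         text = "Hi!"

         for ch in text:
             code = ord(ch)                      # ASCII code of the character
             bits = format(code, "07b")          # 7-bit binary representation
             print(ch, bits, hex(code))
         # Output:
         # H 1001000 0x48
         # i 1101001 0x69
         # ! 0100001 0x21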
                                                      Alphabetic characters
                                  Uppercase                                         Lowercase
                                       ASCII-7 code                                      ASCII-7 code
                                 In binary               In                        In binary               In
          Prints as          Zone         Digit     hexadecimal   Prints as    Zone         Digit     hexadecimal
          A                  100          0001          41            a        110          0001          61
          B                  100          0010          42            b        110          0010          62
          C                  100          0011          43            c        110          0011          63
          D                  100          0100          44            d        110          0100          64
          E                  100          0101          45            e        110          0101          65
          F                  100          0110          46            f        110          0110          66
          G                  100          0111          47            g        110          0111          67
          H                  100          1000          48            h        110          1000          68
          I                  100          1001          49            i        110          1001          69
          J                  100          1010          4A            j        110          1010          6A
          K                  100          1011          4B            k        110          1011          6B
          L                  100          1100          4C            l        110          1100          6C
          M                  100          1101          4D           m         110          1101          6D
                                                                                                        (Continued )
                                                        Special characters (ASCII-7 code, continued)
                                  In binary               In                              In binary               In
           Prints as          Zone         Digit     hexadecimal Prints as           Zone          Digit     hexadecimal
           ,                  010          1100          2C            {             111           1011          7B
          -                  010          1101          2D            |             111           1100          7C
          .                  010          1110          2E            }              111          1101          7D
          /                  010          1111          2F            ~              111          1110          7E
          @                  100          0000          40                           111          1111          7F
                                                           Alphabetic characters
                                       Uppercase                                               Lowercase
                                         EBCDIC                                                   EBCDIC
                                In binary             In                                In binary             In
          Prints as         Zone         Digit   hexadecimal           Prints as    Zone         Digit   hexadecimal
          A                 1100         0001        C1                    a        1000         0001        81
          B                 1100         0010        C2                    b        1000         0010        82
          C                 1100         0011        C3                    c        1000         0011        83
          D                 1100         0100        C4                    d        1000         0100        84
          E                 1100         0101        C5                    e        1000         0101        85
                                                                                                             (Continued )
                 [Figure: Data flow in the three transmission modes — simplex (one-way communication),
                 half-duplex (two-way communication, data flows in one direction at a time) and full-duplex
                 (two-way communication, data flows in both directions at all times)]
                 transmission can be considered as an example of the simplex mode of transmission, where the satellite
                 only transmits data to the television; the reverse is not possible.
              Half-Duplex: In this transmission mode, each communicating device can receive and transmit
                information, but not at the same time. When one device is sending, the other can only receive at that
                 point of time. In half-duplex transmission mode, the entire capacity of the transmission medium is
                 taken over by the device, which is transmitting at that moment. Radio wireless set is an example of
                 half-duplex transmission mode where one party speaks and the other party listens.
               Full-Duplex: This transmission mode allows both the communicating devices to transmit and
               receive data simultaneously. A full-duplex mode can be compared to a two-way road with traffic
                flowing in both directions. A standard voice telephone call is a full-duplex call because both parties
                can talk at the same time and be heard.
               6. Define computer network. What are the different criteria that a network should meet?
          Ans: A computer network refers to a collection of two or more computers (nodes) which are con-
        nected together to share information and resources. Nodes are connected if they are capable of exchang-
        ing information with each other. To be able to provide effective communication, a network must meet a
        certain number of criteria, some of which are as follows:
          Performance: Performance of a network can be determined by considering some factors such as
             transit time, response time, throughput and delay. The amount of time taken by a message to travel
                  from one device to another is known as the transit time, and the time elapsed between the moment a
                  user initiates a request and the moment the system starts responding to it is called the response time. The amount
                 of work done in a unit of time is known as throughput. To achieve greater performance, we need
                  to improve throughput and reduce the transit time, response time and delay. However, increasing
                  the throughput by sending more data to the network often leads to traffic congestion in the network
                  and thus, increases the delay. Some other factors that affect the performance of a network are the
                  type of transmission medium, the total number of users connected to the network and the efficiency
                  of connected hardware and software.
                 Reliability: An efficient network must be reliable and robust. The reliability of a network is
                  determined by factors such as how frequently failures occur and how much time is spent
                  recovering from a link failure.
               Security: A network must also provide security by protecting important data from damage and
                 unauthorized access. In addition, there must be procedures and policies to handle theft and recovery
                 of data.
               of computer. However, in a networked environment, a copy of the important data can be kept on
               the server as well as on other connected computers on the network. In this case, failure of one com-
               puter will not result in loss of information, as the data can still be accessed from other computers
               whenever required.
             Communication: Computer networks have revolutionized the way people communicate. Rather
               than exchanging memos and directives on paper, which involves a lot of printing costs and delays,
               network users can instantly send messages to others and even check whether or not their messages
               have been received.
                8. What are the various applications of a computer network?
           Ans: Nowadays, computer networks have become an essential part of industry, entertainment world,
        business as well as our daily lives. Some of the applications of a computer network in different fields are
        as follows:
           Business Applications: There is a need for effective resource sharing in companies for the ex-
              change of ideas. This can be achieved by connecting a number of computers with each other.
              It allows business information to be transferred effectively without using paper. For example, an
              employee of one department can access the required information about another department using
              the network.
          Marketing and Sales: Marketing firms utilize networks for conducting surveys to gather and
             analyze data from the customers. This helps them to understand the requirements of a customer
              and use this information in the development of the product. Sales professionals can use various
              applications such as online shopping, teleshopping and online reservation for airlines, hotel rooms
              etc. in order to increase the revenue of their organization.
          Financial Services: Computer networks play a major role in providing financial services to people
              across the globe. For example, the financial application such as electronic fund transfer helps
              the user to transfer money without going into a bank. Some other financial applications that are
              entirely dependent on the use of networks include ATM, foreign exchange and investment services,
               credit history searches and many more.
           Directory and Information Services: Directory services permit a large number of files to be stored
               at a central location, thereby speeding up worldwide search operations. Information services of the
               Internet, such as bulletin boards and data banks, provide a vast amount of information to users
               within seconds.
          Manufacturing: Computer networks are widely being used in manufacturing. For example, the
              applications such as computer-aided design (CAD) and computer-assisted manufacturing (CAM)
              use network services to help design and manufacture the products.
          E-mail Services: This is one of the most widely used applications of network. With the help of
             computer networks, one can send e-mails across the world within a few seconds and without using
             paper.
           Mobile Applications: With the help of mobile devices such as cellular and wireless phones, people
              wishing to communicate are not bound by the limitation of fixed physical connections. Cellular
              networks allow people to communicate with each other even while travelling across large distances.
          Conferencing: With the help of networking, conferencing (teleconferencing or videoconferencing)
             can be conducted that allows remotely located participants to communicate with each other as if
             they are present in the same room.
        Client/Server Architecture
        In this architecture, each computer is either a client or a server. To complete a particular task, there exists
        a centralized powerful host computer known as server and a user’s individual workstation known as
         client (Figure 1.3). The client requests services (file sharing, resource sharing, etc.) from the server,
         and the server responds by providing that service. The servers provide access to resources, while the
         clients have access to the resources available only on the servers. In addition, clients cannot communi-
        cate directly with each other in this architecture. A typical example of client/server architecture is accessing
        a website (server) from home with the help of a browser (client). When a client makes a request for an
         object to the server, the server responds by sending the object to the client. In addition, it should be
         noted that two browsers accessing the same website never communicate with each other.
                                    [Figure 1.3   Client/Server Architecture: several clients connected to a central server]
           An advantage of client/server architecture is that the IP address of the server is always fixed and the
        server is always available on the network for clients. However, the disadvantage of this architecture is that
        with time as the number of clients starts to increase, the number of requests to the server also increases
         rapidly. In this scenario, we might need more than one server to serve the larger number of requests.
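            The request-response interaction described above can be sketched with Python's standard socket module;
         this is a minimal illustration only, and the host address, port number and messages are hypothetical choices.

         # Illustrative client/server sketch using TCP sockets.
         # Run the server part in one process and the client part in another.
         import socket

         HOST, PORT = "127.0.0.1", 9000   # hypothetical address of the server

         def run_server():
             with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                 srv.bind((HOST, PORT))
                 srv.listen()                          # server waits for client requests
                 conn, addr = srv.accept()             # a client has connected
                 with conn:
                     request = conn.recv(1024)         # receive the client's request
                     conn.sendall(b"response to: " + request)   # serve the request

         def run_client():
             with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                 cli.connect((HOST, PORT))             # client contacts the server
                 cli.sendall(b"GET /object")           # request an object
                 print(cli.recv(1024))                 # print the server's response

         In this sketch, the server's address is fixed and known to the client in advance, which mirrors the point
         made above that the server's IP address is always fixed and available on the network for clients.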
             11. Distinguish between point-to-point and multipoint connections? Give suitable diagrams.
           Ans: In order to communicate with each other, two or more devices must be connected using a
        link. A link is a physical path using which data can be transferred between two or more devices. The
        two possible types of connections between two or more devices are point-to-point and multipoint
        connections.
           Point-to-Point: In a point-to-point connection, there is a dedicated link between the communicating
              devices (Figure 1.5). The link is totally reserved for transmission specifically for those two devices.
              Most point-to-point connections are established with cable or wire though satellite or microwave
              links are also possible. For example, operating any device such as television using a remote control
              establishes a point-to-point connection between the device and the remote control.
                 [Figure 1.5   Point-to-Point Connection: a dedicated link between two workstations. A further figure
                 shows a multipoint connection in which several workstations share a single link to a mainframe]
        Bus Topology
        Bus topology uses a common bus or backbone (a single cable) to connect all devices with terminators
        at both ends. The backbone acts as a shared communication medium and each node (file server,
        workstations and peripherals) is attached to it with an interface connector. Whenever a message is to
         be transmitted on the network, it is passed back and forth along the cable, past the stations (computers)
         and between the two terminators, from one end of the network to the other. As the message passes each
         station, the station checks the message’s destination address. If the address in the message matches
         the station’s address, the station receives the message. If the addresses do not match, the bus carries
         the message to the next station and so on. Figure 1.7 illustrates how devices such as file servers, work-
         stations and printers are connected to the linear cable or the backbone.
                 [Figure 1.7   Bus Topology: a backbone cable with terminators at both ends and nodes such as a
                 file server, workstations and a printer attached to it]
          14. Explain ring, star and tree topologies along with their advantages and disadvantages.
         		 Ans: The ring, star and tree topologies are detailed below.
        Ring Topology
        In this topology, computers are placed on a circle of cable without any terminated ends since there are
        no unconnected ends (Figure 1.9). Every node has exactly two neighbours for communication purposes.
         All messages travel around the ring in the same direction (clockwise or counterclockwise) until they reach
         their destination. Each node in the ring incorporates a repeater. When a node receives a signal intended for
        another device, its repeater regenerates the bits and passes them along the wire.
        Star Topology
         In this topology, devices are not directly linked to each other; however, they are connected via a
          centralized network component known as a hub or concentrator (Figure 1.10). The hub acts as a central
         controller; if a node wants to send data to another node, the hub boosts up the message and sends it
         to the intended node. This topology commonly uses twisted-pair cable; however, coaxial cable
          or fibre-optic cable can also be used.
        Tree Topology
        A tree topology combines characteristics of linear bus and star topologies (Figure 1.11). It consists of groups
         of star-configured workstations connected to a bus backbone cable. Not every node plugs directly into the
         central hub; the majority of nodes connect to a secondary hub that, in turn, is connected to the central hub. Each
        secondary hub in this topology functions as the originating point of a branch to which other nodes connect.
        A tree topology is best suited when the network is widely spread and partitioned into many branches.
           The advantages of tree topology are as follows:
           The distance a signal can travel increases, as the signal passes through a chain of hubs.
           It allows isolating and prioritizing communications from different nodes.
           It allows for easy expansion of an existing network, which enables organizations to configure a
             network to meet their needs.
           The disadvantages of tree topology are as follows:
           If the backbone line breaks, the entire segment goes down.
            It is more difficult to configure and wire than other topologies.
           WAN offers many advantages to business organizations. Some of them are as follows:
           It offers flexibility of location because not all the people using the same data have to work at the
             same site.
            Communication between branch offices can be improved using e-mail and file sharing.
           It facilitates a centralized company wide data backup system.
           Companies located in a number of small and interrelated offices can store files centrally and access
             each other’s information.
              17. What are the two types of transmission technology available?
            Ans: The two types of transmission technology that are available include broadcast networks and
         point-to-point networks.
            In broadcast networks, a single communication channel is shared by all the machines of that
        network. When a short message, let us say a packet is sent by any machine, it is received by all the
         other machines on that network. This packet contains an address field which stores the address of
         the intended recipient. Once a machine receives a packet, it checks the address field. If the address
         mentioned in the address field of packet is matched with the address of the recipient machine, it is
         processed; otherwise, the packet is ignored. In broadcast systems, there is a special code in the address
         field of the packet which is intended for all the destinations. When a packet with this code is trans-
         mitted, it is supposed to be received and processed by all the machines on that network. This mode
          of operation is called broadcasting. Some of the networks also support transmission to a subset of
          machines, which is called multicasting.
            In point-to-point networks, there could be various intermediate machines (such as switching de-
         vices called nodes) between the pair of end points called stations. Thus, there could be various pos-
         sible routes of different lengths for a packet to travel from the source to the destination. Various routing
          algorithms are considered and finally, one of them is chosen for the packets to travel from the source to
         the destination.
             Generally, for small, geographically localized networks (such as LANs), broadcasting is considered
          favourable, while larger networks (such as WANs) use point-to-point transmission. If there is exactly one
          sender and one receiver in a point-to-point network, the transmission is sometimes referred to as unicasting.
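             The receiving behaviour described for broadcast networks can be sketched as follows; the addresses and
          the special broadcast code are hypothetical values chosen only for illustration.

          # Illustrative sketch: how a machine on a broadcast network decides
          # whether to process or ignore an incoming packet.
          BROADCAST_ADDRESS = "FF"        # hypothetical special code meaning "all machines"
          MY_ADDRESS = "0A"               # hypothetical address of this machine

          def handle_packet(destination_address, payload):
              if destination_address in (MY_ADDRESS, BROADCAST_ADDRESS):
                  print("Processing packet:", payload)     # packet is for us (or for everyone)
              else:
                  pass                                      # packet is ignored

          handle_packet("0A", "unicast data")      # processed: addressed to this machine
          handle_packet("FF", "broadcast data")    # processed: addressed to all machines
          handle_packet("0B", "someone else's")    # ignored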
         out to be more efficient and capable. Initially, NSFNET was designed to link five supercomputers
         situated at the major universities of NSF and allowed only academic research. Over time, this
         network expanded to include sites for businesses, universities, government, etc., and finally became a
         network consisting of millions of computers, now known as the Internet. Now, it is probably the most
        powerful and important technological advancement since the introduction of the desktop computer.
        With the advancement of Internet, the quality, quantity and variety of information also grew. Today,
        the Internet is a repository of every type of information. Nowadays, an Internet user can get all sorts
        of information ranging from how to add to the design of a functional spaceship to how to choose a
        product for personal use.
              19. Define protocol. Describe the key elements of protocols.
           Ans: Protocol refers to a set of rules that coordinates the exchange of information between the
        sender and receiver. It determines what is to be communicated, and how and when it is to be commu-
        nicated. Both sender and receiver should follow the same protocol to communicate data. Without the
        protocol, the sender and receiver cannot communicate with each other. The main elements of a protocol
        are as follows:
          Syntax: It refers to the format of the data that is to be transmitted.
          Semantics: It refers to the meaning and interpretation of each bit of data.
           Timing: It defines when the data should be sent and with what speed.
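            To relate the three elements to something concrete, here is a small illustrative sketch of a made-up
         protocol; its message layout (syntax), field meanings (semantics) and pacing rule (timing) are purely
         hypothetical and are not part of any real protocol.

         # Illustrative sketch of the three protocol elements for a made-up protocol.
         # Syntax: the message format is a fixed layout of fields.
         #   bytes 0-1: sender address, bytes 2-3: receiver address, remaining bytes: data
         # Semantics: what each field means and what the receiver should do with it.
         # Timing: when and how fast messages may be sent (here, at most 10 per second).
         import time

         MAX_MESSAGES_PER_SECOND = 10    # hypothetical timing rule

         def build_message(sender: bytes, receiver: bytes, data: bytes) -> bytes:
             assert len(sender) == 2 and len(receiver) == 2   # enforce the syntax
             return sender + receiver + data

         def send(messages):
             for msg in messages:
                 # timing: pace transmissions so the receiver is not overwhelmed
                 time.sleep(1 / MAX_MESSAGES_PER_SECOND)
                 print("sending", msg)

         send([build_message(b"\x00\x01", b"\x00\x02", b"hello")])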
               the Internet is that an intranet user can view only those websites that are owned and maintained
               by the organization hosting the intranet. On the other hand, an Internet user may visit any website
               without any permission.
              Extranet: This is an extended intranet owned, operated and controlled by an organization. In addition
                to allowing access to members of an organization, an extranet uses firewalls, access profiles and
               privacy protocols to allow access to users from outside the organization. In essence, an extranet is
               a private network that uses Internet protocols and public networks to securely share resources with
               customers, suppliers, vendors, partners or other businesses.
              22. Represent the message 5A.dat in ASCII code. Assume parity bit position (eighth bit) as 0.
             Ans: Using the ASCII coding chart shown in Table 1.2, 5A.dat will be coded as shown here.
          Bit positions        7          6          5           4          3          2          1          0
          5                    0          0          1           1          0          1          0          1
          A                    0          1          0           0          0          0          0          1
          .                    0          0          1           0          1          1          1          0
          d                    0          1          1           0          0          1          0          0
          a                    0          1          1           0          0          0          0          1
          t                    0          1          1           1          0          1          0          0
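            The rows above can be verified with a short Python sketch that prints the 7-bit ASCII code of each
         character of 5A.dat with the eighth (parity) bit fixed at 0:

         # Illustrative check of the table above: 7-bit ASCII with the eighth (parity) bit set to 0.
         for ch in "5A.dat":
             bits = "0" + format(ord(ch), "07b")   # parity bit (position 7) is 0, then bits 6..0
             print(ch, bits)
         # Output:
         # 5 00110101
         # A 01000001
         # . 00101110
         # d 01100100
         # a 01100001
         # t 01110100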
          Bit positions        7          6          5          4           3          2          1          0
          H                    1          1          0          0           1          0          0          0
          e                    1          0          0          0           0          1          0          1
          l                    1          0          0          1           0          0          1          1
          L                    1          1          0          1           0          0          1          1
          O                    1          1          0          1           0          1          1          0
          3                    1          1          1          1           0          0          1          1
          4                    1          1          1          1           0          1          0          0
(a) A hybrid topology with star backbone and three bus networks is shown in Figure 1.15.
Figure 1.15 Hybrid Topology with a Star Backbone and Three Bus Networks
              (b) A hybrid topology with star backbone and four ring topologies is shown in Figure 1.16.
                             Figure 1.16      Hybrid Topology with a Star Backbone and Four Ring Networks
             25. Assume a network with n devices. Calculate how many links are required to set up this
         network with mesh, ring, bus and star topologies.
          Ans: The number of links required to set up this network with the various topologies are listed in
        Table 1.6.
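            As a quick check, the usual link counts for each topology can be computed as in the following sketch;
         the counting conventions (for example, one backbone cable plus one drop line per device for a bus) are
         common ones, though textbooks sometimes count differently.

         # Illustrative sketch: number of links needed to connect n devices
         # under each topology (one common convention for counting them).
         def links_required(n: int) -> dict:
             return {
                 "mesh": n * (n - 1) // 2,   # a dedicated link between every pair of devices
                 "ring": n,                  # each device connects to the next, closing the loop
                 "bus": 1 + n,               # one backbone cable plus one drop line per device
                 "star": n,                  # one link from each device to the central hub
             }

         print(links_required(5))   # {'mesh': 10, 'ring': 5, 'bus': 6, 'star': 5}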
         [Figure: Exchange of data between Machine A and Machine B through the layers — each layer on
         Machine A communicates with its peer layer on Machine B using its own protocol and adds its own
         header (AH, PH, SH, TH, NH, DH) to the data; the data link layer also adds a trailer (DT)]
        header PH to the received data and passes it to the session layer. The same process is repeated at the
        session (layer 5), transport (layer 4) and network layers (layer 3). At the data link layer (layer 2), a trailer
         DT is also added to the data received from network layer along with the header DH. Finally, the entire
         package (data plus headers and trailer) reaches the physical layer where it is transformed into a form
         (electromagnetic signals) that can be transmitted to the machine B.
             At machine B, the reverse process happens. The physical layer transforms the electromagnetic signals
          back into digital form. As the data travels up through the layers, each layer strips off the header (or
          trailer) added by its peer layer and passes the rest of the package to the layer directly above it. For example,
          the data link layer removes DH and DT from the package received from the physical layer and passes the
          resultant package to the network layer. The network layer then removes NH and passes the rest of the
          package to the transport layer and so on. Ultimately, the data reaching the application layer of machine B
          is in a format appropriate for the receiving process on machine B and thus is passed to the process.
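             The header-per-layer behaviour described above can be imitated in a few lines of Python; the header
          and trailer strings below are placeholders for illustration, not real protocol formats.

          # Illustrative sketch of encapsulation and decapsulation through the layers.
          # The header/trailer strings are placeholders, not real protocol formats.
          HEADERS = ["AH", "PH", "SH", "TH", "NH", "DH"]   # application ... data link headers

          def encapsulate(data: str) -> str:
              for header in HEADERS:
                  data = header + "|" + data        # each layer prepends its own header
              return data + "|DT"                   # the data link layer also appends a trailer

          def decapsulate(package: str) -> str:
              package = package.rsplit("|DT", 1)[0]            # data link layer strips the trailer
              for header in reversed(HEADERS):
                  package = package.split(header + "|", 1)[1]  # each peer layer strips its header
              return package

          wire_format = encapsulate("Data")
          print(wire_format)                # DH|NH|TH|SH|PH|AH|Data|DT
          print(decapsulate(wire_format))   # Data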
                6. Explain the duties of each layer in OSI model.
           Ans: The seven layers of OSI model are divided into two groups according to their functionalities.
        Physical, data link and network layers are put in one group, as all these layers help to move data between
        devices. Transport, session, presentation and application layers are kept in other group, because they
        allow interoperability among different software. The functions of each layer are discussed as follows:
           1. Physical Layer: This layer defines the physical and electrical characteristics of the network.
               It acts as a conduit between computers’ networking hardware and their networking software.
                  It handles the transfer of bits (0s and 1s) from one computer to another. This is where the bits
                  are actually converted into electrical signals that travel across the physical circuit. Physical layer
                  communication media include various types of copper or fibre-optic cable, as well as many
                  different wireless media.
             2.    Data Link Layer: This layer is responsible for reliable delivery of data from node to node and
                    for providing services to the network layer. At the sender's side, the data link layer divides the pack-
                    ets received from the network layer into manageable units known as frames. These data frames
                   are then transmitted sequentially to the receiver. At the receiver’s end, data link layer detects and
                   corrects any errors in the transmitted data, which it gets from the physical layer. Other functions
                   of data link layer are error control and flow control. Error control ensures that all frames have
                    finally been delivered to the destination's network layer in the proper order. Flow control regulates
                    the sender so that it sends frames according to the receiving capability of the recipient.
             3.    Network Layer: This layer provides end-to-end communication and is responsible for transport-
                   ing traffic between devices that are not locally attached. Data in the network layer is called packet
                   (group of bits) which in addition to data contains source and destination address. Packets are sent
                   from node to node with the help of any of two approaches, namely, virtual circuit (connection-
                   oriented) and datagram (connectionless). In virtual circuit method, route is decided while estab-
                   lishing connection between two users and the same path is followed for the transmission of all
                    packets. In the datagram method, no connection is established, and the sequenced packets may
                    take different paths to reach the destination. Thus, the virtual circuit method resembles the telephone
                    system and the datagram method resembles the postal system. Other functions of the network layer include
                   routing, deadlock prevention and congestion control. Network layer makes routing decisions
                   with the help of routing algorithms to ensure the best route for packet from source to destination.
                   Congestion control tries to reduce the traffic on the network, so that delay can be reduced and
                   overall performance can be increased.
             4.    Transport Layer: The basic function of this layer is to handle error recognition and recovery of
                   the data packets. It provides end-to-end communication between processes which are executing
                   on different machines. It establishes, maintains and terminates communications between the sender
                    process and the receiver process. It splits the message into packets at the sender's end and passes each one on to
                   the network layer. At the receiver’s end, transport layer rebuilds packets into the original message,
                   and to ensure that the packets arrive correctly, the receiving transport layer sends acknowledgements
                   to the sender’s end.
             5.    Session Layer: The session layer comes into play primarily at the beginning and end of transmis-
                   sion. At the beginning of the transmission, it lets the receiver know its intent to start transmission.
                   At the end of the transmission, it determines whether or not the transmission was successful.
                   This layer also manages errors that occur in the upper layers such as a shortage of memory or
                   disk space necessary to complete an operation or printer errors. Some services provided by the
                   session layer are dialog control, synchronization and token management. Dialog control service
                    allows traffic to flow in both directions or in a single direction at a time and also keeps track of
                   whose turn it is to transmit data. Synchronization helps to insert checkpoints in data streams, so
                    that if the connection breaks during a long transmission, only the data that have not yet passed a
                    checkpoint need to be retransmitted. Token management prevents two nodes from executing the
                    same operation at the same time.
             6.    Presentation Layer: The function of this layer is to ensure that information sent from the applica-
                    tion layer of one system is readable by the application layer of another system. Therefore, the
                presentation layer is concerned with the syntax and semantics of the information transmitted. This is
               the place where application data is packed or unpacked and is made ready to use by the running
               application. This layer also manages security issues by providing services such as data encryp-
               tion and compression, so that fewer bits need to be transferred on the network.
            7. Application Layer: This layer is the entrance point that programs use to access the OSI model and
               utilize network resources. This layer represents the services that directly support applications.
               This OSI layer is closest to the end users. Application layer includes network software that
               directly serves the end users of the network by providing them user interface and application
               features such as electronic mail.
                7. Explain the terms interface and service. Discuss protocol data unit (PDU).
            Ans: Each layer contains some active elements called entities, such as process. Entities are named
        as peer entities when they are in the same layer on different systems. Between two adjacent layers is an
        interface which defines the operations and services of the lower layer that are available to its immediate
        upper layer. A well-defined interface in a layered network helps to minimize the amount of traffic passed
        between layers. The set of operations provided by a layer to the layer above it is called service. The
        service is not concerned about the implementations of operations but defines what operations the layer
        can perform for its users. Thus, lower layer implements services that can be used by its immediate upper
        layer with the help of an interface. The lower layer is called a service provider and the upper layer is
        called a service user.
             A layer can request the services of the layer below it through a specific location known as a service
         access point (SAP). A SAP has a unique address associated with it. For example, in a fax machine system,
         the SAP is the socket in which a fax machine can be plugged, and SAP addresses are the fax numbers,
         which are unique for every user. Therefore, to send a fax, the destination SAP address (fax number) must
         be known (Figure 2.3).

         [Figure 2.3   Layers and Their PDUs: application, presentation and session layers — PDU (data);
         transport layer — segments; network layer — packets; data link layer — frames; physical layer — bits]

             For communication and information sharing, each layer makes use of PDUs. A PDU can be attached
         in front of (header) or at the end of (trailer) the data and contains control information which is encapsulated
         with the data at each layer. Depending on the information provided in the header, each PDU is given a
         specific name. For example, at the transport layer, data plus PDU is called a segment; at the network layer,
         the segment plus the PDU added by the network layer is given the name packet or datagram; and at the
         data link layer, the packet with the data link PDU is called a frame. The PDU information attached by a
         specific layer at the sender's end can only be read by the peer layer at the receiver's end. After reading
         the information, the peer layer strips off the PDU and passes the remaining package to its immediate
         upper layer.
              8. Write the functions of data link layer in OSI model.
           Ans: Data link layer is responsible for the transmission of frames between two nodes and provides
         error notification to ensure that data is delivered to the intended node. To achieve this, it performs the
        following functions:
          Framing: The data link layer takes the raw stream of bits from the physical layer and divides it into
             manageable units called frames. To indicate the start and end of each frame to the receiver, several
              methods including character count, bit stuffing and byte stuffing are used.
             Physical Addressing: The data link layer adds physical address of the sender and/or receiver by
               attaching header to each frame.
              Flow Control: The data link layer provides a flow control mechanism, which prevents a slow receiv-
                er from being flooded by a fast sender. If the sender's transmission speed is faster as compared
               to the receiving capability of receiver, it is quite probable that some frames are lost. To avoid such
               undesirable events, the data link layer must provide a means to control the flow of data between
               sender and receiver.
             Error Control: The data link layer is responsible for ensuring that all frames are finally delivered
to the desired destination. To achieve this, the error control mechanism of the data link layer makes the receiver send a positive or negative acknowledgement to the sender. A positive acknowledgement assures the sender that the frame has been received without any errors, while a negative acknowledgement indicates that the frame has not been received or has been damaged or lost during transmission. It is the responsibility of the data link layer to ensure the retransmission of damaged and lost frames.
Access Control: When two or more devices are connected to each other via the same link, the data link layer protocol determines which device has control over the link at a given point of time. The Institute of Electrical and Electronics Engineers (IEEE) has divided the data link layer into two sublayers.
                • Logical Link Control (LLC) Sublayer: This sublayer establishes and maintains links between
                    the communicating devices. It also provides SAPs, so that hosts can transfer information from
                    LLC to the network layer.
• Media Access Control (MAC) Sublayer: This sublayer determines which device gets to access the channel next when the channel is shared by multiple devices. It communicates
                    directly with the network interface card (NIC) of hosts. NIC has MAC address of 48 bits that
                    is unique for each card.
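   As a concrete illustration of the framing methods mentioned above, the following Python sketch shows byte stuffing; the FLAG and ESC byte values used here are arbitrary placeholders rather than values taken from any particular protocol.

# Byte stuffing: FLAG marks frame boundaries; ESC is inserted before any accidental
# FLAG or ESC byte inside the payload. Values are illustrative only.
FLAG, ESC = b"\x7e", b"\x7d"

def stuff(payload: bytes) -> bytes:
    body = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            body += ESC                 # escape the special byte
        body.append(b)
    return FLAG + bytes(body) + FLAG    # delimit the frame

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]                  # drop the delimiting flags
    out, skip = bytearray(), False
    for b in body:
        if not skip and bytes([b]) == ESC:
            skip = True                 # the next byte is data, not a delimiter
            continue
        out.append(b)
        skip = False
    return bytes(out)

data = b"ab\x7ecd"
assert unstuff(stuff(data)) == data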
              9. Discuss some functions of the session layer and presentation layer.
Ans: The session layer is responsible for establishing and maintaining sessions between processes running on different machines as well as synchronizing the communication between them. Some other functions of this layer are as follows:
  It allows the processes to communicate in either half-duplex or full-duplex mode. That is, information can be transmitted between the processes either in only one direction at a time or in both directions at the same time.
          It makes use of checkpoints for synchronization that helps in identifying which data to retransmit.
        The presentation layer is concerned with the syntax and semantics of the information being transmitted
        between communicating devices. Other functions of presentation layer are as follows:
          It converts the representation of information used within the computer to network standard repre-
            sentation and vice versa.
It encrypts data at the sender's end and decrypts it at the receiver's end. It also compresses the data to reduce the traffic on the communication channel.
             10. Compare the functionalities of network layer with transport layer.
           Ans: Both network and transport layers are responsible for end-to-end communication but still there
        are certain differences in the set of services they provide. These differences are listed in Table 2.1.
             11.	Explain in brief the transmission control protocol/internet protocol (TCP/IP) reference model.
   Ans: The TCP/IP model was developed by the U.S. Department of Defense (DoD) to connect multiple networks and preserve data integrity. The TCP/IP protocol model came after the OSI model, and the number of layers in TCP/IP differs from that of the OSI model. The TCP/IP model comprises four layers, namely, network access (also called host-to-network), Internet, transport and application layers (see Figure 2.4).

Figure 2.4 TCP/IP Model (application layer protocols such as FTP, HTTP and SMTP; transport layer: TCP, UDP, SCTP; Internet layer: IP and related protocols; network access layer: all standard protocols)

   The network access layer of TCP/IP model corresponds to the combination of physical and data link layers of OSI model. The
        Internet layer corresponds to the network layer of OSI model and
        the application layer performs tasks of session, presentation and application layers of OSI model with
        the transport layer of TCP/IP performing a part of responsibilities of session layer of OSI model.
           TCP/IP protocol suite contains a group of protocols forming a hierarchy such that the lower layer
        protocols support upper layer protocols.
   1. Network Access Layer: This layer does not rely on any specific protocol and hence supports all standard protocols. It connects two nodes to the network with the help of some protocol and moves data between two nodes that are connected via the same link. Once connected, the nodes can transfer IP packets to each other. This layer is also referred to as the host-to-network layer.
           2. Internet Layer: The main function of this layer is to enable the hosts to transmit packets to
               different networks by taking any of the routes available for reaching the destination. This layer
                strengthens the whole architecture and defines the format of packet. The rules to be followed
                while delivering the packets are transparent to the users. Internet layer supports many protocols
                such as IP, address resolution protocol (ARP), reverse ARP (RARP), Internet control message
                protocol (ICMP) and Internet group message protocol (IGMP). IP is an unreliable and connec-
                tionless protocol that transmits data in the form of packets called datagrams. Each datagram is
                transmitted independently and can travel through a different route. Moreover, the datagrams may
                reach the destination not necessarily in the order in which they were sent or may be duplicated.
                IP neither keeps track of the routes followed by the datagrams nor does it perform any error checking.
                It tries its best to deliver the datagrams at their intended destinations; however, it does not ensure
                the delivery. ARP is used to identify the physical address of a node whose logical address is
                known. RARP performs just the reverse of ARP, that is, it enables a host whose physical address
                is known to identify its logical address. This protocol is used when a new node is connected to the
                network or when an already connected node is formatted. ICMP is used to send error messages
                 to the sender in case the datagram does not reach its destination. IGMP is used to deliver the
                 same message to a number of recipients at the same time.
             3. Transport Layer: The main function of this layer is to deliver a message from a process on
                 the source machine to a process on the destination machine. This layer is designed to allow
                 end-to-end conversation between peer entities. It uses three protocols, namely, TCP, user datagram
protocol (UDP) and stream control transmission protocol (SCTP) to accomplish its responsibilities.
                TCP is a connection-oriented protocol which means a connection must be established between
                the source and the destination before any transmission begins. It is also a reliable protocol, as it
                ensures error-free delivery of data to the destination. UDP is an unreliable and a connectionless
                protocol that performs very limited error checking. SCTP is the combination of UDP and TCP
                and it supports advanced features such as voice over the Internet.
             4. Application Layer: This layer contains all the high-level protocols such as file transfer protocol
                (FTP) and virtual terminal (TELNET). Some more protocols which were added later include
                domain name service (DNS), hyper text transfer protocol (HTTP) and many more. With the
                help of various protocols, this layer integrates many activities and responsibilities for effective
                communication.
             12.	Describe various types of addresses associated with the layers of TCP/IP model.
   Ans: Each layer in the TCP/IP model uses an address for the efficient delivery of data between communicating nodes. The host-to-network layer (physical plus data link layer) uses physical addresses, the network (Internet) layer uses logical addresses, the transport layer uses port addresses and the application layer defines specific addresses. The description of these addresses is as follows:
Physical Address: It is the address assigned to a node by the network (LAN or WAN) to which it is connected. It is the lowest-level address that is included in the frames at the data link layer to identify the destination node. The size and format of a physical address are highly dependent on the underlying physical network. That is, different networks can have different address formats. The physical address is
             also known by other names including link address, MAC address and hardware address.
           Logical Address: In an inter-networked environment connecting different networks having differ-
             ent address formats, the physical addresses are inadequate for communication. Thus, a universal
             addressing system is used that assigns each host in the network a unique address called logical
             address (also referred to as IP address or software address) which is independent of the underlying
physical network. For example, each host connected to the Internet is assigned a 32-bit
             IP address and no two hosts connected to the Internet can have the same IP address. It is the respon-
             sibility of the network layer to translate the logical addresses into physical addresses.
Port Address: The data link layer (using the physical address) and the network layer (using the IP address) together ensure end-to-end delivery of data, that is, data reaches the destination host. Now, since multiple
             processes may be running simultaneously on the host machine, there should be some means to iden-
             tify the process to which data is to be communicated. To enable this, each running process on the host
machine is assigned a label known as the port address. Using the port address, the transport layer ensures process-to-process delivery. In the TCP/IP architecture, the port address is 16 bits long.
Specific Address: Some applications such as e-mail and the World Wide Web (WWW) provide user-friendly addresses designed for that specific application. Some examples of specific addresses include an e-mail address that helps to identify the recipient of an e-mail and the URL of a website that helps to locate a document on the web (see the sketch below).
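   As a rough illustration of how these addresses look in practice, the short Python sketch below prints a physical (MAC) address, a logical (IP) address, a port number and a specific (application-level) address. The host name and URL are placeholders, and the name resolution step needs network access.

import socket, uuid

# Physical (MAC) address: the 48-bit identifier of the local interface, shown as hex bytes.
mac = uuid.getnode()
print("MAC address :", ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)))

# Logical (IP) address: resolved for a placeholder host name.
print("IP address  :", socket.gethostbyname("example.com"))

# Port address: a 16-bit label identifying a process (80 is the well-known HTTP port).
print("Port        :", 80)

# Specific address: user-friendly, application-defined (e-mail address, URL).
print("URL         :", "http://example.com/index.html")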
  14. Explain where the following fit in the OSI reference model.
        		 (a) A 4-kHz analog connection across the telephone network.
        		 (b) A 33.6-kbps modem connection across the telephone network.
        		 (c) A 64-kbps digital connection across the telephone network.
         Ans: (a) The actual 4-kHz analog signal exists only in the physical layer of the OSI reference model.
          (b)	A 33.6-kbps modem is used for connecting a user to the switch across the telephone network
               and a modem also performs error checking, framing and flow control. Therefore, data link
               layer will be used for performing such functionality.
          (c)	A 64-kbps digital signal carries user information and is similar to the 4-kHz analog connection
               which makes use of twisted pair cable. Therefore, physical layer will be used for performing
               such functionality.
            15. Discuss in brief the Novell Netware network model.
   Ans: Novell Netware is a popular network system which was designed to replace mainframes with a network of PCs, thereby reducing costs for companies. In this network, each user is assigned a desktop PC operating as a client that uses services (database, file, etc.) provided by some powerful PCs operating as servers. The Novell network is a modification of the old Xerox network system (XNS) and it uses a protocol stack (see Figure 2.5). It was developed prior to OSI and resembles the TCP/IP model.
        Repeater
        It is the most basic device on a network. Signals that carry information within a network can travel a
        fixed distance before attenuation endangers the integrity of the data. A repeater installed on the link
        receives signal, regenerates it and sends the refreshed copy back to the link. Doing this means that the
        new signal is clean and free from any background noise introduced while travelling down the wire.
        In Figure 2.7, two sections in a network are connected by the repeater.
   Repeaters are most commonly used to extend a network. All network cable standards specify a maximum cable length. If the distance between two network devices is longer than this specification,
        a repeater is needed to regenerate the signal. Without the repeater, the signal will be too weak for the
        computers on each end to reliably understand. A good example of the use of repeaters would be in a
        LAN using a star topology with unshielded twisted pair cabling. The length limit for unshielded twisted
        pair cable is 100 m. The repeater amplifies all the signals that pass through it allowing for the total
        length of cable on the network to exceed the 100 m limit. Nonetheless, the repeaters have no in-built
        intelligence and they do not look at the contents of the packet while regenerating the signal. Thus, there
        is no processing overhead in sending a packet through a repeater. However, a repeater will repeat any
        errors in the original signal.
        Hub
        It is a small box that connects individual devices on a network, so that they can communicate with one
        another. The hub operates by gathering the signals from individual network devices, optionally ampli-
        fying the signals, and then sending them onto all other connected devices. Amplification of the signal
        ensures that devices on the network receive reliable information. A hub can be thought of as the centre
        of a bicycle wheel, where the spokes (individual computers) meet.
    Nowadays, the terms repeater and hub are used synonymously, but actually they are not the same.
        Although at its very basic level, a hub can be thought of as a multi-port repeater. Typically, hubs have
        anywhere from 4 to over 400 ports. When a signal is received on one port of the hub, it is regenerated
        out to all the other ports. It is most commonly used to connect multiple machines to the same LAN.
        Administrators connect a computer to each port on the hub, leaving one port free to connect to another
        hub or to a higher-level device such as a bridge or router.
        Bridge
        This device allows the division of a large network into two or more smaller and efficient networks.
        It monitors the information traffic on both sides of the network, so that it can pass packets of infor-
        mation to the correct location. Most bridges can ‘listen’ to the network and automatically figure out
the address of each computer on both sides of the bridge. A bridge examines each packet as it enters through one of its ports. It first looks at the MAC address of the sender and creates a mapping between
        the port and the sender’s MAC address. It then looks at the address of the recipient, comparing the
        MAC address to the list of all learned MAC addresses. If the address is in the list, the bridge looks
        up the port number and forwards the packet to the port where it thinks the recipient is connected. If
        the recipient’s MAC address is not in the list, the bridge then does a flood; it sends the signal to all the
        ports except the one from where it was received. As a result, a bridge reduces the amount of traffic on
        a LAN by dividing it into two segments. It inspects incoming traffic and decides whether to forward
        or discard it (Figure 2.8).
Figure 2.8 A Bridge Connecting LAN 1 and LAN 2
           Bridges can be used to connect networks with different types of cabling or physical topologies. They
        must, however, be used between networks employing the same protocol. Since a bridge examines the
        packet to record the sender and looks up the recipient, there is overhead in sending a packet through a
bridge. On a modern bridge, this overhead is minuscule and does not affect network performance.
        Switch
        It is a multi-port bridge. It connects all the devices on a network, so that they can communicate with one
another. The behaviour of a switch is the same as that of a bridge. It is capable of inspecting the data packets
        as they are received, determining the source and destination device of that packet, and forwarding that
        packet appropriately. The difference is that most switches implement these functions in hardware using
        a dedicated processor. This makes them much faster than traditional software-based bridges.
        Router
        It is an essential network device for interconnecting two or more networks. The router’s sole aim
        is to trace the best route for information to travel. As network traffic changes during the day, rout-
        ers can redirect information to take less congested routes. A router creates and/or maintains a table,
        called a routing table that stores the best routes to certain network destinations. While bridges know
        the addresses of all computers on each side of the network, routers know the addresses of computers,
        bridges and other routers on the network. Routers can even ‘listen’ to the entire network to determine
        which sections are the busiest. They can then redirect data around those sections until they clear up
        (Figure 2.9).
Figure 2.9 Router (a router interconnecting LANs through hubs)

Gateway
It is an internetworking device which joins networks operating on different protocols. It is also known as a protocol converter. A gateway accepts a packet formatted for one protocol and converts it into another protocol. For example, a gateway can receive an e-mail message in one format and convert it into another format. A gateway can be implemented completely in software, completely in hardware, or as a combination of both. One can connect systems with different protocols, languages and architectures using a gateway (Figure 2.10).

Figure 2.10 Gateway
        addresses of all stations connected through it and, thus, helps to forward frames from one station to
        another. Different types of bridges are as follows.
        Transparent Bridge
        As the name implies, the existence of bridge is transparent to the stations connected through it. The
        transparent bridge is also called learning bridge, as its forwarding table is made automatically by
learning the movement of frames in the network. Initially, the forwarding table contains no entries; however, as frames move across the networks, the bridge uses the source addresses to make or update entries in the forwarding table and the destination addresses to make forwarding decisions.
        Figure 2.11 shows a transparent bridge connecting two networks LAN1 and LAN2 via ports 1 and 2,
        respectively.
Figure 2.11 Transparent Bridge (stations A and B on LAN 1 connected to stations C and D on LAN 2 via ports 1 and 2)
           To understand how transparent bridge works, suppose station A wishes to send frame to station D.
        Since there is no entry corresponding to A or D in the forwarding table, the bridge broadcasts the
        frames to both the ports (that is, ports 1 and 2). However, at the same time, it learns from the source
        address A that the frame has come through port 1 and, thus, adds an entry (the source address A and
        port 1) to the forwarding table. Next time, whenever a frame from a station (say, C) destined for A
        comes to the bridge, it is forwarded only to port 1 and not elsewhere. In addition, an entry (source
        address C and port 2) is added to the forwarding table. Thus, as the frames are forwarded, the learning
        process of bridge continues.
           An important advantage of transparent bridge is that since stations are not aware of the presence of
bridge, the stations need not be reconfigured in case a bridge is added or deleted.
        Remote Bridge
        It is used to connect two bridges at remote locations using dedicated links. Remote bridge configuration
is shown in Figure 2.13, in which two bridges are interconnected using a WAN.

Figure 2.13 Remote Bridge (two bridges at distant sites interconnected over a WAN)
          Switch                                                              Hub
          A switch operates at the data link layer.                           A hub operates at the physical layer.
          It is a complex device and more expensive than a hub.               It is a simple device and cheaper than switch.
  It is a full-duplex device and more secure.                         It is a half-duplex device and less secure.
          Each port of switch has its own collision domain, that is, each     The entire hub forms a single-collision domain, that
          port has buffer space for storing frame; therefore, when two        is, when two frames arrive at the same time, they will
          frames arrive at the same time, frames will not be lost.            always collide and hence frame will be lost.
          It is an intelligent device, as it maintains a table to transmit    It is a non-intelligent device, as each time frame is
          the frame to the intended recipient.                                broadcast to all the connected nodes.
          It utilizes the bandwidth effectively.                              Wastage of bandwidth is more in the case of hub.
          Router                                                             Switch
          • It connects nodes on different networks.                        • It connects nodes within a same network.
          • It operates in network layer.                                   • It operates in data link layer.
          • It uses IP address for transmission of packets.                 • It uses MAC address for transmission of frames.
          • It is more intelligent and complex device than switch.          • It is less intelligent and simpler device than router.
          • Various algorithms are used to forward packets along            • No such algorithms are used by switch.
             their best path.
  • It needs to be configured before use.                          • Most switches are ready to use and need not be configured.
        Answers
        1. (b)     2. (c) 3. (b)   4. (d)   5. (c) 6. (b)   7. (b)     8. (b)   9. (a)   10. (d)
Figure 3.1 Analog and Digital Signals
        In contrast, a signal that has only a finite range of values (generally 0 and 1) is referred to as digital signal
        [Figure 3.1(b)]. Either of the analog or digital signals can be used to transmit either analog or digital data.
               4. What do you understand by periodic and non-periodic signals?
            Ans: Both analog and digital signals can be either periodic or non-periodic. A periodic signal
        exhibits a specific signal pattern that repeats over time [Figures 3.2(a) and 3.2(b)]. Sine waves and
         square waves are the most common examples of periodic analog and digital signals, respectively. On
         the other hand, a non-periodic (or aperiodic) signal does not repeat any specific signal pattern over
         time [Figures 3.2(c) and 3.2(d)]. Usually, in data communications, periodic analog and non-periodic
         digital signals are used.
Figure 3.2 Periodic and Non-periodic Signals

Figure 3.3 (a) A sine wave with peak amplitude marked, frequency 3 Hz and period 1/3 s; (b) the same sine wave shifted in phase
               For example, if a signal wave completes one period in 1 s, its frequency is 1 Hz.
              Phase: It refers to the measure of shift in the position of a signal with respect to time 0. Figure 3.3(b)
               shows a shift of 90° in the sine wave shown in Figure 3.3(a). Phase is measured in degree (°) or
               radians (rad) where 1° = 2π/360 rad.
Figure 3.4 Wavelength of a Signal (one cycle)
            Wavelength relates the frequency (f ) of a signal with the speed of propagation of the medium through
        which the signal is being transmitted. The relation between frequency and wavelength is expressed
        using the following formula.
                                         λ = propagation speed/f
                                      ⇒ λ = propagation speed × period
For example, in vacuum, the signal propagates with the speed of light (c), which is equal to 3 × 10⁸ m/s. Thus,
                                                               λ = c/f
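   A quick numeric check of this relation, assuming propagation at the speed of light in vacuum (a minimal sketch, not tied to any particular medium):

# Wavelength = propagation speed / frequency
c = 3e8                       # propagation speed in vacuum, m/s
for f in (1e3, 1e6, 1e9):     # 1 kHz, 1 MHz, 1 GHz
    print(f"f = {f:>6.0e} Hz  ->  wavelength = {c / f:,.2f} m")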
              7. Distinguish time-domain and frequency-domain representations.
           Ans: A signal can be represented in two ways: time-domain and frequency-domain. In time-domain
        representation, the signal is represented as a function of time. The time-domain plot of a signal depicts
        the changes in the amplitude of a signal with time. Figure 3.5 represents the sine wave using time-
        domain plot. The horizontal axis represents the time and the vertical axis represents the amplitude.
           In frequency-domain representation, a signal is represented as a function of frequency. Unlike time-
        domain plot, the frequency-domain plot does not depict the changes in amplitude; rather it depicts only
        the peak amplitude of the signal and frequency. In addition, the complete sine wave is represented just
        by a spike. Figure 3.6 shows the frequency domain plot for the sine wave shown in Figure 3.5.
Figure 3.5 Time-domain Plot of a Sine Wave (frequency 3 Hz, peak amplitude 6)

Figure 3.6 Frequency-domain Plot of the Same Sine Wave (a single spike of height 6 at 3 Hz)
                                 s(t) = A0 + Σ(n=1 to ∞) An cos(2πnft) + Σ(n=1 to ∞) Bn sin(2πnft),
where
                                 A0 = (1/T) ∫(0 to T) s(t) dt, the average value of the signal over a period
                                 An = (2/T) ∫(0 to T) s(t) cos(2πnft) dt, the coefficient of the nth cosine component
                                 Bn = (2/T) ∫(0 to T) s(t) sin(2πnft) dt, the coefficient of the nth sine component.
The Fourier series converts the time-domain representation of a periodic signal into a discrete frequency domain. To convert the time-domain representation of a non-periodic signal into a continuous frequency domain, the Fourier transform is used, as expressed below:
                                 S(f) = ∫(-∞ to ∞) s(t) e^(-j2πft) dt
                                 s(t) = ∫(-∞ to ∞) S(f) e^(j2πft) df
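   The coefficients above can also be approximated numerically. The sketch below (written in Python with NumPy) estimates A0, An and Bn for an example square wave by evaluating the integrals as sums over one period; the chosen signal and the number of sample points are illustrative assumptions.

import numpy as np

T = 1.0                                    # period of the signal, s
f = 1.0 / T                                # fundamental frequency, Hz
t = np.linspace(0.0, T, 10_000, endpoint=False)
s = np.where(t < T / 2, 1.0, -1.0)         # example periodic signal: a square wave

dt = t[1] - t[0]
A0 = (1.0 / T) * np.sum(s) * dt            # average value over a period
for n in range(1, 6):
    An = (2.0 / T) * np.sum(s * np.cos(2 * np.pi * n * f * t)) * dt
    Bn = (2.0 / T) * np.sum(s * np.sin(2 * np.pi * n * f * t)) * dt
    print(f"n={n}: An={An:+.4f}  Bn={Bn:+.4f}")
# For this square wave, A0 and all An are ~0, while Bn ~ 4/(n*pi) for odd n.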
  11. Define bit interval, bit rate and bit length of a digital signal.
           Ans:
           Bit Interval: It is the time required to transmit one bit.
           Bit Rate: The number of bits transmitted per second is known as bit rate. It can also be defined as
             the number of bit intervals per second. Its unit is bits per second (bps). The bit rate for a bit having
             bit interval t will be 1/t.
           Bit Length: The distance occupied by a single bit while transmission on a transmission medium is
             known as bit length. It is related to bit interval as per the formula given below:
                        Bit length = bit interval × speed of propagation
  12. Differentiate between baseband and broadband transmissions.
           Ans: Though both baseband and broadband transmissions are the approaches used to transmit digital
        signals, there are certain differences between the two. These differences are listed in Table 3.1.
             Delay Distortion: This is another type of distortion that results because of difference in the delays
               experienced by different frequency components while passing through the transmission medium.
               Since the speed of propagation of a signal varies with frequency, different frequency components
               of a composite signal arrive at the receiver at different times leading to delay distortion. As a
               result, the shape of received signal changes or gets distorted. Like attenuation distortion, the effect
               of delay distortion can be neutralized with the use of equalizers. However, unlike attenuation
               distortion, delay distortion is predominant in digital signals.
              Noise: When a signal transits through a transmission medium, some types of undesired signals
               may mix with it such as intermodulation noise and crosstalk. Intermodulation noise occurs in the
               cases where signals with different frequencies pass through the same transmission medium. In such
               cases, the frequencies of some signals may combine (add or subtract) to generate new frequency
components which may interfere with the signals of the same frequency sent by the transmitter. This leads to distortion in the signal, which is known as intermodulation noise. Crosstalk results when
               the electromagnetic signals passing through one wire are induced on other wires due to the close
               proximity of wires.
                                                  SNR = avg(PS)/avg(PN)
The higher the value of SNR, the less the signal is distorted due to noise, and vice versa. SNR is usually expressed in decibel (dB) units. SNRdB is defined as
                                                  SNRdB = 10 × log10(SNR).
            15. Write down the theoretical formula to calculate data rate for a noiseless channel and a
        noisy channel.
          Ans: Data rate for a noiseless channel can be determined by using the Nyquist bit rate formula as
        given below.
                                        Bit rate = 2 × B × log2L,
            where B = bandwidth of channel and L = the number of signal levels used to represent data.
        Data rate for a noisy channel can be determined by Shannon capacity, as given below.
                                               C = B × log2(1 + SNR),
        where C = data rate and B = bandwidth of channel.
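   As a worked example, both formulas can be evaluated directly; the bandwidth, number of levels and SNR below are illustrative values, not taken from the text:

import math

B = 3000           # channel bandwidth in Hz (illustrative)
L = 4              # number of signal levels (illustrative)
SNR = 1000         # signal-to-noise ratio as a plain ratio, roughly 30 dB (illustrative)

nyquist = 2 * B * math.log2(L)             # noiseless channel
shannon = B * math.log2(1 + SNR)           # noisy channel
print(f"Nyquist bit rate : {nyquist:.0f} bps")    # 12000 bps
print(f"Shannon capacity : {shannon:.0f} bps")    # ~29902 bps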
  16. Describe the factors used to measure the performance of a network.
           Ans: The performance of a network is one of the most important issues in networking. Some major
        factors that measure the performance of a network are as follows.
          Bandwidth: It is one of the main characteristics that measure the performance of a network. It
             can be used in two different contexts. First is the bandwidth in hertz which refers to the range of
             frequencies in a composite signal and second is the bandwidth in bps which refers to the number
             of bits that can be transmitted by a channel or network in 1 s. An increase in bandwidth in hertz
             implies the increase in bandwidth in bps.
             Throughput: It is the measure of how much data can be transmitted through a network in 1 s.
               Though from the definitions throughput sounds similar to bandwidth, it can be less than bandwidth.
               For example, a network may have bandwidth of 10 Mbps but only 2 Mbps can be transmitted
               through it.
Latency (Delay): It refers to the time elapsed between the transmission of the first bit from the source and the arrival of the complete message at the destination. It is the sum of the propagation time, transmission time, queuing time and processing delay of a frame (a small numeric sketch follows this list).
                   The time taken by a bit to travel from the source to destination is referred to as the propagation
               time. It can be calculated using the following formula.
                             Propagation time = distance/propagation speed
               As the propagation speed increases, propagation time decreases.
                   The transmission time measures the time between the transmission of first bit from the send-
               er’s end and the arrival of last bit of the message at the destination.
                                Transmission time = message size/bandwidth
The greater the message size, the longer the transmission time.
                    Whenever a message being transmitted arrives at some intermediate device, such as router, it
               is kept in a queue (if device is not free) maintained by the device. The device processes the queued
               messages one by one. The time a message spends in the queue of some intermediate or end device
before being processed is referred to as the queuing time. It depends on the traffic load in the network. The heavier the traffic load, the longer the queuing time.
              Bandwidth-Delay Product: It refers to the product of bandwidth and delay of a network which
                specifies the maximum number of bits that can be at any time on the link. For example, if a link
                has a bandwidth of 8 bps and delay is 5 s, then the maximum number of bits that can fill the link is
                40 bits.
              Jitter: It refers to a problem that occurs when the inter-arrival time between packets is not constant
               and the application using the data is time sensitive.
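   A minimal numeric sketch combining the relations above (the distance, propagation speed, message size, bandwidth and delay are assumed values used only for illustration):

distance = 12_000e3          # m, source-to-destination distance (illustrative)
prop_speed = 2.4e8           # m/s, propagation speed in the medium (illustrative)
message_size = 2.5e6 * 8     # bits (a 2.5 MB message)
bandwidth = 1e9              # bps
delay = 0.005                # s, one-way delay used for the bandwidth-delay product

propagation_time = distance / prop_speed          # 0.05 s
transmission_time = message_size / bandwidth      # 0.02 s
bdp = bandwidth * delay                           # bits that can be "in flight" on the link

print(f"Propagation time : {propagation_time:.3f} s")
print(f"Transmission time: {transmission_time:.3f} s")
print(f"Bandwidth-delay product: {bdp:,.0f} bits")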
  17. Define line coding. List some common characteristics of line coding schemes.
           Ans: Line Coding is defined as the process of converting digital data or sequence of bits to digital
        signals. Encoder is used at the sender’s end to create a digital signal from digital data and decoder is
        used at the receiver’s end to recreate digital data from digital signal (Figure 3.8).
Figure 3.8 Line Coding (the encoder at the sender converts digital data into a digital signal; the decoder at the receiver recreates the data)
             18. Can the bit rate be less than the pulse rate? Why or why not?
           Ans: Bit rate is always greater than or equal to the pulse rate because the relationship between pulse
rate and bit rate is defined by the following formula:
                                     Bit rate = pulse rate × log2L,
        where L denotes the number of data levels of the signal and log2L denotes the number of bits per level.
           If a pulse carries only 1 bit (that is, log2L=1), the pulse rate and the bit rate are the same. How-
        ever, if the pulse carries more than 1 bit, then the bit rate is greater than the pulse rate.
            19. Discuss various line coding schemes.
          Ans: There are various line coding schemes that can be classified into three basic categories namely,
        unipolar, polar and bipolar.
        Unipolar Scheme
        This scheme uses two voltage levels of a signal and both of these voltage levels are on one side of the
        time axis (above or below). In this scheme, bit rate and baud rate are equal. However, the encoded signal
        includes DC components and there is lack of synchronization in case of long series of 0s and 1s. The
only coding scheme that falls under this category is non-return-to-zero (NRZ).
   NRZ: In this scheme, bit 1 is used to define positive voltage while bit 0 is used to define zero voltage of a signal. The name NRZ comes from the fact that the signal does not return to zero during the middle of a bit but only between two bits (Figure 3.9). The unipolar NRZ scheme is not generally used for data communication.

Figure 3.9 Unipolar NRZ Scheme
Polar Scheme
        This scheme uses two voltage levels of a signal, positive and negative, that can be on both sides of the
        time axis. The positive voltage may represent 0 while negative voltage may represent 1 or vice versa.
        Four different schemes fall under this category, which are discussed as follows.
   NRZ: It is the most common type of polar coding scheme. In this scheme, positive voltage is used to represent one binary value and negative voltage is used to represent the other. There are two types of polar NRZ schemes, namely, NRZ-level (NRZ-L) and NRZ-invert (NRZ-I). In NRZ-L, the value of the bit is determined by the signal level, which remains constant during the bit duration (Figure 3.10). In NRZ-I, the value of the bit is determined by an inversion or lack of inversion of the signal level at the beginning of the bit: a change in the signal level means the bit is 1, whereas no change means the bit is 0 (Figure 3.11). Both NRZ-L and NRZ-I suffer from a synchronization problem in the case of a long sequence of 0s; NRZ-L also suffers in the case of a long sequence of 1s. In addition, both NRZ-L and NRZ-I have a DC component problem.
Figure 3.10 Polar NRZ-L Scheme          Figure 3.11 Polar NRZ-I Scheme

   Return-to-Zero (RZ): This scheme solves the synchronization problem of the NRZ scheme. It uses three values of voltage, namely, zero, positive and negative. Unlike NRZ, the signal changes during the middle of a bit but not between the bits. Once it changes during the bit, it remains there until the next bit starts. Thus, to encode a single bit, two signal changes are required (Figure 3.12). There is no DC component problem in the RZ coding scheme. However, it is complex, as it requires three levels of voltage.

Figure 3.12 RZ Scheme          Figure 3.13 Manchester Scheme
   Manchester Scheme: This is a biphase scheme that combines the ideas of the RZ and NRZ-L schemes. In this scheme, the level of voltage changes at the middle of the bit to provide synchronization (Figure 3.13). A low-to-high transition in the middle of a bit indicates 1 while a high-to-low transition indicates 0. This scheme overcomes the disadvantages of the NRZ-L scheme.
   Differential Manchester: This is also a biphase scheme. It combines the ideas of the RZ and NRZ-I schemes. It is similar to the Manchester scheme in the sense that the signal changes during the middle of a bit, but the difference is that the bit values are determined at the beginning of the bit: a transition occurs when the next bit is 0; otherwise, no transition occurs (Figure 3.14). This scheme overcomes all the disadvantages of the NRZ-I scheme.

Figure 3.14 Differential Manchester Scheme
Bipolar Scheme
This scheme is similar to the RZ encoding scheme and has three levels of voltage. The difference is that bit 0 is represented by the zero level while bit 1 is represented alternately by the positive and negative levels of voltage. In bipolar encoding, a stream of bits containing a long sequence of binary 0s creates a constant zero voltage. Since the 1s alternate between positive and negative voltages, bipolar encoding schemes do not have a DC component. This scheme is available in two forms, which are as follows:
   Alternate Mark Inversion (AMI): This is the most commonly used bipolar coding scheme. It is so called because it involves representation of binary 1s by alternate positive and negative levels of voltage. Here, bit 0 is represented using the zero level of voltage (Figure 3.15). This scheme is used for communication between devices placed at a large distance from each other. However, it is difficult to achieve synchronization in this scheme when a continuous stream of bit 0 is present in the data.

Figure 3.15 AMI Scheme
   Pseudoternary: This scheme is a modification of the AMI encoding scheme. In this scheme, binary 1 is represented using the zero level of voltage while binary 0s are represented by alternate positive and negative voltages (Figure 3.16).

Figure 3.16 Pseudoternary Scheme
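   To make the schemes above concrete, the following Python sketch maps a bit string to a sequence of voltage levels (+1, 0 and -1 standing for positive, zero and negative voltage). It covers NRZ-L, NRZ-I and AMI only, follows the conventions described above, and is a teaching sketch rather than a signalling implementation (the choice of 1 -> positive voltage in NRZ-L and the initial levels are assumptions).

# Levels are represented as +1, 0 and -1; one value per bit (per pulse).

def nrz_l(bits):                       # NRZ-L: the level itself encodes the bit (here 1 -> +1, 0 -> -1)
    return [+1 if b == "1" else -1 for b in bits]

def nrz_i(bits):                       # NRZ-I: invert the level for a 1, keep it for a 0
    level, out = +1, []
    for b in bits:
        if b == "1":
            level = -level
        out.append(level)
    return out

def ami(bits):                         # AMI: 0 -> zero level, 1 -> alternating +1 / -1
    last_one, out = -1, []
    for b in bits:
        if b == "0":
            out.append(0)
        else:
            last_one = -last_one
            out.append(last_one)
    return out

bits = "01011"
print("NRZ-L:", nrz_l(bits))   # [-1, +1, -1, +1, +1]
print("NRZ-I:", nrz_i(bits))   # [+1, -1, -1, +1, -1]
print("AMI  :", ami(bits))     # [0, +1, 0, -1, +1]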
  20. Explain the concept of block coding.
   Ans: Block coding is an alternative to line coding that is also used to convert digital data into digital signals. However, it is much better than line coding, as it ensures synchronization and has built-in error detecting capability, which results in better performance than line coding. In this scheme, a three-step process (Figure 3.17) is used to code the digital data.
           1. Division: The original sequence of bits is divided into blocks or groups of n bits each.
   2. Substitution: Each n-bit group is replaced with an m-bit group, where m > n.
           3. Line Coding: An appropriate line coding scheme is used to combine the m-bit groups to form a
                stream.
Figure 3.17 Block Coding (groups of n bits are substituted by groups of m bits and then line coded into a digital signal)
   Block coding is usually represented as nB/mB (n binary/m binary), such as 4B/5B and 5B/6B. For example, in 4B/5B coding, the original bit sequence is divided into 4-bit codes, each 4-bit code is replaced with a 5-bit block, and then NRZ-I line coding is used to convert the 5-bit groups into a digital signal.
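   The substitution step of nB/mB coding can be sketched as a table lookup. The 2B/3B mapping below is an invented toy example, not the standardized 4B/5B table; it only demonstrates the mechanics of dividing the bit stream into n-bit groups and substituting m-bit groups.

# Toy 2B/3B block coding: each 2-bit group is replaced by a 3-bit group.
# The mapping is hypothetical and chosen only so that no codeword is all zeros.
TABLE = {"00": "001", "01": "010", "10": "100", "11": "110"}

def block_encode(bits: str) -> str:
    assert len(bits) % 2 == 0, "pad the stream to a multiple of the group size"
    return "".join(TABLE[bits[i:i + 2]] for i in range(0, len(bits), 2))

encoded = block_encode("00101101")
print(encoded)   # 001100110010 -> would then be fed to a line coder such as NRZ-I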
             21. Explain pulse code modulation (PCM) and delta modulation (DM).
   Ans: Both PCM and DM are techniques used to convert an analog signal into digital data.
Pulse Code Modulation
PCM converts an analog signal into digital data through three processes: sampling, quantization and encoding (Figure 3.18).

Figure 3.18 PCM Encoder (analog signal → sampling → PAM signal → quantization → quantized signal → encoding → digital data)
   1. Sampling: In this process, a number of samples of the original analog signal are taken at
                regular intervals of time (called sampling period). The inverse of sampling period is referred to
                as sampling rate. According to Nyquist theorem, the sampling rate should be at least two times
                of the highest frequency contained in the signal to regenerate the original analog signal at the
                receiver’s end. For example, during the sampling of voice data, with frequency in the range of
                400–5,000 Hz, 10,000 samples per second are sufficient for the coding. The sampling process is
                also termed as pulse amplitude modulation (PAM) because it produces a PAM signal—a series
                of pulses having amplitude between the highest and the lowest amplitudes of the original analog
                signal. Sample and hold is the most common sampling method used today for creating flat-top
                samples.
             2. Quantization: The sampled PAM pulses may have non-integral values of amplitude which
                cannot be encoded directly. Thus, the sampled pulses need to be quantized and approximated
                 to integral values using analog-to-digital converter. Considering the original analog signal has
                 amplitudes between Vmaximum and Vminimum, the following steps are used for quantizing the signal.
                   i) The range of amplitudes of analog signal is partitioned into L levels or zones each of height h,
                      where
                                                h = (Vmaximum - Vminimum)/L.
The value of L is chosen depending on the range of amplitudes of the original analog signal as well as the extent of accuracy required in the recovered signal. A signal having more amplitude values requires a larger number of quantization levels.
                  ii) The quantized values of 0 to L-1 are assigned at the midpoint of each zone.
                 iii) The value of the sample amplitude is approximated to the quantized value.
                         Since the output values of the quantization process are only the approximated values, an
                      error known as quantization error may occur due to which it may not be possible to re-
                      cover the original signal exactly. The quantization error also affects the signal-to-noise ratio
                      (SNRdB) of the signal and the amount of effect depends on the value of L or the number of
                      bits per sample (nb) as shown in the following formula.
                                                SNRdB = 6.02 nb + 1.76 dB.
The effect of quantization error can be minimized by using a process called companding (compressing and expanding). This process uses a compressor before encoding and an expander after decoding. Compressing refers to decreasing the instantaneous voltage amplitude for larger values, while expanding refers to increasing the instantaneous voltage amplitude for smaller values. This helps to improve the SNRdB of the signal.
           3. Encoding: After quantization, encoding is done in which each sample is converted to m-bit codeword
               where m is equal to number of bits per sample (nb). The value of nb depends on the value of L as
               shown in the following formula.
                                                      m = nb = log2 L
               The relationship between bit rate and the number of bits per sample (nb) can be expressed as:
                                           Bit rate = nb × sampling rate
        At the receiver’s side, original signal is recovered using PCM decoder which converts the codeword
        into a staircase signal. The staircase signal is formed by changing the codeword into a pulse that main-
        tains the amplitude till the next pulse. Then, a low-pass filter is used to smoothen the staircase signal
        into an analog signal. Figure 3.19 depicts this process.
Figure 3.19 PCM Decoder (codewords → staircase signal → low-pass filter → analog signal)
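   The three PCM steps can be sketched numerically. The sampling rate, number of levels and test signal below are illustrative assumptions; the sketch uses NumPy and leaves out the low-pass filtering performed by a real decoder.

import numpy as np

fs = 8000                                   # sampling rate, samples/s (illustrative)
f = 1000                                    # analog test tone, Hz (illustrative)
L = 16                                      # quantization levels -> nb = log2(L) = 4 bits/sample
t = np.arange(0, 0.002, 1 / fs)             # 2 ms of signal

# 1. Sampling (PAM): take samples of the analog signal at regular intervals.
samples = np.sin(2 * np.pi * f * t)

# 2. Quantization: map each sample to one of L integer levels.
vmin, vmax = -1.0, 1.0
h = (vmax - vmin) / L                       # height of each zone
levels = np.clip(((samples - vmin) / h).astype(int), 0, L - 1)

# 3. Encoding: each level becomes an nb-bit codeword.
nb = int(np.log2(L))
codewords = [format(int(level), f"0{nb}b") for level in levels]
print(codewords[:8])
print("bit rate =", nb * fs, "bps")         # 4 x 8000 = 32000 bps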
        Delta Modulation
        This is an alternative to PCM technique with much reduced quantization error. In this technique, a
        modulator is used at the sender’s side that produces the bits from an analog signal and these bits are sent
        one after another; only one bit is sent per sample. The modulator generates a staircase signal against
        which analog signal is compared. At each sampling interval, the amplitude value of analog signal is
        compared with last amplitude of staircase signal to determine the bit in the digital data. If amplitude of
        analog signal is smaller, the next bit will be 0. However, if the amplitude of analog signal is greater,
        the next bit will be 1. Figure 3.20 shows the components of DM.
Figure 3.20 Delta Modulator (the analog signal is compared against a staircase signal to produce digital data)
   To reproduce the analog signal from the digital data, a demodulator is used at the receiver's end. The demodulator uses a staircase maker and a delay unit to generate the analog signal, which is then passed through a low-pass filter for smoothing. Figure 3.21 depicts the delta demodulation process.
Figure 3.21 Delta Demodulation (digital data → staircase maker with delay unit → low-pass filter → analog signal)
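   A minimal delta modulation sketch in Python (the step size delta, the test signal and the absence of a final low-pass filter are simplifying assumptions):

import math

def dm_modulate(samples, delta=0.1):
    """Compare each sample with the running staircase; output one bit per sample."""
    staircase, bits = 0.0, []
    for s in samples:
        if s > staircase:
            bits.append(1); staircase += delta   # signal above the staircase -> 1, step up
        else:
            bits.append(0); staircase -= delta   # signal below the staircase -> 0, step down
    return bits

def dm_demodulate(bits, delta=0.1):
    """Rebuild the staircase approximation from the bit stream."""
    staircase, out = 0.0, []
    for b in bits:
        staircase += delta if b == 1 else -delta
        out.append(staircase)
    return out

samples = [math.sin(2 * math.pi * 0.05 * n) for n in range(40)]
bits = dm_modulate(samples)
approx = dm_demodulate(bits)
print(bits[:16])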
            22. What are the differences between serial and parallel transmissions?
          Ans: The transmission of binary data across a link connecting two or more digital devices can be
        performed in either of two modes: serial and parallel (Figure 3.22). There are certain differences
        between these modes which are listed in Table 3.2.
Figure 3.22 Serial and Parallel Transmission
Direction of flow
                                                                  Gaps between
                                                                   data units
                                                  Figure 3.23 Asynchronous Transmission
              Asynchronous Transmission: In this transmission, a start bit (usually represented by 0) is added at the start of each byte to signal its arrival. Similarly, one or more stop bits (usually represented by 1) are added to indicate the end of each byte. Though the transmission is asynchronous at the byte level, some synchronization is still needed during the transmission of bits within a byte. To achieve this synchronization, the receiver starts a timer after finding the start bit and counts the number of bits until it finds the stop bit. There may be a gap between the transmission of two bytes, which is filled with a series of stop bits or an idle channel. Asynchronous transmission is slow as it adds control information such as start bits, stop bits and gaps between bytes. However, it is a cheap and effective mode of transmission and thus is suitable for communication between devices that do not demand high speed.
             Synchronous Transmission: In this transmission, timing source is used for synchronization, so
               that the receiver can receive the information in the order in which it is sent. Multiple bytes are
               combined together to form frames. Data is transmitted as a continuous sequence of 1s and 0s
               with no gap in between. However, if there is any gap, that gap is filled with a special sequence
               of 0s and 1s called idle (Figure 3.24). At the receiver's end, the receiver keeps counting the bits and separates them into byte groups for decoding. Synchronous transmission is faster than asynchronous transmission, as it does not use any extra bits such as start bits and stop bits. Thus, it is best suited for applications requiring high speed. However, this transmission cannot be used for real-time applications such as television broadcasting, as there can be uneven delays between the arrival of adjacent frames at the receiver's end, which in turn results in poor quality of video.
                                              Figure 3.24 Synchronous Transmission
         Figure 3.28 Constellation Diagram (the horizontal axis represents the in-phase carrier and the vertical axis the quadrature carrier; the peak amplitudes of the two components fix each point)
         Figure 3.29 Constellation Diagram for QPSK Signal (bit pairs 00, 01, 10 and 11 correspond to phases 45°, 135°, −135° and −45°, respectively)
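         The bit-to-phase mapping of Figure 3.29 can be written out directly. The following sketch (illustrative only; the dictionary, the function name and the sample bit stream are assumptions) converts pairs of bits of a QPSK signal into constellation points.

        import cmath, math

        # Bit pairs and their phases, in degrees, as in Figure 3.29
        QPSK_PHASE = {'00': 45, '01': 135, '10': -135, '11': -45}

        def qpsk_points(bits, peak_amplitude=1.0):
            # The in-phase component is A*cos(phase) and the quadrature component is A*sin(phase),
            # so each bit pair becomes one point on the constellation diagram.
            points = []
            for i in range(0, len(bits), 2):
                phase = math.radians(QPSK_PHASE[bits[i:i + 2]])
                points.append(cmath.rect(peak_amplitude, phase))
            return points

        for p in qpsk_points('00011011'):
            print(round(p.real, 3), round(p.imag, 3))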
        Amplitude Modulation
         In this modulation, the amplitude of a carrier wave is varied in accordance with the amplitude of the modulating signal. The frequency of the carrier remains the same; only the amplitude changes to follow variations in the modulating signal. Figure 3.31 depicts how the modulating signal is superimposed over the carrier signal, resulting in an amplitude-modulated signal.
              The bandwidth of the amplitude-modulated signal is twice that of the modulating signal. That is,
                                                              BAM = 2B,
         where B is the bandwidth of the modulating signal.
                     Figure 3.31 Amplitude Modulation (modulating signal, carrier and AM signal shown as amplitude versus time)
        Frequency Modulation
        In this modulation, the instantaneous frequency of carrier wave is caused to depart from the centre
        frequency by an amount proportional to the instantaneous value of the modulating signal. In simple
         words, FM is the method of impressing modulating signal onto a carrier signal wave by varying its
         instantaneous frequency rather than its amplitude (Figure 3.32).
             The total bandwidth that is needed for the frequency-modulated signal can be calculated from the formula given below.
                                                  BFM = 2 (1 + β) B,
                     Figure 3.32 Frequency Modulation (modulating signal, carrier and FM signal shown as amplitude versus time)
         where β is a factor whose value depends on the modulation technique, with a common value of 4, and B is the bandwidth of the modulating signal.
        Phase Modulation
        This is the encoding of information into a carrier wave by variation of its phase in accordance with an
        input signal. In this modulation technique, the phase of sine wave carrier is modified according to the
        amplitude of the message to be transmitted (Figure 3.33).
                     Figure 3.33 Phase Modulation (modulating signal, carrier and PM signal shown as amplitude versus time)
          The total bandwidth that is needed by a PM signal can be calculated from the bandwidth and maximum
        amplitude of the modulating signal as shown here.
                                                 BPM = 2 (1 + β) B,
        where β is a factor whose value is lower in PM than FM. For narrowband, its value is 1 and for wideband,
        its value is 3. B is the bandwidth of modulating signal.
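         The three bandwidth formulas can be compared with a short calculation; in the sketch below (the function names and the 5-kHz example are assumptions used only for illustration), β takes the values quoted above.

        def am_bandwidth(B):
            return 2 * B                     # B_AM = 2B

        def fm_bandwidth(B, beta=4):
            return 2 * (1 + beta) * B        # B_FM = 2(1 + beta)B, beta commonly 4

        def pm_bandwidth(B, beta=1):
            return 2 * (1 + beta) * B        # B_PM = 2(1 + beta)B, beta = 1 (narrowband) or 3 (wideband)

        B = 5  # bandwidth of a voice-grade modulating signal, in kHz
        print(am_bandwidth(B))       # 10 kHz
        print(fm_bandwidth(B, 4))    # 50 kHz
        print(pm_bandwidth(B, 1))    # 20 kHz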
             29. What does a decibel (dB) measure? Give an example.
           Ans: Decibel (dB) is a measure of the relative strength of two signals, or of one signal at two different points. It is used by engineers to determine whether a signal has lost or gained strength.
            A positive dB value indicates a gain in strength, while a negative value indicates that the signal is attenuated.
                                                dB = 10 log10(P2/P1),
        where P1 and P2 are the powers of signal at two different points or the powers of two different signals.
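         As a quick check of the formula, the short sketch below (the function name and the sample powers are assumptions chosen for illustration) computes the gain or loss between two measurement points.

        import math

        def to_db(p1, p2):
            # dB = 10 log10(P2/P1); a positive result means gain, a negative result means attenuation
            return 10 * math.log10(p2 / p1)

        print(to_db(10e-3, 5e-3))    # about -3 dB: the signal has lost half its power
        print(to_db(5e-3, 10e-3))    # about +3 dB: the signal power has doubled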
             30. For the following frequencies, calculate the corresponding periods. Write the result in seconds (s), milliseconds (ms) and microseconds (µs): 24 Hz; 8 MHz.
           Ans: As frequency = 1/period, for a frequency of 24 Hz, period = 1/24 s ≈ 0.041 s.
                                   As 1 s = 10^3 ms ⇒ 0.041 s ≈ 0.041 × 10^3 = 41 ms
                                   As 1 s = 10^6 µs ⇒ 0.041 s ≈ 0.041 × 10^6 = 41,000 µs
         For a frequency of 8 MHz, period = 1/(8 × 10^6) s = 1.25 × 10^−7 s = 1.25 × 10^−4 ms = 0.125 µs.
             34. An image has the size of 1024 × 786 pixels with 256 colours. Assume the image is uncompressed. How long does it take to transmit it over a 56-kbps modem channel?
           Ans: To represent 256 colours, we need log2 256 = 8 bits per pixel. Therefore, the total number of bits to be transmitted is 1024 × 786 × 8 = 6,438,912 bits. Thus, transmission time = 6,438,912/(56 × 1000) ≈ 115 seconds.
             35. An analog signal carries four bits in each signal element. If 1000 signal elements are sent per second, find the baud rate and bit rate.
            Ans: Baud rate = the number of signal elements per second = 1000 baud. As baud rate = bit rate × (1/number of data elements carried in one signal element), bit rate = 1000 × 4 = 4000 bps.
            36. The power of a signal is 10 mW and the power of the noise is 1 µW. What is the value of SNR in dB?
           Ans: SNR = signal power/noise power. Thus, SNR = (10 × 10^−3)/(1 × 10^−6) = 10,000.
         SNRdB = 10 log10 SNR = 10 log10 10,000 = 10 × 4 = 40 dB.
             37. Calculate the highest bit rate for a telephone channel, given the bandwidth of the line to be 3000 Hz and the SNR to be 35 dB.
           Ans: As SNRdB = 10 log10 SNR, SNR = 10^(SNRdB/10) = 10^3.5 ≈ 3162. According to Shannon's capacity formula, the highest bit rate = bandwidth × log2 (1 + SNR). Therefore, the highest bit rate = 3000 × log2 (1 + 3162) = 3000 × log2 3163 ≈ 3000 × 11.6 = 34,800 bps.
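         The two steps of this solution, converting SNRdB back to SNR and then applying Shannon's formula, can be combined in a small sketch (illustrative only; the function name is an assumption).

        import math

        def shannon_capacity(bandwidth_hz, snr_db):
            snr = 10 ** (snr_db / 10)                    # undo SNRdB = 10 log10(SNR)
            return bandwidth_hz * math.log2(1 + snr)     # C = B log2(1 + SNR)

        print(round(shannon_capacity(3000, 35)))         # about 34,881 bps, in line with the answer above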
              38. What is the Nyquist minimum sampling rate for the following?
             (a) A complex low-pass signal with a bandwidth of 300 kHz.
           Ans: According to the Nyquist theorem, the sampling rate is two times the highest frequency in the signal. As the frequency of a low-pass signal starts from zero, the highest frequency in the
        signal = bandwidth; that is, 300 kHz. Thus, sampling rate = 2 × 300 kHz = 600000 samples per
        second.
             (b) A complex bandpass signal with a bandwidth of 300 kHz.
           Ans: The sampling rate cannot be found in this case because we do not know the maximum frequency
        of the signal.
              (c) A complex bandpass signal with a bandwidth of 300 kHz and the lowest frequency of 100 kHz.
            Ans: The highest frequency = 100 + 300 = 400 kHz. Thus, sampling rate = 2 × 400 kHz = 800,000 samples per second.
             39. What is the maximum bit rate for a noiseless channel with a bandwidth of 6000 Hz
        transmitting a signal with four signal levels?
            Ans: Bit rate = 2 × channel bandwidth × log2 (number of signal levels). Therefore, bit rate = 2 × 6000
         × log2 4 = 24000 bps.
               40. What is the total delay (latency) for a frame size of 10 million bits that is being sent over a link with 15 routers, each having a queuing time of 2 µs and a processing time of 1 µs? The length of the link is 3000 km. The speed of light inside the link is 2 × 10^8 m/s. The link has a bandwidth of 6 Mbps.
            Ans: Here, propagation time = distance/propagation speed
                                         ⇒ (3000 × 1000)/(2 × 10^8) = 1.5 × 10^−2 s.
         Transmission time = message size/bandwidth
                                          ⇒ (10 × 10^6)/(6 × 10^6) ≈ 1.67 s.
         As there are 15 routers, total queuing time = 15 × 2 × 10^−6 = 30 × 10^−6 s.
         Processing time = 15 × 1 × 10^−6 = 15 × 10^−6 s. Now, latency = propagation time + transmission time + queuing time + processing time
                               ⇒ 1.5 × 10^−2 + 1.67 + 30 × 10^−6 + 15 × 10^−6 ≈ 1.68 s.
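         The four delay components of this question can also be added up programmatically; a minimal sketch follows (the function and variable names are assumptions, the figures are those of the question).

        def latency(bits, bandwidth_bps, distance_m, speed_mps, routers, queuing_s, processing_s):
            propagation = distance_m / speed_mps      # time for a bit to cross the link
            transmission = bits / bandwidth_bps       # time to push all the bits onto the link
            queuing = routers * queuing_s             # total queuing delay at the routers
            processing = routers * processing_s       # total processing delay at the routers
            return propagation + transmission + queuing + processing

        total = latency(bits=10e6, bandwidth_bps=6e6, distance_m=3000e3,
                        speed_mps=2e8, routers=15, queuing_s=2e-6, processing_s=1e-6)
        print(round(total, 4))   # about 1.6817 s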
            41. A data stream is made of 10 alternating 0s and 1s. Encode this stream using the following
        encoding schemes.
             (a) Unipolar
            Ans:
               (Waveform: the stream 0101010101 encoded using the unipolar scheme, plotted as amplitude versus time)
              (b) NRZ-L
             Ans:
               (Waveform: the stream 0101010101 encoded using the NRZ-L scheme, plotted as amplitude versus time)
             (c) NRZ-I
            Ans:
               (Waveform: the stream 0101010101 encoded using the NRZ-I scheme, plotted as amplitude versus time)
             (d) RZ
            Ans:
               (Waveform: the stream 0101010101 encoded using the RZ scheme, plotted as amplitude versus time)
             (e) Manchester
            Ans:
               (Waveform: the stream 0101010101 encoded using the Manchester scheme, plotted as amplitude versus time)
              (f) Differential Manchester
             Ans:
               (Waveform: the stream 0101010101 encoded using the differential Manchester scheme, plotted as amplitude versus time)
              (g) AMI
             Ans:
               (Waveform: the stream 0101010101 encoded using the AMI scheme, plotted as amplitude versus time)
              (h) Pseudoternary
             Ans:
               (Waveform: the stream 0101010101 encoded using the pseudoternary scheme, plotted as amplitude versus time)
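         For schemes such as NRZ-I and AMI, whose output depends on the previous state, a small sketch may make the rules concrete (the function names and the signal-level values are assumptions used only for illustration).

        def nrz_i(bits):
            # NRZ-I: invert the current level on a 1, keep it unchanged on a 0
            level, out = -1, []
            for b in bits:
                if b == 1:
                    level = -level
                out.append(level)
            return out

        def ami(bits):
            # AMI: 0 is sent as zero voltage; successive 1s alternate between + and -
            last_one, out = -1, []
            for b in bits:
                if b == 0:
                    out.append(0)
                else:
                    last_one = -last_one
                    out.append(last_one)
            return out

        stream = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
        print(nrz_i(stream))   # [-1, 1, 1, -1, -1, 1, 1, -1, -1, 1]
        print(ami(stream))     # [0, 1, 0, -1, 0, 1, 0, -1, 0, 1]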
           42. A signal is quantized using 10-bit PCM. Find the SNR in dB.
          Ans: SNRdB = 6.02 nb + 1.76. Here, nb is the number of bits used for quantization = 10.
        Thus, SNRdB = 6.02 × 10 + 1.76 = 61.96 dB.
             43. A system is designed to sample analog signals, convert them to digital form with a 4-bit converter, and transmit them. What bit rate is required if the analog signal consists of frequencies between 400 Hz and 3400 Hz?
           Ans: The highest frequency in the signal is 3400 Hz, so according to the Nyquist theorem (see question 38), the minimum sampling rate = 2 × 3400 = 6800 samples per second. A 4-bit converter produces nb = 4 bits per sample, so bit rate = sampling rate × nb = 6800 × 4 = 27,200 bps.
            44. Given the bit pattern 01100, encode this data using ASK, BFSK, and BPSK.
           Ans: Bit pattern 01100 can be encoded using ASK, BFSK and BPSK as shown in Figure 3.35(a),
        3.35 (b) and 3.35 (c), respectively.
         (Waveforms for the bit pattern 01100: (a) amplitude shift keying (ASK), (b) binary frequency shift keying (BFSK), (c) binary phase shift keying (BPSK))
Figure 3.35 Encoding of Bit Pattern Using ASK, BFSK and BPSK
             45. Find the maximum bit rate for an FSK signal if the bandwidth of the medium is 12,000 Hz and the difference between the two carriers must be at least 2000 Hz. Transmission is in full-duplex mode.
            Ans: Since the transmission is in full-duplex mode, only half of the total bandwidth, that is, 6000 Hz, can be allocated to each direction. In FSK, bandwidth = baud rate + difference between the carrier frequencies.
                                        ⇒ 6000 = baud rate + 2000
                                             ⇒ Baud rate = 4000 baud.
             Now, as signal rate = bit rate × (1/r) and in binary FSK one data element is carried by one signal element (r = 1),
                                               ⇒ Bit rate = 4000 bps.
              46. Find the bandwidth for the following situations if we need to modulate a 5-kHz voice.
             (a) AM
            Ans: BAM = 2 × B = 2 × 5 = 10 kHz.
              (b) FM (set β = 5)
             Ans: BFM = 2 × (1 + β) × B = 2 × (1 + 5) × 5 = 60 kHz.
              (c) PM (set β = 1)
             Ans: BPM = 2 × (1 + β) × B = 2 × (1 + 1) × 5 = 20 kHz.
         12. Amplitude modulation is a technique used for
             (a) Analog-to-digital conversion       (b) Digital-to-analog conversion
             (c) Analog-to-analog conversion        (d) Digital-to-digital conversion
         13. Which of the following factors of a carrier frequency is varied in QAM?
             (a) Frequency           (b) Amplitude
             (c) Phase               (d) Both (b) and (c)
         14. How many carrier frequencies are used in BFSK?
             (a) 2                   (b) 1
             (c) 0                   (d) None of these
         15. How many dots are present in the constellation diagram of 8-QAM?
             (a) 4                   (b) 8
             (c) 16                  (d) None of these
        Answers
          1. (d)   2. (a)   3. (a)   4. (c)   5. (c)   6. (b)   7. (c)   8. (b)   9. (b)   10. (c)
        11. (c)         12. (c)   13. (d) 14. (a)   15. (b)
              1. What are transmission media? What are the different categories of transmission media?
           Ans: Transmission media refer to the media through which data can be carried from a source
        to a destination. Data is transmitted from one device to another through electromagnetic signals.
        Transmission media are located under and controlled by the physical layer as shown in Figure 4.1.
                     Figure 4.1 Transmission Media and the Physical Layer (the transmission medium, cable or air, lies below the physical layers of the sender and the receiver)
            Unguided transmission media facilitate data transmission without the use of a physical conduit.
        The electromagnetic signals are transmitted through earth’s atmosphere (air, water or vacuum) at a much
        faster rate covering a wide area. The electromagnetic waves are not guided or bound to a fixed channel
        to follow. There are basically four types of unguided transmission media including radio waves, micro-
        waves, satellite transmission and infrared waves.
              2. Differentiate guided and unguided transmission media.
           Ans. In order to transmit a message or data from a source to a destination, we need a transmission
        medium. Transmission medium can be broadly classified into two categories, which are guided and
        unguided media. The differences between these two transmission media are listed in Table 4.1.
        Twisted-pair Cable
        It is one of the most common and least expensive transmission media. A twisted-pair cable consists of
        two insulated copper conductors that are twisted together forming a spiral pattern. A number of such
        pairs are bundled together into a cable by wrapping them in a protective shield. One of the wires in each
        twisted pair is used for receiving data signal and another for transmitting data signal. Twisted pairs are
         used in short-distance communication (less than 100 metres). The biggest network in the world, the
         telephone network, originally used only twisted-pair cabling and still does for most local connections.
         A twisted-pair cable has the capability of passing a wide range of frequencies. However, with the in-
         crease in frequency, attenuation also increases sharply. As a result, the performance of a twisted-pair
         cable decreases with the increase in frequency. A twisted-pair cable comes in two forms: unshielded and
         shielded with a metal sheath or braid around it. Accordingly, they are known as unshielded twisted-pair
         (UTP) and shielded twisted-pair (STP) cables.
           UTP Cable: This cable has four pairs of wires inside the jacket (Figure 4.3). Each pair is twisted
              with a different number of twists per inch to help eliminate interference from adjacent pairs and
              other electrical devices. The tighter the twisting is, the higher will be the supported transmission
              rate and greater will be the cost per foot. Each twisted pair consists of two metal conductors (usu-
              ally copper) that are insulated separately with their own coloured plastic insulation. UTP cables
              are well suited for both data and voice transmissions; hence, they are commonly used in telephone
              systems. They are also widely used in DSL lines, 10Base-T and 100Base-T local area networks.
                     Figure 4.3 UTP Cable (jacket, insulator and copper wires)
                     (Figure: STP cable, showing the jacket, shield, insulator and copper wires)
        Fibre-optic Cable
         Fibre-optic cable or optical fibre consists of thin glass fibres that can carry information in the form of visible light. The typical optical fibre consists of a very narrow strand of glass or plastic called the core. Around the core is a concentric layer of less dense glass or plastic called the cladding. The core diameter is in the range of 8–50 microns (1 micron = 10^−6 metres) while the cladding generally has a diameter of 125 microns. The cladding is covered by a protective coating of plastic, known as the jacket (see Figure 4.6).
                     Figure 4.6 Optical Fibre (core, cladding and jacket; side view and end view)
            Optical fibres transmit a beam of light by means of total internal reflection. When a light beam from a source enters the core, the core refracts the light and guides the light along its path. The cladding reflects the light back into the core and prevents it from escaping through the medium (see Figure 4.7). These light pulses, which can be carried over long distances via optical fibre cable at a very high speed, carry all the information.
                     Figure 4.7 Light Propagation Through an Optical Fibre (light rays reflected within the core by the cladding)
           Optical fibre has the capability to carry information at greater speeds, higher bandwidth and data rate.
         A single fibre-optic cable can pack hundreds of fibres, where each fibre has a capacity equivalent to that of
        thousands of twisted-pair wires. This capacity broadens communication possibilities to include services
        such as video conferencing and interactive services. In addition, fibre optic cables offer lower attenuation
        and superior performance and require fewer repeaters as compared to coaxial and twisted-pair cables.
        The major applications of the fibre-optic cables are cable TV, military applications, long-haul trunks,
        subscriber loops, local area networks, metropolitan trunks and rural trunks.
              5. Why is twisting of wires necessary in twisted-pair cables?
          Ans: Twisting is done in twisted-pair cables because it tends to minimize the interference (noise)
        between the adjacent pair of wires in cable thereby reducing the crosstalk. In case the two wires are
        parallel, they may not get affected equally by the electromagnetic interferences (noise and crosstalk)
        from nearby sources due to their different locations relative to the source. As a result, the receiver would
         receive some unwanted signals. On the other hand, if the wires are twisted, both wires are likely to be affected equally by the external interferences, thereby maintaining a balance at the receiver. To understand this, suppose in one twist, one of the twisted wires is closer to the noise source and the other is farther; then in the next twist, the opposite will be true. As a result, the unwanted signals of both the wires cancel out each other and the receiver does not receive any unwanted signal. Thus, crosstalk is reduced.
              6. What are the different categories and connectors of UTP cables?
           Ans: As per the standards developed by the Electronic Industries Association (EIA), UTP cables
        have been classified into seven categories, from 1 to 7. Each category is based on the quality of the cable
        and the higher number denotes higher quality. Table 4.2 lists the categories of UTP cables along with
         their specifications and data rates.
              To connect UTP cables to network devices, UTP connectors are required. The most common connector for the UTP cables is RJ45 (RJ stands for registered jack), as shown in Figure 4.8. Being a keyed connector, the RJ45 can be inserted in only one way.
        Table 4.2       Categories of UTP Cables
          Category           Specification                                                                          Data rate
          (CAT)                                                                                                     (in Mbps)
          1                  UTP cables used in telephones                                                              <0.1
          2                  UTP cables used in T-lines                                                                 2
          3                  Improved CAT2 used in LANs                                                                 10
          4                  Improved CAT3 used in token ring network                                                   20
          5                  Cable wire is normally 24AWG with a jacket and outside sheath; used in LANs                100
          5E                 Extension to category 5 that reduces crosstalk and interference; used in LANs.             125
          6                  A new category with matched components coming from the same manufacturer.                 200
                             This cable must be tested at a data rate of 200 Mbps. This is used in LANs.
          7                  This is the shielded screen twisted-pair cable (SSTP). Shielding increases data rate     600
                             and reduces crosstalk effect. This cable is used in LANs.
         To connect coaxial cables to devices, coaxial connectors are needed; the most common ones are the BNC connector, BNC terminator and BNC T connector as shown in Figure 4.9. The Bayonet Neill-Concelman (BNC) connector is the most commonly used connector that connects the coaxial cable to a device such as an amplifier or television set. The BNC terminator is used at the end of the cable to prevent the reflection of the signal and the BNC T connector is often used in Ethernet networks for branching out connections to other devices.
        Multimode Propagation
        In this mode, many beams from a light source traverse the fibre along multiple paths and at multiple angles
        as shown in Figure 4.10(a). Depending upon the structure of core inside the cable, multimode can be
         implemented in two forms: step-index and graded-index. In a multimode step-index fibre, the core's density is constant from the centre to the edges. A light beam moves through the core in a straight path until it meets the interface of the core and the cladding. As the interface has a lower density than the core, there is a sudden change in the angle of the beam's motion, which adds to the distortion of the signal as it moves on. The multimode graded-index fibre reduces such distortion of the signal through the cable. As the density is highest at the centre of the core, the refractive index at the centre is high, which causes the light beams at the centre to move slower than the rays near the cladding. The light beams curve in a helical manner [see Figure 4.10(b)], thus reducing the distance travelled as compared to a zigzag movement. The shorter path and higher speed allow the light to arrive at the destination at almost the same time as light travelling in a straight line.
        Single-mode Propagation
         This mode employs a step-index fibre of relatively small diameter and lower density than that of a multimode fibre, along with a highly focused light source. Because of the focused source, the beams spread out over only a small range of angles and propagate almost horizontally. Since all beams propagate through the fibre along essentially a single path, distortion does not occur. Moreover, all beams reaching the destination together
        can be recombined to form the signal. The single-mode propagation is well suited for long-distance
        applications such as cable television and telephones.
                     Figure 4.10 Propagation Modes: (a) Multimode Step-index, (b) Multimode Graded-index and (c) Single-mode (each showing light travelling from source to destination)
               10. What are the advantages and disadvantages of fibre-optic cables?
             Ans: Fibre-optic cables are widely used in many domains such as telephone networks and cable television networks. Some advantages of fibre-optic cables are as follows:
            Since transmission is light-based rather than electrical, it is immune to noise interference.
            Transmission distance is greater than with other guided media because of lower signal attenuation (degradation in quality over distance).
            It is extremely hard to tap into, making it desirable from the security viewpoint.
            Fibre-optic cables are smaller and lighter than copper wire and are free from corrosion as well.
            Fibre optics offers, by far, the greatest bandwidth of any transmission system.
            Transmission through fibre-optic cable requires a smaller number of repeaters for covering large transmission distances.
         The disadvantages of fibre-optic cables are as follows:
            The installation and maintenance of fibre-optic cables are quite expensive.
            The propagation of light is unidirectional and often requires precise alignment.
            Extending fibre-optic cables by joining them together is a tough task.
            They are more fragile when compared to copper wires.
             11. What are the two kinds of light sources used in fibre-optic cables?
          Ans: In fibre-optic cables, the light sources generate a pulse of light that is carried through the fibre
        medium. The two kinds of light sources used with the fibre-optic cables include light-emitting diodes
        (LEDs) and semiconductor laser diodes. These light sources are used to perform signalling and can be
        tuned in wavelengths by inserting Mach–Zehnder or Fabry–Pérot interferometers between the source
        and the fibre media. Table 4.5 lists the comparison between these two light sources.
         (Figure: the electromagnetic spectrum, showing the frequency ranges used by twisted pair, coax, maritime radio, AM and FM radio, TV, terrestrial and satellite microwave, fibre optics and visible light)
            The portion of the electromagnetic spectrum that can be used for transmitting information is divided into eight different ranges. These ranges are regulated by government authorities and are known as bands. Some of the properties of the bands are listed in Table 4.6.
                                             Figure 4.15      Line-of-Sight Propagation
        Radio Waves
        The electromagnetic waves with frequency in the range of 3 kHz to 1 GHz are generally known as radio
        waves. These waves are omnidirectional, that is they are propagated in all directions when transmitted by
        an antenna. Thus, the antennas that send and receive the signals need not be aligned. However, the radio
        waves transmitted by two antennas using the same band or frequency may interfere with each other.
           Radio waves present different characteristics at different frequencies. At low (VLF, LF) and medium
        (MF) frequencies, they follow the curvature of earth and can penetrate walls easily. Thus, a device such
        as a portable radio inside a building can receive the signal. At high
        frequencies (HF and VHF bands), as the earth absorbs the radio
        waves, they are propagated in sky mode. The higher frequency
        radio waves can be transmitted up to greater distances and thus
         are best suited for long-distance broadcasting. However, at all frequencies, radio waves are susceptible to interference from electrical equipment.
            An omnidirectional antenna is used to transmit radio waves in all directions (see Figure 4.16). Due to their omnidirectional characteristics, radio waves are useful for multicast (one sender, many receivers) communication. Examples of multicasting are cordless phones, AM and FM radios, paging and maritime radio.
                     Figure 4.16 Omnidirectional Antenna
        Microwaves
        The electromagnetic waves with frequency in the range of 1–300 GHz are known as microwaves. Un-
        like radio waves, microwaves are unidirectional. The advantage of the unidirectional property is that
        multiple transmitters can transmit waves to multiple receivers without any interference. Since micro-
        waves are transmitted using line-of-sight propagation method, the towers with mounted antennas used
        for sending and receiving the signal must be in direct sight of each other (Figure 4.17). In case, the
        antenna towers are located far away from each other, the towers should be quite tall so that the signals
        do not get block off due to curvature of earth as well as other obstacles. Moreover, repeaters should be
        often used to amplify the signal strength. Microwaves at lower frequencies cannot penetrate buildings
        and also, during propagation, refraction or delays can occur due to divergence of beams in space. These
                     Figure 4.17 Microwave Transmission (transmitter and receiver towers with a repeater in between)
        delayed waves can come out of phase with the direct wave leading to cancellation of the signal. This
        phenomenon is known as the multipath fading effect.
            Microwaves require unidirectional antennas that transmit the signal in only one direction. Two such antennas are the dish antenna and the horn antenna (see Figure 4.18). A dish antenna works based on the geometry of a parabola. All the lines parallel to the line of sight that hit the parabola are reflected by the parabolic curve at angles such that they converge at a common point called the focus. The dish parabola thus catches many waves and directs them onto the focus, so the receiver receives more of the signal. In a horn antenna, the outgoing transmission is sent up through a stem and, as it hits the curved head, the transmission is deflected outward as a series of parallel beams. Received transmissions are collected by the horn, in a manner similar to the parabolic dish, and are deflected back down into the stem.
           Since microwaves are unidirectional, they are best suited for unicast communication such as in
        cellular networks, wireless local area networks and satellite networks.
                     Figure 4.18 Unidirectional Antennas: Dish Antenna (focus) and Horn Antenna (waveguide)
        Infrared Waves
        The electromagnetic waves with frequency in the range of 300 GHz to 400 THz are known as infrared
        waves. These waves are widely used for indoor wireless LANs and for short-range communication; for
        example, for connecting a PC with a wireless peripheral device, in remote controls used with stereos,
        VCRs, TVs, etc. (Figure 4.19). Infrared waves at high frequencies are propagated using line-of-sight
         method and cannot penetrate solid objects. Therefore, a short-range infrared system in a room will not suffer interference from a similar system in an adjacent room. Furthermore, infrared waves cannot be used outside a building because the infrared rays coming from the sun may interfere with them and distort the signal.
             The use of infrared waves has been sponsored by an association known as the Infrared Data Association (IrDA). This association has also established standards for the usage of infrared signals for communication between devices such as keyboards, printers, PCs and mice. For example, some wireless keyboards are equipped with an infrared port that enables the keyboard to communicate with the PC. Since infrared signals transmit through line-of-sight mode, the infrared port must be pointed towards the PC for communication.
        Communication Satellites
        A communication satellite can be referred to as a microwave relay station. A satellite links two or more
        ground-based (earth) stations that transmit/receive microwaves. Once a satellite receives any signal on a
        frequency (uplink), it repeats or amplifies that signal and sends it back to earth on a separate frequency
        (downlink). The area shadowed by the satellite (see Figure 4.20) in which the information or data can
        be transmitted and received is called the footprint.
            Satellites are generally placed in a geostationary orbit directly over the equator; a satellite in this orbit rotates in synchronization with the earth and hence looks stationary from any point on the earth. The geostationary orbit is approximately 36,000 km above the earth's surface, and satellites placed in this orbit are known as geostationary satellites.
           Satellite transmission is also a kind of line-of-sight transmission and the best frequency for it is in
        the range of 1–10 GHz. The major applications for satellites are long-distance telephone transmission,
        weather forecasting, global positioning, television, and many more.
                                             Figure 4.20      Satellite Transmission
        Answers
        1. (c)     2. (d)   3. (b)   4. (c) 5. (b)   6. (a)     7. (c)     8. (d)      9. (c)   10. (d)
         (Figure: multiplexing, in which a MUX combines N inputs onto one link of N channels and a DEMUX separates them into N outputs)
            Multiplexing can be used in situations where the signals to be transmitted through the transmission medium have lower bandwidth than that of the medium. This is because in such situations, it is possible to combine several low-bandwidth signals and transmit them simultaneously through a transmission medium of larger bandwidth.
              3. Why do we need multiplexing in a communication channel?
           Ans: Multiplexing is needed in a communication channel because of the following reasons:
           To send several signals simultaneously over a single communication channel.
           To reduce the cost of transmission.
           To effectively utilize the available bandwidth of the communication channel.
        Frequency-Division Multiplexing
        FDM is used when the bandwidth of the transmission medium is greater than the total bandwidth require-
        ment of the signals to be transmitted. It is often used for baseband analog signals. At the sender’s end, the
        signals generated by each sending device are of similar frequency range. Within the multiplexer, these
        similar signals modulate carrier waves of different frequencies and then the modulated signals are merged
        into a single composite signal, which is sent over the transmission medium. The carrier frequencies are
        kept well separated by assigning a different range of bandwidth (channel) that is sufficient to hold the
        modulated signal. There may also be some unused bandwidth between successive channels called guard
        bands, in order to avoid interference of signals between those channels.
           At the receiver’s end, the multiplexed signal is applied to a series of bandpass filters which breaks it
        into component signals. These component signals are then passed to a demodulator that separates the
        signals from their carriers and distributes them to different output lines (Figure 5.2).
                     Figure 5.2 Frequency-division Multiplexing
            Though FDM is an analog multiplexing technique, it can also be used to multiplex the digital signals.
        However, before multiplexing, the digital signals must be converted to analog signals. Some common
        applications of FDM include radio broadcasting and TV networks.
        Wavelength-Division Multiplexing
        WDM is an analog multiplexing technique designed to utilize the high data rate capability of fibre-optic
        cables. A fibre-optic cable has a much higher data rate than coaxial and twisted-pair cables and using
         it as a single link wastes a lot of precious bandwidth. Using WDM, we can merge many signals into a single one and hence utilize the available bandwidth efficiently. Conceptually, FDM and WDM are the same; that is, both combine several signals of different frequencies into one. The major difference is that WDM involves optical signals carried over fibre-optic cables, and the frequencies involved are very high.
            In WDM, the multiplexing and demultiplexing are done with the help of a prism (Figure 5.3), which
        bends the light beams by different amounts depending on their angle of incidence and wavelength. The
        prism used at the sender’s end combines the multiple light beams with narrow frequency bands from
        different sources to form a single wider band of light which is then passed through fibre-optic cable. At
        the receiver’s end, the prism splits the composite signal into individual signals. An application of WDM
        is the SONET network.
                                        Figure 5.3    Wavelength-division Multiplexing
        Time-Division Multiplexing
        TDM is a digital multiplexing technique that allows the high bandwidth of a link to be shared amongst
        several signals. Unlike FDM and WDM, in which signals operate at the same time but with different
        frequencies, in TDM, signals operate at the same frequency but at different times. In other words, the
        link is time-shared instead of sharing parts of bandwidth among several signals.
           Figure 5.4 gives a conceptual view of TDM. At the sender’s end, the time-division multiplexer allocates
        each input signal a period of time or time slot. Each sending device is assigned the transmission path for
        a predefined time slot. Three sending signals, Signals 1, 2 and 3, occupy the transmission sequentially.
        As shown in the figure, time slots A, B, P, Q, X and Y follow one after the other to carry signals from the
        three sources, which upon reaching the demultiplexer, are sent to the intended receiver.
            Though TDM is a digital multiplexing technique, it can also multiplex analog signals. However, before multiplexing, the analog signals must be converted to digital signals.
              5. Explain different schemes of TDM.
            Ans: TDM can be divided into two different schemes, namely, synchronous TDM and statistical TDM.
                                                  Figure 5.4    Time-division Multiplexing
        Synchronous TDM
        In this technique, data flow of each input source is divided into units where a unit can be a bit, byte or
        a combination of bytes. Each input unit is allotted one input time slot. A cycle of input units from each
        input source is grouped into a frame. Each frame is divided into time slots and one slot is dedicated for
        a unit from each input source. That is, for n connections, we have n time slots in a frame and every slot
         has its own respective sender. The duration of an input unit and the duration of each frame are the same unless some other information is carried by the frame. However, the duration of each slot is n times shorter,
        where n is the number of input lines. For example, if duration of input unit is t seconds, then the dura-
        tion of each slot will be t/n seconds for n input lines. The data transmission rate for the output link
        must be n times the data transmission rate of an input line to ensure the transmission of data. In addition,
        one or more synchronization bits are to be included in the beginning of each frame to ensure the data
        synchronization between the multiplexer and the demultiplexer.
           Figure 5.5 shows a conceptual view of synchronous TDM in which data from the three input sources
        has been divided into units (P1, P2, P3), (Q1, Q2) and (R1, R2, R3). As the total number of input
        lines is three, each frame also has three time slots. Now, if the duration of an input unit is t seconds, then
        after every t seconds, a unit is collected from each source and kept into the corresponding time slot in the
        frame. For example, Frame 1 contains (P1, Q1, R1). In our example, the data rate for the transmission
        link must be three times the connection rate to ensure the flow of data. In addition, the duration of each
        frame is three time slots whereas the duration of each time slot is t/3.
                     Figure 5.5 Synchronous Time-division Multiplexing (Frame 1 carries P1, Q1 and R1; Frame 2 carries P2, Q2 and R2; Frame 3 carries P3 and R3 with one slot left empty)
            A major drawback of synchronous TDM is that it is not highly efficient. Since each slot in the frame
        is pre-assigned to a fixed source, the empty slots are transmitted in case one or more sources do not have
        any data to send. As a result, the capacity of a channel is wasted. For example, in Figure 5.5, an empty
        slot is transmitted in Frame 3 because the corresponding input line has no data to send during that time
        period.
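            The round-robin framing described above can be sketched in a few lines of code (illustrative only; the function name, the marker used for an empty slot and the sample data are assumptions).

        def synchronous_tdm(sources, empty='-'):
            # Each frame takes one unit from every input source in turn; a source with
            # no data to send still occupies its slot, so the slot goes out empty.
            frames = []
            length = max(len(s) for s in sources)
            for i in range(length):
                frames.append([s[i] if i < len(s) else empty for s in sources])
            return frames

        # Three input sources, as in Figure 5.5
        print(synchronous_tdm([['P1', 'P2', 'P3'], ['Q1', 'Q2'], ['R1', 'R2', 'R3']]))
        # [['P1', 'Q1', 'R1'], ['P2', 'Q2', 'R2'], ['P3', '-', 'R3']]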
        Statistical TDM
        Statistical TDM, also called asynchronous TDM or intelligent TDM, overcomes the disadvantage of
        synchronous TDM by assigning the slots in a frame dynamically to input sources. A slot is assigned to
         an input source only when the source has some data to send. Thus, no empty slots are transmitted, which in turn results in an efficient utilization of bandwidth. In statistical TDM, each frame generally has fewer slots than the number of input lines, and the capacity of the link is also less than the combined capacity of the channels.
            Figure 5.6 shows a conceptual view of statistical TDM. At the sender’s end, the multiplexer scans
        the input sources one by one in a circular manner and assigns a slot if the source has some data to send;
        otherwise, it moves on to the next input source. Hence, slots are filled up as long as an input source has
        some data to send. At the receiver’s end, the demultiplexer receives the frame and distributes data in
        slots of frame to appropriate output lines.
                     Figure 5.6 Statistical Time-division Multiplexing
            Unlike synchronous TDM, there is no fixed relationship between the inputs and the outputs in statis-
        tical TDM because here slots in a frame are not pre-assigned or reserved. Thus, to ensure the delivery of
        data to appropriate output lines, each slot in the frame stores the destination address along with data to
        indicate where the data has to be delivered. For n output lines, an m-bit address is required to define each
        output line where m = log2 n. Though the inclusion of address information in a slot ensures the proper
        delivery, it incurs more overhead per slot. Another difference between synchronous and statistical TDMs
        is that the latter technique does not require the synchronization among the frames thereby eliminating
        the need of including synchronization bits within each frame.
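            Because each occupied slot in statistical TDM must carry the destination address along with the data, a slot can be modelled as an (address, data) pair. The sketch below is only an illustration; the function name, the number of slots per frame and the sample data are assumptions.

        import math
        from itertools import zip_longest

        def statistical_tdm(sources, slots_per_frame):
            # Scan the input lines one by one in a circular manner; a slot is filled only
            # when a line has data, and each slot stores (address, data) so that the
            # demultiplexer can deliver the data to the correct output line.
            address_bits = math.ceil(math.log2(len(sources)))    # m-bit address for n lines
            occupied = []
            for round_units in zip_longest(*sources):             # one scan over all lines
                for address, unit in enumerate(round_units):
                    if unit is not None:                           # skip lines with nothing to send
                        occupied.append((address, unit))
            frames = [occupied[i:i + slots_per_frame] for i in range(0, len(occupied), slots_per_frame)]
            return address_bits, frames

        bits, frames = statistical_tdm([['P1', 'P2'], ['Q1'], ['R1', 'R2']], slots_per_frame=3)
        print(bits)     # 2: a 2-bit address is enough for three output lines
        print(frames)   # [[(0, 'P1'), (1, 'Q1'), (2, 'R1')], [(0, 'P2'), (2, 'R2')]]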
            6. Distinguish multilevel, multiple-slot and pulse-stuffed TDMs.
          Ans: Multilevel multiplexing, multiple-slot allocation and pulse stuffing are the strategies used in
        TDM to handle the different input data rates of the input lines.
        Multilevel Multiplexing
         Multilevel multiplexing is used in situations where the data rate of an input line is an integral multiple
        of other input lines. As the name suggests, several levels of multiplexing are used in this technique.
        To understand, consider Figure 5.7 where we have three input lines; the first two input lines have a data
        rate of 80 kbps each and the last input line has a data rate of 160 kbps. As the data rate of third input
        line (160 kbps) is a multiple of that of other two (80 kbps), the first two input lines can be multiplexed
        to produce a data rate of 160 kbps that is equal to that of the third input line. Then, another level of
        multiplexing can be used to combine the output of first two input lines and the third input line thereby
        generating an output line with a data rate of 320 kbps.
                     Figure 5.7 Multilevel Multiplexing (two 80-kbps lines are multiplexed into a 160-kbps stream, which is then multiplexed with a 160-kbps line to produce a 320-kbps output)
        Multiple-Slot Allocation
        Like multilevel multiplexing, multiple-slot allocation technique is also used in situations where the data
        rate of an input line is an integral multiple of other input lines. Generally, one slot per each input source
        is reserved in the frame being transmitted. However, sometimes, it is more useful to allocate multiple
        slots corresponding to a single input line in the frame. For example, again consider Figure 5.7 where
        we can divide one 160-kbps input line into two (each of 80 kbps) with the help of serial-to-parallel
        converter and then multiplex the four input lines of 80 kbps each to create an output line of 320 kbps.
         However, now there will be a total of four slots in the transmitted frame, with two slots corresponding to the input line that originally had the 160-kbps data rate (Figure 5.8).
                                                Figure 5.8      Multiple-slot Allocation
        Pulse Stuffing
         Pulse stuffing, also known as bit padding or bit stuffing, is a technique used in situations where the data rates
        rates of input lines are not the integral multiples of each other. In this technique, the highest input data rate
        is determined and then dummy bits are added to input lines with lower data rates to make the data rate of
        all the lines equal. For example, consider Figure 5.9 where the first input line has the highest data rate equal
        to 80 kbps and other two input lines have data rate of 60 and 70 kbps, respectively. Hence, the data rates
        of second and third input lines are pulse-stuffed to increase the rate to 80 kbps.
                     Figure 5.9 Pulse Stuffing (the 60-kbps and 70-kbps lines are stuffed up to 80 kbps and the three 80-kbps streams are multiplexed onto a 240-kbps link)
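            A quick calculation shows how many dummy bits pulse stuffing adds to each line per second; the sketch below is only an illustration and its function name and sample rates are assumptions.

        def pulse_stuffing(input_rates_kbps):
            # Raise every line to the highest input rate by adding dummy (stuffed) bits
            target = max(input_rates_kbps)
            stuffed = {rate: target - rate for rate in input_rates_kbps}   # dummy kbps per line
            output_rate = target * len(input_rates_kbps)                   # rate of the multiplexed link
            return target, stuffed, output_rate

        print(pulse_stuffing([80, 60, 70]))
        # (80, {80: 0, 60: 20, 70: 10}, 240): the 60- and 70-kbps lines are stuffed up to
        # 80 kbps and the three 80-kbps streams are multiplexed onto a 240-kbps link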
         (Figure: the DS hierarchy, in which 24 DS-0 channels are multiplexed into a DS-1, four DS-1 channels into a DS-2, seven DS-2 channels into a DS-3 and six DS-3 channels into a DS-4)
              DS-0: This service is at the lowest level of the hierarchy. It is a single digital channel of just 64 kbps.
              DS-1: This service has a capacity of 1.544 Mbps, which is 24 times the rate of a DS-0 channel plus an additional overhead of 8 kbps. It can be utilized for multiplexing 24 DS-0 channels, as a single service for transmission at 1.544 Mbps, or even in different combinations that together utilize a capacity of up to 1.544 Mbps.
              DS-2: This service has a capacity of 6.312 Mbps, which is 96 times the rate of a DS-0 channel plus an additional overhead of 168 kbps. It can be utilized for multiplexing 96 DS-0 channels or four DS-1 channels, or as a single service for transmission of up to 6.312 Mbps.
              DS-3: This service has a capacity of 44.376 Mbps, which is 672 times the rate of a DS-0 channel plus an additional overhead of 1.368 Mbps. It can be utilized for multiplexing 672 DS-0 channels, 28 DS-1 channels or seven DS-2 channels. In addition, DS-3 can be utilized as a single transmission line or as a combination of its previous hierarchies.
              DS-4: This service is at the highest level of the DS hierarchy and has a capacity of 274.176 Mbps. It can be utilized for multiplexing six DS-3 channels, 42 DS-2 channels, 168 DS-1 channels or 4,032 DS-0 channels. It can also be used as a combination of service types at lower levels of the hierarchy.
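         The capacities listed above can be checked with a few lines of Python; the overhead shown for DS-4 is inferred from the stated total (274.176 Mbps minus 4,032 x 64 kbps) and is therefore an assumption rather than a figure quoted in the text.

            # Consistency check of the DS hierarchy rates: channels x 64 kbps + overhead.
            DS0 = 64_000  # bps
            levels = {
                "DS-1": (24,   8_000),         # 24 DS-0 channels + 8 kbps overhead
                "DS-2": (96,   168_000),       # 96 DS-0 channels + 168 kbps overhead
                "DS-3": (672,  1_368_000),     # 672 DS-0 channels + 1.368 Mbps overhead
                "DS-4": (4032, 16_128_000),    # 4,032 DS-0 channels + inferred overhead
            }
            for name, (channels, overhead) in levels.items():
                total = channels * DS0 + overhead
                print(f"{name}: {total / 1e6:.3f} Mbps")
            # DS-1: 1.544, DS-2: 6.312, DS-3: 44.376, DS-4: 274.176 (all in Mbps)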
             10. Explain the analog hierarchy used by telephone networks with an example.
            Ans: Analog hierarchy is used by telephone companies for maximizing the efficiency of their infrastructure, such as switches and other transmission equipment. In an analog hierarchy, the analog signals from lower-bandwidth channels are multiplexed onto higher-bandwidth lines at different levels. For multiplexing the analog signals at all levels of the hierarchy, the FDM technique is used. The multiplexing standards vary from country to country, but the basic principle remains the same. One such analog hierarchy is used by AT&T in the United States. This analog hierarchy comprises voice channels, groups, super groups, master groups and jumbo groups (Figure 5.12).
                  [Figure 5.12: Analog hierarchy — 12 voice channels (4 kHz each) are FDM-multiplexed into a group, 5 groups into a super group, 10 super groups into a master group and 6 master groups into a jumbo group]
             At the lowest level of the hierarchy, 12 voice channels, each having a bandwidth of 4 kHz, are multiplexed onto a higher-bandwidth line, thereby forming a group of 48 kHz (12 × 4 kHz). At the next level of the hierarchy, five groups (that is, 60 voice channels) are multiplexed to form a super group, which has a bandwidth of 240 kHz (5 × 48 kHz). Further, 10 super groups (that is, 600 voice channels) are multiplexed to form a master group. A master group requires a minimum bandwidth of 2.4 MHz (10 × 240 kHz); however, to avoid interference between the multiplexed signals, the requirement of guard bands increases the total bandwidth of a master group to 2.52 MHz. Finally, six master groups are multiplexed to form a jumbo group, which requires a minimum bandwidth of 15.12 MHz (6 × 2.52 MHz); again, the guard bands needed to separate the master groups increase the total bandwidth of a jumbo group to 16.984 MHz.
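         The bandwidth figures above follow directly from the multiplication at each level; a small Python check (illustrative only, with the guard-band totals taken from the text) is shown below.

            # Step-by-step bandwidths of the AT&T analog hierarchy (values in Hz).
            voice = 4_000
            group = 12 * voice                  # 48 kHz
            super_group = 5 * group             # 240 kHz
            master_nominal = 10 * super_group   # 2.4 MHz before guard bands
            master_total = 2_520_000            # 2.52 MHz including guard bands (from text)
            jumbo_nominal = 6 * master_total    # 15.12 MHz before guard bands
            jumbo_total = 16_984_000            # 16.984 MHz including guard bands (from text)
            print(group, super_group, master_nominal, jumbo_nominal)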
            11. What is inverse multiplexing?
            Ans: Inverse multiplexing is the opposite of the multiplexing technique. In this technique, the data from a single high-speed line or source is split into chunks that are transmitted simultaneously over several low-speed lines, without any loss in the combined data rate.
               12. What do you understand by spread spectrum?
            Ans: Spread spectrum is a technique used to expand the bandwidth occupied by a signal on a communication link in order to achieve goals such as privacy and anti-jamming. To achieve these goals, spread spectrum techniques expand the bandwidth originally required by each system (adding redundancy) such that the signals from different sources can together fit into the larger bandwidth. For example, if the bandwidth requirement of each system is B, then the spread spectrum techniques increase it to B´ such that B´ >> B. To spread the bandwidth, a spreading process is used that is independent of the original signal. The spreading process uses a spreading code—a series of numbers following a specific pattern—and results in the expanded bandwidth.
             The spread spectrum technique is generally used in wireless LANs and WANs, where the systems sharing the air transmission medium must be able to communicate without interception by intruders, blocking of messages during transmission and so on.
             13. List the principles followed by the spread spectrum technique.
            Ans: The spread spectrum technique uses the following principles to achieve its goals.
            The spreading process must be independent of the original signal. This implies that the spreading process is applied after the source has generated the signal.
            Each source station is assigned a bandwidth much greater than what it actually needs, thereby allowing redundancy.
             14. Compare the frequency hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS) techniques.
            Ans: FHSS and DSSS are both techniques to expand the bandwidth of a digital signal; however, the two techniques follow different processes to achieve this goal.
                  [Figure: FHSS — a pseudorandom code generator indexes a frequency table; the frequency synthesizer produces the selected carrier, and the modulator combines it with the original signal to produce the spread signal]
            Since one out of the N hopping frequencies is used by a source station in each hopping period, the remaining N-1 frequencies can be used by other N-1 stations. Thus, N hopping frequencies (N channels) can be multiplexed into one using the same bandwidth BFHSS; that is, N source stations can share the same bandwidth BFHSS. In addition, owing to the randomization of the carrier frequency and the frequent frequency hops, an intruder may be able to intercept the signal, or send noise to jam it, for one hopping period at most but not for the entire duration, thus achieving privacy and anti-jamming.
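         The hopping behaviour can be pictured with a short Python sketch; the carrier values, the number of frequencies N and the use of Python's random module in place of a real pseudorandom code generator are all illustrative assumptions.

            import random

            # Sketch of FHSS: a pseudorandom code selects one of N carriers in each hopping period.
            N = 8
            carriers_mhz = [2400 + i for i in range(N)]      # hypothetical carrier set

            def hopping_sequence(seed, periods):
                rng = random.Random(seed)                    # stands in for the pseudorandom code generator
                return [carriers_mhz[rng.randrange(N)] for _ in range(periods)]

            print(hopping_sequence(seed=1, periods=10))      # station A's hop pattern
            print(hopping_sequence(seed=2, periods=10))      # station B uses a different code sequence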
                  [Figure: DSSS — a chip generator produces the spreading code, which is combined with the original signal in the modulator to produce the spread signal]
            Figure 5.15 illustrates the DSSS process. The spreading code used in this example is eight chips with the pattern 10110100. Each bit of the original signal is combined with the eight-chip spreading code to obtain the new spread signal. Now, if the original signal rate is S, then the new spread signal rate will be 8S. This implies that the required bandwidth for the spread signal would be eight times the original bandwidth.
                  [Figure 5.15: DSSS example — original bits 1 0 1 1; spreading code 10110100; spread signal 10110100 01001011 10110100 10110100 (each 1 bit is sent as the code and each 0 bit as its complement)]
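         The spreading in Figure 5.15 can be reproduced with a few lines of Python (a sketch only); here a 1 bit is sent as the chip code itself and a 0 bit as its complement, which matches the pattern shown in the figure.

            # DSSS spreading with the 8-chip code 10110100 (as in Figure 5.15).
            CODE = "10110100"

            def spread(bits):
                out = []
                for b in bits:
                    chips = CODE if b == "1" else "".join("1" if c == "0" else "0" for c in CODE)
                    out.append(chips)
                return "".join(out)

            print(spread("1011"))
            # -> 10110100 01001011 10110100 10110100; the chip rate (and bandwidth) is 8 times the bit rate.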
                  [Figure: A switched network — communicating devices are connected through a mesh of switches I to VII]
        to a communicating device or to any other switch for forwarding information. Notice that multiple
        switches are used to complete the connection between any two communicating devices at a time,
        hence saving the extra links required in case of a point-to-point connection.
            16. Explain different types of switching techniques along with their advantages and disad-
        vantages.
          Ans: There are three different types of switching techniques; namely, circuit switching, message
        switching and packet switching.
        Circuit Switching
        When a device wants to communicate with another device, circuit switching technique creates a fixed
        bandwidth channel, called a circuit, between the source and the destination. This circuit is a physical
         path that is reserved exclusively for a particular information flow, and no other flow can use it. Circuits are isolated from one another, and thus their environment is well controlled. For example, in
        Figure 5.17, if device A wants to communicate with device D, sets of resources (switches I,  II and
        III) are allocated which act as a circuit for the communication to take place. The path taken by data
        between its source and destination is determined by the circuit on which it is flowing, and does not
        change during the lifetime of the connection. The circuit is terminated when the connection is closed.
        Therefore, this method is called circuit switching.
                  [Figure 5.17: Circuit switching — a dedicated path through switches I, II and III connects device A to device D]
              The resources are not efficiently utilized during circuit switching in a computer network.
              For communication amongst stations that use costly, high-speed transmission lines, circuit switching is not cost effective or economical, as communication between such stations generally occurs in short, fast bursts.
        Packet Switching
         Packet switching introduces the idea of breaking data into packets, which are discrete blocks of data of potentially variable length. Apart from data, these packets also contain a header with control information such as the destination address and the priority of the message. These packets are passed by the source point to its local packet switching exchange (PSE). When the PSE receives a packet, it inspects the destination address contained in the packet. Each PSE contains a navigation directory specifying the outgoing links to be used for each network address. On receipt of each packet, the PSE examines the packet header information and then either removes the header or forwards the packet to another system. If the communication channel is not free, the packet is placed in a queue until the channel becomes free. As each packet is received at each transitional PSE along the route, it is forwarded on the appropriate link interleaved with other packets. At the destination PSE, the packet is finally passed to its destination. Note that not all packets of the same message, travelling between the same two points, will necessarily follow the same route. Therefore, after reaching their destination, the packets are put back into order by a packet assembler and disassembler (PAD).
            For example, in Figure 5.18, four packets (1, 2, 3 and 4) once divided on machine A are transmitted
        via various routes, which arrive on the destination machine D in an unordered manner. The destination
        machine then assembles the arrived packets in order and retrieves the information.
                  [Figure 5.18: Packet switching — packets 1, 2, 3 and 4 from machine A travel over different routes and arrive at machine D out of order, where they are reassembled]
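         A toy Python sketch of this behaviour is given below (the message text, packet size and field names are hypothetical): the message is split into numbered packets, the packets arrive out of order and the receiver reorders them using the sequence numbers.

            import random

            # Toy packet switching: packetize, deliver out of order, reassemble by sequence number.
            def packetize(message, size, dst):
                chunks = [message[i:i + size] for i in range(0, len(message), size)]
                return [{"seq": n, "dst": dst, "data": chunk} for n, chunk in enumerate(chunks)]

            def reassemble(packets):
                return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

            packets = packetize("HELLO FROM MACHINE A", size=5, dst="D")
            random.shuffle(packets)            # packets take different routes and arrive unordered
            print(reassemble(packets))         # -> HELLO FROM MACHINE A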
        Message Switching
        A message is a unit of information which can be of varying length. Message switching is one of the
        earliest types of switching techniques, which was common in the 1960s and 1970s. This switching tech-
        nique employs two mechanisms; they are store-and-forward mechanism and broadcast mechanism. In
        store-and-forward mechanism (Figure 5.19), a special device (usually, a computer system with large
        storage capacity) in the network receives the message from a communicating device and stores it into
         its memory. Then, it finds a free route and sends the stored information to the intended receiver. In this kind of switching, a message is always delivered to an intermediate device, where it is stored and then rerouted to its final destination.
                  [Figure 5.19: Store-and-forward message switching]
             In the broadcast switching mechanism, the message is broadcast over a broadcast channel, as shown in Figure 5.20. As messages pass along the broadcast channel, every station connected to the channel checks the destination address of each message. A station accepts only the messages that are addressed to it.
            Some advantages of message switching are as follows:
          In message switching technique, no physical path is established in advance.
          The transmission channels are used very effectively in message switching, as they are allotted only
               when they are required.
            Some disadvantages of message switching are as follows:
          It is a slow process.
                  [Figure 5.20: Message switching over a broadcast channel]
Table 5.1 Comparison of Circuit Switching, Packet Switching and Message Switching
                  [Figure 5.21: Telephone network — subscribers connect through local loops to end offices; end offices connect to tandem offices through toll connecting trunks, and tandem offices connect to regional offices through intertoll trunks]
                Also referred to as the last mile, local loops follow analog technology and can extend up to several miles. When a local loop is utilized for voice, its bandwidth is 4 kHz. Earlier, uninsulated copper wires were commonly used for local loops; however, these days, category 3 twisted pairs are used for local loops.
             Trunks: The transmission media used for communication between switching offices are referred to as trunks. Generally, fibre-optic cable or satellite transmission is used for trunks. A single trunk can carry multiple conversations over a single path by using a multiplexing technique.
             Switching Offices: In a telephone network, no two subscribers are connected by a permanent physical link. Instead, switches are used to provide a connection between the different subscribers. These switches are located in the switching offices of the telephone company and connect many local loops or trunks. In a telephone network, there are several levels of switching offices, including end offices, toll offices, tandem offices and regional offices.
               When a subscriber calls another subscriber connected to the same end office, a direct electrical connection is established between the local loops with the help of the switching mechanism within the end office. However, if the called subscriber’s telephone is connected to a different end office, the connection is established by the toll offices or tandem offices. Several end offices are connected to their nearby switching centre, called a toll office or tandem office (if the end offices are in the same local area), through toll connecting trunks. If the caller and the called subscriber do not have a common toll office, the connection between them is established by regional offices, to which toll offices are connected via high-bandwidth intertoll trunks.
             19. Explain dial-up modems along with their standards in detail.
            Ans: The word modem is an acronym for modulator-demodulator. The modulator converts digital data into analog signals (that is, modulation) and the demodulator converts the analog signals back into digital data (that is, demodulation). A dial-up modem uses the traditional telephone line as the transmission medium and enables digital data to be sent over analog telephone lines.
            Figure 5.22 depicts the modulation/demodulation process during communication between machines A and B. At the sender’s side (that is, machine A), the digital signals from machine A are sent to the modem, which converts them into analog signals. These analog signals are transmitted over the telephone lines and received by the modem at the receiver’s end (that is, machine B). The modem at machine B converts the analog signals back into digital signals and sends them to computer B.
           ITU-T gave V-series standards for effective working of modems. Some modems of this series are
        as follows:
           V.22: This modem transmits data at 1,200 bps and works over two-wire leased line. It provides
              full-duplex synchronous transmission.
                  [Figure 5.22: Dial-up modems — digital signals from each machine are converted to analog signals by a modem, carried over the telephone network and converted back to digital signals by the modem at the other end]
              V.32: This modem transmits data at 9,600 bps and is designed for full-duplex synchronous transmission. It works on a two-wire leased line or the telephone network. This standard uses a technique called trellis coding that detects errors introduced during transmission. In this coding, the data stream is divided into four-bit sections and one extra bit is added to each quadbit (four-bit pattern) during data transmission for error detection.
              V.32bis: This modem transmits data at 14,400 bps and is an enhancement of the V.32 standard. V.32bis includes the fall-back and fall-forward features, which help the modem adjust its upstream and downstream speeds automatically. This adjustment of speed depends on the quality of the signal.
             V.34: This modem transmits data up to 33.6 kbps and works on a two-wire leased line. It is designed
               for full-duplex synchronous and asynchronous transmissions and also supports error correcting
               feature.
              V.90: This modem is designed for full-duplex synchronous and asynchronous transmission over a two-wire connection. The V.90 series modems are also called 56-K modems because their data transmission rate is 56 kbps. They are asymmetric in nature, as the upstream and downstream speeds differ: they support downloading at data rates of up to 56 kbps and uploading at up to 33.6 kbps. The main reason for the difference between the uploading and downloading speeds is that, during uploading, the signal gets sampled at various switching stations and is affected by the noise introduced there, whereas there is no such sampling of the signal during downloading.
              V.92: This modem can upload data at the rate of 48 kbps, while its downloading data rate is the same as that of the V.90 standard, that is, 56 kbps. This modem also has the advanced feature of a call-waiting service.
             20. Write a short note on DSL and its different technologies.
           Ans: DSL (stands for digital subscriber line) was developed by the telephone companies to fulfil
        the requirement of high-speed data access and the efficient utilization of the bandwidth of the local
        loops. DSL is a very promising technology, as it provides the customers with a telephone connection
        and high-speed Internet access simultaneously over the existing local loops. To provide a simultaneous
        telephone connection and Internet access, DSL systems are installed between the telephone exchange
        and the customers’ site.
           The DSL technology comprises many different technologies including ADSL, VDSL, HDSL and
        SDSL. Often, this group of technologies is referred to as xDSL where x represents A, V, H or S. These
        technologies are described as follows:
           ADSL: It stands for asymmetric digital subscriber line, also called asymmetric DSL. ADSL uses the existing local loops and provides a higher downstream rate (from the telephone exchange to the subscriber) than in the reverse direction. This asymmetry in data rates suits applications such as video-on-demand and Internet surfing, as the users of these applications need to download much more data than they upload, and at a higher speed. Along with the Internet service, ADSL also provides a simultaneous telephone connection over the same copper cable. By using FDM, the voice signals are separated from the data signals (upstream and downstream). ADSL technology is useful for residential customers rather than business customers, as business customers often require larger bandwidth for both downloading and uploading.
            ADSL Lite: It is a newer version of the ADSL technology. Also known as splitterless ADSL, this technology can provide a maximum upstream data rate of 512 kbps and a maximum downstream data rate of 1.5 Mbps. It differs from ADSL in that ADSL requires a splitter to be installed at the subscriber’s home or business location to separate voice and data, whereas with an ADSL Lite modem no such splitter is required at the subscriber’s premises and all the splitting is done at the telephone exchange. The ADSL Lite modem is directly plugged into the telephone jack at the subscriber’s premises and connected to the computer.
            HDSL: It stands for high-bit-rate digital subscriber line, also called high-bit-rate DSL. It was developed by Bellcore as a substitute for the T-1 (1.544 Mbps) line. T-1 lines use alternate mark inversion (AMI) encoding, which is subject to attenuation at high frequencies, so repeaters have to be used to cover longer distances, making the lines quite expensive. In HDSL, on the other hand, 2B1Q encoding is used, which is less susceptible to attenuation at higher frequencies. Also, a very high data rate of 1.544 Mbps can be achieved without repeaters up to a distance of approximately 4 km. To achieve full-duplex transmission, HDSL uses two twisted pairs, with one pair for each direction. HDSL2, a variant of HDSL that uses a single pair of twisted cables, is under development.
            SDSL: It stands for symmetric digital subscriber line, also called symmetric DSL. It was developed as a single-copper-pair version of HDSL. SDSL uses 2B1Q line coding and provides full-duplex transmission on a single pair of wires. It provides the same data rate of 768 kbps for both upstream and downstream, as it supports symmetric communication.
           VDSL: It stands for very high-bit-rate digital subscriber line, also called very high-bit-rate
              DSL. It is similar to ADSL and offers very high data transfer rates over small distances using
              coaxial, fibre-optic and twisted-pair cables. The downstream data rate for VDSL is 25–55 Mbps
               and the upstream data rate is generally 3.2 Mbps.
           Table 5.2 summarizes all the DSL technologies along with their characteristics.
           Technology     Upstream rate     Downstream rate     Distance (thousands of feet)     Simultaneous voice capacity     Number of twisted pairs
           ADSL           16–640 kbps       1.5–6.1 Mbps        12                               Yes                             1
           ADSL Lite      500 kbps          1.5 Mbps            18                               Yes                             1
           HDSL           1.5–2.0 Mbps      1.5–2.0 Mbps        12                               No                              2
           SDSL           768 kbps          768 kbps            12                               Yes                             1
           VDSL           3.2 Mbps          25–55 Mbps          3–10                             Yes                             1
            21. What are the two types of switch technologies used in circuit switching? Compare them.
          Ans: In circuit switching, two types of switch technologies are used, namely, space-division switch
        and time-division switch.
        Space-Division Switch
         In space-division switching, the different connections that are active in the circuit at the same time use different switching paths, which are separated spatially. Initially, this technology was developed for analog signals; however, it is presently used for both analog and digital signals. Space-division switches may be cross-bar switches or multistage switches.
           Cross-bar Switch: This switch connects p input lines with q output lines in the form of a matrix of the order p × q; the values of p and q may be equal. Each input line is connected to an output line with the help of a transistor at each cross-point—the intersection point of the input and output lines. Each cross-point is basically an electric switch that can be opened or closed by a control unit depending on whether or not communication is required between the connecting input and output lines. Figure 5.23 shows a schematic diagram of a 4 × 4 cross-bar switch.
                  [Figure 5.23: A 4 × 4 cross-bar switch — four input lines cross four output lines, with a cross-point at each intersection]
            A cross-bar switch has certain limitations, which are as follows:
              •  To connect m inputs with m outputs, m² cross-points are required, and the number of cross-points grows rapidly as the number of lines to be connected increases. This makes cross-bar switches expensive and complex.
              •  The cross-points are not utilized efficiently; even when all the devices are connected, only a few cross-points are active at any time.
              •  If a cross-point fails, the devices connected via that cross-point cannot connect to each other.
             Multistage Switch: This switch overcomes the limitations of the cross-bar switch. A multistage
               space-division switch consists of several stages, each containing (p×q) cross-bar switches. These
               switches can be connected in several ways to build a large multistage network (usually, three-stage
                network). In general, for n inputs and n outputs, m = log₂ n stages are required. Figure 5.24 shows a three-stage space-division switch. Here, each cross-point in the middle stage can be accessed through several cross-points in the first or third stage.
                  [Figure 5.24: A three-stage space-division switch — eight inputs pass through two 4 × 2 first-stage switches, two 2 × 2 middle-stage switches and two 2 × 4 third-stage switches to eight outputs]
           The main advantage of multistage switches is that the cost of having an m × m multistage network is
        much lower than that of an m × m cross-bar switch because the former network requires a lesser number
        of cross-points as compared to the latter network. However, in case of heavy traffic, it suffers from
        blocking problem due to which it cannot process every set of requests simultaneously.
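         The saving in cross-points can be verified with a quick calculation; the three-stage figures below simply count the cross-points of the switches shown in Figure 5.24 and are illustrative rather than a general formula.

            # Cross-point count: m x m cross-bar versus the three-stage switch of Figure 5.24.
            m = 8
            crossbar = m * m                                       # 64 cross-points
            three_stage = 2 * (4 * 2) + 2 * (2 * 2) + 2 * (2 * 4)  # 40 cross-points
            print(crossbar, three_stage)                           # the multistage design needs fewer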
        Time-division Switch
        In time-division switch, TDM is implemented inside a switch. Various active connections inside the
        switch can utilize the same switching path in an interleaved manner. One of the commonly used meth-
        ods of time-division switching is time slot interchange (TSI), which alters the sequencing of slots
        depending on the connection desired. TSI has a random access memory (RAM) and a control unit.
         The data is stored in the RAM sequentially; however, it is retrieved selectively based on the information in the control unit. The control unit stores control information such as which input line is to be connected to which output line.
             Figure 5.25 depicts the operation of TSI. Suppose the input lines want to send data to the output lines in the order 1 to 4, 2 to 3, 3 to 2 and 4 to 1. This information is stored in the control unit. At the sender’s end, one data unit from each of the four input lines is put serially into the time slots to build a frame. The data
        in the time slots is stored on the RAM in the exact order in which it is received, that is, A,  B,  C and  D.
        The data is retrieved from RAM and then filled up in the time slots of the output frame in the order as
        determined by the control unit, that is, D,  C,  B and A.
                  [Figure 5.25: Time slot interchange — the input frame slots A, B, C, D are written sequentially into the TSI RAM and read out in the order D, C, B, A as directed by the control unit (1→4, 2→3, 3→2, 4→1)]
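         The same interchange can be expressed in a few lines of Python (an illustrative sketch; the dictionaries merely mimic the RAM and the control unit of Figure 5.25).

            # Time slot interchange for the connections 1->4, 2->3, 3->2, 4->1.
            input_frame = {1: "A", 2: "B", 3: "C", 4: "D"}     # data unit written by each input line
            connections = {1: 4, 2: 3, 3: 2, 4: 1}             # control unit: input line -> output line

            ram = [input_frame[line] for line in sorted(input_frame)]   # stored sequentially: A B C D

            output_frame = {}
            for in_line, out_line in connections.items():
                output_frame[out_line] = ram[in_line - 1]      # read selectively, as the control unit directs

            print([output_frame[slot] for slot in sorted(output_frame)])   # ['D', 'C', 'B', 'A']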
                  [Figure: HFC network — the headend (RCH) feeds distribution hubs over fibre-optic cable; fibre nodes split the signal onto coaxial cables, from which taps and drop cables reach the subscribers]
         The distribution hubs modulate the signals and distribute them to the fibre nodes over fibre-optic cable. Each fibre node splits the signal and sends the same signal onto each of its coaxial cables.
             In comparison to the traditional cable TV network, the HFC network requires a smaller number of amplifiers and also offers bidirectional communication.
             23. Which devices are needed for using cable networks for data transmission?
          Ans: Traditional cable TV network was used only for providing video signals but nowadays, HFC
        network—an extension of traditional cable TV network—is being used to provide high-speed access to
        the Internet. However, to use cable TV network for data transmission, two devices are needed, namely,
        cable modem (CM ) and CM transmission system (CMTS ).
        Cable Modem
        This is an external device that is installed at the end user’s premises where the Internet has to be
        accessed. It is connected to the user’s home PC via Ethernet port and it works similar to an ADSL
         modem. CM divides the HFC network into two channels, upstream and downstream channels, with
         downstream channel having a higher data transmission rate than the upstream channel. The upstream
         channel is used for transmission from the home PC to the head end and the downstream channel is used
         to handle transmission in the opposite direction. Since HFC is a shared broadcast network, each packet
          from the head end is broadcast on every link to every home; however, the reverse is not true. As a result, when several users are downloading content simultaneously through the downstream channel, the transmission rate received by each user is much less than the actual rate of the downstream channel. In contrast, when only a few users are surfing the web, each user will effectively receive the full downstream rate, because it is very rare that two users request a page at exactly the same time.
        telephone number) for establishing a connection and connecting to other sites. A user can establish a digital
        connection by dialling another ISDN number from his/her own ISDN number. However, unlike leased
         lines which provide permanent connection, the ISDN users can disconnect ISDN WAN link whenever
         desired.
            25. What are the two types of ISDN?
          Ans: Based on the transmission and switching capabilities, there are two types of ISDN, namely,
        narrowband ISDN and broadband ISDN.
          Narrowband ISDN (N-ISDN): This is the first generation of ISDN and uses circuit-switching
             technique. It has smaller bandwidth and supports low bit rates (usually up to 64 kbps). Due to lower
             bit rates, N-ISDN cannot cater to the needs of certain applications such as multimedia applications.
             Four-wire twisted pairs are used for transmission in N-ISDN thereby resulting in poor quality of
             service. N-ISDN follows the concept of frame relay.
          Broadband ISDN (B-ISDN): This is the second generation of ISDN and uses packet-switching tech-
             nology. It supports high-resolution video and multimedia applications. It can support data rates of hun-
             dreds of Mbps (usually up to 622 Mbps) and thus, is suitable to be used for high-resolution video and
             multimedia applications. Optical fibre is used as the transmission media for B-ISDN thereby resulting
             in better quality of service as compared to N-ISDN. B-ISDN follows the concept of cell relay.
            26. What are the services provided by ISDN?
          Ans: ISDN provides various services including data applications, existing voice applications, fac-
        simile, videotext and teletext services. These services are grouped under the following three c ategories.
          Bearer or Carrier Services: These services allow the sender and the receiver to exchange informa-
             tion such as video, voice and text in real time. A message from the sender can be communicated to the
             receiver without any modification in the original content of the message. Bearer services are provided
             by ISDN using packet switching, circuit switching, cell relay and frame relay. These services corre-
             spond to the physical, data link and network layers of the OSI model.
           Teleservices: These services not only permit the transport of information but also include information-processing functions. To transport information, teleservices make use of bearer services, while to process the information, a set of higher-level functions corresponding to the transport, session, presentation and application layers of the OSI model is used. Some examples of teleservices are videotex, teletex and telefax.
          Supplementary Services: These services are used in combination with at least one teleservice or
             bearer service but cannot be used alone. For example, reverse charging, call waiting and message
             handling are some of the supplementary services that are provided with the bearer and teleservices.
            27. Describe the type of channels provided by ISDN along with their purpose.
           Ans: The ISDN standard has defined three types of channels, namely, the bearer (B) channel, the data (D) channel and the hybrid (H) channel, with each channel having a specific data rate.
          B Channel: This is the basic channel used to carry only the user traffic in digital form such as
            digitized voice and digital data at the rate of 64 kbps. The transmission through B channel is
            full-duplex as long as the required data transmission rate is up to 64 kbps. Multiple conversations
            destined for a single receiver can be carried over a single B channel with the use of multiplexing.
              Since a B channel provides end-to-end transmission, the signals can be demultiplexed only at the receiver’s end and not midway.
                 D Channel: This channel is mainly used to carry the control information required for establishing and terminating the switched connections on the B channels. For example, the digits dialled while establishing a telephone connection are passed over the D channel. Under certain circumstances, the D channel can also be used to carry user data. The D channel provides data rates of 16 or 64 kbps depending on the requirements of the user.
                H Channel: This channel is used for applications having higher data requirements such as video-
                  conferencing and teleconferencing. There are certain types of H channels including H0 channel
                  with a data rate of 384 kbps, H11 channel with a data rate of 1,536 kbps and H12 channel with a
                  data rate of 1,920 kbps.
              28. Distinguish BRI and PRI in ISDN.
           Ans: In ISDN network, each user is connected to the central office via digital pipes called digital
        subscriber loops. The digital subscriber loops are of two types, namely, basic rate interface (BRI) and
        primary rate interface (PRI), with each type catering to a different level of customers’ needs. Table 5.3
        lists some differences between these two types.
          BRI                                                  PRI
           • It comprises two B channels and one D channel,    • It comprises up to 23 B channels and one D channel
              thus referred to as the 2B + D channel.             in North America and Japan. In Europe, Australia and
                                                                  other parts of the world, it comprises 30 B channels
                                                                  and one D channel.
          • The BRI-D channel operates at 16 kbps.             • The PRI-D channel operates at 64 kbps.
          • It provides total bit rate of 192 kbps.            • In North America and Japan, it provides total bit rate of
                                                                  1.544 Mbps and in other parts of the world, it provides
                                                                  total bit rate of 2.048 Mbps.
           • It is primarily used at home for connecting to    • It is primarily used in businesses, replacing leased
              the Internet or to business networks.               lines; it provides the same bandwidth and signal
                                                                  quality but with more flexibility.
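         The totals in Table 5.3 can be reproduced as shown below; the framing/synchronization overheads used here (48 kbps for BRI, 8 kbps for the North American PRI and 64 kbps for the 30-channel PRI) are the commonly cited values and are an assumption, since the table itself does not state them.

            # Reproducing the BRI and PRI totals (rates in kbps).
            B, D16, D64 = 64, 16, 64

            bri    = 2 * B + D16 + 48        # 2B + D + framing overhead  -> 192 kbps
            pri_na = 23 * B + D64 + 8        # 23B + D + framing overhead -> 1,544 kbps (1.544 Mbps)
            pri_eu = 30 * B + D64 + 64       # 30B + D + framing overhead -> 2,048 kbps (2.048 Mbps)
            print(bri, pri_na, pri_eu)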
                  [Figure: ISDN reference configuration — at the subscriber’s premises, a TE1 connects through NT2 and NT1 (reference points S, T and U) to the central office, while a TE2 connects through a TA (reference point R); the ISDN switch at the central office provides access to the switched network, the packet network and private-line networks]
              Terminal Adapters (TA): It refers to a device that translates information in a non-ISDN format into a form usable by an ISDN device. It is generally used with TE2 devices.
              Network Termination 1 (NT1): It refers to a device that connects the internal system at the subscriber’s end to the digital subscriber loop. It organizes the data received from the subscriber’s end into frames that are to be sent over the network. It also translates the frames received from the network into a format that can be understood by the subscriber’s devices. It interleaves the bytes of data from the subscriber’s devices but is not a multiplexer; the multiplexing occurs automatically, as NT1 provides synchronization between the data stream and the process of creating a frame.
             Network Termination 2 (NT2): It refers to a device that performs multiplexing, flow control
               and packetizing at the physical, data link and network layers of OSI model, respectively. It acts as
               an intermediate device between data-producing devices and NT1. An example of NT2 is private
               branch exchange (PBX) that coordinates transmissions from telephone lines and multiplexes them,
               so that they can be transmitted by NT1.
              Reference Points: It refers to a tag that is used to define the logical interfaces between the various termination devices. The reference points used in the ISDN architecture include R, S, T and U. Reference point R specifies the link between TE2 and TA. Reference point S specifies the link between TE1 or TA and NT1 or NT2. Reference point T specifies the link between NT1 and NT2. Reference point U specifies the link between NT1 and the ISDN office.
              To exchange control information between the end user and the network, protocols are used. ISDN uses more than one twisted pair to provide a full-duplex communication link between the end user and the central office. The central office accommodates multiplexed access, providing a high-speed interface through a digital PBX or LAN. With the help of the central office, a subscriber can access the circuit-switched network and the packet-switched network.
         30. Five 1-kbps connections are multiplexed together and each unit represents 1 bit. Find:
        		 (a) the duration of 1 bit before multiplexing,
        		 (b) the transmission rate of the link,
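         For parts (a) and (b), a quick check (assuming ideal TDM with no framing overhead) is sketched below: each input bit lasts 1/1,000 s = 1 ms before multiplexing, and the link must carry 5 × 1 kbps = 5 kbps.

            # Parts (a) and (b): five 1-kbps connections multiplexed, one bit per unit.
            rate_per_connection = 1_000                      # bps
            n = 5
            bit_duration_before = 1 / rate_per_connection    # (a) 0.001 s = 1 ms
            link_rate = n * rate_per_connection              # (b) 5,000 bps = 5 kbps
            print(bit_duration_before, link_rate)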
         10. The transmission media used for communication between switching offices is referred to as
              (a) Switch           (b) Local loop        (c) Trunk            (d) None of these
         11. Which of the following is a DSL technology?
              (a) ADSL Lite        (b) VDSL              (c) SDSL             (d) All of these
         12. The broadcasting stations send video signals to the cable TV office, called ______.
              (a) Fibre node       (b) Head end          (c) Toll office      (d) CATV
         13. Narrowband ISDN supports bandwidth up to
              (a) 128 kbps         (b) 112 kbps          (c) 96 kbps          (d) 64 kbps
         Answers
          1. (d)   2. (b)   3. (c)   4. (a)   5. (a)   6. (c)   7. (a)   8. (b)   9. (d)   10. (c)
         11. (d)  12. (b)  13. (d)
                 2. What are the different types of services provided by the data link layer to the network layer?
           Ans: The data link layer offers various services to the network layer; however, some of the basic
        types are as follows.
            Unacknowledged Connectionless Service: As the name suggests, this is a connectionless service; that is, no logical connection between the sender and the receiver needs to be established before the transmission or released after it. In addition, the receiver does not send an acknowledgement frame to the sender on receiving a data frame. If a frame is lost or damaged during the transmission, no attempt is made to discover the loss and recover from it. This type of service is suitable for applications with a low error rate and for real-time traffic such as voice.
            Acknowledged Connectionless Service: In this service also, no logical connection is established between the communicating nodes; however, each frame received by the receiver is acknowledged in order to indicate to the sender that the data frame has been received properly. Thus, the frames for which the sender does not receive any acknowledgement within a stipulated time can be retransmitted. This service is suitable for wireless systems.
           Acknowledged Connection-oriented Service: In this type of service, a connection is established
              between the communicating nodes before starting data transmission. The data link layer ensures
               that each frame is received exactly once by the receiver and all the frames are received in the
              correct order. For this, each frame is numbered before the transmission and the receiver sends back
                an acknowledgement frame on receiving a data frame. In a connection-oriented service, the data transfer takes place in three phases, namely, connection establishment, data transmission and connection release. During the first phase, a connection is made between the sender and the
                receiver. For establishing a connection, both sender and receiver initialize some variables and coun-
                ters to identify the frames that have been received and that have not. After the connection has been
               established, the second phase commences during which the actual transmission of frames is carried
              out between the sender and the receiver. After the frames have been transmitted, the third phase
              begins in which the connection is released by freeing up the used variables, counters and buffers.
                 3. What is framing? Discuss various framing methods.
           Ans: The data link layer accepts the bit stream from the physical layer and breaks it into discrete
        frames. This process is known as framing. For ensuring that the bit stream is free of errors, the data
        link layer computes the checksum of each transmitting frame at the sender’s side. After the frame has
        been received, the data link layer on the receiver’s side re-computes the checksum. If the new value of
        checksum differs from the checksum contained in the frame, then the data link layer comes to know that
        an error has occurred and takes appropriate measures to deal with errors. For example, some bad frames
        can be discarded and the request for their retransmission can be made.
            In order to enable the receiver to detect the frame boundaries, several framing methods have been developed. These methods are described as follows.
        Character Count
        This framing method uses a field in the header of each frame which specifies the total number of
        characters in that frame. This field helps the data link layer at the destination node in knowing the
         number of characters that follow and detecting the end of each frame. The disadvantage of this method is that, during transmission, the character count in a frame may be altered by errors, thereby making the destination node go out of synchronization. In that case, the destination node will not be able to locate the correct start of the next frame and cannot even ask the sender for retransmission of the frame, as it does not know how many characters should be skipped to get to the start of the retransmitted frame.
        Byte Stuffing
        This framing method uses a byte called the flag byte at the starting and the ending of each frame.
        This helps the destination node to find the end of the current frame even if the destination goes out
        of synchronization. The presence of two successive flag bytes marks the ending of one frame and the
        beginning of the next one. However, a problem occurs when binary data such as object programs or
        floating-point numbers are transmitted. In such cases, the bit pattern of the flag byte may also appear
         in the data and thus may cause interference with the framing. To solve this problem, the data link layer at the sender’s side can insert a special escape byte (ESC) before each flag byte appearing in the data. At the destination node, the data link layer strips off the escape byte and passes the data to the network layer.
           Now, it may also happen that the escape byte appears in the data. To prevent the escape byte from
        being mixed with the data, each escape byte appearing in data is stuffed with another escape byte. Thus,
        a single escape byte in the data indicates that it is a part of escape sequence while the occurrence of two
         consecutive escape bytes in the data indicates that one of them is part of the data. The disadvantage of this framing method is that it can only be used with eight-bit characters; therefore, it cannot be used for character codes that use more than eight bits. Byte stuffing is also referred to as character stuffing.
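         A minimal Python sketch of byte stuffing is given below; the particular FLAG and ESC byte values are hypothetical, chosen only for illustration.

            # Byte (character) stuffing and de-stuffing with hypothetical FLAG and ESC values.
            FLAG, ESC = 0x7E, 0x7D

            def stuff(payload: bytes) -> bytes:
                out = bytearray([FLAG])                 # opening flag byte
                for byte in payload:
                    if byte in (FLAG, ESC):
                        out.append(ESC)                 # escape any flag/escape byte in the data
                    out.append(byte)
                out.append(FLAG)                        # closing flag byte
                return bytes(out)

            def unstuff(frame: bytes) -> bytes:
                body, out, escaped = frame[1:-1], bytearray(), False
                for byte in body:
                    if not escaped and byte == ESC:
                        escaped = True                  # the next byte is data, not a delimiter
                        continue
                    out.append(byte)
                    escaped = False
                return bytes(out)

            data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
            assert unstuff(stuff(data)) == data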
        Bit Stuffing
         This framing method overcomes the limitation of byte stuffing: it can be used with characters of any number of bits and allows a data frame to comprise an arbitrary number of bits. In the bit-stuffing method, a special bit pattern, 01111110, called the flag byte, is inserted at the start and end of each frame. In order to prevent an occurrence of the flag pattern within the data from being misinterpreted as the start or end of a frame, whenever the sender’s data link layer finds five consecutive 1s in the data, it automatically stuffs a 0 bit after these 1s (refer Q13). The data link layer at the destination de-stuffs (deletes) a 0 bit that follows five consecutive 1s and then passes the de-stuffed bits on to the network layer. An advantage of bit stuffing is that the boundary between two consecutive frames can be clearly identified by the flag byte. Therefore, if the receiver loses track of the frame boundaries, it can simply scan the incoming bits for the flag byte, since a flag byte can be present only at the boundaries of a frame.
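         The stuffing and de-stuffing rules can be sketched in Python as follows (bits are represented as a character string purely for readability).

            # Bit stuffing: insert a 0 after five consecutive 1s; the receiver removes it again.
            def bit_stuff(bits: str) -> str:
                out, run = [], 0
                for b in bits:
                    out.append(b)
                    run = run + 1 if b == "1" else 0
                    if run == 5:
                        out.append("0")        # stuffed bit
                        run = 0
                return "".join(out)

            def bit_unstuff(bits: str) -> str:
                out, run, skip = [], 0, False
                for b in bits:
                    if skip:                   # this is the stuffed 0; drop it
                        skip = False
                        continue
                    out.append(b)
                    run = run + 1 if b == "1" else 0
                    if run == 5:
                        skip = True
                        run = 0
                return "".join(out)

            data = "011111101111110"
            assert bit_unstuff(bit_stuff(data)) == data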
                  [Figure 6.1: Single-bit error — a single bit of the transmitted data unit 1 0 0 0 1 0 0 0 0 1 is altered]
              Burst Error: A burst error occurs when two or more bits (consecutive or non-consecutive) are altered during the transmission (Figure 6.2). The length of the burst is measured from the first altered bit to the last altered bit. However, this does not necessarily mean that all the bits within the burst length have been altered.
[Figure 6.2 Burst Error: transmitted data 1 0 0 0 1 0 0 0 0 1 is received as 1 0 0 1 1 1 0 0 1 1, with several bits altered]
In error detection, the emphasis is only on finding out whether any error has occurred; whether it is
a single-bit error or a burst error does not make any difference. Some of the commonly used error-detecting
codes include simple parity check, two-dimensional (2D) parity check, checksum and cyclic redundancy
check (CRC).
In contrast to error detection, in error correction the emphasis is not only on detecting the errors
but also on correcting them. For this, it becomes important to know the exact number of bits that have
        been corrupted as well as their positions in the received data unit. In error correction, the sender adds
        enough redundant bits to the data being transmitted that enable the receiver to infer what the original
        data was. This helps the receiver to detect as well as correct the errors in the received data. Error
        correction is more difficult than error detection. Hamming code is one of the commonly used categories
        of error-correcting codes.
Generally, for highly reliable channels such as fibre, error-detecting codes are used; however, for
wireless links, the use of error-correcting codes is preferred.
              7. Define the following
                  (a) Codeword
                  (b) Code Rate
                  (c) Hamming Weight of a Codeword
Ans: (a) Codeword: While transmitting a block of data (referred to as the dataword) containing d bits, a
few parity or redundancy bits (say, r) are added to form an n-bit (where n = d + r) encoded
block of data, which is referred to as the codeword. In other words, a codeword is formed by
appending redundancy bits to the dataword.
             (b) Code Rate: It refers to the ratio of the number of data bits in the codeword to the total number
                               of bits. That is,
                                                      Code rate = d/n
(c) Hamming Weight of a Codeword: It refers to the number of non-zero elements in the codeword.
It is equal to the Hamming distance between the given codeword and a codeword containing all zeros.
                8. What do you understand by Hamming distance? Describe the role of minimum Ham-
          ming distance in error detection and error correction.
Ans: Consider a pair of codewords, say X and Y, containing the same number of bits. The Hamming
distance between X and Y, denoted as d(X, Y), is defined as the number of positions in which the
corresponding bits of X and Y differ. For example, if X = 10010010 and Y = 11011001, then the Hamming
distance between X and Y is four, as the bits in X and Y differ in four positions (the second, fifth, seventh and eighth bit positions, counting from the left).
The Hamming distance between two codewords can be calculated by first performing an XOR operation
          on them and then checking for the number of 1s in the resulting bit string; the Hamming distance will
          be equal to the number of 1s in the result string. For example, the result of XOR operation on X and Y is
         01001011. Since there are four 1s in the resulting bit string, the Hamming distance is four.
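The XOR-and-count procedure just described translates directly into a few lines of code; the sketch below simply reuses the example codewords X and Y given above.

    def hamming_distance(x, y):
        # count the positions at which the corresponding bits differ (the 1s of X XOR Y)
        assert len(x) == len(y)
        return sum(1 for a, b in zip(x, y) if a != b)

    print(hamming_distance('10010010', '11011001'))    # prints 4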
             Generally, during transmission, a large number of binary words are transmitted in a continuous
        sequence from sender to receiver. Accordingly, a block of codewords are received at the receiver; as a
        result, many values of Hamming distances can be calculated between each possible pair of codewords.
Therefore, the Hamming distance itself is not of much significance; instead, the minimum Hamming distance
(dmin) is used for designing a code. The minimum Hamming distance (dmin) of a linear block code is
defined as the smallest of the Hamming distances calculated between each possible pair of distinct codewords.
             The minimum Hamming distance (dmin) plays a significant role in error detection and error correction.
        It is always possible to detect the errors in a received codeword if the total number of errors in the
        codeword is less than dmin; otherwise, errors cannot be detected. This is because if the number of errors in
        a codeword is equal to or greater than dmin, the codeword may correspond to another valid codeword; as
         a result, the receiver may not be able to detect errors. The error-detection and error-correction capabilities
         of any coding technique largely depend on the minimum Hamming distance as explained below.
To enable the receiver to detect up to n errors in the codeword, dmin must be greater than or
equal to n + 1. For example, for dmin = 5, up to four errors can be detected in a codeword, as
5 >= n + 1 implies n <= 4.
To enable the receiver to correct up to m errors in the codeword, dmin must be greater than or equal to
2m + 1. For example, for dmin = 5, up to two errors can be corrected in a codeword, as 5 >= 2m + 1
implies m <= 2.
To enable the receiver to correct up to m errors and detect up to n errors (where n > m) in the
codeword, dmin must be greater than or equal to m + n + 1.
In case the value of dmin is even, this scheme is partially ineffective. For instance, if dmin = 4, the value
of m (the number of errors that can be corrected) comes out to be 1.5, which is rounded off to 1. This means
only one error can be corrected for dmin = 4 and thus, some of the code's capability is wasted.
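As an illustration of how dmin determines these capabilities, the following sketch computes the minimum Hamming distance of a small code together with the numbers of errors it can detect and correct; the four five-bit codewords used here are an arbitrary example, not taken from the text.

    from itertools import combinations

    def d_min(code):
        # smallest Hamming distance over all pairs of distinct codewords
        return min(sum(a != b for a, b in zip(x, y))
                   for x, y in combinations(code, 2))

    code = ['00000', '01011', '10101', '11110']
    d = d_min(code)
    print(d)             # 3 for this code
    print(d - 1)         # errors that can always be detected (dmin >= n + 1)
    print((d - 1) // 2)  # errors that can be corrected      (dmin >= 2m + 1)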
               9. Explain various error-detection methods with the help of suitable examples.
           Ans: To perform error detection, a few redundant bits must be added to the data. Redundant bits are
also called check bits, as they do not carry any useful information; however, their presence in the data block
enables the receiver to detect errors that have occurred during transmission. Some of the commonly used
        error-detection methods are discussed here.
        Parity Checking
        This is the simplest method for detecting errors in the data. An extra bit known as parity bit is added
        to each word that is being transmitted. In a word, the parity bit is generally the MSB and remaining bits
        are used as data bits. Parity of a word can be either even or odd depending on the total number of 1s in
        it. If the total number of 1s in a word including parity bit is even, then it is called even parity while if
        the total number of 1s in a word including parity bit is odd, then it is called odd parity.
            Depending on the type of parity required, the parity bit can be set to either 0 or 1. For example, if data
        is to be sent with odd parity, then the parity bit is set to 0 or 1, such that the number of 1s becomes odd.
        Similarly, to obtain even parity, the parity bit is set to 0 or 1, such that the total number of 1s in the whole
word is even. The receiver detects an error when the parity of the received data differs from the expected
parity, that is, when the receiver expects even parity but receives data with odd parity.
        If the receiver detects an error, it discards the received data and requests the sender to retransmit the
        same byte. Examples given in Q14 and Q15 illustrate the use of simple parity checking method.
            The detection of errors by the parity checking method depends on the number of errors that have
        occurred. If the total number of errors in the received data is odd, then the parity of received data will be
        different from that of transmitted data; therefore, error will be detected. However, if the number of errors
        in the received data is even, the parity of received data will be the same as that of the transmitted data
        and hence, the receiver will not be able to detect any errors. The main advantage of parity method is that
        no additional storage or computational overhead is needed for generating parity. The parity checking
        method has certain disadvantages too, which are listed below:
Parity checking cannot correct errors.
An even number of errors cannot be detected.
It is not possible to find the location of the erroneous bit.
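A minimal sketch of even-parity generation and checking is given below; appending the parity bit after the data bits (rather than placing it at the MSB) is simply a convention assumed for this example.

    def add_even_parity(data_bits):
        # append a parity bit so that the total number of 1s in the word is even
        parity = data_bits.count('1') % 2
        return data_bits + str(parity)

    def parity_ok(word):
        # an even-parity word must contain an even number of 1s
        return word.count('1') % 2 == 0

    word = add_even_parity('1011001')   # gives '10110010'
    print(parity_ok(word))              # True
    print(parity_ok('10110011'))        # False: a single-bit error is detected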
        Checksum
To overcome the inability of parity checking to detect multiple errors, the checksum method was introduced.
The checksum method can be used for detecting multiple errors within the same word. As the words of a
block of data bytes are transmitted, their contents are added one by one and the running sum is maintained
at the sender's side. After all the words of the block have been transmitted, the total sum obtained up to
that time (called the checksum byte) is also transmitted. While computing the checksum byte,
the carries of the MSB are discarded. The example described in Q18 illustrates how to compute the
checksum byte.
             At the receiver’s end, the checksum is again computed by adding the bits of received bytes. The
        newly computed checksum is compared with the checksum transmitted by the sender. If both are found
to be the same, then it implies no error; otherwise, errors are present in the received block of data. Some-
times, the 2's complement of the checksum is transmitted instead of the checksum. In such a case, the
receiver adds the bytes of the received data block together with the 2's complement of the checksum obtained
from the sender. If the sum is zero, then there is no error in the data; otherwise, errors are present. The
         advantage of checksum method is that it can detect burst errors.
             The performance of checksum detection method is better than simple parity checking, as it can detect
         all odd number of errors and most of the even number of errors.
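The byte-wise addition with discarded carries can be sketched as follows; the datawords used for the check are those of Q18, and the last line mirrors the 2's-complement verification mentioned above.

    def checksum_byte(blocks):
        # add the bytes one by one, discarding any carry out of the MSB (keep 8 bits)
        total = 0
        for b in blocks:
            total = (total + b) & 0xFF
        return total

    data = [0b10110011, 0b10101011, 0b01011010, 0b11010101]
    c = checksum_byte(data)
    print(format(c, '08b'))                               # 10001101

    # receiver side: the data plus the 2's complement of the checksum must sum to zero
    assert (sum(data) + ((-c) & 0xFF)) & 0xFF == 0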
Cyclic Redundancy Check (CRC)
This method is based on a predetermined pattern of bits known as the generator polynomial, denoted by G(x). Notice that the
high- and low-order bits of the generator should always be 1.
           CRC is based on binary division.  Initially, at the sender’s end, n 0s are appended to the bit string
        (say, d bits) to be transmitted where n is one less than the number of bits in the generator (say, m bits)
or, equivalently, equal to the degree of the generator polynomial. Then, the resultant bit string of d + n
bits is divided by the generator using modulo-2 arithmetic, in which carries and borrows are ignored in
addition and subtraction, respectively. Here, it is important to note that whenever, during the division, the
        leftmost bit of the dividend becomes zero, the regular divisor is not used for division; instead, a divisor
        containing all zeros is used. After performing the division, the remainder bits (say, r bits), called redun-
        dant bits or CRC, are added to the dividend to form the codeword of d  +  r bits. Notice that the number
        of bits in CRC must be exactly one less than the number of bits in generator, that is, r  =  m–1. Finally,
        the codeword of d  +  r bits is transmitted to the receiver. The example described in Q19 illustrates how
        to generate the CRC code.
           At the receiver’s end, the received codeword of d  +  r bits is again divided by the same generator as
        used by the sender. If the remainder obtained after this division is non-zero, the receiver comes to know
        that an error has occurred and thus, rejects the received data unit; otherwise, the receiver knows that
        there is no error in the received data and thus, accepts the same. The example described in Q20 illustrates
        how to detect errors using CRC method.
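The modulo-2 division can be sketched with simple string operations. This is only an illustrative sketch (not an optimized CRC implementation); the two calls below reproduce the results worked out in Q19 and Q20.

    def crc_remainder(dataword, generator):
        # append (len(generator) - 1) zeros, then perform modulo-2 (XOR) division
        n = len(generator) - 1
        dividend = list(dataword + '0' * n)
        for i in range(len(dataword)):
            if dividend[i] == '1':                       # otherwise the all-zero divisor is used
                for j, g in enumerate(generator):
                    dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
        return ''.join(dividend[-n:])

    def crc_check(codeword, generator):
        # divide the received codeword itself; a non-zero remainder means errors are present
        n = len(generator) - 1
        received = list(codeword)
        for i in range(len(codeword) - n):
            if received[i] == '1':
                for j, g in enumerate(generator):
                    received[i + j] = str(int(received[i + j]) ^ int(g))
        return ''.join(received[-n:])

    print(crc_remainder('110011', '11001'))       # 1001, so the codeword is 1100111001 (Q19)
    print(crc_check('1100100101011', '10101'))    # 1110, non-zero, so errors are present (Q20)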
            10. Reliability of CRC is better than that of simple parity and LRC. Justify this statement.
           Ans: CRC is more reliable than simple parity and LRC, because it can detect single-bit errors, burst
        errors, double errors and an odd number of errors. On the other hand, simple parity can detect only an
odd number of errors and is considered only 50% efficient as compared to CRC, whereas LRC can
        detect only up to three errors. Therefore, both simple parity and LRC are considered less reliable than
        CRC and cannot be implemented in all networks.
              11. Write a short note on error correction.
           Ans: In error correction, the receiver that has received corrupt data needs to discover the original
        data, the exact number of error bits and their location. To enable error correction, encoder and decoder
        are used at the sender’s and receiver’s ends, respectively. Figure 6.3 shows the structure of encoder and
        decoder in error correction. Each m-bit dataword at the sender’s end is converted into an n-bit codeword
(n > m) with the help of a generator that applies some encoding on the dataword, thereby adding r redundant
bits to it. Thus, an n-bit codeword contains m bits of data and r redundant bits, that is,
        n  =  m  +  r. The codeword is then transmitted to the receiver. At the receiver’s end, the checker examines
        the redundant bits to detect and correct errors in the data. After this, the dataword (without redundant bits)
        is passed to the receiver.
[Figure 6.3 Structure of Encoder and Decoder in Error Correction: at the sender, the generator of the encoder converts the m-bit dataword into a codeword; at the receiver, the checker of the decoder examines the codeword and delivers the correct m-bit dataword]
            The two main methods for error correction are forward error correction (FEC) and retransmission.
         FEC is the technique in which the receiver makes guesses to find out the actual message using the
redundant bits. In this technique, no request can be made for the retransmission of the message; thus, it
is useful only when the number of errors is small. In the retransmission technique, whenever a
         receiver detects that there is some error in the received data, it rejects the received data and asks the
         sender to retransmit the data. The receiver continues to do so until it receives the data that it considers
         free of errors.
              12. Explain Hamming codes in detail.
            Ans: Hamming code is a linear block code that was developed by a scientist named R.W. Hamming.
        It is a single-bit error-correction technique. The basic idea is to insert parity bits in between the data bits
to be transmitted. The parity bits are placed at each 2^i bit position, where i = 0, 1, 2, and so on. That is,
the first parity bit is placed at the first (2^0 = 1) position, the second parity bit at the second (2^1 = 2) position,
the third parity bit at the fourth (2^2 = 4) position and so on.
There can be any number of bits (say, n) in the Hamming code including data bits (d) and parity
bits (r). To find the number of parity bits required to form the Hamming code, the
condition 2^r >= n + 1, that is, 2^r >= d + r + 1, must be satisfied. For example, if the dataword is of
four bits, then three parity bits will be required, as r = 3 satisfies the condition 2^3 >= (4 + 3 + 1). A seven-
bit Hamming code is commonly used that comprises four data bits and three parity bits. Figure 6.4
        shows the structure of seven-bit Hamming code.
Bit position:   7    6    5    4    3    2    1
                D7   D6   D5   P4   D3   P2   P1
Figure 6.4 Structure of Seven-bit Hamming Code
           The value of parity bits can be set to either 1 or 0. The value of P1 is set to 0 or 1, such that the bits
        at positions 1, 3, 5 and 7, that is, P1, D3, D5 and D7, make an even parity. The value of P2 is set to 0 or 1,
        such that the bits at positions 2, 3, 6 and 7, that is, P2, D3, D6 and D7, together make an even parity.
        Similarly, the value of P4 is set to 0 or 1, such that the bits at positions 4, 5, 6 and 7, that is, P4, D5, D6
        and D7, together make an even parity.
           After Hamming code has been generated at the sender’s side, the code is transmitted to the receiver. At
        the receiver’s end, it is decoded back into the original data. The bits at positions (1, 3, 5 and 7), (2, 3, 6 and
        7) and (4, 5, 6 and 7) are again checked for even parity. If each group of bits makes an even parity, then the
        received codeword is free from errors; otherwise, the error exists. In case of an error, a three-bit number is
        made from three parity bits (P4P2P1) and its decimal equivalent is determined, which gives the position of
        erroneous bit. After detecting the position of error bit, it is corrected by inverting the error bit (refer Q21).
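The placement and checking rules described above can be sketched as follows for the seven-bit code; the decoding step computes the error position exactly as described, and the sample input is the received word of Q22.

    def hamming7_encode(d7, d6, d5, d3):
        # even parity over the position groups (1,3,5,7), (2,3,6,7) and (4,5,6,7)
        p1 = (d3 + d5 + d7) % 2
        p2 = (d3 + d6 + d7) % 2
        p4 = (d5 + d6 + d7) % 2
        return [d7, d6, d5, p4, d3, p2, p1]        # positions 7 down to 1, left to right

    def hamming7_error_position(code):
        d7, d6, d5, p4, d3, p2, p1 = code
        c1 = (p1 + d3 + d5 + d7) % 2               # check over positions 1, 3, 5, 7
        c2 = (p2 + d3 + d6 + d7) % 2               # check over positions 2, 3, 6, 7
        c4 = (p4 + d5 + d6 + d7) % 2               # check over positions 4, 5, 6, 7
        return c4 * 4 + c2 * 2 + c1                # 0 means no error detected

    received = [1, 1, 0, 0, 1, 0, 1]               # the received codeword of Q22 (D7 ... P1)
    pos = hamming7_error_position(received)
    print(pos)                                     # 3, so the bit at position 3 (D3) is wrong
    received[7 - pos] = 1 - received[7 - pos]      # invert the erroneous bit
    print(received)                                # [1, 1, 0, 0, 0, 0, 1], the corrected code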
13. Apply bit stuffing to the given bit stream 01101111111111111110010.
Ans: In bit stuffing, the data link layer stuffs one zero after each group of five consecutive 1s. Thus,
the outgoing bit stream after bit stuffing will be:
01101111101111101111100010
(the stuffed bits are the three 0s inserted after each group of five consecutive 1s)
Sender's block (each row is one byte; the last row contains the VRC parity bits):
    1 1 0 0 0 0 0 0
    0 1 1 0 1 1 1 1
    0 1 1 0 0 0 0 0
    0 0 0 1 1 1 0 1
    0 0 0 0 0 0 0 0
    1 1 1 1 1 1 1 1
    VRC bits: 1 1 0 0 1 1 1 1

Received block (one byte row and one parity column show wrong parity):
    1 1 0 0 0 0 0 0
    0 1 1 0 1 1 1 1
    0 1 0 0 0 0 0 0    (wrong parity)
    0 0 0 1 1 1 0 1
    0 0 0 0 0 0 0 0
    1 1 1 1 1 1 1 1
    VRC bits: 1 1 0 0 1 1 1 1    (wrong parity)
Since the LRC bit of one row and the VRC bit of one column show wrong parity, there is an
error in the received data stream. The erroneous bit is present at the intersecting position of that
row and column, that is, the fourth bit of the third byte is incorrect.
             18. Compute the checksum byte for the following datawords.
             10110011    10101011         01011010          11010101
Ans: Adding the given bytes one by one and discarding any carry out of the MSB (as described in Q9):
    10110011 + 10101011 = 01011110 (carry discarded)
    01011110 + 01011010 = 10111000
    10111000 + 11010101 = 10001101 (carry discarded)
Thus, the checksum byte is 10001101.
           19. Given message is M(X) = X^5 + X^4 + X + 1 and the generator is G(X) = X^4 + X^3 + 1.
        Compute CRC code.
          Ans: Given dataword = 110011
                              divisor = 11001
          Since the degree of generator polynomial is four, four zeros will be appended to the given dataword.
        Thus, the dividend will be 1100110000. Now, the CRC will be computed as shown below:
                 100001
        11001 ) 1100110000
                11001
                -----
                 00001
                 00000     (as the MSB is 0, the zero divisor is used)
                 -----
                  00010
                  00000
                  -----
                   00100
                   00000
                   -----
                    01000
                    00000
                    -----
                     10000
                     11001
                     -----
                      1001     (CRC)
        The CRC code is obtained by appending CRC (that is, 1001) to the given dataword (that is, 110011).
        Thus, the CRC code is 1100111001.
            20. The codeword is received as 1100100101011. Check whether or not there are errors in the
        received codeword, if the divisor is 10101.
          Ans: To check codeword for errors, the received codeword is divided by the given divisor, that is,
        10101 as shown below.
                 111110001
        10101 ) 1100100101011
                10101
                -----
                 11000
                 10101
                 -----
                  11010
                  10101
                  -----
                   11111
                   10101
                   -----
                    10100
                    10101
                    -----
                     00011
                     00000
                     -----
                      00110
                      00000
                      -----
                       01101
                       00000
                       -----
                        11011
                        10101
                        -----
                         1110     Remainder (CRC)
            Since the remainder obtained (CRC) is non-zero, the received codeword contains errors.
             21. For the bit sequence 1001110:
                 (a) Calculate the number of parity bits that will be required to construct Hamming code.
                 (b) Construct the Hamming code.
Ans: (a) To calculate the number of parity bits, the following condition must be satisfied:
    2^r >= d + r + 1
Here, the dataword contains d = 7 bits. For r = 4, 2^4 = 16 >= 7 + 4 + 1 = 12, so the condition is satisfied. Thus, four parity
bits are required to construct the Hamming code and these parity bits will be inserted at the first, second, fourth and
eighth positions.
             (b) The structure for Hamming code for the data 1001110 is shown below.
Bit position:   11    10    9     8     7     6     5     4     3     2     1
                D11   D10   D9    P8    D7    D6    D5    P4    D3    P2    P1
Value:          1     0     0     ?     1     1     1     ?     0     ?     ?     (parity bits to be decided)
            As the bits D3D5D7D9D11 are 01101, thus, P1 = 1 to establish the even parity.
            As the bits D3D6D7D10D11 are 01101, thus, P2 = 1 to establish the even parity.
            As the bits D5D6D7 are 111, thus, P4 = 1 to establish the even parity.
            As the bits D9D10D11 are 001, thus, P8 = 1 to establish the even parity.
            Therefore, the Hamming code which will be transmitted to the receiver is:
D11   D10   D9    P8    D7    D6    D5    P4    D3    P2    P1
1     0     0     1     1     1     1     1     0     1     1     (complete codeword)
             22. A seven-bit Hamming code is received as 1100101. What is the correct code?
            Ans: The received codeword is:
D7    D6    D5    P4    D3    P2    P1
1     1     0     0     1     0     1     (received codeword)
            As the bits P1D3D5D7 (that is, 1101) together make an odd parity, there is an error. Thus, P1 = 1.
            As the bits P2D3D6D7 (that is, 0111) together make an odd parity, there is an error. Thus, P2 = 1.
            As the bits P4D5D6D7 (that is, 0011) together make an even parity, there is no error. Thus, P4 = 0.
            The erroneous bit position can be determined as follows:
P4    P2    P1
0     1     1     (three-bit error word)
The decimal equivalent of (P4P2P1), that is, (011) is 3. Hence, the third bit in the received codeword
contains the error, so by inverting the third bit, the correct code can be obtained as shown below.
D7    D6    D5    P4    D3    P2    P1
1     1     0     0     0     0     1     (correct codeword; the erroneous third bit has been inverted)
Multiple-Choice Questions
1. Which of the following services is not provided by the data link layer?
   (a) Unacknowledged connectionless
   (b) Unacknowledged connection-oriented
   (c) Acknowledged connectionless
   (d) Acknowledged connection-oriented
2. The process of breaking a stream of bits into frames is known as __________.
   (a) Bit stuffing       (b) Byte stuffing
   (c) Framing            (d) Character count
3. What is the process of adding one extra 0 whenever there are five consecutive 1s in the data, so that the receiver does not mistake the data for a flag?
   (a) Bit stuffing       (b) Bit padding
   (c) Byte stuffing      (d) Byte padding
4. In single-bit error, _____ bit is altered during the transmission.
   (a) Zero               (b) One
   (c) Two                (d) None of these
5. CRC computation is based on
   (a) OR operation       (b) AND operation
   (c) XOR operation      (d) NOR operation
6. Which of the following error-detection methods involves the use of parity bits?
   (a) LRC                (b) VRC
   (c) Checksum           (d) Both (a) and (b)
7. VRC parity bit is associated with ________.
   (a) Rows               (b) Columns
   (c) Both (a) and (b)   (d) None of these
8. What is the formula to calculate the number of parity bits (r) required to be inserted in a dataword (d bits) to construct the Hamming code?
   (a) 2^r >= d + r       (b) 2^r >= d
   (c) 2^d >= d + r       (d) 2^r >= d + r + 1
9. Hamming code can detect up to ________ errors if the minimum Hamming distance is 3.
   (a) 2                  (b) 3
   (c) 4                  (d) 5
10. If a codeword in Hamming code is of seven bits, then how many parity bits does it contain?
   (a) 2                  (b) 3
   (c) 7                  (d) 9
        Answers
        1. (b)     2. (c) 3. (a) 4. (b) 5. (c) 6. (d) 7. (b)         8. (d)   9. (a)   10. (b)
[Figure: Stop-and-Wait Flow Control. The sender transmits a frame and cannot send the next frame until the receiver returns an ACK; if the receiver holds the ACK, the sender remains blocked]
Mostly, the sender breaks a large block of data into smaller blocks called frames because sending
the message in larger blocks may increase the transmission time, and errors are more likely to occur.
In contrast, when a message is transmitted in smaller frames, errors may be detected earlier and a smaller
amount of data needs to be retransmitted. Another reason for breaking data into frames may be the limited buffer
size of the receiver, due to which an entire large frame cannot be processed at a single time, thus causing
long delays. The disadvantage of sending small frames for a single message, however, is that the stop-and-wait
method cannot be used effectively because in this protocol only one frame can be sent at a time (either
from sender to receiver or vice versa), which results in underutilization of the communication link.
        Link Utilization
To understand the link utilization in stop-and-wait flow control, let tframe denote the time
taken in transmitting all bits of a frame and tprop denote the propagation time from the sender to the
receiver. Now, the propagation delay (a), which is the ratio of the propagation time to the frame transmission
time, can be expressed as
    a = tprop/tframe
If a < 1, then tprop < tframe, which implies that the frame is long enough that its leading bits reach the
receiver before the sender has finished transmitting the entire frame; that is, the link is utilized efficiently. On
the other hand, if a > 1, then tprop > tframe, which implies that the sender has transmitted the entire
frame while its leading bits have still not arrived at the receiver. This results in underutiliza-
tion of the link.
3. Derive the expression for the maximum possible utilization of link in stop-and-wait
flow control.
           Ans: Suppose a long message is to be transmitted between two stations. The message is divided
        into a number of frames F1, F2, F3,…, Fn. The communication between stations proceeds in such a
        manner that the frames (from station 1) and the ACKs (from station 2) are sent alternatively. That is,
        initially sender (station 1) sends F1, then receiver (station 2) sends an ACK and then sender sends F2
        then receiver sends an ACK and so on.
Now, if TF denotes the time taken in sending one frame and receiving an ACK, then the total time (T)
taken to send the message can be expressed as
    T = n*TF
where n is the total number of frames and TF can be expressed as
    TF = tprop + tframe + tproc + tprop + tack + tproc
         where
         tprop is the propagation time from station 1 to station 2.
         tframe is the time taken in transmitting all bits of the frame.
         tproc is the processing time by each station to respond to an incoming event.
         tack is the time to transmit an ACK.
            tprop + tframe + tproc is the total time taken by the station 1 to send the frame and getting it
        processed at the receiver’s end and tprop + tack + tproc is the total time taken by the station 2 to send
         the ACK frame and getting it processed by the sender.
            Therefore,      T = n(tprop + tframe + tproc + tprop + tack + tproc)
           Assuming that the processing time is negligible and the ACK frame is very small as compared to the
        data frame, we can neglect tproc and tack. Thus, T can be written as
                                                 T = n(2tprop + tframe)
           As n*tframe is the time that is actually involved in sending the frames and rest is considered as
        overhead, the link utilization (U) may be given as
    U = (n*tframe) / (n*(2tprop + tframe)) = tframe/(2tprop + tframe)
    ⇒ U = 1/(1 + 2a)
        where a = tprop/tframe = propagation delay.
            This is the maximum utilization of the link, which can be achieved in situations where propagation
        delay (a) is constant and fixed-length frames are often sent. However, due to overhead bits contained in
        a frame, the actual utilization of the link is usually lower than this.
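The expression U = 1/(1 + 2a) is easy to evaluate numerically; the frame length, data rate and propagation time used below are assumed values chosen only for illustration.

    t_frame = 1000 / 1e6        # transmission time of a 1,000-bit frame at 1 Mbps (seconds)
    t_prop  = 10e-3             # assumed one-way propagation time of 10 ms
    a = t_prop / t_frame        # propagation delay a = tprop/tframe
    U = 1 / (1 + 2 * a)         # maximum stop-and-wait link utilization
    print(a, U)                 # a = 10.0, U is roughly 0.048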
             4. Derive the expression for the propagation delay showing that it is proportional to the bit
        length of the channel.
           Ans: We know, propagation delay (a) = propagation time/transmission time.
        Now, propagation time              = d/V
where d is the distance of the link and V is the velocity of propagation. For unguided (wireless) transmission,
V = 3 × 10^8 m/s, while for guided media, V = 0.67 × 3 × 10^8 m/s (approximately).
        Transmission time               = L/R
        where L is the length of the frame in bits and R is the data rate of transmission. Therefore,
    a = (d/V) / (L/R) = Rd/VL
        Since, we know that the bit length of channel (B) = Rd/V
                                                    ⇒ a = B/L
        Hence,                  a ∝ B
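A quick numeric check of a = Rd/VL; the link length, data rate and frame length below are assumed values for illustration only.

    R, d, L = 1e6, 10e3, 1000   # data rate (bps), link length (m), frame length (bits)
    V = 0.67 * 3e8              # propagation velocity in guided media (m/s)
    B = R * d / V               # bit length of the channel, i.e. bits 'in flight'
    a = B / L                   # equals (d/V) / (L/R)
    print(B, a)                 # roughly 49.75 bits and a of about 0.05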
[Figure 7.2 Sliding Window Flow Control: the sender's and receiver's windows, drawn over the sequence numbers 0 to 7, shrink as frames are sent and received and expand again as acknowledgements (RR) are returned]
In the situation shown in Figure 7.2, station A can transmit frames numbered from 0 to 6 without having
to wait for any ACK from B and B can receive frames numbered from
        0 to 6 from A, respectively. As the frames are sent or received at the sender’s and receiver’s end respec-
        tively, the window size shrinks. For example, after transmitting frames F0, F1, F2 and F3, the size of A’s
        window shrinks to three frames and after receiving frames F0, F1, F2 and F3, the size of B’s window
        shrinks to three frames (Figure 7.2). Similarly, the window size expands as the ACKs are received or sent
        by the sender and receiver, respectively. Therefore, the scheme is named as sliding window flow control.
            Notice that at A’s end in Figure 7.2, the frames between the vertical line and the box indicate those
        frames that have been sent but not yet acknowledged by B. Similarly, at B’s end, the frames between the
        vertical line and the box indicate those frames which have been received but not yet acknowledged.
As each sequence number occupies some field in the frame, the range of sequence numbers that
can be used is limited. In general, for a k-bit field, the range of sequence numbers will be from 0 to 2^k - 1
and frames will be numbered modulo 2^k. That is, after sequence number 2^k - 1, the next number
again will be 0. For example, for a 3-bit sequence number, the frames will be numbered sequentially
from 0 to 7 and the maximum window size will be 7 (2^3 - 1). Thus, seven frames can be accommodated
in the sender's window and the receiver can also receive a maximum of seven frames at a time.
        Link Utilization
        In sliding window flow control, the link utilization (U) depends on the window size (N) and propagation
        delay (a) and is expressed as
    U = 1,               if N >= 2a + 1
    U = N/(2a + 1),      if N < 2a + 1
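A small sketch of the piecewise expression above; the window sizes and the value a = 10 are assumed example values.

    def sliding_window_utilization(N, a):
        # the link stays fully utilized once the window is large enough to cover the round trip
        return 1.0 if N >= 2 * a + 1 else N / (2 * a + 1)

    print(sliding_window_utilization(7, 10))    # 7/21, roughly 0.33
    print(sliding_window_utilization(25, 10))   # 1.0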
6. Define piggybacking. What is its usefulness?
           Ans: Usually, the link used between the communicating devices is full duplex and both the sender
        and receiver need to exchange data. In addition to data, both sender and receiver send ACK for the
        received data to each other. To improve the efficiency in a two-way transmission, a technique called
        piggybacking can be employed, which permits an ACK from either device (sender or receiver) to be
        assembled with the next outgoing data frame from that device instead of sending a separate ACK frame.
        In case a device has only ACK but no data to send, it needs to use a separate ACK frame. However, if
        the device has to send the ACK followed by data, it can temporarily withhold the ACK and later, include
        the ACK in the outgoing data frame.
           By temporarily delaying the outgoing ACKs and sending them along with data frames, piggybacking
results in better use of the available channel bandwidth. Moreover, by using the piggybacking technique,
fewer frames need to be sent, which further implies fewer interrupts as well as fewer buffers
required at the receiver's end.
              7. What is meant by lost frame and damaged frame? List the common techniques of error
        control method.
           Ans: A frame that fails to arrive at the destination node due to noise burst is called lost frame. In this case,
the receiver does not know that the frame has been transmitted by the sender. A damaged frame is a frame
in which some bits have been changed or altered during the transmission. In case of a lost frame, no frame is
received by the receiver, while in case of a damaged frame, a frame is received but it is not an exact copy of
the transmitted frame. The common error-control techniques are stop-and-wait ARQ, Go-back-N ARQ and
selective-reject ARQ (refer Q8-Q11).
              8. Explain the mechanism of stop-and-wait ARQ.
          Ans: Stop-and-Wait ARQ is the extended form of stop-and-wait flow control technique where
        data frames are retransmitted if the sender does not receive any ACK for the frame. In this method,
        the sender transmits a single frame and waits for the ACK from the receiver before sending the next
        frame. The sender cannot transmit a new frame until the ACK is received from the other side. Since
        either the frame or the ACK can be transmitted at a time, this technique is also known as one-bit sliding
        window protocol.
            Notice that the ACK will not be received by the sender if the frame has been lost during the
        transmission, that is, it has not been received by the receiver or if the receiver has received the frame
         free of errors but its ACK has been lost or delayed during the transmission. In both cases, the sender
         needs to retransmit the frame. For this, the sender maintains a copy of sent data frame in its buffer so
         that they can be retransmitted whenever required.
            To identify the frames to be retransmitted, sequence numbers are attached to the consecutive data
         frames and ACKs. As only one frame or one ACK can be on the transmission link at a given time, only
         two numbers 0 and 1 are needed to address the data frames and ACK. Data frames are alternatively
         numbered 0 and 1, that is, first data frame will be marked as F0, second will be F1, third again will be
marked as F0 and so on. The number attached to an ACK tells the sender which numbered data frame to
send next. For example, if ACK1 is sent by the receiver, then the sender comes to know that F0
         has been received successfully and now, it has to send F1 to the receiver. This process helps the receiver
         to discard the duplicate frames; however, ACKs need to be sent by the receiver for the duplicate frames
         also so as to keep the sender in synchronization.
        Link Utilization
         The link utilization (U) in the stop-and-wait ARQ is given as
                                               U = (1 - p)/(1 + 2a)
         where
p is the probability that a frame is received in error
a is the propagation delay, which is also equal to (tprop*R)/L, where tprop, R and L denote the
propagation time, data rate of the link and number of bits in the frame, respectively.
              9. What are the two types of sliding window ARQ error control?
           Ans: To provide the efficient utilization of the link, the sliding window error control technique is
        adopted. The sliding window ARQ is also known as continuous ARQ because it allows the sender
        to send many frames continuously without waiting for an ACK. There are two techniques, namely,
        Go-back-N ARQ and selective-reject ARQ that are based on the concept of sliding window.
              10. Explain the mechanism of Go-back-N ARQ technique.
           Ans: Go-back-N ARQ is the most commonly used form of error control that is based on the sliding
        window flow control. In this technique, the sender sends the data frames depending on its window size
        but at the receiving end window size is always one. That is, the sender can send many frames without
        waiting for an ACK from the receiver but the receiver can receive only one frame at a time. For each
frame that is received without any error, the receiver acknowledges it by sending an RR (receive ready) message.
        However, when it detects error in the frame, it discards that frame and sends a negative acknowledgment
        reject (REJ) to the sender. The receiver continues to discard the further transmitted frames until the
        frame in error is received correctly. When the sender receives an REJ, it retransmits the frame in error
        plus all the succeeding frames that it has sent already.
            Figure 7.3 shows the mechanism of Go-back-N ARQ where the sender’s window size is seven. That
        is, the sender can transmit frames F0-F6 to the receiver without waiting for an ACK.
[Figure 7.3 Go-back-N ARQ with a sender window size of seven: F0 is delivered and acknowledged with RR1; F1 arrives in error, so the receiver sends REJ1 and discards the already-transmitted F2 and F3; the sender then goes back and retransmits F1, F2 and F3, which are acknowledged with RR2 and RR3]
[Figure 7.4 Selective-Reject ARQ with a window size of seven: F0 is acknowledged with RR1; F1 arrives in error, so the receiver returns SREJ1 while buffering the frames that follow; the sender retransmits only F1, which is acknowledged with RR4, and then continues with F4 and F5, acknowledged with RR6]
Now, consider the case in which a frame, say F4, gets lost or delayed during the transmission. In both
cases, the sender does not receive any further acknowledgements because the
receiver will deny accepting any frame other than F4. Thus, the sender has to retransmit F4.
        Link Utilization
        The link utilization (U) in Go-back-N ARQ method can be considered in the following two situations:
          1. When N > 2a + 1, U = (1 - p)/(1 + 2ap) and
          2. When N < 2a + 1, U = N(1 - p)/((2a + 1)*(1 – p + Np)).
             11. Explain the mechanism of selective-reject ARQ technique.
           Ans: Selective-reject ARQ (also known as selective-repeat ARQ) method offers an improvement
        over Go-back-N by retransmitting only the lost or damaged frame but not all the frames succeeding it. In
this method, usually the size of the receiver's window is the same as that of the sender's window, and a damaged
frame is reported by the receiver by transmitting a selective reject (SREJ) frame to the sender. As
this method retransmits only the lost or rejected frames, a large buffer needs to be maintained at the receiver's
end so that if some frame is not received in the proper order, the receiver can accept the rest of the incoming
frames and buffer them until the missing frame has been received correctly. When all the data frames have been
received, the receiver arranges them in the proper sequence and delivers them to the next upper layer.
           Figure 7.4 shows the selective-reject ARQ mechanism where the window size of both sender and
        receiver is seven. That is, the sender can transmit frames F0-F6 to the receiver without waiting for an
        acknowledgement and the r eceiver can also receive frames F0-F6.
        Lost or Delayed RR
        To understand how lost or delayed RR is handled, suppose that the receiver receives frame F2 and in
        response sends RR3 which either gets lost or gets delayed. In both cases, when RR4 reaches the sender, the
         sender assumes it as a cumulative ACK of frames up to F3 and thus, the operation remains unaffected.
        Link Utilization
        The link utilization (U) in selective-reject ARQ can be considered in the following two situations:
          1. when N ≥ 2a + 1, U = 1 - p and
          2. when N < 2a + 1, U = (N*(1 - p))/(2a + 1).
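The utilization expressions of stop-and-wait ARQ (Q8), Go-back-N ARQ (Q10) and selective-reject ARQ can be placed side by side in a short sketch; the values of p, a and N below are assumed example values.

    def stop_and_wait_arq(p, a):
        return (1 - p) / (1 + 2 * a)

    def go_back_n(p, a, N):
        if N >= 2 * a + 1:
            return (1 - p) / (1 + 2 * a * p)
        return N * (1 - p) / ((2 * a + 1) * (1 - p + N * p))

    def selective_reject(p, a, N):
        if N >= 2 * a + 1:
            return 1 - p
        return N * (1 - p) / (2 * a + 1)

    p, a, N = 1e-3, 10, 7                  # assumed frame-error probability, delay ratio, window size
    print(stop_and_wait_arq(p, a))         # roughly 0.048
    print(go_back_n(p, a, N))              # roughly 0.331
    print(selective_reject(p, a, N))       # roughly 0.333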
        Types of stations
        Three types of stations defined by HDLC are as follows:
          Primary Station: This station has the responsibility of controlling the operation of the link. The
            frames that are sent by a primary station are called commands.
          Secondary Station: This station works under the control of the primary station and the frames that
            are sent by a secondary station are called responses. Each secondary station on the line is linked with
            the primary station via a separate logical link. During communication between primary and second-
            ary stations, the primary station is responsible for establishing, managing and terminating the link.
          Combined Station: This station can act as both a primary and a secondary station and thus, can
            send both commands and responses. It does not rely on any other stations on the network and thus,
            no other station can control a combined station.
        Frame Structure
The frame structure of I-frames and U-frames is exactly the same (Figure 7.5) and consists of six fields: a
starting flag field, an address field, a control field, an information field, a frame check sequence (FCS)
field and an ending flag field. Since an S-frame cannot carry user data, it consists of the remaining five fields,
        excluding the user information field. When multiple frames are transmitted between the communicating
         stations, then the ending flag field of one frame may act as the starting flag field of the next frame and
         each frame is transmitted from left to right.
Figure 7.5:
    I-frame:  Flag | Address | Control | User information       | FCS | Flag
    U-frame:  Flag | Address | Control | Management information | FCS | Flag
    I-frame:  0    | N(S)  | P/F | N(R)
    S-frame:  1 0  | Code  | P/F | N(R)
    U-frame:  1 1  | Code  | P/F | Code
Figure 7.6 Control Field Format for HDLC Frame Types
If the extended frame format, in which the control field is 16 bits long, is used, then the N(S) field becomes larger.
The next single bit after N(S) is the P/F bit, where P stands for poll and F stands for final. The bit is
interpreted as poll when the frame is sent by a primary station to a secondary station, and as final when
the frame is sent by a secondary station to the primary station. The last three bits in the control field
comprise the N(R) subfield that defines an ACK number corresponding to a received frame, which has
been piggybacked on the I-frame.
    Flag | Address | Control | User's data | FCS | Flag
           In order to keep PPP simple, some services have not been implemented in PPP. These services are
        as follows:
It is not concerned with flow control. However, it provides error control to some extent: it enables
errors to be detected, but they cannot be corrected.
It operates only where there is a single sender and a single receiver. Thus, it cannot be used over
multipoint links.
It does not support frame sequencing; thus, frames are not necessarily delivered to the destination
in the same order in which they were sent by the sender.
            18. Explain the frame format of the PPP.
Ans: The PPP is a byte-oriented protocol and uses an HDLC-like frame format (Figure 7.8). The
description of each field in a PPP frame is as follows:
Flag: It is a one-byte long field having the unique pattern of 01111110. This unique pattern is in-
serted at the start and end of each PPP frame. Though the flag field in PPP uses the same bit pattern
as that of HDLC, the flag is treated as a byte in PPP.
          Address: It is a one-byte long field that is set to a constant value of 11111111, which means a broad-
             cast address. This byte can be omitted upon an agreement between two parties during negotiation.
          Control: It is a one-byte long field that is set to a constant value of 00000011, which indicates an
             unnumbered frame. This implies that PPP does not use sequence numbers and ACKs and thus, does
             not provide reliable transmission. Though PPP does not provide flow control and error control is
             also limited to error detection only, the control field can also be omitted if both parties agree to do
             so during negotiation.
          Protocol: It is a two-byte long field that defines what is being carried in the payload field, that is,
             user data or some other information. Only one byte of this field can be used if both parties agree
             during negotiation.
          Payload: It is a variable-length field containing either the user data or other information in the
             form of sequence of bytes. The default maximum length of payload field is 1,500 bytes; however,
             two parties can negotiate on this maximum length. If the flag byte appears in the data in payload
             field, the data is byte stuffed. In case the size of payload field is less than 1,500 bytes or the maxi-
             mum negotiated value, padding is required.
          FCS: It is a two-byte or four-byte long field that is used to detect errors in the transmitted frame.
             This two- or four byte is simply a CRC.
[Figure: PPP transition phases. The connection starts in the dead phase and moves to the establish phase when a carrier is detected; from the establish phase it moves to the authenticate phase, or directly to the network phase if authentication is not needed; successful authentication leads to the network phase, where the network layer is configured, and then to the open phase; when the exchange is done, the connection passes through the terminate phase back to the dead phase, and a failure in the establish or authenticate phase also ends the connection]
             Establish: In this phase, one of the nodes starts the communication and negotiations are made
between the communicating nodes. Link control protocol (LCP) packets and several other
packets are exchanged between the nodes; if the negotiation is successful, the connection
transits to the authenticate phase. Notice that if authentication is not required, the connection directly
switches to the network phase.
              Authenticate: This is an optional phase that can be skipped upon an agreement between both
               nodes during the negotiation in the establishment phase. If both parties agree to undergo this phase
               then several authenticate protocols are used by the communicating nodes to verify each other’s
               identity. If the authentication becomes successful, then the connection moves to the network phase
               else it goes to the terminate phase.
             Network: In this phase, two nodes negotiate on the network layer protocol that should be used
               so that the data can be received. As PPP supports multiple protocols and many protocols may run
               simultaneously at the network layer, the receiver node should negotiate with the sender node on a
               common protocol to be used for exchanging the data at the network layer. After the protocol has
               been decided, the connection switches to open phase.
             Open: In this phase, data packets are exchanged between the communicating nodes. The connection
               remains in this phase until one of the nodes wants to close the connection.
            Terminate: In this phase, the connection link is closed and the nodes can no longer transfer data
               packets between themselves. Certain packets are exchanged between the two nodes to clean up
               the connection and then the link is terminated. After this, the connection goes to the dead phase.
             20. Write a short note on LCP and NCP.
            Ans: The link control protocol (LCP) and network control protocol (NCP) are the PPP protocols
         that are used to establish the link, authenticate the communicating nodes and move the network layer
         data. The LCP is responsible for establishing, configuring, maintaining and terminating the link, and it
         provides the negotiation mechanism. Each LCP packet is encapsulated in the payload field of a PPP frame
         and the value 0xC021 (hexadecimal) is placed in the protocol field of the PPP frame to indicate that PPP is
         carrying an LCP frame (Figure 7.10).
         validated, then the system sends an authenticate-nak packet indicating that access is denied. Whenever
         a PAP packet is being carried in a PPP frame, the protocol field of the PPP frame is set to the value
         0xC023 (hexadecimal).
         allocation can be used when bursty traffic is present in the network. To implement this method, certain
         assumptions are followed.
            Station Model: The model consists of n stations (sometimes, called terminals) such as computers
                and telephones that can work independently on a network. Each terminal has the capability to
                produce frames for transmission. After a frame has been generated, the station remains idle or is
                blocked until the entire frame has been transmitted.
            Single Channel: A single communication channel is shared by all the stations for transmission.
            Collision: When two stations transmit frames simultaneously, the reception of frames may overlap
                in time resulting in a garbled signal. This phenomenon is known as collision. The collided frames
                must be retransmitted by their respective stations.
            Continuous Time and Slotted Time: Some systems follow continuous time assumption while
               others follow slotted time assumption. If continuous time assumption is followed in a network,
                then a station can transmit a frame at any time. However, if slotted time assumption is followed
                then time is partitioned into discrete intervals and a frame can be transmitted only at the beginning
                of a slot. Thus, for synchronization, some mechanism such as a clock is required.
            Carrier Sense and No Carrier Sense: A network may or may not support carrier sensing. A station
                with carrier sense can determine whether the channel is in use before attempting to use it. If the
                channel is busy, no station attempts to use it until it becomes free. However, if the network has no
                carrier sense, a station cannot sense the channel before starting the transmission. If the station needs
                to send a frame, it just transmits it.
              3. What is meant by random access method? Give examples of random access protocols.
          Ans: The method, which allows the stations to access the transmission medium (channel) ran-
        domly at any time, is known as random access method. No station is under the control of any
        other station and there is no specific time for stations to start transmission. Moreover, there are
        no rules to be followed for deciding which station should be allowed access to the channel. If
        multiple users need to access the channel at the same time, they have to compete with each other.
        Hence, this method is also known as contention method. Examples of random access protocols
        are ALOHA, carrier sense multiple access (CSMA), CSMA/collision detection (CD) and CSMA/
        collision avoidance (CA).
        Pure ALOHA
        Pure ALOHA is a simple protocol. It was originally developed for packet radio networks but can be
        applied to any system where a single shared channel is being used among multiple competing users. The
         basic idea is that each station can send a frame whenever it needs to transmit. When two frames are sent
         at the same time, they may collide with each other fully or partially and get corrupted (Figure 8.1). The
         sender, therefore, must retransmit the corrupted frames.
[Figure 8.1: Frames from stations S1, S2 and S3 colliding on a shared broadcast channel in pure ALOHA]
[Figure 8.2: Throughput S versus offered traffic G for pure and slotted ALOHA; slotted ALOHA peaks at S = 1/e ≈ 0.368 at G = 1, while pure ALOHA peaks at S = 1/(2e) ≈ 0.184 at G = 0.5]
         Slotted ALOHA
         Slotted ALOHA is a modification of ALOHA that has been developed to improve the efficiency. It is
         a discrete time system in which time is partitioned into intervals or slots, with each slot corresponding
         to one frame. Slotted ALOHA requires the stations to agree on the boundaries of the slots, and a clock is
         needed to synchronize the stations. In this method, a station can begin the transmission of a frame only
         at the beginning of a slot. If a station is unable to start the transmission at the beginning of a slot, it
         has to wait until the next slot, thereby reducing the chances of collisions. However, collisions may still
         occur when two stations are ready for transmission in the same slot, resulting in garbled frames, which
         have to be retransmitted by the sender (Fig. 8.3).
[Figure 8.3: Collisions in slotted ALOHA, showing frames from stations S1, S2 and S3 on the broadcast channel colliding when they start in the same slot]
[Figure 8.4: Vulnerable period in the pure ALOHA protocol. A frame F1 sent at time t0 can collide with frames F2 and F3 whose transmissions start anywhere in the interval (t0 - t, t0 + t), so the vulnerable period is 2t]
            In slotted ALOHA, the stations are allowed to transmit frames only at the beginning of each time slot. If
         any station misses the chance in one time slot, it must wait for the beginning of the next time slot. This implies
         that a station that started at the beginning of an earlier time slot has already transmitted its frame completely.
         However, a collision may still occur if two stations attempt to transmit at the beginning of the same time slot
         (Figure 8.3). Hence, the vulnerable period of slotted ALOHA is t, which is half of that of pure ALOHA.
               7. Explain the binary exponential back-off algorithm with the help of an example.
            Ans: In case of a collision, the stations whose frames have collided are required to retransmit the
        frames. However, before attempting to retransmit the frames, each station that was involved in the col-
        lision waits for some random amount of time (random delay) to ensure that the collision is not likely
        to occur at the time of retransmission. The random delay increases with each repeated collision.
             Binary exponential back-off algorithm specifies the random amount of time a station has to wait for
        retransmission of a frame after a collision occurs. After a collision occurs, this algorithm works as follows:
              1. Time is divided into discrete slots. For the kth retransmission attempt, a random number r
                 between 0 and 2^k - 1 is selected. The value of k is incremented by one with every retransmission
                 attempt and thus, the range of random numbers also increases.
              2. A station waits for r slots before making an attempt to retransmit the frame.
              3. If the number of collisions (that is, the number of attempts) exceeds 15, the transmission is aborted
                 and failure is reported to the station.
         For example, after two collisions, that is, on the second retransmission attempt, a station will wait for a
         random number of slots chosen between 0 and 3 (that is, 2^2 - 1). If more than one station selects the same
         random number, a collision occurs again. As the number of collisions increases, the range of random numbers
         increases, thereby reducing the chances of collision among the stations. However, the increase in the range of
         random numbers also implies an increase in the average wait time, which results in increased delay.
                8. Explain the working of carrier sense multiple-access protocol.
            Ans: Carrier Sense Multiple Access (CSMA) is a contention-based protocol that was developed
         to overcome the drawbacks of the ALOHA system. This protocol reduces the chances of collisions and
         thus improves the performance of the network. It operates on the principle of carrier sensing, which
         states that any station, before attempting to use the channel, must sense (listen to) it to check whether
         any other station is using the channel. A station sends the data only if it finds the channel free, that is,
         when there is no other carrier.
            There are three variations of CSMA protocol, which are as follows:
            1-persistent CSMA: In this method, when a station needs to transmit a frame, it monitors the
               channel and transmits the frame as soon as it finds the channel idle. However, if it finds the channel
               busy, it continuously senses the channel until the channel becomes free. Despite the fact that carrier
               sensing is used, collisions may still occur if two or more stations find the channel idle at the same time
               and thus transmit frames simultaneously. The performance of this protocol is greatly affected by the
               propagation delay. To understand this, consider that a station sends a frame and the propagation delay is
               greater than the transmission time of the frame. It takes some time for the first bit of the transmitted
               frame to reach and be sensed by the other stations. In the meantime, any other station that becomes
               ready to send may sense the channel idle and thus transmit a frame. This leads to a collision.
               This protocol is so named because a station transmits with probability one on finding the channel idle.
            Non-persistent CSMA: In this method, when a station needs to send a frame, it first senses the channel.
               If the channel is idle, the station transmits the frame instantly. However, if the station finds the channel
               busy, it waits for a random amount of time before sensing the channel again. This method reduces the
               chances of collision as it is rare that two stations will wait for the same amount of time and then retry
               the transmission at the same moment. Due to fewer collisions, the utilization of the network is also improved.
            P-persistent CSMA: This method is used with slotted channels in which each slot duration is
                greater than or equal to the maximum propagation time. In this method, if the channel is found
                idle, a station is allowed to transmit with a probability p. With probability q (which is equal to
                1 - p), the station waits for the beginning of the next time slot and senses the channel again.
                If the channel is idle in the next time slot also, the above method is repeated; the station either
                transmits or waits with probabilities p and q, respectively. This process continues and eventually,
                either that station transmits the frame or some other station starts transmitting the frame. In the
                latter case, the station assumes that a collision has occurred and thus, it waits for a random period
                (computed using the back-off algorithm) and then starts again. A small sketch of this per-slot
                decision is given after this list.
               9. Compare the performance of non-persistent and 1-persistent CSMA protocols.
             Ans: The performance of the non-persistent and 1-persistent CSMA protocols can be compared with
         respect to two parameters: delay and channel utilization. When the load on the network is light, 1-persistent
         CSMA has a shorter delay than non-persistent CSMA because the 1-persistent method permits a station to
         begin sending a frame as soon as the channel is sensed idle. Due to this smaller delay, the channel utilization
         of the 1-persistent method is better than that of the non-persistent method. On the other hand, when the load
         on the network is heavy, the non-persistent method has a shorter delay than the 1-persistent method because
         the number of collisions in the 1-persistent method is higher than that in the non-persistent method. Due to
         the lower probability of collisions, the channel utilization of the non-persistent method is better than that of
         the 1-persistent method. Thus, it can be concluded that at low loads, the 1-persistent method is preferable
         over the non-persistent method, while at heavy loads, the non-persistent method is preferable over the
         1-persistent method.
             10. What is CSMA/CD? Explain its working.
            Ans: Carrier Sense Multiple Access/Collision Detection (CSMA/CD) is a refinement of the CSMA
         protocol. It was developed to overcome the inefficiency of CSMA, in which the stations involved in a
         collision do not stop transmitting the bits of their frames even though the collision has occurred, leading
         to poor channel utilization. CSMA/CD improves channel utilization by making the stations sense the
         channel not only before starting the transmission but also during the frame transmission (referred to as
         collision detection). This implies that in CSMA/CD, both transmission and collision detection are
         continuous processes. Once a station starts transmitting the bits of a frame, it continuously senses the
         channel to detect a collision. As soon as the station detects a collision, it aborts the frame transmission
         immediately rather than finishing the transmission. A collision can be detected by comparing the power
         of the received signal with that of the transmitted signal.
            The CSMA/CD can be in any of the three states, namely, contention, transmission or idle. Conten-
        tion refers to the state in which more than one station is ready to transmit and competing to get the
        channel. Transmission refers to the state in which data are transmitted. Idle refers to the state in which
        channel is idle and no station is transmitting data.
             Figure 8.5 depicts the working of CSMA/CD. When a station has a frame to send, a variable k denoting
         the number of retransmission attempts is initialized to zero. The station applies one of the persistence
         methods to sense the channel. When the channel is found idle, the station starts transmitting the bits of the
         frame while monitoring for a collision as well. If no collision occurs during the entire transmission period,
         the station assumes that the frame has been received successfully. However, if a collision takes place during
         the transmission, the station aborts the transmission and a jam signal is sent to inform all other stations
         on the network about the collision. If the station has not crossed the limit of maximum retransmission
         attempts (kmax), the back-off time is computed for the station using the binary exponential back-off
         algorithm (explained in Q7). The station is made to wait for the back-off period and then restarts the process.
            To better understand CSMA/CD, consider two stations S1 and S2. Suppose when contention period
        starts at time t1, station S1 starts transmitting a frame F1. After some time, station S2 senses the channel
[Figure 8.5: Flowchart of CSMA/CD. The station initializes k = 0, senses the channel, and sends the frame when the channel is idle while detecting collisions; on a collision it applies the binary exponential back-off algorithm to compute the back-off time and retries, reporting success or aborting after too many attempts]
         and finds it idle as the first bit of frame F1 sent by station S1 has not yet reached it. Therefore, station S2
         starts transmitting the bits of frame F2 at time t2; the bits are propagated in both the forward and backward
         directions. A collision occurs between frames F1 and F2, which is detected by station S2 at time t3.
         As soon as station S2 detects the collision, it stops sending further bits of frame F2. Now, suppose that at
         time t4, station S1 also detects the collision; it too aborts the transmission of frame F1. After detecting the
         collision, stations S1 and S2 immediately abort the transmission rather than finishing the transmission of
         frames, which would be garbled anyway (Figure 8.6).
[Figure 8.6: Collision detection in CSMA/CD. S1 starts sending frame F1 at t1; S2, not yet having received the first bit of F1, starts sending frame F2 at t2; the frames collide, S2 detects the collision and aborts at t3, and S1 detects it and aborts at t4]
            The advantage of this protocol over CSMA is that it saves time and bandwidth, as it aborts the transmission
         of a frame as soon as a collision is detected. Thus, it offers better performance than CSMA. It is widely
         used in the MAC sublayer of LANs.
             11. Describe CSMA/CA protocol.
            Ans: In wireless networks, it is difficult to effectively detect collisions using CSMA/CD. This is
         because in wireless networks most of the energy of the signal is lost during transmission and, as a result,
         the signal received by a station has very little energy. That is why, in wireless networks, the need arises
         to avoid collisions rather than detect them. CSMA/CA is a protocol developed for wireless networks to
         help in avoiding collisions.
            Figure 8.7 shows the working of the CSMA/CA protocol. When a station is ready to transmit a frame, a
         variable k denoting the number of retransmission attempts is initialized to zero. The station uses one of
         the persistence methods to sense the channel. If the channel is busy, it senses the channel again, and it
         continues to do so until it finds the channel idle. After the station has found the channel idle, it does not
         send the frame immediately; rather, it waits for an amount of time known as the interframe space (IFS). The
         station is required to wait for the IFS time because some other distant station may already have started the
         transmission of a frame whose leading bits have not yet arrived.
[Figure 8.7: Flowchart of CSMA/CA. The station initializes k = 0, senses the channel until it is idle, waits for the IFS, applies the binary exponential back-off algorithm to choose n between 0 and 2^k - 1 contention slots, transmits the frame and waits for the acknowledgement; on a time-out it sets k = k + 1 and repeats until success or until k reaches its maximum]
             After waiting for the IFS amount of time, the station again senses the channel. If it finds the channel still
         idle, it waits for a random amount of time partitioned into slots. This time is referred to as the contention
         time and is computed using the binary exponential back-off algorithm; a random number n is chosen
         between 0 and 2^k - 1 and the station is made to wait for n time slots. After each time slot, the station
         senses the channel. If the channel is busy, the timer is stopped and is not restarted until the channel is
         found idle.
            After waiting for n time slots, the station sends the frame and starts waiting for its acknowledgement.
        If the station receives the positive acknowledgement before the time-out, it knows that the frame has
        been received successfully. However, if acknowledgement is not received before the time-out, the station
        needs to retransmit the frame. To check whether the station is eligible for retransmission, the value of k is
        increased by one and compared with kmax—the maximum number of retransmissions allowed. If the value
        of k is less than kmax, the whole process is repeated. Otherwise, the station aborts the transmission.
        Bit-Map Protocol
        This protocol involves the division of contention period into n number of slots for n number of stations
        in the network with one slot corresponding to each station. Both stations and slots are numbered from
        0 to n – 1. If a station needs to transmit a frame, it will transmit a 1 bit in its respective slot; no other
        station can transmit in this time slot. At the end of n-bit contention period, each station knows which
        stations wish to transmit. Then, the stations begin transmitting frames in the numerical order. After the
        last ready station has transmitted the frame, the next n-bit contention period starts. In case a station is
         ready to send but its slot has already passed, it must wait for its turn. Since every station knows which
         station has to transmit next, collisions cannot occur.
             To understand the bit-map protocol, consider a network with five stations numbered from 0 to 4. If
         stations 0, 2 and 3 want to transmit frames, they insert a 1 bit into their corresponding contention slots
         and then transmit their frames in the order 0, 2 and 3 only (Figure 8.8).
[Figure 8.8: Bit-map protocol, showing five contention slots numbered 0 to 4 with 1 bits in slots 0, 2 and 3, followed by the data frames of stations 0, 2 and 3]
             To analyse the performance of the channel in the bit-map protocol, let the total number of stations
         be n and the quantity of data per frame be d bits. At low loads, the bit-map is repeated over and over
         for lack of data frames, creating an overhead of n bits per frame. Thus, the efficiency of the channel at
         low loads can be given as
                                                    S_low = d/(n + d)
             On the other hand, at high loads, almost every station has some data to send. As a result, each n-bit
         contention period is shared among n data frames, creating an overhead of one bit per frame. Thus, the
         efficiency of the channel at high loads can be given as
                                                    S_high = d/(1 + d)
             To appropriately divide the stations into groups, adaptive tree-walk algorithm is used, which
        c onsiders all the stations in the network as the leaves of a binary tree as shown in Figure 8.10. The pro-
         cess starts at the root of the tree (node 1). Initially, in the first slot (slot 0), all the stations under node 1
         are allowed to compete for the channel. If there is a single station that is ready to transmit, it is allocated
         the channel. However, if there is a collision, stations under the node 2 try to seize the channel in slot
         1. If some station under node 2 acquires the channel, the next slot after the successful transmission of
         frame is kept reserved for the stations under node 3. However, if collision occurs in slot 1, the stations
          under node 4 contend during slot 2. In essence, in case of a collision during slot 0, the search continues
          recursively in the left and right subtrees of the binary tree, and it stops when a slot is idle or there is only
          one station that is ready to transmit the data.
[Figure 8.10: Adaptive tree-walk algorithm, a binary tree with root node 1, internal nodes 2 to 7, and the stations S1 to S8 as its leaves]
        Reservation
        In this method, any station that wants to transmit a frame has to make a reservation before sending. During
        each time interval, the reservation frame precedes the data frames to be transmitted in that interval.
        A reservation frame consists of mini-slots with the number of mini-slots being equal to the number of
         stations in the network. Whenever a station wants to transmit a data frame, it makes a reservation by
         inserting a 1 bit into its corresponding mini-slot. After making reservations, the stations transmit
         their data frames in numerical order following the reservation frame.
        Polling
        In this method, two types of stations namely, primary node and secondary node exist. A primary node
        refers to the master station through which the exchange of the data takes place irrespective of whether
        the destination station is primary or secondary. Secondary nodes are the terminals that follow the com-
        mands of a primary node. Primary node decides which secondary node is allowed to use the channel and
        transmit at a particular time. Polling involves two operations, namely, poll and select.
            Poll: When the primary node is ready to receive data, it polls or asks all the secondary nodes in a
              round-robin fashion to determine if they have any data to transmit. The primary node sends a mes-
              sage to a secondary node by specifying the maximum number of frames the secondary node can
              send. The secondary node can respond with either negative acknowledgement (NAK) or positive
              acknowledgement (ACK). If the response is NAK, the primary node polls the next secondary node.
              This process goes on in a cyclic manner until any secondary node responds with an ACK. Once a
               positive response is received, the primary node also responds with an ACK to confirm the receipt.
           Select: This operation is performed when primary node has some data to transmit. Before sending
              the data, it alerts the receiving secondary node about the transmission and does not transmit until
              it gets a ready acknowledgement from the receiving secondary node.
            An advantage of polling is that it eliminates collisions and results in better efficiency. However, it suffers
         from some limitations too. First, this protocol requires a large amount of time for notifying the secondary
         nodes and thus introduces a large amount of delay. Second, if the primary node fails, the entire network goes down.
        Token Passing
         In this method, all stations in the network are arranged in a logical ring (Figure 8.11). To allow stations
         to access the channel, a special packet known as a token is circulated among the stations around the ring
         in a fixed order. Whenever a station receives the token from its predecessor and it has some data to send,
         it keeps the token with itself and transmits the data. After transmitting the data, it releases the token and
         passes it to its successor in the ring. In case a station receives the token but has no data to send, it
         immediately passes the token to its successor in the ring.
[Figure 8.11: Token passing, with stations S1 to S4 arranged in a ring and the token T circulating among them]
             In this method, a priority can be assigned to each station in the ring such that whenever a high-priority
         station wants to transmit, a low-priority station has to release the token. In addition, the type of data
         being transmitted by the stations can also be prioritized.
             An advantage of the token-passing method over polling is that there is no primary node in the
         network, so the method is decentralized. However, it also suffers from some limitations. First, if any
         station goes down, the entire network goes down. Second, if some problem occurs, such as the failure of
         the station possessing the token, the token will vanish from the network.
            15. What is the advantage of token-passing protocol over CSMA/CD protocol?
           Ans: In CSMA/CD, a frame may be delivered at the destination only after many collisions, or may not
         be delivered at all. Thus, CSMA/CD is not suitable for real-time applications. On the other hand, in token
         passing, a frame is delivered within a bounded time, and frames can also be assigned priorities. This makes
         token passing suitable for real-time applications.
              16. What is meant by the term channelization? Explain in detail various channelization
        protocols.
            Ans: Channelization refers to a multiple-access method in which the bandwidth of a communica-
         tion link is partitioned in time, frequency or using code among the stations that want to transmit data.
         Three channelization protocols, namely, FDMA, TDMA and CDMA are discussed as follows:
            Frequency Division Multiple Access (FDMA): This method involves partitioning the bandwidth
                of a channel into parts called frequency bands. Each frequency band is allocated to a particular
                station for transmitting its data and it remains fixed throughout the entire communication period.
                The frequency bands are kept well separated by placing guard bands in between to avoid
                interference between stations. In case of higher load on the network, the performance of FDMA
                can be improved by allocating the frequency bands to the stations dynamically when they demand
                them rather than keeping the frequency bands reserved for specific stations.
            Time Division Multiple Access (TDMA): In this method, the bandwidth of the channel is time-shared
               among the stations. The entire time is partitioned into slots and each slot is allocated to a particular
               station in which it can transmit data. The major limitation of TDMA is that synchronization is required
               between the different stations so that every station knows the beginning as well as the position of its
               respective slot. To achieve synchronization, preamble bits, also known as synchronization bits, are
               inserted at the start of each time slot.
            Code Division Multiple Access (CDMA): This method allows all the stations to send data through
               the same channel and at the same time. Unlike FDMA, in CDMA the whole bandwidth of the link
               forms a single channel shared by all the stations. Further, CDMA also differs from TDMA in
               the sense that in CDMA, all stations can transmit their data at the same time. All transmissions are
               separated using coding theory. Each station is assigned a code: a sequence of numbers called
               chips. These sequences are called orthogonal sequences and possess the following properties.
                •     Each sequence contains a number of elements equal to the number of stations. For example,
                      (+1 -1 -1 +1) is a sequence with four elements. Here, -1 denotes the 0 bit while +1
                      denotes the bit 1.
                •     When a sequence is multiplied with a scalar quantity, every element of the sequence is multi-
                      plied with it. For example, on multiplying the sequence (-1 -1 +1 +1 -1 -1) with 5,
                      we get (-5 -5 +5 +5 -5 -5).
                •     On multiplying two equal sequences, element by element with each other, and further adding
                      the elements of resulting sequence, the final result obtained is equal to the number of elements
                      in each sequence. This is also known as inner product of two sequences. For example, when
                       (-1 -1 +1 +1 -1 -1 +1 +1) is multiplied with (-1 -1 +1 +1 -1 -1 +1 +1),
                       the resulting sequence is (+1 +1 +1 +1 +1 +1 +1 +1) whose sum of elements is 8, equal
                       to the number of elements in the sequence.
                •     On multiplying two different sequences, element by element with each other, and then adding
                      the elements of resulting sequence, the final result obtained is zero. This is called the inner
                      product of two different sequences. For example when (-1 -1 +1 +1 -1 -1 +1 +1)
                      is multiplied with (+1 +1 +1 +1 +1 +1 +1 +1), the resulting sequence is (-1 -1 +1
                      +1 -1 -1 +1 +1) whose sum of elements is zero.
                •     When two sequences are added, the corresponding elements of both sequences are added to
                      each other. For example, on adding a sequence (-1 -1 +1 +1 -1 -1 +1 +1) with (-1
                      -1 +1 +1 -1 -1 -1 -1), the resulting sequence is (-2 -2 +2 +2 -2 -2 0 0).
           To understand how CDMA works, consider four stations S1, S2, S3 and S4 with data d1, d2, d3 and
        d4 and codes c1, c2, c3 and c4, respectively. Each station multiplies its data by its code to get c1*d1,
        c2*d2, c3*d3 and c4*d4, respectively. The data that is sent to the channel is the sum of c1*d1, c2*d2,
        c3*d3 and c4*d4. Any station (say S4) that wants to receive data from another station (say S2) mul-
        tiplies the data on the channel with the code of the sender (in our case c2) to get the data as shown
        here
                                          Data = (c1*d1+c2*d2+c3*d3+c4*d4)*c2
                                             = 4*d2
         It is clear from the third and fourth properties listed above that the inner product c2*c2 results in four,
         while the other products c1*c2, c3*c2 and c4*c2 result in zero. Thus, in order to retrieve the data sent
         by S2 (which is d2), station S4 has to divide the result by four.
             Further, CDMA is widely used in cellular networks. The main advantage of CDMA is that it uses
         a larger bandwidth, so the transmission of data is less affected by transmission impairments. However,
         CDMA suffers from the limitations of self-jamming and the effect of distance (the near-far problem).
             17. How FDM differs from FDMA?
            Ans: Although FDM and FDMA employ the same concept, there are differences between the two.
         The FDM is a physical-layer technique, which uses a multiplexer to combine traffic from low-bandwidth
         channels and transmit it on a higher-bandwidth channel. In this technique, channels are statically
         allocated to different stations, which is inefficient in the case of bursty traffic. In contrast, FDMA is an
         access method in the data link layer in which frequency bands can be allocated on demand. In FDMA,
         the bandwidth is divided into frequency bands and each band is reserved for a specific station. The
         efficiency is improved in FDMA by using a dynamic-sharing technique to access a particular frequency
         band. No physical multiplexer is used at the physical layer in FDMA.
             18. How TDM differs from TDMA?
            Ans: TDM and TDMA have certain differences. The TDM is a physical-layer technique, which uses
         a physical multiplexer to take data from slower channels, combine them and transmit them over a faster
         channel, whereas TDMA is a multiple-access method in the data link layer where the data link layer of
         each station tells the physical layer to use the specific time slot allocated to it. No physical multiplexer
         is used at the physical layer in TDMA.
             19. A slotted ALOHA channel has on average 10% of its slots idle. What is the offered traffic G?
         Calculate the throughput and determine whether the channel is overloaded or underloaded.
           Ans: According to the Poisson distribution,
                                                        P0 = e^(-G)
         Also, G = -ln P0 = -ln 0.1 = 2.3
         As we know, the throughput (S) of slotted ALOHA is G·e^(-G), so
                                                        S = 2.3 × 0.1 = 0.23
         As G > 1, the channel is overloaded.
              20. Consider a slotted ALOHA network having five stations. If the offered loads are G1 = 0.1,
         G2 = 0.15, G3 = 0.2, G4 = 0.25 and G5 = 0.3 packets, find the individual throughput of each station
         and the channel throughput.
           Ans: In slotted ALOHA, the individual throughput is Si = Gi·e^(-Gi).
         Let the individual throughputs of the stations be S1, S2, S3, S4 and S5. Then,
                                               S1 = G1·e^(-G1) = 0.1 × e^(-0.1) = 0.0905
                                               S2 = G2·e^(-G2) = 0.15 × e^(-0.15) = 0.1291
                                               S3 = G3·e^(-G3) = 0.2 × e^(-0.2) = 0.1637
                                               S4 = G4·e^(-G4) = 0.25 × e^(-0.25) = 0.1947
                                               S5 = G5·e^(-G5) = 0.3 × e^(-0.3) = 0.2222
           6. Which of the following protocols involves division of time into intervals where, at each interval,
              a reservation frame precedes the data frames?
              (a) reservation          (b) polling          (c) token passing          (d) none of these
           7. In which of the following methods does the primary station control the link?
              (a) reservation          (b) polling          (c) token passing          (d) none of these
           8. In which of the following methods are the stations arranged in a logical ring?
              (a) reservation          (b) polling          (c) token passing          (d) none of these
           9. Channelization involves:
              (a) FDMA          (b) TDMA          (c) CDMA          (d) all of these
          10. Which of the following is based on coding theory?
              (a) FDMA          (b) TDMA          (c) CDMA          (d) none of these
        Answers
        1. (a)      2. (b)   3. (a)   4. (a)    5. (b)   6. (a)     7. (b)    8. (c)   9. (d)    10. (c)
[Figure 9.1: Relationship between the IEEE 802 reference model and the OSI model. The IEEE 802 model places the Ethernet, Token Ring and Token Bus MAC sublayers and their physical layers beneath a common LLC sublayer, corresponding to the data link and physical layers of the OSI model]
        Physical Layer
        The lowest layer of IEEE 802 reference model is equivalent to the physical layer of the OSI model.
        It depends on the type and implementation of transmission media. It defines a specification for the
        medium of transmission and the topology. It deals with the following functions:
           encoding of signals at the sender’s side and the decoding of signals at the receiver’s side;
           transmission and reception of bits and
           preamble generation for synchronization.
             MAC: This layer forms the lower part of the data link layer that specifies the access control method
               used for each type of LAN. For example, CSMA/CD is specified for Ethernet LANs and the to-
               ken passing method is specified for Token Ring and Token Bus LANs. It also handles a part of
               framing function for transmitting data. To implement all the specified functions, MAC also uses a
               PDU at its layer, which is referred to as MAC frame. The MAC frame consists of various fields
               (Figure 9.3) which are described as follows:
                • MAC Control: This field contains protocol control information such as priority level, which
                     is required for the proper functioning of protocol.
                • Destination MAC Address: This field contains the address of the destination on the LAN for
                     this frame.
                 • Source MAC Address: This field contains the address of the source on the LAN for this frame.
                 • LLC PDU: This field contains the LLC data arriving from the immediate upper layer.
                • Cyclic Redundancy Check (CRC): This field contains the CRC code that is used for error
                    detection. This field is also referred to as frame check sequence (FCS).
              Length or Type: It is a 2-byte-long field that defines either the length or the type of data. The
                original Ethernet used it as a type field to identify the upper-layer protocol carried in the frame,
                while the IEEE standard uses it as a length field to indicate the total number of bytes in the
                data field.
               Data: This field carries data arriving from the upper-layer protocols. The amount of data stored in
                 this field can range between 46 and 1,500 bytes.
              CRC: It is a 4-byte-long field that contains error detection information. In case of Ethernet
                MAC frame, it is the CRC that is computed over all the fields except preamble, SFD and CRC
                itself.
              4. What are the four generations of Ethernet? Discuss Ethernet cabling in all the generations
        of Ethernet.
           Ans: The Ethernet was developed in 1976 at the Xerox’s Palo Alto Research Center (PARC). Since
        its development, the Ethernet has been evolving continuously. This evolution of Ethernet can be cat-
        egorized under four generations, which include standard Ethernet, fast Ethernet, gigabit Ethernet and
        10-gigabit Ethernet. These generations are discussed here.
        Standard Ethernet
        Standard Ethernet uses digital signals (baseband) at the data rate of 10 Mbps and follows 1-persistent
        CSMA/CD as access method. A digital signal is encoded/decoded by sender/receiver using the
        Manchester scheme. Standard Ethernet has defined several physical layer implementations, out of
         which the following four are commonly used.
            10Base5: It uses a thick 50-ohm coaxial cable and is implemented in a bus topology with an
               external transceiver connected to the coaxial cable. The transceiver deals with transmitting, receiving
                and detecting collisions in the network. The cable used is too stiff to bend by hand. The maximum
                length of the cable should not be more than 500 m; otherwise, the signals may deteriorate.
                In case a greater length of cable is required, a maximum of five segments, each of 500 m, can be
                used, with repeaters connecting the segments. Thus, the length of the cable can be extended up
                to 2,500 m. The 10Base5 Ethernet is also referred to by other names including thick Ethernet or
               thicknet.
            10Base2: It is also implemented in the bus topology but with a thinner coaxial cable. The transceiver
                in this Ethernet is a part of the NIC. The 10Base2 specification is cheaper than 10Base5 as the
                cost of the thin cable is less than that of the thick cable used in 10Base5. The thinner cable can
                easily be bent close to the nodes, which results in flexibility and thus makes installation of the
                10Base2 specification easier. The maximum length of a cable segment must not exceed 185 m.
                The 10Base2 Ethernet is also referred to as thin Ethernet or Cheapernet.
           10Base-T: It uses two pairs of twisted cable and is implemented in star topology. All nodes are
             connected to the hub via two pairs of cable and thus, creating a separate path for sending and
             receiving the data. The maximum length of the cable should not exceed 100 m; otherwise, the
              signals may attenuate. It is also referred to as twisted-pair Ethernet.
            10Base-F: It is a 10-Mbps Ethernet implementation that uses a star topology. It uses a pair of
               fibre optic cables to connect the nodes to the central hub. The maximum length of the cable
               should not exceed 2,000 m. It is also referred to as fibre Ethernet.
        Fast Ethernet
        The IEEE 802.3 committee developed a set of specifications referred to as the fast Ethernet to provide
        low-cost data transfer at the rate of 100 Mbps. It was designed to compete with LAN protocols such as
        fibre distributed data interface (FDDI) and it was also compatible with the standard Ethernet. The fast
        Ethernet uses a new feature called autonegotiation, which enables two devices to negotiate on certain
        features such as data rate or mode of transmission. It also allows a station to determine the capability
        of hub and two incompatible devices can also be connected to one another using this feature. Like the
        standard Ethernet, various physical-layer implementations of the fast Ethernet have also been specified.
        Some of them are as follows:
            100Base-TX: It uses two pairs of either cat5 UTP cable or STP cable. The maximum length
              of the cable should not exceed 100 m. This implementation uses the MLT-3 line coding scheme
              because of its good bandwidth performance. However, since the MLT-3 coding scheme is not
              self-synchronizing, the 4B/5B block coding scheme is used to prevent long sequences of 0s and 1s.
              The block coding increases the data rate from 100 Mbps to 125 Mbps.
           100Base-FX: It uses two wires of fibre optic cable that can easily satisfy the high bandwidth
             requirements. The implementation uses NRZ-I coding scheme. As NRZ-I scheme suffers from
             synchronization problem in case of long sequence of 0s and 1s, 4B/5B block coding is used with
             NRZ-I to overcome this problem. The block coding results in increased data rate of 125 Mbps. The
             maximum cable length in 100Base-FX must not exceed 100 m.
           100Base-T4: It is the new standard that uses four pairs of cat3 or higher UTP cables. For this
             implementation, 8B/6T line coding scheme is used. The maximum length of cable must not exceed
             100 m.
        Gigabit Ethernet
        Gigabit Ethernet was developed by the IEEE 802.3 committee to meet the higher data rate requirements.
        This standard provides a data rate of 1000 Mbps (1 Gbps). It is backward compatible with traditional
        and fast Ethernet and also supports autonegotiation feature. Various physical layer implementations of
        gigabit Ethernet are as follows:
           1000Base-SX: It is a two-wire implementation that uses short-wave fibre. One wire is used for
              sending the data and the other for receiving the data. The NRZ line coding scheme and the 8B/10B
              block coding scheme are used for this implementation. The length of the cable should not exceed
              550 m in the 1000Base-SX specification.
           1000Base-LX: It is also a two-wire implementation that uses long-wave fibre. One wire is used
              for sending the data and the other for receiving the data. It is implemented with the NRZ line
              coding scheme and the 8B/10B block coding scheme. The length of the cable should not exceed
              5,000 m in the 1000Base-LX specification.
           1000Base-CX: It uses two STP wires, where one wire is used for sending the data and the other
              for receiving the data. It is implemented with the NRZ line coding scheme and the 8B/10B
              block coding scheme. The length of the cable should not exceed 25 m in the 1000Base-CX
              specification.
           1000Base-T: It uses four cat5 UTP wires. It is implemented with the 4D-PAM5 line coding scheme.
              In this specification, the length of the cable should not exceed 100 m.
        Ten-Gigabit Ethernet
         This standard was named 802.3ae by the IEEE 802.3 committee. It was designed to increase the data
         rate to 10,000 Mbps (10 Gbps). It is compatible with standard Ethernet, fast Ethernet and gigabit Ethernet.
        It enables the Ethernet to be used with technologies such as Frame Relay and ATM. Various physical-
        layer implementations of 10-gigabit Ethernet are as follows:
           10GBase-S: It uses short-waves fibres and is designed for 850 nm transmission on multimode
              fibre. The maximum length of the cable should not exceed 300 m.
           10GBase-L: It uses long-wave fibres and is designed for 1,310 nm transmission on single-mode
              fibre. The maximum length of the cable should not exceed 10 km.
           10GBase-E: It uses extended waves and is designed for 1,550 nm transmission on single-mode
              fibre. The maximum distance that can be achieved using this medium, is up to 40 km.
                5. What is Token Ring (IEEE 802.5)? How is it different from Token Bus (IEEE 802.4)?
             Ans: The IEEE 802.5 is a specification of standards for LANs that are based on the Token Ring
         architecture. The Token Ring network was originally developed by IBM in the 1970s. It is a widely
         used MAC protocol that employs the token-passing mechanism with a ring topology. In this protocol,
         the stations are connected via point-to-point links with the use of repeaters (Figure 9.5). To control
         media access, a small frame called a token (a 3-byte pattern of 0s and 1s) is allowed to move around
         the network, and only the station possessing the token can transmit frames in the allotted time.
[Figure 9.5: Token Ring LAN, with stations S1 to S4 connected in a ring and the token T circulating among them]
             Whenever a station wants to transmit a frame, it first needs to grab a token from the network before
         starting any transmission. Then, it appends the information to the token and sends it on the network. The
         information frame then circulates around the network and is eventually received by the intended
         destination. After receiving the information frame, the destination copies the information and sends the
         information frame back on the network with two of its bits set to indicate that it is an acknowledgement.
         The information frame then moves around the ring and is finally received by the sending station. The
         sending station checks the returned frame to determine whether it has been received with or without
         errors. If the sending station has finished its transmission, it creates a new token and inserts it on the
         network. Notice that while one station is transmitting the data, no other station can grab a token. Thus,
         collisions cannot occur as only one station can transmit at a time. In addition, if a station does not have
         a frame to send or the time allotted to it passes away, the token is immediately passed to the next station.
             In Token Ring networks, the ring topology is used in which the failure of any one station can bring
         the entire network down. Thus, another standard known as Token Bus (IEEE 802.4) was developed as
         an improvement over Token Ring networks. Like Token Ring, Token Bus is also based on token-passing
         mechanism. In Token Bus, the stations are logically organized into a ring but physically organized into
         a bus (Figure 9.6). Thus, each station knows the addresses of its adjacent (left and right) stations. After
          the logical ring has been initialized, the highest numbered station may transmit. The sending station
          broadcasts the frame on the network. Each station in the network receives the frame and discards it if
          the frame is not addressed to it. When a station finishes the transmission of data or the time allotted to it
          passes away, it inserts the address of its next neighbouring station (whether logical or physical) on the
          token and passes it to that station.
[Figure 9.6: Token Bus LAN, in which stations S1 to S6 are attached to a broadband coaxial cable bus while the token is passed around a logical ring]
               6. Compare IEEE 802.3, 802.4 and 802.5.
           Ans: There are certain differences among the IEEE 802.3, 802.4 and 802.5 standards that are listed
        in Table 9.1.
        Table 9.1        Comparison among IEEE 802.3, 802.4 and 802.5
                7. Write a short note on FDDI. Explain access method, time registers, timers and station
        procedure.
              Ans: The fibre distributed data interface (FDDI) refers to the first high-speed LAN protocol
         standardized by ANSI and ITU-T. It has also been approved by the ISO and resembles the IEEE 802.5
          standard. It uses fibre optic cable; thus, the packet size, network segment length and number of stations
          increase. It offers a speed of 100 Mbps over a distance of up to 200 km and connects up to 1,000
          stations. The distance between any two stations cannot be more than a few kilometres.
        Access Method
        The FDDI employs the token passing access method. A station possessing the token can transmit
        any number of frames within the allotted time. There are two types of frames provided by the FDDI:
        synchronous and asynchronous. Synchronous frame (also called S-frame) is used for real-time ap-
         plications such as audio and video. The frame needs to be transmitted within a short period of time
         without much delay. Asynchronous frame (also called A-frame) is used for non-real-time applications
        (such as data traffic) that can tolerate large delays. If a station has both S-frames and A-frames to send,
        it must send S-frames first. After sending the S-frame, if the allotted time is still left then A-frames can
        be transmitted.
        Time Registers
        Three time registers are used to manage the movement of token around the ring, namely, synchronous
        allocation (SA), target token rotation time (TTRT) and absolute maximum time (AMT). The SA register
        specifies the time for transmitting synchronous data. Each station can have a different value for it. The
        TTRT register specifies the average time a token needs to move around the ring exactly once. The AMT
        register has a value two times the value of the TTRT. It specifies the maximum time that it can take for
        a station to receive a token. However, if a token takes more time than the AMT, then the reinitialization
        of the ring has to be done.
        Timers
        Each station maintains two types of timers to compare the actual time with the value present in time
        registers. These timers include token rotation timer (TRT) and token holding timer (THT). The TRT cal-
        culates the total time taken by the token to complete one cycle. This timer runs continuously. The THT
        starts when the token is received by a station. This timer indicates the time left for sending A-frames
        after the S-frames have been sent.
        Station Procedure
        When a station receives the token, it uses the following procedure:
          1. It sets the THT to a value equal to (TTRT – TRT).
          2. It sets TRT to zero.
          3. It transmits synchronous frame. With each sent unit, the value of TRT is decremented by one.
          4. It continues to send asynchronous data as long as the value of THT is positive.
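         The steps above can be expressed as a short sketch. The following Python code is purely illustrative; the function name, the frame lists and the assumption that each transmitted frame consumes one timer unit are not part of the FDDI standard text:

         def frames_to_send(trt, ttrt, s_frames, a_frames):
             """Illustrative FDDI station procedure: frames sent on one token visit."""
             tht = ttrt - trt              # step 1: THT = TTRT - TRT
             trt = 0                       # step 2: reset the token rotation timer
             sent = list(s_frames)         # step 3: synchronous frames are sent first
             for frame in a_frames:        # step 4: A-frames only while THT stays positive
                 if tht <= 0:
                     break
                 sent.append(frame)
                 tht -= 1                  # assumption: one time unit per frame
             return sent

         print(frames_to_send(3, 8, ["S1", "S2"], ["A1", "A2", "A3", "A4", "A5", "A6"]))
         # With TTRT = 8 and TRT = 3, THT = 5, so the two S-frames and five A-frames go out.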
[Figure: IEEE 802.11 architecture: BSSs with access points (APs) connected through a distribution system to a server or gateway]
                •Data + CF-Poll: This frame is used by a point coordinator to send data to a mobile station. It also re-
                 quests the mobile station to send a data frame, which may have been buffered by the mobile station.
              • Data + CF-ACK + CF-Poll: This frame combines the functionality of two frames Data + CF-
                  ACK and Data + CF-Poll into a single frame.
        Besides, there is another group that contains four more subtypes of data frames, which do not carry any
        user data. One of these frames is null-function data frame that carries the power management bit in the
        frame control field to the AP. It indicates that the station is moving to a low-power operating state. The
remaining three frames (CF-ACK, CF-Poll, CF-ACK + CF-Poll) function in the same way as the last three frames in the first group, the only difference being that they do not contain any data.
             13. Explain the frame format of 802.11 standards.
           Ans: The IEEE 802.11 has defined three MAC layer frames for WLANs including control, data, and
        management frames. Figure 9.9 shows the format of data frame of IEEE 802.11 that comprises nine
        fields. The format of management frames is also similar to data frames except that it does not include
        one of the base station addresses. The format of control frames does not include frame body and SC
fields, and it contains only one or two address fields.
The description of the fields included in the IEEE 802.11 MAC frame is as follows:
             Frame Control (FC): It is a 2-byte-long field in which the first byte indicates the type of frame
               (control, management, or data) and the second byte contains control information such as fragmen-
               tation information and privacy information.
             Duration (D): It is a 2-byte-long field that defines the channel allocation period for the transmis-
               sion of a frame. However, in case of one control frame, this field stores the ID of the frame.
             Addresses: The IEEE 802.11 MAC frame contains four address fields and each field is 6-byte-
               long. In case of a data frame, two of the frame address fields store the MAC address of the original
               source and destination of the frame, while the other two store the MAC address of transmitting and
               receiving base stations that are transmitting and receiving frames respectively over the WLAN.
             Sequence Control (SC): It is a 2-byte (that is 16 bits) field, of which 12 bits specify the sequence
               number of a frame for flow control. The remaining 4 bits specify the fragment number required for
               reassembling at the receiver’s end.
             Frame Body: It is a field that ranges between 0 and 2,312 bytes and contains payload information.
     FCS: It is a 4-byte-long field that comprises a 32-bit CRC.
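         As a rough illustration of the Sequence Control field described above, the following Python snippet splits a 16-bit SC value into its 12-bit sequence number and 4-bit fragment number (the bit positions used here are an assumption made for the example, not a statement of the on-air encoding):

         def parse_sequence_control(sc):
             """Illustrative split of a 16-bit Sequence Control value."""
             fragment_number = sc & 0x000F          # low 4 bits: fragment number
             sequence_number = (sc >> 4) & 0x0FFF   # upper 12 bits: sequence number
             return sequence_number, fragment_number

         print(parse_sequence_control(0x1A23))   # -> (418, 3)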
               14. With reference to 802.11 wireless LAN, explain the following:
             (a) Hidden terminal problem
             (b) Exposed terminal problem
             (c) Collision avoidance mechanisms
           Ans:
     (a) Hidden Terminal Problem The hidden terminal problem occurs during communication between two stations in wireless networks. It is the problem where a station is unable to detect another station's transmission because the two stations are out of each other's range, so both may transmit to a common receiver at the same time and their frames collide at that receiver.
data from R to P does not interfere with the CTS from Q to P. The station T, being in the transmission range of both P and Q, hears both the RTS and the CTS and thus remains silent until the transmission is over.
[Figure 9.12 MACA Protocol: stations R, P, Q and S lie within overlapping transmission ranges of P and Q, with station T inside both ranges]
    The disadvantage of the MACA protocol is that a collision can still occur in case both Q and R transmit RTS frames to P simultaneously. The RTS from Q and R may collide at P; because of the collision, neither Q nor R receives the CTS frame. Then both Q and R wait for a random amount of time using the binary exponential back-off algorithm (explained in Chapter 8) and again retry to transmit.
            To overcome the disadvantage and improve the performance of MACA, it was enhanced in 1994
        and renamed as MACA for wireless (MACAW). This newer version of MACA includes several
        enhancements, some of which are as follows:
           To identify the frames that have been lost during the transmission, the receiver must acknowledge
               each successfully received data frame by sending an ACK frame.
           CSMA protocol is used for carrier sensing so that no two stations could send an RTS frame at the
               same time to the same destination.
   Instead of running the binary exponential back-off algorithm for each station, it is run for a pair of transmitting stations (source and destination).
             15. What is meant by Bluetooth? Explain its architecture.
           Ans: Bluetooth is a short-range wireless LAN technology through which many devices can be
        linked without using wires. It was originally started as a project by the Ericsson Company and then
        formalized by a consortium of companies (Ericsson, IBM, Intel, Nokia and Toshiba). Bluetooth is sup-
        posed to get its name from 10th century Danish king Harald Bluetooth who united Scandinavian Europe
        (Denmark and Norway) during an era when these areas were torn apart due to wars. The Bluetooth tech-
        nology operates on the 2.4 GHz industrial, scientific and medical (ISM) band and can connect different
        devices such as computer, printer and telephone. The connections can be made up to 10 m or extended up
        to 100 m depending upon the Bluetooth version being used.
        Bluetooth Architecture
        A Bluetooth LAN is an ad hoc network, which means the network is formed by the devices themselves
        by detecting each other’s presence. The number of devices connected in Bluetooth LAN should not be
        very large as it can support only a small number of devices. The Bluetooth LANs can be classified into
        two types of networks, namely, piconet and scatternet (Figure 9.13).
          Piconet: It refers to a small Bluetooth network in which the number of stations cannot exceed
             eight. It consists of only one primary station (known as master) and up to seven secondary stations
     (known as slaves). If the number of secondary stations exceeds seven, the eighth secondary station is put
             in the parked state. A secondary station in the parked state cannot participate in communication in
             the network until it leaves the parked state. All stations within a piconet share the common chan-
             nel (communication link) and only a master can establish the link. However, once a link has been
             established, the other stations (slaves) can also request to become master. Slaves within a piconet
             must also synchronize their internal clocks and frequency hops with that of the master.
[Figure 9.13 Bluetooth Piconets and Scatternets: two piconets with masters M1 and M2 and slave stations S1 to S7; a station shared between the two piconets links them into a scatternet]
             Scatternet: It refers to a Bluetooth network that is formed by the combination of piconets. A device
   may be a master in one piconet while a slave in another piconet, or a slave in more than one piconet.
              16. Discuss the Bluetooth protocol stack.
            Ans: Bluetooth protocol stack is a combination of multiple protocols and layers (Figure 9.14).
        Bluetooth comprises several layers including radio layer, baseband layer, L2CAP layer and other upper
         layers. In addition, various protocols are also associated with Bluetooth protocol stack. The description
         of these layers and protocols is as follows:
[Figure 9.14 Bluetooth protocol stack: the radio, baseband and L2CAP layers at the bottom, with audio carried directly over the baseband; above L2CAP sit RFCOMM (carrying PPP, IP, UDP/TCP), SDP, TCS BIN, AT-Commands, and OBEX with vCard/vCal as well as WAP with WAE]
        Radio Layer
        This is the lowest layer in the Bluetooth protocol stack and is similar to a physical layer of transmission
        control protocol/Internet protocol (TCP/IP) model. The Bluetooth devices present in this layer have low
        power and a range of 10 m. This layer uses an ISM band of 2.4 GHz that is divided into 79 channels, each
        of 1 MHz. To avoid interference from other networks, the Bluetooth applies frequency-hopping spread
        spectrum (FHSS) technique. Here, a packet is divided into different parts and each part is transmitted
at a different frequency. The bits are converted to a signal using a variant of FSK with Gaussian bandwidth filtering, known as Gaussian frequency shift keying (GFSK). In GFSK, bit 1 is represented by a frequency deviation above the carrier frequency and bit 0 by a frequency deviation below the carrier frequency.
        Baseband Layer
This layer is similar to the MAC sublayer in LANs. It uses time-division multiple access (TDMA), and the primary and secondary stations communicate with each other using time slots. Bluetooth uses a form of TDMA known as TDD-TDMA (time-division duplex TDMA)—a sort of half-duplex communication, which uses different hops for each direction of communication (from primary to secondary or vice versa). If there is only one secondary station in the piconet, then the primary station uses even-numbered slots while the secondary station uses odd-numbered slots for communication. That is, in slot 0,
        data flows from primary to secondary while in slot 1 data flows from secondary to primary. This process
        continues until the end of frame transmission. Now, consider the case where there is more than one
        secondary station in the piconet. In this case also, primary station sends in even-numbered slots, how-
         ever, only one secondary station (out of many) who had received the data in the previous slot transmits
         in the odd-numbered slot. For example, suppose in slot 0, the primary station (P) has sent data intended
         for a secondary station (S) then only S can transmit in slot 1.
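         This slot discipline can be pictured with a toy model. The sketch below assumes a single secondary station; the function and the slot numbering are illustrative only:

         def sender_for_slot(slot):
             """TDD-TDMA with one secondary: primary sends in even slots, secondary in odd."""
             return "primary" if slot % 2 == 0 else "secondary"

         for slot in range(4):
             print(f"slot {slot}: {sender_for_slot(slot)} transmits")
         # slot 0: primary transmits, slot 1: secondary transmits, and so on.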
L2CAP Layer
The logical link control and adaptation protocol (L2CAP) is similar to the LLC sublayer in LANs. This layer is used for exchanging data packets.
[Figure 9.15 Format of Data Packet of L2CAP Layer: Length (2 bytes), Channel ID (2 bytes), Data and control (0-65,535 bytes)]
Each data packet comprises three fields (Figure 9.15), which are as follows:
          Length: It is a 2-byte long field that is used to specify the size of data received from upper layers
             in bytes.
          Channel ID (CID): It is a 2-byte long field that uniquely identifies the virtual channel made at
             this level.
          Data and Control: This field contains data that can be up to 65,535 bytes as well as other control
             information.
        The L2CAP layer performs many functions that are discussed as follows:
   Segmentation and Reassembly: The application layer sometimes delivers a packet that is very large in size; the baseband layer, however, supports only up to 2,774 bits or 343 bytes of data in the payload field. Thus, the L2CAP layer divides large packets into segments at the source, and these segments are reassembled again at the destination.
             Multiplexing: The L2CAP deals with multiplexing. At the sender’s side, it acquires data from
               the upper layers, frames them and gives them to the baseband layer. At the receiver’s station,
               it acquires frames from the baseband layer, extracts the data and gives them to the appropriate
               protocol layer.
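         The segmentation and reassembly function can be sketched as follows. The 343-byte limit is the figure quoted above; the function names and the sample 1,000-byte packet are illustrative assumptions:

         MAX_BASEBAND_PAYLOAD = 343   # bytes, as stated in the text

         def segment(packet, limit=MAX_BASEBAND_PAYLOAD):
             """Split an upper-layer packet into chunks no larger than the payload limit."""
             return [packet[i:i + limit] for i in range(0, len(packet), limit)]

         def reassemble(segments):
             """Join the segments back into the original packet at the destination."""
             return b"".join(segments)

         data = bytes(1000)                     # a 1,000-byte upper-layer packet
         parts = segment(data)
         print([len(p) for p in parts])         # -> [343, 343, 314]
         assert reassemble(parts) == data       # reassembly restores the original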
        AT-Commands
        This protocol consists of a set of AT-commands (attention commands) which are used to configure and
        control a mobile phone to act as a modem for fax and data transfers.
        Point-to-Point Bluetooth
        This is a point-to-point protocol (PPP) that takes IP packets to/from the PPP layer and places them onto
        the LAN.
        TCP/IP
        This protocol is used for Internet communication.
        vCard/vCal
        These are the content formats supported by OBEX protocol. A vCard specifies the format for electronic
        business card while vCal specifies the format for entries in personal calendar, which are maintained by
        Internet mail consortium.
             17. Write a short note on virtual circuit networks.
Ans: A virtual circuit network includes the characteristics of both a circuit-switched and a datagram net-
        work and performs switching at the data link layer. Like circuit-switched networks, it requires a virtual
        connection to be established between the communicating nodes before any data can be transmitted. Data
        transmission in virtual circuit networks involves three phases: connection setup, data transfer and con-
        nection teardown phase. In connection setup phase, the resources are allocated and each switch creates
        an entry for a virtual circuit in its table. After establishment of virtual connection, data transfer phase
        begins in which packets are transmitted from source to destination; all packets of a single message take
        the same route to reach the destination. In connection teardown phase, the communicating nodes inform
        the switches to delete the corresponding entry.
            In virtual circuit networks, data is transmitted in the form of packets, where each packet contains an
        address in its header. Each packet to be transmitted contains a virtual circuit identifier (VCI) along with
         the data. The VCI is a small number, which is used as an identifier of packets between two switches.
        When a packet arrives at a switch, its existing VCI is replaced with a new VCI when the frame leaves
        from the switch.
            The main characteristic of virtual circuit networks is that nodes need not make any routing decision
        for the packets, which are to be transferred over the network. Decisions are made only once for all the
        packets using a specific virtual circuit. At any instance of time, each node can be connected via more
        than one virtual circuit to any other node. Thus, transmitted packets of a single message are buffered at
        each node and are queued for output while packets using another virtual circuit on the same node are
        using the line. Some of the advantages associated with virtual circuit approach are as follows:
           All packets belonging to the same message arrive in the same order to the destination as sent by the
              sender. This is because every packet follows the same route to reach the receiver.
   It ensures that all packets arriving at the destination are free from errors. For example, if any node receives a frame with an error, it can request retransmission of that frame.
           Packets transmit through the virtual circuit network more rapidly.
              (c) State Diagrams to explain Call Setup and Call Clearing The communication between
        two DTEs initiates through the call setup phase. In this phase, initially, the calling DTE sends a Call
        Request packet to its local DCE. After receiving a Call Request packet, the local DCE forwards this
        packet to the next node thus, establishing the virtual connection up to the remote DCE, which serves
        the required DTE. The remote DCE then sends an Incoming Call packet to the called DTE to indicate
        the willingness of calling DTE to communicate with it. If the called DTE is ready to communicate, it
        sends a Call Accepted packet to the remote DCE, which then forwards this packet to the local DCE via
        the same virtual connection. After receiving the Call Accepted packet, the local DCE sends a Call Con-
        nected packet to the calling DTE to indicate the successful establishment of connection. Figure 9.16
        depicts the whole process of call setup phase.
[Figure 9.16 Call setup phase: Call Request and Incoming Call packets travel from the calling DTE towards the called DTE over the virtual circuit, while Call Accepted and Call Connected packets return in the opposite direction]
            Generally, the call-clearing phase is initiated after the completion of data transfer between calling
        and called DTEs. However, in certain situations, such as when call is not accepted by the called DTE or
        when a virtual circuit cannot be established, the call-clearing procedure is also initiated. The call can be
        terminated by either of the communicating parties or by the network. For example, if the calling DTE
        wants to clear the connection, it sends a Clear Request packet to the local DCE which forwards this
        packet to the remote DCE. To forward the call-clearing request to called DTE, the remote DCE sends a
Clear Indication packet to it. In response, the called DTE sends a Clear Confirm packet to the remote
        DCE, which then forwards this packet to local DCE. The local DCE passes this packet to the calling
        DTE, thus terminating the connection. Figure 9.17 depicts the whole process of call-clearing phase
        initiated by DTE.
[Figure 9.17 Call-clearing phase initiated by a DTE: a Clear Request from the calling DTE is carried over the virtual circuit as a Clear Indication to the called DTE, which responds with Clear Confirm packets in the reverse direction]
           Now, consider the case where the call-clearing phase is initiated by the network. In this case, both
        the local and remote DCE send a Clear Indication packet to the calling and called DTE, respectively.
On receiving the packets, the calling and called DTEs respond with a Clear Confirm packet to the local
        and remote DCE respectively, thus, terminating the connection. The call clearing by the network may
        result in the loss of some data packets.
        Architecture
        Frame Relay is a virtual circuit network in which each virtual circuit is uniquely identified by a number
known as data link connection identifier (DLCI). It provides two types of virtual circuits, which are
        as follows:
          Permanent Virtual Circuit (PVC): In this circuit, a permanent connection is created between
             a source and a destination and the administrator makes a corresponding entry for all the
             switches in a table. An outgoing DLCI is given to the source and an incoming DLCI is given
              to the destination. Using PVC connections is costly, as both source and destination have to
              pay for the connection even if it is not in use. Moreover, it connects a single source to a single
destination. Thus, if the source needs a connection to another destination, a separate PVC has to be set up.
           Switched Virtual Circuit (SVC): In this circuit, a short temporary connection is created and that
             connection exists as long as data transmission is taking place between source and the destination.
             After the transmission of the data, the connection is terminated.
           In a Frame Relay network, the frames are routed with the help of a table associated with each switch
        in the network. Each table contains an entry for every virtual circuit that has already been set up. The
        table contains four fields for each entry: incoming port, incoming DLCI, outgoing port and outgoing
DLCI. Whenever a packet arrives at a switch, the switch searches the table for a matching incoming port and DLCI combination. After a match has been found, the DLCI of the arrived packet is replaced
        with outgoing DLCI (found in table) and the packet is routed to the outgoing port. This way the packet
        travels from switch to switch and eventually, reaches the destination. Figure 9.18 depicts how data is
        transferred from a source to a destination in a Frame Relay network.
[Figure 9.18 Data transfer in a Frame Relay network: a frame from source S1 carrying DLCI 48 arrives at a switch on port 2; the switch's table entry (incoming port 2, DLCI 48; outgoing port 1, DLCI 50) causes the frame to leave on port 1 with DLCI 50 and travel switch by switch towards destination S2]
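         The forwarding step just described (match the incoming port and DLCI, swap the DLCI, forward on the outgoing port) can be sketched as below. The single table entry mirrors Figure 9.18; the data structures themselves are only illustrative:

         # Illustrative Frame Relay switch table: (incoming port, incoming DLCI)
         # maps to (outgoing port, outgoing DLCI), as in Figure 9.18.
         table = {
             (2, 48): (1, 50),
         }

         def forward(in_port, dlci):
             out_port, out_dlci = table[(in_port, dlci)]   # look up the matching entry
             return out_port, out_dlci                     # the DLCI is swapped here

         print(forward(2, 48))    # -> (1, 50)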
   The Frame Relay frame consists of five fields, which are described as follows:
           Flag: It is an 8-bit long field, which is used at the start and end of the frame. The starting flag
             indicates the start of the frame, whereas the ending flag indicates the end of the frame.
           Address: This field has a default length of 16 bits (2 bytes), and may be extended up to 4 bytes.
      The address field is further divided into various subfields, which are described as follows:
       • DLCI: This field is specified in two parts in the frame as shown in Figure 9.19. The first part is 6 bits long while the second part is 4 bits long. These 10 bits together identify the data link connection defined by the standard.
               • Command/Response (C/R): It is 1-bit long field that enables the upper layers to recognize
                  whether a frame is a command or a response.
       • Extended Address (EA): This field is also specified in two parts in the frame, with each part of 1 bit. It indicates whether the current byte is the final byte of the address. If the value is zero, then it indicates that another address byte is to follow; else, it means that the current byte is the final byte.
       • Forward Explicit Congestion Notification (FECN): It is a 1-bit long field that informs the
                   destination that congestion has occurred. It may lead to loss of data or delay.
       • Backward Explicit Congestion Notification (BECN): It is a 1-bit long field that informs the sender that congestion has occurred. The sender then slows down the transmission in order to prevent data loss.
       • Discard Eligibility (DE): It is a 1-bit long field that indicates the priority level of the frame. If its value is set to one, it tells the network that the frame may be discarded in case of congestion.
   Information: It is a variable-length field that carries higher-level data.
           FCS: It is a 16-bit long field, which is used for error detection.
An ATM cell is 53 bytes long, consisting of 5 bytes of header and 48 bytes of payload or data. Further, ATM networks are connection-oriented though they
        employ packet-switching technique. They also allow bursty traffic to pass through as well as devices
        with different speeds can communicate with each other via ATM network. Thus, it combines the advan-
        tages of both packet switching and circuit switching.
[Figure: an ATM network: endpoints connect to ATM switches through UNI interfaces, and the switches connect to one another through NNI interfaces]
            Two ATM endpoints are connected through transmission path (TP), virtual path (VP) and virtual
circuits (VC). A transmission path is the physical connection, such as a wire or cable, that links an ATM endpoint with an ATM
        switch or two ATM switches with one another. It consists of a set of virtual paths. A virtual path refers
        to the link (or a group of links) between two ATM switches. Each virtual path is a combination of virtual
circuits that share the same path. A virtual circuit refers to the logical path that connects two points. All
        the cells corresponding to the same message pass through the same virtual circuit and in the same order
        until they reach the destination.
    In order to route cells from one ATM endpoint to another, the virtual connections must be
        uniquely identified. Each virtual connection is identified by the combination of virtual path identifier
        (VPI) and virtual circuit identifier (VCI). Further, VPI uniquely identifies the virtual path while VCI
        uniquely identifies the virtual circuit; both VPI and VCI are included in the ATM cell header. Notice that
        all the virtual circuits belonging to the same virtual path possess the same VPI. The length of VPI is differ-
        ent in UNI and NNI. It is of 8 bits in UNI but of 12 bits in NNI. On the other hand, the length of VCI is
        same in both UNI and NNI and is 16 bits. Thus, to identify a VC, a total of 24 bits are required in UNI
        while 28 bits are required in NNI.
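         The bit counts quoted above can be checked with a tiny helper (illustrative only; it simply adds the VPI and VCI field widths):

         def vc_identifier_bits(nni=False):
             """Total bits needed to identify a VC at the UNI or NNI."""
             vpi_bits = 12 if nni else 8    # UNI uses an 8-bit VPI, NNI a 12-bit VPI
             vci_bits = 16                  # the VCI is 16 bits at both interfaces
             return vpi_bits + vci_bits

         print(vc_identifier_bits())            # -> 24 bits at the UNI
         print(vc_identifier_bits(nni=True))    # -> 28 bits at the NNI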
           The ATM also uses PVC and SVC connections like Frame Relay. However, the difference is that
ATM was designed from the start to support audio and video applications using SVCs, while in a
        Frame Relay, PVC and SVC were added later.
             27. How are ATM cells multiplexed?
           Ans: In an ATM, asynchronous time division multiplexing technique is followed to multiplex the cells
        from different input sources. The size of each slot is fixed and is equal to the size of a cell. Each input source
is assigned a slot when it has a cell to send; if none of the sources has a cell to send, the slots remain empty. Whenever a channel has a cell to send, the ATM multiplexer puts that cell into the next available slot. Once all the cells have been multiplexed, any remaining time slots are sent empty on the network.
            Figure 9.21 shows the cell multiplexing from four input sources P, Q, R and S. At the first clock tick,
        as the input source Q has no cell to send, the multiplexer takes a cell from S and puts it into the slot.
        Similarly, at the second clock tick, Q has no data to send. Thus, the multiplexer fills the slot with a cell
        from R. This process continues until all the cells have been multiplexed. After all the cells from all the
        sources have been multiplexed, the output slots are empty.
[Figure 9.21 ATM cell multiplexing: cells waiting at inputs P, Q, R and S are taken by the multiplexer, one per fixed-size slot, into a single output stream; once all cells from all sources have been sent, the remaining output slots are empty]
[Figure: the three layers of the ATM protocol architecture: AAL layer, ATM layer and physical layer]
        Physical Layer
        The physical layer is responsible for managing the medium-dependent transmission. It carries the ATM
        cells by converting them into bit streams. It is responsible for controlling the transmission and receipt
        of bits as well as maintaining the boundaries of an ATM cell. Originally, ATM was designed to use
        synchronous optical network (SONET) as the physical carrier. However, other physical technologies
         can also be used with ATM.
        ATM Layer
        An ATM layer is similar to the data link layer of the OSI model. It is responsible for cell multiplexing
        and passing cells through ATM network (called cell relay). Other services provided by the ATM layer
        include routing, traffic management and switching. It accepts a 48-byte segment from the AAL layer
        and adds a 5-byte header, transforming it into a 53-byte cell. Further, ATM uses a separate header for
        UNI and NNI cell (Figure 9.23). The header format of the UNI cell is similar to the NNI cell except the
GFC field that is included in the UNI cell, but not in the NNI cell.
AAL Layer
The ATM adaptation layer (AAL) accepts data from the upper layers and prepares it for transport in ATM cells. It consists of two sublayers: the segmentation and reassembly sublayer (SAR) and the convergence sublayer (CS). The SAR sublayer is re-
        sponsible for segmentation of payload at the sender’s side and reassembling the segments to create the
        original payload at the receiver’s side. The CS sublayer is responsible for ensuring the integrity of data
        and preparing it for segmentation by the SAR sublayer. There are various types of AAL including AAL1,
        AAL2, AAL3/4 and AAL5. Out of these four versions, only AAL1 and AAL5 are commonly used.
            29. Explain the structure of ATM adaptation layer.
          Ans: The ATM standard has defined four versions of AAL, which include AAL1, AAL2, AAL3/4
        and AAL5. All these versions are discussed as follows:
        AAL1
        It is a connection-oriented service that supports applications needing to transfer information at constant
        bit rates such as voice and video conferencing. The bit stream received from upper layer is divided into
        47-byte segments by the CS sublayer and then segments are passed to SAR sublayers below it. The SAR
        sublayer appends a 1-byte header to each 47-byte segment and sends the 48-byte segments to the ATM
layer below it. The header added by the SAR sublayer consists of two fields (Figure 9.24), namely, sequence number (SN) and sequence number protection (SNP). The SN is a 4-bit field that specifies a sequence number for ordering the bits. Further, SNP is a 4-bit field that is used to protect the sequence number: its first three bits are used to correct the SN field and the last bit is used as a parity bit to detect an error in all eight bits.
[Figure 9.24 SAR Header: SN (4 bits), SNP (4 bits)]
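         The SAR step of AAL1 can be sketched as follows. The helper name and the dummy SNP value are assumptions; a real SNP carries the 3-bit error-correcting code and parity bit described above:

         def aal1_sar(segments):
             """Prepend a 1-byte header (4-bit SN, 4-bit SNP) to each 47-byte segment."""
             payloads = []
             for sn, seg in enumerate(segments):
                 assert len(seg) == 47
                 header = bytes([((sn % 16) << 4) | 0x0])   # SN in high nibble, dummy SNP
                 payloads.append(header + seg)              # 48 bytes handed to the ATM layer
             return payloads

         out = aal1_sar([bytes(47), bytes(47)])
         print([len(p) for p in out])   # -> [48, 48]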
        AAL2
        Initially, it was designed to support applications that require variable-data rate. However, now it has
        been redesigned to support low bit rate traffic and short-frame traffic such as audio, video or fax. The
        AAL2 multiplexes short frames into a single cell. Here, the CS sublayer appends a 3-byte header to the
        short packets received from the upper layers and then passes them to the SAR layer. The SAR layer
        combines the short frames to form 47-byte frames and adds a 1-byte header to each frame. Then, it
        passes the 48-byte frames to the ATM layer.
            The header added by CS sublayer consists of five fields (Figure 9.25(a)) which are described as
        follows:
   Channel Identifier (CID): It is an 8-bit long field that specifies the channel (user) of the packet.
   Length Indicator (LI): It is a 6-bit long field that indicates the length of data in a packet.
   Packet Payload Type (PPT): It is a 2-bit long field that specifies the type of a packet.
   User-to-User Indicator (UUI): It is a 3-bit long field that can be used by end-to-end users.
   Header Error Control (HEC): It is a 5-bit long field that is used to correct errors in the header.
    The header added by SAR consists of only one field [Figure 9.25(b)], start field (SF), that specifies
         an offset from the beginning of the packet.
        AAL3/4
        Originally, AAL3 and AAL4 were defined separately to support connection-oriented and connectionless
        services, respectively. However, later they were combined to form a single format AAL3/4. Thus, it supports
both connection-oriented and connectionless services. Here, the CS sublayer forms a PDU by inserting a header at the beginning of a frame and appending a trailer to it. It passes the PDU to the SAR sublayer, which partitions
        the PDU into segments and adds a 2-byte header to each segment. It also adds a trailer to each segment.
            The header and trailer added by the CS layer together consist of six fields (Figure 9.26) that are
        described as follows:
   Common Part Identifier (CPI): It is an 8-bit long field that helps to interpret the subsequent fields.
   Begin Tag (Btag): It is an 8-bit long field that indicates the beginning of a message. The value of this field is the same for all the cells that correspond to a single message.
   Buffer Allocation Size (BAsize): It is a 16-bit long field that specifies to the receiver the buffer size needed to hold the incoming data that is to be transmitted.
   Alignment (AL): It is an 8-bit long field that is added to make the trailer 4 bytes long.
   Ending Tag (Etag): It is an 8-bit long field that indicates the end of the message. It has the same value as that of Btag.
   Length (L): It is a 16-bit long field that specifies the length of the data unit.
[Figure 9.26 CS Header and Trailer: (a) CS header with CPI (8 bits), Btag (8 bits) and BAsize (16 bits); (b) CS trailer with AL (8 bits), Etag (8 bits) and L (16 bits)]
    The header and trailer added by the SAR sublayer together consist of five fields (Figure 9.27) that are described as follows:
   Segment Type (ST): It is a 2-bit long field that specifies the position of a segment corresponding to a message.
   Sequence Number (SN): It is a 4-bit long field that specifies the sequence number.
   Multiplexing Identifier (MID): It is a 10-bit long field that identifies the flow of data to which the incoming cells belong.
   Length Identifier (LI): It is a 6-bit long field in the trailer that specifies the length of data in the packet, excluding padding.
   CRC: It is a 10-bit long field that contains a CRC computed over the entire data unit.
[Figure 9.27 SAR Header and Trailer: ST (2 bits), SN (4 bits), MID (10 bits), LI (6 bits), CRC (10 bits)]
        AAL5
        This layer supports both connection-oriented and connectionless data services. It assumes that all cells
        corresponding to single message follow one another in a sequential order and the upper layers of the
application provide the control functions. This layer is also known as the simple and efficient adaptation
        layer (SEAL). Here, the CS sublayer appends a trailer to the packet taken from upper layers and then
        passes it to the SAR layer. The SAR layer forms 48-bytes frames from it and then passes them to the
        ATM layer.
   The trailer added by the CS layer consists of four fields (Figure 9.28) that are described as follows:
   User-to-User (UU): It is an 8-bit-long field that is used by end-to-end users.
   Common Part Identifier (CPI): It is an 8-bit-long field that serves the same function as that in the CS sublayer of AAL3/4.
   Length (L): It is a 16-bit-long field that specifies the length of data.
   CRC: It is a 32-bit-long field, which is used for error detection.
[Figure 9.28 CS Trailer of AAL5: UU (8 bits), CPI (8 bits), L (16 bits), CRC (32 bits)]
At the sender's end, an STS multiplexer is used that multiplexes the signals coming from various electrical sources into
        the corresponding optical carrier (OC) signal. This optical signal passes through the SONET link and
finally, reaches the receiver. At the receiver's end, STS demultiplexer is used that demultiplexes the OC
        signals into the corresponding electrical signals.
           Add/drop multiplexer is used in the SONET link to insert or remove signals. It can combine the STSs
        from several sources into a single path or extract some desired signal and send it to some other path
        without demultiplexing. In SONET, the signals multiplexed by the STS multiplexer (optical signals) are
        passed through regenerator, which regenerates the weak signals. The regenerated signals are then passed
        to add/drop multiplexer that transmits them in the directions as per the information available in data
frames (Figure 9.29). The main difference between an add/drop multiplexer and an STS multiplexer is that the add/drop multiplexer does not demultiplex the signals before delivering them.
[Figure 9.29 A SONET link: an STS multiplexer at the sending terminal, regenerators along the path, an add/drop multiplexer (ADM) where signals can be added or dropped, and an STS demultiplexer at the receiving terminal]
             Line Layer: The line layer takes care of the movement of signal across a physical line. At this
               layer, the line layer overhead is added to the frame. The line layer functions are provided by STS
               multiplexers and add/drop multiplexers.
             Path Layer: The movement of a signal from the optical source to the optical destination is the
     responsibility of the path layer. The signal is changed from its electronic form into optical form at the optical source and then multiplexed with other signals, finally being encapsulated
               into a frame. The received frame is demultiplexed and the individual optical signals are changed
               into their electronic form at the optical destination. The STS multiplexers are used to provide path
               layer functionalities. At this layer, the path layer overhead is added to the signal.
              35. What are virtual tributaries?
            Ans: SONET was originally introduced to hold the broadband payloads. However, the data rates of
         the current digital hierarchy ranging from DS-1 to DS-3 are lower than STS-1. Thus, virtual tributary
        (VT) was introduced to make the SONET compatible with the present digital hierarchy. A VT is a partial
        payload, which can be filled into an STS-1 frame and combined with many other partial payloads to
         cover the entire frame. The VTs filled in the STS-1 frame are organized in form of rows and columns.
         There are four types of VTs, which have been defined to make SONET compatible with the existing
         digital hierarchies. These four categories are as follows:
           VT1.5: This VT adapts to the US DS-1 service and provides a bit rate of 1.544 Mbps. It gets three
               columns and nine rows.
           VT2: This VT adapts to the European CEPT-1 service and provides a bit rate of 2.048 Mbps. It gets
               four columns and nine rows.
   VT3: This VT adapts to the DS-1C service and provides a bit rate of 3.152 Mbps. It gets six
               columns and nine rows.
   VT6: This VT adapts to the DS-2 service and provides a bit rate of 6.312 Mbps. It gets 12 columns
               and nine rows.
      (c) Virtual circuit does not require set up of a connection before transmission.
      (d) all of these
   8. Which of the following statements is false?
      (a) In Frame Relay, frame length is not fixed.
      (b) Frame Relay was developed for real-time applications.
      (c) ATM uses fixed-size cells.
      (d) none of these
   9. Which of the following is a type of interface in an ATM network?
      (a) user-to-network
      (b) network-to-network
      (c) user-to-user
      (d) both (a) and (b)
  10. AAL1 is a
      (a) connection-less service
      (b) connection-oriented service
      (c) both (a) and (b)
      (d) none of these
        Answers
        1. (b)     2. (a) 3. (b)    4. (c) 5. (d)     6. (a)      7. (c)   8. (b)   9. (d)   10. (b)
         • If the delay increases, packets may be retransmitted by the sender, making the situation worse.
                • If congestion is not controlled, the overall performance of the network degrades as the network
                     resources would be used for processing the packets that have actually been discarded.
      Internetworking: Internetworking is used to connect different network technologies together.
               Certain issues related to internetworking are as follows:
                • The packets should be able to pass through many different networks.
                • Different networks may have different frame formats. Thus, the network layer should support
                     multiple frame formats.
                • The network layer should support both connectionless and connection-oriented networks.
[Figure 10.1 An IPv4 Address in Binary and Dotted-Decimal Notation: the same 32-bit address shown in dotted-decimal notation as 221.56.7.78]
        Classful Addressing
The traditional IP addressing scheme used the concept of classes and is therefore known as classful addressing. IPv4 addressing is classful: the entire address space is divided into five classes, namely, A, B, C, D and E, with each class covering a specific portion of the address space. Each class is further divided into a fixed number of blocks, each of fixed size. Given an IPv4 address, its class can be identified by looking at a few bits of the first byte if the address is represented in binary notation, or at the first byte if the address is represented in dotted-decimal notation (Table 10.1).
           The blocks of class A, B and C addresses were granted to different organizations depending on their
        size. The large organizations using a vast number of hosts or routers were assigned class A addresses.
        The class B addresses were designed to be used by mid-size organizations having tens of thousands of
        hosts or routers while the class C addresses were designed for small organizations having small number
        of hosts or routers. The class D addresses were projected to be used for multicasting where each address
identifies a specific group of hosts on the Internet. Only a few class E addresses were used, while the rest were kept reserved for future use. The main problem with classful addressing was that a large
        number of available addresses were left unused resulting in a lot of wastage.
    Each IP address in classes A, B and C is divided into two parts: the net ID that identifies the network on the Internet and the host ID that identifies the host on that network. The size of each part is different for the different classes. In a class A address, the net ID is identified by one byte and the host ID by three bytes. In a class B address, the net ID is identified by two bytes and the host ID by two bytes, and in a class C address, three bytes specify the net ID and one byte specifies the host ID.
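         The class of an address and its net ID/host ID split can be found programmatically. The sketch below uses the standard first-byte ranges (0-127, 128-191, 192-223, 224-239, 240-255); the function name is an illustrative assumption:

         def classify(address):
             """Return the class of a dotted-decimal IPv4 address and its net/host split."""
             octets = address.split(".")
             first = int(octets[0])
             if first < 128:
                 return "A", ".".join(octets[:1]), ".".join(octets[1:])
             if first < 192:
                 return "B", ".".join(octets[:2]), ".".join(octets[2:])
             if first < 224:
                 return "C", ".".join(octets[:3]), ".".join(octets[3:])
             if first < 240:
                 return "D", None, None    # multicast: no net/host split
             return "E", None, None        # reserved

         print(classify("132.7.21.84"))    # -> ('B', '132.7', '21.84')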
        Classless Addressing
There were certain problems with classful addressing, such as address depletion and the inability to give more organizations access to the Internet. To overcome these problems, classful addressing was replaced with classless addressing.
 As the name of the addressing scheme implies, the addresses are not divided into classes; rather, they are divided into blocks, and the size of a block varies according to the size of the entity to which the addresses are to be allocated. For instance, only a few addresses may be allocated to a very small organization, while a larger organization may obtain thousands of addresses. IPv6 addressing is also classless.
             The Internet authorities have enforced certain limitations on classless address blocks to make the
         handling of addresses easier. These limitations are as follows:
           The addresses of a block must be contiguous.
           Each block must have a power of 2 (that is, 1, 2, 4, 8…) number of addresses.
           The first address in a block must be evenly divisible by the total number of addresses in that block.
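         These restrictions can be verified mechanically by treating an IPv4 address as a 32-bit integer, as in the illustrative helper below (the sample block is made up for the example):

         def to_int(address):
             """Convert a dotted-decimal IPv4 address to a 32-bit integer."""
             a, b, c, d = (int(x) for x in address.split("."))
             return (a << 24) | (b << 16) | (c << 8) | d

         def valid_block(first_address, count):
             """Check the power-of-2 size and alignment rules for a classless block."""
             power_of_two = count > 0 and (count & (count - 1)) == 0
             aligned = to_int(first_address) % count == 0
             return power_of_two and aligned    # contiguity is implied by "first + count"

         print(valid_block("205.16.37.32", 16))   # -> True  (32 is divisible by 16)
         print(valid_block("205.16.37.40", 16))   # -> False (40 is not divisible by 16)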
           Since many of the digits in the IP address represented in Figure 10.2 are zeros, the address even in
        hexadecimal format is still too long. However, it can be abbreviated by omitting leading (but not trail-
        ing) zeros of a section. For example, 0037 in the second section can be written as 37 and 0000 in third,
        fourth, fifth and seventh section can be written as 0. Figure 10.3 shows the abbreviated form of a hexa-
        decimal address after omitting leading zeros of a section.
    As there are still many consecutive sections containing only zeros in the address shown in Figure 10.3, the address can be abbreviated further. A run of consecutive zero sections can be eliminated and replaced with a double colon (::), as shown in Figure 10.4. However, there is one limitation with this type of abbreviation: it can be applied only once per address. That is, if there are two or more runs of consecutive zero sections in a single address, only one run can be replaced with the double colon, but not the others.
[Figure 10.3 Abbreviated Form of IPv6 Address: FABD : 37 : 0 : 0 : 0 : ABCF : 0 : FFFF]
[Figure 10.4 More Abbreviated Form of IPv6 Address: FABD : 37 : : ABCF : 0 : FFFF]
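         Python's standard ipaddress module applies exactly these two abbreviation rules, so it can be used to check the example above:

         import ipaddress

         full = "FABD:0037:0000:0000:0000:ABCF:0000:FFFF"
         # Leading zeros are dropped and the longest run of zero sections becomes "::".
         print(ipaddress.IPv6Address(full).compressed)   # -> fabd:37::abcf:0:ffff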
             An IP address with all 1s (that is, 255.255.255.255) indicates broadcast on this network. This
               address is used for forwarding the packet to all the hosts on the local network.
              An IP address with net ID all 0s and a proper host ID (see diagram below) identifies a specific host
                on the local network.
[Diagram: net ID = 000 ... 0 | host ID]
             An IP address with a proper net ID, and host ID containing all 1s (see diagram below) indicates
               broadcast to some distant LAN in the Internet.
     An IP address of the form 127.aa.bb.cc (see diagram below) indicates a reserved address used for
               loopback testing.
[Diagram: 127 | anything]
            The address mask (also simply called mask) identifies which part of an IP address defines the net ID
        and which part defines the host ID. To determine this, the IP address and mask are compared bit by bit.
        If a given mask bit is 1, its corresponding bit in IP address will form a part of net ID. On the other hand,
        if mask bit is 0, its corresponding bit in IP address will form a part of host ID. For example, consider an
        IP address 132.7.21.84 (that is, 10000100 00000111 00010101 01010100) that belongs to class B. It is
        clear from Table 10.3 that the mask for class B addresses is 255.255.0.0 (that is, 11111111 11111111
        00000000 00000000). Now, if we compare given IP address with its corresponding mask, we can easily
determine that 132.7 defines the net ID and 21.84 defines the host ID.
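         The bit-by-bit comparison just described amounts to a bitwise AND of the address with the mask (and with the inverted mask for the host ID). A quick check of the example above (the helper names are illustrative):

         def to_int(address):
             a, b, c, d = (int(x) for x in address.split("."))
             return (a << 24) | (b << 16) | (c << 8) | d

         def to_dotted(value):
             return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

         ip, mask = to_int("132.7.21.84"), to_int("255.255.0.0")
         print(to_dotted(ip & mask))                 # -> 132.7.0.0  (net ID part)
         print(to_dotted(ip & ~mask & 0xFFFFFFFF))   # -> 0.0.21.84  (host ID part)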
Subnetting is used to split the network into many subnets. When subnetting is done, the network is divided internally,
        however, it appears as a single network to the outside world. For example, in a college campus network
        the main router may be connected to an Internet service provider (ISP) and different departments may
        use their own routers, which are connected to the main router. Here, each department with a router and
        some communication lines form a subnet, however to the outside world, the whole campus appears as a
        single network (Figure 10.5).
[Figure 10.5 A campus network divided into subnets: each department's router and PCs form a subnet, and all department routers connect to the main router, which in turn connects to the ISP]
    Now, a problem arises when a packet comes to the main router: how does it know to which subnet the packet is to be delivered? One way is to maintain a table in the main router having an entry corre-
        sponding to each host in the network along with the router being used for that host. Though this scheme
        seems fine, it is not feasible as it requires a large table and many manual operations as hosts are added or
        deleted. Thus, a more feasible scheme was introduced, where a part of the host address is used to define
a subnet number. For example, in a class B address where the host ID is 16 bits long, six bits can be used to define the subnet number and the remaining 10 bits can be used to define the host ID. This would lead to up
        to 64 subnetworks, each with a maximum of 1,022 hosts.
            For the implementation of subnetting, the main router requires subnet mask that shows the division of
        addresses between network plus subnet number and host ID. For example, Figure 10.6 shows a subnet
        mask written in binary notation for a subnetted class B address, where six bits have been used to repre-
        sent the subnet number and 10 bits for host ID. The subnet mask can also be written in dotted-decimal
        notation. For example, for the IP address shown in Figure 10.6, the subnet mask in dotted-decimal
notation can be written as 255.255.252.0 (the 16 network bits plus six subnet bits give 22 one bits). Alternatively, slash (/) notation, also called CIDR (classless interdomain routing) notation, can also be used to represent the subnet mask. In this notation, a slash followed by the number of bits in the network plus subnet number defines the subnet mask. For example, the IP address shown in Figure 10.6 can be defined in slash notation as /22, where 22 is the size of the subnet mask. Notice that /22 indicates that the first 22 bits of the subnet mask are 1s while the remaining 10 bits are 0s.
[Figure 10.6 A Class B Network Address Subnetted into 64 Subnets: the subnet mask has 22 one bits (16 for the network and 6 for the subnet) followed by 10 zero bits covering the host ID]
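         The figures quoted above (64 subnets, 1,022 hosts per subnet and the /22 mask) follow directly from six subnet bits and ten host bits, as the short check below shows (the variable names are illustrative):

         subnet_bits, host_bits = 6, 10
         print(2 ** subnet_bits)          # -> 64 subnets
         print(2 ** host_bits - 2)        # -> 1022 usable hosts per subnet

         mask = (0xFFFFFFFF << host_bits) & 0xFFFFFFFF           # 22 leading 1s
         print(".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0)))   # -> 255.255.252.0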
            10. What is supernetting? Why it is needed?
           Ans: Supernetting is just the opposite of subnetting. There was a time when all the class A and B
addresses were used up, but there was still a great demand for address blocks from mid-size organizations. The class C address blocks were also not suitable for serving the needs of those organizations; more addresses were required. As a solution to this problem, supernetting was used. In this method, several
         class C blocks can be combined by an organization to create a large address space, that is, smaller
         networks can be combined to create a super-network or supernet. For example, an organization having
          the requirement of 1,000 addresses can be allocated four contiguous blocks of class C addresses.
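         For instance, four contiguous class C (/24) blocks share their first 22 bits and can therefore be treated as one /22 supernet. Python's ipaddress module can perform this aggregation; the sample prefixes below are made up for the example:

         import ipaddress

         # Four contiguous class C blocks combine into one supernet of 1,024 addresses.
         blocks = [ipaddress.ip_network(f"200.10.{i}.0/24") for i in range(4, 8)]
         supernet = list(ipaddress.collapse_addresses(blocks))[0]
         print(supernet, supernet.num_addresses)   # -> 200.10.4.0/22 1024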
      11. What is IP spoofing?
    Ans: IP spoofing is a mechanism that is used to hide the original IP address by replacing it with
        an arbitrary IP address. The IP datagrams, which are transferred over a network, carry the sender’s IP
        address apart from the data from the upper layers. Any user having control over the operating system
        can place an arbitrary address into a datagram’s source address field by modifying the device protocols.
        Thus, an IP packet may be made to appear as though it is originating from an arbitrary host. IP spoofing
        is generally used in denial of service attacks to hide the originator of attack.
            If IP spoofing is destructive, it can be prevented through a mechanism known as ingress filtering.
        When it is implemented, the routers receiving the IP datagrams see their source addresses to check
        whether these addresses are in the range of network addresses known to the routers. This check is per-
        formed at the edge of a network such as a corporate gateway or a firewall
         Whenever an IP packet arrives at a router, the router examines the destination address of the packet. Then, it looks up the routing table to check whether the destination address matches some existing entry. If so, the IP packet is forwarded through the interface specified in the routing table for that matching entry.
            Routing tables are of two types: static and dynamic. Both these types are discussed as follows:
           Static routing table: This routing table is created manually. The route to each destination is entered into the table by the administrator, and any change in the network must also be made manually by the administrator. A broken link in the network would require the routing tables to be reconfigured manually and immediately so that alternate paths could be used; the table cannot be updated automatically. Static routing tables are well suited for small networks. However, in large networks such as the Internet, maintenance of static routing tables can be very tiresome.
           Dynamic routing table: This routing table is updated automatically by dynamic routing protocols such as RIP, OSPF and BGP whenever there is a change in the network. Generally, each router periodically shares its routing information with adjacent (or all) routers using the routing protocols so that information about the network can be obtained. If some change is found in the network, the routing protocols automatically update all the routing tables.
            15. Differentiate between intradomain and interdomain routing.
           Ans: Due to the ever-growing size of the Internet, using only one routing protocol to update the routing tables of all routers is not sufficient. Therefore, the network is divided into various autonomous systems. An autonomous system (AS) is a group of networks and routers controlled by a single administrator. Thus, a network can be seen as a large collection of autonomous systems. The routing done inside an autonomous system is known as intradomain routing. One or more intradomain routing protocols may be used to handle the traffic inside an autonomous system; the two most commonly used intradomain routing approaches are distance vector routing and link state routing. On the other hand, routing done between different autonomous systems is known as interdomain routing. Only one interdomain routing protocol is used to handle the routing between autonomous systems; the most popular interdomain routing approach is path vector routing.
            16. Explain the following algorithms in brief.
            (a) Flooding
            (b) Flow-based routing
           Ans: (a) Flooding: Flooding is a static routing algorithm that works by forwarding packets. Here, every packet arriving at a router is forwarded (flooded) on all the outgoing lines of the router, except the one through which it has arrived. Due to this bulk forwarding of packets, a large number of duplicate packets are generated. Flooding can be controlled by discarding the duplicate packets rather than resending them. Some preventive measures that can be used to control flooding are as follows (a small sketch of the first two measures appears after this list):
           A hop counter can be included in the header of each packet. Initially, the hop counter can be set to a value equal to the path length from source to destination. If this length is not known, the counter can be initialized to the full diameter of the subnet. As the packet reaches each hop, the counter is decremented by one and finally, when it becomes zero, the packet is discarded.
           Another technique to prevent duplication is to keep track of the packets that have already been flooded so that they are not sent a second time. This technique requires a sequence number to be included in every packet that is to be forwarded; the sequence number is entered by the source router whenever it receives a packet from a host in its network. Every router maintains a list per source router indicating the sequence numbers from that source router that have already been seen. A packet is not flooded if it is already on the list.
           A variation of flooding, known as selective flooding, can also be used to overcome the problem. In this algorithm, an incoming packet is not sent out on every outgoing line but only on those lines that are going approximately in the right direction.
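         The following Python sketch (an illustration, not taken from the book) shows how a router might apply the first two measures above: it keeps a per-source set of already-seen sequence numbers and decrements a hop counter before flooding a packet on its other lines.

         from dataclasses import dataclass, field, replace

         @dataclass
         class Packet:
             source: str      # router that injected the packet into the subnet
             seq: int         # sequence number assigned by that source router
             hops_left: int   # hop counter, decremented at every hop

         @dataclass
         class Router:
             name: str
             lines: dict = field(default_factory=dict)  # neighbour name -> Router object
             seen: dict = field(default_factory=dict)   # source name -> set of seen sequence numbers

             def receive(self, pkt: Packet, arrived_from) -> None:
                 # Measure 2: drop duplicates already flooded from this source router.
                 seen = self.seen.setdefault(pkt.source, set())
                 if pkt.seq in seen:
                     return
                 seen.add(pkt.seq)
                 # Measure 1: drop the packet once its hop counter is exhausted.
                 if pkt.hops_left == 0:
                     return
                 # Flood a fresh copy on every line except the one the packet arrived on.
                 for neighbour_name, neighbour in self.lines.items():
                     if neighbour_name != arrived_from:
                         neighbour.receive(replace(pkt, hops_left=pkt.hops_left - 1), self.name)

         # Example: three routers in a line A - B - C.
         a, b, c = Router("A"), Router("B"), Router("C")
         a.lines = {"B": b}; b.lines = {"A": a, "C": c}; c.lines = {"B": b}
         a.receive(Packet(source="A", seq=1, hops_left=3), arrived_from=None)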
         A few applications where flooding is useful are as follows:
           It is used in distributed database applications where the need arises to update all the databases at the same time.
           It is used in wireless networks, where a message transmitted by one station can be received by all other stations within the range of that network.
           It can be used as a benchmark against which other routing algorithms are compared, because flooding always yields the shortest possible delay; no other algorithm can produce a shorter one.
            17. What is shortest path routing? Explain with the help of a suitable example.
           Ans: Shortest path routing is a static routing algorithm based on the concept of graphs. Here, the whole subnet is depicted as a graph in which the nodes represent the routers and the edges represent the links between two routers. The optimal route between two routers is determined by finding the shortest path between them. The metric used in shortest path routing can be the number of hops, the distance or the transmission delay; whichever metric is used is represented as the weights assigned to the edges of the graph.
            There are many algorithms for computing the shortest path between two nodes of a graph representing a network. However, Dijkstra's algorithm is the one most often used to calculate the shortest path. In this algorithm, each node is labelled with a value equal to its distance from the source node along the best path known so far. Since no paths are known initially, all nodes are labelled with infinity. As the algorithm proceeds, paths are found and the labels may change. The labels on the nodes are divided into two categories: tentative and permanent. Initially, each label is tentative. Once it is assured that a label represents the shortest distance, it is made permanent.
            To understand Dijkstra's algorithm, consider the sample subnet graph shown in Figure 10.8, where the weights on the edges represent the distance between the nodes connected by each edge. Suppose the shortest path has to be calculated from node A to node D.
                 [Graph with nodes A to H connected by weighted edges: A–B 4, A–G 12, B–C 14, B–E 4,
                  C–D 6, C–F 6, E–F 4, E–G 2, F–H 4, G–H 8, H–D 4]
                                                Figure 10.8         A Sample Subnet Graph
            The steps involved in calculating the shortest path from A to D are as follows:
            1. The node A is marked permanent (shown by the filled circle in Figure 10.9). Each adjacent neighbour of A is examined and relabelled with its distance from A. For example, B is relabelled as (4, A), which indicates that the distance of B from A is 4. Similarly, G is relabelled as (12, A). The rest of the nodes remain labelled with infinity. Thus, the new tentative nodes are B and G.
                              [Labels: B(4, A), G(12, A); all other nodes still (∞, −)]
                                              Figure 10.9       Node A Marked as Permanent
            2. Of the tentative nodes B and G, B has the shorter distance from A, so it is selected and marked permanent as shown in Figure 10.10. Now each adjacent neighbour of B is examined and relabelled with its distance via B. Notice that a node is relabelled only if its existing label is greater than the sum of the label on the node just made permanent and the distance between the two nodes. For example, while examining node C with respect to node B, we find that the sum of the label on B (that is, 4) and the distance between B and C (that is, 14) is 18, which is less than the existing label on C (that is, ∞). Thus, node C is relabelled as (18, B). Similarly, E is relabelled as (8, B).
                              [Labels: B(4, A), C(18, B), E(8, B), G(12, A); F, H, D still (∞, −)]
                                           Figure 10.10         Node B Marked as Permanent
            3. Of the tentative nodes C and E, E has the shorter distance, so it is selected and marked permanent as shown in Figure 10.11. Now each adjacent neighbour of E is examined and relabelled with its distance via E. As the node G can be reached via E with a shorter distance (that is, 10) than its previous distance (that is, 12), G is relabelled as (10, E). Similarly, F is relabelled as (12, E).
                              [Labels: B(4, A), C(18, B), E(8, B), F(12, E), G(10, E); H, D still (∞, −)]
                                          Figure 10.11     Node E Marked as Permanent
            4. Although, of the tentative nodes G and F, G has the shorter distance, it cannot be selected as it would result in a loop. Therefore, node F is selected and marked permanent as shown in Figure 10.12, and its adjacent neighbours are examined. While examining the neighbouring nodes, the node E is not considered because F was reached via E and we cannot go back. In addition, as the existing label on node C (that is, 18) is equal to the sum of the distance between F and C (that is, 6) and the label on F (that is, 12), node C is not relabelled. In contrast, node H is relabelled as (16, F).
                              [Labels: B(4, A), C(18, B), E(8, B), F(12, E), G(10, E), H(16, F)]
                                          Figure 10.12     Node F Marked as Permanent
            5. The node H is marked permanent as shown in Figure 10.13, and its adjacent neighbour D is examined. The node D is relabelled as (20, H). Since the intended destination D has been reached, the node D is also marked permanent. Thus, the shortest path from A to D is A–B–E–F–H–D as shown in Figure 10.13, and the shortest distance from A to D is 20.
                              [Path A – B(4, A) – E(8, B) – F(12, E) – H(16, F) – D(20, H); other labels: C(18, B), G(10, E)]
                                            Figure 10.13     Shortest Path from A to D
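            As an illustration (not part of the book's text), the following Python sketch runs Dijkstra's algorithm on a graph whose edge weights are taken from the worked example above; it reproduces the path A–B–E–F–H–D with distance 20.

            import heapq

            # Edge weights as used in the worked example (Figure 10.8).
            graph = {
                "A": {"B": 4, "G": 12},
                "B": {"A": 4, "C": 14, "E": 4},
                "C": {"B": 14, "D": 6, "F": 6},
                "D": {"C": 6, "H": 4},
                "E": {"B": 4, "F": 4, "G": 2},
                "F": {"C": 6, "E": 4, "H": 4},
                "G": {"A": 12, "E": 2, "H": 8},
                "H": {"D": 4, "F": 4, "G": 8},
            }

            def dijkstra(graph, source, destination):
                dist = {node: float("inf") for node in graph}   # tentative labels
                prev = {}                                       # predecessor on the best path
                dist[source] = 0
                heap = [(0, source)]
                permanent = set()
                while heap:
                    d, node = heapq.heappop(heap)
                    if node in permanent:
                        continue
                    permanent.add(node)                         # label becomes permanent
                    if node == destination:
                        break
                    for neighbour, weight in graph[node].items():
                        if d + weight < dist[neighbour]:        # relabel only if strictly shorter
                            dist[neighbour] = d + weight
                            prev[neighbour] = node
                            heapq.heappush(heap, (dist[neighbour], neighbour))
                # Reconstruct the path by walking the predecessor labels backwards.
                path, node = [destination], destination
                while node != source:
                    node = prev[node]
                    path.append(node)
                return list(reversed(path)), dist[destination]

            print(dijkstra(graph, "A", "D"))   # (['A', 'B', 'E', 'F', 'H', 'D'], 20)

            Note that the standard algorithm makes G permanent before F rather than skipping it; the resulting path and distance are the same as in the hand-worked steps.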
             18. Explain distance vector routing algorithm with the help of a suitable example.
            Ans: Distance vector routing (also known as Bellman–Ford routing) is a dynamic routing algorithm. In this method, each router maintains a table (known as a vector) in which routing information about the distance and route to each destination is stored. The table contains an entry for each router in the network, specifying the preferred outgoing line and the distance to that router. Each router periodically exchanges its routing information with its neighbours and updates its table accordingly. Thus, each router knows the distance to each of its neighbours. To make the routing decisions, the metric used may be the total number of packets queued along the path, the number of hops, the time delay in milliseconds, etc. For example, if delay is used as the metric, then every t milliseconds each router sends its neighbours its estimated delays to reach every destination in the network. Suppose one such table has just arrived at router J from its neighbour A, showing that A can reach router I in an estimated time of Ai milliseconds. If router J knows that its own delay to A is n milliseconds, then it can reach router I via A in Ai + n milliseconds. By performing this calculation for each neighbour, the optimum estimate is found.
            To understand how the calculations are performed in distance vector routing, consider a subnet consisting of routers and networks as shown in Figure 10.14(a). Suppose at some instant the router D receives the delay vectors from its neighbours A, C and E as shown in Figure 10.14(b). Further, assume that the estimated delays from D to its neighbours A, C and E are 5, 8 and 6, respectively. The router D can now calculate the delays to all the other routers in the network. Suppose D wants to compute the route to B using the available delay information. D has a delay of 5 ms to reach A, while A takes 10 ms to reach B [Figure 10.14(b)], so D would take a total of 15 ms to reach B via A. In the same way, D can calculate the delays to reach B using the lines to C and E as 18 (11 + 7) and 31 (9 + 12 + 10) ms, respectively. Since the optimum value is 15 ms, the router D updates its routing table, recording that the delay to B is 15 ms via the route through A. The same procedure can be followed by D to calculate the delays to all the other routers and then make the entries in its routing table as shown in Figure 10.14(c).
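            A minimal Python sketch of a single distance-vector update follows (illustrative only; the link delays and neighbour vectors below are hypothetical, except for the 5 ms link to A and A's 10 ms estimate to B mentioned above).

            # One distance-vector update at a router: for every destination, take the
            # minimum over all neighbours of (delay to neighbour + neighbour's advertised
            # delay to the destination), and remember which neighbour gave the minimum.

            link_delay = {"A": 5, "C": 8, "E": 6}     # delay from this router (D) to each neighbour

            # Vectors advertised by the neighbours (destination -> estimated delay).
            # Each neighbour advertises a delay of 0 to itself; only A's 10 ms estimate
            # to B comes from the example, the other figures are hypothetical.
            advertised = {
                "A": {"A": 0, "B": 10, "C": 12, "E": 9},
                "C": {"C": 0, "B": 15, "A": 11, "E": 4},
                "E": {"E": 0, "B": 30, "A": 8, "C": 4},
            }

            routing_table = {}
            for neighbour, vector in advertised.items():
                for destination, their_delay in vector.items():
                    total = link_delay[neighbour] + their_delay
                    best = routing_table.get(destination)
                    if best is None or total < best[0]:
                        routing_table[destination] = (total, neighbour)

            for destination, (delay, via) in sorted(routing_table.items()):
                print(f"to {destination}: {delay} ms via {via}")
            # e.g. "to B: 15 ms via A", as in the worked example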
             19. What is the count-to-infinity problem in the distance vector routing algorithm?
            Ans: Theoretically, the distance vector routing algorithm works efficiently, but in practice it has a serious drawback: although it eventually converges to the correct answer, it may take a long time to do so. In particular, the algorithm reacts quickly to good news but very slowly to bad news. For example, consider a router whose best route to a destination X has a large delay. If, on the next neighbour exchange, one of its neighbours (say, A) declares a short delay to X, the router starts using the line through A to reach X. The good news is thus processed by the router within a single vector exchange, and it spreads further as the router's own neighbours receive it.
            To understand how fast good news spreads, consider a linear subnet consisting of five nodes [Figure 10.15(a)]. Suppose the metric used in the subnet is the number of hops. In the beginning, the node A is down and therefore all the routers have recorded their delay to A as infinity. When A becomes functional, the other routers learn of it through the vector exchanges. At the first vector exchange, B learns that its left neighbour A has zero delay to A, so it makes an entry of 1 in its routing table, as A is just one hop away from it. At the second vector exchange, the router C comes to know that B is just one hop
                 (a) Good news: A comes up                       (b) Bad news: A goes down
                 Distances to A recorded by B, C, D, E           Distances to A recorded by B, C, D, E
                  ∞   ∞   ∞   ∞   Initially                       1   2   3   4   Initially
                  1   ∞   ∞   ∞   after 1 exchange                3   2   3   4   after 1 exchange
                  1   2   ∞   ∞   after 2 exchanges               3   4   3   4   after 2 exchanges
                  1   2   3   ∞   after 3 exchanges               5   4   5   4   after 3 exchanges
                  1   2   3   4   after 4 exchanges               5   6   5   6   after 4 exchanges
                                                                  7   6   7   6   after 5 exchanges
                                                                  7   8   7   8   after 6 exchanges
                                                                  ...
                                                                  ∞   ∞   ∞   ∞
                                                  Figure 10.15    The Count-to-Infinity Problem
         away from A, so C updates its table, making an entry of two hops to A. Similarly, D and E update their routing tables after the third and fourth exchanges, respectively. Thus, the news about the recovery of A propagates at the rate of one hop per exchange.
             However, the situation is very different in the case of bad news. To understand the propagation of bad news, consider the situation shown in Figure 10.15(b). Here, initially all routers are functioning properly, and the routers B, C, D and E are 1, 2, 3 and 4 hops away from A, respectively. Now, if A goes down all of a sudden, the other routers still think A is functional. At the first vector exchange, no vector is received from A, but node B sees in the vector received from C that C has a path of length 2 to A. As B does not know that the path from C to A goes through B itself, it updates its routing table with a distance to A of 3, via C. At the second vector exchange, C sees that B's distance to A is 3, so it updates its own distance to A to 4, via B. Similarly, all the routers continue to update their routing tables, increasing their distance to A after every vector exchange. Eventually, all routers set their distance to A to infinity, where infinity is taken to be the longest path plus one. This problem is known as the count-to-infinity problem.
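            The slow convergence can be reproduced with a short simulation. The Python sketch below (illustrative, not from the book) performs synchronous vector exchanges on the linear subnet A–B–C–D–E after A goes down; it prints the same sequence of distances as the bad-news case in Figure 10.15(b).

            # Linear subnet A-B-C-D-E; A has just gone down. Each remaining router keeps
            # its current estimate of the hop count to A and, at every exchange, recomputes
            # it as 1 + the smallest estimate advertised by its still-working neighbours.
            INF = 16                      # "infinity" taken as the longest path plus one (RIP-style)
            neighbours = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
            dist = {"B": 1, "C": 2, "D": 3, "E": 4}   # estimates just before A crashed

            print("Initially:          ", dist)
            for exchange in range(1, 7):
                advertised = dict(dist)   # everyone advertises last round's estimates
                for router in dist:
                    best = min(advertised[n] for n in neighbours[router])
                    dist[router] = min(1 + best, INF)
                print(f"after {exchange} exchange(s):", dist)
            # The values climb 3 2 3 4 -> 3 4 3 4 -> 5 4 5 4 -> ... exactly as in Figure 10.15(b)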
             20. Explain the link state routing protocol.
            Ans: Link state routing is a dynamic routing algorithm that was designed to replace its predecessor, distance vector routing. Distance vector routing has certain limitations: it does not take the line bandwidth into account while choosing among routes, and it suffers from the count-to-infinity problem. To overcome these limitations, the link state routing algorithm was devised.
            In link state routing, information such as the network topology, the metric used and the type and condition of the links is made available to each router (node). Therefore, each router can use Dijkstra's algorithm to compute the shortest path to every other router and then build its routing table. Link state routing involves the following phases:
           Learning about the neighbours: Whenever a router boots up, its first step is to identify all its neighbours and obtain their network addresses. For this, the router sends a HELLO packet on each point-to-point line. Each router receiving the HELLO packet responds with its name (or network address). In this way, all the routers in the network discover their neighbours. Notice that the names of the routers must be globally unique in order to avoid ambiguity.
           Measuring the line cost: After discovering all its neighbours, the next step for a router is to estimate the delay to each of them. For this, the simplest method is to send a special ECHO packet over the line and start a timer. As soon as a router receives an ECHO packet, it immediately sends the packet back to the sender. After receiving the ECHO packet back, the sending router can measure the round-trip time and divide it by two to obtain the delay. For a better estimate, the router can send the ECHO packet several times and use the average time as the delay. While computing the delay, the traffic load in the network may or may not be taken into account. If the load is to be considered, the sending router starts the timer when the packet is queued; if not, it starts the timer when the packet reaches the front of the queue.
           Building link state packets: The next step for a router is to package the collected information into a link state packet (LSP). Each LSP comprises four fields: the identity of the sending router, a sequence number, an age and the list of its neighbouring links. The first and fourth fields together describe the network topology. The second field is the sequence number assigned to the LSP; it is incremented by one each time the router creates a new LSP. The third field indicates how long the LSP has been residing in the domain. The LSP also includes the delay to reach each adjacent neighbour. For example, Figure 10.16(b) shows the LSPs for each of the five routers of the subnet shown in Figure 10.16(a).
                 (a) Subnet: links A–B = 5, A–C = 7, B–C = 8, B–D = 4, C–E = 1, D–E = 3
                 (b) Link state packets (each carrying SEQ and AGE fields):
                     A: B 5, C 7     B: A 5, C 8, D 4     C: A 7, B 8, E 1     D: B 4, E 3     E: C 1, D 3
                                                  Figure 10.16   Subnet and Link State Packets
                  The LSPs are created by the routers periodically at regular intervals of time, or on certain occasions such as when a change occurs in the topology or when some router goes down or comes up.
           Flooding of LSPs: After creating its LSP, each router needs to distribute it to the other routers in the network. This process is referred to as flooding. The router forwards a copy of its LSP through each of its interfaces; for example, the router A sends its LSP on lines AB and AC [Figure 10.16(a)]. Each receiving router compares the received LSP against the LSPs it already has. If the sequence number of the newly arrived LSP is lower than the highest sequence number already recorded for that sender, the packet is simply discarded. Otherwise, the receiving router stores the new LSP and forwards a copy of it through all its interfaces except the one through which the LSP arrived.
           Computing new routes: After receiving all the LSPs, the final step for a router is to compute the shortest path to every possible destination using Dijkstra's algorithm (explained in Q 17) and build its routing table. The routing table of a router lists all the routers in the network, the minimum cost of reaching each of them and the next router on the path to which a packet should be forwarded.
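         A compact Python sketch of the LSP structure and of the sequence-number acceptance rule used during flooding is given below (illustrative only; the field names and layout are assumptions, not the exact packet format of any particular protocol).

         from dataclasses import dataclass, field

         @dataclass(frozen=True)
         class LSP:
             sender: str                 # identity of the sending router
             seq: int                    # sequence number, incremented for every new LSP
             age: int                    # time the LSP has been residing in the domain
             links: dict                 # neighbour -> delay, e.g. {"B": 5, "C": 7}

         @dataclass
         class LinkStateDB:
             latest: dict = field(default_factory=dict)   # sender -> newest LSP seen

             def accept(self, lsp: LSP) -> bool:
                 """Store the LSP and return True if it should be flooded onward."""
                 known = self.latest.get(lsp.sender)
                 if known is not None and lsp.seq <= known.seq:
                     return False          # older (or duplicate) information: discard
                 self.latest[lsp.sender] = lsp
                 return True               # newer information: store and flood further

         # Example: router B's database receiving A's LSP from Figure 10.16.
         db = LinkStateDB()
         print(db.accept(LSP("A", seq=1, age=0, links={"B": 5, "C": 7})))  # True
         print(db.accept(LSP("A", seq=1, age=3, links={"B": 5, "C": 7})))  # False (duplicate)

         Once the database holds an LSP from every router, the complete topology can be handed to a shortest-path routine such as the Dijkstra sketch shown in Q 17 to build the routing table.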
                 •   The HELLO messages exchanged between adjacent routers are much smaller than the vectors exchanged in distance vector routing. The LSPs of link state routing contain information only about the neighbours, while the distance vectors include entries for all the routers in the network.
                 •   In link state routing, the LSPs are exchanged between neighbours roughly every 30 min, while in distance vector routing the vectors are exchanged after a comparatively much smaller period (for example, 30 s in RIP).
           In link state routing, the optimal paths are calculated by each router independently. In distance vector routing, however, each router depends on its neighbours for updating its distance vector, resulting in slower processing.
           Alternate routes are also possible in link state routing, whereas in distance vector routing only specific routes are used.
           Multiple cost metrics can be used in link state routing and the optimal paths can be computed with respect to each metric separately; packets can then be forwarded based on any one metric.
             22. Explain the path vector routing protocol.
            Ans: Path vector routing is an interdomain routing protocol, which is used between various autonomous systems (ASs). In every AS, there is one node (there can be more, but only one is considered here) that acts on behalf of the whole system. This node is known as the speaker node. It creates the routing table for its AS and advertises it to the speaker nodes in the neighbouring ASs. Path vector routing is similar to distance vector routing; however, here only the speaker nodes of the ASs communicate with each other, and the advertised tables contain only the paths, not the metrics.
            To understand path vector routing, consider three ASs, AS1, AS2 and AS3, with K, L and M as their speaker nodes, respectively (Figure 10.17). Path vector routing works in phases, which are as follows:
                 [Figure 10.17  Initial routing tables of the speaker nodes — K (AS1): K, K1, K2, K3 all with path AS1;
                  L (AS2): L, L1, L2 all with path AS2; M (AS3): M, M1, M2 all with path AS3]
           Updating: When a routing table is received by a speaker node, it updates its own table by adding the destinations that are not already present, along with the path to reach them (Figure 10.18). After updating the table, each speaker node knows how to reach every node in the other ASs. For example, if node K receives a packet for node K2, it knows that the path is within AS1. However, if it receives a packet for M2, it knows that it has to follow the path AS1-AS3.
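         A small Python sketch of this updating step follows (an illustration under assumptions, not the book's algorithm): when a speaker node receives a neighbour's table, it adds any unknown destinations, prefixing its own AS to the advertised path, and skips any path that already contains its own AS — the standard path-vector loop-prevention check.

         def update_table(own_as, own_table, received_table):
             """Merge a neighbour speaker's table into this speaker node's table.

             own_table / received_table: dict mapping destination -> list of ASs on the path.
             """
             for destination, path in received_table.items():
                 if own_as in path:
                     continue                       # path would loop back through this AS
                 if destination not in own_table:
                     own_table[destination] = [own_as] + path
             return own_table

         # Speaker node K of AS1 receives the table advertised by M, the speaker of AS3.
         k_table = {"K": ["AS1"], "K1": ["AS1"], "K2": ["AS1"], "K3": ["AS1"]}
         m_table = {"M": ["AS3"], "M1": ["AS3"], "M2": ["AS3"]}
         update_table("AS1", k_table, m_table)
         print(k_table["M2"])   # ['AS1', 'AS3'] -> packets for M2 follow the path AS1-AS3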
           The first column of a routing table specifies a destination; in the case of RIP, the destination is a network, so a network address is entered in the first column of the RIP routing table.
           The metric used by RIP is the hop count, where a hop is the number of subnets traversed from the source router to the destination subnet, including the destination subnet.
           The maximum cost of a path in RIP cannot be more than 15, that is, any route in an autonomous system cannot have more than 15 hops.
           The next-node column in the routing table of RIP contains the address of the router to which the packet is to be forwarded (the first hop to be taken).
         Consider an autonomous system consisting of six networks and three routers as shown in Figure 10.19. Each router has its own routing table showing how to reach each network in the AS. Let us study the routing table of one of the routers, say R1. The networks 120.10.0.0 and 120.11.0.0 are directly connected to the router R1; therefore, R1 does not need hop-count entries for these networks. The packets destined for these two networks can be delivered directly by the router; however, such forwarding cannot be done for the other networks. For example, if the router R1 needs to send packets to the two networks on its left, it has to forward the packets to the router R3. Thus, for these two networks, the next-node column of the routing table stores the interface of router R3 with the IP address 120.10.0.1. Similarly, if R1 has to send packets to the two networks on its right, it has to forward the packets to the router R2. Thus, for these two networks, the next-node entry of the routing table is the interface of router R2 with the IP address 120.11.0.1. The routing tables of the other routers can be explained in the same way.
              (b) OSPF: Open Shortest Path First (OSPF) is an intradomain routing protocol that works within an autonomous system. It is based on the link state routing algorithm. In OSPF, the autonomous system is divided into many areas so that the routing traffic can be handled easily. Each area is formed by a combination of networks, hosts and routers within the autonomous system and is assigned a unique identification number. The networks residing in an area must be connected to each other. The routers inside an area function normally, that is, they periodically update the area with routing information.
             In OSPF, there are two types of routers, namely, the area border router and the backbone router. The area border router is situated at the boundary of an area and shares the information about its area with the other areas. The backbone router is situated in a special area (called the backbone area) among all the areas inside an autonomous system. The backbone is considered the primary area, and all the other areas must be connected to it.
                 [Figure 10.22  Transient Link and its Graphical Representation — (a) a network inside an autonomous
                  system with routers R1 to R6; (b) the same link represented graphically through a designated router]
           Stub link: This link is a special case of the transient link, in which only one router is connected to the network [Figure 10.23(a)]. The packets enter and leave the network through this single router. Graphically, it can be represented using the designated router for the network and the router as nodes [Figure 10.23(b)]. The link is unidirectional, from the router to the network.
                 [Figure 10.23  Stub Link and its Graphical Representation — (a) a network with a single attached router R1;
                  (b) the graphical representation using the designated router]
           Virtual link: When the link between two routers is broken, a new link has to be established by the administrator; this is known as a virtual link. Such a link may use a longer path covering a larger number of routers.
              (c) BGP: The Border Gateway Protocol (BGP) is an interdomain routing protocol, which came into existence in 1989. It is implemented using the concept of path vector routing. As BGP works between different ASs, it is an exterior gateway routing protocol. It follows certain policies to transfer packets between various types of ASs, and these policies have to be configured manually in each BGP router.
            In BGP, the ASs are divided into three categories, namely, stub AS, multihomed AS and transit AS. A stub AS has a single connection to one other AS. Data traffic can either originate or terminate at a stub AS, that is, a stub AS is either a source or a sink; data traffic cannot pass through it. A multihomed AS can have multiple connections to other ASs, that is, it can send data to more than one AS and receive data from more than one AS. However, it is still only a source or a sink, as it does not allow transit traffic to pass through it. A transit AS is also a multihomed AS, except that it allows third-party traffic to pass through it.
            Each BGP router maintains a routing table that keeps track of the paths being used. The routing information is exchanged periodically between the routers in a session, referred to as a BGP session. Whenever a BGP router needs to exchange routing information with a neighbour, a transmission control protocol (TCP) connection is established between the two, which marks the start of the session. Using a TCP connection provides reliable communication between the routers. However, the TCP connection used for BGP is not permanent; it is terminated as soon as some unusual event occurs.
            To understand the BGP operation, consider the set of BGP routers shown in Figure 10.24. Suppose the router H currently uses the path HDE to reach E. When the neighbours of H provide routing information, they give their complete paths to reach E. After receiving all the paths, H examines which one would be the optimal path to E. The path advertised by A is discarded, as it passes through H itself. The choice now has to be made among the paths from G, C and D. To select among them, BGP uses a scoring function that examines the available routes to a destination and scores them, returning a number indicating the distance of each route to that destination. After the paths have been scored, the one with the shortest distance is selected. As the path DE has the shortest distance, it is selected.
                 [Routers A, B, C, D, E, F, G, H; information received by H — from C: "CDE", from D: "DE",
                  from A: "AHDE", from G: "GFE"]
                      Figure 10.24   A Set of BGP Routers Along with Routing Information for H
             BGP solves the count-to-infinity problem encountered in the distance vector routing algorithm. To understand how, suppose D crashes and the line HD goes down. When H receives routing information from the remaining working neighbours A, C and G, it finds the new routes to E to be AHGFE, CHGFE and GFE, respectively. The routes through A and C are rejected, as they pass through H itself, so the route GFE is selected. Thus, the new path for H to reach E is HGFE, via G.
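            The selection rule described above can be sketched in a few lines of Python (illustrative only; scoring a path by its length is an assumption standing in for BGP's more elaborate, policy-driven scoring function).

            def best_path(router, advertised):
                """Pick a path to the destination from the neighbours' advertisements.

                advertised: dict mapping neighbour -> path (a string of router names ending
                at the destination, as in Figure 10.24). Paths that pass through this router
                are discarded; the remaining paths are scored by their length.
                """
                usable = {n: p for n, p in advertised.items() if router not in p}
                neighbour = min(usable, key=lambda n: len(usable[n]))   # assumed scoring: fewest hops
                return neighbour, usable[neighbour]

            # Routing information received by H (Figure 10.24).
            info = {"C": "CDE", "D": "DE", "A": "AHDE", "G": "GFE"}
            print(best_path("H", info))                 # ('D', 'DE')   -> H uses HDE

            # After D crashes and the line HD goes down:
            info_after = {"C": "CHGFE", "A": "AHGFE", "G": "GFE"}
            print(best_path("H", info_after))           # ('G', 'GFE')  -> H uses HGFE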
             25. Explain unicast and multicast routing.
           Ans: Unicast and multicast routing are routing techniques based on the network structure. These are explained as follows:
         Unicast Routing
         In unicast communication, there is only one source and one destination, that is, a one-to-one connection. The data packets are transferred between the source and the destination using their unicast addresses. The packets go through various routers to arrive at their destination. Whenever a packet arrives at a router, the shortest path to the intended destination has to be found. For this, the router checks its routing table to find the next hop to which the packet should be forwarded so as to reach the destination along the shortest path. If a path is not found in the table, the packet is discarded; otherwise, the packet is forwarded only through the interface specified in the routing table.
         Multicast Routing
         In multicast communication, there is one source and many destinations, that is, one-to-many communication: the packets are forwarded to one or more destinations identified collectively by a group address. Whenever a router receives a packet destined for a group address, it forwards the packet through several of its interfaces so that the packet can reach all the destinations belonging to that group. To forward a packet to a group address, the router needs to create a shortest path tree for each group. This makes multicast routing complex, as N shortest path trees need to be created for N groups. To reduce the complexity, two approaches, namely the source-based tree and the group-shared tree, can be used to construct the shortest path trees.
           Source-based tree: In this approach, each router constructs a shortest path tree for each group, which defines the next hop for each network containing members of that group. For example, consider Figure 10.25, in which there are five groups G1, G2, G3, G4 and G5 and four routers R1, R2, R3 and R4. The groups G1, G2 and G5 have members in four networks, while the groups G2 and G3 have members in two networks. Each router is required to construct five shortest path trees, one for each group, and to hold the corresponding shortest path entries in its routing table. Figure 10.25 shows a multicast routing table maintained by the router R3. Now, suppose the router R3 receives a packet destined for group G5. It forwards a copy of the packet to routers R2 and R4; further, router R2 forwards a copy of the packet to router R1. This process continues and, eventually, the packet reaches every member of G5 in the whole network. The problem with this approach is the size of the routing table, which may have thousands of entries when the number of groups is large.
                 [Figure 10.25  Source-based tree approach — groups G1 to G5 spread across networks attached to routers
                  R1 to R4, with the multicast routing table of R3]
           Group-shared tree: This approach overcomes the problem of the source-based approach, as each router is not required to construct a shortest path tree for each group. Rather, one router is designated as the core (or rendezvous) router, and it alone is responsible for holding the N shortest path trees in its routing table for the N groups, one for each group. Whenever a router receives a packet containing a group address, it encapsulates the packet in a unicast packet and forwards it to the core router. The core router then extracts the multicast packet and checks its routing table to determine the route for the packet. Figure 10.26 shows the core router and its routing table.
                 [Figure 10.26  Group-shared tree approach — the core router and its multicast routing table]
         Internetworking not only allows these users to communicate with each other, but also allows them to access each other's data. The communication between users is carried out through the transfer of packets. To transfer packets from one network to another, internetworking devices such as routers, transport gateways and application gateways are used at the junction between two networks. These devices help in converting the packets into a format accepted by the network to which the packets are to be transferred.
             27. Explain the concept of tunnelling with respect to internetworking.
           Ans: Consider two hosts A and B residing on the same type of network (say, a TCP/IP-based Ethernet LAN) but connected via a different type of network (say, a non-IP WAN) lying between the two similar networks. Now, if A and B wish to communicate with each other, they can do so using a technique known as tunnelling, which implements internetworking.
            To send a packet to host B, the host A creates an IP packet containing the IP address of host B. The IP packet is then put in an Ethernet frame addressed to the multiprotocol router on A's network and placed on the Ethernet. When the multiprotocol router on A's network receives the frame, it removes the IP packet and inserts it in the payload field of the WAN network layer packet. This packet is addressed to the WAN address of the multiprotocol router on B's network. When the packet reaches the multiprotocol router on B's network, that router removes the IP packet from the WAN packet and sends it to host B inside an Ethernet frame on the local network. The host B then extracts the IP packet from the Ethernet frame and uses it. Figure 10.27 shows the communication between hosts A and B.
                 [Figure 10.27  Tunnelling — hosts A and B on two Ethernet LANs, each attached to a multiprotocol
                  router; the WAN between the routers acts as a tunnel carrying the IP packets]
            Here, the WAN situated in between acts as a big tunnel connecting the two multiprotocol routers. The IP packet and the communicating hosts know nothing about the WAN architecture; only the multiprotocol routers need to understand both IP and WAN packets. In effect, the path from one multiprotocol router to the other behaves like a tunnel that carries IP packets without any obstruction.
             28. What is congestion? Why do we need congestion control?
           Ans: Congestion is a situation that occurs in a network when the number of packets sent into the network is far greater than its capacity. It results in increased traffic at the network layer and, as a consequence, TCP segments start getting lost as they pass through the network. Since TCP retransmits lost or delayed segments, the congestion problem is further aggravated by these retransmissions. The situation becomes even worse and the performance of the network degrades rapidly. That is why congestion needs to be controlled.
         Backpressure
         Backpressure is a node-to-node congestion control technique in which each node knows the immediate upstream node from which it is receiving the flow of data. When congestion occurs, the congested node refuses to receive any more data from its upstream node. This causes the upstream node to become congested, so it in turn starts rejecting the data coming from its own upstream node. This procedure continues and, eventually, the congestion information reaches the original source of the flow, which may then slow down its rate of transmission. As backpressure begins at the node where congestion is first detected and propagates back towards the source, this technique is a kind of explicit feedback algorithm.
             The backpressure technique can be applied only in virtual-circuit networks that support node-to-node connections. It was introduced in X.25, the first virtual-circuit network. However, this technique is not applicable to IP-based networks, as in such networks a node does not know its upstream node.
         Choke Packet
         A choke packet is a control packet created by the congested node to inform the source about the congestion in the network. Unlike the backpressure method, the choke packet is sent by the congested node directly to the source of the flow rather than to the intermediate nodes. The source receiving the choke packet is required to reduce its rate of flow towards the router from which the choke packet has come.
             An example of a choke packet is the source quench message used in ICMP (discussed in Chapter 11). This message is sent by a router or the destination node to ask the source to slow down its sending rate. The router sends a source quench message for every datagram it discards due to an overloaded buffer.
             33. List the differences between congestion control and flow control.
           Ans: Congestion control and flow control are distinct in the following ways:
           In congestion control, the traffic of the whole network (involving many nodes) is monitored for congestion, while in flow control the point-to-point traffic between a sender and a receiver is monitored for smooth functioning.
           Congestion control involves many hosts, routers and connections while the data is being transferred, whereas in flow control only the sender and the receiver interact with each other.
           In congestion control, when the network is unable to handle the load, packets are dropped to relieve the congestion. In flow control, when the receiver is unable to absorb more data, it asks the sender to slow down its sending rate.
                 [Figure 10.28  Hop-by-Hop Choke Packet Method — nodes P, Q, R, S and T: (a) heavy traffic flows from
                  P towards R; (b)–(c) R sends a choke packet back, hop by hop; (d)–(e) the flow is reduced at each
                  upstream node and finally at the source P]
             determine (i) class, (ii) network address, (iii) mask and (iv) broadcast address for each.
             Ans: (a) Given IP address = 32.46.7.3
                  (i) As the first byte of this address is 32 (between 0 and 127), it is a class A address.
                 (ii) Since it is a class A address, its first byte (that is, 32) denotes the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 32.0.0.0.
                (iii) Being a class A address, the mask for this address is 255.0.0.0.
                (iv) The broadcast address is obtained by keeping the net ID of the address the same and setting each byte of the host ID to 255. Thus, the broadcast address is 32.255.255.255.
               (b) Given IP address = 200.132.110.35
                  (i) As the first byte of this address is 200 (between 192 and 223), it is a class C address.
                 (ii) Since it is a class C address, its first three bytes (that is, 200.132.110) denote the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 200.132.110.0.
                (iii) Being a class C address, the mask for this address is 255.255.255.0.
                (iv) The broadcast address is obtained by keeping the net ID of the address the same and setting each byte of the host ID to 255. Thus, the broadcast address is 200.132.110.255.
               (c) Given IP address = 140.75.8.92
                  (i) As the first byte of this address is 140 (between 128 and 191), it is a class B address.
                 (ii) Since it is a class B address, its first two bytes (that is, 140.75) denote the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 140.75.0.0.
                (iii) Being a class B address, the mask for this address is 255.255.0.0.
                (iv) The broadcast address is obtained by keeping the net ID of the address the same and setting each byte of the host ID to 255. Thus, the broadcast address is 140.75.255.255.
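            The three results above can be checked with a short Python routine (an illustration only) that derives the class, default mask, network address and broadcast address of a classful IP address from its first byte.

            def classful_info(address):
                octets = [int(x) for x in address.split(".")]
                first = octets[0]
                # Determine the class and the number of net-ID bytes from the first byte.
                if first <= 127:
                    cls, net_bytes = "A", 1
                elif first <= 191:
                    cls, net_bytes = "B", 2
                elif first <= 223:
                    cls, net_bytes = "C", 3
                else:
                    raise ValueError("class D/E addresses have no default mask")
                mask = [255] * net_bytes + [0] * (4 - net_bytes)
                network = octets[:net_bytes] + [0] * (4 - net_bytes)       # host bits set to 0
                broadcast = octets[:net_bytes] + [255] * (4 - net_bytes)   # host bytes set to 255
                dotted = lambda b: ".".join(map(str, b))
                return cls, dotted(network), dotted(mask), dotted(broadcast)

            for ip in ("32.46.7.3", "200.132.110.35", "140.75.8.92"):
                print(ip, "->", classful_info(ip))
            # 32.46.7.3      -> ('A', '32.0.0.0',      '255.0.0.0',     '32.255.255.255')
            # 200.132.110.35 -> ('C', '200.132.110.0', '255.255.255.0', '200.132.110.255')
            # 140.75.8.92    -> ('B', '140.75.0.0',    '255.255.0.0',   '140.75.255.255')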
             38. The IP network 192.168.130.0 is using the subnet mask 255.255.255.224. What subnets are the following hosts on?
               (a) 192.168.130.10
               (b) 192.168.130.67
               (c) 192.168.130.93
               (d) 192.168.130.199
               (e) 192.168.130.222
               (f) 192.168.130.250
            Ans: To determine which host lies on which subnet, we need to determine the total number of subnets in the network, the number of hosts in each subnet and the range of IP addresses assigned to each subnet. To find the number of subnets and the number of hosts in each subnet, we need to know the number of masked and unmasked bits in the subnet mask, which can be found as follows:
            Given, subnet mask = 255.255.255.224
            The binary equivalent of the last byte of the mask (that is, 224) is 11100000. Here, three bits are 1s while five bits are 0s. Thus, the number of masked (subnet) bits is m = 3 and the number of unmasked (host) bits is n = 5. Now, the number of subnets and the number of hosts in each subnet can be determined using the following formulas:
          Number of subnets = 2^m = 2^3 = 8
          Number of hosts in each subnet = 2^n − 2 = 2^5 − 2 = 30
         Thus, there are eight subnets in the network, with each subnet comprising 30 usable host addresses. As the given IP network is 192.168.130.0, the range of IP addresses assigned to each subnet is as follows:
         Range of first subnet = 192.168.130.0 – 192.168.130.31
         Range of second subnet = 192.168.130.32 – 192.168.130.63
         Range of third subnet = 192.168.130.64 – 192.168.130.95
         Range of fourth subnet = 192.168.130.96 – 192.168.130.127
         Range of fifth subnet = 192.168.130.128 – 192.168.130.159
         Range of sixth subnet = 192.168.130.160 – 192.168.130.191
         Range of seventh subnet = 192.168.130.192 – 192.168.130.223
         Range of eighth subnet = 192.168.130.224 – 192.168.130.255
             Notice that the first and last address of each range cannot be assigned to hosts. For example, in the first subnet, the addresses 192.168.130.0 and 192.168.130.31 cannot be used for hosts.
             Now, we can find which host lies on which subnet, as given below:
               (a) The host 192.168.130.10 lies on the first subnet.
               (b) The host 192.168.130.67 lies on the third subnet.
               (c) The host 192.168.130.93 lies on the third subnet.
               (d) The host 192.168.130.199 lies on the seventh subnet.
               (e) The host 192.168.130.222 lies on the seventh subnet.
               (f) The host 192.168.130.250 lies on the eighth subnet.
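            These placements can be verified with Python's standard ipaddress module, as in the short sketch below (an illustration of the same calculation, not part of the original solution).

            import ipaddress

            # The /27 prefix corresponds to the mask 255.255.255.224 (three subnet bits).
            network = ipaddress.ip_network("192.168.130.0/24")
            subnets = list(network.subnets(new_prefix=27))   # the eight /27 subnets

            hosts = ["192.168.130.10", "192.168.130.67", "192.168.130.93",
                     "192.168.130.199", "192.168.130.222", "192.168.130.250"]

            for h in hosts:
                addr = ipaddress.ip_address(h)
                index = next(i for i, s in enumerate(subnets, start=1) if addr in s)
                print(f"{h} lies on subnet {index}: {subnets[index - 1]}")
            # 192.168.130.10 -> subnet 1, 192.168.130.67 -> subnet 3, ..., 192.168.130.250 -> subnet 8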
             39. What will be the subnet address if the destination address is 200.45.34.56 and the subnet mask is 255.255.240.0?
           Ans: Given, destination address = 200.45.34.56
                Subnet mask = 255.255.240.0
            Now, the subnet address can be determined by ANDing the destination address with the subnet mask.
            The destination address and the subnet mask can be written in binary notation as shown below:
                                     11001000 00101101 00100010 00111000
                                     11111111 11111111 11110000 00000000
            On ANDing the two, we get the following subnet address:
                                     11001000 00101101 00100000 00000000
           This subnet address can be written in dotted decimal notation as:
                                                      200.45.32.0.
             40. Given an IP address 156.36.2.58 (default mask 255.255.0.0) whose network has been subnetted with the prefix /22, determine its subnet mask.
           Ans: Given IP address = 156.36.2.58
                Subnet prefix = /22
            Here, /22 indicates that the first 22 bits of the subnet mask are 1s while the rest are 0s. Thus, the subnet mask is:
                                      11111111 11111111 11111100 00000000
        This subnet mask can be written in dotted decimal notation as 255.255.252.0.
Answers
1. (b) 2. (c) 3. (c) 4. (a) 5. (b) 6. (c) 7. (a) 8. (a) 9. (d) 10. (d)
                such as bandwidth, buffer space, CPU cycles and packet size. The set of these parameters is referred
                to as flow specification. Whenever a flow comes to a router, the router checks the flow specification
                to determine whether it can handle the incoming flow. To determine this, the router checks its
                current buffer size, bandwidth and CPU usage. It also checks its prior commitments made to other
                flows. The flow is accepted only after the router becomes sure that it can handle it.
              3. Explain the leaky bucket algorithm.
            Ans: The leaky bucket algorithm is a traffic-shaping technique that is used to control congestion
         in a network. It was proposed by Turner in 1986 and is based on the concept of a leaky bucket—a bucket with
         a hole at the bottom. Water is poured into the bucket and leaks out continuously; however, the
         rate of leakage is always constant irrespective of the rate at which water is poured in. This process
         continues until the bucket is empty. If the bucket overflows, the additional water spills over
         the sides of the bucket, but the leakage rate still remains constant. Turner applied the same idea to packets
         transmitted over a network, and therefore the algorithm was named the leaky bucket algorithm. This
         algorithm smooths out bursty traffic by storing bursty chunks of packets in the leaky bucket so that
         they can be sent out at a constant rate.
             To understand the leaky bucket algorithm, let us assume that the network has allowed the hosts to transmit
         data at the rate of 5 Mbps and each host is connected to the network through an interface containing
         a leaky bucket. The leaky bucket shapes the traffic entering the network in accordance with the data rate
         committed by the network. Suppose the source host transmits a burst of data at the rate of 10 Mbps
         for the first three seconds, that is, a total of 30 Mbits of data. After remaining silent for 3 s, the source host again
         transmits a burst of data at the rate of 5 Mbps for 4 s, that is, a total of 20 Mbits. Thus, the total
         amount of data transmitted by the source host is 50 Mbits in 10 s. The leaky bucket sends the whole data
         at the constant rate of 5 Mbps (that is, within the bandwidth commitment of the network for that host)
         regardless of the rate at which it arrived from the source host. If the concept of the leaky bucket were not used,
         more bandwidth would be consumed by the starting burst of data, leading to more congestion.
             The leaky bucket algorithm is implemented by maintaining a FIFO queue to hold the arriving packets.
         The queue has a finite capacity. Whenever a packet arrives, it is appended at the end of the queue if there
         is some space in the queue; otherwise, the packet is discarded (Figure 11.1). If each packet is of fixed size,
         then a fixed number of packets is removed from the queue at each clock tick. However, if packets are
         of variable sizes, a fixed number of bytes (say, p) is removed from the queue at each clock tick. The
         algorithm for variable-size packets is as follows:
             1. A counter is initialized to p at the tick of the clock.
             2. If the size of the packet at the front of the queue is smaller than or equal to the counter value, the packet is sent
                 and the counter is decremented by the packet size.
             3. Step 2 is repeated until the counter value becomes smaller than the size of the next packet.
             4. The counter is reset at the next tick of the clock and the algorithm goes back to step 1.
         Figure 11.1 Implementation of the Leaky Bucket Algorithm (arriving packets join a finite FIFO queue if space is available, are discarded otherwise, and are removed from the queue at a constant rate)
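             The following is a minimal sketch (not part of the text) of the byte-counting leaky bucket described above for variable-size packets; the packet sizes and the value of p are purely illustrative.

                 from collections import deque

                 def leaky_bucket(packets, p, capacity):
                     """Simulate the leaky bucket for variable-size packets.
                     packets  : list of packet sizes (bytes) arriving before the first tick
                     p        : number of bytes that may be sent per clock tick
                     capacity : maximum number of packets the FIFO queue can hold
                     """
                     queue = deque()
                     for size in packets:                       # arrival: enqueue or discard
                         if len(queue) < capacity:
                             queue.append(size)
                         else:
                             print(f"discarded packet of {size} bytes (queue full)")

                     tick = 0
                     while queue:                               # one iteration per clock tick
                         tick += 1
                         counter = p                            # step 1: initialize the counter to p
                         while queue and queue[0] <= counter:   # steps 2-3: send while the counter allows
                             counter -= queue.popleft()
                         print(f"tick {tick}: {p - counter} bytes sent")

                 # Example (illustrative values): p = 1,000 bytes per tick, queue holds 10 packets
                 leaky_bucket([200, 700, 500, 450, 400, 300], p=1000, capacity=10)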
        entry from the table. The disadvantage of static mapping is that as the physical address of a node may
         change, the table must be updated at regular intervals of time, thus causing more overhead. On the
        other hand, in dynamic mapping, a protocol can be used by the node to find the other address if one is
        known. The ARP is based on dynamic mapping.
            Whenever a host wishes to send IP packets to another host or router, it knows only the IP address of
        the receiver and needs to know the MAC address as the packet has to be passed through the physical
        network. For this, the host or router broadcasts an ARP request packet over the network. This packet
        consists of IP address and MAC address of the source node and the IP address of the receiver node. As
        the packet travels through the network, each node in between receives and processes the ARP request
         packet. If a node does not find its IP address in the request, it simply discards the packet. However,
        when an intended recipient recognizes its IP address in the ARP request packet, it sends back an ARP
        response packet. This packet contains the IP and MAC address of the receiver node and is delivered
        only to the source node, that is, ARP response packet is unicast instead of broadcast.
             The performance of ARP decreases if the source node or router has to broadcast an ARP request packet
         every time it needs the MAC address of the same destination node. Thus, to improve efficiency,
        ARP response packets are stored in the cache memory of the source system. Before sending any ARP
        request packet, the system first checks its cache memory and if the system finds the desired mapping in
        it then the packet is unicasted to the intended receiver instead of broadcasting it over the network.
            The format of ARP packet is shown in Figure 11.2.
             The ARP packet comprises various fields, which are described as follows:
            Hardware Type: It is a 16-bit long field that defines the type of the network on which ARP is running.
               For example, if ARP is running on Ethernet, the value of this field will be 1. ARP can be used on any
               physical network.
            Protocol Type: It is a 16-bit long field that defines the protocol used by ARP. For example, if
               ARP is using the IPv4 protocol, the value of this field will be (0800)₁₆. ARP can be used with any
               higher-level protocol.
            Hardware Length: It is an 8-bit long field that defines the length of the MAC address in bytes (6 for Ethernet).
            Protocol Length: It is an 8-bit long field that defines the length of the logical (IP) address in bytes (4 for IPv4).
              Operation: It is a 16-bit long field that defines the type of operation being carried out. For an ARP
                request packet, the value of this field is 1 and for an ARP response packet, the value is 2.
             Sender Hardware Address: It is a variable-length field that defines the MAC address of the
               sender node.
              Sender Protocol Address: It is a variable-length field that defines the IP address of the sender
                node.
              Target Hardware Address: It is a variable-length field that defines the MAC address of the destination
                node. In an ARP request packet, this field is filled with 0s because the MAC address of
                the receiver node is not known to the sender node.
             Target Protocol Address: It is a variable-length field that defines the IP address of the destination
               node.
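             As an illustration (not from the text), the fixed part of an ARP request for IPv4 over Ethernet can be packed as follows; the MAC and IP addresses used are hypothetical.

                 import struct, socket

                 def build_arp_request(sender_mac, sender_ip, target_ip):
                     """Build an ARP request (Operation = 1) for IPv4 over Ethernet."""
                     return struct.pack(
                         "!HHBBH6s4s6s4s",
                         1,                              # Hardware Type: 1 = Ethernet
                         0x0800,                         # Protocol Type: 0x0800 = IPv4
                         6,                              # Hardware Length: MAC address is 6 bytes
                         4,                              # Protocol Length: IPv4 address is 4 bytes
                         1,                              # Operation: 1 = request, 2 = reply
                         sender_mac,                     # Sender Hardware Address
                         socket.inet_aton(sender_ip),    # Sender Protocol Address
                         b"\x00" * 6,                    # Target Hardware Address: unknown, all 0s
                         socket.inet_aton(target_ip),    # Target Protocol Address
                     )

                 packet = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.130.10", "192.168.130.67")
                 print(len(packet), "bytes")             # 28 bytes for IPv4 over Ethernet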
                7. Write a short note on the following.
             (a) RARP
             (b) BOOTP
             (c) DHCP
           Ans:
              (a) RARP: Reverse address resolution protocol, as the name implies, performs the opposite of
         ARP. That is, it helps a machine that knows only its MAC address (physical address) to find its IP
         address (logical address). This protocol is used in situations when a diskless machine is booted from
         read-only memory (ROM). As ROM is installed by the manufacturer, it does not include the IP address in its
        booting information because IP addresses are assigned by the network administrator. However, MAC
        address of the machine can be identified by reading its NIC. Now, to get the IP address of the machine
        in a network, RARP request packet is broadcast to all machines on the local network. The RARP
        request packet contains the MAC address of the inquiring machine. The RARP server on the network
        that knows all the IP addresses sees this request and responds with a RARP reply packet containing the
        corresponding IP address to the sender machine.
            The problem with RARP is that if there is more than one network or subnet, a RARP server needs to be
         configured on each network because RARP requests are not forwarded by routers and,
         thus, cannot go beyond the boundaries of a network.
             (b) BOOTP: Bootstrap protocol is an application layer protocol designed to run in a client/
        server environment. The BOOTP client and BOOTP server can be on the same or different network. The
        BOOTP uses user datagram protocol (UDP) packets, which are encapsulated in an IP packet. A BOOTP
        request packet from a BOOTP client to BOOTP server is broadcast to all the nodes on the network. In
        case the BOOTP client is in one network and BOOTP server is on another network and two networks
        are separated by many other networks, the broadcast BOOTP request packet cannot be forwarded by the
        router. To solve this problem, one intermediary node or router, which is operational at the application
        layer, is used as a relay agent. The relay agent knows the IP address of the BOOTP server and when
        it receives a BOOTP request packet, it unicasts the BOOTP request packet to the BOOTP server by
        including the IP address of the server and of itself. On receiving the packet, the BOOTP server sends
        BOOTP reply packet to the relay agent, which further sends it to the BOOTP client.
            A problem associated with BOOTP is that it is a static configuration protocol. The mapping table
        containing MAC and IP addresses is configured manually by the network administrator. Thus, a new
        node cannot use BOOTP until its IP and MAC address have been entered manually by the network
        administrator in the table.
              (c) DHCP: Dynamic host configuration protocol supports both static and dynamic address allocation,
         which can be done manually or automatically. Thus, it maintains two databases, one for each type of allocation. In
        static address allocation, DHCP acts like BOOTP, which means that a BOOTP client can request for a
        permanent IP address from a DHCP server. In this type of allocation, DHCP server statically maps MAC
        address to IP address using its database. On the other hand, in dynamic address allocation, dynamic
        database is maintained by DHCP that contains the unused IP addresses. Whenever a request comes for
        an IP address, DHCP server assigns a temporary IP address to the node from the dynamic database using
         a technique called leasing. In this technique, the node requests the DHCP server to renew the lease
         just before the time allotted for using this IP address expires. If the request is denied by the DHCP server,
         the node can no longer use the IP address that was assigned to it earlier.
            Like BOOTP, DHCP also uses relay agent on each network to forward the DHCP requests. When
         a DHCP node requests an IP address, the relay agent on its network unicasts the request to the
        DHCP server. On receiving the request, the server checks its static database to find the entry of the
        requested physical address. If it finds the address, it returns the permanent IP address to the node; oth-
        erwise, it selects some temporary address from the available pool, returns this address to the host and
        also, adds this entry to the dynamic database.
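             Below is a minimal sketch (not from the text) of the allocation decision described above: the static database is consulted first, and only if no permanent entry exists is a temporary address leased from the dynamic pool. All addresses and lease times are illustrative.

                 import time

                 STATIC_DB  = {"aa:bb:cc:dd:ee:ff": "192.168.130.10"}   # MAC -> permanent IP (manual entries)
                 FREE_POOL  = ["192.168.130.20", "192.168.130.21"]       # unused addresses
                 DYNAMIC_DB = {}                                         # MAC -> (IP, lease expiry)

                 def allocate(mac, lease_seconds=3600):
                     if mac in STATIC_DB:                 # static allocation (BOOTP-like behaviour)
                         return STATIC_DB[mac]
                     if not FREE_POOL:
                         return None                      # no address available
                     ip = FREE_POOL.pop(0)                # dynamic allocation with a lease
                     DYNAMIC_DB[mac] = (ip, time.time() + lease_seconds)
                     return ip

                 print(allocate("aa:bb:cc:dd:ee:ff"))     # permanent address from the static database
                 print(allocate("11:22:33:44:55:66"))     # leased address from the dynamic pool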
                8. Draw and discuss the IP datagram frame format. Discuss in detail the various fields.
           Ans: The Internet protocol version 4 (IPv4) is the most widely used internetworking protocol. In
         IPv4, the packets are termed datagrams (variable-length packets). Further, IPv4 is a connectionless
         and unreliable datagram protocol; connectionless means each datagram is handled independently and
         can follow a different path to reach the destination; unreliable means it does not guarantee the
         successful delivery of the message. In addition, IPv4 does not provide flow control and error control
         except for the error detection in the header of the datagram. To achieve reliability, IP is paired with the
         transmission control protocol (TCP), which is a reliable protocol. Thus, it is considered a part of the
         TCP/IP suite.
            An IPv4 datagram consists of a header field followed by a data field. The header field is 20–60
        bytes long and contains routing and delivery information. The header comprises various subfields
        (Figure 11.3), which are described as follows:
           Version (VER): It is a 4-bit long field, which indicates the version being used. The current version
              of IP is 4.
            Header Length (HLEN): It is a 4-bit long field, which defines the IPv4 header length in 32-bit
               words. The minimum length of the IPv4 header is five 32-bit words.
            Service: It is an 8-bit long field, which provides an indication of the desired QoS such as precedence,
               delay, throughput and reliability. This field is also called the type of service (ToS) field.
            Total Length: It is a 16-bit long field, which defines the total length (in bytes) of the datagram including
               header and data. The maximum permitted length is 65,535 (2^16 – 1) bytes with 20–60 bytes
               for header and the rest for data.
            Identification: It is a 16-bit long field, which uniquely identifies the datagram. The datagrams
              can be fragmented at the sender’s end for transmission and then reassembled at the receiver’s end.
                When a datagram is fragmented into multiple fragments, all fragments belonging to the same original
                datagram are labelled with the same identification number, and the fragments having the same identification
                number are reassembled at the receiving side.
              Flags: It is a 3-bit long field in which the first bit is reserved and always zero, the second bit is do
                not fragment (DF) and the third bit is more fragment (MF). If the DF bit is set to 1, the
                datagram must not be fragmented, while a value of 0 indicates that the datagram can be fragmented
                if required. If the MF bit is set to 0, this fragment is the last fragment; however, if the MF bit
                is set to 1, there are more fragments after this fragment.
              Fragmentation Offset: It is a 13-bit long field that indicates the relative position of this fragment
                with respect to the whole datagram, that is, where in the original datagram the data of
                this fragment belongs. It is measured in units of eight octets (64 bits). The offset of the first fragment
                is zero.
             Time-to-Live (TTL): It is an 8-bit long field, which indicates the total time (in seconds) or
               number of hops (routers) that an IPv4 datagram can survive before being discarded. As a router
               receives a datagram, it decrements the TTL value of datagram by one and then forwards that da-
                tagram to the next hop and so on. When the TTL value becomes zero, the datagram is discarded. Generally,
                when a datagram is sent by the source node, its TTL value is set to twice the maximum
                number of routers between the sending and the receiving hosts. This field is needed to limit the
               lifetime of datagram because it may travel between two or more routers for a long time without
               ever being delivered to the destination host. Therefore, to avoid this, we discard the datagram
               when its TTL value becomes zero.
              Protocol: It is an 8-bit long field, which specifies the higher-level protocol that uses the services
                of IPv4; for example, the value is 6 for TCP, 17 for UDP and 1 for ICMP.
             Header Checksum: It is a 16-bit long field that is used to verify the validity of the header and is
               recomputed each time when the TTL value is decremented.
              Source IP Address: It is a 32-bit long field, which holds the IPv4 address of the sending host. This
                field remains unchanged during the lifetime of the datagram.
              Destination IP Address: It is a 32-bit long field, which holds the IPv4 address of the receiving
                host. This field remains unchanged during the lifetime of the datagram.
              Options: These are included in the variable part of the header and can be of maximum 40 bytes. These
                are optional fields, which are used for debugging and network testing. However, if they are present
                in the header, all implementations must be able to handle them. Options can be of single
                or multiple bytes. Some examples of single-byte options are no-operation and end-of-option, and
                of multiple-byte options are record route and strict source route.
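             The sketch below (not from the text) unpacks the fixed 20-byte part of an IPv4 header using Python's struct module; the sample header bytes are fabricated for illustration only.

                 import struct, socket

                 def parse_ipv4_header(raw):
                     """Unpack the fixed 20-byte part of an IPv4 header."""
                     ver_hlen, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
                         struct.unpack("!BBHHHBBH4s4s", raw[:20])
                     return {
                         "version": ver_hlen >> 4,                 # VER: upper 4 bits
                         "hlen_words": ver_hlen & 0x0F,            # HLEN: header length in 32-bit words
                         "total_length": total_len,                # header + data, in bytes
                         "identification": ident,
                         "DF": (flags_frag >> 14) & 1,             # do not fragment
                         "MF": (flags_frag >> 13) & 1,             # more fragments
                         "fragment_offset": flags_frag & 0x1FFF,   # in units of 8 octets
                         "ttl": ttl,
                         "protocol": proto,                        # 6 = TCP, 17 = UDP, 1 = ICMP
                         "source": socket.inet_ntoa(src),
                         "destination": socket.inet_ntoa(dst),
                     }

                 # Example with a fabricated header (version 4, HLEN 5, DF set, TTL 64, protocol TCP):
                 sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0x4000, 64, 6, 0,
                                      socket.inet_aton("192.168.130.10"), socket.inet_aton("200.45.34.56"))
                 print(parse_ipv4_header(sample))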
              9. Explain in detail about IPv6.
           Ans: The IP is the foundation for most Internet communications. Further, IPv6 is a version of IP that
        has been designed to overcome the deficiencies in IPv4 design. Some of the important issues that reflect
        IPv4 inadequacies include:
           The IPv4 has a two-level address structure (network number, host number), which is inconvenient
              and inadequate for today's routing requirements. In addition, coping with address depletion requires
              extra mechanisms such as subnetting, classless addressing and NAT, which add overhead and are still
              a big issue for efficient implementation.
           The Internet also deals with real-time audio and video transmission, which requires high speed with
              minimum delays and reservation of resources. IPv4 provides no such procedure to deal with this
              kind of transmission.
           Some confidential applications need authentication and encryption to be performed during data
              transmission. However, IPv4 does not provide any authentication or encryption of packets.
         Thus, an Internetworking Protocol, version 6 (IPv6), also known as Internetworking Protocol Next
         Generation (IPng), with enhanced functionality has been proposed by the Internet Engineering Task Force
         (IETF) to accommodate the future growth of the Internet.
         Packet Format of IPv6
        Figure 11.4 shows the format of IPv6 packet. An IPv6 packet consists of two fields: base header field of
        40 bytes and a payload field of length up to 65,535 bytes. The payload field further consists of optional
        extension headers and data packets from upper layer.
         Figure 11.4 IPv6 Packet Format (base header, followed by optional extension headers and the data packet from the upper layer)
              Auto Configuration of Addresses: The IPv6 protocol can provide dynamic assignment of IPv6 addresses.
              Allow Future Extensions: The IPv6 design allows future extensions, if needed, to meet the
                requirements of future technologies or applications.
              Support for Resource Allocation: In IPv6, a new mechanism called flow label has been introduced
                to replace the ToS field. With this mechanism, a source can request special handling of the
                packet. This mechanism supports real-time audio and video transmission.
              Enhanced Security: IPv6 includes encryption and authentication options, which provide integrity
                and confidentiality of the packet.
             11. Compare IPv4 header with IPv6 header.
           Ans: Both IPv4 and IPv6 are the variants of IP, which have certain differences between them. These
        differences are listed in Table 11.1.
                Address Mask Request and Reply: The address mask request is sent by a host that knows its IP address
                  but wants to know the corresponding subnet mask to a router on its local area network (LAN). In response,
                  the router sends an address mask reply message to the host, providing the necessary mask.
                Router Solicitation and Advertisement: To send data to a host on another network, the source
                  host needs to know the address of the router on its own network that connects it to other networks.
                  Along with this, it needs to know whether the neighbouring routers are alive and functioning. In these
                  situations, the router solicitation and advertisement messages help the host to find out such
                  information. The source host broadcasts a router-solicitation message. The routers that receive the
                  router-solicitation message broadcast their routing information through a router-advertisement
                  message. The router-advertisement message includes an announcement about the router's own presence
                  as well as about other routers on the network.
            14. In a leaky bucket, what should be the capacity of the bucket if the output rate is 5 gal/min
        and there is an input burst of 100 gal/min for 12 s and there is no input for 48 s?
           Ans: Total input during the burst = 100 × (12/60) = 20 gal
         		      Water leaked out during the burst = 5 × (12/60) = 1 gal
         		      Thus, the required capacity of the bucket = 20 – 1 = 19 gal
             15. Imagine a flow specification that has a maximum packet size of 1,000 bytes, a token bucket
        rate of 10 million bytes/s, a token bucket size of 1 million bytes and a maximum transmission rate
        of 50 million bytes/s. How long can a burst at maximum speed last?
           Ans: Given that
        		       Bucket size, C = 1 million bytes
        		       Maximum transmission rate, M = 50 million bytes/s
        		       Token bucket rate, P = 10 million bytes/s
        		       Burst time, S = ?
        		       We know that, S = C/(M – P)
        		       Putting the respective values in the above formula, we get
                                                   S = 1/(50 – 10)
                                                 ⇒ S = 1/40
                                                 ⇒ S = 0.025 s
           Therefore, burst will last for 0.025 s.
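            The burst-duration formula used above can be expressed as a one-line function, as a sketch:

                def max_burst_time(C, M, P):
                    """S = C / (M - P): bucket size C (bytes), maximum rate M and token rate P (bytes/s)."""
                    return C / (M - P)

                print(max_burst_time(C=1e6, M=50e6, P=10e6))   # 0.025 s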
        Answers
        1. (a)     2. (b)   3. (d)   4. (a) 5. (b)   6. (d)
                2. In the OSI model, both the data link layer and the transport layer are involved in error control. Why
          is the same activity performed twice? Justify.
             Ans: Both the data link layer and the transport layer provide error control, but the transport layer
         provides end-to-end error control whereas the data link layer provides error control across a single link.
         The data link layer makes the physical link reliable by adding the mechanism for detecting the errors and
         retransmitting frames in case of lost and damaged frames. On the other hand, the transport layer ensures
         that the entire message is delivered to the receiving process without any error and in the same order as
         sent by the sending process. Retransmission is used in the transport layer to perform error correction.
               3. Why do we need port addresses?
             Ans: Usually, a machine provides a variety of services such as electronic mail, TELNET and FTP.
         To differentiate among these services, each service is assigned a unique port number. To avail of a
         specific service on a machine, it is first required to connect to the machine and then to the port
         assigned for that service. The port numbers less than 1,024 are considered well known and are reserved
          for standard services. For example, the port number used for TELNET is 23.
                4. What is socket address? Explain socket addressing.
            Ans: In transport layer, two processes communicate with each other via sockets. A socket acts as an
         end-point of the communication path between the processes. To ensure the process-to-process delivery,
         the transport layer needs the IP address and the port number at each communicating end. The IP address
          is used to identify the machine on the network and the port number is used to identify the specific
          process on that machine. The IP address and the port number together define the socket address.
              To enable the communication, each of the communicating processes (client and server) creates its
          own socket and these sockets are to be connected. The client socket address uniquely identifies the client
          process while the server socket address uniquely identifies the server process.
              The server listens to a socket bound to a specific port for a client to make a connection request. When-
         ever a client process requests for a connection, it is assigned a port number (greater than 1,024) by the
         host computer (say, M). Using this port number and the IP address of host M, the client socket is created.
         For example, if the client on host M having IP address (125.61.15.7) wants to connect to TELNET server
         (listening to port number 23) having IP address (112.56.71.8), it may be assigned a port number 1,345.
         Thus, the client socket and server socket used for the communication will be (125.61.15.7:1345) and
         (112.56.71.8:23), respectively as shown in Figure 12.1.
             Each connection between client and server employs a unique pair of sockets. That is, if another client
         on host M wants to connect to TELNET server, it must be assigned a port number different from 1,345
         (but greater than 1,024).
         Figure 12.1 Socket Addresses of the Client (125.61.15.7:1345) and the TELNET Server (112.56.71.8:23)
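             A brief runnable sketch (not from the text) showing how a client is assigned an ephemeral port and how the resulting pair of socket addresses identifies the connection; 127.0.0.1 and port 0 are used here only so that the example is self-contained.

                 import socket

                 # Create a listening "server" socket on an arbitrary free local port,
                 # then connect a client to it and print the resulting socket pair.
                 server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                 server.bind(("127.0.0.1", 0))            # port 0: let the host pick a free port
                 server.listen(1)

                 client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                 client.connect(server.getsockname())     # server socket address (IP, port)

                 print("client socket address:", client.getsockname())   # ephemeral port chosen by the host
                 print("server socket address:", client.getpeername())
                 client.close(); server.close()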
        (TCP) are referred to as socket primitives. These socket primitives are commonly used for Internet
        programming and also provide more flexibility. Various socket primitives used in Berkeley UNIX for
        TCP are described as follows:
          SOCKET: This primitive is used to create a new communication end point. It also allocates table
              space to the end point within the transport entity. This primitive is executed both by the client and
              by the server.
          BIND: This primitive is used to assign network addresses to newly created sockets. After the addresses
             have been attached with sockets, the remote systems can connect to them. This primitive is executed
             only at the server side and always after the SOCKET primitive. At the client side, there is no need to
             execute BIND primitive after the SOCKET primitive. This is because the server has nothing to do with
             the address used by the client.
          LISTEN: This primitive is used to indicate the willingness of a server to accept the incoming
             connection requests. If a number of clients attempt to make a connection with the server, it allo-
              cates space to queue up all the incoming requests. This primitive is executed only at the server side
              and always after the BIND primitive.
           ACCEPT: This primitive is used by the server to block until an incoming connection
               request arrives. This primitive is executed only at the server side and always after the
               LISTEN primitive.
          CONNECT: This primitive is executed by the client to attempt to establish a connection with the
              server. As the client executes the CONNECT primitive, it gets blocked. It remains blocked until
               it receives a transport protocol data unit (TPDU) from the server, which indicates the completion of
              CONNECT primitive. Then, the client gets unblocked and a full-duplex connection is established
              between the client and the server.
          SEND: This primitive is used to send data over the full-duplex connection. Both client and server
              can execute this primitive.
          RECEIVE: This primitive is used to receive data from the full-duplex connection. Both client and
              server can execute this primitive.
          CLOSE: This primitive is used to release the connection between the client and the server. The con-
              nection is terminated only after both the communicating parties have executed the CLOSE primitive.
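            The sketch below (not from the text) shows how these primitives map onto the Berkeley socket API in Python; the port number 6000 is arbitrary, and server() and client() are assumed to run in separate processes.

                import socket

                def server():
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET
                    s.bind(("", 6000))                                       # BIND: attach a local address
                    s.listen(5)                                              # LISTEN: queue up to 5 requests
                    conn, addr = s.accept()                                  # ACCEPT: block until a client connects
                    data = conn.recv(1024)                                   # RECEIVE
                    conn.send(data)                                          # SEND (echo the data back)
                    conn.close()                                             # CLOSE
                    s.close()

                def client():
                    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCKET (no BIND needed at the client)
                    c.connect(("127.0.0.1", 6000))                           # CONNECT: blocks until established
                    c.send(b"hello")                                         # SEND
                    print(c.recv(1024))                                      # RECEIVE
                    c.close()                                                # CLOSE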
                6. Explain various schemes used by the transport layer to find the transport service access
         point (TSAP) at a server.
            Ans: Whenever an application process (client) wishes to establish a connection with some application
         process running on the remote system (server), it needs to specify which one to connect to. For this, the
         transport layer specifies transport addresses to which the processes can listen for connection requests. In the
         transport layer, these end points are termed TSAPs. Both client and server processes get attached to
         a TSAP for establishing a connection with a remote TSAP. However, now the problem arises: how does the
         client know which TSAP a specific process on the server is listening to? To identify this, some scheme
         is needed.
             One such scheme is the initial connection protocol. In this scheme, not all the server processes
         are required to listen to well-known TSAPs; rather, each machine that wants to serve remote users has
         a proxy server called a process server. The process server listens to a number of ports simultaneously.
         When a CONNECT request from a user specifying the TSAP address of a specific server process arrives
         and no server process is waiting for it, the request gets connected to the process server. The process server
         then creates the requested server process, which inherits the existing connection with the user. This way,
         the user gets connected to the desired server process. The newly created server then performs the
         requested job while the process server goes back to listening to its ports.
        Though this scheme seems fine, it cannot be used for servers that cannot be created when required. To
        overcome this limitation, another scheme is used.
            In the alternative scheme, there exists a process known as a name server or a directory server which
        listens to a well-known TSAP. The name server contains an internal database in which all the service
        names along with their TSAP addresses are stored. This scheme requires every newly created service
         to register itself with the name server. When a client needs to find the TSAP address corresponding to
        a service, it establishes a connection with the name server. After the connection has been established,
        the client sends a message including the requested service name to the name server. The name server
        searches through its database and sends the requested TSAP address back to the client. After receiving
        the TSAP address by the client, the connection between client and name server is released and a new
        connection is established between the client and the service requested by it.
               7. What is meant by upward and downward multiplexings?
           Ans: Multiplexing is a technique that allows the simultaneous transmission of multiple signals
         across a single data link. In the transport layer, multiplexing may be needed in the following two ways:
           Upward Multiplexing: When there is only one network address available on a host, all
               the transport connections on that host have to use the same network connection. Whenever a TPDU is
               received, some means is required to indicate the transport connection to which the TPDU is to be given.
               This situation is known as upward multiplexing.
          Downward Multiplexing: The subnet uses virtual circuits with each virtual circuit having a fixed data
             rate. Now, if a user needs higher bandwidth than that of a single virtual circuit to transport the data,
             then the traffic from a single transport connection can be multiplexed to multiple network connections
              (virtual circuits) thereby increasing the bandwidth. This is what is called downward multiplexing.
                8. Explain in detail user datagram protocol (UDP). Also list its uses.
             Ans: UDP is a connectionless and unreliable transport protocol that offers process-to-process
         delivery with limited error checking. By connectionless, we mean that the segments are sent to the
          destination host without any prior establishment of a connection between the communicating hosts. By
          unreliable, we mean that UDP does not perform error and flow control and thus does not guarantee
          the proper delivery of segments at the destination. As segments in UDP are not numbered, there
          is no means to identify the segments that have been lost during transmission.
              Though UDP is considered powerless due to unreliable transport, it is a simple protocol that incurs
          minimal overhead. It is generally used by a process that has only a small message to send and is not
          much concerned about reliability. Applications such as speech and video, for which timely
          delivery is more important than accurate delivery, prefer UDP for transport.
              The UDP packets, known as user datagrams, have a fixed-size header of eight bytes, which is followed
         by the data. The header of a UDP packet contains four fields of 16 bits each, as shown in Figure 12.2.
              Destination Port Number: It is a 16-bit long field that defines the process running on the destination
                host. If the destination host is the client, the destination port number will be an ephemeral
                port number. However, if the destination host is the server, the destination port number will be a
                well-known port used with UDP.
              Total Length: It is a 16-bit long field that specifies the total length of the UDP packet including the
                header as well as the data. The value of this field can range up to 65,535 (that is, 2^16 – 1) bytes, but the
                size of a UDP packet must be much less, as it is to be encapsulated in an IP datagram having a maximum
                total length of 65,535 bytes.
              Checksum: It is a 16-bit long field that is used for error detection over both the header and
                the data.
        Uses of UDP
             It is suitable for multicasting.
             It is used for management processes such as SNMP.
             It is used for the route updating protocols such as routing information protocol (RIP).
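             As a sketch (not from the text), the 8-byte UDP header can be built and inspected with the struct module; the port numbers are arbitrary and the checksum is left at zero for simplicity.

                 import struct

                 # UDP header: four 16-bit fields (source port, destination port, total length, checksum)
                 payload = b"hello"
                 header = struct.pack("!HHHH",
                                      5000,                 # source port (ephemeral)
                                      53,                   # destination port (DNS, a well-known UDP port)
                                      8 + len(payload),     # total length = header (8 bytes) + data
                                      0)                    # checksum (0 = not computed in this sketch)
                 datagram = header + payload

                 src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
                 print(src, dst, length, checksum)          # 5000 53 13 0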
              9. What is transmission control protocol (TCP)? What are the services provided by it?
           Ans: TCP is a connection-oriented and reliable transport layer protocol. By connection-oriented,
        we mean that a virtual connection must be established between the sending and the receiving pro-
        cesses before any data can be transferred. Whenever a process on source host wishes to communicate
        with a specific process on destination host, first a virtual connection is established between the TCPs
        of the sending and receiving processes. Then, the data can be transferred in both directions. After the
        completion of data transfer, the connection is released. TCP accepts a stream of bytes from the upper
        layer and divides it into a sequence of segments which are then sent to the destination. By reliable,
        we mean that TCP provides error and flow control and thus, ensures the delivery of segments to
        the destination. Each segment sent from the source needs to be acknowledged by the receiver on
        its receipt.
           TCP provides a variety of services to the processes at the application layer. Some of these services
        are described as follows:
           Process-to-Process Communication: Like UDP, TCP is also a process-to-process protocol that
              connects a process on a source host to a specific process on the destination host using the port
              numbers. Some of the well-known ports used with TCP are listed in Table 12.2.
               Stream Delivery Service: TCP is a byte-oriented protocol that allows the sending process to send
               the stream of bytes and the receiving process to receive the stream of bytes. Since the speed of
               sending and receiving processes may differ, buffers need to be maintained at both ends to store
               the bytes. The buffer at the sender’s end is divided into three sections: empty slots to hold the
               bytes produced by the sending process, slots containing bytes that have been sent but not yet
               acknowledged and the slots containing bytes that are to be sent. On the other hand, the buffer at the
                receiver’s end is divided into two sections: empty slots to hold the bytes received from the sending
                process and the slots containing bytes that are to be read by the receiving process.
               Segments: The communication between sending and receiving TCPs takes place through the IP
                layer, which supports data in packets rather than a stream of bytes. Therefore, before sending the
                bytes (stored in the sending buffer), the sending TCP groups multiple bytes together into a packet
                called segment; different segments may or may not contain equal number of bytes. TCP header is
                attached with each segment and then the segment is delivered to IP layer. The IP layer encapsulates
                the segment into IP datagram and sends the IP datagrams. The IP layer at the receiving machine
                processes the header of IP datagrams and passes the segments to the receiving TCP. The receiving
                TCP stores the bytes of segments in the receiving buffer.
              Full-Duplex Communication: TCP provides full-duplex connection, that is, both the sender and
                 the receiver processes can simultaneously transmit and receive the data.
               Reliable Service: TCP provides a reliable service, that is, every received byte of data is
                acknowledged to the sending process. This helps in detecting the lost data. To ensure reliability,
                 TCP uses byte number, sequence number and acknowledgement number.
                  •   Byte Number: Each of the bytes that are transmitted in a connection is numbered by TCP. The
                      numbering does not necessarily start with zero. As TCP receives the first byte from the sending
                      process, it chooses a random number between 0 and 2^32 – 1 and assigns that number to the first
                      byte. The subsequent bytes are numbered accordingly. For example, if the first byte is numbered
                      330 and a total of 1,000 bytes are to be transmitted, then the byte numbers will be from 330 to 1,329.
                 •   Sequence Number: After numbering the individual bytes, the groups of bytes, that is, seg-
                     ments are numbered. Each segment is assigned a sequence number by TCP, which is same as
                      the number of the first byte carried by that segment.
              HLEN: It is a 4-bit long field that defines the length of the segment header in terms of 32-bit words.
                This field can take a value between 5 (for a 20-byte header) and 15 (for a 60-byte header).
              Reserved: It is a 6-bit long field which has been kept reserved for future use.
              Control: It is a 6-bit long field consisting of six flags, each of one bit. These flag bits help in flow
                control, establishment of connection, termination of connection and the mode of transferring data
                in TCP. The descriptions of the flag bits are as follows:
                •   URG: This bit is set to 1 if the Urgent pointer field is in use; otherwise, it is set to 0.
                •   ACK: This bit is set to 1 to indicate a valid acknowledgement number. If the ACK bit is set to 0, the
                    segment does not carry an acknowledgement and thus the Acknowledgement
                    number field is ignored.
                •   PSH: This bit indicates pushed data. The receiver is asked to deliver the data to the application
                    immediately as it arrives and not to buffer it until a full buffer has been received.
                •   RST: This bit is used to reset a connection that has become confused or corrupted. It is also used to
                    reject an invalid segment and to refuse an attempt to open a connection.
                •   SYN: This bit is used to synchronize the sequence numbers during connection establishment.
                    A connection request carries SYN = 1 and ACK = 0 because the piggybacked Acknowledgement
                    number field is not in use, whereas a connection acceptance carries an acknowledgement and thus
                    has SYN = 1 and ACK = 1.
                •   FIN: This bit is used to terminate the connection. If this bit is set to 1, it means the sending
                    process has transmitted all the data and has no more data to transmit.
              Window Size: It is a 16-bit long field that specifies the size of the window (in bytes) that the receiving
                process should maintain. The maximum size of the window is 65,535 bytes. This value is regulated by
                the receiver and is usually known as the receive window.
              Checksum: It is a 16-bit long field that is used to detect errors. The checksum in TCP is mandatory,
                unlike in UDP. The same pseudoheader as that of UDP is added to the segment, and for the TCP
                pseudoheader, the value of the Protocol field is set to 6.
              Urgent Pointer: It is a 16-bit long field which is used only when the segment contains urgent data.
              Options: This field can be up to 40 bytes long and is used to convey additional or optional information in
                the TCP header.
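             A small sketch (not from the text) showing how the control flags can be extracted from a TCP header with bit masks; the sample header values are fabricated.

                 import struct

                 # Bytes 12-13 of the TCP header hold HLEN (4 bits), the reserved bits and the six flags.
                 FLAGS = {"URG": 0x20, "ACK": 0x10, "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

                 def tcp_flags(header):
                     hlen_flags = struct.unpack("!H", header[12:14])[0]
                     hlen_words = hlen_flags >> 12                      # header length in 32-bit words
                     return hlen_words, [name for name, bit in FLAGS.items() if hlen_flags & bit]

                 # Fabricated 20-byte header of a connection request: HLEN = 5, SYN = 1, ACK = 0
                 sample = struct.pack("!HHIIHHHH", 1345, 23, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
                 print(tcp_flags(sample))                               # (5, ['SYN'])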
             11. Describe three phases of connection-oriented transmission in TCP.
           Ans: The connection-oriented transmission in TCP needs three phases, which are described as follows:
            Connection Establishment: This is the first phase of connection-oriented transmission in TCP. In
              this phase, the TCPs in the machines that wish to communicate need to be first connected. Each of
             the communicating parties must initialize communication and take permission from the other party
             before transferring any data. In TCP, connection is established using three-way handshaking
             (discussed in Q12).
           Data Transfer: After the TCPs of the communicating machines are connected, the data transfer
             phase begins. In this phase, both the parties can send segments to each other at the same time as the
             connection established between TCPs is full-duplex. After receiving a segment, the receiving party
              is also required to send an acknowledgement number to the sending party to confirm the receipt
             of segment. An acknowledgement from either side (client to server or server to client) can also be
             piggybacked on the segment (containing data) that is travelling in the same direction. That is, a
             single segment may contain both data and acknowledgement.
             Connection Termination: This is the last phase of connection-oriented transmission that commences
               after the data have been transferred. Though either of the communicating parties can close the
               connection, generally the client initiates the connection close command. In TCP, connection is termi-
                nated using three-way handshaking mechanism (discussed in Q14).
             12. Explain how connection is established in TCP using three-way handshaking mechanism.
           Ans: TCP is a connection-oriented protocol that allows full-duplex transmission. In TCP, the
        connection is established using three-way handshaking mechanism.
        Three-Way Handshaking
         The server begins the mechanism by performing a passive open: the server process informs its TCP that it is ready to accept an
         incoming connection by executing the LISTEN and ACCEPT primitives. This is called a request for a
          passive open. The client process then sends a request for an active open to its TCP by executing the
          CONNECT primitive. This primitive specifies the IP address and the port number so that the TCP on the client
          can identify the specific server process to which the client process wants to connect. Now, TCP carries out
          the three-way handshaking process illustrated in Figure 12.4.
         Figure 12.4 Connection Establishment Using Three-Way Handshaking (the client performs an active open by sending a SYN segment with SEQ = m; the server, having performed a passive open, replies with a SYN + ACK segment with SEQ = n and ACK = m + 1; the client completes the handshake with an ACK segment carrying SEQ = m and ACK = n + 1)
         Figure 12.5 Connection Termination Using Three-Way Handshaking (the client performs an active close by sending a FIN segment with SEQ = m and ACK = n; the server performs a passive close and replies with a FIN + ACK segment with SEQ = n and ACK = m + 1; the client completes the termination with an ACK segment carrying SEQ = m and ACK = n + 1)
                  sequence number. The sequence number of this segment is the same as that of the first FIN segment
                   sent by the client (that is, m). It includes an acknowledgement number equal to the sequence number
                   received in the FIN + ACK segment plus one (that is, n + 1).
            15. Compare UDP with TCP.
          Ans: Both UDP and TCP are transport layer protocols that provide process-to-process delivery of
        packets. Some differences between UDP and TCP are listed in Table 12.3.
        Table 12.3 Comparison Between UDP and TCP
          UDP                                                          TCP
           • It is a simple and unreliable protocol.                   • It is a feature-rich and reliable protocol.
          • It is a connectionless protocol, which means an es-       • TCP is a connection-oriented protocol, which means
             tablishment of connection between client and server          a virtual connection needs to be established between
             is not required before starting the transfer of data.        client and server before initiating the data transfer.
          • UDP application sends message-based data in               • TCP application sends stream-based data with no
             discrete packages.                                           specific structure.
          • No acknowledgement of data takes place.                    • Each received byte of data needs to be acknowledged.
          • Retransmission is not performed automatically.            • The lost data is retransmitted automatically by TCP.
             Applications must detect the lost data by themselves
             and retransmit if required.
           • It does not provide any flow control mechanism.            • It offers flow control using the sliding window mechanism.
          • Its transmission speed is very high.                       • Its transmission speed is high but lower than UDP.
           • It is suitable for transferring small amounts of data,    • It is suitable for small as well as large amounts of
              up to hundreds of bytes.                                     data, up to a few gigabytes.
          • Some protocols and well-known applications of UDP         • Some protocols and well-known applications of TCP
             include DNS, BOOTP, TFTP and SNMP.                           include FTP, NNTP, DNS, SMTP, IRC and HTTP.
          • Its header is shorter than that of TCP and thus it        • It incurs more overhead than UDP.
             incurs less overhead.
             16. What is meant by remote procedure call (RPC)? Explain its mechanism.
           Ans: RPC, as the name implies, is a communication mechanism that allows a process to call a proce-
         dure on a remote system connected via a network. It was introduced by Birrell and Nelson in 1984. This
         mechanism allows programs to call procedures located on a remote host. The calling
         process (client) can call the procedure on the remote host (server) in the same way as it would call a
         local procedure. The syntax of an RPC call is very similar to that of a conventional procedure call, as given below:
                              Call <Procedure_id>(<List of parameters>);
           The RPC system facilitates the communication between client and server by providing a stub on
        both client and server. For each remote procedure, the RPC system provides a separate stub on the client
        side. When the client process wants to invoke a remote procedure, the RPC call is implemented in the
        following steps.
           1. The RPC system invokes the stub for the remote procedure on the client, passing to it the parameters
               that are to be passed further to the remote procedure. The client process is suspended from execution
               until completion of the call.
             2. The client stub performs parameter marshalling, which involves packaging the parameters into
                a machine-independent form, so that they can be transmitted over the network. It now prepares a
                 message containing the identifier of the procedure to be executed and the marshalled parameters.
             3. The client stub sends the message to the server. After the message has been sent, the client stub
                blocks until it gets reply to its message.
             4. The corresponding stub on the server side receives the message and converts the parameters into
                 a machine-specific form suitable for the server.
             5. The server stub invokes the desired procedure, passing parameters to it. The server stub is suspended
                from execution until completion of the call.
             6. The procedure executes and the results are returned to the server stub.
             7. The server stub converts the results into a machine-independent form and prepares a message.
             8. The server stub sends the message containing the results to the client stub.
              9. The client stub converts the results into a machine-specific form suitable for the client.
            10. The client stub forwards the results to the client process. With this, the execution of RPC is
                completed, and now, the client process can continue its execution.
                 Figure 12.6 depicts all the steps involved in execution of RPC.
         Figure 12.6 Steps Involved in the Execution of an RPC (the call flows from the client process through the client stub to the server stub and the remote procedure, and the results return along the reverse path)
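             To make the stub and marshalling steps concrete, here is a minimal sketch (not from the text) of an RPC-style exchange in which the parameters are marshalled into a machine-independent form (JSON) and sent over a socket; the procedure name add and port 7000 are hypothetical.

                 import json, socket

                 def client_stub(proc_id, params, server_addr=("127.0.0.1", 7000)):
                     """Marshal the call, send it to the server and wait (blocked) for the reply."""
                     message = json.dumps({"procedure": proc_id, "params": params}).encode()  # marshalling
                     with socket.create_connection(server_addr) as s:
                         s.sendall(message)
                         s.shutdown(socket.SHUT_WR)
                         reply = s.makefile().read()                                          # block for the reply
                     return json.loads(reply)["result"]                                       # unmarshal the result

                 def server_loop(port=7000):
                     procedures = {"add": lambda a, b: a + b}          # remote procedures offered
                     with socket.create_server(("", port)) as srv:
                         while True:
                             conn, _ = srv.accept()
                             with conn:
                                 request = json.loads(conn.makefile().read())                 # unmarshal parameters
                                 result = procedures[request["procedure"]](*request["params"])  # invoke procedure
                                 conn.sendall(json.dumps({"result": result}).encode())        # marshal and return

                 # With server_loop() running in another process, client_stub("add", [2, 3]) returns 5.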
               5. In TCP, the Control field consists of:
                  (a) Six flags       (b) Three flags       (c) Five flags       (d) Seven flags
               6. In TCP, the connection establishment uses:
                  (a) FIN and SYN bits       (b) SYN and ACK bits       (c) PSH and URG bits       (d) None of these
               7. During connection establishment in TCP, the mode of data transmission is:
                  (a) Full-duplex       (b) Half-duplex       (c) Simplex       (d) None of these
        Answers
        1. (b)     2. (d)   3. (c) 4. (c) 5. (a)    6. (b)     7. (a)
        Name Space
        It is a representation of the domains of the Internet as a tree structure. Each domain can be subdivided
        into many other domains, which are further partitioned thereby creating a hierarchy. The leaf nodes of
        the tree cannot be subdivided and each leaf node may contain only a single host or several hosts. The
        top-level domains are classified into two groups, namely, generic and country. The generic group con-
        tains domain names such as com (commercial), edu (educational institutions), gov (governments), int
        (international organizations), mil (armed forces), net (network providers) and org (organizations). The
        country domains contain one entry for every country, as per ISO 3166 specification. The tree structure
        of the name space is shown in Figure 13.1.
         Figure 13.1 Tree Structure of the Name Space (top-level domains divided into the generic and country groups)
           A label is included in each node of the tree (Figure 13.2). The label is a string, which has a maximum
        size of 63 characters. The label at the root node is just an empty string. The child nodes, which have the
        same parent, are not allowed to have the same label name as it may cause ambiguity. Each node in the
            (Figure 13.2 shows one branch of the tree with the labels root (.), edu, flag, tc and terminator, and the corresponding domain names edu., flag.edu., tc.flag.edu. and terminator.tc.flag.edu.)
                                           Figure 13.2    Labels and Domain Names
        tree has a domain name, which is formed by a sequence of labels separated by dots (.). The domain
         names are again divided into two types, namely, fully qualified domain name (FQDN) and partially
         qualified domain name (PQDN).
           FQDN: If the domain name ends with a dot (.), that is, null string, it is said to be an FQDN. It is
             the full name of a host, which includes all labels, starting from the host label toward the root label,
             which is a dot (.). For example, in Figure 13.2, the FQDN of a host named terminator installed at
              the technology centre tc is terminator.tc.flag.edu.
            PQDN: If a domain name does not end with a null string, then it is said to be a PQDN. This means
              a PQDN starts with the host label but does not reach up to the root label. A partial domain name
              is used when the domain name and the client reside in the same site. The resolver converts the
              PQDN to an FQDN (see the sketch that follows). For example, if a person at the flag.edu site wants
              to get the IP address of the terminator computer, he/she defines only the partial name terminator.
              The rest of the part (called the suffix), that is, tc.flag.edu., is added by the resolver and then the address
              is passed to the DNS server.
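            A small Python sketch of the resolver behaviour described above is given below; the suffix tc.flag.edu. is taken from the example, while the function name is purely illustrative.

            SEARCH_SUFFIX = "tc.flag.edu."             # suffix appended by the resolver

            def to_fqdn(name):
                # An FQDN already ends with a dot (the root label); leave it unchanged.
                if name.endswith("."):
                    return name
                # A PQDN is completed by appending the search suffix.
                return name + "." + SEARCH_SUFFIX

            print(to_fqdn("terminator"))               # terminator.tc.flag.edu.
            print(to_fqdn("terminator.tc.flag.edu."))  # already an FQDN, unchanged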
        Resource Records
        Every domain is associated with a set of information known as resource records (RRs). The most com-
        mon RR is the IP address for a single host, although many other kinds of resource records are also found.
         An RR is a five-tuple set, which is mostly represented as ASCII text, but for better efficiency, it can also
         be encoded in binary form. The different fields of the five-tuple set are described as follows (a small illustrative sketch appears after the list):
          Domain_Name: This field identifies the domain with which this record is associated. Usually, many
             records related to a single domain exist in the databases and the database contains RRs for multiple
              domains. Thus, this field is used for search operations so that queries can be executed efficiently.
          Time_to_Live: This field indicates the stability of the record. It specifies the time interval with-
             in which a RR may be cached by the receiver so that the server need not be consulted for RR
             repeatedly. A zero value in this field indicates that the RR is to be used for a single transaction and
              therefore, should not be cached. The highly stable records are given large values while less stable
              records are given small values.
             Class: This field identifies the protocol family. It is set to IN for Internet information; otherwise,
               some other codes are used.
              Type: This field specifies the type of the record.
             Value: This field depends on the Type field and can be a domain name, a number or an ASCII
               character.
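            The sketch below models the five-tuple as a Python namedtuple; the record values (host name, time to live and IP address) are illustrative only.

            from collections import namedtuple

            # The five fields of a resource record described above.
            ResourceRecord = namedtuple(
                "ResourceRecord",
                ["domain_name", "time_to_live", "rr_class", "rr_type", "value"])

            # A hypothetical address record for the host used in the earlier example.
            rr = ResourceRecord(
                domain_name="terminator.tc.flag.edu.",
                time_to_live=86400,      # a stable record, cached for one day
                rr_class="IN",           # Internet information
                rr_type="A",             # the value is an IPv4 address
                value="192.0.2.10")      # documentation address, purely illustrative

            print(rr.domain_name, rr.rr_type, rr.value)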
        Name Servers
        As DNS database is very vast, it is impossible for a single server to hold information about the complete
        database and respond to all queries. Even if it is done, then a failure in the single name server would
        bring the whole network down. To avoid such a situation, the DNS name space is divided into many
        non-overlapping zones (represented by dotted areas in Figure 13.3) with each zone containing name
        servers holding information about that zone. Each zone covers some nodes of the tree and contains a
        group of one or more subdomains and their associated RRs that exist within that domain. The name
        server creates a zone file, which holds all the information of the nodes under it. Each zone has a primary
        name server and one or more secondary name servers. The primary name server of a zone obtains the
        information from a file on the disk, whereas the secondary name servers obtain information from the
        primary name server. Some servers can also be located outside the zone to improve the reliability.
            (Figure 13.3 shows the generic and country parts of the name space divided into non-overlapping zones, each served by its own name servers.)
           Advantages
           • The delivery of messages is very fast, sometimes almost instantaneous, even though the message is
             meant for overseas or just to a friend next door.
           • The cost of e-mailing is almost free as it involves a negligible amount of telephone and ISP charges.
           • Multiple copies of the same message can be sent to a group of people at the same time and can be
             sent as easily to a single person.
           • Pictures, documents and other files can also be attached to messages.
           Disadvantages
           • Although e-mail is delivered instantly, the recipient may or may not read his/her mail on time. That
             defeats the quickness of electronic mailing.
           • The user must stay online to read and write more than one mail. In addition, most webmail services
             either display advertisements during use or append them to mails sent. This increases the size of the
             original mail, which brings a significant decrease in speed of use.
           • Since e-mail passes through a network, it may be intercepted in between. Moreover, viruses can
             enter the system while downloading the e-mails.
           • The slightest error in the address or a failure in one of the links between the sender and receiver is
             enough to prevent a delivery.
            (Figure: architecture of e-mail. The sender's UA passes the message to an MTA client; MTA client/server pairs relay it across the LAN/WAN and the Internet, and the receiver's UA retrieves it through an MAA client/server pair.)
           Reading Messages: This service helps the user to read the messages in his/her inbox. Most
             user agents show a one-line description of each received mail.
          Replying to Messages: This service is used to reply to the messages that have been received by
            the user. While replying, a user can send a new reply or may include the original message sent by
            the sender along with the new one. Moreover, the user can reply either to the original sender or to
            all the recipients of message.
          Forwarding of Messages: This service helps the user to forward the message to the third party
            instead of sending it to the original sender. The user can also add some more content in the message
            to be forwarded.
          Handling Mailboxes: The user agent is responsible for maintaining all the mailboxes in e-mail
            system. Basically, it creates two types of mailboxes, namely, inbox and outbox. The inbox contains
            all the messages received by a user and the outbox contains all the messages sent by the user. The
            messages are kept in both mailboxes until the user deletes them.
          There are two types of UAs namely, command-driven and graphical-user-interface (GUI )-based
        UAs. These types are described as follows:
           Command-driven UA: This UA was used in the early days of e-mail. In this type, the user types
             one-character commands at the command prompt; for example, a single character may be typed to
             reply to the sender of a message. A few command-driven UAs include pine, elm and mail.
          GUI-based UA: This UA being used nowadays allows the user to use both mouse and keyboard
            to interact with the software. As the name of this UA suggests, it provides GUI components such
            as menus and icons that help the users to access the services more easily. Thus, GUI-based UAs
            are more user friendly.
              Content Transfer Encoding: This header defines the different methods used for encoding the
                messages into various formats, so that they can be transmitted over the network. Some schemes used
               for encoding the message body are listed in Table 13.3.
                             Type                    Description
                             7 bit                   NVT ASCII characters and short lines
                             8 bit                   Non-ASCII characters and short lines
                             Binary                  Non-ASCII characters with unlimited length
         between the sender and the receiver, SMTP is used twice. First, it is used to transfer the mail from the
         sender's end to the sender's mail server, and then to transfer the mail from the sender's mail server to the receiver's
         mail server. To retrieve the mail from the receiver's mail server at the receiver's end, a different mail pro-
         tocol such as POP3 or IMAP (discussed in the next question) is used. While transferring mails, SMTP
        uses commands and responses between MTA client and MTA server.
          Commands: They are sent from the client machine to the server machine. The syntax of a com-
              mand consists of a keyword followed by zero or more arguments. There are a total of 14 commands
              defined by SMTP, some of which are listed in Table 13.4.
        Table 13.4       SMTP Commands
             Responses: They are just the opposite of commands, that is, they are sent from a server machine
               to a client machine. A response consists of a three-digit code, which may be followed by additional
               textual information. Some of the SMTP responses are shown in Table 13.5.
                                 Code                    Information
                                 221                     Service closing transmission channel
                                 354                     Start mail input
                                 500                     Syntax error, unrecognized command
                                  503                     Bad sequence of commands
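            Python's standard smtplib module issues these commands and returns the three-digit response codes; the sketch below is illustrative only, the host name and addresses are hypothetical, and a reachable MTA server is assumed.

            import smtplib

            # Connect to a hypothetical MTA server on port 25; every command
            # returns a (code, text) pair like the responses in Table 13.5.
            with smtplib.SMTP("mail.example.com", 25) as server:
                print(server.helo())                      # e.g. (250, b'...')
                server.mail("alice@example.com")          # MAIL FROM command
                server.rcpt("bob@example.com")            # RCPT TO command
                code, text = server.data(b"Subject: Hi\r\n\r\nHello Bob\r\n")
                print(code)                               # 250 on success
                # QUIT is issued automatically on leaving the with-block;
                # the server answers with 221 and closes the channel.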
         POP3 client software and POP3 server software must be installed on the recipient's machine and on its mail server, respectively. Further,
         POP3 works in two modes: delete mode and keep mode. In the delete mode, once a message has been
         pulled from the mail server, it is deleted from the mailbox on the mail server. On the other hand, in the
         keep mode the message remains in the mailbox even after it has been pulled from the mail server. This
         mail can be read later from any other computer or location.
           Whenever a recipient (client) needs to retrieve mails from the mail server, it establishes a TCP con-
        nection to the server on the port 110. Then, it passes its username as well as the password to the mail
        server to get access to the mailbox on the mail server. After the server has verified the client, the client
        can list and download the messages one at a time.
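            The retrieval sequence just described can be sketched with Python's standard poplib module; the host name and credentials below are hypothetical and a reachable POP3 server is assumed.

            import poplib

            # Connect to a hypothetical POP3 server on the well-known port 110.
            mailbox = poplib.POP3("mail.example.com", 110)
            mailbox.user("bob")                  # send username
            mailbox.pass_("secret")              # send password; server verifies the client
            count, size = mailbox.stat()         # number of messages and mailbox size
            print(count, "messages,", size, "bytes")

            for i in range(1, count + 1):
                response, lines, octets = mailbox.retr(i)   # download message i
                print(b"\r\n".join(lines).decode("utf-8", "replace")[:80])
                # mailbox.dele(i)                # uncomment for POP3 delete mode

            mailbox.quit()                       # keep mode: messages stay on the server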
           POP3 has some disadvantages, which are as follows:
          POP3 does not support mail organization on the server, that is, a user cannot have different folders
             on the mail server.
           POP3 does not allow the contents of the mail to be checked in parts while the mail is being down-
              loaded. The mail can be checked only after it has been completely downloaded.
        (b) IMAP4: It stands for Internet mail access protocol and the number 4 denotes its version number.
        Like POP3, it is also an MAA protocol but it provides more functionality and is more complex than
        POP3. Some of the additional features provided by IMAP4 are as follows:
          A user can create folders on the mail server and can delete or rename the mailboxes.
          IMAP enables the user to partially download the mails. This is especially useful in cases where a
             message contains large audio and video files, which may take a lot of time to download because
             of slow Internet connection. In such cases, the user can download only the text part of message if
             required using IMAP4.
          A user can search through the contents of the messages while the messages are still on the mail
             server.
          IMAP allows the user to check the contents of the e-mail before it has been downloaded.
          A user can selectively retrieve the attributes of messages such as body, header, etc.
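            The additional IMAP4 features listed above can be sketched with Python's standard imaplib module; the host name, credentials, folder names and search term are hypothetical and a reachable IMAP4 server is assumed.

            import imaplib

            # Connect to a hypothetical IMAP4 server (default port 143).
            mail = imaplib.IMAP4("mail.example.com")
            mail.login("bob", "secret")

            # Folders (mailboxes) can be created and renamed on the server.
            mail.create("Projects")
            mail.rename("Projects", "Work")

            mail.select("INBOX")                              # open a mailbox on the server
            typ, data = mail.search(None, "SUBJECT", "report")   # search on the server

            for num in data[0].split():
                # Fetch only the header: a partial download of the message.
                typ, header = mail.fetch(num, "(BODY[HEADER])")
                print(header[0][1][:80])

            mail.logout()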
            (Figure 13.6 shows the client with a user interface, a control process and a data transfer process, and the server with a control process and a data transfer process; the control processes are linked by the control connection and the data transfer processes by the data connection over TCP/IP, with files read from and written to disk on either side.)
                                        Figure 13.6   Mechanism of File Transfer in FTP
        and server are connected via control connection, whereas the data transfer process of client and server
        are connected via data connection. The control processes of client and server communicate using NVT
        format. They are responsible for converting from their local syntax such as DOS or UNIX to NVT
        format and vice versa. The data transfer processes of client and server communicate under the control
         of commands transferred through the control connection.
               13. Explain the following with respect to FTP:
              File Type
              Data Structure
              Transmission Mode
            Ans: To transfer a file through the data connection in FTP, the user (client) has to specify certain
         attributes to the server including the type of the file to be transferred, the data structure and the transmission
         mode so that the data connection could be prepared accordingly. These attributes are described as
         follows:
           File Type: FTP supports three types of files for transmission over the data connection, namely, an
               ASCII file, an EBCDIC file or an image file. The ASCII file is the default format used for text files. It
               uses the 7-bit ASCII format to encode each character of the text file. The sender converts the file from
               its original form to ASCII characters, while the receiver converts the ASCII characters back to the
               original form. If EBCDIC encoding (a file format used by IBM) is supported at the sender or receiver
               side, then files can be transmitted using the EBCDIC encoding. The image file is the default format
               used in the transmission of binary files. Binary files are sent as a continuous stream of bits without
               using any encoding method. Usually, the compiled programs are transferred using the image file format.
          Data Structure: FTP uses three data structures to transfer a file, namely, file structure, record struc-
              ture and page structure. When file structure format is used, the file is sent as a continuous stream of
              bytes. The record structure can be used only with text files and the file is divided into many records.
                 In page structure, each file is divided into a number of pages where each page contains a page
                 number and a page header. These pages can be accessed sequentially as well as randomly.
                Transmission Mode: FTP uses three types of transmission modes, namely, stream mode, block
               mode and compressed mode. The default mode of transmission is the stream mode, which sends
                the data as a continuous stream of bytes. In this case, no end-of-file marker is required; the
                end of the file is simply indicated by the closing
               of connection by the sender. In the block mode, data is sent in blocks, where each block is pre-
               ceded by a 3-byte header. The first byte is just a description about the block and the next two bytes
               define the size of the block in bytes. The compressed mode is used in case of large files to reduce
                their size so that they can be transmitted conveniently. The size of file is reduced by replacing
                multiple consecutive occurrences of characters with a single character or reducing the number of
                 repetitions. For example, in text files, blank spaces can be compressed.
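            Python's standard ftplib module can be used to sketch the ASCII and image file types described above; the host name, credentials and file names are hypothetical and a reachable FTP server is assumed.

            from ftplib import FTP

            # Connect to a hypothetical FTP server; the file names are illustrative.
            ftp = FTP("ftp.example.com")
            ftp.login("student", "secret")

            # ASCII file type: retrlines issues TYPE A and reads the text file
            # line by line over the data connection.
            ftp.retrlines("RETR notes.txt", print)

            # Image (binary) file type: retrbinary issues TYPE I and reads the file
            # as a continuous stream of bytes, typical for compiled programs.
            with open("program.bin", "wb") as out:
                ftp.retrbinary("RETR program.bin", out.write)

            ftp.quit()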
        Client (Browser)
        Browser is a program which accesses and displays the web pages. It consists of three components, namely,
        controller, interpreter and client protocol. The user provides inputs (request for a web document)
        to the controller through a keyboard or a mouse. After receiving the input, the controller uses client
          protocols such as FTP or HTTP to access the web document. Once the controller has accessed the
         desired web document, it selects an appropriate interpreter such as hypertext markup language (HTML)
          or JavaScript depending on the type of the web document accessed. The interpreters help the controller
          to display the web document. A few of the web browsers used today include Microsoft Internet Explorer,
           Opera and Google Chrome.
              To understand how a browser works, consider a user who wants to access the link http://www.
          ipl.com/home/teams.html. When the user provides this link (URL) to the browser, the browser goes
          through the following steps:
               1. The browser parses the given URL and sends a query to the DNS server asking for the IP
                 address of www.ipl.com.
              2. The DNS sends a reply to the browser, providing the desired IP address.
              3. A TCP connection to port 80 on the received IP address is made by the browser.
              4. The browser then sends a request for the file home/teams.html.
              5. The file home/teams.html is sent by the www.ipl.com server.
              6. The TCP connection is ended.
              7. The browser displays the text in the file /home/teams.html. It also fetches and displays the images
                  in the file.
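            The steps above can be sketched at the socket level in Python as shown below; the sketch assumes network access and that the host from the example is reachable on port 80.

            import socket

            host = "www.ipl.com"                  # host from the example URL
            path = "/home/teams.html"

            # Steps 1-2: ask DNS for the IP address of the host.
            ip = socket.gethostbyname(host)

            # Step 3: open a TCP connection to port 80 on that IP address.
            with socket.create_connection((ip, 80)) as conn:
                # Step 4: send the request for the file.
                request = "GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
                conn.sendall(request.encode("ascii"))
                # Step 5: receive the file sent by the server.
                reply = b""
                while True:
                    chunk = conn.recv(4096)
                    if not chunk:
                        break
                    reply += chunk
            # Step 6: the TCP connection is ended when the with-block exits.
            # Step 7: a real browser would now render the HTML and fetch its images.
            print(reply.split(b"\r\n", 1)[0])     # status line of the response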
        Server
        Server is the place where web pages are stored. On a request from the client, the server searches the
        desired document from the disk and returns the document to the browser through a TCP connection. The
        steps performed by a server are as follows:
           1. The server accepts the TCP connection request arriving from the client.
            2. It then acquires the name of the file requested by the client.
            3. The server retrieves the file from the disk.
            4. The file is sent back to the client.
           5. The TCP connection is released.
         The efficiency of a server can be improved by caching the recently accessed pages so that those pages
         could be directly accessed from memory and need not be accessed from the disk. Moreover, the server can
         support multithreading, that is, serving multiple clients at the same time to increase the efficiency.
        the protocol and the domain name. Then comes the last part of a URL, namely, the path and the file name.
        The path name specifies the hierarchical location of the said file on the computer. For instance, in http://
        www.xyz.com/tutor/start/main.htm, the file main.htm is located in start, which is a subdirectory of tutor.
        Cookies
         Cookies are small files or strings that are used to store information about the users. This stored
        information may be later used by the server while responding to the requests of the client(s). For some
        particular sites, only registered users are permitted to access the information. In such a case, the server
        stores the user’s registration information in the form of cookies on the client’s machine. The size of a
        cookie file cannot exceed 4 KB. A user can disable the cookies in the browser or can even delete them.
             16. What is HTTP? Describe the format of HTTP request and response message.
           Ans: Hypertext transfer protocol (HTTP) is the most common protocol that is used to access infor-
        mation from the Web. It manages the transfer of data between the client and the server. The older version
        of HTTP was 1.0, in which TCP connection was released after serving a single request. This was not ad-
        equate as every time a new connection had to be established. This led to the development of HTTP version
        1.1 that supports persistent connection, that is, it is meant for multiple request–response operations.
            Further, HTTP is a stateless protocol and all the transactions between the server and client are carried
        out in the form of messages. The client sends a request message to the server and the server replies with
         a response message. The HTTP request and response messages have a similar format (Figure 13.7), ex-
         cept that in a request message, the first line is the request line, while in a response message, the first line
         is the status line. The remaining part of both the messages consists of a header and sometimes, a body.
            (Figure 13.7 shows the request message format, comprising the request line, headers and body, and the response message format, comprising the status line, headers and body.)
           The HTTP request messages are of different types, which are categorized into various methods as
        shown in Table 13.6.
           For each HTTP request message, the server sends an HTTP response that consists of status line and
         some additional information. The status line comprises a three-digit status code, similar to the response
        message of FTP and SMTP. The status code indicates whether the client request is satisfied or if there is
        some error. The first digit of the status code can be 1, 2, 3, 4 or 5 and it indicates one of the five groups
                          Method                 Description
                          GET                    Request to access a web page from the server.
                          HEAD                   Request to get the header of a web page
                          PUT                    Request to store a web page
                          POST                   Append to a named resource
                          DELETE                 Remove the web page
                          TRACE                  Echo the incoming request
                          CONNECT                Reserved for future use
                          OPTIONS                Enquire about certain options
        into which response messages have been divided. Codes falling in the 100 range are only informational
        and thus, rarely used. The codes falling in the 200 range indicate a successful request, codes in the 300
        range redirect the client to some other site, the codes in the 400 range indicate an error in the client side
        and the codes in the 500 range indicate an error at the server site.
            Further, HTTP also contains various headers, which are used to transfer additional information other
         than the normal message between the client and the server. For example, the request header can ask for
         a message to be delivered in some particular format while a response header can contain a description
         of the message. The additional information can be included in one or more header lines within the
         header. Each header line has the format shown in Figure 13.8: a header name and a header value
         separated by a colon and a space.
                                            Figure 13.8   HTTP Header Format
           Each header line may belong to one of the four types of HTTP
         headers, which are discussed as follows:
          General Header: This header can be included in both request and response message. It contains
             general information about the sent or received messages. An example of a general header is Date
             that is used to display the current date.
          Request Header: This header can be used only in the request messages from the client. The details
             about the client setup and the preference of the client for any particular format are included in this
             header. An example of a request header is the From header, which shows the e-mail address of
             the user.
           Response Header: This header is part of the response messages only. It contains the server's setup infor-
              mation. An example of a response header is the Age header, which shows the age of the document.
          Entity Header: This header includes information about the body of a document. It is mostly pres-
             ent in the request or response messages. An example of entity header is the Allow header, which
             lists the valid methods that can be used with a URL.
           Further, HTTP is similar to SMTP because in both protocols, the client initiates a request and the
        server responds to that particular request. However, HTTP messages can only be read by HTTP server
        and HTTP client, whereas in SMTP, messages can be read by humans also. Moreover, HTTP messages
         are forwarded immediately, unlike in SMTP, where messages are first stored and then forwarded.
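            The sketch below builds a request message and parses a sample response message in the format just described; the URL is taken from an earlier example, while the dates and body are illustrative only.

            # Build a request message: request line, header lines, blank line (no body).
            request = (
                "GET /tutor/start/main.htm HTTP/1.1\r\n"    # request line
                "Host: www.xyz.com\r\n"                     # request headers
                "Date: Mon, 01 Jan 2024 10:00:00 GMT\r\n"
                "\r\n"
            )
            print(request)

            # A sample response message: status line, headers, blank line, body.
            response = (
                "HTTP/1.1 200 OK\r\n"                       # status line
                "Date: Mon, 01 Jan 2024 10:00:01 GMT\r\n"
                "Content-Type: text/html\r\n"
                "\r\n"
                "<html><body>Hello</body></html>"
            )

            head, body = response.split("\r\n\r\n", 1)
            status_line, *header_lines = head.split("\r\n")
            code = int(status_line.split()[1])
            groups = {1: "informational", 2: "success", 3: "redirection",
                      4: "client error", 5: "server error"}
            print(status_line)                              # HTTP/1.1 200 OK
            print(groups[code // 100])                      # success
            for line in header_lines:
                name, value = line.split(": ", 1)           # header name : space : value
                print(name, "->", value)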
             18. What is a proxy server and how it is related to HTTP?
            Ans: A proxy server is a computer, which stores the copies of responses to recent requests in
         its cache so that further requests for these pages need not be sent to the original server. The proxy server
         helps to reduce the load on the original server as well as decrease the network traffic, thereby improving
         the response time. To use the proxy server, the client must be configured to send HTTP requests to the proxy server
         instead of the original server. Whenever an HTTP client wishes to access a page, it sends an HTTP request to
        a proxy server. On receiving a request, the proxy server looks up in the cache for the desired web page.
        If the page is found, the stored copy of response is sent back to the client; otherwise, the HTTP request
        is forwarded to the target server. Similarly, the proxy server receives responses from the target server.
         The proxy server stores these responses in the cache and then sends them to the client.
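            The caching logic of a proxy can be sketched in a few lines of Python as shown below; it ignores details such as cache expiry, uses a hypothetical URL, and makes the request to the target server through the standard urllib module.

            import urllib.request

            cache = {}                              # URL -> stored copy of the response body

            def proxy_get(url):
                # Serve from the cache when possible; otherwise forward to the target server.
                if url in cache:
                    return cache[url]               # stored copy; no request to the origin
                with urllib.request.urlopen(url) as resp:
                    body = resp.read()
                cache[url] = body                   # store the response for later requests
                return body

            # The second call for the same page is answered from the cache.
            page = proxy_get("http://www.example.com/")
            page_again = proxy_get("http://www.example.com/")
            print(len(page), len(page_again))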
             19. Explain SNMP and mention the two protocols used by it for managing tasks.
           Ans: Simple network management protocol (SNMP) is an operational framework, which helps
        in the maintenance of the devices used in the Internet. In addition, SNMP comprises two components,
        namely, manager and agent. A station (host) which manages another station is called the manager.
        It runs the SNMP client program. The agent is the station (router) which runs the server program and
        is managed by the manager. Both the stations interact with each other to carry out the management. The
        manager can access all the information stored in the agent station. This information can be used by
        the manager to check the overall performance. The manager can also perform some remote operation on the
        agent station such as rebooting the remote station. On the other hand, the agent can also perform some
        management operations, such as informing the manager about the occurrence of any unusual situation,
        which may hamper the performance.
             In order to manage tasks efficiently, SNMP uses two protocols: structure of management information
         (SMI) and management information base (MIB). These two protocols function together with the SNMP.
        The role of SMI and MIB protocols is discussed as follows:
           SMI: It generally deals with the naming of objects and defining their range and length. However,
              it does not specify the number of objects maintained by an entity, the relationship between objects,
              and their corresponding values.
           MIB: MIB basically performs the work that SMI has left behind. It defines the object name as per
               the conventions specified by SMI. It also states the number of objects and their types.
           4. The well-known port _____ is used for the control connection and the well-known port ______
              is used for the data connection.
              (a) 20, 21              (b) 21, 20
              (c) 21, 22              (d) 20, 22
           5. Which is the default format used by FTP for transferring text files?
              (a) EBCDIC file         (b) binary file
              (c) ASCII file          (d) bytes
           6. Which port is used by SMTP for TCP connection?
              (a) 25                  (b) 26
              (c) 21                  (d) 20
           7. Which protocol uses the GET and POST methods?
              (a) SMTP                (b) MIME
              (c) IMAP4               (d) HTTP
           8. Which of the following supports a persistent connection?
              (a) FTP                 (b) SMTP
              (c) HTTP                (d) MIME
           9. Which of the following are MAA protocols?
              (a) SMTP and MIME       (b) FTP and HTTP
              (c) SMTP and FTP        (d) POP3 and IMAP4
          10. Which of the following is the protocol used for network management?
              (a) SNMP                (b) POP3
              (c) FTP                 (d) IMAP4
        Answers
        1. (d)     2. (b)   3. (c) 4. (b)   5. (c) 6. (a)   7. (d)    8. (c)   9. (d)   10. (a)
               1. Define multimedia.
            Ans: The word multimedia is made up of two separate words, multi means many and media means
         the ways through which information may be transmitted. Therefore, multimedia can be described as
         an integration of multiple media elements (such as text, graphics, audio, video and animation) used
         to represent the given information, so that it can be presented in an attractive and interactive manner.
        In simple words, multimedia means being able to communicate in more than one way.
                2. What are the different categories of Internet audio/video services?
           Ans: Earlier, the use of the Internet was limited to only sending/receiving text and images; however,
        nowadays, it is vastly being used for audio and video services. These audio and video services are
        broadly classified into three categories, namely, streaming stored audio/video, streaming live audio/video
        and real-time interactive audio/video. Here, the term streaming implies that the user can listen to or
         view the audio/video file after its downloading has begun.
          Streaming Stored Audio/Video: In this category, the audio/video files are kept stored on a server
             in a compressed form. The users can download these stored files whenever required using the
             Internet; that is why this category is also termed as on-demand audio/video. Some examples of
               stored audio/video include songs, movies and video clips.
          Streaming Live Audio/Video: In this category, as the term live implies, the users can listen to
              or view audio/video that are broadcast through the Internet. Some examples of live audio/video
              applications include Internet radio and Internet TV.
          Real Time (Interactive) Audio/Video: In this category, users can communicate with each other in
             an interactive manner using the Internet. This category of Internet audio/video is used for real-time
             interactive audio/video applications such as Internet telephony, online video chatting and Internet
             teleconferencing.
               3. How does streaming live audio/video differ from streaming stored audio/video?
            Ans: The major difference between streaming live audio/video and streaming stored audio/video is
         that in the latter, the communication is meant for a single user (unicast) and occurs on demand when
         the user downloads the file, whereas in the former, the communication is live instead of on demand and
         the file is broadcast to multiple users at the same time.
               4. Explain the approaches that can be used to download streaming stored audio/video files
           Ans: The streaming stored audio/video files are stored on a server in compressed form. In order to
        listen to or view these files, one needs to download them. There are mainly four approaches that can be
        used to download streaming audio/video files. These approaches are discussed as follows:
            The basic function of RTP is to multiplex multiple real-time data streams of multimedia data
         onto a single stream of UDP packets. To do this, different streams of multimedia data are first sent to the
        RTP library that is implemented in the user space along with the multimedia application. The RTP
        library multiplexes these streams and encodes them in RTP packets, which are then stuffed into a
        socket. At the other end of the socket, the UDP packets are created and embedded in IP packets.
        Now, depending on the physical media say Ethernet, the IP packets are further embedded in Ethernet
        frames for transmission. Besides, some other functions of RTP include sequencing, time-stamping
        and mixing.
               6. Explain the RTP packet header format.
           Ans: The RTP is a transport layer protocol that has been designed to handle real-time traffic on the
        Internet. Figure 14.1 shows the format of an RTP packet header, which consists of various fields. Each
         field of the RTP packet header is described as follows:
           Ver.: It is a 2-bit long field that indicates the version number of RTP. The current version of
              RTP is 2.
           P: It is a 1-bit long field that indicates the presence of padding (if set to 1) or absence of padding (if
              set to 0) at the end of the packet. Usually, the packet is padded if it has been encrypted. The number
              of padding bytes added to the data is indicated by the last byte in padding.
           X: It is a 1-bit long field that is set to 1 if an extra extension header is present between the basic
              header and the data; otherwise, its value is 0.
           Contributor Count (CC): It is a 4-bit long field that signifies the total number of contributing
              sources participating in the session. Since this field is of four bits, its value can range from 0 to 15,
              that is, the maximum of 15 contributing sources can participate in the same session.
           M: It is a 1-bit marker field that is specific to the application being used. The application uses
              this marker to indicate certain things such as the start of a video frame and the end of a video
              frame.
           Payload Type: It is a 7-bit long field that indicates the type of encoding algorithm used such as
              MP3, delta encoding and predictive encoding.
           Sequence Number: It is a 16-bit long field that indicates the sequence number of the RTP packet.
              Each packet that is sent in an RTP stream during a session is assigned a specific sequence number.
              The sequence number for the first packet during a session is selected randomly and it is increased
              by one for every subsequent packet. The purpose of assigning sequence number to packets is to
              enable the receiver to detect the missing packets (if any).
           Timestamp: It is a 32-bit long field that is set by the source of stream to indicate when the first
              sample in the packet was produced. Generally, the first packet is assigned a timestamp value at
              random while the timestamp value of each subsequent packet is assigned relative to that of the
              first (or previous) packet. For example, consider there are four packets with each packet contain-
              ing 15 s of information. This implies, if the first packet starts at t = 0, then second packet should
              start at t = 15, the third packet at t = 30 and the fourth at t = 45. This time relationship between
              the packets must be preserved at the playback time on the receiver to avoid the jitter problem. For
               this, timestamp values are assigned to the packets. Suppose the first packet is assigned a timestamp
              value 0, then the timestamp value for the second, third and fourth packets should be 15, 30 and 45,
              respectively. Now, the receiver on receiving a packet can add its timestamp to the time at which it
              starts playback. Thus, by separating the playback time of packets from their arrival time, the jitter
              problem is prevented at the receiver.
              Synchronization Source Identifier: It is a 32-bit long field that indicates the source stream of
               the packet. In case of a single source, this field identifies that source. However, in case of multiple
               sources, this field identifies the mixer—the synchronization source—and the rest of the sources are
               the contributors identified by the contributing source identifiers. The role of mixer is to combine
               the streams of RTP packets from multiple sources and forward a new RTP packet stream to one or
               more destinations.
              Contributing Source Identifier: It is a 32-bit long field that identifies a contributing source in the
               session. There can be maximum 15 contributing sources in a session and accordingly, 15 contribut-
                ing source identifiers.
            (Figure 14.1   RTP Packet Header: Ver., P, X, CC, M, Payload type and Sequence number in the first 32-bit word, followed by the Timestamp, the Synchronization source identifier and the Contributing source identifiers.)
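            The fields described above can be packed into the 12-byte fixed header with Python's struct module, as sketched below; the payload type, sequence number, timestamp and synchronization source identifier values are illustrative only.

            import struct

            def pack_rtp_header(seq, timestamp, ssrc, payload_type,
                                marker=0, padding=0, extension=0, csrc_list=()):
                # Pack the fields of Figure 14.1 into the 12-byte fixed header,
                # followed by one 32-bit word per contributing source (at most 15).
                version = 2
                byte0 = (version << 6) | (padding << 5) | (extension << 4) | len(csrc_list)
                byte1 = (marker << 7) | payload_type
                header = struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
                for csrc in csrc_list:
                    header += struct.pack("!I", csrc)
                return header

            hdr = pack_rtp_header(seq=1, timestamp=0, ssrc=0x1234, payload_type=14)
            byte0, byte1, seq, ts, ssrc = struct.unpack("!BBHII", hdr[:12])
            print(byte0 >> 6, byte1 & 0x7F, seq, ts, hex(ssrc))   # version, payload type, ...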
              Application-defined Packet: This packet is an experimental packet for applications that wish to use
                new features that have not been defined in the RTCP standard. Eventually, if an experimental packet
                type is found useful, it may be assigned a packet type number and included in the RTCP standard.
               8. How is SIP used in the transmission of multimedia?
           Ans: SIP, which stands for session initiation protocol, is designed by IETF to handle communica-
        tion in real-time (interactive) audio/video applications such as Internet telephony, also called voice over IP
        (VoIP). It is a text-based application layer protocol that is used for establishing, managing and terminating
        a multimedia session. This protocol can be used for establishing two-party, multiparty or multicast sessions
        during which audio, video or text data may be exchanged between the parties. The sender and receiver
        participating in the session can be identified through various means such as e-mail addresses, telephone
         numbers or IP addresses, provided all these are in SIP format. That is why SIP is considered very flexible.
            Some of the services provided by SIP include defining telephone numbers as URLs in Web pages,
        initiating a telephone call by clicking a link in a Web page, establishing a session from a caller to a callee
        and locating the callee. Some other features of SIP include call waiting, encryption and authentication.
            SIP defines six messages that are used while establishing a session, communicating and terminating
        a session. Each of these messages is described as follows:
           INVITE: This message is used by the caller to start a session.
           ACK: This message is sent by the caller to callee to indicate that the caller has received callee’s reply.
           BYE: This message is used by either caller or callee to request for terminating a session.
           OPTIONS: This message can be sent to any system to query about its capabilities.
           CANCEL: This message is used to cancel the session initialization process that has already started.
           REGISTER: This message is sent by a caller to the registrar server to track the callee in case the
              callee is not available at its terminal, so that the caller can establish a connection with the callee.
              SIP designates some of the servers on the network as registrar server that knows the IP addresses
              of the terminals registered with it. At any moment, each user (terminal) must be registered with at
              least one registrar server on the network.
         Before the transmission of audio/video between the caller and the callee can start, a session needs to be established.
        A simple SIP session has three phases (see Figure 14.2), which are as follows:
            (Figure 14.2   A Simple SIP Session: the caller sends INVITE, the callee replies with OK and the caller sends ACK to establish the session; audio is then exchanged during communication; BYE terminates the session.)
            1. Session Establishment: The session between the caller and callee is established using the three-
               way handshake. Initially, the caller invites the callee to begin a session by sending it an INVITE
               message. If the callee is ready for communication, it responds to the caller with a reply message
               (OK). On receiving the reply message, the caller sends the ACK message to callee to confirm the
               session initialization.
            2. Communication: Once the session has been established, the communication phase commences
               during which the caller and callee exchange audio data using temporary port numbers.
            3. Session Termination: After the data has been exchanged, either the caller or the callee can request
               for the termination of session by sending a BYE message. Once the other side acknowledges the
               BYE message, the session is terminated.
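            Since SIP is text based, an INVITE message can be sketched as a plain string, as shown below; the addresses and Call-ID are hypothetical, and a real INVITE would carry additional headers (such as Via and Contact) and a message body describing the media.

            # Hypothetical caller and callee addresses in SIP format.
            caller = "sip:alice@flag.edu"
            callee = "sip:bob@xyz.com"
            call_id = "a84b4c76e66710"

            invite = (
                "INVITE " + callee + " SIP/2.0\r\n"
                "From: <" + caller + ">\r\n"
                "To: <" + callee + ">\r\n"
                "Call-ID: " + call_id + "\r\n"
                "CSeq: 1 INVITE\r\n"
                "\r\n"
            )
            print(invite)

            # The callee answers with "SIP/2.0 200 OK", the caller confirms with an
            # ACK message, and either side later sends BYE to terminate the session.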
              9. Explain the H.323 standard.
           Ans: H.323 is an ITU standard that allows communication between the telephones connected to
        a public telephone network and the computers (called terminals) connected to the Internet. Like SIP,
        H.323 also allows two-party and multiparty calls using a telephone and a computer. The general archi-
        tecture of this standard is shown in Figure 14.3.
                                                                         Gateway
                         Gatekeeper
                                                                                        Telephone
                                                     Internet
                                                                                         network
                                        Terminal
                                              Figure 14.3   H.323 Architecture
           As shown in Figure 14.3, the two networks, Internet and telephone networks, are interconnected via
        a gateway between them. As we know that a gateway is a five-layer device that translates messages
         from a given protocol stack to a different protocol stack. In the H.323 architecture, the role of the gateway is to
        convert between the H.323 protocols on the Internet side and PSTN protocols on the telephone side.
        The gatekeeper on the local area network serves as the registrar server and knows the IP addresses of
        the terminals registered with it.
           H.323 comprises many protocols which are used to initiate and manage the audio/video communi-
         cation between the caller and the callee. The G.711 or G.723.1 protocols are used for compression and
        encoding/decoding speech. The H.245 protocol is used between the caller and the callee to negotiate on
        a common compression algorithm that will be used by the terminals. The H.225 or RAS (Registration/
        Administration/Status) protocol is used for communicating and registering with the gatekeeper. The
        Q.931 protocol is used for performing functions of the standard telephone system such as providing dial
        tones and establishing and releasing connections.
            Following are the steps involved in communication between a terminal and a telephone using H.323.
            1. The terminal on a LAN that wishes to communicate with a remote telephone broadcasts a UDP
               packet to discover the gatekeeper. In response, the gatekeeper sends its IP address to the terminal.
            2. The terminal sends a RAS message in a UDP packet to the gatekeeper to register itself with the
               gatekeeper.
            3. The terminal communicates with the gatekeeper using the H.225 protocol to negotiate on band-
               width allocation.
            4. After the bandwidth has been allocated, the process of call setup begins. The terminal sends a
               SETUP message to the gatekeeper, which describes the telephone number of the callee or the IP
               address of a terminal if a computer is to be called. In response to receipt of SETUP message, the
               gatekeeper sends CALL PROCEEDING message to the terminal. The SETUP message is then
               forwarded to the gateway, which then makes a call to the desired telephone. As the telephone starts
                ringing, an ALERT message is sent to the calling terminal by the end office to which the desired
               telephone is connected. After someone picks up the telephone, a CONNECT message is sent by
               the end office to the calling terminal to indicate the establishment of connection. Notice that dur-
               ing call setup, all entities including terminal, gatekeeper, gateway and telephone communicate us-
               ing the Q.931 protocol. After the connection establishment, the gatekeeper is no longer involved.
            5. The terminal, gateway and telephone communicate using the H.245 protocol to negotiate on the
               compression method.
             6. The audio/video in the form of RTP packets is exchanged between the terminal and the telephone via the
               gateway. For controlling the transmission, RTCP is used.
            7. Once either of the communicating parties hangs up, the connection is to be terminated. For this,
               the terminal, gatekeeper, gateway and telephone communicate using Q.931 protocol.
            8. The terminal communicates with the gatekeeper using the H.225 protocol to release the allocated
               bandwidth.
              10. Define compression. What is the difference between lossy and lossless compression?
           Ans: The components of multimedia such as audio and video cannot be transmitted over the Internet
        until they are compressed. Compression of a file refers to the process of cutting down the size of the file by
        using special compression algorithms. There are two types of compression techniques: lossy and lossless.
           In lossy compression technique, some data is deliberately discarded in order to achieve massive reduc-
         tions in the size of the compressed file. With this technique, we cannot recover all of the original
        data from the compressed version. JPEG image files and MPEG video files are the examples of lossy
        compressed files. On the other hand, in lossless compression technique, the size of the file is reduced
        without permanently discarding any information of the original data. If an image that has undergone loss-
        less compression is decompressed, the original data can be reconstructed exactly, bit-for-bit, that is, it will
         be identical to the digital image before compression. PNG image file formats use lossless compression.
             11. Write a short note on audio compression.
           Ans: Before the audio data can be transmitted over the Internet, it needs to be compressed. Audio
        compression can be applied on speech or music. There are two categories of techniques that can be used
        to compress audio, namely, predictive encoding and perceptual encoding.
           Predictive Encoding: In digital audio or video, successive samples are usually similar to each
              other. Considering this fact, only the initial sample and the difference values between successive
              samples are stored in the compressed form (see the sketch after this list). As the size of the difference value between two
              samples is much smaller than the size of a sample itself, this encoding technique saves much space.
               While decompressing, the previous sample and the difference value are used to reproduce the next
               sample. Predictive encoding is generally used for speech.
             Perceptual Encoding: The human auditory system suffers from certain flaws. Exploiting this fact,
               the perceptual encoding technique encodes the audio signals in such a manner that they sound
                similar to human listeners, even though they are different. This technique is generally used for compress-
               ing music, as it can create CD-quality audio. MP3, a part of MPEG standard, is the most common
               compression technique based on perceptual encoding.
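            A minimal sketch of predictive (delta) encoding is given below; the sample values are made up for illustration.

            def predictive_encode(samples):
                # Store the first sample and the differences between successive samples.
                diffs = [samples[0]]
                for prev, cur in zip(samples, samples[1:]):
                    diffs.append(cur - prev)          # differences are usually small numbers
                return diffs

            def predictive_decode(diffs):
                # Rebuild each sample from the previous sample plus the stored difference.
                samples = [diffs[0]]
                for d in diffs[1:]:
                    samples.append(samples[-1] + d)
                return samples

            audio = [100, 102, 103, 103, 101, 99]     # made-up speech samples
            encoded = predictive_encode(audio)        # [100, 2, 1, 0, -2, -2]
            assert predictive_decode(encoded) == audio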
              12. How does frequency masking differ from temporal masking?
             Ans: Effective audio compression takes into account the physiology of human hearing. The com-
         pression algorithm used is based on the phenomenon named simultaneous auditory masking—an
         effect that is produced due to the way the nervous system of human beings perceives sound. Masking
          can occur in frequency or time, accordingly named as frequency masking and temporal masking.
             In frequency masking, a loud sound in a frequency range can partially or completely mask (hide) a
        low or softer sound in another frequency band. For example, in a room with loud noise, we are unable
        to hear properly the sound of a person who is talking to us.
             In temporal masking, a loud sound can make our ears insensitive to any other sound for a few
         milliseconds. For example, a loud noise such as a gunshot or an explosion makes our ears
          numb for a very short time before we are actually able to start hearing again properly.
             13. Explain the JPEG process.
           Ans: JPEG, which stands for joint photographic experts group, is the standard compression tech-
        nique used to compress still images. It can compress images in lossy and lossless modes and produces
        high-quality compressed images. Following are the steps involved in the JPEG image compression
        (lossy) process.
           1. Colour Sub-Sampling: This step is performed only if the image to be compressed is coloured;
               for gray-scale images, this step is not required. The RGB colour space of the image is changed to
               YUV colour space and its chrominance component is down-sampled.
           2. Blocking: The image is divided into a series of 8 × 8-pixel blocks. Blocking also reduces the
               number of calculations needed for an image.
           3. Discrete Cosine Transformation (DCT): Each block of 8 × 8 pixels goes through the DCT
               transformation to identify the spatial redundancy in an image. The result of DCT transformation
               for each 8 × 8 block of pixels is 8 × 8 block of DCT coefficients (that is, 64 frequency compo-
               nents in each block).
           4. Quantization: In this phase, the DCT coefficients in each block are scalar quantized with the
               help of a quantization table (Q-table) in order to wipe out the less important DCT coefficients.
               Each value in the block is divided by a weight taken from the corresponding position in Q-table
               by ignoring the fractional part. The changes made in this phase cannot be undone. That is why
               this JPEG method is considered lossy.
           5. Ordering: The output of quantization is then ordered in a zigzag manner to distinguish the low-
               frequency components (usually, non-zero) from the high-frequency components (usually, zero).
                Ordering results in a bit stream in which the non-zero (low-frequency) components appear first and the zero-valued components are placed close to the end.
            6. Run-Length Encoding: The run-length encoding is applied to the zeros of the zigzag sequence to elimi-
                nate the redundancy. This encoding replaces each run of a repeated symbol in a given sequence with the
                symbol itself and the number of times it is repeated (a sketch of this encoding appears after the list). For example, the text “cccccccbbbbuffffff”
                is compressed as “c7b4u1f6”. Thus, redundant zeros are removed after this phase.
            7. Variable-length Encoding: The variable-length encoding is applied on the output of the
               previous phase to get the compressed JPEG bit stream. In variable-length encoding, a vari-
                able number of bits are used to represent each character rather than a fixed number of bits for
                each character. Fewer bits are used to represent the more frequently used character; the most
                frequently used character can be represented by one bit only. This helps in reducing the length
                 of compressed data.
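            The run-length encoding of step 6 can be sketched as follows; the function reproduces the “c7b4u1f6” example given above.

            def run_length_encode(text):
                # Replace each run of a repeated symbol with the symbol and its count.
                out = []
                i = 0
                while i < len(text):
                    j = i
                    while j < len(text) and text[j] == text[i]:
                        j += 1
                    out.append(text[i] + str(j - i))
                    i = j
                return "".join(out)

            print(run_length_encode("cccccccbbbbuffffff"))   # c7b4u1f6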
               14. What is MPEG? Describe spatial and temporal compressions.
            Ans: MPEG, which stands for moving picture experts group, is a method devised for the
        compression of a wide range of video and motion pictures. It is available in two versions: MPEG1 and
         MPEG2. The former version has been designed for CD-ROM with a data arête of 1.5 Mbps while the
         latter version has been designed for DVD with a data rate of 3–6 Mbps.
             Each video is composed of a set of frames where each frame is actually a still image. The frames in
         a video flow so rapidly (for example, 50 frames per second in TV) that the human eye cannot notice the
         discrete images. This property of human eye forms the basis of motion pictures. Video compression us-
         ing MPEG involves spatial compression of each frame and temporal compression of a set of frames.
           Spatial Compression: Each frame in the video is spatially compressed with JPEG. Since each
               frame is an image, it can be separately compressed. Spatial compression is used for the purposes
               such as video editing where frames need to be randomly accessed.
           Temporal Compression: In this compression, the redundancy is removed among the consecutive
               frames that are almost similar. For example, in a movie, there are certain scenes where the back-
                ground is the same and stationary and only some portion, such as a hand movement, is changing. In such
                cases, most consecutive frames will be almost similar except for the portion of the frame covering the
                hand movements. That is, the consecutive frames will contain redundant information. The
               temporal redundancy can be eliminated using the differential encoding approach, which encodes
               the differences between adjacent frames and sends them. An alternative approach is motion com-
               pensation that compares each frame with its predecessor and records the changes in the coordinate
               values due to motion as well as the differences in pixels after motion.
               15. Differentiate among the different types of encoded frames used in MPEG video compression.
            Ans: In MPEG video compression, the encoded frames fall under three categories, namely, intracoded
        (I ) frames, predicted (P) frames and bidirectional (B) frames. These frames are described as follows:
          I-frame: This frame is not associated with any other frame (previous or next). It is encoded
               independently of other frames in the video and contains all the necessary information that is needed
                to recreate the entire frame. Thus, I-frames cannot be constructed from any other frames. I-frames
                must appear in a movie at regular intervals to indicate the sudden changes in the frame.
           P-frame: This frame relates to the frame preceding it, whether that is an I-frame or a P-frame.
                It contains only the small differences from its preceding I-frame or P-frame; however, it is not
                useful for recording major changes, for example, in the case of fast-moving objects. Thus,
                P-frames carry only a small amount of information as compared to other frames and require
                even fewer bits after compression. Unlike I-frames, a P-frame can be constructed only from
                its preceding frame.
           B-frame: This frame, as the name implies, relates to its preceding as well as succeeding I-frame
               or P-frame. However, a B-frame cannot relate to any other B-frame. B-frames provide improved
               motion compensation and the best compression.
        Answers
        1. (d)     2. (a) 3. (c)   4. (b) 5. (a)    6. (d)   7. (b)    8. (c)
              2. What do you understand by network security attack? Describe active and passive attacks.
          Ans: A network security attack refers to an act of breaching the security provisions of a network.
         Such an act is a threat to the basic goals of secure communication such as confidentiality, integrity and
        authenticity. Network security attacks can be classified under two categories, namely, passive attack and
        active attack.
           Passive Attack: In a passive attack, an opponent indulges in eavesdropping, that is, listening to
              and monitoring the message contents over the communication channel. The term passive indicates
              that the main goal of the opponent is just to obtain the information and not to alter the
              message or harm the system resources. A passive attack is hard to detect, as the message is not
              tampered with or altered; therefore, the sender and receiver remain unaware that the message contents have
              been read by some other party. However, measures such as encryption are available to prevent its
              success. Two types of passive attacks are as follows:
                   Release of Message Contents: This type of passive attack involves learning the sensitive
                    information that is sent via e-mail or tapping a conversation being carried over a telephone line.
                 Traffic Analysis: In this type of attack, an opponent observes the frequency and the length of
                    messages being exchanged between the communicating nodes. This type of passive attack is
                    more subtle, as the opponent can still determine the location and identity of the communicating nodes without reading the message contents.
              Active Attack: In an active attack, an opponent either alters the original message or creates a fake
                message. This attack tries to affect the operation of system resources. An active attack is easier to
                detect but harder to prevent. Active attacks can be classified into four different categories
               which are as follows:
                Masquerade: In computer terms, masquerading is said to happen when an entity impersonates
                   another entity. In such an attack, an unauthorized entity tries to gain more privileges than it is
                   authorized for. Masquerading is generally done by using stolen IDs and passwords or through
                   bypassing authentication mechanisms.
                Replay: This active attack involves capturing a copy of message sent by the original sender
                   and retransmitting it later to bring out an unauthorized result.
                 Modification of Messages: This attack involves making certain modifications in the captured
                   message or delaying or reordering the messages to cause an unauthorized effect.
                Denial of Service (DoS): This attack prevents the normal functioning or proper management
                     of communication facilities. For example, network server can be overloaded by unwanted
                     packets, thus, resulting in performance degradation. DoS attack can interrupt and slow down
                     the services of a network or may completely jam a network.
               3. What is meant by cryptography?
             Ans: The term cryptography is derived from the Greek word kryptos, meaning “hidden”, and literally means “secret writing”. In
          simple terms, cryptography is the process of altering messages to hide their meaning from adversaries
        who might intercept them. In data and telecommunications, cryptography is an essential technique
        required for communicating over any untrusted medium, which includes any network, such as Internet.
         Cryptography provides an important tool for protecting information and is used in many aspects of
         computer security. By using cryptography techniques, the sender can first encrypt a message and then
         transmit it through the network. The receiver on the other hand, must be able to decrypt the message and
         recover the original contents of message.
         [Figures: The general model of encryption and decryption — the sender encrypts the plaintext into ciphertext, which the receiver decrypts back to plaintext; in symmetric-key cryptography both operations use the same shared key.]
        Substitution Cipher
         This cipher replaces a symbol (a single letter or a group of letters) of the plaintext with another symbol.
         An example of a substitution cipher is the Caesar cipher, in which each letter of the plaintext is replaced
         by the letter obtained by shifting three positions forward in the alphabet (wrapping around at the end). That is, A is replaced by D, B is replaced by E, Z
         is replaced by C and so on. For example, the ciphertext formed from the plaintext TACKLE will be WDFNOH.
         A slight generalization of the Caesar cipher is the shift cipher, in which the ciphertext letter is obtained
         by shifting n positions instead of 3; thus, n becomes the key. Substitution ciphers are further categorized
         into two types, which are as follows.
           Monoalphabetic Cipher: In monoalphabetic cipher, the characters in the plaintext have a one-
              to-one relationship with the characters in the ciphertext. It means that a character in the plaintext
              will always be replaced by the same character in the ciphertext. For example, if it is decided that a
              ciphertext character will be obtained by shifting two positions from the character in the plaintext
               and the given plaintext is HAPPY, then its ciphertext will be JCRRA.
           Polyalphabetic Cipher: In polyalphabetic cipher, the characters in the plaintext may have a
              one-to-many relationship with the characters in the ciphertext. It means that the same character
              appearing in plaintext can be replaced by a different character in the ciphertext. For example,
              the plaintext HELLO can be encrypted to ARHIF using a polyalphabetic cipher. Due to one-to-
               many relationship between the characters of plaintext and ciphertext, the key used must indi-
               cate which of the possible characters can be used for replacing a character in the plaintext. For
               this, the plaintext is divided into groups of characters and a set of keys is used for encrypting
               the groups.
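    A minimal Python sketch of the shift cipher described above, where n is the key (n = 3 gives the Caesar cipher); it reproduces the TACKLE and HAPPY examples:

        def shift_cipher(plaintext, n=3):
            # Replace each letter by the letter n positions further along the alphabet,
            # wrapping around from Z back to A; non-letters are left unchanged.
            result = []
            for ch in plaintext.upper():
                if ch.isalpha():
                    result.append(chr((ord(ch) - ord('A') + n) % 26 + ord('A')))
                else:
                    result.append(ch)
            return "".join(result)

        print(shift_cipher("TACKLE"))      # WDFNOH
        print(shift_cipher("HAPPY", 2))    # JCRRA (the monoalphabetic example above)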
        Transposition Cipher
        This cipher changes the location of characters in plaintext to form the ciphertext. In this cipher, there
        is no substitution of characters and thus, the order of characters in the plaintext is no longer preserved
        in the ciphertext. Transposition cipher uses a key that maps the position of characters in the plaintext
        to that of characters in the ciphertext. One of the commonly used transposition ciphers is columnar
        transposition in which a word or phrase without containing any repeated letters is chosen as a key.
         Each letter of the key is numbered to form the columns, and the numbering is done in such a way that
         column 1 lies under the key letter that comes earliest in the alphabet, column 2 under the next earliest, and so on. Then, the plaintext is
        arranged horizontally under the columns forming the rows. The rows are padded with extra characters
         to fill the matrix, if required. The ciphertext is then read out column-wise starting from the first column
         to the last column. For example, if the key is BACKIN and the plaintext is given as hellohowareyou,
         then ciphertext will be formed as follows:
                                                B   A   C   K   I   N
                                                2   1   3   5   4   6
                                                h   e   l   l   o   h
                                                o   w   a   r   e   y
                                                o   u   a   b   c   d
         Thus, the ciphertext will be ewuhoolaaoeclrbhyd.
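    A brief Python sketch of the columnar transposition described above; the padding characters are assumed to be a, b, c, d as in the worked example, and the key must not contain repeated letters:

        def columnar_encrypt(plaintext, key, padding="abcd"):
            # Write the plaintext row-wise under the key letters, then read it out
            # column-wise, taking the columns in alphabetical order of the key letters.
            cols = len(key)
            pad_needed = (-len(plaintext)) % cols
            plaintext += padding[:pad_needed]
            rows = [plaintext[i:i + cols] for i in range(0, len(plaintext), cols)]
            order = sorted(range(cols), key=lambda i: key[i])
            return "".join(row[c] for c in order for row in rows)

        print(columnar_encrypt("hellohowareyou", "BACKIN"))   # ewuhoolaaoeclrbhyd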
              8. What is the difference between stream cipher and block cipher?
          Ans: The stream cipher and block cipher are the categories of symmetric cipher—the ciphers that
        use the same key for both encryption and decryption.
             The stream cipher operates on one symbol of plaintext at a time and, by applying the key, produces
         one symbol of ciphertext at a time. Stream ciphers implement a feedback mechanism so that the key
         is constantly changing. Thus, the same character in the plaintext may be encrypted to different characters in
         the ciphertext. However, a given character is encrypted and decrypted using the same key even though
         multiple keys are being used. For example, suppose the plaintext is user and three different keys (K1,
         K2 and K3) are used to produce the ciphertext, such that the characters u and r are encrypted using key K1, the
         character s is encrypted using key K2 and the character e is encrypted using K3. Then, during decryption
         also, the same set of keys (K1, K2 and K3) is used, such that the characters u and r are decrypted using
         key K1, the character s is decrypted using key K2 and the character e is decrypted using the key K3.
            On the other hand, in block ciphers, an n-bit block of plaintext is encrypted together to produce an
        n-bit block of ciphertext. Similarly, during decryption, n-bit block of ciphertext is converted back to n-bit
        block of plaintext, one block at a time. Each block of bits is encrypted or decrypted using the same key.
        Thus, the same block of plaintext will always be encrypted to same block of ciphertext.
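    As a toy illustration of the stream idea (not a real cipher), assume a hypothetical pre-shared keystream of key bytes; each plaintext symbol is combined with one key symbol at a time, and decryption repeats exactly the same operation:

        def stream_encrypt(data, keystream):
            # Combine one plaintext symbol with one key symbol at a time (XOR here).
            return bytes(p ^ k for p, k in zip(data, keystream))

        message = b"user"
        keystream = bytes([0x1F, 0x2A, 0x3C, 0x4D])      # hypothetical per-symbol keys
        ciphertext = stream_encrypt(message, keystream)
        recovered = stream_encrypt(ciphertext, keystream)  # the same keys decrypt
        assert recovered == message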
                9. Describe S-box and P-box.
           Ans: S-box (substitution box) and P-box (permutation box) are used to perform substitution and
        transposition function respectively. These are described as follows.
           S-box: This is a substitution box having same characteristics as that of substitution cipher except
              that the substitution of several bits is performed in parallel. It takes n bits of plaintext at a time
              as input and produces m bits of ciphertext as output where the value of n and m may be same or
             different. An S-box can be keyed or keyless. In a keyed S-box, the mapping of n inputs to m outputs
             is decided with the help of a key, while in keyless S-box, the mapping from inputs to outputs is
             predetermined.
           P-box: This is a permutation box having same characteristics as that of traditional transposition
              cipher except that it performs transposition at bit-level and transposition of several bits is per-
               formed at the same time. The input bits are permutated to produce the output bits. For example, the
               first input bit can be the second output bit, second input bit can be the third output bit and so on.
               P-box is normally keyless and can be classified into the following three types based on the length
               of input and output.
              Straight P-box: This P-box takes n bits as input, permutes them and produces n bits as output.
              As the number of inputs and outputs is the same, there are total n! ways to map n inputs to n outputs.
              Compression P-box: This P-box takes n bits as input and permutes them in such a way that
              an output of m bits is produced where m  <  n. This implies that two or more inputs are mapped to
              the same output.
              Expansion P-box: This P-box takes n bits as input and permutes them in such a way that an output
               of m bits is produced where m  >  n. This implies that a single input is mapped to more than one output.
             10. Explain DES in detail.
           Ans: DES is a symmetric-key cipher that was developed by IBM. This encryption standard was
        adopted by the U.S. government for non-classified information and by various industries for the use in
        security products. It is also called a block cipher, as it divides plaintext into blocks and same key is used
        for encryption and decryption of blocks. DES involves multiple rounds to produce the ciphertext and
        the key used in each round is the subset of the general key called round key produced by the round key
        generator. That is, if there are P rounds in cipher, then P number of keys (K1, K2… KP) will be generated
        where K1 will be used in first round, K2 in second round and so on.
            At the sender’s end, DES takes a 64-bit block of plaintext, encrypts it using the 56-bit key
         and produces 64-bit ciphertext. Originally, the key is 64 bits long including eight parity bits; thus,
         only 56 bits of the key are usable. The whole process of producing ciphertext from plaintext comprises
        19 stages (see Figure 15.4). The first stage is the keyless transposition on the 64-bit plaintext. Next, 16
        stages are the rounds that are functionally similar and in each round, a different key Ki of 48 bits derived
        from the original key of 56 bits is used. The second last stage performs a swap function in which leftmost
        32 bits are exchanged with the rightmost 32 bits. The last stage is simply the opposite of the first stage,
        that is, it performs inverse transposition on 64 bits. At the receiver’s end, the decryption is performed
        using the same key as in encryption; however, now, the steps are performed in the reverse order.
         [Figure 15.4: The 19 stages of DES encryption — a keyless transposition on the 64-bit plaintext, 16 rounds each using a 48-bit round key (K1 to K16) derived from the 56-bit key by the round key generator, a swap of the two 32-bit halves and a final inverse transposition.]
              In each round, the 64-bit data is divided into a 32-bit left input (Li) and a 32-bit right input (Ri). The left
         output (Li+1) is just the right input (Ri). The right output (Ri+1) is obtained by first applying the DES
         function (f) on the right input (Ri) and the 48-bit key (Ki) being used in the ith round, denoted as
         f(Ri, Ki), and then performing the bitwise XOR of the result of the DES function and the left input (Li).
         The structure of a decryption round in DES is simply the opposite of the encryption round.
              The essence of DES is the DES function. The function f(Ri, Ki) comprises four steps (see Figure 15.6),
         which need to be carried out sequentially. These steps are as follows:
              1. The right input (Ri) of 32 bits is fed into the expansion P-box, which produces an output (say, E) of 48 bits.
              2. A bitwise XOR is performed on the 48-bit E and the 48-bit key Ki generated for that round, resulting in 48 bits.
              3. The 48-bit output of the XOR operation is broken down into eight groups, each consisting of six bits. Each
                  group of six bits is then fed to one of eight S-boxes. Each S-box maps six inputs to four outputs and thus,
                  a total of 32 bits is obtained from the eight S-boxes.
              4. The 32 bits obtained from the S-boxes are input to a straight P-box, which permutes them and produces 32 bits as output.
         [Figure 15.6: The DES function — a 32-bit to 48-bit expansion P-box, XOR with the 48-bit round key Ki, eight S-boxes reducing 48 bits to 32 bits, and a 32-bit straight P-box.]
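    The round structure described above is a Feistel construction, and its inverse follows directly. A brief Python sketch of the idea (not actual DES — the round function f and the key values below are arbitrary placeholders):

        def feistel_round(left, right, round_key, f):
            # One encryption round: the new left half is the old right half, and the
            # new right half is the old left half XORed with f(right half, round key).
            return right, left ^ f(right, round_key)

        def feistel_round_inverse(left, right, round_key, f):
            # The matching decryption round simply undoes the swap and the XOR.
            return right ^ f(left, round_key), left

        # A toy round function on 32-bit halves (purely illustrative, not the DES f).
        f = lambda r, k: (r * 0x9E3779B1 ^ k) & 0xFFFFFFFF
        L, R = 0x01234567, 0x89ABCDEF
        L1, R1 = feistel_round(L, R, 0xDEADBEEF, f)
        assert feistel_round_inverse(L1, R1, 0xDEADBEEF, f) == (L, R)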
              11. Write a short note on triple DES.
            Ans: The length of the key used in DES was too short. Therefore,
         triple DES (3DES) was developed to increase the key length, thereby making the DES more secure.
         The encryption and decryption in 3DES are performed in three stages with the help of two keys, say K1
         and K2 of 56 bits each. During encryption, the plaintext is encrypted using DES with key K1 in the first
        stage, then the output of first stage is decrypted using DES with key K2 in the second stage and finall ,
        the output of second stage is encrypted using DES with key K1 in the third stage thereby producing the
         ciphertext. On the other hand, during decryption, the ciphertext is decrypted using DES with key K1
         in the first stage, then the output of first stage is encrypted using DES with key K2 in the second stage
        and finall , the output of second stage is decrypted using DES with key K1 in the third stage thereby
        producing the plaintext. The use of two keys and three stages in 3DES increased the key size to 112 bits
         and provides more secured communication.
             Another version of 3DES uses three keys of 56 bits each and a different key is used for encryption/
         decryption in each stage. The use of three different keys further increases the key length to 168 bits;
         however, it results in an increased overhead due to managing and transporting one more key.
             12. Explain the RSA algorithm.
            Ans: In 1978, a group at M.I.T. discovered a strong method for public key encryption. It is known as
        RSA, the name derived from the initials of the three discoverers Ron Rivest, Adi Shamir and Len Adleman. It
        is the most widely accepted public key scheme, in fact most of the practically implemented security is based
         on RSA. The algorithm requires keys of at least 1024 bits for good security. This algorithm is based on a
         principle from number theory, which states that determining the prime factors of a large number is extremely
         difficult. The algorithm follows the steps below to determine the encryption and decryption keys.
             1. Take two large distinct prime numbers, say m and n (about 1024 bits each).
             2. Calculate p = m*n and q = (m – 1)*(n – 1).
             3. Find a number which is relatively prime to q, say D. That number is the decryption key.
             4. Find the encryption key E such that E*D = 1 mod q.
            Using these calculated keys, a block B of plaintext is encrypted as Te = B^E mod p. To recover the original
         data, compute B = (Te)^D mod p. Note that E and p are needed to perform encryption whereas D and p are
         needed to perform decryption. Thus, the public key consists of (E, p) and the private key consists of (D, p).
         An important property of the RSA algorithm is that the roles of E and D can be interchanged. As number
         theory suggests that it is very hard to find the prime factors of p, it is extremely difficult for an intruder to
         determine the decryption key D using just E and p, because doing so requires factoring p.
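    The key-generation steps can be made concrete with a short Python sketch (toy prime sizes shown only for illustration; real RSA needs primes of about 1024 bits, and the values of m, n and D below are arbitrary):

        from math import gcd

        def rsa_toy_keys(m, n, D):
            # p = m*n, q = (m-1)*(n-1); D must be relatively prime to q,
            # and E is chosen so that E*D = 1 mod q.
            p = m * n
            q = (m - 1) * (n - 1)
            assert gcd(D, q) == 1, "D must be relatively prime to q"
            E = pow(D, -1, q)          # modular inverse (Python 3.8+)
            return (E, p), (D, p)      # public key (E, p), private key (D, p)

        public, private = rsa_toy_keys(17, 11, D=7)
        (E, p), (D, _) = public, private
        B = 88                          # a plaintext block, B < p
        Te = pow(B, E, p)               # encryption: Te = B^E mod p
        assert pow(Te, D, p) == B       # decryption: (Te)^D mod p recovers B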
             13. What is digital signature? How it works?
            Ans: The historical legal concept of “signature” is defined as any mark made with the intention of
         authenticating the marked document. A digital signature is not a digitized image of a paper signature; rather,
         it is a cryptographic value attached to an electronic document to verify its authenticity. In other words, digital signatures play the role
         of physical handwritten signatures in verifying electronic documents. Digital signatures use public key
        cryptography technique, which employs an algorithm using two different but mathematically related
        keys: private and public keys. Both public and private keys have an important property that permits the
        reversal of their roles; the encryption key (E) can be used for decryption and the decryption key (D) can
        be used for encryption, that is, E(D(P)) = D(E(P)) where P denotes the plaintext. This property is used
        for creating messages with digital signature.
           The private key is known only to the signer who uses it for creating a digital signature or transforming
        data into a seemingly unintelligible form and the signed document can be made public. The public key is
         used for verifying the digital signature or returning the message to its original form. Any user can easily
         verify the authenticity of the document by using the public key, that is, it can easily be verified that
         the data originated from the person who claims to have sent it. However, no one can sign the document without
         having the private key.
           To have a clear understanding of how digital signature is used, refer Figure 15.7. Suppose A wants
        to send his or her signed message (message with digital signature) to B through network. For this,
        A encrypts the message (M) using his or her private key (EA) and this process results in an encrypted
        message [EA(M)] bearing A’s signature on it. The signed message is then sent through the network to
        B. Now, B attempts to decrypt the received message using A’s public key (DA) in order to verify that the
        received message has really come from A. If the message gets decrypted {that is, DA[EA(M)] = M}, B can
         believe that the message has come from A. However, if the message or the digital signature has been
         modified during the transmission, it cannot be decrypted using A’s public key. From this, B can conclude
         that either the message has been tampered with in transit or the message was not generated by A.
            Digital signatures also ensure non-repudiation. For example, on receiving the encrypted message, B
         can keep a copy of that message, so that A cannot later deny having sent it. Moreover, as B is
         unaware of A’s private key (EA), he or she cannot alter the contents of the encrypted message. However,
         the only problem with this mechanism is that the message can be tapped by anyone (other than the in-
         tended user B) who knows A’s public key (DA), thereby breaching confidentiality.
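    Using the RSA notation from the earlier question, where (D, p) is the private key, a toy numeric sketch of signing and verifying (hypothetical small values; real schemes sign a hash of the message rather than the message itself) looks like this:

        p, E, D = 33, 7, 3              # toy key pair: (E, p) public, (D, p) private
        M = 6                            # a numeric message block

        signature = pow(M, D, p)         # A signs by applying the private exponent
        assert pow(signature, E, p) == M # anyone can verify with A's public key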
            To ensure message confidentiality, encryption and decryption are performed twice at A’s end and B’s
         end, respectively. At A’s end, the message is first encrypted using A’s private key (EA) and then the result
         is encrypted a second time using B’s public key (DB). At B’s end, the received message is first decrypted
         using B’s private key (EB) and then decrypted using A’s public key (DA) to recover the original message M.
         [Figure 15.7: A sends a signed message — message M is encrypted with A’s private key EA and decrypted by B with A’s public key DA.]
         [Figure: A signed and confidential message — M is encrypted with EA and then with DB at A’s end; B decrypts with EB and then with DA to recover M.]
              15. What do you understand by the term firewall? Explain its use with the help of an example.
            Ans: The ongoing incidents pertaining to network security caused great concern among people
         using computers as their medium to exchange data. A need was felt
         for a method of controlling the traffic that is allowed to reach their computers. Organizations
         required an application that could protect and isolate their internal systems from the Internet. This applica-
         tion is called a firewall. Simply put, a firewall prevents certain outside connections from entering the
         network. It traps inbound or outbound packets, analyzes them and then either permits them to pass or discards them.
            Generally, firewall system comprises software (embedded in a router), computer, host or a collection
        of hosts set up specifically to shield a site or subnet from protocols and services that can be a threat from
        hosts outside the subnet. It serves as the gatekeeper between an untrusted network (Internet) and the more
        trusted internal networks. If a remote user tries to access the internal networks without going through the
        firewall, its effectiveness is diluted. For example, if a travelling manager has an office computer that he
        or she can dial into while travelling, and his or her computer is on the protected internal network, then an
        attacker who can dial into that computer has circumvented the firewall. Similarly, if a user has a dial-up
        Internet account, and sometimes connects to the Internet from his or her office computer, he or she opens
         an unsecured connection to the Internet that circumvents the firewall.
            To understand the use of firewall, consider an example where an organization is having hundreds of
        computers on the network. In addition, the organization will have one or more connections to the In-
        ternet lines. Now, without a firewall in place, all the computers are directly accessible to anyone on the
         Internet. A person who knows what he or she is doing can probe those computers, try to make FTP
         (file transfer protocol) connections or telnet connections to them, and so on. If one employee makes a
        mistake and leaves a security hole, hackers can get to the machine and exploit that hole.
             With a firewall in place, the network landscape becomes much different. An organization will place
         a firewall at every connection to the Internet (for example, at every T1 line coming into the company).
         The firewall can implement security rules. For example, one of the security rules may be: out of the 300
         computers inside an organization, only one is permitted to receive public FTP traffic. A company can
         set up rules like this for FTP servers, web servers, telnet servers and so on. In addition, an organization
         can have control on how employees connect to websites, whether or not files can be sent as attachments
         outside the company over the network and so on. Firewall provides incredible control over how people
         use the network.
              16. What is the role of packet filtering in the firewall?
            Ans: A firewall intercepts the data between the Internet and the computer. All data traffic passes
          through it and it allows only authorized data to pass into the corporate network. Firewalls are typically
          implemented using packet filtering.
             Packet filtering is the most basic firewall protection technique used in an organization. It operates
         at the network layer to examine incoming and outgoing packets and applies a fixed set of rules to the
         packets to determine whether or not they will be allowed to pass. The packet filter firewall is typically
         very fast because it does not examine any of the data in the packet. It simply examines the IP packet
         header, the source and destination IP addresses and the port combinations and then it applies filtering
         rules. For example, it is easy to filter out all packets destined for Port 80, which might be the port for a
         web server. The administrator may decide that Port 80 is off limits except for specific IP subnets and a
         packet filter would suffice for this. Packet filtering is fast, flexible, transparent (no changes are required
          at the client) and cheap. This type of filter is commonly used in small to medium businesses that require
          control over how their users access the Internet.
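    A toy sketch of such header-only filtering in Python, assuming a hypothetical rule table that restricts Port 80 to one subnet as in the example above (real firewalls use much richer rule syntax and matching):

        RULES = [
            # (source-address prefix, destination port, action)
            ("10.1.0.",  80, "allow"),   # only this subnet may reach the web server
            ("",         80, "deny"),    # everyone else is blocked on port 80
            ("",         23, "deny"),    # telnet blocked for all sources
        ]

        def filter_packet(src_ip, dst_port):
            # Return the action of the first rule whose prefix and port match.
            for prefix, port, action in RULES:
                if src_ip.startswith(prefix) and dst_port == port:
                    return action
            return "allow"               # default policy (could equally be "deny")

        print(filter_packet("10.1.0.7", 80))      # allow
        print(filter_packet("203.0.113.5", 80))   # deny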
              17. Define identification and authentication. Explain how users can be authenticated.
            Ans: People often confuse identification with authentication, as both have similar aspects. Identi-
          fication is the means through which a user provides a claimed identity to the system. On the other hand,
          authentication refers to establishing the validity of that claim. Computer systems make use of the authen-
         tication data they receive for recognizing people. Authentication presents several challenges,
         such as collecting authentication data, transmitting the data securely and verifying that the same person who
         was earlier authenticated is still using the computer system.
             Various methods can be used to authenticate a user, such as a secret password, some physical
        characteristics of the user, a smart card or a key given to the user.
        Password Authentication
         It is the simplest and most commonly used authentication scheme. In this scheme, the user is asked to
         enter the user name and password to log in to the database. The DBMS then verifies the combination
         of user name and password to authenticate the user and allows him or her to access the database if he
         or she is a legitimate user; otherwise, access is denied. Generally, the password is asked for once when a user
         logs in to the database; however, this process can be repeated for each operation when the user is trying
         to access sensitive data.
            Though the password scheme is widely used by database systems, this scheme has some limitations. In
        this method, the security of database completely relies on the password. Thus, the password itself needs to
        be secured from unauthorized access. One simple way to secure the password is to store it in an encrypted
        form. Further, care should be taken to ensure that password would never be displayed on the screen in its
        decrypted (non-encrypted) form.
        Smart Card
        In this method, a database user is provided with a smart card that is used for identification. The smart
        card has a key stored on an embedded chip and the operating system of smart card ensures that the key
        can never be read. Instead, it allows data to be sent to the card for encryption or decryption using that
        private key. The smart card is programmed in such a way that it is extremely difficult to extract the
        values from smart card; thus, it is considered as a secure device.
              18. Write a short note on message authentication.
            Ans: Message authentication is a means to verify that the message received by the receiver is from
          the intended sender and not from an intruder. The sender needs to send some proof along with the message,
         so that the receiver can authenticate the message. To authenticate a message, the message authentication
          code (MAC) is used. MAC uses a hash function (MAC algorithm) that generates a MAC (a tag-like value)
         with the help of a secret key shared between the sender and the receiver. Figure 15.9 depicts the use
         of MAC to authenticate a message at the sender’s end and to verify the authenticity of message at the
         receiver’s end.
            At the sender’s end, the original message that is to be authenticated along with the secret key are given
         as input to the MAC algorithm that produces a MAC as output. The MAC is attached with the original
         message and both are sent to the receiver through the network. To verify the authenticity of message at
          the receiver’s end, the message is separated from the MAC, and the MAC algorithm is again applied to
          the message using the secret key to generate a new MAC. Then, the newly generated MAC is compared
        with the received MAC to determine whether they are same or not. If so, the receiver knows that the
        message has not been changed and is actually from the intended sender and thus, accepts the message.
        Otherwise, the message is discarded.
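    A short sketch of the scheme using Python's standard hmac module (the key and messages below are hypothetical, and HMAC-SHA256 stands in for the unspecified MAC algorithm of the text):

        import hashlib
        import hmac

        SECRET_KEY = b"shared-secret"            # key shared by sender and receiver

        def make_mac(message):
            # Sender: compute a MAC over the message with the shared secret key.
            return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

        def verify(message, received_mac):
            # Receiver: recompute the MAC and compare it with the received one.
            return hmac.compare_digest(make_mac(message), received_mac)

        msg = b"transfer 100 to account 42"
        tag = make_mac(msg)
        print(verify(msg, tag))                               # True  -> accept message
        print(verify(b"transfer 900 to account 42", tag))     # False -> discard message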
            19. Encrypt the plaintext 6 using RSA public key encryption algorithm. Use prime numbers
        11 and 3 to compute the public and private keys. Moreover, decrypt the ciphertext using the
        private key.
          Ans: Here, m = 11 and n = 3
           According to RSA algorithm (as explained in Q12)
                                     p = m * n = 11 * 3 = 33
                                     q = (m − 1) * (n − 1) = (11 − 1) * (3 − 1) = 10 * 2 = 20
            We choose D = 3 (a number relatively prime to 20, that is, gcd (20, 3) = 1).
            Now,
                                                       E * D = 1 mod q
                                                  ⇒ E * 3 = 1 mod 20
                                                      ⇒ E = 7
          As we know, the public key consists of (E, p) and the private key consists of (D, p). Therefore, the
        public key is (7, 33) and the private key is (3, 33).
          The plaintext 6 can be converted to ciphertext using the public key (7, 33) as follows.
                                                        Te = B^E mod p
                                                           ⇒ 6^7 mod 33
                                                           ⇒ 30
             On applying the private key to the ciphertext 30 to get original plaintext, we get
                                                        B = (Te)^D mod p
                                                           ⇒ (30)^3 mod 33
                                                           ⇒ 6
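    The same numbers can be checked with Python's built-in modular exponentiation:

        p, E, D = 33, 7, 3
        print(pow(6, E, p))     # 30 -> the ciphertext
        print(pow(30, D, p))    # 6  -> the original plaintext is recovered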
         Multiple Choice Questions
           1. Which of the following are necessary for secured communication?
              (a) Authentication    (b) Confidentiality    (c) Integrity    (d) All of these
           2. In ______ attack, an opponent either alters the original message or creates a fake message.
              (a) Passive    (b) Inactive    (c) Active    (d) Access
           3. __________ is a type of passive attack.
              (a) Replay    (b) Traffic analysis    (c) Masquerade    (d) Denial of service
           4. Which of the following is not a component of cryptography?
              (a) Ciphertext    (b) Ciphers    (c) Key    (d) None of these
           5. In public key cryptography, ________ key is used for encryption.
              (a) Public    (b) Private    (c) Both (a) and (b)    (d) Shared
           6. _________ is the means through which a user provides a claimed identity to the system.
              (a) Authentication    (b) Identification    (c) Encryption    (d) Decryption
           7. In __________ cipher, characters in the plaintext and ciphertext are related to each other by one-to-many relationship.
        Answers
        1. (d)     2. (c) 3. (b)   4. (d)   5. (a) 6. (b)   7. (c)     8. (c)   9. (d)   10. (b)
        A
        Adaptive tree-walk algorithm, 155                     Authentication protocols types, 140
        Address mask, 198                                         challenge handshake authentication protocol, 141
        Address resolution protocol, 29, 226, 228                 password authentication protocol, 140
        Advantage of token-passing protocol over CSMA/CD      Authentication, 140–141, 171, 177,
          protocol, 156                                         231, 233, 254, 272, 278–279, 288–291
        Advantage and disadvantage of E-mail, 255             Authentication-request packet, 140
        Advantage and disadvantage of fibre optic cables, 7   Autonegotiation, 165
        Advantage of computer networking, 8
            communication, 9                                  B
            preserve information, 8
            sharing hardware resources, 8                     Bandpass channel, 45
            sharing information, 8                            Bandwidth in hertz, 46
            sharing software resources, 8                     Bandwidth utilization, 83
        Advantages of FDDI over a basic Token Ring, 169       Basic types of guided media
        Advantages of reference model, 23                          coaxial cables, 71
        ALOHA protocol limitation, 148                             fibre-optic Cable, 7
        American standard code for information, 3                  stp cable, 71
            hexadecimal values, 3                                  twisted-pair cable, 70
            interchange (ASCII), 3                                 utp cable, 71
        Analog and digital signals, 40–41                     Bayone-Neill-Concelman, 74
        Analog data, 40, 44, 59                               Benefits of ATM, 190
        Analog hierarchy, 90, 108                             Berkeley sockets, 238
        Analog modulation, 59                                 Binary countdown, 153–154
        Analog transmission, 44                               Binary exponential back-off algorithm, 146, 149, 153
        Applications of a computer network, 9                 Bit-map, 153–154
            business application, 9                           bit-oriented and byte-oriented protocol, 133
            conferencing, 9                                   Block Coding, 51, 165
            directory and Information services, 9             Bluetooth, 161, 174–178, 192
            e-mail services, 9                                     architecture, 174–175
            financial services,                                    protocol stack, 175–178
            manufacturing, 9                                  BNC terminator, 74
            marketing and sales, 9                            Bootstrap protocol, 228
            mobile application, 9                             Border Gateway Protocol, 212
        Architecture of e-mail, 255                           Broadcast mechanism, 96
        ASCII-7 Coding Chart, 3–5
        Asynchronous frame, 168
        Asynchronous transfer mode, 184–185
                                                              C
            architecture, 185                                 Cable TV network generation, 102–104
        Authenticate-ack packet, 140                          Call Setup and Call Clearing, 180–181
        Carrier Sense Multiple Access, 145, 149–150         Controlled access method, 155–156
        Carrier Sense Multiple Access/                      Conversion, 57
          Collision Detection, 150                          Cryptanalysis, 280
        Carrier Signal, 56                                  Cryptography, 279–281, 286, 290
        Challenge packet, 141                               CSMA/CA protocol, 152
        Channel allocation, 144
            channel static allocation, 144
                                                            D
            dynamic channel allocation, 144
        Channelization protocol types, 156–157              Data Communication and Computer
        Characteristics of line coding, 48                    Network, 2 ,4, 6, 8, 10, 12, 14, 16, 18, 20
        Check bit, 115                                          characteristics of an efficient data, 1–
        CIDR, 199                                               differences between communication and
        Classful and classless addressing, 195–196                 transmission, 1
         Coaxial cable, 73                                   Data link connection identifier, 182
        Collision detection, 150                            Data link layer design, 110
        Collision-free protocol, 153–154                    Data link layer functions in OSI model, 27
        Communication channel, 84                               access control, 28
        Community antenna, 103                                  error control, 28
        Companding, 53                                          flow control, 2
        Comparison of Bus and Mesh Topology, 13–14              framing, 27
            ring topology, 14                                   physical addressing, 28
            advantages of ring topology, 14                 Data link layer service, 111
            disadvantages of ring topology, 14              Data representation, 2
            star topology, 14                               Data transfer mode, 134
            concentrator, 14                                Data transmission devices for cable network
            advantages of star topology, 15                     cable modem transmission system, 104
            tree topology, 15                                   cable modem, 104
        Comparison of Circuit Switching, Packet Switching   Data Transmission Modes, 7
          and Message Switching, 97                             full duplex, 7
        Components of data communication, 2                     half duplex, 7
            message, 2                                      Data transmission system design, 70
            protocol, 2                                     Datagram network, 179
            receiver, 2                                     Decibel (dB), 46, 61
             sender, 2                                       Definition of bit interval, bit rate and bit length, 4
             transmission medium, 2                          Definition of code rate, 14
         Composite signal, 43                                Definition of Codeword, 14
         Compression, 274, 276–277                           Definition of Data, 4
         Computer network, 7                                 Definition of framing, 11
             performance, 7                                  Definition of Signal, 40, 43, 5
            reliability, 8                                  Definition of frequency-domain, 4
            response time, 8                                Definition of Hamming Weight of a Codeword, 114
            security, 8                                     Definition of time-domain, 4
            throughput., 8                                  Delta Modulation, 51, 53
            transit time, 8                                 Demodulator, 54, 84, 98
        Congestion control, 216                             Demultiplexer, 83–87, 89, 108, 191
        Congestion, 215                                     Diagram error, 113
         Contention, 145, 149–150, 153–154, 171              Difference between guided and unguided transmission media, 70
        Control Field for I-frame, 135                      Differences Between BRI and PRI, 106
        Control Field for S-frame, 136                      Differences Between P2P Architecture and
        Control Field for U-Frame, 137                        Client/Server Architecture, 10–11, 261
        Differences between serial and parallel                Error correction, 113–114, 117, 187, 190, 238
          transmissions, 54–55                                 Error-detection method, 113, 115–117
        Different categories of a network, 16                  Ethernet, 164
            local Area Network, 16                             Expanding, 53
            metropolitan Area Network, 17                      Extended Binary Coded Decimal Interchange
            wide Area Network, 17                                 Code (EBCDIC), 5
        Different categories of transmission media, 69         Extranet, 20
            guided transmission media, 69
            unguided transmission media, 70                    F
        Different propagation method for unguided signal, 78
            ground wave propagation, 78                        FDM differs from FDMA, 158
            Ionospheric propagation, 78                        Fibre distributed data interface, 165, 168
            line-of-sight propagation, 78                      Fibre-optic propagation modes, 74
        Different types of unguided media, 79                       multimode graded-index fibre, 7
        Digital data, 40–41, 51, 53                                 multimode step-index fibre, 7
        Digital signature, 278, 286–287, 291                   File transfer protocol, 259, 287
        Digital transmission, 44                               Flooding, 201–202
        Direct Sequence Spread Spectrum, 92                    Flow control and error control, 124, 162
        Dish antenna, 80                                       Flow-based Routing, 202
        Distance vector routing, 200–201, 204–209, 213         Footprint, 81
        Domain name system, 251                                Formula of BFSK, 58
        Downlink, 81                                           Fourier analysis, 44
        DS hierarchy, 89                                       Frame Relay, 86, 104–105, 166, 182–184,
        DSL technology, 99–100                                    186, 190, 193, 217
            asymmetric digital subscriber line, 99             Framing bit, 89
            high-bit-rate digital subscriber line, 100         Frequency Hopping Spread Spectrum, 91
            spliterless ADSL, 100                              Full-duplex synchronous transmission, 98–99
            symmetric digital subscriber line, 100             Functionalities of network layer with transport layer, 28
            very high-bit-rate digital subscriber line, 100
        DSL, 71, 99–100, 104, 109                              G
        Duty of layer in OSI model, 25
                                                               Geostationary satellites, 81
            data link layer, 26
                                                               Go-back-N ARQ, 130
            datagram method, 26
            dialog control, 26
            error control, 26                                  H
            network Layer, 26                                  Hamming code, 118, 121–123
            physical Layer, 25                                 Hamming distance, 114
            session Layer, 26                                  Hash function, 287
            transport Layer, 26                                HDLC protocol, 133–134
            virtual circuit method, 26                         Hexadecimal colon notation, 197
        Dynamic domain name system, 254                        Hop-by-hop choke packet, 218
        Dynamic host control protocol, 229, 254                Horn antenna, 80
                                                               Hybrid topology, 20
        E                                                      Hypertext transfer protocol, 178, 261, 263