August 2012
Master of Computer Application (MCA) Semester 3
MC0075 Computer Networks 4 Credits
(Book ID: B0813 & B0814)
Assignment Set 1 (60 Marks)
Answer all questions. Each question carries TEN marks.
Book ID: B0813
1. Discuss the following Switching Mechanisms:
a. Circuit switching
b. Message switching
c. Packet switching
Ans:
Circuit switching is a methodology of implementing a telecommunications network in
which two network nodes establish a dedicated communications channel (circuit) through
the network before the nodes may communicate. The circuit guarantees the full
bandwidth of the channel and remains connected for the duration of the communication
session. The circuit functions as if the nodes were physically connected as with an
electrical circuit.
The defining example of a circuit-switched network is the early analog telephone
network. When a call is made from one telephone to another, switches within the
telephone exchanges create a continuous wire circuit between the two telephones, for as
long as the call lasts.
Circuit switching contrasts with packet switching which divides the data to be transmitted
into packets transmitted through the network independently. In packet switching, instead
of being dedicated to one communication session at a time, network links are shared by
packets from multiple competing communication sessions, resulting in the loss of the
quality of service guarantees that are provided by circuit switching.
In circuit switching, the bit delay is constant during a connection, as opposed to packet
switching, where packet queues may cause varying and potentially indefinitely long
packet transfer delays. No circuit can be degraded by competing users because it is
protected from use by other callers until the circuit is released and a new connection is set
up. Even if no actual communication is taking place, the channel remains reserved and
protected from competing users.
Virtual circuit switching is a packet switching technology that emulates circuit switching,
in the sense that the connection is established before any packets are transferred, and
packets are delivered in order.
While circuit switching is commonly used for connecting voice circuits, the concept of a
dedicated path persisting between two communicating parties or nodes can be extended
to signal content other than voice. Its advantage is that it provides for continuous transfer
without the overhead associated with packets, making maximal use of the available
bandwidth for that communication. Its disadvantage is that it can be relatively inefficient,
because unused capacity guaranteed to a connection cannot be used by other connections
on the same network.
Message switching:
A computer system used to switch data between various points. Computers have always
been ideal switches due to their input/output and compare capabilities. It inputs the data,
compares its destination with a set of stored destinations and routes it accordingly. Note: A
"message" switch is a generic term for a data routing device, but a "messaging" switch converts
mail and messaging protocols.
message switching: A method of handling message traffic through a switching center,
either from local users or from other switching centers, whereby the message traffic is stored and
forwarded through the system.
Every input from the terminal receives a response. Most responses are preceded by
indicators where the letters before OK represent the first character of each of the CMSG options
(except CANCEL) as follows:
D DATE
E ERRTERM
H HEADING
I ID
M MSG
O OPCLASS
P PROTECT
R ROUTE
S SEND
T TIME
These indicators identify the options that have been processed and that are currently in effect.
Errors may occur because of:
Syntax (for example, misspelled option, unbalanced parentheses, terminal identifier more than 4
characters, invalid option separator, and message and destination not provided).
Specification (for example, the specified terminal has not been defined to CICS).
Operation (for example, operator not currently signed on to the system).
Syntax errors within an option cause it to be rejected by the message-switching routine. To
correct a known error, reenter the option before typing the SEND keyword.
Packet switching:
Refers to protocols in which messages are divided into packets before they are sent.
Each packet is then transmitted individually and can even follow different routes to its destination.
Once all the packets forming a message arrive at the destination, they are recompiled into the
original message.
Most modern Wide Area Network (WAN) protocols, including TCP/IP, X.25, and Frame
Relay, are based on packet-switching technologies. In contrast, normal telephone service is
based on a circuit-switching technology, in which a dedicated line is allocated for transmission
between two parties. Circuit-switching is ideal when data must be transmitted quickly and must
arrive in the same order in which it's sent. This is the case with most real-time data, such as live
audio and video. Packet switching is more efficient and robust for data that can withstand some
delays in transmission, such as e-mail messages and Web pages.
A new technology, ATM, attempts to combine the best of both worlds -- the guaranteed
delivery of circuit-switched networks and the robustness and efficiency of packet-switching
networks. Packet switching is the dividing of messages into packets before they are sent,
transmitting each packet individually, and then reassembling them into the original message once
all of them have arrived at the intended destination. Packets are the fundamental unit of
information transport in all modern computer networks, and increasingly in other communications
networks as well. Each packet, which can be of fixed or variable size depending on the protocol,
consists of a header, body (also called a payload) and a trailer. The body contains a segment of
the message being transmitted.
This contrasts with circuit switching, in which a dedicated, but temporary, circuit is
established for the duration of the transmission of each message; the most familiar
circuit-switching network is the telephone system when used for voice communications.
Packet switching is used to optimize the use of the bandwidth available in a network, to
minimize the transmission latency (i.e. the time it takes for data to pass across the
network), and to increase the robustness of communication.
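To make the mechanism concrete, here is a minimal Python sketch of the packet-switching idea described above: a message is split into fixed-size packets, each carrying a sequence number as its header, and the receiver reassembles them even if they arrive out of order. The packet size and header layout are illustrative assumptions, not any particular protocol's format.

# Illustrative sketch only: not a real protocol, just split and reassemble.
def packetize(message: bytes, payload_size: int = 8):
    # Each "packet" is (sequence_number, payload); the number is the header.
    return [(seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets):
    # Packets may arrive in any order; sort on the sequence-number header.
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"Packets may take different routes.")
packets.reverse()  # simulate out-of-order arrival
assert reassemble(packets) == b"Packets may take different routes."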
2. Discuss the following IEEE standards
o Ethernet
o Fast Ethernet
o Gigabit Ethernet
o IEEE 802.3 frame format.
Ans:
Ethernet is a family of computer networking technologies for local area
networks (LANs). Ethernet was commercially introduced in 1980 and standardized in
1985 as IEEE 802.3. Ethernet has largely replaced competing wired LAN technologies.
The Ethernet standards comprise several wiring and signaling variants of the OSI
physical layer in use with Ethernet. The original 10BASE5 Ethernet used coaxial cable as
a shared medium. Later the coaxial cables were replaced by twisted pair and fiber optic
links in conjunction with hubs or switches. Data rates were periodically increased from
the original 10 megabits per second to 100 gigabits per second.
Systems communicating over Ethernet divide a stream of data into shorter pieces called
frames. Each frame contains source and destination addresses and error-checking data so
that damaged data can be detected and re-transmitted. As per the OSI model Ethernet
provides services up to and including the data link layer.
Since its commercial release, Ethernet has retained a good degree of compatibility.
Features such as the 48-bit MAC address and Ethernet frame format have influenced
other networking protocols.
Standardization
In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started
project 802 to standardize local area networks (LAN).[16][8] The "DIX-group" with Gary
Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox) submitted the so-called "Blue
Book" CSMA/CD specification as a candidate for the LAN specification.[9] In addition to
CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward
supported by General Motors) were also considered as candidates for a LAN standard.
Competing proposals and broad interest in the initiative led to strong disagreement over
which technology to standardize. In December 1980, the group was split into three
subgroups, and standardization proceeded separately for each proposal.[8]
Delays in the standards process put at risk the market introduction of the Xerox Star
workstation and 3Com's Ethernet LAN products. With such business implications in
mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com)
strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an
alliance in the emerging office communication market, including Siemens' support for the
international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens'
representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE
by the establishment of a competing Task Group "Local Networks" within the European
standards body ECMA TC24. As early as March 1982, ECMA TC24 with its corporate
members reached agreement on a standard for CSMA/CD based on the IEEE 802
draft.[11] Because the DIX proposal was most technically complete, and because of the
speedy action taken by ECMA which decisively contributed to the conciliation of
opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December
1982.[8] IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985.
Approval of Ethernet on the international level was achieved by a similar, cross-partisan
action with Fromm as liaison officer working to integrate the International Electrotechnical
Commission TC83 and the International Organization for Standardization (ISO) TC97SC6,
and the ISO/IEEE 802/3 standard was approved in 1984.
Fast Ethernet
In computer networking, Fast Ethernet is a collective term for a number of Ethernet
standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet
speed of 10 Mbit/s. Of the Fast Ethernet standards, 100BASE-TX is by far the most
common and is supported by the vast majority of Ethernet hardware currently produced.
Fast Ethernet was introduced in 1995[1] and remained the fastest version of Ethernet for
three years before being superseded by gigabit Ethernet.[2]
General design
Fast Ethernet is an extension of the existing Ethernet standard. It runs on UTP data or
optical fiber cable and uses CSMA/CD in a star-wired bus topology, similar to 10BASE-T,
where all cables are attached to a hub. It provides compatibility with existing
10BASE-T systems and thus enables plug-and-play upgrades from 10BASE-T. Fast
Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and
TX variants.
The 100 in the media type designation refers to the transmission speed of 100 Mbit/s. The
"BASE" refers to baseband signalling. The TX, FX and T4 refer to the physical medium
that carries the signal.
A Fast Ethernet adapter can be logically divided into a Media Access Controller (MAC)
which deals with the higher level issues of medium availability and a Physical Layer
Interface (PHY). The MAC may be linked to the PHY by a 4 bit 25 MHz synchronous
parallel interface known as a Media Independent Interface (MII) or a 2 bit 50 MHz
variant Reduced Media Independent Interface (RMII). Repeaters (hubs) are also allowed
and connect to multiple PHYs for their different interfaces.
The MII may (rarely) be an external connection but is usually a connection between ICs
in a network adapter or even within a single IC. The specs are written based on the
assumption that the interface between MAC and PHY will be a MII but they do not
require it.
The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to
100 Mbit/s. The data signaling rate actually observed on real networks is less than the
theoretical maximum, due to the necessary header and trailer (addressing and error-
detection bits) on every frame, the occasional "lost frame" due to noise, and time waiting
after each sent frame for other devices on the network to finish transmitting.
Gigabit Ethernet
In computer networking, Gigabit Ethernet (GbE or 1 GigE) is a term describing various
technologies for transmitting Ethernet frames at a rate of a gigabit per second
(1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. It came
into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks
where it performed considerably faster. The cables and equipment are very similar to
previous standards, and by the year 2010, were very common and economical.
Half-duplex gigabit links connected through hubs are allowed by the specification,[1] but
full-duplex usage with switches is much more common.
Ethernet IEEE 802.3 Frame Format / Structure
Ethernet, IEEE 802.3 defines the frame formats or frame structures that are developed
within the MAC layer of the protocol stack.
Essentially the same frame structure is used for the different variants of Ethernet,
although there are some changes to the frame structure to extend the performance of the
system should this be needed. With the high speeds and variety of media used, this basic
format sometimes needs to be adapted to meet the individual requirements of the
transmission system, but this is still specified within the amendment / update for that
given Ethernet variant.
10 / 100 Mbps Ethernet MAC data frame format
The basic MAC data frame format for Ethernet, IEEE 802.3 used within the 10 and 100
Mbps systems is given below:
Basic Ethernet MAC Data Frame Format
The basic frame consists of seven elements split between three main areas:-
Header
o Preamble (PRE) - This is seven bytes long and it consists of a pattern of
alternating ones and zeros, and this informs the receiving stations that a frame
is starting as well as enabling synchronisation. (10 Mbps Ethernet)
o Start Of Frame delimiter (SOF) - This consists of one byte and contains an
alternating pattern of ones and zeros but ending in two ones.
o Destination Address (DA) - This field contains the address of the station for which
the data is intended. The left-most bit indicates whether the destination is an
individual address or a group address. An individual address is denoted by a
zero, while a one indicates a group address. The next bit into the DA indicates
whether the address is globally administered or local. If the address is globally
administered the bit is a zero, and a one if it is locally administered. There are
then 46 remaining bits, which are used for the destination address itself.
o Source Address (SA) - The source address consists of six bytes, and it is used to
identify the sending station. As it is always an individual address the left most
bit is always a zero.
o Length / Type - This field is two bytes in length. It provides MAC information and
indicates the number of bytes of client data contained in the data field of
the frame. It may also indicate the frame ID type if the frame is assembled using
an optional format (IEEE 802.3 only).
Payload
o Data - This block contains the payload data and it may be up to 1500 bytes long.
If the length of the field is less than 46 bytes, then padding data is added to
bring its length up to the required minimum of 46 bytes.
Trailer
o Frame Check Sequence (FCS) - This field is four bytes long. It contains a 32 bit
Cyclic Redundancy Check (CRC) which is generated over the DA, SA, Length /
Type and Data fields.
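As an illustration of the layout just described, here is a hedged Python sketch that unpacks the destination address, source address, and Length/Type field from a raw frame, assuming the preamble and SOF have already been stripped and that the captured bytes still include the 4-byte FCS at the end (not every capture path preserves it). The sample frame bytes are made up for the example.

import struct

def parse_ethernet_frame(frame: bytes):
    # 6-byte DA, 6-byte SA, 2-byte Length/Type, then payload and 4-byte FCS.
    da, sa, length_type = struct.unpack("!6s6sH", frame[:14])
    payload, fcs = frame[14:-4], frame[-4:]
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(da), fmt(sa), length_type, payload, fcs

# A made-up frame: broadcast DA, arbitrary SA, EtherType 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..." + b"\x00" * 4
da, sa, ltype, payload, fcs = parse_ethernet_frame(frame)
print(da, sa, hex(ltype))  # ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x800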
1000 Mbps Ethernet MAC data frame format
The basic MAC data frame format for Ethernet is modified slightly for 1GE, IEEE 802.3z
systems. When using the 1000BASE-X standard, there is a minimum frame size of
416 bytes, and for 1000BASE-T there is a minimum frame size of 520 bytes. To
accommodate this, an extension is added as appropriate: a non-data variable
extension field appended to any frames that are shorter than the minimum required length.
1GE / 1000 Mbps Ethernet MAC Data Frame Format
Half-duplex transmission
This access method involves the use of CSMA/CD and it was developed to enable several
stations to share the same transport medium without the need for switching, network
controllers or assigned time slots. Each station is able to determine when it is able to
transmit and the network is self organising.
The CSMA/CD protocol used for Ethernet and a variety of other applications falls into
three categories. The first is Carrier Sense. Here each station listens on the network for
traffic and it can detect when the network is quiet. The second is the Multiple Access
aspect where the stations are able to determine for themselves whether they should
transmit. The final element is the Collision Detect element. Even though stations may
find the network free, it is still possible that two stations will start to transmit at virtually
the same time. If this happens then the two sets of data being transmitted will collide. If
this occurs then the stations can detect this and they will stop transmitting. They then
back off a random amount of time before attempting a retransmission. The random delay
is important as it prevents the two stations starting to transmit together a second time.
Note: According to section 3.3 of the IEEE 802.3 standard, each octet of the Ethernet
frame, with the exception of the FCS, is transmitted low-order bit first.
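In real Ethernet, the random back-off described above is the truncated binary exponential backoff. The following Python sketch illustrates that rule under the assumption of 10 Mbps slot timing; it is a simplified illustration, not a full CSMA/CD simulation.

import random

SLOT_TIME_US = 51.2  # one slot time for 10 Mbps Ethernet (512 bit times)

def backoff_delay(collision_count: int) -> float:
    # After the n-th collision, wait a random number of slot times drawn
    # from [0, 2^min(n, 10) - 1]; give up entirely after 16 collisions.
    if collision_count > 16:
        raise RuntimeError("too many collisions: abort and report an error")
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

# After the 3rd collision a station waits between 0 and 7 slot times.
print(backoff_delay(3))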
Full duplex
Another option that is allowed by the Ethernet MAC is full duplex with transmission in
both directions. This is only allowable on point-to-point links, and it is much simpler to
implement than using the CSMA/CD approach as well as providing much higher
transmission throughput rates when the network is being used. Not only is there no need
to schedule transmissions around other traffic, as there are only
two stations on the link, but by using a full-duplex link, full-rate transmissions can be
undertaken in both directions, thereby doubling the effective bandwidth.
Ethernet addresses
Every Ethernet network interface card (NIC) is given a unique identifier called a MAC
address. This is assigned by the manufacturer of the card and each manufacturer that
complies with IEEE standards can apply to the IEEE Registration Authority for a range
of numbers for use in its products.
The MAC address is a 48-bit number. Within the number, the first 24 bits
identify the manufacturer; this part is known as the manufacturer ID or Organizationally
Unique Identifier (OUI) and is assigned by the registration authority. The second half
of the address is assigned by the manufacturer and is known as the extension or board
ID.
The MAC address is usually programmed into the hardware so that it cannot be changed.
Because the MAC address is assigned to the NIC, it moves with the computer. Even if the
interface card moves to another location across the world, the user can be reached
because the message is sent to the particular MAC address.
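A small sketch of splitting a MAC address into the two halves described above, the OUI and the manufacturer-assigned extension; the address used is an arbitrary example.

def split_mac(mac: str):
    # "00:11:22:33:44:55" -> OUI = first 3 octets, extension = last 3 octets.
    octets = mac.split(":")
    assert len(octets) == 6, "a MAC address is 48 bits = 6 octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, extension = split_mac("00:11:22:33:44:55")
print(oui)        # 00:11:22  (manufacturer ID / OUI)
print(extension)  # 33:44:55  (board ID assigned by the manufacturer)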
3. Describe the classification of computer networks based on:
o Transmission Technologies
o Scalability
o Geographical Distance covered
Ans:
Computer networks can be classified into two classes regarding the transmission technology
they use: broadcast networks and point-to-point networks.
Broadcast Network
Broadcast networks have a single communication channel that is shared by all the machines
on the network. Short messages, called packets in certain contexts, sent by any machine are
received by all the others. An address field within the packet specifies for whom it is
intended. Upon receiving a packet, a machine checks the address field. If the packet is
intended for itself, it processes the packet; if the packet is intended for some other machine,
it is just ignored.
Broadcast systems generally also allow the possibility of addressing a packet to all
destinations by using a special code in the address field. When a packet with this code is
transmitted, it is received and processed by every machine on the network. This mode of
operation is called broadcasting. Some broadcast systems also support transmission to a
subset of the machines, something known as multicasting. One possible scheme is to
reserve one bit to indicate multicasting. The remaining (n-1) address bits can hold a group
number. Each machine can "subscribe" to any or all of the groups. When a packet is sent to a
certain group, it is delivered to all machines subscribing to that group.
Point-to-Point Networks
Point-to-point networks consist of many connections between individual pairs of machines.
To go from the source to the destination, a packet on this type of network may have to first
visit one or more intermediate machines. Often multiple routes, of different lengths, are
possible, so routing algorithms play an important role in point-to-point networks. As a
general rule (although there are many exceptions), smaller, geographically localized networks
tend to use broadcasting, whereas larger networks usually are point-to-point.
Book ID: B0814
4. Explain the different classes of IP addresses with suitable
examples.
Ans:
IP address
An Internet Protocol address (IP address) is a numerical label assigned to each device
(e.g., computer, printer) participating in a computer network that uses the Internet
Protocol for communication.[1] An IP address serves two principal functions: host or
network interface identification and location addressing. Its role has been characterized as
follows: "A name indicates what we seek. An address indicates where it is. A route
indicates how to get there."[2]
The designers of the Internet Protocol defined an IP address as a 32-bit number,[1] and this
system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, due
to the enormous growth of the Internet and the predicted depletion of available addresses,
a new version of IP (IPv6), using 128 bits for the address, was developed in 1995.[3] IPv6
was standardized as RFC 2460 in 1998,[4] and its deployment has been ongoing since the
mid-2000s.
IP addresses are binary numbers, but they are usually stored in text files and displayed in
human-readable notations, such as 172.16.254.1 (for IPv4), and
2001:db8:0:1234:0:567:8:1 (for IPv6).
The Internet Assigned Numbers Authority (IANA) manages the IP address space
allocations globally and delegates five regional Internet registries (RIRs) to allocate IP
address blocks to local Internet registries (Internet service providers) and other entities.
IP versions
Two versions of the Internet Protocol (IP) are in use: IP Version 4 and IP Version 6. Each
version defines an IP address differently. Because of its prevalence, the generic term IP
address typically still refers to the addresses defined by IPv4. The gap in version
sequence between IPv4 and IPv6 resulted from the assignment of number 5 to the
experimental Internet Stream Protocol in 1979, which however was never referred to as
IPv5.
IPv4 addresses
Main article: IPv4#Addressing
Decomposition of an IPv4 address from dot-decimal notation to its binary value.
In IPv4 an address consists of 32 bits, which limits the address space to 4,294,967,296 (2^32)
possible unique addresses. IPv4 reserves some addresses for special purposes such as
private networks (~18 million addresses) or multicast addresses (~270 million addresses).
IPv4 addresses are canonically represented in dot-decimal notation, which consists of
four decimal numbers, each ranging from 0 to 255, separated by dots, e.g., 172.16.254.1.
Each part represents a group of 8 bits (octet) of the address. In some cases of technical
writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary
representations.
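A small sketch of the dot-decimal notation just described: each of the four decimal numbers is one 8-bit octet of the underlying 32-bit value. The conversion functions below are illustrative helpers, not part of any standard API.

def dotted_to_int(address: str) -> int:
    # Each dotted part is one octet (8 bits) of the 32-bit address.
    value = 0
    for octet in address.split("."):
        n = int(octet)
        assert 0 <= n <= 255, "each octet ranges from 0 to 255"
        value = (value << 8) | n
    return value

def int_to_dotted(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

n = dotted_to_int("172.16.254.1")
print(n, hex(n))          # 2886794753 0xac10fe01
print(int_to_dotted(n))   # 172.16.254.1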
IPv4 subnetting
In the early stages of development of the Internet Protocol,[1] network administrators
interpreted an IP address in two parts: a network number portion and a host number portion.
The highest-order octet (most significant eight bits) in an address was designated as the
network number and the remaining bits were called the rest field or host identifier and
were used for host numbering within a network.
This early method soon proved inadequate as additional networks developed that were
independent of the existing networks already designated by a network number. In 1981,
the Internet addressing specification was revised with the introduction of classful network
architecture.[2]
Classful network design allowed for a larger number of individual network assignments
and fine-grained subnetwork design. The first three bits of the most significant octet of an
IP address were defined as the class of the address. Three classes (A, B, and C) were
defined for universal unicast addressing. Depending on the class derived, the network
identification was based on octet boundary segments of the entire address. Each class
used successively additional octets in the network identifier, thus reducing the possible
number of hosts in the higher order classes (B and C). The following table gives an
overview of this now obsolete system.
Historical classful network architecture

Class | Leading bits (binary) | Range of first octet (decimal) | Network ID format | Host ID format | Number of networks | Number of addresses per network
A     | 0                     | 0-127                          | a                 | b.c.d          | 2^7 = 128          | 2^24 = 16,777,216
B     | 10                    | 128-191                        | a.b               | c.d            | 2^14 = 16,384      | 2^16 = 65,536
C     | 110                   | 192-223                        | a.b.c             | d              | 2^21 = 2,097,152   | 2^8 = 256
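A hedged sketch deriving the class of an IPv4 address from its first octet, following the table above; classes D (multicast) and E (experimental) are included for completeness even though only A, B, and C were defined for unicast addressing.

def address_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"                 # leading bit 0
    if first_octet <= 191:
        return "B"                 # leading bits 10
    if first_octet <= 223:
        return "C"                 # leading bits 110
    if first_octet <= 239:
        return "D (multicast)"     # leading bits 1110
    return "E (experimental)"      # leading bits 1111

print(address_class("10.0.0.1"))      # A
print(address_class("172.16.254.1"))  # B
print(address_class("192.168.0.1"))   # C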
Classful network design served its purpose in the startup stage of the Internet, but it
lacked scalability in the face of the rapid expansion of the network in the 1990s. The
class system of the address space was replaced with Classless Inter-Domain Routing
(CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow
allocation and routing based on arbitrary-length prefixes.
Today, remnants of classful network concepts function only in a limited scope as the
default configuration parameters of some network software and hardware components
(e.g. netmask), and in the technical jargon used in network administrators' discussions.
IPv4 private addresses
Early network design, when global end-to-end connectivity was envisioned for
communications with all Internet hosts, intended that IP addresses be uniquely assigned
to a particular computer or device. However, it was found that this was not always
necessary as private networks developed and public address space needed to be
conserved.
Computers not connected to the Internet, such as factory machines that communicate
only with each other via TCP/IP, need not have globally unique IP addresses. Three
ranges of IPv4 addresses for private networks were reserved in RFC 1918. These
addresses are not routed on the Internet and thus their use need not be coordinated with
an IP address registry.
Today, when needed, such private networks typically connect to the Internet through
network address translation (NAT).
IANA-reserved private IPv4 network ranges

Block                                  | Start       | End             | No. of addresses
24-bit block (/8 prefix, 1 class A)    | 10.0.0.0    | 10.255.255.255  | 16,777,216
20-bit block (/12 prefix, 16 class B)  | 172.16.0.0  | 172.31.255.255  | 1,048,576
16-bit block (/16 prefix, 256 class C) | 192.168.0.0 | 192.168.255.255 | 65,536
Any user may use any of the reserved blocks. Typically, a network administrator will
divide a block into subnets; for example, many home routers automatically use a default
address range of 192.168.0.0 through 192.168.0.255 (192.168.0.0/24).
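As a sketch of dividing a block into subnets, the Python standard-library ipaddress module can enumerate them; the /16 block and /24 subnet size below match the home-router example above.

import ipaddress

block = ipaddress.ip_network("192.168.0.0/16")
subnets = list(block.subnets(new_prefix=24))  # all 256 possible /24 subnets

print(len(subnets))   # 256
print(subnets[0])     # 192.168.0.0/24, the default range of many home routers
print(subnets[0][1])  # 192.168.0.1, often the router's own address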
5. Discuss the following with respect to Internet Control Message Protocols:
a. Congested and Datagram Flow control
b. Route change requests from routers
c. Detecting circular or long routes
a) Congested and Datagram Flow control:
IP implementations are required to support this protocol. ICMP is considered an integral part of
IP, although it is architecturally layered upon IP. ICMP provides error reporting, flow control and
first-hop gateway redirection.
Some of ICMP's functions are to:
Announce network errors.
Such as a host or entire portion of the network being unreachable, due to some type of failure. A
TCP or UDP packet directed at a port number with no receiver attached is also reported via
ICMP.
Announce network congestion.
When a router begins buffering too many packets, due to an inability to transmit them as fast as
they are being received, it will generate ICMP Source Quench messages. Directed at the sender,
these messages should cause the rate of packet transmission to be slowed. Of course,
generating too many Source Quench messages would cause even more network congestion, so
they are used sparingly.
Assist Troubleshooting.
ICMP supports an Echo function, which just sends a packet on a round-trip between two hosts.
Ping, a common network management tool, is based on this feature. Ping will transmit a series of
packets, measuring average round-trip times and computing loss percentages.
Announce Timeouts.
If an IP packet's TTL field drops to zero, the router discarding the packet will often generate an
ICMP packet announcing this fact. TraceRoute is a tool which maps network routes by sending
packets with small TTL values and watching the ICMP timeout announcements.
An ICMP error message is never generated in response to:
A datagram whose source address does not define a single host (the address cannot be zero,
loopback, broadcast, or multicast).
A datagram whose destination address is an IP broadcast address.
A datagram sent as a link-layer broadcast.
A fragment other than the first one of a datagram.
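The Echo function mentioned above can be made concrete by building the 8-byte ICMP echo-request header by hand. This is a hedged sketch: actually sending the packet would require a raw socket and administrator privileges, which are omitted here.

import struct

def icmp_checksum(data: bytes) -> int:
    # RFC 1071 style one's-complement sum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    # Type 8 = echo request, code 0; the checksum is computed over the whole
    # message with the checksum field initially zero.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=1, sequence=1)
print(packet.hex())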
(b) Route change requests from routers:
Network Address Translation (NAT) is a standard IP service which allows for the translation
of one IP address into another IP address. NAT has been enhanced to provide a set of advanced
services called SuperNAT. SuperNAT includes a powerful Proxy Service, Port Address
Translation (sometimes called PAT) and Application Specific Gateways (ASGs), as well as other
capabilities defined below.
Up to 32 internal-to-external host IP address mappings.
SuperNAT allows local hosts to be excluded from external services.
SuperNAT Thin Proxy allows a single IP for unlimited local hosts.
SuperNAT allows NAT translations plus a nominated IP address to be used as a Thin Proxy for
all other hosts.
Port Maps (PAT) allow support of multiple types of servers on a single IP.
Context-sensitive support for active (PORT) or passive (PASV) FTP modes.
Automatic support for remote NETBIOS (WINS) networks and remote DHCP servers.
Proxy DNS feature simplifies re-configuration.
User-definable NAT route(s) allow the router to be used in LAN to LAN, LAN to WAN, and WAN
to WAN configurations.
NAT services are defined at the 'Logical Route' level. It is possible to define any route to
use NAT services. To illustrate, assume an intranet where WarpTwo is being used as a
concentrator for a group of LAN and remote hosts (PCs). These IP addresses communicate with
each other without using a NAT service (an intranet); when external communication is required,
WarpTwo forwards the traffic to another LAN router. This LAN to LAN route is defined as the NAT
route and uses a NAT service. There are many other network scenarios where this capability can
be used both to increase efficiency and to provide flexible responses to network needs.
(c) Detecting circular or long routes:
IP networks are structured similarly. The whole Internet consists of a number of proper
networks, called autonomous systems. Each system performs routing between its member hosts
internally so that the task of delivering a datagram is reduced to finding a path to the destination
host's network. As soon as the datagram is handed to any host on that particular network, further
processing is done exclusively by the network itself.
Identifying critical nodes in a graph is important for understanding the structural
characteristics and the connectivity properties of the network. The focus here is on detecting
critical nodes, i.e. nodes whose deletion results in the minimum pair-wise connectivity among the
remaining nodes; this is known as the critical node problem.
IP uses a table for this task that associates networks with the gateways by which they
may be reached. A catch-all entry (the default route) must generally be supplied too; this is the
gateway associated with network 0.0.0.0. All destination addresses match this route, since none
of the 32 bits are required to match, and therefore packets to an unknown network are sent
through the default route. On sophus, the table might look like this:
If you need to use a route to a network that sophus is directly connected to, you don't
need a gateway; the gateway column here contains a hyphen. The process for identifying
whether a particular destination address matches a route is a simple mathematical operation,
but it requires an understanding of binary arithmetic and logic: a route
matches a destination if the network address logically ANDed with the netmask precisely equals
the destination address logically ANDed with the netmask. Translation: a route matches if the
number of bits of the network address specified by the netmask (starting from the left-most bit,
the high-order bit of byte one of the address) match that same number of bits in the destination
address.
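That AND-and-compare rule translates directly into code. Here is a minimal sketch using the standard-library ipaddress module; the addresses are arbitrary examples.

import ipaddress

def route_matches(destination: str, network: str, netmask: str) -> bool:
    # A route matches when (network AND mask) == (destination AND mask).
    dest = int(ipaddress.IPv4Address(destination))
    net = int(ipaddress.IPv4Address(network))
    mask = int(ipaddress.IPv4Address(netmask))
    return (net & mask) == (dest & mask)

print(route_matches("192.168.1.42", "192.168.1.0", "255.255.255.0"))  # True
print(route_matches("192.168.2.42", "192.168.1.0", "255.255.255.0"))  # False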
We depend on dynamic routing to choose the best route to a destination host or network
based on the number of hops. Hops are the gateways a datagram has to pass before reaching a
host or network. The shorter a route is, the better RIP rates it. Very long routes with 16 or more
hops are regarded as unusable and are discarded.
RIP manages routing information internal to your local network, but you have to run gated
on all hosts. At boot time, gated checks for all active network interfaces. If there is more than one
active interface (not counting the loopback interface), it assumes the host is switching packets
between several networks and will actively exchange and broadcast routing information.
Otherwise, it will only passively receive RIP updates and update the local routing table.
6. Discuss the architecture and applications of E-mail.
Ans:
E-Mail
Electronic mail, or e-mail as it is known to its fans, became known to the public at large and
its use grew exponentially. The first e-mail systems consisted of file transfer protocols, with the
convention that the first line of the message contained the recipient's address. E-mail is a store
and forward method of composing, sending, storing, and receiving messages over electronic
communication systems. The term e-mail applies both to the Internet e-mail system based on
the Simple Mail Transfer Protocol (SMTP) and to intranet systems allowing users within one
organization to e-mail each other.
Often workgroup collaboration organizations may use the Internet protocols for internal e-mail
service. E-mail is often used to deliver bulk unwanted messages, or spam, but filter programs
exist which can automatically delete most of these. E-mail systems based on RFC 822 are widely
used.
1 Architecture:
An e-mail system normally consists of two subsystems:
1. the user agents
2. the message transfer agents
The user agents allow people to read and send e-mail. The message transfer agents move the
messages from source to destination. The user agents are local programs that provide a
command-based, menu-based, or graphical method for interacting with the e-mail system. The
message transfer agents are daemons, which are processes that run in the background; their job
is to move e-mail through the system.
A key idea in e-mail systems is the distinction between the envelope and its contents. The
envelope encapsulates the message. It contains all the information needed for transporting the
message, like the destination address, priority, and security level, all of which are distinct from
the message itself.
The message transport agents use the envelope for routing. The message inside the envelope
consists of two major sections:
The Header:
The header contains control information for the user agents. It is structured into fields such as
summary, sender, receiver, and other information about the e-mail.
Body:
The body is entirely for the human recipient: the message itself, as unstructured text, sometimes
containing a signature block at the end.
2 Header format
The header is separated from the body by a blank line. It consists of the following fields:
From: The e-mail address, and optionally name, of the sender of the message.
To: one or more e-mail addresses, and optionally name, of the receivers of the message.
Subject: A brief summary of the contents of the message.
Date: The local time and date when the message was originally sent.
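As a small illustration of these header fields in practice, Python's standard email library can assemble such a message; the addresses and content below are made up for the example.

from email.message import EmailMessage
from email.utils import formatdate

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Assignment submission"
msg["Date"] = formatdate(localtime=True)
msg.set_content("Please find my answers attached.")

# Header fields, then a blank line, then the body.
print(msg.as_string())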
E-mail system based on RFC 822 contains the message header as shown in figure 8.2. The
figure gives the fields along with their meaning.
The fields in the message header of E-mail system based on RFC 822 related to message
transport are given in figure 8.3. The figure gives the fields along with their meaning.
3 User agents:
It is normally a program and is sometimes called a mail reader. It accepts a variety of commands
for composing, receiving, and replying to messages, as well as for manipulating mailboxes. Some
user agents have a fancy menu- or icon-driven interface that requires a mouse, whereas others
expect one-character commands from the keyboard. Functionally these are the same. Some
systems are menu- or icon-driven but also have keyboard shortcuts.
To send an e-mail, the user provides the message, the destination address, and possibly some
other parameters. Most e-mail systems support mailing lists.
Example: Reading e-mail
When a user agent is started up, it looks at the user's mailbox for incoming e-mail before
displaying anything on the screen. Then it announces the number of messages in the mailbox or
displays a one-line summary of each e-mail and waits for a command.
The display may look something like that is shown in figure 8.4. Each line of the display contains
several fields extracted from the envelope or header of the corresponding message. In a simple
e-mail system, the choice of fields is built into the program. In more sophisticated system, user
can specify which fields are to be displayed by providing a user profile.
Referring to the display, it contains the following fields:
1. Message number: the serial number of the message. Messages can be displayed starting
from the most recently received, or vice versa.
2. Flags: K means the message is not new, A means the message has already been read, and F
means the message has been forwarded to someone else.
3. Size of the message: indicates the length of the message.
4. Source of the message: the originator's address.
5. Subject: gives a brief summary of what the message is about.
4 E-mail Services
Basic services:
E-mail systems support five basic functions. These basic functions are:
1. Composition:
It refers to the process of creating messages and answers. Any text editor can be used for the
body of the message, while the system itself can provide assistance with addressing and the
numerous header fields attached to each message.
For example: when answering a message, the e-mail system can extract the originator's address
from the incoming e-mail and automatically insert it into the proper place in the reply.
2. Transfer:
It refers to moving messages from the originator to the recipient. This requires establishing a
connection to the destination or some intermediate machine, outputting the message, and finally
releasing the connection. E-mail does it automatically without bothering the user.
3. Reporting:
It refers to acknowledging or telling the originator what happened to the message. Was the
message delivered? Was it rejected? Numerous applications exist in which confirmation of
delivery is important and may even have a legal significance. E-mail system is not very reliable.
4. Displaying
The incoming message has to be displayed so that people can read their e-mail. Sometimes
conversion is required or a special viewer must be invoked, for example if the message is a
PostScript file or digitized voice. Simple conversions and formatting are sometimes attempted.
5. Disposition
It is the final step and concerns what the recipient does with the message after receiving it.
Possibilities include throwing it away before reading, throwing it away after reading, saving it, and
so on. It should be possible to retrieve and reread saved messages, forward them or process
them in other ways.
Advanced services:
In addition to these basic services, some e-mail systems provide a variety of advanced features.
When people move or when they are away for some period of time, they want their e-mail to be
forwarded, so the system should do it automatically.
Most systems allow user to create mailboxes to store incoming e-mails. Commands are needed
to create and destroy mailboxes, inspect the contents of mailboxes, insert and delete messages
from the mailboxes.
Corporate managers often need to send messages to each of their subordinates, customers, or
suppliers. This gives rise to the idea of mailing list, which is a list of e-mail addresses. When a
message is sent to the mailing list, identical copies are delivered to everyone on the list.
Other advanced features include carbon copies, blind carbon copies, high-priority e-mail, secret
e-mail, alternative recipients if the primary one is not currently available, and the ability for
secretaries to read and answer their bosses' e-mail.
E-mail is now widely used within an industry for intra company communication. It allows far-flung
employees to cooperate on projects.
August 2012
Master of Computer Application (MCA) Semester 3
MC0075 Computer Networks 4 Credits
(Book ID: B0813 & B0814)
Assignment Set 2 (60 Marks)
Answer all questions. Each question carries TEN marks.
Book ID: B0813
1. Discuss the following design issues of DLL:
a. Framing
b. Error control
c. Flow control
Ans:
(i) Framing:
Software design is a process of problem-solving and planning for a software solution. After the
purpose and specifications of software are determined, software developers will design or employ
designers to develop a plan for a solution. It includes low-level component and algorithm
implementation issues as well as the architectural view. The software requirements analysis (SRA)
step of a software development process yields specifications that are used in software engineering. A
software design may be platform-independent or platform-specific, depending on the availability of the
technology called for by the design.
Design is a meaningful engineering representation of something that is to be built. It can be traced to
a customer's requirements and at the same time assessed for quality against a set of predefined
criteria for 'good' design. In the software engineering context, design focuses on four major areas of
concern: data, architecture, interfaces, and components.
Designing software is an exercise in managing complexity. The complexity exists within the software
design itself, within the software organization of the company, and within the industry as a whole.
Software design is very similar to systems design. It can span multiple technologies and often
involves multiple sub-disciplines. Software specifications tend to be fluid, and change rapidly and
often, usually while the design process is still going on. Software development teams also tend to be
fluid, likewise often changing in the middle of the design process. In many ways, software bears more
resemblance to complex social or organic systems than to hardware. All of this makes software
design a difficult and error prone process.
Software design documentation may be reviewed or presented to allow constraints, specifications and
even requirements to be adjusted prior to programming. Redesign may occur after review of a
programmed simulation or prototype. It is possible to design software in the process of programming,
without a plan or requirement analysis, but for more complex projects this would not be considered a
professional approach.
Frame Technology is a language-neutral system that manufactures custom software[1] from
reusable, machine-adaptable building blocks, called frames.
FT is used to reduce the time, effort, and errors involved in the design, construction, and evolution of
large, complex software systems. Fundamental to FT is its ability to stop the proliferation[2] of similar
but subtly different components, an issue plaguing software engineering, for which programming
language constructs (subroutines, classes, or templates/generics) or add-in techniques such as
macros and generators failed to provide a practical, scalable solution.
A number of implementations of FT exist. Netron Fusion specializes in constructing business software
and is proprietary. XVCL is a general-purpose, open-source implementation of FT. Paul G. Bassett
invented the first FT in order to automate the repetitive, error-prone editing involved in adapting
(generated and hand-written) programs to changing requirements and contexts. Independent
comparisons of FT to alternative approaches[11] confirm that the time and resources needed to build
and maintain complex systems can be substantially reduced. One reason: FT shields programmers
from software's inherent redundancies: FT has reproduced COTS object-libraries from equivalent
XVCL frame libraries that are two-thirds smaller and simpler[2][6]; custom business applications are
routinely specified and maintained by Netron FusionSPC frames that are 5% - 15% of the size of their
assembled source files[7].
(ii) Error control:
Error control (error management, error handling) is the employment, in a computer system or in a
communication system, of error-detecting and/or error-correcting codes with the intention of removing
the effects of error and/or recording the prevalence of error in the system. The effects of errors may
be removed by correcting them in all but a negligible proportion of cases. Error control aims to cope
with errors owing to noise or to equipment malfunction, in which case it overlaps with fault tolerance
(see fault-tolerant system), but not usually with the effects of errors in the design of hardware or
software. An important aspect is the prevention of mistakes by users. Checking of data by software as
it is entered is an essential feature of the design of reliable application programs.
Error control is expensive: the balance between the cost and the benefit (measured by the degree of
protection) has to be weighed within the technological and financial context of the system being
designed.
Software Quality Control is the set of procedures used by organizations (1) to ensure that a software
product will meet its quality goals at the best value to the customer, and (2) to continually improve the
organization's ability to produce software products in the future. Software quality control refers to
specified functional requirements as well as non-functional requirements such as supportability,
performance and usability.[2] It also refers to the ability of software to perform well in unforeseeable
scenarios and to keep a relatively low defect rate.
(iii) Flow control:
In computer networking, flow control is the process of managing the rate of data transmission
between two nodes to prevent a fast sender from outrunning a slow receiver. It provides a mechanism
for the receiver to control the transmission speed, so that the receiving node is not overwhelmed with
data from transmitting nodes. Flow control should be distinguished from congestion control, which is
used for controlling the flow of data when congestion has actually occurred.[1] Flow control
mechanisms can be classified by whether or not the receiving node sends feedback to the sending
node.
Flow control is important because it is possible for a sending computer to transmit information at a
faster rate than the destination computer can receive and process it. This can happen if the
receiving computer has a heavy traffic load in comparison to the sending computer, or if the
receiving computer has less processing power than the sending computer.
In common RS-232 there are pairs of control lines:
RTS flow control, RTS (Request To Send) / CTS (Clear To Send), and
DTR flow control, DTR (Data Terminal Ready) / DSR (Data Set Ready),
which are usually referred to as hardware flow control. By contrast, XON/XOFF is usually referred to
as software flow control. (In the old mainframe days, modems were called "data sets".)
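As a toy illustration of feedback-based flow control, the sketch below uses a credit scheme: the receiver's free buffer space is the feedback, and the sender only transmits while credit remains. This is a conceptual sketch of the idea, not RTS/CTS or XON/XOFF themselves; all names are illustrative.

from collections import deque

class Receiver:
    # Toy receiver with a bounded buffer; its free space is the feedback.
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def credits(self):
        # Feedback to the sender: how many more frames fit right now.
        return self.buffer_size - len(self.buffer)

    def accept(self, frame):
        assert self.credits() > 0, "sender ignored the flow-control feedback"
        self.buffer.append(frame)

    def process_one(self):
        if self.buffer:
            self.buffer.popleft()  # consuming a frame frees a buffer slot

def send_all(frames, receiver):
    pending = deque(frames)
    while pending:
        if receiver.credits() > 0:
            receiver.accept(pending.popleft())  # sender throttles itself
        else:
            receiver.process_one()  # stand-in for waiting until the receiver drains

rx = Receiver(buffer_size=4)
send_all(["frame-%d" % i for i in range(10)], rx)
print("all frames delivered without overflowing the receiver")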
2. Discuss the following with respect to Routing algorithms:
a. Shortest path algorithm
b. Flooding
c. Distance vector routing.
Ans:
Dijkstra's algorithm, when applied to a graph, quickly finds the shortest path from a chosen source to
a given destination. (The question "how quickly" is answered later in this article.) In fact, the algorithm
is so powerful that it finds all shortest paths from the source to all destinations! This is known as the
single-source shortest paths problem. In the process of finding all shortest paths to all destinations,
Dijkstra's algorithm will also compute, as a side-effect if you will, a spanning tree for the graph. While
an interesting result in itself, the spanning tree for a graph can be found using lighter (more efficient)
methods than Dijkstra's.
How It Works
First let's start by defining the entities we use. The graph is made of vertices (or nodes; I'll use both
words interchangeably), and edges which link vertices together. Edges are directed and have an
associated distance, sometimes called the weight or the cost. The distance between the vertex u and
the vertex v is written d(u, v) and is always positive.
Dijkstra's algorithm partitions vertices in two distinct sets, the set of unsettled vertices and the set of
settled vertices. Initially all vertices are unsettled, and the algorithm ends once all vertices are in the
settled set. A vertex is considered settled, and moved from the unsettled set to the settled set, once
its shortest distance from the source has been found.
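A minimal sketch of the algorithm just described, using Python's heapq to pick the closest unsettled vertex at each step; the example graph and its costs are made up.

import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbor, edge_cost), ...]} with non-negative costs.
    dist = {source: 0}
    settled = set()
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u in settled:
            continue
        settled.add(u)  # the shortest distance to u is now final
        for v, cost in graph.get(u, []):
            alt = d + cost
            if alt < dist.get(v, float("inf")):
                dist[v] = alt
                heapq.heappush(queue, (alt, v))
    return dist

example = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(example, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}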
B) Routing:
How do we get packets from one end point to another? Here's what would be nice for a routing
algorithm: correctness, simplicity, robustness, stability, fairness, optimality.
Robustness
The world changes, software changes, use changes, topology and hardware change, things go wrong
in lots of different ways. How well does the routing algorithm handle all this?
Stability
Does the algorithm find a routing table quickly (convergence)? How does it adapt to abrupt changes in
topology or state of the routers? Is it possible to have oscillations?
Fairness & Optimality
These may be at odds with one another. What might be fair for a single link may hurt overall
throughput. One must decide what is meant by optimality before thinking about algorithms. For
example, optimal could be for an individual packet (least amount of time in transit) or could be for the
system as a whole (greatest throughput). Often the number of hops is chosen as the metric to
minimize, as this represents both in some sense.
Algorithms may be static, i.e. the routing decisions are made ahead of time, with information about the
network topology and capacity, then loaded into the routers; or dynamic, where the routers make
decisions based on information they gather, and the routes change over time, adaptively.
Optimality principle and sink trees
Without regard to topology we can say:
If a router J is on the optimal path from router I to router K, then the optimal path from J to K also
follows the same route.
Proof: if there was a better way from J to K, then you could use that with the path from I to J for a
better path from I to K, so your starting point (the path from I to K was optimal) is contradicted.
If you apply the optimality principle then you can form a tree by taking the optimal path from every
other router to a single router, B. The tree is rooted at B. Since it is a tree you don't have loops, so
you know that each frame will be delivered in a finite number of hops. Of course finding the set of
optimal trees is a lot harder in practice than in theory, but it still provides a goal for all real routing
algorithms.
C) Distance vector routing:
Distance Vector Routing is one of the two main types of routing protocols (the other type is Link State
Routing). Basically, Distance Vector protocols determine the best path based on how far the
destination is, while Link State protocols are capable of using more sophisticated methods taking into
consideration link variables, such as bandwidth, delay, reliability and load. For Distance Vector
protocols, distance can be hops or a combination of metrics calculated to represent a distance
value. The IP Distance Vector routing protocols still in use today are: Routing Information Protocol
(RIP v1 and v2) and Interior Gateway Routing Protocol (IGRP, developed by Cisco).
A very simple distance-vector routing protocol works as follows:
1. Initially, the router makes a list of which networks it can reach, and how many hops each will cost.
At the outset this will be the two or more networks to which this router is connected. The number of
hops for these networks will be 1. This table is called a routing table.
2. Periodically the routing table is shared with other routers on each of the connected networks via
some specified inter-router protocol. This information is only shared between physically connected
routers ("neighbors"), so routers on other networks are not reached by the new routing tables yet.
3. A new routing table is constructed based on the directly configured network interfaces, as before,
with the addition of the new information received from other routers.
4. Bad routing paths are then purged from the new routing table. If two identical paths to the same
network exist, only the one with the smallest hop count is kept.
5. The new routing table is then communicated to all neighbors of this router. This way the routing
information will spread and eventually all routers know the routing path to each network, which router
to use to reach it, and which router to route to next.
Distance-vector routing protocols are simple and efficient in small networks, and require little, if any
management. However, they do not scale well, and have poor convergence properties, which has led
to the development of more complex but more scalable link-state routing protocols for use in large
networks.
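A hedged sketch of steps 3 and 4 above: merging a neighbor's advertised table into a router's own, keeping the smallest hop count and treating 16 hops as unreachable, as RIP does. The router names and table layout are illustrative assumptions.

INFINITY = 16  # RIP treats 16 hops as unreachable

def update_table(my_table, neighbor, neighbor_table, link_cost=1):
    # my_table maps destination -> (next_hop, hop_count); returns True if changed.
    changed = False
    for dest, (_, hops) in neighbor_table.items():
        new_hops = min(hops + link_cost, INFINITY)
        _, known_hops = my_table.get(dest, (None, INFINITY))
        if new_hops < known_hops:
            my_table[dest] = (neighbor, new_hops)  # shorter path via this neighbor
            changed = True
    return changed

# Router A hears an advertisement from its neighbor B.
a = {"net1": ("direct", 1)}
b_advert = {"net1": ("direct", 1), "net2": ("direct", 1), "net3": ("C", 2)}
update_table(a, "B", b_advert)
print(a)  # {'net1': ('direct', 1), 'net2': ('B', 2), 'net3': ('B', 3)}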
3. Discuss the following with respect to Wireless
transmission:
o Electromagnetic spectrum
o Radio transmission
o Microwave transmission
Ans:
The electromagnetic spectrum is the range of all possible frequencies of electromagnetic
radiation.[1] The "electromagnetic spectrum" of an object has a different meaning: it is
the characteristic distribution of electromagnetic radiation emitted or absorbed by that
particular object.
The electromagnetic spectrum extends from below the low frequencies used for modern radio
communication to gamma radiation at the short-wavelength (high-frequency) end, thereby
covering wavelengths from thousands of kilometers down to a fraction of the size of an atom.
The limit for long wavelengths is the size of the universe itself, while it is thought that the
short wavelength limit is in the vicinity of the Planck length,[2] although in principle the
spectrum is infinite and continuous.
Most parts of the electromagnetic spectrum are used in science for spectroscopic and other
probing interactions, as ways to study and characterize matter.[3] In addition, radiation from
various parts of the spectrum has found many other uses in communications and
manufacturing (see electromagnetic radiation for more applications).
Range of the spectrum
Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from 2.4×10^23 Hz (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency,[3] so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths can be as long as the universe. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations:

λ = c / f    and    E = h·f = h·c / λ

where:
c = 299,792,458 m/s is the speed of light in vacuum and
h = 6.62606896(33)×10^-34 J·s = 4.13566733(10)×10^-15 eV·s is Planck's constant.[7]
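As a quick numeric check of these relations, here is a minimal Python sketch using the constants quoted above:

    # lambda = c / f and E = h * f, with the values quoted above.
    c = 299_792_458          # speed of light in vacuum, m/s
    h_eV = 4.13566733e-15    # Planck's constant, eV*s

    f = 2.4e23               # gamma-ray frequency from the text, Hz
    wavelength_m = c / f     # ~1.2e-15 m: a fraction of the size of an atom
    energy_eV = h_eV * f     # ~1.0e9 eV: about 1 GeV, as stated above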
Whenever electromagnetic waves exist in a medium with matter, their wavelength is
decreased. Wavelengths of electromagnetic radiation, no matter what medium they are
traveling through, are usually quoted in terms of the vacuum wavelength, although this is not
always explicitly stated.
Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave,
terahertz (or sub-millimeter) radiation, infrared, the visible region we perceive as light,
ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its
wavelength. When EM radiation interacts with single atoms and molecules, its behavior also
depends on the amount of energy per quantum (photon) it carries.
Spectroscopy can detect a much wider region of the EM spectrum than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by and are important in the study of certain stellar nebulae,[8] and frequencies as high as 2.9×10^27 Hz have been detected from astrophysical sources.[9]
Rationale
Electromagnetic radiation interacts with matter in different ways in different parts of the
spectrum. The types of interaction can be so different that it seems to be justified to refer to
different types of radiation. At the same time, there is a continuum containing all these
"different kinds" of electromagnetic radiation. Thus we refer to a spectrum, but divide it up
based on the different interactions with matter.
Region of the spectrum | Main interactions with matter
Radio | Collective oscillation of charge carriers in bulk material (plasma oscillation). An example would be the oscillation of the electrons in an antenna.
Microwave through far infrared | Plasma oscillation, molecular rotation
Near infrared | Molecular vibration, plasma oscillation (in metals only)
Visible | Molecular electron excitation (including pigment molecules found in the human retina), plasma oscillations (in metals only)
Ultraviolet | Excitation of molecular and atomic valence electrons, including ejection of the electrons (photoelectric effect)
X-rays | Excitation and ejection of core atomic electrons, Compton scattering (for low atomic numbers)
Gamma rays | Energetic ejection of core electrons in heavy elements, Compton scattering (for all atomic numbers), excitation of atomic nuclei, including dissociation of nuclei
High-energy gamma rays | Creation of particle-antiparticle pairs. At very high energies a single photon can create a shower of high-energy particles and antiparticles upon interaction with matter.
Types of radiation
[Figure: the electromagnetic spectrum]
Boundaries
A discussion of the regions (or bands or types) of the electromagnetic spectrum is given
below. Note that there are no precisely defined boundaries between the bands of the
electromagnetic spectrum; rather they fade into each other like the bands in a rainbow (which
is the sub-spectrum of visible light). Radiation of each frequency and wavelength (or in each
band) will have a mixture of properties of two regions of the spectrum that bound it. For
example, red light resembles infrared radiation in that it can excite and add energy to some
chemical bonds and indeed must do so to power the chemical mechanisms responsible for
photosynthesis and the working of the visual system.
Regions of the spectrum
The types of electromagnetic radiation are broadly classified into the following classes:[3]
1. Gamma radiation
2. X-ray radiation
3. Ultraviolet radiation
4. Visible radiation
5. Infrared radiation
6. Terahertz radiation
7. Microwave radiation
8. Radio waves
This classification goes in the increasing order of wavelength, which is characteristic of the
type of radiation.
[3]
While, in general, the classification scheme is accurate, in reality there is
often some overlap between neighboring types of electromagnetic energy. For example, SLF
radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along
wires as electric power, although the latter is, in the strict sense, not electromagnetic radiation
at all (see near and far field)
The distinction between X-rays and gamma rays is partly based on sources: photons generated from nuclear decay or other nuclear and subnuclear/particle processes are always termed gamma rays, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons.[10][11][12] In general, nuclear transitions are much more energetic than electronic transitions, so gamma rays are more energetic than X-rays, but exceptions exist. By analogy to electronic transitions, muonic atom transitions are also said to produce X-rays, even though their energy may exceed 6 megaelectronvolts (0.96 pJ),[13] whereas there are many (77 known to be less than 10 keV (1.6 fJ)) low-energy nuclear transitions (e.g., the 7.6 eV (1.22 aJ) nuclear transition of thorium-229), and, despite being one million-fold less energetic than some muonic X-rays, the emitted photons are still called gamma rays due to their nuclear origin.[14]
However, the only universally respected convention is that EM radiation known to come from the nucleus is always called "gamma ray" radiation. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high energy physics and in medical radiotherapy, very high energy EMR (in the >10 MeV region), which is of higher energy than any nuclear gamma ray, is not referred to as either X-ray or gamma ray, but instead by the generic term "high energy photons."
The region of the spectrum in which a particular observed electromagnetic radiation falls is reference frame-dependent (due to the Doppler shift for light), so EM radiation that one
observer would say is in one region of the spectrum could appear to an observer moving at a
substantial fraction of the speed of light with respect to the first to be in another part of the
spectrum. For example, consider the cosmic microwave background. It was produced, when
matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state.
These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part
of the electromagnetic spectrum. Now this radiation has undergone enough cosmological red
shift to put it into the microwave region of the spectrum for observers moving slowly
(compared to the speed of light) with respect to the cosmos.
Radio frequency
Main articles: Radio frequency, Radio spectrum, and Radio waves
Radio waves generally are utilized by antennas of appropriate size (according to the principle
of resonance), with wavelengths ranging from hundreds of meters to about one millimeter.
They are used for transmission of data, via modulation. Television, mobile phones, wireless
networking, and amateur radio all use radio waves. The use of the radio spectrum is regulated
by many governments through frequency allocation.
Radio waves can be made to carry information by varying a combination of the amplitude,
frequency, and phase of the wave within a frequency band. When EM radiation impinges
upon a conductor, it couples to the conductor, travels along it, and induces an electric current
on the surface of that conductor by exciting the electrons of the conducting material. This
effect (the skin effect) is used in antennas.
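As a toy illustration of carrying information by varying amplitude, here is a minimal Python sketch (assuming NumPy is installed; the frequencies are arbitrary audio-range values, far below real radio carriers):

    import numpy as np

    fs = 48_000                       # sample rate in Hz (assumed)
    t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples

    carrier = np.sin(2 * np.pi * 10_000 * t)   # hypothetical carrier wave
    message = np.sin(2 * np.pi * 440 * t)      # hypothetical baseband tone

    # Amplitude modulation: the carrier's envelope tracks the message.
    am_signal = (1 + 0.5 * message) * carrier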
Microwaves
Main article: Microwaves
[Figure: plot of Earth's atmospheric transmittance (or opacity) to various wavelengths of electromagnetic radiation.]
The super-high frequency (SHF) and extremely high frequency (EHF) of microwaves come
after radio waves. Microwaves are waves that are typically short enough to employ tubular
metal waveguides of reasonable diameter. Microwave energy is produced with klystron and
magnetron tubes, and with solid state diodes such as Gunn and IMPATT devices.
Microwaves are absorbed by molecules that have a dipole moment in liquids. In a microwave
oven, this effect is used to heat food. Low-intensity microwave radiation is used in Wi-Fi,
although this is at intensity levels unable to cause thermal heating.
Volumetric heating, as used by microwave ovens, transfers energy through the material
electromagnetically, not as a thermal heat flux. The benefit of this is a more uniform heating
and reduced heating time; microwaves can heat material in less than 1% of the time of
conventional heating methods.
When active, the average microwave oven is powerful enough to cause interference at close
range with poorly shielded electromagnetic fields such as those found in mobile medical
devices and cheap consumer electronics.
Book ID: B0814
4. Describe the following:
a. IGP
b. OSPF
c. OSPF Message formats
Ans:
An interior gateway protocol (IGP) is a routing protocol that is used within an autonomous system (AS). In contrast, an Exterior Gateway Protocol (EGP) is used for determining network reachability between autonomous systems, and makes use of IGPs to resolve routes within an AS. Interior gateway protocols can be divided into two categories: 1) distance-vector routing protocols and 2) link-state routing protocols.
An Autonomous System is, in Internet (TCP/IP) terminology, a collection of gateways (routers) that fall under one administrative entity and cooperate using a common Interior Gateway Protocol (IGP).
IGP repository is an advanced digital preservation archive designed for critical, demanding, long-term data archiving across a wide range of organizational requirements. It isolates content and content management from technology and technology obsolescence, preparing the modern enterprise for a data-certain future.
It is purpose designed for:
Document management including images, office documents, maps, etc.
Asset management including images, audio and video
Records management with statutory compliance requirements
Archiving cultural artifacts (as digital surrogates) for museums and formal archives
Maintaining large data sets, including mixed datasets
The design is a faithful execution of the OAIS Reference Model for digital archives, the benchmark for information-system archives. IGP repository complies with a number of international standards for document and records management, and is designed specifically as a content management foundation to empower any organization to institute a best-practices business model for long-term content management.
(ii) OSPF:
(Open Shortest Path First) A routing protocol that determines the best path for routing IP traffic over a TCP/IP network based on distance between nodes and several quality parameters. OSPF is an interior gateway protocol (IGP), designed to work within an autonomous system. It is also a link-state protocol that generates less router-to-router update traffic than the RIP protocol (a distance-vector protocol) that it was designed to replace. OSPF is widely deployed in IP networks to manage intra-domain routing. As a link-state protocol, OSPF has routers reliably flood "Link State Advertisements" (LSAs), enabling each router to build a consistent, global view of the routing topology. Reliable performance hinges on routing stability. Internal processing delays in OSPF implementations affect the speed at which updates propagate in the network, the load on individual routers, and the time needed for both intra-domain and inter-domain routing.
Improving IP control-plane routing robustness is critical to the creation of reliable and stable IP services, yet very few tools exist for effective IP route monitoring and management. One line of work describes the architecture, design and deployment of a monitoring system for OSPF, an IP intra-domain routing protocol in wide use. Many recent router architectures decouple the routing engine from the forwarding engine, allowing packet forwarding to continue even when the routing process is not active. This opens up the possibility of using the forwarding capability of a router even when its routing process is brought down for a software upgrade. Due to the growing commercial importance of the Internet, resilience is becoming a key design issue for future IP-based networks. Reconfiguration times on the order of a few hundred milliseconds are required in the case of network element failures, far faster than the slow rerouting of current implementations.
(iii) OSPF Message Formats:
OSPF uses five different types of messages to communicate both link-state and general information
between routers within an autonomous system or area. To help illustrate better how the OSPF
messages are used, it's worth taking a quick look at the format used for each of these messages.
OSPF Common Header Format
Naturally, each type of OSPF message includes a slightly different set of information; otherwise, they wouldn't be different message types! However, they all share a similar message structure, beginning with a shared 24-byte header. This common header allows certain standard information to be conveyed in a consistent manner, such as the version of OSPF that generated the message. It also allows a device receiving an OSPF message to quickly determine which type of message it has received, so it knows whether or not it needs to bother examining the rest of the message.
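As a rough illustration of that layout, here is a minimal Python sketch of the OSPFv2 common header (field order per RFC 2328; the router ID and other values below are hypothetical placeholders):

    import struct

    # OSPFv2 common header: version (1 byte), type (1), packet length (2),
    # router ID (4), area ID (4), checksum (2), AuType (2), authentication (8)
    # -- 24 bytes in total, in network byte order.
    OSPF_HEADER = struct.Struct("!BBH4s4sHH8s")

    header = OSPF_HEADER.pack(
        2,                      # version: OSPFv2
        1,                      # message type 1 = Hello
        24,                     # packet length (header only in this sketch)
        bytes([192, 0, 2, 1]),  # router ID (hypothetical)
        bytes(4),               # area ID 0.0.0.0, the backbone area
        0,                      # checksum (omitted in this sketch)
        0,                      # AuType 0 = no authentication
        bytes(8),               # authentication field
    )
    assert len(header) == 24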
5. Describe the following with respect to Internet Security:
a. Cryptography
b. DES Algorithm
Ans:
Until modern times cryptography referred almost exclusively to encryption, which is the process of converting ordinary information (plaintext) into unintelligible gibberish (i.e., ciphertext).[2] Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms which create the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a key. This is a secret parameter (ideally known only to the communicants) for a specific message exchange context. Keys are important, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used, and are therefore less than useful for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.

In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "apple pie" replaces "attack at dawn"). Codes are no longer used in serious cryptography, except incidentally for such things as unit designations (e.g., Bronco Flight or Operation Overlord), since properly chosen ciphers are both more practical and more secure than even the best codes, and are also better adapted to computers.
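To make the "algorithm plus secret key" idea concrete, here is a toy Python sketch of a repeating-key XOR cipher. It illustrates only the plaintext/ciphertext/key roles; as the text's warning about weak ciphers suggests, it is trivially breakable and not fit for real use:

    from itertools import cycle

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # XOR each plaintext byte with the next key byte, repeating the key.
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    ciphertext = xor_cipher(b"attack at dawn", b"secret")
    # XOR is its own inverse, so applying the same key decrypts.
    assert xor_cipher(ciphertext, b"secret") == b"attack at dawn"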
The most ancient and basic problem of cryptography is secure communication over an insecure channel: party A wants to send party B a secret message over a communication line which may be tapped by an adversary. In the computer industry, cryptography refers to techniques for ensuring that data stored in a computer cannot be read or compromised by any individuals without authorization. Most security measures involve data encryption and passwords. Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. A password is a secret word or phrase that gives a user access to a particular program or system. Modern cryptography abandons the assumption that the adversary has infinite computing resources available, and assumes instead that the adversary's computation is resource-bounded in some reasonable way. In particular, in these notes we will assume that the adversary is a probabilistic algorithm that runs in polynomial time. Similarly, the encryption and decryption algorithms designed are probabilistic and run in polynomial time. The running times of the encryption, decryption, and adversary algorithms are all measured as a function of a security parameter k, which is fixed at the time the cryptosystem is set up. Thus, when we say that the adversary algorithm runs in polynomial time, we mean time bounded by some polynomial function in k.
Accordingly, in modern cryptography we speak of the infeasibility of breaking the encryption system and computing information about exchanged messages, whereas historically one spoke of the impossibility of breaking the encryption system and finding information about exchanged messages. We note that the encryption systems which we will describe and claim "secure" with respect to the new adversary are not "secure" with respect to a computationally unbounded adversary in the way that the one-time pad system was secure against an unbounded adversary. But, on the other hand, it is no longer necessarily true that the secret key that A and B meet and agree on before remote transmission must be as long as the total number of secret bits ever to be exchanged securely remotely. In fact, at the time of the initial meeting, A and B do not need to know in advance how many secret bits they intend to send in the future. We will show how to construct such encryption systems, for which the number of messages to be exchanged securely can be a polynomial in the length of the common secret key. How we construct them brings us to another fundamental issue, namely that of cryptographic, or complexity, assumptions.
(ii) Data Encryption Standard (DES):
The Data Encryption Standard (DES) is the quintessential block cipher. Even though it is now quite old, and on the way out, no discussion of block ciphers can really omit mention of this construction. DES is a remarkably well-engineered algorithm which has had a powerful influence on cryptography. It is in very widespread use, and probably will be for some years to come. Every time you use an ATM machine, you are using DES.
Brief history
In 1972 the NBS (National Bureau of Standards, now NIST, the National Institute of Standards and
Technology) initiated a program for data protection and wanted as part of it an encryption algorithm
that could be standardized. They put out a request for such an algorithm. In 1974, IBM responded
with a design based on their "Lucifer" algorithm. This design would eventually evolve into the DES. DES has a key length of k = 56 bits and a block length of n = 64 bits. It consists of 16 rounds of what is called a "Feistel network." We will describe more details shortly. After NBS, several other bodies
adopted DES as a standard, including ANSI (the American National Standards Institute) and the
American Bankers Association.
The standard was to be reviewed every five years to see whether or not it should be re-adopted. Although there were claims that it would not be re-certified, the algorithm was re-certified again and again. Only recently did the work of finding a replacement begin in earnest, in the form of the AES (Advanced Encryption Standard).
Construction
The DES algorithm takes as input a 56-bit key K and a 64-bit plaintext M. The key schedule KeySchedule produces from the 56-bit key K a sequence of 16 subkeys, one for each of the rounds.
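To illustrate the round structure, here is a toy Feistel network in Python. This is a simplified sketch only: real DES adds initial and final permutations, an expansion step, S-boxes and a proper key schedule, none of which are modeled here, and the round function below is a stand-in:

    def toy_round_function(half, subkey):
        # Stand-in for DES's f-function (expansion, S-boxes, permutation).
        return (half * 31 + subkey) & 0xFFFFFFFF   # keep the result to 32 bits

    def feistel_encrypt(block64, subkeys):
        left, right = block64 >> 32, block64 & 0xFFFFFFFF
        for k in subkeys:                          # DES uses 16 rounds
            left, right = right, left ^ toy_round_function(right, k)
        return (left << 32) | right

    def feistel_decrypt(block64, subkeys):
        left, right = block64 >> 32, block64 & 0xFFFFFFFF
        for k in reversed(subkeys):                # same rounds, keys reversed
            left, right = right ^ toy_round_function(left, k), left
        return (left << 32) | right

    subkeys = list(range(1, 17))                   # 16 toy subkeys
    ct = feistel_encrypt(0x0123456789ABCDEF, subkeys)
    assert feistel_decrypt(ct, subkeys) == 0x0123456789ABCDEF

A key property visible in the sketch is that decryption is the same network run with the subkeys in reverse order, which is what makes the Feistel construction invertible even when the round function itself is not.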
The algorithm as a standard
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on
15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently
reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in
1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally
superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May
2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030
for sensitive government information.[10]
The algorithm is also specified in ANSI X3.92,[11] NIST SP 800-67[10] and ISO/IEC 18033-3[12] (as
a component of TDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was a brute force attack
in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a
replacement algorithm. These and other methods of cryptanalysis have been studied extensively in the cryptographic literature.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective, DES is the archetypal block cipher: an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the
transformation, so that decryption can supposedly only be performed by those who know the
particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are
actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter
discarded. Hence the effective key length is 56 bits, and it is usually quoted as such. Like other block
ciphers, DES by itself is not a secure means of encryption but must instead be used in a mode of
operation. FIPS-81 specifies several modes for use with DES.[17] Further comments on the usage of
DES are contained in FIPS-74.[18]
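As a usage sketch, assuming the third-party PyCryptodome package is installed (and with DES shown for illustration only, since its 56-bit key is brute-forceable), CBC mode, one of the FIPS-81 modes, could be exercised like this:

    from Crypto.Cipher import DES
    from Crypto.Util.Padding import pad, unpad

    key = b"8bytekey"                    # 64 bits on the wire, 56 effective
    cipher = DES.new(key, DES.MODE_CBC)  # a random IV is generated for us
    ciphertext = cipher.encrypt(pad(b"attack at dawn", DES.block_size))

    # Decryption needs the same key and the IV used at encryption time.
    decipher = DES.new(key, DES.MODE_CBC, iv=cipher.iv)
    plaintext = unpad(decipher.decrypt(ciphertext), DES.block_size)
    assert plaintext == b"attack at dawn"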
6. What are Digital Signatures? Discuss their merits and drawbacks.
Ans:
A digital signature or digital signature scheme is a mathematical scheme for demonstrating
the authenticity of a digital message or document. A valid digital signature gives a recipient
reason to believe that the message was created by a known sender such that they cannot deny
sending it (authentication and non-repudiation) and that the message was not altered in transit
(integrity). Digital signatures are commonly used for software distribution, financial
transactions, and in other cases where it is important to detect forgery or tampering.
Explanation
Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature,[1] but not all electronic signatures use digital signatures.[2][3] In some countries, including the United States, India,[4] and members of the European Union, electronic signatures have legal significance.
Digital signatures employ a type of asymmetric cryptography. For messages sent through a
nonsecure channel, a properly implemented digital signature gives the receiver reason to
believe the message was sent by the claimed sender. Digital signatures are equivalent to
traditional handwritten signatures in many respects, but properly implemented digital
signatures are more difficult to forge than the handwritten type. Digital signature schemes in
the sense used here are cryptographically based, and must be implemented properly to be
effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot
successfully claim they did not sign a message, while also claiming their private key remains
secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so
that even if the private key is exposed, the signature is valid. Digitally signed messages may
be anything representable as a bitstring: examples include electronic mail, contracts, or a
message sent via some other cryptographic protocol.
Definition
Main article: Public-key cryptography
A digital signature scheme typically consists of three algorithms:
1. A key generation algorithm that selects a private key uniformly at random from a set of possible private keys. The algorithm outputs the private key and a corresponding public key.
2. A signing algorithm that, given a message and a private key, produces a signature.
3. A signature verifying algorithm that, given a message, public key and a signature, either accepts or rejects the message's claim to authenticity.
Two main properties are required. First, a signature generated from a fixed message and fixed
private key should verify the authenticity of that message by using the corresponding public
key. Secondly, it should be computationally infeasible to generate a valid signature for a
party who does not possess the private key.
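A minimal sketch of these three algorithms in Python, assuming the third-party `cryptography` package and RSA-PSS with SHA-256 (any standard signature scheme would serve the same purpose):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # 1. Key generation: a random private key and its public counterpart.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"wire $1000 to account 42"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # 2. Signing: message + private key -> signature.
    signature = private_key.sign(message, pss, hashes.SHA256())

    # 3. Verification: raises InvalidSignature if the message or the
    #    signature was altered in transit; otherwise returns silently.
    public_key.verify(signature, message, pss, hashes.SHA256())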
Advantages and Disadvantages of E-Signature: As with any technology, digital signatures have both advantages and disadvantages.
The advantages of using digital signatures include:
Imposter prevention: digital signatures make it far harder for an imposter to commit fraud by signing a document in someone else's name. Since a digital signature cannot be altered without detection, forging one is impractical.
Message integrity: a digital signature demonstrates that the document is valid, assuring the recipient that it is free from forgery or false information.
Legal requirements: using a digital signature can satisfy legal requirements for the document in question, taking care of the formal legal aspects of executing it.
The disadvantages of using digital signatures involve the primary concern for any business: money. A business may have to spend more than usual to work with digital signatures, including buying certificates from certification authorities and acquiring the verification software.