
Computer Applications Project-1

The Internet
The Internet is a worldwide, publicly accessible series of interconnected
computer networks that transmit data by packet switching using the standard
Internet Protocol (IP). It is a "network of networks" that consists of millions of
smaller domestic, academic, business, and government networks, which
together carry various information and services, such as electronic mail, online
chat, file transfer, and the interlinked web pages and other resources of the
World Wide Web (WWW).

Internet vs. World Wide Web


The Internet and the World Wide Web are not one and the same. The
Internet is a collection of interconnected computer networks, linked by copper
wires, fiber-optic cables, wireless connections, etc. In contrast, the Web is a
collection of interconnected documents and other resources, linked by
hyperlinks and URLs. The World Wide Web is one of the services accessible via
the Internet, along with various others including e-mail, file sharing, online
gaming and others. However, "the Internet" and "the Web" are commonly
used interchangeably in non-technical settings.


Evolution of the Internet

The Internet was the result of some visionary thinking by people in the
early 1960s who saw great potential value in allowing computers to share
information on research and development in scientific and military fields.
J.C.R. Licklider of MIT first proposed a global network of computers in 1962,
and moved over to the Defense Advanced Research Projects Agency (DARPA)
in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later
UCLA developed the theory of packet switching, which was to form the basis
of Internet connections. Lawrence Roberts of MIT connected a Massachusetts
computer with a California computer in 1965 over dial-up telephone lines. It
showed the feasibility of wide area networking, but also showed that the
telephone line's circuit switching was inadequate. Kleinrock's packet switching
theory was confirmed. Roberts moved over to DARPA in 1966 and developed
his plan for ARPANET. These visionaries and many more left unnamed here
are the real founders of the Internet.

When Senator Ted Kennedy heard in 1968 that the pioneering Massachusetts company BBN had won the ARPA contract for an "interface message processor (IMP)," he sent a congratulatory telegram to BBN for their ecumenical spirit in winning the "interfaith message processor" contract.

The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA), which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.

Who was the first to use the Internet?

Charley Kline at UCLA sent the first packets on ARPANET as he tried to connect to Stanford Research Institute on Oct 29, 1969. The system crashed as he reached the G in LOGIN!

The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes.

The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.

Did Al Gore invent the Internet?

According to a CNN transcript of an interview with Wolf Blitzer, Al Gore said, "During my service in the United States Congress, I took the initiative in creating the Internet." Al Gore was not yet in Congress in 1969 when ARPANET started or in 1974 when the term Internet first came into use. Gore was elected to Congress in 1976. In fairness, Bob Kahn and Vint Cerf acknowledge in a paper titled Al Gore and the Internet that Gore has probably done more than any other elected official to support the growth and development of the Internet from the 1970s to the present.


E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFCs were available electronically to anyone who had use of the ftp protocol.

Libraries began automating and networking their catalogs in the late 1960s, independently of ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the '60s and '70s. In the mid-1970s more regional consortia from New England, the Southwest states, and the Middle Atlantic states, etc., joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270 and only many years later through the web. See The History of OCLC.

Ethernet, a protocol for many local networks, appeared in 1974, an outgrowth of Harvard student Bob Metcalfe's dissertation on "Packet Networks." The dissertation was initially rejected by the University for not being analytical enough. It later won acceptance when he added some more equations to it.

The Internet matured in the '70s as a result of the TCP/IP architecture, first proposed by Bob Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout the '70s. It was adopted by the Defense Department in 1980, replacing the earlier Network Control Protocol (NCP), and universally adopted by 1983.

The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.

Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.

In 1986, the National Science Foundation funded NSFNet as a cross-country 56 Kbps backbone for the Internet. They maintained their sponsorship for nearly a decade, setting rules for its non-commercial government and research uses.

As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for non-technical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people, in universities in particular. Other departments besides the libraries and the computer, physics, and engineering departments found ways to make good use of the nets--to communicate with colleagues around the world and to share files and resources.

While the number of sites on the Internet was small, it was fairly easy to
keep track of the resources of interest that were available. But as more and
more universities and organizations--and their libraries-- connected, the
Internet became harder and harder to track. There was more and more need
for tools to index the resources that were available.

The first effort, other than library catalogs, to index the Internet was created in 1989, when Peter Deutsch and his crew at McGill University in Montreal created an archive for ftp sites, which they named Archie. This software would periodically reach out to all known openly available ftp sites, list their files, and build a searchable index of the software. The commands to search Archie were Unix commands, and it took some knowledge of Unix to use it to its full capability.

McGill University, which hosted the first Archie, found out one day that half the Internet traffic going into Canada from the United States was accessing Archie. Administrators were concerned that the University was subsidizing such a volume of traffic, and closed down Archie to outside access. Fortunately, by that time, there were many more Archies available.

At about the same time, Brewster Kahle, then at Thinking Machines Corp., developed his Wide Area Information Server (WAIS), which would index the full text of files in a database and allow searches of the files. There were several versions with varying degrees of complexity and capability developed, but the simplest of these were made available to everyone on the nets. At its peak, Thinking Machines maintained pointers to over 600 databases around the world which had been indexed by WAIS. They included such things as the full set of Usenet Frequently Asked Questions files, the full documentation of working papers such as RFCs by those developing the Internet's standards, and much more. Like Archie, its interface was far from intuitive, and it took some effort to learn to use it well.

Peter Scott of the University of Saskatchewan, recognizing the need to bring together information about all the telnet-accessible library catalogs on the net, as well as other telnet resources, brought out his Hytelnet catalog in 1990. It gave a single place to get information about library catalogs and other telnet resources and how to use them. He maintained it for years, and added HyWebCat in 1997 to provide information on web-based catalogs.

In 1991, the first really friendly interface to the Internet was developed
at the University of Minnesota. The University wanted to develop a simple
menu system to access files and information on campus through their local
network. A debate followed between mainframe adherents and those who
believed in smaller systems with client-server architecture. The mainframe
adherents "won" the debate initially, but since the client-server advocates said
they could put up a prototype very quickly, they were given the go-ahead to
do a demonstration system. The demonstration system was called a gopher
after the U of Minnesota mascot--the golden gopher. The gopher proved to be
very prolific, and within a few years there were over 10,000 gophers around
the world. It takes no knowledge of Unix or computer architecture to use. In a gopher system, you type or click on a number to select the menu selection you want.

Gopher's usability was enhanced much more when the University of Nevada at Reno developed the VERONICA searchable index of gopher menus. It was purported to be an acronym for Very Easy Rodent-Oriented Net-wide Index to Computerized Archives. A spider crawled gopher menus around the world, collecting links and retrieving them for the index. It was so popular that it was very hard to connect to, even though a number of other VERONICA sites were developed to ease the load. Similar indexing software was developed for single sites, called JUGHEAD (Jonzy's Universal Gopher Hierarchy Excavation And Display).

Peter Deutsch, who developed Archie, always insisted that Archie was short for Archiver, and had nothing to do with the comic strip. He was disgusted when VERONICA and JUGHEAD appeared.

In 1989 another significant event took place in making the nets easier to
use. Tim Berners-Lee and others at the European Laboratory for Particle
Physics, more popularly known as CERN, proposed a new protocol for
information distribution. This protocol, which became the World Wide Web in
1991, was based on hypertext--a system of embedding links in text to link to
other text, which you have been using every time you selected a text link while
reading these pages. Although started before gopher, it was slower to
develop.

The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Later, Andreessen moved to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.

MICHAEL DERTOUZOS
1936-2001


The early days of the web were a confused period as many developers tried to put their personal stamp on ways the web should develop. The web was threatened with becoming a mass of unrelated protocols that would require different software for different applications. The visionary Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser.

Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early '90s, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government-funded NSFNet Internet backbone.

Delphi was the first national commercial online service to offer Internet
access to its subscribers. It opened up an email connection in July 1992 and full
Internet service in November 1992. All pretenses of limitations on commercial
use disappeared in May 1995 when the National Science Foundation ended its
sponsorship of the Internet backbone, and all traffic relied on commercial
networks. AOL, Prodigy, and CompuServe came online. Since commercial
usage was so widespread by this time and educational institutions had been
paying their own way for some time, the loss of NSF funding had no
appreciable effect on costs.

Today, NSF funding has moved beyond supporting the backbone and
higher educational institutions to building the K-12 and local public library
accesses on the one hand, and the research on the massive high volume
connections on the other.


Microsoft's full scale entry into the browser, server, and Internet Service
Provider market completed the major shift over to a commercially based
Internet. The release of Windows 98 in June 1998 with the Microsoft browser
well integrated into the desktop shows Bill Gates' determination to capitalize
on the enormous growth of the Internet. Microsoft's success over the past few
years has brought court challenges to their dominance. We'll leave it up to you
whether you think these battles should be played out in the courts or the
marketplace.

During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that work. Free services supported by advertising shifted some of the direct costs away from the consumer--temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales have grown rapidly for such products as books, music CDs, and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, which try to provide everything for everybody, and live auctions. AOL's acquisition of Time-Warner was the largest merger in history when it took place and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dotcoms, encountered good news and bad. The decline in advertising income spelled doom for many dotcoms, and a major shakeout and search for better business models took place by the survivors.

A current trend with major implications for the future is the growth of high-speed connections. 56K modems and the providers who supported them spread widely for a while, but this is the low end now. 56K is not fast enough to carry multimedia, such as sound and video, except in low quality. But new technologies many times faster, such as cable modems and digital subscriber lines (DSL), are predominant now.

Wireless has grown rapidly in the past few years, and travelers search
for the Wi-Fi "hot spots" where they can connect while they are away from the
home or office. Many airports, coffee bars, hotels and motels now routinely
provide these services, some for a fee and some for free.

The next big growth area is the surge towards universal wireless access, where almost everywhere is a "hot spot". Municipal Wi-Fi or city-wide access, WiMAX offering broader ranges than Wi-Fi, Verizon's EV-DO, and other formats will joust for dominance in the USA in the months ahead. The battle is both economic and political.

Another trend that is beginning to affect web designers is the growth of smaller devices to connect to the Internet. Small tablets, pocket PCs, smart phones, game machines, and even GPS devices are now capable of tapping into the web on the go, and many web pages are not designed to work on that scale.

As Heraclitus said in the 5th century BC, "Nothing is permanent, but change!"

May you live in interesting times! (Ostensibly an ancient Chinese curse)

Key terms related to the Internet

Browser:
The software used to access and view sites on the World Wide Web.

Gopher:
A method of making text-only material available over the Internet, so it can be
viewed online. Gopher servers were used widely before the advent of the
World Wide Web, and there are still many in operation. They can be accessed
through a Web browser, so many people think of them as a part of the Web.

Home Page:
Originally this referred to the Web page that your Web browser is set to view
when it starts up. It now also refers to the main Web page of a business,
organization, or person or the main page of a collection of Web pages.

Hypertext:
Text of a document that has been linked to other sections of the document or
to other documents that gives more information about the subject. Usually
you follow these links by clicking on them with a computer mouse. Links can
also be made to graphic images, to music, or to any digitized document. For
example, in a hypertext document on health, you could follow links to
definitions of specific terms, to audio files to hear the pronunciation of a word,
and to the Web page of an organization that does work in a specific area.

Hypertext Markup Language (HTML):
This is the language that is used to create hypertext. It consists of various types of "tags" that indicate to a Web browser how specific text is to be displayed on the computer screen, as well as where in the document or where on the Internet to find a link.
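
To make the idea of tags concrete, here is a small illustrative sketch (the HTML fragment is invented for the example, not taken from this document) using Python's standard html.parser module to walk a fragment and report each tag, including the link target stored in an href attribute:

    # A minimal sketch: how a browser-like parser sees HTML tags.
    # The fragment below is a made-up example.
    from html.parser import HTMLParser

    class TagPrinter(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start tag:", tag, dict(attrs))

        def handle_endtag(self, tag):
            print("end tag:", tag)

        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    fragment = '<p>Read the <a href="http://example.com/">linked page</a>.</p>'
    TagPrinter().feed(fragment)

The a tag with its href attribute is exactly the kind of marker described above: it tells the browser both how to treat the enclosed text and where on the Internet the link leads.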

Log In and Log Out:
Every computer network must have some way to identify who is online at any given moment and to make sure that they are all legitimate users. For that reason, you must log in to a computer system before you use it. This is usually simply a matter of giving the computer your assigned name and a password that you have chosen. You must also log out of the system when you are finished.

Modem (MOdulator/DEModulator):
A device that converts the digital signals of a computer into analog signals
(sound) that can be transmitted over an ordinary telephone line.

Server:
A special-purpose computer that distributes data to other computers, either when requested to or when it has been programmed to. Examples include World Wide Web and FTP servers.

Uniform Resource Locator (URL):
The standard way to give the address of any resource on the Internet. This is the address you enter into the "go to" line of a Web browser in order to visit that site. The URL "http://www.info.usaid.gov/leland/resource.htm" represents the following:

http:// indicates the type of protocol for the computer and network to use in requesting and receiving data from the server. In this case it is hypertext transfer protocol.

www.info.usaid.gov is the name of the actual server (computer) that you are contacting.

/leland is the name of the directory (or folder--the place on the computer) where the file you are requesting can be found. There may be several subdirectories, each separated by a slash (/).

/resource.htm is the name of the actual file, or page, that you are requesting.
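
As a sketch of the same anatomy in code (using the URL quoted above, with Python's standard urllib.parse module), a URL can be split into these parts programmatically:

    # Splitting a URL into protocol, server, and path components.
    from urllib.parse import urlparse

    parts = urlparse("http://www.info.usaid.gov/leland/resource.htm")
    print(parts.scheme)  # http -> the protocol
    print(parts.netloc)  # www.info.usaid.gov -> the server being contacted
    print(parts.path)    # /leland/resource.htm -> directory and file name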

Webpage:
A single document on the World Wide Web containing related information on a topic.

Website:
A collection of related web pages.

World Wide Web:
The World Wide Web (WWW or the Web) is by far the most popular application on the Internet, and with good reason. The WWW integrates text in any format, sound, video, and virtually any kind of information that can be sent in a digital format. The basic language of the Web is hypertext markup language, which is used to determine what the information will look like and point to where you can find the links. The Web makes it easy to use the resources of the Internet without using a lot of computer commands and knowing a lot about computers. Modern Web browsers, such as Netscape Navigator and Microsoft's Internet Explorer, can request and display the information from a Web server in exactly the way that the originator intended. As information on the Web gets easier to find, it is becoming more useful as a tool for conducting everyday business, as well as bringing the world of information to your desktop.

Search Engine

A search engine is a program that searches documents for specified keywords and returns a list of the documents where the keywords were found. Although search engine is really a general class of programs, the term is often used to specifically describe systems like Alta Vista and Excite that enable users to search for documents on the World Wide Web and USENET newsgroups.

Typically, a search engine works by sending out a spider to fetch as many documents as possible. Another program, called an indexer, then reads these documents and creates an index based on the words contained in each document. Each search engine uses a proprietary algorithm to create its indices such that, ideally, only meaningful results are returned for each query.
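
As a rough illustration of what an indexer produces (a toy sketch, not any real engine's proprietary algorithm), the following Python builds a tiny inverted index that maps each word to the set of documents containing it:

    # A toy inverted index: word -> set of document ids containing it.
    from collections import defaultdict

    docs = {
        1: "the internet is a network of networks",
        2: "the web is a service on the internet",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)

    print(sorted(index["internet"]))  # [1, 2] -> both documents match
    print(sorted(index["web"]))       # [2]    -> only document 2 matches

A query is then answered by looking up each keyword in the index and intersecting the resulting document sets, which is far faster than scanning every document for every query.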

There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders), those that are powered by human submissions, and those that are a hybrid of the two.

Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information back to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.

E-Mail

E-mail (electronic mail) is the exchange of computer-stored messages by telecommunication. (Some publications spell it email; we prefer the currently more established spelling of e-mail.) E-mail messages are usually encoded in ASCII text. However, you can also send non-text files, such as graphic images and sound files, as attachments sent in binary streams. E-mail was one of the first uses of the Internet and is still the most popular use. A large percentage of the total traffic over the Internet is e-mail. E-mail can also be exchanged between online service provider users and in networks other than the Internet, both public and private.

E-mail can be distributed to lists of people as well as to individuals. A shared distribution list can be managed by using an e-mail reflector. Some mailing lists allow you to subscribe by sending a request to the mailing list administrator. A mailing list that is administered automatically is called a list server.

E-mail is one of the protocols included with the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols. A popular protocol for sending e-mail is Simple Mail Transfer Protocol (SMTP), and a popular protocol for receiving it is POP3. Both Netscape and Microsoft include an e-mail utility with their Web browsers.
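
As an illustrative sketch only (the mail server, addresses, and message are hypothetical placeholders), sending a message over SMTP with Python's standard smtplib might look like this:

    # Sending a plain-text message via SMTP (hypothetical host and addresses).
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Hello over SMTP"
    msg.set_content("E-mail was one of the first uses of the Internet.")

    with smtplib.SMTP("mail.example.com", 25) as smtp:  # placeholder server
        smtp.send_message(msg)

Retrieving the message on the other end would use a separate protocol such as POP3, exactly as noted above.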

Origin
E-mail predates the inception of the Internet, and was in fact a crucial tool in creating the Internet. MIT first demonstrated the Compatible Time-Sharing System (CTSS) in 1961.[14] It allowed multiple users to log into the IBM 7094[15] from remote dial-up terminals, and to store files online on disk. This new ability encouraged users to share information in new ways. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the exact history is murky, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.

E-mail was quickly extended to become network e-mail, allowing users to pass
messages between different computers by at least 1966 (it is possible the
SAGE system had something similar some time before).

The ARPANET computer network made a large contribution to the development of e-mail. There is one report that indicates experimental inter-system e-mail transfers on it shortly after its creation in 1969.[16] Ray Tomlinson initiated the use of the @ sign to separate the names of the user and their machine in 1971.[17] The ARPANET significantly increased the popularity of e-mail, and it became the killer app of the ARPANET.

Advantages of E-mail
Using e-mail is like composing or reading a paper message, with several distinct differences:

 It's faster. E-mail is received almost immediately after it is sent, usually within minutes.
 It's more conversational. Because of its immediacy, a whole series of e-mail messages may be exchanged within a very short time. As a result, e-mail messages tend to be less formal and they are also usually shorter and more to the point.
 It's easier to reference. The text of a previous e-mail message can easily be included as part of a reply to that message. Thus, e-mail correspondents are able to keep the replies in context for each message. Including this context is not only polite, but also makes an e-mail message more accurate and understandable.

ISP
An ISP (Internet service provider) is a company that provides individuals and other companies access to the Internet and other related services such as Web site building and virtual hosting. An ISP has the equipment and the telecommunication line access required to have a point-of-presence on the Internet for the geographic area served. The larger ISPs have their own high-speed leased lines so that they are less dependent on the telecommunication providers and can provide better service to their customers. Among the largest national and regional ISPs are AT&T WorldNet, IBM Global Network, MCI, Netcom, UUNet, and PSINet.

ISPs also include regional providers such as New England's NEARNet and the San Francisco Bay Area's BARNet. They also include thousands of local providers. In addition, Internet users can also get access through online service providers (OSPs) such as America Online and CompuServe.

The larger ISPs interconnect with each other through MAE (ISP
switching centers run by MCI WorldCom) or similar centers. The arrangements
they make to exchange traffic are known as peering agreements. There are
several very comprehensive lists of ISPs world-wide available on the Web.

An ISP is also sometimes referred to as an IAP (Internet access provider). ISP is sometimes used as an abbreviation for independent service provider, to distinguish a service provider that is an independent, separate company from a telephone company.

How ISPs connect to the Internet

Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. In the simplest case, a single connection is established to an upstream ISP using one of the technologies described above, and the ISP uses this connection to send or receive any data to or from parts of the Internet beyond its own network; in turn, the upstream ISP uses its own upstream connection, or connections to its other customers (usually other ISPs), to allow the data to travel from source to destination.

In reality, the situation is often more complicated. For example, ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and have connections to each one at one or more of their PoPs. ISPs may engage in peering, where multiple ISPs interconnect with one another at a peering point or Internet exchange point (IX), allowing the routing of data between their networks without charging one another for that data--data that would otherwise have passed through their upstream ISPs, incurring charges from the upstream ISP. ISPs that require no upstream and have only customers and/or peers are called Tier 1 ISPs, indicating their status as ISPs at the top of the Internet hierarchy. Routers, switches, Internet routing protocols, and the expertise of network administrators all have a role to play in ensuring that data follows the best available route and that ISPs can "see" one another on the Internet.
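
As a loose illustration of "best available route" selection (a toy model with made-up networks and link costs, not a real routing protocol such as BGP), the sketch below picks the cheapest path through a small graph of interconnected providers:

    # Toy cheapest-path routing over a small graph of providers.
    import heapq

    links = {  # hypothetical peering/transit links and their costs
        "ISP-A": {"ISP-B": 1, "Tier1-X": 3},
        "ISP-B": {"ISP-A": 1, "Tier1-X": 1},
        "Tier1-X": {"ISP-A": 3, "ISP-B": 1},
    }

    def cheapest_path(src, dst):
        heap, seen = [(0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, c in links[node].items():
                heapq.heappush(heap, (cost + c, neighbor, path + [neighbor]))
        return None

    print(cheapest_path("ISP-A", "Tier1-X"))  # (2, ['ISP-A', 'ISP-B', 'Tier1-X'])

Here the direct link from ISP-A to the Tier 1 network costs more than relaying through the peer, so traffic takes the two-hop route, much as real routing prefers cheaper or better paths when several exist.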

End-User-to-ISP Connection
ISPs employ a range of technologies to enable consumers to connect to their
network.

For home users and small businesses, the most popular options include dial-up, DSL (typically Asymmetric Digital Subscriber Line, ADSL), broadband wireless, cable modem, fiber to the home (FTTH), and Integrated Services Digital Network (ISDN), typically basic rate interface (BRI).

For customers with more demanding requirements, such as medium-to-large businesses, or other ISPs, DSL (often SHDSL or ADSL), Ethernet, Metro Ethernet, Gigabit Ethernet, Frame Relay, ISDN (BRI or PRI), ATM, satellite Internet access, and synchronous optical networking (SONET) are more likely to be used.

With the increasing popularity of downloading music and online video and the
general demand for faster page loads, higher bandwidth connections are
becoming more popular.

Typical home user connection

 DSL
 Broadband wireless access
 Cable modem
 FTTH
 ISDN

Typical business type connection

 DSL
 SHDSL
 Ethernet technologies


When using a dial-up or ISDN connection method, the ISP cannot determine the caller's physical location in any more detail than the number transmitted via an appropriate form of Caller ID provides; it is entirely possible, for example, to connect to an ISP located in Mexico from the U.S. Other means of connection, such as cable or DSL, require a fixed registered connection node, usually associated at the ISP with a physical address.

Types of internet connections

Analog (up to 56k):
Also called dial-up access, it is both economical and slow. Using a modem connected to your PC, users connect to the Internet when the computer dials a phone number (which is provided by your ISP) and connects to the network. Dial-up is an analog connection because data is sent over an analog, public telephone network. The modem converts received analog data to digital and vice versa. Because dial-up access uses normal telephone lines, the quality of the connection is not always good and data rates are limited.

 Typical dial-up connection speeds range from 2400 bps to 56 Kbps.
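
For perspective, a back-of-the-envelope calculation (illustrative numbers, ignoring protocol overhead) shows why dial-up feels slow:

    # Rough time to move a 1-megabyte file over a 56 Kbps dial-up link.
    file_bits = 1_000_000 * 8  # 1 MB expressed in bits
    link_bps = 56_000          # 56 Kbps under ideal line conditions
    print(file_bits / link_bps, "seconds")  # ~142.9 s, well over two minutes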

ISDN:
Integrated Services Digital Network (ISDN) is an international
communications standard for sending voice, video, and data over digital
telephone lines or normal telephone wires.

 Typical ISDN speeds range from 64 Kbps to 128 Kbps.

B-ISDN:
Broadband ISDN is similar in function to ISDN but it transfers data over fiber
optic telephone lines, not normal telephone wires. SONET is the physical
transport backbone of B-ISDN. Broadband ISDN has not been widely
implemented.

DSL:
DSL is also called an always-on connection because it uses existing 2-wire copper telephone line connected to the premises and will not tie up your phone as a dial-up connection does. There is no need to dial in to your ISP, as DSL is always on. The two main categories of DSL for home subscribers are called ADSL and SDSL.

ADSL:
ADSL is the most commonly deployed type of DSL in North America. Short for asymmetric digital subscriber line, ADSL supports data rates from 1.5 to 9 Mbps when receiving data (known as the downstream rate) and from 16 to 640 Kbps when sending data (known as the upstream rate). ADSL requires a special ADSL modem.

SDSL:
SDSL is still more common in Europe. Short for symmetric digital subscriber line, it is a technology that allows more data to be sent over existing copper telephone lines (POTS). SDSL supports data rates up to 3 Mbps. SDSL works by sending digital pulses in the high-frequency area of telephone wires and cannot operate simultaneously with voice connections over the same wires. SDSL requires a special SDSL modem. SDSL is called symmetric because it supports the same data rates for upstream and downstream traffic.

VDSL:
Very High DSL (VDSL) is a DSL technology that offers fast data rates over
relatively short distances — the shorter the distance, the faster the connection
rate.

 All types of DSL technologies are collectively referred to as xDSL.
 xDSL connection speeds range from 128 Kbps to 8 Mbps.

Cable:
Through the use of a cable modem you can have a broadband Internet
connection that is designed to operate over cable TV lines. Cable Internet
works by using TV channel space for data transmission, with certain channels
used for downstream transmission, and other channels for upstream
transmission. Because the coaxial cable used by cable TV provides much
greater bandwidth than telephone lines, a cable modem can be used to
achieve extremely fast access.

 Cable speeds range from 512 Kbps to 20 Mbps.


Wireless Internet Connection:
Wireless Internet, or wireless broadband, is one of the newest Internet connection types. Instead of using telephone or cable networks for your Internet connection, you use radio frequency bands. Wireless Internet provides an always-on connection which can be accessed from anywhere--as long as you are geographically within a network coverage area. Wireless access is still considered to be relatively new, and it may be difficult to find a wireless service provider in some areas. It is typically more expensive and mainly available in metropolitan areas.

 See the Wireless Networking Standards page of Webopedia for data rates, modulation schemes, security, and more information on wireless networking.

Satellite:
Internet over Satellite (IoS) allows a user to access the Internet via a satellite that
orbits the earth. A satellite is placed at a static point above the earth's surface, in a
fixed position. Because of the enormous distances signals must travel from the earth
up to the satellite and back again, IoS is slightly slower than high-speed terrestrial
connections over copper or fiber optic cables.

 Typical Internet over Satellite connection speeds (standard IP services) average around 492 Kbps, up to 512 Kbps.

Chatting

On the Internet, chatting is talking to other people who are using the
Internet at the same time you are. Usually, this "talking" is the exchange of
typed-in messages requiring one site as the repository for the messages (or
"chat site") and a group of users who take part from anywhere on the Internet.
In some cases, a private chat can be arranged between two parties who meet
initially in a group chat. Chats can be ongoing or scheduled for a particular
time and duration. Most chats are focused on a particular topic of interest and
some involve guest experts or famous people who "talk" to anyone joining the
chat. (Transcripts of a chat can be archived for later reference.)


Chats are conducted on online services (especially America Online), by bulletin board services, and by Web sites. Several Web sites, notably Talk City, exist solely for the purpose of conducting chats. Some chat sites such as Worlds Chat allow participants to assume the role or appearance of an avatar in a simulated or virtual reality environment.

Talk City and many other chat sites use a protocol called Internet Relay
Chat.

A chat can also be conducted using sound or sound and video, assuming
you have the bandwidth access and the appropriate programming.

Chatiquette
The term chatiquette is a variation of netiquette and describes basic rules of online communication. These conventions or guidelines have been created to avoid misunderstandings and to simplify the communication between users in a chat. Chatiquette varies from community to community; in general, it describes basic courtesy and introduces new users to the community and the associated network culture. As an example, it is considered rude to write only in UPPER CASE, because it looks as if you are shouting.

Software and protocols


The following are common chat programs and protocols:

 AOL Instant Messenger (AIM)
 Camfrog
 Campfire
 Gadu-Gadu
 Google Talk
 ICQ (OSCAR)
 Internet Relay Chat (IRC)
 Jabber (XMPP)
 MUD
 MUSH
 PalTalk
 Pichat
 PSYC
 QQ
 SILC
 Skype
 Talk
 Talker
 TeamSpeak (TS)
 Windows Live Messenger
 Yahoo! Messenger

Chat programs supporting multiple protocols:


 Adium
 Digsby
 IMVU
 Kopete
 Miranda IM
 Pidgin
 Trillian
 Quiet Internet Pager

Telnet

TELNET (TELecommunication NETwork) is a network protocol used on the Internet or on local area network (LAN) connections. It was developed in 1969, beginning with RFC 15, and standardized as IETF STD 8, one of the first Internet standards.

The term telnet also refers to software which implements the client part
of the protocol. TELNET clients have been available on most Unix systems for
many years and are available for virtually all platforms. Most network
equipment and OSs with a TCP/IP stack support some kind of TELNET service
server for their remote configuration (including ones based on Windows NT).
Because of security issues with TELNET, its use has waned as it is replaced by
the use of SSH for remote access.

"To telnet" is also used as a verb meaning to establish or use a TELNET


or other interactive TCP connection, as in, "To change your password, telnet
to the server and run the password command".

Most often, a user will be telnetting to a Unix-like server system or a simple network device such as a switch. For example, a user might "telnet in from home to check his mail at school". In doing so, he would be using a telnet client to connect from his computer to one of his servers. Once the connection is established, he would then log in with his account information and execute operating system commands remotely on that computer, such as ls or cd.

On many systems, the client may also be used to make interactive raw-TCP sessions. It is commonly believed that a telnet session which does not include an IAC (character 255) is functionally identical. This is not the case, however, due to special NVT (Network Virtual Terminal) rules, such as the requirement for a bare CR (ASCII 13) to be followed by a NUL (ASCII 0).

Protocol details
TELNET is a client-server protocol, based on a reliable connection-oriented transport. Typically this protocol is used to establish a connection to TCP port 23, where a getty-equivalent program (telnetd) is listening, although TELNET predates TCP/IP and was originally run over NCP.

Initially, TELNET was an ad-hoc protocol with no official definition.[1] Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special TELNET character.

On March 5, 1973, a meeting was held at UCLA[2] where "New TELNET" was defined in two NIC documents: TELNET Protocol Specification, NIC #15372, and TELNET Option Specifications, NIC #15373.

The protocol has many extensions, some of which have been adopted as
Internet standards. IETF standards STD 27 through STD 32 define various
extensions, most of which are extremely common. Other extensions are on
the IETF standards track as proposed standards.

Current status
As of the mid-2000s, while the TELNET protocol itself has been mostly
superseded for remote login, TELNET clients are still used, often when
diagnosing problems, to manually "talk" to other services without specialized
client software. For example, it is sometimes used in debugging network
services such as an SMTP, IRC, HTTP, FTP or POP3 server, by serving as a
simple way to send commands to the server and examine the responses.
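
As an illustration of that kind of manual conversation (a minimal sketch; example.com is a placeholder host), the Python below opens a raw TCP connection to a web server and speaks HTTP by hand, much as one would in a telnet session to port 80:

    # Talking HTTP "by hand" over a raw TCP socket, telnet-style.
    import socket

    host = "example.com"  # placeholder host
    with socket.create_connection((host, 80)) as sock:
        request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096).decode("ascii", errors="replace"))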

This approach has limitations, as what TELNET clients speak is close to, but not equivalent to, raw mode (due to terminal control handshaking and the special rules regarding \377 and \15). Thus, other software such as nc (netcat) or socat on Unix (or PuTTY on Windows) is finding greater favor with some system administrators for testing purposes, as such tools can be called with arguments not to send any terminal control handshaking data. Also, netcat does not distort the \377 octet, which allows raw access to the TCP socket, unlike any standard-compliant TELNET software.


TELNET is popular with:

1. enterprise networks, to access host applications, e.g. on IBM mainframes.
2. administration of network elements, e.g., in commissioning, integration, and maintenance of core network elements in mobile communication networks.
3. MUD games played over the Internet, as well as talkers, MUSHes, MUCKs, MOOs, and the resurgent BBS community.
4. embedded systems

Usenet

Usenet (a contraction of user network) is a global, distributed Internet discussion system. It evolved from the general-purpose UUCP architecture of the same name.

It was conceived by Duke University graduate students Tom Truscott and Jim Ellis in 1979. Users read and post public messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles bulletin board systems (BBS) in most respects, and is the precursor to the various web forums which are widely used today. Discussions are threaded with modern newsreader software, as with web forums and BBSes, though posts are stored on the server sequentially.

One notable difference from a BBS or web forum is that there is no central server, nor central system owner. Usenet is distributed among a large, constantly changing conglomeration of servers which store and forward messages to one another. These servers are loosely connected in a variable mesh. Individual users usually read from and post messages to a local server operated by their ISP, university, or employer. The servers then exchange the messages between one another, so that they are available to readers beyond the original server.

Usenet is one of the oldest computer network communications systems still in widespread use. It was established in 1980, following experiments from the previous year, over a decade before the World Wide Web was introduced and the general public got access to the Internet. It was originally conceived as a "poor man's ARPANET," employing UUCP to offer mail and file transfers, as well as announcements through the newly developed news software. This system, developed at the University of North Carolina at Chapel Hill and Duke University, was called USENET to emphasize its creators' hope that the USENIX organization would take an active role in its operation (Daniel et al., 1980).

The articles that users post to Usenet are organized into topical categories
called newsgroups, which are themselves logically organized into hierarchies
of subjects. For instance, sci.math and sci.physics are within the sci hierarchy,
for science. When a user subscribes to a newsgroup, the news client software
keeps track of which articles that user has read.

In most newsgroups, the majority of the articles are responses to some other
article. The set of articles which can be traced to one single non-reply article is
called a thread. Most modern newsreaders display the articles arranged into
threads and subthreads, making it easy to follow a single discussion in a high-
volume newsgroup.

When a user posts an article, it is initially only available on that user's news
server. Each news server, however, talks to one or more other servers (its
"newsfeeds") and exchanges articles with them. In this fashion, the article is
copied from server to server and (if all goes well) eventually reaches every
server in the network. The later peer-to-peer networks operate on a similar
principle; but for Usenet it is normally the sender, rather than the receiver,
who initiates transfers. Some have noted that this seems an inefficient
protocol in the era of abundant high-speed network access. Usenet was
designed for a time when networks were much slower, and not always
available. Many sites on the original Usenet network would connect only once
or twice a day to batch-transfer messages in and out.
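
A toy model of this store-and-forward flooding (purely illustrative; real servers use NNTP and article identifiers to avoid sending duplicates) might look like:

    # Toy flood-fill propagation of one article through a server mesh.
    peers = {  # hypothetical newsfeed links between servers
        "serverA": ["serverB"],
        "serverB": ["serverA", "serverC", "serverD"],
        "serverC": ["serverB"],
        "serverD": ["serverB"],
    }

    def propagate(origin):
        have = {origin}      # servers that already hold the article
        frontier = [origin]  # servers that still need to offer it onward
        while frontier:
            sender = frontier.pop()
            for peer in peers[sender]:
                if peer not in have:  # only forward to peers lacking a copy
                    have.add(peer)
                    frontier.append(peer)
        return have

    print(sorted(propagate("serverA")))  # every server ends up with a copy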

Usenet has significant cultural importance in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ" and "spam." Internet culture was born on Usenet.

Today, almost all Usenet traffic is carried over the Internet. The current format
and transmission of Usenet articles is very similar to that of Internet email
messages. However, Usenet articles are posted for general consumption; any
Usenet user has access to all newsgroups, unlike email, which requires a list of
known recipients.

Today, Usenet has diminished in importance with respect to Internet forums, blogs, and mailing lists. The differences, though, are that Usenet requires no personal registration with the group concerned, that information need not be stored on a remote server, that archives are always available, and that reading the messages requires not a mail or web client, but a news client (included in many modern e-mail clients).

Voicemail

Voice mail was introduced in the late 1970s. Gordon Matthews founded a company called VMX in 1979. VMX stood for "voice mail express," and Matthews received a U.S. patent for his digital invention in 1982. VMX was the first voice mail provider service, its first client being 3M. The system recorded and managed messages using the digital technology available during the late 1970s and 1980s. Some companies still use their VMX systems.

Voice mails are essentially digital recordings of outgoing and incoming voice messages that are managed either by an on-site or off-site system. Some users purchase systems that are operated and managed either by their own employees or on a contract basis with another company. Home-based users, such as home telephone and cell phone users, often use an off-site service, such as their phone service provider, for voice mail accounts. Others, however, purchase software that allows their PC to become an electronic message system.

Voice mail systems make phone systems more powerful and flexible by
allowing conversations and information to pass between parties, even when
both aren't present. In a work setting, customers and business people rely
upon voice mail, both for leaving and sending messages. Outgoing messages,
for instance, are the messages people use to greet those who call their line.
The outgoing message can tell a caller whose line they've reached, when that
person might return and to leave a message. The caller, armed with this
information, can leave a detailed message that's most appropriate for his or
her needs.


Voice mail typically is integrated with the on-site phone system, allowing both inside and outside users to utilize many features. Such features include off-site access to messages, paging, and urgent message delivery, among many others.

As in the phone systems of old, many voice mail systems today come with an "operator." The difference is that these operators aren't human; they're auto-attendants. Auto-attendants guide users, both those from the inside and the outside, through the many options a voice mail system has to offer. They instruct users how to enter commands through the phone's keypad, such as how to retrieve a message.

Features
In its simplest form it mimics the functions of an answering machine, uses a
standard telephone handset for the user interface, and uses a centralized,
computerized system rather than equipment at the individual telephone.
Voicemail systems are much more sophisticated than answering machines in
that they can:

 answer many phones at the same time
 store incoming voice messages in personalized mailboxes associated with the user's phone number
 enable users to forward received messages to another voice mailbox
 send messages to one or more other user voice mailboxes
 add a voice introduction to a forwarded message
 store voice messages for future delivery
 make calls to a telephone or paging service to notify the user a message has arrived in his mailbox
 transfer callers to another phone number for personal assistance
 play different message greetings to different callers.

Newsgroup
A newsgroup is a discussion about a particular subject consisting of notes written to a central Internet site and redistributed through Usenet, a worldwide network of news discussion groups. Usenet uses the Network News Transfer Protocol (NNTP).
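
For illustration (news.example.com is a placeholder server, and Python's standard nntplib module, deprecated in recent Python releases, is assumed to be available), asking an NNTP server about a group might look like:

    # Querying basic newsgroup statistics over NNTP (hypothetical server).
    from nntplib import NNTP

    with NNTP("news.example.com") as news:
        resp, count, first, last, name = news.group("comp.lang.python")
        print(f"{name}: {count} articles, numbered {first} to {last}")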

Newsgroups are organized into subject hierarchies, with the first few
letters of the newsgroup name indicating the major subject category and sub-
categories represented by a subtopic name. Many subjects have multiple
levels of subtopics. Some major subject categories are: news, rec (recreation),
soc (society), sci (science), comp (computers), and so forth (there are many
more). Users can post to existing newsgroups, respond to previous posts, and
create new newsgroups.

Newcomers to newsgroups are requested to learn basic Usenet netiquette and to get familiar with a newsgroup before posting to it. A list of frequently asked questions (FAQ) is usually provided. The rules can be found when you start to enter the Usenet through your browser or an online service. You can subscribe to the postings on a particular newsgroup.

Some newsgroups are moderated by a designated person who decides which postings to allow or to remove. Most newsgroups are unmoderated.

Types of newsgroups
Typically, a newsgroup is focused on a particular topic such as "animal
husbandry," "pole vaulting," or "glockenspiel MIDI files". Some newsgroups
allow the posting of messages on a wide variety of themes, regarding
anything a member chooses to discuss as on-topic, while others keep more
strictly to their particular subject, frowning on off-topic postings. The news
admin (the administrator of a news server) decides how long articles are kept
before being expired (deleted from the server). Usually they will be kept for
one or two weeks, but some admins keep articles in local or technical
newsgroups around longer than articles in other newsgroups.

Newsgroups generally come in either of two types, binary or text. There is no technical difference between the two, but the naming differentiation allows users and servers with limited facilities the ability to minimize network bandwidth usage. Generally, Usenet conventions and rules are enacted with the primary intention of minimizing the overall amount of network traffic and resource usage.


Newsgroups are much like the public message boards on old bulletin board
systems. For those readers not familiar with this concept, envision an
electronic version of the corkboard in the entrance of your local grocery store.

Newsgroups frequently become cliquish and are subject to sporadic flame wars and trolling, but they can also be a valuable source of information, support, and friendship, bringing people who are interested in specific subjects together from around the world.

Back when the early community was the pioneering computer society, many articles ended with a notice disclosing whether the author was free of, or had, a conflict of interest, a financial motive, or an axe to grind in posting about any product or issue. This is seen much less now, and the reader must read skeptically, just as in society at large, quite apart from privacy and phishing issues.

There are currently well over 100,000 Usenet newsgroups, but only 20,000 or
so of those are active. Newsgroups vary in popularity, with some newsgroups
only getting a few posts a month while others get several hundred (and in a
few cases several thousand) messages a day.

Weblogs have replaced some of the uses of newsgroups (especially because, for a while, they were less prone to spamming).

A website called DejaNews began archiving Usenet in the 1990s. DejaNews also provided a searchable web interface. Google bought the archive from them and made efforts to buy other Usenet archives to attempt to create a complete archive of Usenet newsgroups and postings from its early beginnings. Like DejaNews, Google has a web search interface to the archive, but Google also allows newsgroup posting.

Non-Usenet newsgroups are possible and do occur, as private individuals or organizations set up their own NNTP servers. Examples include the newsgroups Microsoft runs to allow peer-to-peer support of its products and those at news://news.grc.com.

How newsgroups work
Newsgroup servers are hosted by various organizations and institutions. Most
Internet Service Providers host their own News Server, or rent access to one,
for their subscribers. There are also a number of companies who sell access to
premium news servers.

Every host of a news server maintains agreements with other news servers to
regularly synchronize. In this way news servers form a network. When a user
posts to one news server, the message is stored locally. That server then
shares the message with the servers that are connected to it if both carry the
newsgroup, and from those servers to servers that they are connected to, and
so on. For newsgroups that are not widely carried, sometimes a carrier group is used for crossposting to aid distribution. This is typically only useful for groups that have been removed or for newer alt.* groups. Crossposts between hierarchies, outside of the Big Eight and alt.*, are prone to failure.
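
To make the reading side of this flow concrete, here is a minimal sketch of querying a newsgroup over NNTP using Python's nntplib module (part of the standard library through Python 3.12; removed in 3.13). The server name and group are placeholders, not real endpoints.

import nntplib

# Connect to a news server (hypothetical host) and select a group.
with nntplib.NNTP("news.example.com") as server:
    resp, count, first, last, name = server.group("comp.lang.python")
    print(f"{name}: {count} articles ({first}-{last})")

    # Fetch overview data (subject, author, date) for the last five articles.
    resp, overviews = server.over((last - 4, last))
    for art_num, over in overviews:
        print(art_num, nntplib.decode_header(over["subject"]))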

Types of protocols

IP:
The Internet Protocol (IP) is a data-oriented protocol used for
communicating data across a packet-switched internetwork.

IP is a network layer protocol in the Internet protocol suite and is encapsulated in a data link layer protocol (e.g., Ethernet). As a lower layer protocol, IP provides unique global addressing among computers, enabling them to communicate.
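
As an illustration of IP addressing, the sketch below uses Python's ipaddress module on an address from the 192.0.2.0/24 block, which is reserved for documentation examples.

import ipaddress

addr = ipaddress.ip_address("192.0.2.10")   # documentation-range address
net = ipaddress.ip_network("192.0.2.0/24")

print(addr in net)          # True: the address belongs to this network
print(net.netmask)          # 255.255.255.0
print(net.num_addresses)    # 256 addresses in a /24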

UDP:
User Datagram Protocol (UDP) is one of the core protocols of the Internet
protocol suite. Using UDP, programs on networked computers can send short
messages sometimes known as datagrams (using Datagram Sockets) to one
another. UDP is sometimes called the Universal Datagram Protocol. The protocol was designed by David P. Reed in 1980.
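
The following sketch, using only Python's standard socket module, shows a UDP datagram being sent and received over the loopback interface; the port number is an arbitrary choice for the example.

import socket

# Receiver: bind to a local port and wait for one datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 9999))

# Sender: fire off a datagram; UDP gives no delivery or ordering guarantee.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = recv.recvfrom(1024)
print(data, "received from", addr)
send.close()
recv.close()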

TCP:

The Transmission Control Protocol (TCP) is one of the core protocols of the
Internet protocol suite. TCP provides reliable, in-order delivery of a stream of
bytes, making it suitable for applications like file transfer and e-mail. It is so
important in the Internet protocol suite that sometimes the entire suite is
referred to as "the TCP/IP protocol suite." TCP is the transport protocol that
manages the individual conversations between web servers and web clients.
TCP divides the HTTP messages into smaller pieces, called segments, to be
sent to the destination client. It is also responsible for controlling the size and
rate at which messages are exchanged between the server and the client.
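
As a rough sketch of a TCP conversation, the code below opens a stream connection to a web server and issues a bare HTTP request; TCP handles segmentation, ordering and retransmission transparently. example.com is used as a generic test host.

import socket

# Open a TCP connection; the byte stream is reliable and in-order.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):   # read until the server closes
        reply += chunk

print(reply.decode("latin-1"))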

DHCP:
Dynamic Host Configuration Protocol (DHCP) is a protocol used by
networked devices (clients) to obtain various parameters necessary for the
clients to operate in an Internet Protocol (IP) network. By using this protocol, system administration workload greatly decreases, and devices can be added to the network with minimal or no manual configuration.

HTTP:
Hypertext Transfer Protocol (HTTP) is a communications protocol for the
transfer of information on intranets and the World Wide Web. Its original
purpose was to provide a way to publish and retrieve hypertext pages over the
Internet.
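
A minimal sketch of retrieving a page over HTTP with Python's urllib, against the generic test URL http://example.com/:

from urllib.request import urlopen

# Fetch a page over HTTP and report the status line and body size.
with urlopen("http://example.com/") as response:
    print(response.status, response.reason)
    body = response.read()

print(len(body), "bytes of HTML retrieved")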

FTP:
File Transfer Protocol (FTP) is a network protocol used to transfer data from
one computer to another through a network, such as over the Internet.

FTP is a file transfer protocol for exchanging and manipulating files over any TCP/IP-based network, regardless of which operating systems are involved (provided the computers permit FTP access). There are many existing FTP client and server programs, and FTP servers can be set up on a wide range of machines, from game and voice servers to Internet hosts and other physical servers.
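
A minimal sketch of an FTP session using Python's ftplib; the host name is a placeholder, and many public mirrors accept the conventional anonymous login shown here.

from ftplib import FTP

ftp = FTP("ftp.example.com")   # hypothetical server
ftp.login()                    # anonymous login by default
ftp.cwd("/pub")                # change to a directory
for name in ftp.nlst():        # list its contents
    print(name)
ftp.quit()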

TELNET:
TELNET (TELecommunication NETwork) is a network protocol used on the
Internet or local area network (LAN) connections. It was developed in 1969, beginning with RFC 15, and standardized as IETF STD 8, one of the first
Internet standards.
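
A minimal sketch of a scripted Telnet session using Python's telnetlib (standard library through Python 3.12; removed in 3.13). The host and credentials are placeholders, and note that Telnet transmits everything, including passwords, unencrypted.

from telnetlib import Telnet

with Telnet("telnet.example.com", 23) as tn:   # hypothetical host
    tn.read_until(b"login: ")
    tn.write(b"guest\n")
    tn.read_until(b"Password: ")
    tn.write(b"guest\n")          # sent in cleartext!
    tn.write(b"exit\n")
    print(tn.read_all().decode("ascii", "replace"))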

SSH:
Secure Shell (SSH) is a network protocol that allows data to be exchanged through a secure channel between two computers. Encryption provides confidentiality and integrity of data over an insecure network, such as the Internet. SSH uses public-key cryptography to authenticate the remote computer and to allow the remote computer to authenticate the user, if necessary.
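
A minimal sketch of running a remote command over SSH, assuming the widely used third-party paramiko library (pip install paramiko); the host, user, and password are placeholders, and a real deployment should verify host keys instead of auto-adding them.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
client.connect("ssh.example.com", username="guest", password="secret")

# Run one command on the remote machine over the encrypted channel.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()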

POP3:
Local e-mail clients use the Post Office Protocol version 3 (POP3), an
application-layer Internet standard protocol, to retrieve e-mail from a remote
server over a TCP/IP connection. Many subscribers to individual Internet
service provider e-mail accounts access their e-mail with client software that
uses POP3.
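
A minimal sketch of checking a mailbox with Python's poplib; the server and credentials are placeholders, and POP3_SSL wraps the session in TLS, which most providers now require.

import poplib

box = poplib.POP3_SSL("pop.example.com")   # hypothetical server
box.user("alice@example.com")
box.pass_("app-password")

num_messages, total_bytes = box.stat()
print(num_messages, "messages,", total_bytes, "bytes")

# Retrieve the first message as a list of raw lines, if any exist.
if num_messages:
    resp, lines, octets = box.retr(1)
    print(b"\n".join(lines)[:200])
box.quit()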

SMTP:
Simple Mail Transfer Protocol (SMTP) is the de facto standard for e-mail
transmissions across the Internet. Formally SMTP is defined in RFC 821 (STD
10) as amended by RFC 1123 (STD 3) chapter 5. The protocol used today is also
known as ESMTP and defined in RFC 2821.
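
A minimal sketch of submitting a message with Python's smtplib; the server, port, and credentials are placeholders, with port 587 plus STARTTLS being the common submission setup today.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message was handed to an SMTP server by smtplib.")

with smtplib.SMTP("smtp.example.com", 587) as server:  # hypothetical server
    server.starttls()                                  # upgrade to TLS
    server.login("alice@example.com", "app-password")
    server.send_message(msg)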

IMAP:
The Internet Message Access Protocol (commonly known as IMAP or IMAP4,
and previously called Internet Mail Access Protocol, Interactive Mail Access Protocol (RFC 1064), and Interim Mail Access Protocol) is an application layer
Internet protocol operating on port 143 that allows a local client to access e-
mail on a remote server. The current version, IMAP version 4 revision 1
(IMAP4rev1), is defined by RFC 3501. IMAP4 and POP3 (Post Office Protocol
version 3) are the two most prevalent Internet standard protocols for e-mail
retrieval. Virtually all modern e-mail clients and servers support both.
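
A minimal sketch of an IMAP session with Python's imaplib; the server and credentials are placeholders. Unlike POP3, the mail stays on the server, so the client only queries it here.

import imaplib

with imaplib.IMAP4_SSL("imap.example.com") as conn:   # hypothetical server
    conn.login("alice@example.com", "app-password")
    conn.select("INBOX")                    # mailbox remains on the server
    typ, data = conn.search(None, "UNSEEN")
    print(len(data[0].split()), "unseen messages")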

Advantages & Disadvantages of the Internet

Advantages:

Communication
With the advent of the Internet, distances have shrunk and the world has attained the form of a global village. Communication has always been the foremost goal of the Internet, and it has excelled beyond expectations; still, innovations continue to make it faster and more reliable. Now we can communicate in a fraction of a second with a person sitting in another part of the world. For better communication, we can avail ourselves of e-mail and chat for hours with our loved ones through the many messenger services on offer. With the help of such services, it has become very easy to establish a kind of global friendship where you can share your thoughts and explore other cultures and ethnicities.

Information
Information is probably the biggest advantage the Internet offers. The Internet is a virtual treasure trove of information: any kind of information on any topic under the sun is available on it. Search engines like Google and Yahoo are at your service, and you can find almost any type of data on almost any subject you are looking for. There is a huge amount of information available for just about every subject known to man, ranging from government law and services, trade fairs and conferences, market information, and new ideas to technical support; the list is endless.

Students and children are among the top users who surf the Internet for research. Today, it is almost required that students use the Internet to gather resources for research, and teachers have started giving assignments that require it. Research on medical issues has also become much easier: numerous web sites offer loads of information for people to research diseases and talk to doctors online at sites such as America's Doctor. During 1998, over 20 million people reported going online to retrieve health information.

Entertainment
Entertainment is another popular reason why many people surf the Internet. Indeed, the Internet has become quite successful in capturing the multifaceted world of entertainment. Downloading games, visiting chat rooms or just surfing the Web are some of the uses people have discovered. Numerous games may be downloaded from the Internet for free, and the online gaming industry has attracted phenomenal attention from game lovers. Chat rooms are popular because users can meet new and interesting people; in fact, the Internet has been successfully used by people to find lifelong partners. When people surf the Web, numerous things can be found and shared: music, hobbies, news and more.

Services
Many services are now provided on the Internet, such as online banking, job seeking, purchasing tickets for your favorite movies, hotel reservations, and guidance services on an array of topics covering every aspect of life. Often these services are not available off-line, or cost more when obtained off-line.

E-Commerce
E-commerce is the term used for any type of commercial transaction or business deal that involves the transfer of information across the globe via the Internet. It has become a phenomenon associated with shopping of almost any kind: you name it, and e-commerce, with its giant tentacles engulfing every single product and service, will deliver it to your doorstep. It offers an amazingly wide range of products, from household needs and technology to entertainment.

Disadvantages:

Theft of personal information
If you use the Internet, you may be facing grave danger, as your personal information such as your name, address, and credit card number can be accessed by culprits and used to make your problems worse.

Spamming:
Spamming refers to sending unwanted e-mails in bulk, which serve no purpose and needlessly obstruct the entire system. Such activities can be very frustrating for you, so instead of just ignoring them, you should make an effort to stop them so that using the Internet becomes that much safer.

Virus threat
A virus is nothing but a program that disrupts the normal functioning of your computer system. Computers attached to the Internet are more prone to virus attacks, which can end up crashing your whole hard disk and causing you considerable headache.

Disorganized
Conducting research often leads to non-productive searches

No Standards
No process to check information for accuracy

Transient
Web addresses change; sites disappear

Limited archives
Often only current information is available online

Costs
Fees often charged for access to specialized information

Unplanned
Content often based on what is popular or profitable.

END OF PROJECT
