What is Information technology?
Information technology (IT) is the use of any computers, storage, networking
and other physical devices, infrastructure and processes to create, process,
store, secure and exchange all forms of electronic data. Typically, IT is used in
the context of business operations, as opposed to technology used for
personal or entertainment purposes. The commercial use of IT encompasses
both computer technology and telecommunications.
The Harvard Business Review coined the term information technology to make
a distinction between purpose-built machines designed to perform a limited
scope of functions, and general-purpose computing machines that could be
programmed for various tasks. As the IT industry evolved from the mid-20th
century, computing capability increased, while device cost and energy
consumption decreased, a cycle that continues today as new technologies
emerge.
Examples of information technology
So how is IT actually involved in day-to-day business? Consider five common
examples of IT and teams at work:
1. Server upgrade. One or more data center servers are near the end of their
operational and maintenance lifecycle. IT staff will select and procure
replacement servers, configure and deploy the new servers, back up
applications and data on existing servers, transfer that data and
applications to the new servers, validate that the new servers are working
properly and then repurpose or decommission and dispose of the old
servers.
2. Security monitoring. Businesses routinely employ tools to monitor and
log activity in applications, networks and systems. IT staff receive alerts of
potential threats or noncompliant behavior -- such as a user attempting to
access a restricted file -- check logs and other reporting tools to investigate
and determine the root cause of the alert, and take prompt action to
address and remediate the threat, often driving changes and
improvements to security posture that can prevent similar events in the
future.
3. New software. The business determines a need for a new mobile
application that can allow customers to log in and access account
information or conduct other transactions from smartphones and tablets.
Developers work to create and refine a suitable application according to a
planned roadmap. Operations staff post each iteration of the new mobile
application for download and deploy the back-end components of the app
to the organization's infrastructure.
4. Business improvement. A business requires more availability from a
critical application to help with revenue or business continuance
strategies. The IT staff might be called upon to architect a high-
availability cluster to provide greater performance and resilience,
ensuring that the application can continue to function in the
face of a single outage. This can be paired with enhancements to data
storage protection and recovery.
5. User support. Developers are building a major upgrade for a vital business
application. Developers and admins will collaborate to create new
documentation for the upgrade. IT staff might deploy the upgrade for
limited beta testing -- allowing a select group of users to try the new
version -- while also developing and delivering comprehensive training
that prepares all users for the new version's eventual release.
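The security-monitoring workflow in example 2 above amounts to filtering a stream of log events against a rule set. A minimal sketch of that idea follows; the log format, file paths and rule are invented here purely for illustration, not taken from any real monitoring tool.

```python
# Hypothetical restricted-file rule; real tools use far richer policies.
RESTRICTED = {"/finance/payroll.db", "/hr/salaries.xlsx"}

def scan_log(lines):
    """Flag log entries where a user touched a restricted file.

    Each line is assumed (for this sketch) to look like:
    "<user> <action> <path>"
    """
    alerts = []
    for line in lines:
        user, action, path = line.split()
        if path in RESTRICTED:
            alerts.append((user, action, path))
    return alerts

log = ["jdoe READ /tmp/report.txt",
       "asmith READ /finance/payroll.db"]
print(scan_log(log))  # → [('asmith', 'READ', '/finance/payroll.db')]
```

In practice the alert would feed the investigation-and-remediation loop described above rather than simply being printed.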
INTRODUCTION:
Technology has been defined as "systematic knowledge and action, usually of
industrial processes but applicable to any recurrent activity". In providing
tools and techniques for action, technology at once adds to and draws from a
knowledge base in which theory and practice interact and combine. At its
most general level, technology may be regarded as a definable, specifiable way of
doing anything. In other words, we may say a technology is a codified,
communicable procedure for solving problems. Technology, Manfred Kochen
observed, impacts in three stages. First, it enables us to do what we are now
doing, but better, faster and cheaper; second, it enables us to do what we
cannot do now; and third, it changes our life styles. Information technology is
a recent and comprehensive term, which describes the whole range of
processes for generation, storage, transmission, retrieval and processing of
information. In this Unit, an attempt is made to discuss the components of
information technology and to identify elements that really matter in the
investigation and implementation of new information technologies in
information systems and services.
Definition:
The term 'Information Technology' (IT) has varying interpretations.
Macmillan Dictionary of Information Technology defines IT as "the
acquisition, processing, storage and dissemination of vocal, pictorial, textual
and numerical information by a micro-electronics-based combination of
computing and telecommunications". Two points are worth consideration
about this definition: 1) The new information technology is seen as involving
the formulating, recording and processing and not just transmitting of,
information. These are elements in the communication process which can be
separated (both analytically and in practice) but in the context of human
communication they tend to be intertwined. 2) Modern information
technology deals with a wide variety of ways of representing information. It
covers not only the textual forms (i.e., the cognitive, propositional and
verbalised forms we often think of under the heading of information), but also
numerical, visual and auditory representations. UNESCO defines Information Technology as
"scientific, technological and engineering disciplines and the management
techniques used in information handling and processing; their
applications; computers and their interaction with man and machine; and
associated social, economic and cultural matters" (Stokes). This definition,
while emphasising the significant role of computers, appears not to take into
its purview the communication systems. It may, however, be stated that
communication systems are as essential to information technology as
computers. As a consequence, we have a convergence of three strands of
technologies: computers, micro-electronics and communications. In other
words, a mosaic of technologies, products and techniques has combined to
provide new electronic dimensions to information management. This mosaic
is known by the name new information technology. It is important to bear in
mind that information technology is not just concerned with new pieces of
equipment but with a much broader spectrum of information activities.
Information technology encompasses such different things as books, print,
reprography, the telephone network, broadcasting and computers. In the
following sections let us briefly consider the major components of information
technology namely: computer technology, communications technology and
reprographic and micrographic technologies.
History
The history of computers goes back over 200 years. First theorized by
mathematicians and entrepreneurs, mechanical calculating machines were
designed and built during the 19th century to solve increasingly
complex number-crunching challenges. The advancement of technology
enabled ever more-complex computers by the early 20th century, and
computers became larger and more powerful.
Today, computers are almost unrecognizable from designs of the 19th
century, such as Charles Babbage's Analytical Engine — or even from the huge
computers of the 20th century that occupied whole rooms, such as the
Electronic Numerical Integrator and Calculator.
Here's a brief history of computers, from their primitive number-crunching
origins to the powerful modern-day machines that surf the Internet, run
games and stream multimedia.
19TH CENTURY
1801: Joseph Marie Jacquard, a French merchant and inventor, invents a loom
that uses punched wooden cards to automatically weave fabric designs. Early
computers would use similar punch cards.
1821: English mathematician Charles Babbage conceives of a steam-driven
calculating machine that would be able to compute tables of numbers. Funded
by the British government, the project, called the "Difference Engine" fails due
to the lack of technology at the time, according to the University of Minnesota.
1843: Ada Lovelace, an English mathematician and the daughter of poet Lord
Byron, writes the world's first computer program. According to Anna Siffert, a
professor of theoretical mathematics at the University of Münster in Germany,
Lovelace writes the first program while translating a paper on Babbage's
Analytical Engine from French into English. "She also provides her own
comments on the text. Her annotations, simply called 'notes,' turn out to be
three times as long as the actual transcript," Siffert wrote in an article for The
Max Planck Society. "Lovelace also adds a step-by-step description for
computation of Bernoulli numbers with Babbage's machine — basically an
algorithm — which, in effect, makes her the world's first computer
programmer." Bernoulli numbers are a sequence of rational numbers often
used in computation.
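Lovelace's "step-by-step description" for computing Bernoulli numbers can be echoed with a modern sketch. Her actual procedure for the Analytical Engine was organized differently; the code below instead uses the standard recurrence sum over binomial coefficients, which yields the same exact values.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] as exact fractions.

    Uses the classical recurrence for m >= 1:
        sum_{k=0}^{m} C(m+1, k) * B_k = 0
    solved for B_m at each step.
    """
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print([str(b) for b in bernoulli(6)])
# → ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```

Exact rational arithmetic (`fractions.Fraction`) matters here: the odd-index zeros and small denominators are recovered precisely, where floating point would accumulate error.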
1853: Swedish inventor Per Georg Scheutz and his son Edvard design the
world's first printing calculator. The machine is significant for being the first
to "compute tabular differences and print the results," according to Uta C.
Merzbach's book, "Georg Scheutz and the First Printing Calculator"
(Smithsonian Institution Press, 1977).
1890: Herman Hollerith designs a punch-card system to help calculate the
1890 U.S. Census. The machine saves the government several years of
calculations, and the U.S. taxpayer approximately $5 million, according
to Columbia University. Hollerith later establishes a company that will
eventually become International Business Machines Corporation (IBM).
EARLY 20TH CENTURY
1931: At the Massachusetts Institute of Technology (MIT), Vannevar Bush
invents and builds the Differential Analyzer, the first large-scale automatic
general-purpose mechanical analog computer, according to Stanford
University.
1936: Alan Turing, a British scientist and mathematician, presents the
principle of a universal machine, later called the Turing machine, in a paper
called "On Computable Numbers…" according to Chris Bernhardt's book
"Turing's Vision" (The MIT Press, 2017). Turing machines are capable of
computing anything that is computable. The central concept of the modern
computer is based on his ideas. Turing is later involved in the development of
the Turing-Welchman Bombe, an electro-mechanical device designed to
decipher Nazi codes during World War II, according to the UK's National
Museum of Computing.
1937: John Vincent Atanasoff, a professor of physics and mathematics at Iowa
State University, submits a grant proposal to build the first electric-only
computer, without using gears, cams, belts or shafts.
1939: David Packard and Bill Hewlett found the Hewlett Packard Company in
Palo Alto, California. The pair decide the name of their new company by the
toss of a coin, and Hewlett-Packard's first headquarters are in Packard's
garage, according to MIT.
1941: German inventor and engineer Konrad Zuse completes his Z3 machine,
the world's earliest digital computer, according to Gerard O'Regan's book "A
Brief History of Computing" (Springer, 2021). The machine was destroyed
during a bombing raid on Berlin during World War II. Zuse fled the German
capital after the defeat of Nazi Germany and later released the world's first
commercial digital computer, the Z4, in 1950, according to O'Regan.
1941: Atanasoff and his graduate student, Clifford Berry, design the first
digital electronic computer in the U.S., called the Atanasoff-Berry Computer
(ABC). This marks the first time a computer is able to store information on its
main memory, and is capable of performing one operation every 15 seconds,
according to the book "Birthing the Computer" (Cambridge Scholars
Publishing, 2016).
1945: Two professors at the University of Pennsylvania, John Mauchly and J.
Presper Eckert, design and build the Electronic Numerical Integrator and
Calculator (ENIAC). The machine is the first "automatic, general-purpose,
electronic, decimal, digital computer," according to Edwin D. Reilly's book
"Milestones in Computer Science and Information Technology" (Greenwood
Press, 2003).
1946: Mauchly and Eckert leave the University of Pennsylvania and receive
funding from the Census Bureau to build the UNIVAC, the first commercial
computer for business and government applications.
1947: William Shockley, John Bardeen and Walter Brattain of Bell
Laboratories invent the transistor. They discover how to make an electric
switch with solid materials and without the need for a vacuum.
1949: A team at the University of Cambridge develops the Electronic Delay
Storage Automatic Calculator (EDSAC), "the first practical stored-program
computer," according to O'Regan. "EDSAC ran its first program in May 1949
when it calculated a table of squares and a list of prime numbers," O'Regan
wrote. In November 1949, scientists with the Council for Scientific and
Industrial Research (CSIR), now called CSIRO, build Australia's first digital
computer, called the Council for Scientific and Industrial Research Automatic
Computer (CSIRAC). CSIRAC is the first digital computer in the world to play
music, according to O'Regan.
LATE 20TH CENTURY
1953: Grace Hopper develops the first computer language, which eventually
becomes known as COBOL, which stands for COmmon Business-Oriented
Language, according to the National Museum of American History. Hopper is
later dubbed the "First Lady of Software" in her posthumous Presidential
Medal of Freedom citation. Thomas Johnson Watson Jr., son of IBM CEO
Thomas Johnson Watson Sr., conceives the IBM 701 EDPM to help the United
Nations keep tabs on Korea during the war.
1954: John Backus and his team of programmers at IBM publish a paper
describing their newly created FORTRAN programming language, an acronym
for FORmula TRANslation, according to MIT.
1958: Jack Kilby and Robert Noyce unveil the integrated circuit, known as the
computer chip. Kilby is later awarded the Nobel Prize in Physics for his work.
1968: Douglas Engelbart reveals a prototype of the modern computer at the
Fall Joint Computer Conference, San Francisco. His presentation, called "A
Research Center for Augmenting Human Intellect" includes a live
demonstration of his computer, including a mouse and a graphical user
interface (GUI), according to the Doug Engelbart Institute. This marks the
development of the computer from a specialized machine for academics to a
technology that is more accessible to the general public.
1969: Ken Thompson, Dennis Ritchie and a group of other developers at Bell
Labs produce UNIX, an operating system that made "large-scale networking of
diverse computing systems — and the internet — practical," according to Bell
Labs. The team behind UNIX continued to develop the operating system using
the C programming language, which they also optimized.
1970: The newly formed Intel unveils the Intel 1103, the first dynamic random
access memory (DRAM) chip.
1971: A team of IBM engineers led by Alan Shugart invents the "floppy disk,"
enabling data to be shared among different computers.
1972: Ralph Baer, a German-American engineer, releases the Magnavox Odyssey,
the world's first home game console, in September 1972, according to
the Computer Museum of America. Months later, entrepreneur Nolan Bushnell
and engineer Al Alcorn with Atari release Pong, the world's first commercially
successful video game.
1973: Robert Metcalfe, a member of the research staff for Xerox, develops
Ethernet for connecting multiple computers and other hardware.
1975: The magazine cover of the January issue of "Popular Electronics"
highlights the Altair 8080 as the "world's first minicomputer kit to rival
commercial models." After seeing the magazine issue, two "computer geeks,"
Paul Allen and Bill Gates, offer to write software for the Altair, using the new
BASIC language. On April 4, after the success of this first endeavor, the two
childhood friends form their own software company, Microsoft.
1976: Steve Jobs and Steve Wozniak co-found Apple Computer on April Fool's
Day. They unveil Apple I, the first computer with a single-circuit board and
ROM (Read Only Memory), according to MIT.
1977: The Commodore Personal Electronic Transactor (PET) is released onto
the home computer market, featuring an MOS Technology 8-bit 6502
microprocessor, which controls the screen, keyboard and cassette player. The
PET is especially successful in the education market, according to O'Regan.
1977: Radio Shack begins its initial production run of 3,000 TRS-80 Model 1
computers — disparagingly known as the "Trash 80" — priced at $599,
according to the National Museum of American History. Within a year, the
company takes 250,000 orders for the computer, according to the book "How
TRS-80 Enthusiasts Helped Spark the PC Revolution" (The Seeker Books,
2007).
1977: The first West Coast Computer Faire is held in San Francisco. Jobs and
Wozniak present the Apple II computer at the Faire, which includes color
graphics and features an audio cassette drive for storage.
1978: VisiCalc, the first computerized spreadsheet program, is introduced.
1979: MicroPro International, founded by software engineer Seymour
Rubenstein, releases WordStar, the world's first commercially successful
word processor. WordStar is programmed by Rob Barnaby, and includes
137,000 lines of code, according to Matthew G. Kirschenbaum's book "Track
Changes: A Literary History of Word Processing" (Harvard University Press,
2016).
1981: "Acorn," IBM's first personal computer, is released onto the market at a
price point of $1,565, according to IBM. Acorn uses the MS-DOS operating
system from Microsoft. Optional features include a display, printer, two
diskette drives, extra memory, a game adapter and more.
1983: The Apple Lisa, standing for "Local Integrated Software Architecture"
but also the name of Steve Jobs' daughter, according to the National Museum
of American History (NMAH), is the first personal computer to feature a GUI.
The machine also includes a drop-down menu and icons. Also this year, the
Gavilan SC is released and is the first portable computer with a flip-form
design and the very first to be sold as a "laptop."
1984: The Apple Macintosh is announced to the world during a Super Bowl
advertisement. The Macintosh is launched with a retail price of $2,500,
according to the NMAH.
1985: As a response to the Apple Lisa's GUI, Microsoft releases Windows in
November 1985, the Guardian reported. Meanwhile, Commodore announces
the Amiga 1000.
1989: Tim Berners-Lee, a British researcher at the European Organization for
Nuclear Research (CERN), submits his proposal for what would become the
World Wide Web. His paper details his ideas for Hyper Text Markup Language
(HTML), the building blocks of the Web.
1993: The Pentium microprocessor advances the use of graphics and music
on PCs.
1996: Sergey Brin and Larry Page develop the Google search engine at
Stanford University.
1997: Microsoft invests $150 million in Apple, which at the time is struggling
financially. This investment ends an ongoing court case in which Apple
accused Microsoft of copying its operating system.
1999: Wi-Fi, a term popularly glossed as "wireless fidelity," is developed, initially
covering a distance of up to 300 feet (91 meters), Wired reported.
21ST CENTURY
2001: Mac OS X, later renamed OS X then simply macOS, is released by Apple
as the successor to its standard Mac Operating System. OS X goes through 16
different versions, each with "10" as its title, and the first nine iterations are
nicknamed after big cats, with the first being codenamed
"Cheetah," TechRadar reported.
2003: AMD's Athlon 64, the first 64-bit processor for personal computers, is
released to customers.
2004: The Mozilla Corporation launches Mozilla Firefox 1.0. The Web browser
is one of the first major challenges to Internet Explorer, owned by Microsoft.
During its first five years, Firefox exceeds a billion downloads by users,
according to the Web Design Museum.
2005: Google buys Android, a Linux-based mobile phone operating system.
2006: The MacBook Pro from Apple hits the shelves. The Pro is the company's
first Intel-based, dual-core mobile computer.
2009: Microsoft launches Windows 7 on July 22. The new operating system
features the ability to pin applications to the taskbar, scatter windows away
by shaking another window, easy-to-access jumplists, easier previews of tiles
and more, TechRadar reported.
2010: The iPad, Apple's flagship handheld tablet, is unveiled.
2011: Google releases the Chromebook, which runs on Google Chrome OS.
2015: Apple releases the Apple Watch. Microsoft releases Windows 10.
2016: The first reprogrammable quantum computer is created. "Until now,
there hasn't been any quantum-computing platform that had the capability to
program new algorithms into their system. They're usually each tailored to
attack a particular algorithm," said study lead author Shantanu Debnath, a
quantum physicist and optical engineer at the University of Maryland, College
Park.
2017: The Defense Advanced Research Projects Agency (DARPA) is
developing a new "Molecular Informatics" program that uses molecules as
computers. "Chemistry offers a rich set of properties that we may be able to
harness for rapid, scalable information storage and processing," Anne Fischer,
program manager in DARPA's Defense Sciences Office, said in a statement.
"Millions of molecules exist, and each molecule has a unique three-
dimensional atomic structure as well as variables such as shape, size, or even
color. This richness provides a vast design space for exploring novel and
multi-value ways to encode and process data beyond the 0s and 1s of current
logic-based, digital architectures."
INTERNET
Internet, a system architecture that has revolutionized communications and
methods of commerce by allowing various computer networks around the
world to interconnect. Sometimes referred to as a “network of networks,” the
Internet emerged in the United States in the 1970s but did not become visible
to the general public until the early 1990s. By 2020, approximately 4.5 billion
people, or more than half of the world’s population, were estimated to have
access to the Internet.
The Internet provides a capability so powerful and general that it can be used
for almost any purpose that depends on information, and it is accessible by
every individual who connects to one of its constituent networks. It supports
human communication via social media, electronic mail (e-mail), “chat
rooms,” newsgroups, and audio and video transmission and allows people to
work collaboratively at many different locations. It supports access to digital
information by many applications, including the World Wide Web. The
Internet has proved to be a spawning ground for a large and growing number
of “e-businesses” (including subsidiaries of traditional “brick-and-mortar”
companies) that carry out most of their sales and services over the Internet.
Origin and development
Early networks
The first computer networks were dedicated special-purpose systems such as
SABRE (an airline reservation system) and AUTODIN I (a defense command-
and-control system), both designed and implemented in the late 1950s and
early 1960s. By the early 1960s computer manufacturers had begun to
use semiconductor technology in commercial products, and both conventional
batch-processing and time-sharing systems were in place in many large,
technologically advanced companies. Time-sharing systems allowed a
computer’s resources to be shared in rapid succession with multiple users,
cycling through the queue of users so quickly that the computer appeared
dedicated to each user’s tasks despite the existence of many others accessing
the system “simultaneously.” This led to the notion of sharing computer
resources (called host computers or simply hosts) over an entire network.
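The cycling described above is essentially what schedulers now call round-robin time-slicing. A toy simulation of the idea follows; the job names and run times are invented for illustration, and real time-sharing systems track far more state than a single remaining-time counter.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: give each job a fixed time slice (quantum)
    and cycle through the ready queue until every job finishes.

    `jobs` maps a job name to its total required run time.
    Returns the order in which jobs complete.
    """
    queue = deque(jobs.items())              # (name, remaining_time)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # the job runs for one slice
        if remaining > 0:
            queue.append((name, remaining))  # rejoin the back of the queue
        else:
            finished.append(name)
    return finished

print(round_robin({"alice": 3, "bob": 1, "carol": 2}, quantum=1))
# → ['bob', 'carol', 'alice']
```

With a small enough quantum, each user sees the machine respond as if it were dedicated to them, which is exactly the illusion the text describes.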
Host-to-host interactions were envisioned, along with access to specialized
resources (such as supercomputers and mass storage systems) and
interactive access by remote users to the computational powers of time-
sharing systems located elsewhere. These ideas were first realized
in ARPANET, which established the first host-to-host network connection on
October 29, 1969. It was created by the Advanced Research Projects Agency
(ARPA) of the U.S. Department of Defense. ARPANET was one of the first
general-purpose computer networks. It connected time-sharing computers at
government-supported research sites, principally universities in the United
States, and it soon became a critical piece of infrastructure for the computer
science research community in the United States. Tools and applications—
such as the simple mail transfer protocol (SMTP, commonly referred to as e-
mail), for sending short messages, and the file transfer protocol (FTP), for
longer transmissions—quickly emerged. In order to achieve cost-effective
interactive communications between computers, which typically
communicate in short bursts of data, ARPANET employed the new technology
of packet switching. Packet switching takes large messages (or chunks of
computer data) and breaks them into smaller, manageable pieces (known as
packets) that can travel independently over any available circuit to the target
destination, where the pieces are reassembled. Thus, unlike traditional voice
communications, packet switching does not require a single dedicated circuit
between each pair of users.
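The break-into-packets-and-reassemble cycle just described can be sketched in a few lines. This is a deliberately simplified model: real packet switching adds headers, checksums, routing and retransmission, and the 4-byte payload size below is an invented toy value.

```python
import random

MTU = 4  # toy maximum payload per packet, in bytes

def packetize(message):
    """Split a message into (sequence_number, payload) packets that can
    travel independently over any available route."""
    return [(seq, message[i:i + MTU])
            for seq, i in enumerate(range(0, len(message), MTU))]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"packet switching demo"
packets = packetize(msg)
random.shuffle(packets)           # packets may arrive out of order
assert reassemble(packets) == msg # sequence numbers restore the message
```

The sequence numbers are what free the network from needing a single dedicated circuit: any path may carry any packet, and order is restored only at the destination.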
Commercial packet networks were introduced in the 1970s, but these were
designed principally to provide efficient access to remote computers by
dedicated terminals. In effect, they replaced long-distance modem connections
with less-expensive "virtual" circuits over packet networks. In the United States,
Telenet and Tymnet were two such packet networks. Neither supported host-
to-host communications; in the 1970s this was still the province of the
research networks, and it would remain so for many years.
DARPA (Defense Advanced Research Projects Agency; formerly ARPA)
supported initiatives for ground-based and satellite-based packet networks.
The ground-based packet radio system provided mobile access to computing
resources, while the packet satellite network connected the United States with
several European countries and enabled connections with widely dispersed
and remote regions. With the introduction of packet radio, connecting a
mobile terminal to a computer network became feasible. However, time-
sharing systems were then still too large, unwieldy, and costly to be mobile or
even to exist outside a climate-controlled computing environment. A strong
motivation thus existed to connect the packet radio network to ARPANET in
order to allow mobile users with simple terminals to access the time-sharing
systems for which they had authorization. Similarly, the packet satellite
network was used by DARPA to link the United States with satellite terminals
serving the United Kingdom, Norway, Germany, and Italy. These terminals,
however, had to be connected to other networks in European countries in
order to reach the end users. Thus arose the need to connect the packet
satellite net, as well as the packet radio net, with other networks.
Foundation of the Internet
The Internet resulted from the effort to connect various research networks in
the United States and Europe. First, DARPA established a program to
investigate the interconnection of “heterogeneous networks.” This program,
called Internetting, was based on the newly introduced concept of open
architecture networking, in which networks with defined standard interfaces
would be interconnected by “gateways.” A working demonstration of the
concept was planned. In order for the concept to work, a new protocol had to
be designed and developed; indeed, a system architecture was also required.
In 1974 Vinton Cerf, then at Stanford University in California, and Robert
Kahn, then at DARPA, collaborated on a paper that first described such
a protocol and system architecture—namely, the transmission control
protocol (TCP), which enabled different types of machines on networks all
over the world to route and assemble data packets. TCP, which originally
included the Internet protocol (IP), a global addressing mechanism that
allowed routers to get data packets to their ultimate destination, formed
the TCP/IP standard, which was adopted by the U.S. Department of Defense in
1980. By the early 1980s the “open architecture” of the TCP/IP approach was
adopted and endorsed by many other researchers and eventually by
technologists and businessmen around the world.
By the 1980s other U.S. governmental bodies were heavily involved with
networking, including the National Science Foundation (NSF), the Department
of Energy, and the National Aeronautics and Space Administration (NASA).
While DARPA had played a seminal role in creating a small-scale version of
the Internet among its researchers, NSF worked with DARPA to expand access
to the entire scientific and academic community and to make TCP/IP the
standard in all federally supported research networks. In 1985–86 NSF
funded the first five supercomputing centres—at Princeton University,
the University of Pittsburgh, the University of California, San Diego,
the University of Illinois, and Cornell University. In the 1980s NSF also funded
the development and operation of the NSFNET, a national “backbone”
network to connect these centres. By the late 1980s the network was
operating at millions of bits per second. NSF also funded various nonprofit
local and regional networks to connect other users to the NSFNET. A few
commercial networks also began in the late 1980s; these were soon joined by
others, and the Commercial Internet Exchange (CIX) was formed to allow
transit traffic between commercial networks that otherwise would not have
been allowed on the NSFNET backbone. In 1995, after extensive review of the
situation, NSF decided that support of the NSFNET infrastructure was no
longer required, since many commercial providers were now willing and able
to meet the needs of the research community, and its support was withdrawn.
Meanwhile, NSF had fostered a competitive collection of commercial Internet
backbones connected to one another through so-called network access points
(NAPs).
From the Internet’s origin in the early 1970s, control of it steadily devolved
from government stewardship to private-sector participation and finally to
private custody with government oversight and forbearance. Today a loosely
structured group of several thousand interested individuals known as the
Internet Engineering Task Force participates in a grassroots development
process for Internet standards. Internet standards are maintained by the
nonprofit Internet Society, an international body with headquarters in Reston,
Virginia. The Internet Corporation for Assigned Names and Numbers (ICANN),
another nonprofit, private organization, oversees various aspects of policy
regarding Internet domain names and numbers.
Commercial expansion
The rise of commercial Internet services and applications helped to fuel a
rapid commercialization of the Internet. This phenomenon was the result of
several other factors as well. One important factor was the introduction of
the personal computer and the workstation in the early 1980s—a
development that in turn was fueled by unprecedented progress in integrated
circuit technology and an attendant rapid decline in computer prices. Another
factor, which took on increasing importance, was the emergence
of Ethernet and other “local area networks” to link personal computers. But
other forces were at work too. Following the restructuring of AT&T in 1984,
NSF took advantage of various new options for national-level digital backbone
services for the NSFNET. In 1988 the Corporation for National
Research Initiatives received approval to conduct an experiment linking a
commercial e-mail service (MCI Mail) to the Internet. This application was the
first Internet connection to a commercial provider that was not also part of
the research community. Approval quickly followed to allow other e-mail
providers access, and the Internet began its first explosion in traffic.
In 1993 federal legislation allowed NSF to open the NSFNET backbone to
commercial users. Prior to that time, use of the backbone was subject to an
“acceptable use” policy, established and administered by NSF, under which
commercial use was limited to those applications that served the research
community. NSF recognized that commercially supplied network services,
now that they were available, would ultimately be far less expensive than
continued funding of special-purpose network services.
Also in 1993 the University of Illinois made widely available Mosaic, a new
type of computer program, known as a browser, that ran on most types of
computers and, through its “point-and-click” interface, simplified access,
retrieval, and display of files through the Internet. Mosaic incorporated a set
of access protocols and display standards originally developed at the
European Organization for Nuclear Research (CERN) by Tim Berners-Lee for a
new Internet application called the World Wide Web (WWW). In
1994 Netscape Communications Corporation (originally called Mosaic
Communications Corporation) was formed to further develop the Mosaic
browser and server software for commercial use. Shortly thereafter, the
software giant Microsoft Corporation became interested in supporting
Internet applications on personal computers (PCs) and developed its Internet
Explorer Web browser (based initially on Mosaic) and other programs. These
new commercial capabilities accelerated the growth of the Internet, which as
early as 1988 had already been growing at the rate of 100 percent per year.
By the late 1990s there were approximately 10,000 Internet service
providers (ISPs) around the world, more than half located in the United States.
However, most of these ISPs provided only local service and relied on access
to regional and national ISPs for wider connectivity. Consolidation began at
the end of the decade, with many small to medium-size providers merging or
being acquired by larger ISPs. Among these larger providers were groups such
as America Online, Inc. (AOL), which started as a dial-up information service
with no Internet connectivity but made a transition in the late 1990s to
become the leading provider of Internet services in the world—with more
than 25 million subscribers by 2000 and with branches in Australia,
Europe, South America, and Asia. Widely used Internet “portals” such
as AOL, Yahoo!, Excite, and others were able to command advertising fees
owing to the number of “eyeballs” that visited their sites. Indeed, during the
late 1990s advertising revenue became the main quest of many Internet sites,
some of which began to speculate by offering free or low-cost services of
various kinds that were visually augmented with advertisements. By 2001 this
speculative bubble had burst.
The 21st century and future directions
After the collapse of the Internet bubble came the emergence of what was
called “Web 2.0,” an Internet with emphasis on social networking and content
generated by users, and cloud computing. Social media services such
as Facebook, Twitter, and Instagram became some of the most popular
Internet sites through allowing users to share their own content with their
friends and the wider world. Mobile phones became able to access the Web,
and, with the introduction of smartphones such as Apple’s iPhone in 2007, the
number of Internet users worldwide exploded from about one sixth
of the world population in 2005 to more than half in 2020.
The increased availability of wireless access enabled applications that were
previously uneconomical. For example, global positioning systems (GPS)
combined with wireless Internet access help mobile users to locate alternate
routes, generate precise accident reports and initiate recovery services, and
improve traffic management and congestion control. In addition to
smartphones, wireless laptop computers, and personal digital assistants
(PDAs), wearable devices with voice input and special display glasses were
developed.
While the precise structure of the future Internet is not yet clear, many
directions of growth seem apparent. One is toward higher backbone and
network access speeds. Backbone data rates of 100 billion bits (100 gigabits)
per second are readily available today, but data rates of 1 trillion bits (1
terabit) per second or higher will eventually become commercially feasible. If
the development of computer hardware, software, applications, and local
access keeps pace, it may be possible for users to access networks at speeds of
100 gigabits per second. At such data rates, high-resolution video—indeed,
multiple video streams—would occupy only a small fraction of available
bandwidth. Remaining bandwidth could be used to
transmit auxiliary information about the data being sent, which in turn would
enable rapid customization of displays and prompt resolution of certain local
queries. Much research, both public and private, has gone
into integrated broadband systems that can simultaneously carry multiple
signals—data, voice, and video. In particular, the U.S. government has funded
research to create new high-speed network capabilities dedicated to the
scientific-research community.
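The bandwidth claim above can be checked with quick arithmetic. The following is a minimal sketch in Python, assuming a hypothetical 25-megabit-per-second high-resolution video stream; the text gives no specific per-stream figure, so that number is an illustrative assumption:

```python
# Back-of-the-envelope check of the claim that high-resolution video
# would occupy only a small fraction of a 100-gigabit-per-second link.
# The 25 Mbit/s per-stream figure is an assumption for illustration;
# it is not stated in the text.
access_rate_bps = 100e9       # 100 gigabits per second access speed
video_stream_bps = 25e6       # assumed rate of one high-resolution stream

fraction = video_stream_bps / access_rate_bps
print(f"One stream uses {fraction:.4%} of the link")
print(f"Streams that would fit: {access_rate_bps / video_stream_bps:.0f}")
```

Even with a much higher assumed per-stream rate, the fraction remains tiny, which is the point of the passage: nearly all of the bandwidth would be left over for auxiliary information.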
It is clear that communications connectivity will be an important function of a
future Internet as more machines and devices are interconnected. In 1998,
after four years of study, the Internet Engineering Task Force published a new
128-bit IP address standard (IPv6) intended to replace the conventional 32-bit
standard (IPv4). By allowing a vast increase in the number of available
addresses (2^128, as opposed to 2^32), this standard makes it possible to assign unique
addresses to almost every electronic device imaginable. Thus, through the
“Internet of things,” in which all machines and devices could be connected to
the Internet, the expressions “wired” office, home, and car may all take on new
meanings, even if the access is really wireless.
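The address arithmetic above can be illustrated with Python’s standard ipaddress module; the address 2001:db8::1 used below comes from the range reserved for documentation and is an illustrative assumption, not drawn from the text:

```python
# Comparing the 32-bit and 128-bit IP address spaces discussed above.
import ipaddress

ipv4_space = 2 ** 32     # conventional 32-bit standard
ipv6_space = 2 ** 128    # 128-bit standard published in 1998

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"The 128-bit space is {ipv6_space // ipv4_space:,} times larger")

# The 128-bit width is visible on any parsed IPv6 address:
addr = ipaddress.ip_address("2001:db8::1")
print(addr.version, addr.max_prefixlen)   # 6 128
```

The 2^96-fold increase is what makes it practical to give a unique address to every device in an "Internet of things."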
The dissemination of digitized text, pictures, and audio and video recordings
over the Internet, primarily available today through the World Wide Web, has
resulted in an information explosion. Clearly, powerful tools are needed to
manage network-based information. Information available on the Internet
today may not be available tomorrow without careful attention being paid to
preservation and archiving techniques. The key to making information
persistently available is infrastructure and the management of that
infrastructure. Repositories of information, stored as digital objects, will soon
populate the Internet. At first these repositories may be dominated by digital
objects specifically created and formatted for the World Wide Web, but in
time they will contain objects of all kinds in formats that will be dynamically
resolvable by users’ computers in real time. Movement of digital objects from
one repository to another will still leave them available to users who are
authorized to access them, while replicated instances of objects in multiple
repositories will provide alternatives to users who are better able to interact
with certain parts of the Internet than with others. Information will have its
own identity and, indeed, become a “first-class citizen” on the Internet.
Society and the Internet
What began as a largely technical and limited universe of designers and users
became one of the most important mediums of the late 20th and early 21st
centuries. As the Pew Charitable Trusts observed in 2004, it took 46 years to
wire 30 percent of the United States for electricity; it took only 7 years for the
Internet to reach that same level of connection to American homes. By 2005,
68 percent of American adults and 90 percent of American teenagers had used
the Internet. Europe and Asia were at least as well connected as the United
States. Nearly half of the citizens of the European Union were online, and even
higher rates were found in the Scandinavian countries. There was a wide variance
in Asian countries; for example, by 2005 Taiwan, Hong Kong, and Japan had at
least half of their populations online, whereas India, Pakistan, and Vietnam
had less than 10 percent. South Korea was the world leader in connecting its
population to the Internet through high-speed broadband connections.
Such statistics can chart the Internet’s growth, but they offer few insights into
the changes wrought as users—individuals, groups, corporations, and
governments—have embedded the technology into everyday life. The Internet
is now as much a lived experience as a tool for performing particular tasks,
offering the possibility of creating an environment or virtual reality in which
individuals might work, socially interact with others, and perhaps even live
out their lives.
History, community, and communications
Two agendas
The Internet has evolved from the integration of two very different
technological agendas—the Cold War networking of the U.S. military and
the personal computer (PC) revolution. The first agenda can be dated to 1973,
when the Defense Advanced Research Projects Agency (DARPA) sought to
create a communications network that would support the transfer of large
data files between government and government-sponsored academic-
research laboratories. The result was the ARPANET, a robust decentralized
network that supported a vast array of computer hardware. Initially,
ARPANET was the preserve of academics and corporate researchers with
access to time-sharing mainframe computer systems. Computers were large
and expensive; most computer professionals could not imagine anyone
needing, let alone owning, his own “personal” computer. And yet Joseph
Licklider, one of the driving forces at DARPA for computer networking, stated
that online communication would “change the nature and value of
communication even more profoundly than did the printing press and the
picture tube.”
The second agenda began to emerge in 1977 with the introduction of
the Apple II, the first affordable computer for individuals and small
businesses. Created by Apple Computer, Inc. (now Apple Inc.), the Apple II
was popular in schools by 1979, but in the corporate market it was
stigmatized as a game machine. The task of cracking the business market fell
to IBM. In 1981 the IBM PC was released and immediately standardized the
PC’s basic hardware and operating system—so much so that first PC-
compatible and then simply PC came to mean any personal computer built
along the lines of the IBM PC. A major centre of the PC revolution was the San
Francisco Bay area, where several major research institutions funded by
DARPA—Stanford University, the University of California, Berkeley, and Xerox
PARC—provided much of the technical foundation for Silicon Valley. It was no
small coincidence that Apple’s two young founders—Steven Jobs and Stephen
Wozniak—worked as interns in the Stanford University Artificial Intelligence
Laboratory and at the nearby Hewlett-Packard Company. The Bay Area’s
counterculture also figured prominently in the PC’s history. Electronic
hobbyists saw themselves in open revolt against the “priesthood” of the
mainframe computer and worked together in computer-enthusiast groups to
spread computing to the masses.
The WELL
Why does this matter? The military played an essential role in shaping the
Internet’s architecture, but it was through the counterculture that many of the
practices of contemporary online life emerged. A telling example is the
early electronic bulletin board system (BBS), such as the WELL (Whole Earth
’Lectronic Link). Established in 1985 by American publisher Stewart Brand,
who viewed the BBS as an extension of his Whole Earth Catalog, the WELL
was one of the first electronic communities organized around forums
dedicated to particular subjects such as parenting and Grateful Dead concerts.
The latter were an especially popular topic of online conversation, but it was
in the parenting forum where a profound sense of community and belonging
initially appeared. For example, when one participant’s child was diagnosed
with leukemia, members of the forum went out of their way either to find
health resources or to comfort the distressed parents. In this one instance,
several features still prevalent in the online world can be seen. First,
geography was irrelevant. WELL members in California and New York could
bring their knowledge together within the confines of a forum—and could do
so collectively, often exceeding the experience available to any local physician
or medical centre. This marshaling of shared resources persists to this day as
many individuals use the Internet to learn more about their ailments, find
others who suffer from the same disease, and learn about drugs, physicians,
and alternative therapies.
Another feature that distinguished the WELL forums was the use of
moderators who could interrupt and focus discussion while
also disciplining users who broke the rather loose rules. “Flame wars” (crass,
offensive, or insulting exchanges) were possible, but anyone dissatisfied in
one forum was free to organize another. In addition, the WELL was intensely
democratic. WELL forums were the original chat rooms—online spaces where
individuals possessing similar interests might congregate, converse, and even
share their physical locations to facilitate meeting in person. Finally, the WELL
served as a template for other online communities dedicated to subjects
as diverse as Roman Catholicism, liberal politics, gardening, and automobile
modification.
Instant broadcast communication
For the individual, the Internet opened up new communication
possibilities. E-mail led to a substantial decline in traditional “snail
mail.” Instant messaging (IM), or text messaging, expanded, especially among
youth, with the convergence of the Internet and cellular telephone access to
the Web. Indeed, IM became a particular problem in classrooms, with students
often surreptitiously exchanging notes via wireless communication devices.
More than 50 million American adults, including 11 million at work, used IM.
From mailing lists to “buddy lists,” e-mail and IM have been used to create
“smart mobs” that converge in the physical world. Examples include protest
organizing, spontaneous performance art, and shopping. Obviously, people
congregated before the Internet existed, but the change wrought by mass e-
mailings was in the speed of assembling such events. In February 1999, for
example, activists began planning protests against the November 1999 World
Trade Organization (WTO) meetings in Seattle, Washington. Using the
Internet, organizers mobilized more than 50,000 individuals from around the
world to engage in demonstrations—at times violent—that effectively altered
the WTO’s agenda.
More than a decade later, in June 2010 Egyptian computer engineer Wael
Ghonim anonymously created a page titled “We Are All Khaled Said” on the
social media site Facebook to publicize the death of a 28-year-old Egyptian
man beaten to death by police. The page garnered hundreds of thousands of
members, becoming an online forum for the discussion of police brutality in
Egypt. After a popular uprising in Tunisia in January 2011, Ghonim and
several other Internet democracy activists posted messages to their sites
calling for similar action in Egypt. Their social media campaign helped spur
mass demonstrations that forced Egyptian Pres. Hosni Mubarak from power.
(The convergence of mobs is not without some techno-silliness. “Flash
mobs”—groups of strangers who are mobilized on short notice via Web sites,
online discussion groups, or e-mail distribution lists—often take part in
bizarre though usually harmless activities in public places, such as engaging in
mass free-for-alls around the world on Pillow Fight Day.)
In the wake of catastrophic disasters, citizens have used the Internet to donate
to charities in an unprecedented fashion. Others have used the Internet to
reunite family members or to match lost pets with their owners. The role of
the Internet in responding to disasters, both natural and deliberate, remains
the topic of much discussion, as it is unclear whether the Internet actually can
function in a disaster area when much of the infrastructure is destroyed.
Certainly during the September 11, 2001, attacks, people found it easier to
communicate with loved ones in New York City via e-mail than through the
overwhelmed telephone network.
Following the earthquake that struck Haiti in January 2010, electronic media
emerged as a useful mode for connecting those separated by the quake and for
coordinating relief efforts. Survivors who were able to access the Internet—
and friends and relatives abroad—took to social networking sites such as
Facebook in search of information on those missing in the wake of
the catastrophe. Feeds from those sites also assisted aid organizations in
constructing maps of the areas affected and in determining where to channel
resources. The many Haitians lacking Internet access were able to contribute
updates via text messaging on mobile phones.
Social gaming and social networking
One-to-one or even one-to-many communication is only the most elementary
form of Internet social life. The very nature of the Internet makes spatial
distances largely irrelevant for social interactions. Online gaming moved from
simply playing a game with friends to a rather complex form of social life in
which the game’s virtual reality spills over into the physical world. The case
of World of Warcraft, a popular electronic game with several million players,
is one example. Property acquired in the game can be sold online, although
such secondary economies are discouraged by Blizzard Entertainment, the
publisher of World of Warcraft, as a violation of the game’s terms of service. In
any case, what does it mean that one can own virtual property and that
someone is willing to pay for this property with real money? Economists have
begun studying such virtual economies, some of which now exceed the gross
national product of countries in Africa and Asia. In fact, virtual economies
have given economists a means of running controlled experiments.
Millions of people have created online game characters for entertainment
purposes. Gaming creates an online community, but it also allows for a
blurring of the boundaries between the real world and the virtual one. In
Shanghai one gamer stabbed and killed another one in the real world over a
virtual sword used in Legend of Mir 3. Although attempts were made to
involve the authorities in the original dispute, the police found themselves at a
loss prior to the murder because the law did not acknowledge the existence of
virtual property. In South Korea violence surrounding online gaming happens
often enough that police refer to such murders as “off-line PK,” a reference to
player killing (PK), or player-versus-player lethal contests, which are allowed
or encouraged in some games. By 2001 crime related to Lineage had forced
South Korean police to create special cybercrime units to patrol both within
the game and off-line. Potential problems from such games are not limited to
crime. Virtual life can be addictive. Reports of players neglecting family,
school, work, and even their health to the point of death have become more
common.
Social networking sites (SNSs) emerged as a significant online phenomenon
after the bursting of the “Internet bubble” in the early 2000s. SNSs use
software to facilitate online communities where members with shared
interests swap files, photographs, videos, and music, send messages and chat,
set up blogs (Web diaries) and discussion groups, and share opinions. Early
social networking services included Classmates.com, which connected former
schoolmates, and Yahoo! 360° and SixDegrees, which built networks of
connections via friends of friends. In the postbubble era the leading social
networking services were Myspace, Facebook, Friendster, Orkut,
and LinkedIn. LinkedIn became an effective tool for business staff recruiting.
Businesses have begun to exploit these networks, drawing on social
networking research and theory, which suggests that finding key “influential”
members of existing networks of individuals can give those businesses access
to and credibility with the whole network.
Advertising and e-commerce
Nichification allows consumers to find what they want, but it also provides
opportunities for advertisers to find consumers. For example, most search
engines generate revenue by matching ads to an individual’s particular search
query. Among the greatest challenges facing the Internet’s continued
development is the task of reconciling advertising and commercial needs with
the right of Internet users not to be bombarded by “pop-up” Web pages
and spam (unsolicited e-mail).
Nichification also opens up important e-commerce opportunities. A bookstore
can carry only so much inventory on its shelves, which thereby limits its
collection to books with broad appeal. An online bookstore can “display”
nearly everything ever published. Although traditional bookstores often have
a special-order department, consumers have taken to searching and ordering
from online stores from the convenience of their homes and offices.
Although books can be made into purely digital artifacts, “e-books” have not
sold nearly as well as digital music. In part, this disparity is due to the need for
an e-book reader to have a large, bright screen, which adds to the
display’s cost and weight and leads to more-frequent battery replacement.
Also, it is difficult to match the handy design and low cost of an old-fashioned
paperback book. Interestingly, it turns out that listeners download from
online music vendors as many obscure songs as big record company hits. Just
a few people interested in some obscure song are enough to make it
worthwhile for a vendor to store it electronically for sale over the Internet.
What makes the Internet special here is not only its ability to match buyers
and sellers quickly and relatively inexpensively but also that the Internet and
the digital economy in general allow for a flowering of multiple tastes—in
games, people, and music.
Information and copyright
Education
Commerce and industry are certainly arenas in which the Internet has had a
profound effect, but what of the foundational institutions of any society—
namely, those related to education and the production of knowledge? Here the
Internet has had a variety of effects, some of which are quite disturbing. There
are more computers in the classroom than ever before, but there is scant
evidence that they enhance the learning of basic skills in reading, writing, and
arithmetic. And while access to vast amounts of digital information is
convenient, it has also become apparent that most students now see libraries
as antiquated institutions better used for their computer terminals than for
their book collections. As teachers at all education levels can attest, students
typically prefer to research their papers by reading online rather than
wandering through a library’s stacks.
In a related effect the Internet has brought plagiarism into the computer era in
two distinct senses. First, electronic texts have made it simple for students to
“cut and paste” published sources (e.g., encyclopaedia articles) into their own
papers. Second, although students could always get someone to write their
papers for them, it is now much easier to find and purchase anonymous
papers at Web sites and to even commission original term papers for a fixed
fee. Ironically, what the Internet gives, it also takes away. Teachers now have
access to databases of electronically submitted papers and can easily compare
their own students’ papers against a vast archive of sources. Even a simple
online search can sometimes find where one particularly well-turned phrase
originally appeared.
File sharing
College students have been at the leading edge of the growing awareness of
the centrality of intellectual property in a digital age. When American college
student Shawn Fanning invented Napster in 1999, he set in motion an ongoing
legal battle over digital rights. Napster was a file-sharing system that allowed
users to share electronic copies of music online. The problem was obvious:
recording companies were losing revenues as one legal copy of a song was
shared among many people. Although the record companies succeeded in
shutting down Napster, they found themselves having to contend with a new
form of file sharing, P2P (“peer-to-peer”). In P2P there is no central
administrator to shut down as there had been with Napster. Initially, the
recording industry sued the makers of P2P software and a few of the
most prolific users—often students located on university campuses with
access to high-speed connections for serving music and, later, movie files—in
an attempt to discourage the millions of people who regularly used the
software. Still, even while some P2P software makers have been held liable for
losses that the copyright owners have incurred, more-devious schemes
for circumventing apprehension have been invented.
The inability to prevent file sharing has led the recording and movie
industries to devise sophisticated copy protection on their CDs and DVDs. In a
particularly controversial incident, Sony Corporation introduced CDs into the
market in 2005 with copy protection that involved a special viruslike code
that hid on a user’s computer. This code, however, could also be exploited
by virus writers to gain control of users’ machines.
Electronic publishing
The Internet has become an invaluable and discipline-
transforming environment for scientists and scholars. In 2004 Google began
digitizing public-domain and out-of-print materials from several cooperating
libraries in North America and Europe, such as the University of
Michigan library, which made some seven million volumes available. Although
some authors and publishers challenged the project for fear of losing control
of copyrighted material, similar digitization projects were launched
by Microsoft Corporation and the online book vendor Amazon.com, although
the latter company proposed that each electronic page would be retrieved for
a small fee shared with the copyright holders.
The majority of academic journals are now online and searchable. This has
created a revolution in scholarly publishing, especially in the sciences and
engineering. For example, arXiv.org has transformed the rate at
which scientists publish and react to new theories and experimental data.
Begun in 1991, arXiv.org is an online archive in which physicists,
mathematicians, computer scientists, and computational biologists upload
research papers long before they appear in a print journal. The articles
are then open to the scrutiny of the entire scientific community, rather than to
one or two referees selected by a journal editor. In this way scientists around
the world can receive an abstract of a paper as soon as it has been uploaded
into the repository. If the abstract piques a reader’s interest, the entire paper
can be downloaded for study. Cornell University in Ithaca, New York, and the
U.S. National Science Foundation support arXiv.org as an international
resource.
While arXiv.org deals with articles that might ultimately appear in print, it is
also part of a larger shift in the nature of scientific publishing. In the print
world a handful of companies control the publication of most scientific
journals, and the price of institutional subscriptions is frequently exorbitant.
This has led to a growing movement to create online-only journals that are
accessible for free to the entire public—a public that often supports the
original research with its taxes. For example, the Public Library of Science
publishes online journals of biology and medicine that compete with
traditional print journals. There is no difference in how their articles are
vetted for publication; the difference is that the material is made available for
free. Unlike other creators of content, academics are not paid for what they
publish in scholarly journals, nor are those who review the articles. Journal
publishers, on the other hand, have long received subsidies from the scientific
community, even while charging that community high prices for its own work.
Although some commercial journals have reputations that can advance the
careers of those who publish in them, the U.S. government has taken the side
of the “open source” publishers and demanded that government-financed
research be made available to taxpayers as soon as it has been published.
In addition to serving as a medium for the exchange of articles, the Internet
can facilitate the discussion of scientific work long before it appears in print.
Scientific blogs—online journals kept by individuals or groups of researchers
—have flourished as a form of online salon for the discussion of ongoing
research. There are pitfalls to such practices, though. Astronomers who in
2005 posted abstracts detailing the discovery of a potential 10th planet found
that other researchers had used their abstracts to find the new astronomical
body themselves. In order to claim priority of discovery, the original group
rushed to hold a news conference rather than waiting to announce their work
at an academic conference or in a peer-reviewed journal.
Politics and culture
Free speech
The Internet has broadened political participation by ordinary citizens,
especially through the phenomenon of blogs. Many blogs are simply online
diaries or journals, but others have become sources of information and
opinion that challenge official government pronouncements or the
mainstream news media. By 2005 there were approximately 15 million blogs,
a number that was doubling roughly every six months. The United States
dominates the blog universe, or “blogosphere,” with English as the lingua
franca, but blogs in other languages are proliferating. In one striking
development, the Iranian national language, Farsi, has become the commonest
Middle Eastern language in the blogosphere. Despite the Iranian government’s
attempts to limit access to the Internet, some 60,000 active Farsi blogs are
hosted at a single service provider, PersianBlog.
The Internet poses a particular problem for autocratic regimes that restrict
access to independent sources of information. The Chinese government has
been particularly successful at policing the public’s access to the Internet,
beginning with its “Great Firewall of China” that automatically blocks access to
undesirable Web sites. The state also actively monitors Chinese Web sites to
ensure that they adhere to government limits on acceptable discourse and
tolerable dissent. In 2000 the Chinese government banned nine types of
information, including postings that might “harm the dignity and interests of
the state” or “disturb social order.” Users must enter their national
identification number in order to access the Internet at cybercafés.
Also, Internet service providers are responsible for the content on their
servers. Hence, providers engage in a significant amount of self-censorship in
order to avoid problems with the law, which may result in losing access to the
Internet or even serving jail time. Finally, the authorities are willing to shut
Web sites quickly and with no discussion. Of course, the state’s efforts are not
completely effective. Information can be smuggled into China on DVDs, and
creative Chinese users can circumvent the national firewall with proxy servers
—Web sites that allow users to move through the firewall to an ostensibly
acceptable Web site where they can connect to the rest of the Internet.
Others have taken advantage of the Internet’s openness to spread a variety of
political messages. The Ukrainian Orange Revolution of 2004 had a significant
Internet component. More troubling is the use of the Internet
by terrorist groups such as al-Qaeda to recruit members, pass along
instructions to sleeper cells, and celebrate their own horrific activities.
The Iraq War was fought not only on the ground but also online as al-Qaeda
operatives used specific Web sites to call their followers to jihad. Al-Qaeda
used password-protected chat rooms as key recruitment centres, as well as
Web sites to test potential recruits before granting them access to the group’s
actual network. On the other hand, posting material online is also a potential
vulnerability. Gaining access to the group’s “Jihad Encyclopaedia” has enabled
security analysts to learn about potential tactics, and Arabic-speaking
investigators have learned to infiltrate chat rooms and gain access to
otherwise hidden materials.
Political campaigns and muckraking
During the 2004 U.S. presidential campaign, blogs became a locus for often
heated exchanges about the candidates. In fact, the candidates themselves
used blogs and Web sites for fund-raising and networking. One of the first
innovators was Howard Dean, an early front-runner in the Democratic
primaries, whose campaign used a Web site for fund-raising and organizing
local meetings. In particular, Dean demonstrated that a modern presidential
campaign could use the Internet to galvanize volunteer campaign workers and
to raise significant sums from many small donations. In a
particularly astute move, Dean’s campaign set up a blog for comments from
his supporters, and it generated immediate feedback on certain proposals
such as refusing to accept public campaign funding. Both the George W.
Bush and the John Kerry presidential campaigns, as well as the Democratic
and Republican parties, came to copy the practices pioneered by Dean and his
advisers. In addition, changes in U.S. campaign finance laws allowed for the
creation of “527s,” independent action groups such as Moveon.org that used
the Internet to raise funds and rally support for particular issues and
candidates.
By 2005 it was widely agreed that politicians would have to deal not only with
the mainstream media (i.e., newspapers, magazines, radio, and television) but
also with a new phenomenon—the blogosphere. Although blogs do not have
editors or fact checkers, they have benefited from scandals in the mainstream
media, which have made many readers more skeptical of all sources of
information. Also, bloggers have forced mainstream media to confront topics
they might otherwise ignore. Some pundits have gone so far as to predict that
blogs and online news sources will replace the mainstream media, but it is far
more likely that these diverse sources of information will complement each
other. Indeed, falling subscription rates have led many newspaper publishers
to branch into electronic editions and to incorporate editorial blogs and
forums for reader feedback; thus, some of the distinctions between the media
have already been blurred.
Privacy and the Internet
Concerns about privacy in cyberspace are an issue of international debate. As
reading and writing, health care and shopping, and sex and gossip
increasingly take place in cyberspace, citizens around the world are
concerned that the most intimate details of their daily lives are being
monitored, searched, recorded, stored, and often misinterpreted when taken
out of context. For many, the greatest threats to privacy come not from state
agents but from the architecture of e-commerce itself, which is based, in
unprecedented ways, on the recording and exchange of intimate personal
information.
“Getting over it”
The threats to privacy in the new Internet age were crystallized in 2000 by the
case of DoubleClick, Inc. For a few years DoubleClick, the Internet’s
largest advertising company, had been compiling detailed information on the
browsing habits of millions of World Wide Web users by placing “cookie” files
on computer hard drives. Cookies are electronic footprints that allow Web
sites and advertising networks to monitor people’s online movements with
telescopic precision—including the search terms people enter as well as the
articles they skim and how long they spend skimming them. As long as users
were confident that their virtual identities were not being linked to their
actual identities, many were happy to accept DoubleClick cookies in exchange
for the convenience of navigating the Web more efficiently. Then in November
1999 DoubleClick bought Abacus Direct, which held a database of names,
addresses, and information about the off-line buying habits of 90 million
households compiled from the largest direct-mail catalogs and retailers in the
nation. Two months later DoubleClick began compiling profiles linking
individuals’ actual names and addresses to Abacus’s detailed records of their
online and off-line purchases. Suddenly, shopping that once seemed
anonymous was being archived in personally identifiable dossiers.
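The cookie mechanism described above can be sketched with Python's standard-library `http.cookies` module; the visitor identifier used below is purely hypothetical:

```python
from http.cookies import SimpleCookie

# A server "sets" a tracking cookie by sending a Set-Cookie header.
# The visitor identifier here is purely hypothetical.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["path"] = "/"
header = cookie["visitor_id"].OutputString()  # the Set-Cookie header value

# On later requests the browser echoes the cookie back, letting the
# server link separate page views to the same pseudonymous visitor.
echoed = SimpleCookie()
echoed.load("visitor_id=abc123")
print(echoed["visitor_id"].value)  # abc123
```

Linking that pseudonymous identifier to a real name and address, as DoubleClick proposed, is what turned routine cookies into a privacy controversy.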
Under pressure from privacy advocates and dot-com investors, DoubleClick
announced in 2000 that it would postpone its profiling scheme until the U.S.
government and the e-commerce industry had agreed on privacy standards.
Two years later it settled consolidated class-action lawsuits from several
states, agreeing to pay legal expenses of up to $1.8 million, to tell consumers
about its data-collection activities in its online privacy policy, and to get
permission before combining a consumer’s personally identifiable data with
his or her Web-surfing history. DoubleClick also agreed to pay hundreds of
thousands of dollars to settle differences with attorneys general from 10
states who were investigating its information gathering.
The retreat of DoubleClick might have seemed like a victory for privacy, but it
was only an early battle in a much larger war—one in which many observers
still worry that privacy may be vanquished. “You already have zero privacy—
get over it,” Scott McNealy, the CEO of Sun Microsystems, memorably
remarked in 1999 in response to a question at a product show at which Sun
introduced a new interactive technology called Jini. Sun’s cheerful Web
site promised to usher in the “networked home” of the future, in which the
company’s “gateway” software would operate “like a congenial party host
inside the home to help consumer appliances communicate intelligently with
each other and with outside networks.” In this chatty new world of electronic
networking, a household’s refrigerator and coffeemaker could talk to a
television, and all three could be monitored from the office computer. The
incessant information exchanged by these gossiping appliances might, of
course, generate detailed records of the most intimate details of their owners’
daily lives.
New evidence seemed to emerge every day to support McNealy’s grim verdict
about the triumph of online surveillance technology over privacy. A survey of
nearly a thousand large companies conducted by the American Management
Association in 2000 found that more than half of the large American firms
surveyed monitored the Internet connections of their employees. Two-thirds
of the firms monitored e-mail messages, computer files, or telephone
conversations, up from only one-third three years earlier. Some companies
used Orwellian computer software with names like Spector, Assentor, or
Investigator that could monitor and record every keystroke on the computer
with video-like precision. These virtual snoops could also be programmed to
screen all incoming and outgoing e-mail for forbidden words and phrases—
such as those involving racism, body parts, or the name of the boss—and then
forward suspicious messages to a supervisor for review.
Issues in new media
Changes in the delivery of books, music, and television extended the
technologies of surveillance beyond the office, blurring the boundaries
between work and home. The same technologies that make it possible to
download digitally stored books, songs, and movies directly onto computer
hard drives or mobile devices could make it possible for publishers and
entertainment companies to record and monitor each individual’s browsing
habits with unsettling specificity. Television too is being redesigned to create
precise records of viewing habits. For instance, digital video recorders make it
possible to store hours of television programs and enable viewers to skip
commercials and to create their own program lineups. The data generated by
such actions could create viewer profiles, which could then be used to make
viewing suggestions and to record future shows.
Privacy of cell phone communication also has become an issue, as in 2010
when BlackBerry smartphone maker RIM reacted to demands from the United
Arab Emirates (U.A.E.), Saudi Arabia, and India that security forces from those
countries be given the ability to intercept communications such as e-mail and
instant messages from BlackBerry users within their borders. The U.A.E. later
canceled a planned ban on the BlackBerry service, saying that it had reached
an agreement with RIM, which declined to reveal its discussions with the
governments of other countries. The demands were part of a rising tide of
security demands from national governments that cited the need to monitor
criminals and terrorists who used wireless communications.
The United States is not immune to these controversies. In 2010 Pres. Barack
Obama’s administration said that in order to prevent terrorism and identify
criminals, it wanted Congress to require that all Internet services be capable
of complying with wiretap orders. The broad requirement would include
Internet phone services, social networking services, and other types of
Internet communication, and it would enable even encrypted messages to be
decoded and read—something that required considerable time and effort.
Critics complained that the monitoring proposal challenged the ideals of
privacy and the lack of centralized authority for which the Internet had long
been known.
Photos and videos also emerged as unexpected threats to personal privacy.
“Geotags” are created when photos or videos are embedded with geographic
location data from GPS chips inside cameras, including those in cell phones.
When images are uploaded to the Internet, the geotags allow homes or other
personal locations within the images to be precisely located by those who
view the photos online. The security risk is not widely understood by the
public, however, and in some cases disabling the geotag feature in certain
models of digital cameras and camera-equipped smartphones is complicated.
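A geotag's coordinates are stored in the photo's EXIF data as degrees, minutes, and seconds plus a hemisphere reference; converting them to the decimal degrees used by mapping services is simple arithmetic (a minimal sketch; the coordinate shown is arbitrary):

```python
# Convert an EXIF-style (degrees, minutes, seconds, hemisphere) GPS
# value to decimal degrees; "S" and "W" hemispheres are negative.
def dms_to_decimal(degrees, minutes, seconds, ref):
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Example: 40 deg 26' 46" N is roughly latitude 40.4461.
print(round(dms_to_decimal(40, 26, 46, "N"), 4))  # 40.4461
```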
Google’s Street View photo-mapping service caused privacy concerns when
the company disclosed that it had been recording locations and some data
from unprotected household wireless networks as it took pictures. The
company said that the data had been gathered inadvertently. German officials
objected to Google’s actions on the basis of Germany’s strict privacy laws, and,
although German courts decided against the objections, Google did not expand
its Street View service in Germany beyond the handful of urban centres that it
had already photo-mapped. The controversy led to other investigations of the
Street View service by several U.S. states and the governments of several
countries (including the Czech Republic, which eventually refused to grant
Google permission to offer the Street View service there).
The social network Facebook has been a particular focus of privacy concerns
on the Internet. Over the lifetime of the site, the default privacy settings for a
Facebook user’s information evolved from most content being accessible only
to a user’s friends or friends of friends to being accessible to everyone. In
December 2009 Facebook rolled out a new privacy settings update that
allowed users to exercise more “granular” control over what personal
information was shared or displayed. However, the labyrinthine nature of the
various privacy-control menus discouraged use of the new privacy settings.
Users tended to fall back on Facebook’s default settings, which, because of the
expansion of Facebook’s “opt-out” policy, were at the loosest level of security,
forcing users to “opt-in” to make information private. Responding to criticism,
Facebook revised its privacy policy again in May 2010, with a simplified
system that consolidated privacy settings onto a single page.
Another privacy issue is cyberbullying—using the Internet to threaten or
humiliate another person with words, photos, or videos. The problem
received particular attention in 2010 when a male Rutgers University student
committed suicide after two acquaintances reportedly streamed a video over
the Internet of the student having a sexual encounter with a man. Also in
2010, Donna Witsell, the mother of a 13-year-old Florida girl who had
committed suicide in 2009 after a cyberbullying incident, formed a group
called Hope’s Warriors to help curb abuse and to warn others of the threat.
Most U.S. states have enacted laws against bullying, although very few of them
include cyberbullying.
BANKS:-
A bank is a financial institution that accepts deposits from the public and
creates a demand deposit while simultaneously making loans.[1] Lending
activities can be performed directly by the bank or indirectly
through capital markets.
Because banks play an important role in financial stability and the economy of
a country, most jurisdictions exercise a high degree of regulation over banks.
Most countries have institutionalised a system known as fractional reserve
banking, under which banks hold liquid assets equal to only a portion of their
current liabilities. In addition to other regulations intended to
ensure liquidity, banks are generally subject to minimum capital
requirements based on an international set of capital standards, the Basel
Accords.
Banking in its modern sense evolved in the fourteenth century in the
prosperous cities of Renaissance Italy but in many ways functioned as a
continuation of ideas and concepts of credit and lending that had their roots in
the ancient world. In the history of banking, a number of banking dynasties –
notably, the Medicis, the Fuggers, the Welsers, the Berenbergs, and
the Rothschilds – have played a central role over many centuries. The oldest
existing retail bank is Banca Monte dei Paschi di Siena (founded in 1472),
while the oldest existing merchant bank is Berenberg Bank (founded in 1590).
HISTORY OF BANKING:-
Ancient -
The concept of banking may have begun in
ancient Assyria and Babylonia, with merchants offering loans of grain
as collateral within a barter system. Lenders in ancient Greece and during
the Roman Empire added two important innovations: they
accepted deposits and changed money. Archaeology from this period
in ancient China and India also shows evidence of money lending.
Medieval
The present era of banking can be traced to medieval and
early Renaissance Italy, to the rich cities in the centre and north
like Florence, Lucca, Siena, Venice and Genoa. The Bardi and Peruzzi families
dominated banking in 14th-century Florence, establishing branches in many
other parts of Europe.[2] Giovanni di Bicci de' Medici set up one of the most
famous Italian banks, the Medici Bank, in 1397. The Republic of Genoa founded
the earliest-known state deposit bank, Banco di San Giorgio (Bank of St.
George), at Genoa in 1407.
Early modern
Fractional reserve banking and the issue of banknotes emerged in the 17th
and 18th centuries. Merchants started to store their gold with
the goldsmiths of London, who possessed private vaults, and who charged a
fee for that service. In exchange for each deposit of precious metal, the
goldsmiths issued receipts certifying the quantity and purity of the metal they
held as a bailee; these receipts could not be assigned, and only the original
depositor could collect the stored goods.
Gradually the goldsmiths began to lend money out on behalf of the depositor,
and promissory notes (which evolved into banknotes) were issued for money
deposited as a loan to the goldsmith. Thus by the 19th century we find that “in
ordinary cases of deposits of money with banking corporations, or bankers,
the transaction amounts to a mere loan or mutuum, and the bank is to restore,
not the same money, but an equivalent sum, whenever it is demanded,” and
that “money, when paid into a bank, ceases altogether to be the money of the
principal (see Parker v. Marchant, 1 Phillips 360); it is then the money of the
banker, who is bound to return an equivalent by paying a similar sum to that
deposited with him when he is asked for it.” The goldsmiths paid interest on
deposits. Since the promissory notes were payable on demand, and the
advances (loans) to the goldsmith's customers were repayable over a longer
time-period, this was an early form of fractional reserve banking. The
promissory notes developed into an assignable instrument which could
circulate as a safe and convenient form of money backed by the goldsmith's
promise to pay, allowing goldsmiths to advance loans with little risk
of default. Thus the goldsmiths of London became the forerunners of banking
by creating new money based on credit.
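The credit-based money creation described above can be sketched numerically: each deposit is partly held back as a reserve and partly re-lent, and the re-lent portion returns to the system as a new deposit. The figures and reserve ratio below are hypothetical:

```python
# Toy model of fractional-reserve money creation: with reserve ratio r,
# an initial deposit supports total deposits approaching deposit / r.
def total_deposits(initial_deposit, reserve_ratio, rounds=1000):
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # portion lent out and redeposited
    return total

# A 100-unit deposit with a 10% reserve ratio supports about 1,000
# units of deposits across the whole system.
print(round(total_deposits(100.0, 0.10)))  # 1000
```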
The Bank of England originated the permanent issue of banknotes in
1695. The Royal Bank of Scotland established the first overdraft facility in
1728. By the beginning of the 19th century Lubbock's Bank had established
a bankers' clearing house in London to allow multiple banks to clear
transactions. The Rothschilds pioneered international finance on a large
scale, financing the purchase of shares in the Suez Canal for the British
government in 1875.
FUNCTIONS OF BANKS AND THEIR TYPES:-
The following are the functions of Indian banks:
1. Acceptance of deposits from the public
2. Provide demand withdrawal facility
3. Lending facility
4. Transfer of funds
5. Issue of drafts
6. Provide customers with locker facilities
7. Dealing with foreign exchange
Apart from the functions listed above, banks must additionally fulfill a
variety of utility duties. Banks are divided into several categories. The
following are the different types of banks in India:
1. Central Bank
2. Cooperative Banks
3. Commercial Banks
4. Regional Rural Banks (RRB)
5. Local Area Banks (LAB)
6. Specialized Banks
7. Small Finance Banks
8. Payments Banks
This is a crucial subject for the IAS Exam. Aspirants will learn about the Indian
banking system, its functions, and the different types of banks in this article.
The various types of banks in India, their functions, and a list of banks under
each sector are all part of the banking awareness syllabus covered in most
government exams.
CENTRAL BANK
Our country's central bank is the Reserve Bank of India. Each country has a
central bank that oversees all of the country's other financial institutions.
The central bank's principal role is to serve as the government's bank and to
oversee and regulate the country's other banking institutions. The functions of
a country's central bank are listed below:
Assisting other financial institutions
Issuing currency and implementing monetary policy
Supervising the financial system
In other words, the country's central bank is also known as the banker's bank
because it assists other banks in the country and runs the country's financial
system under the supervision of the Government.
COOPERATIVE BANK
These banks are governed by a law enacted by the state government. They
provide short-term loans to agriculture and related industries.
Cooperative banks' principal purpose is to enhance social welfare by
providing low-interest loans.
They are arranged in a three-tiered system:
Tier 1 (State Level) – State Cooperative Banks (regulated by the RBI, the state
government, and NABARD). Funding comes from the RBI, the government, and
the National Bank for Agriculture and Rural Development (NABARD); the
money is then lent to the general population. These banks receive CRR and
SLR concessions (SLR: 25%, CRR: 3%). They are owned by the state, and senior
management is chosen by the members.
Tier 2 (District Level) – Central/District Cooperative Banks
Tier 3 (Village Level) – Primary Agricultural Cooperative Banks
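As a rough illustration of how the CRR and SLR figures quoted above limit lending (an illustrative simplification, not a regulatory calculation):

```python
# Of each deposit, the CRR share is held with the central bank and the
# SLR share in liquid assets; only the remainder can be lent out.
# Ratios are the concessional figures quoted in the text.
def lendable(deposit, crr=0.03, slr=0.25):
    return deposit * (1 - crr - slr)

# Of a 1,000-rupee deposit, roughly 720 rupees remain available to lend.
print(round(lendable(1000)))  # 720
```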
The bank accepts deposits from the public at a relatively low rate, known as
the deposit rate, and lends money at a much higher rate, known as the lending
rate. The fundamental duties of banks are nearly identical; however, the types
of customers each sector or type deals with may vary.
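The deposit-rate/lending-rate spread just described is easy to quantify; the rates and amount below are hypothetical:

```python
# A bank's gross interest income on a loan funded by deposits is the
# amount times the spread between lending and deposit rates.
def annual_spread_income(amount, deposit_rate, lending_rate):
    return amount * (lending_rate - deposit_rate)

# Lending Rs. 1,00,000 at 9% while paying depositors 4% earns the bank
# about Rs. 5,000 a year before costs.
print(round(annual_spread_income(100000, 0.04, 0.09)))  # 5000
```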
In India, modern banking originated in the late eighteenth century. The Bank
of Calcutta, founded in 1806 and now known as the State Bank of India, is the
country's oldest profit-oriented bank. India currently has 34 major banks:
12 public sector banks and 22 private sector banks. Banks have aided the
country's economic development and fostered a culture of saving among its
citizens. Let's have a look at the different types of banks in India.
COMMERCIAL BANKS
Commercial banks are organized under the Banking Companies Act of 1956.
They function on a commercial basis, with profit as their primary goal.
They are owned by the government, a state, or a private company and have a
unified structure.
They serve all sectors, from rural to urban.
Unless the RBI directs otherwise, these banks do not charge concessional
interest rates.
These banks' primary source of funds is public deposits.
Commercial banks are further classified into three types:
Public sector banks are those in which the government or the country's
central bank owns the majority of the stock.
Private sector banks are those in which a private entity, an individual, or a
group of people owns the majority of the stock.
Foreign Banks – This category includes banks with headquarters in other
countries and branches in India.
REGIONAL RURAL BANKS
These are unique types of commercial banks that lend to agriculture and the
rural economy at a reduced rate.
RRBs were founded in 1975 and are governed by the 1976 Regional Rural
Bank Act.
RRBs are joint ventures between the central government (50%), the state
government (15%), and a sponsor commercial bank (35%).
By 1987, 196 RRBs had been established.
From 2005 onward, the government began merging RRBs, bringing the total
number of RRBs down to 82.
A single RRB cannot open branches in more than three districts that are
geographically connected.
LOCAL AREA BANKS
Local Area Banks were first introduced in India in 1996.
They are organized by the private sector.
Local Area Banks' primary goal is to earn a profit.
They are governed by the Companies Act of 1956.
There are now just four Local Area Banks in existence, all of which are
located in South India.
SPECIALIZED BANKS
Certain banks exist just to serve a certain purpose. Specialized banks are the
name for several types of financial institutions. These are some of them:
SIDBI (Small Industries Development Bank of India) – SIDBI provides loans to
small-scale industries and businesses. With the support of this bank, small
businesses can obtain modern technology and equipment.
EXIM Bank (Export and Import Bank of India) – EXIM Bank provides loans and
other financial assistance to businesses engaged in exporting and importing
goods.
NABARD (National Bank for Agriculture and Rural Development) – People
can turn to NABARD for financial support for rural, handicraft, village, and
agricultural development.
Other specialist banks exist, each with a unique function to play in the
financial development of the country.
SMALL FINANCE BANKS
This sort of bank, as the name implies, provides loans and financial help to
micro industries, small farmers, and the unorganized sector of society. The
country's central bank oversees these institutions.
PAYMENT BANKS
The Reserve Bank of India conceptualized the payments bank, a newly
developed form of banking. People who have a payment bank account can
only deposit up to Rs.1,00,000/- and cannot apply for loans or credit cards
through this account.
Payment banks provide services such as internet banking, mobile banking,
ATM card issuance, and debit card issuance. The following is a list of our
country's few payment banks:
Airtel Payments Bank
India Post Payments Bank
Fino Payments Bank
Jio Payments Bank
Paytm Payments Bank
NSDL Payments Bank
Internet Banking
What is Internet Banking?
Internet Banking, also known as net-banking or online banking, is an
electronic payment system that enables the customer of a bank or a financial
institution to make financial or non-financial transactions online via the
internet. This service gives customers online access to almost every banking
service traditionally available through a local branch, including fund
transfers, deposits, and online bill payments.
Internet banking can be accessed by any individual with an active account
who has registered for online banking at the bank or financial institution.
After registering, a customer need not visit the bank every time he/she wants
to avail a banking service. It is not just a convenient but also a secure
method of banking. Net banking portals are secured by unique User/Customer
IDs and passwords.
Special Features of Internet Banking
Here are some of the best features of internet banking:
Provides access to financial as well as non-financial banking services
Facility to check bank balance any time
Make bill payments and fund transfer to other accounts
Keep a check on mortgages, loans, savings a/c linked to the bank
account
Safe and secure mode of banking
Protected with unique ID and password
Customers can apply for the issuance of a chequebook
Buy general insurance
Set-up or cancel automatic recurring payments and standing orders
Keep a check on investments linked to the bank account
Services Available through Internet Banking
Once a customer has registered for online banking, he/she can log in to the
bank's online banking portal using the issued User ID and password.
Services Available on the Internet Banking Portals:
View Account Balance
Check Bank Statements
NEFT & RTGS Fund Transfer
IMPS Fund Transfer
Utility Bill Payment
Start a Deposit
Open/Close a Fixed Deposit
Make Merchant Payments
Issuance of Cheque Book
Buy General Insurance
Recharge Prepaid Mobile/DTH
Start Investments
Set-up/Cancel Automatic Payments
Check Mortgages, Loans
Manage/Change Account Details
Buy/Sell on E-Commerce Platforms
Invest and Conduct Trade
Book Online Tickets
Advantages of Internet Banking
Given below are some advantages/benefits of Internet Banking available for
all the users-
24×7 Availability: Internet banking, unlike usual banking hours, is not
time-bound. It is available 24×7 throughout the year. Most of the
services available online are not time-restricted. Users can check their
bank balance, account statements and make fund transfers anytime
instantly.
Convenience of initiating financial transactions: Internet banking is
largely preferred because of the convenience that it provides while fund
transfer and bill payments. Registered users can use almost all the
banking services without having to visit the bank and standing in
queues. Financial transactions such as paying bills and transferring
funds between accounts can easily be performed anytime as per the
convenience of the user.
Proper Track of Transactions: Banks provide acknowledgement slips
after transactions, but these slips are easily misplaced. With internet
banking, however, it becomes very easy to
track the history of all the transactions initiated by the user.
Transactions and fund transfers made online are organised in the
‘Transaction History’ section along with other details such as payee’s
name, bank account number, the amount paid, the date and time of
payment, and remarks.
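As a purely illustrative sketch (field names are hypothetical, not any bank's actual schema), the transaction-history details listed above could be modeled as a simple record:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the fields a 'Transaction History'
# section typically shows: payee name, account number, amount,
# date/time of payment, and remarks.
@dataclass
class Transaction:
    payee_name: str
    payee_account_number: str
    amount: float
    timestamp: datetime
    remarks: str

# Example entry as it might appear in a transaction history.
txn = Transaction(
    payee_name="A. Kumar",
    payee_account_number="XXXX1234",
    amount=1500.00,
    timestamp=datetime(2024, 1, 15, 10, 30),
    remarks="Electricity bill",
)
print(f"{txn.timestamp:%d-%m-%Y %H:%M} paid {txn.amount:.2f} to {txn.payee_name}")
```

Because each transaction carries these fields, the history can be searched or filtered by payee, date, or amount without any paper slips.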
Quick and Secure: Net banking users can transfer funds between
accounts instantly, especially if the two accounts are held at the same
bank. Funds can be transferred via NEFT, RTGS or IMPS as per the user’s
convenience. One can also make bill payments, EMI payments, loan and
tax payments easily. Moreover, the transactions, as well as the account,
are secured with a password and unique User-ID.
Non-financial Transactions: Besides fund transfer, internet banking
allows the users to avail non-financial services such as balance check,
account statement check, application for issuance of cheque book, etc.
Types of Fund Transfers using Internet Banking
As we have already discussed, there are three types of fund transfers which
can be made using net-banking. Let us understand them in more detail:
NEFT
National Electronic Fund Transfer (NEFT) is a payment system which allows
one-to-one fund transfer.
Using NEFT, individuals and corporates can transfer funds electronically
from any bank branch to any individual or corporate with an
account with any other bank branch in the country
NEFT service is available 24×7 on internet banking, but it is a time-
restricted service at the bank branch
An NEFT transfer is usually completed within 30 minutes, though it may
finish in as little as 10 minutes or stretch to 2-3 hours
RTGS
Real-Time Gross Settlement (RTGS) is a continuous settlement of funds
individually on an order by order basis.
This payment system ensures that the receiver’s account gets credited
with the funds almost immediately and not after a certain duration, as is
the case with other payment modes like NEFT
RTGS transactions are tracked by the RBI, so successful transfers
are irreversible. This method is mainly used for large-value transfers
The minimum amount to be remitted through RTGS is ₹2 lakh. There is
no cap on the maximum amount for transfer via RTGS
Like NEFT, RTGS is also available online 24×7
IMPS
Immediate Payment System (IMPS) is another payment method that transfers
funds in real-time.
IMPS is used to transfer funds instantly within banks across India via
mobile, internet and ATM; it is both safe and economical
IMPS is an inexpensive mode of fund transfer; other fund transfer
mediums such as NEFT and RTGS typically charge more than IMPS
It does not require details like account number, IFSC code, etc. Funds
can be transferred via IMPS just with the mobile number of the
beneficiary
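The rules of thumb described above (RTGS for large-value transfers of ₹2 lakh and above, IMPS for instant transfers, NEFT otherwise) can be sketched as a simple chooser. The function and threshold names are illustrative only; real eligibility also depends on the bank, the channel and the beneficiary's details:

```python
RTGS_MINIMUM = 200_000  # RTGS minimum remittance: Rs. 2 lakh

def choose_transfer_method(amount: int, need_instant: bool) -> str:
    """Pick a fund-transfer method using the rules of thumb above.

    Illustrative sketch, not a bank's actual routing logic.
    """
    if amount >= RTGS_MINIMUM:
        return "RTGS"   # large-value, settled individually in real time
    if need_instant:
        return "IMPS"   # instant, low-cost, works via mobile/internet/ATM
    return "NEFT"       # usually completed within about 30 minutes

print(choose_transfer_method(500_000, need_instant=False))  # RTGS
print(choose_transfer_method(5_000, need_instant=True))     # IMPS
print(choose_transfer_method(5_000, need_instant=False))    # NEFT
```

The ₹2 lakh floor comes directly from the RTGS rules above; below that amount the choice between IMPS and NEFT is mostly a trade-off between immediacy and cost.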
How to Register for Internet Banking?
Every account holder has to register for an online banking service at his/her
respective bank to get access. Most of the banks provide a net-banking log-in
kit as and when you apply for a new account. To start using net-banking,
follow these steps-
1. Download the application form from your bank's official website, fill it
in and print it out. Alternatively, visit the bank directly and fill in the
application form for net-banking
2. Submit the application form at the bank
3. After verification, you will receive a unique User ID and password with
which you can log in to internet banking
What is E-Banking?
E-banking or Electronic Banking refers to all the forms of banking services
and transactions performed through electronic means. It allows individuals,
institutions and businesses to access their accounts, transact business, or
obtain information on various financial products and services via a public or
private network, including the internet.
Popular Types of E-banking Services in India
Internet Banking: It is the type of electronic banking service which
enables customers to perform several financial and non-financial
transactions via the internet. With internet or online banking or net-
banking, customers can transfer funds to another bank account, check
account balance, view bank statements, pay utility bills, and much more.
Mobile Banking: This electronic banking system enables customers to
perform financial and non-financial transactions via mobile phone. Most
of the banks have launched their mobile banking applications available
on Google Playstore and Apple App Store. Just like the net-banking
portal, customers can use the mobile application to access banking
services.
ATM: Automated Teller Machines (ATMs) are one of the most popular
types of e-banking. ATMs allow customers to withdraw funds, deposit
money, change their debit card PIN, and use other banking services. To
use an ATM, the user must have a card and PIN. Banks charge a nominal
fee for every transaction made beyond the specified limit of free
transactions if the transaction is done at another bank's ATM.
Debit Cards: Almost every person owns a debit card. This card is
connected to your bank account and lets you go cashless. You can use
your debit card for all types of transactions; the transaction amount is
debited from your account instantly.
Direct Deposits and Withdrawals: This e-banking service lets
customers have paychecks deposited directly into their account. The
customer can also authorise the bank to deduct funds from
his/her account to pay bills, instalments of any kind, insurance
payments, and many more.
Pay by Phone Systems: This service allows the customer to contact
his/her bank to request them for any bill payment or to transfer funds
to some other account.
Point-of-Sale Transfer Terminals: This service allows customers to
pay for the purchase through a debit/credit card instantly.
Services Provided through E-banking in India
Telephone Banking
ATMs (Automated Teller Machines)
Electronic Clearing Cards
Mobile Banking
Door-step Banking
Bill Payment
Shopping
Smart Cards
Funds Transfer
Internet Banking
Electronic Funds Transfer System
Electronic Clearing Services
Telebanking
Investing
Fixed Deposits
Insurance
Comparison between Internet Banking and E-Banking
Internet banking and Electronic banking are often confused with each other.
Let us compare the two for better understanding:
Definition
Internet banking or online banking or net-banking is a digital payment system
which enables customers of a bank or a financial institution to make financial
or non-financial transactions online via the internet. On the other hand, E-
banking or Electronic Banking refers to all the forms of banking services and
transactions performed through electronic means.
Electronic banking or E-banking is a broad category of accessing banking
services via electronic means, whereas internet banking is a part or type of
electronic banking. E-banking is also known as electronic funds transfer (EFT)
and uses electronic means to transfer funds directly from one account to another.
Types of Services
With internet banking, customers can obtain almost every banking service
traditionally available through a local branch, including fund transfers,
deposits, and online bill payments.
Electronic banking includes various transaction services such as internet
banking, mobile banking, telebanking, ATMs, debit cards, and credit cards.
Internet banking is one of the latest additions to electronic banking.
Frequently Asked Questions
How can I use internet banking?
To use internet banking, you must have an active account with a bank or a
financial institution. You need to register for online banking at the bank to
obtain a unique ID and password. For that, you can download the net-banking
application form from your bank’s net-banking website or visit the bank and
fill in the form.
Can I change my internet banking password?
After logging in to the net-banking portal for the first time, all users must
change the password issued by the bank. You should also change
your password at least once every two months.
What precautions should be taken while using internet banking?
While using internet banking, you must make sure of a few things-
1. Avoid using public Wi-Fi or use a VPN software
2. Use a genuine anti-virus software
3. Make sure the operating system of your smartphone or device is
updated
4. Change your login password at least once in two months
5. Avoid logging in to your net-banking portal via mailers
6. Do not use public computers to log in to the net-banking portal
What is user ID in net-banking?
Most of the banks provide internet banking ID and password as and when you
apply for a new account. If you haven’t received your user ID and password,
you need to apply for net-banking at the bank by filling in and submitting an
application form. After verification, you will receive a unique user ID and
password to log in to net-banking.
Are electronic banking and internet banking the same?
No, electronic banking and internet banking are often confused with each
other. However, these are two different services launched by the bank.
Electronic banking is a broad term or category which includes various forms
of banking services and transactions performed through electronic means
such as internet banking, mobile banking, telebanking, ATMs, debit cards, and
credit cards. Internet banking is one of the latest additions to electronic
banking; it is therefore a part of electronic banking.
State Bank of India:-
The origin of the State Bank of India goes back to the first decade of the
nineteenth century with the establishment of the Bank of Calcutta in 1806 in
Calcutta. Three years later the bank received its charter and was re-designated
as the Bank of Bengal (2 January 1809). A unique institution, it was the first
joint–stock bank of British India sponsored by the Government of Bengal. The
Bank of Bombay (15 April 1840) and the Bank of Madras (1 July 1843)
followed the Bank of Bengal. These three banks remained at the apex of
modern banking in India till their amalgamation as the Imperial Bank of India
on 27 January 1921.
Primarily Anglo–Indian creations, the three presidency banks came into
existence either as a result of the compulsions of imperial finance or by the
felt needs of local European commerce and were not imposed from outside in
an arbitrary manner to modernise India's economy. Their evolution was,
however, shaped by ideas culled from similar developments in Europe and
England, and was influenced by changes occurring in the structure of both the
local trading environment and those in the relations of the Indian economy to
the economy of Europe and the global economic framework.
The State Bank of India, the country’s oldest bank and its premier bank in
terms of balance sheet size, number of branches, market capitalisation and
profits, is today going through a momentous phase of change and transformation:
the two-hundred-year-old public sector behemoth is stirring out of its public
sector legacy and moving with an agility to give the private and foreign banks
a run for their money.
The bank is entering many new businesses with strategic tie-ups (Pension
Funds, General Insurance, Custodial Services, Private Equity, Mobile
Banking, Point of Sale Merchant Acquisition, Advisory Services, structured
products, etc.), each of which has huge potential for growth.
The bank is forging ahead with cutting edge technology and innovative new
banking models, to expand its rural banking base, looking at the vast
untapped potential in the hinterland and proposes to cover 100,000 villages
in the next two years. At the end of March 2011, the total number of branches
was 13,542 while the number of ATMs stood at 20,084 across the country.
It is also focusing on the top end of the market, building wholesale banking
capabilities to provide India’s growing mid-size and large corporates with a
complete array of products and services. It is consolidating its global treasury
operations and entering into structured products and derivative instruments.
Today, the bank is the largest provider of infrastructure debt and the largest
arranger of external commercial borrowings in the country. It is the only
Indian bank to feature in the Fortune 500 list.
The bank has been actively involved since 1973 in a non-profit activity called
Community Services Banking. All branches and administrative offices
throughout the country sponsor and participate in a large number of welfare
activities and social causes. Their business is more than banking because they
touch the lives of people everywhere in many ways.
State Bank of India (SBI) has received approval from the Government of
India (GOI) for the acquisition of SBI Commercial and International Bank
(SBICI Bank). The government issued the 'Acquisition of SBICI Bank Order
2011' vide order dated July 29, 2011.
SBI entered the UK's home loan market. The bank started with mortgages for
landlords, better known as buy-to-let mortgages, with amounts ranging from
£50,000 to £1.5 million and loan-to-value ratios of up to 60 per cent.
In April 2014, State Bank of India launched three digital banking facilities for
the convenience of SBI customers. Two operate at the customer's doorstep using
TAB banking: one for customers opening Savings Bank accounts and another for
Housing Loan applicants. The third is e-KYC (Know Your Customer).
Services offered by the company:
NRI Services
Personal Banking
International Banking
Agriculture / Rural
Corporate Banking
SME
Government Business
Domestic Treasury
Subsidiaries:
Banking Subsidiaries
State Bank of Bikaner and Jaipur (SBBJ)
State Bank of Hyderabad (SBH)
State Bank of Mysore (SBM)
State Bank of Patiala (SBP)
State Bank of Travancore (SBT)
Foreign Subsidiaries
SBI International (Mauritius) Ltd.
State Bank of India (California)
State Bank of India (Canada)
INMB Bank Ltd, Lagos
BANK SBI Indonesia (SBII)
Non banking Subsidiaries
SBI Capital Markets Ltd
SBI Funds Management Pvt Ltd
SBI Factors & Commercial Services Pvt Ltd
SBI Cards & Payments Services Pvt. Ltd. (SBICPSL)
SBI DFHI Ltd
SBI General Insurance Company Limited
Joint Ventures
SBI Life Insurance Company Ltd (SBI LIFE)
SBI General Insurance Company Limited
Achievements/ recognition:
State Bank of India ranked 155 in Forbes list of Global 2000 firms in 2014
Business Standard has awarded the Banker of the Year Award to Shri
O.P.Bhatt.
2013–14
Innovation in Customer Data Management (DWP)
Financial Inclusion (IT–RB), Electronic Payment (INB, MB&W, ATM, PSG)
and CRM&BI (DWP)
Best IT Adoption
Best IT Team, CSR and Corporate Excellence
Award for change management for managing high-scale IT projects
Asia’s Best CSR Practices Award–2013– Singapore
Asian BFSI Awards–2013– Dubai
India’s Most Ethical Companies Award–2013
Asian Green Future Leadership Awards–2013
June ’08 Awards & Recognitions: CNN-IBN Network 18 selected Shri
O.P. Bhatt as Indian of the Year – Business 2007.
Asian Centre for Corporate Governance & Sustainability and Indian
Merchants Chamber has awarded the Transformational Leader Award 2007.
State Bank of India ranked as NO.1 in the 4Ps B & M & ICMR Survey on
India's Best Marketed Banks in August–2009.
Shri Om Prakash Bhatt declared as one of the '25 Most Valuable Indians' By
The Week Magazine For 2009.
State Bank of India was adjudged The Best Bank 2009 by Business India
in August 2009.
It bagged ‘Most Preferred Bank’ and ‘Most Preferred Brand for Home Loan’
at CNBC Awaaz Consumer Awards.
It became the only Indian bank to get listed in the Fortune Global 500 List.
SBI was at the 70th slot in the top 1000 bank survey by Banker Magazine.
It was awarded Golden award for being among the two most trusted banks
in India by Readers Digest.
SBI is ranked 6th in the Economic Times Market Cap List.