Contents:
UNIT 1 HISTORY OF COMPUTERS
UNIT 2 TYPES OF COMPUTERS
UNIT 3 THE OPERATING SYSTEM
UNIT 4 CPU
UNIT 5 HOW CAN DATA BE STORED?
KEYS
References
UNIT 1
HISTORY OF COMPUTERS
PART A
One of the great inventions of our time has been the computer. Today, billions of people use computers in their daily life. You might also know that the computer is associated with the name of an English professor, Charles Babbage. He designed the Analytical Engine, and it is on this design that the basic framework of today's computers is based.

In the early 1820s Charles Babbage designed a computing machine called the Difference Engine. It was used for calculating simple mathematical tables. In the 1830s he designed a second computing machine, the Analytical Engine, which was intended to solve complicated problems by following a set of instructions.
Early history
The abacus
The earliest known calculating device is probably the abacus. It
dates back at least to 1100 BC and is still in use today, particularly in Asia.
Now, as then, it typically consists of a rectangular frame with thin parallel
rods strung with beads. Abacus beads can be readily manipulated to per-
form the common arithmetical operations—addition, subtraction, multipli-
cation, and division—that are useful for commercial transactions and in
bookkeeping.
The abacus is a digital device. It represents values discretely. A
bead is either in one predefined position or another, representing unambig-
uously, say, one or zero.
Analog calculators: from Napier’s logarithms to the slide rule
Calculating devices took a different turn when John Napier, a
Scottish mathematician, published his discovery of logarithms in 1614. As
any person can attest, adding two 10-digit numbers is much simpler than
multiplying them together, and the transformation of a multiplication prob-
lem into an addition problem is exactly what logarithms enable. This sim-
plification is possible because of the following logarithmic property:
the logarithm of the product of two numbers is equal to the sum of the loga-
rithms of the numbers. By 1624, tables with 14 significant digits were
available for the logarithms of numbers from 1 to 20,000, and scientists
quickly adopted the new labour-saving tool for tedious astronomical calcu-
lations.
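As a quick illustration (an editor's sketch, not part of the historical account), a few lines of Python show how adding logarithms replaces a multiplication, just as users of the printed tables did by hand. The numbers chosen here are arbitrary examples:

```python
import math

# Multiplication via logarithms: log10(a*b) = log10(a) + log10(b).
a, b = 4_821, 7_359
log_sum = math.log10(a) + math.log10(b)  # the only "hard" step is an addition
product = 10 ** log_sum                  # take the antilogarithm
print(round(product), a * b)             # both give the same result
```

A table user of the 1620s would look up the two logarithms, add them on paper, and then look up the antilogarithm, exactly the three steps above.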
Most significant for the development of computing, the transfor-
mation of multiplication into addition greatly simplified the possibility of
mechanization. Analog calculating devices based on Napier’s logarithms—
representing digital values with analogous physical lengths—soon ap-
peared. In 1620 Edmund Gunter, the English mathematician who coined
the terms cosine and cotangent, built a device for performing navigational
calculations: the Gunter scale, or, as navigators simply called it, the gunter.
About 1632 an English clergyman and mathematician named William
Oughtred built the first slide rule, drawing on Napier’s ideas. That first
slide rule was circular, but Oughtred also built the first rectangular one in
1633. The analog devices of Gunter and Oughtred had various advantages
and disadvantages compared with digital devices such as the abacus. What
is important is that the consequences of these design decisions were being
tested in the real world.
Digital calculators: from the Calculating Clock to the Arithmometer
In 1623 the German astronomer and mathematician Wilhelm
Schickard built the first calculator. He described it in a letter to his friend
the astronomer Johannes Kepler, and in 1624 he wrote again to explain that
a machine he had commissioned to be built for Kepler was, apparently
along with the prototype, destroyed in a fire. He called it a Calculating
Clock, which modern engineers have been able to reproduce from details in
his letters. Even general knowledge of the clock had been temporarily lost
when Schickard and his entire family perished during the Thirty Years’
War.
The device could add and subtract six-digit numbers (with a bell for
seven-digit overflows) through six interlocking gears, each of which turned
one-tenth of a rotation for each full rotation of the gear to its right. Thus, 10
rotations of any gear would produce a “carry” of one digit on the following
gear and change the corresponding display.
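To make the carry mechanism concrete, here is a small sketch in Python (an illustration only, not a reconstruction of Schickard's actual gearing) of six decimal wheels passing carries to the wheel on their left:

```python
# A sketch of interlocking decimal gears: each wheel shows a digit 0-9,
# and a full turn of one wheel advances the next wheel by one step.
def add_on_wheels(wheels, amount, capacity=6):
    """wheels: digits, least significant first; returns (wheels, overflow_bell)."""
    carry = amount
    for i in range(capacity):
        total = wheels[i] + carry
        wheels[i] = total % 10      # the digit now shown on this wheel
        carry = total // 10         # passed to the next wheel by the gearing
    return wheels, carry > 0        # the bell rang on a seven-digit overflow

wheels = [9, 9, 9, 9, 9, 9]        # the register reads 999999
wheels, bell = add_on_wheels(wheels, 1)
print(wheels, bell)                # [0, 0, 0, 0, 0, 0] True
```

Adding 1 to 999999 ripples a carry through every wheel and rings the overflow bell, just as the letters to Kepler describe.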
But Schickard may not have been the true inventor of the calculator. A
century earlier, Leonardo da Vinci sketched plans for a calculator that were
sufficiently complete and correct for modern engineers to build a calculator
on their basis.
The first calculator or adding machine to be produced in any quan-
tity and actually used was the Pascaline, or Arithmetic Machine, designed
and built by the French mathematician-philosopher Blaise Pascal between
1642 and 1644. It could only do addition and subtraction, with numbers be-
ing entered by manipulating its dials. Pascal invented the machine for his
father, a tax collector, so it was the first business machine too (if one does
not count the abacus). He built 50 of them over the next 10 years.
In 1671 the German mathematician-philosopher Gottfried Wil-
helm von Leibniz designed a calculating machine called the Step Reckoner.
(It was first built in 1673.) The Step Reckoner expanded on Pascal’s ideas
and did multiplication by repeated addition and shifting.
Leibniz was a strong advocate of the binary number system. Bina-
ry numbers are ideal for machines because they require only two digits,
which can easily be represented by the on and off states of a switch. When
computers became electronic, the binary system was particularly appropri-
ate because an electrical circuit is either on or off. This meant that on could
represent true, off could represent false, and the flow of current would di-
rectly represent the flow of logic.
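A tiny Python example (again an editor's illustration) shows this correspondence between a number and its on/off states:

```python
# Each binary digit is a switch: '1' means on/true, '0' means off/false.
n = 13
bits = bin(n)[2:]                      # '1101'
# Rebuild the value from the switch states to see why two symbols suffice:
value = sum(2 ** i for i, b in enumerate(reversed(bits)) if b == '1')
print(bits, value)                     # 1101 13
```

Every number, however large, reduces to a pattern of such switches, which is exactly why Leibniz's binary system suited electrical machines so well.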
Leibniz was prescient in seeing the appropriateness of the binary
system in calculating machines, but his machine did not use it. Instead, the
Step Reckoner represented numbers in decimal form, as positions on 10-
position dials. Even decimal representation was not a given: in 1668 Samu-
el Morland invented an adding machine specialized for British money—a
decidedly nondecimal system.
Pascal’s, Leibniz’s, and Morland’s devices were curiosities, but
with the Industrial Revolution of the 18th century came a widespread need
to perform repetitive operations efficiently. With other activities being
mechanized, why not calculation? In 1820 Charles Xavier Thomas de Col-
mar of France effectively met this challenge when he built
his Arithmometer, the first commercial mass-produced calculating device.
It could perform addition, subtraction, multiplication, and, with some more
elaborate user involvement, division. Based on Leibniz’s technology, it was
extremely popular and sold for 90 years. In contrast to the modern calcula-
tor’s credit-card size, the Arithmometer was large enough to cover a desk-
top.
The Jacquard loom
Calculators such as the Arithmometer remained a fascination after
1820, and their potential for commercial use was well understood. Many
other mechanical devices built during the 19th century also performed re-
petitive functions more or less automatically, but few had any application
to computing. There was one major exception: the Jacquard loom, invented
in 1804–05 by a French weaver, Joseph-Marie Jacquard.
The Jacquard loom was a marvel of the Industrial Revolution. A
textile-weaving loom, it could also be called the first practical information-
processing device. The loom worked by tugging various-coloured threads
into patterns by means of an array of rods. By inserting a card
punched with holes, an operator could control the motion of the rods and
thereby alter the pattern of the weave. Moreover, the loom was equipped
with a card-reading device that slipped a new card from a prepunched deck
into place every time the shuttle was thrown, so that complex weaving pat-
terns could be automated.
[Figure: Jacquard loom, engraving, 1874. At the top of the machine is a stack of punched cards that would be fed into the loom to control the weaving pattern. This method of automatically issuing machine instructions was employed by computers well into the 20th century. The Bettmann Archive]
What was extraordinary about the device was that it transferred the
design process from a labour-intensive weaving stage to a card-punching
stage. Once the cards had been punched and assembled, the design was
complete, and the loom implemented the design automatically. The Jac-
quard loom, therefore, could be said to be programmed for different pat-
terns by these decks of punched cards.
For those intent on mechanizing calculations, the Jacquard loom
provided important lessons: the sequence of operations that a machine per-
forms could be controlled to make the machine do something quite differ-
ent; a punched card could be used as a medium for directing the machine;
and, most important, a device could be directed to perform different tasks
by feeding it instructions in a sort of language—i.e., making the machine
programmable.
PART B
While the first computers were extremely large and took up entire
rooms, today's computers are extremely small and can fit not only on your desktop but in your phone and on chips the size of grains of rice. Through-
out the years, the computer has evolved from an extremely expensive,
cumbersome and slow device to today’s extremely smart and quick ma-
chines with incredible processing power.
Here is the history of computers.
The First Computer
While no single person is widely credited with inventing the computer, many view Konrad Zuse and his Z1 machine as the first in a long line of innovations that have given us the computer of today. Konrad Zuse was a German engineer whose claim to fame is the creation of the first freely programmable mechanical computing device, in 1936. Many would see Zuse's Z1 as the first of a long line of calculators. Zuse found that one of the most difficult aspects of completing large calculations on the calculating devices of the day (a slide rule or mechanical adding machine) was keeping track of the many intermediate results that would then have to be recomputed to give a final answer. Zuse's Z1 was therefore built around three basic elements that are still necessary in today's calculators: a control unit, a memory to store the result of each step, and a unit to perform the calculations.
In later versions of his machine, Konrad Zuse created the Z2 and Z3, and the innovations were quite important. The Z2 was the first fully functioning electromechanical computer, and the Z3 was the first fully automatic, program-controlled digital computer. The Z3 worked with binary floating-point numbers and a switching system, and it even included program storage on punched film of the kind used for old movie reels. In those days most business machines used punched paper; in Germany at the time, however, paper was extremely expensive.
The Harvard Mark I Computer
With World War II blazing on, the US government realized that it
needed to be more innovative than ever in order to gain the upper hand. At
major universities across America, many scientists and mathematicians
worked hard on innovating new ways to keep up with the technology that
was quickly advancing. Much of the focus was on making rockets and bal-
listics more precise. They required complex calculations.
At Harvard, the first of the MARK series computers was being built. The MARK I was completed in 1944. This computer was absolutely huge and
filled a room that was 55 feet long by 8 feet high. It contained an amazing
array of components. In fact, in all it had over 760,000 parts. It was loud
and clicked and clanged like a huge factory. However, the MARK 1 turned
out to be a success. It was utilized by the US Navy for calculations of bal-
listics. It performed well for the next 15 years, being in service till 1959.
The MARK I used pre-punched paper tape. It could perform a wide variety of calculations, including addition, subtraction, multiplication and division, and it was able to hold and reference a previous result in its calculations. It could even compute numbers with up to 23 decimal places. As for the vastness of this machine, it was not only loud, with hundreds of thousands of parts, but also included 500 miles of wire. While the computer itself was high tech for its time, the output was not digital: the MARK I used a simple electric typewriter to display results. Speed was also lacking, with a typical multiplication taking from 3 to 5 seconds.
The ENIAC Computer
The ENIAC computer is known as being one of the most im-
portant achievements in computing. The computer was commissioned dur-
ing World War II and it was originally commissioned and used by the US
military for ballistics research, computing firing tables. ENIAC stands for Electronic Numerical Integrator and Computer. It was developed by John Mauchly and J. Presper Eckert. While John Mauchly had created several previous calculating machines, this machine would be different. The ENIAC
would use vacuum tubes instead of electric motors and levers to speed up
calculations. ENIAC was originally designed starting in 1943, however it
wasn’t built and ready for operation until 1946. The total cost of the ENI-
AC was $500,000. While it was originally built for ballistics it was used for
a whole host of issues including weather, random number studies and even
wind tunnel design. The ENIAC had an enormous number of vacuum tubes (over 17,000) and included 70,000 resistors and over 5 million soldered joints. It covered a space of 187 square meters and weighed over 30
tons. This computer was enormous.
Regarding speed, the ENIAC was blazing fast for the technology
in those times. Per one second, the ENIAC could perform 5,000 additions,
357 multiplications or 38 divisions. The speed of the ENIAC was about
1,000 times faster than any other calculating device during that era. The
ENIAC stayed in operation until 1955.
The First Random Access Memory (RAM)
In 1946, RAM was first introduced and started to be utilized as an
effective data storage device. While the ability to use a cathode ray tube
(CRT) was being studied for several years, the Williams tube was the first
RAM to be utilized in computers. RAM or Random Access Memory is an
easy way to store computer instructions that can be used over and over by
the computer without unnecessary programming. The first RAM was actually a metal pickup plate positioned close to a cathode-ray tube, which detected the difference in electrical charges. On a CRT screen, one can see the difference between these charges as either a dot or pixel of green or black; this, in essence, was binary code, either 0 or 1. This type of memory was used until core memory took over in 1955.
The Manchester Baby and Manchester MARK I
With plenty of innovations taking place in the 1940’s after the
war, faster and more complex computers were being built on both sides of
the Atlantic. England had its own successes with early computers, specifically the Manchester Baby and the Manchester MARK I. The Manchester Baby was developed at the University of Manchester by researchers who had come from the Telecommunications Research Establishment and decided to build a computer based on the Williams tube. One of the designers, Tom Kilburn, devised an even more impressive way of storing data than the original Williams tube could handle; his innovation raised the storage capacity to 2,048 bits of information. The Manchester Baby was the first computer to run a stored program; it went live in 1948.
The Manchester MARK I
Besides the Manchester Baby, the Manchester MARK I was commissioned, and in 1951 it went live. The Manchester MARK I built upon the successes of the day's computers: it showed tremendous progress over machines built just a few years earlier, and it showed researchers the enormous potential of the computer.
The UNIVAC
Besides the ENIAC, one of the most popular computers of the past
is the UNIVAC. The UNIVAC stands for Universal Automatic Computer.
It was built and developed by those that created the ENIAC computer. In-
stead of working for the US military, the UNIVAC was first sold to the US
Census Bureau that required a computer for complex computations dealing
with the explosion in the US population. In 1946, the US Census Bureau gave a $300,000 deposit for the development and creation of the UNIVAC. The contract stated that the Bureau would pay no more than $400,000 for the computer; however, after financial difficulties and cost overruns, the UNIVAC was delivered at a cost of about $1,000,000. By then the UNIVAC was owned by the Remington Rand Corporation, which sold the first UNIVAC at a loss in the hope that later sales of the computer would pay back the initial investment.
The UNIVAC computer was extremely cutting edge for its day. It was fast and able to handle many computations: it could add in 120 microseconds, multiply in 1,800 microseconds and divide in 3,600 microseconds. It was also able to read characters fed in via magnetic tape at a speed of 12,800 characters per second. All in all, it was one of the fastest and most innovative computers of its day. In fact, the UNIVAC received wide public attention when it was used to predict the outcome of the 1952 US presidential election.
IBM and the Computer
IBM today is known for bringing the first widely affordable and
available personal computer (PC) to the masses, however earlier in the
20th century they were widely known for their punch card business ma-
chines such as calculators. The first IBM general-purpose computer was the IBM 701, developed in 1953 partly in response to the Korean War: a computer was needed to help compute and keep track of the effort of policing Korea. One IBM 701 went to the Korean War effort; others went to atomic research, to aircraft companies, and to research facilities including the US Weather Bureau. At
the time, a company or large organization could rent the 701 for $15,000
per month. It was built with storage tubes for memory and used magnetic
tape to store information. It also should be noted that the new computer
language FORTRAN was utilized in the new 701.
Besides the IBM 701, there were other IBM computers to follow, including the 704, the first mass-produced computer with floating-point hardware and a magnetic-core memory that was much faster than magnetic-drum memory. The IBM 7090 was also a big success as IBM's first commercial transistorized computer. It was built in 1960 and was the fastest computer of its day. IBM capitalized on the 7090, and it dominated business computing for the next 20 years.
The Integrated Circuit – The Chip
One of the biggest innovations to the computer was the integrated
circuit (IC) or the chip as it is now known. In fact, the chip has made the
computer extremely powerful and affordable so that practically everyone in
the world today can own a computer. The chip has had an enormous influ-
ence on reducing the cost of the computer, literally cutting it by a factor of
a million to one.
The chip was actually invented by two different companies at about the same time, neither knowing about the other. However, the two were smart enough to combine their licensing agreements to take advantage of the huge market for the technology. In the first few
decades of computer creation, in order to make a computer more powerful
or add innovation, it usually required more and more parts, however with a
chip, everything can be placed on an extremely small piece of silicon.
The first commercial integrated circuits or chips were sold in
1961. While first bought up by the military they later were used in the first
mobile calculators. While the first chip had one transistor, three resistors and one capacitor, which fit in a space of less than a square inch, today's chips are much smaller and can hold more than 125 million transistors.
The First Microprocessor- A Computer on a Chip by Intel Corp.
While the IC chip (integrated circuit) was already developed, Intel
was the first to put a complete microprocessor or computer on a single
chip. The first Intel chip to do so was the 4004.
[Figure: the Intel 4004 microprocessor]
The 4004 was able to put a central processing unit, memory, input
and output controls on one very small chip. This chip had huge implications for almost anything digital, and as the years went on, Intel was able to
create smaller, more powerful chips that actually cost less. The personal
computer of today has the Intel 4004 chip to thank for its ability to be in-
credibly powerful and affordable for the consumer.
The First Consumer Computers
If you wanted to use a computer in the 1960s or 1970s, these huge devices were not only very rare, available mainly to students and researchers at major universities, but also extremely costly to run. However, for
those that were interested and fascinated by computers, most were looking
for ways to own their very own affordable computer. One of the first con-
sumer computers to hit the market was the MITS Altair 8800. It was devel-
oped in 1973 and 1974 and was first sold in 1975 as the “World’s First
Minicomputer Kit to Rival Commercial Models”. The computer included
an 8080 CPU, 256 byte RAM card and a new bus that had 100 pins. It was
a kit, so it needed to be put together by the customer and sold for $400.
The First Apple Computers
During the mid-1970s, there were plenty of hobby computers for sale; however, many were difficult to put together, had rows of indistinguishable switches, and had to be programmed using difficult languages.
Steve Wozniak was a computer hobbyist and started Apple Computers with
his friend Steve Jobs. At first they showed off the Apple I computer. The
Apple I came equipped with a single circuit board, video interface, 8K of
RAM, a keyboard and was made with affordable components including the
6502 processor that cost only $20.
While about 200 Apple I computers were sold in 1976, in 1977, at
the first West Coast Computer Faire, the Apple II was released with many
of the same components, an increase of RAM and a floppy disk drive.
While the first Apple computer sold for $666.66, the second was a little
more polished and more expensive selling for $1,298.
1977 Was a Banner Year for the Home Computer
During 1977, Apple II, Commodore Pet and the Radio Shack
TRS80 all became available for the home. With both the Apple II and TRS80 using floppy disk drives, it became easier for software developers to create and sell programs to the masses. One company that started to grow, and even trademarked its name in 1977, was Microsoft.
The IBM PC
IBM has had an enormous influence on the computers that we use
today. While many computers that IBM first created were for defense or for
large government organizations and corporations, IBM started to notice
that there was a tremendous amount of demand building up for home com-
puters in the 1970’s. In the late 1970’s and into 1980 IBM developed a per-
sonal computer known as the PC. It went on to be released to the public in
August of 1981. The IBM PC grabbed the attention of the public and many
businesses that realized that since IBM was selling PC’s to the public, there
must be real demand.
Out of the PC there came numerous companies that innovated the
PC. And since the IBM PC was based on off the shelf parts and had an
open architecture, many businesses would be able to support and even start
to build computers of their own. The first IBM PC had a 4.77 MHz Intel 8088 microprocessor, 16 KB of RAM, two 160 KB floppy drives and even an optional color monitor. While the price was still on the expensive end, at $1,565, many hailed this as the beginning of the home computing market.
The Apple Macintosh
While the IBM PC definitely took off, not only for consumers, but
small and medium businesses, Apple computers still continued to be domi-
nant in the market. In 1984, the Apple Macintosh was released. The Apple Macintosh had one of the first GUIs (graphical user interfaces), which made computing much more attractive and easy to use. The Apple Macintosh also had an 8 MHz processor, 128K of RAM, a floppy disk drive and a monitor; it was in production from January 1984 to October 1985 and cost around $2,500. However, it was short on memory and was awkward to use with its single floppy disk drive.
The Computers of Today
A lot has changed since IBM introduced its first PC. Computers have infiltrated practically every aspect of our lives. Today,
computers are extremely powerful, extremely small and more affordable
than ever. With the advent of the internet in the late 60’s and the growth of
the worldwide web decades later, the computer is used as a powerful tool to
communicate and conduct commerce.
In fact, the computer has been a tremendous engine in worldwide
growth and has helped raise the quality of life for potentially billions of
people. As the computer becomes more and more sophisticated and morphs
with a wide variety of other aspects of our lives, where and how the com-
puter will continue to evolve is still unimaginable.
Computers from 2000-2010
2000: In Japan, Softbank introduced the first camera phone, the J-
Phone J-SH04. The camera had a maximum resolution of 0.11 megapixels,
a 256-color display, and photos could be shared wirelessly. It was such a
hit that a flip-phone version was released just a month later.
Also in 2000, the USB flash drive is introduced. Used for data storage, flash drives were faster and offered more storage space than other storage media options. Plus, they couldn't be scratched like CDs.
2001: Apple introduces the Mac OS X operating system. Not to be
outdone, Microsoft unveiled Windows XP soon after.
Also, the first Apple stores are opened in Tysons Corner, Virginia, and
Glendale, California. Apple also released iTunes, which allowed users to
record music from CDs, burn it onto the program, and then mix it with oth-
er songs to create a custom CD.
2003: Apple releases the iTunes Music Store, giving users the ability to
purchase songs within the program. In less than a week after its debut, over
1 million songs were downloaded.
Also in 2003, the Blu-ray optical disc is released as the successor of
the DVD.
And, who can forget the popular social networking site Myspace,
which was founded in 2003. By 2005, it had more than 100 million users.
2004: The first challenger of Microsoft’s Internet Explorer came in
the form of Mozilla’s Firefox 1.0. That same year, Facebook launched as a
social networking site.
2005: YouTube, the popular video-sharing service, is founded by
Jawed Karim, Steve Chen, and Chad Hurley. Later that year, Google ac-
quired the mobile phone operating system Android.
2006: Apple unveiled the MacBook Pro, making it their first Intel-
based, dual-core mobile computer.
That same year at the World Economic Forum in Davos, Switzerland,
the United Nations Development Program announced they were creating a
program to deliver technology and resources to schools in under-developed
countries. The project became the One Laptop per Child Consortium,
which was founded by Nicholas Negroponte, the founder of MIT’s Media
Lab. By 2011, over 2.4 million laptops had been shipped.
And, we can’t forget to mention the launch of Amazon Web Services,
including Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). EC2 made it possible for users to use the cloud to scale server
capacity quickly and efficiently. S3 was a cloud-based file hosting service
that charged users monthly for the amount of data they stored.
2007: Apple released the first iPhone, bringing many computer func-
tions to the palm of our hands. It featured a combination of a web browser,
a music player, and a cell phone -- all in one. Users could also download
additional functionality in the form of “apps”. The full-touchscreen
smartphone allowed for GPS navigation, texting, a built-in calendar, a
high-definition camera, and weather reports.
Also in 2007, Amazon released the Kindle, one of the first electronic
reading systems to gain a large following among consumers.
And, Dropbox was founded by Arash Ferdowsi and Drew Houston as
a way for users to have convenient storage and access to their files on a
cloud-based service.
2008: Apple releases the MacBook Air, the first ultraportable notebook: a thin, lightweight laptop with a high-capacity battery. To get it to a smaller size, Apple replaced the traditional hard drive with a solid-state disk, making it the first mass-marketed computer to do so.
2009: Microsoft launched Windows 7.
2010: Apple released the iPad, officially breaking into the dormant
tablet computer category. This new gadget came with many features the
iPhone had, plus a 9-inch screen and minus the phone.
Computers from 2011 - present day
2011: Google releases the Chromebook, a laptop that runs on Google
Chrome OS.
Also in 2011, the Nest Learning Thermostat emerges as one of the first Internet of Things devices, allowing remote access to a user's home thermostat from a smartphone or tablet. It also sent monthly power
consumption reports to help customers save on energy bills.
To sum it up…
Computer generations are based on when major technological changes
in computers occurred, like the use of vacuum tubes, transistors, and the
microprocessor. As of 2020, there are five generations of the computer:
First generation (1940 - 1956)
Second generation (1956 - 1963)
Third generation (1964 - 1971)
Fourth generation (1972 - 2010)
Fifth generation (2010 to present)
First generation (1940 - 1956)
The first generation of computers used vacuum tubes as their major piece of technology. Vacuum tubes were widely used in computers from 1940 through 1956. Vacuum tubes were large components, so first-generation computers were quite large in size and limited to basic calculations; some of them took up an entire room. The input method of these computers was machine language, known as 1GL or first-generation language. Data was entered physically, using punch cards, paper tape, and magnetic tape.
The ENIAC is a great example of a first generation computer. It con-
sisted of nearly 20,000 vacuum tubes, as well as 10,000 capacitors and
70,000 resistors. It weighed over 30 tons and took up a lot of space, requir-
ing a large room to house it. Other examples of first generation computers
include the EDSAC, IBM 701, and Manchester Mark 1.
Second generation (1956 - 1963)
The second generation of computers saw the use of transistors instead
of vacuum tubes. These made them far more compact than the first genera-
tion computers. Transistors were widely used in computers
from 1956 to 1963. Transistors were smaller than vacuum tubes and al-
lowed computers to be smaller in size, faster in speed, and cheaper to build.
The first computer to use transistors was the TX-0 and was introduced
in 1956. Other computers that used transistors include the IBM 7070, Phil-
co Transac S-1000, and RCA 501. The inputs for these computers were higher-level languages like COBOL and FORTRAN. In these computers, primary memory was stored on magnetic cores and magnetic tape, and magnetic disks were used as secondary storage. Examples of second-generation computers include the IBM 1620, IBM 7094, CDC 1604, CDC 3600 and UNIVAC 1108. Because they used transistors rather than vacuum tubes, they were faster than their predecessors.
Third generation (1964 - 1971)
The third generation of computers introduced the use of ICs (integrated circuits). Using ICs helped reduce the size of computers even further compared to second generation computers, as well as make them faster.
An integrated circuit is a small device that can contain thousands of components such as transistors, resistors and other circuit elements. Jack Kilby is credited with the invention of the integrated circuit, or IC chip. With the invention of the IC, it became possible to fit thousands of circuit elements into a small region, and so computers eventually became smaller and smaller. Nearly all computers since the mid to late 1960s have utilized ICs. While the third generation is considered by many people to have spanned from 1964 to 1971, ICs are still used in computers today.
Another salient feature of these computers was that they were much more reliable and consumed far less power. The input languages for such computers were COBOL, FORTRAN-II through FORTRAN-IV, PASCAL, ALGOL-68, BASIC, etc. These languages were much more expressive, and consequently more and more complex calculations became possible.
Fourth generation (1972 - 2010)
The fourth generation of computers took advantage of the invention of
the microprocessor, more commonly known as a CPU. Microprocessors,
along with integrated circuits, helped make it possible for computers to fit
easily on a desk and for the introduction of the laptop.
Some of the earliest computers to use a microprocessor include
the Altair 8800, IBM 5100, and Micral. Today's computers still use a mi-
croprocessor, despite the fourth generation being considered to have ended
in 2010.
Some accounts date the fourth generation more narrowly, to 1971-1980. These computers used Very Large Scale Integration (VLSI) circuit technology. Intel was the first company to develop a microprocessor. The first "personal computer", or PC, developed by IBM belonged to this generation. VLSI circuits held about 5,000 transistors on a very small chip and were capable of performing many high-level tasks and computations. These computers were thus very compact and required little electricity to run.
This generation of computers included the first "supercomputers", which could perform many calculations accurately. They were also used in networking and accepted higher, more complicated languages as their inputs, such as C, C++ and dBASE.
Fifth generation (2010 to present)
This is the present generation of computers and the most advanced one. By some accounts it began in the early 1980s and continues to the present. The methods of input include modern high-level languages like Python, R, C#, and Java. These computers are extremely reliable and employ ULSI (Ultra Large Scale Integration) technology. They are at the frontiers of modern scientific computation and are used to develop the artificial intelligence (AI) components that will have the ability to think for themselves.
One of the better-known examples of AI in computers is IBM's Watson, which was featured on the TV show Jeopardy! as a contestant. Other well-known examples include Apple's Siri on the iPhone and Microsoft's Cortana on Windows 8 and Windows 10 computers.
The Google search engine also utilizes AI to process user searches.
Task 1. State if the following statements are true or false:
1. It was an American who was considered to be the first to invent the computer.
2. MARK I performed a vast majority of calculations.
3. MARK I had no display.
4. Dynamic Random Access Memory is an easy way to store instruc-
tions with all necessary programming.
5. Then there appeared Apple computers which were used for small
businesses, not only for consumers.
6. IBM developed a Personal Computer and has had a great impact on
computing.
7. The first calculating device was the abacus.
8. Microsoft launched Windows 7.
9. Charles Babbage designed the first PC.
Task 2. Insert a proper word into the sentence. Use the texts:
1. Walking across a room, for instance, requires many com-
plex, albeit _____________ calculations.
2. The Difference Engine is used in calculating the ___________ math tables.
3. Soon appeared ____________ calculating devices based on Napier's logarithm.
4. In 1820 Charles Xavier Thomas de Colmar of France built his ______________, the first commercial mass-produced calculating device.
5. MARK computer was absolutely huge and filled ____________ that was 55 feet long by 8 feet high.
6. _____________ computer is known as being one of the most important achievements in computing.
7. IBM today is known for bringing the first widely affordable and available __________ computer to the masses.
8. One of the first consumer computers to hit the market was ________. It was developed in 1973 and 1974 and was first sold in 1975 as the “World’s First Minicomputer Kit to Rival Commercial Models”.
9. The Apple I came equipped with _________, video interface, ________, ______ ____ and was made with affordable components including the 6502 processor.
Task 3. Answer the following questions on the texts above:
1. How can a computer be described?
2. Who designed Analytical Engine? What was it used for?
3. What was the first calculating device?
4. Whose name is connected with the invention of logarithm? What is
logarithm property?
5. Who invented the calculating Clock? What was it like?
6. Whose name is associated with Arithmometer?
7. What is Konrad Zuse famous for?
8. When and where did Mark I appear? Was it tiny?
9. What was used in ENIAC instead of electric motors for speeding up
calculations?
10. What is known of UNIVAC?
11. What was the first commercial transistorized computer?
12. Does Steve Wozniak have any relation to Apple computers?
13. How many generations of computers are there?
Task 4. Translate the text from English into Russian:
A computer might be described with deceptive simplicity as “an appa-
ratus that performs routine calculations automatically.” Such a definition
would owe its deceptiveness to a naive and narrow view of calculation as a
strictly mathematical process. In fact, calculation underlies many activities
that are not normally thought of as mathematical. Computers, too, have
proved capable of solving a vast array of problems, from balancing a
checkbook to even—in the form of guidance systems for robots—walking
across a room.
Before the true power of computing could be realized, therefore, the
naive view of calculation had to be overcome. The inventors who laboured
to bring the computer into the world had to learn that the thing they were
inventing was not just a number cruncher, not merely a calculator. For ex-
ample, they had to learn that it was not necessary to invent a new computer
for every new calculation and that a computer could be designed to solve
numerous problems, even problems not yet imagined when the computer
was built. They also had to learn how to tell such a general problem-
solving computer what problem to solve. In other words, they had to invent
programming.
They had to solve all the heady problems of developing such a device,
of implementing the design, of actually building the thing. The history of
the solving of these problems is the history of the computer. That history is
covered in this section, and links are provided to entries on many of the in-
dividuals and companies mentioned.
Task 5. Scheme the text below and find the answer to the question: “Who is the father of the computer?”
There are hundreds of people who have made major contributions to the field of computing. The following sections detail the primary founding fathers of computing, the computer, and the personal computer we all know and use today.
Charles Babbage is considered to be the father of computing after his concept, and later design, of the Analytical Engine in 1837. The Analytical Engine contained an ALU (arithmetic logic unit), basic flow control, and integrated memory, and is hailed as the first general-purpose computer concept. Unfortunately, because of funding issues, this computer was not built while Charles Babbage was alive.
However, in 1910 Henry Babbage, Charles Babbage's youngest son, was able to complete a portion of the machine, which could perform basic calculations. In 1991, the London Science Museum completed a working version of the Difference Engine No. 2. This version incorporated the refinements that Babbage developed during the creation of the Analytical Engine.
Although Babbage never completed his invention in his lifetime, his
radical ideas and concepts of the computer are what make him the father of
computing.
There are several people who can be considered the father of the com-
puter including Alan Turing, John Atanasoff, and John von Neumann.
However, we consider Konrad Zuse as the father of the computer with the
advent of the Z1, Z2, Z3, and Z4.
From 1936 to 1938, Konrad Zuse created the Z1 in his parents' living room. The Z1 consisted of over 30,000 metal parts and is considered to be the first electromechanical binary programmable computer. In 1939, the German military commissioned Zuse to build the Z2, which was largely based on the Z1. He completed the Z3 in May 1941; the Z3 was a revolutionary computer for its time and is considered the first electromechanical, program-controlled computer. Finally, on July 12, 1950, Zuse completed and shipped the Z4 computer, which is considered to be the first commercial computer.
Henry Edward Roberts coined the term "personal computer" and is considered to be the father of the modern personal computer after he released the Altair 8800 on December 19, 1974. It was featured on the front cover of Popular Electronics in 1975, making it an overnight success. The computer was available as a kit for $439 or assembled for $621 and had several additional add-ons, such as a memory board and interface boards. By August 1975, over 5,000 Altair 8800 personal computers had been sold, starting the personal computer revolution.
Other computer pioneers
There are thousands of pioneers who have helped contribute to the development of the computer as we know it today.
PART C (in addition to what you have read)
Major milestones in the development of modern day computers.
We have been using computers for the past 40 years, but the origin of the concepts, algorithms and developments in computation dates back to very early cultures.
In very early days, before there were any computational devices, people used pebbles, bones and the fingers of their hands to count and calculate.
They even used ropes and shapes for some measurements.
For example, for assuring a right angle, people used a 3-4-5 right triangle shape, or a rope with 12 evenly spaced knots, which could be formed into a 3-4-5 right triangle.
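The rope trick works because of the Pythagorean theorem: 3² + 4² = 9 + 16 = 25 = 5², so the angle opposite the longest side must be a right angle. A small sketch in Python (added here for illustration; it is not part of the original knot method) verifies both facts:

```python
import math

# A 3-4-5 triangle satisfies the Pythagorean theorem: 3^2 + 4^2 = 5^2.
a, b, c = 3, 4, 5
print(a ** 2 + b ** 2 == c ** 2)  # True

# By the law of cosines, the angle between the sides of length 3 and 4
# is arccos((a^2 + b^2 - c^2) / (2ab)) = arccos(0) = 90 degrees.
angle = math.degrees(math.acos((a ** 2 + b ** 2 - c ** 2) / (2 * a * b)))
print(round(angle))  # 90
```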
Use of counters to aid calculations: 3rd - 6th century B.C.
Simple calculations were carried out in innovative ways with stones, pebbles and even bones. These were called counters. We can now find many versions of the abacus with more complicated calculation abilities.
Many such algorithms were developed around the world by early
mathematicians like Panini, Euclid, Leibniz and others.
By the middle of the 16th century, exploration of various continents and trade created a need for precise calculation of sea routes, accounting, etc. Some mechanical devices were also developed to assist in tedious and repetitive calculations, such as generating yearly calendars, taxation and trading.
The first computers were people. This was a job title given to people who did repetitive calculations for navigational tables, planetary positions and other such needs. Mostly women with mathematical proficiency were employed for the job. One important piece of automation, the Jacquard loom, occupies a special place in computer history.
Automation with punched cards 18th-19th century
Trade, travel, and population growth (which increased demand for clothing, food, etc.) led to the automation of machinery in the 18th-19th centuries.
The Jacquard loom, invented by Joseph Marie Jacquard, used punched cards to control a sequence of operations. The pattern of the loom's weave could be changed by changing the punched card. Why do you think Jacquard looms are important?
In Scratch programming, the computer takes the blocks one by one and executes them. The loom, too, weaves the design on the punched card line by line, in sequence. In computers we use an input device such as a keyboard to enter data; the punched card is like an input to the loom.
Charles Babbage used the punched card idea to store data in his analytical machine.
Boolean algebra, which is extensively used in computers, was also developed in the 19th century by the mathematician George Boole.
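Boole's algebra works with just two values, true and false, combined by operations such as AND, OR and NOT, and these are exactly the logical operations computers are built on. A minimal illustration in Python (added here, not part of the original text):

```python
# Boolean algebra uses only two values (True/False) and the
# operations AND, OR and NOT.
x, y = True, False
print(x and y)   # False
print(x or y)    # True
print(not x)     # False

# De Morgan's law, one of Boole's identities, holds for all inputs:
# NOT (x AND y) == (NOT x) OR (NOT y)
for x in (True, False):
    for y in (True, False):
        assert (not (x and y)) == ((not x) or (not y))
```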
Note that the 19th century contributions to automation and the development of algorithms were of immense value to the development of electronic computers in the following century.
Mechanical computation machines - 19th century
Developments in logic and the need for more complicated calculations led to mechanical computation devices designed and implemented for varying degrees of computation. But accuracy, speed and precision could not be ensured because of the wear and tear of the mechanical components.
Mechanical computation machines - early 19th century
Charles Babbage - The Analytical Machine
The analytical machine was designed but not built. The main parts of the machine were called the "store" and the "mill". Punched cards in the store hold data, which is equivalent to the memory unit in computers. The mill weaves, or processes, the data to give a result, which is equivalent to the central processing unit in computers. Babbage also used conditional processing of data (compare the "if" block in Scratch).
Ada Lovelace - The first programmer
Ada Lovelace, a friend of Babbage, wrote the first sequences of instructions for various tasks for the analytical engine. They used the programming concept of looping for repetitive actions (compare the "repeat" block in Scratch). She also used subroutines in her programs.
The early 20th century saw many analog computers, which were mechanical, electrical or electro-mechanical devices. These served limited purposes, such as solving particular mathematical equations, decoding messages or computing artillery firing tables in World War II. The digital computers that followed were based on binary representation of data and Boolean algebra.
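Binary representation means that every number (and ultimately all data) is written using only the digits 0 and 1, which map naturally onto the two states of a switch or vacuum tube. A quick illustration in Python (added for clarity, not part of the original text):

```python
# The decimal number 13 in binary is 1101:
# 1*8 + 1*4 + 0*2 + 1*1 = 13
n = 13
print(bin(n))             # '0b1101'
print(int("1101", 2))     # 13

# Boolean algebra then operates on individual bits; for example,
# a bitwise AND of 1101 and 1011 keeps only the shared 1-bits:
print(bin(0b1101 & 0b1011))  # '0b1001'
```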
Analog computers - the first general-purpose computers - 1900-1940
The wartime requirements for artillery firing and for communicating strategies in complicated codes led to electromechanical computers, in which magnetic storage and vacuum tubes were first used. Babbage's punched-card idea was used to input data.
1936 - Alan Turing, regarded as the father of modern computer science, provided a formalization of the concepts of algorithm and computation.
1941 - Konrad Zuse, inventor of the program-controlled computer, built the first working programmable computer, based on electromechanical relays.
1942 - The Atanasoff-Berry Computer used vacuum tubes and binary numbers but was not programmable.
1943 - Colossus, a secret British computer with limited programmability, was built using vacuum tubes to break German wartime codes. It was the first computer used to read and decipher coded messages.
1944 - Harvard Mark I, an electromechanical computer built out of switches, relays, rotating shafts, and clutches, had limited programmability. It used punched paper tape instead of punched cards and worked for almost 15 years. Grace Hopper was its primary programmer. She invented the first high-level language, called Flow-Matic, which later developed into COBOL, and she also constructed the first compiler. She found the first computer "bug": a dead moth that got into the Mark I and whose wings were blocking the reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at least 1889, but Hopper is credited with coining the word "debugging" to describe the work of eliminating program faults.
Next came the "stored program architecture" of von Neumann in 1945. With this architecture, rewiring was not required to change a program: the program and data were stored in memory, and instructions were processed one after the other.
The input was typed at a terminal, which looked like a monitor with a keyboard in front, or punched on cards. Each instruction was typed on one card, and the deck of cards was read by a card reader and stored in memory. Those who submitted a program had to wait until it was processed and the output printed and handed to them. If they had to change the program, they had to type another card and insert it in the deck.
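The stored-program idea can be sketched in a few lines: the program itself sits in memory as data, and the machine processes its instructions one after the other. The three-instruction set below (LOAD, ADD, PRINT) is purely invented for illustration, not any real machine's:

```python
# Program and data share one memory; changing the program means
# changing memory contents, not rewiring the machine.
memory = [
    ("LOAD", 2),      # put 2 into the accumulator
    ("ADD", 3),       # add 3 to the accumulator
    ("PRINT", None),  # output the accumulator
]

accumulator = 0
for opcode, operand in memory:  # instructions processed one after the other
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)  # prints 5
```

To run a different program, we would only replace the contents of `memory`, which is exactly the flexibility von Neumann's architecture introduced.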
Digital computers - 1940 to 1970
Censuses, elections, research in various fields and many other advances required increased speed, precision and immediate results. The stored-program digital computer architecture, with a CPU and memory to hold instructions and data, was designed around 1946.
These computers were built using vacuum tubes, transistors and integrated circuits, which define the first three generations of computers. The classification into generations is based on technology, speed, storage, reliability and cost.
Computation machines - second half of the 20th century
First generation computers
These computers, such as the ENIAC, EDVAC and UNIVAC, were made of vacuum tubes back in 1945-55. They were huge in size and very costly to maintain.
Second generation computers
These computers, developed after 1955, had transistors in place of vacuum tubes. Transistors were more reliable, much cheaper and smaller. This generation had more computing power and was smaller in size, easier to maintain and more affordable than the previous generation.
Third generation computers
These computers, developed in the 1960s, used integrated circuits: miniaturized transistors placed on silicon semiconductor chips, which drastically increased the speed and efficiency of computers.
The microprocessor revolution brought an explosion in the use of computers in every field. The size of computers decreased while their speed and storage capacity increased; most importantly, reliability rose and cost fell. The invention of microprocessors revolutionized computer development, and thanks to the reduction in cost, by 1990 students could own a personal computer.
Computers with microprocessors - 1970 onwards
Using microprocessors in computers increased reliability and precision and reduced size and cost. This led to the use of computers in offices, colleges and homes, and to the exploration of computer usage in every field.
Computation machines - after the 1970s
Fourth generation computers
These were developed in the 1970s and used microprocessors, or chips. The microprocessors were smaller than a postage stamp and had tremendous computing capabilities.
Fifth generation computers
These were developed in the 1980s and used the concept of artificial intelligence. The different types of fifth generation computers are the desktop, notebook (laptop), palmtop, server, mainframe and supercomputer.
• Desktop computers are based on ICs.
• A notebook or laptop computer is the same as a desktop but can be carried around.
• A palmtop is a miniature version of the notebook with limited capabilities.
• A server is a powerful version of the desktop, capable of catering to various applications in a network environment.
• A mainframe is a powerful version of the server and is capable of handling huge applications and data processing.
• A supercomputer has multiple processors to perform scientific applications that process trillions of pieces of information per second.
Computers are also used in many devices such as phones and household machines like washing machines. These are very small computers which cannot be programmed by the user but help operate the device. They are called embedded devices.
Late 20th century - Networking, Smart phones and FOSS
We have also collected some information about the history of net-
working and related technologies that revolutionized many aspects of our
daily life like communication, buying tickets, banking, information and
much more.
Currently (2011) we have very advanced smartphones with many of the features of a computer. For example, we can browse the internet, check email and play games.
Smart phones of today date back to 1992
Smartphones, 1992: the first smartphone, the IBM Simon, was designed in 1992 and released in 1993. It also contained a calendar, an address book, a world clock, a calculator, a note pad, an e-mail client, the ability to send and receive faxes, and games. It had no physical buttons; instead, customers used a touchscreen to select telephone numbers with a finger, or to create facsimiles and memos with an optional stylus. Text was entered with a unique on-screen "predictive" keyboard (as the user typed, likely words were predicted and could be selected).
So, that covers the history of computers from the abacus to smartphones.
History teaches you not only how things were made but also how you
can innovate and invent.
Task 1. Given are some of the devices used for calculation. Can you arrange them in the order in which they appeared?
Palmtop
Abacus
ENIAC
Pebbles
Napier bones
Punched card reader
Desktop
Laptop
Task 2. List some advantages of fifth generation computers compared to computers of other generations.
Task 3. Fill in each blank with a word chosen from the list below to complete the meaning of the sentence:
chip, speed, figure out, calculating, reduces, microminiaturization,
analog, logarithm, abacus, machine, vacuum tubes,
tiny, dependable, devised
1. The very first .....device used was 10 fingers of a man’s hand.
2. Then, the .....was invented.
3. J. Napier .....a mechanical way of multiplying and dividing.
4. Henry Briggs used J.Napier’s ideas to produce ..... .
5. The first real calculating .....appeared in 1820.
6. This type of machine .....the possibility of making mistakes.
7. In 1930 the first .....computer was built.
8. This was the first machine that could ..... ..... mathematical prob-
lems at a very fast speed.
9. In 1946 was built the first digital computer using parts called .... .
10. The reason for this extra .....was the use of transistors instead of
vacuum tubes.
11. The second generation computers were smaller, faster and more
.....than first-generation computers.
12. The third-generation computers are controlled by .....integrated
circuits.
13. This is due to ....., which means that the circuits are much smaller
than before.
14. A .....is a square or rectangular piece of silicon, usually from 1/10
to 1/4 inch.
Task 4. Fill in the prepositions:
1. Let us take a look .....the history of computers.
2. That is why we count .....tens and multiply ..... tens.
3. The beads are moved .....left .....right.
4. Abacus is still being used .....some parts ..... the world.
5. Calculus was independently invented .....both Sir Isaac Newton and
Leibnitz.
6. This type of machine depends .....a ten-toothed gear wheels.
7. «The Analytical Engine» was shown .....the Paris Exhibition .....
1855.
8. The men responsible .....this invention were Professor Howard Ai-
ken and some people ..... IBM.
9. The first generation of computers came .....in 1950.
10. Due to microminiaturization 1000 tiny circuits fit .....a single chip.
Task 5. Finish the following sentences:
1. The first generation of computers came out in ..... .
2. The second generation of computers could perform work ten times
faster than their .... .
3. The third-generation computers appeared on the market in ..... .
4. The fourth-generation computers have been greatly ..... .
5. The fourth-generation computers are 50 times faster and can ..... .
Task 6. Find the synonyms to the following words in the text:
simple, to carry out, up to date, quick, to try, small
Task 7. Find the antonyms to the following words in the text:
Like, short, to increase, sole, dependently
Task 8. Arrange the items of the plan in a logical order according to
the text:
1. J. Napier devised a mechanical way of multiplying and dividing.
2. The very first calculating device was the ten fingers of a man’s
hands.
3. Babbage showed his analytical engine at Paris Exhibition.
4. The first real calculating machine appeared in 1820.
5. The first analog computer was used in World War II.
Task 9. Answer the questions on the text:
1. What was the very first calculating device?
2. What is abacus? When did people begin to use them?
3. When did a lot of people try to find easy ways of calculating?
4. Who used Napier’s ideas to produce logarithm?
5. What was invented by Sir Isaac Newton and Leibnitz?
6. What did Charles Babbage design?
7. When was the first analog computer built? How did people use it?
8. Who built the first digital computer?
9. How did the first generation of computers work?
10. What are the differences between the first and the second comput-
er generations?
11. When did the third-generation computers appear?
UNIT 2
TYPES OF A COMPUTER
You can group computers into different types based on what peo-
ple use them for. They are not all the same and you should know the differ-
ences.
Mainframe Computers
Mainframe computers were the first major, commercialized computers
from back in the 50s.
These guys took up entire rooms, letting you access their compu-
ting power from workstation monitors known as dumb terminals. Those
dumb terminals didn't have any of their own processing power; they had to
ask the mainframe computer to do any- and everything.
Mainframe computers can have multiple processors with different
operating systems and allow many programs to run at the same time. Today
you'll be able to find mainframe computers at businesses that deal with gi-
ant amounts of data (think corporate payroll processing, state and federal
tax returns, and healthcare claims).
But more importantly: you can also find them peppered in Hollywood
pseudo-jargon.
Supercomputers
Just like the mainframe, supercomputers are huge (hence the
name). Unlike mainframes, people typically only use them for one, com-
plex task. All of the computer’s resources are thrown at that one task to
help find a solution. To deserve all that work, that task is usually a big one:
something researchers are using to simulate and model some big, hairy
problem. Problems like these usually take multiple years to solve. We're
talking about things like simulating flight and processing climate patterns.
Say we are talking about monitoring climate change with super-
computers. If scientists can crunch giant data sets to figure out what kind of
severe weather can be predicted when, we stand a much better chance at
being able to react to and protect against those giant storms.
Supercomputers are fabulous at helping researchers solve these kinds
of problems.
Personal Computers (PCs)
As computing technology improved in the 60s, there was less of a
need to make people share mainframe computers all the time, giving rise to
the—wait for it—personal computer. The first PC (Programma 101) was
developed in the 1960s by an Italian company called Olivetti. NASA
bought several of these early devices and used them to support the 1969
Apollo moon landing. It wasn't until the early 1980s (twenty years after NASA used those Programmas to send astronauts to the moon) that PCs became accessible to families and business employees.
The first widely available PCs sold for about $3,000. By 2010 the av-
erage sale price had come down to about $550.
Sound like a crazy decrease? It is. It's a giant decrease.
PCs are all about giving you every function you'd need in your
day-to-day life. Instead of focusing on one, complex task, they can handle
hundreds of simultaneous tasks so that you can get all your work done.
Even if that work's actually just playing the latest video game.
Laptops
A laptop computer is a small, light computer that you can easily carry
about with you. It can be powered by battery or mains power. A laptop
computer has a keyboard, and comes with specialized input devices, for ex-
ample trackballs, touch pads or track points. They are needed because lap-
top computers are often operated in places where it is impracticable to use a
mouse.
For output the laptop has an LCD or TFT screen and a set of small
speakers.
‘Laptops’ are often as powerful as desktop computers and run the same
range and type of software.
People use laptops for working when they are on the move, going to
meetings or attending courses.
Many businesses are replacing desktop PCs with special plug-in
workstations designed round laptop computers because of the flexibility
they offer.
Functionally, a laptop is basically the same as a personal computer
(plus maybe some added risks). The main difference is that you can move a
laptop much more easily than whatever tower, monitor, speaker, and
webcam setup you might have at home.
That isn't to say laptops are automatically better than PCs. As laptops
have to fit in backpacks and manila envelopes, their components end up
more squished together. That lack of space makes changes like hard disk
and memory upgrades more difficult.
Plus, it's virtually impossible to make those changes without paying
someone else to do it.
Desktop computers
A desktop computer is the most common kind of PC. It is a collection
of a number of different hardware devices. This type of computer is sited
permanently on a desk because its design means it cannot be easily moved.
The common components of a desktop PC are:
the system unit containing the processor and main memory
a monitor
a keyboard
a mouse
a hard disk drive
a floppy disk drive
a CD/DVD drive
speakers.
Palmtop Computer or Personal Digital Assistant (PDA)
This type of computer is increasing in popularity, and is often
called a Personal Digital Assistant (PDA).
A palmtop computer is small enough to fit in your pocket.
It combines a lot of capabilities, including organizer features (such as
storing contact numbers, names and addresses, etc.), e-mail and wireless in-
ternet access.
Palmtops have small keyboards, and most let you open menus and select icons by using a special pen or stylus. Most let you enter data by writing with the stylus. They are powered by batteries and store their data on removable memory units called flash cards.
You can run a wide range of software on palmtops, for example simple word processing, database and spreadsheet software, as well as useful applications such as electronic diaries. Most modern palmtops:
are converging with mobile phones to let you access the internet
have wireless communications to let you access your local area network.
(Pictures: a mainframe, a PDA, a laptop, a desktop computer)
Tablets
Boy, if you thought laptops had cramped quarters, tablets are even
smaller. They even replace the mouse and keyboard with a touchscreen
display. Even though they've also been around since the 80s, tablets didn't
really become popular until smartphones started gaining momentum.
Networks
Even if you're just sharing a printer with your roommate, technically
you're on a computer network.
The most common types of networks (aside from the deal you've brokered
with your roommate that they'd share the printer if you let them control
the remote) are:
Local Area Networks (LANs): computers all at the same site, all con-
nected to a common file server.
Wide Area Networks (WANs): computers located at different sites—
anywhere in the world—with a common file server.
Generally, if you have got four users or more involved, it makes
more sense money-wise to have those users work on a network so that they
can share devices and software. That way, you will not have to duplicate
devices and software to every user.
These kinds of systems have pros and cons.
Pros:
When you can share software between multiple users, the costs per
user go way down.
Software upgrades only need to be installed in one location, making
them much easier to manage.
Cons:
Whenever you have a network, you need an admin to take care of the
network's set-up and maintenance. That person also needs to make sure
everything is safe and secure from hackers.
If a shared device breaks, what could have been a problem for one
person becomes a problem for everyone. Not so fun.
Robots
Robots aren't all computer, though. We aren't talking about the ghost in
the machine here; we're just saying that robots have so many other
mechanical and electrical components that make them do more than sit
there and crunch numbers.
Still, the computer programming part tells the robot what to do and how
to do it. You could think of the computer as the brains of the operation,
really. A robot might be able to travel across the surface of Mars looking
for signs of life, with all the mechanical construction and battery power
to make it move, but it won't budge an inch if it doesn't have software
that tells it where to go.
Robots are a great option for jobs that are dirty, dangerous, or dull.
They can help combat fires, disarm bombs, navigate areas with land mines,
and balance really well. All controlled by computers.
Task 1. Answer the following questions:
1. Why do large businesses such as banks use mainframe computers?
2. Describe the components of the desktop computer you use in the
University or at home.
3. Complete this table comparing a desktop with a laptop.

                   Desktop                               Laptop
Output             Ordinary monitor                      LCD/TFT
Backing storage    Hard disk, floppy disk, CD/DVD drive
Power source       Mains power
Portable           No
4. Name three types of software that run on palmtops.
5. Why are palmtops useful?
Broadly, computers can be classified by:
(a) the data handling capabilities and the way they perform signal
processing, and
(b) size, in terms of capacity and speed of operation.
Based on the type of input they accept, computers are of three types:
1. Analogue Computer
Everything we hear and see changes continuously. This variable
continuous stream of data is known as analogue data. Analogue computers
may be used in scientific and industrial applications, for example to
measure electric current, frequency and the resistance of a capacitor.
Analogue computers accept data directly from the measuring device
without first converting it into codes and numbers.
Examples of analogue quantities are temperature, pressure, telephone-line
signals, speedometer readings, the capacity of a capacitor, signal
frequency and voltage.
2. Digital Computer
The digital computer is the most widely used type; it processes data
using digits, usually in the binary number system.
A digital computer is intended to carry out calculations and logical
operations at high speed. It takes raw data as digits or numbers and
processes it using programs stored in its memory to produce output. All
modern computers, such as the laptops and desktops we use at the office
or at home, are digital computers.
It works on data, such as magnitudes, letters and symbols, which are
expressed in binary code, i.e. with just the two digits 1 and 0. By
counting, comparing and manipulating those digits or their combinations
according to a set of instructions stored in its memory, a digital
computer can perform tasks such as controlling industrial processes and
the operation of machinery; examining and organizing vast amounts of
business data; and simulating the behaviour of dynamic systems (e.g.
global climate patterns and chemical reactions) in scientific research.
Digital computers supply accurate results, but they are slower than
analogue computers.
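The binary coding described above can be illustrated with a short Python sketch; the number 42 is an arbitrary example value.

```python
# A digital computer represents every value using only the digits 0 and 1.
# Python's built-in bin() and int() show the same number in both notations.

n = 42
binary = bin(n)             # decimal -> binary string
print(binary)               # '0b101010'

back = int("101010", 2)     # binary string -> decimal
print(back)                 # 42

# Arithmetic is carried out on those bit patterns; the result is the same
# whether we think of the operands as decimal or binary.
print(bin(0b101010 + 0b1))  # 42 + 1 = 43 -> '0b101011'
```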
3. Hybrid Computer
A hybrid computer combines aspects of a digital computer and an analogue
computer. It is fast like an analogue computer and has memory and
precision like a digital computer. It is designed to incorporate a
working analogue unit that is effective for calculations, yet has a
readily accessible digital memory. In large businesses and companies, a
hybrid computer may be employed to integrate logical operations as well
as to provide efficient processing of differential equations.
For instance, a gas pump includes a processor that converts measurements
of fuel flow into volume and cost.
A hybrid computer is used in hospitals to measure the heartbeat of a
patient.
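The gas-pump example above can be sketched in a few lines of Python; the flow rate, duration and price used here are made-up illustrative numbers, not real pump values.

```python
# A pump's processor converts an analogue measurement (fuel flow) into
# digital quantities: the volume dispensed and its cost.

def pump_reading(flow_litres_per_sec: float, seconds: float,
                 price_per_litre: float) -> tuple[float, float]:
    """Convert a flow-rate measurement into volume and cost."""
    volume = flow_litres_per_sec * seconds
    cost = volume * price_per_litre
    return round(volume, 2), round(cost, 2)

volume, cost = pump_reading(0.5, 60, 1.80)  # hypothetical values
print(volume, "litres,", cost, "dollars")   # 30.0 litres, 54.0 dollars
```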
Different kinds and sizes of computer
Since the arrival of the very first computer, machines of different kinds
and sizes have been providing various services. Computers range from
those occupying a massive building to a notebook, or even a
microcontroller in an embedded or mobile system.
Computers can be broadly classified by type, or by size and power, as
follows, although there is considerable overlap.
Supercomputer
A supercomputer is the fastest type of computer on earth; it can process
a considerable amount of information very quickly. The computing
performance of a supercomputer is measured in FLOPS (floating-point
operations per second) rather than MIPS.
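As a very rough illustration of the FLOPS metric, the sketch below times a fixed number of floating-point additions. Note that this measures Python interpreter speed, not the hardware's true peak performance; it is only meant to show what "operations per second" means.

```python
import time

# Time N floating-point additions and divide by the elapsed time to get
# an (interpreter-limited) floating-point-operations-per-second figure.
N = 1_000_000
x = 0.0
start = time.perf_counter()
for _ in range(N):
    x += 1.0                       # one floating-point operation
elapsed = time.perf_counter() - start
flops = N / elapsed
print(f"approx. {flops:,.0f} floating-point ops/second")
```

A supercomputer rated in petaFLOPS performs on the order of 10^15 such operations each second, many orders of magnitude beyond this loop.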
These computers are massive in size: a powerful supercomputer can occupy
anywhere from several feet to hundreds of feet of space. Supercomputer
costs are exceptionally high and can run to over 100 million dollars.
Supercomputers were introduced in the 1960s, beginning with machines such
as the Atlas at the University of Manchester and the designs of Seymour
Cray. Cray designed the CDC 1604, one of the first supercomputers, which
replaced vacuum tubes with transistors.
Uses of Supercomputers
Today's supercomputers do not just perform calculations; they process
enormous amounts of information in parallel, distributing computing jobs
to tens of thousands of CPUs. Supercomputers can be found at work in
research centers, government agencies and companies, performing
mathematical calculations as well as gathering, collating, categorizing
and assessing information.
Weather Forecasting
The regional weatherman bases his predictions on information provided by
supercomputers run by NOAA, the National Oceanic and Atmospheric
Administration. NOAA's systems execute database operations and
mathematical and statistical analysis on enormous amounts of information
gathered from across the country and around the globe. The processing
capacity of supercomputers helps climatologists forecast not merely the
probability of rain in your neighborhood but the paths of hurricanes as
well as the likelihood of whale strikes.
Scientific Research
Like the weather, scientific research depends on the number-crunching
capability of supercomputers. For instance, astronomers at NASA examine
data flowing from satellites orbiting the planet, from ground-based radio
and optical telescopes, and from probes exploring the solar system.
Researchers at the European Organization for Nuclear Research, or CERN,
discovered the Higgs boson by analyzing the huge amounts of data created
by the Large Hadron Collider.
Data Mining
Supercomputers are also needed to extract information from the raw data
accumulated in server farms on the ground or in the cloud. For instance,
companies can analyze data gathered from their cash registers to help
control stock or spot market trends. Life insurance businesses use
supercomputers to lessen their actuarial risks. Likewise, companies that
offer health insurance reduce costs and client premiums by using
supercomputers to analyze the benefits of different treatment options.
The Top Five Popular Supercomputers
• JAGUAR, Oak Ridge National Laboratory
• NEBULAE, China
• ROADRUNNER, Los Alamos National Laboratory
• KRAKEN, National Institute for Computational Sciences
• JUGENE, Juelich Supercomputing Centre, Germany
Mainframe computer
A mainframe computer is a computer system with:
• very powerful processors
• lots of backing storage
• large internal memory.
Mainframes are designed to process large volumes of data at high speed.
They are used by:
large businesses such as banks and mail-order companies
large organizations such as universities.
Mainframe computers can also multi-task by running more than one program
at the same time. This is known as multiprogramming, and with more memory
it has become possible on desktop and laptop computers too.
The mainframe is the sort of computer that runs a whole corporation.
Because of their dimensions, mainframe computers are housed in large
air-conditioned rooms; in the current world, all of a company's business,
transactions and communications are real-time.
To do all this work, a highly effective computer is needed on the host
side, one that processes the instructions and supplies the output in
moments. According to their use in the modern world, we can classify
computers into supercomputer, mainframe, minicomputer and microcomputer
types. A mainframe computer is more powerful than mini- and
microcomputers, but less powerful than a supercomputer. Mainframe
computers are used by large businesses.
The main distinction between a supercomputer and a mainframe is that a
supercomputer channels all its power into executing a single program as
quickly as possible, whereas a mainframe uses its capability to run many
applications simultaneously. In certain ways, mainframes are more
effective than supercomputers because they support more simultaneous
applications; however, a supercomputer can run one program faster than a
mainframe.
Popular Mainframe computers
• IBM 1400 series.
• 700/7000 series.
• System/360.
• System/370.
• IBM 308X.
Minicomputer
A minicomputer is also referred to as a mini. It is a class of small
computers that was introduced to the world in the mid-1960s.
Minicomputers are used by small businesses. A minicomputer is a computer
that has all of the qualities of a large computer, but is significantly
smaller in size. A minicomputer is also known as a midrange computer.
Minicomputers are primarily multi-user systems, where more than one user
can work concurrently.
A minicomputer can support multiple users at one time; you could say that
a minicomputer is a multiprocessing system.
However, the processing power of minicomputers is not greater than that
of mainframes and supercomputers.
Different Types of Minicomputers
• Tablet PCs
• Smartphones
• Notebooks
• Touch Screen Pads
• High-End Music Players
• Desktop Mini Computers
Microcomputer
A microcomputer is a small computer. Your personal machine is a
microcomputer. The mainframe and the minicomputer are the ancestors of
the microcomputer; integrated-circuit manufacturing technology reduced
the size of the mainframe and minicomputer.
Technically, a microcomputer is a computer in which the CPU (central
processing unit, the brains of the machine) consists of a single chip,
the microprocessor, together with input/output devices and a storage
(memory) unit. These elements are essential for the proper functioning of
the microcomputer.
Microcomputers are designed for general uses such as entertainment,
education and work.
Types of Micro Computer
• Desktop computers
• laptops
• personal digital assistants (PDA)
• tablets
• telephones
Task 1. Answer the following questions:
1. Who designs computers and their accessory equipment?
2. What is the role of an analyst?
3. Is it necessary for a user to become a computer system architect?
4. What functions do computer systems perform?
5. What types of computers do you know?
6. What is the principle of operation of analog computers?
7. How do digital computers differ from analog computers?
8. Where are digital and analog computers used?
9. What are hybrid computers?
10. Where do they find application?
Task 2. Choose the right variant:
1. Main component of first generation computer was ___________.
a) Transistors
b) Vacuum Tubes
c) Integrated Circuits
d) Microprocessor
2. Second Generation computers were developed during
____________.
a) 1949-1955
b) 1955-1975
c) 1965-1970
d) 1956-1965
3. The computer's size was very large in ________.
a) First Generation
b) Second Generation
c) Third Generation
d) Fourth Generation
4. _________ was used as a programming language in first-generation
computers.
a) FORTRAN
b) COBOL
c) BASIC
d) Java
5. The computer that processes both analog and digital data is
called____________.
a) Analog Computer
b) Digital Computer
c) Hybrid Computer
d) Mainframe Computer
6. What was the name of the first computer designed by Charles Bab-
bage?
a) Analytical Engine
b) Difference Engine
c) Colossus
d) ABACUS
7. The first computer language developed was_________.
a) COBOL
b) PASCAL
c) BASIC
d) FORTRAN
8. BIOS stands for _________.
a) Basic Input Output System
b) Best Input Output System
c) Basic Input Output Symbol
d) Base Input Output Symbol
9. Minicomputers are also known as_____.
a) Personal computer
b) Midrange computers
c) Laptop
d) Monitor
10. Which computers are used as servers for any medium sized organ-
izations?
a) Mainframe Computers
b) Mini Computers
c) Micro Computers
d) Super Computers
11. Minicomputers and Microcomputers are from which generation of
computers?
a) First Generation
b) Second Generation
c) Third Generation
d) Fourth Generation
12. What are microprocessor chips made of?
a) Silicon
b) Copper
c) Gold
d) Platinum
13. Mainframe computers can run _____ programs.
a) Multiple
b) Single
c) Both A and B
d) None of the above
Task 3.
a) Translate the text into English. Give it a title.
Мейнфре́йм — это большой, универсальный и мощный сер-
вер. Основной разработчик мейнфреймов — корпорация IBM. Но в
разное время мейнфреймы производили так же Hitachi, Siemens,
Amdahl, Fujitsu и др. Историю мейнфреймов принято отсчитывать с
появления в 1964 году универсальной компьютерной системы IBM
System/360, на разработку которой корпорация IBM затратила 5 млрд
долларов. Сам термин «мейнфрейм» происходит от названия типо-
вых процессорных стоек этой системы. В 1960-х — начале 1980-х го-
дов System/360 была безоговорочным лидером на рынке. Её клоны
выпускались во многих странах, в том числе — в СССР. Мейнфреймы
IBM используются в более чем 25 тысячах организаций по всему ми-
ру, в России их, по разным оценкам, от 1500 до 7000. Около 70 % всех
важных бизнес-данных обрабатываются на мейнфреймах.
Рабочая нагрузка мейнфреймов может составлять 80-95 % от их пиковой
производительности. Операционная система будет обрабатывать всё сразу,
причём все приложения будут тесно сотрудничать и использовать общие
компоненты ПО.
Мейнфреймы очень высокопроизводительны. Однако есть компьютеры еще более
производительные, чем мейнфреймы — суперкомпьютеры.
b) Translate the text into Russian.
Do the exercises that follow both texts.
Computers are generally classified as general-purpose or special pur-
pose machine. A general-purpose computer is one used for a variety of
tasks without the need to modify or change it as the tasks change. A com-
mon example is a computer used in business that runs many different ap-
plications.
A special-purpose computer is designed and used solely for one appli-
cation. The machine may need to be redesigned and certainly repro-
grammed, if, it is to perform another task. Special-purpose computers can
be used in a factory to monitor a manufacturing process; in research to
monitor seismological, meteorological and other natural occurrences; and
in the office.
So all computers have much in common, but certain computers differ from
one another. These differences often have to do with the way a particular
computer is used. That is why we can say there are different types of
computers that are suited for different kinds of work or problem solving.
A personal computer is a computer system that fits on a desktop, that an
individual can afford to buy for personal use, and that is intended for a
single user.
Personal computers include desktops, laptops and workstation. Each
type of a personal computer shares many characteristics in common with its
counterparts, but people use them in different ways.
The Desktop Personal computer is a computer that:
-fits on a desktop
-is designed for a single user
-is affordable for an individual to buy for personal use.
Desktop personal computers are used for education, for running a small
business, or in large corporations to help office workers be more
productive. Some common desktop personal computers are:
-The IBM PC and PC-compatible
-The Compaq Deskpro 386
-The IBM PS/2
-The Apple Macintosh
The Laptop Personal Computer is a computer that people can take with
them. A laptop is used by a single individual but can be used in many
different places; it is not confined by its size or weight to a desktop.
It has the same components as a desktop machine, but in most cases the
monitor is built in. The printer is usually separate.
Laptops fall into the same general categories as desktop personal
computers:
-PC-compatibles
-IBM PS/2
-Apple Macintosh portable
Managers and employees who travel frequently use laptops to keep in
touch with their office. Sales representatives keep company information on
their laptops to show prospective clients, and send electronic orders into the
company computers. Writers use laptops so they can work on their manu-
script no matter where they are.
There are many portables available today; some weigh as much as 15
pounds, while others weigh as little as 3 pounds. There are laptops so small
they fit in the palm of your hand. There are laptops that fit in a briefcase,
called notebook computers.
The Workstation is a computer that fits on a desktop, but is more
powerful than a desktop computer. The workstation has a more powerful
microprocessor, is able to service more than one user, has an easy to use in-
terface and is capable of multitasking. While these three characteristics
used to be unique to workstations, they are being adapted to the more
powerful 386 and 486 personal computers over time.
Workstations are designed for three major tasks: scientific and engi-
neering, office automation and education.
The Minicomputer, or mini, is a versatile special or general-purpose
computer designed so that many people can use it at the same time. Minis
operate in ordinary indoor environments; some require air conditioning
while others do not. Minis also can operate in less hospitable places such as
on ships and planes.
Like all computers, the minicomputer is designed as a system. CPUs,
terminals, printers and storage devices can be purchased separately. Mini
systems are more mobile, easier to set up and install. A minicomputer sys-
tem combined with specialized equipment and peripherals is designed to
perform a specific task. A popular minicomputer is the Digital VAX Com-
puter.
A Supercomputer is a very fast special-purpose computer designed to
perform highly sophisticated or complex scientific calculations, for
example calculating a prime number (one that is divisible only by 1 and
itself) or the distance between planets. But supercomputers permit
turning many other problems into numbers, such as molecular modeling,
geographic modeling and image processing.
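The prime-number definition given above (a number divisible only by 1 and itself) translates directly into a simple trial-division check. This is only an illustrative sketch; real supercomputer prime searches use far more sophisticated algorithms.

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime if no integer from 2 up to sqrt(n) divides it."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False  # found a divisor other than 1 and n
        i += 1
    return True

print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```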
Cray is a leading supercomputer maker, with IBM and Fujitsu as major
competitors.
A Cray X-MP supercomputer was used to help make the movie 'The Last
Starfighter'. Computer animation wasn't new, but using the X-MP added a
whole new dimension of sophistication. Its most remarkable accomplishment
was creating the entire bridge of the aliens' starship, complete with
animated aliens walking around next to real actors. Because the Cray
could process the image in incredibly fine detail, the average viewer
would think it looked absolutely real. The X-MP allowed animators to make
illusion as convincing as reality itself.
Task 4. Fill in the necessary words:
1. .....are generally classified as general – or special-purpose machine.
2. A special-purpose computer is designed and used .....for one appli-
cation.
3. Personal computer .....on a desktop.
4. Each type of a personal computer .....many characteristics in com-
mon with their counterparts.
5. There are many portables .....today.
6. CPUs, terminals, printers and storage devices can be .....separately.
Task 5. Agree or disagree with the following statements:
1. All computer systems have the same five hardware components.
2. Input/output devices receive data, enter it into the computer for pro-
cessing, and then send it back to people so it can be used.
3. Storage components don’t keep data for later use.
4. Computers are general-purpose machines.
5. The machine may need to be redesigned and certainly repro-
grammed.
6. We can’t say that there are different types of computers.
Task 6. Ask questions to which the following statements might be the
answer:
1. Desktop personal computers are used for education, running a small
business or in large corporation to help office workers be more productive.
2. Laptops fall into the same general categories as desktop personal
computers.
3. The workstation is a computer that fits on a desktop.
4. Workstations are designed for three major tasks.
5. A minicomputer system combined with specialized equipment and
peripherals is designed to perform a specific task.
6. A mainframe uses the same basic building blocks of a computer
system: the CPU, I/O devices and external memory.
Task 7. Answer the following questions:
1. What do all computers have in common?
2. How can we classify computers?
3. What are general /special-purpose computers used for?
4. What are three primary types of personal computers?
5. What is the primary difference between personal computer and
workstation?
6. What are major tasks of a workstation?
7. What is minicomputer used for?
8. What does the supercomputer differ from the general-purpose main-
frame computer?
9. What are two main characteristics of the supercomputer?
Task 8. Find the synonyms to the following words:
a component
a device
to receive
to enter
to keep
to handle
to run
to confine
to fit
terminals
calculation
Task 9.Find the antonyms to the following words:
indirect
monotony
designed
programmed
similar
unlimited
unite
task
slow
odd number
simplicity
Task 10. Give the definitions to the following terms:
1. computer
2. supercomputer
3. special-purpose computer
4. general-purpose computer
5. personal computer
6. minicomputer
7. mainframe
Special reading
It is interesting to know that ...
PCs and PC-compatibles are used in organizations of all sizes. PCs are an
office time saver, allowing the staff to write press releases and
legislative testimony, perform accounting tasks, and prepare mailing
lists more quickly. It also paves the way for an organization to compete
more effectively with other public interest groups. Today, over 80
percent of Public Citizen's employees use PC-compatibles. Word processing
has replaced typewriters, hard disk drive storage has reduced the amount
of paper kept in filing cabinets, and laser printing has cut their
outside printing costs dramatically.
Banks have traditionally used the latest computer technology to automate
their own operations, but First Banks for Business found a way to use
personal computers to improve customer service. In the past, when a
customer wanted to cash a check, the signature card had to be compared to
verify identity. That meant looking through a card file or contacting
central bookkeeping, which could take as long as 30 minutes.
First Banks for Business installed PS/2s with special graphics
capabilities and software called Signet to perform the task. When the
tellers retrieve customer account information from the computer, they see
the authorized signatures appear right on the screen. The system also
tells them what other signatories are permitted on the account, or if two
signatures are required to cash a check. The banks say the main reason
customers change banks is bad service. Using the powerful PS/2s and
Signet, they can cash a customer's check in a minute or less.
People use laptops for many of the same tasks that they use desktops
and more.
Astrophysicists use Sun Microsystems workstations for their engineering
work. They routinely sketch graphs and diagrams on the screen using
computer-aided drafting software, as well as sophisticated calculation
software to test mathematical equations. They also exchange ideas and
information with each other in electronic messages. One project they have
worked on in cooperation with NASA is the Advanced X-Ray Astrophysics
Facility. It is an observatory in space that will measure cosmic X-rays,
which are invisible on earth. The astrophysicists hope that the
information provided will help them understand better how the universe
was formed and what its eventual fate will be.
The Sun workstation performed an additional important task: helping
gather visual and textual information into a comprehensive report for
NASA to explain how an X-ray telescope would function aboard the
observatory. Using electronic publishing software, they combined graphics
screens, mathematical equations, and textual explanations into a document
that took just six hours to prepare. Previously, it would have taken two
days.
UNIT 3
THE OPERATING SYSTEM
1. Read the text about the operating system. Discuss the information.
Identifying the operating system
The OS is a set of software tools that serves as an interface between the
user and the hardware, and ensures efficient use of the computing system
by managing its resources.
OS as an extended machine
Using most computers at the machine language level is difficult, espe-
cially with regard to I/O. For example, to organize a data block reading
from a flexible disk, a programmer can use 16 different commands, each
requiring 13 parameters, such as the block number on the disk, the sector
number on the track, etc. Even if you do not enter into the course of real
problems TH of I/O programming, it is clear that among programmers
there would be not many willing to directly engage in programming these
operations. When you work with a disk, it's enough for a programmer to
present it as a set of files, each with a name. Working with a file is to open
it, read or write it, and then close the file. Issues such as whether advanced
frequency modulation should be used when recording, or in what condition
the engine of the head-reading movement mechanism is now, should not
worry the user. A program that hides from the programmer all the realities
of the equipment and provides the opportunity to easily, conveniently view
these files, read or write - it is, of course, the operating system. Just as the
OS protects programmers from disk storage hardware and provides it with
a simple file interface, the operating system takes on all unpleasant cases
related to interrupt processing, timer and RAM management, as well as
other low-level problems. In each case, the abstract, imaginary machine
that the operating system can now deal with is much easier and more con-
venient to handle than the real hardware behind this abstract machine.
From this point of view, the function of the OS is to provide the user with
some extended or virtual machine, which is easier to program and with
which it is easier to work, than directly with the hardware that makes up the
real machine.
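The "simple file interface" described above is exactly what high-level languages expose to the programmer. A minimal Python sketch (the filename and contents are arbitrary examples):

```python
# The OS hides sectors, tracks and drive mechanics behind open / read /
# write / close. The program below never mentions the hardware at all.

with open("notes.txt", "w") as f:   # open for writing
    f.write("hello, disk\n")        # the OS finds free blocks for us

with open("notes.txt") as f:        # open for reading
    data = f.read()                 # the OS locates those blocks again

print(data)                         # prints the text we wrote
```

Everything about sector layout, head movement and modulation happens below this interface, inside the operating system's drivers.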
OS as a resource management system
The idea that the OS is primarily a system that provides a user-friendly
interface corresponds to a top-down view. Another view, from the bottom
up, presents the OS as a mechanism that controls all parts of a complex
system. Modern computing systems consist of processors, memory, timers,
disks, magnetic tape drives, network communication equipment, printers
and other devices. In line with this second approach, the function of the
OS is to distribute processors, memory, devices and data between the
processes competing for those resources. The OS must manage all of the
computer's resources in a way that maximizes the efficiency of its
operation. Efficiency may be measured, for example, by throughput or
system responsiveness. Resource management includes two general tasks,
independent of the resource type:
Resource planning, that is, determining to whom, when, and (for shared
resources) in what quantity a given resource should be allocated;
Tracking the state of the resource, that is, maintaining up-to-date
information about whether the resource is busy or free and, for shared
resources, how much of the resource has already been allocated and how
much remains free.
Different OSes use different algorithms to solve these common resource
management problems, which ultimately determines their character as a
whole, including performance characteristics, application area, and even
user interface. For example, the processor scheduling algorithm largely
determines whether the OS is a time-sharing system, a batch processing
system, or a real-time system.
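The two bookkeeping tasks above, planning and tracking, can be sketched with a toy allocator. The resource name and sizes below are invented purely for illustration; a real OS does this with far richer policies.

```python
# Toy resource manager: tracks how much of each shared resource is
# allocated and to whom, and refuses requests that exceed capacity.

class ResourceManager:
    def __init__(self, capacity: dict):
        self.capacity = capacity                 # total units per resource
        self.allocated = {r: {} for r in capacity}

    def request(self, user: str, resource: str, amount: int) -> bool:
        """Planning: decide whether this user may get the amount now."""
        used = sum(self.allocated[resource].values())
        if used + amount > self.capacity[resource]:
            return False                         # not enough left
        self.allocated[resource][user] = (
            self.allocated[resource].get(user, 0) + amount)
        return True

    def free_units(self, resource: str) -> int:
        """Tracking: how much of the resource is still unallocated."""
        return self.capacity[resource] - sum(self.allocated[resource].values())

mgr = ResourceManager({"memory_mb": 100})
print(mgr.request("editor", "memory_mb", 60))    # True
print(mgr.request("compiler", "memory_mb", 60))  # False: only 40 left
print(mgr.free_units("memory_mb"))               # 40
```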
2. Read the text about the evolution of the OS and say what each of the
periods is noted for. Do the tasks.
The evolution of the OS
First Period (1945-1955)
It is known that the computer was invented by the English mathematician
Charles Babbage in the first half of the nineteenth century. His
"analytical machine" could never actually be made to work, because the
technologies of that time did not meet the requirements for manufacturing
the precision mechanical parts that were necessary for computing. It is
also known that this computer did not have an operating system.
Some progress in the creation of digital computing machines occurred
after the Second World War. In the mid-40s, the first vacuum-tube
computing devices were created. At that time, the same group of people
was involved in the design, operation, and programming of each computer.
It was more of a research effort in computing than the use of computers
as tools for solving practical tasks in other application areas.
Programming was done exclusively in machine language. There was no
question of operating systems; all the tasks of organizing the computing
process were handled manually by each programmer from the operator's
console. There was no other system software apart from mathematical and
service routines.
Second Period (1955-1965)
In the mid-1950s, a new period in the development of computing technology
began, associated with the emergence of a new technical base:
semiconductor elements. Second-generation computers became more reliable,
and they were now able to run continuously for long enough to be
entrusted with really important tasks. It was during this period that
staff were divided into programmers and operators, maintenance staff and
developers of computing machines.
In these years the first algorithmic languages appeared, and with them
the first system programs - compilers. The cost of processor time
increased, which required reducing the time wasted between program runs.
The first batch processing systems appeared; they simply automated the
launch of one program after another and thus increased the load factor of
the processor. Batch processing systems were the prototype of modern
operating systems: they were the first system programs designed to manage
the computing process. In the course of implementing batch processing
systems, a formalized task management language was developed, with
which the programmer told the system and the operator what work he
wanted to perform on the computer. A collection of several jobs, usually
in the form of a deck of punch cards, was called a batch of jobs.
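The core loop of such a batch monitor can be sketched in a few lines of Python (the job names and durations below are invented for illustration):

```python
from collections import deque

# Toy model of a batch processing system: jobs queue up (like a deck of
# punch cards) and the monitor launches one after another without pauses.
jobs = deque([("job-1", 2), ("job-2", 1), ("job-3", 3)])  # (name, duration)

clock = 0
while jobs:
    name, duration = jobs.popleft()   # take the next job from the batch
    print(f"t={clock}: start {name}")
    clock += duration                 # the job runs to completion
    print(f"t={clock}: {name} done")

print("total processor time used:", clock)
```

The point of the design is visible in the loop: the processor never waits for an operator between jobs, which is exactly how these systems raised the load factor of the processor.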
Third Period (1965 - 1980)
The next important period in the development of computing machines
dates back to 1965-1980. At this time the technical base underwent a
transition from separate semiconductor elements, such as transistors, to
integrated circuits, which gave much greater opportunities to the new,
third generation of computers.
This period is also characterized by the creation of families of
software-compatible machines. The first family of software-compatible
machines built on integrated circuits was the IBM System/360 series.
Introduced in 1964, this family far outperformed second-generation
machines in terms of price/performance. Soon the idea of
software-compatible machines became universally accepted.
Software compatibility also required compatibility of operating
systems. Such operating systems would have to work on large and small
computing systems, with many diverse peripherals, in commercial settings
and in research. Operating systems built to meet all these conflicting
requirements proved to be extremely complex "monsters". They consisted of
many millions of lines of assembly code written by thousands of
programmers and contained thousands of errors, causing a never-ending
stream of fixes. Each new version of the operating system corrected some
errors and introduced others.
However, despite its immense size and many problems, OS/360 and
other third-generation operating systems did meet most consumer
requirements. The most important achievement of this generation of OS
was the implementation of multiprogramming. Multiprogramming is a way of
organizing the computing process in which several programs run
alternately on a single processor. While one program is waiting for an
I/O operation to complete, the processor does not sit idle, as it did
under sequential execution (single-program mode), but runs another
program (multiprogramming mode). Each program is loaded into its own
area of RAM, called a section.
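The idea can be illustrated with a small Python sketch (the program names and timings are invented): while one "program" is blocked on I/O, the processor runs another.

```python
import threading
import time

def io_program(name, seconds):
    # Models a program blocked on an I/O operation (e.g. reading a disk).
    time.sleep(seconds)
    print(f"{name}: I/O complete")

def cpu_program(name, n):
    # Models a program that keeps the processor busy with computation.
    print(f"{name}: sum =", sum(range(n)))

# Program A starts an I/O operation; instead of idling until it finishes,
# the processor switches to program B (multiprogramming mode).
a = threading.Thread(target=io_program, args=("program A", 0.1))
a.start()
cpu_program("program B", 1_000_000)  # runs while A is blocked
a.join()
```
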
Another innovation was spooling. Spooling at the time was defined as
a way of organizing the computing process in which jobs were read from
punch cards onto disk at the rate at which they arrived at the computer
center; when the current job was completed, a new job was loaded from the
disk into the vacated section.
Along with multiprogrammed batch processing systems, a new type of
OS emerged: the time-sharing system. The variant of multiprogramming
used in time-sharing systems aims to create for each individual user the
illusion of sole use of the computer.
Fourth Period (1980 - 1990)
The next period in the evolution of operating systems is associated
with the emergence of large-scale integrated circuits (LSI). During these
years there was a sharp increase in the degree of integration and a fall
in the cost of chips. The computer became available to the individual,
and the era of personal computers began. In terms of architecture,
personal computers were no different from the class of minicomputers such
as the PDP-11, but the price was significantly different. If the
minicomputer had made it possible for a department of an enterprise or a
university to have its own computer, the personal computer made this
possible for an individual.
Computers came to be widely used by non-specialists, which required
the development of "friendly" software and put an end to the closed
caste of programmers.
The operating system market was dominated by two systems: MS-DOS
and UNIX. The single-user MS-DOS was widely used on computers built
around the Intel 8088 microprocessor and, later, the 80286, 80386 and
80486. The multiprogrammed, multi-user UNIX OS dominated the "non-Intel"
environment, especially systems built on high-performance RISC
processors.
In the mid-1980s, networks of personal computers running network or
distributed OS began to develop rapidly.
In a network OS, users must be aware of the presence of other
computers and must log in to another computer to take advantage of its
resources, mostly files. Each machine on the network runs its own local
operating system, which differs from that of a stand-alone computer by
the presence of additional tools that allow the machine to work on the
network.
A network OS has no fundamental differences from the OS of a
single-processor computer. It necessarily contains software support for
network interface devices (a network adapter driver), as well as tools
for logging in to other network computers and for remote file access,
but these additions do not significantly change the structure of the
operating system itself.
Fifth Period (1990 - present)
Linux
Linux was created in 1991 by Linus Torvalds, a Finnish student. The
decisive factor in the fate of Linux was that, immediately after creating
the OS, Linus published its source code on the Internet. At first, as is
partly still true today, it was used mainly by people with sufficient
technical training.
Windows: where would we be without it?
In 1985 the first version of Windows appeared; users did not
appreciate it and largely ignored it, perhaps because it merely
supplemented the capabilities of DOS, being in fact a graphical shell
and add-on running over MS-DOS.
Over time Windows improved more and more: full graphics appeared,
system files were hidden from the user's view, and the multitasking
barrier was overcome, allowing two or three programs to run at once.
With the release of Windows 3.1 in 1992, according to many users and
professionals, the capabilities of the new OS finally came to be
appreciated.
On June 25, 1998, the new OS Windows 98 entered the consumer market.
Its advantages over previous versions included full integration with the
Internet, better interface management, and support for the new Pentium II
processor, the AGP graphics port, and the USB bus.
In parallel with the previous systems, Windows XP was developed, in
which it was finally decided to abandon 16-bit code in the core of the
system and move to a 32-bit core with a new architecture and structure.
Among the advantages of the new system: it was the first with a fully
customizable interface, and it introduced the smart "Start" menu. The PC
control panel was also substantially redesigned.
Windows Vista, the system that appeared after Windows XP, is
considered the least successful of all the previous OS releases; it is
seen as a "dress rehearsal" for Windows 7. It would seem that the good
qualities of the new system should have interested users, but
innovations such as built-in search, the three-dimensional Aero
interface with attractive screensavers, and good protection did not
help: in users' opinion, everything was executed extremely poorly.
Windows 7 is little more than Vista with a new interface. Windows 7
was released in five editions: Starter, Home Basic, Home Premium,
Professional, and Ultimate.
Windows 8, unlike its predecessors Windows 7 and Windows XP, uses a
new interface called Modern (Metro). The system still has a desktop, but
as a separate application.
Mobile OS
More and more user interest is now attracted by smartphones running
various operating systems: Windows Phone, Bada, iOS and others. The most
popular of them are iOS and Android.
iOS
iOS is a mobile operating system developed and produced by Apple,
built on the Unix-based Darwin core it shares with macOS (not, as is
sometimes claimed, on the Linux kernel). It was originally released in
2007 for the iPhone and iPod Touch, and it is now installed on all Apple
mobile devices. Innovations such as the Safari mobile browser, visual
voicemail and the virtual keyboard have made iOS one of the most popular
systems for smartphones.
Android
Android - the system that is the most dynamically developing, designed for
62
smartphones (originally for utilities (iPhone and its touchscreen changed
the opinion of Google)). It is a simplified version of similar Windows and
Linux systems used on desktop PCs and laptops, focused on touchscreen.
The Android platform consists of an operating system, an interface that
connects software and powerful applications.
Google Chrome OS (Cloud OS)
Chrome OS is positioned as an operating system for a variety of de-
vices, from small notebooks to full-size desktop systems, and supports x86
and ARM processor architecture.
The new Google Chrome OS is open source, based on an optimized Linux
kernel and controlled through the Chrome browser. Its main feature is
the dominance of web applications over the usual OS functions; the
browser plays the key role.
The strategy behind the new product is an architecture that makes
minimal demands on the hardware resources of the personal computer used
to access the Internet.
All the applications that the system runs are web services. In fact,
everything that happens on your computer is done on the Internet: there
is no need to install any offline applications. Consequently, working in
Chrome OS does not require a powerful computer, because all processes
run not on the computer itself but on the servers of the corresponding
services.
TASKS
1) Answer the following questions:
a) When was the computer invented?
b) Who was the computer invented by?
c) When were the first lamp computing devices created?
d) What elements were second-generation computers based on?
e) What was a prototype of modern operating systems in the mid-
1950s?
f) What gave much more opportunities to the new, third generation
of computers?
g) What does multiprogramming mean?
h) What does spooling refer to?
i) When did the computer become available to an individual?
j) What two systems was the operating system market dominated by
in the mid-1980s?
k) What should users be aware of in network OS?
l) When was Linux created?
m) What was the first version of Windows characterized by?
n) What were the advantages of the OS Windows 98 compared to
previous versions?
o) What interface does Windows 8 use, unlike Windows 7 and Windows XP?
p) What are the most popular OS used in smartphones? Compare
their advantages and drawbacks.
q) Why is Chrome OS positioned as an operating system for a variety
of devices?
2) Find in the text English equivalents for the following expres-
sions:
- Внедрение мультипрограммирования
- Совместимость программного обеспечения
- Система обработки пакетов
- Огромный размер
- Последовательный запуск программ
- Виртуальная клавиатура
- Полноразмерные настольные системы
- Голосовая почта
- Офлайн-приложения
3) Give Russian equivalents for the following expressions from the
text:
- meet the requirements
- lamp computing devices
- mathematical and service routines
- semiconductor elements
- the first batch processing systems
- formalized task management language
- in terms of price/performance
- multiprogrammed multi-user OS
- make a logical login
Special reading
Task 1
a) read the text and try to answer the questions;
b) discuss the topic with your classmates;
c) make a summary of the text.
1. What is the iPhone OS (iOS)?
2. What is an operating system?
3. Can you multitask in iOS?
4. How much does iOS cost? How often is it updated?
5. How do you update your device to the newest version of iOS?
iOS uses a multi-touch interface in which simple gestures operate the
device, such as swiping your finger across the screen to move to the next
page or pinching your fingers to zoom out. There are more than 2 million
iOS apps available for download in the Apple App Store, the most popular
app store of any mobile device.
Much has changed since the first release of iOS with the iPhone in
2007.
In the simplest terms, an operating system is what lies between you
and the physical device. It interprets the commands of software applica-
tions (apps), and it gives those apps access to features of the device, such as
the multi-touch screen or the storage.
Mobile operating systems like iOS differ from most other operating
systems because they put each app in its own protective shell, which keeps
other apps from tampering with them. This design makes it impossible for a
virus to infect apps on a mobile operating system, although other forms of
malware exist. The protective shell around apps also poses limitations be-
cause it keeps apps from directly communicating with one another.
Multitasking in iOS is straightforward.
Apple added a form of limited multitasking soon after the release of
the iPad. This multitasking allowed processes such as those playing music
to run in the background. It also provided fast app-switching by keeping
portions of apps in memory even when they weren't in the foreground.
Apple later added features that allow some iPad models to use slide-
over and split-view multitasking. Split-view multitasking splits the screen
in half, allowing you to run an individual app on each side of the screen.
Apple does not charge for updates to the operating system. Apple also
gives away two suites of software products with the purchase of iOS devic-
es: The iWork suite of office apps which includes a word processor,
spreadsheet, and presentation software and the iLife suite, which includes
video-editing software, music-editing and creation software, and photo-
editing software. Apple apps like Safari, Mail, and Notes are shipped with
the operating system by default.
Apple releases a major update to iOS once a year with an announce-
ment at Apple's developer conference in early summer. It is followed by a
release in early fall that is timed to coincide with the announcement of the
most recent iPhone and iPad models. These free releases add major features
to the operating system. Apple also issues bug fix releases and security
patches throughout the year.
The easiest way to update your iPad, iPhone, or iPod Touch is to use
the scheduling feature. When a new update is released, the device asks if
you want to update it at night. Simply tap Install Later on the dialog box
and remember to plug in your device before you go to bed.
You can also install the update manually by going into the device's
settings, selecting General from the left-side menu and then select-
ing Software Update. This menu takes you to a screen where you can
download the update and install it on the device. The only requirement is
that your device must have enough storage space to complete the process.
Task 2. Read the text about interfaces and do the tasks.
Interface
A user interface (UI for short) is an interface through which a
person can control software or hardware. UIs should be easy to use, so
that interaction with them happens at the most intuitive level. Graphical
software interfaces are also called graphical user interfaces (GUIs).
Development stages and types of user interfaces
Unlike today's machines, early computers were too weak for graphical
user interfaces. At the very beginning, therefore, people could use only
the command line (CLI, or command-line interface), in which commands
were entered as typed requests. This later evolved into text-based
interfaces (TUIs), which are still used today, for example when
installing operating systems. The growing availability of computers led
to the need for a user-friendly interface.
The graphical user interface is a type of interface that is firmly en-
trenched alongside the ever-increasing performance of PCs. In the near fu-
ture, audio user interfaces (VUI or voice user interface) may appear that
will allow people to interact with a computer through speech.
Various computer games use a natural user interface (NUI). Its system
analyzes a person's movements and converts them into movements in the
game. At the moment a perceptual user interface (PUI) and a
brain-computer interface (BCI) are under development. The latter
development aims to give people the ability to control computers with
the power of thought.
Graphical User Interface (GUI)
The graphical user interface is the most popular UI. It is a window that
contains various controls. User interaction with the program is provided by
using the mouse and keyboard.
It is also possible to use buttons and menu sections located inside the
application itself. This window is like a gateway between the user and the
software. Typical controls are common in the graphical user interface.
They allow you to standardize the process of interacting with different pro-
grams on different operating systems.
1. The real world as a model.
When developing the first graphical user interface, elements of the re-
al world were taken as a basis: a trash can, a folder, an image of a floppy
disk as a save button. Today, many icons are considered obsolete, but are
still used.
Even when using modern images and icons, designers try at least
minimally to reflect their purpose. This makes it easier to interact
with the interface intuitively. The purpose of the GUI is to let people
easily identify the purpose of each button; thanks to this, we do not
have to remember all the commands, as was the case with the command
line.
2. Instructions and rules.
When designing a GUI, certain sets of rules are applied to help make
programs easier to use. An example is the eight golden rules of Ben
Shneiderman. Below are some excerpts from these rules:
• Consistency: interaction should always happen in the same way.
That is, you should avoid using control panels with options such as “copy
selection”, “delete selection”, “add selection”. This example shows a lack
of consistency in the GUI, which should be avoided;
• Informative feedback: all actions performed by the user should be
supported by feedback. For example, if a double click opens a program,
then a person has to wait a couple of seconds before he can use this pro-
gram. In order for the user to know that his actions have brought a result,
you need to inform him about it. This can be done by changing the cursor.
One of the oldest and most familiar examples is the hourglass cursor in
Windows;
• Do not overload user memory: users cannot remember everything at
once. In long segments of interaction, where the user is forced to navigate
through multiple windows, information should always be displayed in the
same area. Less requested information, which was displayed at the very be-
ginning, should be hidden.
Voice user interface (VUI, or audio user interface)
In this type of user interface, the interaction between the user and the
computer takes place through voice. For example, a user can verbally select
a person from a previously compiled contact list and place a call. Speech-
to-text interpretation and speech recognition programs also use audio inter-
faces.
The advantage of this form of interaction is that users don't need any-
thing other than their voice. Text input on devices is usually complicated
by small keyboards (on smartphones with small screens), and many often
find it easier to dictate the text of a message.
Examples include Apple's voice assistant Siri, Samsung's S-Voice,
and Google's voice search. One of the main challenges in designing this
type of user interface is to provide a comfortable environment for the
audience to interact with: when using voice synthesizers in technical
support, for example, it is important not to burden customers with long
messages.
Tactile user interfaces (TUI or tangible user interface)
In them, interaction occurs through the use of balls or other physical
objects. Today this type of interface is rarely used in everyday life. If the
work computer is constantly on the same table, the use of tactile interfaces
takes on new meaning, but more often than not they are simply not appli-
cable in everyday life. Museums and exhibitions are a great example of
how TUI can be applied.
Physical interaction is remembered better than any other. In addition,
tactile interfaces give scope for the implementation of objects: shape, tex-
ture, color. From a sandbox with wooden cubes to a magnifying glass for
images, almost anything is possible.
Natural user interface (NUI or natural user interface)
Natural user interface aims to provide the user with a natural and in-
tuitive experience with the device or software. At the same time, the inter-
face itself will be visible, for example, on a touch screen. With the help of
NUI, user commands are entered using gestures and touches.
This type of user interface can also be combined with VUI. The direct
response of the device makes interactions more natural than mouse or key-
board input. In addition to touch devices, NUI can also be used in game
consoles.
For example, the Nintendo Wii allows for on-screen action by moving
the controller with your hand. Other examples include the Kinect add-on
for the Xbox, which allows you to control a game character on the screen
with your own body movements. That makes the interaction more natural.
Perceptual user interface (PUI or perceptual user interface)
Perceptual user interface is an interface that is controlled by human
perception. Today it is still under development. PUI, in theory, should
combine the capabilities of GUI and VUI, and also be able to recognize
gestures to interact with a computer. The integration of visual and auditory
perception of gestures and sounds should allow PUI to provide users with
the maximum level of perception and naturalness.
Brain-computer interface (BCI)
This user interface uses the human brain as the command source. To-
day this technology has reached a high level of development. Electrodes
are used to measure brain waves, after which the information obtained is
decoded by various algorithms. This allows you to control robotic limbs.
This type of interaction is a great advantage for people with disabilities.
TASKS
1) Match the words and word combinations with their Russian equivalents:
graphical user interface      Определенный набор правил
alongside                     Интерфейс командной строки
command line interface        Измерять мозговые волны
to interact with a computer   наряду
brain-computer interface      сложный
certain set of rules          Перегружать память пользователя
informative feedback          Интуитивный уровень
verbally select               Взаимодействовать с компьютером
perceptual user interface     Интерфейс мозг-компьютер
complicated                   Выбирать устно
to measure brain waves        Отсутствие последовательности
to overload user memory       Удобный интерфейс
a lack of consistency         Графический интерфейс пользователя
user-friendly interface       Информативная обратная связь
intuitive level               Перцептивный пользовательский интерфейс
2) Translate into English:
Ценность оптимизации под поисковые системы
В разработке графического интерфейса пользователя GUI и
сайта есть как схожие моменты, так и различия. Например, посе-
титель пользуется навигацией по сайту. Он выбирает определенный
путь сквозь структуру страниц. В графическом интерфейсе разра-
ботчик может контролировать, какие пункты будут доступны
пользователю в тот или иной момент. Если функция недоступна,
разработчик может скрыть эту опцию.
В случае с сайтом у пользователя всегда есть возможность вер-
нуться назад на страницу. Следовательно, навигацию также необхо-
димо учитывать при создании сайта. Иерархия страниц должна
быть максимально прозрачной и продуманной. Если ваш сайт состо-
ит из нескольких уровней, то логично использовать навигацию типа
“хлебные крошки”.
Люди используют программы уже достаточно долгое время.
Следовательно, мы уже привыкли к большинству стандартных эле-
ментов любого графического интерфейса. Сайты же появились от-
носительно недавно.
Веб-дизайнеры должны стараться продумать опыт взаимодей-
ствия с пользователем на максимальном уровне, и руководствовать-
ся при этом проверенными практиками. Например, меню навигации
лучше всего располагать в левом верхнем углу. Как вебмастер вы
должны убедиться, что все элементы легкодоступны любому посе-
тителю. Это сделает ваш сайт удобным для использования.
Why do we need an operating system?
Operating system, its Functions and Characteristics
Operating System (OS) is one of the core software programs that runs
on the hardware and makes it usable for the user to interact with the hard-
ware so that they can send commands (input) and receive results (output). It
provides a consistent environment for other software to execute commands.
So we can say that the OS acts as the center through which the
system hardware, other software, and the user communicate. The following
figure shows the basic working of the operating system and how it
utilizes different hardware resources.
Figure: Operating system working as a core part
An operating system serves many functions, but I will discuss the
major functions that all operating systems have.
Basic Functions of the Operating system
The five key functions of any operating system are the following:
1. Interface between the user and the hardware: An OS provides
an interface between user and machine. This interface can be a graphical
user interface (GUI), in which users click onscreen elements to interact
with the OS, or a command-line interface (CLI), in which users type
commands to tell the OS what to do.
Figure: GUI vs CLI
2. Coordinate hardware components:
An OS enables coordination of hardware components. Each hardware
device speaks a different language, but the operating system can talk to
all of them through specific translation software called device drivers.
Each hardware component has different drivers for different operating
systems. These drivers make communication between the other software and
the hardware successful.
Figure: Device Drivers in between OS and Hardware devices
3. Provide an environment for software to function: An OS provides
an environment for software applications to function. Application
software is software used to perform a specific task. In GUI operating
systems such as Windows and macOS, applications run within a consistent,
graphical desktop environment.
4. Provide structure for data management: An OS provides a structure
of directories for data management. We can view file and folder listings
and manipulate those files and folders (move, copy, rename, delete, and
many others).
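These directory operations can be exercised from any language; here is a minimal Python sketch using only the standard library (the file and folder names are made up for illustration):

```python
import shutil
from pathlib import Path

# Create a folder, then exercise the basic file-management operations
# an OS exposes: create, list, rename, copy, delete.
base = Path("demo_folder")
base.mkdir(exist_ok=True)

note = base / "notes.txt"
note.write_text("hello")                       # create a file
print([p.name for p in base.iterdir()])        # list the folder

renamed = note.rename(base / "notes_old.txt")  # rename
shutil.copy(renamed, base / "notes_copy.txt")  # copy
(base / "notes_copy.txt").unlink()             # delete the copy

shutil.rmtree(base)                            # clean up the folder
```
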
5. Monitor system health and functionality: An OS monitors the
health of the system hardware, giving us an idea of how well (or how
poorly) it is performing. We can see how busy the CPU is, how quickly
the hard drives retrieve data, or how much data the network card is
sending; the OS also monitors system activity for malware.
Figure: Performance Monitor in Windows
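A rough feel for this kind of monitoring can be had from Python's standard library alone (a real performance monitor reports far more detail than this sketch):

```python
import os
import shutil
import time

# Minimal system snapshots using only the standard library.
print("logical CPUs:", os.cpu_count())

disk = shutil.disk_usage(".")                 # total/used/free in bytes
print(f"disk free: {disk.free / 2**30:.1f} GiB")

t0 = time.perf_counter()
sum(range(1_000_000))                         # a small CPU workload
print(f"workload took {time.perf_counter() - t0:.3f} s")
```
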
Operating System Characteristics
Operating systems differ according to three primary characteristics:
licensing, software compatibility, and complexity.
Licensing
In terms of licensing, there are basically three kinds of operating
systems: open-source, free, and commercial.
Linux is an open-source operating system, which means that anyone
can download and modify it; Ubuntu is one example.
A free OS doesn’t have to be open source. They are free to download
and use but cannot modify them. For example, Google owns Chrome OS
and makes it free to use.
Commercial operating systems are privately owned by companies that
charge money for them. Examples include Microsoft Windows and Apple
macOS. These require paying for the right (or license) to use their Operating
systems.
Software Compatibility
Software may be compatible or incompatible across different versions
of the same type of operating system, but it cannot be compatible with
other OS types. Every OS type has its own software compatibility.
Complexity
Operating systems basically come in two editions: 32-bit and 64-bit.
The 64-bit edition of an operating system makes the best use of random
access memory (RAM). A computer with a 64-bit CPU can run either a
32-bit or a 64-bit OS, but a computer with a 32-bit CPU can run only a
32-bit OS.
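Whether a given environment is 32-bit or 64-bit can be checked programmatically; a small Python sketch:

```python
import platform
import struct
import sys

# The size of a pointer tells us the "bitness" of the running process.
bits = struct.calcsize("P") * 8
print(f"this process is {bits}-bit")

# sys.maxsize is 2**63 - 1 on a 64-bit build, 2**31 - 1 on a 32-bit one.
print("64-bit interpreter:", sys.maxsize > 2**32)
print("machine architecture:", platform.machine())
```
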
Do the quiz:
1. A __________________ consists of a set of programs which
controls, coordinates and supervises the activities of the various
components of a computer.
a. operating system
b. application software
c. both (a) and (b)
d. none of the above
2. A ______________ is a program which acts as an interface between
the user and the hardware.
a. operating system
b. application software
c. both (a) and (b)
d. none of the above
3. ________________ is the bootstrapping process that starts the OS
when a computer is switched on; the OS gets loaded from the hard disk
into main memory.
a. executing
b. fetching
c. booting
d. none of the above
4. When a computer is turned on after it has been completely shut
down, it is called _______________.
a. cold booting
b. warm booting
c. both (a) and (b)
d. none of the above
5. When a computer is restarted by pressing the Ctrl+Alt+Del key
combination or by the restart button, it is called _____________.
a. cold booting
b. warm booting
c. both (a) and (b)
d. none of the above
6. ______________ is a type of operating system which allows one
user at a time.
a. Single User Operating System
b. Multi-User Operating System
c. Real Time Operating System
d. Embedded Operating System
7. Some operating systems use ______________ multitasking to
prevent any one process from monopolizing the computer's resources.
a. preemptive
b. non-preemptive
c. both (a) and (b)
d. none of the above
8. ______________ is the first program run on a computer when the
computer boots up.
a. processing system
b. application software
c. operating system
d. none of the above
9. BIOS stands for _____________.
a. Bias Integrated Output System
b. Bias Integrated Operator System
c. Basic Integrated Output System
d. Basic Input Output System
10. Which process checks to ensure that the components of the
computer are operating and connected properly?
a. processing
b. saving
c. booting
d. editing
UNIT 4
CPU
1. Read the text and do the tasks:
A central processing unit (CPU), also called a central proces-
sor, main processor or just processor, is the electronic circuitry within
a computer that executes instructions that make up a computer program.
The CPU performs basic arithmetic, logic, controlling,
and input/output (I/O) operations specified by the instructions in the pro-
gram. The computer industry used the term "central processing unit" as ear-
ly as 1955. Traditionally, the term "CPU" refers to a processor, more spe-
cifically to its processing unit and control unit (CU), distinguishing these
core elements of a computer from external components such as main
memory and I/O circuitry.
The form, design, and implementation of CPUs have changed over the
course of their history, but their fundamental operation remains almost un-
changed. Principal components of a CPU include the arithmetic logic
unit (ALU) that performs arithmetic and logic operations, processor regis-
ters that supply operands to the ALU and store the results of ALU opera-
tions, and a control unit that orchestrates the fetching (from memory) and
execution of instructions by directing the coordinated operations of the
ALU, registers and other components.
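The control unit's fetch-and-execute cycle can be sketched as a toy interpreter; the three-instruction "machine language" below is invented purely for illustration:

```python
# A toy fetch-decode-execute loop. The control unit fetches the next
# instruction, decodes it, and directs the "ALU" (plain arithmetic here)
# and the accumulator register accordingly.
program = [("LOAD", 7), ("ADD", 5), ("ADD", 3), ("HALT", 0)]

acc = 0   # accumulator register
pc = 0    # program counter
while True:
    op, arg = program[pc]   # fetch from "memory"
    pc += 1
    if op == "LOAD":        # decode and execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print("accumulator:", acc)
```
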
Most modern CPUs are microprocessors, where the CPU is contained
on a single metal-oxide-semiconductor (MOS) integrated circuit (IC) chip.
An IC that contains a CPU may also contain memory, peripheral interfaces,
and other components of a computer; such integrated devices are variously
called microcontrollers or systems on a chip (SoC). Some computers em-
ploy a multi-core processor, which is a single chip or "socket" containing
two or more CPUs called "cores".
Array processors or vector processors have multiple processors that
operate in parallel, with no unit considered central. Virtual CPUs are
an abstraction of dynamically aggregated computational resources.
The Central Processing Unit (CPU) has the following features −
CPU is considered as the brain of the computer.
CPU performs all types of data processing operations.
It stores data, intermediate results, and instructions (program).
78
It controls the operation of all parts of the computer.
CPU itself has following three components.
Memory or Storage Unit
Control Unit
ALU (Arithmetic Logic Unit)
Memory or Storage Unit
This unit can store instructions, data, and intermediate results. This
unit supplies information to other units of the computer when needed. It is
also known as internal storage unit or the main memory or the primary
storage or Random Access Memory (RAM).
Its size affects speed, power, and capability. Primary memory and
secondary memory are two types of memories in the computer. Functions
of the memory unit are:
It stores all the data and the instructions required for processing.
It stores intermediate results of processing.
It stores the final results of processing before these results are re-
leased to an output device.
All inputs and outputs are transmitted through the main memory.
Control Unit
This unit controls the operations of all parts of the computer but does
not carry out any actual data processing operations.
The functions of this unit are:
It is responsible for controlling the transfer of data and instructions
among other units of a computer.
It manages and coordinates all the units of the computer.
It obtains the instructions from the memory, interprets them, and
directs the operation of the computer.
It communicates with Input/Output devices for transfer of data or
results from storage.
It does not process or store data.
ALU (Arithmetic Logic Unit)
This unit consists of two subsections, namely:
Arithmetic Section
Logic Section
Arithmetic Section
The function of the arithmetic section is to perform arithmetic opera-
tions like addition, subtraction, multiplication, and division. All complex
operations are done by making repetitive use of these basic operations.
Logic Section
The function of the logic section is to perform logic operations such
as comparing, selecting, matching, and merging data.
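The remark that complex operations are built by repetitive use of the basic ones can be illustrated in Python, with multiplication reduced to repeated addition and a simple comparison standing in for the logic section (a toy sketch, not a model of real ALU circuitry):

```python
# Arithmetic section: multiplication built from repeated addition only.
def multiply_by_addition(a, n):
    """Multiply a by a non-negative integer n using only addition."""
    total = 0
    for _ in range(n):
        total = total + a     # only the basic "add" operation is used
    return total

# Logic section: a simple comparison of two values.
def compare(a, b):
    if a == b:
        return "equal"
    return "greater" if a > b else "less"

print(multiply_by_addition(6, 7))   # 42
print(compare(6, 7))                # less
```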
TASKS
1) Answer the following questions:
a) What is the CPU?
b) What operations does it perform?
c) What three components does the CPU consist of?
d) What are the main functions of the memory unit?
e) What are the main functions of the control unit?
f) What are the main functions of the arithmetic logic unit?
2) Retell the text using the previous exercise.
2. Read and discuss the text. Why do we need parallel computing? Do
the tasks.
Parallel computing
The description of the basic operation of a CPU describes the simplest
form that a CPU can take. This type of CPU, usually referred to
as subscalar, operates on and executes one instruction on one or two pieces
of data at a time, that is less than one instruction per clock cycle (IPC < 1).
This process gives rise to an inherent inefficiency in subscalar CPUs.
Since only one instruction is executed at a time, the entire CPU must wait
for that instruction to complete before proceeding to the next instruction.
As a result, the subscalar CPU gets "hung up" on instructions which take
more than one clock cycle to complete execution. Even adding a sec-
ond execution unit does not improve performance much; rather than one
pathway being hung up, now two pathways are hung up and the number of
unused transistors is increased. This design, wherein the CPU's execution
resources can operate on only one instruction at a time, can only possibly
reach scalar performance (one instruction per clock cycle, IPC = 1). How-
ever, the performance is nearly always subscalar (less than one instruction
per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a
variety of design methodologies that cause the CPU to behave less linearly
and more in parallel. When referring to parallelism in CPUs, two terms are
generally used to classify these design techniques:
instruction-level parallelism (ILP), which seeks to increase the
rate at which instructions are executed within a CPU (that is, to increase the
use of on-die execution resources);
task-level parallelism (TLP), which aims to increase the num-
ber of threads or processes that a CPU can execute simultaneously.
Each methodology differs both in the way it is implemented and in the
relative effectiveness it affords in increasing the CPU's performance for an
application.
Data parallelism
A less common but increasingly important paradigm of processors
(and indeed, computing in general) deals with data parallelism. The proces-
sors discussed earlier are all referred to as some type of scalar device. As
the name implies, vector processors deal with multiple pieces of data in the
context of one instruction. This contrasts with scalar processors, which deal
with one piece of data for every instruction. These two schemes of dealing
with data are generally referred to as single instruction stream, multiple da-
ta stream (SIMD) and single instruction stream, single data stream (SISD),
respectively. The great utility in creating processors that deal with vectors
of data lies in optimizing tasks that tend to require the same operation (for
example, a sum or a dot product) to be performed on a large set of data.
Some classic examples of these types of tasks in-
clude multimedia applications (images, video and sound), as well as many
types of scientific and engineering tasks. Whereas a scalar processor must
complete the entire process of fetching, decoding and executing each in-
struction and value in a set of data, a vector processor can perform a single
operation on a comparatively large set of data with one instruction. This is
only possible when the application tends to require many steps which apply
one operation to a large set of data.
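The SISD/SIMD contrast can be illustrated in plain Python: the scalar loop applies one operation to one data item at a time, while the vector-style function applies a single operation to the whole data set (a conceptual sketch only; real SIMD hardware uses vector registers, not Python lists):

```python
data = [1.0, 2.0, 3.0, 4.0]

# SISD style: one instruction operates on one piece of data at a time.
scaled_sisd = []
for x in data:                        # one "instruction" per element
    scaled_sisd.append(x * 2.0)

# SIMD style, conceptually: one operation over the whole vector.
def vector_scale(vec, factor):
    return [x * factor for x in vec]  # stands in for one vector instruction

scaled_simd = vector_scale(data, 2.0)
print(scaled_simd)                    # [2.0, 4.0, 6.0, 8.0]
```

Both produce the same result; the point is how many instruction issues the scalar path needs compared with the single vector operation.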
Most early vector processors were associated almost exclusively with
scientific research and cryptography applications. However, as multimedia
has largely shifted to digital media, the need for some form of SIMD in
general-purpose processors has become significant. Shortly after inclusion
of floating-point units started to become commonplace in general-purpose
processors, specifications for and implementations of SIMD execution
units also began to appear for general-purpose processors. Some of these
early SIMD specifications - like HP's Multimedia Acceleration eXten-
sions (MAX) and Intel's MMX - were integer-only. This proved to be a
significant impediment for some software developers, since many of the
applications that benefit from SIMD primarily deal with floating-
point numbers. Progressively, developers refined and remade these early
designs into some of the common modern SIMD specifications, which are
usually associated with one ISA. Some notable modern examples include
Intel's SSE and the PowerPC-related AltiVec (also known as VMX).
Virtual central processing unit
Cloud computing can involve subdividing CPU operation into vCPUs.
A host is the virtual equivalent of a physical machine, on which a virtual
system is operating. When there are several physical machines operating in
tandem and managed as a whole, the grouped computing and memory re-
sources form a cluster. In some systems, it is possible to dynamically add
hosts to and remove them from a cluster. Resources available at a host and cluster level
can be partitioned out into resource pools with fine granularity.
TASKS
1) Answer the following questions:
a) What design techniques are generally used in parallel computing?
b) What is the main function of instruction-level parallelism?
c) What is the main function of task-level parallelism?
d) In what ways do they differ?
e) What is the difference between a scalar processor and a vector
processor?
f) How can cloud computing change the CPU operation?
Words and word combinations. Match the English expressions with
their Russian equivalents:
1. the performance of a processor
2. the instructions per second
3. an artificial instruction sequence
4. a benchmark
5. an integrated circuit
6. to handle numerous asynchronous events
7. to overwhelm
8. to take a toll
9. simultaneous multithreading
10. accessible to software
11. to involve sharing resources
12. a multi-core processor
13. the memory hierarchy
14. an execution rate
15. the imperfect software algorithm
a) доступный для программного обеспечения
b) преодолевать
c) многоядерный процессор
d) производительность процессора
e) включать общие ресурсы
f) скорость выполнения
g) несовершенный алгоритм программного обеспечения
h) одновременная многопоточность
i) ориентир
j) обрабатывать множество асинхронных событий
k) иерархия памяти
l) оказывать плохое влияние
m) интегральная схема
n) инструкции в секунду
o) искусственная последовательность инструкций
2) Summarize the text and render the summary into Russian.
Computer performance and Benchmark (computing)
The performance or speed of a processor depends on, among many
other factors, the clock rate (generally given in multiples of hertz) and the
instructions per clock (IPC), which together are the factors for
the instructions per second (IPS) that the CPU can perform. Many reported
IPS values have represented "peak" execution rates on artificial instruction
sequences with few branches, whereas realistic workloads consist of a mix
of instructions and applications, some of which take longer to execute than
others. The performance of the memory hierarchy also greatly affects pro-
cessor performance, an issue barely considered in MIPS calculations. Be-
cause of these problems, various standardized tests, often
called "benchmarks" for this purpose—such as SPECint—have been devel-
oped to attempt to measure the real effective performance in commonly
used applications.
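The relationship described above, instructions per second as the product of clock rate and instructions per clock, can be worked through with made-up numbers (the 3 GHz clock and the IPC of 1.5 below are assumed values, not benchmark results):

```python
# IPS = clock rate (cycles per second) * IPC (instructions per cycle).
clock_rate_hz = 3.0e9    # assumed 3 GHz clock
ipc = 1.5                # assumed average instructions per clock

ips = clock_rate_hz * ipc
print(f"{ips:.2e} instructions per second")   # 4.50e+09
print(f"{ips / 1e6:.0f} MIPS")                # 4500 MIPS
```

As the text points out, such a figure is a peak rate; realistic workloads and the memory hierarchy lower it, which is why benchmarks such as SPECint are used instead.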
Processing performance of computers is increased by using multi-core
processors, which essentially is plugging two or more individual processors
(called cores in this sense) into one integrated circuit. Ideally, a dual core
processor would be nearly twice as powerful as a single core processor. In
practice, the performance gain is far smaller, only about 50%, due to imper-
fect software algorithms and implementation. Increasing the number of
cores in a processor (i.e. dual-core, quad-core, etc.) increases the workload
that can be handled. This means that the processor can now handle numer-
ous asynchronous events, interrupts, etc. which can take a toll on the CPU
when overwhelmed. These cores can be thought of as different floors in a
processing plant, with each floor handling a different task. Sometimes,
these cores will handle the same tasks as cores adjacent to them if a single
core is not enough to handle the information.
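One common way to reason about why a second core yields only about a 50% gain is Amdahl's law (the law itself is not named in the text, and the serial fraction of one third below is an assumed figure chosen so that the dual-core speedup matches the 50% the text mentions):

```python
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n_cores)
def amdahl_speedup(serial_fraction, n_cores):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# With an assumed one third of the work serial, two cores give ~1.5x,
# and doubling the cores again does not double the speedup.
print(round(amdahl_speedup(1/3, 2), 2))   # 1.5
print(round(amdahl_speedup(1/3, 4), 2))   # 2.0
```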
Due to specific capabilities of modern CPUs, such as simultaneous
multithreading and uncore, which involve sharing of actual
CPU resources while aiming at increased utilization, monitoring perfor-
mance levels and hardware use gradually became a more complex task. As
a response, some CPUs implement additional hardware logic that monitors
actual use of various parts of a CPU and provides various counters accessi-
ble to software; an example is Intel's Performance Counter Moni-
tor technology.
3) Memorize the following definitions:
1. The CPU is the nerve centre of any computer since it coordinates
and controls the activities of all the other units. 2. The arithmetic/logic unit
is that part of the CPU in which the actual computations take place. 3. The
control unit is that part of the CPU which obtains instructions from the
memory, interprets them and generates the control signals. 4. The instruc-
tion decoder is the part of the control unit which interprets or decodes the
instruction. 5. The control generator is the part of the control unit which
generates the control signals. 6. The instruction register is the part of the in-
struction decoder in which the address of current instruction is stored. 7.
The current-address register is the second special memory cell located in
the instruction decoder. 8. The accumulator is the special memory cell in
the arithmetic/logic unit in which the result is formed before it is trans-
ferred back to the memory unit.
4) Read the text. Put questions to it and discuss it with your classmates.
A CPU's clock speed represents how many cycles per second it can
execute. Clock speed is also referred to as clock rate, PC frequency and
CPU frequency. This is measured in gigahertz, which refers to billions of
pulses per second and is abbreviated as GHz. Every computer contains an
internal clock that regulates the rate at which instructions are executed and
synchronizes all the various computer components. The CPU requires a
fixed number of clock ticks (or clock cycles) to execute each instruction.
The faster the clock, the more instructions the CPU can execute per second.
A PC’s clock speed is an indicator of its performance and how rapidly
a CPU can process data (move individual bits). A higher frequency (bigger
number) suggests better performance in common tasks, such as gaming. A
CPU with higher clock speed is generally better if all other factors are
equal, but a mixture of clock speed, how many instructions the CPU can
process per cycle (also known as instructions per clock cycle/clock, or IPC
for short) and the number of cores the CPU has all help determine overall
performance.
Note that clock speed differs from the number of cores a CPU has;
cores help you deal with less common, time-consuming workloads. Clock
speed is also not to be confused with bus speed, which tells you how fast a
PC can communicate with outside peripherals or components, like the
mouse, keyboard and monitor.
Most modern CPUs operate on a range of clock speeds, from the min-
imum "base" clock speed to a maximum "turbo" speed (which is high-
er/faster). When the processor encounters a demanding task, it can raise its
clock speed temporarily to get the job done faster. However, higher clock
speeds generate more heat and, to keep themselves from dangerously over-
heating, processors will "throttle" down to a lower frequency when they get
too warm. A better CPU cooler will lead to higher sustainable speeds.
When buying a PC, its clock speed is a good measurement of perfor-
mance, but it’s not the only one to consider when deciding if a PC is fast
enough for you. Other factors include, again, bus speed and core count, as
well as the hard drive, RAM and SSD (solid-state drive).
You can reach faster clock speeds through a process called overclock-
ing.
5) Match the motherboard components to their descriptions.
SIMMS / ROM / CPU / EXPANSION SLOTS / CACHE MEMORY
1. These are memory chips. The more you have, the more work you
can do at a time. Empty memory slots mean you can add more memory.
2. This is the “brain” of the computer.
3. It’s a part of the memory store. It has extremely fast access. It’s
faster than normal RAM. It can speed up the computer.
4. These let you add features such as sound or a modem to your com-
puter.
5. This kind of memory contains all the instructions your computer
needs to activate itself when you switch on. Unlike RAM, its contents are
retained when you switch off.
6) Study these instructions for replacing the motherboard in a PC.
Put them in the correct order.
1. Add the processor.
2. Fit the new motherboard.
3. Remove the old motherboard.
4. Put it back together.
5. Add the memory. Don’t touch the contacts.
7) Arrange synonyms in pairs:
semiconductor technology; to execute; to write; to control; memory;
to sense; to choose; to form; to feel; storage; to store; to set up;
to handle; solid-state technology; to perform; to keep; to select;
research; to put in; investigation
8) Complete the following sentences:
The arithmetic/logic unit is capable of ... .
The access time is the time required for transmitting one computer
... out of the ... to where it ... .
The actual computations are executed in a central ... .
The part of the control that interprets the instruction is called ... .
The part of the control that generates the control signals is called
... .
The control signals choose the proper numbers from ... and send
them to ... at the proper time.
Let’s check our knowledge
List of questions to be answered:
1. What is an operating system?
2. Where is the operating system held?
3. How is the operating system started up?
4. What are the functions of an operating system?
5. Name two memory management techniques used by the operating
system.
- Describe paging
- Page table segmentation
- Virtual memory
6. What happens when the CPU receives an interrupt?
7. What is the general purpose and function of the CPU?
8. How many parts is the CPU composed of?
9. What is the general purpose of the control?
10. What is the arithmetic/logic unit?
11. What is the instruction decoder?
12. What is the general function of the control generator?
13. What happens in the CPU while an instruction is being executed?
14. What is the accumulator?
15. Where is the accumulator located?
16. Where are the instruction register and the current address register
located?
17. What do you call a unit which:
- interprets instructions?
- senses the interpretation of instructions and produces control sig-
nals?
- performs mathematical and logical operations?
- chooses the proper numbers from the internal memory and sends
them to the arithmetic/logic unit at the proper time?
-obtains instructions from the main memory, interprets them and ac-
complishes the actual operations?
18. When was the first computer designed?
19. What is Konrad Zuse famous for?
20. How many generations of computers exist?
21. What is the most powerful computer?
22. What is a hybrid computer?
23. Who built the first digital computer? What is the name?
24. Which computer can be operated with the touch of the fingers?
25. Which computer is the most expensive?
26. Which is the most powerful type of computer?
27. Which type of computers work on batteries?
28. How many types of computers are there, based on data handling
capability?
29. Is there a full form for COMPUTER?
30. What are the different types of computer?
31. Which was the first computer?
UNIT 5
HOW CAN DATA BE STORED?
Cache memory
Most PCs are held back not by the speed of their main processor, but
by the time it takes to move data in and out of the memory. One of the most
important techniques for getting around this bottleneck is the memory
cache.
The idea is to use a small number of very fast memory chips as a buff-
er or cache between main memory and the processor. Whenever the pro-
cessor needs to read data it looks in this cache area first. If it finds the data
in the cache then this counts as a ‘cache hit’ and the processor need not go
through the more laborious process of reading data from the main memory.
Only if the data is not in the cache does it need to access main memory, but
in the process it copies whatever it finds into the cache so that it is there
ready for the next time it is needed. The whole process is controlled by a
group of logic circuits called the cache controller.
One of the cache controller’s main jobs is to look after ‘cache coher-
ency’, which means ensuring that any changes to data in main memory are
reflected within the cache and vice versa. There are several techniques for
achieving this, the most obvious being for the processor to write directly to
both the cache and the main memory at the same time. This is known as a
‘write-through’ cache and is the safest solution, but also the slowest.
The main alternative is the ‘write-back’ cache which allows the pro-
cessor to write changes only to the cache and not to main memory. Cache
entries that have changed are flagged as ‘dirty’, telling the cache con-
troller to write their contents back to the main memory before using the
space to cache new data. A write back cache speeds up the write process,
but does require a more intelligent cache controller.
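The write-through and write-back policies described above can be sketched as a toy cache in Python (dictionaries stand in for cache lines and main memory, and single values stand in for whole lines; this illustrates the policies, not the design of a real cache controller):

```python
main_memory = {0x10: 0, 0x20: 0}    # a stand-in for main memory

class WriteBackCache:
    def __init__(self):
        self.lines = {}
        self.dirty = set()          # addresses flagged as "dirty"

    def read(self, addr):
        if addr in self.lines:      # cache hit: no memory access needed
            return self.lines[addr]
        value = main_memory[addr]   # cache miss: fetch and keep a copy
        self.lines[addr] = value
        return value

    def write(self, addr, value):
        self.lines[addr] = value    # main memory is NOT updated yet
        self.dirty.add(addr)

    def flush(self, addr):
        """Write a dirty entry back before its space is reused."""
        if addr in self.dirty:
            main_memory[addr] = self.lines[addr]
            self.dirty.discard(addr)

class WriteThroughCache(WriteBackCache):
    def write(self, addr, value):   # safest but slowest policy
        self.lines[addr] = value
        main_memory[addr] = value   # memory is updated at the same time

wb = WriteBackCache()
wb.write(0x10, 42)
print(main_memory[0x10])   # 0: the change lives only in the cache so far
wb.flush(0x10)
print(main_memory[0x10])   # 42
```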
Most cache controllers move a ‘line’ of data rather than just a single
item each time they need to transfer data between main memory and the
cache. This tends to improve the chance of a cache hit, as most programs spend
their time stepping through instructions stored sequentially in memory, ra-
ther than jumping about from one area to another. The amount of data
transferred each time is known as the ‘line size’.
Definition of Cache Memory
Unlike virtual memory, a cache is a storage device implemented on
the processor itself. It holds copies of original data that have been
accessed recently. The original data may be placed in the main memory
or in a secondary memory. The cache memory speeds up access to data,
but how? Let’s see.
We can say that the accessing speed of the CPU is limited by the ac-
cessing speed of the main memory. Whenever a program is to be executed,
the processor fetches it from main memory. If a copy of the program is al-
ready present in the cache, the processor can access that data faster, which
results in faster execution.
Ex. 1. Read the information and correct the following statements.
1. ‘Cache hit’ means ensuring that any changes are reflected within
the cache and vice versa.
2. A write back cache slows down the write process, but does re-
quire a more intelligent cache controller.
3. The whole process is controlled by a group of logic circuits called
a ‘line size’.
4. The ‘write-back’ cache which allows the processor to write chang-
es to the cache and the main memory at the same time.
5. The cache controller’s main job is to lose ‘cache coherency’
6. A ‘write-through’ cache is the safest solution, but also the
fastest.
7. ‘Cache coherency’ means ensuring that any changes aren’t re-
flected within the main memory.
Cloud computing is a term used to describe services provided over a
network by a collection of remote servers. This abstract "cloud" of comput-
ers provides massive, distributed storage and processing power that can be
accessed by any Internet-connected device running a web browser. Cloud
storage is a model of computer data storage in which the digital data is
stored in logical pools, said to be on "the cloud". The physical stor-
age spans multiple servers (sometimes in multiple locations), and the phys-
ical environment is typically owned and managed by a hosting company.
These cloud storage providers are responsible for keeping the da-
ta available and accessible, and the physical environment protected and
running. People and organizations buy or lease storage capacity from the
providers to store user, organization, or application data.
How do you access cloud computing?
Cloud computing is accessed through
an application (e.g., the Dropbox app) on your computer, smartphone, or tablet,
or a website that accesses the cloud through your browser.
Examples of cloud services
If you have spent any time on the Internet or use devices connected to
the Internet, you likely have used cloud computing in some form. Below
are some common examples of cloud computing you have likely heard of
or used.
Ex. 2. Read and decide if these sentences are True or False. If they
are false, correct them.
1. People and organizations buy or lease storage capacity from the
providers to store data.
2. Cloud storage providers are responsible for keeping the da-
ta available and accessible.
3. Cloud storage providers are not in charge of keeping the physical
environment protected and running.
4. Cloud storage is a model of computer data storage in which
the digital data is stored in a physical environment that is typically owned
and managed by a hosting company.
5. Cloud computing is a term describing external peripherals.
6. Cloud of computers provides massive, distributed storage and pro-
cessing power that can be accessed by any Internet-connected device run-
ning a web browser.
7. Cloud computing isn’t accessed through an application.
Magnetic storage
Magnetic storage or magnetic recording is the storage of data on
a magnetized medium. Magnetic storage uses different patterns
of magnetization in a magnetizable material to store data and is a form
of non-volatile memory. The information is accessed using one or
more read/write heads.
Magnetic storage media, primarily hard disks, are widely used to
store computer data as well as audio and video signals. In the field of com-
puting, the term magnetic storage is preferred and in the field of audio and
video production, the term magnetic recording is more commonly used.
The distinction is less technical and more a matter of preference. Other ex-
amples of magnetic storage media include floppy disks, magnetic recording
tape, and magnetic stripes on credit cards.
Ex. 3. Read and decide if these sentences are True or False. If they are
false, correct them.
1. In computer disk storage, a sector is a subdivision of ROM on
a magnetic disk.
2. The information in magnetic storage is accessed using one laser
beam.
3. Magnetic storage media, primarily hard disks, are widely used to
erase computer data as well as audio and video signals.
4. Other examples of optical storage include floppy disks, magnet-
ic recording tape, and magnetic stripes on credit cards.
5. Magnetic disks and optical disks are storage devices that pro-
vide a way to store data for a short time only.
6. The surface of a magnetic tape and the surface of a magnetic disk
are covered with an unknown material which helps in storing the infor-
mation magnetically.
7. Magnetic tape and magnetic disk both are impermanent storage.
8. The basic difference between magnetic tape and magnetic disk is
that magnetic tape is used as primary storage whereas, magnetic disk are
used as secondary storage.
9. Magnetic Tapes are nonvolatile in nature and hence it holds the
large quantity of data permanently.
10. The data transfer speed of the magnetic tape is similar to the
magnetic disk.
11. Each platter surface is divided into circular rings which are further
divided into sectors.
12. Though the disk platter is coated with a protective layer, there is
always a danger that head will make contact with the disk causing head
crash.
13. A head crash is repairable.
Secondary storage
A hard disk drive with protective cover removed
Secondary storage (also known as external memory or auxiliary stor-
age) differs from primary storage in that it is not directly accessible by the
CPU. The computer usually uses its input/output channels to access sec-
ondary storage and transfer the desired data to primary storage. Secondary
storage is non-volatile (retaining data when its power is shut off). Modern
computer systems typically have two orders of magnitude more secondary
storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state
drives (SSDs) are usually used as secondary storage. The access time per
byte for HDDs or SSDs is typically measured in milliseconds (one thou-
sandth of a second), while the access time per byte for primary storage is
measured in nanoseconds (one billionth of a second). Thus, secondary storage
is significantly slower than primary storage. Rotating optical stor-
age devices, such as CD and DVD drives, have even longer access times.
Other examples of secondary storage technologies include USB flash
drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM
disks.
Once the disk read/write head on HDDs reaches the proper placement
and the data, subsequent data on the track are very fast to access. To re-
duce the seek time and rotational latency, data are transferred to and from
disks in large contiguous blocks. Sequential or block access on disks is or-
ders of magnitude faster than random access, and many sophisticated para-
digms have been developed to design efficient algorithms based upon se-
quential and block access. Another way to reduce the I/O bottleneck is to
use multiple disks in parallel in order to increase the bandwidth between
primary and secondary memory.
Secondary storage is often formatted according to a file sys-
tem format, which provides the abstraction necessary to organize data in-
to files and directories, while also providing metadata describing the owner
of a certain file, the access time, the access permissions, and other infor-
mation.
Most computer operating systems use the concept of virtual memory,
allowing utilization of more primary storage capacity than is physically
available in the system. As the primary memory fills up, the system moves
the least-used chunks (pages) to a swap file or page file on secondary stor-
age, retrieving them later when needed. If a lot of pages are moved to
slower secondary storage, the system performance is degraded.
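The page-swapping behaviour described above can be sketched with a least-recently-used eviction policy (a deliberate simplification: the capacity of two pages and the LRU choice are assumptions, and real operating systems use more elaborate page-replacement algorithms):

```python
from collections import OrderedDict

class PrimaryMemory:
    """Toy model: primary memory holds only `capacity` pages; the least
    recently used page is moved to the swap file when memory fills up."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> contents, in LRU order
        self.swap = {}              # stand-in for secondary storage

    def access(self, page_id):
        if page_id in self.pages:                 # page already resident
            self.pages.move_to_end(page_id)
        else:                                     # page fault
            contents = self.swap.pop(page_id, f"page {page_id}")
            if len(self.pages) >= self.capacity:
                victim, data = self.pages.popitem(last=False)
                self.swap[victim] = data          # evict least-used page
            self.pages[page_id] = contents
        return self.pages[page_id]

mem = PrimaryMemory(capacity=2)
mem.access("A"); mem.access("B"); mem.access("A")
mem.access("C")            # memory is full: "B" is swapped out
print(sorted(mem.pages))   # ['A', 'C']
print(sorted(mem.swap))    # ['B']
```

Accessing a swapped-out page later triggers another fault and brings it back, exactly the retrieval step the text describes.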
In modern computers, magnetic storage will take these forms:
Magnetic disk
o Floppy disk, used for off-line storage
The 5.25-inch disks were dubbed "floppy" because the diskette pack-
aging was a very flexible plastic envelope, unlike the rigid case used to
hold today's 3.5-inch diskettes.
o Hard disk drive, used for secondary storage
Magnetic tape, used for tertiary and off-line storage
Magnetic tape, similar to the tape used in tape recorders, has also been
used for auxiliary storage, primarily for archiving data. Tape is cheap, but
access time is far slower than that of a magnetic disk because it is sequen-
tial-access memory—i.e., data must be sequentially read and written as a
tape is unwound, rather than retrieved directly from the desired point on the
tape.
Carousel memory (magnetic rolls)
Ex. 4. Look at the words in the boxes. Are they nouns, verbs, adjectives
or adverbs?
Write n, v, adj or adv next to each word and then complete the sen-
tences below.
magnetize magnetic(2) magnet(2) magnetized (2)
magnetically(2)
magnetizable magnetization
1. The cylinder is the name for several tracks equidistant from the
center of rotation of …………..a disk. These tracks are located above each
other on different disk surfaces.
2. Most digital data today is still stored ………….. on hard disks, or
optically on media such as CD-ROMs.
3. In order to generate enough power... to ………….. Savitar's radial
field and keep him in a state of perpetual stasis...
4. The permanent ………….. is transverse ………………. .
5. The magnetic chips of steel produced by a cutting tool are at-
tractable by a………… .
6. Magnetism is a class of physical phenomena that are mediated
by ………. fields.
7. And those two coils of wire are really, really close to each other,
and actually do transfer power …………….. and wirelessly, only over a
very short distance.
8. Magnetic storage uses magnetization in a ………………..material
to store data and is a form of non-volatile memory.
9. Magnetic storage or magnetic recording is the storage of data on
a …………medium
10. Magnetic storage uses different patterns of ……………. in a mag-
netizable material to store data and is a form of non-volatile memory.
Performance characteristics
The access time or response time of a rotating drive is a measure of
the time it takes before the drive can actually transfer data. The factors that
control this time on a rotating drive are mostly related to the mechanical
nature of the rotating disks and moving heads. It is composed of a few in-
dependently measurable elements that are added together to get a single
value when evaluating the performance of a storage device. The access
time can vary significantly, so it is typically provided by manufacturers or
measured in benchmarks as an average.
The key components that are typically added together to obtain the ac-
cess time are:
Seek time
Rotational latency
Command processing time
Settle time
Seek time
With rotating drives, the seek time measures the time it takes the head
assembly on the actuator arm to travel to the track of the disk where the da-
ta will be read or written. The data on the media is stored in sectors which
are arranged in parallel circular tracks.
Rotational latency
Rotational latency (sometimes called rotational delay or just latency)
is the delay waiting for the rotation of the disk to bring the required disk
sector under the read-write head. It depends on the rotational speed of a
disk (or spindle motor), measured in revolutions per minute (RPM). For
most magnetic media-based drives, the average rotational latency is typi-
cally based on the empirical relation that the average latency in millisec-
onds for such a drive is one-half the rotational period. Maximum rotational
latency is the time it takes to do a full rotation excluding any spin-up time
(as the relevant part of the disk may have just passed the head when the re-
quest arrived).
Maximum latency = 60 / RPM (one full revolution, in seconds)
Average latency = 0.5 × maximum latency
Therefore, the rotational latency and resulting access time can be im-
proved by increasing the rotational speed of the disks. This also has the
benefit of improving the throughput.
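The two formulas can be checked with a short Python sketch; the RPM values below are common spindle speeds, chosen for illustration rather than taken from the text:

```python
def rotational_latency_ms(rpm):
    """Return (maximum, average) rotational latency in milliseconds.

    Maximum latency is one full revolution: 60 s / rpm, converted to ms.
    Average latency is half of that, per the empirical rule above.
    """
    max_ms = 60_000 / rpm          # one full rotation, in milliseconds
    return max_ms, 0.5 * max_ms

# Common spindle speeds (illustrative values)
for rpm in (5400, 7200, 15000):
    max_ms, avg_ms = rotational_latency_ms(rpm)
    print(f"{rpm:>5} RPM: max {max_ms:.2f} ms, avg {avg_ms:.2f} ms")
```

A 7200 RPM drive, for example, comes out at about 8.33 ms maximum and 4.17 ms average rotational latency.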
Data transfer rate
The data transfer rate of a drive (also called throughput) covers both
the internal rate (moving data between the disk surface and the controller
on the drive) and the external rate (moving data between the controller on
the drive and the host system). The measurable data transfer rate will be the
lower (slower) of the two rates. The sustained data transfer rate or sustained
throughput of a drive will be the lower of the sustained internal and sus-
tained external rates. The sustained rate is less than or equal to the maxi-
mum or burst rate because it does not have the benefit of any cache or buff-
er memory in the drive. The internal rate is further determined by the media
rate, sector overhead time, head switch time, and cylinder switch time.
Media rate
It is the rate at which the drive can read bits from the surface of the
media.
Sector overhead time
Additional time (bytes between sectors) needed for control structures
and other information necessary to manage the drive, locate and validate
data and perform other support functions.
Head switch time
Additional time required to electrically switch from one head to an-
other, re-align the head with the track and begin reading; only applies to
multi-head drive and is about 1 to 2 ms.
Cylinder switch time
Additional time required moving to the first track of the next cylinder
and beginning reading; the name cylinder is used because typically all the
tracks of a drive with more than one head or data surface are read before
moving the actuator. This time is typically about twice the track-to-track
seek time. As of 2001, it was about 2 to 3 ms.
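A minimal sketch of how the elements above combine into a single access-time figure, and how the sustained transfer rate follows from the internal and external rates. All numeric values are assumed, illustrative figures, not measurements from the text:

```python
def access_time_ms(seek_ms, rotational_latency_ms, command_ms, settle_ms):
    """Sum the independently measurable elements into a single access time."""
    return seek_ms + rotational_latency_ms + command_ms + settle_ms

# Illustrative figures for a 7200 RPM drive (assumed): average seek 9 ms,
# average rotational latency 60000/7200/2 ≈ 4.17 ms, and command
# processing and settle times well under a millisecond.
t = access_time_ms(seek_ms=9.0, rotational_latency_ms=4.17,
                   command_ms=0.3, settle_ms=0.1)
print(f"estimated average access time: {t:.2f} ms")

# The measurable transfer rate is the lower (slower) of the two rates.
internal_mb_s, external_mb_s = 180.0, 600.0   # assumed example values
sustained = min(internal_mb_s, external_mb_s)
print(f"sustained transfer rate: {sustained:.0f} MB/s")
```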
Ex.5. Complete the text with the words from the box.
retrieve    tracks    revolutions    magnetic    magnetized    information
Auxiliary computer memories using a (1) ………………. drum oper-
ate somewhat like tape and disk units. They store data in the form of
(2)…………………spots in adjacent circular tracks on the surface of a
metal cylinder. A single drum may carry from one to 200
(3)……………….. . Data are recorded and read by heads positioned near
the surface of the drum as the drum rotates at about 3,000
(4)……………………….per minute. Drums provide rapid, random access
to stored (5)………………. . They are able to (6)………………. infor-
mation faster than tape and disk units, but cannot store as much data as ei-
ther of them.
Ex. 6. Read the information and correct the following statements.
1. Data transfer rate measures the time it takes the head assembly on
the actuator arm to travel to the track of the disk.
2. Rotational latency and resulting access time can be improved by
decreasing the rotational speed of the disks.
3. Cylinder switch time is additional time required to electrically
switch from one head to another.
4. Head switch is additional time (bytes between sectors) needed for
control structures and other information necessary to manage the drive
5. Secondary storage (also known as external memory or auxiliary
storage) differs from primary storage in that it is directly accessible by the
CPU.
6. Secondary storage is volatile (retaining data when its power is shut
off).
7. Primary storage is significantly slower than secondary storage.
8. Rotating optical storage devices, such as CD and DVD drives,
have even faster access times.
9. Access time is the delay waiting for the rotation of the disk to
bring the required disk sector under the read-write head.
Optical disc
In computing and optical disc recording technologies, an optical
disc (OD) is a flat, usually circular disc that encodes binary data (bits) in
the form of pits and lands (a change from pit to land, or from land to pit,
corresponds to a binary 1; no change, whether in a land or a pit area,
corresponds to a binary 0) on a special material (often aluminum) on one
of its flat surfaces.
A CD is made from 1.2-millimetre (0.047 in)
thick, polycarbonate plastic and weighs 14–33 grams. From the center out-
ward, components are: the center spindle hole (15 mm), the first-transition
area (clamping ring), the clamping area (stacking ring), the second-
transition area (mirror band), the program (data) area, and the rim. The in-
ner program area occupies a radius from 25 to 58 mm.
A thin layer of aluminum or, more rarely, gold is applied to the sur-
face, making it reflective. The metal is protected by a film of lacquer nor-
mally spin coated directly on the reflective layer. The label is printed on the
lacquer layer, usually by screen printing or offset printing.
CD data is represented as tiny indentations known as pits, encoded in
a spiral track moulded into the top of the polycarbonate layer. The areas be-
tween pits are known as lands. Each pit is approximately 100 nm deep by
500 nm wide, and varies from 850 nm to 3.5 µm in length. The distance be-
tween the tracks (the pitch) is 1.6 µm.
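From the track pitch given above and the program-area radii given earlier (25 to 58 mm), the total length of the spiral track can be estimated with a back-of-envelope calculation; the result of roughly 5.4 km is an approximation, not an official specification:

```python
import math

inner_r, outer_r = 0.025, 0.058   # program-area radii in metres (from the text)
pitch = 1.6e-6                    # track pitch in metres (from the text)

# Number of spiral windings that fit between the inner and outer radii
turns = (outer_r - inner_r) / pitch

# Approximate each winding by a circle of the average radius
avg_circumference = 2 * math.pi * (inner_r + outer_r) / 2
track_length_km = turns * avg_circumference / 1000

print(f"{turns:.0f} turns, spiral track ≈ {track_length_km:.1f} km")
```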
CD-R discs (CD-Rs) are readable by most plain CD readers, i.e., CD
readers manufactured prior to the introduction of CD-R. This is an ad-
vantage over CD-RW, which can be re-written but cannot be played on
many plain CD readers.
CD-ROM, the abbreviation of compact disc read-only memory, is a
type of computer memory in the form of a compact disc that is read by op-
tical means. A CD-ROM drive uses a low-power laser beam to read digit-
ized (binary) data that has been encoded in the form of tiny pits on an opti-
cal disk. The drive then feeds the data to a computer for processing. The
standard compact disc was introduced in 1982 for digital audio reproduc-
tion. But, because any type of information can be represented digitally, the
standard CD was adapted by the computer industry, beginning in the mid-
1980s, as a low-cost …
The DVD represents the second generation of compact
disc (CD) technology, and, in fact, soon after the release of the first audio
CDs by the Sony Corporation and Philips Electronics NV in 1982, research
was under way on storing high-quality video on the same 120-mm (4.75-
inch) disc. In 1994–95 two competing formats were introduced, the Multi-
media CD (MMCD) of Sony and Philips and the Super Density (SD) disc
of a group led by the Toshiba Corporation and Time Warner Inc. By the
end of 1995 the competing groups had agreed on a common format, to be
known as DVD, that combined elements of both proposals, and in 1996 the
first DVD players went on sale in Japan.
Like a CD drive, a DVD drive uses a laser to read digitized (binary)
data that have been encoded onto the disc in the form of tiny pits tracing a
spiral track between the centre of the disc and its outer edge. However, be-
cause the DVD laser emits red light at a shorter wavelength than the infra-
red light of the CD laser (635 or 650 nanometres for the DVD as opposed
to 780 nanometres for the CD), it is able to resolve shorter pits on more nar-
rowly spaced tracks, thereby allowing for greater storage density. In addi-
tion, DVDs are available in single- and double-sided versions, with one or
two layers of information per side. A double-sided, dual-layer DVD can
hold more than 16 gigabytes of data, more than 10 times the capacity of
a CD-ROM, but even a single-sided, single-layer DVD can hold more than
four gigabytes—more than enough capacity for a two-hour movie that has
been digitized in the highly efficient MPEG-2 compression format. Indeed,
soon after the first DVD players were introduced, single-sided DVDs be-
came the standard media for watching movies at home, almost completely
replacing videotape. Consumers quickly appreciated the convenience of the
discs as well as the higher quality of the video images, the interactivity of
the digital controls, and the presence of numerous extra features packed in-
to the discs’ capacious storage.
With two incompatible next-generation technologies, Blu-ray and HD
DVD, on the market, consumers were reluctant to purchase new players
for fear that one standard would lose out to the other and render their
purchase worthless. In addition,
movie studios faced a potentially expensive situation if they produced mov-
ies for the losing format, and computer and software firms were concerned
about the type of disc drive that would be needed for their products. Those
uncertainties created pressure to settle on a format, and in 2008 the enter-
tainment industry accepted Blu-ray as its preferred standard. Toshiba’s
group stopped development of HD DVD. By that time, doubts were being
raised about how long even the new Blu-ray discs would be viable, as a
growing number of movies in high-definition were available for
“streaming” online, and cloud computing services offered consumers huge
data banks for storing all sorts of digitized data.
Blu-ray Disc (BD), often known simply as Blu-ray, is
a digital optical disc storage format. It is designed to supersede
the DVD format, capable of storing several hours of video in high-
definition (HDTV 720p and 1080p). The main application of Blu-ray is as a
medium for video material such as feature films and for the physical distri-
bution of video games for the PlayStation 3, PlayStation 4, PlayStation
5, Xbox One and Xbox Series X. The name "Blu-ray" refers to
the blue laser (which is actually a violet laser) used to read the disc, which
allows information to be stored at a greater density than is possible with the
longer-wavelength red laser used for DVDs.
The plastic disc is 120 millimetres (4.7 in) in diameter and 1.2 milli-
metres (0.047 in) thick, the same size as DVDs and CDs. Conventional
or pre-BD-XL Blu-ray Discs contain 25 GB per layer, with dual-layer discs
(50 GB) being the industry standard for feature-length video discs. Triple-
layer discs (100 GB) and quadruple-layer discs (128 GB) are available
for BD-XL re-writer drives.
Blu-ray Discs contain their data relatively close to the surface (less
than 0.1 mm), which, combined with the smaller spot size, presents a prob-
lem: a scratch on the surface can destroy data.
HD DVD uses traditional material and has the same scratch and sur-
face characteristics of a regular DVD. The data is at the same depth
(0.6 mm) as DVD, so as to minimize damage from scratching. As with DVD,
the construction of the HD DVD allows for a second side of either HD
DVD or DVD.
A study performed by Home Media Magazine (August 5, 2007) con-
cluded that HD DVDs and Blu-ray discs are essentially equal in production
cost.
While there is an HD DVD variant that acts as a successor for
the DVD-RAM, the HD DVD-RAM, a "BD-RAM" has never been re-
leased. Although the BD-RE has unrestricted random writing access capa-
bilities, its rewrite cycle count of around 1000 times is much lower than the
potential 100,000 rewrite cycles of some DVD-RAM variants.
Ex.7. Read and correct these false statements.
1. CD-ROM, the abbreviation of compact disc recordable, is a type of
computer memory in the form of a compact disc that is read by optical
means.
2. The name "Blu-ray" refers to the pink laser used to read the disc,
which allows information to be stored at a greater density than is possible
with the longer-wavelength red laser used for DVDs.
3. HD DVD uses traditional material and has the different scratch
and surface characteristics of a regular DVD. The data is at the same depth
(0.6 mm) as DVD as to minimize damage from scratching.
4. Blu-ray Discs contain their data far from the surface which com-
bined with the smaller spot size presents a problem when the surface is
scratched as data would be destroyed.
5. The DVD represents the second generation of compact
disc (CD) technology, and, in fact, soon after the release of the first audio
CDs by the Sony Corporation and Philips Electronics NV in 1982, research
was under way on storing high-quality video on an absolutely new disc.
6. DVD drive doesn’t use a laser to read digitized (binary) data that
have been encoded onto the disc in the form of tiny pits tracing a spiral
track between the centre of the disc and its outer edge.
7. A double-sided, dual-layer DVD can hold more than 150 giga-
bytes of data.
Differences between CD, DVD, and Blu-ray
The CD and DVD are versions of an optical disk which mainly differ
in size and manufacturing method. Generally, a DVD can store more data
than a CD; one reason is that a CD has a polycarbonate substrate on only
a single side, while in DVDs it is present on both sides.
Key Differences between CD and DVD
A compact disk can hold up to 700 MB of data while a digital versa-
tile disk can hold a maximum of 17 GB of data.
Because the DVD can hold far more data, it is more widely used than
a CD.
The metal layer used for recording is placed beneath the labelling side
in the CD. In contrast, in DVD this metal layer is situated in the centre of
the disk.
A CD can have only one layer of pits and lands, whereas a DVD can
have two.
A CD contains the spacing of 1.6 micrometres between spiral tracks
and 0.834 micrometres between the pits on the disc. On the other hand, the
spiral loops in DVD are 0.74 micrometres apart and the distance between
the pits is 0.4 micrometres.
The error correcting codes used in CDs are CIRC and EFMP. As
against, DVDs use distinct error correction techniques which involve RS-
PC and EFM Plus.
Removing the adhesive label of a CD can cause severe damage to the
CD. On the contrary, when the adhesive label of a DVD is removed, it
merely causes an imbalance in spin.
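The pitch and pit figures listed above give a rough idea of how much denser a DVD is than a CD: areal density scales approximately with the product of the two ratios. A hedged sketch of that estimate:

```python
cd_pitch, dvd_pitch = 1.6, 0.74     # track spacing in micrometres (from the text)
cd_pit, dvd_pit = 0.834, 0.4        # spacing between pits in micrometres (from the text)

# Areal density scales roughly with (track pitch ratio) x (pit spacing ratio).
density_gain = (cd_pitch / dvd_pitch) * (cd_pit / dvd_pit)
print(f"a DVD stores roughly {density_gain:.1f}x more data per unit area than a CD")
```

The remaining gap up to the actual capacity ratio (700 MB versus 4.7 GB) comes from the DVD's more efficient modulation and error-correction coding.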
Key Differences Between Magnetic disk and Optical disk
The magnetic disk is a fixed storage device, whereas the optical disk
is removable, transportable storage media.
Optical disk generates better signal-to-noise ratio as compared to
magnetic disk.
The sample rate used in the magnetic disk is lower than that used in
the optical disk.
In the optical disk, the data is sequentially accessed. In contrast, the
data in the magnetic disk is randomly accessed.
Tracks in the magnetic disk are generally circular while in optical disk
the tracks are constructed spirally.
Optical disk allows mass replication. On the contrary, in the magnetic
disk, only one disk is accessed at a time.
The access time of the magnetic disk is less than that of the optical disk.
Key Differences Between Blu-ray and DVD
Blu-ray optical disks perform better than DVDs in terms of storing
high-quality data.
In DVD a red laser with a wavelength of 650 nm is used to read data,
while Blu-ray uses a blue-violet laser with a wavelength of 405 nm.
DVD can store 4.7 GB in a single layer and a maximum of 7.4 GB in
a double layer. As against, Blu-ray has a maximum space of 25 GB in a
single layer and 50 GB in a dual layer.
Blu-ray provides a higher data rate that is up to 36 Mbps whereas the
DVD rate of data transfer is 11.08 Mbps.
Blu-ray is more secure and protected.
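Using the transfer rates quoted above, one can estimate how long a single-layer DVD's worth of data (4.7 GB) would take to read at each speed; decimal units (1 GB = 8000 Mb) are assumed for the conversion:

```python
def transfer_minutes(size_gb, rate_mbps):
    """Minutes needed to move size_gb gigabytes at rate_mbps megabits
    per second, using decimal units (1 GB = 8000 Mb)."""
    return size_gb * 8000 / rate_mbps / 60

payload_gb = 4.7   # a single-layer DVD's worth of data (from the text)
for name, rate in (("DVD", 11.08), ("Blu-ray", 36.0)):
    print(f"{name} at {rate} Mbps: {transfer_minutes(payload_gb, rate):.1f} min")
```

At its 36 Mbps rate, Blu-ray moves the same payload roughly three times faster than a DVD drive at 11.08 Mbps.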
Advantages of Blu-ray
It has a large storage capacity.
Compulsory managed copy.
It is backward compatible.
High-quality HDTV support.
Durable.
Advantages of DVD
Provides high density as compared to CD.
Cost-effective relative to CD.
Compatible with most older and current DVD writers.
Durable.
Disadvantages of Blu-ray
Its cost is quite high.
Although Blu-ray can store high-definition video, its capacity is still
limited.
Risk from competing formats.
Disadvantages of DVD
Data transfer is slow as compared to Blu-ray.
Lower storage capacity, and it cannot store HD videos.
The Blu-ray can hold up to 50 GB of data in a dual layer, which is
about 6.7 times that of a dual-layer DVD. So it is clear that Blu-ray pro-
vides higher storage space, data rate, security, and compatibility than
DVD.
Solid-state drive
A solid-state drive (SSD) is a solid-state storage device that us-
es integrated circuit assemblies to store data persistently, typically us-
ing flash memory, and functioning as secondary storage in the hierarchy of
computer storage. It is also sometimes called a solid-state device or a solid-
state disk, even though SSDs lack the physical spinning disks and
movable read–write heads used in hard disk drives (HDDs) and floppy
disks.
Compared with electromechanical drives, SSDs are typically more re-
sistant to physical shock, run silently, and have quicker access time and
lower latency. SSDs store data in semiconductor cells. As of 2019, cells
can contain between 1 and 4 bits of data. SSD storage devices vary in their
properties according to the number of bits stored in each cell, with single-
bit cells ("SLC") being generally the most reliable, durable, fast, and ex-
pensive type, compared with 2- and 3-bit cells ("MLC" and "TLC"), and fi-
nally quad-bit cells ("QLC") being used for consumer devices that do not
require such extreme properties and are the cheapest of the four. In addi-
tion, 3D XPoint memory (sold by Intel under the Optane brand), stores data
by changing the electrical resistance of cells instead of storing electrical
charges in cells, and SSDs made from RAM can be used for high speed,
when data persistence after power loss is not required, or may use battery
power to retain data when its usual power source is unavailable. Hybrid
drives or solid-state hybrid drives (SSHDs), such as Apple's Fusion Drive,
combine features of SSDs and HDDs in the same unit using both flash
memory and a HDD in order to improve the performance of frequently-
accessed data.
SSDs based on NAND Flash will slowly leak charge over time if left
for long periods without power. This causes worn-out drives (that have ex-
ceeded their endurance rating) to start losing data typically after one year
(if stored at 30 °C) to two years (at 25 °C) in storage; for new drives it
takes longer. Therefore, SSDs are not suitable for archival storage. 3D
XPoint is a possible exception to this rule; however, it is a relatively new
technology with unknown long-term data-retention characteristics.
SSDs can use traditional HDD interfaces and form factors, or newer
interfaces and form factors that exploit specific advantages of the flash
memory in SSDs. Traditional interfaces (e.g. SATA and SAS) and standard
HDD form factors allow such SSDs to be used as drop-in replacements for
HDDs in computers and other devices. Newer form factors such
as mSATA, M.2, U.2, NF1, XFMEXPRESS and EDSFF (formerly known
as Ruler SSD) and higher speed interfaces such as NVM Express (NVMe)
over PCI Express can further increase performance over HDD perfor-
mance.
SSDs support a limited number of writes, and they become slower as
they fill up.
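The "limited number of writes" is usually expressed as a terabytes-written (TBW) endurance rating. The sketch below turns an assumed rating and an assumed daily write volume into a rough lifetime estimate; both figures are hypothetical examples, not values from the text:

```python
def drive_lifetime_years(tbw_rating, gb_written_per_day):
    """Rough lifetime estimate from a drive's endurance rating.

    tbw_rating: terabytes-written rating (assumed example figure).
    gb_written_per_day: average daily host writes (assumed).
    """
    days = tbw_rating * 1000 / gb_written_per_day   # 1 TB = 1000 GB
    return days / 365

# Illustrative: a drive rated at 600 TBW, written at 50 GB per day
years = drive_lifetime_years(600, 50)
print(f"≈ {years:.1f} years before the endurance rating is exceeded")
```

For typical desktop workloads the endurance rating is rarely the limiting factor; the charge-leakage behaviour described above matters more for unpowered storage.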
Flash memory
Most SSD manufacturers use non-volatile NAND flash memory in the
construction of their SSDs because of the lower cost compared
with DRAM and the ability to retain the data without a constant power
supply, ensuring data persistence through sudden power outages. Flash
memory SSDs were initially slower than DRAM solutions, and some early
designs were even slower than HDDs after continued use. This problem
was resolved by controllers that came out in 2009 and later.
Flash-based SSDs store data in metal-oxide-
semiconductor (MOS) integrated circuit chips which contain non-
volatile floating-gate memory cells. Flash memory-based solutions are typ-
ically packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-
inch), but also in smaller more compact form factors, such as the M.2 form
factor, made possible by the small size of flash memory.
Lower-priced drives usually use triple-level cell (TLC) or multi-level
cell (MLC) flash memory, which is slower and less reliable than single-
level cell (SLC) flash memory. This can be mitigated or even reversed by
the internal design structure of the SSD, such as interleaving, changes to
writing algorithms, and higher over-provisioning (more excess capacity)
with which the wear-leveling algorithms can work.
Solid-state drives that rely on V-NAND technology, in which layers
of cells are stacked vertically, have been introduced.
Key Differences Between SSD and HDD
1. The performance of the SSD is far better than the HDD.
2. SSD is more durable and failure resistant than HDD as all its parts
are fixed, unlike the HDD, which contains moving parts and is susceptible to
damage.
3. The power consumed by the HDD is greater than that consumed by
the SSD; the reason is that the HDD uses energy-hungry motors for its
functioning.
4. The access time of the SSD is excellent. For smaller files the data
transfer rate can be between 100 MB/s and 600 MB/s. Conversely, HDD
access time is about 10 times slower than the SSD's. It provides a data
transfer rate of around 140 MB/s; however, large files do not degrade its
performance.
5. Seek time and rotational latency of the SSD are superior to those
of the HDD, and it also takes less time to start up.
6. The SSD is more costly than the HDD, and it provides less storage
capacity than an HDD of the same price.
Ex. 8. Answer the questions:
1. What do SSD and HDD stand for?
2. What is the main advantage of using SSD instead of HDD?
3. What technology is used for SSD and Flash memory?
4. Which SSD storage device (SLC, MLC, TLC) is the most expen-
sive type? Why?
5. Is SSD more suitable for archiving than HDD?
Ex. 9. Read the information and correct the following statements.
1. SSDs are typically less resistant to physical shock, run silently,
and have quicker access time and high latency.
2. SSDs are not suitable for archival storage.
3. The performance of the HDD is far better than the SSD.
4. Seek time and rotational latency of the SSD are worse to HDD and
it also takes more time in a startup.
5. Dual-drive hybrid systems combine the usage of one sin-
gle SSD and HDD device.
6. SSDs store data in metal-oxide-semiconductor (MOS) integrated
circuit chips which contain volatile floating-gate memory cells.
7. The access time of the optical disk is lesser than the magnetic disk.
8. CD provides high density as compared to Blu-ray.
MAIN MEMORY
There are two types of main memory:
RAM (Random Access Memory) holds the program instructions
and the data that is being used by the processor.
ROM (Read Only Memory) holds the program instructions and
settings that are required to start up the computer.
There are several major differences between a ROM chip and a RAM
chip. The differences revolve around the uses, storage capabilities and ca-
pacity, and physical sizes of ROM and RAM.
ROM is non-volatile: it does not require power to retain its data. Us-
ing a non-volatile storage medium is the only way to begin the start-up
process for computers and other devices. The data in ROM can only be
read by the CPU; it cannot be
modified. The CPU cannot directly access the ROM memory, the data has
to be first transferred to the RAM, and then the CPU can access that data
from the RAM. It is often used to store BIOS program on a computer
motherboard. ROM was used as a storage media in a Nintendo, Gameboy,
and Sega Genesis game cartridge. ROM chips store several MB (mega-
bytes) of data, usually 4 to 8 MB per chip. They can vary in size from less
than an inch in length to multiple inches in length and width, depending on
their use.
ROM can be classified into PROM, EPROM and EEPROM.
PROM: Programmable ROM, it can be modified only once by the
user.
EPROM: Erasable and Programmable ROM, the content of this
ROM can be erased using ultraviolet rays and ROM can be reprogrammed.
EEPROM: Electrically Erasable and Programmable ROM, it can
be erased electrically and reprogrammed about ten thousand times.
RAM is volatile and used in computers to temporarily store files in
use on the computer. It is one of the fastest types of memory, allowing it to
switch quickly between tasks. For example, the Internet browser you are
using to read this page is loaded into RAM and is running from it. RAM
chips often range in storage capacity from 1 to 256 GB. It is used in the
normal operations after the operating system is loaded.
There are two kinds of RAM, Static RAM and Dynamic RAM.
Static RAM is one which requires the constant flow of the power
to retain the data inside it. It is faster and more expensive than DRAM. It is
used as a cache memory for the computer.
Dynamic RAM needs to be refreshed to retain the data it holds. It
is slower and cheaper than static RAM.
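The idea of a small, fast SRAM cache sitting in front of a larger, slower DRAM can be illustrated with a toy least-recently-used cache; the class, its size, and the access pattern are invented purely for illustration:

```python
from collections import OrderedDict

class ToyCache:
    """A tiny least-recently-used cache, mimicking how a small fast
    SRAM cache fronts a larger, slower DRAM main memory."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store    # stands in for slow DRAM
        self.data = OrderedDict()       # stands in for fast SRAM
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.data:
            self.hits += 1
            self.data.move_to_end(addr)        # mark as recently used
            return self.data[addr]
        self.misses += 1
        value = self.backing[addr]             # slow path: go to "DRAM"
        self.data[addr] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)      # evict least recently used
        return value

memory = {addr: addr * 2 for addr in range(100)}
cache = ToyCache(capacity=4, backing_store=memory)
for addr in [1, 2, 1, 3, 1, 2]:               # repeated addresses hit the cache
    cache.read(addr)
print(f"hits={cache.hits} misses={cache.misses}")
```

Because programs tend to reuse recently accessed addresses, even a very small cache satisfies most reads, which is exactly why a little expensive SRAM in front of cheap DRAM pays off.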
Ex. 10. Read the information and correct the following statements.
1. ROM chips have a storage capacity of 20 to 25 MB.
2. ROM is used in the normal operations before the operating system
is loaded.
3. RAM and ROM both are the external memories of the computer.
4. RAM is a permanent memory of the computer.
5. ROM is a temporary memory of the computer.
6. The capacity of RAM is smaller than ROM.
7. Data in RAM and ROM cannot be modified.
8. The data has to be first transferred to the ROM, and then the CPU
can access that data from the RAM.
9. Static RAM is slower and more expensive than dynamic RAM.
Ex. 11. Read the text again and answer these questions.
1. What do RAM and ROM stand for?
2. What is the purpose of RAM?
3. What are the benefits of Static RAM?
4. What is the purpose of ROM?
5. What is the difference between ROM and RAM capacity?
6. Which type of memory is the fastest?
7. Can the CPU directly access the ROM memory?
8. Which term is used to describe the cheapest and slowest RAM?
Ex. 12. Find terms in the text which correspond to these definitions.
1. … was used as a storage media in a Nintendo, Gameboy, and Sega
Genesis game cartridge.
2. … needs to be refreshed to retain the data it holds.
3. It is used as a cache memory for the computer.
4. … cannot directly access the ROM memory, the data has to be
first transferred to the RAM
5. It is used in the normal operations after the operating system is
loaded.
6. It can be erased electrically and reprogrammed about ten thousand
times.
7. The content of this ROM can be erased using ultraviolet rays and
ROM can be reprogrammed.
8. It can be modified only once by the user.
What does computer memory look like?
A typical example is a 512 MB DIMM computer memory module,
which connects to the memory slot on a computer motherboard.
Volatile vs. non-volatile memory
Memory can be either volatile or non-volatile memory. Volatile
memory is memory that loses its contents when the computer or hardware
device loses power. Computer RAM is an example of volatile memory. It is
why if your computer freezes or reboots when working on a program, you
lose anything that hasn't been saved. Non-volatile memory, sometimes ab-
breviated as NVRAM, is memory that keeps its contents even if the power
is lost. EPROM is an example of non-volatile memory.
What happens to memory when the computer is turned off?
As mentioned above, because RAM is volatile memory, when the
computer loses power, anything stored in RAM is lost. For example, while
working on a document, it is stored in RAM. Unless it has been saved to
non-volatile memory (e.g., the hard drive), it will be lost if the computer
loses power.
Memory is not disk storage
It is very common for new computer users to be confused by what
parts in the computer are memory. Although both the hard drive and RAM
are memory, it's more appropriate to refer to RAM as "memory" or
"primary memory" and a hard drive as "storage" or "secondary storage."
When someone asks how much memory is in your computer, it is of-
ten between 1 GB and 16 GB of RAM and several hundred gigabytes, or
even a terabyte, of hard disk drive storage. In other words, you always have
more hard drive space than RAM.
How is memory used?
When a program, such as your Internet browser, is open, it is loaded
from your hard drive and placed into RAM. This process allows that pro-
gram to communicate with the processor at higher speeds. Anything you
save to your computer, such as a picture or video, is sent to your hard drive
for storage.
Why is memory important or needed for a computer?
Each device in a computer operates at different speeds and computer
memory gives your computer a place to quickly access data. If the CPU
had to wait for a secondary storage device, like a hard disk drive, a com-
puter would be much slower.
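An order-of-magnitude comparison makes the point concrete; the latency figures are typical ballpark values, assumed rather than taken from the text:

```python
# Typical ballpark latencies (assumed, order-of-magnitude figures)
ram_latency_s = 100e-9     # ~100 nanoseconds for a DRAM access
hdd_latency_s = 10e-3      # ~10 milliseconds for a hard disk access

slowdown = hdd_latency_s / ram_latency_s
print(f"waiting on the disk instead of RAM is ~{slowdown:,.0f}x slower")
```

A factor of around 100,000 is why the CPU works out of RAM and only falls back to secondary storage when it must.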
112
Solid-state drive
A solid-state drive (SSD) is a solid-state storage device that us-
es integrated circuit assemblies to store data persistently, typically us-
ing flash memory, and functioning as secondary storage in the hierarchy of
computer storage. It is also sometimes called a solid-state device or a solid-
state disk, even though SSDs lack the physical spinning disks and
movable read–write heads used in hard disk drives (HDDs) and floppy
disks.
Compared with electromechanical drives, SSDs are typically more re-
sistant to physical shock, run silently, and have quicker access time and
lower latency. SSDs store data in semiconductor cells. As of 2019, cells
can contain between 1 and 4 bits of data. SSD storage devices vary in their
properties according to the number of bits stored in each cell, with single-
bit cells ("SLC") being generally the most reliable, durable, fast, and ex-
pensive type, compared with 2- and 3-bit cells ("MLC" and "TLC"), and fi-
nally quad-bit cells ("QLC") being used for consumer devices that do not
require such extreme properties and are the cheapest of the four. In addi-
tion, 3D XPoint memory (sold by Intel under the Optane brand), stores data
by changing the electrical resistance of cells instead of storing electrical
charges in cells, and SSDs made from RAM can be used for high speed,
when data persistence after power loss is not required, or may use battery
power to retain data when its usual power source is unavailable. Hybrid
drives or solid-state hybrid drives (SSHDs), such as Apple's Fusion Drive,
combine features of SSDs and HDDs in the same unit using both flash
memory and a HDD in order to improve the performance of frequently-
accessed data.
SSDs based on NAND Flash will slowly leak charge over time if left
for long periods without power. This causes worn-out drives (that have ex-
ceeded their endurance rating) to start losing data typically after one year
(if stored at 30 °C) to two years (at 25 °C) in storage; for new drives it
takes longer. Therefore, SSDs are not suitable for archival storage. 3D
XPoint is a possible exception to this rule, however it is a relatively new
technology with unknown long-term data-retention characteristics.
SSDs can use traditional HDD interfaces and form factors, or newer
interfaces and form factors that exploit specific advantages of the flash
memory in SSDs. Traditional interfaces (e.g. SATA and SAS) and standard
HDD form factors allow such SSDs to be used as drop-in replacements for
113
HDDs in computers and other devices. Newer form factors such
as mSATA, M.2, U.2, NF1, XFMEXPRESS and EDSFF (formerly known
as Ruler SSD) and higher speed interfaces such as NVM Express (NVMe)
over PCI Express can further increase performance over HDD perfor-
mance.
SSDs have a limited number of writes, and will be slower the more
filled up they are.
Flash memory
Most SSD manufacturers use non-volatile NAND flash memory in the
construction of their SSDs because of the lower cost compared
with DRAM and the ability to retain the data without a constant power
supply, ensuring data persistence through sudden power outages. Flash
memory SSDs were initially slower than DRAM solutions, and some early
designs were even slower than HDDs after continued use. This problem
was resolved by controllers that came out in 2009 and later.
Flash-based SSDs store data in metal-oxide-
semiconductor (MOS) integrated circuit chips which contain non-
volatile floating-gate memory cells. Flash memory-based solutions are typ-
ically packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-
inch), but also in smaller, more compact form factors, such as the M.2 form
factor, made possible by the small size of flash memory.
Lower-priced drives usually use triple-level cell (TLC) or multi-level
cell (MLC) flash memory, which is slower and less reliable than single-
level cell (SLC) flash memory. This can be mitigated or even reversed by
the internal design structure of the SSD, such as interleaving, changes to
writing algorithms, and higher over-provisioning (more excess capacity)
with which the wear-leveling algorithms can work.
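The wear-leveling algorithms mentioned above spread writes across flash blocks so no single block wears out early. A toy illustration of the idea, in which the controller always picks the least-worn block for the next program/erase cycle (real controllers are far more sophisticated; this is only a sketch):

```python
# Toy wear-leveling allocator: always write to the block with the fewest
# accumulated program/erase (P/E) cycles, so wear stays even.

class WearLeveler:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks  # P/E cycles per flash block

    def pick_block(self) -> int:
        """Choose the least-worn block for the next write and record the wear."""
        block = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
blocks = [wl.pick_block() for _ in range(8)]
print(blocks)           # writes cycle evenly: [0, 1, 2, 3, 0, 1, 2, 3]
print(wl.erase_counts)  # every block worn equally: [2, 2, 2, 2]
```

Over-provisioned (excess) capacity gives the controller more spare blocks to rotate through, which is why it improves endurance.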
Solid-state drives that rely on V-NAND technology, in which layers
of cells are stacked vertically, have been introduced.
KEYS
Unit 1
Task 1. 1. False (German Konrad Zuse). 2. True. 3. False (a simple
electronic typewriter). 4. False. 5. False. 6. False. 7. True. 8. False.
9. False.
Task 2. 1. subconscious 2. simple 3. analogue 4. Arithmometer 5. a room
6. the UNIVAC 7. PC 8. ALTAIR 8800 9. single circuit board, 8K of
RAM, a keyboard
Task 5. Konrad Zuse
Unit 2
Task 2. 1. b, 2. d, 3. a, 4. d, 5. c, 6. b, 7. d, 8. a, 9. b, 10. a, 11. c, 12. a, 13. c
Unit 3
Quiz: 1. A, 2. A, 3. C, 4. A, 5. B, 6. A, 7. A, 8. C, 9. D, 10. C
Unit 5
Ex. 1.
1. ‘cache coherency’ means ensuring that any changes are reflected
within the cache and vice versa.
2. A write-back cache speeds up the write process, but requires a more
intelligent cache controller.
3. The whole process is controlled by a group of logic circuits called
the ‘cache controller’.
4. the ‘write-back’ cache which allows the processor to write chang-
es only to the cache and not to main memory.
5. the cache controller’s main job is to look after ‘cache coherency’
6. The ‘write-through’ cache is the safest solution, but it is also the
slowest.
7. ‘cache coherency’ which means ensuring that any changes are re-
flected within the cache and vice versa.
Ex. 2
1. T
2. T
3. F Cloud storage providers are in charge of keeping the physical
environment protected and running.
4. F Cloud storage is a model of computer data storage in which
the digital data is stored in logical pools.
5. F Cloud computing is a term used to describe services provided
over a network by a collection of remote servers.
6. F Cloud computing is a term used to describe services provided
over a network by a collection of remote servers.
7. T Cloud computing is accessed through
an application (e.g., Dropbox app) on your computer, smartphone, or tablet
or a website that accesses the cloud through your browser.
Ex. 3
1. In computer disk storage, a sector is a subdivision of a track on
a magnetic disk.
2. The information in magnetic storage is accessed using one or
more read/write heads.
3. Magnetic storage media, primarily hard disks, are widely used to
store computer data as well as audio and video signals.
4. Other examples of magnetic storage media include floppy disks,
magnetic recording tape, and magnetic stripes on credit cards.
5. Magnetic disks and optical disks are storage devices that provide a
way to store data for a long duration.
6. The surface of a magnetic tape and the surface of a magnetic disk
are covered with a magnetic material which helps in storing the information
magnetically.
7. Magnetic tape and magnetic disk both are non-volatile storage.
8. The basic difference between magnetic tape and magnetic disk is
that magnetic tape is used for backups, whereas magnetic disks are used
as secondary storage.
9. True
10. True
11. Each platter surface is divided into circular tracks which are fur-
ther divided into sectors.
12. True
13. A head crash is not repairable; the whole magnetic disk has to be
replaced.
Ex. 4
1. magnetic, 2. magnetically, 3. magnetize, 4. magnet, magnetized,
5. magnet, 6. magnetic, 8. magnetizable
Ex. 5
1. magnetic, 2. magnetized, 3. tracks, 4. revolutions, 5. information,
6. retrieve
Ex. 6
1. Seek time measures the time it takes the head assembly on the
actuator arm to travel to the track of the disk.
2. Rotational latency and resulting access time can be improved by
increasing the rotational speed of the disks.
3. Head switch time: additional time required to electrically switch
from one head to another.
4. Sector overhead time: additional time (bytes between sectors)
needed for control structures and other information necessary to manage
the drive.
5. Secondary storage (also known as external memory or auxiliary
storage) differs from primary storage in that it is not directly accessible by
the CPU.
6. Secondary storage is non-volatile (retaining data when its power is
shut off).
7. Secondary storage is significantly slower than primary storage.
8. Rotating optical storage devices, such as CD and DVD drives,
have even longer access times.
9. Rotational latency is the delay waiting for the rotation of the disk
to bring the required disk sector under the read-write head.
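Statements 2 and 9 above can be made concrete with a short worked example: average rotational latency is the time for half a revolution, so it falls as spindle speed rises. The RPM values below are common drive speeds, used purely for illustration.

```python
# Average rotational latency: on average the required sector is half a
# revolution away from the read-write head.

def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency (half a revolution), in milliseconds."""
    seconds_per_rev = 60.0 / rpm
    return (seconds_per_rev / 2) * 1000

print(round(avg_rotational_latency_ms(5400), 2))   # 5.56 ms
print(round(avg_rotational_latency_ms(7200), 2))   # 4.17 ms
print(round(avg_rotational_latency_ms(15000), 2))  # 2.0 ms
```

This is why, as statement 2 says, increasing the rotational speed of the disks improves access time.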
Ex.7
1. CD-ROM, abbreviation of compact disc read-only memory, type
of computer memory in the form of a compact disc that is read by optical
means
2. The name "Blu-ray" refers to the blue laser (which is actually
a violet laser) used to read the disc, which allows information to be stored
at a greater density than is possible with the longer-wavelength red laser
used for DVDs.
3. HD DVD uses traditional material and has the same scratch and
surface characteristics as a regular DVD. The data is at the same depth
(0.6 mm) as on a DVD so as to minimize damage from scratching.
4. Blu-ray Discs contain their data relatively close to the surface (less
than 0.1 mm) which combined with the smaller spot size presents a prob-
lem when the surface is scratched as data would be destroyed.
5. The DVD represents the second generation of compact
disc (CD) technology, and, in fact, soon after the release of the first audio
CDs by the Sony Corporation and Philips Electronics NV in 1982, research
was under way on storing high-quality video on the same 120-mm (4.75-
inch) disc.
6. A DVD drive uses a laser to read digitized (binary) data that have
been encoded onto the disc in the form of tiny pits tracing a spiral track be-
tween the centre of the disc and its outer edge.
7. A double-sided, dual-layer DVD can hold more than 16 gigabytes
of data, more than 10 times the capacity of a CD-ROM, but even a
single-sided, single-layer DVD can hold 4.7 gigabytes.
Ex. 8
1. Solid State drive and Hard Disk
2. SSDs lack the physical spinning disks and movable read–write
heads used in hard disk drives (HDDs) and floppy disks.
3. Solid-state technology.
4. SSD storage devices vary in their properties according to the num-
ber of bits stored in each cell, with single-bit cells ("SLC") being generally
the most reliable, durable, fast, and expensive type, compared with 2- and
3-bit cells ("MLC" and "TLC"), and finally quad-bit cells ("QLC") being
used for consumer devices that do not require such extreme properties and
are the cheapest of the four.
5. HDD is more suitable.
Ex.9
1. Compared with electromechanical drives, SSDs are typically more
resistant to physical shock, run silently, and have quicker access time and
lower latency.
2. SSDs are not suitable for archival storage.
3. The performance of the SSD is far better than the HDD.
4. Seek time and rotational latency of the SSD is superior to HDD
and it also takes less time in a startup.
5. Dual-drive hybrid systems combine the usage of separate SSD and
HDD devices installed in the same computer.
6. SSDs store data in metal-oxide-semiconductor (MOS) integrated
circuit chips which contain non-volatile floating-gate memory cells.
7. The access time of the magnetic disk is less than that of the optical disk.
8. Blu-ray provides high density as compared to CD.
Ex. 10
1. ROM chips have a storage capacity of 4 to 8 MB.
2. F It is used in normal operations after the operating system is
loaded.
3. RAM and ROM both are the internal memories of the computer.
4. RAM is a temporary (volatile) memory of the computer.
5. ROM is a permanent (non-volatile) memory of the computer.
6. The capacity of ROM is smaller than RAM.
7. Data in ROM cannot be modified./ Data in RAM can be modified
8. The data has to be first transferred to the RAM, and then the CPU
can access that data from the RAM
9. Dynamic RAM needs to be refreshed to retain the data it holds. It
is slower and cheaper than static RAM.
Ex. 11
Possible answers.
1. Random Access Memory, Read Only Memory.
2. RAM (Random Access Memory) holds the program instructions
and the data that is being used by the processor.
3. It is faster and more expensive than DRAM. It is used as a cache
memory for the computer.
4. ROM (Read Only Memory) holds the program instructions and
settings that are required to start up the computer.
5. ROM chips store several MB (4–8); RAM chips often range in
storage capacity from 1 to 256 GB.
6. Static RAM
7. The CPU cannot directly access the ROM memory, the data has to
be first transferred to the RAM, and then the CPU can access that data from
the RAM.
8. Dynamic RAM.
Ex. 12
1. ROM
2. Dynamic RAM
3. Static RAM
4. CPU
5. RAM
6. EEPROM: Electrically Erasable and Programmable ROM
7. EPROM: Erasable and Programmable ROM
8. PROM: Programmable ROM
Список литературы:
1. Open University (A Britannica Publishing Partner).
2. Радовель В. А. Английский язык. Основы компьютерной
грамотности: Учебное пособие. – Ростов н/Д: Феникс, 2006. – 224 с.
3. English for Computer Science Students: Учебное пособие / Сост.
Т.В. Смирнова, М.В. Юдельсон; науч. ред. Н.А. Дударева. – 3-е изд. –
М.: Флинта: Наука, 2003. – 128 с.: 9 ил.
4. Eric H. Glendinning, John McEwan. Basic English for Computing. –
Oxford University Press, 1999.
5. Tom Ricca-McCarthy, Michael Duckworth. English for Telecoms and
Information Technology. – Oxford University Press, 2013.
6. Russel Walter. The Secret Guide to Computers, 2002.
7. Roger Young. How Computers Work: Processor and Main Memory, 2018.
8. http://www.eslcafe.com