
Paavai Engineering College Department of ECE

UNIT I
MEMORY SYSTEM


CONTENTS

Technical Terms
1.1 Basic Concepts
    1.1.1 CPU-Main Memory Connection
1.2 Semiconductor RAM Memories
    1.2.1 Semiconductor Memory Classifications
1.3 Read Only Memories
    1.3.1 Various types of ROMs and their characteristics
1.4 Memory Hierarchy
1.5 Cache Memories
1.6 Performance Considerations
1.7 Virtual Memory
1.8 Memory Management Requirements
1.9 Secondary Storage Devices
1.10 Accessing I/O Devices
1.11 Standard I/O Interfaces
1.12 PCI
1.13 SCSI
1.14 USB
Question Bank


TECHNICAL TERMS
1. Semiconductor
   Technical meaning: A semiconductor is a physical substance that is designed to manage and control the flow of current in electronic devices and equipment.
   Literal meaning: It is in between a conductor and an insulator.
   Reference: https://www.techopedia.com/definition/2373/semiconductor

2. Semiconductor RAM Memories
   Technical meaning: It is the hardware computing device where the operating system (OS), application programs and data in current use are kept.
   Literal meaning: It is the physical components that a computer system requires to function.
   Reference: https://whatis.techtarget.com/search/query?q=Semiconductor+RAM+Memories

3. Read Only Memories
   Technical meaning: Read-Only Memory (ROM) is storage technology that permanently stores data in a chip built into computers and other electronic devices.
   Literal meaning: Read-Only Memory (ROM) is a computer memory on which data has been pre-recorded.
   Reference: https://www.webopedia.com/definitions/rom/

4. Memory Hierarchy
   Technical meaning: In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time.
   Literal meaning: Memory hierarchy is a hierarchical pyramid for computer memory.
   Reference: https://en.wikipedia.org/wiki/Memory_hierarchy

5. Cache Memories
   Technical meaning: It is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations.
   Literal meaning: The cache is the fastest component in the memory hierarchy.
   Reference: https://www.javatpoint.com/coa-cache-memory

6. Virtual Memory
   Technical meaning: A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory.
   Literal meaning: In computing, virtual memory, or virtual storage, is a memory management technique.
   Reference: https://www.tutorialspoint.com/operating_system/os_virtual_memory.htm

7. Memory Management Requirements
   Technical meaning: The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request.
   Literal meaning: Memory management is a form of resource management applied to computer memory.
   Reference: https://en.wikipedia.org/wiki/Memory_management

8. Secondary Storage Devices
   Technical meaning: The secondary storage devices which are built into the computer or connected to the computer are known as the secondary memory of the computer. It is also known as external memory or auxiliary storage.
   Literal meaning: It is used to store large amounts of data permanently.
   Reference: https://www.javatpoint.com/secondary-memory

9. Buses
   Technical meaning: In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers.
   Literal meaning: A bus is a subsystem that is used to connect computer components and transfer data between them.
   Reference: https://en.wikipedia.org/wiki/Bus_(computing)

10. SCSI (Small Computer System Interface)
   Technical meaning: It is a standard for interfacing, or connecting, personal computers to peripheral devices.
   Literal meaning: It is a set of standards for physically connecting and transferring data.
   Reference: https://ecomputernotes.com/fundamental/terms/scsi

11. USB
   Technical meaning: Universal Serial Bus (USB) is a common interface that enables communication between devices and a host controller such as a personal computer (PC) or smartphone.
   Literal meaning: A universal serial bus (USB) connector is an essential piece of equipment for pairing tech devices with one another.
   Reference: https://www.google.com/


1.1 BASIC CONCEPTS


Memory, like the human brain, is used to store data and instructions. Computer memory is the storage space in the computer where the data to be processed and the instructions required for processing are stored. The memory is divided into a large number of small parts called cells.
Memory System
The memory system serves as the repository of information (data) in a computer
system. The processor (also called the central processing unit, or CPU) accesses (reads or loads)
data from the memory system, performs computations on them, and stores (writes) them back to
memory.
The maximum size of the Main Memory (MM) that can be used in any computer is determined by its addressing scheme. For example, a 16-bit computer that generates 16-bit addresses is capable of addressing up to 2^16 = 64K memory locations. If a machine generates 32-bit addresses, it can access up to 2^32 = 4G memory locations. This number represents the size of the address space of the computer. If the smallest addressable unit of information is a memory word, the machine is called word-addressable. If individual memory bytes are assigned distinct addresses, the computer is called byte-addressable. Most commercial machines are byte-addressable. For example, in a byte-addressable 32-bit computer, each memory word contains 4 bytes. A possible word-address assignment would be:
Word Address    Byte Addresses
0               0  1  2  3
4               4  5  6  7
8               8  9 10 11
...             ...

With the above structure a READ or WRITE may involve an entire memory word or it may involve
only a byte. In the case of byte read, other bytes can also be read but ignored by the CPU. However,
during a write cycle, the control circuitry of the MM must ensure that only the specified byte is
altered. In this case, the higher-order 30 bits can specify the word and the lower-order 2 bits can
specify the byte within the word.
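The address split described above can be sketched in Python (an illustrative snippet, assuming the byte-addressable 32-bit machine with 4-byte words from the example):

```python
def decode_address(addr: int):
    """Split a byte address into a word address and a byte offset.

    The higher-order 30 bits select the word; the lower-order 2 bits
    select the byte within the 4-byte word.
    """
    word_address = addr & ~0x3   # clear the low 2 bits: word-aligned address
    byte_offset = addr & 0x3     # position of the byte within the word
    return word_address, byte_offset

# Byte address 10 lies in the word starting at address 8, at offset 2.
print(decode_address(10))   # (8, 2)
```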

1.1.1 CPU-Main Memory Connection


A block schematic:
From the system standpoint, the Main Memory (MM) unit can be viewed as a "black box". Data transfer between the CPU and MM takes place through the use of two CPU registers, usually called MAR (Memory Address Register) and MDR (Memory Data Register). If MAR is k bits long and MDR is n bits long, then the MM unit may contain up to 2^k addressable locations, and each location will be n bits wide, so the word length is equal to n bits. During a "memory cycle",
n bits of data may be transferred between the MM and CPU. This transfer takes place over the
processor bus, which has k address lines (address bus), n data lines (data bus) and control lines like
Read, Write, Memory Function Completed (MFC), byte specifiers, etc. (control bus). For a read operation, the CPU loads the address into MAR, sets READ to 1 and sets other control signals if
required. The data from the MM is loaded into MDR and MFC is set to 1. For a write operation,
MAR, MDR are suitably loaded by the CPU, write is set to 1 and other control signals are set
suitably. The MM control circuitry loads the data into appropriate locations and sets MFC to 1.
This organization is shown in the following block schematic.

Fig.1.1 Memory Organization


[The figure shows the CPU (with MAR and MDR) connected to a Main Memory of up to 2^k addressable locations, word length n bits, via an address bus (k bits), a data bus (n bits) and a control bus (Read, Write, MFC, byte specifier, etc.).]
Memory Access Time: a useful measure of the speed of the memory unit. It is the time that elapses between the initiation of an operation and the completion of that operation (for example, the time between READ and MFC).
Memory Cycle Time: an important measure of the memory system. It is the minimum time delay required between the initiation of two successive memory operations (for example, the time between two successive READ operations). The cycle time is usually slightly longer than the access time.
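The relationship between the register widths and the memory size can be sketched as follows (illustrative Python; the function name is ours, not standard terminology):

```python
def memory_capacity_bits(k: int, n: int) -> int:
    """Total capacity of a memory with a k-bit MAR and an n-bit MDR:
    up to 2**k addressable locations, each n bits wide."""
    return (2 ** k) * n

# A 16-bit address bus with a 32-bit word gives 64K words of 32 bits each,
# i.e. 2,097,152 bits = 256 KB.
print(memory_capacity_bits(16, 32))   # 2097152
```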
1.2 SEMICONDUCTOR RAM MEMORIES


Semiconductor dynamic RAM memories (DRAMs) are built using several techniques. The oldest one, which is no longer in common use, is the asynchronous technique. With this technique, the memory works in an asynchronous manner with respect to the processor, i.e. memory access is not synchronized with the processor clock.

Fig.1.2 Semiconductor Memory organization


A device for storing digital information that is fabricated using integrated circuit technology is known as semiconductor memory. Examples include ROM, RAM, EPROM, EEPROM, Flash memory, DRAM, SRAM, SDRAM, and the relatively new MRAM.


1.2.1 Semiconductor Memory Classifications

Fig.1.3 Semiconductor Memory Classifications
Memory types:
• DRAM
• EEPROM
• Flash
• FRAM
• MRAM
• Phase change memory
• SDRAM
• SRAM

Semiconductor memory is used in any electronics assembly that uses computer processing
technology. Semiconductor memory is the essential electronics component needed for any
computer based PCB assembly. In addition to this, memory cards have become commonplace items
for temporarily storing data - everything from the portable flash memory cards used for transferring
files, to semiconductor memory cards used in cameras, mobile phones and the like.
The use of semiconductor memory has grown, and the capacity of these memory cards has increased as larger and larger amounts of storage are needed. To meet the growing need for semiconductor memory, many types and technologies are used. As demand grows, new memory technologies are being introduced and the existing types and technologies are being further developed.


A variety of different memory technologies are available, each one suited to different applications. Names such as ROM, RAM, EPROM, EEPROM, Flash memory, DRAM, SRAM, SDRAM, as well as F-RAM and MRAM are available, and new types are being developed to enable improved performance.
Terms like DDR3, DDR4, DDR5 and many more are often seen; these refer to different types of SDRAM semiconductor memory.
In addition to this the semiconductor devices are available in many forms - ICs for printed
board assembly, USB memory cards, Compact Flash cards, SD memory cards and even solid state
hard drives. Semiconductor memory is even incorporated into many microprocessor chips as on-
board memory.

Fig.1.4 Printed circuit board containing computer memory


Semiconductor memory: main types
There are two main types or categories of semiconductor memory technology. These categories differentiate memory by the way in which it operates:
• RAM - Random Access Memory: As the name suggests, RAM or random access memory is a form of semiconductor memory technology that is used for reading and writing data in any order - in other words, as it is required by the processor. It is used for such applications as computer or processor memory, where variables and other data are stored and are required on a random basis. Data is stored and read many times to and from this type of memory.

• Random access memory is used in huge quantities in computer applications, as present-day computing and processing technology requires large amounts of memory to handle the memory-hungry applications used today. Many types of RAM


including SDRAM with its DDR3, DDR4, and soon DDR5 variants are used in huge
quantities.
• ROM - Read Only Memory: A ROM is a form of semiconductor memory technology used where the data is written once and then not changed. In view of this, it is used where data needs to be stored permanently, even when the power is removed - many memory technologies lose the data once the power is removed.
As a result, this type of semiconductor memory technology is widely used for storing programs and data that must survive when a computer or processor is powered down. For example, the BIOS of a computer will be stored in ROM. As the name implies, data cannot be easily written to ROM. Depending on the technology used in the ROM, writing the data into the ROM initially may require special hardware. Although it is often possible to change the data, this again requires special hardware to erase the data ready for new data to be written in.
As can be seen, these two types of memory are very different, and as a result they are used in very different ways. Each of the semiconductor memory technologies outlined below falls into one of these two categories. Each technology offers its own advantages and is used in a particular way, or for a particular application.
Semiconductor memory technologies
There is a large variety of types of ROM and RAM that are available. Often the overall name for
the memory technology includes the initials RAM or ROM and this gives a guide as to the overall
type of format for the memory.
With technology moving forwards apace, not only are the established technologies moving
forwards with SDRAM technology moving from DDR3 to DDR4 and then to DDR5, but Flash
memory used in memory cards is also developing as are the other technologies.
In addition to this, new memory technologies are arriving on the scene and they are starting
to make an impact in the market, enabling processor circuits to perform more effectively.
The different memory types or memory technologies are detailed below:
DRAM: Dynamic RAM is a form of random access memory. DRAM uses a capacitor to store
each bit of data, and the level of charge on each capacitor determines whether that bit is a logical 1
or 0.
However these capacitors do not hold their charge indefinitely, and therefore the data needs
to be refreshed periodically. As a result of this dynamic refreshing it gains its name of being a
dynamic RAM. DRAM is the form of semiconductor memory that is often used in equipment

including personal computers and workstations where it forms the main RAM for the computer.
The semiconductor devices are normally available as integrated circuits for use in PCB assembly in
the form of surface mount devices or less frequently now as leaded components.

DRAM advantages and disadvantages


As with any technology, there are various advantages and disadvantages to using it. Balancing the
advantages and disadvantages of using DRAM against another form of technology ensures that the
optimum format is chosen.
Advantages of DRAM
• Very dense
• Low cost per bit
• Simple memory cell structure
Disadvantages of DRAM
• Complex manufacturing process
• Data requires refreshing
• More complex external circuitry required (read and refresh periodically)
• Volatile memory
• Relatively slow operational speed
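The need for periodic refreshing can be shown with a toy Python model (purely illustrative; real cells leak charge continuously, and the numbers here are arbitrary):

```python
# Illustrative model: each cell's capacitor loses charge every tick; a
# refresh re-writes any cell still sensed above the threshold as a full 1.
FULL, THRESHOLD = 100, 50

def tick(cells, leak=10):
    """One time step: every capacitor loses some charge."""
    return [max(0, c - leak) for c in cells]

def refresh(cells):
    """Sense each cell and re-write it: above threshold reads as 1."""
    return [FULL if c >= THRESHOLD else 0 for c in cells]

cells = [FULL, FULL, FULL]
for _ in range(4):          # 4 ticks without refresh: charge decays to 60
    cells = tick(cells)
cells = refresh(cells)      # still above threshold, so data is preserved
```

If the refresh comes too late (charge below the threshold), the stored 1s are read as 0s and the data is lost, which is why the refresh interval matters.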

EEPROM:
This is an Electrically Erasable Programmable Read Only Memory. Data can be written to
these semiconductor devices and it can be erased using an electrical voltage. This is typically
applied to an erase pin on the chip. Like other types of PROM, EEPROM retains the contents of the
memory even when the power is turned off. Also like other types of ROM, EEPROM is not as fast
as RAM.
Within the overall EEPROM family of memory devices, there are two main memory types that are
available. The actual way in which the memory device is operated depends upon the flavour or
memory type and hence its electrical interface.
Serial EEPROM memory: The serial EEPROMs or E2PROMs are more difficult to operate because there are fewer pins and operations must be performed in a serial manner. As the data is transferred in a serial fashion, this also makes them much slower than their parallel EEPROM counterparts.

There are several standard interface types: SPI, I2C, Microwire, UNI/O, and 1-Wire are five common types. These interfaces require between 1 and 4 control signals for operation. A typical
EEPROM serial protocol consists of three phases: OP-Code Phase, Address Phase and Data Phase.
The OP-Code is usually the first 8-bits input to the serial input pin of the EEPROM device (or with
most I²C devices, is implicit); followed by 8 to 24 bits of addressing depending on the depth of the
device, then the read or write data.
Using these interfaces, these semiconductor memory devices may be contained within an eight-pin package. Their chief advantage is that the packages for these memory devices can be made very small.
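The three-phase framing can be sketched as follows (illustrative Python; the READ op-code value and the 16-bit address width are assumptions, since both vary between devices and interfaces):

```python
def build_read_frame(opcode: int, address: int, addr_bits: int = 16) -> list:
    """Assemble the bit sequence clocked into a serial EEPROM:
    an 8-bit op-code phase followed by an address phase, MSB first.
    The data phase then follows on the device's output pin."""
    bits = []
    for i in reversed(range(8)):            # op-code phase
        bits.append((opcode >> i) & 1)
    for i in reversed(range(addr_bits)):    # address phase
        bits.append((address >> i) & 1)
    return bits

# Hypothetical READ op-code 0b00000011 and an arbitrary 16-bit address.
frame = build_read_frame(0b00000011, 0x01A4)
print(len(frame))   # 24 bits: 8 op-code + 16 address
```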
Parallel EEPROM memory: Parallel EEPROM or E2PROM devices normally have an 8 bit wide
bus. Using a parallel bus like this enables it to cover the complete memory of many smaller
processor applications. Typically, devices have chip select and write protect pins and some
microcontrollers used to have an integrated parallel EEPROM for storage of the software.

The operation of a parallel EEPROM is faster than that of a comparable serial EEPROM or
E2PROM, and also the operation is simpler than that of an equivalent serial EEPROM. The
disadvantages are that parallel EEPROMs are larger as a result of the higher pin count. Also they
have been decreasing in popularity in favour of serial EEPROM or Flash as a result of convenience
and cost. Today, Flash memory offers better performance at an equivalent cost, whereas serial
EEPROMs offer advantages of small size.

EPROM:
This is an Erasable Programmable Read Only Memory. These semiconductor devices can
be programmed and then erased at a later time. This is normally achieved by exposing the
semiconductor device itself to ultraviolet light. To enable this to happen there is a circular window
in the package of the EPROM to enable the light to reach the silicon of the device. When the EPROM is in use, this window is normally covered by a label, especially when the data may need to be preserved for an extended period.


The EPROM stores its data as a charge on a capacitor. There is a charge storage capacitor for each cell and this can be read repeatedly as required. However, it is found that after many years the charge may leak away and the data may be lost.

Nevertheless, this type of semiconductor memory used to be widely used in applications where a
form of ROM was required, but where the data needed to be changed periodically, as in a
development environment, or where quantities were low.
Flash memory:
Flash memory may be considered as a development of EEPROM technology. Data can be
written to it and it can be erased, although only in blocks, but data can be read on an individual cell
basis.
To erase and re-programme areas of the chip, programming voltages at levels that are
available within electronic equipment are used. It is also non-volatile, and this makes it particularly
useful. As a result Flash memory is widely used in many applications including USB memory
sticks, compact Flash memory cards, SD memory cards and also now solid state hard drives for
computers and many other applications.

Flash memory reliability and life


When Flash memory was first introduced it had a relatively short lifetime, as the repeated use of the cells caused the memory to degrade. As such, Flash memory was only used for a restricted number of read/write cycles. Nowadays Flash memory technology has been significantly improved and reliability is not the issue that it once was. Nevertheless, Flash memories incorporate a scheme of what is termed wear levelling to reduce the impact on cells or areas of the memory that may be subject to high use.
The wear levelling monitors the usage of different areas of the overall memory and seeks to use all
areas equally, thereby levelling the usage.
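A minimal sketch of the wear-levelling idea (illustrative Python only; real Flash controllers also remap logical blocks, handle bad blocks, and much more):

```python
# Minimal wear-levelling sketch: always write to the least-worn block,
# so erase counts stay roughly equal across the whole device.
class WearLeveller:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def pick_block(self) -> int:
        """Choose the block with the fewest erases so far."""
        return min(range(len(self.erase_counts)),
                   key=lambda b: self.erase_counts[b])

    def write(self) -> int:
        block = self.pick_block()
        self.erase_counts[block] += 1   # each write costs one erase here
        return block

wl = WearLeveller(4)
for _ in range(8):
    wl.write()
print(wl.erase_counts)   # [2, 2, 2, 2] - usage levelled across blocks
```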

F-RAM:
Ferroelectric RAM is a random-access memory technology that has many similarities to the
standard DRAM technology. The major difference is that it incorporates a ferroelectric layer
instead of the more usual dielectric layer and this provides its non-volatile capability. As it offers a
non-volatile capability, F-RAM is a direct competitor to Flash.

MRAM:
This is Magneto-resistive RAM, or Magnetic RAM. It is a non-volatile RAM memory
technology that uses magnetic charges to store data instead of electric charges.
Unlike technologies including DRAM, which require a constant flow of electricity to maintain the
integrity of the data, MRAM retains data even when the power is removed. An additional
advantage is that it only requires low power for active operation. As a result this technology could
become a major player in the electronics industry now that production processes have been
developed to enable it to be produced.

MRAM operation
The operation of this semiconductor memory is based around a structure known as a magnetic tunnel junction (MTJ). These devices consist of sandwiches of two ferromagnetic layers
separated by thin insulating layers. A current can flow across the sandwich and arises from a
tunnelling action and its magnitude is dependent upon the magnetic moments of the magnetic
layers. The magnetic moments of the layers can either point in the same direction, when they are said to be parallel, or in opposite directions, when they are said to be antiparallel. It is found that the current is higher when the magnetic fields are aligned with one another. In this way it is possible to detect the state of the fields.
Magnetic tunnel junctions (MTJ) of the MRAM comprise sandwiches of two ferromagnetic (FM) layers separated by a thin insulating layer which acts as a tunnel barrier. Whereas in many magnetic structures the sense current flows parallel to the layers, in the MTJ the current is passed perpendicular to the layers of the sandwich. The resistance of the MTJ sandwich depends on
the direction of magnetism of the two ferromagnetic layers. Typically, the resistance of the MTJ is
lowest when these moments are aligned parallel to one another, and is highest when antiparallel.
To set the state of the memory cell a write current is passed through the structure. This is
sufficiently high to alter the direction of magnetism of the thin layer, but not the thicker one. A
smaller non-destructive sense current is then used to detect the data stored in the memory cell.

P-RAM / PCM:
This type of semiconductor memory is known as Phase change Random Access Memory, P-
RAM or just Phase Change memory, PCM. It is based around a phenomenon where a form of

chalcogenide glass changes its state or phase between an amorphous state (high resistance) and a
polycrystalline state (low resistance). It is possible to detect the state of an individual cell and hence
use this for data storage. Currently this type of memory has not been widely commercialised, but it
is expected to be a competitor for flash memory.

Advantages of phase change memory:


Non-volatile: Phase change RAM is a non-volatile form of memory, i.e. it does not require power
to retain its information. This enables it to compete directly with flash memory.
Bit alterable: Similar to RAM or EEPROM, P-RAM / PCM is what is termed bit-alterable. This
means that information can be written directly to it without the need for an erase process. This
gives it a significant advantage over flash which requires an erase cycle before new data can be
written to it.
Fast read performance: Phase change RAM, P-RAM / PCM features fast random access times.
This has the advantage that it enables the execution of code directly from the memory, without the
need to copy the data to RAM. The read latency of P-RAM is comparable to single-bit-per-cell NOR flash, while the read bandwidth is similar to that of DRAM.
Scalability: For the future, the scalability of P-RAM is another area where it could provide
advantages, although this is yet to be realised. The reasoning is that both NOR and NAND flash
variants rely on floating gate memory structures, which are difficult to shrink. It is found that as the
memory cell size is reduced, the number of electrons stored on the floating gate is reduced, and this makes these smaller charges more difficult to reliably detect. P-RAM does not store charge, but instead relies on a resistance change. As a result, it is not susceptible to the same scaling difficulties.
Write/erase performance: The write/erase performance of P-RAM is very good, with faster speeds and lower latency than NAND flash. As no erase cycle is required, this delivers an overall significant improvement over flash.
Disadvantages of phase change memory:
Commercial viability: Despite the many claims about the advantages of P-RAM, few companies
have been able to develop chips that have been successfully commercialised.
Multiple bit storage per cell of Flash: The ability of Flash to store and detect multiple bits per cell still gives flash a memory capacity advantage over P-RAM, although P-RAM / PCM has possible scalability advantages for the future.

PROM:
This stands for Programmable Read Only Memory. It is a semiconductor memory which
can only have data written to it once - the data written to it is permanent. These memories are
bought in a blank format and they are programmed using a special PROM programmer.
Typically a PROM will consist of an array of fusible links, some of which are "blown" during the
programming process to provide the required data pattern.
SDRAM:
Synchronous DRAM. This form of semiconductor memory can run at faster speeds than
conventional DRAM. It is synchronised to the clock of the processor and is capable of keeping two
sets of memory addresses open simultaneously. By transferring data alternately from one set of
addresses, and then the other, SDRAM cuts down on the delays associated with non-synchronous
RAM, which must close one address bank before opening the next.
Within the SDRAM family there are several types of memory technologies that are seen. These are
referred to by the letters DDR - Double Data Rate. DDR4 is currently the latest technology, but this
is soon to be followed by DDR5 which will offer some significant improvements in performance.
SDRAM types: DDR versions, etc
SDRAM technology underwent a huge amount of development. As a result several
successive families of the memory were introduced, each with improved performance over the
previous generation.
• SDR SDRAM: This is the basic type of SDRAM that was first introduced. It has now been
superseded by the other types below. It is referred to as single data rate SDRAM, or just
SDRAM.
• DDR SDRAM: DDR SDRAM, also known as DDR1 SDRAM gains its name from the
fact that it is Double Data Rate SDRAM. This type of SDRAM provides data transfer at
twice the speed of the traditional type of SDRAM memory. This is achieved by transferring
data twice per cycle.
• DDR2 SDRAM: DDR2 SDRAM can operate the external bus twice as fast as its
predecessor and it was first introduced in 2003.
• DDR3 SDRAM: DDR3 SDRAM is a further development of the double data rate type of
SDRAM. It provides further improvements in overall performance and speed.


• DDR4 SDRAM: DDR4 SDRAM was the next generation of DDR SDRAM. It provided enhanced performance to meet the demands of the day. It was introduced in the latter half of 2014.
• DDR5 SDRAM: Development of SDRAM technology is moving forwards and the next
generation of SDRAM, labelled DDR5 is currently under development. The specification
was launched in 2016 with expected first production in 2020. DDR5 will reduce power
consumption while doubling bandwidth and capacity.
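The effect of transferring data on both clock edges can be seen in a simple peak-bandwidth calculation (illustrative Python; the DDR4-3200 figures are an assumed example):

```python
def peak_bandwidth_mb_s(bus_clock_mhz: float, bus_width_bits: int,
                        transfers_per_clock: int = 2) -> float:
    """Peak transfer rate in MB/s. DDR moves data on both clock edges,
    so transfers_per_clock is 2 (it is 1 for the original SDR SDRAM)."""
    return bus_clock_mhz * transfers_per_clock * bus_width_bits / 8

# Example: DDR4-3200 with a 1600 MHz bus clock and a 64-bit bus.
print(peak_bandwidth_mb_s(1600, 64))   # 25600.0 MB/s
```

With transfers_per_clock set to 1, the same 64-bit bus at 100 MHz (early SDR SDRAM) yields only 800 MB/s, which illustrates the double-data-rate gain.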

SRAM: Static Random Access Memory. This form of semiconductor memory gains its name
from the fact that, unlike DRAM, the data does not need to be refreshed dynamically.

These semiconductor devices are able to support faster read and write times than DRAM
(typically 10 ns against 60 ns for DRAM), and in addition its cycle time is much shorter because it
does not need to pause between accesses. However, SRAMs consume more power, and they are less dense and more expensive than DRAM. As a result, SRAM is normally used for caches, while DRAM is used as the main semiconductor memory technology. There are two key features of SRAM - Static Random Access Memory - and these set it apart from other types of memory that are available:
• The data is held statically: This means that the data is held in the semiconductor memory
without the need to be refreshed as long as the power is applied to the memory.

• SRAM memory is a form of random access memory: A random access memory is one in
which the locations in the semiconductor memory can be written to or read from in any
order, regardless of the last memory location that was accessed.
Semiconductor memory technology is developing at a fast rate to meet the ever growing needs
of the electronics industry. Not only are the existing technologies themselves being developed, but
considerable amounts of research are being invested in new types of semiconductor memory
technology.
In terms of the memory technologies currently in use, SDRAM versions like DDR4 are being further developed to provide DDR5, which will offer significant performance improvements and form the next generation of SDRAM. Other forms of memory


are seen around the home in the form of USB memory sticks, Compact Flash, CF cards or SD
memory cards for cameras and other applications as well as solid state hard drives for computers.
The semiconductor devices are available in a wide range of formats to meet the differing PCB
assembly and other needs.
COMPARISON OF MEMORY TECHNOLOGIES WITH FRAM

                                      FRAM                        SRAM        EEPROM         FLASH
Non-volatile                          Yes                         No          Yes            Yes
Write endurance                       1 million billion (10^15)   Unlimited   ~500 000       1 000 000
Write speed (for 13 kB)               10 ms                       <10 ms      2 s            -
Average active power (µA/MHz)         80                          <60         up to 10 mA    260
Dynamic bit-addressable
programmability                       Yes                         Yes         No             No

1.3 READ ONLY MEMORIES


ROM, which stands for Read Only Memory, is a memory device or storage medium that stores information permanently. It is also a primary memory unit of a computer, along with the random access memory (RAM). It is called read only memory because we can only read the programs and data stored on it but cannot write to it. This type of memory is non-volatile: the information is stored permanently in such memories during manufacture. A ROM stores the instructions that are required to start a computer; this operation is referred to as bootstrap. ROM chips are used not only in computers but also in other electronic items like washing machines and microwave ovens.
1.3.1 Various types of ROMs and their characteristics
MROM (Masked ROM)

The very first ROMs were hard-wired devices that contained a pre-programmed set of data or instructions. These kinds of ROMs are known as masked ROMs, which are inexpensive.
PROM (Programmable Read Only Memory)
PROM is read-only memory that can be modified only once by a user. The user buys a blank PROM and enters the desired contents using a PROM programmer. Inside the PROM chip, there are small fuses which are burnt open during programming. It can be programmed only once and is not erasable.
EPROM (Erasable and Programmable Read Only Memory)
EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes.
Usually, an EPROM eraser achieves this function. During programming, an electrical charge is
trapped in an insulated gate region. The charge is retained for more than 10 years because the
charge has no leakage path. For erasing this charge, ultra-violet light is passed through a quartz
crystal window (lid). This exposure to ultra-violet light dissipates the charge. During normal use,
the quartz lid is sealed with a sticker.
EEPROM (Electrically Erasable and Programmable Read Only Memory)
EEPROM is programmed and erased electrically. It can be erased and reprogrammed about ten
thousand times. Both erasing and programming take about 4 to 10 ms (millisecond). In EEPROM,
any location can be selectively erased and programmed. EEPROMs can be erased one byte at a
time, rather than erasing the entire chip. Hence, the process of reprogramming is flexible but slow.
Advantages of ROM
• Non-volatile in nature
• Cannot be accidentally changed
• Cheaper than RAMs
• Easy to test
• More reliable than RAMs
• Static and do not require refreshing
• Contents are always known and can be verified

1.4 MEMORY HIERARCHY


The five levels in the memory hierarchy are registers, cache, main memory, magnetic disks, and magnetic tapes. A memory element is a set of storage devices that stores binary data in the form of bits. In general, memory storage can be classified into two categories: volatile and non-volatile.
In computer system design, the memory hierarchy is an enhancement that organizes the memory so as to minimize access time. The memory hierarchy design was developed based on a program behavior known as locality of reference. The figure below clearly demonstrates the different levels of the memory hierarchy:

Fig.1.5 Memory Hierarchy Design


This Memory Hierarchy Design is divided into 2 main types:
• External Memory or Secondary Memory
Comprising magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices
which are accessible by the processor via an I/O module.
• Internal Memory or Primary Memory
Comprising main memory, cache memory, and CPU registers. This is directly accessible
by the processor.
Characteristics of Memory Hierarchy Design
• Capacity:
It is the global volume of information the memory can store. As we move from top to
bottom in the Hierarchy, the capacity increases.
• Access Time:
It is the time interval between the read/write request and the availability of the data. As
we move from top to bottom in the Hierarchy, the access time increases.


• Performance:
When computer systems were designed without a memory hierarchy, the speed gap
between the CPU registers and main memory kept increasing because of the large
difference in access time. This resulted in lower system performance, so an
enhancement was required. That enhancement took the form of the memory hierarchy
design, which increases the performance of the system. One of the most significant
ways to increase system performance is to minimize how far down the memory hierarchy
one has to go to manipulate data.
• Cost per bit:
As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal
Memory is costlier than External Memory.

1.5 CACHE MEMORIES


Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU. It holds frequently requested data and instructions so that they are immediately available to
the CPU when needed. Cache memory is used to reduce the average time to access data from the
Main memory.

Fig.1.6a


Fig.1.6b
Fig.1.6a & 1.6b Cache memory in computer organization
The data or contents of the main memory that are used frequently by CPU are stored in the cache
memory so that the processor can easily access that data in a shorter time. Whenever the CPU
needs to access memory, it first checks the cache memory. If the data is not found in cache
memory, then the CPU moves into the main memory.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache
memory can be represented as:
The cache is the fastest component in the memory hierarchy and approaches the speed of CPU
components.
The basic operation of a cache memory is as follows:
• When the CPU needs to access memory, the cache is examined. If the word is found in the
cache, it is read from the fast memory.
• If the word addressed by the CPU is not found in the cache, the main memory is accessed to
read the word.
• A block of words containing the one just accessed is then transferred from main memory to
cache memory. The block size may vary from one word (the one just accessed) to about 16
words adjacent to the one just accessed.
• The performance of the cache memory is frequently measured in terms of a quantity
called hit ratio.
• When the CPU refers to memory and finds the word in cache, it is said to produce a hit.
• If the word is not found in the cache, it is in main memory and it counts as a miss.
• The ratio of the number of hits divided by the total CPU references to memory (hits plus
misses) is the hit ratio.
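These definitions can be sketched directly; the one-level model below (a hit costs the cache access time, a miss costs the full miss penalty) is a simplification for illustration:

```c
#include <assert.h>

/* Hit ratio = hits / (hits + misses). */
double hit_ratio(long hits, long misses) {
    return (double)hits / (double)(hits + misses);
}

/* One-level model: a hit costs t_cache, a miss costs t_miss (the full
   miss penalty, including the main-memory access). */
double avg_access_time(double h, double t_cache, double t_miss) {
    return h * t_cache + (1.0 - h) * t_miss;
}
```

For example, 90 hits in 100 references gives h = 0.9; with a 1-cycle hit and a 10-cycle miss penalty, the average access time is about 1.9 cycles.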

1.6 PERFORMANCE CONSIDERATIONS


A key design objective of a computer system is to achieve the best possible performance at the
lowest possible cost. The performance of a processor depends on how fast machine instructions
can be brought into the processor for execution and how fast those instructions can be executed.
i) A key design objective is to achieve the best possible performance at the lowest possible cost.
• Price/performance ratio is a common measure.

ii) Performance of a processor depends on:


• How fast machine instructions can be brought into the processor for execution.
• How fast the instructions can be executed.
iii) The memory hierarchy described earlier was created to increase the speed and size of the memory at
an affordable cost.
iv) Data need to be transferred between the various units of this hierarchy as well.
• Speed and efficiency of data transfer between these various memory units also impacts the
performance.
Software performance considerations when using cache
Embedded applications are being implemented on processors that have cache areas of
significant size. Popular architectures in large-scale embedded applications have variants that have
large amounts of on-chip cache. This section introduces basic concepts about cache and discusses
some ideas on developing software to improve performance when running on these processors.
A developer of an application that will run on a general-purpose computer cannot make any
assumptions about the number of other applications that are running on the system and how the
cache is shared between them. A developer of an embedded application has a higher level of
control over the application environment. This higher level of control can provide opportunities to
use cache more effectively. In cases where consistent response times are important, developers may
want to be aware of cases where memory accesses come from main memory instead of from cache.
The use of cache in computer systems can dramatically improve system performance. Increasing
cache size is a cost-effective way to improve performance on microprocessors as the transistor
count on the chip increases.

Interleaving
• Divides the memory system into a number of memory modules. Each module has its own
address buffer register (ABR) and data buffer register (DBR).
• When requests for memory access involve consecutive addresses, the access will be to
different modules.
• Since parallel access to these modules is possible, the average rate of fetching words from
the Main memory can be increased.

Fig.1.7 Methods of address layouts


Methods of address layouts
• Consecutive words are placed in a module.
• High-order k bits of a memory address determine the module.
• Low-order m bits of a memory address determine the word within a module.
• When a block of words is transferred from main memory to cache, only one module is
busy at a time.
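The two address layouts amount to extracting the module number from different ends of the address; a small sketch (function names are illustrative) for a memory of 2^(k+m) words split into 2^k modules of 2^m words each:

```c
#include <assert.h>

/* Method (a): high-order k bits select the module, so consecutive
   addresses fall in the same module. */
unsigned module_high_order(unsigned addr, unsigned m) {
    return addr >> m;               /* drop the m-bit word offset */
}

/* Method (b): low-order k bits select the module (interleaving), so
   consecutive addresses cycle through the modules. */
unsigned module_low_order(unsigned addr, unsigned k) {
    return addr & ((1u << k) - 1);  /* keep the low-order k bits */
}
```

With k = 2 and m = 4 (four modules of 16 words), addresses 0-3 all land in module 0 under method (a), but in modules 0, 1, 2, 3 under method (b), so a block transfer can keep all four modules busy in parallel.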
Hit rate and miss penalty
• Hit rate.
• Miss penalty.
• Hit rate can be improved by increasing block size, while keeping cache size constant.
• Block sizes that are neither very small nor very large give best results.
• Miss penalty can be reduced if load-through approach is used when loading new blocks into
cache.
Cache on the processor chip
• In high performance processors 2 levels of caches are normally used.
• Avg access time in a system with 2 levels of cache is
  t_ave = h1*C1 + (1 - h1)*h2*C2 + (1 - h1)*(1 - h2)*M
  where h1 and h2 are the hit rates of the L1 and L2 caches, C1 and C2 are their access
  times, and M is the time to access main memory.
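The two-level average access time can be sketched as a weighted sum (the standard model; the function name is illustrative):

```c
#include <assert.h>

/* t_ave = h1*C1 + (1 - h1)*h2*C2 + (1 - h1)*(1 - h2)*M
   h1, h2 : hit rates of the L1 and L2 caches
   C1, C2 : their access times
   M      : time to access main memory                      */
double two_level_avg(double h1, double C1,
                    double h2, double C2, double M) {
    return h1 * C1
         + (1.0 - h1) * h2 * C2
         + (1.0 - h1) * (1.0 - h2) * M;
}
```

For example, h1 = 0.9, C1 = 1, h2 = 0.9, C2 = 10, M = 100 gives 0.9 + 0.9 + 1.0 = 2.8 cycles.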
Other performance enhancement factors

• Write Buffer

• Write through

• Prefetching

• Lockup free cache

The most important factors affecting processor performance are:



• Instruction Set. This is the processor's built-in code that tells it how to execute its duties.
• Clock Speed.
• Bandwidth.
• Front Side Bus (FSB) Speed.
• On-Board Cache.
• Heat and Heat Dissipation.

1.7 VIRTUAL MEMORY


Virtual memory is a feature of an operating system that enables a computer to
compensate for shortages of physical memory by transferring pages of data from random access
memory to disk storage. This means that when RAM runs low, virtual memory can move data
from it to a space called a paging file.
Virtual memory is used when the computer has no more available random access
memory (RAM). There are times when the amount of RAM needed to hold all running programs
and data is greater than the amount of RAM available to the computer.
Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of the main memory. The addresses a program may use to
reference memory are distinguished from the addresses the memory system uses to identify
physical storage sites, and program-generated addresses are translated automatically to the
corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and
by the amount of secondary memory available, not by the actual number of main storage
locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
• All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of the main memory such that it occupies different places in the main memory
at different times during the course of execution.

• A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution. The combination of dynamic
run-time address translation and use of page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand Paging or
Demand Segmentation.
Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is known
as demand paging.
The process includes the following steps :

Fig. 1.8
• If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
• The OS puts the interrupted process in a blocking state. For the execution to proceed the
OS must bring the required page into the memory.
• The OS will search for the required page in the logical address space.
• The required page will be brought from logical address space to physical address space.
The page replacement algorithms are used for the decision-making of replacing the page
in physical address space.


• The page table will be updated accordingly.


• The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.
• Hence whenever a page fault occurs these steps are followed by the operating system and
the required page is brought into memory.
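The steps above can be sketched as a toy simulation; the structure and names are illustrative, not an actual OS implementation, and page replacement is omitted by assuming a free frame is always available:

```c
#include <assert.h>

#define NPAGES 8

typedef struct { int present; int frame; } PageTableEntry;

static PageTableEntry page_table[NPAGES];   /* all pages start non-resident */
static int page_faults = 0;
static int next_free_frame = 0;

/* Steps 2-5: block the process, fetch the page, update the page table. */
static void handle_page_fault(int page) {
    page_faults++;
    page_table[page].frame = next_free_frame++;  /* "load" from disk */
    page_table[page].present = 1;
}

/* Reference a page: fault if it is not in memory, then continue. */
int access_page(int page) {
    if (!page_table[page].present)   /* step 1: memory-access fault */
        handle_page_fault(page);
    return page_table[page].frame;   /* step 6: execution continues */
}
```

The first reference to a page faults and loads it; later references to the same page find it resident and proceed at full speed.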
Advantages :
• More processes may be maintained in the main memory: Because we are going to load
only some of the pages of any particular process, there is room for more processes. This
leads to more efficient utilization of the processor because it is more likely that at least
one of the more numerous processes will be in the ready state at any particular time.
• A process may be larger than all of the main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in the main
memory as required.
• It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.
Page Fault Service Time :
The time taken to service the page fault is called page fault service time. The page fault service
time includes the time taken to perform all the above six steps.
Let main memory access time  = m
    page fault service time  = s
    page fault rate          = p
Then,
Effective memory access time = (p * s) + (1 - p) * m
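The formula can be evaluated directly; the numbers in the usage note are illustrative:

```c
#include <assert.h>

/* Effective memory access time = p*s + (1 - p)*m
   p : page fault rate
   s : page fault service time
   m : plain memory access time (same unit as s)  */
double effective_access_time(double p, double s, double m) {
    return p * s + (1.0 - p) * m;
}
```

With m = 200 ns, s = 8 000 000 ns (8 ms) and p = 0.001, the effective access time is about 8 200 ns, showing that page faults dominate even at a low fault rate.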

Swapping:
Swapping a process out means removing all of its pages from memory, or marking them so that
they will be removed by the normal page replacement process. Suspending a process ensures that
it is not runnable while it is swapped out. At some later time, the system swaps back the process
from the secondary storage to the main memory. When a process is busy swapping pages in and
out then this situation is called thrashing.


Fig. 1.9
Thrashing:

Fig. 1.10

At any given time, only a few pages of any process are in the main memory and therefore
more processes can be maintained in memory. Furthermore, time is saved because unused pages
are not swapped in and out of memory. However, the OS must be clever about how it manages
this scheme. In the steady-state practically, all of the main memory will be occupied with process
pages, so that the processor and OS have direct access to as many processes as possible. Thus
when the OS brings one page in, it must throw another out. If it throws out a page just before it is
used, then it will just have to get that page again almost immediately. Too much of this leads to a
condition called Thrashing. The system spends most of its time swapping pages rather than
executing instructions. So a good page replacement algorithm is required.
In the given diagram, as the initial degree of multiprogramming increases up to a certain
point (λ), the CPU utilization is very high and the system resources are utilized 100%. But

if we further increase the degree of multiprogramming the CPU utilization will drastically fall
down and the system will spend more time only on the page replacement and the time is taken to
complete the execution of the process will increase. This situation in the system is called
thrashing.
Causes of Thrashing :
1. High degree of multiprogramming : If the number of processes keeps on increasing in the
memory then the number of frames allocated to each process will be decreased. So, fewer frames
will be available for each process. Due to this, a page fault will occur more frequently and more
CPU time will be wasted in just swapping in and out of pages and the utilization will keep on
decreasing.
For example:
Let free frames = 400.
• Case 1: Number of process = 100
Then, each process will get 4 frames.
• Case 2: Number of processes = 400
Each process will get 1 frame.
Case 2 is a condition of thrashing, as the number of processes is increased, frames per
process are decreased. Hence CPU time will be consumed in just swapping pages.
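The arithmetic in the example can be sketched directly:

```c
#include <assert.h>

/* With a fixed pool of free frames shared equally, every additional
   process shrinks each process's share of frames. */
int frames_per_process(int free_frames, int nprocesses) {
    return free_frames / nprocesses;
}
```

400 frames shared by 100 processes leaves 4 frames each; with 400 processes it drops to 1 frame each, and paging activity swamps useful work.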

2. Lack of Frames: If a process has fewer frames, then fewer pages of that process will be able
to reside in memory, and hence more frequent swapping in and out will be required. This may
lead to thrashing. Hence, a sufficient number of frames must be allocated to each process in order to
prevent thrashing.
Recovery of Thrashing :
• Do not allow the system to go into thrashing by instructing the long-term scheduler not to
bring the processes into memory after the threshold.
• If the system is already thrashing, then instruct the mid-term scheduler to suspend some of
the processes so that we can recover the system from thrashing.

1.8 MEMORY MANAGEMENT REQUIREMENTS


The essential requirement of memory management is to provide ways to dynamically
allocate portions of memory to programs at their request, and free it for reuse when no

longer needed. This is critical to any advanced computer system where more than a single process
might be underway at any time.

Fig.1.11
The 5 memory management requirements are as follows:
• Relocation
• Protection
• Sharing
• Logical Organization
• Physical Organization
• Relocation: In a multiprogramming environment the available main memory is shared
among a number of processes. The system swaps active processes in and out of the main
memory to maximize use of the processor by supplying a large pool of ready processes to
execute. If a program has been swapped out to disk, it would be quite limiting to require
that, when it is next swapped back in, it be placed in the same region of main memory it
occupied before. If that location is occupied, the process has to be relocated to a different
area. Therefore, the memory management system must support the relocation of processes.
• Protection: Every process must be protected against unnecessary interference by the other
processes, whether accidental or intentional. Therefore, programs in other processes must
not be able to reference memory locations in a process for writing or reading operation
without permission. Therefore this requirement support process isolation, protection and
access control.

• Sharing: Processes that are work together on some task may need to share access to the
same data structure. Memory management system should therefore allow regulated call up
to shared area of memory without compromising necessary protection. Therefore, this
requirement supports protection and access control.
• Logical organization: Almost invariably, main memory in a computer system is organized
as a linear, or one-dimensional, address space consisting of a sequence of bytes or
words. Secondary memory, at its physical level, is similarly organized. Although this
organization closely mirrors the actual machine hardware, it does not correspond to the way in
which programs are normally constructed. Most programs are organized into modules, some
of which are unmodifiable and some of which contain data that may be modified.
Therefore, this requirement supports the concept of modular programming.
• Physical Organization: The system memory is organized into 2 levels: one level is main
memory and the other is secondary memory. Main memory offers faster access, but it
has a high cost and is a volatile memory with less storage capacity. Secondary memory is
slower and cheaper, but it offers permanent, non-volatile storage with huge capacity.
Therefore, this requirement supports long-term storage and automatic allocation
and management.
1.9 SECONDARY STORAGE DEVICES
Secondary storage devices primarily refer to storage devices that serve as an addition to
the computer's primary storage, RAM and cache memory.
Main memory is a key component of a computer system that works in tandem with secondary
storage. This allows the system to run instructions, while secondary storage retains data. Cloud
storage allows data to be stored at a remote location online.
The need for secondary storage
Computers use main memory such as random access memory (RAM) and cache to hold data that is
being processed. However, this type of memory is volatile - it loses its contents when the computer
is switched off. General purpose computers, such as personal computers and tablets, need to be able
to store programs and data for later use.
Secondary storage is non-volatile, long-term storage. Without secondary storage all programs and
data would be lost the moment the computer is switched off.
There are three main types of secondary storage in a computer system:
• Solid state storage devices, such as USB memory sticks

• Optical storage devices, such as CD, DVD and Blu-ray discs


• Magnetic storage devices, such as hard disk drives
However, not all computers require secondary storage. Embedded computers, such as those found
in a washing machine or central heating system, do not need to store data when the power is turned
off. The instructions needed to run them are stored in read-only memory (ROM) and any user data
is held in RAM.
Solid state
Solid state storage is a special type of storage made from silicon microchips. It can be written to
and overwritten like RAM. However, unlike RAM, it is non-volatile, which means that when the
computer's power is switched off, solid state storage will retain its contents.
Solid state is also used as external secondary storage, for example in USB memory sticks and solid
state drives.
One of the major benefits of solid state storage is that it has no moving parts. Because of this, it is
more portable and durable, and produces less heat compared to traditional magnetic storage devices.
Less heat means that components last longer.
Solid state storage is also faster than traditional hard disk drives because the data is stored
electrically in silicon chips called cells. Within the cells, the binary data is stored by holding an
electrical charge in a transistor with an on/off mode. Unlike RAM, which uses a similar technique,
solid state storage retains its contents even when the power is switched off by using a technology known
as flash memory.
Solid state is an ideal storage medium for many modern devices such as tablets, smartphones and
digital cameras.
Magnetic devices
Magnetic devices such as hard disk drives use magnetic fields to magnetise tiny individual sections
of a metal spinning disk. Each tiny section represents one bit. A magnetised section represents a
binary '1' and a demagnetised section represents a binary '0'. These sections are so tiny that disks
can contain terabytes (TB) of data.
As the disk is spinning, a read/write head moves across its surface. To write data, the head
magnetises or demagnetises a section of the disk that is spinning under it. To read data, the head
makes a note of whether the section is magnetised or not.


Magnetic devices are fairly cheap, high in capacity and durable. However, they are susceptible to
damage if dropped. They are also vulnerable to magnetic fields - a strong magnet might possibly
erase the data the device holds.
Optical devices
Optical devices use a laser to scan the surface of a spinning disc made from metal and plastic. The
disc surface is divided into tracks, with each track containing many flat areas and hollows. The flat
areas are known as lands and the hollows as pits.
When the laser shines on the disc surface, lands reflect the light back, whereas pits scatter the laser
beam. A sensor looks for the reflected light. Reflected light - land - represents a binary '1', and no
reflection - pits - represents a binary '0'.
There are different types of optical media:
• ROM media have data pre-written on them. The data cannot be overwritten. Music, films,
software and games are often distributed this way.
• Read (R) media are blank. An optical device writes data to them by shining a laser onto the
disc. The laser burns pits to represent '0's. The media can only be written to once, but read
many times. Copies of data are often made using these media.
• Read/write RW works in a similar way to R, except that the disc can be written to more than
once.

1.10 ACCESSING I/O DEVICES

A simple arrangement to connect I/O devices to a computer is to use a single bus


arrangement. The bus enables all the devices connected to it to exchange information. Typically,
it consists of three sets of lines used to carry address, data, and control signals. Each I/O device is
assigned a unique set of addresses. When the processor places a particular address on the
address line, the device that recognizes this address responds to the commands issued on the
control lines. The processor requests either a read or a write operation, and the requested data are
transferred over the data lines. When I/O devices and the memory share the same address space,
the arrangement is called memory-mapped I/O.
With memory-mapped I/O, any machine instruction that can access memory can be
used to transfer data to or from an I/O device. For example, if DATAIN is the address of the
input buffer associated with the keyboard, the instruction

Move DATAIN, R0
Reads the data from DATAIN and stores them into processor register R0. Similarly, the
instruction
Move R0, DATAOUT

Sends the contents of register R0 to location DATAOUT, which may be the output data buffer
of a display unit or a printer.
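In C, the same idea is usually expressed with a pointer to a fixed address. DATAIN and DATAOUT below stand for device-register addresses; the real values come from the machine's memory map, so here they are aliased to ordinary variables purely so the sketch can run anywhere:

```c
#include <assert.h>
#include <stdint.h>

static uint8_t fake_datain = 'A';   /* stands in for the keyboard input buffer */
static uint8_t fake_dataout;        /* stands in for the display output buffer */

/* On real hardware these would be fixed addresses from the memory map. */
#define DATAIN  ((volatile uint8_t *)&fake_datain)
#define DATAOUT ((volatile uint8_t *)&fake_dataout)

/* Move DATAIN, R0 : an ordinary load reads a character from the device. */
uint8_t read_char(void) { return *DATAIN; }

/* Move R0, DATAOUT : an ordinary store writes a character to the device. */
void write_char(uint8_t c) { *DATAOUT = c; }
```

The `volatile` qualifier tells the compiler each access really must reach the device register rather than being cached or optimized away.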
Most computer systems use memory-mapped I/O. Some processors have special In
and Out instructions to perform I/O transfers. When building a computer system based on
these processors, the designer has the option of connecting I/O devices to the special I/O
address space or simply incorporating them as part of the memory address space. The I/O
devices examine the low-order bits of the address bus to determine whether they should
respond.

Consider the hardware required to connect an I/O device to the bus. The address decoder enables
the device to recognize its address when this address appears on the address lines. The data
register holds the data being transferred to or from the processor. The status register contains
information relevant to the operation of the I/O device. Both the data and status registers are
connected to the data bus and assigned unique addresses. The address decoder, the data and
status registers, and the control circuitry required to coordinate I/O transfers constitute the
device’s interface circuit.
I/O devices operate at speeds that are vastly different from that of the processor. When a
human operator is entering characters at a keyboard, the processor is capable of executing
millions of instructions between successive character entries. An instruction that reads a
character from the keyboard should be executed only when a character is available in the input
buffer of the keyboard interface. Also, we must make sure that an input character is read only
once.
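A sketch of this wait loop follows; the status-register layout, the SIN bit position, and the way the simulated device "becomes ready" after a few polls are all illustrative assumptions, not a real interface:

```c
#include <assert.h>
#include <stdint.h>

#define SIN_MASK 0x01u              /* bit 0: input character available */

static uint8_t status_reg = 0;
static uint8_t datain_reg = 'K';    /* pretend the keyboard delivered 'K' */
static int polls = 0;

static uint8_t read_status(void) {
    if (++polls >= 3)               /* simulate the character arriving */
        status_reg |= SIN_MASK;
    return status_reg;
}

/* Poll the status register until SIN = 1, then read the character.
   Clearing the flag ensures each character is read exactly once. */
uint8_t wait_and_read(void) {
    while ((read_status() & SIN_MASK) == 0)
        ;                           /* busy-wait */
    status_reg &= (uint8_t)~SIN_MASK;
    return datain_reg;
}
```

Because the processor can execute millions of instructions between keystrokes, this busy-wait wastes cycles; interrupts (covered later in I/O organization) avoid it.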

Difference          RAM                                     ROM

Data retention      RAM is a volatile memory which          ROM is a non-volatile memory which
                    stores data as long as power is         retains data even when power is
                    supplied.                               turned off.

Working type        Data stored in RAM can be               Data stored in ROM can only be read.
                    retrieved and altered.

Use                 Used to temporarily store data          Stores the instructions required
                    currently being processed by            during bootstrap of the computer.
                    the CPU.

Speed               It is a high-speed memory.              It is much slower than RAM.

CPU interaction     The CPU can access the data             The CPU cannot access the data stored
                    stored on it directly.                  on it unless the data is first copied
                                                            into RAM.

Size and capacity   Large size with higher capacity.        Small size with less capacity.

Used as/in          CPU cache, primary memory.              Firmware, micro-controllers.

Accessibility       The data stored is easily               The data stored is not as easily
                    accessible.                             accessible as in RAM.

Cost                Costlier.                               Cheaper than RAM.


1.11 STANDARD I/O INTERFACES

The processor bus is the bus defined by the signals on the processor chip itself.
Devices that require a very high-speed connection to the processor, such as the main
memory, may be connected directly to this bus. For electrical reasons, only a few devices
can be connected in this manner. The motherboard usually provides another bus that
can support more devices. The two buses are interconnected by a circuit, which we
will call a bridge, that translates the signals and protocols of one bus into those of the
other. Devices connected to the expansion bus appear to the processor as if they were
connected directly to the processor’s own bus. The only difference is that the bridge
circuit introduces a small delay in data transfers between the processor and those
devices.

It is not possible to define a uniform standard for the processor bus. The
structure of this bus is closely tied to the architecture of the processor. It is also dependent
on the electrical characteristics of the processor chip, such as its clock speed. The
expansion bus is not subject to these limitations, and therefore it can use a
standardized signaling scheme. A number of standards have been developed. Some
have evolved by default, when a particular design became commercially successful.
For example, IBM developed a bus they called ISA (Industry Standard Architecture)
for their personal computer known at the time as PC AT.
Some standards have been developed through industrial cooperative efforts, even
among competing companies driven by their common self-interest in having compatible
products. In some cases, organizations such as the IEEE (Institute of Electrical and
Electronics Engineers), ANSI (American National Standards Institute), or international
bodies such as ISO (International Standards Organization) have blessed these
standards and given them an official status.

A given computer may use more than one bus standard. A typical Pentium
computer has both a PCI bus and an ISA bus, thus providing the user with a wide
range of devices to choose from.

Fig.1.12 An example of a computer system using different interface standards: the processor and main memory sit on the processor bus, which a bridge connects to a PCI bus carrying additional memory, a SCSI controller, an Ethernet interface, a USB controller, and an ISA interface.

1.12 Peripheral Component Interconnect (PCI) Bus:-

The PCI bus is a good example of a system bus that grew out of the need for
standardization. It supports the functions found on a processor bus but in a standardized
format that is independent of any particular processor. Devices connected to the PCI bus
appear to the processor as if they were connected directly to the processor bus. They are
assigned addresses in the memory address space of the processor.

The PCI follows a sequence of bus standards that were used primarily in IBM
PCs. Early PCs used the 8-bit XT bus, whose signals closely mimicked those of
Intel’s


80x86 processors. Later, the 16-bit bus used on the PC AT computers became known
as the ISA bus. Its extended 32-bit version is known as the EISA bus. Other buses
developed in the eighties with similar capabilities are the Micro Channel used in IBM
PCs and the NuBus used in Macintosh computers.

The PCI was developed as a low-cost bus that is truly processor independent. Its
design anticipated a rapidly growing demand for bus bandwidth to support high-speed
disks and graphic and video devices, as well as the specialized needs of multiprocessor
systems. As a result, the PCI is still popular as an industry standard almost a decade
after it was first introduced in 1992.

An important feature that the PCI pioneered is a plug-and-play capability for


connecting I/O devices. To connect a new device, the user simply connects the device
interface board to the bus. The software takes care of the rest.

Data Transfer:-

In today’s computers, most memory transfers involve a burst of data rather


than just one word. The reason is that modern processors include a cache memory. Data
are transferred between the cache and the main memory in burst of several words each.
The words involved in such a transfer are stored at successive memory locations. When
the processor (actually the cache controller) specifies an address and requests a read
operation from the main memory, the memory responds by sending a sequence of data
words starting at that address. Similarly, during a write operation, the processor sends a
memory address followed by a sequence of data words, to be written in successive
memory locations starting at the address. The PCI is designed primarily to support this
mode of operation. A read or write operation involving a single word is simply treated
as a burst of length one.
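The burst idea above can be sketched in a few lines of Python. This is an illustrative model only, not real PCI signaling; the function names and the word-addressed memory (a plain dictionary) are assumptions made for the example.

```python
# Illustrative model of burst transfers: a read returns a sequence of words
# from successive memory locations starting at the given address, and a
# single-word access is simply a burst of length one.

def burst_read(memory, start_addr, count):
    """Return `count` words from successive locations starting at start_addr."""
    return [memory[start_addr + i] for i in range(count)]

def burst_write(memory, start_addr, words):
    """Write a sequence of words to successive locations starting at start_addr."""
    for i, w in enumerate(words):
        memory[start_addr + i] = w

memory = {addr: 0 for addr in range(16)}       # toy word-addressed memory
burst_write(memory, 4, [0xA, 0xB, 0xC, 0xD])   # e.g. a cache line written back
print(burst_read(memory, 4, 4))                # -> [10, 11, 12, 13]
print(burst_read(memory, 4, 1))                # a single word: burst of length one
```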

The bus supports three independent address spaces: memory, I/O, and
configuration. The first two are self-explanatory. The I/O address space is intended
for use with processors, such as the Pentium, that have a separate I/O address space.
However, as noted earlier, the system designer may choose to use memory-mapped I/O even
when a separate I/O address space is available. In fact, this is the approach recommended
by the PCI standard, to support its plug-and-play capability. A 4-bit command that
accompanies the address identifies which of the three spaces is being used in a given
data transfer operation.

The signaling convention on the PCI bus is similar to the one described earlier,
where we assumed that the master maintains the address information on the bus until the
data transfer is completed. But this is not necessary. The address is needed only long
enough for the slave to be selected. The slave can store the address in its internal
buffer. Thus, the address is needed on the bus for one clock cycle only, freeing the
address lines to be used for sending data in subsequent clock cycles. The result is a
significant cost reduction, because the number of wires on a bus is an important cost
factor. This approach is used in the PCI bus.
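The multiplexing described above can be sketched as follows. The sketch records what appears on the shared address/data lines in each clock cycle; the function name and the cycle log are invented for illustration and do not model the actual PCI control signals.

```python
# Sketch of multiplexed address/data lines: the same lines carry the address
# for one clock cycle, then data words in the cycles that follow, so separate
# address wires are not needed for the rest of the burst.

def pci_read_transaction(memory, start_addr, count):
    ad_lines = []                              # contents of the AD lines per cycle
    ad_lines.append(("address", start_addr))   # cycle 1: master drives the address
    for i in range(count):                     # later cycles: target drives data
        ad_lines.append(("data", memory[start_addr + i]))
    return ad_lines

bus = pci_read_transaction({0x10: 7, 0x11: 8}, 0x10, 2)
print(bus)   # -> [('address', 16), ('data', 7), ('data', 8)]
```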
At any given time, one device is the bus master. It has the right to initiate data
transfers by issuing read and write commands. A master is called an initiator in PCI
terminology. This is either a processor or a DMA controller. The addressed device that
responds to read and write commands is called a target.

Device Configuration:-
When an I/O device is connected to a computer, several actions are needed to
configure both the device and the software that communicates with it.

The PCI simplifies this process by incorporating in each I/O device interface a
small configuration ROM that stores information about that device. The
configuration ROMs of all devices are accessible in the configuration address space.
The PCI initialization software reads these ROMs whenever the system is powered
up or reset. In each case, it determines whether the device is a printer, a keyboard, an
Ethernet interface, or a disk controller. It can further learn about various device options
and characteristics.


Devices are assigned addresses during the initialization process. This means that
during the bus configuration operation, devices cannot be accessed based on their
address, as they have not yet been assigned one. Hence, the configuration address
space uses a different mechanism. Each device has an input signal called Initialization
Device Select, IDSEL#.
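The configuration process above can be sketched as follows. This is a hedged simplification: the configuration ROM is modeled as a dictionary, and the field names, function name, and fixed-size address windows are assumptions for the example, not the actual PCI configuration header layout.

```python
# Toy model of plug-and-play configuration: at power-up, initialization
# software visits each device (selected one at a time via its IDSEL# input),
# reads its configuration ROM to learn what kind of device it is, and then
# assigns it a base address in the memory address space.

def configure_bus(devices, first_base=0x1000, window=0x100):
    assignments = {}
    base = first_base
    for dev in devices:
        rom = dev["config_rom"]         # identifies the class of device
        assignments[rom["name"]] = {"class": rom["class"], "base": base}
        base += window                  # each device gets its own address window
    return assignments

devices = [
    {"config_rom": {"name": "eth0", "class": "Ethernet interface"}},
    {"config_rom": {"name": "disk0", "class": "disk controller"}},
]
assignments = configure_bus(devices)
print(assignments)
```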

The PCI bus has gained great popularity in the PC world. It is also used in many
other computers, such as SUNs, to benefit from the wide range of I/O devices for which
a PCI interface is available. In the case of some processors, such as the Compaq Alpha,
the PCI-processor bridge circuit is built on the processor chip itself, further
simplifying system design and packaging.

1.13 SCSI Bus:-

The acronym SCSI stands for Small Computer System Interface. It refers to a
standard bus defined by the American National Standards Institute (ANSI) under the
designation X3.131. In the original specifications of the standard, devices such as
disks are connected to a computer via a 50-wire cable, which can be up to 25 meters
in length and can transfer data at rates up to 5 megabytes/s.

The SCSI bus standard has undergone many revisions, and its data transfer
capability has increased very rapidly, almost doubling every two years. SCSI-2 and
SCSI-3 have been defined, and each has several options. A SCSI bus may have eight data
lines, in which case it is called a narrow bus and transfers data one byte at a time.
Alternatively, a wide SCSI bus has 16 data lines and transfers data 16 bits at a time.
There are also several options for the electrical signaling scheme used.

Devices connected to the SCSI bus are not part of the address space of the
processor in the same way as devices connected to the processor bus. The SCSI bus is
connected to the processor bus through a SCSI controller. This controller uses DMA to
transfer data packets from the main memory to the device, or vice versa. A packet


may contain a block of data, commands from the processor to the device, or status
information about the device.

To illustrate the operation of the SCSI bus, let us consider how it may be
used with a disk drive. Communication with a disk drive differs substantially from
communication with the main memory.

A controller connected to a SCSI bus is one of two types – an initiator or a


target. An initiator has the ability to select a particular target and to send commands
specifying the operations to be performed. Clearly, the controller on the processor side,
such as the SCSI controller, must be able to operate as an initiator. The disk controller
operates as a target. It carries out the commands it receives from the initiator. The
initiator establishes a logical connection with the intended target. Once this
connection has been established, it can be suspended and restored as needed to transfer
commands and bursts of data. While a particular connection is suspended, other devices
can use the bus to transfer information. This ability to overlap data transfer requests is
one of the key features of the SCSI bus that leads to its high performance.

Data transfers on the SCSI bus are always controlled by the target controller. To
send a command to a target, an initiator requests control of the bus and, after winning
arbitration, selects the controller it wants to communicate with and hands control of
the bus over to it. Then the controller starts a data transfer operation to receive a
command from the initiator.

The processor sends a command to the SCSI controller, which causes the
following sequence of events to take place:

1. The SCSI controller, acting as an initiator, contends for control of the bus.
2. When the initiator wins the arbitration process, it selects the target controller
and hands over control of the bus to it.


3. The target starts an output operation (from initiator to target); in response to


this, the initiator sends a command specifying the required read operation.
4. The target, realizing that it first needs to perform a disk seek operation, sends a
message to the initiator indicating that it will temporarily suspend the
connection between them. Then it releases the bus.
5. The target controller sends a command to the disk drive to move the read
head to the first sector involved in the requested read operation. Then, it
reads the data stored in that sector and stores them in a data buffer. When it is
ready to begin transferring data to the initiator, the target requests control of
the bus. After it wins arbitration, it reselects the initiator controller, thus
restoring the suspended connection.
6. The target transfers the contents of the data buffer to the initiator and then
suspends the connection again. Data are transferred either 8 or 16 bits in
parallel, depending on the width of the bus.
7. The target controller sends a command to the disk drive to perform another
seek operation. Then, it transfers the contents of the second disk sector to
the initiator as before. At the end of this transfer, the logical connection
between the two controllers is terminated.
8. As the initiator controller receives the data, it stores them into the main
memory using the DMA approach.
9. The SCSI controller sends an interrupt to the processor to inform it that the
requested operation has been completed.
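The nine steps above can be condensed into a toy simulation. Everything here is a simplified illustration: the function name, the event log strings, and the sector lists are invented for the example, and real SCSI bus phases (arbitration, selection, message, data) are collapsed into single log entries.

```python
# Toy model of the SCSI read sequence: the initiator wins arbitration and
# selects the target; the target then suspends and later restores the logical
# connection around each disk seek, transferring one sector's data each time.

def scsi_read(sectors):
    log = ["initiator wins arbitration and selects target"]
    received = []
    for i, sector in enumerate(sectors):
        log.append(f"target suspends connection, seeks sector {i}")  # bus is free here
        log.append("target wins arbitration and reselects initiator")
        received.extend(sector)          # data buffer transferred to initiator (DMA)
    log.append("connection terminated; controller interrupts the processor")
    return log, received

log, data = scsi_read([[1, 2], [3, 4]])  # a read spanning two disk sectors
print(data)   # -> [1, 2, 3, 4]
```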

1.14 Universal Serial Bus (USB):-


Universal Serial Bus (USB) is an industry standard that establishes specifications for
connectors, cables, and protocols for communication, connection, and power supply between
personal computers and their peripheral devices. There have been three generations of USB
specifications:
1. USB 1.x
2. USB 2.0
3. USB 3.x


USB was designed to standardize the connection of peripherals such as pointing devices,
keyboards, and digital still and video cameras. It was soon also used by devices such as
printers, portable media players, disk drives, and network adaptors to communicate with
personal computers and to draw electric power. It is common to many devices and has largely
replaced interfaces such as serial ports and parallel ports. USB connectors have also largely
replaced the proprietary battery chargers of portable devices.
Advantages of USB –
The Universal Serial Bus was designed to simplify and improve the interface between personal
computers and peripheral devices when compared with previously existing standard or ad-hoc
proprietary interfaces.
1. The USB interface is self-configuring. This means that the user need not adjust
settings on the device and interface for speed or data format, or configure interrupts,
input/output addresses, or direct memory access channels.
2. USB connectors are standardized at the host, so any peripheral can use any
available receptacle. USB takes full advantage of the additional processing power
that can be economically put into peripheral devices so that they can manage
themselves. USB devices mostly do not have user-adjustable interface settings.
3. The USB interface is hot pluggable, or plug and play, meaning devices can be
exchanged without rebooting the host computer. Small devices can be powered
directly from the USB interface, removing the need for extra power supply cables.
4. The USB interface defines protocols for improving reliability over previous
interfaces and recovery from common errors.
5. Installation of a device relying on the USB standard requires minimal operator
action.
Disadvantages of USB –
1. USB cables are limited in length.
2. USB has a strict “tree” topology and “master-slave” protocol for addressing
peripheral devices. Peripheral devices cannot interact with one another except via
the host, and two hosts cannot communicate over their USB ports directly.
3. Some very high-speed peripheral devices require sustained speeds not available in
the USB standard.
4. For a product developer, the use of USB requires the implementation of a complex
protocol and implies an intelligent controller in the peripheral device.
5. Use of the USB logos on the product requires annual fees and membership in the
organization.
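Disadvantage 2 above, the strict tree topology with host-mediated communication, can be illustrated with a small sketch. The `Host` class and its methods are invented for the example; real USB transfers are scheduled by the host controller in frames, which this sketch does not model.

```python
# Sketch of USB's strict tree topology: peripherals never exchange data with
# one another directly; every transfer passes through the host.

class Host:
    def __init__(self):
        self.devices = {}               # name -> list of data received so far

    def attach(self, name):
        self.devices[name] = []

    def transfer(self, src, dst, data):
        # All traffic is host-mediated: src -> host -> dst. Two peripherals
        # cannot perform this step between themselves.
        assert src in self.devices and dst in self.devices
        self.devices[dst].append(data)

host = Host()
host.attach("keyboard")
host.attach("printer")
host.transfer("keyboard", "printer", b"hello")
print(host.devices["printer"])   # -> [b'hello']
```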


QUESTION BANK
PART A
1. Define memory cycle time?
2. When is a memory unit called as RAM?
3. What is MMU?
4. What is a word?
5. Define static memories?
6.What are the characteristics of semiconductor RAM memories?
7.Why SRAMs are said to be volatile?
8.What are the Characteristics of SRAMs?
9.What are the Characteristics of DRAMs?
10.Define Refresh Circuit?
11.Define Memory Latency?
12.What are asynchronous DRAMs?
13.What are synchronous DRAMs?
14.Define Bandwidth?
15. What is double data rate SDRAMs?
16.What is mother board?
17.What are SIMMs and DIMMs?
18.What is memory Controller?
19. What is Ram Bus technology?
20. What are RDRAMs?
21.What are the special features of Direct RDRAMs?
22.What are RIMMs?
23.What is load-through or early restart?
24.What are the mapping technique?
25.What is a hit?
26.Define hit rate?
27.What are the two ways of constructing a larger module to mount flash chips on a small card?
28.Define miss rate?


29.Define miss penalty?


30.Define access time for magnetic disks?
31.What is phase encoding or Manchester encoding?
32.What is the formula for calculating the average access time experienced by the processor?
33. What is the formula for calculating the average access time experienced by the processor in a
system with two levels of caches?
34.What are prefetch instructions?
35.Define system space?
36.Define user space?
37.What are pages?
38.What is replacement algorithm?
39.What is dirty or modified bit?
40.What is write miss?
41.What is associative research?
42.What is virtual memory?
43.What is virtual address?
44.What is virtual page number?
45.What is page frame?
46.What is Winchester technology?
47.What is a disk drive?
48.What is disk controller?
49.What is main memory address?
50.What is word count?
51.What is Error checking?
52.What is booting?
53.What are the two states of the processor?
54.What is a lockup-free cache?
56.Draw the static RAM cell?


PART B
1. Write a note on Asynchronous and Synchronous DRAMs.
2. Explain in detail about DMA.
3. Describe in detail about the I/O sub system and its components.
4. Explain in detail about Programmed I/O and interrupts.
5. Analyze the memory hierarchy in terms of speed, size and cost.
6. What are the different secondary storage devices? Elaborate on any one of the devices.
7. Explain how the virtual address is converted into a real address in a paged virtual memory
system.
8. Explain the Address Translation in Virtual Memory.
9. Briefly describe magnetic disk principles and also the organization and accessing of data on a
disk.
10. Explain synchronous DRAM technology in detail.
11. Explain the various mapping techniques associated with cache memories.
12. Define cache memory. Explain the mapping process followed in cache memory.
13. Discuss the relative advantages and disadvantages of the mapping techniques used.
14.What is virtual memory? Why is it necessary to implement virtual memory?
15. Explain the virtual memory address translation.
16. Draw and explain the various types of secondary storage devices.
