Unit 4
Memory
Organization
Computer Memory System Overview
• Characteristics of Memory Systems:
• The most important key characteristics are as follows:
• 1. Location: refers to whether memory is internal or
external to the computer.
• Internal memory is main memory.
• There are other forms of internal memory.
• The processor requires its own local memory, in the form of
registers.
• The control unit portion of the processor may also require its
own internal memory.
• Cache is another form of internal memory.
• External memory consists of peripheral storage devices, such
as disk and tape, that are accessible to the processor via I/O
controllers
• 2. Capacity: For internal memory, capacity is expressed in
terms of bytes (1 byte = 8 bits) or words.
• Common word lengths are 8, 16, and 32 bits.
• External memory capacity is in terms of bytes.
• 3. Unit of transfer: This refers to the size of the data that is
transferred in one clock cycle.
• The data that can be transferred in one clock pulse may be
divided into internal data or external data.
➢ Internal Data: This data is directly accessible by memory
with the help of data bus.
➢ External Data: This data is in removable memory or virtual
memory, and transferred in much larger units than a word,
and these are referred to as blocks
• 4. Method of access:
• These include the following:
• 4.1 Sequential access:
• Memory is organized into units of data, called records.
• Access must be made in a specific linear sequence.
• Stored addressing information is used to separate records
and assist in retrieving data.
• A shared read/write mechanism is used, and this must be
moved from its current location to the desired location,
passing and rejecting each intermediate record.
• Thus, the time to access an arbitrary record is highly
variable.
• 4. Method of access:
• 4.2 Direct access:
• Direct access involves a shared read–write mechanism.
• Individual blocks or records have a unique address
based on physical location.
• Access is accomplished by direct access to reach a
general location plus sequential searching, counting, or
waiting to reach the final location.
• Access time is variable.
• 4. Method of access:
• 4.3 Random access:
• Each addressable location in memory has a unique,
physically wired-in addressing mechanism.
• The time to access a given location is independent of
the sequence of prior accesses and is constant.
• Thus, any location can be selected at random and
directly addressed and accessed.
• Main memory and some cache systems are random
access.
• 4. Method of access:
• 4.4 Associative:
• This is a random access type of memory that enables one to
make a comparison of desired bit locations within a word for
a specified match, and to do this for all words
simultaneously.
• Thus, a word is retrieved based on a portion of its contents
rather than its address.
• As with ordinary random-access memory, each location has
its own addressing mechanism, and retrieval time is
constant independent of location or prior access patterns.
• Cache memories may employ associative access.
• 5. Performance:
• Three performance parameters are used:
• 5.1 Access time (latency):
• For random-access memory, this is the time it takes to
perform a read or write operation.
• That is, the time from the instant that an address is
presented to the memory to the instant that data have been
stored or made available for use.
• For non-random-access memory, access time is the time it
takes to position the read–write mechanism at the desired
location.
• 5. Performance:
• 5.2 Memory cycle time:
• This is applied to random-access memory and consists of the
access time plus any additional time required before a
second access can start.
• This additional time may be required for recovery of the
signal lines or to regenerate data if they are read
destructively.
• 5.3 Transfer rate: This is the rate at which data can be
transferred into or out of a memory unit.
• For random-access memory, it is equal to 1/(cycle time).
• 6. Physical types
• The most common physical types are semiconductor
memory, magnetic-surface memory (used for disk and tape),
and optical and magneto-optical memory.
• 7. Physical characteristics
• In a volatile memory, information decays naturally or is lost
when electrical power is switched off.
• In a nonvolatile memory, information once recorded remains
without deterioration until deliberately changed.
• No electrical power is needed to retain information.
• Magnetic-surface memories are nonvolatile.
• Semiconductor memory may be either volatile (RAM) or
nonvolatile (ROM).
• 8. Organization:
• Organization refers to the physical arrangement of bits to
form words.
• Apart from sequential organization, the memory may be
organized as interleaved memory (used in the 8086).
The Memory Hierarchy
• The memory hierarchy indicates that the nearer the memory
is to the processor, the faster its access.
• As one goes down the hierarchy, the following occur:
• a. Decreasing cost per bit
• b. Increasing capacity
• c. Increasing access time
• d. Decreasing frequency of access of the memory by the
processor
• Thus, smaller, more expensive, faster memories are
supplemented by larger, cheaper, slower memories
The Memory Hierarchy
• The registers are the closest to the processor and hence are
the fastest.
• Off-line storage such as magnetic tape is farthest from the
processor and so is the slowest.
• The list of memories, from nearest to farthest, is as
follows:
• 1.Registers 2. L1 cache
• 3. L2 cache 4. Main memory
• 5. Magnetic disk 6. Tape.
Random Access Memory(RAM)
• In RAM, any memory location can be accessed randomly.
• RAM technology is divided into two types: dynamic and
static.
• 1. Dynamic RAM:
• A dynamic RAM (DRAM) is made with cells that store data as
charge on capacitors.
• The presence or absence of charge in a capacitor is interpreted as
a binary 1 or 0.
• As capacitors can discharge, dynamic RAMs require periodic
charge refreshing to maintain data storage.
DRAM Cell Structure
DRAM
• The address line is activated when the bit value from this cell
is to be read or written.
• The transistor acts as a switch that is closed (allowing
current to flow) if a voltage is applied to the address line and
open (no current flows) if no voltage is present on the
address line.
• For the write operation, a voltage signal is applied to the bit
line.
• A high voltage represents 1, and a low voltage represents 0.
• A signal is then applied to the address line, allowing a charge
to be transferred to the capacitor
DRAM
• For the read operation, when the address line is selected,
the transistor turns on and the charge stored on the
capacitor is fed out onto a bit line.
• The readout from the cell discharges the capacitor, which
must be restored to complete the operation.
SRAM
• 2.Static RAM:
• In a SRAM, binary values are stored using traditional flip-flop
logic-gate configurations.
• A static RAM will hold its data as long as power is supplied to it.
• Four transistors (T1,T2,T3,T4) are cross connected in an
arrangement that produces a stable logic state.
• In logic state 1, point C1 is high and point C2 is low; in this state, T1
and T4 are off and T2 and T3 are on.
• In logic state 0, point C1 is low and point C2 is high; in this state, T1
and T4 are on and T2 and T3 are off.
• Both states are stable as long as the direct current (DC)voltage is
applied. Unlike the DRAM, no refresh is needed to retain data.
SRAM Cell Structure
SRAM
• The SRAM address line is used to open or close a switch.
• The address line controls two transistors (T5 and T6).
• When a signal is applied to this line, the two transistors are
switched on, allowing a read or write operation.
• For a write operation, the desired bit value is applied to line
B, while its complement is applied to the other line.
• This forces the four transistors (T1, T2, T3, T4) into the
proper state.
• For a read operation, the bit value is read from line B.
Types of ROM(Read only memory)
• ROM is a type of memory that does not lose its contents
when the power is turned off. For this reason, ROM is also
called nonvolatile memory.
• There are different types of read-only memory, such as
• PROM (Programmable ROM)
• EPROM (Erasable Programmable ROM)
• EEPROM (electrically erasable programmable ROM)
• Flash EPROM
• Mask ROM
• PROM: This is programmable-ROM. These are inexpensive
versions of ROMs.
• EPROM: Erasable Programmable ROM. This is a ROM which
can be erased (typically by exposure to ultraviolet light) and
reprogrammed many times.
• EEPROM: This is Electrically-Erasable PROM. It can be
rewritten many times; however, it is slow because it is
designed to deal with one byte at a time.
• Flash: In this ROM, an entire block is erased and rewritten
with the applied changes.
• Mask: In this type the contents are programmed by the IC
manufacturer; it is not a user-programmable ROM.
Error Correction
• An error correction code (ECC) is used for controlling
errors in data over unreliable or noisy communication
channels.
• The central idea is that the sender encodes the message
with redundant information in the form of an ECC.
• The American mathematician Richard Hamming worked in
this field in the 1940s and in 1950 invented the first error-
correcting code, the Hamming (7,4) code.
• The redundancy allows the receiver to detect a limited
number of errors that may occur anywhere in the message,
and correct these errors without retransmission.
Error Correction: Hamming code
• Hamming code is a set of error-correction codes that can be
used to detect and correct bit errors.
• These errors can occur when computer data is moved or
stored.
• Hamming code makes use of the concept of parity and parity
bits.
• These are bits that are added to data so that the validity of
the data can be checked when it has been received at the
destination.
• Using more than one parity bit, an error-correction code can
identify a single bit error in the data unit and also its
location in the data unit.
Error Correction: Hamming code
• A parity bit can be used to check either even parity or odd
parity of the number of 1 bits in a data unit.
• Computing parity involves counting the number of ones in a
unit of data, and adding either a zero or a one (called
a parity bit ) to make the count odd (for odd parity) or even
(for even parity).
Error Correction: Hamming code
• For example,
• 1001 is a 4-bit data unit containing two 1 bits.
• Since that is an even number, a zero would be added to
maintain even parity.
• To maintain odd parity, a one would be added instead.
• To calculate even parity, the XOR operator is used.
• To calculate odd parity, the XNOR operator is used.
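As a small sketch (assuming nothing beyond the rules above), the parity computation can be written in a few lines of Python:

```python
def parity_bit(bits, odd=False):
    """Return the bit that makes the total count of 1s even
    (even parity) or odd (odd parity)."""
    p = sum(bits) % 2                # XOR of all bits: 1 if the count of 1s is odd
    return p if not odd else 1 - p   # complement (XNOR) for odd parity

data = [1, 0, 0, 1]                  # two 1 bits, an even count
print(parity_bit(data))              # even parity -> 0
print(parity_bit(data, odd=True))    # odd parity  -> 1
```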
Hamming Code
• A 4-bit data unit produces 7 bits of coded data.
• The format of the Hamming-coded data for four data bits
is:
• D7 D6 D5 P4 D3 P2 P1
• The D bits indicate data bits and the P bits indicate parity bits.
• The data bits of the given data are written in positions
D7, D6, D5 and D3.
• The parity bits are set as follows:
• positions 3, 5 and 7 are covered by adjusting bit P1
• positions 3, 6 and 7 are covered by adjusting bit P2
• positions 5, 6 and 7 are covered by adjusting bit P4
Examples of Hamming code
• Make an even-parity Hamming code for the data (1001)
• Sol:- Step 1: Place the data in proper sequence
D7 D6 D5 P4 D3 P2 P1
1 0 0 1
• Step 2: Calculate the parity bits
• P1: count the number of 1s in positions 3, 5 and 7.
• There are two 1s in these positions, i.e. the parity is
already even, so
• P1 = 0.
Examples of Hamming code
• P2 = 0, as the number of 1s in positions 3, 6 and 7 is even.
• Now calculate bit P4.
• P4 = 1, as the number of 1s in positions 5, 6 and 7 is odd.
• Place these parity bits at their positions.
• Thus the final coded data is as follows
D7 D6 D5 P4 D3 P2 P1
1 0 0 1 1 0 0
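The encoding steps above can be sketched in Python (a minimal illustration of the slide's D7 D6 D5 P4 D3 P2 P1 layout, using even parity):

```python
def hamming74_encode(d7, d6, d5, d3):
    p1 = d3 ^ d5 ^ d7                 # even parity over positions 3, 5, 7
    p2 = d3 ^ d6 ^ d7                 # even parity over positions 3, 6, 7
    p4 = d5 ^ d6 ^ d7                 # even parity over positions 5, 6, 7
    return [d7, d6, d5, p4, d3, p2, p1]   # D7 D6 D5 P4 D3 P2 P1 order

print(hamming74_encode(1, 0, 0, 1))   # data 1001 -> [1, 0, 0, 1, 1, 0, 0]
```

This reproduces the worked example: data 1001 yields the coded word 1001100.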
Detecting error in the received hamming code
• Problem:- The received coded data is 1101110. Check whether
there is any error; if so, indicate the erroneous bit and correct the code.
• Soln:-
• Step 1:- Write the coded data as follows
D7 D6 D5 P4 D3 P2 P1
1 1 0 1 1 1 0
• Step 2: calculate parity bits
➢ P1 = 0, and the number of 1s in positions 3, 5 and 7 is even. (It is
correct.)
➢ P2 = 1, and the number of 1s in positions 3, 6 and 7 is odd. (It is
correct.)
➢ P4 = 1, but the number of 1s in positions 5, 6 and 7 is even (It is
wrong).
• Thus bit P4 has the error, and the corrected P4 bit should be 0.
• Thus corrected code is as follows
D7 D6 D5 P4 D3 P2 P1
1 1 0 0 1 1 0
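The check-and-correct procedure can be sketched the same way (positions follow the slide's D7 D6 D5 P4 D3 P2 P1 layout; the three parity checks form a syndrome whose value is the position of the erroneous bit):

```python
def hamming74_correct(code):
    d7, d6, d5, p4, d3, p2, p1 = code
    c1 = p1 ^ d3 ^ d5 ^ d7            # check over positions 1, 3, 5, 7
    c2 = p2 ^ d3 ^ d6 ^ d7            # check over positions 2, 3, 6, 7
    c4 = p4 ^ d5 ^ d6 ^ d7            # check over positions 4, 5, 6, 7
    error_pos = 4 * c4 + 2 * c2 + c1  # 0 means no error detected
    if error_pos:
        code = code.copy()
        code[7 - error_pos] ^= 1      # flip the bit at that position
    return error_pos, code

print(hamming74_correct([1, 1, 0, 1, 1, 1, 0]))   # -> (4, [1, 1, 0, 0, 1, 1, 0])
```

For the received word 1101110, only the P4 check fails, so the syndrome is 4 and bit P4 is corrected, matching the worked solution.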
Advanced DRAM Organization
• One of the most critical system bottlenecks when using high-
performance processors is the interface to internal main
memory.
• This interface is the most important pathway in the entire
computer system.
• The basic building block of main memory is the DRAM chip.
Advanced DRAM Organization
• The traditional DRAM chip is controlled by its internal
architecture and by its interface to the processor’s memory
bus.
• One way to improve the performance of DRAM main
memory is to insert one or more levels of high-speed SRAM
cache between the DRAM main memory and the processor.
• But SRAM is much costlier than DRAM, and the cache
size cannot be increased beyond a certain limit.
Advanced DRAM Organization
• There are some improvements in the DRAM architecture:
1. Synchronous DRAM (SDRAM)
2. Rambus DRAM (RDRAM)
Advanced DRAM Organization
• Synchronous DRAM:
• SDRAM exchanges data with the processor synchronized to
an external clock signal and running at the full speed of the
processor/memory bus.
• With synchronous access, the DRAM moves data in and out
under control of the system clock.
• The SDRAM employs a burst mode to eliminate the address
setup time.
• In burst mode, a series of data bits can be clocked out
rapidly after the first bit has been accessed
SDRAM
Advanced DRAM Organization
• This mode is useful when all the bits to be accessed are in
sequence and in the same row of the array as the initial
access.
• The SDRAM has a multiple-bank internal architecture that is
used for on-chip parallelism.
• The mode register and associated control logic provide a
mechanism to customize the SDRAM to specific system
needs.
• The mode register specifies the burst length, which is the
number of separate units of data synchronously passed onto
the data bus.
Advanced DRAM Organization
• The register also allows the programmer to adjust the
latency between receipt of a read request and the beginning
of data transfer.
• The SDRAM performs best when it is transferring large
blocks of data serially, such as for applications like word
processing, spreadsheets, and multimedia.
SDRAM Pin Assignment
Operation of SDRAM
• Consider the burst length is 4 and the latency is 2.
• The burst read command is initiated by having CS and CAS
low while holding RAS and WE high.
• The address inputs determine the starting column address
for the burst, and the mode register sets the type of burst
(sequential or interleave) and the burst length (1, 2, 4, 8, full
page).
• The delay from the start of the command to when the data
from the first cell appears on the outputs is equal to the
value of the CAS latency that is set in the mode register.
Operation of SDRAM
Rambus DRAM
• RDRAM, developed by Rambus [FARM92, CRIS97], has been
adopted by Intel for its Pentium and Itanium processors.
• RDRAM chips are vertical packages, with all pins on one side.
• The chip exchanges data with the processor over 28 wires no
more than 12 centimeters long.
• The bus can address up to 320 RDRAM chips and is rated at
1.6 GBps
Rambus DRAM
• The special RDRAM bus delivers address and control
information using an asynchronous block-oriented protocol.
• After an initial 480 ns access time, this produces the 1.6
GBps data rate.
• RDRAM gets a memory request over the high-speed bus,
instead of being controlled by the explicit RAS, CAS, R/W, and
CE signals used in conventional DRAMs.
• This request contains the desired address, the type of
operation, and the number of bytes in the operation.
Rambus DRAM
• The configuration of RDRAM consists of a controller and a
number of RDRAM modules connected via a common bus.
• The controller is at one end of the configuration, and the far
end of the bus is a parallel termination of the bus lines.
• The bus includes 18 data lines (16 actual data, two parity)
cycling at twice the clock rate.
• that is, 1 bit is sent at the leading and the trailing edge of each
clock signal.
• There is a separate set of 8 lines (RC) used for address and
control signals.
Rambus DRAM
• There is also a clock signal that starts at the far end from the
controller, propagates to the controller end, and then loops
back.
• A RDRAM module sends data to the controller
synchronously with the clock, and the controller sends data
to an RDRAM synchronously with the clock signal in the
opposite direction.
• The remaining bus lines include a reference voltage, ground,
and power source.
RDRAM Structure
Virtual Memory
• Virtual memory is a memory management capability of an
OS.
• It uses hardware and software to allow a computer to
compensate for physical memory shortages by temporarily
transferring data from random access memory (RAM) to disk
storage.
• Virtual address space is increased using active memory in
RAM and inactive memory in hard disk drives (HDDs) to
form contiguous addresses that hold both
the application and its data.
Virtual Memory
[Figure: Processor, cache memory, main memory, and external memory in a chain, each level linked to the next by a data bus and an address bus.]
Virtual Memory
• Among the primary benefits of virtual memory is its ability
to handle an address space larger than that of physical main memory.
• Software can use more memory than is physically present by
using the HDD as temporary storage, while the memory
management unit translates virtual memory addresses to
physical addresses for the central processing unit.
• Programs use virtual addresses to store instructions and
data; when a program is executed, the virtual addresses are
converted into actual memory addresses.
Paging Mechanism
• Paging unit converts virtual addresses into physical
addresses.
• The address contains page number and word number in that
page.
• The page number is checked in memory and in that page the
corresponding word is read.
• If the page number is not present in memory, a page fault
occurs.
• The desired page is then loaded into memory by a special
routine called the page-fault routine, and this technique is
called demand paging.
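A toy sketch of the translation above, with an assumed page size, assumed page-table contents, and a hypothetical stand-in `load_page_from_disk` helper:

```python
PAGE_SIZE = 1024                      # assumed page size in words

page_table = {0: 5, 1: 9, 3: 2}       # page number -> frame number (resident pages)

def load_page_from_disk(page):
    """Hypothetical page-fault routine: pretend the page lands in frame 7."""
    return 7

def translate(virtual_addr):
    # Split the address into page number and word number within the page.
    page, word = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:        # page fault: load the page on demand
        page_table[page] = load_page_from_disk(page)
    return page_table[page] * PAGE_SIZE + word

print(translate(1034))                # page 1, word 10 -> frame 9 -> 9226
```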
Page Table Structure
TLB(Translation Lookaside Buffer)
• TLB reduces memory access time.
• It maintains a list of virtual addresses and their
corresponding physical addresses.
• If the page's translation is found in the TLB, it is a hit and the
paging mechanism does not need to perform the address
translation; otherwise there is a miss and the paging mechanism
performs the address translation.
Translation Lookaside Buffer
Segmentation
• Addressable memory can be subdivided using segmentation.
• Paging is invisible to the programmer and serves the
purpose of providing the programmer with a larger address
space,
• whereas segmentation is usually visible to the programmer and is
provided as a convenience for organizing programs and data
properly.
• It is a means for associating privilege and protection
attributes with instructions and data.
Segmentation
• Segmentation allows the programmer to view memory as
consisting of multiple address spaces or segments.
• Segments are of variable, dynamic size.
• The OS will assign programs and data to different segments.
• There may be a number of program segments for various
types of programs as well as a number of data segments.
• Each segment may be assigned access and usage rights.
• Memory references consist of a (segment number, offset)
form of address.
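A minimal sketch of a (segment number, offset) translation, with an assumed segment table of (base, limit) pairs to illustrate the variable, dynamic segment sizes:

```python
segment_table = {0: (0, 4000), 1: (5000, 1200)}   # segment -> (base, limit), assumed

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:               # protect against references past the segment
        raise MemoryError("segmentation violation")
    return base + offset

print(seg_translate(1, 100))          # base 5000 + offset 100 -> 5100
```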
Segmentation
• Typically there are four special segment registers used inside
the processor
1. The code segment register(CS)
2. The stack segment register (SS)
3. The extra segment register(ES)
4. The data segment register(DS)
Segmentation
• Advantages of segment registers:
• 1. It simplifies the handling of growing data structures.
• If the programmer does not know ahead of time how large a
particular data structure will become, the data structure can
be assigned its own segment, and the OS will expand or
shrink the segment as needed.
• 2. It allows programs to be altered and recompiled
independently without requiring that an entire set of
programs be relinked and reloaded.
• This is accomplished using multiple segments.
Segmentation
• Advantages of segment registers:
• 3. It lends itself to sharing among processes.
• A programmer can place a utility program or a useful table of
data in a segment that can be addressed by other processes.
• 4. It lends itself to protection.
• A segment can contain a well-defined set of programs or
data, so the programmer or a system administrator can assign
access privileges to the data.
Cache Memory System
• It is used to reduce the average time to access data from
the main memory.
• The cache is a smaller and faster memory which stores
copies of the data from frequently used main memory
locations.
• Most CPUs have different independent caches, including
instruction and data.
Cache Memory System
Cache Memory System
• When the processor needs to read or write a location in
main memory, it first checks for a corresponding entry in the
cache.
• If the processor finds that the memory location is in the
cache, a cache hit has occurred and the data is read from the cache.
• If the processor does not find the memory location in the
cache, a cache miss has occurred.
• For a cache miss, the cache allocates a new entry and copies
in data from main memory, then the request is fulfilled from
the contents of the cache.
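The hit/miss behaviour above implies an average (effective) access time; a small sketch with assumed example timings of 10 ns for the cache and 100 ns for main memory:

```python
def effective_access_time(hit_ratio, cache_time, memory_time):
    # On a hit only the cache is accessed; on a miss the main
    # memory must be accessed as well.
    return hit_ratio * cache_time + (1 - hit_ratio) * (cache_time + memory_time)

print(effective_access_time(0.95, 10, 100))   # average access time in ns
```

With a 95% hit ratio the average works out to 15 ns, far closer to the cache speed than to main-memory speed.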
Cache Memory System
• There are three popular methods of mapping addresses to
cache locations
1. Fully Associative – Search the entire cache for an address
2. Direct – Each address has a specific place in the cache
3. Set Associative – Each address can be in any of a small set
of cache locations
Cache Memory System
• 1. Direct Mapping:
• In direct mapping each memory block is assigned to only one
cache line.
• i.e., with a 128-line cache, memory block j maps to cache line
j mod 128.
• If a line is already occupied when a new block needs to be
loaded, the old block is replaced.
• An address is split into three fields: a tag, a line selector,
and a word selector.
• The cache stores the tag field alongside the data of each
line.
Cache Memory System
• Direct-mapping cache organization
Cache Memory System
• Word/offset: the least significant w bits identify a unique word within a
particular line.
• Line: the next r bits select the cache line to which a memory
block corresponds.
• Tag: the remaining (s − r) most significant bits, where s identifies the
memory block and r is the cache-line field; the tag is compared to
determine whether there is a hit or a miss.
• If there is a miss, the memory line that occupies that
position in the cache is swapped out and replaced with the desired memory line.
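The field split can be sketched in Python, assuming a 128-line cache (r = 7 line bits) and 4 words per line (w = 2 word bits):

```python
LINE_BITS, WORD_BITS = 7, 2           # 128 lines, 4 words per line (assumed)

def split_address(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)   # block number mod 128
    tag  = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

addr = (11 << 9) | (5 << 2) | 3       # build an address with tag 11, line 5, word 3
print(split_address(addr))            # -> (11, 5, 3)
```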
Cache Memory System
• 2. Associative mapping:
• It uses associative memory to store both the content and the
address of the memory word.
• Any block can go into any line of the cache.
• This means that the word field is still used to identify which word
in the block is needed, but the tag becomes all of the
remaining bits.
• This enables the placement of any block at any place in
the cache memory.
• It is considered to be the fastest and the most flexible
mapping form.
Cache Memory System
• Tag field identifies block of memory from where the
line has been copied into the cache memory.
• All the cache tags are searched to find out whether
or not the Tag field matches one of the cache tags.
• If so, we have a hit, and if not there's a miss and we
need to replace one of the cache lines by this line
before reading or writing into the cache.
Cache Memory System
• Associative cache organization
Cache Memory System
• 3. Set Associative Mapping:
• Set associative addresses the problem of possible thrashing
in the direct mapping method.
• Instead of having exactly one line that a block can map to in
the cache, a few lines are grouped together to create a set.
• Then a block in memory can map to any one of the lines of a
specific set.
Cache Memory System
• Set-associative mapping allows two or more memory blocks
that share the same index address to be present in the
cache at the same time.
• Set associative cache mapping combines the best of direct
and associative cache mapping techniques.
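A small sketch of the set selection, assuming a 2-way cache with 64 sets:

```python
NUM_SETS, WAYS = 64, 2                # assumed cache geometry

def set_index(block_number):
    return block_number % NUM_SETS    # the block may occupy any of the WAYS lines

# Blocks 3, 67 and 131 all map to set 3; a 2-way set can hold two of them
# at once, avoiding the thrashing a direct-mapped cache would suffer.
print(set_index(3), set_index(67), set_index(131))
```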
Cache Memory System
• Set associative cache organization
Interleaved Memory
• Main memory is composed of a collection of DRAM memory
chips.
• A number of chips can be grouped together to form a
memory bank.
• The memory banks are organized in an interleaved manner.
• Each bank can independently service a memory read or write
request, so that a system with K banks can service K requests
simultaneously,
• thus increasing memory read or write rates by a factor of K.
Interleaved Memory
Interleaved Memory
• Interleaving is implemented by distributing memory
addresses among consecutive memory banks.
• If there are consecutive addresses and 4 interleaved memory
modules, the address distribution is as follows:
• Address to memory module 0: 0,4,8,12,16
• Address to memory module 1: 1,5,9,13,17
• Address to memory module 2: 2,6,10,14,18
• Address to memory module 3: 3,7,11,15,19
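The distribution above can be reproduced with a one-line bank-selection rule (the low-order address bits pick the module):

```python
K = 4                                 # number of interleaved memory modules

def bank_of(address):
    return address % K                # low-order bits select the module

for module in range(K):
    print(f"module {module}:", [a for a in range(20) if bank_of(a) == module])
```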
External Memory System
• External storage refers to all of the addressable data that is
not stored on a drive internal to the system.
• It can be used as a backup, to store archived information, or
to transport data.
• External storage is not part of a computer's main memory or
storage, hence it is called secondary or auxiliary storage.
External Memory System
• There are different types of external memories to a
computer system
• 1.ROM
• 2.Magnetic Memory
• 3.Optical Memory
2.Magnetic Memory
• Magnetic storage or magnetic recording is the storage
of data on a magnetized medium.
• Magnetic storage uses different patterns of magnetization in
a magnetisable material to store data.
• The information is accessed using one or more read/write
heads.
• The basic two types of magnetic memory used are:
• 1.Magnetic Disk
• 2.Magnetic Tape
Magnetic Memory
• 1.Magnetic Disk:
• A disk is a circular platter constructed of nonmagnetic material,
called the substrate, coated with a magnetisable material.
• Traditionally, the substrate has been an aluminum or aluminum-alloy
material, but recently glass substrates have been introduced.
• Magnetic Read and Write Mechanisms
• Data are recorded on and later retrieved from the disk via a
conducting coil named the head.
• There are two heads, a read head and a write head.
• During a read or write operation, the head is stationary while the
platter rotates under it.
Magnetic Disk:
• Magnetic disk contains one or more platters.
• Each disk platter is in a flat circular shape.
• Each platter has working surfaces organized into concentric
rings called tracks that store data.
• Digital information is stored on magnetic disks in the form of
microscopically small, magnetized needles.
• Each track is divided into a number of sectors.
• To read information, the arm is positioned over the correct
track.
• Data is read and written by a disk drive which rotates the
discs and positions the read/write heads over the desired
track.
Magnetic Disk:
Data Organization on a Disk
2.Magnetic Tape
• A magnetic tape drive is a storage device that makes use of
magnetic tape as a medium for storage.
• It uses a long strip of narrow plastic film with a thin
magnetisable coating.
• Examples are tape recorders and video tape
recorders.
Optical Memory
• 1. CD-ROM
• The disk is formed from polycarbonate.
• The data on a compact disc (CD) is stored on the disc as
a series of very tiny pits and lands.
• A pit is an indicator of data, equivalent to a "1" in
binary code.
• The lands, or flat surfaces on the CD, are equivalent to
a "0" in binary code.
• When data is read from a CD, a laser directs a fine
beam of light onto the surface of the disc.
Optical Memory: CD-ROM
• The laser follows the data stream of pits and lands from the
inside center of the disc outward in a spiral direction.
• As the laser light shines on the CD's data track, it reflects
one pattern of light from a pit and another pattern from a land
area.
• The resulting reflections are converted into a series of ones and zeros
by a photosensor.
Optical Memory:
CD-ROM
Optical Memory: DVD
• A DVD is a type of optical media used for storing digital data.
• It is the same size as a CD, but has a larger storage capacity.
• The original "DVD-Video" format was standardized in 1995
by a consortium of electronics companies, including Sony,
Panasonic, Toshiba, and Philips.
Allocation Policies
• Memory allocation is the process of assigning blocks of
memory on request.
• Typically the allocator receives memory from the operating
system in a small number of large blocks that it must divide
up to satisfy the requests for smaller blocks.
• This method is also called partition allocation.
• In partition allocation, when there is more than one
partition freely available to accommodate a process's
request, a partition must be selected.
• To choose a particular partition, a partition allocation
method is needed.
Allocation Policies
• The various partition allocation schemes are as follows:
• 1. First Fit: allocate the first partition that is sufficient,
searching from the top of main memory.
• 2. Best Fit: allocate the process to the smallest sufficient
partition among the freely available
partitions.
• 3. Worst Fit: allocate the process to the partition which is
largest sufficient among the freely available partitions
in main memory.
• 4. Next Fit: similar to first fit, but the search for
the first sufficient partition starts from the last allocation point.
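The four policies can be sketched as follows (a simplified model in which each free partition holds at most one process; next fit is modeled by passing the previous allocation index as `start`):

```python
def first_fit(free, size, start=0):
    """Index of the first sufficient partition at or after `start`."""
    for i in range(start, len(free)):
        if free[i] >= size:
            return i
    return None

def best_fit(free, size):
    fits = [i for i, p in enumerate(free) if p >= size]
    return min(fits, key=lambda i: free[i]) if fits else None   # smallest that fits

def worst_fit(free, size):
    fits = [i for i, p in enumerate(free) if p >= size]
    return max(fits, key=lambda i: free[i]) if fits else None   # largest that fits

free = [100, 500, 200, 300, 600]      # partition sizes in KB
print(first_fit(free, 221), best_fit(free, 221), worst_fit(free, 221))  # 1 3 4
```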
Allocation Policies
• The memory partitions are of size 100k, 500k, 200k, 300k and
600k, in order. The given processes are of 221k,
423k, 137k and 522k respectively. How do first-fit, best-fit and
worst-fit place the processes in the memory blocks? Which
algorithm makes the most efficient use of memory?

  Partition no.   Size
  1               100k
  2               500k
  3               200k
  4               300k
  5               600k

• Memory utilization = allocated memory / total memory
RAID
• RAID (Redundant Array of Independent Disks).
• It is a data storage virtualization technology that combines
multiple physical disk drive components into one or more logical
units.
• Its purposes are data redundancy and performance improvement.
• Data is distributed across the drives in one of several ways,
referred to as RAID levels.
• There are seven RAID levels, referred to as RAID 0 to RAID 6.
• In the case of failure of a disk, the parity information that is kept
on the redundant disk is used to recover the data.
• A RAID controller can be used as a level of abstraction
between the OS and the physical disks, presenting groups of
disks as logical units.
• Using a RAID controller can improve performance and help
protect data in case of a crash.
How RAID works
• RAID works by placing data on multiple disks and allowing
input/output (I/O) operations to be performed in a balanced way,
improving performance.
• RAID arrays appear to the operating system (OS) as a single logical
hard disk.
• RAID employs the techniques of disk mirroring or disk striping.
• Mirroring copies identical data onto more than one drive.
• Striping partitions each drive's storage space into units ranging
from a sector (512 bytes) up to several megabytes.
• The stripes of all the disks are interleaved and addressed in order.
RAID levels
• 1. RAID 0 (disk striping)
• RAID 0 (disk striping) is the process of dividing a body of
data into blocks and spreading the data blocks across
multiple storage devices, such as hard disks in a redundant
array of independent disks (RAID) group.
• A stripe consists of the data divided across the set of hard
disks, and a striped unit refers to the data slice on an
individual drive.
• Because striping spreads data across more physical drives,
multiple disks can access the contents of a file, enabling
writes and reads to be completed more quickly.
RAID 0
2. RAID 1
• RAID 1 is also known as disk mirroring.
• It is the replication of data to two or more disks.
• Because both disks are operational, data can be read from
them simultaneously, which makes read operations quite
fast.
• The RAID array will continue to operate as long as one disk is
operational. Write operations, however, are slower because every write
operation is done twice.
• Disk mirroring is a good choice for applications that require
high performance and high availability.
RAID 1
RAID 2
• This level uses parallel access technique.
• This configuration uses striping across disks, with some disks
storing error checking and correcting (ECC) information.
• Error correction corrects all single-bit faults on read operations.
• The RAID 2 level is effective where data error rates are
high.
RAID 2
RAID 3
• This technique uses striping and dedicates one drive to
storing parity information.
• The embedded ECC information is used to detect errors.
• Data recovery is accomplished by calculating the exclusive
OR (XOR) of the information recorded on the other drives.
• Since an I/O operation addresses all the drives at the same
time, RAID 3 cannot overlap I/O.
• For this reason, RAID 3 is best for single-user systems with
long record applications.
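The XOR recovery can be sketched in Python with illustrative byte values (not from the slides): the parity strip is the XOR of the data strips, so any single lost strip is the XOR of the survivors:

```python
from functools import reduce

def xor_strips(strips):
    """Byte-wise XOR across strips of equal length."""
    return [reduce(lambda a, b: a ^ b, column) for column in zip(*strips)]

d0, d1, d2 = [1, 2, 3], [4, 5, 6], [7, 8, 9]
parity = xor_strips([d0, d1, d2])     # written to the dedicated parity drive
# Suppose the drive holding d1 fails: rebuild it from the survivors.
rebuilt = xor_strips([d0, d2, parity])
print(rebuilt == d1)                  # True
```

This works because XOR is its own inverse: XORing the parity with all remaining strips cancels everything except the missing strip.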
RAID 3
RAID 4
• RAID 4 uses an independent access technique: each physical
disk is accessed independently.
• Separate input/output requests can be processed in parallel.
• The data strips are relatively large.
• It uses a separate parity disk.
• All writes must go to the dedicated parity disk, which
causes a performance bottleneck for all write operations.
RAID 4
RAID 5
• This level is based on block-level striping with parity.
• The parity information is striped across each drive, allowing
the array to function even if one drive were to fail.
• The array's architecture allows read and write operations to
cover multiple drives.
• RAID 5 requires at least three disks, but it is often
recommended to use at least five disks for performance
reasons.
RAID 5
RAID 6
• RAID 6, also known as double-parity RAID, uses
two independent parity computations striped across the disks.
• The use of additional parity allows the array to continue to
function even if two disks fail simultaneously.
• This configuration offers very high fault and drive-failure
tolerance.
• RAID 6 is also more expensive because of the two extra disks
required for parity.
• RAID 6 arrays also have slower write performance than RAID 5
arrays, as each set of parity bits must be calculated separately.
RAID 6