Module 5
Memory Organization
                   Introduction
    Memory is used for storing programs and data that are required to
    perform a specific task.
    For the CPU to operate at its maximum speed, it requires uninterrupted,
    high-speed access to the memories that contain programs and data.
    Some of the criteria that need to be taken into consideration while
    deciding which memory is to be used are:
•        Cost
•        Speed
•        Memory access time
•        Data transfer rate
•        Reliability
Block Diagram
    A computer system contains various types of memories like auxiliary memory,
    cache memory, and main memory.
•   Auxiliary Memory
    The auxiliary memory is at the bottom of the hierarchy and is not connected with the
    CPU directly. Although it is slow, it is present in large volume in the system because
    of its low price. This memory is basically used for storing programs and data that are
    not currently needed in the main memory. This helps in freeing the main memory,
    which can then be utilized by other programs that need it.
•   Main Memory
    The main memory is at the second level of the hierarchy. Due to its direct
    connection with the CPU, it is also known as central memory. The main memory
    holds the data and the programs that are needed by the CPU. The main memory
    mainly consists of RAM, which is available in static and dynamic mode.
•   Cache Memory
    Cache memory is at the top level of the memory hierarchy. This is a high speed
    memory used to increase the speed of processing by making current programs
    and data available to the CPU at a rapid rate. Cache memory is usually placed
    between the CPU and the main memory.
                Main Memory
•   Central storage unit in a computer system
•   Large memory
•   Made up of Integrated chips
•   Types:
     RAM (Random access memory)
     ROM (Read only memory)
        1. RAM (Random Access Memory)
    Random access memory (RAM) is the best known form of computer
    memory. RAM is considered "random access" because you can access
    any memory cell directly if you know the row and column that intersect at
    that cell.
    Types of RAM:
•   Static RAM (SRAM)
•   Dynamic RAM (DRAM)
• Static RAM (SRAM)
   – A bit of data is stored using the state of a flip-flop.
   – Retains its value indefinitely, as long as it is kept powered.
   – Mostly used to build the cache memory of the CPU.
   – Faster and more expensive than DRAM.
• Dynamic RAM (DRAM)
   – Each cell stores a bit using a capacitor and a transistor.
   – Large storage capacity
   – Needs to be refreshed frequently.
   – Used to create main memory.
   – Slower and cheaper than SRAM.
                            2. ROM (Read Only Memory)
    ROM is used for storing programs that are permanently resident in the
    computer and for tables of constants that do not change in value once
    the production of the computer is completed.
    The ROM portion of main memory is needed for storing an initial
    program called the bootstrap loader, which starts the computer
    software operating when power is turned on.
    There are five basic ROM types:
•   ROM - Read Only Memory
•   PROM - Programmable Read Only Memory
•   EPROM - Erasable Programmable Read Only Memory
•   EEPROM - Electrically Erasable Programmable Read Only Memory
•   Flash EEPROM memory
Cache Memory: Concept, Locality of Reference
                      Cache memory
•   If the active portions of the program and data are placed in a fast small
    memory, the average memory access time can be reduced
•   Thus reducing the total execution time of the program
•   Such a fast small memory is referred to as cache memory
•   The cache is the fastest component in the memory hierarchy and
    approaches the speed of the CPU components
                Locality of Reference
•   When CPU needs to access memory, the cache is examined
•   If the word is found in the cache, it is read from the fast memory
•   If the word addressed by the CPU is not found in the cache, the main
    memory is accessed to read the word
•   When the CPU refers to memory and finds the word in cache, it is said to produce a
    hit
•   Otherwise, it is a miss
•   The performance of cache memory is frequently measured in terms of a
    quantity called hit ratio
    Hit ratio = hit / (hit+miss)
•   The basic characteristic of cache memory is its fast access time
•   Therefore, very little or no time must be wasted when searching the words
    in the cache
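As a small illustration of the hit-ratio formula above, here is a minimal Python sketch; the hit and miss counts are made-up numbers, not figures from the text.

def hit_ratio(hits, misses):
    # Hit ratio as defined above: hits / (hits + misses)
    return hits / (hits + misses)

# Example: 950 references found in the cache, 50 served from main memory
# (illustrative counts only).
print(hit_ratio(950, 50))   # 0.95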
Spatial Locality – Spatial locality means that instructions or data near the
memory location currently being fetched may be needed soon in the near future.
This is slightly different from temporal locality. Here we are talking about
nearby memory locations, while in temporal locality we are talking about the
very memory location that was just fetched.
If a word is accessed now, then the word adjacent to it is likely to be
accessed next (close proximity).
Temporal Locality – Temporal locality means that the data or instruction currently being
fetched may be needed again soon. So we should store that data or instruction in the cache
memory so that we can avoid searching main memory again for the same data.
  If a word is referred to now or brought into the cache now,
  then the same word is likely to be referred to again in the near future.
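To make the two kinds of locality concrete, the following Python sketch models a tiny cache as a set of recently used block numbers and replays two access patterns: a sequential sweep over an array (spatial locality, since adjacent words share a block) and repeated references to the same few words (temporal locality). The block size, cache capacity, and traces are illustrative assumptions, not values from the text.

from collections import OrderedDict

BLOCK_SIZE = 4      # words per cache block (assumed)
NUM_BLOCKS = 8      # blocks the cache can hold (assumed)

def run_trace(addresses):
    # Replay a word-address trace through a tiny LRU cache of blocks.
    cache = OrderedDict()                 # block number -> present, kept in LRU order
    hits = misses = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        if block in cache:
            hits += 1
            cache.move_to_end(block)      # mark as most recently used
        else:
            misses += 1
            cache[block] = True
            if len(cache) > NUM_BLOCKS:   # evict the least recently used block
                cache.popitem(last=False)
    return hits / (hits + misses)

# Spatial locality: sweep through consecutive words of an array.
sequential = list(range(64))
# Temporal locality: keep re-using the same handful of words.
repeated = [0, 1, 2, 3] * 16

print("sequential sweep hit ratio:", run_trace(sequential))   # 0.75 (3 of every 4 words hit)
print("repeated access hit ratio:", run_trace(repeated))      # ~0.98 (only the first pass misses)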
•   The transformation of data from main memory to cache memory is
    referred to as a mapping process; there are three types of mapping:
     – Associative mapping
     – Direct mapping
     – Set-associative mapping
                 Associative mapping
•   The fastest and most flexible cache organization uses an
    associative memory
•   The associative memory stores both the address and data of
    the memory word
•   This permits any location in cache to store any word from main memory
•   The address value of 15 bits is shown as a five-digit octal number and its
    corresponding 12-bit word is shown as a four-digit octal number
•   A CPU address of 15 bits is placed in the argument register and the
    associative memory is searched for a matching address
•   If the address is found, the corresponding 12-bit data is read and sent
    to the CPU
•   If not, the main memory is accessed for the word
•   If the cache is full, an address-data pair must be displaced to make
    room for a pair that is needed and not presently in the cache
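A minimal Python sketch of the associative-mapping lookup described above: the cache stores address-data pairs, the CPU address acts as the argument that is compared against every stored address, and on a miss the word is fetched from main memory, displacing an existing pair when the cache is full. The FIFO displacement choice and the memory contents are assumptions for illustration only.

class AssociativeCache:
    # Fully associative cache storing (address, data) pairs.
    def __init__(self, size, main_memory):
        self.size = size                  # number of address-data pairs the cache can hold
        self.main_memory = main_memory    # dict: address -> word
        self.lines = {}                   # address -> data (any word can go anywhere)
        self.order = []                   # insertion order, used for FIFO displacement

    def read(self, address):
        if address in self.lines:                 # every stored address is compared
            return self.lines[address], "hit"
        data = self.main_memory[address]          # miss: access main memory
        if len(self.lines) >= self.size:          # cache full: displace one pair
            victim = self.order.pop(0)
            del self.lines[victim]
        self.lines[address] = data
        self.order.append(address)
        return data, "miss"

# Illustrative use with 15-bit octal addresses and 12-bit octal words, as above.
memory = {0o01000: 0o3450, 0o02777: 0o6710, 0o22345: 0o1234}   # made-up contents
cache = AssociativeCache(size=2, main_memory=memory)
print(cache.read(0o02777))   # miss on the first reference
print(cache.read(0o02777))   # hit on the second reference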
                       Direct Mapping
•   Associative memory is expensive compared to RAM
•   In the general case, there are 2^k words in cache memory and 2^n words in
    main memory (in our case, k=9, n=15)
•   The n-bit memory address is divided into two fields: k bits for the index
    and n-k bits for the tag field
•   The index selects one word of the cache; the tag stored with that word is
    compared with the tag field of the CPU address, and a match is a hit,
    otherwise the word is read from main memory and replaces the cache entry
Addressing relationships between main and cache memories
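The following sketch illustrates the direct-mapping address split described above, with k = 9 index bits and n = 15 address bits as in the text: the low-order k bits select a cache word, and the n-k = 6 tag bits stored with that word are compared with the tag of the CPU address. The replace-on-miss detail and the memory contents are assumptions for illustration.

K = 9                       # index bits (cache holds 2^9 = 512 words)
N = 15                      # address bits (main memory holds 2^15 words)
INDEX_MASK = (1 << K) - 1

cache = [None] * (1 << K)   # one (tag, data) entry per index; None means empty

def read(address, main_memory):
    # Direct-mapped read: the index selects the line, the tag decides hit or miss.
    index = address & INDEX_MASK          # low-order k bits
    tag = address >> K                    # remaining n-k bits
    entry = cache[index]
    if entry is not None and entry[0] == tag:
        return entry[1], "hit"
    data = main_memory[address]           # miss: fetch the word from main memory
    cache[index] = (tag, data)            # the new tag-data pair replaces the old one
    return data, "miss"

# Two addresses with the same index but different tags cannot coexist in the
# cache -- exactly the limitation the next section addresses.
memory = {0o00000: 0o1220, 0o02000: 0o5670}   # made-up 12-bit words
print(read(0o00000, memory))   # miss; index 000 now holds tag 00
print(read(0o02000, memory))   # miss; same index 000, tag 02 displaces the previous word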
            Set-Associative Mapping
•   The disadvantage of direct mapping is that two words with the same
    index in their address but with different tag values cannot reside in
    cache memory at the same time
•   Set-associative mapping is an improvement over direct mapping in
    that each word of cache can store two or more words of memory under
    the same index address
•   Each index address refers to two data words and their associated tags
•   Each tag requires six bits and each data word has 12 bits, so the word
    length is 2*(6+12) = 36 bits
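Following the two-way example above (6-bit tags and 12-bit data words, so each cache word holds 2*(6+12) = 36 bits), this sketch keeps two tag-data pairs per index and only displaces a word when both slots of the set are occupied. The FIFO replacement within a set is an assumption, since the text does not fix a policy.

K = 9          # index bits, as in the direct-mapping example
WAYS = 2       # two tag-data pairs share each index address

sets = [[] for _ in range(1 << K)]   # each set holds up to WAYS (tag, data) pairs

def read(address, main_memory):
    # Two-way set-associative read.
    index = address & ((1 << K) - 1)
    tag = address >> K
    for stored_tag, data in sets[index]:      # compare against both tags in the set
        if stored_tag == tag:
            return data, "hit"
    data = main_memory[address]               # miss: fetch from main memory
    if len(sets[index]) >= WAYS:              # set full: displace the oldest pair
        sets[index].pop(0)
    sets[index].append((tag, data))
    return data, "miss"

# Unlike direct mapping, two words with the same index but different tags
# can now reside in the cache at the same time.
memory = {0o00000: 0o1220, 0o02000: 0o5670}   # made-up contents
read(0o00000, memory)
read(0o02000, memory)
print(read(0o00000, memory))   # hit: both words share index 000 but occupy different slots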
         Design of Mapping
     RAM and ROM Chips
•   A RAM chip is better suited for communication with the CPU if it has one
    or more control inputs that select the chip when needed
•   The block diagram of a RAM chip is shown on the next slide; the capacity of
    the memory is 128 words of 8 bits (one byte) per word
RAM
ROM
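As a rough model of such a chip, the sketch below assumes a 128 x 8 RAM with two chip-select inputs (CS1 active high and a second select that is active low), plus read and write controls; the chip responds only when both selects enable it. The signal names are assumptions, not taken from the figure.

class RamChip128x8:
    # 128 words x 8 bits; enabled only when cs1 = 1 and cs2_bar = 0 (active low).
    def __init__(self):
        self.words = [0] * 128                # 7-bit internal address space

    def access(self, cs1, cs2_bar, rd, wr, address, data_in=None):
        if not (cs1 == 1 and cs2_bar == 0):
            return None                       # chip not selected: bus left untouched
        address &= 0x7F                       # 7 address lines
        if wr:
            self.words[address] = data_in & 0xFF
            return None
        if rd:
            return self.words[address]
        return None

# Illustrative use: write a byte, then read it back while the chip is selected.
chip = RamChip128x8()
chip.access(cs1=1, cs2_bar=0, rd=0, wr=1, address=0x15, data_in=0xA7)
print(hex(chip.access(cs1=1, cs2_bar=0, rd=1, wr=0, address=0x15)))   # 0xa7
print(chip.access(cs1=0, cs2_bar=0, rd=1, wr=0, address=0x15))        # None: not selected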
              Memory Address Map
•   Memory Address Map is a pictorial representation of assigned address
    space for each chip in the system
•   To demonstrate an example, assume that a computer system needs 512
    bytes of RAM and 512 bytes of ROM
•   Each RAM chip holds 128 bytes and needs seven address lines, so for the above
    requirement we would need 4 RAM chips of 128 bytes each (128*4 = 512),
    whereas the ROM chip holds 512 bytes and needs 9 address lines, hence
    only one ROM chip is required
•   The hexadecimal address column assigns a range of equivalent hexadecimal
    addresses to each chip
•   Lines 8 and 9 represent four distinct binary combinations that specify
    which RAM chip is chosen
•   When line 10 is 0, the CPU selects one of the RAM chips, and when it is 1, it
    selects the ROM
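Putting the address-map description together, the sketch below decodes a 10-bit CPU address according to the scheme above: lines 1-7 address a byte inside a 128-byte RAM chip, lines 8 and 9 choose one of the four RAM chips, and line 10 distinguishes RAM (0) from ROM (1), with lines 1-9 forming the ROM's internal address. The printed ranges follow from this scheme rather than being quoted from the slides.

def decode(address):
    # Map a 10-bit CPU address to (chip, internal address) per the scheme above.
    line10 = (address >> 9) & 1          # 0 -> one of the RAM chips, 1 -> the ROM
    if line10 == 0:
        chip = (address >> 7) & 0b11     # lines 8 and 9: which of the four RAM chips
        internal = address & 0x7F        # lines 1-7: byte within the 128-byte chip
        return "RAM %d" % (chip + 1), internal
    return "ROM", address & 0x1FF        # lines 1-9: byte within the 512-byte ROM

# Illustrative decodes (hexadecimal CPU addresses):
for addr in (0x000, 0x07F, 0x080, 0x1FF, 0x200, 0x3FF):
    print(hex(addr), "->", decode(addr))
# 0x000-0x07F -> RAM 1, 0x080-0x0FF -> RAM 2, ..., 0x200-0x3FF -> ROM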
Memory connection to the CPU
•   To help understand the mapping procedure, we have the following
    example: