Questions COA

The document discusses various aspects of computer architecture, including multiprocessor systems, memory types, and data structures. It covers key concepts such as addressing modes, stack organization, and the types of buses in a computer system. Additionally, it explains the characteristics of different memory types, including volatile and non-volatile semiconductor memories.


1. A multiprocessor is also classified as MIMD.

2. Which of the following is not a benefit of a multiprocessor? It executes tasks in a serial manner.
3. What is the other name of a tightly coupled multiprocessor? Shared-memory multiprocessor.
4. Another name for a loosely coupled multiprocessor? Distributed-memory multiprocessor.
5. What is not a characteristic of a tightly coupled multiprocessor? Each processor has its own local memory.
6. Full form of SISD: Single Instruction Stream, Single Data Stream
7. Full form of MISD: Multiple Instruction Stream, Single Data Stream
8. Full form of MIMD: Multiple Instruction Stream, Multiple Data Stream
9. Full form of SIMD: Single Instruction Stream, Multiple Data Stream
10. RAID: Redundant Array of Independent Disks
11. Other names for sub-routines: Function, Procedure, Module (all of the above)
12. Remove an element from a stack: Pop
13. Add an element to a stack: Push
14. Which data structure is based on the FIFO principle? Queue
15. Which of the following is not a RAM type? ROM
16. Which ROM is reprogrammable after manufacturing? Flash memory
17. A characteristic of ROM: It is written once, while programming
18. A characteristic of RAM: It is a volatile memory
19. The BIOS is stored in ROM.
20. Memory type used as cache memory in modern computer systems: SRAM
21. Non-volatile memory: ROM
22. Which is not a benefit of virtual memory? Increased program execution speed
23. Primary purpose of virtual memory: To extend the apparent memory capacity beyond the physically installed RAM
24. Mechanism that speeds up virtual-to-physical address translation: Translation Lookaside Buffer (TLB)
25. Increasing the RAM of a computer typically improves performance because fewer page faults occur.

10 Marks
1. Key Characteristics of Multi-Processor System
2. Key Characteristics of Multi-Computer System
3. Difference b/w multi-processor and multi-computer
4. Difference b/w Fine Grain and Coarse Grain
5. Concept of Multi-Threaded Architecture
6. Components of typical Architecture of multi-processor
7. Difference b/w paging and segmentation
8. Performance consideration
9. Data flow in Hybrid Architecture
10.Difference b/w SRAM and DRAM
11.Explain basic of control unit
12.Explain instruction sequencing
13.Illustrate basic computer register with their size and register organisation with
memory bank
14.Auxiliary memory
15.Cache memory
16.Virtual memory
17.RAM, ROM, EPROM, EEPROM
18.Magnetic tape, magnetic disk difference
19.Encoding of machine instructions
20.Types of ROM
21.Levels of RAIDs
22.Concept of semi-conductor memories
23.Levels of memory hierarchy
24.Masked ROM
25.Speedup in Amdahl’s Law
26.Features of direct memory access
27.Message passing
28.Latency hiding
29.List applications of multiprocessor systems
30.List applications of multi-computer systems
31.Scenario where multi-computer system preferred over multiprocessor
32.Features of SIMD
33.Concept of Multi-vectored Architecture
34.Purpose of using assembly language over machine language
35.Operations of stack
36.Operations on queue
37.Addressing modes purpose
38.Sub-routine
39.Feature of tertiary memory
40.Feature of secondary memory
41.Types of RAM
42.Various machine instructions
43.Semi-conductor memories
44.Different types of interrupt
45.Stack organisation of CPU
46.General characteristics of implied and immediate
47.A 10sec efficiency MST
48.Instruction cycle
49.Different types of Buses
50.Operational concept of computer
51.Block diagram of computer (Input/output unit, CPU)
Explain basic Fundamental unit of computer
OR
Operational concept of computer

Input Unit:
1) Think of the input unit as the way you communicate with the computer.
2) It's like the computer's senses, allowing it to receive data and instructions
from the outside world.
3) Input devices include things like keyboards, mice, touchscreens, scanners,
and microphones.
4) Just like how your senses gather information for your brain to process, the
input unit gathers information for the computer to process.
Output Unit:
1) The output unit is how the computer communicates back to you.
2) It's like the computer's way of speaking or showing you what it's been doing.
3) Output devices include monitors, printers, speakers, and projectors.
4) Just as you understand the world through what you see, hear, and touch, the
output unit helps you understand what the computer is doing by showing you
text, images, sounds, or printed documents.
Central Processing Unit (CPU):
1) The CPU is like the computer's decision-maker and problem solver.
2) It's the part that actually does the thinking and processing.
3) The CPU takes in information from the input unit, processes it according to
instructions, and sends out results to the output unit.
4) It's comparable to your brain, which processes information from your senses
and decides how to respond.
In essence, the input unit brings information in, the CPU processes it, and the
output unit shows you the results. It's like a conversation between you and the
computer, where you give it instructions and it responds with what you asked
for or what it's been doing.

OR

Memory Unit –
 It stores programs as well as data and there are two types-
Primary and Secondary Memory
 Primary Memory is quite fast which works at electronic speed.
Programs should be stored in memory before getting executed
 Primary memory is essential but expensive so we went for
secondary memory which is quite cheaper.
 It is used when large amount of data & programs are needed to
store, particularly the information that we don’t access very
frequently.
 Ex- Magnetic Disks, Tapes

Arithmetic & Logic Unit:-
 All the arithmetic & logical operations are performed by the ALU,
and these operations are initiated once the operands are brought
into the processor.

Basic Operational Concepts


 Instructions play a vital role in the proper working of the computer.
 An appropriate program consisting of a list of instructions is stored in the
memory so that the tasks can be started.
 Individual instructions are brought from the memory into the processor,
which executes the specified operations.
 Data to be used as operands are also stored in the memory.
Example:
Add LOCA, R0
 This instruction adds the operand at memory location LOCA to the
operand present in register R0, leaving the sum in R0.
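The effect of the instruction above can be sketched as a tiny register/memory model in Python; the address of LOCA and the initial contents are made-up values for illustration.

```python
# Sketch of "Add LOCA, R0" on a toy machine; the address of LOCA and
# the initial contents are hypothetical values.

LOCA = 0x100
memory = {LOCA: 7}           # assume M[LOCA] holds 7
registers = {"R0": 5}        # assume R0 initially holds 5

# R0 <-- R0 + M[LOCA]
registers["R0"] = registers["R0"] + memory[LOCA]

assert registers["R0"] == 12
```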

Explain the various types of addressing modes?


Immediate Addressing: In this mode, the operand is directly specified within
the instruction itself. For example, if an instruction says "add 5 to the contents
of register A", the value 5 is directly provided in the instruction.
Direct Addressing: Here, the address of the operand is directly specified in the
instruction. For instance, if an instruction says "load the content at memory
address 2000 into register B", the address 2000 is directly mentioned in the
instruction.
Indirect Addressing: In this mode, the instruction contains the address of a
memory location which itself holds the address of the operand. It's like a
double reference. For example, if an instruction says "load the content at the
memory address stored in register C into register D", the content of register C
holds the address of the actual operand.
Register Addressing: Here, the instruction refers directly to a register
containing the operand. For instance, if an instruction says "subtract the
content of register A from register B", it means both operands are in registers A
and B.
Indexed Addressing: This mode involves adding an offset to a base address to
reach the operand. The offset is specified within the instruction or a register.
For example, if an instruction says "fetch the content at the memory address
formed by adding the value in register X to the base address 1000", it implies
using the value in register X as an offset to access memory.
Relative Addressing: It uses the Program Counter (PC) register to compute the
operand's address. The operand's address is calculated by adding an offset to
the current value of the PC. It's commonly used in branch instructions where
the offset determines how far to jump from the current instruction.

These addressing modes provide flexibility to programmers and compilers


when writing code, allowing efficient access to data and instructions stored in
memory or registers. Each mode has its advantages and is suitable for different
types of operations and memory organization.
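As a sketch, the six modes above can be mimicked with a toy memory and register file; all addresses and contents here are hypothetical, chosen only to show how each mode locates its operand.

```python
# Toy memory and register file (all addresses and contents hypothetical),
# showing how each addressing mode locates its operand.

memory = {1000: 42, 1005: 99, 2000: 1000}
registers = {"A": 10, "C": 2000, "X": 5, "PC": 995}

immediate = 5                              # operand carried in the instruction
direct    = memory[2000]                   # address 2000 given directly
indirect  = memory[memory[2000]]           # M[2000] holds the operand's address
register  = registers["A"]                 # operand held in a register
indexed   = memory[1000 + registers["X"]]  # base 1000 plus index register X
relative  = memory[registers["PC"] + 5]    # offset 5 from the program counter
```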

Describe the different types of buses in a computer system?


In computer systems, the term "bus" refers to a communication system that
transfers data between components such as the CPU, memory, and
peripherals. Here's an easy explanation of the different types of buses in a
computer system:

Address Bus: This bus carries information about the memory address where
data needs to be read from or written to. It's like the street address of a house,
telling the CPU or other devices where to find specific data in memory.
Data Bus: The data bus transports the actual data being transferred between
different components, such as between the CPU and memory or between the
CPU and peripherals. It's like the road along which information travels between
different parts of the computer.
Control Bus: The control bus carries signals that control the operations of the
computer system. These signals include commands to read from or write to
memory, signals indicating whether data is being read or written, and signals
for interrupt requests. It's like the traffic signals directing the flow of data
within the computer.
System Bus: Sometimes, the combination of address bus, data bus, and control
bus is referred to as the system bus. It's the main communication highway
within the computer system, facilitating the exchange of information between
different components.
Expansion Bus: This bus allows for the connection of expansion cards or
peripherals to the computer system. Examples include PCI (Peripheral
Component Interconnect) and PCIe (PCI Express) buses, which are used to
connect devices like graphics cards, network adapters, and storage controllers.
Internal Bus: This refers to the buses that operate within a single component,
such as within the CPU or within a memory module. These internal buses
facilitate communication between different parts of the component, enabling
its proper functioning.
Understanding these different types of buses helps in comprehending how data
and instructions are transferred within a computer system, ensuring efficient
communication and coordination between its various components.
What are the General characteristics of implied and immediate addressing
mode?
Implied Addressing Mode:
Characteristics:
In implied addressing mode, the operand is implicitly understood by the CPU.
Instructions in this mode don't explicitly specify any operands.
The operation directly applies to a predefined register or memory location.
Example:
A common example of implied addressing is the "clear accumulator"
instruction in some processors. The CPU knows that when it receives this
instruction, it should clear the accumulator register without needing any
additional operand information.
Immediate Addressing Mode:
Characteristics:
In immediate addressing mode, the operand is directly specified within the
instruction itself.
The data to be operated on is given right in the instruction, rather than
referring to a memory address.
It's useful for operations where the data is constant and known at compile
time.
Example:
An example of immediate addressing is the "add immediate" instruction,
where the CPU adds a constant value directly to a register. For instance, an
instruction might say "add 5 to register A", where 5 is the immediate operand.
These addressing modes provide different ways for the CPU to access data or
perform operations, each suited to different types of instructions and
programming needs. Implied addressing is handy for operations where the
operand is fixed or implied by the instruction itself, while immediate addressing
is useful for operations with constant data that doesn't need to be fetched
from memory.
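A minimal sketch contrasting the two modes, using hypothetical mnemonics: CLA (clear accumulator, implied addressing) names no operand, while ADI (add immediate) carries its operand in the instruction.

```python
# Toy interpreter with hypothetical mnemonics, contrasting the modes:
# CLA (clear accumulator) names no operand, ADI (add immediate) carries one.

acc = 7                       # accumulator register

def execute(instr, operand=None):
    global acc
    if instr == "CLA":        # implied: the operand (the accumulator) is implicit
        acc = 0
    elif instr == "ADI":      # immediate: the operand value is in the instruction
        acc = acc + operand
    return acc

execute("CLA")                # acc becomes 0
execute("ADI", 5)             # acc becomes 5
assert acc == 5
```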
Explain the stack organisation of CPU in detail?
1. What is a Stack?
A stack is a data structure that operates on the principle of Last In, First Out
(LIFO). Think of it like a stack of plates at a cafeteria - you add plates to the top
of the stack and remove them from the same place. Similarly, in computing,
data is pushed onto and popped off the stack.
2. Purpose of Stack in CPU:
The stack in a CPU is primarily used for storing temporary data during program
execution.
It's commonly used for storing return addresses for subroutines or function
calls, local variables, and intermediate results of calculations.
3. Stack Pointer (SP):
The stack pointer is a special register that keeps track of the top of the stack.
When data is pushed onto the stack, the stack pointer is decremented to point
to the new top of the stack.
When data is popped off the stack, the stack pointer is incremented to reflect
the new top of the stack.
4. Push and Pop Operations:
Push: This operation adds data onto the top of the stack. It involves
decrementing the stack pointer and storing data at the memory location
pointed to by the stack pointer.
Pop: This operation removes data from the top of the stack. It involves
retrieving the data from the memory location pointed to by the stack pointer
and then incrementing the stack pointer.
5. Stack Frame:
Each function or subroutine call typically has its own stack frame.
A stack frame contains information such as the return address, parameters,
local variables, and any other relevant data for that function call.
When a function is called, a new stack frame is created and pushed onto the
stack. When the function returns, its stack frame is popped off the stack.
6. Stack Growth:
The stack can grow either downwards or upwards in memory, depending on
the architecture and calling convention.
In most common architectures, such as x86 and ARM, the stack grows
downwards (towards lower memory addresses).
A few architectures, such as HP PA-RISC, use stacks that grow upwards
(towards higher memory addresses).
7. Stack Overflow and Underflow:
Stack Overflow: This occurs when the stack becomes full and there is no space
to push more data onto it. It can happen if a program recurses too deeply or if
the stack size is not large enough.
Stack Underflow: This occurs when an attempt is made to pop data off an
empty stack. It usually indicates a programming error or a corrupted stack.
In summary, the stack organization of a CPU provides a convenient way to
manage data during program execution, particularly for handling function calls
and local variables. It's an essential aspect of computer architecture and plays a
crucial role in the execution of programs.

Semiconductor Memories
Semiconductor memories are types of electronic memory devices that utilize
semiconductor technology, typically based on integrated circuit (IC) chips, to
store and retrieve digital data. These memories are widely used in various
electronic devices, ranging from computers and smartphones to digital
cameras and embedded systems. Semiconductor memories can be broadly
categorized into two main types: volatile and non-volatile memories.

1. Volatile Semiconductor Memories:


Volatile semiconductor memories lose their stored data when the power
supply is removed. They are typically faster and used for temporary data
storage purposes, such as system memory in computers.

Static Random-Access Memory (SRAM):


1. SRAM is a type of volatile semiconductor memory that stores data using
latching circuitry made of flip-flops.
2. SRAM is faster and consumes less power than dynamic RAM (DRAM),
but it is more expensive and has lower density.
3. SRAM is commonly used in CPU caches, as well as in other high-speed
cache memory applications.
Dynamic Random-Access Memory (DRAM):
1. DRAM is a type of volatile semiconductor memory that stores data using
a capacitor to hold charge.
2. DRAM requires periodic refreshing to maintain the stored data, which
leads to slower access times compared to SRAM.
3. DRAM is less expensive and offers higher density than SRAM, making it
suitable for main memory (RAM) in computers and other devices.
2. Non-Volatile Semiconductor Memories:
Non-volatile semiconductor memories retain their stored data even when the
power supply is turned off. They are used for permanent data storage, such as
in solid-state drives (SSDs), memory cards, and firmware storage.
Flash Memory:
1. Flash memory is a type of non-volatile semiconductor memory that
stores data using floating-gate transistors.
2. Flash memory can be electrically erased and reprogrammed in blocks,
allowing for efficient read and write operations.
3. Flash memory is commonly used in USB flash drives, SSDs, memory cards
(e.g., SD cards), and embedded systems for firmware storage.
Electrically Erasable Programmable Read-Only Memory (EEPROM):
1. EEPROM is a type of non-volatile semiconductor memory that can be
electrically erased and reprogrammed byte by byte.
2. EEPROM is slower and has lower density than flash memory but offers
greater flexibility for frequent data updates.
3. EEPROM is used in applications such as storing configuration data,
firmware updates, and small amounts of user data.

Read-Only Memory (ROM):


1. ROM is a type of non-volatile semiconductor memory that stores data
permanently during manufacture and cannot be modified.
2. ROM is used to store firmware, boot loaders, and other essential
software in electronic devices.
3. Different types of ROM include mask ROM, programmable ROM (PROM),
erasable programmable ROM (EPROM), and electrically erasable
programmable ROM (EEPROM).
Semiconductor memories play a crucial role in modern computing and
electronic devices, providing fast, reliable, and efficient data storage solutions
for a wide range of applications. Advances in semiconductor technology
continue to drive improvements in memory performance, capacity, and
energy efficiency.
Types of RAM

RAM (Random Access Memory):

1. RAM (Random Access Memory) stores data that the computer is
currently using.
2. It's super-fast, helping the computer run smoothly and quickly.
3. RAM forgets everything when the computer is turned off.
4. It's temporary storage, used while you're working on something.
5. You can add more RAM to make your computer faster, but there's a limit
to how much you can add.

Types of RAM:
SRAM (Static Random Access Memory) vs DRAM (Dynamic Random Access Memory):
1. SRAM retains information as long as power is supplied; DRAM loses its
charge within a few milliseconds, so its contents must be refreshed
periodically even while powered.
2. SRAM stores bits in flip-flops built from transistors; DRAM stores bits as
charge on capacitors.
3. SRAM needs no refreshing circuitry; DRAM has a dedicated refresh unit.
4. SRAM is faster; DRAM provides slower access speeds.
5. SRAM is expensive; DRAM is cheaper.
6. SRAMs are low-density devices; DRAMs are high-density devices.
7. SRAM is used in cache memories; DRAM is used in main memories.
8. SRAM consumes less power and generates less heat; DRAM uses more
power and generates more heat.
9. SRAM has lower latency and a higher data transfer rate; DRAM has
higher latency and a lower data transfer rate.
10. SRAM is more resistant to radiation than DRAM.
11. SRAM is used in high-performance applications; DRAM is used in
general-purpose applications.
Types of ROM

ROM (Read Only Memory):


1. ROM (Read-Only Memory) stores essential instructions and data
permanently.
2. It keeps its information even when the computer is turned off.
3. ROM contains crucial startup instructions like the BIOS.
4. Unlike RAM, you can't change or overwrite data in ROM.
5. It plays a critical role in ensuring the computer functions correctly
from startup.

Feature comparison of PROM (Programmable ROM), EPROM (Erasable
Programmable ROM), and EEPROM (Electrically Erasable Programmable ROM):
 Write method: PROM is programmed once by the user after manufacture;
EPROM is written electrically using a special programmer; EEPROM is
written and erased using electrical signals, in place.
 Erasure method: PROM is not erasable; EPROM is erased by exposure to
UV light; EEPROM is erased by electrical signals.
 Reusability: PROM is one-time programmable; EPROM and EEPROM can
be erased and reprogrammed multiple times.
 Cost: PROM low; EPROM moderate; EEPROM high.
 Speed: PROM has fast reads; EPROM reads/writes are slower than PROM;
EEPROM reads/writes are slower still but more flexible than EPROM.
 Power consumption: PROM low; EPROM moderate; EEPROM low.
 Data retention: PROM retains data permanently unless physically
damaged; EPROM retains data until UV erasure; EEPROM retains data
until electrically erased.
 Ease of use: PROM is simple to use; EPROM requires special equipment
for erasure; EEPROM is easy to use and flexible.
 Applications: PROM for fixed firmware in simple electronics; EPROM for
firmware storage with occasional updates; EEPROM for data logging,
configuration storage, and firmware updates.
 Durability: PROM very high, physically robust; EPROM good, but repeated
UV exposure can degrade it over time; EEPROM good, but high write-cycle
counts can wear out memory cells.
Features of SIMD

1) Single instruction stream, multiple data streams.
2) Less memory is required.
3) Less expensive.
4) Fewer decoders are needed, since one instruction is decoded for all
processing elements.
5) Simple to implement.
6) Less efficient for irregular workloads, since every unit must execute the
same instruction.
7) SIMD processors are also known as array processors, since they consist
of an array of functional units with a shared controller.
8) Vector instructions are a primary example of SIMD parallelism in
modern CPUs.
9) The Wireless MMX unit is an example of SIMD.
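The "one instruction, many data elements" idea can be illustrated with a toy comparison; plain Python stands in for real vector hardware here, so this is conceptual only.

```python
# Toy illustration of SIMD: the same "add" instruction applied across
# all data lanes at once, versus one element at a time (scalar).

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# Scalar (SISD-style): one add per loop iteration
scalar = []
for i in range(len(a)):
    scalar.append(a[i] + b[i])

# SIMD-style: a single vector operation over all lanes
vector = [x + y for x, y in zip(a, b)]

assert scalar == vector == [11, 22, 33, 44]
```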
Purpose of using assembly language over machine language
1) Assembly language is a low-level language that lets the programmer
communicate directly with the computer hardware.
2) It is used to manipulate hardware directly, access specialised
processor instructions, or address critical performance issues.
3) Assembly language is readable by humans but cannot be executed by
the computer directly; it must first be translated into machine code.
4) In assembly language, operations are represented by mnemonics
such as MOV, ADD, SUB, and END.
5) Assembly language is easier for a human being to understand than
machine language.
6) Modifications and error fixing are easier in assembly language.
7) It is easy to memorise, because alphabetic mnemonics are used.
8) An extra translation step is required before execution, unlike machine
language, which runs directly.
9) An assembler is used as the translator to convert mnemonics into
machine-understandable form.
10) Assembly language is machine dependent and not portable.
11) The most commonly used assembly languages include ARM, MIPS,
and x86.

Level of RAID
1) RAID 0 (Striping):

 Data is split (striped) across multiple disks, improving read/write performance.
 Redundancy: No redundancy or fault tolerance. If one disk fails, all data is lost.
 Minimum Disks: 2

2) RAID 1 (Mirroring):

 Redundancy: Provides redundancy as data is replicated across multiple disks. If one
disk fails, data remains accessible from the mirrored disk.
 Minimum Disks: 2
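The two layouts can be sketched with a hypothetical four-block file and two disks; the block names are made up for illustration.

```python
# Sketch (hypothetical layout): RAID 0 stripes blocks across two disks
# (no redundancy); RAID 1 mirrors every block onto both disks.

data = ["b0", "b1", "b2", "b3"]

# RAID 0: alternate blocks between disk0 and disk1
raid0_disk0 = data[0::2]        # even-numbered blocks
raid0_disk1 = data[1::2]        # odd-numbered blocks

# RAID 1: every block written to both disks
raid1_disk0 = list(data)
raid1_disk1 = list(data)

# If disk1 fails: RAID 1 still has a full copy; RAID 0 has only half the blocks
assert raid1_disk0 == data
assert raid0_disk0 != data
```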

Features of Direct memory access


1) What is DMA: Direct Memory Access (DMA) is a feature in
computers that allows certain hardware devices, like a disk
drive or a network adapter, to transfer data to and from the
computer's memory without involving the CPU directly.
2) Purpose of DMA: DMA improves overall system performance by
transferring data without involving the CPU in every step.
Instead of the CPU having to manage every data transfer,
DMA allows devices to transfer data directly to and from
memory, freeing up the CPU to perform other tasks.
3) How DMA works: When a hardware device needs to transfer
data, it sends a request to the DMA controller. The DMA
controller coordinates the transfer by temporarily taking
control of the system bus and accessing the computer's
memory directly.
4) CPU Involvement: With DMA, the CPU sets up the initial
parameters for the data transfer, such as the memory
addresses and the amount of data to transfer. After setting up
these parameters, the CPU can continue executing other tasks
while the DMA controller handles the actual data transfer.
5) Benefits of DMA: DMA significantly reduces the workload on
the CPU, allowing it to focus on executing instructions and
performing computations. This leads to improved system
performance and responsiveness, especially during data-
intensive operations like file transfers or network
communication.
6) Applications of DMA: DMA is commonly used in various
hardware devices such as disk controllers, network adapters,
and graphics cards to efficiently transfer large amounts of
data between the device and memory without burdening the
CPU. This technology is essential for achieving high-speed
data transfer rates and higher system performance in modern
computers.

Memory Hierarchy
In computer system design, the memory hierarchy is a way to
organise memory so as to minimise access time. The levels of the
hierarchy are described below.

Types of Memory Hierarchy

This memory hierarchy design is divided into 2 main types:
1. External Memory or Secondary Memory: Comprising magnetic disk,
optical disk, and magnetic tape, i.e. peripheral storage devices which are
accessible by the processor via an I/O module. Secondary memory stores
data permanently, even when the power supply is off.
2. Internal Memory or Primary Memory: Comprising main memory,
cache memory, and CPU registers. This is directly accessible by the
processor. Primary memory stores data only while the power supply is
on; it loses data when the power is off.

Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU.
They are used to store the most frequently used data and
instructions. Registers have the fastest access time and the
smallest storage capacity, typically ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the
CPU. It stores frequently used data and instructions that have
been recently accessed from the main memory. Cache memory
is designed to minimize the time it takes to access data by
providing the CPU with quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is
the primary memory of a computer system. It has a larger
storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently
in use by the CPU.
Types of Main Memory
 Static RAM: Static RAM stores the binary information in flip
flops and information remains valid until power is supplied. It has
a faster access time and is used in implementing cache memory.
 Dynamic RAM: It stores the binary information as a charge on
the capacitor. It requires refreshing circuitry to maintain the
charge on the capacitors after a few milliseconds. It contains
more memory cells per unit area as compared to SRAM.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-
state drives (SSD), is a non-volatile memory unit that has a
larger storage capacity than main memory. It is used to store
data and instructions that are not currently in use by the CPU.
Secondary storage has the slowest access time and is typically
the least expensive type of memory in the memory hierarchy.
5. Magnetic Disk
Magnetic disks are circular platters made of metal or plastic and
coated with a magnetisable material. They rotate at high speed
inside the drive and are frequently used as secondary storage.
6. Magnetic Tape
Magnetic tape is a plastic film coated with a magnetic recording
material. It is generally used for data backup. Access is
sequential, so reaching a given record is slower: the tape must
first be wound to the right position.

Operations on Stack

1) The computers which use Stack-based CPU Organization are


based on a data structure called a stack. The stack is a list of
data words. It uses the Last In First Out (LIFO) access
method which is the most popular access method in most of
the CPU. A register is used to store the address of the topmost
element of the stack which is known as Stack pointer (SP).
In this organization, ALU operations are performed on stack
data. It means both the operands are always required on the
stack. After manipulation, the result is placed in the stack.
2) The two main operations performed on the stack are Push and
Pop. Both operations take place at one end only (the top).
1. Push –
This operation inserts one operand at the top of the stack and
increments the stack pointer register. The format of the PUSH
instruction is:

PUSH
It inserts a data word from a specified address onto the top of the
stack. Using DR as the data register holding the word, it can be
implemented as:
// Increment SP by 1
SP <-- SP + 1
// Store the word at the new top of the stack
M[SP] <-- DR

2. Pop –
This operation deletes one operand from the top of the stack and
decrements the stack pointer register. The format of the POP
instruction is:
POP
It transfers the data word at the top of the stack to the specified
address. It can be implemented as:
// Read the word at the top of the stack
DR <-- M[SP]
// Decrement SP by 1
SP <-- SP - 1

3) Operation-type instructions do not need an address field in this
CPU organisation, because the operation is performed on the two
operands at the top of the stack. For example:
SUB
This instruction contains only the opcode, with no address field. It
pops the two top data words from the stack, subtracts them, and
pushes the result back onto the top of the stack.
4) PDP-11, Intel’s 8085, and HP 3000 are some examples of
stack-organized computers.
5) The advantages of Stack-based CPU organization –
 Efficient computation of complex arithmetic expressions.
 Execution of instructions is fast because operand data are stored
in consecutive memory locations.
 The length of instruction is short as they do not have an address
field.
6) The disadvantages of Stack-based CPU organization –
 The size of the program increases.
Operations on Queue

1) A queue is a linear data structure that follows the First-In-


First-Out (FIFO) principle. It operates like a line where
elements are added at one end (rear) and removed from
the other end (front).
2) Queues are commonly used in various algorithms and
applications for their simplicity and efficiency in managing
data flow.

3) Basic Operations of Queue Data Structure


 Enqueue (Insert): Adds an element to the rear of the queue.
 Dequeue (Delete): Removes and returns the element from the
front of the queue.
 Peek: Returns the element at the front of the queue without
removing it.
 Empty: Checks if the queue is empty.
 Full: Checks if the queue is full.
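The basic operations listed above can be sketched with Python's `collections.deque`, which supports efficient insertion at the rear and removal at the front:

```python
from collections import deque

# Sketch of the basic FIFO queue operations.
q = deque()

q.append(1)          # Enqueue: add at the rear
q.append(2)
q.append(3)

front = q[0]         # Peek: inspect the front without removing it
first = q.popleft()  # Dequeue: remove and return the front element

print(front, first, list(q))  # 1 1 [2, 3]
print(len(q) == 0)            # Empty check -> False
```

A plain Python `deque` grows as needed, so the "Full" check mainly applies to bounded queues (e.g. `deque(maxlen=n)` or a fixed-size circular buffer).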

Addressing Modes

1) Immediate Addressing: In this mode, the operand is
directly specified within the instruction itself. For example,
if an instruction says "add 5 to the contents of register A",
the value 5 is directly provided in the instruction.

2) Direct Addressing: Here, the address of the operand is
directly specified in the instruction. For instance, if an
instruction says "load the content at memory address 2000
into register B", the address 2000 is directly mentioned in
the instruction.

3) Indirect Addressing: In this mode, the instruction
contains the address of a memory location which itself
holds the address of the operand. It's like a double
reference. For example, if an instruction says "load the
content at the memory address stored in register C into
register D", the content of register C holds the address of
the actual operand.
4) Register Addressing: Here, the instruction refers directly
to a register containing the operand. For instance, if an
instruction says "subtract the content of register A from
register B", it means both operands are in registers A and
B.

5) Indexed Addressing: This mode involves adding an offset
to a base address to reach the operand. The offset is
specified within the instruction or a register. For example, if
an instruction says "fetch the content at the memory
address formed by adding the value in register X to the
base address 1000", it implies using the value in register X
as an offset to access memory.

6) Relative Addressing: It uses the Program Counter (PC)
register to compute the operand's address. The operand's
address is calculated by adding an offset to the current
value of the PC. It's commonly used in branch instructions,
where the offset determines how far to jump from the
current instruction.

7) Autoincrement or Autodecrement Mode: This mode is very
similar to the register indirect mode, except that the
register is automatically incremented after its value is used
to access memory (autoincrement), or decremented before
it is used (autodecrement).

These addressing modes provide flexibility to programmers and
compilers when writing code, allowing efficient access to data
and instructions stored in memory or registers. Each mode has
its advantages and is suitable for different types of operations
and memory organization.
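How a CPU resolves an operand under several of these modes can be sketched as a small simulation. The memory contents, register names, and mode labels below are illustrative assumptions, not drawn from any real ISA:

```python
# Sketch: resolving an operand under different addressing modes.
memory = {2000: 42, 3000: 2000, 1005: 99}
registers = {"C": 3000, "X": 5}

def fetch_operand(mode, value):
    if mode == "immediate":
        return value                           # operand is in the instruction
    if mode == "direct":
        return memory[value]                   # value is the operand's address
    if mode == "indirect":
        return memory[memory[value]]           # value points to a pointer
    if mode == "register":
        return registers[value]                # operand is held in a register
    if mode == "indexed":
        return memory[value + registers["X"]]  # base address + index register
    raise ValueError(f"unknown mode: {mode}")

print(fetch_operand("immediate", 5))    # 5
print(fetch_operand("direct", 2000))    # 42
print(fetch_operand("indirect", 3000))  # 42 (3000 -> 2000 -> 42)
print(fetch_operand("indexed", 1000))   # 99 (1000 + X=5 -> address 1005)
```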

Features of Tertiary memory


 Low cost: Because in tertiary memory data is not accessed
quickly, it is typically less expensive than primary and secondary
storage.
 Large storage capacity: Tertiary storage devices are made to
hold a lot of data, usually between terabytes and petabytes.
 Offsite storage: Tertiary storage systems are frequently used
for offsite storage, which can add security and safeguard against
data loss due to disasters or other unforeseen circumstances.
 Slow access: Tertiary storage is not designed for frequent use,
hence it often accesses more slowly than main and secondary
storage.
 Storage for the long term: Tertiary storage is frequently used
to store data for the long term that is not in active use but must
be kept for regulatory or historical purposes.
 Data backup and recovery: Tertiary storage is frequently used
for data backup and recovery because it offers an affordable and
dependable way to store data that might be required in the
event of data loss or corruption.
 Easy accessibility: Data in tertiary storage can still be
accessed and retrieved when needed, even though it operates at
a slower speed than primary and secondary storage.
 Scalability: Tertiary storage can be easily scaled up or down to
meet changing storage requirements, making it a flexible and
adaptable solution for organizations of any size

Message Passing in Distributed System

1) Message passing refers to the communication medium
used by computers or processes to transfer information
and coordinate their actions. It involves sending and
receiving messages between processes to achieve goals
such as coordination, synchronization, and data sharing.

Types of Message Passing


1. Synchronous message passing
2. Asynchronous message passing
3. Hybrids

1. Synchronous Message Passing


 Synchronous message passing is a communication
mechanism in concurrent programming where processes or
threads exchange messages in a synchronous manner. The
sender blocks until the receiver has received and
processed the message, ensuring coordination.
 To ensure the proper functioning of synchronous message
passing in concurrent systems, it's crucial to design it
carefully and consider potential blocking and error handling.
2. Asynchronous Message Passing
 Asynchronous message passing is a communication
mechanism in concurrent programming where processes
or threads exchange messages without synchronizing
with each other. It involves sending a message to a
receiver and continuing execution without waiting for a
response.
 Key characteristics of asynchronous message passing
include its non-blocking nature, which allows the
sender and receiver to operate independently without
waiting for a response.
 Asynchronous message passing is extensively used
in scenarios like distributed systems, message
queues, and actor models, enabling concurrency,
scalability, and fault tolerance.
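Asynchronous message passing as described above can be sketched with a thread-safe mailbox: the sender enqueues messages and continues immediately, while the receiver processes them independently. The mailbox/sentinel design here is an illustrative choice, not from any specific system:

```python
import queue
import threading

# Sketch of asynchronous message passing via a shared mailbox.
mailbox = queue.Queue()
results = []

def receiver():
    while True:
        msg = mailbox.get()   # blocks until a message arrives
        if msg is None:       # sentinel value: stop the receiver
            break
        results.append(msg.upper())

worker = threading.Thread(target=receiver)
worker.start()

mailbox.put("hello")  # send and continue without waiting for a reply
mailbox.put("world")
mailbox.put(None)     # tell the receiver to shut down
worker.join()

print(results)  # ['HELLO', 'WORLD']
```

The FIFO queue preserves message order, and because `put` never waits for the receiver, the sender is free to keep working, which is exactly the non-blocking behavior described above.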

3. Hybrids
 Hybrid message passing combines elements of both
synchronous and asynchronous message passing. It
gives the sender the flexibility to choose whether to
block and wait for a response or continue execution
asynchronously. The choice between synchronous and
asynchronous behavior can be made based on the
specific requirements of the system or the nature of
the communication.

Various machine instructions

OR
Encoding of machine instructions
 Machine instructions are commands or programs
written in the machine language of a computer that
the processor can recognize and execute directly. A
machine instruction consists of one or more bytes in
memory that tell the processor to perform one
machine operation.
 A machine language program is the collection of
machine instructions in the main memory
 Machine language is a set of instructions executed
directly by a computer's CPU. Each instruction
performs a specific task, such as loading a value,
jumping to an address, or operating on data in a CPU
register or memory.

Machine Instructions Used in 8086 Microprocessor


1. Data transfer instructions – move, load, exchange, input,
output.
 MOV: Move byte or word to a register or memory.
 IN, OUT: Input byte or word from a port, output byte or word to a port.
 LEA: Load effective address
 PUSH, POP: Push word onto stack, pop word off stack.
2. Arithmetic instructions – add, subtract, increment, decrement,
and compare.
 ADD, SUB: Add, subtract byte or word
 INC, DEC: Increment, decrement byte or word.
 NEG: Negate byte or word (two’s complement).
 CMP: Compare byte or word (subtract without storing).
 MUL, DIV: Multiply, divide byte or word (unsigned).
3. Logic instructions– AND, OR, exclusive OR, NOT
 NOT: Logical NOT of byte or word (one’s complement)
 AND: Logical AND of byte or word
 OR: Logical OR of byte or word.
 XOR: Logical exclusive-OR of byte or word
4. Loop control instructions-
 LOOP: Loop a block of code (decrements CX and jumps while CX is not zero).
 Subroutine and Interrupt instructions-
 CALL, RET: Call, return from procedure (inside or outside current
segment).

5. String manipulation instructions – load, store, move, compare,
and scan for byte/word.
 MOVS: Move byte or word string
 CMPS: Compare byte or word string.
 LODS, STOS: Load byte/word string into AL/AX; store byte/word string from AL/AX.
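The effect of a couple of these instructions on byte-sized operands can be modeled in Python. This is a sketch of the arithmetic semantics only (the masking to 8 bits stands in for a byte register; function names are invented for illustration):

```python
# Sketch: modeling 8086-style NEG and CMP on 8-bit values.
def neg8(x):
    """NEG: two's-complement negation of a byte."""
    return (-x) & 0xFF

def cmp8(a, b):
    """CMP: subtract without storing the result; report the zero flag."""
    return ((a - b) & 0xFF) == 0

print(neg8(1))      # 255 (0xFF represents -1 in two's complement)
print(cmp8(5, 5))   # True (operands equal, so the zero flag is set)
```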
Latency Hiding
 It is a technique of providing each processor with useful
work to do while it awaits the completion of memory access
or communication requests.
 Multithreading may be a practical mechanism for latency hiding.
 Latency hiding allows communication between computers to
be completely overlapped with computation, leading to
high efficiency and hardware utilization.
 Mechanism : Communication between computers is
overlapped with computation in order to achieve
high efficiency.
 Advantages : Improved machine usage, high efficiency,
and hardware utilization
 Disadvantages : Increased complexity, required
synchronization, and the possibility of deadlocks in the system.
 Overhead : Requires synchronization overhead to
coordinate and terminate threads.
 Examples : Remote direct memory access (RDMA)
 Performance improvement : Effective hiding of latency
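The idea above can be sketched by issuing a (simulated) long-latency request and doing useful work while it completes, instead of stalling. The delay and values are illustrative assumptions:

```python
import threading
import time

# Sketch of latency hiding: overlap useful computation with a
# simulated long-latency memory/communication request.
def slow_fetch(result):
    time.sleep(0.05)          # simulated request latency
    result["value"] = 42      # the "remote" data arrives

result = {}
request = threading.Thread(target=slow_fetch, args=(result,))
request.start()               # issue the request...

partial = sum(range(1000))    # ...and keep computing meanwhile

request.join()                # block only when the data is needed
print(partial + result["value"])  # 499542
```

Without the overlap, the processor would sit idle for the entire request latency; with it, the wait is hidden behind the partial-sum computation.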
