Uploaded by saba.firdous8987. © All Rights Reserved.

Ques: What do you mean by combinational circuit?

Answer: Combinational circuits are well-known components in digital electronics that produce output instantly from the current input. Unlike sequential circuits, a combinational circuit generates its output from the present input signals alone, regardless of any past input or state, since it has no feedback or memory component. It cares only about the present input.

Definition of Combinational Circuit

Combinational circuits are built from multiple interconnected logic gates such that the output is computed as a logical combination of the present inputs only. No clock pulse is involved, and no previously stored value or state is taken into consideration; the output is independent of previous states.

Features of Combinational Circuit

The output depends only upon the present input.

Its speed is fast.

It is easy to design.

There is no feedback from output to input.

It is time independent.

Its elementary building blocks are logic gates.

It is used for both arithmetic and Boolean operations.

Combinational circuits have no capability to store any state.
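
The defining property above, output as a pure function of the present inputs, can be sketched in Python with a half adder (an illustrative software model, not a hardware description):

```python
def half_adder(a: int, b: int) -> tuple:
    """Combinational half adder: outputs depend only on the present inputs."""
    sum_bit = a ^ b   # XOR gate
    carry = a & b     # AND gate
    return sum_bit, carry

# Same inputs always give the same outputs: no state, no memory.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Calling the function twice with the same inputs always yields the same result, which is exactly the deterministic, memoryless behavior described above.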

Examples

Some areas in which combinational circuits are used are discussed below:

Adders and Subtractors: Combinational circuits are used to perform arithmetic on binary numbers. Adders and subtractors use combinational logic: they take two or more binary numbers as input and perform addition or subtraction to generate the output.

Multiplexers and Demultiplexers: A multiplexer is a combinational circuit with several inputs; one input is selected, based on a selection signal, and transmitted to the single output. Conversely, a demultiplexer takes a single input signal and transmits it to one of several outputs.

Encoders and Decoders: Encoders use combinational circuits to convert a larger set of input signals into a smaller set of output signals while leaving the represented value unchanged. Inversely, decoders convert a small set of input signals into a larger set of output signals without changing the value. Essentially, the input of an encoder corresponds to the output of a decoder and vice versa.
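
As a sketch, the multiplexer and demultiplexer just described can be modeled as simple Python functions (the 4-way width and names are illustrative):

```python
def mux4(inputs, select: int):
    """4-to-1 multiplexer: route the input chosen by `select` to the output."""
    return inputs[select]

def demux4(value, select: int, n: int = 4):
    """1-to-4 demultiplexer: route the single input to one of n outputs."""
    outputs = [0] * n
    outputs[select] = value
    return outputs

print(mux4([5, 6, 7, 8], select=2))   # third input appears at the output
print(demux4(9, select=1))            # input routed to the second output line
```
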
Applications of Combinational Circuit

In modern technologies, combinational circuits are widely used for their simple functionality and their ability to give instant output. Some of the applications are discussed below:

Arithmetic and Logic Units (ALUs): Since combinational circuits can perform arithmetic and logical operations, they are used in calculators and processors as fundamental components of the ALU.

Data Encryption and Decryption: In information security, combinational circuits are used in the encryption and decryption of data for secure communication; many encryption and decryption algorithms involve mathematical transformations that combinational circuits can compute.

Data Multiplexing and Demultiplexing: This is a practical application of multiplexers and demultiplexers. Because combinational circuits allow multiple data signals to be transmitted over a single communication channel, they can be used to optimize network bandwidth and reduce network traffic.

Traffic Light Control: In traffic light control mechanisms, combinational circuits instantly determine the timing and sequence of light changes based on inputs from timers and sensors.

Advantages of Combinational Circuit

Some basic advantages of combinational circuits have made them very popular in modern technologies; they are discussed below:

Simplicity: Combinational circuits are easy to implement, and the absence of memory or feedback elements makes them more straightforward, reducing complexity in digital systems and enabling faster prototyping.

Real-time Operation: Combinational circuits respond without clocked delays, which is essential for quick-response applications like data transmission and signal processing.

Deterministic Behavior: Combinational circuits generate output based on the present input only, ensuring predictable and repeatable results, which is essential for applications where consistency and reliability are required.

Disadvantages of Combinational Circuit

Limited Functionality: Because the output depends only on the present input and there is no way to store previous data, combinational circuits cannot be used in memory-based applications or wherever previous data must be recalled for an operation.

Lack of Flexibility: Once the logic-gate design is fixed, even a small design change requires redesigning the entire circuit, which is tedious and time-consuming.

Increased Complexity for Large Designs: In large designs the number of logic gates grows quickly, so managing the input and output ports becomes very complex. This may lead to higher production costs and more design errors.

Ques 2: What do you mean by control unit? Describe a hardwired control unit with its timing
diagram.
Answer: The Control Unit is the part of the computer’s central processing unit (CPU), which directs the
operation of the processor. It was included as part of the Von Neumann Architecture by John von Neumann. It
is the responsibility of the control unit to tell the computer’s memory, arithmetic/logic unit, and input and
output devices how to respond to the instructions that have been sent to the processor. It fetches internal
instructions of the programs from the main memory to the processor instruction register, and based on this
register contents, the control unit generates a control signal that supervises the execution of these instructions.
A control unit works by receiving input information which it converts into control signals, which are then sent
to the central processor. The computer’s processor then tells the attached hardware what operations to
perform. The functions that a control unit performs are dependent on the type of CPU because the
architecture of the CPU varies from manufacturer to manufacturer.

Examples of devices that require a CU are:

Central Processing Units (CPUs)

Graphics Processing Units (GPUs)

Functions of the Control Unit

It coordinates the sequence of data movements into, out of, and between a processor’s many sub-units.

It interprets instructions.

It controls data flow inside the processor.

It receives external instructions or commands and converts them into a sequence of control signals.

It controls many execution units (i.e. the ALU, data buffers, and registers) contained within a CPU.

It also handles multiple tasks, such as fetching, decoding, execution handling and storing results.

Types of Control Unit

There are two types of control units:

Hardwired

Micro programmable control unit.

Hardwired Control Unit

In a hardwired control unit, the control signals needed to control instruction execution are generated by specially designed hardware logic circuits, whose signal-generation method cannot be modified without physically changing the circuit structure. The operation code of an instruction contains the basic data for control signal generation. The operation code is decoded in the instruction decoder, which consists of a set of decoders that decode different fields of the instruction opcode.

As a result, a few of the output lines from the instruction decoder carry active signal values. These lines are connected to the inputs of the matrix that generates control signals for the execution units of the computer. This matrix combines the decoded signals from the instruction opcode with the outputs of the matrix that generates signals representing consecutive control unit states, and with signals coming from outside the processor, e.g. interrupt signals. The matrices are built in a similar way to programmable logic arrays.
Control signals for instruction execution have to be generated not at a single instant but throughout the time interval corresponding to the instruction execution cycle. Following the structure of this cycle, a suitable sequence of internal states is organized in the control unit. A number of signals generated by the control signal generator matrix are sent back to the inputs of the next-state generator matrix.

This matrix combines these signals with the timing signals generated by the timing unit, which are based on the rectangular wave patterns usually supplied by a quartz oscillator. When a new instruction arrives at the control unit, the unit is in the initial state of new-instruction fetching. Instruction decoding lets the control unit enter the first state of the new instruction's execution, which lasts as long as the timing signals and other inputs, such as flags and state information of the computer, remain unaltered.

A change in any of these signals triggers a change of control unit state, generating a new input for the control signal generator matrix. When an external signal appears (e.g. an interrupt), the control unit enters the next control state, which handles the reaction to this external signal (e.g. interrupt processing).

The values of flags and state variables of the computer are used to select the states appropriate to the instruction execution cycle. The last states in the cycle are control states that commence fetching of the next instruction of the program: sending the program counter contents to the main memory address buffer register and then reading the instruction word into the instruction register of the computer. When the ongoing instruction is a stop instruction that ends program execution, the control unit enters an operating system state, in which it waits for the next user directive.
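
As a rough software analogy of the description above (the opcodes, timing steps, and signal names here are invented for illustration), a hardwired control unit behaves like a fixed mapping from the decoded opcode and the current timing state to a set of control signals, a mapping that cannot be changed without rebuilding the circuit:

```python
# Fixed "signal matrix": (opcode, timing step) -> active control signals.
# In hardware this is combinational logic, not a modifiable table.
DECODE = {
    ("LOAD", 0): {"mem_read", "mar_load"},
    ("LOAD", 1): {"mbr_to_reg"},
    ("ADD", 0): {"alu_add", "reg_write"},
}

def control_signals(opcode: str, step: int) -> set:
    """Hardwired behavior: signals are a fixed function of opcode and state."""
    return DECODE.get((opcode, step), set())

print(control_signals("LOAD", 0))  # signals for the first step of LOAD
```
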

Micro Programmable control unit

The fundamental difference between this structure and that of the hardwired control unit is the existence of a control store, used for storing words containing the encoded control signals required for instruction execution. In microprogrammed control units, subsequent instruction words are fetched into the instruction register in the normal way. However, the operation code of each instruction is not decoded directly into immediate control signals; instead, it provides the initial address of a microprogram contained in the control store.
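
By contrast with the hardwired case, the control store can be sketched as a table of microinstruction words, with the opcode supplying only the starting address of its microprogram (again, the opcodes and signal names below are illustrative):

```python
# Control store: each word is a set of encoded control signals; "end"
# marks the last microinstruction of a microprogram.
CONTROL_STORE = [
    {"mem_read", "mar_load"},   # addr 0: LOAD, micro-op 1
    {"mbr_to_reg", "end"},      # addr 1: LOAD, micro-op 2
    {"alu_add", "end"},         # addr 2: ADD
]
MICRO_START = {"LOAD": 0, "ADD": 2}  # opcode -> initial microprogram address

def run_microprogram(opcode: str):
    """Step through the microprogram for one instruction, collecting signals."""
    addr = MICRO_START[opcode]
    signals = []
    while True:
        word = CONTROL_STORE[addr]
        signals.append(word)
        if "end" in word:
            break
        addr += 1
    return signals

print(run_microprogram("LOAD"))  # two microinstruction words
```

Changing the behavior only requires rewriting the control store, which is exactly why microprogrammed units are more flexible than hardwired ones.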
Ques 3: Define register. "The number of registers present in a CPU and their internal organization play a dominant role in the organization of the CPU." Justify how many different types of CPU organizations are thus obtained based on the number of registers available in the CPU.

Answer: A register is a tiny, fast storage memory within the central processing unit (CPU) or the arithmetic
logic unit (ALU) of a computer. Registers are utilized for a variety of functions in handling and controlling
instructions and data and play an important role in the operation of a computer’s CPU.

Memory Hierarchy and the Role of Registers

Computer systems have a memory hierarchy that includes multiple levels of memory with varying access
speeds and capacities. At the top of this hierarchy are the CPU registers, which play a vital role in enhancing
CPU performance. Registers are small, high-speed storage units located within the CPU itself, providing fast
access to frequently used data.

The number of registers present in a CPU and their internal organizations indeed play a dominant role in
determining the organization of the CPU. Registers are small, high-speed storage locations within the CPU that
hold data temporarily during processing. They are fundamental to the operation of the CPU, influencing its
performance, efficiency, and capabilities.

Based on the number of registers available in the CPU, various CPU organizations can be obtained. Here's a
breakdown of different types of CPU organizations based on the number of registers:

Single Accumulator Architecture:

In this organization, there is one primary accumulator register used for arithmetic and logic operations.

Examples include early computers like the Manchester Mark I and the IBM 650.

General-Purpose Register (GPR) Architecture:

In GPR architecture, there are multiple general-purpose registers available for storing operands and results of
arithmetic and logical operations.

Instructions can operate directly on data stored in these registers.

Examples include many modern CPUs like those found in desktops, laptops, and servers.

Stack-Based Architecture:

This organization primarily utilizes a stack for storing operands and results.

While there may be a limited number of general-purpose registers, the stack is the primary storage mechanism
for intermediate values.

Some embedded processors and specialized architectures use stack-based designs.

Register-Memory Architecture:
In this organization, CPU instructions can operate on data stored in both registers and memory locations.

Registers are used for fast access, while memory serves as a larger, but slower, storage medium.

CISC architectures such as the x86 employ this design; most RISC (Reduced Instruction Set Computer) architectures instead use a load-store (register-register) organization, in which only load and store instructions access memory.

Vector Processor Architecture:

Vector processors have specialized registers designed for performing SIMD (Single Instruction, Multiple Data)
operations.

These registers hold arrays of data elements, and instructions can operate on multiple data elements
simultaneously.

GPUs (Graphics Processing Units) and some specialized processors use vector architectures.

Superscalar Architecture:

Superscalar CPUs have multiple execution units capable of executing multiple instructions simultaneously.

While there may be multiple registers, the focus here is on parallel execution of instructions rather than the
number of registers per se.

Many modern high-performance CPUs utilize superscalar designs.

These are some of the key CPU organizations based on the number and arrangement of registers. Each
organization has its advantages and trade-offs, influencing factors such as performance, complexity, and power
efficiency. CPU designers choose an appropriate organization based on the intended application and
desired balance of these factors.

Ques4: What is associative memory? Explain operation of associative memory with the help of
hardware organizations.
Answer: Associative memory, also known as content-addressable memory (CAM), is a type of computer
memory that enables rapid search and retrieval of data based on its content rather than its address. In
associative memory, the memory content itself serves as the address, allowing for parallel comparison of a
search key with all stored data entries simultaneously. This contrasts with conventional memory, where data is
accessed by specifying its memory address.

The operation of associative memory involves three main steps:

Search:

During the search operation, the memory compares the search key (or query) with all stored data entries
simultaneously.

Each entry in the memory is compared with the search key in parallel.

If a match is found between the search key and any stored entry, the memory returns the address or identifier
associated with that entry.

Match Detection:

Once the comparison is performed, the memory detects whether any of the stored entries match the search
key.

If a match is detected, the memory indicates the presence of a match and provides the corresponding address
or identifier.

Retrieval:
If a match is found, the memory retrieves the associated data or information stored at the identified address
or location.

This retrieved data can then be accessed or used by the CPU or other components of the system.
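
The three steps above can be sketched in Python; the parallel comparison performed by real CAM hardware is modeled here with a sequential scan:

```python
def cam_search(memory, key):
    """Content-addressable search: compare the key with every stored word
    (in real CAM hardware all comparisons happen simultaneously) and
    return the addresses of matching entries."""
    return [addr for addr, word in enumerate(memory) if word == key]

cam = [0b1010, 0b0111, 0b1010, 0b0001]
print(cam_search(cam, 0b1010))  # addresses whose content equals the key
```

Note that the result is an address derived from the content, the reverse of conventional memory, where an address yields content.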

Now, let's discuss the hardware organizations that implement associative memory:

Content-Addressable Memory (CAM):

CAM is a hardware implementation of associative memory.

It consists of an array of memory cells, each containing both data and a comparison circuit.

During a search operation, the search key is broadcast to all cells simultaneously.

Each cell compares its stored data with the search key using its comparison circuit.

If a match is found in any cell, the associated address or identifier is provided as the output.

Ternary Content-Addressable Memory (TCAM):

TCAM is a variant of CAM that supports ternary logic, allowing for more flexible search operations.

In addition to storing data, each memory cell in TCAM can also store a "don't care" value, representing a bit
that can match either 0, 1, or any value during the search.

This flexibility enables TCAM to perform more complex pattern-matching operations efficiently.
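
The "don't care" matching of TCAM can be sketched with a mask per entry, where mask bits of 0 mean the corresponding key bits are ignored:

```python
def tcam_match(value: int, mask: int, key: int) -> bool:
    """One TCAM entry: compare only the bit positions where mask is 1;
    mask bits of 0 are 'don't care' positions."""
    return (value & mask) == (key & mask)

# Entry matching any 4-bit key of the form 10xx: care only about the top two bits.
print(tcam_match(0b1000, 0b1100, 0b1011))  # matches: 1011 has top bits 10
print(tcam_match(0b1000, 0b1100, 0b0111))  # no match: 0111 has top bits 01
```

This is the mechanism that lets TCAM match IP prefixes of different lengths in network routing tables.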

Associative Cache Memory:

In computer architecture, associative caches use associative memory to store recently accessed data.

The cache memory is organized as a set of cache lines, each containing both data and tags.

During a cache lookup, the memory controller broadcasts the memory address to all cache lines
simultaneously.

Each cache line compares the address with its stored tag.

If a match is found, the corresponding data is retrieved from the cache.

These hardware organizations demonstrate how associative memory enables rapid search and retrieval of data
based on its content, making it particularly useful for applications such as high-speed database search,
network routing, and pattern matching.

Ques 5: What is Instruction Cycle? Draw the flowchart and discuss the working process of
instruction cycle.
Answer: Instruction Cycle

The structure of the instruction cycle defines the processing of a single instruction. This processing takes various forms when an interrupt occurs or when indirect addressing is present in the instruction. In this section, we will discuss the various forms of the instruction cycle.

Instruction Cycle Definition

The processing involved in the execution of a single instruction is termed the instruction cycle. This processing is done in two steps, fetch and execute. To execute an instruction, the processor first reads it from memory, which is called fetching, and then executes the fetched instruction.

If we discuss the basic structure it includes the following two cycles:


Fetch cycle: In this cycle, the processor reads the instruction that is to be executed from the memory.

Execute cycle: In this cycle, the processor interprets the opcode of the fetched instruction and
performs the operations accordingly.

The figure below shows you the processing of the basic instruction cycle. In the beginning, to start the
execution of a program, the processor runs the fetch cycle and fetches the first instruction from the
memory. The execution cycle interprets the operation and performs the operations specified in the
instruction accordingly.

This cycle repeats until all the instructions in the program have been executed; after the execution of the last instruction, the instruction cycle halts. That is the scenario with no interrupts.

Interrupt Cycle

To accommodate the occurrence of interrupts, an interrupt cycle must be added to the structure of the instruction cycle. As the figure below shows, the interrupt cycle is added to the basic instruction cycle.

Consider the case where interrupts are enabled. If an interrupt occurs, the processor halts execution of the current program, saves the address of the instruction to be executed next, and services the interrupt.

To process an interrupt, the processor sets the program counter to the starting address of the interrupt service routine, so that it fetches the first instruction of the routine and services the interrupt. Once the interrupt has been serviced, the processor resumes the program it halted by setting the program counter to the address of the next instruction to be executed.

If the interrupts are disabled then the processor will simply ignore the occurrence of interrupts. The
processor will smoothly execute the currently running program and will check the pending interrupts
once the interrupts are enabled.

Indirect Cycle

An instruction may have one or more operands whose values must be read from memory, so executing instructions with operands requires memory access. Now, what if indirect addressing is used?
Additional memory access is required when indirect addressing is used in the instruction, which adds one more stage, or cycle, to the basic instruction cycle. Normally, the instruction fetch and instruction execute cycles alternate.

The fetched instruction is checked for indirect addressing; if present, the operands are fetched by performing an indirect cycle. If an interrupt occurs, it is processed before the execution of the next instruction.
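
The extra memory access of the indirect cycle can be sketched in Python (the addresses and contents are illustrative):

```python
memory = {0: 10, 10: 42}           # address 0 holds a pointer to address 10

def fetch_operand(address: int, indirect: bool) -> int:
    """Fetch an operand, performing one extra memory read if the
    instruction uses indirect addressing."""
    if indirect:
        address = memory[address]  # indirect cycle: resolve the pointer
    return memory[address]         # ordinary operand fetch

print(fetch_operand(0, indirect=True))    # reached through the pointer
print(fetch_operand(10, indirect=False))  # reached directly
```
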

Flowchart

The structure of the instruction cycle varies from processor to processor depending on its design, but in general terms the following must happen. Before getting into the details, let us discuss the registers required in the processing of the instruction cycle.

Memory Address Register (MAR): This register holds the address of the memory location from which data is to be fetched or to which data is to be stored.

Memory Buffer Register (MBR): This register stores the data that is either fetched from the memory or
that has to be stored in the memory.

Program Counter (PC): This is also called the instruction address register as it holds the address of the
instruction that has to be executed next.

Instruction Register (IR): This register holds the instruction that has to be interpreted.

The figure below shows you the flowchart of the instruction cycle with interrupts and indirect
addressing.

Initially the fetch cycle runs: the program counter is initialized with the address of the first instruction of the program. The address in the PC is transferred to the MAR, and the PC is updated with the address of the next instruction to be executed. The control unit reads the instruction from the address in the MAR, stores it in the MBR, and then transfers it to the IR. Here the fetch cycle ends.

The control unit checks the IR to see whether the instruction has operands that specify indirect addressing. If so, the indirect cycle must be run.

For this, the control unit reads the right-most N bits of the MBR, which contain the address reference of the operand, and transfers them to the MAR. The control unit then performs a memory read to fetch the address of the operand and transfers it into the MBR. Here the indirect cycle ends.
The execute cycle can take various forms depending entirely on the type of instruction in the IR. It may involve transferring data between registers, reading or writing memory, or ALU operations.

The interrupt cycle starts when an interrupt signal arises. The PC holds the address of the next instruction of the program to be executed; this address is transferred to the MBR. The contents of the MBR are then written to a special memory location whose address is loaded into the MAR. Finally, the PC is loaded with the address of the first instruction of the interrupt service routine, the program that services the interrupt.

So, these are the various forms of the instruction cycle. We have seen how its structure is amended when indirect addressing is present in the instruction and when an interrupt occurs.
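
The fetch-execute loop described above can be sketched as a tiny Python simulator using the registers defined earlier; the one-address instruction set and memory layout are invented for illustration:

```python
# Each instruction cell holds (opcode, operand); cells 10 and 11 hold data.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
          10: 5, 11: 7}
pc, acc = 0, 0               # program counter, accumulator

while True:
    mar = pc                 # fetch: PC -> MAR
    mbr = memory[mar]        # memory read: M[MAR] -> MBR
    ir = mbr                 # MBR -> IR; the fetch cycle is over
    pc += 1                  # PC now points to the next instruction
    opcode, operand = ir     # decode the instruction in IR
    if opcode == "HALT":     # execute
        break
    elif opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]

print(acc)  # 5 + 7 = 12
```
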

Ques6: What do you mean by addressing modes? Discuss different types of addressing modes with
their merits and demerits.
Answer: Addressing Modes: The term addressing mode refers to the way in which the operand of an instruction is specified. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced.

Addressing modes for 8086 instructions are divided into two categories: 1) Addressing modes for data 2)
Addressing modes for branch

The 8086 memory addressing modes provide flexible access to memory, allowing you to easily access
variables, arrays, records, pointers, and other complex data types. The key to good assembly language
programming is the proper use of memory addressing modes.

An assembly language program instruction consists of two parts: an opcode and an operand.

IMPORTANT TERMS

The memory address of an operand consists of two components:

Starting address of memory segment.

Effective address or Offset: An offset is determined by adding any combination of three address elements: displacement, base and index.

Displacement: It is an 8 bit or 16 bit immediate value given in the instruction.

Base: Contents of base register, BX or BP.

Index: Content of index register SI or DI.

The 8086 microprocessor supports several addressing modes, corresponding to the different ways it can specify an operand; they are discussed below:

Implied mode: In implied addressing, the operand is specified in the instruction itself. In this mode the data is 8 or 16 bits long and is part of the instruction. Zero-address instructions are designed with the implied addressing mode.

Example: CLC (used to reset the carry flag to 0)


Immediate addressing mode (symbol #): In this mode the data is present in the address field of the instruction, designed like a one-address instruction format. Note: a limitation of the immediate mode is that the range of constants is restricted by the size of the address field.

Example: MOV AL, 35H (move the data 35H into the AL register)

Register mode: In register addressing, the operand is placed in one of the 8-bit or 16-bit general-purpose registers. The data is in the register specified by the instruction. One register reference is required to access the data.

Example: MOV AX, CX (move the contents of the CX register to the AX register)

Register Indirect mode: In this addressing, the operand's offset is placed in one of the registers BX, BP, SI, or DI, as specified in the instruction. The effective address of the data is in the base register or an index register specified by the instruction. Two register references are required to access the data. The 8086 CPUs let you access memory indirectly through a register using the register indirect addressing modes.

Example: MOV AX, [BX] (move the contents of the memory location addressed by register BX to the register AX)

Auto indexed (increment mode): The effective address of the operand is the contents of a register specified in the instruction. After accessing the operand, the contents of this register are automatically incremented to point to the next consecutive memory location: (R1)+. One register reference, one memory reference, and one ALU operation are required to access the data.

Example:

Add R1, (R2)+ // i.e.

R1 = R1 + M[R2]

R2 = R2 + d

This is useful for stepping through arrays in a loop: R2 is the start of the array, d the size of an element.

Auto indexed (decrement mode): The effective address of the operand is the contents of a register specified in the instruction. Before accessing the operand, the contents of this register are automatically decremented to point to the previous consecutive memory location: -(R1). One register reference, one memory reference, and one ALU operation are required to access the data.

Example:

Add R1, -(R2) // i.e.

R2 = R2 - d

R1 = R1 + M[R2]

Auto-decrement mode is the counterpart of auto-increment mode. Both can be used to implement a stack's push and pop; auto-increment and auto-decrement modes are useful for implementing "Last-In-First-Out" data structures.

Direct addressing / Absolute addressing mode (symbol [ ]): The operand's offset is given in the instruction as an 8-bit or 16-bit displacement element. In this addressing mode, the 16-bit effective address of the data is part of the instruction. Only one memory reference operation is required to access the data.

Example: ADD AL, [0301] // add the contents of offset address 0301 to AL

Indirect addressing mode (symbol @ or ( )): In this mode the address field of the instruction contains the address of the effective address, so two references are required: the first to get the effective address, the second to access the data. Based on where the effective address is held, indirect mode is of two kinds:

Register Indirect: The effective address is in a register whose name is held in the address field of the instruction. One register reference and one memory reference are required to access the data.

Memory Indirect: The effective address is in memory, and the corresponding memory address is held in the address field of the instruction. Two memory references are required to access the data.

Indexed addressing mode: The operand's offset is the sum of the contents of an index register (SI or DI) and an 8-bit or 16-bit displacement.

Example: MOV AX, [SI+05]

Based Indexed addressing: The operand's offset is the sum of the contents of a base register (BX or BP) and an index register (SI or DI).

Example: ADD AX, [BX+SI]
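
The offset computation shared by the direct, indexed, and based-indexed modes can be sketched in Python (a model of the 8086 offset arithmetic, not an emulator):

```python
def effective_address(displacement: int = 0, base: int = 0, index: int = 0) -> int:
    """8086-style offset: any combination of displacement, base register
    and index register, wrapping at 16 bits."""
    return (displacement + base + index) & 0xFFFF

# MOV AX, [SI+05] with SI = 0x0020 -> indexed mode
print(hex(effective_address(displacement=0x05, index=0x0020)))
# ADD AX, [BX+SI] with BX = 0x1000, SI = 0x0020 -> based indexed mode
print(hex(effective_address(base=0x1000, index=0x0020)))
```
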

Based on Transfer of control, addressing modes are:

PC relative addressing mode: This mode is used to implement intra-segment transfer of control. The effective address is obtained by adding a displacement to the PC.

EA = PC + address field value

PC = PC + relative value

Base register addressing mode: This mode is used to implement inter-segment transfer of control. The effective address is obtained by adding the base register value to the address field value.

EA = base register + address field value

PC = base register + relative value

Note:

Both PC-relative and base-register addressing modes are suitable for program relocation at runtime.

Base register addressing mode is best suited to writing position-independent code.

Advantages of Addressing Modes

To give programmers facilities such as pointers, counters for loop control, indexing of data, and program relocation.
To reduce the number of bits in the addressing field of the instruction.

Sample GATE Question

Match each of the high level language statements given on the left hand side with the most natural addressing
mode from those listed on the right hand side.

1. A[1] = B[J]; a. Indirect addressing

2. while [*A++]; b. Indexed addressing

3. int temp = *x; c. Autoincrement

(A) (1, c), (2, b), (3, a)
(B) (1, a), (2, c), (3, b)
(C) (1, b), (2, c), (3, a)
(D) (1, a), (2, b), (3, c)

Answer: (C)

Explanation:

List 1 List 2

1) A[1] = B[J]; b) Index addressing

Here indexing is used

2) while [*A++]; c) auto increment

The memory locations are automatically incremented

3) int temp = *x; a) Indirect addressing

Here temp is assigned the value of int type stored

at the address contained in X

Ques7: Write a program in assembly language to evaluate arithmetic expressions, X= (A+B) * (C+D),
using three, two, one and zero address instructions.
Answer: Three-Address Instructions (Register-Based): In three-address instructions, both source operands and the destination are specified explicitly.

ADD R1, A, B   ; R1 = M[A] + M[B]
ADD R2, C, D   ; R2 = M[C] + M[D]
MUL X, R1, R2  ; M[X] = R1 * R2

Two-Address Instructions (Register-Based): In two-address instructions, one operand serves as both a source and the destination.

MOV R1, A   ; R1 = M[A]
ADD R1, B   ; R1 = R1 + M[B]
MOV R2, C   ; R2 = M[C]
ADD R2, D   ; R2 = R2 + M[D]
MUL R1, R2  ; R1 = R1 * R2
MOV X, R1   ; M[X] = R1

One-Address Instructions (Accumulator-Based): In one-address instructions, one operand is implicitly the accumulator (AC).

LOAD A    ; AC = M[A]
ADD B     ; AC = AC + M[B]
STORE T   ; M[T] = AC (T is a temporary location)
LOAD C    ; AC = M[C]
ADD D     ; AC = AC + M[D]
MUL T     ; AC = AC * M[T]
STORE X   ; M[X] = AC

Zero-Address Instructions (Stack-Based, no explicit operands): In zero-address instructions, operands are implied and reside on top of a stack.

PUSH A   ; push M[A] onto the stack
PUSH B   ; push M[B] onto the stack
ADD      ; pop two values, add them, push the result
PUSH C   ; push M[C] onto the stack
PUSH D   ; push M[D] onto the stack
ADD      ; pop two values, add them, push the result
MUL      ; pop two values, multiply them, push the result
POP X    ; M[X] = popped result
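The zero-address (stack) evaluation can be checked with a small Python simulation; this is an illustrative sketch with made-up operand values (A=2, B=3, C=4, D=5):

```python
# Minimal stack-machine sketch evaluating X = (A + B) * (C + D).
memory = {"A": 2, "B": 3, "C": 4, "D": 5}  # hypothetical operand values
stack = []

def push(name):    # PUSH addr: push M[addr]
    stack.append(memory[name])

def add():         # ADD: pop two values, push their sum
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def mul():         # MUL: pop two values, push their product
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

def pop(name):     # POP addr: M[addr] = top of stack
    memory[name] = stack.pop()

# PUSH A; PUSH B; ADD; PUSH C; PUSH D; ADD; MUL; POP X
push("A"); push("B"); add()
push("C"); push("D"); add()
mul()
pop("X")
print(memory["X"])  # (2 + 3) * (4 + 5) = 45
```

Because the operands are implicit, the instruction sequence is exactly the postfix (reverse Polish) form of the expression: A B + C D + *.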

Ques8: Discuss Memory-Reference instructions with its flowchart.

Answer: Memory reference instructions are instructions that refer to memory: they generate a reference to a memory location and thereby give the program access to the required data, specifying where that data is stored. Such instructions are known as Memory Reference Instructions.

There are seven memory reference instructions, which are as follows −

AND

The AND instruction implements the AND logic operation on the bit collection from the
register and the memory word that is determined by the effective address. The result of
this operation is moved back to the register.

ADD

The ADD instruction adds the content of the memory word that is denoted by the effective
address to the value of the register.

LDA

The LDA instruction transfers the memory word denoted by the effective address into the register (accumulator).

STA

STA saves the content of the register into the memory word defined by the effective address. The register's output is applied to the common bus, and the memory's data input is connected to the bus. It needs only one micro-operation.

BUN
The Branch Unconditionally (BUN) instruction transfers control to the instruction specified by the effective address. Normally, the PC holds the address of the next instruction to be executed and is incremented by one to obtain the address of the next instruction in sequence. When control needs to execute an instruction that is not next in the sequence, it can execute the BUN instruction.

BSA

BSA stands for Branch and Save return Address. This instruction is used to branch to a portion of the program called a subroutine (or procedure). When this instruction is executed, BSA stores the address of the next instruction (from the PC) into a memory location specified by the effective address.

ISZ

The Increment and Skip if Zero (ISZ) instruction increments the word specified by the effective address. If the incremented value is zero, the PC is incremented by 1, so the next instruction is skipped. Typically the programmer stores a negative value in the memory word; after being incremented repeatedly it reaches zero, at which point the PC is incremented and the next instruction is skipped.
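As a sketch of how ISZ supports loop control, the following Python fragment mimics its semantics; the memory address (100) and the initial counter value (-3) are illustrative:

```python
# Sketch of ISZ ("increment and skip if zero") semantics used as a loop counter.
memory = {100: -3}   # counter at a hypothetical address, preset to minus the trip count
pc = 0
iterations = 0

while True:
    iterations += 1          # the loop body (one unit of work)
    memory[100] += 1         # ISZ 100: increment the memory word
    if memory[100] == 0:
        pc += 1              # skip the next instruction (the branch back)
        break                # control falls out of the loop
    # otherwise a BUN would branch back to the top of the loop

print(iterations)  # the body ran 3 times before the counter reached zero
```

This is the classic pairing: ISZ followed by a BUN back to the loop top, with ISZ skipping over the BUN once the counter hits zero.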

Ques9: What is direct memory Access? Discuss the working process of


DMA Transfer and controller.

Answer: DMA Introduction


Direct Memory Access, commonly known as DMA, is a data transfer technique in which I/O devices communicate directly with memory without passing through the Central Processing Unit. In this hardware mechanism, a DMA controller substitutes for the CPU and is responsible for accessing the input-output devices and memory to transfer data. A DMA controller is dedicated hardware that performs read and write operations directly, without involving the CPU, and saves the time the CPU would otherwise spend on opcode fetching, decoding, incrementing, and testing source/destination addresses. This leads to high data transfer rates between the peripherals and memory, and large blocks of data can be communicated quickly.

How Data transfer happen in DMA?


The data transfer is initiated by specifying the start address, the number of words to be transferred in a block, and the direction of the transfer. The DMA controller performs the requested function as soon as it receives this information. When the entire block of data has been transferred, the controller sends an interrupt signal to inform the microprocessor that the requested operation has been completed.

For I/O operations which include the DMA, the program that has requested the data transfer is
put into a suspended state by the operating system and starts to execute another program. At
completion, an interrupt is raised by the DMA to tell the processor. As a result, the operating
system releases the program from the blocked state back into the runnable state so that the
CPU could return to the request program and continue with its further execution. During DMA
transfer, the DMA controller is the master and must be synchronized with the concerned
peripheral.

DMA Interface

The DMA interface sits between the external devices and the system bus. It consists of the DMAC, disk controllers, and memory. The DMAC is connected to a fast system bus, which is the only medium of transfer. The disk controllers manage the disks, have DMA capability, and can perform independent functions like the DMAC. They are also known as channels and can perform DMA data transfers according to their programming.

DMAC Controller Registers


The DMAC has registers for storing addresses, the word count, and control signals. The processor accesses these registers to start data transfer operations. There are two registers — an address register and a word count register — that hold the memory address where the data is to be stored and the number of words to transfer, respectively, and a control register that keeps the status and control flags. In addition, a Read/Write bit determines the direction of data transfer.

When the program instructs a read (R/W = 1), data is transferred from memory to the I/O device; when it is 0, data is written from the peripheral to main memory. When a chunk of data has been entirely transferred, the DMA is ready to take further commands; this is represented by setting the Done flag to 1. The IE flag, when set by the DMA, enables an interrupt to the processor, and the IRQ bit goes to 1 when the DMA has requested an interrupt.
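A minimal Python sketch of the register model just described; the class layout follows the prose (address register, word count register, R/W, Done, IE, IRQ), but the interface itself is hypothetical and only the device-to-memory direction is implemented:

```python
# Sketch of a DMAC register model (illustrative, not a real controller's API).
class DMAController:
    def __init__(self):
        self.address = 0      # address register: where data goes in memory
        self.word_count = 0   # word count register: words left to transfer
        self.rw = 0           # R/W bit: 1 = memory -> device, 0 = device -> memory
        self.done = 0         # Done flag: set to 1 when the block completes
        self.ie = 1           # interrupt-enable flag
        self.irq = 0          # interrupt-request bit

    def program(self, address, word_count, rw):
        # The CPU writes these registers, then the transfer proceeds without it.
        self.address, self.word_count, self.rw = address, word_count, rw
        self.done = 0

    def transfer(self, memory, device_data):
        # Move the whole block burst-style, one word per step
        # (only device -> memory is modeled here, for brevity).
        for word in device_data[: self.word_count]:
            memory[self.address] = word
            self.address += 1
            self.word_count -= 1
        self.done = 1
        if self.ie:
            self.irq = 1      # raise an interrupt to tell the CPU we finished

memory = {}
dma = DMAController()
dma.program(address=0x2000, word_count=4, rw=0)    # device -> memory
dma.transfer(memory, device_data=[10, 20, 30, 40])
print(memory[0x2000], dma.done, dma.irq)  # 10 1 1
```

The CPU's only involvement is the `program` call; everything after that mirrors the "controller is bus master" behavior described above, ending with the interrupt request.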

DMA Controller Programming Modes


Normally, more than one general-purpose external device is connected to the bus. These devices make requests, which are always preferred over CPU requests. Further, among these DMA peripherals, the faster ones get top priority. So, the way a DMAC is programmed to cater to this situation is important. The programming determines how many times it can transfer data, how many memory locations it can access, and what type of transfer mode the DMA controller uses.
On this basis, DMA Controller has three programming modes i.e. Burst mode, Cycle Stealing
mode, and Transparent mode.

Burst mode

In this mode, the DMA acquires a system bus from the CPU to perform data transfer. This is the
fastest mode as data is being communicated continuously. The DMAC is given priority over the
CPU to execute the operation without any interruption. The processor has to wait till the DMAC
finishes its work. For example, if there is a network protocol, the data block is read from the
main memory and is stored in an internal buffer temporarily. It is then transferred over the
network at a speed suitable to the memory and the system bus through this mode.

Cycle Stealing mode

In the cycle stealing mode, the microprocessor is controlling the computer bus but DMAC tends
to steal execution cycles from the processor. In this mode, the DMAC requests the processor for
bus control for one cycle and stalls the CPU. It transfers one byte and then gives back the control
to the processor. In this way, the CPU does not need to wait for a long time.

Transparent mode

The mode in which the DMA controller operates and has the bus control only when the
processor is not performing bus-related functions is called transparent mode. It means that the
DMA can transfer data only when the system bus is idle and does not interfere with the
processor executing other instructions. It is also known as a hidden mode. This is a slow yet
efficient mode of direct memory access.
Ques10: What do you understand by Instruction Set Architecture? Write briefly
about RISC and CISC computer architecture with diagram. Also mention difference
between them.

Answer: In computer science, an instruction set architecture (ISA) is an abstract model of


a computer. It is also referred to as architecture or computer architecture. A realization of
an ISA, such as a central processing unit (CPU), is called an implementation.

In general, an ISA defines the supported data types, the registers, the hardware support for
managing main memory, fundamental features (such as the memory
consistency, addressing modes, virtual memory), and the input/output model of a family of
implementations of the ISA.

An ISA specifies the behavior of machine code running on implementations of that ISA in a
fashion that does not depend on the characteristics of that implementation, providing
binary compatibility between implementations. This enables multiple implementations of
an ISA that differ in performance, physical size, and monetary cost (among other things),
but that are capable of running the same machine code, so that a lower-performance,
lower-cost machine can be replaced with a higher-cost, higher-performance machine
without having to replace software. It also enables the evolution of the microarchitectures
of the implementations of that ISA, so that a newer, higher-performance implementation
of an ISA can run software that runs on previous generations of implementations.

If an operating system maintains a standard and compatible application binary interface


(ABI) for a particular ISA, machine code for that ISA and operating system will run on
future implementations of that ISA and newer versions of that operating system. However,
if an ISA supports running multiple operating systems, it does not guarantee that machine
code for one operating system will run on another operating system, unless the first
operating system supports running machine code built for the other operating system.

An ISA can be extended by adding instructions or other capabilities, or adding support for
larger addresses and data values; an implementation of the extended ISA will still be able to
execute machine code for versions of the ISA without those extensions. Machine code using
those extensions will only run on implementations that support those extensions.

The binary compatibility that ISAs provide makes them one of the most fundamental abstractions in computing.

RISC is an approach that makes the hardware simpler, whereas CISC uses single instructions that perform multiple operations. Below, RISC and CISC are discussed in detail, along with the differences between them. Let us proceed with RISC first.

Reduced Instruction Set Architecture (RISC)

Because RISC processors have a simpler instruction set, they can execute instructions faster than CISC processors. The main idea behind this is to simplify the hardware by using an instruction set composed of a few basic operations for loading, evaluating, and storing: a load instruction loads data, and a store instruction stores it.

Characteristics of RISC

Simpler instructions, hence simple instruction decoding.

Instructions fit in one word.

Instructions take a single clock cycle to execute.

More general-purpose registers.

Simple addressing modes.

Fewer data types.

Pipelining can be achieved.

Advantages of RISC

Simpler instructions: RISC processors use a smaller set of simple instructions, which makes them easier to
decode and execute quickly. This results in faster processing times.

Faster execution: Because RISC processors use simple, uniform instructions, most instructions complete in a single clock cycle and the pipeline can be kept full.

Lower power consumption: RISC processors consume less power than CISC processors, making them ideal
for portable devices.

Disadvantages of RISC

More instructions required: RISC processors require more instructions to perform complex tasks than CISC
processors.

Increased memory usage: RISC processors require more memory to store the additional instructions
needed to perform complex tasks.

Higher cost: Developing and manufacturing RISC processors can be more expensive than CISC processors.

Complex Instruction Set Architecture (CISC)

The main idea is that a single instruction will do all the loading, evaluating, and storing: for example, a multiplication instruction may itself load the data, evaluate the product, and store the result; hence it is complex.

Characteristics of CISC

Complex instructions, hence complex instruction decoding.

Instructions are larger than one word.

An instruction may take more than a single clock cycle to execute.

Fewer general-purpose registers, since many operations are performed in memory itself.

Complex Addressing Modes.

More Data types.

Advantages of CISC

Reduced code size: CISC processors use complex instructions that can perform multiple operations, reducing
the amount of code needed to perform a task.

More memory efficient: Because CISC instructions are more complex, they require fewer instructions to
perform complex tasks, which can result in more memory-efficient code.

Widely used: CISC processors have been in use for a longer time than RISC processors, so they have a larger
user base and more available software.

Disadvantages of CISC
Slower execution: CISC processors take longer to execute instructions because they have more complex
instructions and need more time to decode them.

More complex design: CISC processors have more complex instruction sets, which makes them more
difficult to design and manufacture.

Higher power consumption: CISC processors consume more power than RISC processors because of their
more complex instruction sets.

CPU Performance

Both approaches try to increase CPU performance, which is governed by:

CPU Time = (instructions per program) x (cycles per instruction) x (time per cycle)

RISC: Reduces the cycles per instruction at the cost of the number of instructions per program.

CISC: Attempts to minimize the number of instructions per program at the cost of an increase in the number of cycles per instruction.

Earlier, when programming was done in assembly language, a need was felt to make each instruction do more work, because programming in assembly was tedious and error-prone; this is how CISC architecture evolved. With the rise of high-level languages, the dependency on assembly reduced, and RISC architecture prevailed.

Example:

Suppose we have to add two 8-bit numbers:

CISC approach: There will be a single instruction for this, such as ADD, which will perform the whole task.

RISC approach: Here the programmer will first write a load instruction to load data into registers, then apply a suitable operator, and then store the result in the desired location.

So the add operation is divided into parts (load, operate, store), due to which RISC programs are longer and require more memory, but they need fewer transistors because the instructions are less complex.
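To make the contrast concrete, here is an illustrative Python simulation of the two approaches for adding two 8-bit numbers; the addresses and values are made up:

```python
# Illustrative contrast on a hypothetical machine: add two 8-bit numbers
# stored at memory addresses a and b, result to address x.
memory = {0x10: 200, 0x11: 100, 0x12: 0}

def cisc_add(a, b, x):
    # CISC style: one complex instruction does load, evaluate, and store.
    memory[x] = (memory[a] + memory[b]) & 0xFF   # 8-bit wrap-around

def risc_add(a, b, x):
    # RISC style: explicit load / operate / store steps through registers.
    r1 = memory[a]            # LOAD  r1, a
    r2 = memory[b]            # LOAD  r2, b
    r1 = (r1 + r2) & 0xFF     # ADD   r1, r1, r2 (8-bit wrap-around)
    memory[x] = r1            # STORE r1, x

cisc_add(0x10, 0x11, 0x12)
print(memory[0x12])  # 200 + 100 = 300, which wraps to 44 in 8 bits
```

Both produce the same result; the RISC version simply exposes the load/operate/store steps that the CISC instruction hides inside its microcode.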

RISC vs CISC

RISC | CISC
Focus on software | Focus on hardware
Uses only a hardwired control unit | Uses both hardwired and microprogrammed control units
Transistors are used for more registers | Transistors are used for storing complex instructions
Fixed-size instructions | Variable-size instructions
Can perform only register-to-register arithmetic operations | Can perform REG to REG, REG to MEM, or MEM to MEM operations
Requires more registers | Requires fewer registers
Code size is large | Code size is small
An instruction executes in a single clock cycle | An instruction takes more than one clock cycle
An instruction fits in one word | Instructions are larger than one word
Simple and limited addressing modes | Complex and more numerous addressing modes
RISC is Reduced Instruction Set Computer | CISC is Complex Instruction Set Computer
The number of instructions is smaller compared to CISC | The number of instructions is larger compared to RISC
Consumes low power | Consumes more power
Highly pipelined | Less pipelined
Requires more RAM | Requires less RAM
Fewer addressing modes | More addressing modes

Ques11: What are the main advantages of using Input and Output interfaces? Why Interfacing is
used in digital computer?
Answer: Input and output (I/O) interfaces serve critical functions in digital computers, enabling communication
between the computer system and external devices. Here are the main advantages of using input and output
interfaces:

1. **Device Connectivity:** I/O interfaces provide the means to connect various peripheral devices to the
computer system, including keyboards, mice, displays, printers, storage devices, network adapters, and more.
This connectivity allows users to interact with the computer and facilitates data exchange with external
devices.
2. **Data Transfer:** I/O interfaces facilitate the transfer of data between the computer system and external
devices. This data transfer can be in the form of input (receiving data from external devices into the computer)
or output (sending data from the computer to external devices). Efficient data transfer is crucial for tasks such
as file storage, printing, communication, and multimedia processing.

3. **Functionality Expansion:** By supporting a wide range of peripheral devices, I/O interfaces enable the
expansion of the computer's functionality. Users can connect additional devices to enhance productivity,
extend storage capacity, improve multimedia capabilities, and more. This flexibility allows computers to adapt
to diverse user needs and applications.

4. **User Interaction:** Input interfaces, such as keyboards, mice, touchscreens, and voice recognition
systems, enable users to interact with the computer system. These interfaces provide intuitive ways for users to
input commands, manipulate data, navigate graphical user interfaces (GUIs), and control applications. Output
interfaces, such as displays, speakers, and printers, present information to users in a comprehensible format,
enhancing user experience and productivity.

5. **Peripheral Control:** I/O interfaces facilitate the control and management of peripheral devices
connected to the computer system. Through standardized protocols and communication mechanisms, the
computer can communicate with peripherals, send commands, receive status updates, and coordinate their
operation. This control enables efficient utilization of peripheral resources and ensures compatibility with the
computer system.

Interfacing is used in digital computers for several reasons:

1. **Compatibility:** Interfacing allows different components and devices to communicate and interact with
each other, even if they are manufactured by different vendors or follow different standards. By providing
standardized interfaces and protocols, interfacing ensures interoperability and compatibility between hardware
and software components.

2. **Integration:** Interfacing facilitates the integration of diverse hardware and software components into a
cohesive system. Through well-defined interfaces and communication protocols, components can exchange
data, synchronize their operation, and collaborate to perform complex tasks. This integration enables the
creation of sophisticated computer systems with diverse capabilities.

3. **Functionality Enhancement:** Interfacing enables the expansion and enhancement of the functionality of
digital computers. By connecting peripherals, sensors, actuators, and other devices to the computer system,
interfacing extends its capabilities beyond basic computation and data processing. This functionality
enhancement enables computers to interact with the physical world, automate tasks, gather information from
external sources, and perform a wide range of specialized functions.

4. **Interoperability:** Interfacing promotes interoperability between different systems, platforms, and


technologies. By providing standardized interfaces and communication protocols, interfacing allows computers
to exchange data and interact with external systems, networks, and services. This interoperability enables
seamless integration of digital computers into diverse environments and facilitates collaboration between
heterogeneous systems.

Overall, input and output interfaces are essential components of digital computers, enabling connectivity, data
transfer, user interaction, peripheral control, functionality expansion, compatibility, integration, and
interoperability. Through effective interfacing, digital computers can interact with the external world, support
diverse applications, and perform a wide range of tasks to meet user needs and requirements.

Ques12: What is Instruction Pipeline? Explain working process of Instruction Pipeline with suitable
Example.

Answer: Pipeline processing can occur not only in the data stream but in the instruction stream as well.
Most of the digital computers with complex instructions require instruction pipeline to carry out
operations like fetch, decode and execute instructions.

In general, the computer needs to process each instruction with the following sequence of steps.

1. Fetch instruction from memory.

2. Decode the instruction.

3. Calculate the effective address.

4. Fetch the operands from memory.

5. Execute the instruction.

6. Store the result in the proper place.

Each step is executed in a particular segment, and there are times when different segments may take
different times to operate on the incoming information. Moreover, there are times when two or more
segments may require memory access at the same time, causing one segment to wait until another is
finished with the memory.

The organization of an instruction pipeline will be more efficient if the instruction cycle is divided into
segments of equal duration. One of the most common examples of this type of organization is a Four-
segment instruction pipeline.

A four-segment instruction pipeline combines two or more of the above steps into single segments. For instance, the decoding of the instruction can be combined with the calculation of the effective address into one segment.

The following block diagram shows a typical example of a four-segment instruction pipeline. The
instruction cycle is completed in four segments.

Segment 1:
The instruction fetch segment can be implemented using a first-in, first-out (FIFO) buffer.

Segment 2:
The instruction fetched from memory is decoded in the second segment, and eventually, the effective
address is calculated in a separate arithmetic circuit.

Segment 3:

An operand from memory is fetched in the third segment.

Segment 4:
The instructions are finally executed in the last segment of the pipeline organization.

An instruction pipeline is a technique used in computer architecture to improve the performance and
throughput of a CPU by allowing multiple instructions to be processed concurrently, overlapping their
execution stages. It divides the instruction execution process into several stages, with each stage
handling a specific task. This allows multiple instructions to be in various stages of execution
simultaneously, increasing overall throughput and efficiency.

The working process of an instruction pipeline typically involves the following stages:

1. **Instruction Fetch (IF):** In this stage, the CPU fetches the next instruction from memory. The
program counter (PC) is used to determine the address of the next instruction to be fetched. The fetched
instruction is then placed into an instruction register.

2. **Instruction Decode (ID):** The fetched instruction is decoded in this stage. The CPU determines the
type of instruction and extracts the necessary operands and operation codes.

3. **Execution (EX):** In this stage, the CPU executes the instruction. Depending on the instruction type,
various arithmetic, logic, or control operations are performed. For example, if the instruction is an
arithmetic operation, the CPU performs the required calculation.

4. **Memory Access (MEM):** This stage is responsible for accessing memory if the instruction requires
it. For memory-related instructions such as load or store operations, data is read from or written to
memory.

5. **Write Back (WB):** In this final stage, the results of the instruction execution are written back to the
appropriate registers or memory locations. For example, the result of an arithmetic operation may be
stored in a register.

Now, let's illustrate the working process of an instruction pipeline with a simple example using a
hypothetical 5-stage pipeline:

Consider the following sequence of instructions:

1. Load R1, [address1] // Load data from memory address1 into register R1

2. Add R2, R1, R3 // Add contents of R1 and R3, store result in R2

3. Store R2, [address2] // Store data from R2 into memory address2

Here's how the instruction pipeline would process these instructions:

1. **Cycle 1 (IF):** - Instruction 1 is fetched from memory and placed into the instruction register.
2. **Cycle 2 (ID):**- Instruction 1 is decoded. The CPU determines that it is a load instruction and extracts
the necessary operands.

3. **Cycle 3 (EX):** - Instruction 1 is executed. For a load, the CPU computes the effective memory address in this stage.

4. **Cycle 4 (MEM):** - Memory is accessed at the computed address and the data is read.

5. **Cycle 5 (WB):** - The loaded value is written back to register R1.

While instruction 1 moves through these stages, instruction 2 enters IF in cycle 2 and instruction 3 in cycle 3, so several instructions are in flight at once; this overlap is what raises throughput.
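The overlap can be visualized with a small Python sketch that prints which stage each instruction occupies in each cycle. It assumes an ideal pipeline with no stalls; in reality the ADD depends on the LOAD's result and may need forwarding or a bubble:

```python
# Sketch of an ideal 5-stage pipeline schedule (no hazards or stalls modeled).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
instructions = ["LOAD R1", "ADD R2", "STORE R2"]

def schedule(n_instr):
    # In an ideal pipeline, instruction i occupies stage s during cycle i + s
    # (0-indexed): each instruction starts one cycle after the previous one.
    table = {}
    for i in range(n_instr):
        for s, name in enumerate(STAGES):
            table[(i, i + s)] = name
    return table

table = schedule(len(instructions))
total_cycles = len(instructions) + len(STAGES) - 1   # 3 + 5 - 1 = 7 cycles
for i, instr in enumerate(instructions):
    row = [table.get((i, c), "..") for c in range(total_cycles)]
    print(f"{instr:9s} " + " ".join(f"{x:>4s}" for x in row))
```

Run sequentially, the three instructions would need 3 x 5 = 15 cycles; pipelined, they finish in 7, which is where the throughput gain comes from.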

Ques13: Write the characteristics of Multiprocessors.

Answer: A multiprocessor is a single computer that has multiple processors. The processors in a multiprocessor system can communicate and cooperate at various levels to solve a given problem. Communication between the processors takes place by sending messages from one processor to another, or by sharing a common memory.

Characteristics of Multiprocessor

The major characteristics of multiprocessors are as follows −

• Parallel Computing − This involves the simultaneous application of multiple processors. These
processors are developed using a single architecture to execute a common task. In general,
processors are identical and they work together in such a way that the users are under the
impression that they are the only users of the system. In reality, however, many users are
accessing the system at a given time.
• Distributed Computing − This involves the usage of a network of processors. Each processor in
this network can be considered as a computer in its own right and have the capability to solve a
problem. These processors are heterogeneous, and generally, one task is allocated to a single
processor.
• Supercomputing − This involves the usage of the fastest machines to resolve big and
computationally complex problems. In the past, supercomputing machines were vector
computers but at present, vector or parallel computing is accepted by most people.
• Pipelining − This is a method wherein a specific task is divided into several subtasks that must
be performed in a sequence. The functional units help in performing each subtask. The units are
attached serially and all the units work simultaneously.
• Vector Computing − It involves the usage of vector processors, wherein operations such as
‘multiplication’ are divided into many steps and are then applied to a stream of operands
(“vectors”).
• Systolic − This is similar to pipelining, but units are not arranged in a linear order. The steps in
systolic are normally small and more in number and performed in a lockstep manner. This is
more frequently applied in special-purpose hardware such as image or signal processors.
Ques14: Describe Memory Hierarchy?

Answer: The Computer memory hierarchy looks like a pyramid structure which is used to describe the
differences among memory types. It separates the computer storage based on hierarchy.

Level 0: CPU registers

Level 1: Cache memory

Level 2: Main memory or primary memory

Level 3: Magnetic disks or secondary memory

Level 4: Optical disks or magnetic types or tertiary Memory

In the memory hierarchy, capacity increases while speed and cost per bit decrease as we move down the levels. The devices are arranged from fast to slow, that is, from registers to tertiary memory.

Let us discuss each level in detail:

Level-0 − Registers

The registers are present inside the CPU, so they have the least access time. Registers are the most expensive and the smallest in size, generally less than a kilobyte in total. They are implemented using flip-flops.

Level-1 − Cache

Cache memory is used to store the segments of a program that are frequently accessed by the processor.
It is expensive and smaller in size generally in Megabytes and is implemented by using static RAM.

Level-2 − Primary or Main Memory


It directly communicates with the CPU and with auxiliary memory devices through an I/O processor. Main
memory is less expensive than cache memory and larger in size generally in Gigabytes. This memory is
implemented by using dynamic RAM.

Level-3 − Secondary storage

Secondary storage devices like Magnetic Disk are present at level 3. They are used as backup storage.
They are cheaper than main memory and larger in size generally in a few TB.

Level-4 − Tertiary storage

Tertiary storage devices like magnetic tape are present at level 4. They are used to store removable files
and are the cheapest and largest in size (1-20 TB).

Let us see the memory levels in terms of size, access time, and bandwidth:

Level        Register        Cache           Primary memory    Secondary memory
Bandwidth    4k-32k MB/sec   800-5k MB/sec   400-2k MB/sec     4-32 MB/sec
Size         less than 1 KB  less than 4 MB  less than 2 GB    greater than 2 GB
Access time  2-5 ns          3-10 ns         80-400 ns         5 ms
Managed by   Compiler        Hardware        Operating system  OS or user

Why is a memory hierarchy used in systems? A memory hierarchy arranges the different kinds of storage
present on a computing device according to speed of access. At the very top, the highest-performing storage
is the CPU registers, which are the fastest to read and write. Next comes cache memory, followed by
conventional DRAM, followed by disk storage with different levels of performance, including SSDs and
optical and magnetic disk drives.

To bridge the processor-memory performance gap, hardware designers increasingly rely on memory near the
top of the hierarchy. This is done through larger and larger cache hierarchies, which processors can access
much faster, reducing the dependency on the slower main memory.
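The benefit of the fast upper levels can be quantified with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The sketch below is illustrative only; the timing numbers are hypothetical, not taken from the table above:

```python
def avg_access_time(hit_time_ns, miss_rate, miss_penalty_ns):
    """AMAT = hit time + miss rate * miss penalty (all times in ns)."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical numbers: 5 ns cache hit, 5% miss rate, 100 ns DRAM access.
# Even with a modest cache, the average is far below the DRAM latency.
amat = avg_access_time(5, 0.05, 100)
```

With these assumed numbers the average access is 10 ns, close to the cache latency rather than the DRAM latency, which is exactly why caches close the processor-memory gap.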

Ques15: Distinguish between hardwired controls and microprogrammed controls.


Answer: The control unit generates control signals using one of two organizations: the hardwired control
unit or the microprogrammed control unit.

Hardwired Control Unit:


It is implemented as a logic circuit (gates, flip-flops, decoders, etc.) in hardware. This organization becomes
very complicated when the control unit is large.

In this organization, modifying or changing the design requires changes to the wiring among the various
components, so modifying all the combinational circuits can be very difficult.

Microprogrammed Control Unit:


A microprogrammed control unit is implemented using a programming approach. A sequence of micro-
operations is carried out by executing a program consisting of micro-instructions.

The micro-program, consisting of micro-instructions, is stored in the control memory of the control unit.
Executing a micro-instruction generates a set of control signals.

Difference between Hardwired and Microprogrammed Control Unit:

1. Speed: A hardwired control unit is fast; a microprogrammed control unit is slow.

2. Cost of implementation: Hardwired is more costly; microprogrammed is cheaper.

3. Flexibility: Hardwired is not flexible to accommodate new system specifications or new
instructions (a redesign is required); microprogrammed is more flexible.

4. Ability to handle complex instructions: Difficult for hardwired; easier for microprogrammed.

5. Decoding: Hardwired needs complex decoding and sequencing logic; in a microprogrammed
unit the decoding and sequencing logic is easier.

6. Applications: Hardwired is used in RISC microprocessors; microprogrammed in CISC microprocessors.

7. Instruction set size: Small for hardwired; large for microprogrammed.

8. Control memory: Absent in hardwired; present in microprogrammed.

9. Chip area required: Less for hardwired; more for microprogrammed.

10. Occurrence of errors: More likely in hardwired designs; less likely in microprogrammed ones.
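To make the microprogrammed idea concrete, here is a minimal, hypothetical Python sketch: control memory is a table of micro-instructions, each holding a set of control signals and the address of the next micro-instruction. The signal names and addresses are invented for illustration only, not taken from any real machine:

```python
# Hypothetical control memory: address -> (control signals, next address).
CONTROL_MEMORY = {
    0: ({"mem_read", "load_ir"}, 1),   # fetch the instruction
    1: ({"decode"}, 2),                # decode it
    2: ({"alu_add", "load_ac"}, 0),    # execute, then return to fetch
}

def run_microprogram(steps):
    """Step through control memory, collecting the signals asserted per step."""
    addr = 0
    trace = []
    for _ in range(steps):
        signals, addr = CONTROL_MEMORY[addr]
        trace.append(sorted(signals))
    return trace
```

Changing the machine's behavior here means editing the table (the control memory), not rewiring logic, which is exactly the flexibility advantage listed above.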


Ques16: Describe Computer Instructions.
Answer: Computer organization refers to the way in which the components of a computer system are organized
and interconnected to perform specific tasks. One of the most fundamental aspects of computer organization
is the set of basic computer instructions that the system can execute.

Basic computer instructions are the elementary operations that a computer system can perform. These
instructions are typically divided into three categories: data movement instructions, arithmetic and logic
instructions, and control instructions.

Data movement instructions are used to move data between different parts of the computer system. These
instructions include load and store instructions, which move data between memory and the CPU, and
input/output (I/O) instructions, which move data between the CPU and external devices.

Arithmetic and logic instructions are used to perform mathematical operations and logical operations on data
stored in the system. These instructions include add, subtract, multiply, and divide instructions, as well as logic
instructions such as AND, OR, and NOT.

Control instructions are used to control the flow of instructions within the computer system. These
instructions include branch instructions, which transfer control to different parts of the program based on
specified conditions, and jump instructions, which transfer control to a specified memory location.

The basic computer has a 16-bit instruction register (IR), whose contents denote either a memory-reference,
a register-reference, or an input/output instruction.

1. Memory Reference – These instructions refer to a memory address as one operand; the other operand is
always the accumulator. The format specifies a 12-bit address, a 3-bit opcode (other than 111), and a 1-bit
addressing mode for direct or indirect addressing.
Example – the IR contains 0001XXXXXXXXXXXX, i.e. ADD. After fetching and decoding the instruction,
we find that it is a memory-reference instruction for the ADD operation.

Hence, DR ← M[AR]
AC ← AC + DR, SC ← 0

2. Register Reference – These instructions perform operations on registers rather than memory
addresses. IR(14–12) is 111 (differentiating them from memory-reference instructions) and IR(15) is 0
(differentiating them from input/output instructions). The remaining 12 bits specify the register operation.
Example – the IR contains 0111001000000000, i.e. CMA. After the fetch and decode cycle, we find
that it is a register-reference instruction to complement the accumulator.
Hence, AC ← ~AC

3. Input/Output – These instructions handle communication between the computer and the outside
environment. IR(14–12) is 111 (differentiating them from memory-reference instructions) and IR(15) is 1
(differentiating them from register-reference instructions). The remaining 12 bits specify the I/O
operation.
Example – the IR contains 1111100000000000, i.e. INP. After the fetch and decode cycle, we find
that it is an input/output instruction to input a character. Hence, a character is input from the
peripheral device.
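The three-way decoding rule above (bit 15 together with bits 14-12 of the IR) can be sketched in a few lines of Python. This is an illustrative model of the classification step only, not a full simulator:

```python
def decode(ir):
    """Classify a 16-bit instruction word of the basic computer."""
    opcode = (ir >> 12) & 0b111       # bits 14-12
    i_bit = (ir >> 15) & 1            # bit 15
    operand = ir & 0x0FFF             # low 12 bits
    if opcode != 0b111:               # memory-reference instruction
        mode = "indirect" if i_bit else "direct"
        return ("memory-reference", mode, operand)
    if i_bit == 0:                    # 0111 xxxxxxxxxxxx: register-reference
        return ("register-reference", operand)
    return ("input-output", operand)  # 1111 xxxxxxxxxxxx: input/output
```

For instance, 0x7200 (the CMA example above) classifies as register-reference, while 0x1234 classifies as a direct-mode memory-reference ADD.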
Basic computer instructions are the fundamental operations a computer can perform. They are executed by
the central processing unit (CPU) and form the basis for more complex operations. Some examples of basic
computer instructions include:

1. Load: moves data from memory into a CPU register.

2. Store: moves data from a CPU register into memory.

3. Add: adds two values and stores the result in a register.

4. Subtract: subtracts one value from another and stores the result in a register.

5. Multiply: multiplies two values and stores the result in a register.

6. Divide: divides one value by another and stores the result in a register.

7. Branch: changes the program counter to a specified address; used to implement conditional and
unconditional jumps.

8. Jump: changes the program counter to a specified address.

9. Compare: compares two values and sets a flag indicating the result of the comparison.

10. Increment: adds 1 to a value in a register or memory location.

The set of instructions encoded in the 16-bit IR includes:

Arithmetic, logical and shift instructions (and, add, complement, circulate left, right, etc)

To move information to and from memory (store the accumulator, load the accumulator)

Program control instructions with status conditions (branch, skip)

Input output instructions (input character, output character)

For memory-reference instructions (AND through ISZ), the two hexadecimal codes listed correspond to
direct addressing (I = 0) and indirect addressing (I = 1).

Symbol Hexadecimal Code Description

AND 0xxx 8xxx And memory word to AC

ADD 1xxx 9xxx Add memory word to AC

LDA 2xxx Axxx Load memory word to AC

STA 3xxx Bxxx Store AC content in memory

BUN 4xxx Cxxx Branch Unconditionally



BSA 5xxx Dxxx Branch and Save Return Address

ISZ 6xxx Exxx Increment and skip if 0

CLA 7800 Clear AC

CLE 7400 Clear E (extend/carry bit)

CMA 7200 Complement AC

CME 7100 Complement E

CIR 7080 Circulate right AC and E

CIL 7040 Circulate left AC and E

INC 7020 Increment AC

SPA 7010 Skip next instruction if AC > 0

SNA 7008 Skip next instruction if AC < 0

SZA 7004 Skip next instruction if AC = 0

SZE 7002 Skip next instruction if E = 0

HLT 7001 Halt computer

INP F800 Input character to AC

OUT F400 Output character from AC

SKI F200 Skip on input flag

SKO F100 Skip on output flag

ION F080 Interrupt On

IOF F040 Interrupt Off
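As an illustrative exercise, the opcode table above can be turned into a small disassembler sketch in Python. The function and table names are invented; the hexadecimal codes come from the table:

```python
# Memory-reference mnemonics, indexed by the 3-bit opcode (000-110).
MRI = ["AND", "ADD", "LDA", "STA", "BUN", "BSA", "ISZ"]

# Register-reference and input/output codes, straight from the table.
NON_MRI = {
    0x7800: "CLA", 0x7400: "CLE", 0x7200: "CMA", 0x7100: "CME",
    0x7080: "CIR", 0x7040: "CIL", 0x7020: "INC", 0x7010: "SPA",
    0x7008: "SNA", 0x7004: "SZA", 0x7002: "SZE", 0x7001: "HLT",
    0xF800: "INP", 0xF400: "OUT", 0xF200: "SKI", 0xF100: "SKO",
    0xF080: "ION", 0xF040: "IOF",
}

def disassemble(word):
    """Return the assembly form of one 16-bit instruction word."""
    opcode = (word >> 12) & 0b111
    if opcode != 0b111:                          # memory-reference
        suffix = " I" if word & 0x8000 else ""   # indirect bit set?
        return f"{MRI[opcode]} {word & 0x0FFF:03X}{suffix}"
    return NON_MRI.get(word, "???")              # register-reference / I/O
```

For example, 0x1234 disassembles to "ADD 234" and 0x9234 to "ADD 234 I", matching the two-code convention for direct and indirect addressing.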

Uses of Basic Computer Instructions:


Some of the key uses of basic computer instructions include:

Data manipulation: Basic computer instructions are used to manipulate data stored in the computer system,
including moving data between memory and the CPU, performing mathematical operations, and performing
logical operations.

Control flow: Basic computer instructions are used to control the flow of instructions within the computer
system. This includes branching to different parts of the program based on specified conditions and jumping to
a specific memory location.

Input/output operations: Basic computer instructions are used to transfer data between the computer system
and external devices, such as input devices (e.g. keyboard, mouse) and output devices (e.g. display screen,
printer).

Program execution: Basic computer instructions are used to execute computer programs and run software
applications. These instructions are used to load programs into memory, move data into and out of the
program, and control the execution of the program.

System maintenance: Basic computer instructions are used to perform system maintenance tasks, such as
memory allocation and deallocation, interrupt handling, and error detection and correction.

Ques17: What is program control?


Answer: Program control refers to the ability of a computer program to execute instructions in a specific
sequence, make decisions based on conditions, and control the flow of execution. In essence, it involves
directing the execution path of a program based on various conditions, such as user input, program state, or
predefined rules. Program control mechanisms allow developers to create logic structures, loops, and decision-
making processes within their code, enabling the program to perform tasks efficiently and accurately. Here are
some key aspects of program control:

1. **Sequential Execution:** Program control allows instructions to be executed in a sequential order,


following the order in which they are written in the program code. This ensures that tasks are performed in a
predefined sequence, with each instruction executed one after the other.

2. **Conditional Execution:** Program control enables the execution of certain instructions or code blocks
based on specific conditions. Conditional statements, such as if-else statements and switch-case statements,
allow the program to make decisions and choose different execution paths depending on the values of
variables or other factors.

3. **Repetitive Execution (Loops):** Program control facilitates the repetition of instructions or code blocks
multiple times, using loops or iteration constructs. Loops allow developers to perform repetitive tasks
efficiently without the need to write the same instructions multiple times.

4. **Subroutine and Function Calls:** Program control allows the invocation of subroutines or functions within
a program. Subroutines and functions encapsulate specific tasks or functionalities, enabling code reuse and
modular programming. The program can call these subroutines or functions to perform specific tasks when
needed.

5. **Exception Handling:** Program control includes mechanisms for handling exceptions, errors, and
unexpected conditions that may occur during program execution. Exception handling allows the program to
detect and respond to errors gracefully, preventing program crashes and ensuring robustness.

Overall, program control is fundamental to the design and implementation of computer programs, enabling
developers to create complex, responsive, and efficient software systems. It provides the means to direct the
flow of execution, make decisions, perform repetitive tasks, and handle various runtime conditions, ultimately
enabling the program to achieve its intended functionality.
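The aspects above (sequential, conditional, and repetitive execution, function calls, and exception handling) can all be seen together in one short, self-contained Python sketch; the function names are invented for illustration:

```python
def sum_first(n):
    """Sum of 0..n-1, showing sequential, conditional and loop control."""
    if n < 0:                                    # conditional execution
        raise ValueError("n must be non-negative")
    total = 0
    for i in range(n):                           # repetitive execution (loop)
        total += i                               # sequential execution
    return total                                 # control returns to caller


def safe_sum_first(n):
    """Exception handling: recover gracefully instead of crashing."""
    try:
        return sum_first(n)                      # subroutine/function call
    except ValueError:
        return 0                                 # handled error path
```

Calling safe_sum_first(-1) exercises the exception path and returns the fallback value instead of terminating the program.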
Ques18: What is a multiplication algorithm?
Answer: In computer organization and architecture, multiplication algorithms are designed to efficiently
perform the multiplication operation using hardware circuits or software routines. These algorithms are crucial
for arithmetic operations in digital systems, such as CPUs and digital signal processors (DSPs). Here are some
common multiplication algorithms used in computer architecture:

1. **Binary Multiplication (Shift and Add):** This is the simplest and most fundamental multiplication
algorithm used in digital systems. It involves multiplying two binary numbers by shifting one operand (the
multiplicand) and adding it to a running sum based on the corresponding bits of the other operand (the
multiplier). The result is obtained by accumulating the partial products.

2. **Booth's Algorithm:** Booth's algorithm is an optimization of binary multiplication that reduces the
number of additions required by detecting patterns of consecutive 1s or 0s in the multiplier. It performs fewer
additions compared to the simple shift-and-add method by using signed digit representation and bit-wise
arithmetic operations.

3. **Array Multiplier:** An array multiplier is a hardware circuit that implements multiplication using an array
of binary adders arranged in rows and columns. Each row corresponds to a digit of the multiplier, and each
column corresponds to a digit of the multiplicand. The partial products are generated in parallel, and their
sums yield the final product.

4. **Wallace Tree Multiplier:** The Wallace Tree Multiplier is a high-speed, parallel multiplier architecture that
reduces the number of partial product bits generated compared to traditional array multipliers. It uses a series
of carry-save adders and compressors to reduce the number of partial product bits and improve performance.

5. **Sequential Multiplier:** Sequential multipliers, also known as iterative or iterative array multipliers,
perform multiplication sequentially by computing one partial product at a time. They are simpler and require
fewer hardware resources compared to parallel multipliers but are generally slower.

6. **Shift-and-Multiply Algorithm:** This algorithm is commonly used in software implementations of


multiplication. It involves shifting one operand (the multiplicand) and adding it to a running sum based on the
corresponding bits of the other operand (the multiplier), similar to binary multiplication in hardware. However,
it operates on binary, decimal, or other numeral systems and is implemented using loops and conditional
statements in software.
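The shift-and-add scheme described above (items 1 and 6) can be sketched in a few lines of Python for non-negative integers. This is an illustrative software model, not a hardware description:

```python
def shift_add_multiply(multiplicand, multiplier):
    """Shift-and-add multiplication of two non-negative integers."""
    product = 0
    while multiplier:
        if multiplier & 1:              # current multiplier bit is 1:
            product += multiplicand     #   accumulate the partial product
        multiplicand <<= 1              # shift the multiplicand left
        multiplier >>= 1                # examine the next multiplier bit
    return product
```

Each loop iteration handles one multiplier bit, so an n-bit multiplier needs at most n add-and-shift steps.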

These multiplication algorithms are designed to balance factors such as speed, hardware complexity, and
resource utilization to meet the requirements of specific applications and hardware platforms. In computer
architecture, the choice of multiplication algorithm depends on factors such as the target hardware
architecture, performance requirements, power constraints, and area constraints.
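Booth's algorithm described above can also be sketched behaviorally in Python for fixed-width two's-complement operands. This is an illustrative model only; the register names A, S, and P follow the common textbook presentation and are not from the source:

```python
def booth_multiply(m, r, bits):
    """Booth's algorithm for two `bits`-wide two's-complement integers.

    A holds the multiplicand in the high bits, S its negation, and P the
    running product with the multiplier and one appended bit in the low bits.
    """
    mask = (1 << bits) - 1
    width = 2 * bits + 1                    # width of A, S and P
    full = (1 << width) - 1
    A = (m & mask) << (bits + 1)            # +multiplicand, high part
    S = ((-m) & mask) << (bits + 1)         # -multiplicand, high part
    P = (r & mask) << 1                     # multiplier with appended 0 bit
    for _ in range(bits):
        pair = P & 0b11                     # two lowest bits decide the step
        if pair == 0b01:
            P = (P + A) & full              # 01: add the multiplicand
        elif pair == 0b10:
            P = (P + S) & full              # 10: subtract the multiplicand
        sign = P >> (width - 1)             # arithmetic right shift by one
        P = (P >> 1) | (sign << (width - 1))
    P >>= 1                                 # drop the appended bit
    if P >> (2 * bits - 1):                 # reinterpret as signed product
        P -= 1 << (2 * bits)
    return P
```

Runs of identical multiplier bits (pair 00 or 11) need only a shift, which is where Booth's method saves additions compared to plain shift-and-add.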

Ques19: What is flip flop? Types of flip flop

Answer: Flip-flops are circuits that maintain a given state until directed by their inputs to change that
state. A basic flip-flop (latch) can be constructed from two cross-coupled NOR gates or two cross-coupled
NAND gates. The main types are:

S-R Flip-Flop, J-K Flip-Flop, T Flip-Flop, D Flip-Flop
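The behavior of the D and J-K types can be modeled with a small Python sketch. This is a behavioral illustration, not a gate-level model, and the class names are invented:

```python
class DFlipFlop:
    """On each clock pulse, output Q takes the value of input D."""
    def __init__(self):
        self.q = 0

    def clock(self, d):
        self.q = 1 if d else 0
        return self.q


class JKFlipFlop:
    """J=K=0 holds, J=1 sets, K=1 resets, J=K=1 toggles."""
    def __init__(self):
        self.q = 0

    def clock(self, j, k):
        if j and k:
            self.q ^= 1          # toggle
        elif j:
            self.q = 1           # set
        elif k:
            self.q = 0           # reset
        return self.q            # J = K = 0: hold previous state
```

The J-K type removes the forbidden S=R=1 input of the S-R flip-flop by defining that combination as a toggle, and a T flip-flop is a J-K with J and K tied together.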

Ques20: What is Register?


Answer: In computer architecture, a register is a small, high-speed storage area located within the CPU (Central
Processing Unit) or other processing units. Registers are used to store data temporarily during the execution of
computer programs and play a crucial role in the operation of the CPU. They are the fastest type of memory
available in a computer system, with access times measured in nanoseconds.

Registers are typically implemented as groups of flip-flops, which are electronic circuits capable of storing
binary data (0s and 1s). Each register can hold a fixed number of bits, typically ranging from 8 bits (1 byte) to 64
bits or more, depending on the computer architecture.

Registers serve several important functions in a computer system:

1. **Data Storage:** Registers are used to store operands, intermediate results, and other data needed for
arithmetic, logical, and data manipulation operations performed by the CPU.

2. **Instruction Execution:** Registers hold the instruction currently being executed by the CPU, as well as the
memory address of the next instruction to be fetched from memory.

3. **Addressing:** Registers are used to hold memory addresses and pointers, facilitating memory access and
data transfer between the CPU and main memory (RAM).

4. **Control and Status:** Registers store control and status information used by the CPU to manage the
execution of instructions, handle interrupts, and coordinate operations with other hardware components.

Registers are organized into different types based on their functions and usage:

1. **General-Purpose Registers:** These registers can store any type of data and are used for various
purposes, such as holding operands, intermediate results, and memory addresses. Examples include the
accumulator, data registers, and index registers.
2. **Special-Purpose Registers:** These registers serve specific functions within the CPU, such as storing the
program counter (PC), instruction register (IR), memory address register (MAR), memory data register (MDR),
and status flags (e.g., zero flag, carry flag).

3. **Control Registers:** These registers control the operation of specific CPU features or functions, such as
memory management, interrupt handling, and mode switching (e.g., supervisor mode vs. user mode).
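As a behavioral illustration (not from the source), a bank of general-purpose registers can be modeled as a small Python class that masks written values to the register width, mimicking fixed-width hardware storage:

```python
class RegisterFile:
    """A bank of `count` general-purpose registers, each `width` bits wide."""
    def __init__(self, count=8, width=16):
        self.mask = (1 << width) - 1
        self.regs = [0] * count

    def read(self, idx):
        return self.regs[idx]

    def write(self, idx, value):
        self.regs[idx] = value & self.mask   # truncate to register width
```

Writing a value wider than the register silently drops the high bits, just as a real 8- or 16-bit register would.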
