Unit 1&2

A computer is a device that processes data using binary language (0s and 1s) and requires instructions from users through software to perform tasks. The document explains the structure and function of computer systems, including the roles of hardware, software, and the hierarchy of computer architecture. It also discusses design issues in computer systems and differentiates between computer architecture and organization.

Uploaded by shruti.bhadviya

A simple understanding of Computer

A computer is a device that makes our work easy, helping us complete tasks quickly and accurately.
A computer does not have a brain like a human being. We have to give it instructions for every situation: what data to expect (and of what type), how to process it (how to perform calculations), and where to store it.
We humans understand language composed of words, which in turn are composed of letters. Computers do not understand our language or words like "hello", "good morning", or "discipline". They understand only binary language, whose vocabulary contains just two letters, states, or symbols: 0 and 1, true and false, on and off.
But how does a transistor get its value?
When a small electric current passes through a transistor, it holds the state 1; when there is no current, it holds the state 0.
Then how is this all connected to the computer?
These 0s and 1s form the building blocks of the computer. With combinations of 0 and 1 we can create a whole new language.
For example, 0 can be written as 0,
1 as 1
2 as 10
3 as 11
4 as 100
5 as 101
a as 01100001
A as 01000001
s as 01110011
U as 01010101
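The encodings listed above can be checked with a short Python sketch, using the built-in bin(), ord(), and format() functions (the character codes shown are standard ASCII):

```python
# Integers map directly to their binary form.
for n in [0, 1, 2, 3, 4, 5]:
    print(n, "->", bin(n)[2:])

# Characters map to binary via their ASCII code points,
# padded to 8 bits to match the table above.
for ch in ["a", "A", "s", "U"]:
    print(ch, "->", format(ord(ch), "08b"))
```

Running this reproduces the table: 5 prints as 101, 'a' as 01100001, and so on.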
Transistors are used to maintain these states.
A transistor is a tiny device that stores one of two values: 1 and 0, or on and off.
If the transistor is on, we say it has the value 1; if it is off, the value is 0.
For example, a memory chip contains hundreds of millions or even billions of transistors, each of which can be switched on or off individually. Since each transistor stores one of two distinct values, millions of different values can be stored on a memory chip, consisting entirely of 0s and 1s.
So now the question arises: how can a human remember this code? It seems impossible!
Fortunately, we don't have to remember it. We just use our own language, and software (also built by humans) converts our ordinary letters into binary.
What is software?
Software is a set of instructions that tells the computer what to do, when to do it, and how to do it.
Examples include Microsoft Paint, WhatsApp, and games; all of these are different types of software.
Suppose we want to add two numbers and learn that 2 + 2 is 4. We must give the computer instructions:
Step 1: Take the two values.
Step 2: Store the two values.
Step 3: Add the two values using the + operator.
Step 4: Save the answer.
Separate instructions are provided for the + operator, so the computer knows how to perform addition when it encounters the + sign.
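The four steps above can be sketched directly in Python (the variable names here are illustrative, not from the text):

```python
# Step 1: take two values
x, y = 2, 2
# Step 2: store the two values (here, in the variables x and y)
# Step 3: add the two values using the + operator
result = x + y
# Step 4: save the answer (here, print it)
print(result)  # 4
```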
So who converts this code? Instead of who, we should ask what converts the code.
The answer is a piece of software called an interpreter, which interprets our language code into binary code. An interpreter (or, alternatively, a compiler) converts our code into the machine language that the computer can understand.
Now the question is: how do we give our input?
We give input through hardware such as a scanner, keyboard, or mouse (not the one that eats cheese).
When we give input through hardware, software translates it into machine language; it is then processed and the output is displayed.
Process:
If we want to display the letter 'A' on screen, we first open Notepad. Then we press the Caps Lock or Shift key to make the letter capital, and then press the letter 'a'.
Our screen will show the letter 'A'.
Under the hood:
When we press the Caps Lock or Shift key, the software notes that whatever follows should be capitalized. After we press the lowercase letter 'a', the software converts it into binary, just as it converted the Shift or Caps Lock key press; once the computer has interpreted it, it prints 'A' on the screen.
Issues in Computer Design
Computer design is the structure in which components relate to each other. The designer deals with one level of the system at a time, and different types of issues arise at different levels. At each level, the designer is concerned with structure and function. Structure is the skeleton of the various components and how they relate to each other for communication; function is the set of activities the system performs.
Following are the issues in computer design:
1. Assumption of infinite speed:
The computer cannot be assumed to have infinite speed, as this is not practical; such an assumption also distorts the designer's thinking.
2. Assumption of infinite memory:
Like speed, memory cannot be assumed to be infinite. Storage is always finite, and this is an issue in computer design.
3. Speed mismatch between memory and processor:
The speeds of memory and processor often do not match; either the memory or the processor may be the faster of the two. This mismatch creates problems in design.
4. Handling of bugs and errors:
Handling bugs and errors is a major responsibility of any computer designer. Bugs and errors can lead to the failure of the computer system, and some errors can be particularly damaging.
5. Multiple processors:
Designing a computer system with multiple processors brings a huge management and programming task, making it a big issue in computer design.
6. Multiple threads:
A computer system with multiple threads is always a challenge for the designer, because a multithreaded system must be capable of multitasking and multiprocessing.
7. Shared memory:
If several processes execute at the same time, they share the same memory space. This must be managed carefully so that collisions do not happen.
8. Disk access:
Disk management is key to computer design, and there are several issues with disk access; for example, the system may not support multiple simultaneous disk accesses.
9. Better performance:
This is always an issue. A designer always tries to simplify the system for better performance, lower power consumption, and lower cost.
Computer System Level Hierarchy
The computer system level hierarchy is the combination of different levels that connect the computer with the user and make use of the computer possible. It describes how computational activities are performed on the computer and shows all the elements used at the different levels of the system.
Computer System Level Hierarchy consists of seven levels:

Level-0:
It is related to digital logic. Digital logic is the basis for digital computing and provides a
fundamental understanding of how circuits and hardware communicate within a computer. It
consists of various circuits and gates etc.
Level-1:
This level is related to control. Control is the level where microcode is used in the system. Control
units are included in this level of the computer system.
Level-2:
This level consists of machines. Different types of hardware are used in the computer system to
perform different types of activities. It contains instruction set architecture.
Level-3:
System software is part of this level. System software comes in various types; it mainly helps operate the process and establishes the connection between the hardware and the user interface. It may consist of an operating system, library code, etc.
Level-4:
Assembly language is the next level of the computer system. High-level languages are translated into assembly language, which is in turn assembled into the machine code that the hardware actually executes. Assembly code is written at this level.
Level-5:
This level of the system contains high-level language. High-level language consists of C++, Java,
FORTRAN, and many other languages. This is the language in which the user gives the command.
Level-6:
This is the last level of the computer system hierarchy. It consists of users and executable programs.
Computer architecture
Computer architecture is a set of rules and methods that describe the functionality, organization,
and implementation of computer systems. The architecture of a system refers to its structure in
terms of separately specified components of that system and their interrelationships.
In other definitions computer architecture involves instruction set architecture design,
microarchitecture design, logic design, and implementation.
Computer architecture is a specification detailing how a set of software and hardware
technology standards interact to form a computer system or platform. In short, computer
architecture refers to how a computer system is designed and what technologies it is compatible
with.
As with other contexts and meanings of the word architecture, computer architecture is likened to
the art of determining the needs of the user/system/technology, and creating a logical design and
standards based on those requirements.
Computer architecture can also be defined as a set of rules and methods describing the functionality, management, and implementation of computers; to be precise, it is the set of rules by which a system performs and operates.
Although the term computer architecture sounds complicated, its definition is simpler than one might think. Computer architecture is a science, or a set of rules, stating how computer software and hardware are joined together and interact to make a computer work. It determines not only how the computer works but also which technologies the computer is capable of using.
Computers continue to be a major part of our lives, and computer architects continue to
develop new and better programs and technologies.
When we think of the word architecture, we think of building a house or a building. Keeping that
same principle in mind, computer architecture involves building a computer and all that goes into
a computer system.
All computers, no matter their size, are based around a set of rules stating how software and
hardware join together and interact to make them work. This is what is known as computer
architecture.
Computer architecture deals with the design of computers, data storage devices, and networking
components that store and run programs, transmit data, and drive interactions between
computers, across networks, and with users.
Differences between Computer Architecture and Computer Organization
Computer Architecture is a functional description of requirements and design implementation for
the various parts of a computer. It deals with the functional behavior of computer systems. It comes
before the computer organization while designing a computer.
Architecture describes what the computer does.

Computer Organization comes after the Computer Architecture has been decided. Computer Organization is how operational attributes are linked together to realize the architectural specification; it deals with structural relationships.
Organization describes how it does it.

A very good example of computer architecture is von Neumann architecture, which is still used
by most types of computers today. This was proposed by the mathematician John von Neumann in
1945. It describes the design of an electronic computer with its CPU, which includes the arithmetic
logic unit, control unit, registers, memory for data and instructions, an input/output interface and
external storage functions.
There are three categories of computer architecture:
System Design: This includes all hardware components in the system, including data processors
aside from the CPU, such as the graphics processing unit and direct memory access. It also includes
memory controllers, data paths and miscellaneous things like multiprocessing and virtualization.
Instruction Set Architecture (ISA): This is the embedded programming language of the central
processing unit. It defines the CPU's functions and capabilities based on what programming it can
perform or process. This includes the word size, processor register types, memory addressing
modes, data formats and the instruction set that programmers use.
Microarchitecture: Otherwise known as computer organization, this type of architecture defines
the data paths, data processing and storage elements, as well as how they should be implemented in
the ISA.

At its most fundamental level, a computer consists of a control unit, an arithmetic logic unit (ALU),
a memory unit, and input/output (I/O) controllers. The ALU performs simple addition, subtraction,
multiplication, division, and logic operations, such as OR and AND. The memory stores the
program’s instructions and data. The control unit fetches data and instructions from memory and
uses operations of the ALU to carry out those instructions using that data. (The control unit and
ALU together are referred to as the central processing unit [CPU].) When an input or output
instruction is encountered, the control unit transfers the data between the memory and the
designated I/O controller. The operational speed of the CPU primarily determines the speed of the
computer as a whole. All of these components—the control unit, the ALU, the memory, and the I/O
controllers—are realized with transistor circuits.
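The fetch-execute cycle described above can be sketched as a few lines of Python. This is purely an illustration: the instruction set (LOAD/ADD/STORE/HALT) and the memory layout are invented for the sketch, not taken from any real machine.

```python
# Program and data share one memory, as in the von Neumann design.
memory = {
    0: ("LOAD", 10),   # ACC <- memory[10]
    1: ("ADD", 11),    # ACC <- ACC + memory[11]
    2: ("STORE", 12),  # memory[12] <- ACC
    3: ("HALT", None),
    10: 2, 11: 2, 12: 0,
}

pc, acc = 0, 0                    # program counter and accumulator
while True:
    op, operand = memory[pc]      # fetch and decode the instruction
    pc += 1
    if op == "LOAD":              # the "control unit" dispatches
        acc = memory[operand]
    elif op == "ADD":             # the "ALU" does the arithmetic
        acc += memory[operand]
    elif op == "STORE":
        memory[operand] = acc
    elif op == "HALT":
        break

print(memory[12])  # 4
```

The loop plays the role of the control unit; the arithmetic in the ADD branch plays the role of the ALU.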
Computer Organisation
It refers to the operational units and the interconnections between them that realize the architectural specification. It tells us about the different functional blocks of the system.
It is essentially the realization of the architecture, dealing with the functional structure and the various structural relationships.
It deals with concepts that are transparent to the programmer.
It includes physical components such as circuits with adders and subtractors.
Single accumulator organisation, general register organisation, and stack organisation are the three types of CPU organisation.
Difference between Computer Architecture and Computer Organization:
S.No | Computer Architecture | Computer Organization
1. | Architecture describes what the computer does. | Organization describes how it does it.
2. | Computer Architecture deals with the functional behavior of computer systems. | Computer Organization deals with structural relationships.
3. | It deals with high-level design issues. | It deals with low-level design issues.
4. | Architecture indicates its hardware. | Organization indicates its performance.
5. | For designing a computer, its architecture is fixed first. | For designing a computer, the organization is decided after its architecture.
6. | Computer Architecture is also called instruction set architecture. | Computer Organization is frequently called microarchitecture.
7. | Computer Architecture comprises logical functions such as instruction sets, registers, data types, and addressing modes. | Computer Organization consists of physical units like circuit designs, peripherals, and adders.
8. | Architecture coordinates between the hardware and software of the system. | Computer Organization handles the segments of the network in a system.
9. | Computer Architecture is concerned with the structure and behaviour of the computer system as seen by the user. | Computer Organization is concerned with the way hardware components are connected together to form a computer system.
10. | It describes how the computer system is designed. | It describes how the computer system works.
11. | It is visible to the software programmer. | It is transparent to the software programmer.
12. | What to do? (instruction set) | How to do it? (implementation of the architecture)

Function Of General Computer System


A computer can be defined as a fast electronic calculating machine that accepts digitized input information (data), processes it according to a list of internally stored instructions, and produces the resulting information.
The list of instructions is called a program, and the internal storage is called computer memory.
The different types of computers are:
1. Personal computers: the most common type, found in homes, schools, business offices, etc. These are desktop computers with processing and storage units along with various input and output devices.
2. Notebook computers: compact and portable versions of the PC.
3. Workstations: these have high-resolution input/output (I/O) graphics capability but the same dimensions as a desktop computer. They are used in engineering applications such as interactive design work.
4. Enterprise systems: used for business data processing in medium to large corporations that require much more computing power and storage capacity than workstations. Internet-connected servers have become a dominant worldwide source of all types of information.
5. Supercomputers: used for the large-scale numerical calculations required in applications such as weather forecasting.
Input Unit: The input unit consists of the input devices attached to the computer. These devices take input and convert it into the binary language that the computer understands. Common input devices include the keyboard, mouse, joystick, scanner, etc.
Central Processing Unit (CPU): Once information is entered into the computer by an input device, the processor processes it. The CPU is called the brain of the computer because it is the computer's control center. It first fetches instructions from memory and then interprets them so as to know what is to be done. If required, data is fetched from memory or an input device. The CPU then executes the required computation and either stores the output or displays it on an output device. The CPU has three main components responsible for different functions: the Arithmetic Logic Unit (ALU), the Control Unit (CU), and the memory registers.
Arithmetic and Logic Unit (ALU): The ALU, as its name suggests, performs mathematical calculations and takes logical decisions. Arithmetic calculations include addition, subtraction, multiplication, and division. Logical decisions involve comparing two data items to see which is larger, smaller, or equal.
Control Unit: The control unit coordinates and controls the flow of data in and out of the CPU, and it controls the operations of the ALU, the memory registers, and the input/output units. It is also responsible for carrying out all the instructions stored in a program: it decodes each fetched instruction, interprets it, and sends control signals to the relevant units until the required operation has been completed properly by the ALU and memory.
Memory Registers: A register is a small, fast unit of temporary storage in the CPU, used to hold data that is directly used by the processor. Registers can be of different sizes (16-bit, 32-bit, 64-bit, and so on), and each register inside the CPU has a specific function, such as storing data, storing an instruction, or storing the address of a memory location. The user-visible registers can be used by an assembly language programmer for storing operands, intermediate results, etc. The accumulator (ACC) is the main register in the ALU and contains one of the operands of an operation to be performed in the ALU.
Memory: Memory attached to the CPU is used to store data and instructions and is called internal memory. The internal memory is divided into many storage locations, each of which can store data or instructions. Each memory location is the same size and has an address; with the help of the address, the computer can read any memory location directly without having to search the entire memory. When a program is executed, its data is copied to the internal memory and stored there until the end of execution. The internal memory is also called primary memory or main memory. Because the access time is independent of the location of the data in memory, it is also called Random Access Memory (RAM).
Output Unit: The output unit consists of the output devices attached to the computer. It converts the binary data coming from the CPU into human-understandable form. Common output devices are the monitor, printer, plotter, etc.
RISC and CISC
Reduced Instruction Set Computer (RISC) –
The main idea is to make the hardware simpler by using an instruction set composed of a few basic steps for loading, evaluating, and storing operations: a load instruction loads data, a store instruction stores data, and so on.
A Reduced Instruction Set Computer is a type of microprocessor architecture that uses a small, highly optimized set of instructions rather than the highly specialized instructions typically found in other architectures.
Put simply, a RISC microprocessor uses a pipelined architecture to improve performance. Generally speaking, this means a faster machine, mostly by improving MIPS (millions of instructions per second; higher is better). It is important to note that an improvement in MIPS does not always result in a faster machine, as MIPS alone is not a good enough measure of a processor. Well-known microprocessors such as Sun Microsystems' SPARC and DEC's Alpha chips use the RISC concept.
Complex Instruction Set Computer (CISC) –
The main idea is that a single instruction does all the loading, evaluating, and storing; for example, a multiplication instruction loads its data, evaluates the product, and stores the result, hence "complex".
Both approaches try to increase CPU performance:
RISC: reduce the cycles per instruction at the cost of the number of instructions per program.
CISC: minimize the number of instructions per program at the cost of an increase in the number of cycles per instruction.
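This trade-off can be made concrete with the classic performance equation: CPU time = instruction count x cycles per instruction (CPI) x clock cycle time. The figures below are invented purely to illustrate the point; neither design is inherently faster.

```python
def cpu_time(instructions, cpi, cycle_time_ns):
    """CPU time = instruction count x CPI x clock cycle time."""
    return instructions * cpi * cycle_time_ns

# RISC: more instructions per program, but roughly 1 cycle each.
risc = cpu_time(instructions=1_500_000, cpi=1, cycle_time_ns=1)
# CISC: fewer instructions per program, but several cycles each.
cisc = cpu_time(instructions=1_000_000, cpi=4, cycle_time_ns=1)

print(risc, cisc)  # which is faster depends entirely on the actual numbers
```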
Characteristics of RISC –
 Simple instructions, hence simple instruction decoding.
 Instructions fit within one word.
 Instructions take a single clock cycle to execute.
 More general-purpose registers.
 Simple addressing modes.
 Fewer data types.
 Pipelining can be achieved easily.
Characteristics of CISC –
 Complex instructions, hence complex instruction decoding.
 Instructions are larger than one word.
 An instruction may take more than a single clock cycle to execute.
 Fewer general-purpose registers, as operations are performed in memory itself.
 Complex addressing modes.
Difference –
S.No | RISC | CISC
1. | Focus on software. | Focus on hardware.
2. | Uses only a hardwired control unit. | Uses both hardwired and microprogrammed control units.
3. | Transistors are used for more registers. | Transistors are used for storing complex instructions.
4. | Fixed-size instructions. | Variable-size instructions.
5. | Requires more registers. | Requires fewer registers.
6. | Code size is large. | Code size is small.
7. | An instruction executes in a single clock cycle. | An instruction takes more than one clock cycle.
8. | An instruction fits in one word. | Instructions are larger than the size of one word.
9. | It requires multiple register sets to store the instruction. | It requires a single register set to store the instruction.
10. | RISC has simple instruction decoding. | CISC has complex instruction decoding.
11. | Pipelining is simple in RISC. | Pipelining is difficult in CISC.
12. | It uses a limited number of instructions that require less time to execute. | It uses a large number of instructions that require more time to execute.
13. | It uses LOAD and STORE as independent instructions in a program's register-to-register interaction. | It uses LOAD and STORE within a program's memory-to-memory interaction.
14. | RISC spends more transistors on registers. | CISC spends transistors on storing complex instructions.
15. | The execution time of RISC is very short. | The execution time of CISC is longer.
16. | RISC architecture can be used for high-end applications like telecommunication, image processing, and video processing. | CISC architecture can be used for low-end applications like home automation and security systems.
17. | It has fixed-format instructions. | It has variable-format instructions.
18. | Programs written for RISC architecture tend to take more space in memory. | Programs written for CISC architecture tend to take less space in memory.
19. | Examples of RISC: ARM, PA-RISC, Power Architecture, Alpha, AVR, ARC, and SPARC. | Examples of CISC: VAX, Motorola 68000 family, System/360, and AMD and Intel x86 CPUs.

Computer Data Types


Computer programs or applications may use different types of data depending on the problem or requirement.
The different types of data a computer uses are:
Numeric data – integers and real numbers
Non-numeric data – character data, address data, logical data
Numeric data
It can be of the following two types:
Integers
Real Numbers
Real numbers can be represented as:
1. Fixed point representation
2. Floating point representation
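As an illustration of floating-point representation, Python's standard struct module can expose the bit pattern of a real number. This sketch assumes the common IEEE 754 single-precision (32-bit) format, which the text above does not name explicitly:

```python
import struct

# Pack 5.0 as a big-endian 32-bit IEEE 754 float and show its bits:
# 1 sign bit, 8 exponent bits, 23 mantissa bits.
bits = struct.pack(">f", 5.0)
as_int = int.from_bytes(bits, "big")
print(format(as_int, "032b"))  # 01000000101000000000000000000000
```

Here 5.0 = 1.25 x 2^2, so the sign bit is 0, the biased exponent is 2 + 127 = 129 (10000001), and the mantissa begins 01.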
Character data
A sequence of characters is called character data.
A character may be alphabetic (A-Z or a-z), numeric (0-9), a special character (+, #, *, @, etc.), or a combination of these. A character is represented by a group of bits.
When multiple characters are combined, they form meaningful data. Characters are commonly represented in the standard ASCII format; another popular encoding is EBCDIC, used in large computer systems.
Logical data
Logical data is used by computer systems to make logical decisions.
Logical data differs from numeric or alphanumeric data in that, while numeric and alphanumeric data are associated with numbers or characters, logical data takes one of only two values: true (T) or false (F).
You can see examples of logical data in the construction of truth tables for logic gates.
Logical data can also be a statement relating numeric or character data with relational operators (>, <, =, etc.).
Character set
Character sets in computers can be of the following types:
Alphabetic characters – the letters A-Z or a-z.
Numeric characters – the digits 0 to 9.
Special characters – symbols such as +, *, /, -, ., <, >, =, @, %, #, etc.

A data type, in programming, is a classification that specifies which type of value a variable has
and what type of mathematical, relational or logical operations can be applied to it without causing
an error. A string, for example, is a data type that is used to classify text and an integer is a data
type used to classify whole numbers.
Data Type | Used for | Example
String | Alphanumeric characters | "hello world"
Integer | Whole numbers | 7, 12, 999
Float (floating point) | Numbers with a decimal point | 3.15, 9.06, 00.13
Character | Encoding text numerically | 97 (in ASCII, 97 is a lowercase 'a')
Boolean | Representing logical values | TRUE, FALSE
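The rows of the table map directly onto Python's built-in types (a small illustrative sketch; the variable names are arbitrary):

```python
s = "hello world"   # string: alphanumeric characters
i = 7               # integer: whole number
f = 3.15            # float: number with a decimal point
c = ord("a")        # character as its numeric ASCII code (97)
b = True            # boolean: logical value

print(type(s).__name__, type(i).__name__, type(f).__name__, c, b)
```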

Control unit operation


The control unit generates timing and control signals for the operations of the computer. It communicates with the ALU and main memory, controls transfers between the processor, memory, and the various peripherals, and instructs the ALU which operation to perform on the data.
Control unit can be designed by two methods which are given below:
1. Hardwired Control
2. Microprogrammed Control
Hardwired Control
In the hardwired control organization, the control logic is implemented with gates, flip-flops, decoders, and other digital circuits.
The following image shows the block diagram of a hardwired control organization.

A hardwired control unit consists of two decoders, a sequence counter, and a number of logic gates.
An instruction fetched from the memory unit is placed in the instruction register (IR).
The instruction register is divided into three parts: the I bit, the operation code, and bits 0 through 11.
The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
The outputs of the decoder are designated by the symbols D0 through D7.
Bit 15 of the instruction (the I bit) is transferred to a flip-flop designated by the symbol I.
Bits 0 through 11 are applied to the control logic gates.
The sequence counter (SC) can count in binary from 0 through 15.
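The behaviour of the 3 x 8 opcode decoder can be sketched in a few lines of Python: a 3-bit operation code activates exactly one of the eight output lines D0 through D7.

```python
def decode_3x8(opcode):
    """opcode: a 3-bit value 0..7; returns the eight output lines D0..D7,
    with exactly one line active (1) and the rest inactive (0)."""
    outputs = [0] * 8
    outputs[opcode] = 1
    return outputs

print(decode_3x8(0b101))  # D5 active: [0, 0, 0, 0, 0, 1, 0, 0]
```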
Micro-programmed Control
The Microprogrammed Control organization is implemented by using the programming approach.
In Microprogrammed Control, the micro-operations are performed by executing a program
consisting of micro-instructions.
The following image shows the block diagram of a Microprogrammed Control organization.

The Control memory address register specifies the address of the micro-instruction.
The Control memory is assumed to be a ROM, within which all control information is permanently
stored.
The control register holds the microinstruction fetched from the memory.
The micro-instruction contains a control word that specifies one or more micro-operations for the
data processor.
While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next
microinstruction.
The next address generator is often referred to as a micro-program sequencer, as it determines the
address sequence that is read from control memory.
Difference between Hardwired Control and Microprogrammed Control
S.No | Hardwired Control | Microprogrammed Control
1. | The technology is circuit based. | The technology is software based.
2. | It is implemented through flip-flops, gates, decoders, etc. | Microinstructions generate the signals that control the execution of instructions.
3. | Fixed instruction format. | Variable instruction format (16-64 bits per instruction).
4. | Instructions are register based. | Instructions are not register based.
5. | ROM is not used. | ROM is used.
6. | It is used in RISC. | It is used in CISC.
7. | Faster decoding. | Slower decoding.
8. | Difficult to modify. | Easily modified.
9. | Chip area is small. | Chip area is large.
Microprogramming
Microprogramming is the process of writing microcode for a microprocessor. Microcode is low-level code that defines how a microprocessor should behave when it executes machine-language instructions. Typically, one machine-language instruction translates into several microcode instructions. On some computers the microcode is stored in ROM (read-only memory) and cannot be modified; on some larger computers it is stored in EPROM (erasable programmable read-only memory) and can therefore be replaced with newer versions.
Micro-program:
 A program is a set of instructions, and each instruction requires a set of micro-operations.
 Micro-operations are performed using control signals.
 Here, these control signals are generated using micro-instructions.
 This means every instruction requires a set of micro-instructions.
 A set of micro-instructions is called a micro-program.
 The micro-programs for all instructions are stored in a small memory called the control memory.
 The control memory is present inside the processor.

Working :
Consider an instruction that is fetched from main memory into the Instruction Register (IR).
The processor uses the instruction's unique opcode to identify the address of its first
micro-instruction. That address is loaded into the CMAR (Control Memory Address Register) and
decoded to select the corresponding micro-instruction from the control memory. Most
micro-instructions have only a control field, which indicates the control signals to be
generated; they carry no address field. Usually the µPC (the CMAR acting as a micro-program
counter) is simply incremented after every micro-instruction.
This holds as long as the micro-program executes sequentially. Only a branch micro-instruction
carries an address field. If the branch is unconditional, the branch address is loaded directly
into the CMAR. For conditional branches, the branch condition checks the appropriate flag. This
is done using a MUX that takes all the flag inputs. If the condition is true, the MUX signals
the CMAR to load the branch address; if the condition is false, the CMAR is simply incremented.
The control memory is usually implemented using flash ROM as it is non-volatile.
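The sequential-versus-branch flow described above can be sketched as a small simulation. The control memory contents, control-signal names, and flag names below are invented for illustration; real microcode is an encoded bit pattern stored in ROM, not Python tuples.

```python
# Toy micro-program sequencer, a sketch of the flow described above.
# Each micro-instruction: (control_signals, branch_target, condition).
# branch_target is None for purely sequential micro-instructions.
CONTROL_MEMORY = [
    ({"PC_out", "MAR_in"}, None, None),    # 0: start an instruction fetch
    ({"MEM_read", "MDR_in"}, None, None),  # 1: read from main memory
    ({"MDR_out", "IR_in"}, None, None),    # 2: load the IR
    (set(), 0, "zero"),                    # 3: branch to 0 if the Z flag is set
    ({"ALU_add"}, 0, "always"),            # 4: unconditional branch back to 0
]

def run(flags, steps):
    """Step the sequencer, returning the sequence of CMAR values visited."""
    cmar = 0                               # Control Memory Address Register
    visited = []
    for _ in range(steps):
        signals, target, condition = CONTROL_MEMORY[cmar]
        visited.append(cmar)
        # Next-address generator: a flag MUX decides branch vs. increment.
        taken = target is not None and (
            condition == "always" or flags.get(condition, False))
        cmar = target if taken else cmar + 1
    return visited
```

With the zero flag clear, the conditional branch at address 3 falls through and only the unconditional branch at 4 returns the sequencer to 0; with the flag set, the branch at 3 is taken immediately.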
Advantages :
1. The main advantage is flexibility.
2. Any change in the control unit can be made by simply changing the micro-instructions.
3. It can be debugged easily compared to a hardwired control unit.
4. Most micro-instructions execute sequentially, so they do not require an address field.
5. This reduces the size of the control memory.
Disadvantages :
1. The control memory must be present inside the processor, which increases the processor's size.
2. This also increases the cost of the processor.
Applications of Microprogrammed Control Unit :
Microprogramming has many advantages, such as flexibility, simplicity, and cost-effectiveness.
It therefore makes a major contribution to the following applications –
1. Development of control units –
Modern processors have very large and complex instruction sets. Microprogramming is used for
making control units of such processors, because it is far less complex and can be easily modified.
2. High level language support –
Modern high level languages have more advanced and complex data types. Microprogramming can
provide support for such data types directly from the processor level. Therefore, the language
becomes easy to compile and also faster to execute.
3. User tailoring of the control unit –
As the control unit is developed using software, it can be easily reprogrammed. This can be used
for custom modifications of the control unit. For this purpose, the control memory must be
writable, like RAM or flash ROM.
4. Emulation –
Emulation is when one processor (say A) is made to emulate, or behave like, another processor
(say B). To do this, A must be able to execute the instructions of B. If we reprogram the
control memory of A to match that of B, then A will be able to emulate the behavior of B for
every instruction. This is possible only in microprogrammed control units.
It is generally used when a main processor has to emulate the behavior of a math co-processor.
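The idea of emulation by swapping control memories can be sketched in miniature. The opcode, the micro-operations, and the saturating behavior of "processor B" below are all invented for illustration; the point is only that the same datapath driven by different microcode executes a different instruction set.

```python
# Emulation sketch: one "datapath" driven by two different control
# memories. Opcodes and micro-operations are illustrative assumptions.

def make_cpu(control_memory):
    """Return an execute() that interprets opcodes via the given microcode."""
    def execute(opcode, a, b):
        result = a
        for micro_op in control_memory[opcode]:  # run the micro-program
            result = micro_op(result, b)
        return result
    return execute

# Processor A's native microcode for an "ADD" machine instruction.
MICROCODE_A = {"ADD": [lambda acc, b: acc + b]}

# Reprogrammed control memory that makes A behave like a hypothetical
# processor B whose "ADD" saturates at 255.
MICROCODE_B = {"ADD": [lambda acc, b: acc + b,
                       lambda acc, _b: min(acc, 255)]}

cpu_a = make_cpu(MICROCODE_A)
cpu_b_emulated = make_cpu(MICROCODE_B)   # same datapath, new microcode
```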
5. Improving the operating system –
Microprogramming can be used to implement complex and secure functions of the OS. This not
only makes the OS more powerful and efficient, but more importantly secure, as it provides the OS
a higher degree of protection from malicious virus attacks.
6. Micro-Diagnostics or error debugging –
As microprogrammed control units are software based, debugging an error is far easier than
doing the same for a complex hardwired control unit. This allows monitoring, detection, and
repair of any kind of system error in the control unit. It can also be used as a runtime
substitute if the corresponding hardwired component fails.
7. Development of special purpose processors –
Not all processors are general purpose. Many applications require special-purpose processors,
such as DSPs (Digital Signal Processors) for communication or GPUs (Graphics Processing Units)
for image processing.
They have complex instruction sets and also need to be constantly upgraded. A microprogrammed
control unit is the best choice for them.
System Bus
A system bus is a single computer bus that connects the major components of a computer system,
combining the functions of a data bus to carry information, an address bus to determine where it
should be sent or read from, and a control bus to determine its operation.
A System bus is a set of wires for moving data, instructions, and control signals from one computer
component to another component. It is a high-speed internal connection between the processor and
other components.
There are 3 types of system bus, or rather components of the system bus: the address bus, the
data bus, and the control bus. We can think of a bus as a highway on which data travels within
a computer.
A bus can be 8 bit, 16 bit, 32 bit, and so on. A 32-bit bus can transmit 32 bits of
information at a time. A bus can be internal or external.
Functions of system bus
Different types of buses are used in the computer bus scheme. Depending on its purpose, each of
these buses is allocated to carry a certain form of signal and data.
Some basic functions carried out by the system bus are:
 Addressing memory and devices
 Carrying control signals
 Providing power to components
 System timing
 Data sharing
Data Bus
A data bus is a computer subsystem that carries data between the processor and other
components. The data bus is bidirectional, which allows data to be transferred from one
component to another within a computer system or between two computers.
This can include transferring data to and from memory, or from the central processing unit
(CPU) to other components. Each bus is designed to handle a fixed number of bits at a time. The
data bus is the part of the system bus that carries out the actual transmission of data.
A typical data bus is 32 bits wide, meaning that up to 32 bits of data can travel through it at
a time. Newer computers provide data buses with 64-bit and even 128-bit data paths; handling
more bits per transfer gives them correspondingly higher data rates.
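The relationship between bus width and the number of transfers can be sketched directly: a value wider than the bus must cross it in several chunks. The 128-bit value below is an arbitrary illustration.

```python
# Sketch: a 128-bit value crossing a 32-bit data bus needs four bus
# transfers. The value and widths are illustrative assumptions.

def bus_transfers(value, width=32):
    """Split value into width-bit chunks, least-significant chunk first."""
    mask = (1 << width) - 1
    chunks = [value & mask]
    value >>= width
    while value:
        chunks.append(value & mask)
        value >>= width
    return chunks

data = 0x1122334455667788AABBCCDDEEFF0011   # a 128-bit value
chunks = bus_transfers(data)                 # four 32-bit transfers
```

Doubling the bus width halves the number of transfers for the same value, which is the sense in which wider buses raise throughput.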
Address Bus
An address bus is a computer bus architecture that carries memory addresses from the processor
to other components, such as primary storage and input/output devices. The address bus is
unidirectional.
It is used to identify the location being accessed: the hardware address of physical memory
(the physical address) is placed on it in binary form so that the data bus can access that
memory location.
The address bus is used by the CPU, or by a direct memory access (DMA) enabled device, to
locate the physical address when communicating read/write commands. All address buses are read
and written by the CPU or DMA in the form of bits.
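Because addresses travel as bits, the number of address lines bounds how much memory can be reached. A minimal sketch of that relationship:

```python
# Sketch: an address bus with n lines can select 2**n distinct
# locations, so bus width bounds the addressable memory.

def addressable_locations(address_lines):
    """Number of distinct addresses an n-line address bus can carry."""
    return 2 ** address_lines

KIB = 1024
# A classic 16-bit address bus reaches 64 KiB of byte-addressable
# memory; a 32-bit address bus reaches 4 GiB.
sixteen_bit = addressable_locations(16)   # 64 * KIB
thirty_two_bit = addressable_locations(32)
```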
Control Bus
A control bus is a computer bus that carries control signals from the processor to other
components. It also carries the clock pulses that the CPU uses to communicate with devices
contained within the computer.
In a computer system, the CPU transmits a variety of control signals to components and devices.
This occurs through physical connections such as cables or printed circuits.
The control bus is bidirectional and comprises interrupt lines, byte enable lines, read/write
signals, and status lines.
After data is processed, the control bus carries commands from the CPU and returns status
signals from the devices. For example, if data is being read from or written to a device, the
appropriate line (read or write) will be active (i.e. logic one).
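The read/write behavior can be sketched as a toy bus transaction, where the asserted control line decides which way the data bus carries a value. The line names and the dictionary memory model are illustrative assumptions, not a real bus protocol.

```python
# Toy bus transaction: the active control line sets the direction of
# the data bus. Line names and the memory model are illustrative.

memory = {0x10: 0xAB}              # address -> stored byte

def bus_cycle(address, control, data=None):
    """Perform one transfer; control is the active line, 'READ' or 'WRITE'."""
    if control == "READ":          # read line active: memory drives the bus
        return memory.get(address, 0x00)
    if control == "WRITE":         # write line active: the CPU drives the bus
        memory[address] = data
        return data
    raise ValueError("no control line asserted")
```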
Multi-bus organisation
A multiple-bus organization has more associated registers than a single-bus organization. Only
a single operand can be read from the bus in a single-bus organization, but that number rises
to two in a multiple-bus organization.
All computing devices, from smartphones to supercomputers, pass data back and forth along
electronic channels called "buses." You can think of buses like freeways: having additional
buses makes it easier to transfer data quickly, in the same way that more freeways or
additional lanes increase the speed of traffic. In short, the number and type of buses used
strongly affect a machine's overall speed. Simple computer designs move data with a single-bus
structure; multiple buses, however, vastly improve performance.
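The performance gain can be put in simple terms of bus cycles. The two-operand scenario below is an illustrative assumption matching the operand counts mentioned above:

```python
# Sketch: moving two ALU operands over one shared bus takes two bus
# cycles; with two operand buses they arrive together in one cycle.
import math

def cycles_to_fetch(operands, buses):
    """Bus cycles needed to move `operands` values over `buses` buses."""
    return math.ceil(operands / buses)

single_bus_cycles = cycles_to_fetch(2, 1)   # 2 cycles on a single bus
multi_bus_cycles = cycles_to_fetch(2, 2)    # 1 cycle with two buses
```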