Architecture Summary - 053057

The document discusses different classes of computing applications and computer architectures. It covers topics like supercomputers, embedded computers, personal mobile devices, cloud computing, Von Neumann architecture, Harvard architecture, instruction set architecture, microarchitecture, functions of a computer, and structural components of a CPU.

Classes of Computing Application

• Supercomputers
expensive, with large memory capacity and processing power.
• Embedded computers
integrated with hardware and delivered as a single system; the widest range of applications.
• Personal mobile devices
battery operated, typically costly, with connectivity to the internet and no attachable peripheral
devices.
• Cloud computing
services are generally hosted over the internet, usually implemented by data centers comprising
multiple servers.
Some Great ideas of Computer Architects

• Performance via prediction: assuming recovery from a misprediction is not expensive, it can be
faster to operate on a guess than to wait until the outcome is known for sure at run time.
• Performance via parallelism: these designs offer more performance by performing operations in
parallel.
• Dependability via redundancy: creating a dependable system through alternate (redundant)
components that take over when a failure occurs.
• Hierarchy of memories: ordered with the fastest, smallest, and most expensive memory per
bit at the top of the hierarchy and the slowest, largest, and cheapest per bit at the bottom.
Computer Organization: refers to the operational units and their interconnections that realize the
architectural specifications. It addresses issues such as control signals and memory technology.
Computer architecture: refers to those attributes of a system visible to a programmer, or those that
directly impact the logical execution of a program. It comprises the rules, methods, and procedures that
describe the execution and functionality of the entire computer system.
Types of Computer Architecture
i. Von Neumann architecture: instructions and data are stored in a single common memory and
share the same pathway between the CPU and memory.

ii. Harvard architecture: code and data are laid out in distinct memory sections, so separate
memory blocks are required for data and instructions. Because the two are stored separately,
the CPU can fetch an instruction and access data at the same time.

iii. Instruction set architecture (ISA): the collection of instructions that the processor can
recognize and execute. Two broad styles are RISC (Reduced Instruction Set Computer) and
CISC (Complex Instruction Set Computer).

iv. Micro-architecture: the structural design of a microprocessor; it describes how a particular
processor implements its instruction set architecture.
Functions of a computer
1. Data processing
2. Data storage
3. Data movement
4. Control (over the first three)
Structural components of a computer
i. Central processing unit (CPU): controls the operation of the computer and performs its data
processing functions.
ii. Main memory: stores data.
iii. Input/Output (I/O): moves data between the computer and its external environment.
iv. System interconnection: provides communication among memory, the CPU, and I/O.
CPU Structural component
i. Control unit: Controls the operation of the CPU and hence the computer.
ii. Arithmetic and logic unit (ALU): Performs the computer’s data processing functions.
iii. Registers: Provides storage internal to the CPU
iv. CPU interconnection: Some mechanism that provides for communication among the control unit,
ALU, and registers
Data representation
i. Numbers
ii. Text
iii. Graphics (video, animation and images)
iv. Sound
Numerical Data representation
i. Bit: one binary digit
ii. Byte: a group of eight bits
iii. Nibble: a group of four bits
iv. Word: the number of bits in each memory location.
NUMBER BASES (TWO, TEN and HEX)
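As a small aside (not from the original notes), the sketch below shows one value written in each of the three bases using Python's built-in conversions:

```python
# One value expressed in base two, ten, and sixteen.
value = 0b00101010           # binary literal, decimal 42

print(bin(value))            # '0b101010'  -> base two
print(value)                 # 42          -> base ten
print(hex(value))            # '0x2a'      -> base sixteen (hex)

# Parsing the other way round: int() accepts an explicit base.
assert int("101010", 2) == int("2a", 16) == 42
```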
BINARY CODE
i. Unpacked binary coded decimal: each decimal digit occupies a whole byte, with the unused bits
set to 0s, e.g. 00000010 = 2
ii. Packed binary coded decimal: each decimal digit occupies only a nibble, so the leading unused
bits are dropped, e.g. 0010 = 2
iii. Padding: adding 0s or 1s to extend a value to a required width (pad with 0s for positive numbers,
1s for negative numbers)
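A small Python sketch of the unpacked and packed BCD encodings described above (the helper names are invented for illustration):

```python
def unpacked_bcd(n: int) -> list[str]:
    """Unpacked BCD: each decimal digit occupies a whole byte; unused bits are 0s."""
    return [format(int(d), "08b") for d in str(n)]

def packed_bcd(n: int) -> str:
    """Packed BCD: each decimal digit occupies only a nibble (4 bits)."""
    return "".join(format(int(d), "04b") for d in str(n))

print(unpacked_bcd(2))    # ['00000010']  -> 2, as in the example above
print(packed_bcd(25))     # '00100101'    -> digits 2 and 5 packed into one byte
```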
CPU
The central processing unit includes registers, an arithmetic logic unit (ALU), and control circuits, which
interpret and execute assembly language instructions.
CPU REGISTERS: These are high-speed and purpose-built temporary memory devices
i. MAR: - The Memory address register holds memory addresses of data and instructions. This
register is used to access data and instructions from the memory during the execution phase
of an instruction.
ii. MBR: - The Memory buffer register (MBR) holds the contents of data or instruction read from,
or written to the memory. This register is used to store data/instructions coming from the
memory or going to the memory.
iii. MDR:-Memory Data Register (MDR) is the register of a computer’s control unit that contains
the data to be stored in the computer storage (e.g. RAM), or the data after a fetch from the
computer storage.
Communication Bus Architecture: A bus is a set of wires used to connect the different internal
components of a computer system for transferring data, addresses, and control signals.
i. Serial bus
ii. Parallel bus

THE ARITHMETIC LOGIC UNIT: this includes the electrical circuitry that performs arithmetic and logical
operations on the supplied data. It is used to execute all arithmetic (addition, subtraction, multiplication,
division) and logical (AND, OR, NOT, etc.) computations.
CONTROL UNIT
The control unit collaborates with the computer’s input and output devices. It instructs the computer to
execute stored program instructions by communicating with the ALU and the registers. The purpose of
the control unit is to sequence and coordinate data and instruction processing.

MEMORY ORGANIZATION IN COMPUTER ARCHITECTURE


Memory Cell: a circuit that stores 1 bit of information (binary information); it can be set or reset.
Memory Word: a collection of bits forming a multi-bit unit that carries information; this can be 8
bits, 16 bits, 32 bits, 64 bits, etc., though other widths are rare.
Memory capacity: the amount of memory in a memory chip, expressed in bytes, KB, MB, GB, TB, etc.
Address: a unique number used to identify each location in memory.
MEMORY HIERARCHY

• Primary memory is also known as internal memory, and it is accessible by the processor
directly. This memory includes main memory, cache, and CPU registers.
• Secondary memory is also known as external memory, and it is accessible by the processor
through an input/output module. This memory includes optical disks, magnetic disks, and
magnetic tape.
Characteristics of Memory Hierarchy
i. Performance: the memory hierarchy addresses the issue of speed; the closer a memory is to the
CPU, the faster the CPU accesses it (registers and caches are the closest, followed by RAM and
then secondary memory).
ii. Capacity: how much data each level can hold; capacity increases down the hierarchy (registers,
caches, RAM, ROM).
iii. Access time: the time taken by the CPU to reach data in the memory; access time increases
down the hierarchy (registers, caches, RAM, ROM).
iv. Cost per bit: faster levels cost more per bit; cost per bit decreases down the hierarchy
(registers, caches, RAM, ROM).
Classification of Memory
i. Volatile memory: loses its data when power is switched off, e.g. RAM, registers.
ii. Non-volatile memory: permanent storage that does not lose data when power is
switched off, e.g. hard drives, flash drives.
REGISTER SET
Registers are essentially extremely fast memory locations within the CPU that are used to hold operands
and store the results of CPU operations and other calculations.
Memory Access Registers: Two registers are essential in memory write and read operations: the memory
data register (MDR) and memory address register (MAR). The MDR and MAR are used exclusively by the
CPU and are not directly accessible to programmers.
In order to perform a write operation into a specified memory location, the MDR and MAR are used as
follows:
1. The word to be stored into the memory location is first loaded by the CPU into the MDR.
2. The address of the location into which the word is to be stored is loaded by the CPU into the MAR.
3. A write signal is issued by the CPU.
Similarly, to perform a memory read operation, the MDR and MAR are used as follows:
1. The address of the location from which the word is to be read is loaded into the MAR.
2. A read signal is issued by the CPU.
3. The required word will be loaded by the memory into the MDR, ready for use by the CPU.
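The write and read sequences above can be mimicked with a toy model; the sketch below is only an illustration of the MAR/MDR roles, with an invented class name and memory size:

```python
class ToyMemorySystem:
    """Toy model of the MAR/MDR protocol described above (illustrative only)."""

    def __init__(self, size: int = 16):
        self.memory = [0] * size   # main memory cells
        self.mar = 0               # Memory Address Register
        self.mdr = 0               # Memory Data/Buffer Register

    def write(self, address: int, word: int) -> None:
        self.mdr = word                     # 1. CPU loads the word into the MDR
        self.mar = address                  # 2. CPU loads the target address into the MAR
        self.memory[self.mar] = self.mdr    # 3. write signal: memory stores the MDR at the MAR address

    def read(self, address: int) -> int:
        self.mar = address                  # 1. CPU loads the source address into the MAR
        self.mdr = self.memory[self.mar]    # 2./3. read signal: memory loads the word into the MDR
        return self.mdr                     # the word is now available to the CPU

mem = ToyMemorySystem()
mem.write(5, 42)
assert mem.read(5) == 42
```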
Memory Access Methods
i. Random Access: Main memories are random access memories, in which each memory location
has a unique address. Using this unique address any memory location can be reached in the
same amount of time in any order.
ii. Sequential Access: This method allows memory access in a sequence or in order.
iii. Direct Access: In this mode, information is stored in tracks, with each track having a separate
read/write head
Cache Memory
The cache memory is used to store the program instructions and data that are currently in use by the CPU.
Cache Operation:
It is based on the principle of locality of reference. Data and instructions are brought from main memory
into the cache on the basis of two forms of locality:
Temporal locality: the data or instruction currently being fetched is likely to be needed again soon.
Spatial locality: data or instructions near the currently referenced memory location are likely to be
needed in the near future.
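A small Python illustration of the two kinds of locality, using an ordinary loop over a list (this example is not from the notes):

```python
# Illustrative only: a simple loop that exhibits both kinds of locality.
data = list(range(1000))

total = 0
for i in range(len(data)):
    # Spatial locality: data[i] and data[i + 1] occupy neighbouring memory
    # locations, so one fetched cache block also brings in the next few elements.
    total += data[i]
    # Temporal locality: `total` and `i` are reused on every iteration,
    # so they stay cached (or in registers) once fetched.
```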

Hit and Miss Ratio


The performance of cache memory is measured in terms of a quantity called the hit ratio. When the CPU
refers to memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the
cache and has to be fetched from main memory, it counts as a miss.
The ratio of the number of hits to the total number of CPU references to memory is called the hit ratio:
Hit ratio = hits / (hits + misses)
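A quick numerical check of the hit-ratio formula (the counts below are invented for illustration):

```python
# Hypothetical counts of cache hits and misses.
hits, misses = 950, 50

hit_ratio = hits / (hits + misses)
print(hit_ratio)   # 0.95 -> 95% of memory references were satisfied by the cache
```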
The transfer of data from main memory to cache memory is referred to as the mapping process;
there are three types of mapping:
i. Associative mapping: this type of mapping stores both the address and data of the memory
word.
ii. Direct mapping: direct mapping assigns each memory block to a specific line in the cache.
iii. Set-associative mapping: this strikes a balance between direct mapping and associative
mapping by taking the simplicity of direct mapping and flexibility of associative mapping.
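A minimal sketch of direct mapping, assuming a cache with 8 lines and a block size of one word (both figures are invented for illustration); each memory block maps to exactly one cache line:

```python
NUM_LINES = 8   # assumed number of cache lines (illustrative)

def direct_mapped_line(block_address: int) -> int:
    """Direct mapping: cache line = memory block address mod number of lines."""
    return block_address % NUM_LINES

for block in (0, 5, 8, 13):
    print(block, "->", direct_mapped_line(block))
# Blocks 0 and 8 contend for line 0; blocks 5 and 13 contend for line 5.
```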
AVERAGE MEMORY ACCESS TIME
Average memory access time = %instructions * (hit time + instruction miss rate * miss penalty) + %data
* (hit time + data miss rate * miss penalty)
Example: assume 60% of memory accesses are instruction fetches and 40% are data accesses. Let a hit
take 1 clock cycle and the miss penalty be 100 clock cycles. If the instruction miss rate is 4% and the data
miss rate is 12%, what is the average memory access time?
= 60% * (1 + 4% * 100) + 40% * (1 + 12% * 100) = 0.6 * 5 + 0.4 * 13 = 3.0 + 5.2 = 8.2 clock cycles
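The same calculation can be verified in a few lines of Python:

```python
# Values taken from the worked example above.
instr_fraction, data_fraction = 0.60, 0.40   # fractions of memory accesses
hit_time = 1                                 # clock cycles
miss_penalty = 100                           # clock cycles
instr_miss_rate, data_miss_rate = 0.04, 0.12

amat = (instr_fraction * (hit_time + instr_miss_rate * miss_penalty)
        + data_fraction * (hit_time + data_miss_rate * miss_penalty))
print(amat)   # 8.2 clock cycles (up to floating-point rounding)
```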

I/O COMMAND
A control command is issued to activate the peripheral and to inform it what to do.
A status command is used to test various status conditions in the interface and the peripheral.
A data output command causes the interface to respond by transferring data from the bus into one of its
registers.
A data input command is the opposite of data output (data moves from the interface register to the bus).
BUS SYSTEM
Single bus: all components share a single pathway for communication; this limits concurrency,
as only one device can access the bus at a time.
Multiple buses: utilizing multiple data paths allows concurrent communication between components.
Mapped memory: a technique that allows a process to access data in a file or an I/O device as if it were
part of its own memory.
Communication with the processor
i. Polling: device status bits are checked periodically to determine when the next I/O operation is
needed (a small polling sketch follows this list).
ii. Interrupt: the device delivers an interrupt to the CPU when it requires attention.
iii. Direct Memory Access (DMA): data are transferred directly between the device and memory.
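A schematic sketch of polling (purely illustrative; the `device` object, its `ready` flag, and `read_data()` are invented names, not a real API):

```python
import time

def poll_device(device, interval_s: float = 0.01):
    """Polling: repeatedly check the device's status bit until it signals readiness."""
    while not device.ready:        # status bit checked periodically
        time.sleep(interval_s)     # CPU time is spent simply waiting
    return device.read_data()      # perform the I/O operation once the device is ready
```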
BUS
A bus is a communication system that transfers data between the various components inside a computer.
TYPES OF BUS
i. I/O Buses: they are specifically designed for communication with peripheral devices.
ii. Processor-memory bus: dedicated to high-speed communication between the CPU and main
memory (RAM).
BUS DESIGN:
Things To Consider:
i. Accessibility
ii. Speed
iii. Reliability
iv. Interfacing
v. Communication protocol (synchronous or asynchronous)
In asynchronous transfer, a strobe is a signal pulse sent along with the data, while
handshaking involves an exchange of control signals between the
sender and receiver to ensure reliable data transfer.
vi. Shareability
vii. Length
Asynchronous Serial Transmission: In this technique each transmitted character consists of three parts:
i. Start bit: the first bit, called the start bit, is always 0 and indicates the beginning of a character.
ii. Stop bit: the last bit, called the stop bit, is always 1 and indicates the end of a character. The stop bit
remains in the 1 state, framing the end of the character and signifying the idle or wait state.
iii. Character bits: the bits between the start bit and the stop bit are known as character bits. The
character bits always follow the start bit.
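A toy framing sketch (illustrative only; real UARTs send the least significant bit first, which is ignored here) that wraps the character bits between a start bit of 0 and a stop bit of 1, as described above:

```python
def frame_character(char_bits: str) -> str:
    """Asynchronous serial framing: start bit (0) + character bits + stop bit (1)."""
    return "0" + char_bits + "1"

# 'A' is 01000001 in ASCII; framed for asynchronous transmission:
print(frame_character(format(ord("A"), "08b")))   # '0010000011'
```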
PIPELINING
Pipelining refers to the technique in which a given task is divided into a number of subtasks that are
performed in sequence, each by a dedicated functional unit, so that the subtasks of successive tasks can
overlap in time.
Performance Measure in Pipelining
There are three basic performance measures for the goodness of pipelining (a small numerical sketch
follows this list):
i. Speedup S(n): the ratio of the time needed to complete n tasks without pipelining to the time
needed with pipelining; the process should take less time under pipelining.
ii. Throughput U(n): the rate at which tasks are completed, i.e., the number of tasks finished per
unit time.
iii. Efficiency E(n): the fraction of time the pipeline's functional units are kept busy; it relates the
achieved speedup to the number of pipeline stages.
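A small numerical sketch of the three measures, using the standard textbook formulas for an m-stage pipeline processing n tasks with a common stage time t (these formulas are assumed here, not quoted from the notes): time without pipelining = n*m*t, time with pipelining = (m + n - 1)*t.

```python
def pipeline_measures(n_tasks: int, m_stages: int, stage_time: float = 1.0):
    """Speedup, throughput, and efficiency under the assumed textbook formulas."""
    t_sequential = n_tasks * m_stages * stage_time        # no pipelining
    t_pipelined = (m_stages + n_tasks - 1) * stage_time   # with pipelining
    speedup = t_sequential / t_pipelined                  # S(n)
    throughput = n_tasks / t_pipelined                    # U(n): tasks completed per unit time
    efficiency = speedup / m_stages                       # E(n): fraction of the ideal speedup
    return speedup, throughput, efficiency

print(pipeline_measures(n_tasks=100, m_stages=5))
# roughly (4.81, 0.96, 0.96) -> close to the ideal speedup of 5 for large n
```

For large n the speedup approaches the number of stages, which is why pipelining pays off most when many tasks flow through the pipeline back to back.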
