
Revision

Unit – 2
Chapter - 1
Central Processing Unit: Introduction, general register organization, stack
organization, instruction format, addressing modes. Overview of GPU, CPU vs GPU
computing difference.
Introduction
• The part of the computer that performs the bulk of data-processing operations is called the central processing
unit and is referred to as the CPU.
• The arithmetic logic unit (ALU) performs the required microoperations for executing the instructions. The
control unit supervises the transfer of information among the registers and instructs the ALU as to which
operation to perform.
General register organization
Example – If the following microoperation is to be implemented, identify the corresponding control word.
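In the bus-organized register file typically used for such examples, the control word is formed by concatenating the register-select fields and the ALU operation field. A minimal sketch, assuming a Mano-style 14-bit word with 3-bit SELA, SELB, and SELD fields and a 5-bit OPR field (the field widths and the ADD encoding are assumptions for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed 14-bit control word layout (bus-organized register file):
 *   bits 13-11: SELA - selects the register driving bus A
 *   bits 10-8 : SELB - selects the register driving bus B
 *   bits  7-5 : SELD - selects the destination register
 *   bits  4-0 : OPR  - ALU operation code
 * Register code k (1..7) selects Rk; 0 selects external input / no register. */
static uint16_t control_word(unsigned sela, unsigned selb,
                             unsigned seld, unsigned opr)
{
    return (uint16_t)((sela << 11) | (selb << 8) | (seld << 5) | opr);
}

int main(void)
{
    /* Illustrative microoperation R1 <- R2 + R3:
     * SELA = R2, SELB = R3, SELD = R1, OPR = ADD (encoding assumed). */
    uint16_t cw = control_word(2, 3, 1, 0x02);
    printf("control word = 0x%04X\n", cw);   /* binary 010 011 001 00010 */
    return 0;
}
```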
Stack organization
Register Stack
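A minimal sketch of register-stack behaviour, assuming a 64-word stack with a stack pointer SP and FULL/EMPTY flags (the size and flag names are illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

#define STACK_WORDS 64          /* assumed size of the register stack */

static uint16_t stk[STACK_WORDS];
static unsigned SP = 0;         /* stack pointer (index of next free word) */
static bool FULL = false, EMPTY = true;

/* PUSH: store the item, then advance SP; FULL is set once every word is used. */
static bool push(uint16_t item)
{
    if (FULL) return false;                 /* stack overflow  */
    stk[SP] = item;
    SP = (SP + 1) % STACK_WORDS;
    EMPTY = false;
    FULL  = (SP == 0);                      /* wrapped around: all words in use */
    return true;
}

/* POP: step SP back, then read the item; EMPTY is set when the stack drains. */
static bool pop(uint16_t *item)
{
    if (EMPTY) return false;                /* stack underflow */
    SP = (SP + STACK_WORDS - 1) % STACK_WORDS;
    *item = stk[SP];
    FULL  = false;
    EMPTY = (SP == 0);
    return true;
}
```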
Memory Stack
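In a memory stack the same push/pop discipline applies, but the items occupy a reserved region of main memory and, by the usual convention, the stack grows toward lower addresses. A sketch under those assumptions (toy memory size and initial SP are illustrative; overflow/underflow checks are omitted):

```c
#include <stdint.h>

static uint16_t M[4096];        /* toy main memory                              */
static uint16_t SP = 3001;      /* assumed initial SP, just above the stack top */

static void push(uint16_t DR)   /* PUSH: SP <- SP - 1; M[SP] <- DR */
{
    SP = SP - 1;
    M[SP] = DR;
}

static uint16_t pop(void)       /* POP: DR <- M[SP]; SP <- SP + 1 */
{
    uint16_t DR = M[SP];
    SP = SP + 1;
    return DR;
}
```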
Instruction format
• The most common fields found in instruction formats are:
• 1. An operation code field that specifies the operation to be performed.
• 2. An address field that designates a memory address or a processor register.
• 3. A mode field that specifies the way the operand or the effective address is determined.

• The number of address fields in the instruction format of a computer depends on the internal organization of its
registers. Most computers fall into one of three types of CPU organizations:
• 1. Single accumulator organization.
• 2. General register organization.
• 3. Stack organization.

Code for the computation of X = (A + B) * (C + D) using 3-address, 2-address, 1-address, and 0-address instructions can be compared as follows.
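As a rough comparison: a 3-address machine needs only three instructions for this expression (two additions and a multiplication, each naming two sources and a destination); with fewer address fields, more instructions and temporary locations are required; and a 0-address (stack) machine evaluates the expression in reverse-Polish order using PUSH and POP. The C sketch below mimics the zero-address sequence; the helper names and operand values are illustrative, not a real instruction set.

```c
#include <stdio.h>

/* Operand stack and zero-address "instructions" written as plain C helpers. */
static int stk[16];
static int sp = 0;

static void PUSH(int v) { stk[sp++] = v; }
static int  POP(void)   { return stk[--sp]; }
static void ADD(void)   { int b = POP(), a = POP(); PUSH(a + b); }
static void MUL(void)   { int b = POP(), a = POP(); PUSH(a * b); }

int main(void)
{
    int A = 2, B = 3, C = 4, D = 5, X;

    /* Zero-address sequence for X = (A + B) * (C + D):
     * PUSH A, PUSH B, ADD, PUSH C, PUSH D, ADD, MUL, POP X */
    PUSH(A); PUSH(B); ADD();
    PUSH(C); PUSH(D); ADD();
    MUL();
    X = POP();

    printf("X = %d\n", X);      /* (2+3)*(4+5) = 45 */
    return 0;
}
```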


Addressing modes
1. Implied Mode
2. Immediate Mode
3. Register Mode
4. Register Indirect Mode
5. Autoincrement or autodecrement
6. Direct Address Mode
7. Indirect Address Mode
8. Relative Address Mode
9. Indexed Address Mode
10. Base Register Addressing Mode
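The effective-address (EA) rule behind several of these modes can be summarised in a short sketch; the toy memory contents, register names, and constants below are assumptions rather than any particular machine:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy machine state (all values illustrative). */
static uint16_t M[256];          /* main memory        */
static unsigned R1 = 7;          /* processor register */
static unsigned PC = 0x20;       /* program counter    */
static unsigned XR = 0x03;       /* index register     */

int main(void)
{
    unsigned addr = 0x10;        /* address field of the instruction  */
    M[0x10] = 0x80;              /* a pointer stored at location 0x10 */

    unsigned op_immediate = addr;        /* Immediate: operand is the field itself */
    unsigned op_register  = R1;          /* Register: operand is in a CPU register */
    unsigned ea_reg_ind   = R1;          /* Register indirect: EA = contents of R1 */
    unsigned ea_direct    = addr;        /* Direct: EA = address field             */
    unsigned ea_indirect  = M[addr];     /* Indirect: EA = M[address field]        */
    unsigned ea_relative  = PC + addr;   /* Relative: EA = PC + address field      */
    unsigned ea_indexed   = XR + addr;   /* Indexed: EA = XR + address field       */

    printf("imm=%u reg=%u reg-ind=0x%02X dir=0x%02X ind=0x%02X rel=0x%02X idx=0x%02X\n",
           op_immediate, op_register, ea_reg_ind, ea_direct,
           ea_indirect, ea_relative, ea_indexed);
    return 0;
}
```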
Overview of GPU, CPU vs GPU computing difference

Overview
Central Processing Unit (CPU):
• The brain of the computer, designed for general-purpose computing tasks.
• Has a few powerful cores optimized for sequential processing.
• Excellent for tasks that require high single-threaded performance, such as operating systems and general applications.
Graphics Processing Unit (GPU):
• Specialized for parallel processing and designed primarily for rendering graphics.
• Consists of thousands of small, efficient cores optimized for parallelism.
• Excels at tasks that can be parallelized, making it valuable for scientific simulations, deep learning, and graphics rendering.

Architecture
CPU:
• A few powerful cores with large caches.
• Execution units optimized for sequential instruction execution.
• High single-threaded performance with complex instruction sets.
GPU:
• Thousands of simpler cores, organized into streaming multiprocessors (SMs).
• Execution units optimized for parallel execution.
• Lower clock speeds per core but high parallel throughput.

Parallelism
CPU:
• Typically 2–64 cores, suitable for multi-threaded tasks.
• Limited parallelism compared to GPUs.
• Performance improvements in multi-threaded applications are modest.
GPU:
• Thousands of cores, designed for massively parallel tasks.
• Ideal for tasks involving large data sets or complex calculations.
• Significant performance boost in parallel applications.

Memory Hierarchy
CPU:
• Complex memory hierarchy with caches (L1, L2, L3) and main memory.
• Latency-optimized for fast access to a small amount of data.
• Suitable for tasks with small memory footprints and irregular access patterns.
GPU:
• Simpler memory hierarchy with global memory, shared memory, and local memory.
• Designed for high-throughput data access and transfer.
• Ideal for tasks with large data sets and regular access patterns.

Applications
CPU:
• Suitable for tasks that require strong single-threaded performance.
• Examples include web browsing, office applications, and general-purpose software.
GPU:
• Excels at tasks that can be parallelized, such as scientific simulations, 3D rendering, video processing, and deep learning.
• Widely used in artificial intelligence and machine learning.
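As a concrete illustration of the "thousands of simpler cores" point, GPU programming models such as CUDA launch one lightweight thread per data element. The minimal vector-addition sketch below shows that style; the array size and launch configuration are illustrative choices, not requirements.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

/* Each thread handles one array element; the parallelism comes from
 * launching many threads at once rather than looping on a single core. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                      /* ~1M elements (illustrative) */
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);               /* unified memory for brevity */
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);              /* expect 3.0 */
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU the same work would be a loop of a million iterations spread over a few cores; on the GPU each addition gets its own thread, which is exactly the throughput-oriented trade-off described above.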
