Chapter 4
The Processor
§4.1 Introduction
Introduction
• CPU performance factors
Instruction count
Determined by ISA and compiler
CPI and Cycle time
Determined by CPU hardware
• We will examine two MIPS implementations
A simplified version
A more realistic pipelined version
• Simple subset, shows most aspects
Memory reference: lw, sw
Arithmetic/logical: add, sub, and, or, slt
Control transfer: beq, j
Instruction Execution
• PC → instruction memory, fetch instruction
• Register numbers → register file, read registers
• Depending on instruction class
Use ALU to calculate
Arithmetic result
Memory address for load/store
Branch target address
Access data memory for load/store
PC ← target address or PC + 4
CPU Overview
(Datapath overview figure: operation execution in the ALU, address calculation for load/store, read/write of data memory, branch handling, write data from the ALU or memory back into the registers, and the next-instruction address fed back into the PC. Wide lines are buses of several wires.)
Multiplexers
Can’t just join wires together: use multiplexers to select among alternative sources
• Next PC = PC + 4 or the branch target address, selected by ANDing the Zero output of the ALU with a control signal that indicates the instruction is a branch
• Data written into the register file = the output of the ALU (arithmetic-logical instruction) or the output of the data memory (load)
• Second ALU input = a value read from the registers (arithmetic-logical instruction/branch) or the sign-extended offset field of the instruction (load or store)
The Control Unit sets these selections and the other control signals based on the instruction.
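To make the selections concrete, here is a minimal Python sketch of the three multiplexers; the function and signal names (next_pc_mux, alu_src_mux, ...) are illustrative assumptions, not taken from the slides.

```python
# Hypothetical sketch of the three multiplexer selections described above.

def next_pc_mux(pc_plus_4, branch_target, branch, zero):
    """PCSrc mux: take the branch target only when the instruction is a
    branch AND the ALU's Zero output is asserted."""
    return branch_target if (branch and zero) else pc_plus_4

def reg_write_data_mux(alu_result, mem_data, mem_to_reg):
    """MemtoReg mux: write-back value comes from data memory for a load,
    otherwise from the ALU."""
    return mem_data if mem_to_reg else alu_result

def alu_src_mux(reg_value, sign_ext_offset, alu_src):
    """ALUSrc mux: second ALU input is the sign-extended offset for
    loads/stores, otherwise the second register value."""
    return sign_ext_offset if alu_src else reg_value
```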
Datapath Elements
• Datapath: the combination of a set of registers with a shared
ALU and interconnecting paths performing an operation
• Combinational datapath elements
Operate on data values
Output depends only on the current inputs; no internal memory;
deterministic (same inputs → same output)
e.g. the ALU
• State elements
Internal storage
Values preserved after shutdown (the state can be reloaded)
Characterize the computer
e.g. instruction & data memories, registers
Inputs: data value & clock (the clock determines when the value will be written)
§4.2 Logic Design Conventions
Logic Design Basics
• Information encoded in binary
Low voltage = 0, High voltage = 1
One wire per bit
Multi-bit data encoded on multi-wire buses
• Combinational element
Operate on data
Output is a function of input
• State (sequential) elements
Store information
Combinational Elements
• AND-gate: Y = A & B
• Adder: Y = A + B
• Multiplexer: Y = S ? I1 : I0
• Arithmetic/Logic Unit: Y = F(A, B)
Sequential Elements
• Register: stores data in a circuit
Uses a clock signal to determine when to update the stored value
Edge-triggered: update when Clk changes from 0 to 1
If the signal is written at the same time it is read, the result is unpredictable:
the clocking methodology makes it predictable
Clock edge: quick transition from low to high or vice versa
(D flip-flop: data input D, clock input Clk, output Q)
Sequential Elements
• Register with write control
Only updates on clock edge when write control input is 1
Used when stored value is required later
Writes occur on the rising clock edge
(D flip-flop with write control: data input D, Write enable, clock Clk, output Q)
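A minimal behavioural sketch of such a register in Python, assuming a simple software-simulation style (the class and method names are illustrative):

```python
# Edge-triggered register with a write control, modelled in software.

class Register:
    def __init__(self, initial=0):
        self.q = initial        # stored value (output Q)
        self._prev_clk = 0      # previous clock level, used to detect edges

    def tick(self, clk, write, d):
        """Update on a rising clock edge (0 -> 1) only when Write is 1."""
        rising_edge = (self._prev_clk == 0 and clk == 1)
        if rising_edge and write:
            self.q = d
        self._prev_clk = clk
        return self.q

# Usage: the stored value changes only on a rising edge with Write asserted.
r = Register()
r.tick(clk=1, write=1, d=42)   # rising edge, write enabled -> q becomes 42
r.tick(clk=0, write=1, d=7)    # not an edge -> q stays 42
print(r.q)                     # 42
```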
Clocking Methodology
• Combinational logic transforms data during clock cycles
Between clock edges
Input from state elements, output to state element
Longest delay determines clock period
• Edge-triggered timing: state elements are updated on a rising edge
Inputs are written in the previous cycle
Outputs can be written in the following cycle
No feedback within a single cycle
Instruction Fetch
(Figure: datapath that fetches instructions and increments the PC. The PC, a 32-bit register, addresses the instruction memory; an adder increments the PC by 4 for the next 32-bit instruction.)
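A small sketch of this fetch step in Python, assuming a word-aligned instruction memory represented as a dictionary (names are illustrative):

```python
# Instruction fetch: read the instruction at the current PC, compute PC + 4.

def fetch(pc, instruction_memory):
    """instruction_memory maps a byte address to a 32-bit instruction word."""
    instruction = instruction_memory[pc]
    next_pc = (pc + 4) & 0xFFFFFFFF   # 32-bit PC advances by one word
    return instruction, next_pc
```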
R-Format Instructions
• Read two register operands: takes the register numbers as inputs and outputs the values read
• Perform arithmetic/logical operation
• Write register result: needs register number and data
Write control signal is needed
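A hedged Python sketch of these R-format steps, with the ALU function passed in as a callable (all names are illustrative assumptions):

```python
# R-format execution: read rs and rt, run the ALU, write rd if RegWrite is set.

def execute_r_type(regs, rs, rt, rd, alu_op, reg_write=True):
    a = regs[rs]                 # first register operand
    b = regs[rt]                 # second register operand
    result = alu_op(a, b)        # e.g. lambda a, b: (a + b) & 0xFFFFFFFF
    if reg_write and rd != 0:    # register 0 is hard-wired to zero in MIPS
        regs[rd] = result
    return result
```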
Load/Store Instructions
• Read register
operands
• Calculate address
using 16-bit offset
Use ALU, but
sign-extend offset
• Load: Read memory
and update register
• Store: Write register
value to memory
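A minimal sketch of the load/store steps above in Python, assuming a byte-addressed memory dictionary and illustrative helper names:

```python
# Load/store: add the sign-extended 16-bit offset to the base register,
# then read (lw) or write (sw) data memory.

def sign_extend16(value):
    """Sign-extend a 16-bit offset to 32 bits."""
    return value - 0x10000 if value & 0x8000 else value

def execute_lw(regs, mem, rt, base, offset16):
    addr = (regs[base] + sign_extend16(offset16)) & 0xFFFFFFFF
    regs[rt] = mem[addr]          # load: memory -> register
    return addr

def execute_sw(regs, mem, rt, base, offset16):
    addr = (regs[base] + sign_extend16(offset16)) & 0xFFFFFFFF
    mem[addr] = regs[rt]          # store: register -> memory
    return addr
```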
Branch Instructions
• Read register operands
• Compare operands
Use ALU, subtract and check Zero output
• Calculate target address
Sign-extend displacement
Shift left 2 places (word displacement)
Add to PC + 4
Already calculated by instruction fetch
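A small Python sketch of the branch steps above; the helper names and 32-bit masking style are assumptions for illustration:

```python
# beq: compare by subtraction (Zero output) and form the target as
# PC + 4 plus the sign-extended offset shifted left by 2.

def branch_target(pc_plus_4, offset16):
    offset = offset16 - 0x10000 if offset16 & 0x8000 else offset16  # sign-extend
    return (pc_plus_4 + (offset << 2)) & 0xFFFFFFFF                 # word displacement

def execute_beq(regs, rs, rt, offset16, pc_plus_4):
    zero = (regs[rs] - regs[rt]) == 0        # ALU subtract, check Zero
    target = branch_target(pc_plus_4, offset16)
    return target if zero else pc_plus_4
```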
Branch Instructions
(Branch datapath figure: a dedicated adder computes the branch target address while the ALU compares the registers. The shift-left-2 block just re-routes wires, and the sign bit of the offset is replicated by the sign-extension unit. Branches also affect the instruction fetch portion of the datapath.)
Branch Instructions
Composing the Elements
• First-cut data path does an instruction in one clock cycle
Each datapath element can only do one function at a time
Hence, we need separate instruction and data memories
• Use multiplexers where alternate data sources are used for
different instructions
Simplest datapath will attempt to execute all instructions in one clock
cycle:
• no datapath resource can be used more than once per
instruction
• any element needed more than once must be duplicated
• need a memory for instructions separate from one for
data
R-Type/Load/Store Datapath
Datapath with only a single register file & a single ALU:
• support two different sources for the second ALU input
• support two different sources for the data stored into the register file
• add multiplexors placed at the ALU input and at the data input to the
register file
Full Datapath
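As a rough software analogue of the full single-cycle datapath, here is a self-contained Python sketch that executes one instruction from the lw/sw/beq/R-type subset; the encodings follow standard MIPS, but the simulation style and names are assumptions:

```python
# One single-cycle step over the full datapath for lw/sw/beq/R-type.

def sign_extend16(v):
    return v - 0x10000 if v & 0x8000 else v

def step(pc, imem, dmem, regs):
    """Execute the instruction at pc and return the next PC."""
    instr = imem[pc]
    pc_plus_4 = (pc + 4) & 0xFFFFFFFF

    opcode = (instr >> 26) & 0x3F
    rs, rt = (instr >> 21) & 0x1F, (instr >> 16) & 0x1F
    rd     = (instr >> 11) & 0x1F
    funct  = instr & 0x3F
    imm16  = instr & 0xFFFF

    if opcode == 0x23:                                   # lw (35)
        addr = (regs[rs] + sign_extend16(imm16)) & 0xFFFFFFFF
        regs[rt] = dmem[addr]
    elif opcode == 0x2B:                                 # sw (43)
        addr = (regs[rs] + sign_extend16(imm16)) & 0xFFFFFFFF
        dmem[addr] = regs[rt]
    elif opcode == 0x04:                                 # beq (4)
        if regs[rs] == regs[rt]:
            return (pc_plus_4 + (sign_extend16(imm16) << 2)) & 0xFFFFFFFF
    elif opcode == 0x00:                                 # R-type
        alu = {
            0x20: lambda a, b: (a + b) & 0xFFFFFFFF,     # add
            0x22: lambda a, b: (a - b) & 0xFFFFFFFF,     # sub
            0x24: lambda a, b: a & b,                    # and
            0x25: lambda a, b: a | b,                    # or
            0x2A: lambda a, b: 1 if a < b else 0,        # slt (simplified, unsigned compare)
        }[funct]
        if rd != 0:                                      # $zero is never written
            regs[rd] = alu(regs[rs], regs[rt])
    return pc_plus_4
```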
§4.4 A Simple Implementation Scheme
ALU Control
• ALU used for
Load/Store: F = add
Branch: F = subtract
R-type: F depends on funct field
ALU control   Function
0000          AND
0001          OR
0010          add
0110          subtract
0111          set-on-less-than
1100          NOR
ALU Control
• Assume 2-bit ALUOp derived from opcode
Combinational logic derives ALU control
• Multiple levels of decoding to simplify main control unit:
• ALUOp and the instruction’s funct field are used as inputs to the ALU control

opcode    ALUOp   Operation          funct    ALU function       ALU control
lw        00      load word          XXXXXX   add                0010
sw        00      store word         XXXXXX   add                0010
beq       01      branch equal       XXXXXX   subtract           0110
R-type    10      add                100000   add                0010
                  subtract           100010   subtract           0110
                  AND                100100   AND                0000
                  OR                 100101   OR                 0001
                  set-on-less-than   101010   set-on-less-than   0111
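The two-level decode in the table can be sketched as a small Python function (an illustration, not the actual gate-level implementation):

```python
# ALU control: ALUOp from the main control plus the funct field
# select the 4-bit ALU control value.

def alu_control(alu_op, funct):
    if alu_op == 0b00:            # lw / sw
        return 0b0010             # add
    if alu_op == 0b01:            # beq
        return 0b0110             # subtract
    # alu_op == 0b10: R-type, decode the funct field
    return {
        0b100000: 0b0010,         # add
        0b100010: 0b0110,         # subtract
        0b100100: 0b0000,         # AND
        0b100101: 0b0001,         # OR
        0b101010: 0b0111,         # set-on-less-than
    }[funct]
```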
The Main Control Unit
• Control signals derived from instruction
R-type       opcode = 0 (31:26)          rs (25:21)   rt (20:16)   rd (15:11)   shamt (10:6)   funct (5:0)
Load/Store   opcode = 35 or 43 (31:26)   rs (25:21)   rt (20:16)   address (15:0)
Branch       opcode = 4 (31:26)          rs (25:21)   rt (20:16)   address (15:0)

Notes: the opcode is always in bits 31:26; rs is always read; rt is read except for load; the write register is rd for an R-type and rt for a load, so a multiplexor selects between them; the address field is sign-extended and added.
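A sketch of the main control as a simple lookup from opcode to control-signal settings, using the conventional single-cycle signal names (RegDst, ALUSrc, MemtoReg, ...); the tabular layout is an assumption made for illustration:

```python
# Main control: opcode -> control signal settings. 'X' marks don't-care values.

MAIN_CONTROL = {
    # opcode: (RegDst, ALUSrc, MemtoReg, RegWrite, MemRead, MemWrite, Branch, ALUOp)
    0x00: (1,   0,   0,   1, 0, 0, 0, 0b10),   # R-format
    0x23: (0,   1,   1,   1, 1, 0, 0, 0b00),   # lw
    0x2B: ('X', 1,  'X',  0, 0, 1, 0, 0b00),   # sw
    0x04: ('X', 0,  'X',  0, 0, 0, 1, 0b01),   # beq
}

def main_control(opcode):
    return MAIN_CONTROL[opcode]
```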
Datapath With Control
Control Signals
The control unit can set all but one of the control signals based solely on the opcode
field of the instruction
• PCSrc control line should be asserted if the instruction is branch on equal (a
decision that the control unit can make) and the Zero output of the ALU,
which is used for equality comparison, is asserted
R-Type Instruction Control Path
• The instruction is fetched, and the PC is incremented
• Two registers, $t2 and $t3, are read from the register file; also,
the main control unit computes the setting of the control lines
• The ALU operates on the data read from the register file, using
the function code (bits 5:0, which is the funct field, of the
instruction) to generate the ALU function
• The result from the ALU is written into the register file, using bits
15:11 of the instruction to select the destination register ($t1)
add $t1, $t2, $t3
R-Type Instruction
Load Instruction Control Path
• The instruction is fetched, and the PC is incremented
• Register $t2 is read from the register file
• The ALU computes the sum of the value read from the register
file and the sign-extended, lower 16 bits of the instruction
(offset)
• The sum from the ALU is used as the address for the data
memory
• The data from the memory unit is written into the register file;
the register destination is given by bits 20:16 of the instruction
($t1)
lw $t1, offset($t2)
Load Instruction
Branch Instruction Control Path
• Similar to R-format instruction, but the ALU output is used to
determine whether the PC is written with PC + 4 or the
branch target address
• The instruction is fetched, and the PC is incremented
• Two registers, $t1 and $t2, are read from the register file
• The ALU performs a subtract on the data read from the register file
• PC + 4 is added to the sign-extended lower 16 bits of the instruction
(offset) shifted left by 2; the result is the branch target address
• The Zero output of the ALU is used to decide which adder result to
store in the PC
beq $t1, $t2, offset
Branch-on-Equal Instruction
Implementing Jumps
Jump   opcode = 2 (31:26)   address (25:0)
• Jump uses word address
• The upper 4 bits of the address that should replace the PC
come from the PC of the jump instruction plus 4
• Update PC with concatenation of
Top 4 bits of old PC
26-bit jump address
00 in the two low-order bits (the jump address is a word address, shifted left 2 like a branch)
• Need an extra control signal decoded from opcode
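A one-line sketch of this PC update in Python; the example addresses are made up for illustration:

```python
# Jump: keep the top 4 bits of PC + 4, drop in the 26-bit address, shift left 2.

def jump_target(pc_plus_4, addr26):
    return (pc_plus_4 & 0xF0000000) | ((addr26 & 0x03FFFFFF) << 2)

# Hypothetical example: a jump at 0x00400000 with address field 0x100010
# lands at 0x00400040.
print(hex(jump_target(0x00400004, 0x100010)))   # 0x400040
```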
Datapath With Jumps Added
Performance Issues
• Longest delay determines clock period (cycle time)
Critical path: load instruction
Instruction memory → register file → ALU → data memory →
register file
• We will improve performance by pipelining
Improves efficiency by executing multiple instructions simultaneously
Multiple instructions are overlapped in execution
Pipelining Analogy
• Washing clothes
Place one dirty load of clothes in the washer
When the washer is finished, place the wet load in the dryer
When the dryer is finished, place the dry load on a table and fold
When folding is finished, ask your roommate to put the clothes away
• Pipelining
As soon as the washer is finished with the first load and it has been placed
in the dryer, you load the washer with the second dirty load
When the first load is dry, you place it on the table to start folding,
move the wet load to the dryer, and put the next dirty load into the
washer
Next you have your roommate put the first load away, you start folding the
second load, the dryer has the third load, and you put the fourth load into
the washer
Pipelining Analogy
• Step = stage in pipelining
As long as we have separate resources for each step, we can
pipeline the stages
• Pipelining does not shorten the time needed to complete all
steps (instruction execution time or latency)
It improves the throughput of the system
• If all stages take a similar amount of time & there’s enough work
to do
Speedup ≈ number of stages
§4.5 An Overview of Pipelining
Pipelining Analogy
• Pipelined laundry: overlapping execution
Parallelism improves performance
• Four loads: speedup = 8/3.5 ≈ 2.3
The pipeline is not yet full: 2.3 < 4
MIPS Pipeline
• Five stages, one step per stage
IF: Instruction fetch from memory
ID: Instruction decode & register read
EX: Execute operation or calculate address
MEM: Access memory operand
WB: Write result back to register
• Pipeline depth:
the number of stages in the pipeline, i.e., the maximum number of
instructions that may be in execution simultaneously
Pipeline Performance
• Assume time for stages is
100ps for register read or write
200ps for other stages
• Compare pipelined datapath with single-cycle datapath
Instr      Instr fetch   Register read   ALU op   Memory access   Register write   Total time
lw         200 ps        100 ps          200 ps   200 ps          100 ps           800 ps
sw         200 ps        100 ps          200 ps   200 ps                           700 ps
R-format   200 ps        100 ps          200 ps                   100 ps           600 ps
beq        200 ps        100 ps          200 ps                                    500 ps

Register write uses the first half of a cycle; register read uses the second half.
Pipeline Performance
• Single-cycle model (Tc = 800 ps): every instruction takes exactly one clock cycle, so the clock cycle must be stretched to accommodate the slowest instruction
• Pipelined model (Tc = 200 ps): all the pipeline stages take a single clock cycle, so the clock cycle must be long enough to accommodate the slowest stage
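A rough Python sketch comparing total time under the two models for a short run of instructions; the three-instruction example is an assumption for illustration:

```python
# Single-cycle vs. pipelined total time under the stage times above.

SINGLE_CYCLE_TC = 800   # ps: stretched to the slowest instruction (lw)
PIPELINED_TC    = 200   # ps: stretched to the slowest stage

def single_cycle_time(n_instructions, tc=SINGLE_CYCLE_TC):
    return n_instructions * tc

def pipelined_time(n_instructions, n_stages=5, tc=PIPELINED_TC):
    # The first instruction takes n_stages cycles; each later one adds 1 cycle.
    return (n_stages + (n_instructions - 1)) * tc

print(single_cycle_time(3))   # 2400 ps for three lw instructions
print(pipelined_time(3))      # 1400 ps for the same three instructions
```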
Pipeline Speedup
• If all stages are balanced
i.e., all take the same time
Time between instructions (pipelined)
= Time between instructions (nonpipelined) / Number of stages
• If not balanced, speedup is less
• A five-stage pipeline should offer nearly a fivefold
improvement over the 800 ps nonpipelined time (or a 160 ps
clock cycle)
• However, the stages may be imperfectly balanced
• Moreover, pipelining involves some overhead
• Speedup due to increased throughput
Latency (time for each instruction) does not decrease
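A small Python sketch of the speedup arithmetic, assuming perfectly balanced 200 ps stages and a long instruction stream:

```python
# Ideal vs. achieved speedup for the five-stage pipeline above.

def pipelined_instruction_time(nonpipelined_time, n_stages):
    """Time between instructions when all stages are perfectly balanced."""
    return nonpipelined_time / n_stages

def throughput_speedup(n_instructions, nonpipelined_time=800, stage_time=200, n_stages=5):
    """Ratio of total nonpipelined time to total pipelined time."""
    total_nonpipelined = n_instructions * nonpipelined_time
    total_pipelined = (n_stages + n_instructions - 1) * stage_time
    return total_nonpipelined / total_pipelined

print(pipelined_instruction_time(800, 5))   # 160.0 ps in the ideal, balanced case
print(throughput_speedup(1_000_000))        # approaches 800/200 = 4 for long runs
```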
Speedup Example
Pipelining and ISA Design
• MIPS ISA designed for pipelining
All instructions are 32-bits
Easier to fetch and decode in one cycle
c.f. x86: 1- to 17-byte instructions
Few and regular instruction formats
Can decode and read registers in one step
Load/store addressing
Can calculate address in 3rd stage, access memory in 4th stage
If we could operate on operands in memory, these stages would be split into
more stages: an address stage, a memory stage, and an execute stage
Alignment of memory operands
Memory access takes only one cycle