Group 3 Presentation Outline
MULTIPROCESSOR SYSTEMS
1. Symmetric Multiprocessing (SMP)
All processors share the same memory and operate under a single OS. Each
processor has equal access to resources and can perform tasks independently.
Common in modern multi-core processors.
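The shared-memory idea can be sketched in software. In the toy example below (Python), threads stand in for processors: all workers read and write one shared memory space, much as SMP cores share one physical memory. The function name and worker count are illustrative.

```python
import threading

def smp_sum(data, n_workers=4):
    """Sum `data` by letting worker threads share one memory space,
    each handling its own slice - a software analogy for SMP cores
    operating on a single shared memory."""
    partials = [0] * n_workers          # shared memory: one slot per worker
    chunk = (len(data) + n_workers - 1) // n_workers

    def worker(i):
        # Each "processor" works independently on its portion.
        partials[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)
```
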
2. Asymmetric Multiprocessing (AMP)
One processor (master) controls the system, while others (slaves) handle
specific tasks. Used in specialized applications where certain processors are
optimized for specific functions.
3. Massively Parallel Processing (MPP)
A type of computing architecture where many processors work simultaneously
on different parts of a computational task. Each processor in an MPP system
has its own memory and communicates with the others through a high-speed
network.
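By contrast with shared memory, MPP nodes each hold their own data and exchange results over a channel. The Unix-only sketch below imitates this with two processes and a pipe (`os.fork` gives the child its own copy of memory); it is an analogy for nodes communicating over a network, not real MPP code.

```python
import os

def mpp_sum(data):
    """Sum `data` across two processes with separate memory,
    exchanging results over a pipe - a toy analogy for MPP nodes
    communicating over a high-speed network (Unix-only: os.fork)."""
    mid = len(data) // 2
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                        # child: has its own memory copy
        os.close(r)
        os.write(w, str(sum(data[:mid])).encode())
        os.close(w)
        os._exit(0)
    os.close(w)
    parent_part = sum(data[mid:])       # parent works in parallel
    child_part = int(os.read(r, 64))    # receive the child's result
    os.close(r)
    os.waitpid(pid, 0)
    return parent_part + child_part
```
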
Supercomputers:
Servers:
Speaker Notes:
Today, I’ll be talking about Direct Memory Access, or DMA—a key system
that improves how computers handle data transfers between hardware and
memory.
WHAT IS DMA?
Content:
Speaker Notes:
DMA enables hardware like disk drives or network cards to transfer data
directly to or from system memory. This happens without the CPU managing
every step, freeing it up to perform other tasks and improving system
efficiency.
ADVANTAGES OF DMA
Content:
Speaker Notes:
The main advantage of DMA is that it lightens the load on the CPU. With less
CPU involvement, the system can move data faster and handle multiple
operations at once.
TYPES OF DMA
Content:
Burst Mode
Transparent Mode
Speaker Notes:
DMA comes in three main types, each balancing speed and CPU access
differently. Let’s look at each one briefly.
BURST MODE
Content:
Very fast
Speaker Notes:
In burst mode, a whole chunk of data is sent in one go. This makes it fast, but
during that time, the CPU can't access memory, which might slow down some
other tasks.
CYCLE STEALING MODE
Content:
Balanced performance
Speaker Notes:
Cycle stealing allows the CPU and DMA to share memory access by alternating
control. It’s not as fast as burst mode, but it keeps the system responsive.
TRANSPARENT MODE
Content:
Speaker Notes:
This mode waits until the CPU is idle before transferring data, ensuring no
disruption. It’s the slowest but ideal when smooth CPU operation is critical.
COMMON USES OF DMA
Content:
Hard drives
Sound cards
Graphics cards
Network cards
Speaker Notes:
DMA is commonly used in devices that need to move lots of data quickly—like
hard drives for file access, sound cards for audio playback, and graphics or
network cards for video and internet data.
FINAL THOUGHTS
Content:
Speaker Notes:
BITSLICE MICROCOMPUTERS
Key Features
Scalable Data Widths: Need a 20-bit or 48-bit CPU? Just stack more slices
together.
Examples
AMD 2900 Series: Especially the AM2901 4-bit slice—hugely popular for
custom CPU designs.
DEC VAX 11/780: A famous computer that used bitslice architecture for part
of its CPU.
Applications
Bitslice microcomputers were used in systems that demanded speed and
precision, including:
* Platform Choice: Whether the software will run on web, desktop, mobile, or
multiple platforms.
* Future Maintenance: Ensuring the software can be easily updated or scaled
as requirements evolve.
Software Requirements define what the software system must accomplish and
under what conditions it must operate. These are gathered through
interaction with stakeholders and are essential for guiding the design,
development, and testing phases.
3. User Requirements: Focus on the end-user's needs and the tasks they need
to perform using the software.
Clearly defined requirements help avoid scope creep, ensure user satisfaction,
and lead to the development of a more robust and effective software solution.
RISC                                      CISC
Takes one clock cycle per instruction     Takes multiple clock cycles per instruction
More general-purpose registers            Fewer registers
A program that has been written for a RISC processor won’t work on a
CISC processor and vice versa
A program that has been written for a RISC processor won’t necessarily work on
another RISC processor as they may have different instruction sets.
Serial interfaces transmit data one bit at a time over a single wire or channel.
To connect an electronic system to an external device such as a PC or
instrumentation, serial I/O is often preferred because it reduces the amount of
wiring required. Among the serial I/O standards available for use today are:
USB
Serial ATA
Bluetooth (wireless)
Wi-Fi (wireless)
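The one-bit-at-a-time idea can be shown directly. The sketch below is a minimal illustration of bit-serial transfer, LSB first; real standards like USB or Serial ATA add framing, clocking, and error checking on top of this, and the function names are illustrative.

```python
def serialize_byte(value):
    """Transmit one byte as a sequence of single bits, LSB first -
    the core idea of a serial interface (start/stop bits and other
    framing used by real links are omitted for brevity)."""
    return [(value >> i) & 1 for i in range(8)]

def deserialize_bits(bits):
    """Reassemble the byte on the receiving end, bit by bit."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value
```
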
1. Fewer Wires Needed: Only requires a few lines, making the wiring
simpler and cheaper compared to parallel interfaces.
4. Compact Design: Useful in compact devices where space for wiring and
connectors is limited (e.g., embedded systems, mobile devices).
1. Slower data transfer: Serial transmission sends data one bit at a time,
which can be slower than parallel communication for short distances.
2. More Complex Protocols: Protocols like I2C or SPI may require additional
logic for addressing, acknowledgment, or synchronization, especially in multi-
device systems.
3. Not Ideal for High-Speed Local Transfer: For very high-speed or high-
volume data (like video streaming or memory access), serial may not be
efficient compared to parallel methods.
1. Computer Peripherals: Devices like mice, keyboards, and printers
(especially via USB) use serial protocols.
In early versions of the microprocessor, data was grouped into bytes (8 bits).
Today, microprocessors work with 8, 16, 32, 64, and 128 bits of data. Access to
more memory requires address buses with an increased number of bits and
the required control signals. The variety of parallel I/O standards available
for use today include:
2. More Cables and Pins Required: Needs multiple lines for data, control,
and ground, which increases cable bulk and connector size.
4. Higher Cost: More wires and connectors mean higher material and
manufacturing costs.
• Parallel ports (like GPIO ports) control LEDs, motors, sensors, and
displays.
• Early printers used parallel ports (like the Centronics interface) for fast
data transfer from computers.
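For contrast with serial transfer, a parallel port presents all bits at once on separate lines. Below is a toy model of an 8-bit output port; the class and its fields are illustrative, not a real GPIO register map.

```python
class ParallelPort:
    """Toy model of an 8-bit parallel output port (e.g., a GPIO port):
    a single write places all eight bits on separate lines at once,
    unlike a serial link that shifts them out one at a time."""
    def __init__(self):
        self.lines = [0] * 8            # one physical wire per bit

    def write(self, value):
        """Drive all eight lines simultaneously from one byte."""
        for i in range(8):
            self.lines[i] = (value >> i) & 1

    def read(self):
        """Read the byte currently present on the lines."""
        value = 0
        for i, bit in enumerate(self.lines):
            value |= bit << i
        return value
```
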
3. Data Acquisition Systems
4. Industrial Automation
• Parallel I/O modules control multiple machines and read sensors at the
same time for quick real-time response.
These are used to configure the behavior of the ports and control lines. They
enable features like
interrupt control, handshake modes, and strobe signal configuration.
5. Data Bus Buffer:
This internal buffer connects the PIA to the system's data bus. It allows the
microprocessor to
write data to or read data from the PIA.
Working Principle
The PIA is connected to the system bus of a microprocessor. When the
microprocessor wants to
send or receive data from a peripheral, it accesses the PIA through read or
write operations.
- In output mode, the CPU sends data to the PIA, which then passes it to the
peripheral device.
- In input mode, the peripheral sends data to the PIA, and the CPU reads it
when ready.
- Control lines can be used to indicate when data is available or when a
transfer is complete. They
can also trigger interrupts for more efficient communication.
Modes of Operation
1. Input Mode: Data flows from the peripheral into the microprocessor via the
PIA.
2. Output Mode: Data is sent from the microprocessor to the peripheral
through the PIA.
3. Handshake Mode: Control lines are used to synchronize the data transfer
process.
4. Interrupt Mode: The PIA can signal the microprocessor when data is
ready, reducing the need for constant polling.
Example: Motorola MC6821
The MC6821 is a commonly used PIA developed by Motorola. It features:
- Two 8-bit ports (Port A and B)
- Four control lines (CA1, CA2, CB1, CB2)
- Compatibility with Motorola 6800-series microprocessors
- Widely used in early computing and embedded applications
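The register layout described above can be modeled in a few lines. This is a simplified software model for illustration only: it covers just the selection between the data direction register (DDRA) and the peripheral register via bit 2 of the control register, and omits the control lines (CA1, CA2) and interrupt behavior.

```python
class PIA6821PortA:
    """Simplified model of one side (Port A) of an MC6821 PIA.
    Address 0 is shared by DDRA and the peripheral register;
    bit 2 of the control register (address 1) selects which one
    a write actually reaches."""
    DDR_SELECT_BIT = 0x04

    def __init__(self):
        self.ddr = 0x00       # data direction: 1 = output, per bit
        self.port = 0x00      # peripheral/output register
        self.control = 0x00

    def write(self, addr, value):
        if addr == 1:
            self.control = value
        elif self.control & self.DDR_SELECT_BIT:
            self.port = value           # peripheral register selected
        else:
            self.ddr = value            # DDRA selected

# Typical init sequence: select DDRA, make all lines outputs,
# then switch to the peripheral register and drive the pins.
pia = PIA6821PortA()
pia.write(1, 0x00)   # control word: select DDRA
pia.write(0, 0xFF)   # all eight lines as outputs
pia.write(1, 0x04)   # control word: select peripheral register
pia.write(0, 0x55)   # drive alternating pattern on the pins
```
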
Applications
- Interfacing keyboards and LED displays
- Communication with printers
- Control of motors and relays in automation systems
- Data acquisition systems in embedded electronics
Advantages
- Provides flexible and programmable I/O options
- Simple to interface with microprocessors
- Supports both polled and interrupt-driven data transfer
- Easy to use in educational and prototype systems
Limitations
- Limited to 16 I/O lines (8 per port)
- Lacks high-speed communication capabilities
- Considered obsolete in modern systems, replaced by more advanced
interfaces like SPI, I2C, and USB
Conclusion
The Peripheral Interface Adapter is a foundational component in early
microprocessor systems.
While newer technologies have largely replaced it in modern designs,
understanding the PIA is
crucial for grasping how microprocessors interact with the external world. Its
simple yet powerful design makes it a clear introduction to programmable I/O.
---
2. Setting Things Up: The DMA controller prepares by deciding where to get
the data from, where to send it, how much data to move, and whether it’s
reading or writing.
3. Data Transfer: The DMA controller takes over the data bus and starts
moving the data directly between the device and memory.
4. Finishing Up: Once the transfer is done, the DMA controller tells the CPU
that everything is complete.
---
Faster transfers: DMA can move data more quickly than if the CPU had to do
it.
Less work for the CPU: The CPU can focus on more important jobs instead of
wasting time on data transfers.
Better efficiency: It’s especially useful when large amounts of data need to be
moved.
---
TYPES OF DMA:
Multi-channel DMA: Several devices can share a single DMA controller, each
using its own channel.
---
Disk operations: Moving data between your hard drive or SSD and RAM.
---
Any downsides?
Bus sharing: The DMA and CPU might compete for access to memory, which
can slow things down.
Setup cost: Setting up DMA transfers takes some effort, so it might not be
worth it for small data movements.
---
In short:
DMA is a smart way for computers to move data around quickly and
efficiently without overloading the CPU. It’s especially handy for large or
frequent transfers, like loading files, streaming video, or handling
network traffic.
1. Introduction to DMA (Direct Memory Access)
- WHAT IS DMA?
DMA stands for Direct Memory Access. It’s a way for devices (like a USB
drive, hard disk, or sound card) to move data directly to or from the
computer's memory without always bothering the CPU.
Normally, the CPU has to copy data from a device to memory or from
memory to a device. This takes time and slows things down. DMA lets the
device do it itself — faster and more efficient.
Imagine you’re trying to copy a huge file. If the CPU had to handle every step,
it would get too busy and couldn’t do anything else. DMA helps by:
- BASIC IDEA:
Imagine you're the CPU, and your job is to manage everything in the
computer. Now, if every time a file needed to be moved, you had to do it
yourself, you’d be too busy to do anything else. So, you hire an assistant —
that assistant is the DMA Controller (DMAC).
The CPU handles every step of moving data between memory and devices. It’s
like the boss doing the cleaning.
With DMA:
The DMAC (assistant) handles the data moving, so the boss (CPU) can focus
on more important things — like running programs.
This is a special chip or circuit that controls the DMA process. It knows where
the data is coming from, where it's going, and how much to move. It's like a
smart delivery guy.
The CPU tells the DMAC where the data is, where to send it, and how much
to move (like giving delivery instructions)
The DMAC says, “I got this,” and starts the data transfer without bothering
the CPU.
Data moves straight from the device to memory (or the other way around). No
CPU help needed.
When the data is fully transferred, the DMAC sends a little signal (called an
interrupt) to tell the CPU: “All done!”
DMA can work in different “styles” or modes depending on how much control
it takes and how it shares time with the CPU.
1. Burst Mode
How it works:
DMA grabs the entire bus (the data road) and transfers a big block of data all
at once without stopping.
Example: Like a truck using the whole highway to carry a huge shipment in
one trip.
Disadvantage: The CPU can’t use the bus while DMA is working — it has to
wait.
2. Cycle Stealing Mode
How it works:
DMA takes small time slots in between CPU operations to transfer data.
Example: Like a kid borrowing your pencil for just a second between your
writing — over and over.
3. Transparent Mode
How it works:
DMA only transfers data when the CPU isn’t using the bus at all — it stays
completely out of the CPU’s way.
Example: Like cleaning a room only when no one’s inside — super polite and
quiet.
Each mode is used depending on what’s more important: speed or keeping the
CPU free.
4. Benefits of DMA
DMA makes the computer faster, smarter, and more efficient by handling
data transfers in the background. Here's how:
I) Reduces CPU Overhead
Why:
Result:
Data moves faster, especially with things like big files or video streams.
How:
With DMA handling transfers and the CPU focusing on other jobs, the whole
system works better.
Devices that move lots of data, like hard drives, sound cards, and graphics
cards.
5. Applications of DMA
What happens:
When you open a file or save something, the data moves between your disk
and RAM.
It moves the file directly without making the CPU do it — which means faster
loading and saving.
What happens:
Games and videos use a lot of images and data. This info has to move quickly
to the graphics card.
DMA transfers game textures, frames, or effects straight into GPU memory,
keeping things smooth and fast.
What happens:
It directly transfers data packets between memory and the network card
without CPU delay — which helps in faster downloading, uploading, or online
gaming.
It transfers audio/video data in real time without interrupting the CPU — no
lag, no stuttering.
DMA is basically like the silent superhero in computers — it works behind the
scenes to keep things fast and smooth.
Even though DMA is awesome, it’s not flawless. Here are some of the main
problems:
I) Bus Contention
Both the CPU and DMA may try to use the same data bus (like a shared road)
at the same time.
Problem:
They can get in each other’s way, slowing things down — especially during
heavy data movement
Since DMA can access memory directly, hackers can use it to read or change
data without the CPU noticing.
Example:
A malicious device (like a fake USB or Thunderbolt cable) could plug in and
secretly read your memory — this is called a DMA Attack.
Setting up DMA needs extra hardware and software work. It’s more
complicated than just letting the CPU do the job.
Result:
It can be harder to design, test, and troubleshoot systems that use DMA,
especially for beginners or small embedded devices.
So, while DMA is powerful, it has its own risks and challenges — just like any
tool. Still, with good design, it’s usually worth it.
DMA is not just an old-school trick — it’s still very important in today’s
advanced devices. Here’s how it fits into modern technology:
Today’s CPUs have multiple cores (like several brains in one chip).
Instead of all cores waiting for data, DMA feeds data to memory quickly,
keeping all cores busy and working efficiently.
These systems use DMA to move data quickly without slowing down the CPU,
allowing for super-fast file transfers and performances.
Embedded systems:
Small computers inside things like washing machines, robots, and car engines.
These devices have limited processing power, so DMA helps by handling data
transfers efficiently, letting the small CPU focus on other tasks.
* Electrical Signaling: This defines the voltage levels, timing, and protocols
used to transmit data and control signals between the microcomputer and the
peripheral.
* Logical Protocols: These are the rules and procedures that govern the
exchange of information, ensuring that data is correctly interpreted by both
the microcomputer and the peripheral.
* Software Drivers: These are programs that allow the microcomputer's
operating system to understand and communicate with specific peripheral
devices.
The Intel 8255A is a very common and widely used PPI chip. It provides 24
I/O lines that are grouped into three 8-bit ports (Port A, Port B, and Port C).
These ports can be programmed in various modes to support different I/O
configurations and data transfer methods.
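The 8255A is configured by writing a mode-set control word. The helper below assembles a mode 0 (simple I/O) control word under the commonly documented bit layout (bit 7 marks a mode-set word; direction bits are 1 for input); treat the exact bit positions as an assumption to verify against the datasheet.

```python
def i8255_mode0_control(pa_in, pb_in, pc_upper_in, pc_lower_in):
    """Assemble an 8255A mode-set control word for mode 0 (simple I/O).
    Bit layout assumed here: bit 7 = mode-set flag, bit 4 = Port A
    direction, bit 3 = Port C upper, bit 1 = Port B, bit 0 = Port C
    lower (1 = input); group mode bits are left at 0 for mode 0."""
    word = 0x80                 # mode-set flag
    if pa_in:
        word |= 0x10
    if pc_upper_in:
        word |= 0x08
    if pb_in:
        word |= 0x02
    if pc_lower_in:
        word |= 0x01
    return word
```

For instance, all three ports as outputs gives 0x80, and Port A as input with the rest as outputs gives 0x90.
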
Examples of how PPIs are used in microcomputer interfacing:
* Keyboard and Display Interfacing: A PPI can be used to read data from a
keyboard and send data to a display.
* Interfacing with Sensors and Actuators: Reading data from sensors and
controlling actuators in embedded systems.
Key Points:
Timing involves using internal timers to measure and control when operations
occur, ensuring precision in real-time systems.
Timers and interrupts work together for efficient, responsive operations, such
as periodic tasks like LED blinking.
The evidence leans toward interrupts being essential for managing time-
sensitive events, though complexity varies by system.
For example, in real-time systems, timers are used for periodic tasks such as
sampling an Analog-to-Digital Converter (ADC) 100 times per second, which
requires interrupts every 10 milliseconds. The SysTick timer, commonly
found in ARM Cortex-M microcontrollers, is a specific implementation for
periodic interrupts, with detailed register settings (e.g., NVIC_ST_CTRL_R,
NVIC_ST_RELOAD_R). The frequency calculation, Frequency = f_BUS /
(n+1), where n is the reload value and f_BUS is the bus clock frequency,
ensures precise timing, with examples given for 80 MHz and 16 MHz clocks.
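The reload formula above can be checked numerically. The sketch below assumes only the relation Frequency = f_BUS / (n + 1) from the text; the function names are illustrative.

```python
def systick_reload(f_bus_hz, interrupt_hz):
    """Reload value n such that f_bus_hz / (n + 1) = interrupt_hz."""
    return f_bus_hz // interrupt_hz - 1

def systick_frequency(f_bus_hz, n):
    """Interrupt frequency produced by reload value n."""
    return f_bus_hz / (n + 1)

# 100 interrupts/second (one every 10 ms) on an 80 MHz bus clock:
# systick_reload(80_000_000, 100) -> 799_999
```
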
Interrupt
They come in hardware types, like button presses, and software types, such as
system calls for exceptions.
When an interrupt occurs, the microprocessor saves its state, jumps to the
ISR, executes it, then resumes the original program. This process minimizes
latency, ensuring quick responses, which is crucial for efficient
microcomputer operations.
Five conditions must be met for an interrupt, including arming the device and
enabling global interrupts.
• These are split into software interrupts, like system calls, and hardware
ones, triggered by devices.
Timing in microcomputers often involves managing when operations occur,
such as reading sensors or updating displays.
Timers are essential for creating delays or synchronizing with external events,
improving system efficiency.
Using timers with interrupts lets the microcomputer handle other tasks while
waiting, which is pretty handy.
Interrupts are signals that temporarily halt the normal execution of a
program to handle specific events or tasks. They allow the microprocessor to
respond quickly to external or internal events, such as input from
peripherals or timer expirations.
Interrupts are signals that allow a microcomputer to pause its current task
and handle urgent events, such as a keyboard press or sensor data. This
ensures quick responses without constant checking, which is vital for real-time
systems.
Timers are more accurate than software-based delays, which can vary due to
execution time.
They allow the microcomputer to enter low-power modes while waiting for a
timer event.
Introduction to Interrupts
Research suggests that interrupts are categorized into hardware and software
types. Hardware interrupts, triggered by external devices, are often
asynchronous to the processor's clock and include maskable interrupts (which
can be ignored if the processor is busy) and spurious interrupts (occurring
without a valid reason). Software interrupts, on the other hand, are generated
by the program itself, such as system calls (e.g., fork() as detailed at fork()
system call) or exceptions like division by zero.
The interrupt handling process involves several steps: the device raises an
interrupt request (IRQ), the processor finishes the current instruction, saves
its state (e.g., registers and program counter) on the stack, jumps to the
Interrupt Service Routine (ISR) at a predefined address (often via vectored
interrupts, where each device has a unique memory address for its ISR),
executes the ISR, and then restores the state to resume the original program.
This sequence is detailed in resources like Interrupts Overview, which
emphasizes the importance of timely responses in embedded systems.
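A common software counterpart to the sequence above is the minimal-ISR pattern: the service routine only records that the event happened, and the main program handles it between its own tasks. The Python sketch below simulates this (the `isr` function stands in for a hardware-invoked routine; names are illustrative).

```python
interrupt_pending = False

def isr():
    """Interrupt Service Routine: runs when the device raises an IRQ.
    Kept deliberately short - it only records that the event happened,
    so the processor can return to the interrupted program quickly."""
    global interrupt_pending
    interrupt_pending = True

def main_loop_iteration(work):
    """One pass of the main program: do normal work, then deal with
    any pending interrupt - no constant polling of the device."""
    global interrupt_pending
    result = work()
    if interrupt_pending:
        interrupt_pending = False
        return result, "handled interrupt"
    return result, None
```
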
Interrupt latency, the time between interrupt generation and ISR execution, is
a critical metric, especially for time-sensitive applications. Factors affecting
latency include the number of interrupts, enabled interrupts, handleable
interrupts, and the time taken to handle each. The evidence leans toward
minimizing latency to ensure responsiveness, with mechanisms like interrupt
nesting (prioritizing higher-priority interrupts) and priority schemes (e.g.,
fixed priority, dynamic priority, round-robin) playing key roles in managing
multiple interrupts, as discussed at Interrupts in computing.
Interrupt nesting and priority schemes, such as fixed priority (handling the
highest priority first) or round-robin for same-priority interrupts, ensure that
time-critical tasks are prioritized, as detailed at Interrupts. Triggering
methods, like level-trigger (signal held at active logic level) and edge-trigger
(triggered by rising/falling edge), also influence timing, with edge-trigger often
requiring additional hardware for detection, as discussed at Interrupts in
8086 microprocessor.
Another example is error handling, where interrupts are used to detect and
recover from hardware or software errors, as mentioned at Interrupts in 8085
Microprocessor. This is particularly useful in input/output operations,
allowing the microprocessor to perform other tasks while waiting for data
transfer, enhancing system utilization.
Conclusion
1. Types of Peripheral Interface Connectors
I2C (Inter-Integrated Circuit):
PCIe is the modern version with lanes for faster data transfer.
---
2. Functions
---
3. Applications
Networking (e.g., Ethernet modules via SPI or PCIe).
---
4. Design Considerations
INSTRUCTION SETS
Key Points:
Instruction Set Architecture (ISA): The design of the instruction set, including
the format and capabilities. Examples include x86, ARM, MIPS, and RISC-V.
CISC (Complex Instruction Set Computing): Fewer lines of code but more
complex instructions (e.g., x86).
RISC (Reduced Instruction Set Computing): Simpler, faster instructions,
often requiring more lines of code (e.g., ARM).
Timing:
·The Timing and Control Unit (TCU) generates clock signals and control
signals (like Read/Write) to orchestrate the microprocessor's activities.
·These signals ensure that operations occur in a precise sequence, allowing the
microprocessor to fetch instructions, execute them, and communicate with
memory and peripherals.
·The TCU ensures that all components of the system operate in a coordinated
manner.
Interrupts:
They allow the microprocessor to respond quickly to external events, like user
input or hardware signals, and prioritize tasks based on urgency.
Benefits of Timing and Interrupts:
·Real-time processing:
·Multi-tasking:
·Efficiency:
·Flexibility:
Timers and interrupts provide a flexible way to manage the flow of control in
a system, allowing for responsive and adaptive behavior.
Control structures (loops, conditionals)
1. Ease of Development
2. Portability
3. Maintainability
4. Integration with Libraries
1. C Language
2. C++
3. Python
4. Embedded Rust
1. Sensor Data Acquisition and Processing
2. Communication Interfaces
3. Real-Time Control Systems
4. Graphical User Interfaces (GUI)
5. Data Logging and Storage
6. IoT Systems
To mitigate these issues, critical sections of code can be written in assembly or
optimized using compiler directives.
Conclusion
Procedural programming defines modules as procedures or functions that
are called with a set of parameters to perform a task. A procedural language
begins a process, which is then given data. It is also the most common
category and is subdivided into the following:
It is easier to learn.
They are easier to maintain.