UNIT 2 Session 3

The document discusses discrete systems, focusing on the differences between analog and digital signal processing in Electronic Control Units (ECUs). It details the flow of signal processing, the role of microcontrollers, and the importance of real-time systems in automotive applications, including task management and scheduling. Additionally, it covers embedded systems, memory types, and the architecture of microcontrollers, emphasizing the need for precise timing in real-time tasks.

2.2 Discrete Systems
Discrete vs. Analog Processing:
• Traditional mechanical, electrical, or hydraulic systems use analog signal processing.
• ECUs (Electronic Control Units), however, process signals digitally, using
microprocessors.
• Consequently, control functions (open-loop and closed-loop) must also be
implemented discretely.
Signal Processing Flow in ECUs:
• External inputs: Setpoints (W) and feedback (R) from sensors and generators.
• These are preprocessed in the input module, producing internal signals (W_int, R_int) suitable for digital processing.
• After computation, internal control signals (U_int) are sent to the output module, which generates the external actuator signals (U).
Signal Conditioning:
• Input/output modules often include gain adjustment and signal
conditioning circuits.
Software Development Focus:
• Development focuses on internal signals within the microprocessor.
• For simplicity, the text uses W, R, and U to denote these internal
signals unless stated otherwise.
Time-Discrete Systems and Signals
• Analog systems use signals that are continuous in time and value.
• Any point in time t corresponds to a unique signal value X(t).
• These are known as continuous-time, continuous-value signals.
• Example illustrated in Figure 2-6(a).
Sampled Signals – Discrete Time, Continuous Value
• A sampled signal is derived from a continuous signal X(t) measured at
specific time points:
t₁, t₂, t₃, …
• Result: Discrete-time, but continuous-value signal.
• Defined as the sequence Xₖ = X(tₖ), k = 1, 2, 3, …
• The time between samples, ΔTₖ = tₖ − tₖ₋₁, is referred to here as the sampling rate (strictly, the sampling interval).
• The sampling rate can be constant or variable.
Example: Sampling Rates in the Engine ECU
• The engine ECU handles multiple control functions using sensor data and
actuator control.
Variable Sampling Rate Functions:
• Ignition and injection must trigger at specific crankshaft positions.
• Sampling rate varies with engine speed.
Fixed Sampling Rate Functions:
• Driver command (e.g., pedal position sensor) can be sampled at constant
time intervals.
• Uses a fixed sampling rate, independent of engine speed.
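To make the variable-rate case concrete, the following C sketch (function name and numbers are illustrative, not taken from the text) computes how much time a crankshaft-synchronous task has between activations at a given engine speed.

/* Time available for a task tied to a fixed crankshaft segment.
   At higher engine speeds the same segment passes more quickly,
   so the effective sampling interval shrinks. */
double segment_interval_ms(double engine_speed_rpm, double segment_deg)
{
    double ms_per_rev = 60000.0 / engine_speed_rpm;  /* one crankshaft revolution in ms */
    return ms_per_rev * (segment_deg / 360.0);       /* fraction of a revolution        */
}
/* Example: a 180° segment takes 50 ms at 600 rpm but only 5 ms at 6000 rpm. */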
Value-Discrete Systems and Signals
• Analog-Digital Converters (A/D converters) sample continuous signals and convert
them into value-discrete signals.
• Limited word size causes amplitude quantization, assigning each continuous value
X(t) to a discrete level Xᵢ.
• This quantization introduces a quantizing error: the difference between the actual
value and the assigned discrete value.
• The amplitude range is limited between Xₘᵢₙ and Xₘₐₓ.
• Digital-Analog Converters (D/A converters) convert discrete values back to
continuous signals, often using pulse width modulation.
• In D/A conversion, each discrete value Xᵢ is held constant until the next sample, hence
D/A converters are also called holding elements.
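As a concrete illustration of amplitude quantization, here is a minimal C sketch; the 10-bit word size and the 0 V to 10 V range are assumptions made for the example, not values from the text.

#include <stdint.h>
#include <stdio.h>

#define ADC_BITS   10                       /* assumed converter word size          */
#define ADC_LEVELS (1 << ADC_BITS)          /* number of discrete levels X_i        */
#define X_MIN      0.0                      /* assumed amplitude range X_min..X_max */
#define X_MAX      10.0

/* Assign a continuous value X(t) to the nearest discrete level index. */
static uint16_t quantize(double x)
{
    if (x < X_MIN) x = X_MIN;               /* the amplitude range is limited       */
    if (x > X_MAX) x = X_MAX;
    double step = (X_MAX - X_MIN) / (ADC_LEVELS - 1);
    return (uint16_t)((x - X_MIN) / step + 0.5);
}

/* Reconstruct the discrete level X_i, as a D/A converter would hold it. */
static double level(uint16_t code)
{
    return X_MIN + code * (X_MAX - X_MIN) / (ADC_LEVELS - 1);
}

int main(void)
{
    double x = 3.14159;                     /* example continuous input value       */
    uint16_t code = quantize(x);
    printf("level %u = %.4f V, quantizing error = %.4f V\n",
           code, level(code), x - level(code));
    return 0;
}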
Time- and Value-Discrete Systems and Signals
• When signals are discretized in both time and value, the resulting signals are called
time- and value-discrete.
• Systems processing at least one such signal are termed time- and value-discrete
systems, or simply digital systems.
• In an ECU microcontroller, input variables are time- and value-discrete signals.
• Continuous signals W and R are sampled and quantized into Wₖ and Rₖ.
• The microcontroller calculates discrete output signals Uₖ, which are converted back
into continuous signals U for actuators.
• Control functions and parameters are implemented in software on the microcontroller.
• Refer to Figure 2-7 for a block diagram illustrating this signal mapping within control
loops.
State Machines in Discrete Systems
• Physical variables are usually continuous in time and value, modeled by
differential equations.
• Discrete systems use difference equations due to time and value
discretization.
• Transitions between discrete states, from X(tₖ) to X(tₖ₊₁), occur as events.
• The number of possible states and events is often limited in technical
discrete systems.
• This limitation enables modeling discrete systems efficiently using state
machines.
Example: Controlling the Low-Fuel Indicator Lamp
• Fuel level sensor outputs an analog voltage from 0 V (full tank) to 10 V
(empty tank).
• At 8.5 V, the fuel level is about 5 liters — threshold to turn the low-fuel
lamp on.
• Lamp turns off only when fuel rises above 6 liters (signal < 8.0 V) to
avoid flickering—this is called hysteresis.
• The control depends on events: crossing the 8.5 V and 8.0 V thresholds,
plus the lamp’s previous state (on or off).
• See Figure 2-8 for switching operation illustration
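One way to implement this switching behavior is a small state machine, as in the C sketch below; the 8.5 V and 8.0 V thresholds come from the example, while the type and function names are illustrative.

/* Low-fuel indicator lamp with hysteresis (0 V = full tank, 10 V = empty). */
typedef enum { LAMP_OFF, LAMP_ON } lamp_state_t;

static lamp_state_t lamp = LAMP_OFF;

void update_low_fuel_lamp(double sensor_voltage)
{
    switch (lamp) {
    case LAMP_OFF:
        if (sensor_voltage >= 8.5)      /* event: fuel level fell to about 5 liters */
            lamp = LAMP_ON;
        break;
    case LAMP_ON:
        if (sensor_voltage < 8.0)       /* event: fuel level rose above 6 liters    */
            lamp = LAMP_OFF;
        break;
    }
    /* The 8.0 V to 8.5 V hysteresis band prevents the lamp from flickering
       when the signal hovers around a single threshold. */
}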
Embedded Systems
Embedded Systems
• ECUs, sensors, actuators, and setpoint generators form an electronic system
that influences the vehicle's behavior.
• These systems are typically hidden from the user—no direct user interface.
• Common in powertrain, chassis, and body applications.
• The driver's influence on ECU behavior is usually indirect, limited, or even
nonexistent.
• Reference variables are gathered through indirect user interfaces, as shown
in Figure 2-10.
• Such systems with minimal direct interaction are classified as embedded
systems.
ECU Interfaces and Signal Handling
• Setpoint generators are treated like sensors—they capture driver input.
• Feedback components (e.g., displays, buzzers) are treated like actuators.
• ECUs must consider the response characteristics of:
• Setpoint generators and sensors (input behavior)
• Actuators (output behavior)
• Intelligent sensors/actuators include built-in electronics for signal
conditioning.
• Response includes both:
• Dynamic behavior (how signals change over time)
• Static behavior (value range, resolution)
• ECUs interface directly with the control system (plant), but indirectly
with users.
• Software must handle both:
• Periodic signals (e.g., sensor sampling)
• Aperiodic events (e.g., button presses)
• A solid understanding of microcontroller hardware and software is
essential.
Microcontroller Architecture
• CPU (Microprocessor):
• Contains Control Unit and ALU
• Executes instructions and processes logic/arithmetic tasks
• I/O Modules:
• Interface with external devices and buses (e.g., CAN)
• Handle interrupts and communication
• Program & Data Memory:
• Non-volatile memory storing control algorithms and parameters
• Often separated by function (code vs. constants)
• RAM (Data Memory):
• Read/write memory for variable data during execution
• Volatile or non-volatile based on use case
• Bus System:
• Interconnects all internal modules
• Clock Generator (Oscillator):
• Synchronizes all operations
• Watchdog Module:
• Monitors program execution to detect faults
• Integration Trend:
• Components increasingly built into a single chip
• External memory may still be added if needed
Memory Types in ECUs
• RAM (volatile): fast, temporary storage
• ROM, EEPROM, Flash (nonvolatile): permanent storage
• Stores:
• Programs
• Parameters
• Fault logs
• RAM: Read/write, volatile
• SRAM: fast, no refresh
• DRAM: slower, needs refresh
• NV-RAM = RAM + backup battery
• ROM: Read-only, fixed contents
• PROM: Programmable once
• Used for factory-loaded code or constants
Reprogrammable Memory
• EPROM: UV-erasable
• EEPROM: Electrical erase/write, line-by-line
• Flash: Erase/write in blocks, in-system updates
Microcontroller Programming
Microcontroller Software Basics
• Code stored in nonvolatile memory (e.g., Flash)
• Updated only during software revisions
• Software = program + parameter set
Program vs. Data Version
• Program version → Program memory
• Data version → Data memory
• More accurate: "microcontroller software"
• ECU may contain multiple microcontrollers
Microcontroller Components (Fig. 2-13)
• Microprocessor: executes instructions
• Program & data memory
• I/O modules
• All connected via internal buses
Key Functions of Microcontrollers
• Data processing
• Data storage
• Communication with peripherals
Microprocessor Architecture (Fig. 2-14)
• Control unit + arithmetic logic unit (ALU)
• Registers for instructions, operands, results
• Interfaces to memory and I/O
Operand Memory Models
• Memory/memory: both operands in RAM
• Accumulator: one operand in fixed register
• Memory/register: one operand in RAM, one in register
• Load/store: both operands in registers
Explicit addressing: the instruction specifies the address or register where the operand is located.
MOV A, 30H ; move the value at memory location 30H into accumulator A
Implicit addressing: the operand is implied by the instruction itself; no separate address or register is specified.
CPL A ; complement the accumulator

Instruction Set Architectures


• Explicit vs. implicit addressing
• Destructive vs. non-destructive operations
• Full instruction set = all supported commands
I/O Module Functions (Fig. 2-15)
• Interfaces to sensors/actuators
• Internal/external communication
• Watchdog, timer, fault detection
I/O Addressing and Operating Modes
• Isolated I/O: separate memory space
• Memory-mapped I/O: shared memory space
• Modes:
• Programmed I/O
• Polled I/O
• Interrupt-driven
• DMA (Direct Memory Access)
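As an illustration of the memory-mapped, polled variant, here is a hedged C sketch; the register addresses and the status bit are hypothetical and would in practice be taken from the microcontroller's datasheet.

#include <stdint.h>

/* Hypothetical memory-mapped peripheral registers (addresses are invented). */
#define PORT_DATA    (*(volatile uint8_t *)0x40000000u)
#define PORT_STATUS  (*(volatile uint8_t *)0x40000004u)
#define STATUS_READY 0x01u

/* Polled I/O: busy-wait until the peripheral reports ready, then write. */
void port_write_polled(uint8_t value)
{
    while ((PORT_STATUS & STATUS_READY) == 0u) {
        /* The processor is tied up in this loop, which is why
           interrupt-driven I/O or DMA is often preferred. */
    }
    PORT_DATA = value;
}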
Real-Time Systems
Introduction to Real-Time Systems
In automotive and embedded applications, the microprocessor—referred to
as the processor in the following sections—must perform control functions
within strict time constraints. This requirement to execute operations
within defined time limits gives rise to the concept of a real-time system.
This section explores:
• Key terminology related to real-time systems,
• The basic principles behind their operation, and
• The configuration and structure of real-time systems in general, with a
special focus on real-time operating systems (RTOS).
Introduction to Task Definition
• In real-time systems, a task represents a unit of work to be scheduled or executed.
• Tasks may be handled:
• By a single processor (sequentially)
• Or by multiple processors (parallel processing)
• Term "task" is used (instead of "process") in alignment with OSEK standard.
Why Define Tasks?
• Before managing resources or scheduling, it is important to:
• Understand what tasks need execution
• Consider how tasks interact over time
• This supports better task scheduling, time management, and resource allocation.
Task Examples in Engine Management
• Common real-time tasks in an Engine ECU:
• Ignition Control
• Fuel Injection
• Pedal Value Acquisition
• Each of these tasks must execute according to strict timing requirements.
Task Execution in Real-Time Systems
• Processor executes instructions sequentially.
• Tasks appear quasi-parallel when:
• The processor switches between them at set time intervals.
• Tasks are shown along a timeline (left-to-right progression).
Task Representation on Timeline
• Tasks are visualized as horizontal bars on a timeline (e.g., Fig. 2-16).
• Each bar shows the duration and order of task execution.
• Example sequence: Task A → Task B → Task C
Processor Arbitration Diagram
• A Processor Arbitration Diagram (e.g., Fig. 2-17) shows:
• How processor time is allotted among tasks
• The Running state of each task
• Example: Tasks A, B, and C take turns being executed.
Understanding Task States (OSEK)
• Only one task can be in the Running state at a time.
• When a new task runs:
• Previous task transitions to another state (e.g., Waiting or Ready).
• These state transitions follow real-time scheduling rules.
Arbitration vs. Scheduling
• Arbitration: Assigning processor time or bus access permissions.
• Scheduling (OSEK-specific): Determining when tasks get CPU time.
• Both terms describe time-based resource allocation.
Real-Time Task Requirements – Overview
• Real-time systems require precise time-specific task constraints.
• Tasks must be planned and controlled based on:
• Activation point: when the task is triggered.
• Deadline
• Relative: maximum response time allowed.
• Absolute: activation point + relative deadline.
• Response time: activation to task completion.
• Execution time: actual processor time spent on the task.
Task Timing Window
• A real-time requirement defines a time window for task execution.
• This ensures that tasks meet system timing needs.
• Used in safety-critical systems like:
• Engine ignition
• Fuel injection
Activation Rate vs Execution Rate
• Activation Rate: Time between task triggers.
• Execution Rate: Time between completed task runs.
• Do not confuse these two rates—they define different timing behaviors.
Real-Time Task Requirements – Two Categories
Real-time tasks are classified into two types based on strictness of time constraints:
Hard Real-Time Requirements
• Must always meet timing constraints.
• Requires formal validation (proof-based).
• Missing deadlines may cause system failure.
• Task is called a hard real-time task.
Soft Real-Time Requirements
• Missing deadlines is acceptable occasionally.
• No strict validation is required.
• Task is called a soft real-time task.
Different engine functions require different sampling rates:

• Ignition & injection: 1–2 ms sampling (hard real-time)
• Valve positioning: ~50 µs (hard real-time)
• Combustion pressure: ~5 µs (hard real-time)
• Engine cooling: much slower (soft real-time)
Defining Processes and Tasks
• A process is an individual unit of work with specific real-time
requirements.
• Multiple processes with the same real-time requirements can be:
• Managed as separate tasks, or
• Grouped together into one single task.
• When grouped, real-time requirements apply to the whole task, not
to each process.
• Processes inside a task are executed one after another in order.
Task States in Real-Time Systems (OSEK-OS)

Basic Task States:
• Suspended
Task is inactive, before activation or after execution.
• Ready
Task is activated but waiting to run (e.g., processor busy with another
task).
• Running
Task is currently executing on the processor.
State Transitions:
• Activate: the task moves from Suspended → Ready at its activation point.
• Start: the task moves from Ready → Running when execution begins.
• Preempt: a running task is interrupted and returns to Ready because a higher-priority task takes over the processor.
• Terminate: the task completes execution and moves to Suspended.
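These states and transitions can be summarized in a few lines of C; the sketch below is only an illustration of the basic model, not the OSEK API itself.

typedef enum { SUSPENDED, READY, RUNNING } task_state_t;

typedef struct {
    task_state_t state;
    unsigned     priority;
} tcb_t;                                  /* simplified task control block */

void activate_task(tcb_t *t)  { if (t->state == SUSPENDED) t->state = READY;     }
void start_task(tcb_t *t)     { if (t->state == READY)     t->state = RUNNING;   }
void preempt_task(tcb_t *t)   { if (t->state == RUNNING)   t->state = READY;     }
void terminate_task(tcb_t *t) { if (t->state == RUNNING)   t->state = SUSPENDED; }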
Extended Task State Model
In addition to the basic task states — Suspended, Ready, and Running — the extended model introduces a new
state called Waiting.
• Waiting State:
Sometimes, a task needs to temporarily pause its execution to wait for an external event or condition
before it can continue. For example, a task might wait for sensor data or a communication signal.
When this happens, the task moves itself from the Running state to the Waiting state.
• Processor Utilization:
While a task is in the Waiting state, it does not use the processor. This allows the system to assign the
processor to other tasks that are ready to run, improving efficiency.
• Transitions Related to Waiting:
• Wait: The transition where a running task moves to Waiting when it cannot proceed until an event occurs.
• Release: When the awaited event occurs, the task transitions from Waiting back to Ready, so it can be scheduled to run
again.
• This model provides greater flexibility for task management, especially in real-time systems where tasks
often depend on asynchronous events.
Task State Model (OSEK-TIME)
The OSEK-TIME model is designed for time-triggered systems where
tasks start execution exactly at their activation time, without waiting.
• Unlike other models, there is no Ready state, because the activation
and start of execution always coincide.
• If a higher priority task needs the processor, a running task can be
preempted and moved to the Preempted state until it can resume.
• This model supports strict time-controlled scheduling strategies
essential in real-time systems where timing precision is critical.
Strategies for Processor Scheduling
• Processor scheduling strategies decide which task gets processor time when
multiple tasks compete for it. These strategies ensure tasks meet their real-time
requirements efficiently.
Processor Scheduling—In Sequential Order
• Tasks in the Ready state are arranged in a FIFO queue.
• The first activated task runs first, others wait their turn.
• Simple but can cause delays if earlier tasks take a long time.
Processor Scheduling—By Priority
• Tasks are assigned priority levels.
• Scheduler always picks the highest priority task to run.
• Useful when some tasks are more critical than others.
Processor Scheduling—Combined Sequential and Priority Strategy
• OSEK combines priority and FIFO:
• Tasks with higher priority run first.
• Among tasks with same priority, FIFO is used.
• The next task to run is the oldest task with the highest priority.
• Provides fairness within same priority groups while respecting task criticality.
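A minimal C sketch of this combined strategy is shown below; the bookkeeping fields (ready flag, priority, activation sequence number) are assumptions made for the example.

#include <stddef.h>

typedef struct {
    int           ready;            /* 1 if the task is in the Ready state */
    unsigned      priority;         /* higher value means higher priority  */
    unsigned long activation_seq;   /* lower value means activated earlier */
} sched_task_t;

/* Return the index of the oldest Ready task with the highest priority,
   or n if no task is Ready. */
size_t pick_next(const sched_task_t tasks[], size_t n)
{
    size_t best = n;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best == n ||
            tasks[i].priority > tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].activation_seq < tasks[best].activation_seq))
            best = i;
    }
    return best;
}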
Processor Scheduling—Preemptive Strategy
• A running low-priority task can be interrupted by a higher-priority task.
• Interruption can occur at any point during execution (fully preemptive).
• The interrupted task pauses and resumes after the higher-priority task finishes.
• Enables timely response to critical tasks.
Processor Scheduling—Nonpreemptive Strategy
• A running task cannot be interrupted until it finishes or reaches
specific points.
• Higher-priority tasks must wait until the lower-priority task
completes.
• May cause delays for important tasks if low-priority tasks run long.
• Easier to implement but less responsive.
Processor Scheduling—Event-Driven and Time-Controlled Strategies
Event-Driven Scheduling (Dynamic):
• Scheduling decisions made at runtime (online) based on current system events.
• Allows flexible and dynamic task order.
• Scheduler runtime overhead may affect system performance.
• Makes predicting exact timing behavior difficult.
Time-Controlled Scheduling (Static):
• Scheduling decisions made offline, before program execution.
• Task activation times are fixed and predefined.
• Limited flexibility but predictable runtime behavior.
• Scheduler overhead is minimal.
• Uses a dispatcher table to trigger tasks at specific times.
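The dispatcher-table idea can be sketched in C as follows; the table contents, round length, and task names are invented for illustration.

#include <stddef.h>

typedef void (*task_fn)(void);

typedef struct {
    unsigned long offset_ms;   /* fixed activation time within the dispatcher round */
    task_fn       task;        /* task triggered at that point                      */
} dispatch_entry_t;

static void ignition_task(void)  { /* ... */ }
static void injection_task(void) { /* ... */ }
static void pedal_task(void)     { /* ... */ }

static const dispatch_entry_t dispatcher_table[] = {
    {  0, ignition_task  },
    {  1, injection_task },
    { 10, pedal_task     },
};

#define ROUND_LENGTH_MS 20u    /* the table repeats every 20 ms (assumed) */

/* Called on every millisecond tick; starts whichever tasks are due. */
void dispatch(unsigned long now_ms)
{
    unsigned long t = now_ms % ROUND_LENGTH_MS;
    for (size_t i = 0; i < sizeof dispatcher_table / sizeof dispatcher_table[0]; i++)
        if (dispatcher_table[i].offset_ms == t)
            dispatcher_table[i].task();
}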
Organization of Real-Time Operating Systems (per OSEK-OS)
• A Real-Time Operating System (RTOS) is designed to manage tasks
and resources in environments where timing and predictability are
critical. The organization of an RTOS typically includes three essential
components working together to ensure timely and orderly execution
of tasks.
Task Activation and Ready Task Management
• This component is responsible for activating tasks and maintaining the
set of Ready tasks.
• Task activation can occur via:
• Time-dependent activation: Triggered by the system's real-time clock at
scheduled points.
• Event-driven activation: Triggered by asynchronous events such as interrupts or
signals.
• To function correctly, this component needs information about:
• When each task should be activated (activation points).
• What events cause activation.
Scheduler
• The scheduler evaluates the current Ready tasks and decides their execution order based on the
chosen processor scheduling strategy (e.g., priority-based, FIFO).
• It continuously prioritizes tasks according to:
• Task priority levels.
• The state and availability of tasks.
• Its main goal is to optimize processor time allocation and ensure real-time constraints are met.
Dispatcher
• The dispatcher is responsible for managing system resources and starting the execution of tasks.
• Once the scheduler selects the highest-priority Ready task, the dispatcher:
• Checks if the necessary resources are available.
• Starts or resumes execution of the selected task.
• The dispatcher ensures smooth task switching and resource allocation.
Interaction Among Tasks in Real-Time Operating Systems
Introduction to Task Interaction
• Tasks in RTOS represent independent working units with individual real-
time requirements.
• Despite their independence, tasks often need to collaborate to accomplish
a main function (e.g., engine ECU functions).
• This necessitates inter-task interaction — the exchange of information
between tasks.
• Interaction mechanisms include:
• Event-based synchronization
• Cooperation via global variables
• Message-based communication
Synchronization via Events
• Event-based synchronization coordinates quasi-concurrent tasks.
• Example:
• Task B changes state from B1 to B2 on receiving Event X from Task A.
• Task A transitions to state A3 upon receiving Event Y from Task B.
• Ensures a logical sequence despite tasks running quasi-parallel.
• Illustrated by Message Sequence Charts (time progresses top to bottom).
• Key Point:
Task states only change after receiving corresponding events, ensuring
controlled sequencing.
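In OSEK-style code this exchange could look roughly like the sketch below. The calls follow the shape of the OSEK OS event API (SetEvent, WaitEvent, ClearEvent, TerminateTask), but the task names, event masks, and header name are assumptions; in a real project they would come from the generated OIL configuration.

#include "os.h"                 /* assumed name of the generated OSEK OS header   */

#define EVENT_X 0x01u           /* sent by Task A, awaited by Task B (assumed)    */
#define EVENT_Y 0x02u           /* sent by Task B, awaited by Task A (assumed)    */

TASK(TaskA)                     /* TaskA/TaskB assumed declared in the OIL file   */
{
    /* ... processing in states A1/A2 ... */
    SetEvent(TaskB, EVENT_X);   /* lets Task B move from state B1 to B2           */
    WaitEvent(EVENT_Y);         /* Task A enters the Waiting state                */
    ClearEvent(EVENT_Y);
    /* ... processing in state A3 ... */
    TerminateTask();
}

TASK(TaskB)
{
    WaitEvent(EVENT_X);         /* stay in Waiting until Event X arrives          */
    ClearEvent(EVENT_X);
    /* ... processing in state B2 ... */
    SetEvent(TaskA, EVENT_Y);
    TerminateTask();
}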
Potential Conflicts in Quasi-Parallel Tasks
• Parallel task execution can cause conflicts, especially when accessing
shared resources.
• Example: Two tasks may try to read/write the same variable
simultaneously.
• Such conflicts require synchronization mechanisms to prevent data
corruption.
Cooperation Using Global Variables
• Cooperation means tasks share data via global variables.
• Example: Task A writes a value x to Variable X, and Task B reads it.
• Risks:
• Data inconsistency if Task B reads Variable X while Task A is still writing.
• Inconsistent or partial data can cause unpredictable system behavior.
Ensuring Data Consistency in Global Variable Access
Two methods ensure data consistency:
1. During write access:
• Interrupts are disabled (locked) during the write operation to prevent task interruptions (see the C sketch below).
• Atomic operations (e.g., writes to 8- or 16-bit variables) do not require this.
2. During read access:
• Inconsistent reads can occur if a variable is updated during the read process.
• Synchronization events are used to ensure reads see consistent data.
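A hedged C sketch of the write-access method (interrupt locking) follows; the interrupt control functions are placeholders for the microcontroller-specific intrinsics.

#include <stdint.h>

static void disable_interrupts(void) { /* MCU-specific intrinsic goes here */ }
static void enable_interrupts(void)  { /* MCU-specific intrinsic goes here */ }

static volatile uint32_t shared_x;   /* global Variable X shared by Task A and Task B */

/* Task A: write the new value without being preempted mid-update.
   (A single 8- or 16-bit write that the CPU performs atomically
   would not need the lock.) */
void task_a_write(uint32_t value)
{
    disable_interrupts();            /* lock: no task switch during the update */
    shared_x = value;
    enable_interrupts();
}
/* For read access, the text instead recommends synchronization events so that
   a reader only consumes the variable after a complete update. */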
Distributed and Networked Systems
Evolution of Vehicle Electronic Systems
From Autonomous to Integrated Systems
• Initial systems: operated independently (Fig. 2-38).
• Rising functional demands → integrated, networked systems.
• Example: Traction Control System (engine ECU + ABS ECU).
• Goal: General optimization through ECU collaboration (Fig. 2-39).
Distributed & Networked Architecture
Key Features
• Defined as systems with decentralized hardware, control, and data.
• Composed of multiple processors, each with local memory.
• Connected by a communication network (Fig. 2-40).
• Tasks are executed concurrently and spatially distributed.
Advantages over Centralized Systems
1. Spatial Distribution
• Reduced wiring: systems in doors, roof, trunk, interior.
2. Expandability & Scalability
• Easy integration of optional features (e.g., sunroof, seat controls).
• Modular design supports various vehicle variants.
Enhanced Functionality & Safety
Higher Functionality
• Example: Adaptive Cruise Control
• Uses radar, controls engine + brakes.
• Requires multiple ECUs working together.
Safety & Reliability
• Fault-tolerant design → better system resilience.
• Redundancy improves fail-safe capabilities.
Logical vs. Technical Architecture in ECU Communication
• Problem with Peer-to-Peer Connections (Fig. 2-39):
• Direct wiring between ECUs → high cost, weight, complexity, low reliability.
• Impractical for vehicles.
• Solution: Use a shared communication medium (bus system).
• Reduces wiring and complexity.
• Practical and widely adopted (see Fig. 2-40).
Understanding Logical vs. Technical Links
• Logical Communication:
• Represents intended communication paths between ECUs.
• Shown as arrows in diagrams.
• Network nodes are shown in grey.
• Technical Communication:
• Represents actual physical links (e.g., a shared bus).
• Shown as solid lines in diagrams.
• Network nodes are white.
• Refer to Fig. 2-41: Logical vs. Technical System Architecture.
Key Design Challenge
• The main task in system design:
• Map logical links to technical links on a shared bus.
• Ensure efficient, conflict-free communication.
• Issue: Multiple ECUs (nodes) may try to send data simultaneously.
→ Leads to bus access conflicts.
• Need: A bus arbitration strategy to manage access
(Detailed in Section 2.5.6).
Logical Communication Links – Introduction
• Communication between tasks on different processors can be modeled using:
• Message Sequence Charts (e.g., Fig. 2-42)
• Communication Models:
• Client/Server Model
• Producer/Consumer Model (covered later)
• These models help define logical links in distributed systems (see Fig. 2-37 for earlier reference).
Client/Server Communication Model
• Client/Server Sequence (Fig. 2-42):
• Client (Task A) sends a request.
• Communication System delivers an indication to Server (Task B).
• Server sends response → System delivers confirmation to Client.
• If no confirmation is needed → last two steps may be skipped.
• This model defines peer-to-peer communication, even with multiple clients or servers.
Real-World Example – Diagnostic Tester
• Client: Offboard Diagnostic Tester
• Server: Onboard ECUs
• Process:
• Tester sends a diagnostic request.
• ECU executes the service and sends back confirmation.
• Fig. 2-43 shows:
• Logical architecture: Shows intended communication flow.
• Technical architecture: Shows actual network connections including the
temporary node (diagnostic tester).
Producer/Consumer Model – Overview
• Definition:
• A model where one task (Producer) sends data without being asked.
• Multiple tasks (Consumers) receive the data.
• Use Case:
• Suitable for broadcast communication across several network nodes.
• Visual Reference:
• Fig. 2-44: Message sequence chart
→ Producer sends updates, Consumers receive.
Application in Automotive Systems
• Example: Onboard communication among multiple ECUs
• Producer: One ECU sends periodic signals (e.g., sensor data).
• Consumers: Multiple ECUs receive and react (e.g., control, monitoring).
• Benefits:
• Enables periodic, automatic updates.
• Supports synchronized control functions across the system.
• Fig. 2-45:
• Logical and Technical System Architectures of onboard ECU communication.
2.5.3 Defining the Technical Network Topology
Network topology describes how communication links between nodes are organized. The basic types are:
2.5.3.1 Star Topology
• Structure: All nodes connect to a central node (Z).
• Communication: Always routed through node Z.
• Scalability: Central node Z needs (n - 1) interfaces for n nodes.
• Reliability: Central point of failure — if Z fails, the whole network fails.
• Use Case: Suitable for centralized control systems.
2.5.3.2 Ring Topology
• Structure: Closed loop (daisy-chain) of peer-to-peer links.
• Node Functionality: All nodes must regenerate and forward messages.
• Advantage: Suitable for large physical network extents.
• Weakness: One node failure can break the entire ring unless fault tolerance is implemented (e.g., bridging failed
nodes).
2.5.3.3 Linear Topology (Bus Topology)
• Structure: All nodes connect to a single shared communication medium.
• Communication: Messages sent by one node are received by all.
• Advantages:
• Easy to cable and expand.
• Node failures don’t necessarily disrupt the whole network.
• Use Case: Common in automotive systems.
• Example: Controller Area Network (CAN), widely used since the 1990s.
2.5.4 Defining Messages
Serial Communication
• Signals between processors are transferred serially.
• Signals are embedded in message frames.
• Messages may:
• Contain multiple signals.
• Divide one signal across multiple messages.
• Be blank (used for synchronization).
Message Components
• Payload Data: The actual transmitted signals/information.
• Header Information: Includes:
• Identifier (for addressing).
• Control/Status fields.
• Checksum (for error detection).
Addressing
Two types of addressing:
Node Addressing
• Each node has a unique address.
• A message includes the recipient's address.
• All nodes compare incoming message identifiers to their own address.
Message Addressing
• Each message has a unique identifier.
• Multiple nodes can receive and interpret the same message.
• Frame filtering used to determine message relevance.
• Advantage: One message can serve multiple nodes — efficient broadcasting.
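As an illustration of these message components and of frame filtering by message identifier, here is a hedged C sketch; the field names and sizes are invented for the example and do not correspond to any particular bus protocol.

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t identifier;     /* message identifier used for addressing */
    uint8_t  control;        /* control/status field                   */
    uint8_t  length;         /* number of payload bytes in use         */
    uint8_t  payload[8];     /* the transmitted signals                */
    uint16_t checksum;       /* error detection                        */
} message_t;

/* Message addressing: each node keeps a list of identifiers it consumes
   and filters incoming frames against it. */
int frame_is_relevant(const message_t *msg,
                      const uint16_t *accepted_ids, size_t n_ids)
{
    for (size_t i = 0; i < n_ids; i++)
        if (msg->identifier == accepted_ids[i])
            return 1;        /* accept: this node uses the contained signals */
    return 0;                /* ignore: filtered out                         */
}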
Basic Definitions of System Theory
• System theory addresses complexity using the "divide and conquer" principle.
• Based on three key assumptions:
• Dividing a system does not distort the problem.
• Components reflect the nature of the system.
• Rules for assembling components are simple, stable, and known.
• System properties arise from interactions between components.
• As complexity increases, analyzing components and their interdependencies becomes
challenging and costly.
• System components can include technical parts, people, or environments.
• Focus of this book: Technical systems—groups of interacting components separated
from their surroundings.
Key System Terminology
• System Status: Properties describing the system at a specific point in
time.
• System Periphery: External components that influence the system but
aren't part of it.
• System Boundary: Separates the system from its periphery.
• System Interface: Any signal crossing the boundary, forming inputs or
outputs.
• System Input/Output: Interfaces classified by the direction of data flow
(inbound or outbound).
Structure and Hierarchies
• Subsystem: A system can be a component of a larger system and
contain subsystems.
• System Level: Different abstraction or observation layers within
system theory.
• Fractal Proliferation: Similar features across system levels (e.g.,
system and subsystems share traits).
• Interior vs. Exterior View:
• Interior: View from within the system.
• Exterior: Abstract view from outside, focusing on boundaries and interfaces.
System Modeling Concepts
• System Views: Analytical abstractions developed by observers.
• Different observers may create different views of the same system
(e.g., control, hardware, safety).
• Modeling Tools:
• Hierarchy-building
• Modularization
• 7±2 Rule:
• Systems with 5–9 components are manageable.
• Fewer than 5 = too simple, more than 9 = too complex.
System Composition Terms
• Aggregation: Describes the inclusion relationship between a system and its
parts.
• Partitioning (Decomposition): Dividing a system into components.
• Integration (Composition): Assembling components into a complete system.
Example: System Levels in Automotive Electronics
• Electronic systems in vehicles are observed at multiple system levels (see
Fig. 3-4).
• Levels range from individual sensors to full vehicle-wide electronic
architecture.
Process Models and Standards
• System development uses structured models like:
• CMMI (Capability Maturity Model Integration)
• SPICE (Software Process Improvement and Capability Determination)
• V-Model
• These models support various applications and must be evaluated and adapted to
specific projects.
Adapting Process Models
• Model selection depends on project needs (e.g., function calibration is critical for
ECUs, but less so for body electronics).
• Process model application varies by discipline and domain (e.g., networked systems
in body electronics).
Key Areas in V-Model & CMMI
• V-Model focuses on:
• System design
• Project management
• Configuration management
• Quality assurance
• CMMI Level 2 highlights:
• Requirements management
• Configuration management
• Quality assurance
• Project planning & tracking
• Subcontractor management
Real-World Relevance
• Real-life examples highlight process benefits in automotive development.
• Focus on steps with major vehicle-specific impacts or unique features.
Continual Development & Change Management
• Long product life cycles demand strong change management practices.
• Changes in one component (e.g., Component W) can cascade across:
• Components (Y, X, Z)
• Subsystem boundaries
• System levels
• Effective tracking requires integration of support processes with core
development.
