I/O Management
Chapter 5

Operating System Design Issues
• Efficiency
  – Most I/O devices are slow compared to main memory (and the CPU)
• Use of multiprogramming allows some processes to be waiting on I/O while another process executes
• Often I/O still cannot keep up with processor speed
• Swapping may be used to bring in additional Ready processes
  – More I/O operations
• Optimise I/O efficiency – especially Disk & Network I/O
Operating System Design Issues
• The quest for generality/uniformity:
  – Ideally, handle all I/O devices in the same way
    • Both in the OS and in user applications
  – Problem:
    • Diversity of I/O devices
    • Especially, different access methods (random access versus stream-based) as well as vastly different data rates
• Generality often compromises efficiency!
  – Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close

I/O Software Layers
[Figure: Layers of the I/O software system]
Interrupt Handlers
• Interrupt handlers are best “hidden”
• Can execute at almost any time
  – Raise (complex) concurrency issues in the kernel
  – Have similar problems within applications if interrupts are propagated to user-level code (via signals, upcalls)
• Generally, systems are structured such that drivers starting an I/O operation block until interrupts notify them of completion
  – Example: dev_read() waits on a semaphore that the interrupt handler signals
• Interrupt procedure does its task
  – then unblocks the driver waiting on completion

Interrupt Handler Steps
• Steps must be performed in software upon occurrence of an interrupt
  – Save registers not already saved by the hardware interrupt mechanism
  – (Optionally) set up context (address space) for interrupt service procedure
    • Typically, the handler runs in the context of the currently running process
      – No expensive context switch
  – Set up stack for interrupt service procedure
    • Handler usually runs on the kernel stack of the current process
    • Implies the handler cannot block, as the unlucky current process will also be blocked ⇒ might cause deadlock
  – Ack/mask interrupt controller, re-enable other interrupts
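The dev_read()/interrupt-handler handshake can be sketched with a semaphore. This is a hedched illustrative sketch in Python's threading module, not kernel code: the interrupt is simulated with a timer thread, and dev_read, irq_handler, and the register value 0x2A are all invented names for illustration.

```python
import threading

# Invented device state: a semaphore the driver blocks on, and a
# buffer the (simulated) interrupt handler fills in.
io_done = threading.Semaphore(0)
device_buffer = []

def irq_handler():
    # Simulated interrupt handler: the device has produced data.
    device_buffer.append(0x2A)   # pretend "register" read result
    io_done.release()            # unblocks the driver waiting on completion

def dev_read():
    # Driver starts the I/O, then blocks until the handler signals.
    threading.Timer(0.01, irq_handler).start()  # stand-in for real hardware
    io_done.acquire()            # sleep until the "interrupt" arrives
    return device_buffer.pop()

print(dev_read())   # prints 42 once the simulated interrupt fires
```

Note that in a real kernel the handler runs on the current process's kernel stack and must not block; here only the driver side sleeps, on the semaphore.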
Interrupt Handler Steps
• Run interrupt service procedure
  – Acknowledges interrupt at device level
  – Figures out what caused the interrupt
    • Received a network packet, disk read finished, UART transmit queue empty
  – If needed, it signals the blocked device driver
• In some cases, will have woken up a higher-priority blocked thread
  – Choose newly woken thread to schedule next
• Set up MMU context for the process to run next
  – Load new/original process's registers
• Re-enable interrupts; start running the new process

Device Drivers
• Logical position of device drivers is shown here [figure]
• Drivers (originally) compiled into the kernel
  – Including OS/161
  – Device installers were technicians
  – Number and types of devices rarely changed
• Nowadays they are dynamically loaded when needed
  – Linux modules
  – Typical users (device installers) can't build kernels
  – Number and types vary greatly
    • Even while the OS is running (e.g. hot-plug USB devices)
Device Drivers
• Drivers classified into similar categories
  – Block devices and character (stream of data) devices
• OS defines a standard (internal) interface to the different classes of devices
• Device driver's job
  – Translate requests through the device-independent standard interface (open, close, read, write) into the appropriate sequence of commands (register manipulations) for the particular hardware
  – Initialise the hardware at boot time, and shut it down cleanly at shutdown

Device Driver
• After issuing the command to the device, the device either
  – Completes immediately, and the driver simply returns to the caller
  – Or, the device must process the request, and the driver usually blocks waiting for an I/O complete interrupt
• Drivers are reentrant, as they can be called by another process while a process is already blocked in the driver
  – Reentrant: code that can be executed by more than one thread (or CPU) at the same time
  – Manages concurrency using synch primitives
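The standard internal interface can be sketched as a table of operations that every driver of a class implements, with the kernel calling drivers only through that table. A minimal Python sketch, with the ConsoleDevice class and its fake transmit "register" invented for illustration (real drivers would be C code manipulating hardware registers):

```python
class Device:
    """Standard internal interface every driver must implement."""
    def open(self): ...
    def close(self): ...
    def read(self, n): ...
    def write(self, data): ...

class ConsoleDevice(Device):
    """Hypothetical character device: translates the standard calls
    into 'register manipulations' on a fake transmit register."""
    def __init__(self):
        self.tx_register = []        # stand-in for a UART transmit register

    def open(self):
        self.tx_register.clear()     # 'initialise the hardware'

    def write(self, data):
        for byte in data:            # one register write per byte
            self.tx_register.append(byte)
        return len(data)

# Kernel code uses any driver the same way, via the standard interface.
dev = ConsoleDevice()
dev.open()
print(dev.write(b"hi"))   # prints 2 (bytes written)
```

The point of the uniform interface is that the caller above never changes when ConsoleDevice is swapped for a different driver of the same class.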
Device-Independent I/O Software
• There is commonality between drivers of similar classes
• Divide I/O software into device-dependent and device-independent I/O software
• Device-independent software includes
  – Buffer or buffer-cache management
  – Managing access to dedicated devices
  – Error reporting

Device-Independent I/O Software
[Figure: (a) Without a standard driver interface; (b) With a standard driver interface]
Driver ⇔ Kernel Interface
• Major issue is uniform interfaces to devices and kernel
  – Uniform device interface for kernel code
    • Allows different devices to be used the same way
      – No need to rewrite the filesystem to switch between SCSI, IDE or RAM disk
    • Allows internal changes to a device driver without fear of breaking kernel code
  – Uniform kernel interface for device code
    • Drivers use a defined interface to kernel services (e.g. kmalloc, install IRQ handler, etc.)
    • Allows the kernel to evolve without breaking existing drivers
  – Together, both uniform interfaces avoid a lot of programming implementing new interfaces

Device-Independent I/O Software
[Figure: (a) Unbuffered input; (b) Buffering in user space; (c) Single buffering in the kernel followed by copying to user space; (d) Double buffering in the kernel]
No Buffering
• Process must read/write a device a byte/word at a time
  – Each individual system call adds significant overhead
  – Process must wait until each I/O is complete
    • Blocking/interrupt/waking adds to the overhead
    • Many short runs of a process are inefficient (poor CPU cache temporal locality)

User-level Buffering
• Process specifies a memory buffer that incoming data is placed in until it fills
  – Filling can be done by the interrupt service routine
  – Only a single system call, and block/wakeup, per data buffer
    • Much more efficient
User-level Buffering
• Issues
  – What happens if the buffer is paged out to disk?
    • Could lose data while the buffer is paged in
    • Could lock the buffer in memory (needed for DMA); however, many processes doing I/O reduce the RAM available for paging. Can cause deadlock, as RAM is a limited resource
  – Consider the write case
    • When is the buffer available for re-use?
      – Either the process must block until the potentially slow device drains the buffer
      – or deal with asynchronous signals indicating the buffer has drained

Single Buffer
• Operating system assigns a buffer in main memory for an I/O request
• Stream-oriented
  – Used a line at a time
  – User input from a terminal is one line at a time, with carriage return signaling the end of the line
  – Output to the terminal is one line at a time
Single Buffer
• Block-oriented
  – Input transfers made to buffer
  – Block moved to user space when needed
  – Another block is moved into the buffer
    • Read ahead

Single Buffer
• User process can process one block of data while the next block is read in
• Swapping can occur, since input is taking place in system memory, not user memory
• Operating system keeps track of the assignment of system buffers to user processes
Single Buffer Speed Up
• Assume
  – T is the transfer time from the device
  – C is the computation time to process an incoming packet
  – M is the time to copy the kernel buffer to the user buffer
• Computation and transfer can be done in parallel
• Speed up with buffering:

    (T + C) / (max(T, C) + M)

Single Buffer
• What happens if the kernel buffer is full, the user buffer is swapped out, and more data is received???
  – We start to lose characters or drop network packets
Double Buffer
• Use two system buffers instead of one
• A process can transfer data to or from one buffer while the operating system empties or fills the other buffer

Double Buffer Speed Up
• Computation and memory copy can be done in parallel with transfer
• Speed up with double buffering:

    (T + C) / max(T, C + M)

• Usually M is much less than T, giving a favourable result
Double Buffer
• May be insufficient for really bursty traffic
  – Lots of application writes between long periods of computation
  – Long periods of application computation while receiving data
  – Might want to read ahead more than a single block for disk

Circular Buffer
• More than two buffers are used
• Each individual buffer is one unit in a circular buffer
• Used when the I/O operation must keep up with the process
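A circular buffer can be sketched as a fixed array with wrapping head/tail indices, with the interrupt handler acting as producer and the reading process as consumer. This single-threaded sketch (class and method names invented) just shows the index arithmetic, without the synchronisation a real kernel would need:

```python
class CircularBuffer:
    """Fixed number of slots; producer (e.g. interrupt handler) puts,
    consumer (e.g. user process) gets, and the indices wrap around."""
    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = 0    # next slot to fill
        self.tail = 0    # next slot to drain
        self.count = 0   # slots currently in use

    def put(self, block):
        if self.count == len(self.buf):
            raise OverflowError("producer outran consumer: data dropped")
        self.buf[self.head] = block
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise IndexError("buffer empty: consumer must block")
        block = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return block

cb = CircularBuffer(3)
cb.put("block0"); cb.put("block1")
print(cb.get())   # prints block0 (FIFO order)
```

The two exceptions mark exactly the points where a real implementation would block the consumer or, if the producer is an interrupt handler that cannot block, drop data.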
Important Note
• Notice that buffering, double buffering, and circular buffering are all Bounded-Buffer Producer-Consumer Problems

Is Buffering Always Good?
• Single: (T + C) / (max(T, C) + M)
• Double: (T + C) / max(T, C + M)
• Can M be similar to, or greater than, C or T?
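Plugging sample numbers into the two speed-up formulas shows when buffering stops helping; the time values below are invented purely for illustration:

```python
def single(T, C, M):
    # Single buffer: transfer overlaps computation, then pay the copy.
    return (T + C) / (max(T, C) + M)

def double(T, C, M):
    # Double buffer: the copy overlaps the next transfer as well.
    return (T + C) / max(T, C + M)

# Copy much cheaper than transfer: buffering pays off.
print(single(T=10, C=8, M=1))    # 18/11, roughly 1.6x speed up
print(double(T=10, C=8, M=1))    # prints 1.8

# Copy as expensive as the transfer itself: single buffering now
# slows things down (speed up below 1); double buffering breaks even.
print(single(T=10, C=8, M=10))   # prints 0.9
print(double(T=10, C=8, M=10))   # prints 1.0
```

So when M approaches T or C, the copy dominates and buffering can be a net loss; this is exactly the motivation for the zero-copy networking discussed next.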
Buffering in Fast Networks
• Networking may involve many copies
• Copying reduces performance
  – Especially if copy costs are similar to, or greater than, computation or transfer costs
• Super-fast networks put significant effort into achieving zero-copy
• Buffering also increases latency

I/O Software Summary
[Figure: Layers of the I/O system and the main functions of each layer]