BITS, PILANI – K. K. BIRLA GOA CAMPUS
                 Advanced Operating Systems
                          by
                 Dr. Shubhangi
                 Dept. of CS and IS
                   Modules
 I. Uniprocessor OS
 II. Multiprocessor OS
 III. Distributed OS
            Text Book:
            Mukesh Singhal and Niranjan G. Shivaratri, “Advanced Concepts in
            Operating Systems”, Tata McGraw-Hill Publishing Company Ltd., 2001.
                     Reference Books
      – Andrew Tanenbaum and Maarten van Steen, Distributed Systems:
        Principles and Paradigms.
      – P. K. Sinha, Distributed Operating Systems, PHI, 1998.
      – A. Goscinski, Distributed Operating Systems – The Logical Design.
            Course webpage
 For study material and notices: Quanta and Google Classroom
 Chamber No.: D263
                     Evaluation components
 S.No.  Evaluation Component   Duration (Mins)  Weightage (%)  Date & Time                     Nature of Component
 1.     Mid-sem test           90               30%            12/10/2023, 9:00 – 10:30 a.m.   Open/Closed Book
 2.     Assignment(s)          --               30%            --                              Open Book
 3.     Comprehensive          120              40%            13/12/2022 (F.N.)               Open/Closed Book
Chapter 1: Evolution of OS
       Main objectives/goals of an OS
• Convenience
• Efficiency
• Ability to evolve
                 Role of an OS
• OS as a Resource manager
• Resources:
      – Processor cycles
      – Main memory
      – Secondary memory
      – I/O devices
      – File system
            Services provided by an OS
•   Process management activities
•   Memory management activities
•   File management activities
•   Protection and security
             Process management activities
      •     Program development and execution
      •     Multitasking
      •     Inter-process communication
      •     Deadlock handling
            Memory Management Activities
 • Keeping track of which parts of memory are currently
    being used and by whom
 • Deciding which processes and data to move into and
    out of memory
 • Allocating and deallocating memory space as needed
            File Management Activities
• Creating and deleting files and directories
• Supporting primitives to manipulate files and directories
• Mapping files onto secondary storage
• Backing up files onto stable (non-volatile) storage media
               Protection and Security
 • Protection – any mechanism for controlling access of processes or
    users to resources defined by the OS.
 • Security – defense of the system against internal and external
    attacks.
 • User identities (user IDs, security IDs) include a name and an associated
   number, one per user.
 • Group identifiers (group IDs) allow a set of users to be defined and controls
   to be managed; a group ID can then also be associated with each process and
   file.
            Desirable hardware features
  •    Memory protection
  •    Timer
  •    Privileged instructions
  •    Interrupts
      Evolution of Operating Systems
• Reasons
      – Hardware upgrades
      – New types of hardware
      – New services
      – Fixes
     Evolution of Operating Systems
 Stages include:
   1. Serial Processing
   2. Simple Batch Systems
   3. Multiprogrammed Batch Systems
   4. Time-Sharing Systems
   5. Multiprocessor (MP) systems
   6. Distributed (DS) systems
                   Serial Processing
Earliest Computers:
• No operating system
   – programmers interacted directly with the computer hardware
• Computers ran from a console with display lights, toggle switches, some form
  of input device, and a printer
• Users had access to the computer in “series”
Problems:
• Scheduling:
   – most installations used a hardcopy sign-up sheet to reserve computer time
   – time allocations could run short or long, resulting in wasted computer time
• Setup time:
   – a considerable amount of time was spent just on setting up the program to run
            Simple Batch Systems
• Early computers were very expensive
  – important to maximize processor utilization
• Monitor
  – user no longer has direct access to processor
  – jobs are submitted to a computer operator, who batches them
    together and places them on an input device
  – program branches back to the monitor when finished
            Monitor Point of View
• Monitor controls the sequence of
  events
• Resident Monitor is software
  always in memory
• Monitor reads in job and gives
  control
• Job returns control to monitor
         Processor Point of View
• Processor executes instructions from the memory
  containing the monitor
• Executes the instructions in the user program until it
  encounters an ending or error condition
• “control is passed to a job” means processor is fetching
  and executing instructions in a user program
• “control is returned to the monitor” means that the
  processor is fetching and executing instructions from the
  monitor program
   Simple Batch System Overhead
• Processor time alternates between execution of user
  programs and execution of the monitor
 • Sacrifices:
     – some main memory is now given over to the
       monitor
     – some processor time is consumed by the
       monitor
 • Despite the overhead, the simple batch system
   improves utilization of the computer
      Multiprogrammed Batch Systems
    • Processor is often idle
             • even with automatic job sequencing
             • I/O devices are slow compared to the processor
    • Multiprogramming keeps more than one job in main memory so the processor
      can switch to another job while one job waits for I/O
         Time-Sharing Systems
• Can be used to handle multiple interactive jobs
• Processor time is shared among multiple users
• Multiple users simultaneously access the system
  through terminals, with the OS interleaving the
  execution of each user program in a short burst
  or quantum of computation
        Batch Multiprogramming vs. Time Sharing
   • Batch multiprogramming: maximize processor utilization
   • Time sharing: minimize response time
            New problems for the OS
       Timesharing and multiprogramming raised new problems
       for the OS.
        – Protection of memory and file system
        – Resource contention
 Developments leading to modern OS
    • Hardware advances:
       – multiprocessor systems,
       – greatly increased processor speed,
       – high-speed network attachments, and
       – increasing size and variety of memory storage devices.
    • Application arena:
       – multimedia applications,
       – Internet and Web access, and
       – client/server computing.
    • Security:
       – sophisticated attacks, such as viruses and hacking techniques, have
         had a profound impact on OS design.
 Different Architectural Approaches
     –      Microkernel architecture
     –      Multithreading
     –      Symmetric multiprocessing
     –      Distributed operating systems
     –      Clustered system
     Microkernel Architecture
• Assigns only a few essential functions to the kernel:
   – address spaces
   – interprocess communication (IPC)
   – basic scheduling
• The approach:
   – simplifies implementation
   – provides flexibility
   – is well suited to a distributed environment
                    Multithreading
• Technique in which a process, executing an application, is
  divided into threads that can run concurrently
Thread
 • dispatchable unit of work
 • includes a processor context and its own data area to enable
   subroutine branching
 • executes sequentially and is interruptible
Process
 • a collection of one or more threads and associated system resources
 • programmer has greater control over the modularity of the
   application and the timing of application-related events
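A minimal sketch of this idea in C using POSIX threads (illustrative only, not taken from the course material): one process creates two threads that run concurrently in the same address space and then waits for both to finish.

   /* Compile with: gcc threads_demo.c -pthread */
   #include <pthread.h>
   #include <stdio.h>

   /* Each thread is a dispatchable unit of work inside the process. */
   static void *worker(void *arg)
   {
       int id = *(int *)arg;
       printf("thread %d running in the shared address space\n", id);
       return NULL;
   }

   int main(void)
   {
       pthread_t t1, t2;
       int id1 = 1, id2 = 2;

       /* The process is divided into two concurrently running threads. */
       pthread_create(&t1, NULL, worker, &id1);
       pthread_create(&t2, NULL, worker, &id2);

       /* The process waits for both threads to complete. */
       pthread_join(t1, NULL);
       pthread_join(t2, NULL);
       return 0;
   }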
Symmetric Multiprocessing (SMP)
• Term that refers to a computer hardware
  architecture and also to the OS behavior that exploits
  that architecture
• Several processes can run in parallel
• Multiple processors are transparent to the user
      • these processors share the same main memory and
        I/O facilities
      • all processors can perform the same functions
• The OS takes care of scheduling of threads or
  processes on individual processors and of
  synchronization among processors
    Symmetric Multiprocessing Architecture
            A Dual-Core Design
        SMP Advantages
 • Performance: more than one process can be running simultaneously, each on a
   different processor
 • Availability: failure of a single processor does not halt the system
 • Incremental Growth: performance of a system can be enhanced by adding an
   additional processor
 • Scaling: vendors can offer a range of products based on the number of
   processors configured in the system
• Distributed Systems
     – Distribute the computation among several physical processors
     – Loosely coupled system
             • Each processor has its own local memory; processors communicate with one
               another through various communication lines, such as high-speed buses or
               telephone lines
      – Client-server and peer-to-peer systems
      – Enables parallelism, but speed-up is not the primary goal
      – Advantages
             • Resource sharing, computation speed-up (load sharing), reliability,
               communication
• Clustered Systems
      – Usually used to provide high availability
      – Asymmetric cluster: one machine is kept in hot-standby mode
      – Symmetric cluster: two or more hosts run applications and monitor each
        other. This mode is more efficient
• Real-time Systems
      – Timeliness is as important as the results produced
      – Hard real-time systems
      – Soft real-time systems
      – Examples: QNX, RTLinux, VxWorks
• Embedded Operating Systems
      – OS embedded in the system itself
      – Fast, application specific
      – Examples: processors in washing machines, mobile phones
• Handheld Systems
      – Power consumption and weight must be low
      – Memory ranges from 512 KB to 8 MB
      – Speed of the processor cannot be very high because of the power
        consumption
Principles of Multiprocessing
Amdahl’s law
• It is a formula which gives the theoretical speedup in
  latency of the execution of a task at a fixed workload that
  can be expected of a system whose resources are
  improved.
• In other words, it is a formula used to find the maximum
  improvement possible by just improving a particular part
  of a system.
• It is often used in parallel computing to predict the
  theoretical speedup when using multiple processors.
Speedup
• Speedup is defined as the ratio of performance for the entire task using the
  enhancement to performance for the entire task without using the enhancement,
  or
• Speedup can be defined as the ratio of execution time for the entire task
  without using the enhancement to execution time for the entire task using the
  enhancement.
• If Pe is the performance for the entire task using the enhancement when
  possible, Pw is the performance for the entire task without using the
  enhancement, Ew is the execution time for the entire task without using the
  enhancement, and Ee is the execution time for the entire task using the
  enhancement when possible, then
      Speedup = Pe/Pw
              or
      Speedup = Ew/Ee
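A brief worked illustration (numbers chosen only for this example): if a task takes Ew = 100 s without the enhancement and Ee = 80 s with it, then Speedup = Ew/Ee = 100/80 = 1.25.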
Amdahl's law states that in parallelization, if P is the proportion of a system
  or program that can be made parallel, and 1-P is the proportion that remains
  serial, then the maximum speedup that can be achieved using N processors is
  Smax = 1/((1-P) + (P/N)).
If the part that can be improved is 25% of the overall system and its
  performance can be doubled, then:
  Smax = 1/((1-0.25) + (0.25/2)) = 1.14
Now suppose that for a different system, the part that can be improved is 75%
  of the overall system and its performance can be doubled, then:
  Smax = 1/((1-0.75) + (0.75/2)) = 1.6
Speedup is limited by the total time needed for the sequential
  (serial) part of the program.
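A small C sketch (an illustration, not part of the slides) that evaluates the formula above and reproduces the two worked cases:

   #include <stdio.h>

   /* Amdahl's law: P is the parallelizable (improvable) fraction,
      N is the speedup of that part (e.g. the number of processors). */
   static double amdahl_speedup(double P, double N)
   {
       return 1.0 / ((1.0 - P) + (P / N));
   }

   int main(void)
   {
       printf("P = 0.25, N = 2: %.2f\n", amdahl_speedup(0.25, 2)); /* ~1.14 */
       printf("P = 0.75, N = 2: %.2f\n", amdahl_speedup(0.75, 2)); /* ~1.60 */
       return 0;
   }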
Moore’s law
Moore's Law is the observation made in 1965 by Gordon Moore, co-founder of
  Intel, that the number of transistors per square inch on integrated circuits
  had doubled every year since the integrated circuit was invented. Moore
  predicted that this trend would continue for the foreseeable future.
Moore's Law is a computing term which originated around 1970; the simplified
  version of this law states that processor speeds, or overall processing power
  for computers, will double every two years.
For portable devices, power dissipation is important because the power provided
  by the battery is rather limited.
Motivation behind Multiprocessor Systems
• Enhanced performance: increased system throughput
• Fault tolerance
Scheduling in MC/MP systems
• Task partitioning and allocation (offline)
• Task scheduling (online)

Bin Packing Approximation algorithms
• Applied to the offline task partitioning and allocation phase
Taxonomy of Multiprocessor Scheduling Algorithms
Global Scheme
Partitioning Scheme
              Bin Packing Problem
• Definition: Given a list of objects and their
  weights, and a collection of bins of fixed size, find
  the smallest number of bins so that all of the
  objects are assigned to a bin.
• Example. Given the set of items S = {4, 8, 5, 1, 7,
  6, 1, 4, 2, 2} and bins of size 10, pack the items
  into as few bins as possible using
   –   Next Fit
   –   First Fit
   –   Best Fit
   –   Worst Fit
items S = {4, 8, 5, 1, 7, 6, 1, 4, 2, 2}
[Figure: resulting packings for Next Fit, First Fit, Best Fit, Worst Fit, and
 Almost Worst Fit on the item set above]
• The items are sorted in decreasing order: S = {8, 7, 6, 5, 4, 4, 2, 2, 1, 1}.
  Applying the First Fit and Best Fit algorithms to the sorted list gives
  First Fit Decreasing and Best Fit Decreasing.
[Figure: resulting packings for First Fit Decreasing and Best Fit Decreasing]
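A short C sketch of the First Fit heuristic applied to the item set and bin size from the slide (illustrative code; the function and variable names are my own). Sorting the items in decreasing order before calling first_fit turns it into First Fit Decreasing.

   #include <stdio.h>

   /* First Fit: put each item into the first open bin with enough room,
      opening a new bin only when no open bin fits. Returns bins used. */
   static int first_fit(const int items[], int n, int capacity)
   {
       int remaining[64];                      /* free space in each open bin */
       int bins = 0;

       for (int i = 0; i < n; i++) {
           int b = 0;
           while (b < bins && remaining[b] < items[i])
               b++;                            /* scan open bins from the start */
           if (b == bins)
               remaining[bins++] = capacity;   /* open a new bin */
           remaining[b] -= items[i];
       }
       return bins;
   }

   int main(void)
   {
       int S[] = {4, 8, 5, 1, 7, 6, 1, 4, 2, 2};   /* item set from the slide */
       printf("First Fit uses %d bins\n", first_fit(S, 10, 10));
       return 0;
   }

For this instance, First Fit as written uses 5 bins, while the decreasing order packs the items into 4 full bins; since the items sum to 40 and each bin holds 10, 4 bins is optimal.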
Task No   Period   Execution Time   Priority   Utilization
7         22       6                7          .27
6         19       4                6          .21
5         15       6                5          .40
4         14       3                4          .21
3         13       2                3          .15
2         11       3                2          .27
1          9       3                1          .33
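As a worked illustration linking this task set to the bin-packing approach above (assuming, for this example only, that each processor can be loaded up to a utilization of 1.0): each utilization in the table is Execution Time / Period, and the utilizations sum to about 1.84, so at least two processors are needed. First Fit Decreasing on {.40, .33, .27, .27, .21, .21, .15} places .40, .33 and one of the .27 tasks on the first processor (total 1.00) and the remaining tasks on the second (total ≈ .84), so two processors suffice under this assumption.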
System Power Savings Using Dynamic
          Voltage Scaling
• Power = (½) C V² F
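A brief worked illustration (my own numbers, following the formula above): because dynamic power grows with the square of the supply voltage and linearly with the frequency, scaling both V and F to half their nominal values gives (½)C(V/2)²(F/2) = (1/8)·(½)CV²F, i.e. about one eighth of the original power, at the cost of roughly halving the execution speed.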