UNIT 5
TinyOS
• TinyOS – an event-driven operating system designed for resource-constrained
  sensor nodes
• TinyOS supports modularity and event-based programming through the
  concept of components
• A component encapsulates the required state information in a frame, the
  program code for normal tasks, and handlers for events and commands
• Components are arranged hierarchically, from low-level components close
  to the hardware to high-level components making up the actual application.
• Events originate in the hardware and pass upward from low-level to high-
  level components
• Commands are passed from high-level to low-level components
• A simple First In First Out (FIFO) task scheduler runs each task to
  completion and shuts the node down when there is no task executing or
  waiting (see the sketch below)
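A minimal C sketch of such a run-to-completion FIFO scheduler (illustrative names only, not the literal TinyOS source; a real implementation also guards the queue against concurrent posts from interrupt handlers):

#include <stdbool.h>

#define MAX_TASKS 8

typedef void (*task_t)(void);

static task_t task_queue[MAX_TASKS];  /* circular FIFO of pending tasks */
static int head = 0, count = 0;

/* Post a task for later execution; fails if the queue is full.
 * (A real scheduler disables interrupts around this update.) */
bool post_task(task_t t) {
    if (count == MAX_TASKS) return false;
    task_queue[(head + count) % MAX_TASKS] = t;
    count++;
    return true;
}

/* Scheduler loop: run tasks to completion in FIFO order, and shut
 * the node down (sleep) when nothing is executing or waiting. */
void scheduler_loop(void) {
    for (;;) {
        if (count > 0) {
            task_t next = task_queue[head];
            head = (head + 1) % MAX_TASKS;
            count--;
            next();                 /* tasks run to completion */
        } else {
            /* sleep_until_interrupt();  hypothetical hook: low-power
               state until an interrupt posts new work */
        }
    }
}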
TinyOS
• Method calls
• Tasks and Events
• Split-Phase Operation
• Resource Contention
• Lightweight
nesC – Basic Concepts
• nesC is an extension to C, designed to embody the structuring concepts
  and execution model of TinyOS.
• Separation of construction and composition: programs are built out of
  components, which are assembled (“wired”) to form whole programs.
• Components define two scopes, one for their specification (containing the
  names of their interface instances) and one for their implementation.
    • Component Interface
    • Component Implementation
Component Interface:
• An interface provides a multi-function interaction channel between two
  components, a provider and a user. Interfaces may be provided or used by
  a component.
• The provided interfaces represent the functionality that the component
  provides to its user; the used interfaces represent the functionality the
  component needs to perform its job.
• Components are statically linked to each other via their interfaces.
• Interfaces are bidirectional: they specify a set of functions to be
  implemented by the interface's provider (commands) and a set to be
  implemented by the interface's user (events), as in the example below.
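For illustration, a timer interface (modeled on the Timer interface of TinyOS 1.x and named Timer01 to match the TimerC example discussed later; the exact signatures vary across TinyOS versions) might look like:

interface Timer01 {
  // commands: implemented by the provider, called by the user
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  // event: implemented by the user, signaled by the provider
  event result_t fired();
}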
Component Implementation
• Depending on how they are implemented, nesC components are either
  modules or configurations.
• Modules are implemented by application code (written in a C-like
  syntax).
• Configurations are implemented by connecting interfaces of existing
  components.
Module
• The implementation part of a module is written in C-like code.
• A command or an event bar in an interface foo is referred to as foo.bar.
• The keyword call indicates the invocation of a command.
• The keyword signal indicates the triggering of an event, as in the sketch
  below.
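A minimal module sketch using these keywords (assuming the Timer01 interface above and a Clock interface providing a setRate command and a fire event; the signatures are illustrative, and the names match the TimerC example in the next section):

module TimerModule {
  provides {
    interface StdControl;
    interface Timer01;
  }
  uses interface Clock;
}
implementation {
  command result_t StdControl.init()  { return SUCCESS; }
  command result_t StdControl.start() { return SUCCESS; }
  command result_t StdControl.stop()  { return SUCCESS; }

  command result_t Timer01.start(char type, uint32_t interval) {
    // 'call' invokes a command of the used Clock interface
    return call Clock.setRate(interval, type);
  }
  command result_t Timer01.stop() {
    return call Clock.setRate(0, 0);
  }
  event result_t Clock.fire() {
    // 'signal' triggers the fired event toward this module's user
    return signal Timer01.fired();
  }
}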
Configuration
• In the implementation section of a configuration (see the reconstructed
  TimerC sketch below), the code first includes the two components, and then
  specifies that the StdControl interface of the TimerC configuration is the
  StdControl interface of the TimerModule; similarly for the Timer01
  interface.
• The connection between the Clock interfaces is specified using the ->
  operator.
• The Clock interface is thus hidden from upper layers.
• nesC also supports the creation of several instances of a component by
  declaring abstract components with optional parameters.
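A hedged reconstruction of the TimerC configuration described above (the original slides referenced a code figure that is not reproduced in these notes):

configuration TimerC {
  provides {
    interface StdControl;
    interface Timer01;
  }
}
implementation {
  components TimerModule, HWClock;

  // '=' equates the configuration's own interfaces with
  // interfaces of the inner TimerModule component
  StdControl = TimerModule.StdControl;
  Timer01 = TimerModule.Timer01;

  // '->' wires the used Clock interface to its provider;
  // Clock is thereby hidden from upper layers
  TimerModule.Clock -> HWClock.Clock;
}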
Concurrency and Atomicity
• Race condition: concurrent interrupts/tasks update shared variables.
• Asynchronous code (AC): reachable from at least one interrupt.
• Synchronous code (SC): reachable from tasks only.
• Any update of a shared variable from AC, or from SC when the variable is
  also updated from AC, is a potential race condition!
• Atomic statement: interrupts are disabled while an atomic block executes
atomic {
<Statement list>
}
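For instance, a counter shared between an interrupt handler (AC) and a task (SC) can be protected like this (illustrative module, assuming the TinyOS 1.x ADC interface with its async dataReady event):

module SenseM {
  uses interface ADC;
}
implementation {
  uint8_t pending;   // shared between AC and SC

  task void process() {
    uint8_t n;
    atomic {           // take a consistent snapshot of the shared state
      n = pending;
      pending = 0;
    }
    // ... process n readings ...
  }

  async event result_t ADC.dataReady(uint16_t data) {
    atomic { pending++; }   // interrupts disabled during the update
    post process();
    return SUCCESS;
  }
}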
Dataflow-Style Language: TinyGALS
• A TinyGALS program is a set of processing units called actors.
• Actors have ports to receive and produce data, and the directional
  connections among ports are FIFO queues that mediate the flow of data.
• The execution of an actor is triggered when there are enough input data at
  the input ports.
• The globally asynchronous and locally synchronous (GALS) mechanism is a
  way of building event-triggered concurrent execution from thread-unsafe
  components.
• TinyGALS addresses concurrency concerns at the system level, rather than at
  the component level as in nesC.
• Reactions to concurrent events are managed by a dataflow-style FIFO queue
  communication.
TinyGALS Programming Model
• An application in TinyGALS is built in two steps:
  (1) constructing asynchronous actors from synchronous components, and
  (2) constructing an application by connecting the asynchronous actors
      through FIFO queues.
• An actor in TinyGALS has a set of input
  ports, a set of output ports, and a set of
  connected TinyOS components.
• An actor is constructed by connecting
  synchronous method calls among TinyOS
  components.
• At the application level, the
  asynchronous communication of
  actors is mediated using FIFO queues.
• Each connection can be
  parameterized by a queue size.
Figure: the triggering, sensing, and sending actors of the Field monitor
application in TinyGALS
TinyGUYS
• The TinyGALS programming model has the advantage that actors become
  decoupled through message passing.
• However, each message passed will trigger the scheduler and activate a
  receiving actor, which is inefficient if there is global state that must be
  shared among multiple actors.
• TinyGUYS (Guarded Yet Synchronous) variables are a mechanism for sharing
  global state, allowing quick access but with protected modification of the data.
• In the TinyGUYS mechanism, global variables are guarded.
• Actors may read the global variables synchronously (without delay). However,
  writes to the variables are asynchronous in the sense that all writes are
  buffered. The buffer is of size one, so the last actor that writes to a variable wins.
• TinyGUYS variables are updated by the scheduler only when it is safe (e.g.,
  after one module finishes and before the scheduler triggers the next
  module), as in the sketch below.
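A hedged C sketch of the idea behind one guarded variable with a size-one write buffer (all names hypothetical; the real code is produced by the TinyGALS code generator):

static int value;         /* copy that actors read without delay       */
static int value_buffer;  /* last buffered write ("last writer wins")  */
static int value_dirty;   /* nonzero when a write is pending           */

/* Synchronous read: immediate, no delay for the reading actor. */
int param_get_value(void) { return value; }

/* Asynchronous write: buffered until the scheduler commits it. */
void param_put_value(int v) { value_buffer = v; value_dirty = 1; }

/* Called by the scheduler between module executions, when it is
 * safe to commit the pending write. */
void tinyguys_commit(void) {
    if (value_dirty) { value = value_buffer; value_dirty = 0; }
}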
TinyGALS Code Generation
• Generative approach to mapping high-level constructs such as FIFO queues and
  actors into executables on Berkeley motes.
• Given the definitions for the components, actors, and application, the code generator
  automatically generates all of the necessary code for
       (1) component links and actor connections,
       (2) application initialization and start of execution,
       (3) communication among actors, and
       (4) global variable reads and writes.
• The TinyGALS code generator generates a set of aliases for each synchronous
  method call.
• The code generator also creates a system-level initialization function called
  app_init(), which contains calls to the init() method of each actor in the
  system.
• The app_init() function is one of the first functions called by the TinyGALS
  run-time scheduler before executing the application (see the sketch below).
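A hedged sketch of what such a generated function might look like for a three-actor application (the actor names are hypothetical):

extern void TimerActor_init(void);   /* hypothetical generated actor inits */
extern void SenseActor_init(void);
extern void SendActor_init(void);

/* Generated system-level initialization: calls each actor's init(). */
void app_init(void) {
    TimerActor_init();
    SenseActor_init();
    SendActor_init();
}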
• The code generator automatically generates a set of scheduler data structures
  and functions for each asynchronous connection between actors.
• For each input port of an actor, the code generator generates a queue of
  length n, where n is specified in the application definition.
• The width of the queue depends on the number of arguments of the method
  connected to the port. If there are no arguments, no queue is generated for
  the port.
• For each output port of an actor, the code generator generates a function that
  has the same name as the output port. This function is called whenever a
  method of a component wishes to write to an output port.
• For each input port connected to the output port, a put() function is generated
  which handles the actual copying of data to the input port queue.
• The put() function adds the port identifier to the scheduler event queue so
  that the scheduler will activate the actor at a later time.
• For each connection between a component method and an actor input port, a
  function is generated.
• When the scheduler activates an actor via an input port, it first calls this
  generated function to remove data from the input port queue and then passes
  it to the component method.
• Since most of the data structures in the TinyGALS run-time scheduler are
  generated, the scheduler does not need to handle different data types or
  the conversions among them. A sketch of the generated per-port code follows
  below.
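A hedged C sketch of the generated machinery for one connection (all names hypothetical; assume an input port whose connected method takes one uint16_t argument and a queue of length n = 4):

#include <stdint.h>

extern void scheduler_enqueue(int port_id);      /* hypothetical runtime hook  */
extern void SenseComponent_process(uint16_t x);  /* connected component method */
enum { SENSE_TRIGGER_PORT_ID = 0 };

#define TRIGGER_QUEUE_LEN 4
static uint16_t trigger_queue[TRIGGER_QUEUE_LEN]; /* input-port FIFO */
static int trigger_head = 0, trigger_count = 0;

/* Generated for the input port: copy the datum into the port queue and
 * add the port identifier to the scheduler event queue, so the scheduler
 * activates the actor at a later time. */
static void SenseActor_trigger_put(uint16_t arg) {
    if (trigger_count < TRIGGER_QUEUE_LEN) {
        trigger_queue[(trigger_head + trigger_count) % TRIGGER_QUEUE_LEN] = arg;
        trigger_count++;
        scheduler_enqueue(SENSE_TRIGGER_PORT_ID);
    }
}

/* Generated for the connected output port: a component writes to its
 * output port simply by calling this function. */
void SourceActor_out(uint16_t arg) {
    SenseActor_trigger_put(arg);
}

/* Generated glue called when the scheduler activates the actor via this
 * port: dequeue one datum and pass it to the component method. */
void SenseActor_trigger_dispatch(void) {
    uint16_t arg = trigger_queue[trigger_head];
    trigger_head = (trigger_head + 1) % TRIGGER_QUEUE_LEN;
    trigger_count--;
    SenseComponent_process(arg);
}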
Node-Level Simulators
• Node-level design methodologies are usually associated with simulators that
  simulate the behavior of a sensor network on a per-node basis.
• Using simulation, designers can quickly study the performance (in terms of
  timing, power, bandwidth, and scalability) of potential algorithms without
  implementing them on actual hardware and dealing with the vagaries of
  actual physical phenomena.
• A node-level simulator typically has the following components:
    • Sensor node model
    • Communication model
    • Physical environment model
    • Statistics and visualization
Two Types of Execution Model
• Depending on how time is advanced in the simulation, there are two types of
  execution models:
   • Cycle-driven (CD) simulation
   • Discrete-event (DE) simulation.
• Cycle-driven (CD) simulation
   • Discretizes the continuous notion of real time into (typically regularly spaced) ticks and
     simulates the system behavior at these ticks.
   • At each tick, the physical phenomena are first simulated, and then all nodes are checked to see
     if they have anything to sense, process, or communicate.
   • Sensing and computation are assumed to be finished before the next tick. Sending a packet is
     also assumed to be completed by then.
   • However, the packet will not be available for the destination node until the next tick.
   • This split-phase communication is a key mechanism to reduce cyclic dependencies that may
     occur in cycle-driven simulations.
   • That is, there should be no two components such that one of them computes
     y_k = f(x_k) and the other computes x_k = g(y_k) for the same tick index k
     (see the sketch below).
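A hedged C sketch of a cycle-driven loop with split-phase packet delivery (names illustrative, not from any particular simulator):

#define NUM_NODES 16

typedef struct { int id; /* ... per-node state ... */ } Node;
static Node nodes[NUM_NODES];

static double channel_now;   /* packet visible to nodes at this tick      */
static double channel_next;  /* packet written this tick, delivered next  */

static void simulate_phenomena(int tick) { /* update the physical model   */ }
static void node_step(Node *n, int tick)  { /* sense, compute, read
                                               channel_now, and write
                                               channel_next when sending  */ }

void run_cd_simulation(int num_ticks) {
    for (int tick = 0; tick < num_ticks; tick++) {
        simulate_phenomena(tick);        /* 1. physical phenomena first   */
        channel_now = channel_next;      /* 2. deliver LAST tick's packet:
                                            breaks cycles like y_k = f(x_k)
                                            with x_k = g(y_k) at the same k */
        for (int i = 0; i < NUM_NODES; i++)
            node_step(&nodes[i], tick);  /* 3. check every node           */
    }
}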
• Discrete-event (DE) simulation.
   • Assumes that the time is continuous and an event may occur at any time.
   • An event is a 2-tuple with a value and a time stamp indicating when the event is
     supposed to be handled.
   • Components in a DE simulation react to input events and produce output events.
   • In node-level simulators, a component can be a sensor node and the events can be
     communication packets; or a component can be a software module within a node and
     the events can be message passings among these modules.
   • A DE simulator typically requires a global event queue.
   • All events passing between nodes or modules are put in the event queue and sorted
     according to their chronological order.
   • At each iteration of the simulation, the simulator removes the first event
     (the one with the earliest time stamp) from the queue and triggers the
     component that reacts to that event, as in the sketch below.
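A hedged C sketch of such a discrete-event kernel (illustrative, not any specific simulator's API):

#include <stdlib.h>

/* An event is a 2-tuple (time stamp, value), plus the handler of the
 * component that reacts to it. */
typedef struct Event {
    double time;
    int value;
    void (*handler)(int value, double now);
    struct Event *next;
} Event;

static Event *event_queue = NULL;   /* global queue, sorted by time stamp */

/* Insert an event in chronological order; components may call this from
 * their handlers to schedule future events. */
void schedule(double time, int value, void (*handler)(int, double)) {
    Event *e = malloc(sizeof *e);
    Event **p = &event_queue;
    e->time = time; e->value = value; e->handler = handler;
    while (*p && (*p)->time <= e->time) p = &(*p)->next;
    e->next = *p;
    *p = e;
}

/* Main loop: repeatedly remove the earliest event and trigger the
 * component that reacts to it. */
void run_de_simulation(void) {
    while (event_queue) {
        Event *e = event_queue;
        event_queue = e->next;
        e->handler(e->value, e->time);
        free(e);
    }
}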
• Another class of simulators, sometimes called software-in-the-loop
  simulators, incorporates the actual node software into the simulation. For
  this reason, they are typically tied to particular hardware platforms and
  are less portable.
Examples of Simulators
• ns-2
• TOSSIM