OOSE Unit 2
INTRODUCTION
 1. Requirement:
    • A requirement is a specification of a need or want. A requirement can range
      from a high-level abstract statement of a service or a system constraint to a
      detailed mathematical functional specification.
   Types of requirements:
       ▪ User requirements – high-level statements, in natural language plus diagrams,
         of the services the system provides and its operational constraints; written
         for customers.
       ▪ System requirements – detailed descriptions of the system's functions,
         services, and operational constraints; written for developers.
       ▪ Requirements are also classified as functional and non-functional.
  E.g., List the stakeholders and all types of requirements for an online train
  reservation system. (7) (N/D '17)
  Soln:
  Stake holders:
      Passenger
      Database Administrator
      Booking clerk
      Bank
  Functional Requirements:
     ▪ The system should allow the passenger to register and log in.
     ▪ The system should provide a search option so that the user can search for the
        required train and the required number of reservations.
     ▪ Online bookings made by the customer must be associated with the customer's
        registered account.
             Fig: User requirements for mental health care patient management system
  Problems with natural language:
   − Lack of clarity.
   − Requirements confusion: functional and non-functional requirements tend to be
      mixed up.
   − Requirements amalgamation: several different requirements may be expressed
      together.
  Guidelines for writing user requirements:
      ▪ Invent a standard format and use it for all requirements
       ▪ Use language in a consistent way. Use 'shall' for mandatory requirements
       ▪ Use text highlighting to identify key parts of the requirement
       ▪ Avoid the use of computer jargon.
       Fig: System requirements for mental health care patient management system
System requirements describe the solution domain, the world of the software logic. They
describe what the software must do.
For instance, for bookkeeping software:
     • The user requirement is to compute the correct revenue.
     • But the system requirement is only to compute the sum of the partial revenues
        entered by the user.
     • If the user enters incorrect partial revenues, the software is not required to
        magically correct them.
Different ways of writing a System Requirements Specification:
            Notation                                      Description
     Natural language            The requirements are written in numbered sentences in
     sentences                   natural language. Each sentence should express one
                                 requirement.
      Structured natural         The requirements are written in natural language on a
      language                   standard form or template.
      Design description         Makes use of a programming-language-like notation. It
      languages                  is rarely used.
      Graphical                  UML use case and sequence diagrams are frequently
      notations                  used.
      Mathematical               Rarely used, as customers don't understand
      specifications             mathematical (formal) notation.
Structured language specifications:
 • It is a popular method for writing requirements.
 • It is a standard way of representing requirements.
Graphical notations:
 • When requirements are written in natural language, extra information can be added;
    this information can be represented using tables or graphical models.
 2. FURPS:
FURPS is an acronym representing a model for classifying software quality attributes
(functional and non-functional requirements):
    • Functionality – Capability (Size & Generality of Feature Set), Reusability (Compatibility,
       Interoperability, Portability), Security (Safety & Exploitability)
    • Usability (UX) – Human Factors, Aesthetics, Consistency, Documentation, Responsiveness
    • Reliability – Availability, Failure Extent & Recoverability, Predictability (Stability), Accuracy
    • Performance – Speed, Efficiency, Resource Consumption, Throughput, Capacity, Scalability
    • Supportability – Serviceability, Maintainability, Testability, Flexibility, Installability, Localizability
                ➢ Class-based elements
                       - Implied by scenarios
                ➢ Behavioral elements
                       - State diagram
                ➢ Flow-oriented elements
                       - Data flow diagram
 (d) Negotiation
       o It isn't unusual for customers and users to ask for more than can be
           achieved, given limited business resources.
       o Negotiation means agreeing on a deliverable system that is realistic for both
           developers and customers.
       o The software team and other project stakeholders negotiate the priority,
           availability, and cost of each requirement.
       o The process is:
          – Identify the key stakeholders
                • These are the people who will be involved in the negotiation.
          – Determine each of the stakeholders' "win conditions"
                • Win conditions are not always obvious.
          – Negotiate
                • Work toward a set of requirements that lead to "win-win".
(e) Specification
       • In the context of computer-based systems (and software), the term specification
           means different things to different people. It is the final work product
           produced by the requirements engineer. A specification can be:
               ▪ A written document
               ▪ A set of models
               ▪ A formal mathematical model
               ▪ A collection of user scenarios (use cases)
               ▪ A prototype
   (f) Validation
    Examine the specification to ensure that the software requirements are consistent,
    unambiguous, error-free, etc.
   Checklist for validation:
 • Legal/Ethical Feasibility – What are the legal implications of the project? What sort
   of ethical considerations are there? You need to make sure that any project undertaken
   will meet all legal and ethical requirements before the project is on the table.
 • Resource Feasibility – Do you have enough resources, what resources will be
   required, what facilities will be required for the project, etc.
 • Operational Feasibility – This measures how well your company will be able to
   solve problems and take advantage of opportunities that are presented during the
   course of the project
 • Marketing Feasibility – Will anyone want the product once it's done? What is the target
   demographic? Should there be a test run? Is there enough buzz that can be created for
   the product?
 • Real Estate Feasibility – What kind of land or property will be required to
   undertake the project? What is the market like? What are the zoning laws? How will
   the business impact the area?
 • Comprehensive Feasibility – This takes a look at the various aspects involved in the
   project – marketing, real estate, cultural, economic, etc. When undertaking a new
   business venture, this is the most common type of feasibility study performed.
Viewpoints:
Viewpoints are a way of structuring the requirements to represent the perspectives of
different stakeholders. Stakeholders may be classified under different viewpoints.
 Use cases:
   − Use-cases are a scenario-based technique in the UML which identify the actors in
       an interaction and which describe the interaction itself.
   − The set of use cases should describe all possible interactions with the system.
Ethnography
  − It is an observational technique used to understand social and organizational
     requirements.
  − Two types of requirements emerge:
       • Requirements derived from the way people actually work.
       • Requirements derived from cooperation and awareness of other people's activities.
 (b) Requirements classification
     • Groups related requirements and organizes them into coherent clusters.
 (c) Requirements Prioritization
     • Prioritizing requirements and resolving requirements conflicts.
 (d) Requirements documentation
     • Requirements are documented and input into the next round of the spiral.
 4. CLASSICAL ANALYSIS:
STRUCTURED SYSTEM ANALYSIS:
     − Structured system analysis is a technique in which the system requirements are
       converted into specifications.
     − It is a mapping of the problem domain to flows and transformations.
     − The system can be modeled using:
         (i) ER diagram – used to represent the data model.
         (ii) Data flow diagram & control flow diagram – used to represent the
               functional model.
    DATA FLOW DIAGRAM (DFD):
     • A data flow diagram (DFD) is a way of representing the flow of data through a
       process or a system (usually an information system). The DFD also provides
       information about the outputs and inputs of each entity and of the process itself.
       A DFD is also known as a bubble chart.
     • It is the starting point of the design phase. A DFD consists of a series of bubbles
       joined by lines.
     • The bubbles represent data transformations and the lines represent data flows
       in the system.
• The DFD is presented in a hierarchical fashion. That is, the first data flow model
  (sometimes called a level 0 DFD or context diagram) represents the system as a whole.
  Subsequent data flow diagrams (level 1, 2, ...) refine the context diagram, providing
  increasing detail with each subsequent level.
o There should not be a direct flow between a data store and an external entity.
    o The names of data stores, sources, and destinations should be in capital letters.
      Process and data flow names should have their first letter capitalized.
    o The level 0 data flow diagram should depict the software/system as a single
      bubble;
    o All arrows and bubbles should be labelled with meaningful names;
    o Information flow continuity must be maintained from level to level.
Level 0:
  The level 0 DFD must now be expanded into a level 1 data flow model.
  Level 1 DFD:
Level 2 DFD:
 5. SOFTWARE PROJECT ESTIMATION
The accuracy of a software project estimate is predicated on:
    (1) The degree to which you have properly estimated the size of the product to be
       built;
    (2) The ability to translate the size estimate into human effort, calendar time, and
       dollars (a function of the availability of reliable software metrics from past
       projects);
    (3) The degree to which the project plan reflects the abilities of the software team; and
    (4) The stability of product requirements and the environment that supports the
       software engineering effort.
Putnam and Myers [Put92] suggest four different approaches to the sizing problem.
For example:
    • Change sizing. This approach is used when a project encompasses the use of
    existing software that must be modified in some way as part of a project. The
    planner estimates the number and type (e.g., reuse, adding code, changing code,
    and deleting code) of modifications that must be accomplished.
Problem-Based Estimation:
LOC and FP data are used in two ways during software project estimation:
    (1) as an estimation variable to "size" each element of the software, and
    (2) as baseline metrics collected from past projects and used in conjunction with
        estimation variables to develop cost and effort projections.
    Function estimates are combined to produce an overall estimate for the entire
    project.
Using historical data, the project planner estimates an expected value for the size by
considering three estimates:
    1. Optimistic
    2. Most likely
    3. Pessimistic
The expected value S for the size is then computed as a weighted average:
        S = (sopt + 4 sm + spess) / 6
where
    •   S is the expected size,
    •   sopt is the optimistic estimate,
    •   sm is the most likely estimate, and
    •   spess is the pessimistic estimate.
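As a quick illustration, here is a minimal Python sketch of this weighted-average
computation; the three input figures are hypothetical:

```python
def expected_size(s_opt, s_m, s_pess):
    """Three-point (expected value) size estimate: S = (sopt + 4*sm + spess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical estimates for one function, in LOC:
print(expected_size(4600, 6900, 8600))  # -> 6800.0 LOC
```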
Solution:
        Function                                           Estimated LOC
        User interface and control facilities (UICF)            2,500
        Two-dimensional geometric analysis (2DGA)                5,600
        Three-dimensional geometric analysis (3DGA)              6,450
        Database management (DBM)                                3,100
        Computer graphics display facilities (CGDF)              4,740
        Peripheral control function (PCF)                        2,250
        Design analysis modules (DAM)                            7,980
        Estimated lines of code                                 32,620
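A small sketch of how such a table feeds an effort and cost estimate; the productivity
and labor-rate figures below are assumed values for illustration, not part of the
problem statement:

```python
estimated_loc = {
    "UICF": 2500, "2DGA": 5600, "3DGA": 6450, "DBM": 3100,
    "CGDF": 4740, "PCF": 2250, "DAM": 7980,
}
total_loc = sum(estimated_loc.values())      # 32,620 LOC

productivity = 620    # LOC per person-month (assumed historical average)
labor_rate = 8000     # dollars per person-month (assumed)

effort = total_loc / productivity            # ~52.6 person-months
cost = effort * labor_rate                   # ~$420,900
print(f"effort = {effort:.1f} pm, cost = ${cost:,.0f}")
```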
For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these include:
ISO Standards
    • COSMIC – ISO/IEC 19761:2011 Software engineering. A functional size
      measurement method.
Function Point Analysis (FPA) technique quantifies the functions contained within software in
terms that are meaningful to the software users. FPs consider the number of functions being
developed based on the requirements specification.
Function Points (FP) Counting is governed by a standard set of rules, processes and guidelines
as defined by the International Function Point Users Group (IFPUG). These are published in
Counting Practices Manual (CPM).
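As a rough sketch of the counting idea only, the snippet below applies the IFPUG
average-complexity weights to hypothetical counts; a real count per the CPM classifies
each function as low, average, or high complexity before weighting:

```python
# Average-complexity weights for the five IFPUG function types:
# external inputs, external outputs, external inquiries,
# internal logical files, external interface files.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """counts maps function type -> number of instances (hypothetical data)."""
    return sum(WEIGHTS[ftype] * n for ftype, n in counts.items())

print(unadjusted_fp({"EI": 24, "EO": 16, "EQ": 22, "ILF": 4, "EIF": 2}))  # -> 318
```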
    1) Basic model: The basic COCOMO model estimates the software development effort using
    only lines of code. The equations in this model are:
        Effort    E = ab (KLOC)^bb   person-months
        Duration  D = cb (E)^db      months
        Persons   P = E / D
    where ab, bb, cb, db are coefficients that depend on the project class (organic,
    semi-detached, or embedded).
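A minimal sketch of the basic model, using the commonly quoted coefficient values for
the three project classes; the 32 KLOC input is hypothetical:

```python
# (a_b, b_b, c_b, d_b) for each basic-COCOMO project class
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # months
    persons = effort / duration
    return effort, duration, persons

# ~ (91.3 PM, 13.9 months, 6.6 persons) for a 32 KLOC organic project
print(basic_cocomo(32))
```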
    2) Intermediate model
This is an extension of the basic COCOMO model. This estimation model makes use of a
    set of "cost driver attributes" to compute the cost of software.
These 15 attributes are rated on a 6-point scale ranging from "very low" to "extra high".
The effort multiplier for each cost driver attribute is given in the following table. The
product of all effort multipliers is the "Effort Adjustment Factor" (EAF), and
        Effort E = ai (KLOC)^bi × EAF   person-months
The values of ai and bi for the various classes of software projects are:
        Organic:        ai = 3.2, bi = 1.05
        Semi-detached:  ai = 3.0, bi = 1.12
        Embedded:       ai = 2.8, bi = 1.20
The duration and person estimates are the same as in the basic COCOMO model, i.e.,
                               P = E / D
                                 = 191 / 13
                               P = 15 persons approximately
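A sketch of the intermediate computation; the KLOC and EAF inputs below are
hypothetical, chosen so that the effort lands near the document's E = 191, D = 13
example:

```python
def intermediate_effort(kloc, eaf, a_i=3.2, b_i=1.05):
    """Intermediate COCOMO: E = a_i * KLOC**b_i * EAF, where EAF is the
    product of the 15 cost-driver effort multipliers (organic a_i, b_i assumed)."""
    return a_i * kloc ** b_i * eaf

E = intermediate_effort(50, 0.98)   # hypothetical 50 KLOC project, EAF = 0.98 -> ~191 PM
D = 13                              # duration in months, as in the example above
print(round(E / D))                 # -> 15 persons, approximately
```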
Using these detailed cost drivers, an estimate is determined for each phase of the lifecycle.
    The object point count is determined by multiplying the original number of object
    instances by the weighting factor in the figure and summing to obtain a total object point
    count. When component-based development or general software reuse is to be applied,
    the percent of reuse (%reuse) is estimated and the object point count is adjusted:
        NOP = (object points) × [(100 − %reuse) / 100]
    where NOP is the number of new object points.
Once the productivity rate (PROD) has been determined, an estimate of project effort is
computed using:
        Estimated effort = NOP / PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required.
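A small sketch of the two formulas above; the object-point count, %reuse, and PROD
values are hypothetical:

```python
def new_object_points(object_points, percent_reuse):
    """NOP = (object points) x [(100 - %reuse) / 100]"""
    return object_points * (100 - percent_reuse) / 100

def estimated_effort(nop, prod):
    """Effort (person-months) = NOP / PROD, with PROD in NOP per person-month."""
    return nop / prod

nop = new_object_points(150, 20)       # -> 120.0 NOP
print(estimated_effort(nop, 13))       # ~9.2 person-months for PROD = 13
```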
 6. RISK MANAGEMENT
Risk denotes the uncertainty that may occur in choices due to past actions; a risk is
something that can cause heavy losses.
Risk management refers to the process of making decisions based on an evaluation of the
factors that pose threats to the business.
     Software Risks:
Two characteristics:
   ➢ uncertainty—the risk may or may not happen;
   ➢ loss—if the risk becomes a reality, unwanted consequences or losses will occur
Project risks threaten the project plan; that is, if project risks become real, it is likely that
the project schedule will slip and that costs will increase. Project risks identify potential
budgetary, schedule, personnel (staffing and organization), resource, stakeholder, and
requirements problems and their impact on a software project.
Technical risks threaten the quality and timeliness of the software to be produced. If a
technical risk becomes a reality, implementation may become difficult or impossible.
Technical risks identify potential design, implementation, interface, verification, and
maintenance problems.
Business risks threaten the viability of the software to be built and often threaten the
project or the product.
Candidates for the top five business risks are:
     (1) Building an excellent product or system that no one really wants (market risk),
     (2) Building a product that no longer fits into the overall business strategy for the
        company (strategic risk),
     (3) Building a product that the sales force doesn't understand how to sell (sales risk),
     (4) Losing the support of senior management due to a change in focus or a change in
        people (management risk), and
     (5) Losing budgetary or personnel commitment (budget risks).
Known risks are those that can be uncovered after careful evaluation of the project plan,
the business and technical environment in which the project is being developed, and
other reliable information sources (e.g., unrealistic delivery date, lack of documented
requirements or software scope, poor development environment).
Predictable risks are extrapolated from past project experience (e.g., staff turnover, poor
communication with the customer, dilution of staff effort as ongoing maintenance
requests are serviced).
    Unpredictable risks can and do occur, but they are extremely difficult to identify in advance.
       Reactive strategy:
    ➢ Reactive risk management is a strategy in which corrective action is taken only
    once the project gets into trouble. When such risks cannot be managed and new
    risks come up one after another, the software team flies into action in an attempt to
    correct problems rapidly. These activities are called "fire fighting" activities.
      Proactive strategy:
    ➢ A proactive strategy begins long before technical work is initiated.
    ➢ Potential risks are identified, their probability and impact are assessed, and they are ranked
    by importance. Then, the software team establishes a plan for managing risk.
    ➢ The primary objective is to avoid risk, but because not all risks can be avoided, the team
    works to develop a contingency plan that will enable it to respond in a controlled and effective
    manner.
Various activities that are carried out for risk management are-
    1. Risk Identification
    2. Risk Projection
    3. Risk Refinement
    4. Risk mitigation, monitoring and management
There are two distinct types of risks for each of the categories: generic risks and
product-specific risks.
The U.S. Air Force has published a pamphlet that contains excellent guidelines for software risk
identification and abatement. The Air Force approach requires that the project manager identify the
risk drivers that affect software risk components—performance, cost, support, and schedule.
Performance risk—the degree of uncertainty that the product will meet its requirements and be
fit for its intended use.
Cost risk—the degree of uncertainty that the project budget will be maintained.
Support risk—the degree of uncertainty that the resultant software will be easy to correct,
adapt, and enhance.
Schedule risk—the degree of uncertainty that the project schedule will be maintained and
that the product will be delivered on time.
Risk projection, also called risk estimation, attempts to rate each risk in two ways—
    (1) The likelihood or probability that the risk is real and
    (2) The consequences of the problems associated with the risk, should it occur.
The nature of the risk indicates the problems that are likely if it occurs. For example, a
poorly defined external interface to customer hardware (a technical risk) will preclude
early design and testing and will likely lead to system integration problems late in a
project.
The scope of a risk combines the severity with its overall distribution (how much of the
project will be affected, or how many stakeholders are harmed?).
     The timing of a risk considers when and for how long the impact will be felt.
The U.S. Air Force suggests the following steps to determine the overall consequences of
a risk:
    (1) Determine the average probability of occurrence value for each risk component;
    (2) Determine the impact for each component based on the criteria shown; and
    (3) Complete the risk table and analyze the results.
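A sketch of building and sorting such a risk table; the risks, probabilities, and impact
values below are hypothetical (impact uses the usual 1 = catastrophic ... 4 = negligible
scale):

```python
# (risk, probability of occurrence, impact: 1=catastrophic .. 4=negligible)
risks = [
    ("Staff turnover higher than planned", 0.60, 2),
    ("Size estimate significantly low",    0.50, 2),
    ("Less reuse than planned",            0.70, 3),
    ("End users resist the system",        0.40, 3),
]

# Sort by probability (descending), then by impact severity, so management
# can draw a cutoff line and focus on the high-order risks at the top.
for name, prob, impact in sorted(risks, key=lambda r: (-r[1], r[2])):
    print(f"{name:<38} p={prob:.2f} impact={impact}")
```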
CTC stands for condition-transition-consequence. The condition is first stated, and then,
based on this condition, sub-conditions are derived. The effects of these sub-conditions are
then determined in order to refine the risk. This refinement helps expose the underlying
risks and makes it easier for the project manager to analyze the risk in greater detail.
The reliability growth group of models measures and predicts the improvement of
reliability through the testing process. A growth model represents the reliability or failure
rate of a system as a function of time or of the number of test cases. Models included in
this group are as follows.
    1. Coutinho Model – Coutinho adapted the Duane growth model to represent the software
       testing process. Coutinho plotted the cumulative number of deficiencies discovered and the
       number of correction actions made vs. the cumulative testing weeks on log-log paper. Let
       N(t) denote the cumulative number of failures and let t be the total testing time. The failure
       rate, λ(t), of the model can be expressed as
               λ(t) = N(t) / t = β0 t^(−β1)
       where β0 and β1 are the model parameters. The least squares method can be used to
       estimate the parameters of this model.
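A minimal least-squares fit of the two parameters on the log-log scale, using NumPy and
hypothetical failure data:

```python
import numpy as np

# Hypothetical data: cumulative failures N(t) at cumulative testing weeks t.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
N = np.array([10.0, 16.0, 26.0, 42.0, 68.0])
lam = N / t                         # observed failure rate lambda(t) = N(t)/t

# On log-log paper lambda(t) = b0 * t**(-b1) is a straight line:
# log(lam) = log(b0) - b1*log(t); fit slope and intercept by least squares.
slope, intercept = np.polyfit(np.log(t), np.log(lam), 1)
b0, b1 = np.exp(intercept), -slope
print(f"beta0 ~ {b0:.2f}, beta1 ~ {b1:.2f}")
```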
    2. Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull
       growth model for predicting the failure rate of software during testing. The cumulative
       number of failures at time t, m(t), can be expressed as
               m(t) = a0 [b(t)]^β
       where a0 and β are the unknown parameters. The function b(t) can be taken as the number
       of test cases or the total testing time. Similarly, the failure rate function at time t is given by
               λ(t) = m′(t) = a0 β b′(t) [b(t)]^(β−1)
       Wall and Ferguson tested this model using some software failure data and observed that the
       failure data correlate well with the model.
       Reliability growth models are mathematical models used to predict the reliability of a
      system over time. They are commonly used in software engineering to predict the reliability
      of software systems and to guide the testing and improvement process.
7.1 Types of reliability growth models:
   1. Non-homogeneous Poisson Process (NHPP) Model: This model is based on the assumption
      that the number of failures in a system follows a Poisson distribution. It is used to model the
      reliability growth of a system over time and to predict the number of failures that will occur
      in the future.
   2. Duane Model: This model is based on the assumption that the rate of failure of a system
      decreases over time as the system is improved. It is used to model the reliability growth of a
      system over time and to predict the reliability of the system at any given time.
   3. Gooitzen Model: This model is based on the assumption that the rate of failure of a system
      decreases over time as the system is improved, but that there may be periods of time where
      the rate of failure increases. It is used to model the reliability growth of a system over time
      and to predict the reliability of the system at any given time.
    4. Littlewood Model: This model is based on the assumption that the rate of failure of a system
       decreases over time as the system is improved, but that there may be periods of time where
       the rate of failure remains constant. It is used to model the reliability growth of a system
       over time and to predict the reliability of the system at any given time.
Reliability growth models are useful tools for software engineers, as they can help to
predict the reliability of a system over time and to guide the testing and improvement
process. They can also help organizations to make informed decisions about the allocation
of resources and to prioritize improvements to the system.
It is important to note that reliability growth models are only predictions, and actual
results may differ from the predictions. Factors such as changes in the system, changes in
the environment, and unexpected failures can impact the accuracy of the predictions.
7.2 Advantages of Reliability Growth Models:
   1. Predicting Reliability: Reliability growth models are used to predict the reliability of a
      system over time, which can help organizations make informed decisions about the
      allocation of resources and the prioritization of improvements to the system.
   2. Guiding the Testing Process: Reliability growth models can be used to guide the testing
      process, by helping organizations determine which tests should be run, and when they
      should be run, in order to maximize the improvement of the system’s reliability.
   3. Improving the Allocation of Resources: Reliability growth models can help organizations to
      make informed decisions about the allocation of resources, by providing an estimate of the
      expected reliability of the system over time, and by helping to prioritize improvements to
      the system.
   4. Identifying Problem Areas: Reliability growth models can help organizations to identify
      problem areas in the system, and to focus their efforts on improving these areas in order to
      improve the overall reliability of the system.
Disadvantages of Reliability Growth Models:
   1. Predictive Accuracy: Reliability growth models are only predictions, and actual results may
      differ from the predictions. Factors such as changes in the system, changes in the
      environment, and unexpected failures can impact the accuracy of the predictions.
   2. Model Complexity: Reliability growth models can be complex, and may require a high level
      of technical expertise to understand and use effectively.
   3. Data Availability: Reliability growth models require data on the system’s reliability, which
      may not be available or may be difficult to obtain.
The Jelinski-Moranda Software Reliability Model is a mathematical model used to predict the
reliability of software systems. It was developed by Z. Jelinski and P. Moranda in 1972 and
is based on the assumption that the rate of software failures follows a non-homogeneous Poisson
process. This model assumes that the software system can be represented as a series of
independent components, each with its own failure rate. The failure rate of each component is
assumed to be constant over time. The model assumes that software failures occur randomly over
time and that the probability of failure decreases as the number of defects in the software is
reduced.
The Jelinski-Moranda model uses an exponential distribution to model the rate of fault detection
and assumes that the fault detection rate is proportional to the number of remaining faults in the
software. The model can be used to predict the number of remaining faults in the software and to
estimate the time required to achieve the desired level of reliability.
8.1 Assumptions Based on Jelinski-Moranda Model
    • The number of faults in the software is known.
    • The rate of fault detection is constant over time.
    • The software system operates in a steady-state condition.
    • The faults in the software are assumed to be independent and identically distributed.
    • The fault removal process is assumed to be perfect, meaning that once a fault is detected, it
       is immediately removed without introducing any new faults.
    • The testing process is assumed to be exhaustive, meaning that all faults in the software will
       eventually be detected.
    • The model assumes that the software system will not be modified during the testing period
       and that the number and types of faults in the software will remain constant.
    • The Jelinski-Moranda model assumes that faults are introduced into the software during the
       development process and that they are not introduced by external factors such as hardware
       failures or environmental conditions.
    • The model assumes that the testing process is carried out using a specific testing
       methodology and that the results are consistent across different testing methodologies.
One limitation of the Jelinski-Moranda model is that it assumes a constant fault detection rate,
which may not be accurate in practice. Additionally, the model does not take into account factors
such as software complexity, hardware reliability, or user behavior, which can also affect the
reliability of the software system.
Overall, the Jelinski-Moranda model is a useful tool for predicting software reliability, but it
should be used in conjunction with other techniques and methods for software testing and quality
assurance.
The Jelinski-Moranda (J-M) model is one of the earliest software reliability models. Many existing
software reliability models are variants or extensions of this basic model.
The JM model uses the following equation to calculate the software reliability at a given time t:
R(t) = R(0) * exp(-λt)
where R(t) is the reliability of the software system at time t, R(0) is the initial reliability of
the software system, λ is the failure rate of the system, and t is the time elapsed since the
software was first put into operation.
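A minimal sketch evaluating this equation; the R(0) and λ values below are arbitrary
illustrative choices:

```python
import math

def reliability(t, r0=1.0, lam=0.02):
    """R(t) = R(0) * exp(-lambda * t), per the equation above."""
    return r0 * math.exp(-lam * t)

print(reliability(10))   # ~0.819: reliability after 10 time units for lam = 0.02
```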
     The model is mainly suited to predicting short-term reliability and may not be suitable
     for predicting the long-term reliability of a software system.
Future Developments
  • Getting Used to Agile Development: Adapting the paradigm to Agile processes and taking
     into account the difficulties associated with dynamic and iterative development.
  • Taking Complex Systems into Account: Investigation of model adaption for distributed
     software systems with complicated architectures to address dependability.
  • Monitoring Reliability in Real Time: Creation of techniques for software dependability
     prediction and monitoring in real-time throughout development and operation.
  • Combining DevOps Methods with Integration: Investigating the model’s integration with
     DevOps to provide ongoing feedback on the dependability of software across the
     development process.
  • Interdisciplinary Cooperation: Working together with professionals in adjacent domains
     to develop a comprehensive strategy for software reliability assessment.
  • Open-Source Model Creation: Encouragement of cooperation and contributions for model
     enhancement through the promotion of open-source projects.