OOSE Unit 2

The document provides a comprehensive overview of software requirements, categorizing them into functional, non-functional, and domain requirements, along with their significance in software engineering. It discusses the importance of Software Requirements Specification (SRS) documents, the FURPS model for classifying software quality attributes, and the requirement engineering process, which includes activities like elicitation, validation, and management. A case study on SRS for a credit card processing system is also included to illustrate the application of these concepts.


UNIT II - ANALYSIS OF SOFTWARE REQUIREMENTS AND ESTIMATION


Software Requirements: Functional and Non-Functional Requirements -
FURPS - Software Requirement Specification (SRS) - Characteristics of SRS -
IEEE Standard Requirements Documents - Requirements Analysis - Data Flow
Diagrams (DFD) - Estimation of Software Project - The COCOMO Model - Risk
Management - Reliability Models - Jelinski and Moranda Model.
Case study: SRS for Credit card processing system.

INTRODUCTION
1. Requirement:
• A requirement is a specification of a need or want.
A requirement can range from a high-level, abstract statement of a service or a
system constraint to a detailed mathematical functional specification.
Types of requirements:

1.1 Software Requirements:


It is a field within software engineering that deals with establishing the needs of
stakeholders that are to be solved by software.

A software requirement can be of 3 types:


1. Functional requirements
2. Non-functional requirements
3. Domain requirements

1.1.1 Functional and Non-Functional Requirements:


Software system requirements can be classified into Functional and Non-Functional
requirements.
(i) Functional Requirements:
 A functional requirement describes what a software system should do.
 Functional requirements describe the behavior of the system as it relates to
the system's functionality.

 E.g., withdrawing money from an ATM


 When expressed as user requirements, functional requirements are usually
described in an abstract way that can be understood by system users.
 More specific functional system requirements describe the system functions, its inputs
and outputs, exceptions, etc., in detail.
 There are many ways of expressing functional requirements, e.g., natural language, a
structured or formatted language with no rigorous syntax, and a formal specification
language with proper syntax.
For example,
a) Examples of functional requirements for the MHC-PMS(mental health care patient
management system ) used to maintain information about patients receiving treatment for
mental health problems:

 A user shall be able to search for and book appointments in various clinics.
 The system shall generate a list of patients who are expected to
attend appointments that day.
 Each staff member shall be uniquely identified by his or her eight-digit
employee number.
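A requirement written this precisely can be translated directly into a testable check. The sketch below is purely illustrative; the function names and the registry structure are assumptions, not part of any real MHC-PMS:

```python
import re

def is_valid_employee_number(emp_no: str) -> bool:
    """Check the requirement: each staff member is identified
    by an eight-digit employee number."""
    return bool(re.fullmatch(r"\d{8}", emp_no))

def register(registry: set, emp_no: str) -> None:
    """Enforce both the format rule and uniqueness when registering staff."""
    if not is_valid_employee_number(emp_no):
        raise ValueError("employee number must be exactly eight digits")
    if emp_no in registry:
        raise ValueError("employee number already in use")
    registry.add(emp_no)
```

Because the requirement names an exact format ("eight-digit"), the test is unambiguous; a vaguer wording such as "a short employee code" could not be checked this way.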

b) Examples of functional requirements for LIBSYS (an online library management
system):
 The user shall be able to search for a book by author name, publisher,
or book title.
 The system shall provide the appropriate book for the user to read or
download from the document store.
 The user shall be able to renew the account using his or her account
details.
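The first LIBSYS requirement (search by author, publisher, or title) can be sketched as a simple filter. The `Book` data model and function name here are hypothetical, chosen only to make the requirement concrete:

```python
from dataclasses import dataclass

@dataclass
class Book:
    title: str
    author: str
    publisher: str

def search_books(catalogue, query):
    """Return books whose title, author, or publisher
    contains the query string (case-insensitive)."""
    q = query.lower()
    return [b for b in catalogue
            if q in b.title.lower()
            or q in b.author.lower()
            or q in b.publisher.lower()]
```

Note that the requirement as stated says nothing about ranking, partial matches, or performance; those would be covered by separate (non-functional) requirements.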

(ii) Non-Functional Requirements:


 They are also called non-behavioral requirements.
 They specify properties and constraints of the system.
 A non-functional requirement essentially specifies how the
system should behave; it is a constraint upon the system's behavior.
 Non-functional requirements are requirements that are not directly concerned with
the specific services delivered by the system to its users.
 E.g., the response time of a balance check on a customer account

 They basically deal with issues like:


− Portability
− Security
− Maintainability
− Reliability
− Scalability
− Performance
− Reusability
− Flexibility
 Non-functional requirements are often more critical than individual functional
requirements.
 Non-functional requirements such as reliability, safety, and confidentiality
requirements are particularly important for critical systems.
 Failing to meet a non-functional requirement can mean that the whole
system is unusable. For example, if an aircraft system does not meet its reliability
requirements, it will not be certified as safe for operation.
 Non-functional requirements arise through user needs, because of budget
constraints, organizational policies, the need for interoperability with other software
or hardware systems, or external factors such as safety regulations. Figure given below
is a classification of non-functional requirements.


(a) Product requirements:


These requirements specify the behavior of the software. Examples include
performance requirements on how fast the system must execute and how much memory
it requires, reliability requirements that set out the acceptable failure rate, security
requirements, and usability requirements.
(b) Organizational requirements:
These requirements are broad system requirements derived from policies and procedures
in the customer’s and developer’s organization.
Examples include operational process requirements that define how the system will be
used, development process requirements that specify the programming language, the
development environment or process standards to be used, and environmental
requirements that specify the operating environment of the system.
(c) External requirements :
This broad heading covers all requirements that are derived from factors
external to the system and its development process.
These may include regulatory requirements that set out what must be done for the
system to be approved for use by a regulator, such as a central bank; legislative
requirements that must be followed to ensure that the system operates within the law;
and ethical requirements that ensure that the system will be acceptable to its users
and the general public.
Metrics for specifying non-functional requirements:
These are measurable characteristics used when the system is being tested to check
whether or not it has met its non-functional requirements.
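For a metric to be usable, the requirement must state a number, e.g. "search shall complete within 2 seconds" rather than "search shall be fast". A minimal sketch of checking such a metric automatically; the 2-second threshold is an assumed figure, not from any standard:

```python
import time

RESPONSE_TIME_LIMIT = 2.0  # seconds -- assumed threshold from the NFR

def meets_response_time(operation, *args):
    """Run an operation and check its elapsed time
    against the response-time metric."""
    start = time.perf_counter()
    operation(*args)
    elapsed = time.perf_counter() - start
    return elapsed <= RESPONSE_TIME_LIMIT
```

The same pattern applies to other metrics in the table above (memory, failure rate, etc.): each becomes a measurement plus a comparison against the stated bound.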


Difference between Functional and Non-Functional Requirements


Functional Requirement                        | Non-Functional Requirement
Defines functions and features of the         | Defines properties and constraints
software system                               | of the software system
Describes "What should the system do?"        | Describes "How should the system do it?"
Related to business functionalities           | Related to the performance of the business
Easy to test                                  | Difficult to test
Related to individual system requirements     | Related to the system as a whole
Failure to meet a functional requirement      | Failure to meet a non-functional requirement
may degrade the system                        | will make the system unusable

(iii) Domain Requirements


 Domain requirements are requirements that are characteristic of a particular
category or domain of projects.
 These requirements are derived from the application domain and describe system
characteristics and features that reflect the domain.
 If domain requirements are not satisfied, the system may be unworkable.
 For instance, in academic software that maintains the records of a school or college,
the ability to access the list of faculty and the list of students of
each grade is a domain requirement.

E.g., List the stakeholders and all types of requirements for an online train
reservation system (7) (N/D'17)
Soln:
Stakeholders:
 Passenger
 Database Administrator
 Booking clerk
 Bank
Functional Requirements:
▪ The system should allow the passenger to register and log in.
▪ The system should provide a search option so that the user can search for the
required train and the required number of reservations.
▪ Online bookings made by the customer must be associated with the customer's
registered account.


▪ Booking confirmation must be sent to the passenger at the specified contact
details/mail id.
▪ The system should allow the user to make online payments.
▪ The system should allow the user to cancel the booking; half of the amount paid
by the customer must be refunded to him/her.
Non-Functional Requirements:
▪ The system should be available to the user 24/7 (availability).
▪ The system should accept payments via different payment options like
PayPal, wallet, cards, net banking, etc.
▪ CAPTCHA and encryption should be used to keep bots from booking tickets
(security).
▪ The response time of the system should be low for searching, booking, and
cancellation operations (performance).
▪ The user interface should be simple and easy to use.
1.2 User Requirements
 User requirements describe mainly functional and rarely non-functional
requirements.
 They should be easily understandable by users who do not have detailed technical
knowledge.
 User requirements, often referred to as user needs, describe what the user does with
the system (such as the activities that users must be able to perform).
 User requirements tell what an application must/should do to satisfy the user's needs.
 User requirements are statements in natural language, diagrams, and tables.
 They are a list of features an application must/should have.
 They are written for customers.
 They are used as the primary input for creating system requirements.
 User requirements are quite general.
➢ User requirements are generally documented in a User Requirements
Document (URD) using narrative text.
➢ User requirements talk about the problem domain, the world of the user. They describe
what effects need to be achieved. These effects are the combined responsibility of the
software, the hardware, and the users.

Fig: Readers of user requirements


Fig: User requirements for mental health care patient management system
Problems with natural language:
− Lack of clarity.
− Requirements confusion: functional and non-functional requirements tend to be
mixed up.
− Requirements amalgamation: several different requirements may be expressed
together.
Guidelines for writing user requirements:
▪ Invent a standard format and use it for all requirements
▪ Use language in a consistent way. Use 'shall' for mandatory requirements
▪ Use text highlighting to identify key parts of the requirement
▪ Avoid using computer jargon (technical terms)

1.3 System Requirements


 The system requirements provide more specific information about the services and
functions of the system that is to be implemented.
 It should simply describe the external behaviour of the system and its
operational constraints.
 System requirements tell what system should have to be able to run the program:
Hardware: CPU, memory, disk space, etc.
Software: OS, libraries, packages, etc.
 The system requirements document (sometimes called a functional
specification) should define exactly what is to be implemented.
 It may be part of the contract between the system buyer and the software
developers.

Fig: Readers of system requirements


 It serves as a basis for designing the system.

 System requirements are expanded versions of the user requirements.

Fig: System requirements for mental health care patient management system

System requirements talk about the solution domain, the world of the software logic.
They describe what the software must do.
For instance, for bookkeeping software:
• The user requirement is to compute the correct revenue.
• But the system requirement is only to compute the sum of the partial
revenues entered by the user.
• If the user enters incorrect partial revenues, the software is not required to
magically correct them.
Different ways of writing a System Requirements specification:

Notation                     | Description
Natural language sentences   | The requirements are written as numbered sentences in
                             | natural language. Each sentence should express one
                             | requirement.
Structured natural language  | The requirements are written in natural language on a
                             | standard form or template.
Design description languages | Make use of a programming-language-like notation.
                             | Rarely used.
Graphical notations          | UML use case and sequence diagrams are frequently
                             | used.
Mathematical specifications  | Rarely used, as customers don't understand
                             | mathematical (formal) notation.
Structured language specifications :
• It is a popular method for writing requirements .
• It is a standard way of representing requirements.
Graphical notations:
• Extra information can be added when the requirements are written using natural
language. This information can be represented using tables or graphical models.


• One way of using a graphical model is the sequence diagram.

• A sequence diagram represents the sequence of actions that the user performs while
interacting with the system.
Example: Sequence diagram for withdrawal of cash from ATM


Use Case diagram for ATM:

2. FURPS :
FURPS is an acronym representing a model for classifying software quality attributes
(functional and non-functional requirements):
• Functionality - Capability (Size & Generality of Feature Set), Reusability (Compatibility,
Interoperability, Portability), Security (Safety & Exploitability)
• Usability (UX) - Human Factors, Aesthetics, Consistency, Documentation, Responsiveness

• Reliability - Availability (Failure Frequency (Robustness/Durability/Resilience), Failure


Extent & Time-Length (Recoverability/Survivability)), Predictability (Stability), Accuracy
(Frequency/Severity of Error)
• Performance - Speed, Efficiency, Resource Consumption (power, ram, cache, etc.),
Throughput, Capacity, Scalability
• Supportability (Serviceability, Maintainability, Sustainability, Repair Speed) - Testability,
Flexibility (Modifiability, Configurability, Adaptability, Extensibility, Modularity),
Installability, Localizability
The model, developed at Hewlett-Packard, was first publicly elaborated by Grady and
Caswell. FURPS+ is now widely used in the software industry. The + was added later,
after various campaigns at HP to extend the acronym to emphasize various attributes.


3. Software Requirement Specification Document (SRS Document):


▪ The software requirements document describes the specification of the system.
▪ It should include both the user requirements & system requirements.
▪ The software requirements are the base for creating the Software Requirements
Specifications (SRS).
Use of SRS:
▪ The SRS is useful in estimating cost, planning team activities, performing
tasks, and tracking the team's progress throughout the development activity.
▪ Software designers use IEEE Std 830-1998 as the basis for the entire SRS.

Software Requirements Specification


<TITLE>
Version 1.0 approved
Prepared by <author>
<organization>
<date created>


3.1 Characteristics of SRS:


 Correctness
 Unambiguous
 Specific
 Completeness
 Traceable
 Consistent
Users of a Requirements document:


3.2 REQUIREMENT ENGINEERING PROCESS:


Requirement Engineering:
The requirements for a system are the descriptions of what the system should
do— the services that it provides and the constraints on its operation. These
requirements reflect the needs of customers for a system.
The process of finding out, analyzing, documenting and checking these services
and constraints is called requirements engineering (RE).
Requirements engineering builds a bridge to design and construction.
Requirement engineering processes may include four high-level activities:
 Feasibility study (assessing whether the system is useful to the business)
 Requirements elicitation and analysis (discovering requirements)
 Requirements validation (checking that the requirements actually define the
system that the customer wants)
 Requirements management (managing changes to these requirements)

Fig: Requirement Engineering Process


The activities are organized as an iterative process around a spiral, with the
output as system requirements document.
The amount of time and effort devoted to each activity in each iteration depends on the
stage of the overall process and the type of system being developed.


Fig: Spiral model of requirement engineering process


Requirement Engineering Task or Activities:
Requirements engineering includes 7 distinct tasks:
Inception—Establish a basic understanding of the problem and the nature of
the solution.
Elicitation—Draw out the requirements from stakeholders (requirements
discovery).
Elaboration—Create an analysis model that represents information,
functional, and behavioral aspects of the requirements.
Negotiation—Agree on a deliverable system that is realistic for developers
and customers.
Specification—Describe the requirements formally or informally.
Validation—Review the requirement specification for errors, ambiguities,
omissions, and conflicts.
Requirements management—Manage changing requirements.
▪ Some of these tasks may occur in parallel


▪ All strive to define what the customer wants


(a) Inception Task
During inception, the requirements engineer asks a set of questions to establish,
✓ A basic understanding of the problem
✓ The people who want a solution
✓ The nature of the solution that is desired
✓ The effectiveness of preliminary communication and collaboration between the
customer and the developer
Through these questions, the requirements engineer needs to…
✓ Identify the stakeholders
✓ Recognize multiple viewpoints
✓ Work toward collaboration
✓ Break the ice and initiate the communication
(b) Elicitation
It certainly seems simple enough—ask the customer, the users, and others what the
objectives for the system or product are, what is to be accomplished, how the system or
product fits into the needs of the business, and finally, how the system or product is to
be used on a day-to-day basis.
But it isn't simple; it's very hard. Why?
The problems are,
– Problem of Scope
• The boundary of the system is ill-defined
– Problem of Understanding
• The customer/users are not completely sure of what is needed
– Problem of volatility
• The requirements change over time
• To help overcome these problems, requirements engineers must approach the
requirements gathering activity in an organized manner
(c) Elaboration
• Expand requirement into analysis model
• Elaboration is driven by the creation and refinement of user scenarios that describe
how the end user (and other actors) will interact with the system.
• The elements of the analysis model are:
➢ Scenario-based elements
- Functional—processing narratives for software functions
- Use-case—descriptions of the interaction between an "actor" and the
system
➢ Class-based elements


- Implied by scenarios
➢ Behavioral elements
- State diagram
➢ Flow-oriented elements
- Data flow diagram
(d) Negotiation
o It isn’t unusual for customers and users to ask for more than can be
achieved, given limited business resources.
o Negotiation is done on agreeing on a deliverable system that is realistic for
developers and customers
o SW team & other project stakeholders negotiate the priority, availability, and cost
of each requirement.
o The process is:
– Identify the key stakeholders
• These are the people who will be involved in the negotiation
– Determine each of the stakeholders “win conditions”
• Win conditions are not always obvious
– Negotiate
• Work toward a set of requirements that lead to “win-win”
(e) Specification
• In the context of computer-based systems (and software), the term specification
means different things to different people. The specification is the final work product
produced by the requirements engineer.
A specification can be:
▪ A written document
▪ A set of models
▪ A formal mathematical model
▪ A collection of user scenarios (use-cases)
▪ A prototype
(f) Validation
Examine the specification to ensure that the software requirements are consistent,
unambiguous, error-free, etc.
Checklist for validation:


(g) Requirement management:


Requirements management is a set of activities that help the project team identify,
control, and track requirements and changes to requirements at any
time as the project proceeds.
3.2.1 Feasibility Studies:
o A feasibility study is a study made to decide whether or not the proposed
system is worthwhile.
o Input to feasibility study: Outline description of the system.
o Output from feasibility study: Feasibility report.
o A good feasibility study will show the strengths and deficits before the project is
planned or budgeted for.
Feasibility report:
It is a document which recommends whether or not it is worth carrying on with the
requirements engineering and system development process.
Feasibility studies focus on:
o Does the system contribute to the overall objectives of the organization?
o Can the system be implemented using current technology?
o Can the system be implemented within the given budget and schedule?
o Can the system be integrated with the other systems which are already in
place?
There are many different types of feasibility studies;
• Technical Feasibility – Does the company have the technological resources to
undertake the project? Are the processes and procedures conducive to project
success?
• Schedule Feasibility – Does the company currently have the time resources to
undertake the project? Can the project be completed in the available time?
• Economic Feasibility – Given the financial resources of the company, is the project
something that can be completed? The economic feasibility study is more commonly
called the cost/benefit analysis.
• Cultural Feasibility – What will be the impact on both local and general cultures?
What sort of environmental implications does the feasibility study have?


• Legal/Ethical Feasibility – What are the legal implications of the project? What sort
of ethical considerations are there? You need to make sure that any project undertaken
will meet all legal and ethical requirements before the project is on the table.
• Resource Feasibility – Do you have enough resources, what resources will be
required, what facilities will be required for the project, etc.
• Operational Feasibility – This measures how well your company will be able to
solve problems and take advantage of opportunities that are presented during the
course of the project
• Marketing Feasibility – Will anyone want the product once it's done? What is the target
demographic? Should there be a test run? Is there enough buzz that can be created for
the product?
• Real Estate Feasibility – What kind of land or property will be required to
undertake the project? What is the market like? What are the zoning laws? How will
the business impact the area?
• Comprehensive Feasibility – This takes a look at the various aspects involved in the
project – marketing, real estate, cultural, economic, etc. When undertaking a new
business venture, this is the most common type of feasibility study performed.

3.2.2 Requirement Elicitation And Analysis


It includes the following activities:
a) Requirements discovery
b) Requirements classification
c) Requirements prioritization
d) Requirements documentation

(a) Requirement discovery:


Requirement discovery is a process of Interacting with stakeholders to discover their
requirements. Domain requirements are also discovered at this stage.


Various methods of requirement discovery:


Interviewing
Viewpoints
Scenarios
Use cases
Ethnography
Interviewing
In formal or informal interviewing, the RE team puts questions to stakeholders about the
system that they use and the system to be developed.
• There are two types of interview:
• Closed interviews, where a pre-defined set of questions is answered.
• Open interviews, where there is no pre-defined agenda and a range of issues is
explored with stakeholders.
Scenarios
Scenarios are real-life examples of how a system can be used. They should include
• A description of the system
• A description of the starting situation;
• A description of the normal flow of events;
• A description of what can go wrong;
• Information about other concurrent activities;
• A description of the state when the scenario finishes.
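The elements listed above can be captured in a simple record type so that every scenario is written down in the same shape. The field names below are illustrative assumptions, not a standard notation:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One scenario, with the elements a scenario should include."""
    description: str            # description of the system use
    starting_situation: str     # state when the scenario begins
    normal_flow: list           # normal flow of events
    exceptions: list            # what can go wrong
    concurrent_activities: list # other concurrent activities
    final_state: str            # state when the scenario finishes
```

Keeping scenarios in a uniform structure like this makes gaps obvious: an empty `exceptions` list, for example, signals that no one has asked what can go wrong.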


Viewpoints:
Viewpoints are a way of structuring the requirements to represent the perspectives of
different stakeholders. Stakeholders may be classified under different viewpoints.

Use cases:
− Use-cases are a scenario-based technique in the UML which identify the actors in
an interaction and which describe the interaction itself.
− A set of use cases should describe all possible interactions with the system.


Ethnography
− It is a technique of observation which is used to understand social and
organizational requirements.
− Two types:
• Requirements derived from the way in which people actually work
• Requirements derived from the cooperation and activities of other people
(b) Requirements classification
• Groups related requirements and organizes them into coherent clusters.
(c) Requirements Prioritization
• Prioritizing requirements and resolving requirements conflicts.
(d) Requirements documentation
• Requirements are documented and input into the next round of the spiral.

3.2.3. Requirement Validation


• Requirements validation is the process of checking whether the gathered
requirements represent the same system the customer really wants.
• During requirements validation, requirement errors are fixed.
• The cost of fixing a requirement error is higher than the cost of fixing an
implementation error.
Requirement checking can be done in the following manner:
− Validity: Does the system provide the functions which best support the
customer’s needs?
− Consistency: Are there any requirements conflicts?
− Completeness: Are all functions required by the customer included?
− Realism: Can the requirements be implemented according to budget and
technology?
− Verifiability: Can the requirements be checked?
Requirement validation techniques:

Requirement review: Manual analysis of requirements.


Test case generation: various test cases are developed to verify the requirements.


Automated consistency analysis: CASE tools can be used to test the
requirements.
Prototyping: The requirements can be checked using an executable model of the system.
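Parts of these validity checks can be partly automated over a requirements list. A minimal sketch, under the assumption that each requirement is an (id, text) pair and that a few vague words stand in for real verifiability heuristics:

```python
def check_requirements(requirements):
    """Run simple completeness, consistency, and verifiability checks
    over a list of (id, text) requirement pairs.
    Returns a list of problem reports (empty if all checks pass)."""
    problems = []
    seen_ids = set()
    for rid, text in requirements:
        if rid in seen_ids:  # consistency: no duplicate identifiers
            problems.append(f"{rid}: duplicate identifier")
        seen_ids.add(rid)
        if not text.strip():  # completeness: no empty requirements
            problems.append(f"{rid}: empty requirement (completeness)")
        # Vague words make a requirement hard to verify.
        for vague in ("fast", "user-friendly", "appropriate"):
            if vague in text.lower():
                problems.append(f"{rid}: vague term '{vague}' (verifiability)")
    return problems
```

Checks like validity and realism still need human review; tooling can only catch the mechanical problems.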
3.2.4. Requirement Management
Requirement management is the process of managing the changing requirements during
the requirements engineering process and system development. Why do the requirements
change?
➢ Requirements are always incomplete and inconsistent.
➢ The customer may specify the requirements from a business perspective that may
conflict with the end users' perspective.
➢ During the development of the system, its business and technical
environment may change.
Enduring And Volatile Requirements:
Enduring requirements:
- They are relatively stable requirements which derive from a core activity of the
organization and which relate directly to the domain.
- Eg: In Hospital management system, concerned with patients, doctors, nurse and
treatments are enduring requirements.
Volatile requirements:
- These are requirements which are likely to change during system
development or after the system becomes operational.
- Eg: In a hospital management system, a requirement may change after
government health care policies change.
Types of Volatile requirements:


Requirements management planning


• During the requirements engineering process, you have to plan:
– Requirements identification
• How requirements are individually identified;
– A change management process
• The process followed when analysing a requirements change;
– Traceability policies
• The amount of information about requirements relationships that is maintained;
– CASE tool support
• The tool support required to help manage requirements change
Traceability
Traceability is concerned with the relationships between requirements, their sources
and the system design
o Source traceability
- Links from requirements to stakeholders who proposed these
requirements;
o Requirements traceability
- Links between dependent requirements;
o Design traceability
- Links from the requirements to the design;
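These three kinds of links can be kept in a simple traceability table mapping each requirement to its source, its dependent requirements, and the design elements that realize it. The structure and names below are an assumed illustration:

```python
# One entry per requirement, holding all three kinds of traceability link.
traceability = {
    "R1": {
        "source": "ward nurse",          # source traceability
        "depends_on": [],                # requirements traceability
        "design": ["LoginComponent"],    # design traceability
    },
    "R2": {
        "source": "hospital manager",
        "depends_on": ["R1"],
        "design": ["ReportGenerator"],
    },
}

def impacted_by(req_id):
    """Requirements that directly depend on req_id --
    the set to re-examine when req_id changes."""
    return [r for r, links in traceability.items()
            if req_id in links["depends_on"]]
```

This is exactly the information a change-analysis step needs: when a proposed change touches R1, the table tells us R2 (and its design elements) must be re-assessed too.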

CASE tool support


• Requirements storage
– Requirements should be managed in a secure, managed data store.
• Change management


– The process of change management is a workflow process whose stages can be
defined and the information flow between these stages partially automated.
• Traceability management
– Automated retrieval of the links between requirements.
Requirements change management
o Should apply to all proposed changes to the requirements.
o Principal stages
▪ Problem analysis: Discuss requirements problem and propose change;
▪ Change analysis and costing: Assess effects of change on other requirements;
▪ Change implementation: Modify requirements document and other documents
to reflect change.

CLASSICAL ANALYSIS:
3.3 STRUCTURED SYSTEM ANALYSIS:
− Structured system analysis is a technique in which the system requirements are
converted into specifications.
− It is a mapping of problem domain to flows and transformations.
− System can be modeled using:
(i) ER Diagram – used to represent data model
(ii) Data flow diagram & Control flow diagram – to represent functional
model.
DATA FLOW DIAGRAM(DFD):
• A data-flow diagram (DFD) is a way of representing the flow of data through a process
or a system (usually an information system). The DFD also provides information about
the outputs and inputs of each entity and of the process itself. A DFD is also known as
a bubble chart.
• It is the starting point of the design phase. A DFD consists of a series of bubbles joined
by lines.
• The bubbles represent data transformations and the lines represent data flow
in the system.


• The DFD is presented in a hierarchical fashion. That is, the first data flow model
(sometimes called a level 0 DFD or context diagram) represents the system as a whole.
Subsequent data flow diagrams (level 1, 2, ...) refine the context diagram, providing
increasing detail with each subsequent level.

DFD symbols : There are mainly four symbols for DFD.

• A square defines a source or destination of system data. It is an external
entity.
• An arrow identifies data flow, i.e., data in motion. It is a pipeline through which
information flows.
• A circle represents a process that transforms incoming data flows into outgoing data
flows. A process can have its level mentioned in it.
• An open rectangle is a data store, i.e., data at rest or a temporary repository of data;
data stores are also called files or databases.
Rules for constructing DFD:
o Processes should be named and numbered for easy reference.
o The direction of flow is from top to bottom and from left to right. Data flows
from source to the destination.
o Process should be numbered, if they are exploded into lower level details.
o Process should not have only inputs or only outputs. It must have both
inputs and outputs.


o There should not be a direct flow between data store and external entity.

o The names of data stores, sources, and destinations should be in capital letters.
Process and data-flow names should begin with a capital letter.
o The level 0 data flow diagram should depict the software/system as a single
bubble;
o All arrows and bubbles should be labelled with meaningful names;
o Information flow continuity must be maintained from level to level.
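The construction rules above lend themselves to a small mechanical check. The sketch below is illustrative only (the node names are hypothetical, and only two of the rules are encoded), not a standard DFD tool:

```python
# Minimal sketch of a DFD rule checker (illustrative, not a standard tool).
# A flow is a (source, destination) pair; each node has a kind:
# "process", "entity" (external entity), or "store" (data store).

def check_dfd(nodes, flows):
    """Return a list of rule violations for a simple DFD."""
    problems = []
    # Rule: a process must have both inputs and outputs.
    for name, kind in nodes.items():
        if kind == "process":
            has_in = any(dst == name for _, dst in flows)
            has_out = any(src == name for src, _ in flows)
            if not (has_in and has_out):
                problems.append(f"process '{name}' lacks inputs or outputs")
    # Rule: no direct flow between a data store and an external entity.
    for src, dst in flows:
        kinds = {nodes[src], nodes[dst]}
        if kinds == {"store", "entity"}:
            problems.append(f"direct flow between store and entity: {src} -> {dst}")
    return problems

nodes = {"Customer": "entity", "Process order": "process", "Orders": "store"}
flows = [("Customer", "Process order"), ("Process order", "Orders"),
         ("Orders", "Customer")]   # last flow violates the store/entity rule
print(check_dfd(nodes, flows))
```

Running the example reports the single store-to-entity violation, illustrating how the rules act as a consistency check on a drawn diagram.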

Example 1: Data flow diagram for safe home system:


The SafeHome security function enables the homeowner to configure the security system
when it is installed, monitors all sensors connected to the security system, and interacts
with the homeowner through the Internet, a PC, or a control panel.
During installation, the SafeHome PC is used to program and configure the system. Each
sensor is assigned a number and type, a master password is programmed for arming and
disarming the system, and telephone number(s) are input for dialing when a sensor event
occurs.
When a sensor event is recognized, the software invokes an audible alarm attached to the
system. After a delay time that is specified by the homeowner during system configuration
activities, the software dials a telephone number of a monitoring service, provides
information about the location, reporting the nature of the event that has been detected.
The telephone number will be redialed every 20 seconds until telephone connection is
obtained. The homeowner receives security information via a control panel, the PC, or a
browser, collectively called an interface. The interface displays prompting messages and
system status information on the control panel, the PC, or the browser window.


Level 0:

Fig: Context-level DFD( Level 0) for the SafeHome Security function

The level 0 DFD must now be expanded into a level 1 data flow model.
Level 1 DFD :

Level 1 DFD for SafeHome security function


Level 2:

Example 2: Automated Railway Reservation System

Level 0 DFD:

Level 1 DFD


Level 2 DFD:

4. SOFTWARE PROJECT ESTIMATION:

Software project estimation is a form of problem solving, and in most cases, the problem to be solved (i.e., developing a cost and effort estimate for a software project) is too complex to be considered in one piece.
The decomposition approach was discussed from two different points of view:
• Decomposition of the problem and
• Decomposition of the process.
Estimation uses one or both forms of partitioning.
Software Sizing:
The accuracy of a software project estimate is predicated on a number of things:

(1) The degree to which you have properly estimated the size of the product to be built;
(2) The ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects);
(3) The degree to which the project plan reflects the abilities of the software team; and
(4) The stability of product requirements and the environment that supports the software engineering effort.


The size can be estimated using two approaches:
• If a direct approach is taken, size can be measured in lines of code (LOC).
• If an indirect approach is chosen, size is represented as function points (FP).

Putnam and Myers [Put92] suggest four different approaches to the sizing problem:

• "Fuzzy logic" sizing. This approach uses the approximate reasoning techniques that are the cornerstone of fuzzy logic. To apply this approach, the planner must identify the type of application, establish its magnitude on a qualitative scale, and then refine the magnitude within the original range.

• Function point sizing. The planner develops estimates of the information domain characteristics.
• Standard component sizing. Software is composed of a number of different "standard components" that are generic to a particular application area.

For example:

The standard components for an information system are subsystems, modules, screens, reports, interactive programs, batch programs, files, LOC, and object-level instructions. The project planner estimates the number of occurrences of each standard component and then uses historical project data to estimate the delivered size per standard component.

• Change sizing. This approach is used when a project encompasses the use of existing software that must be modified in some way as part of a project. The planner estimates the number and type (e.g., reuse, adding code, changing code, and deleting code) of modifications that must be accomplished.
Problem-Based Estimation:
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to "size" each element of the software and
(2) As baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections.

LOC or FP is then estimated for each function.


Baseline productivity metrics are then applied to the appropriate estimation variable, and cost or effort for the function is derived.
Function estimates are combined to produce an overall estimate for the entire project.

Using historical data, the project planner computes an expected value by considering the following estimates:
1. Optimistic
2. Most likely
3. Pessimistic

A three-point or expected value can then be computed. The expected value for the estimation variable (size) S is computed as a weighted average of the optimistic (s_opt), most likely (s_m), and pessimistic (s_pess) estimates:

S = (s_opt + 4s_m + s_pess) / 6

Here the most likely estimate is weighted four times as heavily as the optimistic and pessimistic estimates.
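The three-point formula is a one-liner in code; the three LOC figures in the sample call are assumed values, not taken from any particular project:

```python
def expected_size(s_opt, s_m, s_pess):
    """Three-point (beta/PERT) estimate: weighted average with a 4x
    weight on the most likely value."""
    return (s_opt + 4 * s_m + s_pess) / 6

# e.g. optimistic 4600, most likely 6900, pessimistic 8600 LOC
print(expected_size(4600, 6900, 8600))   # 6800.0
```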


LOC based Estimation:


Example of LOC-based Estimation:

Solution:

For estimating the given application, we consider each module as a separate function; the corresponding lines of code are estimated in the following table:

Function                                        Estimated LOC
User interface and control facilities (UICF)             2500
Two-dimensional geometric analysis (2DGA)                5600
Three-dimensional geometric analysis (3DGA)              6450
Database management (DBM)                                3100
Computer graphics display facilities (CGDF)              4740
Peripheral control function (PCF)                        2250
Design analysis modules (DAM)                            7980
Estimated lines of code                                 32620
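Given baseline metrics from past projects, the LOC total above can be turned into effort and cost. The productivity of 620 LOC per person-month and the burdened labor rate of $8,000 per month below are assumed historical baselines, used only to illustrate the arithmetic:

```python
def loc_estimates(total_loc, productivity_loc_pm, labor_rate_per_month):
    """Derive effort and cost from an LOC estimate using historical
    baseline metrics (productivity in LOC per person-month)."""
    effort_pm = total_loc / productivity_loc_pm        # person-months
    cost = effort_pm * labor_rate_per_month            # currency units
    cost_per_loc = labor_rate_per_month / productivity_loc_pm
    return effort_pm, cost, cost_per_loc

# Assumed historical baselines (illustrative): 620 LOC/pm, $8000/pm
effort, cost, per_loc = loc_estimates(32620, 620, 8000)
print(round(effort), round(cost), round(per_loc, 2))
```

With these assumed baselines, the 32,620-LOC estimate translates into roughly 53 person-months of effort at about $12.90 per line of code.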


Function Point Based Estimation(FP):

A Function Point (FP) is a unit of measurement to express the amount of business


functionality, an information system (as a product) provides to a user. FPs measure
software size. They are widely accepted as an industry standard for functional sizing.

For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these are −

ISO Standards
• COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size
measurement method.

• FiSMA − ISO/IEC 29881:2008 Information technology - Software


and systems engineering - FiSMA 1.1 functional size measurement method.

• IFPUG − ISO/IEC 20926:2009 Software and systems engineering -


Software measurement - IFPUG functional size measurement method.

• Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function Point Analysis - Counting Practices Manual.


• NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size


measurement method version 2.1 - Definitions and counting guidelines for the application of
Function Point Analysis.

Object Management Group Specification for Automated Function Point


Object Management Group (OMG), an open membership and not-for-profit computer industry
standards consortium, has adopted the Automated Function Point (AFP) specification led by the
Consortium for IT Software Quality. It provides a standard for automating FP counting according
to the guidelines of the International Function Point User Group (IFPUG).

Function Point Analysis (FPA) technique quantifies the functions contained within software in
terms that are meaningful to the software users. FPs consider the number of functions being
developed based on the requirements specification.

Function Points (FP) Counting is governed by a standard set of rules, processes and guidelines
as defined by the International Function Point Users Group (IFPUG). These are published in
Counting Practices Manual (CPM).
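The IFPUG-style counting process described above can be sketched numerically. The average complexity weights below are the commonly published values for the five information-domain types; the sample counts and the fourteen value-adjustment ratings are illustrative assumptions:

```python
# Sketch of an IFPUG-style function point computation. The average
# complexity weights are the commonly cited values; the sample counts
# and value-adjustment ratings below are illustrative.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, value_adjustment_factors):
    """counts: {domain type: number}; the 14 adjustment factors are
    each rated 0 (no influence) to 5 (essential)."""
    ufp = sum(counts[d] * AVG_WEIGHTS[d] for d in counts)  # unadjusted FP
    vaf = 0.65 + 0.01 * sum(value_adjustment_factors)      # value adj. factor
    return ufp * vaf

counts = {"EI": 24, "EO": 16, "EQ": 22, "ILF": 4, "EIF": 2}
fp = function_points(counts, [3] * 14)   # all 14 factors rated "average"
print(round(fp, 1))
```

Here the unadjusted count is 318, and with all fourteen factors rated average (VAF = 1.07) the adjusted size comes to about 340 function points.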

5. The COCOMO model


• COCOMO is one of the most widely used software estimation models in the world. This model was developed in 1981 by Barry Boehm to give an estimate of the number of man-months it will take to develop a software product. COCOMO predicts the effort and schedule of a software product based on the size of the software. COCOMO stands for "COnstructive COst MOdel".

COCOMO has three different models that reflect the complexity –


• Basic model
• Intermediate model
• Detailed model.
Similarly, there are three classes of software projects.
1) Organic mode: In this mode, relatively small, simple software projects with a small team are handled. Such a team should have good application experience with less rigid requirements.
2) Semi-detached projects: In this class, intermediate projects, in which teams with mixed experience levels work, are handled. Such projects may have a mix of rigid and less-than-rigid requirements.
3) Embedded projects: In this class, projects with tight hardware, software, and operational constraints are handled.

Let us understand each model in detail.

1) Basic model: The basic COCOMO model estimates the software development effort using only lines of code. The equations in this model are:

E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
P = E / D

where E is the effort applied in person-months, D is the development time in chronological months, KLOC is the size of the project in thousands of lines of code, and P is the total number of persons required to accomplish the project.

The coefficients a_b, b_b, c_b, d_b for the three modes are:

Mode            a_b    b_b    c_b    d_b
Organic         2.4    1.05   2.5    0.38
Semi-detached   3.0    1.12   2.5    0.35
Embedded        3.6    1.20   2.5    0.32
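The basic-model equations can be packaged as a small calculator. The coefficients are the standard published basic COCOMO constants; the sample call is illustrative:

```python
# Basic COCOMO calculator using the standard (a_b, b_b, c_b, d_b)
# coefficients for the three project modes.
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b            # E, person-months
    duration = c * effort ** d        # D, chronological months
    persons = effort / duration       # P
    return effort, duration, persons

e, d, p = basic_cocomo(30, "semidetached")
print(round(e), round(d), round(p))   # ≈ 135 PM, 14 months, 10 persons
```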
Merits of basic COCOMO model


The basic COCOMO model is good for quick, early, rough order-of-magnitude estimates of software projects.
Limitations of basic model
1. The accuracy of this model is limited because it does not consider certain factors for cost estimation of software, such as hardware constraints, personnel quality and experience, and modern techniques and tools.
2. The estimates of the COCOMO model are within a factor of 1.3 only 29% of the time, and within a factor of 2 only 60% of the time.
Example
Consider a software project using semi-detached mode with 30,000 lines of code. We will obtain the estimates for this project as follows:
i) Effort estimation: E = 3.0 × (30)^1.12 ≈ 135 person-months
ii) Duration estimation: D = 2.5 × (135)^0.35 ≈ 14 months
iii) Persons estimation: P = E / D = 135 / 14 ≈ 10 persons

2) Intermediate model
This is an extension of the basic COCOMO model. This estimation model makes use of a set of 15 "cost driver attributes" to compute the cost of software.

These 15 attributes are rated on a 6-point scale ranging from "very low" to "extra high".

The effort multiplier for each cost driver attribute is given in the following table. The product of all effort multipliers results in the "Effort Adjustment Factor" (EAF).

The formula for effort calculation is:

E = a_i (KLOC)^(b_i) × EAF person-months


The values of a_i and b_i for the various classes of software projects are:

Mode            a_i    b_i
Organic         3.2    1.05
Semi-detached   3.0    1.12
Embedded        2.8    1.20

The duration and persons estimates are the same as in the basic COCOMO model, i.e., D = c_b (E)^(d_b) and P = E / D, with c_b and d_b taken from the basic model.

Merits of intermediate model


1. This model can be applied to almost the entire software product for easy and rough cost estimation during the early stage.
2. It can also be applied at the software product component level for obtaining more accurate cost estimation.
Limitations of intermediate model
1. The estimation is within 20% of actuals only 68% of the time.
2. The effort multipliers are not dependent on phases.
3. A product with many components is difficult to estimate.
Example
Consider a project having 30,000 lines of code, which is embedded software with a critical area; hence reliability is high (effort multiplier 1.15). The estimation is:
E = 2.8 × (30)^1.20 × 1.15 ≈ 191 person-months
D = 2.5 × (191)^0.32 ≈ 13 months
P = E / D
= 191 / 13
P = 15 persons approximately
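A sketch of the intermediate model in code, assuming the standard a_i/b_i coefficients and the basic-model duration constants. A single 1.15 multiplier (required reliability rated high) stands in for a full 15-driver EAF computation; the project figures are illustrative:

```python
import math

# Intermediate COCOMO sketch: effort = a_i * KLOC**b_i * EAF, where EAF
# is the product of the cost-driver effort multipliers. Duration uses
# the basic-model (c, d) constants per mode.
COEFFS = {"organic": (3.2, 1.05), "semidetached": (3.0, 1.12),
          "embedded": (2.8, 1.20)}
DURATION = {"organic": (2.5, 0.38), "semidetached": (2.5, 0.35),
            "embedded": (2.5, 0.32)}

def intermediate_cocomo(kloc, mode, multipliers):
    a, b = COEFFS[mode]
    eaf = 1.0
    for m in multipliers:             # product of all effort multipliers
        eaf *= m
    c, d = DURATION[mode]
    effort = a * kloc ** b * eaf      # person-months
    duration = c * effort ** d        # months
    persons = math.ceil(effort / duration)   # rounded up to whole people
    return effort, duration, persons

# Embedded project, 30 KLOC, required reliability rated high (1.15)
e, dur, p = intermediate_cocomo(30, "embedded", [1.15])
print(round(e), round(dur), p)   # ≈ 191 PM, 13 months, 15 persons
```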

3) Detailed COCOMO model


The detailed model uses the same equations for estimation as the intermediate model. But the detailed model can estimate the effort (E), duration (D), and persons (P) of each of the development phases, subsystems, and modules.
Experimentation with different development strategies is allowed in this model. The four phases used in the detailed COCOMO model are
1. Requirements Planning and Product Design (RPD)
2. Detailed Design (DD)
3. Code and Unit Test (CUT)
4. Integrate and Test (IT)
The effort multipliers for detailed COCOMO are

Using these detailed cost drivers, an estimate is determined for each phase of the lifecycle.

5.1 The COCOMO II Model

• COCOMO, for COnstructive COst Model


• COCOMO II is actually a hierarchy of estimation models that address the following
areas:
Application composition model. Used during the early stages of software engineering,
when prototyping of user interfaces, consideration of software and
system interaction, assessment of performance, and evaluation of technology maturity are
paramount.
Early design stage model. Used once requirements have been stabilized and basic
software architecture has been established.
Post-architecture-stage model. Used during the construction of the software.
• The COCOMO II models require sizing information. Three different sizing options are
available as part of the model hierarchy:
• object points,
• function points, and
• Lines of source code.
• The COCOMO II application composition model uses object points
• The object point is an indirect software measure that is computed using counts of the
number of (1) screens (at the user interface), (2) reports, and (3) components likely to be
required to build the application.
• The object point count is then determined by multiplying the original number of object instances by the weighting factor in the figure and summing to obtain a total object point count. When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted:

NOP = (object points) × [(100 − %reuse) / 100]

where NOP is defined as new object points.


To derive an estimate of effort based on the computed NOP value, a "productivity rate" (PROD) must be derived. The figure presents the productivity rate for different levels of developer experience and development environment maturity.

PRODUCTIVITY RATE FOR OBJECT POINTS.

Once the productivity rate has been determined, an estimate of project effort is computed using:

Estimated effort = NOP / PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required
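The NOP adjustment and effort computation can be sketched as follows; the object point count, reuse percentage, and productivity rate in the example are assumed values chosen for illustration:

```python
# COCOMO II application composition sketch: adjust object points for
# reuse, then divide by a productivity rate (NOP per person-month).
def nop(object_points, percent_reuse):
    """New object points after discounting reused functionality."""
    return object_points * (100 - percent_reuse) / 100

def estimated_effort(object_points, percent_reuse, prod_rate):
    """Effort in person-months given a productivity rate in NOP/pm."""
    return nop(object_points, percent_reuse) / prod_rate

# e.g. 50 object points, 20% reuse, assumed productivity of 13 NOP/pm
print(nop(50, 20))                                  # 40.0
print(round(estimated_effort(50, 20, 13), 1))
```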

6. RISK MANAGEMENT
Risk denotes the uncertainty that may occur in choices due to past actions, and risk is something that causes heavy losses.

Risk management refers to the process of making decisions based on an evaluation of the factors that pose threats to the business.


Software Risks:
Two characteristics:
➢ uncertainty—the risk may or may not happen;
➢ loss—if the risk becomes a reality, unwanted consequences or losses will occur

Project risks threaten the project plan; that is, if project risks become real, it is likely that the project schedule will slip and that costs will increase. Project risks identify potential budgetary, schedule, personnel (staffing and organization), resource, stakeholder, and requirements problems and their impact on a software project.

Technical risks threaten the quality and timeliness of the software to be produced. If a technical risk becomes a reality, implementation may become difficult or impossible. Technical risks identify potential design, implementation, interface, verification, and maintenance problems.
Business risks threaten the viability of the software to be built and often threaten the project or the product.
Candidates for the top five business risks are:
(1) Building an excellent product or system that no one really wants (market risk),
(2) Building a product that no longer fits into the overall business strategy for the company (strategic risk),
(3) Building a product that the sales force doesn't understand how to sell (sales risk),
(4) Losing the support of senior management due to a change in focus or a change in people (management risk), and
(5) Losing budgetary or personnel commitment (budget risks).

Known risks are those that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources (e.g., unrealistic delivery date, lack of documented requirements or software scope, poor development environment).
Predictable risks are extrapolated from past project experience (e.g., staff turnover, poor communication with the customer, dilution of staff effort as ongoing maintenance requests are serviced).
Unpredictable risks can and do occur, but they are extremely difficult to identify in advance.

Reactive strategy:
➢ Reactive risk management is a strategy in which corrective action is taken only when the project gets into trouble. When such risks cannot be managed and new risks come up one after another, the software team flies into action in an attempt to correct problems rapidly. These activities are called "firefighting" activities.

Proactive strategy:
➢ A proactive strategy begins long before technical work is initiated.
➢ Potential risks are identified, their probability and impact are assessed, and they are ranked
by importance. Then, the software team establishes a plan for managing risk.
➢ The primary objective is to avoid risk, but because not all risks can be avoided, the team
works to develop a contingency plan that will enable it to respond in a controlled and effective
manner.

Various activities that are carried out for risk management are-
1. Risk Identification
2. Risk Projection
3. Risk Refinement
4. Risk mitigation, monitoring and management

6.1 Risk Identification


Risk identification is a systematic attempt to specify threats to the project plan (estimates, schedule, resource loading, etc.). Risk identification can be done by identifying the known and predictable risks.

There are two distinct types of risks for each of the categories:

Generic risks are a potential threat to every software project.

Product-specific risks can be identified only by those with a clear understanding of the technology, the people, and the environment that is specific to the software that is to be built.

Method for identifying risks:


One method for identifying risks is to create a risk item checklist. The checklist can be used for risk identification and focuses on some subset of known and predictable risks in the following generic subcategories:
➢ Product size—risks associated with the overall size of the software to be built or
modified.
➢ Business impact—risks associated with constraints imposed by management or the
marketplace.
➢ Stakeholder characteristics—risks associated with the sophistication of the stakeholders
and the developer’s ability to communicate with stakeholders in a timely manner.
➢ Process definition—risks associated with the degree to which the software process has
been defined and is followed by the development organization.
➢ Development environment—risks associated with the availability and quality of the tools
to be used to build the product.
➢ Technology to be built—risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.
➢ Staff size and experience—risks associated with the overall technical and project
experience of the software engineers who will do the work.

Risk Components and Drivers:


A set of "risk components and drivers" is listed along with their probability of occurrence.

The U.S. Air Force has published a pamphlet that contains excellent guidelines for software risk
identification and abatement. The Air Force approach requires that the project manager identify the
risk drivers that affect software risk components—performance, cost, support, and schedule.
Performance risk—the degree of uncertainty that the product will meet its requirements and be
fit for its intended use.
Cost risk—the degree of uncertainty that the project budget will be maintained.
Support risk—the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.
Schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.

Assessing Overall Project Risk:


The following questions have been derived from risk data obtained by surveying experienced
software project managers in different parts of the world. The questions are ordered by their relative
importance to the success of a project.
1. Have top software and customer managers formally committed to support the project?
2. Are end users enthusiastically committed to the project and the system/product to be built?
3. Are requirements fully understood by the software engineering team and its customers?
4. Have customers been involved fully in the definition of requirements?
5. Do end users have realistic expectations?
6. Is the project scope stable?
7. Does the software engineering team have the right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the technology to be implemented?
10. Is the number of people on the project team adequate to do the job?
11. Do all customer/user constituencies agree on the importance of the project and on the
requirements for the system/product to be built?

6.2 RISK PROJECTION:

Risk projection, also called risk estimation, attempts to rate each risk in two ways—
(1) The likelihood or probability that the risk is real and
(2) The consequences of the problems associated with the risk, should it occur.

Managers and technical staff perform four risk projection steps:

1. Establish a scale that reflects the perceived likelihood of a risk.


2. Delineate the consequences of the risk.
3. Estimate the impact of the risk on the project and the product.
4. Assess the overall accuracy of the risk projection so that there will be no
misunderstandings.


DEVELOPING A RISK TABLE:


A risk table provides you with a simple technique for risk projection. A sample risk table is illustrated in the figure.

➢ List all risks in the first column of the table.
➢ This can be accomplished with the help of the risk item checklists referenced earlier.
➢ Each risk is categorized in the second column (e.g., PS implies a project size risk, BU implies a business risk).
➢ The probability of occurrence of each risk is entered in the next column of the table.
➢ The probability value for each risk can be estimated by team members individually.
➢ Next, the impact of each risk is assessed.
➢ The categories for each of the four risk components—performance, support, cost, and
schedule—are averaged to determine an overall impact value.
➢ Once the first four columns of the risk table have been completed, the table is sorted by
probability and by impact.
➢ High-probability, high-impact risks percolate to the top of the table, and low-probability
risks drop to the bottom. This accomplishes first-order risk prioritization.
➢ The cutoff line (drawn horizontally at some point in the table) implies that only risks that
lie above the line will be given further attention. Risks that fall below the line are reevaluated
to accomplish second-order prioritization.
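The sort-and-cutoff procedure above can be sketched in a few lines. The risk entries, probability values, impact scale (1 = catastrophic ... 4 = negligible), and cutoff are all illustrative assumptions:

```python
# Sketch of first-order risk prioritization: sort a risk table by
# probability and impact, then apply a cutoff line. Assumed impact
# scale: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible.
risks = [
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Larger number of users than planned",    "PS", 0.30, 3),
    ("End users resist system",                "BU", 0.40, 2),
    ("Staff inexperienced",                    "ST", 0.30, 1),
]

# Higher probability first; for equal probability, severer impact first.
table = sorted(risks, key=lambda r: (-r[2], r[3]))

cutoff = 0.35   # management decides where to draw the line
above_line = [r for r in table if r[2] >= cutoff]
for name, cat, prob, impact in above_line:
    print(f"{prob:.2f}  impact={impact}  [{cat}] {name}")
```

High-probability, high-impact entries percolate to the top; only the two risks above the cutoff would be carried forward for further attention.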


6.3 RISK AND MANAGEMENT CONCERN


A risk factor that has a high impact but a very low probability of occurrence should not absorb a
significant amount of management time. However, high-impact risks with moderate to high
probability and low-impact risks with high probability should be carried forward into the risk
analysis.

• Assessing Risk Impact

While assessing the risks impact three factors are considered


➢ Nature of risk
➢ Scope of risk
➢ Timing at which risk occurs.

The nature of the risk indicates the problems that are likely if it occurs. For example, a poorly defined external interface to customer hardware (a technical risk) will preclude early design and testing and will likely lead to system integration problems late in a project.
The scope of a risk combines the severity with its overall distribution (how much of the project will be affected or how many stakeholders are harmed?).
The timing of a risk considers when and for how long the impact will be felt.

The U.S. Air Force suggests the following steps to determine the overall consequences of a risk:
(1) Determine the average probability of occurrence value for each risk component;
(2) Determine the impact for each component based on the criteria shown; and
(3) Complete the risk table and analyze the results.

The overall risk exposure RE is determined using the following relationship:

RE = P × C

where P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
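The RE = P × C relationship in code; the probability and cost figures in the example are illustrative assumptions:

```python
# Risk exposure RE = P x C: probability of occurrence times the cost
# to the project should the risk occur. Figures are illustrative.
def risk_exposure(probability, cost):
    return probability * cost

# e.g. an 80% chance that some planned reusable components must be
# built from scratch at an assumed extra cost of $25,200
print(risk_exposure(0.80, 25200))   # 20160.0
```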


6.4 RISK REFINEMENT:


Risk refinement is a process of specifying the risk. The risk refinement can be represented using
CTC format.

The CTC stands for condition-transition-consequence. The condition is first stated and then based
on this condition sub conditions can be derived. Then determine the effects of these sub conditions
in order to refine the risk. This refinement helps in exposing the underlying risks. This approach
makes it easier for the project manager to analyze the risk in greater detail.

6.5 RISK MITIGATION, MONITORING, AND MANAGEMENT(RMMM)

An effective strategy must consider three issues:


• Risk avoidance,
• Risk monitoring, and
• Risk management

6.5.1 Risk Mitigation:


Risk mitigation means preventing the risks to occur (risk avoidance). Following are
the steps to be taken for mitigating the risks.
• Meet with current staff to determine causes for turnover (e.g., poor working conditions,
low pay, and competitive job market).
• Mitigate those causes that are under your control before the project starts.
• Once the project commences, assume turnover will occur and develop techniques to
ensure continuity when people leave.
• Organize project teams so that information about each development activity is widely
dispersed.
• Define work product standards and establish mechanisms to be sure that all models and
documents are developed in a timely manner.
• Conduct peer reviews of all work (so that more than one person is "up to speed").
• Assign a backup staff member for every critical technologist.

6.5.2 Risk Monitoring:


• The project manager monitors factors that may provide an indication of whether the risk is
becoming more or less likely.
• In the case of high staff turnover, the general attitude of team members based on project pressures, the degree to which the team has jelled, interpersonal relationships among team members, potential problems with compensation and benefits, and the availability of jobs within the company and outside it are all monitored.
• In addition to monitoring these factors, a project manager should monitor the effectiveness
of risk mitigation steps.
• For example, a risk mitigation step noted here called for the definition of work product
standards and mechanisms to be sure that work products are developed in a timely manner.
• This is one mechanism for ensuring continuity, should a critical individual leave the project.
• The project manager should monitor work products carefully to ensure that each can stand
on its own and that each imparts information that would be necessary if a newcomer were
forced to join the software team somewhere in the middle of the project.

6.5.3 Risk Management:


• Risk management and contingency planning assumes that mitigation efforts have failed
and that the risk has become a reality.
• Continuing the example, the project is well under way and a number of people announce
that they will be leaving.
• If the mitigation strategy has been followed, backup is available, information is
documented, and knowledge has been dispersed across the team.
• In addition, you can temporarily refocus resources (and readjust the project schedule) to those functions that are fully staffed, enabling newcomers who must be added to the team to "get up to speed."
• Those individuals who are leaving are asked to stop all work and spend their last weeks in "knowledge transfer mode."
• This might include video-based knowledge capture, the development of "commentary documents or wikis," and/or meetings with other team members who will remain on the project.

THE RMMM PLAN


• A risk management strategy can be included in the software project plan, or the risk
management steps can be organized into a separate risk mitigation, monitoring, and
management plan (RMMM).
• The RMMM plan documents all work performed as part of risk analysis and is used by the
project manager as part of the overall project plan.
• Each risk is documented individually using a risk information sheet (RIS). In most cases,
the RIS is maintained using a database system so that creation and information entry, priority
ordering, searches, and other analysis may be accomplished easily.
• The format of the RIS is illustrated in Figure


7. Reliability Growth Models

The reliability growth group of models measures and predicts the improvement of reliability
programs through the testing process. The growth model represents the reliability or failure rate of
a system as a function of time or the number of test cases. Models included in this group are as
follows.
1. Coutinho Model – Coutinho adapted the Duane growth model to represent the software testing process. Coutinho plotted the cumulative number of deficiencies discovered and the number of correction actions made vs. the cumulative testing weeks on log-log paper. Let N(t) denote the cumulative number of failures and let t be the total testing time. The failure rate, λ(t), of the model can be expressed as

λ(t) = N(t) / t = β₀ t^(−β₁)

where β₀ and β₁ are the model parameters. The least squares method can be used to estimate the parameters of this model.

2. Wall and Ferguson Model – Wall and Ferguson proposed a model similar to the Weibull growth model for predicting the failure rate of software during testing. The cumulative number of failures at time t, m(t), can be expressed as

m(t) = a [b(t)]^β

where a and β are the unknown parameters. The function b(t) can be taken as the number of test cases or the total testing time. Similarly, the failure rate function at time t is given by

λ(t) = m′(t) = a β [b(t)]^(β−1) b′(t)

Wall and Ferguson tested this model using some software failure data and observed that the failure data correlate well with the model.
Reliability growth models are mathematical models used to predict the reliability of a
system over time. They are commonly used in software engineering to predict the reliability
of software systems and to guide the testing and improvement process.
7.1 Types of reliability growth models:
1. Non-homogeneous Poisson Process (NHPP) Model: This model is based on the assumption
that the number of failures in a system follows a Poisson distribution. It is used to model the
reliability growth of a system over time and to predict the number of failures that will occur
in the future.
2. Duane Model: This model is based on the assumption that the rate of failure of a system
decreases over time as the system is improved. It is used to model the reliability growth of a
system over time and to predict the reliability of the system at any given time.
3. Gooitzen Model: This model is based on the assumption that the rate of failure of a system
decreases over time as the system is improved, but that there may be periods of time where
the rate of failure increases. It is used to model the reliability growth of a system over time
and to predict the reliability of the system at any given time.
4. Littlewood Model: This model is based on the assumption that the rate of failure of a system decreases over time as the system is improved, but that there may be periods of time where the rate of failure remains constant. It is used to model the reliability growth of a system over time and to predict the reliability of the system at any given time.
Reliability growth models are useful tools for software engineers, as they can help to predict the reliability of a system over time and to guide the testing and improvement process. They can also help organizations to make informed decisions about the allocation of resources and to prioritize improvements to the system.
It is important to note that reliability growth models are only predictions, and actual results may differ from them. Factors such as changes in the system, changes in the environment, and unexpected failures can affect the accuracy of the predictions.
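As a concrete illustration of the NHPP idea described above, the widely used Goel-Okumoto form models the expected cumulative number of failures by time t as µ(t) = a(1 - e^(-bt)), where a is the eventual total number of faults and b the per-fault detection rate. The sketch below is a minimal illustration with made-up parameter values:

```python
import math

def nhpp_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative
    number of failures by time t, where a is the total expected
    fault count and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def nhpp_intensity(t, a, b):
    """Failure intensity: the derivative of the mean value function."""
    return a * b * math.exp(-b * t)

# Assumed, illustrative parameters: 100 eventual faults, rate 0.05 per hour.
a, b = 100.0, 0.05
print(round(nhpp_mean_failures(10, a, b), 2))   # 39.35 failures expected by t=10
print(round(nhpp_mean_failures(100, a, b), 2))  # 99.33, approaching a as t grows
```

Because the intensity a·b·e^(-bt) decays over time, the model captures the intuition that failures become rarer as testing proceeds.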
7.2 Advantages of Reliability Growth Models:
1. Predicting Reliability: Reliability growth models are used to predict the reliability of a
system over time, which can help organizations make informed decisions about the
allocation of resources and the prioritization of improvements to the system.
2. Guiding the Testing Process: Reliability growth models can be used to guide the testing
process, by helping organizations determine which tests should be run, and when they
should be run, in order to maximize the improvement of the system’s reliability.
3. Improving the Allocation of Resources: Reliability growth models can help organizations to
make informed decisions about the allocation of resources, by providing an estimate of the
expected reliability of the system over time, and by helping to prioritize improvements to
the system.
4. Identifying Problem Areas: Reliability growth models can help organizations to identify
problem areas in the system, and to focus their efforts on improving these areas in order to
improve the overall reliability of the system.
7.3 Disadvantages of Reliability Growth Models:
1. Predictive Accuracy: Reliability growth models are only predictions, and actual results may
differ from the predictions. Factors such as changes in the system, changes in the
environment, and unexpected failures can impact the accuracy of the predictions.
2. Model Complexity: Reliability growth models can be complex, and may require a high level
of technical expertise to understand and use effectively.
3. Data Availability: Reliability growth models require data on the system’s reliability, which
may not be available or may be difficult to obtain.

8. Jelinski-Moranda (JM) Software Reliability Model

The Jelinski-Moranda (JM) Software Reliability Model is a mathematical model developed in


1972 by M.A. Jelinski and P.A. Moranda. It is used to predict the reliability of software systems,
particularly during the testing and debugging phases. This model assumes that software failures
occur randomly over time and that the likelihood of these failures decreases as bugs are found and
fixed. The model represents the software as a series of independent components, each with a
constant failure rate, and uses an exponential distribution to model the rate at which faults are
detected. By understanding these patterns, software engineers can estimate the number of
remaining faults and the time required to reach a desired level of reliability.
The Jelinski-Moranda model uses an exponential distribution to model the rate of fault detection
and assumes that the fault detection rate is proportional to the number of remaining faults in the
software. The model can be used to predict the number of remaining faults in the software and to
estimate the time required to achieve the desired level of reliability.
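The defining relationship, a fault detection rate proportional to the number of remaining faults, makes the model straightforward to fit by maximum likelihood. For interfailure times t_1, ..., t_n and a candidate fault count N, the rate ϕ has the closed form ϕ = n / Σ (N - i + 1)·t_i, so a simple grid search over N suffices. The sketch below uses invented interfailure times purely for illustration:

```python
import math

def jm_loglik(times, N, phi):
    """Jelinski-Moranda log-likelihood for interfailure times t_1..t_n;
    the failure rate before the i-th failure is phi * (N - i + 1)."""
    return sum(math.log(phi * (N - i + 1)) - phi * (N - i + 1) * t
               for i, t in enumerate(times, start=1))

def jm_fit(times, max_n=200):
    """Grid-search MLE over N; for each candidate N the rate phi
    has the closed form n / sum((N - i + 1) * t_i)."""
    n = len(times)
    best = None
    for N in range(n, max_n + 1):
        phi = n / sum((N - i + 1) * t for i, t in enumerate(times, start=1))
        ll = jm_loglik(times, N, phi)
        if best is None or ll > best[2]:
            best = (N, phi, ll)
    return best

# Invented interfailure times (hours); they lengthen as faults are fixed.
times = [3, 5, 7, 11, 16, 24, 35, 50]
N_hat, phi_hat, _ = jm_fit(times)
print("estimated total faults:", N_hat)
print("estimated faults remaining:", N_hat - len(times))
```

Note that when the observed interfailure times show little growth, the likelihood can keep improving as N increases, so in practice the estimate should be sanity-checked against the search bound.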
8.1 Assumptions of the Jelinski-Moranda Model
• The software contains a fixed, finite number of initial faults.
• Each fault contributes the same constant amount to the failure rate, so the fault detection rate is proportional to the number of remaining faults.
• The software system operates in a steady-state condition.
• The faults in the software are assumed to be independent and identically distributed.
• The fault removal process is assumed to be perfect, meaning that once a fault is detected, it
is immediately removed without introducing any new faults.
• The testing process is assumed to be exhaustive, meaning that all faults in the software will
eventually be detected.
• The model assumes that the software system will not be modified during the testing period
and that the number and types of faults in the software will remain constant.
• The Jelinski-Moranda model assumes that faults are introduced into the software during the
development process and that they are not introduced by external factors such as hardware
failures or environmental conditions.
• The model assumes that the testing process is carried out using a specific testing
methodology and that the results are consistent across different testing methodologies.
One limitation of the Jelinski-Moranda model is that it assumes a constant fault detection rate,
which may not be accurate in practice. Additionally, the model does not take into account factors
such as software complexity, hardware reliability, or user behavior, which can also affect the
reliability of the software system.
Overall, the Jelinski-Moranda model is a useful tool for predicting software reliability, but it
should be used in conjunction with other techniques and methods for software testing and quality
assurance.
The Jelinski-Moranda (J-M) model is one of the earliest software reliability models. Many existing
software reliability models are variants or extensions of this basic model.

The JM model treats the time between successive failures as exponentially distributed. After (i-1) faults have been detected and removed, the reliability over the next operating interval of length ti is:

R(ti) = e^(-ϕ[N-(i-1)]ti)

where N is the initial number of faults in the software, ϕ is the per-fault detection rate, and λi = ϕ[N-(i-1)] is the failure rate in effect before the i-th failure.

8.2 Purpose of Jelinski Moranda Software Reliability Model


• Estimating Failure Rates: Calculate the frequency of software errors that arise during the
phases of testing and operation.
• Examining the Growth of Software Reliability: Examine how the software’s
dependability increases with time when bugs are found and resolved during the testing and
debugging phases.
• Direct the Testing Process: Give instructions on how to allocate testing funds and
resources to increase software dependability.
• Help with Making Decisions: Help decision-makers weigh the trade-offs between
predicted reliability, testing resources, and development time.
• Calculate Software Dependability: Calculate the software’s anticipated reliability based
on the quantity of flaws and how many are fixed throughout the development process.
8.3 Characteristics of the Jelinski-Moranda Model
Some of the characteristics of the Jelinski Moranda Model are listed below.
• This model is a type of Binomial model.
• Jelinski Moranda Model is the first and most well-known black box model.
• This model tends to produce optimistic reliability predictions.
Some other characteristics are given by the following measures of reliability, where N is the initial number of faults, ϕ is the per-fault detection rate, and ti is the time since the (i-1)-th failure:

• Software reliability function: R(ti) = e^(-ϕ[N-(i-1)]ti)
• Mean value function: µ(ti) = N(1 - e^(-ϕti))
• Median: m = {ϕ[N-(i-1)]}^(-1) ln 2
• Probability density function: f(ti) = ϕ[N-(i-1)] e^(-ϕ[N-(i-1)]ti)
• Failure rate function: λ(ti) = ϕ[N-(i-1)]
• Failure intensity function: µ'(ti) = Nϕ e^(-ϕti)
• Cumulative distribution function: F(ti) = 1 - e^(-ϕ[N-(i-1)]ti)
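These formulas can be checked numerically. A minimal sketch, using assumed values of N and ϕ, verifies that the CDF and the reliability function sum to one and that the hazard f(ti)/R(ti) equals ϕ[N-(i-1)]:

```python
import math

def reliability(t_i, i, N, phi):
    """R(ti) = exp(-phi * (N - (i-1)) * ti)."""
    return math.exp(-phi * (N - (i - 1)) * t_i)

def cdf(t_i, i, N, phi):
    """F(ti) = 1 - R(ti)."""
    return 1.0 - reliability(t_i, i, N, phi)

def pdf(t_i, i, N, phi):
    """f(ti) = phi * (N - (i-1)) * exp(-phi * (N - (i-1)) * ti)."""
    lam = phi * (N - (i - 1))
    return lam * math.exp(-lam * t_i)

def failure_rate(i, N, phi):
    """lambda(ti) = phi * (N - (i-1)): constant between failures."""
    return phi * (N - (i - 1))

def mean_failures(t, N, phi):
    """mu(t) = N * (1 - exp(-phi * t)): expected failures by time t."""
    return N * (1.0 - math.exp(-phi * t))

# Assumed, illustrative values: 30 initial faults, per-fault rate 0.02,
# evaluated for the 5th failure interval at ti = 4.0.
N, phi, i, t = 30, 0.02, 5, 4.0
assert abs(reliability(t, i, N, phi) + cdf(t, i, N, phi) - 1.0) < 1e-12
assert abs(pdf(t, i, N, phi) / reliability(t, i, N, phi)
           - failure_rate(i, N, phi)) < 1e-9
print(round(failure_rate(i, N, phi), 3))  # 0.02 * 26 = 0.52
```

The failure rate drops by ϕ after each fix, which is the model's signature step-wise reliability growth.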

8.4 Advantages of the Jelinski-Moranda (JM) Software Reliability Model


• Simplicity: The JM model is simple and easy to understand, making it a useful tool for
software engineers who do not have a strong background in mathematical modeling.
• Widely used: The JM model is widely used in software engineering and has been applied to
many different types of software systems.
• Predictability: The JM model can provide valuable insights into the reliability of software
systems over time, helping software engineers to make informed decisions about software
testing and maintenance.
• Flexibility: The JM model is flexible and can be used to model different types of software
systems, including those with different fault rates and fault detection rates.
• Effectiveness: Despite its simplicity, the JM model has been shown to be effective in
predicting software reliability, particularly for software systems with a constant fault
detection rate.
• Ease of Implementation: The JM model can be implemented using basic statistical tools
and software, making it accessible to software engineers and organizations with limited
resources.
• Data-Driven: The JM model relies on empirical data to make predictions about software
reliability, which can provide a more accurate and objective assessment of the software
system’s performance.
• Cost-Effective: The JM model is a cost-effective tool for software testing and maintenance,
as it can help identify the most critical faults in the software system, allowing engineers to
focus their resources on fixing those faults.

8.5 Disadvantages of the Jelinski-Moranda (JM) Software Reliability Model


• Unrealistic assumptions: The JM model makes several unrealistic assumptions about
software systems, including that the failure rate is constant over time and that failures can be
modeled as a Poisson process.
• Limited applicability: The JM model is limited in its applicability and may not be suitable
for more complex software systems.
• Lack of flexibility: The JM model is limited in its ability to take into account the dynamic
nature of software systems, such as changes in the environment or the introduction of new
features.
• Dependency on accurate data: The accuracy of the JM model’s predictions is highly
dependent on the accuracy of the data used to build the model. Inaccurate or incomplete data
can lead to incorrect predictions about software reliability.
• Inability to account for external factors: The JM model does not take into account
external factors that may impact software reliability, such as changes in user behavior or
changes in the operating environment.
• Difficulty in estimating initial failure rate: The JM model requires an estimate of the
initial failure rate, which can be difficult to determine accurately, particularly for software
systems that are still in the early stages of development.
• Limited ability to predict long-term reliability: The JM model is most effective at
predicting short-term reliability and may not be suitable for predicting the long-term
reliability of a software system.
8.6 Future Developments
• Adapting to Agile Development: Tailoring the model to Agile processes and to the challenges of dynamic, iterative development.
• Accounting for Complex Systems: Investigating how the model can be adapted to distributed software systems with complicated architectures.
• Real-Time Reliability Monitoring: Developing techniques for predicting and monitoring software reliability in real time throughout development and operation.
• Integration with DevOps Practices: Exploring how the model can be integrated with DevOps to provide ongoing feedback on software reliability across the development process.
• Interdisciplinary Cooperation: Working with professionals in adjacent domains to develop a comprehensive strategy for software reliability assessment.
• Open-Source Model Development: Promoting open-source projects to encourage cooperation and contributions toward model enhancement.
