
CS-736: Advanced Software Engineering

Defect Analysis

Software Quality Measurement: A Framework for Counting Problems and Defects
Abstract: This report presents mechanisms for describing and specifying two software
measures, software problems and defects, used to understand and predict software product
quality and software process efficacy. We propose a framework that integrates and gives
structure to the discovery, reporting, and measurement of software problems and defects
found by the primary problem- and defect-finding activities. Based on the framework, we
identify and organize measurable attributes common to these activities. We show how
to use the attributes with checklists and supporting forms to communicate the
definitions and specifications of problem and defect measurements. We illustrate
how the checklists and supporting forms can be used to reduce misunderstanding of
measurement results and can be applied to address the information needs of different
users.

1. Introduction

1.1 Scope

This report describes a framework and supporting mechanisms that may be used to
describe or specify the measurement of software problems and defects associated
with a software product. It includes the following:

 A framework relating the discovery, reporting, and measurement of problems and defects.
 A principal set of measurable, orthogonal attributes for making the measurement descriptions exact and unambiguous.
 Checklists for creating unambiguous and explicit definitions or specifications of software problem and defect measurements.

1.2 Objective and Audience

Our objective is to provide operational methods that help obtain clear and
consistent measurements of quality based on a variety of software problem reports
and on data derived from analysis and corrective action. The report is appropriate for any
organization that wants to use software problem report data to help manage
and improve its processes for acquiring, building, or maintaining software
systems. The attributes and checklists described in this report are intended
specifically for software project managers, software planners and analysts,
software engineers, and data management specialists.
The mechanisms may be used for defining and specifying software problem and
defect counts found by static or non-operational processes (e.g., design reviews
or code inspections) and for dynamic or operational processes (e.g., testing or
customer operation). The measurements (counts), properly qualified and described,
may be used in various formats along with other software data to measure the
progress of the software development project, to provide data for prediction models,
and to improve the software life cycle process.

2. Understanding the Framework for Counting Software Problems and Defects

The purpose of this procedure is to provide a structured view of the activities
involved with the collection, analysis, recording, and reporting of software
problem and defect data, and thereby set the framework necessary to define a
common set of attributes. These attributes are used in a checklist to
communicate the description and specification of problem and defect
measurements.

Figure 2-1 shows the relationship of the software problem measurement
framework to that of the measurement environment framework previously outlined.

Measurement Environment      Framework for Measuring Software Problems
Goals                        Why measure problems and defects
Measurement definition       What are problems and defects
Data collection/recording    Problem finding/reporting activities
Measurements/reports         Measurement attributes and checklists

Figure 2-1 Measurement Environment Framework Relationship

2.2 Measuring Software Problems and Defects

To measure with explicitness and exactness, it is of utmost importance to clearly
define the entities being measured. Since we concentrate on problems and
defects in this report, we shall define their meaning and discuss the
relationship they have to other terms and entities.

2.2.1 Defects

A defect is any unintended characteristic that impairs the utility or worth of an item,
or any kind of shortcoming, imperfection, or deficiency. Left as is, this definition of
a defect, while correct, needs to be more definitive to be of help in the software
measurement sense. If we further define a defect to be an inconsistency with its
specification [IEEE 88a], the implication is that the reference specification cannot
have any defects. Since we know this is not the case, we will try another approach.
We will define a software defect to be any flaw or imperfection in a software work
product or software process.

This definition of a software defect covers a wide span of possibilities and does
not eliminate software artifacts that we know from experience to contain defects.
It does suggest the need for standards, rules, or conventions that establish the
type and criticality of the defect (Grady proposes a model for defect
nomenclature and classification in [Grady 92]).
A software defect is a manifestation of a human (software producer) mistake;
however, not all human mistakes are defects, nor are all defects the result of human
mistakes.
When found in executable code, a defect is frequently referred to as a fault or a bug.
A fault is an incorrect program step, process, or data definition in a computer
program. Faults are defects that have persisted in software until the software is
executable. In this report, we will use the term defect to include faults, and only use
the term fault when it is significant to refer to a defect in executable code.

2.2.2 Problems

The definition of a software problem has typically been associated with a
customer identifying a malfunction of the program in some way. However,
while this may be correct as far as it goes, the notion of a software problem goes
beyond that of an unhappy customer. Many terms are used for problem reports
throughout the software community: incident report, customer service request,
trouble report, inspection report, error report, defect report, failure report, test
incident, etc.
A software problem is a human encounter with software that causes difficulty,
doubt, or uncertainty in the use or examination of the software.
In a dynamic (operational) environment, some problems may be caused by
failures. A failure is the departure of software operation from requirements. A
software failure must occur during execution of a program. Software failures are
caused by faults, that is, defects found in executable code (the same kind of
faults discussed in the previous section as persistent defects).
In a static (non-operational) environment, such as a code inspection, some
problems may be caused by defects. In both dynamic and static environments,
problems also may be caused by misunderstanding, misuse, or a number of other
factors that are not related to the software product being used or examined.

2.3 Problem and Defect Finding Activities

To establish a software measurement environment, the software organization must
define a data collection process and recording media. Software problem reports
typically are the vehicles used to collect data about problems and defects. It is
worth noting that the data assembled as part of the problem analysis and
correction process is precisely the same data that characterizes, or gives attribute
values to, the problems and defects we wish to measure. Although this process
facilitates the data collection aspects of software problem and defect
measurement, the variety of finding activities and related problem reports makes it
difficult to communicate clearly and precisely when we define or specify problem and
defect measurements.

The primary points of origin for problem reports are activities whose function is to
find problems using a wide variety of problem discovery or detection methodologies,
including using the software product (see Figure 2-2). During software development,
these activities include design and code inspections, various formal
reviews, and all testing activities. In addition, activities such as planning,
designing, technical writing, and coding are also sources of problem reports.
Technical staff engaged in these activities frequently encounter what appears to
be a defect in a software artifact on which they depend to complete their work
and will generate a problem report. Following product development, the software
product customer is another source of problem reports.

ACTIVITIES           Find Problems In:

Product synthesis    Requirement specs, Design specs, Source code,
                     User publications, Test procedures

Inspections          Requirement specs, Design specs, Source code,
                     User publications, Test procedures

Formal reviews       Requirement specs, Design specs,
                     Implementation and installation plans

Testing              Modules, Components, Products, Systems,
                     User publications, Installation procedures

Customer service     Installation procedures, Operating procedures,
                     Maintenance updates, Support documents

Figure 2-2 Problem and Defect Finding Activities

To facilitate the communication aspect, we have identified five major finding activities:
 Software product synthesis
 Inspections
 Formal reviews
 Testing
 Customer service

2.4 Problem and Defect Attributes

In spite of the variances in the way software problems may be reported and
recorded, there are remarkable similarities among reports, particularly if the
organization- or activity-specific data is removed. There are several ways of
categorizing this similarity. The approach we use to arrive at a set of attributes
and attribute values that encompass the various problem reports is to apply the
"who, what, why, when, where, and how" questions in the context of a
software measurement framework.

These attributes provide a basis for communicating, descriptively or
prescriptively, the meaning of problem and defect measurements:

 Identification: What software product or software work product is involved?
 Finding Activity: What activity discovered the problem or defect?
 Finding Mode: How was the problem or defect found?
 Criticality: How critical or severe is the problem or defect?
 Problem Status: What work needs to be done to dispose of the problem?
 Problem Type: What is the nature of the problem? If a defect, what
kind?
 Uniqueness: What is the similarity to previous problems or defects?
 Urgency: What urgency or priority has been assigned?
 Environment: Where was the problem discovered?
 Timing: When was the problem reported? When was it discovered?
When was it corrected?
 Originator: Who reported the problem?
 Defects Found In: What software artifacts caused or contain the defect?
 Changes Made To: What software artifacts were changed to correct the
defect?
 Related Changes: What are the prerequisite changes?
 Projected Availability: When are changes expected?
 Released/Shipped: What configuration level contains the changes?
 Applied: When was the change made to the baseline configuration?
 Approved By: Who approved the resolution of the problem?
 Accepted By: Who accepted the problem resolution?
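The attribute list above can be sketched as a single problem-report record. The following Python sketch is illustrative only: the field names and default values are our assumptions, not definitions from the report.

```python
# Sketch of a problem-report record carrying the measurable attributes above.
# Field names and defaults are illustrative assumptions, not from the report.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ProblemReport:
    problem_id: str                  # Identification: unique problem reference
    product_id: str                  # product involved (with version/release)
    finding_activity: str            # e.g. "inspection", "testing"
    finding_mode: str                # "static" or "dynamic"
    criticality: int                 # severity as judged by the originator
    status: str = "open"             # Problem Status
    problem_type: str = "value not identified"
    uniqueness: str = "value not identified"   # "original" or "duplicate"
    urgency: Optional[int] = None
    environment: Optional[str] = None
    originator: Optional[str] = None
    date_opened: Optional[date] = None
    date_closed: Optional[date] = None
    defects_found_in: list = field(default_factory=list)  # artifacts with the defect
    changes_made_to: list = field(default_factory=list)   # artifacts changed to fix it
```

Keeping every attribute in one record is what later makes counts over any attribute combination possible.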

2.5 Measurement Communication with Checklists


Our primary goal is to provide a basis for clearly communicating the
definition and specification of problem and defect measurements. Two criteria
guide us:
 Communication: If someone generates problem or defect counts
with our methods, will others know precisely what has been measured
and what has been included and excluded?
 Repeatability: Would someone else be able to repeat the measurement
and get the same results?
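The two criteria can be made concrete: if a measurement specification is an explicit set of attribute filters, anyone applying the same specification to the same data gets the same count. This Python sketch is a hypothetical illustration; the dictionary-based representation is our assumption.

```python
# Sketch: a measurement "specification" as an explicit set of attribute filters.
# Applying the same spec to the same reports always yields the same count,
# which is what the communication and repeatability criteria demand.
def count_problems(reports, spec):
    """Count reports matching every attribute constraint in `spec`.

    `spec` maps an attribute name to the set of included values.
    """
    return sum(
        1 for r in reports
        if all(r.get(attr) in allowed for attr, allowed in spec.items())
    )

reports = [
    {"finding_activity": "testing", "uniqueness": "original"},
    {"finding_activity": "testing", "uniqueness": "duplicate"},
    {"finding_activity": "inspection", "uniqueness": "original"},
]
spec = {"finding_activity": {"testing"}, "uniqueness": {"original"}}
assert count_problems(reports, spec) == 1
```

Because the spec itself is data, it can be published alongside the count, so readers know exactly what was included and excluded.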

Figure: Framework overview

3.0 Using the Problem and Defect Attributes

Not all software organizations will use the same terms as are presented in the definitions
of the attributes. We have tried to use terms and names that are recognized by
the IEEE Standard Glossary of Software Engineering Terminology.

3.1 Identification Attributes


Problem ID: This attribute serves to uniquely identify each problem for reference
purposes.
Product ID: This attribute identifies the software product to which the problem
refers. It should include the version and release ID for released products.

3.2 Problem Status
Open: This term means that the problem is recognized and some level of
investigation and action will be undertaken to resolve it.
Recognized: Sufficient data has been collected to permit an evaluation of the
problem to be made.
Evaluated: Sufficient data has been collected by investigation of the reported
problem and the various software artifacts to at least determine the problem type.
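The status values form an ordered progression that can be enforced as guarded transitions. The sketch below is an assumption: the report's full status set presumably also includes Resolved and Closed states, which we add here only for illustration.

```python
# Illustrative sketch of guarded problem-status transitions.
# The "resolved" and "closed" states are assumptions added for completeness.
ALLOWED_TRANSITIONS = {
    "open": {"recognized"},
    "recognized": {"evaluated"},
    "evaluated": {"resolved"},
    "resolved": {"closed"},
}

def advance(current, new):
    """Return the new status, rejecting transitions that skip a stage."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

status = advance("open", "recognized")
```

Recording the date of each transition (Section 3.10) then gives the elapsed-time data needed for responsiveness measurements.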

3.3 Problem Type

The Problem Type attribute values are used to classify each reported problem into one
of several categories, to facilitate the evaluation and resolution of the problems
reported.

Software defect: This subtype includes all software defects that have been
encountered or discovered by examination or operation of the software product.
Possible values in this subtype are these:

 Requirements defect: A mistake made in the definition or specification of
the customer needs for a software product. This includes defects found in
functional specifications; interface, design, and test requirements; and
specified standards.
 Design defect: A mistake made in the design of a software product.
This includes defects found in functional descriptions, interfaces, control
logic, data structures, error checking, and standards.
 Code defect: A mistake made in the implementation or coding of a program.
This includes defects found in program logic, interface handling, data
definitions, computation, and standards.
 Document defect: A mistake made in a software product publication. This
does not include mistakes in requirements, design, or coding documents.
 Test case defect: A mistake in a test case that causes the software product to
give an unexpected result.
 Other work product defect: Defects found in software artifacts that are used
to support the development or maintenance of a software product. This
includes test tools, compilers, configuration libraries, and other computer-
aided software engineering tools.

Other problems: This subtype includes those problems for which there is either no
evidence that a software defect exists or evidence that some other factor or reason
is responsible for the problem. It would not be unusual for a software organization
to consider problems that fall into this category Closed almost immediately, as
evaluated, with an "out of scope" or similar closing code. Possible values in this subtype
are:

Hardware problem: A problem due to a hardware malfunction for which the software
does not, or cannot, provide fault-tolerant support.
Operating system problem: A problem that the operating system in use is
responsible for creating or managing. (If the software product is an operating
system, this value moves to the Software defect category.)

User error: A problem due to a user misunderstanding or incorrect use of the software.

Operations error: A problem caused by an error made by the computer system
operations staff.

New requirement/enhancement: A problem that describes a new requirement or
functional enhancement that is outside the scope of the software product baseline
requirements.

Undetermined problem: The type of the problem has not been determined. Values
for this subtype are:

Not repeatable/Cause unknown: The information provided with the problem
description or available to the evaluator is not sufficient to assign a problem type to the
problem.

Value not identified: The problem has not been evaluated.
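The three subtypes above can be expressed as a small triage routine. The value names below follow Section 3.3, but the routing logic itself is an illustration, not the report's procedure.

```python
# Minimal triage sketch mapping a Problem Type value to its subtype.
# Value names follow Section 3.3; the routing function is illustrative.
DEFECT_TYPES = {
    "requirements defect", "design defect", "code defect",
    "document defect", "test case defect", "other work product defect",
}
OTHER_PROBLEMS = {
    "hardware problem", "operating system problem", "user error",
    "operations error", "new requirement/enhancement",
}

def problem_subtype(problem_type):
    if problem_type in DEFECT_TYPES:
        return "software defect"
    if problem_type in OTHER_PROBLEMS:
        return "other problem"
    # e.g. "not repeatable/cause unknown" or "value not identified"
    return "undetermined problem"

assert problem_subtype("code defect") == "software defect"
```

Counting by subtype is typically the first breakdown an organization publishes, since "other problems" and "undetermined problems" should be excluded from defect counts.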

3.4 Uniqueness

This attribute differentiates between a unique problem or defect and a duplicate.
The possible values are:
Duplicate: The problem or defect has been previously discovered.
Original: The problem or defect has not been previously reported or discovered.
Value not identified: An evaluation has not been made.
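A duplicate-tagging pass can be sketched as follows. The matching key used to decide "previously discovered" is purely an assumption for the example; in practice the uniqueness evaluation is a human judgment made during problem analysis.

```python
# Sketch: tagging reports as original vs. duplicate. The matching key
# (product + symptom text) is an illustrative assumption only.
def tag_uniqueness(reports, key=lambda r: (r["product_id"], r["symptom"])):
    seen, tags = set(), []
    for r in reports:
        k = key(r)
        tags.append("duplicate" if k in seen else "original")
        seen.add(k)
    return tags

reports = [
    {"product_id": "A", "symptom": "crash on save"},
    {"product_id": "A", "symptom": "crash on save"},
    {"product_id": "A", "symptom": "garbled output"},
]
assert tag_uniqueness(reports) == ["original", "duplicate", "original"]
```

Whether duplicates are included or excluded is exactly the kind of decision the checklist must state explicitly, since it changes the count.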

3.5 Criticality
Criticality is a measure of the disruption a problem causes users when they encounter
it. The value given to criticality is normally provided by the problem originator or
originating organization.

3.6 Urgency
Urgency is the degree of importance that the evaluation, resolution, and closure
of a problem is given by the organization charged with executing the problem
management process.

3.7 Finding Activity

This attribute refers to the activity, process, or operation taking place when the
problem was encountered. Rather than use the program development phases or
stages to describe the activity, we have chosen to use the activities implicit in
software development, regardless of the development process model in use.

3.8 Finding Mode
This attribute is used to identify whether the problem or defect was discovered
in an operational environment or in a non-operational environment.
Dynamic: This value identifies a problem or defect that is found during operation or
execution of the computer program.
Static: This value identifies a problem or defect that is found in a non-operational
environment. Problems or defects found in this environment cannot be due to a
failure or fault.

3.9 Date/Time of Occurrence


The Problem Count Request Form is used to identify the range and
environment constraints desired for the measurements.

3.10 Problem Status Dates:

These attributes refer to the date on which the problem report was received or logged
in the problem database or when the problem changed status.
Date Opened: Date the problem was reported and recognized.
Date Closed: Date the problem met the criteria established for closing the problem.
Date Assigned for Evaluation
Date Assigned for Resolution
The Problem Count Request Form should be used to define how the selected values
should be treated.
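A date-range constraint of the kind a Problem Count Request Form expresses can be sketched directly. The function and field names here are illustrative assumptions, not the form's actual fields.

```python
# Sketch of a date-range constraint on Date Opened, of the kind a
# Problem Count Request Form might specify. Names are illustrative.
from datetime import date

def count_opened_between(reports, start, end):
    """Count problems whose Date Opened falls in [start, end] inclusive."""
    return sum(1 for r in reports if start <= r["date_opened"] <= end)

reports = [
    {"date_opened": date(2024, 1, 10)},
    {"date_opened": date(2024, 2, 5)},
    {"date_opened": date(2024, 3, 1)},
]
assert count_opened_between(reports, date(2024, 1, 1), date(2024, 2, 28)) == 2
```

The same pattern applies to Date Closed or any other status date; the request form simply has to say which date attribute the range is applied to.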

3.11 Originator
This attribute provides the information needed by the problem analyst to determine
the originating person, organization, or site.

3.12 Environment
This attribute provides information needed by the problem analyst to determine
if a problem is uniquely related to the computer system, operating system, or
operational environment, or if a particular environment tends to generate an
abnormally large number of problems compared to other environments.

3.13 Defects Found In
This attribute enables us to identify the software unit(s) containing defects
causing a problem. This information is particularly useful to identify software units
prone to defects.
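Tallying the Defects Found In attribute across reports is one way to surface defect-prone units, as in this sketch (the file names and report shape are invented for the example):

```python
# Sketch: tally the "Defects Found In" attribute across problem reports
# to surface defect-prone software units. Data shown is invented.
from collections import Counter

def defect_prone_units(reports, top=3):
    tally = Counter()
    for r in reports:
        tally.update(r.get("defects_found_in", []))
    return tally.most_common(top)

reports = [
    {"defects_found_in": ["parser.c", "io.c"]},
    {"defects_found_in": ["parser.c"]},
    {"defects_found_in": ["ui.c"]},
]
assert defect_prone_units(reports, top=1) == [("parser.c", 2)]
```

Units that keep surfacing at the top of such a tally are natural candidates for re-inspection or redesign.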

3.14 Changes Made To


We use this attribute to identify the software unit(s) changed to resolve the
discovered problem or defect.

SEI-CMM CHECKLIST FOR DEFECT PREVENTION

(For each question, record Yes or No, with comments.)

 Are defect prevention activities planned?
 Does the project conduct causal analysis meetings to identify common causes of defects?
 Once identified, are common causes of defects prioritized and systematically eliminated?
 Does the project follow a written organizational policy for defect prevention activities?
 Do members of the software engineering group and other software-related groups receive the required training to perform their defect prevention activities (e.g., training in defect prevention methods and the conduct of task kick-off or causal analysis meetings)?
 Are measurements used to determine the status of defect prevention activities (e.g., the time and cost for identifying and correcting defects and the number of action items proposed, open, and completed)?
 Are the activities and work products for defect prevention subjected to Software Quality Assurance review and audit?
4.0 Technical Benefits

The fundamental reason for measuring software and the software process is to
obtain data that helps us better control the schedule, cost, and quality of software
products. It is important to be able to consistently count and measure basic entities
that are directly measurable, such as size, defects, effort, and time (schedule).
Consistent measurements provide data for doing the following:
 Quantitatively expressing requirements, goals, and acceptance criteria.
 Monitoring progress and anticipating problems.
 Quantifying tradeoffs used in allocating resources.
 Predicting the software attributes for schedule, cost, and quality.

To establish and maintain control over the development and maintenance of a
software product, it is important that the software developer and maintainer
measure software problems and software defects found in the software product to
determine the status of corrective action, to measure and improve the software
development process, and, to the extent possible, to predict remaining defects or
failures.

4.1 Quality

While determining what truly represents software quality in the customer's
view can be elusive, it is equally clear that the number and frequency of problems
and defects associated with a software product are inversely proportional to the
quality of the software. Software problems and defects are among the few direct
measurements of software processes and products. Such measurements allow
us to quantitatively describe trends in defect or problem discovery, repairs,
process and product imperfections, and responsiveness to customers. Problem
and defect measurements are also the basis for quantifying several significant
software quality attributes, factors, and criteria: reliability, correctness,
completeness, efficiency, and usability, among others.
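One common way problem and defect counts feed a quantitative quality attribute is defect density (defects per thousand lines of code). The sketch below uses invented figures purely for illustration.

```python
# Illustrative defect-density computation (defects per KLOC).
# Release names, counts, and sizes are invented for the example.
def defect_density(defect_count, size_kloc):
    return defect_count / size_kloc

releases = {"R1": (120, 80.0), "R2": (48, 96.0)}   # (defects, KLOC)
densities = {r: defect_density(d, k) for r, (d, k) in releases.items()}
assert densities["R1"] == 1.5
assert densities["R2"] == 0.5
```

A falling density across releases is exactly the kind of trend in defect discovery that these measurements let us describe quantitatively, provided the defect-counting rules stayed constant between releases.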

Financial Benefits

4.2 Cost

The amount of rework is a significant cost factor in software development and
maintenance. The number of problems and defects associated with the product is a
direct contributor to this cost. Measurement of the problems and defects can help us
understand where and how the problems and defects occur, provide insight into methods
of detection, prevention, and prediction, and keep costs under control.

4.3 Schedule

Even though the primary drivers for schedule are workload, people, and processes, we
can use measurement of problems and defects in tracking project progress, identifying
process inefficiencies, and forecasting obstacles that will jeopardize schedule
commitments.

5.0 Latest Work

Defect Prevention Techniques for High Quality and Reduced Cycle Time

The "Defect Prevention Techniques" Process Improvement Experiment (PIE) was
established to investigate patterns of defects that commonly occur in the software
development process. The goal of the PIE was to define and implement techniques
to reduce the total number of defects in the development life-cycle and to prevent
certain classes of defects from recurring. [1]

It is well known that correcting defects in later stages of the development life-cycle
is more complicated, expensive, and time-consuming than correcting them in earlier
stages. Preferable even to early detection is the avoidance of defects altogether: the
defect prevention method. Defect prevention is therefore the best way to optimize
development process costs and to shorten development cycle time. [2]

The objective of the experiment was to define and implement defect prevention
methods and techniques for the various phases of the development life-cycle, to
reduce the quantity of defects, and then to determine a strategy for decreasing the
testing effort needed for development projects.

The PIE has produced a better understanding of common defect types, suggested and
implemented solutions to avoid them, and established a mechanism to investigate new
or remaining defects with the goal of eliminating them.

Plans and Expected Outcome

The PIE plan comprised several steps:

1. Create a reference line of root causes based on defects recorded in the initial
project release.

2. Based on the profile of common defects, define techniques to prevent specific
problems in each phase of development.

3. Implement and disseminate techniques to the engineering team at phase kickoffs.

4. During subsequent project development, support the modified software development
process (see Figure 1) and ongoing causal analysis of new defects detected.

5. Review root causes and analyze changes in defect trends. Evaluate the efficacy of
new techniques. Determine a strategy to reduce test effort while focusing on the most
error-prone areas.

6. Disseminate findings and recommend changes to the OSSP (Organizational
Standard Software Process).

The existing CMM Level 3 development process was modified to include defect
prevention activities. A first-stage causal analysis was added to the defect closing
procedure, whereby the engineer handling the error/defect fills out a new
Analysis form. This Analysis form, which includes the Beizer taxonomy classification,
cause category, root cause analysis, containment method, and suggestion, is physically
attached to the problem report and remains part of the database.

A second-stage causal analysis was added to verify the correctness of the first stage
and to identify trends and extreme cases that require attention. Phase kickoffs were
added to the process to educate the engineers on the common errors of the phase and
on causal analysis techniques.

Based on the trends identified in the second stage of analysis, defect prevention
techniques and strategies were recommended and implemented. This activity was
performed by the PIE team and managed as a separate process.

Figure 1 - CMM Level 3 Software Process enhanced for Defect Prevention

Implementation of the Improvement Actions


Kickoff meetings were held for each phase, where the importance of defect
prevention and causal analysis was explained and emphasized. The improvement
actions for the specific phase were presented and discussed. The actions, as suggested
by the PIE team, were generally well received by the TETRA development engineers
and managers. Techniques such as improved review checklists were applied
immediately after the kickoff at formal peer reviews.

In each successive phase, engineers became more adept at recording defects
using DDTs® (Distributed Defect Tracking System) and at performing causal
analysis. They became more open-minded about reporting and recording their own
defects, understanding the importance of a systematic tracking approach to the quality
of the product and the process.

Many TETRA engineers expressed satisfaction with the causal analysis process and
kickoff meetings, which made them feel better equipped to prevent defects, and
improved their general attitude towards the software process.

The PIE is considered by the technical staff, as well as the business staff, to be a
positive process that gives us an advantage in better product quality and
reduced cycle time of the development process. As such, the TETRA development
group has adopted several changes to its processes to accommodate the defect
prevention environment.

Internal dissemination outside of the TETRA development group has yet to be done
and will begin with the presentation of the defect prevention method to the SEPG
(Software Engineering Process Group), the owner of MCIL's OSSP. This group will
analyze the results of the PIE project and update the OSSP accordingly. The SEPG
will also be responsible for deploying the new process and training the other
development groups. This will be done through a series of technical meetings with
engineers and managers, dealing with defect prevention, the PIE, and the updated
OSSP.

Measured Results
The overall number of defects in TETRA Release 2 decreased by 60% in
comparison to the number of defects detected in TETRA Release 1 (the reference line
project). In part, this can be attributed to the fact that Release 2 is a continuation
project rather than an initial project like Release 1, and that later releases usually have
fewer defects due to more cohesive teams, greater familiarity with the application
domain, experience, and fewer undefined issues.

Based on numbers from other MCIL projects, we estimate that half of the defect
decrease (30%) can be attributed to the implementation of the PIE.

A breakdown of the defects, by phase of origin, shows the following results:

Phase of Origin       TETRA Release 2   Past Projects   % Improvement
Requirement Spec.     20%               40.8%           80.6%
Preliminary Design    2.5%              11.8%           93%
Detailed Design       23%               23.9%           61.4%
Coding                54.5%             23.4%           8%
Total                 100%              100%            60%

The absolute reduction in defects, which relates to the % improvement shown in the
above table, can be observed in the following chart:
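As a reading check on the table: the per-phase "% Improvement" figures are consistent with combining each phase's share of defects with the overall 60% reduction. The formula below is our interpretation of how the column was derived; it reproduces the published values to within rounding.

```python
# Interpretation check: absolute per-phase improvement derived from the
# phase shares and the overall 60% reduction (i.e., 40% of defects remain).
def phase_improvement(r2_share, past_share, overall_remaining=0.40):
    """1 - (Release 2 share x remaining fraction) / past-projects share."""
    return 1 - (r2_share * overall_remaining) / past_share

assert round(phase_improvement(0.200, 0.408), 3) == 0.804   # ~80.6% published
assert round(phase_improvement(0.230, 0.239), 3) == 0.615   # ~61.4% published
assert round(phase_improvement(0.545, 0.234), 3) == 0.068   # ~8% published
```

The small coding-phase improvement alongside its larger share of remaining defects is what motivates the report's conclusion that test effort should be focused on the most error-prone areas.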
