
BSEH443

SOFTWARE QUALITY
ASSURANCE
SOFTWARE QUALITY METRICS
● “You can’t control what you can’t measure” Tom DeMarco
(1982)
● Software quality metrics are a category of SQA tools
● A quantitative measure of the degree to which an item possesses a given quality attribute
● A function whose inputs are software data and whose output is a single numeric value that can be interpreted as the degree to which the software possesses a given quality attribute (IEEE, 1990)
Software Quality Metrics
● It is commonly accepted that, in software as in other industries, quality metrics should be among the fundamental tools used to assist management in three basic areas: control of software development projects and software maintenance, support of decision making, and initiation of corrective actions
● Statistical analysis of metrics data is expected to pinpoint changes initiated as a result of applying new development tools, changed procedures, and other interventions
Objectives of Software Quality
Metrics
● To facilitate management control as well as planning and
execution of the appropriate managerial intervention
● To identify situations that require or enable development or
maintenance process improvement in the form of preventive or
corrective actions introduced throughout the organisation
Classification of Software Quality
Metrics
● Metrics can fall into a number of categories
● Product metrics: Describe the characteristics of the product
such as size, complexity, design features, performance, and
quality level
● Process metrics: These can be used to improve the development and maintenance activities of the software
● Project metrics: Describe the project characteristics and
execution, such as the number of software developers, the
staffing patterns over the life cycle of the software, cost,
schedule and productivity
Further categories
● Software quality metrics can be further divided into three
categories:
● Product quality metrics
● In-process quality metrics
● Maintenance quality metrics
Product Quality Metrics
● These include the following:
● Mean Time to Failure
● Defect Density
● Customer Problems
● Customer Satisfaction
Mean Time to Failure
● It measures the average time the software operates before a failure occurs
● This metric is mostly used with safety-critical systems such as air traffic control systems, avionics, and weapons systems
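As a simple sketch (the source gives no estimation procedure, so the function name and inputs here are illustrative), MTTF can be estimated from observed operating time and the number of failures:

```python
def mean_time_to_failure(total_operating_hours, failure_count):
    """Estimate MTTF as total operating time divided by the number of failures."""
    if failure_count == 0:
        raise ValueError("MTTF is undefined when no failures were observed")
    return total_operating_hours / failure_count

# e.g. 1200 hours of operation with 4 observed failures
mttf = mean_time_to_failure(1200, 4)   # 300.0 hours
```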
Defect Density
● It measures the defects relative to the software size, expressed in lines of code or function points
● It measures code quality per unit of size
● This metric is used in many commercial software systems
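A minimal sketch of the computation, assuming size is given in KLOC (the per-function-point variant is analogous):

```python
def defect_density(defect_count, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / kloc

# e.g. 30 defects found in a 15 KLOC system
density = defect_density(30, 15.0)   # 2.0 defects per KLOC
```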
KLOC
● This classic metric measures the size of software by thousands
of code lines. As the number of code lines required for
programming a given task differs substantially with each
programming tool, this measure is specific to the programming
language or development tool used.
● Application of metrics that include KLOC is limited to software
systems developed using the same programming language or
development tool.
Function Point
● A measure of the development resources (human resources)
required to develop a program, based on the functionality
specified for the software system
Customer Problems
● It measures the problems that customers encounter when
using the product. It contains the customer’s perspective
towards the problem space of the software, which includes the
non-defect oriented problems together with the defect
problems.
● The problems metric is usually expressed in terms of Problems
per User-Month (PUM).
● PUM = Total problems that customers reported (true defects and non-defect-oriented problems) for a time period ÷ Total number of license-months of the software during the period, where license-months = number of installed licenses × number of months in the period
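The PUM ratio above can be sketched as follows; the parameter names are illustrative, and license-months are taken as installed licenses times months in the period:

```python
def problems_per_user_month(total_problems, installed_licenses, months_in_period):
    """PUM = total reported problems / total license-months in the period."""
    license_months = installed_licenses * months_in_period
    return total_problems / license_months

# e.g. 120 reported problems from 500 installed licenses over a 3-month period
pum = problems_per_user_month(120, 500, 3)   # 0.08 problems per user-month
```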
Customer Satisfaction
● Customer satisfaction is often measured by customer survey
data through the five-point scale −
● Very satisfied
● Satisfied
● Neutral
● Dissatisfied
● Very dissatisfied
● Satisfaction with the overall quality of the product and its
specific dimensions is usually obtained through various
methods of customer surveys. Based on the five-point-scale
data, several metrics with slight variations can be constructed
and used, depending on the purpose of analysis. For example

● Percent of completely satisfied customers
● Percent of satisfied customers (satisfied and very satisfied)
● Percent of dissatisfied customers (dissatisfied and very dissatisfied)
● Percent of non-satisfied customers (neutral, dissatisfied and very dissatisfied)
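One plausible way to derive these percent metrics from raw five-point-scale responses is sketched below; the grouping of scale levels into each metric is an assumption stated in the comments:

```python
from collections import Counter

def satisfaction_percentages(responses):
    """Compute percent metrics from five-point-scale survey responses.

    Assumed groupings: "satisfied" includes very satisfied; "dissatisfied"
    includes very dissatisfied; "non_satisfied" is everyone not satisfied.
    """
    counts = Counter(responses)
    n = len(responses)

    def pct(*levels):
        return 100.0 * sum(counts[level] for level in levels) / n

    return {
        "completely_satisfied": pct("very satisfied"),
        "satisfied": pct("very satisfied", "satisfied"),
        "dissatisfied": pct("dissatisfied", "very dissatisfied"),
        "non_satisfied": pct("neutral", "dissatisfied", "very dissatisfied"),
    }
```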
In-process Quality Metrics
● In-process quality metrics deal with tracking the arrival of defects during formal machine testing (in some organizations). These metrics include −
● Defect density during machine testing
● Defect arrival pattern during machine testing
● Phase-based defect removal pattern
● Defect removal effectiveness
Defect density during machine
testing
● Defect rate during formal machine testing (testing after code is
integrated into the system library) is correlated with the defect
rate in the field. A higher defect rate found during testing is an
indicator that the software has experienced higher error injection
during its development process, unless the higher testing defect
rate is due to an extraordinary testing effort.
● This simple metric of defects per KLOC or function point is a
good indicator of quality, while the software is still being tested.
It is especially useful to monitor subsequent releases of a
product in the same development organization.
Defect arrival pattern during
machine testing
● The overall defect density during testing will provide only the
summary of the defects. The pattern of defect arrivals gives
more information about different quality levels in the field. It
includes the following −
● The defect arrivals or defects reported during the testing phase by time interval (e.g., week); not all of these will be valid defects.
● The pattern of valid defect arrivals when problem determination
is done on the reported problems. This is the true defect
pattern.
Phase-based defect removal
pattern
● This is an extension of the defect density metric during testing.
In addition to testing, it tracks the defects at all phases of the
development cycle, including the design reviews, code
inspections, and formal verifications before testing.
● Because a large percentage of programming defects is related
to design problems, conducting formal reviews, or functional
verifications to enhance the defect removal capability of the
process at the front-end reduces error in the software. The
pattern of phase-based defect removal reflects the overall
defect removal ability of the development process.
● With regard to the metrics for the design and coding phases, in addition to defect rates, many development organizations use metrics such as inspection coverage and inspection effort for in-process quality management.
Defect removal effectiveness
● Defect removal effectiveness or efficiency is a measure of the
capability of a software development process in removing
defects before the software is shipped. It is calculated as the
percentage of defects found and repaired prior to release.
Defect removal effectiveness is a direct indicator of the quality
of the software's field performance. It is also known as Defect
Removal Efficiency (DRE)
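The DRE calculation described above can be sketched as a ratio of pre-release defects to all defects found over the product's life; the parameter names are illustrative:

```python
def defect_removal_effectiveness(defects_removed_before_release,
                                 defects_found_after_release):
    """DRE = defects removed before release / total defects, as a percentage."""
    total = defects_removed_before_release + defects_found_after_release
    return 100.0 * defects_removed_before_release / total

# e.g. 95 defects found and fixed before release, 5 escaped to the field
dre = defect_removal_effectiveness(95, 5)   # 95.0 percent
```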
Maintenance Quality Metrics
● Although little can be done to alter the quality of the product during this phase, what can be done is to eliminate the defects as soon as possible and with excellent fix quality. The following metrics track this:
● Fix backlog and backlog management index
● Fix response time and fix responsiveness
● Percent delinquent fixes
● Fix quality
Fix backlog and backlog
management index
● Fix backlog is related to the rate of defect arrivals and the rate
at which fixes for reported problems become available. It is a
simple count of reported problems that remain at the end of
each month or each week. Using it in the format of a trend
chart, this metric can provide meaningful information for
managing the maintenance process.
● Backlog Management Index (BMI) is used to manage the
backlog of open and unresolved problems
● If BMI is larger than 100, the backlog was reduced
● If BMI is less than 100, the backlog increased
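A minimal sketch of the BMI computation, taking the standard ratio of problems closed to problem arrivals in a period:

```python
def backlog_management_index(problems_closed, problem_arrivals):
    """BMI = (problems closed during the period / problem arrivals) * 100."""
    return 100.0 * problems_closed / problem_arrivals

# closing 120 problems against 100 new arrivals shrinks the backlog
bmi_good = backlog_management_index(120, 100)   # 120.0, backlog reduced
bmi_bad = backlog_management_index(80, 100)     # 80.0, backlog grew
```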
Fix response time and fix
responsiveness
● The fix response time metric is usually calculated as the mean
time of all problems from open to close. Short fix response time
leads to customer satisfaction.
● The important elements of fix responsiveness are customer
expectations, the agreed-to fix time, and the ability to meet
one's commitment to the customer.
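The mean-of-open-to-close calculation can be sketched as below; the input format (pairs of open/close times in arbitrary consistent units) is an assumption for illustration:

```python
def mean_fix_response_time(open_close_times):
    """Mean elapsed time from problem open to problem close.

    open_close_times: iterable of (opened, closed) timestamps in any
    consistent numeric unit, e.g. days since a reference date.
    """
    durations = [closed - opened for opened, closed in open_close_times]
    return sum(durations) / len(durations)

# three problems taking 2, 4 and 3 days respectively
mean_days = mean_fix_response_time([(0, 2), (1, 5), (3, 6)])   # 3.0 days
```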
Fix Quality
● Fix quality or the number of defective fixes is another important
quality metric for the maintenance phase. A fix is defective if it
did not fix the reported problem, or if it fixed the original
problem but injected a new defect. For mission-critical
software, defective fixes are detrimental to customer
satisfaction. The metric of percent defective fixes is the
percentage of all fixes in a time interval that is defective.
● A defective fix can be recorded in two ways: Record it in the
month it was discovered or record it in the month the fix was
delivered. The first is a customer measure; the second is a
process measure. The difference between the two dates is the
latent period of the defective fix.
● Usually, the longer the latency, the more customers are affected. Note that if the total number of fixes is large, a small value of the percentage metric can still hide many defective fixes, giving an optimistic picture. The quality goal for the maintenance process, of course, is zero defective fixes without delinquency.
