SOFTWARE
DEFECTS
LECTURE FOUR
Software Defect
• A software defect is a condition in a software product which does not
meet a software requirement (as stated in the requirement
specifications) or an end-user expectation (which may not be specified
but is reasonable).
• It is an error in coding or logic that causes a program to malfunction or
to produce incorrect or unexpected results.
Defect vs Bug
• A bug is a deficiency in just the software, but a defect can be a deficiency in the software as well as
in any other work product, e.g. a Requirement Specification.
• A bug is therefore not simply the informal name for a defect. Bugs are faults in a system or application
that impact software functionality and performance; bugs are usually found by testers during unit
testing, while defects are found by the developer during the development phase.
You don’t say
‘There’s a bug in the Test Case’
You say
‘There’s a defect in the Test Case.’
Software Defect Terms
• A program that contains a large number of bugs is said to be buggy.
• Reports detailing defects or bugs in software are known as defect
reports or bug reports.
• Applications for tracking defects/bugs are known as defect tracking
tools or bug tracking tools.
• The process of finding the cause of bugs is known as debugging.
• The process of intentionally injecting bugs in a software program, to
estimate test coverage by monitoring the detection of those bugs, is
known as bebugging.
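Bebugging, as described above, estimates test coverage from the fraction of seeded bugs that testing catches. A common way to turn that into an estimate of remaining real defects is a capture-recapture style calculation; the numbers below are hypothetical.

```python
# Bebugging sketch (hypothetical numbers): seed known bugs, count how many the
# test effort finds, and assume tests catch the same fraction of real defects.
def estimate_total_defects(seeded_total, seeded_found, real_found):
    """Estimate total real defects from the seeded-bug detection rate."""
    detection_rate = seeded_found / seeded_total
    return real_found / detection_rate

# Example: 20 bugs seeded, 16 caught (80% detection), 40 real defects caught.
estimated = estimate_total_defects(seeded_total=20, seeded_found=16, real_found=40)
print(estimated)   # 50.0 -> roughly 10 real defects may remain undetected
```

The assumption that seeded bugs are as hard to find as real ones rarely holds exactly, so the result is a rough indicator, not a precise count.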
Software Defect Types (Basic)
1. Syntax Defects:
Syntax defects are mistakes in the writing of the code, typically small slips made by the developer while writing it. For
example, if the correct syntax for printing something is print('hello') and we accidentally omit one of the parentheses,
a syntax error occurs and stops the program from running.
2. Multithreading Defects:
Multithreading defects arise in concurrent code and can result in conditions such as deadlock and starvation that may lead
to system failure.
3. Interface Defects:
Interface defects are defects in the interaction between the software and its users, e.g. a complicated or unclear
interface.
4. Performance Defects:
Performance defects occur when the system or software application is unable to meet the desired and expected results,
e.g. the response of the system under varying load.
5. Arithmetic Defects:
Arithmetic defects are mistakes the developer makes in an arithmetic expression or in finding the solution of such an
expression. They often cause logical errors.
6. Logical Defects:
Logical defects are mistakes in the implementation of the code, e.g. the application displaying the wrong message.
7. Runtime Errors:
Runtime errors happen while a user is executing the program. The code might work correctly on your machine, but on the
web server there might be a different configuration, or the program might be interacted with in a way that causes a
runtime error.
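Several of the defect types above can be shown in a few lines of code. The sketch below (Python, with made-up function names) demonstrates a syntax defect, an arithmetic defect, a logical defect, and a runtime error:

```python
# 1. Syntax defect: a missing parenthesis stops the program from running at all.
#    compile() lets us observe the SyntaxError without crashing this script.
try:
    compile("print('hello'", "<demo>", "exec")   # missing closing parenthesis
except SyntaxError as e:
    print("Syntax defect caught:", e.msg)

# 5. Arithmetic defect: binary floating point cannot represent 0.1 exactly,
#    so a naive equality check becomes a logical error.
print(0.1 + 0.2 == 0.3)                  # False -- a classic arithmetic surprise
print(abs((0.1 + 0.2) - 0.3) < 1e-9)     # True -- compare with a tolerance instead

# 6. Logical defect: an off-by-one error in a loop boundary.
def sum_first_n_buggy(n):
    return sum(range(n))        # sums 0..n-1 -- accidentally omits n itself

def sum_first_n_fixed(n):
    return sum(range(n + 1))    # sums 0..n as intended

print(sum_first_n_buggy(5), sum_first_n_fixed(5))   # 10 15

# 7. Runtime error: the code is syntactically fine but fails for some inputs.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None             # degrade gracefully instead of crashing

print(safe_divide(10, 2), safe_divide(10, 0))   # 5.0 None
```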
Causes Of Errors
• Human Error: Humans are the only intelligence we can rely on, and they are prone to error. By being careful, some mistakes
can be avoided, but not all.
• Miscommunication: Software is developed and tested by several people, which can lead to conflicts, improper coordination
and poor communication. Misinterpretation and misunderstanding lead to errors in the software that would otherwise have
been avoided.
• Intra-system and inter-system interfaces: The chances of errors in inter-system and intra-system interface establishment are very high.
• Intra-system interface – The integration of different modules and features within a system or application leads to intra-system
interface errors.
• Inter-system interface – The compatibility between two different applications while working together can lead to inter-system
interface errors.
• Environment: Floods, fires, earthquakes, tsunamis, lightning and others affect the computer system and lead to errors.
• Pressure: Working under limited or insufficient resources and on unrealistic deadlines lead to less availability of time for developers to
test the code themselves before passing it on to the testing team.
• Inexperience: The allocation of tasks in a project must be according to the experience or skills of the team member. If this is not followed,
it might lead to errors as the inexperienced or insufficiently skilled members won’t have proper knowledge of the task.
• Complexity: Highly complex code, designs, architecture or technology can pave the way for critical bugs, as the human mind
can only handle a certain level of complexity.
• Unfamiliar technology: The developers and testers who do not stay updated with the recent technological development will face a
problem while on projects that are based on technologies outside of their knowledge domain and thereby cause an error.
Software Defect Classification
1. Defect Severity / Impact
2. Defect Probability / Visibility
3. Defect Priority / Urgency
4. Dimensions of Quality
5. Related Module / Component
6. Phase Detected
7. Phase Injected
8. Defect Nature
1.Defect Severity / Impact
• Defect Severity: also known as Bug Severity, is a classification of software defect
to indicate the degree of negative impact on the quality of software.
• Severity: The degree of impact that a defect has on the development or operation
of a component or system.
• Classification: the actual terminologies, and their meaning, can vary depending on
people, projects, organizations, or defect tracking tools
1.Defect Severity Classification
1. Critical: The defect affects critical functionality or critical data. It does not have a
workaround. Example: Unsuccessful installation, complete failure of a feature.
2. Major: The defect affects major functionality or major data. It has a workaround but
is not obvious and is difficult. Example: A feature is not functional from one module
but the task is doable if 10 complicated indirect steps are followed in another
module/s.
3. Minor: The defect affects minor functionality or non-critical data. It has an easy
workaround. Example: A minor feature that is not functional in one module but the
same task is easily doable from another module.
4. Trivial: The defect does not affect functionality or data. It does not need a
workaround. It does not impact productivity or efficiency. It is merely an
inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.
• Severity is also denoted as: S1 = Critical, S2 = Major, S3 = Minor, S4 = Trivial.
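The S1-S4 scale above can be kept as a controlled vocabulary in a tracking tool rather than free text. A minimal sketch, assuming an in-house Python tool:

```python
from enum import Enum

# Hypothetical representation of the S1-S4 severity scale as an enum, so that
# defect records use a fixed vocabulary instead of free-form strings.
class Severity(Enum):
    S1 = "Critical"   # no workaround; critical functionality/data affected
    S2 = "Major"      # workaround exists but is difficult and non-obvious
    S3 = "Minor"      # easy workaround available
    S4 = "Trivial"    # cosmetic only, e.g. layout or spelling issues

print(Severity.S1.value)   # Critical
```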
2.Defect Probability / Visibility
Defect Probability : also known as Defect Visibility or Bug Probability or Bug
Visibility, indicates the likelihood of a user encountering the defect/ bug.
1. High: Encountered by all or almost all the users of the feature
2. Medium: Encountered by about 50% of the users of the feature
3. Low: Encountered by very few users of the feature
• Defect Probability can also be denoted in percentage (%).
• The measure of Probability/ Visibility is with respect to the usage of a feature and not
the overall software.
1. A bug in a rarely used feature can have a high probability if the bug is easily
encountered by users of that particular feature.
2. A bug in a widely used feature can have a low probability if the users rarely detect it
even while using that particular feature.
3.Defect Priority / Urgency
• Also known as Bug Priority, indicates the importance or urgency of fixing a defect.
• Priority may be initially set by the Software Tester but finalized by the Project/ Product
Manager.
Classification: Priority can be Urgent, High, Medium or Low, based on the urgency with which the
defect should be fixed:
1. Urgent: Must be fixed immediately / in the next build.
2. High: Must be fixed in any of the upcoming builds but should be included in the
release.
3. Medium: May be fixed after the release / in the next release.
4. Low: May or may not be fixed at all.
Priority is also denoted as P1 for Urgent, P2 for High and so on.
3.Defect Priority / Urgency
Priority is quite a subjective decision therefore categorizations are not authoritative.
Consider the following while determining the priority:
1. Business need for fixing the defect
2. Defect Severity / Impact
3. Defect Probability / Visibility
4. Available Resources (Developers to fix and Testers to verify the fixes)
5. Available Time (Time for fixing, verifying the fixes and performing regression tests after
the verification of the fixes)
Defect Priority needs to be managed carefully in order to avoid product instability,
especially when there is a large number of defects.
4.Dimensions of Quality
Software quality: degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.
Software quality dimension importance is subjective and depends on what dimension
you value the most.
Examples
• Accessibility: The degree to which software can be used comfortably by a wide variety
of people, including those who require assistive technologies like screen magnifiers or
voice recognition.
• Reliability: The ability of software to perform a required function under stated
conditions for the stated period of time without any errors
When someone says “This software is of a very high quality.”, ask “In which dimension of
quality?”
5.Related Module / Component
Related Module / Component indicates the module or component of the software where
the defect was detected.
This provides information on which module / component is buggy or risky. Eg Module
one /Component A or Module three /Component E
6.Phases (Phase Detected and Phase Injected)
Phase Detected indicates the phase in the software development lifecycle where the defect
was identified:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
Phase Injected indicates the phase in the software development lifecycle where the bug was
introduced. It can be known only after a proper root-cause analysis of the bug:
• Requirements Development
• High Level Design
• Detailed Design
• Coding
• Build / Deployment
The Phase Injected is always earlier in the software development lifecycle than the Phase
Detected.
Defect Nature
1. Security defects: Security defects are the weaknesses allowing for a potential security attack.
2. Compatibility defects: An application doesn’t show consistent performance on particular types
of hardware, operating systems, browsers, and devices or when integrated with certain
software or operating under certain network configurations.Compatibility testing is carried out
in order to discover such issues.
3. Usability defects: make an application inconvenient to use and, thus, affects a user’s
experience with the software. An example is an overly complex signup procedure
4. Performance defects: are those bound to software’s speed, stability, response time, and
resource consumption, and are discovered during performance testing. An example of a
performance defect is a system’s response time being X times longer than that stated in the
requirements.
5. Functional defects: are the errors identified in case the behavior of software is not compliant
with the functional requirements. Such types of defects are discovered via functional testing.
Defect Metrics (Defect Age In Time)
Defect Age (in time) is the difference in time between the date a defect is detected and
the current date (if the defect is still open) or the date the defect was fixed (if the
defect is already fixed).
• The 'defects' are confirmed and assigned (not just reported).
• Dropped defects are not counted.
• The difference in time can be calculated in hours or in days.
• 'Fixed' means that the defect is verified and closed; not just 'completed' by the
developer.
Formula / Calculation
• Defect Age in Time = Defect Fix Date (or Current Date) - Defect Detection Date
Example
• If a defect was detected on 01/01/2021 10:00:00 AM and closed on 01/04/2021
12:00:00 PM, the Defect Age is 74 hours.
Use
• For determining the responsiveness of the development/testing team. The lesser the
age, the better the responsiveness.
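The Defect Age calculation above can be sketched with Python's standard datetime module; this reproduces the 74-hour example (dates read as month/day/year):

```python
from datetime import datetime

# Defect Age in Time = Defect Fix Date (or current date) - Defect Detection Date.
def defect_age_hours(detected, fixed=None):
    end = fixed if fixed is not None else datetime.now()
    return (end - detected).total_seconds() / 3600

detected = datetime(2021, 1, 1, 10, 0, 0)   # 01/01/2021 10:00:00 AM
closed   = datetime(2021, 1, 4, 12, 0, 0)   # 01/04/2021 12:00:00 PM
print(defect_age_hours(detected, closed))   # 74.0
```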
Defect Metrics (Defect Age In Phases)
Is the difference in phases between the defect injection phase and the
defect detection phase.
• ‘defect injection phase’ is the phase in the software life cycle where the
defect was introduced.
• ‘defect detection phase’ is the phase in the software life cycle where the
defect was identified.
Defect Age Formula :
• Defect Age in Phase = Defect Detection Phase - Defect Injection Phase
Defect Metrics (Defect Age In Phases)
Example
Let's say the software life cycle has the following phases:
1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in System Testing and the defect was introduced in Requirements
Development, the Defect Age is 6 (7 - 1).
Use
• For assessing the effectiveness of each phase and any review/testing activities. The
lesser the age, the better the effectiveness.
• Defect Age should be maintained at the lowest minimum number (whether in time or in
phases).
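The phase-based age is just the difference of the two phase positions; a minimal sketch using the eight-phase life cycle listed above:

```python
# Defect Age in Phases = Detection Phase index - Injection Phase index.
PHASES = [
    "Requirements Development", "High-Level Design", "Detail Design",
    "Coding", "Unit Testing", "Integration Testing",
    "System Testing", "Acceptance Testing",
]

def defect_age_phases(injected, detected):
    return PHASES.index(detected) - PHASES.index(injected)

# Injected in Requirements Development (phase 1), found in System Testing (phase 7):
print(defect_age_phases("Requirements Development", "System Testing"))   # 6
```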
Defect Density
Defect Density is the number of confirmed defects detected in a software/component
divided by the size of the software/component, i.e. the number of defects per unit size
of a work product.
The 'defects' are:
1. Confirmed and agreed upon (not just reported).
2. Dropped defects are not counted.
The size is measured in one of the following:
• Function Points (FP)
• Source Lines of Code (SLOC)
Formula / Calculation
• Defect Density = Number of Defects / Size of the Software
Uses
• Comparing the relative number of defects in various software components so that
high-risk components can be identified and resources focused towards them.
• Comparing software/products so that the quality of each software/product can be
quantified and resources focused towards those with low quality.
• For assessing the performance of software developers. Note there are many factors
involved in their performance, and using this metric alone to judge them would be unfair.
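A minimal sketch of the formula, using KSLOC (thousands of source lines) as the size unit; the component counts are hypothetical:

```python
# Defect Density = Number of Defects / Size of the Software.
def defect_density(defects, size_ksloc):
    return defects / size_ksloc

# Component A: 30 confirmed defects in 15 KSLOC; Component B: 25 in 50 KSLOC.
print(defect_density(30, 15))   # 2.0 defects per KSLOC -> higher risk component
print(defect_density(25, 50))   # 0.5 defects per KSLOC
```

Note that the raw defect counts alone (30 vs 25) would have suggested the components were similar; normalizing by size is what exposes the riskier one.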
Defect Detection Efficiency
• Defect Detection Efficiency (DDE) is the number of defects detected during a phase/stage
divided by the total number of defects, expressed as a percentage.
• Defect Detection Percentage: the defect detection percentage (DDP) gives a measure
of testing effectiveness.
• Since the DDP ratio changes over time as more defects are found by customers working
with the software version, one should visualize it with a line chart that starts at 100%
at the moment of the software version's release and shows the trend of how fast DDP
declines.
Defect Detection Efficiency Example
Formula 1
• DDE = (Number of Defects Detected in a Phase / Total Number of Defects) x 100%
Formula 2
• DDE = (Number of Defects Detected Prior to a Phase / Total Number of Defects) x 100%
• The earlier the detection, the less costly the solution.
• Assuming that the internal team does Unit, Integration and System testing, if the DDE
(Formula 2) is only 86.4% at the level of System Testing, the team is weak and is letting
defects slip to the customers/users (Acceptance Testing & Production).
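Formula 2 can be sketched as below; the per-phase counts are hypothetical, chosen to reproduce the 86.4% figure in the example:

```python
# DDE (Formula 2) = defects detected prior to a phase / total defects x 100%.
detected = {
    "Unit Testing": 40,
    "Integration Testing": 25,
    "System Testing": 30,
    "Acceptance Testing": 10,   # found by the customer
    "Production": 5,            # found by end users
}
total = sum(detected.values())   # 110

internal = ["Unit Testing", "Integration Testing", "System Testing"]
dde = sum(detected[p] for p in internal) / total * 100
print(round(dde, 1))   # 86.4 -- 15 of 110 defects slipped past the internal team
```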
Defect Rejection and Defect Leakage Ratio
1. Defect Rejection Ratio (DRR) = (Number of defects rejected / Total number of defects
raised) x 100
2. Defect Leakage Ratio (DLR) = (Number of defects missed / Total defects of
software) x 100
Example One
• 90 defects reported
• 60 are actual defects
• 30 defects are mistakes
• DRR = 30/90 x 100 = 33.3%
Example Two
• 50 defects in system
• 20 detected defects
• 30 missed defects
• DLR = 30/50 x 100 = 60%
• The smaller the values of DRR and DLR, the better the quality of test execution.
• Set the accepted range based on the project target or a comparable project, e.g. a
recommended range between 3-5%.
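Both ratios from the examples above can be computed directly:

```python
# DRR = rejected / total raised x 100;  DLR = missed / total defects x 100.
def drr(rejected, raised):
    return rejected / raised * 100

def dlr(missed, total_defects):
    return missed / total_defects * 100

print(round(drr(30, 90), 1))   # 33.3 -- Example One: 30 of 90 reports were mistakes
print(round(dlr(30, 50), 1))   # 60.0 -- Example Two: 30 of 50 defects leaked past testing
```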
Defect Detection Efficiency - Uses
• For measuring the quality of the processes (process efficiency) within
software development life cycle; by evaluating the degree to which defects
introduced during that phase/stage are eliminated before they are
transmitted into subsequent phases/stages.
• For identifying the phases in the software development life cycle that are
the weakest in terms of quality control and for focusing on them.
• For assessing the performance of the team / software testers. Take extra
caution while using this though; there are many factors involved in their
performance and solely using this metric to judge them would be unfair.
Defect Minimizing
Methods adopted for preventing the introduction of bugs by programmers during development
are:
• Peer Review
• Code Analysis
• Programming techniques adopted
• Software Development Methodologies
How Do You Control Defects?
• The defects can be reduced by:
1. Effectively executing defect analysis
2. Thoroughly analyzing software requirements
3. Using error monitoring software
4. Aggressive regression testing
5. Frequent Code refactoring.
Defect Management Process
Defect Management is a systematic process to identify and fix bugs. A
defect management cycle contains the following stages:
1) Discovery of Defect,
2) Defect Categorization
3) Fixing of Defect by developers
4) Verification by Testers,
5) Defect Closure
6) Defect Reports at the end of project
Defect Management Process
1. Discovery: the project teams have to discover as many defects as possible before the end customer can
discover them. A defect is said to be discovered, and its status changed to Accepted, when it is
acknowledged and accepted by the developers.
2. Categorization: helps the software developers prioritize their tasks, so that the defects that are
highly crucial are fixed first.
3. Defect Resolution: a step-by-step process of fixing the defects.
• Assignment: the defect is assigned to a developer or other technician to fix, and its status is changed
to Responding.
• Schedule fixing: the development team takes charge in this phase, creating a schedule to fix the
defects depending on their priority.
• Fix the defect: while the development team is fixing the defects, the Test Manager tracks the progress
of defect fixing against the schedule.
• Report the resolution: a report of the resolution is obtained from the developers when defects are fixed.
Defect Management Process
4. Verification: after the development team has fixed and reported the defect, the testing
team verifies that the defect is actually resolved.
5. Closure: once a defect has been resolved and verified, its status is changed to Closed.
If not, a notice is sent to the development team to check the defect again.
Defect Life Cycle - Workflow:
• A defect life cycle is defined as a
set of states that a defect
undergoes during its entire
lifetime, from being found to being
resolved, rejected or deferred.
• It is defined to make the
coordination and communication
between various teams easier.
• The bug cycle varies depending on
the organization, tools and the
type of project.
Defect Life Cycle States:
1. New - A potential defect that is raised and yet to be validated.
2. Assigned - Assigned to a development team to address but not yet resolved.
3. Active - The defect is being addressed by the developer and investigation is in progress. At this
stage there are two other possible outcomes: Deferred or Rejected.
4. Test - The defect is fixed and ready for testing.
5. Verified - The defect has been retested and the fix verified by QA.
6. Closed - The final state of the defect: closed after QA retesting, or closed because the defect is a
duplicate or considered NOT a defect.
7. Reopened - When the defect is NOT fixed, QA reopens/reactivates the defect.
8. Deferred - When a defect cannot be addressed in that particular cycle, it is deferred to a future release.
9. Rejected - A defect can be rejected for any of three reasons: duplicate defect, NOT a defect, or
non-reproducible.
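The states above can be sketched as a small state machine; the exact states and allowed transitions vary by organization and tracking tool, so this transition table is one plausible reading of the list, not a standard:

```python
# Minimal sketch of the defect life cycle as a state machine.
TRANSITIONS = {
    "New":      {"Assigned", "Rejected"},
    "Assigned": {"Active"},
    "Active":   {"Test", "Deferred", "Rejected"},
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Closed":   {"Reopened"},
    "Reopened": {"Assigned"},
    "Deferred": {"Assigned"},
    "Rejected": set(),              # terminal state
}

def move(current, target):
    """Advance a defect to a new state, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

# Walk a defect through the happy path.
state = "New"
for nxt in ["Assigned", "Active", "Test", "Verified", "Closed"]:
    state = move(state, nxt)
print(state)   # Closed
```

Encoding the transitions explicitly is what lets a tracking tool enforce the guideline below that statuses are assigned and maintained consistently.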
Guidelines For Defect Life Cycle
• There are specific guidelines to be considered before defining the
states of the defect life cycle. They are:
• A team leader or project manager should ensure that the team
members know their responsibility towards the defect and its status.
• The status of the defect should be assigned and maintained.
• Ensure that the entire defect life cycle is well-documented and
understood by each member of your team.
• Before changing the status of the defect, a plausible and detailed
reason must be specified.
Defect Report
Also known as Bug Report, is a document that identifies
and describes a defect detected by a tester.
The purpose of a defect report is to state the problem as
clearly as possible so that developers can replicate the
defect easily and fix it.
ID: Unique identifier given to the defect. (Usually automated.)
Project: Project name.
Product: Product name.
Release Version: Release version of the product. (e.g. 1.2.3)
Module: Specific module of the product where the defect was detected.
Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)
Summary: Summary of the defect. Keep this clear and concise.
Description: Detailed description of the defect. Describe as much as possible, but without repeating anything or using complex words. Keep it simple but comprehensive.
Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result: The actual result you received when you followed the steps.
Expected Results: The expected results.
Attachments: Attach any additional information like screenshots and logs.
Remarks: Any additional comments on the defect.
Defect Probability: Probability of the defect.
Defect Severity: Severity of the defect.
Defect Priority: Priority of the defect.
Reported By: The name of the person who reported the defect.
Assigned To: The name of the person assigned to analyze/fix the defect.
Status: The status of the defect.
Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)
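A subset of the fields above can be modelled as a record type; this is a hypothetical sketch of what a defect tracking tool might store per report, with made-up field defaults:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a defect report record (subset of the fields above).
@dataclass
class DefectReport:
    id: str
    summary: str
    description: str
    steps_to_replicate: list
    actual_result: str
    expected_result: str
    severity: str = "S3"       # S1 = Critical ... S4 = Trivial
    priority: str = "P3"       # P1 = Urgent ... P4 = Low
    status: str = "New"
    reported_by: str = ""
    assigned_to: str = ""
    attachments: list = field(default_factory=list)

report = DefectReport(
    id="DEF-101",
    summary="Login button unresponsive on Settings page",
    description="Clicking Login after editing the profile does nothing.",
    steps_to_replicate=["Open Settings", "Edit profile", "Click Login"],
    actual_result="Nothing happens.",
    expected_result="User is logged in.",
)
print(report.status)   # New
```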
Reporting Guidelines
• Be specific: Specify the exact action to be taken and mention the
exact path you followed in testing; do not use vague words.
• Be detailed
• Be objective:
• Reproduce the defect by repeating the test
• Review the report
Root Cause Analysis
• RCA (Root Cause Analysis) is a mechanism of analyzing defects to
identify their cause. We brainstorm, read and dig into the defect to identify
whether it was due to a 'testing miss', a 'development miss' or a
'requirement or design miss'.
• When RCA is done accurately, it helps to prevent defects in later
releases or phases.
• A structured and logical approach is required for an effective root cause
analysis.
Advantages Of Root Cause Analysis
• Prevent the reoccurrence of the same problem in the future.
• Eventually, reduce the number of defects reported over time.
• Reduces developmental costs and saves time.
• Improve the software development process and hence aiding quick
delivery to market.
• Improves customer satisfaction.
• Boost productivity.
• Find hidden problems in the system.
• Aids in continuous improvement
Root Cause Analysis Techniques
• 5 Whys: The 5 Whys technique involves asking "why" repeatedly to drill down into the
problem's root cause. This technique helps in uncovering deeper issues beyond the
obvious symptoms.
• Fishbone Diagram (Ishikawa or Cause-and-Effect Diagram): This visual tool helps
identify potential causes of a problem. It categorizes causes into various branches,
including people, process, tools, and more, to provide a holistic view of the issue.
• Fault Tree Analysis (FTA): FTA is a graphical representation of the failure modes and
their causes. It's particularly useful for analyzing complex system failures and their
interdependencies.
Root Cause Analysis Techniques
• Pareto Analysis: This technique helps prioritize the most significant or common causes
of defects. It is based on the Pareto Principle (80/20 rule), which suggests that 80% of
problems are often caused by 20% of the causes.
• Failure Mode and Effects Analysis (FMEA): FMEA is a structured approach that evaluates
the impact of various failure modes and their potential causes. It assigns risk priority
numbers (RPN) to help prioritize which causes to address.
• Brainstorming: A group technique where team members generate ideas and potential
causes of defects. It encourages creative thinking and diverse perspectives.
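The Pareto analysis above can be sketched numerically: rank defect causes by count and find the smallest "vital few" set that accounts for roughly 80% of the defects. The cause counts below are hypothetical.

```python
# Pareto analysis sketch: find the vital few causes covering ~80% of defects.
causes = {"Requirements": 45, "Coding": 30, "Design": 12,
          "Environment": 8, "Build": 5}

total = sum(causes.values())                                    # 100
ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)

cumulative, vital_few = 0, []
for cause, count in ranked:
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.80:    # stop once ~80% is covered
        break

print(vital_few)   # ['Requirements', 'Coding', 'Design'] -> 87% of all defects
```

In practice the results are plotted as a Pareto chart (sorted bars plus a cumulative-percentage line) so the cut-off is visible at a glance.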
Root Cause Analysis Techniques
• Change Analysis: Examining changes in the software, such as code modifications,
configuration changes, or system updates, to identify potential causes of defects.
• Statistical Analysis: Using statistical tools to analyze data and identify patterns,
correlations, and anomalies that could be related to defects.
• Scatter Diagrams: These diagrams help visualize the relationship between two
variables, which can be useful for identifying potential root causes and correlations.
• Regression Analysis: Assessing the relationships between variables to understand how
changes in one variable might affect another. This is particularly useful for identifying
factors contributing to defects.
Root Cause Analysis Techniques
• Control Charts: Monitoring and analyzing process data over time to detect variations
and unusual patterns, which can be indicative of root causes.
• Experience-Based Analysis: Relying on the expertise and experience of software testers,
developers, and other team members to identify potential causes of defects based on
past knowledge and insights.
• Data Mining: Analyzing large datasets to discover hidden patterns or anomalies that
may lead to defect causes.
• Failure Reporting, Analysis, and Corrective Action System (FRACAS): A systematic
process for reporting, tracking, and analyzing failures to identify their root causes and
implement corrective actions.
• Root Cause Mapping: A visual technique that helps map out the sequence of events
leading to a defect, identifying contributing factors and root causes.
Defect Root Cause Analysis
4W1H and 5W Method
• This method involves answering the 4W (What, Where, Who, When) and
1H (How) questions and repeating the process until the primary cause is
identified.
• The example provided is a situation where customers are frustrated and
are complaining about incorrect Equated Monthly Installment (EMI)
calculation.
Defect Root Cause Analysis
4W1H | Question | Example
What | What happened? | EMI calculation is wrong.
Where | Where did it happen? | EMI calculator page.
When | When did it happen? | Whenever calculation is carried out.
Who | Who was involved? | Loan Officer.
How | How did it happen? | Values for Loan Amount and Interest Rate were entered and the Calculate button was pressed.
Why | Why did it happen? | There was a coding error in the EMI calculation.
Defect Root Cause Analysis
5W1H | Question | Example
What | What happened? | There was a coding error in the EMI calculation.
Where | Where did it happen? | Code.
When | When did it happen? | Coding phase.
Who | Who was involved? | Developer.
How | How did it happen? | The formula for EMI calculation was wrongly coded.
Why | Why did it happen? | The requirements specification document incorrectly specified the formula for EMI calculation.
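The root cause traced above is a wrongly coded EMI formula. The standard EMI formula is EMI = P x r x (1 + r)^n / ((1 + r)^n - 1), where P is the principal, r the monthly interest rate, and n the number of months. The sketch below contrasts it with a hypothetical buggy version to show how the defect would surface:

```python
# Correct EMI formula: EMI = P * r * (1 + r)^n / ((1 + r)^n - 1).
def emi(principal, annual_rate_percent, months):
    r = annual_rate_percent / 12 / 100      # monthly rate as a fraction
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# Hypothetical buggy version: the developer coded simple interest instead.
def emi_buggy(principal, annual_rate_percent, months):
    r = annual_rate_percent / 12 / 100
    return principal * r * months           # wrong formula

# 100,000 loan at 12% per annum over 12 months:
print(round(emi(100_000, 12, 12), 2))        # 8884.88 -- correct monthly instalment
print(round(emi_buggy(100_000, 12, 12), 2))  # 12000.0 -- visibly wrong
```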
Next Week
• Black-Box ,White-Box and Experience based Testing techniques
• ASSIGNMENT
Defect Report Assignment
• Identify any software application and explain its functions
• Identify a specific defect in the application and write a defect report.
• Include a root cause analysis