UNIT 5
Metrics for Process and Products: Software measurement, metrics for software quality.
Risk management: Reactive vs. proactive risk strategies, software risks, risk identification, risk projection, risk refinement,
RMMM, RMMM plan.
Quality Management: Quality concepts, software quality assurance, software reviews, formal technical reviews, statistical
software quality assurance, software reliability, the ISO 9000 quality standards.
1) EXPLAIN METRICS FOR PROCESS AND PRODUCTS: THE SOFTWARE MEASUREMENT CONCEPT?
Software measurement
Software metrics can be categorized as: i) Direct measures ii) Indirect measures
▪ Direct measures of the software engineering process include cost and effort applied.
▪ Direct measures of the product include lines of code (LOC) produced, execution speed, memory size,
and defects reported over some set period of time.
▪ Direct measures are relatively easy to collect.
Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability, etc.
❖ Indirect measures are more difficult to assess and can be measured only indirectly.
❖ If the measures are normalized, it is possible to create software metrics.
✓ SIZE-ORIENTED METRICS
• Size-oriented software metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced.
• If a software organization maintains simple records, a table of size-oriented measures can be
created.
Typical size-oriented metrics include:
• Errors per KLOC (thousand lines of code): Measures how many errors there were for every thousand lines
of code.
• Defects per KLOC: Shows how many defects were found per thousand lines of code.
• Cost per LOC (line of code): Calculates the average cost for each line of code.
• Pages of Documentation per KLOC: Indicates how many pages of documentation were created for every
thousand lines of code.
Additionally:
• Errors per Person-Month: Measures how many errors were found for each person-month of work.
• LOC per Person-Month: Shows how many lines of code were written for each person-month of work.
• Cost per Page of Documentation: Calculates the average cost to produce each page of documentation.
• Language Dependency: These metrics vary with the programming language used, so cross-language comparisons can be misleading.
• Nonprocedural Languages: LOC-based metrics do not work well for nonprocedural languages.
• Estimation Challenge: It is hard to estimate the amount of code early in the project, which makes planning difficult.
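The size-oriented metrics above are simple ratios over a project's record. A minimal sketch (the project record values below are hypothetical, not from the text):

```python
# Minimal sketch of size-oriented metric calculations.
# The project record below is hypothetical, for illustration only.
project = {
    "loc": 12_100,      # lines of code produced
    "effort_pm": 24,    # effort in person-months
    "cost": 168_000,    # total cost in dollars
    "doc_pages": 365,   # pages of documentation
    "errors": 134,      # errors found before delivery
    "defects": 29,      # defects reported after delivery
}

kloc = project["loc"] / 1000
print(f"Errors per KLOC: {project['errors'] / kloc:.2f}")
print(f"Defects per KLOC: {project['defects'] / kloc:.2f}")
print(f"Cost per LOC: ${project['cost'] / project['loc']:.2f}")
print(f"Pages of documentation per KLOC: {project['doc_pages'] / kloc:.2f}")
print(f"LOC per person-month: {project['loc'] / project['effort_pm']:.0f}")
```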
✓ FUNCTION-ORIENTED METRICS
1. Function Points:
o Purpose: Measure software based on the functionality it delivers, rather than just the lines of code.
o How It Works: Function points are calculated from five key pieces of information about the software.
2. Key Measures:
o User Inputs: Count each distinct data entry from the user.
o User Outputs: Count each piece of information the software gives to the user (like reports or messages).
o User Inquiries: Count each query that generates an immediate response.
o Files: Count each logical group of data, like a database table.
o External Interfaces: Count each way the software exchanges data with other systems.
3. Calculating Function Points:
o Formula: FP = count total × [0.65 + 0.01 × Σ(Fi)]
o Here, "count total" is the sum of all the weighted measures (inputs, outputs, inquiries, files, and external interfaces), and the Fi (i = 1 to 14) are complexity adjustment values that vary based on how complex the software is.
o To calculate 3D function points, you use: index = I + O + Q + F + E + T + R, where each letter represents a complexity-weighted value for a different element, such as inputs, outputs, inquiries, and data structures.
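A minimal sketch of the standard function-point computation above, using the classic average complexity weights; all counts and adjustment ratings below are hypothetical:

```python
# Sketch of the standard function-point computation:
#   FP = count_total * (0.65 + 0.01 * sum(Fi))
# Average complexity weights from the classic FP model; the counts
# and the 14 adjustment values below are hypothetical.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

counts = {"inputs": 32, "outputs": 60, "inquiries": 24, "files": 8, "interfaces": 2}

fi = [3] * 14  # each Fi rated 0 (no influence) to 5 (essential)

count_total = sum(counts[k] * WEIGHTS[k] for k in counts)
fp = count_total * (0.65 + 0.01 * sum(fi))
print(f"Unadjusted count total: {count_total}")
print(f"Function points: {fp:.1f}")
```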
Key Takeaways:
• Direct measures (cost, effort, LOC, defects) are easy to collect; indirect measures (functionality, quality, reliability) must be derived.
• Size-oriented metrics normalize quality and productivity measures by lines of code, while function-oriented metrics normalize them by function points, which do not depend on the programming language.
2) EXPLAIN METRICS FOR SOFTWARE QUALITY?
1. Correctness
o Definition: The degree to which the software performs its required function.
o Metric: Defects per KLOC, where a defect is a verified lack of conformance to requirements.
2. Maintainability
o Definition: The ease with which software can be updated, fixed, or adapted after its initial release.
o Indirect Metrics:
▪ Mean-Time-to-Change (MTTC): Measures how long it takes to implement a change. Shorter
times indicate better maintainability.
▪ Spoilage: The cost to fix defects found after the software is released. Lower spoilage suggests the
software is easier to maintain.
3. Integrity
o Definition: The ability of the software to withstand attacks and maintain its security.
o Components:
▪ Threat: The likelihood of a specific type of attack occurring.
▪ Security: The likelihood of successfully repelling the attack.
o Integrity Calculation:
▪ Formula: integrity = Σ [1 - (threat × (1 - security))]
▪ Example: If the threat of an attack type is 0.25 and the security against it is 0.95, then integrity = 1 - 0.25 × (1 - 0.95) = 0.9875.
This formula calculates how well the software can protect itself from various attacks (a code sketch follows this list).
4. Usability
o Definition: Measures how user-friendly the software is.
o Characteristics:
▪ Skill Required: How much effort it takes to learn the system.
▪ Efficiency: Time needed to become reasonably efficient with the system.
▪ Productivity: Improvement in productivity once the user is familiar with the system.
▪ User Attitudes: Users' overall feelings about the system, often measured through surveys.
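A minimal sketch of the integrity calculation from item 3 above, applied per attack type and assuming a hypothetical attack profile:

```python
# Sketch of the integrity metric: for each attack type,
#   integrity = 1 - threat * (1 - security)
# The attack profile below is hypothetical.
attacks = [
    {"name": "sql_injection", "threat": 0.25, "security": 0.95},
    {"name": "denial_of_service", "threat": 0.10, "security": 0.90},
]

for a in attacks:
    integrity = 1 - a["threat"] * (1 - a["security"])
    print(f"{a['name']}: integrity = {integrity:.4f}")
```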
Defect Removal Efficiency (DRE)
• Definition: Measures how effectively errors are found and fixed before the software is delivered.
• General Formula: DRE = E / (E + D)
o E: Number of errors found before delivery.
o D: Number of defects found after delivery.
o Ideal DRE: 1, meaning no defects are found post-delivery. Higher DRE values indicate better error filtering.
• Project-Specific DRE:
o Context: Used to assess how well errors are identified and corrected within different phases of
development.
o Phase-Specific Formula: DREi = Ei / (Ei + E(i+1))
▪ Ei: Number of errors found during phase i.
▪ E(i+1): Number of errors found in the next phase that are traceable to errors missed in phase i.
o Objective: Aim for DRE values close to 1 in each phase to ensure errors are caught early.
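A minimal sketch of both DRE formulas; the error counts are hypothetical, and the per-phase version simplifies by treating all next-phase errors as escapes from the current phase:

```python
# Sketch of defect removal efficiency.
# Overall: DRE = E / (E + D); per phase: DRE_i = E_i / (E_i + E_(i+1)).
def dre(found_now: int, found_later: int) -> float:
    return found_now / (found_now + found_later)

E, D = 120, 8  # errors before delivery, defects after delivery (hypothetical)
print(f"Overall DRE: {dre(E, D):.3f}")

# Errors found per phase: requirements, design, code, test (hypothetical)
errors_by_phase = [40, 35, 30, 15]
for i in range(len(errors_by_phase) - 1):
    print(f"Phase {i + 1} DRE: {dre(errors_by_phase[i], errors_by_phase[i + 1]):.3f}")
```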
Summary
• Correctness measures how well the software does what it's supposed to do.
• Maintainability looks at how easy it is to update or fix the software.
• Integrity assesses how well the software can withstand attacks.
• Usability evaluates how user-friendly the software is.
• Defect Removal Efficiency (DRE) measures how effectively errors are found and fixed before and after delivery.
A higher DRE indicates better quality assurance and control.
3) EXPLAIN RISK MANAGEMENT: REACTIVE VS. PROACTIVE RISK STRATEGIES?
• Reactive Strategies: Respond to risks only after they occur, which often forces the team into crisis management ("fire fighting").
• Proactive Strategies: Begin before technical work starts; potential risks are identified early, their probability and impact are assessed, and plans are made to avoid or manage them.
o Contingency Planning: Develops plans to respond effectively if risks do occur, allowing for controlled management.
Risk Management Plan (RMMM)
• RMMM: Stands for the Risk Mitigation, Monitoring, and Management Plan.
• Components:
o Risk Identification: Determine potential risks that could affect the project.
o Risk Projection: Estimate the likelihood and impact of each risk.
o Risk Refinement: Refine the risk management plan based on ongoing project developments.
o Mitigation Strategies: Develop strategies to reduce the probability or impact of risks.
o Monitoring: Continuously monitor risks and adjust plans as needed.
Summary
• Risk Management involves identifying, assessing, and preparing for potential problems.
• Reactive Strategies wait until problems occur, often leading to crisis management.
• Proactive Strategies involve early identification and planning to handle risks before they become serious issues.
• Risk Management Plan (RMMM) includes identifying, projecting, refining, and mitigating risks, along with
ongoing monitoring.
Using proactive risk strategies and a well-developed risk management plan can significantly enhance a project's chances
of success by preventing or effectively addressing potential problems.
Definition of Risk:
• A risk is a potential problem: it may or may not occur, but it always involves two characteristics, uncertainty (the event may or may not happen) and loss (if it happens, unwanted consequences follow).
Risk Analysis:
• Risk analysis involves identifying potential risks, estimating their probability and impact, ranking them, and preparing strategies to manage the most serious ones.
Categories of Risks
1. Project Risks:
o Impact: Can affect the project schedule, budget, or resources.
o Examples: Delays in the project timeline, budget overruns, staffing issues.
o Factors: Project complexity, size, and uncertainty.
2. Technical Risks:
o Impact: Can affect the quality and timely delivery of the software.
o Examples: Difficulties in implementation, unclear design specifications, outdated technology.
o Factors: Ambiguity in specifications, technical uncertainty, new or untested technology.
3. Business Risks:
o Impact: Can affect the feasibility and market acceptance of the software.
o Examples: Developing a product no one wants, losing support from senior management, budget issues.
o Subcategories:
▪ Market Risk: Building a product that lacks market demand.
▪ Strategic Risk: Developing a product that no longer fits the company's strategy.
▪ Sales Risk: Creating a product that the sales team cannot effectively sell.
▪ Management Risk: Losing senior management support due to changes in focus or team.
▪ Budget Risk: Losing financial or personnel commitments.
4. Known Risks:
o Definition: Risks that can be identified based on careful evaluation of the project plan, business
environment, and technical environment.
o Examples: Issues from past projects, such as poor communication or unclear requirements.
5. Predictable Risks:
o Definition: Risks identified from previous project experiences.
o Examples: Staff turnover, communication problems with customers, maintenance issues affecting focus.
6. Unpredictable Risks:
o Definition: Risks that are difficult to foresee and identify in advance.
o Examples: Unexpected external events or sudden changes in technology.
Summary
• Risk Management involves identifying and analyzing potential problems, assessing their likelihood and impact,
and preparing strategies to manage them.
• Categories of Risks:
o Project Risks: Affect project timelines and costs.
o Technical Risks: Impact software quality and implementation.
o Business Risks: Influence market success and organizational support.
o Known Risks: Identified from careful evaluation.
o Predictable Risks: Based on past project experiences.
o Unpredictable Risks: Difficult to foresee but possible.
Understanding and managing these risks effectively can help ensure the success of software projects by preparing for
potential problems and mitigating their impact.
4) EXPLAIN RISK IDENTIFICATION, RISK PROJECTION, RISK REFINEMENT AND RMMM?
• Definition: Risk management is a methodology or mechanism used throughout the software development process to identify, manage, and control risks that arise before and during the software process.
1. Risk Identification
o Purpose: Systematically identify potential threats to the project.
o Methods:
▪ Generic Risks: Common to all software projects (e.g., product size, business impact).
▪ Product-Specific Risks: Unique to the specific project (e.g., technology complexity,
development environment).
▪ Tools: Risk item checklists, expert judgment, historical data.
o Assessment Questions:
▪ Commitment of top managers.
▪ End-user commitment.
▪ Understanding of requirements.
▪ Customer involvement.
▪ Realistic expectations.
2. Risk Projection
o Purpose: Predict the future impact of identified risks.
o Activities:
▪ Developing a Risk Table: List and categorize risks, and estimate their probability and impact.
▪ Assessing Risk Impact: Evaluate how risks might affect the project in terms of cost, schedule,
and performance.
o Components:
▪ Risk drivers.
▪ Impact categories (negligible, marginal, critical, catastrophic).
3. Risk Assessment
o Purpose: Evaluate the risks and their potential effects on the project.
o Activities:
▪ Risk Triplets: Create triplets of risk (ri), likelihood (li), and impact (xi).
▪ Risk Exposure Calculation: Determine overall risk exposure using the formula RE = P × C.
▪ Risk Referent Levels: Define levels at which project decisions (e.g., continue or terminate) are
based on risk impact.
4. Risk Refinement
o Purpose: Continuously update and improve risk assessments and strategies.
o Activities:
▪ Re-evaluate risks as the project progresses.
▪ Adjust risk assessments based on new information or changes in project conditions.
▪ Update risk management plans accordingly.
5. RMMM (Risk Mitigation, Monitoring, and Management)
o Purpose: Implement strategies to handle risks.
o Components:
▪ Risk Mitigation: Develop strategies to reduce the likelihood or impact of risks.
▪ Risk Monitoring: Continuously track identified risks and their status.
▪ Risk Management: Adjust strategies as needed to respond to changes in risk conditions.
Definition:
• Risk Identification: Systematic process of specifying threats to the project plan, including estimates, schedule,
resource loading, etc. It helps in avoiding or controlling known and predictable risks.
1. Generic Risks:
o Product Size: Risks related to the overall size of the software to be built or modified.
o Business Impact: Risks due to constraints or conditions imposed by management or the marketplace.
o Customer Characteristics: Risks related to the sophistication and communication abilities of the
customer and the developer.
o Process Definition: Risks associated with how well the software process is defined and followed.
o Development Environment: Risks tied to the availability and quality of tools used for building the
product.
o Technology to be Built: Risks concerning the complexity and newness of the technology.
o Staff Size and Experience: Risks related to the technical and project experience of the software
engineers.
2. Product-Specific Risks:
o Identified through a clear understanding of the project's technology, team, and environment.
• Key Questions:
1. Have top software and customer managers formally committed to support the project?
2. Are end-users enthusiastic about the project and the product to be built?
3. Are requirements fully understood by the software engineering team and customers?
4. Have customers been involved in defining requirements?
5. Do end-users have realistic expectations?
6. Is project scope stable?
7. Does the software engineering team have the right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the technology to be implemented?
10. Is the team size adequate for the project?
11. Do all customer/user constituencies agree on the project's importance and requirements?
• Note: The degree of project risk is proportional to the number of negative responses to these questions.
1. Performance Risk: Uncertainty that the product will meet its requirements and be fit for its intended use.
2. Cost Risk: Uncertainty that the project budget will be maintained.
3. Support Risk: Uncertainty that the software will be easy to correct, adapt, and enhance.
4. Schedule Risk: Uncertainty that the project schedule will be maintained and the product will be delivered on
time.
Risk Impact Categories:
1. Negligible
2. Marginal
3. Critical
4. Catastrophic
• Impact Assessment: In the impact assessment table (not reproduced here), the first row characterizes, for each category, the potential consequence of undetected software errors or faults.
These points provide a comprehensive overview of how risk identification is performed and assessed in software
development projects.
Definition:
• Risk Projection (or Risk Estimation): The process of evaluating each risk by assessing its probability and the
consequences if it occurs. It helps in understanding the potential impact and likelihood of each risk.
Risk Projection Activities:
1. Establish a scale that reflects the perceived likelihood of each risk.
2. Delineate the consequences of the risk.
3. Estimate the impact of the risk on the project and the product.
4. Note the overall accuracy of the risk projection so that there are no misunderstandings.
Summary: Risk projection involves creating a structured approach to assess risks by determining their probability and potential consequences. By following these activities, project managers can better prepare for and mitigate the impacts of identified risks.
1. List Risks:
o Column 1: List all identified risks using a risk item checklist. Each risk should be described clearly to
ensure that everyone understands the potential issues.
2. Categorize Risks:
o Column 2: Categorize each risk. For example:
▪ PS: Project Size Risk
▪ BU: Business Risk
▪ TE: Technical Risk
▪ OP: Operational Risk
3. Probability of Occurrence:
o Column 3: Enter the probability of occurrence for each risk. This can be represented numerically (e.g.,
0.1 for 10% chance) or using categories (e.g., low, medium, high).
4. Impact of Risk:
o Column 4: Enter the potential impact of each risk. This could be a numerical value or a categorical
description (e.g., negligible, marginal, critical, catastrophic).
5. Sort Risks:
o Sorting: After filling in the table, sort the risks based on their probability and impact. High-probability,
high-impact risks should be placed at the top of the table, and low-probability risks should be placed at
the bottom.
6. Define Cutoff Line:
o Cutoff Line: The project manager studies the sorted table and defines a cutoff line. Risks above this line
are considered for further attention, while risks below the line are re-evaluated.
7. Manage Risks:
o Management: All risks above the cutoff line should be actively managed. The Risk Mitigation,
Monitoring, and Management Plan (RMMM) should be referenced for strategies to handle these risks.
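A minimal sketch of steps 1 through 6 above: build the risk table, sort by probability and impact, and apply a cutoff line. All risks, probabilities, and impact values are hypothetical:

```python
# Sketch of a risk table with sorting and a cutoff line.
# Impact: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible.
# Categories follow the table above (PS, BU, TE, OP).
risks = [
    {"risk": "Size estimate may be significantly low", "category": "PS", "probability": 0.60, "impact": 2},
    {"risk": "Customer will change requirements",      "category": "BU", "probability": 0.80, "impact": 2},
    {"risk": "Technology will not meet expectations",  "category": "TE", "probability": 0.30, "impact": 1},
    {"risk": "Deployment tooling unavailable on time", "category": "OP", "probability": 0.40, "impact": 3},
]

# Sort: highest probability first, then most severe impact first.
risks.sort(key=lambda r: (-r["probability"], r["impact"]))

CUTOFF_PROBABILITY = 0.50  # risks above this line get active RMMM attention
for r in risks:
    action = "MANAGE" if r["probability"] >= CUTOFF_PROBABILITY else "re-evaluate"
    print(f"{r['probability']:.2f}  impact={r['impact']}  [{r['category']}]  {r['risk']} -> {action}")
```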
Assessing Risk Impact: Three factors determine the consequences of a risk: its nature, its scope, and its timing.
1. Nature of the Risk:
o Understand the specific issues or problems that might arise if the risk occurs. This involves detailing what the risk entails and how it could affect the project.
2. Scope of the Risk:
o Evaluate the severity of the risk and its overall distribution. Consider how widespread the impact will be:
▪ Severity: How serious is the impact if the risk occurs?
▪ Distribution: How much of the project or how many stakeholders will be affected?
3. Timing of the Risk:
o Assess when the risk might occur and for how long its impact will be felt. This includes considering if
the risk is immediate, short-term, or long-term and its duration.
By following these steps, you can effectively identify, categorize, prioritize, and manage risks throughout the software
development process.
• Formula: Risk Exposure, RE = P × C
o P: Probability of occurrence (the likelihood that the risk will occur).
o C: Cost of the risk to the project (the financial impact if the risk occurs).
• Example Calculation:
o Probability (P): 80% or 0.80
o Cost (C): The cost to develop 18 components from scratch, each averaging 100 lines of code (LOC) at $14 per LOC:
▪ Total cost = 18 components × 100 LOC × $14/LOC = $25,200
o Risk Exposure: RE = 0.80 × $25,200 = $20,160
• Notation: In the risk triplets described earlier, ri is the identified risk, li is the likelihood (probability) of the risk, and xi is its impact.
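A minimal sketch of risk exposure over a set of risks; the first entry mirrors the worked example above, the second is hypothetical:

```python
# Sketch of risk exposure, RE = P x C, summed over all identified risks.
risks = [
    {"name": "Reusable components must be built from scratch", "p": 0.80, "cost": 25_200},
    {"name": "Customer will change requirements",              "p": 0.50, "cost": 14_000},
]

total_re = 0.0
for r in risks:
    re = r["p"] * r["cost"]
    total_re += re
    print(f"{r['name']}: RE = ${re:,.0f}")

# Total RE can be compared against the overall project cost estimate.
print(f"Total risk exposure: ${total_re:,.0f}")
```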
By thoroughly assessing risk impact, calculating risk exposure, and systematically evaluating and managing risks, you
can effectively control and mitigate potential issues throughout the software development lifecycle.
1. Risk Avoidance
• Definition:
o Proactively taking steps to eliminate or reduce the likelihood and impact of identified risks.
o Aiming to avoid risks through strategic planning and proactive measures.
• Example Scenario: High Staff Turnover
o Identify Causes: Meet with current staff to understand reasons behind high turnover (e.g., poor working
conditions, low pay, competitive job market).
o Mitigate Causes: Address the controllable factors such as improving working conditions or adjusting
compensation.
o Develop Continuity Plans: Prepare for turnover by:
▪ Information Dissemination: Organize project teams so that knowledge is shared and not
concentrated with one individual.
▪ Documentation Standards: Define and enforce standards for documentation to ensure it's done
on time and is comprehensive.
▪ Peer Reviews: Implement peer reviews to ensure multiple team members are familiar with the
project.
▪ Backup Staff: Assign backup personnel for critical roles to maintain project continuity in case
of turnover.
2. Risk Monitoring
• Definition:
o The ongoing process of tracking identified risks and their indicators to determine if they are becoming
more or less likely.
• Monitoring Factors for High Staff Turnover:
o General Attitude: Assess the morale and general attitude of team members to gauge if project pressures
are affecting staff satisfaction.
o Team Dynamics: Observe how well the team is working together and whether they are functioning
cohesively.
o Interpersonal Relationships: Monitor relationships among team members to identify any emerging
conflicts or issues that could affect stability.
• RMMM Plan:
o A comprehensive document that outlines all activities related to risk management, including risk
identification, assessment, mitigation, and monitoring.
o Purpose: Serves as a key component of the overall project plan, guiding the project manager in handling
risks throughout the project lifecycle.
• Risk Information Sheet (RIS):
o Definition: An individual document used to track specific risks.
o Format:
▪ Risk Description: Detailed description of the risk.
▪ Probability and Impact: Assessment of the risk’s likelihood and potential impact.
▪ Mitigation Strategies: Planned actions to reduce or manage the risk.
▪ Current Status: Ongoing updates on the risk's status and any changes.
▪ Responsibility: Assignment of team members responsible for managing the risk.
o Database System:
▪ RIS can be maintained using a database to facilitate easy creation, information entry, priority
ordering, searching, and analysis.
o Figure 6.5 (Illustrative Example of RIS Format):
▪ Risk ID: Unique identifier for the risk.
▪ Risk Description: Brief and detailed explanation of the risk.
▪ Likelihood: Probability of the risk occurring (e.g., low, medium, high).
▪ Impact: Potential consequences if the risk occurs (e.g., low, moderate, severe).
▪ Mitigation Actions: Steps taken to address or reduce the risk.
▪ Status: Current status of the risk (e.g., ongoing, resolved, monitored).
▪ Owner: Person responsible for managing the risk.
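A minimal sketch of an RIS record as it might be kept in a simple database, with fields following the format above; all values are hypothetical:

```python
# Sketch of a risk information sheet (RIS) record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskInformationSheet:
    risk_id: str
    description: str
    likelihood: str          # e.g., low / medium / high
    impact: str              # e.g., low / moderate / severe
    mitigation_actions: List[str] = field(default_factory=list)
    status: str = "ongoing"  # ongoing / resolved / monitored
    owner: str = ""

ris = RiskInformationSheet(
    risk_id="P02-4-32",
    description="Staff turnover disrupts critical development roles",
    likelihood="high",
    impact="severe",
    mitigation_actions=["cross-train backup staff", "enforce documentation standards"],
    owner="project manager",
)
print(ris)
```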
Summary
• Risk Avoidance: Proactively eliminate or reduce risks through planning and preventive measures.
• Risk Monitoring: Continuously track risks and their indicators to assess their changing likelihood and impact.
• RMMM Plan: A structured document or system for managing risks throughout the project, including risk
identification, assessment, mitigation, and monitoring.
• RIS: A detailed document or database entry for tracking individual risks, their impacts, and management
strategies.
Quality Concepts
1. Quality:
- Definition: Quality in software refers to how well the software works and meets user requirements. It encompasses
both design and conformance.
- Types of Quality:
- Quality of Design: The characteristics specified by designers, including requirements and system design. A higher
design quality ensures the product is built as specified.
- Quality of Conformance: The degree to which the final product adheres to the design specifications. It focuses on the
implementation and whether the system meets its requirements and performance goals.
2. Quality Control:
- Definition: Involves measuring and testing work products to ensure they meet the specified requirements. It includes
inspections, reviews, and tests.
- Process: Quality control uses feedback loops to minimize defects and adjust processes when products fail to meet
specifications. It can be automated, manual, or a combination of both.
3. Quality Assurance:
- Definition: A systematic pattern of activities designed to provide confidence in the product's quality to management. It
includes auditing and reporting functions.
- Goal: To provide data to management and assure that product quality meets its goals. Management is responsible for
addressing problems and applying resources to resolve quality issues.
4. Cost of Quality:
- Definition: The total costs associated with achieving and maintaining quality, including all related activities.
- Types of Quality Costs:
- Prevention Costs: Costs incurred to prevent defects (e.g., quality planning, training, test equipment).
- Appraisal Costs: Costs related to measuring and monitoring quality (e.g., inspections, equipment calibration).
- Failure Costs: Costs arising from defects, categorized into:
- Internal Failure Costs: Costs from defects detected before shipping (e.g., rework, repair).
- External Failure Costs: Costs from defects found after shipment (e.g., product returns, warranty work).
The cost to find and repair defects increases as you move from prevention to detection, and from internal to
external failures.
8) Explain Software Quality Assurance, Software Reviews, Formal Technical Reviews, Statistical Software Quality Assurance, Software Reliability, and the ISO 9000 Quality Standards?
Definition: Software Quality Assurance (SQA) is a systematic process that ensures software meets specified
requirements and standards. SQA involves planning, executing, and monitoring activities to ensure the software is
delivered with high quality, addressing defects and ensuring customer satisfaction.
1. Understanding Requirements:
o Detailed Analysis: Quality issues often arise from inadequate understanding or incomplete
documentation of software requirements. A clear and thorough understanding of both explicit and
implicit requirements is crucial.
o Example: A requirement that specifies "the system should be fast" might be ambiguous. Clarifying what
"fast" means in terms of performance metrics is essential.
2. Implicit Requirements:
o Unspoken Needs: Implicit requirements are those that are not clearly stated but are essential for the
software's functionality. Missing these can lead to significant quality issues.
o Example: A software application might implicitly require compatibility with certain operating systems
that are not explicitly mentioned in the requirements.
3. Development Criteria:
o Standards and Criteria: Establishing clear standards and criteria for development helps guide the
process and ensures that quality is maintained throughout the software lifecycle.
o Example: Development standards might include coding conventions, design patterns, and performance
benchmarks.
SQA Activities:
1. Prepare an SQA Plan for the Project:
o Activities: Develop the SQA plan during project planning; it identifies the evaluations, audits, reviews, standards, and procedures that apply to the project.
o Example: An SQA plan might include detailed testing schedules and criteria for passing or failing tests.
2. Participate in Software Process Development:
o Activities: Involve SQA teams in developing and refining software processes to ensure they meet quality
standards.
o Example: Collaborating on process improvement initiatives and incorporating feedback into process
changes.
3. Review Engineering Activities for Compliance:
o Activities: Regularly review engineering activities to ensure they adhere to established processes and
standards.
o Example: Conducting reviews of design documents and code to verify compliance with requirements.
4. Audit Work Products for Adherence to Processes:
o Activities: Perform audits of work products to ensure they meet quality standards and process
requirements.
o Example: Auditing test results and documentation to ensure completeness and accuracy.
5. Document and Report Noncompliance to Management:
o Activities: Record instances of noncompliance and communicate them to senior management for
resolution.
o Example: Creating detailed reports of audit findings and tracking the resolution of identified issues.
2. Software Reviews
Software Review
Definition of Software Review:
Software reviews are systematic evaluations conducted at various stages of the software development process to identify
and address errors and defects. These reviews act as a "filter" for the software engineering process, aiming to improve the
quality of the software product by detecting issues early and ensuring they are corrected before they propagate further in
the development lifecycle.
Types of Reviews:
1. Informal Meeting:
o Description: Informal meetings are less structured discussions that can occur outside the formal work
environment. They provide an opportunity for open, casual discussions about technical issues and project
progress.
o Advantages: Encourages open communication and quick feedback.
o Disadvantages: May lack formal documentation and rigor, potentially leading to missed issues.
2. Formal Technical Review (FTR):
o Description: A Formal Technical Review, sometimes referred to as a “walkthrough” or “inspection,” is a
structured and systematic review conducted by a team of software engineers and other stakeholders.
o Objectives: To detect and resolve errors, ensure adherence to standards, and improve the quality of the
software.
o Advantages: Provides a rigorous examination of work products, often leading to higher quality
outcomes.
o Process:
1. Preparation: Participants review the materials in advance of the meeting.
2. Meeting: A structured discussion to identify and address issues.
3. Reporting: Documenting and summarizing issues identified during the review.
Objective: The primary goal of formal technical reviews is to identify and resolve errors early in the process to prevent
them from becoming more costly defects later.
• Early Detection Benefit: Identifying and correcting errors early in the development process prevents these errors
from propagating to later stages, thereby reducing overall development and support costs.
• Cost Example:
o Design Stage: An error discovered during the design phase might cost 1 unit to correct.
o Pre-Testing Stage: The same error identified just before testing could cost 6.5 units to fix.
o Testing Stage: During testing, the cost could rise to 15 units.
o Post-Release: Errors discovered after the software has been released can cost between 60 and 100 units
to address, including support and patching costs.
Concept: Defect amplification refers to the phenomenon where errors introduced in earlier stages of development
become more pronounced as the software progresses through subsequent stages.
• Model Description:
o Diagram Representation: The model uses boxes to represent different software development steps (e.g.,
preliminary design, detailed design, coding). Errors may be generated during each step and could be
missed by the review process.
o Error Propagation: Errors that are not detected in early stages may be carried forward and potentially
amplified by subsequent development activities.
o Detection Efficiency: Each step in the development process has an associated efficiency for detecting
errors, which can impact the overall quality of the final product.
• Example:
o Initial Design Defects: Suppose there are 10 defects identified in the preliminary design phase. If no
reviews are conducted, these defects may be amplified as the software moves through detailed design and
coding phases.
o Final Outcome: If, for instance, 50% of defects are detected and corrected at each stage, 10 initial
defects could grow to 93 errors before testing begins, with 12 latent errors potentially reaching the field.
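A minimal sketch of the defect amplification model described above; the amplification factors, generated-error counts, and detection efficiencies are hypothetical (raise the detection efficiencies above 0 to model the effect of reviews):

```python
# Sketch of the defect-amplification model: each phase passes errors through,
# may amplify them, adds newly generated errors, and removes a fraction of
# the total via reviews. All rates below are hypothetical.
def propagate(errors_in: float, amplification: float, generated: float,
              detection_efficiency: float) -> float:
    total = errors_in * amplification + generated
    return total * (1 - detection_efficiency)

phases = [
    # (phase name, amplification, newly generated errors, detection efficiency)
    ("preliminary design", 1.0, 10, 0.0),  # 10 defects introduced, no reviews
    ("detail design",      1.5, 25, 0.0),
    ("coding",             3.0, 25, 0.0),
]

errors = 0.0
for name, amp, gen, eff in phases:
    errors = propagate(errors, amp, gen, eff)
    print(f"After {name}: {errors:.0f} errors remain undetected")
```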
Conclusion: Software reviews are a critical component of the software development process, helping to identify and
address defects early. Implementing both informal and formal reviews, understanding the cost impact of defects, and
recognizing the phenomenon of defect amplification are crucial for improving software quality and managing
development costs effectively.
Definition: Statistical Quality Assurance (SQA) employs quantitative methods to evaluate and enhance software quality
by analyzing defect data and identifying patterns.
Steps:
1. Collect and categorize information about software errors and defects.
2. Trace each defect back to its underlying cause (e.g., nonconformance to specifications, design error, violation of standards, poor communication with the customer).
3. Use the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes) to isolate the "vital few" defect causes.
4. Once the vital few causes are identified, correct the problems that caused the defects.
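A minimal sketch of steps 1 and 3: tally defects by underlying cause and report the cumulative share, so the "vital few" causes stand out. The defect data is hypothetical:

```python
# Sketch of the Pareto step in statistical SQA.
from collections import Counter

defect_causes = (
    ["incomplete or erroneous specification"] * 24
    + ["misinterpretation of customer communication"] * 13
    + ["intentional deviation from specification"] * 10
    + ["violation of programming standards"] * 7
    + ["error in data representation"] * 6
)

counts = Counter(defect_causes)
total = sum(counts.values())
cumulative = 0
for cause, n in counts.most_common():
    cumulative += n
    print(f"{cause}: {n} defects ({100 * cumulative / total:.0f}% cumulative)")
```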
4. Software Reliability
Definition: Software reliability measures the probability that software will perform its intended functions without failure
within a specified environment and time frame.
Measures:
• Mean-Time-Between-Failure: MTBF = MTTF + MTTR, where MTTF is the mean time to failure and MTTR is the mean time to repair.
• Availability: Availability = [MTTF / (MTTF + MTTR)] × 100%, the probability that the program is operating according to requirements at a given point in time.
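A minimal sketch of these measures, using hypothetical failure-log values in hours:

```python
# Sketch of the classic reliability measures:
#   MTBF = MTTF + MTTR
#   availability = MTTF / (MTTF + MTTR) * 100%
mttf = 68.0  # mean time to failure (hours, hypothetical)
mttr = 3.0   # mean time to repair (hours, hypothetical)

mtbf = mttf + mttr
availability = mttf / (mttf + mttr) * 100
print(f"MTBF: {mtbf:.1f} hours")
print(f"Availability: {availability:.2f}%")
```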
Software Safety:
• Definition: Focuses on identifying and mitigating potential hazards that could cause system failures.
• Techniques: Includes hazard analysis, risk assessment, and implementing safety measures.
• Generic Quality Assurance: ISO 9000 outlines quality assurance principles and elements applicable to any
organization, regardless of industry or product type. This standard provides a broad framework for establishing
and maintaining quality systems.
• Third-Party Audits: Companies seeking ISO 9000 certification undergo rigorous evaluations by independent
auditors who assess compliance with the standard's requirements. Successful audits result in certification by a
recognized registration body.
• Quality System Elements: ISO 9000 defines key components of a quality assurance system, including
organizational structure, procedures, processes, and resources necessary for effective quality management. It does
not specify the exact implementation methods but provides a general framework.
• Focus on Software Engineering: ISO 9001 is the ISO 9000 standard that applies to software engineering; it sets out the requirements for a quality assurance system in this field.
• 20 Essential Requirements: ISO 9001 includes 20 key requirements that organizations must meet to establish an
effective quality assurance system. These requirements cover a wide range of quality management aspects,
ensuring comprehensive quality control.
• Key Requirements Include:
o Management Responsibility: Defines the roles and responsibilities of management in maintaining and
improving quality.
o Quality System: Specifies the structure and documentation needed for managing quality effectively.
o Contract Review: Outlines processes for reviewing and ensuring that contractual requirements are met.
o Design Control: Ensures that design processes are controlled and meet quality standards.
o Document and Data Control: Manages the documentation and data necessary for quality assurance.
o Product Identification and Traceability: Ensures products are identifiable and traceable throughout the
production and delivery process.
o Process Control: Establishes controls for processes to maintain quality standards.
o Inspection and Testing: Defines methods for inspecting and testing products to ensure they meet quality
requirements.
o Corrective and Preventive Action: Provides procedures for addressing and preventing issues that affect
quality.
o Control of Quality Records: Manages records related to quality assurance to ensure they are accurate
and accessible.
o Internal Quality Audits: Requires regular internal audits to assess the effectiveness of the quality
management system.
o Training: Ensures that staff are adequately trained to perform their roles effectively.
o Servicing: Addresses requirements for post-delivery servicing and support.
o Statistical Techniques: Utilizes statistical methods for quality control and improvement.
By adhering to ISO 9001, organizations can establish robust quality assurance systems that ensure consistent product
quality and customer satisfaction.