
PRACTICAL-05

AIM: To prepare a case study of an ATM Transaction System.

An ATM enables secure self-service banking (balance inquiry, deposit, withdrawal, transfer). Its software controls the card reader, PIN pad, cash dispenser, deposit validator, receipt printer, and display, and connects to the bank over an encrypted channel. All events are logged both locally and at the backend.

Actors and Roles

• Customer: Engages the ATM for balance inquiries, deposits, withdrawals, and fund transfers through card and PIN authentication.
• ATM Technician: Accesses maintenance and repair functions using secure service credentials to ensure continuous operation.
• Bank Backend: Validates credentials, authorizes transactions, updates account records, and stores audit logs for regulatory compliance.
Use Cases:
• Check Balance: Card + PIN → bank validates → ATM shows balances.
• Deposit: Authenticate → insert cash/check → validator confirms → pending receipt.
• Withdraw: Authenticate → enter amount → verify balance + cash → dispense.
• Transfer: Authenticate → select accounts + amount → bank processes → confirm.
• Maintenance: Technician runs tests, replaces parts, applies patches; all logged.
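The Withdraw use case above can be sketched as a short sequence of checks: authenticate, verify the balance, verify cash on hand, then dispense and log. A minimal illustration follows; the class and method names are hypothetical, not taken from any real ATM codebase, and the accounts dictionary stands in for the bank backend.

```python
# Sketch of the Withdraw use case. All names are illustrative.

class ATM:
    def __init__(self, cash_on_hand, accounts):
        self.cash_on_hand = cash_on_hand  # cash loaded in the dispenser
        self.accounts = accounts          # stand-in for the bank backend
        self.log = []                     # every event is logged

    def withdraw(self, card, pin, amount):
        account = self.accounts.get(card)
        if account is None or account["pin"] != pin:
            self.log.append(("auth_failed", card))
            return "AUTHENTICATION FAILED"
        if amount > account["balance"]:
            self.log.append(("insufficient_funds", card, amount))
            return "INSUFFICIENT FUNDS"
        if amount > self.cash_on_hand:
            self.log.append(("atm_out_of_cash", amount))
            return "ATM CANNOT DISPENSE"
        account["balance"] -= amount      # backend updates the record
        self.cash_on_hand -= amount
        self.log.append(("dispensed", card, amount))
        return "DISPENSED"

atm = ATM(cash_on_hand=500, accounts={"C1": {"pin": "1234", "balance": 300}})
print(atm.withdraw("C1", "0000", 100))  # wrong PIN: AUTHENTICATION FAILED
print(atm.withdraw("C1", "1234", 400))  # exceeds balance: INSUFFICIENT FUNDS
print(atm.withdraw("C1", "1234", 200))  # succeeds: DISPENSED
```

Note how each exit path writes to the log before returning, matching the requirement that all events be logged locally.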

Conclusion
This case study details the Bank ATM Subsystem’s architecture, actors,
use cases, security measures, and exception flows. By integrating
customer services, technician workflows, and bank backend processing,
the system delivers secure, reliable, and efficient self-service banking.
Future enhancements could include contactless authentication, mobile
wallet integration, and AI-driven fraud detection to further improve
customer experience and operational resilience.
COCOMO Model

Aim: To understand the fundamental principles of the COCOMO model, its main variants, and its application in software project management.

Theory:

COCOMO (COnstructive COst MOdel) is an algorithmic software cost estimation model developed by Dr. Barry Boehm.
It provides a structured approach to predicting the resources and time required to build a software system.
The model uses a set of formulas that relate the size of the software, measured in Lines of Code (LOC), to the effort (in person-months), development time (in months), and cost.

Types of COCOMO Model:

There are three main versions of the COCOMO model:

1. Basic COCOMO: A simple, high-level model used for quick, initial estimates. It relies solely on the estimated LOC of the project. It is often used early in the project lifecycle, during feasibility studies or initial proposal stages.

2. Intermediate COCOMO: A more detailed model that provides a more accurate estimate. It incorporates 15 cost drivers whose ratings are multiplied together to form the Effort Adjustment Factor (EAF), accounting for project attributes such as product complexity, developer experience, and required reliability.

3. Detailed COCOMO: The most comprehensive and accurate version. It applies the cost drivers at a more granular level, to individual development phases (e.g., design, coding, testing).

Procedure:

To use the COCOMO model, we follow these steps:

1. Select the COCOMO Model: Choose the appropriate model (Basic or Intermediate) based on the project's complexity and the required accuracy of the estimate. We will use the Intermediate COCOMO for this practical, as it provides a more realistic estimate.

2. Estimate the Size: Determine the estimated size of the software in Kilo Lines of Code (KLOC). This can be done using techniques such as expert judgment, analogy, or Function Point Analysis. For our project, let us assume the estimated size is 50 KLOC.

3. Identify the Project Type: Classify the project into one of the three modes:
   o Organic: Small, simple, well-understood projects with a stable team.
   o Semi-detached: Medium-sized projects with a mix of experienced and inexperienced personnel.
   o Embedded: Complex projects with stringent hardware, software, or operational constraints.
   For our practical, let us assume the project is Semi-detached.

4. Calculate Nominal Effort & Time: Use the formulas for the chosen project mode to calculate the nominal effort (E) and development time (D) without considering cost drivers.
   o For Semi-detached projects:
     ▪ E = 3.0 × (KLOC)^1.12 person-months
     ▪ D = 2.5 × (E)^0.35 months

5. Determine Cost Driver Ratings: Evaluate each of the 15 cost drivers and assign a rating (very low, low, nominal, high, very high, or extra high). Each rating corresponds to a numerical effort multiplier (EM).

6. Calculate Effort Adjustment Factor (EAF): Multiply all the effort multipliers together to get a single EAF value.
   o EAF = EM1 × EM2 × ... × EM15

7. Calculate Adjusted Effort & Time: Multiply the nominal effort by the EAF to get the final adjusted effort, then use the adjusted effort to calculate the final development time.
   o Adjusted Effort: E_adj = E × EAF
   o Adjusted Time: D_adj = 2.5 × (E_adj)^0.35
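The steps above can be worked through numerically for the assumed 50 KLOC Semi-detached project. The sketch below uses the standard Semi-detached coefficients; the EAF value of 1.10 is illustrative only, standing in for a real rating of the 15 cost drivers.

```python
# Intermediate COCOMO worked example for the assumed 50 KLOC
# Semi-detached project. The EAF of 1.10 is an illustrative value.

KLOC = 50.0
a, b = 3.0, 1.12          # Semi-detached effort coefficients
c, d = 2.5, 0.35          # development-time coefficients

# Step 4: nominal effort (person-months) and development time (months)
E = a * KLOC ** b
D = c * E ** d

# Steps 5-6: EAF = product of the 15 effort multipliers (assumed here)
EAF = 1.10

# Step 7: adjusted effort and time
E_adj = E * EAF
D_adj = c * E_adj ** d

print(f"Nominal effort  E     = {E:.1f} person-months")
print(f"Nominal time    D     = {D:.1f} months")
print(f"Adjusted effort E_adj = {E_adj:.1f} person-months")
print(f"Adjusted time   D_adj = {D_adj:.1f} months")
```

For 50 KLOC this gives a nominal effort of roughly 240 person-months over about 17 months; the EAF then scales the effort up or down, which in turn shifts the schedule through the time formula.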

Diagram:
PRACTICAL-06
CASE STUDY
Why Do Software Systems Fail?
Software development projects are complex undertakings involving people, processes, and technology. Despite advances in methodologies and tools, a significant percentage of projects fail to meet deadlines, stay within budget, or deliver the intended value. This document examines the root causes of software project failures, categorized into organizational, managerial, technical, and cultural factors.

Key Reasons for Failure

• Poor Requirements & Scope: Ambiguity, scope creep, changing priorities (e.g., FBI
Virtual Case File).
• Weak Planning & Management: Unrealistic timelines, no milestones, no risk handling.
• Communication Breakdowns: Misaligned teams, silent stakeholders, poor tools.
• Technical Challenges: Wrong technology, weak architecture, underestimated complexity.
• Skill Gaps: Lack of expertise, high turnover, poor onboarding.
• Quality Control Issues: Insufficient testing, skipped best practices, rushed releases.
• Low Stakeholder Involvement: Minimal feedback, detached sponsors, decision delays.
• Cultural/Organizational Problems: Politics, resistance to change, unrealistic expectations.

Common Failure Patterns

Research (e.g., Standish Group CHAOS Report) shows these patterns are common:
• Over 50% of failed projects suffer from unclear requirements.
• Nearly 70% experience some degree of scope creep.
• Communication gaps are cited in 30-40% of failures.

Mitigation Strategies
To prevent failure:
• Implement requirements workshops and clear change control.
• Use iterative methodologies (Agile, Scrum, Kanban) to adapt quickly.
• Maintain transparent communication channels with all stakeholders.
• Prioritize early and continuous testing.
• Perform post-mortems on smaller projects to improve processes.

Case Study: Healthcare.gov (2013)

How political pressure, fragmented leadership, and technical missteps crippled America's flagship health insurance marketplace on launch day.
Objective: Build the ACA's federal insurance marketplace, expected to serve millions of users.
Executive Summary

On October 1, 2013, the United States government launched Healthcare.gov, the online
federal marketplace designed to let Americans shop for and enroll in health insurance under
the Affordable Care Act (ACA). The site was meant to serve millions of users from day one,
integrating dozens of federal and state databases. Instead, the launch turned into a public
catastrophe. Of the 4 million visitors who tried to access the site on day one, only six were
able to successfully complete enrollment. The pages took minutes to load, the login
processes failed, and the back-end systems frequently crashed.
The disaster was not due to a single bug or server issue but a chain reaction of failures in
governance, procurement, project management, and political decision-making. Warnings
had been issued months in advance, but political imperatives to meet the ACA’s deadlines
overrode technical readiness. Ultimately, an emergency “Tiger Team” stabilized the site
within six weeks, allowing for nearly 975,000 enrollments by December 2013.

Failure Causes:

• Leadership: No single accountable executive.
• Vendors: 60 contracts, no lead integrator.
• Technical: Bloated code, poor optimization, minimal load testing.
• Management: Scope changes, unresolved defects.
• Testing: No end-to-end or security testing.
• Launch: Big-bang release, no phased rollout.
• Analysis: Ignored warnings; political deadlines prioritized over readiness.
• Fix: A "Tiger Team" stabilized the system in six weeks; total costs later exceeded $2 billion.

Lessons Learned

• Assign single accountable leadership.
• Enforce strong vendor coordination.
• Plan for integration early.
• Perform realistic load testing and use phased rollouts.
• Act on risk assessments.
• Put technical readiness above political deadlines.

Analysis
The Healthcare.gov crash was not a case of "bad luck." It was the inevitable result of structural
flaws and ignored warnings. A McKinsey report in April 2013 explicitly highlighted integration
risks, insufficient testing, and unclear leadership. Political pressure to meet the ACA’s enrollment
deadline overrode technical caution.

What Could Have Been Done Differently

• Leadership: Assign a single, empowered executive with authority over all contractors and
agencies.
• Procurement: Appoint a lead systems integrator and reduce the number of primary vendors
to simplify coordination.
• Development Process: Use iterative, agile-style releases with continuous integration and
regular public testing.
• Technical Standards: Load test for expected peak traffic (plus a safety margin), optimize
front-end performance, and harden the infrastructure.
• Launch Strategy: Roll out in phases—for example, allow browsing first, then registration—
to limit initial load.
• Risk Management: Act on independent risk assessments and delay the launch if critical
defects remain unresolved.

Conclusion
The 2013 Healthcare.gov crash remains one of the most instructive failures in government
IT. It demonstrates that the most dangerous project risks are not purely technical—they are
organizational and political. The technology to build Healthcare.gov existed; what was
missing was cohesive leadership, disciplined execution, and a willingness to delay until
ready.
