
UNIT-II SOFTWARE PROJECT MANAGEMENT

Basic concepts of life cycle models – different models and milestones; software project planning –
identification of activities and resources; concepts of feasibility study; techniques for estimation of
schedule and effort; software cost estimation models and concepts of software engineering
economics; techniques of software project control and reporting; introduction to measurement of
software size; introduction to the concepts of risk and its mitigation; configuration management.
Basic concepts of life cycle models

SDLC - Overview

Software Development Life Cycle (SDLC) is a process used by the software industry to design,
develop and test high-quality software. The SDLC aims to produce high-quality software that meets
or exceeds customer expectations and is completed within time and cost estimates.

 SDLC is the acronym of Software Development Life Cycle.


 It is also called the Software Development Process.
 SDLC is a framework defining tasks performed at each step in the software development
process.
 ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the
standard that defines all the tasks required for developing and maintaining software.

What is SDLC?

SDLC is a process followed for a software project, within a software organization. It consists of a
detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The
life cycle defines a methodology for improving the quality of software and the overall development
process.

The following figure is a graphical representation of the various stages of a typical SDLC.
A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the
senior members of the team with inputs from the customer, the sales department, market surveys and
domain experts in the industry. This information is then used to plan the basic project approach and to
conduct a product feasibility study in the economic, operational and technical areas.

Planning for the quality assurance requirements and identification of the risks associated with the
project is also done in the planning stage. The outcome of the technical feasibility study is to define
the various technical approaches that can be followed to implement the project successfully with
minimum risks.

Stage 2: Defining Requirements

Once the requirement analysis is done, the next step is to clearly define and document the product
requirements and get them approved by the customer or the market analysts. This is done through
an SRS (Software Requirement Specification) document, which contains all the product
requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture

SRS is the reference for product architects to come out with the best architecture for the product to be
developed. Based on the requirements specified in SRS, usually more than one design approach for
the product architecture is proposed and documented in a DDS - Design Document Specification.

This DDS is reviewed by all the important stakeholders, and based on various parameters such as risk
assessment, product robustness, design modularity, and budget and time constraints, the best design
approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with their
communication and data flow representation with external and third-party modules (if any). The
internal design of all the modules of the proposed architecture should be clearly defined, down to the
minutest detail, in the DDS.

Stage 4: Building or Developing the Product

In this stage of SDLC the actual development starts and the product is built. The programming code is
generated as per DDS during this stage. If the design is performed in a detailed and organized manner,
code generation can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization, and programming tools
such as compilers, interpreters and debuggers are used to generate the code. Different high-level
programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming
language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product

In modern SDLC models, testing activities are involved in all the stages, so this stage is usually a
subset of all of them. However, it refers to the testing-only stage of the product, where product
defects are reported, tracked, fixed and retested until the product reaches the quality standards
defined in the SRS.
Stage 6: Deployment in the Market and Maintenance

Once the product is tested and ready to be deployed, it is released formally in the appropriate market.
Sometimes product deployment happens in stages, as per the business strategy of the organization.
The product may first be released in a limited segment and tested in the real business environment
(UAT - User Acceptance Testing).

Then, based on the feedback, the product may be released as it is or with suggested enhancements in
the targeted market segment. After the product is released in the market, its maintenance is done for
the existing customer base.

SDLC Models

There are various software development life cycle models defined and designed which are followed
during the software development process. These models are also referred to as Software Development
Process Models. Each process model follows a series of steps unique to its type to ensure success in
the process of software development.

Following are the most important and popular SDLC models followed in the industry −

 Waterfall Model
 Iterative Model
 Spiral Model
 V-Model
 Big Bang Model

Other related methodologies are the Agile model, the RAD (Rapid Application Development) model
and the Prototyping models.

SDLC - Waterfall Model

The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-
sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase
must be completed before the next phase can begin and there is no overlapping in the phases.

The Waterfall model is the earliest SDLC approach that was used for software development.

The waterfall Model illustrates the software development process in a linear sequential flow. This
means that any phase in the development process begins only if the previous phase is complete. In this
waterfall model, the phases do not overlap.

Waterfall Model - Design

The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure
the success of the project. In the Waterfall approach, the whole process of software development is
divided into separate phases. In this Waterfall model, typically, the outcome of one phase acts as the
input for the next phase sequentially.

The following illustration is a representation of the different phases of the Waterfall Model.
The sequential phases in Waterfall model are −

 Requirement Gathering and analysis − All possible requirements of the system to be
developed are captured in this phase and documented in a requirement specification
document.
 System Design − The requirement specifications from the first phase are studied in this phase
and the system design is prepared. This system design helps in specifying hardware and
system requirements and helps in defining the overall system architecture.
 Implementation − With inputs from the system design, the system is first developed in small
programs called units, which are integrated in the next phase. Each unit is developed and
tested for its functionality, which is referred to as Unit Testing.
 Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration, the entire system is tested
for any faults and failures.
 Deployment of system − Once the functional and non-functional testing is done, the product
is deployed in the customer environment or released into the market.
 Maintenance − Some issues come up in the client environment. To fix those issues, patches
are released. Also, to enhance the product, better versions are released. Maintenance is done
to deliver these changes in the customer environment.

All these phases are cascaded to each other, so that progress is seen as flowing steadily downwards
(like a waterfall) through the phases. The next phase starts only after the defined set of goals is
achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model,
phases do not overlap.

Waterfall Model - Application


Every software product developed is different and requires a suitable SDLC approach based on
internal and external factors. Some situations where the use of the Waterfall model is most
appropriate are −

 Requirements are very well documented, clear and fixed.
 Product definition is stable.
 Technology is understood and is not dynamic.
 There are no ambiguous requirements.
 Ample resources with required expertise are available to support the product.
 The project is short.

Waterfall Model - Advantages

The advantages of waterfall development are that it allows for departmentalization and control. A
schedule can be set with deadlines for each stage of development and a product can proceed through
the development process model phases one by one.

Development moves from concept, through design, implementation, testing, installation and
troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in
strict order.

Some of the major advantages of the Waterfall Model are as follows −

 Simple and easy to understand and use.
 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and a
review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.
 Clearly defined stages.
 Well understood milestones.
 Easy to arrange tasks.
 Process and results are well documented.

Waterfall Model - Disadvantages

The disadvantage of waterfall development is that it does not allow much reflection or revision. Once
an application is in the testing stage, it is very difficult to go back and change something that was not
well documented or thought through in the concept stage.

The major disadvantages of the Waterfall Model are as follows −

 No working software is produced until late in the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for projects where requirements are at a moderate to high risk of changing; risk
and uncertainty are high with this process model.
 It is difficult to measure progress within stages.
 Cannot accommodate changing requirements.
 Adjusting scope during the life cycle can end a project.
 Integration is done as a "big bang" at the very end, which does not allow identifying any
technological or business bottlenecks or challenges early.
SDLC - Iterative Model

In the Iterative model, the iterative process starts with a simple implementation of a small set of the
software requirements and iteratively enhances the evolving versions until the complete system is
implemented and ready to be deployed.

An iterative life cycle model does not attempt to start with a full specification of requirements.
Instead, development begins by specifying and implementing just part of the software, which is then
reviewed to identify further requirements. This process is then repeated, producing a new version of
the software at the end of each iteration of the model.

Iterative Model - Design

Iterative process starts with a simple implementation of a subset of the software requirements and
iteratively enhances the evolving versions until the full system is implemented. At each iteration,
design modifications are made and new functional capabilities are added. The basic idea behind this
method is to develop a system through repeated cycles (iterative) and in smaller portions at a time
(incremental).

The following illustration is a representation of the Iterative and Incremental model −

Iterative and incremental development is a combination of the iterative design or iterative method
and the incremental build model for development. During software development, more than one
iteration of the software development cycle may be in progress at the same time. This process may be
described as an "evolutionary acquisition" or "incremental build" approach.

In this incremental model, the whole requirement is divided into various builds. During each iteration,
the development module goes through the requirements, design, implementation and testing phases.
Each subsequent release of the module adds function to the previous release. The process continues
until the complete system is ready as per the requirements.

The key to a successful use of an iterative software development lifecycle is rigorous validation of
requirements, and verification & testing of each version of the software against those requirements
within each cycle of the model. As the software evolves through successive cycles, tests must be
repeated and extended to verify each version of the software.

Iterative Model - Application

Like other SDLC models, Iterative and incremental development has some specific applications in the
software industry. This model is most often used in the following scenarios −

 Requirements of the complete system are clearly defined and understood.
 Major requirements must be defined; however, some functionalities or requested
enhancements may evolve with time.
 There is a time-to-market constraint.
 A new technology is being used and is being learnt by the development team while working
on the project.
 Resources with needed skill sets are not available and are planned to be used on a contract
basis for specific iterations.
 There are some high-risk features and goals which may change in the future.

Iterative Model - Pros and Cons

The advantage of this model is that there is a working model of the system at a very early stage of
development, which makes it easier to find functional or design flaws. Finding issues at an early stage
of development makes it possible to take corrective measures within a limited budget.

The disadvantage of this SDLC model is that it is applicable only to large and bulky software
development projects. This is because it is hard to break a small software system into further small
serviceable increments/modules.

The advantages of the Iterative and Incremental SDLC Model are as follows −

 Some working functionality can be developed quickly and early in the life cycle.
 Results are obtained early and periodically.
 Parallel development can be planned.
 Progress can be measured.
 Less costly to change the scope/requirements.
 Testing and debugging during smaller iteration is easy.
 Risks are identified and resolved during iteration; and each iteration is an easily managed
milestone.
 Easier to manage risk - High risk part is done first.
 With every increment, operational product is delivered.
 Issues, challenges and risks identified from each increment can be utilized/applied to the next
increment.
 Risk analysis is better.
 It supports changing requirements.
 Initial Operating time is less.
 Better suited for large and mission-critical projects.
 During the life cycle, software is produced early which facilitates customer evaluation and
feedback.

The disadvantages of the Iterative and Incremental SDLC Model are as follows −

 More resources may be required.
 Although the cost of change is lower, it is not very suitable for frequently changing
requirements.
 More management attention is required.
 System architecture or design issues may arise because not all requirements are gathered at
the beginning of the entire life cycle.
 Defining increments may require definition of the complete system.
 Not suitable for smaller projects.
 Management complexity is higher.
 The end of the project may not be known, which is a risk.
 Highly skilled resources are required for risk analysis.
 Project progress is highly dependent upon the risk analysis phase.

SDLC - Spiral Model

The spiral model combines the idea of iterative development with the systematic, controlled aspects of
the waterfall model. The Spiral model is thus a combination of the iterative development process
model and the sequential linear development model (the waterfall model), with a very high emphasis
on risk analysis. It allows incremental releases of the product, or incremental refinement, through
each iteration around the spiral.

Spiral Model - Design

The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.

Identification

This phase starts with gathering the business requirements in the baseline spiral. In the subsequent
spirals as the product matures, identification of system requirements, subsystem requirements and unit
requirements are all done in this phase.

This phase also includes understanding the system requirements by continuous communication
between the customer and the system analyst. At the end of the spiral, the product is deployed in the
identified market.

Design

The Design phase starts with the conceptual design in the baseline spiral and involves architectural
design, logical design of modules, physical product design and the final design in the subsequent
spirals.

Construct or Build

The Construct phase refers to production of the actual software product at every spiral. In the baseline
spiral, when the product is just thought of and the design is being developed a POC (Proof of
Concept) is developed in this phase to get customer feedback.

Then in the subsequent spirals with higher clarity on requirements and design details a working model
of the software called build is produced with a version number. These builds are sent to the customer
for feedback.

Evaluation and Risk Analysis

Risk Analysis includes identifying, estimating and monitoring the technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the end of
first iteration, the customer evaluates the software and provides feedback.
The following illustration is a representation of the Spiral Model, listing the activities in each phase.

Based on the customer evaluation, the software development process enters the next iteration and
subsequently follows the linear approach to implement the feedback suggested by the customer. The
process of iterations along the spiral continues throughout the life of the software.

Spiral Model Application

The Spiral Model is widely used in the software industry as it is in sync with the natural development
process of any product, i.e. learning with maturity, which involves minimum risk for the customer as
well as for the development firm.

The following pointers explain the typical uses of a Spiral Model −

 When there is a budget constraint and risk evaluation is important.
 For medium to high-risk projects.
 Long-term project commitment because of potential changes to economic priorities as the
requirements change with time.
 Customer is not sure of their requirements, which is usually the case.
 Requirements are complex and need evaluation to get clarity.
 New product line which should be released in phases to get enough customer feedback.
 Significant changes are expected in the product during the development cycle.
Spiral Model - Pros and Cons

The advantage of the spiral life cycle model is that it allows elements of the product to be added in
when they become available or known. This assures that there is no conflict with previous
requirements and design.

This method is consistent with approaches that have multiple software builds and releases, which
allows an orderly transition to a maintenance activity. Another positive aspect of this method
is that the spiral model forces early user involvement in the system development effort.

On the other side, it requires very strict management to complete such products, and there is a risk of
running the spiral in an indefinite loop. So, the discipline of change and the extent to which change
requests are taken is very important to develop and deploy the product successfully.

The advantages of the Spiral SDLC Model are as follows −

 Changing requirements can be accommodated.
 Allows extensive use of prototypes.
 Requirements can be captured more accurately.
 Users see the system early.
 Development can be divided into smaller parts and the risky parts can be developed earlier,
which helps in better risk management.

The disadvantages of the Spiral SDLC Model are as follows −

 Management is more complex.
 End of the project may not be known early.
 Not suitable for small or low-risk projects; could be expensive for small projects.
 Process is complex.
 Spiral may go on indefinitely.
 Large number of intermediate stages requires excessive documentation.

SDLC - V-Model

The V-model is an SDLC model where execution of processes happens in a sequential manner in a V-
shape. It is also known as Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing phase
for each corresponding development stage. This means that for every single phase in the development
cycle, there is a directly associated testing phase. This is a highly-disciplined model and the next
phase starts only after completion of the previous phase.

V-Model - Design

Under the V-Model, the corresponding testing phase of the development phase is planned in parallel.
So, there are Verification phases on one side of the ‘V’ and Validation phases on the other side. The
Coding Phase joins the two sides of the V-Model.

The following illustration depicts the different phases in a V-Model of the SDLC.
V-Model - Verification Phases

There are several Verification phases in the V-Model, each of which is explained in detail below.

Business Requirement Analysis

This is the first phase in the development cycle, where the product requirements are understood from
the customer's perspective. This phase involves detailed communication with the customer to
understand their expectations and exact requirements. This is a very important activity and needs to be
managed well, as most customers are not sure about what exactly they need. The acceptance
test design planning is done at this stage, as business requirements can be used as an input for
acceptance testing.

System Design

Once you have the clear and detailed product requirements, it is time to design the complete system.
The system design comprises understanding and detailing the complete hardware and
communication setup for the product under development. The system test plan is developed based on
the system design. Doing this at an earlier stage leaves more time for the actual test execution later.

Architectural Design

Architectural specifications are understood and designed in this phase. Usually more than one
technical approach is proposed, and based on the technical and financial feasibility, the final decision
is taken. The system design is broken down further into modules, each taking up different
functionality. This is also referred to as High Level Design (HLD).

The data transfer and communication between the internal modules and with the outside world (other
systems) is clearly understood and defined in this stage. With this information, integration tests can be
designed and documented during this stage.

Module Design

In this phase, the detailed internal design for all the system modules is specified, referred to as Low
Level Design (LLD). It is important that the design is compatible with the other modules in the
system architecture and the other external systems. Unit tests are an essential part of any
development process and help eliminate the maximum number of faults and errors at a very early
stage. These unit tests can be designed at this stage, based on the internal module designs.

Coding Phase

The actual coding of the system modules designed in the design phase is taken up in the Coding
phase. The most suitable programming language is decided based on the system and architectural
requirements.

The coding is performed based on the coding guidelines and standards. The code goes through
numerous code reviews and is optimized for best performance before the final build is checked into
the repository.

Validation Phases

The different Validation Phases in a V-Model are explained in detail below.

Unit Testing

Unit tests designed in the module design phase are executed on the code during this validation phase.
Unit testing is the testing at code level and helps eliminate bugs at an early stage, though all defects
cannot be uncovered by unit testing.
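
To make this concrete, here is a minimal unit-test sketch in Python (the apply_discount function is a hypothetical example invented for illustration, not part of any model described above):

# test_discount.py - a minimal unit-test sketch using Python's unittest.
# The function under test would normally live in its own module.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()

Such tests exercise one unit in isolation; as noted above, they catch many bugs early but cannot uncover all defects.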

Integration Testing

Integration testing is associated with the architectural design phase. Integration tests are performed to
test the coexistence and communication of the internal modules within the system.

System Testing

System testing is directly associated with the system design phase. System tests check the entire
system functionality and the communication of the system under development with external systems.
Most of the software and hardware compatibility issues can be uncovered during this system test
execution.

Acceptance Testing

Acceptance testing is associated with the business requirement analysis phase and involves testing the
product in the user environment. Acceptance tests uncover compatibility issues with the other systems
available in the user environment. They also discover non-functional issues, such as load and
performance defects, in the actual user environment.
V-Model ─ Application

V-Model application is almost the same as for the waterfall model, as both models are of a sequential
type. Requirements have to be very clear before the project starts, because it is usually expensive to
go back and make changes. This model is used in the medical development field, as it is a strictly
disciplined domain.

The following pointers list some of the most suitable scenarios in which to use the V-Model.

 Requirements are well defined, clearly documented and fixed.
 Product definition is stable.
 Technology is not dynamic and is well understood by the project team.
 There are no ambiguous or undefined requirements.
 The project is short.

V-Model - Pros and Cons

The advantage of the V-Model method is that it is very easy to understand and apply. The simplicity
of this model also makes it easier to manage. The disadvantage is that the model is not flexible to
changes, and if there is a requirement change, which is very common in today's dynamic world, it
becomes very expensive to make the change.

The advantages of the V-Model method are as follows −

 This is a highly-disciplined model and phases are completed one at a time.
 Works well for smaller projects where requirements are very well understood.
 Simple and easy to understand and use.
 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and a
review process.

The disadvantages of the V-Model method are as follows −

 High risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for projects where requirements are at a moderate to high risk of changing.
 Once an application is in the testing stage, it is difficult to go back and change a functionality.
 No working software is produced until late in the life cycle.

SDLC - Big Bang Model

The Big Bang model is an SDLC model in which no specific process is followed. Development
simply starts with the required money and effort as the input, and the output is the developed
software, which may or may not be as per customer requirements. The Big Bang model does not
follow a process or procedure, and very little planning is required. Even the customer is not sure
about what exactly they want, and the requirements are implemented on the fly without much analysis.

Usually this model is followed for small projects where the development teams are very small.

Big Bang Model ─ Design and Application


The Big Bang model involves focusing all possible resources on software development and coding,
with very little or no planning. The requirements are understood and implemented as they come. Any
change required may or may not need a revamp of the complete software.

This model is ideal for small projects with one or two developers working together, and is also useful
for academic or practice projects. It is an ideal model for products where the requirements are not well
understood and no final release date is given.

Big Bang Model - Pros and Cons

The advantage of the Big Bang model is that it is very simple and requires very little or no planning.
It is easy to manage, and no formal procedures are required.

However, the Big Bang model is a very high-risk model, and changes in the requirements or
misunderstood requirements may even lead to a complete reversal or scrapping of the project. It is
ideal only for repetitive or small projects with minimum risk.

The advantages of the Big Bang Model are as follows −

 This is a very simple model.
 Little or no planning required.
 Easy to manage.
 Very few resources required.
 Gives flexibility to developers.
 It is a good learning aid for newcomers or students.

The disadvantages of the Big Bang Model are as follows −

 Very high risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Can turn out to be very expensive if requirements are misunderstood.

SDLC - Agile Model

The Agile SDLC model is a combination of iterative and incremental process models, with a focus on
process adaptability and customer satisfaction through rapid delivery of a working software product.
Agile methods break the product into small incremental builds. These builds are provided in
iterations. Each iteration typically lasts about one to three weeks. Every iteration involves
cross-functional teams working simultaneously on various areas like −

 Planning
 Requirements Analysis
 Design
 Coding
 Unit Testing and
 Acceptance Testing.

At the end of the iteration, a working product is displayed to the customer and important stakeholders.

What is Agile?
The Agile model believes that every project needs to be handled differently and that existing methods
need to be tailored to best suit the project requirements. In Agile, tasks are divided into time boxes
(small time frames) to deliver specific features for a release.

An iterative approach is taken and a working software build is delivered after each iteration. Each
build is incremental in terms of features; the final build holds all the features required by the customer.

Here is a graphical illustration of the Agile Model −

The Agile thought process started early in software development and became popular with time due
to its flexibility and adaptability.

The most popular Agile methods include Rational Unified Process (1994), Scrum (1995), Crystal
Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development,
and Dynamic Systems Development Method (DSDM) (1995). These are now collectively referred to
as Agile Methodologies, after the Agile Manifesto was published in 2001.

Following are the Agile Manifesto principles −

 Individuals and interactions − In Agile development, self-organization and motivation are
important, as are interactions like co-location and pair programming.
 Working software − Demo working software is considered the best means of communication
with the customers to understand their requirements, instead of just depending on
documentation.
 Customer collaboration − As the requirements cannot be gathered completely in the
beginning of the project due to various factors, continuous customer interaction is very
important to get proper product requirements.
 Responding to change − Agile development is focused on quick responses to change and
continuous development.

Agile Vs Traditional SDLC Models

Agile is based on adaptive software development methods, whereas traditional SDLC models like
the waterfall model are based on a predictive approach. Predictive teams in the traditional SDLC
models usually work with detailed planning and have a complete forecast of the exact tasks and
features to be delivered in the next few months or during the product life cycle.

Predictive methods depend entirely on the requirement analysis and planning done at the beginning
of the cycle. Any changes to be incorporated go through strict change control management and
prioritization.

Agile uses an adaptive approach where there is no detailed planning and there is clarity on future
tasks only in respect of what features need to be developed. There is feature-driven development, and
the team adapts to changing product requirements dynamically. The product is tested very
frequently, through the release iterations, minimizing the risk of any major failures in the future.

Customer interaction is the backbone of the Agile methodology, and open communication with
minimum documentation is a typical feature of an Agile development environment. Agile teams
work in close collaboration with each other and are most often located in the same geographical
location.

Agile Model - Pros and Cons

Agile methods are now widely accepted in the software world. However, this method may
not always be suitable for all products. Here are some pros and cons of the Agile model.

The advantages of the Agile Model are as follows −

 Is a very realistic approach to software development.
 Promotes teamwork and cross-training.
 Functionality can be developed rapidly and demonstrated.
 Resource requirements are minimum.
 Suitable for fixed or changing requirements.
 Delivers early partial working solutions.
 Good model for environments that change steadily.
 Minimal rules, documentation easily employed.
 Enables concurrent development and delivery within an overall planned context.
 Little or no planning required.
 Easy to manage.
 Gives flexibility to developers.

The disadvantages of the Agile Model are as follows −

 Not suitable for handling complex dependencies.
 Greater risk to sustainability, maintainability and extensibility.
 An overall plan, an agile leader and agile PM practice are a must, without which it will not
work.
 Strict delivery management dictates the scope, the functionality to be delivered, and the
adjustments to meet the deadlines.
 Depends heavily on customer interaction, so if the customer is not clear, the team can be
driven in the wrong direction.
 There is a very high individual dependency, since there is minimum documentation
generated.
 Transfer of technology to new team members may be quite challenging due to the lack of
documentation.

SDLC - RAD Model

The RAD (Rapid Application Development) model is based on prototyping and iterative
development with no specific planning involved. The process of writing the software itself involves
the planning required for developing the product.

Rapid Application Development focuses on gathering customer requirements through workshops or
focus groups, early testing of the prototypes by the customer using the iterative concept, reuse of the
existing prototypes (components), continuous integration and rapid delivery.

What is RAD?

Rapid application development is a software development methodology that uses minimal planning in
favor of rapid prototyping. A prototype is a working model that is functionally equivalent to a
component of the product.

In the RAD model, the functional modules are developed in parallel as prototypes and are integrated
to make the complete product for faster product delivery. Since there is no detailed preplanning, it is
easier to incorporate changes within the development process.

RAD projects follow the iterative and incremental model and have small teams comprising
developers, domain experts, customer representatives and other IT resources working progressively
on their component or prototype.

The most important aspect for this model to be successful is to make sure that the prototypes
developed are reusable.

RAD Model Design

RAD model distributes the analysis, design, build and test phases into a series of short, iterative
development cycles.

Following are the various phases of the RAD Model −

Business Modelling

The business model for the product under development is designed in terms of the flow of information
and the distribution of information between various business channels. A complete business analysis
is performed to find the vital information for the business, how it can be obtained, how and when the
information is processed, and what the factors driving a successful flow of information are.

Data Modelling
The information gathered in the Business Modelling phase is reviewed and analyzed to form sets of
data objects vital for the business. The attributes of all data sets are identified and defined. The
relations between these data objects are established and defined in detail in relevance to the business
model.

Process Modelling

The data object sets defined in the Data Modelling phase are converted to establish the business
information flow needed to achieve specific business objectives as per the business model. The
process model for any changes or enhancements to the data object sets is defined in this phase.
Process descriptions for adding, deleting, retrieving or modifying a data object are given.

Application Generation

The actual system is built and coding is done by using automation tools to convert process and data
models into actual prototypes.

Testing and Turnover

The overall testing time is reduced in the RAD model as the prototypes are independently tested
during every iteration. However, the data flow and the interfaces between all the components need to
be thoroughly tested with complete test coverage. Since most of the programming components have
already been tested, it reduces the risk of any major issues.

The following illustration describes the RAD Model in detail.

RAD Model Vs Traditional SDLC

The traditional SDLC follows a rigid process model with a high emphasis on requirement analysis and
gathering before coding starts. It puts pressure on the customer to sign off the requirements before
the project starts, and the customer does not get a feel for the product, as there is no working build
available for a long time.

The customer may need some changes after seeing the software. However, the change process
is quite rigid, and it may not be feasible to incorporate major changes in the product in the traditional
SDLC.

The RAD model focuses on iterative and incremental delivery of working models to the customer.
This results in rapid delivery to the customer and customer involvement during the complete
development cycle of the product, reducing the risk of non-conformance with the actual user
requirements.

RAD Model - Application

The RAD model can be applied successfully to projects in which clear modularization is possible. If
the project cannot be broken into modules, RAD may fail.

The following pointers describe the typical scenarios where RAD can be used −

 RAD should be used only when a system can be modularized to be delivered in an
incremental manner.
 It should be used if there is high availability of designers for modelling.
 It should be used only if the budget permits use of automated code-generating tools.
 The RAD SDLC model should be chosen only if domain experts are available with relevant
business knowledge.
 It should be used where the requirements change during the project and working prototypes
are to be presented to the customer in small iterations of 2-3 months.

RAD Model - Pros and Cons

The RAD model enables rapid delivery as it reduces the overall development time due to the
reusability of components and parallel development. RAD works well only if highly skilled engineers
are available and the customer is also committed to achieving the targeted prototype in the given time
frame. If commitment is lacking on either side, the model may fail.

The advantages of the RAD Model are as follows −

 Changing requirements can be accommodated.
 Progress can be measured.
 Iteration time can be short with use of powerful RAD tools.
 Productivity with fewer people in a short time.
 Reduced development time.
 Increases reusability of components.
 Quick initial reviews occur.
 Encourages customer feedback.
 Integration from the very beginning solves a lot of integration issues.

The disadvantages of the RAD Model are as follows −

 Dependency on technically strong team members for identifying business requirements.
 Only systems that can be modularized can be built using RAD.
 Requires highly skilled developers/designers.
 High dependency on modelling skills.
 Inapplicable to cheaper projects, as the cost of modelling and automated code generation is
very high.
 Management complexity is higher.
 Suitable only for systems that are component-based and scalable.
 Requires user involvement throughout the life cycle.
 Suitable only for projects requiring shorter development times.

SDLC - Software Prototype Model

Software prototyping refers to building software application prototypes which display the
functionality of the product under development but may not actually hold the exact logic of the
original software.

Software prototyping is becoming very popular as a software development model, as it enables
understanding of customer requirements at an early stage of development. It helps get valuable
feedback from the customer and helps software designers and developers understand what exactly is
expected from the product under development.

What is Software Prototyping?

A prototype is a working model of software with some limited functionality. The prototype does not
always hold the exact logic used in the actual software application, and is an extra effort to be
considered under effort estimation.

Prototyping is used to allow the users to evaluate developer proposals and try them out before
implementation. It also helps in understanding requirements which are user-specific and may not have
been considered by the developer during product design.

Following is a stepwise approach to designing a software prototype.

Basic Requirement Identification

This step involves understanding the very basic product requirements, especially in terms of the user
interface. The more intricate details of the internal design and external aspects like performance and
security can be ignored at this stage.

Developing the initial Prototype

The initial prototype is developed in this stage, where the very basic requirements are showcased and
user interfaces are provided. These features may not exactly work in the same manner internally in the
actual software developed; workarounds are used to give the same look and feel to the customer in the
prototype developed.

Review of the Prototype

The prototype developed is then presented to the customer and the other important stakeholders in the
project. The feedback is collected in an organized manner and used for further enhancements in the
product under development.

Revise and Enhance the Prototype

The feedback and the review comments are discussed during this stage, and some negotiations happen
with the customer based on factors like time and budget constraints and the technical feasibility of the
actual implementation. The changes accepted are again incorporated in the new prototype developed,
and the cycle repeats until the customer expectations are met.

Prototypes can have horizontal or vertical dimensions. A horizontal prototype displays the user
interface for the product and gives a broader view of the entire system, without concentrating on
internal functions. A vertical prototype, on the other hand, is a detailed elaboration of a specific
function or a subsystem in the product.

The purposes of horizontal and vertical prototypes are different. Horizontal prototypes are used to
get more information at the user interface level and on the business requirements. They can even be
presented in sales demos to get business in the market. Vertical prototypes are technical in nature
and are used to get details of the exact functioning of the subsystems, for example, database
requirements, interaction and data processing loads in a given subsystem.

Software Prototyping - Types

There are different types of software prototypes used in the industry. Following are the major
software prototyping types used widely −

Throwaway/Rapid Prototyping

Throwaway prototyping is also called rapid or close-ended prototyping. This type of prototyping
uses very little effort and minimal requirement analysis to build a prototype. Once the actual
requirements are understood, the prototype is discarded and the actual system is developed with a
much clearer understanding of the user requirements.

Evolutionary Prototyping

Evolutionary prototyping, also called breadboard prototyping, is based on building actual functional
prototypes with minimal functionality in the beginning. The prototype developed forms the heart of
the future prototypes, on top of which the entire system is built. In evolutionary prototyping, the
well-understood requirements are included in the prototype and further requirements are added as and
when they are understood.

Incremental Prototyping

Incremental prototyping refers to building multiple functional prototypes of the various sub-systems
and then integrating all the available prototypes to form a complete system.

Extreme Prototyping

Extreme prototyping is used in the web development domain. It consists of three sequential phases.
First, a basic prototype with all the existing pages is presented in HTML format. Then, the data
processing is simulated using a prototype services layer. Finally, the services are implemented and
integrated into the final prototype. The name Extreme Prototyping draws attention to the second
phase of the process, where a fully functional UI is developed with very little regard to the actual
services.
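
As a rough sketch of the second phase (the /api/orders endpoint and the canned data below are hypothetical; real prototype service layers vary widely), the data processing behind the HTML pages can be simulated by a stub server that returns fixed JSON:

# stub_services.py - simulates a prototype services layer by returning
# canned JSON instead of performing any real data processing.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_ORDERS = [{"id": 1, "item": "Widget", "status": "shipped"}]  # fake data

class StubServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/orders":  # hypothetical endpoint used by the UI
            body = json.dumps(CANNED_ORDERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The HTML prototype pages can call this server as if it were a real backend.
    HTTPServer(("localhost", 8000), StubServiceHandler).serve_forever()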

Software Prototyping - Application

Software prototyping is most useful in the development of systems having a high level of user
interaction, such as online systems. Systems which need users to fill out forms or go through various
screens before data is processed can use prototyping very effectively to give the exact look and feel
even before the actual software is developed.

Software that involves a great deal of data processing, where most of the functionality is internal with
very little user interface, does not usually benefit from prototyping. Prototype development could be
an extra overhead in such projects and may need a lot of extra effort.

Software Prototyping - Pros and Cons

Software prototyping is used in typical cases, and the decision should be taken very carefully so that
the effort spent in building the prototype adds considerable value to the final software developed. The
model has its own pros and cons, discussed as follows.

The advantages of the Prototyping Model are as follows −

 Increased user involvement in the product even before its implementation.
 Since a working model of the system is displayed, the users get a better understanding of the
system being developed.
 Reduces time and cost as the defects can be detected much earlier.
 Quicker user feedback is available, leading to better solutions.
 Missing functionality can be identified easily.
 Confusing or difficult functions can be identified.

The Disadvantages of the Prototyping Model are as follows −

 Risk of insufficient requirement analysis owing to too much dependency on the prototype.
 Users may get confused between the prototypes and the actual systems.
 Practically, this methodology may increase the complexity of the system, as the scope of the
system may expand beyond original plans.
 Developers may try to reuse the existing prototypes to build the actual system, even when it is
not technically feasible.
 The effort invested in building prototypes may be too much if it is not monitored properly.

Software Project Planning

A software project is the complete procedure of software development, from requirement gathering
to testing and maintenance, carried out according to the execution methodologies in a specified period
of time to achieve the intended software product.

Need of Software Project Management

Software development is a relatively new stream in world business, and there is very little experience
in building software products. Most software products are tailored to the customer's requirements.
Most significantly, the underlying technology changes and advances so broadly and rapidly that the
experience gained on one product may not apply to another. All such business and environmental
constraints bring risk to software development; hence, it is essential to manage software projects
efficiently.
Software Project Manager

A software project manager is responsible for planning and scheduling project development. They
manage the work to ensure that it is completed to the required standard. They monitor progress to
check that development is on time and within budget. Project planning must incorporate the major
issues like size and cost estimation, scheduling, project monitoring, personnel selection and
evaluation, and risk management. To plan a successful software project, we must understand:

o Scope of the work to be completed
o Risk analysis
o The resources required
o The project milestones to be accomplished
o The process to be followed

Software project planning starts before technical work starts. The various planning activities are as
follows: the size is the crucial parameter for the estimation of other activities; resource requirements
are estimated on the basis of cost and development time; and the project schedule, which depends on
resources and development time, may prove to be very useful for controlling and monitoring the
progress of the project.

Identification of Activities

Software project management consists of many activities, which include planning of the project,
deciding the scope of the product, estimation of cost in different terms, scheduling of tasks, etc.
The list of activities is as follows:

1. Project Planning and Tracking
2. Project Resource Management
3. Scope Management
4. Estimation Management
5. Project Risk Management
6. Scheduling Management
7. Project Communication Management
8. Configuration Management

Now we will discuss all these activities -

1. Project Planning: It is a set of multiple processes, or we can say that it is a task performed
before the construction of the product starts.

2. Scope Management: It describes the scope of the project. Scope management is important because
it clearly defines what will be done and what will not. Scope management keeps the project confined
to restricted and quantifiable tasks, which can be easily documented, and this in turn avoids cost and
time overrun.

3. Estimation Management: This is not only about cost estimation; whenever we start to develop
software, we also estimate its size (lines of code), effort and time, as well as cost. Estimation covers
the following items (a worked sketch follows the list below):

o Size of software
o Quality
o Hardware
o Communication
o Training
o Additional Software and tools
o Skilled manpower
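
As a worked sketch of such an estimate, the well-known basic COCOMO model (one of the cost estimation models covered later in this unit) computes effort and time from size; the 32 KLOC figure below is an assumed input, not from this text:

# cocomo_sketch.py - basic COCOMO estimate for an organic-mode project.
# Effort = 2.4 * KLOC^1.05 (person-months); Time = 2.5 * Effort^0.38 (months).
kloc = 32.0                      # assumed estimated size: 32,000 lines of code

effort = 2.4 * kloc ** 1.05      # development effort in person-months
time = 2.5 * effort ** 0.38      # nominal development time in months
staff = effort / time            # average team size implied by the estimate

print(f"Effort: {effort:.1f} person-months")     # about 91 person-months
print(f"Schedule: {time:.1f} months")            # about 14 months
print(f"Average staffing: {staff:.1f} people")   # roughly 6 to 7 people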

4. Scheduling Management: Scheduling management in software refers to completing all the
activities in the specified order and within the time slotted to each activity. Project managers define
multiple tasks and arrange them keeping various factors in mind.

For scheduling, it is necessary to (a small sketch of the calculation follows this list):

o Find out multiple tasks and correlate them.
o Divide time into units.
o Assign the respective number of work units for every job.
o Calculate the total time from start to finish.
o Break down the project into modules.
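
A minimal sketch of the middle steps (all tasks, durations and dependencies below are hypothetical) computes the total time from start to finish along the chain of correlated tasks:

# schedule_sketch.py - compute total time from start to finish for a set of
# correlated tasks (hypothetical tasks; durations are in work-unit days).
tasks = {
    "requirements": (5, []),                 # (duration, prerequisites)
    "design":       (7, ["requirements"]),
    "coding":       (10, ["design"]),
    "testing":      (6, ["coding"]),
    "user_manual":  (4, ["requirements"]),   # can run in parallel with design
}

finish = {}
def finish_time(name):
    """Earliest finish of a task = its duration + latest prerequisite finish."""
    if name not in finish:
        duration, prereqs = tasks[name]
        finish[name] = duration + max((finish_time(p) for p in prereqs), default=0)
    return finish[name]

total = max(finish_time(t) for t in tasks)
print(f"Total time from start to finish: {total} days")   # 28 days here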

5. Project Resource Management: In software development, all the elements used in the project are
referred to as resources. These can be human resources, productive tools and libraries.

Resource management includes:

o Creating a project team and assigning responsibilities to every team member
o Developing a resource plan derived from the project plan
o Adjustment of resources

6. Project Risk Management: Risk management consists of all the activities like identification,
analysis and preparation of the plan for predictable and unpredictable risks in the project.

Several points show the risks in the project:

o Experienced staff leave the project and new staff join it.
o Changes in requirements.
o Changes in technologies and the environment.
o Market competition.

7. Project Communication Management: Communication is an essential factor in the success of the
project. It is a bridge between the client, the organization, the team members and the other
stakeholders of the project, such as hardware suppliers.

From planning to closure, communication plays a vital role. In all the phases, communication must be
clear and understood. Miscommunication can create a big blunder in the project.

8. Project Configuration Management: Configuration management is about controlling the changes
in software, such as changes in the requirements, design, and development of the product.

The primary goal is to increase productivity with fewer errors.

Some reasons that show the need for configuration management:

o Several people work on software that is continually being updated.
o It helps to build coordination among suppliers.
o Changes in requirements, budget and schedule need to be accommodated.
o The software should run on multiple systems.

Tasks performed in configuration management (a minimal sketch follows this list):

o Identification
o Baseline
o Change Control
o Configuration Status Accounting
o Configuration Audits and Reviews
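
As a minimal sketch of the identification and baseline tasks (a simple file-hash approach; real configuration management relies on version control tools that do far more, and the file names below are hypothetical), each configuration item can be recorded in a baseline so that later changes are detectable:

# baseline_sketch.py - identify configuration items and record a baseline
# as file hashes, so any later change to a controlled item is detectable.
import hashlib
from pathlib import Path

def take_baseline(items):
    """Map each configuration item (file path) to its SHA-256 content hash."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in items}

def changed_items(baseline):
    """Return the items whose current contents differ from the baseline."""
    return [p for p, h in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != h]

# Hypothetical controlled items: requirements, design, and source files.
items = ["srs.txt", "dds.txt", "main.py"]
# baseline = take_baseline(items)     # record at sign-off (baseline task)
# print(changed_items(baseline))      # audit later for uncontrolled changes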

Resources
The second planning task is estimation of the resources required to accomplish the software
development effort. There are three major categories of software engineering resources: people,
reusable software components, and the development environment (hardware and software tools). Each
resource is specified with four characteristics: description of the resource, a statement of availability,
the time when the resource will be required, and the duration of time that the resource will be applied.
The last two characteristics can be viewed as a time window. Availability of the resource for a
specified window must be established at the earliest practical time.

Human Resources
The planner begins by evaluating software scope and selecting the skills required to complete
development. Both organizational position (e.g., manager, senior software engineer) and specialty
(e.g., telecommunications, database, client-server) are specified. For relatively small projects (a few
person-months), a single individual may perform all software engineering tasks, consulting with
specialists as required. For larger projects, the software team may be geographically dispersed across
a number of different locations. Hence, the location of each human resource is specified.

The number of people required for a software project can be determined only after an estimate of development effort (e.g., person-months) is made.

Reusable Software Resources


Component-based software engineering (CBSE) emphasizes reusability—that is, the creation and reuse of software building blocks. Such building blocks, often called components, must be cataloged for easy reference, standardized for easy application, and validated for easy integration.
Four software resource categories should be considered as planning proceeds:
Off-the-shelf components (existing software that can be acquired from a third party or from a past project).
Full-experience components (existing specifications, designs, code, or test data developed for past projects that are similar to the software to be built for the current project).
Partial-experience components (existing specifications, designs, code, or test data developed for past projects that are related to the software to be built for the current project but will require substantial modification).
New components (components built by the software team specifically for the needs of the current project).
Ironically, reusable software components are often neglected during planning, only to become a paramount concern during the development phase of the software process.
Environmental Resources
The environment that supports a software project, often called the software engineering environment (SEE), incorporates hardware and software. Hardware provides a platform that supports the tools (software) required to produce the work products that are an outcome of good software engineering practice. Because most software organizations have multiple constituencies that require access to the SEE, you must prescribe the time window required for hardware and software and verify that these resources will be available. When a computer-based system (incorporating specialized hardware and software) is to be engineered, the software team may require access to hardware elements being developed by other engineering teams.
For example, software for a robotic device used within a manufacturing cell may require a specific robot (e.g., a robotic welder) as part of the validation test step; a software project for advanced page layout may need a high-speed digital printing system at some point during development. Each hardware element must be specified as part of planning.

Concepts of Feasibility Study
The first activity in the classical waterfall model is the feasibility study. In the feasibility study, three main aspects are determined: whether the software to be developed is economically feasible, i.e., whether the development effort and the cost that will be spent on developing the software are worth it (this is also termed cost-benefit analysis); whether the developing organization has the technical competence required to develop the software; and whether the product can be delivered within the required schedule. Consider, as an example, the development of some satellite communication software: if the developers do not know how to use satellite communication, let alone write programs for it, they will find the project technically infeasible. The third feasibility that needs to be determined during the feasibility study stage is schedule feasibility: whether the development work can be completed by the time the customer requires the product to be delivered.
In short, during the feasibility study, the project manager needs to determine three types of feasibility: whether the project is cost-wise feasible, whether it is technically doable, and whether it can be done in time.

The feasibility study involves carrying out several activities, such as the collection of basic information relating to the software: the different data items that would be input to the system, the processing required to be carried out on these data, the output data required to be produced by the system, as well as the various constraints on the development. The collected data are analyzed to arrive at the following:
(i) Development of an overall understanding of the problem:
The first step is to develop a rough understanding of the requirements of the software and of the customer: the features of the software (i.e., the different sorts of data that would be input to the system, the processing that needs to be done, and finally the code to be written), the output to be produced by the system, and the various constraints on and behavior of the system.
(ii) Formulation of the various possible strategies for solving the problem:
In this activity, various possible high-level solution schemes to the problem are
determined. For example, solution in a client-server framework and a standalone
application framework may be explored.
(iii) Evaluation of the different solution strategies:
The different identified solution schemes are analyzed to evaluate their benefits and
shortcomings. Such evaluation often requires making approximate estimates of the
resources required, cost of development, and development time required. The
different solutions are compared based on the estimations that have been worked
out. Once the best solution is identified, all activities in the later phases are carried
out as per this solution. At this stage, it may also be determined that none of the
solutions is feasible due to high cost, resource constraints, or some technical
reasons. This scenario would, of course, require the project to be abandoned.
Techniques for Estimation of Schedule and Effort:
Effort and schedule estimation is a project planning process that predicts the amount of time and money needed to complete a project. It is a key part of project success and is often used in software development to plan the resources and schedule for new applications or updates. Accurate effort estimation helps project managers create budgets, schedules, and resource plans, and allocate resources accordingly. Project scheduling is the process of deciding how the work in a project will be organized as separate tasks, and when and how these tasks will be executed. You estimate the calendar time needed to complete each task, the effort required, and who will work on the tasks that have been identified. You also have to estimate the resources needed to complete each task, such as the disk space required on a server, the time required on specialized hardware such as a simulator, and what the travel budget will be.
Scheduling in plan-driven projects involves breaking down the total work involved in a project into separate tasks and estimating the time required to complete each task. Tasks should normally last at least a week, and the maximum amount of time for any task should be around 8 to 10 weeks (roughly two months). If a task takes longer than this, it should be subdivided for project planning and scheduling.
Some of these tasks are carried out in parallel, with different people working on
different components of the system. You have to coordinate these parallel tasks and
organize the work so that the workforce is used optimally and you don’t introduce
unnecessary dependencies between the tasks.
Split the project into tasks and estimate the time and resources required to complete each task. Organize tasks concurrently to make optimal use of the workforce. Minimize task dependencies to avoid delays caused by one task waiting for another to complete. All of this is dependent on the project manager's intuition and experience.

Estimating the difficulty of problems and hence the cost of developing a solution is hard.
Productivity is not proportional to the number of people working on a task.
Adding people to a late project makes it later because of communication overheads.
The unexpected always happens. Always allow contingency in planning.
Schedule representation
Project schedules may simply be represented in a table or spreadsheet showing the
tasks, effort, expected duration, and task dependencies.
However, this style of representation makes it difficult to see the relationships and
dependencies between the different activities. For this reason, alternative graphical
representations of project schedules have been developed that are often easier to
read and understand.
There are two types of representation that are commonly used:
1. Bar charts, which are calendar-based, show who is responsible for each activity,
the expected elapsed time, and when the activity is scheduled to begin and end. Bar
charts are sometimes called ‘Gantt charts’, after their inventor, Henry Gantt.
2. Activity networks, which are network diagrams, show the dependencies
between the different activities making up a project.
Normally, a project planning tool is used to manage project schedule information.
These tools usually expect you to input project information into a table and will then
create a database of project information. Bar charts and activity charts can then be
generated automatically from this database.
Project activities are the basic planning element. Each activity has:
1. A duration in calendar days or months.
2. An effort estimate, which reflects the number of person-days or person-months
to complete the work.
3. A deadline by which the activity should be completed.
4. A defined endpoint. This represents the tangible result of completing the activity.
This could be a document, the holding of a review meeting, the successful
execution of all tests, etc.
When planning a project, you should also define milestones; that is, each stage in
the project where a progress assessment can be made.
Each milestone should be documented by a short report that summarizes the
progress made and the work done. Milestones may be associated with a single task
or with groups of related activities. For example, milestone M1 is associated with
task T1 and milestone M3 is associated with a pair of tasks, T2 and T4. A special
kind of milestone is the production of a project deliverable. A deliverable is a work
product that is delivered to the customer. It is the outcome of a significant project
phase such as specification or design. Usually, the deliverables that are required are
specified in the project contract, and the customer's view of the project's progress depends on the delivery of these deliverables.

Task   Effort (person-days)   Duration (days)   Dependencies
T1     15                     10
T2     8                      15
T3     20                     15                T1 (M1)
T4     5                      10
T5     5                      10                T2, T4 (M3)
T6     10                     5                 T1, T2 (M4)
T7     25                     20                T1 (M1)
T8     75                     25                T4 (M2)
T9     10                     15                T3, T6 (M5)
T10    20                     15                T7, T8 (M6)
T11    10                     10                T9 (M7)
T12    20                     10                T10, T11 (M8)

The estimated duration for some tasks is more than the effort required and vice
versa. If the effort is less than the duration, this means that the people allocated to
that task are not working full-time on it. If the effort exceeds the duration, this
means that several team members are working on the task at the same time.
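To make the interplay of durations and dependencies concrete, the following minimal Python sketch (task data taken from the table above; milestones omitted) computes the earliest start and finish day of each task by propagating finish times along the dependency links:

    # Earliest-start scheduling: a task starts as soon as all of
    # the tasks it depends on have finished (durations in days).
    durations = {"T1": 10, "T2": 15, "T3": 15, "T4": 10, "T5": 10, "T6": 5,
                 "T7": 20, "T8": 25, "T9": 15, "T10": 15, "T11": 10, "T12": 10}
    depends_on = {"T3": ["T1"], "T5": ["T2", "T4"], "T6": ["T1", "T2"],
                  "T7": ["T1"], "T8": ["T4"], "T9": ["T3", "T6"],
                  "T10": ["T7", "T8"], "T11": ["T9"], "T12": ["T10", "T11"]}

    finish = {}
    def earliest_finish(task):
        # Recursively compute the earliest finish day of a task.
        if task not in finish:
            start = max((earliest_finish(d) for d in depends_on.get(task, [])),
                        default=0)
            finish[task] = start + durations[task]
        return finish[task]

    for t in durations:
        print(f"{t}: day {earliest_finish(t) - durations[t]} to day {earliest_finish(t)}")
    print("Minimum elapsed time:", max(finish.values()), "days")

This is essentially the computation a project planning tool performs when it lays out a bar chart from the task table.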
Fig.2.6 is a bar chart showing a project calendar and the start and finish dates of
tasks. Reading from left to right, the bar chart clearly shows when tasks start and
end. The milestones (M1, M2, etc.) are also shown on the bar chart.
Bar Chart

Notice that tasks that are independent are carried out in parallel (e.g., tasks T1,
T2, and T4 all start at the beginning of the project).

As well as planning the delivery schedule for the software, project managers have to
allocate resources to tasks. The key resource is, of course, the software engineers
who will do the work, and they have to be assigned to project activities. The
resource allocation can also be input to project management tools and a bar chart
generated, which shows when staff are working on the project. People may be
working on more than one task at the same time and, sometimes, they are not
working on the project.
They may be on holiday, working on other projects, attending training courses, or
engaging in some other activity. I show part-time assignments using a diagonal line
crossing the bar.
Large organizations usually employ a number of specialists who work on a project
when needed. In Figure 2.6, you can see that Mary is a specialist, who works on
only a single task in the project. This can cause scheduling problems. If one project
is delayed while a specialist is working on it, this may have a knock-on effect on
other projects where the specialist is also required. These may then be delayed
because the specialist is not available.

Staff Allocation Chart


2.3.2 Estimation techniques
Project schedule estimation is difficult. You may have to make initial estimates on the basis
of a high-level user requirements definition. The software may have to run on unfamiliar
computers or use new development technology. The people involved in the project and their
skills will probably not be known. There are so many uncertainties that it is impossible to
estimate system development costs accurately during the early stages of a project. There is
even a fundamental difficulty in assessing the accuracy of different approaches to cost and
effort estimation.
 The estimate is used to define the project budget and the product is adjusted so
that the budget figure is realized.
 Organizations need to make software effort and cost estimates.
There are two types of technique that can be used to do this:
1. Experience-based techniques: The estimate of future effort requirements is based on the manager's experience of past projects and the application domain. Essentially, the manager makes an informed judgment of what the effort requirements are likely to be.
2. Algorithmic cost modelling: In this approach, a formulaic approach is used to compute the project effort based on estimates of product attributes, such as size, and process characteristics, such as the experience of the staff involved.

In both cases, you need to use your judgment to estimate either the effort directly,
or estimate the project and product characteristics.

Based on data collected from a large number of projects, Boehm, et al. (1995)
discovered that startup estimates vary significantly. If the initial estimate of effort
required is x months of effort, they found that the range may be from 0.25x to 4x of
the actual effort as measured when the system was delivered.

Experience-based techniques rely on the manager's experience of past projects and the actual effort expended in these projects on activities that are related to software development.
The difficulty with experience-based techniques is that a new software project may not have
much in common with previous projects. Software development changes very quickly and a
project will often use unfamiliar techniques such as web services, COTS-based development,
or AJAX. If you have not worked with these techniques, your previous experience may not
help you to estimate the effort required, making it more difficult to produce accurate costs
and schedule estimates.
Algorithmic cost modelling
Algorithmic cost modeling uses a mathematical formula to predict project costs
based on estimates of the project size; the type of software being developed; and
other team, process, and product factors. An algorithmic cost model can be built by
analyzing the costs and attributes of completed projects, and finding the closest-fit
formula to actual experience.
Algorithmic cost models are primarily used to make estimates of software
development costs. Algorithmic models for estimating effort in a software project
are mostly based on a simple formula:

Effort = A × Size^B × M

A is a constant factor which depends on local organizational practices and the type
of software that is developed. Size may be either an assessment of the code size of
the software or a functionality estimate expressed in function or application points.
The value of exponent B usually lies between 1 and 1.5. M is a multiplier made by
combining process, product, and development attributes, such as the dependability
requirements for the software and the experience of the development team. The
number of lines of source code (SLOC) in the delivered system is the fundamental
size metric that is used in many algorithmic cost models.
Most algorithmic estimation models have an exponential component (B in the above
equation) that is related to the size and complexity of the system. This reflects the
fact that costs do not usually increase linearly with project size. As the size and
complexity of the software increases, extra costs are incurred because of the
communication overhead of larger teams, more complex configuration management,
more difficult system integration, and so on. The more complex the system, the
more these factors affect the cost.
Therefore, the value of B usually increases with the size and complexity of
the system.
Algorithmic cost models are a systematic way to estimate the effort required to
develop a system. However, these models are complex and difficult to use. There
are many attributes and considerable scope for uncertainty in estimating their
values.

This complexity discourages potential users and hence the practical application of
algorithmic cost modeling has been limited to a small number of companies. If you
use an algorithmic cost estimation model, you should develop a range of estimates
(worst, expected, and best) rather than a single estimate and apply the costing
formula to all of them. Estimates are most likely to be accurate when you
understand the type of software that is being developed.
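As a minimal illustration of the approach, here is a Python sketch of the generic formula Effort = A × Size^B × M, applied to best, expected, and worst size estimates as recommended above. The constant values A = 2.9, B = 1.1, and M = 1.0 are hypothetical placeholders, not calibrated figures:

    def effort(size_kloc, A=2.9, B=1.1, M=1.0):
        # Generic algorithmic model: Effort = A * Size^B * M.
        # A, B, M are hypothetical; real values must be calibrated
        # from an organization's historical project data.
        return A * size_kloc ** B * M

    # Apply the model to a range of size estimates, not a single figure.
    for label, size in [("best", 20), ("expected", 30), ("worst", 45)]:
        print(f"{label:>8}: {size} KLOC -> {effort(size):.0f} person-months")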
Software cost estimation models
Lines of code and function points are the measures from which productivity metrics can be computed.
LOC and FP data are used in two ways during software project estimation:
1. As estimation variables to “size” each element of the software and
2. As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques.
LOC BASED ESTIMATION
Steps involved in LOC-based estimation:
1. Start with a bounded statement of software scope.
2. Decompose the statement of scope into problem functions that can each be estimated individually.
3. Estimate the LOC (the estimation variable) for each function.
4. A three-point or expected value can then be computed.
5. The expected value for the estimation variable (size) S is computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates:

S = (Sopt + 4Sm + Spess) / 6

6. Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied.
An Example of LOC-Based Estimation
Problem:
Develop a software package for a computer-aided design application for mechanical
components. The software is to execute on an engineering workstation and must
interface with various computer graphics peripherals including a mouse, digitizer,
high-resolution color display, and laser printer.
Preliminary statement of software scope can be developed:
The mechanical CAD software will accept two- and three-dimensional geometric data
from an engineer. The engineer will interact and control the CAD system through a
user interface that will exhibit characteristics of good human/machine interface
design. All geometric data and other supporting information will be maintained in a
CAD database.
Design analysis modules will be developed to produce the required output, which
will be displayed on a variety of graphics devices. The software will be designed to
control and interact with peripheral devices that include a mouse, digitizer, laser
printer, and plotter. This statement of scope is preliminary - it is not bounded.

Fig.2.8: Estimation table for the LOC methods

A review of historical data indicates that the organizational average productivity for systems of this type is 620 LOC/pm. With a burdened labor rate of $8,000 per month, the cost per line of code is approximately $13 ($8,000/620).
Based on the LOC estimate and the historical productivity data,
The total estimated project cost = $431,000 (33,200 × $13), and
the estimated effort = 54 person-months (33,200/620).
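The arithmetic of this example can be reproduced in a few lines of Python; the three-point sizes shown for a single function are illustrative, while the project total, productivity, and labor rate are the figures quoted above:

    def expected_size(s_opt, s_m, s_pess):
        # Three-point estimate: S = (Sopt + 4*Sm + Spess) / 6
        return (s_opt + 4 * s_m + s_pess) / 6

    # Illustrative three-point estimate for one function:
    print(expected_size(4600, 6900, 8600))   # 6800 LOC

    total_loc = 33200        # summed expected LOC over all functions
    productivity = 620       # LOC per person-month (historical)
    labor_rate = 8000        # dollars per person-month (burdened)

    cost_per_loc = round(labor_rate / productivity)        # ~= $13
    print(f"Project cost: ${total_loc * cost_per_loc:,}")  # ~= $431,000
    print(f"Effort: {total_loc / productivity:.0f} person-months")  # ~= 54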
FP BASED ESTIMATION
The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system.
Using historical data, the FP metric can then be used to
1.Estimate the cost or effort required to design, code, and test the software
2. Predict the number of errors that will be encountered during testing; and
3. Forecast the number of components and/or the number of projected source lines
in the implemented system.
Function points are derived using an empirical relationship based on countable
(direct) measures of software’s information domain and qualitative assessments of
software complexity.
Information domain values are defined in the following manner:
➢ Number of external inputs (EIs).
➢ Number of external outputs (EOs).
➢ Number of external inquiries (EQs).
➢ Number of internal logical files (ILFs).
➢ Number of external interface files (EIFs).
Steps involved in FP-based estimation:
1. Start with a bounded statement of software scope.
2. Decompose the statement of scope into problem functions that can each be estimated individually.
3. Estimate the information domain characteristics - inputs, outputs, data files, inquiries, and external interfaces - as well as the 14 complexity adjustment values.
4. Using the estimates, derive an FP value that can be tied to past data and used to generate an estimate.
5. Using historical data, estimate an optimistic, most likely, and pessimistic size value for each function or count for each information domain value.
6. A three-point or expected value can then be computed.
7. The expected value for the estimation variable (size) S is computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates:

S = (Sopt + 4Sm + Spess) / 6

8. Once the expected value for the estimation variable has been determined, historical FP productivity data are applied.
Example for FP Based Estimation:
Once the information data have been collected, calculate the FP values by
associating a complexity value with each count as shown below

Computing Function point


The various adjustment factors to be considered are:
Estimating Information Domain values
To compute function points (FP), the following relationship is used:

FP = count total × [0.65 + 0.01 × Σ(Fi)]

where count total is the sum of the weighted information domain counts and the Fi (i = 1 to 14) are the complexity adjustment values. With a count total of 320 and adjustment values summing to 52:

FP estimated = 320 × [0.65 + 0.01 × 52] = 320 × [1.17] ≈ 375
The organizational average productivity for systems of this type = 6.5 FP/pm. With a burdened labor rate of $8,000 per month, the cost per FP is approximately $1,230 ($8,000/6.5). Based on the FP estimate and the historical productivity data, the total estimated project cost = $461,000 (375 × $1,230) and the estimated effort = 58 person-months (375/6.5).
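The same arithmetic expressed as a short Python sketch; the count total of 320 and the adjustment sum of 52 are the figures from the example above:

    def function_points(count_total, adjustment_sum):
        # FP = count_total * (0.65 + 0.01 * sum of the 14 adjustment values)
        return count_total * (0.65 + 0.01 * adjustment_sum)

    fp = function_points(320, 52)          # 320 * 1.17 = 374.4, ~ 375
    productivity = 6.5                     # FP per person-month (historical)
    labor_rate = 8000                      # dollars per person-month

    cost_per_fp = labor_rate / productivity             # ~= $1,230 per FP
    print(f"FP estimate: {fp:.1f}")
    print(f"Project cost: ${fp * cost_per_fp:,.0f}")    # ~= $461,000
    print(f"Effort: {fp / productivity:.0f} person-months")  # ~= 58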

SOFTWARE COST ESTIMATION TECHNIQUE:


COCOMO Model :
The Constructive Cost Model (COCOMO) is a software cost estimation model that helps
predict the effort, cost, and schedule required for a software development project. Developed
by Barry Boehm in 1981, COCOMO uses a mathematical formula based on the size of the
software project, typically measured in lines of code (LOC).
Types of Projects in the COCOMO Model
In the COCOMO model, software projects are categorized into three types based on their
complexity, size, and the development environment. These types are:
1. Organic: A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past and
also the team members have a nominal experience regarding the problem.
2. Semi-detached: A software project is said to be a semi-detached type if the vital characteristics such as team size, experience, and knowledge of the various programming environments lie in between organic and embedded. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones, and they require more experience, better guidance, and creativity. E.g., compilers or different embedded systems can be considered semi-detached types.
3. Embedded: A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex models.

The three project types are compared below (the effort equations are shown for a 400 KLOC project):

Aspect            Organic                 Semidetached                   Embedded
Project Size      2 to 50 KLOC            50 to 300 KLOC                 300 KLOC and above
Complexity        Low                     Medium                         High
Team Experience   Highly experienced      Some experienced as well       Mixed experience,
                                          as inexperienced staff         includes experts
Environment       Flexible, fewer         Somewhat flexible,             Highly rigorous,
                  constraints             moderate constraints           strict requirements
Effort Equation   E = 2.4(400)^1.05       E = 3.0(400)^1.12              E = 3.6(400)^1.20
Example           Simple payroll          New system interfacing         Flight control
                  system                  with existing systems          software

The Six phases of detailed COCOMO are:


Phases of COCOMO Model

1. Planning and requirements: This initial phase involves defining the scope,
objectives, and constraints of the project. It includes developing a project plan that
outlines the schedule, resources, and milestones
2. System design: In this phase, the high-level architecture of the software system is
created. This includes defining the system’s overall structure, including major
components, their interactions, and the data flow between them.
3. Detailed design: This phase involves creating detailed specifications for each
component of the system. It breaks down the system design into detailed descriptions
of each module, including data structures, algorithms, and interfaces.
4. Module code and test: This involves writing the actual source code for each module
or component as defined in the detailed design. It includes coding the functionalities,
implementing algorithms, and developing interfaces.
5. Integration and test: This phase involves combining individual modules into a
complete system and ensuring that they work together as intended.
6. Cost Constructive model: The Constructive Cost Model (COCOMO) is a widely
used method for estimating the cost and effort required for software development
projects.
Importance of the COCOMO Model
1. Cost Estimation: To help with resource planning and project budgeting, COCOMO
offers a methodical approach to software development cost estimation.
2. Resource Management: By taking team experience, project size, and complexity
into account, the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that
include attainable objectives, due dates, and benchmarks.
4. Risk management: Early in the development process, COCOMO assists in
identifying and mitigating potential hazards by including risk elements.
5. Support for Decisions: During project planning, the model provides a quantitative
foundation for choices about scope, priorities, and resource allocation.
6. Benchmarking: To compare and assess various software development projects to
industry standards, COCOMO offers a benchmark.
7. Resource Optimization: The model helps to maximize the use of resources, which
raises productivity and lowers costs.
Types of COCOMO Model
There are three types of COCOMO Model:
 Basic COCOMO Model
 Intermediate COCOMO Model
 Detailed COCOMO Model
1. Basic COCOMO Model
The Basic COCOMO model is a straightforward way to estimate the effort needed for
a software development project. It uses a simple mathematical formula to predict how
many person-months of work are required based on the size of the project, measured
in thousands of lines of code (KLOC).
It estimates the effort and time required for development using the following expressions:
E = a × (KLOC)^b PM
Tdev = c × (E)^d months
Persons required = Effort / Time
Where,
E is the effort applied in person-months,
KLOC is the estimated size of the software product expressed in kilo lines of code,
Tdev is the development time in months, and
a, b, c, d are constants determined by the category of software project, given in the table below.
The above formula is used for the cost estimation of the basic COCOMO model and
also is used in the subsequent models. The constant values a, b, c, and d for the Basic
Model for the different categories of the software projects are:

Software Projects    a      b      c      d
Organic              2.4    1.05   2.5    0.38
Semi-Detached        3.0    1.12   2.5    0.35
Embedded             3.6    1.20   2.5    0.32

1. The effort is measured in person-months and, as evident from the formula, is dependent on kilo-lines of code. The development time is measured in months.
2. These formulas are used as such in the Basic Model calculations; since not much consideration is given to different factors such as reliability and expertise, the estimate is rough.
Example of Basic COCOMO Model
Suppose that a project was estimated to be 400 KLOC (kilo lines of code). Calculate the effort and development time for each of the three modes of development.
Solution
From the above table, we take the values of the constants a, b, c, and d.
1. For organic mode:
 effort = 2.4 × (400)^1.05 ≈ 1295 person-months
 dev. time = 2.5 × (1295)^0.38 ≈ 38 months
2. For semi-detached mode:
 effort = 3.0 × (400)^1.12 ≈ 2462 person-months
 dev. time = 2.5 × (2462)^0.35 ≈ 38 months
3. For embedded mode:
 effort = 3.6 × (400)^1.20 ≈ 4772 person-months
 dev. time = 2.5 × (4772)^0.32 ≈ 38 months
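The three calculations can be checked with a small Python sketch of the Basic COCOMO equations, using the constants from the table above:

    # Basic COCOMO constants per project category: (a, b, c, d)
    MODES = {"organic":       (2.4, 1.05, 2.5, 0.38),
             "semi-detached": (3.0, 1.12, 2.5, 0.35),
             "embedded":      (3.6, 1.20, 2.5, 0.32)}

    def basic_cocomo(kloc, mode):
        # Return (effort in person-months, development time in months).
        a, b, c, d = MODES[mode]
        effort = a * kloc ** b        # E = a * (KLOC)^b
        tdev = c * effort ** d        # Tdev = c * (E)^d
        return effort, tdev

    for mode in MODES:
        e, t = basic_cocomo(400, mode)
        print(f"{mode:>13}: effort ~ {e:.0f} PM, time ~ {t:.0f} months")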
2. Intermediate COCOMO Model
The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants evaluated according to the different software systems. However, in reality, no system's effort and schedule can be calculated solely on the basis of lines of code; various other factors such as reliability, experience, and capability must also be considered. These factors are known as cost drivers (multipliers), and the Intermediate Model utilizes 15 such drivers for cost estimation.
Classification of Cost Drivers and their Attributes:
The cost drivers are divided into four categories
Product attributes:
 Required software reliability extent
 Size of the application database
 The complexity of the product
Hardware attributes
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnaround time

Personnel attributes
 Analyst capability
 Software engineering capability
 Application experience
 Virtual machine experience
 Programming language experience
Project attributes
 Use of software tools
 Application of software engineering methods
 Required development schedule

The constant values for the Intermediate Model are:

Software Projects    a      b      c      d
Organic              3.2    1.05   2.5    0.38
Semi-Detached        3.0    1.12   2.5    0.35
Embedded             2.8    1.20   2.5    0.32
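A minimal sketch of how the Intermediate Model applies the cost drivers: the effort adjustment factor (EAF) is the product of the multiplier values, and it scales the nominal effort a(KLOC)^b. The three driver values used below are hypothetical illustrations, with the remaining drivers taken as 1.0:

    from math import prod

    def intermediate_cocomo(kloc, a, b, cost_drivers):
        # E = a * (KLOC)^b * EAF, where EAF is the product of the
        # cost-driver multipliers (hypothetical values used below).
        eaf = prod(cost_drivers)
        return a * kloc ** b * eaf

    # Organic-mode constants with three illustrative driver values,
    # e.g. high reliability (1.15), low complexity (0.85),
    # experienced team (0.91).
    effort = intermediate_cocomo(50, a=3.2, b=1.05,
                                 cost_drivers=[1.15, 0.85, 0.91])
    print(f"Adjusted effort: {effort:.0f} person-months")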

3. Detailed COCOMO Model
Detailed COCOMO goes beyond Basic and Intermediate COCOMO by diving deeper into project-specific factors. It considers a wider range of parameters, like team experience, development practices, and software complexity. By analyzing these factors in more detail, Detailed COCOMO provides a highly accurate estimation of effort, time, and cost for software projects. It is like zooming in on a project's unique characteristics to get a clearer picture of what it will take to complete it successfully.

Concepts of software engineering economics:


Software Engineering Economics Fundamentals:
Software engineering economics is about making decisions related to software engineering in
a business context. The success of a software product, service, and solution depends on good
business management. Yet, in many companies and organizations, software business
relationships to software development and engineering remain vague. This knowledge area
(KA) provides an overview of software engineering economics. Economics is the study of
value, costs, resources, and their relationship in a given context or situation. In the discipline
of software engineering, activities have costs, but the resulting software itself has economic
attributes as well. Software engineering economics provides a way to study the attributes of
software and software processes in a systematic way that relates them to economic measures.
These economic measures can be weighed and analyzed when making decisions that are
within the scope of a software organization and those within the integrated scope of an entire
producing or acquiring business. Software engineering economics is concerned with aligning
software technical decisions with the business goals of the organization. In all types of
organizations — be it “for-profit,” “not-for-profit,” or governmental — this translates into sustainably staying in business.

Software Engineering Economics Fundamentals


1.1 Finance
Finance is the branch of economics concerned with issues such as allocation, management,
acquisition, and investment of resources. Finance is an element of every organization,
including software engineering organizations. The field of finance deals with the concepts of
time, money, risk, and how they are interrelated. It also deals with how money is spent and
budgeted. Corporate finance is concerned with providing the funds for an organization’s
activities. Generally, this involves balancing risk and profitability, while attempting to
maximize an organization’s wealth and the value of its stock. This holds primarily for “for-profit” organizations, but also applies to “not-for-profit” organizations. The latter needs
finances to ensure sustainability, while not targeting tangible profit. To do this, an
organization must
 identify organizational goals, time horizons, risk factors, tax considerations, and
financial constraints;
 identify and implement the appropriate business strategy, such as which portfolio and
investment decisions to take, how to manage cash flow, and where to get the funding;
 measure financial performance, such as cash flow and ROI (see section 4.3, Return on
Investment), and take corrective actions in case of deviation from objectives and
strategy.
1.2 Accounting
Accounting is part of finance. It allows people whose money is being used to run an
organization to know the results of their investment: did they get the profit they were
expecting? In “for-profit” organizations, this relates to the tangible ROI (see section 4.3,
Return on Investment), while in “not-for-profit” and governmental organizations as well as
“for-profit” organizations, it translates into sustainably staying in business. The primary role
of accounting is to measure the organization’s actual financial performance and to
communicate financial information about a business entity to stakeholders, such as
shareholders, financial auditors, and investors. Communication is generally in the form of
financial statements that show in money terms the economic resources to be controlled. It is
important to select the right information that is both relevant and reliable to the user.
Information and its timing are partially governed by risk management and governance
policies. Accounting systems are also a rich source of historical data for estimating.
1.3 Controlling
Controlling is an element of finance and accounting. Controlling involves measuring and
correcting the performance of finance and accounting. It ensures that an organization’s
objectives and plans are accomplished. Controlling cost is a specialized branch of controlling
used to detect variances of actual costs from planned costs.
1.4 Cash Flow
Cash flow is the movement of money into or out of a business, project, or financial product
over a given period. The concepts of cash flow instances and cash flow streams are used to
describe the business perspective of a proposal. To make a meaningful business decision
about any specific proposal, that proposal will need to be evaluated from a business
perspective. In a proposal to develop and launch product X, the payment for new software
licenses is an example of an outgoing cash flow instance. Money would need to be spent to
carry out that proposal. The sales income from product X in the 11th month after market
launch is an example of an incoming cash flow instance. Money would be coming in because
of carrying out the proposal.
Figure 12.3: The Basic Business Decision-Making Process

1.6 Valuation
In an abstract sense, the decision-making process— be it financial decision making or other
— is about maximizing value. The alternative that maximizes total value should always be
chosen. A financial basis for value-based comparison is comparing two or more cash flows.
Several bases of comparison are available, including
 present worth
 future worth
 annual equivalent
 internal rate of return
 (discounted) payback period.
Based on the time-value of money, two or more cash flows are equivalent only when they
equal the same amount of money at a common point in time. Comparing cash flows only
makes sense when they are expressed in the same time frame.
Note that value can’t always be expressed in terms of money. For example, whether an item
is a brand name or not can significantly affect its perceived value. Relevant values that can’t
be expressed in terms of money still need to be expressed in similar terms so that they can be
evaluated objectively.
1.7 Inflation
Inflation describes long-term trends in prices. Inflation means that the same things cost more
than they did before. If the planning horizon of a business decision is longer than a few years,
or if the inflation rate is over a couple of percentage points annually, it can cause noticeable
changes in the value of a proposal. The present time value therefore needs to be adjusted for
inflation rates and also for exchange rate fluctuations.
1.8 Depreciation
Depreciation involves spreading the cost of a tangible asset across a number of time periods;
it is used to determine how investments in capitalized assets are charged against income over
several years. Depreciation is an important part of determining after-tax cash flow, which is
critical for accurately addressing profit and taxes. If a software product is to be sold after the
development costs are incurred, those costs should be capitalized and depreciated over
subsequent time periods. The depreciation expense for each time period is the capitalized cost
of developing the software divided across the number of periods in which the software will
be sold. A software project proposal may be compared to other software and nonsoftware
proposals or to alternative investment options, so it is important to determine how those other
proposals would be depreciated and how profits would be estimated.
1.9 Taxation
Governments charge taxes in order to finance expenses that society needs but that no single
organization would invest in. Companies have to pay income taxes, which can take a
substantial portion of a corporation’s gross profit. A decision analysis that does not account
for taxation can lead to the wrong choice. A proposal with a high pretax profit won’t look
nearly as profitable in posttax terms. Not accounting for taxation can also lead to
unrealistically high expectations about how profitable a proposed product might be.
1.10 Time-Value of Money
One of the most fundamental concepts in finance—and therefore, in business decisions— is
that money has time-value: its value changes over time. A specific amount of money right
now almost always has a different value than the same amount of money at some other time.
This concept has been around since the earliest recorded human history and is commonly
known as time-value. In order to compare proposals or portfolio elements, they should be
normalized in cost, value, and risk to the net present value. Currency exchange variations
over time need to be taken into account based on historical data. This is particularly important
in cross-border developments of all kinds.
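As a concrete illustration of time-value, the sketch below discounts a proposal's yearly cash flows to a net present value; the cash flow figures and the 8% discount rate are hypothetical:

    def net_present_value(cash_flows, rate):
        # NPV = sum over t of cash_flow_t / (1 + rate)^t
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Year 0: investment of 100,000; years 1-3: incoming cash flows.
    flows = [-100_000, 30_000, 50_000, 60_000]
    print(f"NPV at 8%: {net_present_value(flows, 0.08):,.0f}")
    # A positive NPV means the proposal creates value at this rate.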
1.11 Efficiency
Economic efficiency of a process, activity, or task is the ratio of resources actually consumed
to resources expected to be consumed or desired to be consumed in accomplishing the
process, activity, or task. Efficiency means “doing things right.” An efficient behavior, like
an effective behavior, delivers results—but keeps the necessary effort to a minimum. Factors
that may affect efficiency in software engineering include product complexity, quality
requirements, time pressure, process capability, team distribution, interrupts, feature churn,
tools, and programming language.
1.12 Effectiveness
Effectiveness is about having impact. It is the relationship between achieved objectives to
defined objectives. Effectiveness means “doing the right things.” Effectiveness looks only at
whether defined objectives are reached—not at how they are reached.
1.13 Productivity
Productivity is the ratio of output over input from an economic perspective. Output is the
value delivered. Input covers all resources (e.g., effort) spent to generate the output.
Productivity combines efficiency and effectiveness from a value-oriented perspective: maximizing productivity is about generating the highest value with the lowest resource consumption.
Techniques of software project control and reporting
The project schedule becomes a road map that defines the tasks and milestones to be tracked
and controlled as the project proceeds.
Tracking can be accomplished in a number of different ways:
• Conducting periodic project status meetings in which each team member reports progress
and problems.
• Evaluating the results of all reviews conducted throughout the software engineering process.
• Determining whether formal project milestones have been accomplished by the scheduled
date.
• Comparing the actual start date to the planned start date for each project task listed in the
resource table.
• Meeting informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon.
• Using earned value analysis to assess progress quantitatively.
Control is employed by a software project manager to administer project resources,
cope with problems, and direct project staff. If things are going well (i.e., the project is on
schedule and within budget, reviews indicate that real progress is being made and milestones
are being reached), control is light. But when problems occur, you must exercise control to
reconcile them as quickly as possible. After a problem has been diagnosed, additional
resources may be focused on the problem area: staff may be redeployed or the project
schedule can be redefined.
When faced with severe deadline pressure, experienced project managers sometimes
use a project scheduling and control technique called time-boxing. The time-boxing strategy
recognizes that the complete product may not be deliverable by the predefined deadline.
Therefore, an incremental software paradigm is chosen, and a schedule is derived for each
incremental delivery.
The tasks associated with each increment are then time-boxed. This means that the schedule for each task is adjusted by working backward from the delivery date for the increment. A “box” is put around each task. When a task hits the boundary of its time box (plus or minus 10 percent), work stops and the next task begins.
The initial reaction to the time-boxing approach is often negative: “If the work isn't finished, how can we proceed?” The answer lies in the way work is accomplished. By the time the time-box boundary is encountered, it is likely that 90 percent of the task has been completed. The remaining 10 percent, although important, can (1) be delayed until the next increment or (2) be completed later if required. Rather than becoming “stuck” on a task, the project proceeds toward the delivery date.
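A minimal sketch of the backward scheduling behind time-boxing: given an increment's delivery date and each task's planned duration, the boxes are laid out by working backward from delivery. The task names, durations, and dates are hypothetical:

    from datetime import date, timedelta

    def time_boxes(delivery, tasks):
        # Work backward from the delivery date, giving each task a
        # fixed box equal to its planned duration.
        boxes, end = [], delivery
        for name, days in reversed(tasks):
            start = end - timedelta(days=days)
            boxes.append((name, start, end))
            end = start
        return list(reversed(boxes))

    increment = [("design", 10), ("code", 15), ("test", 7)]
    for name, start, end in time_boxes(date(2025, 6, 30), increment):
        print(f"{name:>6}: {start} -> {end}")  # work stops at each boundary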

Earned value analysis


It is a quantitative technique for assessing progress as the software team progresses
through the work tasks allocated to the project schedule. In fact, a technique for performing
quantitative analysis of progress does exist. It is called earned value analysis (EVA).
Stated even more simply, earned value is a measure of progress. It enables you to assess the
“percent of completeness” of a project using quantitative analysis rather than rely on a gut
feeling. In fact, Fleming and Koppleman [Fle98] argue that earned value analysis “provides
accurate and reliable readings of performance from as early as 15 percent into the project.”
To determine the earned value, the following steps are performed:
1. The budgeted cost of work scheduled (BCWS) is determined for each work task
represented in the schedule. During estimation, the work (in person-hours or person-days) of
each software engineering task is planned. Hence, BCWS i is the effort planned for work task
i. To determine progress at a given point along the project schedule, the value of BCWS is
the sum of the BCWSi values for all work tasks that should have been completed by that
point in time on the project schedule.
2. The BCWS values for all work tasks are summed to derive the budget at completion
(BAC). Hence, BAC = ∑(BCWSk) for all tasks k.
3. Next, the value for budgeted cost of work performed (BCWP) is computed. The value for
BCWP is the sum of the BCWS values for all work tasks that have actually been completed
by a point in time on the project schedule.
Wilkens [Wil99] notes that “the distinction between the BCWS and the BCWP is that the
former represents the budget of the activities that were planned to be completed and the latter
represents the budget of the activities that actually were completed.” Given values for
BCWS, BAC, and BCWP, important progress indicators can be computed:
➢ Schedule performance index, SPI = BCWP/BCWS
➢ Schedule variance, SV = BCWP – BCWS
SPI is an indication of the efficiency with which the project is utilizing scheduled resources.
An SPI value close to 1.0 indicates efficient execution of the project schedule. SV is simply
an absolute indication of variance from the planned schedule.
➢ Percent scheduled for completion = BCWS/BAC
Percent scheduled for completion provides an indication of the percentage of work that should have been completed by time t.
➢ Percent complete = BCWP/BAC
It is also possible to compute the actual cost of work performed (ACWP). The value for
ACWP is the sum of the effort actually expended on work tasks that have been completed by
a point in time on the project schedule. It is then possible to compute
➢ Cost performance index, CPI = BCWP/ACWP
➢ Cost variance, CV = BCWP – ACWP
A CPI value close to 1.0 provides a strong indication that the project is within its defined
budget. CV is an absolute indication of cost savings (against planned costs) or shortfall at a
particular stage of a project.
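The indicators above are simple ratios and differences, as the following sketch shows; the BCWS, BCWP, ACWP, and BAC figures are hypothetical:

    def earned_value_indicators(bcws, bcwp, acwp, bac):
        # Compute the earned value indicators defined above.
        return {"SPI": bcwp / bcws,          # schedule performance index
                "SV": bcwp - bcws,           # schedule variance
                "% scheduled": bcws / bac,   # percent scheduled for completion
                "% complete": bcwp / bac,
                "CPI": bcwp / acwp,          # cost performance index
                "CV": bcwp - acwp}           # cost variance

    # 120 of 500 planned person-days of work were scheduled by now;
    # work worth 100 person-days was completed, at an actual cost of 110.
    for name, value in earned_value_indicators(120, 100, 110, 500).items():
        print(f"{name}: {value:.2f}")
    # SPI < 1.0 -> behind schedule; CPI < 1.0 -> over budget.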
Project Reporting Techniques
A project management report isn’t exactly fun to do, but it's necessary to promote project
visibility, keep everyone on the same page, manage stakeholder expectations, and minimize
risks.
Project reports:
➢ Chronicle how projects are progressing.
➢ Communicate the status of the project’s budget, scope, and schedule.
➢ Serve as audit trails when you need to look back and analyze how the project performed.
Types of project management reports
To ensure successful projects, project managers need to stay on top of key project aspects:
scope, budget, schedule, resources, risks, stakeholders, etc. Reports help them do just that.
Here are some of the most common types of reports.
1. Status report
A project status report updates stakeholders on the project status — how it’s progressing,
essentially. It disseminates key information, such as:
➢ Work that has been completed
➢ What’s coming up
➢ Actual performance vs. baselines vs. forecasts (e.g., actual date a milestone was completed
vs. due date vs. expected date of completion if it’s still in progress)
➢ Budget
➢ Project schedule
➢ Resources
➢ Issues and risks affecting the project
➢ Changes made to the project
➢ Action items
➢ Message to key stakeholders
A project status report takes many forms, including a Microsoft PowerPoint presentation or a document with graphs and tables generated by a project management software tool.
2. Risk report
Risks are uncertainties that can affect a project one way or another. They should therefore be
communicated in a timely manner. Risk reports are critical for project risk management and
may include the following information:
➢ An overview of overall project risks and opportunities
➢ Individual risk summaries
➢ Risk treatment action plan
➢ Risk incidence trend
3. Resource report
Every project requires resources. Without the right people or the right tools, moving the
project forward will be extremely difficult. A resource management report helps project
managers allocate the right resources to the right tasks at the right time.
They contain information such as:
➢ Who’s working on what, and when
➢ Potential overallocation — meaning certain team members are working beyond their
capacity, which can lead to bottlenecks, burnout, and project delays
➢ Resource availability for more efficient scheduling
4. Variance report
A variance report shows whether the project is advancing according to plan. It compares what
actually happened against what’s supposed to happen (actual results vs. planned outcome),
allowing you to gauge whether or not your project is on budget, on schedule, or within scope.
5. Budget report
The budget report is a cost management tool that provides essential financial information to
keep projects within or under budget. At the minimum, they should contain:
➢ The total amount spent on completed activities
➢ The total amount expected to be spent to date
By comparing the total amount spent to date vs. scheduled spend to date, you'll know right
away if the project is on budget. If it’s not, these measures can get it back on track:
➢ Swapping less productive team members with more experienced, productive ones
➢ Cutting the scope of some tasks or deliverables
➢ Eliminating certain costs (e.g., using virtual meetings over in-person meetings
that require travel)
Project Status Report
What needs to be included in a project management report?
Project reports vary from company to company, and lengths can vary from one page to 10
pages or more, depending on what your organization requires.

These are some key elements to include:


Project identification details: The project’s name, project manager’s name, project sponsor,
start and expected end dates, customer name and details, and report release date.
Key measures of project success: Actual schedule vs. planned schedule, actual costs vs.
budget, actual vs. planned resourcing, actual vs. planned scope, risks overview, and issues
with quality, if any.
Other information: Pending and approved project change requests, an overview of the
decisions and actions undertaken since the last report, recent milestones and
accomplishments, upcoming milestones, deliverables coming due, and actions or decisions
required.
Introduction to measurement of software size
Software size is the major determinant of software project effort. Effectively measuring
software size is thus a key element of successful effort estimation. Measuring software size is
a challenging task. A number of methods for measuring software size have been proposed
over the years. The size of a project is obviously not the number of bytes that the source code
occupies, neither is it the size of the executable code. The project size is a measure of the
problem complexity in terms of the effort and time required to develop the product.
Currently, two metrics are popularly being used to measure size—lines of code (LOC) and
function point (FP).
Lines of Code (LOC)
LOC is possibly the simplest among all metrics available to measure project size. Consequently, this metric is extremely popular. It measures the size of a project by counting the number of source instructions in the developed program; while counting source instructions, comment lines and header lines are ignored.
Determining the LOC count at the end of a project is very simple. However, accurate estimation of the LOC count at the beginning of a project is a very difficult task. One can possibly estimate the LOC count at the start of a project only by using some form of systematic guesswork. Systematic guessing typically involves the following.
The project manager divides the problem into modules, and each module into sub-modules, and so on, until the sizes of the leaf-level modules are small enough to be predicted. To be able to predict the LOC counts of the various leaf-level modules sufficiently accurately, past experience in developing similar modules is very helpful; a minimal sketch of this bottom-up estimation follows.
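A minimal sketch of this bottom-up guessing, with hypothetical module names and leaf-level LOC predictions:

    # Hypothetical module hierarchy: inner dicts are sub-modules,
    # integers are predicted LOC for leaf-level modules.
    system = {"user interface": {"login": 400, "dashboard": 900},
              "business logic": {"billing": 1500, "reports": 1100},
              "data access": 800}

    def estimate_loc(node):
        # Sum leaf-level LOC predictions bottom-up through the hierarchy.
        if isinstance(node, int):
            return node
        return sum(estimate_loc(child) for child in node.values())

    print("Estimated size:", estimate_loc(system), "LOC")   # 4700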
By adding the estimates for all leaf-level modules together, project managers arrive at the total size estimate. In spite of its conceptual simplicity, the LOC metric has several shortcomings when used to measure problem size.
The important shortcomings of the LOC metric
LOC is a measure of coding activity alone. A good problem size measure should consider
the total effort needed to carry out various life cycle activities (i.e. specification, design, code,
test, etc.) and not just the coding effort. LOC, however, focuses on the coding activity alone
—it merely computes the number of source lines in the final program.
LOC count depends on the choice of specific instructions: LOC gives a numerical value of
problem size that can vary widely with coding styles of individual programmers. Different
programmers may lay out their code in very different ways. For example, one programmer
might write several source instructions on a single line, whereas another might split a single
instruction across several lines. Unless this issue is handled satisfactorily, there is a
possibility of arriving at very different size measures for essentially identical programs.
LOC measure correlates poorly with the quality and efficiency of the code:
Larger code size does not necessarily imply better quality of code or higher efficiency. Some programmers produce lengthy and complicated code as they do not make effective use of the available instruction set or use improper algorithms. In fact, a poorly and sloppily written piece of code can have a larger number of source instructions than a piece that is efficient and has been thoughtfully written.
LOC metric penalizes use of higher-level programming languages and code reuse: A
paradox is that if a programmer consciously uses several library routines, then the LOC count
will be lower. This would show up as smaller program size, and in turn, would indicate lower
effort!
It is very difficult to accurately estimate the LOC of the final program from the problem specification: At project initiation time, it is a very difficult task to accurately estimate the number of lines of code (LOC) that the program would have after development. The LOC count can accurately be computed only after the code has been fully developed. Since project planning is carried out even before any development activity starts, the LOC metric is of little use to project managers during project planning.
FUNCTION POINT (FP) Metrics
Function point metric has steadily gained popularity. Function point metric has several
advantages over LOC metric. One of the important advantages of the function point metric
over the LOC metric is that it can easily be computed from the problem specification itself.
Using the LOC metric, on the other hand, the size can accurately be determined only after the
product has fully been developed. Conceptually, the function point metric is based on the idea that the size of a software product is directly dependent on the number of different high-level functions or features it supports: a product supporting many features would certainly be of larger size than a product with fewer features. This assumption is reasonable, since each feature takes additional effort to implement. Though each
feature takes some effort to develop, different features may take very different amounts of effort to develop. For example, in banking software, a function to display a help message
may be much easier to develop compared to say the function that carries out the actual
banking transactions. This has been considered by the function point metric by counting the
number of input and output data items and the number of files accessed by the function. The
implicit assumption made is that the more the number of data items that a function reads from
the user and outputs and the more the number of files accessed, the higher is the complexity
of the function.
Now let us analyse why this assumption is intuitively correct. Each feature, when invoked, typically reads some input data and transforms it into the required output data. For example, the query book feature of a Library Automation Software takes the name of the book as input and displays its location in the library and the total number of copies available. Similarly, the issue book and return book features produce their output based on the corresponding input data. It can therefore be argued that counting the number of input and output data items gives a more accurate indication of code size than simply counting the number of high-level functions supported by the system.
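As a concrete sketch of how such counts might be combined into a single size figure, the classic (Albrecht-style) unadjusted function point count is a weighted sum over the measurable parameters. The Python sketch below assumes the commonly quoted average-complexity weights; a full function point analysis also distinguishes complexity levels and applies a technical complexity adjustment, both omitted here:

# A minimal sketch of an unadjusted function point (UFP) count,
# assuming the commonly quoted average-complexity weights.
WEIGHTS = {
    "inputs": 4,       # items of user input
    "outputs": 5,      # items of output (reports, messages)
    "inquiries": 4,    # interactive queries
    "files": 10,       # internal logical files
    "interfaces": 7,   # external interface files
}

def unadjusted_fp(counts: dict) -> int:
    # Weighted sum over the five measurable parameters.
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical counts for a small library automation product:
print(unadjusted_fp({"inputs": 12, "outputs": 8, "inquiries": 5,
                     "files": 4, "interfaces": 2}))  # 48 + 40 + 20 + 40 + 14 = 162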
Introduction to the concepts of risk and its mitigation
Every project is susceptible to a large number of risks. Without effective management of the risks, even the most meticulously planned project may go haywire.
A risk is any anticipated unfavourable event or circumstance that can occur while a project is underway.
We need to distinguish between a risk, which is a problem that might occur, and the problems currently being faced by a project. If a risk becomes real, the anticipated problem becomes a reality and is no longer a risk. A risk that becomes real can adversely affect the project and hamper its successful and timely completion. Therefore, it is necessary for the project manager to anticipate and identify the different risks that a project is susceptible to, so that contingency plans can be prepared beforehand to contain each risk. In this context, risk management aims at reducing both the chances of a risk becoming real and the impact of those risks that do become real.
Risk management consists of three essential activities: risk identification, risk assessment, and risk mitigation.
Risk Identification
The project manager needs to anticipate the risks in a project as early as possible, so that effective risk management plans can be made and the possible impact of each risk is minimised. Early risk identification is therefore important. Risk identification is somewhat similar to the project manager listing down his nightmares. For example, the project manager might be worried whether the vendors who have been asked to develop certain modules will complete their work in time, whether they will turn in poor quality work, whether some key personnel will leave the organisation, and so on. All such risks that are likely to affect a project must be identified and listed.
A project can be subject to a large variety of risks. In order to systematically identify the important risks which might affect a project, it is necessary to categorise risks into different classes. The project manager can then examine which risks from each class are relevant to the project. There are three main categories of risks which can affect a software project: project risks, technical risks, and business risks.
Project risks:
Project risks concern various forms of budgetary, schedule, personnel, resource, and
customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project; it is very difficult to control something which cannot be seen. For a manufacturing project, such as the manufacture of cars, the project manager can see the product taking shape. He can, for instance, see that the engine is fitted and then the doors are fitted, that the car is being painted, etc. Thus he can accurately assess the progress of the work and control it if he finds any activity progressing at a slower rate than planned. The invisibility of the product
being developed is an important reason why many software projects suffer from the risk of
schedule slippage.
Technical risks:
Technical risks concern potential design, implementation, interfacing, testing, and
maintenance problems. Technical risks also include ambiguous specification, incomplete
specification, changing specification, technical uncertainty, and technical obsolescence. Most
technical risks occur due to the development team's insufficient knowledge of the product being developed.
Business risks:
This type of risk includes the risk of building an excellent product that no one wants, losing budgetary commitments, etc.
Classification of risks in a project
➢ What if the project cost escalates and overshoots what was estimated? Project risk.
➢ What if the mobile phones that are developed become too bulky in size to conveniently
carry? Business risk.
➢ What if it is later found out that the level of radiation coming from the phones is harmful
to human beings? Business risk.
➢ What if call hand-off between satellites becomes too difficult to implement? Technical
risk.
In order to be able to successfully foresee and identify the different risks that might affect a software project, it is a good idea to maintain a company disaster list. This list would contain all the bad events that have happened to the company's software projects over the years, including events that can be laid at the customer's door. Project managers can read this list in order to become aware of some of the risks that a project might be susceptible to. Such a disaster list has been found to help in performing better risk analysis.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage causing
potential. For risk assessment, first each risk should be rated in two ways:
The likelihood of a risk becoming real (r).
The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as follows:
p=r*s
where, p is the priority with which the risk must be handled, r is the probability of the risk
becoming real, and s is the severity of damage caused due to the risk becoming real. If all
identified risks are prioritized, then the most likely and damaging risks can be handled first
and more comprehensive risk abatement procedures can be designed for those risks.
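A minimal sketch of this prioritisation in Python (the risks, likelihoods, and severities below are hypothetical):

# Prioritise risks by p = r * s, where r is the likelihood of the risk
# becoming real and s is the severity of the resulting damage.
risks = [
    ("Key personnel leave", 0.3, 9),
    ("Vendor delivers module late", 0.6, 7),
    ("Requirements change midway", 0.5, 5),
]

# Sort by priority p = r * s, highest first.
for name, r, s in sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True):
    print(f"{name}: priority = {r * s:.2f}")
# Output ranks "Vendor delivers module late" (4.20) ahead of
# "Key personnel leave" (2.70) and "Requirements change midway" (2.50).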
Risk Mitigation
After all the identified risks of a project have been assessed, plans are made to contain the
most damaging and the most likely risks first. Different types of risks require different
containment procedures. In fact, most risks require considerable ingenuity on the part of the
project manager in tackling the risks.
There are three main strategies for risk containment:
Avoid the risk: Risks can be avoided in several ways. Risks often arise due to project
constraints and can be avoided by suitably modifying the constraints. The different categories
of constraints that usually give rise to risks are:
Process-related risks: These risks arise due to an aggressive work schedule, budget, and resource utilisation.
Product-related risks: These risks arise due to commitment to challenging product features
(e.g. response time of one second, etc.), quality, reliability etc.
Technology-related risks: These risks arise due to commitment to use certain technology
(e.g., satellite communication).
A few examples of risk avoidance can be the following: Discussing with the customer to
change the requirements to reduce the scope of the work, giving incentives to the developers
to avoid the risk of manpower turnover, etc.
Transfer the risk: This strategy involves getting the risky components developed by a third
party, buying insurance cover, etc.
Risk reduction: This involves planning ways to contain the damage due to a risk. For
example, if there is risk that some key personnel might leave, new recruitment may be
planned. The most important risk reduction technique for technical risks is to build a
prototype that tries out the technology that you are trying to use. For example, if you are
using a compiler for recognizing user commands, you would have to construct a compiler for
a small and very primitive command language first.
There can be several strategies to cope with a risk. To choose the most appropriate
strategy for handling a risk, the project manager must consider the cost of handling the risk
and the corresponding reduction of risk. For this we may compute the risk leverage of the
different risks. Risk leverage is the difference in risk exposure divided by the cost of reducing
the risk. More formally,
risk leverage = (risk exposure before reduction − risk exposure after reduction) / cost of risk reduction
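As a small illustration (with hypothetical figures), where risk exposure is taken as the probability of the risk becoming real multiplied by the loss incurred if it does:

# risk leverage = (exposure before reduction - exposure after reduction)
#                 / cost of risk reduction
def risk_leverage(exposure_before: float, exposure_after: float,
                  reduction_cost: float) -> float:
    return (exposure_before - exposure_after) / reduction_cost

# Two hypothetical options for containing the same risk:
print(risk_leverage(100_000, 20_000, 40_000))  # 2.0
print(risk_leverage(100_000, 10_000, 60_000))  # 1.5 (the first option gives
                                               # better value per unit cost)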
Even though we identified three broad ways to handle any risk, effective risk handling cannot
be achieved by mechanically following a set procedure, but requires a lot of ingenuity on the
part of the project manager. As an example, let us consider the options available to contain an
important type of risk that occurs in many software projects—that of schedule slippage.
An example of handling schedule slippage risk
Risks relating to schedule slippage arise primarily due to the intangible nature of
software. For a project such as building a house, the progress can easily be seen and assessed
by the project manager. If he finds that the project is lagging behind, then corrective actions
can be initiated. Considering that software development per se is invisible, the first step in
managing the risks of schedule slippage, is to increase the visibility of the software product.
Visibility of a software product can be increased by producing relevant documents
during the development process and getting these documents reviewed by an appropriate
team. Milestones should be placed at regular intervals to provide a manager with regular
indication of progress. Completion of a phase of the development process need not be the only type of milestone. Every phase can be broken down into reasonable-sized tasks
and milestones can be associated with these tasks.
A milestone is reached once the documentation produced as part of a software engineering task gets successfully reviewed. Milestones need not be placed for every activity; an approximate rule of thumb is to set a milestone every 10 to 15 days. If milestones are placed too close to each other, then the overheads of managing them become too high.
Configuration management
Software systems are constantly changing during development and use. Bugs are
discovered and have to be fixed. System requirements change, and you have to implement
these changes in a new version of the system. New versions of hardware and system
platforms are released, and you have to adapt your systems to work with them. Competitors
introduce new features in their system that you have to match. As changes are made to the
software, a new version of a system is created. Most systems, therefore, can be thought of as
a set of versions, each of which may have to be maintained and managed.
Configuration management (CM) is concerned with the policies, processes, and tools
for managing changing software systems. You need to manage evolving systems because it is
easy to lose track of what changes and component versions have been incorporated into each
system version. Versions implement proposals for change, corrections of faults, and
adaptations for different hardware and operating systems. Several versions may be under
development and in use at the same time. If you don’t have effective configuration
management procedures in place, you may waste effort modifying the wrong version of a
system, delivering the wrong version of a system to customers, or forgetting where the
software source code for a particular version of the system or component is stored.
Configuration management is useful for individual projects as it is easy for one person
to forget what changes have been made. It is essential for team projects where several
developers are working at the same time on a software system.
Sometimes these developers are all working in the same place, but, increasingly,
development teams are distributed with members in different locations across the world. The
configuration management system provides team members with access to the system being
developed and manages the changes that they make to the code.
The configuration management of a software system product involves four closely related
activities:
1. Version control This involves keeping track of the multiple versions of system components
and ensuring that changes made to components by different developers do not interfere with
each other.
2. System building This is the process of assembling program components, data, and
libraries, then compiling and linking these to create an executable system.
3. Change management This involves keeping track of requests for changes to delivered
software from customers and developers, working out the costs and impact of making these
changes, and deciding if and when the changes should be implemented.
4. Release management This involves preparing software for external release and keeping
track of the system versions that have been released for customer use.
The development of a software product or custom software system takes place in three
distinct phases:
A development phase where the development team is responsible for managing the software
configuration and new functionality is being added to the software. The development team
decides on the changes to be made to the system.
A system testing phase where a version of the system is released internally for testing. This
may be the responsibility of a quality management team or an individual or group within the
development team. At this stage, no new functionality is added to the system.
The changes made at this stage are bug fixes, performance improvements, and security
vulnerability repairs. There may be some customer involvement as beta testers during this
phase.
A release phase where the software is released to customers for use. After the release has
been distributed, customers may submit bug reports and change requests. New versions of the
released system may be developed to repair bugs and vulnerabilities and to include new
features suggested by customers.
For large systems, there is never just one “working” version of a system; there are always
several versions of the system at different stages of development. Several teams may be
involved in the development of different system versions. Consider, for example, a situation in which three versions of a system are being developed:
1. Version 1.5 of the system has been developed to repair bugs and improve the
performance of the first release of the system. It is the basis of the second system release
(R1.1).
2. Version 2.4 is being tested with a view to it becoming release 2.0 of the system. No new
features are being added at this stage.
3. Version 3 is a development system where new features are being added in response to
change requests from customers and the development team. This will eventually be released
as release 3.0.
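A toy Python sketch of this situation, using the version and release numbers from the example above (the data structure and may_add_features helper are purely illustrative):

# Several version streams of one system, each at a different stage.
# Per the process above, new features are added only during development.
version_streams = [
    {"version": "1.5", "stage": "maintenance",    "target_release": "R1.1"},
    {"version": "2.4", "stage": "system testing", "target_release": "R2.0"},
    {"version": "3",   "stage": "development",    "target_release": "R3.0"},
]

def may_add_features(stream: dict) -> bool:
    # Only the development stream accepts new functionality;
    # testing and maintenance streams take only fixes.
    return stream["stage"] == "development"

for s in version_streams:
    print(f"V{s['version']} -> {s['target_release']}, "
          f"new features allowed: {may_add_features(s)}")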
What is a configuration management plan?
A configuration management plan is a comprehensive document that details the
configurations of a project and how project managers plan to handle them. In project
management, a configuration is a defining feature of a successful project. Configurations are
the specific features of a project's deliverables, which project managers aim to achieve upon
the completion of a project. Configuration management, which closely relates to project
change management, is a process of identifying, recording and managing a project's
configurations. Often, project managers also include configuration management plans with
project quality management plans.
Why is a configuration management plan important?
A configuration management plan is important because it can help everyone involved
with the project understand its configurations. These plans can help project managers create
strategies to achieve the deliverables of their projects and complete projects successfully.
These plans are also useful for stakeholders, as they can use them to stay informed on the
progress and deliverables of a project.
Who uses a configuration management plan?
These plans are mainly used by project managers and stakeholders. The project manager is
responsible for creating the plan, and stakeholders often review it. Project stakeholders are
individuals who are interested in or dependent on a project's outcome. These individuals can
include investors, employees, customers and other people. Often, stakeholders set the
deliverables of a project.
How to complete the configuration management process
The configuration management process includes five basic steps:
1. Creating the configuration management plan
The first step of the configuration management process is creating the plan. This type of plan
explains your process for managing, recording and testing project configurations. It defines
the project's deliverables and how you plan to achieve them, and it helps inform the project
stakeholders about your configuration management strategies. These plans typically include:
 Introduction: The introduction of the plan includes the purpose of the project, the
project scope and any other relevant contextual information.
 Project overview: The project overview section gives a brief description of the entire
project.
 Configuration management strategies: The largest section of the plan lists specific
configuration management strategies, including how to identify, track and test
configurations.
2. Identifying configuration requirements
It's also important to identify your project's configuration requirements. You can do this by
meeting with stakeholders and reviewing your deliverables. Once you've identified your
configurations, be sure to document them so you can measure changes and progress later.
3. Documenting changes
Another important step in the process is documenting changes in project scope and
configurations. You can then compare these changes to your baseline configurations that you
initially recorded. Be sure to update your configuration management plan when necessary.
4. Tracking configurations
You can also track your project's configurations through status accounting. The goal of this
stage of the process is to create a list of all of the previous and current configuration versions.
This can help you keep records of changes.
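A bare-bones sketch of such status accounting (the configuration item and its version records are hypothetical), keeping an append-only history so that previous and current configuration versions can always be listed:

# Status accounting: keep an append-only record of every configuration
# version so that previous and current states can always be reported.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    history: list = field(default_factory=list)  # (version, description) records

    def record_change(self, version: str, description: str) -> None:
        self.history.append((version, description))

    def status_report(self) -> None:
        for version, description in self.history:
            print(f"{self.name} {version}: {description}")

item = ConfigurationItem("billing-module")
item.record_change("1.0", "baseline configuration")
item.record_change("1.1", "bug fix: rounding error in invoices")
item.record_change("2.0", "new feature: multi-currency support")
item.status_report()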
5. Testing adherence to configuration requirements
Another crucial step is testing how your project adheres to configuration requirements. This
is known as auditing. The purpose of this step is to ensure that the result of your project
meets these requirements. When testing, you can spot any areas of improvement. Many
project managers also complete testing at the end of each individual project cycle so they can
find and correct issues before project completion.