SE Unit II
Basic concepts of life cycle models – different models and milestones; software project planning –
identification of activities and resources; concepts of feasibility study; techniques for estimation of
schedule and effort; software cost estimation models and concepts of software engineering
economics; techniques of software project control and reporting; introduction to measurement of
software size; introduction to the concepts of risk and its mitigation; configuration management.
Basic concepts of life cycle models
SDLC - Overview
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of a
detailed plan describing how to develop, maintain, replace and alter or enhance specific software. The
life cycle defines a methodology for improving the quality of software and the overall development
process.
The following figure is a graphical representation of the various stages of a typical SDLC.
A typical Software Development Life Cycle consists of the following stages −
Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct a product feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the
project is also done in the planning stage. The outcome of the technical feasibility study is to define
the various technical approaches that can be followed to implement the project successfully with
minimum risks.
Stage 2: Defining Requirements
Once the requirement analysis is done, the next step is to clearly define and document the product requirements and get them approved by the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document, which consists of all the product requirements to be designed and developed during the project life cycle.
Stage 3: Designing the Product Architecture
SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS (Design Document Specification).
This DDS is reviewed by all the important stakeholders and based on various parameters as risk
assessment, product robustness, design modularity, budget and time constraints, the best design
approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with its
communication and data flow representation with the external and third party modules (if any). The
internal design of all the modules of the proposed architecture should be clearly defined with the
minutest of the details in DDS.
Stage 4: Building or Developing the Product
In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per the DDS during this stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and programming tools
like compilers, interpreters, debuggers, etc. are used to generate the code. Different high level
programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming
language is chosen with respect to the type of software being developed.
Stage 5: Testing the Product
This stage is usually a subset of all the stages, as in modern SDLC models the testing activities are mostly involved in all the stages of SDLC. However, this stage refers to the testing-only stage of the product, where product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in the appropriate market.
Sometimes product deployment happens in stages as per the business strategy of that organization.
The product may first be released in a limited segment and tested in the real business environment
(UAT- User acceptance testing).
Then based on the feedback, the product may be released as it is or with suggested enhancements in
the targeting market segment. After the product is released in the market, its maintenance is done for
the existing customer base.
SDLC Models
There are various software development life cycle models defined and designed, which are followed during the software development process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps unique to its type to ensure success in the process of software development.
Following are the most important and popular SDLC models followed in the industry −
Waterfall Model
Iterative Model
Spiral Model
V-Model
Big Bang Model
Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model, and Prototyping Models.
SDLC - Waterfall Model
The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software development.
The waterfall Model illustrates the software development process in a linear sequential flow. This
means that any phase in the development process begins only if the previous phase is complete. In this
waterfall model, the phases do not overlap.
Waterfall approach was first SDLC Model to be used widely in Software Engineering to ensure
success of the project. In "The Waterfall" approach, the whole process of software development is
divided into separate phases. In this Waterfall model, typically, the outcome of one phase acts as the
input for the next phase sequentially.
The following illustration is a representation of the different phases of the Waterfall Model.
The sequential phases in the Waterfall model are Requirement Gathering and Analysis, System Design, Implementation, Integration and Testing, Deployment, and Maintenance.
All these phases are cascaded to each other, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
The advantages of waterfall development are that it allows for departmentalization and control. A
schedule can be set with deadlines for each stage of development and a product can proceed through
the development process model phases one by one.
The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.
SDLC - Iterative Model
In the Iterative model, the iterative process starts with a simple implementation of a small set of the software requirements and iteratively enhances the evolving versions until the complete system is implemented and ready to be deployed.
An iterative life cycle model does not attempt to start with a full specification of requirements.
Instead, development begins by specifying and implementing just part of the software, which is then
reviewed to identify further requirements. This process is then repeated, producing a new version of
the software at the end of each iteration of the model.
Iterative process starts with a simple implementation of a subset of the software requirements and
iteratively enhances the evolving versions until the full system is implemented. At each iteration,
design modifications are made and new functional capabilities are added. The basic idea behind this
method is to develop a system through repeated cycles (iterative) and in smaller portions at a time
(incremental).
Iterative and incremental development is a combination of the iterative design method and the incremental build model. During software development, more than one iteration of the software development cycle may be in progress at the same time. This process may be described as an "evolutionary acquisition" or "incremental build" approach.
In this incremental model, the whole requirement is divided into various builds. During each iteration,
the development module goes through the requirements, design, implementation and testing phases.
Each subsequent release of the module adds function to the previous release. The process continues
till the complete system is ready as per the requirement.
The key to a successful use of an iterative software development lifecycle is rigorous validation of
requirements, and verification & testing of each version of the software against those requirements
within each cycle of the model. As the software evolves through successive cycles, tests must be
repeated and extended to verify each version of the software.
Like other SDLC models, iterative and incremental development has some specific applications in the software industry.
The advantage of this model is that there is a working model of the system at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development enables corrective measures to be taken within a limited budget.
The disadvantage of this SDLC model is that it is applicable only to large software development projects, because it is hard to break a small software system into further small serviceable increments/modules.
The advantages of the Iterative and Incremental SDLC Model are as follows −
Some working functionality can be developed quickly and early in the life cycle.
Results are obtained early and periodically.
Parallel development can be planned.
Progress can be measured.
Less costly to change the scope/requirements.
Testing and debugging during smaller iteration is easy.
Risks are identified and resolved during iteration; and each iteration is an easily managed
milestone.
Easier to manage risk - High risk part is done first.
With every increment, operational product is delivered.
Issues, challenges and risks identified from each increment can be utilized/applied to the next
increment.
Risk analysis is better.
It supports changing requirements.
Initial Operating time is less.
Better suited for large and mission-critical projects.
During the life cycle, software is produced early which facilitates customer evaluation and
feedback.
SDLC - Spiral Model
The spiral model combines the idea of iterative development with the systematic, controlled aspects of the waterfall model. The Spiral model is thus a combination of the iterative development process model and the sequential linear development model (i.e. the waterfall model), with a very high emphasis on risk analysis. It allows incremental releases of the product, or incremental refinement, through each iteration around the spiral.
The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.
Identification
This phase starts with gathering the business requirements in the baseline spiral. In the subsequent
spirals as the product matures, identification of system requirements, subsystem requirements and unit
requirements are all done in this phase.
This phase also includes understanding the system requirements by continuous communication
between the customer and the system analyst. At the end of the spiral, the product is deployed in the
identified market.
Design
The Design phase starts with the conceptual design in the baseline spiral and involves architectural
design, logical design of modules, physical product design and the final design in the subsequent
spirals.
Construct or Build
The Construct phase refers to production of the actual software product at every spiral. In the baseline
spiral, when the product is just thought of and the design is being developed a POC (Proof of
Concept) is developed in this phase to get customer feedback.
Then in the subsequent spirals with higher clarity on requirements and design details a working model
of the software called build is produced with a version number. These builds are sent to the customer
for feedback.
Evaluation and Risk Analysis
Risk analysis includes identifying, estimating and monitoring the technical feasibility and management risks, such as schedule slippage and cost overrun. After testing the build, at the end of the first iteration, the customer evaluates the software and provides feedback.
The following illustration is a representation of the Spiral Model, listing the activities in each phase.
Based on the customer evaluation, the software development process enters the next iteration and
subsequently follows the linear approach to implement the feedback suggested by the customer. The
process of iterations along the spiral continues throughout the life of the software.
The Spiral Model is widely used in the software industry as it is in sync with the natural development process of any product, i.e. learning with maturity, and it involves minimum risk for the customer as well as the development firm.
The advantage of the spiral lifecycle model is that it allows elements of the product to be added when they become available or known. This assures that there is no conflict with previous requirements and design.
This method is consistent with approaches that have multiple software builds and releases which
allows making an orderly transition to a maintenance activity. Another positive aspect of this method
is that the spiral model forces an early user involvement in the system development effort.
On the other hand, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So, the discipline of change and the extent to which change requests are accepted are very important for developing and deploying the product successfully.
SDLC - V-Model
The V-model is an SDLC model where execution of processes happens in a sequential manner in a V-
shape. It is also known as Verification and Validation model.
The V-Model is an extension of the waterfall model and is based on the association of a testing phase
for each corresponding development stage. This means that for every single phase in the development
cycle, there is a directly associated testing phase. This is a highly-disciplined model and the next
phase starts only after completion of the previous phase.
V-Model - Design
Under the V-Model, the corresponding testing phase of the development phase is planned in parallel.
So, there are Verification phases on one side of the ‘V’ and Validation phases on the other side. The
Coding Phase joins the two sides of the V-Model.
The following illustration depicts the different phases in a V-Model of the SDLC.
V-Model - Verification Phases
There are several Verification phases in the V-Model; each of these is explained in detail below.
Business Requirement Analysis
This is the first phase in the development cycle, where the product requirements are understood from the customer's perspective. This phase involves detailed communication with the customer to understand their expectations and exact requirements. This is a very important activity that needs to be managed well, as most customers are not sure about what exactly they need. The acceptance test design planning is done at this stage, as business requirements can be used as an input for acceptance testing.
System Design
Once you have clear and detailed product requirements, it is time to design the complete system. The system design comprises understanding and detailing the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design. Doing this at an earlier stage leaves more time for the actual test execution later.
Architectural Design
Architectural specifications are understood and designed in this phase. Usually more than one
technical approach is proposed and based on the technical and financial feasibility the final decision is
taken. The system design is broken down further into modules taking up different functionality. This
is also referred to as High Level Design (HLD).
The data transfer and communication between the internal modules and with the outside world (other
systems) is clearly understood and defined in this stage. With this information, integration tests can be
designed and documented during this stage.
Module Design
In this phase, the detailed internal design for all the system modules is specified; this is referred to as Low Level Design (LLD). It is important that the design is compatible with the other modules in the system architecture and the other external systems. Unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early stage. These unit tests can be designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken up in the Coding
phase. The best suitable programming language is decided based on the system and architectural
requirements.
The coding is performed based on the coding guidelines and standards. The code goes through
numerous code reviews and is optimized for best performance before the final build is checked into
the repository.
V-Model - Validation Phases
Unit Testing
Unit tests designed in the module design phase are executed on the code during this validation phase.
Unit testing is the testing at code level and helps eliminate bugs at an early stage, though all defects
cannot be uncovered by unit testing.
Integration Testing
Integration testing is associated with the architectural design phase. Integration tests are performed to
test the coexistence and communication of the internal modules within the system.
System Testing
System testing is directly associated with the system design phase. System tests check the entire
system functionality and the communication of the system under development with external systems.
Most of the software and hardware compatibility issues can be uncovered during this system test
execution.
Acceptance Testing
Acceptance testing is associated with the business requirement analysis phase and involves testing the
product in user environment. Acceptance tests uncover the compatibility issues with the other systems
available in the user environment. It also discovers the non-functional issues such as load and
performance defects in the actual user environment.
V-Model - Application
V-Model application is almost the same as the waterfall model, as both models are of sequential type. Requirements have to be very clear before the project starts, because it is usually expensive to go back and make changes. This model is used in the medical development field, as it is a strictly disciplined domain.
The advantage of the V-Model is that it is very easy to understand and apply. The simplicity of this model also makes it easier to manage. The disadvantage is that the model is not flexible to change; if there is a requirement change, which is very common in today's dynamic world, it becomes very expensive to make the change.
SDLC - Big Bang Model
The Big Bang model is an SDLC model in which no specific process is followed. Development simply starts with the required money and effort as the input, and the output is the software developed, which may or may not be as per customer requirements. The Big Bang Model does not follow a process/procedure, and very little planning is required. Even the customer is not sure about what exactly he wants, and the requirements are implemented on the fly without much analysis.
Usually this model is followed for small projects where the development teams are very small.
This model is ideal for small projects with one or two developers working together and is also useful
for academic or practice projects. It is an ideal model for the product where requirements are not well
understood and the final release date is not given.
The advantage of the Big Bang Model is that it is very simple and requires very little or no planning. It is easy to manage, and no formal procedures are required.
However, the Big Bang Model is a very high-risk model; changes in the requirements or misunderstood requirements may even lead to complete reversal or scrapping of the project. It is suitable only for repetitive or small projects with minimum risks.
SDLC - Agile Model
The Agile SDLC model is a combination of iterative and incremental process models, with focus on process adaptability and customer satisfaction through rapid delivery of working software. Agile methods break the product into small incremental builds. These builds are provided in iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves cross-functional teams working simultaneously on various areas like −
Planning
Requirements Analysis
Design
Coding
Unit Testing and
Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and important stakeholders.
What is Agile?
The Agile model believes that every project needs to be handled differently and that the existing methods need to be tailored to best suit the project requirements. In Agile, the tasks are divided into time boxes (small time frames) to deliver specific features for a release.
An iterative approach is taken, and a working software build is delivered after each iteration. Each build is incremental in terms of features; the final build holds all the features required by the customer.
The Agile thought process started early in software development and became popular with time due to its flexibility and adaptability.
The most popular Agile methods include Rational Unified Process (1994), Scrum (1995), Crystal
Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development,
and Dynamic Systems Development Method (DSDM) (1995). These are now collectively referred to
as Agile Methodologies, after the Agile Manifesto was published in 2001.
Agile is based on adaptive software development methods, whereas traditional SDLC models like the waterfall model are based on a predictive approach. Predictive teams in the traditional SDLC models usually work with detailed planning and have a complete forecast of the exact tasks and features to be delivered in the next few months or during the product life cycle.
Predictive methods entirely depend on the requirement analysis and planning done in the beginning
of cycle. Any changes to be incorporated go through a strict change control management and
prioritization.
Agile uses an adaptive approach where there is no detailed planning and there is clarity on future
tasks only in respect of what features need to be developed. There is feature driven development and
the team adapts to the changing product requirements dynamically. The product is tested very
frequently, through the release iterations, minimizing the risk of any major failures in future.
Customer Interaction is the backbone of this Agile methodology, and open communication with
minimum documentation are the typical features of Agile development environment. The agile teams
work in close collaboration with each other and are most often located in the same geographical
location.
Agile methods are now widely accepted in the software world. However, this method may not always be suitable for all products.
SDLC - RAD Model
The RAD (Rapid Application Development) model is based on prototyping and iterative development with no specific planning involved. In RAD, the process of writing the software itself involves the planning required for developing the product.
What is RAD?
Rapid application development is a software development methodology that uses minimal planning in
favor of rapid prototyping. A prototype is a working model that is functionally equivalent to a
component of the product.
In the RAD model, the functional modules are developed in parallel as prototypes and are integrated to make the complete product for faster product delivery. Since there is no detailed preplanning, it is easier to incorporate changes within the development process.
RAD projects follow the iterative and incremental model and have small teams comprising developers, domain experts, customer representatives and other IT resources, working progressively on their component or prototype.
The most important aspect for this model to be successful is to make sure that the prototypes
developed are reusable.
The RAD model distributes the analysis, design, build and test phases into a series of short, iterative development cycles, as follows.
Business Modelling
The business model for the product under development is designed in terms of the flow of information and the distribution of information between various business channels. A complete business analysis is performed to find the vital information for the business, how it can be obtained, how and when the information is processed, and what the factors driving a successful flow of information are.
Data Modelling
The information gathered in the Business Modelling phase is reviewed and analyzed to form sets of data objects vital for the business. The attributes of all data sets are identified and defined. The relations between these data objects are established and defined in detail in relevance to the business model.
Process Modelling
The data object sets defined in the Data Modelling phase are converted to establish the business
information flow needed to achieve specific business objectives as per the business model. The
process model for any changes or enhancements to the data object sets is defined in this phase.
Process descriptions for adding, deleting, retrieving or modifying a data object are given.
Application Generation
The actual system is built and coding is done by using automation tools to convert process and data
models into actual prototypes.
Testing and Turnover
The overall testing time is reduced in the RAD model as the prototypes are independently tested during every iteration. However, the data flow and the interfaces between all the components need to be thoroughly tested with complete test coverage. Since most of the programming components have already been tested, the risk of any major issues is reduced.
The traditional SDLC follows a rigid process model, with high emphasis on requirement analysis and gathering before the coding starts. It puts pressure on the customer to sign off the requirements before the project starts, and the customer does not get a feel of the product, as there is no working build available for a long time.
The customer may need some changes after seeing the software. However, the change process is quite rigid, and it may not be feasible to incorporate major changes in the product in the traditional SDLC.
The RAD model focuses on iterative and incremental delivery of working models to the customer. This results in rapid delivery to the customer and customer involvement during the complete development cycle of the product, reducing the risk of non-conformance with the actual user requirements.
RAD model can be applied successfully to the projects in which clear modularization is possible. If
the project cannot be broken into modules, RAD may fail.
The RAD model enables rapid delivery as it reduces the overall development time due to the reusability of the components and parallel development. RAD works well only if highly skilled engineers are available and the customer is committed to achieving the targeted prototype in the given time frame. If there is a lack of commitment on either side, the model may fail.
SDLC - Software Prototyping
Software prototyping refers to building software application prototypes that display the functionality of the product under development but may not actually hold the exact logic of the original software.
A prototype is a working model of software with some limited functionality. The prototype does not always hold the exact logic used in the actual software application, and building it is an extra effort to be considered under effort estimation.
Prototyping is used to allow the users to evaluate developer proposals and try them out before implementation. It also helps in understanding requirements which are user-specific and may not have been considered by the developer during product design.
Basic Requirement Identification
This step involves understanding the very basic product requirements, especially in terms of user interface. The more intricate details of the internal design and external aspects like performance and security can be ignored at this stage.
Developing the Initial Prototype
The initial prototype is developed in this stage, where the very basic requirements are showcased and user interfaces are provided. These features may not exactly work in the same manner internally in the actual software developed; workarounds are used to give the same look and feel to the customer in the prototype.
Review of the Prototype
The prototype developed is then presented to the customer and the other important stakeholders in the project. The feedback is collected in an organized manner and used for further enhancements in the product under development.
Revise and Enhance the Prototype
The feedback and the review comments are discussed during this stage, and some negotiations happen with the customer based on factors like time and budget constraints and the technical feasibility of the actual implementation. The changes accepted are incorporated in a new prototype, and the cycle repeats until the customer's expectations are met.
Prototypes can have horizontal or vertical dimensions. A horizontal prototype displays the user interface for the product and gives a broader view of the entire system, without concentrating on internal functions. A vertical prototype, on the other hand, is a detailed elaboration of a specific function or a subsystem in the product.
The purposes of horizontal and vertical prototypes are different. Horizontal prototypes are used to get more information on the user interface level and the business requirements; they can even be presented in sales demos to get business in the market. Vertical prototypes are technical in nature and are used to get details of the exact functioning of the subsystems, for example, database requirements, interaction and data processing loads in a given subsystem.
There are different types of software prototypes used in the industry. Following are the major
software prototyping types used widely −
Throwaway/Rapid Prototyping
Throwaway prototyping is also called rapid or close-ended prototyping. This type of prototyping uses very little effort and minimum requirement analysis to build a prototype. Once the actual requirements are understood, the prototype is discarded and the actual system is developed with a much clearer understanding of the user requirements.
Evolutionary Prototyping
Evolutionary prototyping, also called breadboard prototyping, is based on building actual functional prototypes with minimal functionality in the beginning. The prototype developed forms the heart of the future prototypes, on top of which the entire system is built. With evolutionary prototyping, the well-understood requirements are included in the prototype, and further requirements are added as and when they are understood.
Incremental Prototyping
Incremental prototyping refers to building multiple functional prototypes of the various sub-systems
and then integrating all the available prototypes to form a complete system.
Extreme Prototyping
Extreme prototyping is used in the web development domain. It consists of three sequential phases.
First, a basic prototype with all the existing pages is presented in HTML format. Then the data processing is simulated using a prototype services layer. Finally, the services are implemented and integrated into the final prototype. The name Extreme Prototyping draws attention to the second phase of the process, where a fully functional UI is developed with very little regard to the actual services.
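To make the second phase concrete, here is a minimal sketch in Python of a stub services layer; the function names and returned data are hypothetical, invented purely for illustration. The fully functional UI calls these stubs as if they were real services, and in the third phase each stub is replaced by a real implementation without changing the UI.

    # Hypothetical stub services layer for phase 2 of extreme prototyping.
    # Each function only returns canned data; no real data processing occurs.

    def get_customer(customer_id):
        # Pretend to fetch a customer record; returns hard-coded sample data.
        return {"id": customer_id, "name": "Sample Customer", "orders": 3}

    def list_orders(customer_id):
        # Pretend to query an order store; returns fixed sample orders.
        return [{"order_id": n, "status": "shipped"} for n in range(1, 4)]

    # The prototype UI is built against these calls; in phase 3 the stubs
    # are swapped for real service implementations.
    print(get_customer(42))
    print(list_orders(42))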
Software prototyping is most useful in the development of systems with a high level of user interaction, such as online systems. Systems which need users to fill out forms or go through various screens before data is processed can use prototyping very effectively to give the exact look and feel even before the actual software is developed.
Software that involves a great deal of data processing, where most of the functionality is internal with very little user interface, does not usually benefit from prototyping. Prototype development can be an extra overhead in such projects and may require a lot of extra effort.
Software prototyping should be used in appropriate cases, and the decision should be taken very carefully so that the effort spent in building the prototype adds considerable value to the final software developed. The main disadvantages of this approach are as follows −
Risk of insufficient requirement analysis owing to too much dependency on the prototype.
Users may get confused in the prototypes and actual systems.
Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.
Developers may try to reuse the existing prototypes to build the actual system, even when it is
not technically feasible.
The effort invested in building prototypes may be too much if it is not monitored properly.
Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailored to meet client requirements. Most significantly, the underlying technology changes and advances so frequently and rapidly that the experience of one product may not be applicable to another. All such business and environmental constraints bring risk to software development; hence, it is essential to manage software projects efficiently.
Software Project Manager
The software project manager is responsible for planning and scheduling project development. The manager organizes the work to ensure that it is completed to the required standard, and monitors progress to check that development is on time and within budget. Project planning must address the major issues of size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management. To plan a successful software project, we must understand the scope of the work to be done, the risks involved, the resources required, the tasks to be accomplished, the milestones to be tracked, the effort (cost) to be expended, and the schedule to be followed.
Software project planning starts before technical work starts, with estimation of size, cost, development time, resources, and schedule. The size is the crucial parameter for the estimation of the other activities. Resource requirements are estimated on the basis of cost and development time. The project schedule may prove very useful for controlling and monitoring the progress of the project; it depends on the resources and the development time.
Identification of Activities
Software project management consists of many activities, including planning of the project, deciding the scope of the product, estimation of cost in different terms, scheduling of tasks, etc.
The list of activities is as follows:
1. Project Planning: This is a set of multiple processes, or in other words, a task that is performed before the construction of the product starts.
2. Scope Management: This describes the scope of the project. Scope management is important because it clearly defines what will be done and what will not. Scope management keeps the project restricted to well-defined, quantifiable tasks that can easily be documented, which in turn avoids cost and time overrun.
3. Estimation Management: This is not only about cost estimation; whenever we start to develop software, we also estimate its size (lines of code), effort and time, as well as cost. For such estimation, the following need to be considered:
o Size of software
o Quality
o Hardware
o Communication
o Training
o Additional Software and tools
o Skilled manpower
4. Scheduling Management: All the project activities are arranged in the required order and a time slot is allotted to each activity, so that the project completes within schedule.
5. Project Resource Management: In software development, all the elements are referred to as resources for the project. These can be human resources, productive tools, or libraries.
6. Project Risk Management: Risk management consists of all the activities such as identification, analysis, and preparation of a plan for the predictable and unpredictable risks in the project. Typical risks include:
o The Experienced team leaves the project, and the new team joins it.
o Changes in requirement.
o Change in technologies and the environment.
o Market competition.
7. Project Communication Management: From planning to closure, communication plays a vital role. In all phases, communication must be clear and well understood. Miscommunication can create a big blunder in the project.
8. Project Configuration Management: Changes in the product are controlled through the following activities:
o Identification
o Baseline
o Change Control
o Configuration Status Accounting
o Configuration Audits and Reviews
Resources
The second planning task is estimation of the resources required to accomplish the software development effort. There are three major categories of software engineering resources: people, reusable software components, and the development environment (hardware and software tools). Each resource is specified with four characteristics: a description of the resource, a statement of availability, the time when the resource will be required, and the duration of time that the resource will be applied. The last two characteristics can be viewed as a time window. Availability of the resource for a specified window must be established at the earliest practical time.
Human Resources
The planner begins by evaluating software scope and selecting the skills required to complete development. Both organizational position (e.g., manager, senior software engineer) and specialty (e.g., telecommunications, database, client-server) are specified. For relatively small projects (a few person-months), a single individual may perform all software engineering tasks, consulting with specialists as required. For larger projects, the software team may be geographically dispersed across a number of different locations; hence, the location of each human resource is specified.
The number of people required for a software project can be determined only after an estimate of development effort (e.g., person-months) is made.
Concepts of Feasibility Study
The first activity in the classical waterfall model is the feasibility study. In the feasibility study, three main aspects are determined: whether the software to be developed is economically feasible, whether the development effort and the cost that will be spent on developing the software are worth it, and whether the developing organization has the technical competence required to develop the software. This is also termed cost-benefit analysis.
Consider the example of developing some satellite communication software. If the developers do not know how to use satellite communication, or how to write programs for it, they will find the project technically infeasible. The third feasibility that needs to be determined during the feasibility study stage is schedule feasibility: whether the development work can be completed by the time the customer requires the product to be delivered.
During the feasibility study, the project manager thus needs to determine three types of feasibility: whether the project is cost-wise feasible, whether it is technically possible, and whether it is schedule feasible (i.e., whether it can be done in time).
The feasibility study involves carrying out several activities, such as collection of basic information relating to the software: the different data items that would be input to the system, the processing required to be carried out on these data, the output data required to be produced by the system, as well as the various constraints on the development. These collected data are analyzed to arrive at the following:
(i) Development of an overall understanding of the problem:
The first step is to roughly understand the requirements of the software and the customer: the features of the software (i.e., the different sorts of data that would be input to the system, the processing that needs to be done, and finally the code to be written), the output to be produced by the system, and the various constraints on the behavior of the system.
(ii) Formulation of the various possible strategies for solving the problem:
In this activity, various possible high-level solution schemes to the problem are
determined. For example, solution in a client-server framework and a standalone
application framework may be explored.
(iii) Evaluation of the different solution strategies:
The different identified solution schemes are analyzed to evaluate their benefits and
shortcomings. Such evaluation often requires making approximate estimates of the
resources required, cost of development, and development time required. The
different solutions are compared based on the estimations that have been worked
out. Once the best solution is identified, all activities in the later phases are carried
out as per this solution. At this stage, it may also be determined that none of the
solutions is feasible due to high cost, resource constraints, or some technical
reasons. This scenario would, of course, require the project to be abandoned.
Techniques for Estimation of Schedule and Effort:
Estimation is a project planning process that predicts the amount of time and money needed to complete a project. It is a key part of project success and is often used in software development to plan the resources and schedule for new applications or updates. Accurate effort estimation helps project managers create budgets, schedules, and resource plans, and allocate resources accordingly. Project scheduling is the process of deciding how the work in a project will be organized as separate tasks, and when and how these tasks will be executed. You estimate the calendar time needed to complete each task, the effort required, and who will work on the tasks that have been identified. You also have to estimate the resources needed to complete each task, such as the disk space required on a server, the time required on specialized hardware such as a simulator, and what the travel budget will be.
Scheduling in plan-driven projects involves breaking down the total work involved in a project into separate tasks and estimating the time required to complete each task. Tasks should normally last at least a week and no longer than about 8 to 10 weeks (roughly 2 months); if a task takes longer than this, it should be subdivided for project planning and scheduling.
Some of these tasks are carried out in parallel, with different people working on
different components of the system. You have to coordinate these parallel tasks and
organize the work so that the workforce is used optimally and you don’t introduce
unnecessary dependencies between the tasks.
Scheduling involves splitting the project into tasks and estimating the time and resources required to complete each task, organizing tasks concurrently to make optimal use of the workforce, and minimizing task dependencies to avoid delays caused by one task waiting for another to complete. It is dependent on the project manager's intuition and experience.
Scheduling is difficult for several reasons. Estimating the difficulty of problems, and hence the cost of developing a solution, is hard. Productivity is not proportional to the number of people working on a task. Adding people to a late project makes it later because of communication overheads. The unexpected always happens, so always allow contingency in planning.
Schedule representation
Project schedules may simply be represented in a table or spreadsheet showing the
tasks, effort, expected duration, and task dependencies.
However, this style of representation makes it difficult to see the relationships and
dependencies between the different activities. For this reason, alternative graphical
representations of project schedules have been developed that are often easier to
read and understand.
There are two types of representation that are commonly used:
1. Bar charts, which are calendar-based, show who is responsible for each activity,
the expected elapsed time, and when the activity is scheduled to begin and end. Bar
charts are sometimes called ‘Gantt charts’, after their inventor, Henry Gantt.
2. Activity networks, which are network diagrams, show the dependencies
between the different activities making up a project.
Normally, a project planning tool is used to manage project schedule information.
These tools usually expect you to input project information into a table and will then
create a database of project information. Bar charts and activity charts can then be
generated automatically from this database.
Project activities are the basic planning element. Each activity has:
1. A duration in calendar days or months.
2. An effort estimate, which reflects the number of person-days or person-months
to complete the work.
3. A deadline by which the activity should be completed.
4. A defined endpoint. This represents the tangible result of completing the activity.
This could be a document, the holding of a review meeting, the successful
execution of all tests, etc.
When planning a project, you should also define milestones; that is, each stage in
the project where a progress assessment can be made.
Each milestone should be documented by a short report that summarizes the
progress made and the work done. Milestones may be associated with a single task
or with groups of related activities. For example, milestone M1 is associated with
task T1 and milestone M3 is associated with a pair of tasks, T2 and T4. A special
kind of milestone is the production of a project deliverable. A deliverable is a work product that is delivered to the customer. It is the outcome of a significant project phase such as specification or design. Usually, the deliverables that are required are specified in the project contract, and the customer's view of the project's progress depends on these deliverables.
The tasks, their estimated effort and duration, and their dependencies (with associated milestones) are:

Task   Effort (person-days)   Duration (days)   Dependencies
T1     15                     10
T2     8                      15
T3     20                     15                T1 (M1)
T4     5                      10
T5     5                      10                T2, T4 (M3)
T6     10                     5                 T1, T2 (M4)
T7     25                     20                T1 (M1)
T8     75                     25                T4 (M2)
T9     10                     15                T3, T6 (M5)
T11    10                     10                T9 (M7)
The estimated duration for some tasks is more than the effort required and vice
versa. If the effort is less than the duration, this means that the people allocated to
that task are not working full-time on it. If the effort exceeds the duration, this
means that several team members are working on the task at the same time.
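The start and end dates shown in a bar chart can be derived mechanically from the duration and dependency data in the table above. The following Python sketch computes earliest start and finish days for those tasks; it is a simplified illustration of what a project planning tool does internally, ignoring staff allocation.

    # Earliest start/finish (in working days) from task durations and
    # dependencies, using the task data from the table above.
    tasks = {  # name: (duration_days, [dependencies])
        "T1": (10, []), "T2": (15, []), "T3": (15, ["T1"]), "T4": (10, []),
        "T5": (10, ["T2", "T4"]), "T6": (5, ["T1", "T2"]),
        "T7": (20, ["T1"]), "T8": (25, ["T4"]),
        "T9": (15, ["T3", "T6"]), "T11": (10, ["T9"]),
    }

    finish = {}
    def earliest_finish(name):
        # A task can start only when all of its dependencies have finished.
        if name not in finish:
            duration, deps = tasks[name]
            start = max((earliest_finish(d) for d in deps), default=0)
            finish[name] = start + duration
        return finish[name]

    for name in tasks:
        end = earliest_finish(name)
        print(f"{name}: start day {end - tasks[name][0]}, end day {end}")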
Fig.2.6 is a bar chart showing a project calendar and the start and finish dates of
tasks. Reading from left to right, the bar chart clearly shows when tasks start and
end. The milestones (M1, M2, etc.) are also shown on the bar chart.
Bar Chart
Notice that tasks that are independent are carried out in parallel (e.g., tasks T1, T2, and T4 all start at the beginning of the project).
As well as planning the delivery schedule for the software, project managers have to
allocate resources to tasks. The key resource is, of course, the software engineers
who will do the work, and they have to be assigned to project activities. The
resource allocation can also be input to project management tools and a bar chart
generated, which shows when staff are working on the project. People may be
working on more than one task at the same time and, sometimes, they are not
working on the project.
They may be on holiday, working on other projects, attending training courses, or engaging in some other activity. Part-time assignments are shown using a diagonal line crossing the bar.
Large organizations usually employ a number of specialists who work on a project
when needed. In Figure 2.6, you can see that Mary is a specialist, who works on
only a single task in the project. This can cause scheduling problems. If one project
is delayed while a specialist is working on it, this may have a knock-on effect on
other projects where the specialist is also required. These may then be delayed
because the specialist is not available.
Whether you use experience-based estimation or algorithmic cost modeling, you need to use your judgment to estimate either the effort directly, or the project and product characteristics from which the effort is computed.
Based on data collected from a large number of projects, Boehm, et al. (1995)
discovered that startup estimates vary significantly. If the initial estimate of effort
required is x months of effort, they found that the range may be from 0.25x to 4x of
the actual effort as measured when the system was delivered.
Algorithmic cost modeling uses a formula of the general form
Effort = A × Size^B × M
where A is a constant factor which depends on local organizational practices and the type of software that is developed. Size may be either an assessment of the code size of the software or a functionality estimate expressed in function or application points. The value of exponent B usually lies between 1 and 1.5. M is a multiplier made by combining process, product, and development attributes, such as the dependability requirements for the software and the experience of the development team. The number of lines of source code (SLOC) in the delivered system is the fundamental size metric that is used in many algorithmic cost models.
Most algorithmic estimation models have an exponential component (B in the above
equation) that is related to the size and complexity of the system. This reflects the
fact that costs do not usually increase linearly with project size. As the size and
complexity of the software increases, extra costs are incurred because of the
communication overhead of larger teams, more complex configuration management,
more difficult system integration, and so on. The more complex the system, the
more these factors affect the cost.
Therefore, the value of B usually increases with the size and complexity of
the system.
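As a minimal sketch of the general formula Effort = A × Size^B × M, the following Python fragment produces the recommended range of estimates (worst, expected, best). The values of A, B, M and the size figures are illustrative assumptions, not calibrated organizational data.

    # Generic algorithmic cost model: Effort = A * Size^B * M.
    # A, B, M and the sizes below are assumed values for illustration only.

    def estimate_effort(size_kloc, A=2.5, B=1.1, M=1.0):
        # Returns effort in person-months for a given size in KLOC.
        return A * size_kloc ** B * M

    # Apply the formula to a range of size estimates, not a single figure.
    for label, size in [("best", 30), ("expected", 40), ("worst", 60)]:
        print(f"{label:9s}: {estimate_effort(size):7.1f} person-months")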
Algorithmic cost models are a systematic way to estimate the effort required to
develop a system. However, these models are complex and difficult to use. There
are many attributes and considerable scope for uncertainty in estimating their
values.
This complexity discourages potential users and hence the practical application of
algorithmic cost modeling has been limited to a small number of companies. If you
use an algorithmic cost estimation model, you should develop a range of estimates
(worst, expected, and best) rather than a single estimate and apply the costing
formula to all of them. Estimates are most likely to be accurate when you
understand the type of software that is being developed.
Software Cost Estimation Models
Lines of code (LOC) and function points (FP) are the measures from which productivity metrics can be computed.
LOC and FP data are used in two ways during software project estimation:
1. As estimation variables to “size” each element of the software and
2. As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques.
LOC BASED ESTIMATION
Steps involved in LOC-based estimation:
1. Start with a bounded statement of software scope.
2. Decompose the statement of scope into problem functions that can each be estimated individually.
3. Estimate the LOC (the estimation variable) for each function.
4. Compute a three-point or expected value for each estimate.
5. The expected value for the estimation variable (size) S is computed as a weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess) estimates:
S = (Sopt + 4Sm + Spess) / 6
6. Once the expected value for the estimation variable has been determined, historical LOC or FP productivity data are applied.
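The steps above can be expressed directly in code. The following Python sketch computes the expected size for one decomposed function and then applies assumed historical productivity and labor-rate figures; all input numbers are hypothetical.

    # Three-point (expected value) size estimate for one function, then
    # effort and cost from assumed historical productivity data.

    def expected_size(s_opt, s_m, s_pess):
        # S = (Sopt + 4*Sm + Spess) / 6
        return (s_opt + 4 * s_m + s_pess) / 6

    s = expected_size(s_opt=4600, s_m=6900, s_pess=8600)  # LOC (assumed)
    productivity = 620   # LOC per person-month (assumed historical average)
    labor_rate = 8000    # cost per person-month (assumed)

    effort = s / productivity
    print(f"Expected size: {s:.0f} LOC")
    print(f"Effort       : {effort:.1f} person-months")
    print(f"Cost         : {effort * labor_rate:.0f}")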
An Example of LOC-Based Estimation
Problem:
Develop a software package for a computer-aided design application for mechanical
components. The software is to execute on an engineering workstation and must
interface with various computer graphics peripherals including a mouse, digitizer,
high-resolution color display, and laser printer.
A preliminary statement of software scope can be developed:
The mechanical CAD software will accept two- and three-dimensional geometric data
from an engineer. The engineer will interact and control the CAD system through a
user interface that will exhibit characteristics of good human/machine interface
design. All geometric data and other supporting information will be maintained in a
CAD database.
Design analysis modules will be developed to produce the required output, which
will be displayed on a variety of graphics devices. The software will be designed to
control and interact with peripheral devices that include a mouse, digitizer, laser
printer, and plotter. This statement of scope is preliminary - it is not bounded.
The three categories of software projects compare as follows (the effort equations shown are the Basic COCOMO equations evaluated for a 400 KLOC product):

Attribute         Organic                       Semidetached                   Embedded
Team Experience   Highly experienced            Some experienced as well as    Mixed experience,
                                                inexperienced staff            includes experts
Environment       Flexible, fewer constraints   Somewhat flexible,             Highly rigorous,
                                                moderate constraints           strict requirements
Effort Equation   E = 2.4(400)^1.05             E = 3.0(400)^1.12              E = 3.6(400)^1.20
Phases of the COCOMO Model
1. Planning and requirements: This initial phase involves defining the scope, objectives, and constraints of the project. It includes developing a project plan that outlines the schedule, resources, and milestones.
2. System design: In this phase, the high-level architecture of the software system is
created. This includes defining the system’s overall structure, including major
components, their interactions, and the data flow between them.
3. Detailed design: This phase involves creating detailed specifications for each
component of the system. It breaks down the system design into detailed descriptions
of each module, including data structures, algorithms, and interfaces.
4. Module code and test: This involves writing the actual source code for each module
or component as defined in the detailed design. It includes coding the functionalities,
implementing algorithms, and developing interfaces.
5. Integration and test: This phase involves combining individual modules into a
complete system and ensuring that they work together as intended.
6. Constructive Cost Model: The Constructive Cost Model (COCOMO) is a widely used method for estimating the cost and effort required for software development projects.
Importance of the COCOMO Model
1. Cost Estimation: To help with resource planning and project budgeting, COCOMO
offers a methodical approach to software development cost estimation.
2. Resource Management: By taking team experience, project size, and complexity
into account, the model helps with efficient resource allocation.
3. Project Planning: COCOMO assists in developing practical project plans that
include attainable objectives, due dates, and benchmarks.
4. Risk management: Early in the development process, COCOMO assists in
identifying and mitigating potential hazards by including risk elements.
5. Support for Decisions: During project planning, the model provides a quantitative
foundation for choices about scope, priorities, and resource allocation.
6. Benchmarking: To compare and assess various software development projects to
industry standards, COCOMO offers a benchmark.
7. Resource Optimization: The model helps to maximize the use of resources, which
raises productivity and lowers costs.
Types of COCOMO Model
There are three types of COCOMO Model:
Basic COCOMO Model
Intermediate COCOMO Model
Detailed COCOMO Model
1. Basic COCOMO Model
The Basic COCOMO model is a straightforward way to estimate the effort needed for
a software development project. It uses a simple mathematical formula to predict how
many person-months of work are required based on the size of the project, measured
in thousands of lines of code (KLOC).
It estimates effort and time required for development using the following expression:
E = a × (KLOC)^b PM
Tdev = c × (E)^d months
Persons required = Effort / Tdev
Where,
E is the effort applied in person-months,
KLOC is the estimated size of the software product expressed in kilo lines of code,
Tdev is the development time in months, and
a, b, c, d are constants determined by the category of software project, given in the table below.
The above expressions are used for cost estimation in the Basic COCOMO model, and they are also used in the subsequent models. The constant values a, b, c, and d for the Basic Model for the different categories of software projects are:
Software Projects     a      b      c      d
Organic               2.4    1.05   2.5    0.38
Semi-detached         3.0    1.12   2.5    0.35
Embedded              3.6    1.20   2.5    0.32
2. Intermediate COCOMO Model
The Intermediate COCOMO model refines the basic estimate by adjusting it with cost drivers, which are rated against attributes of the project, including:
Personnel attributes
Analyst capability
Software engineering capability
Application experience
Virtual machine experience
Programming language experience
Project attributes
Use of software tools
Application of software engineering methods
Required development schedule
The constant values a, b, c, and d for the Intermediate Model for the different categories of software projects are:
Software Projects     a      b      c      d
Organic               3.2    1.05   2.5    0.38
Semi-detached         3.0    1.12   2.5    0.35
Embedded              2.8    1.20   2.5    0.32
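The Basic COCOMO expressions can be evaluated directly. The following is a minimal Python sketch using the Basic-model constants from the table above and the 400 KLOC example from the earlier comparison; the function name and output formatting are illustrative, not part of any standard.

COCOMO_BASIC = {
    # mode: (a, b, c, d) -- Basic COCOMO constants from the table above
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months,
    average number of persons) for the Basic COCOMO model."""
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b        # E = a * (KLOC)^b
    tdev = c * effort ** d        # Tdev = c * (E)^d
    persons = effort / tdev       # average staffing level
    return effort, tdev, persons

for mode in COCOMO_BASIC:
    e, t, p = basic_cocomo(400, mode)
    print(f"{mode:13s}: E = {e:7.1f} PM, Tdev = {t:5.1f} months, persons = {p:5.1f}")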
1.6 Valuation
In an abstract sense, the decision-making process, be it financial decision making or other, is about maximizing value. The alternative that maximizes total value should always be chosen. A financial basis for value-based comparison is comparing two or more cash flows.
Several bases of comparison are available, including
present worth
future worth
annual equivalent
internal rate of return
(discounted) payback period.
Based on the time-value of money, two or more cash flows are equivalent only when they
equal the same amount of money at a common point in time. Comparing cash flows only
makes sense when they are expressed in the same time frame.
Note that value can’t always be expressed in terms of money. For example, whether an item
is a brand name or not can significantly affect its perceived value. Relevant values that can’t
be expressed in terms of money still need to be expressed in similar terms so that they can be
evaluated objectively.
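To make comparison by present worth concrete, here is a minimal Python sketch; the two cash-flow streams and the 8 percent discount rate are illustrative assumptions, not values from the text.

def present_worth(cash_flows, rate):
    """Discount end-of-year cash flows (year 1, 2, ...) to time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

proposal_a = [4000, 4000, 4000]   # steady returns (assumed)
proposal_b = [1000, 3000, 8500]   # back-loaded returns (assumed)
rate = 0.08                        # assumed discount rate

print(f"PW(A) = {present_worth(proposal_a, rate):,.0f}")
print(f"PW(B) = {present_worth(proposal_b, rate):,.0f}")
# Whichever proposal has the higher present worth delivers more value
# once both cash flows are expressed at the same point in time (today).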
1.7 Inflation
Inflation describes long-term trends in prices. Inflation means that the same things cost more
than they did before. If the planning horizon of a business decision is longer than a few years,
or if the inflation rate is over a couple of percentage points annually, it can cause noticeable
changes in the value of a proposal. The present time value therefore needs to be adjusted for
inflation rates and also for exchange rate fluctuations.
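A one-line adjustment illustrates the effect; the 5 percent annual inflation rate and the ten-year horizon are assumed for illustration.

nominal = 100_000                      # amount in today's currency units (assumed)
inflation = 0.05                       # assumed annual inflation rate
years = 10
real_value = nominal / (1 + inflation) ** years
print(f"Purchasing power after {years} years: {real_value:,.0f}")   # about 61,391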
1.8 Depreciation
Depreciation involves spreading the cost of a tangible asset across a number of time periods;
it is used to determine how investments in capitalized assets are charged against income over
several years. Depreciation is an important part of determining after-tax cash flow, which is
critical for accurately addressing profit and taxes. If a software product is to be sold after the
development costs are incurred, those costs should be capitalized and depreciated over
subsequent time periods. The depreciation expense for each time period is the capitalized cost
of developing the software divided across the number of periods in which the software will
be sold. A software project proposal may be compared to other software and nonsoftware
proposals or to alternative investment options, so it is important to determine how those other
proposals would be depreciated and how profits would be estimated.
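The straight-line calculation described above (capitalized cost divided across the sales periods) can be sketched as follows; the 600,000 development cost and the five periods are illustrative assumptions.

capitalized_cost = 600_000   # assumed capitalized development cost
periods = 5                  # assumed number of periods the software is sold

annual_expense = capitalized_cost / periods
for year in range(1, periods + 1):
    book_value = capitalized_cost - annual_expense * year
    print(f"Year {year}: depreciation expense = {annual_expense:,.0f}, "
          f"remaining book value = {book_value:,.0f}")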
1.9 Taxation
Governments charge taxes in order to finance expenses that society needs but that no single
organization would invest in. Companies have to pay income taxes, which can take a
substantial portion of a corporation’s gross profit. A decision analysis that does not account
for taxation can lead to the wrong choice. A proposal with a high pretax profit won’t look
nearly as profitable in posttax terms. Not accounting for taxation can also lead to
unrealistically high expectations about how profitable a proposed product might be.
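A minimal illustration of the pretax/posttax gap, with an assumed 30 percent income tax rate and an illustrative profit figure:

pretax_profit = 250_000            # illustrative gross profit
tax_rate = 0.30                    # assumed income tax rate
posttax_profit = pretax_profit * (1 - tax_rate)
print(f"Pretax: {pretax_profit:,.0f}   Posttax: {posttax_profit:,.0f}")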
1.10 Time-Value of Money
One of the most fundamental concepts in finance, and therefore in business decisions, is that money has time-value: its value changes over time. A specific amount of money right now almost always has a different value than the same amount of money at some other time.
This concept has been around since the earliest recorded human history and is commonly
known as time-value. In order to compare proposals or portfolio elements, they should be
normalized in cost, value, and risk to the net present value. Currency exchange variations
over time need to be taken into account based on historical data. This is particularly important
in cross-border developments of all kinds.
1.11 Efficiency
Economic efficiency of a process, activity, or task is the ratio of resources actually consumed
to resources expected to be consumed or desired to be consumed in accomplishing the
process, activity, or task. Efficiency means “doing things right.” An efficient behavior, like
an effective behavior, delivers results—but keeps the necessary effort to a minimum. Factors
that may affect efficiency in software engineering include product complexity, quality
requirements, time pressure, process capability, team distribution, interrupts, feature churn,
tools, and programming language.
1.12 Effectiveness
Effectiveness is about having impact. It is the ratio of achieved objectives to defined objectives. Effectiveness means “doing the right things.” Effectiveness looks only at whether defined objectives are reached, not at how they are reached.
1.13 Productivity
Productivity is the ratio of output over input from an economic perspective. Output is the value delivered. Input covers all resources (e.g., effort) spent to generate the output. Productivity combines efficiency and effectiveness from a value-oriented perspective: maximizing productivity is about generating the highest value with the lowest resource consumption.
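The three ratios defined in 1.11 through 1.13 can be computed directly, as in the sketch below; all input numbers are illustrative assumptions.

expected_effort = 100.0    # person-days planned (assumed)
actual_effort = 125.0      # person-days actually spent (assumed)
objectives_met = 8         # objectives achieved (assumed)
objectives_defined = 10    # objectives defined (assumed)
value_delivered = 400.0    # value of the output, in any consistent unit (assumed)

# Efficiency per 1.11: resources actually consumed over resources expected
# (a value above 1.0 means more resources were consumed than expected).
efficiency = actual_effort / expected_effort
# Effectiveness per 1.12: achieved objectives over defined objectives.
effectiveness = objectives_met / objectives_defined
# Productivity per 1.13: output (value delivered) over input (resources spent).
productivity = value_delivered / actual_effort

print(f"efficiency = {efficiency:.2f}, effectiveness = {effectiveness:.2f}, "
      f"productivity = {productivity:.2f}")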
Techniques of software project control and reporting
The project schedule becomes a road map that defines the tasks and milestones to be tracked
and controlled as the project proceeds.
Tracking can be accomplished in a number of different ways:
• Conducting periodic project status meetings in which each team member reports progress
and problems.
• Evaluating the results of all reviews conducted throughout the software engineering process.
• Determining whether formal project milestones have been accomplished by the scheduled
date.
• Comparing the actual start date to the planned start date for each project task listed in the
resource table.
• Meeting informally with practitioners to obtain their subjective assessment of progress to
date and problems on the horizon.
• Using earned value analysis to assess progress quantitatively.
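To make the last point concrete, here is a minimal earned value sketch; the task data are illustrative, and BCWS/BCWP/ACWP follow conventional earned value terminology (budgeted cost of work scheduled/performed, actual cost of work performed).

# Each task: (planned effort in person-months, fraction complete, actual effort).
# Assumes all three tasks were scheduled to be complete by the status date.
tasks = [
    (10, 1.00, 12),
    (8,  0.50,  5),
    (6,  0.00,  0),
]

bcws = sum(planned for planned, _, _ in tasks)            # work scheduled
bcwp = sum(planned * done for planned, done, _ in tasks)  # earned value
acwp = sum(actual for *_, actual in tasks)                # actual cost

spi = bcwp / bcws   # schedule performance index (< 1.0 means behind schedule)
cpi = bcwp / acwp   # cost performance index (< 1.0 means over budget)
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")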
Control is employed by a software project manager to administer project resources,
cope with problems, and direct project staff. If things are going well (i.e., the project is on
schedule and within budget, reviews indicate that real progress is being made and milestones
are being reached), control is light. But when problems occur, you must exercise control to
reconcile them as quickly as possible. After a problem has been diagnosed, additional
resources may be focused on the problem area: staff may be redeployed or the project
schedule can be redefined.
When faced with severe deadline pressure, experienced project managers sometimes
use a project scheduling and control technique called time-boxing. The time-boxing strategy
recognizes that the complete product may not be deliverable by the predefined deadline.
Therefore, an incremental software paradigm is chosen, and a schedule is derived for each
incremental delivery.
The tasks associated with each increment are then time-boxed. This means that the
schedule for each task is adjusted by working backward from the delivery date for the
increment. A “box” is put around each task. When a task hits the boundary of its time box
(plus or minus 10 percent), work stops and the next task begins.
The initial reaction to the time-boxing approach is often negative: “If the work isn’t finished, how can we proceed?” The answer lies in the way work is accomplished. By the time the time-box boundary is encountered, it is likely that 90 percent of the task has been completed. The remaining 10 percent, although important, can (1) be delayed until the next increment or (2) be completed later if required. Rather than becoming “stuck” on a task, the
project proceeds toward the delivery date.
Even though we identified three broad ways to handle any risk, effective risk handling cannot
be achieved by mechanically following a set procedure, but requires a lot of ingenuity on the
part of the project manager. As an example, let us consider the options available to contain an
important type of risk that occurs in many software projects—that of schedule slippage.
An example of handling schedule slippage risk
Risks relating to schedule slippage arise primarily due to the intangible nature of
software. For a project such as building a house, the progress can easily be seen and assessed by the project manager. If the manager finds that the project is lagging behind, then corrective actions can be initiated. Because software development per se is invisible, the first step in managing the risk of schedule slippage is to increase the visibility of the software product.
Visibility of a software product can be increased by producing relevant documents
during the development process and getting these documents reviewed by an appropriate
team. Milestones should be placed at regular intervals to provide a manager with regular
indication of progress. Completion of a phase of the development process being followed
need not be the only milestones. Every phase can be broken down to reasonable-sized tasks
and milestones can be associated with these tasks.
A milestone is reached once the documentation produced as part of a software engineering task is successfully reviewed. Milestones need not be placed for every activity. An approximate rule of thumb is to set a milestone every 10 to 15 days. If milestones are placed too close to each other, the overhead of managing them becomes excessive.
Configuration management
Software systems are constantly changing during development and use. Bugs are
discovered and have to be fixed. System requirements change, and you have to implement
these changes in a new version of the system. New versions of hardware and system
platforms are released, and you have to adapt your systems to work with them. Competitors
introduce new features in their system that you have to match. As changes are made to the
software, a new version of a system is created. Most systems, therefore, can be thought of as
a set of versions, each of which may have to be maintained and managed.
Configuration management (CM) is concerned with the policies, processes, and tools
for managing changing software systems. You need to manage evolving systems because it is
easy to lose track of what changes and component versions have been incorporated into each
system version. Versions implement proposals for change, corrections of faults, and
adaptations for different hardware and operating systems. Several versions may be under
development and in use at the same time. If you don’t have effective configuration
management procedures in place, you may waste effort modifying the wrong version of a
system, delivering the wrong version of a system to customers, or forgetting where the
software source code for a particular version of the system or component is stored.
Configuration management is useful for individual projects as it is easy for one person
to forget what changes have been made. It is essential for team projects where several
developers are working at the same time on a software system.
Sometimes these developers are all working in the same place, but, increasingly,
development teams are distributed with members in different locations across the world. The
configuration management system provides team members with access to the system being
developed and manages the changes that they make to the code.
The configuration management of a software system product involves four closely related
activities:
1. Version control: This involves keeping track of the multiple versions of system components and ensuring that changes made to components by different developers do not interfere with each other.
2. System building: This is the process of assembling program components, data, and libraries, then compiling and linking these to create an executable system.
3. Change management: This involves keeping track of requests for changes to delivered software from customers and developers, working out the costs and impact of making these changes, and deciding if and when the changes should be implemented.
4. Release management: This involves preparing software for external release and keeping track of the system versions that have been released for customer use.
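As a small sketch of the change management activity (item 3 above), the following models a change request record with typical workflow states; the field names and states are illustrative assumptions, not a standard schema.

from dataclasses import dataclass
from enum import Enum

class CRState(Enum):
    SUBMITTED = "submitted"
    ASSESSED = "assessed"        # cost and impact have been worked out
    APPROVED = "approved"
    IMPLEMENTED = "implemented"
    REJECTED = "rejected"

@dataclass
class ChangeRequest:
    cr_id: int
    description: str
    requested_by: str
    estimated_cost_pm: float = 0.0   # estimated cost in person-months
    state: CRState = CRState.SUBMITTED

cr = ChangeRequest(42, "Adapt plotter driver to new firmware", "customer-support")
cr.estimated_cost_pm = 1.5           # cost worked out during assessment
cr.state = CRState.ASSESSED
print(cr)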
The development of a software product or custom software system takes place in three
distinct phases:
A development phase where the development team is responsible for managing the software
configuration and new functionality is being added to the software. The development team
decides on the changes to be made to the system.
A system testing phase where a version of the system is released internally for testing. This
may be the responsibility of a quality management team or an individual or group within the
development team. At this stage, no new functionality is added to the system.
The changes made at this stage are bug fixes, performance improvements, and security
vulnerability repairs. There may be some customer involvement as beta testers during this
phase.
A release phase where the software is released to customers for use. After the release has
been distributed, customers may submit bug reports and change requests. New versions of the
released system may be developed to repair bugs and vulnerabilities and to include new
features suggested by customers.
For large systems, there is never just one “working” version of a system; there are always
several versions of the system at different stages of development. Several teams may be
involved in the development of different system versions. Consider, for example, a situation where three versions of a system are being developed:
1. Version 1.5 of the system has been developed to fix bugs and improve the performance of the first release of the system. It is the basis of the second system release (R1.1).
2. Version 2.4 is being tested with a view to it becoming release 2.0 of the system. No new
features are being added at this stage.
3. Version 3 is a development system where new features are being added in response to
change requests from customers and the development team. This will eventually be released
as release 3.0.
What is a configuration management plan?
These plans are mainly used by project managers and stakeholders. The project manager is
responsible for creating the plan, and stakeholders often review it. Project stakeholders are
individuals who are interested or dependent on a project's outcome. These individuals can
include investors, employees, customers and other people. Often, stakeholders set the
deliverables of a project.
1. Creating the plan
The first step of the configuration management process is creating the plan. This type of plan explains your process for managing, recording and testing project configurations. It defines the project's deliverables and how you plan to achieve them, and it helps inform the project stakeholders about your configuration management strategies. These plans typically include:
Introduction: The introduction of the plan includes the purpose of the project, the
project scope and any other relevant contextual information.
Project overview: The project overview section gives a brief description of the entire
project.
Configuration management strategies: The largest section of the plan lists specific
configuration management strategies, including how to identify, track and test
configurations.
2. Identifying configuration requirements
It's also important to identify your project's configuration requirements. You can do this by meeting with stakeholders and reviewing your deliverables. Once you've identified your configurations, be sure to document them so you can measure changes and progress later.
3. Documenting changes
Another important step in the process is documenting changes in project scope and
configurations. You can then compare these changes to your baseline configurations that you
initially recorded. Be sure to update your configuration management plan when necessary.
4. Tracking configurations
You can also track your project's configurations through status accounting. The goal of this
stage of the process is to create a list of all of the previous and current configuration versions.
This can help you keep records of changes.
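A minimal sketch of status accounting as described above: a running record of previous and current configuration versions. The version numbers echo the earlier example; the record format is illustrative.

from datetime import date

version_history = []   # list of (version, date recorded, summary) tuples

def record_version(version, summary):
    version_history.append((version, date.today().isoformat(), summary))

record_version("1.0", "first customer release")
record_version("1.5", "bug fixes and performance improvements (basis of R1.1)")
record_version("2.4", "internal test version, candidate for release 2.0")

for version, when, summary in version_history:
    print(f"{version:>4}  {when}  {summary}")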
5. Auditing
Another crucial step is testing how well your project adheres to its configuration requirements. This is known as auditing. The purpose of this step is to ensure that the result of your project meets these requirements. When testing, you can spot areas for improvement. Many project managers also complete testing at the end of each individual project cycle so they can find and correct issues before project completion.