SPM Unit-1
Improving Software Economics: Reducing Software product size, improving software processes,
improving team effectiveness, improving automation, Achieving required quality, peer inspections.
The old way and the new: The principles of conventional software Engineering, principles of modern
software management, transitioning to an iterative process.
1. Software development is still highly unpredictable. Only about 10% of software projects are delivered
successfully within initial budget and schedule estimates.
2. Management discipline is more of a discriminator in success or failure than are technology advances.
All three analyses reached the same general conclusion: The success rate for software projects is very
low. The three analyses provide a good introduction to the magnitude of the software problem and the
current norms for conventional software management performance.
Most software engineering texts present the waterfall model as the source of the "conventional"
software process.
1.1 IN THEORY
There are two essential steps common to all computer program developments:
o Analysis
o Coding
The basic framework described in the waterfall model is risky and invites failure. The testing
phase that occurs at the end of the development cycle is the first event for which timing, storage,
input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design
changes are likely to be so disruptive that the software requirements upon which the design is
based are likely violated. Either the requirements must be modified or a substantial design change
is warranted.
1. Program design comes first:
Insert a preliminary program design phase between the software requirements generation phase
and the analysis phase. By this technique, the program designer assures that the software will not
fail because of storage, timing, and data flux (continuous change).
As analysis proceeds in the succeeding phase, the program designer must impose on the analyst
the storage, timing, and operational constraints in such a way that he senses the consequences.
If the total resources to be applied are insufficient or if the embryonic (in an early stage of
development) operational design is wrong, it will be recognized at this early stage and the iteration
with requirements and preliminary design can be redone before final design, coding, and test
commences.
Begin the design process with program designers, not analysts or programmers.
Design, define, and allocate the data processing modes even at the risk of being wrong. Allocate
processing functions, design the database, allocate execution time, define interfaces and
processing modes with the operating system, describe input and output processing, and define
preliminary operating procedures.
Write an overview document that is understandable, informative, and current so that every worker
on the project can gain an elemental understanding of the system.
2. Document the design:
The amount of documentation required on most software programs is quite a lot, certainly much
more than most programmers, analysts, or program designers are willing to do if left to their own
devices.
3. Do it twice:
If a computer program is being developed for the first time, arrange matters so that the version
finally delivered to the customer for operational deployment is actually the second version insofar as
critical design/operations are concerned. Note that this is simply the entire process done in
miniature, to a time scale that is relatively small with respect to the overall effort.
In the first version, the team must have a special broad competence where they can quickly sense
trouble spots in the design, model them, model alternatives, forget the straightforward aspects of
the design that aren't worth studying at this early point, and, finally, arrive at an error-free program.
4. Plan, control, and monitor testing:
Without question, the biggest user of project resources—manpower, computer time, and/or
management judgment—is the test phase.
This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest point in the
schedule, when backup alternatives are least available, if at all.
The previous three recommendations were all aimed at uncovering and solving problems before
entering the test phase. However, even after doing these things, there is still a test phase and there
are still important things to be done, including:
(1) employ a team of test specialists who were not responsible for the original design;
(2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of
two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive);
(3) test every logic path;
(4) employ the final checkout on the target computer.
5. Involve the customer:
It is important to involve the customer in a formal way so that he has committed himself at earlier
points before final delivery.
There are three points following requirements definition where the insight, judgment, and
commitment of the customer can bolster the development effort. These include:
Some software projects still practice the conventional software management approach. It is useful
to summarize the characteristics of the conventional process as it has typically been applied, which
is not necessarily as it was intended.
Projects destined for trouble frequently exhibit the following symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.
3. Assuming that requirements will remain constant throughout the software development life cycle.
These conditions rarely occur in the real world. Specification of requirements is a difficult and
important part of the software development process.
Another property of the conventional approach is that:
2. Built into the classic waterfall process was the fundamental assumption that the software itself
was decomposed into functions.
The conventional process tended to result in adversarial stakeholder relationships, in large part
because of the difficulties of requirements specification and the exchange of information solely
through paper documents that captured engineering information in ad hoc formats.
The following sequence of events was typical for most contractual software efforts:
3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a final
version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and
contractors.
The conventional process focused on producing various documents that attempted to describe the
software product, with insufficient focus on producing tangible increments of the products
themselves.
Contractors were driven to produce literally tons of paper to meet milestones and demonstrate
progress to stakeholders, rather than spend their energy on tasks that would reduce risk and
produce quality software.
Typically, presenters and the audience reviewed the simple things that they understood rather than
the complex and important issues. Most design reviews therefore resulted in low engineering
value and high cost in terms of the effort and schedule involved in their preparation and conduct.
They presented merely a facade of progress.
1. Finding and fixing a software problem after delivery costs 100 times more than finding and
fixing the problem in early design phases.
2. You can compress software development schedules up to 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source
lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, it
was 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software
programs. Software-system products (i.e., system of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.
Most software cost models can be abstracted into a function of five basic parameters:
1. size
2. process
3. personnel
4. environment and
5. required quality.
1. The size of the end product (in human-generated components), which is typically quantified in
terms of the number of source instructions or the number of function points required to develop the
required functionality.
2. The process used to produce the end product, in particular the ability of the process to avoid
non-value-adding activities (rework, bureaucratic delays, communications overhead).
3. The capabilities of software engineering personnel, and particularly their experience with the
computer science issues and the applications domain issues of the project.
4. The environment, which is made up of the tools and techniques available to support efficient
software development and to automate the process.
5. The required quality of the product, including its features, performance, reliability, and
adaptability.
The relationships among these parameters and the estimated cost can be written as follows:
Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost models)
is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of
scale of software development is a result of the process exponent being greater than 1.0.
Contrary to most manufacturing processes, the more software you build, the more expensive it is
per unit item.
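The five-parameter cost relationship and its diseconomy of scale can be sketched numerically. A minimal sketch; the coefficient and exponent values below are illustrative assumptions, not calibrated figures:

```python
# Sketch of the basic cost-model shape: effort is multiplicative in the
# personnel, environment, and quality factors, and exponential in size.
# All coefficient values are illustrative assumptions.

def effort(size_ksloc, process=1.2, personnel=1.0, environment=1.0, quality=1.0):
    """Effort = (Personnel)(Environment)(Quality)(Size^Process)."""
    return personnel * environment * quality * size_ksloc ** process

# Diseconomy of scale: with a process exponent > 1.0, effort per unit
# of software grows as the product gets bigger.
for size in (10, 100, 1000):
    e = effort(size)
    print(f"{size:>5} KSLOC -> effort {e:9.1f} ({e / size:5.2f} per KSLOC)")
```

With the exponent at exactly 1.0 the per-unit effort would be constant; any value above 1.0 makes each additional unit of software more expensive than the last.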
Figure 2-1 shows three generations of basic technology advancement in tools, components, and
processes. The required levels of quality and personnel are assumed to be constant. The ordinate
of the graph refers to software unit costs realized by an organization.
One critical problem in software cost estimation is a lack of well-documented case studies of
projects that used an iterative development approach.
The software industry has inconsistently defined metrics or atomic units of measure. The data
from actual projects are highly suspect in terms of consistency and comparability.
It is hard enough to collect a homogeneous set of project data within one organization; it is
extremely difficult to homogenize data across different organizations with different processes,
languages, domains, and so on.
There have been many debates among developers and vendors of software cost estimation
models and tools. Three topics of these debates are of particular interest here:
o COCOMO
o CHECKPOINT
o ESTIMACS
o Knowledge Plan
o Price-S
o Pro QMS
o SEER
o SLIM
o SOFTCOST
o SPQR/20
Among these, COCOMO is one of the most open and well-documented cost estimation models.
The general accuracy of conventional cost models
(such as COCOMO) has been described as "within 20% of actuals, 70% of the time." Most real-
world use of cost models is bottom-up (substantiating a target cost) rather than top-down
(estimating the "should" cost).
Figure 2-3 illustrates the predominant practice: The software project manager defines the target
cost of the software, and then manipulates the parameters and sizing until the target cost can be
justified.
The rationale for the target cost may be to:
o Win a proposal
The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to analyze
the cost risks and understand the sensitivities and trade-offs objectively. It forces the software
project manager to examine the risks associated with achieving the target costs and to discuss
this information with other stakeholders.
o It is conceived and supported by the project manager, architecture team, development team,
and test team accountable for performing the work.
o It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.
o It is defined in enough detail so that its key risk areas are understood and the probability of
success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost
model with an experience base that reflects multiple similar projects done by the same team
with the same mature processes and tools.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
The old process was geared toward ensuring that the user interface was completely analysed
and designed, because the project could afford only one construction cycle.
The new process was geared toward taking the user interface through a few realistic versions,
incorporating user feedback all along the way and achieving a stable understanding of the
requirements and design issues in balance with one another.
1. REDUCING SOFTWARE PRODUCT SIZE:
The most significant way to improve affordability and return on investment (ROI) is usually to
produce a product that achieves the design goals with the minimum amount of human-generated
source material.
Component-based development is helpful for reducing the "source" language size to achieve a
software solution.
Reuse, object-oriented technology, automatic code production, and higher order programming
languages are all focused on achieving a given system with fewer lines of human-specified source
directives.
Size reduction is the primary motivation behind improvements in higher order languages (such
as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling
tools, GUI builders), reuse of commercial components (operating systems, windowing
environments, DBMS, middleware, networks), and OOPS technologies (Unified Modeling
Language, visual modeling tools, architecture frameworks).
The reduction is defined in terms of human-generated source material. In general, size-reducing
technologies reduce the number of human-generated source lines, so mature and reliable
size-reduction technologies are extremely important for producing economic benefits.
Immature size reduction technologies may reduce the development size but require so much
more investment in achieving the necessary levels of quality and performance that they have a
negative impact on overall project performance.
Language:
Two popular measures are used to estimate how much human-generated code can be reduced
when a software project is developed with higher level languages and tools.
The first is function points, whose primitives (such as external outputs and external inquiries)
indicate the relative program size required to implement the required functionality.
The second is SLOC, which is used to estimate the reduced size of the software project after a
solution is formulated and an implementation language is known.
The selection of the language for the software project development depends on what kind of data
the application (project) will process, because each language has a domain of usage.
For example, Visual Basic is very expressive and powerful in building simple interactive
applications, but it is not suitable for real-time, embedded programming.
o Encapsulation
o Concurrency control
LANGUAGES:
The reduction of code may increase:
o Understandability
o Changeability
o Reliability
Universal function points can be used to indicate the relative program sizes required to implement a
given functionality.
EXAMPLE:
To achieve a given application with a fixed number of function points, one of the following program
sizes would be required: roughly 1,000,000 lines of assembly language, 400,000 lines of C,
220,000 lines of Ada 83, or 175,000 lines of Ada 95 or C++.
With the use of commercial components and automatic code generators, the size of human-
generated source code can be further reduced, which in turn reduces the size of the team and the
time needed for development.
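This size arithmetic can be sketched with a simple expansion-rate table. The SLOC-per-function-point figures below are illustrative assumptions in the spirit of published language-level tables, not authoritative values:

```python
# Hypothetical SLOC-per-function-point expansion rates (illustrative only).
# Higher level languages need fewer human-generated lines per function point.
EXPANSION_RATES = {
    "assembly": 320,
    "C": 128,
    "Ada 83": 71,
    "C++": 56,
    "Visual Basic": 32,
}

def estimated_sloc(function_points, language):
    """Estimate human-generated source size for a given functionality."""
    return function_points * EXPANSION_RATES[language]

for lang in EXPANSION_RATES:
    print(f"{lang:>12}: {estimated_sloc(1000, lang):>8,} SLOC for 1,000 function points")
```

The same functionality (fixed function points) shrinks dramatically in human-generated SLOC as the language level rises, which is exactly the size-reduction argument made above.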
Example with integration:
o 75,000 lines of Ada or C++ with integration of several commercial components
The fundamental impact of object-oriented technology is in reducing the overall size of what
needs to be developed.
Diagrammatic Models:
When engineers describe a software system design pictorially, they call these pictures diagrams
or diagrammatic models, and the notation used to produce them a modeling language.
1. An object-oriented model of the problem and its solution encourages a common vocabulary
between the end users of a system and its developers, thus creating a shared understanding of
the problem being solved.
2. The use of continuous integration creates opportunities to recognize risk early and make
incremental corrections without destabilizing the entire development effort.
2. The existence of a culture that is centered on results, encourages communication, and yet is not
afraid to fail.
REUSE:
Reusing existing components and building reusable components have been natural software
engineering activities.
With reuse, the goal is to minimize development costs while achieving all the other required
attributes of performance, feature set, and quality.
Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:
o They take ownership of improving product quality, adding new features, and transitioning to
new technologies.
Reuse is an important discipline that has an impact on the efficiency of all workflows and the
quality of most artifacts.
Process is a broad term that covers the various activities involved in software development.
In a software-oriented organization, there are several processes and sub-processes running.
Generally, the processes in organizations are divided into three categories:
1. Metaprocess: An organization's policies, procedures, and practices for pursuing a
software-intensive line of business. The focus is on organizational economies, long-term
strategies, and software return on investment.
2. Macroprocess: A project's policies, procedures, and practices for producing a complete
software product within certain cost, schedule, and quality constraints. The focus is on
creating an adequate instance of the metaprocess for a specific set of constraints.
3. Microprocess: A project team's policies, procedures, and practices for achieving an artifact
of the software process. The focus of the microprocess is on achieving an intermediate
product baseline with adequate quality and functionality, as economically and rapidly as
practical.
The three levels of the processes (Metaprocess, Macroprocess, Microprocess) are overlapped at
some point in the project development process, even though they have different objectives,
audiences, metrics, concerns, and time scales.
Here, we consider the macroprocesses, which are project-level processes to reduce the cost of
project development.
To make the project successful, there should be an integration of all processes, which may be
implemented through sequential and parallel steps.
Types of Project Development Process Activities
1. Productive activities: Those that result in tangible progress toward the end product (for
example, analysis, design, coding, and integration).
2. Overhead activities: Those that are necessary to manage the work but do not directly add
value to the end product (for example, plan preparation, progress monitoring, and rework).
The goal of processes is to maximize resource allocation to productive activities and minimize
the impact of overhead activities on resources such as personnel, computers, and schedule.
The quality of software process strongly affects the required effort and therefore the schedule
for producing the software product.
In practice, the difference between a good and bad process can affect overall cost estimates by
50% to 100%.
Therefore, reducing inefficiencies will improve the overall schedule.
Primary focus of process improvement: Achieve an adequate solution in the minimum number
of iterations and eliminate downstream scrap and rework.
In perfect software engineering, the goal is to manage activities by avoiding scrap and rework
to achieve successful software process improvement.
3. IMPROVING TEAM EFFECTIVENESS:
Teamwork is much more important than the sum of the individuals. A successful team needs:
1. Balance – The team must not be skewed toward one type of role.
2. Coverage – Strong individuals must fill all key positions (planners, designers, coders, testers,
trainers, etc.).
Software project managers need many leadership qualities in order to enhance team
effectiveness.
Hiring skills: Few decisions are as important as hiring decisions. Placing the right person in the
right job seems obvious but is surprisingly hard to achieve.
Customer-interface skill: Avoiding adversarial relationships among stakeholders is a prerequisite
for success.
Decision-making skill: The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making
skill seems obvious despite its intangible definition.
Team-building skill: Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and
consolidate diverse opinions into a team direction.
Selling skill: Successful project managers must sell all stakeholders (including themselves) on
decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face
of resistance, and sell achievements against objectives. In practice, selling requires continuous
negotiation, compromise, and empathy.
4. IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS:
The tools and environment have a linear effect on the productivity of the process.
Planning tools, requirements management tools, visual modeling tools, compilers, editors,
debuggers, quality assurance analysis tools, test tools, and user interfaces provide crucial
automation support for evolving the software engineering artifacts.
Configuration management environments provide the foundation for executing and instrumenting
the process.
At first order, the isolated impact of tools and automation generally allows improvements of 20% to
40% in effort.
Tools and environments must be viewed as the primary delivery vehicle for process automation and
improvement, so their impact can be much higher.
Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team.
Round-trip engineering describes the key capability of environments that support iterative
development. As we have moved into maintaining different information repositories for the
engineering artifacts, we need automation support to ensure efficient and error-free transition of
data from one artifact to another.
Forward engineering is the automation of one engineering artifact from another, more abstract
representation. (Compilers and linkers have provided automated transition of source code into
executable code).
Reverse engineering is the generation or modification of a more abstract representation from an
existing artifact (for example, creating a visual design model from a source code representation).
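The forward direction can be illustrated with a toy generator. This is a deliberately simplified sketch (the model format and class name are invented for illustration), showing a concrete artifact being produced from a more abstract representation:

```python
# Toy forward engineering: generate a Python class definition (concrete
# artifact) from an abstract model of its fields (abstract representation).
MODEL = {"name": "Account", "fields": ["owner", "balance"]}

def generate_class(model):
    """Produce source code for a class with one attribute per model field."""
    args = ", ".join(model["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in model["fields"])
    return (f"class {model['name']}:\n"
            f"    def __init__(self, {args}):\n"
            f"{body}\n")

source = generate_class(MODEL)
print(source)

# Reverse engineering would go the other way: recovering the abstract
# field list from the generated source text.
```

Round-trip engineering keeps both directions synchronized, so a change to either the model or the generated source can be propagated to the other without loss.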
Software best practices are derived from the development process and technologies. Key practices
that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life
cycle on a balance between requirements evolution, design evolution, and plan evolution.
Using metrics and indicators to measure the progress and quality of an architecture as it evolves
from a high-level prototype into a fully compliant product.
Providing integrated life-cycle environments that support early and continuous configuration control,
change management, rigorous design methods, document automation, and regression test
automation.
Using visual modeling and higher-level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation.
Early and continuous insight into performance issues through demonstration-based evaluations.
Events in Performance Assessment:
Project Inception:
The proposed design was asserted to be low risk with adequate performance margin.
Initial Design Review:
Optimistic assessments of adequate design margin were based mostly on paper analysis or rough
simulation of the critical threads.
Mid-Life-Cycle Design Review:
The assessments started whittling away at the margin, as early benchmarks and initial tests began
exposing the optimism inherent in earlier estimates.
Integration and Test:
Serious performance problems were uncovered, necessitating fundamental changes in the
architecture. The underlying infrastructure was usually the scapegoat, but the real culprit was
immature use of the infrastructure, immature architectural solutions, or poorly understood early
design trade-offs.
Peer Inspections – A Pragmatic View:
Peer reviews are valuable, but they are rarely significant contributors to quality compared with the
following primary quality mechanisms and indicators, which should be emphasized in the
management process:
Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the engineering
artifacts.
Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in
the context of relevant use cases.
Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control.
Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements
compliance.
Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals.
Inspections are also a good vehicle for holding authors accountable for quality products.
All authors of software and documentation should have their products scrutinized as a natural by-
product of the process.
Therefore, the coverage of inspections should be across all authors rather than across all
components.
Top 10 principles of modern software management are (The first five, which are the main themes of my
definition of an iterative process, are summarized in Figure 4-1):
Table 4-1: Mapping Top 10 Risks of the Conventional Process to the Key Attributes and Principles of a
Modern Process
TRANSITIONING TO AN ITERATIVE PROCESS:
Modern software development processes have moved away from the conventional waterfall model,
in which each stage of the development process is dependent on completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify.
As one benchmark of the expected economic impact of process improvement, consider the process
exponent parameters of the COCOMO II model. (Appendix B provides more detail on the COCOMO
model.) This exponent can range from 1.01 (virtually no diseconomy of scale) to 1.26
(significant diseconomy of scale).
The parameters that govern the value of the process exponent are:
1. Application precedentedness
2. Process flexibility
3. Architecture risk resolution
4. Team cohesion
5. Software process maturity
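These five parameters combine into the process exponent. A minimal sketch, assuming the early COCOMO II form in which the exponent is 1.01 plus 0.01 times the sum of the five scale-factor ratings, each rated from 0 (best) to 5 (worst); the ratings shown are illustrative:

```python
# Early COCOMO II process exponent: E = 1.01 + 0.01 * sum(scale factor ratings),
# where each of the five factors is rated from 0 (best) to 5 (worst),
# giving the 1.01 to 1.26 range cited above.
SCALE_FACTORS = ["precedentedness", "flexibility",
                 "architecture_risk_resolution", "team_cohesion",
                 "process_maturity"]

def process_exponent(ratings):
    assert set(ratings) == set(SCALE_FACTORS)
    return 1.01 + 0.01 * sum(ratings.values())

def nominal_effort(size_ksloc, ratings, a=2.5):
    """Nominal effort = A * Size^E (effort multipliers omitted for brevity)."""
    return a * size_ksloc ** process_exponent(ratings)

best = dict.fromkeys(SCALE_FACTORS, 0)   # precedented, cohesive, mature
worst = dict.fromkeys(SCALE_FACTORS, 5)  # unprecedented, fractious, immature
print(round(process_exponent(best), 2))   # virtually no diseconomy of scale
print(round(process_exponent(worst), 2))  # significant diseconomy of scale
print(round(nominal_effort(100, worst) / nominal_effort(100, best), 2))
```

Improving any of the five factors lowers the exponent, which is exactly why the principles below target precedents, flexibility, architecture risk, cohesion, and maturity.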
The following paragraphs map the process exponent parameters of COCOMO II to the top 10
principles of a modern process:
1. Application precedentedness:
Domain experience is a critical factor in understanding how to plan and execute a software
development project. For unprecedented systems, one of the key goals is to confront risks and
establish early precedents, even if they are incomplete or experimental.
This is one of the primary reasons that the software industry has moved to an iterative life-cycle
process. Early iterations in the life cycle establish precedents from which the product, the process,
and the plans can be elaborated in evolving levels of detail.
2. Process flexibility:
Development of modern software is characterized by such a broad solution space and so many
interrelated concerns that there is a paramount need for continuous incorporation of changes.
These changes may be inherent in the problem understanding, the solution space, or the plans.
Project artifacts must be supported by efficient change management commensurate with project
needs. A configurable process that allows a common framework to be adapted across a range of
projects is necessary to achieve a software return on investment.
4. Team cohesion:
Successful teams are cohesive, and cohesive teams are successful. Successful teams and
cohesive teams share common objectives and priorities. Advances in technology (such as
programming languages, UML, and visual modeling) have enabled more rigorous and
understandable notations for communicating software engineering information, particularly in the
requirements and design artifacts that previously were ad hoc and based completely on paper
exchange.
These model-based formats have also enabled the round-trip engineering support needed to
establish change freedom sufficient for evolving design representations.