UNIT-I Notes
1.1.1.1 In Theory:
2. In order to manage and control all of the intellectual freedom associated with software
development, one must introduce several other "overhead" steps, including system
requirements definition, software requirements definition, program design, and testing.
These steps supplement the analysis and coding steps. Below Figure illustrates the resulting
project profile and the basic steps in developing a large-scale program.
3. The basic framework described in the waterfall model is risky and invites failure. The
testing phase that occurs at the end of the development cycle is the first event for which
timing, storage, input/output transfers, etc., are experienced as distinguished from
analyzed. The resulting design changes are likely to be so disruptive that the software
requirements upon which the design is based are likely to be violated. Either the requirements
must be modified or a substantial design change is warranted.
1. Program design comes first. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase. By this technique, the program designer
assures that the software will not fail because of storage, timing, and data flux (continuous
change). As analysis proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he senses the
consequences. If the total resources to be applied are insufficient or if the embryonic (in an early
stage of development) operational design is wrong, it will be recognized at this early stage and
the iteration with requirements and preliminary design can be redone before final design, coding,
and test commences. How is this program design procedure implemented?
2. Document the design. The amount of documentation required on most software programs is
quite a lot, certainly much more than most programmers, analysts, or program designers are
willing to do if left to their own devices. Why do we need so much documentation? (1) Each
designer must communicate with interfacing designers, managers, and possibly customers. (2)
During early phases, the documentation is the design. (3) The real monetary value of
documentation is to support later modifications by a separate test team, a separate maintenance
team, and operations personnel who are not software literate.
3. Do it twice. If a computer program is being developed for the first time, arrange matters so
that the version finally delivered to the customer for operational deployment is actually the
second version insofar as critical design/operations are concerned. Note that this is simply the
entire process done in miniature, to a time scale that is relatively small with respect to the overall
effort. In the first version, the team must have a special broad competence where they can
quickly sense trouble spots in the design, model them, model alternatives, forget the
straightforward aspects of the design that aren't worth studying at this early point, and, finally,
arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources
(manpower, computer time, and/or management judgment) is the test phase. This is the phase of
greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule, when
backup alternatives are least available, if at all. The previous three recommendations were all
aimed at uncovering and solving problems before entering the test phase. However, even after
doing these things, there is still a test phase and there are still important things to be done,
including: (1) employ a team of test specialists who were not responsible for the original design;
(2) employ visual inspections to spot the obvious errors like dropped minus signs, missing
factors of two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it
is too expensive); (3) test every logic path (see the sketch after this list); (4) employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following
requirements definition where the insight, judgment, and commitment of the customer can
bolster the development effort. These include a "preliminary software review" following the
preliminary program design step, a sequence of "critical software design reviews" during
program design, and a "final software acceptance review".
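The fourth recommendation's advice to test every logic path can be made concrete with a small sketch. The function check_threshold and its tests below are hypothetical, not part of the original text; they show how two independent decisions yield four logic paths, each of which needs at least one test case.

```python
# Hypothetical example: a function with two independent decisions has
# 2 x 2 = 4 logic paths, so "test every logic path" means four test cases.

def check_threshold(value, limit, clamp):
    """Return the magnitude of value, bounded by limit when clamping is enabled."""
    if value < 0:                   # decision 1
        value = -value
    if clamp and value > limit:     # decision 2
        value = limit
    return value

# One test per logic path (negative/positive input x clamped/not clamped).
assert check_threshold(-5, 10, clamp=True) == 5     # path: negate, no clamp
assert check_threshold(-50, 10, clamp=True) == 10   # path: negate, clamp
assert check_threshold(7, 10, clamp=False) == 7     # path: no negate, no clamp
assert check_threshold(50, 10, clamp=True) == 10    # path: no negate, clamp
```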
1.1.1.2 In Practice:
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically been
applied, which is not necessarily as it was intended. Projects destined for trouble frequently
exhibit the following symptoms:
Protracted integration and late design breakage.
Late risk resolution.
Requirements-driven functional decomposition.
Adversarial (conflict or opposition) stakeholder relationships.
Focus on documents and review meetings.
Protracted Integration and Late Design Breakage: For a typical development project that used a
waterfall model management process, Figure 1-2 illustrates development progress versus time.
Progress is defined as percent coded, that is, demonstrable in its target form.
The following sequence was common: Early success via paper designs and thorough (often
too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and
interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoe-horning of non-optimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
Late Risk Resolution: A serious issue associated with the waterfall life cycle was the lack of
early risk resolution. Figure1.3 illustrates a typical risk profile for conventional waterfall
model projects. It includes four distinct periods of risk exposure, where risk is defined as the
probability of missing a cost, schedule, feature, or quality goal. Early in the life cycle, as the
requirements were being specified, the actual risk exposure was highly unpredictable.
Requirements-Driven Functional Decomposition: This approach depends on specifying
requirements completely and unambiguously before other development activities begin. It
naively treats all requirements as equally important, and depends on those requirements
remaining constant over the software development life cycle. These conditions rarely occur in
the real world. Specification of requirements is a difficult and important part of the software
development process.
Another property of the conventional approach is that the requirements were typically
specified in a functional manner. Built into the classic waterfall process was the fundamental
assumption that the software itself was decomposed into functions; requirements were then
allocated to the resulting components. This decomposition was often very different from a
decomposition based on object-oriented design and the use of existing components. Figure 1-4
illustrates the result of requirements-driven approaches: a software structure that is
organized around the requirements specification structure.
Figure 1-4: Suboptimal software component organization resulting from a requirements driven
approach
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an
intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to 30
days) a final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and
contractors.
The following metrics, drawn from Boehm's well-known industrial software metrics list, summarize
the performance of conventional software management:
1. Finding and fixing a software problem after delivery costs 100 times more than finding
and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of
source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in
1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products (i.e., system of systems) cost 9 times as
much.
Most software cost models can be abstracted into a function of five basic parameters: size,
process, personnel, environment, and required quality.
1. The size of the end product (in human-generated components), which is typically
quantified in terms of the number of source instructions or the number of function points
required to develop the required functionality
2. The process used to produce the end product, in particular the ability of the process to
avoid non-value- adding activities (rework, bureaucratic delays, communications
overhead)
3. The capabilities of software engineering personnel, and particularly their experience with
the computer science issues and the applications domain issues of the project.
4. The environment, which is made up of the tools and techniques available to support
efficient software development and to automate the process.
5. The required quality of the product, including its features, performance, reliability, and
adaptability.
The relationships among these parameters and the estimated cost can be written as follows:
Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost
models) is that the relationship between effort and size exhibits a diseconomy of scale. The
diseconomy of scale of software development is a result of the process exponent being greater
than 1.0.
Contrary to most manufacturing processes, the more software you build, the more
expensive it is per unit item.
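A minimal sketch of this cost relationship is shown below. The multiplier values, the exponent of 1.10, and the helper estimate_effort are assumptions chosen purely to illustrate the diseconomy of scale that follows from a process exponent greater than 1.0; they are not constants from any calibrated cost model.

```python
# Illustrative only: a toy version of Effort = (Personnel)(Environment)(Quality)(Size^Process).
# All multipliers and the exponent are assumed values, not calibrated model constants.

def estimate_effort(size_ksloc, personnel=1.0, environment=1.0,
                    quality=1.0, process_exponent=1.10):
    """Return effort (arbitrary units) for a given size in KSLOC."""
    return personnel * environment * quality * (size_ksloc ** process_exponent)

# Diseconomy of scale: doubling size more than doubles effort when the exponent > 1.0.
small = estimate_effort(100)    # 100 KSLOC
large = estimate_effort(200)    # 200 KSLOC
print(f"100 KSLOC -> {small:.0f} units, 200 KSLOC -> {large:.0f} units")
print(f"ratio = {large / small:.2f}  (> 2.0, so unit cost grows with size)")
```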
Figure 1-5 shows three generations of basic technology advancement in tools,
components, and processes. The required levels of quality and personnel are assumed to be
constant. The ordinate of the graph refers to software unit costs (pick your favorite: per SLOC,
per function point, per component) realized by an organization. The three generations of
software development are defined as follows:
1. Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
processes, and virtually all custom components built in primitive languages. Project
performance was highly predictable in that cost, schedule, and quality objectives were almost
always underachieved.
2. Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable
processes and off-the-shelf tools, and mostly (>70%) custom components built in higher-level
languages. Some of the components (<30%) were available as commercial products,
including the operating system, database management system, networking, and graphical
user interface.
3. Modern practices: 2000 and later, software production. This book's philosophy is rooted in
the use of managed and measured processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Perhaps as few as 30% of the components need to be
custom built.
Technologies for environment automation, size reduction, and process improvement are
not independent of one another. In each new era, the key is complementary growth in all
technologies. For example, the process advances could not be used successfully without new
component technologies and increased tool automation.
Figure 1-5: Three generations of software economics leading to the target objective
Organizations are achieving better economies of scale in successive technology eras, with very
large projects (systems of systems), long-lived products, and lines of business comprising
multiple similar projects. Figure 1-6 provides an overview of how a return on investment (ROI)
profile can be achieved in subsequent efforts across life cycles of various domains.
There have been many debates among developers and vendors of software cost estimation
models and tools. Three topics of these debates are of particular interest here:
There are several popular cost estimation models (such as COCOMO, CHECKPOINT,
ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20);
COCOMO is also one of the most open and well-documented cost estimation models. The
general accuracy of conventional cost models (such as COCOMO) has been described as "within
20% of actuals, 70% of the time."
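As a hedged illustration of how such a model is applied, the sketch below uses the basic COCOMO form Effort = a * (KLOC)^b with the commonly published basic-COCOMO constants for the three project modes. Treat the coefficients as indicative defaults, not values endorsed by this text; serious use of COCOMO requires calibration and its effort multipliers.

```python
# Basic COCOMO sketch: Effort (staff-months) = a * (KLOC ** b).
# The (a, b) pairs below are the commonly published basic-COCOMO constants;
# any real estimate would need calibration to local project data.

COCOMO_MODES = {
    "organic":       (2.4, 1.05),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12),  # mixed experience, moderate constraints
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    a, b = COCOMO_MODES[mode]
    return a * (kloc ** b)

for mode in COCOMO_MODES:
    print(f"{mode:13s}: {basic_cocomo_effort(50, mode):6.1f} staff-months for 50 KLOC")
```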
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating the "should" cost). Figure 1-7 illustrates the predominant practice: the
software project manager defines the target cost of the software, and then manipulates the
parameters and sizing until the target cost can be justified. The rationale for the target cost may be
to win a proposal, to solicit customer funding, to attain internal corporate funding, or to achieve
some other goal.
The process described in Figure 1-7 is not all bad. In fact, it is absolutely necessary to analyze the
cost risks and understand the sensitivities and trade-offs objectively. It forces the software
project manager to examine the risks associated with achieving the target costs and to discuss
this information with other stakeholders.
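This bottom-up practice can be sketched as follows, under stated assumptions: using a toy effort relationship like the one earlier, the hypothetical function justify_target lowers the personnel multiplier until a predetermined target effort is "justified". The numbers are invented; the point is only that the inputs are fitted to the target rather than the estimate being derived independently.

```python
# Hypothetical sketch of bottom-up cost substantiation: start from a target
# effort and adjust the model inputs until the estimate matches the target.

def toy_effort(size_ksloc, personnel=1.0, process_exponent=1.10):
    return personnel * (size_ksloc ** process_exponent)

def justify_target(target_effort, size_ksloc, step=0.01):
    """Tune the personnel multiplier downward until the estimate meets the target."""
    personnel = 1.0
    while toy_effort(size_ksloc, personnel) > target_effort and personnel > step:
        personnel -= step   # assume a more capable team than the evidence supports
    return personnel

# A manager with a 120 staff-month target for 150 KSLOC "justifies" it by
# adopting an optimistic personnel rating rather than re-estimating the work.
print(round(justify_target(target_effort=120, size_ksloc=150), 2))
```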
A good estimate is defined in enough detail so that its key risk areas are understood and the
probability of success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost
model with an experience base that reflects multiple similar projects done by the same team with
the same mature processes and tools.
1.3.1.3 Reuse:
Reusing existing components and building reusable components have been natural software
engineering activities since the earliest improvements in programming languages. The goal of
reuse is to minimize development costs while achieving all the other required attributes of
performance, feature set, and quality. Reuse should be treated as a mundane part of achieving a
return on investment.
Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:
They have an economic motivation for continued support.
They take ownership of improving product quality, adding new features, and
transitioning to new technologies.
They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 1-8 examines the economic
trade-offs. The steep initial curve illustrates the economic obstacle to developing reusable
components.
Reuse is an important discipline that has an impact on the efficiency of all workflows and the
quality of most artifacts.
Figure 1-8: Cost and Schedule investments necessary to achieve reusable components
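A simple break-even calculation, using assumed cost factors, makes the trade-off in Figure 1-8 concrete. The multiplier of 3.0 and the 20% integration cost below are hypothetical, not figures from the text; they only show that the up-front premium of a reusable component pays off once enough projects reuse it.

```python
# Hypothetical break-even sketch for reuse economics (all factors assumed).

def breakeven_reuses(one_off_cost, reusable_multiplier=3.0, integration_fraction=0.2):
    """Smallest number of reusing projects at which the reusable component is cheaper."""
    reusable_cost = one_off_cost * reusable_multiplier            # build it once, generally
    per_use_saving = one_off_cost * (1.0 - integration_fraction)  # each project skips a one-off build but pays integration
    uses = 1
    while uses * per_use_saving < reusable_cost:
        uses += 1
    return uses

# With these assumed factors, the reusable component pays off from the 4th use onward.
print(breakeven_reuses(one_off_cost=100))
```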
In a perfect software engineering world with an immaculate problem description, an obvious solution
space, a development team of experienced geniuses, adequate resources, and stakeholders with
common goals, we could execute a software development process in one iteration with almost no
scrap and rework. Because we work in an imperfect world, however, we need to manage engineering
activities so that scrap and rework profiles do not have an impact on the win conditions of any
stakeholder. This should be the underlying premise for most process improvements.
Software project managers need many leadership qualities in order to enhance team effectiveness.
The following are some crucial attributes of successful software project managers that deserve much
more attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in
the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a
prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to provide a
clear definition of this attribute. We all know a good leader when we run into one, and
decision-making skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress,
exploit eccentric prima donnas, transition average people into top performers, eliminate
misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves)
on decisions and priorities, sell candidates on job positions, sell changes to the status quo in
the face of resistance, and sell achievements against objectives. In practice, selling requires
continuous negotiation, compromise, and empathy.
1.3.4 Improving Automation:
The tools and environment used in the software process generally have a linear effect on the
productivity of the process. Planning tools, requirements management tools, visual modeling tools,
compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interfaces provide
crucial automation support for evolving the software engineering artifacts.
Above all, configuration management environments provide the foundation for executing and
instrumenting the process. To first order, the isolated impact of tools and automation generally
allows improvements of 20% to 40% in effort.
However, tools and environments must be viewed as the primary delivery vehicle for process
automation and improvement, so their impact can be much higher.
Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team.
Round-trip engineering describes the key capability of environments that support iterative
development. As we have moved into maintaining different information repositories for the
engineering artifacts, we need automation support to ensure efficient and error-free transition of data
from one artifact to another.
Forward engineering is the automation of one engineering artifact from another, more abstract
representation. For example, compilers and linkers have provided automated transition of source
code into executable code.
Reverse engineering is the generation or modification of a more abstract representation from an
existing artifact (for example, creating a visual design model from a source code representation).
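As an informal illustration of forward engineering (not an example from the text), the sketch below generates source code from a more abstract, declarative description of a record type. The model format RECORD_MODEL and the generator are invented for illustration; real environments perform this transition between design models and code.

```python
# Hypothetical forward-engineering sketch: generate source code (a Python
# dataclass) from a more abstract artifact (a simple declarative model).

RECORD_MODEL = {            # abstract representation, invented for this example
    "name": "Customer",
    "fields": [("customer_id", "int"), ("name", "str"), ("active", "bool")],
}

def generate_dataclass(model):
    lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {model['name']}:"]
    lines += [f"    {field}: {typ}" for field, typ in model["fields"]]
    return "\n".join(lines)

print(generate_dataclass(RECORD_MODEL))
# Reverse engineering would go the other way: recovering a model like
# RECORD_MODEL from existing source code.
```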
There are significant economic improvements associated with tools and environments. It is common
for tool vendors to make relatively accurate individual assessments of life-cycle activities to support
claims about the potential economic impact of their tools. For example, it is easy to find statements
such as the following from companies in a particular tool market (see the summation after the list):
Requirements analysis and evolution activities consume 40% of life-cycle costs.
Software design activities have an impact on more than 50% of the resources.
Coding and unit testing activities consume about 50% of software development effort and
schedule.
Test activities can consume as much as 50% of a project's resources.
Configuration control and change management are critical activities that can consume as
much as 25% of resources on a large-scale project.
Documentation activities can consume more than 30% of project engineering resources.
Project management, business administration, and progress assessment can consume as much
as 30% of project budgets.
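These individual claims cannot simply be added together. The quick arithmetic below (illustrative only) shows that the quoted shares sum to well over 100% of project resources, so the activity categories overlap and the savings attributable to any single tool must be interpreted narrowly.

```python
# Illustrative arithmetic: the vendor-quoted shares of life-cycle resources
# overlap, since they add up to far more than 100% of a project.
claimed_shares = {
    "requirements analysis and evolution": 40,
    "software design": 50,
    "coding and unit testing": 50,
    "test activities": 50,
    "configuration control and change management": 25,
    "documentation": 30,
    "management, administration, and assessment": 30,
}
total = sum(claimed_shares.values())
print(f"Sum of claimed shares: {total}%")  # 275%, i.e. the categories overlap
```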
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing
on requirements completeness and traceability late in the life cycle, and focusing
throughout the life cycle on a balance between requirements evolution, design evolution,
and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it
evolves from a high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation
Using visual modeling and higher level languages that support architectural control,
abstraction, reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based
evaluations
Conventional development processes stressed early sizing and timing estimates of computer
program resource utilization. However, the typical chronology of events in performance
assessment was as follows:
Project inception. The proposed design was asserted to be low risk with adequate
performance margin.
Initial design review. Optimistic assessments of adequate design margin were based
mostly on paper analysis or rough simulation of the critical threads. In most cases, the
actual application algorithms and database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as
early benchmarks and initial tests began exposing the optimism inherent in earlier
estimates.
Integration and test. Serious performance problems were uncovered, necessitating
fundamental changes in the architecture. The underlying infrastructure was usually the
scapegoat, but the real culprit was immature use of the infrastructure, immature
architectural solutions, or poorly understood early design trade-offs.
Inspections are also a good vehicle for holding authors accountable for quality products. All authors
of software and documentation should have their products scrutinized as a natural by-product of the
process. Therefore, the coverage of inspections should be across all authors rather than across all
components.