SPPM Unit-2
Conventional software management practices are sound in theory, but practice is still tied to
archaic (outdated) technology and techniques.
  Conventional software economics provides a benchmark of performance for
  conventional software management principles.
 The best thing about software is its flexibility: It can be programmed to do almost anything.
 The worst thing about software is also its flexibility: The "almost anything"
 characteristic has made it difficult to plan, monitor, and control software development.
 Three important analyses of the state of the software engineering industry are
    1. Software development is still highly unpredictable. Only about 10% of
       software projects are delivered successfully within initial budget and schedule
       estimates.
    2. Management discipline is more of a discriminator in success or failure than are
       technology advances.
    3. The level of software scrap and rework is indicative of an immature process.
 All three analyses reached the same general conclusion: The success rate for software
 projects is very low. The three analyses provide a good introduction to the magnitude
 of the software problem and the current norms for conventional software management
 performance.
    1. There are two essential steps common to the development of computer programs:
       analysis and coding.
    2. In order to manage and control all of the intellectual freedom associated with
       software development, one must introduce several other "overhead" steps,
       including system requirements definition, software requirements definition,
       program design, and testing. These steps supplement the analysis and coding
        steps. The figure below illustrates the resulting project profile and the basic steps in
       developing a large-scale program.
3. The basic framework described in the waterfall model is risky and invites failure. The testing
   phase that occurs at the end of the development cycle is the first event for which timing, storage,
   input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design
   changes are likely to be so disruptive that the software requirements upon which the design is
   based are likely to be violated. Either the requirements must be modified or a substantial design
   change is warranted.
 3. Do it twice. If a computer program is being developed for the first time, arrange
    matters so that the version finally delivered to the customer for operational
    deployment is actually the second version insofar as critical design/operations are
    concerned. Note that this is simply the entire process done in miniature, to a time scale
    that is relatively small with respect to the overall effort. In the first version, the team
    must have a special broad competence where they can quickly sense trouble spots in
    the design, model them, model alternatives, forget the straightforward aspects of the
    design that aren't worth studying at this early point, and, finally, arrive at an error-free
    program.
 4. Plan, control, and monitor testing. Without question, the biggest user of project
    resources (manpower, computer time, and/or management judgment) is the test phase.
    This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest
    point in the schedule, when backup alternatives are least available, if at all. The
    previous three recommendations were all aimed at uncovering and solving problems
    before entering the test phase. However, even after doing these things, there is still a
    test phase and there are still important things to be done, including: (1) employ a team
    of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like dropped
minus signs, missing factors of two, jumps to wrong addresses (do not use the computer to
detect this kind of thing, it is too expensive); (3) test every logic path; (4) employ the final
checkout on the target computer.
 5. Involve the customer. It is important to involve the customer in a formal way so that
 he has committed himself at earlier points before final delivery. There are three points
 following requirements definition where the insight, judgment, and commitment of the
 customer can bolster the development effort. These include a "preliminary software
 review" following the preliminary program design step, a sequence of "critical software
 design reviews" during program design, and a "final software acceptance review".
    IN PRACTICE
 Some software projects still practice the conventional software management approach.
      It is useful to summarize the characteristics of the conventional process as it has
  typically been applied, which is not necessarily as it was intended. For a typical development
  project that used a waterfall model management process, Figure 1-2 illustrates development
  progress versus time, where progress is defined as percent coded, that is, demonstrable in its
  target form. Projects destined for trouble frequently exhibit the following symptoms:
     Early success via paper designs and thorough (often too thorough) briefings.
     Commitment to code late in the life cycle.
     Integration nightmares (unpleasant experiences) due to unforeseen
     implementation issues and interface ambiguities.
     Heavy budget and schedule pressure to get the system working.
     Late shoe-horning of nonoptimal fixes, with no time for redesign.
     A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once,
then integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of
software activities.
Late Risk Resolution: A serious issue associated with the waterfall life cycle was the lack of early risk
resolution. Figure 1.3 illustrates a typical risk profile for conventional waterfall model projects. It includes
four distinct periods of risk exposure, where risk is defined as the probability of missing a cost, schedule,
feature, or quality goal. Early in the life cycle, as the requirements were being specified, the actual risk
exposure was highly unpredictable.
Requirements-Driven Functional Decomposition: This approach depends on specifying requirements
completely and unambiguously before other development activities begin. It naively treats all
requirements as equally important, and depends on those requirements remaining constant over the
software development life cycle. These conditions rarely occur in the real world. Specification of
requirements is a difficult and important part of the software development process.
     Another property of the conventional approach is that the requirements were typically specified in a
functional manner. Built into the classic waterfall process was the fundamental assumption that the software
itself was decomposed into functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-oriented design and the
use of existing components. Figure 1-4 illustrates the result of requirements-driven approaches: a
software structure that is organized around the requirements specification structure.
Adversarial Stakeholder Relationships:
The conventional process tended to result in adversarial stakeholder relationships, in large part because of
the difficulties of requirements specification and the exchange of information solely through paper
documents that captured engineering information in ad hoc formats.
The following sequence of events was typical for most contractual software efforts:
The conventional process focused on producing various documents that attempted to describe the software
product, with insufficient focus on producing tangible increments of the products themselves. Contractors
were driven to produce literally tons of paper to meet milestones and demonstrate progress to stakeholders,
rather than spend their energy on tasks that would reduce risk and produce quality software. Typically,
presenters and the audience reviewed the simple things that they understood rather than the complex and
important issues. Most design reviews therefore resulted in low engineering value and high cost in terms of
the effort and schedule involved in their preparation and conduct. They presented merely a facade of
progress.
Table 1-2 summarizes the results of a typical design review.
  CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
  SOFTWARE ECONOMICS
Most software cost models can be abstracted into a function of five basic parameters: size, process,
personnel, environment, and required quality.
  1. The size of the end product (in human-generated components), which is typically quantified
     in terms of the number of source instructions or the number of function points required to
     develop the required functionality
  2. The process used to produce the end product, in particular the ability of the process to avoid non-
     value-adding activities (rework, bureaucratic delays, communications overhead)
  3. The capabilities of software engineering personnel, and particularly their experience with the
     computer science issues and the applications domain issues of the project
  4. The environment, which is made up of the tools and techniques available to support efficient
     software development and to automate the process
  5. The required quality of the product, including its features, performance, reliability, and
     adaptability
The relationships among these parameters and the estimated cost can be written as follows:

     Effort = (Personnel)(Environment)(Quality)(Size ^ Process)
     One important aspect of software economics (as represented within today's software cost models)
is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of
scale of software development is a result of the process exponent being greater than 1.0. Contrary to
most manufacturing processes, the more software you build, the more expensive it is per unit item.
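The diseconomy of scale can be seen with a small sketch of the cost relationship. The coefficient, exponent values, and function names below are illustrative assumptions for demonstration, not calibrated model data:

```python
# Illustrative sketch of the cost-model relationship
#   effort = (personnel)(environment)(quality)(size ** process)
# All coefficient values here are assumed for illustration only.

def effort(size_ksloc, process_exponent, multiplier=3.0):
    """Estimated effort (staff-months) for a given size.

    multiplier stands in for the combined personnel,
    environment, and quality factors.
    """
    return multiplier * size_ksloc ** process_exponent

# With a process exponent > 1.0, doubling size more than doubles effort:
small = effort(100, 1.2)   # 100 KSLOC project
large = effort(200, 1.2)   # 200 KSLOC project
print(large / small)       # ratio > 2.0, i.e. a diseconomy of scale
```

With an exponent of exactly 1.0 the ratio would be 2.0 (linear scaling); any exponent above 1.0 makes each additional unit of software more expensive than the last.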
     Figure 2-1 shows three generations of basic technology advancement in tools, components, and
processes. The required levels of quality and personnel are assumed to be constant. The ordinate of the
graph refers to software unit costs (pick your favorite: per SLOC, per function point, per component)
realized by an organization.
The three generations of software development are defined as follows:
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
   processes, and virtually all custom components built in primitive languages. Project performance
   was highly predictable in that cost, schedule, and quality objectives were almost always
   underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes
   and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages.
   Some of the
    components (<30%) were available as commercial products, including the operating system, database
    management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in
   the use of managed and measured processes, integrated automation environments, and
   mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components need to
   be custom built.
      Technologies for environment automation, size reduction, and process improvement are not
independent of one another. In each new era, the key is complementary growth in all technologies. For
example, the process advances could not be used successfully without new component technologies and
increased tool automation.
Organizations are achieving better economies of scale in successive technology eras, with very large
projects (systems of systems), long-lived products, and lines of business comprising multiple similar
projects. Figure 2-2 provides an overview of how a return on investment (ROI) profile can be achieved
in subsequent efforts across life cycles of various domains.
   PRAGMATIC SOFTWARE COST ESTIMATION
One critical problem in software cost estimation is a lack of well-documented case studies of
projects that used an iterative development approach. Because the software industry has inconsistently
defined metrics and atomic units of measure, the data from actual projects are highly suspect in terms
of consistency and comparability. It is hard enough to collect a homogeneous set of project data within
one organization; it is extremely difficult to homogenize data across different organizations with
different processes, languages, domains, and so on.
There have been many debates among developers and vendors of software cost estimation models and tools.
Three topics of these debates are of particular interest here:
  IMPROVING SOFTWARE ECONOMICS
Five basic parameters of the software cost model are
  1. Reducing the size or complexity of what needs to be developed.
  2. Improving the development process.
  3. Using more-skilled personnel and better teams (not necessarily the same thing).
  4. Using better environments (tools to automate the process).
  5. Trading off or backing off on quality thresholds.
These parameters are given in priority order for most software domains. Table 3-1 lists some of the
technology developments, process improvement efforts, and management approaches targeted at
improving the economics of software development and integration.
  LANGUAGES
Universal function points (UFPs1) are useful estimators for language-independent, early life-cycle
estimates. The basic units of function points are external user inputs, external outputs, internal
logical data groups, external data interfaces, and external inquiries. SLOC metrics are useful
estimators for software after a candidate solution is formulated and an implementation language is
known. Substantial data have been documented relating SLOC to function points. Some of these results
are shown in Table 3-2.
1 Function point metrics provide a standardized method for measuring the various functions of a software
application.
                          Language            SLOC per function point
                          Assembly            320
                          C                   128
                          FORTRAN 77          105
                          COBOL 85            91
                          Ada 83              71
                          C++                 56
                          Ada 95              55
                          Java                55
                          Visual Basic        35
                                    Table 3-2
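As a rough illustration of how the Table 3-2 ratios are used once a language is known, the sketch below converts an early function-point estimate into a SLOC estimate. The ratios come from the table; the helper function and the 100-function-point example are assumptions for illustration:

```python
# SLOC per function point, taken from Table 3-2.
SLOC_PER_FP = {
    "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55,
    "Visual Basic": 35,
}

def estimate_sloc(function_points, language):
    """Rough size estimate once an implementation language is chosen."""
    return function_points * SLOC_PER_FP[language]

# The same 100 function points imply very different code volumes:
print(estimate_sloc(100, "Assembly"))  # 32000
print(estimate_sloc(100, "Java"))      # 5500
```

This is why function points are preferred for language-independent, early life-cycle estimates, while SLOC becomes meaningful only after the language is fixed.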
  1. An object-oriented model of the problem and its solution encourages a common vocabulary
     between the end users of a system and its developers, thus creating a shared understanding of
     the problem being solved.
  2. The use of continuous integration creates opportunities to recognize risk early and make incremental
    corrections without destabilizing the entire development effort.
  3. An object-oriented architecture provides a clear separation of concerns among disparate elements
     of a system, creating firewalls that prevent a change in one part of the system from rending the
     fabric of the entire architecture.
  1. A ruthless focus on the development of a system that provides a well understood collection of essential
     minimal characteristics.
  2. The existence of a culture that is centered on results, encourages communication, and yet is not
     afraid to fail.
  3. The effective use of object-oriented modeling.
  4. The existence of a strong architectural vision.
  5. The application of a well-managed iterative and incremental development life cycle.
  REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. Reuse is pursued in order to
minimize development costs while achieving all the other required attributes of performance, feature
set, and quality. Try to treat reuse as a mundane part of achieving a return on investment.
     Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:
  COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize integration of commercial
components and off-the-shelf products. While the use of commercial components is certainly desirable as a
means of reducing custom development, it has not proven to be straightforward in practice. Table 3-3
identifies some of the advantages and disadvantages of using commercial components.
In a perfect software engineering world with an immaculate problem description, an obvious solution
space, a development team of experienced geniuses, adequate resources, and stakeholders with common
goals, we could execute a software development process in one iteration with almost no scrap and rework.
Because we work in an imperfect world, however, we need to manage engineering activities so that scrap
and rework profiles do not have an impact on the win conditions of any stakeholder. This should be the
underlying premise for most process improvements.
Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more
attention:
  1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the
     right job seems obvious but is surprisingly hard to achieve.
  2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite
     for success.
  3. Decision-making skill. The jillion books written about management have failed to provide a
     clear definition of this attribute. We all know a good leader when we run into one, and
     decision-making skill seems obvious despite its intangible definition.
  4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress,
     exploit eccentric prima donnas, transition average people into top performers, eliminate
     misfits, and consolidate diverse opinions into a team direction.
  5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on
     decisions and priorities, sell candidates on job positions, sell changes to the status quo in
     the face of resistance, and sell achievements against objectives. In practice, selling requires
     continuous negotiation, compromise, and empathy.
Conventional development processes stressed early sizing and timing estimates of computer program
resource utilization. However, the typical chronology of events in performance assessment was as
follows:
    Project inception. The proposed design was asserted to be low risk with adequate performance
    margin.
    Initial design review. Optimistic assessments of adequate design margin were based mostly on
    paper analysis or rough simulation of the critical threads. In most cases, the actual application
    algorithms and database sizes were fairly well understood.
    Mid-life-cycle design review. The assessments started whittling away at the margin, as early
    benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
    Integration and test. Serious performance problems were uncovered, necessitating fundamental
    changes in the architecture. The underlying infrastructure was usually the scapegoat, but the real
    culprit was immature use of the infrastructure, immature architectural solutions, or poorly
    understood early design trade-offs.
    Transitioning engineering information from one artifact set to another, thereby assessing the
    consistency, feasibility, understandability, and technology constraints inherent in the engineering
    artifacts
    Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in
                4
     the context of relevant use cases
     Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
     representation rigor, consistency, completeness, and change control
     Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements
     compliance
     Change management metrics for objective insight into multiple-perspective change trends and
     convergence or divergence from quality and progress goals
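As one hedged sketch of what such a change management metric might look like, the fragment below derives a period-by-period rework ratio from change records. The field names, sample data, and the convergence interpretation are illustrative assumptions, not a prescribed metric:

```python
# Sketch: deriving a rework trend from software change records.
# Field names and the sample figures are illustrative assumptions.

def rework_ratio(changes):
    """Fraction of broken (reworked) code relative to the baseline
    code touched in a reporting period."""
    broken = sum(c["broken_sloc"] for c in changes)
    total = sum(c["baseline_sloc"] for c in changes)
    return broken / total if total else 0.0

# Three successive reporting periods: a declining ratio suggests
# convergence toward quality goals; a rising one, divergence.
periods = [
    [{"broken_sloc": 400, "baseline_sloc": 2000}],
    [{"broken_sloc": 300, "baseline_sloc": 3000}],
    [{"broken_sloc": 150, "baseline_sloc": 3000}],
]
ratios = [rework_ratio(p) for p in periods]
print(ratios)  # [0.2, 0.1, 0.05] -- a converging trend
```

The point of such a metric is objectivity: trends are computed from baseline change data rather than asserted in status briefings.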
Inspections are also a good vehicle for holding authors accountable for quality products. All authors
of software and documentation should have their products scrutinized as a natural by-product of the
process. Therefore, the coverage of inspections should be across all authors rather than across all
components.
   The top 10 principles of modern software management are listed below. (The first five, which are
   the main themes of my definition of an iterative process, are summarized in Figure 4-1.)
     Base the process on an architecture-first approach. This requires that a demonstrable balance be
       achieved among the driving requirements, the architecturally significant design decisions, and the
        life-cycle plans before the resources are committed for full-scale development.
     Establish an iterative life-cycle process that confronts risk early. With today's sophisticated
       software systems, it is not possible to define the entire problem, design the entire solution, build the
       software, and then test the end product in sequence. Instead, an iterative process that refines the
       problem understanding, an effective solution, and an effective plan over several iterations encourages
       a balanced treatment of all stakeholder objectives. Major risks must be addressed early to increase
       predictability and avoid expensive downstream scrap and rework.
     Transition design methods to emphasize component-based development. Moving from a line-of-
       code mentality to a component-based mentality is necessary to reduce the amount of human-
       generated source code and custom development.
Table 4-1 maps top 10 risks of the conventional process to the key attributes and principles of a modern
process.
       TRANSITIONING TO AN ITERATIVE PROCESS
Modern software development processes have moved away from the conventional waterfall model, in
which each stage of the development process is dependent on completion of the previous stage.
     The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify. As one benchmark of the expected economic
impact of process improvement, consider the process exponent parameters of the COCOMO II model.
(Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually
no diseconomy of scale) to 1.26 (significant diseconomy of scale). The parameters that govern the value
of the process exponent are application precedentedness, process flexibility, architecture risk resolution,
team cohesion, and software process maturity.
     The following paragraphs map the process exponent parameters of COCOMO II to my top 10
principles of a modern process.
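A minimal sketch of the exponent mechanics: with five scale-factor parameters each rated from 0 (best) to 5 (worst), an exponent of the form 1.01 + 0.01 x (sum of ratings) reproduces the 1.01-to-1.26 range quoted above. The rating scale and coefficient below are illustrative simplifications of COCOMO II, not its published calibration:

```python
# Sketch of the COCOMO II process-exponent idea. The 1.01-1.26 range
# matches the text; rating each scale factor 0 (best) to 5 (worst)
# is a simplification chosen so the sum reproduces that range.

FACTORS = ["precedentedness", "flexibility", "risk_resolution",
           "team_cohesion", "process_maturity"]

def process_exponent(ratings):
    """ratings: dict mapping each scale factor to a 0-5 rating."""
    return 1.01 + 0.01 * sum(ratings[f] for f in FACTORS)

def effort(size_ksloc, exponent, coefficient=3.0):
    # coefficient stands in for the cost-driver product (assumed value)
    return coefficient * size_ksloc ** exponent

best = process_exponent({f: 0 for f in FACTORS})    # ~1.01
worst = process_exponent({f: 5 for f in FACTORS})   # ~1.26
# On a 300 KSLOC project, the exponent difference alone changes
# effort by a factor of 300 ** 0.25, roughly 4x:
print(effort(300, worst) / effort(300, best))
```

This is why process improvement pays off disproportionately on large projects: the exponent parameters compound with size.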
ENGINEERING AND PRODUCTION STAGES
  To achieve economies of scale and higher returns on investment, we must move toward a software
manufacturing process driven by technological improvements in process automation and
component-based development. Two stages of the life cycle are:
   1. The engineering stage, driven by less predictable but smaller teams doing design and
      synthesis activities
   2. The production stage, driven by more predictable but larger teams doing construction, test,
      and deployment activities
        The transition between engineering and production is a crucial event for the various stakeholders.
The production plan has been agreed upon, and there is a good enough understanding of the problem and
the solution that all stakeholders can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and elaboration, and the
production stage into construction and transition. These four phases of the life-cycle process are
loosely mapped to the conceptual framework of the spiral model as shown in Figure 5-1.
       INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among stakeholders on the life-
cycle objectives for the project.
PRIMARY OBJECTIVES
    Establishing the project's software scope and boundary conditions, including an operational
    concept, acceptance criteria, and a clear understanding of what is and is not intended to be in the
    product
    Discriminating the critical use cases of the system and the primary scenarios of operation that
    will drive the major design trade-offs
     Demonstrating at least one candidate architecture against some of the primary scenarios
    Estimating the cost and schedule for the entire project (including detailed estimates for
    the elaboration phase)
    Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
    Formulating the scope of the project. The information repository should be sufficient to define
    the problem space and derive the acceptance criteria for the end product.
    Synthesizing the architecture. An information repository is created that is sufficient to
     demonstrate the feasibility of at least one candidate architecture and an initial baseline of
    make/buy decisions so that the cost, schedule, and resource estimates can be derived.
     Planning and preparing a business case. Alternatives for risk management, staffing, iteration
     plans, and cost/schedule/profitability trade-offs are evaluated.
 PRIMARY EVALUATION CRITERIA
     Do all stakeholders concur on the scope definition and cost and schedule estimates?
     Are requirements understood, as evidenced by the fidelity of the critical use cases?
     Are the cost and schedule estimates, priorities, risks, and development processes credible?
     Do the depth and breadth of an architecture prototype demonstrate the preceding criteria? (The
     primary value of prototyping candidate architecture is to provide a vehicle for understanding the
     scope and assessing the credibility of the development group in solving the particular technical
     problem.)
      Are actual resource expenditures versus planned expenditures acceptable?
ELABORATION PHASE
At the end of this phase, the "engineering" is considered complete. The elaboration phase activities must
ensure that the architecture, requirements, and plans are stable enough, and the risks sufficiently
mitigated, that the cost and schedule for the completion of the development can be predicted within an
acceptable range. During the elaboration phase, an executable architecture prototype is built in one or
more iterations, depending on the scope, size, and risk.
PRIMARY OBJECTIVES
    Baselining the architecture as rapidly as practical (establishing a configuration-managed snapshot in
    which all changes are rationalized, tracked, and maintained)
    Baselining the vision
     Baselining a high-fidelity plan for the construction phase
     Demonstrating that the baseline architecture will support the vision at a reasonable cost in a
     reasonable time
ESSENTIAL ACTIVITIES
    Elaborating the vision.
    Elaborating the process and infrastructure.
    Elaborating the architecture and selecting components.
CONSTRUCTION PHASE
     During the construction phase, all remaining components and application features are integrated
into the application, and all features are thoroughly tested. Newly developed software is integrated
where required. The construction phase represents a production process, in which emphasis is placed
on managing resources and controlling operations to optimize costs, schedules, and quality.
PRIMARY OBJECTIVES
   Minimizing development costs by optimizing resources and avoiding unnecessary scrap and
   rework
        Achieving adequate quality as rapidly as practical
    Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical
ESSENTIAL ACTIVITIES
    Resource management, control, and process optimization
         Complete component development and testing against evaluation criteria
          Assessment of product releases against acceptance criteria of the vision
       TRANSITION PHASE
The transition phase is entered when a baseline is mature enough to be deployed in the end-user domain.
This typically requires that a usable subset of the system has been achieved with acceptable quality
levels and user documentation so that transition to the user will provide positive results. This phase
could include any of the following activities:
PRIMARY OBJECTIVES
     Achieving user self-supportability
    Achieving stakeholder concurrence that deployment baselines are complete and consistent with
    the evaluation criteria of the vision
    Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
   Synchronization and integration of concurrent construction increments into consistent
   deployment baselines
          Deployment-specific engineering (cutover, commercial packaging and production, sales
    rollout kit development, field personnel training)
          Assessment of deployment baselines against the complete vision and acceptance
   criteria in the requirements set
EVALUATION CRITERIA
      Is the user satisfied?
          Are actual resource expenditures versus planned expenditures acceptable?
THE MANAGEMENT SET
The management set captures the artifacts associated with process planning and execution. These
artifacts use ad hoc notations, including text, graphics, or whatever representation is required to
capture the "contracts" among project personnel (project management, architects, developers, testers,
marketers, administrators), among stakeholders (funding authority, user, software project manager,
organization manager, regulatory agency), and between project personnel and stakeholders. Specific
artifacts included in this set are the work breakdown structure (activity breakdown and financial
tracking mechanism), the business case (cost, schedule, profit expectations), the release
specifications (scope, plan, objectives for release baselines), the software development plan (project
process instance), the release descriptions (results of release baselines), the status assessments
(periodic snapshots of project progress), the software change orders (descriptions of discrete baseline
changes), the deployment documents (cutover plan, training course, sales rollout kit), and the
environment (hardware and software tools, process automation, & documentation).
Management set artifacts are evaluated, assessed, and measured through a combination of the following:
      Relevant stakeholder review
      Analysis of changes between the current version of the artifact and previous versions
     Major milestone demonstrations of the balance among all artifacts and, in particular, the
     accuracy of the business case and vision artifacts
Requirements Set
Requirements artifacts are evaluated, assessed, and measured through a combination of the following:
        Analysis of consistency with the release specifications of the management set
        Analysis of consistency between the vision and the requirements models
        Mapping against the design, implementation, and deployment sets to evaluate the consistency
        and completeness and the semantic balance between information in the different sets
        Analysis of changes between the current version of requirements artifacts and previous
        versions (scrap, rework, and defect elimination trends)
        Subjective review of other dimensions of quality
Design Set
         UML notation is used to engineer the design models for the solution. The design set contains
    varying levels of abstraction that represent the components of the solution space (their identities,
    attributes, static relationships, dynamic interactions). The design set is evaluated, assessed, and
    measured through a combination of the following:
         Analysis of the internal consistency and quality of the design model
          Analysis of consistency with the requirements models
          Translation into implementation and deployment sets and notations (for example, traceability,
          source code generation, compilation, linking) to evaluate the consistency and completeness and
          the semantic balance between information in the sets
          Analysis of changes between the current version of the design model and previous versions
          (scrap, rework, and defect elimination trends)
          Subjective review of other dimensions of quality
Implementation Set
   The implementation set includes source code (programming language notations) that represents the tangible
   implementations of components (their form, interface, and dependency relationships)
        Implementation sets are human-readable formats that are evaluated, assessed, and measured
   through a combination of the following:
        Analysis of consistency with the design models
        Translation into deployment set notations (for example, compilation and linking) to evaluate the
        consistency and completeness among artifact sets
        Assessment of component source or executable files against relevant evaluation criteria through
        inspection, analysis, demonstration, or testing
        Execution of stand-alone component test cases that automatically compare expected results with
        actual results
        Analysis of changes between the current version of the implementation set and previous
        versions (scrap, rework, and defect elimination trends)
        Subjective review of other dimensions of quality
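The stand-alone component test cases mentioned above automatically compare expected results with actual results. A minimal sketch of this idea using Python's unittest framework follows; the `moving_average` component is hypothetical, invented here only to have something concrete to test:

```python
import unittest

def moving_average(values, window):
    """Hypothetical component under test: simple moving average."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class MovingAverageTest(unittest.TestCase):
    """Stand-alone test case: expected results are compared automatically."""

    def test_expected_versus_actual(self):
        expected = [2.0, 3.0, 4.0]
        actual = moving_average([1, 2, 3, 4, 5], window=3)
        self.assertEqual(expected, actual)

    def test_invalid_window_is_rejected(self):
        with self.assertRaises(ValueError):
            moving_average([1, 2, 3], window=0)

if __name__ == "__main__":
    unittest.main()
```

Because the comparison is coded rather than performed by inspection, the same test artifact can be rerun against every new version of the implementation set, which is what makes scrap and rework trends measurable.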
Deployment Set
The deployment set includes user deliverables and machine language notations, executable software,
and the build scripts, installation scripts, and executable target-specific data necessary to use the
product in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
      Testing against the usage scenarios and quality attributes defined in the requirements set to
      evaluate the consistency and completeness and the semantic balance between information in
      the two sets
      Testing the partitioning, replication, and allocation strategies in mapping components of the
      implementation set to physical resources of the deployment system (platform type, number,
      network topology)
     Testing against the defined usage scenarios in the user manual such as installation,
     user-oriented dynamic reconfiguration, mainstream usage, and anomaly management
     Analysis of changes between the current version of the deployment set and previous versions
     (defect elimination trends, performance changes)
     Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the other sets take
on check and balance roles. As illustrated in Figure 6-2, each phase has a predominant focus:
Requirements are the focus of the inception phase; design, the elaboration phase; implementation, the
construction phase; and deployment, the transition phase. The management artifacts also evolve, but at a
fairly constant level across the life cycle.
     Most of today's software development tools map closely to one of the five artifact sets.
  1. Management: scheduling, workflow, defect tracking, change
      management, documentation, spreadsheet, resource management, and
      presentation tools
  2. Requirements: requirements management tools
  3. Design: visual modeling tools
  4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and
      test management tools
  5. Deployment: test coverage and test automation tools, network management tools, commercial
      components (operating systems, GUIs, RDBMS, networks, middleware), and installation tools.
Implementation Set versus Deployment Set
   The separation of the implementation set (source code) from the deployment set (executable code) is
   important because there are very different concerns with each set. The structure of the information
   delivered to the user (and typically the test organization) is very different from the structure of the source
   code information. Engineering decisions that have an impact on the quality of the deployment set but are
   relatively incomprehensible in the design and implementation sets include the following:
         Dynamically reconfigurable parameters (buffer sizes, color palettes, number of servers,
         number of simultaneous clients, data files, run-time parameters)
         Effects of compiler/link optimizations (such as space optimization versus speed optimization)
         Performance under certain allocation strategies (centralized versus distributed, primary and
         shadow threads, dynamic load balancing, hot backup versus checkpoint/rollback)
         Virtual machine constraints (file descriptors, garbage collection, heap size, maximum record
         size, disk file rotations)
         Process-level concurrency issues (deadlock and race conditions)
        Platform-specific differences in performance or behavior
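Dynamically reconfigurable parameters are a deployment-set concern precisely because they are invisible in the source structure: the code only sees a dictionary of values, while the values themselves are supplied at deployment time. A minimal sketch of the idea (the parameter names and defaults are illustrative, not from the text):

```python
import json

# Default run-time parameters; a deployed system would read overrides from a
# site-specific configuration file rather than constants compiled into the code.
DEFAULTS = {"buffer_size": 4096, "max_clients": 32, "num_servers": 2}

def load_runtime_parameters(config_text):
    """Merge deployment-time overrides into the defaults."""
    overrides = json.loads(config_text)
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

# A site-specific configuration delivered with the deployment set:
params = load_runtime_parameters('{"max_clients": 128}')
print(params["max_clients"], params["buffer_size"])  # prints "128 4096"
```

The design and implementation sets look identical for every site; only the configuration data in the deployment set changes, which is why these parameters must be tested against the deployed configuration rather than against the source code.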
       TEST ARTIFACTS
      The test artifacts must be developed concurrently with the product from inception through
      deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle activity.
      The test artifacts are communicated, engineered, and developed within the same artifact sets as
      the developed product.
      The test artifacts are implemented in programmable and repeatable formats (as
      software programs).
      The test artifacts are documented in the same way that the product is documented.
      Developers of the test artifacts use the same tools, techniques, and training as the
      software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies the relationship between
test artifacts and the other artifact sets. Consider a project to perform seismic data processing for the
purpose of oil exploration. This system has three fundamental subsystems: (1) a sensor subsystem that
captures raw seismic data in real time and delivers these data to (2) a technical operations subsystem
that converts raw data into an organized database and manages queries to this database from (3) a
display subsystem that allows workstation operators to examine seismic data in human-readable form.
Such a system would result in the following test artifacts:
     Management set. The release specifications and release descriptions capture the objectives,
     evaluation criteria, and results of an intermediate milestone. These artifacts are the test plans
     and test results negotiated among internal project teams. The software change orders capture
     test results (defects, testability changes, requirements ambiguities, enhancements) and the
     closure criteria associated with making a discrete change to a baseline.
     Requirements set. The system-level use cases capture the operational concept for the system
     and the acceptance test case descriptions, including the expected behavior of the system and its
     quality attributes. The entire requirement set is a test artifact because it is the basis of all
     assessment activities across the life cycle.
     Design set. A test model for nondeliverable components needed to test the product baselines is
     captured in the design set. These components include such design set artifacts as a seismic event
     simulation for creating realistic sensor data; a "virtual operator" that can support unattended,
     after- hours test cases; specific instrumentation suites for early demonstration of resource usage;
     transaction rates or response times; and use case test drivers and component stand-alone test
     drivers.
     Implementation set. Self-documenting source code representations for test components and test
     drivers provide the equivalent of test procedures and test scripts. These source files may also
     include human-readable data files representing certain statically defined data sets that are
     explicit test source files. Output files from test drivers provide the equivalent of test reports.
     Deployment set. Executable versions of test components, test drivers, and data files are provided.
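For the seismic example, a design-set test component such as the seismic event simulator might be sketched as follows. Everything here is hypothetical (the signal model, names, and thresholds are invented for illustration); the point is that a nondeliverable component generates realistic, repeatable sensor data for testing:

```python
import math
import random

def simulate_seismic_trace(num_samples, event_sample, amplitude=1.0, seed=42):
    """Nondeliverable test component: synthesize one sensor trace containing
    a seismic event (a decaying sinusoid) on top of background noise."""
    rng = random.Random(seed)  # seeded so every test run is repeatable
    trace = []
    for i in range(num_samples):
        noise = rng.gauss(0.0, 0.05)
        if i >= event_sample:
            t = i - event_sample
            signal = amplitude * math.exp(-t / 20.0) * math.sin(t / 3.0)
        else:
            signal = 0.0
        trace.append(signal + noise)
    return trace

# Drive a (hypothetical) sensor subsystem with synthetic data, and report
# whether the event region carries more energy than the quiet background.
trace = simulate_seismic_trace(200, event_sample=100)
background = sum(x * x for x in trace[:100])
event = sum(x * x for x in trace[100:])
print("event energy exceeds background:", event > background)
```

The simulator's output files play the role of test data, and the printed comparison is the seed of a test report: both live in the same artifact sets, and are engineered with the same tools, as the deliverable product.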
       MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results and ancillary
information necessary to document the product/process legacy, maintain the product, improve the
product, and improve the process.
Business Case
The business case artifact provides all the information necessary to determine whether the project is
worth investing in. It details the expected revenue, expected cost, technical and management plans,
and backup data necessary to demonstrate the risks and realism of the plans. The main purpose is to
transform the vision into economic terms so that an organization can make an accurate ROI
assessment. The financial forecasts are evolutionary, updated with more accurate forecasts as the life
cycle progresses. Figure 6-4 provides a default outline for a business case.
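The economic transformation the business case performs reduces, at its core, to simple arithmetic on expected revenue and expected cost, re-estimated as the life cycle progresses. A sketch (all figures are made up for illustration):

```python
def roi(expected_revenue, expected_cost):
    """Return on investment, expressed as a fraction of cost."""
    if expected_cost <= 0:
        raise ValueError("expected_cost must be positive")
    return (expected_revenue - expected_cost) / expected_cost

# Evolutionary forecasts: each life-cycle phase refines the estimate.
forecasts = [
    ("inception",    1_500_000, 1_000_000),
    ("elaboration",  1_400_000, 1_050_000),
    ("construction", 1_350_000, 1_100_000),
]
for phase, revenue, cost in forecasts:
    print(f"{phase:12s} ROI = {roi(revenue, cost):.1%}")
```

The early estimates carry the widest error bars; what the business case artifact adds beyond the arithmetic is the backup data that demonstrates how realistic each forecast is.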
Software Development Plan
The software development plan (SDP) elaborates the process framework into a fully detailed plan.
Two indications of a useful SDP are periodic updating (it is not stagnant shelfware) and
understanding and acceptance by managers and practitioners alike. Figure 6-5 provides a default
outline for a software development plan.
Work Breakdown Structure
   Work breakdown structure (WBS) is the vehicle for budgeting and collecting costs. To monitor and
   control a project's financial performance, the software project manager must have insight into project
   costs and how they are expended. The structure of cost accountability is a serious project planning
   constraint.
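Since the WBS is the vehicle for budgeting and collecting costs, its essential mechanics are a hierarchical cost rollup: leaf elements carry budgeted costs, and parent elements sum their children. A minimal sketch (the element names and amounts are illustrative only):

```python
# Each WBS element is either a leaf with a budgeted cost or a parent whose
# cost rolls up from its children.
wbs = {
    "management":  {"planning": 40_000, "control": 25_000},
    "engineering": {"design": 120_000, "implementation": 200_000,
                    "assessment": 80_000},
}

def rollup(node):
    """Total cost of a WBS element (leaf cost, or sum of its children)."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

for element, children in wbs.items():
    print(f"{element}: {rollup(children):,}")
print("project total:", f"{rollup(wbs):,}")
```

Because actuals are collected against the same elements that were budgeted, comparing planned and expended cost at any node gives the manager the financial insight the text describes.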
     Managing change is one of the fundamental primitives of an iterative development process. With greater
    change freedom, a project can iterate more productively. This flexibility increases the content, quality,
    and number of iterations that a project can achieve within a given schedule. Change freedom has been
    achieved in practice through automation, and today's iterative development environments carry the
    burden of change management. Organizational processes that depend on manual change management
    techniques have encountered major inefficiencies.
Release Specifications
   The scope, plan, and objective evaluation criteria for each baseline release are derived from the vision
   statement as well as many other sources (make/buy analyses, risk management concerns, architectural
   considerations, shots in the dark, implementation constraints, quality thresholds). These artifacts are
   intended to evolve along with the process, achieving greater fidelity as the life cycle progresses and
   requirements understanding matures. Figure 6-6 provides a default outline for a release specification.
Release Descriptions
   Release description documents describe the results of each release, including performance against each
   of the evaluation criteria in the corresponding release specification. Release baselines should be
   accompanied by a release description document that describes the evaluation criteria for that
   configuration baseline and provides substantiation (through demonstration, testing, inspection, or
   analysis) that each criterion has been addressed in an acceptable manner. Figure 6-7 provides a default
   outline for a release description.
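The relationship between a release specification's objective evaluation criteria and the release description that substantiates them can be sketched as a simple criterion-by-criterion check; the criteria and measured values below are invented for illustration:

```python
# Release specification: objective evaluation criteria for the baseline.
criteria = {
    "mean_response_ms": lambda v: v <= 200,
    "defects_open":     lambda v: v <= 5,
    "tests_passed_pct": lambda v: v >= 95.0,
}

# Release description: measured results for the same baseline.
results = {"mean_response_ms": 180, "defects_open": 3, "tests_passed_pct": 97.5}

def assess(criteria, results):
    """Return (criterion, measured value, met?) for every criterion."""
    return [(name, results[name], check(results[name]))
            for name, check in criteria.items()]

for name, value, met in assess(criteria, results):
    print(f"{name}: {value} -> {'met' if met else 'NOT met'}")
```

The specification fixes the checks before the work is done; the description supplies the measurements afterward, so acceptance becomes a mechanical comparison rather than a matter of opinion.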
Status Assessments
   Status assessments provide periodic snapshots of project health and status, including the software project
   manager's risk assessment, quality indicators, and management indicators. Typical status assessments
   should include a review of resources, personnel staffing, financial data (cost and revenue), top 10 risks,
   technical progress (metrics snapshots), major milestone plans and results, total project or product scope,
   and action items.
Environment
An important emphasis of a modern approach is to define the development and maintenance environment
as a first-class artifact of the process. A robust, integrated development environment must support
automation of the development process. This environment should include requirements management,
visual modeling, document automation, host and target programming tools, automated regression testing,
and continuous and integrated change management with feature and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts in which
the system is delivered to a separate maintenance organization, deployment artifacts may include
computer system operations manuals, software installation manuals, plans and procedures for cutover
(from a legacy system), site surveys, and so forth. For commercial software products, deployment artifacts
may include marketing plans, sales rollout kits, and training courses.
In each phase of the life cycle, new artifacts are produced and previously developed artifacts are updated
to incorporate lessons learned and to capture further depth and breadth of the solution. Figure 6-8
identifies a typical sequence of artifacts across the life-cycle phases.
           ENGINEERING ARTIFACTS
    Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
    programming languages, or executable machine codes. Three engineering artifacts are explicitly
    intended for more general review, and they deserve further elaboration.
Vision Document
    The vision document provides a complete vision for the software system under development and
    supports the contract between the funding authority and the development organization. A project
    vision is meant to be changeable as understanding of the requirements, architecture, plans, and
    technology evolves. A good vision document should change slowly. Figure 6-9 provides a default outline
    for a vision document.
Architecture Description
    The architecture description provides an organized view of the software architecture under
    development. It is extracted largely from the design model and includes views of the design,
    implementation, and deployment sets sufficient to understand how the operational concept of the
    requirements set will be achieved. The breadth of the architecture description will vary from project
    to project depending on many factors. Figure 6-10 provides a default outline for an architecture
    description.
Software User Manual
The software user manual provides the user with the reference documentation necessary to support
the delivered software. Although content is highly variable across application domains, the user
manual should include installation procedures, usage procedures and guidance, operational
constraints, and a user interface description, at a minimum. For software products with a user
interface, this manual should be developed early in the life cycle because it is a necessary
mechanism for communicating and stabilizing an important subset of requirements. The user manual
should be written by members of the test team, who are more likely to understand the user's
perspective than the development team.
      PRAGMATIC ARTIFACTS
People want to review information but don't understand the language of the artifact. Many
interested reviewers of a particular artifact will resist having to learn the engineering language in
which the artifact is written. It is not uncommon to find people (such as veteran software managers,
veteran quality assurance specialists, or an auditing authority from a regulatory agency) who react as
follows: "I'm not going to learn UML, but I want to review the design of this software, so give me a
separate description such as some flowcharts and text that I can understand."
People want to review the information but don't have access to the tools. It is not very common
for the development organization to be fully tooled; it is extremely rare that the other stakeholders
have any capability to review the engineering artifacts on-line. Consequently, organizations are
forced to exchange paper documents. Standardized formats (such as UML, spreadsheets, Visual
Basic, C++, and Ada 95), visualization tools, and the Web are rapidly making it economically
feasible for all stakeholders to exchange
information electronically.
Human-readable engineering artifacts should use rigorous notations that are complete,
consistent, and used in a self-documenting manner. Properly spelled English words should be
used for all identifiers and descriptions. Acronyms and abbreviations should be used only where they
are well accepted jargon in the context of the component's usage. Readability should be emphasized
and the use of proper English words should be required in all engineering artifacts. This practice
enables understandable representations, browsable formats (paperless review), more-rigorous
notations, and reduced error rates.
Useful documentation is self-defining: It is documentation that gets used.
Paper is tangible; electronic artifacts are too easy to change. On-line and Web-based artifacts
can be changed easily and are viewed with more skepticism because of their inherent volatility.