SPM Unit-1


UNIT - I

Conventional Software Management: The waterfall model, conventional software management performance.

Evolution of Software Economics: Software Economics, pragmatic software cost estimation.

Improving Software Economics: Reducing software product size, improving software processes, improving team effectiveness, improving automation, achieving required quality, peer inspections.

The old way and the new: The principles of conventional software engineering, the principles of modern software management, transitioning to an iterative process.

Conventional software management


 Conventional software management practices are sound in theory, but in practice, they are still tied
to archaic (outdated) technology and techniques.
 Conventional software economics provides a benchmark of performance for conventional software
management principles.
 The best thing about software is its flexibility: it can be programmed to do almost anything.
 The worst thing about software is also its flexibility: the "almost anything" characteristic makes it
difficult to plan, monitor, and control software development.

Three Important Analyses of the Software Engineering Industry

1. Software development is still highly unpredictable. Only about 10% of software projects are delivered
successfully within initial budget and schedule estimates.

2. Management discipline is more of a discriminator in success or failure than are technology advances.

3. The level of software scrap and rework is indicative of an immature process.

All three analyses reached the same general conclusion: The success rate for software projects is very
low. The three analyses provide a good introduction to the magnitude of the software problem and the
current norms for conventional software management performance.

1. THE WATERFALL MODEL

 Most software engineering texts present the waterfall model as the source of the "conventional"
software process.

1.1 IN THEORY

 It provides an insightful and concise summary of conventional software management.


 Three primary points:
 There are two essential steps common to the development of computer programs:

o Analysis

o Coding

Waterfall Model part 1: The two basic steps to building a program.


 In order to manage and control all of the intellectual freedom associated with software development,
one must introduce several other "overhead" steps, including system requirements definition,
software requirements definition, program design, and testing. These steps supplement the analysis
and coding steps. Below Figure illustrates the resulting project profile and the basic steps in
developing a large-scale program.

 The basic framework described in the waterfall model is risky and invites failure. The testing
phase that occurs at the end of the development cycle is the first event for which timing, storage,
input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design
changes are likely to be so disruptive that the software requirements upon which the design is
based are likely violated. Either the requirements must be modified or a substantial design change
is warranted.

The five necessary improvements to the waterfall model are:


1. Program design comes first:

 Insert a preliminary program design phase between the software requirements generation phase
and the analysis phase. By this technique, the program designer assures that the software will not
fail because of storage, timing, and data flux (continuous change).
 As analysis proceeds in the succeeding phase, the program designer must impose on the analyst
the storage, timing, and operational constraints in such a way that he senses the consequences.
 If the total resources to be applied are insufficient or if the embryonic (in an early stage of
development) operational design is wrong, it will be recognized at this early stage and the iteration
with requirements and preliminary design can be redone before final design, coding, and test
commences.

How is this program design procedure implemented?


The following steps are required:

 Begin the design process with program designers, not analysts or programmers.

 Design, define, and allocate the data processing modes even at the risk of being wrong. Allocate
processing functions, design the database, allocate execution time, define interfaces and
processing modes with the operating system, describe input and output processing, and define
preliminary operating procedures.

 Write an overview document that is understandable, informative, and current so that every worker
on the project can gain an elemental understanding of the system.
2. Document the design:

 The amount of documentation required on most software programs is considerable, certainly much more than most programmers, analysts, or program designers are willing to produce if left to their own devices.

Why do we need so much documentation?


(1) Each designer must communicate with interfacing designers, managers, and possibly customers.
(2) During early phases, the documentation is the design.
(3) The real monetary value of documentation is to support later modifications by a separate test team, a
separate maintenance team, and operations personnel who are not software literate.

3. Do it twice:

 If a computer program is being developed for the first time, arrange matters so that the version
finally delivered to the customer for operational deployment is actually the second version insofar as
critical design/operations are concerned. Note that this is simply the entire process done in
miniature, to a time scale that is relatively small with respect to the overall effort.
 In the first version, the team must have a special broad competence where they can quickly sense
trouble spots in the design, model them, model alternatives, forget the straightforward aspects of
the design that aren't worth studying at this early point, and, finally, arrive at an error-free program.

4. Plan, control, and monitor testing:

 Without question, the biggest user of project resources—manpower, computer time, and/or
management judgment—is the test phase.
 This is the phase of greatest risk in terms of cost and schedule. It occurs at the latest point in the
schedule, when backup alternatives are least available, if at all.
 The previous three recommendations were all aimed at uncovering and solving problems before
entering the test phase. However, even after doing these things, there is still a test phase and there
are still important things to be done, including:
(1) employ a team of test specialists who were not responsible for the original design;
(2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of
two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too
expensive);
(3) test every logic path;
(4) employ the final checkout on the target computer.

5. Involve the customer:

 It is important to involve the customer in a formal way so that he has committed himself at earlier
points before final delivery.
 There are three points following requirements definition where the insight, judgment, and
commitment of the customer can bolster the development effort. These include:

1. a "preliminary software review" following the preliminary program design step,

2. a sequence of "critical software design reviews" during program design, and

3. a "final software acceptance review".


1.2 IN PRACTICE:

 Some software projects still practice the conventional software management approach. It is useful
to summarize the characteristics of the conventional process as it has typically been applied, which
is not necessarily as it was intended.
 Projects destined for trouble frequently exhibit the following symptoms:

1. Protracted integration and late design breakage.

2. Late risk resolution.

3. Requirements-driven functional decomposition.

4. Adversarial (conflict or opposition) stakeholder relationships.

5. Focus on documents and review meetings.

1)Protracted Integration and Late Design Breakage:


 For a typical development project that used a waterfall model management process, Figure 1-2
illustrates development progress versus time. Progress is defined as percent coded, that is,
demonstrable in its target form.

The following sequence was common:

 Early success via paper designs and thorough (often too thorough) briefings.

 Commitment to code late in the life cycle.

 Integration nightmares (unpleasant experiences) due to unforeseen implementation issues and interface ambiguities.

 Heavy budget and schedule pressure to get the system working.

 Late shoe-horning of non-optimal fixes, with no time for redesign.

 A very fragile, unmaintainable product delivered late.


 In the conventional model, the entire system was designed on paper, then implemented all at
once, then integrated.

Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.

2)Late Risk Resolution:


 A serious issue associated with the waterfall lifecycle was the lack of early risk resolution. Figure
1.3 illustrates a typical risk profile for conventional waterfall model projects.
 It includes four distinct periods of risk exposure, where risk is defined as the probability of missing
a cost, schedule, feature, or quality goal.
 Early in the life cycle, as the requirements were being specified, the actual risk exposure was
highly unpredictable.
3)Requirements-Driven Functional Decomposition:
 This approach depends on:

1. Specifying requirements completely and unambiguously before other development activities


begin.

2. Treating all requirements as equally important.

3. Assuming that requirements will remain constant throughout the software development life cycle.

 These conditions rarely occur in the real world. Specification of requirements is a difficult and important part of the software development process.
 Another property of the conventional approach is that:

1. The requirements were typically specified in a functional manner.

2. Built into the classic waterfall process was the fundamental assumption that the software itself
was decomposed into functions.

3. Requirements were then allocated to the resulting components.

 This decomposition was often very different from:

1. A decomposition based on object-oriented design

2. The use of existing components

 The result of requirements-driven approaches was a software structure organized around the requirements specification structure.
4)Adversarial Stakeholder Relationships:

 The conventional process tended to result in adversarial stakeholder relationships, in large part
because of the difficulties of requirements specification and the exchange of information solely
through paper documents that captured engineering information in ad hoc formats.
 The following sequence of events was typical for most contractual software efforts:

1. The contractor prepared a draft contract-deliverable document that captured an intermediate artifact and delivered it to the customer for approval.

2. The customer was expected to provide comments (typically within 15 to 30 days).

3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a final
version for approval.

 This one-shot review process encouraged high levels of sensitivity on the part of customers and
contractors.

5)Focus on Documents and Review Meetings:

 The conventional process focused on producing various documents that attempted to describe the
software product, with insufficient focus on producing tangible increments of the products
themselves.
 Contractors were driven to produce literally tons of paper to meet milestones and demonstrate
progress to stakeholders, rather than spend their energy on tasks that would reduce risk and
produce quality software.
 Typically, presenters and the audience reviewed the simple things that they understood rather than
the complex and important issues. Most design reviews therefore resulted in low engineering
value and high cost in terms of the effort and schedule involved in their preparation and conduct.
They presented merely a facade of progress.

Table 1-2 summarizes the results of a typical design review.


CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE:
 Barry Boehm's "Industrial Software Metrics Top 10 List” is a good, objective characterization of
the state of software development.

1. Finding and fixing a software problem after delivery costs 100 times more than finding and
fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source
lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, it
was 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software
programs. Software-system products (i.e., system of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.

Evolution of Software Economics


SOFTWARE ECONOMICS:

 Most software cost models can be abstracted into a function of five basic parameters:
1. size
2. process
3. personnel
4. environment and
5. required quality.

1. The size of the end product (in human-generated components), which is typically quantified in
terms of the number of source instructions or the number of function points required to develop the
required functionality.
2. The process used to produce the end product, in particular the ability of the process to avoid
non-value-adding activities (rework, bureaucratic delays, communications overhead).
3. The capabilities of software engineering personnel, and particularly their experience with the
computer science issues and the applications domain issues of the project.
4. The environment, which is made up of the tools and techniques available to support efficient
software development and to automate the process.
5. The required quality of the product, including its features, performance, reliability, and
adaptability.

 The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel) × (Environment) × (Quality) × (Size^Process)

 One important aspect of software economics (as represented within today's software cost models)
is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of
scale of software development is a result of the process exponent being greater than 1.0.
 Contrary to most manufacturing processes, the more software you build, the more expensive it is
per unit item.
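
A minimal numeric sketch of this relationship in Python (the parameter values and the 1.1 exponent are invented for illustration, not taken from any calibrated model):

    # Sketch of Effort = (Personnel) x (Environment) x (Quality) x (Size^Process).
    # All parameter values below are illustrative assumptions only.
    def effort(size_ksloc, personnel=1.0, environment=1.0, quality=1.0,
               process_exp=1.1):
        """Estimated effort for a project of the given size (arbitrary units)."""
        return personnel * environment * quality * (size_ksloc ** process_exp)

    for size in (10, 100, 1000):
        e = effort(size)
        print(f"{size:5} KSLOC -> effort {e:8.1f}, unit cost {e / size:.2f} per KSLOC")
    # Because the exponent exceeds 1.0, the cost per KSLOC rises with size:
    # a diseconomy of scale.
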
 Figure 2-1 shows three generations of basic technology advancement in tools, components, and
processes. The required levels of quality and personnel are assumed to be constant. The ordinate
of the graph refers to software unit costs realized by an organization.

 The three generations of software development are defined as follows:

1. Conventional (1960s and 1970s):


 Craftsmanship
 Organizations used custom tools, custom processes, and virtually all custom components
built in primitive languages.
 Project performance was highly predictable in that cost, schedule, and quality objectives
were almost always underachieved.
2. Transition (1980s and 1990s):
o Software engineering
o Organizations used more-repeatable processes and off-the-shelf tools, and mostly (>70%)
custom components built in higher level languages.
o Some of the components (<30%) were available as commercial products, including the
operating system, database management system, networking, and graphical user interface.
3. Modern practices (2000 and later):
o Software production
o This book's philosophy is rooted in the use of managed and measured processes, integrated
automation environments, and mostly (70%) off-the-shelf components.
o Perhaps as few as 30% of the components need to be custom built.
 Technologies for environment automation, size reduction, and process improvement are not
independent of one another. In each new era, the key is complementary growth in all
technologies. For example, the process advances could not be used successfully without new
component technologies and increased tool automation.
 Organizations are achieving better economies of scale in successive technology eras — with very
large projects (systems of systems), long-lived products, and lines of business comprising multiple
similar projects.
 Figure 2-2 provides an overview of how a return on investment (ROI) profile can be achieved in
subsequent efforts across life cycles of various domains.

PRAGMATIC SOFTWARE COST ESTIMATION:

 One critical problem in software cost estimation is a lack of well-documented case studies of
projects that used an iterative development approach.
 The software industry has inconsistently defined metrics or atomic units of measure. The data
from actual projects are highly suspect in terms of consistency and comparability.
 It is hard enough to collect a homogeneous set of project data within one organization; it is
extremely difficult to homogenize data across different organizations with different processes,
languages, domains, and so on.
 There have been many debates among developers and vendors of software cost estimation
models and tools. Three topics of these debates are of particular interest here:

1. Which cost estimation model should be used?

2. Should software size be measured in source lines of code or in function points?

3. What constitutes a good estimate?


 There are several popular cost estimation models such as:

o COCOMO

o CHECKPOINT

o ESTIMACS

o KnowledgePlan

o Price-S

o ProQMS

o SEER

o SLIM

o SOFTCOST

o SPQR/20

 Among these, COCOMO is one of the most open and well-documented cost estimation models.
 The general accuracy of conventional cost models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."
 Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down (estimating the "should" cost).
 Figure 2-3 illustrates the predominant practice: the software project manager defines the target cost of the software, and then manipulates the parameters and sizing until the target cost can be justified (a toy sketch of this inversion follows the bullets below).
 The rationale for the target cost may be to:

o Win a proposal

o Solicit customer funding

o Attain internal corporate funding

o Achieve some other goal

 The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to:

o Analyze the cost risks

o Understand the sensitivities and trade-offs objectively

 It forces the software project manager to examine the risks associated with achieving the target
costs and to discuss this information with other stakeholders.
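
As promised above, a toy Python sketch of the target-cost inversion. It uses the basic COCOMO organic-mode formula, Effort = 2.4 × KLOC^1.05 person-months, purely for illustration; the labor rate and target figure are invented assumptions:

    # Illustrative only: invert a simple cost model to "justify" a target cost.
    A, B = 2.4, 1.05           # basic COCOMO organic-mode coefficients
    COST_PER_PM = 20_000       # assumed fully burdened cost per person-month

    def size_justifying(target_cost):
        """Size (KSLOC) at which the model's output matches the target cost."""
        target_pm = target_cost / COST_PER_PM
        return (target_pm / A) ** (1 / B)

    target = 2_000_000  # the target cost the manager wants to justify
    print(f"To justify ${target:,}, claim about {size_justifying(target):.0f} KSLOC")

Running this shows that the $2,000,000 target is "justified" by claiming roughly 35 KSLOC, which is exactly the backward reasoning described above.
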

 A good software cost estimate has the following attributes:

o It is conceived and supported by the project manager, architecture team, development team,
and test team accountable for performing the work.

o It is accepted by all stakeholders as ambitious but realizable.

o It is based on a well-defined software cost model with a credible basis.

o It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.

o It is defined in enough detail so that its key risk areas are understood and the probability of
success is objectively assessed.
 Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost
model with an experience base that reflects multiple similar projects done by the same team
with the same mature processes and tools.

Improving Software Economics


Improving Software Economics:
 The five basic parameters of the software cost model point to five ways of improving software economics:

1. Reducing the size or complexity of what needs to be developed.

2. Improving the development process.

3. Using more-skilled personnel and better teams (not necessarily the same thing).

4. Using better environments (tools to automate the process).

5. Trading off or backing off on quality thresholds.

 The old process was geared toward ensuring that the user interface was completely analyzed and designed, because the project could afford only one construction cycle.
 The new process was geared toward taking the user interface through a few realistic versions,
incorporating user feedback all along the way and achieving a stable understanding of the
requirements and design issues in balance with one another.
1. REDUCING SOFTWARE PRODUCT SIZE:

 The goal is to improve affordability and return on investment (ROI) by producing a product that achieves the design goals with the minimum amount of human-generated source material.

 Component-based development is helpful for reducing the "source" language size to achieve a
software solution.

 Reuse, object-oriented technology, automatic code production, and higher order programming
languages are all focused on achieving a given system with fewer lines of human-specified source
directives.

 Size reduction is the primary motivation behind improvements in higher order languages (such as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling tools, GUI builders), reuse of commercial components (operating systems, windowing environments, DBMS, middleware, networks), and object-oriented technologies (Unified Modeling Language, visual modeling tools, architecture frameworks).
 The reduction is defined in terms of human-generated source material. In general, when size-
reducing technologies are used, they reduce the number of human-generated source lines.
 Mature and reliable size reduction technologies are therefore extremely important for producing economic benefits.
 Immature size reduction technologies may reduce the development size but require so much
more investment in achieving the necessary levels of quality and performance that they have a
negative impact on overall project performance.

Language:
 Two popular measures are used to estimate how much high-level languages and tools reduce the human-generated code needed for a software project.

 One is Universal Function Points (UFP), which consider:

o External user inputs

o External outputs

o Internal logical data groups


o External data interfaces

o External inquiries

These are used to indicate relative program size to implement required functionality.
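
A minimal sketch in Python of an unadjusted function point count over these five element types; the weights are the commonly cited average-complexity weights, and the element counts are hypothetical:

    # Unadjusted function point count; the weights are widely quoted averages.
    AVERAGE_WEIGHTS = {
        "external_user_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_data_groups": 10,
        "external_data_interfaces": 7,
    }

    def unadjusted_fp(counts):
        """Sum each element count times its average weight."""
        return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

    counts = {"external_user_inputs": 12, "external_outputs": 8,
              "external_inquiries": 5, "internal_logical_data_groups": 4,
              "external_data_interfaces": 2}  # hypothetical small application
    print(unadjusted_fp(counts), "unadjusted function points")  # prints 162

A real count also rates each element as simple, average, or complex before weighting.
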

 The second is SLOC, which can be used to estimate size once a solution is formulated and the implementation language is known.

 The selection of the language for the software project development depends on what kind of data
the application (project) will process, because each language has a domain of usage.

 For example, Visual Basic is very expressive and powerful in building simple interactive
applications, but it is not suitable for real-time, embedded programming.

 Generally, high-level languages provide numerous software engineering technology advances, including:

o Language-enforced configuration control

o Separation of interface and implementation

o Architecture control primitives

o Encapsulation

o Concurrency control

o And many other features to reduce the source lines of code

LANGUAGES:
 The reduction of code may increase:

o Understandability

o Changeability

o Reliability

 But some of the drawbacks of size reduction are:

o High-level abstraction reduces performance

o Increased consumption of resources

o And other related issues


Universal Function Points

 Universal function points can be used to indicate the relative program sizes required to implement a
given functionality.

EXAMPLE:

To achieve a given application with a fixed number of function points, one of the following program sizes
would be required:

 1,000,000 lines of assembly language


 400,000 lines of C
 220,000 lines of Ada 83
 175,000 lines of Ada 95 or C++

These values indicate the relative expressiveness provided by various languages.

 With the use of commercial components and automatic code generators, the size of human-
generated source code can be further reduced, which in turn reduces the size of the team and the
time needed for development.
 Example with integration:
o 75,000 lines of Ada or C++ with integration of several commercial components
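
A short Python computation of the relative expressiveness implied by the figures above, normalized to assembly language and using only the sizes quoted in this section:

    # Relative expressiveness derived from the sizes quoted above.
    sloc_for_same_app = {
        "assembly": 1_000_000,
        "C": 400_000,
        "Ada 83": 220_000,
        "Ada 95 / C++": 175_000,
        "Ada/C++ plus commercial components": 75_000,
    }
    baseline = sloc_for_same_app["assembly"]
    for language, sloc in sloc_for_same_app.items():
        print(f"{language:36} {sloc:9,} SLOC ({baseline / sloc:4.1f}x assembly)")

The resulting ratios (2.5x for C, 4.5x for Ada 83, 5.7x for Ada 95/C++, 13.3x with commercial components) quantify how each step up in language level and component reuse shrinks the human-generated source.
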

OBJECT-ORIENTED METHODS AND VISUAL MODELING:

Object-Oriented Programming Languages:

 Object-oriented programming languages benefit both software productivity and software quality.

 The fundamental impact of object-oriented technology is in reducing the overall size of what
needs to be developed.

Diagrammatic Models:

 People like drawing pictures to explain something to others or to themselves.

 When they do this for software system design, they call the pictures diagrams or diagrammatic models, and the notation used to draw them a modeling language.

Examples of the Interrelationships Among the Dimensions of Improving Software Economics:

1. An object-oriented model of the problem and its solution encourages a common vocabulary
between the end users of a system and its developers, thus creating a shared understanding of
the problem being solved.

2. The use of continuous integration creates opportunities to recognize risk early and make
incremental corrections without destabilizing the entire development effort.

3. An object-oriented architecture provides a clear separation of concerns among disparate elements of a system, creating firewalls that prevent a change in one part of the system from rending the fabric of the entire architecture.
Five Characteristics (as per Booch) of a Successful Object-Oriented Project:

1. A ruthless focus on the development of a system that provides a well-understood collection of essential minimal characteristics.

2. The existence of a culture that is centered on results, encourages communication, and yet is not
afraid to fail.

3. The effective use of object-oriented modeling.

4. The existence of a strong architectural vision.

5. The application of a well-managed iterative and incremental development life cycle.

REUSE:

 Reusing existing components and building reusable components have been natural software
engineering activities.

 With reuse, the goal is to minimize development costs while achieving all the other required
attributes of performance, feature set, and quality.

 Try to treat reuse as a mundane part of achieving a return on investment.

 Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:

o They have an economic motivation for continued support.

o They take ownership of improving product quality, adding new features, and transitioning to
new technologies.

o They have a sufficiently broad customer base to be profitable.

 The cost of developing a reusable component is not trivial.

 Reuse is an important discipline that has an impact on the efficiency of all workflows and the
quality of most artifacts.

 Cost and schedule investments are necessary to achieve reusable components.


COMMERCIAL COMPONENTS:

 Try to maximize integration of commercial components and off-the-shelf products.


 The use of commercial components is certainly desirable as a means of reducing custom development.

2. IMPROVING SOFTWARE PROCESSES:

 Process is a broad term that covers the many activities involved in software development.
 In a software-oriented organization, several processes and subprocesses run at once.
 Generally, the processes in an organization are divided into three categories:

Three Levels of Process Identified in an Organization:

1. Metaprocess:

 An organization's policies, procedures, and practices for pursuing a software-intensive line of business.
 The focus of this process is on organizational economics, long-term strategies, and software ROI.

2. Macroprocess:

 A project's policies, procedures, and practices for producing a complete software product within certain cost, schedule, and quality constraints.
 The focus of the macroprocess is on creating an adequate instance of the metaprocess for a specific set of constraints.
3. Microprocess:

 A project team's policies, procedures, and practices for achieving an artifact of the
software process.
 The focus of the microprocess is on achieving an intermediate product baseline with
adequate quality and functionality, as economically and rapidly as practical.

 The three levels of the processes (Metaprocess, Macroprocess, Microprocess) are overlapped at
some point in the project development process, even though they have different objectives,
audiences, metrics, concerns, and time scales.
 Here, we consider the macroprocesses, which are project-level processes to reduce the cost of
project development.
 To make the project successful, there should be an integration of all processes, which may be
implemented through sequential and parallel steps.
Types of Project Development Process Activities

1. Productive Activities:

 These produce tangible progress toward the end product.


 Examples: Prototyping, modeling, coding, debugging, etc.

2. Overhead Activities:

 These have an intangible impact on the end product.


 Examples: Plan preparation, progress monitoring, risk assessment, financial
assessment, configuration control, quality assessment, integration and testing, etc.
 Though value-adding, overhead activities are often overlooked.

 The goal of processes is to maximize resource allocation to productive activities and minimize
the impact of overhead activities on resources such as personnel, computers, and schedule.

Impact of Process Quality:

 The quality of the software process strongly affects the required effort and therefore the schedule for producing the software product.
 In practice, the difference between a good and bad process can affect overall cost estimates by
50% to 100%.
 Therefore, reducing inefficiencies will improve the overall schedule.

Three Dimensions of Schedule Improvement:

1. Improve the efficiency of each step in an N-step process.


2. Eliminate steps from an N-step process to reduce it to an M-step process.
3. Use concurrency in activities or resource allocation.

 Primary focus of process improvement: achieve an adequate solution in the minimum number of iterations and eliminate downstream scrap and rework.
 In an ideal process, activities are managed so that scrap and rework are avoided; this is the essence of successful software process improvement.

3. IMPROVING TEAM EFFECTIVENESS:

 Personnel management has a significant impact on software project development.


 The COCOMO model highlights that the combination of personnel skills and experience can
impact productivity by a factor of four.
 However, it is difficult to measure the performance of a software team objectively.
 Managers often follow the simple rule: "hire good people."

Characteristics of a Good Software Development Team

1. Balance – The team must not be skewed toward one type of role.
2. Coverage – Strong individuals must fill all key positions (planners, designers, coders, testers,
trainers, etc.).

Project Management and Staffing Principles:

 A well-managed project can succeed with a nominal engineering team.


 A mismanaged project will almost never succeed, even with an expert team of engineers.
 A well-architected system can be built by a nominal team of software builders.
 A poorly architected system will flounder even with an expert team of builders.

Boehm's Five Staffing Principles:

1. The principle of top talent: Use better and fewer people.


2. The principle of job matching: Fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: An organization does best in the long run by helping its
people to self-actualize.
4. The principle of team balance: Select people who will complement and harmonize with one
another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.

Leadership Qualities in Software Project Management

 Software project managers need many leadership qualities in order to enhance team
effectiveness.

Attributes of Successful Software Project Managers:

 Hiring skills: Few decisions are as important as hiring decisions. Placing the right person in the
right job seems obvious but is surprisingly hard to achieve.
 Customer-interface skill: Avoiding adversarial relationships among stakeholders is a prerequisite
for success.
 Decision-making skill: The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making
skill seems obvious despite its intangible definition.
 Team-building skill: Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and
consolidate diverse opinions into a team direction.
 Selling skill: Successful project managers must sell all stakeholders (including themselves) on
decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face
of resistance, and sell achievements against objectives. In practice, selling requires continuous
negotiation, compromise, and empathy.
4. IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS:

 The tools and environment have a linear effect on the productivity of the process.
 Planning tools, requirements management tools, visual modeling tools, compilers, editors,
debuggers, quality assurance analysis tools, test tools, and user interfaces provide crucial
automation support for evolving the software engineering artifacts.
 Configuration management environments provide the foundation for executing and instrumenting the process.
 At first order, the isolated impact of tools and automation generally allows improvements of 20% to
40% in effort.
 Tools and environments must be viewed as the primary delivery vehicle for process automation and
improvement, so their impact can be much higher.
 Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team.
 Round-trip engineering describes the key capability of environments that support iterative
development. As we have moved into maintaining different information repositories for the
engineering artifacts, we need automation support to ensure efficient and error-free transition of
data from one artifact to another.
 Forward engineering is the automation of one engineering artifact from another, more abstract
representation. (Compilers and linkers have provided automated transition of source code into
executable code).
 Reverse engineering is the generation or modification of a more abstract representation from an
existing artifact (for example, creating a visual design model from a source code representation).

Effort Distribution in the Software Development Life Cycle

 Requirements analysis and evolution activities consume 40% of life-cycle costs.


 Software design activities have an impact on more than 50% of the resources.
 Coding and unit testing activities consume about 50% of software development effort and schedule.
 Test activities can consume as much as 50% of a project's resources.
 Configuration control and change management are critical activities that can consume as much as
25% of resources on a large-scale project.
 Documentation activities can consume more than 30% of project engineering resources.
 Project management, business administration, and progress assessment can consume as much as
30% of project budgets.

5. ACHIEVING REQUIRED QUALITY:

 Software best practices are derived from the development process and technologies. Key practices
that improve overall software quality include the following:
 Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life
cycle on a balance between requirements evolution, design evolution, and plan evolution.
 Using metrics and indicators to measure the progress and quality of an architecture as it evolves
from a high-level prototype into a fully compliant product.
 Providing integrated life-cycle environments that support early and continuous configuration control,
change management, rigorous design methods, document automation, and regression test
automation.
 Using visual modeling and higher-level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation.
 Early and continuous insight into performance issues through demonstration-based evaluations.
Events in Performance Assessment:

 Project Inception:
The proposed design was asserted to be low risk with adequate performance margin.
 Initial Design Review:
Optimistic assessments of adequate design margin were based mostly on paper analysis or rough
simulation of the critical threads.
 Mid-Life-Cycle Design Review:
The assessments started whittling away at the margin, as early benchmarks and initial tests began
exposing the optimism inherent in earlier estimates.
 Integration and Test:
Serious performance problems were uncovered, necessitating fundamental changes in the
architecture. The underlying infrastructure was usually the scapegoat, but the real culprit was
immature use of the infrastructure, immature architectural solutions, or poorly understood early
design trade-offs.
Peer Inspections – A Pragmatic View:

 Peer reviews are valuable, but they are rarely significant contributors to quality compared with the
following primary quality mechanisms and indicators, which should be emphasized in the
management process:
 Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the engineering
artifacts.
 Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in
the context of relevant use cases.
 Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control.
 Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements
compliance.
 Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals.
 Inspections are also a good vehicle for holding authors accountable for quality products.
 All authors of software and documentation should have their products scrutinized as a natural by-
product of the process.
 Therefore, the coverage of inspections should be across all authors rather than across all
components.

THE OLD WAY AND THE NEW

THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING

1. Make quality #1.
Quality must be quantified and mechanisms put into place to motivate its achievement.
2. High-quality software is possible.
Techniques that have been demonstrated to increase quality include involving the customer,
prototyping, simplifying design, conducting inspections, and hiring the best people.
3. Give products to customers early.
No matter how hard you try to learn users' needs during the requirements phase, the most effective
way to determine real needs is to give users a product and let them play with it.
4. Determine the problem before writing the requirements.
When faced with what they believe is a problem, most engineers rush to offer a solution. Before you
try to solve a problem, be sure to explore all the alternatives and don't be blinded by the obvious
solution.
5. Evaluate design alternatives.
After the requirements are agreed upon, you must examine a variety of architectures and algorithms. You certainly do not want to use an architecture simply because it was used in the requirements specification.
6. Use an appropriate process model.
Each project must select a process that makes the most sense for that project on the basis of
corporate culture, willingness to take risks, application area, volatility of requirements, and the
extent to which requirements are well understood.
7. Use different languages for different phases.
Our industry's eternal thirst for simple solutions to complex problems has driven many to declare
that the best development method is one that uses the same notation throughout the life cycle.
8. Minimize intellectual distance.
To minimize intellectual distance, the software's structure should be as close as possible to the real-
world structure.
9. Put techniques before tools.
An undisciplined software engineer with a tool becomes a dangerous, undisciplined software
engineer.
10. Get it right before you make it faster.
It is far easier to make a working program run faster than it is to make a fast program work. Don't
worry about optimization during initial coding.
11. Inspect code.
Inspecting the detailed design and code is a much better way to find errors than testing.
12. Good management is more important than good technology.
Good management motivates people to do their best, but there are no universal "right" styles of
management.
13. People are the key to success.
Highly skilled people with appropriate experience, talent, and training are key.
14. Follow with care.
Just because everybody is doing something does not make it right for you. It may be right, but you
must carefully assess its applicability to your environment.
15. Take responsibility.
When a bridge collapses we ask, "What did the engineers do wrong?" Even when software fails, we
rarely ask this. The fact is that in any engineering discipline, the best methods can be used to
produce awful designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities.
It is possible the customer would tolerate 90% of the functionality delivered late if they could have
10% of it on time.
17. The more they see, the more they need.
The more functionality (or performance) you provide a user, the more functionality (or performance)
the user wants.
18. Plan to throw one away.
One of the most important critical success factors is whether or not a product is entirely new. Such
brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for change.
The architectures, components, and specification techniques you use must accommodate change.
20. Design without documentation is not design.
I have often heard software engineers say, "I have finished the design. All that is left is the
documentation."
21. Use tools, but be realistic.
Software tools make their users more efficient.
22. Avoid tricks.
Many programmers love to create programs with tricks: constructs that perform a function correctly, but in an obscure way. Show the world how smart you are by avoiding tricky code.
23. Encapsulate.
Information-hiding is a simple, proven concept that results in software that is easier to test and much
easier to maintain.
24. Use coupling and cohesion.
Coupling and cohesion are the best ways to measure software's inherent maintainability and
adaptability.
25. Use the McCabe complexity measure.
Although there are many metrics available to report the inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's (a toy illustration follows this list).
26. Don't test your own software.
Software developers should never be the primary testers of their own software.
27. Analyze causes for errors.
It is far more cost-effective to reduce the effect of an error by preventing it than it is to find and fix it.
One way to do this is to analyze the causes of errors as they are detected.
28. Realize that software's entropy increases.
Any software system that undergoes continuous change will grow in complexity and will become
more and more disorganized.
29. People and time are not interchangeable.
Measuring a project solely by person-months makes little sense.
30. Expect excellence.
Your employees will do much better if you have high expectations for them.
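
McCabe's cyclomatic complexity, referenced in principle 25, is V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components; for a single-entry, single-exit routine it reduces to the number of decision points plus one. A toy keyword-counting approximation in Python (a crude stand-in for real control-flow analysis, shown only to make the idea concrete):

    import re

    # Count decision keywords on word boundaries, then add 1.
    # A rough approximation of decision points, not a real parser.
    DECISIONS = re.compile(r"\b(if|elif|for|while|and|or|case)\b")

    def approx_cyclomatic_complexity(source: str) -> int:
        return 1 + len(DECISIONS.findall(source))

    SAMPLE = '''
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"
    '''
    print(approx_cyclomatic_complexity(SAMPLE))  # 3: two decisions plus one
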

THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT

Top 10 principles of modern software management are (The first five, which are the main themes of my
definition of an iterative process, are summarized in Figure 4-1):

1. Base the process on an architecture-first approach.
This requires that a demonstrable balance be achieved among the driving requirements, the architecturally significant design decisions, and the life-cycle plans before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early.
With today's sophisticated software systems, it is not possible to define the entire problem, design
the entire solution, build the software, and then test the end product in sequence. Instead, an
iterative process that refines the problem understanding, an effective solution, and an effective plan
over several iterations encourages a balanced treatment of all stakeholder objectives. Major risks
must be addressed early to increase predictability and avoid expensive downstream scrap and
rework.
3. Transition design methods to emphasize component-based development.
Moving from a line-of-code mentality to a component-based mentality is necessary to reduce the
amount of human-generated source code and custom development.
4. Establish a change management environment.
The dynamics of iterative development, including concurrent workflows by different teams working
on shared artifacts, necessitates objectively controlled baselines.
5. Enhance change freedom through tools that support round-trip engineering.
Round-trip engineering is the environment support necessary to automate and synchronize
engineering information in different formats (such as requirements specifications, design models,
source code, executable code, test cases).
6. Capture design artifacts in rigorous, model-based notation.
A model-based approach (such as UML) supports the evolution of semantically rich graphical and
textual design notations.
7. Instrument the process for objective quality control and progress assessment.
Life-cycle assessment of the progress and the quality of all intermediate products must be
integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail.
It is essential that the software management process drive toward early and continuous
demonstrations within the operational context of the system, namely its use cases.
10. Establish a configurable process that is economically scalable.
No single process is suitable for all software developments.

Table 4-1: Mapping Top 10 Risks of the Conventional Process to the Key Attributes and Principles of a
Modern Process
TRANSITIONING TO AN ITERATIVE PROCESS:

 Modern software development processes have moved away from the conventional waterfall model,
in which each stage of the development process is dependent on completion of the previous stage.
 The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify.
 As one benchmark of the expected economic impact of process improvement, consider the process exponent parameters of the COCOMO II model. (Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually no diseconomy of scale) to 1.26 (significant diseconomy of scale); a small sketch after the following list shows what this range implies.
 The parameters that govern the value of the process exponent are:

1. Application precedentedness
2. Process flexibility
3. Architecture risk resolution
4. Team cohesion
5. Software process maturity
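
A small Python sketch of what this exponent range implies for relative effort; the full COCOMO II model also applies multiplicative cost drivers, which are deliberately ignored here:

    # Effort ratio between a worst-case (1.26) and a best-case (1.01) process
    # exponent at the same size; every cost driver other than size is ignored.
    def relative_effort(size_ksloc, exponent):
        return size_ksloc ** exponent

    for size in (100, 1000):
        ratio = relative_effort(size, 1.26) / relative_effort(size, 1.01)
        print(f"{size:5} KSLOC: worst-case process costs {ratio:.1f}x the best case")

At 100 KSLOC the spread is about 3.2x; at 1,000 KSLOC it grows to about 5.6x, which is why process improvement matters most on large projects.
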

The following paragraphs map the process exponent parameters of COCOMO II to the top 10
principles of a modern process:

1. Application precedentedness:

 Domain experience is a critical factor in understanding how to plan and execute a software
development project. For unprecedented systems, one of the key goals is to confront risks and
establish early precedents, even if they are incomplete or experimental.
 This is one of the primary reasons that the software industry has moved to an iterative life-cycle
process. Early iterations in the life cycle establish precedents from which the product, the process,
and the plans can be elaborated in evolving levels of detail.

2. Process flexibility:

 Development of modern software is characterized by such a broad solution space and so many
interrelated concerns that there is a paramount need for continuous incorporation of changes.
These changes may be inherent in the problem understanding, the solution space, or the plans.
 Project artifacts must be supported by efficient change management commensurate with project
needs. A configurable process that allows a common framework to be adapted across a range of
projects is necessary to achieve a software return on investment.

3. Architecture risk resolution:

 Architecture-first development is a crucial theme underlying a successful iterative development process. A project team develops and stabilizes an architecture before developing all the components that make up the entire suite of applications.
 An architecture-first and component-based development approach forces the infrastructure,
common mechanisms, and control mechanisms to be elaborated early in the life cycle and drives all
component make/buy decisions into the architecture process.
4. Team cohesion:

 Successful teams are cohesive, and cohesive teams are successful. Successful teams and
cohesive teams share common objectives and priorities. Advances in technology (such as
programming languages, UML, and visual modeling) have enabled more rigorous and
understandable notations for communicating software engineering information, particularly in the
requirements and design artifacts that previously were ad hoc and based completely on paper
exchange.
 These model-based formats have also enabled the round-trip engineering support needed to
establish change freedom sufficient for evolving design representations.

5. Software process maturity:

 The Software Engineering Institute's Capability Maturity Model (CMM) is a well-accepted benchmark for software process assessment.
 One of its key themes is that truly mature processes are enabled through an integrated environment that provides the appropriate level of automation to instrument the process for objective quality control.
