Software Construction & Design Model
IPU Notes for Unit 3 OOSE

OBJECT ORIENTED SOFTWARE ENGINEERING

UNIT-3
Construction
Software construction is a software engineering discipline. It is the detailed creation of working, meaningful software through a combination of coding, verification, unit testing, integration testing,
and debugging. It is linked to all the other software engineering disciplines, most strongly to software
design and software testing.

The design model


A design model in Software Engineering is an object-based picture or pictures that
represent the use cases for a system. Or to put it another way, it is the means to describe a
system's implementation and source code in a diagrammatic fashion. This type of
representation has a couple of advantages. First, it is a simpler representation than words
alone. Second, a group of people can look at these simple diagrams and quickly get the
general idea behind a system. In the end, it boils down to the old adage, 'a picture is worth
a thousand words.'

Design Model Dimensions


• The design model can be viewed in two different dimensions:
  o (Horizontally) The process dimension indicates the evolution of the parts of the design model as each design task is executed.
  o (Vertically) The abstraction dimension represents the level of detail as each element of the analysis model is transformed into the design model and then iteratively refined.
• The elements of the design model use many of the same UML diagrams that were used in the analysis model. The difference is that these diagrams are:
  o refined and elaborated as part of design;
  o given more implementation-specific detail;
  o focused on the architectural structure and style, the components that reside within the architecture, and the interfaces between the components and with the outside world.
Block Design
In combinatorial mathematics, a block design is a set together with a family of
subsets (repeated subsets are allowed at times) whose members are chosen to satisfy some
set of properties that are deemed useful for a particular application. These applications come
from many areas, including experimental design, finite geometry, physical chemistry, software
testing, cryptography, and algebraic geometry. Many variations have been examined, but the
most intensely studied are the balanced incomplete block designs (BIBDs or 2-designs)
which historically were related to statistical issues in the design of experiments.[1][2]
A block design in which all the blocks have the same size is called uniform. The designs
discussed here are all uniform. Pairwise balanced designs (PBDs) are examples of
block designs that are not necessarily uniform.

Working with construction


Software construction fundamentals
Minimizing complexity
The need to reduce complexity is mainly driven by the limited ability of most people to hold complex
structures and information in their working memories. Reduced complexity is achieved through
emphasizing the creation of code that is simple and readable rather than clever.
Minimizing complexity is accomplished through making use of standards, and through numerous
specific techniques in coding. It is also supported by the construction-focused quality techniques.
Anticipating change
Anticipating change helps software engineers build extensible software, which means they can
enhance a software product without disrupting the underlying structure.[2] Research over 25 years
showed that the cost of rework can be 10 to 100 times (5 to 10 times for smaller projects) more
expensive than getting the requirements right the first time. Given that 25% of the requirements
change during development on an average project, the need to reduce the cost of rework underscores the
need for anticipating change.
Constructing for verification
Constructing for verification means building software in such a way that faults can be ferreted out
readily by the software engineers writing the software, as well as during independent testing and
operational activities. Specific techniques that support constructing for verification include following
coding standards to support code reviews, unit testing, organizing code to support automated
testing, and restricted use of complex or hard-to-understand language structures, among others.
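For illustration only, here is a minimal sketch of what constructing for verification can look like in practice (Java and JUnit 5 assumed; the PriceCalculator class and its rules are hypothetical, not taken from the notes): a small, side-effect-free routine paired with an automated unit test that exercises it through its public interface.

// A small, side-effect-free routine is straightforward to verify.
public class PriceCalculator {

    // Named constant instead of a "magic number" keeps the rule reviewable.
    private static final double TAX_RATE = 0.18;

    public double totalWithTax(double netPrice) {
        if (netPrice < 0) {
            throw new IllegalArgumentException("netPrice must be non-negative");
        }
        return netPrice * (1 + TAX_RATE);
    }
}

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Automated test that exercises the routine through its public interface.
class PriceCalculatorTest {

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void addsTaxToNetPrice() {
        assertEquals(118.0, calculator.totalWithTax(100.0), 0.001);
    }

    @Test
    void rejectsNegativePrice() {
        assertThrows(IllegalArgumentException.class, () -> calculator.totalWithTax(-1.0));
    }
}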
Reuse
Systematic reuse can enable significant software productivity, quality, and cost improvements.
Reuse has two closely related facets:

• Construction for reuse: Create reusable software assets.


• Construction with reuse: Reuse software assets in the construction of a new solution (both facets are sketched below).
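As a rough sketch (the Validation and RegistrationForm classes below are hypothetical, assuming Java), construction for reuse means packaging a general-purpose asset behind a small, documented interface, while construction with reuse means calling that asset instead of re-coding the same check in a new solution.

// Construction FOR reuse: a small, general-purpose asset.
public final class Validation {

    private Validation() {
        // utility class, not meant to be instantiated
    }

    // Returns true if the string is non-null and contains at least one non-blank character.
    public static boolean isNonBlank(String s) {
        return s != null && !s.trim().isEmpty();
    }
}

// Construction WITH reuse: a new solution calls the asset instead of re-coding the check.
class RegistrationForm {

    boolean isUsernameAcceptable(String username) {
        return Validation.isNonBlank(username) && username.length() <= 30;
    }
}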
Standards in construction
Standards, whether external (created by international organizations) or internal (created at the
corporate level), that directly affect construction issues include:
• Communication methods: Such as standards for document formats and contents.
• Programming languages
• Coding standards
• Platforms
• Tools: Such as diagrammatic standards for notations like UML.

Managing construction
Construction model
Numerous models have been created to develop software, some of which emphasize construction
more than others. Some models are more linear from the construction point of view, such as
the Waterfall and staged-delivery life cycle models. These models treat construction as an activity
which occurs only after significant prerequisite work has been completed—including
detailed requirements work, extensive design work, and detailed planning. Other models are
more iterative, such as evolutionary prototyping, Extreme Programming, and Scrum. These
approaches tend to treat construction as an activity that occurs concurrently with other software
development activities, including requirements, design, and planning, or overlaps them.
Construction planning
The choice of construction method is a key aspect of the construction planning activity. The choice
of construction method affects the extent to which construction prerequisites (e.g., requirements
analysis, software design, etc.) are performed, the order in which they are performed, and the
degree to which they are expected to be completed before construction work begins. Construction
planning also defines the order in which components are created and integrated, the software quality
management processes, the allocation of task assignments to specific software engineers, and the
other tasks, according to the chosen method.
Construction measurement
Numerous construction activities and artifacts can be measured, including code developed, code
modified, code reused, code destroyed, code complexity, code inspection statistics, fault-fix and
fault-find rates, effort, and scheduling. These measurements can be useful for purposes of managing
construction, ensuring quality during construction, improving the construction process, as well as for
other reasons.

Practical considerations
Software construction is driven by many practical considerations:
Construction design
In order to account for unanticipated gaps in the software design, some design modifications must be
made during construction, on a smaller or larger scale, to flesh out the details of the software design.
Low fan-out is one of the design characteristics found to be beneficial by researchers. Information
hiding also proved to be a useful design technique: in large programs, it made the code easier to modify by
a factor of 4.
Construction languages
Construction languages include all forms of communication by which a human can specify an
executable problem solution to a computer. They include configuration languages, toolkit languages,
and programming languages:
• Configuration languages are languages in which software engineers choose from a limited set of
predefined options to create new or custom software installations.
• Toolkit languages are used to build applications out of toolkits and are more complex than
configuration languages.
• Scripting languages are kinds of application programming languages that support scripts, which
are often interpreted rather than compiled.
• Programming languages are the most flexible type of construction language; they use three
general kinds of notation:
o Linguistic notations which are distinguished in particular by the use of word-like strings of
text to represent complex software constructions, and the combination of such word-like
strings into patterns that have a sentence-like syntax.
o Formal notations which rely less on intuitive, everyday meanings of words and text strings
and more on definitions backed up by precise, unambiguous, and formal (or mathematical)
definitions.
o Visual notations which rely much less on the text-oriented notations of both linguistic and
formal construction, and instead rely on direct visual interpretation and placement of visual
entities that represent the underlying software.
Programmers working in a language they have used for three years or more are about 30 percent
more productive than programmers with equivalent experience who are new to a language. High-
level languages such as C++, Java, Smalltalk, and Visual Basic yield 5 to 15 times better
productivity, reliability, simplicity, and comprehensibility than low-level languages such as assembly
and C. Equivalent code has been shown to need fewer lines when implemented in high-level
languages than in lower-level languages.
Coding
The following considerations apply to the software construction coding activity:

• Techniques for creating understandable source code, including naming and source code layout (several of the points in this list are illustrated in the sketch that follows it).
One study showed that the effort required to debug a program is minimized when variables'
names are between 10 and 16 characters.
• Use of classes, enumerated types, variables, named constants, and other similar entities:
o A study done by NASA showed that putting the code into well-factored classes can
double code reusability compared to code developed using a functional design.
o One experiment showed that designs which access arrays sequentially, rather than
randomly, result in fewer variables and fewer variable references.
• Use of control structures:
o One experiment found that loops-with-exit are more comprehensible than other kinds of
loops.
o Regarding the level of nesting in loops and conditionals, studies have shown that
programmers have difficulty comprehending more than three levels of nesting.
o Control flow complexity has been shown to correlate with low reliability and frequent errors.
• Handling of error conditions—both planned errors and exceptions (input of bad data, for
example)
• Prevention of code-level security breaches (buffer overruns or array index overflows, for
example)
• Control of resource usage via exclusion mechanisms and discipline in accessing serially
reusable resources (including threads or database locks)
• Source code organization (into statements and routines):
o Highly cohesive routines proved to be less error prone than routines with lower cohesion. A
study of 450 routines found that 50 percent of the highly cohesive routines were fault free
compared to only 18 percent of routines with low cohesion. Another study of a different 450
routines found that routines with the highest coupling-to-cohesion ratios had 7 times as
many errors as those with the lowest coupling-to-cohesion ratios and were 20 times as
costly to fix.
o Although studies showed inconclusive results regarding the correlation between routine
sizes and the rate of errors in them, one study found that routines with fewer than 143
lines of code were 2.4 times less expensive to fix than larger routines. Another study
showed that the code needed to be changed least when routines averaged 100 to 150 lines
of code. Another study found that structural complexity and amount of data in a routine were
correlated with errors regardless of its size.
o Interfaces between routines are some of the most error-prone areas of a program. One
study showed that 39 percent of all errors were errors in communication between routines.
o Unused variables are correlated with an increased error rate. In one study, only 17 to 29
percent of routines with more than one unreferenced variable had no errors, compared to 46
percent in routines with no unused variables.
o The number of parameters of a routine should be kept to a maximum of about seven, as research
has found that people generally cannot keep track of more than about seven chunks of information at once.
• Source code organization (into classes, packages, or other structures). When
considering containment, the maximum number of data members in a class shouldn't exceed
7±2. Research has shown that this number is the number of discrete items a person can
remember while performing other tasks. When considering inheritance, the number of levels in
the inheritance tree should be limited. Deep inheritance trees have been found to be significantly
associated with increased fault rates. When considering the number of routines in a class, it
should be kept as small as possible. A study on C++ programs has found an association
between the number of routines and the number of faults.[10]
• Code documentation
• Code tuning
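The following sketch (a hypothetical ReportReader class, assuming Java; the limits and names are invented for illustration) pulls together several of the points above: descriptive names, an enumerated type instead of integer status codes, a named constant, a loop-with-exit, a short parameter list, and explicit handling of a planned error condition.

import java.util.List;

class ReportReader {

    // Enumerated type instead of "magic" integer status codes.
    enum Status { OK, EMPTY_INPUT, TOO_MANY_RECORDS }

    // Named constant documents the limit and keeps it in one place.
    private static final int MAX_RECORDS = 10_000;

    // Few parameters, with descriptive names.
    Status loadRecords(List<String> inputLines) {
        if (inputLines == null || inputLines.isEmpty()) {
            return Status.EMPTY_INPUT;          // planned error condition handled explicitly
        }

        int recordCount = 0;
        // Loop-with-exit: the exit conditions are tested in one obvious place.
        while (true) {
            if (recordCount >= inputLines.size()) {
                break;                          // normal exit
            }
            if (recordCount >= MAX_RECORDS) {
                return Status.TOO_MANY_RECORDS; // abnormal condition reported to the caller
            }
            processLine(inputLines.get(recordCount));
            recordCount++;
        }
        return Status.OK;
    }

    private void processLine(String line) {
        // details omitted in this sketch
    }
}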
Construction testing
The purpose of construction testing is to reduce the gap between the time at which faults are
inserted into the code and the time those faults are detected. In some cases, construction testing is
performed after code has been written. In test-first programming, test cases are created before code
is written. Construction involves two forms of testing, which are often performed by the software
engineer who wrote the code:[1]

• Unit testing
• Integration testing
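A minimal test-first sketch (assuming Java and JUnit 5; the Discount class is hypothetical): the test is written before Discount.apply exists, fails until the routine is implemented, and then remains as the unit test for the finished code.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Written first: this test cannot pass until Discount.apply is implemented.
class DiscountTest {

    @Test
    void tenPercentOffOneHundred() {
        assertEquals(90.0, Discount.apply(100.0, 10), 0.001);
    }
}

// Written second: just enough code to make the test pass.
class Discount {

    static double apply(double price, int percentOff) {
        return price * (100 - percentOff) / 100.0;
    }
}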
Reuse
Implementing software reuse entails more than creating and using libraries of assets. It requires
formalizing the practice of reuse by integrating reuse processes and activities into the software life
cycle. The tasks related to reuse in software construction during coding and testing are:[1]

• The selection of the reusable units, databases, test procedures, or test data.
• The evaluation of code or test re-usability.
• The reporting of reuse information on new code, test procedures, or test data.
Construction quality
The primary techniques used to ensure the quality of code as it is constructed include:
• Unit testing and integration testing. One study found that the average defect detection rates of
unit testing and integration testing are 30% and 35% respectively.[16]
• Test-first development
• Use of assertions and defensive programming (a short sketch follows this list)
• Debugging
• Inspections. One study found that the average defect detection rate of formal code inspections is
60%. Regarding the cost of finding defects, a study found that code reading detected 80% more
faults per hour than testing. Another study showed that it costs six times more to detect design
defects by using testing than by using inspections. A study by IBM showed that only 3.5 hours
were needed to find a defect through code inspections versus 15–25 hours through testing.
Microsoft has found that it takes 3 hours to find and fix a defect by using code inspections and
12 hours to find and fix a defect by using testing. In a 700 thousand lines program, it was
reported that code reviews were several times as cost-effective as testing.[16] Studies found that
inspections result in 20% - 30% fewer defects per 1000 lines of code than less formal review
practices and that they increase productivity by about 20%. Formal inspections will usually take
10% - 15% of the project budget and will reduce overall project cost. Researchers found that
having more than 2 - 3 reviewers on a formal inspection doesn't increase the number of defects
found, although the results seem to vary depending on the kind of material being inspected.
• Technical reviews. One study found that the average defect detection rates of informal code
reviews and desk checking are 25% and 40% respectively.[16] Walkthroughs were found to have
a defect detection rate of 20% - 40%, but were also found to be expensive, especially when project
pressures increase. Code reading was found by NASA to detect 3.3 defects per hour of effort
versus 1.8 defects per hour for testing. It also finds 20% - 60% more errors over the life of the
project than different kinds of testing. A study of 13 reviews of review meetings found that
90% of the defects were found in preparation for the review meeting while only around 10%
were found during the meeting.
• Static analysis (IEEE1028)
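As a sketch of the assertions and defensive programming bullet above (a hypothetical Account class, assuming Java; note that assert statements are only checked when the JVM is started with the -ea flag):

class Account {

    private long balanceInCents;

    Account(long openingBalanceInCents) {
        this.balanceInCents = openingBalanceInCents;
    }

    void withdraw(long amountInCents) {
        // Defensive programming: validate data that arrives from outside the routine.
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        if (amountInCents > balanceInCents) {
            throw new IllegalStateException("insufficient funds");
        }

        balanceInCents -= amountInCents;

        // Assertion: documents a condition the programmer believes can never be false;
        // it is checked only when assertions are enabled (java -ea).
        assert balanceInCents >= 0 : "balance went negative";
    }
}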
Studies have shown that a combination of these techniques needs to be used to achieve a high defect
detection rate. Other studies showed that different people tend to find different defects. One study
found that the Extreme Programming practices of pair programming, desk checking, unit
testing, integration testing, and regression testing can achieve a 90% defect detection rate. An
experiment involving experienced programmers found that on average they were able to find 5
errors (9 at best) out of 15 errors by testing.
80% of the errors tend to be concentrated in 20% of the project's classes and routines. 50% of the
errors are found in 5% of the project's classes. IBM was able to reduce customer-reported
defects by a factor of ten and to reduce its maintenance budget by 45% in its IMS system
by repairing or rewriting only 31 out of 425 classes. Around 20% of a project's routines contribute to
80% of the development costs. A classic study by IBM found that a few error-prone routines of OS/360
were the most expensive entities: they had around 50 defects per 1000 lines of code, and fixing
them cost 10 times what it took to develop the whole system.
Integration
A key activity during construction is the integration of separately
constructed routines, classes, components, and subsystems. In addition, a particular software
system may need to be integrated with other software or hardware systems. Concerns related to
construction integration include planning the sequence in which components will be integrated,
creating scaffolding to support interim versions of the software, determining the degree
of testing and quality work performed on components before they are integrated, and determining
points in the project at which interim versions of the software are tested.[1]
Testing
Software testing is the process of verifying and validating that a software product or
application is bug-free, meets the technical requirements set by its design and
development, and meets the user requirements effectively and efficiently while handling all
exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy and usability. It
mainly aims at measuring specification, functionality and performance of a software program or
application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that software correctly implements a
specific function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been built
is traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

What are different types of software testing?


Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing includes testing software manually, i.e., without using any
automated tool or any script. In this type, the tester takes over the role of an end-user and tests
the software to identify any unexpected behavior or bug. There are different stages for manual
testing such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test the software to ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore the software to
identify errors in it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is when
the tester writes scripts and uses other software to test the product. This process involves the
automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, the
test scenarios that were performed manually.
Apart from regression testing, automation testing is also used to test the application from load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and
saves time and money in comparison to manual testing.
What are different techniques of Software Testing?
Software testing techniques can be broadly classified into two categories:
1. Black Box Testing: The technique of testing in which the tester does not have access to the
source code of the software, and testing is conducted at the software interface without any concern
for the internal logical structure of the software, is known as black-box testing.
2. White-Box Testing: The technique of testing in which the tester is aware of the internal
workings of the product and has access to its source code, and testing is conducted by making sure that all
internal operations are performed according to the specifications, is known as white-box testing (a small sketch follows the comparison below).
BLACK BOX TESTING vs. WHITE BOX TESTING
• Black box: Internal workings of the application are not required. White box: Knowledge of the internal workings is a must.
• Black box: Also known as closed-box or data-driven testing. White box: Also known as clear-box or structural testing.
• Black box: Performed by end users, testers, and developers. White box: Normally done by testers and developers.
• Black box: Testing can only be done by trial-and-error methods. White box: Data domains and internal boundaries can be better tested.
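A rough illustration of the two techniques (hypothetical Grader.isPass method, Java and JUnit 5 assumed): a black-box test chooses cases from the specification alone, while a white-box test chooses cases after reading the code so that every branch is executed.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class Grader {
    // Specification: 40 marks or more out of 100 is a pass.
    static boolean isPass(int marks) {
        return marks >= 40;
    }
}

class GraderTest {

    // Black-box style: boundary cases taken from the specification, without reading the code.
    @Test
    void specificationBoundaries() {
        assertFalse(Grader.isPass(39));
        assertTrue(Grader.isPass(40));
        assertTrue(Grader.isPass(100));
    }

    // White-box style: cases chosen after reading the code so that both outcomes
    // of the "marks >= 40" branch are exercised.
    @Test
    void bothBranchesExercised() {
        assertTrue(Grader.isPass(75));
        assertFalse(Grader.isPass(0));
    }
}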
What are different levels of software testing?
Software testing can be broadly classified into four levels:
1. Unit Testing: A level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.
2. Integration Testing: A level of the software testing process where individual units are
combined and tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units (a small sketch follows this list).
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance with the
specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
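A small sketch of the difference between the first two levels (hypothetical Parser and Formatter classes, Java and JUnit 5 assumed): the unit test checks Parser in isolation, while the integration test checks that Parser and Formatter work correctly together.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Parser {
    static int parseMarks(String field) {
        return Integer.parseInt(field.trim());
    }
}

class Formatter {
    static String label(int marks) {
        return marks >= 40 ? "PASS" : "FAIL";
    }
}

class TestingLevelsSketch {

    // Unit test: one unit tested in isolation.
    @Test
    void parserHandlesSurroundingSpaces() {
        assertEquals(42, Parser.parseMarks(" 42 "));
    }

    // Integration test: the units combined, exposing faults in their interaction.
    @Test
    void parsedMarksAreLabelledCorrectly() {
        assertEquals("PASS", Formatter.label(Parser.parseMarks("40")));
    }
}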
Object Oriented testing process
Software typically undergoes many levels of testing, from unit testing to system or acceptance
testing. Typically, in unit testing, small "units" or modules of the software are tested separately,
with the focus on testing the code of that module. In higher-order testing (e.g., acceptance testing),
the entire system (or a subsystem) is tested with the focus on testing the functionality or external
behavior of the system.
As information systems are becoming more complex, the object-oriented paradigm is gaining
popularity because of its benefits in analysis, design, and coding. Conventional testing methods
cannot be applied for testing classes because of problems involved in testing classes, abstract
classes, inheritance, dynamic binding, message passing, polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A function (or a
procedure) has a clearly defined input-output behavior, while a class does not have an input-
output behavior specification. We can test a method of a class using approaches for testing
functions, but we cannot test the class using these
approaches.
According to Davis the dependencies occurring in conventional systems are:
• Data dependencies between variables
• Calling dependencies between modules
• Functional dependencies between a module and the variable it computes
• Definitional dependencies between a variable and its types.
But in Object-Oriented systems there are following additional dependencies:
• Class to class dependencies
• Class to method dependencies
• Class to message dependencies
• Class to variable dependencies
• Method to variable dependencies
• Method to message dependencies
• Method to method dependencies
Issues in Testing Classes:
Additional testing techniques are, therefore, required to test these dependencies. Another issue of
interest is that it is not possible to test a class dynamically; only its instances, i.e., objects, can be
tested. Similarly, the concept of inheritance raises various issues: for example, if changes are made to a
parent class (superclass) in a larger system, it will be difficult to test the subclasses
individually and isolate the error to one class.

In object-oriented programs, control flow is characterized by message passing among objects,
and the control flow switches from one object to another by inter-object communication.
Consequently, there is no sequential control flow within a class as there is within a function. This lack of sequential
control flow within a class requires different approaches for testing. Furthermore, in a function,
arguments passed to the function with global data determine the path of execution within the
procedure. But, in an object, the state associated with the object also influences the path of
execution, and methods of a class can communicate among themselves through this state because
this state is persistent across invocations of methods. Hence, for testing objects, the state of an
object has to play an important role.
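Because the object's state influences the path of execution, object-level tests typically drive an object through a sequence of messages and check its state after each step. A hypothetical sketch (Java and JUnit 5 assumed; the Ticket class is invented for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// State persists across method invocations and determines what each message does.
class Ticket {

    enum State { OPEN, ASSIGNED, CLOSED }

    private State state = State.OPEN;

    void assign() {
        if (state == State.OPEN) {
            state = State.ASSIGNED;
        }
    }

    void close() {
        if (state == State.ASSIGNED) {
            state = State.CLOSED;
        }
    }

    State currentState() {
        return state;
    }
}

class TicketStateTest {

    // The same message has different effects depending on the object's state,
    // so the test exercises a sequence of messages rather than a single call.
    @Test
    void closeIsIgnoredBeforeAssignment() {
        Ticket ticket = new Ticket();

        ticket.close();                                        // state OPEN: close() is ignored
        assertEquals(Ticket.State.OPEN, ticket.currentState());

        ticket.assign();
        ticket.close();                                        // state ASSIGNED: close() now applies
        assertEquals(Ticket.State.CLOSED, ticket.currentState());
    }
}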
Techniques of object-oriented testing are as follows:
1. Fault Based Testing:
This type of testing allows test cases to be designed based on the customer specification, the code,
or both. It tries to identify plausible faults (areas of design or code that may lead to errors). For each
of these faults, a test case is developed to "flush" the error out. These tests also force each line of
code to be executed.
This method of testing does not find all types of errors; in particular, incorrect specifications
and interface errors can be missed. These types of errors can be uncovered by function testing in
the traditional testing model. In the object-oriented model, interaction errors can be uncovered by
scenario-based testing. This form of object-oriented testing can only test against the client's
specification, so interface errors are still missed.
2. Class Testing Based on Method Testing:
This approach is the simplest approach to test classes. Each method of the class performs a
well-defined, cohesive function and can, therefore, be related to unit testing of the
traditional testing techniques. Therefore, all the methods of a class can be invoked at least
once to test the class.
3. Random Testing:
It is based on developing a random test sequence that tries the minimum number of
operations typical of the behavior of the class.
4. Partition Testing:
This methodology categorizes the inputs and outputs of a class in order to test them
separately. This minimizes the number of test cases that have to be designed (a small partition-testing sketch follows this list).
5. Scenario-based Testing:
It primarily involves capturing the user actions and then simulating them, and similar actions,
during the test.
These tests tend to find interaction types of errors.
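For example, partition testing of a hypothetical ShippingFee.feeFor method (Java and JUnit 5 assumed; the rules are invented for illustration) would divide the input domain into categories and design one representative test case per category, rather than many redundant cases:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class ShippingFee {
    // Invented rule: shipping is free for order totals of 500 or more, otherwise a flat 50;
    // negative totals are rejected.
    static int feeFor(int orderTotal) {
        if (orderTotal < 0) {
            throw new IllegalArgumentException("negative order total");
        }
        return orderTotal >= 500 ? 0 : 50;
    }
}

class ShippingFeePartitionTest {

    // One representative input per partition of the input domain.
    @Test
    void invalidInputPartition() {
        assertThrows(IllegalArgumentException.class, () -> ShippingFee.feeFor(-1));
    }

    @Test
    void smallOrderPartition() {
        assertEquals(50, ShippingFee.feeFor(200));
    }

    @Test
    void largeOrderPartition() {
        assertEquals(0, ShippingFee.feeFor(800));
    }
}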

Testing of analysis and design models


The systematic testing of analysis and design models is a labor-intensive exercise, but a highly effective one.
The technique can be made more efficient by a careful selection of use cases to serve as test cases.
Depending upon the selection technique, the faults that would have the most critical implications for
the system or those that would occur most frequently can be targeted. Defects identified at this level
affect a wider scope of the system and can be removed with less effort than defects found at the more
detailed level of code testing. The activities described here work closely with the typical development
techniques used by developers following an object-oriented method. By integrating these techniques
into the development process, the process becomes self-validating in that one development activity
identifies defects in the products of previous development tasks. Testing becomes a continuous process
that guides development rather than a judgmental step at the end of the development process. Several
of these techniques can be automated, further reducing the effort required. The result is an improved
system which ultimately will be of higher quality and in many cases at a lower cost due to early
detection.
Testing of classes






