Section 4
Testing: testing fundamentals, unit testing, test planning, black-box testing, white-box testing,
levels of testing, usability testing, regression testing, debugging approaches, integration testing
Coding principles are closely related to the principles of design and modelling. The
developed software goes through a process of testing, maintenance, and reengineering.
Coding principles help programmers in writing efficient and effective code, which is
easier to test, maintain, and reengineer.
There are no well-defined principles for writing effective code. However, some coding
principles can make code clear, readable, and understandable.
These coding principles have different aspects, which are described as follows:
Information Hiding
Data encapsulation binds data structures and their operations into a single unit. The
operations declared in a module can access its data structures and allow other modules to
access them via interfaces.
Thus, information hiding hides the implementation details of data structures from the
other modules. In addition, it secures data and information from illegal access and
alterations.
Other modules can access data structure through access specifiers and interfaces available
in modern programming languages.
Information hiding is supported by data abstraction, which allows creating multiple
instances of abstract data type. Thus, modifying a module with encapsulated data and
functions has minimum effect on other modules. Most object-oriented programming
languages, such as C++ and Java, support the features of information hiding.
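As an illustrative sketch of information hiding (the class and method names here are invented for illustration, not taken from any particular system), the following Python class hides its internal list behind a small interface, so other modules use only push and pop:

```python
class Stack:
    """Encapsulates a list and exposes it only through push/pop."""

    def __init__(self):
        self._items = []  # leading underscore: an internal detail, hidden from callers

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

s = Stack()
s.push(10)
s.push(20)
print(s.pop())  # 20 -- callers use the interface, never the internal list directly
```

Because callers never touch the internal list, the implementation (a list here) could later be replaced with another data structure without affecting other modules.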
Software Engineering, Unit -4
Minimizing dependencies among modules maximizes cohesion within them; that is,
there is more use of local data rather than global data items. Thus, high cohesion
and low coupling make a program clear, readable, and maintainable.
Code reusability
Code reuse, also called software reuse, is the use of existing software, or software
knowledge, to build new software, following the reusability principles.
Reusability implies some explicit management of build, packaging, distribution,
installation, configuration, deployment, and maintenance and upgrade issues. If these
issues are not considered, software may appear to be reusable from a design point of view,
but will not be reused in practice.
Reusability is the use of existing assets in some form within the software product
development process.
Assets are products and by-products of the software development life cycle and include
code, software components, test suites, designs and documentation. Leverage is
modifying existing assets as needed to meet specific system requirements.
KISS (Keep It Simple, Stupid)
The "keep it simple stupid" (KISS) principle is a design rule that states that systems
perform best when they have simple designs rather than complex ones. KISS is not meant
to imply stupidity.
On the contrary, it is usually associated with intelligent systems that may be
misconstrued as stupid because of their simplistic design. The KISS principle helps
prevent creeping featurism, system failures, and other IT issues.
KISS is also an acronym for "keep it short and simple" and "keep it simple and
straightforward". A simple code is easy to debug, write, read, and modify.
Simplicity, Extensibility, and Effortlessness
A simple program always works better than a complicated program. In addition, it can be
made more reliable than a complex code. Programs should be extendable rather than
being lengthy. Extensibility is different from modifiability.
Sometimes, program modification demolishes the program structure; therefore, more
concentration should be put on extensibility along with modifiability. There
should be less effort in reading and understanding programs.
Code verification
Kent Beck introduced pair programming in Extreme Programming (XP), which
focuses on program verification. The program logic and its correctness should be verified
before moving towards testing.
Therefore, a test-driven development (TDD) environment is created for better code writing.
In TDD, programming is done along with testing code. Similarly, pair programming allows two
programmers to sit together: one may be coding while the other may be thinking about or
verifying the code. This reduces the testing and maintenance efforts.
Code verification is the process used for checking the software code for errors introduced
in the coding phase. The objective of code verification process is to check the software
code in all aspects.
This process includes checking the consistency of user requirements with the design
phase. Note that code verification process does not concentrate on proving the
correctness of programs. Instead, it verifies whether the software code has been translated
according to the requirements of the user.
Code documentation
Source code is used by testers and maintainers. Therefore, programmers should add
comments in source code as and when required. A well-commented code is easier to
understand at the time of testing and maintenance.
Follow coding standards, guidelines, and styles: Source code that follows the standards
and an accepted programming style has fewer adverse effects on the system.
Therefore, programmers should focus on coding standards, guidelines, and good
programming styles.
maintainable software that fulfils the customer's needs. To overcome such
problems, test-driven development (TDD) was proposed by Kent Beck for iterative
software development.
TDD is a disciplined programming process that helps to avoid programming errors. It is the
reverse of the traditional programming process: while the traditional development
cycle is "design-code-test", TDD has the cycle "test-code-refactor". Here development
starts with writing test cases from the requirements rather than designing the solution.
TDD suggests that test cases are designed before coding.
Test-driven development (TDD) is a software development process that relies on the
repetition of a very short development cycle: first the developer writes an (initially
failing) automated test case that defines a desired improvement or new function, then
produces the minimum amount of code to pass that test, and finally refactors the new
code to acceptable standards.
Kent Beck, who is credited with having developed or rediscovered the technique, stated
in 2003 that TDD encourages simple designs and inspires confidence. Test-driven
development is related to the test-first programming concepts of extreme programming,
begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed
with older techniques.
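The test-code-refactor cycle can be sketched as follows. The gross_pay function and its overtime rule are hypothetical examples, not part of any real payroll system; the tests are written first and would fail until the minimal implementation beneath them is added:

```python
# Step 1 (red): write the tests first, before the implementation exists.
def test_regular_hours():
    assert gross_pay(rate=10, hours=40) == 400

def test_overtime_paid_at_time_and_a_half():
    assert gross_pay(rate=10, hours=42) == 430  # 2 overtime hours at 15/hour

# Step 2 (green): write the minimum code that makes the tests pass.
def gross_pay(rate, hours):
    regular = min(hours, 40)
    overtime = max(hours - 40, 0)
    return rate * regular + rate * 1.5 * overtime

# Step 3 (refactor): clean the code up while keeping every test green.
test_regular_hours()
test_overtime_paid_at_time_and_a_half()
print("all tests pass")
```

In practice the tests would be run by a framework such as unittest or pytest after every small change, so a broken refactoring is caught immediately.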
Coding styles vary with the programmer and the programming language used. If
coding standards are implemented correctly, then it becomes easy to understand and
maintain the code. The code will have fewer errors and consume less testing effort.
Coding standards provide general guidelines that can be commonly adopted by
programmers and the development organizations.
o Rules for limiting the use of global variables: These rules list what types of data
can be declared as global and what cannot, with a view to limit the data that needs
to be defined with global scope.
o Standard headers to precede the code of different modules: The information
contained in the headers of different modules should be standard for an
organization. The exact format in which header information is organized is project
specific. Still, the following information is added in most projects:
Name of the module
Author's name
Modification history
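In the spirit of the items above, a standard module header might look like the following comment block (the module name, author, dates, and history entries are all invented for illustration):

```python
# ---------------------------------------------------------------
# Module name  : payroll_report   (hypothetical example)
# Author       : A. Programmer
# Created      : 2024-01-15
# Purpose      : Generates the monthly payroll summary report.
# Modification history:
#   2024-02-02  A. Programmer  Fixed rounding of overtime pay.
# ---------------------------------------------------------------
```

The exact fields and layout are project-specific; what matters is that every module in the organization uses the same format.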
Do not use a coding style that is too clever or too difficult to understand: Many
inexperienced engineers actually take pride in writing cryptic and incomprehensible code.
Such code is very difficult to maintain and debug.
Avoid obscure side effects: The side effects of a function call include modifications to
the parameters passed by reference, modification of global variables and I/O operations.
Obscure side effects make a function very difficult to understand.
Do not use an identifier for multiple purposes: Programmers often use the same
identifier to denote several temporary entities. There are several things wrong with this
practice, and it needs to be avoided, because using one variable for multiple
purposes makes the code harder to understand and maintain.
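A small sketch of this problem (the function names are made up for illustration): both versions compute the same result, but the first reuses one identifier for two unrelated meanings.

```python
# Bad: one identifier reused for unrelated purposes.
def report_bad(values):
    n = len(values)      # n is a count...
    n = sum(values) / n  # ...and now n is an average -- confusing to maintain
    return n

# Better: one identifier per purpose.
def report_good(values):
    count = len(values)
    average = sum(values) / count
    return average

print(report_good([2, 4, 6]))  # 4.0
```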
The code should be well documented: As a rule of thumb, there should be at least one
comment line, on average, for every three source lines of code.
The length of any function should not exceed 10 source lines: A lengthy function is
very difficult to understand, as it has a large number of variables and carries out many
different types of computations.
There are various categories of errors observed in a program. Errors are sometimes known as
bugs. Some of the common types of errors are:
Syntax error
o Syntactical errors are easily rectified during the compilation phase. Some common
syntax errors are misspelling a keyword, function, or variable name; leaving out
some necessary punctuation; a missing semicolon; and using an End-if statement
without first using an If statement.
Logical error
o A logic error (or logical error) is a mistake in a program's source code that results
in incorrect or unexpected behaviour. It is a type of runtime error that may simply
produce the wrong output or may cause a program to crash while running.
o Many different types of programming mistakes can cause logic errors. For
example, assigning a value to the wrong variable may cause a series of
unexpected program errors.
o Multiplying two numbers instead of adding them together may also produce
o Unwanted results. Even small typos that do not produce syntax errors may cause
logic errors. For example, suppose you are interchanging two strings:
you initialized the first string but forgot to initialize the second string. When
you run the code, you will not get the expected result.
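A minimal sketch of a logic error in Python (the function and its typo are invented for illustration): the program runs without any reported error, but a missing pair of parentheses makes it compute the wrong value.

```python
# A logic error: the program runs without any syntax error,
# but a typo makes it compute the wrong thing.
def average(a, b):
    return a + b / 2  # bug: divides only b; should be (a + b) / 2

def average_fixed(a, b):
    return (a + b) / 2

print(average(4, 8))        # 8.0 -- wrong, but no error is reported
print(average_fixed(4, 8))  # 6.0 -- intended result
```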
Runtime error
o Runtime errors indicate bugs in the program, or problems that the designers had
anticipated but could do nothing about. For example, running out of memory will often
cause a runtime error. Note that runtime errors differ from bombs or crashes in that
you can often recover gracefully from a runtime error.
6. Discuss in detail about code verification
Code verification is the process of identifying errors, failures, and faults in source code
that cause the system to fail in performing the specified task. Code verification ensures that
the functional specifications are implemented correctly using a programming language.
There are several techniques in software engineering, which are used for code
verification. Testing is one of the most widely used methods for the verification of the
work products of all phases in the software life cycle.
1. Code review
2. Static Analysis
3. Testing
1. Code review
Code review is a phase in the software development process in which the authors of code,
peer reviewers, and perhaps quality assurance (QA) testers get together to review code.
Finding and correcting errors at this stage is relatively inexpensive and tends to reduce
the more expensive process of handling, locating, and fixing bugs during later stages of
development or after programs are delivered to users.
Code Review is a systematic examination, which can find and remove the vulnerabilities
in the code such as memory leaks and buffer overflows. Technical reviews are well
documented and use a well-defined defect detection process that includes peers and
technical experts. It is ideally led by a trained moderator, who is not the author. This kind
of review is usually performed as a peer review without management participation.
Reviewers prepare for the review meeting and prepare a review report with a list of
findings. Technical reviews may be quite informal or very formal and can have a number
of purposes, including but not limited to discussion, decision making, evaluation of
alternatives, finding defects, and solving technical problems.
o Code walkthrough
o Code inspection
o Pair programming
Code walkthrough
It is an informal code analysis technique. In this technique, a module is taken up for review after
the module has been coded, successfully compiled and all syntax errors are eliminated.
The main objective of code walkthrough is to discover the algorithmic and logical errors
in the code. The code walkthrough is carried out by a few members of the development team
over a couple of days.
Even though code walkthrough is an informal analysis technique, several guidelines have
evolved over the years for making it more effective. Some of these guidelines are as
follows:
o The team performing the code walkthrough should not be very big or very small.
Ideally, it consists of three to seven members.
o In order to foster cooperation and to avoid the feeling among engineers that they
are being evaluated in the code walkthrough meetings, managers should not attend the
walkthrough meetings.
Code Inspection
The principal aim of code inspection is to check for the presence of some common types
of errors that usually creep into the code due to programmer oversights and to check
whether coding standards have been adhered to.
Good software companies collect statistics regarding different types of errors commonly
committed by the engineers and identify the types of errors most frequently committed.
Those frequently committed errors can be used as a checklist during code inspection to
look for possible errors.
Some classical programming errors which can be checked during code inspection are:
o Incompatible assignment
Clean room testing
IBM pioneered this type of testing. It relies heavily on walkthroughs,
inspection, and formal verification.
The programmers are not allowed to test any code by executing it, other than doing
some syntax testing using a compiler. This approach produces documentation and code
that are more reliable and maintainable than other development methods that rely heavily
on code-execution-based testing.
The main problem with this method is that the testing effort is increased, as walkthroughs,
inspection, and verification are time-consuming ways of detecting even simple errors.
Pair Programming
Working in a team can be more than twice as effective as working alone. You can feed
off of each other’s knowledge and excitement. You can help each other when things go
wrong. You can learn from each other and study twice as much material. Sometimes you
can finish assignments in less than half the time a single person would take!
In Pair Programming, one programmer is the driver and the other is the navigator. While
the driver is typing (i.e., coding) the navigator is making strategic plans and correcting
tactical (logical) errors.
Each partner should actively communicate with the other, bouncing ideas off one another,
searching for information in the notes or book to solve the current problem (together),
reviewing each other’s typing (in real time), etc. By being able to "multi-task", each
partner bringing their own view and expertise, the partnership will enable both partners to
learn more, and learn "better".
2. Static Analysis
Static analysis, also called static code analysis, is a method of computer program
debugging that is done by examining the code without executing the program. The
process provides an understanding of the code structure, and can help to ensure that the
code adheres to industry standards.
Static program analysis is the analysis of computer software that is performed without
actually executing programs (analysis performed on executing programs is known as
dynamic analysis).
In most cases, the analysis is performed on some version of the source code, and in
other cases, on some form of the object code. The term is usually applied to the analysis
performed by an automated tool, with human analysis being called program
understanding, program comprehension, or code review. Software inspections and
software walkthroughs are also used in the latter case.
3. Testing
Testing also provides an objective, independent view of the software, allowing the business to
appreciate and understand the risks of software implementation. Test techniques include
the process of executing a program or application with the intent of finding software bugs
(errors or other defects).
Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not.
Moreover, they act as a guide for the software maintenance team (this team focuses on
maintaining software by improving and enhancing the software after it has been delivered
to the end user) while the software maintenance process is carried out. In this way, code
documentation facilitates code reusability.
While writing a software code, the developer needs proper documentation for reference
purposes. Programming is an on-going process and requires modifications from time to
time. When a number of software developers are writing the code for the same software,
complexity increases.
With the help of documentation, software developers can reduce the complexity by
referencing the code documentation. Some of the documenting techniques are comments,
visual appearances of codes, and programming tools.
Comments are used to make the reader understand the logic of a particular code segment.
The visual appearance of a code is the way in which the program should be formatted to
increase readability. The programming tools in code documentation are algorithms,
flowcharts, and pseudo-code.
Code documentation contains source code, which is useful for the software developers in
writing the software code. The code documents can be created with the help of various
coding tools that are used to auto-generate the code documents. In other words, these
documents extract comments from the source code and create a reference manual in the
form of text or HTML file.
The auto-generated documents help software developers by extracting reference information
from the comments. This documentation also contains application programming interfaces,
data structures, and algorithms. There are two kinds of code documentation, namely,
internal documentation and external documentation.
Thus, documentation is an important artifact of the system. The following categories of
documentation are done in the system:
o Internal documentation
o System documentation
o User documentation
o Daily documentation
o Process documentation
Internal documentation
Computer software is said to have internal documentation if the notes on how and why
various parts of the code operate are included within the source code as comments. It is often
combined with meaningful variable names, with the intention of providing potential future
programmers a means of understanding the workings of the code.
This contrasts with external documentation, where programmers keep their notes and
explanations in a separate document. Internal documentation has become increasingly
popular as it cannot be lost, and any programmer working on the code is immediately
made aware of its existence and has it readily available
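A short sketch of internal documentation in Python (the function and its pricing rule are invented for illustration): the docstring, the parameter notes, and the meaningful names all live inside the source code itself, so they cannot be lost.

```python
def net_price(price, discount_rate):
    """Return the price after applying a fractional discount.

    price         -- original price in the same currency unit
    discount_rate -- fraction in [0, 1], e.g. 0.2 for a 20% discount
    """
    # The discount is applied multiplicatively, not subtracted as a flat amount.
    return price * (1 - discount_rate)

print(net_price(100, 0.2))  # 80.0
```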
System documentation
Systems Documentation can cover a very broad range of documents within any industry
including IT. Documentation is especially critical where decisions have been made and
should be followed by a responsible party.
The key to good documentation is that it is clear and concise, so that anybody other than
the author can pick it up and understand it easily. In many cases, it is more beneficial for
a technical document to be prepared by a group of people. This way, the
final output is complete and readable, and you can be sure that everyone is on the same
'sheet of music'.
This document considers servers and workstations. The subject
of system documentation could occupy several books; this document discusses some
basic ideas, such as the characteristics of good system documentation and
what form the documentation should take.
User documentation
User documentation refers to the documentation for the product or service provided to
the end users. It is designed to assist end users to use the product or service. This is often
referred to as user assistance.
The development of the user documentation is left until after the testing phase is
complete. If it is created beforehand, parts of the system could change as a result of faults
being discovered. User documentation is provided to the user which gives an overview of
how to use the system.
The user documentation is a part of the overall product delivered to the customer.
Daily documentation
The unit development folder maintains the document, which contains requirements,
design, architecture, detailed design, source code, test plan, test results, changes, and
notes.
Daily documentation helps programmers in reporting to upper levels and in preparing the
phased artefacts and a plan for the next phase.
Process Documentation
Process documentation records and supports the process itself. Process documentation is
not about writing a final report for externals, but about an internal on-going
documentation of the process during the execution of the programme or project.
It is cooperation between the project team, stakeholders and outsiders which helps to
reflect, analyse and improve the on-going project or programme process. Process
documentation is especially necessary in projects that have aspirations for social change,
as it aims at capturing the perceptions of the involved stakeholders and how
these perceptions change over time.
Process documentation is not about writing a final report, but about an on-going
documentation of the process during the execution of the programme or project.
Software testing is performed, once software engineers have written the source code.
Software testing is the process of finding defects in the software so that these can be
debugged and the defect-free software can meet the customer needs and expectations.
Software testers always try to prove that the system is incorrect by applying test cases. To
perform successful testing, testers must have a thorough understanding of the whole
system and its subsystems, from requirements specification to implementation.
Errors:
These are actual coding mistakes made by developers. In addition, a difference between
the actual output of the software and the desired output is considered an error.
Error is the discrepancy between the actual value of the output of software and the
theoretically correct value of the output for the given input.
It is observed that most of the times errors occur during writing of programs by
programmers.
A software tester begins with finding any error in the source code. The error is also
known as variance, mistake, or problem.
Fault:
A fault occurs when an error exists. A fault, also known as a bug, is the result of an error,
and it can cause the system to fail.
A fault is also called a defect or bug; it is the manifestation of one or more errors. A
fault (either a hardware fault or a software fault) may reside
temporarily or permanently in the system.
Mostly faults are observed at the architecture or design level or they might be a faulty
source code.
Failure:
Failure is the inability of the system to perform the desired task. A failure occurs
when a fault in the system is exercised.
Failure is the deviation of the observed behaviour from the specified behaviour. It occurs
when the faulty code is executed leading to an incorrect outcome. A failure is the
manifestation of an error in the system or software.
A failure occurs because the system is erroneous. An error is caused by a fault and may
propagate to become a failure.
Test case:
This is the triplet [I,S,O], where I is the data input to the system, S is the state of the
system at which the data is input, and O is the expected output of the system.
Test suite:
This is the set of all test cases with which a given software product is to be tested.
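The [I, S, O] notion can be sketched with a hypothetical withdraw operation (the function, its rules, and the values are invented for illustration), where the state S is the account balance before the call:

```python
# A test case as the triplet [I, S, O]: input, system state, expected output.
def withdraw(balance, amount):
    if amount > balance:
        return balance, "rejected"
    return balance - amount, "ok"

# Test suite: a set of [I, S, O] triplets.
test_suite = [
    # (I: input amount, S: starting balance, O: expected output)
    (30, 100, (70, "ok")),
    (150, 100, (100, "rejected")),
]

for amount, balance, expected in test_suite:
    assert withdraw(balance, amount) == expected
print("all test cases passed")
```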
Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase, whereas validation is the process of
determining whether a fully developed system conforms to its requirements specification.
Thus while verification is concerned with phase containment of errors, the aim of
validation is that the final product be error free.
Test suite design: The set of test cases with which a program is to be tested is designed
using different test case design techniques. The information for designing the test
cases is obtained from the code, the design document, and the SRS document.
Running test cases and checking the results to detect failures: Each test case is run,
and the results are compared with the expected results. A mismatch between the actual and
expected results indicates a failure. These test cases are noted down for later debugging.
Debugging: Debugging is carried out to identify the statements that are in error. In this,
failure symptoms are analyzed to locate the errors.
Error correction: After the error is located in the previous activity, the code is
appropriately changed to correct the error.
During unit testing, the individual components of a program are tested. After testing all
units individually, the units are slowly integrated and tested after each step of integration.
Finally, the fully integrated system is tested.
Unit testing is undertaken after a module has been coded and successfully reviewed. Unit
testing (or module testing) is the testing of different units (or modules) of a system in
isolation.
In order to test a single module, a complete environment is needed to provide all that is
necessary for execution of the module. That is, besides the module under test itself, the
following are needed in order to be able to test the module:
o The procedures belonging to other modules that the module under test calls.
o A procedure to call the functions of the module under test with appropriate
parameters.
The modules required to provide the necessary environment (which either call or are called by
the module under test) are usually not available until they too have been unit tested; stubs
and drivers are designed to provide the complete environment for a module. The role of
stub and driver modules is pictorially shown as follows.
A stub procedure is a dummy procedure that has the same I/O parameters as the given
procedure but has a highly simplified behavior. For example, a stub procedure may
produce the expected behavior using a simple table lookup mechanism.
A driver module contains the nonlocal data structures accessed by the module under test,
and would also have the code to call the different functions of the module with
appropriate parameter values.
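The roles of stubs and drivers can be sketched as follows; the module, item names, and prices are all hypothetical. The stub has the same interface as the real price-lookup procedure but a highly simplified table-lookup behaviour, and the driver calls the module under test with appropriate parameter values:

```python
# Module under test: computes a line total using a price-lookup
# procedure that belongs to another (not yet tested) module.
def line_total(item, quantity, lookup_price):
    return lookup_price(item) * quantity

# Stub: same interface as the real price-lookup procedure,
# but with highly simplified behaviour (a simple table lookup).
def price_stub(item):
    return {"pen": 2, "book": 10}[item]

# Driver: sets up the data and calls the module under test
# with appropriate parameter values.
def driver():
    assert line_total("pen", 3, price_stub) == 6
    assert line_total("book", 2, price_stub) == 20
    print("unit tests for line_total passed")

driver()
```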
Black-box testing is performed on the basis of functions or features of the software. In the
Black-box testing, only the input values are considered for the design of test cases. The
internal logic or program structures are not considered during black-box testing.
Requirement specification is the basis of black-box testing. The behaviour of the module
is observed on executing black-box test cases and matched with the features supported by
the module. Therefore, it is also known as behavioural or functional testing. There are a
number of black-box test case design methods.
Equivalence class partitioning
In this approach, the domain of input values to a program is partitioned into a set of
equivalence classes. This partitioning is done such that the behaviour of the program is
similar for every input data value belonging to the same equivalence class.
The main idea behind defining the equivalence classes is that testing the code with any
one value belonging to an equivalence class is as good as testing the software with any
other value belonging to that equivalence class.
The following are some general guidelines for designing the equivalence classes:
o If the input data values to a system can be specified by a range of values, then one
valid and two invalid equivalence classes should be defined. For example, if the
valid equivalence class is the set of integers in the range 1 to 10, then the invalid
equivalence classes are [-∞, 0] and [11, ∞].
o If the input data assumes values from a set of discrete members of some domain,
then one equivalence class for valid input values and another equivalence class for
invalid input values should be defined. For example, if the valid equivalence class is
{A, B, C}, then the invalid equivalence class is U - {A, B, C}, where U is the
universal set of inputs.
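For the 1-to-10 range example above, one representative value per equivalence class might be chosen like this (the accept function is a stand-in for the system under test, invented for illustration):

```python
# Stand-in for the system under test: accepts integers in the range 1..10.
def accept(n):
    return 1 <= n <= 10

# One representative value per equivalence class:
#   valid class [1, 10]    -> pick 5
#   invalid class [-inf,0] -> pick -3
#   invalid class [11,inf] -> pick 42
assert accept(5) is True
assert accept(-3) is False
assert accept(42) is False
print("equivalence class tests passed")
```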
Boundary value analysis
Designing boundary value test cases requires examining the equivalence classes to check
if any of the equivalence classes contains a range of values. For those equivalence classes
that are not a range of values, no boundary value test cases can be defined. For an
equivalence class that is a range of values, the boundary values need to be included in the test cases.
For example: if the equivalence class contains the integers in the range 1 to 10, then the
boundary value test suite is {0, 1, 10, 11}.
The equivalence classes for the above problem are "numbers less than 1", "numbers between 1
and 10", and "numbers greater than 10".
Thus, the boundary values considered are '0' from the "numbers less than 1" class, the
boundary values '1' and '10' from the valid class, and '11' from the "numbers greater than 10" class.
The important steps in the black-box test suite design approach are as follows:
Design equivalence class test cases by picking one representative value from each
equivalence class.
Design the boundary value test cases as follows: examine if any equivalence class is a
range of values, and include the values at the boundaries of such equivalence classes in the
test suite.
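Putting the two steps together for the 1-to-10 range example (again with a stand-in accept function, invented for illustration):

```python
# Stand-in for the module under test: accepts integers in the range 1..10.
def accept(n):
    return 1 <= n <= 10

# Step 1: one representative value per equivalence class.
representatives = {5: True, -3: False, 42: False}

# Step 2: boundary values of the range-valued class: {0, 1, 10, 11}.
boundaries = {0: False, 1: True, 10: True, 11: False}

for value, expected in {**representatives, **boundaries}.items():
    assert accept(value) == expected
print("black-box test suite passed")
```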
White-box testing is also known as glass-box testing or structural testing. Internal logics,
such as control structures, control flow, and data structures are considered during white
box testing. White-box testing is performed to cover various program structures.
A fault-based testing strategy aims to detect certain types of faults. The faults that a test
strategy focuses on constitute the fault model of the strategy. Mutation testing is an
example of fault-based testing.
Mutation Testing
The idea behind mutation testing is to make a few arbitrary changes to a program, one at a
time. Each time the program is changed, it is called a mutated program, and the change
effected is called a mutant.
A mutated program is tested against the full test suite of the program. If there exists at
least one test case in the test suite for which the mutant gives an incorrect result, then the
mutant is said to be dead.
If a mutant remains alive even after all the test cases have been exhausted, the test data is
enhanced to kill the mutant. A major disadvantage of the mutation-based testing approach
is that it is computationally very expensive, since a large number of possible mutants can
be generated.
Since mutation testing generates a large number of mutants and requires us to check each
mutant with the full test suite, it is not suitable for manual testing. Mutation testing
should be used in conjunction of some testing tool, which would run all the test cases
automatically
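As a minimal sketch of this idea (the function under test and the single hand-written mutant are assumptions for illustration; real tools generate mutants automatically), a mutant is killed when at least one test case in the suite produces an incorrect result:

```python
# Original program under test.
def max_of(a, b):
    return a if a > b else b

# One hand-written mutant: the mutation changes '>' to '<'.
def max_of_mutant(a, b):
    return a if a < b else b

# Full test suite: (inputs, expected output).
test_suite = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]

def is_killed(mutant):
    """A mutant is dead if some test case yields an incorrect result."""
    return any(mutant(*args) != expected for args, expected in test_suite)

assert is_killed(max_of_mutant)   # the test suite kills this mutant
assert not is_killed(max_of)      # the original passes every test case
```

If a mutant survived, the suite would be strengthened with a new test case that distinguishes the mutant from the original program.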
Coverage-based testing attempts to execute certain elements of the program. Popular examples of coverage-based testing are statement coverage, branch coverage, condition coverage, and path coverage testing.
One white-box testing strategy is said to be stronger than another if all types of errors detected by the second strategy are also detected by the first, and the first strategy additionally detects some more types of errors.
When two testing strategies each detect at least some types of errors that the other does not, they are called complementary. The concepts of stronger and complementary testing are schematically illustrated below:
[Figure: Venn diagrams of the two cases. Stronger testing: the coverage achieved by testing strategy A fully contains the coverage achieved by testing strategy B. Complementary testing: the coverage achieved by strategy A and the coverage achieved by strategy B only partially overlap.]
From the definition of stronger testing, we know that if the stronger strategy is performed there is no need for the weaker one, since the stronger strategy covers all the test cases that belong to the weaker strategy.
In complementary testing, however, since each strategy contains a few test cases not present in the other, both approaches have to be carried out to cover all the test cases.
The statement coverage strategy aims to design test cases so that every statement in a program is executed at least once. The principal idea governing the statement coverage strategy is that unless a statement is executed, it is very hard to determine whether an error exists in that statement.
Unless a statement is executed, it is very difficult to observe whether it causes a failure due to some illegal memory access, wrong result computation, etc.
However, executing a statement once and observing that it behaves properly for one input value is no guarantee that it will behave correctly for all input values.
Example of coverage-based testing:
Design a coverage-based test suite for the following Euclid's GCD computation program:

while (x != y) {
    if (x > y)
        x = x - y;
    else
        y = y - x;
}
return x;

To design test cases for statement coverage, the conditional expression of the while loop needs to evaluate to true, and the condition of the if statement needs to evaluate to both true and false.
Thus, the test cases are {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y = 4)}.
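The claim that this suite achieves statement coverage can be checked by instrumenting the GCD program; the label names recorded below are assumptions introduced only for the demonstration:

```python
def gcd_traced(x, y):
    """Euclid's GCD, recording which program parts are executed."""
    executed = set()
    while x != y:
        executed.add("while-body")
        if x > y:
            executed.add("if-true")
            x = x - y
        else:
            executed.add("if-false")
            y = y - x
    executed.add("return")
    return x, executed

# Run the statement coverage test suite and accumulate coverage.
covered = set()
for x, y in [(3, 3), (4, 3), (3, 4)]:
    _, executed = gcd_traced(x, y)
    covered |= executed

# Every part of the program was executed at least once.
assert covered == {"while-body", "if-true", "if-false", "return"}
```

The case (3, 3) exercises the path that skips the loop entirely, while (4, 3) and (3, 4) force the if condition to be true and false respectively.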
The following three program fragments illustrate sequence, selection, and iteration constructs, together with their control flow graphs (one node per numbered statement):

Fragment 1 (sequence):
1. a = 5;
2. b = a * 2 - 1;

Fragment 2 (selection):
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c * c;

Fragment 3 (iteration):
1. while (a > b) {
2.     b = b - 1;
3.     b = b * a; }
4. c = a + b;

[Figure: control flow graphs of the three fragments. Fragment 1: 1 → 2. Fragment 2: 1 → 2 and 1 → 3, with both 2 and 3 → 4. Fragment 3: 1 → 2 → 3 → 1 (loop back), and 1 → 4 (loop exit).]
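As an illustration of branch coverage on the if/else fragment above, the test suite must make the condition a > b evaluate to both true and false; the concrete input values below are assumptions chosen for the demonstration:

```python
# The if/else fragment rewritten as a function, instrumented to
# record which branch of the condition was taken.
def fragment2(a, b):
    if a > b:
        c = 3
        branch = "true"
    else:
        c = 5
        branch = "false"
    c = c * c
    return c, branch

# Branch coverage: one test case per outcome of the condition.
results = {fragment2(4, 2)[1], fragment2(1, 2)[1]}
assert results == {"true", "false"}   # both branches exercised
assert fragment2(4, 2)[0] == 9        # c = 3 * 3
assert fragment2(1, 2)[0] == 25       # c = 5 * 5
```

A single test case would give full statement coverage of neither branch arm at once, which is why branch coverage is a stronger criterion here.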
System testing tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the specified quality standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
o System testing is the first level of testing where the application is tested as a whole.
o The application is tested thoroughly to verify that it meets the functional and technical specifications.
o The application is tested in an environment that is very close to the production environment where it will be deployed.
o System testing enables us to test, verify, and validate both the business requirements as well as the application architecture.
14. Discuss different performance testing
This testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document. All performance tests are considered black-box tests.
Stress testing: This is also known as endurance testing. Stress testing evaluates system performance when the system is stressed for a short period of time. It is a black-box test designed to impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the software. This testing is especially important for systems that usually operate below their maximum capacity but may be severely stressed at some peak demand hours.
Volume testing: This testing checks whether the data structures (buffers, arrays, queues, stacks, etc.) have been designed to successfully handle extraordinary situations. For example, if an array size is not chosen properly, it may cause an array-out-of-bounds exception.
Compatibility testing: This type of testing is required when the system interfaces with external systems. Compatibility testing aims to check whether the interface functions perform as required, for example by testing the speed and accuracy of data retrieval across the interface.
Configuration testing: This testing is used to test system behavior in various hardware
and software configurations specified in the requirements. Sometimes the systems are
built to work in different configurations for different users. The system is configured in
each of the required configurations and depending upon specific customer requirements,
it is checked if the system behaves correctly in all required configurations.
Recovery testing: This testing checks the response of the system to the presence of faults or to the loss of power, devices, services, or data. The system is subjected to the loss of these resources, and it is checked whether it recovers satisfactorily.
Maintenance testing: This testing addresses the diagnostic programs and other procedures that are required to help maintain the system. It is verified that these artifacts exist and perform properly.
Documentation testing: It is checked whether the required user manuals and technical manuals exist and are consistent.
Security testing: This is essential for software products that process confidential data. It needs to be tested whether the system is foolproof against security attacks such as intrusion by hackers.
15. Explain usability testing
Usability testing refers to evaluating a product or service by testing it with representative users. Typically, during a test, participants try to complete typical tasks while observers watch, listen, and take notes.
The goal is to identify any usability problems, collect qualitative and quantitative data, and determine the participants' satisfaction with the product.
Benefits of Usability Testing:
Usability testing lets the design and development teams identify problems before they are coded. The earlier issues are identified and fixed, the less expensive the fixes will be in terms of both staff time and possible impact on the schedule.
Usability testing is a type of testing done from an end-user's perspective to determine whether the system is easily usable.
During a usability test, we:
Learn whether participants are able to complete specified tasks successfully
Identify how long it takes to complete specified tasks
Find out how satisfied participants are with the Web site or other product
Identify changes required to improve user performance and satisfaction
Analyze the performance to see whether it meets the usability objectives
16. Discuss regression testing
Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change.
Regression testing is performed to verify that a fixed bug has not resulted in another functionality or business rule violation. The intent of regression testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application.
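As a minimal sketch of this idea (the discount function and the particular bug being fixed are assumptions for illustration), a regression suite simply re-runs the pre-existing test cases after every change, plus a new case covering the fix itself:

```python
# Version after a bug fix: orders of exactly 100 now get the discount
# (the buggy version used 'total > 100' and missed the boundary).
def discounted_total(total):
    if total >= 100:
        return total * 0.9   # 10% discount on large orders
    return total

# Regression suite: the old test cases plus one for the fixed bug.
regression_suite = [
    (50, 50),        # old behaviour: small orders unchanged
    (200, 180.0),    # old behaviour: large orders discounted
    (100, 90.0),     # new case covering the fixed boundary bug
]

for total, expected in regression_suite:
    assert discounted_total(total) == expected
```

If the fix had accidentally broken either of the two old behaviours, the old test cases would catch it immediately.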
Regression testing is important because of the following reasons:
o It minimizes the gaps in testing when an application that has undergone changes has to be tested.
o It tests the new changes to verify that the changes made did not affect any other area of the application.
o It mitigates risks when performed on the changed application.
o Test coverage is increased without compromising timelines.
o It increases speed to market the product.
17. What do you mean by debugging? Explain its types.
After a failure has been detected, it is necessary to first identify the program statements that are in error and are responsible for the failure; the error can then be fixed. The following are a few debugging approaches adopted by programmers:
Brute force method
This is the most common method of debugging, but the least effective one. In this approach, print statements are inserted throughout the program to print intermediate values, with the hope that some of the printed values will help to identify the statement in error.
This method becomes more effective when breakpoints and watchpoints are used in the code to inspect the values of variables.
Backtracking
In this approach, beginning from the statement at which an error symptom has been observed, the source code is traced backwards until the error is discovered.
Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths increases and may become unmanageably large for complex programs.
Cause elimination method
In this approach, once a failure is observed, the symptoms of the failure are noted. Based on the failure symptoms, the causes that could have contributed to the symptom are listed, and tests are conducted to eliminate each.
A related technique of identifying the error from the error symptom is software fault tree analysis.
Program Slicing
This is quite similar to backtracking. However, the search space is reduced by defining slices. A slice of a program, for a particular variable at a particular statement, is the set of source lines preceding this statement that can influence the value of that variable.
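As a toy illustration (the program and its slice are assumptions worked out by hand, not produced by a slicing tool), the slice for variable `c` at the last statement contains only the lines that can influence `c`, so a line that touches only `d` can be ignored while debugging `c`:

```python
# Line 1: a = 10
# Line 2: b = a + 5     -> influences c through b
# Line 3: d = a * 2     -> influences only d
# Line 4: c = b - 3     -> the statement of interest
#
# Slice of the program for variable c at line 4: lines {1, 2, 4}.
# Line 3 is excluded because the value of d never flows into c.

a = 10
b = a + 5
d = a * 2
c = b - 3
assert c == 12   # b = 15, so c = 15 - 3
```

When searching for the cause of a wrong value of `c`, only the lines in the slice need to be examined, which is how slicing shrinks the search space compared with plain backtracking.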
Debugging guidelines
The following are some general guidelines for effective debugging:
Debugging often requires a thorough understanding of the program design. Trying to debug based on a partial understanding of the design may require an inordinate amount of effort to be put into debugging.
Debugging may even require a full redesign of the system. In such cases, novice programmers often attempt to fix not the error but its symptoms.
One must beware of the possibility that an error correction may introduce new errors. Therefore, after every round of error fixing, regression testing must be carried out.
18. What do you mean by integration testing? Explain its types.
Integration testing is carried out after all the modules have been unit tested. Successful completion of unit testing largely ensures that each unit works satisfactorily in isolation. In this context, the objective of integration testing is to detect errors at the module interfaces.
During integration testing, the different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which the modules are combined to realize the full system. The different approaches that can be used to integrate the modules are as follows:
Big bang integration testing
In this approach, all the modules making up the system are integrated in a single step: all the modules are simply linked together and tested. This technique can be meaningful only for very small systems.
If any error is found, it is very difficult to localize, as the error may potentially lie in any of the modules. Therefore, debugging in big bang integration is very expensive, and the approach is never used for large systems.
Bottom-up integration testing
Large software products are often made of several subsystems, and a subsystem may itself contain subsystems. In this type of testing, the modules of a subsystem are integrated first; thus, the subsystems can be integrated separately and independently.
The primary purpose of the integration testing of a subsystem is to test whether the interfaces among the various modules making up the subsystem work satisfactorily. In pure bottom-up testing no stubs are required; only test drivers are required.
Top-down integration testing
This testing starts with the root module and one or more subordinate modules of the system. After the top-level skeleton has been tested, the modules at the immediately lower layer of the skeleton are combined with it and tested.
It uses program stubs to simulate the effect of lower-level routines that are called by the routine under test. The advantage of this approach is that it requires only stubs, not drivers, which are more difficult to design.
The disadvantage of this approach is that the absence of the lower-level routines often makes it difficult to exercise the top-level routines in the desired manner.
Mixed integration testing
The mixed (also called sandwich) integration testing approach follows a combination of the top-down and bottom-up testing approaches.
In this approach, testing can start as and when modules become available after unit testing. It is the most commonly used integration testing approach. In this approach, both stubs and drivers are required to be designed.
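The stubs and drivers mentioned above can be sketched as follows (the module names `billing_total`, `tax_stub`, and `billing_driver` are assumptions for illustration): the stub stands in for a lower-level module that is not yet available, while the driver plays the role of a higher-level caller that has not yet been written:

```python
# Stub: stands in for the real tax module, which is not yet integrated.
def tax_stub(amount):
    return amount * 0.10   # canned, simplified answer

# Module under integration test: would normally call the real tax module.
def billing_total(amount, tax_fn):
    return amount + tax_fn(amount)

# Driver: plays the role of the not-yet-written caller and checks
# that the interface between billing and tax behaves as expected.
def billing_driver():
    total = billing_total(100, tax_fn=tax_stub)
    assert total == 110.0
    return total

billing_driver()
```

In top-down integration only the stub would be needed; in bottom-up integration only the driver; the mixed approach, as the notes state, needs both.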
Important questions
1. Define coding principles.
2. Explain the coding process.
3. Explain coding standards.
4. Explain coding guidelines.
5. Explain different coding errors.
6. Discuss in detail code verification and code documentation.
7. Discuss the fundamentals of testing.
8. Explain the test planning process.
9. Explain unit testing.
10. Explain black-box testing and white-box testing.
11. Discuss the levels of testing.
12. Discuss different performance testing.
13. Explain usability testing and regression testing.
14. What do you mean by debugging? Explain its types.
15. What do you mean by integration testing? Explain its types.