
DAVIET/CSE/2003656

D.A.V. INSTITUTE OF ENGINEERING & TECHNOLOGY

KABIR NAGAR, JALANDHAR

SOFTWARE ENGINEERING(BTCS 503-18)

Submitted To: Dr. Parveen Kakkar
Submitted By: Kunal Thakur
University Roll No.: 2003656
Class Roll No.: 126
Branch: CSE, 5th Sem

1|Page
SE -LAB

INDEX

S.No.  Program
1.  Software Requirements for Restaurant Menu and Ordering System
2.  Study and usage of OpenProj or similar software to draft a project plan
3.  Study and usage of OpenProj or similar software to track the progress of a project
4.  Preparation of Software Configuration Management and Risk Management related documents
5.  Study and usage of any design phase CASE tool
6.  To perform unit testing and integration testing
7.  To perform various white box and black box testing techniques
8.  Testing of a website

Task 2: Study and usage of OpenProj or similar software to draft a project plan.
Project managers can use OpenProj, a free project tracking application, to create effective project plans. OpenProj delivers functionality that rivals the capabilities of commercial software, which can save a project thousands of dollars in startup costs. Of course, saving money is foolish if the project tasks can't be done; this is not the case with OpenProj. The application gives project managers the full set of tools typically used to track projects. Useful aids such as critical path analysis, resource tracking and task comments are all present in OpenProj. The tool is ideal for simple project management but is capable of larger efforts as well.

STEP 1: Create the project plan shell.

Fig 2.1: Creating the project plan shell

STEP 2: Identify the project resources. Use the Resources view to enter the particulars of all of the project team. The names and roles of the team members can be specified. If required, you can enter hourly rates, overtime rates and availability information for each team member.

Fig 2.2: Particulars of the project team

Step 3: Identify the project's high-level tasks. This project required tasks for initiation, research, contracting, development and launch. The project manager enters these tasks into the Gantt view.

Fig 2.3: Identifying the project's high-level tasks


Step 4: Identify the task dependencies. During a project, some tasks can't start until others have been completed. This is true for the project's "Test launch" task: there is nothing to test until the development is completed. Likewise, the "News Shower" launch is dependent on every other task. The project manager, in discussions with team members or sponsors as appropriate, determines the task dependencies.

Fig 2.4: Identifying task dependencies

Step 5: Assign project resources to tasks. Each task can have one or more resources assigned. The "Resource Names" column in the Gantt view allows direct data entry of this information: enter the name of a resource in the field. The default is for each named resource to work 100% of their time on the task. The field also supports multiple resources; enter the resource names separated by semicolons. A partial allocation can be given as a percentage in square brackets, such as "[50%]". Here is the project plan with resources assigned.

Fig 2.5: Assigning project resources to tasks

Step 6: Task elaboration. An important feature of project management applications is the ability to let the project manager split tasks into smaller sub-tasks. This allows better accuracy in schedule estimating, and it allows team members to be assigned in a just-in-time fashion. The example project has some opportunities for task elaboration.

Fig 2.6: Elaborating the tasks

Step 7: Evaluate the project plan.

With all of the tasks entered and sub-tasks specified, the project plan has really evolved. It now shows a lot of information which can be useful in project reporting. The first item is the critical path; this is of the highest importance to the project manager and the organization. Reports showing the tasks can be presented to company executives, an analysis of workloads can be done, and task reports can be printed. In time, as completion percentages are entered for tasks, the project manager can run status reports showing progress and schedule tracking.

Fig 2.7: Evaluating the project plan

As is evident in the steps presented above, the OpenProj application is quite helpful to the project manager. From the graphical presentation of the critical path to resource balancing and task elaboration, OpenProj gives the project manager a set of functions that help to monitor project performance.

Project Management: Project management is the planning and control of a project. It allows the user to resolve conflicts in a continuously changing environment and is necessary to continuously drive towards the achievement of the baseline project goals.

Work Breakdown Structure (WBS): Dividing complex projects into simpler, manageable tasks is the process identified as Work Breakdown Structure (WBS). Project managers usually use this method to simplify project execution. In a WBS, much larger tasks are broken down into manageable chunks of work that can be easily supervised.
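The idea can be illustrated with a short Python sketch. The tree below (an aircraft system, echoing Fig 2.8) and its task names are invented for the example:

```python
# Minimal WBS sketch: each node maps a task name to its sub-tasks.
# Leaf nodes (empty dicts) are the work packages that get scheduled.
wbs = {
    "Aircraft System": {
        "Airframe": {"Fuselage": {}, "Wings": {}},
        "Propulsion": {"Engine": {}, "Fuel System": {}},
        "Avionics": {},
    },
}

def work_packages(tree):
    """Return the leaf tasks (work packages) of a WBS tree."""
    leaves = []
    for name, children in tree.items():
        if children:
            leaves.extend(work_packages(children))
        else:
            leaves.append(name)
    return leaves

print(work_packages(wbs))
# → ['Fuselage', 'Wings', 'Engine', 'Fuel System', 'Avionics']
```

The leaves are the chunks of work that can be supervised and estimated individually.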

Fig 2.8: Work Breakdown Structure of an aircraft problem

Task 3: Study and usage of OpenProj or similar software to track the progress of a project.
An effective project manager needs to ensure that the project goals are met on time and within the budget. Anticipating the implications of a task that is slipping behind schedule, revising project plans, reassigning resources and finding ways to minimize the impact on time and costs require an orderly approach to tracking project progress. Once your plan is set up and your resources assigned, you are ready to start your project. This brings you to a different set of project features, which can help you track your progress and compare this progress to your original plan. Here, we'll examine the following methods of tracking progress:

Using Baselines:

After you have your resources leveled and everything is set to begin, you might want to baseline your project. "Baseline" is a common project management term. It refers to a set of data about your project that represents its state before the work actually began. In Project, a baseline is a copy of the Start, Finish, Work and Cost for all the resources and assignments, plus Duration for all the tasks in your project. Together, this data represents the state of your plan at the time the baseline is saved. The baseline will be a valuable tool to use as your project progresses, and after it completes, to compare how the real life of your project matched up with what you projected during the planning stages. To save a baseline, click Tools | Tracking | Save Baseline. This brings up the Save Baseline dialog box. Click OK, and you've saved your baseline.

Fig 3.0: Save Baseline dialog box
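The baseline concept can be sketched in a few lines of Python. The plan data, field names and numbers below are invented for illustration, not OpenProj's actual file format:

```python
from copy import deepcopy

# Hypothetical plan data: per-task start/finish (day numbers), work (hours), cost.
plan = {
    "Design":  {"start": 1, "finish": 5,  "work": 40, "cost": 2000},
    "Develop": {"start": 6, "finish": 15, "work": 80, "cost": 4000},
}

# "Save Baseline": freeze a snapshot of the plan before work begins.
baseline = deepcopy(plan)

# Later, the schedule slips and costs grow.
plan["Develop"]["finish"] = 18
plan["Develop"]["cost"] = 5200

def variance(plan, baseline, task, field):
    """Current value minus the baselined value for one task field."""
    return plan[task][field] - baseline[task][field]

print(variance(plan, baseline, "Develop", "finish"))  # → 3 (days of slip)
print(variance(plan, baseline, "Develop", "cost"))    # → 1200 (cost overrun)
```

Comparing the live plan against the frozen copy is exactly the kind of variance reporting the baseline enables.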

Using percent complete and actual work done:

There are two commonly used methods of updating a project plan with the status of your project work. The first uses percent complete to take a general measure of how "finished" a task or assignment is. The second uses a collection of actual work done by resources. The percent complete method is faster than the actual work method, but it gives a much more general, higher-level view of status.

Percent complete

The percent complete method uses the general feelings of the resource or the project manager about how complete an assignment or task is. You are asking your resource to tell you what percentage of the work is complete. You then enter this information into the Resource Usage view by adding in the Percent Work Complete field.

Fig 3.1: Resource Usage view with Percent Work Complete
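To show how task-level percentages roll up into one overall status figure, here is a minimal sketch; the task names, hours and percentages are invented, and weighting by planned work is one common convention rather than the only one:

```python
# Each task carries its planned work (hours) and reported percent complete.
tasks = [
    {"name": "Research",    "planned_work": 20, "percent_complete": 100},
    {"name": "Development", "planned_work": 60, "percent_complete": 50},
    {"name": "Testing",     "planned_work": 20, "percent_complete": 0},
]

def overall_percent_complete(tasks):
    """Overall completion, weighting each task by its planned work."""
    total = sum(t["planned_work"] for t in tasks)
    done = sum(t["planned_work"] * t["percent_complete"] / 100 for t in tasks)
    return 100 * done / total

print(overall_percent_complete(tasks))  # → 50.0
```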

Actual work

The actual work method is basically the same as percent complete, except it offers more detail. The actual work approach is normally used when a resource uses a timecard to track how many hours are spent working on each task for a given time period. Depending on your processes, either the resource or the project manager enters this information. The Resource Usage view allows you to enter actual work on a time-period-by-time-period basis, so if you ask your resources to submit their actual work on their tasks day by day, you can set up the view to display Actual Work.

Fig 3.2: Resource Usage view set up to record actual work

Gantt chart

A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American engineer and social scientist. Frequently used in project management, a Gantt chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project. A Gantt chart is constructed with a horizontal axis representing the total time span of the project, broken down into increments (for example, days, weeks, or months), and a vertical axis representing the tasks that make up the project.
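The layout just described (tasks on the vertical axis, time on the horizontal) can be mimicked with a tiny text-mode rendering; the tasks and dates below are invented:

```python
# Each task is (name, start_day, duration_days).
tasks = [("Plan", 0, 3), ("Build", 3, 5), ("Test", 8, 2)]

def gantt_row(name, start, duration, width=10):
    """One row of a text Gantt chart: the bar starts at `start` on the time axis."""
    bar = " " * start + "#" * duration
    return f"{name:<6}|{bar:<{width}}|"

for t in tasks:
    print(gantt_row(*t))
# Plan  |###       |
# Build |   #####  |
# Test  |        ##|
```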


Fig 3.3: Gantt chart for a project

Disadvantages of Gantt charts: Gantt charts give a clear illustration of project status, but one problem with them is that they don't indicate task dependencies: you cannot tell how one task falling behind schedule affects other tasks.

PERT chart

A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation Review Technique, a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program.

Fig 3.4: PERT chart for a project

Advantages of PERT chart: The PERT chart is sometimes preferred over the Gantt chart, another popular project management charting method, because it clearly illustrates task dependencies. On the other hand, the PERT chart can be much more difficult to interpret, especially on complex projects. Frequently, project managers use both techniques.

Critical Path Method (CPM):-

The critical path method (CPM) is an algorithm for scheduling a set of project activities, and an important tool for effective project management. Using activity durations and dependencies, CPM calculates the longest path of planned activities to logical end points or to the end of the project, and the earliest and latest that each activity can start and finish without making the project longer. This process determines which activities are "critical" (i.e., on the longest path) and which have "total float" (i.e., can be delayed without making the project longer).


Fig 3.5: Critical Path Method for a project

Steps in Critical Path Method:

Step 1: Activity specification

You can use the Work Breakdown Structure (WBS) to identify the activities involved in the project. This is the main input for the critical path method.

In activity specification, only the higher-level activities are selected for the critical path method. When detailed activities are used, the critical path method may become too complex to manage and maintain.

Step 2: Activity sequence establishment

In this step, the correct activity sequence is established. For that, you need to ask three questions for each task on your list:

Which tasks should take place before this task happens?

Which tasks should be completed at the same time as this task?

Which tasks should happen immediately after this task?

Step 3: Network diagram

Once the activity sequence is correctly identified, the network diagram can be drawn. Although the early diagrams were drawn on paper, nowadays there are a number of software packages, such as Primavera, for this purpose.

Step 4: Estimates for each activity

This could be a direct input from the WBS-based estimation sheet. Most companies use the three-point estimation method or COCOMO-based (function point based) estimation methods for task estimation.
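The three-point (PERT) estimate mentioned above combines an optimistic (O), most likely (M) and pessimistic (P) value as E = (O + 4M + P) / 6. A minimal sketch, with invented numbers:

```python
def three_point_estimate(o, m, p):
    """Expected duration E = (O + 4M + P) / 6, the classic PERT beta-weighted mean."""
    return (o + 4 * m + p) / 6

# An invented task: optimistic 2 days, most likely 4, pessimistic 12.
print(three_point_estimate(2, 4, 12))  # → 5.0
```

The weighting pulls the estimate toward the most likely value while still accounting for the pessimistic tail.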

Step 5: Identification of the critical path

For this, you need to determine four parameters of each activity of the network.

Earliest start time (ES) - the earliest time an activity can start once the previous dependent activities are over.

Earliest finish time (EF) - ES + activity duration.

Latest finish time (LF) - the latest time an activity can finish without delaying the project.

Latest start time (LS) - LF - activity duration.

The float time for an activity is the time between the earliest (ES) and latest (LS) start times, or between the earliest (EF) and latest (LF) finish times. Activities with zero float cannot be delayed at all; they lie on the critical path.
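These four parameters can be computed with a forward pass (ES/EF) and a backward pass (LS/LF); zero total float then marks the critical path. A sketch with invented activities and durations:

```python
# Invented network: durations in days; deps maps each activity to its predecessors.
duration = {"A": 3, "B": 2, "C": 4, "D": 2}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def cpm(duration, deps):
    es, ef, ls, lf = {}, {}, {}, {}
    order = list(duration)                      # already in topological order here
    for t in order:                             # forward pass: ES = max EF of predecessors
        es[t] = max((ef[p] for p in deps[t]), default=0)
        ef[t] = es[t] + duration[t]
    end = max(ef.values())                      # project finish
    for t in reversed(order):                   # backward pass: LF = min LS of successors
        succs = [s for s in order if t in deps[s]]
        lf[t] = min((ls[s] for s in succs), default=end)
        ls[t] = lf[t] - duration[t]
    floats = {t: ls[t] - es[t] for t in order}  # total float; 0 => critical
    return es, ef, ls, lf, floats

es, ef, ls, lf, fl = cpm(duration, deps)
critical = [t for t in duration if fl[t] == 0]
print(critical)  # → ['A', 'C', 'D']
```

Here A-C-D is the longest path (9 days), so B has two days of float.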

Step 6: Critical path diagram to show project progress

The critical path diagram is a live artifact; it should be updated with actual values as tasks are completed. This gives a more realistic figure for the deadline, and project management can know whether they are on track regarding the deliverables.

Advantages of Critical Path Method:

Offers a visual representation of the project activities.

Task 4: Preparation of Software Configuration Management and Risk Management related documents.
Software Configuration Management

The purpose of software configuration management is to plan, organize, control and coordinate the identification, storage and change of software through development, integration and transfer. Every project must establish a software configuration management system. Software configuration management must ensure that:

software components can be identified;

software is built from a consistent set of components;

software components are available and accessible;

software components never get lost (e.g. after media failure or operator error);

every change to the software is approved and documented;

changes do not get lost (e.g. through simultaneous updates);

it is always possible to go back to a previous version;

a history of changes is kept, so that it is always possible to discover who did what and when.

Project management is responsible for organizing software configuration management activities, defining software configuration management roles (e.g. software librarian), and allocating staff to those roles.

Fig 4.0: Software Configuration Management

Software Configuration Management Activities

The configuration management tool enables the developer to change various components in a controlled manner. Configuration management is carried out through two principal activities: configuration identification and configuration control.

Fig 4.1: Software Configuration Management activities


Configuration identification:

The first step in configuration management is to define the CIs and the conventions for their identification. The inputs to the configuration identification activity are the SPMP (which lists the project documents), and the ADD and DDD (which list the software components). These inputs are used to define the hierarchy of CIs. The output is a configuration item list.

Configuration control:

Each configuration item should have a unique control authority that decides what changes will be made to it. The control authority may be an individual or a group of people. Three levels of control authority are normally distinguished in a software project:

• Software author

Software authors create low-level CIs, such as documents and code. Document writers are usually the control authority for draft documents. Programmers are normally the control authority for code until unit testing is complete.

• Software project manager

Software project managers are responsible for the assembly of high-level CIs (i.e. baselines and releases)
from the low-level CIs provided by software authors. Software project managers are the control
authorities for all these CIs. Low-level CIs are maintained in software libraries. On most projects the
software project manager is supported by a software librarian.

• Review board

The review board approves baselines and changes to them. During development phases, formal review of baselines is done by the UR/R, SR/R, AD/R and DD/R boards. The review board is the control authority for all baselines that have been approved or are under review. The review board may decide that a baseline should be changed, and authorizes the development or maintenance team to implement the change.

Risk:
A risk is any anticipated unfavorable event or circumstance that can occur while a project is underway.
Risks are future uncertain events with a probability of occurrence and a potential for loss.

Risk Management:

Risk management is the identification, assessment, and prioritization of risks, followed by a coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events, or to maximize the realization of opportunities. The process of identification, analysis and either acceptance or mitigation of uncertainty in investment decision-making is also called risk management. Essentially, risk management occurs any time an investor or fund manager analyzes and attempts to quantify the potential for losses in an investment and then takes the appropriate action (or inaction) given their investment objectives and risk tolerance. Inadequate risk management can result in severe consequences for companies as well as individuals.

Fig 4.2: Steps involved in the risk management process

Risk Identification:

The first step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems or benefits. Hence, risk identification can start with the source of our problems and those of our competitors (benefit), or with the problem itself.

Source analysis - Risk sources may be internal or external to the system that is the target of risk management. (For sources that cannot be managed, mitigation is used instead, since by definition such risks deal with factors of decision-making that cannot be managed.)

Problem analysis - Risks are related to identified threats, for example: the threat of losing money, the threat of abuse of confidential information, or the threat of human errors, accidents and casualties. The threats may exist with various entities, most importantly with shareholders, customers and legislative bodies such as the government.

Risk Assessment:

Once risks have been identified, they must then be assessed for their potential severity of impact (generally a negative impact, such as damage or loss) and for their probability of occurrence. These quantities can be either simple to measure, as in the case of the value of a lost building, or impossible to know for sure, as in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated decisions in order to properly prioritize the implementation of the risk management plan. The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for intangible assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information.
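The assessment described above is often summarized as risk exposure = probability x impact, which gives a simple way to prioritize. A sketch with invented risks and figures:

```python
# Invented risk register: probability in [0, 1], impact as a monetary loss.
risks = [
    {"name": "Key developer leaves", "probability": 0.2, "impact": 50000},
    {"name": "Requirements change",  "probability": 0.6, "impact": 20000},
    {"name": "Server failure",       "probability": 0.1, "impact": 10000},
]

def by_exposure(risks):
    """Rank risks by expected loss (exposure = probability * impact), highest first."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

for r in by_exposure(risks):
    print(r["name"], r["probability"] * r["impact"])
```

Note that the highest-impact risk is not necessarily the highest priority: here "Requirements change" (exposure 12000) outranks the larger but less likely "Key developer leaves" (exposure 10000).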

Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of four major categories:

Risk avoidance:

Risk avoidance includes not performing an activity that could carry risk. An example would be not buying a property or business in order to avoid the legal liability that comes with it. Another would be not flying in order to avoid the risk of the airplane being hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits. Increasing risk regulation in hospitals, for example, has led to avoidance of treating higher-risk conditions in favor of patients presenting with lower risk.

Risk reduction:

Risk reduction or "optimization" involves reducing the severity of the loss or the likelihood of the loss occurring. For example, sprinklers are designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy. Acknowledging that risks can be positive or negative, optimizing risks means finding a balance between negative risk and the benefit of the operation or activity, and between risk reduction and effort applied. By effectively applying HSE management in its organization, an offshore drilling contractor can optimize risk to achieve levels of residual risk that are tolerable.

Risk sharing:

Risk sharing is defined as sharing with another party the burden of loss, or the benefit of gain, from a risk, and the measures to reduce a risk. The term "risk transfer" is often used in place of risk sharing, in the mistaken belief that you can transfer a risk to a third party through insurance or outsourcing. In practice, if the insurance company or contractor goes bankrupt or ends up in court, the original risk is likely to revert to the first party. As such, in the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk". However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as a post-event compensatory mechanism. For example, a personal injuries insurance policy does not transfer the risk of a car accident to the insurance company; the risk still lies with the policy holder, namely the person who has been in the accident. The insurance policy simply provides that if an accident (the event) occurs involving the policy holder, then some compensation may be payable to the policy holder that is commensurate with the suffering/damage.

Some ways of managing risk fall into multiple categories. Risk retention pools technically retain the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group up front; instead, losses are assessed to all members of the group.

Risk retention

Risk retention involves accepting the loss, or benefit of gain, from a risk when it occurs. True self-insurance falls into this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Also, any amount of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small, or if the cost to insure for greater coverage amounts would hinder the goals of the organization too much.

Task 5: Study and usage of any design phase CASE tool.
Design Phase

The design phase starts with the requirements document delivered by the requirements phase and maps the requirements into an architecture. The architecture defines the components, their interfaces and behaviors. The deliverable design document is the architecture; it describes a plan to implement the requirements. This phase represents the "how" phase. Details on programming languages and environments, machines, packages, application architecture, distributed architecture layering, memory size, platform, algorithms, data structures, global type definitions, interfaces, and many other engineering details are established. The design may include the usage of existing components.

CASE tool and its scope

A CASE (Computer Aided Software Engineering) tool is a generic term used to denote any form of automated support for software engineering. In a more restrictive sense, a CASE tool means any tool used to automate some activity associated with software development. Many CASE tools are available. Some of these CASE tools assist in phase-related tasks such as specification, structured analysis, design, coding and testing; others assist in non-phase activities such as project management and configuration management.

CASE environment

Although individual CASE tools are useful, the true power of a tool set can be realized only when these tools are integrated into a common framework or environment. CASE tools are characterized by the stage or stages of the software development life cycle on which they focus. Since different tools covering different stages share common information, it is required that they integrate through some central repository to have a consistent view of information associated with the software development artifacts.

Fig 5.0: CASE environment

Structured analysis and design with CASE tools

A CASE tool should support one or more of the structured analysis and design techniques.

It should make drawing analysis and design diagrams effortless.

It should support drawing fairly complex diagrams, preferably through a hierarchy of levels.

The CASE tool should provide easy navigation through the different levels and through the design and analysis.

The tool must support completeness and consistency checking across the design and analysis and through all levels of the analysis hierarchy. Whenever possible, the system should disallow any inconsistent operation, though such a feature may be very difficult to implement. When consistency checking imposes a heavy computational load, it should be possible to disable it temporarily.

Usage of CASE

A key benefit arising out of the use of a CASE environment is cost saving through all development phases. Different studies carried out to measure the impact of CASE put the effort reduction at between 30% and 40%.

Use of CASE tools leads to considerable improvements in quality. This is mainly due to the fact that one can effortlessly iterate through the different phases of software development, and the chances of human error are considerably reduced.

CASE tools help produce high-quality and consistent documents. Since the important data relating to a software product are maintained in a central repository, redundancy in the stored data is reduced, and therefore the chance of inconsistent documentation is reduced to a great extent.

CASE tools take out most of the drudgery in a software engineer's work. For example, engineers need not meticulously check the balancing of DFDs but can do it effortlessly at the press of a button.

CASE tools have led to revolutionary cost savings in software maintenance efforts. This arises not only from the tremendous value of a CASE environment in traceability and consistency checks, but also from the systematic information capture during the various phases of software development as a result of adhering to a CASE environment.

Introduction of a CASE environment has an impact on the style of working of a company, and makes it oriented towards a structured and orderly approach.

Task 6: To perform unit testing and integration testing.
Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing (a set of steps into which you can place specific test case design techniques and testing methods) should be defined for the software process. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established. Moving inward along the spiral, you come to design and finally to coding. To develop computer software, you spiral inward (counterclockwise) along streamlines that decrease the level of abstraction on each turn.

Fig 6.0: Software testing steps

Unit Testing

Unit testing is the act of testing a unit in your application. A unit is often a function or a method of a class instance; in object-oriented programming, a unit is often an entire interface, such as a class, but it could be an individual method. The unit is also referred to as the "unit under test". The goal of a single unit test is to test only some permutation of the unit under test. If you write a unit test that aims to verify the result of a particular code path through a Python function, you need only be concerned with testing the code that lives in the function body itself. If the function accepts a parameter that represents a complex application "domain object" (such as a resource, a database connection, or an SMTP server), the argument provided to this function during a unit test need not be, and likely should not be, a "real" implementation object. Unit tests are short code fragments created by programmers, or occasionally by white box testers, during the development process. Unit testing focuses verification effort on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.

Fig 6.1: Unit testing

Fig 6.2: Unit testing environment
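A minimal unit test using Python's built-in unittest module illustrates the idea; the function under test (apply_discount) and its test values are invented for this example:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount; reject out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    # Each test exercises one permutation of the unit under test.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

# Run the tests programmatically (avoids sys.exit in scripted runs).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

Note that the tests check behavior only at the boundary of the unit: valid inputs, a boundary value, and an error path.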

Benefits of Unit testing:

Find problems early

In test-driven development (TDD), which is frequently used in both Extreme Programming and Scrum,
unit tests are created before the code itself is written. When the tests pass, that code is considered
complete. The same unit tests are run against that function frequently as the larger codebase is developed
either as the code is changed or via an automated process with the build. If the unit tests fail, it is
considered to be a bug either in the changed code or the tests themselves.

Facilitates change

Unit testing allows the programmer to refactor code at a later date, and make sure the module still works
correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so
that whenever a change causes a fault, it can be quickly identified. Readily available unit tests make it
easy for the programmer to check whether a piece of code is still working properly.

Simplifies integration

Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style
approach. By testing the parts of a program first and then testing the sum of its parts, integration testing
becomes much easier. An elaborate hierarchy of unit tests does not equal integration testing. Integration
with peripheral units should be included in integration tests, but not in unit tests. Integration testing
typically still relies heavily on humans testing manually.

Documentation

Unit testing provides a sort of living documentation of the system. Developers looking to learn what
functionality is provided by a unit and how to use it can look at the unit tests to gain a basic understanding
of the unit's interface (API). Unit test cases embody characteristics that are critical to the success of the
unit.

Design

When software is developed using a test-driven approach, the combination of writing the unit test to
specify the interface plus the refactoring activities performed after the test is passing may take the place of
formal design. Each unit test can be seen as a design element specifying classes, methods, and observable
behavior.

Unit testing Limitations:

Testing will not catch every error in the program, since it cannot evaluate every execution path in any but
the most trivial programs. Additionally, unit testing by definition only tests the functionality of the units
themselves. Therefore, it will not catch integration errors or broader system-level errors.

Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful tests. It is
necessary to create relevant initial conditions so the part of the application being tested behaves like part of
the complete system.

It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily
and addressed immediately. If such a process is not implemented and ingrained into the team's workflow,
the application will evolve out of sync with the unit test suite, increasing false positives and reducing the
effectiveness of the test suite.

Unit testing embedded system software presents a unique challenge: since the software is being developed
on a different platform than the one it will eventually run on, you cannot readily run a test program in the
actual deployment environment, as is possible with desktop programs.

Integration Testing:

Integration testing (sometimes called integration and testing) is the phase in software testing in which
individual software modules are combined and tested as a group. It occurs after unit testing and before
validation testing. Integration testing takes as its input modules that have been unit tested, groups them in
larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its
output the integrated system ready for system testing. The purpose of integration testing is to verify the
functional, performance, and reliability requirements placed on major design items. These "design items",
i.e., assemblages (or groups of units), are exercised through their interfaces using black box testing, with
success and error cases being simulated via appropriate parameter and data inputs.

Integration testing is a systematic technique for constructing the software architecture while at the same
time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested
components and build a program structure that has been dictated by design. There is often a tendency to
attempt non-incremental integration; that is, to construct the program using a "big bang" approach.

Fig 6.3:- Big Bang Integration testing

In the big bang approach, all components are combined in advance, and the entire program is tested as a
whole. A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the
vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues
in a seemingly endless loop. Incremental integration is the antithesis of the big bang approach. The
program is constructed and tested in small increments, where errors are easier to isolate and correct,
interfaces are more likely to be tested completely, and a systematic test approach may be applied.

The two main integration testing strategies are as follows:

Top-Down: Involves testing the top integrated modules first. Subsystems are tested individually. Top-
down testing facilitates detection of lost module branch links.

Bottom-Up: Involves low-level component testing, followed by high-level components. Testing continues
until all hierarchical components are tested. Bottom-up testing facilitates efficient error detection.

Top-Down Testing:-

Fig 6.4:- Top-down Integration Testing

Top-down integration testing is an incremental approach to construction of the software architecture.


Modules are integrated by moving downward through the control hierarchy, beginning with the main
control module (main program). Modules subordinate (and ultimately subordinate) to the main control
module are incorporated into the structure in either a depth-first or breadth-first manner. Referring to
Fig 6.4, depth-first integration integrates all components on a major control path of the program
structure. Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated
first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central
and right-hand control paths are built. Breadth-first integration incorporates all components directly
subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3,
and M4 would be integrated first. The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of five steps:


The main control module is used as a test driver and stubs are substituted for all components directly
subordinate to the main control module.

Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced
one at a time with actual components.

Tests are conducted as each component is integrated.

On completion of each set of tests, another stub is replaced with the real component.

Regression testing may be conducted to ensure that new errors have not been introduced. The process
continues from step 2 until the entire program structure is built.
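The stub-replacement cycle above can be sketched in Python (the component names are hypothetical): a stub returns canned data so the main control module can be tested as a driver before the real subordinate component exists, and is later swapped for the actual component.

```python
def stub_fetch_data():
    # Stub standing in for a not-yet-integrated subordinate component:
    # it returns a canned value instead of doing real work.
    return [1, 2, 3]

def main_control(fetch=stub_fetch_data):
    # Main control module, used as the test driver (step 1).
    data = fetch()
    return sum(data)

# Steps 2-3: conduct tests with the stub in place.
assert main_control() == 6

# Step 4: replace the stub with the actual component, then re-run
# earlier tests (regression testing, step 5).
def real_fetch_data():
    return [10, 20, 30]

assert main_control(fetch=real_fetch_data) == 60
```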

Bottom-Up Testing:-

Bottom-up testing is an approach to integration testing where the lowest-level components are tested first,
then used to facilitate the testing of higher-level components. The process is repeated until the component
at the top of the hierarchy is tested. All the bottom or low-level modules, procedures, or functions are
integrated and then tested. After the integration testing of lower-level integrated modules, the next level of
modules is formed and can be used for integration testing. This approach is helpful only when all or
most of the modules of the same development level are ready. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the form of a percentage.
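A minimal Python sketch of this bottom-up order (the component names are illustrative): the atomic units are exercised first by a simple driver, and only then combined into a higher-level component.

```python
# Low-level (atomic) components, tested first.
def parse_amount(text):
    return float(text.strip())

def add_tax(amount, rate=0.18):
    return round(amount * (1 + rate), 2)

# Driver tests for the low-level cluster come first.
assert parse_amount(" 100 ") == 100.0
assert add_tax(100.0) == 118.0

# Only then is the next level up formed from already-tested units.
def total_from_input(text):
    return add_tax(parse_amount(text))

assert total_from_input("100") == 118.0
```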

Fig 6.5:- Bottom-up integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program structure). Because components are integrated from
the bottom up, the functionality provided by components subordinate to a given

Task 7: To perform various white box and black box testing techniques.
White-Box Testing

White-box testing is a software testing method in which the internal structure/design of the item being
tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines
the appropriate outputs. Programming know-how and design knowledge are essential. White-box testing
is testing beyond the user interface and into the nitty-gritty of a system. This method is so named because
the software program, in the eyes of the tester, is like a white/transparent box, inside which one clearly
sees. In this method of testing, the test cases are derived based on analysis of the internal structure of the
system: code coverage, branch coverage, path coverage, condition coverage, etc.

Basis Path Testing

Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method
enables the test-case designer to derive a logical complexity measure of a procedural design and use this
measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set
are guaranteed to execute every statement in the program at least one time during testing.
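For example (using a hypothetical function), a routine with two decision points has cyclomatic complexity V(G) = P + 1 = 3, so its basis set contains three independent paths, and three test cases execute every statement at least once:

```python
def classify(n):
    if n < 0:          # decision 1
        return "negative"
    if n == 0:         # decision 2
        return "zero"
    return "positive"

# One test case per basis path:
assert classify(-5) == "negative"   # path taking decision 1 as true
assert classify(0) == "zero"        # path taking decision 2 as true
assert classify(7) == "positive"    # path with both decisions false
```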

Fig 7.0:- Flow graph notation

Condition Testing:-

Condition testing is a test-case design method that exercises the logical conditions contained in a program
module. These broaden testing coverage and improve the quality of white-box testing. A simple condition
is a Boolean variable or a relational expression, possibly preceded with one NOT (¬) operator. A relational
expression takes the form E1 <relational-operator> E2, where E1 and E2 are arithmetic expressions and
<relational-operator> is one of the following: <, <=, >, >=, or =. A compound condition is composed of
two or more simple conditions, Boolean operators, and parentheses. We assume that Boolean operators
allowed in a compound condition include OR, AND (&), and NOT. A condition without relational
expressions is referred to as a Boolean expression.

If a condition is incorrect, then at least one component of the condition is incorrect. Therefore, types of
errors in a condition include Boolean operator errors (incorrect operators), Boolean variable errors, Boolean
parenthesis errors, relational operator errors, and arithmetic expression errors. The condition testing
method focuses on testing each condition in the program to ensure that it does not contain errors.
Data Flow Testing:-

The data flow testing method [Fra93] selects test paths of a program according to the locations of
definitions and uses of variables in the program. To illustrate the data flow testing approach, assume that
each statement in a program is assigned a unique statement number and that each function does not
modify its parameters or global variables. For a statement with S as its statement number,

DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of
statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a
path from statement S to statement S' that contains no other definition of X. A definition-use (DU) chain
of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and
USE(S'), and the definition of X in statement S is live at statement S'.
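These definitions can be made concrete with a small Python sketch over a hypothetical three-statement program (S1: x = 10, S2: y = x + 1, S3: print(y)), computing its DU chains directly from the DEF and USE sets:

```python
# DEF and USE sets per statement number of the hypothetical program:
# S1: x = 10     DEF = {x}, USE = {}
# S2: y = x + 1  DEF = {y}, USE = {x}
# S3: print(y)   DEF = {},  USE = {y}
defs = {1: {"x"}, 2: {"y"}, 3: set()}
uses = {1: set(), 2: {"x"}, 3: {"y"}}

# A DU chain [X, S, S'] exists when X is in DEF(S) and USE(S'), with no
# intervening redefinition of X (the definition is live at S').
du_chains = [(var, s, s2)
             for s in defs for var in defs[s]
             for s2 in uses
             if s2 > s and var in uses[s2]
             and not any(var in defs[m] for m in range(s + 1, s2))]

assert du_chains == [("x", 1, 2), ("y", 2, 3)]
```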

Loop Testing:-

Fig 7.1:- Classes of Loop

Loops are the cornerstone for the vast majority of all algorithms implemented in software. And yet, we
often pay them little heed while conducting software tests. Loop testing is a white-box testing technique
that focuses exclusively on the validity of loop constructs. Four different classes of loops [Bei90] can be
defined: simple loops, concatenated loops, nested loops, and unstructured loops.
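For simple loops, the classic test values are: skip the loop entirely, one pass, two passes, a typical number m of passes, and n-1, n, and n+1 passes, where n is the maximum allowed number of iterations. A hypothetical Python sketch:

```python
def run_loop(items, n):
    # Hypothetical unit under test: process at most n items.
    count = 0
    for _ in items[:n]:
        count += 1
    return count

n = 10
# Zero, one, two, typical m, n-1, n, and n+1 loop passes:
for passes in (0, 1, 2, 5, n - 1, n, n + 1):
    # The loop must never execute more than n passes.
    assert run_loop(list(range(passes)), n) == min(passes, n)
```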

STEP 1: UNDERSTAND THE SOURCE CODE

The first and very important step is to understand the source code of the application being tested. The
tester should know the internal structure of the code, which helps in testing the application. Better knowledge
of the source code helps to identify and write the important test cases that expose security issues, and
also helps to achieve 100% test coverage. Before doing this, the tester should know the programming language
being used to develop the application. Since security is a primary objective of
the application, the tester should be aware of the security concerns of the project, which help in testing. If
the tester is aware of the security issues, he can prevent attacks from hackers and users who try to inject
malicious content into the application.

STEP 2: CREATE TEST CASES AND EXECUTE

The second step involves the actual writing of test cases based on statement/decision/
condition/branch coverage, and the actual execution of those test cases to achieve 100% testing coverage of the
software application. The tester writes the test cases by dividing the application
statement/decision/condition/branch wise. In addition, the tester can include trial-and-error testing,
manual testing, and the use of software testing tools.

Advantages of White-Box Testing:

It helps in optimizing the code.

Extra lines of code can be removed which can bring in hidden defects.

Disadvantages of White-Box Testing:

Due to the fact that a skilled tester is needed to perform white box testing, the costs are increased.

Sometimes it is impossible to look into every nook and corner to find out hidden errors that may create
problems, as many paths will go untested.

Black-Box Testing

Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.
That is, black-box testing techniques enable you to derive sets of input conditions that will fully exercise
all functional requirements for a program. Black-box testing is not an alternative to white-box techniques.
Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box
methods.

Graph-Based Testing Methods:

The first step in black-box testing is to understand the objects that are modeled in software and the
relationships that connect these objects. Once this has been accomplished, the next step is to define a
series of tests that verify "all objects have the expected relationship to one another". Stated in another way,
software testing begins by creating a graph of important objects and their relationships and then devising
a series of tests that will cover the graph so that each object and relationship is exercised and errors are
uncovered.


Fig 7.2:- Graph based testing methods

Equivalence Partitioning:

Equivalence partitioning is a black-box testing method that divides the input domain of a program into
classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class
of errors (e.g., incorrect processing of all character data) that might otherwise require many test cases to
be executed before the general error is observed.

Test-case design for equivalence partitioning is based on an evaluation of equivalence classes for an input
condition. Using concepts introduced in the preceding section, if a set of objects can be linked by
relationships that are symmetric, transitive, and reflexive, an equivalence class is present. An equivalence class
represents a set of valid or invalid states for input conditions. Typically, an input condition is either a
specific numeric value, a range of values, a set of related values, or a Boolean condition. Equivalence
classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
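Applying guideline 1 to a hypothetical input condition, "age must be in the range 18 to 60", gives one valid class (18 to 60) and two invalid classes (below 18, above 60); one representative test per class stands in for every value in that class:

```python
def accept_age(age):
    # Hypothetical validator for the input condition: 18 <= age <= 60.
    return 18 <= age <= 60

# One representative value per equivalence class:
assert accept_age(35) is True    # valid class: inside the range
assert accept_age(10) is False   # invalid class: below the range
assert accept_age(75) is False   # invalid class: above the range
```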

Boundary Value Analysis:

A greater number of errors occurs at the boundaries of the input domain rather than in the "center." It is for
this reason that boundary value analysis (BVA) has been developed as a testing technique. Boundary value
analysis leads to a selection of test cases that exercise bounding values. Boundary value analysis is a test-
case design technique that complements equivalence partitioning. Rather than selecting any element of an
equivalence class, BVA leads to the selection of test cases at the "edges" of the class; rather than focusing
solely on input conditions, it also derives test cases from the output domain. Guidelines for BVA are
similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with
values a and b and values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the
minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
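For the same hypothetical range condition (a = 18, b = 60), BVA selects values at and just beyond each edge instead of arbitrary interior values:

```python
def accept_age(age):
    # Same hypothetical input condition: a range bounded by a = 18, b = 60.
    return 18 <= age <= 60

cases = [(17, False), (18, True), (19, True),    # around lower bound a
         (59, True), (60, True), (61, False)]    # around upper bound b
for age, expected in cases:
    assert accept_age(age) is expected
```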

Advantages of Black-Box Testing:


Well suited and efficient for large code segments.

Code access is not required.

Disadvantages of Black-Box Testing:

Limited Coverage since only a selected number of test scenarios are actually performed.

The test cases are difficult to design.

Difference between White-Box testing and Black-Box testing:

Criteria                   Black Box Testing                       White Box Testing

Definition                 A software testing method in which     A software testing method in which
                           the internal structure/design/         the internal structure/design/
                           implementation of the item being       implementation of the item being
                           tested is NOT known to the tester.     tested is known to the tester.

Levels applicable to       Mainly higher levels of testing:       Mainly lower levels of testing:
                           acceptance testing, system testing.    unit testing, integration testing.

Responsibility             Generally, independent software        Generally, software developers.
                           testers.

Programming knowledge      Not required.                          Required.

Implementation knowledge   Not required.                          Required.

Basis for test cases       Requirement specifications.            Detail design.

Task 8: Testing of a website.
Web application refers to all applications that are accessed through a browser. Web testing is the name
given to software testing that focuses on web applications. Complete testing of a web-based system before
going live can help address issues before the system is revealed to the public. Such issues include the
security of the web application, the basic functionality of the site, its accessibility to handicapped users and
fully able users, as well as readiness for expected traffic and numbers of users and the ability to survive a
massive spike in user traffic, both of which are related to load testing.

Web page testing is part of building a successful web site. The HTML code and CSS (Cascading Style
Sheet) code should validate. All links need to be tested to make sure they work correctly. The web site
should also be cross browser compatible. The web page speed and download time also are important to
the success of a web site.

Fig 8.1:- Different testing techniques

Errors within a website

Because many types of WebApp tests uncover problems that are first evidenced on the client side (i.e., via
an interface implemented on a specific browser or a personal communication device), you often see a
symptom of the error, not the error itself. Because a WebApp is implemented in a number of different
configurations and within different environments, it may be difficult or impossible to reproduce an error
outside the environment in which the error was originally encountered. Although some errors are the
result of incorrect design or improper HTML (or other programming language) coding, many errors can
be traced to the WebApp configuration. Because WebApps reside within a client-server architecture,
errors can be difficult to trace across three architectural layers: the client, the server, or the network
itself. Some errors are due to the static operating environment (i.e., the specific configuration in which
testing is conducted), while others are attributable to the dynamic operating environment (i.e.,
instantaneous resource loading or time-related errors).


➢ Functionality Testing

Test for – all the links in web pages, database connection, forms used in the web pages for
submitting or getting information from user, cookie testing.
Check all the Links
• Test the outgoing links from all the pages from specific domain under test
• Test all internal links
• Test links jumping on the same pages
• Test links used to send the email to admin or other users from web pages
• Test to check if there are any orphan pages
• Lastly in link checking, check for broken links in all above-mentioned links
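A minimal broken-link check can be scripted with Python's standard library; the sketch below (the URL in the usage comment is a placeholder) reports each URL's HTTP status, or the error that prevented fetching it:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def check_link(url, timeout=10):
    """Return (url, status): the HTTP status code, or an error description."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as e:
        return url, e.code          # e.g. 404 indicates a broken link
    except URLError as e:
        return url, str(e.reason)   # DNS failure, refused connection, etc.

# Usage (requires network access):
# for page in ["https://example.com/"]:
#     print(check_link(page))
```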
Forms are the integral part of any web site. Forms are used to get information from users and to
keep interaction with them. So what should be checked on these forms?
• First check all the validations on each field
• Check for the default values of fields
• Wrong inputs to the fields in the forms
• Options to create forms if any, form delete, view or modify the forms

Cookies Testing == Cookies are small files stored on the user machine. These are basically used to
maintain the session, mainly login sessions. Test the application by enabling or disabling the cookies in
your browser options. Test if the cookies are encrypted before being written to the user machine. Also test
the behavior of session cookies. Validate Your HTML/CSS: If you are optimizing your site for search engines,
then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check if the site
is crawlable by different search engines.

Database Testing == Data consistency is very important in a web application. Check for data integrity and
errors while you edit, delete, or modify forms or do any DB-related functionality. Check if all the
database queries are executing correctly, data is retrieved correctly, and also updated correctly. More on
database testing could be load on the DB; we will address this in web load or performance testing below.

➢ Usability Testing

Test for Navigation == Navigation means how the user surfs the web pages, uses different controls like
buttons and boxes, or uses the links on the pages to surf different pages. Usability Testing Includes:
The web site should be easy to use. Instructions should be provided clearly. Check if the provided instructions
are correct, meaning whether they satisfy the purpose. The main menu should be provided on each page and
should be consistent.

Content Checking == Content should be logical and easy to understand. Check for spelling errors. Use
of dark colors annoys users and should not be used in the site theme. You can follow standards that are
used for web page and content building. These are commonly accepted standards, like those mentioned above
about annoying colors, fonts, frames, etc.

Interface Testing == The main interfaces are:

• Web server and application server interface


• Application server and database server interface

Fig 8.2:- Layers of interaction

➢ Compatibility Testing

Compatibility of your web site is a very important testing aspect. See which compatibility tests are to be
executed:

Browser Compatibility == In my web-testing career, I have experienced this as the most
influencing part of web site testing. Some applications are very dependent on browsers. Different
browsers have different configurations and settings that your web page should be compatible
with. Your web site coding should be cross-browser compatible. If you are using
JavaScript or AJAX calls for UI functionality, performing security checks or validations, then
give more stress to browser compatibility testing of your web application. Test the web application
on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, Opera
browsers with different versions.
OS Compatibility == Some functionality in your web application may not be compatible with all
operating systems. All new technologies used in web development, like graphics designs and interface calls
such as different APIs, may not be available in all operating systems. Test your web application on different
operating systems like Windows, Unix, MAC, Linux, and Solaris with different OS flavors.

Mobile Browsing == This is the new technology age, so in the future, mobile browsing will rock. Test your
web pages on mobile browsers. Compatibility issues may be there on mobile.

Printing Options == If you are giving page-printing options, then make sure fonts, page alignment,
and page graphics are getting printed properly. Pages should fit the paper size or the size mentioned in the
printing option.

➢ Performance Testing

A web application should sustain heavy load. Web performance testing should include:
Load Testing == Test application performance on different internet connection speeds. In web load
testing, test if many users are accessing or requesting the same page.
Stress Testing == Generally, stress means stretching the system beyond its specification limits. Web stress
testing is performed to break the site by applying stress, then checking how the system reacts to stress and how
it recovers from crashes. Stress is generally applied to input fields, login, and sign-up areas. In web
performance testing, the web site's functionality on different operating systems and different hardware
platforms is checked for software and hardware memory leakage errors.

➢ Security Testing

Following are some test cases for web security testing:


• Test by pasting internal URL directly into browser address bar without login. Internal pages should
not open.
• If you are logged in using username and password and browsing internal pages, then try changing
URL options directly, i.e., if you are checking some publisher site statistics with publisher site ID=
123, access should be denied for this user to view others' stats.
• Try some invalid inputs in input fields like login username, password, input text boxes. Check the
system reaction on all invalid inputs.
• Web directories or files should not be accessible directly unless given download option.
• Test if SSL is used for security measures. If used, proper message should get displayed when user
switches from non-secure http:// pages to secure https:// pages and vice versa.
• All transactions, error messages, security breach attempts should get logged in log files somewhere
on web server.

