Oose CS - A, B

The document provides an overview of software engineering, detailing its definition, history, and importance, as well as the challenges faced in the field. It introduces key concepts of object-oriented programming, the Unified Modeling Language (UML), and various software development life cycle (SDLC) models including Waterfall, Iterative, RAD, and Prototype models. Each model is described with its phases, advantages, and disadvantages, emphasizing the need for structured approaches in software development.


UNIT-I

Overview of Software Engineering


Software Engineering:
Software: a collection of related programs.
Engineering: the process of designing and building products using well-defined principles.
Software engineering is thus the branch of engineering that deals with the development of software products.

 The history of software engineering begins in the 1960s; the term is credited to Margaret Hamilton, an American computer scientist.

 Software engineering is defined as the process of analyzing user requirements and then designing, building, and testing a software application that will satisfy those requirements.

 Software engineering is an engineering branch associated with developing products within well-defined principles.

The important reasons behind the popularity of software engineering:

1. Large software
2. Adaptability
3. Cost and quality
4. Scalability
5. Dynamic nature
6. Management

Challenges of Software Engineering:

Some critical challenges faced by software engineers:

 In safety-critical areas such as space, aviation, and nuclear power plants, the cost of software failure can be massive because lives are at risk.

 Increased market demand for fast turnaround time.

 Dealing with the increased complexity of software needed for new applications.

 The diversity of software systems that must communicate with each other.

Characteristics of Good Software:

Good software must satisfy the following attributes:
Operational: This characteristic tells us how well the software works in operation. It can be measured on:

 Budget

 Efficiency

 Usability

 Dependability

 Correctness

 Functionality

 Safety

 Security

Transitional: This aspect is essential when the software is moved from one platform to another:

 Interoperability

 Reusability

 Portability

 Adaptability

Maintenance: This aspect describes how well the software can adapt itself to a quickly changing environment:

 Flexibility

 Maintainability

 Modularity

 Scalability

Introduction to Object-Oriented Programming (OOP)


Object-Oriented Concepts:

1. Attributes: a collection of data values that describe a class.

2. Class: encapsulates the data and procedural abstractions required to describe the
content and behavior of some real-world entity. In other words, class is a generalized
description that describes the collection of similar objects.

3. Objects: instances of a specific class. Objects inherit a class’s attributes and operations.

4. Operations: also called methods and services, provide a representation of one of the
behaviors of the class.

5. Subclass: a specialization of the superclass. A subclass can inherit both attributes and operations from a superclass.

6. Superclass: also called a base class, is a generalization of a set of classes that are
related to it.

7. Inheritance: a mechanism by which child classes inherit the properties of their parent classes.

8. Abstraction: a mechanism by which implementation details are hidden from the user.

9. Encapsulation: Binding data together and protecting it from the outer world is
referred to as encapsulation.
10. Polymorphism: Mechanism by which functions or entities are able to exist in
different forms.
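The concepts above can be illustrated with a short Python sketch (the `Vehicle` and `Car` classes and their attributes are invented here purely for illustration):

```python
class Vehicle:                      # superclass (base class)
    def __init__(self, name):
        self.name = name            # attribute: a data value describing the object
        self.__running = False      # encapsulation: state hidden from the outer world

    def start(self):                # operation (method): one behavior of the class
        self.__running = True

    def describe(self):             # abstraction: callers need not know the internals
        return f"{self.name} ({'running' if self.__running else 'stopped'})"


class Car(Vehicle):                 # subclass: inherits attributes and operations
    def describe(self):             # polymorphism: the same operation, a different form
        return "Car: " + super().describe()


v = Car("Swift")                    # object: an instance of a specific class
v.start()                           # inherited operation from the superclass
print(v.describe())                 # → Car: Swift (running)
```

Calling `describe()` on a `Car` runs the subclass version even through a `Vehicle`-typed reference, which is exactly what polymorphism means in practice.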

Unified Modelling Language (UML)


UML consists of an integrated set of diagrams that are created to specify, visualize, construct and document the artifacts of a software system. UML is a useful technique while creating object-oriented software and working with the software development process. In UML, graphical notations are used to represent the design of a software project. UML also helps in validating the architectural design of the software.

Unified Modeling Language (UML) is a general-purpose modelling language. The main aim of UML is to define a standard way to visualize the way a system has been designed.

UML is a pictorial language useful for making software blueprints; it is not a programming language. UML has a direct relation with object-oriented analysis and design. UML makes use of elements and forms associations between them to form diagrams. Diagrams in UML can be broadly classified as structural diagrams and behavioral diagrams.
Introduction to software development process

A software engineering paradigm, or process model, is an abstract representation of a process.

The process model is chosen based on the nature of the software project and application; methods and tools are then applied to obtain the deliverable product.

Software development can be carried out using a problem-solving loop.

A set of problems raised during the software development process is called a software crisis; these include:
o Size: Software is becoming more expensive and more complex with the
growing complexity and expectation out of software. For example, the code in
the consumer product is doubling every couple of years.
o Quality: Many software products have poor quality, i.e., defects appear after the product is put into use due to ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.
o Cost: Software development is costly in terms of both the time taken to develop and the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.
o Delayed Delivery: Serious schedule overruns are common. Very often the
software takes longer than the estimated time to develop, which in turn leads
to cost shooting up. For example, one in four large-scale development projects
is never completed.
 To overcome this software crisis, we have to follow the SDLC.
 To develop a software product, the development team must identify a suitable life cycle model for the particular project.
Software Development Life Cycle (SDLC)
SDLC (Software Development Life Cycle): a step-by-step procedure or systematic approach to developing software. It is a descriptive and diagrammatic representation of the software life cycle.

 It consists of various phases: Planning, Analysis, Design, Implementation, and Maintenance.

Process Models:
A (Software/System) process model is a description of the sequences of activities
carried out in an SE project, and the relative order of these activities.

A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes:

 Software specification: The functionality of the software and constraints on its operation must be defined.
 Software development: The software that meets the requirements must be produced.
 Software validation: The software must be validated to ensure that it does what the customer wants.
 Software evolution: The software must evolve to meet changing client needs.

The most used, popular and important SDLC models are given below:

1. Waterfall model
2. Iterative model
3. RAD model
4. Prototype model
5. Agile model
6. V model
7. Spiral model
8. Incremental model
1)Waterfall Model:
 The Waterfall Model is also referred to as a linear-sequential life cycle model. It is very simple to understand and use.
 In a waterfall model, each phase must be completed before the next phase can begin, and there is no overlapping between phases.
 The Waterfall model is the earliest SDLC approach that was used for software
development.
 The waterfall Model illustrates the software development process in a linear
sequential flow.

Waterfall Model - Design


 The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project.
 In "The Waterfall" approach, the whole process of software development is divided
into separate phases.
 In this Waterfall model, typically the outcome of one phase acts as the input for the
next phase sequentially.

The following illustration is a representation of the different phases of the Waterfall Model.

1. Requirement Gathering and Analysis: All possible requirements of the system to be developed are captured in this phase and documented in a Software Requirement Specification (SRS) document. These requirements are gathered from the customers.
2. System Design: The requirement specifications from the first phase are studied in this phase and the system design is prepared. The system design helps in specifying hardware and system requirements and in defining the overall system architecture.
3. Implementation: With inputs from the system design, the system is first developed in
small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.
4. Integration and Testing: All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
5. Deployment of System: Once functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
6. Maintenance: Some issues come up in the client environment; patches are released to fix them. Better versions are also released to enhance the product. Maintenance is done to deliver these changes in the customer environment.

Waterfall Model - Advantages

1. This model is simple and easy to understand and use.


2. It is easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
3. In this model phases are processed and completed one at a time. Phases do not
overlap.
4. Waterfall model works well for smaller projects where requirements are clearly
defined and very well understood.

Waterfall Model - Disadvantages

1. Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
2. No working software is produced until late during the life cycle.
3. High amounts of risk and uncertainty.
4. Not a good model for complex and object-oriented projects.
5. Poor model for long and on-going projects.
6. Not suitable for the projects where requirements are at a moderate to high risk of
changing.

2) Iterative Model:
In the Iterative model, iterative process starts with a simple implementation of a small
set of the software requirements and iteratively enhances the evolving versions until the
complete system is implemented and ready to be deployed.

An iterative life cycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which is then reviewed to identify further requirements.

This process is then repeated, producing a new version of the software at the end of
each iteration of the model.

This model consists of the same phases as the waterfall model, but with fewer restrictions. Generally, the phases occur in the same order as in the waterfall model, but they may be conducted in several cycles. A usable product is released at the end of each cycle, with each release providing additional functionality.

Iterative Model - Design

 Iterative process starts with a simple implementation of a subset of the software


requirements and iteratively enhances the evolving versions until the full system is
implemented.
 At each iteration, design modifications are made and new functional capabilities are
added.
 The basic idea behind this method is to develop a system through repeated cycles
(iterative) and in smaller portions at a time (incremental).

The following illustration is a representation of the Iterative and Incremental model

1. Requirements and Analysis: In this phase, requirements are collected from customers and examined by an analyst to determine whether they can be met. The analyst also examines what should or should not be achieved within the budget. After this, the software team proceeds to the next stage.
2. Design: In the design phase, the team designs the software with different diagrams
such as data flow diagram, activity diagram, class diagram, state transition diagram,
etc.
3. Implementation: In the implementation phase, coding is done and the design is converted into complete software.
4. Testing: After completing the coding phase, software testing starts using various testing methods. The most common are white-box, black-box, and gray-box testing.
5. Deployment: After completing all the steps, the software is deployed in its work environment.
6. Review: After product deployment, the review phase is carried out to check the behavior and validity of the developed product. If an error is found, the cycle begins again.
7. Maintenance: After deployment of the software in the work environment, there may be some bugs, errors, or required updates. Maintenance includes debugging and adding new options.

Iterative Model - Advantages

1. It is easily adaptable to the ever changing needs of the project as well as the client.
2. It is best suited for agile organisations.
3. It is more cost effective to change the scope or requirements in Iterative model.
4. Parallel development can be planned.
5. Testing and debugging during smaller iteration is easy.
6. Risks are identified and resolved during each iteration, and each iteration is easily managed.
7. In iterative model less time is spent on documenting and more time is given for
designing.
8. One can get reliable user feedback, when presenting sketches and blueprints of the
product to users for their feedback.

Iterative Model - Disadvantages

1. More resources may be required.


2. Although the cost of change is lower, the model is still not very suitable for frequently changing requirements.
3. More management attention is required.
4. It is not suitable for smaller projects.
5. Highly skilled resources are required for risk analysis.
6. Project progress is highly dependent upon the risk analysis phase.
7. Defining increments may require definition of the complete system.

3) RAD (Rapid Application Development) Model:


 The Rapid Application Development model was first presented by IBM in the 1980s.
 Using the RAD model, a software product is developed in a short period of time.
 The initial activity starts with communication between the customer and the developer.
 Planning depends on the initial requirements, and the requirements are then divided into groups.
 Planning is important so that teams can work together on different modules.
The RAD model consists of following phases:

1) Business Modeling
2) Data modeling

3) Process modeling

4) Application generation

5) Testing and turnover

1) Business Modeling:

 Business modeling describes the flow of information between the various functions in the project.
 For example, what type of information is produced by each function, and which functions handle that information.
 It is necessary to perform a complete business analysis to get the essential business information.

2) Data modeling:

 The information from the business modeling phase is refined into a set of data objects that are essential to the business.
 The attributes of each object are identified, and the relationships between objects are defined.

3) Process modeling:

 The data objects defined in the data modeling phase are transformed to achieve the information flow needed to implement the business model.
 Process descriptions are created for adding, modifying, deleting, or retrieving a data object.

4) Application generation:
 In the application generation phase, the actual system is built.
 To construct the software the automated tools are used.

5) Testing and turnover:

 The prototypes are independently tested after each iteration so that the overall testing
time is reduced.
 The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components are already tested.

Advantages of RAD Model:

1. The process of application development and delivery is fast.
2. This model is flexible if any changes are required.
3. Reviews are taken from the clients at the start of development, so there is less chance of missing requirements.

Disadvantages of RAD Model:

1. The feedback from the user is required at every development phase.


2. This model is not a good choice for long term and large projects.

4) Prototype Model:
The Prototyping Model is one of the most popularly used Software Development Life
Cycle Models (SDLC models). This model is used when the customers do not know the exact
project requirements beforehand. In this model, a prototype of the end product is first
developed, tested and refined as per customer feedback repeatedly till a final acceptable
prototype is achieved which forms the basis for developing the final product.
1. Requirements gathering and analysis:
Requirements analysis is the first step in developing a prototyping model. During this phase, the system’s requirements are precisely defined. Users of the system are interviewed to determine what they expect from it.

2. Quick design:

The second phase could consist of a preliminary design or a quick design. During this
stage, the system’s basic design is formed. However, it is not a complete design. It
provides the user with a quick overview of the system. The rapid design aids in the
development of the prototype.

3. Build a Prototype:

During this stage, an actual prototype is built based on the knowledge gained from the quick design. It is a small, low-level working model of the desired system.

4. Initial user evaluation:

The proposed system is presented to the client for preliminary testing at this stage. This helps to investigate the working model’s strengths and weaknesses. Customer feedback and suggestions are gathered and forwarded to the developer.

5. Refining prototype:

If the user is dissatisfied with the current prototype, it is refined according to the user’s feedback and suggestions. When the user is satisfied with the upgraded prototype, a final system is built based on the approved final prototype.

6. Implement Product and Maintain:

Once the final system is developed based on the approved prototype, it is thoroughly tested and deployed to production. The system then undergoes routine maintenance to minimize downtime and prevent large-scale failures.

Advantage of Prototype Model

1. Reduces the risk of incorrect user requirements.
2. Good where requirements are changing or uncommitted.
3. A regularly visible process aids management.
4. Supports early product marketing.
5. Reduces maintenance cost.
6. Errors can be detected much earlier, as the prototype is developed side by side with its evaluation.

Disadvantage of Prototype Model

1. An unstable or badly implemented prototype often becomes the final product.
2. Requires extensive customer collaboration:
 Costs the customer money
 Needs a committed customer
 Difficult to finish if the customer withdraws
 May be too customer-specific, with no broad market
3. Difficult to know how long the project will last.
4. Easy to fall back into code-and-fix without proper requirement analysis, design, customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools & techniques are required to build a prototype.
7. It is a time-consuming process.

5)Agile model
The meaning of Agile is swift or versatile. “Agile process model” refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.

Each iteration is considered as a short time "frame" in the Agile process model, which
typically lasts from one to four weeks. The division of the entire project into smaller parts
helps to minimize the project risk and to reduce the overall project delivery time
requirements. Each iteration involves a team working through a full software development life
cycle including planning, requirements analysis, design, coding, and testing before a working
product is demonstrated to the client.
Phases of Agile Model:

The phases in the Agile model are as follows:

1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project. Based
on this information, you can evaluate technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to
show the work of new features and show how it will apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. The team receives feedback about the product and works through it.
6) V-Model

The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the SDLC must be complete before the next phase starts. It follows a sequential design process, like the waterfall model. Testing is planned in parallel with the corresponding stage of development.

Verification: It involves a static analysis method (review) done without executing code. It is the process of evaluating the product development process to find out whether the specified requirements are met.

Validation: It involves dynamic analysis methods (functional and non-functional); testing is done by executing code. Validation is the process of evaluating the software after the completion of the development process to determine whether it meets the customer's expectations and requirements.

So the V-Model contains Verification phases on one side and Validation phases on the other. The Verification and Validation processes are joined by the coding phase in a V shape; thus it is known as the V-Model.
UNIT-II

Requirements Analysis and Design:


Requirement Analysis and specification:
Requirements Analysis is the process of defining the expectations of the users for an
application that is to be built or modified. It involves all the tasks that are conducted to
identify the needs of different stakeholders. Therefore requirements analysis means to
analyze, document, validate and manage software or system requirements.

High-quality requirements are documented, actionable, measurable, testable, and traceable; they help to identify business opportunities and are defined in enough detail to facilitate system design.

Requirements Analysis is the process of determining user expectations for a new or


modified product. These features, called requirements, must be quantifiable, relevant and
detailed. In software engineering, such requirements are often called functional
specifications.

Requirements Analysis Process:


The software requirements analysis process involves the following steps/phases:

1. Eliciting requirements
2. Analyzing requirements
3. Requirements modeling
4. Review and retrospective

1- Eliciting requirements

The process of gathering requirements by communicating with the customers is known as


eliciting requirements.

2- Analyzing requirements

This step helps to determine the quality of the requirements. It involves identifying whether
the requirements are unclear, incomplete, ambiguous, and contradictory. These issues
resolved before moving to the next step.

3- Requirements modeling
In Requirements modeling, the requirements are usually documented in different formats
such as use cases, user stories, natural-language documents, or process specification.

4- Review and retrospective

This step is conducted to reflect on the previous iterations of requirements gathering in a bid
to make improvements in the process going forward.
Requirements Analysis Techniques:
There are different techniques used for business Requirements Analysis. Below is a list of
different business Requirements Analysis Techniques:

1. Business process modeling notation (BPMN)


2. UML (Unified Modeling Language)
3. Flowchart technique
4. Data flow diagram(DFD)
5. Role Activity Diagrams (RAD)
6. Gantt Charts
7. IDEF (Integrated Definition for Function Modeling)
8. Gap Analysis

1- Business process modeling notation (BPMN)

This technique is similar to creating process flowcharts, although BPMN has its own symbols
and elements. Business process modeling and notation is used to create graphs for the
business process. These graphs simplify understanding the business process. BPMN is widely
popular as a process improvement methodology.

2- UML (Unified Modeling Language)

UML consists of an integrated set of diagrams that are created to specify, visualize, construct
and document the artifacts of a software system. UML is a useful technique while creating
object-oriented software and working with the software development process. In UML,
graphical notations are used to represent the design of a software project. UML also help in
validating the architectural design of the software.

3- Flowchart technique

A flowchart depicts the sequential flow and control logic of a set of activities that are related.
Flowcharts are in different formats such as linear, cross-functional, and top-down. The
flowchart can represent system interactions, data flows, etc. Flow charts are easy to
understand and can be used by both the technical and non-technical team members.
Flowchart technique helps in showcasing the critical attributes of a process.

4- Data flow diagram


This technique is used to visually represent systems and processes that are complex and
difficult to describe in text. Data flow diagrams represent the flow of information through a
process or a system. It also includes the data inputs and outputs, data stores, and the various
subprocess through which the data moves. DFD describes various entities and their
relationships with the help of standardized notations and symbols. By visualizing all the
elements of the system it is easier to identify any shortcomings. These shortcomings are then
eliminated in a bid to create a robust solution.

5- Role Activity Diagrams (RAD)

A role-activity diagram (RAD) is a role-oriented process model. Role activity diagrams give a high-level view that captures the dynamics and role structure of an organization. Roles are used to group activities into units of responsibility. Activities are the basic parts of a role; an activity may be carried out in isolation, or it may require coordination with other activities within the role.

6- Gantt Charts

Gantt charts are used in project planning as they provide a visual representation of scheduled tasks along with their timelines. Gantt charts help to see what is scheduled to be completed by which date. The start and end dates of all the tasks in the project can be seen in a single view.

7- IDEF (Integrated Definition for Function Modeling)

The Integrated Definition for Function Modeling (IDEF) technique represents the functions of a process and their relationships to child and parent systems with the help of boxes. It provides a blueprint for understanding an organization’s system.

8- Gap Analysis

Gap analysis is a technique which helps to analyze the gaps in performance of a software
application to determine whether the business requirements are met or not. It also involves
the steps that are to be taken to ensure that all the business requirements are met successfully.
Gap denotes the difference between the present state and the target state. Gap analysis is also
known as need analysis, need assessment or need-gap analysis.

Use cases and scenarios


Analysis Model:
Analysis Model is a technical representation of the system. It acts as a link between
system description and design model. In Analysis Modelling, information, behavior, and
functions of the system are defined and translated into the architecture, component, and
interface level design in the design modeling.

It uses a combination of text and diagrams to represent software requirements (data,


function, and behavior) in an understandable way.
Analysis modeling represents the user's requirements by depicting the software in three different domains: the information domain, the functional domain, and the behavioral domain.

The analysis model is designed by a software engineer, modeler, systems analyst, or project manager.
1. Scenario-based elements

 These elements represent the system from the user's point of view.

 Scenario-based elements include use case diagrams and user stories.

2. Class-based elements: The elements of the class-based model consist of classes and objects, attributes, operations, and class-responsibility-collaborator (CRC) models.

 These elements describe the objects manipulated by the system.

 They define the objects, attributes, and relationships.

 Collaboration occurs between the classes.

 Class-based elements include the class diagram and collaboration diagram.

3. Behavioral elements:

 Behavioral elements represent the state of the system and how it is changed by external events.

 Behavioral elements include the sequence diagram and state diagram.


4. Flow-oriented elements

 As information flows through a computer-based system, it gets transformed.

 These elements show how data objects are transformed as they flow between the various system functions.

 Flow-oriented elements include the data flow diagram and control flow diagram.
Object-oriented analysis and design (OOAD)
OOAD stands for Object Oriented Analysis and Design.

Object: An object is a combination of data and logic that represents some real-world entity.

 Data are called attributes.

 Logic is called functions (methods or operations).

Example:

 Object: car
 Attributes: car registration number, car name, colour, number of doors.
 Functions: stop, start, change, mileage.
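The car example above can be sketched as a class in Python (the language and the exact names here are illustrative additions, not part of the original notes):

```python
class Car:
    """A real-world entity modeled as data (attributes) plus logic (methods)."""

    def __init__(self, reg_no, name, colour, doors):
        # Attributes: the object's data
        self.reg_no = reg_no
        self.name = name
        self.colour = colour
        self.doors = doors
        self.running = False

    # Functions (methods): the object's logic
    def start(self):
        self.running = True

    def stop(self):
        self.running = False

car = Car("KA-01-1234", "Altima", "blue", 4)
car.start()
```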

Object Representation:

OOAD Object Model:

OOA (Object Oriented Analysis): do the right thing.


Analysis is the process of investigating the problem and the requirements rather than finding a solution. It is based on a set of basic principles, which are as follows:

 The information domain is modeled.


 Behavior is represented.
 The function is described.
 Data, functional, and behavioral models are partitioned to uncover greater detail.
 Early models represent the essence of the problem, while later ones provide implementation
details.
OOD (Object Oriented Design): do the thing right.
Design is the process of finding a conceptual solution that fulfills the requirements rather than implementing it. It has the following four layers:

 The Subsystem Layer: It represents the subsystem that enables software to achieve user
requirements and implement technical frameworks that meet user needs.
 The Class and Object Layer: It represents the class hierarchies that enable the system to
develop using generalization and specialization. This layer also represents each object.
 The Message Layer: It represents the design details that enable each object to communicate
with its partners. It establishes internal and external interfaces for the system.
 The Responsibilities Layer: It represents the data structure and algorithmic design for all the
attributes and operations for each object.

What is OOAD?
 It is a software engineering approach which models the system as a set of interacting objects.
 Each object represents a system entity which plays a vital role to build the system.

Define use case:

 A use case is a list of actions defining the interaction between an actor and a system to achieve a goal.
 The actor can be a human or an external system.
 Use cases are a popular tool in requirements analysis.

Define domain model:

 Domain model is a conceptual model of the domain that incorporate both behaviour and
data.
 It gives very good information about the problem to solve.

Define interaction diagram:

 An interaction diagram describes the dynamic behavior of a system by showing how objects interact with one another through the messages they exchange over time.

Define class diagrams:

 Classes are represented by boxes.

 The top compartment holds the class name, the middle compartment the attributes, and the bottom compartment the functions (or operations).
Design patterns
What are Design Patterns?

In Object-Oriented Analysis and Design (OOAD), design patterns are reusable fixes for typical
software design issues that occur during the development process. These patterns capture best
practices, principles, and guidelines for creating modular, scalable, and maintainable software
systems. They offer an organized method for resolving common design problems, encouraging
code reuse, adaptability, and ease of maintenance. OOAD design patterns that are frequently used
include the following:
 Creational Patterns: These patterns focus on the techniques involved in the creation of
objects, helping in their appropriate creation. Examples include the Factory Method pattern,
Builder pattern, and Singleton pattern.
 Structural Patterns: Structural patterns deal with object composition and class relationships,
aiming to simplify the structure of classes and objects. Examples include the Adapter pattern,
Composite pattern, and Decorator pattern.
 Behavioral Patterns: Behavioral patterns address how objects interact and communicate with
each other, focusing on the delegation of responsibilities between objects. Examples include the
Observer pattern, Strategy pattern, and Command pattern.
 Architectural Patterns: These patterns provide high-level templates for organizing a software
system's general structure. Examples include the Model-View-Controller (MVC) pattern,
Layered Architecture pattern, and Microservices pattern.
Developers can create software systems that are more reliable, maintainable, and scalable by
utilizing these design patterns, which provide tried-and-true solutions to common design issues. In
addition, design patterns facilitate team collaboration and increase overall development efficiency
by promoting consistency, code reusability, and ease of understanding.

Benefits of using Design Patterns

 Reduced Complexity: By leveraging existing patterns, developers can avoid reinventing the
wheel. This saves time and effort, leading to faster development cycles.
 Improved Code Quality: Design patterns often promote good coding practices, resulting in
cleaner, more modular code that's easier to understand, maintain, and modify.
 Enhanced Communication: Design patterns provide a common language for developers,
fostering better communication and collaboration within a team.
 Promotes Reusability: The core concept of design patterns is reusability. They can be applied
in different contexts within a project or even across multiple projects.
 Proven Solutions: Design patterns represent well-tested solutions, offering confidence that the
chosen approach is effective and avoids potential pitfalls.
In essence, design patterns empower developers to create well-structured, maintainable, and
efficient object-oriented software systems by offering a library of reusable solutions to common
design challenges.

Commonly Used Design Patterns

In Object-Oriented Analysis and Design (OOAD), several design patterns are commonly used to
address recurring design problems. Here are some of the most commonly used design patterns:
1. Singleton Pattern
Ensures that a class has only one instance and provides a global point of access to it. Useful for
managing global resources or maintaining a single configuration throughout an application.
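A minimal Python sketch of the Singleton pattern (one common variant among several; the `Config` class is invented for illustration):

```python
class Config:
    """Singleton: every construction returns the same shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            # Create the one-and-only instance on first use
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance

a = Config()
b = Config()
a.settings["debug"] = True  # visible through b as well, since a is b
```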
2. Factory Method Pattern
Defines an interface for creating an object, but allows subclasses to alter the type of objects that
will be created. Useful for decoupling the creation of objects from the client code.
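A hedged sketch of the Factory Method pattern in Python (class names are invented for illustration): the creator defines the workflow against an abstract product, while subclasses decide which concrete product to instantiate.

```python
from abc import ABC, abstractmethod

class Document(ABC):
    @abstractmethod
    def render(self): ...

class PdfDocument(Document):
    def render(self):
        return "pdf"

class HtmlDocument(Document):
    def render(self):
        return "html"

class Editor(ABC):
    """Creator: open() is written against the abstract Document interface."""
    @abstractmethod
    def create_document(self):
        """The factory method: subclasses choose the concrete product."""

    def open(self):
        return self.create_document().render()

class PdfEditor(Editor):
    def create_document(self):
        return PdfDocument()

result = PdfEditor().open()  # client code never names PdfDocument directly
```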
3. Abstract Factory Pattern
Provides an interface for creating families of related or dependent objects without specifying their
concrete classes. Useful for creating objects with varying implementations but ensuring they work
together seamlessly.
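The Abstract Factory idea can be sketched with a toy GUI example (assumed purely for illustration): one factory object produces a whole family of widgets that are guaranteed to match.

```python
from abc import ABC, abstractmethod

# Two product families: every widget from one factory shares a style.
class DarkButton:     style = "dark"
class LightButton:    style = "light"
class DarkCheckbox:   style = "dark"
class LightCheckbox:  style = "light"

class WidgetFactory(ABC):
    @abstractmethod
    def make_button(self): ...
    @abstractmethod
    def make_checkbox(self): ...

class DarkFactory(WidgetFactory):
    def make_button(self):   return DarkButton()
    def make_checkbox(self): return DarkCheckbox()

class LightFactory(WidgetFactory):
    def make_button(self):   return LightButton()
    def make_checkbox(self): return LightCheckbox()

factory = DarkFactory()
button, checkbox = factory.make_button(), factory.make_checkbox()
```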
4. Builder Pattern
Separates the construction of a complex object from its representation, allowing the same
construction process to create different representations. Useful for creating objects with a large
number of configuration options or parameters.
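A small Python sketch of the Builder pattern (the pizza example is invented for illustration):

```python
class Pizza:
    def __init__(self):
        self.toppings = []

class PizzaBuilder:
    """Builder: assembles a complex object step by step; each step
    returns self so the calls can be chained fluently."""
    def __init__(self):
        self._pizza = Pizza()

    def add(self, topping):
        self._pizza.toppings.append(topping)
        return self

    def build(self):
        return self._pizza

# The same construction process can yield different representations.
margherita = PizzaBuilder().add("cheese").add("basil").build()
plain = PizzaBuilder().build()
```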
5. Prototype Pattern
Creates new objects by copying an existing object, known as the prototype, rather than creating
new instances from scratch. Useful for improving performance and reducing the overhead of object
creation.
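The Prototype pattern in a few lines of Python (the `Shape` class is illustrative), using `copy.deepcopy` to duplicate an existing object instead of rebuilding it:

```python
import copy

class Shape:
    def __init__(self, kind, points):
        self.kind = kind
        self.points = points  # imagine these are expensive to compute

    def clone(self):
        # Deep copy so the clone owns its own mutable state.
        return copy.deepcopy(self)

prototype = Shape("polygon", [(0, 0), (1, 0), (1, 1)])
clone = prototype.clone()
clone.points.append((0, 1))  # does not affect the prototype
```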
6. Adapter Pattern
Allows incompatible interfaces to work together by providing a wrapper or intermediary that
converts the interface of one class into another interface expected by the client. Useful for
integrating legacy code or third-party libraries into new systems.
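A sketch of the Adapter pattern (the temperature sensor example is invented for illustration): a wrapper converts the interface the legacy class offers into the interface the client expects.

```python
class LegacySensor:
    """Existing class with an incompatible interface (Fahrenheit)."""
    def read_fahrenheit(self):
        return 212.0

class SensorAdapter:
    """Wrapper that converts the legacy interface into the
    Celsius-based interface the client expects."""
    def __init__(self, legacy):
        self._legacy = legacy

    def read_celsius(self):
        return (self._legacy.read_fahrenheit() - 32) * 5 / 9

sensor = SensorAdapter(LegacySensor())
temp = sensor.read_celsius()
```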
7. Observer Pattern
Defines a one-to-many dependency between objects so that when one object changes state, all its
dependents are notified and updated automatically. Useful for implementing event handling
systems or maintaining consistency between related objects.
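The one-to-many notification described above can be sketched as follows (names are illustrative):

```python
class Subject:
    """Maintains a list of dependents and notifies them on state change."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:  # one-to-many automatic notification
            obs.update(state)

class Display:
    def __init__(self):
        self.last_seen = None

    def update(self, state):
        self.last_seen = state

subject = Subject()
d1, d2 = Display(), Display()
subject.attach(d1)
subject.attach(d2)
subject.set_state(42)
```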
8. Strategy Pattern
Defines a family of algorithms, encapsulates each one, and makes them interchangeable. It allows
the algorithm to vary independently from the clients that use it. Useful for selecting algorithms at
runtime or providing different implementations of the same behavior.
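A minimal Strategy sketch in Python (functions stand in for strategy classes; names are illustrative): the context delegates to an interchangeable algorithm that can be swapped at runtime.

```python
class Sorter:
    """Context: holds a strategy that can be swapped at runtime."""
    def __init__(self, strategy):
        self.strategy = strategy

    def sort(self, data):
        return self.strategy(data)

# Two interchangeable algorithms with the same signature.
def ascending(data):
    return sorted(data)

def descending(data):
    return sorted(data, reverse=True)

s = Sorter(ascending)
up = s.sort([3, 1, 2])
s.strategy = descending  # the algorithm varies independently of the client
down = s.sort([3, 1, 2])
```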
These design patterns provide solutions to common design problems encountered during software
development and promote principles such as code reuse, modularity, and flexibility in OOAD.

UML modelling techniques


(class diagrams, sequence diagrams, state machine diagrams, activity diagrams)

What is UML:
Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed.

UML is a pictorial language used to make software blueprints; it is not a programming language. UML is closely linked with object-oriented analysis and design. UML makes use of elements and forms associations between them to build diagrams. Diagrams in UML can be broadly classified as:
Behavior Diagrams: Behavioral diagrams basically capture the dynamic aspect of a system.
Dynamic aspect can be further described as the changing/moving parts of a system. Behavior
diagrams include: Use Case Diagrams, State Diagrams, Activity Diagrams and Interaction
Diagrams.

UML has the following four types of behavioral diagrams:

1. Activity Diagram: An activity diagram describes the flow of control in a system. It consists of activities and links. The flow can be sequential, concurrent, or branched.
2. State Machine Diagram: The state machine diagram is also called the Statechart or State
Transition diagram. It models event-based systems to handle the state of an object. It also
defines several distinct states of a component within the system. Each object/component has
a specific state.
 Behavioral state machine
 Protocol state machine
3. Use Case Diagram: Use case diagrams are a set of use cases, actors, and their relationships.
They represent the use case view of a system. These diagrams also identify the interactions
between the system and its actors.
4. Interaction diagram: It is used to capture the interactive behavior of a system.
a) Communication diagrams: Communication diagrams are also known as collaboration
diagrams. They are used to show relationships and interactions among software objects.
b) Interaction overview diagram: An interaction overview diagram is a form of activity
diagram in which the nodes represent interaction diagrams.
c) Sequence diagram: The sequence diagram represents the flow of messages in the system
and is also termed as an event diagram.
d) Timing diagram: The timing diagram describes how an object changes from one state to another over time.
Structural Diagrams: The structural diagrams represent the static aspect of the
system. These static parts are represented by classes, interfaces, objects, components, and nodes.
Structural Diagrams includes Component Diagrams, Object Diagrams, Class Diagrams and
Deployment Diagrams.

UML has the following six types of structural diagrams:

1. Class diagrams: Class diagrams are the most common diagrams used in UML. Class
diagram consists of classes, interfaces, associations, and collaboration. Class diagrams
basically represent the object-oriented view of a system, which is static in nature.
 The top compartment contains the class name, the middle compartment the attributes, and the bottom compartment the functions (or operations).
2. Component Diagram: Component diagrams are used to represent how the physical
components in a system have been organized.
3. Object Diagram: An Object Diagram can be referred to as a screenshot of the instances in a
system and the relationship that exists between them.
4. Composite Structure Diagram: We use composite structure diagrams to represent the
internal structure of a class and its interaction points with other parts of the system.
5. Deployment Diagram: Deployment Diagrams are used to represent system hardware and its
software.

6. Package Diagram: We use Package Diagrams to depict how packages and their elements
have been organized. A package diagram simply shows us the dependencies between
different packages and the internal composition of packages.

Basic Structural Modeling: classes, relationships, common mechanisms, and diagrams. Class & Object Diagrams: terms, concepts, and modeling techniques for class and object diagrams.

Basic Structural Modeling:


Structural modeling captures the static features of a system. These static aspects
represent those parts of a diagram, which forms the main structure and are therefore stable.

These static parts are represented by classes, interfaces, objects, components, and nodes. The five structural diagrams are:

 Class diagram
 Object diagram
 Component diagram
 Deployment diagram
 Package diagram

Class diagram:
Class diagrams are the most common diagrams used in UML. Class diagram consists of
classes, interfaces, associations, and collaboration. Class diagrams basically represent the
object-oriented view of a system, which is static in nature.
Elements of UML Class Diagram:
 Class
 Attributes
 Operations
 Relationships

A class represents state (data/attributes) and behavior (logic/ methods/ functions/ operations).
Class is represented by a rectangle (box).

Terms & concepts :


(1) Class:

Class Name: Class name is the only mandatory information. The name of the class appears
in the first partition.

(2) Attributes:

Class Attributes: Each attribute has a data type. The attributes are shown in the second
partition, the attribute type is shown after the colon.

(3) Operations:

Class Operations: Each operation has a signature. The operations are shown in the third
partition, the return type of a method is shown after the colon at the end of the method
signature.

Class Visibility: +, -, # are symbols before an attribute and operation name in a class denote
the visibility of the attribute and operation.
 + indicates Public attribute / operation
 - indicates Private attribute / operation
 # indicates Protected attribute / operation
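As a rough illustration of how these visibility markers map to code, Python relies on naming conventions rather than enforced access control (this mapping is conventional, not part of UML itself; the `Account` class is invented for illustration):

```python
class Account:
    def __init__(self):
        self.owner = "Alice"    # +  public attribute
        self._balance = 0.0     # #  protected by convention (single underscore)
        self.__pin = "1234"     # -  private (double underscore triggers name mangling)

    def deposit(self, amount):  # +  public operation
        self._balance += amount

acct = Account()
acct.deposit(50.0)
```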

Parameter Directionality: Each parameter in an operation (method) may be denoted as in, out or inout, which specifies its direction with respect to the caller. This directionality is shown before the parameter name.

(4) Relationships:

A class may be involved in one or more relationships with other classes. A relationship can be
one of the following types:

1. Association

Associations are relationships between classes in a UML Class Diagram. They are
represented by a solid line between classes. Associations are typically named using a verb or
verb phrase which reflects the real world problem domain.
2. Inheritance (or Generalization):

A generalization is a taxonomic relationship between a more general classifier and a more specific classifier. Each instance of the specific classifier is also an indirect instance of the general classifier. Thus, the specific classifier inherits the features of the more general classifier.
3. Realization

Realization is a relationship between the blueprint class and the object containing its
respective implementation level details.

4. Dependency

An object of one class might use an object of another class in the code of a method. If the
object is not stored in any field, then this is modeled as a dependency relationship.

5. Aggregation

A special type of association. It represents a "part of" relationship.

6. Composition

A special type of aggregation where parts are destroyed when the whole is destroyed.
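The class relationships above can be contrasted in a short Python sketch (names are illustrative): inheritance is expressed with subclassing, aggregation by holding a reference to an externally owned part, and composition by creating and owning the parts internally.

```python
class Engine:
    pass

class Vehicle:  # more general classifier
    def describe(self):
        return "vehicle"

class Car(Vehicle):  # inheritance: every Car is-a Vehicle
    def __init__(self, engine):
        # Aggregation: the Engine is created elsewhere and only referenced,
        # so it can outlive this Car.
        self.engine = engine
        # Composition: the wheels are created and owned here,
        # so they go away when the Car is destroyed.
        self.wheels = ["wheel-%d" % i for i in range(4)]

shared_engine = Engine()
car = Car(shared_engine)
```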

Class- Common Modeling Techniques:


Modeling Simple Collaborations

 Identify the function or behavior of the part of a system you would like to model.
 For each function or mechanism identify the classes, interfaces, collaborations and
relationships between them.
 Use scenarios (sequence of activities) to walk through these things. You may find new
things or find that existing things are semantically wrong.
 Populate the things found in the above steps. For example, take a class and fill its
responsibilities. Now, convert these responsibilities into attributes and operations.

Modeling a Logical Database Schema


 Identify the classes whose state must be saved over the lifetime of the application.
 Create a class diagram and mark these classes as persistent by using tagged values.
 Provide the structural details for these classes like the attributes, associations with
other classes and the multiplicity.
 Minimize the common patterns which complicate the physical database design like
cyclic associations, one-to-one associations and n-ary associations.
 Provide the behavior for these classes by listing out the operations that are important
for data access and integrity.
 Wherever possible, use tools to convert the logical design to physical design.

Forward and Reverse Engineering

 Identify the rules for mapping the models to the implementation language of your
choice.
 Depending on the semantics of the language, you may want to restrict the information
in your UML models. For example, UML supports multiple inheritance. But some
programming languages might not allow this.
 Use tagged values to specify the target language.
 Use tools to convert your models to code.
Object Diagrams:
 Object diagrams are derived from class diagrams so object diagrams are dependent
upon class diagrams.
 Object diagrams represent an instance of a class diagram. The basic concepts are
similar for class diagrams and object diagrams. Object diagrams also represent the
static view of a system but this static view is a snapshot of the system at a particular
moment.
 Object diagrams are used to render a set of objects and their relationships as an
instance.
 The classifiers in the class, deployment, component and use-case diagrams may be
instantiated to build object diagrams. Object diagrams in UML provide a picture of
the instances in a system and the connections between them.
 Object diagrams are simple to create: they're made from objects represented by
rectangles linked together with lines. Take a look at the major elements of an object
diagram.

Objects: objects are instances of a class. For example, if "car" is a class, a 2009 Nissan
Altima is an object of a class.

Class Titles: Class titles are the specific attributes of a given class. In the family tree of
object diagram, class titles include the name, gender, and age of the family members. You
can list class titles as items on the object or even in the object's properties (such as color).
Class Attributes: Class attributes are represented by a rectangle with two tabs that indicates
a software element.

Links: Links are the lines that connect two shapes of an object diagram to each other. The
corporate object diagram below shows how departments are connected in the traditional
organizational chart style.

Object- Common Modeling Techniques :


Modeling Object Structures

 First, identify the function or behavior or part of a system you want to model as
collection of classes, interfaces and other things.
 For each function or mechanism identify the classes and interfaces that collaborate
and also identify the relationships between them.
 Consider a scenario (context) that walks through this mechanism and freeze at a
moment in time and identify the participating objects.
 Represent the state of objects by listing out the attributes and their values.
 Represent the links between objects which are instances of associations.

Forward and Reverse Engineering

Forward engineering an object diagram is theoretically possible but of limited practical value: since objects are created and destroyed dynamically at runtime, they cannot be represented statically.

 Choose the target (context) you want to reverse engineer.


 Use a tool to stop execution at a certain moment in time.
 Identify the objects that collaborate with each other and represent them in an object
diagram.
 To understand their semantics, expose these objects' states.
 Also identify the links between the objects to understand their semantics.
Common Mechanisms:
Common mechanisms can be classified into four types:

Specifications:

Specification provides a textual statement describing interesting aspects of a system.

Adornments:

 Adornments are textual or graphical items added to the basic notation of an element.

 They are used for explicit visual representation of those aspects of an element that go beyond the most important ones.

Common Divisions:

 In modeling, object-oriented systems get divided in multiple ways.


 For example, class vs. object, interface vs. implementation
 An object uses the same symbol as its class with its name underlined.

Extensibility Mechanisms

Extensibility mechanisms allow extending the language in controlled ways.


Extensibility mechanisms can be classified into:

Stereotypes, Tagged Values, Constraints

a) Stereotypes

 Stereotypes are used to create new building blocks from existing blocks
 New building blocks are domain-specific
 Stereotypes are used to extend the vocabulary of a system
 Graphically represented as a name enclosed by guillemets (« »)

b) Tagged Values

 Tagged values are used to add information to the element itself (not to its instances)
 Stereotypes help to create new building blocks, whereas tagged values help to create
new attributes
 These are commonly used to specify information relevant to code generation,
configuration management and so on.

c) Constraints

 Constraints are used to create rules for the model


 Rules that impact the behavior of the model, and specify conditions that must be met
 Can apply to any element in the model, i.e., attributes of a class, relationship
 Graphically represented as a string enclosed by braces {....} and placed near the
associated elements or connected to those elements by dependency relationships.
Basic Behavioral Modeling:
Basic Behavioral-Modeling: Interaction diagrams, Activity diagrams, State diagrams,
Deployment diagrams and Component diagrams.

UML behavioral diagrams visualize, specify, construct, and document the dynamic
aspects of a system.

The behavioral diagrams are categorized as follows: use case diagrams, interaction
diagrams, state–chart diagrams, and activity diagrams

 Interaction diagram
 Activity diagram
 State-chart diagram
 Use-case diagram

Interaction diagram:
 Interaction diagrams capture the dynamic behavior of the system.

 Interaction diagrams depict interactions of objects and their relationships. They also
include the messages that are exchanged between objects.

 There are two types of interaction diagrams:

1. Sequence Diagrams

2. Collaboration Diagram

Sequence Diagrams:
 Sequence diagrams are interaction diagrams

 A sequence diagram emphasizes the time ordering of messages.

 The sequence diagram represents the flow of messages in the system and is also
termed as an event diagram.
 It shows communication between any two lifelines as a time-ordered sequence of events in which these lifelines take part at run time.
 In UML, a lifeline is represented by a vertical dashed line descending from the participant, and message flow is represented by horizontal arrows between the lifelines. A sequence diagram incorporates iteration as well as branching.
Purpose of a Sequence Diagram:
1. To model high-level interaction among active objects within a system.
2. To model interaction among objects inside a collaboration realizing a use case.
3. It either models generic interactions or some certain instances of interaction.
Sequence Diagram Notations:
Lifeline
An individual participant in the sequence diagram is represented by a lifeline. It is positioned
at the top of the diagram.

Actor
A role played by an entity that interacts with the subject is called as an actor. It is out of the
scope of the system. It represents the role, which involves human users and external hardware
or subjects. An actor may or may not represent a physical entity, but it purely depicts the role
of an entity. Several distinct roles can be played by an actor or vice versa.

Activation
It is represented by a thin rectangle on the lifeline. It describes the time period in which an operation is performed by an element; the top and bottom of the rectangle mark the initiation and completion times, respectively.
Messages
The messages depict the interaction between the objects and are represented by arrows. They
are in the sequential order on the lifeline. The core of the sequence diagram is formed by
messages and lifelines.
Call Message: It defines a particular communication between the lifelines of an interaction, which represents that the target lifeline has invoked an operation.

Return Message: It defines a particular communication between the lifelines of an interaction that represents the flow of information from the receiver back to the caller of the corresponding call message.

Self-Message: It describes a communication within an interaction in which a lifeline invokes a message on itself.
Recursive Message: A self-message sent for recursive purpose is called a recursive message.
In other words, it can be said that the recursive message is a special case of the self-message
as it represents the recursive calls.

Destroy Message: It describes a communication between the lifelines of an interaction that depicts a request to destroy the lifecycle of the target.

Note
A note is the capability of attaching several remarks to the element. It basically carries useful
information for the modelers.
Example:

Benefits of a Sequence Diagram


1. It explores the real-time application.
2. It depicts the message flow between the different objects.
3. It has easy maintenance.
4. It is easy to generate.
5. It supports both forward and reverse engineering.
6. It can easily update as per the new change in the system.
Drawback of a Sequence Diagram
1. In the case of too many lifelines, the sequence diagram can get more complex.
2. The incorrect result may be produced, if the order of the flow of messages changes.
3. Since each sequence needs distinct notations for its representation, it may make the
diagram more complex.
4. The type of sequence is decided by the type of message.
Collaboration Diagram:
 Collaboration diagrams are interaction diagrams.

 A collaboration diagram emphasizes the structural organization of the objects that send and receive messages.

 The collaboration diagram, which is also known as a communication diagram, is used to portray the objects' architecture in the system.

 The collaboration diagram is used to show the relationship between the objects in a
system.
 Both the sequence and the collaboration diagrams represent the same information but
differently.

 Instead of showing the flow of messages, it depicts the architecture of the object
residing in the system as it is based on object-oriented programming.

 An object consists of several features. Multiple objects present in the system are
connected to each other.

Notations of a Collaboration Diagram


Following are the components of a collaboration diagram:
 Objects: The representation of an object is done by an object symbol with its name
and class underlined, separated by a colon. In the collaboration diagram, objects are
utilized in the following ways:
o The object is represented by specifying their name and class.
o It is not mandatory for every class to appear.
o A class may constitute more than one object.
o In the collaboration diagram, firstly, the object is created, and then its class is
specified.
o To differentiate one object from another object, it is necessary to name them.
 Actors: In the collaboration diagram, the actor plays the main role as it invokes the
interaction. Each actor has its respective role and name. In this, one actor initiates the
use case.
 Links: The link is an instance of association, which associates the objects and actors.
It portrays a relationship between the objects through which the messages are sent. It
is represented by a solid line. The link helps an object to connect with or navigate to
another object, such that the message flows are attached to links.
 Messages: A message is a communication between objects which carries information and includes a sequence number so that the order of activity can be determined. It is represented by a labeled arrow placed near a link. Messages are sent from the sender to the receiver, and the link must be navigable in that direction. The receiver must understand the message.
Benefits of a Collaboration Diagram
 The collaboration diagram is also known as Communication Diagram.
 It mainly puts emphasis on the structural aspect of an interaction diagram, i.e., how
lifelines are connected.
 The syntax of a collaboration diagram is similar to that of a sequence diagram; the difference is that the lifelines do not have tails.

 The sequencing of messages is represented by numbering each individual message.
 The collaboration diagram is semantically weak in comparison to the sequence
diagram.
 The special case of a collaboration diagram is the object diagram.
 Unlike a sequence diagram, it focuses on the structural elements rather than on the ordering of the message flow.
Drawback of a Collaboration Diagram
 Multiple objects residing in the system can make a complex collaboration diagram, as
it becomes quite hard to explore the objects.
 It is a time-consuming diagram.
 After the program terminates, the object is destroyed.
 As the object state changes momentarily, it becomes difficult to keep track of every single change that occurs inside an object of the system.
Example:

Activity diagrams:
 An activity diagram is also termed an object-oriented flowchart. It encompasses activities composed of a set of actions or operations and is used to model the behavior of a system.
 The activity diagram is used to demonstrate the flow of control within the system
rather than the implementation. It models the concurrent and sequential activities.
 The activity diagram helps in envisioning the workflow from one activity to another. It puts emphasis on the conditions of flow and the order in which flows occur. The flow can be sequential, branched, or concurrent, and to deal with such kinds of flows, the activity diagram provides constructs such as fork and join.

Components in Activity diagram:


Activities
The categorization of behavior into one or more actions is termed as an activity. In
other words, it can be said that an activity is a network of nodes that are connected by edges.
The edges depict the flow of execution. It may contain action nodes, control nodes, or object
nodes.
The control flow of an activity is represented by control nodes, and object nodes illustrate the objects used within the activity. Activities are initiated at the initial node and terminated at the final node.

Activity partition /swimlane


The swimlane is used to cluster all related activities in one column or one row. It can be either vertical or horizontal, and it adds modularity to the activity diagram. It is not necessary to incorporate swimlanes in an activity diagram, but they add transparency to it.
Forks
Fork and join nodes generate concurrent flow inside the activity. A fork node has one inward edge and several outward edges. Whenever data is received at the inward edge, it is copied and split across the various outward edges: a fork splits a single inward flow into multiple parallel flows.

Join Nodes

Join nodes are the opposite of fork nodes. A Logical AND operation is performed on all of
the inward edges as it synchronizes the flow of input across one single output (outward) edge.
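Fork/join semantics can be sketched with threads (an analogy for the diagram notation, not UML itself; activity names are invented): the fork splits one flow into parallel flows, and the join waits for all of them, like a logical AND, before continuing.

```python
import threading

results = []
lock = threading.Lock()

def activity(name):
    with lock:  # protect the shared list from concurrent appends
        results.append(name)

# Fork: one inward flow splits into two parallel outward flows.
threads = [threading.Thread(target=activity, args=(n,))
           for n in ("charge card", "reserve stock")]
for t in threads:
    t.start()

# Join: execution continues only after ALL parallel flows have finished.
for t in threads:
    t.join()
```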

Notation of an Activity diagram: An activity diagram constitutes the following notations:


Initial State: It depicts the initial stage or beginning of the set of actions.

Final State: It is the stage where all the control flows and object flows end.

Decision Box: It makes sure that the control flow or object flow will follow only one path.

Action Box: It represents the set of actions that are to be performed.


Rules that are to be followed for drawing an activity diagram:
1. A meaningful name should be given to each and every activity.
2. Identify all of the constraints.
3. Acknowledge the activity associations.
Benefits of Activity diagram:

1. To graphically model the workflow in an easier and understandable way.

2. To model the execution flow among several activities.

3. To model comprehensive information of a function or an algorithm employed within

the system.

4. To model the business process and its workflow.

5. To envision the dynamic aspect of a system.

6. To generate the top-level flowcharts for representing the workflow of an application.

7. To represent a high-level view of a distributed or an object-oriented system.


State-chart diagrams:
It captures the software system's behavior
The state machine diagram is also called the State-machine or State-Transition
diagram, which shows the order of states underwent by an object within the system.
It models event-based systems to handle the state of an object. It also defines several distinct
states of a component within the system. Each object/component has a specific state.
Following are the types of a state machine diagram that are given below:
1. Behavioral-state-machine
The behavioral state machine diagram records the behavior of an object within the
system. It depicts an implementation of a particular entity. It models the behavior of
the system.
2. Protocol-state-machine
It captures the behavior of the protocol. The protocol state machine depicts the change
in the state of the protocol and parallel changes within the system. But it does not
portray the implementation of a particular component.
Notation of a State Machine Diagram:
1. Initial state: It defines the initial state (beginning) of a system, and it is represented
by a black filled circle.
2. Final state: It represents the final state (end) of a system. It is denoted by a filled
circle present within a circle.
3. Decision box: It is of diamond shape that represents the decisions to be made on the
basis of an evaluated guard.
4. Transition: A change of control from one state to another due to the occurrence of
some event is termed as a transition. It is represented by an arrow labeled with an
event due to which the change has ensued.
5. State box: It depicts the conditions or circumstances of a particular object of a class
at a specific point of time. A rectangle with round corners is used to represent the
state box.
UML defines three kinds of states:
1. Simple state: It does not constitute any substructure.
2. Composite state: It consists of nested states (substates), such that it does not contain
more than one initial state and one final state. It can be nested to any level.
3. Submachine state: The submachine state is semantically identical to the composite
state, but it can be reused.
A state machine diagram is used:
 For modeling the object states of a system.
 For modeling a reactive system, as it consists of reactive objects.
 For pinpointing the events responsible for state transitions.
 For implementing forward and reverse engineering.
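The ideas above can be illustrated with a tiny hand-rolled state machine in Python (the Order states and events are invented for illustration): each transition is keyed by a (state, event) pair, and an event that has no transition from the current state is rejected.

```python
class Order:
    """Simple states only: Created -> Paid -> Shipped (no substates)."""

    # transition table: (current state, event) -> next state
    TRANSITIONS = {
        ("Created", "pay"): "Paid",
        ("Paid", "ship"): "Shipped",
    }

    def __init__(self):
        self.state = "Created"  # initial state (the filled black circle)

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # the event is not allowed in the current state
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]

order = Order()
order.handle("pay")
order.handle("ship")
print(order.state)  # → Shipped (the final state)
```

Each entry in `TRANSITIONS` corresponds to one labeled arrow in the diagram; the `ValueError` plays the role of a guard that blocks illegal transitions.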

Deployment diagrams:
It portrays the static deployment view of a system. It involves the nodes and their
relationships.
The deployment diagram visualizes the physical hardware on which the software will be
deployed.
The main purpose of the deployment diagram is to represent how software is installed on the
hardware component. It depicts in what manner a software interacts with hardware to perform
its execution.
Symbol and notation of Deployment diagram:

Artifact: A product developed by the software, symbolized by a rectangle with the name and
the keyword “artifact” enclosed in double angle brackets («artifact»).

Component: A rectangle with two tabs that indicates a software element.

Interface: A circle that indicates a contractual relationship. Those objects that realize the
interface must complete some sort of obligation.

Node: A hardware or software object, shown by a three-dimensional box.

Node as container: A node that contains another node inside of it, for example a node that
contains components.

Association: A line that indicates a message or other type of communication between nodes.

Dependency: A dashed line that ends in an arrow, which indicates that one node or
component is dependent on another.

Stereotype: A keyword contained within the node, shown at the top of the node with the
name enclosed in double angle brackets («stereotype»).
The deployment diagram portrays the deployment view of the system. It helps in
visualizing the topological view of a system. It incorporates nodes, which are physical
hardware. The nodes are used to execute the artifacts. The instances of artifacts can be
deployed on the instances of nodes.

Deployment diagrams are useful for system engineers. An efficient deployment


diagram is very important as it controls the following parameters:

 Performance
 Scalability
 Maintainability
 Portability

Before drawing a deployment diagram, the following artifacts should be identified:

 Nodes
 Relationships among nodes

Following is a sample deployment diagram to provide an idea of the deployment view of
an order management system. Here, the nodes shown are:

 Monitor
 Modem
 Caching server
 Server
Component diagrams:

 Component diagrams are used to visualize the organization and relationships among
components in a system. These diagrams are also used to make executable systems.

 A component diagram is used to break down a large object-oriented system into the
smaller components, so as to make them more manageable. It models the physical
view of a system such as executables, files, libraries, etc. that resides within the node.
Notation of a Component Diagram
Component
An entity required to execute a stereotype function. A component provides and consumes
behavior through interfaces, as well as through other components.

Node
Represents hardware or software objects, which are of a higher level than components.

Port symbol :Specifies a separate interaction point between the component and the
environment. Ports are symbolized with a small square

Dependency symbol :Shows that one part of your system depends on another. Dependencies
are represented by dashed lines linking one component (or element) to another.

Note symbol: Allows developers to affix a meta-analysis to the component diagram.

Provided interfaces: A straight line from the component box with an attached circle. These
symbols represent the interfaces where a component produces information used by the
required interface of another component

Required interfaces: A straight line from the component box with an attached half circle
(also represented as a dashed line with an open arrowhead). These symbols represent the
interfaces where a component requires information in order to perform its proper function.

Component diagrams can be used to:

 Model the components of a system.


 Model the database schema.
 Model the executables of an application.
 Model the system's source code.
UNIT-III
Object Oriented Testing: Overview of Testing, object oriented Testing, Types of Testing,
Object oriented Testing strategies, Test case design for OO software.

Object Oriented Testing:


Software Testing is a process of identifying the correctness of software. Software
Testing is evaluation of the software against requirements gathered from users and system
specifications. Testing is conducted at the phase level in software development life cycle or
at module level in program code. Software testing comprises Validation and
Verification.

Software Validation: Validation is the process of examining whether or not the software


satisfies the user requirements. It is carried out at the end of the SDLC. If the software
matches the requirements for which it was made, it is validated.

Dynamic testing or validation is the process of testing the actual product through the
tests done on the software application. Various parameters, including CPU and
memory usage, response time, and the overall performance of the software, are tested
and reviewed. The dynamic behavior of the code is examined through this testing
technique.

 Validation means “Are we developing the right software?”

 It includes the execution of the code. It is a high-level activity

 Methods: Black Box Testing, White Box Testing, and non-functional testing.

Software Verification: Verification is the process of confirming that the software meets


the business requirements and is developed following proper specifications and methodologies.

Static testing or verification is the testing method of checking files and documents to
ensure the development of the right product according to the specified design, test cases,
test plans, etc.

 Verification means “Are we developing the software right?”

 It does not include the execution of the code. It is a low-level activity.


 Methods: Reviews, Walkthroughs, Inspections, and Desk-checking.

Error, Fault and Failure:


(A person makes an error - that creates a fault in software-that can cause a failure in
operation)
Error (Mistake): An error is an actual coding mistake (wrong logic, syntax) made by a
developer; it results in a difference between the actual output and the expected output.
Fault (Defect/Bug): A fault is a condition in the software, introduced by an error, that
causes the software to fail to perform its required function.
Failure: A failure occurs when a fault is executed and the system deviates from its
expected behavior.
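The chain can be seen in a short sketch: the developer's mistake (error) leaves a fault in the code, and the fault only becomes a failure when an input exercises it. The `average` function below is invented for illustration:

```python
def average(numbers):
    # Fault: the developer's error was forgetting to handle an empty list
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # → 4.0: the fault is present but dormant
try:
    average([])            # this input exercises the fault ...
except ZeroDivisionError:
    print("failure")       # ... and the fault surfaces as an observable failure
```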

Benefits of Software Testing


Cost-effective development: An application that works without faults and needs little
maintenance saves a large amount of money for its owner.
Quality product: The main focus of software testing is to deliver a quality product to the
client, which results in a satisfied client. Software testing helps to keep the following
in check in a system or software:
 Functionality
 Usability
 Efficiency
 Reliability
 Maintainability
 Portability
Customer Satisfaction: Customer satisfaction is the ultimate goal behind the success and
popularity of a software application.
Bug-Free Application: (i.e., an application with no faults/defects) The main task of
software testing is to identify bugs and report them to the concerned development team to
fix. When a bug is fixed, testers recheck it to confirm its status.
Types of Testing:

1) Manual Software Testing:

Testing any software or an application according to the client's needs without using
any automation tool is known as manual testing.
There are three types of testing approaches:
1. White Box Testing
2. Black Box Testing
3. Grey Box Testing

White Box Testing:

White Box Testing is a testing technique in which software’s internal structure, design, and
coding are tested to verify input-output flow and improve design, usability, and security. In
white box testing, code is visible to testers, so it is also called Clear box testing, Open box
testing, Transparent box testing, Code-based testing, and Glass box testing(Structural testing).
Following are important White Box Testing Techniques:
 Statement Coverage
 Decision Coverage
 Branch Coverage
 Condition Coverage
 Multiple Condition Coverage
 Finite State Machine Coverage
 Path Coverage
 Control flow testing
 Data flow testing
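As a small illustration of the difference between statement and branch coverage, consider the function below (invented for illustration): a single test with `n > 100` executes every statement, yet branch coverage also requires a test that takes the false branch of the `if`.

```python
def classify(n):
    label = "small"
    if n > 100:        # branch point: the true and false edges both need covering
        label = "large"
    return label

# Statement coverage: this one call alone executes every statement ...
assert classify(200) == "large"
# ... but full branch coverage also needs a test taking the false branch:
assert classify(5) == "small"
print("both branches exercised")
```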
Black Box Testing:
Black Box Testing is a software testing method in which the functionalities of software
applications are tested without having knowledge of internal code structure, implementation
details and internal paths. Black Box Testing mainly focuses on input and output of software
applications and it is entirely based on software requirements and specifications. It is also
known as Behavioral Testing.
Following are important Black Box Testing Techniques:
 Equivalence Class Testing
 Boundary Value Testing
 Decision Table Testing
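For example, for an input field that must accept ages 18–60 inclusive (the range is invented for illustration), equivalence class testing picks one representative value per class, while boundary value testing probes just below, on, and just above each edge:

```python
def is_valid_age(age):
    # specification under test: ages 18..60 inclusive are valid
    return 18 <= age <= 60

# Equivalence classes: one representative per class is enough
assert is_valid_age(30)        # valid class
assert not is_valid_age(5)     # invalid class (too low)
assert not is_valid_age(70)    # invalid class (too high)

# Boundary values: the edges are where off-by-one faults usually hide
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected

print("equivalence and boundary checks passed")
```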
Black-box testing is again divided into two different types of testing:
1. Functional Testing
2. Non-Functional Testing.
Functional Testing:
Functional testing is also known as component testing. This black box testing type is
related to the functional requirements of a system, and it is done by software testers.
Functional testing has 3 types of testing:
 Unit testing
 Integration testing
 System testing
Unit Testing: Unit testing is the first level of functional testing, where individual units or
components of a software are tested. The purpose is to validate that each unit of the software
code performs as expected. Unit Testing is done during the development (coding phase) of an
application by the developers. Unit Tests isolate a section of code and verify its correctness.
A unit may be an individual function, method, procedure, module, or object.
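A unit test isolates one function and verifies it in isolation, for example with Python's built-in unittest module (the `apply_discount` function is invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: reduce price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run the tests programmatically (normally: python -m unittest <file>)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("unit tests passed" if result.wasSuccessful() else "unit tests failed")
```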
Integration Testing: Once unit testing is successfully completed, we move on to
integration testing. It is the second level of functional testing, where we test the data flow
between dependent modules, or the interface between two features.
Integration testing is also further divided into the following parts:
i) Incremental Integration Testing: Suppose we take two modules and analyze the
data flow between them to see whether they are working fine or not. Incrementally
adding up the modules and testing the data flow between them is known as
incremental integration testing.
Incremental integration testing can be further classified into two parts:
1. Top-down Incremental Integration Testing
In this approach, we add the modules step by step (incrementally) and
test the data flow between them, ensuring that each module we add is a
child of an earlier one.
2. Bottom-up Incremental Integration Testing
In the bottom-up approach, we add the modules incrementally and check
the data flow between them, ensuring that each module we add is a
parent of an earlier one.
ii) Non-Incremental Integration Testing / Big Bang Method: Whenever the data
flow is complex and it is very difficult to classify modules as parent and child, we go
for the non-incremental integration approach, also known as the Big Bang method.
System Testing: Whenever we are done with the unit and integration testing, we can proceed
with the system testing.
In system testing, the test environment is parallel to the production environment. It is also
known as end-to-end testing.
Non-functional Testing:
It provides detailed information on software product performance and the technologies used.
Non-functional testing helps us minimize production risk and the related costs of the
software.
Non-functional testing categorized into different parts of testing:
o Performance Testing
o Usability Testing
o Compatibility Testing
Performance Testing: In performance testing, the test engineer tests the behavior of an
application by applying some load, focusing on aspects such as response time, load,
scalability, and stability of the software or application.
Load Testing: While executing performance testing, we apply some load on the
particular application to check the application's performance; this is known as load
testing.
Stress Testing: It is used to analyze the robustness and user-friendliness of the
software beyond its normal functional limits.
Scalability Testing: Analyzing the application's performance by increasing or
reducing the load in particular increments is known as scalability testing.
Stability Testing: It checks whether the application remains stable under load, so that
the system's defects can be found rapidly even in a stressful situation.
Usability Testing: In usability testing, we will analyze the user-friendliness of an application
and detect the bugs in the software's end-user interface. (that the software is simple to use for
consumers)
Compatibility Testing: In compatibility testing, we will check the functionality of an
application in specific hardware and software environments. Once the application is
functionally stable then only, we go for compatibility testing.
Here, software means we can test the application on different operating systems and
browsers, and hardware means we can test the application on devices of different sizes
and configurations.
Grey-box Testing:
It is a collaboration of black box and white box testing. The grey box testing includes access
to internal coding for designing test cases. Grey box testing is performed by a person who
knows coding as well as testing. In other words, if a single-person team does
both white box and black-box testing, it is considered grey box testing.

2) Automation Software Testing:

Automation testing is a software testing technique that utilizes specialized automation


testing tools to automatically run a suite of test cases, delivering faster and more accurate
results compared to manual testing methods.

Some of the automation tools are:


1. Selenium is an open-source, automated testing tool used to test web applications
across various browsers. Unfortunately, Selenium can only test web applications, so
desktop and mobile apps cannot be tested with it.

2. Appium is an open-source, automated testing tool used to test mobile


applications. It is a cross-platform testing framework that is flexible, enabling testers
to write test scripts for multiple platforms, such as iOS, Windows, and Android.

3. QTP was renamed UFT (Unified Functional Testing); this tool is primarily used for
functional, regression, and service testing.

Some other types of software testing:


 Smoke testing
 Sanity testing
 Regression testing
 User Acceptance testing
 Exploratory testing
 Adhoc testing
 Security testing
 Globalization testing

Object Oriented Testing Strategies:


Software testing is the process of evaluating a software application to identify if it meets
specified requirements and to identify any defects. The following are common testing
strategies:

A Test Strategy is a plan for defining an approach to the Software Testing Life Cycle
(STLC).
Test Strategy Document is a well-described document in software testing which clearly
defines the exact software testing approach and testing objectives of the software application.
The test strategy document is an important document for QA teams; it is derived from actual
business requirements and guides the whole team on the software testing approach and
objectives for each activity in the software testing process.
Static Testing Strategy:
The early-stage testing strategy is static testing: it is performed without actually
running the product under development. Such desk-checking is required to detect bugs and
issues present in the code itself. This check-up is important at the pre-deployment
stage, as it helps avoid problems caused by errors in the code and deficits in the
software structure.

Structural Testing Strategy:


It is also called white-box testing because it is performed by testers with thorough
knowledge of the internals of the devices and systems it runs on.
Behavioral Testing Strategy:
It is also called Black-box testing. Behavioral Testing focuses on how a system acts
rather than the mechanism behind its functions. It focuses on workflows, configurations,
performance, and all elements of the user journey.
Front-End Testing Strategy:
Front-end refers to the user-facing part of an app, which is the primary interface for
content consumption and business transactions. Front End Testing is a key part of any SDLC
as it validates GUI elements are functioning as expected.
Unit testing:
Tests individual units or components of the software to ensure they are functioning as
intended.
Integration testing:
Tests the integration of different components of the software to ensure they work
together as a system.
System testing:
Tests the complete software system to ensure it meets the specified requirements.
Acceptance testing:
Tests the software to ensure it meets the customer’s or end-user’s expectations.
Regression testing:
Tests the software after changes or modifications have been made to ensure the
changes have not introduced new defects.
Performance testing:
Tests the software to determine its performance characteristics such as speed,
scalability, and stability.

Test case design for OO software:


Test Case: A Test Case is a set of actions performed on a system to determine if it satisfies
software requirements and functions correctly.

A test case is a defined format for software testing, used to check whether a particular
application, software, or module is working or not. It consists of an ID, conditions, steps,
input, expected output or result, status, and remarks.

Module Name: Subject or title that defines the functionality of the test.
Test Case Id: A unique identifier assigned to every single condition in a test case.
Tester Name: The name of the person who would be carrying out the test.
Test scenario: The test scenario provides a brief description to the tester, as in providing a
small overview to know about what needs to be performed and the small features, and
components of the test.
Test Case Description: The condition required to be checked for a given software. for eg.
Check if only numbers validation is working or not for an age input box.
Test Steps: Steps to be performed for the checking of the condition.
Prerequisite: The conditions required to be fulfilled before the start of the test process.
Test Priority: As the name suggests gives the priority to the test cases as in which had to be
performed first, or are more important and which could be performed later.
Test Data: The inputs to be taken while checking for the conditions.
Test Expected Result: The output which should be expected at the end of the test.
Test parameters: Parameters assigned to a particular test case.
Actual Result: The output that is displayed at the end.
Environment Information: The environment in which the test is being performed, such as
operating system, security information, the software name, software version, etc.
Status: The status of tests such as pass, fail, NA, etc.
Comments: Remarks on the test regarding the test for the betterment of the software.
Example: Below is an example of preparing test cases for a login page with a
username and password.
Here we are only checking whether the username is validated to be at least eight characters long.
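Following the fields listed above, the username test cases can be written down as structured data and even executed directly. The eight-character rule comes from the example; the IDs, descriptions, and test data are illustrative:

```python
def validate_username(username):
    # rule under test: the username must be at least eight characters long
    return len(username) >= 8

test_cases = [
    {"id": "TC_01", "description": "8-character username is accepted",
     "test_data": "ravi1234", "expected": True},
    {"id": "TC_02", "description": "short username is rejected",
     "test_data": "ravi", "expected": False},
]

for tc in test_cases:
    actual = validate_username(tc["test_data"])
    tc["status"] = "Pass" if actual == tc["expected"] else "Fail"
    print(tc["id"], tc["status"])   # → TC_01 Pass, then TC_02 Pass
```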
Berard proposes the following approach:

Each test case should be uniquely identified and explicitly associated with the class to
be tested.
A list of testing steps should be developed for each test and should contain:
 A list of specified states for the object that is to be tested.
 A list of messages and operations that will be exercised as a consequence of
the test.
 A list of exceptions that may occur as the object is tested.
 A list of external conditions (i.e., changes in the environment external to
the software that must exist in order to properly conduct the test).
Test-driven development (TDD)
Test Driven Development (TDD) is a software development methodology that emphasizes
writing tests before writing the actual code. It ensures that code is always tested and functional,
reducing bugs and improving code quality. In TDD, developers write small, focused tests that
define the desired functionality, then write the minimum code necessary to pass these tests, and
finally, refactor the code to improve structure and performance.
This cyclic process helps in creating reliable, maintainable, and efficient software. By following
TDD, teams can enhance code reliability, accelerate development cycles, and maintain high
standards of software quality
What is Test Driven Development (TDD)?
Test-driven development (TDD) is a method of coding in which you first write a test (which
initially fails), then write the code to make the test pass, and finally clean up the code.
This cycle is repeated for each new feature or change. Unlike approaches in which you write
either all the code first or all the tests first, TDD interleaves writing tests and code.
Test-Driven Development (TDD) is a method in software development where the focus is on
writing tests before writing the actual code for a feature. This approach uses short development
cycles that repeat to ensure quality and correctness.
Process of Test Driven Development (TDD)
It is the process in which test cases are written before the code that validates those cases. It relies
on the repetition of a concise development cycle. Test-driven development is a technique in which
automated unit tests are used to drive the design and decouple dependencies.
The following sequence of steps is generally followed:


 Red – Write a test case for the new functionality, run all the test cases, and make sure
that the new test case fails.
 Green – Write the minimum code necessary to make the test case pass.
 Refactor – Change the code to remove duplication and redundancy, keeping all tests
passing.
 Repeat the above-mentioned steps for every new feature or change.
Each test case should completely describe the desired function. To write the test cases, the
developer must understand the features and requirements, using user stories and use cases.
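The red–green–refactor cycle can be sketched in miniature (the `slugify` function and its test are invented for illustration):

```python
# RED: write the test first -- running it at this point would fail,
# since slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: write the minimum code necessary to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify()  # passes

# REFACTOR: tidy the implementation without changing behaviour,
# then re-run the same test to confirm nothing broke.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # still passes
print("red-green-refactor cycle complete")
```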
Advantages of Test Driven Development (TDD)
 Unit test provides constant feedback about the functions.
 Quality of design increases which further helps in proper maintenance.
 Test driven development act as a safety net against the bugs.
 TDD ensures that your application actually meets requirements defined for it.
 TDD have very short development lifecycle.
Disadvantages of Test Driven Development (TDD)
 Increased Code Volume: Using TDD means writing extra code for test cases, which makes
the overall codebase larger.
 False Security from Tests: Passing tests can give developers a false sense that the code
is safe.
 Maintenance Overheads: Keeping a large number of tests up to date is difficult and
time-consuming.
 Time-Consuming Test Processes: Writing and maintaining the tests can take a long time.
 Testing Environment Set-Up: TDD needs a proper testing environment, which takes effort
to set up and maintain.
TDD Vs. Traditional Testing
 Approach: In Test Driven Development (TDD), tests are written first and then the code is
written. Traditional testing works the other way around: the code is written first and
then tested.
 Testing Scope: TDD checks small parts of the code one by one. Traditional testing checks the
whole system, including how different parts work together.
 Iterative: TDD works in small steps: write a small test and some code, then improve the
code until it passes all the required tests. Traditional testing tests the code once and
then fixes any problems that are found.
 Debugging: TDD finds mistakes early in the coding process, which makes them easier to
fix. Traditional testing finds mistakes later, when they are harder to fix.
 Documentation: TDD focuses on documentation of the tests and their results. Traditional
testing may provide more detailed information about how the testing was done and how the
system was tested.
UNIT-IV

Software Maintenance Basics.


Need for Maintenance
Software Maintenance must be performed in order to:
 Correct faults.
 Improve the design.
 Implement enhancements.
 Interface with other systems.
 Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
 Migrate legacy software.
 Retire software.
 Requirement of user changes.
 Improve performance (make the code run faster).

Several Key Aspects of Software Maintenance


1. Bug Fixing: The process of finding and fixing errors and problems in the software.
2. Enhancements: The process of adding new features or improving existing features to meet the
evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency, and reliability of
the software.
4. Porting and Migration: The process of adapting the software to run on new hardware or
software platforms.
5. Re-Engineering: The process of improving the design and architecture of the software to make
it more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the documentation for the
software, including user manuals, technical specifications, and design documents.

Several Types of Software Maintenance


1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
2. Patching: It is an emergency fix implemented mainly due to pressure from management.
Patching is done for corrective maintenance but it gives rise to unforeseen future errors due to
lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt it to changes in
the environment, such as changes in hardware or software, government policies, and business
rules.
4. Perfective Maintenance: This involves improving functionality, performance, and reliability,
and restructuring the software system to improve changeability.
5. Preventive Maintenance: This involves taking measures to prevent future problems, such as
optimization, updating documentation, reviewing and testing the system, and implementing
preventive measures such as backups.
Maintenance can be categorized into proactive and reactive types. Proactive maintenance involves
taking preventive measures to avoid problems from occurring, while reactive maintenance involves
addressing problems that have already occurred.
Maintenance can be performed by different stakeholders, including the original development team,
an in-house maintenance team, or a third-party maintenance provider. Maintenance activities can
be planned or unplanned. Planned activities include regular maintenance tasks that are scheduled in
advance, such as updates and backups. Unplanned activities are reactive and are triggered by
unexpected events, such as system crashes or security breaches. Software maintenance can involve
modifying the software code, as well as its documentation, user manuals, and training materials.
This ensures that the software is up-to-date and continues to meet the needs of its users.
Software maintenance can also involve upgrading the software to a new version or platform. This
can be necessary to keep up with changes in technology and to ensure that the software remains
compatible with other systems. The success of software maintenance depends on effective
communication with stakeholders, including users, developers, and management. Regular updates
and reports can help to keep stakeholders informed and involved in the maintenance process.
Software maintenance is also an important part of the Software Development Life Cycle
(SDLC). The main focus of software maintenance is to update the software application and make
all the modifications needed to improve its performance. Software models the real world, so
whenever the real world changes, corresponding changes are required in the software wherever
possible.

Advantages of Software Maintenance


 Improved Software Quality: Regular software maintenance helps to ensure that the software
is functioning correctly and efficiently and that it continues to meet the needs of the users.
 Enhanced Security: Maintenance can include security updates and patches, helping to ensure
that the software is protected against potential threats and attacks.
 Increased User Satisfaction: Regular software maintenance helps to keep the software up-to-
date and relevant, leading to increased user satisfaction and adoption.
 Extended Software Life: Proper software maintenance can extend the life of the software,
allowing it to be used for longer periods of time and reducing the need for costly replacements.
 Cost Savings: Regular software maintenance can help to prevent larger, more expensive
problems from occurring, reducing the overall cost of software ownership.
 Better Alignment with business goals: Regular software maintenance can help to ensure that
the software remains aligned with the changing needs of the business. This can help to improve
overall business efficiency and productivity.
 Competitive Advantage: Regular software maintenance can help to keep the software ahead of
the competition by improving functionality, performance, and user experience.
 Compliance with Regulations: Software maintenance can help to ensure that the software
complies with relevant regulations and standards. This is particularly important in industries
such as healthcare, finance, and government, where compliance is critical.
 Improved Collaboration: Regular software maintenance can help to improve collaboration
between different teams, such as developers, testers, and users. This can lead to better
communication and more effective problem-solving.
 Reduced Downtime: Software maintenance can help to reduce downtime caused by system
failures or errors. This can have a positive impact on business operations and reduce the risk of
lost revenue or customers.
 Improved Scalability: Regular software maintenance can help to ensure that the software is
scalable and can handle increased user demand. This can be particularly important for growing
businesses or for software that is used by a large number of users.
Disadvantages of Software Maintenance
 Cost: Software maintenance can be time-consuming and expensive, and may require significant
resources and expertise.
 Schedule disruptions: Maintenance can cause disruptions to the normal schedule and
operations of the software, leading to potential downtime and inconvenience.
 Complexity: Maintaining and updating complex software systems can be challenging,
requiring specialized knowledge and expertise.
 Risk of introducing new bugs: The process of fixing bugs or adding new features can introduce
new bugs or problems, making it important to thoroughly test the software after maintenance.
 User resistance: Users may resist changes or updates to the software, leading to decreased
satisfaction and adoption.
 Compatibility issues: Maintenance can sometimes cause compatibility issues with other
software or hardware, leading to potential integration problems.
 Lack of documentation: Poor documentation or lack of documentation can make software
maintenance more difficult and time-consuming, leading to potential errors or delays.
 Technical debt: Over time, software maintenance can lead to technical debt, where the cost of
maintaining and updating the software becomes increasingly higher than the cost of developing
a new system.
 Skill gaps: Maintaining software systems may require specialized skills or expertise that may
not be available within the organization, leading to potential outsourcing or increased costs.
 Inadequate testing: Inadequate testing or incomplete testing after maintenance can lead to
errors, bugs, and potential security vulnerabilities.
 End-of-life: Eventually, software systems may reach their end-of-life, making maintenance and
updates no longer feasible or cost-effective. This can lead to the need for a complete system
replacement, which can be costly and time-consuming.

Code Refactoring Techniques in Software Engineering


Improving or updating the code without changing the software’s functionality or external behavior
of the application is known as code refactoring.

It reduces the technical cost and makes the code more efficient and maintainable. If you don’t pay
attention to the code refactoring process earlier, you will pay for errors in your code later. So don’t
ignore cleaning up the code.

In a software development process, different developers have different code-writing styles. They
make changes, maintain the code, and extend it, and much of the time they leave the code without
continuous refactoring. Un-refactored code tends toward code rot: a lot of confusion and clutter in
the code, such as duplicated code, unhealthy dependencies between classes or packages, bad
allocation of class responsibilities, too many responsibilities per method or class, etc. Continuous
refactoring is important to avoid all these issues.
Most Common Code Refactoring Techniques

There are many approaches and techniques to refactor the code. Let’s discuss some popular ones…

1. Red-Green Refactoring

Red-Green is the most popular and widely used code refactoring technique in the Agile software
development process. It follows the "test-first" approach to design and implementation, which lays
the foundation for all forms of refactoring. Developers fold refactoring into the test-driven
development cycle, which is performed in three distinct steps:
 Red: Start by writing a failing "red" test. Stop and check what needs to be developed.
 Green: Write just enough code to make the test pass ("green").
 Refactor: Focus on improving and enhancing the code while keeping the tests green.
So basically this technique has two distinct parts: The first part involves writing code that adds a
new function to your system and the second part is all about refactoring the code that does this
function. Keep in mind that you’re not supposed to do both at the same time during the workflow.
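The cycle described above can be sketched in Python. This is a minimal illustration, not part of the original text; the `word_count` function and its test are hypothetical names chosen for the example.

```python
# RED: write a failing test first (word_count does not exist yet,
# so running this test at that point would fail).
def test_word_count():
    assert word_count("to be or not to be") == 6

# GREEN: write the simplest code that makes the test pass.
def word_count(text):
    return len(text.split())

# REFACTOR: improve the code while keeping the test green,
# e.g. handle empty strings explicitly and document intent.
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split()) if text.strip() else 0

test_word_count()  # still passes after refactoring
```

Note that the new-function step and the refactoring step happen in separate passes, matching the rule of never doing both at once.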

2. Refactoring by Abstraction

This technique is mostly used by developers when a large amount of refactoring is needed, mainly
to reduce redundancy (duplication) in the code. It involves working with class inheritance and
hierarchy, creating new classes and interfaces, extraction, and replacing inheritance with
delegation (and vice versa).
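As a hedged sketch of abstraction-based refactoring: suppose two hypothetical report classes duplicated their rendering logic; the shared behaviour can be pulled up into a common abstract base class. The class names here are illustrative, not from the original text.

```python
from abc import ABC, abstractmethod

class Report(ABC):
    """Extracted abstraction: the shared rendering logic lives here once."""
    def render(self):
        return f"{self.title()}\n{'-' * len(self.title())}\n{self.body()}"

    @abstractmethod
    def title(self): ...

    @abstractmethod
    def body(self): ...

class SalesReport(Report):
    # Only the parts that genuinely differ remain in the subclasses.
    def title(self):
        return "Sales"
    def body(self):
        return "Q1 revenue: 100"

class HRReport(Report):
    def title(self):
        return "HR"
    def body(self):
        return "Headcount: 42"

print(SalesReport().render())
```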

3. Composing Method

During the development phase of an application we often write long methods. These long methods
make the code extremely hard to understand and hard to change, and the composing method
approach is mostly used in these cases.
In this approach, we streamline methods to reduce duplication in our code. Examples include:
extract method, extract variable, inline temp, replace temp with query, inline method, split
temporary variable, remove assignments to parameters, etc.
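The "extract method" technique mentioned above can be sketched as follows; the invoice example is an assumption made for illustration, and the behaviour is unchanged by the refactoring.

```python
# Before: one long function mixing calculation and formatting.
def print_invoice_long(items, tax_rate):
    subtotal = sum(price * qty for price, qty in items)
    tax = subtotal * tax_rate
    return f"Subtotal: {subtotal:.2f}, Tax: {tax:.2f}, Total: {subtotal + tax:.2f}"

# After: each step extracted into an intention-revealing helper.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def tax(items, tax_rate):
    return subtotal(items) * tax_rate

def print_invoice(items, tax_rate):
    s, t = subtotal(items), tax(items, tax_rate)
    return f"Subtotal: {s:.2f}, Tax: {t:.2f}, Total: {s + t:.2f}"

# The external behaviour is identical before and after refactoring.
items = [(10.0, 2), (5.0, 1)]
assert print_invoice(items, 0.1) == print_invoice_long(items, 0.1)
```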

4. Moving Features Between Objects

In this technique, we create new classes, and we move the functionality safely between old and
new classes. We hide the implementation details from public access.
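A small "move method" sketch of this idea, assuming hypothetical `Order` and `Customer` classes: shipping-cost logic that depends mostly on customer data is moved into `Customer`, and the implementation detail is hidden behind a method call.

```python
class Customer:
    def __init__(self, is_premium):
        self._is_premium = is_premium  # detail hidden from public access

    def shipping_cost(self, weight_kg):
        # Moved here from Order, because the rule depends on customer data.
        rate = 0.0 if self._is_premium else 2.5
        return weight_kg * rate

class Order:
    def __init__(self, customer, weight_kg):
        self.customer = customer
        self.weight_kg = weight_kg

    def total_shipping(self):
        # Order delegates instead of reaching into Customer's fields.
        return self.customer.shipping_cost(self.weight_kg)

order = Order(Customer(is_premium=False), weight_kg=4)
print(order.total_shipping())  # → 10.0
```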

5. Preparatory Refactoring

This approach is best used when you notice the need for refactoring while adding new features to
an application, so it is essentially part of a software update rather than a separate refactoring
process. You save yourself from future technical debt if you notice, during the earlier phases of
feature development, that the code needs to be updated.

6. User Interface Refactoring

You can make simple changes in the UI and refactor the code. For example: aligning entry fields,
applying consistent fonts, rewording messages in the active voice, indicating the expected input
format, applying a common button size, and increasing color contrast.

Software version control

What is a “version control system”?


Version control systems are a category of software tools that help record changes made to files by
keeping track of modifications done to the code.
Why Version Control system is so Important?
A software product is developed in collaboration by a group of developers who may be located at
different places, each contributing some specific functionality or feature. In order to contribute,
they modify the source code (by adding to it or removing from it). A version control system is
software that helps the developer team efficiently communicate and manage (track) all the changes
made to the source code, along with information such as who made each change and what was
changed. A separate branch is created for every contributor, and the changes are not merged into
the original source code until they have been analyzed; as soon as the changes are green-signaled,
they are merged into the main source code. This not only keeps the source code organized but also
improves productivity by making the development process smooth.
Basically, a version control system keeps track of the changes made to a particular piece of
software and takes a snapshot of every modification. Suppose a team of developers adds new
functionality to an application and the updated version does not work properly; because the version
control system keeps track of the work, we can discard the new changes and continue with the
previous version.
Benefits of the version control system:
 Enhances project development speed by enabling efficient collaboration.
 Improves productivity, expedites product delivery, and develops employees' skills through
better communication and assistance.
 Reduces the possibility of errors and conflicts during project development through traceability
of every small change.
 Contributors can work on the project from anywhere, irrespective of their geographical
location.
 For each contributor, a separate working copy is maintained and is not merged into the main
file until it has been validated. Popular examples are Git, Helix Core, and Microsoft TFS.
 Helps in recovery in case of any disaster or contingent situation.
 Tells us who made which changes, what was changed, when, and why.
Use of Version Control System:
 A repository: It can be thought of as a database of changes. It contains all the edits and
historical versions (snapshots) of the project.
 Copy of Work (sometimes called a checkout): It is a personal copy of all the files in a
project. You can edit this copy without affecting the work of others, and you can finally
commit your changes to the repository when you are done making them.
 Working in a group: Consider yourself working in a company where you are asked to work on
a live project. You cannot change the main code, as it is in production and any change may
cause inconvenience to users; you are also working in a team, so you need to collaborate and
adapt to their changes. Version control helps with this by merging different requests into the
main repository without introducing undesirable changes. You can test functionality without
putting it live, and you do not need to download and set everything up each time: just pull the
changes, make your changes, test them, and merge them back.

Types of Version Control Systems:

 Local Version Control Systems
 Centralized Version Control Systems
 Distributed Version Control Systems

Local Version Control Systems:

It is one of the simplest forms and has a database that keeps all the changes to files under revision
control. RCS is one of the most common VCS tools of this kind. It keeps patch sets (differences
between files) in a special format on disk; by adding up all the patches it can re-create what any
file looked like at any point in time.
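The idea can be sketched with a toy revision database in Python. This sketch stores full snapshots for simplicity, whereas real tools such as RCS store patch sets (differences); the class and method names are assumptions for illustration.

```python
class LocalVCS:
    """A toy local version control system: one revision database on disk
    (here, in memory) that can re-create any historical version."""

    def __init__(self):
        self._revisions = []  # the revision database: one snapshot per commit

    def commit(self, content):
        """Record a new snapshot and return its revision number."""
        self._revisions.append(content)
        return len(self._revisions) - 1

    def checkout(self, revision):
        """Re-create what the file looked like at the given revision."""
        return self._revisions[revision]

vcs = LocalVCS()
r0 = vcs.commit("hello\n")
r1 = vcs.commit("hello world\n")
assert vcs.checkout(r0) == "hello\n"        # roll back to any point in time
assert vcs.checkout(r1) == "hello world\n"  # or retrieve the latest version
```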

Centralized Version Control Systems:

Centralized version control systems contain just one repository globally, and every user needs to
commit in order to reflect their changes in the repository. Others can then see your changes by
updating.
Two things are required to make your changes visible to others which are:
 You commit
 They update
The benefit of a CVCS (Centralized Version Control System) is that it enables collaboration among
developers and provides insight, to a certain extent, into what everyone else is doing on the
project. It also gives administrators fine-grained control over who can do what.
It has some downsides as well, which led to the development of DVCSs. The most obvious is the
single point of failure that the centralized repository represents: if it goes down, collaboration and
saving versioned changes are not possible during that period. And if the hard disk of the central
database becomes corrupted and proper backups have not been kept, you lose absolutely
everything.
Distributed Version Control Systems: Distributed version control systems contain multiple
repositories. Each user has their own repository and working copy. Just committing your changes
will not give others access to your changes. This is because commit will reflect those changes in
your local repository and you need to push them in order to make them visible on the central
repository. Similarly, When you update, you do not get others’ changes unless you have first pulled
those changes into your repository.
To make your changes visible to others, 4 things are required:
 You commit
 You push
 They pull
 They update
The most popular distributed version control systems are Git and Mercurial. They help us
overcome the problem of a single point of failure.
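The four-step commit/push/pull/update workflow above can be modelled with a deliberately simplified sketch (the `Repo` and `Developer` classes are assumptions for illustration; real DVCS tools merge histories rather than copying them wholesale).

```python
class Repo:
    def __init__(self):
        self.history = []

class Developer:
    def __init__(self, central):
        self.local = Repo()      # each user has their own repository
        self.working_copy = []   # and their own working copy
        self.central = central

    def commit(self, change):    # 1. you commit (affects local repo only)
        self.local.history.append(change)

    def push(self):              # 2. you push to the central repository
        self.central.history = list(self.local.history)

    def pull(self):              # 3. they pull into their local repository
        self.local.history = list(self.central.history)

    def update(self):            # 4. they update their working copy
        self.working_copy = list(self.local.history)

central = Repo()
alice, bob = Developer(central), Developer(central)
alice.commit("add login feature")
assert bob.working_copy == []    # a commit alone is not visible to others
alice.push()
bob.pull()
bob.update()
assert bob.working_copy == ["add login feature"]
```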
Purpose of Version Control:

 Multiple people can work simultaneously on a single project. Everyone works on and edits their
own copy of the files and it is up to them when they wish to share the changes made by them
with the rest of the team.
 It also enables one person to use multiple computers to work on a project, so it is valuable even
if you are working by yourself.
 It integrates the work that is done simultaneously by different members of the team. In some
rare cases, when conflicting edits are made by two people to the same line of a file, then human
assistance is requested by the version control system in deciding what should be done.
 Version control provides access to the historical versions of a project. This is insurance against
computer crashes or data loss. If any mistake is made, you can easily roll back to a previous
version. It is also possible to undo specific edits that too without losing the work done in the
meanwhile. It can be easily known when, why, and by whom any part of a file was edited.

Code Review and Inspection


In software engineering, "code review" and "code inspection" are both processes where developers
examine and evaluate each other's code to identify potential issues and ensure quality, but
"inspection" typically refers to a more structured and formal approach with a defined process, often
involving a dedicated team and checklist, while "review" can be a more informal feedback loop
between developers.
Key differences:
 Formal vs. Informal:
Code inspection is usually a more formal process with a structured approach, including a
designated moderator, checklist, and defined roles for participants, while code review can be a
more informal peer-to-peer feedback session.
 Focus on Defects:
Code inspection aims to actively search for defects and potential problems by thoroughly
examining the code, while code review may also focus on style, readability, and best practices in
addition to finding bugs.
 Process Structure:
Inspections often follow a specific methodology, like the Fagan inspection process, with distinct
steps like overview, individual review, and discussion, whereas code reviews can be more flexible
in their approach.
What both code review and inspection involve:
 Examining code: Reviewing the logic, syntax, and structure of the code to identify potential issues
 Checking for errors: Looking for bugs, logical flaws, and incorrect calculations
 Assessing code quality: Evaluating code readability, maintainability, and adherence to coding
standards
 Providing feedback: Offering constructive comments and suggestions for improvement
Benefits of code review and inspection:
 Improved code quality: Identifying and fixing defects early in the development process
 Knowledge sharing: Learning from other developers' approaches and best practices
 Reduced technical debt: Proactively addressing potential issues before they become major
problems
 Collaboration: Fostering a culture of teamwork and continuous improvement
Code Inspection :

Code inspection is a type of static testing that aims to review the software code and examine it for
any errors. It helps reduce the ratio of defect multiplication and avoids late-stage error detection
by strengthening the initial error-detection process. Code inspection comes under the review
process of any application.

Purpose of code inspection :


1. It checks for any error that is present in the software code.
2. It identifies any required process improvement.
3. It checks whether the coding standard is followed or not.
4. It involves peer examination of codes.
5. It documents the defects in the software code.
Advantages of Code Inspection:
 Improves overall product quality.
 Discovers bugs/defects in the software code.
 Marks any needed process enhancement.
 Finds and removes defects efficiently and quickly.
 Helps the team learn from previous defects.
Disadvantages of Code Inspection:
 Requires extra time and planning.
 The process is a little slower.

Software Evolution and Reengineering

Software Evolution

Software Evolution is a term that refers to the process of developing software initially, and then
timely updating it for various reasons, i.e., to add new features or to remove obsolete functionalities,
etc. This article focuses on discussing Software Evolution in detail.

What is Software Evolution?

The software evolution process includes fundamental activities of change analysis, release planning,
system implementation, and releasing a system to customers.
1. The cost and impact of these changes are assessed to see how much the system is affected by the
change and how much it might cost to implement it.
2. If the proposed changes are accepted, a new release of the software system is planned.
3. During release planning, all the proposed changes (fault repair, adaptation, and new functionality)
are considered.
4. A decision is then made on which changes to implement in the next version of the system.
5. The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.
Necessity of Software Evolution

Software evolution is necessary for the following reasons:


1. Change in requirements with time: With time, an organization’s needs and modus operandi can
change substantially, so in these frequently changing times the tools (software) it uses need to
change to maximize performance.
2. Environment change: As the working environment changes, the tools that enable us to work in
that environment must change proportionally. The same happens in the software world: as the
working environment changes, organizations need to reintroduce old software with updated
features and functionality to adapt to the new environment.
3. Errors and bugs: As deployed software ages within an organization, its precision decreases and
its ability to bear an increasingly complex workload continually degrades. In that case it
becomes necessary to avoid using obsolete, aged software; all such software needs to undergo
the evolution process in order to become robust enough for the workload and complexity of
the current environment.
4. Security risks: Using outdated software within an organization may put you on the verge of
various software-based cyberattacks and could illegally expose confidential data associated
with the software in use. It therefore becomes necessary to avoid such security breaches
through regular assessment of the security patches/modules used within the software. If the
software is not robust enough to withstand current cyberattacks, it must be changed (updated).
5. New functionality and features: In order to increase performance, speed up data processing,
and improve other functionality, an organization needs to continuously evolve the software
throughout its life cycle so that the product’s stakeholders and clients can work efficiently.

Re-engineering

Software Re-engineering is a process of software development that is done to improve the
maintainability of a software system. Re-engineering is the examination and alteration of a system to
reconstitute it in a new form. This process encompasses a combination of sub-processes like reverse
engineering, forward engineering, restructuring, etc.
What is Re-engineering?

Re-engineering, also known as software re-engineering, is the process of analyzing, designing, and
modifying existing software systems to improve their quality, performance, and maintainability.
1. This can include updating the software to work with new hardware or software platforms, adding
new features, or improving the software’s overall design and architecture.
2. Software re-engineering, also known as software restructuring or software renovation, refers to
the process of improving or upgrading existing software systems to improve their quality,
maintainability, or functionality.
3. It involves reusing the existing software artifacts, such as code, design, and documentation, and
transforming them to meet new or updated requirements.

Objective of Re-engineering

The primary goal of software re-engineering is to improve the quality and maintainability of the
software system while minimizing the risks and costs associated with the redevelopment of the
system from scratch. Software re-engineering can be initiated for various reasons, such as:
1. To describe a cost-effective option for system evolution.
2. To describe the activities involved in the software maintenance process.
3. To distinguish between software and data re-engineering and to explain the problems of data re-
engineering.
Overall, software re-engineering can be a cost-effective way to improve the quality and functionality
of existing software systems, while minimizing the risks and costs associated with starting from
scratch.

Process of Software Re-engineering

The process of software re-engineering involves the following steps:



1. Planning: The first step is to plan the re-engineering process, which involves identifying the
reasons for re-engineering, defining the scope, and establishing the goals and objectives of the
process.
2. Analysis: The next step is to analyze the existing system, including the code, documentation, and
other artifacts. This involves identifying the system’s strengths and weaknesses, as well as any
issues that need to be addressed.
3. Design: Based on the analysis, the next step is to design the new or updated software system.
This involves identifying the changes that need to be made and developing a plan to implement
them.
4. Implementation: The next step is to implement the changes by modifying the existing code,
adding new features, and updating the documentation and other artifacts.
5. Testing: Once the changes have been implemented, the software system needs to be tested to
ensure that it meets the new requirements and specifications.
6. Deployment: The final step is to deploy the re-engineered software system and make it available
to end-users.
Why Perform Re-engineering?

Re-engineering can be done for a variety of reasons, such as:


1. To improve the software’s performance and scalability: By analyzing the existing code and
identifying bottlenecks, re-engineering can be used to improve the software’s performance and
scalability.
2. To add new features: Re-engineering can be used to add new features or functionality to
existing software.
3. To support new platforms: Re-engineering can be used to update existing software to work with
new hardware or software platforms.
4. To improve maintainability: Re-engineering can be used to improve the software’s overall
design and architecture, making it easier to maintain and update over time.
5. To meet new regulations and compliance: Re-engineering can be done to ensure that the
software is compliant with new regulations and standards.
6. Improving software quality: Re-engineering can help improve the quality of software by
eliminating defects, improving performance, and enhancing reliability and maintainability.
7. Updating technology: Re-engineering can help modernize the software system by updating the
technology used to develop, test, and deploy the system.
8. Enhancing functionality: Re-engineering can help enhance the functionality of the software
system by adding new features or improving existing ones.
9. Resolving issues: Re-engineering can help resolve issues related to scalability, security, or
compatibility with other systems.
Steps involved in Re-engineering
1. Inventory Analysis
2. Document Restructuring
3. Reverse Engineering
4. Code Restructuring
5. Data Restructuring
6. Forward Engineering
Steps of Re-Engineering

Re-engineering Cost Factors


1. The quality of the software to be re-engineered.
2. The tool support available for re-engineering.
3. The extent of the required data conversion.
4. The availability of expert staff for re-engineering.
Factors Affecting Cost of Re-engineering

Re-engineering can be a costly process, and there are several factors that can affect the cost of
re-engineering a software system:
1. Size and complexity of the software: The larger and more complex the software system, the
more time and resources will be required to analyze, design, and modify it.
2. Number of features to be added or modified: The more features that need to be added or
modified, the more time and resources will be required.
3. Tools and technologies used: The cost of re-engineering can be affected by the tools and
technologies used, such as the cost of software development tools and the cost of hardware and
infrastructure.
4. Availability of documentation: If the documentation of the existing system is not available or is
not accurate, then it will take more time and resources to understand the system.
5. Team size and skill level: The size and skill level of the development team can also affect the
cost of re-engineering. A larger and more experienced team may be able to complete the project
faster and with fewer resources.
6. Location and rate of the team: The location and rate of the development team can also affect
the cost of re-engineering. Hiring a team in a lower-cost location or with lower rates can help to
reduce the cost of re-engineering.
7. Testing and quality assurance: Testing and quality assurance are important aspects of re-
engineering, and they can add significant costs to the project.
8. Post-deployment maintenance: The cost of post-deployment maintenance such as bug fixing,
security updates, and feature additions can also play a role in the cost of re-engineering.
In summary, the cost of re-engineering a software system can vary depending on a variety of factors,
including the size and complexity of the software, the number of features to be added or modified,
the tools and technologies used, and the availability of documentation and the skill level of the
development team. It’s important to carefully consider these factors when estimating the cost of re-
engineering a software system.
Advantages of Re-engineering
1. Reduced Risk: As the software already exists, the risk is lower compared to new software
development. Development problems, staffing problems, and specification problems are among
the many problems that may arise in new software development.
2. Reduced Cost: The cost of re-engineering is less than the costs of developing new software.
3. Revelation of Business Rules: As a system is re-engineered, business rules that are embedded
in the system are rediscovered.
4. Better use of Existing Staff: Existing staff expertise can be maintained and extended to
accommodate new skills during re-engineering.
5. Improved efficiency: By analyzing and redesigning processes, re-engineering can lead to
significant improvements in productivity, speed, and cost-effectiveness.
6. Increased flexibility: Re-engineering can make systems more adaptable to changing business
needs and market conditions.
7. Better customer service: By redesigning processes to focus on customer needs, re-engineering
can lead to improved customer satisfaction and loyalty.
8. Increased competitiveness: Re-engineering can help organizations become more competitive by
improving efficiency, flexibility, and customer service.
9. Improved quality: Re-engineering can lead to better quality products and services by identifying
and eliminating defects and inefficiencies in processes.
10. Increased innovation: Re-engineering can lead to new and innovative ways of doing things,
helping organizations to stay ahead of their competitors.
11. Improved compliance: Re-engineering can help organizations to comply with industry standards
and regulations by identifying and addressing areas of non-compliance.
Disadvantages of Re-engineering

Major architectural changes or radical reorganization of the system’s data management has to be done
manually. A re-engineered system is not likely to be as maintainable as a new system developed using
modern software engineering methods.
1. High costs: Re-engineering can be a costly process, requiring significant investments in time,
resources, and technology.
2. Disruption to business operations: Re-engineering can disrupt normal business operations and
cause inconvenience to customers, employees and other stakeholders.
3. Resistance to change: Re-engineering can encounter resistance from employees who may be
resistant to change and uncomfortable with new processes and technologies.
4. Risk of failure: Re-engineering projects can fail if they are not planned and executed properly,
resulting in wasted resources and lost opportunities.
5. Lack of employee involvement: Re-engineering projects that are not properly communicated
and involve employees, may lead to lack of employee engagement and ownership resulting in
failure of the project.
6. Difficulty in measuring success: Re-engineering can be difficult to measure in terms of success,
making it difficult to justify the cost and effort involved.
7. Difficulty in maintaining continuity: Re-engineering can lead to significant changes in processes
and systems, making it difficult to maintain continuity and consistency in the organization.
UNIT-V

Advanced Topics in Object-Oriented Software Engineering:

Model-driven engineering (MDE):

Model-Driven Engineering (MDE) is the practice of raising models to first-class artefacts of the
software engineering process, using such models to analyse, simulate, and reason about properties of
the system under development, and eventually, often auto-generate (a part of) its implementation.

MDE brings and adapts well-understood and long-established principles and practices of trustworthy
systems engineering to software engineering; it is unthinkable to start constructing e.g. a bridge or an
aircraft without designing and analysing several models of it first. It’s used extensively in
organisations that produce business or safety-critical software, including in the aerospace, automotive
and robotics industries, where defects can have catastrophic effects (e.g. loss of life), or can be very
expensive to remedy, for example requiring large-scale product recalls. MDE is also increasingly
used for non-critical systems due to the productivity and consistency benefits it delivers, largely
through automated code generation.

Essentially, the use of domain-specific models enables software engineers to capture essential
information about the system under development at precisely the level of detail that is appropriate for
their domain and technical stack. These models are then used to automate labor-intensive and tedious
work (writing setters and getters, JSON marshalling and un-marshalling code etc. is nobody’s idea of
fun) which lets engineers channel their creativity towards the novel and intellectually demanding
parts of the system under development.
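The code-generation idea mentioned above (automating setters and getters from a model) can be sketched in Python. The dictionary-based model format and the `generate_class` function are assumptions made for this illustration, not part of any standard MDE tool.

```python
# A tiny domain-specific "model": a class name plus its fields.
model = {"name": "Car", "fields": ["colour", "max_speed"]}

def generate_class(model):
    """Generate Python source for a class with getters and setters."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(model["fields"])
    lines.append(f"    def __init__(self, {params}):")
    for f in model["fields"]:
        lines.append(f"        self._{f} = {f}")
    for f in model["fields"]:
        lines.append(f"    def get_{f}(self):")
        lines.append(f"        return self._{f}")
        lines.append(f"    def set_{f}(self, value):")
        lines.append(f"        self._{f} = value")
    return "\n".join(lines)

# The tedious accessor code is produced automatically from the model.
source = generate_class(model)
namespace = {}
exec(source, namespace)
car = namespace["Car"]("red", 200)
assert car.get_colour() == "red"
car.set_max_speed(220)
assert car.get_max_speed() == 220
```

In a real MDE toolchain the model would be written in a standardized modelling language and the generator would be a model-to-text transformation, but the division of labour is the same: engineers maintain the model, and the repetitive code is generated from it.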

Models, Metamodels, and Model Transformations

The central artefact in MDE is the model. A model in the computing world is a simplification of a
process one wishes to capture or automate. The simplification deliberately omits details that are
not needed at a given stage of the engineering cycle.

The purpose is to focus on the relevant concepts at hand—much as for example a plaster model of a
car for studying body aerodynamics will not take into account the real materials a car is made of.
In the computing world a model is defined using a given language. Returning to the car analogy,
if an engineer wishes to have a computational model of a car for 3D visualization, a language such as
the one defined by a Computer-Aided Design (CAD) tool will be necessary to express a particular
car design. In the computing world several such languages, called metamodels, are used to describe
families of models of computational artefacts that share the same abstraction concerns.
Each metamodel is a language (also called a formalism) that may have many model instantiations,
much as many different car designs can be described in a CAD tool.
The missing piece in this set of concepts is model transformations: programs that take one or more
models as input and produce models (or code) as output, automating the move from one level of
abstraction to another.
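A hypothetical model-to-model transformation can make this concrete. The sketch below maps a tiny class model onto a relational schema (each class becomes a table, each attribute a typed column), which is a classic textbook transformation; the model encoding and type mapping are invented for illustration:

```python
# Hedged sketch of a model-to-model transformation: a class model is
# transformed into a relational schema, as an MDE toolchain might do.
class_model = {
    "Customer": {"name": "string", "email": "string"},
    "Order":    {"total": "float", "placed_on": "date"},
}

# Assumed mapping from model-level types to SQL column types.
TYPE_MAP = {"string": "VARCHAR(255)", "float": "REAL", "date": "DATE"}

def class_to_schema(model: dict) -> dict:
    """Each class becomes a table; each attribute a typed column plus a key."""
    schema = {}
    for cls, attrs in model.items():
        columns = {"id": "INTEGER PRIMARY KEY"}   # synthetic primary key
        for attr, typ in attrs.items():
            columns[attr] = TYPE_MAP[typ]
        schema[cls.lower()] = columns
    return schema

schema = class_to_schema(class_model)
```

Here both the input and the output conform to (implicit) metamodels, and the transformation is the bridge between them; industrial MDE tools express the same idea with dedicated transformation languages rather than plain code.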

Model-Driven Security

Model-Driven Security (MDS) can be seen as a specialization of MDE for supporting the development
of security-critical applications. MDS makes use of the conceptual approach of MDE as well as its
associated techniques and tools to propose methods for the engineering of security-critical
applications. More specifically, models are the central artifacts in every MDS approach. Besides being
used to describe the system’s business logic, they are used extensively to capture security concerns.

Model-Driven Architecture
Model-Driven Architecture (MDA) is an OMG proposal launched in 2001 to help standardize model
definitions and favor model exchange and compatibility. The MDA consists of the following points:

It builds on UML, an already standardized and well-accepted notation, widely used
in object-oriented systems. In an effort to harmonize notations and clean up UML's internal
structure, the Meta-Object Facility (MOF) was proposed to cope with the plethora of model
definitions and languages;

It proposes a pyramidal construction of models: artifacts
populating level M0 represent the actual system; those at level M1 model
the M0 ones; artifacts belonging to level M2 are metamodels, allowing the definition
of M1 models; and finally, the unique artifact at level M3 is MOF itself,
meta-circularly defined as a model.
Aspect-oriented programming (AOP)

Aspect-oriented programming (AOP) is a coding approach that helps developers write cleaner, more
organized code by separating common tasks, such as logging or error handling, from the main
program logic. In AOP, code is separated into modules or ‘aspects’ that encapsulate related
functionality, making it easier to manage and modify.

In simpler terms, imagine you’re writing a program with considerable repeated code. Let’s say you’re
building a website and must perform security checks on every page. You could write the security
code on every single page; however, that would be inefficient and difficult to manage. Instead, you
could use AOP to create a separate ‘security aspect’ that handles security checks for all the pages in
one central location. This makes the code more organized and easier to modify in the future.
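The website security example above can be sketched with a Python decorator, which is one common, lightweight way to express a single cross-cutting "security aspect" applied to every page handler. The names (`requires_auth`, `account_page`, the request-dict shape) are hypothetical:

```python
# Hedged sketch: one security aspect applied to every page handler,
# instead of duplicating the check inside each handler.
import functools

def requires_auth(handler):
    """Security aspect: reject unauthenticated requests before the handler runs."""
    @functools.wraps(handler)
    def wrapper(request):
        if not request.get("user"):            # the cross-cutting check
            return {"status": 401, "body": "login required"}
        return handler(request)
    return wrapper

@requires_auth
def account_page(request):
    # Main program logic stays free of security code.
    return {"status": 200, "body": f"hello {request['user']}"}
```

Calling `account_page({"user": "ada"})` succeeds, while `account_page({})` is turned away by the aspect; changing the security policy means editing one place, not every page.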

Development of AOP

AOP first emerged in the late 1990s. It was developed as a response to the limitations of Object-
oriented programming (OOP) in dealing with cross-cutting concerns. The concept of cross-cutting
concerns was first introduced in a 1997 paper by Gregor Kiczales, John Lamping, and others, titled
‘Aspect-Oriented Programming.’ They argued that certain concerns, such as logging, error handling,
and security, cut across multiple modules or components of an application and cannot be
encapsulated within a single class or module.
Today, AOP is widely used in various domains, including web and mobile development, gaming, big
data, cloud computing, and IoT.

It has proven to be a useful tool to improve application performance, reliability, and scalability by
separating cross-cutting concerns from business logic.
How Does AOP Work?

AOP is a programming technique that focuses on modularizing cross-cutting concerns, which are
features that cut across different parts of an application. Cross-cutting concerns include things such as
logging, security, performance monitoring, error handling, and more.
Here’s a step-by-step explanation of how AOP works.

1. Identifying concerns

First, identify the different concerns or responsibilities of your program. For example, if you’re
writing a program to process orders, you might identify concerns such as order validation, payment
processing, and order fulfilment.

2. Defining aspects

Once you’ve identified your concerns, you can define aspects for each one. An aspect is a modular
unit of code that encapsulates a specific behavior or responsibility. For example, you might create an
aspect for order validation, another for payment processing, and so on.

3. Determining join points

A join point is a specific point in your program’s execution where an aspect can be applied. For
example, a join point for the order validation aspect might be when a new order is submitted. You
can define join points in your code using annotations or other markers.

4. Defining pointcuts

A pointcut is a set of join points where an aspect should be applied. For example, you might define a
pointcut for the order validation aspect that includes all the join points where a new order is
submitted. Pointcuts help narrow down the scope of an aspect so that it’s only applied where needed.

5. Defining advice

Advice is the behavior an aspect provides at a join point. Several types of advice exist, including
before, after, and around advice. Before advice is executed before the join point, after advice is
executed after the join point, and around advice wraps the join point and can modify its behavior.

6. Weaving aspects

Weaving is the process of applying aspects to your program’s execution. During weaving, the advice
provided by aspects is injected into the appropriate join points in your code. This allows the behavior
of aspects to be integrated into your program’s execution.

7. Executing program

Once your aspects are woven into your program, you can execute the program as usual. The
behaviour the aspects provide will be triggered at the appropriate join points, and your program will
run as expected.
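The seven steps above can be sketched in plain Python, without any AOP framework. In this hedged illustration, the "pointcut" is a simple name predicate, the "advice" runs before and after each matched join point (a method call), and "weaving" wraps the matching methods on a class; all names are hypothetical:

```python
# Sketch of the AOP steps: pointcut -> advice -> weaving -> execution.
import functools

calls = []   # records advice executions, for demonstration

def pointcut(name: str) -> bool:
    """Pointcut: select join points whose method name starts with 'submit'."""
    return name.startswith("submit")

def logging_advice(fn):
    """Before/after advice wrapped around one join point."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(f"before {fn.__name__}")   # before advice
        result = fn(*args, **kwargs)            # the join point itself
        calls.append(f"after {fn.__name__}")    # after advice
        return result
    return wrapper

def weave(cls):
    """Weaving: apply the advice to every join point matched by the pointcut."""
    for name in list(vars(cls)):
        attr = getattr(cls, name)
        if callable(attr) and pointcut(name):
            setattr(cls, name, logging_advice(attr))
    return cls

@weave
class OrderService:
    def submit_order(self, order):   # matched join point
        return f"processed {order}"
    def list_orders(self):           # not matched by the pointcut
        return []
```

Running `OrderService().submit_order("o-1")` triggers the before and after advice around the call, while `list_orders` runs unadvised; frameworks such as AspectJ do the same weaving at compile time or load time, in bytecode rather than via decorators.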

Prominent AOP Frameworks


Today, several AOP frameworks are available for different programming languages, each with its
own set of features and capabilities. Here are some of the most prominent ones widely used in
software development.


1. Spring AOP

Spring AOP is a lightweight AOP framework that is part of the larger ‘Spring Framework.’ It
supports several types of advice, including before, after, around, and after-returning advice. It also
supports pointcut expressions, which allow you to specify sets of join points using a simple syntax.
Spring AOP integrates easily with other parts of the Spring Framework, such as Spring MVC and
Spring Boot.

2. AspectJ

AspectJ is a powerful AOP framework that supports many features, including advanced pointcut
expressions, inter-type declarations, and aspect inheritance. It allows you to weave aspects into both
Java and bytecode at compile time or runtime, giving you increased flexibility. AspectJ is also
compatible with many integrated development environments (IDEs) and build tools, including
Eclipse and Maven.

3. JBoss AOP

JBoss AOP is an AOP framework that is part of the JBoss application server. It supports before and after
advice, around advice, and inter-type declarations. It also includes several built-in aspects for
common concerns such as security and transaction management. JBoss AOP weaves aspects into
bytecode at runtime and integrates easily with other parts of the JBoss ecosystem.
4. Guice AOP

Guice AOP is an AOP framework that is part of the Google Guice dependency injection library. It supports
before, after, and around advice in the form of method interceptors. Guice AOP uses method
interception rather than bytecode weaving, which can be simpler and faster in some cases. Integrating
with other Guice features, such as dependency injection and scopes, is easy.

5. PostSharp

PostSharp is an AOP framework for .NET that supports a wide range of features, including before
and after advice, around advice, and aspect inheritance. It also includes several pre-built aspects for
common concerns such as logging and caching. PostSharp weaves aspects into .NET assemblies at
compile time and integrates easily with Visual Studio and other .NET tools.

Technically, AOP frameworks provide developers with a powerful tool to implement cross-cutting
concerns in their applications. Each framework provides different features and has its own strengths.
The choice of framework largely depends on the application’s requirements, the programming
language used, and the development environment.

Application Areas of AOP


AOP is used across multiple application areas. Let’s look at how different fields use AOP
programming.

1. Web applications

AOP can be used in web applications to separate concerns such as logging, security, and transaction
management. For example, an AOP logging aspect can capture method execution times and stack
traces, while a security aspect can enforce authentication and authorization policies.
2. Enterprise applications

Aspect-oriented programming can be used in enterprise applications to manage exception handling,
caching, and performance monitoring. For instance, an AOP exception handling aspect can catch and
handle exceptions in a uniform and consistent manner across multiple components. On the other
hand, a caching aspect can cache frequently accessed data to improve performance.

3. Mobile applications

AOP is used in mobile applications to manage device compatibility, data synchronization, and user
engagement. The device compatibility aspect ensures the application works seamlessly across
different platforms and devices. Meanwhile, a data synchronization aspect can handle data conflicts
and ensure data consistency across multiple devices.

4. Embedded systems

Aspect-oriented programming could be used in embedded systems to manage concerns such as
memory management, power consumption, and fault tolerance. For example, an AOP memory
management aspect can optimize memory usage and prevent memory leaks. Meanwhile, a fault
tolerance aspect can handle hardware failures and ensure the system continues operating reliably.

5. IoT

In IoT applications, AOP addresses concerns such as security, fault tolerance, and data processing.
This implies that the AOP security aspect can enforce authentication and authorization policies to
protect against cyber-attacks. Meanwhile, a fault tolerance aspect can handle errors and failures
gracefully to ensure that the system continues to operate even in unpredictable and unreliable
environments. A data processing aspect can handle complex data processing tasks such as
aggregation, filtering, and transformation, making it easier to write efficient and maintainable code
for IoT applications.
As per February 2023 research by IoT Analytics, the IoT market size is estimated to grow at a CAGR
of 19.4% between 2022 and 2027 to reach $483 billion. As the IoT market continues to accelerate,
AOP is expected to play a crucial role in managing IoT applications.

6. Finance sector

In financial applications, AOP can be deployed to address concerns such as transaction management,
auditing, and compliance. For instance, an AOP auditing aspect can capture audit logs to ensure
compliance with regulatory requirements. A compliance aspect can enforce compliance policies and
prevent unauthorized access to sensitive financial data.

7. Healthcare industry

In healthcare applications, aspect-oriented programming can address concerns such as privacy,
security, and interoperability. An AOP privacy aspect can ensure that sensitive patient data is
protected and handled in compliance with privacy regulations, while a security aspect can protect
against cyber threats and data breaches.
An interoperability aspect can ensure that healthcare systems can communicate and exchange data
seamlessly, making it easier to share patient information across different healthcare providers and
systems.

8. Gaming industry

In gaming applications, AOP can be used to address issues such as performance optimization,
debugging, and game logic. For example, an AOP performance optimization aspect can optimize
game rendering and animation, improving the overall gameplay experience for players. An AOP
debugging aspect can capture detailed debugging information to help developers identify and resolve
issues quickly and efficiently.
A game logic aspect can handle tasks such as player movement, scoring, and collision detection,
making it easier to write maintainable and scalable code for complex game mechanics.

9. Robotics

In robotics, one can use AOP for cross-cutting concerns such as performance monitoring and error
handling. It can be used to intercept method calls and measure the time taken by the method to be
completed. This helps identify performance bottlenecks and optimize the performance of the robotic
system. Additionally, aspect-oriented programming can handle errors gracefully and ensure that the
robotic system continues to function properly.
Moreover, AOP can be used in robotic process automation (RPA) to modularize cross-cutting
concerns and apply them to specific parts of an automation process without modifying the
fundamental business logic. This can make the automation process more efficient and easier to
maintain over time.
10. Big data

With the exponential increase in the volume of data generated by devices worldwide, substantial
market growth of big data is witnessed. According to a recent report by Expert Market Research, the
global big data market was valued at $271.30 billion in 2022 and is predicted to reach $624.27 billion
by 2028.
With such high growth in the big data market, AOP is expected to play a critical role in logging and
tracing, security, and error handling in data. AOP can intercept method calls and add logging and
tracing functionality to them, allowing developers to trace the flow of data in the system and identify
issues as and when they arise.

Component-based software engineering (CBSE)


Component-Based Software Engineering (CBSE) is a process that focuses on the design and
development of computer-based systems with the use of reusable software components.
It not only identifies candidate components but also qualifies each component’s interface, adapts
components to remove architectural mismatches, assembles components into a selected
architectural style, and updates components as requirements for the system change.
The process model for component-based software engineering occurs concurrently
with component-based development.
Component-based development:
Component-based development (CBD) is a CBSE activity that occurs in parallel with domain
engineering. Using analysis and architectural design methods, the software team refines an
architectural style that is appropriate for the analysis model created for the application to be built.
CBSE Framework Activities:
Framework activities of Component-Based Software Engineering are as follows:
1. Component Qualification: This activity ensures that the system architecture defines the
requirements of the components for becoming reusable components. Reusable components
are generally identified through the traits in their interfaces. It means “the services that are
given and the means by which customers or consumers access these services ” are defined as a
part of the component interface.
2. Component Adaptation: This activity ensures that the architecture defines the design
conditions for all components and identifies their modes of connection. In some cases, existing
reusable components may not be allowed to get used due to the architecture’s design rules and
conditions. These components should adapt and meet the requirements of the architecture or be
refused and replaced by other, more suitable components.
3. Component Composition: This activity ensures that the Architectural style of the system
integrates the software components and forms a working system. By identifying the connection
and coordination mechanisms of the system, the architecture describes the composition of the
end product.
4. Component Update: This activity ensures the updating of reusable components. Sometimes,
updates are complicated due to the inclusion of third-party (the organization that developed the
reusable component may be outside the immediate control of the software engineering
organization accessing the component currently).
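The qualification, adaptation, and composition activities can be sketched together in a small hedged example: a required component interface, a reusable legacy component whose interface does not match (an architectural mismatch), an adapter that resolves the mismatch, and a composed system. All names here are hypothetical:

```python
# Sketch of CBSE activities: interface (qualification), adapter
# (adaptation), and wiring (composition). Names are hypothetical.
from typing import Protocol

class PaymentComponent(Protocol):
    """Qualified component interface: the services the architecture requires."""
    def charge(self, amount: float) -> bool: ...

class LegacyGateway:
    """Reusable component with a mismatched interface (cents, status string)."""
    def make_payment(self, cents: int) -> str:
        return "OK"

class LegacyGatewayAdapter:
    """Component adaptation: make the legacy component fit the architecture."""
    def __init__(self, gateway: LegacyGateway):
        self.gateway = gateway
    def charge(self, amount: float) -> bool:
        return self.gateway.make_payment(int(amount * 100)) == "OK"

class Checkout:
    """Component composition: the system is assembled from components."""
    def __init__(self, payments: PaymentComponent):
        self.payments = payments
    def pay(self, amount: float) -> bool:
        return self.payments.charge(amount)

checkout = Checkout(LegacyGatewayAdapter(LegacyGateway()))
```

If requirements change, the adapter or gateway can be replaced behind the same interface, which is the component-update activity in miniature.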

Service-oriented architecture (SOA):

Service-Oriented Architecture (SOA) is a stage in the evolution of application development and/or
integration. It defines a way to make software components reusable through their interfaces.
Formally, SOA is an architectural approach in which applications make use of services available in
the network. In this architecture, services are provided to form applications, through a network call
over the internet. It uses common communication standards to speed up and streamline the service
integrations in applications. Each service in SOA is a complete business function in itself. The
services are published in such a way that it makes it easy for the developers to assemble their apps
using those services. Note that SOA is different from microservice architecture.
 SOA allows users to combine a large number of facilities from existing services to form
applications.
 SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
 SOA-based computing packages functionalities into a set of interoperable services, which can
be integrated into different software systems belonging to separate business domains.

There are two major roles within Service-oriented Architecture:

1. Service provider: The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use. To advertise services, the provider
can publish them in a registry, together with a service contract that specifies the nature of the
service, how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
Services might aggregate information and data retrieved from other services or create workflows of
services to satisfy the request of a given service consumer. This practice is known as service
orchestration. Another important interaction pattern is service choreography, which is the
coordinated interaction of services without a single point of control.
Components of SOA:

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service description documents.
2. Loose coupling: Services are designed as self-contained components and maintain relationships
that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description documents.
They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus reducing
development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service consumer
point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered. Service discovery provides an
effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations can
be implemented. Service orchestration and choreography provide a solid support for composing
services and achieving business goals.
Advantages of SOA:
 Service reusability: In SOA, applications are made from existing services. Thus, services can
be reused to make many applications.
 Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
 Availability: SOA facilities are easily available to anyone on request.
 Reliability: SOA applications are more reliable because it is easier to debug small services
than large codebases.
 Scalability: Services can run on different servers within an environment; this increases
scalability.
Disadvantages of SOA:
 High overhead: Input parameters of services are validated whenever services interact; this
increases load and response time, which decreases performance.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact, they exchange messages to perform tasks;
the number of messages may run into the millions, and handling such a large volume of
messages becomes a cumbersome task.
Practical applications of SOA: SOA is used in many ways around us, whether or not it is labeled
as such.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many mobile apps and games use built-in device functions to run. For example, an app
might need GPS, so it uses the device's built-in GPS functions. This is SOA in mobile
solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.

Agile software development and Scrum methodologies.


Agile software development is a flexible approach to software development that emphasizes
continuous improvement and adaptation to changing requirements. Scrum is a specific, structured
framework within Agile that uses defined roles, meetings, and processes to manage projects
effectively, and it is often considered the most popular Agile methodology. Essentially, Agile is a
philosophy, and Scrum is a way to implement that philosophy with a set of defined practices.
Key points about Agile and Scrum:
 Agile principles:
Prioritizes customer feedback, delivers working software frequently, embraces change, and
encourages collaboration between teams and stakeholders.
 Scrum framework:
Includes defined roles like Product Owner, Scrum Master, and Development Team, utilizes time-
boxed "sprints" for focused work, and relies on regular meetings like daily stand-ups and sprint
reviews to track progress and adapt as needed.
Key differences between Agile and Scrum:
 Flexibility:
Agile is more flexible and adaptable to changing requirements, while Scrum provides a more
structured approach with set practices.
 Specificity:
Agile is a broad concept encompassing various methodologies, while Scrum is a specific
framework within Agile.
Benefits of using Agile and Scrum:
 Faster delivery:
Frequent iterations and feedback loops allow for quicker delivery of working software.
 Improved quality:
Continuous testing and feedback throughout the development cycle helps identify and address
issues early on.
 Customer satisfaction:
Increased involvement of the customer in the development process leads to a product that better
meets their needs.
 Team collaboration:
Promotes strong teamwork and communication within the development team.