Oose CS - A, B
In safety-critical areas such as space, aviation, and nuclear power plants, the cost of
software failure can be massive because lives are at risk.
Software engineering is also needed to deal with the increased complexity of software in new applications.
Operational: This aspect tells us how well the software works in operation. It can be measured on factors such as:
Budget
Efficiency
Usability
Dependability
Correctness
Functionality
Safety
Security
Transitional: This is an essential aspect when the software is moved from one platform to
another.
Interoperability
Reusability
Portability
Adaptability
Maintenance: This aspect describes how well the software can adapt itself to a quickly
changing environment.
Flexibility
Maintainability
Modularity
Scalability
2. Class: A class encapsulates the data and procedural abstractions required to describe the
content and behavior of some real-world entity. In other words, a class is a generalized
description of a collection of similar objects.
4. Operations: also called methods and services, provide a representation of one of the
behaviors of the class.
5. Subclass: specialization of the super class. A subclass can inherit both attributes and
operations from a super class.
6. Superclass: also called a base class, is a generalization of a set of classes that are
related to it.
9. Encapsulation: Binding data and the operations that manipulate it together, and protecting
them from the outer world, is referred to as encapsulation.
10. Polymorphism: Mechanism by which functions or entities are able to exist in
different forms.
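The terms above can be tied together in a short example. Below is a minimal Python sketch (the Account classes and their names are illustrative assumptions, not from the notes):

class Account:                      # class: generalized description of similar objects
    def __init__(self, owner, balance):
        self.owner = owner          # attribute
        self.__balance = balance    # encapsulation: data hidden from the outer world

    def deposit(self, amount):      # operation (method/service)
        self.__balance += amount

    def describe(self):
        return f"{self.owner}: {self.__balance}"

class SavingsAccount(Account):      # subclass: specialization of the superclass Account
    def describe(self):             # polymorphism: the same operation in a different form
        return "Savings " + super().describe()

for acct in (Account("Ann", 100), SavingsAccount("Bob", 200)):
    print(acct.describe())          # each object responds in its own way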
A set of problems arose during the software development process. These problems are
collectively called the software crisis.
o Size: Software is becoming larger and more complex as the expectations placed on it
grow. For example, the amount of code in consumer products is doubling every couple
of years.
o Quality: Many software products have poor quality, i.e., the products show defects
after being put into use because of ineffective testing techniques. For example,
software testing typically finds 25 errors per 1000 lines of code.
o Cost: Software development is costly, both in terms of the time taken to develop and
the money involved. For example, development of the FAA's Advanced
Automation System cost over $700 per line of code.
o Delayed Delivery: Serious schedule overruns are common. Very often the
software takes longer than the estimated time to develop, which in turn drives
the cost up. For example, one in four large-scale development projects
is never completed.
To overcome this software crisis, we have to follow an SDLC.
To develop a software product, the development team must identify a suitable life cycle
model for the particular project.
Software Development Life Cycle (SDLC)
SDLC (Software Development Life Cycle): It is a step-by-step procedure, or systematic
approach, for developing software. It is a descriptive and diagrammatic representation of the
software life cycle.
Process Models:
A (Software/System) process model is a description of the sequences of activities
carried out in an SE project, and the relative order of these activities.
A software process is the set of activities and associated outcomes that produce a
software product. Software engineers mostly carry out these activities. There are four key
process activities, which are common to all software processes: software specification,
software development, software validation, and software evolution.
The most used, popular and important SDLC models are given below:
1. Waterfall model
2. Iterative model
3. RAD model
4. Prototype model
5. Agile model
6. V model
7. Spiral model
8. Incremental model
1) Waterfall Model:
The Waterfall Model is also referred to as a linear-sequential life cycle model. It is
very simple to understand and use.
In a waterfall model, each phase must be completed before the next phase can begin
and there is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear
sequential flow.
The phases of the Waterfall Model follow one another in a linear sequence: requirement
analysis, system design, implementation, testing, deployment, and maintenance.
Disadvantages of the Waterfall Model:
1. Once an application is in the testing stage, it is very difficult to go back and change
something that was not well thought out in the concept stage.
2. No working software is produced until late during the life cycle.
3. High amounts of risk and uncertainty.
4. Not a good model for complex and object-oriented projects.
5. Poor model for long and on-going projects.
6. Not suitable for the projects where requirements are at a moderate to high risk of
changing.
2) Iterative Model:
In the Iterative model, the process starts with a simple implementation of a small
set of the software requirements and iteratively enhances the evolving versions until the
complete system is implemented and ready to be deployed.
An iterative life cycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which is then reviewed to identify further requirements.
This process is then repeated, producing a new version of the software at the end of
each iteration of the model.
This model consists of the same phases as the waterfall model, but with fewer
restrictions. Generally, the phases occur in the same order as in the waterfall model, but they
may be conducted in several cycles. A usable product is released at the end of each cycle,
with each release providing additional functionality.
1. Requirement and analysis phase: In this phase, requirements are collected from
customers and examined by an analyst to determine whether they can be met. The analyst
also examines what can and cannot be achieved within the budget. After this, the
software team proceeds to the next stage.
2. Design: In the design phase, the team designs the software with different diagrams
such as data flow diagram, activity diagram, class diagram, state transition diagram,
etc.
3. Implementation: In the implementation phase, the design is coded and converted into
complete software.
4. Testing: After completing the coding phase, software testing starts using various
testing methods. The most common are the white-box, black-box, and gray-box testing
methods.
5. Deployment: After completing all the steps, the software is deployed in its working
environment.
6. Review: In this phase, after product deployment, the review phase is carried out to
check the behavior and validity of the developed product. If an error is found, the
cycle begins again.
7. Maintenance: After the software has been deployed in its working environment, some
bugs may surface or new updates may be required. Maintenance includes debugging and
adding new options.
Advantages of the Iterative Model:
1. It is easily adaptable to the ever-changing needs of the project as well as the client.
2. It is best suited for agile organisations.
3. It is more cost effective to change the scope or requirements in Iterative model.
4. Parallel development can be planned.
5. Testing and debugging during smaller iteration is easy.
6. Risks are identified and resolved during iterations, and each iteration is an easily
managed milestone.
7. In iterative model less time is spent on documenting and more time is given for
designing.
8. One can get reliable user feedback, when presenting sketches and blueprints of the
product to users for their feedback.
3) RAD Model:
The RAD (Rapid Application Development) model divides the development process into the following phases:
1) Business Modeling
2) Data modeling
3) Process modeling
4) Application generation
1) Business Modeling:
The information flow among business functions is defined: what information drives the
business process, what information is generated, and who generates it.
2) Data modeling:
The information gathered in the business modeling phase is refined into a set of data objects
that are essential for the business.
The attributes of each object are identified, and the relationships between objects are
defined.
3) Process modeling:
The data objects defined in the data modeling phase are transformed to achieve the
information flow needed to implement the business model.
The process description is created for adding, modifying, deleting or retrieving a data
object.
4) Application generation:
In the application generation phase, the actual system is built.
Automated tools are used to construct the software.
The prototypes are independently tested after each iteration so that the overall testing
time is reduced.
The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components are already tested.
4) Prototype Model:
The Prototyping Model is one of the most popularly used Software Development Life
Cycle Models (SDLC models). This model is used when the customers do not know the exact
project requirements beforehand. In this model, a prototype of the end product is first
developed, tested and refined as per customer feedback repeatedly till a final acceptable
prototype is achieved which forms the basis for developing the final product.
1. Requirements gathering and analysis:
Requirement analysis is the first step in developing a prototyping model. During this
phase, the system's requirements are precisely defined. Users of the system are
interviewed to determine what they expect from it.
2. Quick design:
The second phase could consist of a preliminary design or a quick design. During this
stage, the system’s basic design is formed. However, it is not a complete design. It
provides the user with a quick overview of the system. The rapid design aids in the
development of the prototype.
3. Build a Prototype:
During this stage, an actual prototype is built based on the knowledge gained from the
quick design. It is a small, low-level working model of the desired system.
4. User evaluation:
The proposed system is presented to the client for preliminary testing at this stage. It is
beneficial to investigate the performance model’s strengths and weaknesses. Customer
feedback and suggestions are gathered and forwarded to the developer.
5. Refining prototype:
If the user is dissatisfied with the current prototype, it is refined according to the user's
feedback and suggestions. When the user is satisfied with the refined prototype, a final
system is built based on it.
6. Implement product and maintain:
Once the final system is developed based on the approved prototype, it is thoroughly tested
and deployed to production. The system then undergoes routine maintenance to reduce
downtime and prevent major failures.
5) Agile Model:
The meaning of Agile is swift or versatile. The "Agile process model" refers to a software
development approach based on iterative development. Agile methods break tasks into
smaller iterations, or parts, and do not directly involve long-term planning. The project scope and
requirements are laid down at the beginning of the development process. Plans regarding the
number of iterations, the duration and the scope of each iteration are clearly defined in
advance.
Each iteration is considered as a short time "frame" in the Agile process model, which
typically lasts from one to four weeks. The division of the entire project into smaller parts
helps to minimize the project risk and to reduce the overall project delivery time
requirements. Each iteration involves a team working through a full software development life
cycle including planning, requirements analysis, design, coding, and testing before a working
product is demonstrated to the client.
Phases of Agile Model:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project. Based
on this information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to
show the work of new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and
looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this step, the team receives
feedback about the product and works through it.
6) V-Model
The V-Model is also referred to as the Verification and Validation model. In it, each phase of
the SDLC must be completed before the next phase starts. It follows a sequential design process,
the same as the waterfall model. Testing of the product is planned in parallel with the
corresponding stage of development.
Verification: It involves a static analysis method (review) done without executing code. It is
the process of evaluating the product development process to find out whether the specified
requirements are met.
Validation: It involves a dynamic analysis method (functional and non-functional testing)
done by executing code. It is the process of evaluating the software after development to
check whether it meets the customer's expectations.
So the V-Model contains Verification phases on one side and Validation phases on the other.
The two sides are joined by the coding phase, giving the model its V shape. Thus it is
known as the V-Model.
UNIT-II
1. Eliciting requirements
2. Analyzing requirements
3. Requirements modeling
4. Review and retrospective
1- Eliciting requirements
In this step, requirements are gathered from stakeholders and end users, for example through
interviews, questionnaires, and workshops.
2- Analyzing requirements
This step helps determine the quality of the requirements. It involves identifying whether
the requirements are unclear, incomplete, ambiguous, or contradictory. These issues are
resolved before moving to the next step.
3- Requirements modeling
In Requirements modeling, the requirements are usually documented in different formats
such as use cases, user stories, natural-language documents, or process specification.
4- Review and retrospective
This step is conducted to reflect on the previous iterations of requirements gathering in a bid
to make improvements in the process going forward.
Requirements Analysis Techniques:
There are different techniques used for business Requirements Analysis. Below is a list of
different business Requirements Analysis Techniques:
1- Business process modeling notation (BPMN)
This technique is similar to creating process flowcharts, although BPMN has its own symbols
and elements. Business process modeling and notation is used to create graphs for the
business process. These graphs simplify understanding the business process. BPMN is widely
popular as a process improvement methodology.
2- UML (Unified Modeling Language)
UML consists of an integrated set of diagrams that are created to specify, visualize, construct
and document the artifacts of a software system. UML is a useful technique while creating
object-oriented software and working with the software development process. In UML,
graphical notations are used to represent the design of a software project. UML also helps in
validating the architectural design of the software.
3- Flowchart technique
A flowchart depicts the sequential flow and control logic of a set of activities that are related.
Flowcharts are in different formats such as linear, cross-functional, and top-down. The
flowchart can represent system interactions, data flows, etc. Flow charts are easy to
understand and can be used by both the technical and non-technical team members.
Flowchart technique helps in showcasing the critical attributes of a process.
6- Gantt Charts
Gantt charts are used in project planning as they provide a visual representation of tasks that are
scheduled along with the timelines. The Gantt charts help to know what is scheduled to be
completed by which date. The start and end dates of all the tasks in the project can be seen in
a single view.
7- Integrated definition for function modeling (IDEFM)
The integrated definition for function modeling (IDEFM) technique represents the functions of a
process and their relationships to child and parent systems with the help of a box. It provides
a blueprint to gain an understanding of an organization’s system.
8- Gap Analysis
Gap analysis is a technique which helps to analyze the gaps in performance of a software
application to determine whether the business requirements are met or not. It also involves
the steps that are to be taken to ensure that all the business requirements are met successfully.
Gap denotes the difference between the present state and the target state. Gap analysis is also
known as need analysis, need assessment or need-gap analysis.
2. Class-based elements: The elements of the class-based model consist of classes and
objects, attributes, operations, and class-responsibility-collaborator (CRC) models.
3. Behavioral elements:
Behavioral elements represent the states of the system and how they are changed by
external events.
4. Flow-oriented elements:
These show how data objects are transformed as they flow through the various
system functions.
The flow elements are the data flow diagram and the control flow diagram.
Object-oriented analysis and design (OOAD)
OOAD stands for Object Oriented Analysis and Design.
Object: Object is a combination of data and logic that represents some real world entities.
Example:
Object: car
Attributes: car registration number, car name, colour, number of doors.
Functions: stop, start, change, mileage.
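The car example above maps directly onto a class definition. A hedged Python sketch (the attribute and function names follow the list above; the sample values are illustrative):

class Car:
    def __init__(self, reg_no, name, colour, doors):
        # attributes: the data describing the real-world entity
        self.reg_no = reg_no
        self.name = name
        self.colour = colour
        self.doors = doors
        self.running = False

    # functions: the logic that operates on that data
    def start(self):
        self.running = True

    def stop(self):
        self.running = False

my_car = Car("AP-09-1234", "Altima", "blue", 4)  # an object: data + logic combined
my_car.start()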
Object Representation:
The Subsystem Layer: It represents the subsystem that enables software to achieve user
requirements and implement technical frameworks that meet user needs.
The Class and Object Layer: It represents the class hierarchies that enable the system to
develop using generalization and specialization. This layer also represents each object.
The Message Layer: It represents the design details that enable each object to communicate
with its partners. It establishes internal and external interfaces for the system.
The Responsibilities Layer: It represents the data structure and algorithmic design for all the
attributes and operations for each object.
What is OOAD?
It is a software engineering approach that models the system as a group of interacting objects.
Each object represents a system entity that plays a vital role in building the system.
A use case is a list of actions defining the interaction between an actor and a system to achieve
a goal.
An actor can be a human or an external system.
The use case is a popular tool in requirements analysis.
A domain model is a conceptual model of the domain that incorporates both behaviour and
data.
It gives very good information about the problem to be solved.
It describes the structure of a system by showing the system's classes, their attributes,
operations, and the relationships among the objects.
In Object-Oriented Analysis and Design (OOAD), design patterns are reusable fixes for typical
software design issues that occur during the development process. These patterns capture best
practices, principles, and guidelines for creating modular, scalable, and maintainable software
systems. They offer an organized method for resolving common design problems, encouraging
code reuse, adaptability, and ease of maintenance. OOAD design patterns that are frequently used
include the following:
Creational Patterns: These patterns focus on the techniques involved in the creation of
objects, helping in their appropriate creation. Examples include the Factory Method pattern,
Builder pattern, and Singleton pattern.
Structural Patterns: Structural patterns deal with object composition and class relationships,
aiming to simplify the structure of classes and objects. Examples include the Adapter pattern,
Composite pattern, and Decorator pattern.
Behavioral Patterns: Behavioral patterns address how objects interact and communicate with
each other, focusing on the delegation of responsibilities between objects. Examples include the
Observer pattern, Strategy pattern, and Command pattern.
Architectural Patterns: These patterns provide high-level templates for organizing a software
system's general structure. Examples include the Model-View-Controller (MVC) pattern,
Layered Architecture pattern, and Microservices pattern.
Developers can create software systems that are more reliable, maintainable, and scalable by
utilizing these design patterns, which provide tried-and-true solutions to common design issues. In
addition, design patterns facilitate team collaboration and increase overall development efficiency
by promoting consistency, code reusability, and ease of understanding.
Reduced Complexity: By leveraging existing patterns, developers can avoid reinventing the
wheel. This saves time and effort, leading to faster development cycles.
Improved Code Quality: Design patterns often promote good coding practices, resulting in
cleaner, more modular code that's easier to understand, maintain, and modify.
Enhanced Communication: Design patterns provide a common language for developers,
fostering better communication and collaboration within a team.
Promotes Reusability: The core concept of design patterns is reusability. They can be applied
in different contexts within a project or even across multiple projects.
Proven Solutions: Design patterns represent well-tested solutions, offering confidence that the
chosen approach is effective and avoids potential pitfalls.
In essence, design patterns empower developers to create well-structured, maintainable, and
efficient object-oriented software systems by offering a library of reusable solutions to common
design challenges.
In Object-Oriented Analysis and Design (OOAD), several design patterns are commonly used to
address recurring design problems. Here are some of the most commonly used design patterns:
1. Singleton Pattern
Ensures that a class has only one instance and provides a global point of access to it. Useful for
managing global resources or maintaining a single configuration throughout an application.
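A minimal Python sketch of the Singleton pattern (the Config class and its settings attribute are illustrative assumptions):

class Config:
    """Singleton: at most one instance, with a global point of access."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}   # shared global state, created once
        return cls._instance

a = Config()
b = Config()
assert a is b   # both names refer to the single shared instance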
2. Factory Method Pattern
Defines an interface for creating an object, but allows subclasses to alter the type of objects that
will be created. Useful for decoupling the creation of objects from the client code.
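A hedged Python sketch of the Factory Method pattern (the Document/Exporter names are illustrative, not from the notes):

from abc import ABC, abstractmethod

class Document(ABC):
    @abstractmethod
    def render(self): ...

class PdfDocument(Document):
    def render(self): return "pdf"

class Exporter(ABC):
    @abstractmethod
    def create_document(self) -> Document:   # the factory method
        ...

    def export(self):
        # client code depends only on the Document interface;
        # subclasses decide which concrete class gets instantiated
        return self.create_document().render()

class PdfExporter(Exporter):
    def create_document(self): return PdfDocument()

print(PdfExporter().export())   # -> pdf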
3. Abstract Factory Pattern
Provides an interface for creating families of related or dependent objects without specifying their
concrete classes. Useful for creating objects with varying implementations but ensuring they work
together seamlessly.
4. Builder Pattern
Separates the construction of a complex object from its representation, allowing the same
construction process to create different representations. Useful for creating objects with a large
number of configuration options or parameters.
5. Prototype Pattern
Creates new objects by copying an existing object, known as the prototype, rather than creating
new instances from scratch. Useful for improving performance and reducing the overhead of object
creation.
6. Adapter Pattern
Allows incompatible interfaces to work together by providing a wrapper or intermediary that
converts the interface of one class into another interface expected by the client. Useful for
integrating legacy code or third-party libraries into new systems.
7. Observer Pattern
Defines a one-to-many dependency between objects so that when one object changes state, all its
dependents are notified and updated automatically. Useful for implementing event handling
systems or maintaining consistency between related objects.
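A small Python sketch of the Observer pattern (Subject and Logger are illustrative names):

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # one-to-many: every registered dependent is updated automatically
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event):
        print("logged:", event)

subject = Subject()
subject.attach(Logger())
subject.notify("state changed")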
8. Strategy Pattern
Defines a family of algorithms, encapsulates each one, and makes them interchangeable. It allows
the algorithm to vary independently from the clients that use it. Useful for selecting algorithms at
runtime or providing different implementations of the same behavior.
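A brief Python sketch of the Strategy pattern (the catalog-sorting scenario is an assumed example):

# Each strategy encapsulates one interchangeable algorithm.
def by_price(items): return sorted(items, key=lambda i: i["price"])
def by_name(items):  return sorted(items, key=lambda i: i["name"])

class Catalog:
    def __init__(self, sort_strategy):
        self.sort_strategy = sort_strategy   # selected at runtime

    def listing(self, items):
        return self.sort_strategy(items)

items = [{"name": "pen", "price": 5}, {"name": "book", "price": 2}]
print(Catalog(by_price).listing(items))      # the algorithm varies independently of the client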
These design patterns provide solutions to common design problems encountered during software
development and promote principles such as code reuse, modularity, and flexibility in OOAD.
What is UML:
Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed.
UML is a pictorial language useful for making software blueprints. UML is not a programming
language. UML has a direct relation with object-oriented analysis and design.
UML is linked with object oriented design and analysis. UML makes the use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly classified as:
Behavior Diagrams: Behavioral diagrams basically capture the dynamic aspect of a system.
Dynamic aspect can be further described as the changing/moving parts of a system. Behavior
diagrams include: Use Case Diagrams, State Diagrams, Activity Diagrams and Interaction
Diagrams.
1. Activity Diagram: An activity diagram describes the flow of control in a system. It consists of
activities and links. The flow can be sequential, concurrent, or branched.
2. State Machine Diagram: The state machine diagram is also called the Statechart or State
Transition diagram. It models event-based systems to handle the state of an object. It also
defines several distinct states of a component within the system. Each object/component has
a specific state.
There are two kinds of state machine diagrams:
Behavioral state machine
Protocol state machine
3. Use Case Diagram: Use case diagrams are a set of use cases, actors, and their relationships.
They represent the use case view of a system. These diagrams also identify the interactions
between the system and its actors.
4. Interaction diagram: It is used to capture the interactive behavior of a system.
a) Communication diagrams: Communication diagrams are also known as collaboration
diagrams. They are used to show the relationships and interactions among software objects.
b) Interaction overview diagram: An interaction overview diagram is a form of activity
diagram in which the nodes represent interaction diagrams.
c) Sequence diagram: The sequence diagram represents the flow of messages in the system
and is also termed as an event diagram.
d) Timing diagram: The timing diagram describes how an object changes from one state
to another over time.
Structural Diagrams: The structural diagrams represent the static aspect of the
system. These static parts are represented by classes, interfaces, objects, components, and nodes.
Structural Diagrams includes Component Diagrams, Object Diagrams, Class Diagrams and
Deployment Diagrams.
1. Class diagrams: Class diagrams are the most common diagrams used in UML. Class
diagram consists of classes, interfaces, associations, and collaboration. Class diagrams
basically represent the object-oriented view of a system, which is static in nature.
A class box is divided into compartments: the top compartment contains the class name, the
middle compartment contains the attributes, and the bottom compartment contains the
functions (operations).
2. Component Diagram: Component diagrams are used to represent how the physical
components in a system have been organized.
3. Object Diagram: An object diagram can be referred to as a snapshot of the instances in a
system and the relationships that exist between them.
4. Composite Structure Diagram: We use composite structure diagrams to represent the
internal structure of a class and its interaction points with other parts of the system.
5. Deployment Diagram: Deployment Diagrams are used to represent system hardware and its
software.
6. Package Diagram: We use Package Diagrams to depict how packages and their elements
have been organized. A package diagram simply shows us the dependencies between
different packages and the internal composition of packages.
UNIT-III
Basic Structural Modeling: Classes, Relationships, Common Mechanisms, and Diagrams.
Class & Object Diagrams: Terms, concepts, modeling techniques for Class & Object Diagrams.
These static parts are represented by classes, interfaces, objects, components, and
nodes. The structural diagrams are:
Class diagram
Object diagram
Component diagram
Deployment diagram
Package diagram
Class diagram:
Class diagrams are the most common diagrams used in UML. Class diagram consists of
classes, interfaces, associations, and collaboration. Class diagrams basically represent the
object-oriented view of a system, which is static in nature.
Elements of UML Class Diagram:
Class
Attributes
Operations
Relationships
(1) Class:
A class represents state (data/attributes) and behavior (logic/ methods/ functions/ operations).
Class is represented by a rectangle (box).
Class Name: Class name is the only mandatory information. The name of the class appears
in the first partition.
(2) Attributes:
Class Attributes: Each attribute has a data type. The attributes are shown in the second
partition, the attribute type is shown after the colon.
(3) Operations:
Class Operations: Each operation has a signature. The operations are shown in the third
partition, the return type of a method is shown after the colon at the end of the method
signature.
Class Visibility: +, -, # are symbols before an attribute and operation name in a class denote
the visibility of the attribute and operation.
+ indicates Public attribute / operation
- indicates Private attribute / operation
# indicates Protected attribute / operation
(4) Relationships:
A class may be involved in one or more relationships with other classes. A relationship can be
one of the following types:
1. Association
Associations are relationships between classes in a UML Class Diagram. They are
represented by a solid line between classes. Associations are typically named using a verb or
verb phrase which reflects the real world problem domain.
2. Inheritance (or Generalization):
An "is-a" relationship in which a child class (subclass) inherits the attributes and operations
of a parent class (superclass).
3. Realization
Realization is a relationship between the blueprint class and the object containing its
respective implementation-level details.
4. Dependency
An object of one class might use an object of another class in the code of a method. If the
object is not stored in any field, then this is modeled as a dependency relationship.
5. Aggregation
A special type of association that represents a "whole-part" relationship in which the parts can
exist independently of the whole.
6. Composition
A special type of aggregation where parts are destroyed when the whole is destroyed.
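The difference between aggregation and composition can be seen directly in code. A hedged Python sketch (Car, Engine, and Wheel are illustrative names):

class Engine: pass
class Wheel: pass

class Car:
    def __init__(self, wheels):
        self.engine = Engine()   # composition: the Car creates and owns its Engine,
                                 # so the Engine is destroyed with the Car
        self.wheels = wheels     # aggregation: the wheels are created outside and
                                 # can outlive the Car

wheels = [Wheel() for _ in range(4)]
car = Car(wheels)
del car                          # the Engine goes with it; the wheels remain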
Modeling technique for class diagrams:
Identify the function or behavior of the part of a system you would like to model.
For each function or mechanism identify the classes, interfaces, collaborations and
relationships between them.
Use scenarios (sequence of activities) to walk through these things. You may find new
things or find that existing things are semantically wrong.
Populate the things found in the above steps. For example, take a class and fill its
responsibilities. Now, convert these responsibilities into attributes and operations.
To forward engineer a class diagram:
Identify the rules for mapping the models to the implementation language of your
choice.
Depending on the semantics of the language, you may want to restrict the information
in your UML models. For example, UML supports multiple inheritance. But some
programming languages might not allow this.
Use tagged values to specify the target language.
Use tools to convert your models to code.
Object Diagrams:
Object diagrams are derived from class diagrams so object diagrams are dependent
upon class diagrams.
Object diagrams represent an instance of a class diagram. The basic concepts are
similar for class diagrams and object diagrams. Object diagrams also represent the
static view of a system but this static view is a snapshot of the system at a particular
moment.
Object diagrams are used to render a set of objects and their relationships as an
instance.
The classifiers in the class, deployment, component and use-case diagrams may be
instantiated to build object diagrams. Object diagrams in UML provide a picture of
the instances in a system and the connections between them.
Object diagrams are simple to create: they're made from objects represented by
rectangles linked together with lines. Take a look at the major elements of an object
diagram.
Objects: Objects are instances of a class. For example, if "car" is a class, a 2009 Nissan
Altima is an object of that class.
Class Titles: Class titles are the specific attributes of a given class. In a family tree object
diagram, class titles include the name, gender, and age of the family members. You
can list class titles as items on the object or even in the object's properties (such as color).
Class Attributes: Class attributes are represented by a rectangle with two tabs, indicating
a software element.
Links: Links are the lines that connect two shapes of an object diagram to each other. For
example, a corporate object diagram might show how departments are connected in the
traditional organizational chart style.
Modeling technique for object diagrams:
First, identify the function or behavior or part of a system you want to model as a
collection of classes, interfaces and other things.
For each function or mechanism identify the classes and interfaces that collaborate
and also identify the relationships between them.
Consider a scenario (context) that walks through this mechanism and freeze at a
moment in time and identify the participating objects.
Represent the state of objects by listing out the attributes and their values.
Represent the links between objects which are instances of associations.
Forward engineering an object diagram is theoretically possible but of limited practical value:
since objects are created and destroyed dynamically at run time, they cannot be represented
statically.
Specifications: Behind each graphical notation, UML provides a textual specification that
states the syntax and semantics of the corresponding building block.
Adornments: Most UML elements have a basic notation to which adornments (extra textual
or graphical detail) may be added.
Common Divisions: UML elements are commonly divided in two ways: the division of
classes and objects, and the division of interface and implementation.
Extensibility Mechanisms
a) Stereotypes
Stereotypes are used to create new building blocks from existing blocks
New building blocks are domain-specific
Stereotypes are used to extend the vocabulary of a system
Graphically represented as a name enclosed by guillemets (« »)
b) Tagged Values
Tagged values are used to add to the information of the element (not of its instances)
Stereotypes help to create new building blocks, whereas tagged values help to create
new attributes
These are commonly used to specify information relevant to code generation,
configuration management and so on.
c) Constraints
Constraints extend the semantics of UML building blocks by adding new rules or
modifying existing ones. Graphically, a constraint is shown as a string enclosed in
braces ({ }).
UML behavioral diagrams visualize, specify, construct, and document the dynamic
aspects of a system.
The behavioral diagrams are categorized as follows: use case diagrams, interaction
diagrams, state–chart diagrams, and activity diagrams
Interaction diagram
Activity diagram
State-chart diagram
Use-case diagram
Interaction diagram:
Interaction diagrams model the dynamic behavior of the system.
Interaction diagrams depict interactions of objects and their relationships. They also
include the messages that are exchanged between objects.
1. Sequence Diagrams
2. Collaboration Diagram
Sequence Diagrams:
Sequence diagrams are interaction diagrams
The sequence diagram represents the flow of messages in the system and is also
termed as an event diagram.
It shows communication between any two lifelines as a time-ordered sequence of events,
in which the lifelines take part at run time.
In UML, a lifeline is represented by a vertical dashed line descending from the participant,
whereas the message flow is represented by horizontal arrows between lifelines. The diagram
incorporates iterations as well as branching.
Purpose of a Sequence Diagram:
1. To model high-level interaction among active objects within a system.
2. To model interaction among objects inside a collaboration realizing a use case.
3. It either models generic interactions or some certain instances of interaction.
Sequence Diagram Notations:
Lifeline
An individual participant in the sequence diagram is represented by a lifeline. It is positioned
at the top of the diagram.
Actor
A role played by an entity that interacts with the subject is called an actor. It is outside the
scope of the system. It represents a role, which may involve human users or external hardware
or subjects. An actor may or may not represent a physical entity; it purely depicts the role
of an entity. An actor can play several distinct roles, and a single role can be played by
several actors.
Activation
It is represented by a thin rectangle on the lifeline. It describes the time period in which an
operation is performed by an element; the top and the bottom of the rectangle are
associated with the initiation and completion times, respectively.
Messages
The messages depict the interaction between the objects and are represented by arrows. They
are in the sequential order on the lifeline. The core of the sequence diagram is formed by
messages and lifelines.
Call Message: It defines a particular communication between the lifelines of an
interaction, which represents that the target lifeline has invoked an operation.
Note
A note is the capability of attaching several remarks to the element. It basically carries useful
information for the modelers.
Collaboration Diagram:
The collaboration diagram is used to show the relationships between the objects in a
system.
Both the sequence and the collaboration diagrams represent the same information but
differently.
Instead of showing the flow of messages, it depicts the architecture of the object
residing in the system as it is based on object-oriented programming.
An object consists of several features. Multiple objects present in the system are
connected to each other.
Activity diagrams:
It is also termed an object-oriented flowchart. It encompasses activities composed
of a set of actions or operations, which are applied to model the behavior of the system.
The activity diagram is used to demonstrate the flow of control within the system
rather than the implementation. It models the concurrent and sequential activities.
The activity diagram helps in envisioning the workflow from one activity to another.
It puts emphasis on the conditions of flow and the order in which flows occur. The flow
can be sequential, branched, or concurrent, and to deal with such kinds of flows, the
activity diagram provides constructs such as the fork and the join.
Fork Nodes: A fork node splits a single incoming flow into multiple concurrent outgoing
flows.
Join Nodes: Join nodes are the opposite of fork nodes. A logical AND operation is performed
on all of the inward edges, as a join synchronizes the flow of inputs into one single output
(outward) edge.
Final State: It is the stage where all the control flows and object flows end.
Decision Box: It makes sure that the control flow or object flow will follow only one path
out of the possible paths in the system.
Deployment diagrams:
It portrays the static deployment view of a system. It involves the nodes and their
relationships.
The deployment diagram visualizes the physical hardware on which the software will be
deployed.
The main purpose of the deployment diagram is to represent how the software is installed on the
hardware components. It depicts how the software interacts with the hardware to perform
its execution.
Symbol and notation of Deployment diagram:
Artifact: A product developed by the software, symbolized by a rectangle with the name and
the word «artifact» enclosed by guillemets (double angle brackets).
Interface: A circle that indicates a contractual relationship. Those objects that realize the
interface must complete some sort of obligation.
Node as container: A node that contains another node inside of it—such as in the example
below, where the nodes contain components.
Association: A line that indicates a message or other type of communication between nodes.
Dependency: A dashed line that ends in an arrow, which indicates that one node or
component is dependent on another.
Stereotype: A device contained within the node, presented at the top of the node, with the
name enclosed by guillemets (double angle brackets).
The deployment diagram portrays the deployment view of the system. It helps in
visualizing the topological view of a system. It incorporates nodes, which are physical
hardware. The nodes are used to execute the artifacts. The instances of artifacts can be
deployed on the instances of nodes.
An efficient deployment diagram controls parameters such as:
Performance
Scalability
Maintainability
Portability
A deployment diagram consists of:
Nodes
Relationships among nodes
Examples of nodes include:
Monitor
Modem
Caching server
Server
Component diagrams:
Component diagrams are used to visualize the organization and relationships among
components in a system. These diagrams are also used to make executable systems.
A component diagram is used to break down a large object-oriented system into
smaller components, so as to make them more manageable. It models the physical
view of a system, such as the executables, files, and libraries that reside within a node.
Notation of a Component Diagram
Component
A component is an entity required to execute a stereotyped function. A component provides
and consumes behavior through interfaces, as well as through other components.
Node
Represents hardware or software objects, which are of a higher level than components.
Port symbol :Specifies a separate interaction point between the component and the
environment. Ports are symbolized with a small square
Dependency symbol :Shows that one part of your system depends on another. Dependencies
are represented by dashed lines linking one component (or element) to another.
Provided interfaces: A straight line from the component box with an attached circle. These
symbols represent the interfaces where a component produces information used by the
required interface of another component
Required interfaces: A straight line from the component box with an attached half circle
(also represented as a dashed arrow with an open arrowhead). These symbols represent the
interfaces where a component requires information in order to perform its proper function.
Dynamic testing or validation is the process of testing the actual product through the
tests done on the software application. Various parameters, including CPU and
memory usage, response time, and the overall performance of the software, are tested
and reviewed. The dynamic behavior of the code is examined through this testing
technique.
Methods: Black Box Testing, White Box Testing, and non-functional testing.
Static testing or verification is the testing method of checking files and documents to
ensure the development of the right product according to the specified design, test cases,
test plans, etc.
Testing any software or an application according to the client's needs without using
any automation tool is known as manual testing.
There are three types of testing approaches:
1. White Box Testing
2. Black Box Testing
3. Grey Box Testing
White Box Testing is a testing technique in which software’s internal structure, design, and
coding are tested to verify input-output flow and improve design, usability, and security. In
white box testing, code is visible to testers, so it is also called Clear box testing, Open box
testing, Transparent box testing, Code-based testing, and Glass box testing(Structural testing).
Following are important White Box Testing Techniques (a small coverage example follows the list):
Statement Coverage
Decision Coverage
Branch Coverage
Condition Coverage
Multiple Condition Coverage
Finite State Machine Coverage
Path Coverage
Control flow testing
Data flow testing
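As a hedged illustration of the first few coverage criteria above (the classify function is hypothetical):

def classify(x):
    label = "negative"
    if x >= 0:               # a decision with two branches
        label = "non-negative"
    return label

# Statement coverage: classify(5) alone executes every statement.
assert classify(5) == "non-negative"
# Branch/decision coverage additionally requires exercising the false branch.
assert classify(-1) == "negative"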
Black Box Testing:
Black Box Testing is a software testing method in which the functionalities of software
applications are tested without having knowledge of internal code structure, implementation
details and internal paths. Black Box Testing mainly focuses on input and output of software
applications and it is entirely based on software requirements and specifications. It is also
known as Behavioral Testing.
Following are important Black Box Testing Techniques (a short example follows the list):
Equivalence Class Testing
Boundary Value Testing
Decision Table Testing
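A short Python sketch of equivalence class and boundary value testing (the 18-60 age rule is an assumed example, not from the notes):

def accept_age(age):
    # valid equivalence class: 18..60 inclusive; everything else is invalid
    return 18 <= age <= 60

# Equivalence class testing: one representative value per class.
assert accept_age(30) is True     # valid class
assert accept_age(70) is False    # invalid class
# Boundary value testing: probe the edges of the valid range.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert accept_age(age) is expected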
Black-box testing is again divided into two different types of testing:
1. Functional Testing
2. Non-Functional Testing.
Functional Testing:
Functional testing is also known as component testing. This black-box testing type is related
to the functional requirements of a system; it is done by software testers.
Functional testing has three types of testing:
Unit testing
Integration testing
System testing
Unit Testing: Unit testing is the first level of functional testing ,where individual units or
components of a software are tested. The purpose is to validate that each unit of the software
code performs as expected. Unit Testing is done during the development (coding phase) of an
application by the developers. Unit Tests isolate a section of code and verify its correctness.
A unit may be an individual function, method, procedure, module, or object.
Integration Testing: Once unit testing has been completed successfully, we move on to
integration testing. It is the second level of functional testing, where we test the data flow
between dependent modules or the interface between two features.
Integration testing is also further divided into the following parts:
i) Incremental Integration Testing: Suppose we take two modules and analyze the
data flow between them to see whether they work correctly together. Incrementally
adding modules and testing the data flow between them is known as
incremental integration testing.
Incremental integration testing can further classify into two parts:
1. Top-down Incremental Integration Testing
In this approach, we will add the modules step by step or incrementally and
test the data flow between them. We have to ensure that the modules we are
adding are the children of the earlier ones.
2. Bottom-up Incremental Integration Testing
In the bottom-up approach, we add the modules incrementally and check
the data flow between them, ensuring that each module we add is the parent
of the earlier ones.
ii) Non-Incremental Integration Testing / Big Bang Method: Whenever the data
flow is complex and it is very difficult to classify modules as parents and children,
we use the non-incremental integration approach, also known as the Big Bang
method.
System Testing: Whenever we are done with the unit and integration testing, we can proceed
with the system testing.
In system testing, the test environment is parallel to the production environment. It is also
known as end-to-end testing.
Non-functional Testing:
It provides detailed information on software product performance and the technologies used.
Non-functional testing helps us minimize production risk and the related costs of the
software.
Non-functional testing is categorized into different types of testing:
o Performance Testing
o Usability Testing
o Compatibility Testing
Performance Testing: In performance testing, the test engineer tests the working of an
application by applying some load. The test engineer focuses on aspects such
as response time, load, scalability, and stability of the software or application.
Load Testing: While executing performance testing, we apply some load on the
particular application to check the application's performance; this is known as load
testing.
Stress Testing: It is used to analyze the user-friendliness and robustness of the
software beyond its common functional limits.
Scalability Testing: Analyzing the application's performance by increasing or
reducing the load to particular levels is known as scalability testing.
Stability Testing: It checks that the system's defects can be found rapidly even in a
stressful situation.
Usability Testing: In usability testing, we will analyze the user-friendliness of an application
and detect the bugs in the software's end-user interface. (that the software is simple to use for
consumers)
Compatibility Testing: In compatibility testing, we check the functionality of an
application in specific hardware and software environments. Only once the application is
functionally stable do we go for compatibility testing.
Here, software means we can test the application on different operating systems and
browsers, and hardware means we can test the application on devices of different sizes.
Grey-box Testing:
It is a combination of black-box and white-box testing. Grey box testing includes access
to internal coding for designing test cases. Grey box testing is performed by a person who
knows coding as well as testing. In other words, if a single person performs
both white-box and black-box testing, it is considered grey box testing.
3. QTP was renamed UFT (Unified Functional Testing); this tool is primarily used for
functional, regression, and service testing.
A Test Strategy is a plan for defining an approach to the Software Testing Life Cycle
(STLC).
Test Strategy Document is a well-described document in software testing which clearly
defines the exact software testing approach and testing objectives of the software application.
The test strategy document is an important document for QA teams; it is derived from actual
business requirements and guides the whole team about the software testing approach and
objectives for each activity in the software testing process.
Static Testing Strategy:
The early-stage testing strategy is static testing: it is performed without actually
running the product under development. Basically, such desk-checking is required to detect bugs and
issues that are present in the code itself. Such a check-up is important at the pre-deployment
stage as it helps avoid problems caused by errors in the code and software structure deficits.
A Test Case is a defined format for software testing required to check if a particular
application (or) software (or) module is working or not. It consists of Id, conditions, steps,
input, expected output or result, status and remarks.
Module Name: Subject or title that defines the functionality of the test.
Test Case Id: A unique identifier assigned to every single condition in a test case.
Tester Name: The name of the person who would be carrying out the test.
Test scenario: The test scenario provides a brief description to the tester, as in providing a
small overview to know about what needs to be performed and the small features, and
components of the test.
Test Case Description: The condition required to be checked for a given software, e.g.,
check whether number-only validation works for an age input box.
Test Steps: Steps to be performed for the checking of the condition.
Prerequisite: The conditions required to be fulfilled before the start of the test process.
Test Priority: As the name suggests gives the priority to the test cases as in which had to be
performed first, or are more important and which could be performed later.
Test Data: The inputs to be taken while checking for the conditions.
Test Expected Result: The output which should be expected at the end of the test.
Test parameters: Parameters assigned to a particular test case.
Actual Result: The output that is displayed at the end.
Environment Information: The environment in which the test is being performed, such as
operating system, security information, the software name, software version, etc.
Status: The status of tests such as pass, fail, NA, etc.
Comments: Remarks on the test regarding the test for the betterment of the software.
Example: Below is an example for preparing various test cases for a login page with a
username and password.
Here we are only checking if the username validates at least for the length of eight characters.
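A hedged sketch of how this username-length check could be automated with Python's unittest module (the validate_username helper is hypothetical):

import unittest

def validate_username(username):
    # hypothetical implementation of the rule under test:
    # a username must be at least eight characters long
    return len(username) >= 8

class LoginPageTests(unittest.TestCase):
    def test_eight_character_username_is_accepted(self):
        self.assertTrue(validate_username("abcdefgh"))

    def test_short_username_is_rejected(self):
        self.assertFalse(validate_username("abc"))

if __name__ == "__main__":
    unittest.main()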
Berard proposes the following approach:
Each test case should be uniquely identified and explicitly associated with the class to
be tested.
A list of testing steps should be developed for each test and should contain:
A list of specified states for the object that is to be tested.
A list of messages and operations that will be exercised as a consequence of
the test.
A list of exceptions that may occur as the object is tested.
A list of external conditions (i.e., changes in the environment external to
the software that must exist in order to properly conduct the test).
Test-driven development (TDD)
Test Driven Development (TDD) is a software development methodology that emphasizes
writing tests before writing the actual code. It ensures that code is always tested and functional,
reducing bugs and improving code quality. In TDD, developers write small, focused tests that
define the desired functionality, then write the minimum code necessary to pass these tests, and
finally, refactor the code to improve structure and performance.
This cyclic process helps in creating reliable, maintainable, and efficient software. By following
TDD, teams can enhance code reliability, accelerate development cycles, and maintain high
standards of software quality
What is Test Driven Development (TDD)?
Test-driven development (TDD) is a method of coding in which you first write a test and watch
it fail, then write the code to make the test pass, and finally clean up the code. This process is
repeated for each new feature or change. Unlike methods in which you write either all the code
or all the tests first, TDD interleaves writing tests and code.
Test-Driven Development (TDD) is a method in software development where the focus is on
writing tests before writing the actual code for a feature. This approach uses short development
cycles that repeat to ensure quality and correctness.
Process of Test Driven Development (TDD)
It is the process in which test cases are written before the code that validates those cases. It
depends on the repetition of a concise development cycle. Test-driven development is a technique
in which automated unit tests are used to drive the design and force decoupling of dependencies.
The following sequence of steps is generally followed:
Run all the test cases and make sure that the new test case fails.
Red – Create a test case and make it fail.
Green – Make the test case pass by any means.
Refactor – Change the code to remove duplication/redundancy.
Repeat the above-mentioned steps again and again
Write a complete test case describing the function. To write the test cases, the developer must
understand the features and requirements using user stories and use cases. A worked sketch
follows this list.
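A worked red-green-refactor sketch in Python (the fizzbuzz example is an assumption, not from the notes):

import unittest

# RED: the tests are written first; they fail until fizzbuzz() exists.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_plain_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

# GREEN: the minimum code that makes both tests pass.
def fizzbuzz(n):
    return "Fizz" if n % 3 == 0 else str(n)

# REFACTOR: with the tests green, the implementation can now be restructured
# (e.g. to add a Buzz rule) without fear of regression.

if __name__ == "__main__":
    unittest.main()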
Advantages of Test Driven Development (TDD)
Unit test provides constant feedback about the functions.
Quality of design increases which further helps in proper maintenance.
Test-driven development acts as a safety net against bugs.
TDD ensures that your application actually meets the requirements defined for it.
TDD has a very short development lifecycle.
Disadvantages of Test Driven Development (TDD)
Increased Code Volume: Using TDD means writing extra code for test cases, which can
make the overall codebase larger.
False Security from Tests: Passing tests can give developers a false sense that the code is
safe.
Maintenance Overheads: Keeping a large number of tests up to date is difficult and
time-consuming.
Time-Consuming Test Processes: Writing and maintaining the tests can take a long time.
Testing Environment Set-Up: TDD requires a proper testing environment, which takes
effort to set up and maintain.
TDD Vs. Traditional Testing
Approach: In Test-Driven Development (TDD), the tests are written first and then the code
is written. Traditional testing works the other way around: the code is written first and then
tested.
Testing Scope: TDD checks small parts of the code one by one. Traditional testing checks the
whole system, including how different parts work together.
Iterative: TDD works in small steps: write a small test and its code, then improve the code
until it passes all the required tests. Traditional testing tests the code once and then fixes any
problems that are found.
Debugging: TDD tries to find mistakes early in the coding process, which makes them easier
to fix. Traditional testing finds mistakes later, when they are harder to fix.
Documentation: TDD focuses on documentation through the tests and their results. Traditional
testing may have more explicit documentation about how the testing was done and how the
system was tested.
UNIT-IV
Code Refactoring:
Refactoring reduces technical cost and makes the code more efficient and maintainable. If you
don't pay attention to the code refactoring process early, you will pay for errors in your code
later. So don't ignore cleaning up the code.
In a software development process, different developers have different code-writing styles. They
make changes, maintain the code, and extend the code, and most of the time they leave the code
without continuous refactoring. Un-refactored code tends to rot: a lot of confusion and clutter
in the code, such as duplicate code, unhealthy dependencies between classes or packages, bad
allocation of class responsibilities, too many responsibilities per method or class, etc. To avoid all
these issues, continuous refactoring is important.
Most Common Code Refactoring Techniques
There are many approaches and techniques to refactor the code. Let’s discuss some popular ones…
1. Red-Green Refactoring
Red-Green is the most popular and widely used code refactoring technique in the Agile software
development process. This technique follows the "test-first" approach to design and
implementation, which lays the foundation for all forms of refactoring. Developers fold
refactoring into the test-driven development cycle, and it is performed in three distinct
steps.
RED: The first step starts with writing a failing “red” test. You stop and check what needs to be developed.
GREEN: In the second step, you write just enough code to make the test pass (“green”).
REFACTOR: In the third and final step, you focus on improving and enhancing your code while keeping your tests green.
So basically this technique has two distinct parts: The first part involves writing code that adds a
new function to your system and the second part is all about refactoring the code that does this
function. Keep in mind that you’re not supposed to do both at the same time during the workflow.
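To make this concrete, here is a minimal sketch of one red-green cycle in Java with JUnit 5. The Calculator class and its add method are hypothetical names chosen for the example, not part of any prescribed workflow.

    // Step 1 (RED): write the test first. Run it and watch it fail
    // (or fail to compile) before any production code exists.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Step 2 (GREEN): write just enough production code to make it pass.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // Step 3 (REFACTOR): clean up names or duplication while the test stays green.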
2. Refactoring By Abstraction
This technique is mostly used by developers when there is a need to do a large amount of refactoring. Mainly we use this technique to reduce redundancy (duplication) in our code. It involves class inheritance, hierarchy, creating new classes and interfaces, extraction, replacing inheritance with delegation, and vice versa.
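As a hedged sketch of one of the abstraction refactorings named above, replacing inheritance with delegation, consider this before/after in Java (class names are illustrative):

    // Before: Stack inherits from ArrayList, exposing the entire list API
    // (get, set, add at arbitrary positions) and breaking encapsulation.
    //   class Stack<E> extends java.util.ArrayList<E> { ... }

    // After: Stack delegates to a private list and exposes only stack operations.
    import java.util.ArrayList;
    import java.util.List;

    class Stack<E> {
        private final List<E> items = new ArrayList<>();

        void push(E item) {
            items.add(item);                      // delegate storage to the wrapped list
        }

        E pop() {
            return items.remove(items.size() - 1);
        }

        boolean isEmpty() {
            return items.isEmpty();
        }
    }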
3. Composing Method
During the development phase of an application, we often write long methods in our program. These long methods make the code extremely hard to understand and hard to change. The composing method is mostly used in these cases.
In this approach, we streamline methods to reduce duplication in our code. Some examples are: Extract Method, Extract Variable, Inline Temp, Replace Temp with Query, Inline Method, Split Temporary Variable, Remove Assignments to Parameters, etc.
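For instance, a minimal sketch of Extract Method in Java, with all names illustrative: a coherent fragment of a long method is pulled out into its own well-named method.

    class Invoice {
        // Before: one method mixes banner printing with the details.
        void printOwingBefore(double amount) {
            System.out.println("**** Customer Owes ****");
            System.out.println("amount: " + amount);
        }

        // After Extract Method: each concern lives in a short, named method.
        void printOwing(double amount) {
            printBanner();
            printDetails(amount);
        }

        private void printBanner() {
            System.out.println("**** Customer Owes ****");
        }

        private void printDetails(double amount) {
            System.out.println("amount: " + amount);
        }
    }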
4. Moving Features Between Objects
In this technique, we create new classes and move functionality safely between old and new classes. We hide the implementation details from public access.
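A minimal sketch of this in Java, assuming hypothetical Customer and Account classes: overdraft logic that only reads Account data is moved into Account, and the raw balance stays hidden from public access.

    class Account {
        private double balance;                   // implementation detail, hidden

        Account(double balance) {
            this.balance = balance;
        }

        // Moved here from Customer: the logic lives with the data it uses.
        boolean isOverdrawn() {
            return balance < 0;
        }
    }

    class Customer {
        private final Account account;

        Customer(Account account) {
            this.account = account;
        }

        // Customer now delegates instead of reaching into Account's data.
        boolean hasOverdrawnAccount() {
            return account.isOverdrawn();
        }
    }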
5. Preparatory Refactoring
This approach is best used when you notice the need for refactoring while adding new features to an application. So basically it’s a part of a software update rather than a separate refactoring process. You save yourself from future technical debt if you notice that the code needs to be updated during the earlier phases of feature development.
6. User Interface Refactoring
You can make simple changes in the UI and refactor the code. For example: align entry fields, apply a common font, reword labels in the active voice, indicate the expected format, apply a common button size, increase color contrast, etc.
Local Version Control Systems: This is one of the simplest forms and uses a database that keeps all the changes to files under revision control. RCS is one of the most common VCS tools of this kind. It keeps patch sets (differences between files) in a special format on disk. By adding up all the patches it can then re-create what any file looked like at any point in time.
Centralized version control systems contain just one repository globally, and every user needs to commit for their changes to be reflected in the repository. It is possible for others to see your changes by updating.
Two things are required to make your changes visible to others which are:
You commit
They update
The benefit of CVCS (Centralized Version Control Systems) is that it enables collaboration amongst developers and provides insight, to a certain extent, into what everyone else is doing on the project. It allows administrators fine-grained control over who can do what.
It has some downsides as well, which led to the development of DVCS. The most obvious is the single point of failure that the centralized repository represents: if it goes down, collaboration and saving versioned changes are not possible during that period. What if the hard disk of the central database becomes corrupted, and proper backups haven’t been kept? You lose absolutely everything.
Distributed Version Control Systems: Distributed version control systems contain multiple
repositories. Each user has their own repository and working copy. Just committing your changes
will not give others access to your changes. This is because commit will reflect those changes in
your local repository and you need to push them in order to make them visible on the central
repository. Similarly, when you update, you do not get others’ changes unless you have first pulled those changes into your repository.
To make your changes visible to others, 4 things are required:
You commit
You push
They pull
They update
The most popular distributed version control systems are Git and Mercurial. They help us overcome the problem of a single point of failure.
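To make the commit/push/pull flow concrete, here is a hedged sketch that drives a Git repository from Java using the JGit library; the repository path and commit message are placeholders, and in practice most developers run the equivalent git commands directly.

    import java.io.File;
    import org.eclipse.jgit.api.Git;

    public class DvcsFlow {
        public static void main(String[] args) throws Exception {
            // Open an existing local repository (the path is a placeholder).
            try (Git git = Git.open(new File("/path/to/repo"))) {
                git.add().addFilepattern(".").call();             // stage local edits
                git.commit().setMessage("Fix parser bug").call(); // 1. you commit (local only)
                git.push().call();                                // 2. you push (now on the central repo)
                git.pull().call();                                // 3-4. they pull and update their copy
                                                                  //      (shown on one repo for brevity)
            }
        }
    }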
Purpose of Version Control:
Multiple people can work simultaneously on a single project. Everyone works on and edits their
own copy of the files and it is up to them when they wish to share the changes made by them
with the rest of the team.
It also enables one person to use multiple computers to work on a project, so it is valuable even
if you are working by yourself.
It integrates the work that is done simultaneously by different members of the team. In some rare cases, when conflicting edits are made by two people to the same line of a file, the version control system requests human assistance in deciding what should be done.
Version control provides access to the historical versions of a project. This is insurance against computer crashes or data loss. If any mistake is made, you can easily roll back to a previous version. It is also possible to undo specific edits without losing the work done in the meantime. It can easily be known when, why, and by whom any part of a file was edited.
Code inspection is a type of static testing that aims to review the software code and examine it for errors. It helps reduce the rate at which defects multiply and avoids late-stage error detection by strengthening the initial error detection process. Code inspection comes under the review process of any application.
Software Evolution
Software Evolution is a term that refers to the process of developing software initially, and then
timely updating it for various reasons, i.e., to add new features or to remove obsolete functionalities,
etc. This article focuses on discussing Software Evolution in detail.
The software evolution process includes fundamental activities of change analysis, release planning,
system implementation, and releasing a system to customers.
1. The cost and impact of these changes are assessed to see how much the system is affected by the change and how much it might cost to implement the change.
2. If the proposed changes are accepted, a new release of the software system is planned.
3. During release planning, all the proposed changes (fault repair, adaptation, and new functionality)
are considered.
4. A decision is then made on which changes to implement in the next version of the system.
5. The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.
Necessity of Software Evolution
Re-engineering
Re-engineering, also known as software re-engineering, is the process of analyzing, designing, and
modifying existing software systems to improve their quality, performance, and maintainability.
1. This can include updating the software to work with new hardware or software platforms, adding
new features, or improving the software’s overall design and architecture.
2. Software re-engineering, also known as software restructuring or software renovation, refers to
the process of improving or upgrading existing software systems to improve their quality,
maintainability, or functionality.
3. It involves reusing the existing software artifacts, such as code, design, and documentation, and
transforming them to meet new or updated requirements.
Objective of Re-engineering
The primary goal of software re-engineering is to improve the quality and maintainability of the
software system while minimizing the risks and costs associated with the redevelopment of the
system from scratch. Software re-engineering can be initiated for various reasons, such as:
1. To provide a cost-effective option for system evolution.
2. To support the activities involved in the software maintenance process.
3. To distinguish between software and data re-engineering and to address the problems of data re-engineering.
Overall, software re-engineering can be a cost-effective way to improve the quality and functionality
of existing software systems, while minimizing the risks and costs associated with starting from
scratch.
Re-engineering can be a costly process, and there are several factors that can affect the cost of
re-engineering a software system:
1. Size and complexity of the software: The larger and more complex the software system, the
more time and resources will be required to analyze, design, and modify it.
2. Number of features to be added or modified: The more features that need to be added or
modified, the more time and resources will be required.
3. Tools and technologies used: The cost of re-engineering can be affected by the tools and
technologies used, such as the cost of software development tools and the cost of hardware and
infrastructure.
4. Availability of documentation: If the documentation of the existing system is not available or is
not accurate, then it will take more time and resources to understand the system.
5. Team size and skill level: The size and skill level of the development team can also affect the
cost of re-engineering. A larger and more experienced team may be able to complete the project
faster and with fewer resources.
6. Location and rate of the team: The location and rate of the development team can also affect
the cost of re-engineering. Hiring a team in a lower-cost location or with lower rates can help to
reduce the cost of re-engineering.
7. Testing and quality assurance: Testing and quality assurance are important aspects of re-
engineering, and they can add significant costs to the project.
8. Post-deployment maintenance: The cost of post-deployment maintenance such as bug fixing,
security updates, and feature additions can also play a role in the cost of re-engineering.
In summary, the cost of re-engineering a software system can vary depending on a variety of factors,
including the size and complexity of the software, the number of features to be added or modified,
the tools and technologies used, and the availability of documentation and the skill level of the
development team. It’s important to carefully consider these factors when estimating the cost of re-
engineering a software system.
Advantages of Re-engineering
1. Reduced Risk: As the software already exists, the risk is lower compared to new software development. Development problems, staffing problems, and specification problems are among the many problems that may arise in new software development.
2. Reduced Cost: The cost of re-engineering is less than the costs of developing new software.
3. Revelation of Business Rules: As a system is re-engineered, business rules that are embedded in the system are rediscovered.
4. Better use of Existing Staff: Existing staff expertise can be maintained and extended to accommodate new skills during re-engineering.
5. Improved efficiency: By analyzing and redesigning processes, re-engineering can lead to
significant improvements in productivity, speed, and cost-effectiveness.
6. Increased flexibility: Re-engineering can make systems more adaptable to changing business
needs and market conditions.
7. Better customer service: By redesigning processes to focus on customer needs, re-engineering
can lead to improved customer satisfaction and loyalty.
8. Increased competitiveness: Re-engineering can help organizations become more competitive by
improving efficiency, flexibility, and customer service.
9. Improved quality: Re-engineering can lead to better quality products and services by identifying
and eliminating defects and inefficiencies in processes.
10. Increased innovation: Re-engineering can lead to new and innovative ways of doing things,
helping organizations to stay ahead of their competitors.
11. Improved compliance: Re-engineering can help organizations to comply with industry standards
and regulations by identifying and addressing areas of non-compliance.
Disadvantages of Re-engineering
Major architectural changes or radical reorganization of the system’s data management have to be done manually. A re-engineered system is not likely to be as maintainable as a new system developed using modern software engineering methods.
1. High costs: Re-engineering can be a costly process, requiring significant investments in time,
resources, and technology.
2. Disruption to business operations: Re-engineering can disrupt normal business operations and
cause inconvenience to customers, employees and other stakeholders.
3. Resistance to change: Re-engineering can encounter resistance from employees who may be
resistant to change and uncomfortable with new processes and technologies.
4. Risk of failure: Re-engineering projects can fail if they are not planned and executed properly,
resulting in wasted resources and lost opportunities.
5. Lack of employee involvement: Re-engineering projects that are not properly communicated and do not involve employees may lead to a lack of employee engagement and ownership, resulting in failure of the project.
6. Difficulty in measuring success: Re-engineering can be difficult to measure in terms of success,
making it difficult to justify the cost and effort involved.
7. Difficulty in maintaining continuity: Re-engineering can lead to significant changes in processes and systems, making it difficult to maintain continuity and consistency in the organization.
UNIT-V
Model-Driven Engineering (MDE) is the practice of raising models to first-class artefacts of the
software engineering process, using such models to analyse, simulate, and reason about properties of
the system under development, and eventually, often auto-generate (a part of) its implementation.
MDE brings and adapts well-understood and long-established principles and practices of trustworthy
systems engineering to software engineering; it is unthinkable to start constructing e.g. a bridge or an
aircraft without designing and analysing several models of it first. It’s used extensively in
organisations that produce business or safety-critical software, including in the aerospace, automotive
and robotics industries, where defects can have catastrophic effects (e.g. loss of life), or can be very
expensive to remedy, for example requiring large-scale product recalls. MDE is also increasingly
used for non-critical systems due to the productivity and consistency benefits it delivers, largely
through automated code generation.
Essentially, the use of domain-specific models enables software engineers to capture essential
information about the system under development at precisely the level of detail that is appropriate for
their domain and technical stack. These models are then used to automate labor-intensive and tedious
work (writing setters and getters, JSON marshalling and un-marshalling code etc. is nobody’s idea of
fun) which lets engineers channel their creativity towards the novel and intellectually demanding
parts of the system under development.
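As a toy illustration of that automation, the sketch below generates getter/setter boilerplate from a tiny hand-written "model" of an entity. Real MDE tools work from proper metamodels and templates; every name here is an assumption made for the example.

    import java.util.Map;

    public class TinyGenerator {
        public static void main(String[] args) {
            // The "model": an entity name plus its fields (field name -> type).
            String entity = "Customer";
            Map<String, String> fields = Map.of("name", "String", "age", "int");

            // The "transformation": emit a class with private fields and accessors.
            StringBuilder src = new StringBuilder("class " + entity + " {\n");
            fields.forEach((field, type) -> {
                String cap = Character.toUpperCase(field.charAt(0)) + field.substring(1);
                src.append("    private ").append(type).append(' ').append(field).append(";\n")
                   .append("    ").append(type).append(" get").append(cap)
                   .append("() { return ").append(field).append("; }\n")
                   .append("    void set").append(cap).append('(').append(type)
                   .append(" v) { this.").append(field).append(" = v; }\n");
            });
            src.append("}\n");
            System.out.println(src);   // a real generator would write this to a .java file
        }
    }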
The central artefact in MDE is the model. A model in the computing world is a simplification of a process one wishes to capture or automate. The simplification is such that it leaves out details that can safely be ignored at a given stage of the engineering cycle.
The purpose is to focus on the relevant concepts at hand—much as for example a plaster model of a
car for studying body aerodynamics will not take into account the real materials a car is made of.
In the computing world a model is defined using a given language. Coming back to the car analogy, if an engineer wishes to have a computational model of a car for 3D visualization, a language such as the one defined by a Computer-Aided Design (CAD) tool will be necessary to express a particular car design. In the computing world several such languages, called metamodels, are used to describe families of models of computational artefacts that share the same abstraction concerns. Each metamodel is a language (also called a formalism) that may have many model instantiations, much as many different car designs can be described in a CAD tool.
The missing piece in this set of concepts is model transformations. A model transformation takes as input a model conforming to a given metamodel and produces as output another model, conforming to the same or a different metamodel.
Model-Driven Security
Model-Driven Security (MDS) can be seen as a specialization of MDE for supporting the development
of security-critical applications. MDS makes use of the conceptual approach of MDE as well as its
associated techniques and tools to propose methods for the engineering of security-critical
applications. More specifically, models are the central artifacts in every MDS approach. Besides being
used to describe the system’s business logic, they are used extensively to capture security concerns.
Model-Driven Architecture
Model-Driven Architecture (MDA) is an OMG proposal launched in 2001 to help standardize model definitions and to favor model exchange and compatibility. The MDA consists of the following points:
• It builds on UML, an already standardized, well-accepted notation that is widely used in object-oriented systems. In an effort to harmonize notations and clean up UML’s internal structure, the Meta-Object Facility (MOF) was proposed to cope with the plethora of model definitions and languages;
• It proposes a pyramidal construction of models: artifacts populating level M0 represent the actual system; those at level M1 model the M0 artifacts; artifacts belonging to level M2 are metamodels, allowing the definition of M1 models; and finally, the unique artifact at level M3 is MOF itself, considered to be meta-circularly defined as a model of itself.
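A rough, admittedly loose analogy for this pyramid exists in Java's reflection facilities; the mapping of levels below is an illustrative assumption, not part of the MDA standard.

    public class ModelLevels {
        public static void main(String[] args) {
            Object m0 = "a real running object";   // M0: the actual system
            Class<?> m1 = m0.getClass();           // M1: its model (java.lang.String)
            Class<?> m2 = m1.getClass();           // M2: the metamodel (java.lang.Class)
            Class<?> m3 = m2.getClass();           // M3: Class describing itself
            System.out.println(m2 == m3);          // prints true: meta-circular, like MOF
        }
    }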
Aspect-oriented programming (AOP)
Aspect-oriented programming (AOP) is a coding approach that helps developers write cleaner, more
organized code by separating common tasks, such as logging or error handling, from the main
program logic. In AOP, code is separated into modules or ‘aspects’ that encapsulate related
functionality, making it easier to manage and modify.
In simpler terms, imagine you’re writing a program with considerable repeated code. Let’s say you’re
building a website and must perform security checks on every page. You could write the security
code on every single page; however, that would be inefficient and difficult to manage. Instead, you
could use AOP to create a separate ‘security aspect’ that handles security checks for all the pages in
one central location. This makes the code more organized and easier to modify in the future.
Development of AOP
AOP first emerged in the late 1990s. It was developed as a response to the limitations of Object-
oriented programming (OOP) in dealing with cross-cutting concerns. The concept of cross-cutting
concerns was first introduced in a 1997 paper by Gregor Kiczales, John Lamping, and others, titled
‘Aspect-Oriented Programming.’ They argued that certain concerns, such as logging, error handling,
and security, cut across multiple modules or components of an application and cannot be
encapsulated within a single class or module.
Today, AOP is widely used in various domains, including web and mobile development, gaming, big
data, cloud computing, and IoT.
It has proven to be a useful tool to improve application performance, reliability, and scalability by
separating cross-cutting concerns from business logic.
How Does AOP Work?
AOP is a programming technique that focuses on modularizing cross-cutting concerns, which are
features that cut across different parts of an application. Cross-cutting concerns include things such as
logging, security, performance monitoring, error handling, and more.
Here’s a step-by-step explanation of how AOP works.
1. Identifying concerns
First, identify the different concerns or responsibilities of your program. For example, if you’re
writing a program to process orders, you might identify concerns such as order validation, payment
processing, and order fulfilment.
2. Defining aspects
Once you’ve identified your concerns, you can define aspects for each one. An aspect is a modular
unit of code that encapsulates a specific behavior or responsibility. For example, you might create an
aspect for order validation, another for payment processing, and so on.
3. Defining join points
A join point is a specific point in your program’s execution where an aspect can be applied. For
example, a join point for the order validation aspect might be when a new order is submitted. You
can define join points in your code using annotations or other markers.
4. Defining pointcuts
A pointcut is a set of join points where an aspect should be applied. For example, you might define a pointcut for the order validation aspect that includes all the join points where a new order is submitted. Pointcuts help narrow down the scope of an aspect so that it’s only applied where needed.
5. Defining advice
Advice is the behavior an aspect provides at a join point. Several types of advice exist, including
before, after, and around advice. Before advice is executed before the join point, after advice is
executed after the join point, and around advice wraps the join point and can modify its behavior.
6. Weaving aspects
Weaving is the process of applying aspects to your program’s execution. During weaving, the advice
provided by aspects is injected into the appropriate join points in your code. This allows the behavior
of aspects to be integrated into your program’s execution.
7. Executing program
Once your aspects are woven into your program, you can execute the program as usual. The
behaviour the aspects provide will be triggered at the appropriate join points, and your program will
run as expected.
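Putting steps 2 to 6 together, here is a minimal sketch of an aspect written with the AspectJ-style annotations that Spring AOP supports; the com.example.orders package and the submitOrder method are assumptions made for the example.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.aspectj.lang.annotation.Pointcut;

    @Aspect
    public class OrderValidationAspect {

        // Pointcut: every join point where a submitOrder method executes
        // in the (hypothetical) orders package.
        @Pointcut("execution(* com.example.orders.*.submitOrder(..))")
        public void orderSubmission() {}

        // Before advice: runs just before each matched join point; the weaver
        // injects this call into the program's execution. In a real Spring
        // application this class would also be registered as a bean.
        @Before("orderSubmission()")
        public void validate(JoinPoint jp) {
            System.out.println("Validating order before " + jp.getSignature());
        }
    }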
Top AOP Frameworks
1. Spring AOP
Spring AOP is a lightweight AOP framework that is part of the larger Spring Framework. It
supports several types of advice, including before, after, around, and after-returning advice. It also
supports pointcut expressions, which allow you to specify sets of join points using a simple syntax.
Spring AOP integrates easily with other parts of the Spring Framework, such as Spring MVC and
Spring Boot.
2. AspectJ
AspectJ is a powerful AOP framework that supports many features, including advanced pointcut
expressions, inter-type declarations, and aspect inheritance. It allows you to weave aspects into both
Java and bytecode at compile time or runtime, giving you increased flexibility. AspectJ is also
compatible with many integrated development environments (IDEs) and build tools, including
Eclipse and Maven.
3. JBoss AOP
JBoss AOP is an AOP framework that is part of the JBoss application server. It supports before and after
advice, around advice, and inter-type declarations. It also includes several built-in aspects for
common concerns such as security and transaction management. JBoss AOP weaves aspects into
bytecode at runtime and integrates easily with other parts of the JBoss ecosystem.
4. Guice AOP
Guice AOP is an AOP framework that is part of the Google Guice dependency injection library. It supports
before, after, and around advice in the form of method interceptors. Guice AOP uses method
interception rather than bytecode weaving, which can be simpler and faster in some cases. Integrating
with other Guice features, such as dependency injection and scopes, is easy.
5. PostSharp
PostSharp is an AOP framework for .NET that supports a wide range of features, including before
and after advice, around advice, and aspect inheritance. It also includes several pre-built aspects for
common concerns such as logging and caching. PostSharp weaves aspects into .NET assemblies at
compile time and integrates easily with Visual Studio and other .NET tools.
Technically, AOP frameworks provide developers with a powerful tool to implement cross-cutting
concerns in their applications. Each framework provides different features and has its own strengths.
The choice of framework largely depends on the application’s requirements, the programming
language used, and the development environment.
Applications of AOP
1. Web applications
AOP can be used in web applications to separate concerns such as logging, security, and transaction
management. For example, an AOP logging aspect can capture method execution times and stack
traces, while a security aspect can enforce authentication and authorization policies.
2. Enterprise applications
3. Mobile applications
AOP is used in mobile applications to manage device compatibility, data synchronization, and user
engagement. The device compatibility aspect ensures the application works seamlessly across
different platforms and devices. Meanwhile, a data synchronization aspect can handle data conflicts
and ensure data consistency across multiple devices.
4. Embedded systems
5. IoT
In IoT applications, AOP addresses concerns such as security, fault tolerance, and data processing. For example, an AOP security aspect can enforce authentication and authorization policies to protect against cyber-attacks. Meanwhile, a fault tolerance aspect can handle errors and failures
gracefully to ensure that the system continues to operate even in unpredictable and unreliable
environments. A data processing aspect can handle complex data processing tasks such as
aggregation, filtering, and transformation, making it easier to write efficient and maintainable code
for IoT applications.
As per February 2023 research by IoT Analytics, the IoT market size is estimated to grow at a CAGR
of 19.4% between 2022 and 2027 to reach $483 billion. As the IoT market continues to accelerate,
AOP is expected to play a crucial role in managing IoT applications.
6. Finance sector
In financial applications, AOP can be deployed to address concerns such as transaction management,
auditing, and compliance. For instance, an AOP auditing aspect can capture audit logs to ensure
compliance with regulatory requirements. A compliance aspect can enforce compliance policies and
prevent unauthorized access to sensitive financial data.
7. Healthcare industry
8. Gaming industry
In gaming applications, AOP can be used to address issues such as performance optimization,
debugging, and game logic. For example, an AOP performance optimization aspect can optimize
game rendering and animation, improving the overall gameplay experience for players. An AOP
debugging aspect can capture detailed debugging information to help developers identify and resolve
issues quickly and efficiently.
A game logic aspect can handle game logic tasks such as player movement, scoring, and collision
detection, making writing maintainable and scalable code for complex game mechanics easier.
9. Robotics
In robotics, one can use AOP for cross-cutting concerns such as performance monitoring and error handling. It can be used to intercept method calls and measure the time each call takes to complete (see the sketch below). This helps identify performance bottlenecks and optimize the performance of the robotic system. Additionally, aspect-oriented programming can handle errors gracefully and ensure that the robotic system continues to function properly.
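A hedged sketch of that timing idea using around advice; the pointcut expression and package name are assumptions made for the example.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TimingAspect {

        // Around advice wraps each matched method call in the robot package.
        @Around("execution(* com.example.robot.*.*(..))")
        public Object time(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed();   // run the intercepted method itself
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
            }
        }
    }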
Moreover, AOP can be used in robotic process automation (RPA) to modularize cross-cutting
concerns and apply them to specific parts of an automation process without modifying the
fundamental business logic. This can make the automation process more efficient and easier to
maintain over time.
10. Big data
With the exponential increase in the volume of data generated by devices worldwide, the big data market has grown substantially. According to a recent report by Expert Market Research, the global big data market was valued at $271.30 billion in 2022 and is predicted to reach $624.27 billion by 2028.
With such high growth in the big data market, AOP is expected to play a critical role in logging and
tracing, security, and error handling in data. AOP can intercept method calls and add logging and
tracing functionality to them, allowing developers to trace the flow of data in the system and identify
issues as and when they arise.
1. Service provider: The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use. To advertise services, the provider
can publish them in a registry, together with a service contract that specifies the nature of the
service, how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
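In code, the provider/consumer relationship boils down to a shared contract that the provider implements and the consumer binds to and uses. A minimal in-process sketch in Java, with all names illustrative (a real SOA would bind over a network using the registry's metadata):

    // The service contract that the registry would advertise.
    interface OrderService {
        String placeOrder(String item);
    }

    // The service provider: implements and exposes the contract.
    class OrderServiceProvider implements OrderService {
        public String placeOrder(String item) {
            return "order-accepted:" + item;
        }
    }

    // The service consumer: binds to the contract and uses the service.
    public class ConsumerDemo {
        public static void main(String[] args) {
            OrderService service = new OrderServiceProvider(); // stand-in for a registry lookup
            System.out.println(service.placeOrder("book"));
        }
    }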
Services might aggregate information and data retrieved from other services, or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
Components of SOA: