UNIT:3 Agile Software Design and Programming
What are SOLID Design Principles?
These design principles help manage common software problems and keep the code
modular, flexible, and easy to maintain; teams that apply them can also adapt to
changes quickly. SOLID is an acronym that stands for:
1. Single Responsibility Principle (SRP)
2. Open-Closed Principle (OCP)
3. Liskov Substitution Principle (LSP)
4. Interface Segregation Principle (ISP)
5. Dependency Inversion Principle (DIP)
The SOLID design principles are a subset of the many principles introduced by the American
computer scientist and instructor Robert C. Martin (a.k.a. Uncle Bob) in a 2000 paper.
Following these principles can make a codebase larger, but in the long run their main aim
still holds: helping software developers make changes to their code without causing any
major issues.
1. Single Responsibility Principle (SRP): This principle focuses on assigning exactly one
responsibility to a particular class, method, or individual. It also suggests breaking a task
down into small, manageable pieces: every story should have a single objective, and while
teams should be cross-functional, each team member should have a clear role.
● The single responsibility principle states that a class, module, or function should have
only one reason to change, meaning it should do one thing. For example, a class that
shows the name of an animal should not be the same class that displays the kind of
sound it makes and how it feeds.
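The animal example above can be sketched in Python. This is a minimal illustration, not a prescribed implementation; the class and attribute names (AnimalInfo, AnimalSound) are hypothetical:

```python
# SRP sketch: each class has exactly one reason to change.
# AnimalInfo and AnimalSound are hypothetical names for illustration.

class AnimalInfo:
    """Responsible only for identifying the animal."""

    def __init__(self, name):
        self.name = name

    def display_name(self):
        return f"Animal: {self.name}"


class AnimalSound:
    """Responsible only for the sound the animal makes."""

    def __init__(self, sound):
        self.sound = sound

    def make_sound(self):
        return f"Sound: {self.sound}"
```

If the display format changes, only AnimalInfo is touched; if the sound logic changes, only AnimalSound is. Merging both into one class would give it two reasons to change.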
2. Open-Closed Principle (OCP): This indicates that functionality should be open for
extension but closed for modification, i.e., new functionality should be added without
modifying the existing code.
● The open-closed principle states that classes, modules, and functions should be open
for extension but closed for modification. This principle might seem to contradict itself,
but you can still make sense of it in code. It means you should be able to extend the
functionality of a class, module, or function by adding more code without modifying the
existing code. Let’s understand Open/Closed Principle using an example:
● Imagine you have a class called PaymentProcessor that processes payments for an
online store. Initially, the PaymentProcessor class only supports processing payments
using credit cards. However, you want to extend its functionality to also support
processing payments using PayPal.
● Instead of modifying the existing PaymentProcessor class to add PayPal support, you
can create a new class called PayPalPaymentProcessor that extends the
PaymentProcessor class. This way, the PaymentProcessor class remains closed for
modification but open for extension, adhering to the Open-Closed Principle.
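The payment example above might look like the following in Python. One way to model it is to make PaymentProcessor an abstract base class; the concrete class names and message strings are assumptions for illustration:

```python
from abc import ABC, abstractmethod


class PaymentProcessor(ABC):
    """Abstraction that stays closed for modification."""

    @abstractmethod
    def process(self, amount):
        ...


class CreditCardPaymentProcessor(PaymentProcessor):
    def process(self, amount):
        return f"Charged {amount} by credit card"


# New behavior is added by extension: no existing class is edited.
class PayPalPaymentProcessor(PaymentProcessor):
    def process(self, amount):
        return f"Charged {amount} via PayPal"


def checkout(processor, amount):
    # This function never changes when a new processor type is added.
    return processor.process(amount)
```

Adding, say, a bank-transfer processor later means writing one more subclass; `checkout` and the existing processors remain untouched.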
3. Liskov Substitution Principle (LSP): This principle emphasizes that a derived class should
be substitutable for its base class without any issues.
● The principle was introduced by Barbara Liskov in 1987 and states that “derived or
child classes must be substitutable for their base or parent classes.” It ensures that
any class that is the child of a parent class can be used in place of its parent without
any unexpected behavior.
● One of the classic examples of this principle involves rectangles and squares. A
rectangle’s width and height can each take any value, while a square is a rectangle
whose width and height are always equal. It is therefore tempting to extend a
Rectangle class into a Square class.
● However, to satisfy the definition of a square, the Square class must override the
width and height setters so that changing one side also changes the other. Because a
derived class should not alter the behavior callers expect from the parent class, this
substitution violates the LSP.
LSP Violation Example:
● To see a potential violation of LSP, consider what would happen if you were to use the
Square class in a context expecting a Rectangle:
● If you substitute a Square where a Rectangle is expected, changing just the width or
the height leads to unexpected results, because the Square changes both dimensions.
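The violation above can be demonstrated in Python. This is a sketch of the classic counterexample; the `stretch` helper is a hypothetical caller that expects ordinary Rectangle behavior:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def set_height(self, height):
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    # A square must keep both sides equal, so each setter changes both.
    def set_width(self, width):
        self.width = self.height = width

    def set_height(self, height):
        self.width = self.height = height


def stretch(rect):
    """A caller written against Rectangle: it expects area 4 * 5 = 20."""
    rect.set_width(4)
    rect.set_height(5)
    return rect.area()
```

`stretch(Rectangle(2, 3))` returns 20 as expected, but `stretch(Square(2))` returns 25, because the final `set_height(5)` silently changed the width too. The subclass broke the caller's assumptions, which is exactly what LSP forbids.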
4. Interface Segregation Principle (ISP): This principle talks about not enforcing unnecessary
methods on an interface.
● The interface segregation principle states that clients should not be forced to implement
interfaces or methods they do not use. More specifically, the ISP suggests that software
developers should break down large interfaces into smaller, more specific ones, so that
clients only need to depend on the interfaces that are relevant to them. This can make
the codebase easier to maintain.
● This principle is fairly similar to the single responsibility principle (SRP). But it’s not just
about a single interface doing only one thing – it’s about breaking the whole codebase
into multiple interfaces or components.
● This means you create individual components that have functionality specific to them.
The component responsible for implementing scroll to the top, for example, will not be
the one to switch between light and dark, and so on.
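The scroll/theme example above can be sketched with two small abstract interfaces instead of one large one. The interface and class names (Scrollable, Themeable, etc.) are hypothetical:

```python
from abc import ABC, abstractmethod


# Two narrow interfaces instead of one wide "UIComponent" interface,
# so each client implements only what it actually uses.
class Scrollable(ABC):
    @abstractmethod
    def scroll_to_top(self):
        ...


class Themeable(ABC):
    @abstractmethod
    def toggle_theme(self):
        ...


class ScrollButton(Scrollable):
    def scroll_to_top(self):
        return "scrolled to top"


class ThemeSwitch(Themeable):
    def toggle_theme(self):
        return "theme toggled"
```

Had both methods lived in one interface, ScrollButton would be forced to stub out `toggle_theme` it never uses, which is the situation ISP warns against.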
5. Dependency Inversion Principle (DIP): This principle says that an interface should be
independent of underlying hardware or software specifications. The dependency inversion
principle is about decoupling software modules. That is, making them as separate from one
another as possible.
The Dependency Inversion Principle (DIP) is a principle in object-oriented design that states that
“High-level modules should not depend on low-level modules. Both should depend on
abstractions“. Additionally, abstractions should not depend on details. Details should depend on
abstractions.
● In simpler terms, the DIP suggests that classes should rely on abstractions (e.g.,
interfaces or abstract classes) rather than concrete implementations.
● This allows for more flexible and decoupled code, making it easier to change
implementations without affecting other parts of the codebase.
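A minimal Python sketch of DIP, assuming a hypothetical notification feature: the high-level Notifier depends only on the MessageSender abstraction, never on a concrete sender:

```python
from abc import ABC, abstractmethod


class MessageSender(ABC):
    """The abstraction both sides depend on."""

    @abstractmethod
    def send(self, text):
        ...


# Low-level details: each depends on the abstraction above.
class EmailSender(MessageSender):
    def send(self, text):
        return f"email: {text}"


class SmsSender(MessageSender):
    def send(self, text):
        return f"sms: {text}"


class Notifier:
    """High-level module: knows only the MessageSender interface."""

    def __init__(self, sender):
        self.sender = sender

    def notify(self, text):
        return self.sender.send(text)
```

Swapping email for SMS is just `Notifier(SmsSender())` instead of `Notifier(EmailSender())`; the Notifier class itself never changes.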
Need for SOLID Principles in Object-Oriented Design:
1. SOLID principles make code easier to maintain. When each class has a clear
responsibility, it’s simpler to find where to make changes without affecting unrelated parts
of the code.
2. These principles support growth in software. For example, the Open/Closed Principle
allows developers to add new features without changing existing code, making it easier
to adapt to new requirements.
3. SOLID encourages flexibility. By depending on abstractions rather than specific
implementations (as in the Dependency Inversion Principle), developers can change
components without disrupting the entire system.
Agile Design Principles with UML (Unified Modeling Language) Examples: Unified
Modeling Language (UML) is a general-purpose modeling language. The main aim of UML is to
define a standard way to visualize the way a system has been designed. It is quite similar to
blueprints used in other fields of engineering. UML is not a programming language, it is rather a
visual language. Software system artifacts can be specified, visualized, built, and documented
with the use of UML.
● We use UML diagrams to show the behavior and structure of a system.
● UML helps software engineers, business stakeholders, and system architects with
modeling, design, and analysis.
1. Class Diagram: These depict the static structure of your software, showing classes,
attributes, and their relationships. They are helpful for designing data models and
understanding the overall architecture.
● When to Use: Class diagrams are typically used during the initial stages of the project
when defining the system’s architecture and data models.
● Purpose: Use them to represent the static structure of the software, including classes,
their attributes, and relationships between classes.
● Scenarios: Class diagrams are helpful when you need to design the underlying data
structure or when discussing high-level system architecture.
2. Sequence Diagram: Use these to visualize the dynamic behavior of your system,
especially for interactions between different components or actors. Sequence diagrams
can be handy for understanding complex user stories.
● When to Use: Sequence diagrams are particularly useful during the development phase
when you want to visualize interactions between different components or actors.
● Purpose: Use them to show the dynamic behavior of your system, including the
sequence of messages or method calls between objects.
● Scenarios: Sequence diagrams can be used for understanding and documenting
complex user stories or scenarios that involve multiple system components.
3. Activity Diagram: These describe the workflow and flow of control in a system. They
are great for representing the steps involved in a specific process or user story.
● When to Use: Activity diagrams are versatile and can be used throughout the project,
from requirement analysis to design and even testing.
● Purpose: Use them to represent workflows, business processes, and the flow of control
within a system.
● Scenarios: Activity diagrams are helpful for documenting and visualizing the steps
involved in a specific process, such as user interaction flows or business processes.
4. Use Case Diagram: When dealing with user stories, use case diagrams can help
identify and document different user roles and their interactions with the system.
● When to Use: Use case diagrams are typically created during the early stages of the
project, often during requirements gathering.
● Purpose: Use them to define different user roles, their interactions with the system, and
the high-level functionality the system provides.
● Scenarios: Use case diagrams help identify and document user stories or features that
need to be implemented.
5. State Diagram: If your software has complex state transitions, state diagrams can be
beneficial for visualizing and documenting these transitions.
● When to Use: State diagrams are valuable when your software has complex state
transitions, which are often encountered during design and development.
● Purpose: Use them to visualize the states of an object and how it transitions between
those states in response to events or conditions.
● Scenarios: State diagrams can be used for modeling the behavior of specific
components or objects that have distinct states and transitions between them.
The role of UML in Agile Development:
● Visualisation: UML diagrams provide a common visual language for developers,
product owners, and other stakeholders. They help in creating a shared understanding
of the system’s architecture, design, and behavior.
● Design: UML supports the creation of detailed design artifacts like class diagrams,
sequence diagrams, and activity diagrams. These can be invaluable during the
development process for making informed design decisions.
● Documentation: While agile methodologies prioritize working software over
comprehensive documentation, UML diagrams can serve as lightweight documentation
that can be updated as the project progresses.
Need and Significance of Refactoring: Refactoring is the practice of improving existing
source code by making it cleaner and easier to understand without altering its external behavior
or functionality. Developers use refactoring to enhance code quality and reduce technical
debt. Refactoring is important because, as the software grows, it becomes expensive to:
● Fix Bugs
● Reduce Technical Debt
● Make Changes
● Add New features
In Agile, teams maintain and enhance their code on an incremental basis from Sprint to Sprint. If
code is not refactored in an Agile project, it will result in poor code quality, such as unhealthy
dependencies between classes or packages, improper class responsibility allocation, too many
responsibilities per method or class, duplicate code, and a variety of other types of confusion
and clutter. Refactoring helps to remove this chaos and simplifies the unclear and complex
code.
Making consistent refactoring part of your sprints gives your team the opportunity to
constantly respond to an evolving codebase and to organize and clean the code, which
results in better cohesion, readability, and maintainability, thereby reducing technical debt.
Best practice says to execute unit tests before and after refactoring. This is important to
make sure that the code actually works before it is touched.
When to Avoid Refactoring: Refactoring should be avoided when an application is being
revamped or when we are in the middle of a functional change.
Refactoring Techniques: Some of the common techniques for refactoring are:
1. Renaming Methods and Variables: Giving methods and variables meaningful names
makes the code more understandable and easier to maintain.
2. Adding Comments: Adding clear, concise comments where necessary helps others
(and your future self) understand the intent behind the code, especially in complex or
non-obvious sections.
3. Remove Unhealthy Dependencies: Dependencies that are no longer needed, or overly
complex, should be removed to reduce the risk of bugs and improve maintainability. This
also helps in enhancing modularity.
4. Simplify Code: Simplifying overly complex code, whether through removing
unnecessary logic or breaking down convoluted functions, improves readability and
reduces the chance of introducing errors.
5. Avoid Hardcoding: Hardcoded values make code difficult to maintain and adapt. Using
constants or configuration files instead of hardcoding values ensures flexibility and
easier updates.
6. Clean the Structure of Program: Ensuring that the program has a clear structure
makes it easier to follow and maintain. This includes consistent indentation, grouping
related functions, and following standard design patterns.
7. Break program into Modules: Splitting large programs into smaller, manageable
modules makes it easier to understand, test, and maintain. This also supports better
reusability.
8. Simplified Conditional Expressions: Simplifying complex conditional statements, such
as nested if-else or switch cases, can make the code more readable and less prone to
errors.
Refactoring Best Practices:
1. Collaborate closely with testers to ensure that existing functionality is not broken.
2. Refactor in small steps to minimize risk and simplify debugging.
3. Comment and Uncomment wisely – comment out old code temporarily during testing
and clean up once changes are confirmed.
4. Troubleshoot and fix bugs separately to avoid confusion.
5. Identify code rot (e.g., duplicated, dead, or outdated code) and ensure coding standards
are followed; refactoring should meet required quality criteria.
6. Prioritize code de-duplication by removing redundancy.
Guidelines for Refactoring:
1. Make sure the code is working before you start.
2. Ensure that an automated test suite is available and provides good coverage.
3. Run the tests frequently before, during, and after each refactoring.
4. Before each refactoring, use a version control tool to save a checkpoint. Not only does
this allow quick recovery from disasters, it also means a refactoring can be tried out and
then backed out if you are unsatisfied with the result.
5. Break each refactoring down into smaller units.
6. Finally, if a refactoring tool is available in your environment, utilize it.
Continuous Integration: Continuous Integration (CI) is a key practice within Agile
methodologies and a core stage of the Continuous Delivery (CD) pipeline. As part of this
process, new functionality is
continuously developed, tested, integrated, and validated in preparation for deployment and
release. CI helps improve code quality, reduce risks, and enables a faster, more reliable, and
sustainable development pace.
Continuous Exploration → Continuous Integration → Continuous Delivery
Activities of CI:
1. Develop: It includes the entire life cycle of creating, designing and maintaining the
software.
2. Build: Build on the other hand is the process of converting the code into an executable
and deployable form.
3. Test end to end: This is required to make sure that after integration a software or
application is still in working condition.
4. Stage: This is the step where the code or package is ready for production move and it is
also known as deployment.
Automated Build Tools: These tools enable faster and more reliable software development
and delivery by automating tasks like compiling, testing, and packaging the code. Examples
of such tools include:
● Jenkins
● Travis CI
● Selenium (for automated testing)
The steps performed by these build tools include:
● Compiling source code.
● Packaging compiled files into a compressed format.
● Producing installers.
● Creating or updating database schemas or data.
Version Control: Version control, also known as source control, is the practice of tracking and
managing changes to software code. Version control systems are software tools that help
software teams manage changes to source code over time. As development environments have
accelerated, version control systems help software teams work faster and smarter. They are
especially useful for DevOps teams since they help them to reduce development time and
increase successful deployments.
Version control software keeps track of every modification to the code in a special kind of
database. If a mistake is made, developers can turn back the clock and compare earlier
versions of the code to help fix the mistake while minimizing disruption to all team members.
Key Practices in Version Control:
1. Establishes a Clear branching strategy: A branching strategy defines how the team
manages and organizes code changes in the repository. It provides a systematic
approach to feature development, bug fixing, and release management, helping teams
collaborate effectively and avoid conflicts.
2. Commits small incremental changes regularly: Making frequent and small commits
ensures that changes are easier to test, review, and revert if needed. Each commit
should represent a single, logical change, making the project history cleaner and
debugging simpler.
3. Automated Testing along with CI: Automated tests triggered by every commit or
merge verify that new changes do not break existing functionality. Integrating version
control with CI pipelines guarantees that code remains stable, reliable, and ready for
deployment at all times.
4. Keeps the repository clean and organized: Maintaining a clean repository involves
removing unused branches, ignoring unnecessary files, and writing clear documentation
and commit messages. A clean repository improves accessibility, reduces confusion,
and supports efficient collaboration.
5. Meaningful Commit Messages: Writing clear, descriptive commit messages explains
the "what" and "why" behind each change. Good commit messages make it easier for
others (and your future self) to understand the project's history and reasoning behind
decisions, especially during troubleshooting or audits.
xUnit Framework:
● xUnit.net is a free, open source, community-focused unit testing tool for the .NET
Framework.
● xUnit is a family of unit testing frameworks that follow a common structure and design
pattern for writing and running tests.
● These frameworks support Test Driven Development (TDD) by allowing developers to
write test cases before writing the actual code. The "x" in xUnit is a placeholder for the
language-specific framework name (e.g., JUnit for Java, NUnit for .NET, PyTest for
Python).
Popular xUnit Frameworks by Language:
1. JUnit – Java
2. NUnit / xUnit.net – C#/.NET
3. PyTest / unittest – Python
4. Mocha / Jest – JavaScript
5. CppUnit / Google Test – C++
6. PHPUnit – PHP
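As a concrete example from the family above, here is a minimal test written with Python's built-in `unittest` module (Python's xUnit-style framework). The `add` function is a hypothetical unit under test:

```python
import unittest


def add(a, b):
    """Hypothetical unit under test."""
    return a + b


# xUnit-style structure: a TestCase class groups related tests,
# and assertion methods check expected outcomes.
class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_and_positive(self):
        self.assertEqual(add(-1, 1), 0)
```

Saved in a file, the suite would typically be run with `python -m unittest`, which discovers TestCase subclasses and reports each assertion failure.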
Test Driven Development (TDD) Tools:
1. Unit Testing Frameworks (xUnit): These are the core tools for writing and running unit tests.
They support assertions, test case grouping, and test automation.
2. Build Automation Tools: Tools like Maven, Gradle, MSBuild, or npm can be integrated to
run tests automatically during builds. They help enforce TDD by running tests during each build
cycle.
3. Continuous Integration (CI) Tools: CI tools like Jenkins, GitHub Actions, Travis CI, and
GitLab CI run tests automatically on code commits. They ensure that every change is verified by
running the xUnit tests in real time.
4. Mocking Libraries: Mocking tools simulate the behavior of complex objects, APIs, or
databases. Examples:
● Mockito – Java
● Moq – .NET
● unittest.mock – Python
● Sinon.js – JavaScript
5. Code Coverage Tools: These tools measure how much of your code is being tested by your
test suite. Examples include JaCoCo (Java), Coverlet (.NET), Istanbul (JavaScript), and
Coverage.py (Python).
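The mocking libraries listed above all follow the same idea, which can be sketched with Python's `unittest.mock`. The payment gateway here is a hypothetical external dependency we do not want to call for real during a test:

```python
from unittest.mock import Mock


def pay(gateway, amount):
    """Hypothetical function under test; 'gateway' is an injected dependency."""
    response = gateway.charge(amount)
    return response["status"] == "ok"


# In a test, a Mock stands in for the real gateway: no network call happens,
# and we fully control what the dependency returns.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

result = pay(gateway, 100)

gateway.charge.assert_called_once_with(100)  # verify the interaction occurred
```

Besides stubbing return values, the mock records how it was called, so the test can verify both the result (`result` is True) and the interaction with the dependency.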
Best Practices for Test Driven Development (TDD):
1. Start with a clear understanding of requirements: Begin by understanding the
requirements or specifications of the feature you are developing. This will help you write
focused and relevant tests.
2. Write atomic tests: Each test should focus on a specific behavior or functionality. Keep
your tests small and focused, addressing a single aspect of the code. This improves test
readability, maintainability, and allows for easier debugging.
3. Write the simplest test case first: Begin by writing the simplest possible test case that
will fail. This helps you focus on the immediate task and avoids overwhelming yourself
with complex scenarios upfront.
4. Write tests for edge cases: Consider boundary conditions and edge cases when
designing your tests. These are inputs or scenarios that lie at the extremes of the input
domain and often reveal potential bugs or unexpected behavior.
5. Refactor regularly: After a test passes, take time to refactor the code and improve its
design without changing its behavior. This helps maintain clean and maintainable code
as the project progresses.
6. Maintain a fast feedback loop: Your test suite should execute quickly so that you can
receive immediate feedback on the health of your code. Fast feedback allows for faster
development iterations and catches issues early on.
7. Automate your tests: Utilize test automation frameworks and tools to automate the
execution of your tests. This enables you to run tests frequently, easily integrate them
into your development workflow, and ensure consistent and reliable test results.
8. Follow the Red-Green-Refactor cycle: Adhere to the core TDD cycle of writing a failing
test (Red), implementing the minimum code to pass the test (Green), and then
refactoring the code to improve its design (Refactor). Repeat this cycle for each new
behavior or feature.
9. Maintain a comprehensive test suite: Aim to achieve a good balance between unit
tests, integration tests, and acceptance tests. Each test type serves a different purpose
and provides different levels of confidence in the code.
10. Continuously run tests: Integrate your test suite with your development environment
and set up continuous integration (CI) pipelines to automatically execute tests whenever
code changes are made. This ensures that tests are run consistently and helps catch
regressions early.
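The Red-Green-Refactor cycle from practice 8 can be walked through in miniature. The FizzBuzz requirement here is a hypothetical example chosen only to show the cycle:

```python
# Red: write the failing test first, before fizzbuzz exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"


# Green: write the minimum code that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


# Refactor: with the test green, clean up names or structure as needed,
# then rerun the test to confirm behavior is unchanged.
test_fizzbuzz()
```

Each new behavior (say, handling negative numbers) would start the cycle again: add a failing assertion, make it pass with minimal code, then refactor.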