SE Unit III

The document outlines the fundamentals of software construction and testing, covering topics such as object-oriented design principles, programming practices, and various testing methodologies including unit, integration, and system testing. It emphasizes the importance of clear requirements, modular design, and effective debugging and testing strategies to enhance software quality. Additionally, it discusses the role of Test-Driven Development (TDD) and the significance of maintaining documentation and version control throughout the software development lifecycle.


UNIT-III

SOFTWARE CONSTRUCTION AND TESTING:


SYLLABUS: Software construction basics, Object-oriented design principles,
Object-oriented programming languages (Java, C++, Python), Software testing
basics (unit testing, integration testing, system testing), Test-driven development
(TDD)
SOFTWARE CONSTRUCTION BASICS
Software construction is a fundamental phase of software development that focuses
on the actual building of software. It involves activities such as coding, debugging,
unit testing, and integration. Here are the key basics of software construction:
1. Understanding Requirements
 Ensure clear understanding of the requirements.
 Translate requirements into technical specifications.
2. Software Design
 Create architectural and detailed designs.
 Ensure modularity, scalability, and maintainability.
3. Programming Practices
 Languages: Select appropriate programming languages (e.g., Python, Java,
C++).
 Coding Standards: Follow consistent coding guidelines for readability and
maintainability.
 Code Reuse: Leverage existing libraries or frameworks to reduce development
time.
4. Version Control
 Use tools like Git to track code changes and collaborate with team members.
 Implement branching strategies for parallel development and code reviews.
5. Debugging
 Detect and fix defects early.
 Use debugging tools and techniques to identify root causes of issues.
6. Unit Testing
 Write test cases to validate individual units of code.
 Use testing frameworks like JUnit (Java), PyTest (Python), or NUnit (.NET).
7. Integration
 Combine modules and ensure they work together as intended.
 Address dependencies and perform integration testing.
8. Error Handling
 Anticipate potential errors and handle them gracefully.
 Use exception handling mechanisms for robust software.
9. Code Optimization
 Refactor code for better performance and efficiency.
 Eliminate redundant operations and improve algorithm efficiency.
10. Documentation
 Document code for clarity.
 Maintain user and developer documentation.
11. Tools and Environments
 Use IDEs (e.g., IntelliJ, Visual Studio Code) for efficient coding.
 Leverage build tools (e.g., Maven, Gradle) for automation.
 Utilize CI/CD pipelines for automated building, testing, and deployment.
12. Quality Assurance
 Conduct code reviews to ensure adherence to standards.
 Perform static and dynamic analysis to find issues before deployment.
13. Maintenance and Updates
 Plan for future updates and patches.
 Implement feedback from end users effectively.
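As a small illustration of the error-handling and unit-testing practices listed above, here is a hedged Python sketch. The read_config_value function and its name=value config format are hypothetical, invented for this example:

```python
def read_config_value(path, key, default=None):
    """Look up `key` in a simple name=value config file.

    Anticipates two common failure modes and handles them gracefully
    instead of crashing: a missing file and a missing key both fall
    back to `default`.
    """
    try:
        with open(path) as f:
            for line in f:
                name, sep, value = line.partition("=")
                if sep and name.strip() == key:
                    return value.strip()
    except FileNotFoundError:
        # Recover gracefully rather than propagating the crash upward.
        return default
    return default

# A unit test for the unit above (PyTest-style; run with `pytest`)
def test_missing_file_returns_default():
    assert read_config_value("no_such_file.cfg", "host", "localhost") == "localhost"
```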
OBJECT-ORIENTED DESIGN PRINCIPLES
Object-oriented design (OOD) principles help to reduce complexity and increase code
maintainability. Some of the key principles of OOD include:
 SOLID: An acronym for the five key design principles:
1. Single Responsibility Principle (SRP)
 Definition: A class should have only one reason to change.
 Explanation: Each class should focus on a single responsibility or functionality.
 Example: A User class handles user data, while a UserRepository class manages
database interactions.
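A minimal Python sketch of this separation; the class and method names are illustrative, and an in-memory dict stands in for a real database:

```python
class User:
    """Holds user data only -- its single responsibility is the domain model."""
    def __init__(self, username, email):
        self.username = username
        self.email = email

class UserRepository:
    """Handles persistence only -- its single responsibility is storage."""
    def __init__(self):
        self._store = {}  # in-memory stand-in for a database

    def save(self, user):
        self._store[user.username] = user

    def find(self, username):
        return self._store.get(username)
```

A change to how users are stored now touches only UserRepository, never User.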
2. Open/Closed Principle (OCP)
 Definition: Classes should be open for extension but closed for modification.
 Explanation: New functionalities should be added by extending the class, not
by modifying existing code.
 Example: Use inheritance or composition to add features rather than altering
base class code.
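For instance, a checkout routine can stay closed for modification while discount behaviour remains open for extension. The following is a hypothetical sketch, not from a specific library:

```python
class DiscountPolicy:
    """Base policy: no discount."""
    def discount(self, amount):
        return 0.0

class SeasonalDiscount(DiscountPolicy):
    """New behaviour added by extension -- checkout() is never edited."""
    def discount(self, amount):
        return amount * 0.10

def checkout(amount, policy):
    # Closed for modification: new discount types are added as subclasses,
    # and this function does not change.
    return amount - policy.discount(amount)
```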
3. Liskov Substitution Principle (LSP)
 Definition: Subtypes must be substitutable for their base types.
 Explanation: Derived classes should not violate the behavior of their parent
class.
 Example: If a Bird class has a fly() method, a subclass Penguin should not
implement fly(), as it doesn't make sense. Instead, rethink the hierarchy.
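One way to rethink that hierarchy is to introduce a FlyingBird subtype, so every subclass remains substitutable for its base. The class names below are illustrative:

```python
class Bird:
    def eat(self):
        return "eating"

class FlyingBird(Bird):
    def fly(self):
        return "flying"

class Sparrow(FlyingBird):
    pass  # substitutable wherever a FlyingBird is expected

class Penguin(Bird):
    pass  # never promises fly(), so it cannot violate the contract
```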
4. Interface Segregation Principle (ISP)
 Definition: A class should not be forced to implement interfaces it doesn't use.
 Explanation: Create smaller, specific interfaces rather than a large, general-
purpose one.
 Example: Instead of a large Animal interface with methods like fly(), swim(),
and walk(), have specific interfaces like Flyable, Swimmable, and Walkable.
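In Python, these small interfaces can be sketched with abstract base classes; the animal classes are illustrative:

```python
from abc import ABC, abstractmethod

class Flyable(ABC):
    @abstractmethod
    def fly(self): ...

class Swimmable(ABC):
    @abstractmethod
    def swim(self): ...

class Duck(Flyable, Swimmable):
    def fly(self):
        return "duck flying"
    def swim(self):
        return "duck swimming"

class Fish(Swimmable):
    # Fish is never forced to implement a fly() it cannot use.
    def swim(self):
        return "fish swimming"
```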
5. Dependency Inversion Principle (DIP)
 Definition: High-level modules should not depend on low-level modules; both
should depend on abstractions.
 Explanation: Depend on interfaces or abstractions rather than concrete
implementations.
 Example: Instead of a Logger class directly writing to a file, use a Logger
Interface that different loggers (e.g., file, database, console) can implement.
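A hedged sketch of that Logger example; the interface and the concrete implementations below are hypothetical:

```python
from abc import ABC, abstractmethod

class Logger(ABC):
    """The abstraction both high- and low-level code depend on."""
    @abstractmethod
    def log(self, message): ...

class ConsoleLogger(Logger):
    def log(self, message):
        print(message)

class MemoryLogger(Logger):
    def __init__(self):
        self.messages = []
    def log(self, message):
        self.messages.append(message)

class OrderService:
    def __init__(self, logger):
        # High-level module depends on the Logger abstraction,
        # not on any concrete file/console/database logger.
        self.logger = logger
    def place_order(self, item):
        self.logger.log(f"order placed: {item}")
```

Swapping ConsoleLogger for MemoryLogger (or a file or database logger) requires no change to OrderService.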
 DRY: An acronym for "Don't Repeat Yourself". Code should not be copied
and pasted throughout the codebase
 Aggregation: A relationship where the child can exist independently of the
parent
 Composition: A strong life cycle dependency between the parent and child
 Coupling and cohesion: Principles concerned with organizing objects and
their interactions
 Robustness: Software should produce correct solutions and recover
gracefully from unexpected errors
 Adaptability: Software should be able to adapt to unexpected events and run
with minimal change on different hardware and operating system platforms
OOD is the process of planning a system of interacting objects to solve a software
problem.
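The DRY principle listed above can be illustrated by extracting duplicated validation into one shared helper; the functions below are a hypothetical sketch:

```python
def validate_email(email):
    """Single shared check -- written once, reused everywhere."""
    if "@" not in email:
        raise ValueError(f"invalid email: {email}")
    return email

def create_user(email):
    # Without DRY, this validation would be copy-pasted here...
    return {"email": validate_email(email)}

def update_user(user, email):
    # ...and again here, drifting out of sync over time.
    user["email"] = validate_email(email)
    return user
```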
SOFTWARE TESTING BASICS (UNIT TESTING, INTEGRATION TESTING,
SYSTEM TESTING)
1)UNIT TESTING
 Unit Testing is a software testing technique in which individual units or
components of a software application are tested in isolation.
 These units are the smallest pieces of code, typically functions or methods,
ensuring they perform as expected.
 Unit testing helps in identifying bugs early in the development cycle, enhancing
code quality, and reducing the cost of fixing issues later.
 It is an essential part of Test-Driven Development (TDD), promoting reliable
code.
 Unit testing strategies
To create effective unit tests, follow these basic techniques to ensure all
scenarios are covered:
 Logic checks: Verify that the system performs correct calculations and follows the
expected path with valid inputs. Check that all possible paths through the code are
tested.
 Boundary checks: Test how the system handles typical, edge-case, and invalid
inputs. For example, if an integer between 3 and 7 is expected, check how the
system reacts to a 5 (normal), a 3 (edge case), and a 9 (invalid input).
 Error handling: Check that the system handles errors properly. Does it prompt for a
new input, or does it crash when something goes wrong?
 Object-oriented checks: If the code modifies objects, confirm that the object’s
state is correctly updated after running the code.
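The boundary-check strategy above can be sketched as PyTest-style tests. The accept() function and its 3-to-7 rule follow the example in the text; the implementation itself is hypothetical:

```python
def accept(n):
    """Accept integers between 3 and 7 inclusive (rule from the example above)."""
    return 3 <= n <= 7

# PyTest-style unit tests covering normal, edge, and invalid inputs
def test_normal_value():
    assert accept(5) is True       # typical input

def test_edge_values():
    assert accept(3) is True       # lower boundary
    assert accept(7) is True       # upper boundary

def test_invalid_value():
    assert accept(9) is False      # out of range
```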

 Benefits of unit testing

Unit testing benefits software development in many ways:
1. Early Detection of Issues: Unit testing allows developers to detect and fix
issues early in the development process before they become larger and more
difficult to fix.
2. Improved Code Quality: Unit testing helps to ensure that each unit of code
works as intended and meets the requirements, improving the overall quality of
the software.
3. Increased Confidence: Unit testing provides developers with confidence in
their code, as they can validate that each unit of the software is functioning as
expected.
4. Faster Development: Unit testing enables developers to work faster and more
efficiently, as they can validate changes to the code without having to wait for
the full system to be tested.
5. Better Documentation: Unit testing provides clear and concise documentation
of the code and its behavior, making it easier for other developers to understand
and maintain the software.
6. Facilitation of Refactoring: Unit testing enables developers to safely make
changes to the code, as they can validate that their changes do not break existing
functionality.
7. Reduced Time and Cost: Unit testing can reduce the time and cost required for
later testing, as it helps to identify and fix issues early in the development
process.

 How do developers use unit tests?


Unit testing plays an important role throughout the software development
process:
 Test-Driven Development (TDD): In TDD, developers write tests before
writing the actual code. This ensures that once the code is completed, it instantly
meets the functional requirements when tested, saving time on debugging.
 After Completing Code Blocks: After a section of code is finished, unit tests
are created (if not already written through TDD). These tests are then run to verify
that the code works as expected. Unit tests are typically the first tests run,
before broader integration and system testing.
 DevOps and CI/CD: In DevOps environments, Continuous
Integration/Continuous Delivery (CI/CD) automatically runs unit tests
whenever new code is added. This ensures that changes are integrated smoothly,
tested thoroughly, and deployed efficiently, maintaining overall code quality.

2)INTEGRATION TESTING
 Integration testing is the process of testing the interface between two
software units or modules.
 It focuses on determining the correctness of the interface.
 The purpose of integration testing is to expose faults in the interaction
between integrated units.
 Once all the modules have been unit-tested, integration testing is
performed.
 Integration testing is a software testing technique that focuses on
verifying the interactions and data exchange between different
components or modules of a software application.
 The goal of integration testing is to identify any problems or bugs that
arise when different components are combined and interact with each
other.
 Integration testing is typically performed after unit testing and before
system testing.
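As a small illustration, an integration test exercises the interface between two units working together. Both functions below are hypothetical:

```python
# Module A: parses one order line into a record
def parse_order(line):
    item, qty = line.split(",")
    return {"item": item.strip(), "qty": int(qty)}

# Module B: aggregates the records produced by module A
def total_quantity(lines, parser):
    return sum(parser(line)["qty"] for line in lines)

# Integration test: verifies the two modules cooperate correctly
# through their interface (record format and parser callable)
def test_order_pipeline():
    lines = ["apple, 2", "pear, 3"]
    assert total_quantity(lines, parse_order) == 5
```

A unit test would check parse_order or total_quantity in isolation; the integration test above checks the data exchanged between them.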
 Integration test approaches
There are four integration testing approaches:

1. Big-Bang Integration Testing


 It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of individual
module testing.
 In simple words, all the modules of the system are simply put together and
tested.
 This approach is practicable only for very small systems. If an error is found
during integration testing, it is very difficult to localize, as it may potentially
belong to any of the modules being integrated.
 As a result, errors reported during Big-Bang integration testing are very
expensive to debug and fix.
 Big-bang integration testing is a software testing approach in which all
components or modules of a software application are combined and tested at
once.
 This approach is typically used when the software components have a low
degree of interdependence or when there are constraints in the development
environment that prevent testing individual components.
 The goal of big-bang integration testing is to verify the overall functionality of
the system and to identify any integration problems that arise when the
components are combined.
 While big-bang integration testing can be useful in some situations, it can also
be a high-risk approach, as the complexity of the system and the number of
interactions between components can make it difficult to identify and diagnose
problems.
2. Bottom-Up Integration Testing
 In bottom-up testing, the modules at the lower levels are tested first and then
integrated with the higher-level modules until all modules are tested.
 The primary purpose of this integration testing is that each subsystem tests the
interfaces among the various modules making up the subsystem.
 Bottom-up integration testing uses test drivers to drive and pass appropriate
data to the lower-level modules.
3. Top-Down Integration Testing
 In top-down integration testing, stubs are used to simulate the behaviour of the
lower-level modules that are not yet integrated.
 In this integration testing, testing takes place from top to bottom.
 First, the high-level modules are tested, then the low-level modules, and finally
the low-level modules are integrated with the high-level ones to ensure the
system works as intended.
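A stub for a not-yet-integrated lower-level module can be sketched with Python's unittest.mock; the report function below is a hypothetical high-level module:

```python
from unittest.mock import Mock

# High-level module under test
def generate_report(fetch_orders):
    """Summarise the orders returned by a lower-level data module."""
    orders = fetch_orders()
    return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

# Stub standing in for the lower-level module that is not yet written:
# it returns canned data so the high-level logic can be tested today.
stub_fetch = Mock(return_value=[{"amount": 10}, {"amount": 15}])

report = generate_report(stub_fetch)   # top-down test using the stub
```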
4. Mixed Integration Testing
 Mixed integration testing is also called sandwiched integration testing.
 It follows a combination of the top-down and bottom-up testing approaches.
 In the top-down approach, testing can start only after the top-level modules have
been coded and unit tested. In the bottom-up approach, testing can start only
after the bottom-level modules are ready.
 The sandwich (or mixed) approach overcomes this shortcoming of the top-down
and bottom-up approaches. It is also called hybrid integration testing. Both stubs
and drivers are used in mixed integration testing.
 Applications of Integration Testing
1. Identify the components: Identify the individual components of your
application that need to be integrated. This could include the frontend, backend,
database, and any third-party services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases
that need to be executed to validate the integration points between the different
components. This could include testing data flow, communication protocols, and
error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your
integration tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the
most critical and complex scenarios. Be sure to log any defects or issues that
you encounter during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any
defects or issues that need to be addressed. This may involve working with
developers to fix bugs or make changes to the application architecture.
6. Repeat testing: Once defects have been fixed, repeat the integration testing
process to ensure that the changes have been successful and that the application
still works as expected.
 Test Cases For Integration Testing
 Interface Testing: Verify that data exchange between modules occurs correctly.
Validate input/output parameters and formats. Ensure proper error handling and
exception propagation between modules.
 Functional Flow Testing: Test end-to-end functionality by simulating user
interactions. Verify that user inputs are processed correctly and produce
expected outputs. Ensure seamless flow of data and control between modules.
 Data Integration Testing: Validate data integrity and consistency across
different modules. Test data transformation and conversion between formats.
Verify proper handling of edge cases and boundary conditions.
 Dependency Testing: Test interactions between dependent modules. Verify that
changes in one module do not adversely affect others. Ensure proper
synchronization and communication between modules.
 Error Handling Testing: Validate error detection and reporting mechanisms.
Test error recovery and fault tolerance capabilities. Ensure that error messages
are clear and informative.
 Performance Testing: Measure system performance under integrated
conditions. Test response times, throughput, and resource utilization. Verify
scalability and concurrency handling between modules.
 Security Testing: Test access controls and permissions between integrated
modules. Verify encryption and data protection mechanisms. Ensure compliance
with security standards and regulations.
 Compatibility Testing: Test compatibility with external systems, APIs, and
third-party components. Validate interoperability and data exchange protocols.
Ensure seamless integration with different platforms and environments.
3)SYSTEM TESTING
 System testing is a type of software testing that evaluates the overall
functionality and performance of a complete and fully integrated software
solution.
 It tests if the system meets the specified requirements and if it is suitable for
delivery to the end-users.
 This type of testing is performed after the integration testing and before the
acceptance testing.
 System Testing Process
System Testing is performed in the following steps:
 Test Environment Setup: Set up a testing environment that enables high-quality
testing.
 Create Test Cases: Generate test cases for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Cases: After the test cases and test data have been generated, the
test cases are executed.
 Defect Reporting: Defects found in the system are reported.
 Regression Testing: Carried out to check for side effects of the fixes on the rest
of the system.
 Log Defects: Detected defects are logged and then fixed.
 Retest: If a test fails, it is performed again after the defect is fixed.
 Types of System Testing
 Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability and reliability of the software
product or application.
 Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check
the robustness of the system under the varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which is
carried out to check the performance of a software application or system in
terms of its capability to scale the user request load up or down.
 Advantages of System Testing
 Testers do not require deep programming knowledge to carry out this testing.
 It tests the entire product or software, making it easy to detect errors or defects
that cannot be identified during unit testing and integration testing.
 The testing environment is similar to the real-time production or business
environment.
 It checks the entire functionality of the system with different test scripts and
covers the technical and business requirements of clients.
 After this testing, the product will have covered almost all possible bugs or
errors, so the development team can confidently go ahead with acceptance
testing.
 Verifies the overall functionality of the system.
 Detects and identifies system-level problems early in the development cycle.
 Helps to validate the requirements and ensure the system meets the user needs.
 Improves system reliability and quality.
 Facilitates collaboration and communication between development and testing
teams.
 Enhances the overall performance of the system.
 Increases user confidence and reduces risks.
 Facilitates early detection and resolution of bugs and defects.
 Supports the identification of system-level dependencies and inter-module
interactions.
 Improves the system’s maintainability and scalability.
 Disadvantages of System Testing
 This testing is a more time-consuming process than other testing techniques,
since it checks the entire product or software.
 The cost of the testing is high, since it covers the testing of the entire software.
 It needs good debugging tools, otherwise hidden errors will not be found.
 Can be time-consuming and expensive.
 Requires adequate resources and infrastructure.
 Can be complex and challenging, especially for large and complex systems.
 Dependent on the quality of requirements and design documents.
 Limited visibility into the internal workings of the system.
 Can be impacted by external factors like hardware and network configurations.
 Requires proper planning, coordination, and execution.
 Can be impacted by changes made during development.
 Requires specialized skills and expertise.
 May require multiple test cycles to achieve desired results.
TEST-DRIVEN DEVELOPMENT (TDD)
 Test Driven Development (TDD) is a software development methodology that
emphasizes writing tests before writing the actual code.
 It ensures that code is always tested and functional, reducing bugs and
improving code quality.
 In TDD, developers write small, focused tests that define the desired
functionality, then write the minimum code necessary to pass these tests, and
finally, refactor the code to improve structure and performance.
 This cyclic process helps in creating reliable, maintainable, and efficient
software.
 By following TDD, teams can enhance code reliability,
accelerate development cycles, and maintain high standards of software
quality.
 Process of Test Driven Development (TDD)
 TDD is the process in which test cases are written before the code that validates
those cases. It relies on the repetition of a very short development cycle. Test-
driven development is a technique in which automated unit tests are used to
drive the design and loosen the coupling between dependencies.

 Red – Write a test case for the new behaviour, run all the test cases, and make
sure the new test case fails.
 Green – Write just enough code to make the test case pass.
 Refactor – Change the code to remove duplication and redundancy while
keeping all tests passing.
 Repeat the above-mentioned steps again and again.
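The red-green-refactor cycle above can be sketched in Python; fizzbuzz is a hypothetical example chosen for brevity, not one from the text:

```python
# RED: write the test first -- it fails while fizzbuzz() does not exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# GREEN: write the minimum code that makes the test pass.
# REFACTOR: tidy the implementation while the test stays green.
def fizzbuzz(n):
    word = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return word or str(n)
```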
 Advantages of Test Driven Development (TDD)
 Unit tests provide constant feedback about the functions.
 The quality of the design increases, which further helps in proper maintenance.
 Test-driven development acts as a safety net against bugs.
 TDD ensures that your application actually meets the requirements defined for it.
 TDD has a very short development cycle.
 Disadvantages of Test Driven Development (TDD)
 Increased Code Volume: Using TDD means writing extra code for test cases,
which can make the overall codebase larger.
 False Security from Tests: Passing tests can give developers a false sense of
security about the safety of the code.
 Maintenance Overheads: Keeping a large number of tests up to date is difficult
and time-consuming.
 Time-Consuming Test Processes: Writing and maintaining the tests can take
a long time.
 Testing Environment Set-Up: TDD needs a proper testing environment, which
takes effort to set up and maintain, including its code and data.
Test-driven work in Test Driven Development (TDD)
TDD is not just for software. Product and service teams also apply it as test-driven
work. To make testing successful, it needs to be carried out at both small and large
levels in test-driven development.
This means testing every part of the work, such as methods in a class, input data values,
log messages, and error codes. Outside of software, teams use quality control (QC)
checks before starting work. These checks help plan and verify the outcomes of the
work. They follow a similar process to TDD, with some small changes, which are as
follows:
1. “Add a check” instead of “Add a test”
2. “Run all checks” instead of “Run all tests”
3. “Do the work” instead of “Write some code”
4. “Run all checks” instead of “Run tests”
5. “Clean up the work” instead of “Refactor code”
6. Repeat these steps
