DevOps Unit-2,3,4,5
Software Testing
Software testing is a process of evaluating and verifying that a software application works as
intended. It aims to identify bugs, errors, or gaps in the software and ensures that it meets the
defined requirements. Testing can be manual or automated and is essential for delivering
high-quality software products.
1. Manual Testing
● Performed by human testers who execute test cases without the assistance of automation
tools.
● Ideal for exploratory, usability, and ad-hoc testing where human intuition is necessary.
● Advantages:
○ Simple to execute for small projects.
○ Detects usability issues effectively.
● Disadvantages:
○ Time-consuming and prone to human error.
2. Automation Testing
● Involves the use of specialized tools or scripts to execute test cases automatically.
● Ideal for repetitive and regression testing to save time and improve accuracy.
● Examples of automation tools: Selenium, JUnit, TestNG, and Appium.
● Advantages:
○ Faster execution of tests.
○ Reduces manual effort for repetitive tasks.
● Disadvantages:
○ Initial setup cost is high.
○ Requires programming knowledge.
Testing Techniques
1. White Box Testing
○ Focus: Tests the internal logic, structure, and code of the software.
○ Performed by: Developers.
○ Example: Verifying algorithms, loops, and conditional statements.
○ Use Case: Unit testing and integration testing.
2. Black Box Testing
○ Focus: Tests the application’s functionality without knowing the internal code.
○ Performed by: Testers.
○ Example: Inputting data into a form and verifying the output.
○ Use Case: Functional testing and system testing.
3. Grey Box Testing
○ Focus: Tests with partial knowledge of the internal code, combining white box
and black box approaches.
○ Performed by: Testers who also have some knowledge of the implementation.
○ Example: Testing a web form while also inspecting how its data is stored in the
database.
○ Use Case: Integration testing and penetration testing.
Functional Testing
Functional testing ensures that the software behaves as expected and meets the functional
requirements.
● Unit Testing: Verifies that individual units or components of the code work correctly
in isolation.
● Integration Testing: Verifies that combined modules interact and work together as
expected.
● System Testing: Validates the complete, integrated application against the functional
requirements.
Non-Functional Testing
Non-functional testing assesses the performance, usability, and reliability of the software.
1. Performance Testing: Evaluates the speed, responsiveness, and stability of the software
under various workloads.
2. Usability Testing: Assesses how easy and intuitive the software is for end users.
3. Reliability Testing: Checks that the software operates consistently without failure over
time.
Importance of Software Testing
1. Quality Assurance
○ Ensures the software meets the highest quality standards and customer
expectations.
2. Cost Efficiency
○ Detecting bugs early in the development phase saves the cost of fixing issues
later.
3. Error-Free Product
○ Identifies and resolves critical bugs or glitches before the product goes live.
4. Performance Validation
○ Ensures the software performs well under various conditions, such as high traffic
or data loads.
5. Security
○ Detects security vulnerabilities before release; delivering a robust and reliable
product enhances user trust and satisfaction.
Benefits of Testing for Different Stakeholders
● For Developers:
○ Ensures that the code is bug-free and integrates well with other modules.
● For Testers:
○ Simulates real-world scenarios to validate software functionality.
● For Organizations:
○ Builds trust by delivering a reliable, high-quality product.
○ Prevents loss of reputation caused by software failures.
Conclusion
Software testing is not just a phase in the software development lifecycle; it is an ongoing
process. Functional testing ensures the software does what it is supposed to do, while
non-functional testing verifies its performance, usability, and compatibility. By thoroughly
testing the software, organizations can deliver reliable, secure, and high-quality products that
meet both business goals and user expectations.
Automation Testing:
Automation Testing refers to the use of specialized software tools to automate the execution of
tests on a software application. It involves the creation and execution of test scripts to validate
the functionality of a software product, minimizing human intervention in the testing process.
Automation testing is particularly useful for repetitive tasks, regression testing, and performance
testing, where it helps save time, reduce errors, and improve the consistency of test results.
Advantages of Automation Testing
1. Faster Execution: Automated tests are executed much faster than manual tests,
especially for repetitive tasks.
2. 24/7 Availability: Automated tests can run round-the-clock without human involvement,
enabling continuous testing.
3. Consistency: Automation reduces human error, ensuring the same test is run in the same
way each time.
4. Increased Test Coverage: Automation can execute a large number of tests in a short
period, increasing test coverage.
5. Efficiency: Automated testing requires fewer resources for execution compared to
manual testing.
6. Stress and Load Testing: It is ideal for performance-related testing like load, stress, and
reliability testing.
7. Reusable Test Scripts: Once written, test scripts can be reused for future releases of the
software.
8. Focus on Creativity: With automation handling repetitive tasks, testers can focus on
more creative, exploratory testing.
9. Error Reduction: Automated tests have fewer chances of error, leading to more reliable
test results.
Disadvantages of Automation Testing
1. High Initial Costs: The setup of test automation tools and writing test scripts require a
significant investment in terms of both time and money.
2. Requires Skilled Personnel: Automation testing requires skilled professionals who are
familiar with the tools and programming languages used.
3. Limited Flexibility: Automation works well for repetitive tasks but may struggle with
testing complex, dynamic user interfaces or subjective criteria.
4. Maintenance: Test scripts need to be updated regularly to match changes in the software,
which can require additional effort and resources.
5. Not Suitable for All Testing Types: Some tests, such as usability or ad-hoc testing,
require human intuition and creativity, making them unsuitable for automation.
Types of Software Testing
1. Smoke Testing:
○ Definition: Smoke testing is a quick check of the basic and critical functionalities
of an application before proceeding to more detailed testing.
○ Purpose: To ensure that the major functionalities are working and the software is
stable enough for further testing.
2. Sanity Testing:
○ Definition: Sanity testing is a focused check to verify that a specific bug or issue
has been fixed and that no new issues have been introduced.
○ Purpose: To quickly assess whether the changes made to the software have fixed
the reported issues and if the new functionalities work as expected.
3. Regression Testing:
○ Definition: Regression testing re-runs existing test cases after code changes to
verify that previously working functionality has not been broken.
○ Purpose: To ensure that new changes, fixes, or enhancements do not introduce
defects into existing features.
4. User Acceptance Testing (UAT):
○ Definition: UAT is the testing performed by the client or end users to validate the
software against business requirements.
○ Purpose: To ensure that the software meets the client's expectations and is ready
for deployment.
5. Exploratory Testing:
○ Definition: Exploratory testing is an approach where testers simultaneously learn
the application, design tests, and execute them, guided by experience rather than
scripted test cases.
○ Purpose: To quickly discover defects in areas that scripted testing may overlook.
6. Adhoc Testing:
○ Definition: Adhoc testing is an informal and random type of testing where testers
do not follow predefined test cases but instead test the software in an unstructured
manner.
○ Purpose: To find defects that may not be covered by regular test cases, especially
through "negative testing."
7. Security Testing:
○ Definition: Security testing probes the application for vulnerabilities such as
weak authentication, injection flaws, or data exposure.
○ Purpose: To ensure the software protects data and withstands malicious attacks.
Implementing Automated Testing
To implement automated testing for a software project, you can follow these general steps using
a popular testing tool like Selenium (for web applications):
1. Identify Test Cases:
○ Select the test cases worth automating, such as repetitive, data-driven, or
regression-prone scenarios.
2. Write Test Scripts:
○ Write automated test scripts for the identified test cases, such as login,
registration, or form submission.
3. Test Execution:
○ Run the test scripts against the application, either on demand or as part of a
continuous integration pipeline.
4. Analyze Results:
○ After execution, analyze the test results. Automation tools provide detailed logs
and reports on whether the tests passed or failed.
5. Maintain Test Scripts:
○ Update and maintain the test scripts as the software evolves and new features are
added.
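As a rough sketch of what such a test script might look like, the following Python example uses
Selenium WebDriver to automate a login check. The URL and the element IDs (username,
password, login-btn) are hypothetical placeholders, not taken from these notes:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Start a Chrome browser session (Chrome must be installed locally)
driver = webdriver.Chrome()
try:
    # Open a hypothetical login page, used only for illustration
    driver.get("https://example.com/login")
    # Fill in the form fields and submit
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret123")
    driver.find_element(By.ID, "login-btn").click()
    # Simple check: the page title should change after a successful login
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
    print("Login test passed")
finally:
    driver.quit()  # always close the browser, even if the check fails

Scripts like this are what step 3 executes and step 5 maintains as the application changes.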
Conclusion
Each type of testing has a specific purpose, and their use depends on the project requirements.
While automated testing offers substantial advantages, especially in terms of speed and
reliability, it is not suitable for all types of tests. Some types of testing, like exploratory or ad-hoc
testing, still require human involvement. By understanding the strengths and weaknesses of
different testing methods, you can choose the right approach for each phase of the software
development lifecycle.
Selenium:
What is Selenium?
Selenium is a popular open-source tool for automating web applications across different
browsers and platforms. Originally developed in 2004 by Jason Huggins at ThoughtWorks,
Selenium allows users to automate browser actions for testing purposes. It is widely used for
automating functional testing of web applications, enabling testers to simulate real user
interactions with the application to verify its functionality.
Components of Selenium:
● Selenium IDE: A tool that allows testers to record browser actions and generate
automated test scripts. It is a great option for beginners.
● Selenium WebDriver: The most widely used component, WebDriver allows you to write
test scripts in various programming languages and directly interact with browsers.
● Selenium Grid: It allows running tests on multiple machines in parallel, making it
suitable for large-scale testing.
Advantages of Selenium:
● Portability: Being open-source and supporting a wide range of browsers and operating
systems, Selenium is highly portable.
● Flexibility: Test scripts can be written in multiple languages, making it easy to integrate
with various development environments.
● Powerful Reporting: Selenium can be combined with frameworks like TestNG or JUnit
to organize test cases and generate detailed reports.
● No Server Installation: Unlike other testing tools, Selenium WebDriver does not require
server installation, which makes it easier to set up and use.
Platforms Supported by Selenium:
● Programming Languages: C#, Java, Python, Ruby, PHP, Perl, and JavaScript.
● Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
● Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Safari, Opera.
Conclusion:
Selenium is a versatile and powerful tool for automating web application testing. It provides
flexibility with its cross-browser and cross-platform capabilities, supports a wide variety of
programming languages, and integrates easily with other tools to enhance testing processes.
Whether you are a beginner using Selenium IDE or an advanced user leveraging Selenium
WebDriver, it offers numerous features that make it a leading choice in the field of test
automation.
Test-Driven Development (TDD):
TDD vs. Traditional Testing:
● Focus: TDD focuses on writing tests first to guide the code development, whereas
traditional testing focuses on writing test cases after the development.
● Coverage: TDD drives very high code coverage, because code is written only to satisfy
a test. Traditional testing might not always achieve this.
● Test Purpose: TDD helps developers to build confidence that the system meets its
requirements, while traditional testing often focuses on verifying the correctness of the
system.
Definition:
TDD is a development approach that focuses on writing tests before the code to ensure the code
is bug-free and meets the specified requirements. It involves writing tests, running them, writing
minimal code to pass the tests, and then refactoring the code.
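To make the write-test-first cycle concrete, here is a minimal sketch using Python's built-in
unittest module; the add function and its test are invented purely for illustration:

import unittest

# Step 1 (red): write the test first. Running it before add() exists fails.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2 (green): write the minimal code that makes the test pass.
def add(a, b):
    return a + b

# Step 3 (refactor): improve the code while keeping the test green.
if __name__ == "__main__":
    unittest.main()

Running the file executes the test and reports pass or fail, completing one TDD cycle.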
REPL-Driven Development
Definition:
REPL (Read-Eval-Print Loop) is an interactive programming approach where developers can
quickly execute small snippets of code and get immediate feedback. REPL environments allow
you to test and explore code step by step, making it easier to understand the code behavior and
improve productivity.
Advantages of REPL-Driven Development:
1. Increased Efficiency: Developers can test and modify their code quickly without
running a full application.
2. Improved Understanding: Immediate feedback helps developers understand how their
code behaves, identify errors early, and test ideas interactively.
3. Increased Collaboration: Developers can easily share code snippets and collaborate by
showing the results of their code quickly, making it easier to demonstrate or solve
problems together.
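For illustration, a short session in the Python REPL might look like this; the numbers are
arbitrary examples:

$ python
>>> prices = [10, 20, 35]
>>> sum(prices)                # evaluate an expression, see the result instantly
65
>>> sum(prices) / len(prices)  # refine the idea interactively
21.666666666666668
>>> max(prices) - min(prices)
25

Each line is read, evaluated, and its result printed immediately, which is exactly the feedback
loop described above.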
Conclusion:
● REPL-Driven Development is an interactive way to write and test code in small chunks,
providing immediate feedback. It is especially useful for dynamic languages like Python,
JavaScript, and Ruby, enhancing efficiency, understanding, and collaboration.
Deployment Systems in DevOps
Definition:
In DevOps, deployment systems automate the process of releasing software updates and
applications from development to production. These systems ensure smooth, efficient, and
error-free deployment to production environments.
Popular Deployment Tools:
1. Jenkins:
An open-source automation server that supports building, deploying, and automating
projects. Jenkins helps in Continuous Integration (CI) and Continuous Deployment (CD),
allowing for frequent updates and testing.
2. Ansible:
An open-source automation tool for software provisioning, configuration management,
and application deployment. Ansible uses simple YAML-based playbooks for automating
tasks across multiple systems.
3. Docker:
A platform that enables developers to create, deploy, and run applications in containers,
which are isolated environments that contain everything needed for an application to run.
Docker ensures consistency across different environments.
4. Kubernetes:
An open-source system that automates the deployment, scaling, and management of
containerized applications. Kubernetes works well with Docker to manage clusters of
containers across various hosts.
5. AWS CodeDeploy:
A fully managed service that automates software deployment to compute services like
Amazon EC2, AWS Fargate, or on-premises servers, making deployment simpler and
faster.
6. Azure DevOps:
A Microsoft product offering end-to-end DevOps solutions, including tools for building,
testing, and deploying applications across different platforms, facilitating collaboration
and automating the entire software delivery pipeline.
Virtualization in DevOps
Definition:
Virtualization in DevOps involves creating virtual environments, like virtual machines or
containers, to simulate different operating systems and isolate applications. This allows multiple
systems to run on a single physical machine and is key for testing, deploying, and scaling
applications.
Popular Virtualization Tools:
1. Docker:
An open-source platform for containerizing applications, ensuring that they can run
consistently across different environments by packaging the app with all its
dependencies.
2. Kubernetes:
Often used in combination with Docker, Kubernetes automates the management, scaling,
and deployment of containerized applications across clusters of machines.
3. VirtualBox:
An open-source software that allows users to run multiple operating systems on a single
physical machine, providing virtual environments for testing and development.
4. VMware:
A commercial software suite that offers comprehensive tools for virtualization, cloud
computing, and management of virtual environments, suitable for enterprise-level
applications.
5. Hyper-V:
A Microsoft hypervisor that enables virtualization on Windows-based systems. Hyper-V
allows the creation and management of virtual machines on Windows servers.
Code Execution at the Client in DevOps
Definition:
Code execution at the client refers to running scripts or code on client devices (like user
machines or mobile devices) instead of on servers. This is a critical part of DevOps, enabling
dynamic interaction with client devices and ensuring efficient software delivery.
1. Web Browsers:
Web applications ship JavaScript that executes in the user's browser, enabling dynamic
pages without a round trip to the server.
2. Client-Side Scripts:
Deployment and configuration tools can push scripts (such as shell or PowerShell) that
run directly on client machines.
3. Desktop Applications:
Installed applications execute code locally, with DevOps pipelines delivering their
updates.
4. Mobile Apps:
Mobile applications themselves can execute code on client devices, allowing developers
to create interactive, dynamic experiences for users through native mobile applications.
Puppet:
Puppet is an open-source configuration management tool that automates how software is
configured and deployed across systems. Its architecture consists of the following components:
1. Puppet Master (Server):
○ The Puppet Master is the server where all configurations are stored and
managed. It is a Linux-based system where Puppet software is installed. The
Puppet Master interacts with the Puppet Agent (client) to manage configurations.
○ Role: It handles the entire configuration process and is responsible for generating
and sending configuration catalogs to the Puppet Agents.
2. Puppet Agent (Slave/Node):
○ The Puppet Agent (also called the Puppet Slave or Node) is installed on the
client machine, which can be any operating system (Linux, Windows, MacOS,
Solaris, etc.).
○ Role: The Agent requests configuration from the Puppet Master, applies the
configurations, and sends reports back to the master.
3. Configuration Repository:
○ The Configuration Repository stores all configuration files related to the nodes
(client systems) and servers. It is where configurations are stored, retrieved, and
applied to the target systems when needed.
4. Facts:
○ Facts are key-value pairs that store system information about the node (e.g., OS
type, IP address, network interfaces, etc.).
○ These facts are used by the Puppet Master to determine the current state of the
node and make decisions about which configurations to apply.
5. Catalog:
○ A Catalog is the compiled configuration that is sent from the Puppet Master to
the Puppet Agent. It contains all the configuration details and is applied to the
target node.
○ Process: The agent sends facts to the master, requests a catalog, the master
compiles and sends back the catalog, and the agent applies it to the node. The
agent checks for discrepancies and fixes them.
How the Puppet Master and Agent Communicate:
● SSL (Secure Sockets Layer) is used to securely encrypt the communication between the
Puppet Master and the Puppet Agent.
● Process:
1. The Puppet Slave requests the Puppet Master’s certificate.
2. The Puppet Master sends its certificate to the Puppet Slave.
3. The Puppet Slave sends its certificate to the Puppet Master.
4. The Puppet Slave requests data from the Puppet Master.
5. The Puppet Master sends the required configuration data back to the Puppet
Slave.
Puppet Building Blocks:
1. Resources:
○ Resources are the basic building blocks in Puppet. They define the state of a
system and ensure that the desired configuration is applied. Examples include
files, packages, services, and users.
○ Example: Ensuring a specific version of a package is installed.
2. Classes:
○ Classes are groups of related resources combined together to form a unit. Classes
help organize code and allow reusability.
○ Example: A class could be used to install and configure a web server, grouping
related resources like packages, services, and files.
3. Manifests:
○ Manifests are files that contain Puppet code written in the Puppet DSL
(Domain Specific Language). These files have the .pp extension and define the
configuration for a node or class.
○ Example: A manifest file might define how to install and configure a web
application on a node.
4. Modules:
○ Modules are directories that bundle related manifests, templates, and files into a
reusable, shareable unit of configuration.
○ Example: An apache module might contain everything needed to install and
configure the Apache web server.
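To make resources and manifests concrete, here is a minimal, illustrative manifest sketch (not
from the original notes; nginx is an arbitrary package used as an example):

# site.pp: a minimal Puppet manifest (illustrative sketch)
package { 'nginx':
  ensure => installed,          # desired state: the package must be present
}

service { 'nginx':
  ensure  => running,           # keep the service running
  enable  => true,              # start the service at boot
  require => Package['nginx'],  # apply only after the package resource
}

Each block above is a resource; grouping such resources into a class inside a module makes
them reusable across nodes.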
Summary:
● Puppet Master manages and controls configurations, sending them to the Puppet Agent
(client).
● Facts provide information about the node’s current state.
● A Catalog contains the compiled configurations to be applied.
● Resources are the core building blocks that define the state of the system.
● Classes group related resources for easy management.
● Manifests are files that define the Puppet code.
● Modules bundle multiple manifests and resources to simplify and reuse configuration.
This architecture allows Puppet to automate the configuration, management, and deployment of
software across multiple systems in a secure, efficient, and scalable manner.
Ansible:
Definition:
Ansible is an open-source automation tool that simplifies the process of application deployment,
cloud provisioning, intra-service orchestration, and other IT tasks. It is agentless, meaning it does
not require any special agents or security infrastructure on the target systems.
Key Features of Ansible:
1. Playbook:
○ A playbook is a YAML file that describes, in a human-readable format, the tasks
to be executed on the managed hosts.
2. Agentless Architecture:
○ Ansible is agentless, meaning it does not require any software agents on the
managed nodes.
○ It connects to the target systems via SSH (Secure Shell) by default, but it also
supports other methods like Kerberos for connection.
3. Ansible Modules:
○ Modules are small units of code that Ansible pushes to the managed nodes to
perform specific tasks, such as installing a package or copying a file.
4. Inventory:
○ The inventory file is a simple text file that lists the IP addresses or hostnames of
the systems to be managed by Ansible.
○ It allows grouping of hosts (e.g., "web servers", "database servers") to run specific
tasks on them.
Example (the hostnames are illustrative placeholders):
[web-servers]
web1.example.com
web2.example.com
[db-servers]
db1.example.com
5. Configuration Management:
○ Ansible keeps systems in a consistent, desired state by describing that state in
playbooks and applying it repeatedly and safely.
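As an illustrative sketch (not from the original notes), the playbook below installs and starts
nginx on the web-servers group from the inventory example above; the apt module assumes
Debian/Ubuntu targets:

---
# site.yml: a minimal Ansible playbook (illustrative sketch)
- name: Configure web servers
  hosts: web-servers
  become: yes                # escalate privileges for installation
  tasks:
    - name: Install nginx   # assumes Debian/Ubuntu hosts (apt module)
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started

It would be run from the Ansible server with ansible-playbook -i inventory site.yml (the file
names here are illustrative).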
Ansible Workflow:
1. Management Node:
○ The management node is the machine where Ansible is installed and from which
playbooks are executed.
○ The Inventory file is used to identify the hosts and define which playbooks will
be executed on which system.
2. Connect and Execute:
○ Ansible connects to each host via SSH, runs the tasks (modules), and removes the
modules after execution.
3. No Daemons or Servers:
○ Ansible operates without the need for any daemons or servers. It simply connects,
executes the tasks, and disconnects.
Ansible Terminology:
1. Ansible Server:
○ The machine where Ansible is installed and from which all tasks and playbooks
are executed.
2. Modules:
○ Modules are units of code that Ansible runs on managed systems to perform tasks
like installing packages or configuring services.
3. Task:
○ A task is a single unit of work, such as installing one package or restarting one
service.
4. Role:
○ A role is a structured way of organizing playbooks, variables, templates, and files
so that configurations can be reused and shared.
5. Fact:
○ Facts are information gathered from a node. For example, facts can include data
about the operating system, network interfaces, IP addresses, and more.
6. Inventory:
○ The inventory file contains the list of hosts (servers) to be managed by Ansible,
along with their details like IP addresses and user credentials.
7. Play:
○ A play is the execution of a set of tasks in a playbook on a specific group of
hosts.
8. Handler:
○ A handler is a special type of task that only runs when notified by another task
(e.g., restart a service if a configuration file is changed).
9. Notifier:
○ A notifier is attached to a task. If that task changes the state of the system, it
triggers a handler.
10. Tag:
○ A tag is a label used to identify specific tasks or groups of tasks, making it easy to
run them selectively.
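A hedged sketch of how a notifier and handler work together inside a playbook (the file and
service names are invented for illustration):

tasks:
  - name: Deploy the nginx configuration file
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx    # notifier: fires only if this task changed the file

handlers:
  - name: Restart nginx      # handler: runs at most once, at the end of the play
    service:
      name: nginx
      state: restarted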
Ansible Architecture:
1. Inventory:
○ The inventory lists all the hosts Ansible manages, optionally organized into
groups.
2. APIs:
○ Ansible can interact with private or public cloud services via APIs. These APIs
facilitate communication between Ansible and cloud infrastructure.
3. Modules:
○ Ansible uses modules to execute tasks on the target systems. These modules can
reside on any machine and do not require a central database or server.
4. Plugins:
○ Plugins extend Ansible’s core functionality. They add features like custom
logging or new connection methods.
5. Playbooks:
○ Playbooks are written in YAML format. They define the tasks to be executed on
the target systems. Playbooks can be run synchronously or asynchronously.
6. Hosts:
○ Hosts refer to the target systems that Ansible automates. These can be any system
like Linux, Windows, or cloud instances.
7. Networking:
○ Ansible can automate network devices such as routers, switches, and firewalls
using the same agentless, playbook-driven approach.
8. Cloud:
○ Ansible can automate tasks in cloud environments. It can manage cloud resources,
such as instances and storage, and interact with cloud APIs.
9. CMDB (Configuration Management Database):
○ A CMDB is a repository of information about the managed IT infrastructure;
Ansible can use it as a source of inventory data.
Summary:
Ansible combines an agentless, SSH-based design with human-readable YAML playbooks,
reusable modules, and a simple inventory. This simple, human-readable approach makes Ansible
a powerful tool for automating IT operations at scale.
Other Deployment Tools: Chef, SaltStack, and Docker
Here is an easy-to-understand summary of each deployment model mentioned:
1. Chef
Definition: Chef is an open-source configuration management tool that defines system
configurations as Ruby-based "recipes", grouped into "cookbooks".
Brief Description:
● Purpose: Chef automates configuration management for complex, multi-node systems
and cloud environments.
● Key Features:
○ Recipes and Cookbooks: Reusable, code-driven configuration written in Ruby.
○ Cloud-Friendly: Integrates well with cloud platforms for provisioning and
management.
2. SaltStack
Brief Description:
● Purpose: Salt automates infrastructure management using a fast push model, with an
optional agentless mode that works over SSH.
● Key Features:
○ Fault Tolerance: Minions (nodes) can connect to multiple masters for
redundancy.
○ Flexible: Can work with various systems like Agent/Server or Agent-only
models.
○ Scalable: Handles large-scale environments efficiently (up to 10,000 minions per
master).
○ Execution: Commands are executed in parallel on multiple systems, making it
fast and efficient.
● Architecture:
○ A central Salt Master pushes commands and configuration states to Salt Minions
(agents) on the managed nodes, which execute them and report back.
3. Docker
Definition: Docker is a platform for automating the deployment, scaling, and management of
applications using containerization. It allows developers to package applications into lightweight
containers that can run anywhere.
Brief Description:
● Benefits:
○ Lightweight and fast, making it easy to scale applications.
○ Can be deployed across any physical, virtual machine, or cloud infrastructure.
○ Consistent environments across various stages of development, testing, and
production.
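For illustration only, a minimal Dockerfile for containerizing a small Python script might look
like this (app.py is a hypothetical application file):

# Dockerfile (illustrative sketch)
# Use a small official Python base image
FROM python:3.12-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the application into the image
COPY app.py .
# Command executed when the container starts
CMD ["python", "app.py"]

Building with docker build -t myapp . and running with docker run myapp (the tag myapp is
arbitrary) yields the same environment on any machine, which is the consistency benefit listed
above.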
Conclusion:
These tools (Chef, SaltStack, and Docker) each serve different purposes in the DevOps lifecycle:
● Chef is ideal for complex, multi-node systems and cloud environments, focusing on
configuration management through Ruby-based recipes.
● SaltStack is a flexible, scalable solution, particularly effective in large, fault-tolerant
systems with parallel execution.
● Docker is focused on containerization, offering easy portability and scalability for
applications in isolated environments.
Each tool is suited for different organizational needs, from cloud management and configuration
automation to containerized application deployment.