CCL Hard
• Definition
Cloud computing refers to the use of hosted services, such as data storage, servers,
databases, networking, and software, over the internet. The data is stored on physical
servers, which are maintained by a cloud service provider. In cloud computing, computer
system resources, especially data storage and computing power, are available on demand,
without direct management by the user.
• Characteristics:
1. Scalability: With cloud hosting, it is easy to grow and shrink the number and
size of servers based on need. This is done by either increasing or
decreasing the resources in the cloud. This ability to alter plans due to
fluctuations in business size and needs is a superb benefit of cloud computing,
especially when experiencing a sudden growth in demand.
2. Reliability: Rather than being hosted on a single instance of a physical server,
hosting is delivered on a virtual partition that draws its resources, such as disk
space, from an extensive network of underlying physical servers. If one server
goes offline, it has no effect on availability, as the virtual servers
continue to pull resources from the remaining network of servers.
3. Physical Security: The underlying physical servers are still housed within data
centers and so benefit from the security measures that those facilities
implement to prevent people from accessing or disrupting them on-site.
4. Outsourced Management: While you run your business, someone else manages
your computing infrastructure. You do not need to worry about day-to-day
management or hardware degradation.
• Cloud Deployment Models:
1) Public Cloud
• Public clouds are managed by third parties that provide cloud services
over the internet to the public; these services are offered on a
pay-as-you-go billing model.
2) Private Cloud
• A private cloud is cloud infrastructure provisioned for exclusive use by a
single organization, managed either internally or by a third party.
3) Hybrid Cloud
• A hybrid cloud is a heterogeneous distributed system formed by
combining facilities of the public cloud and the private cloud. For this
reason, hybrid clouds are also called heterogeneous clouds.
4) Community Cloud
• A community cloud is shared by several organizations with common
requirements, such as security or compliance, and supports a specific
community.
• Service Models:
➢ Software as a Service (SaaS)
• Software-as-a-Service (SaaS) is a way of delivering services and
applications over the Internet. Instead of installing and maintaining
software, we simply access it via the Internet, freeing ourselves from
complex software and hardware management. It removes the need to
install and run applications on our own computers or in data centers,
eliminating the expense of hardware as well as software maintenance.
• SaaS provides a complete software solution that you purchase on a
pay-as-you-go basis from a cloud service provider. Most SaaS
applications can be run directly from a web browser without any
downloads or installations required. The SaaS applications are
sometimes called Web-based software, on-demand software, or
hosted software.
➢ Platform as a Service (PaaS)
• PaaS is a category of cloud computing that provides a platform and
environment to allow developers to build applications and services
over the internet. PaaS services are hosted in the cloud and accessed
by users simply via their web browser.
• A PaaS provider hosts the hardware and software on its own
infrastructure. As a result, PaaS frees users from having to install
in-house hardware and software to develop or run a new application.
Thus, the development and deployment of the application take
place independent of the hardware.
The consumer does not manage or control the underlying cloud
infrastructure, including the network, servers, operating systems, or
storage, but has control over the deployed applications and possibly
configuration settings for the application-hosting environment. To make
it simple, take the example of an annual day function: you have two
options, either to build a venue or to rent one, but the function itself
is the same.
➢ Infrastructure as a Service (IaaS)
• Infrastructure as a Service (IaaS) is a service model that delivers
computer infrastructure on an outsourced basis to support various
operations. Typically, IaaS provides enterprises with outsourced
infrastructure such as networking equipment, storage devices,
databases, and web servers.
• It is also known as Hardware as a Service (HaaS). IaaS customers pay
on a per-use basis, typically by the hour, week, or month. Some
providers also charge customers based on the amount of virtual
machine space they use.
It simply provides the underlying operating systems, security,
networking, and servers on which customers can develop and deploy
applications and services, along with development tools, databases, etc.
• Advantages:-
1. Cost Efficiency: Cloud computing provides flexible pricing to users through
the pay-as-you-go model. It helps in lessening the capital expenditure on
infrastructure, particularly for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services facilitate the scaling of resources
based on demand. This lets businesses handle varying workloads
efficiently without large hardware investments during periods of low
demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to
data and applications from anywhere over the internet. This encourages
collaborative participation by teams in different locations through shared
documents and projects in real time, resulting in higher-quality, more
productive outputs.
4. Automatic Maintenance and Updates: Cloud providers such as AWS take care
of infrastructure management and keep software current, applying updates
automatically as new versions are released. This guarantees that companies
always have access to the newest technologies and can focus completely on
business operations and innovation.
• Disadvantages:-
1. Data Privacy Concerns: Sensitive data resides with a third-party provider,
raising privacy and compliance questions.
2. Security Risks: Resources exposed over the internet must be protected
against unauthorized access and attacks.
3. Dependency on Service Providers: Reliance on a single provider can lead
to vendor lock-in and migration challenges.
4. Internet Dependency: Services are usable only with a reliable internet
connection.
Conclusion:- Cloud computing has fundamentally transformed the way individuals, businesses,
and organizations manage and deliver services and resources. By offering on-demand access to
a vast array of computing resources—such as storage, processing power, and applications—
cloud computing has proven to be both a cost-effective and scalable solution. It eliminates the
need for large upfront investments in physical hardware, allowing users to pay only for what
they use and scale up or down as required.
Key benefits such as flexibility, scalability, reliability, and security make cloud computing an
essential tool across industries. Companies are leveraging cloud platforms to enhance
operational efficiency, foster innovation, and access cutting-edge technologies like machine
learning, artificial intelligence, and big data analytics.
However, challenges such as data privacy concerns, security risks, and dependency on service
providers still exist. It is crucial for organizations to adopt best practices for managing these
risks, including robust encryption, data governance, and clear SLAs (Service Level Agreements).
In conclusion, cloud computing is not only a vital technology for modern businesses but also a
major driver of digital transformation. As it continues to evolve, it will provide even greater
opportunities for innovation, collaboration, and productivity in the future.
EXPERIMENT 2
Theory:
In cloud computing, virtualization plays a critical role by enabling the pooling of physical
resources and creating multiple isolated virtual environments for different users. This helps in
optimizing resource utilization, reducing costs, and enhancing scalability, which are fundamental
requirements in cloud-based environments. It is one of the main cost-effective, hardware-reducing,
and energy-saving techniques used by cloud providers. Virtualization allows sharing of
a single physical instance of a resource or an application among multiple customers and
organizations at one time.
Types of Virtualization:
• Application Virtualization
• Network Virtualization
• Desktop Virtualization
• Storage Virtualization
• Server Virtualization
• Data Virtualization
Virtualization enables cloud providers to maximize resource utilization, isolate workloads, and
deliver resources on-demand. By abstracting physical resources, cloud providers can offer a
flexible, scalable, and efficient service to end users.
Resource Optimization: Virtualization allows cloud providers to dynamically allocate resources
as needed, improving overall efficiency and reducing wastage of physical hardware.
Scalability: Virtual environments can be created and removed as needed, making it easy to scale
applications up or down to meet demand.
Installation Process:
• Processor settings:
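The installation steps above are captured as screenshots. A minimal command-line sketch of the same setup, assuming Oracle VirtualBox as the hypervisor (the VM name, OS type, memory, and CPU values are illustrative):

# Create and register a new virtual machine
VBoxManage createvm --name "ubuntu-lab" --ostype Ubuntu_64 --register
# Processor and memory settings (the step shown in the screenshots)
VBoxManage modifyvm "ubuntu-lab" --cpus 2 --memory 2048
# Attach an installation ISO and boot the VM
VBoxManage storagectl "ubuntu-lab" --name "IDE" --add ide
VBoxManage storageattach "ubuntu-lab" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ubuntu.iso
VBoxManage startvm "ubuntu-lab"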
EXPERIMENT 3
Theory:
To implement an IaaS service model using Amazon EC2, a user accesses the AWS console,
chooses an Amazon Machine Image (AMI) with the desired operating system, specifies the
instance type (CPU, memory, storage), and launches a virtual machine (VM) on demand. The
user essentially rents computing power and manages only the operating system and the
applications installed on the virtual server, while AWS takes care of the underlying physical
hardware and network infrastructure. This allows flexible scaling and on-demand resource
allocation, making EC2 a prime example of the IaaS model. A command-line sketch follows
the list of key components below.
Key Components in IaaS using EC2:
1. Amazon EC2 Instances: EC2 allows users to launch virtual machines (instances)
with customizable configurations (CPU, memory, storage, and networking) based
on their needs. These instances can run different operating systems, such as Linux
and Windows.
2. Virtual Private Cloud (VPC): A VPC is a logically isolated network within AWS where
users can launch their EC2 instances. It allows users to define their own network
configuration, including IP address ranges, subnets, route tables, and network
gateways.
3. Amazon Machine Images (AMIs): AMIs are pre-configured virtual machine
templates that define the operating system, applications, and settings for an
EC2 instance. Users can create custom AMIs or use AWS-provided AMIs.
4. Elastic Block Store (EBS): Amazon EBS provides block-level storage volumes that
can be attached to EC2 instances. EBS volumes are persistent: the data remains
intact when the instance is stopped, and a volume can even outlive its instance if
it is not configured to delete on termination. EBS is used to store data like
databases, logs, or application files.
5. Elastic Load Balancer (ELB): ELB distributes incoming traffic across multiple EC2
instances to ensure high availability and fault tolerance. This ensures that
applications are scalable and resilient under different load conditions.
6. Auto Scaling: EC2 Auto Scaling enables users to automatically scale the number of
EC2 instances based on demand. This ensures that the required compute capacity
is always available, while also optimizing costs by reducing capacity when demand
is low.
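A minimal AWS CLI sketch of launching such an instance (the AMI ID, key pair name, and security group ID are placeholders, not values from this experiment):

# Launch one t2.micro instance from a chosen AMI
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1
# Confirm the instance is running
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"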
Output:
• Creation of web server
• Code (bash)
• Launch of instance
• Instance type changed
• Completed
• Report
Conclusion:-
The implementation of the IaaS model using Amazon EC2 demonstrates a flexible, scalable,
and cost-efficient approach to cloud computing. EC2 allows users to quickly provision and
manage virtual machines, storage, and networking resources without the need for physical
infrastructure. Features like Auto Scaling and Elastic Load Balancing ensure high availability
and responsiveness to changing demands. Overall, EC2 provides businesses with a reliable
platform to deploy applications efficiently while minimizing costs and maximizing
scalability.
EXPERIMENT 4
Theory:-
AWS Lambda is a serverless compute service that allows developers to run code without
provisioning or managing servers. It automatically manages the compute fleet, offering a high
level of scalability and availability, making it a powerful solution for running event-driven
applications and services in the cloud.
In the traditional model, applications need to run on dedicated servers, or virtual machines,
which require manual provisioning, scaling, and maintenance. With Lambda, the entire backend
infrastructure is abstracted away, allowing developers to focus solely on writing the business
logic for their applications. Lambda takes care of automatically scaling and executing code in
response to events like HTTP requests, file uploads, database changes, etc.
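A minimal sketch of deploying a function from the command line, assuming a trivial Python handler (the function name, role ARN, and file names are illustrative):

# Write a trivial handler; Lambda passes the triggering event to it
cat > lambda_function.py <<'EOF'
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "Hello from Lambda"}
EOF
# Package and create the function (the role ARN is a placeholder)
zip function.zip lambda_function.py
aws lambda create-function \
    --function-name hello-lambda \
    --runtime python3.12 \
    --role arn:aws:iam::123456789012:role/lambda-basic-execution \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip
# Invoke it once and print the response
aws lambda invoke --function-name hello-lambda out.json && cat out.json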
Limitations of AWS Lambda:
1. Cold Start Delay – First-time execution or infrequent function calls may experience a
slight delay.
2. Execution Time Limit – Each Lambda function has a maximum execution time limit (15
minutes per execution).
3. Resource Constraints – Limited memory, CPU, and disk space may impact performance
for resource-intensive applications.
4. Vendor Lock-in – Applications built with AWS Lambda may face challenges when
migrating to other cloud providers.
5. Debugging Challenges – Traditional debugging and monitoring tools are limited,
making it difficult to trace issues in production.
Implementation:
• Choose Add.
➢ Task 3:- Configure the Lambda function
• In the Code source pane
EXPERIMENT 5
Theory:
Introduction to PaaS
Platform as a Service (PaaS) is a cloud computing model that provides a platform and
environment for developers to build, deploy, and manage applications without the
complexity of maintaining underlying infrastructure (servers, networks, storage, etc.). PaaS
abstracts away the need to manage hardware, operating systems, and middleware, allowing
developers to focus solely on application logic and code. AWS Elastic Beanstalk is Amazon
Web Services' (AWS) PaaS offering, simplifying the process of deploying and scaling web
applications and services.
AWS Elastic Beanstalk
This activity provides you with an Amazon Web Services (AWS) account where an AWS
Elastic Beanstalk environment has been pre-created for you. You will deploy code to it and
observe the AWS resources that make up the Elastic Beanstalk environment.
AWS Elastic Beanstalk is a fully managed PaaS that enables developers to easily deploy and
manage applications in the cloud. It supports a variety of programming languages and
frameworks, including Java, .NET, Node.js, Python, Ruby, PHP, and Go. Elastic Beanstalk
automatically handles the deployment, capacity provisioning, load balancing, and
auto-scaling of your application, reducing the operational overhead for developers.
Elastic Beanstalk uses AWS resources like EC2 (Elastic Compute Cloud) for hosting
applications, S3 (Simple Storage Service) for storing static assets, and RDS (Relational
Database Service) for database management, among others. It allows developers to focus
on writing code while AWS manages the infrastructure and services required to run the
application.
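A minimal sketch of a typical deployment flow with the Elastic Beanstalk CLI (the application name, platform, region, and environment name are illustrative assumptions):

# Initialize an Elastic Beanstalk application in the project directory
eb init -p python-3.11 my-eb-app --region us-east-1
# Create an environment; Beanstalk provisions EC2, load balancing, and scaling
eb create my-eb-env
# Deploy a new application version after code changes
eb deploy
# Open the running application and check environment health
eb open
eb status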
Key Features of AWS Elastic Beanstalk
Automatic Scaling: Elastic Beanstalk automatically scales the application based on traffic
and resource needs. It provisions new instances or reduces the number of instances to
ensure the application remains available and performant.
Integrated Monitoring: Elastic Beanstalk integrates with AWS CloudWatch, providing
monitoring, logging, and metrics to help developers track the health and performance of
their applications.
Version Management: Developers can deploy different versions of an application and roll
back to previous versions easily if necessary.
Environment Management: Elastic Beanstalk allows you to create and manage
environments for different stages of development, such as development, staging, and
production, with minimal effort.
Advantages of AWS Elastic Beanstalk
• Ease of Use: Simplifies deployment and management with minimal configuration.
• Automatic Scaling: Automatically adjusts infrastructure based on demand.
• Cost Efficiency: Pay-as-you-go model, reducing costs by scaling down when traffic is
low.
• Security: Integrates with AWS security features like IAM and VPC.
• Managed Service: No need to manage underlying infrastructure; AWS handles
provisioning, scaling, and maintenance.
Conclusion:-
AWS Elastic Beanstalk is a powerful PaaS solution that simplifies the deployment and
management of web applications. It abstracts much of the complexity of managing
infrastructure and offers automatic scaling, load balancing, and easy integration with other
AWS services. While it provides many advantages, such as ease of use and cost-efficiency, it
may not be suitable for every use case due to its limited customization options and
dependency on the AWS ecosystem. Understanding the trade-offs and requirements of your
application is crucial when deciding whether Elastic Beanstalk is the right choice.
EXPERIMENT 6
Theory:
AWS Identity and Access Management (IAM) is a web service that enables Amazon Web
Services (AWS) customers to manage users and user permissions in AWS. With IAM, you can
centrally manage users, security credentials such as access keys, and permissions that control
which AWS resources users can access.
• Manage IAM Users and their access: You can create Users and assign them individual
security credentials (access keys, passwords, and multi-factor authentication devices).
You can manage permissions to control which operations a User can perform.
• Manage IAM Roles and their permissions: An IAM Role is similar to a User, in that it is
an AWS identity with permission policies that determine what the identity can and
cannot do in AWS. However, instead of being uniquely associated with one person, a
Role is intended to be assumable by anyone who needs it.
• Manage federated users and their permissions: You can enable identity federation to
allow existing users in your enterprise to access the AWS Management Console, to call
AWS APIs and to access resources, without the need to create an IAM User for each
identity.
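A minimal AWS CLI sketch of the users-and-groups flow described above (the user, group, and policy choices are illustrative, not the exact ones from this exercise):

# Create a group and attach a managed policy granting read-only S3 access
aws iam create-group --group-name s3-support
aws iam attach-group-policy \
    --group-name s3-support \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Create a user with a console password and add the user to the group
aws iam create-user --user-name user-1
aws iam create-login-profile --user-name user-1 --password 'TempP@ssw0rd!' --password-reset-required
aws iam add-user-to-group --user-name user-1 --group-name s3-support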
Output:
Conclusion:
Followed a real-world scenario, adding users to groups with specific capabilities enabled.
EXPERIMENT 7
Theory:
S3 is a highly scalable object storage service offered by AWS, where data is stored as "objects"
within "buckets".
Object: A file with its associated metadata (like file name, size, content type) stored in a bucket.
Storage Classes: Different tiers for storing data based on access frequency and cost (e.g.,
Standard, Standard-IA, Glacier).
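A minimal AWS CLI sketch of the bucket-and-website flow shown below (the bucket name and region are placeholders; serving the site publicly additionally requires a suitable bucket policy):

# Create a bucket (bucket names are globally unique; this one is a placeholder)
aws s3 mb s3://my-demo-site-bucket --region us-east-1
# Upload site files and enable static website hosting
aws s3 cp index.html s3://my-demo-site-bucket/
aws s3 website s3://my-demo-site-bucket --index-document index.html --error-document error.html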
Screenshots:
Website
Conclusion:
Demonstrated Amazon S3 object storage by creating a bucket, uploading objects, and hosting
a static website from it.
EXPERIMENT 8
Objective:
To understand the concept of Database as a Service running on the cloud and to demonstrate
CRUD operations on different SQL and NoSQL databases running on the cloud, such as AWS
RDS, Azure SQL, Mongo Lab, or Firebase.
Theory:
Database as a Service (DBaaS) is a cloud-based service that allows users to access, manage,
and operate databases without the need to handle the underlying hardware, software, or
infrastructure. DBaaS provides scalability, high availability, automated backups, security
features, and reduced operational complexity, making it an attractive choice for businesses
and developers.
Amazon Relational Database Service (RDS) is one of the most popular DBaaS offerings,
provided by Amazon Web Services (AWS). It supports various database engines, including
MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server.
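A minimal sketch of provisioning a MySQL instance on RDS and running CRUD statements against it (the identifier, credentials, endpoint, and table are illustrative):

# Provision a small MySQL instance (identifier and credentials are placeholders)
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'MySecurePass123' \
    --allocated-storage 20
# Once available, connect with the MySQL client (endpoint is a placeholder)
mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p'MySecurePass123' <<'SQL'
CREATE DATABASE lab; USE lab;
CREATE TABLE students (id INT PRIMARY KEY, name VARCHAR(50));
INSERT INTO students VALUES (1, 'Asha');          -- Create
SELECT * FROM students;                           -- Read
UPDATE students SET name = 'Asha R' WHERE id = 1; -- Update
DELETE FROM students WHERE id = 1;                -- Delete
SQL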
Output:
Conclusion:-
Implementing DBaaS using Amazon RDS showcases the advantages of cloud-based database
management, including ease of deployment, automation, scalability, and cost-effectiveness.
It also emphasizes the importance of security practices and performance monitoring in
managing cloud databases.
EXPERIMENT 9
Theory:
Security as a Service (SECaaS) is a cloud-based security model in which security functions are
provided as managed services on a subscription basis. Instead of relying on traditional
on-premise security solutions, organizations can leverage cloud-native security tools to
protect their data, applications, and infrastructure dynamically.
Amazon Web Services (AWS) offers a comprehensive set of SECaaS tools to ensure data
protection, threat detection, and secure access control for cloud workloads. This experiment
focuses on implementing SECaaS to secure a Windows EC2 instance, mitigating both inbound
and outbound security risks.
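A minimal AWS CLI sketch of the inbound/outbound hardening described above (the group name, VPC ID, group ID, and administrator IP are placeholders):

# Create a security group for the Windows instance
aws ec2 create-security-group \
    --group-name windows-secaas-sg \
    --description "Locked-down group for a Windows EC2 instance" \
    --vpc-id vpc-0123456789abcdef0
# Allow RDP (port 3389) inbound only from the administrator's IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3389 --cidr 203.0.113.10/32
# Tighten outbound traffic by removing the default allow-all egress rule
aws ec2 revoke-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'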
Screenshots:
Conclusion:
Creating a Windows EC2 instance on AWS and securing it using Security as a Service (SECaaS)
improves overall cloud security. AWS provides robust tools to protect against both inbound
and outbound threats.
EXPERIMENT 10
Theory:
Containerization is a method that allows applications and their dependencies to be packaged
together in isolated environments called containers. Containers run on the same host OS but
remain independent, ensuring consistency across different environments (development,
testing, production).
Docker is an open-source platform that automates the deployment, scaling, and management
of applications inside containers. It makes it easier to package and distribute applications along
with all their dependencies, ensuring they run the same way everywhere.
Key Components of Docker:
• Docker Images: A read-only template with the application and its dependencies.
• Docker Containers: Running instances of Docker images.
• Dockerfile: A text file that defines how to build a Docker image.
• Docker Hub: A repository for sharing and storing Docker images.
Advantages of Docker:
1. Portability: Containers run consistently across different environments.
2. Efficiency: Containers are lightweight, start quickly, and share the host OS kernel.
3. Isolation: Each container runs independently without interfering with others.
4. Scalability: Containers can be easily replicated and managed using orchestration
tools like Kubernetes.
Docker simplifies application deployment, scaling, and management by using containers. It
ensures consistency across environments, reduces overhead, and is a critical tool in modern
software development, especially in microservices and DevOps practices.
Implementation:
2. Check the Docker version: docker --version
5. List local Docker images: docker images
9. List running containers: docker ps
14. Change into the project directory: cd docker-node-app
15. Build the image: docker build -t my-node-app:1.0 .
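To make the surviving steps reproducible end to end, the following sketch assumes a simple Node.js app in docker-node-app (the Dockerfile contents, port, and server.js entry point are illustrative assumptions):

# Inside docker-node-app/, create a minimal Dockerfile
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
# Build the image and run a container, mapping port 3000
docker build -t my-node-app:1.0 .
docker run -d -p 3000:3000 --name my-node-app my-node-app:1.0
# Verify the container is running
docker ps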
Conclusion:
Built and ran a containerized Node.js application using Docker, demonstrating consistent and
portable deployment across environments.
Installation:
To install the latest minikube stable release on x86-64 Windows, download and run the
minikube .exe installer.
Start your cluster: from a terminal with administrator access (but not logged in as root), run:
minikube start
If minikube fails to start, see the drivers page for help setting up a compatible container or
virtual-machine manager.
If you already have kubectl installed (see documentation), you can now use it to access your shiny
new cluster:
kubectl get po -A
Alternatively, minikube can download the appropriate version of kubectl, and you should be
able to use it like this:
minikube kubectl -- get po -A
You can also make your life easier by adding the following alias to your shell config (for more
details see the kubectl documentation):
alias kubectl="minikube kubectl --"
Initially, some services, such as the storage-provisioner, may not yet be in a Running state. This is a
normal condition during cluster bring-up, and will resolve itself momentarily. For additional insight
into your cluster state, minikube bundles the Kubernetes Dashboard, allowing you to get easily
acclimated to your new environment:
minikube dashboard
Deploy applications:
Create a sample deployment and expose it on port 8080:
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
The easiest way to access this service is to let minikube launch a web browser for you:
minikube service hello-minikube
Manage your cluster:
Pause Kubernetes without impacting deployed applications:
minikube pause
Halt the cluster:
minikube stop
Delete all of the minikube clusters:
minikube delete --all