
EXPERIMENT 1

Aim:- Study and Overview of Cloud Computing

• Definition
Cloud computing refers to the use of hosted services, such as data storage, servers,
databases, networking, and software, over the internet. The data is stored on physical
servers that are maintained by a cloud service provider. Computer system resources,
especially data storage and computing power, are available on demand, without direct
management by the user.

• Characteristics:
1. Scalability: With cloud hosting, it is easy to grow and shrink the number and
size of servers based on need. This is done by either increasing or
decreasing the resources in the cloud. This ability to alter plans due to
fluctuations in business size and needs is a superb benefit of cloud computing,
especially when experiencing sudden growth in demand.

2. Save Money: An advantage of cloud computing is the reduction in hardware
costs. Instead of purchasing in-house equipment, hardware needs are left to
the vendor. For companies that are growing rapidly, new hardware can be
large, expensive, and inconvenient. Cloud computing alleviates these issues
because resources can be acquired quickly and easily. Even better, the cost of
repairing or replacing equipment is passed to the vendors. Along with purchase
costs, off-site hardware cuts internal power costs and saves space. Large data
centers can take up precious office space and produce a large amount of heat.
Moving to cloud applications or storage can help maximize space and
significantly cut energy expenditures.

3. Reliability: Rather than being hosted on one single instance of a physical server,
hosting is delivered on a virtual partition that draws its resources, such as disk
space, from an extensive network of underlying physical servers. If one server
goes offline, it will have no effect on availability, as the virtual servers will
continue to pull resources from the remaining network of servers.
4. Physical Security: The underlying physical servers are still housed within data
centers and so benefit from the security measures that those facilities
implement to prevent people from accessing or disrupting them on-site.

5. Outsource Management: When you are running the business, someone else
manages your computing infrastructure. You do not need to worry about
day-to-day management or hardware degradation.

• Types of Deployment Model:-

1) Public Cloud

• Public clouds are managed by third parties that provide cloud services
over the internet to the public; these services are available on a
pay-as-you-go billing model.

• The fundamental characteristic of public clouds is multitenancy. A public
cloud is meant to serve multiple users, not a single customer. Each user
requires a virtual computing environment that is separated, and most likely
isolated, from other users.

Examples: Amazon EC2, IBM, Azure, GCP


2) Private Cloud

• Private clouds are distributed systems that work on private infrastructure
and provide users with dynamic provisioning of computing resources.
Instead of a pay-as-you-go model, private clouds may use other
schemes that manage cloud usage and proportionally bill the
different departments or sections of an enterprise. Private cloud
providers include HP Data Centers, Ubuntu, Elastic-Private cloud,
Microsoft, etc.

• Examples: VMware vCloud Suite, OpenStack, Cisco Secure Cloud, Dell
Cloud Solutions, HP Helion Eucalyptus

3) Hybrid Cloud
• A hybrid cloud is a heterogeneous distributed system formed by
combining facilities of the public cloud and private cloud. For this reason,
they are also called heterogeneous clouds.

• Examples: AWS Outposts, Azure Stack, Google Anthos, IBM Cloud
Satellite, Oracle Cloud at Customer

4) Community Cloud

• Community clouds are distributed systems created by integrating the
services of different clouds to address the specific needs of an industry,
a community, or a business sector. However, sharing responsibilities among
the organizations is difficult.
• In the community cloud, the infrastructure is shared between
organizations that have shared concerns or tasks. An organization or
a third party may manage the cloud.
• Examples: CloudSigma, Nextcloud, Synology C2, OwnCloud,
Stratoscale

• Types of services Model:-

➢ Software as a Service (SaaS)
• Software-as-a-Service (SaaS) is a way of delivering services and
applications over the Internet. Instead of installing and maintaining
software, we simply access it via the Internet, freeing ourselves from
complex software and hardware management. It removes the need
to install and run applications on our own computers or in data
centers, eliminating the expense of hardware as well as software
maintenance.
• SaaS provides a complete software solution that you purchase on a
pay-as-you-go basis from a cloud service provider. Most SaaS
applications can be run directly from a web browser without any
downloads or installations required. SaaS applications are
sometimes called web-based software, on-demand software, or
hosted software.

➢ Platform as a Service (PaaS)
• PaaS is a category of cloud computing that provides a platform and
environment to allow developers to build applications and services
over the internet. PaaS services are hosted in the cloud and accessed
by users simply via their web browser.
• A PaaS provider hosts the hardware and software on its own
infrastructure. As a result, PaaS frees users from having to install
in-house hardware and software to develop or run a new application.
Thus, the development and deployment of the application take
place independent of the hardware.
The consumer does not manage or control the underlying cloud
infrastructure, including network, servers, operating systems, or storage,
but has control over the deployed applications and possibly
configuration settings for the application-hosting environment. To make
it simple, take the example of an annual day function: you have two
options, either to create a venue or to rent a venue, but the function is
the same.

➢ Infrastructure as a Service (IaaS)
• Infrastructure as a Service (IaaS) is a service model that delivers
computer infrastructure on an outsourced basis to support various
operations. Typically, IaaS provides infrastructure such as networking
equipment, devices, databases, and web servers to enterprises on an
outsourced basis.
• It is also known as Hardware as a Service (HaaS). IaaS customers pay
on a per-use basis, typically by the hour, week, or month. Some
providers also charge customers based on the amount of virtual
machine space they use.
It simply provides the underlying operating systems, security,
networking, and servers needed to develop applications and services
and to deploy development tools, databases, etc.
• Advantages:-

1. Cost Efficiency: Cloud computing provides flexible pricing to users with the
principal pay-as-you-go model. It helps lessen capital expenditure on
infrastructure, particularly for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services facilitate the scaling of resources
based on demand. This ensures that businesses can handle varying
workloads efficiently without large investments in hardware during
periods of low demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to
data and applications from anywhere over the internet. This encourages
collaborative team participation from different locations through shared
documents and projects in real time, resulting in quality and productive
outputs.
4. Automatic Maintenance and Updates: The cloud provider takes care of
infrastructure management and keeps software up to date, automatically
applying updates as new versions are released. This guarantees that companies
always have access to the newest technologies and can focus completely on
business operations and innovation.
• Disadvantages:-

1. Security Concerns: Storing sensitive data on external servers raises security
concerns, which is one of the main drawbacks of cloud computing.
2. Downtime and Reliability: Even though cloud services are usually dependable,
they may also have unexpected interruptions and downtime. These can be caused
by server problems, network issues, or maintenance disruptions at the cloud
provider, which can negatively affect business operations and prevent users from
accessing their applications.
3. Dependency on Internet Connectivity: Cloud computing services rely heavily on
internet connectivity. Users need a stable, high-speed internet connection to
access and use cloud resources. In regions with limited internet connectivity,
users may face challenges in accessing their data and applications.
4. Cost Management Complexity: The pay-as-you-go pricing model is one of the
main benefits of cloud services, but it also leads to cost management
complexity. Without careful monitoring and resource-usage optimization,
organizations may end up with unexpected costs as their usage scales.
Understanding and controlling the usage of cloud services requires ongoing
attention.

Conclusion:- Cloud computing has fundamentally transformed the way individuals, businesses,
and organizations manage and deliver services and resources. By offering on-demand access to
a vast array of computing resources—such as storage, processing power, and applications—
cloud computing has proven to be both a cost-effective and scalable solution. It eliminates the
need for large upfront investments in physical hardware, allowing users to pay only for what
they use and scale up or down as required.

Key benefits such as flexibility, scalability, reliability, and security make cloud computing an
essential tool across industries. Companies are leveraging cloud platforms to enhance
operational efficiency, foster innovation, and access cutting-edge technologies like machine
learning, artificial intelligence, and big data analytics.

However, challenges such as data privacy concerns, security risks, and dependency on service
providers still exist. It is crucial for organizations to adopt best practices for managing these
risks, including robust encryption, data governance, and clear SLAs (Service Level Agreements).

In conclusion, cloud computing is not only a vital technology for modern businesses but also a
major driver of digital transformation. As it continues to evolve, it will provide even greater
opportunities for innovation, collaboration, and productivity in the future.
EXPERIMENT 2

Aim: To study and implement Hosted Virtualization using VirtualBox & KVM.

Theory:

Virtualization is the process of creating a virtual version of something, such as a server,
storage device, network resource, or operating system, within a computing environment. It
abstracts the physical hardware of a computer and allows multiple virtual instances or
environments to run on top of it, providing a more efficient, flexible, and scalable use of
resources.

In cloud computing, virtualization plays a critical role by enabling the pooling of physical
resources and creating multiple isolated virtual environments for different users. This helps in
optimizing resource utilization, reducing costs, and enhancing scalability, which are fundamental
requirements in cloud-based environments. It is one of the main cost-effective, hardware-reducing,
and energy-saving techniques used by cloud providers. Virtualization allows sharing of
a single physical instance of a resource or an application among multiple customers and
organizations at one time.

Types of Virtualization:

• Application Virtualization

• Network Virtualization

• Desktop Virtualization

• Storage Virtualization

• Server Virtualization

• Data Virtualization

Importance of Virtualization in Cloud Computing

Virtualization enables cloud providers to maximize resource utilization, isolate workloads, and
deliver resources on-demand. By abstracting physical resources, cloud providers can offer a
flexible, scalable, and efficient service to end users.
Resource Optimization: Virtualization allows cloud providers to dynamically allocate resources
as needed, improving overall efficiency and reducing wastage of physical hardware.

Scalability: Virtual environments can be created and removed as needed, making it easy to scale
applications up or down to meet demand.

Cost-Efficiency: By pooling resources, virtualization enables more efficient use of hardware,
reducing the need for costly physical infrastructure and decreasing overhead.

Installation Process:

• Selection of Linux instance, with the version being Ubuntu 64-bit

• Selecting 8192 MB base memory

• Processor settings:

• Choosing the downloaded Ubuntu ISO as the virtual disk to be attached to the instance


• Instance created:

• Install Ubuntu inside instance


• Choose erase disk and install:

• Instance created after installation:


• Pinging from Host to Guest

• Pinging from guest to host
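The GUI steps above can also be scripted from the command line. Below is a minimal sketch using VBoxManage (and the KVM equivalent via virt-install); the VM name, disk size, and ISO filename are illustrative assumptions, not the exact lab values:

# create and register the VM, matching the GUI settings above
VBoxManage createvm --name ubuntu-vm --ostype Ubuntu_64 --register
VBoxManage modifyvm ubuntu-vm --memory 8192 --cpus 2

# create a 20 GB virtual disk and attach it along with the Ubuntu ISO
VBoxManage createmedium disk --filename ubuntu-vm.vdi --size 20480
VBoxManage storagectl ubuntu-vm --name SATA --add sata
VBoxManage storageattach ubuntu-vm --storagectl SATA --port 0 --device 0 --type hdd --medium ubuntu-vm.vdi
VBoxManage storageattach ubuntu-vm --storagectl SATA --port 1 --device 0 --type dvddrive --medium ubuntu-desktop-amd64.iso

# boot the VM and continue with the Ubuntu installer
VBoxManage startvm ubuntu-vm

# the KVM equivalent, using virt-install (requires a running libvirt daemon)
virt-install --name ubuntu-vm --memory 8192 --vcpus 2 --disk size=20 --cdrom ubuntu-desktop-amd64.iso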

Conclusion: Thus, we have studied and implemented hosted virtualization using VirtualBox.


EXPERIMENT 3

Aim: Implementation of IaaS Service Model using EC2

Theory:

To implement an IaaS service model using Amazon EC2, a user accesses the AWS
console, chooses an Amazon Machine Image (AMI) with the desired operating system,
specifies the instance type (CPU, memory, storage), and launches a virtual machine (VM) on
demand. The user is essentially renting computing power and manages only the operating
system and the applications installed on the virtual server, while AWS takes care of the
underlying physical hardware and network infrastructure. This allows flexible scaling and
on-demand resource allocation, making EC2 a prime example of the IaaS model.
Key Components in IaaS using EC2:

1. Amazon EC2 Instances: EC2 allows users to launch virtual machines (instances)
with customizable configurations (CPU, memory, storage, and networking) based
on their needs. These instances can run different operating systems, such as Linux
and Windows.
2. Virtual Private Cloud (VPC): A VPC is a logically isolated network within AWS where
users can launch their EC2 instances. It allows users to define their own network
configuration, including IP address ranges, subnets, route tables, and network
gateways.
3. Amazon Machine Images (AMIs): AMIs are pre-configured virtual machine
templates that define the operating system, applications, and settings for an
EC2 instance. Users can create custom AMIs or use AWS-provided AMIs.
4. Elastic Block Store (EBS): Amazon EBS provides block-level storage volumes that
can be attached to EC2 instances. EBS volumes are persistent, meaning the data
remains intact even when the instance is stopped or terminated. EBS is used to
store data like databases, logs, or application files.
5. Elastic Load Balancer (ELB): ELB distributes incoming traffic across multiple EC2
instances to ensure high availability and fault tolerance. This ensures that
applications are scalable and resilient under different load conditions.
6. Auto Scaling: EC2 Auto Scaling enables users to automatically scale the number of
EC2 instances based on demand. This ensures that the required compute capacity
is always available, while also optimizing costs by reducing capacity when demand
is low.
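For reference, the console workflow captured in the output below can also be driven from the AWS CLI. A minimal sketch follows; the AMI ID, key pair name, and security group ID are illustrative placeholders:

# launch one t2.micro instance from a chosen AMI
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-key-pair --security-group-ids sg-0123456789abcdef0 --count 1

# open HTTP (port 80) in the security group, as done in the lab
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0

# list instances with their public IPs and states
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress,State.Name]'

# change the instance type (the instance must be stopped first)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=t2.small
aws ec2 start-instances --instance-ids i-0123456789abcdef0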
Output:
• Creation of web server

• Application and OS images


• Key-pair

• Assigning of public subnet and description


• Termination Protection

• User-data bash script

• Launch of instance
• Instance type changed

• Inbound rules changed to HTTP

• Web server hosted


• Instance type changed to t2.micro

• Volume details changed

• Instance type changed to t2.small


• Service quotas: running on-demand instances

• Completed
• Report
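The user-data script referenced above typically installs and starts a web server at first boot. A representative sketch, assuming an Amazon Linux AMI (the exact lab script may differ):

#!/bin/bash
# user data runs as root on first boot
yum update -y
yum install -y httpd
systemctl enable httpd
systemctl start httpd
echo "<h1>Hello from EC2</h1>" > /var/www/html/index.html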

Conclusion:-

The implementation of the IaaS model using Amazon EC2 demonstrates a flexible, scalable,
and cost-efficient approach to cloud computing. EC2 allows users to quickly provision and
manage virtual machines, storage, and networking resources without the need for physical
infrastructure. Features like Auto Scaling and Elastic Load Balancing ensure high availability
and responsiveness to changing demands. Overall, EC2 provides businesses with a reliable
platform to deploy applications efficiently while minimizing costs and maximizing
scalability.
EXPERIMENT 4

Aim:- Implementation of Lambda Service

Theory:-

AWS Lambda is a serverless compute service that allows developers to run code without
provisioning or managing servers. It automatically manages the compute fleet offering a high
level of scalability and availability, making it a powerful solution for running event-driven
applications and services in the cloud.

In the traditional model, applications need to run on dedicated servers, or virtual machines,
which require manual provisioning, scaling, and maintenance. With Lambda, the entire backend
infrastructure is abstracted away, allowing developers to focus solely on writing the business
logic for their applications. Lambda takes care of automatically scaling and executing code in
response to events like HTTP requests, file uploads, database changes, etc.

Key Concepts in AWS Lambda:

1. Serverless Computing: AWS Lambda represents a serverless computing model where
users do not need to worry about provisioning, scaling, or managing servers. The code is
run only when needed, and users only pay for the compute time they use.
2. Lambda Functions: A Lambda function is the core unit of execution. It is the code
(written in languages like Python, Node.js, Java, etc.) that is executed in response to
events. Lambda functions can handle a wide variety of tasks, from simple operations to
more complex workflows.
3. Event Triggers: AWS Lambda is event-driven, meaning it runs in response to specific
events. These events can come from a variety of AWS services like API Gateway (HTTP
requests), S3 (file uploads), DynamoDB (data changes), or CloudWatch (scheduled
tasks). Lambda can trigger automatically when the event occurs.
4. Lambda Execution Role: Each Lambda function is associated with an execution role that
defines permissions. This role allows Lambda to interact with other AWS services
securely, based on the permissions specified in the policy attached to the role.
5. Scaling and Concurrency: Lambda automatically scales by running multiple instances of
the function in parallel, depending on the incoming request load. This means that even
if there is a sudden surge in traffic, Lambda can scale quickly and efficiently without
requiring any manual intervention.
6. Cold Starts: A cold start occurs when AWS Lambda needs to initialize an environment
for a function (e.g., the first time it is invoked after being deployed or after being idle for
a period). Cold starts can result in slightly higher latency, but AWS continuously
improves Lambda's startup time.
7. Cost Efficiency: AWS Lambda charges based on the number of requests and the
duration of execution (measured in milliseconds). Unlike traditional services where you
pay for idle time and server maintenance, Lambda only charges for actual function
execution, making it very cost-efficient for variable workloads.

Advantages of AWS Lambda

1. Serverless Architecture – No need to manage or maintain servers. AWS handles all
infrastructure needs.
2. Automatic Scaling – Lambda functions scale automatically based on the number of
incoming requests.
3. Cost-Effective – Users are charged only for the execution time and the number
of requests, reducing unnecessary costs.
4. Event-Driven Execution – Easily integrates with other AWS services and triggers
functions based on events.
5. Improved Productivity – Developers can focus on writing code instead of managing
infrastructure.
6. Security & Isolation – Each function runs in a separate execution environment,
enhancing security.

Disadvantages of AWS Lambda

1. Cold Start Delay – First-time execution or infrequent function calls may experience a
slight delay.
2. Execution Time Limit – Each Lambda function has a maximum execution time limit (15
minutes per execution).
3. Resource Constraints – Limited memory, CPU, and disk space may impact performance
for resource-intensive applications.
4. Vendor Lock-in – Applications built with AWS Lambda may face challenges when
migrating to other cloud providers.
5. Debugging Challenges – Traditional debugging and monitoring tools are limited,
making it difficult to trace issues in production.
Implementation

➢ Task 1: Create a Lambda function


• Choose Create a function.

• In the Create function screen, configure these settings:


1. Choose Author from scratch
Function name: myStopinator
Runtime: Python 3.11
2. Choose Change default execution role
Execution role: Use an existing role
Existing role: From the dropdown list, choose myStopinatorRole

• Choose Create function.

➢ Task 2: Configure the trigger


• Choose Add trigger.
Choose the Select a trigger dropdown menu, and select EventBridge
(CloudWatch Events).
• For the rule, choose Create a new rule and configure these settings:
Rule name: everyMinute
Rule type: Schedule expression
Schedule expression: rate(1 minute)

• Choose Add.
➢ Task 3:- Configure the Lambda function
• In the Code source pane, update the function code

• Choose Monitor Tab


➢ Task 4: Verify that the Lambda function worked
• Return to the Amazon EC2 console browser tab.
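The same function and schedule can also be created from the AWS CLI. A hedged sketch follows; the account ID, region, and role ARN are illustrative, and the Python handler is assumed to be packaged in function.zip:

# create the function from a zipped Python handler
aws lambda create-function --function-name myStopinator --runtime python3.11 --role arn:aws:iam::123456789012:role/myStopinatorRole --handler lambda_function.lambda_handler --zip-file fileb://function.zip

# EventBridge rule that fires every minute
aws events put-rule --name everyMinute --schedule-expression 'rate(1 minute)'

# allow EventBridge to invoke the function
aws lambda add-permission --function-name myStopinator --statement-id everyMinuteInvoke --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:123456789012:rule/everyMinute

# attach the function as the rule's target
aws events put-targets --rule everyMinute --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:myStopinator'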
Conclusion:-
The implementation of AWS Lambda demonstrates how serverless computing enables efficient,
scalable, and cost-effective application deployment. Despite certain limitations like cold start
latency and resource constraints, its benefits in automation, cost reduction, and seamless
scalability make it a valuable solution for modern cloud-based applications.
EXPERIMENT 5

Aim: To study and implement Platform as Service using Elastic Beanstalk

Theory:

Introduction to PaaS
Platform as a Service (PaaS) is a cloud computing model that provides a platform and
environment for developers to build, deploy, and manage applications without the
complexity of maintaining underlying infrastructure (servers, networks, storage, etc.). PaaS
abstracts away the need to manage hardware, operating systems, and middleware, allowing
developers to focus solely on application logic and code. AWS Elastic Beanstalk is Amazon
Web Services' (AWS) PaaS offering, simplifying the process of deploying and scaling web
applications and services.
AWS Elastic Beanstalk
This activity provides you with an Amazon Web Services (AWS) account where an AWS
Elastic Beanstalk environment has been pre-created for you. You will deploy code to it and
observe the AWS resources that make up the Elastic Beanstalk environment.
AWS Elastic Beanstalk is a fully managed PaaS that enables developers to easily deploy and
manage applications in the cloud. It supports a variety of programming languages and
frameworks, including Java, .NET, Node.js, Python, Ruby, PHP, and Go. Elastic Beanstalk
automatically handles the deployment, capacity provisioning, load balancing, and
auto-scaling of your application, reducing the operational overhead for developers.
Elastic Beanstalk uses AWS resources like EC2 (Elastic Compute Cloud) for hosting
applications, S3 (Simple Storage Service) for storing static assets, and RDS (Relational
Database Service) for database management, among others. It allows developers to focus
on writing code while AWS manages the infrastructure and services required to run the
application.
Key Features of AWS Elastic Beanstalk
Automatic Scaling: Elastic Beanstalk automatically scales the application based on traffic
and resource needs. It provisions new instances or reduces the number of instances to
ensure the application remains available and performant.
Integrated Monitoring: Elastic Beanstalk integrates with AWS CloudWatch, providing
monitoring, logging, and metrics to help developers track the health and performance of
their applications.
Version Management: Developers can deploy different versions of an application and roll
back to previous versions easily if necessary.
Environment Management: Elastic Beanstalk allows you to create and manage
environments for different stages of development, such as development, staging, and
production, with minimal effort.
Advantages of AWS Elastic Beanstalk
• Ease of Use: Simplifies deployment and management with minimal configuration.
• Automatic Scaling: Automatically adjusts infrastructure based on demand.
• Cost Efficiency: Pay-as-you-go model, reducing costs by scaling down when traffic is
low.
• Security: Integrates with AWS security features like IAM and VPC.
• Managed Service: No need to manage underlying infrastructure; AWS handles
provisioning, scaling, and maintenance.

Disadvantages of AWS Elastic Beanstalk


• Limited Customization: Less control over infrastructure configuration.
• AWS Lock-in: Ties you to AWS, making future migration to other platforms difficult.
• Complex for Advanced Users: May restrict highly customized or advanced
configurations.
• Resource Limits: Cannot customize resources as freely as with IaaS solutions.
• Debugging Challenges: Troubleshooting can be harder due to the abstracted
environment.
• Learning Curve: New users may find it challenging to get started with AWS tools.
Implementation:

➢ Task 1: Access the Elastic Beanstalk environment


• In the console, in the search box to the right of Services, search for
and choose Elastic Beanstalk.

• Under the Environment name column, choose the name of the environment.

• Test access to the environment.


➢ Task 2: Deploy a sample application to Elastic Beanstalk
• To download a sample application, choose this link:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples/tomcat.zip
• Back in the Elastic Beanstalk Dashboard, choose Upload and Deploy.

• After the deployment is complete, choose Configuration in the left pane.
• Choose Monitoring.

➢ Task 3: Explore the AWS resources that support your application


• In the console, in the search box to the right of Services, search for
and choose EC2.
• Choose Instances

➢ Choose Upload and Deploy for your own index.html


➢ Submit work
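The upload-and-deploy flow in Task 2 has a CLI equivalent as well. A minimal sketch; the application name, environment name, and S3 bucket are illustrative:

# stage the sample bundle in S3 (bucket name is illustrative)
aws s3 cp tomcat.zip s3://my-eb-artifacts/tomcat.zip

# register the bundle as a new application version
aws elasticbeanstalk create-application-version --application-name my-app --version-label v1 --source-bundle S3Bucket=my-eb-artifacts,S3Key=tomcat.zip

# deploy that version to the pre-created environment
aws elasticbeanstalk update-environment --environment-name my-env --version-label v1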

Conclusion:-

AWS Elastic Beanstalk is a powerful PaaS solution that simplifies the deployment and
management of web applications. It abstracts much of the complexity of managing
infrastructure and offers automatic scaling, load balancing, and easy integration with other
AWS services. While it provides many advantages, such as ease of use and cost-efficiency, it
may not be suitable for every use case due to its limited customization options and
dependency on the AWS ecosystem. Understanding the trade-offs and requirements of your
application is crucial when deciding whether Elastic Beanstalk is the right choice.
EXPERIMENT 6

Aim: AWS Identity and Access Management (IAM)

Theory:

AWS Identity and Access Management (IAM) is a web service that enables Amazon Web
Services (AWS) customers to manage users and user permissions in AWS. With IAM, you can
centrally manage users, security credentials such as access keys, and permissions that control
which AWS resources users can access.

This lab will demonstrate:

• Exploring pre-created IAM Users and Groups


• Inspecting IAM policies as applied to the pre-created groups
• Following a real-world scenario, adding users to groups with specific capabilities enabled
• Locating and using the IAM sign-in URL
• Experimenting with the effects of policies on service access
In this lab environment, access to AWS services and service actions might be restricted to the
ones that are needed to complete the lab instructions. You might encounter errors if you
attempt to access other services or perform actions beyond the ones that are described in this
lab.

AWS Identity and Access Management

AWS Identity and Access Management (IAM) can be used to:

• Manage IAM Users and their access: You can create Users and assign them individual
security credentials (access keys, passwords, and multi-factor authentication devices).
You can manage permissions to control which operations a User can perform.
• Manage IAM Roles and their permissions: An IAM Role is similar to a User, in that it is
an AWS identity with permission policies that determine what the identity can and
cannot do in AWS. However, instead of being uniquely associated with one person, a
Role is intended to be assumable by anyone who needs it.
• Manage federated users and their permissions: You can enable identity federation to
allow existing users in your enterprise to access the AWS Management Console, to call
AWS APIs and to access resources, without the need to create an IAM User for each
identity.
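The user and group operations explored in this lab map directly onto IAM CLI calls. A minimal sketch; the user, group, and policy names are illustrative:

# create a user and add it to an existing group
aws iam create-user --user-name user-1
aws iam add-user-to-group --user-name user-1 --group-name S3-Support

# attach an AWS managed policy to the group
aws iam attach-group-policy --group-name S3-Support --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# inspect the policies that apply to the group
aws iam list-attached-group-policies --group-name S3-Support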

Output:
Conclusion:

In this experiment we:

-Explored pre-created IAM users and groups

-Inspected IAM policies as applied to the pre-created groups

-Followed a real-world scenario, adding users to groups with specific capabilities enabled

-Located and used the IAM sign-in URL

-Experimented with the effects of policies on service access



EXPERIMENT 7

Aim: To study and Implement Storage as a Service using AWS S3.

Theory:

S3 is a highly scalable object storage service offered by AWS, where data is stored as "objects"
within "buckets".

Bucket: A container that holds objects (files).

Object: A file with its associated metadata (like file name, size, content type) stored in a bucket.

Key: A unique identifier for an object within a bucket.

Storage Classes: Different tiers for storing data based on access frequency and cost (e.g.,
Standard, Standard-IA, Glacier).
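These concepts translate directly into AWS CLI commands. A minimal sketch; the bucket and object names are illustrative (bucket names must be globally unique):

# create a bucket
aws s3 mb s3://my-demo-bucket

# upload an object; its key within the bucket is index.html
aws s3 cp index.html s3://my-demo-bucket/index.html

# list the objects in the bucket
aws s3 ls s3://my-demo-bucket

# enable static website hosting, as shown in the screenshots
aws s3 website s3://my-demo-bucket --index-document index.html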

Screenshots:
Website

Conclusion:

In conclusion, implementing Storage as a Service (STaaS) using AWS S3 offers numerous
advantages for both individuals and organizations looking to manage and scale their data
storage needs. AWS S3 is a highly reliable, scalable, and secure cloud storage service that
allows users to store and retrieve any amount of data at any time, from anywhere on the web.

EXPERIMENT 8

Aim: Study & implement Database as a service using RDS service.

Objective:
To know the concept of Database as a Service running on the cloud and to demonstrate
CRUD operations on different SQL and NoSQL databases running on the cloud, such as
AWS RDS, Azure SQL, Mongo Lab, and Firebase.
Theory:
Database as a Service (DBaaS) is a cloud-based service that allows users to access, manage,
and operate databases without the need to handle the underlying hardware, software, or
infrastructure. DBaaS provides scalability, high availability, automated backups, security
features, and reduced operational complexity, making it an attractive choice for businesses
and developers.
Amazon Relational Database Service (RDS) is one of the most popular DBaaS offerings,
provided by Amazon Web Services (AWS). It supports various database engines, including
MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server.
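Provisioning and connecting can also be scripted. A hedged sketch using the AWS CLI and the mysql client; the instance identifier, credentials, and endpoint are illustrative:

# create a small MySQL instance
aws rds create-db-instance --db-instance-identifier my-rds-db --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 20

# once the instance is available, fetch its endpoint address
aws rds describe-db-instances --db-instance-identifier my-rds-db --query 'DBInstances[0].Endpoint.Address'

# connect with any MySQL client and run CRUD statements
mysql -h <endpoint> -u admin -p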

Output:

1. Launching an RDS Instance:


2. Database Configuration:
3. Connecting to the Database:

Conclusion:-
Implementing DBaaS using Amazon RDS showcases the advantages of cloud-based database
management, including ease of deployment, automation, scalability, and cost-effectiveness.
It also emphasizes the importance of security practices and performance monitoring in
managing cloud databases.
EXPERIMENT 9

Aim: To study and Implement Security as a Service on AWS

Theory:

Security as a Service (SECaaS) is a cloud-based security model in which security functions are
provided as managed services on a subscription basis. Instead of relying on traditional
on-premise security solutions, organizations can leverage cloud-native security tools to protect
their data, applications, and infrastructure dynamically.

Amazon Web Services (AWS) offers a comprehensive set of SECaaS tools to ensure data
protection, threat detection, and secure access control for cloud workloads. This experiment
focuses on implementing SECaaS to secure a Windows EC2 instance, mitigating both inbound
and outbound security risks.
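In this lab, inbound and outbound risk mitigation centers on security group rules. A minimal CLI sketch of equivalent rules; the group ID, VPC ID, and CIDR ranges are illustrative:

# create a security group for the Windows instance
aws ec2 create-security-group --group-name win-secure --description "SECaaS demo" --vpc-id vpc-0123456789abcdef0

# allow RDP inbound only from a trusted address range
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3389 --cidr 203.0.113.0/24

# replace the default allow-all egress rule with HTTPS-only outbound
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'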

Screen Shots:
Conclusion:

Creating a Windows EC2 instance on AWS and securing it using Security as a Service (SECaaS)
improves overall cloud security. AWS provides robust tools to protect against both inbound and
outbound threats.
EXPERIMENT 10

Aim: Implementation of Containerization using Docker.

Theory:
Containerization is a method that allows applications and their dependencies to be packaged
together in isolated environments called containers. Containers run on the same host OS but
remain independent, ensuring consistency across different environments (development,
testing, production).
Docker is an open-source platform that automates the deployment, scaling, and management
of applications inside containers. It makes it easier to package and distribute applications along
with all their dependencies, ensuring they run the same way everywhere.
Key Components of Docker:
• Docker Images: A read-only template with the application and its dependencies.
• Docker Containers: Running instances of Docker images.
• Dockerfile: A text file that defines how to build a Docker image.
• Docker Hub: A repository for sharing and storing Docker images.
Advantages of Docker:
1. Portability: Containers run consistently across different environments.
2. Efficiency: Containers are lightweight, start quickly, and share the host OS kernel.
3. Isolation: Each container runs independently without interfering with others.
4. Scalability: Containers can be easily replicated and managed using orchestration
tools like Kubernetes.
Docker simplifies application deployment, scaling, and management by using containers. It
ensures consistency across environments, reduces overhead, and is a critical tool in modern
software development, especially in microservices and DevOps practices.
Implementation:

1. Docker Desktop GUI

2. Version

3. docker search mysql

4. docker pull ubuntu

5. docker images

6. docker pull python

7. docker run -it ubuntu

8. docker run -it python

9. docker ps

10. docker stop

11. docker container ls -a

12. docker container rm

13. mkdir docker-node-app

14. cd docker-node-app
15. docker build -t my-node-app:1.0 .

16. docker run my-node-app:1.0
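Steps 13-16 assume the project directory contains application code and a Dockerfile, which the build command reads to produce the image. A minimal sketch of such a build context (the app and Dockerfile contents are illustrative, not the exact lab files):

# minimal Node.js app responding on port 3000
cat > app.js <<'EOF'
const http = require('http');
http.createServer((req, res) => res.end('Hello from Docker')).listen(3000);
EOF

# Dockerfile describing how to build the image
cat > Dockerfile <<'EOF'
FROM node:18-alpine
WORKDIR /app
COPY app.js .
EXPOSE 3000
CMD ["node", "app.js"]
EOF

# build and run, publishing the container port to the host
docker build -t my-node-app:1.0 .
docker run -p 3000:3000 my-node-app:1.0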

Conclusion:

Docker simplifies the process of deploying applications by using containerization. By
encapsulating an application and its dependencies in a container, Docker eliminates the issues
of inconsistent environments and dependency conflicts. This makes it easier to develop, deploy,
and scale applications, especially in distributed and cloud environments.

Docker's popularity in DevOps, Continuous Integration/Continuous Deployment (CI/CD), and
microservices architectures is growing rapidly because of its ability to simplify infrastructure
management, improve collaboration, and enhance efficiency. As container orchestration
tools like Kubernetes become more widespread, Docker will continue to be a key player in
modern software development.
EXPERIMENT 11

Aim: Implementation of Kubernetes using minikube

Installation:

To install the latest minikube stable release on x86-64 Windows using .exe download:

Start your cluster:

From a terminal with administrator access (but not logged in as root), run:

minikube start

If minikube fails to start, see the drivers page for help setting up a compatible container or
virtual-machine manager.

Interact with your cluster:

If you already have kubectl installed (see documentation), you can now use it to access your shiny
new cluster:

kubectl get po -A
Alternatively, minikube can download the appropriate version of kubectl and you should be able to use it
like this:

minikube kubectl -- get po -A

You can also make your life easier by adding the following to your shell config: (for more details see:
kubectl)

alias kubectl="minikube kubectl --"

Initially, some services, such as the storage-provisioner, may not yet be in a Running state. This is a
normal condition during cluster bring-up and will resolve itself momentarily. For additional insight
into your cluster state, minikube bundles the Kubernetes Dashboard, allowing you to get easily
acclimated to your new environment:

minikube dashboard

Deploy applications:

Create a sample deployment and expose it on port 8080:

kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
It may take a moment, but your deployment will soon show up when you run:

kubectl get services hello-minikube

The easiest way to access this service is to let minikube launch a web browser for you:

minikube service hello-minikube

Alternatively, use kubectl to forward the port:

kubectl port-forward service/hello-minikube 7080:8080


Manage your cluster:

Pause Kubernetes without impacting deployed applications:

minikube pause

Unpause a paused instance:

minikube unpause
Halt the cluster:

minikube stop

Change the default memory limit (requires a restart):

minikube config set memory 9001

Browse the catalog of easily installed Kubernetes services:

minikube addons list


Delete all of the minikube clusters:

minikube delete --all
