
School of Computing Science & Engineering

(SCOPE)

AWS Practical Record File

AWS Cloud Practitioner (CSE3015)


Slot: C11+C12+C13

Submitted By:

Name:

Reg. no.:

Submitted to: Dr. M. Suresh
AWS Cloud Practitioner
CSE3015

Hands-on Lab Manual

Preface

This manual consists of a series of laboratory experiments to accompany the syllabus of B.Tech. Computing Science & Engineering, VIT Bhopal University.

The experiments are divided among the following headings:

 Exercise (Objective)

 Prerequisites

 Objectives

 Underlying Concept / Theory (along with Figure if applicable)

 Problem Statement

 Algorithm

 Coding (if applicable)

 Output

 Remarks

 Conclusion

 Challenges

The appropriate headings are included in each experiment depending on the type of experiment.
Indicative List of Experiments

1. Launch an EC2 instance using the AWS Management Console by selecting appropriate instance types, AMIs, and configurations. Connect to the EC2 instance using SSH or Remote Desktop.

2. Create an S3 bucket and configure bucket policies, versioning, and encryption. Upload and download objects to/from the S3 bucket.

3. Create a Lambda function using Python or Node.js, configure event triggers, and test the Lambda function.

4. Register a domain name using Amazon Route 53 and upload HTML, CSS, and JavaScript files to the S3 bucket to host the website.

5. Configure a Virtual Private Cloud (VPC) in AWS, including setting up subnets, route tables, internet gateways, a NAT gateway, and security groups.

6. Build a RESTful API using Amazon API Gateway and AWS Lambda.

7. Launch an RDS instance by selecting a database engine (e.g., MySQL, PostgreSQL, SQL Server) and configure database settings, including storage, security, and backups.

8. Create Identity and Access Management (IAM) users, groups, roles, and policies in AWS, and manage access permissions to AWS resources.

9. Create and manage AMIs and snapshots in AWS, and attach an EBS volume to a running instance.

10. Create CloudWatch alarms to monitor metrics for EC2 instances, RDS databases, and ELB load balancers, and set up dashboards to visualize performance metrics.
Hands-on Lab Manual (Experiment 1)

Launch an EC2 instance using the AWS Management Console


Exercise 1

Launch an EC2 instance using the AWS Management Console by selecting appropriate instance types, AMIs, and configurations. Connect to the EC2 instance using SSH or Remote Desktop.

Prerequisites:

You should already be familiar with:

 AWS Management Console

 Basic understanding of cloud computing and virtualization.

 For Windows instances: a Remote Desktop Protocol (RDP) client.

 A key pair (can be created during the instance setup).

Objectives:

 Understand EC2 and its configuration options.

 Launch an EC2 instance with appropriate AMI and instance type.

 Configure security groups and key pairs.

 Successfully connect to the instance using RDP.

Underlying Concept / Theory:

Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable compute
capacity in the cloud. It allows users to run virtual servers (instances) and manage them via
the AWS Management Console or programmatically.
Key concepts:

 AMI (Amazon Machine Image): Pre-configured template with OS and software.

 Instance Type: Defines hardware specs like CPU, memory.

 Key Pair: Used for secure SSH or RDP login.

 Security Group: Acts as a virtual firewall.

Problem Statement:
Launch and configure an EC2 instance from the AWS Management Console using a
specified AMI and instance type. Then, connect to the instance using SSH (for Linux) or RDP
(for Windows).

Procedure:

1. Login to AWS Console.

2. Navigate to the EC2 Dashboard.

3. Select Region from the right top corner.


4. Click “Launch Instance”.

5. Enter a name for the EC2 instance.

6. Select an Amazon Machine Image (AMI) (e.g., Windows).

7. Select an Instance Type (e.g., t3.micro).


8. Create and select a Key Pair for secure connection.

9. Configure Network Settings and allow ports.


10. Review configuration and click “Launch Instance”.

11. Wait for the instance status to show “Running”.


12. Select the Experiment1 instance checkbox and then choose Connect.

13. Go to the RDP client tab, then download the remote desktop file.

14. Choose Get password, upload the Experiment1_Key key pair file, and decrypt the password.

15. Run the downloaded remote desktop file and enter the decrypted password.
16. Connect to the instance.
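The same launch can be scripted. Below is a minimal boto3 sketch under the assumption that AWS credentials are already configured; the region, AMI ID, and tag value are placeholders to substitute with the values chosen in steps 3-8:

import boto3

# client for the region selected in step 3 (us-east-1 assumed here)
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: the AMI chosen in step 6
    InstanceType="t3.micro",           # instance type from step 7
    KeyName="Experiment1_Key",         # key pair created in step 8
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "Experiment1"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])

To block until the instance reaches the running state (step 11), ec2.get_waiter("instance_running") can be used before connecting.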

Output:

Successfully running an EC2 instance.

Remarks:

 Ensure the key pair is downloaded and securely stored.

 If the instance does not connect, verify security group settings (inbound rules).

 The free tier supports t2.micro instances for up to 750 hours/month.

 Don’t forget to terminate unused instances to avoid charges.


Conclusion:

The experiment demonstrates how to deploy and connect to a virtual server on Amazon EC2 using an RDP client. This provides foundational experience in cloud infrastructure
management and serves as a stepping stone toward more complex architectures and
deployments using AWS.
Hands-on Lab Manual (Experiment 2)

S3 Bucket

Exercise 2

To demonstrate the concept of creating and managing an Amazon S3 bucket with policies, versioning, and encryption.
Prerequisites:

You should already be familiar with:

- Basic AWS services, especially Amazon S3.

- Using the AWS Management Console or AWS CLI.

- Basic concepts of cloud storage and permissions.

Objectives:

After completion of this lab, the student shall be able to:

- Create an S3 bucket using AWS Console.

- Configure bucket policies for access control.

- Enable versioning and encryption for data protection.

- Upload and download objects to/from the S3 bucket.

Underlying Concept / Theory:

Amazon Simple Storage Service (S3) is a scalable object storage service provided by AWS. It
allows you to store and retrieve any amount of data from anywhere on the web.
- Bucket: A container for storing objects (files).

- Bucket Policy: A JSON-based access policy attached to a bucket to define who can access
it and how.

- Versioning: Allows you to preserve, retrieve, and restore every version of every
object stored in a bucket.

- Encryption: Protects data by encrypting it at rest using server-side encryption (SSE-S3 or SSE-KMS).

Problem Statement:

Create an S3 bucket and configure it with appropriate bucket policies, versioning, and
encryption. Upload and download objects to/from the S3 bucket.

Algorithm:

1. Create an S3 Bucket:

 Go to AWS Console → S3 → Create Bucket.

 Provide a unique bucket name and region.

2. Enable Versioning:

 Go to “Properties” → “Bucket Versioning” → Enable.


3. Configure a Bucket Policy: Go to “Permissions” → “Bucket policy” → Edit, and attach a JSON access policy.

4. Enable Encryption:

 Under “Properties” → “Default encryption” → Choose SSE-S3 or SSE-KMS.

5. Upload Objects:

 Go to the “Objects” tab → Upload → Select files from your computer.

6. Download Objects:

 Select the uploaded file → Actions → Download.
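The same sequence can be expressed with boto3; this is a sketch assuming us-east-1 and a placeholder bucket name (bucket names must be globally unique):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "exp2-demo-bucket-12345"  # placeholder; must be globally unique

# Step 1: create the bucket (us-east-1 needs no LocationConstraint)
s3.create_bucket(Bucket=bucket)

# Step 2: enable versioning
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# Step 4: enable default SSE-S3 encryption
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Steps 5-6: upload and then download an object
s3.upload_file("sample.txt", bucket, "sample.txt")
s3.download_file(bucket, "sample.txt", "sample_downloaded.txt")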

Implementation Steps:

Fig 1: Create Bucket


Fig 2: Provide Bucket Name

Fig 3: Enable Versioning


Fig 4: Update Bucket Policies

Fig 5: Enable File Encryption


Fig 6: Upload File via AWS Management Console

Fig 7: View Uploaded File

Output:
Fig 8: Uploaded File Successfully.

Fig 9: Successfully Downloaded.

Fig 10: View contents of uploaded file.

Remarks:

The AWS Management Console was used for all bucket operations.

Conclusion:

The objectives mentioned above are achieved successfully.


Challenges:

1. Security Management: Configuring correct bucket policies and access controls can
be complex and prone to misconfiguration.

2. Cost Optimization: Unmonitored storage growth and data transfer costs can lead
to unexpectedly high bills.

3. Latency Issues: Retrieving large objects from S3 can result in latency, especially
for real-time applications.
Hands-on Lab Manual (Experiment 3)

Create a Lambda function using Python or Node.js, configure event triggers, and test the Lambda function

Exercise (Objective) 3:

To create a Lambda function using Python or Node.js, configure event triggers, and test the Lambda function.
Prerequisites:

You should already be familiar with:

- Basic understanding of cloud computing concepts.

- Basic programming in Python or Node.js.

Objectives:

After completion of this lab, the student shall be able to:

- Explain the concept of AWS Lambda.

- Demonstrate the creation and deployment of a Lambda function.

- Implement and test Lambda functions triggered by events.

Underlying Concept / Theory:

AWS Lambda is a serverless compute service that lets you run code without provisioning or
managing servers. You simply upload your code as a Lambda function and AWS handles
everything needed to run and scale the execution.

Key concepts include:

 Function creation

 Handler definition

 Trigger configuration

 Testing and monitoring


Problem Statement:

Program 3: Create an AWS Lambda function using Python or Node.js, set up event triggers, and
perform testing to ensure proper execution.

Algorithm:

1. Create the Lambda Function:

 Access the AWS Lambda Console:

 Sign in to the AWS Management Console and open the Lambda console.

 Create Function:

 Click "Create function."

 Select "Author from scratch."AWS Documentation

 Enter a function name (e.g., MyFunction).

 Choose the runtime:


 For Python, select Python 3.x.

 Set up permissions by selecting or creating an execution role.

 Click "Create function."

2. Add Function Code:

 In the function's code editor:

 For Python, edit lambda_function.py.

 Ensure the handler is set correctly:

 For Python, use lambda_function.lambda_handler.

 Click "Deploy" to save changes.

3. Configure an Event Trigger (e.g., API Gateway):

 In the Lambda function's page, under "Function overview," click "Add trigger."

 Select "API Gateway."

 Choose "Create an API" and select "HTTP API."

 Set security to "Open."

 Click "Add."

4. Test the Function:

 Direct Invocation:

 In the Lambda console, click "Test."

 Configure a test event with sample input data.

 Invoke the function and review the output.

 Via API Gateway:

 Use the provided API endpoint to send HTTP requests (e.g., using a
browser or tools like cURL or Postman).

 Verify the function's response.


Coding:

import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }

Execution Snapshots:
Output:

Conclusion:

The objectives mentioned above are achieved successfully.


Hands-on Lab Manual (Experiment 4)

Registering a domain name using Amazon Route 53 and uploading HTML, CSS, and JavaScript files to an S3 bucket to host a website

Exercise (Objective) 4:

To register a domain name using Amazon Route 53 and upload HTML, CSS, and JavaScript files to the S3 bucket to host a website.
Prerequisites:

You should already be familiar with:

- The AWS Management Console and the various services it provides.

- Basic knowledge about Amazon S3 bucket and AWS Route 53.

Objectives:

After completion of this lab, the student shall be able to:

- Creating and registering custom domains with AWS Route 53.

- Configuring a static website under a domain registered with Route 53.

- Uploading files to the static website.

Underlying Concept / Theory:

Suppose that you want to host a static website on Amazon S3. You've registered a
domain with Amazon Route 53 (for example, “example.com”), and you want requests
for http://www.example.com and http://example.com to be served from your Amazon
S3 content. This experiment shows how to host a static website and create redirects on Amazon S3 for a website with a custom domain name that is registered with Route 53.

 Amazon Route 53 – You use Route 53 to register domains and to define where you
want to route internet traffic for your domain. The example shows how to create
Route 53 alias records that route traffic for your domain (example.com) and
subdomain (www.example.com) to an Amazon S3 bucket that contains an HTML
file.
 Amazon S3 – You use Amazon S3 to create buckets, upload a sample website
page, configure permissions so that everyone can see the content, and then
configure the buckets for website hosting.

Problem Statement:

Experiment 4: Registering a domain name using Amazon Route 53 and upload HTML, CSS and
Java Script files to the S3 bucket to host website.

Procedure:

1. Registering a new domain using Route 53:

1. Sign in to the AWS Management Console and open the Route 53


console at https://console.aws.amazon.com/route53/.

2. In the navigation pane, choose Domains and then Registered domains.

3. On the Registered domains page, choose Register domains.

a. In the Search for domain section, enter the domain name that you want to
register, and choose Search to find out whether the domain name is available.

If the domain name that you want to register contains characters other than a-z, A-Z, 0-9,
and - (hyphen), note the following:

 You can enter the name using the applicable characters. You don't need to
convert the name to Punycode.

 A list of languages appears. Choose the language of the specified name. For
example, if you enter příklad ("example" in Czech), choose Czech (CES) or Czech
(CZE).

4. Choose Proceed to checkout.

5. On the Pricing page, choose the number of years that you want to register the
domain for and whether you want us to automatically renew your domain
registration before the expiration date.
6. On the Contact information page, enter contact information for the domain
registrant, admin, tech, and billing contacts. The values that you enter here are applied to
all of the domains that you're registering.

7. On the Review page, review the information that you entered, and optionally
correct it, read the terms of service, and select the check box to confirm that you've read
the terms of service.

Choose Submit.

8. In the navigation pane, choose Domains and then Requests.

On this page you can view the status of domain and also if you need to respond to registrant
contact verification email. You can also choose to resend the verification email.

If you specified an email address for the registrant contact that has never been used to register
a domain with Route 53, some TLD registries require you to verify that the address is valid.

We send a verification email from one of the following email addresses:

a. noreply@registrar.amazon.com – for TLDs registered by Amazon Registrar.

b. noreply@domainnameverification.net – for TLDs registered by our registrar associate, Gandi. To determine who the registrar is for your TLD, see the Route 53 documentation.

B. Configuring a static website using a custom domain registered with Route 53:

Step 1: Register a custom domain with Route 53

Step 2: Create two buckets

To support requests from both the root domain and subdomain, you create two buckets.

 Domain bucket – example.com

 Subdomain bucket – www.example.com

These bucket names must match your domain name exactly. In this example, the domain name
is example.com. You host your content out of the root domain bucket (example.com). You
create a redirect request for the subdomain bucket (www.example.com). If someone
enters www.example.com in their browser, they are redirected to example.com and see
the content that is hosted in the Amazon S3 bucket with that name.

To create your buckets for website hosting

The following instructions provide an overview of how to create your buckets for website
hosting. For detailed, step-by-step instructions on creating a bucket, see Creating a bucket.
1. Sign in to the AWS Management Console and open the Amazon S3
console at https://console.aws.amazon.com/s3/.

2. Create your root domain bucket:

a. In the navigation bar on the top of the page, choose the name of the
currently displayed AWS Region. Next, choose the Region in which you want to
create a bucket.

b. In the left navigation pane, choose General purpose buckets.

c. Choose Create bucket. The Create bucket page opens.

d. Enter the Bucket name (for example, example.com).

e. Choose the Region where you want to create the bucket.

f. Choose a Region that is geographically close to you to minimize latency and costs,
or to address regulatory requirements. The Region that you choose determines your
Amazon S3 website endpoint. For more information, see Website endpoints.

g. To accept the default settings and create the bucket, choose Create.

3. Create your subdomain bucket:

a. Choose Create bucket.

b. Enter the Bucket name (for example, www.example.com).

c. Choose the Region where you want to create the bucket.

Choose a Region that is geographically close to you to minimize latency and costs, or to
address regulatory requirements. The Region that you choose determines your Amazon S3
website endpoint. For more information, see Website endpoints.

d. To accept the default settings and create the bucket, choose Create.

Step 3: Configure your root domain bucket for website hosting

In this step, you configure your root domain bucket (example.com) as a website. This
bucket will contain your website content. When you configure a bucket for website hosting,
you can access the website using the Website endpoints.

To enable static website hosting


1. Sign in to the AWS Management Console and open the Amazon S3
console at https://console.aws.amazon.com/s3/.

2. In the left navigation pane, choose General purpose buckets.

3. In the buckets list, choose the name of the bucket that you want to enable
static website hosting for.

4. Choose Properties.

5. Under Static website hosting, choose Edit.

6. Choose Use this bucket to host a website.

7. Under Static website hosting, choose Enable.

8. In Index document, enter the file name of the index document, typically index.html.

The index document name is case sensitive and must exactly match the file name of the
HTML index document that you plan to upload to your S3 bucket. When you configure a
bucket for website hosting, you must specify an index document. Amazon S3 returns this
index document when requests are made to the root domain or any of the subfolders. For
more information, see Configuring an index document.

9. To provide your own custom error document for 4XX class errors, in Error
document, enter the custom error document file name.

The error document name is case sensitive and must exactly match the file name of the
HTML error document that you plan to upload to your S3 bucket. If you don't specify a
custom error document and an error occurs, Amazon S3 returns a default HTML error
document. For more information, see Configuring a custom error document.

10. (Optional) If you want to specify advanced redirection rules, in Redirection


rules, enter JSON to describe the rules.

For example, you can conditionally route requests according to specific object key names or
prefixes in the request. For more information, see Configure redirection rules to use
advanced conditional redirects.

11. Choose Save changes.

Amazon S3 enables static website hosting for your bucket. At the bottom of the page,
under Static website hosting, you see the website endpoint for your bucket.

12. Under Static website hosting, note the Endpoint.

The Endpoint is the Amazon S3 website endpoint for your bucket. After you finish
configuring your bucket as a static website, you can use this endpoint to test your website.
After you edit block public access settings and add a bucket policy that allows public read
access, you can use the website endpoint to access your website.

In the next step, you configure your subdomain (www.example.com) to redirect requests to
your domain (example.com).

Step 4: Configure your subdomain bucket for website redirect

After you configure your root domain bucket for website hosting, you can configure
your subdomain bucket to redirect all requests to the domain. In this example, all
requests for www.example.com are redirected to example.com.

To configure a redirect request

1. On the Amazon S3 console, in the General purpose buckets list, choose


your subdomain bucket name (www.example.com in this example).

2. Choose Properties.

3. Under Static website hosting, choose Edit.

4. Choose Redirect requests for an object.

5. In the Target bucket box, enter your root domain, for example, example.com.

6. For Protocol, choose http.

7. Choose Save changes.

Step 5: Configure logging for website traffic

If you want to track the number of visitors accessing your website, you can optionally
enable logging for your root domain bucket. For more information, see Logging requests
with server access logging. If you plan to use Amazon CloudFront to speed up your website,
you can also use CloudFront logging.

To enable server access logging for your root domain bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.

2. In the same Region where you created the bucket that is configured as a
static website, create a bucket for logging, for example logs.example.com.

3. Create a folder for the server access logging log files (for example, logs).

4. (Optional) If you want to use CloudFront to improve your website


performance, create a folder for the CloudFront log files (for example, cdn).

5. In the Buckets list, choose your root domain bucket.


6. Choose Properties.

7. Under Server access logging, choose Edit.

8. Choose Enable.

9. Under the Target bucket, choose the bucket and folder destination for the
server access logs:

1. Browse to the folder and bucket location:

1. Choose Browse S3.

2. Choose the bucket name, and then choose the logs folder.

3. Choose Choose path.

2. Enter the S3 bucket path, for example, s3://logs.example.com/logs/.

10. Choose Save changes.

In your log bucket, you can now access your logs. Amazon S3 writes website access logs to your
log bucket every 2 hours.

Step 6: Upload index and website content

In this step, you upload your index document and optional website content to your root
domain bucket.

When you enable static website hosting for your bucket, you enter the name of the index
document (for example, index.html). After you enable static website hosting for the
bucket, you upload an HTML file with this index document name to your bucket.

To configure the index document

1. Create an index.html file.

If you don't have an index.html file, you can use the following HTML to create one:

Coding:

Experiment 4: HTML

<html xmlns="http://www.w3.org/1999/xhtml" >


<head>

<title>My Website Home Page</title>

</head>

<body>

<h1>Welcome to my website</h1>

<p>Now hosted on Amazon S3!</p>

</body>

</html>

1. Save the index file locally.

The index document file name must exactly match the index document name that you
enter in the Static website hosting dialog box. The index document name is case sensitive.
For example, if you enter index.html for the Index document name in the Static website
hosting dialog box, your index document file name must also be index.html and
not Index.html.

2. Sign in to the AWS Management Console and open the Amazon S3


console at https://console.aws.amazon.com/s3/.

3. In the left navigation pane, choose General purpose buckets.

4. In the buckets list, choose the name of the bucket that you want to use to host
a static website.

5. Enable static website hosting for your bucket, and enter the exact name of your
index document (for example, index.html). For more information, see Enabling website
hosting.

After enabling static website hosting, proceed to step 6.

6. To upload the index document to your bucket, do one of the following:

 Drag and drop the index file into the console bucket listing.

 Choose Upload, and follow the prompts to choose and upload the index

file. For step-by-step instructions, see Uploading objects.

7. (Optional) Upload other website content to your bucket.

Step 7: Upload an error document


When you enable static website hosting for your bucket, you enter the name of the error
document (for example, 404.html). After you enable static website hosting for the
bucket, you upload an HTML file with this error document name to your bucket.

To configure an error document

1. Create an error document, for example 404.html.

2. Save the error document file locally.

The error document name is case sensitive and must exactly match the name that you enter
when you enable static website hosting. For example, if you enter 404.html for the Error
document name in the Static website hosting dialog box, your error document file name
must also be 404.html.

3. Sign in to the AWS Management Console and open the Amazon S3


console at https://console.aws.amazon.com/s3/.

4. In the left navigation pane, choose General purpose buckets.

5. In the buckets list, choose the name of the bucket that you want to use to host
a static website.

6. Enable static website hosting for your bucket, and enter the exact name of your
error document (for example, 404.html). For more information, see Enabling website
hosting and Configuring a custom error document.

After enabling static website hosting, proceed to step 6.

7. To upload the error document to your bucket, do one of the following:

 Drag and drop the error document file into the console bucket listing.

 Choose Upload, and follow the prompts to choose and upload the error document file.

Step 8: Edit S3 Block Public Access settings

By default, Amazon S3 blocks public access to your account and buckets. If you want to use
a bucket to host a static website, you can use these steps to edit your block public access
settings.

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.

2. Choose the name of the bucket that you have configured as a static website.

3. Choose Permissions.
4. Under Block public access (bucket settings), choose Edit.

5. Clear Block all public access, and choose Save changes.

Step 9: Attach a bucket policy

After you edit S3 Block Public Access settings, you can add a bucket policy to grant public read
access to your bucket. When you grant public read access, anyone on the internet can
access your bucket.

1. Under Buckets, choose the name of your bucket.

2. Choose Permissions.

3. Under Bucket Policy, choose Edit.

4. To grant public read access for your website, copy the following bucket policy,
and paste it in the Bucket policy editor.
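A bucket policy of the following form grants public read access (this is the standard public-read policy from the Amazon S3 documentation, with Bucket-Name as the placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::Bucket-Name/*"]
        }
    ]
}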

 Update the Resource to your bucket name.

In the preceding example bucket policy, Bucket-Name is a placeholder for the bucket name.
To use this bucket policy with your own bucket, you must update this name to match your
bucket name.
 Choose Save changes.

A message appears indicating that the bucket policy has been successfully added.

Step 10: Test your domain endpoint

After you configure your domain bucket to host a public website, you can test your
endpoint. For more information, see Website endpoints. You can only test the endpoint for
your domain bucket because your subdomain bucket is set up for website redirect and not
static website hosting.

1. Under Buckets, choose the name of your bucket.

2. Choose Properties.

3. At the bottom of the page, under Static website hosting, choose your
Bucket website endpoint.

Your index document opens in a separate browser window.

Step 11: Add alias records for your domain and subdomain

In this step, you create the alias records that you add to the hosted zone for your domain. The records map example.com and www.example.com to the corresponding S3 buckets. Instead of using IP addresses, the alias records use the Amazon S3 website endpoints. Amazon Route 53 maintains a mapping between the alias records and the IP addresses where the Amazon S3 buckets reside. You create two alias records, one for your root domain and one for your subdomain.

 Open the Route 53 console at https://console.aws.amazon.com/route53/

 Choose Hosted zones.

 In the list of hosted zones, choose the name of the hosted zone that matches
your domain name.

 Choose Create record.

 Choose Switch to wizard.

 Choose Simple routing, and choose Next.

 Choose Define simple record.


 In Record name, accept the default value, which is the name of your hosted zone
and your domain.

 In Value/Route traffic to, choose Alias to S3 website endpoint.

 Choose the Region.

 Choose the S3 bucket.

 The bucket name should match the name that appears in the Name box. In the Choose S3 bucket list, the bucket name appears with the Amazon S3 website endpoint for the Region where the bucket was created, for example, s3-website-us-west-1.amazonaws.com (example.com).

 Choose S3 bucket lists a bucket if:

 You configured the bucket as a static website.

 The bucket name is the same as the name of the record that you're creating.

 The current AWS account created the bucket.

 If your bucket does not appear in the Choose S3 bucket list, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, s3-website-us-west-2.amazonaws.com. For a complete list of Amazon S3 website endpoints, see Amazon S3 Website endpoints. For more information about the alias target, see Value/route traffic to in the Amazon Route 53 Developer Guide.

 In Record type, choose A - Routes traffic to an IPv4 address and some


AWS resources.

 For Evaluate target health, choose No.

 Choose Define simple record.

 To add an alias record for your subdomain (www.example.com)

 Under Configure records, choose Define simple record.

 In Record name for your subdomain, type www.

 In Value/Route traffic to, choose Alias to S3 website endpoint.

 Choose the Region.

 Choose the S3 bucket, for example, s3-website-us-west-2.amazonaws.com (www.example.com).

 If your bucket does not appear in the Choose S3 bucket list, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, s3-website-us-west-2.amazonaws.com. For a complete list of Amazon S3 website endpoints, see Amazon S3 Website endpoints. For more information about the alias target, see Value/route traffic to in the Amazon Route 53 Developer Guide.

 In Record type, choose A - Routes traffic to an IPv4 address and some


AWS resources.

 For Evaluate target health, choose No.

 Choose Define simple record.

 On the Configure records page, choose Create records.

Step 12: Test the website

Verify that the website and the redirect work correctly. In your browser, enter your URLs. In
this example, you can try the following URLs:

Domain (http://example.com) – Displays the index document in the example.com bucket.

Subdomain (http://www.example.com) – Redirects your request to http://example.com. You see the index document in the example.com bucket.

If your website or redirect links don't work, you can try the following:

Clear cache – Clear the cache of your web browser.

Check name servers – If your web page and redirect links don't work after you've cleared
your cache, you can compare the name servers for your domain and the name servers for
your hosted zone. If the name servers don't match, you might need to update your
domain name servers to match those listed under your hosted zone.

Output:

The website should now be hosted and serving the uploaded documents.

Conclusion:

The objectives mentioned above are achieved successfully.


Hands-on Lab Manual (Experiment 5)

Virtual Private Cloud

Exercise (Objective) 5: Configuring a Virtual Private Cloud (VPC) in AWS, including setting up subnets, route tables, internet gateways, a NAT gateway, and security groups.
Prerequisites:

You should already be familiar with:

- Understand basic networking concepts

 IP address ranges (CIDR notation)

 Public vs. private subnets

 Gateways and routing

 Security groups vs. network ACLs

- Basic familiarity with AWS Management Console.

Objectives:

After completion of this lab, the student shall be able to:

- Design and deploy a custom VPC network.

- Enable secure internet access.

- Control network traffic using security groups.

Underlying Concept / Theory:


A VPC (Virtual Private Cloud) in AWS is a private, isolated network where you can launch
AWS resources. You define your own IP address range (CIDR) and split it into public and
private subnets. Route tables control traffic flow, and Internet Gateways provide internet
access to public subnets. NAT Gateways let private subnets access the internet securely,
while security groups act as firewalls to control traffic to/from instances.

Problem Statement:

In cloud-based application deployment, ensuring secure, scalable, and isolated


networking environments is critical. By default, all AWS resources are launched in a
default VPC, which may not provide the necessary control over network configuration,
routing, and security.

This experiment aims to address the challenge of building a custom network infrastructure
by configuring a Virtual Private Cloud (VPC) in AWS. The objective is to set up a secure,
isolated environment with public and private subnets, proper routing via Internet Gateway
and NAT Gateway, and enforce access control using security groups. This configuration will
simulate a real-world scenario where frontend services (e.g., web servers) are internet-
facing, while backend services (e.g., databases) are protected within private networks.

Steps

1. Open VPC Console

 Search for VPC in AWS Services and select it.

2. Create a VPC

 Ensure the region is N. Virginia (us-east-1).

 Click on VPC dashboard (top left) → Create VPC.

 If "Create VPC" isn't visible, use Launch VPC Wizard.

3. Configure VPC Settings

 Select VPC and more.

 Under Name tag auto-generation, change project to lab.

 Set IPv4 CIDR block to 10.0.0.0/16.


4. Configure Availability Zones and Subnets

 Select 1 Availability Zone (AZ).

 Add 1 Public Subnet and 1 Private Subnet.

 Set Public Subnet CIDR to 10.0.0.0/24.

 Set Private Subnet CIDR to 10.0.1.0/24.


5. Configure NAT Gateway and VPC Endpoints

 Select "In 1 AZ" for NAT Gateway to allow private subnet internet access.

 Keep VPC Endpoints as "None" (unless accessing S3 directly is needed).

6. Enable DNS Options


 Enable DNS hostnames.

 Enable DNS resolution.

7. Finalize and Create VPC

 Click "Create VPC" to complete the setup.

Preview:
8. Create a Public Subnet:

o VPC ID: lab-vpc

o Subnet Name: lab-subnet-public2

o Availability Zone: us-east-1b (second AZ)

o IPv4 CIDR Block: 10.0.2.0/24

9. Confirm the Creation of the second public subnet.

10. Create a Private Subnet:

o VPC ID: lab-vpc

o Subnet Name: lab-subnet-private2


o Availability Zone: us-east-1b (second AZ)

o IPv4 CIDR Block: 10.0.3.0/24

Both subnets will have IP addresses in the 10.0.2.x and 10.0.3.x ranges, respectively.
11. In the left navigation pane, choose Route Table

12. Select the lab-rtb-private1-us-east-1a

13. Select the "Subnet Associations" tab


 The route table was created earlier, and lab-subnet-private-1 was already
associated with it.

14. Associate lab-subnet-private-2 and Save

 Choose lab-subnet-private-2 from the list.

 Click "Save Associations" to apply changes

Step 15: Create a Security Group for HTTP Access

1. Enter Security Group Details

 Name: Web Security Group

 Description: Enable HTTP access


 VPC: Exp5-vpc

2. Set Inbound Rules

 Type: HTTP

 Protocol: TCP

 Port: 80

 Source: 0.0.0.0/0 (Allows traffic from anywhere)

 Description: Permit web requests
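The same network can be built programmatically; the following boto3 sketch mirrors the console steps above (region, CIDR blocks, and names are assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC with DNS hostnames enabled (steps 3 and 6)
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

# one public and one private subnet (step 4)
public_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]
private_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# internet gateway plus a route table for the public subnet
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=public_id)

# security group allowing inbound HTTP (step 15)
sg_id = ec2.create_security_group(
    GroupName="Web Security Group", Description="Enable HTTP access", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id, IpProtocol="tcp", FromPort=80, ToPort=80, CidrIp="0.0.0.0/0"
)

The NAT gateway is omitted from this sketch for brevity; it would additionally need an Elastic IP (ec2.allocate_address) passed to ec2.create_nat_gateway for the private subnet's route table.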

Internet Gateway

NAT Gateway

Subnets: Subnet 1 (private, created), Subnet 2 (public, created), Subnet 3 (public, default), Subnet 4 (private, default)

Route Table
Remarks:

A public subnet (Exp5-subnet-public2) and a private subnet (Exp5-subnet-private2) were


successfully created and associated with the appropriate route tables.

The route table for private subnets was updated to include Exp5-subnet-private2, ensuring
proper network configuration.

A security group (Web Security Group) was created to allow HTTP traffic, ensuring that web
requests can reach instances within the VPC.

The security group is configured to permit inbound HTTP traffic from any IP (0.0.0.0/0),
which is necessary for public-facing web applications but should be restricted for security-
sensitive applications.

Conclusion:

The AWS VPC networking setup has been successfully configured with public and private
subnets, a route table association, and security group rules to allow web traffic.
Hands-on Lab Manual (Experiment 6)

RESTful API using Amazon API Gateway and AWS Lambda

Exercise (Objective) 6: Build a RESTful API using Amazon API Gateway and AWS Lambda.
Prerequisites:

 Basic understanding of RESTful APIs

 AWS account setup

 Familiarity with AWS Lambda and API Gateway

 Basic knowledge of Python (or Node.js) for Lambda functions

Objectives:

After completion of this lab, the student shall be able to:

 Explain RESTful APIs and their principles.

 Demonstrate the integration of AWS Lambda with API Gateway.

 Implement a simple RESTful API using Amazon API Gateway and AWS Lambda.

 Understand the process of deploying and testing an API in AWS.

Underlying Concept / Theory:

A RESTful API (Representational State Transfer API) follows REST principles and
allows interaction between distributed systems using HTTP methods like GET, POST,
PUT, and DELETE.

Amazon API Gateway is a fully managed service that enables developers to create, deploy,
and manage secure APIs. It acts as a front door for applications, handling authentication,
request validation, and traffic management.

AWS Lambda allows running code without provisioning servers. It supports various
programming languages and integrates with API Gateway to execute functions upon API
requests.
Problem Statement:

Create a RESTful API using Amazon API Gateway and AWS Lambda that performs basic
CRUD (Create, Read, Update, Delete) operations on a user database.

Algorithm:

1. Create an AWS Lambda Function:

 Go to the AWS Lambda console.

 Create a new function.

 Choose "Author from scratch" and select Python or Node.js as the runtime.

 Write a simple function that returns a JSON response.

 Deploy the function.

2. Create an API in Amazon API Gateway:

 Open the API Gateway console.

 Create a new REST API.

 Define a resource (e.g., /greet).

 Create a GET method and link it to the Lambda function.

 Deploy the API.

3. Test the API Endpoint:

 Copy the API endpoint URL.

 Send a GET request using a web browser or Postman.

 Observe the response.

Coding:

Program 6: HelloLambda

import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, welcome to RESTful APIs with AWS Lambda!"})
    }
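The problem statement calls for CRUD operations, while the deployed lab function only returns a greeting. Below is a sketch of how the handler could branch on the HTTP method with a REST API proxy integration; the in-memory store is for illustration only (real data would live in a database such as DynamoDB):

import json

users = {}  # in-memory store for illustration; cleared on every cold start

def lambda_handler(event, context):
    method = event.get("httpMethod", "GET")   # REST API proxy integration field
    body = json.loads(event.get("body") or "{}")
    if method == "POST":                      # Create
        users[body["id"]] = body
        return {"statusCode": 201, "body": json.dumps(body)}
    if method == "GET":                       # Read
        return {"statusCode": 200, "body": json.dumps(list(users.values()))}
    if method == "PUT":                       # Update
        users[body["id"]] = body
        return {"statusCode": 200, "body": json.dumps(body)}
    if method == "DELETE":                    # Delete
        users.pop(body.get("id"), None)
        return {"statusCode": 204, "body": ""}
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}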

Output:

After deploying and invoking the API, the expected response is:

"message": "Hello, welcome to RESTful APIs with AWS Lambda!"}

Remarks:

 The API was successfully created using Amazon API Gateway.

 AWS Lambda executed the function and returned the expected response.

 The integration between API Gateway and Lambda was seamless.

Conclusion:

In this experiment, we successfully created a RESTful API using Amazon API Gateway and
AWS Lambda. We implemented a serverless function that handles HTTP requests and
returns a JSON response. This demonstrates how to build scalable and cost-efficient APIs
using AWS services.

Screenshots / Visuals:
Hands-on Lab Manual (Experiment 7)

Launch an RDS instance by selecting a database engine (e.g., MySQL, PostgreSQL, SQL Server) and configure database settings, including storage, security, and backups

Experiment 7: Launch an RDS instance by selecting a database engine (e.g., MySQL, PostgreSQL, SQL Server) and configure database settings, including storage, security, and backups.
Prerequisites:

 AWS Account with access to Amazon RDS, VPC, and EC2.

 Basic understanding of network security (security groups, inbound/outbound rules).

 Knowledge of MySQL databases and cloud-based database management.

 A pre-configured web application running on an EC2 instance.

Objectives:

To deploy and configure an Amazon RDS MySQL Multi-AZ instance, secure it with a security
group, define a DB subnet group, and connect it to a web application for data persistence.

By the end of this exercise, you will be able to:

 Create an Amazon RDS MySQL instance with Multi-AZ deployment.

 Set up a Security Group to allow access from an EC2-based web application.

 Create and configure a DB Subnet Group for high availability.

 Establish a connection between the RDS instance and a web application.

 Understand how data replication works in Amazon RDS Multi-AZ.

Underlying Concept / Theory:

Amazon RDS Multi-AZ provides high availability and automatic failover by maintaining a
standby database in another Availability Zone (AZ). If the primary database fails, AWS
automatically redirects traffic to the standby instance.
Problem Statement:

Launch an RDS instance by selecting a database engine (e.g., MySQL, PostgreSQL, SQL Server) and configure database settings, including storage, security, and backups.

Algorithm:

Step 1: Create a Security Group for RDS

1. Open AWS Management Console → Search VPC → Select Security Groups.

2. Click Create Security Group → Name: DB Security Group.

3. Add Inbound Rule:

 Type: MySQL/Aurora (3306).

 Source: Select Web Security Group.

4. Click Create Security Group.


Step 2: Create a DB Subnet Group

1. Open AWS RDS Console → Select Subnet Groups.

2. Click Create DB Subnet Group → Name: DB-Subnet-Group.

3. Choose Lab VPC and add subnets in us-east-1a and us-east-1b.

4. Select CIDR ranges: 10.0.1.0/24 and 10.0.3.0/24.

5. Click Create.

Step 3: Launch an RDS MySQL Instance

1. Open AWS RDS Console → Select Databases → Click Create database.

2. Choose MySQL as the engine, Multi-AZ for availability.

3. Configure:

 DB instance ID: lab-db.

 Username: main, Password: lab-password.

 DB instance class: db.t3.micro.

 Storage: 20 GB SSD.
 VPC: Lab VPC.

 Security Group: DB Security Group.

4. Disable encryption and automatic backups for faster setup.

5. Click Create Database and wait for the status to change to Available.

6. Copy the Endpoint (e.g., lab-db.xxxx.us-east-1.rds.amazonaws.com).
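Step 3 can also be scripted; here is a boto3 sketch using the lab values (the security group ID is a placeholder):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="lab-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                           # 20 GB SSD
    MasterUsername="main",
    MasterUserPassword="lab-password",
    MultiAZ=True,
    DBSubnetGroupName="DB-Subnet-Group",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder: DB Security Group
    StorageEncrypted=False,
    BackupRetentionPeriod=0,                       # disables automatic backups, as in the lab
)

# block until the instance is Available, then print its endpoint
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="lab-db")
db = rds.describe_db_instances(DBInstanceIdentifier="lab-db")["DBInstances"][0]
print(db["Endpoint"]["Address"])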

Step 4: Connect Web Application to RDS

1. Retrieve WebServer IP from AWS Details.

2. Open a browser → Enter WebServer IP → Click RDS.

3. Enter:
 Endpoint: (Paste the copied RDS endpoint).

 Database: lab.

 Username: main.

 Password: lab-password.

4. Click Submit → The Address Book will appear.

5. Test by adding, editing, and deleting contacts.

Output:

 Security Group created → Allows EC2 to connect to RDS.

 DB Subnet Group created → Ensures availability in multiple zones.

 Amazon RDS MySQL instance launched → Multi-AZ deployment enabled.

 Web application successfully connected to RDS.

 Address Book displayed → Data stored and retrieved in real time.

Remarks:

 Security Groups should always limit access to required instances only.

 Multi-AZ deployments ensure automatic failover and data replication.

 Automatic backups should be enabled in production environments.


Conclusion:

In this experiment, we successfully deployed an Amazon RDS MySQL Multi-AZ instance,


ensuring high availability and failover support. We configured security groups to allow
controlled access from a web application and set up a DB Subnet Group for redundancy. By
connecting a web-based Address Book to the RDS database, we demonstrated real-time
data storage and retrieval. This experiment highlights the importance of secure database
management and high-availability configurations in cloud environments.
Hands-on Lab Manual (Experiment 8)

Exercise 8: Create Identity and Access Management (IAM) users, groups, roles, and policies in AWS, and manage access permissions to AWS resources.
Prerequisites:

You should already be familiar with:

 Basic AWS Console navigation.

 Basic understanding of cloud computing concepts.

 AWS Free Tier account access.

Objectives:

After completion of this lab, the student shall be able to:

 Understand the purpose of IAM in AWS.

 Create and manage Users, Groups, Roles, and Policies.

 Assign permissions to control access to AWS resources.

Underlying Concept / Theory:

AWS Identity and Access Management (IAM) is a web service that enables Amazon Web
Services (AWS) customers to manage users and user permissions in AWS. With IAM, you can
centrally manage users, security credentials such as access keys, and permissions that
control which AWS resources users can access.
Problem Statement:

 Users:

 user-1

 user-2

 user-3

 Groups:

 EC2-Admin (Full EC2 Control: View, Start, and Stop)

 EC2-Support (EC2 Read-Only Access)

 S3-Support (S3 Read-Only Access)

 User Assignments:

 user-3 → EC2-Admin

 user-2 → EC2-Support

 user-1 → S3-Support

Steps:
1. In the search box to the right of Services, search for and choose IAM to open
the IAM console.

2. In the navigation pane on the left, choose Users. The following IAM users have been created for you:

 user-1

 user-2

 user-3

3. Choose the user-1 link. Notice that user-1 does not have any permissions. Choose the Groups tab; user-1 is also not a member of any groups.

4. In the navigation pane on the left, choose User groups. The following groups have already been created for you:

 EC2-Admin

 EC2-Support

 S3-Support

5. Choose the EC2-Support group link. This group has a managed policy associated with it, called AmazonEC2ReadOnlyAccess. In the navigation pane on the left, choose User groups.

6. Choose the S3-Support group link and then choose the Permissions tab. The S3-Support group has the AmazonS3ReadOnlyAccess policy attached.


7. Choose the EC2-Admin group link and then choose the Permissions tab.

This group is slightly different from the other two. Instead of a managed policy, it has an inline policy, which is a policy assigned to just one user or group. Inline policies are typically used to apply permissions for one-off situations.

8. Add user-1 to the S3-Support Group

 In the left navigation pane, choose User groups.

 Choose the S3-Support group link.

 Choose the Users tab.

 In the Users tab, choose Add users.

 In the Add Users to S3-Support window, configure the following:

 Select user-1.

 At the bottom of the screen, choose Add users.

9. Add user-2 to the EC2-Support Group: using steps similar to the ones above, add user-2 to the EC2-Support group.

10. Add user-3 to the EC2-Admin Group: using steps similar to the ones above, add user-3 to the EC2-Admin group.
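For reference, the group assignments from steps 8-10 can also be applied with boto3 (the users and groups already exist in this lab; the policy attachments are shown for completeness):

import boto3

iam = boto3.client("iam")

# managed read-only policies on the support groups (pre-created in this lab)
iam.attach_group_policy(GroupName="S3-Support",
                        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.attach_group_policy(GroupName="EC2-Support",
                        PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess")

# steps 8-10: add each user to its group
for user, group in [("user-1", "S3-Support"),
                    ("user-2", "EC2-Support"),
                    ("user-3", "EC2-Admin")]:
    iam.add_user_to_group(GroupName=group, UserName=user)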

11. In the navigation pane on the left, choose Dashboard.

A Sign-in URL for IAM users in this account link is displayed on the right. It will look
similar to: https://123456789012.signin.aws.amazon.com/console

This link can be used to sign in to the AWS account you are currently using.

12. Paste the IAM users sign-in link into the address bar of your private browser
session and press Enter.
Next, you will sign in as user-1, who has been hired as your Amazon S3 storage support staff.

 IAM user name: user-1

 Password: Lab-Password1

Since your user is part of the S3-Support Group in IAM, they have permission to view a list of
Amazon S3 buckets and the contents.

In the EC2 console, in the left navigation pane, choose Instances.

You cannot see any instances. Instead, you see a message that states You are not
authorized to perform this operation. This is because this user has not been granted any
permissions to access Amazon EC2.

13. You will now sign in as user-2, who has been hired as your Amazon EC2 support person.

 IAM user name: user-2

 Password: Lab-Password2

You are now able to see an Amazon EC2 instance because you have Read Only permissions.
However, you will not be able to make any changes to Amazon EC2 resources.

 Select the instance named LabHost. In the Stop instance window, select Stop.

You will receive an error stating You are not authorized to perform this operation. This
demonstrates that the policy only allows you to view information, without making changes.

14. You will now sign in as user-3, who has been hired as your Amazon EC2 administrator.

 IAM user name: user-3

 Password: Lab-Password3
As an EC2 Administrator, you should now have permissions to Stop the Amazon EC2 instance.

 Select the instance named LabHost. In the Stop instance window, choose Stop.

The instance will enter the stopping state and shut down.

Output:
Remarks:

 The IAM Policies were correctly assigned to groups, ensuring users have only
the required permissions.

 The principle of least privilege was successfully implemented, reducing


security risks.

 The group-based access control simplifies user management, making it easier


to modify permissions when needed.

Conclusion:

 Explored pre-created IAM users and groups

 Inspected IAM policies as applied to the pre-created groups

 Followed a real-world scenario, adding users to groups with specific


capabilities enabled

 Located and used the IAM sign-in URL

 Experimented with the effects of policies on service access


Hands-on Lab Manual (Experiment 9)

Topic: Creating and managing AMIs and snapshots in AWS, and attaching an EBS volume to a running instance
Prerequisites:

You should already be familiar with:

- AWS Account – An active AWS account with appropriate permissions to create


AMIs, snapshots, and manage EBS volumes.

- EC2 Instance – A running EC2 instance (Linux or Windows) in AWS.

- IAM Role/Permissions – Necessary permissions for EC2, AMI, and EBS


management (AmazonEC2FullAccess or equivalent custom policies).

- Key Pair – Access to the key pair if connecting to the EC2 instance via SSH or RDP.

- Basic Knowledge – Understanding of AWS EC2, EBS, and AMI concepts.

Objectives:

After completion of this lab, the student shall be able to:

- Create an AMI (Amazon Machine Image) – Capture the current state of an EC2 instance
for backup or replication.

- Manage AMIs – View, share, or delete AMIs as needed.

- Create and Manage EBS Snapshots – Take snapshots of EBS volumes for backup
and recovery.

- Attach an EBS Volume to a Running EC2 Instance – Dynamically attach additional


storage to an active instance.

- Verify the Attached Volume – Format, mount, and use the new volume inside the instance.

Underlying Concept / Theory:

Introduction to AMIs, Snapshots, and EBS Volumes


Amazon Web Services (AWS) provides Amazon Machine Images (AMIs) and Elastic Block
Store (EBS) snapshots as essential tools for managing cloud-based virtual machines and
storage. These features help in creating backups, scaling infrastructure, and disaster
recovery.

1. Amazon Machine Image (AMI)

What is an AMI?

An Amazon Machine Image (AMI) is a template that contains the operating system,
application server, and installed software required to launch an EC2 instance. AMIs help in
deploying instances quickly with pre-configured settings.

Key Components of an AMI:

 Root Volume (EBS or Instance Store) – Contains the operating system and software.

 Launch Permissions – Controls which AWS accounts can use the AMI.

 Block Device Mappings – Defines storage volumes attached to the instance.

Example: Creating an AMI

1. Log in to AWS Management Console → Navigate to EC2 Dashboard.

2. Select an existing running EC2 instance → Click Actions → Image and templates
→ Create Image.

3. Provide an AMI Name, description, and select EBS Volumes to include.

4. Click Create Image and AWS will generate an AMI.

2. EBS Snapshots

What is an EBS Snapshot?


An EBS Snapshot is a point-in-time backup of an EBS Volume. AWS stores snapshots in
Amazon S3, but they are incremental, meaning only changed blocks are saved to reduce
storage costs.

Use Cases of Snapshots:

 Disaster Recovery – Restore data in case of failure.

 Migration – Move volumes between AWS regions or accounts.

 Scaling – Create new EBS volumes from snapshots for new instances.

Example: Creating a Snapshot

1. Navigate to AWS EC2 Dashboard → Elastic Block Store (EBS) → Volumes.

2. Select an EBS Volume → Click Actions → Create Snapshot.

3. Provide a Description and click Create Snapshot.

4. The snapshot can later be used to create new volumes.


3. Attaching an EBS Volume to a Running Instance

Why Attach an EBS Volume?

 Increase Storage Capacity – Add extra storage dynamically.

 Data Separation – Keep OS and application data on separate volumes.

 Performance Optimization – Use SSDs or provisioned IOPS for better speed.
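A boto3 sketch of the volume workflow described above (the availability zone and instance ID are placeholders; the volume must be created in the same AZ as the instance):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# create a 10 GiB gp3 volume and wait until it is available
volume_id = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10,
                              VolumeType="gp3")["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# attach it to a running instance (placeholder ID) as /dev/sdf
ec2.attach_volume(VolumeId=volume_id, InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

# take a point-in-time snapshot of the volume for backup
snap = ec2.create_snapshot(VolumeId=volume_id, Description="Experiment 9 backup")
print(snap["SnapshotId"])

Inside a Linux instance, the new device would then typically be formatted and mounted, for example with sudo mkfs -t xfs /dev/xvdf followed by sudo mount /dev/xvdf /mnt/data.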

Output:
Conclusion:

The objectives mentioned above are achieved successfully.


Hands-on Lab Manual (Experiment 10)

Experiment 10: CloudWatch Alarms and Dashboards
Objective: To set up CloudWatch alarms to monitor EC2 CPU usage, RDS databases, and ELB
load balancers. Also, to create dashboards for visualizing performance metrics.

Step 1: Creating a CloudWatch Alarm for EC2 CPU Usage

1. Open CloudWatch in AWS Console.

2. Create an Alarm:

o Click Alarms > Create alarm.

3. Select a Metric:

o Choose EC2 Metrics > Per-Instance Metrics.

o Select CPUUtilization for the instance.

o Click Select metric.


4. Set Conditions:

o Use Static threshold.

o Set condition to Greater than (e.g., 80%).

o Click Next.
5. Set Notifications:

o Select or create an SNS topic for alerts.

o Click Next.

6. Finalize the Alarm:

o Name the alarm (e.g., "CPUUtilization").

o Click Next > Create alarm.


7. Monitor CPUUtilization

Step 2: Creating a CloudWatch Alarm for RDS Databases

1. Open CloudWatch > Alarms.

2. Click Create alarm > Select metric.

3. Choose RDS Metrics (CPUUtilization, FreeStorageSpace, etc.).

4. Set threshold values (e.g., CPU > 75%).

5. Configure notifications and create the alarm.

Step 3: Creating a CloudWatch Alarm for ELB Load Balancers


1. Open CloudWatch.

2. Click Alarms > Create alarm.

3. Select ELB Metrics (RequestCount, Latency, etc.).

4. Set threshold values (e.g., Latency > 1s).

5. Configure SNS notifications and create the alarm.

Step 4: Creating a CloudWatch Dashboard

1. Open CloudWatch > Dashboards.

2. Click Create Dashboard and enter a name.


3. Add Widgets:

o Select EC2, RDS, and ELB metrics.

o Choose visualization types (graph, number, etc.).

4. Save the dashboard.
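The EC2 CPU alarm from Step 1 can equally be created with boto3; the instance ID and SNS topic ARN below are placeholders:

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

# alarm when average CPU of one instance exceeds 80% for two 5-minute periods
cw.put_metric_alarm(
    AlarmName="CPUUtilization",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:lab-alerts"],
)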


Conclusion:

We successfully created CloudWatch alarms for EC2, RDS, and ELB and set up a dashboard to
monitor performance.
