AWS Practitioner Lab Record
(SCOPE)
Reg. no.:
AWS Cloud Practitioner
CSE3015
Preface
Exercise (Objective)
Prerequisites
Objectives
Problem Statement
Algorithm
Coding (if applicable)
Output
Remarks
Conclusion
Challenges
Exp. No. | Date Conducted | Title | Remark
Launch an EC2 instance using the AWS Management Console by selecting appropriate instance types, AMIs, and configurations. Connect to the EC2 instance using SSH or Remote Desktop.
Prerequisites:
Objectives:
Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable compute
capacity in the cloud. It allows users to run virtual servers (instances) and manage them via
the AWS Management Console or programmatically.
Key concepts:
Problem Statement:
Launch and configure an EC2 instance from the AWS Management Console using a
specified AMI and instance type. Then, connect to the instance using SSH (for Linux) or RDP
(for Windows).
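The console workflow above can also be expressed programmatically. The following sketch builds the request parameters that boto3's ec2.run_instances() call accepts; the AMI ID and security group are placeholders, not values from this lab, and the key-pair name assumes the Experiment1_Key created during the procedure:

```python
# Sketch: build the request parameters for boto3's run_instances() call.
# The AMI ID is a placeholder; the key name assumes the lab's Experiment1_Key.
def build_launch_params(ami_id, instance_type="t2.micro",
                        key_name="Experiment1_Key"):
    return {
        "ImageId": ami_id,           # AMI chosen in the console
        "InstanceType": instance_type,
        "KeyName": key_name,         # key pair used later for SSH/RDP login
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_launch_params("ami-0123456789abcdef0")
# With AWS credentials configured, you would then run:
#   boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])
```

The same parameters correspond one-to-one to the fields filled in on the console's launch wizard.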
Procedure:
14. Click Get password, upload the Experiment1_Key key file, and decrypt the password.
15. Run the downloaded remote desktop file and enter the decrypted password.
16. Connect to the instance.
Output:
Remarks:
If the instance does not connect, verify security group settings (inbound rules).
The experiment demonstrates how to deploy and connect to a virtual server on Amazon EC2 using an RDP client. This provides foundational experience in cloud infrastructure management and serves as a stepping stone toward more complex architectures and deployments on AWS.
Hands-on Lab Manual (Experiment 2)
Exercise 2: S3 Bucket
Objectives:
Amazon Simple Storage Service (S3) is a scalable object storage service provided by AWS. It
allows you to store and retrieve any amount of data from anywhere on the web.
- Bucket: A container for storing objects (files).
- Bucket Policy: A JSON-based access policy attached to a bucket to define who can access
it and how.
- Versioning: Allows you to preserve, retrieve, and restore every version of every
object stored in a bucket.
Problem Statement:
Create an S3 bucket and configure it with appropriate bucket policies, versioning, and
encryption. Upload and download objects to/from the S3 bucket.
Algorithm:
1. Create an S3 Bucket:
2. Enable Versioning:
4. Enable Encryption:
5. Upload Objects:
6. Download Objects:
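As a sketch, the algorithm above maps onto the S3 API as follows. The dictionaries mirror the request shapes of boto3's create_bucket, put_bucket_versioning, put_bucket_encryption, put_object, and get_object calls; the bucket name and object key are placeholders:

```python
# Request shapes for each algorithm step; "my-lab-bucket" is a placeholder.
bucket = "my-lab-bucket"

# 1. Create the bucket:         s3.create_bucket(**create_params)
create_params = {"Bucket": bucket}

# 2. Enable versioning:         s3.put_bucket_versioning(**versioning_params)
versioning_params = {
    "Bucket": bucket,
    "VersioningConfiguration": {"Status": "Enabled"},
}

# 4. Enable default encryption: s3.put_bucket_encryption(**encryption_params)
encryption_params = {
    "Bucket": bucket,
    "ServerSideEncryptionConfiguration": {
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
}

# 5. Upload an object:          s3.put_object(**upload_params)
upload_params = {"Bucket": bucket, "Key": "notes.txt", "Body": b"hello"}

# 6. Download the object:       s3.get_object(Bucket=bucket, Key="notes.txt")
```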
Implementation Steps:
Output:
Fig 8: Uploaded File Successfully.
Remarks:
The AWS Management Console was used for all bucket operations.
Challenges:
1. Security Management: Configuring correct bucket policies and access controls can
be complex and prone to misconfiguration.
2. Cost Optimization: Unmonitored storage growth and data transfer costs can lead
to unexpectedly high bills.
3. Latency Issues: Retrieving large objects from S3 can result in latency, especially
for real-time applications.
Hands-on Lab Manual (Experiment 3)
Objectives:
AWS Lambda is a serverless compute service that lets you run code without provisioning or
managing servers. You simply upload your code as a Lambda function and AWS handles
everything needed to run and scale the execution.
Function creation
Handler definition
Trigger configuration
Program 3: Create an AWS Lambda function using Python or Node.js, set up event triggers, and
perform testing to ensure proper execution.
Algorithm:
Sign in to the AWS Management Console and open the Lambda console.
Create Function:
In the Lambda function's page, under "Function overview," click "Add trigger."
Click "Add."
Direct Invocation:
Use the provided API endpoint to send HTTP requests (e.g., using a
browser or tools like cURL or Postman).
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }
Execution Snapshots:
Output:
Conclusion:
Exercise (Objective) 4:
To register a domain name using Amazon Route 53 and upload HTML, CSS, and JavaScript files to an S3 bucket to host a website.
Prerequisites:
- Amazon AWS console, already familiar with the working and the various services
provided by the console.
Objectives:
Suppose that you want to host a static website on Amazon S3. You've registered a
domain with Amazon Route 53 (for example, “example.com”), and you want requests
for http://www.example.com and http://example.com to be served from your Amazon
S3 content. This experiment shows how to host a static website and create redirects on Amazon S3 for a website with a custom domain name that is registered with Route 53.
Amazon Route 53 – You use Route 53 to register domains and to define where you
want to route internet traffic for your domain. The example shows how to create
Route 53 alias records that route traffic for your domain (example.com) and
subdomain (www.example.com) to an Amazon S3 bucket that contains an HTML
file.
Amazon S3 – You use Amazon S3 to create buckets, upload a sample website
page, configure permissions so that everyone can see the content, and then
configure the buckets for website hosting.
Problem Statement:
Experiment 4: Register a domain name using Amazon Route 53 and upload HTML, CSS, and JavaScript files to an S3 bucket to host a website.
Procedure:
a. In the Search for domain section, enter the domain name that you want to
register, and choose Search to find out whether the domain name is available.
If the domain name that you want to register contains characters other than a-z, A-Z, 0-9,
and - (hyphen), note the following:
You can enter the name using the applicable characters. You don't need to
convert the name to Punycode.
A list of languages appears. Choose the language of the specified name. For
example, if you enter příklad ("example" in Czech), choose Czech (CES) or Czech
(CZE).
5. On the Pricing page, choose the number of years that you want to register the
domain for and whether you want us to automatically renew your domain
registration before the expiration date.
6. On the Contact information page, enter contact information for the domain
registrant, admin, tech, and billing contacts. The values that you enter here are applied to
all of the domains that you're registering.
7. On the Review page, review the information that you entered, and optionally
correct it, read the terms of service, and select the check box to confirm that you've read
the terms of service.
Choose Submit.
On this page you can view the status of the domain and see whether you need to respond to the registrant contact verification email. You can also choose to resend the verification email.
If you specified an email address for the registrant contact that has never been used to register
a domain with Route 53, some TLD registries require you to verify that the address is valid.
B. Configuring a static website using a custom domain registered with Route 53:
To support requests from both the root domain and subdomain, you create two buckets.
These bucket names must match your domain name exactly. In this example, the domain name
is example.com. You host your content out of the root domain bucket (example.com). You
create a redirect request for the subdomain bucket (www.example.com). If someone
enters www.example.com in their browser, they are redirected to example.com and see
the content that is hosted in the Amazon S3 bucket with that name.
The following instructions provide an overview of how to create your buckets for website
hosting. For detailed, step-by-step instructions on creating a bucket, see Creating a bucket.
1. Sign in to the AWS Management Console and open the Amazon S3
console at https://console.aws.amazon.com/s3/.
a. In the navigation bar on the top of the page, choose the name of the
currently displayed AWS Region. Next, choose the Region in which you want to
create a bucket.
f. Choose a Region that is geographically close to you to minimize latency and costs,
or to address regulatory requirements. The Region that you choose determines your
Amazon S3 website endpoint. For more information, see Website endpoints.
g. To accept the default settings and create the bucket, choose Create.
Step 3: Configure your root domain bucket for website hosting
In this step, you configure your root domain bucket (example.com) as a website. This
bucket will contain your website content. When you configure a bucket for website hosting,
you can access the website using the Website endpoints.
3. In the buckets list, choose the name of the bucket that you want to enable
static website hosting for.
4. Choose Properties.
8. In Index document, enter the file name of the index document, typically index.html.
The index document name is case sensitive and must exactly match the file name of the
HTML index document that you plan to upload to your S3 bucket. When you configure a
bucket for website hosting, you must specify an index document. Amazon S3 returns this
index document when requests are made to the root domain or any of the subfolders. For
more information, see Configuring an index document.
9. To provide your own custom error document for 4XX class errors, in Error
document, enter the custom error document file name.
The error document name is case sensitive and must exactly match the file name of the
HTML error document that you plan to upload to your S3 bucket. If you don't specify a
custom error document and an error occurs, Amazon S3 returns a default HTML error
document. For more information, see Configuring a custom error document.
For example, you can conditionally route requests according to specific object key names or
prefixes in the request. For more information, see Configure redirection rules to use
advanced conditional redirects.
Amazon S3 enables static website hosting for your bucket. At the bottom of the page,
under Static website hosting, you see the website endpoint for your bucket.
The Endpoint is the Amazon S3 website endpoint for your bucket. After you finish
configuring your bucket as a static website, you can use this endpoint to test your website.
After you edit block public access settings and add a bucket policy that allows public read
access, you can use the website endpoint to access your website.
In the next step, you configure your subdomain (www.example.com) to redirect requests to
your domain (example.com).
After you configure your root domain bucket for website hosting, you can configure
your subdomain bucket to redirect all requests to the domain. In this example, all
requests for www.example.com are redirected to example.com.
2. Choose Properties.
5. In the Target bucket box, enter your root domain, for example, example.com.
If you want to track the number of visitors accessing your website, you can optionally
enable logging for your root domain bucket. For more information, see Logging requests
with server access logging. If you plan to use Amazon CloudFront to speed up your website,
you can also use CloudFront logging.
2. In the same Region where you created the bucket that is configured as a
static website, create a bucket for logging, for example logs.example.com.
3. Create a folder for the server access logging log files (for example, logs).
8. Choose Enable.
9. Under the Target bucket, choose the bucket and folder destination for the
server access logs:
2. Choose the bucket name, and then choose the logs folder.
In your log bucket, you can now access your logs. Amazon S3 writes website access logs to your
log bucket every 2 hours.
In this step, you upload your index document and optional website content to your root
domain bucket.
When you enable static website hosting for your bucket, you enter the name of the index
document (for example, index.html). After you enable static website hosting for the
bucket, you upload an HTML file with this index document name to your bucket.
If you don't have an index.html file, you can use the following HTML to create one:
Coding:
Experiment 4: HTML
<!DOCTYPE html>
<html>
<head>
    <title>My Website Home Page</title>
</head>
<body>
<h1>Welcome to my website</h1>
</body>
</html>
The index document file name must exactly match the index document name that you
enter in the Static website hosting dialog box. The index document name is case sensitive.
For example, if you enter index.html for the Index document name in the Static website
hosting dialog box, your index document file name must also be index.html and
not Index.html.
4. In the buckets list, choose the name of the bucket that you want to use to host
a static website.
5. Enable static website hosting for your bucket, and enter the exact name of your
index document (for example, index.html). For more information, see Enabling website
hosting.
Drag and drop the index file into the console bucket listing.
Choose Upload, and follow the prompts to choose and upload the index file.
The error document name is case sensitive and must exactly match the name that you enter
when you enable static website hosting. For example, if you enter 404.html for the Error
document name in the Static website hosting dialog box, your error document file name
must also be 404.html.
5. In the buckets list, choose the name of the bucket that you want to use to host
a static website.
6. Enable static website hosting for your bucket, and enter the exact name of your
error document (for example, 404.html). For more information, see Enabling website
hosting and Configuring a custom error document.
Drag and drop the error document file into the console bucket listing.
Choose Upload, and follow the prompts to choose and upload the index file.
By default, Amazon S3 blocks public access to your account and buckets. If you want to use
a bucket to host a static website, you can use these steps to edit your block public access
settings.
2. Choose the name of the bucket that you have configured as a static website.
3. Choose Permissions.
4. Under Block public access (bucket settings), choose Edit.
After you edit S3 Block Public Access settings, you can add a bucket policy to grant public read
access to your bucket. When you grant public read access, anyone on the internet can
access your bucket.
2. Choose Permissions.
4. To grant public read access for your website, copy the following bucket policy,
and paste it in the Bucket policy editor.
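The policy JSON itself did not survive in this copy of the manual; the standard public-read bucket policy from the AWS documentation, with Bucket-Name as a placeholder, looks like this:

```python
import json

# Standard S3 public-read bucket policy (as published in the AWS docs).
# "Bucket-Name" is a placeholder to replace with your own bucket name.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::Bucket-Name/*"],
        }
    ],
}
print(json.dumps(bucket_policy, indent=4))
```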
In the preceding example bucket policy, Bucket-Name is a placeholder for the bucket name.
To use this bucket policy with your own bucket, you must update this name to match your
bucket name.
Choose Save changes.
A message appears indicating that the bucket policy has been successfully added.
After you configure your domain bucket to host a public website, you can test your
endpoint. For more information, see Website endpoints. You can only test the endpoint for
your domain bucket because your subdomain bucket is set up for website redirect and not
static website hosting.
2. Choose Properties.
3. At the bottom of the page, under Static website hosting, choose your
Bucket website endpoint.
Step 11: Add alias records for your domain and subdomain
In this step, you create the alias records that you add to the hosted zone for your domain, mapping example.com and www.example.com. Instead of using IP addresses, the alias records
use the Amazon S3 website endpoints. Amazon Route 53 maintains a mapping between the
alias records and the IP addresses where the Amazon S3 buckets reside. You create two
alias records, one for your root domain and one for your subdomain.
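A sketch of the change batch behind those two alias records, in the shape Route 53's ChangeResourceRecordSets API expects. The website endpoint shown and the hosted zone ID are placeholders; the real zone ID is a per-Region constant for S3 website endpoints published in the AWS docs:

```python
# Build one alias-record change in Route 53's change-batch shape.
def alias_record(name, s3_website_dns, s3_zone_id):
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",                     # alias records are type A
            "AliasTarget": {
                "HostedZoneId": s3_zone_id,  # per-Region constant for S3 websites
                "DNSName": s3_website_dns,   # S3 website endpoint, not the REST endpoint
                "EvaluateTargetHealth": False,
            },
        },
    }

change_batch = {
    "Changes": [
        alias_record("example.com",
                     "s3-website-us-west-2.amazonaws.com", "ZONE-ID-PLACEHOLDER"),
        alias_record("www.example.com",
                     "s3-website-us-west-2.amazonaws.com", "ZONE-ID-PLACEHOLDER"),
    ]
}
```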
In the list of hosted zones, choose the name of the hosted zone that matches
your domain name.
The bucket name should match the name that appears in the Name box. In the Choose S3 bucket list, the bucket name appears with the Amazon S3 website endpoint for the Region where the bucket was created, for example, s3-website-us-west-1.amazonaws.com (example.com).
The bucket name is the same as the name of the record that you're creating.
If your bucket does not appear in the Choose S3 bucket list, enter the Amazon S3 website endpoint for the Region where the bucket was created, for example, s3-website-us-west-2.amazonaws.com. For a complete list of Amazon S3 website endpoints, see Amazon S3 Website endpoints. For more information about the alias target, see Value/route traffic to in the Amazon Route 53 Developer Guide.
Verify that the website and the redirect work correctly. In your browser, enter your URLs. In
this example, you can try the following URLs:
If your website or redirect links don't work, you can try the following:
Check name servers – If your web page and redirect links don't work after you've cleared
your cache, you can compare the name servers for your domain and the name servers for
your hosted zone. If the name servers don't match, you might need to update your
domain name servers to match those listed under your hosted zone.
Output:
Conclusion:
Objectives:
Problem Statement:
This experiment aims to address the challenge of building a custom network infrastructure
by configuring a Virtual Private Cloud (VPC) in AWS. The objective is to set up a secure,
isolated environment with public and private subnets, proper routing via Internet Gateway
and NAT Gateway, and enforce access control using security groups. This configuration will
simulate a real-world scenario where frontend services (e.g., web servers) are internet-
facing, while backend services (e.g., databases) are protected within private networks.
Steps
2. Create a VPC
Select "In 1 AZ" for NAT Gateway to allow private subnet internet access.
Preview -
8. Create a Public Subnet:
Both subnets will have IP addresses in the 10.0.2.x and 10.0.3.x ranges, respectively.
11. In the left navigation pane, choose Route Table
Type: HTTP
Protocol: TCP
Port: 80
Internet Gateway
NAT Gateway
Subnets
Subnet 1 - Private Created
Route Table
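The subnet layout described in the steps above can be checked with Python's ipaddress module. The /16 VPC block and /24 subnet sizes are assumptions consistent with the 10.0.2.x and 10.0.3.x ranges mentioned earlier, not values copied from the console:

```python
import ipaddress

# Assumed CIDR blocks: a 10.0.0.0/16 VPC with one /24 per subnet.
vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet = ipaddress.ip_network("10.0.2.0/24")   # 10.0.2.x range
private_subnet = ipaddress.ip_network("10.0.3.0/24")  # 10.0.3.x range

# Both subnets must fall inside the VPC and must not overlap each other.
assert public_subnet.subnet_of(vpc)
assert private_subnet.subnet_of(vpc)
assert not public_subnet.overlaps(private_subnet)
print(public_subnet.num_addresses)  # 256 addresses per /24 (AWS reserves 5 of them)
```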
Remarks:
The route table for private subnets was updated to include Exp5-subnet-private2, ensuring
proper network configuration.
A security group (Web Security Group) was created to allow HTTP traffic, ensuring that web
requests can reach instances within the VPC.
The security group is configured to permit inbound HTTP traffic from any IP (0.0.0.0/0),
which is necessary for public-facing web applications but should be restricted for security-
sensitive applications.
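Expressed as parameters for the EC2 authorize-security-group-ingress API (boto3's authorize_security_group_ingress call), the inbound rule described above has the following shape; the security group ID is a placeholder:

```python
# Inbound HTTP rule for the Web Security Group, in boto3 request shape.
# The group ID below is a placeholder, not a value from this lab.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [
                {
                    "CidrIp": "0.0.0.0/0",  # HTTP from anywhere; restrict for
                    "Description": "Allow HTTP",  # security-sensitive applications
                }
            ],
        }
    ],
}
```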
Conclusion:
The AWS VPC networking setup has been successfully configured with public and private
subnets, a route table association, and security group rules to allow web traffic.
Hands-on Lab Manual (Experiment 6)
Objectives:
Implement a simple RESTful API using Amazon API Gateway and AWS Lambda.
A RESTful API (Representational State Transfer API) follows REST principles and
allows interaction between distributed systems using HTTP methods like GET, POST,
PUT, and DELETE.
Amazon API Gateway is a fully managed service that enables developers to create, deploy,
and manage secure APIs. It acts as a front door for applications, handling authentication,
request validation, and traffic management.
AWS Lambda allows running code without provisioning servers. It supports various
programming languages and integrates with API Gateway to execute functions upon API
requests.
Problem Statement:
Create a RESTful API using Amazon API Gateway and AWS Lambda that performs basic
CRUD (Create, Read, Update, Delete) operations on a user database.
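A minimal sketch of a Lambda handler that routes API Gateway requests to the four CRUD operations. The in-memory dictionary stands in for the user database (a real deployment would use a persistent store such as DynamoDB), and the field names are illustrative assumptions:

```python
import json

# In-memory stand-in for the user database; not persistent across invocations.
USERS = {}

def lambda_handler(event, context):
    method = event.get("httpMethod", "GET")
    body = json.loads(event.get("body") or "{}")
    params = event.get("queryStringParameters") or {}
    user_id = body.get("id") or params.get("id")

    if method == "POST":                    # Create
        USERS[user_id] = body
        return {"statusCode": 201, "body": json.dumps(body)}
    if method == "GET":                     # Read
        user = USERS.get(user_id)
        return {"statusCode": 200 if user else 404, "body": json.dumps(user)}
    if method == "PUT":                     # Update
        USERS[user_id] = body
        return {"statusCode": 200, "body": json.dumps(body)}
    if method == "DELETE":                  # Delete
        USERS.pop(user_id, None)
        return {"statusCode": 204, "body": ""}
    return {"statusCode": 405, "body": json.dumps("Method not allowed")}
```

API Gateway passes the HTTP method, query string, and request body in the event object, so the same function can back all four routes.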
Algorithm:
Choose "Author from scratch" and select Python or Node.js as the runtime.
Coding:
Program 6: HelloLambda
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }
Output:
After deploying and invoking the API, the expected response is:
Remarks:
AWS Lambda executed the function and returned the expected response.
Conclusion:
In this experiment, we successfully created a RESTful API using Amazon API Gateway and
AWS Lambda. We implemented a serverless function that handles HTTP requests and
returns a JSON response. This demonstrates how to build scalable and cost-efficient APIs
using AWS services.
Screenshots / Visuals:
Hands-on Lab Manual (Experiment 7)
Objectives:
To deploy and configure an Amazon RDS MySQL Multi-AZ instance, secure it with a security
group, define a DB subnet group, and connect it to a web application for data persistence.
Amazon RDS Multi-AZ provides high availability and automatic failover by maintaining a
standby database in another Availability Zone (AZ). If the primary database fails, AWS
automatically redirects traffic to the standby instance.
Problem Statement:
Launch an RDS instance by selecting a database engine (e.g., MySQL, PostgreSQL, SQL Server) and configuring database settings, including storage, security, and backups.
Algorithm:
5. Click Create.
3. Configure:
Storage: 20 GB SSD.
VPC: Lab VPC.
5. Click Create Database and wait for the status to change to Available.
3. Enter:
Endpoint: (Paste the copied RDS endpoint).
Database: lab.
Username: main.
Password: lab-password.
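The configuration above maps roughly onto the parameters of boto3's create_db_instance call as sketched below. The instance class is an assumption; the identifier is a placeholder; the engine, storage size, Multi-AZ setting, and credentials come from the lab steps:

```python
# Sketch of the lab's RDS configuration in boto3 request shape.
# DBInstanceClass is assumed; DBInstanceIdentifier is a placeholder.
rds_params = {
    "DBInstanceIdentifier": "lab-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,            # 20 GB SSD, as configured in the lab
    "MultiAZ": True,                   # standby replica in a second AZ
    "MasterUsername": "main",
    "MasterUserPassword": "lab-password",
    "DBName": "lab",
}
# With credentials configured:
#   boto3.client("rds").create_db_instance(**rds_params)
```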
Output:
Remarks:
Objectives:
AWS Identity and Access Management (IAM) is a web service that enables Amazon Web
Services (AWS) customers to manage users and user permissions in AWS. With IAM, you can
centrally manage users, security credentials such as access keys, and permissions that
control which AWS resources users can access.
Problem Statement:
Users:
user-1
user-2
user-3
Groups:
User Assignments:
user-3 → EC2-Admin
user-2 → EC2-Support
user-1 → S3-Support
Steps:
1. In the search box to the right of Services, search for and choose IAM to open
the IAM console.
2. In the navigation pane, choose Users. The following IAM users have been created for you:
user-1
user-2
user-3
3. Choose the user-1 link. Notice that user-1 does not have any permissions. Choose the Groups tab; user-1 is also not a member of any groups.
4. In the navigation pane, choose User groups. The following groups have been created for you:
EC2-Admin
EC2-Support
S3-Support
5. Choose the EC2-Support group link. This group has a Managed Policy associated with it, called AmazonEC2ReadOnlyAccess. In the navigation pane on the left, choose User groups.
6. Choose the S3-Support group link and then choose the Permissions tab.
This Group is slightly different from the other two. Instead of a Managed Policy, it has an Inline
Policy, which is a policy assigned to just one User or Group. Inline Policies are typically used
to apply permissions for one-off situations.
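The lab does not reproduce the S3-Support group's inline policy text, but an inline policy granting read-only S3 access would typically look like this sketch (action names are the standard S3 read actions; the policy is illustrative, not the lab's exact document):

```python
# Illustrative inline policy granting read-only access to Amazon S3.
# This is the standard IAM policy-document shape, not the lab's exact policy.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],  # read-only S3 actions
            "Resource": "*",
        }
    ],
}
```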
Select user-1.
9. Add user-2 to the EC2-Support Group, Using similar steps to the ones above, add
user-2 to the EC2-Support group.
10. Add user-3 to the EC2-Admin Group, Using similar steps to the ones above, add
user- 3 to the EC2-Admin group.
A Sign-in URL for IAM users in this account is displayed on the right. It will look similar to: https://123456789012.signin.aws.amazon.com/console
This link can be used to sign in to the AWS account you are currently using.
12. Paste the IAM users sign-in link into the address bar of your private browser
session and press Enter.
Next, you will sign in as user-1, who has been hired as your Amazon S3 storage support staff.
Password: Lab-Password1
Since your user is part of the S3-Support Group in IAM, they have permission to view a list of
Amazon S3 buckets and the contents.
You cannot see any instances. Instead, you see a message that states You are not
authorized to perform this operation. This is because this user has not been granted any
permissions to access Amazon EC2.
13. You will now sign in as user-2, who has been hired as your Amazon EC2 support person.
Password: Lab-Password2
You are now able to see an Amazon EC2 instance because you have Read Only permissions.
However, you will not be able to make any changes to Amazon EC2 resources.
select Stop.
You will receive an error stating You are not authorized to perform this operation. This
demonstrates that the policy only allows you to view information, without making changes.
14. You will now sign in as user-3, who has been hired as your Amazon EC2 administrator.
Password: Lab-Password3
As an EC2 Administrator, you should now have permissions to stop the Amazon EC2 instance. Select the instance and choose Stop. The instance will enter the stopping state and shut down.
Output:
Remarks:
The IAM Policies were correctly assigned to groups, ensuring users have only
the required permissions.
Conclusion:
- Key Pair – Access to the key pair if connecting to the EC2 instance via SSH or RDP.
Objectives:
- Create an AMI (Amazon Machine Image) – Capture the current state of an EC2 instance
for backup or replication.
- Create and Manage EBS Snapshots – Take snapshots of EBS volumes for backup
and recovery.
- Verify the Attached Volume – Format, mount, and use the new volume inside the instance.
What is an AMI?
An Amazon Machine Image (AMI) is a template that contains the operating system,
application server, and installed software required to launch an EC2 instance. AMIs help in
deploying instances quickly with pre-configured settings.
Root Volume (EBS or Instance Store) – Contains the operating system and software.
Launch Permissions – Controls which AWS accounts can use the AMI.
2. Select an existing running EC2 instance → Click Actions → Image and templates
→ Create Image.
2. EBS Snapshots
Scaling – Create new EBS volumes from snapshots for new instances.
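The image and snapshot operations above have the following boto3 request shapes as a sketch; the instance and volume IDs are placeholders:

```python
# Sketches of the backup operations in boto3 request shape.
# Instance and volume IDs below are placeholders.

# Create an AMI from a running instance: ec2.create_image(**create_image_params)
create_image_params = {
    "InstanceId": "i-0123456789abcdef0",
    "Name": "lab-backup-ami",
    "Description": "AMI capturing the instance's current state",
    "NoReboot": True,   # capture without stopping the instance
}

# Snapshot an EBS volume: ec2.create_snapshot(**create_snapshot_params)
create_snapshot_params = {
    "VolumeId": "vol-0123456789abcdef0",
    "Description": "EBS backup snapshot for recovery or scaling",
}
```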
Output:
Conclusion:
2. Create an Alarm:
3. Select a Metric:
o Click Next.
5. Set Notifications:
o Click Next.
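The alarm those steps create can be sketched in the shape of CloudWatch's PutMetricAlarm API (boto3's put_metric_alarm call). The metric, threshold, and instance ID below are assumptions for illustration, not values recorded in this lab:

```python
# Illustrative CloudWatch alarm: fire when average EC2 CPU utilization
# exceeds 80% over one 5-minute period. Instance ID is a placeholder.
alarm_params = {
    "AlarmName": "ec2-high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                 # seconds per evaluation window
    "EvaluationPeriods": 1,
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [],            # e.g. an SNS topic ARN for notifications
}
# With credentials configured:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```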
We successfully created CloudWatch alarms for EC2, RDS, and ELB and set up a dashboard to
monitor performance.