
AWS CLOUD VIRTUAL INTERNSHIP

Internship-I report submitted in partial fulfilment

of the requirements for the award of the degree of


BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING-DATA SCIENCE
By
K.Sai Teja (21135A4404)
K.Karthik (21135A4403)
P.Dinesh Naidu (20131A05K1)
P.Meherprem (20131A05H8)

Under the esteemed guidance of


Name of Internship Mentor: Dr. Ch. Avinash (Asst. Professor)
Name of Course Coordinator: Dr. CH. SITA KUMARI (Associate Professor)
Department of Computer Science and Engineering-Data Science

GAYATRI VIDYA PARISHAD COLLEGE OF ENGINEERING (AUTONOMOUS)

(Affiliated to JNTU-K, Kakinada)


VISAKHAPATNAM
2022 – 2023


Gayatri Vidya Parishad College of Engineering (Autonomous)


Visakhapatnam

CERTIFICATE
This report on
“AWS CLOUD VIRTUAL INTERNSHIP”
is a bonafide record of the Internship-I work submitted
By
K.SAI TEJA (21135A4404)
K.KARTHIK (21135A4403)
P.DINESH NAIDU (20131A05K1)
P.MEHERPREM (20131A05H8)

In their V semester in partial fulfilment of the requirements for the Award of Degree of
Bachelor of Technology in Computer
Science and Engineering-Data Science
During the academic year 2022-2023

Name of Course Coordinator: Dr. CH. SITA KUMARI (Associate Professor)

Head of the Department: Dr. D. HARINI (Associate Professor)

Internship Mentor
Dr.Ch.Avinash
(Assistant Professor)


ACKNOWLEDGEMENT

We would like to express our deep sense of gratitude to our esteemed


institute Gayatri Vidya Parishad College of Engineering (Autonomous), which has
provided us an opportunity to fulfil our cherished desire.

We thank our Internship Mentor, Dr. Ch. Avinash, Assistant Professor,

Department of Computer Science and Engineering, for the kind suggestions and
guidance for the successful completion of our internship.

We thank our Course Coordinator, Dr. CH. SITA KUMARI, Associate Professor,
Department of Computer Science and Engineering, for the kind suggestions and
guidance for the successful completion of our internship.

We are highly indebted to Dr. D. N. D. HARINI, Associate Professor and Head


of the Department of Computer Science and Engineering, Gayatri Vidya Parishad
College of Engineering (Autonomous), for giving us an opportunity to do the
internship in college.

We express our sincere thanks to our Principal, Dr. A. B. KOTESWARA RAO,
Gayatri Vidya Parishad College of Engineering (Autonomous), for his encouragement
during this internship and for giving us a chance to explore and learn new technologies.

We are very thankful to AICTE and EduSkills for providing this internship and for
helping us resolve every issue regarding it.

Finally, we are indebted to the teaching and non-teaching staff of the Computer
Science and Engineering Department for all their support in the completion of our internship.

K. SAI TEJA (21135A4404)

K. KARTHIK (21135A4403)

P. DINESH NAIDU (20131A05K1)

P. MEHERPREM (20131A05H8)


[Certificate pages for K. Sai Teja (21135A4404), K. Karthik (21135A4403), P. Dinesh Naidu (20131A05K1), and P. Meherprem (20131A05H8)]

ABSTRACT

The AWS Cloud Virtual Internship course consists of the following:

Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon that provides on-demand cloud computing
platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
These cloud computing web services provide distributed computing processing capacity and software
tools via AWS server farms. One of these services is Amazon Elastic Compute Cloud (EC2), which allows
users to have at their disposal a virtual cluster of computers, available all the time, through the Internet.
AWS's virtual computers emulate most of the attributes of a real computer, including hardware central
processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-
disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such
as web servers, databases, and customer relationship management (CRM).

AWS Academy Cloud Foundations is intended for students who seek an overall understanding of cloud
computing concepts, independent of specific technical roles. It provides a detailed overview of cloud
concepts, AWS core services, security, architecture, pricing, and support. Cloud computing is the on-
demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying,
owning, and maintaining physical data centers and servers, you can access technology services, such as
computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon
Web Services (AWS).
AWS Academy Cloud Architecting covers the fundamentals of building IT infrastructure on Amazon
Web Services, or AWS. The course is designed to teach solutions architects how to optimize the use of
the AWS Cloud by understanding AWS services and how these services fit into cloud-based solutions.
Cloud architecting is the practice of applying cloud characteristics to a solution that uses cloud services
and features to meet an organization’s technical needs and business use cases. A solution is similar to a
blueprint for a building. Software systems require architects to manage their size and complexity.


INDEX
CLOUD FOUNDATIONS

1. Cloud Concepts Overview…..………………………………………………………………………………………………………….12


1.1 Introducing Cloud Architecting ………………………………………………………………………………….............12
1.2 Advantages of Cloud Computing…………………….………………………………………………………………..……12
1.3 Introduction to Amazon Web Services…………………………………………………………………………………..12
1.4 Moving to the AWS Cloud – The AWS Cloud Adoption Framework……………………………………………13
2. Cloud Economics And Billing …………………………………………………………………………………..……………….……..13
2.1 Fundamentals Of Pricing…………………………………………………………………………..…………………………….13
2.2 Total Cost Of Ownership………………………………………………….……………………………………….…………....14
2.3 AWS Organizations………………………………………………………………………………………………………..…….…14
2.4 AWS Billing and Cost Management….……………………………………………………………………………...........15
2.5 Technical Support……………………………………………………………………………………………………………………15
3. AWS GLOBAL INFRASTRUCTURE OVERVIEW ….………………………………………………………………………............17
3.1 AWS Global Infrastructure………………………………………….………………………………………………..…..……17
3.2 AWS Services and services category overview………………………………………………………………..……..17
3.3 Summary……………………….………………………………………………………………………………………………………17
4. AWS CLOUD SECURITY………………….…………………………………………………………………………………………………18
4.1 AWS Shared responsibility model.………………………………………………………………………………..………..18
4.2 AWS Identity and Access Management….………………………………………………………………………..…….19
4.3 Securing a new AWS Account………………………………………………………………………………………………….19
4.4 Securing accounts……………………………….……………………………………………………………………..…………..19
4.5 Securing data on AWS……………………………………….……………………………………………………………………19
4.6 Working to ensure compliance……………………………………………………………………………………………….20
5. Networking And Content Delivery.……..……………………………………………………………………………………………20
5.1 Networking basics ……………………………………………….…………………………………………………………………20
5.2 Amazon VPC…..…………………….…………………………………………………………………………………….………….21
5.3 VPC networking…………………….…………………………………………………………………………………….………….21
5.4 VPC Security………………………………………………………………………………………………………………..………….22
5.5 Amazon Route 53…..………..……………………………………………………………………………………………..………22


5.6 Amazon CloudFront ……………………..…………………………….…………………………………………………………22


6. Compute………………………………………………………………………………………………………………………………………….23
6.1 Compute Services Overview…………………………..……………………………………………………………………….23
6.2 Amazon EC2…………………………………………….……………………………………………………………………………..24
6.3 Amazon EC2 Cost Optimization………………………..…………………………………………………………………….24
6.4 Container Services……………………………………….…………………………………………………………………………24
6.5 Introduction To AWS Lambda ….…………………………………………………………………………………............25
6.6 Introduction To AWS Elastic Beanstalk..………………………………………………………………………...........25
7. STORAGE………………………………………………………………………………………………………………………………………..26
7.1 Amazon Elastic Block Store…..………………………………………………………………………………………………..26
7.2 Amazon Simple Storage Service……………………………………………….………….…………………….…………..26
7.3 Amazon Elastic File System…………………………………………………….………………..…………………………….26
7.4 Amazon S3 Glacier……………………………………………………………………….…………………………………………27
8. Databases……………………………………….………………………………………………………………………………………….…...27
8.1 Amazon Relational Database Service………………………………………………………………………………….…..27
8.2 Amazon DynamoDB………………………………………………………………….………………………………………….…28
8.3 Amazon Redshift….………………………………………………………………………………………………………………...28
8.4 Amazon Aurora.……………………………………………………………………………………………………………………..28
9. CLOUD ARCHITECTURE…………………………………………………………………………………………………………………...30
9.1 AWS Well-Architected Framework…………………….……………….…………………………………………………..30
9.2 Reliability and Availability……….………………………….……………..……………………………………………………30
9.3 AWS Trusted Advisor……………………………………..……………………………………………………………………….30
10. Auto Scaling and Monitoring………………..……………………………………………………………………………….……………..31
10.1 Elastic Load Balancing……………………………………………………………………………………………………….31

10.2 Amazon CloudWatch………………………………………………………………………………………………………..32

10.3 Amazon EC2 Auto Scaling……….…………………………………………………………………………………………32


CLOUD ARCHITECTURE

1. Welcome to AWS Cloud Architecting ……………………………………………………………………..…………………33


1.1 Course objectives and overview………………………………………………………………………………….............33
1.2 Café business introduction………………………………………………………………………………………………..……33
1.3 Roles in Cloud computing………………..……………………………………………………………………………………..33
2. Introducing Cloud Architecting …………………………………………………………………………………..……………….….34
2.1 What is cloud architecting………………………………………………………………………..…………………………….34
2.2 The Amazon Web Services Well – Architected Framework……………………………………….…………....35
2.3 Best practices for building solution on AWS…………………………………………………………………..…….…35
2.4 AWS global infrastructure……………………………………………………………………………………………............36
3. Adding Storage Layer……………………………………………………………………………………………………………………….36
3.1 The simplest architecture……………………………………………………………………………………………..…..……36
3.2 Using Amazon S3……………………………………………………………………………………………………………..……..37
3.3 Storing data in Amazon S3………………………………………………………………………………………………………38
3.4 Moving data to and from amazon S3………………………………………………………………………………………38
3.5 Choosing Regions for your architecture…………………………………………………………………………..........38
4. Adding a Compute Layer……………….…………………………………………………………………………………………………39
4.1 Architectural need………………………………………………………………………………………………………..………..39
4.2 Adding compute with Amazon EC2…………………………………………………………………………………..…….40
4.3 Choosing an Amazon Machine Image (AMI) to launch an Amazon Elastic
Compute Cloud (Amazon EC2) instance……………………………………………………………………………..…..41

4.4 Selecting an Amazon EC2 instance type……………………………………………………………………..…………..41


4.5 Using data to configure an Amazon EC2 instance……………………………………………………………………41
4.6 Adding Storage to an Amazon EC2 instance…………………………………………………………………………….41
4.7 Adding EC2 pricing options……………………………………………………………………………………………………..42
4.8 Amazon EC2 considerations…………………………………………..……………………………………………………….42
5. Adding Database Layer.……………………………………………………………………………………………………………………43
5.1 Architectural need……………………………………………….…………………………………………………………………43
5.2 Database layer considerations…………………………………………………………………………………….………….44
5.3 Amazon Relational Database Service (Amazon RDS) ………………………………………………………………44
5.4 Amazon DynamoDB……………………………………………………………………………………………………..…………45
5.5 Database security controls……………………………………………………………………………………………..………45
5.6 Migrating data into AWS databases…………………………….…………………………………………………………45
6. Creating a Networking Environment……………………………………………………………………………………………….47


6.1 Architectural need………………………………………..………………………………………………………………………..47


6.2 Creating an AWS networking environment……………………………………………………………………………..47
6.3 Connecting your AWS networking environment …………………………………………………………………….48
6.4 Securing your AWS networking environment …………………………………………………………………………48
7. Connecting Networks……………………………………………………………………………………………………………………..49
7.1 Architectural need………………………………………………………………………………………………………………….50
7.2 Connecting to your remote network with AWS Site-to-Site VPN………….…………………….…………..51
7.3 Connecting to your remote network with AWS Direct Connect………………..…………………………….51
7.4 Connecting virtual private clouds (VPCs) in AWS with VPC peering…………………………………………51
7.5 Scaling your VPC network with AWS Transit Gateway…………………………………………..………………..52
7.6 Connecting your VPC to supported AWS services……………………………………………………………………52
8. Securing User and Application Access……………………………………………………………………………………………...52
8.1 Architectural need…………………..……………………………………………………………………………………………..53
8.2 Account users and AWS Identity and Access Management (IAM)……………………………………………53
8.3 Organizing users……………………………………………………………………………………………………………………..53
8.4 Federating users……………………………………………………………………………………………………………………..54
8.5 Multiple accounts……………………………………………………………………………………………………………………54
9. Implementing Elasticity, High Availability, and Monitoring……………………………………………………………...55
9.1 Architectural need……………………………………………..…………………………………………………………………..56
9.2 Scaling your compute resources………………………..….…………………………………………………………………56
9.3 Scaling your databases……………………………………..…………………………………………………………………….56
9.4 Designing an environment that's highly available……………………………………………………………………57
9.5 Monitoring………………………………………………………………………………………………………………………………58
10. Automating Your Architecture………………………………………………………………………………………….……………..59
10.1 Architectural need…………………………………………………………………………………………………………….59

10.2 Reasons to automate………………………………………………………………………………………………………..60

10.3 Automating your infrastructure…………………………………………………………………………………………60

10.4 Automating deployments………………………………………………………………………………………………….60

10.5 AWS Elastic Beanstalk……………………………………………………………………………………………………….60

11. Caching Content…………….………………………………………………………………………………………………………………..61


11.1 Architectural need…………………………………………………………………………………………………………….62

11.2 Overview of caching………………………………………………………………………………………………………….62

11.3 Edge caching……………………………………………………………………………………………………………………..62

11.4 Caching web sessions………………………………………………………………………………………………………..63


11.5 Caching databases…………………………………………………………………………………………………………….63

12. Building Decoupled Architectures……………………………………………………………………………………………………64


12.1 Architectural need……………………………………………..…………………………………………………….……….64

12.2 Decoupling your architecture…………………………………………………………………………..…….…………64

12.3 Decoupling with Amazon Simple Queue Service (Amazon SQS)………………………………………..65

12.4 Decoupling with Amazon Simple Notification Service (Amazon SNS)………………………………...66

12.5 Sending messages between cloud applications and on-premises with Amazon MQ………….66

13. Building Microservices and Serverless Architectures…………………………………………………………………….….68


13.1 Architectural need…………………………………………………………………………………………………………….69

13.2 Introducing microservices………………………………………………………………………………………..……….70

13.3 Building microservice applications with AWS container service…………………………………………71

13.4 Introducing serverless architectures………………………………………………………………………………….71

13.5 Building serverless architectures with AWS Lambda………………………………………………………….71

13.6 Extending serverless architectures with Amazon API Gateway………………………………………….72

13.7 Orchestrating microservices with AWS Step Functions…………………………………………………..…72

14. Planning for Disaster.……………………………………………………………………………………….…………………….……….73


14.1 Architectural need………………………………………………………………………………………………..73
14.2 Disaster planning strategies …………………………………………………………………………………74
14.3 Disaster recovery Patterns……………………………………………………………………………………74
BRIDGING TO CERTIFICATION CAPSTONE PROJECT……………………………………………………………..……………………75
Case Study…………………………………………………………………………………………………………………………….…………………..86


MODULE 1

CLOUD CONCEPTS OVERVIEW


Section 1: Introduction to cloud computing :

Some key takeaways from this section of the module include:


• Cloud computing is the on-demand delivery of IT resources via the internet with pay-as-you-go pricing.
• Cloud computing enables you to think of (and use) your infrastructure as software.
• There are three cloud service models: IaaS, PaaS, and SaaS.
• There are three cloud deployment models: cloud, hybrid, and on-premises or private
cloud.
• There are many AWS service analogs for the traditional, on-premises IT space.

Section 2: Advantages of cloud computing :


The key takeaways from this section of the module include the six advantages of cloud
computing:
• Trade capital expense for variable expense
• Massive economies of scale
• Stop guessing capacity
• Increase speed and agility
• Stop spending money on running and maintaining data centers
• Go global in minutes

Fig. 1.1 Applications of Cloud Computing


Section 3: Introduction to Amazon Web Services (AWS) :


The key takeaways from this section of the module include:
• AWS is a secure cloud platform that offers a broad set of global cloud-based products
called services that are designed to work together.
• There are many categories of AWS services, and each category has many services to
choose from.
• Choose a service based on your business goals and technology requirements.
• There are three ways to interact with AWS services: the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the software development kits (SDKs), as in the brief sketch below.
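
For illustration, a minimal sketch of SDK-based interaction using the AWS SDK for Python (boto3); it is not part of the course labs and assumes boto3 is installed and credentials are already configured.

import boto3

# Create a low-level client for Amazon S3 (credentials are read from the
# environment, shared config files, or an attached IAM role).
s3 = boto3.client("s3")

# Call the ListBuckets API and print each bucket name in the account.
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])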

Section 4: Moving to the AWS Cloud – The AWS Cloud Adoption Framework (AWS CAF):
The key takeaways from this section of the module include:
• Cloud adoption is not instantaneous for most organizations and requires a thoughtful,
deliberate strategy and alignment across the whole organization.
• The AWS CAF was created to help organizations develop efficient and effective plans
for their cloud adoption journey.
• The AWS CAF organizes guidance into six areas of focus, called perspectives.
• Perspectives consist of sets of business or technology capabilities that are the
responsibility of key stakeholders.

Module1: Summary :
In summary, in this module we learned how to:
• Define different types of cloud computing
• Describe six advantages of cloud computing
• Recognize the main AWS service categories and core services
• Review the AWS Cloud Adoption Framework


MODULE – 2

CLOUD ECONOMICS AND BILLING


Section 1: Fundamentals of pricing :

→ In summary, while the number and types of services offered by AWS have increased
dramatically, the AWS philosophy on pricing has not changed. At the end of each month, you
pay only for what you use, and you can start or stop using a product at any time. No
long-term contracts are required.
→ The best way to estimate costs is to examine the fundamental characteristics for each
AWS service, estimate your usage for each characteristic, and then map that usage to the
prices that are posted on the AWS website. The service pricing strategy gives you the
flexibility to choose the services that you need for each project and to pay only for what
you use.
→ There are several free AWS services, including:
• Amazon VPC
• Elastic Beanstalk
• AWS CloudFormation
• IAM
• Automatic scaling services
• AWS OpsWorks
• Consolidated Billing
→ While the services themselves are free, the resources that they provision might not be
free. In most cases, there is no charge for inbound data transfer or for data transfer
between other AWS services within the same AWS Region. There are some exceptions,
so be sure to verify data transfer rates before you begin to use the AWS service.
Outbound data transfer costs are tiered.
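
To make tiered outbound data transfer pricing concrete, the following small Python sketch walks a usage amount through price tiers; the tier sizes and per-GB rates are hypothetical placeholders, not actual AWS prices.

def outbound_transfer_cost(gb_out: float) -> float:
    # Each tuple is (tier size in GB, hypothetical price per GB in USD).
    tiers = [
        (1, 0.00),       # first 1 GB per month free (hypothetical)
        (10_240, 0.09),  # next ~10 TB (hypothetical rate)
        (40_960, 0.085), # next ~40 TB (hypothetical rate)
    ]
    cost, remaining = 0.0, gb_out
    for tier_size, rate in tiers:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(outbound_transfer_cost(500))   # e.g. 500 GB transferred out in a month
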
Section 2: Total cost of ownership :
→ It is difficult to compare an on-premises IT delivery model with the AWS Cloud. The
two are different because they use different concepts and terms. Using on-premises IT
involves a discussion that is based on capital expenditure, long planning cycles, and
multiple components to buy, build, manage, and refresh resources over time. Using the
AWS Cloud involves a discussion about flexibility, agility, and consumption-based costs.
→ Some of the costs that are associated with data center management include:


• Server costs for both hardware and software, and facilities costs to house the
equipment.
• Storage costs for the hardware, administration, and facilities.
• Network costs for hardware, administration, and facilities.
• And IT labor costs that are required to administer the entire solution.
→ Soft benefits include:
• Reusing services and applications, which enables you to define (and redefine)
solutions by using the same cloud services
• Increased developer productivity

Fig 2.1 Total Cost of Ownership

Section 3: AWS Organizations :


AWS Organizations enables you to:
• Create service control policies (SCPs) that centrally control AWS services across
multiple AWS accounts.
• Create groups of accounts and then attach policies to a group to ensure that the
correct policies are applied across the accounts.
• Simplify account management by using application programming interfaces (APIs) to
automate the creation and management of new AWS accounts.
• Simplify the billing process by setting up a single payment method for all the AWS
accounts in your organization. With consolidated billing, you can see a combined view of
charges that are incurred by all your accounts, and you can take advantage of pricing
benefits from aggregated usage. Consolidated billing provides a central location to
manage billing across all of your AWS accounts, and the ability to benefit from volume
discounts.
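
As a brief illustration of a service control policy (SCP), the following boto3 sketch creates and attaches one; the policy content, names, and organizational unit ID are hypothetical placeholders, and the organization must have all features enabled.

import json
import boto3

# Hypothetical SCP that prevents member accounts from leaving the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*"
    }]
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="DenyLeaveOrganization",
    Description="Prevent member accounts from leaving the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach the SCP to an organizational unit (placeholder ID).
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-exampleid-12345678")
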
Section 4: AWS Billing and Cost Management :
→ AWS Billing and Cost Management is the service that you use to pay your AWS bill,
monitor your usage, and budget your costs. Billing and Cost Management enables you to


forecast and obtain a better idea of what your costs and usage might be in the future so
that you can plan ahead.
→ You can set a custom time period and determine whether you would like to view your
data at a monthly or daily level of granularity.
→ With the filtering and grouping functionality, you can further analyze your data using
a variety of available dimensions. The AWS Cost and Usage Report Tool enables you to
identify opportunities for optimization by understanding your cost and usage data trends
and how you are using your AWS implementation.
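
For example, cost and usage data can also be queried programmatically through the Cost Explorer API; this is a minimal boto3 sketch (not from the course), with placeholder dates, that assumes Cost Explorer is enabled on the account.

import boto3

# Query one month of unblended cost, grouped by service.
ce = boto3.client("ce")
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in result["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])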

Section 5: Technical Support :


→ Provide unique combination of tools and expertise:
• AWS Support
• AWS Support Plans
→ Support is provided for:
• Experimenting with AWS
• Production use of AWS
• Business-critical use of AWS
→ Proactive guidance :
• Technical Account Manager (TAM)
→ Best practices :
• AWS Trusted Advisor
→ Account assistance :
• AWS Support Concierge
Module2: Summary :
→ In summary, in this module, we:
• Explored the fundamentals of AWS pricing
• Reviewed Total Cost of Ownership concepts
• Reviewed an AWS Pricing Calculator estimate.
→ Total Cost of Ownership is a concept to help you understand and compare the costs
that are associated with different deployments. AWS provides the AWS Pricing Calculator
to assist you with the calculations that are needed to estimate cost savings.
→ Use the AWS Pricing Calculator to:
• Estimate monthly costs
• Identify opportunities to reduce monthly costs
• Model your solutions before building them
→ AWS Billing and Cost Management provides you with tools to help you access,
understand, allocate, control, and optimize your AWS costs and usage. These tools
include AWS Bills, AWS Cost Explorer, AWS Budgets, and AWS Cost and Usage Reports.


→ These tools give you access to the most comprehensive information about your AWS
costs and usage including which AWS services are the main cost drivers. Knowing and
understanding your usage and costs will enable you to plan ahead and improve your
AWS implementation.

MODULE – 3

AWS GLOBAL INFRASTRUCTURE OVERVIEW


Section 1: AWS Global Infrastructure :
Some key takeaways from this section of the module include:
• The AWS Global Infrastructure consists of Regions and Availability Zones.
• Your choice of a Region is typically based on compliance requirements or to
reduce latency.
• Each Availability Zone is physically separate from other Availability Zones and has
redundant power, networking, and connectivity.
• Edge locations, and Regional edge caches improve performance by caching
content closer to users.

Fig 3.1 Components of Global Infrastructure
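
As a quick illustration of Regions and Availability Zones, the following boto3 sketch (not part of the course material) lists both; the Region name passed to the client is a placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List all Regions that are enabled for the account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# List the Availability Zones within the client's Region.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])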

Section 2: AWS services and service category overview :


AWS categories of services:
Analytics, Application Integration, AR and VR, Blockchain, Business Applications,
Compute, Cost Management, Customer Engagement, Database, Developer Tools, End
User Computing, Game Tech, Internet of Things, Machine Learning, Management and
Governance, Media Services, Migration and Transfer, Mobile, Networking and Content
Delivery, Robotics, Satellite, Security Identity and Compliance, Storage…


Fig 3.4 AWS Services

Module3: Summary :
In summary, in this module we learned how to:
• Identify the difference between AWS Regions, Availability Zones, and edge
locations
• Identify AWS service and service categories

MODULE – 4

AWS CLOUD SECURITY


Section 1: AWS shared responsibility model :
Some key takeaways from this section of the module include:
→ AWS and the customer share security responsibilities–
• AWS is responsible for security of the cloud
• Customer is responsible for security in the cloud
→ AWS is responsible for protecting the infrastructure—including hardware, software,
networking, and facilities—that run AWS Cloud services
→ For services that are categorized as infrastructure as a service (IaaS), the customer is
responsible for performing necessary security configuration and management tasks
• For example, guest OS updates and security patches, firewall, security group
configurations

Section 2: AWS Identity and Access Management (IAM) :


Some key takeaways from this section of the module include:


→ IAM policies are constructed with JavaScript Object Notation (JSON) and define
permissions.
• IAM policies can be attached to any IAM entity.
• Entities are IAM users, IAM groups, and IAM roles.
→ An IAM user provides a way for a person, application, or service to authenticate to
AWS.
→ An IAM group is a simple way to attach the same policies to multiple users.
→ An IAM role can have permissions policies attached to it, and can be used to delegate
temporary access to users or applications.

Fig 4.2 AWS identity and access management
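
To show what a JSON permissions policy attached to an IAM group can look like, here is a hedged boto3 sketch; the policy statement, bucket name, and group name are hypothetical placeholders.

import json
import boto3

# Hypothetical identity-based policy allowing read-only access to one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"]
    }]
}

iam = boto3.client("iam")
policy = iam.create_policy(
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
# Attach the managed policy to an IAM group so every member inherits it.
iam.attach_group_policy(GroupName="Developers",
                        PolicyArn=policy["Policy"]["Arn"])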

Section 3: Securing a new AWS account :


The key takeaways from this section of the module are all related to best practices for
securing an AWS account. Those best practice recommendations include:
• Secure logins with multi-factor authentication (MFA).
• Delete account root user access keys.
• Create individual IAM users and grant permissions according to the principle of
least privilege.
• Use groups to assign permissions to IAM users.
• Configure a strong password policy.
• Delegate using roles instead of sharing credentials.
• Monitor account activity using AWS CloudTrail.
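
As one small example of the strong password policy recommendation above, a boto3 sketch; the specific values are illustrative, not prescribed by the course.

import boto3

iam = boto3.client("iam")

# Enforce a strong password policy for IAM users (values are illustrative).
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
)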

Section 4: Securing accounts :


• AWS Organizations enables you to consolidate multiple AWS accounts so that


you centrally manage them.
• AWS Key Management Service (AWS KMS) is a service that enables you to create
and manage encryption keys, and to control the use of encryption across a wide
range of AWS services and your applications.
• Amazon Cognito provides solutions to control access to AWS resources from
your application. You can define roles and map users to different roles so your
application can access only the resources that are authorized for each user.
• AWS Shield is a managed distributed denial of service (DDoS) protection service
that safeguards applications that run on AWS. It provides always-on detection and
automatic inline mitigations that minimize application downtime and latency, so
there is no need to engage AWS Support to benefit from DDoS protection.

Section 5: Securing data on AWS :


AWS supports encryption of data at rest
• Data at rest = Data stored physically (on disk or on tape)
• You can encrypt data stored in any service that is supported by AWS KMS,
including:
• Amazon S3
• Amazon EBS
• Amazon Elastic File System (Amazon EFS)
• Amazon RDS managed databases
• Tools and options for controlling access to S3 data include –
• Amazon S3 Block Public Access feature: Simple to use.
• IAM policies: A good option when the user can authenticate using IAM.
• Bucket policies
• Access control lists (ACLs): A legacy access control mechanism.
• AWS Trusted Advisor bucket permission check: A free feature.

Fig 4.6 Security Infrastructure
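
The S3 Block Public Access feature mentioned above can be enabled per bucket; a minimal boto3 sketch follows, with a placeholder bucket name.

import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a bucket.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)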


Section 6: Working to ensure compliance :


Some key takeaways from this section of the module include:
• AWS security compliance programs provide information about the policies, processes,
and controls that are established and operated by AWS.
• AWS Config is used to assess, audit, and evaluate the configurations of AWS resources.
• AWS Artifact provides access to security and compliance reports.

Module4: Summary :

In summary, in this module we learned how to:


• Recognize the shared responsibility model
• Identify the responsibility of the customer and AWS
• Recognize IAM users, groups, and roles
• Describe different types of security credentials in IAM
• Identify the steps to securing a new AWS account
• Explore IAM users and groups
• Recognize how to secure AWS data
• Recognize AWS compliance programs

Module4: List of Labs :


Lab1 – Introduction to AWS IAM: This is a guided lab in which the instructions are there
to follow through which we get introduced to the AWS IAM

MODULE – 5

NETWORKING AND CONTENT DELIVERY


Section 1: Networking basics :
→ A computer network is two or more client machines that are connected together to
share resources
→ Each client machine in a network has a unique Internet Protocol (IP) address that
identifies it. An IP address is a numerical label in decimal format. Machines convert that
decimal number to a binary format.
→ A 32-bit IP address is called an IPv4 address. IPv6 addresses, which are 128 bits, are
also available. IPv6 addresses can accommodate more user devices.


→ A common method to describe networks is Classless Inter-Domain Routing (CIDR). The


CIDR address is expressed as follows:
• An IP address (which is the first address of the network)
• Next, a slash character (/)

• Finally, a number that tells you how many bits of the routing prefix must be fixed or
allocated for the network identifier

→ The Open Systems Interconnection (OSI) model is a conceptual model that is used to
explain how data travels over a network.

Fig 5.1 OSI layer
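
To make the CIDR notation above concrete, a short sketch with Python's standard ipaddress module (the address ranges are examples):

import ipaddress

# A /16 network: 16 bits are fixed for the network identifier,
# leaving 16 bits for host addresses.
network = ipaddress.ip_network("10.0.0.0/16")
print(network.num_addresses)        # 65536 addresses in the block

# Split the /16 into /24 subnets (256 addresses each) and show the first two.
subnets = list(network.subnets(new_prefix=24))
print(subnets[0], subnets[1])       # 10.0.0.0/24 10.0.1.0/24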


Section 2: Amazon VPC :
Some key takeaways from this section of the module include:
• A VPC is a logically isolated section of the AWS Cloud.

• A VPC belongs to one Region and requires a CIDR block.


• A VPC is subdivided into subnets.
• A subnet belongs to one Availability Zone and requires a CIDR block.
• Route tables control traffic for a subnet.
• Route tables have a built-in local route.
•You add additional routes to the table.
•The local route cannot be deleted.
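
For illustration, a boto3 sketch that creates a VPC with a /16 CIDR block and carves two subnets out of it, one per Availability Zone; the CIDR blocks and Availability Zone names are placeholders and this is not one of the course labs.

import boto3

ec2 = boto3.client("ec2")

# Create a VPC with a /16 CIDR block.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Each subnet belongs to one Availability Zone and needs its own CIDR block
# carved out of the VPC range.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                  AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                  AvailabilityZone="us-east-1b")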

Section 3: VPC networking :


Some key takeaways from this section of the module include:
→ There are several VPC networking options, which include:
• Internet gateway: Connects your VPC to the internet


• NAT gateway: Enables instances in a private subnet to connect to the internet


• VPC endpoint: Connects your VPC to supported AWS services
• VPC peering: Connects your VPC to other VPCs
• VPC sharing: Allows multiple AWS accounts to create their application resources
into shared, centrally-managed Amazon VPCs
• AWS Site-to-Site VPN: Connects your VPC to remote networks
• AWS Direct Connect: Connects your VPC to a remote network by using a
dedicated network connection
• AWS Transit Gateway: A hub-and-spoke connection alternative to VPC peering
→ You can use the VPC Wizard to implement your design.

Fig 5.3 VPC
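
As a sketch of the internet gateway option listed above, the following boto3 code attaches a gateway to a VPC and adds a default route; all resource IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create an internet gateway, attach it to a VPC, and add a default route
# so a public subnet's traffic can reach the internet.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id,
                            VpcId="vpc-0123456789abcdef0")

route_table = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")
ec2.create_route(RouteTableId=route_table["RouteTable"]["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)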

Section 4: VPC security :


The key takeaways from this section of the module are:
• Build security into your VPC architecture.
• Security groups and network ACLs are firewall options that you can use to secure
your VPC.
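
For example, a security group rule can be added programmatically; a minimal boto3 sketch with placeholder IDs follows.

import boto3

ec2 = boto3.client("ec2")

# Create a security group that allows inbound HTTPS from anywhere; security
# groups are stateful, so return traffic is allowed automatically.
sg = ec2.create_security_group(GroupName="web-sg",
                               Description="Allow HTTPS",
                               VpcId="vpc-0123456789abcdef0")
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)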

Section 5: Amazon Route 53 :


Some key takeaways from this section of the module include:


• Amazon Route 53 is a highly available and scalable cloud DNS web service that
translates
domain names into numeric IP addresses.
• Amazon Route 53 supports several types of routing policies.
• Multi-Region deployment improves your application’s performance for a global
audience.
• You can use Amazon Route 53 failover to improve the availability of your
applications.
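
As a small illustration of translating a name into an IP address with Route 53, the following boto3 sketch upserts an A record; the hosted zone ID, domain name, and address are placeholders.

import boto3

route53 = boto3.client("route53")

# Upsert an A record that points a domain name at an IPv4 address.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)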

Section 6: Amazon CloudFront :


Some key takeaways from this section of the module include:
→ A CDN is a globally distributed system of caching servers that accelerates delivery of
content.
→ Amazon CloudFront is a fast CDN service that securely delivers data, videos,
applications, and APIs over a global infrastructure with low latency and high transfer
speeds.
→ Amazon CloudFront offers many benefits, including:
• Fast and global
• Security at the edge
• Highly programmable
• Deeply integrated with AWS
• Cost-effective

Module5: Summary :
In summary, in this module we learned how to:
• Recognize the basics of networking
• Describe virtual networking in the cloud with Amazon VPC
• Label a network diagram
• Design a basic VPC architecture
• Indicate the steps to build a VPC
• Identify security groups
• Create your own VPC and added additional components to it to produce a
customized network
• Identify the fundamentals of Amazon Route 53
• Recognize the benefits of Amazon CloudFront

Module5: List of Labs :


Lab2 – Build your VPC and Launch a Web Server: In this lab, we have:
• Created an Amazon VPC.
• Created additional subnets.
• Created an Amazon VPC security group.
• Launched a web server instance on Amazon EC2.

MODULE - 6
COMPUTE
Section 1: Compute services overview :
→ Amazon Web Services (AWS) offers many compute services like Amazon EC2, Amazon
Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS),
AWS Elastic Beanstalk, AWS Lambda, Amazon Elastic Kubernetes Service (Amazon EKS),
Amazon Fargate…
→ Selecting the wrong compute solution for an architecture can lead to lower
performance efficiency
• A good starting place—Understand the available compute options

Section 2: Amazon EC2 :

Some key takeaways from this section of the module include:


• Amazon EC2 enables you to run Windows and Linux virtual machines in the cloud.
• You launch EC2 instances from an AMI template into a VPC in your account.
• You can choose from many instance types. Each instance type offers different
combinations of CPU, RAM, storage, and networking capabilities.
• You can configure security groups to control access to instances (specify allowed ports
and source).
• User data enables you to specify a script to run the first time that an instance launches.
• Only instances that are backed by Amazon EBS can be stopped.
• You can use Amazon CloudWatch to capture and review metrics on EC2 instances.
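
For illustration, a boto3 sketch that launches one instance from an AMI into a subnet, with a user data script that runs on first boot; the AMI, subnet, and security group IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# User data runs the first time the instance starts (here, install a web server).
user_data = """#!/bin/bash
yum install -y httpd
systemctl start httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    UserData=user_data,
    TagSpecifications=[{"ResourceType": "instance",
                        "Tags": [{"Key": "Name", "Value": "web-server"}]}],
)
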
Section 3: Amazon EC2 cost optimization :
Some key takeaways from this section of the module are:
→ Amazon EC2 pricing models include On-Demand Instances, Reserved Instances, Spot
Instances, Dedicated Instances, and Dedicated Hosts. Per-second billing is available for
On-Demand Instances, Reserved Instances, and Spot Instances that use only Amazon Linux
and Ubuntu.
→ Spot Instances can be interrupted with a 2-minute notification. However, they can offer
significant cost savings over On-Demand Instances.
→ The four pillars of cost optimization are–
• Right size
• Increase elasticity
• Optimal pricing model
• Optimize storage choices
Section 4: Container services :
Some key takeaways from this section include:
• Containers can hold everything that an application needs to run.
• Docker is a software platform that packages software into containers.
• A single application can span multiple containers.
• Amazon Elastic Container Service (Amazon ECS) orchestrates the running of
Docker containers.
• Kubernetes is open source software for container orchestration.
• Amazon Elastic Kubernetes Service (Amazon EKS) enables you to run Kubernetes
on AWS
• Amazon Elastic Container Registry (Amazon ECR) enables you to store, manage,
and deploy your Docker containers.

Section 5: Introduction to AWS Lambda :


Some key takeaways from this section of the module include:
• Serverless computing enables you to build and run applications and services without
provisioning or managing servers.
• AWS Lambda is a serverless compute service that provides built-in fault tolerance and
automatic scaling.
• An event source is an AWS service or developer-created application that triggers
a Lambda function to run.
• The maximum memory allocation for a single Lambda function is 10,240 MB.
• The maximum run time for a Lambda function is 15 minutes.


Fig 6.5 AWS LAMBDA
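
As a brief example of the function an event source triggers, a minimal Python Lambda handler (the response shape is illustrative):

import json

# Lambda calls this function for each event delivered by an event source
# (such as Amazon S3 or Amazon API Gateway).
def lambda_handler(event, context):
    # Log the incoming event and return a simple HTTP-style response.
    print(json.dumps(event))
    return {"statusCode": 200, "body": "Hello from Lambda"}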

Section 6: Introduction to AWS Elastic Beanstalk :


Some key takeaways from this section of the module include:
• AWS Elastic Beanstalk enhances developer productivity.
• Simplifies the process of deploying your application.
• Reduces management complexity.
• Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and
Docker.
• There is no charge for Elastic Beanstalk. Pay only for the AWS resources you use.

Module6: Summary :
In summary, in this module, we learned how to:
• Provide an overview of different AWS compute services in the cloud
• Demonstrate why to use Amazon Elastic Compute Cloud (Amazon EC2)
• Identify the functionality in the Amazon EC2 console
• Perform basic functions in Amazon EC2 to build a virtual computing environment
• Identify Amazon EC2 cost optimization elements
• Demonstrate when to use AWS Elastic Beanstalk
• Demonstrate when to use AWS Lambda
• Identify how to run containerized applications in a cluster of managed servers

Module6: List of Labs :

• Lab3 – Introduction to EC2: In this lab, we have :


1. Launched an instance that is configured as a web server
2. Viewed the instance system log
3. Reconfigured a security group
4. Modified the instance type and root volume size


MODULE - 7
STORAGE
Section 1: Amazon Elastic Block Store (Amazon EBS) :

• Amazon EBS provides block-level storage volumes for use with Amazon EC2 instances.
Amazon EBS volumes are off-instance storage that persists independently from the life of
an instance. They are analogous to virtual disks in the cloud. Amazon EBS provides three
volume types: General Purpose SSD, Provisioned IOPS SSD, and magnetic.
• The three volume types differ in performance characteristics and cost, so you can
choose the right storage performance and price for the needs of your applications.
• Additional benefits include replication in the same Availability Zone, easy and
transparent encryption, elastic volumes, and backup by using snapshots.

Section 2: Amazon Simple Storage Service (Amazon S3) :


• Amazon S3 is a fully managed cloud storage service.
• You can store a virtually unlimited number of objects.
• You pay for only what you use.
• You can access Amazon S3 at any time from anywhere through a URL.
• Amazon S3 offers rich security controls.
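
For illustration, a boto3 sketch that uploads an object and generates a time-limited presigned URL for it; the file, bucket, and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload a local file as an object, then create a presigned URL that allows
# anyone holding the URL to download it for one hour.
s3.upload_file("menu.pdf", "example-bucket", "menus/menu.pdf")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "menus/menu.pdf"},
    ExpiresIn=3600,
)
print(url)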

Section 3: Amazon Elastic File System (Amazon EFS) :


• Amazon EFS provides file storage over a network.
• Perfect for big data and analytics, media processing workflows, content management,
web serving, and home directories.
• Fully managed service that eliminates storage administration tasks.
• Accessible from the console, an API, or the CLI.
• Scales up or down as files are added or removed and you pay for what you use.

Section 4: Amazon S3 Glacier :


• Amazon S3 Glacier is a data archiving service that is designed for security, durability,
and an extremely low cost.
• Amazon S3 Glacier pricing is based on Region.
• Its extremely low-cost design works well for long-term archiving.
• The service is designed to provide 11 9s of durability for objects.


Module7: Summary :

In summary, in this module, we learned how to:


• Identify the different types of storage
• Explain Amazon S3
• Identify the functionality in Amazon S3
• Explain Amazon EBS
• Identify the functionality in Amazon EBS
• Perform functions in Amazon EBS to build an Amazon EC2 storage solution
• Explain Amazon EFS
• Identify the functionality in Amazon EFS
• Explain Amazon S3 Glacier
• Identify the functionality in Amazon S3 Glacier
• Differentiate between Amazon EBS, Amazon S3, Amazon EFS, and Amazon S3
Glacier

Module7: List of Labs :


• Lab4 – Working with EBS: In this lab, we have:
• Created an Amazon EBS volume
• Attached the volume to an instance
• Configured the instance to use the virtual disk
• Created an Amazon EBS snapshot
• Restored the snapshot


MODULE – 8

DATABASES
Section 1: Amazon Relational Database Service :
• With Amazon RDS, you can set up, operate, and scale relational databases in the cloud.
• Features –

• Managed service
• Accessible via the console, AWS Command Line Interface (AWS CLI), or
application programming interface (API) calls
• Scalable (compute and storage)
• Automated redundancy and backup are available
• Supported database engines:
• Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, Microsoft SQL Server
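
As a sketch of setting up a database through the API rather than the console, the following boto3 call creates a small MySQL DB instance with Multi-AZ redundancy; the identifier, credentials, and sizes are placeholders.

import boto3

rds = boto3.client("rds")

# Create a small, Multi-AZ MySQL DB instance.
rds.create_db_instance(
    DBInstanceIdentifier="cafe-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    MultiAZ=True,
)
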
Section 2: Amazon DynamoDB :
Amazon DynamoDB:
• Runs exclusively on SSDs.
• Supports document and key-value store models.
• Replicates your tables automatically across your choice of AWS Regions.
• Works well for mobile, web, gaming, adtech, and Internet of Things (IoT) applications.
• Is accessible via the console, the AWS CLI, and API calls.
• Provides consistent, single-digit millisecond latency at any scale.
• Has no limits on table size or throughput.
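
For example, writing and reading a key-value item with boto3 looks like the sketch below; the table name, key, and attributes are hypothetical and the table is assumed to already exist with OrderId as its partition key.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")   # placeholder table name

# Write an item, then read it back using the table's partition key.
table.put_item(Item={"OrderId": "1001", "Customer": "Ana", "Total": 25})
item = table.get_item(Key={"OrderId": "1001"})["Item"]
print(item)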

Section 3: Amazon Redshift :


• In summary, Amazon Redshift is a fast, fully managed data warehouse service. As a
business grows, you can easily scale with no downtime by adding more nodes. Amazon
Redshift automatically adds the nodes to your cluster and redistributes the data for
maximum performance.
• Amazon Redshift is designed to consistently deliver high performance. Amazon
Redshift uses columnar storage and a massively parallel processing architecture. These
features parallelize and distribute data and queries across multiple nodes. Amazon
Redshift also automatically monitors your cluster and backs up your data so that you can
easily restore if needed. Encryption is built in—you only need to enable it.


Section 4: Amazon Aurora :


• In summary, Amazon Aurora is a highly available, performant, and cost-effective
managed
relational database.
• Aurora offers a distributed, high-performance storage subsystem. Using Amazon
Aurora can reduce your database costs while improving the reliability of the database.
• Aurora is also designed to be highly available. It has fault-tolerant and self-healing
storage built for the cloud. Aurora replicates multiple copies of your data across multiple
Availability Zones, and it continuously backs up your data to Amazon S3.
• Multiple levels of security are available, including network isolation by using Amazon
VPC;
encryption at rest by using keys that you create and control through AWS Key
Management
Service (AWS KMS); and encryption of data in transit by using Secure Sockets Layer (SSL).
• The Amazon Aurora database engine is compatible with existing MySQL and
PostgreSQL open source databases, and adds compatibility for new releases regularly.
• Finally, Amazon Aurora is fully managed by Amazon RDS. Aurora automates database
management tasks, such as hardware provisioning, software patching, setup,
configuration, or backups.

Fig 8.4 Features of Amazon Aurora

Module8: Summary :
In summary, in this module, we learned how to:
• Explain Amazon Relational Database Service (Amazon RDS)
• Identify the functionality in Amazon RDS
• Explain Amazon DynamoDB
• Identify the functionality in Amazon DynamoDB
• Explain Amazon Redshift


• Explain Amazon Aurora


• Perform tasks in an RDS database, such as launching, configuring, and interacting

Module8: List of Labs :


• Lab5 – Build a Database Server: In this lab, we have:
• Launched an Amazon RDS DB instance with high availability.
• Configured the DB instance to permit connections from your web server.
• Opened a web application and interacted with your database

MODULE - 9

CLOUD ARCHITECTURE
Section 1: AWS Well-Architected Framework :
Some key takeaways from this section of the module include:
• The AWS Well-Architected Framework provides a consistent approach to evaluate
cloud architectures and guidance to help implement designs.
• The AWS Well-Architected Framework documents a set of design principles and best
practices that enable you to understand if a specific architecture aligns well with cloud
best practices.
• The AWS Well-Architected Framework is organized into six pillars.

Section 2: Reliability and availability :


Some key takeaways from this section of the module include:


• Reliability is a measure of your system’s ability to provide functionality when desired by


the user, and it can be measured in terms of MTBF.
• Availability is the percentage of time that a system is operating normally or correctly
performing the operations expected of it (or normal operation time over total time).
• Three factors that influence the availability of your applications are fault tolerance,
scalability, and recoverability.
• You can design your workloads and applications to be highly available, but there is a
cost tradeoff to consider.
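
As a quick worked example of the availability definition above (normal operation time over total time), using hypothetical MTBF and MTTR values:

# Availability from mean time between failures (MTBF) and mean time to
# repair (MTTR); the numbers are illustrative.
mtbf_hours = 720   # fails roughly once a month
mttr_hours = 1     # one hour to recover
availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"{availability:.4%}")   # about 99.86%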

Section 3: AWS Trusted Advisor :


Some key takeaways from this section of the module include:
• AWS Trusted Advisor is an online tool that provides real-time guidance to help you
provision your resources by following AWS best practices.
• AWS Trusted Advisor looks at your entire AWS environment and gives you real-time
recommendations in five categories.

Module9: Summary :
In summary, in this module we learned how to:
• Describe the AWS Well-Architected Framework, including the six pillars
• Identify the design principles of the AWS Well-Architected Framework
• Explain the importance of reliability and high availability
• Identify how AWS Trusted Advisor helps customers
• Interpret AWS Trusted Advisor recommendations

MODULE - 10

AUTO SCALING AND MONITORING


Section 1: Elastic Load Balancing :
Some key takeaways from this section of the module include:
• Elastic Load Balancing distributes incoming application or network traffic across
multiple targets (such as Amazon EC2 instances, containers, IP addresses, and Lambda
functions) in one or more Availability Zones.
• Elastic Load Balancing supports three types of load balancers:


• Application Load Balancer


• Network Load Balancer
• Classic Load Balancer
• Elastic Load Balancing offers several monitoring tools for continuous monitoring and
logging for auditing and analytics.
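
For illustration, a boto3 sketch that creates an Application Load Balancer, a target group, and an HTTP listener that forwards to it; all names, subnet, security group, and VPC IDs are hypothetical placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing Application Load Balancer spanning two subnets.
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)

# Target group that the load balancer forwards requests to.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# HTTP listener that forwards all requests to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)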

Section 2: Amazon CloudWatch :

Some key takeaways from this section of the module include:


• Amazon CloudWatch helps you monitor your AWS resources—and the applications
that you run on AWS—in real time.
• CloudWatch enables you to –
• Collect and track standard and custom metrics.
• Set alarms to automatically send notifications to SNS topics or perform Amazon
EC2 Auto Scaling or Amazon EC2 actions based on the value of the metric or
expression relative to a threshold over a number of time periods.
• Define rules that match changes in your AWS environment and route these
events to targets for processing.
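
As an example of an alarm on a standard metric, a boto3 sketch that alarms when average CPU on one instance stays above 80% for two consecutive 5-minute periods; the SNS topic ARN and instance ID are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="HighCPU",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)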

Section 3: Amazon EC2 Auto Scaling:


→ Some key takeaways from this section of the module include:
• Scaling enables you to respond quickly to changes in resource needs.
• Amazon EC2 Auto Scaling helps you maintain application availability, and enables
you to automatically add or remove EC2 instances according to your workloads.
• An Auto Scaling group is a collection of EC2 instances.
• A launch configuration is an instance configuration template.
• You can implement dynamic scaling with Amazon EC2 Auto Scaling, Amazon
CloudWatch, and Elastic Load Balancing.
→ AWS Auto Scaling is a separate service that monitors your applications, and it
automatically adjusts capacity for the following resources:
• Amazon EC2 instances and Spot Fleets
• Amazon ECS tasks
• Amazon DynamoDB tables and indexes
• Amazon Aurora Replicas
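
For illustration, a boto3 sketch that creates an Auto Scaling group from a launch configuration and adds a target tracking policy that keeps average CPU near 50%; names, subnet IDs, and values are hypothetical placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling group spread across two subnets.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)

# Target tracking policy: scale out/in to hold average CPU around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)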

Module10: Summary :
In summary, in this module we learned how to:


• Indicate how to distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2)
instances using Elastic Load Balancing.
• Identify how Amazon CloudWatch enables you to monitor AWS resources and
applications in real time
• Explain how Amazon EC2 Auto Scaling launches and releases servers in response to
workload changes.
• Perform scaling and load balancing tasks to improve an architecture.

Module10: List of Labs :


• Lab6 – Scale and Load Balance your Architecture: In this lab, we have:
• Created an Amazon Machine Image (AMI) from a running instance.
• Created a load balancer.
• Created a launch configuration and an Auto Scaling group.
• Automatically scaled new instances within a private subnet.
• Created Amazon CloudWatch alarms and monitored performance of your
infrastructure.


AWS CLOUD ARCHITECTING

MODULE - 1
WELCOME TO AWS ACADEMY CLOUD ARCHITECTING
Section 1: Course objectives and overview :
→Course objectives-
• Make architectural decisions based on AWS architectural principles and best
practices
• Use AWS services to make your infrastructure scalable, reliable, and highly
available
• Use AWS managed services to enable greater flexibility and resiliency in an
infrastructure
• Indicate how to increase the performance efficiency and reduce costs of
infrastructures built on AWS
• Use the AWS Well-Architected Framework to improve architectures that use
AWS solutions

Section 2: Café business case introduction :

• This section introduced the café and bakery business case, identified the café owners and
staff along with their roles in the business, and provided information about a few AWS
consultants and café visitors.

Section 3: Roles in cloud computing :


• IT Professional: IT professionals are generalists. They typically have a broad range of
skills
• IT Leader: They typically lead a team of IT professionals, and decide on the type of
technology that will be used for a project.
• Developer: They work with the details—writing, testing, and fixing the code that makes
an application work.
• DevOps Engineer: DevOps engineers spend their time building out the infrastructure
that applications run on.
• Cloud architect: Cloud architects spend their time reading and staying up-to-date with
the latest developments and trends in cloud computing.


Module1: Summary :
In summary, in this module we learned how to:
• Identify course prerequisites and objectives
• Recognize the café business case
• Indicate the role of cloud architects

MODULE - 2

INTRODUCING CLOUD ARCHITECTING


Section 1: What is cloud architecting? :
Some key takeaways from this section of the module include:
• Cloud architecture is the practice of applying cloud characteristics to a solution that
uses cloud services and features to meet an organization’s technical needs and business
use cases
• You can use AWS services to create highly available, scalable, and reliable architectures

Section 2: The AWS Well-Architected Framework :

Fig 2.1 AWS Well-Architected Framework

Some key takeaways from this section of the module include:


• The AWS Well-Architected Framework provides a consistent approach to evaluate
cloud architectures and guidance to help implement designs.
• The AWS Well-Architected Framework is organized into five pillars.
• Each pillar documents a set of foundational questions that enable you to understand if
a specific architecture aligns well with cloud best practices.
• The AWS Well-Architected Tool helps you review the state of your workloads and
compares them to the latest AWS architectural best practices.


Section 3: Best practices for building solutions on AWS :


The key takeaways from this section of the module are:
• As you design solutions, evaluate tradeoffs and base your decisions on empirical data
• Follow these best practices when building solutions on AWS –
• Enable scalability
• Automate your environment
• Treat resources as disposable
• Use loosely-coupled components
• Design services, not servers
• Choose the right database solution
• Avoid single points of failure
• Optimize for cost
• Use caching
• Secure your entire infrastructure

Section 4: AWS global infrastructure :


Some key takeaways from this section of the module include:
• The AWS global infrastructure consists of Regions and Availability Zones
• Your choice of a Region is typically based on compliance requirements or to reduce
latency
• Each Availability Zone is physically separate from other Availability Zones and has
redundant power, networking, and connectivity
• Edge locations and Regional edge caches improve performance by caching content
closer to users

Module2: Summary :
In summary, in this module, we learned how to:
• Define cloud architecture
• Describe how to design and evaluate architectures using the AWS Well-Architected
Framework
• Explain best practices for building solutions on AWS
• Describe how to make informed decisions on where to place AWS resources


MODULE - 3

ADDING A STORAGE LAYER


Section 1: The simplest architecture :
• Started with one of the simplest of architectures that can be implemented on AWS,
which is creating a static website by hosting it entirely on Amazon S3. Also mentioned
about the various Amazon S3 storage options and some key considerations for when you
choose a Region on AWS.
• In this section, there are details about café business requirement- The café has just
started up. They want to establish a simple static website that provides customers with
basic information about the café (including a menu, store hours, location, and more).

Section 2: Using Amazon S3 :


Some key takeaways from this section of the module include:
• Buckets must have a globally unique name and are defined at the Region level
• Buckets are private and protected by default
• Amazon S3 security can be configured with IAM policies, bucket policies, access control
lists, S3 access points, and presigned URLs
• Amazon S3 is strongly consistent for all new and existing objects in all Regions
• 5 TB is the maximum size of a single object, but you can store a virtually unlimited
number of objects
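To make these takeaways concrete, the following boto3 (Python) sketch creates a private bucket and generates a presigned URL. The bucket name and object key are hypothetical examples, not values used in the course labs.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names must be globally unique; this one is a placeholder.
bucket = "example-cafe-report-bucket-12345"
s3.create_bucket(Bucket=bucket)

# Buckets are private by default; this makes that protection explicit.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# A presigned URL grants temporary access to one object without making it public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "menu.pdf"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)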

Section 3: Storing data in Amazon S3 :

Some key takeaways from this section of the module include:


• Amazon S3 storage classes include S3 Standard, S3 Standard-Infrequent Access, S3 One Zone-Infrequent Access, S3 Intelligent-Tiering, S3 Glacier, and S3 Glacier Deep Archive
• An Amazon S3 lifecycle policy can delete or move objects to less expensive storage
classes based on age
• Transferring data in from the internet to Amazon S3 is free, but transferring out to
other Regions or to the internet incurs a fee
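As an illustration of a lifecycle policy, the following boto3 sketch transitions objects under a hypothetical logs/ prefix to less expensive storage classes as they age and finally expires them. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-cafe-report-bucket-12345",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # after 30 days
                    {"Days": 90, "StorageClass": "GLACIER"},      # after 90 days
                ],
                "Expiration": {"Days": 365},                       # delete after a year
            }
        ]
    },
)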

Section 4: Moving data to and from Amazon S3 :


Some key takeaways from this section of the module include:
• The S3 multipart upload option is a good option for files larger than 100 MB and in
situations where network connectivity might be inconsistent


• Amazon S3 Transfer Acceleration uses edge locations, and can significantly increase the
speed of uploads
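A minimal sketch of a multipart upload using boto3's transfer configuration is shown below; the file and bucket names are placeholders, and the 100 MB threshold mirrors the guidance above.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above the threshold are split into parts and uploaded in parallel,
# which helps when network connectivity is inconsistent.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
    max_concurrency=4,
)

s3.upload_file(
    Filename="backup.tar.gz",                      # hypothetical local file
    Bucket="example-cafe-report-bucket-12345",
    Key="backups/backup.tar.gz",
    Config=config,
)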

Fig 3.4 Storage Layer


Section 5: Choosing Regions for your architecture :
There are many considerations when you decide what Region to host your data in.
• First, you should consider data privacy laws and your regulatory compliance
requirements.
• Second, proximity is an important factor in choosing your Region, especially when
latency is a critical factor.
• Third important consideration is the availability of AWS services and features.
• A fourth consideration when you choose a Region is cost. Service costs can differ
depending on which Region they are used in.
• Finally, in circumstances where your customers are in different areas of the world,
consider optimizing their experience by replicating your environment in multiple Regions
that are closer to them.

Module3: Summary :

In summary, in this module, we learned how to:


• Recognize the problems that Amazon Simple Storage Service (Amazon S3) can solve
• Describe how to store content efficiently using Amazon S3
• Recognize the various Amazon S3 storage classes and cost considerations
• Describe how to move data to and from Amazon S3
• Describe how to choose a Region
• Create a static website
Module3: List of Labs :
• Guided Lab – Hosting a Static Website: This is a guided lab in which step-by-step instructions are provided for hosting a static website on AWS


• Challenge Lab – Creating a Static Website for the Café: Here we create a static website
using S3 for the café

MODULE - 4

ADDING A COMPUTE LAYER


Section 1: Architectural need :

• Café business requirement- The Café wants the website to display more than static
content and to provide dynamic capabilities. They want to introduce online ordering for
customers, and enable café staff to view submitted orders.

Section 2: Adding compute with Amazon EC2 :


Some key takeaways from this section of the module include:
• Amazon EC2 enables you to run Microsoft Windows and Linux virtual machines in the
cloud.
• You can use an EC2 instance when you need complete control of your computing
resources and want to run any type of workload.
• When you launch an EC2 instance, you must choose an AMI and an instance type.
Launching an instance involves specifying configuration parameters, including network,
security, storage, and user data settings.

Section 3: Choosing an AMI to launch an EC2 instance :

Some key takeaways from this section of the module include:


• An AMI provides the information that is needed to launch an EC2 instance
• For best performance, use an AMI with HVM virtualization type
• Only an instance launched from an Amazon EBS-backed AMI can be stopped and
started
• An AMI is available in a Region

Section 4: Selecting an EC2 instance type :


Some key takeaways from this section of the module include:
• An EC2 instance type defines a configuration of CPU, memory, storage, and network
performance characteristics


• As a recommendation, choose new generation instance types in a family because they generally have better price-to-performance ratios
• Use the Instance Types page in the Amazon EC2 console and AWS Compute Optimizer
to find the right instance type for your workload

Section 5: Using user data to configure an EC2 instance :

Some key takeaways from this section of the module include:


• User data enables you to pass configuration parameters or run an initialization script
when you launch an instance.
• Information about a running instance can be accessed in the instance through an
instance metadata URL.
• Baking configurations into an AMI increases AMI build time, but decreases instance
boot time. Configuring an instance by using user data decreases AMI build time, but
increases instance boot time.
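The following boto3 sketch launches an instance with a user data script that installs a web server at first boot; the AMI ID is a placeholder, not a lab value.

import boto3

ec2 = boto3.client("ec2")

# User data script that runs once when the instance first boots.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable httpd
systemctl start httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this automatically
)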

Section 6: Adding storage to an Amazon EC2 instance :


Some key takeaways from this section of the module include:
• Storage options for EC2 instances include instance store, Amazon EBS, Amazon EFS,
and Amazon FSx for Windows File Server
• For a root volume, use instance store or SSD-backed Amazon EBS
• For a data volume that serves only one instance, use instance store or Amazon EBS
storage
• For a data volume that serves multiple Linux instances, use Amazon EFS
• For a data volume that serves multiple Microsoft Windows instances, use Amazon FSx
for Windows File Server

Section 7: Amazon EC2 pricing options :


Some key takeaways from this section of the module include:
• Amazon EC2 pricing models include On-Demand Instances, Reserved Instances, Savings
Plans, Spot Instances, and Dedicated Hosts
• Per-second billing is available only for On-Demand Instances, Reserved Instances, and
Spot Instances that run Amazon Linux or Ubuntu
• Use a combination of Reserved Instances, Savings Plans, On-Demand Instances, and
Spot Instances to optimize Amazon EC2 compute costs

Section 8: Amazon EC2 considerations :


• Placement groups enable you to control where instances run in an Availability Zone.
You can use placement groups to influence the placement of a group of interdependent
instances so they meet the needs of your workload.
• The cluster placement group is a logical grouping of instances in a single Availability
Zone. This grouping provides low-latency and high packet-per-second network
performance between instances.
• Partition placement groups help reduce the likelihood of correlated hardware failures
for your application. When you use partition placement groups, Amazon EC2 divides
each group into logical segments that are called partitions.
• A spread placement group is a grouping of instances that are purposely positioned on
distinct underlying hardware. This grouping reduces the risk of simultaneous failures that
could occur if instances shared underlying hardware.
Module4: Summary :
In summary, in this module, we learned how to:
• Identify how Amazon Elastic Compute Cloud (Amazon EC2) can be used in an
architecture
• Explain the value of using Amazon Machine Images (AMIs) to accelerate the creation
and repeatability of infrastructure
• Differentiate between the EC2 instance types
• Recognize how to launch Amazon EC2 instances with user data
• Recognize storage solutions for Amazon EC2 (Amazon Elastic Block Store, or Amazon
EBS, and Amazon Elastic File System, or Amazon EFS)
• Describe EC2 pricing options
• Determine the placement group given an architectural consideration
• Launch an Amazon EC2 instance

Module4: List of Labs :


• Guided Lab – Introducing Amazon Elastic File System (Amazon EFS) : In this guided lab,
we have:

• Created a security group to access your EFS file system


• Created an Amazon EFS file system
• Connected to your EC2 instance via SSH
• Created a new directory and mounted the EFS file system
• Examined the performance behavior of your new EFS file system
• Challenge Lab – Creating a Dynamic Website for the Café: In this challenge lab, we
have:


• Analyzed the existing EC2 instance


• Connected to the IDE on the EC2 instance
• Analyzed the LAMP stack environment and confirmed that the web server is accessible
• Installed the café application
• Tested the web application
• Created an AMI and launching another EC2 instance
• Verified the new café instance

MODULE - 5

ADDING A DATABASE LAYER


Section 1: Architectural need :
• Café business case requirement- The café needs a database solution that is easier to
maintain, and that provides essential features such as durability, scalability and high
performance.

Section 2: Database layer considerations :


Some key takeaways from this section of the module include:
• When you choose a database, consider scalability, storage requirements, the type and
size of objects to be stored, and durability requirements
• Relational databases have strict schema rules, provide data integrity, and support SQL
• Non-relational databases scale horizontally, provide higher scalability and flexibility,
and work well for semistructured and unstructured data

Section 3: Amazon RDS :


Some key takeaways from this section of the module include:
• Managed AWS database services handle administration tasks so you can focus on your
applications
• Amazon RDS supports Microsoft SQL Server, Oracle, MySQL, PostgreSQL, Aurora, and
MariaDB
• Amazon RDS Multi-AZ deployments provide high availability with automatic failover
• You can have up to five read replicas per primary database to improve performance
• Amazon Aurora is a fully managed, MySQL-and PostgreSQL-compatible, relational
database engine


• Amazon Redshift is a data warehousing relational database offering

Fig 5.3 AWS RDS

Section 4: Amazon DynamoDB :


Some key takeaways from this section of the module include:
• Amazon DynamoDB is a fully managed non-relational key-value and document NoSQL
database service.
• DynamoDB is serverless, provides extreme horizontal scaling and low latency.
• DynamoDB global tables ensure that data is replicated to multiple Regions.
• DynamoDB provides eventual consistency by default (in general, it is fully consistent for
reads 1 second after the write). Strong consistency is also an option.
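The boto3 sketch below writes and reads an item from a hypothetical DynamoDB table and requests a strongly consistent read; the table and attribute names are assumptions for illustration.

import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("CafeOrders")  # hypothetical table with partition key order_id

# Write an item; DynamoDB stores it durably across multiple Availability Zones.
orders.put_item(Item={"order_id": "1001", "item": "croissant", "quantity": 2})

# Reads are eventually consistent by default; ConsistentRead=True requests a
# strongly consistent read at a higher read-capacity cost.
response = orders.get_item(Key={"order_id": "1001"}, ConsistentRead=True)
print(response.get("Item"))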

Section 5: Database Security Controls :


• Security is a shared responsibility between you and AWS. AWS is responsible for
security of the cloud, which means that AWS protects the infrastructure that runs
Amazon RDS. Meanwhile, you are responsible for security in the cloud.
• To secure Amazon DynamoDB, many of the same best practices that you should use to
secure Amazon RDS also apply. For example, use IAM roles to secure authentication, and
use IAM policies to define access permissions.
Section 6: Migrating data into AWS databases :
• You can use the AWS Database Migration Service (AWS DMS) to migrate or replicate
existing databases to Amazon RDS. AWS DMS supports migration between the most
widely used databases.
• AWS DMS can be used to perform one-time migrations, but it can also be used to
accomplish continuous data replication between two databases. For example, you could


use it to configure continuous data replication of an on-premises database to an RDS instance.
• AWS Database Migration Service (AWS DMS) can use AWS Snowball Edge and Amazon
S3 to migrate large databases more quickly than other methods.
Module5: Summary :
In summary, in this module, we learned how to:
• Compare database types
• Differentiate between managed versus unmanaged services
• Explain when to use Amazon Relational Database Service (Amazon RDS)
• Explain when to use Amazon DynamoDB
• Describe available database security controls
• Describe how to migrate data into Amazon Web Services (AWS) databases
• Deploy a database server

Module5: List of Labs :

• Guided Lab – Creating an Amazon RDS Database: In this guided lab, we have:

• Created an Amazon RDS database


• Configured web application communication with a database instance
• Challenge Lab – Migrating a Database to Amazon RDS: In this challenge lab, you:
• Created an RDS instance
• Analyzed the existing café application deployment
• Worked with the database on the EC2 instance
• Worked with the RDS database
• Imported the data into the RDS database instance
• Connected the café application to the new database


MODULE - 6

CREATING A NETWORK ENVIRONMENT


Section 1: Architectural need :
• Café business case requirement- The café must deploy and manage AWS resources in a
secure, isolated network environment.

Section 2: Creating an AWS networking environment :


Some key takeaways from this section of the module include:
• Amazon VPC enables you to provision VPCs, which are logically isolated sections of the
AWS Cloud where you can launch your AWS resources.
• A VPC belongs to only one Region and is divided into subnets.
• A subnet belongs to one Availability Zone or Local Zone. It is a subset of the VPC CIDR
block.
• You can create multiple VPCs within the same Region or in different Regions, and in the
same account or different accounts.
• Follow these best practices when designing your VPC –
• Create one subnet per Availability Zone for each group of hosts that have unique
routing requirements.
• Divide your VPC network range evenly across all available Availability Zones in a
Region.
• Do not allocate all network addresses at once. Instead, ensure that you reserve
some address space for future use.
• Size your VPC CIDR and subnets to support significant growth for the expected
workloads.
• Ensure that your VPC network range does not overlap with your organization’s
other private network ranges.
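As a rough illustration of these practices, the following boto3 sketch creates a VPC and four subnets spread across two Availability Zones; the CIDR ranges are examples only.

import boto3

ec2 = boto3.client("ec2")

# A /16 VPC leaves room for future subnets and growth.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# One public and one private subnet in each of two Availability Zones.
subnets = [
    ("10.0.0.0/24", "us-east-1a"),  # public subnet 1
    ("10.0.1.0/24", "us-east-1b"),  # public subnet 2
    ("10.0.2.0/23", "us-east-1a"),  # private subnet 1
    ("10.0.4.0/23", "us-east-1b"),  # private subnet 2
]
for cidr, az in subnets:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)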

Section 3: Connecting your AWS networking environment to the internet :


Some key takeaways from this section of the module include:
• An internet gateway allows communication between instances in your VPC and the
internet.
• Route tables control traffic from your subnet or gateway.
• Elastic IP addresses are static, public IPv4 addresses that can be associated with an
instance or elastic network interface. They can be remapped to another instance in your
account.


• NAT gateways enable instances in the private subnet to initiate outbound traffic to the
internet or other AWS services.
• A bastion host is a server whose purpose is to provide access to a private network from
an external network, such as the internet.
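The sketch below shows, with boto3, how an internet gateway and a route table give a public subnet a path to the internet; the VPC and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"               # placeholder VPC ID
public_subnet_id = "subnet-0123456789abcdef0"  # placeholder subnet ID

# Internet gateway: allows communication between the VPC and the internet.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route table with a default route to the internet gateway, associated
# with the public subnet so its instances can reach the internet.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)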

Section 4: Securing your AWS networking environment :

Some key takeaways from this section of the module include:


• Security groups are stateful firewalls that act at the instance level
• Network ACLs are stateless firewalls that act at the subnet level
• When you set inbound and outbound rules to allow traffic to flow from the top tier to
the bottom tier of your architecture, you can chain security groups together to isolate a
security breach
• You should structure your infrastructure with multiple layers of defense

Fig 6.4 Networking
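To illustrate security group chaining, the following boto3 sketch allows HTTP into a web tier from anywhere but allows MySQL into a database tier only from the web tier's security group; all names and the VPC ID are hypothetical.

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

# Web tier: allow HTTP from anywhere.
web_sg = ec2.create_security_group(
    GroupName="web-sg", Description="Web tier", VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Database tier: allow MySQL only from the web tier's security group (chaining).
db_sg = ec2.create_security_group(
    GroupName="db-sg", Description="DB tier", VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}],
)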

Module6: Summary :
In summary, in this module, we learned how to:
• Explain the foundational role of a VPC in AWS Cloud networking
• Identify how to connect your AWS networking environment to the internet
• Describe how to isolate resources within your AWS networking environment
• Create a VPC with subnets, an internet gateway, route tables, and a security group

Module6: List of Labs :


• Guided Lab – Creating a Virtual Private Cloud: In this guided lab, we will use Amazon
VPC to manually create a VPC with the following components:
• Public and private subnets
• An internet gateway
• A route table with a route to direct internet-bound traffic to the internet
gateway
• A security group for EC2 instances in the public subnet
• An application server to test the VPC
• Challenge Lab – Creating a VPC networking Environment for the cafe: In this challenge
lab, we:

• Create a public subnet


• Create a bastion host
• Allocate an Elastic IP address for the bastion host
• Test the connection to the bastion host
• Create a private subnet
• Create a NAT gateway
• Create an EC2 instance in a private subnet
• Configure your SSH client for SSH passthrough
• Test the SSH connection from the bastion host
• Create a network ACL
• Test your custom network ACL


MODULE - 7
CONNECTING NETWORKS
Section 1: Architectural need :
• Café business case requirement- The workloads for the café are increasing in
complexity. The architecture must support connectivity between multiple VPCs, and be
highly available and fault tolerant.

Section 2: Connecting to your remote network with AWS Site-to-Site VPN :


Some key takeaways from this section of the module include:
• AWS Site-to-Site VPN is a highly available solution that enables you to securely connect
your on-premises network or branch office site to your VPC
• AWS Site-to-Site VPN supports both static and dynamic routing
•You can establish multiple VPN connections from multiple customer gateway devices to
a single virtual private gateway

Section 3: Connecting to your remote network with AWS Direct Connect :


Some key takeaways from this section of the module include:
• AWS Direct Connect uses open standard 802.1q VLANs to let you establish a dedicated,
private network connection from your premises to AWS
• You can access any VPC or public AWS service in any Region (except China) from any
supported DX location
• You can implement highly available connectivity between your data centers and your
VPC by coupling one or more DX connections that you use for primary connectivity with a
lower-cost, backup VPN connection

Section 4: Connecting VPCs in AWS with VPC peering :

Some key takeaways from this section of the module include:


• VPC peering is a one-to-one networking connection between two VPCs that enables
you to route traffic between them privately
• You can establish peering relationships between VPCs across different AWS Regions
• VPC peering connections –
• Use private IP addresses
• Can be established between different AWS accounts
• Cannot have overlapping CIDR blocks
• Can have only one peering resource between any two VPCs
• Do not support transitive peering relationships
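A minimal boto3 sketch of creating and accepting a VPC peering connection, and adding a route through it, is shown below; the VPC IDs, route table ID, and CIDR block are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC IDs; their CIDR blocks must not overlap.
requester_vpc = "vpc-0aaaaaaaaaaaaaaaa"
accepter_vpc = "vpc-0bbbbbbbbbbbbbbbb"

# Request and accept the peering connection (same account, same Region here).
peering = ec2.create_vpc_peering_connection(VpcId=requester_vpc, PeerVpcId=accepter_vpc)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC's route table needs a route to the other VPC through the peering connection.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # requester's route table (placeholder)
    DestinationCidrBlock="10.1.0.0/16",    # accepter's CIDR block (placeholder)
    VpcPeeringConnectionId=pcx_id,
)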


Section 5: Scaling your VPC network with AWS Transit Gateway :


Some key takeaways from this section of the module include:
• AWS Transit Gateway enables you to connect your VPCs and on-premises networks to
a single gateway (called a transit gateway)
• AWS Transit Gateway uses a hub-and-spoke model to simplify VPC management and
reduce operational costs

Section 6: Connecting your VPC to supported AWS services :


Some key takeaways from this section of the module include:
• A VPC endpoint enables you to privately connect your VPC to supported AWS services
and VPC endpoint services powered by AWS PrivateLink
• VPC endpoints do not require an internet gateway, NAT device, VPN connection, or
AWS Direct Connect connection
• There are two types of VPC endpoints: interface endpoints and gateway endpoints

Fig 7.6 Networking Architecture


Module7: Summary :
In summary, in this module, we learned how to:
• Describe how to connect an on-premises network to the AWS Cloud
• Describe how to connect VPCs in the AWS Cloud
• Connect VPCs in the AWS Cloud using VPC peering
• Describe how to scale VPCs in the AWS Cloud
• Describe how to connect VPCs to supported AWS services

Module7: List of Labs :


• Guided Lab – Creating a VPC Peering Connection: In this guided lab, we:
• Create a peering connection between two VPCs
• Configure route tables to send traffic to the peering connection


• Test the peering connection

MODULE – 8

SECURING USER AND APPLICATION ACCESS


Section 1: Architectural need :
• Café business case requirement- The café must define what level of access users and
systems should have across their cloud resources. They must then put these access
controls into place across their AWS account.
Section 2: Account users and IAM :
Some key takeaways from this section of the module include:
• Avoid using the account root user for common tasks. Instead, create and use IAM user
credentials.
• Permissions for accessing AWS account resources are defined in one or more IAM
policy documents.
• Attach IAM policies to IAM users, groups, or roles.
• When IAM determines permissions, an explicit Deny will always override any Allow
statement.
• It is a best practice to follow the principle of least privilege when you grant access.
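As an example of least privilege, the following boto3 sketch creates a read-only policy scoped to a single hypothetical bucket and attaches it to a group instead of to individual users; the bucket and group names are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: read-only access to one bucket and its objects.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-cafe-report-bucket-12345",
                "arn:aws:s3:::example-cafe-report-bucket-12345/*",
            ],
        }
    ],
}

policy = iam.create_policy(
    PolicyName="CafeReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

# Attach the policy to a group rather than to individual users.
iam.attach_group_policy(
    GroupName="cafe-analysts",  # hypothetical group
    PolicyArn=policy["Policy"]["Arn"],
)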

Section 3: Organising users :


• An IAM group is a collection of IAM users. Groups are a convenience that makes it
easier to manage permissions for a collection of users, instead of managing permissions
for each individual user.
• Groups make it easier to maintain consistent access rights across teams.

Section 4: Federating users :


Some key takeaways from this section of the module include:
• IAM Roles provide temporary security credentials assumable by a person, application,
or service.
• The AWS Security Token Service (STS) allows you to request temporary AWS
credentials.
• With identity federation, user authentication occurs external to the AWS account.
• Federation can be accomplished by using AWS STS, SAML 2.0, or Amazon Cognito.


Section 5: Multiple Accounts :


Some key takeaways from this section of the module include:
• You can use multiple AWS accounts to isolate business units, development and test
environments, regulated workloads, and auditing data
• AWS Organizations allows you to configure automated account creation and consolidated billing
• You can configure access controls across accounts by using service control policies
(SCPs)

Fig 8.5 Application Access


Module8: Summary :

In summary, in this module, we learned how to:


• Explain the purpose of AWS Identity and Access Management (IAM) users, groups, and
roles
• Describe how to allow user federation within an architecture to increase security
• Recognize how AWS Organizations service control policies (SCPs) increase security
within an architecture
• Describe how to manage multiple AWS accounts
• Configure IAM users

Module8: List of Labs :


• Challenge Lab –Controlling AWS Account Access by using IAM : In this challenge lab,
we:

• Configure an IAM group with policies and an IAM user


• Log in as Nikhil and test access


• Configure IAM for database administrator user access
• Log in as the database administrator and resolve the database connectivity
issue
• Use the IAM Policy Simulator and create a custom IAM policy with the visual
editor

MODULE - 9
IMPLEMENTING ELASTICITY, HIGH AVAILABILITY, AND MONITORING
Section 1: Architectural need :
• Café business case requirement- The café will be featured in a famous TV food show.
When it airs, the architecture must handle significant increases in capacity.
• A well designed reactive architecture can save you money and provide a better
experience for your users.

Section 2: Scaling your compute resources :


Some key takeaways from this section of the module include:
• An elastic infrastructure can expand and contract as capacity needs change
• Amazon EC2 Auto Scaling automatically adds or removes EC2 instances according to
policies that you define, schedules, and health checks
•Amazon EC2 Auto Scaling provides several scaling options to best meet the needs of
your applications
• When you configure an Auto Scaling group, you can specify the EC2 instance types and
the combination of pricing models that it uses
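The following boto3 sketch attaches a target tracking scaling policy to a hypothetical Auto Scaling group so that instances are added or removed to hold average CPU near 50 percent; the group and policy names are examples.

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps the group's average CPU utilization near the target value
# by launching or terminating instances as load changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="cafe-web-asg",       # hypothetical, must already exist
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)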

Section 3: Scaling your databases :


Some key takeaways from this section of the module include:
• You can use push-button scaling to vertically scale compute capacity for your RDS DB
instance
• You can use read replicas or shards to horizontally scale your RDS DB instance
• With Amazon Aurora, you can choose the DB instance class size and number of Aurora
replicas (up to 15)
• Aurora Serverless scales resources automatically based on the minimum and maximum
capacity specifications


• Amazon DynamoDB On-Demand offers a pay-per-request pricing model


• DynamoDB auto scaling uses Amazon Application Auto Scaling to dynamically adjust
provisioned throughput capacity
• DynamoDB adaptive capacity works by automatically increasing throughput capacity
for partitions that receive more traffic

Section 4: Designing an environment that’s highly available :

Some key takeaways from this section of the module include:


• You can design your network architectures to be highly available and avoid single
points of failure
• Route 53 offers a variety of routing options that can be combined with DNS failover to
enable a variety of low-latency, fault-tolerant architectures

Section 5: Monitoring :
Some key takeaways from this section of the module include:
• AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report, and the Cost
Optimization Monitor can help you understand and manage the cost of your AWS
infrastructure.
• CloudWatch collects monitoring and operational data in the form of logs, metrics, and
events. It visualizes the data by using automated dashboards so you can get a unified
view of your AWS resources, applications, and services that run in AWS and on-premises.
• EventBridge is a serverless event bus service that makes it easy to connect your
applications with data from a variety of sources. EventBridge ingests a stream of real-
time data from your own applications, SaaS applications, and AWS services. It then
routes that data to targets.
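A minimal boto3 sketch of a CloudWatch alarm is shown below; it notifies an SNS topic when an instance's average CPU stays above 80 percent, and the instance ID and topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance exceeds 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-cafe-web",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # placeholder ARN
)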

Module9: Summary :
In summary, in this module, we learned how to:
• Use Amazon EC2 Auto Scaling within an architecture to promote elasticity
• Explain how to scale your database resources
• Deploy an Application Load Balancer to create a highly available environment
• Use Amazon Route 53 for DNS failover
• Create a highly available environment
• Design architectures that use Amazon CloudWatch to monitor resources and react
accordingly

Module9: List of Labs :


• Guided Lab – Creating a Highly Available Environment: In this guided lab, we:

• Inspect a provided VPC


• Create an Application Load Balancer
• Create an Auto Scaling group
• Test the application for high availability
• Challenge Lab – Creating a Scalable and Highly Available Environment for the Cafe: In
this challenge lab, we:

• Create a NAT gateway for the second Availability Zone


• Create a bastion host instance in a public subnet
• Create a launch template
• Create an Auto Scaling group
• Create a load balancer
• Test the web application
• Test automatic scaling under load

MODULE - 10

AUTOMATING YOUR ARCHITECTURE


Section 1: Architectural need :
• Café business case requirement- The café has locations in multiple countries and must
start automating to keep growing. Their organization has many different architectures
and needs a way to consistently deploy, manage, and update them.

Section 2: Reasons to Automate :


• Many organizations will start using AWS by manually creating an Amazon Simple
Storage Service (Amazon S3) bucket, or launching an Amazon Elastic Compute Cloud
(Amazon EC2) instance and running a web server on it. Then, over time, they manually
add more resources as they find that expanding their use of AWS can meet additional
business needs. Soon, however, it can become challenging to manually manage and
maintain these resources.


• It’s dangerous to allow anyone in your organization to manually control and edit your
environments. Consistency is critical when you want to minimize risks. Automation
enables you to maintain consistency.
• In operations, use automation wherever it’s practical, like for testing and deploying
changes, adding or removing capacity, and migrating data.

Section 3: Automating your infrastructure :


Some key takeaways from this section of the module include:
• AWS CloudFormation is an infrastructure as code (IaC) service that enables you to
model, create, and manage a collection of AWS resources
• AWS CloudFormation IaC is defined in templates that are authored in JSON or YAML
• A stack is what you create when you use a template to create AWS resources
• Actions that are available on an existing stack include update stack, detect drift, and
delete stack
• AWS Quick Starts provides AWS CloudFormation templates that are built by solutions
architects and that reflect AWS best practices
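To show infrastructure as code in miniature, the following boto3 sketch defines a one-resource template inline as JSON and creates a stack from it; the stack and logical resource names are examples, not Quick Start templates.

import json
import boto3

cfn = boto3.client("cloudformation")

# A tiny template with a single S3 bucket resource, expressed as JSON.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example stack",
    "Resources": {
        "ReportBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cfn.create_stack(StackName="cafe-example-stack", TemplateBody=json.dumps(template))

# Wait until the stack (and therefore the bucket) has been created.
cfn.get_waiter("stack_create_complete").wait(StackName="cafe-example-stack")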

Section 4: Automating deployments :


• AWS Systems Manager is a management service that is designed to be highly focused
on automation.
• Once the SSM Agent is installed, it will be possible for Systems Manager to update,
manage, and configure the server on which it is installed. The agent processes requests
from Systems Manager and then runs them in accordance with the specification
provided in the request.
• When you compare AWS CloudFormation and AWS Systems Manager, consider how the two services complement each other. Systems Manager works well for automating within a guest OS. In contrast, AWS CloudFormation works well for defining resources at the AWS Cloud layer.
• AWS OpsWorks is a service for configuration management. You can use OpsWorks to
automate how EC2 instances are configured, deployed, and managed.
• One key OpsWorks Stacks feature is a set of lifecycle events—including Setup,
Configure, Deploy, Undeploy, and Shutdown—which automatically run a specified set of
recipes at the appropriate time on each instance.
• Use AWS CloudFormation to create the infrastructure (VPC, IAM roles and so on) and
deploy the application layer with OpsWorks Stacks.


Section 5: AWS Elastic Beanstalk :


Some key takeaways from this section of the module include:
• AWS Elastic Beanstalk creates and manages a scalable and highly available web
application environment that enables you to focus on the application code
• You can author your Elastic Beanstalk application code in Java, .NET, PHP, Node.js,
Python, Ruby, Go, or Docker
• AWS resources that are created by Elastic Beanstalk are fully transparent—they are
visible in the AWS Management Console service page views
• No extra charge for Elastic Beanstalk –you pay only for the underlying resources that
are used

Module10: Summary :
In summary, in this module, we learned how to:
• Recognize when to automate and why
• Identify how to model, create, and manage a collection of AWS resources using AWS
CloudFormation
• Use the Quick Start AWS CloudFormation templates to set up an architecture
• Indicate how to use AWS System Manager and AWS OpsWorks for infrastructure and
deployment automation
• Indicate how to use AWS Elastic Beanstalk to deploy simple applications

Module10: List of Labs :

• Guided Lab – Automating Infrastructure Deployment with AWS CloudFormation: In this guided lab, we:

• Deploy a networking layer


• Deploy an application layer
• Update a stack
• Explore templates with AWS CloudFormation Designer
• Delete the stack
• Challenge Lab – Automating Infrastructure Deployment: In this challenge lab, we:

• Create an AWS CloudFormation template from scratch


• Configure the bucket as a website and update the stack
• Clone a CodeCommit repository that contains AWS CloudFormation
templates
• Create a new network layer with AWS CloudFormation, CodeCommit, and
CodePipeline


• Update the network stack


• Define an EC2 instance resource and create the application stack
• Duplicate the café network and website to another AWS Region

MODULE – 11

CACHING CONTENT
Section 1: Architectural need :
• Café business case requirement- The capacity of the café's infrastructure is constantly being overloaded with the same requests. This inefficiency is increasing cost and latency.

Section 2: Overview of caching :


Some key takeaways from this section of the module include:
• A cache provides high throughput, low-latency access to commonly accessed
application data by storing the data in memory
• When you decide what data to cache, consider speed and expense, data and access
patterns, and your application’s tolerance for stale data
• Caches can be applied and used throughout various layers of technology, including
operating systems, networking layers, web applications, and databases

Section 3: Edge caching :


Some key takeaways from this section of the module include:
• Amazon CloudFront is a global CDN service that accelerates the delivery of content,
including static and video, to users with no minimum usage commitments.
• CloudFront uses a global network that comprises edge locations and regional edge
caches to deliver content to your users.
• To use CloudFront to deliver your content, you specify an origin server and configure a
CloudFront distribution. CloudFront assigns a domain name and sends your distribution’s
configuration to all of its edge locations.
• You can use Amazon CloudFront to improve the resilience of your applications that run
on AWS from DDoS attacks.

Section 4: Caching web sessions :


Some key takeaways from this section of the module include:


• Sessions are used to manage user authentication and store user data while the user
interacts with the application.
• You can manage sessions with sticky sessions, which is a feature of Elastic Load
Balancing load balancers. Sticky sessions route requests to the specific server that is
managing the user’s session.
• You can also manage sessions by persisting session data outside the web server
instance—for example, in a distributed cache or DynamoDB table.
Section 5: Caching databases :
Some key takeaways from this section of the module include:
• A database cache supplements your primary database by removing unnecessary
pressure on it, typically in the form of frequently accessed read data
• DAX is a fully managed, highly available, in-memory cache for DynamoDB that delivers
a performance improvement of up to 10 times—from milliseconds to microseconds
• Amazon ElastiCache is a side cache that works as an in-memory data store to support
the most demanding applications requiring submillisecond response times

Fig 11.4 Caching Content
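The cache-aside pattern described above can be sketched as follows with the Python redis client against a hypothetical ElastiCache for Redis endpoint; the key name, TTL, and fetch function are arbitrary illustrative choices.

import json
import redis  # pip install redis

# Placeholder for an ElastiCache for Redis cluster endpoint.
cache = redis.Redis(host="cafe-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_menu(fetch_from_database):
    """Cache-aside read: try the cache first, fall back to the primary database."""
    cached = cache.get("menu")
    if cached is not None:
        return json.loads(cached)               # cache hit
    menu = fetch_from_database()                # cache miss: query the database
    cache.setex("menu", 300, json.dumps(menu))  # store for 5 minutes
    return menu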

Module11: Summary :
In summary, in this module, we learned how to:
• Identify how caching content can improve application performance and reduce latency
• Create architectures that use Amazon CloudFront to cache content
• Identify how to design architectures that use edge locations for distribution and
distributed denial of service (DDoS) protection
• Recognize how session management relates to caching
• Describe how to design architectures that use Amazon ElastiCache

Module11: List of Labs :


• Guided Lab –Streaming Dynamic Content using Amazon CloudFront: In this guided lab,
we:

• Create an Amazon CloudFront distribution


• Create an Amazon Elastic Transcoder pipeline
• Test playback of the dynamic (multiple bitrate) stream

MODULE – 12

BUILDING DECOUPLED ARCHITECTURES


Section 1: Architectural need :

• Café business case requirement- The café’s architecture now supports hundreds of
thousands of users. However, the café’s systems are too tightly coupled. It’s difficult to
make changes to one layer of the application without affecting the other layers.

Section 2: Decoupling your architecture :


Some key takeaways from this section of the module include:
• Tightly coupled systems have chains of tightly integrated components and impede
scaling
• You can implement loose coupling in your system by using managed solutions (such as
Elastic Load Balancing) as intermediaries between layers

Section 3: Decoupling with Amazon SQS :


Some key takeaways from this section of the module include:
• Amazon SQS is a fully managed, message-queuing service that enables you to decouple
application components so they run independently.
• Amazon SQS supports standard and FIFO queues.
• A producer sends a message to a queue. A consumer processes and deletes the
message during the visibility timeout.
• Messages that cannot be processed can be sent to a dead letter queue.
• Long polling is a way to retrieve a large number of messages from your SQS queues.
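A short boto3 sketch of the producer/consumer flow is shown below, including long polling and deleting a message only after it is processed; the queue name and message body are examples.

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="cafe-orders")["QueueUrl"]

# Producer: enqueue an order.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1001", "item": "croissant"}')

# Consumer: long polling (WaitTimeSeconds) reduces empty responses and cost.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    # Delete only after successful processing, before the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])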


Section 4: Decoupling with Amazon SNS :


Some key takeaways from this section of the module include:
• Amazon SNS is a web service that you can use to set up, operate, and send notifications
from the cloud
• Amazon SNS follows the pub/sub messaging paradigm
• When you use Amazon SNS, you create a topic and set policies that restrict who can
publish or subscribe to the topic
• You can use topics to decouple message publishers from subscribers, fan-out messages
to multiple recipients at one time, and eliminate polling in your applications
• AWS services can publish messages to your SNS topics to trigger event-driven
computing and workflows
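The following boto3 sketch creates a topic, subscribes an email endpoint, and publishes a message that fans out to all confirmed subscribers; the topic name and email address are placeholders.

import boto3

sns = boto3.client("sns")

# Create a topic and subscribe an email address (the subscription must be
# confirmed from the email before messages are delivered).
topic_arn = sns.create_topic(Name="daily-sales-report")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="owner@example.com")

# Publishing fans the message out to every confirmed subscriber at once.
sns.publish(
    TopicArn=topic_arn,
    Subject="Daily sales report",
    Message="Orders today: 42",
)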
Section 5: Sending messages between cloud applications and on-premises with
Amazon MQ :
Some key takeaways from this section of the module include:
• Amazon MQ is a managed message broker service for Apache ActiveMQ that enables you to set up and operate message brokers in the cloud
• Amazon MQ manages the provisioning, setup, and maintenance of ActiveMQ, which is
a popular open-source message broker
• Amazon MQ is compatible with open standard APIs and protocols (that is, JMS, NMS,
AMQP, STOMP, MQTT, and WebSockets)
• You can use Amazon MQ to integrate on-premises and cloud environments by using
the network of brokers feature of ActiveMQ

Module12: Summary :
In summary, in this module, we learned how to:
• Differentiate between tightly and loosely coupled architectures
• Identify how Amazon SQS works and when to use it
• Identify how Amazon SNS works and when to use it
• Describe Amazon MQ


MODULE – 13

BUILDING MICROSERVICES AND SERVERLESS ARCHITECTURES


Section 1: Architectural need :
• Café business case requirement- The café wants to get daily reports via email about all
the orders that were placed on the website. They want this information so they can
anticipate demand and bake the correct number of desserts going forward (reducing
waste). They also want to identify any patterns in their business (analytics).

Section 2: Introducing microservices :


Some key takeaways from this section of the module include:
• Microservice applications are composed of independent services that communicate
over well-defined APIs
• Microservices share the following characteristics–
• Decentralized: Microservices are decentralized in the way they are developed,
deployed, managed, and operated
• Independent: Each component service in a microservices architecture can be
developed, deployed, operated, and scaled without affecting the function of other
services
• Specialized: Each component service is designed for a set of capabilities and
focuses on solving a specific problem
• Polyglot: Microservice architectures take a heterogeneous approach to operating
systems, programming languages, data stores, and tools
• Black boxes: The details of the complexity of microservice components are
hidden from other components
• You build it, you run it: DevOps is a key organizational principle for microservices

Section 3: Building microservice applications with AWS container services :


Some key takeaways from this section of the module include:
• Amazon ECS is a highly scalable, high-performance container management service. It
supports Docker containers and enables you to easily run applications on a managed
cluster of Amazon EC2 instances.
• Cluster auto scaling gives you more control over how you scale tasks within a cluster.
• AWS Cloud Map enables you to define custom names for your application resources. It
maintains the updated location of these dynamically changing resources.


• AWS App Mesh is a service mesh that provides application-level networking to make it
easy for your services to communicate with each other across multiple types of compute
infrastructure.
• AWS Fargate is a fully managed container service that enables you to run containers
without needing to manage servers or clusters.

Section 4: Introducing serverless architectures :


Some key takeaways from this section of the module include:
• Serverless computing enables you to build and run applications and services without
provisioning or managing servers
• Serverless architectures offer the following benefits –
• Lower TCO
• You can focus on your application
• You can use them to build microservice applications

Section 5: Building serverless architectures with AWS Lambda :


Some key takeaways from this section of the module include:
• Lambda is a serverless compute service that provides built-in fault tolerance and
automatic scaling.
• A Lambda function is custom code that you write that processes events.
• A Lambda function is invoked by a handler, which takes an event object and context
object as parameters.
• An event source is an AWS service or developer-created application that triggers a
Lambda function to run.
• Lambda layers enable functions to share code and keep deployment packages small.
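A minimal Python Lambda handler is sketched below; it assumes an API Gateway proxy integration as the event source, which is only one of many possible triggers.

import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes with an event object and a context object.

    This sketch assumes the function sits behind API Gateway (proxy integration),
    so it returns a status code, headers, and a JSON body.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }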

Section 6: Extending serverless architectures with Amazon API Gateway :


Some key takeaways from this section of the module include:
• Amazon API Gateway is a fully managed service that enables you to create, publish,
maintain, monitor, and secure APIs at any scale.
• Amazon API Gateway acts as an entry point to backend resources for your applications.
It abstracts and exposes APIs that can call various backend applications. These


applications include Lambda functions, Docker containers that run on EC2 instances,
VPCs, or any publicly accessible endpoint.
• Amazon API Gateway is deeply integrated with Lambda.

Fig 13.6 Example of Serverless Architectures

Section 7: Orchestrating microservices with AWS Step Functions :

Some key takeaways from this section of the module include:


• AWS Step Functions is a web service that enables you to coordinate components of
distributed applications and microservices by using visual workflows
• AWS Step Functions enables you to create and automate your own state machines
within the AWS environment
• AWS Step Functions manages the logic of your application for you, and it implements
basic primitives, such as sequential or parallel branches, and timeouts
• You define state machines by using the Amazon States Language
Module13: Summary :
In summary, in this module, we learned how to:
• Indicate the characteristics of microservices
• Refactor a monolithic application into microservices and use Amazon ECS to deploy the
containerized microservices
• Explain serverless architecture
• Implement a serverless architecture with AWS Lambda
• Describe a common architecture for Amazon API Gateway
• Describe the types of workflows that AWS Step Functions supports

Module13: List of Labs :


• Guided Lab1 – Breaking a Monolith Node.js Application into Microservices: In this
guided lab, we:

• Prepare the AWS Cloud9 development environment


• Run a monolithic application on a basic Node.js server
• Containerize the monolith for Amazon ECS
• Deploy the monolith to Amazon ECS


• Refactor the monolith into containerized microservices


• Guided Lab2 – Implementing a Serverless Architecture with AWS Lambda: In this
guided lab, we:

• Create a Lambda function to load data


• Configure an Amazon S3 event
• Test the loading process
• Configure notifications
• Create a Lambda function to send notifications
• Test the system
• Challenge Lab – Implementing a Serverless Architecture for the Cafe: In this challenge
lab, we:

• Download the source code


• Create the DataExtractor Lambda function in the VPC
• Create the salesAnalysisReport Lambda function
• Create an SNS topic
• Create an email subscription to the SNS topic
• Test the salesAnalysisReport Lambda function
• Set up an Amazon EventBridge event to trigger the Lambda function each
day

MODULE – 14

PLANNING FOR DISASTER


Section 1: Architectural need :
• Café business case requirement- If the café's infrastructure ever becomes unavailable, the staff must be able to get their applications running again within an amount of time that is acceptable to the business. They need an architecture that supports their disaster recovery plans while also optimizing for cost.

Section 2: Disaster planning strategies :


Some key takeaways from this section of the module include:
• To choose the correct disaster recovery strategy, first identify your recovery point
objective (RPO) and recovery time objective (RTO)


• Use features such as S3 Cross-Region Replication, EBS volume snapshots, and RDS
snapshots to protect data
• Use networking features—such as Route 53 failover and Elastic Load Balancing—to
improve application availability
• Use automation services—such as AWS CloudFormation—as part of your DR strategy
to quickly deploy duplicate environments when necessary

Section 3: Disaster recovery patterns :


Some key takeaways from this section of the module include:
• Common disaster recovery patterns on AWS include backup and restore, pilot light,
warm standby, and multi-site.
• Backup and restore is the most cost effective approach, but it has the highest RTO.
• Multi-site provides the fastest RTO, but it costs the most because it provides a fully
running production-ready duplicate.
• AWS Storage Gateway provides three interfaces—file gateway, volume gateway, and
tape gateway—for data backup and recovery between on-premises and the AWS Cloud.

Module14: Summary :

In summary, in this module, we learned how to:


• Identify strategies for disaster planning
• Define RPO and RTO
• Describe four common patterns for backup and disaster recovery and how to
implement them
• Use AWS Storage Gateway for on-premises-to-cloud backup solutions
Module14: List of Labs :
• Guided Lab – Hybrid Storage and Data Migration with AWS Storage Gateway File
Gateway: In this guided lab, we:

• Review the lab architecture


• Create the primary and secondary S3 buckets
• Enable Cross-Region Replication
• Configure the file gateway and create an NFS file share
• Mount the file share on the Linux instance and migrate the data
• Verify that the data is migrated


BRIDGING TO CERTIFICATION CAPSTONE PROJECT


Task 0 : Inspecting your environment
Inspect the Example VPC, the subnets, the security groups, and the AMI.
Open the VPC console and collect the private and public subnet information.
1) Subnet info
Public subnet1-10.0.0.0/24
Public subnet2-10.0.1.0/24
Private Subnet1-10.0.2.0/23
Private Subnet2-10.0.4.0/23
2) Security groups info
ALBSG, Bastion-SG, Example-DB, Inventory-App
Task 1 : Create a MySQL RDS database instance
Step1: Create Subnet Groups

RDS-->Choose Subnet Groups

Subnet Group Details

Name-Example-DB-subnet

Description-Example-DB-subnet

VPC-Select Example VPC

Add Subnet
AZ-Select us-east-1a (private subnet 1) and us-east-1b (private subnet 2)
Subnet-Select the Private subnet1(10.0.2.0/23) and Private subnet2(10.0.4.0/23)
Step2 : Create Database

Navigate RDS Service in AWS and create dB instance


RDS-->Database-->Create Database
Create database:
Choose a database creation method-Standard create


Engine options-Select MySQL


Templates-Dev/Test
Availability and durability-Choose Multi-AZ DB instance

Settings
DB instance identifier-Example
Credentials Settings:
Master username-admin

Master password-password
Confirm password-password
Instance Configuration:
DB instance class-Choose Burstable classes (includes t classes), db.t3.micro
Storage:
Storage type-General Purpose (SSD)
Allocated storage-20 GB
Storage autoscaling:
Enable storage autoscaling
Connectivity
Virtual private cloud (VPC)-Select Example VPC

Subnet Group-Example-DB-subnet(automatically chooses)


Public Access-No
VPC Security Group-Example-DB
Database Authentication
Database authentication options-Password authentication
Database options
Initial database name-exampledb
Backup-uncheck it
Monitoring-Disable monitoring
Then click Create database
Task 2 : Create Cloud9 Environment
Navigate cloud9
Create cloud environment


Step 1
Name environment
Name-capstone project

Step 2
Configure settings
Environment type-ensure Direct access
Instance type-t2.micro (1 GiB RAM + 1 vCPU)

Platform-Amazon Linux 2 (recommended)


Network settings (advanced)
Network (VPC)-Select Example VPC
Subnet-Select Public Subnet2 then click Next
Step 3
Review -Create Environment
Task 3 : Install a LAMP web server on Amazon Linux2 on cloud9 instance
Step 1: Prepare the LAMP server

#sudo yum update -y


#sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
#sudo yum install -y httpd mariadb-server
#sudo systemctl start httpd

#sudo systemctl enable httpd


#sudo systemctl is-enabled httpd
Step2-Download the project assets(copy from link from the capstone project)

//Download PHP Code:

wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/ILT-TF-200ACACAD-20-
EN/capstone-project/Example.zip
Then come back to the Cloud9 service and unzip the downloaded PHP file by using the following commands:
ls


sudo mkdir nmindsacademy
sudo unzip Example.zip -d nmindsacademy/
sudo cp -R nmindsacademy/* /var/www/html/
sudo rm Example.zip
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
Check the public IP of the Cloud9 EC2 instance and paste it into a new tab (at this point the webpage does not load).
Solution:
Choose Instances and select your instance (the Cloud9-created instance, whose name starts with aws-cloud9).
On the Security tab, view the inbound rules. Add a rule that allows HTTP from 0.0.0.0/0, then refresh the webpage again; it now displays.
Task 4 : Import the Database into the database
cd ..
//Download Database:

wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/ILT-TF-
200ACACAD-20-EN/capstone-project/Countrydatadump.sql
ll or ls
Go back to the RDS dashboard; the database instance is now created (available). Copy the endpoint of the DB.
Go to the Cloud9 service to access the machine.
The database file has downloaded successfully.
Important:
Go to Security Groups, select Example-DB, and add an inbound rule for MYSQL/Aurora with the Cloud9 instance as the source. Only then can we import the database. Then come back to the Cloud9 instance.
mysql -u admin -p --host <rds-endpoint>
DB instance identifier-Example
Credentials Settings-admin
Master password-password


Copy and paste: mysql -u admin -p --host <endpoint>
When it asks for the password, enter the master password and hit Enter.
If the connection is correct, the prompt will look like this: MySQL [(none)]>
Then type show databases; to list the databases.
MySQL [(none)]> use exampledb;
MySQL [exampledb]> show tables;
MySQL [exampledb]> exit;
Bye
voclabs:~/environment $
Import the file:
mysql -u admin -p exampledb --host <rds-endpoint> < Countrydatadump.sql

When it asks for the password, enter the master password and hit Enter.
Verify once again that the database was imported by using the following commands:
mysql -u admin -p --host <endpoint>
MySQL [(none)]> use exampledb;

MySQL [exampledb]> show tables;


MySQL [exampledb]> select * from countrydata_final;
MySQL [exampledb]> exit;

Task 5 : Configure Parameter values in AWS systems manager


Step1:
Navigate to AWS systems manager and create parameter for following values

/example/endpoint <rds-endpoint>


/example/username admin
/example/password password
/example/database exampledb
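As an illustration of how an application could read these parameters at runtime, the following boto3 sketch fetches them from Parameter Store; the capstone application itself is PHP (using the AWS SDK for PHP), so this Python snippet is only for illustration.

import boto3

ssm = boto3.client("ssm")

# Read the connection settings stored above. If the password were stored as a
# SecureString, pass WithDecryption=True to get_parameter.
endpoint = ssm.get_parameter(Name="/example/endpoint")["Parameter"]["Value"]
username = ssm.get_parameter(Name="/example/username")["Parameter"]["Value"]
password = ssm.get_parameter(Name="/example/password")["Parameter"]["Value"]
database = ssm.get_parameter(Name="/example/database")["Parameter"]["Value"]

print(f"Connecting to {database} at {endpoint} as {username}")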

Step2 : Modify IAM role to cloud9 instance


Go to the Cloud9 EC2 instance and attach the IAM role Inventory-App-Role, then refresh the web page again.
Now the application can access the database successfully.
Task 6 : Create AMI for Autoscaling(Cloud9 instance)
In this step we create an AMI from the Cloud9 instance: select the
Cloud9 instance-->Actions-->Image and templates-->Create image

Image Name-CapstoneProjectAMI
Description-AMI for CapstoneProject

Then create Image.


It will take a few minutes.
Task 7 -Create Load Balancer
Step1:

Create Load Balancer


Go to EC2 console and select Load Balancer in new tab
Select Create Load Balancer
Select Application Load Balancer
Load balancer name-CapstoneProject-LB
Network mapping
VPC-Select Example VPC
Mappings-Select both us-east-1a (below it, choose Public subnet1) and us-east-1b (below it, choose Public subnet2)
Security groups
Security groups-Select ALBSG security group


Listeners and routing


Default action-A target group is needed here, so create one first (see Create Target Group below)
Create Target Group:

Step 1
Specify group details:
Choose a target type-instance
Target group name-CapstoneProject-TG

VPC-Ensure Example VPC is selected then click next


Step 2
Register targets:
Two available targets are listed; do not select the instances, and click Create target group.
Come back to the load balancer tab and refresh Listeners and routing; the created target group name now appears, so select it.

Select Capstone Project-TG


Then Click create Load Balancer
Copy the DNS Name
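For reference, a rough AWS CLI equivalent of these console steps is sketched below; the VPC, subnet, and security group IDs and the ARNs are placeholders, not values from the lab:
# Target group for instance targets in the Example VPC
aws elbv2 create-target-group --name CapstoneProject-TG \
    --protocol HTTP --port 80 --target-type instance --vpc-id vpc-0123456789abcdef0

# Application Load Balancer in the two public subnets with the ALBSG security group
aws elbv2 create-load-balancer --name CapstoneProject-LB \
    --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0

# HTTP listener that forwards to the target group
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>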
Task 8 - Create Auto Scaling
In the EC2 management console, under Auto Scaling, choose Auto Scaling Groups in a new tab, then choose Create Auto Scaling group.
Step 1 - Choose launch template or configuration
Name
Auto Scaling group name - My-ASG
Launch template
Launch template - Choose Example-LT, then modify the template so that it uses the CapstoneProjectAMI we just created:
Select Example-LT, go to Details --> Actions --> Modify template
Scroll down and, under Launch template contents, choose our CapstoneProjectAMI ID, then create the template version.
Ensure the CapstoneProjectAMI ID is now set in our template, then click Next.
Step 2 - Choose instance launch options
Network
VPC - Select Example VPC
Availability Zones and subnets - Select Private subnet1 & Private subnet2, then click Next
Step 3 (optional) - Configure advanced options
Load balancing - Select Attach to an existing load balancer
Attach to an existing load balancer
Existing load balancer target groups - Select the CapstoneProject-TG target group created earlier
Health checks - Select ELB (check the box), then click Next
Step 4 (optional) - Configure group size and scaling policies
Group size - set the capacities as below:
Desired capacity 2
Minimum capacity 2
Maximum capacity 2
Then click Next
Step 5 (optional) - Add notifications, then click Next
Step 6 (optional) - Add tags
Add a Name tag with value Nminds-CapstoneProject, then click Next
Step 7 - Review, then click Create Auto Scaling group.
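A rough CLI equivalent of this Auto Scaling configuration is sketched below; the subnet IDs and target group ARN are placeholders, and it assumes Example-LT already points at CapstoneProjectAMI:
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name My-ASG \
    --launch-template LaunchTemplateName=Example-LT \
    --min-size 2 --max-size 2 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-private1,subnet-private2" \
    --target-group-arns <capstoneproject-tg-arn> \
    --health-check-type ELB \
    --tags Key=Name,Value=Nminds-CapstoneProject,PropagateAtLaunch=true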
Task 9 : Check the output by using the load balancer DNS name
Go to the load balancer, copy its DNS name, and paste it into a new browser tab.
Check whether the website and database are connected.
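From the Cloud9 terminal, a quick reachability check of the load balancer can look like this; the DNS name below is a placeholder for the one you copied:
# Expect an HTTP 200 response once the Auto Scaling instances pass their health checks
curl -I http://CapstoneProject-LB-1234567890.us-east-1.elb.amazonaws.com/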
CONCLUSION
Fig - 1: Social Research Organisation
Using the following Amazon Web Services, we have built the capstone website:
•Amazon Elastic Compute Cloud (Amazon EC2)
•Amazon Simple Storage Service (Amazon S3)
•Amazon Virtual Private Cloud (Amazon VPC)
•Amazon Route 53
•Amazon Relational Database Service (Amazon RDS)
•Amazon Simple Queue Service (Amazon SQS)
We also identified how to prepare for the AWS Certified Solutions Architect - Associate exam and where to find resources. In this capstone project, the website lets users explore different countries, their country IDs, and the total population of each country, all delivered using AWS services.
Fig - 2: Countries
CASE STUDY – UNILEVER
About UNILEVER :
Unilever was formed in 1930 by the merger of Dutch margarine producer, Margarine
Unie and British soap maker, Lever Brothers. Today, the consumer goods giant sells food,
home care, refreshments, and personal care products in over 190 countries. Unilever has
headquarters in London, United Kingdom and Rotterdam, the Netherlands, and
subsidiaries in over 90 countries. The company employs more than 170,000 people. In
2012, Unilever reported more than €51 billion in revenue.
The Challenge :
Unilever North America in Englewood Cliffs, New Jersey needed to re-design its
infrastructure to support Unilever’s digital marketing approach. Unilever previously
used on-premises data centers to host its web properties, all of which had different
technologies and processes. “We needed to standardize our environment to support a faster time-to-market,” says Sreenivas Yalamanchili, Digital Marketing Services (DMS)
Global Technical Manager. Unilever optimizes its business model by testing a marketing
campaign in a pilot country. If the campaign is successful, the company deploys it to
other countries and regions. The IT organization wanted to use the cloud to implement
the same process.
Why Amazon Web Services :
After a comprehensive RFP and review process involving more than 16 companies,
Unilever chose Amazon Web Services (AWS). Unilever’s priorities in choosing a digital
marketing platform included flexibility, a global infrastructure, technology, as well as a
rich ecosystem of members. “With AWS, we have the same hosting provider for all
regions, which means we don’t have to customize and tweak hosting solutions per
region,” says Yalamanchili. “Unilever is focused on delivering great brands to consumers;
it’s not an IT shop. We’re able to spend less and get more innovation by working with
AWS and members of the AWS Partner Network.”
The Unilever IT team had two goals for the AWS migration: deliver a common technology
platform for websites with regional content delivery architecture, and migrate existing
web properties to the cloud.
Benefits of AWS
• Improved business agility and operational efficiency.
• Responsiveness of the AWS Cloud
• Improved time to launch digital marketing campaigns
• Rapid rate of innovation
AWS Services Used :
Unilever uses a number of AWS services for different purposes.
Amazon S3 :
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. This means
customers of all sizes and industries can use it to store and protect any amount of data
for a range of use cases, such as websites, mobile applications, backup and restore,
archive, enterprise applications, IoT devices, and big data analytics.
Amazon Elastic Compute Cloud (Amazon EC2) :
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It gives complete control of computing resources and lets workloads run on Amazon's proven computing environment.
Unilever created AMIs running Windows and Linux for use on approximately 400 Amazon
EC2 instances.
Amazon Elastic Block Store :
Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale.
EBS snapshot copy :
EBS Snapshot Copy enables you to copy your EBS snapshots across AWS regions, thus
making it easier for you to leverage multiple AWS regions and accelerate your
geographical expansion, data center migration and disaster recovery.
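As an illustration, copying a snapshot to another region from the AWS CLI looks roughly like this; the snapshot ID and regions are placeholders:
# Copy an EBS snapshot from us-east-1 into us-west-2
aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Cross-region copy for disaster recovery"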
Fig - Snapshot copy
Amazon Virtual Private Cloud :
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically
isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the
VPC, add subnets, add gateways, and associate security groups. A subnet is a range of IP addresses in
your VPC.
Auto Scaling :
AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it is easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas.
Conclusion :
AWS is believed to have saved Unilever the expense of at least one operations position. Additionally, the flexibility and responsiveness of AWS are helping the company prepare for further growth. If a required AWS capability does not yet exist, it will probably be available within a matter of months as the platform grows. The low cost and simplicity of the services made switching to the AWS Cloud a straightforward decision, and with AWS there has always been an easy answer, in terms of both time and cost, to scaling the site.
CONCLUSION
Cloud computing will affect a large part of the computer industry, including software companies and Internet service providers. It makes it very easy for companies to deliver their products to end users without worrying about hardware configurations and other server requirements. Cloud computing and virtualization are distinguished by the fact that, in the cloud, all of the control-plane activities that center around the creation, management, and maintenance of the virtual environment are handed off to an automated layer, exposed through an API and the provider's management services.
In simple words, virtualization is a building block of cloud computing in which the hypervisor is managed manually. In cloud computing, on the other hand, the activities are self-managed: an API (Application Programming Interface) is used so that users can consume the cloud service on their own.
Coming to the course, in the final capstone project we used Amazon Web Services to build a website for a café and bakery business, applying the services introduced in the course and practised through its hands-on labs, including:
•Amazon Elastic Compute Cloud (Amazon EC2)
•Amazon Simple Storage Service (Amazon S3)
•Amazon Virtual Private Cloud (Amazon VPC)
•Amazon Route 53
•Amazon Relational Database Service (Amazon RDS)
•Amazon Simple Queue Service (Amazon SQS)
So, we have learned about and worked with all the basic services required to create and maintain a business website.
REFERENCES :
AWS Architecture course link:
[1] https://awsacademy.instructure.com/courses/25824
AWS Cloud Foundations course link:
[2] https://awsacademy.instructure.com/courses/25086
[3] https://docs.aws.amazon.com/ec2/?icmpid=docs_homepage_featuredsvcs
[4] https://docs.aws.amazon.com/s3/?icmpid=docs_homepage_featuredsvcs
[5] https://docs.aws.amazon.com/rds/?icmpid=docs_homepage_featuredsvcs
[6] https://docs.aws.amazon.com/dynamodb/?icmpid=docs_homepage_featuredsvcs
[7] https://docs.aws.amazon.com/vpc/?icmpid=docs_homepage_featuredsvcs
[8] https://docs.aws.amazon.com/lambda/?icmpid=docs_homepage_featuredsvcs
Case study link:
[9] https://aws.amazon.com/solutions/case-studies/unilever/