AWS Solutions Architect Exam Prep
Question 1 - Incorrect
The solo founder at a tech startup has just created a brand new AWS account. The
founder has provisioned an Amazon EC2 instance 1A which is running in AWS
Region A. Later, the founder takes a snapshot of instance 1A and then creates a new
Amazon Machine Image (AMI) in Region A from this snapshot. This AMI is then
copied into another Region B. The founder provisions an instance 1B in Region B
using this new AMI.
At this point in time, what entities exist in Region B?
Your answer is incorrect
1 Amazon EC2 instance and 1 snapshot exist in Region B
1 Amazon EC2 instance and 1 AMI exist in Region B
1 Amazon EC2 instance and 2 AMIs exist in Region B
Correct answer
1 Amazon EC2 instance, 1 AMI and 1 snapshot exist in Region B
Overall explanation
Correct option:
1 Amazon EC2 instance, 1 AMI and 1 snapshot exist in Region B
An Amazon Machine Image (AMI) provides the information required to launch an
instance. You must specify an AMI when you launch an instance. When the new AMI
is copied from Region A into Region B, it automatically creates a snapshot in Region
B because AMIs are based on the underlying snapshots. Further, an instance is
created from this AMI in Region B. Hence, we have 1 Amazon EC2 instance, 1 AMI
and 1 snapshot in Region B.
Amazon Machine Image (AMI) Overview:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
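For illustration, here is a minimal boto3 (Python) sketch of the flow described above; the Region names, AMI ID, and instance type are placeholder assumptions, not values from the question.

```python
import boto3

# Copying an AMI into Region B also creates the backing EBS snapshot there,
# because AMIs are built on snapshots. IDs and Regions below are placeholders.
ec2_b = boto3.client("ec2", region_name="us-west-2")      # "Region B"

copy = ec2_b.copy_image(
    SourceRegion="us-east-1",                             # "Region A"
    SourceImageId="ami-0123456789abcdef0",                # AMI created from the snapshot
    Name="copied-ami-region-b",
)

# Wait until the copied AMI is available, then launch instance 1B from it.
ec2_b.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2_b.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
# Result in Region B: 1 EC2 instance, 1 AMI, and 1 snapshot.
```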
Incorrect options:
1 Amazon EC2 instance and 1 AMI exist in Region B
1 Amazon EC2 instance and 1 snapshot exist in Region B
1 Amazon EC2 instance and 2 AMIs exist in Region B
As mentioned earlier in the explanation, when the new AMI is copied from Region A
into Region B, it also creates a snapshot in Region B because AMIs are based on the
underlying snapshots. In addition, an instance is created from this AMI in Region B.
So, we have 1 Amazon EC2 instance, 1 AMI and 1 snapshot in Region B. Hence all
three options are incorrect.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Domain
Design High-Performing Architectures
Question 2 - Correct
The engineering team at an e-commerce company wants to establish a dedicated,
encrypted, low latency, and high throughput connection between its data center
and AWS Cloud. The engineering team has set aside sufficient time to account for
the operational overhead of establishing this connection.
As a solutions architect, which of the following solutions would you recommend to
the company?
Use AWS site-to-site VPN to establish a connection between the data
center and AWS Cloud
Use AWS Transit Gateway to establish a connection between the data
center and AWS Cloud
Your answer is correct
Use AWS Direct Connect plus virtual private network (VPN) to establish a
connection between the data center and AWS Cloud
Use AWS Direct Connect to establish a connection between the data
center and AWS Cloud
Overall explanation
Correct option:
Use AWS Direct Connect plus virtual private network (VPN) to establish a
connection between the data center and AWS Cloud
AWS Direct Connect is a cloud service solution that makes it easy to establish a
dedicated network connection from your premises to AWS. AWS Direct Connect lets
you establish a dedicated network connection between your network and one of the
AWS Direct Connect locations.
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct
Connect dedicated network connections with the Amazon VPC VPN. This
combination provides an IPsec-encrypted private connection that also reduces
network costs, increases bandwidth throughput, and provides a more consistent
network experience than internet-based VPN connections.
This solution combines the AWS-managed benefits of the VPN solution with the low
latency, increased bandwidth, and more consistent network experience of AWS
Direct Connect, along with an end-to-end, secure IPsec connection. Therefore, AWS
Direct Connect plus VPN is the correct solution for this use-case.
AWS Direct Connect Plus VPN:
 via - https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-
options/aws-direct-connect-vpn.html
Incorrect options:
Use AWS site-to-site VPN to establish a connection between the data
center and AWS Cloud - AWS Site-to-Site VPN enables you to securely connect
your on-premises network or branch office site to your Amazon Virtual Private Cloud
(Amazon VPC). A VPC VPN Connection utilizes IPSec to establish encrypted network
connectivity between your intranet and Amazon VPC over the Internet. VPN
Connections are a good solution if you have an immediate need, have low to
modest bandwidth requirements, and can tolerate the inherent variability in
Internet-based connectivity. However, Site-to-Site VPN cannot provide a low-latency,
high-throughput connection, so this option is ruled out.
Use AWS Transit Gateway to establish a connection between the data
center and AWS Cloud - AWS Transit Gateway is a network transit hub that you
can use to interconnect your virtual private clouds (VPC) and on-premises networks.
AWS Transit Gateway by itself cannot establish a low latency and high throughput
connection between a data center and AWS Cloud. Hence this option is incorrect.
Use AWS Direct Connect to establish a connection between the data
center and AWS Cloud - AWS Direct Connect by itself cannot provide an
encrypted connection between a data center and AWS Cloud, so this option is ruled
out.
References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-
direct-connect-plus-vpn-network-to-amazon.html
Domain
Design High-Performing Architectures
Question 3 - Correct
The payroll department at a company initiates several computationally intensive
workloads on Amazon EC2 instances at a designated hour on the last day of every
month. The payroll department has noticed a trend of severe performance lag
during this hour. The engineering team has figured out a solution by using an Auto
Scaling group for these Amazon EC2 instances and making sure that 10 Amazon
EC2 instances are available during this peak usage hour. For normal operations, only
2 Amazon EC2 instances are enough to cater to the workload.
As a solutions architect, which of the following steps would you recommend to
implement the solution?
Configure your Auto Scaling group by creating a simple tracking policy and
setting the instance count to 10 at the designated hour. This causes the
scale-out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a target tracking policy and
setting the instance count to 10 at the designated hour. This causes the
scale-out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a scheduled action that
kicks-off at the designated hour on the last day of the month. Set the min
count as well as the max count of instances to 10. This causes the scale-
out to happen before peak traffic kicks in at the designated hour
Your answer is correct
Configure your Auto Scaling group by creating a scheduled action that
kicks-off at the designated hour on the last day of the month. Set the
desired capacity of instances to 10. This causes the scale-out to happen
before peak traffic kicks in at the designated hour
Overall explanation
Correct option:
Configure your Auto Scaling group by creating a scheduled action that
kicks-off at the designated hour on the last day of the month. Set the
desired capacity of instances to 10. This causes the scale-out to happen
before peak traffic kicks in at the designated hour
Scheduled scaling allows you to set your own scaling schedule. For example, let's
say that every week the traffic to your web application starts to increase on
Wednesday, remains high on Thursday, and starts to decrease on Friday. You can
plan your scaling actions based on the predictable traffic patterns of your web
application. Scaling actions are performed automatically as a function of time and
date.
A scheduled action sets the minimum, maximum, and desired sizes to what is
specified by the scheduled action at the time specified by the scheduled action. For
the given use case, the correct solution is to set the desired capacity to 10. Min and
max values are needed only when we want to specify a range of instances.
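As a rough illustration, the scheduled action could be created with boto3 (Python) along the lines below; the group name, action names, and timestamps are assumptions for the sketch, and a cron Recurrence could be used instead of one-off start times.

```python
import datetime
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out to exactly 10 instances just before the month-end peak hour...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="payroll-asg",
    ScheduledActionName="month-end-scale-out",
    StartTime=datetime.datetime(2025, 1, 31, 9, 0),   # placeholder date/time (UTC)
    DesiredCapacity=10,
)

# ...and scale back in to the normal 2 instances afterwards.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="payroll-asg",
    ScheduledActionName="month-end-scale-in",
    StartTime=datetime.datetime(2025, 1, 31, 11, 0),  # placeholder date/time (UTC)
    DesiredCapacity=2,
)
```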
Incorrect options:
Configure your Auto Scaling group by creating a scheduled action that
kicks-off at the designated hour on the last day of the month. Set the min
count as well as the max count of instances to 10. This causes the scale-
out to happen before peak traffic kicks in at the designated hour - As
mentioned earlier in the explanation, min and max values are needed only when we
want to specify a range of instances. As the given use case requires exactly 10
instances to be available during the peak hour, we must set the desired capacity
to 10. Hence this option is incorrect.
Configure your Auto Scaling group by creating a target tracking policy and
setting the instance count to 10 at the designated hour. This causes the
scale-out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a simple tracking policy and
setting the instance count to 10 at the designated hour. This causes the
scale-out to happen before peak traffic kicks in at the designated hour
Target tracking policy or simple tracking policy cannot be used to effect a scaling
action at a certain designated hour. Both these options have been added as
distractors.
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
Domain
Design Resilient Architectures
Question 4 - Correct
A media company runs a photo-sharing web application that is accessed across
three different countries. The application is deployed on several Amazon Elastic
Compute Cloud (Amazon EC2) instances running behind an Application Load
Balancer. With new government regulations, the company has been asked to block
access from two countries and allow access only from the home country of the
company.
Which configuration should be used to meet this changed requirement?
Configure the security group on the Application Load Balancer
Use Geo Restriction feature of Amazon CloudFront in an Amazon Virtual
Private Cloud (Amazon VPC)
Configure the security group for the Amazon EC2 instances
Your answer is correct
Configure AWS Web Application Firewall (AWS WAF) on the Application
Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)
Overall explanation
Correct option:
AWS Web Application Firewall (AWS WAF) is a web application firewall service that
lets you monitor web requests and protect your web applications from malicious
requests. Use AWS WAF to block or allow requests based on conditions that you
specify, such as the IP addresses. You can also use AWS WAF preconfigured
protections to block common attacks like SQL injection or cross-site scripting.
Configure AWS Web Application Firewall (AWS WAF) on the Application
Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)
You can use AWS WAF with your Application Load Balancer to allow or block
requests based on the rules in a web access control list (web ACL). Geographic
(Geo) Match Conditions in AWS WAF allow you to restrict
application access based on the geographic location of your viewers. With geo
match conditions you can choose the countries from which AWS WAF should allow
access.
Geo match conditions are important for many customers. For example, legal and
licensing requirements restrict some customers from delivering their applications
outside certain countries. These customers can configure a whitelist that allows only
viewers in those countries. Other customers need to prevent the downloading of
their encrypted software by users in certain countries. These customers can
configure a blacklist so that end-users from those countries are blocked from
downloading their software.
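As a sketch only, the newer AWS WAFv2 API could express the "allow only the home country" rule roughly as below; the web ACL name, country code, and ALB ARN are placeholder assumptions.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")    # region of the ALB

# Block everything by default and allow only requests whose source country
# matches the home country, then attach the web ACL to the ALB.
acl = wafv2.create_web_acl(
    Name="home-country-only",
    Scope="REGIONAL",                                     # REGIONAL scope is used for ALBs
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-home-country",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowHomeCountry",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "HomeCountryOnly",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/photo-alb/abc123",
)
```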
Incorrect options:
Use Geo Restriction feature of Amazon CloudFront in an Amazon Virtual
Private Cloud (Amazon VPC) - Geo Restriction feature of Amazon CloudFront
helps in restricting traffic based on the user's geographic location. But, CloudFront
works from edge locations and doesn't belong to a VPC. Hence, this option itself is
incorrect and given only as a distractor.
Configure the security group on the Application Load Balancer
Configure the security group for the Amazon EC2 instances
Security Groups cannot restrict access based on the user's geographic location.
References:
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-
geographic-match/
https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-
load-balancers/
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-
Application-Load-Balancer/
Domain
Design Secure Architectures
Question 5 - Correct
An e-commerce company is looking for a solution with high availability, as it plans
to migrate its flagship application to a fleet of Amazon Elastic Compute Cloud
(Amazon EC2) instances. The solution should allow for content-based routing as part
of the architecture.
As a Solutions Architect, which of the following will you suggest for the company?
Use a Network Load Balancer for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure a
Private IP address to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure an
elastic IP address (EIP) to mask any failure of an instance
Your answer is correct
Use an Application Load Balancer for distributing traffic to the Amazon
EC2 instances spread across different Availability Zones (AZs). Configure
Auto Scaling group to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure a
Public IP address to mask any failure of an instance
Overall explanation
Correct option:
Use an Application Load Balancer for distributing traffic to the Amazon
EC2 instances spread across different Availability Zones (AZs). Configure
Auto Scaling group to mask any failure of an instance
The Application Load Balancer (ALB) is best suited for load balancing HTTP and
HTTPS traffic and provides advanced request routing targeted at the delivery of
modern application architectures, including microservices and containers. Operating
at the individual request level (Layer 7), the Application Load Balancer routes traffic
to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content
of the request.
This is the correct option since the question has a specific requirement for content-
based routing which can be configured via the Application Load Balancer. Different
Availability Zones (AZs) provide high availability to the overall architecture and Auto
Scaling group will help mask any instance failures.
More info on Application Load Balancer:
via - https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
Incorrect options:
Use a Network Load Balancer for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure a
Private IP address to mask any failure of an instance - Network Load Balancer
cannot facilitate content-based routing so this option is incorrect.
Use an Auto Scaling group for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure an
elastic IP address (EIP) to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the Amazon EC2
instances spread across different Availability Zones (AZs). Configure a
Public IP address to mask any failure of an instance
Both these options are incorrect as you cannot use the Auto Scaling group to
distribute traffic to the Amazon EC2 instances.
An elastic IP address (EIP) is a static, public, IPv4 address allocated to your AWS
account. With an Elastic IP address, you can mask the failure of an instance or
software by rapidly remapping the address to another instance in your account.
Elastic IPs do not change and remain allocated to your account until you delete
them.
More info on Elastic Load Balancing (ELB):
 via - https://docs.aws.amazon.com/whitepapers/latest/fault-tolerant-components/
fault-tolerant-components.pdf
You can span your Auto Scaling group across multiple Availability Zones (AZs) within
an AWS Region and then attach a load balancer to distribute incoming traffic
across those zones.
 via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-
zone.html
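A minimal boto3 (Python) sketch of this pattern is shown below; the group name, launch template, subnets, and target group ARN are placeholder assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Span the Auto Scaling group across subnets in different AZs and attach the
# ALB's target group; the load balancer (not the ASG) distributes the traffic.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="flagship-asg",
    LaunchTemplate={"LaunchTemplateName": "flagship-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",      # subnets in two different AZs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/flagship/abc123"],
    HealthCheckType="ELB",                                # replace instances the ALB marks unhealthy
)
```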
References:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
https://docs.aws.amazon.com/whitepapers/latest/fault-tolerant-components/fault-
tolerant-components.pdf
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-
zone.html
Domain
Design Resilient Architectures
Question 6 - Correct
An IT security consultancy is working on a solution to protect data stored in Amazon
S3 from any malicious activity as well as check for any vulnerabilities on Amazon
EC2 instances.
As a solutions architect, which of the following solutions would you suggest to help
address the given requirement?
Use Amazon Inspector to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon GuardDuty to
check for vulnerabilities on Amazon EC2 instances
Use Amazon GuardDuty to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon GuardDuty to
check for vulnerabilities on Amazon EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon Inspector to
check for vulnerabilities on Amazon EC2 instances
Your answer is correct
Use Amazon GuardDuty to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon Inspector to
check for vulnerabilities on Amazon EC2 instances
Overall explanation
Correct option:
Use Amazon GuardDuty to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon Inspector to
check for vulnerabilities on Amazon EC2 instances
Amazon GuardDuty offers threat detection that enables you to continuously monitor
and protect your AWS accounts, workloads, and data stored in Amazon S3.
GuardDuty analyzes continuous streams of metadata generated from your account
and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and
DNS Logs. It also uses integrated threat intelligence such as known malicious IP
addresses, anomaly detection, and machine learning to identify threats more
accurately.
How Amazon GuardDuty works:
via - https://aws.amazon.com/guardduty/
Amazon Inspector security assessments help you check for unintended network
accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2
instances. Amazon Inspector assessments are offered to you as pre-defined rules
packages mapped to common security best practices and vulnerability definitions.
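For illustration, enabling GuardDuty with S3 protection could look roughly like the boto3 (Python) sketch below, assuming no detector exists yet in the account and Region; Amazon Inspector would be enabled separately for the EC2 assessments.

```python
import boto3

guardduty = boto3.client("guardduty")

# Create a GuardDuty detector with S3 protection turned on so that malicious
# or suspicious activity against data in Amazon S3 is monitored.
response = guardduty.create_detector(
    Enable=True,
    DataSources={"S3Logs": {"Enable": True}},
)
print("GuardDuty detector:", response["DetectorId"])
```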
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon GuardDuty to
check for vulnerabilities on Amazon EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon Inspector to
check for vulnerabilities on Amazon EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in
Amazon S3. Use security assessments provided by Amazon GuardDuty to
check for vulnerabilities on Amazon EC2 instances
These three options contradict the explanation provided above, so these options are
incorrect.
References:
https://aws.amazon.com/guardduty/
https://aws.amazon.com/inspector/
Domain
Design Secure Architectures
Question 7 - Correct
The development team at an e-commerce startup has set up multiple microservices
running on Amazon EC2 instances under an Application Load Balancer. The team
wants to route traffic to multiple back-end services based on the URL path of the
HTTP header. So it wants requests for https://www.example.com/orders to go to a
specific microservice and requests for https://www.example.com/products to go to
another microservice.
Which of the following features of Application Load Balancers can be used for this
use-case?
Host-based Routing
Query string parameter-based routing
Your answer is correct
Path-based Routing
HTTP header-based routing
Overall explanation
Correct option:
Path-based Routing
Elastic Load Balancing automatically distributes incoming application traffic across
multiple targets, such as Amazon EC2 instances, containers, IP addresses, and AWS
Lambda functions.
If your application is composed of several individual services, an Application Load
Balancer can route a request to a service based on the content of the request. Here
are the different types -
Host-based Routing:
You can route a client request based on the Host field of the HTTP header allowing
you to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the request.
HTTP header-based routing:
You can route a client request based on the value of any standard or custom HTTP
header.
HTTP method-based routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based routing:
You can route a client request based on source IP address CIDR from where the
request originates.
Path-based Routing Overview:
You can use path conditions to define rules that route requests based on the URL in
the request (also known as path-based routing).
The path pattern is applied only to the path of the URL, not to its query parameters.
 via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-
balancer-listeners.html#path-conditions
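A minimal boto3 (Python) sketch of the two path rules described in the use case; the listener ARN, target group ARNs, and rule priorities are placeholder assumptions.

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"  # placeholder

# /orders* goes to the orders microservice, /products* to the products one.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/111"}],
)
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/products*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/products/222"}],
)
```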
Incorrect options:
Query string parameter-based routing
HTTP header-based routing
Host-based Routing
As mentioned earlier in the explanation, none of these three routing types route
requests based on the URL path. Hence these three options are incorrect.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
listeners.html
Domain
Design Resilient Architectures
Question 8 - Correct
The engineering team at a Spanish professional football club has built a notification
system for its website using Amazon Simple Notification Service (Amazon SNS)
notifications which are then handled by an AWS Lambda function for end-user
delivery. During the off-season, the notification system needs to handle about 100
requests per second. During the peak football season, the rate touches about 5000
requests per second and it is noticed that a significant number of the notifications
are not being delivered to the end-users on the website.
As a solutions architect, which of the following would you suggest as the BEST
possible solution to this issue?
Your answer is correct
Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for AWS Lambda, so the team needs to contact AWS
support to raise the account limit
The engineering team needs to provision more servers running the AWS
Lambda service
The engineering team needs to provision more servers running the
Amazon SNS service
Amazon SNS has hit a scalability limit, so the team needs to contact AWS
support to raise the account limit
Overall explanation
Correct option:
Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for AWS Lambda, so the team needs to contact AWS
support to raise the account limit
Amazon Simple Notification Service (Amazon SNS) is a highly available, durable,
secure, fully managed pub/sub messaging service that enables you to decouple
microservices, distributed systems, and serverless applications.
How Amazon SNS Works:
via - https://aws.amazon.com/sns/
With AWS Lambda, you can run code without provisioning or managing servers. You
pay only for the compute time that you consume—there’s no charge when your
code isn’t running.
AWS Lambda has a default quota of 1,000 concurrent executions per AWS account per
Region. If your Amazon SNS message deliveries to AWS Lambda contribute to
crossing this concurrency quota, your Amazon SNS message deliveries will be
throttled, and you need to contact AWS Support to raise the account limit. Therefore this
option is correct.
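For illustration, the quota and function-level concurrency can be inspected and tuned with boto3 (Python) as sketched below (the function name and reserved value are assumptions); the account-level increase itself still goes through AWS Support.

```python
import boto3

lambda_client = boto3.client("lambda")

# Check the account-level concurrency quota that the SNS deliveries are hitting.
settings = lambda_client.get_account_settings()
print("Concurrent executions quota:", settings["AccountLimit"]["ConcurrentExecutions"])

# Optionally reserve concurrency for the notification handler so that other
# functions in the account cannot starve it during the peak season.
lambda_client.put_function_concurrency(
    FunctionName="sns-notification-handler",
    ReservedConcurrentExecutions=800,
)
```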
Incorrect options:
Amazon SNS has hit a scalability limit, so the team needs to contact AWS
support to raise the account limit - Amazon SNS leverages the proven AWS
cloud to dynamically scale with your application. You don't need to contact AWS
support, as SNS is a fully managed service, taking care of the heavy lifting related
to capacity planning, provisioning, monitoring, and patching. Therefore, this option
is incorrect.
The engineering team needs to provision more servers running the
Amazon SNS service
The engineering team needs to provision more servers running the AWS
Lambda service
As both AWS Lambda and Amazon SNS are serverless and fully managed services,
the engineering team cannot provision more servers. Both of these options are
incorrect.
References:
https://aws.amazon.com/sns/
https://aws.amazon.com/sns/faqs/
Domain
Design Resilient Architectures
Question 9 - Correct
A gaming company uses Amazon Aurora as its primary database service. The
company has now deployed 5 multi-AZ read replicas to increase the read
throughput and for use as failover targets. The replicas have been assigned the
following failover priority tiers, with the corresponding instance sizes given in
parentheses: tier-1 (16 terabytes), tier-1 (32 terabytes), tier-10 (16 terabytes),
tier-15 (16 terabytes), tier-15 (32 terabytes).
In the event of a failover, Amazon Aurora will promote which of the following read
replicas?
Tier-1 (16 terabytes)
Tier-15 (32 terabytes)
Your answer is correct
Tier-1 (32 terabytes)
Tier-10 (16 terabytes)
Overall explanation
Correct option:
Tier-1 (32 terabytes)
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system
that auto-scales up to 128TB per database instance. It delivers high performance
and availability with up to 15 low-latency read replicas, point-in-time recovery,
continuous backup to Amazon S3, and replication across three Availability Zones
(AZs).
For Amazon Aurora, each Read Replica is associated with a priority tier (0-15). In the
event of a failover, Amazon Aurora will promote the Read Replica that has the
highest priority (the lowest numbered tier). If two or more Aurora Replicas share the
same priority, then Amazon RDS promotes the replica that is largest in size. If two or
more Aurora Replicas share the same priority and size, then Amazon Aurora
promotes an arbitrary replica in the same promotion tier.
Therefore, for this problem statement, the Tier-1 (32 terabytes) replica will be
promoted.
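As a sketch, the promotion tier is simply a per-replica setting that can be adjusted (and a failover tested) with boto3 (Python); the instance and cluster identifiers below are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Lower tiers (0-15) are promoted first; ties are broken by the largest replica.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-32tb",
    PromotionTier=1,
    ApplyImmediately=True,
)

# A manual failover can be triggered to verify the promotion ordering.
rds.failover_db_cluster(DBClusterIdentifier="gaming-aurora-cluster")
```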
Incorrect options:
Tier-15 (32 terabytes)
Tier-1 (16 terabytes)
Tier-10 (16 terabytes)
Given the failover rules discussed earlier in the explanation, these three options are
incorrect.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Concepts.AuroraHighAvailability.html
Domain
Design Resilient Architectures
Question 10 - Skipped
A healthcare company uses its on-premises infrastructure to run legacy applications
that require specialized customizations to the underlying Oracle database as well as
its host operating system (OS). The company also wants to improve the availability
of the Oracle database layer. The company has hired you as an AWS Certified
Solutions Architect – Associate to build a solution on AWS that meets these
requirements while minimizing the underlying infrastructure maintenance effort.
Which of the following options represents the best solution for this use case?
Leverage cross AZ read-replica configuration of Amazon RDS for Oracle
that allows the Database Administrator (DBA) to access and customize the
database environment and the underlying operating system
Correct answer
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that
allows the Database Administrator (DBA) to access and customize the
database environment and the underlying operating system
Deploy the Oracle database layer on multiple Amazon EC2 instances
spread across two Availability Zones (AZs). This deployment configuration
guarantees high availability and also allows the Database Administrator
(DBA) to access and customize the database environment and the
underlying operating system
Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the
Database Administrator (DBA) to access and customize the database
environment and the underlying operating system
Overall explanation
Correct option:
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that
allows the Database Administrator (DBA) to access and customize the
database environment and the underlying operating system
Amazon RDS is a managed service that makes it easy to set up, operate, and scale
a relational database in the cloud. It provides cost-efficient and resizable capacity
while managing time-consuming database administration tasks. Amazon RDS can
automatically back up your database and keep your database software up to date
with the latest version. However, RDS does not allow you to access the host OS of
the database.
For the given use-case, you need to use Amazon RDS Custom for Oracle as it allows
you to access and customize your database server host and operating system, for
example by applying special patches and changing the database software settings
to support third-party applications that require privileged access. Amazon RDS
Custom for Oracle facilitates these functionalities with minimum infrastructure
maintenance effort. You need to set up the RDS Custom for Oracle in multi-AZ
configuration for high availability.
Amazon RDS Custom for Oracle:
 via - https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-
capabilities-in-database-environment/
Incorrect options:
Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the
Database Administrator (DBA) to access and customize the database
environment and the underlying operating system
Leverage cross AZ read-replica configuration of Amazon RDS for Oracle
that allows the Database Administrator (DBA) to access and customize the
database environment and the underlying operating system
Amazon RDS for Oracle does not allow you to access and customize your database
server host and operating system. Therefore, both these options are incorrect.
Deploy the Oracle database layer on multiple Amazon EC2 instances
spread across two Availability Zones (AZs). This deployment configuration
guarantees high availability and also allows the Database Administrator
(DBA) to access and customize the database environment and the
underlying operating system - The use case requires that the best solution
should involve minimum infrastructure maintenance effort. When you use Amazon
EC2 instances to host the databases, you need to manage the server health, server
maintenance, server patching, and database maintenance tasks yourself. In
addition, you will also need to manage the multi-AZ configuration by deploying
Amazon EC2 instances across two Availability Zones (AZs), perhaps by using an
Auto Scaling group. These steps entail significant maintenance effort. Hence this
option is incorrect.
References:
https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-
capabilities-in-database-environment/
https://aws.amazon.com/rds/faqs/
Domain
Design Resilient Architectures
Question 11 - Correct
A major bank is using Amazon Simple Queue Service (Amazon SQS) to migrate
several core banking applications to the cloud to ensure high availability and cost
efficiency while simplifying administrative complexity and overhead. The
development team at the bank expects a peak rate of about 1000 messages per
second to be processed via SQS. It is important that the messages are processed in
order.
Which of the following options can be used to implement this system?
Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages
Your answer is correct
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4
messages per operation to process the messages at the peak rate
Use Amazon SQS standard queue to process the messages
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2
messages per operation to process the messages at the peak rate
Overall explanation
Correct option:
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4
messages per operation to process the messages at the peak rate
Amazon Simple Queue Service (SQS) is a fully managed message queuing service
that enables you to decouple and scale microservices, distributed systems, and
serverless applications. SQS offers two types of message queues - Standard queues
vs FIFO queues.
For FIFO queues, the order in which messages are sent and received is strictly
preserved (i.e. First-In-First-Out). On the other hand, the standard SQS queues offer
best-effort ordering. This means that occasionally, messages might be delivered in
an order different from which they were sent.
By default, FIFO queues support up to 300 messages per second (300 send, receive,
or delete operations per second). When you batch 10 messages per operation
(maximum), FIFO queues can support up to 3,000 messages per second. Therefore
you need to batch 4 messages per operation so that the FIFO queue can support
up to 1,200 messages per second, which comfortably covers the peak rate of 1,000
messages per second.
FIFO Queues Overview:
 via
- https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/
FIFO-queues.html
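A minimal boto3 (Python) sketch of batching 4 messages per operation against a FIFO queue; the queue URL, message group ID, and message bodies are placeholder assumptions.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/core-banking.fifo"  # placeholder

# 300 batched operations/second x 4 messages per batch = up to 1,200 messages/second.
entries = [
    {
        "Id": str(i),
        "MessageBody": f"transaction-{i}",
        "MessageGroupId": "core-banking",        # ordering is preserved per message group
        "MessageDeduplicationId": f"txn-{i}",    # or enable content-based deduplication
    }
    for i in range(4)
]
sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
```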
Incorrect options:
Use Amazon SQS standard queue to process the messages - As messages
need to be processed in order, therefore standard queues are ruled out.
Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages -
By default, FIFO queues support up to 300 messages per second and this is not
sufficient to meet the message processing throughput per the given use-case.
Hence this option is incorrect.
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2
messages per operation to process the messages at the peak rate - As
mentioned earlier in the explanation, you need to use FIFO queues in batch mode
and process 4 messages per operation, so that the FIFO queue can support up to
1200 messages per second. With 2 messages per operation, you can only support
up to 600 messages per second.
References:
https://aws.amazon.com/sqs/
https://aws.amazon.com/sqs/features/
Domain
Design Resilient Architectures
Question 12 - Correct
A leading social media analytics company is contemplating moving its dockerized
application stack into AWS Cloud. The company is not sure about the pricing for
using Amazon Elastic Container Service (Amazon ECS) with the EC2 launch type
compared to the Amazon Elastic Container Service (Amazon ECS) with the Fargate
launch type.
Which of the following is correct regarding the pricing for these two services?
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are charged based on Amazon EC2 instances and Amazon EBS
Elastic Volumes used
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are charged based on vCPU and memory resources that the
containerized application requests
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are just charged based on Elastic Container Service used per
hour
Your answer is correct
Amazon ECS with EC2 launch type is charged based on EC2 instances and
EBS volumes used. Amazon ECS with Fargate launch type is charged based
on vCPU and memory resources that the containerized application
requests
Overall explanation
Correct option:
Amazon ECS with EC2 launch type is charged based on EC2 instances and
EBS volumes used. Amazon ECS with Fargate launch type is charged based
on vCPU and memory resources that the containerized application
requests
Amazon Elastic Container Service (Amazon ECS) is a fully managed container
orchestration service. ECS allows you to easily run, scale, and secure Docker
container applications on AWS.
Amazon ECS Overview:
via - https://aws.amazon.com/ecs/
With the Fargate launch type, you pay for the amount of vCPU and memory
resources that your containerized application requests. vCPU and memory resources
are calculated from the time your container images are pulled until the Amazon ECS
Task terminates, rounded up to the nearest second. With the EC2 launch type, there
is no additional charge for Amazon ECS itself. You pay for the AWS resources (e.g. EC2
instances or EBS volumes) you create to store and run your application.
Incorrect options:
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are charged based on vCPU and memory resources that the
containerized application requests
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are charged based on Amazon EC2 instances and Amazon EBS
Elastic Volumes used
As mentioned above - with the Fargate launch type, you pay for the amount of vCPU
and memory resources. With EC2 launch type, you pay for AWS resources (e.g. EC2
instances or EBS volumes). Hence both these options are incorrect.
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate
launch type are just charged based on Elastic Container Service used per
hour
This is a made-up option and has been added as a distractor.
References:
https://aws.amazon.com/ecs/pricing/
Domain
Design Cost-Optimized Architectures
Question 13 - Incorrect
The DevOps team at an e-commerce company wants to perform some maintenance
work on a specific Amazon EC2 instance that is part of an Auto Scaling group using
a step scaling policy. The team is facing a maintenance challenge - every time the
team deploys a maintenance patch, the instance health check status shows as out
of service for a few minutes. This causes the Auto Scaling group to provision
another replacement instance immediately.
As a solutions architect, which are the MOST time/resource efficient steps that you
would recommend so that the maintenance work can be completed at the earliest?
(Select two)
Take a snapshot of the instance, create a new Amazon Machine Image
(AMI) and then launch a new instance using this AMI. Apply the
maintenance patch to this new instance and then add it back to the Auto
Scaling Group by using the manual scaling policy. Terminate the earlier
instance that had the maintenance issue
Delete the Auto Scaling group and apply the maintenance fix to the given
instance. Create a new Auto Scaling group and add all the instances again
using the manual scaling policy
Your selection is incorrect
Suspend the ScheduledActions process type for the Auto Scaling group
and apply the maintenance patch to the instance. Once the instance is
ready, you can manually set the instance's health status back to
healthy and activate the ScheduledActions process type again
Your selection is correct
Put the instance into the Standby state and then update the instance by
applying the maintenance patch. Once the instance is ready, you can exit
the Standby state and then return the instance to service
Correct selection
Suspend the ReplaceUnhealthy process type for the Auto Scaling group
and apply the maintenance patch to the instance. Once the instance is
ready, you can manually set the instance's health status back to healthy
and activate the ReplaceUnhealthy process type again
Overall explanation
Correct options:
Put the instance into the Standby state and then update the instance by
applying the maintenance patch. Once the instance is ready, you can exit
the Standby state and then return the instance to service - You can put an
instance that is in the InService state into the Standby state, update some software
or troubleshoot the instance, and then return the instance to service. Instances that
are on standby are still part of the Auto Scaling group, but they do not actively
handle application traffic.
How Standby State Works:
 via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-
standby.html
Suspend the ReplaceUnhealthy process type for the Auto Scaling group
and apply the maintenance patch to the instance. Once the instance is
ready, you can manually set the instance's health status back to healthy
and activate the ReplaceUnhealthy process type again -
The ReplaceUnhealthy process terminates instances that are marked as unhealthy
and then creates new instances to replace them. When this process is suspended,
Amazon EC2 Auto Scaling stops replacing instances that are marked as unhealthy.
Instances that fail EC2 or Elastic Load Balancing health checks are still marked as
unhealthy. As soon as you resume the ReplaceUnhealthy process, Amazon EC2 Auto
Scaling replaces instances that were marked unhealthy while this process was suspended.
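Both approaches can be scripted with boto3 (Python) roughly as below; the group name and instance ID are placeholders for the sketch.

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG, INSTANCE_ID = "ecommerce-asg", "i-0123456789abcdef0"   # placeholders

# Option A: move the instance into Standby, patch it, then return it to service.
autoscaling.enter_standby(
    AutoScalingGroupName=ASG,
    InstanceIds=[INSTANCE_ID],
    ShouldDecrementDesiredCapacity=True,
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(AutoScalingGroupName=ASG, InstanceIds=[INSTANCE_ID])

# Option B: suspend only the ReplaceUnhealthy process while patching.
autoscaling.suspend_processes(AutoScalingGroupName=ASG, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=INSTANCE_ID, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=ASG, ScalingProcesses=["ReplaceUnhealthy"])
```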
Incorrect options:
Take a snapshot of the instance, create a new Amazon Machine Image
(AMI) and then launch a new instance using this AMI. Apply the
maintenance patch to this new instance and then add it back to the Auto
Scaling Group by using the manual scaling policy. Terminate the earlier
instance that had the maintenance issue - Taking the snapshot of the existing
instance to create a new AMI and then creating a new instance in order to apply the
maintenance patch is not time/resource optimal, hence this option is ruled out.
Delete the Auto Scaling group and apply the maintenance fix to the given
instance. Create a new Auto Scaling group and add all the instances again
using the manual scaling policy - It's not recommended to delete the Auto
Scaling group just to apply a maintenance patch on a specific instance.
Suspend the ScheduledActions process type for the Auto Scaling group
and apply the maintenance patch to the instance. Once the instance is
ready, you can manually set the instance's health status back to
healthy and activate the ScheduledActions process type again - Amazon
EC2 Auto Scaling does not execute scaling actions that are scheduled to run during
the suspension period. This option is not relevant to the given use-case.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-
processes.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/health-checks-
overview.html
Domain
Design Resilient Architectures
Question 14 - Correct
A gaming company is looking at improving the availability and performance of its
global flagship application which utilizes User Datagram Protocol and needs to
support fast regional failover in case an AWS Region goes down. The company
wants to continue using its own custom Domain Name System (DNS) service.
Which of the following AWS services represents the best solution for this use-case?
Amazon Route 53
AWS Elastic Load Balancing (ELB)
Your answer is correct
AWS Global Accelerator
Amazon CloudFront
Overall explanation
Correct option:
AWS Global Accelerator
AWS Global Accelerator utilizes the Amazon global network, allowing you to improve
the performance of your applications by lowering first-byte latency (the round trip
time for a packet to go from a client to your endpoint and back again) and jitter (the
variation of latency), and increasing throughput (the amount of data that can be
transferred in a given time) as compared to the public internet.
AWS Global Accelerator improves performance for a wide range of applications over
TCP or UDP by proxying packets at the edge to applications running in one or more
AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as
gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that
specifically require static IP addresses or deterministic, fast regional failover.
Incorrect options:
Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN)
service that securely delivers data, videos, applications, and APIs to customers
globally with low latency, high transfer speeds, all within a developer-friendly
environment.
AWS Global Accelerator and Amazon CloudFront are separate services that use the
AWS global network and its edge locations around the world. CloudFront improves
performance for both cacheable content (such as images and videos) and dynamic
content (such as API acceleration and dynamic site delivery), while Global
Accelerator improves performance for a wide range of applications over TCP or UDP.
AWS Elastic Load Balancing (ELB) - Both of the services, ELB and Global
Accelerator solve the challenge of routing user requests to healthy application
endpoints. AWS Global Accelerator relies on ELB to provide the traditional load
balancing features such as support for internal and non-AWS endpoints, pre-
warming, and Layer 7 routing. However, while ELB provides load balancing within
one Region, AWS Global Accelerator provides traffic management across multiple
Regions.
A regional ELB load balancer is an ideal target for AWS Global Accelerator. By using
a regional ELB load balancer, you can precisely distribute incoming application
traffic across backends, such as Amazon EC2 instances or Amazon ECS tasks, within
an AWS Region.
If you have workloads that cater to a global client base, AWS recommends that you
use AWS Global Accelerator. If you have workloads hosted in a single AWS Region
and used by clients in and around the same Region, you can use an Application
Load Balancer or Network Load Balancer to manage your resources.
Amazon Route 53 - Amazon Route 53 is a highly available and scalable cloud
Domain Name System (DNS) web service. It is designed to give developers and
businesses an extremely reliable and cost-effective way to route end users to
Internet applications by translating names like www.example.com into the numeric
IP addresses like 192.0.2.1 that computers use to connect to each other. Route 53 is
ruled out as the company wants to continue using its own custom DNS service.
Reference:
https://aws.amazon.com/global-accelerator/faqs/
Domain
Design High-Performing Architectures
Question 15 - Correct
A junior scientist working with the Deep Space Research Laboratory at NASA is
trying to upload a high-resolution image of a nebula into Amazon S3. The image size
is approximately 3 gigabytes. The junior scientist is using Amazon S3 Transfer
Acceleration (Amazon S3TA) for faster image upload. It turns out that Amazon S3TA
did not result in an accelerated transfer.
Given this scenario, which of the following is correct regarding the charges for this
image transfer?
The junior scientist only needs to pay Amazon S3 transfer charges for the
image upload
The junior scientist needs to pay both S3 transfer charges and S3TA
transfer charges for the image upload
The junior scientist only needs to pay S3TA transfer charges for the image
upload
Your answer is correct
The junior scientist does not need to pay any transfer charges for the
image upload
Overall explanation
Correct option:
The junior scientist does not need to pay any transfer charges for the
image upload
There are no S3 data transfer charges when data is transferred in from the internet.
Also with S3TA, you pay only for transfers that are accelerated. Therefore the junior
scientist does not need to pay any transfer charges for the image upload because
S3TA did not result in an accelerated transfer.
Amazon S3 Transfer Acceleration (S3TA) Overview:
via - https://aws.amazon.com/s3/transfer-acceleration/
Incorrect options:
The junior scientist only needs to pay S3TA transfer charges for the image
upload - Since S3TA did not result in an accelerated transfer, there are no S3TA
transfer charges to be paid.
The junior scientist only needs to pay Amazon S3 transfer charges for the
image upload - There are no S3 data transfer charges when data is transferred in
from the internet. So this option is incorrect.
The junior scientist needs to pay both S3 transfer charges and S3TA
transfer charges for the image upload - There are no Amazon S3 data transfer
charges when data is transferred in from the internet. Since S3TA did not result in
an accelerated transfer, there are no S3TA transfer charges to be paid.
References:
https://aws.amazon.com/s3/transfer-acceleration/
https://aws.amazon.com/s3/pricing/
Domain
Design High-Performing Architectures
Question 16 - Correct
A US-based healthcare startup is building an interactive diagnostic tool for COVID-
19 related assessments. The users would be required to capture their personal
health records via this tool. As this is sensitive health information, the backup of the
user data must be kept encrypted in Amazon Simple Storage Service (Amazon S3).
The startup does not want to provide its own encryption keys but still wants to
maintain an audit trail of when an encryption key was used and by whom.
Which of the following is the BEST solution for this use-case?
Your answer is correct
Use server-side encryption with AWS Key Management Service keys (SSE-
KMS) to encrypt the user data on Amazon S3
Use server-side encryption with customer-provided keys (SSE-C) to
encrypt the user data on Amazon S3
Use client-side encryption with client provided keys and then upload the
encrypted user data to Amazon S3
Use server-side encryption with Amazon S3 managed keys (SSE-S3) to
encrypt the user data on Amazon S3
Overall explanation
Correct option:
Use server-side encryption with AWS Key Management Service keys (SSE-
KMS) to encrypt the user data on Amazon S3
AWS Key Management Service (AWS KMS) is a service that combines secure, highly
available hardware and software to provide a key management system scaled for
the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can
specify a customer-managed CMK that you have already created. SSE-KMS provides
you with an audit trail that shows when your CMK was used and by whom. Therefore
SSE-KMS is the correct solution for this use-case.
Server Side Encryption in S3:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
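For illustration, an SSE-KMS upload could look roughly like the boto3 (Python) sketch below; the bucket name, object key, and KMS key alias are placeholder assumptions. Every use of the key is then recorded by AWS CloudTrail, which provides the required audit trail.

```python
import boto3

s3 = boto3.client("s3")

# Upload a backup object encrypted server-side with a customer-managed KMS key.
s3.put_object(
    Bucket="health-records-backup",
    Key="user-123/record.json",
    Body=b'{"assessment": "..."}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/health-records-key",
)
```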
Incorrect options:
Use server-side encryption with Amazon S3 managed keys (SSE-S3) to
encrypt the user data on Amazon S3 - When you use Server-Side Encryption
with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique
key. However this option does not provide the ability to audit trail the usage of the
encryption keys.
Use server-side encryption with customer-provided keys (SSE-C) to
encrypt the user data on Amazon S3 - With Server-Side Encryption with
Customer-Provided Keys (SSE-C), you manage the encryption keys and Amazon S3
manages the encryption, as it writes to disks, and decryption when you access your
objects. However this option does not provide the ability to audit trail the usage of
the encryption keys.
Use client-side encryption with client provided keys and then upload the
encrypted user data to Amazon S3 - Using client-side encryption is ruled out as
the startup does not want to provide the encryption keys.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Domain
Design Secure Architectures
Question 17 - Correct
A company runs a data processing workflow that takes about 60 minutes to
complete. The workflow can withstand disruptions and it can be started and stopped
multiple times.
Which is the most cost-effective way to build a solution for the workflow?
Use Amazon EC2 reserved instances to run the workflow processes
Your answer is correct
Use Amazon EC2 spot instances to run the workflow processes
Use AWS Lambda function to run the workflow processes
Use Amazon EC2 on-demand instances to run the workflow processes
Overall explanation
Correct option:
Use Amazon EC2 spot instances to run the workflow processes
Amazon EC2 instance types:
via - https://aws.amazon.com/ec2/pricing/
Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing
capacity for up to 90% off the On-Demand price.
Spot instances are recommended for:
Applications that have flexible start and end times
Applications that are feasible only at very low compute prices
Users with urgent computing needs for large amounts of additional capacity
For the given use case, spot instances offer the most cost-effective solution as the
workflow can withstand disruptions and can be started and stopped multiple times.
For example, considering a process that runs for an hour and needs about 1024 MB
of memory, spot instance pricing for a t2.micro instance (having 1024 MB of RAM) is
$0.0035 per hour.
Spot instance pricing:
via - https://aws.amazon.com/ec2/spot/pricing/
Contrast this with the pricing of a Lambda function (having 1024 MB of allocated
memory), which comes out to $0.0000000167 per 1ms or $0.06 per hour
($0.0000000167 * 1000 * 60 * 60 per hour).
AWS Lambda function pricing:
via - https://aws.amazon.com/lambda/pricing/
Thus, a spot instance turns out to be roughly 20 times more cost-effective than a
Lambda function for meeting the requirements of the given use case.
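The comparison can be reproduced with the prices quoted above (the exact ratio works out to roughly 17x, i.e. on the order of the "20 times" figure):

```python
# Back-of-the-envelope comparison using the prices quoted in the explanation.
spot_per_hour = 0.0035                      # t2.micro spot instance (~1024 MB RAM)
lambda_per_ms = 0.0000000167                # 1024 MB Lambda function, per millisecond
lambda_per_hour = lambda_per_ms * 1000 * 60 * 60

print(f"Spot:   ${spot_per_hour:.4f}/hour")
print(f"Lambda: ${lambda_per_hour:.4f}/hour")
print(f"Lambda costs ~{lambda_per_hour / spot_per_hour:.0f}x as much per hour")
```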
Incorrect options:
Use AWS Lambda function to run the workflow processes - As mentioned in
the explanation above, a Lambda function turns out to be 20 times more expensive
than a spot instance to meet the workflow requirements of the given use case, so
this option is incorrect. You should also note that the maximum execution time of a
Lambda function is 15 minutes, so the workflow process would be disrupted for
sure. On the other hand, it is certainly possible that the workflow process can be
completed in a single run on the spot instance (the average frequency of spot
instance interruption across all Regions and instance types is <10%).
Use Amazon EC2 on-demand instances to run the workflow processes
Use Amazon EC2 reserved instances to run the workflow processes
You should note that both on-demand and reserved instances are more expensive
than spot instances. In addition, reserved instances have a term of 1 year or 3
years, so they are not suited for the given workflow. Therefore, both these options
are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/ec2/spot/pricing/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/ec2/spot/instance-advisor/
Domain
Design Cost-Optimized Architectures
Question 18 - Correct
A healthcare startup needs to enforce compliance and regulatory guidelines for
objects stored in Amazon S3. One of the key requirements is to provide adequate
protection against accidental deletion of objects.
As a solutions architect, what are your recommendations to address these
guidelines? (Select two)
Your selection is correct
Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket
Create an event trigger on deleting any Amazon S3 object. The event
invokes an Amazon Simple Notification Service (Amazon SNS) notification
via email to the IT manager
Establish a process to get managerial approval for deleting Amazon S3
objects
Change the configuration on Amazon S3 console so that the user needs to
provide additional confirmation while deleting any Amazon S3 object
Your selection is correct
Enable versioning on the Amazon S3 bucket
Overall explanation
Correct options:
Enable versioning on the Amazon S3 bucket
Versioning is a means of keeping multiple variants of an object in the same bucket.
You can use versioning to preserve, retrieve, and restore every version of every
object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to
recover objects from accidental deletion or overwrite.
For example:
If you overwrite an object, it results in a new object version in the bucket. You can
always restore the previous version. If you delete an object, instead of removing it
permanently, Amazon S3 inserts a delete marker, which becomes the current object
version. You can always restore the previous version. Hence, this is the correct
option.
Versioning Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket
To provide additional protection, multi-factor authentication (MFA) delete can be
enabled. MFA delete requires secondary authentication to take place before objects
can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct
option.
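For illustration, a minimal boto3 sketch of both recommendations is shown below; the bucket name and the MFA device serial/token are hypothetical placeholders, and note that MFA delete can only be enabled by the bucket owner (root user) with an MFA token supplied on the request.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-compliance-bucket"  # hypothetical bucket name

# Enable versioning so a delete only adds a delete marker instead of
# permanently removing the object.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Enable MFA delete as well; this call must be made by the bucket owner
# (root user) and requires the MFA device serial number and current token.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # hypothetical serial + token
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)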
Incorrect options:
Create an event trigger on deleting any Amazon S3 object. The event
invokes an Amazon Simple Notification Service (Amazon SNS) notification
via email to the IT manager - Sending an event trigger after object deletion does
not meet the objective of preventing object deletion by mistake because the object
has already been deleted. So, this option is incorrect.
Establish a process to get managerial approval for deleting Amazon S3
objects - This option for getting managerial approval is just a distractor.
Change the configuration on Amazon S3 console so that the user needs to
provide additional confirmation while deleting any Amazon S3 object -
There is no provision to set up Amazon S3 configuration to ask for additional
confirmation before deleting an object. This option is incorrect.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
Domain
Design Resilient Architectures
Question 19Correct
Which of the following features of an Amazon S3 bucket can only be suspended and
not disabled once it has been enabled?
Requester Pays
Static Website Hosting
Server Access Logging
Your answer is correct
Versioning
Overall explanation
Correct option:
Versioning
Once you version-enable a bucket, it can never return to an unversioned state.
Versioning can only be suspended once it has been enabled.
Versioning Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
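As a small illustration (the bucket name is a hypothetical placeholder), suspending versioning uses the same boto3 call that enables it, just with a different status; there is no API call that returns the bucket to a never-versioned state.

import boto3

s3 = boto3.client("s3")

# A version-enabled bucket can only be suspended, not returned to an
# unversioned state. Existing object versions are retained.
s3.put_bucket_versioning(
    Bucket="example-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Suspended"},
)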
Incorrect options:
Server Access Logging
Static Website Hosting
Requester Pays
Server Access Logging, Static Website Hosting and Requester Pays features can be
disabled even after they have been enabled.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Domain
Design High-Performing Architectures
Question 20Correct
The flagship application for a gaming company connects to an Amazon Aurora
database and the entire technology stack is currently deployed in the United States.
Now, the company has plans to expand to Europe and Asia for its operations. It
needs the games table to be accessible globally but needs
the users and games_played tables to be regional only.
How would you implement this with minimal application refactoring?
Use an Amazon DynamoDB global table for the games table and use
Amazon Aurora for the users and games_played tables
Use an Amazon Aurora Global Database for the games table and use
Amazon DynamoDB tables for the users and games_played tables
Your answer is correct
Use an Amazon Aurora Global Database for the games table and use
Amazon Aurora for the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use
Amazon DynamoDB tables for the users and games_played tables
Overall explanation
Correct option:
Use an Amazon Aurora Global Database for the games table and use
Amazon Aurora for the users and games_played tables
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for
the cloud, that combines the performance and availability of traditional enterprise
databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system
that auto-scales up to 128TB per database instance. Aurora is not an in-memory
database.
Amazon Aurora Global Database is designed for globally distributed applications,
allowing a single Amazon Aurora database to span multiple AWS regions. It
replicates your data with no impact on database performance, enables fast local
reads with low latency in each region, and provides disaster recovery from region-
wide outages. Amazon Aurora Global Database is the correct choice for the given
use-case.
For the given use-case, we, therefore, need to have two Aurora clusters, one for the
global table (games table) and the other one for the local tables (users and
games_played tables).
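A rough boto3 sketch of this layout is shown below, assuming the existing regional Aurora cluster holding the games table is promoted into a global database; all identifiers are hypothetical placeholders, and several parameters a real deployment would need (engine version, subnet group, instances, etc.) are omitted. The regional-only users and games_played tables would simply live in ordinary per-region Aurora clusters.

import boto3

# Primary region: promote the existing 'games' cluster into a global database.
rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="games-global",  # hypothetical
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:games",  # hypothetical
)

# Secondary region: attach a read-only secondary cluster for low-latency local reads.
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="games-eu",          # hypothetical
    Engine="aurora-mysql",
    GlobalClusterIdentifier="games-global",
)

# The regional-only 'users' and 'games_played' tables stay in ordinary
# Aurora clusters created per region (not shown here).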
Incorrect options:
Use an Amazon Aurora Global Database for the games table and use
Amazon DynamoDB tables for the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use
Amazon Aurora for the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use
Amazon DynamoDB tables for the users and games_played tables
Here, we want minimal application refactoring. Amazon DynamoDB and Amazon
Aurora have completely different APIs, as Amazon Aurora is SQL-based and Amazon
DynamoDB is NoSQL. So all three options are incorrect, as they include Amazon
DynamoDB as one of the components.
Reference:
https://aws.amazon.com/rds/aurora/faqs/
Domain
Design Secure Architectures
Question 21Correct
A company uses Amazon DynamoDB as a data store for various kinds of customer
data, such as user profiles, user events, clicks, and visited links. Some of these use-
cases require a high request rate (millions of requests per second), low predictable
latency, and reliability. The company now wants to add a caching layer to support
high read volumes.
As a solutions architect, which of the following AWS services would you recommend
as a caching layer for this use-case? (Select two)
Amazon Redshift
Amazon OpenSearch Service
Amazon Relational Database Service (Amazon RDS)
Your selection is correct
Amazon DynamoDB Accelerator (DAX)
Your selection is correct
Amazon ElastiCache
Overall explanation
Correct options:
Amazon DynamoDB Accelerator (DAX)
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-
memory cache for DynamoDB that delivers up to a 10x performance improvement –
from milliseconds to microseconds – even at millions of requests per second. DAX
does all the heavy lifting required to add in-memory acceleration to your DynamoDB
tables, without requiring developers to manage cache invalidation, data population,
or cluster management. Therefore, this is a correct option.
DAX Overview:
via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/
DAX.concepts.html
Amazon ElastiCache
Amazon ElastiCache for Memcached is an ideal front-end for data stores like
Amazon RDS or Amazon DynamoDB, providing a high-performance middle tier for
applications with extremely high request rates and/or low latency requirements.
Therefore, this is also a correct option.
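To make the "caching layer" idea concrete, here is a minimal cache-aside sketch that fronts a DynamoDB read with ElastiCache for Redis (using the redis-py client); the table name, key schema, and Redis endpoint are hypothetical placeholders. Note that DAX would achieve the same effect without any of this application-side cache code.

import json
import boto3
import redis

dynamodb = boto3.client("dynamodb")
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

TABLE = "UserProfiles"  # hypothetical table


def get_profile(user_id: str) -> dict:
    """Cache-aside read: try Redis first, fall back to DynamoDB on a miss."""
    cached = cache.get(f"profile:{user_id}")
    if cached:
        return json.loads(cached)

    item = dynamodb.get_item(
        TableName=TABLE,
        Key={"user_id": {"S": user_id}},
    ).get("Item", {})

    # Populate the cache with a short TTL so hot reads are served in-memory.
    cache.setex(f"profile:{user_id}", 300, json.dumps(item))
    return item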
Incorrect options:
Amazon Relational Database Service (Amazon RDS) - Amazon Relational
Database Service (Amazon RDS) makes it easy to set up, operate, and scale a
relational database in the cloud. It provides cost-efficient and resizable capacity
while automating time-consuming administration tasks such as hardware
provisioning, database setup, patching, and backups. Amazon RDS cannot be used
as a caching layer for Amazon DynamoDB.
Amazon OpenSearch Service - Amazon OpenSearch Service is a managed
service that makes it easy for you to perform interactive log analytics, real-time
application monitoring, website search, and more. OpenSearch is an open source,
distributed search and analytics suite derived from Elasticsearch. It cannot be used
as a caching layer for Amazon DynamoDB.
Amazon Redshift - Amazon Redshift is a fully-managed petabyte-scale cloud-
based data warehouse product designed for large scale data set storage and
analysis. It cannot be used as a caching layer for Amazon DynamoDB.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/elasticache/faqs/
Domain
Design High-Performing Architectures
Question 22Correct
The engineering team at an in-home fitness company is evaluating multiple in-
memory data stores with the ability to power its on-demand, live leaderboard. The
company's leaderboard requires high availability, low latency, and real-time
processing to deliver customizable user data for the community of users working
out together virtually from the comfort of their home.
As a solutions architect, which of the following solutions would you recommend?
(Select two)
Your selection is correct
Power the on-demand, live leaderboard using Amazon DynamoDB with
DynamoDB Accelerator (DAX) as it meets the in-memory, high availability,
low latency requirements
Power the on-demand, live leaderboard using Amazon RDS for Aurora as it
meets the in-memory, high availability, low latency requirements
Your selection is correct
Power the on-demand, live leaderboard using Amazon ElastiCache for
Redis as it meets the in-memory, high availability, low latency
requirements
Power the on-demand, live leaderboard using Amazon Neptune as it meets
the in-memory, high availability, low latency requirements
Power the on-demand, live leaderboard using Amazon DynamoDB as it
meets the in-memory, high availability, low latency requirements
Overall explanation
Correct options:
Power the on-demand, live leaderboard using Amazon ElastiCache for
Redis as it meets the in-memory, high availability, low latency
requirements
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides
sub-millisecond latency to power internet-scale real-time applications. Amazon
ElastiCache for Redis is a great choice for real-time transactional and analytical
processing use cases such as caching, chat/messaging, gaming leaderboards,
geospatial, machine learning, media streaming, queues, real-time analytics, and
session store. ElastiCache for Redis can be used to power the live leaderboard, so
this option is correct.
Amazon ElastiCache for Redis Overview:
via - https://aws.amazon.com/elasticache/redis/
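As a small illustration of why Redis is a natural fit here, the sketch below uses a Redis sorted set (via the redis-py client) to record scores and read back the top entries; the endpoint, key name, and player IDs are hypothetical.

import redis

r = redis.Redis(host="leaderboard.abc123.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

# Record (or update) player scores in a sorted set keyed by score.
r.zadd("leaderboard:live", {"player:42": 1820, "player:7": 2410})

# Fetch the current top 10 players, highest score first.
top10 = r.zrevrange("leaderboard:live", 0, 9, withscores=True)
for rank, (player, score) in enumerate(top10, start=1):
    print(rank, player.decode(), int(score))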
Power the on-demand, live leaderboard using Amazon DynamoDB with
DynamoDB Accelerator (DAX) as it meets the in-memory, high availability,
low latency requirements
Amazon DynamoDB with DynamoDB Accelerator (DAX) also meets the requirements:
DAX is a fully managed, highly available, in-memory cache for DynamoDB that
delivers microsecond response times even at millions of requests per second.
Although you can integrate Redis with DynamoDB, it's much more involved than
using DAX, which is purpose-built for DynamoDB. So this option is also correct.
Incorrect options:
Power the on-demand, live leaderboard using Amazon RDS for Aurora as it
meets the in-memory, high availability, low latency requirements - Amazon
Aurora is a relational database, not an in-memory data store, so this option is
incorrect.
Power the on-demand, live leaderboard using Amazon Neptune as it meets
the in-memory, high availability, low latency requirements - Amazon Neptune
is a graph database service, not an in-memory data store, so this option is
incorrect.
Power the on-demand, live leaderboard using Amazon DynamoDB as it
meets the in-memory, high availability, low latency requirements - Amazon
DynamoDB on its own is not an in-memory data store; it needs DynamoDB
Accelerator (DAX) to provide in-memory performance, so this option is incorrect.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-
amazon-cloudfront-a-match-made-in-the-cloud/
https://aws.amazon.com/elasticache/redis/
Domain
Design Cost-Optimized Architectures
Question 24Correct
A social photo-sharing company uses Amazon Simple Storage Service (Amazon S3)
to store the images uploaded by the users. These images are kept encrypted in
Amazon S3 by using AWS Key Management Service (AWS KMS) and the company
manages its own AWS KMS keys for encryption. A member of the DevOps team
accidentally deleted the AWS KMS key a day ago, thereby rendering the user's
photo data unrecoverable. You have been contacted by the company to consult
them on possible solutions to this crisis.
As a solutions architect, which of the following steps would you recommend to solve
this issue?
Contact AWS support to retrieve the AWS KMS key from their backup
The AWS KMS key can be recovered by the AWS root account user
Your answer is correct
As the AWS KMS key was deleted a day ago, it must be in the 'pending
deletion' status and hence you can just cancel the KMS key deletion and
recover the key
The company should issue a notification on its web application informing
the users about the loss of their data
Overall explanation
Correct option:
As the AWS KMS key was deleted a day ago, it must be in the 'pending
deletion' status and hence you can just cancel the KMS key deletion and
recover the key
AWS Key Management Service (KMS) makes it easy for you to create and manage
cryptographic keys and control their use across a wide range of AWS services and in
your applications. AWS KMS is a secure and resilient service that uses hardware
security modules that have been validated under FIPS 140-2.
Deleting an AWS KMS key in AWS Key Management Service (AWS KMS) is
destructive and potentially dangerous. Therefore, AWS KMS enforces a waiting
period. To delete a KMS key in AWS KMS you schedule key deletion. You can set the
waiting period from a minimum of 7 days up to a maximum of 30 days. The default
waiting period is 30 days. During the waiting period, the KMS key status and key
state is Pending deletion. To recover the KMS key, you can cancel key deletion
before the waiting period ends. After the waiting period ends you cannot cancel key
deletion, and AWS KMS deletes the KMS key.
How Deleting AWS KMS keys Works:
via - https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
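A minimal boto3 sketch of the recovery step is shown below; the key ID is a hypothetical placeholder.

import boto3

kms = boto3.client("kms")
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical KMS key ID

# Confirm the key is still within its waiting period.
state = kms.describe_key(KeyId=KEY_ID)["KeyMetadata"]["KeyState"]
print("Key state:", state)  # expected: PendingDeletion

if state == "PendingDeletion":
    # Cancel the scheduled deletion; the key returns to the Disabled state
    # and can then be re-enabled.
    kms.cancel_key_deletion(KeyId=KEY_ID)
    kms.enable_key(KeyId=KEY_ID)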
Incorrect options:
Contact AWS support to retrieve the AWS KMS key from their backup
The AWS KMS key can be recovered by the AWS root account user
The AWS root account user cannot recover the AWS KMS key and the AWS support
does not have access to KMS keys via any backups. Both these options just serve as
distractors.
The company should issue a notification on its web application informing
the users about the loss of their data - This option is not required as the data
can be recovered via the cancel key deletion feature.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
Domain
Design Secure Architectures
Question 25Incorrect
A leading carmaker would like to build a new car-as-a-sensor service by leveraging
fully serverless components that are provisioned and managed automatically by
AWS. The development team at the carmaker does not want an option that requires
the capacity to be manually provisioned, as it does not want to respond manually to
changing volumes of sensor data.
Given these constraints, which of the following solutions is the BEST fit to develop
this car-as-a-sensor service?
Correct answer
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an AWS Lambda function in batches
and the data is written into an auto-scaled Amazon DynamoDB table for
downstream processing
Your answer is incorrect
Ingest the sensor data in Amazon Kinesis Data Firehose, which directly
writes the data into an auto-scaled Amazon DynamoDB table for
downstream processing
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an application running on an Amazon
EC2 instance and the data is written into an auto-scaled Amazon
DynamoDB table for downstream processing
Ingest the sensor data in Amazon Kinesis Data Streams, which is polled by
an application running on an Amazon EC2 instance and the data is written
into an auto-scaled Amazon DynamoDB table for downstream processing
Overall explanation
Correct option:
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an AWS Lambda function in batches
and the data is written into an auto-scaled Amazon DynamoDB table for
downstream processing
AWS Lambda lets you run code without provisioning or managing servers. You pay
only for the compute time you consume. Amazon Simple Queue Service (SQS) is a
fully managed message queuing service that enables you to decouple and scale
microservices, distributed systems, and serverless applications. SQS offers two
types of message queues. Standard queues offer maximum throughput, best-effort
ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee
that messages are processed exactly once, in the exact order that they are sent.
AWS manages all ongoing operations and underlying infrastructure needed to
provide a highly available and scalable message queuing service. With SQS, there is
no upfront cost, no need to acquire, install, and configure messaging software, and
no time-consuming build-out and maintenance of supporting infrastructure. SQS
queues are dynamically created and scale automatically so you can build and grow
applications quickly and efficiently.
As there is no need to manually provision the capacity, this is the correct option.
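To make the serverless pipeline concrete, here is a hedged sketch of the Lambda side: the function is wired to the queue with an event source mapping (so the Lambda service polls SQS and invokes the function with batches), and the handler writes each record to DynamoDB. The queue ARN, function name, table name, and payload fields are hypothetical placeholders, and the sketch assumes the sensor payload contains only strings and integers (the DynamoDB resource API does not accept floats).

import json
import boto3

# One-time wiring (e.g. from a deployment script): let Lambda poll the queue in batches.
boto3.client("lambda").create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:sensor-queue",  # hypothetical
    FunctionName="process-sensor-batch",                                # hypothetical
    BatchSize=10,
)

# Lambda handler: each invocation receives a batch of SQS messages.
table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table


def handler(event, context):
    for record in event["Records"]:
        reading = json.loads(record["body"])  # assumed string/integer fields only
        table.put_item(Item=reading)
    return {"processed": len(event["Records"])}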
Incorrect options:
Ingest the sensor data in Amazon Kinesis Data Firehose, which directly
writes the data into an auto-scaled Amazon DynamoDB table for
downstream processing - Amazon Kinesis Data Firehose is a fully managed
service for delivering real-time streaming data to destinations such as Amazon
Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch
Service, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by
supported third-party service providers, including Datadog, Dynatrace,
LogicMonitor, MongoDB, New Relic, and Sumo Logic.
Firehose cannot directly write into a DynamoDB table, so this option is incorrect.
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS)
standard queue, which is polled by an application running on an Amazon
EC2 instance and the data is written into an auto-scaled Amazon
DynamoDB table for downstream processing
Ingest the sensor data in Amazon Kinesis Data Streams, which is polled by
an application running on an Amazon EC2 instance and the data is written
into an auto-scaled Amazon DynamoDB table for downstream processing
Using an application on an Amazon EC2 instance is ruled out as the carmaker wants
to use fully serverless components. So both these options are incorrect.
References:
https://aws.amazon.com/sqs/
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
https://aws.amazon.com/kinesis/data-streams/faqs/
Domain
Design Resilient Architectures
Question 26Incorrect
A telecom company operates thousands of hardware devices like switches, routers,
cables, etc. The real-time status data for these devices must be fed into a
communications application for notifications. Simultaneously, another analytics
application needs to read the same real-time status data and analyze all the
connecting lines that may go down because of any device failures.
As an AWS Certified Solutions Architect – Associate, which of the following solutions
would you suggest, so that both the applications can consume the real-time status
data concurrently?
Your answer is incorrect
Amazon Simple Queue Service (SQS) with Amazon Simple Notification
Service (SNS)
Correct answer
Amazon Kinesis Data Streams
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service
(Amazon SES)
Amazon Simple Notification Service (SNS)
Overall explanation
Correct option:
Amazon Kinesis Data Streams
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It
provides ordering of records, as well as the ability to read and/or replay records in
the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client
Library (KCL) delivers all records for a given partition key to the same record
processor, making it easier to build multiple applications reading from the same
Amazon Kinesis data stream (for example, to perform counting, aggregation, and
filtering).
AWS recommends Amazon Kinesis Data Streams for use cases with requirements
that are similar to the following:
   1. Routing related records to the same record processor (as in streaming
      MapReduce). For example, counting and aggregation are simpler when all
      records for a given key are routed to the same record processor.
   2. Ordering of records. For example, you want to transfer log data from the
      application host to the processing/archival host while maintaining the order of
      log statements.
   3. Ability for multiple applications to consume the same stream concurrently. For
      example, you have one application that updates a real-time dashboard and
      another that archives data to Amazon Redshift. You want both applications to
      consume data from the same stream concurrently and independently.
   4. Ability to consume records in the same order a few hours later. For example,
      you have a billing application and an audit application that runs a few hours
      behind the billing application. Because Amazon Kinesis Data Streams stores
      data for up to 365 days, you can run the audit application up to 365 days
      behind the billing application.
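For illustration, the producer side might look like the sketch below: each device status update is written once to the stream with the device ID as the partition key, and both the notification application and the analytics application can then consume the same stream independently. The stream name and payload fields are hypothetical.

import json
import boto3

kinesis = boto3.client("kinesis")


def publish_device_status(device_id: str, status: dict) -> None:
    """Write one status record; multiple consumer applications can read it concurrently."""
    kinesis.put_record(
        StreamName="device-status-stream",  # hypothetical stream name
        Data=json.dumps({"device_id": device_id, **status}).encode("utf-8"),
        PartitionKey=device_id,  # keeps all records for a device in order on one shard
    )


publish_device_status("switch-0042", {"state": "DOWN", "port": 7})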
Incorrect options:
Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service
(SNS) is a highly available, durable, secure, fully managed pub/sub messaging
service that enables you to decouple microservices, distributed systems, and
serverless applications. Amazon SNS provides topics for high-throughput, push-
based, many-to-many messaging. Amazon SNS is a notification service and cannot
be used for real-time processing of data.
Amazon Simple Queue Service (SQS) with Amazon Simple Notification
Service (SNS) - Amazon Simple Queue Service (Amazon SQS) offers a reliable,
highly scalable hosted queue for storing messages as they travel between
computers. Amazon SQS lets you easily move data between distributed application
components and helps you build applications in which messages are processed
independently (with message-level ack/fail semantics), such as automated
workflows. Since multiple applications need to consume the same data stream
concurrently, Kinesis is a better choice when compared to the combination of SQS
with SNS.
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service
(Amazon SES) - As discussed above, Amazon Kinesis is a better option for this use
case in comparison to Amazon SQS. Also, Amazon SES does not fit this use-case.
Hence, this option is an incorrect answer.
Reference:
https://aws.amazon.com/kinesis/data-streams/faqs/
Domain
Design High-Performing Architectures
Question 27Correct
A new DevOps engineer has joined a large financial services company recently. As
part of his onboarding, the IT department is conducting a review of the checklist for
tasks related to AWS Identity and Access Management (AWS IAM).
As an AWS Certified Solutions Architect – Associate, which best practices would you
recommend (Select two)?
Your selection is correct
Configure AWS CloudTrail to log all AWS Identity and Access Management
(AWS IAM) actions
Create a minimum number of accounts and share these account
credentials among employees
Grant maximum privileges to avoid assigning privileges again
Use user credentials to provide access specific permissions for Amazon
EC2 instances
Your selection is correct
Enable AWS Multi-Factor Authentication (AWS MFA) for privileged users
Overall explanation
Correct options:
Enable AWS Multi-Factor Authentication (AWS MFA) for privileged users
As per the AWS best practices, it is better to enable Multi Factor Authentication
(MFA) for privileged users via an MFA-enabled mobile device or hardware MFA token.
Configure AWS CloudTrail to log all AWS Identity and Access Management
(AWS IAM) actions
AWS recommends turning on AWS CloudTrail to log all IAM actions for monitoring
and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account
credentials among employees - AWS recommends that user account credentials
should not be shared between users. So, this option is incorrect.
Grant maximum privileges to avoid assigning privileges again - AWS
recommends granting the least privileges required to complete a certain job and
avoid giving excessive privileges which can be misused. So, this option is incorrect.
Use user credentials to provide access specific permissions for Amazon
EC2 instances - It is highly recommended to use IAM roles to grant access permissions
to EC2 instances that work with other AWS services. So, this option is incorrect.
References:
https://aws.amazon.com/iam/
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://aws.amazon.com/cloudtrail/faqs/
Domain
Design Secure Architectures
Question 28Correct
The product team at a startup has figured out a market need to support both
stateful and stateless client-server communications via the application
programming interface (APIs) developed using its platform. You have been hired by
the startup as a solutions architect to build a solution to fulfill this market need
using Amazon API Gateway.
Which of the following would you identify as correct?
Amazon API Gateway creates RESTful APIs that enable stateful client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateful, full-
duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateless client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateless, full-
duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateful client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateless, full-
duplex communication between client and server
Your answer is correct
Amazon API Gateway creates RESTful APIs that enable stateless client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateful, full-
duplex communication between client and server
Overall explanation
Correct option:
Amazon API Gateway creates RESTful APIs that enable stateless client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateful, full-
duplex communication between client and server
Amazon API Gateway is a fully managed service that makes it easy for developers
to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the
front door for applications to access data, business logic, or functionality from your
backend services. Using API Gateway, you can create RESTful APIs and WebSocket
APIs that enable real-time two-way communication applications.
How Amazon API Gateway Works:
via - https://aws.amazon.com/api-gateway/
Amazon API Gateway creates RESTful APIs that:
Are HTTP-based.
Enable stateless client-server communication.
Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.
Amazon API Gateway creates WebSocket APIs that:
Adhere to the WebSocket protocol, which enables stateful, full-duplex communication
between client and server.
Route incoming messages based on message content.
So Amazon API Gateway supports stateless RESTful APIs as well as stateful
WebSocket APIs. Therefore this option is correct.
Incorrect options:
Amazon API Gateway creates RESTful APIs that enable stateful client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateful, full-
duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateless client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateless, full-
duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateful client-
server communication and Amazon API Gateway also creates WebSocket
APIs that adhere to the WebSocket protocol, which enables stateless, full-
duplex communication between client and server
These three options contradict the earlier details provided in the explanation. To
summarize, Amazon API Gateway supports stateless RESTful APIs and stateful
WebSocket APIs. Hence these options are incorrect.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
Domain
Design High-Performing Architectures
Question 29Correct
A gaming company is developing a mobile game that streams score updates to a
backend processor and then publishes results on a leaderboard. The company has
hired you as an AWS Certified Solutions Architect Associate to design a solution that
can handle major traffic spikes, process the mobile game updates in the order of
receipt, and store the processed updates in a highly available database. The
company wants to minimize the management overhead required to maintain the
solution.
Which of the following will you recommend to meet these requirements?
Push score updates to an Amazon Simple Queue Service (Amazon SQS)
queue which uses a fleet of Amazon EC2 instances (with Auto Scaling) to
process these updates in the Amazon SQS queue and then store these
processed updates in an Amazon RDS MySQL database
Your answer is correct
Push score updates to Amazon Kinesis Data Streams which uses an AWS
Lambda function to process these updates and then store these processed
updates in Amazon DynamoDB
Push score updates to Amazon Kinesis Data Streams which uses a fleet of
Amazon EC2 instances (with Auto Scaling) to process the updates in
Amazon Kinesis Data Streams and then store these processed updates in
Amazon DynamoDB
Push score updates to an Amazon Simple Notification Service (Amazon
SNS) topic, subscribe an AWS Lambda function to this Amazon SNS topic
to process the updates and then store these processed updates in a SQL
database running on Amazon EC2 instance
Overall explanation
Correct option:
Push score updates to Amazon Kinesis Data Streams which uses an AWS
Lambda function to process these updates and then store these processed
updates in Amazon DynamoDB
To help ingest real-time data or streaming data at large scales, you can use Amazon
Kinesis Data Streams (KDS). KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources. The data collected is available in
milliseconds, enabling real-time analytics. KDS provides ordering of records, as well
as the ability to read and/or replay records in the same order to multiple Amazon
Kinesis Applications.
AWS Lambda integrates natively with Kinesis Data Streams. The polling,
checkpointing, and error handling complexities are abstracted when you use this
native integration. The processed data can then be configured to be saved in
Amazon DynamoDB.
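A minimal sketch of the consumer Lambda is shown below; it assumes the function is already subscribed to the stream via an event source mapping, and the table name and payload fields are hypothetical. Kinesis delivers record payloads base64-encoded, so the handler decodes them before writing to DynamoDB.

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("Leaderboard")  # hypothetical table


def handler(event, context):
    # Records arrive in the order they were written to each shard.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "player_id": payload["player_id"],   # hypothetical fields
                "score": int(payload["score"]),
                "ts": payload["timestamp"],
            }
        )
    return {"processed": len(event["Records"])}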
Incorrect options:
Push score updates to an Amazon Simple Queue Service (Amazon SQS)
queue which uses a fleet of Amazon EC2 instances (with Auto Scaling) to
process these updates in the Amazon SQS queue and then store these
processed updates in an Amazon RDS MySQL database
Push score updates to Amazon Kinesis Data Streams which uses a fleet of
Amazon EC2 instances (with Auto Scaling) to process the updates in
Amazon Kinesis Data Streams and then store these processed updates in
Amazon DynamoDB
Push score updates to an Amazon Simple Notification Service (Amazon
SNS) topic, subscribe an AWS Lambda function to this Amazon SNS topic
to process the updates and then store these processed updates in a SQL
database running on Amazon EC2 instance
These three options use Amazon EC2 instances as part of the solution architecture.
The use-case seeks to minimize the management overhead required to maintain the
solution. However, Amazon EC2 instances involve several maintenance activities
such as managing the guest operating system and software deployed to the guest
operating system, including updates and security patches, etc. Hence these options
are incorrect.
Reference:
https://aws.amazon.com/blogs/big-data/best-practices-for-consuming-amazon-
kinesis-data-streams-using-aws-lambda/
Domain
Design High-Performing Architectures
Question 30Incorrect
As part of a pilot program, a biotechnology company wants to integrate data files
from its on-premises analytical application with AWS Cloud via an NFS interface.
Which of the following AWS service is the MOST efficient solution for the given use-
case?
AWS Storage Gateway - Tape Gateway
Correct answer
AWS Storage Gateway - File Gateway
Your answer is incorrect
AWS Storage Gateway - Volume Gateway
AWS Site-to-Site VPN
Overall explanation
Correct option:
AWS Storage Gateway - File Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises
access to virtually unlimited cloud storage. The service provides three different
types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that
seamlessly connect on-premises applications to cloud storage, caching data locally
for low-latency access.
AWS Storage Gateway's file interface, or file gateway, offers you a seamless way to
connect to the cloud in order to store application data files and backup images as
durable objects on Amazon S3 cloud storage. File gateway offers SMB or NFS-based
access to data in Amazon S3 with local caching. As the company wants to integrate
data files from its on-premises analytical application into AWS via an NFS interface,
AWS Storage Gateway - File Gateway is the correct answer.
File Gateway Overview:
 via
- https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayCo
ncepts.html
Incorrect options:
AWS Storage Gateway - Volume Gateway - You can configure the AWS Storage
Gateway service as a Volume Gateway to present cloud-based iSCSI block storage
volumes to your on-premises applications. Volume Gateway does not support NFS
interface, so this option is not correct.
AWS Storage Gateway - Tape Gateway - AWS Storage Gateway - Tape Gateway
allows moving tape backups to the cloud. Tape Gateway does not support NFS
interface, so this option is not correct.
AWS Site-to-Site VPN - AWS Site-to-Site VPN enables you to securely connect
your on-premises network or branch office site to your Amazon Virtual Private Cloud
(Amazon VPC). You can securely extend your data center or branch office network to
the cloud with an AWS Site-to-Site VPN (Site-to-Site VPN) connection. It uses
internet protocol security (IPSec) communications to create encrypted VPN tunnels
between two locations. You cannot use AWS Site-to-Site VPN to integrate data files
via the NFS interface, so this option is not correct.
References:
https://aws.amazon.com/storagegateway/
https://aws.amazon.com/storagegateway/volume/
https://aws.amazon.com/storagegateway/file/
https://aws.amazon.com/storagegateway/vtl/
Domain
Design Cost-Optimized Architectures
Question 31Incorrect
A geological research agency maintains the seismological data for the last 100
years. The data has a velocity of 1GB per minute. You would like to store the data
with only the most relevant attributes to build a predictive model for earthquakes.
What AWS services would you use to build the most cost-effective solution with the
LEAST amount of infrastructure maintenance?
Ingest the data in a Spark Streaming Cluster on Amazon EMR and use
Spark Streaming transformations before writing to Amazon S3
Your answer is incorrect
Ingest the data in Amazon Kinesis Data Analytics and use SQL queries to
filter and transform the data before writing to Amazon S3
Correct answer
Ingest the data in Amazon Kinesis Data Firehose and use an intermediary
AWS Lambda function to filter and transform the incoming stream before
the output is dumped on Amazon S3
Ingest the data in Amazon Kinesis Data Streams and use an intermediary
AWS Lambda function to filter and transform the incoming stream before
the output is dumped on Amazon S3
Overall explanation
Correct option:
Ingest the data in Amazon Kinesis Data Firehose and use an intermediary
AWS Lambda function to filter and transform the incoming stream before
the output is dumped on Amazon S3
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data
stores and analytics tools. It can capture, transform, and load streaming data into
Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Splunk, enabling
near real-time analytics with existing business intelligence tools and dashboards
you’re already using today. It is a fully managed service that automatically scales to
match the throughput of your data and requires no ongoing administration. It can
also batch, compress, and encrypt the data before loading it, minimizing the
amount of storage used at the destination and increasing security.
Amazon Kinesis Data Firehose Overview
via - https://aws.amazon.com/kinesis/data-firehose/
The correct option is to ingest the data in Amazon Kinesis Data Firehose and use an
AWS Lambda function to filter and transform the incoming data before the output is
dumped on Amazon S3. This way you only need to store a sliced version of the data
with just the relevant attributes required for your model. Also, note that this
solution is entirely serverless and requires no infrastructure maintenance.
Amazon Kinesis Data Firehose to Amazon S3:
via - https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
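A hedged sketch of the intermediary transformation Lambda is shown below, following the record format Firehose uses for data-transformation functions (base64-encoded input records, and a response that returns each recordId with a result of Ok, Dropped, or ProcessingFailed); the attribute names kept are hypothetical.

import base64
import json


def handler(event, context):
    output = []
    for record in event["records"]:
        reading = json.loads(base64.b64decode(record["data"]))

        # Keep only the attributes relevant to the earthquake model (hypothetical fields).
        slim = {
            "station_id": reading.get("station_id"),
            "timestamp": reading.get("timestamp"),
            "magnitude": reading.get("magnitude"),
        }

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode((json.dumps(slim) + "\n").encode("utf-8")).decode("utf-8"),
        })

    # Firehose buffers the transformed records and delivers them to Amazon S3.
    return {"records": output}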
Incorrect options:
Ingest the data in Amazon Kinesis Data Analytics and use SQL queries to
filter and transform the data before writing to Amazon S3 - Amazon Kinesis
Data Analytics is the easiest way to analyze streaming data in real-time. Kinesis
Data Analytics enables you to easily and quickly build queries and sophisticated
streaming applications in three simple steps: setup your streaming data sources,
write your queries or streaming applications, and set up your destination for
processed data. Kinesis Data Analytics cannot directly ingest data from the source
as it ingests data either from Kinesis Data Streams or Kinesis Data Firehose, so this
option is ruled out.
Ingest the data in Amazon Kinesis Data Streams and use an intermediary
AWS Lambda function to filter and transform the incoming stream before
the output is dumped on Amazon S3 - Amazon Kinesis Data Streams (KDS) is a
massively scalable, highly durable data ingestion and processing service optimized
for streaming data. Amazon Kinesis Data Streams is integrated with a number of
AWS services, including Amazon Kinesis Data Firehose for near real-time
transformation.
Kinesis Data Streams cannot directly write the output to Amazon S3. Unlike Amazon
Kinesis Data Firehose, KDS does not offer a ready-made integration via an
intermediary AWS Lambda function to reliably dump data into Amazon S3. You will
need to do a lot of custom coding to get the AWS Lambda function to process the
incoming stream and then store the transformed output to Amazon S3 with the
constraint that the buffer is maintained reliably and no transformed data is lost. So
this option is incorrect.
Ingest the data in a Spark Streaming Cluster on Amazon EMR and use
Spark Streaming transformations before writing to Amazon S3 - Amazon
EMR is the industry-leading cloud big data platform for processing vast amounts of
data using open source tools such as Apache Spark, Apache Hive, Apache HBase,
Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an open-source
framework, to distribute your data and processing across a resizable cluster of
Amazon EC2 instances. Using an EMR cluster would imply managing the underlying
infrastructure so it’s ruled out because the correct solution for the given use-case
should require the least amount of infrastructure maintenance.
Reference:
https://aws.amazon.com/kinesis/data-firehose/
Domain
Design Cost-Optimized Architectures
Question 32Incorrect
A media agency stores its re-creatable assets on Amazon Simple Storage Service
(Amazon S3) buckets. The assets are accessed by a large number of users for the
first few days and the frequency of access falls drastically after a week.
Although the assets would be accessed only occasionally after the first week, they
must continue to be immediately accessible when required. The cost of maintaining
all the assets on Amazon S3 storage is turning out to be very expensive and the
agency is looking at reducing costs as much as possible.
As an AWS Certified Solutions Architect – Associate, can you suggest a way to lower
the storage costs while fulfilling the business requirements?
Correct answer
Configure a lifecycle policy to transition the objects to Amazon S3 One
Zone-Infrequent Access (S3 One Zone-IA) after 30 days
Your answer is incorrect
Configure a lifecycle policy to transition the objects to Amazon S3 One
Zone-Infrequent Access (S3 One Zone-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3
Standard-Infrequent Access (S3 Standard-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days
Overall explanation
Correct option:
Configure a lifecycle policy to transition the objects to Amazon S3 One
Zone-Infrequent Access (S3 One Zone-IA) after 30 days
Amazon S3 One Zone-IA is for data that is accessed less frequently, but requires
rapid access when needed. Unlike other S3 Storage Classes which store data in a
minimum of three Availability Zones (AZs), Amazon S3 One Zone-IA stores data in a
single Availability Zone (AZ) and costs 20% less than Amazon S3 Standard-IA.
Amazon S3 One Zone-IA is ideal for customers who want a lower-cost option for
infrequently accessed and re-creatable data but do not require the availability and
resilience of Amazon S3 Standard or Amazon S3 Standard-IA. The minimum storage
duration is 30 days before you can transition objects from Amazon S3 Standard to
Amazon S3 One Zone-IA.
Amazon S3 One Zone-IA offers the same high durability, high throughput, and low
latency of Amazon S3 Standard, with a low per GB storage price and per GB
retrieval fee. S3 Storage Classes can be configured at the object level, and a single
bucket can contain objects stored across Amazon S3 Standard, Amazon S3
Intelligent-Tiering, Amazon S3 Standard-IA, and Amazon S3 One Zone-IA. You can
also use S3 Lifecycle policies to automatically transition objects between storage
classes without any application changes.
Constraints for Lifecycle storage class transitions:
 via - https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-
general-considerations.html
Supported Amazon S3 lifecycle transitions:
 via - https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-
general-considerations.html
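For illustration, a minimal boto3 lifecycle rule implementing the correct option might look like the sketch below; the bucket name and prefix are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-recreatable-assets-to-onezone-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "assets/"},  # hypothetical prefix
                # 30 days is the minimum age before Standard objects can be
                # transitioned to One Zone-IA.
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)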
Incorrect options:
Configure a lifecycle policy to transition the objects to Amazon S3
Standard-Infrequent Access (S3 Standard-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3 One
Zone-Infrequent Access (S3 One Zone-IA) after 7 days
As mentioned earlier, the minimum storage duration is 30 days before you can
transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA or Amazon
S3 Standard-IA, so both these options are added as distractors.
Configure a lifecycle policy to transition the objects to Amazon S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days - Amazon S3
Standard-IA is for data that is accessed less frequently, but requires rapid access
when needed. S3 Standard-IA offers the high durability, high throughput, and low
latency of Amazon S3 Standard, with a low per GB storage price and per GB
retrieval fee. This combination of low cost and high performance makes Amazon S3
Standard-IA ideal for long-term storage, backups, and as a data store for disaster
recovery files. But it costs more than Amazon S3 One Zone-IA because of the
redundant storage across Availability Zones (AZs). As the data is re-creatable,
you don't need to incur this additional cost.
References:
https://aws.amazon.com/s3/storage-classes/
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-
considerations.html
Domain
Design Cost-Optimized Architectures
Question 33Correct
A development team requires permissions to list an Amazon S3 bucket and delete
objects from that bucket. A systems administrator has created the following IAM
policy to provide access to the bucket and applied that policy to the group. The
group is not able to delete objects in the bucket. The company follows the principle
of least privilege.
    "Version": "2021-10-17",
    "Statement": [
         {
             "Action": [
                  "s3:ListBucket",
                  "s3:DeleteObject"
             ],
             "Resource": [
                  "arn:aws:s3:::example-bucket"
             ],
             "Effect": "Allow"
         }
    ]
Which statement should a solutions architect add to the policy to address this issue?
Your answer is correct
{
    "Action": [
         "s3:DeleteObject"
    ],
    "Resource": [
         "arn:aws:s3:::example-bucket/*"
    ],
    "Effect": "Allow"
}
{
    "Action": [
         "s3:DeleteObject"
    ],
    "Resource": [
         "arn:aws:s3:::example-bucket*"
    ],
    "Effect": "Allow"
}
{
    "Action": [
         "s3:*Object"
    ],
    "Resource": [
         "arn:aws:s3:::example-bucket/*"
    ],
    "Effect": "Allow"
}
{
    "Action": [
         "s3:*"
    ],
    "Resource": [
         "arn:aws:s3:::example-bucket/*"
    ],
     "Effect": "Allow"
}
Overall explanation
Correct option:
{
     "Action": [
          "s3:DeleteObject"
     ],
     "Resource": [
          "arn:aws:s3:::example-bucket/*"
     ],
     "Effect": "Allow"
}
The main elements of a policy statement are:
      1. Effect: Specifies whether the statement will Allow or Deny an action (Allow is
         the effect defined here).
      2. Action: Describes a specific action or actions that will either be allowed or
         denied to run based on the Effect entered. API actions are unique to each
         service (DeleteObject is the action defined here).
      3. Resource: Specifies the resources—for example, an Amazon S3 bucket or
         objects—that the policy applies to in Amazon Resource Name (ARN) format
         ( example-bucket/* is the resource defined here).
This policy provides the necessary delete permissions on the resources of the
Amazon S3 bucket to the group.
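To illustrate how the full policy (bucket-level ListBucket plus object-level DeleteObject) could be applied to the group, here is a hedged boto3 sketch; the group and policy names are hypothetical.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {  # Bucket-level permission: listing acts on the bucket ARN itself.
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket"],
        },
        {  # Object-level permission: deletes act on objects, hence the /*.
            "Effect": "Allow",
            "Action": ["s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        },
    ],
}

iam.put_group_policy(
    GroupName="dev-team",                # hypothetical group
    PolicyName="example-bucket-access",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)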
Incorrect options:
{
     "Action": [
          "s3:*Object"
     ],
     "Resource": [
          "arn:aws:s3:::example-bucket/*"
     ],
     "Effect": "Allow"
}
- This policy grants broader permissions than required: the s3:*Object wildcard also
matches other object-level actions such as s3:GetObject and s3:PutObject in addition
to s3:DeleteObject, which violates the principle of least privilege.
{
     "Action": [
          "s3:*"
     ],
     "Resource": [
          "arn:aws:s3:::example-bucket/*"
     ],
     "Effect": "Allow"
}
- This policy is incorrect since it allows all actions on the bucket's objects, which
violates the principle of least privilege required by the given use case.
{
     "Action": [
          "s3:DeleteObject"
     ],
     "Resource": [
          "arn:aws:s3:::example-bucket*"
     ],
     "Effect": "Allow"
}
- This is incorrect, as the resource ARN is too broad without a /* after the bucket
name: the trailing wildcard also matches any other bucket whose name begins with
example-bucket, which grants more access than required.
Reference:
https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-
policies/
Domain
Design Secure Architectures
Question 34Incorrect
The IT department at a consulting firm is conducting a training workshop for new
developers. As part of an evaluation exercise on Amazon S3, the new developers
were asked to identify the invalid storage class lifecycle transitions for objects
stored on Amazon S3.
Can you spot the INVALID lifecycle transitions from the options below? (Select two)
Your selection is incorrect
Amazon S3 Standard-IA => Amazon S3 Intelligent-Tiering
Your selection is correct
Amazon S3 One Zone-IA => Amazon S3 Standard-IA
Amazon S3 Standard => Amazon S3 Intelligent-Tiering
Correct selection
Amazon S3 Intelligent-Tiering => Amazon S3 Standard
Amazon S3 Standard-IA => Amazon S3 One Zone-IA
Overall explanation
Correct options:
As the question wants to know about the INVALID lifecycle transitions, the following
options are the correct answers -
Amazon S3 Intelligent-Tiering => Amazon S3 Standard
Amazon S3 One Zone-IA => Amazon S3 Standard-IA
Following are the unsupported lifecycle transitions for S3 storage classes:
   1. Any storage class to the Amazon S3 Standard storage class.
   2. Any storage class to the Reduced Redundancy storage class.
   3. The Amazon S3 Intelligent-Tiering storage class to the Amazon S3 Standard-IA
      storage class.
   4. The Amazon S3 One Zone-IA storage class to the Amazon S3 Standard-IA or
      Amazon S3 Intelligent-Tiering storage classes.
Incorrect options:
Amazon S3 Standard => Amazon S3 Intelligent-Tiering
Amazon S3 Standard-IA => Amazon S3 Intelligent-Tiering
Amazon S3 Standard-IA => Amazon S3 One Zone-IA
Here are the supported lifecycle transitions for S3 storage classes:
   1. The S3 Standard storage class to any other storage class.
   2. Any storage class to the S3 Glacier or S3 Glacier Deep Archive storage classes.
   3. The S3 Standard-IA storage class to the S3 Intelligent-Tiering or S3 One Zone-IA
      storage classes.
   4. The S3 Intelligent-Tiering storage class to the S3 One Zone-IA storage class.
   5. The S3 Glacier storage class to the S3 Glacier Deep Archive storage class.
Amazon S3 supports a waterfall model for transitioning between storage classes, as
shown in the diagram below:
 via - https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-
general-considerations.html
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-
general-considerations.html
Domain
Design Cost-Optimized Architectures
Question 35Correct
A financial services company recently launched an initiative to improve the security
of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS
accounts owned by the company. Upon analysis, the company has found that the
costs incurred are much higher than expected.
Which of the following would you attribute as the underlying reason for the
unexpectedly high costs for AWS Shield Advanced service?
Savings Plans has not been enabled for the AWS Shield Advanced service
across all the AWS accounts
Your answer is correct
Consolidated billing has not been enabled. All the AWS accounts should
fall under a single consolidated billing for the monthly fee to be charged
only once
AWS Shield Advanced is being used for custom servers, that are not part
of AWS Cloud, thereby resulting in increased costs
AWS Shield Advanced also covers AWS Shield Standard plan, thereby
resulting in increased costs
Overall explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should
fall under a single consolidated billing for the monthly fee to be charged
only once
If your organization has multiple AWS accounts, then you can subscribe multiple
AWS Accounts to AWS Shield Advanced by individually enabling it on each account
using the AWS Management Console or API. You will pay the monthly fee once as
long as the AWS accounts are all under a single consolidated billing, and you own all
the AWS accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers, that are not part
of AWS Cloud, thereby resulting in increased costs - AWS Shield Advanced
does offer protection to resources outside of AWS, so this should not cause an
unexpected spike in billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby
resulting in increased costs - AWS Shield Standard is automatically enabled for
all AWS customers at no additional cost. AWS Shield Advanced is an optional paid
service.
Savings Plans has not been enabled for the AWS Shield Advanced service
across all the AWS accounts - This option has been added as a distractor.
Savings Plans is a flexible pricing model that offers low prices on Amazon EC2
instances, AWS Lambda, and AWS Fargate usage, in exchange for a commitment to
a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings
Plans is not applicable for the AWS Shield Advanced service.
References:
https://aws.amazon.com/shield/faqs/
https://aws.amazon.com/savingsplans/faq/
Domain
Design Cost-Optimized Architectures
Question 36Correct
A software engineering intern at an e-commerce company is documenting the
process flow to provision Amazon EC2 instances via the Amazon EC2 API. These
instances are to be used for an internal application that processes Human
Resources payroll data. He wants to highlight those volume types that cannot be
used as a boot volume.
Can you help the intern by identifying those storage volume types that CANNOT be
used as boot volumes while creating the instances? (Select two)
Your selection is correct
Cold Hard disk drive (sc1)
Provisioned IOPS Solid state drive (io1)
Your selection is correct
Throughput Optimized Hard disk drive (st1)
General Purpose Solid State Drive (gp2)
Instance Store
Overall explanation
Correct options:
Throughput Optimized Hard disk drive (st1)
Cold Hard disk drive (sc1)
The Amazon EBS volume types fall into two categories:
Solid state drive (SSD) backed volumes optimized for transactional workloads
involving frequent read/write operations with small I/O size, where the dominant
performance attribute is IOPS.
Hard disk drive (HDD) backed volumes optimized for large streaming workloads
where throughput (measured in MiB/s) is a better performance measure than IOPS.
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be
used as a boot volume, so these two options are correct.
Please see this detailed overview of the volume types for Amazon EBS volumes.
 via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-
types.html
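As an illustration of the constraint, the hedged sketch below launches an instance with a General Purpose SSD root (boot) volume and attaches a Throughput Optimized HDD as a secondary data volume; the AMI ID and device names are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            # Root/boot volume must be SSD-backed (e.g. gp2/gp3/io1) or instance store.
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeType": "gp2", "VolumeSize": 30},
        },
        {
            # st1 (and sc1) volumes can only be attached as additional data volumes.
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeType": "st1", "VolumeSize": 500},
        },
    ],
)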
Incorrect options:
General Purpose Solid State Drive (gp2)
Provisioned IOPS Solid state drive (io1)
Instance Store
General Purpose SSD (gp2), Provisioned IOPS SSD (io1), and Instance Store can be
used as a boot volume.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
Domain
Design High-Performing Architectures
Question 37Incorrect
A financial services company uses Amazon GuardDuty for analyzing its AWS account
metadata to meet the compliance guidelines. However, the company has now
decided to stop using Amazon GuardDuty service. All the existing findings have to
be deleted and cannot persist anywhere on AWS Cloud.
Which of the following techniques will help the company meet this requirement?
Raise a service request with Amazon to completely delete the data from
all their backups
Your answer is incorrect
De-register the service under services tab
Suspend the service in the general settings
Correct answer
Disable the service in the general settings
Overall explanation
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor
and protect your AWS accounts, workloads, and data stored in Amazon S3.
GuardDuty analyzes continuous streams of meta-data generated from your account
and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and
DNS Logs. It also uses integrated threat intelligence such as known malicious IP
addresses, anomaly detection, and machine learning to identify threats more
accurately.
Disable the service in the general settings
Disabling the service will delete all remaining data, including your findings and
configurations before relinquishing the service permissions and resetting the
service. So, this is the correct option for our use case.
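A minimal boto3 sketch contrasting the two settings is shown below; in the GuardDuty API, suspending corresponds to updating the detector with Enable=False, while disabling (which deletes findings and configuration) corresponds to deleting the detector.

import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Suspend: stops analysis but KEEPS existing findings and configuration.
guardduty.update_detector(DetectorId=detector_id, Enable=False)

# Disable: deletes the detector along with all remaining findings and
# configuration, which is what the use case requires.
guardduty.delete_detector(DetectorId=detector_id)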
Incorrect options:
Suspend the service in the general settings - You can stop Amazon GuardDuty
from analyzing your data sources at any time by choosing to suspend the service in
the general settings. This will immediately stop the service from analyzing data, but
does not delete your existing findings or configurations.
De-register the service under services tab - This is a made-up option, used
only as a distractor.
Raise a service request with Amazon to completely delete the data from
all their backups - There is no need to create a service request as you can delete
the existing findings by disabling the service.
Reference:
https://aws.amazon.com/guardduty/faqs/
Domain
Design Secure Architectures
Question 38Correct
A leading video streaming service delivers billions of hours of content from Amazon
Simple Storage Service (Amazon S3) to customers around the world. Amazon S3
also serves as the data lake for its big data analytics solution. The data lake has a
staging zone where intermediary query results are kept only for 24 hours. These
results are also heavily referenced by other parts of the analytics pipeline.
Which of the following is the MOST cost-effective strategy for storing this
intermediary query data?
Store the intermediary query results in Amazon S3 Glacier Instant
Retrieval storage class
Store the intermediary query results in Amazon S3 One Zone-Infrequent
Access storage class
Store the intermediary query results in Amazon S3 Standard-Infrequent
Access storage class
Your answer is correct
Store the intermediary query results in Amazon S3 Standard storage class
Overall explanation
Correct option:
Store the intermediary query results in Amazon S3 Standard storage class
Amazon S3 Standard offers high durability, availability, and performance object
storage for frequently accessed data. Because it delivers low latency and high
throughput, S3 Standard is appropriate for a wide variety of use cases, including
cloud applications, dynamic websites, content distribution, mobile and gaming
applications, and big data analytics. As there is no minimum storage duration
charge and no retrieval fee (remember that intermediary query results are heavily
referenced by other parts of the analytics pipeline), this is the MOST cost-effective
storage class amongst the given options.
Incorrect options:
Store the intermediary query results in Amazon S3 Glacier Instant
Retrieval storage class - Amazon S3 Glacier Instant Retrieval delivers the fastest
access to archive storage, with the same throughput and milliseconds access as the
S3 Standard and S3 Standard-IA storage classes. S3 Glacier Instant Retrieval is ideal
for archive data that needs immediate access, such as medical images, news media
assets, or user-generated content archives.
The minimum storage duration charge is 90 days, so this option is NOT cost-
effective because intermediary query results need to be kept only for 24 hours.
Hence this option is not correct.
Store the intermediary query results in Amazon S3 Standard-Infrequent
Access storage class - Amazon S3 Standard-IA is for data that is accessed less
frequently but requires rapid access when needed. S3 Standard-IA offers the high
durability, high throughput, and low latency of S3 Standard, with a low per GB
storage price and per GB retrieval fee. This combination of low cost and high
performance makes S3 Standard-IA ideal for long-term storage, backups, and as a
data store for disaster recovery files. The minimum storage duration charge is 30
days, so this option is NOT cost-effective because intermediary query results need
to be kept only for 24 hours. Hence this option is not correct.
Store the intermediary query results in Amazon S3 One Zone-Infrequent
Access storage class - Amazon S3 One Zone-IA is for data that is accessed less
frequently but requires rapid access when needed. Unlike other S3 Storage Classes
which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA
stores data in a single AZ and costs 20% less than S3 Standard-IA. The minimum
storage duration charge is 30 days, so this option is NOT cost-effective because
intermediary query results need to be kept only for 24 hours. Hence this option is
not correct.
To summarize again, S3 Standard-IA and S3 One Zone-IA have a minimum storage
duration charge of 30 days (so instead of 24 hours, you end up paying for 30 days).
S3 Standard-IA and S3 One Zone-IA also have retrieval charges (as the results are
heavily referenced by other parts of the analytics pipeline, so the retrieval costs
would be pretty high). Therefore, these storage classes are not cost optimal for the
given use-case.
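The following is an illustrative back-of-the-envelope comparison (the prices, data size, and read count are assumptions, not figures from the question, and actual AWS pricing varies by Region). It shows why the 30-day minimum storage duration and retrieval fees make the IA classes more expensive for 24-hour data.

```python
# Illustrative cost comparison for 24-hour intermediary data (assumed prices).
GB_STORED = 1000                      # assumed size of intermediary results
S3_STANDARD_PER_GB_MONTH = 0.023
S3_STANDARD_IA_PER_GB_MONTH = 0.0125
IA_RETRIEVAL_PER_GB = 0.01
DAYS_IN_MONTH = 30

# S3 Standard: pay only for the ~1 day the data exists, no retrieval fee.
standard_cost = GB_STORED * S3_STANDARD_PER_GB_MONTH * (1 / DAYS_IN_MONTH)

# S3 Standard-IA: billed for the full 30-day minimum plus per-GB retrieval
# fees every time the pipeline reads the results (assume 5 reads here).
ia_cost = (GB_STORED * S3_STANDARD_IA_PER_GB_MONTH
           + GB_STORED * IA_RETRIEVAL_PER_GB * 5)

print(f"S3 Standard:    ${standard_cost:.2f}")   # well under a dollar
print(f"S3 Standard-IA: ${ia_cost:.2f}")          # tens of dollars
```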
Reference:
https://aws.amazon.com/s3/storage-classes/
Domain
Design Cost-Optimized Architectures
Question 39Correct
A new DevOps engineer has just joined a development team and wants to
understand the replication capabilities for Amazon RDS Multi-AZ deployment as well
as Amazon RDS Read-replicas.
Which of the following correctly summarizes these capabilities for the given
database?
Multi-AZ follows asynchronous replication and spans one Availability Zone
(AZ) within a single region. Read replicas follow synchronous replication
and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
asynchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
Your answer is correct
Multi-AZ follows synchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
asynchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
synchronous replication and can be within an Availability Zone (AZ), Cross-
AZ, or Cross-Region
Overall explanation
Correct option:
Multi-AZ follows synchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
asynchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for
RDS database (DB) instances, making them a natural fit for production database
workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically
creates a primary DB Instance and synchronously replicates the data to a standby
instance in a different Availability Zone (AZ). Multi-AZ spans at least two Availability
Zones (AZs) within a single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS
database (DB) instances. They make it easy to elastically scale out beyond the
capacity constraints of a single DB instance for read-heavy database workloads. For
the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines,
Amazon RDS creates a second DB instance using a snapshot of the source DB
instance. It then uses the engines' native asynchronous replication to update the
read replica whenever there is a change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can
be within an Availability Zone (AZ), Cross-AZ, or Cross-Region.
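As a minimal boto3 sketch (instance identifiers, engine, class, and credentials below are placeholders, not from the question), this shows where each capability is configured: MultiAZ on the primary instance enables the synchronous standby, while a read replica is created separately and replicates asynchronously.

```python
# Sketch: Multi-AZ primary (synchronous standby) plus an async read replica.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Primary DB instance with a synchronous standby in another AZ.
rds.create_db_instance(
    DBInstanceIdentifier="orders-primary",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
)

# Asynchronous read replica; it can be in the same AZ, another AZ, or another Region.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-replica-1",
    SourceDBInstanceIdentifier="orders-primary",
)
```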
Exam Alert:
Please review this comparison vis-a-vis Multi-AZ vs Read Replica for Amazon RDS:
via - https://aws.amazon.com/rds/features/multi-az/
Incorrect Options:
Multi-AZ follows asynchronous replication and spans one Availability Zone
(AZ) within a single region. Read replicas follow synchronous replication
and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
synchronous replication and can be within an Availability Zone (AZ), Cross-
AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two
Availability Zones (AZs) within a single region. Read replicas follow
asynchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
These three options contradict the earlier details provided in the explanation. To
summarize, Multi-AZ deployment follows synchronous replication for Amazon RDS.
Hence these options are incorrect.
References:
https://aws.amazon.com/rds/features/multi-az/
https://aws.amazon.com/rds/features/read-replicas/
Domain
Design Resilient Architectures
Question 40Correct
One of the biggest football leagues in Europe has granted the distribution rights for
live streaming its matches in the USA to a silicon valley based streaming services
company. As per the terms of distribution, the company must make sure that only
users from the USA are able to live stream the matches on their platform. Users
from other countries in the world must be denied access to these live-streamed
matches.
Which of the following options would allow the company to enforce these streaming
restrictions? (Select two)
Use Amazon Route 53 based failover routing policy to restrict distribution
of content to only the locations in which you have distribution rights
Your selection is correct
Use georestriction to prevent users in specific geographic locations from
accessing content that you're distributing through an Amazon CloudFront
web distribution
Your selection is correct
Use Amazon Route 53 based geolocation routing policy to restrict
distribution of content to only the locations in which you have distribution
rights
Use Amazon Route 53 based weighted routing policy to restrict
distribution of content to only the locations in which you have distribution
rights
Use Amazon Route 53 based latency-based routing policy to restrict
distribution of content to only the locations in which you have distribution
rights
Overall explanation
Correct options:
Use Amazon Route 53 based geolocation routing policy to restrict
distribution of content to only the locations in which you have distribution
rights
Geolocation routing lets you choose the resources that serve your traffic based on
the geographic location of your users, meaning the location that DNS queries
originate from. For example, you might want all queries from Europe to be routed to
an ELB load balancer in the Frankfurt region. You can also use geolocation routing to
restrict the distribution of content to only the locations in which you have
distribution rights.
Use georestriction to prevent users in specific geographic locations from
accessing content that you're distributing through an Amazon CloudFront
web distribution
You can use georestriction, also known as geo-blocking, to prevent users in specific
geographic locations from accessing content that you're distributing through an
Amazon CloudFront web distribution. When a user requests your content, Amazon
CloudFront typically serves the requested content regardless of where the user is
located. If you need to prevent users in specific countries from accessing your
content, you can use the CloudFront geo restriction feature to do one of the
following: Allow your users to access your content only if they're in one of the
countries on a whitelist of approved countries. Prevent your users from accessing
your content if they're in one of the countries on a blacklist of banned countries. So
this option is also correct.
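For illustration, here is a minimal boto3 sketch of a Route 53 geolocation record that only answers DNS queries originating from the US. The hosted zone ID, domain name, and IP address are placeholders, not values from the use case.

```python
# Sketch: geolocation routing so only US viewers resolve the streaming endpoint.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "live.example.com",
                "Type": "A",
                "SetIdentifier": "us-viewers",
                "GeoLocation": {"CountryCode": "US"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```

In practice you would typically also add a default record (GeoLocation CountryCode "*") so queries from non-matching locations receive a deliberate response, such as a "not available in your country" page.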
Amazon Route 53 Routing Policy Overview:
 via - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-
policy.html
Incorrect options:
Use Amazon Route 53 based latency-based routing policy to restrict
distribution of content to only the locations in which you have distribution
rights - Use latency-based routing when you have resources in multiple AWS
Regions and you want to route traffic to the region that provides the lowest latency.
To use latency-based routing, you create latency records for your resources in
multiple AWS Regions. When Amazon Route 53 receives a DNS query for your
domain or subdomain (example.com or acme.example.com), it determines which
AWS Regions you've created latency records for, determines which region gives the
user the lowest latency, and then selects a latency record for that region. Route 53
responds with the value from the selected record, such as the IP address for a web
server.
Use Amazon Route 53 based weighted routing policy to restrict
distribution of content to only the locations in which you have distribution
rights - Weighted routing lets you associate multiple resources with a single
domain name (example.com) or subdomain name (acme.example.com) and choose
how much traffic is routed to each resource. This can be useful for a variety of
purposes, including load balancing and testing new versions of the software.
Use Amazon Route 53 based failover routing policy to restrict distribution
of content to only the locations in which you have distribution rights -
Failover routing lets you route traffic to a resource when the resource is healthy or
to a different resource when the first resource is unhealthy. The primary and
secondary records can route traffic to anything from an Amazon S3 bucket that is
configured as a website to a complex tree of records.
Weighted routing or failover routing or latency routing cannot be used to restrict the
distribution of content to only the locations in which you have distribution rights. So
all three options above are incorrect.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-
policy.html#routing-policy-geo
Domain
Design Secure Architectures
Question 41Incorrect
A large financial institution operates an on-premises data center with hundreds of
petabytes of data managed on Microsoft’s Distributed File System (DFS). The CTO
wants the organization to transition into a hybrid cloud environment and run data-
intensive analytics workloads that support DFS.
Which of the following AWS services can facilitate the migration of these workloads?
Correct answer
Amazon FSx for Windows File Server
Your answer is incorrect
Amazon FSx for Lustre
AWS Directory Service for Microsoft Active Directory (AWS Managed
Microsoft AD)
Microsoft SQL Server on AWS
Overall explanation
Correct option:
Amazon FSx for Windows File Server
Amazon FSx for Windows File Server provides fully managed, highly reliable file
storage that is accessible over the industry-standard Server Message Block (SMB)
protocol. It is built on Windows Server, delivering a wide range of administrative
features such as user quotas, end-user file restore, and Microsoft Active Directory
(AD) integration. Amazon FSx supports the use of Microsoft’s Distributed File System
(DFS) to organize shares into a single folder structure up to hundreds of PB in size.
So this option is correct.
How Amazon FSx for Windows File Server Works:
via - https://aws.amazon.com/fsx/windows/
Incorrect options:
Amazon FSx for Lustre
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the
world’s most popular high-performance file system. It is used for workloads such as
machine learning, high-performance computing (HPC), video processing, and
financial modeling. Amazon FSx enables you to use Lustre file systems for any
workload where storage speed matters. FSx for Lustre does not support Microsoft’s
Distributed File System (DFS), so this option is incorrect.
AWS Directory Service for Microsoft Active Directory (AWS Managed
Microsoft AD)
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed
Microsoft AD, enables your directory-aware workloads and AWS resources to use
managed Active Directory in the AWS Cloud. AWS Managed Microsoft AD is built on
the actual Microsoft Active Directory and does not require you to synchronize or
replicate data from your existing Active Directory to the cloud. AWS Managed
Microsoft AD does not support Microsoft’s Distributed File System (DFS), so this
option is incorrect.
Microsoft SQL Server on AWS
Microsoft SQL Server on AWS offers you the flexibility to run Microsoft SQL Server
database on AWS Cloud. Microsoft SQL Server on AWS does not support Microsoft’s
Distributed File System (DFS), so this option is incorrect.
Reference:
https://aws.amazon.com/fsx/windows/
Domain
Design High-Performing Architectures
Question 42Correct
A retail company's dynamic website is hosted using on-premises servers in its data
center in the United States. The company is launching its website in Asia, and it
wants to optimize the website loading times for new users in Asia. The website's
backend must remain in the United States. The website is being launched in a few
days, and an immediate solution is needed.
What would you recommend?
Migrate the website to Amazon S3. Use S3 cross-region replication (S3
CRR) between AWS Regions in the US and Asia
Leverage an Amazon Route 53 geo-proximity routing policy pointing to on-
premises servers
Your answer is correct
Use Amazon CloudFront with a custom origin pointing to the on-premises
servers
Use Amazon CloudFront with a custom origin pointing to the DNS record of
the website on Amazon Route 53
Overall explanation
Correct option:
Use Amazon CloudFront with a custom origin pointing to the on-premises
servers
Amazon CloudFront is a web service that gives businesses and web application
developers an easy and cost-effective way to distribute content with low latency
and high data transfer speeds. Amazon CloudFront uses standard cache control
headers you set on your files to identify static and dynamic content. You can use
different origins for different types of content on a single site – e.g. Amazon S3 for
static objects, Amazon EC2 for dynamic content, and custom origins for third-party
content.
Amazon CloudFront:
via - https://aws.amazon.com/cloudfront/
An origin server stores the original, definitive version of your objects. If you're
serving content over HTTP, your origin server is either an Amazon S3 bucket or an
HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic
Compute Cloud (Amazon EC2) instance or on a server that you manage; these
servers are also known as custom origins.
 via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/
Introduction.html
Amazon CloudFront employs a global network of edge locations and regional edge
caches that cache copies of your content close to your viewers. Amazon CloudFront
ensures that end-user requests are served by the closest edge location. As a result,
viewer requests travel a short distance, improving performance for your viewers.
Therefore for the given use case, the users in Asia will enjoy a low latency
experience while using the website even though the on-premises servers continue
to be in the US.
Incorrect options:
Use Amazon CloudFront with a custom origin pointing to the DNS record of
the website on Amazon Route 53 - This option has been added as a distractor.
CloudFront cannot have a custom origin pointing to the DNS record of the website
on Route 53.
Migrate the website to Amazon S3. Use S3 cross-region replication (S3
CRR) between AWS Regions in the US and Asia - The use case states that the
company operates a dynamic website. You can use Amazon S3 to host a static
website. On a static website, individual web pages include static content. They
might also contain client-side scripts. By contrast, a dynamic website relies on
server-side processing, including server-side scripts, such as PHP, JSP, or ASP.NET.
Amazon S3 does not support server-side scripting, but AWS has other resources for
hosting dynamic websites. So this option is incorrect.
Leverage an Amazon Route 53 geo-proximity routing policy pointing to on-
premises servers - Since the on-premises servers continue to be in the US, so
even using a Route 53 geo-proximity routing policy that directs the users in Asia to
the on-premises servers in the US would not reduce the latency for the users in
Asia. So this option is incorrect.
References:
https://aws.amazon.com/cloudfront/
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/
Introduction.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Domain
Design High-Performing Architectures
Question 43Incorrect
A technology blogger wants to write a review on the comparative pricing for various
storage types available on AWS Cloud. The blogger has created a test file of size 1
gigabyte with some random data. Next, he copies this test file into the Amazon S3
Standard storage class, provisions an Amazon EBS volume (General Purpose SSD
(gp2)) with 100 gigabytes of provisioned storage and copies the test file into the
Amazon EBS volume, and lastly copies the test file into an Amazon EFS Standard
Storage filesystem. At the end of the month, he analyses the bill for costs incurred
on the respective storage types for the test file.
What is the correct order of the storage charges incurred for the test file on these
three storage types?
Cost of test file storage on Amazon EFS < Cost of test file storage on
Amazon S3 Standard < Cost of test file storage on Amazon EBS
Correct answer
Cost of test file storage on Amazon S3 Standard < Cost of test file storage
on Amazon EFS < Cost of test file storage on Amazon EBS
Cost of test file storage on Amazon EBS < Cost of test file storage on
Amazon S3 Standard < Cost of test file storage on Amazon EFS
Your answer is incorrect
Cost of test file storage on Amazon S3 Standard < Cost of test file storage
on Amazon EBS < Cost of test file storage on Amazon EFS
Overall explanation
Correct option:
Cost of test file storage on Amazon S3 Standard < Cost of test file storage
on Amazon EFS < Cost of test file storage on Amazon EBS
Amazon EFS bills only for the storage you actually use. With Amazon EFS Standard
Storage priced at $0.30 per GB-month, the cost of storing the 1 GB test file on
Amazon EFS is $0.30 for the month.
Amazon EBS General Purpose SSD (gp2) volumes are charged at $0.10 per GB-month
of provisioned storage. For the 100 GB of provisioned storage in this use case, the
monthly cost on Amazon EBS is $0.10 * 100 = $10, irrespective of how much of the
volume the test file actually consumes.
For S3 Standard storage, the pricing is $0.023 per GB per month. Therefore, the
monthly storage cost on S3 for the test file is $0.023.
Therefore this is the correct option.
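The arithmetic above can be reproduced in a few lines (the per-GB prices are the illustrative figures used in this explanation and may differ from current Regional pricing):

```python
# Monthly storage cost of the 1 GB test file on each service (assumed list prices).
FILE_SIZE_GB = 1
EBS_GP2_PROVISIONED_GB = 100

s3_standard = 0.023 * FILE_SIZE_GB          # billed on actual object size
efs_standard = 0.30 * FILE_SIZE_GB          # billed on actual storage used
ebs_gp2 = 0.10 * EBS_GP2_PROVISIONED_GB     # billed on provisioned size, not usage

print(f"S3 Standard: ${s3_standard:.3f}  EFS: ${efs_standard:.2f}  EBS gp2: ${ebs_gp2:.2f}")
# S3 Standard ($0.023) < EFS ($0.30) < EBS gp2 ($10.00)
```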
Incorrect options:
Cost of test file storage on Amazon S3 Standard < Cost of test file storage
on Amazon EBS < Cost of test file storage on Amazon EFS
Cost of test file storage on Amazon EFS < Cost of test file storage on
Amazon S3 Standard < Cost of test file storage on Amazon EBS
Cost of test file storage on Amazon EBS < Cost of test file storage on
Amazon S3 Standard < Cost of test file storage on Amazon EFS
Following the computations shown earlier in the explanation, these three options
are incorrect.
References:
https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/efs/pricing/
Domain
Design Cost-Optimized Architectures
Question 44Correct
An organization wants to delegate access to a set of users from the development
environment so that they can access some resources in the production environment
which is managed under another AWS account.
As a solutions architect, which of the following steps would you recommend?
Your answer is correct
Create a new IAM role with the required permissions to access the
resources in the production environment. The users can then assume this
IAM role while accessing the resources from the production environment
Both IAM roles and IAM users can be used interchangeably for cross-
account access
It is not possible to access cross-account resources
Create new IAM user credentials for the production environment and share
these credentials with the set of users from the development environment
Overall explanation
Correct option:
Create a new IAM role with the required permissions to access the
resources in the production environment. The users can then assume this
IAM role while accessing the resources from the production environment
IAM roles allow you to delegate access to users or services that normally don't have
access to your organization's AWS resources. IAM users or AWS services can assume
a role to obtain temporary security credentials that can be used to make AWS API
calls. Consequently, you don't have to share long-term credentials for access to a
resource. Using IAM roles, it is possible to access cross-account resources.
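As a minimal sketch (the account ID, role name, and session name below are placeholders), a user in the development account assumes the production-account role and uses the temporary credentials it returns:

```python
# Sketch: cross-account access by assuming an IAM role with AWS STS.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ProdReadOnlyAccess",
    RoleSessionName="dev-user-session",
)["Credentials"]

# Temporary credentials scoped to the role's permissions in the production account.
prod_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(prod_s3.list_buckets()["Buckets"])
```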
Incorrect options:
Create new IAM user credentials for the production environment and share
these credentials with the set of users from the development
environment - There is no need to create new IAM user credentials for the
production environment, as you can use IAM roles to access cross-account
resources.
It is not possible to access cross-account resources - You can use IAM roles to
access cross-account resources.
Both IAM roles and IAM users can be used interchangeably for cross-
account access - IAM roles and IAM users are separate IAM entities and should not
be mixed. Only IAM roles can be used to access cross-account resources.
Reference:
https://aws.amazon.com/iam/features/manage-roles/
Domain
Design Secure Architectures
Question 45Incorrect
A data analytics company measures what the consumers watch and what
advertising they’re exposed to. This real-time data is ingested into its on-premises
data center and subsequently, the daily data feed is compressed into a single file
and uploaded on Amazon S3 for backup. The typical compressed file size is around 2
gigabytes.
Which of the following is the fastest way to upload the daily compressed file into
Amazon S3?
Upload the compressed file in a single operation
Correct answer
Upload the compressed file using multipart upload with Amazon S3
Transfer Acceleration (Amazon S3TA)
FTP the compressed file into an Amazon EC2 instance that runs in the
same region as the Amazon S3 bucket. Then transfer the file from the
Amazon EC2 instance into the Amazon S3 bucket
Your answer is incorrect
Upload the compressed file using multipart upload
Overall explanation
Correct option:
Upload the compressed file using multipart upload with Amazon S3
Transfer Acceleration (Amazon S3TA)
Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure
transfers of files over long distances between your client and an S3 bucket. Transfer
Acceleration takes advantage of Amazon CloudFront’s globally distributed edge
locations. As the data arrives at an edge location, data is routed to Amazon S3 over
an optimized network path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a
contiguous portion of the object's data. You can upload these object parts
independently and in any order. If transmission of any part fails, you can retransmit
that part without affecting other parts. After all parts of your object are uploaded,
Amazon S3 assembles these parts and creates the object. If you're uploading large
objects over a stable high-bandwidth network, use multipart uploading to maximize
the use of your available bandwidth by uploading object parts in parallel for multi-
threaded performance. If you're uploading over a spotty network, use multipart
uploading to increase resiliency to network errors by avoiding upload restarts.
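For illustration, here is a minimal boto3 sketch that combines both ideas: the client is pointed at the Transfer Acceleration endpoint, and the upload is split into parts above a threshold. The bucket name and file path are placeholders, and Transfer Acceleration is assumed to already be enabled on the bucket.

```python
# Sketch: multipart upload over the S3 Transfer Acceleration endpoint.
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100 MB
    multipart_chunksize=64 * 1024 * 1024,    # 64 MB parts uploaded in parallel
    max_concurrency=8,
)

s3.upload_file("daily-feed.gz", "example-backup-bucket", "feeds/daily-feed.gz",
               Config=transfer_config)
```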
Incorrect options:
Upload the compressed file in a single operation - In general, when your
object size reaches 100 megabytes, you should consider using multipart uploads
instead of uploading the object in a single operation. Multipart upload provides
improved throughput - you can upload parts in parallel to improve throughput.
Therefore, this option is not correct.
Upload the compressed file using multipart upload - Although using multipart
upload would certainly speed up the process, combining with Amazon S3 Transfer
Acceleration (Amazon S3TA) would further improve the transfer speed. Therefore
just using multipart upload is not the correct option.
FTP the compressed file into an Amazon EC2 instance that runs in the
same region as the Amazon S3 bucket. Then transfer the file from the
Amazon EC2 instance into the Amazon S3 bucket - This is a roundabout
process of getting the file into Amazon S3 and has been added as a distractor. Although it is
technically feasible to follow this process, it would involve a lot of scripting and
certainly would not be the fastest way to get the file into Amazon S3.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Domain
Design High-Performing Architectures
Question 46Correct
An ivy-league university is assisting NASA to find potential landing sites for
exploration vehicles of unmanned missions to our neighboring planets. The
university uses High Performance Computing (HPC) driven application architecture
to identify these landing sites.
Which of the following Amazon EC2 instance topologies should this application be
deployed on?
The Amazon EC2 instances should be deployed in an Auto Scaling group so
that the application meets high availability requirements
Your answer is correct
The Amazon EC2 instances should be deployed in a cluster placement
group so that the underlying workload can benefit from low network
latency and high network throughput
The Amazon EC2 instances should be deployed in a spread placement
group so that there are no correlated failures
The Amazon EC2 instances should be deployed in a partition placement
group so that distributed workloads can be handled effectively
Overall explanation
Correct option:
The Amazon EC2 instances should be deployed in a cluster placement
group so that the underlying workload can benefit from low network
latency and high network throughput
The key thing to understand in this question is that HPC workloads need to achieve
low-latency network performance necessary for tightly-coupled node-to-node
communication that is typical of HPC applications. Cluster placement groups pack
instances close together inside an Availability Zone. These are recommended for
applications that benefit from low network latency, high network throughput, or
both. Therefore this option is the correct answer.
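As an illustrative boto3 sketch (the AMI ID, instance type, group name, and instance count are placeholders chosen for the example), the cluster placement group is created first and the HPC nodes are then launched into it:

```python
# Sketch: launch tightly-coupled HPC nodes into a cluster placement group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",          # instance type chosen for illustration
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```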
Cluster Placement Group:
 via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-
groups.html
Incorrect options:
The Amazon EC2 instances should be deployed in a partition placement
group so that distributed workloads can be handled effectively - A partition
placement group spreads your instances across logical partitions such that groups
of instances in one partition do not share the underlying hardware with groups of
instances in different partitions. This strategy is typically used by large distributed
and replicated workloads, such as Hadoop, Cassandra, and Kafka. A partition
placement group can have a maximum of seven partitions per Availability Zone.
Since a partition placement group can have partitions in multiple Availability Zones
in the same Region, the instances will not have low-latency network performance.
Hence a partition placement group is not the right fit for HPC applications.
Partition Placement Group:
 via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-
groups.html
The Amazon EC2 instances should be deployed in a spread placement
group so that there are no correlated failures - A spread placement group is a
group of instances that are each placed on distinct racks, with each rack having its
own network and power source. The instances are placed across distinct underlying
hardware to reduce correlated failures. You can have a maximum of seven running
instances per Availability Zone per group. Since a spread placement group can span
multiple Availability Zones in the same Region, the instances will not have
low-latency network performance. Hence a spread placement group is not the right fit
for HPC applications.
Spread Placement Group:
 via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-
groups.html
The Amazon EC2 instances should be deployed in an Auto Scaling group so
that the application meets high availability requirements - An Auto Scaling group
contains a collection of Amazon EC2 instances that are treated as a logical grouping
for the purposes of automatic scaling. You do not use Auto Scaling groups per se to
meet HPC requirements.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Domain
Design High-Performing Architectures
Question 47Correct
A file-hosting service uses Amazon Simple Storage Service (Amazon S3) under the
hood to power its storage offerings. Currently all the customer files are uploaded
directly under a single Amazon S3 bucket. The engineering team has started seeing
scalability issues where customer file uploads have started failing during the peak
access hours with more than 5000 requests per second.
Which of the following is the MOST resource efficient and cost-optimal way of
addressing this issue?
Change the application architecture to create a new Amazon S3 bucket for
each customer and then upload each customer's files directly under the
respective buckets
Change the application architecture to use Amazon Elastic File System
(Amazon EFS) instead of Amazon S3 for storing the customers' uploaded
files
Your answer is correct
Change the application architecture to create customer-specific custom
prefixes within the single Amazon S3 bucket and then upload the daily
files into those prefixed locations
Change the application architecture to create a new Amazon S3 bucket for
each day's data and then upload the daily files directly under that day's
bucket
Overall explanation
Correct option:
Change the application architecture to create customer-specific custom
prefixes within the single Amazon S3 bucket and then upload the daily
files into those prefixed locations
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Your
applications can easily achieve thousands of transactions per second in request
performance when uploading and retrieving storage from Amazon S3. Amazon S3
automatically scales to high request rates. For example, your application can
achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per
second per prefix in a bucket.
There are no limits to the number of prefixes in a bucket. You can increase your read
or write performance by parallelizing requests across prefixes. For example, if you create 10 prefixes in
an Amazon S3 bucket to parallelize reads, you could scale your read performance to
55,000 read requests per second. Please see this example for more clarity on
prefixes: if you have a file f1 stored in an S3 object path like
so s3://your_bucket_name/folder1/sub_folder_1/f1,
then /folder1/sub_folder_1/ becomes the prefix for file f1.
Some data lake applications on Amazon S3 scan millions or billions of objects for
queries that run over petabytes of data. These data lake applications achieve
single-instance transfer rates that maximize the network interface used for their
Amazon EC2 instance, which can be up to 100 Gb/s on a single instance. These
applications then aggregate throughput across multiple instances to get multiple
terabits per second. Therefore creating customer-specific custom prefixes within the
single bucket and then uploading the daily files into those prefixed locations is the
BEST solution for the given constraints.
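For illustration, a small sketch of this layout (the bucket name and customer IDs are placeholders): each customer/date combination becomes its own prefix, so the per-prefix request limits apply independently to each customer's uploads.

```python
# Sketch: write uploads under customer-specific prefixes in a single bucket.
import boto3

s3 = boto3.client("s3")
bucket = "example-file-hosting-bucket"

def upload_customer_file(customer_id: str, date: str, filename: str, body: bytes):
    # e.g. customer-42/2024-06-01/report.pdf -> prefix "customer-42/2024-06-01/"
    key = f"customer-{customer_id}/{date}/{filename}"
    s3.put_object(Bucket=bucket, Key=key, Body=body)

upload_customer_file("42", "2024-06-01", "report.pdf", b"...")
```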
Optimizing Amazon S3 Performance:
 via - https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-
performance.html
Incorrect options:
Change the application architecture to create a new Amazon S3 bucket for
each customer and then upload each customer's files directly under the
respective buckets - Creating a new Amazon S3 bucket for each new customer is
an inefficient way of handling resource availability (S3 buckets need to be globally
unique) as some customers may use the service sparingly but the bucket name is
locked for them forever. Moreover, this is really not required as we can use S3
prefixes to improve the performance.
Change the application architecture to create a new Amazon S3 bucket for
each day's data and then upload the daily files directly under that day's
bucket - Creating a new Amazon S3 bucket for each new day's data is also an
inefficient way of handling resource availability (S3 buckets need to be globally
unique) as some of the bucket names may not be available for daily data
processing. Moreover, this is really not required as we can use S3 prefixes to
improve the performance.
Change the application architecture to use Amazon Elastic File System
(Amazon EFS) instead of Amazon S3 for storing the customers' uploaded
files - Amazon EFS is a costlier storage option compared to Amazon S3, so it is
ruled out.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
Domain
Design High-Performing Architectures
Question 48Incorrect
A company is in the process of migrating its on-premises SMB file shares to AWS so
the company can get out of the business of managing multiple file servers across
dozens of offices. The company has 200 terabytes of data in its file servers. The
existing on-premises applications and native Windows workloads should continue to
have low latency access to this data which needs to be stored on a file system
service without any disruptions after the migration. The company also wants any
new applications deployed on AWS to have access to this migrated data.
Which of the following is the best solution to meet this requirement?
Use Amazon FSx File Gateway to provide low-latency, on-premises access
to fully managed file shares in Amazon EFS. The applications deployed on
AWS can access this data directly from Amazon EFS
Correct answer
Use Amazon FSx File Gateway to provide low-latency, on-premises access
to fully managed file shares in Amazon FSx for Windows File Server. The
applications deployed on AWS can access this data directly from Amazon
FSx in AWS
Your answer is incorrect
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-
premises access to fully managed file shares in Amazon S3. The
applications deployed on AWS can access this data directly from Amazon
S3
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-
premises access to fully managed file shares in Amazon FSx for Windows
File Server. The applications deployed on AWS can access this data
directly from Amazon FSx in AWS
Overall explanation
Correct option:
Use Amazon FSx File Gateway to provide low-latency, on-premises access
to fully managed file shares in Amazon FSx for Windows File Server. The
applications deployed on AWS can access this data directly from Amazon
FSx in AWS
For user or team file shares, and file-based application migrations, Amazon FSx File
Gateway provides low-latency, on-premises access to fully managed file shares in
Amazon FSx for Windows File Server. For applications deployed on AWS, you may
access your file shares directly from Amazon FSx in AWS.
For your native Windows workloads and users, or your SMB clients, Amazon FSx for
Windows File Server provides all of the benefits of a native Windows SMB
environment that is fully managed and secured and scaled like any other AWS
service. You get detailed reporting, replication, backup, failover, and support for
native Windows tools like DFS and Active Directory.
Amazon FSx File Gateway:
via - https://aws.amazon.com/storagegateway/file/
Incorrect options:
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-
premises access to fully managed file shares in Amazon FSx for Windows
File Server. The applications deployed on AWS can access this data
directly from Amazon FSx in AWS - When you need to access S3 using a file
system protocol, you should use File Gateway. You get a local cache in the gateway
that provides high throughput and low latency over SMB.
AWS Storage Gateway’s File Gateway does not support file shares in Amazon FSx for
Windows File Server, so this option is incorrect.
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-
premises access to fully managed file shares in Amazon S3. The
applications deployed on AWS can access this data directly from Amazon
S3 - When you need to access S3 using a file system protocol, you should use File
Gateway. You get a local cache in the gateway that provides high throughput and
low latency over SMB.
The given use case requires low latency access to data which needs to be stored on
a file system service after migration. Since S3 is an object storage service, so this
option is incorrect.
Use Amazon FSx File Gateway to provide low-latency, on-premises access
to fully managed file shares in Amazon EFS. The applications deployed on
AWS can access this data directly from Amazon EFS - Amazon FSx File
Gateway provides access to fully managed file shares in Amazon FSx for Windows
File Server and it does not support EFS. You should also note that EFS uses the
Network File System version 4 (NFS v4) protocol and it does not support SMB
protocol. Therefore this option is incorrect for the given use case.
References:
https://aws.amazon.com/storagegateway/file/fsx/
https://aws.amazon.com/storagegateway/faqs/
https://aws.amazon.com/blogs/storage/aws-reinvent-recap-choosing-storage-for-on-
premises-file-based-workloads/
Domain
Design High-Performing Architectures
Question 49Incorrect
A video analytics organization has been acquired by a leading media company. The
analytics organization has 10 independent applications with an on-premises data
footprint of about 70 Terabytes for each application. The CTO of the media company
has set a timeline of two weeks to carry out the data migration from on-premises
data center to AWS Cloud and establish connectivity.
Which of the following are the MOST cost-effective options for completing the data
transfer and establishing connectivity? (Select two)
Correct selection
Setup AWS Site-to-Site VPN to establish on-going connectivity between
the on-premises data center and AWS Cloud
Order 1 AWS Snowmobile to complete the one-time data transfer
Your selection is correct
Order 10 AWS Snowball Edge Storage Optimized devices to complete the
one-time data transfer
Setup AWS Direct Connect to establish connectivity between the on-
premises data center and AWS Cloud
Order 70 AWS Snowball Edge Storage Optimized devices to complete the
one-time data transfer
Overall explanation
Correct options:
Order 10 AWS Snowball Edge Storage Optimized devices to complete the
one-time data transfer
AWS Snowball Edge Storage Optimized is the optimal choice if you need to securely
and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up
to 80 Terabytes of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up
to 40 Gb/s of network connectivity to address large scale data transfer and pre-
processing use cases.
As each Snowball Edge Storage Optimized device can handle 80 Terabytes of data,
you can order 10 such devices to take care of the data transfer for all applications.
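A quick back-of-the-envelope check of the device count, using the capacities stated above (each application's 70 TB fits on one 80 TB device):

```python
# Sketch: number of Snowball Edge Storage Optimized devices needed.
import math

apps = 10
data_per_app_tb = 70
usable_per_device_tb = 80

devices = sum(math.ceil(data_per_app_tb / usable_per_device_tb) for _ in range(apps))
print(devices)  # 10
```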
Exam Alert:
The original Snowball devices were transitioned out of service and Snowball Edge
Storage Optimized are now the primary devices used for data transfer. You may see
the Snowball device on the exam, just remember that the original Snowball device
had 80 Terabytes of storage space.
Setup AWS Site-to-Site VPN to establish on-going connectivity between
the on-premises data center and AWS Cloud
AWS Site-to-Site VPN enables you to securely connect your on-premises network or
branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can
securely extend your data center or branch office network to the cloud with an AWS
Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish
encrypted network connectivity between your intranet and Amazon VPC over the
Internet. VPN Connections can be configured in minutes and are a good solution if
you have an immediate need, have low to modest bandwidth requirements, and can
tolerate the inherent variability in Internet-based connectivity.
Therefore this option is the right fit for the given use-case as the connectivity can
be easily established within the given timeframe.
Incorrect options:
Order 1 AWS Snowmobile to complete the one-time data transfer - Each
AWS Snowmobile has a total capacity of up to 100 petabytes. To migrate large
datasets of 10 petabytes or more in a single location, you should use AWS
Snowmobile. For datasets less than 10 petabytes or distributed in multiple locations,
you should use Snowball. So AWS Snowmobile is not the right fit for this use-case.
Setup AWS Direct Connect to establish connectivity between the on-
premises data center and AWS Cloud - AWS Direct Connect lets you establish a
dedicated network connection between your network and one of the AWS Direct
Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection
can be partitioned into multiple virtual interfaces. AWS Direct Connect does not
involve the Internet; instead, it uses dedicated, private network connections
between your intranet and Amazon VPC. Direct Connect involves significant
monetary investment and takes at least a month to set up, therefore it's not the
correct fit for this use-case.
Order 70 AWS Snowball Edge Storage Optimized devices to complete the
one-time data transfer - As the data-transfer can be completed with just 10 AWS
Snowball Edge Storage Optimized devices, there is no need to order 70 devices.
References:
https://aws.amazon.com/snowball/faqs/
https://aws.amazon.com/vpn/
https://aws.amazon.com/snowmobile/faqs/
https://aws.amazon.com/directconnect/
Domain
Design Cost-Optimized Architectures
Question 50Correct
Amazon CloudFront offers a multi-tier cache in the form of regional edge caches that
improve latency. However, there are certain content types that bypass the regional
edge cache, and go directly to the origin.
Which of the following content types skip the regional edge cache? (Select two)
Your selection is correct
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin
E-commerce assets such as product photos
User-generated videos
Your selection is correct
Dynamic content, as determined at request time (cache-behavior
configured to forward all headers)
Static content such as style sheets, JavaScript files
Overall explanation
Correct options:
Dynamic content, as determined at request time (cache-behavior
configured to forward all headers)
Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency,
high transfer speeds, all within a developer-friendly environment.
CloudFront points of presence (POPs) (edge locations) make sure that popular
content can be served quickly to your viewers. CloudFront also has regional edge
caches that bring more of your content closer to your viewers, even when the
content is not popular enough to stay at a POP, to help improve performance for
that content.
Dynamic content, as determined at request time (cache-behavior configured to
forward all headers), does not flow through regional edge caches, but goes directly
to the origin. So this option is correct.
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin from the
POPs and do not proxy through the regional edge caches. So this option is also
correct.
How Amazon CloudFront Delivers Content:
via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/
HowCloudFrontWorks.html
Incorrect Options:
E-commerce assets such as product photos
User-generated videos
Static content such as style sheets, JavaScript files
The following type of content flows through the regional edge caches - user-
generated content, such as video, photos, or artwork; e-commerce assets such as
product photos and videos and static content such as style sheets, JavaScript files.
Hence these three options are not correct.
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/
HowCloudFrontWorks.html
Domain
Design Secure Architectures
Question 51Incorrect
A news network uses Amazon Simple Storage Service (Amazon S3) to aggregate the
raw video footage from its reporting teams across the US. The news network has
recently expanded into new geographies in Europe and Asia. The technical teams at
the overseas branch offices have reported huge delays in uploading large video files
to the destination Amazon S3 bucket.
Which of the following are the MOST cost-effective options to improve the file
upload speed into Amazon S3? (Select two)
Your selection is incorrect
Use AWS Global Accelerator for faster file uploads into the destination
Amazon S3 bucket
Create multiple AWS Direct Connect connections between the AWS Cloud
and branch offices in Europe and Asia. Use the direct connect connections
for faster file uploads into Amazon S3
Your selection is correct
Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file
uploads into the destination S3 bucket
Correct selection
Use multipart uploads for faster file uploads into the destination Amazon
S3 bucket
Create multiple AWS Site-to-Site VPN connections between the AWS Cloud
and branch offices in Europe and Asia. Use these VPN connections for
faster file uploads into Amazon S3
Overall explanation
Correct options:
Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file
uploads into the destination S3 bucket
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files
over long distances between your client and an S3 bucket. Amazon S3TA takes
advantage of Amazon CloudFront’s globally distributed edge locations. As the data
arrives at an edge location, data is routed to Amazon S3 over an optimized network
path.
Use multipart uploads for faster file uploads into the destination Amazon
S3 bucket
Multipart upload allows you to upload a single object as a set of parts. Each part is a
contiguous portion of the object's data. You can upload these object parts
independently and in any order. If transmission of any part fails, you can retransmit
that part without affecting other parts. After all parts of your object are uploaded,
Amazon S3 assembles these parts and creates the object. In general, when your
object size reaches 100 MB, you should consider using multipart uploads instead of
uploading the object in a single operation. Multipart upload provides improved
throughput, therefore it facilitates faster file uploads.
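As a small illustrative sketch (the bucket name is a placeholder), Transfer Acceleration is enabled on the destination bucket itself; clients then upload through the accelerate endpoint, for example with the multipart configuration shown in an earlier question:

```python
# Sketch: enable S3 Transfer Acceleration on the destination bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_accelerate_configuration(
    Bucket="example-raw-footage-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

status = s3.get_bucket_accelerate_configuration(Bucket="example-raw-footage-bucket")
print(status.get("Status"))  # "Enabled"
```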
Incorrect options:
Create multiple AWS Direct Connect connections between the AWS Cloud
and branch offices in Europe and Asia. Use the direct connect connections
for faster file uploads into Amazon S3 - AWS Direct Connect is a cloud service
solution that makes it easy to establish a dedicated network connection from your
premises to AWS. AWS Direct Connect lets you establish a dedicated network
connection between your network and one of the AWS Direct Connect locations.
Direct connect takes significant time (several months) to be provisioned and is an
overkill for the given use-case.
Create multiple AWS Site-to-Site VPN connections between the AWS Cloud
and branch offices in Europe and Asia. Use these VPN connections for
faster file uploads into Amazon S3 - AWS Site-to-Site VPN enables you to
securely connect your on-premises network or branch office site to your Amazon
Virtual Private Cloud (Amazon VPC). You can securely extend your data center or
branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC
VPN Connection utilizes IPSec to establish encrypted network connectivity between
your intranet and Amazon VPC over the Internet. VPN Connections are a good
solution if you have low to modest bandwidth requirements and can tolerate the
inherent variability in Internet-based connectivity. Site-to-site VPN will not help in
accelerating the file transfer speeds into S3 for the given use-case.
Use AWS Global Accelerator for faster file uploads into the destination
Amazon S3 bucket - AWS Global Accelerator is a service that improves the
availability and performance of your applications with local or global users. It
provides static IP addresses that act as a fixed entry point to your application
endpoints in a single or multiple AWS Regions, such as your Application Load
Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global
Accelerator will not help in accelerating the file transfer speeds into S3 for the given
use-case.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Domain
Design Cost-Optimized Architectures
Question 52Correct
An IT consultant is helping the owner of a medium-sized business set up an AWS
account. What are the security recommendations he must follow while creating the
AWS account root user? (Select two)
Encrypt the access keys and save them on Amazon S3
Send an email to the business owner with details of the login username
and password for the AWS root user. This will help the business owner to
troubleshoot any login issues in future
Your selection is correct
Enable Multi Factor Authentication (MFA) for the AWS account root user
account
Your selection is correct
Create a strong password for the AWS account root user
Create AWS account root user access keys and share those keys only with
the business owner
Overall explanation
Correct options:
Create a strong password for the AWS account root user
Enable Multi Factor Authentication (MFA) for the AWS account root user
account
Here are some of the best practices while creating an AWS account root user:
1) Use a strong password to help protect account-level access to the AWS
Management Console. 2) Never share your AWS account root user password or
access keys with anyone. 3) If you do have an access key for your AWS account root
user, delete it. If you must keep it, rotate (change) the access key regularly. You
should not encrypt the access keys and save them on Amazon S3. 4) If you don't
already have an access key for your AWS account root user, don't create one unless
you absolutely need to. 5) Enable AWS multi-factor authentication (MFA) on your
AWS account root user account.
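For illustration, a short audit sketch using the IAM account summary, which reports whether the root user has MFA enabled and whether root access keys exist (field names as returned by the IAM GetAccountSummary API; this assumes credentials with permission to call it):

```python
# Sketch: check root MFA status and root access keys from the account summary.
import boto3

iam = boto3.client("iam")
summary = iam.get_account_summary()["SummaryMap"]

print("Root MFA enabled: ", bool(summary.get("AccountMFAEnabled")))
print("Root access keys: ", summary.get("AccountAccessKeysPresent", 0))
```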
AWS Root Account Security Best Practices:
via - https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Incorrect options:
Encrypt the access keys and save them on Amazon S3 - AWS recommends
that if you don't already have an access key for your AWS account root user, don't
create one unless you absolutely need to. Even an encrypted access key for the root
user poses a significant security risk. Therefore, this option is incorrect.
Create AWS account root user access keys and share those keys only with
the business owner - AWS recommends that if you don't already have an access
key for your AWS account root user, don't create one unless you absolutely need to.
Hence, this option is incorrect.
Send an email to the business owner with details of the login username
and password for the AWS root user. This will help the business owner to
troubleshoot any login issues in future - AWS recommends that you should
never share your AWS account root user password or access keys with anyone.
Sending an email with AWS account root user credentials creates a security risk as it
can be misused by anyone reading the email. Hence, this option is incorrect.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#create-iam-
users
Domain
Design Secure Architectures
Question 53Incorrect
An Electronic Design Automation (EDA) application produces massive volumes of
data that can be divided into two categories. The 'hot data' needs to be both
processed and stored quickly in a parallel and distributed fashion. The 'cold data'
needs to be kept for reference with quick access for reads and updates at a low
cost.
Which of the following AWS services is BEST suited to accelerate the
aforementioned chip design process?
Your answer is incorrect
Amazon EMR
Correct answer
Amazon FSx for Lustre
Amazon FSx for Windows File Server
AWS Glue
Overall explanation
Correct option:
Amazon FSx for Lustre
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the
world’s most popular high-performance file system. It is used for workloads such as
machine learning, high-performance computing (HPC), video processing, and
financial modeling. The open-source Lustre file system is designed for applications
that require fast storage – where you want your storage to keep up with your
compute. FSx for Lustre integrates with Amazon S3, making it easy to process data
sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file
system transparently presents S3 objects as files and allows you to write changed
data back to S3.
FSx for Lustre provides the ability to both process the 'hot data' in a parallel and
distributed fashion as well as easily store the 'cold data' on Amazon S3. Therefore
this option is the BEST fit for the given problem statement.
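As a hypothetical sketch (the subnet ID, S3 bucket, deployment type, and capacity below are assumptions, not details from the use case), an FSx for Lustre file system can be linked to an S3 bucket so the 'cold data' objects appear as files and changed data can be written back:

```python
# Sketch: FSx for Lustre file system linked to an S3 bucket.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB, illustrative minimum size
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-eda-cold-data",      # S3 objects appear as files
        "ExportPath": "s3://example-eda-cold-data/out",  # changed data written back
    },
)
```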
Incorrect options:
Amazon FSx for Windows File Server - Amazon FSx for Windows File Server
provides fully managed, highly reliable file storage that is accessible over the
industry-standard Server Message Block (SMB) protocol. It is built on Windows
Server, delivering a wide range of administrative features such as user quotas, end-
user file restore, and Microsoft Active Directory (AD) integration. FSx for Windows
does not allow you to present S3 objects as files and does not allow you to write
changed data back to S3. Therefore you cannot reference the "cold data" with quick
access for reads and updates at low cost. Hence this option is not correct.
Amazon EMR - Amazon EMR is the industry-leading cloud big data platform for
processing vast amounts of data using open source tools such as Apache Spark,
Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR
uses Hadoop, an open-source framework, to distribute your data and processing
across a resizable cluster of Amazon EC2 instances. EMR does not offer the same
storage and processing speed as FSx for Lustre. So it is not the right fit for the given
high-performance workflow scenario.
AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service
that makes it easy for customers to prepare and load their data for analytics. AWS
Glue job is meant to be used for batch ETL data processing. AWS Glue does not offer
the same storage and processing speed as FSx for Lustre. So it is not the right fit for
the given high-performance workflow scenario.
References:
https://aws.amazon.com/fsx/lustre/
https://aws.amazon.com/fsx/windows/faqs/
Domain
Design High-Performing Architectures
Question 54Correct
A Big Data analytics company wants to set up an AWS cloud architecture that
throttles requests in case of sudden traffic spikes. The company is looking for AWS
services that can be used for buffering or throttling to handle such traffic variations.
Which of the following services can be used to support this requirement?
Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification
Service (Amazon SNS) and AWS Lambda
Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS)
and Amazon Kinesis
Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS
Lambda
Your answer is correct
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and
Amazon Kinesis
Overall explanation
Correct option:
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and
Amazon Kinesis
Throttling is the process of limiting the number of requests an authorized program
can submit to a given operation in a given amount of time.
To prevent your API from being overwhelmed by too many requests, Amazon API
Gateway throttles requests to your API using the token bucket algorithm, where a
token counts for a request. Specifically, API Gateway sets a limit on a steady-state
rate and a burst of request submissions against all APIs in your account. In the
token bucket algorithm, the burst is the maximum bucket size.
Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (SQS)
is a fully managed message queuing service that enables you to decouple and scale
microservices, distributed systems, and serverless applications. Amazon SQS offers
buffer capabilities to smooth out temporary volume spikes without losing messages
or increasing latency.
Amazon Kinesis - Amazon Kinesis is a fully managed, scalable service that can
ingest, buffer, and process streaming data in real-time.
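As a rough illustration of the throttling side, stage-level rate and burst limits on an API Gateway REST API could be set with a boto3 call along these lines; the API ID, stage name, and limit values are hypothetical placeholders:

import boto3

apigateway = boto3.client("apigateway")

# Set the steady-state rate limit and burst limit for every method in a stage.
# 'abc123' and 'prod' are placeholder identifiers.
apigateway.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "500"},   # requests per second
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "1000"}, # token bucket size
    ],
)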
Incorrect options:
Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification
Service (Amazon SNS) and AWS Lambda - Amazon SQS has the ability to buffer
its messages. Amazon Simple Notification Service (SNS) cannot buffer messages
and is generally used with SQS to provide the buffering facility. When requests come
in faster than your Lambda function can scale, or when your function is at maximum
concurrency, additional requests fail as Lambda throttles them with a 429 (Too
Many Requests) status code. So, this combination of services is incorrect.
Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS)
and Amazon Kinesis - A Gateway Endpoint is a gateway that you specify as a
target for a route in your route table for traffic destined to a supported AWS service.
This cannot help in throttling or buffering of requests. Amazon SQS and Kinesis can
buffer incoming data. Since Gateway Endpoint is an incorrect service for throttling
or buffering, this option is incorrect.
Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS
Lambda - Elastic Load Balancer cannot throttle requests. Amazon SQS can be used
to buffer messages. When requests come in faster than your Lambda function can
scale, or when your function is at maximum concurrency, additional requests fail as
Lambda throttles them with a 429 (Too Many Requests) status code. So, this
combination of services is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-
request-throttling.html
https://aws.amazon.com/sqs/features/
Domain
Design Resilient Architectures
Question 55Incorrect
A company uses Amazon S3 buckets for storing sensitive customer data. The
company has defined different retention periods for different objects present in the
Amazon S3 buckets, based on the compliance requirements. But, the retention rules
do not seem to work as expected.
Which of the following options represent a valid configuration for setting up
retention periods for objects in Amazon S3 buckets? (Select two)
You cannot place a retention period on an object version through a bucket
default setting
Your selection is incorrect
When you use bucket default settings, you specify a Retain Until Date for
the object version
Correct selection
Different versions of a single object can have different retention modes
and periods
Your selection is correct
When you apply a retention period to an object version explicitly, you
specify a Retain Until Date for the object version
The bucket default settings will override any explicit retention mode or
period you request on an object version
Overall explanation
Correct options:
When you apply a retention period to an object version explicitly, you
specify a Retain Until Date for the object version
You can place a retention period on an object version either explicitly or through a
bucket default setting. When you apply a retention period to an object version
explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores
the Retain Until Date setting in the object version's metadata and protects the
object version until the retention period expires.
Different versions of a single object can have different retention modes
and periods
Like all other Object Lock settings, retention periods apply to individual object
versions. Different versions of a single object can have different retention modes
and periods.
For example, suppose that you have an object that is 15 days into a 30-day
retention period, and you PUT an object into Amazon S3 with the same name and a
60-day retention period. In this case, your PUT succeeds, and Amazon S3 creates a
new version of the object with a 60-day retention period. The older version
maintains its original retention period and becomes deletable in 15 days.
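For illustration, both ways of setting retention could look roughly like the boto3 sketch below, assuming Object Lock is enabled on a hypothetical bucket; the bucket, key, version ID, and dates are placeholders:

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Bucket default: you specify a duration in days (or years), not a Retain Until Date.
s3.put_object_lock_configuration(
    Bucket="example-compliance-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)

# Explicit retention on a specific object version: you specify a Retain Until Date.
s3.put_object_retention(
    Bucket="example-compliance-bucket",
    Key="customer-records.csv",
    VersionId="example-version-id",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)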
Incorrect options:
You cannot place a retention period on an object version through a bucket
default setting - You can place a retention period on an object version either
explicitly or through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for
the object version - When you use bucket default settings, you don't specify a
Retain Until Date. Instead, you specify a duration, in either days or years, for which
every object version placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or
period you request on an object version - If your request to place an object
version in a bucket contains an explicit retention mode and period, those settings
override any bucket default settings for that object version.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
Domain
Design Secure Architectures
Question 56Incorrect
The engineering team at a data analytics company has observed that its flagship
application functions at its peak performance when the underlying Amazon Elastic
Compute Cloud (Amazon EC2) instances have a CPU utilization of about 50%. The
application is built on a fleet of Amazon EC2 instances managed under an Auto
Scaling group. The workflow requests are handled by an internal Application Load
Balancer that routes the requests to the instances.
As a solutions architect, what would you recommend so that the application runs
near its peak performance state?
Configure the Auto Scaling group to use simple scaling policy and set the
CPU utilization as the target metric with a target value of 50%
Configure the Auto Scaling group to use step scaling policy and set the
CPU utilization as the target metric with a target value of 50%
Correct answer
Configure the Auto Scaling group to use target tracking policy and set the
CPU utilization as the target metric with a target value of 50%
Your answer is incorrect
Configure the Auto Scaling group to use an Amazon CloudWatch alarm
triggered on a CPU utilization threshold of 50%
Overall explanation
Correct option:
Configure the Auto Scaling group to use target tracking policy and set the
CPU utilization as the target metric with a target value of 50%
An Auto Scaling group contains a collection of Amazon EC2 instances that are
treated as a logical grouping for the purposes of automatic scaling and
management. An Auto Scaling group also enables you to use Amazon EC2 Auto
Scaling features such as health check replacements and scaling policies.
With target tracking scaling policies, you select a scaling metric and set a target
value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that
trigger the scaling policy and calculates the scaling adjustment based on the metric
and the target value. The scaling policy adds or removes capacity as required to
keep the metric at, or close to, the specified target value.
For example, you can configure a target tracking scaling policy to keep the average
aggregate CPU utilization of your Auto Scaling group at 50 percent. This meets the
requirements specified in the given use-case and therefore, this is the correct option.
Target Tracking Policy Overview:
 via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-
tracking.html
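A target tracking policy along these lines could be created with boto3 as sketched below; the Auto Scaling group and policy names are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization at roughly 50%;
# Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms itself.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="flagship-asg",
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)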
Incorrect options:
Configure the Auto Scaling group to use step scaling policy and set the
CPU utilization as the target metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the
CPU utilization as the target metric with a target value of 50%
With step scaling and simple scaling, you choose scaling metrics and threshold
values for the Amazon CloudWatch alarms that trigger the scaling process, and you
define fixed scaling adjustments rather than a target value for the group to maintain.
Neither step scaling nor simple scaling can be configured to track a target metric
value such as 50% CPU utilization, hence both these options are incorrect.
Configure the Auto Scaling group to use an Amazon CloudWatch alarm
triggered on a CPU utilization threshold of 50% - An Auto Scaling group
cannot act on an Amazon CloudWatch alarm by itself; the alarm must be associated
with a scaling policy to drive a scale-in or scale-out event, hence this option is
incorrect.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-
tracking.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-
step.html
Domain
Design High-Performing Architectures
Question 57Incorrect
A research group runs its flagship application on a fleet of Amazon EC2 instances for
a specialized task that must deliver high random I/O performance. Each instance in
the fleet would have access to a dataset that is replicated across the instances by
the application itself. Because of the resilient application architecture, the
specialized task would continue to be processed even if any instance goes down, as
the underlying application would ensure the replacement instance has access to the
required dataset.
Which of the following options is the MOST cost-optimal and resource-efficient
solution to build this fleet of Amazon EC2 instances?
Use Amazon EC2 instances with Amazon EFS mount points
Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances
Your answer is incorrect
Use Amazon EC2 instances with access to Amazon S3 based storage
Correct answer
Use Instance Store based Amazon EC2 instances
Overall explanation
Correct option:
Use Instance Store based Amazon EC2 instances
An instance store provides temporary block-level storage for your instance. This
storage is located on disks that are physically attached to the host instance.
Instance store is ideal for the temporary storage of information that changes
frequently such as buffers, caches, scratch data, and other temporary content, or
for data that is replicated across a fleet of instances, such as a load-balanced pool
of web servers. Instance store volumes are included as part of the instance's usage
cost.
As Instance Store based volumes provide high random I/O performance at low cost
(as the storage is part of the instance's usage cost) and the resilient architecture
can adjust for the loss of any instance, therefore you should use Instance Store
based Amazon EC2 instances for this use-case.
Amazon EC2 Instance Store Overview:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
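For illustration, an instance store volume is declared through the block device mapping at launch time; the sketch below uses a hypothetical AMI ID, and on NVMe-based instance types (such as the i3 family) the ephemeral volumes are attached automatically, so the explicit mapping entry is only needed for instance types with non-NVMe instance store:

import boto3

ec2 = boto3.client("ec2")

# Launch an instance type that offers instance store and map an ephemeral volume.
# The AMI ID is a placeholder.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="i3.large",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
    ],
)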
Incorrect options:
Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances -
Amazon Elastic Block Store (Amazon EBS) based volumes would need to use
provisioned IOPS (io1) as the storage type and that would incur additional costs. As
we are looking for the most cost-optimal solution, this option is ruled out.
Use Amazon EC2 instances with Amazon EFS mount points - Using Amazon
Elastic File System (Amazon EFS) implies that extra resources would have to be
provisioned (compared to using instance store where the storage is located on disks
that are physically attached to the host instance itself). As we are looking for the
most resource-efficient solution, this option is also ruled out.
Use Amazon EC2 instances with access to Amazon S3 based storage - Using
Amazon EC2 instances with access to Amazon S3 based storage does not deliver
high random I/O performance; this option is just added as a distractor.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
Domain
Design High-Performing Architectures
Question 58Incorrect
A retail company uses Amazon Elastic Compute Cloud (Amazon EC2) instances,
Amazon API Gateway, Amazon RDS, Elastic Load Balancer and Amazon CloudFront
services. To improve the security of these services, the Risk Advisory group has
suggested a feasibility check for using the Amazon GuardDuty service.
Which of the following would you identify as data sources supported by Amazon
GuardDuty?
Correct answer
VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events
Elastic Load Balancing logs, Domain Name System (DNS) logs, AWS
CloudTrail events
Your answer is incorrect
VPC Flow Logs, Amazon API Gateway logs, Amazon S3 access logs
Amazon CloudFront logs, Amazon API Gateway logs, AWS CloudTrail events
Overall explanation
Correct option:
VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events
Amazon GuardDuty is a threat detection service that continuously monitors for
malicious activity and unauthorized behavior to protect your AWS accounts,
workloads, and data stored in Amazon S3. With the cloud, the collection and
aggregation of account and network activities is simplified, but it can be time-
consuming for security teams to continuously analyze event log data for potential
threats. With GuardDuty, you now have an intelligent and cost-effective option for
continuous threat detection in AWS. The service uses machine learning, anomaly
detection, and integrated threat intelligence to identify and prioritize potential
threats.
Amazon GuardDuty analyzes tens of billions of events across multiple AWS data
sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with
no software or hardware to deploy or maintain. By integrating with Amazon
EventBridge, GuardDuty alerts are actionable, easy to aggregate across
multiple accounts, and straightforward to push into existing event management and
workflow systems.
How Amazon GuardDuty works:
via - https://aws.amazon.com/guardduty/
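For reference, enabling GuardDuty amounts to creating a detector in each region to be monitored; a minimal boto3 sketch:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Creating a detector turns on analysis of AWS CloudTrail events,
# VPC Flow Logs, and DNS logs for the account in this region.
detector = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)
print(detector["DetectorId"])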
Incorrect options:
VPC Flow Logs, Amazon API Gateway logs, Amazon S3 access logs
Elastic Load Balancing logs, Domain Name System (DNS) logs, AWS
CloudTrail events
Amazon CloudFront logs, Amazon API Gateway logs, AWS CloudTrail events
These three options contradict the explanation provided above, so these options are
incorrect.
Reference:
https://aws.amazon.com/guardduty/
Domain
Design Secure Architectures
Question 59Correct
A company manages a multi-tier social media application that runs on Amazon
Elastic Compute Cloud (Amazon EC2) instances behind an Application Load
Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple
Availability Zones (AZs) and use an Amazon Aurora database. As an AWS Certified
Solutions Architect – Associate, you have been tasked to make the application more
resilient to periodic spikes in request rates.
Which of the following solutions would you recommend for the given use-case?
(Select two)
Use AWS Shield
Your selection is correct
Use Amazon Aurora Replica
Use AWS Direct Connect
Use AWS Global Accelerator
Your selection is correct
Use Amazon CloudFront distribution in front of the Application Load
Balancer
Overall explanation
Correct options:
You can use Amazon Aurora replicas and Amazon CloudFront distribution to make
the application more resilient to spikes in request rates.
Use Amazon Aurora Replica
Amazon Aurora Replicas have two main purposes. You can issue queries to them to
scale the read operations for your application. You typically do so by connecting to
the reader endpoint of the cluster. That way, Aurora can spread the load for read-
only connections across as many Aurora Replicas as you have in the cluster.
Amazon Aurora Replicas also help to increase availability. If the writer instance in a
cluster becomes unavailable, Aurora automatically promotes one of the reader
instances to take its place as the new writer. Up to 15 Aurora Replicas can be
distributed across the Availability Zones (AZs) that a DB cluster spans within an
AWS Region.
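As a rough sketch, an Aurora Replica is added by creating another DB instance inside the existing cluster; the cluster and instance identifiers below are hypothetical:

import boto3

rds = boto3.client("rds")

# Add a reader (Aurora Replica) to an existing Aurora MySQL cluster.
rds.create_db_instance(
    DBInstanceIdentifier="social-app-reader-1",
    DBClusterIdentifier="social-app-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)

# Applications then send read-only traffic to the cluster's reader endpoint,
# which spreads connections across all available replicas.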
Use Amazon CloudFront distribution in front of the Application Load
Balancer
Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency,
high transfer speeds, all within a developer-friendly environment. CloudFront points
of presence (POPs), also known as edge locations, make sure that popular content can be served
quickly to your viewers. Amazon CloudFront also has regional edge caches that
bring more of your content closer to your viewers, even when the content is not
popular enough to stay at a POP, to help improve performance for that content.
Amazon CloudFront offers an origin failover feature to help support your data
resiliency needs. Amazon CloudFront is a global service that delivers your content
through a worldwide network of data centers called edge locations or points of
presence (POPs). If your content is not already cached in an edge location, Amazon
CloudFront retrieves it from an origin that you've identified as the source for the
definitive version of the content.
Incorrect options:
Use AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS)
protection service that safeguards applications running on AWS. AWS Shield
provides always-on detection and automatic inline mitigations that minimize
application downtime and latency. There are two tiers of AWS Shield - Standard and
Advanced. AWS Shield cannot be used to improve application resiliency to handle
spikes in traffic.
Use AWS Global Accelerator - AWS Global Accelerator is a service that improves
the availability and performance of your applications with local or global users. It
provides static IP addresses that act as a fixed entry point to your application
endpoints in a single or multiple AWS Regions, such as your Application Load
Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global
Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT),
or Voice over IP, as well as for HTTP use cases that specifically require static IP
addresses or deterministic, fast regional failover. Since Amazon CloudFront is better
suited for improving application resiliency to handle spikes in traffic, this option is
ruled out.
Use AWS Direct Connect - AWS Direct Connect lets you establish a dedicated
network connection between your network and one of the AWS Direct Connect
locations. Using industry-standard 802.1q VLANs, this dedicated connection can be
partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the
Internet; instead, it uses dedicated, private network connections between your
intranet and Amazon VPC. AWS Direct Connect cannot be used to improve
application resiliency to handle spikes in traffic.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/disaster-
recovery-resiliency.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/
Aurora.Replication.html
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/disaster-recovery-
resiliency.html
Domain
Design Resilient Architectures
Question 60Skipped
A company has a web application that runs 24*7 in the production environment. The
development team at the company runs a clone of the same application in the dev
environment for up to 8 hours every day. The company wants to build the MOST
cost-optimal solution by deploying these applications using the best-fit pricing
options for Amazon Elastic Compute Cloud (Amazon EC2) instances.
What would you recommend?
Correct answer
Use Amazon EC2 reserved instance (RI) for the production application and
on-demand instances for the dev application
Use on-demand Amazon EC2 instances for the production application and
spot instances for the dev application
Use Amazon EC2 reserved instance (RI) for the production application and
spot block instances for the dev application
Use Amazon EC2 reserved instance (RI) for the production application and
spot instances for the dev application
Overall explanation
Correct option:
Use Amazon EC2 reserved instance (RI) for the production application and
on-demand instances for the dev application
There are multiple pricing options for EC2 instances, such as On-Demand, Savings
Plans, Reserved Instances, and Spot Instances.
Amazon EC2 Instances Pricing Options:
via - https://aws.amazon.com/ec2/pricing/
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%)
compared to On-Demand pricing and provide a capacity reservation when used in a
specific Availability Zone. You have the flexibility to change families, OS types, and
tenancies while benefitting from RI pricing when you use Convertible RIs.
via - https://aws.amazon.com/ec2/pricing/
For the given use case, you can use Amazon EC2 Reserved Instances for the
production application as it runs 24*7. This way you can get a discount of up to 72%
if you opt for a 3-year term. You can use on-demand instances for the dev application
since it is only used for up to 8 hours per day. On-demand offers the flexibility to
only pay for the Amazon EC2 instance when it is being used (0 to 8 hours for the
given use case).
Incorrect options:
Use Amazon EC2 reserved instance (RI) for the production application and
spot block instances for the dev application - Spot blocks can only be used for
a span of up to 6 hours, so this option does not meet the requirements of the given
use case where the dev application can be up and running for up to 8 hours. You should
also note that AWS has stopped offering Spot blocks to new customers.
Use Amazon EC2 reserved instance (RI) for the production application and
spot instances for the dev application
Use on-demand Amazon EC2 instances for the production application and
spot instances for the dev application
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the
AWS cloud. Spot Instances are available at up to a 90% discount compared to On-
Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or
flexible applications.
via - https://aws.amazon.com/ec2/spot/
Spot instances can be taken back by AWS with two minutes of notice, so spot
instances cannot be reliably used for running the dev application (which can be up
and running for up to 8 hours). So both these options are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/blogs/aws/new-ec2-spot-blocks-for-defined-duration-
workloads/
https://aws.amazon.com/ec2/spot/
Domain
Design Cost-Optimized Architectures
Question 61Correct
While consolidating logs for the weekly reporting, a development team at an e-
commerce company noticed that an unusually large number of illegal AWS
application programming interface (API) queries were made sometime during the
week. Due to the off-season, there was no visible impact on the systems. However,
this event led the management team to seek an automated solution that can trigger
near-real-time warnings in case such an event recurs.
Which of the following represents the best solution for the given scenario?
AWS Trusted Advisor publishes metrics about check results to Amazon
CloudWatch. Create an alarm to track status changes for checks in the
Service Limits category for the APIs. The alarm will then notify when the
service quota is reached or exceeded
Run Amazon Athena SQL queries against AWS CloudTrail log files stored in
Amazon S3 buckets. Use Amazon QuickSight to generate reports for
managerial dashboards
Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use
Amazon Kinesis stream-level metrics in Amazon CloudWatch to trigger
an AWS Lambda function that will trigger an error workflow
Your answer is correct
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail
logs having API call details and looks at any errors by factoring in all the
error codes that need to be tracked. Create an alarm based on this
metric's rate to send an Amazon SNS notification to the required team
Overall explanation
Correct option:
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail
logs having API call details and looks at any errors by factoring in all the
error codes that need to be tracked. Create an alarm based on this
metric's rate to send an Amazon SNS notification to the required team
AWS CloudTrail log data can be ingested into Amazon CloudWatch to monitor and
identify your AWS account activity against security threats, and create a
governance framework for security best practices. You can analyze log trail event
data in CloudWatch using features such as Logs Insights, Contributor Insights, Metric
filters, and CloudWatch Alarms.
AWS CloudTrail integrates with the Amazon CloudWatch service to publish the API
calls being made to resources or services in the AWS account. The published event
has invaluable information that can be used for compliance, auditing, and
governance of your AWS accounts. Several features available in CloudWatch let you
monitor API activity, analyze the logs at scale, and take action when malicious
activity is discovered, without provisioning additional infrastructure.
For the AWS CloudTrail logs available in Amazon CloudWatch Logs, you can begin
searching and filtering the log data by creating one or more metric filters. Use these
metric filters to turn log data into numerical CloudWatch metrics that you can graph
or set a CloudWatch Alarm on.
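A setup along these lines could be wired up with boto3 roughly as follows; the log group name, filter pattern, threshold, and SNS topic ARN are hypothetical placeholders:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1. Turn matching CloudTrail events into a custom CloudWatch metric.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="UnauthorizedApiCalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedApiCallCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# 2. Alarm on the metric's rate and notify the team through an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="CloudTrailMetrics",
    MetricName="UnauthorizedApiCallCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)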
Note: AWS CloudTrail Insights helps AWS users identify and respond to unusual
activity associated with write API calls by continuously analyzing CloudTrail
management events.
Insights events are logged when AWS CloudTrail detects unusual write management
API activity in your account. If you have AWS CloudTrail Insights enabled and
CloudTrail detects unusual activity, Insights events are delivered to the destination
Amazon S3 bucket for your trail. You can also see the type of insight and the
incident time when you view Insights events on the CloudTrail console. Unlike other
types of events captured in a CloudTrail trail, Insights events are logged only when
CloudTrail detects changes in your account's API usage that differ significantly from
the account's typical usage patterns.
Incorrect options:
Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use
Amazon Kinesis stream-level metrics in Amazon CloudWatch to trigger
an AWS Lambda function that will trigger an error workflow - AWS CloudTrail
cannot stream data to Amazon Kinesis. Amazon S3 buckets and Amazon
CloudWatch Logs are the only destinations possible.
Run Amazon Athena SQL queries against AWS CloudTrail log files stored in
Amazon S3 buckets. Use Amazon QuickSight to generate reports for
managerial dashboards - Generating reports and visualizations helps in
understanding and analyzing patterns but is not useful as a near-real-time
automatic solution for the given problem.
AWS Trusted Advisor publishes metrics about check results to Amazon
CloudWatch. Create an alarm to track status changes for checks in the
Service Limits category for the APIs. The alarm will then notify when the
service quota is reached or exceeded - When AWS Trusted Advisor refreshes
your checks, Trusted Advisor publishes metrics about your check results to Amazon
CloudWatch. You can view the metrics in CloudWatch. You can also create alarms to
detect status changes to Trusted Advisor checks and status changes for resources,
and service quota usage (formerly referred to as limits). The alarm will then notify
you when you reach or exceed a service quota for your AWS account. However, the
alarm is triggered only when the service limit is reached. We need a solution that
raises an alarm when the number of API calls randomly increases or an abnormal
pattern is detected. Hence, this option is not the right fit for the given use case.
References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-
cloudtrail.html#cloudwatch-alarms-for-cloudtrail-authorization-failures
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-
concepts.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-
events-with-cloudtrail.html
https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-metrics-ta.html
Domain
Design Resilient Architectures
Question 62Skipped
The sourcing team at the US headquarters of a global e-commerce company is
preparing a spreadsheet of the new product catalog. The spreadsheet is saved on
an Amazon Elastic File System (Amazon EFS) created in us-east-1 region. The
sourcing team counterparts from other AWS regions such as Asia Pacific and Europe
also want to collaborate on this spreadsheet.
As a solutions architect, what is your recommendation to enable this collaboration
with the LEAST amount of operational overhead?
The spreadsheet will have to be copied into Amazon EFS file systems of
other AWS regions as Amazon EFS is a regional service and it does not
allow access from other AWS regions
The spreadsheet will have to be copied in Amazon S3 which can then be
accessed from any AWS region
The spreadsheet data will have to be moved into an Amazon RDS for
MySQL database which can then be accessed from any AWS region
Correct answer
The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be
accessed in other AWS regions by using an inter-region VPC peering
connection
Overall explanation
Correct option:
The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be
accessed in other AWS regions by using an inter-region VPC peering
connection
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully
managed elastic NFS file system for use with AWS Cloud services and on-premises
resources.
Amazon EFS is a regional service storing data within and across multiple Availability
Zones (AZs) for high availability and durability. Amazon EC2 instances can access
your file system across AZs, regions, and VPCs, while on-premises servers can
access it using AWS Direct Connect or AWS VPN.
You can connect to Amazon EFS file systems from EC2 instances in other AWS
regions using an inter-region VPC peering connection, and from on-premises servers
using an AWS VPN connection. So this is the correct option.
Incorrect options:
The spreadsheet will have to be copied in Amazon S3 which can then be
accessed from any AWS region
The spreadsheet data will have to be moved into an Amazon RDS for
MySQL database which can then be accessed from any AWS region
Copying the spreadsheet into Amazon S3 or an Amazon RDS for MySQL database is
not the correct solution as it involves a lot of operational overhead. For Amazon RDS,
one would need to write custom code to replicate the spreadsheet functionality on
top of the database. Amazon S3 does not allow in-place edits of an object and is not
POSIX-compliant, so one would need to develop a custom application to "simulate
in-place edits" to support collaboration as per the use-case. So both these options
are ruled out.
The spreadsheet will have to be copied into Amazon EFS file systems of
other AWS regions as Amazon EFS is a regional service and it does not
allow access from other AWS regions - Creating copies of the spreadsheet into
Amazon EFS file systems of other AWS regions would mean no collaboration would
be possible between the teams. In this case, each team would work on "its own file"
instead of a single file accessed and updated by all teams. Hence this option is
incorrect.
Reference:
https://aws.amazon.com/efs/
Domain
Design High-Performing Architectures
Question 63Correct
A logistics company is building a multi-tier application to track the location of its
trucks during peak operating hours. The company wants these data points to be
accessible in real-time in its analytics platform via a REST API. The company has
hired you as an AWS Certified Solutions Architect Associate to build a multi-tier
solution to store and retrieve this location data for analysis.
Which of the following options addresses the given use case?
Leverage Amazon QuickSight with Amazon Redshift
Leverage Amazon API Gateway with AWS Lambda
Leverage Amazon Athena with Amazon S3
Your answer is correct
Leverage Amazon API Gateway with Amazon Kinesis Data Analytics
Overall explanation
Correct option:
Leverage Amazon API Gateway with Amazon Kinesis Data Analytics
You can use Kinesis Data Analytics to transform and analyze streaming data in real-
time with Apache Flink. Kinesis Data Analytics enables you to quickly build end-to-
end stream processing applications for log analytics, clickstream analytics, Internet
of Things (IoT), ad tech, gaming, etc. The four most common use cases are
streaming extract-transform-load (ETL), continuous metric generation, responsive
real-time analytics, and interactive querying of data streams. Kinesis Data Analytics
for Apache Flink applications provides your application 50 GB of running application
storage per Kinesis Processing Unit (KPU).
Amazon API Gateway is a fully managed service that allows you to publish,
maintain, monitor, and secure APIs at any scale. Amazon API Gateway offers two
options to create RESTful APIs, HTTP APIs and REST APIs, as well as an option to
create WebSocket APIs.
Amazon API Gateway:
 via - https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-
capabilities-in-database-environment/
For the given use case, you can use Amazon API Gateway to create a REST API that
handles incoming requests having location data from the trucks and sends it to the
Kinesis Data Analytics application on the back end.
Amazon Kinesis Data Analytics:
via - https://aws.amazon.com/kinesis/data-analytics/
Incorrect options:
Leverage Amazon Athena with Amazon S3 - Amazon Athena is an interactive
query service that makes it easy to analyze data in Amazon S3 using standard SQL.
Athena cannot be used to build a REST API to consume data from the source. So this
option is incorrect.
Leverage Amazon QuickSight with Amazon Redshift - QuickSight is a cloud-
native, serverless business intelligence service. QuickSight cannot be used to build
a REST API to consume data from the source. Redshift is a fully managed AWS cloud
data warehouse. So this option is incorrect.
Leverage Amazon API Gateway with AWS Lambda - You cannot use Lambda to
store and retrieve the location data for analysis, so this option is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-
with-aws-services-kinesis.html
https://aws.amazon.com/kinesis/data-analytics/
https://aws.amazon.com/kinesis/data-analytics/faqs/
Domain
Design High-Performing Architectures
Question 64Correct
An IT company wants to review its security best-practices after an incident was
reported where a new developer on the team was assigned full access to Amazon
DynamoDB. The developer accidentally deleted a couple of tables from the
production environment while building out a new feature.
Which is the MOST effective way to address this issue so that such incidents do not
recur?
The CTO should review the permissions for each new developer's IAM user
so that such incidents don't recur
Only root user should have full database access in the organization
Remove full database access for all IAM users in the organization
Your answer is correct
Use permissions boundary to control the maximum permissions employees
can grant to the IAM principals
Overall explanation
Correct option:
Use permissions boundary to control the maximum permissions employees
can grant to the IAM principals
A permissions boundary can be used to control the maximum permissions
employees can grant to the IAM principals (that is, users and roles) that they create
and manage. As the IAM administrator, you can define one or more permissions
boundaries using managed policies and allow your employee to create a principal
with this boundary. The employee can then attach a permissions policy to this
principal. However, the effective permissions of the principal are the intersection of
the permissions boundary and permissions policy. As a result, the new principal
cannot exceed the boundary that you defined. Therefore, using the permissions
boundary offers the right solution for this use-case.
Permission Boundary Example:
 via - https://aws.amazon.com/blogs/security/delegate-permission-management-to-
developers-using-iam-permissions-boundaries/
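As a sketch, the permissions boundary is attached when (or after) the developer's IAM user is created; the policy ARN and user name below are hypothetical:

import boto3

iam = boto3.client("iam")

# The boundary is a managed policy that caps the user's effective permissions,
# regardless of the permissions policies later attached to the user.
BOUNDARY_ARN = "arn:aws:iam::111122223333:policy/DeveloperBoundary"

iam.create_user(
    UserName="new-developer",
    PermissionsBoundary=BOUNDARY_ARN,
)

# The boundary can also be applied to an existing user:
iam.put_user_permissions_boundary(
    UserName="new-developer",
    PermissionsBoundary=BOUNDARY_ARN,
)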
Incorrect options:
Remove full database access for all IAM users in the organization - It is not
practical to remove full access for all IAM users in the organization because a select
set of users need this access for database administration. So this option is not
correct.
The CTO should review the permissions for each new developer's IAM user
so that such incidents don't recur - Likewise the CTO is not expected to review
the permissions for each new developer's IAM user, as this is best done via an
automated procedure. This option has been added as a distractor.
Only root user should have full database access in the organization - As a
best practice, the root user should not access the AWS account to carry out any
administrative procedures. So this option is not correct.
Reference:
https://aws.amazon.com/blogs/security/delegate-permission-management-to-
developers-using-iam-permissions-boundaries/
Domain
Design Secure Architectures
Question 65Correct
An audit department generates and accesses the audit reports only twice in a
financial year. The department uses AWS Step Functions to orchestrate the report
creating process that has failover and retry scenarios built into the solution. The
underlying data to create these audit reports is stored on Amazon S3, runs into
hundreds of Terabytes and should be available with millisecond latency.
As an AWS Certified Solutions Architect – Associate, which is the MOST cost-effective
storage class that you would recommend to be used for this use-case?
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
Your answer is correct
Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Amazon S3 Glacier Deep Archive
Amazon S3 Standard
Overall explanation
Correct option:
Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Since the data is accessed only twice in a financial year but needs rapid access
when required, the most cost-effective storage class for this use-case is Amazon S3
Standard-IA. S3 Standard-IA storage class is for data that is accessed less frequently
but requires rapid access when needed. S3 Standard-IA matches the high durability,
high throughput, and low latency of S3 Standard, with a low per GB storage price
and per GB retrieval fee. Amazon S3 Standard-IA is designed for 99.9% availability
compared to 99.99% availability of Amazon S3 Standard. However, the report
creation process has failover and retry scenarios built into the workflow, so in case
the data is not available owing to the 99.9% availability of Amazon S3 Standard-IA,
the job will be auto re-invoked till data is successfully retrieved. Therefore this is the
correct option.
Amazon S3 Storage Classes Overview:
via - https://aws.amazon.com/s3/storage-classes/
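For illustration, data can be written directly into the S3 Standard-IA storage class by specifying it on upload; the bucket, key, and file name below are hypothetical:

import boto3

s3 = boto3.client("s3")

# Upload the audit dataset directly into the Standard-IA storage class.
with open("source-data.parquet", "rb") as data:
    s3.put_object(
        Bucket="example-audit-reports",
        Key="fy2023/source-data.parquet",
        Body=data,
        StorageClass="STANDARD_IA",
    )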
Incorrect options:
Amazon S3 Standard - Amazon S3 Standard offers high durability, availability,
and performance object storage for frequently accessed data. As described above,
Amazon S3 Standard-IA storage is a better fit than Amazon S3 Standard, hence
using S3 standard is ruled out for the given use-case.
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) - For a small monthly
object monitoring and automation charge, Amazon S3 Intelligent-Tiering monitors
access patterns and automatically moves objects that have not been accessed to
lower-cost access tiers. The Amazon S3 Intelligent-Tiering storage class is designed
to optimize costs by automatically moving data to the most cost-effective access
tier, without performance impact or operational overhead. S3 Standard-IA matches
the high durability, high throughput, and low latency of S3 Intelligent-Tiering, with a
low per GB storage price and per GB retrieval fee, and it offers the same availability.
Since the access pattern for the audit data is already known (only twice a year), the
monthly monitoring and automation charge of S3 Intelligent-Tiering adds cost without
any benefit. So, it's more cost-efficient to use S3 Standard-IA instead of S3
Intelligent-Tiering.
Amazon S3 Glacier Deep Archive - Amazon S3 Glacier Deep Archive is a secure,
durable, and low-cost storage class for data archiving. Amazon S3 Glacier Deep
Archive does not support millisecond latency, so this option is ruled out.
For more details on the durability, availability, cost and access latency - please
review this reference link: https://aws.amazon.com/s3/storage-classes
Domain
Design Cost-Optimized Architectures