AWS S3 Default Settings and Use Cases

The document discusses Amazon S3 data consistency. It states that S3 provides read-after-write consistency for new object PUTs in all regions, with one caveat. If a HEAD or GET request is made before creating the object, S3 provides eventual consistency for read-after-write. It also notes that S3 offers eventual consistency for overwrite PUTs and DELETES in all regions.

QUESTION 1
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES

You have created an S3 bucket in the us-east-1 region without changing the default “configure options” and “permissions”.
Which of the following options are incorrect in terms of default settings?
(choose 2 options)

A. Encryption is disabled.
B. Transfer Acceleration is enabled.
C. No bucket policy exists.
D. Versioning is enabled.

Explanation :
Answer: B, D
When creating an S3 bucket, you can change the default configuration according to your requirements or leave the default options and continue to
create the bucket. You can always change the configuration after the bucket has been created.
For option A, default encryption is not enabled.
For option B, Transfer Acceleration is suspended by default.
For option C, no bucket policy exists by default. You can restrict bucket access through a bucket policy.
For option D, versioning is disabled by default.

Note:
The question is"Which of the following options are incorrect in terms of default settings?"

A. Encryption is disabled. -- This matches the default settings, so it is not an answer.
B. Transfer Acceleration is enabled. -- This does not match the default settings, so it is an answer.
C. No bucket policy exists. -- This matches the default settings, so it is not an answer.
D. Versioning is enabled. -- This does not match the default settings, so it is an answer.
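
As an illustrative sketch (not part of the original question), the default state of a newly created bucket can be checked with the AWS SDK for Python; the bucket name below is a placeholder, and the encryption behaviour described is the one this question assumes (before SSE-S3 became the default).

import boto3
from botocore.exceptions import ClientError

bucket = "my-default-settings-bucket"   # hypothetical bucket created with default options
s3 = boto3.client("s3", region_name="us-east-1")

# Versioning: a brand-new bucket returns no "Status" key, i.e. versioning is disabled.
versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning:", versioning.get("Status", "Disabled"))

# Transfer Acceleration: no "Status" key means it has never been enabled.
accel = s3.get_bucket_accelerate_configuration(Bucket=bucket)
print("Transfer Acceleration:", accel.get("Status", "Not enabled"))

# Bucket policy: a new bucket has none, so this call raises NoSuchBucketPolicy.
try:
    s3.get_bucket_policy(Bucket=bucket)
except ClientError as e:
    print("Bucket policy:", e.response["Error"]["Code"])

# Default encryption: on buckets from the era of this question, this call raised
# ServerSideEncryptionConfigurationNotFoundError when no encryption was configured.
try:
    s3.get_bucket_encryption(Bucket=bucket)
except ClientError as e:
    print("Encryption:", e.response["Error"]["Code"])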

QUESTION 2

Which of the following are S3 bucket properties?(Choose 2 options)

A. Server access logging


B. Object level logging
C. Storage class
D. Metadata

Explanation :
Answer: A, B
The S3 bucket properties are listed in the documentation here.

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/view-bucket-properties.html

For option C, the storage class property is at the object level, not the bucket level. For more information on the different storage classes, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html

For option D, metadata is an object-level property, not a bucket-level one. For detailed information on object metadata, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata

QUESTION 3

You have created an S3 bucket in the us-east-1 region with default configuration. You are located in Asia and deleted an object in the bucket using the AWS CLI. However, when you tried to list the objects in the bucket, you still see the object you deleted. You are even able to download the object. What could have caused this behaviour?

A. Cross region deletes are not supported by AWS


B. AWS provides eventual consistency for DELETES.
C. AWS keeps copy of deleted object for 7 days in STANDARD storage.
D. AWS provides strong consistency for DELETES.

Explanation :
Answer: B
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.

https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#CoreConcepts and refer to “Amazon S3 Data Consistency Model”


For option A, you can perform a DELETE operation from the console, CLI, or programmatically from any region as long as you have access to perform it.
For option C, AWS S3 deletes any object for which a DELETE request is made by an authorized IAM entity.
It does not keep a copy unless you have versioning enabled and there are multiple versions of the deleted object.

https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html
In this case, the bucket was created with the default configuration, which has versioning disabled.
For option D, AWS does not provide strong consistency for DELETES.

QUESTION 4

Your organization is planning to upload a large number of files to the AWS cloud. These files need to be immediately available for download across different geographical regions right after the upload is complete. They consulted you to check if S3 is a suitable solution for the use case. What do you suggest?

A. S3 is not suitable for immediate downloads because AWS provides eventual consistency for new objects.
B. S3 is suitable for immediate downloads because AWS provides read-after-write consistency for new objects.
C. EFS is suitable for immediate downloads because AWS provides eventual consistency for new objects.
D. S3 is suitable for immediate downloads because AWS provides strong consistency for new objects.

Explanation :
Answer: B
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.

Option A is not true. Eventual consistency applies to overwrite PUTS and DELETES. Option C is not true. EFS provides read-after-write consistency.
For option D, AWS provides strong consistency for DynamoDB, not for S3.

QUESTION 5

You are a solutions architect. Your organization is building an application on premises but would like to keep the storage on AWS. Objects/files must only be accessed via the application, as there are relational and access-related logics built into the application. However, as an exception, administrators should be able to access the objects/files directly from the AWS S3 console/API, bypassing the application. What solution would you provide?

A. Cached Volume Gateway


B. Stored Volume Gateway
C. File Gateway
D. Custom built S3 solution

Explanation :

Answer: C
The File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS and SMB file
protocols, and access those files via NFS and SMB from your datacenter or Amazon EC2, or access those files as objects with the S3 API.

https://d1.awsstatic.com/whitepapers/aws-storage-gateway-file-gateway-for-hybrid-architectures.pdf

For option A, with Cached Volume Gateway, you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of
frequently accessed data subsets locally. However, these are stored as snapshots in S3 and cannot be accessed through the console/API.

For option B, with stored volumes, you store the entire set of volume data on-premises and store periodic point-in-time backups (snapshots) in
AWS. In this model, your on-premises storage is primary, delivering low-latency access to your entire dataset. AWS storage is the backup that
you can restore in the event of a disaster in your data center.
For option D, although a custom-built solution using S3 might work, it is recommended to use AWS-provided services wherever possible.

For more information on AWS Storage Gateway, refer to the documentation here.
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

QUESTION 6

You have created an S3 bucket in the us-east-1 region with default configuration. You have uploaded a few documents and would like to share them with a group of people in your organization within a specified time duration. What is the recommended approach?

A. Create one IAM user per person, attach managed policy for each user with GetObject action on your S3 bucket.
Users can login to AWS console and download documents.
B. Create one IAM user per person, add them to an IAM group, attach a managed policy for the group with GetObject
action on your S3 bucket. Users can log in to the AWS console and download documents.
C. Generate pre-signed URL with an expiry date and share the URL with all persons via email.
D. By default, S3 bucket has public access enabled. Share the document URLs with all persons via email.

Explanation :
Answer: C
All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.
Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
For more information, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

For options A and B, although these solutions would work, they require a lot of setup just to enable downloading documents. Also, AWS recommends using temporary credentials for use cases where users occasionally need access to AWS resources.
In this case, the pre-signed URL grants temporary access to the S3 objects, and the access expires when the time limit is reached.
Option D is incorrect. All objects in S3 bucket are private by default.
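
As a hedged illustration, a pre-signed URL can be generated with the AWS SDK for Python; the bucket name, key, and one-hour expiry below are placeholder values.

import boto3

s3 = boto3.client("s3")

# Anyone holding this URL can download the object until it expires
# (here 3600 seconds = 1 hour). Bucket and key are hypothetical.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-shared-docs-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,
)
print(url)   # share this URL by email; it stops working after the expiry time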

QUESTION 7

Which of the following are valid statements about Amazon S3? (Choose 3 options)

A. S3 provides read-after-write consistency for any type of PUTS.


B. S3 provides strong consistency for PUTs or DELETES.
C. A successful response to a PUT request for new object only occurs when the object is completely saved.
D. S3 might return prior data when a process replaces an existing object and immediately attempts to read.
E. S3 provides eventual consistency for overwrite PUTS and DELETES

Explanation :
Answer: C, D, E
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.

Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions. For more information on the S3 consistency model, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#CoreConcepts
and refer to “Amazon S3 Data Consistency Model”
Option A is incorrect. Read-after-write consistency is only provided for new object PUTS, not for any type of PUTS.
Option B is incorrect. AWS does not provide strong consistency for S3 objects. The strong consistency model is for DynamoDB reads.
Option C translates to the read-after-write consistency model. Hence correct.
Option D translates to the eventual consistency model. Hence correct.
Option E is correct from the above statements.

QUESTION 8
You are designing a web application that stores static assets in an Amazon S3 bucket. You expect this bucket to
immediately receive over 400 requests with a mix of GET/PUT/DELETE per second. What should you do to ensure
optimal performance?

A. Amazon S3 will automatically manage performance at this scale.


B. Add a random prefix to the key names.
C. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names.
D. Use multi-part upload.

Explanation :

Answer: A

Request Rate and Performance Guidelines

Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500
GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. It is simple to increase your read or
write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read
performance to 55,000 read requests per second.

For more information:
https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

QUESTION 9

You have an application running on EC2. When the application tries to upload a 7 GB file to S3, the operation fails. What could be the reason for the failure and what would be the solution?

A. With a single PUT operation, you can upload objects up to 5 GB in size. Use multi-part upload for larger file
uploads.
B. EC2 is designed to work best with EBS volumes. Use EBS Provisioned IOPs and use an Amazon EBS-
optimized instance.
C. NAT gateway only supports data transfers going out up to 5 GB. Use EBS
Provisioned IOPs and use an Amazon EBS-optimized instance.
D. VPC Endpoints only support data transfers going out up to 5 GB. Use EBS
Provisioned IOPs and use an Amazon EBS-optimized instance.

Explanation :
Answer: A
AWS recommends using multi-part uploads for larger objects.
For more information on multi-part uploads, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

For option B, Amazon EBS is storage for the drives of your virtual machines. It stores data as blocks of the same size and organizes them through a hierarchy similar to a traditional file system. EBS is not a standalone storage service like Amazon S3, so you can use it only in combination with Amazon EC2.
Objects can be stored on EBS volumes, but this is not cost-effective and not as highly resilient and fault tolerant as S3.
Options C and D are incorrect. NAT Gateway and VPC endpoints do not have any data transfer limitations.
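
As a rough sketch (file path and bucket are placeholders), boto3's transfer manager can switch to multi-part upload automatically once a file crosses a configured size threshold, which is how a 7 GB file can be uploaded despite the 5 GB single-PUT limit.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multi-part upload for anything above 100 MB, sending 100 MB parts
# with up to 4 parallel threads.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file(
    Filename="/data/backup-7gb.bin",   # hypothetical local file
    Bucket="my-upload-bucket",         # hypothetical bucket
    Key="backups/backup-7gb.bin",
    Config=config,
)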

QUESTION 10

You have an application on EC2 which stores files in an S3 bucket. EC2 is launched using a role which has GetObject permissions on the S3 bucket defined in its policy. Users who authenticate to this application get a pre-signed URL for the files in the S3 bucket, generated using the EC2 role's temporary credentials. However, users report that they get an error when accessing the pre-signed URLs. What could be the reason? (Choose 2 options)

A. Pre-signed URLs expired.


B. Logged in user must be an IAM user to download file through pre-signed URL.
C. Bucket might have a policy with Deny. EC2 role not whitelisted in the policy statement with Deny.
D. Default policy on temporary credentials does not have GetObject privileges on S3 bucket.

Explanation :
Answer: A, C
All objects in S3 are private by default. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.
Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
For more information, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
For option A, while generating a pre-signed URL programmatically using the SDK/API, we specify a duration for how long the URL should be valid. When the URL is accessed after the specified duration, you get an error.

For option B, AWS recommends using temporary credentials whenever users need time-limited access to AWS resources, instead of creating IAM users for each request.
For more information on temporary credentials, refer to the documentation here.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
For option C, if a bucket policy contains a statement with Effect Deny, you must whitelist all the IAM resources which need access to the bucket. Otherwise, IAM resources cannot access the S3 bucket even if they have full access.
For detailed information on how to restrict bucket access, refer to the documentation here.
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
For option D, policy is an optional parameter when temporary credentials are generated using AssumeRole (which is how EC2 generates temporary credentials using an instance profile). There is no default policy.

QUESTION 11

Your organization has an S3 bucket which stores confidential information. Access is granted to certain programmatic IAM users, and requests from these IAM users are restricted to originate from within your organization's IP address range. However, your organization suspects there might be requests from other IP addresses to the S3 bucket to download certain objects. How would you troubleshoot to find out the requester's IP address? (Choose 2 options)

A. Enable VPC flow logs in the region where S3 bucket exists.


B. Enable Server logging
C. Enable CloudTrail logging using OPTIONS object
D. Enable CloudWatch metrics
Explanation :
Answer: B, C
Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example,
access log information can be useful in security and access audits.

For details on how to enable logging for S3, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html#server-access-logging-overview
For information about the format of the log file, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html
For option A, S3 is a managed service and not part of VPC. So enabling VPC flow logs does not report traffic sent to S3 bucket.
Option B is correct.
Option C is correct. Using the information collected by CloudTrail, you can determine what request was made to Amazon S3, the source IP address from
which the request was made, who made the request, when it was made, and so on. This information helps you to track changes made to your AWS resources
and to troubleshoot operational issues.

For detailed information about how S3 requests are tracked using CloudTrail, refer documentation here.

https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html#cloudtrail-logging-s3-info
For option D, although CloudWatch has metrics for S3 requests, this does not provide detailed information about each request. It
generates metrics for the number of requests sent for each request type.
For more information about S3 CloudWatch request metrics, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.html#s3-request-cloudwatch-metrics
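
A minimal sketch of turning on server access logging with boto3, assuming hypothetical bucket names; the target bucket must already allow the S3 log delivery service to write to it.

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the confidential bucket into a separate logging
# bucket under the "access-logs/" prefix. Each log record includes the
# requester's IP address.
s3.put_bucket_logging(
    Bucket="confidential-data-bucket",            # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "s3-server-logging",  # hypothetical log bucket
            "TargetPrefix": "access-logs/",
        }
    },
)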

QUESTION 12

An organization is planning to build web and mobile applications which can upload a few hundred thousand images every day into S3. The applications expect a sudden increase in volume; however, they are lean on budget and looking for a cost-effective solution. As an architect, you are asked whether S3 suits their requirement. What information will you gather to make a decision? (Choose 2 options)

A. Gather information on high availability of data and frequency of requests to choose storage class of objects in S3.
B. Gather information on total size to properly design prefix namespace.
C. Gather information on total size to provision storage on S3 bucket.
D. Gather information on number of requests during peak time.

Explanation :

Answer: A, D
For option A, S3 offers different storage classes. Based on the storage class, the availability percentage changes along with the cost.
If the images need to be highly available and are frequently accessed, choose STANDARD.
If the images need not be highly available but are frequently accessed, choose REDUCED_REDUNDANCY.
If the images need to be highly available but are not frequently accessed, choose STANDARD_IA.
If the images need not be highly available and are not frequently accessed, choose ONEZONE_IA.
Pricing differs between these storage classes.

For more information on S3 storage classes, refer documentation here.

https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html

For option B, prefix naming matters for optimal performance when we expect a higher number of objects, not larger-sized objects. Option D is
correct; the explanation follows.
Amazon S3 maintains an index of object key names in each AWS Region. Object keys are stored in UTF-8 binary ordering across multiple
partitions in the index. The key name determines which partition the key is stored in.
Although Amazon S3 automatically scales to high request rates, using a sequential prefix, such as timestamp or an alphabetical sequence,
increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, potentially overwhelming the I/O capacity
of the partition.
When your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By
introducing randomness to your key names the I/O load will be distributed across multiple index partitions. For example, you can compute an
MD5 hash of the character sequence that you plan to assign as the key and add 3 or 4 characters from the hash as a prefix to the key name. The
following example shows key names with a 4 character hexadecimal hash added as a prefix.
Without the 4 character hash prefix, S3 may distribute all of this load to 1 or 2 index partitions since the name of each object begins with
examplebucket/2013-26-05-15-00-0 and all objects in the index are stored in alpha-numeric order. The 4 character hash prefix ensures that the
load is spread across multiple index partitions.
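
A short Python sketch of this hashing scheme; the date-based key pattern mirrors the examplebucket/2013-26-05-15-00-0... naming mentioned above, and the bucket and keys are purely hypothetical.

import hashlib

def hashed_key(original_key: str) -> str:
    # Use the first 4 hex characters of the MD5 hash of the key as a prefix
    # so that keys spread across multiple index partitions.
    prefix = hashlib.md5(original_key.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{original_key}"

for i in range(3):
    key = f"2013-26-05-15-00-0{i}/photo{i}.jpg"
    print(hashed_key(key))   # e.g. "a1b2/2013-26-05-15-00-00/photo0.jpg"; the prefix differs per key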

When your workload is sending mostly GET requests, you can add randomness to key names. In addition, you can integrate Amazon CloudFront
with Amazon S3 to distribute content to your users with low latency and a high data transfer rate.
For option C, AWS S3 storage is virtually unlimited. No need to provision any storage upfront.

QUESTION 13

Which of the following are system metadata for objects in S3?(choose 3 options)

A. x-amz-server-side-encryption
B. x-amz-meta-object-id
C. x-amz-version-id
D. Content-Length
E. x-amz-meta-location

Explanation :
Answer: A, C, D
AWS S3 bucket objects contain two kinds of metadata, system metadata and user-defined metadata.
System metadata:
Metadata such as object creation date is system controlled where only Amazon S3 can modify the value.
Other system metadata, such as the storage class configured for the object and whether the object has server-side encryption enabled, are examples of system metadata whose values you control. If your bucket is configured as a website, sometimes you might want to redirect a page request to another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores the page redirect value as system metadata whose value you control.
When you create objects, you can configure the values of these system metadata items or update the values when you need to.
User-defined metadata:
When uploading an object, you can also assign metadata to the object. You provide this optional information as a name-value (key-value) pair
when you send a PUT or POST request to create the object. When you upload objects using the REST API, the optional user-defined metadata
names must begin with "x-amz-meta-" to distinguish them from other HTTP headers.
For more information on object metadata, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
So, options B and E start with x-amz-meta- and are user-defined metadata.
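
An illustrative boto3 sketch (bucket and key are hypothetical) showing how user-defined metadata picks up the x-amz-meta- prefix, while system metadata such as Content-Length is set by S3 or controlled through dedicated request fields.

import boto3

s3 = boto3.client("s3")

# "object-id" and "location" become x-amz-meta-object-id and
# x-amz-meta-location on the wire (user-defined metadata).
s3.put_object(
    Bucket="my-metadata-demo-bucket",
    Key="docs/report.txt",
    Body=b"hello",
    ServerSideEncryption="AES256",   # system metadata whose value you control
    Metadata={"object-id": "1234", "location": "eu-west-1"},
)

head = s3.head_object(Bucket="my-metadata-demo-bucket", Key="docs/report.txt")
print(head["ContentLength"])   # system metadata set by S3
print(head["Metadata"])        # {'object-id': '1234', 'location': 'eu-west-1'}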

QUESTION 14

Your organization needs to meet audit compliance and hence needs to log all requests sent to a set of 10 buckets which contain confidential information. These logs will also be used periodically to find out if any requests are being made from outside the organization's IP address range. Your AWS application team enabled S3 server access logging for all the buckets into a common logging bucket named s3-server-logging. But after a few hours they noticed no logs were being written into the logging bucket. What could be the reason?

A. Bucket user-defined deny policy is not allowing Log Delivery group to write into S3 logging bucket.
B. Bucket public access is not enabled.
C. Write access is disabled for Log Delivery group.
D. The bucket name for server access logging should be “s3-server-access-logging” in order to write the request logs.

Explanation :
Answer: A
Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications.
For example, access log information can be useful in security and access audits.
For details on logging for S3, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html#server-access-logging-overview

For option A, S3 buckets would often be restricted using bucket policy with Effect as Deny except whitelisted IAM resources who would require
access.

For detailed information on how to restrict bucket, refer documentation here.


https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
To provide access to the Log Delivery group, you need to explicitly add the following statement to your bucket policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permit access log delivery by AWS ID for Log Delivery service",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::858827067514:root"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/logs/*"
        }
    ]
}

Also make sure the arn “arn:aws:iam::858827067514:root” is whitelisted in the Deny statement of your bucket policy.
For option B, public access is not required to be enabled for writing logs into S3 bucket. Only access required is PutObject for Log Delivery group.
For option C, although by default, Log Delivery group permission is disabled, permission will be granted when the bucket is selected as target for
logging.

https://docs.aws.amazon.com/AmazonS3/latest/dev/enable-logging-console.html

Option D is a false statement.

QUESTION 15

You are building a web application which will allow authenticated users to upload videos to AWS S3 bucket. However,
while testing the application, you found that the upload requests to S3 are being blocked. What should you do to make
the upload work?
A. Enable public access to allow uploads from web applications.
B. Add configuration in S3 bucket CORS to allow PUT requests from web application URL.
C. Add Content-Length and Content-MD5 headers while sending upload requests to S3
D. Web application URL must be added to bucket policy to allow PutObject requests.

Explanation :
Answer: B
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different
domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to
your Amazon S3 resources.
For more information on CORS, refer documentation here. https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#example-scenarios-cors
For option A, enabling public access will not enable the web application to send requests to the S3 bucket. Furthermore, AWS does not recommend enabling public access on an S3 bucket unless you are hosting static assets which can be accessed by all.
For more information on securing S3 buckets, refer to the documentation here.
https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
For option C, Content-Length and Content-MD5 are system metadata for the object. They are set while creating/uploading an object. However, these parameters do not enable the web application to send requests to the S3 bucket.
For option D, an AWS S3 bucket policy does not grant permissions based on web application URLs.
However, you can set up a condition in the policy to restrict access only if the request is being sent from a certain URL using the “aws:Referer” condition key.

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-4
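
A hedged example of such a CORS rule applied with boto3; the bucket name and origin are placeholders for your own values.

import boto3

s3 = boto3.client("s3")

# Allow the web application served from https://www.example.com to send
# PUT/POST upload requests to the bucket directly from the browser.
s3.put_bucket_cors(
    Bucket="my-video-upload-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["PUT", "POST"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)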

QUESTION 16

You have uploaded a file to AWS S3 bucket with content ‘foo’. You have overwritten the file with content ‘bar’. When
you made a GetObject request immediately after overwrite, what output can you expect?

A. foo
B. bar
C. either foo or bar
D. An error stating “Object updating. Please try after some time.”

Explanation :
Answer: C
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
A process replaces an existing object and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might return the prior
data.

QUESTION 17

You created a bucket named “myfirstwhizbucket” in US West region. What are valid URLs for accessing the bucket?
(Choose 2 options)
A. https://myfirstwhizbucket.s3-us-west-1.amazonaws.com
B. https://s3.myfirstwhizbucket.us-west-1.amazonaws.com
C. https://s3-us-west-1.amazonaws.com/myfirstwhizbucket
D. https://s3.us-west-1.amazonaws.com/myfirstwhizbucket
E. https://s3.amazonaws.com/myfirstwhizbucket

Explanation :

Answer: A, C

S3 supports two URL styles for addressing a bucket: the virtual-hosted-style URL (https://bucketname.s3-region.amazonaws.com) and the path-style URL (https://s3-region.amazonaws.com/bucketname).

For option A, it matches the virtual-hosted-style URL, so it is correct.
For option B, it does not match either of the above URL patterns. It is incorrect.
For option C, it matches the path-style URL, so it is correct.
For option D, it does not match either of the above URL patterns.
For option E, it matches the path-style URL, but since the bucket is in the us-west-1 region, the endpoint must contain the region. So it is incorrect.

NOTE: Option C and D are different. (Dot and Hyphen).


Option C: https://s3-us-west-1.amazonaws.com/myfirstwhizbucket
Option D: https://s3.us-west-1.amazonaws.com/myfirstwhizbucket
QUESTION 18

What are the minimum and maximum file sizes that can be stored in S3 respectively?

A. 1 KB and 5 gigabytes
B. 1 KB and 5 terabytes
C. 1 Byte and 5 gigabytes
D. 0 Bytes and 5 terabytes

Explanation :
Answer: D
An S3 object can range from 0 bytes up to 5 TB in size. Note that the largest object that can be uploaded in a single PUT is 5 GB; larger objects require multi-part upload.

QUESTION 19

Your organization writes a lot of application logs on a regular basis to an AWS S3 bucket, and these are the only copies available; they are not stored anywhere else. These files range between 10 MB and 500 MB in size and are not accessed regularly. They are required once in a while to troubleshoot application issues. The application team needs the last 60 days of log files to be immediately available when required. Logs older than 60 days need not be accessible immediately, but a copy must be kept for reference. What approach will you recommend to keep the billing cost to a minimum?

A. Set object storage class to STANDARD-IA. Use Lifecycle Management to move data from STANDARD-IA to
Glacier after 60 days.
B. Set object storage class to STANDARD. Use Lifecycle Management to move data from STANDARD to STANDARD-
IA after 60 days.
C. Set storage class to STANDARD. Use Lifecycle Management to move data from STANDARD to STANDARD-IA
after 30 days and move data from STANDARD-IA to Glacier after 30 days.
D. Set object storage class to ONEZONE-IA. Use Lifecycle Management to move data from ONEZONE-IA to Glacier
after 60 days.

Explanation :
Answer: A
S3 offers several storage classes for objects, each with its own pricing model.
STANDARD-IA offers cheaper storage than the STANDARD class. However, AWS charges $0.01 per GB of data retrieved from the Infrequent Access storage class, on top of the standard download pricing.

Options B, C and D use STANDARD or ONEZONE-IA as the initial storage class.

For the given use case, STANDARD-IA is more suitable than STANDARD or ONEZONE-IA as the initial storage class because of the following factors:
Data is not accessed regularly, so STANDARD is not suitable.
Data is kept for at least 60 days and the minimum file size is 1 MB, which meets the STANDARD-IA requisites.
Data is the primary copy, not stored anywhere else, so ONEZONE-IA is not suitable.
Data needs to be available immediately when required, which is possible with all classes except Glacier.

After 60 days, the data can be transitioned to Glacier using Lifecycle management rules since it need not be accessible immediately.
Therefore, from above options, A is correct.
The question mentions that the organization writes a lot of application logs on a regular basis to an AWS S3 bucket and that these are the only copies available, not stored anywhere else. This means the organization does not have another copy of the data at any other location; the data is stored only in S3.

If we used the ONEZONE-IA storage class, S3 would not maintain a replica of the data in multiple Availability Zones.

AWS says: "S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) is a new storage class designed for customers who want a lower-cost option for infrequently accessed data, but do not require the multiple Availability Zone data resilience model of the S3 Standard and S3 Standard-Infrequent Access (S3 Standard-IA; S-IA) storage classes."

Based on the requirement, STANDARD-IA is the suitable option.
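
A sketch of the corresponding setup in boto3 (bucket name and prefix are placeholders): objects are uploaded as STANDARD_IA and a lifecycle rule transitions them to Glacier once they are 60 days old.

import boto3

s3 = boto3.client("s3")

# Upload logs with the STANDARD_IA storage class...
s3.put_object(
    Bucket="app-logs-bucket",
    Key="logs/app-2019-01-01.log",
    Body=b"...log contents...",
    StorageClass="STANDARD_IA",
)

# ...and transition anything under logs/ to Glacier after 60 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="app-logs-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-to-glacier-after-60-days",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 60, "StorageClass": "GLACIER"}],
            }
        ]
    },
)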

QUESTION 20

With S3 Versioning enabled on the bucket, how will billing be applied in the following scenario?
Total days bucket in use: 25 days.
File uploaded on 1st Day of the use – 1 GB.
File uploaded within the same bucket on 15th Day of the use – 5 GB.

A. Charges 6 GB for 25 days.


B. Charges 1 GB for 25 days and 5 GB for 11 days.
C. Charges 1 GB for 14 days and 5 GB for 11 days.
D. Charges 5 GB for 25 days.
Explanation :

Answer: B
When versioning is enabled on an S3 bucket and a new version is added to an existing object, remember that the older version still remains and AWS charges the same price for old versions as for new versions.

In the given use case, the 1 GB uploaded on day 1 remains in S3 for all 25 days. The 5 GB uploaded on day 15 is in S3 for only 11 days.

QUESTION 21

You have a version-enabled S3 bucket. You have accidentally deleted an object which contains 3 versions. You want to restore the deleted object. What can be done?

A. Select the deleted object and choose restore option in More menu.
B. Delete the delete-marker on the object.
C. Versioning in S3 only supports keeping multiple copies. It does not support restoring deleted objects.
D. In a version-enabled bucket, a DELETE request only deletes the latest version. You can still access older versions of the object using the
version Id in the GET request.

Explanation :
Answer: B
When you delete an object in a versioning-enabled bucket, all versions remain in the bucket and Amazon S3 creates a delete marker for the object. To undelete the object, you must delete this delete marker.
In the console, select the check box next to the delete marker of the object to recover, and then choose Delete from the More menu.

For more information on how to undelete objects in a version-enabled S3 bucket, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/undelete-objects.html
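
A hedged sketch of the same recovery through the API (bucket and key are placeholders): find the delete marker for the object and delete it by version ID, which makes the most recent real version current again.

import boto3

s3 = boto3.client("s3")
bucket, key = "my-versioned-bucket", "docs/report.txt"   # hypothetical names

# List all versions and delete markers for this key.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)

for marker in versions.get("DeleteMarkers", []):
    if marker["Key"] == key and marker["IsLatest"]:
        # Removing the delete marker "undeletes" the object.
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])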

QUESTION 22

You have an application which writes application logs to a version-enabled S3 bucket. Each object has multiple versions attached to it. After 60 days, the application deletes the objects in S3 through a DELETE API call on the object. However, in the next month's bill, you see charges for S3 usage on the bucket. What could have caused this?

A. DELETE API call on the object only deletes latest version.


B. DELETE API call on the object does not delete the actual object, but places delete marker on the object.
C. DELETE API call moves the object and its versions to S3 recycle bin from where object can be restored till 30 days.
D. Deleting all versions of an object in a version-enabled bucket cannot be done through the API. It can only be
done by the bucket owner through the console.

Explanation :
Answer: B
When versioning is enabled, a simple DELETE cannot permanently delete an object.
Instead, Amazon S3 inserts a delete marker in the bucket, and that marker becomes the current version of the object with a new ID. When you try to GET an object whose current version is a delete marker, Amazon S3 behaves as though the object has been deleted (even though it has not been erased) and returns a 404 error.
A simple DELETE therefore does not actually remove the specified object; Amazon S3 only inserts a delete marker.
To permanently delete versioned objects, you must use DELETE Object versionId. Deleting a specified object version permanently removes that version.

For information on how to delete versioned objects through the API, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/DeletingObjectVersions.html#delete-obj-version-enabled-bucket-rest

Option A is not true. A DELETE call on the object does not delete the latest version unless the DELETE call is made with the latest version id.
Option C is not true. AWS S3 does not have a recycle bin.
Option D is not true. A DELETE call on a versioned object can be made through the API by providing the version id of the object's version to be deleted.
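
A hedged sketch of permanently deleting every version of an object (bucket and key are placeholders), which is what actually stops the storage charges for that object.

import boto3

s3 = boto3.client("s3")
bucket, key = "app-logs-versioned-bucket", "logs/old-app.log"   # hypothetical names

versions = s3.list_object_versions(Bucket=bucket, Prefix=key)

# Delete every version and every delete marker by version ID; only then is
# the object fully removed and no longer billed.
for item in versions.get("Versions", []) + versions.get("DeleteMarkers", []):
    if item["Key"] == key:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=item["VersionId"])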

QUESTION 23

You are uploading multiple files ranging from 10 GB to 20 GB in size to an AWS S3 bucket by using multi-part upload from an application on EC2. Once the upload is complete, you would like to notify a group of people who do not have AWS IAM accounts. How can you achieve this? (Choose 2 options)

A. Use S3 event notification and configure Lambda function which sends email using AWS SES non-sandbox.
B. Use S3 event notification and configure SNS which sends email to subscribed email addresses.
C. Write a custom script on your application side to poll S3 bucket for new files and send email through SES non-
sandbox.
D. Write a custom script on your application side to poll S3 bucket for new files and send email through SES sandbox.

Explanation :
Answer: A, B
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration identifying the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the event notifications.
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
AWS Simple Email Service (SES) is a cost-effective email service built on the reliable and scalable infrastructure that Amazon.com developed to serve its own customer base. With Amazon SES, you can send transactional email, marketing messages, or any other type of high-quality content.
To help prevent fraud and abuse, and to help protect your reputation as a sender, AWS applies certain restrictions to new Amazon SES accounts.
All new accounts are placed in the Amazon SES sandbox. While your account is in the sandbox, you can use all of the features of Amazon SES. However, the following restrictions apply:
You can only send mail to verified email addresses and domains, or to the Amazon SES mailbox simulator.
You can only send mail from verified email addresses and domains. (Note: this restriction applies even when your account is not in the sandbox.)
You can send a maximum of 200 messages per 24-hour period.
You can send a maximum of 1 message per second.
You can request to move out of sandbox mode when you are ready for production mode.

For more information on how to move out of sandbox mode, refer to the documentation here.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html

Option A triggers a Lambda function which uses non-sandbox SES to send email to people who do not have an AWS IAM account and are not verified in AWS SES.
Option B triggers SNS, which sends email to the subscribed email addresses.
The following document describes how to add an SNS event notification to a bucket.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html

Options C and D, although they sound feasible, require compute resources to continuously poll S3 for new files.
We should use AWS-provided features wherever applicable. Custom solutions can be built when AWS-provided features do not meet the requirement.
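
A sketch of wiring the S3 event notification to an SNS topic with boto3; the bucket name and topic ARN are placeholders, and the topic's access policy must already allow S3 to publish to it.

import boto3

s3 = boto3.client("s3")

# Publish a notification to the SNS topic whenever a multi-part upload
# completes; subscribers of the topic (e.g. email addresses) are then notified.
s3.put_bucket_notification_configuration(
    Bucket="my-large-uploads-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-complete",
                "Events": ["s3:ObjectCreated:CompleteMultipartUpload"],
            }
        ]
    },
)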

QUESTION 24

Your organization built a video sharing website on EC2 within the US, for which an S3 bucket in us-east-1 is used to store the video files. The website has been receiving very good feedback and your organization decided to expand the website all over the world. However, customers in Europe and Asia started to complain that website access, upload and download of video files are slow. How can you resolve the issue? (Choose 2 options)

A. Use CloudFront for improving the performance on website by caching static files.
B. Use VPC Endpoints in Europe and Asia regions to improve S3 uploads and downloads.
C. Enable Transfer Acceleration feature on S3 bucket which uses AWS edge locations to improve upload and download speeds.

D. Change your application design to provision higher-memory configuration EC2 instances and process S3 requests
through EC2.

Explanation :
Answer: A, C
Option A is correct. AWS CloudFront can be used to improve the performance of yourwebsite where network latency is an issue.
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-cloudfront- walkthrough.html
Option B is not correct. VPC endpoints do not support cross-region requests. More over,VPC
endpoints are for accessing AWS resources within VPC.
Option C is correct. Amazon S3 Transfer Acceleration enables fast, easy, and securetransfers of files over long distances between your client and
an S3 bucket. TransferAcceleration takes

advantage of Amazon CloudFront’s globally distributed edge locations. As the dataarrives at an edge location, data is routed to Amazon S3 over a
n optimized network path.
For more information on transfer acceleration, refer documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html#transfer-acceleration-why-use
Option D is not a good design. It increases cost on EC2 usage and does not solve theproblem with slower upload and download speeds to S3.
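
A small sketch (bucket name hypothetical) of enabling Transfer Acceleration and then uploading through the accelerated endpoint.

import boto3
from botocore.config import Config

# One-time configuration: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="video-files-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload via the bucket's s3-accelerate endpoint by telling the
# SDK to use the accelerate endpoint, so data enters AWS at the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("movie.mp4", "video-files-bucket", "uploads/movie.mp4")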

QUESTION 25

Cross region replication requires versioning to be enabled on?

A. Only on Destination bucket.


B. Versioning is useful to avoid accidental deletes and not a requirement for replicating across regions.
C. Only on Source bucket.
D. Both Source and Destination buckets.

Explanation :
Answer: D
Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. We refer to these buckets as the source bucket and the destination bucket. These buckets can be owned by different AWS accounts.

For more information on AWS S3 cross-region replication, refer to the documentation here.
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
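
As an illustrative sketch (the role ARN and bucket names are placeholders), replication is configured on the source bucket and requires versioning to be enabled on both buckets first.

import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on BOTH buckets before replication will work.
for bucket in ("source-bucket", "destination-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every object from the source bucket to the destination bucket,
# using an IAM role that S3 assumes to perform the copy.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)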

You might also like