scs-c02

The document provides a collection of questions and answers related to the AWS Certified Security - Specialty (SCS-C02) certification exam. It includes scenarios involving Amazon Cognito, AWS Secrets Manager, AWS Config, and various security practices for managing AWS resources. The questions test knowledge on implementing security measures, compliance requirements, and effective use of AWS services for security management.

100% Valid and Newest Version SCS-C02 Questions & Answers shared by Certleader

https://www.certleader.com/SCS-C02-dumps.html (235 Q&As)

SCS-C02 Dumps

AWS Certified Security - Specialty

https://www.certleader.com/SCS-C02-dumps.html

The Leader of IT Certification visit - https://www.certleader.com



NEW QUESTION 1
A company in France uses Amazon Cognito with the Cognito Hosted UI as an identity broker for sign-in and sign-up processes. The company is marketing an application and expects that all the application's users will come from France.
When the company launches the application, the company's security team observes fraudulent sign-ups for the application. Most of the fraudulent registrations are from users outside of France.
The security team needs a solution to perform custom validation at sign-up. Based on the results of the validation, the solution must accept or deny the registration request.
Which combination of steps will meet these requirements? (Select TWO.)

A. Create a pre sign-up AWS Lambda trigger. Associate the function with the Amazon Cognito user pool.
B. Use a geographic match rule statement to configure an AWS WAF web ACL. Associate the web ACL with the Amazon Cognito user pool.
C. Configure an app client for the application's Amazon Cognito user pool. Use the app client ID to validate the requests in the hosted UI.
D. Update the application's Amazon Cognito user pool to configure a geographic restriction setting.
E. Use Amazon Cognito to configure a social identity provider (IdP) to validate the requests on the hosted UI.

Answer: AB

Explanation:
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-pre-sign-up.html
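As an illustration of the pre sign-up trigger described above, here is a minimal sketch in Python. The `custom:country` attribute, the handler name, and the validation rule are assumptions for the example, not details from the question; raising an exception from a pre sign-up trigger is the documented way to reject a registration.

```python
# Sketch of a pre sign-up Lambda trigger for Amazon Cognito.
# Assumption: the sign-up form collects a custom "custom:country" attribute.
def lambda_handler(event, context):
    """Accept or deny a Cognito sign-up based on custom validation."""
    attributes = event["request"]["userAttributes"]
    # Deny the registration unless the declared country is France.
    if attributes.get("custom:country") != "FR":
        raise Exception("Sign-up denied: registrations are restricted to France.")
    # Returning the event accepts the sign-up; leave auto-confirmation off.
    event["response"]["autoConfirmUser"] = False
    return event
```

Associating this function as the user pool's pre sign-up trigger completes the first half of the answer; the WAF geographic match rule blocks requests before they ever reach the pool.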

NEW QUESTION 2
A company has a relational database workload that runs on Amazon Aurora MySQL. According to new compliance standards, the company must rotate all database credentials every 30 days. The company needs a solution that maximizes security and minimizes development effort.
Which solution will meet these requirements?

A. Store the database credentials in AWS Secrets Manager. Configure automatic credential rotation for every 30 days.
B. Store the database credentials in AWS Systems Manager Parameter Store. Create an AWS Lambda function to rotate the credentials every 30 days.
C. Store the database credentials in an environment file or in a configuration file. Modify the credentials every 30 days.
D. Store the database credentials in an environment file or in a configuration file. Create an AWS Lambda function to rotate the credentials every 30 days.

Answer: A

Explanation:
To rotate database credentials every 30 days, the most secure and efficient solution is to store the database credentials in AWS Secrets Manager and configure
automatic credential rotation for every 30 days. Secrets Manager can handle the rotation of the credentials in both the secret and the database, and it can use
AWS KMS to encrypt the credentials. Option B is incorrect because it requires creating a custom Lambda function to rotate the credentials, which is more effort
than using Secrets Manager. Option C is incorrect because it stores the database credentials in an environment file or a configuration file, which is less secure
than using Secrets Manager. Option D is incorrect because it combines the drawbacks of option B and option C. Verified References:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html
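The Secrets Manager side of option A boils down to one API call. The sketch below builds its parameters; the secret name, rotation Lambda ARN, and the `build_rotation_request` helper are illustrative assumptions, not values from the question.

```python
# Hedged sketch: parameters for secretsmanager.rotate_secret(), which turns on
# automatic rotation on a 30-day schedule. Names and ARNs are placeholders.
def build_rotation_request(secret_id, rotation_lambda_arn, days=30):
    """Assemble keyword arguments for secretsmanager.rotate_secret()."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

request = build_rotation_request(
    "prod/aurora-mysql/app-user",
    "arn:aws:lambda:eu-west-1:111122223333:function:SecretsManagerRotation",
)
# A real call would be: boto3.client("secretsmanager").rotate_secret(**request)
```

For Aurora MySQL, Secrets Manager also provides ready-made rotation Lambda templates, which is why this option minimizes development effort.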

NEW QUESTION 3
A company is implementing new compliance requirements to meet customer needs. According to the new requirements, the company must not use any Amazon RDS DB instances or DB clusters that lack encryption of the underlying storage. The company needs a solution that will generate an email alert when an unencrypted DB instance or DB cluster is created. The solution also must terminate the unencrypted DB instance or DB cluster.
Which solution will meet these requirements in the MOST operationally efficient manner?

A. Create an AWS Config managed rule to detect unencrypted RDS storage. Configure an automatic remediation action to publish messages to an Amazon Simple Notification Service (Amazon SNS) topic that includes an AWS Lambda function and an email delivery target as subscribers. Configure the Lambda function to delete the unencrypted resource.
B. Create an AWS Config managed rule to detect unencrypted RDS storage. Configure a manual remediation action to invoke an AWS Lambda function. Configure the Lambda function to publish messages to an Amazon Simple Notification Service (Amazon SNS) topic and to delete the unencrypted resource.
C. Create an Amazon EventBridge rule that evaluates RDS event patterns and is initiated by the creation of DB instances or DB clusters. Configure the rule to publish messages to an Amazon Simple Notification Service (Amazon SNS) topic that includes an AWS Lambda function and an email delivery target as subscribers. Configure the Lambda function to delete the unencrypted resource.
D. Create an Amazon EventBridge rule that evaluates RDS event patterns and is initiated by the creation of DB instances or DB clusters. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to publish messages to an Amazon Simple Notification Service (Amazon SNS) topic and to delete the unencrypted resource.

Answer: A

Explanation:
https://docs.aws.amazon.com/config/latest/developerguide/rds-storage-encrypted.html
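The detection half of option A can be sketched as the parameters for a single AWS Config call, using the managed rule identifier from the link above. The rule name is a placeholder.

```python
# Sketch: deploying the RDS_STORAGE_ENCRYPTED managed rule via
# config.put_config_rule(). The ConfigRuleName is illustrative.
config_rule = {
    "ConfigRuleName": "rds-storage-encrypted-check",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "RDS_STORAGE_ENCRYPTED",
    },
}
# A real call would be:
# boto3.client("config").put_config_rule(ConfigRule=config_rule)
```

The automatic remediation action and the SNS/Lambda wiring described in the option are configured separately against this rule.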

NEW QUESTION 4
A security engineer wants to forward custom application-security logs from an Amazon EC2 instance to Amazon CloudWatch. The security engineer installs the CloudWatch agent on the EC2 instance and adds the path of the logs to the CloudWatch configuration file. However, CloudWatch does not receive the logs. The security engineer verifies that the awslogs service is running on the EC2 instance.
What should the security engineer do next to resolve the issue?

A. Add AWS CloudTrail to the trust policy of the EC2 instance. Send the custom logs to CloudTrail instead of CloudWatch.
B. Add Amazon S3 to the trust policy of the EC2 instance. Configure the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs.
C. Add Amazon Inspector to the trust policy of the EC2 instance. Use Amazon Inspector instead of the CloudWatch agent to collect the custom logs.
D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.

Answer: D

Explanation:
The correct answer is D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.
According to the AWS documentation1, the CloudWatch agent is a software agent that you can install on your EC2 instances to collect system-level metrics and
logs. To use the CloudWatch agent, you need to attach an IAM role or user to the EC2 instance that grants permissions for the agent to perform actions on your
behalf. The CloudWatchAgentServerPolicy is an AWS managed policy that provides the necessary permissions for the agent to write metrics and logs to
CloudWatch2. By attaching this policy to the EC2 instance role, the security engineer can resolve the issue of CloudWatch not receiving the custom application-
security logs.
The other options are incorrect for the following reasons:
A. Adding AWS CloudTrail to the trust policy of the EC2 instance is not relevant, because CloudTrail is a service that records API activity in your AWS account,
not custom application logs3. Sending the custom logs to CloudTrail instead of CloudWatch would not meet the requirement of forwarding them to CloudWatch.
B. Adding Amazon S3 to the trust policy of the EC2 instance is not necessary, because S3 is a storage service that does not require any trust relationship with
EC2 instances4. Configuring the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs would be an alternative solution,
but it would be more complex and costly than using the CloudWatch agent directly.
C. Adding Amazon Inspector to the trust policy of the EC2 instance is not helpful, because Inspector is a service that scans EC2 instances for software
vulnerabilities and unintended network exposure, not custom application logs5. Using Amazon Inspector instead of the CloudWatch agent would not meet the
requirement of forwarding them to CloudWatch.
References:
1: Collect metrics, logs, and traces with the CloudWatch agent - Amazon CloudWatch 2: CloudWatchAgentServerPolicy - AWS Managed Policy 3: What Is AWS
CloudTrail? - AWS CloudTrail 4: Amazon S3 FAQs - Amazon Web Services 5: Automated Software Vulnerability Management - Amazon
Inspector - AWS
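The fix in answer D assumes the agent is also told which file to ship. A minimal sketch of the agent configuration's `logs` section follows; the file path, log group, and stream names are placeholders, not values from the question.

```python
# Sketch of the CloudWatch agent "logs" section that forwards a custom
# application log file. Path and log group names are illustrative.
import json

agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/app/security.log",
                        "log_group_name": "app-security-logs",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}
# On the instance this JSON would typically live at
# /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
print(json.dumps(agent_config, indent=2))
```

Even with this configuration in place, the agent cannot deliver logs until the instance role carries the CloudWatchAgentServerPolicy permissions, which is the point of answer D.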

NEW QUESTION 5
A Security Engineer creates an Amazon S3 bucket policy that denies access to all users. A few days later, the Security Engineer adds an additional statement to
the bucket policy to allow read-only access to one other employee. Even after updating the policy, the employee still receives an access denied message.
What is the likely cause of this access denial?

A. The ACL in the bucket needs to be updated
B. The IAM policy does not allow the user to access the bucket
C. It takes a few minutes for a bucket policy to take effect
D. The allow permission is being overridden by the deny

Answer: D
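The rule behind answer D can be illustrated with a toy model. This is a deliberate simplification of real IAM and S3 policy evaluation (statement matching, principals, and conditions are omitted); it only shows that an explicit Deny always wins over an Allow.

```python
# Toy model of policy evaluation: an explicit Deny overrides any Allow.
# This is illustrative only, not the full IAM evaluation logic.
def evaluate(statements):
    """Return the effective decision for a set of matching policy statements."""
    effects = {s["Effect"] for s in statements}
    if "Deny" in effects:          # explicit deny wins unconditionally
        return "Deny"
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"          # nothing matched

bucket_policy = [
    {"Effect": "Deny", "Principal": "*", "Action": "s3:*"},
    {"Effect": "Allow", "Principal": "employee", "Action": "s3:GetObject"},
]
```

Because the original deny-all statement still matches the employee, the later read-only allow statement never takes effect; the deny statement must be narrowed or removed.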

NEW QUESTION 6
An incident response team is investigating an IAM access key leak that resulted in Amazon EC2 instances being launched. The company did not discover the incident until many months later. The Director of Information Security wants to implement new controls that will alert when similar incidents happen in the future.
Which controls should the company implement to achieve this? (Select TWO.)

A. Enable VPC Flow Logs in all VPCs. Create a scheduled AWS Lambda function that downloads and parses the logs, and sends an Amazon SNS notification for violations.
B. Use AWS CloudTrail to create a trail, and apply it to all Regions. Specify an Amazon S3 bucket to receive all the CloudTrail log files.
C. Add the following bucket policy to the company's CloudTrail bucket to prevent log tampering: {"Version": "2012-10-17", "Statement": {"Effect": "Deny", "Action": "s3:PutObject", "Principal": "*", "Resource": "arn:aws:s3:::cloudtrail/AWSLogs/111122223333/*"}} Create an Amazon S3 data event for PutObject attempts, which sends notifications to an Amazon SNS topic.
D. Create a Security Auditor role with permissions to access Amazon CloudWatch Logs in all Regions. Ship the logs to an Amazon S3 bucket, and create a lifecycle policy to transition the logs to Amazon S3 Glacier.
E. Verify that Amazon GuardDuty is enabled in all Regions, and create an Amazon CloudWatch Events rule for Amazon GuardDuty findings. Add an Amazon SNS topic as the rule's target.

Answer: AE
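The GuardDuty-to-SNS control in option E hinges on an event pattern like the sketch below. The rule name and topic ARN in the comments are placeholders; the `source` and `detail-type` values are the ones GuardDuty emits.

```python
# Sketch of an EventBridge (CloudWatch Events) pattern matching all Amazon
# GuardDuty findings, for routing to an SNS topic.
import json

event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}
# A real deployment might be:
# events.put_rule(Name="guardduty-findings",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="guardduty-findings",
#                    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:...:alerts"}])
```

GuardDuty detects anomalous activity such as instances launched with leaked credentials, so pairing it with an SNS-backed rule gives the near-real-time alerting the Director asked for.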

NEW QUESTION 7
A company is attempting to conduct forensic analysis on an Amazon EC2 instance, but the company is unable to connect to the instance by using AWS Systems
Manager Session Manager. The company has installed AWS Systems Manager Agent (SSM Agent) on the EC2 instance.
The EC2 instance is in a subnet in a VPC that does not have an internet gateway attached. The company has associated a security group with the EC2 instance.
The security group does not have inbound or outbound rules. The subnet's network ACL allows all inbound and outbound traffic.
Which combination of actions will allow the company to conduct forensic analysis on the EC2 instance without compromising forensic data? (Select THREE.)

A. Update the EC2 instance security group to add a rule that allows outbound traffic on port 443 for 0.0.0.0/0.
B. Update the EC2 instance security group to add a rule that allows inbound traffic on port 443 to the VPC's CIDR range.
C. Create an EC2 key pair. Associate the key pair with the EC2 instance.
D. Create a VPC interface endpoint for Systems Manager in the VPC where the EC2 instance is located.
E. Attach a security group to the VPC interface endpoint. Allow inbound traffic on port 443 to the VPC's CIDR range.
F. Create a VPC interface endpoint for the EC2 instance in the VPC where the EC2 instance is located.

Answer: ADE
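Session Manager access without an internet gateway relies on VPC interface endpoints. The sketch below builds one endpoint request per SSM-related service; the helper function, VPC ID, subnet IDs, and security group ID are illustrative assumptions.

```python
# Sketch: Session Manager in a private subnet needs interface endpoints for
# the ssm, ssmmessages, and ec2messages services. IDs are placeholders.
def endpoint_requests(region, vpc_id, subnet_ids, sg_id):
    """Parameters for ec2.create_vpc_endpoint(), one call per service."""
    services = ["ssm", "ssmmessages", "ec2messages"]
    return [
        {
            "VpcEndpointType": "Interface",
            "VpcId": vpc_id,
            "ServiceName": f"com.amazonaws.{region}.{svc}",
            "SubnetIds": subnet_ids,
            "SecurityGroupIds": [sg_id],
        }
        for svc in services
    ]
```

The endpoint security group must allow inbound HTTPS (port 443) from the VPC CIDR, and the instance security group must allow outbound 443, which is how the selected options fit together.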

NEW QUESTION 8
A company uses AWS Organizations to manage several AWS accounts. The company processes a large volume of sensitive data. The company uses a serverless approach to microservices. The company stores all the data in either Amazon S3 or Amazon DynamoDB. The company reads the data by using either AWS Lambda functions or container-based services that the company hosts on Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Fargate.
The company must implement a solution to encrypt all the data at rest and enforce least privilege data access controls. The company creates an AWS Key
Management Service (AWS KMS) customer managed key.
What should the company do next to meet these requirements?

A. Create a key policy that allows the kms:Decrypt action only for Amazon S3 and DynamoDB. Create an SCP that denies the creation of S3 buckets and DynamoDB tables that are not encrypted with the key.
B. Create an IAM policy that denies the kms:Decrypt action for the key. Create an AWS Lambda function that runs on a schedule to attach the policy to any new roles. Create an AWS Config rule to send alerts for resources that are not encrypted with the key.
C. Create a key policy that allows the kms:Decrypt action only for Amazon S3, DynamoDB, Lambda, and Amazon EKS. Create an SCP that denies the creation of S3 buckets and DynamoDB tables that are not encrypted with the key.
D. Create a key policy that allows the kms:Decrypt action only for Amazon S3, DynamoDB, Lambda, and Amazon EKS. Create an AWS Config rule to send alerts for resources that are not encrypted with the key.

Answer: C
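One way to express "kms:Decrypt only when used via specific services" in a key policy is the kms:ViaService condition key, which is a real KMS condition key. The account ID and Region below are placeholders, and the statement is a sketch rather than a complete key policy.

```python
# Sketch of a customer managed key policy statement scoping kms:Decrypt with
# the kms:ViaService condition key. Account ID and Region are placeholders.
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "kms:ViaService": [
                "s3.eu-west-1.amazonaws.com",
                "dynamodb.eu-west-1.amazonaws.com",
            ]
        }
    },
}
```

An SCP then enforces the other half of the requirement by denying the creation of unencrypted S3 buckets and DynamoDB tables across the organization.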

NEW QUESTION 9
A company that uses AWS Organizations wants to see AWS Security Hub findings for many AWS accounts and AWS Regions. Some of the accounts are in the
company's organization, and some accounts are in organizations that the company manages for customers. Although the company can see findings in the Security
Hub administrator account for accounts in the company's organization, there are no findings from accounts in other organizations.
Which combination of steps should the company take to see findings from accounts that are outside the organization that includes the Security Hub administrator
account? (Select TWO.)

A. Use a designated administration account to automatically set up member accounts.
B. Create the AWSServiceRoleForSecurityHub service-linked role for Security Hub.
C. Send an administration request from the member accounts.
D. Enable Security Hub for all member accounts.
E. Send invitations to accounts that are outside the company's organization from the Security Hub administrator account.

Answer: CE

Explanation:
To see Security Hub findings for accounts that are outside the organization that includes the Security Hub administrator account, the following steps are required:
Send invitations to accounts that are outside the company’s organization from the Security Hub administrator account. This will allow the administrator account
to view and manage findings from those accounts. The administrator account can send invitations by using the Security Hub console, API, or CLI. For more
information, see Sending invitations to member accounts.
Send an administration request from the member accounts. This will allow the member accounts to accept the invitation from the administrator account and
establish a relationship with it. The member accounts can send administration requests by using the Security Hub console, API, or CLI. For more information, see
Sending administration requests.
This solution will enable the company to see Security Hub findings for many AWS accounts and AWS Regions, including accounts that are outside its own
organization.
The other options are incorrect because they either do not establish a relationship between the administrator and member accounts (A, B) or do not enable Security Hub for accounts outside the organization (D).
Verified References:
https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-member-accounts.html
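The invitation flow in the explanation maps to a small set of Security Hub API calls, sketched here as parameter shapes. The account IDs, email address, and invitation ID are placeholders.

```python
# Sketch of the Security Hub invitation flow. All IDs are placeholders.

# 1. In the administrator account: register the outside account as a member,
#    then invite it.
invite_request = {
    "AccountDetails": [
        {"AccountId": "444455556666", "Email": "security@example.com"}
    ]
}
# securityhub.create_members(**invite_request)
# securityhub.invite_members(AccountIds=["444455556666"])

# 2. In the member account: accept the administrator's invitation.
accept_request = {"AdministratorId": "111122223333", "InvitationId": "inv-example"}
# securityhub.accept_administrator_invitation(**accept_request)
```

Once the member accepts, its findings flow to the administrator account even though the two accounts belong to different organizations.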

NEW QUESTION 10
A security team is working on a solution that will use Amazon EventBridge (Amazon CloudWatch Events) to monitor new Amazon S3 objects. The solution will
monitor for public access and for changes to any S3 bucket policy or setting that result in public access. The security team configures EventBridge to watch for
specific API calls that are logged from AWS CloudTrail. EventBridge has an action to send an email notification through Amazon Simple Notification Service
(Amazon SNS) to the security team immediately with details of the API call.
Specifically, the security team wants EventBridge to watch for the s3:PutObjectAcl, s3:DeleteBucketPolicy, and s3:PutBucketPolicy API invocation logs from
CloudTrail. While developing the solution in a single account, the security team discovers that the s3:PutObjectAcl API call does not invoke an EventBridge event.
However, the s3:DeleteBucketPolicy API call and the s3:PutBucketPolicy API call do invoke an event.
The security team has enabled CloudTrail for AWS management events with a basic configuration in the AWS Region in which EventBridge is being tested.
Verification of the EventBridge event pattern indicates that the pattern is set up correctly. The security team must implement a solution so that the s3:PutObjectAcl
API call will invoke an EventBridge event. The solution must not generate false notifications.
Which solution will meet these requirements?

A. Modify the EventBridge event pattern by selecting Amazon S3. Select All Events as the event type.
B. Modify the EventBridge event pattern by selecting Amazon S3. Select Bucket Level Operations as the event type.
C. Enable CloudTrail Insights to identify unusual API activity.
D. Enable CloudTrail to monitor data events for read and write operations to S3 buckets.

Answer: D

Explanation:
The correct answer is D. Enable CloudTrail to monitor data events for read and write operations to S3 buckets. According to the AWS documentation1, CloudTrail
data events are the resource operations performed on or within a resource. These are also known as data plane operations. Data events are often high-volume
activities. For example, Amazon S3 object-level API activity (such as GetObject, DeleteObject, and PutObject) is a data event.
By default, trails do not log data events. To record CloudTrail data events, you must explicitly add the
supported resources or resource types for which you want to collect activity. For more information, see Logging data events in the Amazon S3 User Guide2.
In this case, the security team wants EventBridge to watch for the s3:PutObjectAcl API invocation logs from CloudTrail. This API uses the acl subresource to set

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version SCS-C02 Questions & Answers shared by Certleader
https://www.certleader.com/SCS-C02-dumps.html (235 Q&As)

the access control list (ACL) permissions for a new or existing object in an S3 bucket3. This is a data event that affects the S3 object resource type. Therefore, the
security team must enable CloudTrail to monitor data events for read and write operations to S3 buckets in order to invoke an EventBridge event for this API call.
The other options are incorrect because:
A. Modifying the EventBridge event pattern by selecting Amazon S3 and All Events as the event type will not capture the s3:PutObjectAcl API call, because this
is a data event and not a management event. Management events provide information about management operations that are performed on resources in your
AWS account. These are also known as control plane operations4.
B. Modifying the EventBridge event pattern by selecting Amazon S3 and Bucket Level Operations as the event type will not capture the s3:PutObjectAcl API
call, because this is a data event that affects the S3 object resource type and not the S3 bucket resource type. Bucket level operations are management events
that affect the configuration or metadata of an S3 bucket5.
C. Enabling CloudTrail Insights to identify unusual API activity will not help the security team monitor new S3 objects or changes to any S3 bucket policy or
setting that result in public access. CloudTrail Insights helps AWS users identify and respond to unusual activity associated with API calls and API error rates by
continuously analyzing CloudTrail management events6. It does not analyze data events or generate EventBridge events.
References:
1: CloudTrail log event reference - AWS CloudTrail 2: Logging data events - AWS CloudTrail 3: PutObjectAcl - Amazon Simple Storage Service 4: [Logging
management events - AWS CloudTrail] 5: [Amazon S3 Event Types - Amazon Simple Storage Service] 6: Logging Insights events for trails - AWS CloudTrail
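Enabling the missing S3 data events on the existing trail can be sketched as the parameters for one CloudTrail call. The trail name and bucket ARN are placeholders; `AWS::S3::Object` is the documented data resource type for object-level events.

```python
# Sketch: parameters for cloudtrail.put_event_selectors() that add S3
# object-level (data event) logging to an existing trail. Names are
# placeholders.
event_selectors = {
    "TrailName": "management-trail",
    "EventSelectors": [
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::example-bucket/"],
                }
            ],
        }
    ],
}
# A real call would be:
# boto3.client("cloudtrail").put_event_selectors(**event_selectors)
```

With data events recorded, s3:PutObjectAcl calls appear in CloudTrail and can then match the existing EventBridge pattern.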

NEW QUESTION 10
A company is planning to use Amazon Elastic File System (Amazon EFS) with its on-premises servers. The company has an existing AWS Direct Connect connection established between its on-premises data center and an AWS Region. Security policy states that the company's on-premises firewall should only have specific IP addresses added to the allow list, not a CIDR range. The company also wants to restrict access so that only certain data center-based servers have access to Amazon EFS.
How should a security engineer implement this solution?

A. Add the file-system-id.efs.aws-region.amazonaws.com URL to the allow list for the data center firewall. Install the AWS CLI on the data center-based servers to mount the EFS file system. In the EFS security group, add the data center IP range to the allow list. Mount the EFS file system by using the EFS file system name.
B. Assign an Elastic IP address to Amazon EFS, and add the Elastic IP address to the allow list for the data center firewall. Install the AWS CLI on the data center-based servers to mount the EFS file system. In the EFS security group, add the IP addresses of the data center servers to the allow list. Mount the EFS file system by using the Elastic IP address.
C. Add the EFS file system mount target IP addresses to the allow list for the data center firewall. In the EFS security group, add the data center server IP addresses to the allow list. Use the Linux terminal to mount the EFS file system by using the IP address of one of the mount targets.
D. Assign a static range of IP addresses for the EFS file system by contacting AWS Support. In the EFS security group, add the data center server IP addresses to the allow list. Use the Linux terminal to mount the EFS file system by using one of the static IP addresses.

Answer: C

Explanation:
Amazon EFS does not support Elastic IP addresses, so option B is not possible. However, each EFS mount target has an IP address in the VPC that does not change for the lifetime of the mount target. Adding the specific mount target IP addresses to the data center firewall's allow list satisfies the policy of allowing specific IP addresses rather than a CIDR range. Adding the data center server IP addresses to the EFS security group's allow list restricts access to only the approved servers. The file system can then be mounted over the Direct Connect connection by using a mount target IP address.

NEW QUESTION 14
Your CTO thinks your AWS account was hacked. What is the only way to know for certain whether there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and are doing everything they can to cover their tracks?
Please select:

A. Use CloudTrail Log File Integrity Validation.
B. Use AWS Config SNS Subscriptions and process events in real time.
C. Use CloudTrail backed up to Amazon S3 and Glacier.
D. Use AWS Config Timeline forensics.

Answer: A

Explanation:
The AWS documentation mentions the following:
To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.
Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time.
Options B, C, and D are invalid because only CloudTrail log file integrity validation can prove whether the CloudTrail logs themselves were tampered with. For more information on CloudTrail log file validation, please visit: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
The correct answer is: Use CloudTrail Log File Integrity Validation.
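The hashing half of log file integrity validation can be illustrated with a toy example. The real feature also signs the per-hour digest files with RSA, which this sketch omits; the log contents below are made up.

```python
# Toy illustration of the SHA-256 half of CloudTrail log file integrity
# validation: a digest records each log file's hash, so tampering is
# detectable. (The real feature additionally RSA-signs the digest files.)
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest, as recorded in CloudTrail digest files."""
    return hashlib.sha256(data).hexdigest()

original = b'{"Records": []}'
recorded_hash = sha256_hex(original)          # stored in the digest file

tampered = b'{"Records": [{"deleted": true}]}'
assert sha256_hex(tampered) != recorded_hash  # modification is detectable
```

In practice the check is run for you by `aws cloudtrail validate-logs`, which walks the digest chain for a time range and reports any modified, deleted, or missing log files.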

NEW QUESTION 19
Within a VPC, a corporation runs an Amazon RDS Multi-AZ DB instance. The DB instance is deployed across two subnets, which reach the internet through a NAT gateway.
Additionally, the organization has application servers that are hosted on Amazon EC2 instances and use the RDS database. These EC2 instances have been deployed into two more private subnets inside the same VPC. These EC2 instances connect to the internet through a default route via the same NAT gateway. Each VPC subnet has its own route table.
The organization implemented a new security requirement after a recent security examination: never allow the database instance to connect to the internet. A security engineer must perform this update promptly without interfering with the network traffic of the application servers.
How will the security engineer be able to comply with these requirements?

A. Remove the existing NAT gateway. Create a new NAT gateway that only the application server subnets can use.
B. Configure the DB instance's inbound network ACL to deny traffic from the security group ID of the NAT gateway.
C. Modify the route tables of the DB instance subnets to remove the default route to the NAT gateway.
D. Configure the route table of the NAT gateway to deny connections to the DB instance subnets.

Answer: C

Explanation:
Each subnet has its own route table, so the security engineer can remove the default route to the NAT gateway from the DB instance subnets' route tables. This blocks the database's internet access without touching the routing for the application server subnets.
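The remediation in answer C amounts to deleting one route per DB subnet route table. The helper and route table IDs below are illustrative placeholders.

```python
# Sketch: remove the default route (0.0.0.0/0 via the NAT gateway) from each
# DB subnet route table. Route table IDs are placeholders.
def removal_calls(db_route_table_ids):
    """Parameters for ec2.delete_route(), one call per DB route table."""
    return [
        {"RouteTableId": rtb, "DestinationCidrBlock": "0.0.0.0/0"}
        for rtb in db_route_table_ids
    ]

# For each item: boto3.client("ec2").delete_route(**params)
calls = removal_calls(["rtb-0db1example", "rtb-0db2example"])
```

Because the application server subnets keep their own route tables and default routes, their traffic through the NAT gateway is unaffected.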

NEW QUESTION 23
A startup company is using a single AWS account that has resources in a single AWS Region. A security engineer configures an AWS CloudTrail trail in the same Region to deliver log files to an Amazon S3 bucket by using the AWS CLI.
Because of expansion, the company adds resources in multiple Regions. The security engineer notices that the logs from the new Regions are not reaching the S3 bucket.
What should the security engineer do to fix this issue with the LEAST amount of operational overhead?

A. Create a new CloudTrail trail. Select the new Regions where the company added resources.
B. Change the S3 bucket to receive notifications to track all actions from all Regions.
C. Create a new CloudTrail trail that applies to all Regions.
D. Change the existing CloudTrail trail so that it applies to all Regions.

Answer: D

Explanation:
The correct answer is D. Change the existing CloudTrail trail so that it applies to all Regions.
According to the AWS documentation1, you can configure CloudTrail to deliver log files from multiple Regions to a single S3 bucket for a single account. To
change an existing single-Region trail to log in all Regions, you must use the AWS CLI and add the --is-multi-region-trail option to the update-trail command2. This
will ensure that you log global service events and capture all management event activity in your account.
Option A is incorrect because creating a new CloudTrail trail for each Region will incur additional costs and increase operational overhead. Option B is incorrect
because changing the S3 bucket to receive notifications will not affect the delivery of log files from other Regions. Option C is incorrect because creating a new
CloudTrail trail that applies to all Regions will result in duplicate log files for the original Region and also incur additional costs.
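Answer D is a single CLI invocation, sketched here as an argument list; the trail name is a placeholder.

```python
# Sketch of the AWS CLI command that converts an existing single-Region trail
# into a multi-Region trail. The trail name is a placeholder.
update_cmd = [
    "aws", "cloudtrail", "update-trail",
    "--name", "management-trail",
    "--is-multi-region-trail",
]
# Run with, e.g., subprocess.run(update_cmd, check=True)
print(" ".join(update_cmd))
```

Reusing the existing trail and bucket keeps operational overhead minimal: no new trails, buckets, or notification plumbing are needed.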

NEW QUESTION 24
A security engineer has enabled AWS Security Hub in their AWS account and has enabled the Center for Internet Security (CIS) AWS Foundations compliance standard. No evaluation results on compliance are returned in the Security Hub console after several hours. The engineer wants to ensure that Security Hub can evaluate their resources for CIS AWS Foundations compliance.
Which steps should the security engineer take to meet these requirements?

A. Add full Amazon Inspector IAM permissions to the Security Hub service role to allow it to perform the CIS compliance evaluation.
B. Ensure that AWS Trusted Advisor is enabled in the account and that the Security Hub service role has permissions to retrieve the Trusted Advisor security-related recommended actions.
C. Ensure that AWS Config is enabled in the account, and that the required AWS Config rules have been created for the CIS compliance evaluation.
D. Ensure that the correct trail in AWS CloudTrail has been configured for monitoring by Security Hub and that the Security Hub service role has permissions to perform the GetObject operation on CloudTrail's Amazon S3 bucket.

Answer: C

Explanation:
To ensure that Security Hub can evaluate their resources for CIS AWS Foundations compliance, the security engineer should do the following:
Ensure that AWS Config is enabled in the account. This is a service that enables continuous assessment and audit of your AWS resources for compliance.
Ensure that the required AWS Config rules have been created for the CIS compliance evaluation. These are rules that represent your desired configuration
settings for specific AWS resources or for an entire AWS account.

NEW QUESTION 26
A Security Engineer is building a Java application that is running on Amazon EC2. The application communicates with an Amazon RDS instance and authenticates
with a user name and password.
Which combination of steps can the Engineer take to protect the credentials and minimize downtime when the credentials are rotated? (Choose two.)

A. Have a Database Administrator encrypt the credentials and store the ciphertext in Amazon S3. Grant permission to the instance role associated with the EC2 instance to read the object and decrypt the ciphertext.
B. Configure a scheduled job that updates the credential in AWS Systems Manager Parameter Store and notifies the Engineer that the application needs to be restarted.
C. Configure automatic rotation of credentials in AWS Secrets Manager.
D. Store the credential in an encrypted string parameter in AWS Systems Manager Parameter Store. Grant permission to the instance role associated with the EC2 instance to access the parameter and the AWS KMS key that is used to encrypt it.
E. Configure the Java application to catch a connection failure and make a call to AWS Secrets Manager to retrieve updated credentials when the password is rotated. Grant permission to the instance role associated with the EC2 instance to access Secrets Manager.

Answer: CE

Explanation:


AWS Secrets Manager is a service that helps you manage, retrieve, and rotate secrets such as database credentials, API keys, and other sensitive information. By
configuring automatic rotation of credentials in AWS Secrets Manager, you can ensure that your secrets are changed regularly and securely, without requiring
manual intervention or application downtime. You can also specify the rotation frequency and the rotation function that performs the logic of changing the
credentials on the database and updating the secret in Secrets Manager1.
* E. Configure the Java application to catch a connection failure and make a call to AWS Secrets Manager to retrieve updated credentials when the password is
rotated. Grant permission to the instance role associated with the EC2 instance to access Secrets Manager.
By configuring the Java application to catch a connection failure and make a call to AWS Secrets Manager to retrieve updated credentials, you can avoid hard-
coding the credentials in your application code or configuration files. This way, your application can dynamically obtain the latest credentials from Secrets Manager
whenever the password is rotated, without needing to restart or redeploy the application. To enable this, you need to grant permission to the instance role
associated with the EC2 instance to access Secrets Manager using IAM policies2. You can also use the AWS SDK for Java to integrate your application with
Secrets Manager3.
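The retry behavior described for option E can be sketched in a few lines. This is a minimal illustration, not the Secrets Manager SDK itself: fetch_secret and connect below are stand-in callables that a real application would back with a GetSecretValue call and a database driver.

```python
# Sketch of the "catch a connection failure, then re-fetch the secret"
# pattern from option E. Both callables are stand-ins for illustration,
# not real API signatures.

class ConnectionFailure(Exception):
    pass

def connect_with_refresh(connect, fetch_secret):
    """Try the current credentials; on failure, refresh from the secret
    store once and retry, so a rotated password never needs a restart."""
    creds = fetch_secret()
    try:
        return connect(creds)
    except ConnectionFailure:
        # The password was likely rotated since we fetched it:
        # re-fetch the secret and retry the connection.
        creds = fetch_secret()
        return connect(creds)

# Demo with stand-ins: the first password is stale, the refreshed one works.
passwords = iter(["old-password", "new-password"])

def fake_fetch_secret():
    return next(passwords)

def fake_connect(creds):
    if creds != "new-password":
        raise ConnectionFailure("authentication failed")
    return "connection-established"

print(connect_with_refresh(fake_connect, fake_fetch_secret))
```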

NEW QUESTION 27
A company needs a security engineer to implement a scalable solution for multi-account authentication and authorization. The solution should not introduce
additional user-managed architectural components. Native AWS features should be used as much as possible. The security engineer has set up AWS Organizations
with all features activated and AWS SSO enabled.
Which additional steps should the security engineer take to complete the task?

A. Use AD Connector to create users and groups for all employees that require access to AWS accounts. Assign AD Connector groups to AWS accounts and link to the IAM roles in accordance with the employees' job functions and access requirements. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
B. Use an AWS SSO default directory to create users and groups for all employees that require access to AWS accounts. Assign groups to AWS accounts and link to permission sets in accordance with the employees' job functions and access requirements. Instruct employees to access AWS accounts by using the AWS SSO user portal.
C. Use an AWS SSO default directory to create users and groups for all employees that require access to AWS accounts. Link AWS SSO groups to the IAM users present in all accounts to inherit existing permissions. Instruct employees to access AWS accounts by using the AWS SSO user portal.
D. Use AWS Directory Service for Microsoft Active Directory to create users and groups for all employees that require access to AWS accounts. Enable AWS Management Console access in the created directory and specify AWS SSO as a source of information for integrated accounts and permission sets. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.

Answer: B

NEW QUESTION 28
A security engineer receives a notice from the AWS Abuse team about suspicious activity from a Linux-based Amazon EC2 instance that uses Amazon Elastic
Block Store (Amazon EBS)-based storage. The instance is making connections to known malicious addresses.
The instance is in a development account within a VPC that is in the us-east-1 Region. The VPC contains an internet gateway and has a subnet in us-east-1a and
us-east-1b. Each subnet is associated with a route table that uses the internet gateway as a default route. Each subnet also uses the default network ACL. The
suspicious EC2 instance runs within the us-east-1b subnet. During an initial investigation, a security engineer discovers that the suspicious instance is the only
instance that runs in the subnet.
Which response will immediately mitigate the attack and help investigate the root cause?

A. Log in to the suspicious instance and use the netstat command to identify remote connections. Use the IP addresses from these remote connections to create deny rules in the security group of the instance. Install diagnostic tools on the instance for investigation. Update the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule during the investigation of the instance.
B. Update the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule. Replace the security group with a new security group that allows connections only from a diagnostics security group. Update the outbound network ACL for the us-east-1b subnet to remove the deny all rule. Launch a new EC2 instance that has diagnostic tools. Assign the new security group to the new EC2 instance. Use the new EC2 instance to investigate the suspicious instance.
C. Ensure that the Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the suspicious EC2 instance will not delete upon termination. Terminate the instance. Launch a new EC2 instance in us-east-1a that has diagnostic tools. Mount the EBS volumes from the terminated instance for investigation.
D. Create an AWS WAF web ACL that denies traffic to and from the suspicious instance. Attach the AWS WAF web ACL to the instance to mitigate the attack. Log in to the instance and install diagnostic tools to investigate the instance.

Answer: B

Explanation:
This option suggests updating the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule, replacing the security group
with a new one that only allows connections from a diagnostics security group, and launching a new EC2 instance with diagnostic tools to investigate the
suspicious instance. This option will immediately mitigate the attack and provide the necessary tools for investigation.
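The immediate containment step in the answer, expressed as the parameters an EC2 create-network-acl-entry call would take, is a single deny-all outbound rule evaluated before everything else. This is an illustrative sketch only; the ACL ID is a placeholder assumption.

```python
# Sketch of the temporary "deny all outbound" network ACL entry described
# in the answer. The ACL ID is a placeholder, not a real resource.
deny_all_egress = {
    "NetworkAclId": "acl-EXAMPLE",
    "RuleNumber": 1,           # lowest rule number is evaluated first
    "Protocol": "-1",          # all protocols
    "RuleAction": "deny",
    "Egress": True,            # outbound rule
    "CidrBlock": "0.0.0.0/0",  # all destinations
}

print(deny_all_egress["RuleAction"], deny_all_egress["RuleNumber"])
```

Because network ACL rules are evaluated in rule-number order, placing the deny at rule 1 cuts off the instance's outbound traffic while the security group swap and diagnostics setup proceed.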

NEW QUESTION 31
What are the MOST secure ways to protect the AWS account root user of a recently opened AWS account? (Select TWO.)

A. Use the AWS account root user access keys instead of the AWS Management Console.
B. Enable multi-factor authentication for the AWS IAM users with the Adminis-tratorAccess managed policy attached to them.
C. Enable multi-factor authentication for the AWS account root user.
D. Use AWS KMS to encrypt all AWS account root user and AWS IAM access keys and set automatic rotation to 30 days.
E. Do not create access keys for the AWS account root user; instead, create AWS IAM users.

Answer: CE

NEW QUESTION 36
A company has hundreds of AWS accounts in an organization in AWS Organizations. The company operates out of a single AWS Region. The company has a
dedicated security tooling AWS account in the organization. The security tooling account is configured as the organization's delegated administrator for Amazon
GuardDuty and AWS Security Hub. The company has configured the environment to automatically enable GuardDuty and Security Hub for existing AWS accounts
and new AWS accounts.
The company is performing control tests on specific GuardDuty findings to make sure that the company's security team can detect and respond to security events.
The security team launched an Amazon EC2 instance and attempted to run DNS requests against a test domain, example.com, to generate a DNS finding.


However, the GuardDuty finding was never created in the Security Hub delegated administrator account.
Why was the finding not created in the Security Hub delegated administrator account?

A. VPC flow logs were not turned on for the VPC where the EC2 instance was launched.
B. The VPC where the EC2 instance was launched had the DHCP option configured for a custom OpenDNS resolver.
C. The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.
D. Cross-Region aggregation in Security Hub was not configured.

Answer: C

Explanation:
The correct answer is C. The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.
According to the AWS documentation1, GuardDuty findings are automatically sent to Security Hub only if the GuardDuty integration with Security Hub is enabled in
the same account and Region. This means that the security tooling account, which is the delegated administrator for both GuardDuty and Security Hub, must
enable the GuardDuty integration with Security Hub in each member account and Region where GuardDuty is enabled. Otherwise, the findings from GuardDuty
will not be visible in Security Hub.
The other options are incorrect because:
VPC flow logs are not required for GuardDuty to generate DNS findings. GuardDuty uses VPC DNS logs, which are automatically enabled for all VPCs, to
detect malicious or unauthorized DNS activity.
The DHCP option configured for a custom OpenDNS resolver does not affect GuardDuty’s ability to generate DNS findings. GuardDuty uses its own threat
intelligence sources to identify malicious domains, regardless of the DNS resolver used by the EC2 instance.
Cross-Region aggregation in Security Hub is not relevant for this scenario, because the company operates out of a single AWS Region. Cross-Region
aggregation allows Security Hub to aggregate findings from multiple Regions into a single Region.
References: Managing GuardDuty accounts with AWS Organizations; Amazon GuardDuty findings; How Amazon GuardDuty works; Cross-Region aggregation in AWS Security Hub
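For context on the correct answer, the integration is switched on per account and per Region by calling Security Hub's EnableImportFindingsForProduct operation with the GuardDuty product ARN. The sketch below only assembles that ARN following the documented pattern; the Region is an assumed example.

```python
# Sketch: build the GuardDuty product ARN that Security Hub's
# EnableImportFindingsForProduct call expects. The ARN pattern follows
# the documented product-ARN format; us-east-1 is an assumed example.
def guardduty_product_arn(region):
    return f"arn:aws:securityhub:{region}::product/aws/guardduty"

print(guardduty_product_arn("us-east-1"))
```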

NEW QUESTION 39
A company is running an Amazon RDS for MySQL DB instance in a VPC. The VPC must not send or receive network traffic through the internet.
A security engineer wants to use AWS Secrets Manager to rotate the DB instance credentials automatically. Because of a security policy, the security engineer
cannot use the standard AWS Lambda function that Secrets Manager provides to rotate the credentials.
The security engineer deploys a custom Lambda function in the VPC. The custom Lambda function will be responsible for rotating the secret in Secrets Manager.
The security engineer edits the DB instance's security group to allow connections from this function. When the function is invoked, the function cannot
communicate with Secrets Manager to rotate the secret properly.
What should the security engineer do so that the function can rotate the secret?

A. Add an egress-only internet gateway to the VPC. Allow only the Lambda function's subnet to route traffic through the egress-only internet gateway.
B. Add a NAT gateway to the VPC. Configure only the Lambda function's subnet with a default route through the NAT gateway.
C. Configure a VPC peering connection to the default VPC for Secrets Manager. Configure the Lambda function's subnet to use the peering connection for routes.
D. Configure a Secrets Manager interface VPC endpoint. Include the Lambda function's private subnet during the configuration process.

Answer: D

Explanation:
You can establish a private connection between your VPC and Secrets Manager by creating an interface VPC endpoint. Interface endpoints are powered by AWS
PrivateLink, a technology that enables you to privately access Secrets Manager APIs without an internet gateway, NAT device, VPN connection, or AWS Direct
Connect connection. Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/vpc-endpoint-overview.html
The correct answer is D. Configure a Secrets Manager interface VPC endpoint. Include the Lambda function’s private subnet during the configuration process.
A Secrets Manager interface VPC endpoint is a private connection between the VPC and Secrets Manager that does not require an internet gateway, NAT device,
VPN connection, or AWS Direct Connect connection1. By configuring a Secrets Manager interface VPC endpoint, the security engineer can enable the custom
Lambda function to communicate with Secrets Manager without sending or receiving network traffic through the internet. The security engineer must include the
Lambda function’s private subnet during the configuration process to allow the function to use the endpoint2.
The other options are incorrect for the following reasons:
A. An egress-only internet gateway is a VPC component that allows outbound communication over IPv6 from instances in the VPC to the internet, and prevents
the internet from initiating an IPv6 connection with the instances3. However, this option does not meet the requirement that the VPC must not send or receive
network traffic through the internet. Moreover, an egress-only internet gateway is for use with IPv6 traffic only, and Secrets Manager does not support IPv6
addresses2.
B. A NAT gateway is a VPC component that enables instances in a private subnet to connect to the internet or other AWS services, but prevents the internet
from initiating connections with those instances4. However, this option does not meet the requirement that the VPC must not send or receive network traffic
through the internet. Additionally, a NAT gateway requires an elastic IP address, which is a public IPv4 address4.
C. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or
IPv6 addresses5. However, this option does not work because Secrets Manager does not have a default VPC that can be peered with. Furthermore, a VPC
peering connection does not provide a private connection to Secrets Manager APIs without an internet gateway or other devices2.
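A sketch of the endpoint configuration the correct answer describes, shown as the parameters such a create-vpc-endpoint call would take. The VPC, subnet, and security group IDs are placeholder assumptions; the service name follows the documented com.amazonaws.<region>.secretsmanager pattern.

```python
# Sketch of a Secrets Manager interface VPC endpoint configuration.
# All resource IDs are placeholders for illustration.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-EXAMPLE",
    "ServiceName": "com.amazonaws.us-east-1.secretsmanager",
    "SubnetIds": ["subnet-LAMBDA-PRIVATE"],  # the Lambda function's private subnet
    "SecurityGroupIds": ["sg-ENDPOINT"],
    "PrivateDnsEnabled": True,  # lets the SDK keep using the normal hostname
}

print(endpoint_params["ServiceName"])
```

With private DNS enabled, the custom rotation function resolves the regular Secrets Manager endpoint name to the endpoint's private IP, so no code change is needed.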

NEW QUESTION 40
A company's Security Team received an email notification from the Amazon EC2 Abuse team that one or more of the company's Amazon EC2 instances may have
been compromised.
Which combination of actions should the Security team take to respond to the current incident? (Select TWO.)

A. Open a support case with the AWS Security team and ask them to remove the malicious code from the affected instance.
B. Respond to the notification and list the actions that have been taken to address the incident.
C. Delete all IAM users and resources in the account.
D. Detach the internet gateway from the VPC, remove all rules that contain 0.0.0.0/0 from the security groups, and create a NACL rule to deny all traffic inbound from the internet.


E. Delete the identified compromised instances and delete any associated resources that the Security team did not create.

Answer: DE

Explanation:
These are the recommended actions to take when you receive an abuse notice from AWS. You should review the abuse notice to see what content or activity was
reported, and detach the internet gateway from the VPC to isolate the affected instances from the internet. You should also remove any rules that allow inbound
traffic from 0.0.0.0/0 from the security groups and create a network access control list (NACL) rule to deny all traffic inbound from the internet. You should then
delete the compromised instances and any associated resources that you did not create. The other options are either inappropriate or unnecessary for responding
to the abuse notice.

NEW QUESTION 43
During a manual review of system logs from an Amazon Linux EC2 instance, a Security Engineer noticed that there are sudo commands that were never properly
alerted on or reported by the Amazon CloudWatch Logs agent.
Why were there no alerts on the sudo commands?

A. There is a security group blocking outbound port 80 traffic that is preventing the agent from sending the logs
B. The IAM instance profile on the EC2 instance was not properly configured to allow the CloudWatch Logs agent to push the logs to CloudWatch
C. CloudWatch Logs status is set to ON versus SECURE, which prevents it from pulling in OS security event logs
D. The VPC requires that all traffic go through a proxy, and the CloudWatch Logs agent does not support a proxy configuration.

Answer: B

Explanation:
the reason why there were no alerts on the sudo commands. Sudo commands are commands that allow a user to execute commands as another user, usually the
superuser or root. CloudWatch Logs agent is a software agent that can send log data from an EC2 instance to CloudWatch Logs, a service that monitors and
stores log data. The CloudWatch Logs agent needs an IAM instance profile, which is a container for an IAM role that allows applications running on an EC2
instance to make API requests to AWS services. If the IAM instance profile on the EC2 instance was not properly configured to allow the CloudWatch Logs agent
to push the logs to CloudWatch, then there would be no alerts on the sudo commands. The other options are either irrelevant or invalid for explaining why there
were no alerts on the sudo commands.
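A minimal sketch of the instance-profile policy the agent needs, assuming a placeholder log-group ARN; the exact action list can be tightened further for a given setup.

```python
import json

# Sketch of the minimal IAM policy the instance profile needs so the
# CloudWatch Logs agent can push logs. The log-group ARN is a placeholder.
agent_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams",
        ],
        "Resource": "arn:aws:logs:*:*:log-group:EXAMPLE-GROUP:*",
    }],
}

print(json.dumps(agent_policy["Statement"][0]["Action"], indent=2))
```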

NEW QUESTION 46
A security engineer is designing an IAM policy for a script that will use the AWS CLI. The script currently assumes an IAM role that is attached to three AWS
managed IAM policies: AmazonEC2FullAccess, AmazonDynamoDBFullAccess, and AmazonVPCFullAccess.
The security engineer needs to construct a least privilege IAM policy that will replace the AWS managed IAM policies that are attached to this role.
Which solution will meet these requirements in the MOST operationally efficient way?

A. In AWS CloudTrail, create a trail for management events. Run the script with the existing AWS managed IAM policies. Use IAM Access Analyzer to generate a new IAM policy that is based on access activity in the trail. Replace the existing AWS managed IAM policies with the generated IAM policy for the role.
B. Remove the existing AWS managed IAM policies from the role. Attach the IAM Access Analyzer Role Policy Generator to the role. Run the script. Return to IAM Access Analyzer and generate a least privilege IAM policy. Attach the new IAM policy to the role.
C. Create an account analyzer in IAM Access Analyzer. Create an archive rule that has a filter that checks whether the PrincipalArn value matches the ARN of the role. Run the script. Remove the existing AWS managed IAM policies from the role.
D. In AWS CloudTrail, create a trail for management events. Remove the existing AWS managed IAM policies from the role. Run the script. Find the authorization failure in the trail event that is associated with the script. Create a new IAM policy that includes the action and resource that caused the authorization failure. Repeat the process until the script succeeds. Attach the new IAM policy to the role.

Answer: A

NEW QUESTION 48
A company has multiple Amazon S3 buckets encrypted with customer-managed CMKs. Due to regulatory requirements, the keys must be rotated every year. The
company's Security Engineer has enabled automatic key rotation for the CMKs; however, the company wants to verify that the rotation has occurred.
What should the Security Engineer do to accomplish this?

A. Filter AWS CloudTrail logs for KeyRotation events.
B. Monitor Amazon CloudWatch Events for any AWS KMS CMK rotation events.
C. Using the AWS CLI, run the aws kms get-key-rotation-status operation with the --key-id parameter to check the CMK rotation date.
D. Use Amazon Athena to query AWS CloudTrail logs saved in an S3 bucket to filter GenerateNewKey events.

Answer: C

Explanation:
The aws kms get-key-rotation-status command returns a boolean value that indicates whether automatic rotation of the customer master key (CMK) is enabled.
This command also shows the date and time when the CMK was last rotated. The other options are not valid ways to check the CMK rotation status.
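The command's response is a small JSON document; the sketch below parses an illustrative sample rather than output captured from a real account.

```python
import json

# Sketch: parse the kind of JSON document that get-key-rotation-status
# returns. The sample response is fabricated for illustration.
sample_response = '{"KeyRotationEnabled": true}'

status = json.loads(sample_response)
print("rotation enabled:", status["KeyRotationEnabled"])
```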

NEW QUESTION 53
A company is using AWS Organizations to develop a multi-account secure networking strategy. The company plans to use separate centrally managed accounts for
shared services, auditing, and security inspection. The company plans to provide dozens of additional accounts to application owners for production and
development environments.
Company security policy requires that all internet traffic be routed through a centrally managed security inspection layer in the security inspection account. A
security engineer must recommend a solution that minimizes administrative overhead and complexity.
Which solution meets these requirements?

A. Use AWS Control Tower. Modify the default Account Factory networking template to automatically associate new accounts with a centrally managed VPC through a VPC peering connection and to create a default route to the VPC peer in the default route table. Create an SCP that denies the CreateInternetGateway action. Attach the SCP to all accounts except the security inspection account.
B. Create a centrally managed VPC in the security inspection account. Establish VPC peering connections between the security inspection account and other accounts. Instruct account owners to create default routes in their account route tables that point to the VPC peer. Create an SCP that denies the AttachInternetGateway action. Attach the SCP to all accounts except the security inspection account.
C. Use AWS Control Tower. Modify the default Account Factory networking template to automatically associate new accounts with a centrally managed transit gateway and to create a default route to the transit gateway in the default route table. Create an SCP that denies the AttachInternetGateway action. Attach the SCP to all accounts except the security inspection account.
D. Enable AWS Resource Access Manager (AWS RAM) for AWS Organizations. Create a shared transit gateway, and make it available by using an AWS RAM resource share. Create an SCP that denies the CreateInternetGateway action. Attach the SCP to all accounts except the security inspection account. Create routes in the route tables of all accounts that point to the shared transit gateway.

Answer: C

NEW QUESTION 58
A company’s security team needs to receive a notification whenever an AWS access key has not been rotated in 90 or more days. A security engineer must
develop a solution that provides these notifications automatically.
Which solution will meet these requirements with the LEAST amount of effort?

A. Deploy an AWS Config managed rule to run on a periodic basis of 24 hours. Select the access-keys-rotated managed rule, and set the maxAccessKeyAge parameter to 90 days. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with an event pattern that matches the compliance type of NON_COMPLIANT from AWS Config for the managed rule. Configure EventBridge (CloudWatch Events) to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B. Create a script to export a .csv file from the AWS Trusted Advisor check for IAM access key rotation. Load the script into an AWS Lambda function that will upload the .csv file to an Amazon S3 bucket. Create an Amazon Athena table query that runs when the .csv file is uploaded to the S3 bucket. Publish the results for any keys older than 90 days by using an invocation of an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Create a script to download the IAM credentials report on a periodic basis. Load the script into an AWS Lambda function that will run on a schedule through Amazon EventBridge (Amazon CloudWatch Events). Configure the Lambda script to load the report into memory and to filter the report for records in which the key was last rotated at least 90 days ago. If any records are detected, send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
D. Create an AWS Lambda function that queries the IAM API to list all the users. Iterate through the users by using the ListAccessKeys operation. Verify that the value in the CreateDate field is not at least 90 days old. Send an Amazon Simple Notification Service (Amazon SNS) notification to the security team if the value is at least 90 days old. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule the Lambda function to run each day.

Answer: A
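The core of the access-keys-rotated evaluation in the correct answer is a simple age comparison against the maxAccessKeyAge parameter. The sketch below demonstrates that comparison with fabricated dates.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the 90-day comparison at the heart of the access-keys-rotated
# check: given a key's CreateDate, decide whether it is noncompliant.
# The sample dates are fabricated for illustration.
MAX_KEY_AGE = timedelta(days=90)

def key_needs_rotation(create_date, now):
    return now - create_date >= MAX_KEY_AGE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fresh = datetime(2024, 5, 1, tzinfo=timezone.utc)  # 31 days old
stale = datetime(2024, 1, 1, tzinfo=timezone.utc)  # 152 days old

print(key_needs_rotation(fresh, now), key_needs_rotation(stale, now))
# → False True
```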

NEW QUESTION 63
A security engineer needs to create an AWS Key Management Service (AWS KMS) key that will be used to encrypt all data stored in a company's Amazon S3
buckets in the us-west-1 Region. The key will use server-side encryption. Usage of the key must be limited to requests coming from Amazon S3 within the
company's account.
Which statement in the KMS key policy will meet these requirements?
A)


B)

C)

A. Option A
B. Option B
C. Option C

Answer: A
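The policy statements themselves appear as images in the original. The pattern that limits key usage to Amazon S3 requests from the company's account combines the kms:ViaService and kms:CallerAccount condition keys; below is a sketch with a placeholder account ID.

```python
import json

# Sketch of the key-policy statement pattern that restricts use of the
# key to requests arriving through Amazon S3 in us-west-1 from the
# company's own account. The account ID is a placeholder.
statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "kms:ViaService": "s3.us-west-1.amazonaws.com",
            "kms:CallerAccount": "111122223333",
        }
    },
}

print(json.dumps(statement["Condition"], indent=2))
```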

NEW QUESTION 67
A security engineer needs to run an AWS CloudFormation script. The CloudFormation script builds AWS infrastructure to support a stack that includes web servers
and a MySQL database. The stack has been deployed in pre-production environments and is ready for production.
The production script must comply with the principle of least privilege. Additionally, separation of duties must exist between the security engineer's IAM account
and CloudFormation.
Which solution will meet these requirements?


A. Use IAM Access Analyzer policy generation to generate a policy that allows the CloudFormation script to run and manage the stack. Attach the policy to a new IAM role. Modify the security engineer's IAM permissions to be able to pass the new role to CloudFormation.
B. Create an IAM policy that allows ec2:* and rds:* permissions. Attach the policy to a new IAM role. Modify the security engineer's IAM permissions to be able to assume the new role.
C. Use IAM Access Analyzer policy generation to generate a policy that allows the CloudFormation script to run and manage the stack. Modify the security engineer's IAM permissions to be able to run the CloudFormation script.
D. Create an IAM policy that allows ec2:* and rds:* permissions. Attach the policy to a new IAM role. Use the IAM policy simulator to confirm that the policy allows the AWS API calls that are necessary to build the stack. Modify the security engineer's IAM permissions to be able to pass the new role to CloudFormation.

Answer: A

Explanation:
The correct answer is A. Use IAM Access Analyzer policy generation to generate a policy that allows the CloudFormation script to run and manage the stack.
Attach the policy to a new IAM role. Modify the security engineer’s IAM permissions to be able to pass the new role to CloudFormation.
According to the AWS documentation, IAM Access Analyzer is a service that helps you identify the resources in your organization and accounts, such as Amazon
S3 buckets or IAM roles, that are shared with an external entity. You can also use IAM Access Analyzer to generate fine-grained policies that grant least privilege
access based on access activity and access attempts.
To use IAM Access Analyzer policy generation, you need to enable IAM Access Analyzer in your account or organization. You can then use the IAM console or the
AWS CLI to generate a policy for a resource based on its access activity or access attempts. You can review and edit the generated policy before applying it to the
resource.
To use IAM Access Analyzer policy generation with CloudFormation, you can follow these steps:
Run the CloudFormation script in a pre-production environment and monitor its access activity or access attempts using IAM Access Analyzer.
Use IAM Access Analyzer policy generation to generate a policy that allows the CloudFormation script to run and manage the stack. The policy will include only
the permissions that are necessary for the script to function.
Attach the policy to a new IAM role that has a trust relationship with CloudFormation. This will allow CloudFormation to assume the role and execute the script.
Modify the security engineer’s IAM permissions to be able to pass the new role to CloudFormation.
This will allow the security engineer to launch the stack using the role.
Run the CloudFormation script in the production environment using the new role.
This solution will meet the requirements of least privilege and separation of duties, as it will limit the permissions of both CloudFormation and the security engineer
to only what is needed for running and managing the stack.
Option B is incorrect because creating an IAM policy that allows ec2:* and rds:* permissions is not following the principle of least privilege, as it will grant more
permissions than necessary for running and managing the stack. Moreover, modifying the security engineer’s IAM permissions to be able to assume the new role
is not ensuring separation of duties, as it will allow the security engineer to bypass CloudFormation and directly access the resources.
Option C is incorrect because modifying the security engineer’s IAM permissions to be able to run the CloudFormation script is not ensuring separation of duties,
as it will allow the security engineer to execute the script without using CloudFormation.
Option D is incorrect because creating an IAM policy that allows ec2:* and rds:* permissions is not following the principle of least privilege, as it will grant more
permissions than necessary for running and managing the stack. Using the IAM policy simulator to confirm that the policy allows the AWS API calls that are
necessary to build the stack is not sufficient, as it will not generate a fine-grained policy based on access activity or access attempts.

NEW QUESTION 68
A company's policy requires that all API keys be encrypted and stored separately from source code in a centralized security account. This security account is
managed by the company's security team. However, an audit revealed that an API key is stored with the source code of an AWS Lambda function in an AWS
CodeCommit repository in the DevOps account.
How should the security team securely store the API key?

A. Create a CodeCommit repository in the security account using AWS Key Management Service (AWS KMS) for encryption. Require the development team to
migrate the Lambda source code to this repository.
B. Store the API key in an Amazon S3 bucket in the security account using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to encrypt
the key. Create a presigned URL for the S3 key, and specify the URL in a Lambda environment variable in the AWS CloudFormation template. Update the Lambda
function code to retrieve the key using the URL and call the API.
C. Create a secret in AWS Secrets Manager in the security account to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant
access to the IAM role used by the Lambda function so that the function can retrieve the key from Secrets Manager and call the API.
D. Create an encrypted environment variable for the Lambda function to store the API key using AWS Key Management Service (AWS KMS) for encryption. Grant
access to the IAM role used by the Lambda function so that the function can decrypt the key at runtime.

Answer: C

Explanation:
To securely store the API key, the security team should do the following:
Create a secret in AWS Secrets Manager in the security account to store the API key using AWS Key Management Service (AWS KMS) for encryption. This
allows the security team to encrypt and manage the API key centrally, and to configure automatic rotation schedules for it.
Grant access to the IAM role used by the Lambda function so that the function can retrieve the key from Secrets Manager and call the API. This allows the
security team to avoid storing the API key with the source code, and to use IAM policies to control access to the secret.
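A minimal sketch of the identity-based policy for the Lambda function's execution role follows. The secret ARN is a hypothetical example; cross-account access would additionally require a resource policy on the secret and kms:Decrypt permission on the KMS key:

```python
import json

# Hypothetical secret ARN in the centralized security account.
secret_arn = "arn:aws:secretsmanager:eu-west-1:111122223333:secret:prod/api-key-AbCdEf"

# Identity-based policy for the Lambda execution role so the function can
# retrieve the API key from Secrets Manager at runtime.
lambda_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": secret_arn,
        }
    ],
}

print(json.dumps(lambda_role_policy, indent=2))
```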

NEW QUESTION 72
A Security Engineer has been tasked with enabling AWS Security Hub to monitor Amazon EC2 instances for CVEs in a single AWS account. The Engineer has already
enabled AWS Security Hub and Amazon Inspector in the AWS Management Console and has installed the Amazon Inspector agent on all EC2 instances that need
to be monitored.
Which additional steps should the Security Engineer take to meet this requirement?

A. Configure the Amazon Inspector agent to use the CVE rule package.
B. Configure the Amazon Inspector agent to use the CVE rule package. Configure Security Hub to ingest from Amazon Inspector by writing a custom resource policy.
C. Configure the Security Hub agent to use the CVE rule package. Configure Amazon Inspector to ingest from Security Hub by writing a custom resource policy.
D. Configure the Amazon Inspector agent to use the CVE rule package. Install an additional integration library. Allow the Amazon Inspector agent to communicate
with Security Hub.

Answer: D

Explanation:
You need to configure the Amazon Inspector agent to use the CVE rule package, which is a set of rules that check for vulnerabilities and exposures on your EC2
instances. You also need to install an additional integration library that enables communication between the Amazon Inspector agent and Security
Hub. Security Hub is a service that provides you with a comprehensive view of your security state in AWS and helps you check your environment against security
industry standards and best practices. The other options are either incorrect or incomplete for meeting the requirement.

NEW QUESTION 75
A security administrator is setting up a new AWS account. The security administrator wants to secure the data that a company stores in an Amazon S3 bucket.
The security administrator also wants to reduce the chance of unintended data exposure and the potential for misconfiguration of objects that are in the S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?

A. Configure the S3 Block Public Access feature for the AWS account.
B. Configure the S3 Block Public Access feature for all objects that are in the bucket.
C. Deactivate ACLs for objects that are in the bucket.
D. Use AWS PrivateLink for Amazon S3 to access the bucket.

Answer: A

Explanation:
Enabling the S3 Block Public Access feature at the account level overrides conflicting bucket policies and object ACLs across all current and future buckets in the
account, which reduces the chance of unintended exposure and object misconfiguration with a single setting and the least operational overhead.

NEW QUESTION 79
A security engineer is configuring attribute-based access control (ABAC) to allow only specific principals to put objects into an Amazon S3 bucket. The principals
already have access to Amazon S3.
The security engineer needs to configure a bucket policy that allows principals to put objects into the S3 bucket only if the value of the Team tag on the object
matches the value of the Team tag that is associated with the principal. During testing, the security engineer notices that a principal can still put objects into the S3
bucket when the tag values do not match.
Which combination of factors are causing the PutObject operation to succeed when the tag values are different? (Select TWO.)

A. The principal's identity-based policy grants access to put objects into the S3 bucket with no conditions.
B. The principal's identity-based policy overrides the condition because the identity-based policy contains an explicit allow.
C. The S3 bucket's resource policy does not deny access to put objects.
D. The S3 bucket's resource policy cannot allow actions to the principal.
E. The bucket policy does not apply to principals in the same zone of trust.

Answer: AC

Explanation:
The correct answer is A and C.
When using ABAC, the principal’s identity-based policy and the S3 bucket’s resource policy are both evaluated to determine the effective permissions. If either
policy grants access to the principal, the action is allowed. If either policy denies access to the principal, the action is denied. Therefore, to enforce the tag-based
condition, both policies must deny access when the tag values do not match.
In this case, the principal’s identity-based policy grants access to put objects into the S3 bucket with no conditions (A), which means that the policy does not check
for the tag values. This policy overrides the condition in the bucket policy because an explicit allow always takes precedence over an implicit deny. The
bucket policy can only allow or deny actions to the principal based on the condition, but it cannot override the identity-based policy.
The S3 bucket’s resource policy does not deny access to put objects ©, which means that it also does not check for the tag values. The bucket policy can only
allow or deny actions to the principal based on the condition, but it cannot override the identity-based policy.
Therefore, the combination of factors A and C are causing the PutObject operation to succeed when the tag values are different.
References:
Using ABAC with Amazon S3
Bucket policy examples
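To make the tag-matching condition effective despite the unconditional identity-based allow, the bucket policy needs an explicit deny. A sketch, with a hypothetical bucket name:

```python
import json

bucket = "example-team-bucket"  # hypothetical bucket name

# An explicit Deny overrides the principal's unconditional identity-based
# Allow, so PutObject fails whenever the Team tag supplied with the object
# does not match the Team tag on the principal.
abac_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:RequestObjectTag/Team": "${aws:PrincipalTag/Team}"
                }
            },
        }
    ],
}

print(json.dumps(abac_bucket_policy, indent=2))
```

Note that the s3:RequestObjectTag condition key evaluates tags supplied in the PutObject request, so callers must include the Team tag when uploading.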

NEW QUESTION 81
A company's security engineer wants to receive an email alert whenever Amazon GuardDuty, AWS Identity and Access Management Access Analyzer, or Amazon
Macie generates a high-severity security finding. The company uses AWS Control Tower to govern all of its accounts. The company also uses AWS Security Hub
with all of the AWS service integrations turned on.
Which solution will meet these requirements with the LEAST operational overhead?

A. Set up separate AWS Lambda functions for GuardDuty, IAM Access Analyzer, and Macie to call each service's public API to retrieve high-severity findings.
Use Amazon Simple Notification Service (Amazon SNS) to send the email alert. Create an Amazon EventBridge rule to invoke the functions on a schedule.
B. Create an Amazon EventBridge rule with a pattern that matches Security Hub findings events with high severity. Configure the rule to send the findings to a
target Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the desired email addresses to the SNS topic.
C. Create an Amazon EventBridge rule with a pattern that matches AWS Control Tower events with high severity. Configure the rule to send the findings to a
target Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the desired email addresses to the SNS topic.
D. Host an application on Amazon EC2 to call the GuardDuty, IAM Access Analyzer, and Macie APIs. Within the application, use the Amazon Simple Notification
Service (Amazon SNS) API to retrieve high-severity findings and to send the findings to an SNS topic. Subscribe the desired email addresses to the SNS topic.

Answer: B

Explanation:
The AWS documentation states that you can create an Amazon EventBridge rule with a pattern that matches Security Hub findings events with high severity. You
can then configure the rule to send the findings to a target Amazon Simple Notification Service (Amazon SNS) topic. You can subscribe the desired email
addresses to the SNS topic. This method is the least operational overhead way to meet the requirements.
References: AWS Security Hub User Guide
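A sketch of the EventBridge event pattern such a rule could use; the severity labels shown are the normalized Security Hub labels, and the set to match on can be adjusted:

```python
import json

# EventBridge event pattern matching Security Hub findings whose
# normalized severity label is HIGH or CRITICAL.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}

print(json.dumps(event_pattern, indent=2))
```

The rule's target would be the SNS topic to which the desired email addresses are subscribed.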

NEW QUESTION 86
A company is developing a highly resilient application to be hosted on multiple Amazon EC2 instances. The application will store highly sensitive user data in
Amazon RDS tables.
The application must:
• Include migration to a different AWS Region in the application disaster recovery plan.
• Provide a full audit trail of encryption key administration events.
• Allow only company administrators to administer keys.
• Protect data at rest using application layer encryption.
A Security Engineer is evaluating options for encryption key management.
Why should the Security Engineer choose AWS CloudHSM over AWS KMS for encryption key management in this situation?

A. The key administration event logging generated by CloudHSM is significantly more extensive than AWS KMS.
B. CloudHSM ensures that only company support staff can administer encryption keys, whereas AWS KMS allows AWS staff to administer keys.
C. The ciphertext produced by CloudHSM provides more robust protection against brute force decryption attacks than the ciphertext produced by AWS KMS.
D. CloudHSM provides the ability to copy keys to a different Region, whereas AWS KMS does not.

Answer: B

Explanation:
CloudHSM allows full control of your keys, including symmetric (AES), asymmetric (RSA), SHA-256, SHA-512, hash-based, and digital signature (RSA)
operations. AWS Key Management Service, on the other hand, is multi-tenant key storage that is owned and managed by AWS1.
References: 1: What are the differences between AWS Cloud HSM and KMS?

NEW QUESTION 87
A company's security engineer has been tasked with restricting a contractor's IAM account access to the company's Amazon EC2 console without providing
access to any other AWS services. The contractor's IAM account must not be able to gain access to any other AWS service, even if the IAM account is assigned
additional permissions based on IAM group membership.
What should the security engineer do to meet these requirements?

A. Create an inline IAM user policy that allows for Amazon EC2 access for the contractor's IAM user.
B. Create an IAM permissions boundary policy that allows Amazon EC2 access. Associate the contractor's IAM account with the IAM permissions boundary policy.
C. Create an IAM group with an attached policy that allows for Amazon EC2 access. Associate the contractor's IAM account with the IAM group.
D. Create an IAM role that allows for EC2 and explicitly denies all other services. Instruct the contractor to always assume this role.

Answer: B

Explanation:
To restrict the contractor’s IAM account access to the EC2 console without providing access to any other AWS services, the security engineer should do the
following:
Create an IAM permissions boundary policy that allows EC2 access. This is a policy that defines the maximum permissions that an IAM entity can have.
Associate the contractor’s IAM account with the IAM permissions boundary policy. This means that even if the contractor’s IAM account is assigned additional
permissions based on IAM group membership, those permissions are limited by the permissions boundary policy.

NEW QUESTION 89
A website currently runs on Amazon EC2, with mostly static content on the site. Recently the site was subjected to a DDoS attack, and a security engineer was
asked to redesign the edge security to help mitigate this risk in the future.
What are some ways the engineer could achieve this? (Select THREE.)

A. Use AWS X-Ray to inspect the traffic going to the EC2 instances.
B. Move the static content to Amazon S3, and front this with an Amazon CloudFront distribution.
C. Change the security group configuration to block the source of the attack traffic.
D. Use AWS WAF security rules to inspect the inbound traffic.
E. Use Amazon Inspector assessment templates to inspect the inbound traffic.
F. Use Amazon Route 53 to distribute traffic.

Answer: BDF

Explanation:
To redesign the edge security to help mitigate the DDoS attack risk in the future, the engineer could do the following:
Move the static content to Amazon S3, and front this with an Amazon CloudFront distribution. This allows the engineer to use a global content delivery network
that can cache static content at edge locations and reduce the load on the origin servers.
Use AWS WAF security rules to inspect the inbound traffic. This allows the engineer to use web application firewall rules that can filter malicious requests
based on IP addresses, headers, body, or URI strings, and block them before they reach the web servers.
Use Amazon Route 53 to distribute traffic. This allows the engineer to use a scalable and highly available DNS service that can route traffic based on different
policies, such as latency, geolocation, or health checks.
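As one hedged illustration of the AWS WAF approach, a WAFv2 rate-based rule can throttle abusive sources during a DDoS attempt. The rule name and the request limit below are example values, not prescribed settings:

```python
import json

# Sketch of an AWS WAF (WAFv2) rate-based rule that blocks any source IP
# exceeding 2,000 requests in the rolling evaluation window.
rate_based_rule = {
    "Name": "ddos-rate-limit",  # hypothetical rule name
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ddos-rate-limit",
    },
}

print(json.dumps(rate_based_rule, indent=2))
```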

NEW QUESTION 91
A company is designing a new application stack. The design includes web servers and backend servers that are hosted on Amazon EC2 instances. The design
also includes an Amazon Aurora MySQL DB cluster.
The EC2 instances are in an Auto Scaling group that uses launch templates. The EC2 instances for the web layer and the backend layer are backed by Amazon
Elastic Block Store (Amazon EBS) volumes. No layers are encrypted at rest. A security engineer needs to implement encryption at rest.
Which combination of steps will meet these requirements? (Select TWO.)

A. Modify EBS default encryption settings in the target AWS Region to enable encryption. Use an Auto Scaling group instance refresh.
B. Modify the launch templates for the web layer and the backend layer to add AWS Certificate Manager (ACM) encryption for the attached EBS volumes. Use an
Auto Scaling group instance refresh.
C. Create a new AWS Key Management Service (AWS KMS) encrypted DB cluster from a snapshot of the existing DB cluster.
D. Apply AWS Key Management Service (AWS KMS) encryption to the existing DB cluster.
E. Apply AWS Certificate Manager (ACM) encryption to the existing DB cluster.

Answer: AC

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html
https://aws.amazon.com/premiumsupport/knowledge-center/ebs-automatic-encryption/
To implement encryption at rest for both the EC2 instances and the Aurora DB cluster, the following steps are required:
For the EC2 instances, modify the EBS default encryption settings in the target AWS Region to enable encryption. This will ensure that any new EBS volumes
created in that Region are encrypted by default using an AWS managed key. Alternatively, you can specify a customer managed key when creating new EBS
volumes. For more information, see Amazon EBS encryption.
Use an Auto Scaling group instance refresh to replace the existing EC2 instances with new ones that have encrypted EBS volumes attached. An instance
refresh is a feature that helps you update all instances in an Auto Scaling group in a rolling fashion without the need to manage the instance replacement process
manually. For more information, see Replacing Auto Scaling instances based on an instance refresh.
For the Aurora DB cluster, create a new AWS Key Management Service (AWS KMS) encrypted DB cluster from a snapshot of the existing DB cluster. You can
use either an AWS managed key or a customer managed key to encrypt the new DB cluster. You cannot enable or disable encryption for an existing DB cluster, so
you have to create a new one from a snapshot. For more information, see Encrypting Amazon Aurora resources.
The other options are incorrect because they either do not enable encryption at rest for the resources (B, D), or they use the wrong service for encryption (E).
Verified References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html

NEW QUESTION 94
A company is migrating one of its legacy systems from an on-premises data center to AWS. The application server will run on AWS, but the database must remain
in the on-premises data center for compliance reasons. The database is sensitive to network latency. Additionally, the data that travels between the on-premises
data center and AWS must have IPsec encryption.
Which combination of AWS solutions will meet these requirements? (Choose two.)

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS VPN CloudHub
D. VPC peering
E. NAT gateway

Answer: AB

Explanation:
The correct combination of AWS solutions that will meet these requirements is A. AWS Site-to-Site VPN and B. AWS Direct Connect.
* A. AWS Site-to-Site VPN is a service that allows you to securely connect your on-premises data center to your AWS VPC over the internet using IPsec
encryption. This solution meets the requirement of encrypting the data in transit between the on-premises data center and AWS.
* B. AWS Direct Connect is a service that allows you to establish a dedicated network connection between your on-premises data center and your AWS VPC. This
solution meets the requirement of reducing network latency between the on-premises data center and AWS.
* C. AWS VPN CloudHub is a service that allows you to connect multiple VPN connections from different locations to the same virtual private gateway in your AWS
VPC. This solution is not relevant for this scenario, as there is only one on-premises data center involved.
* D. VPC peering is a service that allows you to connect two or more VPCs in the same or different regions using private IP addresses. This solution does not meet
the requirement of connecting an on-premises data center to AWS, as it only works for VPCs.
* E. NAT gateway is a service that allows you to enable internet access for instances in a private subnet in your AWS VPC. This solution does not meet the
requirement of connecting an on-premises data center to AWS, as it only works for outbound traffic from your VPC.

NEW QUESTION 97
A company uses AWS Signer with all of the company’s AWS Lambda functions. A developer recently stopped working for the company. The company wants to
ensure that all the code that the developer wrote can no longer be deployed to the Lambda functions.
Which solution will meet this requirement?

A. Revoke all versions of the signing profile assigned to the developer.
B. Examine the developer's IAM roles. Remove all permissions that grant access to Signer.
C. Re-encrypt all source code with a new AWS Key Management Service (AWS KMS) key.
D. Use Amazon CodeGuru to profile all the code that the Lambda functions use.

Answer: A

Explanation:
The correct answer is A. Revoke all versions of the signing profile assigned to the developer.
According to the AWS documentation1, AWS Signer is a fully managed code-signing service that helps you ensure the trust and integrity of your code. You can
use Signer to sign code artifacts, such as Lambda deployment packages, with code-signing certificates that you control and manage.
A signing profile is a collection of settings that Signer uses to sign your code artifacts. A signing profile includes information such as the following:
The type of signature that you want to create (for example, a code-signing signature).
The signing algorithm that you want Signer to use to sign your code.
The code-signing certificate and its private key that you want Signer to use to sign your code.
You can create multiple versions of a signing profile, each with a different code-signing certificate. You can also revoke a version of a signing profile if you no
longer want to use it for signing code artifacts.
In this case, the company wants to ensure that all the code that the developer wrote can no longer be deployed to the Lambda functions. One way to achieve this
is to revoke all versions of the signing profile that was assigned to the developer. This will prevent Signer from using that signing profile to sign any new code
artifacts, and also invalidate any existing signatures that were created with that signing profile. This way, the company can ensure that only trusted and authorized
code can be deployed to the Lambda functions.
The other options are incorrect because:
B. Examining the developer’s IAM roles and removing all permissions that grant access to Signer may not be sufficient to prevent the deployment of the
developer’s code. The developer may have already signed some code artifacts with a valid signing profile before leaving the company, and those signatures may
still be accepted by Lambda unless the signing profile is revoked.
C. Re-encrypting all source code with a new AWS Key Management Service (AWS KMS) key may not be effective or practical. AWS KMS is a service that lets
you create and manage encryption keys for your data. However, Lambda does not require encryption keys for deploying code artifacts, only valid signatures from
Signer. Therefore, re-encrypting the source code may not prevent the deployment of the developer’s code if it has already been signed with a valid signing profile.
Moreover, re-encrypting all source code may be time-consuming and disruptive for other developers who are working on the same code base.
D. Using Amazon CodeGuru to profile all the code that the Lambda functions use may not help with preventing the deployment of the developer’s code.
Amazon CodeGuru is a service that provides intelligent recommendations to improve your code quality and identify an application’s most expensive lines of code.
However, CodeGuru does not perform any security checks or validations on your code artifacts, nor does it interact with Signer or Lambda in any way. Therefore,
using CodeGuru may not prevent unauthorized or untrusted code from being deployed to the Lambda functions.
References:
1: What is AWS Signer? - AWS Signer
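For context, the enforcement side of this setup is the Lambda code signing configuration attached to the functions. A sketch, with a hypothetical signing profile version ARN:

```python
import json

# Sketch of a Lambda code signing configuration: only artifacts signed with
# the listed signing profile versions may be deployed, and unsigned or
# revoked artifacts are rejected.
code_signing_config = {
    "AllowedPublishers": {
        "SigningProfileVersionArns": [
            # Hypothetical signing profile version ARN.
            "arn:aws:signer:us-east-1:111122223333:/signing-profiles/TeamProfile/AbCdEf12"
        ]
    },
    "CodeSigningPolicies": {"UntrustedArtifactOnDeployment": "Enforce"},
}

print(json.dumps(code_signing_config, indent=2))
```

With UntrustedArtifactOnDeployment set to Enforce, deployment packages signed with a revoked signing profile version are blocked rather than merely warned about.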

NEW QUESTION 99
A security engineer is trying to use Amazon EC2 Image Builder to create an image of an EC2 instance. The security engineer has configured the pipeline to send
logs to an Amazon S3 bucket. When the security engineer runs the pipeline, the build fails with the following error: “AccessDenied: Access Denied status code:
403”.
The security engineer must resolve the error by implementing a solution that complies with best practices for least privilege access.
Which combination of steps will meet these requirements? (Choose two.)

A. Ensure that the following policies are attached to the IAM role that the security engineer is using: EC2InstanceProfileForImageBuilder,
EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore.
B. Ensure that the following policies are attached to the instance profile for the EC2 instance: EC2InstanceProfileForImageBuilder,
EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore.
C. Ensure that the AWSImageBuilderFullAccess policy is attached to the instance profile for the EC2 instance.
D. Ensure that the security engineer’s IAM role has the s3:PutObject permission for the S3 bucket.
E. Ensure that the instance profile for the EC2 instance has the s3:PutObject permission for the S3 bucket.

Answer: BE

Explanation:
The most likely cause of the error is that the instance profile for the EC2 instance does not have the s3:PutObject permission for the S3 bucket. This permission is
needed to upload logs to the bucket. Therefore, the security engineer should ensure that the instance profile has this permission.
One possible solution is to attach the AWSImageBuilderFullAccess policy to the instance profile for the EC2 instance. This policy grants full access to Image
Builder resources and related AWS services, including the s3:PutObject permission for any bucket with “imagebuilder” in its name. However, this policy may grant
more permissions than necessary, which violates the principle of least privilege.
Another possible solution is to create a custom policy that only grants the s3:PutObject permission for the specific S3 bucket that is used for logging. This policy
can be attached to the instance profile along with the other policies that are required for Image Builder functionality: EC2InstanceProfileForImageBuilder,
EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore. This solution follows the principle of least privilege more
closely than the previous one.
Ensure that the following policies are attached to the instance profile for the EC2 instance: EC2InstanceProfileForImageBuilder,
EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore.
Ensure that the instance profile for the EC2 instance has the s3:PutObject permission for the S3 bucket.
This can be done by either attaching the AWSImageBuilderFullAccess policy or creating a custom policy with this permission.
References:
1: Using managed policies for EC2 Image Builder - EC2 Image Builder
2: PutObject - Amazon Simple Storage Service
3: AWSImageBuilderFullAccess - AWS Managed Policy
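The least-privilege custom policy described above can be sketched as follows (the bucket name is hypothetical):

```python
import json

log_bucket = "example-imagebuilder-logs"  # hypothetical logging bucket

# Minimal custom policy for the EC2 instance profile so the Image Builder
# pipeline can write its logs, instead of the broad AWSImageBuilderFullAccess.
log_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{log_bucket}/*",
        }
    ],
}

print(json.dumps(log_policy, indent=2))
```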

NEW QUESTION 100


A company manages multiple AWS accounts using AWS Organizations. The company's security team notices that some member accounts are not sending AWS
CloudTrail logs to a centralized Amazon S3 logging bucket. The security team wants to ensure there is at least one trail configured for all existing accounts and for
any account that is created in the future.
Which set of actions should the security team implement to accomplish this?

A. Create a new trail and configure it to send CloudTrail logs to Amazon S3. Use Amazon EventBridge (Amazon CloudWatch Events) to send notification if a trail is
deleted or stopped.
B. Deploy an AWS Lambda function in every account to check if there is an existing trail and create a new trail, if needed.
C. Edit the existing trail in the Organizations master account and apply it to the organization.
D. Create an SCP to deny the cloudtrail:Delete* and cloudtrail:Stop* actions. Apply the SCP to all accounts.

Answer: C

Explanation:
Users in member accounts will not have sufficient permissions to delete the organization trail, turn logging on or off, change what types of events are logged, or
otherwise alter the organization trail in any way. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html

NEW QUESTION 101


You work at a company that makes use of AWS resources. One of the key security policies is to ensure that all data is encrypted both at rest and in transit. Which of
the following is one of the right ways to implement this?
Please select:

A. Use S3 SSE and use SSL for data in transit
B. SSL termination on the ELB
C. Enabling Proxy Protocol
D. Enabling sticky sessions on your load balancer

Answer: A

Explanation:
By disabling SSL termination, you are leaving an unsecure connection from the ELB to the back end instances. Hence this means that part of the data transit is not
being encrypted.
Option B is incorrect because this would not guarantee complete encryption of data in transit Option C and D are incorrect because these would not guarantee
encryption
For more information on SSL Listeners for your load balancer, please visit the below URL: http://docs.IAM.amazon.com/elasticloadbalancine/latest/classic/elb-https-
load-balancers.htmll The correct answer is: Use S3 SSE and use SSL for data in transit
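To enforce the in-transit half of the policy on the S3 side, a bucket policy statement can deny any request made without TLS. A sketch with a hypothetical bucket name:

```python
import json

bucket = "example-data-bucket"  # hypothetical bucket name

# Deny every S3 action on the bucket and its objects when the request does
# not use TLS, complementing SSE for data at rest.
enforce_tls = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        f"arn:aws:s3:::{bucket}",
        f"arn:aws:s3:::{bucket}/*",
    ],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}

print(json.dumps(enforce_tls, indent=2))
```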

NEW QUESTION 102


A company has recently recovered from a security incident that required the restoration of Amazon EC2 instances from snapshots. The company uses an AWS
Key Management Service (AWS KMS) customer managed key to encrypt all Amazon Elastic Block Store (Amazon EBS) snapshots.
The company performs a gap analysis of its disaster recovery procedures and backup strategies. A security engineer needs to implement a solution so that the
company can recover the EC2 instances if the AWS account is compromised and the EBS snapshots are deleted.
Which solution will meet this requirement?

A. Create a new Amazon S3 bucket. Use EBS lifecycle policies to move EBS snapshots to the new S3 bucket. Use lifecycle policies to move snapshots to the
S3 Glacier Instant Retrieval storage class. Use S3 Object Lock to prevent deletion of the snapshots.
B. Use AWS Systems Manager to distribute a configuration that backs up all attached disks to Amazon S3.
C. Create a new AWS account that has limited privileges. Allow the new account to access the KMS key that encrypts the EBS snapshots. Copy the encrypted
snapshots to the new account on a recurring basis.
D. Use AWS Backup to copy EBS snapshots to Amazon S3. Use S3 Object Lock to prevent deletion of the snapshots.

Answer: C

Explanation:
This solution meets the requirement of recovering the EC2 instances if the AWS account is compromised and the EBS snapshots are deleted. By creating a new
AWS account with limited privileges, the company can isolate the backup snapshots from the main account and reduce the risk of accidental or malicious deletion.
By allowing the new account to access the KMS key that encrypts the EBS snapshots, the company can ensure that the snapshots are copied in an encrypted
form and can be decrypted when needed. By copying the encrypted snapshots to the new account on a recurring basis, the company can maintain a consistent
backup schedule and minimize data loss.
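A sketch of the KMS key policy statement that grants the backup account use of the customer managed key. The account ID is hypothetical, and the exact action list depends on how the snapshots are copied:

```python
import json

backup_account = "444455556666"  # hypothetical limited-privilege backup account

# Key policy statement letting the backup account use the customer managed
# key so that copied, encrypted snapshots remain usable there.
key_policy_statement = {
    "Sid": "AllowBackupAccountUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{backup_account}:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",
    ],
    "Resource": "*",
}

print(json.dumps(key_policy_statement, indent=2))
```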

NEW QUESTION 106


A company has a new partnership with a vendor. The vendor will process data from the company's customers. The company will upload data files as objects into
an Amazon S3 bucket. The vendor will download the objects to perform data processing. The objects will contain sensitive data.
A security engineer must implement a solution that prevents objects from residing in the S3 bucket for longer than 72 hours.
Which solution will meet these requirements?

A. Use Amazon Macie to scan the S3 bucket for sensitive data every 72 hours. Configure Macie to delete the objects that contain sensitive data when they are discovered.
B. Configure an S3 Lifecycle rule on the S3 bucket to expire objects that have been in the S3 bucket for 72 hours.
C. Create an Amazon EventBridge scheduled rule that invokes an AWS Lambda function every day. Program the Lambda function to remove any objects that have been in the S3 bucket for 72 hours.
D. Use the S3 Intelligent-Tiering storage class for all objects that are uploaded to the S3 bucket. Use S3 Intelligent-Tiering to expire objects that have been in the S3 bucket for 72 hours.

Answer: B
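S3 Lifecycle expiration is expressed in whole days, so the 72-hour requirement maps to Days: 3. A sketch of the configuration (the bucket name in the comment is a placeholder, and S3 evaluates lifecycle rules on a roughly daily schedule, so expiry is not to-the-minute):

```python
import json

# Lifecycle configuration that expires every object 3 days (72 hours)
# after creation.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-after-72-hours",
            "Status": "Enabled",
            "Filter": {},          # empty filter applies the rule to all objects
            "Expiration": {"Days": 3},
        }
    ]
}

# With boto3 this would be applied as (not executed here; bucket name
# is a placeholder):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-vendor-exchange-bucket",
#     LifecycleConfiguration=lifecycle_config)
print(json.dumps(lifecycle_config, indent=2))
```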

NEW QUESTION 107


A company has an AWS Key Management Service (AWS KMS) customer managed key with imported key material. Company policy requires all encryption keys to be rotated every year.
What should a security engineer do to meet this requirement for this customer managed key?

A. Enable automatic key rotation annually for the existing customer managed key
B. Use the AWS CLI to create an AWS Lambda function to rotate the existing customer managed key annually
C. Import new key material to the existing customer managed key Manually rotate the key
D. Create a new customer managed key Import new key material to the new key Point the key alias to the new key

Answer: D

Explanation:
Automatic key rotation is not supported for KMS keys with imported key material (keys whose Origin is EXTERNAL), so option A cannot satisfy the policy. The documented approach is manual rotation: create a new customer managed key, import new key material into the new key, and then point the key alias to the new key so that applications begin using the new key without code changes. AWS KMS retains the old key so that data encrypted under it can still be decrypted.
References: Rotating AWS KMS keys - AWS Key Management Service

NEW QUESTION 109


A company is building a data processing application that uses AWS Lambda functions. The application's Lambda functions need to communicate with an Amazon RDS DB instance that is deployed within a VPC in the same AWS account.

The Leader of IT Certification visit - https://www.certleader.com


100% Valid and Newest Version SCS-C02 Questions & Answers shared by Certleader
https://www.certleader.com/SCS-C02-dumps.html (235 Q&As)

Which solution meets these requirements in the MOST secure way?

A. Configure the DB instance to allow public access. Update the DB instance security group to allow access from the Lambda public address space for the AWS Region.
B. Deploy the Lambda functions inside the VPC. Attach a network ACL to the Lambda subnet. Provide outbound rule access to the VPC CIDR range only. Update the DB instance security group to allow traffic from 0.0.0.0/0.
C. Deploy the Lambda functions inside the VPC. Attach a security group to the Lambda functions. Provide outbound rule access to the VPC CIDR range only. Update the DB instance security group to allow traffic from the Lambda security group.
D. Peer the Lambda default VPC with the VPC that hosts the DB instance to allow direct network access without the need for security groups.

Answer: C

Explanation:
This solution ensures that the Lambda functions are deployed inside the VPC and can communicate with the Amazon RDS DB instance securely. The security
group attached to the Lambda functions only allows
outbound traffic to the VPC CIDR range, and the DB instance security group only allows traffic from the Lambda security group. This solution ensures that the
Lambda functions can communicate with the DB instance securely and that the DB instance is not exposed to the public internet.
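The two security-group rules described above can be sketched as the rule dictionaries that boto3's authorize_security_group_egress and authorize_security_group_ingress calls accept; the group IDs, port, and CIDR below are placeholders:

```python
# Placeholder identifiers for illustration only.
VPC_CIDR = "10.0.0.0/16"
LAMBDA_SG = "sg-0lambda0000000000"   # attached to the Lambda functions
DB_SG = "sg-0database000000000"      # attached to the RDS instance

# Lambda security group: outbound traffic allowed only to the VPC CIDR
# range (MySQL port 3306 shown as an example).
lambda_egress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,
    "ToPort": 3306,
    "IpRanges": [{"CidrIp": VPC_CIDR}],
}

# DB security group: inbound traffic allowed only from the Lambda
# security group. Referencing the SG (rather than a CIDR) means the
# rule automatically tracks whichever ENIs the functions use.
db_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,
    "ToPort": 3306,
    "UserIdGroupPairs": [{"GroupId": LAMBDA_SG}],
}
```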

NEW QUESTION 110


A company has two teams, and each team needs to access its respective Amazon S3 buckets. The company anticipates adding more teams that also will have
their own S3 buckets. When the company adds these teams, team members will need the ability to be assigned to multiple teams. Team members also will need
the ability to change teams. Additional S3 buckets can be created or deleted.
An IAM administrator must design a solution to accomplish these goals. The solution also must be scalable and must require the least possible operational
overhead.
Which solution meets these requirements?

A. Add users to groups that represent the teams. Create a policy for each team that allows the team to access its respective S3 buckets only. Attach the policy to the corresponding group.
B. Create an IAM role for each team. Create a policy for each team that allows the team to access its respective S3 buckets only. Attach the policy to the corresponding role.
C. Create IAM roles that are labeled with an access tag value of a team. Create one policy that allows dynamic access to S3 buckets with the same tag. Attach the policy to the IAM roles. Tag the S3 buckets accordingly.
D. Implement a role-based access control (RBAC) authorization model. Create the corresponding policies, and attach them to the IAM users.

Answer: A
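A sketch of how each team's group policy could be generated from its bucket name, which keeps adding new teams mechanical; the bucket name and action list are illustrative assumptions:

```python
import json

def team_bucket_policy(bucket_name: str) -> dict:
    """Build the IAM policy for one team's group, scoping access to that
    team's bucket only. The bucket name is a placeholder and the action
    list is an assumed baseline, not an official minimum."""
    bucket_arn = f"arn:aws:s3:::{bucket_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": bucket_arn,
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"{bucket_arn}/*",
            },
        ],
    }

policy = team_bucket_policy("team-alpha-bucket")
print(json.dumps(policy, indent=2))
```

Users who join or change teams are simply added to or removed from the matching groups, with no per-user policy edits.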

NEW QUESTION 113


A company finds that one of its Amazon EC2 instances suddenly has a high CPU usage. The company does not know whether the EC2 instance is compromised
or whether the operating system is performing background cleanup.
Which combination of steps should a security engineer take before investigating the issue? (Select THREE.)

A. Disable termination protection for the EC2 instance if termination protection has not been disabled.
B. Enable termination protection for the EC2 instance if termination protection has not been enabled.
C. Take snapshots of the Amazon Elastic Block Store (Amazon EBS) data volumes that are attached to the EC2 instance.
D. Remove all snapshots of the Amazon Elastic Block Store (Amazon EBS) data volumes that are attached to the EC2 instance.
E. Capture the EC2 instance metadata, and then tag the EC2 instance as under quarantine.
F. Immediately remove any entries in the EC2 instance metadata that contain sensitive information.

Answer: BCE

Explanation:
https://d1.awsstatic.com/WWPS/pdf/aws_security_incident_response.pdf

NEW QUESTION 118


A company uses Amazon RDS for MySQL as a database engine for its applications. A recent security audit revealed an RDS instance that is not compliant with
company policy for encrypting data at rest. A security engineer at the company needs to ensure that all existing RDS databases are encrypted using server-side
encryption and that any future deviations from the policy are detected.
Which combination of steps should the security engineer take to accomplish this? (Select TWO.)

A. Create an AWS Config rule to detect the creation of unencrypted RDS databases. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger on the AWS Config rule's compliance state change, and use Amazon Simple Notification Service (Amazon SNS) to notify the security operations team.
B. Use AWS Systems Manager State Manager to detect RDS database encryption configuration drift. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to track state changes, and use Amazon Simple Notification Service (Amazon SNS) to notify the security operations team.
C. Create a read replica for the existing unencrypted RDS database and enable replica encryption in the process. Once the replica becomes active, promote it into a standalone database instance and terminate the unencrypted database instance.
D. Take a snapshot of the unencrypted RDS database. Copy the snapshot and enable snapshot encryption in the process. Restore the database instance from the newly created encrypted snapshot. Terminate the unencrypted database instance.
E. Enable encryption for the identified unencrypted RDS instance by changing the configuration of the existing database.

Answer: AD
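The detection side of this design hinges on an EventBridge event pattern that matches AWS Config compliance-state changes. A sketch, with a placeholder Config rule name:

```python
import json

# EventBridge event pattern that matches compliance-state changes
# reported by an AWS Config rule, so an SNS target can notify the
# security operations team when an RDS instance drifts out of
# compliance. The rule name is a placeholder.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["rds-storage-encrypted"],
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}
print(json.dumps(event_pattern, indent=2))
```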


NEW QUESTION 119


An organization has a multi-petabyte workload that it is moving to Amazon S3, but the CISO is concerned about cryptographic wear-out and the blast radius if a key is compromised. How can the CISO be assured that AWS KMS and Amazon S3 are addressing the concerns? (Select TWO.)

A. There is no API operation to retrieve an S3 object in its encrypted form.


B. Encryption of S3 objects is performed within the secure boundary of the KMS service.
C. S3 uses KMS to generate a unique data key for each individual object.
D. Using a single master key to encrypt all data includes having a single place to perform audits and usage validation.
E. The KMS encryption envelope digitally signs the master key during encryption to prevent cryptographic wear-out

Answer: CE

Explanation:
Options C and E describe the features that address the CISO's concerns about cryptographic wear-out and blast radius. Cryptographic wear-out is a phenomenon
that occurs when a key is used too frequently or for too long, which increases the risk of compromise or degradation. Blast radius is a measure of how much
damage a compromised key can cause to the encrypted data. S3 uses KMS to generate a unique data key for each individual object, which reduces both
cryptographic wear-out and blast radius. The KMS encryption envelope digitally signs the master key during encryption, which prevents cryptographic wear-out by
ensuring that only authorized parties can use the master key. The other options are either incorrect or irrelevant for addressing the CISO’s concerns.
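The one-data-key-per-object pattern can be illustrated in plain Python; os.urandom merely stands in for the KMS GenerateDataKey operation, since real key generation and use happen inside the KMS service boundary:

```python
import os

def new_data_key() -> bytes:
    """Stand-in for KMS GenerateDataKey: returns fresh random key
    material. This only illustrates the one-key-per-object pattern;
    it is not how S3/KMS encryption is actually performed."""
    return os.urandom(32)  # 256-bit data key

# Each object gets its own data key, so compromising one key exposes
# exactly one object, and no single key accumulates enough usage to
# approach cryptographic wear-out.
objects = ["report-1.csv", "report-2.csv", "report-3.csv"]
data_keys = {name: new_data_key() for name in objects}

assert len(set(data_keys.values())) == len(objects)  # every key is unique
```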

NEW QUESTION 123


Auditors for a health care company have mandated that all data volumes be encrypted at rest. Infrastructure is deployed mainly via AWS CloudFormation; however, third-party frameworks and manual deployment are required on some legacy systems.
What is the BEST way to monitor, on a recurring basis, whether all EBS volumes are encrypted?

A. On a recurring basis, update all IAM user policies to require that EC2 instances are created with an encrypted volume.
B. Configure an AWS Config rule to run on a recurring basis for volume encryption.
C. Set up Amazon Inspector rules for volume encryption to run on a recurring schedule.
D. Use CloudWatch Logs to determine whether instances were created with an encrypted volume.

Answer: B

Explanation:
To support answer B, see https://d1.awsstatic.com/whitepapers/aws-security-whitepaper.pdf: "For example, AWS Config provides managed AWS Config Rules to ensure that encryption is turned on for all EBS volumes in your account."
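A sketch of the managed-rule configuration that PutConfigRule would accept; ENCRYPTED_VOLUMES is the AWS managed rule identifier, while the rule name and description are placeholders:

```python
import json

# Request body for AWS Config's PutConfigRule, using the managed rule
# ENCRYPTED_VOLUMES, which evaluates whether attached EBS volumes are
# encrypted regardless of how the instances were deployed.
config_rule = {
    "ConfigRuleName": "ebs-volumes-encrypted",
    "Description": "Checks that all attached EBS volumes are encrypted.",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "ENCRYPTED_VOLUMES",
    },
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
}

# With boto3 (not executed here):
# boto3.client("config").put_config_rule(ConfigRule=config_rule)
print(json.dumps(config_rule, indent=2))
```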

NEW QUESTION 124


A company has a batch-processing system that uses Amazon S3, Amazon EC2, and AWS Key Management Service (AWS KMS). The system uses two AWS
accounts: Account A and Account B.
Account A hosts an S3 bucket that stores the objects that will be processed. The S3 bucket also stores the results of the processing. All the S3 bucket objects are
encrypted by a KMS key that is managed in
Account A.
Account B hosts a VPC that has a fleet of EC2 instances that access the S3 bucket in Account A by using statements in the bucket policy. The VPC was created
with DNS hostnames enabled and DNS resolution enabled.
A security engineer needs to update the design of the system without changing any of the system's code. No AWS API calls from the batch-processing EC2 instances can travel over the internet.
Which combination of steps will meet these requirements? (Select TWO.)

A. In the Account B VPC, create a gateway VPC endpoint for Amazon S3. For the gateway VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, s3:PutObject, and s3:PutObjectAcl actions for the S3 bucket.
B. In the Account B VPC, create an interface VPC endpoint for Amazon S3. For the interface VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, s3:PutObject, and s3:PutObjectAcl actions for the S3 bucket.
C. In the Account B VPC, create an interface VPC endpoint for AWS KMS. For the interface VPC endpoint, create a resource policy that allows the kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey actions for the KMS key. Ensure that private DNS is turned on for the endpoint.
D. In the Account B VPC, create an interface VPC endpoint for AWS KMS. For the interface VPC endpoint, create a resource policy that allows the kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey actions for the KMS key. Ensure that private DNS is turned off for the endpoint.
E. In the Account B VPC, verify that the S3 bucket policy allows the s3:PutObjectAcl action for cross-account use. In the Account B VPC, create a gateway VPC endpoint for Amazon S3. For the gateway VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, and s3:PutObject actions for the S3 bucket.

Answer: BC
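A sketch of the resource policy that could be attached to the S3 VPC endpoint in Account B, limiting the endpoint to the four S3 actions the batch fleet needs; the bucket name is a placeholder:

```python
import json

BUCKET = "example-batch-input-bucket"  # placeholder bucket name

# Endpoint policy restricting what any workload in the VPC can do
# through this endpoint, independent of the callers' IAM permissions.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}
print(json.dumps(endpoint_policy, indent=2))
```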

NEW QUESTION 128


A company's engineering team is developing a new application that creates AWS Key Management Service (AWS KMS) CMK grants for users. Immediately after a grant is created, users must be able to use the CMK to encrypt a 512-byte payload. During load testing, a bug appears intermittently: AccessDeniedExceptions are occasionally triggered when a user first attempts to encrypt using the CMK.
Which solution should the company's security specialist recommend?

A. Instruct users to implement a retry mechanism every 2 minutes until the call succeeds.
B. Instruct the engineering team to consume a random grant token from users, and to call the CreateGrant operation, passing it the grant token. Instruct users to use that grant token in their call to encrypt.
C. Instruct the engineering team to create a random name for the grant when calling the CreateGrant operation. Return the name to the users, and instruct them to provide the name as the grant token in the call to encrypt.
D. Instruct the engineering team to pass the grant token returned in the CreateGrant response to users. Instruct users to use that grant token in their call to encrypt.

Answer: D

Explanation:


To avoid AccessDeniedExceptions when users first attempt to encrypt using the CMK, the security specialist should recommend the following solution:
Instruct the engineering team to pass the grant token returned in the CreateGrant response to users. This allows the engineering team to use the grant token
as a form of temporary authorization for the grant.
Instruct users to use that grant token in their call to encrypt. This allows the users to use the grant token as a proof that they have permission to use the CMK,
and to avoid any eventual consistency issues with the grant creation.
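The recommended flow can be sketched as a small helper; the function and variable names are illustrative, and the KMS client is passed in so the flow can be exercised against a stub:

```python
def grant_and_encrypt(kms, key_id, grantee_arn, plaintext):
    """Create a grant for the grantee and immediately encrypt with the
    returned grant token, sidestepping the eventual-consistency window
    behind the intermittent AccessDeniedException. `kms` is any object
    exposing boto3-compatible create_grant/encrypt methods."""
    grant = kms.create_grant(
        KeyId=key_id,
        GranteePrincipal=grantee_arn,
        Operations=["Encrypt"],
    )
    # The token proves the grant exists even before it has propagated
    # to every KMS host.
    return kms.encrypt(
        KeyId=key_id,
        Plaintext=plaintext,
        GrantTokens=[grant["GrantToken"]],
    )
```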

NEW QUESTION 132


An application team wants to use AWS Certificate Manager (ACM) to request public certificates to ensure that data is secured in transit. The domains that are being used are not currently hosted on Amazon Route 53.
The application team wants to use an AWS managed distribution and caching solution to optimize requests to its systems and provide better points of presence to customers. The distribution solution will use a primary domain name that is customized. The distribution solution also will use several alternative domain names. The certificates must renew automatically over an indefinite period of time.
Which combination of steps should the application team take to deploy this architecture? (Select THREE.)

A. Request a certificate from ACM in the us-west-2 Region. Add the domain names that the certificate will secure.
B. Send an email message to the domain administrators to request validation of the domains for ACM.
C. Request validation of the domains for ACM through DNS. Insert CNAME records into each domain's DNS zone.
D. Create an Application Load Balancer for the caching solution. Select the newly requested certificate from ACM to be used for secure connections.
E. Create an Amazon CloudFront distribution for the caching solution. Enter the main CNAME record as the Origin Name. Enter the subdomain names or alternate names in the Alternate Domain Names Distribution Settings. Select the newly requested certificate from ACM to be used for secure connections.
F. Request a certificate from ACM in the us-east-1 Region. Add the domain names that the certificate will secure.

Answer: CEF

NEW QUESTION 133


A company is hosting a static website on Amazon S3. The company has configured an Amazon CloudFront distribution to serve the website contents. The company has associated an AWS WAF web ACL with the CloudFront distribution. The web ACL ensures that requests originate from the United States to address compliance restrictions.
The company is worried that the S3 URL might still be accessible directly and that requests can bypass the CloudFront distribution.
Which combination of steps should the company take to remove direct access to the S3 URL? (Select TWO.)

A. Select "Restrict Bucket Access" in the origin settings of the CloudFront distribution.
B. Create an origin access identity (OAI) for the S3 origin.
C. Update the S3 bucket policy to allow s3:GetObject with a condition that the aws:Referer key matches the secret value. Deny all other requests.
D. Configure the S3 bucket policy so that only the origin access identity (OAI) has read permission for objects in the bucket.
E. Add an origin custom header that has the name Referer to the CloudFront distribution. Give the header a secret value.

Answer: AD
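A sketch of a bucket policy that grants read access only to a CloudFront origin access identity; the bucket name and OAI ID are placeholders:

```python
import json

BUCKET = "example-static-site-bucket"   # placeholder bucket name
OAI_ID = "E2EXAMPLE1OAI"                # placeholder origin access identity ID

# Bucket policy that lets only the CloudFront OAI read objects, so
# direct S3 URL requests (which bypass the WAF web ACL) are denied.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```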

NEW QUESTION 138


A company has a large fleet of Linux Amazon EC2 instances and Windows EC2 instances that run in private subnets. The company wants all remote
administration to be performed as securely as possible in the AWS Cloud.
Which solution will meet these requirements?

A. Do not use SSH-RSA private keys during the launch of new instances. Implement AWS Systems Manager Session Manager.
B. Generate new SSH-RSA private keys for existing instances. Implement AWS Systems Manager Session Manager.
C. Do not use SSH-RSA private keys during the launch of new instances. Configure EC2 Instance Connect.
D. Generate new SSH-RSA private keys for existing instances. Configure EC2 Instance Connect.

Answer: A

Explanation:
AWS Systems Manager Session Manager is a fully managed service that allows you to securely and remotely administer your EC2 instances without the need to
open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager provides an interactive browser-based shell or CLI access to your instances,
as well as port forwarding and auditing capabilities. Session Manager works with both Linux and Windows instances, and supports hybrid environments and edge
devices.
EC2 Instance Connect is a feature that allows you to use SSH to connect to your Linux instances using
short-lived keys that are generated on demand and delivered securely through the AWS metadata service. EC2 Instance Connect does not require any additional
software installation or configuration on the instance, but it does require you to use SSH-RSA keys during the launch of new instances.
The correct answer is to use Session Manager, as it provides more security and flexibility than EC2 Instance Connect, and does not require SSH-RSA keys or
inbound ports. Session Manager also works with Windows instances, while EC2 Instance Connect does not.
Verified References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html
https://repost.aws/questions/QUnV4R9EoeSdW0GT3cKBUR7w/what-is-the-difference-between-ec-2-ins

NEW QUESTION 142


Company A has an AWS account that is named Account A. Company A recently acquired Company B, which has an AWS account that is named Account B.
Company B stores its files in an Amazon S3 bucket.
The administrators need to give a user from Account A full access to the S3 bucket in Account B.
After the administrators adjust the IAM permissions for the user in Account A to access the S3 bucket in Account B, the user still cannot access any files in the S3
bucket.
Which solution will resolve this issue?


A. In Account B, create a bucket ACL to allow the user from Account A to access the S3 bucket in Account B.
B. In Account B, create an object ACL to allow the user from Account A to access all the objects in the S3 bucket in Account B.
C. In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B.
D. In Account B, create a user policy to allow the user from Account A to access the S3 bucket in Account B.

Answer: C

Explanation:
A bucket policy is a resource-based policy that defines permissions for a specific S3 bucket. It can be used to grant cross-account access to another AWS account
or an IAM user or role in another account. A bucket policy can also specify which actions, resources, and conditions are allowed or denied.
A bucket ACL is an access control list that grants basic read or write permissions to predefined groups of users. It cannot be used to grant cross-account access to
a specific IAM user or role in another account.
An object ACL is an access control list that grants basic read or write permissions to predefined groups of users for a specific object in an S3 bucket. It cannot be
used to grant cross-account access to a specific IAM user or role in another account.
A user policy is an IAM policy that defines permissions for an IAM user or role in the same account. It cannot be used to grant cross-account access to another
AWS account or an IAM user or role in another account.
For more information, see Provide cross-account access to objects in Amazon S3 buckets and Example 2: Bucket owner granting cross-account bucket
permissions.
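A sketch of such a bucket policy in Account B (bucket name and user ARN are placeholders, and granting s3:* is one way to express "full access"):

```python
import json

def cross_account_bucket_policy(bucket_name: str, user_arn: str) -> dict:
    """Bucket policy for Account B that grants a specific Account A user
    full access to the bucket. Names passed in are placeholders."""
    bucket_arn = f"arn:aws:s3:::{bucket_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAccountAUserFullAccess",
                "Effect": "Allow",
                "Principal": {"AWS": user_arn},
                "Action": "s3:*",
                "Resource": [bucket_arn, f"{bucket_arn}/*"],
            }
        ],
    }

policy = cross_account_bucket_policy(
    "company-b-files", "arn:aws:iam::111111111111:user/analyst")
print(json.dumps(policy, indent=2))
```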

NEW QUESTION 147


A company wants to prevent SSH access through the use of SSH key pairs for any Amazon Linux 2 Amazon EC2 instances in its AWS account. However, a
system administrator occasionally will need to access these EC2 instances through SSH in an emergency. For auditing purposes, the company needs to record
any commands that a user runs in an EC2 instance.
What should a security engineer do to configure access to these EC2 instances to meet these requirements?

A. Use the EC2 serial console. Configure the EC2 serial console to save all commands that are entered to an Amazon S3 bucket. Provide the EC2 instances with an IAM role that allows the EC2 serial console to access Amazon S3. Configure an IAM account for the system administrator. Provide an IAM policy that allows the IAM account to use the EC2 serial console.
B. Use EC2 Instance Connect. Configure EC2 Instance Connect to save all commands that are entered to Amazon CloudWatch Logs. Provide the EC2 instances with an IAM role that allows the EC2 instances to access CloudWatch Logs. Configure an IAM account for the system administrator. Provide an IAM policy that allows the IAM account to use EC2 Instance Connect.
C. Use an EC2 key pair with an EC2 instance that needs SSH access. Access the EC2 instance with this key pair by using SSH. Configure the EC2 instance to save all commands that are entered to Amazon CloudWatch Logs. Provide the EC2 instance with an IAM role that allows the EC2 instance to access Amazon S3 and CloudWatch Logs.
D. Use AWS Systems Manager Session Manager. Configure Session Manager to save all commands that are entered in a session to an Amazon S3 bucket. Provide the EC2 instances with an IAM role that allows Systems Manager to manage the EC2 instances. Configure an IAM account for the system administrator. Provide an IAM policy that allows the IAM account to use Session Manager.

Answer: D

Explanation:
Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/. In the navigation pane, choose Session Manager. Choose the
Preferences tab, and then choose Edit. Select the check box next to Enable under S3 logging. (Recommended) Select the check box next to Allow only encrypted
S3 buckets. With this option turned on, log data is encrypted using the server-side encryption key specified for the bucket. If you don't want to encrypt the log data
that is sent to Amazon S3, clear the check box. You must also clear the check box if encryption isn't allowed on the S3 bucket.
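The preferences described above are stored in the SSM-SessionManagerRunShell document; a sketch of its content, with a placeholder bucket name and prefix:

```python
import json

# Session Manager preferences expressed as the document content that the
# console edits under the hood. The bucket name and key prefix are
# placeholders; s3EncryptionEnabled requires the log bucket to encrypt.
session_preferences = {
    "schemaVersion": "1.0",
    "description": "Session Manager preferences with S3 session logging",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "example-session-logs-bucket",
        "s3KeyPrefix": "ssm-sessions/",
        "s3EncryptionEnabled": True,
    },
}
print(json.dumps(session_preferences, indent=2))
```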

NEW QUESTION 148


......


Thank You for Trying Our Product

* 100% Pass or Money Back


All our products come with a 90-day Money Back Guarantee.
* One year free update
You can enjoy free update one year. 24x7 online support.
* Trusted by Millions
We currently serve more than 30,000,000 customers.
* Shop Securely
All transactions are protected by VeriSign!

100% Pass Your SCS-C02 Exam with Our Prep Materials Via below:

https://www.certleader.com/SCS-C02-dumps.html



