AIF-C01
Get the Full AIF-C01 dumps in VCE and PDF From SurePassExam
https://www.surepassexam.com/AIF-C01-exam-dumps.html (97 New Questions)
Amazon Web Services
Exam Questions AIF-C01
AWS Certified AI Practitioner
NEW QUESTION 1
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees
need to take to respond to customer questions.
Which business objective should the company use to evaluate the effect of the LLM chatbot?
Answer: B
Explanation:
The business objective to evaluate the effect of an LLM chatbot aimed at reducing the actions required by call center employees should be average call duration.
• Average Call Duration:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 2
A law firm wants to build an AI application by using large language models (LLMs). The application will read legal documents and extract key points from the
documents. Which solution meets these requirements?
A. Build an automatic named entity recognition system
B. Create a recommendation engine
C. Develop a summarization chatbot
D. Develop a multi-language translation system
Answer: C
Explanation:
A summarization chatbot is ideal for extracting key points from legal documents. Large language models (LLMs) can be used to summarize complex texts, such as
legal documents, making them more accessible and understandable.
• Option C (Correct): "Develop a summarization chatbot": This is the correct answer because a summarization chatbot uses LLMs to condense and extract key information from text, which is precisely the requirement for reading and summarizing legal documents.
• Option A: "Build an automatic named entity recognition system" is incorrect because it focuses on identifying specific entities, not summarizing documents.
• Option B: "Create a recommendation engine" is incorrect because it is used to suggest products or content, not to summarize text.
• Option D: "Develop a multi-language translation system" is incorrect because translation is unrelated to summarizing text.
AWS AI Practitioner References:
• Using LLMs for Text Summarization on AWS: AWS supports developing summarization tools using its AI services, including Amazon Bedrock.
NEW QUESTION 3
A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the
model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?
Answer: A
Explanation:
Amazon SageMaker Serverless Inference is the correct solution for deploying an ML model to production in a way that allows a web application to use the model
without the need to manage the underlying infrastructure.
• Amazon SageMaker Serverless Inference provides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the infrastructure required to host the model, removing the need for the company to manage servers or other underlying infrastructure.
• Why Option A is Correct:
• Why Other Options are Incorrect:
Thus, A is the correct answer, as it aligns with the requirement of deploying an ML model without managing any underlying infrastructure.
NEW QUESTION 4
A medical company deployed a disease detection model on Amazon Bedrock. To comply with privacy policies, the company wants to prevent the model from
including personal patient information in its responses. The company also wants to receive notification when policy violations occur.
Which solution meets these requirements?
A. Use Amazon Macie to scan the model's output for sensitive data and set up alerts for potential violations.
B. Configure AWS CloudTrail to monitor the model's responses and create alerts for any detected personal information.
C. Use Guardrails for Amazon Bedrock to filter content. Set up Amazon CloudWatch alarms for notification of policy violations.
D. Implement Amazon SageMaker Model Monitor to detect data drift and receive alerts when model quality degrades.
Answer: C
Explanation:
Guardrails for Amazon Bedrock provide mechanisms to filter and control the content generated by models to comply with privacy and policy requirements. Using
guardrails ensures that sensitive or personal information is not included in the model's responses. Additionally, integrating Amazon CloudWatch alarms allows for
real-time notification when a policy violation occurs.
• Option C (Correct): "Use Guardrails for Amazon Bedrock to filter content. Set up Amazon CloudWatch alarms for notification of policy violations": This is the correct answer because it directly addresses both the prevention of policy violations and the requirement to receive notifications when such violations occur.
• Option A: "Use Amazon Macie to scan the model's output for sensitive data" is incorrect because Amazon Macie is designed to monitor data in S3, not to filter real-time model outputs.
• Option B: "Configure AWS CloudTrail to monitor the model's responses" is incorrect because CloudTrail tracks API activity and is not suited for content moderation.
• Option D: "Implement Amazon SageMaker Model Monitor to detect data drift" is incorrect because data drift detection does not address content moderation or privacy compliance.
AWS AI Practitioner References:
• Guardrails in Amazon Bedrock: AWS provides guardrails to ensure AI models comply with content policies, and using CloudWatch for alerting integrates monitoring capabilities.
NEW QUESTION 5
A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple
teams.
Which SageMaker feature meets these requirements?
Answer: A
Explanation:
Amazon SageMaker Feature Store is the correct solution for sharing and managing variables (features) across multiple teams during model development.
• Amazon SageMaker Feature Store:
• Why Option A is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 6
Which AWS service or feature can help an AI development team quickly deploy and consume a foundation model (FM) within the team's VPC?
A. Amazon Personalize
B. Amazon SageMaker JumpStart
C. PartyRock, an Amazon Bedrock Playground
D. Amazon SageMaker endpoints
Answer: B
Explanation:
Amazon SageMaker JumpStart is the correct service for quickly deploying and consuming a foundation model (FM) within a team's VPC.
• Amazon SageMaker JumpStart:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 7
Which strategy evaluates the accuracy of a foundation model (FM) that is used in image classification tasks?
Answer: B
Explanation:
Measuring the model's accuracy against a predefined benchmark dataset is the correct strategy to evaluate the accuracy of a foundation model (FM) used in
image classification tasks.
• Model Accuracy Evaluation:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 8
A company wants to classify human genes into 20 categories based on gene characteristics. The company needs an ML algorithm to document how the inner
mechanism of the model affects the output.
Which ML algorithm meets these requirements?
A. Decision trees
B. Linear regression
C. Logistic regression
D. Neural networks
Answer: A
Explanation:
Decision trees are an interpretable machine learning algorithm that clearly documents the decision-making process by showing how each input feature affects the
output. This transparency is particularly useful when explaining how the model arrives at a certain decision, making it suitable for classifying genes into categories.
• Option A (Correct): "Decision trees": This is the correct answer because decision trees provide a clear and interpretable representation of how input features influence the model's output, making them ideal for understanding the inner mechanisms affecting predictions.
• Option B: "Linear regression" is incorrect because it is used for regression tasks, not classification.
• Option C: "Logistic regression" is incorrect because, although its coefficients are interpretable, it does not document a step-by-step decision path the way a tree does.
• Option D: "Neural networks" is incorrect because they are often considered "black boxes" and do not easily explain how they arrive at their outputs.
AWS AI Practitioner References:
• Interpretable Machine Learning Models on AWS: AWS supports using interpretable models, such as decision trees, for tasks that require clear documentation of how input data affects output decisions.
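The interpretability argument can be sketched in plain Python: a hand-written tree returns both its prediction and the exact rules it fired, which is the kind of documentation the question asks for. The feature names, thresholds, and category labels below are invented for illustration, not part of any real gene dataset.

```python
# Minimal sketch (not an AWS API): a hand-written decision "tree" that
# returns a predicted gene category plus the ordered list of rules it
# fired, showing why tree models are easy to document. All feature
# names, thresholds, and categories here are invented.
def classify_gene(features):
    path = []  # records each decision taken, in order
    if features["expression_level"] > 0.7:
        path.append("expression_level > 0.7")
        if features["sequence_length"] > 1500:
            path.append("sequence_length > 1500")
            category = "category_3"
        else:
            path.append("sequence_length <= 1500")
            category = "category_7"
    else:
        path.append("expression_level <= 0.7")
        category = "category_12"
    return category, path

category, path = classify_gene({"expression_level": 0.9, "sequence_length": 1200})
print(category)           # category_7
print(" -> ".join(path))  # expression_level > 0.7 -> sequence_length <= 1500
```

A trained decision tree generalizes this idea: every prediction corresponds to one root-to-leaf path that can be printed and audited.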
NEW QUESTION 9
What does an F1 score measure in the context of foundation model (FM) performance?
Answer: A
Explanation:
The F1 score is a metric used to evaluate the performance of a classification model by considering both precision and recall. Precision measures the accuracy of
positive predictions (i.e., the proportion of true positive predictions among all positive predictions made by the model), while recall measures the model's ability to
identify all relevant positive instances (i.e., the proportion of true positive predictions among all actual positive instances). The F1 score is the harmonic mean of
precision and recall, providing a single metric that balances both concerns. This is particularly useful when dealing with imbalanced datasets or when the cost of
false positives and false negatives is significant. Options B, C, and D pertain to other aspects of model performance but are not related to the F1 score.
Reference: AWS Certified AI Practitioner Exam Guide
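The definition above can be sketched directly from confusion-matrix counts; the counts used here are made up for the example.

```python
# F1 as the harmonic mean of precision and recall, computed from raw
# confusion-matrix counts (tp, fp, fn values are made up).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)  # accuracy of positive predictions
    recall = tp / (tp + fn)     # coverage of actual positives
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=8, fp=2, fn=4), 4))  # 0.7273
```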
NEW QUESTION 10
An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to
automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the
user who has asked the question.
Which solution meets these requirements with the LEAST implementation effort?
A. Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.
B. Add a role description to the prompt context that instructs the model of the age range that the response should target.
C. Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.
D. Summarize the response text depending on the age of the user so that younger users receive shorter responses.
Answer: B
Explanation:
Adding a role description to the prompt context is a straightforward way to instruct the generative AI model to adjust its response style based on the user's age
range. This method requires minimal implementation effort as it does not involve additional training or complex logic.
• Option B (Correct): "Add a role description to the prompt context that instructs the model of the age range that the response should target": This is the correct answer because it involves the least implementation effort while effectively guiding the model to tailor responses according to the age range.
• Option A: "Fine-tune the model by using additional training data" is incorrect because it requires significant effort in gathering data and retraining the model.
• Option C: "Use chain-of-thought reasoning" is incorrect because it involves complex reasoning that may not directly address the need to adjust response style based on age.
• Option D: "Summarize the response text depending on the age of the user" is incorrect because it involves additional processing steps after generating the initial response, increasing complexity.
AWS AI Practitioner References:
• Prompt Engineering Techniques on AWS: AWS recommends using prompt context effectively to guide generative models in providing tailored responses based on specific user attributes.
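Option B amounts to simple prompt assembly. The role wording and the age-range string below are illustrative assumptions, not an AWS-prescribed template:

```python
# Sketch of option B: prepend a role description targeting the user's
# age range before the question is sent to the model. The role wording
# and age buckets are invented for illustration.
def build_prompt(question, age_range):
    role = (
        f"You are a tutor answering for a reader aged {age_range}. "
        "Match your vocabulary, tone, and level of detail to that age range."
    )
    return f"{role}\n\nQuestion: {question}"

print(build_prompt("Why is the sky blue?", "8-10"))
```

No retraining or post-processing is needed; only the prompt string changes per request, which is why this is the least-effort option.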
NEW QUESTION 10
A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are
stored as PDF files.
Which solution meets these requirements MOST cost-effectively?
A. Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock.
B. Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock.
C. Use all the PDF documents to fine-tune a model with Amazon Bedrock. Use the fine-tuned model to process user prompts.
D. Upload PDF documents to an Amazon Bedrock knowledge base. Use the knowledge base to provide context when users submit prompts to Amazon Bedrock.
Answer: A
Explanation:
Using Amazon Bedrock with large language models (LLMs) allows for efficient utilization of AI to answer queries based on context provided in product manuals. To
achieve this cost- effectively, the company should avoid unnecessary use of resources.
• Option A (Correct): "Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock": This is the most cost-effective solution. By using prompt engineering, only the relevant content from one PDF file is added as context to each query. This approach minimizes the amount of data processed, which helps reduce the costs associated with the LLM's computational requirements.
• Option B: "Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock" is incorrect. Including all PDF files would increase costs significantly due to the large context size processed by the model.
• Option C: "Use all the PDF documents to fine-tune a model with Amazon Bedrock" is incorrect. Fine-tuning a model is more expensive than using prompt engineering, especially if done for multiple documents.
• Option D: "Upload PDF documents to an Amazon Bedrock knowledge base" is incorrect here because setting up and maintaining a knowledge base (document ingestion plus vector storage) adds ongoing cost compared with passing a single relevant PDF as prompt context.
AWS AI Practitioner References:
• Prompt Engineering for Cost-Effective AI: AWS emphasizes the importance of using prompt engineering to minimize costs when interacting with LLMs. By carefully selecting relevant context, users can reduce the amount of data processed and save on expenses.
NEW QUESTION 15
A company is using an Amazon Bedrock base model to summarize documents for an internal use case. The company trained a custom model to improve the
summarization quality.
Which action must the company take to use the custom model through Amazon Bedrock?
A. Purchase Provisioned Throughput for the custom model.
B. Deploy the custom model in an Amazon SageMaker endpoint for real-time inference.
C. Register the model with the Amazon SageMaker Model Registry.
D. Grant access to the custom model in Amazon Bedrock.
Answer: B
Explanation:
To use a custom model that has been trained to improve summarization quality, the company must deploy the model on an Amazon SageMaker endpoint. This
allows the model to be used for real-time inference through Amazon Bedrock or other AWS services. By deploying the model in SageMaker, the custom model can
be accessed programmatically via API calls, enabling integration with Amazon Bedrock.
• Option B (Correct): "Deploy the custom model in an Amazon SageMaker endpoint for real-time inference": This is the correct answer because deploying the model on SageMaker enables it to serve real-time predictions and be integrated with Amazon Bedrock.
• Option A: "Purchase Provisioned Throughput for the custom model" is incorrect because provisioned throughput reserves inference capacity; on its own it does not make an externally trained model available for use.
• Option C: "Register the model with the Amazon SageMaker Model Registry" is incorrect because, while the model registry helps with model management, it does not make the model accessible for real-time inference.
• Option D: "Grant access to the custom model in Amazon Bedrock" is incorrect because Bedrock does not directly manage custom model access; it relies on deployed endpoints such as those in SageMaker.
AWS AI Practitioner References:
• Amazon SageMaker Endpoints: AWS recommends deploying models to SageMaker endpoints to use them for real-time inference in various applications.
NEW QUESTION 19
A company is using the Generative AI Security Scoping Matrix to assess security responsibilities for its solutions. The company has identified four different solution
scopes based on the matrix.
Which solution scope gives the company the MOST ownership of security responsibilities?
A. Using a third-party enterprise application that has embedded generative AI features.
B. Building an application by using an existing third-party generative AI foundation model (FM).
C. Refining an existing third-party generative AI FM by fine-tuning the model with business-specific data.
D. Building and training a generative AI model from scratch by using specific data that a customer owns.
Answer: D
Explanation:
Building and training a generative AI model from scratch provides the company with the most ownership and control over security responsibilities. In this scenario,
the company is responsible for all aspects of the security of the data, the model, and the infrastructure.
• Option D (Correct): "Building and training a generative AI model from scratch by using specific data that a customer owns": This is the correct answer because it involves complete ownership of the model, data, and infrastructure, giving the company the highest level of responsibility for security.
• Option A: "Using a third-party enterprise application that has embedded generative AI features" is incorrect because the company has minimal control over the security of the AI features embedded within a third-party application.
• Option B: "Building an application using an existing third-party generative AI foundation model (FM)" is incorrect because security responsibilities are shared with the third-party model provider.
• Option C: "Refining an existing third-party generative AI FM by fine-tuning the model with business-specific data" is incorrect because the foundation model and part of the security responsibilities are still managed by the third party.
AWS AI Practitioner References:
• Generative AI Security Scoping Matrix on AWS: AWS provides a security responsibility matrix that outlines varying levels of control and responsibility depending on the approach to developing and using AI models.
NEW QUESTION 20
Which AWS feature records details about ML instance data for governance and reporting?
Answer: A
Explanation:
Amazon SageMaker Model Cards provide a centralized and standardized repository for documenting machine learning models. They capture key details such as
the model's intended use, training and evaluation datasets, performance metrics, ethical considerations, and other relevant information. This documentation
facilitates governance and reporting by ensuring that all stakeholders have access to consistent and comprehensive information about each model. While Amazon
SageMaker Debugger is used for real-time debugging and monitoring during training, and Amazon SageMaker Model Monitor tracks deployed models for data and
prediction quality, neither offers the comprehensive documentation capabilities of Model Cards. Amazon SageMaker JumpStart provides pre-built models and
solutions but does not focus on governance documentation.
Reference: Amazon SageMaker Model Cards
NEW QUESTION 24
A company built a deep learning model for object detection and deployed the model to production.
Which AI process occurs when the model analyzes a new image to identify objects?
A. Training
B. Inference
C. Model deployment
D. Bias correction
Answer: B
Explanation:
Inference is the correct answer because it is the AI process that occurs when a deployed model analyzes new data (such as an image) to make predictions or
identify objects.
• Inference:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 27
A company deployed an AI/ML solution to help customer service agents respond to frequently asked questions. The questions can change over time. The
company wants to give customer service agents the ability to ask questions and receive automatically generated answers to common customer questions. Which
strategy will meet these requirements MOST cost-effectively?
Answer: D
Explanation:
RAG combines large pre-trained models with retrieval mechanisms to fetch relevant context from a knowledge base. This approach is cost-effective as it
eliminates the need for frequent model retraining while ensuring responses are contextually accurate and up to date. References: AWS RAG Techniques.
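The retrieval step of RAG can be sketched with naive keyword overlap. Production systems use embeddings and a vector store (for example, a Bedrock knowledge base) rather than word matching, and the FAQ text below is invented:

```python
# Toy sketch of the RAG retrieval step: score FAQ entries by word
# overlap with the agent's question, then place the best match in the
# prompt as context. Real systems use embeddings and a vector store;
# the FAQ entries here are invented.
faq = [
    "To reset your password, use the Forgot Password link on the login page.",
    "Refunds are issued to the original payment method within 5 business days.",
]

def retrieve(question, documents):
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

question = "How do I reset my password?"
context = retrieve(question, faq)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)  # the password-reset FAQ entry
```

Because the knowledge base can be updated as questions change, no model retraining is needed, which is the source of the cost savings.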
NEW QUESTION 31
A retail store wants to predict the demand for a specific product for the next few weeks by using the Amazon SageMaker DeepAR forecasting algorithm.
Which type of data will meet this requirement?
A. Text data
B. Image data
C. Time series data
D. Binary data
Answer: C
Explanation:
Amazon SageMaker's DeepAR is a supervised learning algorithm designed for forecasting scalar (one-dimensional) time series data. Time series data consists of
sequences of data points indexed in time order, typically with consistent intervals between them. In the context of a retail store aiming to predict product demand,
relevant time series data might include historical sales figures, inventory levels, or related metrics recorded over regular time intervals (e.g., daily or weekly). By
training the DeepAR model on this historical time series data, the store can generate forecasts for future product demand. This capability is
particularly useful for inventory management, staffing, and supply chain optimization. Other data types, such as text, image, or binary data, are not suitable for time
series forecasting tasks and would not be appropriate inputs for the DeepAR algorithm.
Reference: Amazon SageMaker DeepAR Algorithm
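A sketch of the kind of input DeepAR consumes: JSON Lines, where each series carries a start timestamp and a target array of evenly spaced observations. The demand values below are invented daily sales for one product:

```python
import json

# DeepAR consumes time series as JSON Lines: each line has a "start"
# timestamp and a "target" array of observations at a fixed interval
# (optional fields such as "cat" can tag the series). The daily demand
# values here are invented.
series = {
    "start": "2024-01-01 00:00:00",
    "target": [112, 118, 95, 130, 125, 140, 133],  # one value per day
}
print(json.dumps(series))
```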
NEW QUESTION 32
A company has thousands of customer support interactions per day and wants to analyze these interactions to identify frequently asked questions and develop
insights.
Which AWS service can the company use to meet this requirement?
A. Amazon Lex
B. Amazon Comprehend
C. Amazon Transcribe
D. Amazon Translate
Answer: B
Explanation:
Amazon Comprehend is the correct service to analyze customer support interactions and identify frequently asked questions and insights.
• Amazon Comprehend:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 33
An AI company periodically evaluates its systems and processes with the help of independent software vendors (ISVs). The company needs to receive email
message notifications when an ISV's compliance reports become available.
Which AWS service can the company use to meet this requirement?
A. AWS Audit Manager
B. AWS Artifact
C. AWS Trusted Advisor
D. AWS Data Exchange
Answer: D
Explanation:
AWS Data Exchange is a service that allows companies to securely exchange data with third parties, such as independent software vendors (ISVs). AWS Data
Exchange can be configured to provide notifications, including email notifications, when new datasets or compliance reports become available.
• Option D (Correct): "AWS Data Exchange": This is the correct answer because it enables the company to receive notifications, including email messages, when ISVs' compliance reports are available.
• Option A: "AWS Audit Manager" is incorrect because it focuses on assessing an organization's own compliance, not receiving third-party compliance reports.
• Option B: "AWS Artifact" is incorrect because it provides access to AWS's compliance reports, not ISVs'.
• Option C: "AWS Trusted Advisor" is incorrect because it offers optimization and best practices guidance, not compliance report notifications.
AWS AI Practitioner References:
• AWS Data Exchange Documentation: AWS explains how Data Exchange allows organizations to subscribe to third-party data and receive notifications when updates are available.
NEW QUESTION 34
A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated
content aligns with the company's brand voice and messaging requirements.
Which solution meets these requirements?
A. Optimize the model's architecture and hyperparameters to improve the model's overall performance.
B. Increase the model's complexity by adding more layers to the model's architecture.
C. Create effective prompts that provide clear instructions and context to guide the model's generation.
D. Select a large, diverse dataset to pre-train a new generative model.
Answer: C
Explanation:
Creating effective prompts is the best solution to ensure that the content generated by a pre-trained generative AI model aligns with the company's brand voice
and messaging requirements.
• Effective Prompt Engineering:
• Why Option C is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 39
A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company
needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.
Which solution will meet this requirement?
Answer: C
Explanation:
To manage the flow of data from Amazon S3 to SageMaker Studio notebooks securely, using a VPC with an S3 endpoint is the best solution.
• Amazon SageMaker and S3 Integration:
• Why Option C is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 42
Which metric measures the runtime efficiency of operating AI models?
Answer: C
Explanation:
The average response time is the correct metric for measuring the runtime efficiency of operating AI models.
• Average Response Time:
• Why Option C is Correct:
• Why Other Options are Incorrect:
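A minimal sketch of the metric: average (and nearest-rank p95 tail) response time over a batch of inference calls. The latency values are made-up milliseconds:

```python
import math

# Runtime efficiency as response time: mean and nearest-rank p95 over
# a batch of inference latencies (values in ms are invented).
latencies_ms = [38, 41, 35, 44, 39, 120, 40, 37]

average = sum(latencies_ms) / len(latencies_ms)
p95 = sorted(latencies_ms)[math.ceil(0.95 * len(latencies_ms)) - 1]
print(average)  # 49.25
print(p95)      # 120
```

Tracking the tail alongside the mean is common practice, since a single slow call (here 120 ms) can hide behind a healthy-looking average.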
NEW QUESTION 43
An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential
harms.
What should the firm do when developing and deploying the LLM? (Select TWO.)
Answer: AC
Explanation:
To implement a large language model (LLM) responsibly, the firm should focus on fairness and mitigating bias, which are critical for ethical AI deployment.
• A. Include Fairness Metrics for Model Evaluation:
• C. Modify the Training Data to Mitigate Bias:
• Why Other Options are Incorrect:
NEW QUESTION 46
A company has built an image classification model to predict plant diseases from photos of plant leaves. The company wants to evaluate how many images the
model classified correctly.
Which evaluation metric should the company use to measure the model's performance?
A. R-squared score
B. Accuracy
C. Root mean squared error (RMSE)
D. Learning rate
Answer: B
Explanation:
Accuracy is the most appropriate metric to measure the performance of an image classification model. It indicates the percentage of correctly classified images out
of the total number of images. In the context of classifying plant diseases from images, accuracy will help the company determine how well the model is performing
by showing how many images were correctly classified.
• Option B (Correct): "Accuracy": This is the correct answer because accuracy measures the proportion of correct predictions made by the model, which is suitable for evaluating the performance of a classification model.
• Option A: "R-squared score" is incorrect because it is used for regression analysis, not classification tasks.
• Option C: "Root mean squared error (RMSE)" is incorrect because it is also used for regression tasks to measure prediction errors, not for classification accuracy.
• Option D: "Learning rate" is incorrect because it is a hyperparameter for training, not a performance metric.
AWS AI Practitioner References:
• Evaluating Machine Learning Models on AWS: AWS documentation emphasizes the use of appropriate metrics, like accuracy, for classification tasks.
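Accuracy as described above is a direct ratio; the plant-disease labels below are made up for illustration:

```python
# Accuracy: correctly classified images divided by total images.
# The "healthy"/"blight" labels are invented example data.
predicted = ["healthy", "blight", "blight", "healthy", "blight"]
actual    = ["healthy", "blight", "healthy", "healthy", "blight"]

correct = sum(p == a for p, a in zip(predicted, actual))
accuracy = correct / len(actual)
print(accuracy)  # 0.8
```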
NEW QUESTION 47
What are tokens in the context of generative AI models?
A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B. Tokens are the mathematical representations of words or concepts used in generative AI models.
C. Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D. Tokens are the specific prompts or instructions given to a generative AI model to generate output.
Answer: A
Explanation:
Tokens in generative AI models are the smallest units that the model processes, typically representing words, subwords, or characters. They are essential for the
model to understand and generate language, breaking down text into manageable parts for processing.
• Option A (Correct): "Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units": This is the correct definition of tokens in the context of generative AI models.
• Option B: "Mathematical representations of words" describes embeddings, not tokens.
• Option C: "Pre-trained weights of a model" refers to the parameters of a model, not tokens.
• Option D: "Prompts or instructions given to a model" refers to the queries or commands provided to a model, not tokens.
AWS AI Practitioner References:
• Understanding Tokens in NLP: AWS provides detailed explanations of how tokens are used in natural language processing tasks by AI models, such as in Amazon Comprehend and other AWS AI services.
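A naive whitespace split illustrates the core idea of breaking text into countable units. Real FMs use subword tokenizers (such as byte-pair encoding), so actual token boundaries and counts differ:

```python
# Naive illustration of tokenization: splitting text into the units a
# model processes and counts. Real FMs use subword tokenizers (e.g.,
# byte-pair encoding), so true token boundaries differ from words.
text = "Tokens are the basic units of input and output"
tokens = text.split()
print(tokens)
print(len(tokens))  # 9
```

Token counts matter in practice because model context limits and per-request pricing are both expressed in tokens.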
NEW QUESTION 49
A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has
discovered that the model disproportionately flags people who are members of a specific ethnic group.
Which type of bias is affecting the model output?
A. Measurement bias
B. Sampling bias
C. Observer bias
D. Confirmation bias
Answer: B
Explanation:
Sampling bias is the correct type of bias affecting the model output when it disproportionately flags people from a specific ethnic group.
• Sampling Bias:
• Why Option B is Correct:
• Why Other Options are Incorrect:
NEW QUESTION 50
Which option is a use case for generative AI models?
A. Improving network security by using intrusion detection systems
B. Creating photorealistic images from text descriptions for digital marketing
C. Enhancing database performance by using optimized indexing
D. Analyzing financial data to forecast stock market trends
Answer: B
Explanation:
Generative AI models are used to create new content based on existing data. One common use case is generating photorealistic images from text descriptions,
which is particularly useful in digital marketing, where visual content is key to engaging potential customers.
• Option B (Correct): "Creating photorealistic images from text descriptions for digital marketing": This is the correct answer because generative AI models, like those offered by Amazon Bedrock, can create images based on text descriptions, making them highly valuable for generating marketing materials.
• Option A: "Improving network security by using intrusion detection systems" is incorrect because this is a use case for traditional machine learning models, not generative AI.
• Option C: "Enhancing database performance by using optimized indexing" is incorrect because it is unrelated to generative AI.
• Option D: "Analyzing financial data to forecast stock market trends" is incorrect because it typically involves predictive modeling rather than generative AI.
AWS AI Practitioner References:
• Use Cases for Generative AI Models on AWS: AWS highlights the use of generative AI for creative content generation, including image creation, text generation, and more, which is suited for digital marketing applications.
NEW QUESTION 53
A digital devices company wants to predict customer demand for memory hardware. The company does not have coding experience or knowledge of ML
algorithms and needs to develop a data-driven predictive model. The company needs to perform analysis on internal data and external data.
Which solution will meet these requirements?
A. Store the data in Amazon S3. Create ML models and demand forecast predictions by using Amazon SageMaker built-in algorithms that use the data from Amazon S3.
B. Import the data into Amazon SageMaker Data Wrangler. Create ML models and demand forecast predictions by using SageMaker built-in algorithms.
C. Import the data into Amazon SageMaker Data Wrangler. Build ML models and demand forecast predictions by using an Amazon Personalize Trending-Now recipe.
D. Import the data into Amazon SageMaker Canvas. Build ML models and demand forecast predictions by selecting the values in the data from SageMaker Canvas.
Answer: D
Explanation:
Amazon SageMaker Canvas is a visual, no-code machine learning interface that allows users to build machine learning models without having any coding
experience or knowledge of machine learning algorithms. It enables users to analyze internal and external data, and make predictions using a guided interface.
? Option D (Correct): "Import the data into Amazon SageMaker Canvas. Build ML
models and demand forecast predictions by selecting the values in the data from SageMaker Canvas": This is the correct answer because SageMaker Canvas is
designed for users without coding experience, providing a visual interface to build predictive models with ease.
? Option A: "Store the data in Amazon S3 and use SageMaker built-in algorithms" is
incorrect because it requires coding knowledge to interact with SageMaker's built-in algorithms.
? Option B: "Import the data into Amazon SageMaker Data Wrangler" is incorrect.
Data Wrangler is primarily for data preparation and not directly focused on creating ML models without coding.
? Option C: "Use Amazon Personalize Trending-Now recipe" is incorrect as Amazon
Personalize is for building recommendation systems, not for general demand forecasting.
AWS AI Practitioner References:
? Amazon SageMaker Canvas Overview: AWS documentation emphasizes Canvas as a no-code solution for building machine learning models, suitable for
business analysts and users with no coding experience.
NEW QUESTION 55
A company has petabytes of unlabeled customer data to use for an advertisement campaign. The company wants to classify its customers into tiers to advertise
and promote the company's products.
Which methodology should the company use to meet these requirements?
A. Supervised learning
B. Unsupervised learning
C. Reinforcement learning
D. Reinforcement learning from human feedback (RLHF)
Answer: B
Explanation:
Unsupervised learning is the correct methodology for classifying customers into tiers when the data is unlabeled, as it does not require predefined labels or
outputs.
? Unsupervised Learning:
? Why Option B is Correct:
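As a sketch of how unsupervised tiering might work, the following plain-Python k-means groups unlabeled customer records into clusters that can serve as tiers. The customer features and cluster count are illustrative assumptions; in practice a library implementation (for example scikit-learn, or SageMaker's built-in k-means algorithm) would be used at petabyte scale.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain-Python k-means: group unlabeled points into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:  # keep the old centroid if a cluster empties out
                centroids[c] = tuple(sum(d) / len(clusters[c]) for d in zip(*clusters[c]))
    return centroids, clusters

# Hypothetical customer features: (annual spend in $k, orders per year)
customers = [(1, 2), (2, 3), (1, 4), (20, 30), (22, 28), (21, 35)]
tier_centroids, tiers = kmeans(customers, k=2)
```

No labels are supplied anywhere; the tiers emerge purely from the structure of the data, which is the defining trait of unsupervised learning.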
NEW QUESTION 58
A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's
review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.
Which AWS service meets this requirement?
A. Amazon Textract
B. Amazon Personalize
C. Amazon Lex
D. Amazon Transcribe
Answer: A
Explanation:
Amazon Textract is a service that automatically extracts text and data from scanned documents, including PDFs. It is the best choice for converting resumes from
PDF format to plain text for further processing.
? Amazon Textract:
? Why Option A is Correct:
? Why Other Options are Incorrect:
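To show what the downstream processing might look like, here is a minimal sketch that flattens a Textract text-detection response into plain text; the sample response is fabricated for illustration. Note that the synchronous DetectDocumentText API accepts image bytes, while PDF files are handled by the asynchronous StartDocumentTextDetection API against objects in S3.

```python
def textract_to_text(response):
    """Flatten a Textract DetectDocumentText response into plain text,
    emitting one output line per detected LINE block."""
    return "\n".join(
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    )

# Live call (requires AWS credentials; shown for context only):
# import boto3
# client = boto3.client("textract")
# response = client.detect_document_text(Document={"Bytes": image_bytes})
# For PDFs, use start_document_text_detection with an S3 document location.

sample_response = {  # fabricated example of a Textract response
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Jane Doe"},
        {"BlockType": "WORD", "Text": "Jane"},
        {"BlockType": "LINE", "Text": "Software Engineer"},
    ]
}
```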
NEW QUESTION 60
A company has a database of petabytes of unstructured data from internal sources. The company wants to transform this data into a structured format so that its
data scientists can perform machine learning (ML) tasks.
Which service will meet these requirements?
A. Amazon Lex
B. Amazon Rekognition
C. Amazon Kinesis Data Streams
D. AWS Glue
Answer: D
Explanation:
AWS Glue is the correct service for transforming petabytes of unstructured data into a structured format suitable for machine learning tasks.
? AWS Glue:
? Why Option D is Correct:
? Why Other Options are Incorrect:
NEW QUESTION 64
A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental
effect of the training process.
Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?
A. Amazon EC2 C series
B. Amazon EC2 G series
C. Amazon EC2 P series
D. Amazon EC2 Trn series
Answer: D
Explanation:
The Amazon EC2 Trn series (Trainium) instances are designed for high-performance, cost-effective machine learning training while being energy-efficient. AWS
Trainium-powered instances are optimized for deep learning models and have been developed to minimize environmental impact by maximizing energy efficiency.
? Option D (Correct): "Amazon EC2 Trn series": This is the correct answer because the Trn series is purpose-built for training deep learning models with lower
energy consumption, which aligns with the company's concern about environmental effects.
? Option A: "Amazon EC2 C series" is incorrect because it is intended for compute-intensive tasks but not specifically optimized for ML training with environmental considerations.
? Option B: "Amazon EC2 G series" (Graphics Processing Unit instances) is
optimized for graphics-intensive applications but does not focus on minimizing environmental impact for training.
? Option C: "Amazon EC2 P series" is designed for ML training but does not offer
the same level of energy efficiency as the Trn series.
AWS AI Practitioner References:
? AWS Trainium Overview: AWS promotes Trainium instances as their most energy-efficient and cost-effective solution for ML model training.
NEW QUESTION 66
Which feature of Amazon OpenSearch Service gives companies the ability to build vector database applications?
A. Integration with Amazon S3 for object storage
B. Support for geospatial indexing and queries
C. Scalable index management and nearest neighbor search capability
D. Ability to perform real-time analysis on streaming data
Answer: C
Explanation:
Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) has introduced capabilities to support vector search, which allows companies to build
vector database applications. This is particularly useful in machine learning, where vector representations (embeddings) of data are often used to capture semantic
meaning.
Scalable index management and nearest neighbor search capability are the core features enabling vector database functionalities in OpenSearch. The service
allows users to index high-dimensional vectors and perform efficient nearest neighbor searches, which are crucial for tasks such as recommendation systems,
anomaly detection, and semantic search.
Here is why option C is the correct Answer:
? Scalable Index Management: OpenSearch Service supports scalable indexing of vector data. This means you can index a large volume of high-dimensional
vectors
and manage these indexes in a cost-effective and performance-optimized way. The service leverages underlying AWS infrastructure to ensure that indexing scales
seamlessly with data size.
? Nearest Neighbor Search Capability: OpenSearch Service's nearest neighbor
search capability allows for fast and efficient searches over vector data. This is essential for applications like product recommendation engines, where the system
needs to quickly find the most similar items based on a user's query or behavior.
? AWS AI Practitioner References:
The other options do not directly relate to building vector database applications:
? A. Integration with Amazon S3 for object storage is about storing data objects, not vector-based searching or indexing.
? B. Support for geospatial indexing and queries is related to location-based data, not vectors used in machine learning.
? D. Ability to perform real-time analysis on streaming data relates to analyzing incoming data streams, which is different from the vector search capabilities.
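For illustration, here is a k-NN-enabled index definition of the kind used for vector search in OpenSearch, expressed as a Python dict. The settings follow the OpenSearch k-NN plugin's conventions; the index name, field name, and embedding dimension (768) are placeholder assumptions.

```python
# Illustrative index body for OpenSearch vector search via the k-NN plugin.
knn_index_body = {
    "settings": {"index.knn": True},  # enable nearest neighbor search on this index
    "mappings": {
        "properties": {
            # each document stores a 768-dimensional embedding vector
            "embedding": {"type": "knn_vector", "dimension": 768},
            "product_id": {"type": "keyword"},
        }
    },
}

# With the opensearch-py client (needs a running cluster):
# client.indices.create(index="products", body=knn_index_body)
```

Once documents are indexed with their embeddings, nearest neighbor queries against the `embedding` field power semantic search and recommendation use cases.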
NEW QUESTION 70
A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short
and written in a specific language.
Which solution will align the LLM response quality with the company's expectations?
Answer: A
Explanation:
Adjusting the prompt is the correct solution to align the LLM outputs with the company's expectations for short, specific language responses.
? Adjust the Prompt:
? Why Option A is Correct:
? Why Other Options are Incorrect:
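A minimal sketch of prompt adjustment: each user query is wrapped in explicit length and language constraints before being sent to the model. The template wording and limits below are assumptions for illustration, not an official format.

```python
def build_prompt(user_query, language="German", max_sentences=2):
    """Wrap a user query in instructions that constrain response
    length and language; the constraint wording is illustrative."""
    return (
        f"Answer in {language}. "
        f"Use at most {max_sentences} sentences and no more than 40 words.\n\n"
        f"Question: {user_query}"
    )

prompt = build_prompt(
    "Which headphones pair well with my phone?", language="Spanish"
)
```

The same technique works with any pre-trained LLM, including models invoked through Amazon Bedrock, because it changes only the input text, not the model.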
NEW QUESTION 73
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company
does not need to access the model predictions immediately.
Which Amazon SageMaker inference option will meet these requirements?
A. Batch transform
B. Real-time inference
C. Serverless inference
D. Asynchronous inference
Answer: A
Explanation:
Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal for scenarios where immediate predictions are not required,
and the inference can be done on large datasets that are multiple gigabytes in size. This method processes data in batches, making it suitable for analyzing
archived data without the need for real-time access to predictions.
? Option A (Correct): "Batch transform": This is the correct answer because batch
transform is optimized for handling large datasets and is suitable when immediate access to predictions is not required.
? Option B: "Real-time inference" is incorrect because it is used for low-latency, real-time prediction needs, which is not required in this case.
? Option C: "Serverless inference" is incorrect because it is designed for small-scale, intermittent inference requests, not for large batch processing.
? Option D: "Asynchronous inference" is incorrect because it is designed for queuing individual near-real-time requests with large payloads, whereas batch transform is more suitable for offline processing of very large datasets.
AWS AI Practitioner References:
? Batch Transform on AWS SageMaker: AWS recommends using batch transform for large datasets when real-time processing is not needed, ensuring cost-effectiveness and scalability.
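As a sketch, the helper below assembles the keyword arguments that SageMaker's CreateTransformJob API expects; the job name, model name, bucket paths, and instance type are placeholders.

```python
def batch_transform_request(job_name, model_name, s3_input, s3_output,
                            instance_type="ml.m5.xlarge", instance_count=1):
    """Assemble parameters for SageMaker's CreateTransformJob API.
    All names and S3 URIs here are placeholder values."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": s3_input}
            },
            "SplitType": "Line",  # split large files record by record
        },
        "TransformOutput": {"S3OutputPath": s3_output},
        "TransformResources": {
            "InstanceType": instance_type,
            "InstanceCount": instance_count,
        },
    }

request = batch_transform_request(
    "archived-data-job", "my-trained-model",
    "s3://example-bucket/archive/", "s3://example-bucket/predictions/",
)
# boto3.client("sagemaker").create_transform_job(**request)  # needs AWS credentials
```

Because the job writes its predictions back to S3, no endpoint stays running after processing completes, which is what makes batch transform cost-effective for archived data.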
NEW QUESTION 76
An e-commerce company wants to build a solution to determine customer sentiments based on written customer reviews of products.
Which AWS services meet these requirements? (Select TWO.)
A. Amazon Lex
B. Amazon Comprehend
C. Amazon Polly
D. Amazon Bedrock
E. Amazon Rekognition
Answer: BD
Explanation:
To determine customer sentiments based on written customer reviews, the company can use Amazon Comprehend and Amazon Bedrock.
? Amazon Comprehend:
? Amazon Bedrock:
? Why Other Options are Incorrect:
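A small sketch of the Comprehend path: DetectSentiment returns per-label confidence scores, which can be reduced to a single sentiment label per review. The sample response below is fabricated for illustration.

```python
def top_sentiment(response):
    """Pick the dominant label from a Comprehend DetectSentiment response."""
    scores = response["SentimentScore"]  # e.g. {"Positive": 0.93, ...}
    return max(scores, key=scores.get).upper()

# Live call (needs AWS credentials; shown for context only):
# import boto3
# comprehend = boto3.client("comprehend")
# response = comprehend.detect_sentiment(Text=review_text, LanguageCode="en")

sample = {  # fabricated example of a DetectSentiment response
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.93, "Negative": 0.02, "Neutral": 0.04, "Mixed": 0.01
    },
}
```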
NEW QUESTION 81
A company wants to use AI to protect its application from threats. The AI solution needs to check if an IP address is from a suspicious source.
Which solution meets these requirements?
Answer: C
Explanation:
An anomaly detection system is suitable for identifying unusual patterns or behaviors, such as suspicious IP addresses, which might indicate a potential threat.
? Anomaly Detection:
? Why Option C is Correct:
? Why Other Options are Incorrect:
Thus, C is the correct answer for detecting suspicious IP addresses.
NEW QUESTION 84
A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new, related tasks.
Which ML strategy meets these requirements?
Answer: B
Explanation:
Transfer learning is the correct strategy for adapting pre-trained models for new, related tasks without creating models from scratch.
? Transfer Learning:
? Why Option B is Correct:
? Why Other Options are Incorrect:
NEW QUESTION 85
A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow
marbles. What is the probability of choosing a green marble from the jar?"
Which solution meets these requirements with the LEAST operational overhead?
A. Use supervised learning to create a regression model that will predict probability.
B. Use reinforcement learning to train a model to return the probability.
C. Use code that will calculate probability by using simple rules and computations.
D. Use unsupervised learning to create a model that will estimate probability density.
Answer: C
Explanation:
The problem involves a simple probability calculation that can be handled efficiently by straightforward mathematical rules and computations. Using machine
learning techniques would introduce unnecessary complexity and operational overhead.
? Option C (Correct): "Use code that will calculate probability by using simple rules and computations": This is the correct answer because it directly solves the
problem with minimal overhead, using basic probability rules.
? Option A: "Use supervised learning to create a regression model" is incorrect as it overcomplicates the solution for a simple probability problem.
? Option B: "Use reinforcement learning to train a model" is incorrect because reinforcement learning is not needed for a simple probability calculation.
? Option D: "Use unsupervised learning to create a model" is incorrect as unsupervised learning is not applicable to this task.
AWS AI Practitioner References:
? Choosing the Right Solution for AI Tasks: AWS recommends using the simplest and most efficient approach to solve a given problem, avoiding unnecessary
machine learning techniques for straightforward tasks.
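The marble question from the stem reduces to one line of arithmetic, which illustrates why plain code has the least operational overhead here:

```python
from fractions import Fraction

def marble_probability(counts, color):
    """Probability of drawing one marble of the given color from the jar."""
    total = sum(counts.values())
    return Fraction(counts[color], total)

jar = {"red": 6, "green": 4, "yellow": 3}
p_green = marble_probability(jar, "green")  # 4/13
```

No training data, model hosting, or retraining is involved; the answer is exact and the code is trivially maintainable.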
NEW QUESTION 87
A student at a university is copying content from generative AI to write essays. Which challenge of responsible generative AI does this scenario represent?
A. Toxicity
B. Hallucinations
C. Plagiarism
D. Privacy
Answer: C
Explanation:
The scenario where a student copies content from generative AI to write essays represents the challenge of plagiarism in responsible AI use.
? Plagiarism:
? Why Option C is Correct:
? Why Other Options are Incorrect:
NEW QUESTION 88
A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other
languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.
Which metric should the company use to evaluate the accuracy of the translations?
A. Bilingual Evaluation Understudy (BLEU)
B. Root mean squared error (RMSE)
C. Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D. F1 score
Answer: A
Explanation:
BLEU (Bilingual Evaluation Understudy) is a metric used to evaluate the accuracy of machine-generated translations by comparing them against reference
translations. It is commonly used for translation tasks to measure how close the generated output is to professional human translations.
? Option A (Correct): "Bilingual Evaluation Understudy (BLEU)": This is the correct answer because BLEU is specifically designed to evaluate the quality of
translations, making it suitable for the company's use case.
? Option B: "Root mean squared error (RMSE)" is incorrect because RMSE is used for regression tasks to measure prediction errors, not translation quality.
? Option C: "Recall-Oriented Understudy for Gisting Evaluation (ROUGE)" is incorrect as it is used to evaluate text summarization, not translation.
? Option D: "F1 score" is incorrect because it is typically used for classification tasks, not for evaluating translation accuracy.
AWS AI Practitioner References:
? Model Evaluation Metrics on AWS: AWS supports various metrics like BLEU for specific use cases, such as evaluating machine translation models.
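To make the metric concrete, here is a simplified unigram-only BLEU sketch. Real BLEU combines clipped 1- to 4-gram precisions with the same brevity penalty, so treat this as an illustration of the idea rather than the full metric.

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    """Simplified BLEU: clipped unigram precision times a brevity penalty.
    Full BLEU geometrically averages clipped 1- to 4-gram precisions."""
    cand, ref = candidate.split(), reference.split()
    # Clipped overlap: each candidate word counts at most as often as in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Brevity penalty discourages translations shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

perfect = unigram_bleu("the cat sat", "the cat sat")   # 1.0
partial = unigram_bleu("the dog", "the cat sat")       # between 0 and 1
```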
NEW QUESTION 92
A company is building a solution to generate images for protective eyewear. The solution must have high accuracy and must minimize the risk of incorrect
annotations.
Which solution will meet these requirements?
Answer: A
Explanation:
Amazon SageMaker Ground Truth Plus is a managed data labeling service that includes human-in-the-loop (HITL) validation. This solution ensures high accuracy
by involving human reviewers to validate the annotations and reduce the risk of incorrect annotations.
? Amazon SageMaker Ground Truth Plus:
? Why Option A is Correct:
? Why Other Options are Incorrect:
Thus, A is the correct answer for generating high-accuracy images with minimized annotation risks.
NEW QUESTION 96
A company is building an application that needs to generate synthetic data that is based on existing data.
Which type of model can the company use to meet this requirement?
Answer: A
Explanation:
Generative adversarial networks (GANs) are a type of deep learning model used for generating synthetic data based on existing datasets. GANs consist of two
neural networks (a generator and a discriminator) that work together to create realistic data.
? Option A (Correct): "Generative adversarial network (GAN)": This is the correct
answer because GANs are specifically designed for generating synthetic data that closely resembles the real data they are trained on.
? Option B: "XGBoost" is a gradient boosting algorithm for classification and
regression tasks, not for generating synthetic data.
? Option C: "Residual neural network" is primarily used for improving the performance of deep networks, not for generating synthetic data.
? Option D: "WaveNet" is a model architecture designed for generating raw audio waveforms, not synthetic data in general.
AWS AI Practitioner References:
? GANs on AWS for Synthetic Data Generation: AWS supports the use of GANs for creating synthetic datasets, which can be crucial for applications like training
machine learning models in environments where real data is scarce or sensitive.
NEW QUESTION 97
A company makes forecasts each quarter to decide how to optimize operations to meet expected demand. The company uses ML models to make these
forecasts.
An AI practitioner is writing a report about the trained ML models to provide transparency and explainability to company stakeholders.
What should the AI practitioner include in the report to meet the transparency and explainability requirements?
Answer: B
Explanation:
Partial dependence plots (PDPs) are visual tools used to show the relationship between a feature (or a set of features) in the data and the predicted outcome of a machine learning model. They are highly effective for providing transparency and explainability of the model's behavior to stakeholders by illustrating how different feature values influence the model's predictions.
Answer: B
Explanation:
Increasing the number of epochs during model training allows the model to learn from the data over more iterations, potentially improving its accuracy up to a
certain point. This is a common practice when attempting to reach a specific level of accuracy.
? Option B (Correct): "Increase the epochs": This is the correct answer because
increasing epochs allows the model to learn more from the data, which can lead to higher accuracy.
? Option A: "Decrease the batch size" is incorrect as it mainly affects training speed
and may lead to overfitting but does not directly relate to achieving a specific accuracy level.
? Option C: "Decrease the epochs" is incorrect as it would reduce the training time,
possibly preventing the model from reaching the desired accuracy.
? Option D: "Increase the temperature parameter" is incorrect because temperature affects the randomness of predictions, not model accuracy.
AWS AI Practitioner References:
? Model Training Best Practices on AWS: AWS suggests adjusting training parameters, like the number of epochs, to improve model performance.
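A toy training loop illustrates the effect: with more epochs, gradient descent has more iterations in which to reduce training loss. The data and learning rate below are illustrative.

```python
def train(epochs, lr=0.05):
    """Fit w in y = w * x by gradient descent on mean squared error;
    returns the final training loss."""
    data = [(1, 2), (2, 4), (3, 6)]  # ground truth: w = 2
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

loss_short = train(epochs=5)
loss_long = train(epochs=50)  # more epochs, lower training loss
```

Note the caveat implied in the explanation: beyond some point extra epochs stop helping and can overfit, so the epoch count is tuned against validation accuracy, not increased indefinitely.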
Answer: B
Explanation:
Ongoing pre-training when fine-tuning a foundation model (FM) improves model performance over time by continuously learning from new data.
? Ongoing Pre-Training:
? Why Option B is Correct:
? Why Other Options are Incorrect:
A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.
Answer: A
Explanation:
Amazon Bedrock needs the appropriate IAM role with permission to access and decrypt data stored in Amazon S3. If the data is encrypted with Amazon S3
managed keys (SSE-S3), the role that Amazon Bedrock assumes must have the required permissions to access and decrypt the encrypted data.
? Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key": This is the correct
solution as it ensures that the AI model can access the encrypted data securely without changing the encryption settings or compromising data security.
? Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing sensitive
data to the public.
? Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect as it does not address the encryption and
permission issue.
? Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References:
? Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.
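An illustrative policy for the role that Amazon Bedrock assumes, written as a Python dict; the bucket name, account ID, and KMS key ARN are placeholders. With SSE-S3 specifically, S3 decrypts transparently and the S3 permissions suffice; the kms:Decrypt statement applies when objects are encrypted with SSE-KMS instead.

```python
# Placeholder ARNs throughout; substitute real resources before use.
bedrock_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {  # read access to the training/knowledge data in S3
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        },
        {  # needed only if objects use SSE-KMS encryption
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
        },
    ],
}
```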
A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return
inappropriate or unwanted images.
Which solution will meet these requirements?
Answer: A
Explanation:
Moderation APIs, such as Amazon Rekognition's Content Moderation API, can help filter and block inappropriate or unwanted images from being returned by a
chatbot. These APIs are specifically designed to detect and manage undesirable content in images.
? Option A (Correct): "Implement moderation APIs": This is the correct answer because moderation APIs are designed to identify and filter inappropriate content,
ensuring the chatbot does not return unwanted images.
? Option B: "Retrain the model with a general public dataset" is incorrect because retraining does not directly prevent inappropriate content from being returned.
? Option C: "Perform model validation" is incorrect as it ensures model correctness, not content moderation.
? Option D: "Automate user feedback integration" is incorrect because user feedback does not prevent inappropriate images in real-time.
AWS AI Practitioner References:
? AWS Content Moderation Services: AWS provides moderation APIs for filtering unwanted content from applications.
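A sketch of how a moderation check might gate chatbot images: Rekognition's DetectModerationLabels returns labels with confidence scores that can be thresholded before an image is shown. The sample responses below are fabricated for illustration.

```python
def is_image_safe(moderation_response, min_confidence=80.0):
    """Return False if a DetectModerationLabels response flags any
    label at or above the confidence threshold."""
    return not any(
        label["Confidence"] >= min_confidence
        for label in moderation_response.get("ModerationLabels", [])
    )

# Live call (needs AWS credentials; shown for context only):
# import boto3
# rek = boto3.client("rekognition")
# response = rek.detect_moderation_labels(Image={"Bytes": image_bytes})

flagged = {"ModerationLabels": [{"Name": "Suggestive", "Confidence": 97.2}]}
clean = {"ModerationLabels": []}
```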