DP 203 Demo

The document provides an overview of the DP-203 exam for Azure Data Engineer Associate, including a case study on Contoso's data management strategies and requirements for integrating on-premises data with Azure Synapse Analytics. It outlines specific requirements for sales transaction datasets, customer sentiment analytics, and data integration, along with various questions and answers related to the exam content. Additionally, it includes a brief introduction to a second case study involving Litware, Inc., which operates convenience stores and aims to enhance its analytics capabilities.


Microsoft

DP-203 Exam
Azure Data Engineer Associate

Questions & Answers


(Demo Version - Limited Content)

Thank you for Downloading DP-203 exam PDF Demo

Get Full File:

https://authorizedumps.com/dp-203-exam-dumps/

www.authorizedumps.com
Questions & Answers PDF Page 2

Version: 24.3

Topic 1, Contoso Case Study


Transactional Data
Contoso has three years of customer, transactional, operation, sourcing, and supplier data comprising 10 billion records stored across multiple on-premises Microsoft SQL Server servers. The SQL Server instances contain data from various operational systems. The data is loaded into the instances by using SQL Server Integration Services (SSIS) packages.
You estimate that combining all product sales transactions into a company-wide sales transactions
dataset will result in a single table that contains 5 billion rows, with one row per transaction.
Most queries targeting the sales transactions data will be used to identify which products were sold in retail stores and which products were sold online during different time periods. Sales transaction data that is older than three years will be removed monthly.
You plan to create a retail store table that will contain the address of each retail store. The table will
be approximately 2 MB. Queries for retail store sales will include the retail store addresses.
You plan to create a promotional table that will contain a promotion ID. The promotion ID will be
associated to a specific product. The product will be identified by a product ID. The table will be
approximately 5 GB.
Streaming Twitter Data
The ecommerce department at Contoso develops an Azure logic app that captures trending Twitter feeds referencing the company's products and pushes the products to Azure Event Hubs.
Planned Changes
Contoso plans to implement the following changes:
* Load the sales transaction dataset to Azure Synapse Analytics.
* Integrate on-premises data stores with Azure Synapse Analytics by using SSIS packages.
* Use Azure Synapse Analytics to analyze Twitter feeds to assess customer sentiments about
products.
Sales Transaction Dataset Requirements
Contoso identifies the following requirements for the sales transaction dataset:
• Partition data that contains sales transaction records. Partitions must be designed to provide efficient loads by month. Boundary values must belong to the partition on the right.
• Ensure that queries joining and filtering sales transaction records based on product ID complete as
quickly as possible.
• Implement a surrogate key to account for changes to the retail store addresses.
• Ensure that data storage costs and performance are predictable.
• Minimize how long it takes to remove old records.
Customer Sentiment Analytics Requirement

Contoso identifies the following requirements for customer sentiment analytics:


• Allow Contoso users to use PolyBase in an Azure Synapse Analytics dedicated SQL pool to query the content of the data records that host the Twitter feeds. Data must be protected by using row-level security (RLS). The users must be authenticated by using their own Azure AD credentials.
• Maximize the throughput of ingesting Twitter feeds from Event Hubs to Azure Storage without
purchasing additional throughput or capacity units.
• Store Twitter feeds in Azure Storage by using Event Hubs Capture. The feeds will be converted into
Parquet files.
• Ensure that the data store supports Azure AD-based access control down to the object level.
• Minimize administrative effort to maintain the Twitter feed data records.
• Purge Twitter feed data records that are older than two years.
Data Integration Requirements
Contoso identifies the following requirements for data integration:
Use an Azure service that leverages the existing SSIS packages to ingest on-premises data into
datasets stored in a dedicated SQL pool of Azure Synapse Analytics and transform the data.
Identify a process to ensure that changes to the ingestion and transformation activities can be
version controlled and developed independently by multiple data engineers.

Question: 1

DRAG DROP

You need to ensure that the Twitter feed data can be analyzed in the dedicated SQL pool. The
solution must meet the customer sentiment analytics requirements.
Which three Transact-SQL DDL commands should you run in sequence? To answer, move the
appropriate commands from the list of commands to the answer area and arrange them in the
correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct
orders you select.

Answer:

Explanation:


Scenario: Allow Contoso users to use PolyBase in an Azure Synapse Analytics dedicated SQL pool to
query the content of the data records that host the Twitter feeds. Data must be protected by using
row-level security (RLS). The users must be authenticated by using their own Azure AD credentials.

Box 1: CREATE EXTERNAL DATA SOURCE


External data sources are used to connect to storage accounts.

Box 2: CREATE EXTERNAL FILE FORMAT


CREATE EXTERNAL FILE FORMAT creates an external file format object that defines external data
stored in Azure Blob Storage or Azure Data Lake Storage. Creating an external file format is a
prerequisite for creating an external table.

Box 3: CREATE EXTERNAL TABLE AS SELECT

When used in conjunction with the CREATE TABLE AS SELECT statement, selecting from an external
table imports data into a table within the SQL pool. In addition to the COPY statement, external
tables are useful for loading data.

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables
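The three commands form a dependency chain: the data source and file format must exist before any external table that references them. The following is a minimal, hypothetical sketch of that chain; all object names, columns, and storage paths are illustrative, not taken from the case study, and authentication details (credentials) are omitted:

```sql
-- 1. Point the dedicated SQL pool at the storage account holding the captured feeds.
CREATE EXTERNAL DATA SOURCE TwitterFeedSource
WITH (
    LOCATION = 'abfss://feeds@contosodatalake.dfs.core.windows.net',
    TYPE = HADOOP   -- PolyBase external data source in a dedicated SQL pool
);

-- 2. Describe the Parquet files produced by Event Hubs Capture.
CREATE EXTERNAL FILE FORMAT ParquetFileFormat
WITH ( FORMAT_TYPE = PARQUET );

-- 3. Expose the files as a table that PolyBase can query.
CREATE EXTERNAL TABLE dbo.TwitterFeeds
(
    TweetId   BIGINT,
    TweetText NVARCHAR(4000),
    CreatedOn DATETIME2
)
WITH
(
    LOCATION = '/twitter/',
    DATA_SOURCE = TwitterFeedSource,
    FILE_FORMAT = ParquetFileFormat
);
```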

Question: 2
HOTSPOT
You need to design a data storage structure for the product sales transactions. The solution must
meet the sales transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer
area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: Hash
Scenario:
Ensure that queries joining and filtering sales transaction records based on product ID complete as
quickly as possible.

A hash distributed table can deliver the highest query performance for joins and aggregations on
large tables.

Box 2: Set the distribution column to the sales date.

Scenario: Partition data that contains sales transaction records. Partitions must be designed to
provide efficient loads by month. Boundary values must belong to the partition on the right.

Reference:
https://rajanieshkaushikk.com/2020/09/09/how-to-choose-right-data-distribution-strategy-for-
azure-synapse/

Question: 3
HOTSPOT
You need to design the partitions for the product sales transactions. The solution must meet the sales
transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer
area.
NOTE: Each correct selection is worth one point.


Answer:
Explanation:

Box 1: Sales date


Scenario: Contoso requirements for the sales transaction dataset include:
Partition data that contains sales transaction records. Partitions must be designed to provide efficient
loads by month. Boundary values must belong to the partition on the right.

Box 2: An Azure Synapse Analytics Dedicated SQL pool


Scenario: Contoso requirements for the sales transaction dataset include:
Ensure that data storage costs and performance are predictable.

The size of a dedicated SQL pool (formerly SQL DW) is determined by Data Warehousing Units
(DWU).
Dedicated SQL pool (formerly SQL DW) stores data in relational tables with columnar storage. This
format significantly reduces the data storage costs, and improves query performance.


Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-
overview-what-is

Question: 4
HOTSPOT
You need to implement an Azure Synapse Analytics database object for storing the sales transactions data. The solution must meet the sales transaction dataset requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: Create table


Scenario: Load the sales transaction dataset to Azure Synapse Analytics

Box 2: RANGE RIGHT FOR VALUES


Scenario: Partition data that contains sales transaction records. Partitions must be designed to
provide efficient loads by month. Boundary values must belong to the partition on the right.

RANGE RIGHT: Specifies the boundary value belongs to the partition on the right (higher values).
FOR VALUES ( boundary_value [,...n] ): Specifies the boundary values for the partition.

Scenario: Load the sales transaction dataset to Azure Synapse Analytics.


Contoso identifies the following requirements for the sales transaction dataset:

Partition data that contains sales transaction records. Partitions must be designed to provide efficient
loads by month. Boundary values must belong to the partition on the right.
Ensure that queries joining and filtering sales transaction records based on product ID complete as
quickly as possible.
Implement a surrogate key to account for changes to the retail store addresses.
Ensure that data storage costs and performance are predictable.
Minimize how long it takes to remove old records.

Reference:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-azure-sql-data-warehouse
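As a hypothetical sketch of such a table definition (column names and boundary dates are illustrative, not from the case study):

```sql
-- Fact table partitioned by month with RANGE RIGHT:
-- each boundary date belongs to the partition on its right.
CREATE TABLE dbo.FactSalesTransactions
(
    TransactionId BIGINT        NOT NULL,
    ProductId     INT           NOT NULL,
    StoreId       INT           NOT NULL,
    SaleDate      DATE          NOT NULL,
    Amount        DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (ProductId),      -- fast joins/filters on product ID
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( SaleDate RANGE RIGHT FOR VALUES
        ('2021-01-01', '2021-02-01', '2021-03-01') )  -- one boundary per month
);
```

With this layout, the monthly removal of old records can become a fast metadata operation (for example, `ALTER TABLE ... SWITCH PARTITION` to an archive table) rather than a row-by-row delete.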

Question: 5

You need to integrate the on-premises data sources and Azure Synapse Analytics. The solution must
meet the data integration requirements.
Which type of integration runtime should you use?

A. Azure-SSIS integration runtime


B. self-hosted integration runtime
C. Azure integration runtime

Answer: A
Explanation:

Scenario: Use an Azure service that leverages the existing SSIS packages to ingest on-premises data into datasets stored in a dedicated SQL pool of Azure Synapse Analytics and transform the data.
The Azure-SSIS integration runtime is designed to lift and shift existing SSIS packages so that they run in Azure.

Question: 6

You need to implement the surrogate key for the retail store table. The solution must meet the sales
transaction
dataset requirements.

What should you create?

A. a table that has an IDENTITY property


B. a system-versioned temporal table
C. a user-defined SEQUENCE object
D. a table that has a FOREIGN KEY constraint

Answer: A
Explanation:

Scenario: Implement a surrogate key to account for changes to the retail store addresses.

A surrogate key on a table is a column with a unique identifier for each row. The key is not generated


from the table data. Data modelers like to create surrogate keys on their tables when they design
data warehouse models. You can use the IDENTITY property to achieve this goal simply and
effectively without affecting load performance.

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-
tables-identity
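A hypothetical sketch of such a table follows; the names are illustrative, and the REPLICATE distribution is an assumption based on the retail store table being only about 2 MB:

```sql
CREATE TABLE dbo.DimRetailStore
(
    StoreKey     INT IDENTITY(1, 1) NOT NULL,  -- surrogate key, generated by the pool
    StoreId      INT                NOT NULL,  -- business key from the source system
    StoreAddress NVARCHAR(200)      NOT NULL   -- address, which may change over time
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);
```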

Question: 7

HOTSPOT

You need to design an analytical storage solution for the transactional data. The solution must meet the sales transaction dataset requirements.

What should you include in the solution? To answer, select the appropriate options in the answer
area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: Round-robin
Round-robin tables are useful for improving loading speed.

Scenario: Partition data that contains sales transaction records. Partitions must be designed to
provide efficient loads by month.

Box 2: Hash
Hash-distributed tables improve query performance on large fact tables.

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-
tables-distribute
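A hypothetical load pattern that combines the two distributions (all names are illustrative):

```sql
-- Stage incoming transactions round-robin: rows spread evenly, fastest load.
CREATE TABLE stg.SalesTransactions
(
    TransactionId BIGINT,
    ProductId     INT,
    SaleDate      DATE,
    Amount        DECIMAL(18,2)
)
WITH ( DISTRIBUTION = ROUND_ROBIN, HEAP );

-- Then redistribute by hash for query performance on the large fact table.
CREATE TABLE dbo.SalesTransactions
WITH ( DISTRIBUTION = HASH (ProductId), CLUSTERED COLUMNSTORE INDEX )
AS
SELECT * FROM stg.SalesTransactions;
```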

Question: 8

You need to design a data retention solution for the Twitter feed data records. The solution must
meet the customer sentiment analytics requirements.
Which Azure Storage functionality should you include in the solution?

A. time-based retention
B. change feed
C. soft delete
D. lifecycle management

Answer: D
Explanation:

Scenario: Purge Twitter feed data records that are older than two years.
Azure Storage lifecycle management offers a rule-based policy that can expire data at the end of its lifecycle.

Question: 9

HOTSPOT


You need to design a data ingestion and storage solution for the Twitter feeds. The solution must
meet the customer sentiment analytics requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: Configure Event Hubs partitions


Scenario: Maximize the throughput of ingesting Twitter feeds from Event Hubs to Azure Storage
without purchasing additional throughput or capacity units.

Event Hubs is designed to help with processing of large volumes of events. Event Hubs throughput is
scaled by using partitions and throughput-unit allocations.

Event Hubs traffic is controlled by TUs (standard tier). Auto-inflate enables you to start small with
the minimum required TUs you choose. The feature then scales automatically to the maximum limit
of TUs you need, depending on the increase in your traffic.

Box 2: An Azure Data Lake Storage Gen2 account

Scenario: Ensure that the data store supports Azure AD-based access control down to the object
level.

Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-
based access control (Azure RBAC) and POSIX-like access control lists (ACLs).


Reference:
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features

https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control

Question: 10
DRAG DROP
You need to implement versioned changes to the integration pipelines. The solution must meet the
data integration requirements.
In which order should you perform the actions? To answer, move all actions from the list of actions to
the answer area and arrange them in the correct order.

Answer:

Scenario: Identify a process to ensure that changes to the ingestion and transformation activities can
be version-controlled and developed independently by multiple data engineers.


Step 1: Create a repository and a main branch


You need a Git repository in Azure Pipelines, TFS, or GitHub with your app.

Step 2: Create a feature branch

Step 3: Create a pull request

Step 4: Merge changes


Merge feature branches into the main branch using pull requests.

Step 5: Publish changes

Reference:
https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/pipeline-options-for-git

Question: 11

You need to design a data retention solution for the Twitter feed data records. The solution must
meet the customer sentiment analytics requirements.

Which Azure Storage functionality should you include in the solution?

A. change feed
B. soft delete
C. time-based retention
D. lifecycle management

Answer: D
Explanation:

Scenario: Purge Twitter feed data records that are older than two years.

Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need
for access often drops drastically as the data ages. Some data remains idle in the cloud and is rarely
accessed once stored. Some data sets expire days or months after creation, while other data sets are
actively read and modified throughout their lifetimes. Azure Storage lifecycle management offers a
rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire
data at the end of the data lifecycle.

Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview

Topic 2, Litware, inc. Case Study

Overview

This is a case study. Case studies are not timed separately. You can use as much exam time as you
would like to complete each case. However, there may be additional case studies and sections on


this exam. You must manage your time to ensure that you are able to complete all questions included
on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is
provided in the case study. Case studies might contain exhibits and other resources that provide
more information about the scenario that is described in the case study. Each question is
independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your
answers and to make changes before you move to the next section of the exam. After you begin a
new section, you cannot return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane
to explore the content of the case study before you answer the questions. Clicking these buttons
displays information such as business requirements, existing environment, and problem statements.
If the case study has an All Information tab, note that the information displayed is identical to the
information displayed on the subsequent tabs. When you are ready to answer a question, click the
Question button to return to the question.

Overview

Litware, Inc. owns and operates 300 convenience stores across the US. The company sells a variety of
packaged foods and drinks, as well as a variety of prepared foods, such as sandwiches and pizzas.

Litware has a loyalty club whereby members can get daily discounts on specific items by providing
their membership number at checkout.

Litware employs business analysts who prefer to analyze data by using Microsoft Power BI, and data
scientists who prefer analyzing data in Azure Databricks notebooks.

Requirements

Business Goals

Litware wants to create a new analytics environment in Azure to meet the following requirements:

See inventory levels across the stores. Data must be updated as close to real time as possible.
Execute ad hoc analytical queries on historical data to identify whether the loyalty club discounts
increase sales of the discounted products.
Every four hours, notify store employees about how many prepared food items to produce based on
historical demand from the sales data.

Technical Requirements

Litware identifies the following technical requirements:

Minimize the number of different Azure services needed to achieve the business goals.


Use platform as a service (PaaS) offerings whenever possible and avoid having to provision virtual
machines that must be managed by Litware.
Ensure that the analytical data store is accessible only to the company's on-premises network and
Azure services.
Use Azure Active Directory (Azure AD) authentication whenever possible.
Use the principle of least privilege when designing security.
Stage Inventory data in Azure Data Lake Storage Gen2 before loading the data into the analytical data
store. Litware wants to remove transient data from Data Lake Storage once the data is no longer in
use. Files that have a modified date that is older than 14 days must be removed.
Limit the business analysts' access to customer contact information, such as phone numbers,
because this type of data is not analytically relevant.
Ensure that you can quickly restore a copy of the analytical data store within one hour in the event of
corruption or accidental deletion.

Planned Environment

Litware plans to implement the following environment:

The application development team will create an Azure event hub to receive real-time sales data,
including store number, date, time, product ID, customer loyalty number, price, and discount
amount, from the point of sale (POS) system and output the data to data storage in Azure.
Customer data, including name, contact information, and loyalty number, comes from Salesforce, a
SaaS application, and can be imported into Azure once every eight hours. Row modified dates are
not trusted in the source table.
Product data, including product ID, name, and category, comes from Salesforce and can be imported
into Azure once every eight hours. Row modified dates are not trusted in the source table.
Daily inventory data comes from a Microsoft SQL server located on a private network.
Litware currently has 5 TB of historical sales data and 100 GB of customer data. The company expects
approximately 100 GB of new data per month for the next year.
Litware will build a custom application named FoodPrep to provide store employees with the
calculation results of how many prepared food items to produce every four hours.
Litware does not plan to implement Azure ExpressRoute or a VPN between the on-premises network
and Azure.

Question: 12

HOTSPOT

Which Azure Data Factory components should you recommend using together to import the daily
inventory data from the SQL server to Azure Data Lake Storage? To answer, select the appropriate
options in the answer area.

NOTE: Each correct selection is worth one point.


Answer:
Explanation:


Box 1: Self-hosted integration runtime


A self-hosted IR is capable of running the copy activity between cloud data stores and a data store in a private network.

Box 2: Schedule trigger


Schedule every 8 hours

Box 3: Copy activity

Scenario:
Customer data, including name, contact information, and loyalty number, comes from Salesforce and
can be imported into Azure once every eight hours. Row modified dates are not trusted in the source
table.
Product data, including product ID, name, and category, comes from Salesforce and can be imported
into Azure once every eight hours. Row modified dates are not trusted in the source table.

Question: 13
What should you do to improve high availability of the real-time data processing solution?

A. Deploy identical Azure Stream Analytics jobs to paired regions in Azure.


B. Deploy a High Concurrency Databricks cluster.
C. Deploy an Azure Stream Analytics job and use an Azure Automation runbook to check the status of
the job and to start the job if it stops.
D. Set Data Lake Storage to use geo-redundant storage (GRS).


Answer: A
Explanation:

Guarantee Stream Analytics job reliability during service updates


Part of being a fully managed service is the capability to introduce new service functionality and
improvements at a rapid pace. As a result, Stream Analytics can have a service update deploy on a
weekly (or more frequent) basis. No matter how much testing is done there is still a risk that an
existing, running job may break due to the introduction of a bug. If you are running mission critical
jobs, these risks need to be avoided. You can reduce this risk by following Azure’s paired region
model.
Scenario: The application development team will create an Azure event hub to receive real-time
sales data, including store number, date, time, product ID, customer loyalty number, price, and
discount amount, from the point of sale (POS) system and output the data to data storage in Azure

Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-job-reliability

Question: 14

What should you recommend to prevent users outside the Litware on-premises network from
accessing the analytical data store?

A. a server-level virtual network rule


B. a database-level virtual network rule
C. a database-level firewall IP rule
D. a server-level firewall IP rule

Answer: A
Explanation:

Virtual network rules are one firewall security feature that controls whether the database server for
your single databases and elastic pool in Azure SQL Database or for your databases in SQL Data
Warehouse accepts communications that are sent from particular subnets in virtual networks.
Server-level, not database-level: Each virtual network rule applies to your whole Azure SQL Database server, not just to one particular database on the server. In other words, a virtual network rule applies at the server level, not at the database level.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vnet-service-endpoint-rule-
overview

Question: 15

What should you recommend using to secure sensitive customer contact information?

A. data labels
B. column-level security


C. row-level security
D. Transparent Data Encryption (TDE)

Answer: B
Explanation:

Scenario: Limit the business analysts' access to customer contact information, such as phone numbers, because this type of data is not analytically relevant.

Column-level security lets you restrict access to specific database columns, such as the columns that hold customer phone numbers, by granting users SELECT permission only on the columns that are analytically relevant. This keeps sensitive contact information hidden from the business analysts without blocking their access to the rest of the table.

Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-overview

Topic 3, Mix Questions

Question: 16

You have an Azure Data Lake Storage account that has a virtual network service endpoint configured.

You plan to use Azure Data Factory to extract data from the Data Lake Storage account. The data will
then be loaded to a data warehouse in Azure Synapse Analytics by using PolyBase.

Which authentication method should you use to access Data Lake Storage?

A. shared access key authentication


B. managed identity authentication
C. account key authentication
D. service principal authentication

Answer: B
Explanation:

Reference:
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse#use-
polybase-to-load-data-into-azure-sql-data-warehouse

Question: 17

HOTSPOT

You have an Azure subscription that contains the following resources:


An Azure Active Directory (Azure AD) tenant that contains a security group named Group1
An Azure Synapse Analytics SQL pool named Pool1

You need to control the access of Group1 to specific columns and rows in a table in Pool1.

Which Transact-SQL commands should you use? To answer, select the appropriate options in the
answer area.

Answer:
Explanation:

Box 1: GRANT
You can implement column-level security with the GRANT T-SQL statement.

Box 2: CREATE SECURITY POLICY


Implement Row Level Security by using the CREATE SECURITY POLICY Transact-SQL statement

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/column-level-
security
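A hypothetical sketch combining the two statements follows; the table, column, and function names are illustrative, and the predicate logic would depend on how rows map to users in the real schema:

```sql
-- Column-level security: Group1 may read only the listed columns.
GRANT SELECT ON dbo.Sales (SaleId, ProductId, Amount) TO [Group1];

-- Row-level security: an inline predicate function...
CREATE FUNCTION dbo.fn_SalesPredicate (@SalesRep AS NVARCHAR(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @SalesRep = USER_NAME() OR IS_MEMBER('Group1') = 1;
GO

-- ...bound to the table by a security policy.
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE dbo.fn_SalesPredicate(SalesRep) ON dbo.Sales
WITH (STATE = ON);
```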


Question: 18
HOTSPOT

You need to implement an Azure Databricks cluster that automatically connects to Azure Data Lake
Storage Gen2 by using Azure Active Directory (Azure AD) integration.

How should you configure the new cluster? To answer, select the appropriate options in the answer
area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: High Concurrency


Enable Azure Data Lake Storage credential passthrough for a high-concurrency cluster.

Incorrect:
Support for Azure Data Lake Storage credential passthrough on standard clusters is in Public Preview.


Standard clusters with credential passthrough are supported on Databricks Runtime 5.5 and above
and are limited to a single user.

Box 2: Azure Data Lake Storage Gen1 Credential Passthrough


You can authenticate automatically to Azure Data Lake Storage Gen1 and Azure Data Lake Storage
Gen2 from Azure Databricks clusters using the same Azure Active Directory (Azure AD) identity that
you use to log into Azure Databricks. When you enable your cluster for Azure Data Lake Storage
credential passthrough, commands that you run on that cluster can read and write data in Azure Data
Lake Storage without requiring you to configure service principal credentials for access to storage.

Reference:
https://docs.azuredatabricks.net/spark/latest/data-sources/azure/adls-passthrough.html

Question: 19
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Contacts.
Contacts contains a column named Phone.
You need to ensure that users in a specific role only see the last four digits of a phone number when
querying the Phone column.
What should you include in the solution?

A. a default value
B. dynamic data masking
C. row-level security (RLS)
D. column encryption
E. table partitions

Answer: B
Explanation:

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to
designate how much of the sensitive data to reveal with minimal impact on the application layer. It’s
a policy-based security feature that hides the sensitive data in the result set of a query over
designated database fields, while the data in the database is not changed.

Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-overview
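A hypothetical sketch (the role name is illustrative; the arguments of the `partial()` masking function are the exposed prefix length, the padding string, and the exposed suffix length):

```sql
-- Show only the last four digits of Phone to users without UNMASK permission.
ALTER TABLE dbo.Contacts
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');

-- Users who must see the real values can be granted the UNMASK permission:
-- GRANT UNMASK TO [PrivilegedRole];
```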

Question: 20

HOTSPOT

You have an Azure Synapse Analytics dedicated SQL pool that contains the users shown in the
following table.


User1 executes a query on the database, and the query returns the results shown in the following
exhibit.

User1 is the only user who has access to the unmasked data.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic.

NOTE: Each correct selection is worth one point.


Answer:
Explanation:

Box 1: 0
The YearlyIncome column is of the money data type.
The Default masking function: Full masking according to the data types of the designated fields
Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint,
smallmoney, tinyint, float, real).

Box 2: the values stored in the database


Users with administrator privileges are always excluded from masking, and see the original data
without any mask.

Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/dynamic-data-masking-overview

Thank You for trying DP-203 PDF Demo

https://authorizedumps.com/dp-203-exam-dumps/

Start Your DP-203 Preparation

[Limited Time Offer] Use coupon "SAVE20" for an extra 20% discount on the purchase of the PDF file. Test your DP-203 preparation with actual exam questions.

