
EXAM HEIST

Premium exam material


Get certified quickly with the Exam Heist premium exam material.
Everything you need to prepare, learn, and pass your certification exam easily. Lifetime free updates.
First-attempt success guaranteed.
https://www.ExamHeist.com
Microsoft

(DP-700)

Implementing Data Engineering Solutions Using Microsoft Fabric


(beta)

Total: 67 Questions
Link: https://examheist.com/papers/microsoft/dp-700
Question: 1 Exam Heist
Case Study -

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions.
Clicking these buttons displays information such as business requirements, existing environment, and problem
statements. If the case study has an All Information tab, note that the information displayed is identical to the
information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button
to return to the question.

Overview. Company Overview -

Contoso, Ltd. is an online retail company that wants to modernize its analytics platform by moving to Fabric. The
company plans to begin using Fabric for marketing analytics.

Overview. IT Structure -

The company’s IT department has a team of data analysts and a team of data engineers that use analytics
systems.

The data engineers perform the ingestion, transformation, and loading of data. They prefer to use Python or SQL to
transform the data.

The data analysts query data and create semantic models and reports. They are qualified to write queries in Power
Query and T-SQL.

Existing Environment. Fabric -

Contoso has an F64 capacity named Cap1. All Fabric users are allowed to create items.
Contoso has two workspaces named WorkspaceA and WorkspaceB that currently use Pro license mode.

Existing Environment. Source Systems

Contoso has a point of sale (POS) system named POS1 that uses an instance of SQL Server on Azure Virtual
Machines in the same Microsoft Entra tenant as Fabric. The host virtual machine is on a private virtual network that
has public access blocked. POS1 contains all the sales transactions that were processed on the company’s
website.

The company has a software as a service (SaaS) online marketing app named MAR1. MAR1 has seven entities. The
entities contain data that relates to email open rates and interaction rates, as well as website interactions. The
data can be exported from MAR1 by calling REST APIs. Each entity has a different endpoint.

Contoso has been using MAR1 for one year. Data from prior years is stored in Parquet files in an Amazon Simple
Storage Service (Amazon S3) bucket. There are 12 files that range in size from 300 MB to 900 MB and relate to
email interactions.

Existing Environment. Product Data


POS1 contains a product list and related data. The data comes from the following three tables:

Products -

ProductCategories -

ProductSubcategories -
In the data, products are related to product subcategories, and subcategories are related to product categories.

Existing Environment. Azure -


Contoso has a Microsoft Entra tenant that has the following mail-enabled security groups:
DataAnalysts: Contains the data analysts
DataEngineers: Contains the data engineers

Contoso has an Azure subscription.

The company has an existing Azure DevOps organization and creates a new project for repositories that relate to
Fabric.
Existing Environment. User Problems
The VP of marketing at Contoso requires analysis on the effectiveness of different types of email content. It
typically takes a week to manually compile and analyze the data. Contoso wants to reduce the time to less than
one day by using Fabric.

The data engineering team has successfully exported data from MAR1. The team experiences transient
connectivity errors, which cause the data exports to fail.

Requirements. Planned Changes -

Contoso plans to create the following two lakehouses:


Lakehouse1: Will store both raw and cleansed data from the sources
Lakehouse2: Will serve data in a dimensional model to users for analytical queries
Additional items will be added to facilitate data ingestion and transformation.
Contoso plans to use Azure Repos for source control in Fabric.

Requirements. Technical Requirements

The new lakehouses must follow a medallion architecture by using the following three layers: bronze, silver, and
gold. There will be extensive data cleansing required to populate the MAR1 data in the silver layer, including
deduplication, the handling of missing values, and the standardizing of capitalization.
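
As an illustrative aside, a minimal PySpark sketch of this kind of silver-layer cleansing, assuming a bronze Delta table named bronze_email_interactions and illustrative column names (EmailId, OpenRate, Subject) that are not part of the case study:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

# Assumed table and column names, for illustration only.
bronze = spark.read.table("bronze_email_interactions")

silver = (
    bronze
    .dropDuplicates(["EmailId"])                          # deduplication
    .fillna({"OpenRate": 0.0})                            # handle missing values
    .withColumn("Subject", F.initcap(F.col("Subject")))   # standardize capitalization
)

silver.write.mode("overwrite").format("delta").saveAsTable("silver_email_interactions")
```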

Each layer must be fully populated before moving on to the next layer. If any step in populating the lakehouses
fails, an email must be sent to the data engineers.

Data imports must run simultaneously, when possible.


The use of email data from the Amazon S3 bucket must meet the following requirements:
Minimize egress costs associated with cross-cloud data access.
Prevent saving a copy of the raw data in the lakehouses.

Items that relate to data ingestion must meet the following requirements:
The items must be source controlled alongside other workspace items.
Ingested data must land in the bronze layer of Lakehouse1 in the Delta format.

No changes other than changes to the file formats must be implemented before the data lands in the bronze layer.

Development effort must be minimized and a built-in connection must be used to import the source data.

In the event of a connectivity error, the ingestion processes must attempt the connection again.

Lakehouses, data pipelines, and notebooks must be stored in WorkspaceA. Semantic models, reports, and
dataflows must be stored in WorkspaceB.
Once a week, old files that are no longer referenced by a Delta table log must be removed.
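
The weekly cleanup of unreferenced files corresponds to Delta's VACUUM operation; a minimal sketch, assuming an illustrative table name and the default seven-day retention:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

# Remove files that are no longer referenced by the Delta table log and are older
# than the retention threshold (168 hours = 7 days). The table name is illustrative.
DeltaTable.forName(spark, "silver_email_interactions").vacuum(retentionHours=168)
```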

Requirements. Data Transformation


In the POS1 product data, ProductID values are unique. The product dimension in the gold layer must include only
active products from the product list. Active products are identified by an IsActive value of 1.

Some product categories and subcategories are NOT assigned to any product. They are NOT analytically relevant
and must be omitted from the product dimension in the gold layer.

Requirements. Data Security -

Security in Fabric must meet the following requirements:


The data engineers must have read and write access to all the lakehouses, including the underlying files.

The data analysts must only have read access to the Delta tables in the gold layer.

The data analysts must NOT have access to the data in the bronze and silver layers.

The data engineers must be able to commit changes to source control in WorkspaceA.

You need to ensure that the data analysts can access the gold layer lakehouse.

What should you do?

A.Add the DataAnalyst group to the Viewer role for WorkspaceA.


B.Share the lakehouse with the DataAnalysts group and grant the Build reports on the default semantic model
permission.
C.Share the lakehouse with the DataAnalysts group and grant the Read all SQL Endpoint data permission.
D.Share the lakehouse with the DataAnalysts group and grant the Read all Apache Spark permission.

Answer: C

Explanation:

C: Share the lakehouse with the DataAnalysts group and grant the Read all SQL Endpoint data permission. This
approach gives the data analysts read access to the Delta tables in the gold layer while meeting the requirement
that they must not have access to the data in the bronze and silver layers.

By granting the Read all SQL Endpoint data permission, the analysts get the necessary and sufficient access to
query the gold layer data while adhering to the principle of least privilege.

Question: 2 Exam Heist


You have a Fabric workspace.
You have semi-structured data.
You need to read the data by using T-SQL, KQL, and Apache Spark. The data will only be written by using Spark.
What should you use to store the data?

A.a lakehouse
B.an eventhouse
C.a datamart
D.a warehouse

Answer: A

Explanation:

A lakehouse in Microsoft Fabric is designed to handle structured, semi-structured, and unstructured data,
combining the flexibility of a data lake with the structure of a data warehouse. Data can be written by using
Apache Spark, queried with T-SQL through the SQL analytics endpoint, and queried with KQL by creating a OneLake
shortcut to the Delta tables from a KQL database, which makes it suitable for the specified requirements.
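
As a hedged illustration of the write path described above (the file path and table name are assumptions), a PySpark sketch that lands semi-structured JSON in a lakehouse Delta table; once written, the table can be read with T-SQL through the SQL analytics endpoint or, via a OneLake shortcut from a KQL database, with KQL:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

# Read semi-structured JSON from the lakehouse Files area (path is illustrative).
events = spark.read.option("multiLine", "true").json("Files/raw/events/")

# Write it as a managed Delta table so the SQL analytics endpoint can expose it.
events.write.mode("append").format("delta").saveAsTable("events")
```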

Question: 3 Exam Heist


You have a Fabric workspace that contains a warehouse named Warehouse1.
You have an on-premises Microsoft SQL Server database named Database1 that is accessed by using an on-premises data gateway.
You need to copy data from Database1 to Warehouse1.
Which item should you use?

A.a Dataflow Gen1 dataflow


B.a data pipeline
C.a KQL queryset
D.a notebook

Answer: B

Explanation:

B: a data pipeline.

A data pipeline is the most suitable tool for moving data between different sources and destinations. In this
case, you need to copy data from your on-premises Microsoft SQL Server database (Database1) to your Fabric
warehouse (Warehouse1). A data pipeline can efficiently handle this task by allowing you to define and
manage the data transfer process.

Question: 4 Exam Heist


You have a Fabric workspace that contains a warehouse named Warehouse1.
You have an on-premises Microsoft SQL Server database named Database1 that is accessed by using an on-premises data gateway.
You need to copy data from Database1 to Warehouse1.
Which item should you use?

A.an Apache Spark job definition


B.a data pipeline
C.a Dataflow Gen1 dataflow
D.an eventstream

Answer: B

Explanation:

B: a data pipeline.

A data pipeline is specifically designed for orchestrating and automating data movement between different
sources and destinations, which makes it the best choice for copying data from the on-premises Microsoft SQL
Server database (Database1) to the Fabric warehouse (Warehouse1).

Data pipelines in Microsoft Fabric are designed to facilitate the movement and transformation of data
between various sources and destinations. In this scenario, a data pipeline can be configured to copy data
from the on-premises SQL Server database to the Fabric warehouse, utilizing the on-premises data gateway
for secure connectivity.

Question: 5 Exam Heist


You have a Fabric F32 capacity that contains a workspace. The workspace contains a warehouse named DW1 that
is modelled by using MD5 hash surrogate keys.
DW1 contains a single fact table that has grown from 200 million rows to 500 million rows during the past year.
You have Microsoft Power BI reports that are based on Direct Lake. The reports show year-over-year values.
Users report that the performance of some of the reports has degraded over time and some visuals show errors.
You need to resolve the performance issues. The solution must meet the following requirements:
Provide the best query performance.
Minimize operational costs.
What should you do?

A.Change the MD5 hash to SHA256.


B.Increase the capacity.
C.Enable V-Order.
D.Modify the surrogate keys to use a different data type.
E.Create views.

Answer: C

Explanation:

V-Order is a write-time optimization applied to Parquet files, not a separate storage format. It applies
optimizations such as sorting, row group distribution, dictionary encoding, and compression, which reduces both
storage size and the amount of data read during queries. This makes it a suitable choice for large fact tables and
for Direct Lake scenarios where query performance must improve without increasing operational costs.

Because V-Ordered files require fewer resources to read, query performance improves significantly for large
datasets while operational costs stay flat, which satisfies both requirements.
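
For context, a hedged PySpark sketch of how V-Order is typically enabled for Spark-written lakehouse tables (in a warehouse such as DW1, V-Order is managed at the warehouse level instead); the session setting, table property, and table name below are assumptions based on the Fabric documentation available at the time of writing:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

# Enable V-Order for Parquet/Delta writes in the current Spark session.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# For an existing table (name is illustrative): mark it so future writes are
# V-Ordered, then rewrite the current files with OPTIMIZE ... VORDER.
spark.sql("ALTER TABLE FactSales SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')")
spark.sql("OPTIMIZE FactSales VORDER")
```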

Question: 6 Exam Heist


HOTSPOT -
You have a Fabric workspace that contains a warehouse named DW1. DW1 contains the following tables and
columns.

You need to create an output that presents the summarized values of all the order quantities by year and product.
The results must include a summary of the order quantities at the year level for all the products.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Box1 -> SELECT YEAR || Box2 -> ROLLUP(YEAR(SO.ModifiedDATE), P.Name)

Explanation:
Key Details:

The use of ROLLUP ensures compliance with the requirement for summarized values at different grouping
levels.
SUM(SO.OrderQty) calculates the total order quantities.
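
Because the exhibit with the table definitions is not reproduced here, the following is a hedged reconstruction of the intended query with assumed table and column names (SalesOrder, Product, OrderQty, ModifiedDate, Name), run through Spark SQL so the document's code samples stay in one language; the T-SQL form in the warehouse is the same apart from the object names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

# GROUP BY ROLLUP produces the per-product rows plus the year-level subtotal rows
# required by the question. Table and column names are assumptions.
summary = spark.sql("""
    SELECT YEAR(SO.ModifiedDate) AS OrderYear,
           P.Name                AS ProductName,
           SUM(SO.OrderQty)      AS TotalOrderQty
    FROM SalesOrder AS SO
    JOIN Product AS P
      ON P.ProductID = SO.ProductID
    GROUP BY ROLLUP (YEAR(SO.ModifiedDate), P.Name)
""")
summary.show()
```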

Question: 7 Exam Heist


You have a Fabric workspace that contains a lakehouse named Lakehouse1. Data is ingested into Lakehouse1 as
one flat table. The table contains the following columns.

You plan to load the data into a dimensional model and implement a star schema. From the original flat table, you
create two tables named FactSales and DimProduct. You will track changes in DimProduct.
You need to prepare the data.
Which three columns should you include in the DimProduct table? Each correct answer presents part of the
solution.
NOTE: Each correct selection is worth one point.

A.Date
B.ProductName
C.ProductColor
D.TransactionID
E.SalesAmount
F.ProductID

Answer: BCF

Explanation:

B. ProductName: This attribute describes the product and is crucial for understanding and analyzing the data
related to each product.

C. ProductColor: This attribute provides additional information about the product, which can be useful for
analysis, reporting, and segmentation.

F. ProductID: This is the unique identifier for each product and serves as the primary key for the DimProduct
table. It's essential for establishing the relationship between the FactSales table and the DimProduct table.
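
A hedged PySpark sketch of the split; the flat table name (sales_flat) is an assumption, and the column names come from the option list:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Fabric notebook

flat = spark.read.table("sales_flat")  # flat table name is illustrative

# Dimension: one row per product with its descriptive attributes.
dim_product = (
    flat.select("ProductID", "ProductName", "ProductColor")
        .dropDuplicates(["ProductID"])
)
dim_product.write.mode("overwrite").format("delta").saveAsTable("DimProduct")

# Fact: transactional measures plus the ProductID foreign key.
fact_sales = flat.select("TransactionID", "Date", "ProductID", "SalesAmount")
fact_sales.write.mode("overwrite").format("delta").saveAsTable("FactSales")
```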
Question: 8 Exam Heist
You have a Fabric workspace named Workspace1 that contains a notebook named Notebook1.
In Workspace1, you create a new notebook named Notebook2.
You need to ensure that you can attach Notebook2 to the same Apache Spark session as Notebook1.
What should you do?

A.Enable high concurrency for notebooks.


B.Enable dynamic allocation for the Spark pool.
C.Change the runtime version.
D.Increase the number of executors.

Answer: A

Explanation:

A.Enable high concurrency for notebooks: High concurrency allows multiple notebooks to share the same
Apache Spark session. This setting ensures that different notebooks can run simultaneously within the same
session, facilitating collaboration and efficient resource usage.

Question: 9 Exam Heist


You have a Fabric workspace named Workspace1 that contains a lakehouse named Lakehouse1. Lakehouse1
contains the following tables:

Orders -

Customer -

Employee -
The Employee table contains Personally Identifiable Information (PII).
A data engineer is building a workflow that requires writing data to the Customer table; however, the user does
NOT have the elevated permissions required to view the contents of the Employee table.
You need to ensure that the data engineer can write data to the Customer table without reading data from the
Employee table.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A.Share Lakehouse1 with the data engineer.


B.Assign the data engineer the Contributor role for Workspace2.
C.Assign the data engineer the Viewer role for Workspace2.
D.Assign the data engineer the Contributor role for Workspace1.
E.Migrate the Employee table from Lakehouse1 to Lakehouse2.
F.Create a new workspace named Workspace2 that contains a new lakehouse named Lakehouse2.
G.Assign the data engineer the Viewer role for Workspace1.

Answer: DEF

Explanation:

D. Assign the data engineer the Contributor role for Workspace1:

Assigning the Contributor role for Workspace1 grants the data engineer the permissions needed to write data to
the Customer table in Lakehouse1. On its own, this role would also expose the Employee table, which is why the
table is moved to a separate workspace in the remaining steps.
E. Migrate the Employee table from Lakehouse1 to Lakehouse2:

Moving the Employee table, which contains Personally Identifiable Information (PII), to a separate Lakehouse2
helps ensure that the data engineer cannot accidentally or intentionally access it. This action keeps sensitive
data segregated from the data engineer's operational environment.

F. Create a new workspace named Workspace2 that contains a new lakehouse named Lakehouse2:

By creating a new workspace and lakehouse for the Employee table, you further isolate the sensitive data.
The data engineer can still perform their tasks in Workspace1 without accessing Workspace2, ensuring secure
data handling and compliance with privacy requirements.

Question: 10 Exam Heist


You have a Fabric warehouse named DW1. DW1 contains a table that stores sales data and is used by multiple sales
representatives.
You plan to implement row-level security (RLS).
You need to ensure that the sales representatives can see only their respective data.
Which warehouse object do you require to implement RLS?

A.STORED PROCEDURE
B.CONSTRAINT
C.SCHEMA
D.FUNCTION

Answer: D

Explanation:

To implement row-level security (RLS) in a Fabric warehouse such as DW1, you need a FUNCTION to define the
filtering logic. Specifically, a user-defined function (UDF) is created and associated with a security policy to
determine which rows each user can access.

Reference:

https://learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-row-level-security#2-define-security-policies
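
For reference, a sketch of the function-plus-security-policy pattern that the explanation and the linked tutorial describe. To keep the document's code samples in one language, it is shown as a T-SQL string inside a Python snippet; the schema, table, and column names (Security, dbo.Sales, SalesRepUserName) are assumptions, not the exam's actual objects:

```python
# Hedged sketch only: the T-SQL below would be run in the warehouse's SQL query
# editor (or from a pipeline Script activity), not from Spark. Names are illustrative.
rls_tsql = """
CREATE SCHEMA Security;
GO

-- Inline table-valued function used as the filter predicate.
CREATE FUNCTION Security.fn_FilterSalesRep(@SalesRepUserName AS VARCHAR(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @SalesRepUserName = USER_NAME();
GO

-- Security policy that binds the predicate function to the sales table.
CREATE SECURITY POLICY SalesRepFilter
ADD FILTER PREDICATE Security.fn_FilterSalesRep(SalesRepUserName)
ON dbo.Sales
WITH (STATE = ON);
"""
print(rls_tsql)
```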

Question: 11 Exam Heist


HOTSPOT -
You have a Fabric workspace named Workspace1_DEV that contains the following items:
10 reports

Four notebooks -

Three lakehouses -

Two data pipelines -

Two Dataflow Gen1 dataflows -

Three Dataflow Gen2 dataflows -


Five semantic models that each has a scheduled refresh policy
You create a deployment pipeline named Pipeline1 to move items from Workspace1_DEV to a new workspace
named Workspace1_TEST.
You deploy all the items from Workspace1_DEV to Workspace1_TEST.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Answer: No/Yes/No

Explanation:

1. Data from the semantic models will be deployed to the target stage.

Answer: No
Semantic models are only deployed to the target stage in the form of metadata. The deployment process
does not copy actual data; instead, only the structural and configuration metadata (e.g., model schema and
measures) is deployed. The target stage will require a refresh to fetch the data into the semantic models.
Reference: Microsoft Learn - Item Properties Copied During Deployment

2. The Dataflow Gen1 dataflows will be deployed to the target stage.

Answer: Yes
Dataflow Gen1 objects are included in the deployment pipeline and are fully deployed to the target stage,
including their configurations. This ensures that Dataflow Gen1 pipelines can run in the target environment.
The deployment process supports this functionality without requiring a manual configuration.

3. The scheduled refresh policies will be deployed to the target stage.

Answer: No
The deployment process does not copy or deploy refresh schedules for datasets, semantic models, or other
items. Although metadata for the items is deployed, refresh schedules must be manually recreated or
configured in the target stage. This limitation is highlighted in Microsoft's documentation.
Reference: Microsoft Learn - Item Properties Copied During Deployment

Question: 12 Exam Heist


You have a Fabric deployment pipeline that uses three workspaces named Dev, Test, and Prod.
You need to deploy an eventhouse as part of the deployment process.
What should you use to add the eventhouse to the deployment process?

A.GitHub Actions
B.a deployment pipeline
C.an Azure DevOps pipeline

Answer: B

Explanation:

B. a deployment pipeline.

Deployment Pipeline: In Microsoft Fabric, a deployment pipeline is specifically designed for managing and
deploying resources across different environments (Dev, Test, and Prod). It allows you to automate the
deployment process, ensuring consistency and efficiency. By using a deployment pipeline, you can easily
include the eventhouse in your deployment process and manage its promotion through the different stages
(Dev, Test, Prod).

Reference:

https://learn.microsoft.com/en-us/fabric/cicd/deployment-pipelines/get-started-with-deployment-pipelines?tabs=from-fabric%2Cnew%2Cstage-settings-new

https://learn.microsoft.com/en-us/fabric/cicd/deployment-pipelines/understand-the-deployment-process?tabs=new

Question: 13 Exam Heist


You have a Fabric workspace named Workspace1 that contains a warehouse named Warehouse1.
You plan to deploy Warehouse1 to a new workspace named Workspace2.
As part of the deployment process, you need to verify whether Warehouse1 contains invalid references. The
solution must minimize development effort.
What should you use?

A.a database project


B.a deployment pipeline
C.a Python script
D.a T-SQL script

Answer: B
Explanation:

Microsoft Fabric's deployment pipelines provide a built-in mechanism to manage and validate the deployment
of artifacts like warehouses. When you use a deployment pipeline to move Warehouse1 from one workspace
(Workspace1) to another (Workspace2), the pipeline automatically checks for issues such as invalid
references or missing dependencies during the deployment process.

Question: 14 Exam Heist


You have a Fabric workspace that contains a Real-Time Intelligence solution and an eventhouse.
Users report that from OneLake file explorer, they cannot see the data from the eventhouse.
You enable OneLake availability for the eventhouse.
What will be copied to OneLake?

A.only data added to new databases that are added to the eventhouse
B.only the existing data in the eventhouse
C.no data
D.both new data and existing data in the eventhouse
E.only new data added to the eventhouse

Answer: E

Explanation:

E. only new data added to the eventhouse.

When you enable OneLake availability for an eventhouse, only the new data that is added to the eventhouse
after enabling this setting will be copied to OneLake. The existing data present in the eventhouse prior to
enabling OneLake availability will not be copied automatically. This ensures that users can access the most
recent data through the OneLake file explorer while maintaining the efficiency of data synchronization.

Question: 15 Exam Heist


You have a Fabric workspace named Workspace1.
You plan to integrate Workspace1 with Azure DevOps.
You will use a Fabric deployment pipeline named deployPipeline1 to deploy items from Workspace1 to higher
environment workspaces as part of a medallion architecture. You will run deployPipeline1 by using an API call from
an Azure DevOps pipeline.
You need to configure API authentication between Azure DevOps and Fabric.
Which type of authentication should you use?

A.service principal
B.Microsoft Entra username and password
C.managed private endpoint
D.workspace identity

Answer: A

Explanation:

A. service principal.

Service Principal: A service principal is a security identity used by applications, services, and automation tools
to access specific Azure resources. It provides a secure way to authenticate and authorize API calls between
Azure DevOps and Fabric. By using a service principal, you can grant the necessary permissions to
deployPipeline1 to interact with the Fabric workspace (Workspace1) and deploy items to higher environments.
This approach ensures secure and managed access without relying on individual user credentials.
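
A hedged Python sketch of what the Azure DevOps pipeline step might do: acquire a token as the service principal with azure-identity and call the Fabric REST API. The token-acquisition pattern is standard; the deployment endpoint path, request body, and placeholder IDs are assumptions to be replaced from the Fabric REST API reference:

```python
import requests
from azure.identity import ClientSecretCredential

# Service principal credentials would normally come from Azure DevOps secret variables.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-client-id>",
    client_secret="<service-principal-secret>",
)

# Acquire an access token for the Fabric API scope.
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

# Hypothetical call that triggers deployPipeline1; verify the exact path and payload
# against the Fabric REST API documentation before using.
response = requests.post(
    "https://api.fabric.microsoft.com/v1/deploymentPipelines/<pipeline-id>/deploy",
    headers={"Authorization": f"Bearer {token}"},
    json={"sourceStageId": "<dev-stage-id>", "targetStageId": "<test-stage-id>"},
)
response.raise_for_status()
```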

Question: 16 Exam Heist


You have a Google Cloud Storage (GCS) container named storage1 that contains the files shown in the following
table.

You have a Fabric workspace named Workspace1 that has the cache for shortcuts enabled. Workspace1 contains a
lakehouse named Lakehouse1. Lakehouse1 has the shortcuts shown in the following table.

You need to read data from all the shortcuts.


Which shortcuts will retrieve data from the cache?

A.Stores only
B.Products only
C.Stores and Products only
D.Products, Stores, and Trips
E.Trips only
F.Products and Trips only

Answer: C

Explanation:

C. Stores and Products only.

When the cache for shortcuts is enabled in a Fabric workspace, files read through supported shortcut types (such
as Amazon S3 and GCS shortcuts) are stored in a OneLake cache. A subsequent read is served from the cache only if
the file was last accessed within the configured retention period; files whose cached copies have expired are
retrieved from the source again, which is what determines which shortcuts in the exhibit are answered from the
cache.

Question: 17 Exam Heist


You have a Fabric workspace named Workspace1 that contains an Apache Spark job definition named Job1.
You have an Azure SQL database named Source1 that has public internet access disabled.
You need to ensure that Job1 can access the data in Source1.
What should you create?

A.an on-premises data gateway


B.a managed private endpoint
C.an integration runtime
D.a data management gateway

Answer: B

Explanation:

B. a managed private endpoint.

Managed Private Endpoint: This allows secure, private communication between Fabric and Azure services without
exposing data to the public internet. By creating a managed private endpoint from the workspace to the Azure SQL
database (Source1), the Apache Spark job (Job1) can connect to the database while public internet access remains
disabled, because the data transfer happens over the private Azure network.

Question: 18 Exam Heist


You have an Azure Data Lake Storage Gen2 account named storage1 and an Amazon S3 bucket named storage2.
You have the Delta Parquet files shown in the following table.

You have a Fabric workspace named Workspace1 that has the cache for shortcuts enabled. Workspace1 contains a
lakehouse named Lakehouse1. Lakehouse1 has the following shortcuts:
A shortcut to ProductFile aliased as Products
A shortcut to StoreFile aliased as Stores
A shortcut to TripsFile aliased as Trips
The data from which shortcuts will be retrieved from the cache?

A.Trips and Stores only


B.Products and Stores only
C.Stores only
D.Products only
E.Products, Stores, and Trips

Answer: B

Explanation:

B. Products and Stores only.

When the cache for shortcuts is enabled in a Fabric workspace, files read through supported cross-cloud shortcut
types (such as Amazon S3 shortcuts) are cached in OneLake, and reads within the configured retention period are
served from the cache instead of the original storage location, which improves performance and reduces egress.

Reference:

https://learn.microsoft.com/en-us/fabric/onelake/onelake-shortcuts
Question: 19 Exam Heist
HOTSPOT -
You have a Fabric workspace named Workspace1 that contains the items shown in the following table.

For Model1, the Keep your Direct Lake data up to date option is disabled.
You need to configure the execution of the items to meet the following requirements:
Notebook1 must execute every weekday at 8:00 AM.
Notebook2 must execute when a file is saved to an Azure Blob Storage container.
Model1 must refresh when Notebook1 has executed successfully.
How should you orchestrate each item? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Box1 ->2 || Box2 -> 3 || Box3 -> 2 || Box4 -> 1


Explanation:
Question: 20 Exam Heist
Your company has a sales department that uses two Fabric workspaces named Workspace1 and Workspace2.
The company decides to implement a domain strategy to organize the workspaces.
You need to ensure that a user can perform the following tasks:
Create a new domain for the sales department.
Create two subdomains: one for the east region and one for the west region.
Assign Workspace1 to the east region subdomain.
Assign Workspace2 to the west region subdomain.
The solution must follow the principle of least privilege.
Which role should you assign to the user?

A.workspace Admin
B.domain admin
C.domain contributor
D.Fabric admin

Answer: D

Explanation:

Fabric Admin: Creating a new domain requires the Fabric admin role; domain admins can manage existing domains,
create subdomains, and assign workspaces, but they cannot create the domain itself. Because the user must create
the sales domain as well as the subdomains and workspace assignments, Fabric admin is the role that covers all
four tasks.

Question: 21 Exam Heist


You have a Fabric workspace named Workspace1 that contains a warehouse named DW1 and a data pipeline named
Pipeline1.
You plan to add a user named User3 to Workspace1.
You need to ensure that User3 can perform the following actions:
View all the items in Workspace1.
Update the tables in DW1.
The solution must follow the principle of least privilege.
You already assigned the appropriate object-level permissions to DW1.
Which workspace role should you assign to User3?

A.Admin
B.Member
C.Viewer
D.Contributor

Answer: B

Explanation:

Member: This role allows users to view and interact with all the items in the workspace. When combined with
the already assigned object-level permissions to DW1, it ensures that User3 can update the tables in DW1.
Thank you
Thank you for your interest in the premium exam material.
We're glad you found it informative and helpful.

But Wait

I wanted to let you know that there is more content available in the full version.
The full paper contains additional sections and information that you may find helpful,
and I encourage you to download it to get a more comprehensive and detailed view of
all the subject matter.

Download Full Version Now

Total: 67 Questions
Link: https://examheist.com/papers/microsoft/dp-700
