AZ Question 2

Question 1

You have three VMs across two subnets in your Azure virtual network. A
Standard SKU public IP address is associated with each VM’s NIC.

None of the VMs' NICs or the VNet's subnets is associated with a Network
Security Group yet. You need to add security rules to an NSG and
associate the NSG with either a subnet or a network interface so that the below
four objectives are met:

a. Allow RDP connections only to vm01 from the Internet

b. Allow web server traffic only to vm02

c. vm01 and vm02 can communicate with the Internet

d. Block all inbound traffic from the Internet to vm03

Based on the given information, answer the below two questions. Select
answers that satisfy all the required objectives.
1, 3
Correct answer
1, 2
2, 2
2, 3
Overall explanation
Short Answer for Revision:

Objective a: Needs a security rule in an NSG for RDP connections only to vm01. Associate this NSG with subnet01.

Objective b: Create another security rule in the same NSG to allow web traffic only to vm02.

Objective c: The default security rule AllowInternetOutBound in the NSG allows outbound connections to the Internet. No need to create a security rule.

Objective d: vm03 doesn't inherit any security rules as it has no NSG attached. But its NIC is associated with a Standard SKU public IP. So, by default, all traffic from outside the VNet is blocked. So, there is no need to create a security rule.

So, in total, 1 NSG and 2 security rules.

Detailed Explanation:

Before proceeding with this question, I highly recommend that you refer to the question in Practice Test 1 (Related lecture video title: Select a destination IP address for the NSG inbound security rule) to understand how the Azure Native NAT service processes the NSG rules.

Coming to this question, when you create a Network Security Group, the NSG comes with three default inbound and three default outbound security rules. So, if the default rules satisfy any of the given objectives, we don't have to add a new security rule for that objective.
Objective 1:

RDP connections are not allowed by any of the default security rules. So,
we need to create an inbound security rule to allow the RDP connection.
Since RDP should be allowed only for vm01, we use vm01's private IP
address in the Destination IP address field.

We can associate this NSG with the subnet or the NIC of vm01. But let’s
associate it with the subnet, so we don’t have to create an additional NSG
for vm02, which is also in the same subnet as vm01.
This ensures that users can RDP to vm01. Since we targeted only vm01, it
also ensures that users cannot RDP to vm02.

So, we need to create a rule for objective 1.

Objective 2:

The web server traffic over port 80 is also not allowed by any of the
default security rules. So, in the same NSG, we can create an inbound
security rule that targets vm02 with its private IP address to allow traffic
over port 80.
To test the security rule, we can RDP from vm01 to vm02. Note that
although we did not create a security rule to allow RDP for vm02, the
default security rule AllowVnetInBound allows all traffic from the virtual
network. Since we are doing RDP from vm01, which is in the same virtual
network as vm02, the connection will be allowed.

Once logged into vm02, install the web server role. We can verify that we
can access the vm02’s web server by using its public IP address (Check
the related lecture video).

So, we need to create another security rule for objective 2.
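
As a rough sketch, the single NSG with these two inbound rules could be created and associated with subnet01 using Azure PowerShell along the following lines. The resource group, location, subnet address prefix, the private IPs of vm01 and vm02, and the rule priorities are assumptions for illustration only:

# Assumed values for illustration only
$rdpRule = New-AzNetworkSecurityRuleConfig -Name "allow-rdp-vm01" -Direction Inbound -Access Allow -Protocol Tcp `
    -Priority 100 -SourceAddressPrefix Internet -SourcePortRange "*" -DestinationAddressPrefix 10.0.1.4 -DestinationPortRange 3389

$webRule = New-AzNetworkSecurityRuleConfig -Name "allow-http-vm02" -Direction Inbound -Access Allow -Protocol Tcp `
    -Priority 110 -SourceAddressPrefix Internet -SourcePortRange "*" -DestinationAddressPrefix 10.0.1.5 -DestinationPortRange 80

# One NSG holding both rules
$nsg = New-AzNetworkSecurityGroup -Name "nsg01" -ResourceGroupName "rg-dev-01" -Location "eastus" -SecurityRules $rdpRule, $webRule

# Associate the NSG with subnet01, where both vm01 and vm02 are deployed
$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-dev-01"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet01" -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $nsg | Set-AzVirtualNetwork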

Objective 3:

The default security rule AllowInternetOutBound ensures unrestricted network traffic to the Internet. Since the NSG is already associated with subnet01, this default security rule ensures that both vm01 and vm02 can communicate with the Internet.

So, we don’t have to create any rule for objective 3.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#allowinternetoutbound

Objective 4:
In the current architecture until now, we have only one NSG, which is
associated with subnet01. So, no security rules apply to vm03, which is
deployed in subnet02. When the VM doesn’t inherit any security rules,
traffic flow to the VM is governed by its public IP address SKU.

Simply put, the Standard SKU public IP address is closed by default. If there is no associated NSG, all inbound traffic is restricted. So, we associate NSGs to allow traffic. The Basic SKU public IP address, which is retiring soon, is open by default. If there is no NSG, all traffic is allowed. So, we associate NSGs to restrict traffic.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses#sku

Given that a Standard SKU public IP address is associated with all the VMs, all inbound traffic from the Internet to vm03 is blocked by default.

To verify this, log in to vm03 using its private IP address from vm01. Note
that the Standard SKU public IP address restricts all inbound traffic from
outside the VNet. So, traffic from vm01, which is in the same VNet as
vm03 is allowed.

Again, install the web server role using Server Manager by performing
similar steps as you did for vm02. After the web server is installed, use
the public IP address of vm03 to connect to the web server. Although the
web server role is installed, you wouldn’t be able to connect to the server,
as the Standard SKU public IP address is secure by default and blocks all
traffic from outside the VNet.

But you can access the web server from within vm03 to check if the web
server is running successfully.
So, we don’t have to create any security rule or associate any NSG to
subnet02, where vm03 is deployed. The Standard SKU public IP address
does the magic.

We have created only one NSG with 2 NSG rules so far. So, option B is the
correct answer.

Reference Link: https://stackoverflow.com/questions/47679381/azure-vm-able-to-rdp-even-when-not-assigned-to-a-nsg-arm-model/75974558#answer-75974558

GitHub Repo Link: Minimum no. of. network security rules required

Resources
Minimum no. of. network security rules required
Domain
Implement and manage virtual networking

Question 2
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You have three Network Interface Cards (NICs) deployed in a virtual network vnet01. The nic01 is in subnet01 and is attached to vm01. The vm02 has two NICs attached to it.
The nic02 is the primary network interface card attached to vm02 and nic03 is the secondary network interface card attached to vm02.

Further, IP Forwarding is enabled on the primary network interface (nic02) of vm02.
You need to ensure that a user logged into vm01 can ping the secondary network interface (nic03) of vm02.

Solution: You turn on IP forwarding in the Windows operating system of vm02.

Does the solution meet the stated goal?

Note: Assume the ICMP is enabled on all the VMs

Yes
Correct answer
No
Overall explanation
Short Answer for Revision:

A side effect of enabling IP forwarding in the guest OS of vm02 is that it ensures vm02 can now use its secondary network interface (nic03) to forward packets to vm01 (as turning on IP forwarding in the guest OS enables the forwarding capability for all the attached NICs).

Nevertheless, vm01 still will not be able to communicate with the secondary NIC of vm02.

Detailed Explanation:
By default, the first NIC attached to the VM is the primary network
interface. So nic02 is the primary network interface card of vm02. All
other network interfaces attached to the VM are secondary network
interfaces. So nic03 is the secondary network interface card attached to
vm02.

Further, a VM by default uses the primary IP configuration of the primary network interface for all outbound and inbound communications.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface-vm#constraints

So, when you ping vm02 from vm01, the traffic reaches the primary NIC of vm02, whose private IP address is 10.0.2.4.

Even if you explicitly ping the secondary NIC's private IP of vm02, which is 10.0.2.5, the packet isn't routed to the intended destination.
Similarly, if you ping vm01 from vm02, vm02 uses only the primary
network interface (nic02) to establish the connection. Even if you explicitly
use the secondary NIC of vm02 to ping vm01, the communication will fail.

From these examples, we can verify that a VM by default uses only the
primary IP configuration of the primary network interface for all outbound
and inbound communications, and that the secondary NIC cannot be used
for communication.

But the question requires that we ping the secondary NIC of vm02. So,
let’s implement the given solution which is to turn on IP forwarding within
the guest Operating System in vm02.

Before doing that, first check the number of network interface cards attached
to vm02 in the guest OS. As expected, there are two network interface
cards, as vm02 has two NICs attached, nic02 and nic03.
Although IP forwarding is enabled in the Azure portal for nic02, the IP
forwarding is disabled in the guest OS for both these NICs.

We can turn on IP forwarding in the Windows OS by navigating to the Registry Editor -> HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> Tcpip -> Parameters and updating the IPEnableRouter from 0 to 1.
This update ensures that the VM operates as a router and is capable of
receiving packets addressed to other destinations and forwarding them to
the destined VMs. To observe the effect of this operation, restart vm02.
After restarting the VM, you can verify that the forwarding is enabled for
both network interfaces.
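
For reference, the same change described above can be scripted from an elevated PowerShell session inside vm02; this is only a sketch of the step the lecture performs through the Registry Editor:

# Enable IP forwarding (routing) in the Windows guest OS
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name 'IPEnableRouter' -Value 1

# A restart is required for the change to take effect
Restart-Computer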

Since the IP forwarding is turned on in the guest OS for both the network
interface cards, it enables vm02 to forward the packets to other VMs using
the secondary network interface card. So from vm02, you can use the
secondary network interface card to ping vm01.
However, you will still not be able to ping the secondary network interface
card of vm02 from vm01 (Check the video lecture). The given solution
doesn’t satisfy the stated goal. Option No is the correct answer.

GitHub Repo Link: Enable communication with a VM's secondary network interface card

Resources
Enable communication with a VM's secondary network interface card - 1
Domain
Implement and manage virtual networking

Question 3
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You have three Network Interface Cards (NICs) deployed in a virtual network vnet01. The nic01 is in subnet01 and is attached to vm01. The vm02 has two NICs attached to it.
The nic02 is the primary network interface card attached to vm02 and nic03 is the secondary network interface card attached to vm02.

Further, IP Forwarding is enabled on the primary network interface (nic02) of vm02.
You need to ensure that a user logged into vm01 can ping the secondary
network interface (nic03) of vm02.

Solution: From the Windows command line of vm02, you use the route
add command to add a route with the default gateway address for nic03 in
the IP routing table.

Does the solution meet the stated goal?

Note: Assume the ICMP is enabled on all the VMs

Correct answer
Yes
No
Overall explanation
Short Answer for Revision:

Only the primary NIC attached to a VM is assigned a default gateway. The secondary NIC isn't. This default gateway is required for bidirectional communications outside the subnet.
communications outside the subnet.

Adding a route with the default gateway address for nic03 ensures that
the default gateway is assigned to nic03. So, any VM from outside the
subnet can communicate with nic03. Also vm02 can use nic03 to
communicate with any VM outside the subnet.
Detailed Explanation:

By default, Azure assigns a default gateway only to the primary network interface card. Azure does not assign a default gateway to any secondary network interfaces attached to the VM.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/multiple-nics#configure-guest-os-for-multiple-nics

To view the default gateway, run the ipconfig command.

The nic02, which is the primary network interface attached to vm02, with
the private IP 10.0.2.4, has a default gateway assigned to it. But nic03,
which is the secondary network interface attached to vm02, with the
private IP 10.0.2.5, doesn’t have any default gateway assigned.

This default gateway acts as a router that forwards data packets outside
the subnet, subnet02. The reverse is also true: if vm01 from
outside the subnet wants to communicate with vm02 inside the subnet
(subnet02), it has to go through both subnet01's and subnet02's default
gateways.
The lack of a default gateway assigned to the secondary NIC (nic03) is the
main reason why you cannot ping the secondary NIC of vm02 from vm01.
It is also the reason why you cannot use nic03 to ping vm01.

So, let’s use the route add command to add a default route with the
default gateway, which is always the first IP in the network, i.e., 10.0.2.1,
for nic03.
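
A sketch of that command, run from an elevated prompt inside vm02, might look like the following. The interface index of nic03 is an assumption and should be looked up first; the metric is deliberately high so the primary NIC's existing default route keeps priority:

# Find the interface index (ifIndex) of nic03, the NIC with IP 10.0.2.5
Get-NetIPInterface -AddressFamily IPv4

# Add a persistent default route for nic03 via the subnet's default gateway (10.0.2.1)
# '13' below is a placeholder for nic03's interface index
route -p add 0.0.0.0 mask 0.0.0.0 10.0.2.1 metric 5000 if 13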

After the route is added, verify if the route exists in the routing table.
Now you can use the secondary network interface of vm02 to ping vm01.

As discussed, adding a route ensures that communication vice-versa is also possible. So, you can ping vm02's secondary network interface from vm01.
The given solution meets the stated goal. Option Yes is the correct
answer.

Reference
Link: https://learn.microsoft.com/en-us/windows-server/administration/
windows-commands/route_ws2008

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-
faq#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets

GitHub Repo Link: Same as the previous question in the question set.

Resources
Enable communication with a VM's secondary network interface card - 2
Domain
Implement and manage virtual networking

Question 4
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You have three Network Interface Cards (NICs) deployed in a virtual network vnet01. The nic01 is in subnet01 and is attached to vm01. The vm02 has two NICs attached to it.
The nic02 is the primary network interface card attached to vm02 and nic03 is the secondary network interface card attached to vm02.

Further, IP Forwarding is enabled on the primary network interface (nic02) of vm02.
You need to ensure that a user logged into vm01 can ping the secondary
network interface (nic03) of vm02.

Solution: Move vm01 to vnet01/subnet02

Does the solution meet the stated goal?

Note: Assume the ICMP is enabled on all the VMs

Correct answer
Yes
No
Overall explanation
Short Answer for Revision:

You need a default gateway only for communication between subnets. For
intra-subnet communication, there is no need for a default gateway. If
vm02 and vm01 are deployed in the same subnet, vm02 can use its
secondary NIC, nic03 (which is not assigned a default gateway) to
communicate with vm01 . Further, vm01 can also communicate with
nic03.

Detailed Explanation:

Let’s first move vm01 to subnet02 and understand the implications of this
move operation. Moving a virtual machine to a different subnet is nothing
but moving the VM’s NIC to a different subnet. So, go to nic01, which is
the Network Interface attached to vm01, and change its subnet from
subnet01 to subnet02.
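
A minimal PowerShell sketch of that move follows; the resource group and the ipconfig name are assumptions, the VM may need to be stopped (deallocated) first, and nic01 will receive a new private IP from subnet02's range:

$vnet = Get-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-dev-01"
$subnet02 = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet02"

$nic = Get-AzNetworkInterface -Name "nic01" -ResourceGroupName "rg-dev-01"

# Point nic01's IP configuration at subnet02; with no static IP specified, Azure assigns a new dynamic IP
Set-AzNetworkInterfaceIpConfig -NetworkInterface $nic -Name "ipconfig1" -Subnet $subnet02 | Out-Null
Set-AzNetworkInterface -NetworkInterface $nic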

This is Networking 101 we are discussing here but do note that a default
gateway is necessary only if the VM communication needs to traverse
subnets or networks.

Since vm01 is moved to subnet02, all communication between vm02 and vm01, irrespective of the network interface used, is kept within the subnet. So, in this case, the traffic never leaves the subnet, and a default gateway is never required when vm01 pings the secondary network interface of vm02.
So, assigning a default gateway to the secondary NIC of vm02 is not
required. The vm01 can now directly ping the secondary NIC (nic03) of
vm02. The given solution meets the stated goal. Option Yes is the correct
answer.

GitHub Repo Link: Same as the first question in the question set.

Resources
Enable communication with a VM's secondary network interface card - 3
Domain
Implement and manage virtual networking

Question 5
You have an Azure Load Balancer and the below two VMs added to the
load balancer’s backend pool.
Select what you will configure for the below two tasks:

Inbound NAT rule, Inbound NAT rule


Correct answer
Load balancing rule, Inbound NAT rule
Frontend IP configuration, Inbound NAT rule
Load balancing rule, Load balancing rule
Overall explanation
Short Answer for Revision:

Load balancing rule -> Load balance/distribute traffic to backend VMs.


Inbound NAT rule -> Forward the traffic to a specific VM.

Detailed Explanation:

First, know the difference between a load balancing rule and an inbound
NAT rule. A load balancing rule distributes traffic to backend VMs whereas
an inbound NAT rule forwards traffic to a specific VM.

Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/load-
balancer-faqs#how-are-inbound-nat-rules-different-from-load-balancing-
rules-

Question 1:

To distribute HTTP traffic to either of the VMs in the backend pool, create
a load balancing rule that maps the load balancer’s frontend IP address to
the backend pool. A load balancer rule also requires a health probe that
checks the health status of VMs.
Note that after the rule is created, the load balancer’s public IP address is
assigned to the individual VMs. So, when the HTTP traffic hits the load
balancer's public IP address, the load balancer distributes the traffic to
either vm01 or vm02.

Question 1 -> Load balancing rule.

Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
load-balancer-rules

Question 2:

To forward the RDP traffic only to specific VMs, create an inbound NAT
rule that targets a specific virtual machine using the RDP protocol.
This rule too assigns the load balancer’s public IP address to the targeted
VMs. So, users can use just the load balancer’s public IP address to RDP
into the targeted VM using the VM’s account credentials.

Question 2 -> Inbound NAT rule. Option B is the correct answer.
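
As a hedged sketch of the two configurations, both rule types could be added to an existing load balancer with Azure PowerShell roughly as follows; the load balancer name, resource group, probe, and port values are assumptions:

$lb   = Get-AzLoadBalancer -Name "lb01" -ResourceGroupName "rg-dev-01"
$fe   = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb
$pool = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb

# Task 1: a health probe plus a load balancing rule that distributes HTTP traffic to the backend pool
$lb = $lb | Add-AzLoadBalancerProbeConfig -Name "http-probe" -Protocol Http -Port 80 -RequestPath "/" -IntervalInSeconds 15 -ProbeCount 2
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "http-probe"
$lb = $lb | Add-AzLoadBalancerRuleConfig -Name "http-rule" -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80

# Task 2: an inbound NAT rule that forwards RDP (frontend port 50001) to one specific VM on port 3389
$lb = $lb | Add-AzLoadBalancerInboundNatRuleConfig -Name "rdp-vm01" -FrontendIpConfiguration $fe -Protocol Tcp -FrontendPort 50001 -BackendPort 3389

# Persist all the changes
$lb | Set-AzLoadBalancer

Note that an inbound NAT rule created this way typically still has to be associated with the target VM's NIC IP configuration before it takes effect; the portal does that step for you when you pick the target VM.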

Frontend IP configuration defines the IP address of the load balancer. It cannot forward/distribute traffic to the VMs.

Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
inbound-nat-rules

GitHub Repo Link: Distribute & forward traffic to load balanced VMs

Resources
Distribute and forward traffic to load balanced VMs
Domain
Implement and manage virtual networking

Question 6
Given below is an ARM template that adds a Microsoft Entra domain
extension to an Azure Windows VM to join the virtual machine to the
Microsoft Entra Domain Services managed domain.

Select the options that deploy the template successfully.

ravikiransrinivasulu@ravikirans171.onmicrosoft.com,
“domainjoin”
ravikirans171.onmicrosoft.com\\ravikiransrinivasulu,
“domainjoin”
Correct answer
ravikirans171.onmicrosoft.com\\ravikiransrinivasulu,
“[concat(parameters('vmName'),'/domainjoin')]”
ravikirans171.onmicrosoft.com\ravikiransrinivasulu,
“[concat(parameters('vmName'),'/domainjoin')]”
Overall explanation
Short Answer for Revision:

A single ‘\’ escapes the character ‘r’ next to it, so we get an incorrect
username value.
A double ‘\\’ escapes the backslash itself rather than the character after
it, so we get the correct username value. Further, using the UPN
(which looks similar to an email address) is also correct.

You should format the name and type values of child resources defined
outside the parent resource with a ‘/’ to include the parent resource name
and type. The correct format for the child resource name should be
vmName/extensionName.

Detailed Answer:

You can either log into the VM and domain-join it manually, or install an extension that automatically domain-joins the VM to a managed domain, via an Azure Resource Manager template.

The given ARM template creates a VM extension resource to domain-join the VM to a managed domain. For domain-joining a VM, we need a user who has the necessary permissions to join computers to a Microsoft Entra domain.

That is, the user needs to be a member of the AAD DC Administrators security group, which is automatically created when you create the managed domain.
In the ARM template, we need to provide this user’s username, which will
be used by the resource manager to domain-join the VM.

The deployment fails when you provide the username with a single ‘\’. As
you can observe below, a single backslash escapes the character (‘r’) next
to it, so it picks an incorrect username.
Option D is incorrect.

The correct way is to provide the username with a double backslash to indicate that the backslash is not used as an escape character but is actually printed as part of the username.

But providing a user principal name (which looks similar to an email address) is an equally correct way of specifying the username.
So, both options, one with a double backslash and the other with a UPN
can go into box 1.

Recognize that, in the given template, the extension resource (of resource
type: Microsoft.Compute/virtualMachines/extensions) is a child resource of
the virtual machine resource (of resource type:
Microsoft.Compute/virtualMachines)

So, when a child resource is not defined within the parent resource, i.e.,
outside the parent resource, you should format the name and type values
with slashes to include the parent resource name and type.

So, the resource type for the extension child resource should be
Microsoft.Compute/virtualMachines/extensions, and not just extensions.
And the name for the extensions child resource should be the parent
resource name/child resource name, and not just the child resource name.

Note that this is only the case when the child resource is not defined as a
nested resource of the parent resource.

So, using only the child name for the extension resource will produce the
below error.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-
manager/templates/child-resource-name-type#outside-parent-resource

So, options A and B are incorrect.

Only option C picks the correct choices for both boxes. So, if you deploy a template with these values (ravikirans171.onmicrosoft.com\\ravikiransrinivasulu, “[concat(parameters('vmName'),'/domainjoin')]”) in box1 and box2, the deployment will be successful.
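
As a hedged reconstruction (not the exact template from the course), the extension child resource defined outside the parent VM resource would look roughly like this; the apiVersion, parameter names, and the settings of the standard JsonADDomainExtension are assumptions for illustration:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2022-03-01",
  "name": "[concat(parameters('vmName'), '/domainjoin')]",
  "location": "[parameters('location')]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "Name": "[parameters('domainToJoin')]",
      "OUPath": "[parameters('ouPath')]",
      "User": "[parameters('domainUsername')]",
      "Restart": "true",
      "Options": 3
    },
    "protectedSettings": {
      "Password": "[parameters('domainPassword')]"
    }
  }
}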

Go to the extensions section to check the provisioning state of the extensions.

To verify if the VM is domain-joined, you can also log in directly to the VM using any username that’s a member of the AAD DC Administrators security group (Check the related lecture video).
Option C is the correct answer.

GitHub Repo Link: Install an extension to domain-join a VM with ARM Template - armtemplate.json

Note: This template requires you to already have the VM and the domain
services deployed in your subscription.

Reference Link: https://learn.microsoft.com/en-us/entra/identity/domain-services/join-windows-vm-template#join-an-existing-windows-server-vm-to-a-managed-domain

Resources
Install an extension to domain-join a VM with ARM Template
Domain
Deploy and manage Azure compute resources

Question 7
Given below are the JSON definitions of two custom roles role-st-
dev01 and role-st-dev02 that you assign to User One and User Three,
respectively at the Dev subscription scope.

{
  "properties": {
    "roleName": "role-st-dev01",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "*/read"
        ],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/*"
        ],
        "notDataActions": [
          "Microsoft.Storage/storageAccounts/fileServices/*",
          "Microsoft.Storage/storageAccounts/queueServices/*",
          "Microsoft.Storage/storageAccounts/tableServices/*",
          "Microsoft.Storage/storageAccounts/*/delete",
          "Microsoft.Storage/storageAccounts/*/write",
          "Microsoft.Storage/storageAccounts/*/action"
        ]
      }
    ]
  }
}

{
  "properties": {
    "roleName": "role-st-dev02",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Storage/*"
        ],
        "notActions": [
          "Microsoft.Storage/storageAccounts/blobServices/*"
        ],
        "dataActions": [
          "Microsoft.Storage/*"
        ],
        "notDataActions": [
          "Microsoft.Storage/storageAccounts/fileServices/*",
          "Microsoft.Storage/storageAccounts/queueServices/*",
          "Microsoft.Storage/storageAccounts/tableServices/*",
          "Microsoft.Storage/storageAccounts/*/delete",
          "Microsoft.Storage/storageAccounts/*/write",
          "Microsoft.Storage/storageAccounts/*/action"
        ]
      }
    ]
  }
}

What will be the default access when the two users access the blob data
in the storage account from the Azure portal?
Correct answer
Accesses blob data with Microsoft Entra ID authentication, Cannot
access blob data
Accesses blob data with Microsoft Entra ID authentication,
Accesses blob data with Microsoft Entra ID authentication
Accesses blob data with storage account keys, Accesses blob data
with Microsoft Entra ID authentication
Cannot access blob data, Accesses blob data with storage account
keys
Overall explanation
User One:

Recall from practice test 1 (Related lecture video title: Microsoft Entra ID authentication for blob data) that you need the below two permissions to access blob data via Microsoft Entra ID authentication in the Azure portal.

1. Read access to a storage account (previously, the Reader role enabled read access permissions in the Actions section)

2. Read access to blob data in the storage account (previously, the Storage Blob Data Reader role enabled data read access permissions in the DataActions section).

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-
data-operations-portal#use-your-microsoft-entra-account

The actions section of the custom role, role-st-dev01 , contains the same
permissions as the reader role, providing read access to all the Azure
resources. So, using this permission, User One can navigate to the storage
account in the Azure portal.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#reader

Further, "Microsoft.Storage/*" in the DataActions section provides access to data in all the services in the storage account like files, blobs, and queues. The NotDataActions section doesn't remove access to blob data.

So, when User One accesses the blob data, he accesses it via Microsoft
Entra ID authentication, by default.

Since the custom role doesn’t provide any permissions (Microsoft.Storage/storageAccounts/listkeys/action) for reading the access keys, switching to access key authentication will produce this error.
So, User One -> Accesses blob data with Microsoft Entra ID authentication.

User Three:

The DataActions and NotDataActions sections for the custom role role-st-dev02 are the same as for role-st-dev01.

Since permissions in the NotActions are removed from those in the Actions section, User Three cannot list any of the containers and subsequently the blobs in the storage account.
However, since the access permission for only the blob service is removed
( "Microsoft.Storage/storageAccounts/blobServices/*" in the NotActions
section), User Three can access other services like files and queues in the
storage account via access keys.

User Three -> Cannot access blob data.

Option A is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-definitions

Although the custom role role-st-dev02 doesn’t provide access to data in file share ("Microsoft.Storage/storageAccounts/fileServices/*" in the notDataActions section), User Three can still access file share data as he has access to access keys (Microsoft.Storage/storageAccounts/listkeys/action in the Actions section).
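
For reference, custom roles defined in JSON like the ones above are typically created and then assigned with commands along these lines; the JSON file names and the users' sign-in names are placeholders, while the subscription ID is the one from the role definitions:

# Create the custom roles from their JSON definitions
New-AzRoleDefinition -InputFile ".\role-st-dev01.json"
New-AzRoleDefinition -InputFile ".\role-st-dev02.json"

# Assign them at the Dev subscription scope
New-AzRoleAssignment -SignInName "userone@contoso.com" -RoleDefinitionName "role-st-dev01" `
    -Scope "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
New-AzRoleAssignment -SignInName "userthree@contoso.com" -RoleDefinitionName "role-st-dev02" `
    -Scope "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"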

Resources
Analyze the default user access to blob data
Domain
Implement and manage storage

Question 8
There are three blob containers source1, source2, and source3 with
a Public access level of Container, Blob, and Private, respectively, in the
strdev011 storage account, which is in the rg-dev-01 resource group.

There are another two blob containers target1 and target2, with a Public
access level of Container and Private, respectively, in the strdev012
storage account, which is in the rg-test-01 resource group.
There are two users User One and User Two in the Microsoft Entra ID
tenant, with the following roles assigned at the respective resource group
scopes.

There is a backup file in all the source containers.

Which of the following statements is correct about users using the azcopy
command to copy the file from the source container to the target
container? Select two options.

Note: This is the syntax of the commands the users run:

azcopy copy 'https://strdev011.blob.core.windows.net/<<container>>' 'https://strdev012.blob.core.windows.net/<<container>>' --recursive

User One can copy the file from source1 to target1


User One can copy the file from source2 to target2
Correct selection
User Two can copy the file from source1 to target1
Correct selection
User Two can copy the file from source2 to target2
Overall explanation
I highly recommend you to refer to Practice Test 1 (Related lecture
video title: Use azcopy to copy data with SAS and different container
access levels) to understand how container access levels work. This
knowledge is helpful for analyzing the access permissions required for the
source/target containers.

In option A, the Container access level of source1 enables anyone to read the container and the blob file. So, User One can read the file from the source container. But he cannot write the file to the target container target1, as:

1. Even the highest access level (Container) of target1 provides only read
access to the container.

2. The user’s Storage Blob Data Reader role provides only read access to
data, not write permissions.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-reader

Option A is incorrect.

In option B, the Blob access level of source2 doesn’t permit read access to
the file, as from the command syntax, we understand that the azcopy tool
copies data from a container and not an individual blob file. But still, User
One can copy the file using storage account access keys as the Storage
Account Contributor role provides read access to them.

However, he cannot write the file to the target container, as the Storage
Blob Data Reader role provides only read access to data, not write
permissions.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-account-contributor

Option B is also incorrect.

Irrespective of the access levels of the source containers, User Two’s Storage Blob Data Reader role enables him to read data from the source. The Storage Blob Data Owner role enables him to write data to the target containers. Options C and D are the correct answer choices.
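
For example, User Two would first authorize azcopy with his Microsoft Entra ID account and then run the copy; the tenant ID below is a placeholder:

# Authorize azcopy with Microsoft Entra ID
azcopy login --tenant-id "<tenant-id>"

# Copy the backup file from source2 to target2 using the signed-in identity
azcopy copy 'https://strdev011.blob.core.windows.net/source2' 'https://strdev012.blob.core.windows.net/target2' --recursive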

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-
read-access-configure

https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-owner

https://learn.microsoft.com/en-us/azure/storage/common/storage-use-
azcopy-authorize-azure-active-directory

Note: The related lecture video has more details on how to authorize a
user with Microsoft Entra ID.

GitHub Repo Link: Use azcopy to copy data with Azure AD and different
container access levels - PS Command.ps1

Resources
Use azcopy to copy data with Microsoft Entra & different container access
levels
Domain
Implement and manage storage

Question 9
A user is assigned the below custom role at the Dev subscription scope.

{
  "properties": {
    "roleName": "role-st-dev01",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "*/read"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}

Which additional role would you assign him (at the same scope) so he can
upload your organization’s AHM videos in the blob container using
Microsoft Entra ID authentication from the Azure portal?

Storage blob data reader


Correct answer
Storage blob data contributor
Contributor
Storage account contributor
Overall explanation
Recall from practice test 1 (Related lecture video title: Microsoft
Entra ID authentication for blob data ) that you need the below two
permissions to access blob data via Microsoft Entra ID authentication in
the Azure portal.

1. Read access to a storage account (previously, the Reader role enabled read access permissions in the Actions section)

2. Read access to blob data in the storage account (previously, the Storage Blob Data Reader role enabled data read access permissions in the DataActions section).

The custom role the user is assigned is essentially a reader role that grants access to view all resources. But in addition to read access, we need a role that grants permission to upload/write blob data.

Storage blob data reader, as the name indicates, gives permission to only
read blob data. So, when the user tries to upload a file via Microsoft Entra
ID authentication, he will get an access permission error.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-
access-control/built-in-roles#storage-blob-data-reader

Option A is incorrect.

Storage blob data contributor role provides read, write, and delete
permissions on the data in the storage account. So, this role allows a user
to upload blob data via Microsoft Entra ID authentication.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor

Option B is the correct answer.
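
A one-line sketch of the additional assignment; the user's sign-in name is a placeholder, and the subscription ID is the one from the custom role above:

New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"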

The contributor role, similar to the owner role, doesn’t provide access to
data (no permissions in the DataActions section). So, the user cannot even
access the blobs in the container via Microsoft Entra ID authentication, let
alone, upload blobs to the container.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#contributor

Option C is incorrect.

But the contributor role provides access to storage account keys. So, he
can still upload data via account key authentication (check the related
lecture video).

The Storage Account Contributor role enables you to manage storage accounts. Similar to the Contributor role, this role doesn't provide access to data. So, with this role, users cannot access or upload blob data via Microsoft Entra ID authentication.
However, since the role has permission to read the access keys, the user can upload the data via storage account keys.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-account-contributor

Option D is also incorrect.

Resources
Upload blobs to storage account via Microsoft Entra ID authentication
Domain
Implement and manage storage

Question 10
You have an App Service app with an Azure SQL database. The app has
one production slot and one staging deployment slot.
In addition to the automatic backup, you have also configured a custom
backup of the app in the production slot to the storage account with the
below settings:
Which of the following restore actions are possible? Select two options.

You can restore the app + database from the automatic backup to
the production slot
You can restore the app + database from the automatic backup to
the staging slot
Correct selection
You can restore the app + database from the custom backup to
the staging slot
Correct selection
You can restore the app + database from the custom backup to
the production slot
Overall explanation
Short Answer for Revision:

2 types of backups possible for App Service: Automatic (only app) and
Custom (both app & database). Further, you can restore any type of
backup to a new app, or current app (same slot or different slot or a new
slot).

Detailed Answer:

For Azure App Service apps, two types of backups can be created:

a. Automatic backups

b. Custom backups
Automatic backups are created every hour by default if you deploy apps in
a Basic App Service plan or higher. However, automatic backups do not
back up the database. So, options A and B are incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-
backup?tabs=portal#back-up--restore-vs-disaster-recovery

With custom backups, you can restore the database in addition to the
app, provided the database was originally selected for inclusion in the
backup.
Further, irrespective of the type of backup, whether automatic or custom,
you can restore the backup to the same app or a different app and to the
production (same) slot or other deployment slots or even a new slot.
Options C and D are the correct answer choices.

GitHub Repo Link: Backup App Service App to a deployment slot

Resources
Back up App Service App to a deployment slot
Domain
Deploy and manage Azure compute resources

Question 11
You have a virtual machine and its related resources (not shown) in the
South India location. The VM has a private IP address of 192.168.1.4 and
is connected to subnet01 of vnet01.

There is another virtual network, vnet02 in the West Europe location with
three subnets. Their corresponding IPv4 address ranges are shown below:
To meet your Business Continuity & Disaster Recovery (BCDR) needs, you
configure disaster recovery by replicating vm01 to the target region West
Europe and the target VNet, vnet02.

After the replication is successfully configured, you run a disaster recovery drill of vm01 to validate your disaster recovery strategy. In which subnet do you think the test VM will be created?

The drill will fail as you can replicate VMs only between Azure
region pairs
Correct answer
The drill will succeed and the test VM will be deployed in
FirstSubnet
The drill will succeed and the test VM will be deployed in
SecondSubnet
The drill will succeed and the test VM will be deployed in
ThirdSubnet
Overall explanation
Short Answer for Revision:

If the target VNet has a subnet with the same name as the source VM’s
subnet, the test VM will be deployed in that subnet. Else, the first subnet
in alphabetical order is set as the target VM’s subnet.

Detailed Answer:

You can replicate VMs between any two Azure regions. Option A is
incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-
azure-support-matrix#region-support

To answer this question, let’s perform a test failover and observe which
subnet the test VM uses. Since we have already mapped vnet02 in the
replication settings, let’s select the mapped virtual network.
Once the test failover process is underway, you can check the test VM’s
subnet. It is FirstSubnet. The reason it uses FirstSubnet, and not any
other, is because if the subnet with the same name as the source VM's
subnet doesn’t exist in the target virtual network, the first subnet in the
alphabetical order is set as the target VM’s subnet. This sounds strange,
but that’s how the Site Recovery process works.

If you do not want any surprises, create a subnet with the same name as
the source VM’s subnet in the target VNet. This ensures that the target
VM is deployed in the subnet with the same name.

Option B is the correct answer.


Reference
Link: https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-
azure-network-mapping#specify-a-subnet

Although vm01’s private IP address is in the range of SecondSubnet, option C is incorrect, as it is not a criterion that Site Recovery uses.

GitHub Repo Link: Subnet of the test VM after test failover

Resources
Subnet of the test VM after test failover
Domain
Monitor and maintain Azure resources

Question 12
In a managed domain (with Microsoft Entra Domain Services) that
integrates with your Microsoft Entra ID tenant, you enable identity-based
authentication for Azure File share over SMB.

In the Microsoft Entra ID tenant, you create the below two users with the
following role assignments.

The users log into the domain-joined VM to access the file share using
Microsoft Entra ID credentials.

Further, you assign the Storage File Data SMB Share Reader role as a
default share-level permission on your storage account.

Given below are two statements based on the above information. Select
Yes if the statement is correct. Else select No.
Yes, No
Correct answer
Yes, Yes
No, No
No, Yes
Overall explanation
Well, let’s first lay out the architecture to understand the problem better.

1. Microsoft Entra Domain Services provides similar features as Active Directory Domain Services. Rather than deploying Active Directory in your on-premises environment, you can create Microsoft Entra Domain Services in the cloud.

2. To enable access to file share over SMB, the storage account is domain-
joined or registered with the Microsoft Entra Domain Services deployment.

3. Since Microsoft Entra Domain Services is integrated with Microsoft Entra ID, the user identities (User Three and User Four) created in Microsoft Entra ID are also available in Microsoft Entra Domain Services.

4. So, the two users authenticate directly against Microsoft Entra Domain
Services. They send the received token to Azure Files for authorization.
Refer to the links to know more details about the deployment.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-
identity-auth-domain-services-enable?tabs=azure-portal

https://learn.microsoft.com/en-us/azure/storage/files/storage-files-active-
directory-overview#microsoft-entra-domain-services

Now, the two users need share-level permissions to access the file share.
There are two ways you can assign share-level permissions:

1. Default-level permissions assign the role to all authenticated identities that access the share from the domain-joined VM. It’s given that we assign the Storage File Data SMB Share Reader role as a default share-level permission on the storage account.
2. Or assign permissions to specific users/groups from Access Control
(IAM). User Three is assigned the Storage File Data SMB Share
Contributor role at the file share scope.
When both permissions are specified, the user will have the higher of the
two permissions.
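
For reference, the share-level assignment for User Three (option 2 above) is made at the file share scope with something like the following; the subscription, resource group, storage account, share name, and sign-in name are all placeholders:

$scope = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment -SignInName "userthree@contoso.com" -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope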

So, although User Four does not have any specific role assignment, he
will still be able to view files in the file share, thanks to default share-level
permissions.

So, statement 2 is Yes.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-
identity-ad-ds-assign-permissions?tabs=azure-portal#what-happens-if-
you-use-both-configurations

User Three is assigned the Storage File Data SMB Share Contributor role which has higher permissions than the default share-level permissions.

So, with the contributor role, he can access and modify files in the file
share. Statement 1 is Yes.

Option B is the correct answer.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-
identity-ad-ds-assign-permissions?tabs=azure-portal#share-level-
permissions
Resources
Grant Azure File share access with Microsoft Entra ID credentials
Domain
Implement and manage storage

Question 13
You have three virtual machines across two different regions in your Azure
subscription.

The virtual machine vm02 stores the project files (the Data Analysis
folder), and the VM is backed up daily to a Recovery Services Vault.

One of your teammates accidentally deletes the project folder. To which


of the given VMs can you recover the folder?
Correct answer
To any given VM in the subscription
Only to the source VM (vm02)
You can recover the folder only to the VMs in the same region as
the source VM
You can recover only the entire VM or disks, not specific folders
or files.
Overall explanation
Short Answer for Revision:

The process of recovering a file entail generating a script and executing


the script on the target VM/computer. Other than some OS restrictions,
you can restore a file/folder to a VM in any region.

Detailed Answer:

Let’s go through the process of recovering a file to understand which VMs we can recover the folder to.

To recover a file, select the backup item of the VM and click File
Recovery .

Recovering a file is a three-step process:

1. First, we select the restore point that contains the deleted folder.
2. Next, we download the script that mounts the disks from the selected
recovery point as local drives on any machine where it is run.

3. And, finally, unmount the drives.

Let’s first connect to vm01 and copy the downloaded executable. And run
the script by copy-pasting the required password. In a few moments, we
can see the mounted drive, where we can browse the project folder.

Let’s also connect to the other two VMs and perform the same set of steps
to mount the drive. As you can observe, we can mount the drive in each of
the given VMs.

So, option A is the correct answer.

As we have seen, although vm03 is in a different region than the source VM, the script works, irrespective of the location of the VM. Option C is incorrect.
In addition to restoring the entire VM or only the disks, we have seen that
we can use file recovery to mount disks from the recovery point for
restoring specific files and folders. Option D is also incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/backup/backup-azure-restore-files-from-vm

GitHub Repo Link: Recover a folder to a VM from a recovery point

Resources
Recover a folder to a VM from a recovery point
Domain
Monitor and maintain Azure resources

Question 14
You have two subscriptions Dev and Prod, under the Apps management
group as shown below:
And you create three resource groups across the two subscriptions.

In each resource group you deploy a virtual machine with the related
resources.

Finally, you create the below four users and assign roles at different
scopes.
Given below are three statements based on the above information. Select
Yes if the statement is correct. Else select No.

No, Yes, No

No, No, Yes


Correct answer
No, No, No
Yes, No, No
Yes, No, Yes
Overall explanation
It would be easy to answer these statements after knowing the hierarchy
of resources in Azure. From image 1 in the question, the hierarchy begins
in the following fashion:
Thanks to icons8 for the above icons

From image 2 in the question, there are two resource groups, rg-dev-
01 and rg-dev-02 in the Dev subscription and one resource group rg-
prod-01 in the Prod subscription.
Thanks to icons8 for the above icons

Finally, one virtual machine in each resource group.


Thanks to icons8 for the above icons

Statement 1:

Since Admin one is the owner of the Apps management group, he has
owner permissions on the subscriptions and all the resources created
using those subscriptions.
But the Tenant Root Group is one level above the Apps management
group. Granting owner access at the Apps management group scope
doesn’t grant access to the root management group. Consequently, he
cannot assign policies at the root management group scope.

Therefore, Admin one cannot assign policies at the Tenant Root Group scope. Statement 1 is incorrect.

So, statement 1 -> No.


Among the other users, only Admin two can assign policies at the Tenant
Root Group scope. Since he is a Microsoft Entra ID global administrator, he
can elevate his access to a user access administrator in Azure RBAC at the
root scope.

This enables him to access the root management group and all the
subscriptions and management groups in the tenant.

Since the user access administrator role provides access to the entire
resource provider Microsoft.Authorization (See the Actions list in the 1st
link below), in addition to role assignments for managing user
access, Admin two can also perform policy assignments (check the related
lecture video).
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-
access-control/built-in-roles#user-access-administrator

https://learn.microsoft.com/en-us/azure/role-based-access-control/
resource-provider-operations#microsoftauthorization

So, only Admin two can assign policies at the Tenant Root Group scope.
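
A hedged sketch of what Admin two would run; the tenant (root management group) ID and the policy display name are placeholders, and elevating access can equally be done from the Microsoft Entra ID properties blade in the portal:

# Elevate access from Global Administrator to User Access Administrator at the root scope (/)
Invoke-AzRestMethod -Method POST -Path "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# Assign a policy at the Tenant Root Group scope
$definition = Get-AzPolicyDefinition -Builtin | Where-Object { $_.Properties.DisplayName -eq "<policy display name>" }
New-AzPolicyAssignment -Name "root-policy" -PolicyDefinition $definition `
    -Scope "/providers/Microsoft.Management/managementGroups/<tenant-id>"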

Statement 2:

We have granted the Virtual Machine Contributor role to User two at the Dev subscription scope. So, he will have Virtual Machine Contributor access to the virtual machines in the rg-dev-01 and rg-dev-02 resource groups.

But the Virtual Machine Contributor role provides the contributor access
only to the virtual machine. This role doesn’t grant management access to
resources that the VM depends on. i.e., he cannot manage the virtual
network where the VM is deployed.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-contributor
Consequently, when he tries to add a new address space to the VNet, he
will get the below error:

Therefore, User two cannot add a new address space in the VNet.
Statement 2 is incorrect.

So, statement 2 -> No.

Statement 3:

We have granted the Virtual Machine User Login role to User one at
the rg-dev-02 resource group scope. So, he will have login access to only
the vmsqlnode01 virtual machine.

To log in to vmsharepoint01 VM, he needs the role access either at the rg-dev-01 resource group scope or directly at the VM scope.

Therefore, User one cannot log in to the vmsharepoint01 VM. Statement 3 is incorrect.

So, statement 3 -> No.

Option No, No, No is the correct answer.


Note for Statement 1 (You can skip this section if you do not want
to know why the error occurred in the video lecture):

When a user with the user access administrator role assigns policies
at any management group scope, he gets a success message for policy
assignment and an error message for failure to register
Microsoft.PolicyInsights resource provider (as seen in the video).

This error does not occur when this user assigns policies at the
subscription scope if he has registered the subscription with
Microsoft.PolicyInsights resource provider in Azure portal.

In the Azure portal, there is no option to register the management group with the resource provider. If you don't want to see the error, use the REST API to register the provider with the management group.

This is the response from the Microsoft team on the above issue (contains
details on why Microsoft.PolicyInsights RP is needed):

Link: https://learn.microsoft.com/en-us/answers/questions/1055370/can-
user-access-administrator-assign-azure-policie.html

Resources
Managing assignments at Management groups & Subscriptions
Domain
Manage Azure identities and governance
Question 15
You create the below backup policy to back up your Azure VMs to a
Recovery Services Vault.

Assume you create the policy on April 1, Saturday, 00:00 AM. Based on
the given backup schedule, answer the below two questions:
30 Months, 10
6 Weeks, 9
Correct answer
5 years, 8
30 Months, 9
Overall explanation
Short Answer for Revision:

Question 1: If more than one backup retention point exists on a day, Azure
will retain the backup for the longest duration. For the 4th of April, daily,
weekly, monthly, and yearly backup retention points exist. So, the backup
will be retained for 5 years.

Question 2: Since daily backups are retained only for 7 days, on 12th
April, you will have the backups for the last 7 days. Further, on 4th April,
the backup is retained for 5 years. So, in total, 8 backups.

Detailed Answer:

Question 1:

Let’s understand the given backup policy by plotting the timelines from
1st April to 12th April. It is given that a daily backup point occurs at 8:00
AM.

The backup points that occur every Tuesday are retained as weekly
backup points for six weeks. These backups occur on the 4th and the 11th
of April.

The backup points that occur on the 4th of every month are retained as
monthly backup points for thirty months. For April, the monthly backup
point falls on a Tuesday.

Finally, the backup points that occur on the first Tuesday of April month
are retained as yearly backup points for five years. The yearly backup
point also falls on the 4th of April.
Coincidentally, for the April month of the given year, the backup that
occurs on the 4th of April is retained as daily, weekly, monthly, and yearly
backup points.

It is important to understand that only one recovery point is created per day; that same point may be retained as a daily, weekly, monthly, or yearly backup for different amounts of time.

In case of a conflict between two or more retention ranges (weekly, monthly, etc.) for a backup, Azure retains the recovery point for the longest duration. Therefore, the backup that occurs on the 4th of April will be retained for five years.

So, Question 1 -> 5 years.

Question 2:

Each daily backup point will be retained only for seven days. So, on the
12th of April at 6:00 PM, the daily backups from 1st to 3rd April, and the
5th of April will be deleted. The recovery point on the 4th of April is
untouched as it will be retained for five years, as just discussed.
So, there will be a total of 6 daily backups, one weekly backup, and one
yearly backup. Therefore, on the 12th of April, there will be a total of 8
backups. Question 2 -> 8.

Option C is the correct answer.

Reference Link:https://learn.microsoft.com/en-us/azure/backup/backup-
azure-arm-vms-prepare#create-a-custom-policy

Resources
Backup policy in a Recovery Services Vault
Domain
Monitor and maintain Azure resources

Question 16
Given below are two statements based on Azure Storage object
replication. Select Yes if the statement is correct. Else select No.

Yes, No
No, No
Correct answer
No, Yes
Yes, Yes
Overall explanation
Statement 1:

By default, cross-tenant object replication is disabled for a storage account. However, you can enable cross-tenant replication by clicking the Advanced settings in the Object replication section and selecting the checkbox Allow cross-tenant replication.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-
replication-prevent-cross-tenant-policies?tabs=portal

After enabling cross-tenant replication, let's go ahead and verify if we can perform cross-tenant replication. The user interface in the Azure portal doesn’t provide a way to select a destination storage account from another tenant.
To get around this problem, let’s define replication rules by uploading
JSON files in both the source and the target storage accounts in different
Microsoft Entra ID tenants.

To get a sample JSON for defining replication rules, I create a rule (in the
UI) to replicate objects between two storage accounts, both in the same
Microsoft Entra ID tenant. And download the rule in a JSON file.
In this JSON file, we need to do three things:

1. Replace all occurrences of the policyId with the string ‘default’.

2. Replace the values for the target subscription id, target resource group,
and target storage account in the destinationAccount property.

3. Replace the values for the target container in the destinationContainer property.
Save and upload the JSON file in the object replication section of the
target storage account (in a different Microsoft Entra ID tenant).
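If you prefer scripting this instead of uploading the file through the portal, the az storage account or-policy command group can apply the same JSON. This is only a hedged sketch; the resource groups, account names, and file names below are placeholders:

# Apply the edited JSON (policyId set to 'default') on the destination account first
az storage account or-policy create --resource-group <dest-rg> --account-name <dest-account> --policy @policy.json

# List the policy on the destination account to obtain the generated policyId,
# then apply the downloaded JSON (now containing that policyId) on the source account
az storage account or-policy list --resource-group <dest-rg> --account-name <dest-account>
az storage account or-policy create --resource-group <src-rg> --account-name <src-account> --policy @policy-with-id.json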

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-
replication-configure?tabs=portal#configure-object-replication-using-a-
json-file

If you download the JSON replication file from the target storage account,
you can view the generated policyId.

Upload this downloaded file to the object replication section in the source
storage account. The same policyId in the JSON file on the source and the
target account ensures that replication takes place, and you can view the
replicated files in the target storage account (Check the related lecture
video).

Since we can replicate objects between storage accounts in different Microsoft Entra ID tenants, Statement 1 is No.

Statement 2:

Azure Storage object replication replicates objects between two storage accounts, i.e., between a source and a target storage account. You cannot copy objects between containers in the same storage account.

Statement 2 is Yes.

Option C is the correct answer.

Note related to explanations for statement 1:

The below link explains best why we first uploaded the JSON file to the
destination storage account, then downloaded the JSON file from the
destination account to upload it to the source storage account.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-
replication-overview#replication-policies

If you first upload the JSON file (with the ‘default’ value for policyId) to the
source storage account, you will get this error.
Resources
Azure Storage object replication across tenants
Domain
Implement and manage storage

Question 17
You have three Azure Virtual Machines, two VMs in vnet01, and a VM
hosting a custom DNS server in vnet02. The two VNets peer with each
other.

The vnet01 uses the custom DNS server (on the left), which hosts the
forward lookup zone birdsource.com (on the right). There are a couple of
‘A’ records in the zone pointing to the VM’s private IP as shown:
Also, an Azure Private DNS zone, bigstuff.com is linked with vnet01 with
auto-registration enabled.

Which of the following domain names can vm01 resolve using the
nslookup command?

a. vm02.internal.cloudapp.net

b. vm02.bigstuff.com

c. vm02

d. vm02.birdsource.com

Only a, c, and d
Only b
Correct answer
Only d
Only a, b, and c
Overall explanation
Short Answer for Revision:

Each VNet comes with a default, Azure-provided DNS server, which can resolve vm02 and vm02.internal.cloudapp.net. It can also forward DNS queries to the private DNS zones linked to the VNet.

But when you use a custom DNS server, it cannot resolve vm02 or vm02.internal.cloudapp.net as the lookup zone is different. It can only resolve domain names like vm02.birdsource.com, and it does not forward queries to the private DNS zones linked to the VNet.

Option C is the correct answer.

Detailed Answer:

For questions like these that include a lot of moving pieces, we can start
with organizing the information as a visual architecture map.

The best way to understand the concepts underlying this question is by running nslookup commands and analyzing the produced output.

Note that vnet01 doesn't use the default, Azure-provided DNS server but rather a custom DNS server hosted in a peered virtual network at 10.0.0.5.
In your command prompt, type nslookup to use the command in an
interactive mode. Subsequently, whenever you lookup domain names, this
command first hits the custom DNS server. Generally, the Azure-provided
DNS servers resolve domain names like vm02 and
vm02.internal.cloudapp.net in any virtual network.

But the custom DNS server has no idea of these domain names. So, unless
you set up forwarding that forwards the DNS queries to the Azure-
provided DNS server, vm01 cannot resolve these names.

So, options A and D are incorrect.

As you can expect, vm01 should be able to resolve vm02.birdsource.com, as its VNet uses the custom DNS server which hosts the lookup zone
birdsource.com. But although vnet01 is registered with the Azure Private
DNS zone bigstuff.com, vm01 cannot resolve vm02.bigstuff.com, as the
custom DNS server isn't set up to forward the DNS queries to the private
zone.
Option C is the correct answer.
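To see this behaviour for yourself, you can run the lookups from vm01 against the names in the scenario. This is just a rough sketch; the hostnames and the 10.0.0.5 DNS server come from the question:

nslookup vm02.birdsource.com          # resolved by the custom DNS server (zone birdsource.com)
nslookup vm02                          # fails - the custom DNS server has no such record
nslookup vm02.internal.cloudapp.net    # fails - only the Azure-provided DNS knows this zone
nslookup vm02.bigstuff.com             # fails - queries are not forwarded to the private zone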

We can get a complete picture of how Azure DNS works by switching vnet01 to use the Azure-provided DNS server.

Now when you use nslookup in an interactive mode, the DNS queries use
the Azure-provided DNS server at the virtual IP 168.63.129.16. This public
IP address is owned by Azure and facilitates communication with the
Azure DNS and the DHCP servers from any virtual network in any region.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16

The default DNS is set up to resolve names like vm02 and vm02.internal.cloudapp.net.
The Azure-provided DNS server is also set up with the capability to forward the DNS queries to any private DNS zone linked with the VNet, like bigstuff.com. So, vm01 can also resolve vm02.bigstuff.com.

For the sake of completeness, as expected, vm01 cannot resolve vm02.birdsource.com as the VNet doesn't use the custom DNS server.

Reference
Link: https://learn.microsoft.com/en-us/windows-server/administration/
windows-commands/nslookup
https://learn.microsoft.com/en-us/azure/dns/dns-faq-private#will-dns-
resolution-by-using-the-default-fqdn--internal-cloudapp-net--still-work-
even-when-a-private-zone--for-example--private-contoso-com--is-linked-to-
a-virtual-network-

GitHub Repo Link: Using custom DNS server with Azure Private DNS
zone

Resources
Using custom DNS server with Azure Private DNS zone
Domain
Implement and manage virtual networking

Question 18
You have three VMs, deployed across two VNets in your Azure
subscription. From the virtual machine vm02 and vm03’s Windows Server,
you configure birdsource.com as the primary DNS suffix.

You have an Azure Public DNS zone named birdsource.com. The system-
assigned managed identity of vm03 is assigned the below role to the
public DNS zone.
Further, an Azure Private DNS zone named bigstuff.com is registered with
vnet02, with auto-registration enabled.

Finally, the Virtual Networks vnet01 and vnet02 peer with each other.
Given below are three statements based on the above information. Select
Yes if the statement is correct. Else select No.

No, No, Yes


No, Yes, No
Yes, No, No
Correct answer
No, No, No
Overall explanation
Short Answer for Revision:

Virtual network linking and auto-registration are possible only for Azure Private DNS zones. Further, the Owner role of vm03 doesn't change this DNS behavior. Statement 1 -> No.

The Azure DHCP service ignores the primary DNS suffix configured in the
VM's OS when its VNet registers with the private DNS zone. So, vm03
cannot resolve the birdsource domain. Statement 2 -> No.

The VNet peering links do not forward the DNS queries to the private zone.
Statement 3 -> No.
Detailed Answer:

There are a lot of details going on here, so let’s get the architecture
mapped out:

Statement 1:

The concept of linking a virtual network with a DNS zone and enabling
auto-registration applies only to Azure Private DNS zones. You cannot
even link your VNet with a public DNS zone, let alone enable auto-
registration.
So, an ‘A’ record pointing to vm03’s private IP address is automatically
created in the private zone, not on the public zone, birdsource.com.

The Owner role of vm03 on the public DNS zone grants permissions to perform either zone-level or record-level operations, like adding a record set or deleting a zone. The role doesn't interfere with the default DNS operations.

Reference Link: https://learn.microsoft.com/en-us/azure/dns/private-dns-virtual-network-links

Statement 1 -> No.
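If you want to confirm where the auto-registered record actually lands, a quick hedged check with the Azure CLI is to list the A records in both zones; the resource group name is a placeholder:

# The auto-registered A record for vm03 appears in the private zone...
az network private-dns record-set a list --resource-group <rg> --zone-name bigstuff.com --output table

# ...and not in the public zone, regardless of vm03's Owner role on it
az network dns record-set a list --resource-group <rg> --zone-name birdsource.com --output table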

Statement 2:

vnet02 is the registration virtual network for the private DNS zone,
bigstuff.com. So vm03 can resolve the domain name of vm02, which is
vm02.bigstuff.com.

Although the primary DNS suffix of both vm02 and vm03 is configured as birdsource.com, the Azure DHCP service ignores the primary DNS suffix when registering records in the private DNS zone.

So vm03 cannot resolve vm02.birdsource.com.


Reference Link: https://learn.microsoft.com/en-us/azure/dns/dns-faq-
private#i-have-configured-a-preferred-dns-suffix-in-my-windows-virtual-
machine--why-are-my-records-still-registered-in-the-zone-linked-to-the-
virtual-network-

Statement 2 -> No.

Statement 3:

The VNet peering only enables vm01 and vm02 in different virtual
networks to communicate with each other. Since vnet01, where vm01 is
deployed, is not linked to the private DNS zone, it doesn’t have access to
the DNS records.

The VNet peering links do not forward the DNS queries to the private
zone. Since all virtual networks must be linked to the private DNS zone to
support DNS resolution between virtual networks, vm01 cannot resolve
the domain bigstuff.com for vm02.
Reference Link: https://learn.microsoft.com/en-us/azure/dns/dns-faq-
private#will-azure-private-dns-zones-work-across-azure-regions-

Statement 3 -> No.

GitHub Repo Link: Use Azure DNS to resolve domain names

Resources
Use Azure DNS to resolve domain names
Domain
Implement and manage virtual networking
Question 19
You have created the below resources in your Azure subscription.

There is a blob container and a file share in the Azure storage account as
shown below:
Based on the given information, answer the below two questions:

Only the disk, Container1 and share1, 3


Only the disk and share1, 2
Only the disk and Container1, 3
Correct answer
Only the disk and Container1, 4
Overall explanation
Short Answer for Revision:

Question 1: Different data sources can be backed up to either the Recovery Services vault or the Backup vault. You can back up only Azure Disks and blob containers to a Backup vault.

Question 2: Each data source requires a separate backup policy. The policy cannot be reused across data sources, even within a Recovery Services/Backup vault.
Detailed Answer:

From the backup center, you can create two types of vaults: a Recovery
Services vault and a Backup vault. Each vault supports backing up a
specific type of data source.

From this information, we can conclude that vault02, which is a Backup vault, can back up Azure disks and Azure blob containers.
And Azure Recovery Services Vault can back up the VM and Azure file
share.

Question 1 -> Only the disk and Container1.


Question 2:

Although both Azure Disks and the blob containers can be backed up to a
backup vault, you cannot reuse a backup policy between the two data
sources.

For example, I already created a backup policy called AzureBlobPolicy for backing up blob containers to the Backup vault. When I back up Azure Disks, I cannot reuse this policy.

Similarly, each data source type backed up to a Recovery Services vault needs a dedicated backup policy. Backup policies can only be shared by data sources of the same type.

So, I need a minimum of four backup policies to back up four types of data
sources. Question 2 -> 4. Option D is the correct answer.

Reference Link:https://learn.microsoft.com/en-us/azure/backup/backup-
azure-arm-vms-prepare#apply-a-backup-policy

GitHub Repo Link: Backup different data sources to a vault

Resources
Backup different data sources to a vault
Domain
Monitor and maintain Azure resources

Question 20
I have a public domain called ravikiransrinivasulu.com that’s already
delegated to Azure DNS. Azure DNS successfully resolves queries for my
domain from the Internet.

Select and order the steps you would perform to delegate a subdomain
called courses to another separate Azure DNS zone.

Create a new zone named courses.ravikiransrinivasulu.com for the subdomain

Create an NS record pointing to the parent domain’s name servers from the child zone

Create a new zone named courses for the subdomain

Create an NS record pointing to the child domain’s name servers from the parent zone

Correct answer
Create a new zone named courses.ravikiransrinivasulu.com for the subdomain

Create an NS record pointing to the child domain’s name servers from the parent zone

Create a new zone named courses for the subdomain

Create an NS record pointing to the parent domain’s name servers from the child zone

Overall explanation
Short Answer for Revision:

To delegate a subdomain to another Azure DNS zone, first, create a DNS zone with the format: <<subdomain.domain.com>>.

NS records indicate which DNS server is authoritative for a domain/subdomain, so they tell the Internet where to find the domain’s IP address. Since we delegate the subdomain to another zone (the new zone is authoritative for that subdomain), we create an NS record in the parent zone that points to the subdomain’s name servers.

Detailed Answer:

For the domain ravikiransrinivasulu.com, a subdomain named courses will be addressed as courses.ravikiransrinivasulu.com.

To delegate the courses subdomain to another separate Azure DNS zone, we need to create a DNS zone named courses.ravikiransrinivasulu.com, and not just courses. Since the DNS zone name looks very similar to a domain, you will get this error if you enter only courses as the name of the DNS zone.
So, options B and D, which tell you to create a new zone called courses,
are incorrect.

<<Refer to the question in Practice Test 1 (Related lecture video title: Delegate a domain to Azure DNS zone) to understand how DNS delegation works>> This question and the explanations that follow are a continuation of that one.

Recall that when a user visits a website, the domain delegation with the
help of NS records ensures that the DNS query can reach the DNS zone of
the target domain. Note that:

a. All along the DNS hierarchy, the parent zone points to the name server
of the child zone.

b. To delegate a subdomain to another separate Azure DNS zone, we need to create an NS record in the parent zone that points to the name servers of the child DNS zone.
So, in the parent DNS zone, create a new record set of type NS and
include all the name servers of the child DNS zone.

You can verify if the delegation works correctly by creating an A record with a dummy IP address in the child DNS zone. You should be able to resolve the subdomain’s IP address with the nslookup command.
So, option C is the correct answer.
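For reference, the same delegation can be scripted with the Azure CLI. This is a hedged sketch; <rg> and the name server value are placeholders:

# 1. Create the child zone for the subdomain
az network dns zone create --resource-group <rg> --name courses.ravikiransrinivasulu.com

# 2. Read the child zone's name servers
az network dns zone show --resource-group <rg> --name courses.ravikiransrinivasulu.com --query nameServers

# 3. In the parent zone, create an NS record set named 'courses' pointing to each child name server
az network dns record-set ns add-record --resource-group <rg> --zone-name ravikiransrinivasulu.com \
    --record-set-name courses --nsdname <child-name-server>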

Reference Link: https://learn.microsoft.com/en-us/azure/dns/delegate-subdomain

Note: Having said that, an NS record that points to the child domain’s
name servers is automatically created in the parent zone if you create a
child DNS zone:

a. Through a parent DNS zone, where the parent zone name is pre-
populated, or

b. By selecting the below checkbox in the Create DNS zone page.


Reference Link: https://learn.microsoft.com/en-us/azure/dns/tutorial-
public-dns-zones-child

GitHub Repo Link: Delegate a subdomain to another DNS zone

Resources
Delegate a subdomain to another DNS zone
Domain
Implement and manage virtual networking

Question 21
You need to establish a site-to-site VPN connection from your Azure Virtual
Network to your on-premises network. For this connection, you plan to
deploy a zone-redundant VPN gateway across different availability zones.

Which of the following public IP addresses would you use to associate with
the Gateway IP configuration?

Either ipv4StandardZR or ipv4StandardZ1


Correct answer
Only ipv4StandardZR
Any of the Standard, Static public IP addresses
Everything other than the dynamic IP address
Overall explanation
Short Answer for Revision:
A VPN gateway requires a public IP address for communication. To deploy a zone-redundant gateway (one that spans more than one zone), use a public IP address that is also zone-redundant, i.e., placed in more than one zone. Only the IP address ipv4StandardZR is deployed in multiple zones. Option B is the correct answer.

Detailed Answer:

The public IP addresses come in two SKUs: Basic and Standard:

1. The Basic SKU public IP addresses do not support availability zones.

2. The Standard SKU public IP addresses can be:

a. Zonal, i.e., placed in a single availability zone

b. Zone-redundant, i.e., placed in three availability zones in a region

c. Non-zonal i.e., not placed in any availability zone

Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
public-ip-addresses#sku
So, the Basic SKU public IPs support only the regional gateway, which
doesn’t have zone redundancy built into it. Whereas the Standard SKU
public IPs can support either zone redundant or zonal gateways.

Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-
redundant-vnet-gateways#piprg

https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-
redundant-vnet-gateways#pipzg

https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-
redundant-vnet-gateways#pipzrg

Option D is incorrect as it includes a Basic SKU public IP address.

Option C is also incorrect: although a Standard, static IP address can support availability zones, the IP address may not be configured to use the availability zones. For example, the ipv4Standard IP resource.

All VPN Gateway SKUs that end with AZ, like VpnGw2AZ, support zone redundancy. For these SKUs, you can view the eligible IP addresses for association, which are either zonal or zone-redundant.
Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-
about-vpngateways#benchmark

Creating a zonal or zone-redundant VPN gateway is dependent on the public IP address used.

If you use the public IP address created as zonal, i.e., if the IP address is
placed in a single zone, the gateway will also be zonal.

If you use the public IP address created as zone-redundant, i.e., if the IP address is placed in all three zones, the gateway will also be zone-redundant.

So, to deploy a zone-redundant VPN gateway, you need to use a public IP address set up for zone-redundancy.
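As a hedged illustration, a zone-redundant Standard SKU public IP (like ipv4StandardZR in the question) could be created with the CLI as follows; the resource group is a placeholder:

az network public-ip create --resource-group <rg> --name ipv4StandardZR \
    --sku Standard --allocation-method Static --version IPv4 --zone 1 2 3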

Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-
redundant-vnet-gateways#pipskus

Option B is the correct answer.


GitHub Repo Link: Choose a public IP address for zone-redundant
gateway

Resources
Choose a public IP address for zone-redundant gateway
Domain
Implement and manage virtual networking

Question 22
You have four storage accounts in your Azure subscription.

And two service endpoint policies and a virtual network in the East US
region.
Each service endpoint policy allows access only to specific storage
accounts. You can identify the access from the policy name. For example,
policyaccess0910 grants access only to strdev009 and strdev010.

Similarly, policyaccess0911 grants access only to strdev009 and strdev011 storage accounts. Both the policies are associated with subnet01 in vnet01.
Assume you deploy a virtual machine in vnet01/subnet01. Given below
are two statements based on the given information. Select Yes if the
statement is correct. Else select No.

Yes, No
Yes, Yes
Correct answer
No, No
No, Yes
Overall explanation
Short Answer for Revision:
Consider a service endpoint policy as a whitelisting tool. If more than one policy is applied to a subnet, access is granted to all the storage accounts whitelisted across the policies.

Detailed Explanation:

A lot is going on concerning the number of resources given here. As always, to clear the clutter, it helps to map the resources so we can understand the layout better.

The policyaccess0910 service endpoint policy allows access only to strdev009 and strdev010 storage accounts. The policyaccess0911 service endpoint policy allows access only to strdev009 and strdev011 storage accounts.

Further, I have also created blobs in all the storage accounts which we will
try to access from the VM.
Statements 1 and 2:

By default, if no policies are associated with a subnet, service endpoints allow network access to all the storage accounts in the subscription. Consider the service endpoint policy as a whitelisting tool.

The service endpoint policies deny access to all storage accounts not
listed in their definition.

If more than one policy is associated with the subnet, as is the case, then access is allowed to all storage accounts whitelisted in any of the policies.

So, if you log in to the VM, you should be able to access blobs in storage
accounts strdev009, strdev010, and strdev011. However, access to
strdev012 is denied as none of the service endpoint policies whitelist the
account.
So both the given statements are incorrect. Option C is the correct
answer.
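For completeness, here is a hedged CLI sketch of how such a policy and its subnet association might be created; the resource IDs and names are placeholders:

az network service-endpoint policy create --resource-group <rg> --name policyaccess0910 --location eastus

az network service-endpoint policy-definition create --resource-group <rg> \
    --policy-name policyaccess0910 --name allow-strdev009-010 \
    --service Microsoft.Storage \
    --service-resources <strdev009-resource-id> <strdev010-resource-id>

# Attach the Microsoft.Storage service endpoint and both policies to the subnet
az network vnet subnet update --resource-group <rg> --vnet-name vnet01 --name subnet01 \
    --service-endpoints Microsoft.Storage \
    --service-endpoint-policy policyaccess0910 policyaccess0911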

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#configuration

GitHub Repo Link: Access storage accounts from a subnet with multiple
service endpoint policies

Resources
Access storage accounts from a subnet with multiple service endpoint
policies
Domain
Implement and manage virtual networking

Question 23
You have a list of public IP addresses in your Azure subscription, as shown
below:

You have to choose the correct IP address for your load balancer Frontend
IP configuration. Based on this information, answer the two statements
below. Select Yes if the statement is correct. Else, select No.
Correct answer
Yes, No
Yes, Yes
No, Yes
No, No
Overall explanation
Short Answer for Revision:

There are two public Azure Load Balancer SKUs: Basic and Standard. Only a Basic SKU public IP address can be associated with a Basic Azure Load Balancer. Similarly, only a Standard SKU public IP address can be associated with a Standard Azure Load Balancer. No mix-and-match is allowed.

So, statement 1 -> Yes.

Azure Load Balancer Basic supports only IPv4. LB Standard supports both
IPv4 and IPv6. Statement 2 -> No.

Detailed Answer:

Statement 1:

Azure Public Load Balancers come in two SKUs: Basic and Standard.

Unlike in the case of an Azure Firewall, you can associate a Basic SKU
public IP address with an Azure Load Balancer. But the caveat is that you
cannot mix and match the SKUs of both types of resources.

So, a Basic SKU public IP address can only be associated with a Basic SKU
Load Balancer. Similarly, a Standard SKU public IP address can only be
associated with a Standard SKU Load Balancer.

So yes, we can associate only a Basic SKU public IP address with an Azure
Load Balancer Basic SKU. Statement 1 -> Yes.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
configure-public-ip-load-balancer

Statement 2:

Standard SKU load balancers support both IPv4 and IPv6 addresses. But
Basic SKU load balancers support only IPv4 addresses. So, you cannot
associate an IPv6 address with the Basic SKU Azure Load Balancer.

Statement 2 -> No. Option A is the correct answer.
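As a small hedged sketch, matching the SKUs explicitly in the CLI looks like this (names are placeholders):

# Standard public IP + Standard load balancer (the SKUs must match)
az network public-ip create --resource-group <rg> --name lb-pip --sku Standard --allocation-method Static
az network lb create --resource-group <rg> --name lb-standard --sku Standard --public-ip-address lb-pip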

GitHub Repo Link: Public IP address SKU for an Azure Load Balancer

Resources
Public IP address SKU for an Azure Load Balancer
Domain
Implement and manage virtual networking

Question 24
You have three web server virtual machines in your Azure subscription.

Further, you have the related resources of the virtual machines.

You need to group all three virtual machines into an application security
group so you can define network policies based on those groups for web
server VMs. How would you proceed?
Add the subnet of the VMs as a member of the application
security group
Associate the VM’s network security group with the application
security group
Correct answer
Add all the network interfaces as members of the application
security group
Add all the VMs as members of the application security group
Overall explanation
Short Answer for Revision:

The members of an application security group are the VMs’ NICs, not any other resource type.

Detailed Answer:

Application Security Groups help you group the VMs per your application
architecture. Once you group the VMs, instead of IP addresses, you can
use the group names to filter network traffic to a set of VMs using network
security rules.

Although you configure application security groups from the virtual machine blade in the Azure portal, it is actually the VM’s Network Interface Card that you add to the group.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/application-security-groups

https://azure.microsoft.com/en-in/blog/applicationsecuritygroups/
Option C is the correct answer. All other options are incorrect.
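A hedged CLI sketch of the same idea, with placeholder NIC and IP configuration names, looks like this:

az network asg create --resource-group <rg> --name webservers-asg

# Membership is configured on the VM's NIC (per IP configuration), not on the VM object itself
az network nic ip-config update --resource-group <rg> --nic-name vm01-nic --name ipconfig1 \
    --application-security-groups webservers-asg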

GitHub Repo Link: Group VMs into an application security group

Resources
Group VMs into an application security group
Domain
Implement and manage virtual networking

Question 25
You deploy two virtual machines in an Azure virtual network subnet.

As shown above, two Network Security Groups (nsg01 and nsg02) are
associated with the Network Interface Cards (nic01 and nic02) attached to
the VMs.

In addition to the default security rules, the nsg01 also has a rule that
denies inbound traffic through the ICMP protocol from any source.
Similarly, in addition to the default rules, the nsg02 also has a rule that
denies outbound traffic through the ICMP protocol to any source.
Based on the given information, answer the below two statements. Select
Yes if the statement is correct. Else select No.

Note: Assume you already enabled ICMP through the Windows firewall on
both VMs.

Yes, No
Yes, Yes
Correct answer
No, Yes
No, No
Overall explanation
Short Answer for Revision:

Statement 2 -> The nsg01 does not deny outbound connections and
nsg02 does not deny inbound connections. So, vm01 can ping vm02 (the
response is automatically allowed).

Statement 1 -> The nsg02 denies outbound connections. So, vm02 cannot
ping vm01.

Detailed Answer:

If the ICMP is already enabled for both VMs through the Windows firewall,
vm01 and vm02 can ping each other because the default security
rules AllowVnetInBound and AllowVnetOutBound allow both inbound
and outbound traffic within the VNet.
Well, unless you override the rule with a higher-priority one. The ping tool uses the ICMP protocol to send an echo request to the target VM. But ICMP traffic is denied outbound from vm02 and inbound to vm01.

Recall from practice test 1 (Related lecture video title: Configure NSG
to allow external traffic) that NSG rules are stateful. This means when a
rule allows outbound traffic from the NIC, you do not require an explicit
inbound rule to allow the response back to the NIC. Similarly, when a rule
allows inbound traffic to the NIC, you do not require an explicit outbound
rule to allow the response from the NIC.
Statement 2:

In the given scenario, two different NSGs are associated with the
individual NICs of both the VMs. When you ping vm02 from vm01:

a. nsg01 denies only the inbound traffic through the ICMP protocol. So, the
outbound request from vm01 will go through.

b. nsg02 denies only the outbound traffic through the ICMP protocol. So, it
doesn’t prevent vm01’s message from reaching vm02.

c. When you ping vm02 from vm01, the message reaches vm02
successfully.

d. The stateful nature of the NSG rules ensures that vm01 receives the
response, even though its NSG blocks the inbound traffic.
So, you can ping vm02 from vm01.

Statement 2 -> Yes.

Statement 1:

Since nsg02 denies the outbound traffic through the ICMP protocol, the
outbound request from vm02 will not go through. Consequently, the ping
is unsuccessful.
So, you cannot ping vm01 from vm02.

Statement 1 -> No. Option C is the correct answer.
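For reference, the extra deny rule on nsg02 could be expressed with the CLI roughly as follows (the priority and rule name are illustrative):

az network nsg rule create --resource-group <rg> --nsg-name nsg02 --name DenyIcmpOutbound \
    --priority 100 --direction Outbound --access Deny --protocol Icmp \
    --source-address-prefixes '*' --destination-address-prefixes '*' --destination-port-ranges '*'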

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#security-rules

GitHub Repo Link: Effects of associating NSG with a NIC

Resources
Effects of associating NSG with a NIC
Domain
Implement and manage virtual networking

Question 26
In your Microsoft Entra ID tenant, you create three users. The below table
summarizes their access roles in Microsoft Entra ID and Azure
subscription.

Further, User One logs into the Microsoft Entra ID tenant and toggles the
setting to Yes, as indicated below.

A user (User Four) who is no longer in the project should have his access
to the subscription revoked. Which users can remove their access?

Only User Three


Only User One
Correct answer
Only User Three and User One
Only User Two and User One
Overall explanation
Short Answer for Revision:

A User Access Administrator role is assigned to User One following his elevated access. So, he can remove User Four. An Owner has all the permissions. So, he can also remove access.

Contributor access doesn’t give permission to manage access.

Detailed Explanation:

Refer to Practice Test 1 (Related lecture video title: Assign subscription owner access to a new user), where we discussed that Microsoft Entra ID roles and Azure subscription roles are independent of each other.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/rbac-and-directory-admin-roles#differences-between-azure-roles-and-microsoft-entra-roles

Since User One (a global admin in Microsoft Entra ID) has elevated his
access to manage all Azure subscriptions in the tenant, he can view all the
management groups and the subscriptions within the tenant.

Following his elevated access:

1. User One has the User Access Administrator role access to all
subscriptions and management groups in the tenant. With this role, he
can manage users’ access to Azure resources.

2. Per the above point, User One can remove subscription access for User
Four .
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-
access-control/built-in-roles#user-access-administrator

https://docs.microsoft.com/en-us/azure/role-based-access-control/elevate-
access-global-admin

User Two has only contributor access to the subscription. This role lets you
create/modify/delete resources in Azure but it does not have permissions
to manage user access to Azure resources. So, he cannot remove User
Four .

Note that all the actions like adding/removing roles are disabled for User
Two .
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-
access-control/built-in-roles#contributor

Finally, a subscription owner has the highest role privileges in Azure RBAC. He can manage all Azure resources and user access. So, User Three can revoke access for User Four.
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-
access-control/built-in-roles#owner

Only User One and User Three can revoke subscription access for User
Four .

Option C is the correct choice.
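For illustration, removing User Four's role assignment at the subscription scope could look roughly like this in the CLI (the UPN and subscription ID are placeholders):

az role assignment delete \
    --assignee userfour@contoso.com \
    --scope /subscriptions/<subscription-id>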

Resources
Remove subscription access to a user
Domain
Manage Azure identities and governance
Question 27
You would like to host two different static websites (foo.com and boo.com)
on a standalone Virtual Machine. So obviously, the two websites require
different public IP addresses for communication. Based on this scenario,
answer the below two questions:
Correct answer
1, 1
1, 2
2, 1
2, 2
Overall explanation
Short Answer for Revision:

You can create many IP configurations for a NIC. Each IP configuration can
be associated with a different public IP address. So, one NIC is sufficient.
Question 1 -> 1.

For a single NIC, you need a single subnet. Question 2 -> 1.

Detailed Answer:

Question 1:

You can attach either one or multiple Network Interface Cards to a VM. For
each NIC, you can create multiple IP configurations. Each IP configuration
will have a private IP and an optional public IP.
To host two different websites on a standalone VM, you can add two IP configurations, each holding a public IP address, to a single Network Interface Card.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
virtual-network-multiple-ip-addresses-portal
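A hedged sketch of adding the second IP configuration (with its own public IP) to the existing NIC via the CLI, with placeholder names:

az network nic ip-config create --resource-group <rg> --nic-name vm-nic --name ipconfig-boo \
    --public-ip-address boo-pip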

There is no necessity to attach an additional NIC to the VM. In fact, you should attach multiple NICs only when you need to place each NIC in a different subnet to isolate various types of network traffic, for example, one NIC in a frontend subnet and another in a backend subnet.

Reference
Link: https://github.com/toddkitta/azure-content/blob/master/articles/
virtual-network/virtual-networks-multiple-nics.md

Multiple NICs are also a requirement of network virtual appliances that help you better manage your network traffic by isolating various types of traffic across the different NICs.

Reference Link: https://azure.microsoft.com/en-in/blog/best-practices-to-consider-before-deploying-a-network-virtual-appliance/

So the minimum or the correct number of NICs required is one. Question 1 -> 1.

Question 2:

A Network Interface Card (NIC) connects a VM to a virtual network subnet. Since you can create IP configurations for both public IP addresses on the same NIC, you require only one subnet.

So, question 2 -> 1.

By installing multiple SSL certificates on a single instance, each associated with a distinct IP address, you can host multiple websites on a VM. Option A is the correct answer.

Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
virtual-network-multiple-ip-addresses-portal

GitHub Repo Link: Network configuration for hosting multiple websites on a VM
Resources
Network configuration for hosting multiple websites on a VM
Domain
Implement and manage virtual networking

Question 28
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.

You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.

Solution: You peer vnet01 with vnet03.

Does the solution meet the stated goal?

Note: All the remaining three VNets have the required number of usable
IP addresses.

Yes
Correct answer
No
Overall explanation
Short Answer for Revision:

For VNet peering, the address spaces of the two VNets should not overlap.
But the address spaces of vnet01 and vnet02 overlap (check the detailed
explanations).

Detailed Answer:

The address spaces of the two virtual networks should not overlap when
you enable peering. From the given address spaces for the two networks
(vnet01 and vnet03), let’s evaluate the address ranges to verify if we can
enable peering between them.

<<You can skip this section if you are well aware of finding address
ranges from a CIDR notation>>

To begin with, we plot the bits as below for the first and the last IP address
for the address space 10.0.0.0/12 (vnet01). /12 indicates that the first 12
bits are for the network, and the remaining bits are for the host.
Now, the network bits (the first 12) are untouched for calculating the first
and the last IP addresses. For the first IP address, set all the host bits to 0,
which it already has. For the last IP, set all the host bits to 1. Now,
converting from binary to decimal will give you the address range of
vnet01.

Similarly, let’s go through the process again to calculate the address range of vnet03. /25 indicates that the first 25 bits are for the network, and the remaining bits are for the host.
Similar to the earlier case, the network bits (the first 25) are untouched
for calculating both IP addresses. After setting all the host bits to 0 and 1,
for the first and the last IP address, respectively, we get the IP address
range 10.0.15.0 to 10.0.15.127.

Clearly, the IP addresses overlap. vnet01 includes all IPs from 0-15 (from
the second octet). Whereas vnet03 also includes a subset of these IPs.

So, as expected, when you try to peer vnet03 with vnet01, you will get
this error.
The given solution does not meet the stated goal. Option No is the correct
answer.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq#can-i-peer-two-vnets-with-matching-or-overlapping-address-ranges

GitHub Repo Link: Peer Azure Virtual Networks

Resources
Peer Azure Virtual Networks - 1
Domain
Implement and manage virtual networking

Question 29
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.

You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.

Solution: You peer vnet01 with vnet02.

Does the solution meet the stated goal?

Note: All the remaining three VNets have the required number of usable
IP addresses.

Correct answer
Yes
No
Overall explanation
Short Answer for Revision:

Azure supports global VNet peering, so you can peer VNets in different
regions as long as their address spaces do not overlap.
Detailed Answer:

Similar to the previous question, first, let’s check if the address spaces of
the two networks overlap.

We already know the address range for vnet01. Doing a similar analysis for vnet02, which has the address space 10.16.32.0/22, will give the address range between 10.16.32.0 and 10.16.35.255.

Since vnet01 only includes ranges between 0-15 (the second octet), the
address spaces of vnet02 and vnet01 don’t overlap.

So, although vnet02 is in a different Azure region than vnet01, you can
peer the two virtual networks as Azure supports global virtual network
peering.
Since peering with vnet02 is possible, option yes is the correct answer.
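For reference, a global peering between the two networks can be scripted roughly as below; both directions must be created for the peering to become Connected (the resource group names and IDs are placeholders):

az network vnet peering create --resource-group <rg-eastus> --name vnet01-to-vnet02 \
    --vnet-name vnet01 --remote-vnet <vnet02-resource-id> --allow-vnet-access

az network vnet peering create --resource-group <rg-westus> --name vnet02-to-vnet01 \
    --vnet-name vnet02 --remote-vnet <vnet01-resource-id> --allow-vnet-access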

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

GitHub Repo Link: Peer Azure Virtual Networks

Resources
Peer Azure Virtual Networks - 2
Domain
Implement and manage virtual networking

Question 30
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.
You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.

Solution: You peer vnet01 with vnet04.

Does the solution meet the stated goal?

Note: All the remaining three VNets have the required number of usable
IP addresses.

Correct answer
Yes
No
Overall explanation
Short Answer for Revision:

You can peer VNets in different subscriptions and Microsoft Entra ID tenants as long as their address spaces do not overlap.

Detailed Answer:

First, let’s check if the address spaces of the two networks overlap.
We already know the address range for vnet01. Doing a similar analysis
for vnet04, which has the address space 10.16.0.0/12, will give the
address range between 10.16.0.0 and 10.31.255.255.

Since vnet04 includes ranges between 16-31 in the second octet, which is
different from that of vnet01, the address spaces of vnet04 and vnet01
don’t overlap.

So, although vnet04 and vnet01 are in different Azure subscriptions, you
can successfully peer the two virtual networks.
In fact, you can even peer virtual networks in different Microsoft Entra ID
tenants.

Since peering with vnet04 is possible, option yes is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/create-peering-different-subscriptions?tabs=create-peering-portal

GitHub Repo Link: Peer Azure Virtual Networks

Resources
Peer Azure Virtual Networks - 3
Domain
Implement and manage virtual networking

Question 31
You deploy a Basic SKU Azure Bastion to let your users connect to
Windows virtual machines using their browsers and the Azure portal. Your
colleague associates a Network Security Group to the subnet where the
Bastion service is deployed.
Which of the following ports do you need to open to ensure the subnet’s
egress traffic can reach the target VMs?

22
443
Correct answer
3389
80
Overall explanation
Short Answer for Revision:

Port 80 enables the Bastion service to communicate with the internet.

Port 443 enables the Bastion service to communicate with Azure services.

Ports 3389 (Windows) and 22 (Linux) allow the Bastion service to reach the target VMs.

Detailed Answer:

First, you don’t have to associate any Network Security Group to the
Bastion subnet. Even after deploying only a Bastion in the virtual network,
you can use it to connect to the VMs.

If you need to associate an NSG with the Bastion subnet, you must ensure the NSG has all the required inbound and outbound security rules.

Reference Link: https://learn.microsoft.com/en-us/azure/bastion/bastion-nsg#apply

For the egress traffic from the Bastion subnet,

a. Port 80 allows the Bastion service to communicate with the internet for
establishing sessions and certificate validation. So, option D is incorrect.

b. Port 443 allows the Bastion service to communicate with other Azure
services for storing logs. So, option B is incorrect.

c. And ports 3389 and 22 allow the Bastion service to reach the target VMs. Port 22 is for SSH to Linux VMs, and port 3389 is for RDP connections to Windows VMs. Since we are using a Basic SKU Bastion, there is also no possibility of using custom ports, so the Bastion service must use port 3389.

Option A is incorrect, and option C is the correct answer.
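The required egress rule for reaching the target VMs corresponds roughly to the following CLI call (a hedged sketch; the NSG name and priority are placeholders):

az network nsg rule create --resource-group <rg> --nsg-name bastion-nsg --name AllowSshRdpOutbound \
    --priority 100 --direction Outbound --access Allow --protocol Tcp \
    --destination-address-prefixes VirtualNetwork --destination-port-ranges 22 3389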

Reference
Link: https://learn.microsoft.com/en-us/azure/bastion/configuration-
settings#ports

Further, it is good to know that when you use the Azure portal to connect
with Bastion, the RDP/SSH session is on port 443.

Reference Link: https://learn.microsoft.com/en-us/azure/bastion/bastion-overview (see the image).

GitHub Repo Link: Open ports with NSG for Azure Bastion

Resources
Open ports with NSG for Azure Bastion
Domain
Implement and manage virtual networking

Question 32
You have an App Service app currently running with one instance. The app
runs on a Standard App Service plan defined with the below autoscale
settings:
Shown above (on the right) is the scale rule for scale-out.

After creating the autoscale rule, assume the Average CPU utilization of
the App Service Plan hits 95% for the first 30 minutes. How many
instances will be running at the end of 20 minutes?

Correct answer
2
3
4
5
Overall explanation
Short Answer for Revision:

This setup begins with one VM instance. After the CPU shoots to 100%,
two VM instances will be running. But since the Enable metric divide by
instance count is checked, the calculated CPU utilization is 100% / 2 ->
50%. The scale-out rule is no longer triggered, and we have 2 instances at
the end of 20 minutes.

Detailed Answer:

To simulate a CPU utilization of 95%, I write a For loop that runs infinitely
in a PowerShell script file. Let’s upload the file as a web job so the script
runs as a background task in the Azure App Service. Since we need to
mimic the Average CPU utilization for the App Service Plan (irrespective of
the number of VM instances actually running in the plan), I set the scale to
Multi-instance, so the web job runs across all the VM instances.

<<Refer to the question in Practice Test 1 (Related lecture video title: Autoscaling in VMSS) to understand how to interpret the autoscaling policy>> This lecture is heavily dependent on your knowledge from that video.

Although the default number of VM instances in the autoscale rule is 2, it is given that the App Service app is currently running with one VM instance. So even after the autoscale rule is created, we will have 1 running VM, as the default instance count kicks in only if there is a problem reading the metric value.
Reference Link: https://stackoverflow.com/questions/75778407/azure-
app-plan-autoscaling-confused-by-default-settings

Immediately after the web job is uploaded, the CPU utilization hits 100%.
But for the first 10 minutes, nothing happens as the autoscale engine
requires the scale out condition to be true for a duration of at least 10
minutes to trigger a scale action. After 10 minutes, the engine calculates
the Average Percentage CPU, which now is peaking at 100%, and creates
a new VM instance.

Since the cooldown period is 5 minutes, the autoscale engine performs another check only after 5 minutes.
Note that the App service plan utilization is still 100% as we set the web
job to run in each VM instance in the plan. But since the Enable metric
divide by instance count is checked, the average CPU utilization is 50%
(100% CPU utilization divided by 2 VM instances).

With 50% CPU utilization, the autoscale engine doesn’t create another VM.
So, only two VM instances will be running after 20 minutes. Option A is the
correct answer.
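For reference, a comparable autoscale setting and scale-out rule can be scripted with the CLI; the threshold, duration, and names below are illustrative placeholders, not the exact values from the screenshot:

az monitor autoscale create --resource-group <rg> \
    --resource <app-service-plan-resource-id> \
    --name plan-autoscale --min-count 1 --max-count 5 --count 2

az monitor autoscale rule create --resource-group <rg> --autoscale-name plan-autoscale \
    --scale out 1 --cooldown 5 \
    --condition "CpuPercentage > 70 avg 10m"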
Reference
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/autoscale/
autoscale-troubleshoot#example-2-advanced-autoscaling-for-a-virtual-
machine-scale-set

GitHub Repo Link: Autoscaling an App Service App

Resources
Autoscaling an App Service App
Domain
Deploy and manage Azure compute resources

Question 33
While creating an Azure Container Instance in the Azure portal, you
specify a container Restart policy.

Assume that the container you deploy to the Container Instance exits with
an exit code of 1. Which of the policies ensures that the containers run at
most once?

Always
Correct answer
Never
OnFailure
Overall explanation
Short Answer for Revision:

The ‘always’ restart policy will always restart the container, irrespective of
the exit code. The container will run more than once.

The ‘never’ restart policy never attempts to restart the container, irrespective of the exit code. The container runs at most once (with a failed status). Option B is the correct answer.

The ‘OnFailure’ restart policy will restart the container only if it exits with
a non-zero exit code. Since the container exits with an exit code of 1
every time, the container will also run more than once.

Detailed Answer:

To demonstrate the scenario in this question, I created the below Dockerfile in Visual Studio Code. When you build an image from this Dockerfile and run the container, the container will:

a. Print the exit code,

b. And exit with an exit code of 1.

I build an image from this Dockerfile and push the image to the Azure
Container Registry. As these steps are only the preparation steps for this
question, I am not including those details here. If you are interested,
check the related lecture video to know more details about the process.

Using the image in the registry instance, I create three container instances, each with a different restart policy setting: Always, Never, and OnFailure.
Even before proceeding further, notice the status of the container with the
'never' restart policy is Failed. That’s because the 'never' restart policy
never tries to restart the container after it terminates with a nonzero exit
code, which is generally an indication that the process executed in the
container has failed.

With a ‘never’ restart policy, the Azure Container Instance never attempts
to restart the container, irrespective of the exit code. So, this container
will have a 0 restart count and the State will be terminated after the
container runs once.
So, containers deployed with a 'never' restart policy will run at most once.
Option B is the correct choice.
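The 'never' policy deployment corresponds roughly to this CLI call (a hedged sketch; the registry image and names are placeholders):

az container create --resource-group <rg> --name exit-demo-never \
    --image <registry>.azurecr.io/exit-demo:v1 \
    --restart-policy Never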

With an ‘always’ restart policy, the Container Instance will always restart the container, irrespective of the exit code. So, this container will have increasing restart counts with time.

The container with the 'always' restart policy runs many times. So, option A is incorrect.

Finally, a container deployed with an ‘OnFailure’ restart policy will be restarted if it exits with a non-zero exit code. Since the container exits with an exit code of 1 every time, it will also have increasing restart counts with time.

Due to this reason, the container with the 'OnFailure' restart policy also runs many times. So, option C is also incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-restart-policy

https://learn.microsoft.com/en-us/azure/container-registry/container-
registry-get-started-docker-cli?tabs=azure-cli

GitHub Repo Link: Restart policy for Azure Container Instances

Resources
Restart policy for Azure Container Instances
Domain
Deploy and manage Azure compute resources

Question 34
In a Microsoft Entra ID tenant, one of the users ( User Four) in your
organization needs to perform billing tasks like purchasing Microsoft 365
products, updating payment information, etc.

From the user accounts page in Microsoft Entra ID, where would you
assign the billing administrator role to the user?

Azure role assignments blade


Licenses blade
Applications blade
Correct answer
Assigned roles blade
Overall explanation
In the Assigned roles section for a user, you can view a list of assigned
administrative roles for the user and add new assignments to the
Microsoft Entra ID directory roles.

Reference Link: https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/list-role-assignments-users

Option D is the correct choice.

In the Azure role assignments blade, you can only view the user’s role
assignment to resources in the Azure subscription. We cannot assign
Microsoft Entra ID roles like billing administrators here.
Option A is an incorrect choice.

In the licenses blade, you can assign/unassign Microsoft product licenses like Microsoft 365, Power Apps, etc., to the user.
Reference Link: https://docs.microsoft.com/en-us/azure/active-
directory/fundamentals/license-users-groups

Option B is an incorrect choice.

Finally, as the names indicate, in the Applications blade, you can view
applications assigned to the user. Option C is incorrect.

Resources
Assign billing administrator role to the user
Domain
Manage Azure identities and governance

Question 35
You deploy two virtual machines in an Azure virtual network subnet.
As shown above, the subnet is also associated with a Network Security
Group. In addition to default rules, the NSG also has a rule that denies
outbound traffic through the ICMP protocol from any source.

Based on the given information, answer the below two statements. Select
Yes if the statement is correct. Else select No.
Note: Assume you already enabled ICMP through the Windows firewall on
both VMs.

Yes, No
Yes, Yes
No, Yes
Correct answer
No, No
Overall explanation
Short Answer for Revision:

Both VMs are in the same subnet, which has an associated NSG. So, both
VMs inherit the same NSG rules.

The outbound traffic for ICMP traffic is blocked. So, none of the VMs can
send pings to the other.

Detailed Answer:

If the ICMP is already enabled for both VMs through the Windows firewall,
vm01 and vm02 can ping each other because the default security
rules AllowVnetInBound and AllowVnetOutBound allow both inbound
and outbound traffic within the VNet.
Well, unless you override the rule with a higher-priority one. The ping tool uses the ICMP protocol to send an echo request to the target VM. But ICMP traffic is denied outbound from the subnet.

Recall from practice test 1 (Related lecture video title: Configure NSG
to allow external traffic) that NSG rules are stateful. This means when a
rule allows outbound traffic from the subnet, you do not require an explicit
inbound rule to allow the response back to the subnet. Similarly, when a
rule allows inbound traffic to the subnet, you do not require an explicit
outbound rule to allow the response from the subnet.
In the given scenario, the NSG is associated with the subnet where both
the VMs reside. So, the NICs of both the VMs will inherit all the rules
defined in the NSG. You can verify this by navigating to the Effective
security rules of each VM’s NIC.
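Besides the portal, the inherited rules can also be inspected with a hedged CLI call such as the following (the NIC name is a placeholder):

az network nic list-effective-nsg --resource-group <rg> --name vm01-nic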

So, if you ping vm02 from vm01, the outbound security rule for vm01
ensures that the message does not go through.
Similarly, if you ping vm01 from vm02, the outbound security rule for
vm02 ensures the message does not go through.

The correct option for both statements is No. Option D is the correct
answer.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview#security-rules
GitHub Repo Link: Effects of associating NSG with a subnet

Resources
Effects of associating NSG with a subnet
Domain
Implement and manage virtual networking

Question 36
You run a Linux container app in Azure Container Instance. Container apps
are stateless, so if the app restarts, all the data is lost. You have to store
the data generated by the app in an external (permanent) storage, so
data is available across container restarts/crashes.

Which of the following Azure services would you use?

Azure Tables
Azure Blobs
Azure Cosmos DB
Correct answer
Azure Files
Overall explanation
Short Answer for Revision:

Of the given services, you can mount only an Azure file share as a volume
into a container directory.

Detailed Answer:

You can mount an Azure file share, created in Azure Files, as a volume
into a container directory in Azure Container Instances.

For this, first, create a Storage account and a file share where you want to
persist the app data in Azure Files.
Next, while deploying the container instance using the az container create Azure CLI command, ensure you specify the following (see the sample command after this list):
a. The storage account name,

b. The storage account access keys,

c. The file share,

d. And the volume mount point (the directory where you want to mount
the file share).
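Putting those parameters together, a hedged sketch of the command might look like this (all names, keys, and paths are placeholders):

az container create --resource-group <rg> --name datastore-app \
    --image <your-app-image> \
    --azure-file-volume-account-name <storage-account-name> \
    --azure-file-volume-account-key <storage-account-key> \
    --azure-file-volume-share-name <file-share-name> \
    --azure-file-volume-mount-path /aci/data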
Once the app is created successfully, you can test if the app persists the
data to the file share by entering sample data.

In the file share, a new text file will be created. You can download the file
to verify its contents.

You can also view the created text files by connecting to the container
app and running the below commands.
Reference Link: https://learn.microsoft.com/en-us/azure/container-
instances/container-instances-volume-azure-files

Option D is the correct answer. None of the other Azure storage account
services provide persistent storage for Azure Container Instances.

GitHub Repo Link: Connect persistent storage to a container

Resources
Connect persistent storage to a container
Domain
Deploy and manage Azure compute resources

Question 37
You deploy an Azure VM and its related resources in the Dev subscription.
The VM’s Network Interface Card (NIC) is associated with a static,
Standard SKU Public IP address resource.

Select the steps you perform to move the virtual machine and all its
related resources to a different subscription in the same Microsoft Entra ID
tenant.

Correct answer
Disassociate the Public IP from the VM

Move all the resources to the new subscription

Associate the Public IP to the VM

Downgrade the Public IP SKU from Standard to Basic

Move all the resources to the new subscription

Upgrade the Public IP SKU from Basic to Standard

Move all the resources except the Public IP resource

Move the Public IP resource in a separate move operation


Delete the public IP resource

Move all the resources to the new subscription

Recreate the public IP resource

Overall explanation
If the VM’s NIC is associated with a Standard SKU Public IP address
resource, we cannot move the VM and the related resources across Azure
subscriptions. If you try to do so, you will get this error.

But we can move the VM and the related resources across Azure
subscriptions if the VM’s NIC is associated with a Basic SKU Public IP
resource.
Reference
Link: https://learn.microsoft.com/en-us/answers/questions/410380/public-
ip-tenant-move.html

So, it’s tempting to conclude that we first downgrade the Public IP SKU
from Standard to Basic, perform the move operation and upgrade the
Public IP SKU from Basic to Standard.

But, although we can upgrade the Public IP SKU from Basic to Standard, the reverse operation is not possible.
Reference
Link: https://learn.microsoft.com/en-us/answers/questions/298941/change
-std-sku-public-ip-to-basic-sku.html

So, option B is incorrect.

Moving all resources except the Public IP resource will not work, as all
dependent resources have to be moved together to a different
subscription.
Since the Public IP resource is still associated with the VM’s NIC, you will
get the above error.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-


manager/management/move-resource-group-and-subscription#checklist-
before-moving-resources

Option C is also incorrect.

The best solution will be to first disassociate the Public IP resource from the VM’s NIC, move the resources to a different subscription, and associate the Public IP resource to the VM’s NIC in the target subscription (Check the related lecture video).

Option A is the correct answer.
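
A minimal PowerShell sketch of this sequence (resource names, resource group names, and the subscription ID below are placeholders):

# 1. Disassociate the public IP from the NIC's IP configuration
$nic = Get-AzNetworkInterface -Name "vm01-nic" -ResourceGroupName "rg-dev"
$nic.IpConfigurations[0].PublicIpAddress = $null
$nic | Set-AzNetworkInterface

# 2. Move all resources in the resource group to the target subscription
$resources = Get-AzResource -ResourceGroupName "rg-dev"
Move-AzResource -DestinationSubscriptionId "<target-subscription-id>" `
    -DestinationResourceGroupName "rg-dev" `
    -ResourceId $resources.ResourceId

# 3. In the target subscription, re-associate the public IP with the NIC
Set-AzContext -SubscriptionId "<target-subscription-id>"
$nic = Get-AzNetworkInterface -Name "vm01-nic" -ResourceGroupName "rg-dev"
$pip = Get-AzPublicIpAddress -Name "vm01-pip" -ResourceGroupName "rg-dev"
$nic.IpConfigurations[0].PublicIpAddress = $pip
$nic | Set-AzNetworkInterface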


Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-
manager/management/move-limitations/networking-move-
limitations#dependent-resources

Option D, talks about deleting the Public IP resource and moving the rest
of the resources to the target subscription. Since the IP address is still
associated with the VM’s NIC, it cannot be deleted.

Reference
Link: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-
kubernetes/cannot-delete-ip-subnet-nsg

Even if Azure allows deletion, the IP address is static only during the
lifecycle of the Public IP resource. So, if you recreate a new static Public IP
resource in the target subscription, it will have a different IP address.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
virtual-network-public-ip-address#create-a-public-ip-address

Option D is incorrect.

Resources
Move Azure VM and related resources to a different subscription
Domain
Deploy and manage Azure compute resources

Question 38
You have an ARM template and a parameter file that defines a VM and its
related resources. You want to reuse the template to deploy multiple VMs
after updating the references to the VM’s password in plaintext, so that
the password is stored and retrieved from Azure Key Vault.

What are the two steps you would do to achieve this objective?

Correct selection
Create a secret in Azure Key Vault
Create Keys in Azure Key Vault
Create an access policy to grant access to the user deploying the
template
Correct selection
Assign access to Azure Resource Manager for template
deployment
Overall explanation
Short Answer for Revision:
In Azure Key Vault, VM’s passwords are stored as secrets, not keys. Option
A is one of the correct answers.

To retrieve the Key Vault secret from the template, we need to grant
permission to the ARM template deployment. Option D is the other correct
answer.

Granting access to the user (via access policy) will enable the user to
access the secret, for example, using the Azure portal.

Detailed Answer:

To store the VM’s password in Key Vault, create a Key Vault secret and
enter the password as the Secret value .

Key Vault secrets are better suited for providing secure storage of
passwords, connection strings, etc., Option A is one of the correct
answers.

Keys, as the name indicates, are used for storing software-protected and HSM-protected keys. Option B is incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/key-vault/general/about-
keys-secrets-certificates#object-types

Once we store the VM’s password as a secret in the Key Vault, we can
update the ARM template to point the VM’s password reference to the Key
Vault secret.
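
For illustration, the reference typically goes into the parameter file; a minimal sketch (the subscription ID, resource group, vault name, secret name, and the adminPassword parameter name are placeholders):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "vmAdminPassword"
      }
    }
  }
}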

But if you deploy this template, the deployment process will run into an
error as the template does not have access to the secret defined in the
Key Vault.

Let’s first try to fix this issue by creating a Key Vault access policy that
grants access to the user who will deploy the ARM template.
If you try deploying the template, you will get the same error again, as the
access policy grants permissions to the user to retrieve the secret, like in
an Azure portal. It doesn’t enable the template deployment process to
access the secret. Option C is also incorrect.

To enable the template to retrieve the secret defined in the key vault, we
need to enable the Key Vault access policy to allow resource access to
Azure Resource Manager for template deployment.
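
If you prefer PowerShell over the portal, the same setting can be enabled with a single command; a sketch, assuming a vault named kv-vm-secrets:

Set-AzKeyVaultAccessPolicy -VaultName "kv-vm-secrets" -EnabledForTemplateDeployment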
If you deploy the template now, the deployment process should be
complete without any errors. Once the VM is deployed, the user can use
the Secret value defined in the Key Vault to log into the VM.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-


manager/templates/template-tutorial-use-key-vault#prepare-a-key-vault

Option D is the other correct answer.

GitHub Repo Link: Replace password with Key Vault secrets in an ARM
Template

Resources
Replace password with Key Vault secrets in an ARM Template
Domain
Deploy and manage Azure compute resources

Question 39
You have a Windows Azure Virtual Machine in a stopped (deallocated) status.
Which of the following actions can you NOT do when the VM is in a deallocated status? Select two correct options.

Correct selection
Add a DSC configuration extension
Correct selection
Configure Site Recovery to move VM from availability zone 1 to 2
in the same region
Resize the VM
Attach another NIC to the VM
Overall explanation
<<This is a NOT question>>

You can resize the VM either when it is running or is deallocated. Option C is incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/resize-vm?tabs=portal#change-the-vm-size
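
For reference, a resize takes only a short PowerShell sequence; a minimal sketch with placeholder names and size:

$vm = Get-AzVM -ResourceGroupName "rg-dev" -Name "vm01"
$vm.HardwareProfile.VmSize = "Standard_D2s_v3"   # the new size
Update-AzVM -ResourceGroupName "rg-dev" -VM $vm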

In fact, you can’t add a NIC to an Azure VM in running status. So, you have
to stop the VM to attach a NIC.
Option D is incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface-vm#add-a-network-interface-to-an-existing-vm

To add any extension to an Azure VM, you need the Azure VM agent. This VM agent is a lightweight process running in your virtual machine. For example, a Windows VM will have the WindowsAzureGuestAgent service running when you deploy any image from the marketplace.

So, when you add an extension to a VM, it is the agent that executes the instructions and configures the extension. When the VM is deallocated, no VM agent is running, and so we cannot add an extension to the VM in that state.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/extensions/agent-windows

Option A is one of the correct answers.

You can use Azure Site Recovery to move a VM between availability zones
in a region.
Azure Site Recovery, a disaster recovery solution, replicates storage from
one zone to the other. So, it creates a disk in the target availability zone,
from where you can spin up a new VM.
Since all we care about in a VM is the data on the disk, moving the data to
a different availability zone is equivalent to moving a VM to a different
zone.

But you can configure site recovery only when the VM is running. When
the VM is stopped, you cannot configure site recovery as it requires
installing an extension to the VM (check the related lecture video).
Since you cannot configure Site Recovery to move the VM to a different
availability zone when the VM is stopped, option B is the other correct
answer.

Reference
Link: https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-
azure-how-to-enable-zone-to-zone-disaster-recovery#using-availability-
zones-for-disaster-recovery

Resources
Actions that cannot be done when a VM is deallocated
Domain
Deploy and manage Azure compute resources

Question 40
The company Airbus has to add nearly 1000 users external to Airbus’
Microsoft Entra domain, from 78 supplier organizations that supply
airplane manufacturing parts to them.

Which of the following is the best way to create user accounts?

Use Bulk create operation


Use the Add-AzureADInvitation cmdlet
Correct answer
Use the New-AzureADMSInvitation cmdlet
Use the New-AzureADUser cmdlet
Overall explanation
Using the New-AzureADUser PowerShell command, you can create only
internal Microsoft Entra ID users. When you try creating an external user
using this command, you get the message that the domain portion of the
userPrincipalName property is invalid.

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/new-
azureaduser
Option D is incorrect.

And the bulk create operation also adds internal users in bulk when you
upload the data in a CSV file.

If you add external users to the CSV file and upload the data, you get a
similar error.
The bulk invite operation is more suitable to invite external users in
bulk.

Reference
Link: https://learn.microsoft.com/en-us/entra/identity/users/users-bulk-
add#to-create-users-in-bulk

https://learn.microsoft.com/en-us/entra/external-id/tutorial-bulk-invite

Option A is incorrect.

Using the New-AzureADMSInvitation cmdlet, you can invite an external user to your directory.

This command sends an email invitation to the external user to join your
directory.
The user can click the Accept invitation link to join your directory.
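
Since nearly 1000 users have to be invited, the cmdlet can be wrapped in a loop over an exported list; a minimal sketch (the CSV path and its Email and DisplayName columns are assumptions):

Connect-AzureAD

Import-Csv -Path ".\suppliers.csv" | ForEach-Object {
    New-AzureADMSInvitation -InvitedUserEmailAddress $_.Email `
        -InvitedUserDisplayName $_.DisplayName `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage $true
}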

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/new-
azureadmsinvitation

Option C is the correct choice.

Finally, there is no command like Add-AzureADInvitation, although it looks similar to New-AzureADMSInvitation.
You don’t have to know every PowerShell command for the exam. But you
should be able to deduce if a command can be used in a specific scenario.

In PowerShell, both New and Add are approved verbs. But commands
with the New verb create a new resource: for example, creating a new
guest user account.

On the contrary, cmdlets with the Add verb add something to an existing
resource. For example, Add-AzureADGroupMember cmdlet adds a
member to an existing group.

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/add-
azureadgroupmember

https://learn.microsoft.com/en-us/powershell/scripting/developer/cmdlet/
approved-verbs-for-windows-powershell-commands?view=powershell-
7.2#new-vs-add

So, from this understanding, we can conclude that Add-AzureADInvitation is incorrect: inviting an external user to your directory creates a new invitation, so we would expect such a cmdlet to use the New verb rather than the Add verb, which implies adding something to an existing resource.

Option B is also incorrect.

GitHub Repo Link: Create external user accounts - PS Commands.ps1

Resources
Create external user accounts
Domain
Manage Azure identities and governance

Question 41
You have a storage account in your Azure subscription. You export the
storage account’s ARM template, including the parameters, and add it to
the library.
Your teammate tries to deploy a new storage account from the exported
template in the library. What storage account properties can he
configure/select?

Only Storage Account name


Correct answer
Only Storage Account name, resource group, and subscription
Only Storage Account name, resource group, region, and
subscription
Only Storage Account name and resource group
Overall explanation
There is only one parameter in the template, and since we include the
parameters while exporting, we can configure the name property of the
storage account.

Other than the parameters included in the template, we can also configure the subscription and the resource group. These two values can always be set when deploying from any exported template in the library, for any type of Azure resource.
Although Region is displayed in the above template spec, it is not editable/configurable. The region of the resource is taken from the selected resource group’s region. Since the region value depends on the chosen resource group, the user cannot configure/select the region.

So, apart from region, he can configure the subscription, resource group,
and storage account name. Option B is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-export-template?tabs=azure-powershell#export-template

Resources
Deploy from an exported template in the library
Domain
Deploy and manage Azure compute resources
Question 42
You need to deploy an Azure Virtual Network with two subnets using the
ARM template. Which of the following ARM templates would you use?

a.

1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. }
11. },
12. "resources": [
13. {
14. "type": "subnets",
15. "name": "[concat('subnet', copyIndex())]",
16. "dependsOn": [
17. "vnet01"
18. ],
19. "properties": {
20. "addressPrefix": "[concat('10.0.', copyIndex(),
'.0/24')]"
21. },
22. "copy": {
23. "name": "subnetCopy",
24. "count": "2"
25. }
26. }
27. ]
28. }
29. ]

b.

1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "copy": [
12. {
13. "name": "subnetloop",
14. "count": 2,
15. "input": {
16. "name": "[concat('subnet',
copyIndex('subnetloop'))]",
17. "properties": {
18. "addressPrefix": "[concat('10.0.',
copyIndex('subnetloop'), '.0/24')]"
19. }
20. }
21. }
22. ]
23. }
24. }
25. ]

c.

1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "copy": [
12. {
13. "name": "subnets",
14. "count": 2,
15. "input": {
16. "name": "[concat('subnet', copyIndex('subnets'))]",
17. "properties": {
18. "addressPrefix": "[concat('10.0.',
copyIndex('subnets'), '.0/24')]"
19. }
20. }
21. }
22. ]
23. }
24. }
25. ]

d.

1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "subnets": {
12. "name": "[concat('subnet', copyIndex('subnets'))]",
13. "properties": {
14. "addressPrefix": "[concat('10.0.',
copyIndex('subnets'), '.0/24')]"
15. }
16. },
17. "copy": [
18. {
19. "name": "subnets",
20. "count": "2"
21. }
22. ]
23. }
24. }
25. ]
Only a
Only b
Correct answer
Only c
Only d
Overall explanation
You can create multiple instances of a resource by adding a copy loop. You can add a copy loop to any of these four sections: resources, properties, variables, and outputs.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-resources

A subnet can be defined as a child resource of the parent resource (option a). Here, the subnet resource is nested within the parent VNet resource. But you cannot use a copy loop in the child resource to create multiple subnets. If you deploy the template code in option A, you will get this error.
Option A is incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-resources#iteration-for-a-child-resource

A subnet can also be defined as a property of the top-level VNet resource (options b, c, and d). So, to create multiple properties (subnets) of a VNet, use the copy loop.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-properties#syntax

In the copy loop, the name indicates the name of the VNet property (i.e.,
subnet). So, option B is incorrect, as there is no VNet property named
subnetloop. There is only a VNet property called subnets.

Syntax-wise, option D is incorrect. Given below is the correct way to define the copy loop within the properties section:

1. The name of the property to replicate (i.e., subnets),

2. The subnet property values, like name and addressPrefix, inside the input element.

Option D defines the copy loop separately from the subnets property that needs to be replicated for the VNet resource. If you deploy the template code in option D, you will get this error.

So, running the New-AzResourceGroupDeployment command to deploy the template code in option C will create the VNet with two subnets successfully.
Option C is the correct answer.
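
For reference, the deployment itself is a single command, assuming the option (c) template is saved locally as vnet-two-subnets.json and the resource group already exists:

New-AzResourceGroupDeployment -ResourceGroupName "rg-dev" `
    -TemplateFile ".\vnet-two-subnets.json"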

GitHub repo link: ARM Templates for copy property iteration

Resources
Using copy to create multiple resources in the ARM template
Domain
Deploy and manage Azure compute resources

Question 43
Given below are two statements based on associating a service endpoint
policy to a virtual network’s subnet. Select Yes if the statement is correct.
Else select No.

Yes, No
Correct answer
Yes, Yes
No, No
No, Yes
Overall explanation
Short Answer for Revision:

First, configure a service endpoint (Microsoft.Storage) for a subnet. Then apply the service endpoint policy. Further, the regions of both the VNet and the service endpoint policy should be the same.

Detailed Explanation:

The service endpoints are enabled per subnet per service, which means
all resources in a subnet can access all instances of a service in your
subscription, for example, the Azure Storage account. A service endpoint
policy enables you to filter traffic to specific Azure service instances over
service endpoints.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview

Note that service endpoint policies are supported only for Azure Storage
accounts.

To verify the given statements, I created three VNets in different regions and a service endpoint policy in one of those VNet regions in my Azure subscription.
Statement 1:

The service endpoint policy is in the central US region and only vnet012 is
in the same region. So, first, let’s try associating the policy to a subnet in
vnet012. As you can see, there is no option to associate the policy to this
subnet because the service endpoint is not enabled yet.

But when you enable a service endpoint for the service Microsoft.Storage,
you see the option to apply the policy.
If you think about it, it makes sense. If there are no service endpoints
enabled on a subnet, then what’s the need for a service endpoint policy?

So, we can conclude that we can apply a service endpoint policy on a subnet only if service endpoints are already configured for the Azure service. Statement 1 -> Yes.
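
A minimal PowerShell sketch of the same two steps, with placeholder VNet, subnet, policy, and address values (first enable the Microsoft.Storage endpoint on the subnet, then attach the policy):

$vnet   = Get-AzVirtualNetwork -Name "vnet012" -ResourceGroupName "rg-dev"
$policy = Get-AzServiceEndpointPolicy -Name "sep-storage" -ResourceGroupName "rg-dev"

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet0" `
    -AddressPrefix "10.1.0.0/24" `
    -ServiceEndpoint "Microsoft.Storage" `
    -ServiceEndpointPolicy $policy

$vnet | Set-AzVirtualNetwork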

Statement 2:

We have just seen that we can apply the service endpoint policy to a
virtual network subnet that is in the same region as the policy. Now, let’s
try applying the policy to a subnet that is in a different region. The service
endpoint policies dropdown doesn’t populate any policy.
So, we can conclude that the service endpoint policies can only be applied
to VNet subnets in the same region. Statement 2 -> Yes.

Option B is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#limitations

GitHub Repo Link: Apply Virtual network service endpoint policy to a subnet

Resources
Apply Virtual network service endpoint policy to a subnet
Domain
Implement and manage virtual networking

Question 44
You plan to move a legacy software that uses an older version of SQL
Server to Azure. Since the app doesn’t support application-level
replication, you consider deploying your stack in a single instance of a
virtual machine.

In case of a data center failure, how would you ensure the application’s
availability?

Deploy the VM in an availability zone


Place the VM in an availability set
Correct answer
Use ZRS managed disk
Deploy the app to both the availability zone and availability set
Overall explanation
Availability sets provide isolation within a data center by arranging VMs in
separate fault and update domains. Availability zones provide isolation
within an Azure region if you place VMs in different availability zones. You
cannot use both for your VMs.

Option D is incorrect.

Both availability sets and availability zones provide resiliency to your app
when you create multiple VM instances.

Placing a single VM in an availability set only places the VM in a fault domain. If the server rack corresponding to that fault domain goes down, your app fails.
Availability zones too, help protect your workload only when you replicate
three copies of the VM and place them in different availability zones. In
that case, even if you face zonal failures, i.e., an entire zone is
unavailable, VMs in other zones are up to serve requests.
Note: An Azure data center may have many server racks. But your
subscription is mapped only to three of those racks (fault domains).

So, just assigning the VM to a single availability zone doesn’t offer any
protection from data center outages. If the data center where you host the
VM is unavailable, or if the entire zone is unavailable, your app fails.

Options A and B are incorrect.

Since we have a legacy application, replicating the VMs thrice in three availability zones is not a solution. So, instead of replicating the VMs, we can just replicate the data by using the zone-redundant storage option for managed disks.
Note: ZRS option for managed disks is not supported in all locations.

The ZRS option for managed disk ensures that the data is synchronously
replicated across three availability zones in a region.
So, when the data center/availability zone that hosts the VM goes down,
you can use the replicated data in another availability zone to spin up a
new VM.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/disks-deploy-zrs?tabs=portal#limitations

Option C is the correct answer.

Resources
Availability of a single instance VM
Domain
Deploy and manage Azure compute resources

Question 45
You have an Azure file share that contains a backup file. You generate a
SAS URL for the individual backup file.
Given below are two statements based on the above information. Select
Yes if the statement is correct. Else select No.

Yes, No
No, No
Yes, Yes
Correct answer
No, Yes
Overall explanation
Statement 1:

Using Azure Storage Explorer, you can use only the file share’s SAS URL to
connect to the file share. If you use the individual backup file’s SAS URL,
you will get this error.

Similarly, you cannot use the SAS URL for a file share or a container in a
browser, as we are not accessing a particular file.

So, statement 1 -> No.


Statement 2:

But you can use the individual backup file’s SAS URL in a browser to
directly download the file to your desktop (check the related lecture
video).

Statement 2 -> Yes.

Option No, Yes is the correct answer.

Resources
Using SAS URLs with Storage Explorer and web browser
Domain
Implement and manage storage

Question 46
When customers place new orders from the company’s website, the order
details are enqueued as messages in an Azure Storage account queue.

You process the order messages using Azure Virtual Machines Scale Sets
by scaling out or scaling in VMs based on the number of messages in the
queue.

Here is the scaling policy you define for the Virtual Machine Scale Set.
The scale out policy is given below:
And here are the details of the scale in policy:
Answer the below questions based on the given information. Assume the
events described in questions 1 and 2 happen sequentially over time.
3, 2
Correct answer
4, 3
4, 2
5, 1
Overall explanation
The question mentions that the default instance count is two. So, to begin
with, for three messages in the storage account queue, we have two VMs
in the scale set.
Question 1:

Since we enabled metric divide by instance count for the scale out
policy, each VM will process a maximum of up to four queue messages
before the policy triggers, scaling out to add additional instances. When
the 9th message is added to the queue, the scale out policy kicks off, and
a new VM is added to the scale set.

This new VM will also handle a maximum of 4 queue messages. Beyond 12 messages, any additional incoming message triggers the scale out policy again to increase the instance count by 1, assuming the policy trigger happens after the cool down period, which is 2 minutes.

So, when there are 14 messages in the queue waiting to be processed, there will be 4 running VMs.
So, question 1 -> 4.

Question 2:

But we have not enabled metric divide by instance count for the scale in policy. So, the scale in policy will trigger only when the total queue messages are less than four, irrespective of the number of running VM instances.

So, when the count of queue messages reduce to three, the scale in policy
triggers and deletes a VM in the scale set.
Reference
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/autoscale/
autoscale-best-practices#considerations-for-scaling-threshold-values-for-
special-metrics

https://github.com/hashicorp/terraform-provider-azurerm/issues/7696

So, question 2 -> 3. Option B is the correct answer.

Resources
Autoscale VMs based on queue messages
Domain
Deploy and manage Azure compute resources

Question 47
You have two Virtual Machine Scale Sets, each with different orchestration
modes, in the rg-dev-02 resource group.

Further, you have these resource groups in your subscription.


You have to create and add a new virtual machine to the scale set. In
which scale set and resource groups can you create the VM?

vmss-uni, Only rg-dev-02


vmss-uni, Any resource group
vmss-flex, Any resource group
Correct answer
vmss-flex, Only rg-dev-02
Overall explanation
A Virtual Machine Scale Set with the uniform orchestration mode exposes its virtual machines as scale set instances, whereas a scale set with the flexible orchestration mode exposes actual virtual machine resources.
So, when you create a new VM, you can add the VM only to the scale set with the flexible orchestration mode. Needless to say, the scale set has to be in the same subscription and region.

So, Virtual Machine Scale Set -> vmss-flex.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#what-has-changed-with-flexible-orchestration-mode
When I selected a resource group for the VM different from that of the scale set, I was surprised that I could not add the VM to the scale set. Since resources like a web app and a SQL database that are part of the same solution can be placed in different resource groups, this concern seemed valid.

Azure requires resources that share the same lifecycle to be placed in the same resource group. Although the scale set and the VMs may not share exactly the same lifecycle (a scale in policy can delete the VMs), Azure enforces that these related resources be part of the same resource group.

To rule out a portal bug, I tried creating a new VM in a resource group different from that of the scale set while adding the VM to the scale set using Azure PowerShell. Consistent with the portal, I was not able to create the VM.
But when I placed the VM in the same resource group as the scale set, I was able to create and add the VM to the scale set (Refer to the related lecture video).
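
The working case looks roughly like this in PowerShell (a sketch; the VM name, image alias, size, and credentials are placeholders):

$vmss = Get-AzVmss -ResourceGroupName "rg-dev-02" -VMScaleSetName "vmss-flex"

# The VM is created in the same resource group (and region) as the flexible scale set
New-AzVM -ResourceGroupName "rg-dev-02" `
    -Name "vm-new-01" `
    -Location $vmss.Location `
    -VmssId $vmss.Id `
    -Image "Ubuntu2204" `
    -Size "Standard_B2s" `
    -Credential (Get-Credential)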

So, resource group -> Only rg-dev-02.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups

Option D is the correct answer.

Resources
Resource groups and scale sets for new VMs
Domain
Deploy and manage Azure compute resources

Question 48
You deploy an Azure virtual network with two subnets in a resource group
using the below ARM template.
What would happen if you remove subnet1 and subnet2 from the ARM
template and redeploy the template to the resource group in incremental
deployment mode?

Correct answer
Both subnets will be removed
The resource manager will throw an error
Both subnets will remain intact
The VNet and the subnets will be removed
Overall explanation
While creating a virtual network in the portal, Azure requires the VNet to
have at least one subnet.
But after we create the Virtual Network resource, the Azure Resource
Manager allows you to remove all the subnets via the portal or the ARM
template.

Since the virtual network alone can be a standalone resource without any
subnets, removing both subnets will not delete the VNet.

Option D is incorrect.

Recall from Practice Test 1 (Related lecture video titled: Redeploy a template in different deployment modes) how the incremental and complete deployment modes differ in how they treat resources that are not defined in the template but live in the resource group.

But that’s how the deployment modes work for resources. The subnet01
and subnet02 are defined here as properties of a VNet, not as separate
standalone resources. Hence those rules do not apply to the properties of
a resource.
If you update the properties of a resource, irrespective of the deployment
modes, they are updated in the target resource.

In the given scenario, one of the properties (the array of subnets) is deleted. This property update is applied to the resource.

So, when you run the template in any deployment mode, both the subnets
will be removed. Option A is the correct answer.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-modes

GitHub Repo Link: Effect of updating a resource's property in incremental mode - armtemplate.json

Resources
Effect of updating a resource's property in incremental mode
Domain
Deploy and manage Azure compute resources

Question 49
You are creating a Virtual Machine in the Azure portal. You have the
option to select the following VM features (marked in boxes).
Which feature requires that the VM compulsorily use Azure managed
disks?

Correct answer
Availability Zone
Availability set
VM image
OS disk type
Overall explanation
From 2019, Azure Generation 2 virtual machines are generally available in
Azure. These Gen2 VMs provide advanced features and capabilities
compared to Gen1 VMs.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/generation-2#features-and-capabilities

https://azure.microsoft.com/en-in/updates/azure-generation-2-virtual-
machines-vms-are-now-generally-available/

Although Gen2 VMs always require managed disks, you can use Gen1 VMs to bypass this restriction.

So, the VM image doesn’t completely necessitate that we create VMs with managed disks, as Gen1 VMs support unmanaged disks. Option C is incorrect.

The OS disks for the VM, similar to Azure Storage accounts, support locally-redundant storage and zone-redundant storage, based on their type. But only the OS disk types that support zone redundancy require managed disks.
So, you can still use disks that support LRS to create VMs with unmanaged disks. Therefore, the OS disk type also doesn’t compulsorily require that VMs use managed disks. Option D is also incorrect.

There are two types of availability sets you can create: Managed
availability sets and unmanaged availability sets.

An availability set is created as managed or unmanaged by setting the property Use managed disks. If Use managed disks is set to Yes, we create a managed availability set. If Use managed disks is set to No, we create an unmanaged availability set.

Only VMs with managed disks can be created in a managed availability set.

And only VMs with unmanaged disks can be created in an unmanaged availability set.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/availability-set-overview#how-do-availability-sets-work

Since the user can use an unmanaged availability set to create the VM,
even availability sets don’t completely require the use of managed disks.
Option B is incorrect.

Availability zones and managed disks go hand in hand: when you deploy the VM in any of the availability zones, you need to use managed disks. Option A is the correct answer.

Resources
Azure VM feature requiring managed disks
Domain
Deploy and manage Azure compute resources

Question 50
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

There are two storage accounts in different resource groups in your subscription.

User One has the following two roles at the rg-dev-01 resource group
scope:

1. Storage account contributor

2. Storage blob data owner

Goal: You need to provide access to a SQL backup file to User One.

Solution: You upload the backup file in a blob container in the strdev011
storage account.

Note: After you upload the backup file, your team has configured the
below setting in both storage accounts:
Can User One access the SQL backup file?

Correct answer
Yes
No
Overall explanation
Since the Allow storage account key access is disabled, User One cannot
use access keys to download the backup file.

Recall from practice test 1 (Related lecture video title: Microsoft Entra ID authentication for blob data) that you need read access to a storage account and read access to blob data in the storage account to access blob data via Microsoft Entra ID authentication.

The storage account contributor role provides permissions to manage storage accounts. And the storage blob data owner role provides permissions to manage data in the storage account. So together, both these roles provide permissions to access data in blob containers using Microsoft Entra ID authentication.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-account-contributor

https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-owner

https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization

So, User One can access and download the SQL file using Microsoft Entra
ID authentication.

Option Yes is the correct answer.


Resources
Provide access to a file in Azure storage account - 1
Domain
Implement and manage storage

Question 51
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

There are two storage accounts in different resource groups in your subscription.

User One has the following two roles at the rg-dev-01 resource group
scope:

1. Storage account contributor

2. Storage blob data owner

Goal: You need to provide access to a SQL backup file to User One.

Solution: You upload the backup file in a file share in the strdev011
storage account.

Note: After you upload the backup file, your team has configured the
below setting in both storage accounts:
Can User One access the SQL backup file?

Yes
Correct answer
No
Overall explanation
When the user accesses data in a file share, the Azure portal first checks if
the user has access to the storage account keys. Although the user is
assigned the storage account contributor role, which provides access to
the storage account keys, he will not be able to access the data using
those keys as the Allow storage account key access is disabled for the
storage account.

Since access was not possible with keys, the Azure portal attempts to
access data using the user’s Microsoft Entra account. However, the
Storage Blob Data Owner role provides access to blob data in the storage
account, not file data. So, he still cannot access the files.
To have access to files in the file share using a Microsoft Entra account,
the user should be assigned a role specific to file data, for example, the
Storage File Data Privileged Contributor role (Check the related lecture
video).

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/authorize-
oauth-rest?tabs=portal#authorize-access-using-filerest-data-plane-api

https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles/storage#storage-file-data-privileged-contributor

https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-account-contributor

https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-owner

https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization

Option No is the correct answer.

Resources
Provide access to a file in Azure storage account - 2
Domain
Implement and manage storage

Question 52
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

There are two storage accounts in different resource groups in your subscription.

User One has the following two roles at the rg-dev-01 resource group
scope:

1. Storage account contributor

2. Storage blob data owner

Goal: You need to provide access to a SQL backup file to User One.

Solution: You upload the backup file in a file share in the strdev012
storage account and share a Shared Access Signature token to the
individual file.

Note: After you upload the backup file and generate the SAS token, your
team has configured the below setting in both storage accounts:
Can User One access the SQL backup file?

Yes
Correct answer
No
Overall explanation
Since the Allow storage account key access is disabled on both the
storage accounts, the generated SAS URLs will not work as they are
signed by the access keys.

So, when User One uses the SAS URL, he will get this error.
Option No is the correct answer.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/shared-
key-authorization-prevent?tabs=portal#understand-how-disallowing-
shared-key-affects-sas-tokens

https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization

Resources
Provide access to a file in Azure storage account - 3
Domain
Implement and manage storage

Question 53
You configure the below scale out policy for your Virtual Machine Scale Set
(VMSS).
This is the scale in policy for the VMSS.
Here are the other details of the scaling policy, like the default, minimum,
and maximum no. of instances in the Virtual Machine Scale Set.

Based on the above information, answer the below two questions, which
depict two scenarios that occur in a sequence:
3, 3
3, 0
Correct answer
2, 2
2, 3
Overall explanation
Refer to Practice Test 1 (Related lecture video title: Autoscaling in
VMSS), where I detailed how autoscaling works. The explanation to this
question assumes you have watched that video.

Question 1:

As the scale set CPU utilization peaks at nearly 100% and there are two
VMs in the scale set, the average percentage CPU utilization for both VMs
is 100%.

Since the time window (duration) is 10 minutes for the scale out policy,
the autoscaling policy waits for 10 minutes to collect sufficient
CPU percentage data from the scale set before performing any scale
action. So, even though the CPU percentage utilization is 100% after 8
minutes, the autoscale doesn't trigger any scale action.
The scale set will have the default number of VMs it was created with. So, question 1 -> 2.

Question 2:

Since the amount of time that the Autoscale engine will look back for
metrics is 10 minutes, the first policy trigger happens after 10 minutes,
creating a new VM.
Since there is a cool down period of 5 minutes after a scale action (per the
scale out policy), there will be no checks by the Autoscale engine for
another 5 minutes. As the scale set average percentage CPU utilization
peaks at 100% for the first 17 minutes, another scale out action occurs
after the cool down period, creating another VM instance.

There will be four instances in the scale set after 17 minutes.

Since the scale set cools down to 0% CPU utilization after 17 minutes, the
scale in policy kicks in and tries to remove 3 VMs from the scale set.
But since the required minimum number of VMs in the scale set is 2, the
scale action will not remove all 3 VMs. It will remove just 2 VMs to satisfy
the minimum VM condition.

Question 2 -> 2. Option C is the correct answer.

Reference
Link: https://negatblog.wordpress.com/2018/07/06/autoscaling-scale-sets-
based-on-metrics/

https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-
machine-scale-sets-autoscale-portal

Resources
Analyze running or removed VMs from a scale set
Domain
Deploy and manage Azure compute resources

Question 54
In the storage account below, your company has stored data for apps and
some company confidential information.
Your team realizes that they are unable to configure a lifecycle
management policy in this storage account for optimizing the storage
cost.

Which of the following actions would you perform so they can create and
manage lifecycle policies? Select an option that minimizes the effort.

Create a new storage account that supports access tiers


Correct answer
Upgrade the account kind to StorageV2
Change to a premium storage account
Upgrade to a storage account with Data Lake Gen2 capabilities
Overall explanation
We create lifecycle management policies to transition blobs to appropriate
access tiers or delete them at the end of the lifecycle, based on rules, to
optimize for cost.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-
management-overview

The account type (Standard or Premium) relates to the storage account's performance. Although some premium account types support blob lifecycle management and access tiers, you cannot change the storage account performance once the storage account is created.
Option C is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-overview#types-of-storage-accounts

Upgrading to a storage account with Azure Data Lake Gen2 capabilities is not supported for the legacy, general-purpose v1 accounts.
Further, you would upgrade to a storage account with Data Lake Gen2 capabilities to unlock features that support big data analytics, not to enable support for blob tiering. Option D is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/upgrade-to-
data-lake-storage-gen2-how-to?tabs=azure-portal

The account kind of the given storage account is a legacy, general-purpose v1 account, an earlier version of the general-purpose v2 account.

The v1 account does not support access tiers. So, while uploading files to
the container, you do not see an option to set access tiers.

You also cannot change the access tiers for the files that are already
uploaded.
Since the v1 accounts do not support access tiers, you also cannot
configure lifecycle management policies to transition blobs to the
appropriate access tier for this storage account.

As seen in the above image, the general-purpose V2 storage accounts support creating lifecycle management policies.
So, upgrading the storage account kind to StorageV2 will be the most
suitable choice. You can also set the default access tier for the storage
account while upgrading.

The upgrade doesn’t take much time, and once it is complete, you can create lifecycle management policies.
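
A minimal sketch of the upgrade in PowerShell (the account and resource group names are placeholders):

Set-AzStorageAccount -ResourceGroupName "rg-data" `
    -Name "strcompanydata01" `
    -UpgradeToStorageV2 `
    -AccessTier Hot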

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-upgrade?tabs=azure-portal

Option B is the correct answer.

We can create a brand new, general-purpose v2 storage account as we know it supports blob tiering and lifecycle management policies. But since the company already has data stored in a storage account, creating a new one means an additional effort to move the data files to the new account. As we need to minimize effort, option A is not the most optimal solution.

Resources
Update a storage account property based on use case
Domain
Implement and manage storage

Question 55
Your team maintains a virtual machine running custom software in your
on-premises environment. You use the Hyper-V Manager to export a
specialized disk from the VM to lift and shift the on-premises VM with its
data, apps, & user accounts to Azure. Which of the following PowerShell
commands would you use if you need to transfer the disk to the Azure
Storage account?
New-AzImage -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
New-AzDisk -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Add-AzDisk -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Correct answer
Add-AzVhd -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Overall explanation
Before answering the question, let’s understand the context of the
question.

We have a VM in the on-premises environment that we want to lift and shift to Azure. To do so, we first export the virtual hard disk from the VM.

This export operation creates a specialized VHD that contains specific configuration or identity information.
On a side note, if you want to create a generalized VHD, use the Sysprep tool to remove unique information from the VM, like the computer name, making it suitable for use as a base image for creating multiple virtual machines.

Reference Link: https://learn.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image#generalize-the-vhd

Once you have the VHD, export it to the Azure Storage account using the
Add-AzVhd PowerShell command. This command uploads the virtual hard
disk from your machine to Azure as a page blob.

Option D is the correct answer.
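
A minimal sketch of the upload (the destination blob URL and the local path are placeholders):

Add-AzVhd -ResourceGroupName "rg-dev-03" `
    -Destination "https://<storage-account>.blob.core.windows.net/vhds/onprem-vm.vhd" `
    -LocalFilePath "C:\Hyper-V\Exports\onprem-vm.vhd"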

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
add-azvhd?view=azps-9.3.0

Once the VHD is uploaded to the storage account, you can use the New-
AzImage command to build an Azure VM image from the VHD.

So, the command New-AzImage creates an Azure VM image from the VHD
(stored as a page blob in the Azure Storage account), not to upload a VHD
from the on-premises environment to Azure. Option A is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
new-azimage?view=azps-9.3.0#example-1
The New-AzDisk command, as the name indicates, creates a new
managed disk in Azure. After the disk is created, you can directly upload
the VHD from the on-premises environment to the managed disk using
the azcopy tool or any other PowerShell command. But New-AzDisk
doesn't upload the VHD to the storage account. Option B is also incorrect.

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
new-azdisk?view=azps-9.3.0

https://learn.microsoft.com/en-us/azure/virtual-machines/windows/disks-
upload-vhd-to-managed-disk-powershell#upload-a-vhd-1

In PowerShell, commands with the New verb create a new resource: for
example, New-AzDisk creates a new managed disk resource.

On the other hand, commands with the Add verb add something to an
existing resource. For example, Add-AzVhd uploads (adds) a VHD file to a
blob storage account or a managed disk (as discussed earlier).

Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
add-azvhd?view=azps-9.3.0#example-1-add-a-vhd-file-to-a-blob

https://learn.microsoft.com/en-us/powershell/module/az.compute/add-
azvhd?view=azps-9.3.0#example-5-add-a-vhd-file-directly-to-a-managed-
disk

We have only the New-AzDisk command to create a new managed disk. There is no Add-AzDisk command that adds the disk to something else. Note that a managed disk is a standalone resource in Azure; you don’t store it in a storage account or a VM. A VHD, on the other hand, is stored as a page blob in a storage account. Option C is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/powershell/scripting/developer/
cmdlet/approved-verbs-for-windows-powershell-commands?
view=powershell-7.3#similar-verbs-for-different-actions

GitHub Repo link: PowerShell to upload disk to storage account.ps1


Resources
Upload a disk to the storage account
Domain
Deploy and manage Azure compute resources

Question 56
You plan to move your 3-tier application architecture comprising a web
tier, a business tier, and a data tier (using SQL Server) to Azure. To
increase availability, all three workloads are replicated thrice in Azure with
the help of IaaS Virtual Machines.

Which of the following is the best way to architect the solution using
availability sets to ensure that the application is always running in case of
a power failure affecting a server rack in the data center?

Create three availability sets, each with three fault domains. Each
set contains a web server VM, a VM for processing business logic,
and a SQL Server VM for the data tier.
Correct answer
Create three availability sets, each with three fault domains. Web
server VMs in a set, VMs for processing business logic in another
set, and SQL Server VMs for data tier in the final set.
Create one availability set with three fault domains. Place all 9
VMs (3 web server VMs, 3 VMs for processing business logic, and
3 SQL Server VMs for data tier) in the availability set.
Create nine availability sets with three fault domains. Place a VM
in each availability set.
Overall explanation
Azure data centers are vast, and it’s safe to assume that there are many server racks. But your subscription is mapped to only three of those racks. So, all VMs you place in the availability set are created in those three server racks, or fault domains.

Further, since the question pertains to a power failure affecting a server rack, we are concerned only with fault domains.
If you ever deploy VMs in an availability set, you would notice that the first
VM is deployed to fault domain 0, the next VM to fault domain 1, and the
next to fault domain 2. And so, this continues in a round-robin way.

If you mix and match workloads in each availability set, you should be
careful to line up different application components in each fault domain
across availability sets. Since mistakes can happen, there is a chance that
similar types of workloads end up in the same fault domain.

In case of a power failure to any server rack, one of the component tiers
will completely fail, bringing down the entire application. Option A is
incorrect.

The best approach would be to place similar application components in the same availability set. Using this approach, you don’t have to track how fault domains are assigned to a VM.
In case of a power failure to any server rack, only one application instance
will fail. Option B is the correct answer.

Note: A load balancer will probe the instance health to redirect the traffic
between different application tiers.

If you place all 9 VMs in a single availability set, there is a possibility that similar application components required to run the application line up in the same fault domain.
Option C is also incorrect.

Using solutions in options A and C, you can still achieve the desired
architecture, but there is an element of risk if the components are not
deployed in a certain way. Option B removes that element of risk.

Finally, if you place a VM in each availability set, there is a 100% chance that all VMs are placed in the same fault domain, because the first VM in every availability set lands in fault domain 0. So, in case of a power failure in fault domain 0, all VMs will go down.
Option D is incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/availability-set-overview

Resources
Architect your solution with availability sets
Domain
Deploy and manage Azure compute resources

Question 57
The New-AzSubscriptionDeployment and the New-
AzResourceGroupDeployment PowerShell commands deploy resources to
the subscription and the resource group, respectively. Given below are
two statements based on these commands. Select Yes if the statement is
correct. Else select No.

No, Yes
Yes, No
Correct answer
No, No
Yes, Yes
Overall explanation
Recall from practice test 1 (Related lecture video title: Select resource
type and PowerShell command to deploy the ARM template) where we
create a deployment resource using the resource type
Microsoft.Resources/deployments in the ARM template to deploy
resources at a different scope from that defined by the PowerShell
command.

Honestly, I like to think of the deployment resource as an enabler for traversing the deployment scopes, helping us create resources at different scopes.
So, irrespective of the New-AzSubscriptionDeployment or the New-
AzResourceGroupDeployment PowerShell commands that create a
deployment resource at the subscription and the resource group scope
respectively, you can define a deployment resource in a template to
navigate to the desired scope and deploy the resources at that scope.

Statement 1:

Use the New-AzResourceGroupDeployment command to deploy resources to a subscription scope by making the following changes to your ARM template (a minimal sketch follows the list):

1. Specify any subscription ID in your tenant to scope your deployment to the subscription scope. Just remember that using the Microsoft.Resources/deployments resource type to create a deployment resource at the subscription scope in an ARM template is equivalent to running the New-AzSubscriptionDeployment PowerShell command.

2. Provide the location in the ARM template since this command also
requires a location to store the deployment data.

3. Provide the content for the nested template, which will be deployed at
the subscription scope.
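
For illustration, a minimal sketch of such a deployment resource (the apiVersion, names, location, and target subscription ID are placeholders; the nested template simply creates a resource group at the subscription scope):

"resources": [
  {
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2022-09-01",
    "name": "nestedSubscriptionDeployment",
    "location": "eastus",
    "subscriptionId": "<target-subscription-id>",
    "properties": {
      "mode": "Incremental",
      "template": {
        "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
          {
            "type": "Microsoft.Resources/resourceGroups",
            "apiVersion": "2022-09-01",
            "name": "rg-created-at-subscription-scope",
            "location": "eastus"
          }
        ]
      }
    }
  }
]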

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group?tabs=azure-powershell#scope-to-subscription

https://learn.microsoft.com/en-us/powershell/module/az.resources/new-
azdeployment?view=azps-9.3.0
So, statement 1 -> No.

Statement 2:

Use the New-AzSubscriptionDeployment command to deploy resources to a resource group scope by making the following changes to your ARM template:

1. Specify the resource group to scope your deployment.

2. Since using the Microsoft.Resources/deployments resource type to create a deployment resource at the resource group scope in an ARM template is equivalent to running the New-AzResourceGroupDeployment PowerShell command, we need to make sure we specify all the required parameters for the deployment, like the resource group and the content for the child template.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-subscription?tabs=azure-cli#scope-to-resource-group

https://learn.microsoft.com/en-us/powershell/module/az.resources/new-
azresourcegroupdeployment?view=azps-9.3.0

So, statement 2 -> No.

Therefore, we can use both commands to deploy resources across any deployment scope. Option C is the correct answer.
Resources
Use PowerShell deployment commands to deploy ARM templates at
different scopes
Domain
Deploy and manage Azure compute resources
Question 58
Shown below is the folder structure of an Azure App Service app.

The log files consume a huge space, and it is not necessary to back them
up. So, you need to configure backup such that all files and folders except
the LogFiles folder are backed up on an hourly basis.

Based on the given information, answer the below two questions:


Correct answer
Custom, Upload a _backup.filter file
Automatic, Upload a _backup.filter file
Automatic, Create a backup policy
Custom, Create a backup policy
Overall explanation
Short Answer for Revision:

Partial backups can only be configured for custom backups by uploading a _backup.filter file to the site/wwwroot directory.

Detailed Answer:

Question 1:

You can configure partial backups to exclude specific files and folders.
However, partial backups are supported only for custom backups. So,
Question 1 -> Custom.

Question 2:

To exclude the log files folder, navigate to your App Service app’s companion app, called the Kudu app, at https://<app-name>.scm.azurewebsites.net/

From there, choose either the Command line or the PowerShell debug
console.
But, before understanding how to exclude a folder in the backup process,
let’s first analyze the contents of a full backup stored in the storage
account.

To do so, download the zip file of any full backup. When you extract the
folder, you can view all the folders and files that you saw earlier in the
Kudu app.

So, let’s go back to the Kudu app and proceed to exclude the folder from
the backup process. In the Kudu app, navigate to Site -> wwwroot and
create a _backup.filter file.
In the file, add all directories and files that you want to exclude from the
custom backup in each line. For example, to exclude the LogFiles folder
from backups, add the below entry in the backup filter file.
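As a minimal sketch, the file can also be created from the Kudu PowerShell debug console; the path assumes the default App Service layout, where entries in _backup.filter are relative to D:\home:

# Exclude the LogFiles folder from all subsequent custom backups.
Set-Content -Path 'D:\home\site\wwwroot\_backup.filter' -Value '\LogFiles'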

Now, when the next custom backup runs, you will not see the log file
folder in the backup.

So, Question 2 -> Upload a _backup.filter file. Option A is the correct answer.
For Azure App Service backups, there is no concept of a backup policy. So,
options C and D are incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-
backup?tabs=portal#configure-partial-backups

https://learn.microsoft.com/en-us/azure/app-service/resources-kudu

GitHub Repo Link: Partial backups in Azure App Service

Resources
Partial backups in Azure App Service
Domain
Deploy and manage Azure compute resources

Question 59
In an Azure subscription, a team has created the following two resources
in the rg-prod-01 resource group:

Select and order the steps you would perform before deleting the
resource group.
Delete the Private DNS zone

Delete the Read-only lock

Correct answer
Delete the Read-only lock

Delete the linked virtual network

Delete the Private DNS zone

Delete the linked virtual network

Delete the Read-only lock

Delete the Private DNS zone

Overall explanation
Since the virtual network is linked to the Private DNS zone, you cannot
delete the Private DNS zone without first deleting the linked/nested
resources. If you try to do so, you will get this below error:

Reference
Link: https://learn.microsoft.com/en-us/cli/azure/network/private-dns/
zone?view=azure-cli-latest#az-network-private-dns-zone-delete

Options A and C are incorrect.


Option D is incorrect for the same reason. Removing the read-only lock on the virtual network only enables the virtual network to be deleted but does not actually delete it. Since the second step is to delete the Private DNS zone, you will get the same above error as the VNet is still intact.

The correct order is to first delete the read-only lock on the VNet and
delete the linked virtual network. With no virtual network links, you can
remove the Private DNS zone resource when you delete the resource
group (check the related lecture video).

Option B is the correct choice.
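A hedged PowerShell sketch of that order (the lock, virtual network, and zone names below are hypothetical):

# 1. Remove the read-only lock from the virtual network.
Remove-AzResourceLock -LockName 'vnet-readonly-lock' -ResourceGroupName 'rg-prod-01' -ResourceName 'vnet-prod-01' -ResourceType 'Microsoft.Network/virtualNetworks' -Force

# 2. Delete the linked virtual network.
Remove-AzVirtualNetwork -Name 'vnet-prod-01' -ResourceGroupName 'rg-prod-01' -Force

# 3. With no linked virtual network, the Private DNS zone can be deleted
#    (or removed along with the resource group).
Remove-AzPrivateDnsZone -Name 'contoso.internal' -ResourceGroupName 'rg-prod-01'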

Note: In a real-world scenario, the best way to delete this resource group
would be to:

1. Delete the read-only lock

2. Delete the resource group

To test the correct order of deleting resources, I have not mentioned the
option Delete the resource group . Else, the question becomes easy.

Tips for the exam:

Some other important order of deletion of resources to know:

1. Cannot delete the Recovery Services Vault before deleting the VM backups in them.

2. Cannot delete a Network Security Group before disassociating it from the Network Interface.

3. Cannot delete a virtual network/subnet without deleting the assigned network interface.

Resources
Order of deletion of resources (with locks)
Domain
Manage Azure identities and governance
Question 60
In your Microsoft Entra ID tenant, you add the below users as members of
the Azure administrative units IT Dept and HR Dept.

Further, you add the below security group to the IT Dept administrative
unit.

You grant a user (admin one) the Password Administrator role to delegate access to reset passwords for users only in the IT Department, as shown below.

Which of the following users' passwords can admin one reset?

poc three and poc four only


poc one , poc two , and poc five only
poc five only
Correct answer
poc one and poc two only
All users' passwords
Overall explanation
Since admin one is granted the password administrator role only for the IT
Dept administrative unit, he can reset passwords only for users added as
members in that administrative unit.

This means, he can reset passwords for users poc one and poc two , who
are members of the IT Dept administrative unit.

He cannot do a password reset for other users, poc three and poc four ,
who are members of the HR Dept administrative unit.
Options A and E are incorrect.

Further, admin one can only manage the properties of the group added in
the administrative unit, not the individual members of the group. So, he
can manage group properties that his role would allow, just that resetting
a password isn’t much relevant in the context of a group.

But since the user poc five is not added directly to the IT
Dept administrative unit, admin one cannot reset his password,
although poc five is a member of the group that's added to
the IT Dept administrative unit.

Reference Link: https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/administrative-units#groups

So, options B and C are incorrect. Therefore option D is the correct answer.

Resources
Reset password for an Azure Administrative Unit
Domain
Manage Azure identities and governance

Question 61
You receive a requirements document from your client to create ten Azure
storage accounts with the following details.

Just before you deploy the storage account, you receive a communication
to update the Account Kind of the storage accounts from StorageV2 to
BlockBlobStorage in the document.

While creating storage accounts in the Azure portal, which two other
property values (from the table) also require a change?

Correct selection
Redundancy
Hierarchical namespace
SkuName
Correct selection
Performance
Overall explanation
We don’t set the Account Kind property while creating the storage
account resource in the Azure portal. To create a storage account with
the Account Kind as StorageV2, we select Performance as Standard,
which creates a standard, general-purpose V2 account.

Now, to create storage accounts with the Account Kind BlockBlobStorage, which is a premium account type, set the Performance to Premium and the Premium account type to Block blobs.

Since the requirement change affects the Performance property of the storage account, Option D is one of the correct answers.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-overview

Only the standard storage accounts support all six replication types: LRS,
GRS, RA-GRS, ZRS, GZRS, and RA-GZRS. Premium storage accounts
support only LRS and ZRS. Since the initial replication type RAGRS is not
supported for premium accounts, the update affects the storage account's
redundancy too.
Option A is the other correct answer.

The SkuName is used as a parameter in Azure PowerShell while creating the storage account.

The SkuName parameter combines the property values of both Performance and Redundancy. Since in the Azure portal we separately select the values for Performance and Redundancy, the SkuName property is not affected. Option C is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-create?tabs=azure-powershell#create-a-storage-account-1
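For example, a hedged PowerShell sketch (account and resource group names are hypothetical) of how -Kind and -SkuName map to the portal's Performance and Redundancy selections:

# Original requirement: standard general-purpose v2 account with RA-GRS redundancy.
New-AzStorageAccount -ResourceGroupName 'rg-dev-01' -Name 'strdevgpv2' -Location 'eastus' -Kind 'StorageV2' -SkuName 'Standard_RAGRS'

# Updated requirement: premium block blob account, which supports only LRS or ZRS.
New-AzStorageAccount -ResourceGroupName 'rg-dev-01' -Name 'strdevblockblob' -Location 'eastus' -Kind 'BlockBlobStorage' -SkuName 'Premium_LRS'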

The hierarchical namespace is available for both standard, general-purpose v2 accounts and premium block blob storage accounts. So, the change in Account Kind will not have any impact on the hierarchical namespace property.

Resources
Properties affected by the change in account kind in Azure storage
account
Domain
Implement and manage storage

Question 62
You configure the self-service password reset (SSPR) policy for the
following users in your organization:
The policy has the below authentication methods defined.
Based on the given information, select the correct answer choice for the
below two statements:
Only User One , 5
Only User One , User Two & User Three , 4
Correct answer
Only User One and User Three , 5
Only User One , 6
Only User One and User Two , 0
Overall explanation
Statement 1:

To modify the self-service password reset policy, you need an account with either global administrator or authentication policy administrator permissions. That is, only User One and User Three can modify the SSPR policy, like changing the authentication methods or adding predefined or custom security questions.
Reference Link: https://learn.microsoft.com/en-us/azure/active-
directory/authentication/tutorial-enable-sspr#enable-self-service-
password-reset

So, question 1 -> Only User One and User Three .

Statement 2:

User Four needs to answer five questions from a total of six questions to set up self-service password reset. But when he resets the password, he has to answer only four (check the related lecture video).

So, question 2 -> 5.

Option C is the correct answer.


Resources
Configuring self-service password reset (SSPR) policy
Domain
Manage Azure identities and governance

Question 63
Given below is a custom Azure RBAC role for managing storage accounts.

{
  "properties": {
    "roleName": "Cust role",
    "description": "",
    "roletype": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Authorization/*/read",
          "Microsoft.Resources/deployments/*",
          "Microsoft.Resources/subscriptions/resourceGroups/read",
          "Microsoft.Storage/storageAccounts/*",
          "Microsoft.Support/*"
        ],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/storageAccounts/*"
        ],
        "notDataActions": []
      }
    ]
  }
}

Answer the below two questions based on the custom role definition:
Actions, assignableScopes
NotDataActions, roleType
Correct answer
NotActions, assignableScopes
DataActions, DataActions
Overall explanation
<<Refer to the question in Practice Test 1 (Related lecture video
title: Create a custom Azure RBAC role - 1 ) to understand control plane
and data plane operations>>

Question 1:

"Microsoft.Storage/storageAccounts/*" in the DataActions section has


permissions to access data in the storage account.
Microsoft.Storage/storageAccounts/* in the Actions section includes all the
control plane operations on the storage account, including reading the
access keys.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-
access-azure-active-directory#azure-built-in-roles-for-blobs

So, to ensure the custom role doesn’t have permission to read the access
keys, add Microsoft.Storage/storageAccounts/listkeys/action permission in
the NotActions section.

When the user tries to read the access keys, he will get the below error.
Since you need to modify the NotActions section, question 1 ->
NotActions.

Question 2:

The assignableScopes section lists the scopes where the custom role is available for assignment. In the given JSON, the custom role is available for assignment across the entire subscription. To ensure the custom role is available only at the rg-dev-01 scope, modify the assignableScopes section.

Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-definitions#assignablescopes
So, question 2 -> assignableScopes. Option C is the correct answer.
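A hedged PowerShell sketch of both modifications (the subscription ID comes from the role definition above; rg-dev-01 is the resource group named in the question):

$role = Get-AzRoleDefinition -Name 'Cust role'

# Question 1: block reading of the access keys via the NotActions section.
$role.NotActions.Add('Microsoft.Storage/storageAccounts/listkeys/action')

# Question 2: make the role assignable only at the rg-dev-01 scope.
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-01')

Set-AzRoleDefinition -Role $role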

Resources
Modify the custom role definition
Domain
Manage Azure identities and governance

Question 64
To move unstructured data from traditional Network-attached storage (NAS) devices to the cloud, you create a file share in an Azure storage account.

Which of the following is NOT the correct way to connect to the file share
from a Windows device?

From file explorer, map the network drive


Copy-paste the UNC path directly into the file explorer
Correct answer
Run the net share command to mount the drive
Run the PowerShell script provided by Azure File share
Overall explanation
<<This is a NOT question>>

To mount the Azure file share using the storage account key on a
Windows device, copy the PowerShell script provided by Azure Files.
Since the Windows OS uses the SMB protocol to connect to the file share,
the Test-NetConnection command checks if the file share can be accessed
over port 445, which is the port the SMB uses.

If the connection and the mount operation are a success, you will see this
message and the file share connection on your device.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#mount-the-azure-file-share

Option D is incorrect.
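As a rough, hand-written equivalent of that script (the storage account, share, and drive letter are hypothetical; the script generated in the portal is the authoritative version):

# Verify SMB traffic can reach the file share endpoint over port 445.
Test-NetConnection -ComputerName 'strdev011.file.core.windows.net' -Port 445

# Mount the share as drive Z: using the storage account name and key as the credential.
$key = ConvertTo-SecureString '<storage-account-key>' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('localhost\strdev011', $key)
New-PSDrive -Name 'Z' -PSProvider FileSystem -Root '\\strdev011.file.core.windows.net\share01' -Credential $cred -Persist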

You can also copy just the UNC path from the script and map the network
drive from your Windows device.

Right-click on This PC (in Windows 10) and select Map network drive .

Paste the copied UNC path, and click Finish. This is another way to mount
Azure file share from your device.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#mount-the-azure-file-share-with-file-explorer

Option A is also incorrect.

Rather than mounting a drive, you can also directly paste the previously
copied UNC path in the file explorer to access the file share.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#access-an-azure-file-share-via-its-unc-path

Option B is incorrect.
We use the Net share command to share folders from the command line,
not to access a shared folder. The Net use command is more suitable for
accessing/mounting the shared folder.

Reference
Link: https://learn.microsoft.com/en-us/previous-versions/windows/it-
pro/windows-server-2012-r2-and-2012/hh750728(v=ws.11)#examples

https://techcommunity.microsoft.com/t5/fasttrack-for-azure/mapping-a-
network-drive-to-an-azure-file-share-using-domain/ba-p/2220773

Option C is the correct answer.

Resources
Connect to Azure file share from a Windows device
Domain
Implement and manage storage

Question 65
You have the following resources created in the rg-dev-01 resource group.
User One is already assigned the below custom role at
the Dev subscription scope.

{
  "properties": {
    "roleName": "Cust role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Resources/*/read"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
Which additional permission would you add to the Actions section of the
custom role to ensure User One can read the following resources in the
Azure portal?

None, None
Microsoft.Authorization/*/read , Microsoft.Authorization/*/read
Correct answer
Microsoft.Storage/*/read , Microsoft.Compute/*/read
Microsoft.Portal/*/read , Microsoft.Subscription/*/read
Overall explanation
Microsoft.Resources resource provider maps to the Azure Resource
Manager service, which is the deployment and management layer in
Azure. So, Microsoft.Resources/*/read provides read permissions to
resource types like subscriptions, tags and resource groups, etc., but not
to any individual Azure resources like Storage accounts or virtual
machines.
The current custom role cannot provide read access to any resource.
Option A is incorrect.

Curious about the error message displayed by the Azure portal when
accessing the storage account, I copy-pasted the Azure storage account
link (from a user who has access to the resource) into the browser
window of User One. We get the below error that the user does not
have Microsoft.Storage/storageAccounts/read permission on the
resource.
Similarly, we need Microsoft.Compute/virtualMachines/read permissions
on the resource to access the VM.

Once we add these permissions to the Actions section in the custom role, User One can access the VM and the storage account (check the related lecture video).

So, Storage account -> Microsoft.Storage/*/read

Virtual machine -> Microsoft.Compute/*/read

Option C is the correct answer.
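If you want to see exactly which control-plane operations those wildcard strings expand to, one hedged way is to query the provider operations with Azure PowerShell (the cmdlet accepts wildcard search strings):

# List the read operations covered by each wildcard permission.
Get-AzProviderOperation -OperationSearchString 'Microsoft.Storage/*/read'
Get-AzProviderOperation -OperationSearchString 'Microsoft.Compute/*/read'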

Microsoft.Authorization resource provider lets you access resources like role assignments, policy assignments, and others. You can check all the permissions in the below list.

Adding Microsoft.Authorization/*/read permission doesn’t enable access to individual resources.

Option B is incorrect.

Microsoft.Portal resource provider maps to the Azure portal. It provides resource types like dashboards, etc.
As the name indicates, Microsoft.Subscription resource provider
provides access to subscriptions and other related items.

They both do not grant access to resources. Option D is incorrect.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-services-resource-providers
Resources
Custom role permissions to read storage account & VM
Domain
Manage Azure identities and governance

Question 66
A company requires that all users' devices that access corporate
resources be joined/registered to Microsoft Entra ID. An employee already
has his corporate Windows 10 & macOS laptop joined/registered to
Microsoft Entra ID. But he is unable to register his personal, iOS-based
mobile device, signed in with a local account, to Microsoft Entra ID.

Which of the following could be the reason?

The admin has prevented users from registering personal devices


with Microsoft Entra ID
iOS devices cannot be registered with Microsoft Entra ID
Correct answer
The limit of maximum devices per user is reached
Devices signed in with local accounts cannot be registered with
Microsoft Entra ID
Overall explanation
The difference between Microsoft Entra registered and joined devices is:

1. Registered devices are user-owned devices (BYOD scenario) that support iOS and Android OS, logged in with a local user account like a Microsoft account.

2. Microsoft Entra joined devices are organization-owned devices that support only Windows 10 and 11, logged in using organizational accounts.

Reference Link: https://learn.microsoft.com/en-us/azure/active-directory/devices/concept-azure-ad-register

https://learn.microsoft.com/en-us/entra/identity/devices/concept-directory-
join

From the above understanding, we can conclude that options B and D are
incorrect.

Option A is incorrect too, as the user has already registered the corporate
macOS laptop with Microsoft Entra ID. If the administrator has prevented
the user from registering the device, he wouldn’t be able to register his
macOS device in the first place.
Further, there is no separate option to disable users from registering their
personal devices with Microsoft Entra ID.

Option C is the correct choice. A Microsoft Entra ID admin can set the Maximum number of devices per user limit under device settings.

If a user tries to register a device beyond the specified limit, he will not be
allowed to do so. He either needs to request the admin to increase the
device limit per user or delete one of his registered/joined devices before
registering the new device.

Reference
Link: https://learn.microsoft.com/en-us/troubleshoot/azure/entra/entra-
id/ad-dmn-services/maximum-number-of-devices-joined-workplace
Resources
Register personal iOS mobile device with Microsoft Entra ID
Domain
Manage Azure identities and governance

Question 67
You have servers in your on-premises data center running Windows 10 Pro. To transfer the data from the servers to an Azure Storage account, you attach an external disk to the server.

Select and order the steps you will perform to copy the data to Azure Blob
storage using the Azure Import/Export service.

Prepare drives for the import job with the WAImportExport tool

Create an import job in the Azure Import/Export service

Data is transferred from the hard drives to the Azure Storage


account

Update the job with tracking information

Verify if data is successfully uploaded to Azure

Correct answer
Prepare drives for the import job with the WAImportExport tool

Create an import job in the Azure Import/Export service

Ship the hard drives to the Azure data center

Update the job with tracking information

Verify if data is successfully uploaded to Azure


Prepare drives for the import job with the WAImportExport tool

Create an import job in the Azure Import/Export service

Ship the hard drives to the Azure data center

Data is transferred from the hard drives to the Azure Storage


account

Verify if data is successfully uploaded to Azure

Create an import job in the Azure Import/Export service

Prepare drives for the import job with the WAImportExport tool

Ship the hard drives to the Azure data center

Update the job with tracking information

Verify if data is successfully uploaded to Azure

Prepare drives for the import job with the WAImportExport tool

Create an import job in the Azure Import/Export service

Ship the hard drives to the Azure data center

Update the job with tracking information

Data is transferred from the hard drives to the Azure Storage


account

Overall explanation
The first step in importing the data to Azure is to prepare the drives.
Preparing the drive means encrypting the drive, and copying data using
the WAImportExport tool that ultimately creates a journal file, which
stores information such as the drive serial number, encryption key, and
storage account details.

Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-1-prepare-
the-drives

So, box 1 -> Prepare drives for the import job with the WAImportExport
tool.
In the next step, you create an import job using the Azure Import/Export
service to transfer the data to the target storage account.

In the import job, you upload the journal file created in the previous step.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-2-create-an-
import-job

So, box 2 -> Create an import job in the Azure Import/Export service.

After creating the job, you ship the encrypted disk drives containing your
data to an Azure data center using carriers like FedEx.

Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-4-ship-the-
drives

So, box 3 -> Ship the hard drives to the Azure data center.

After you ship the drives, it is important to update the job with tracking
information, or else the job will expire. So, return to the job and provide
key information like carrier and tracking number.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-5-update-
the-job-with-tracking-information

So, box 4 -> Update the job with tracking information.

After Microsoft receives the disks, they process them and transfer the
data to your storage account. So, the step Data is transferred from the
hard drives to the Azure Storage account is performed by Microsoft,
and not you. Since this step is not one of the steps you need to perform,
all options A, C and E are incorrect.

Once the data is transferred to the storage account, verify the data from
your end.

Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-6-verify-
data-upload-to-azure

So, box 5 -> Verify if data is successfully uploaded to Azure


Option D is incorrect because creating the job requires the journal file,
which is an output of the WAImportExport tool from the drive preparation
step. So, you create the job only after preparing the drive.

Option B is the correct answer.

Resources
Copy data to Azure Blob storage using the Azure Import/Export service
Domain
Implement and manage storage

Question 68
A SaaS company has to store the customer’s data in Azure Storage
account in a multi-tenant fashion.

Which of the following Storage account features would you employ to ensure a secure boundary of customer A’s data from that of customer B?

Correct answer
Encryption scope
Shared access signatures
Immutable storage
Lifecycle management policies
Overall explanation
By default, the encryption key that encrypts the data at rest in your Azure
Storage account is scoped to the entire account. In other words, all data in
the Azure Storage account is encrypted with a single key.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
service-encryption#about-encryption-key-management

Encryption scope enables you to specify different encryption keys for different containers/blobs in a Storage account. Using encryption scope, you can encrypt different customers’ data with different encryption keys to ensure a secure data boundary.

For example, let’s suppose that the SaaS company has two customers. We
can create a key for each customer and store them in Azure Key Vault.
In the Azure Storage account, we create two encryption scopes, scope1,
and scope2, using keys key1 and key2, respectively, for encryption.

Finally, we can store the data related to a customer in their respective blob containers. While creating the containers, specify an encryption scope for each container.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/encryption-
scope-overview

Rather than using a single key vault for storing keys of all the customers
as shown above, you can provision and grant access to a key vault for
each customer so they have complete control of the keys that encrypt
their data.
So, encryption scopes ensure a secure boundary of customer A’s data
from that of customer B. Option A is the correct choice.
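A hedged PowerShell sketch of that setup (the resource names and key URI are hypothetical):

# Encryption scope for customer A, backed by customer A's own Key Vault key.
New-AzStorageEncryptionScope -ResourceGroupName 'rg-saas-01' -StorageAccountName 'strsaas01' -EncryptionScopeName 'scope1' -KeyvaultEncryption -KeyUri 'https://kv-customer-a.vault.azure.net/keys/key1'

# Container for customer A, pinned to that scope so every blob written to it uses key1.
New-AzRmStorageContainer -ResourceGroupName 'rg-saas-01' -StorageAccountName 'strsaas01' -Name 'customer-a' -DefaultEncryptionScope 'scope1' -PreventEncryptionScopeOverride $true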

All the other options are also features of the Azure Storage account that
support multitenancy.

For example, a shared access signature is helpful in a multi-tenant scenario by granting access to specific containers. But SAS is related to enabling client access to data rather than storing data.

Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#shared-access-signatures

Option B is incorrect.

Different customers may have different data storage needs. The immutable storage feature, as the name indicates, prevents deletion or modification of data.

You can create an immutable storage policy of type Time-based retention for a container to manage different data management needs for each customer.

The above policy wouldn't allow anyone to delete or modify data in the container until the retention period expires.

An immutable storage policy doesn't create a secure boundary for a customer's data. It is only related to managing data retention.

Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#immutable-storage

Option C is also incorrect.

A lifecycle management policy is used in a multi-tenant scenario to help you optimize costs by automatically moving blob data to other storage tiers.

For example, in the below scenario, if any of the customers is inactive for
three weeks, the lifecycle management policy helps to reduce storage
costs by changing the access tier of the blobs to cool storage.
Option D is incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#lifecycle-management

Resources
A multi-tenant feature for secure boundary of customer's data
Domain
Implement and manage storage

Question 69
You deploy a public load balancer along with other resources as given in
the below architecture:
The public load balancer defines:

a. A load balancing rule that maps the frontend IP configuration to the two
VMs in the backend pool.

b. A health probe that checks the health status of the VM instances in the
backend pool (via port 80).
The Network Security Group (nsg01) associated with the subnet defines
custom rules that allow traffic from only a specific client IP address to
reach the VMs in the backend pool.

You realize that the client IP is unable to access the web server VM. Which
of the following actions will you perform to enable access? Select two
options.

Correct selection
Add an inbound security rule with priority 105 to allow HTTP
traffic from the Azure Load Balancer
Add an inbound security rule with priority 103 to allow traffic
from Azure Load Balancer via port 443
Correct selection
Update the port of the inbound security rule with priority 110
from 80 to 443
Update both port/backend ports in the load balancing rule from
80 to 53
Overall explanation
Short Answer for Revision:

Before the Azure Load Balancer can direct the client requests to the
backend VM, the load balancer checks the health of the VM instances
using a health probe. Only if the VM instances are healthy, does the load
balancer direct the requests to the VM.
In this case, the security rule with priority 110 blocks all requests over
port 80 (also used by the load balancer's health probe). So health probe
checks fail.

To fix the issue, either change the port in the security rule with priority
110 (so the health checks go through) [option C] or update the port used
by the health probe (so, it bypasses the security rule). Another way would
be to add a security rule that overrides rule 110 (option A).

Detailed Explanation:

From the answer options, we can guess that the issue of client IP being
unable to access the web server VM is due to a misconfiguration in the
network security rules.

Before an Azure Load Balancer sends the traffic to the VMs in the backend
pool, it checks whether the VM instances in the pool are healthy and can
receive traffic. Only if the health probe doesn’t fail, does the load balancer
send client requests to the healthy VMs.

From the question, the port used by the load balancer's health probe to
check the health of VM instances is 80.

Usually, the default security rule AllowAzureLoadBalancerInBound allows all requests from the load balancer to the destination.
But this default rule is partly overridden by the custom security rule with
priority 110. And guess what? The rule blocks all traffic over port 80.

So although one of the custom rules in the NSG allows the client IP to
access the VMs (via the load balancer's frontend IP), the other custom rule
in the NSG prevents the health probe from checking the health of the
VM instances. Since the load balancer is unable to check the health of the
VM instances, it assumes that the instances are not healthy and doesn't
direct any request to the VM. That's the reason why the client IP is unable
to access the web server VM.

To make things work, you can update the port of the custom security rule
with priority 110 from 80 to any other port, for example, 443.

This ensures that the health probe operation by the load balancer is not
affected. Only after the health probe detects a healthy instance can the
client IP access the web server VM in the backend pool.

Option C is one of the correct answers.

Instead of updating the security rule, the other way to make things work
would be to change the port the health probe uses to detect a healthy
instance. Since the security rule with priority 110 blocks only port 80,
updating the port of the health probe to port, for example, say, 53, can
still enable the client IP to access the VM as the health probe checks go
through.

Note: 53 is the default port for the DNS server. Before you can do the
operation, you should turn on the DNS role for the VMs using a PowerShell
command. Refer to the end for more details.
However, option D doesn’t update the port of the health probe. It just
updates the port the load balancer uses to communicate with the VM.
Performing this change will not allow the client IP to access the VM. Option
D is incorrect.

Since we understand the issue is with the Azure Load Balancer’s health probe, we can also add an inbound security rule with a higher priority (105) than that of the deny rule (110) to allow HTTP traffic from the load balancer. After this rule is created, the client IP can access the VM. Option A is the other correct answer.
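For example, a hedged sketch of that rule with Azure PowerShell (nsg01 is from the question; the resource group name is hypothetical):

# Allow the health probe traffic (AzureLoadBalancer service tag) to reach port 80.
$nsg = Get-AzNetworkSecurityGroup -Name 'nsg01' -ResourceGroupName 'rg-dev-01'
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'AllowAzureLBProbeHTTP' -Priority 105 -Direction Inbound -Access Allow -Protocol Tcp -SourceAddressPrefix 'AzureLoadBalancer' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' | Set-AzNetworkSecurityGroup
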
Recall that the health probe uses port 80. So, if you are adding a new inbound security rule, the rule needs to allow traffic from the Azure Load Balancer over port 80. Allowing traffic over port 443 will not have any effect. Option B is incorrect.

Note: Sometimes you need to change browsers or restart VMs to see the
desired output.

Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
health-probes

https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-
custom-probe-overview#probe-source-ip-address

Run this PowerShell command to turn on the DNS Server role for both VMs:

## So that the DNS server is running to respond to health checks by the load balancer.
Install-WindowsFeature -Name DNS -IncludeManagementTools

GitHub Repo Link: Resolve connections to VMs in a load balancer's backend pool

Resources
Resolve connections to VMs in a load balancer's backend pool
Domain
Implement and manage virtual networking

Question 70
Given below are four storage accounts in an Azure subscription.
For these storage accounts, answer the below questions related to storage
redundancy.

Only strdev011 and strdev012, Only strdev011


All, Only strdev011 and strdev012
Correct answer
Only strdev011, All
All, All
Overall explanation
The easiest way to understand the storage redundancy options offered by
different storage account types is to check in the Azure portal.
For standard, general-purpose v2 storage accounts,

1. Azure provides all four main types of redundancy options: LRS, GRS,
ZRS, and GZRS.

2. Further, for both GRS and GZRS, where data is replicated to a secondary region, selecting the checkbox (below the Redundancy dropdown) ensures that we can spin off two additional redundancy options: RA-GRS and RA-GZRS, that offer read-only access to data in the secondary region (even before any failover occurs).

So, strdev011 offers all six possible types of storage redundancy.

But all the premium storage accounts (Premium account types: block blobs, file shares, and page blobs) support only LRS and ZRS. That is, they do not support any geo-replication.
So, RA-GRS is supported only by the standard, general-purpose storage
account, strdev011. Statement 1 -> Only strdev011.

While ZRS is supported by all the storage account types. Statement 2 ->
All.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#supported-storage-account-types

Option C is the correct answer.

Resources
Storage redundancy for Azure Storage accounts
Domain
Implement and manage storage

Question 71
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.

You need to create an Azure Storage account that meets the following
requirements:
1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.

2. Require high availability in the primary region.

3. Data is available even in the event of a data center failure.

4. The solution is optimized for cost.

Solution: You create the storage account with the redundancy setting
ZRS (Zone-redundant storage).

Does the solution meet the stated goal?

Yes
Correct answer
No
Overall explanation
For your Azure Storage account, you can configure any of the six available redundancy settings: LRS, ZRS, GRS, GZRS, RA-GRS, and RA-GZRS.

There are one or more data centers in an availability zone. In Zone-redundant storage (ZRS), the data copies are replicated in three data centers, each in a different availability zone.

Reference
Link: https://learn.microsoft.com/en-us/azure/reliability/availability-zones-
overview#availability-zones

https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#zone-redundant-storage

In the event of a data center failure, the data copy in only one of the
availability zones is unavailable. So, ZRS ensures that data is available
even in the event of a data center failure. So, requirement 3 is satisfied.

But since all the data centers are located in a single region, ZRS doesn’t
protect your data from regional outages like a tsunami. Since requirement
1 is not satisfied, the ZRS redundancy setting doesn’t meet the stated
goal.
Option No is the correct answer.

Resources
Create a storage account with the appropriate redundancy setting - 1
Domain
Implement and manage storage

Question 72
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You need to create an Azure Storage account that meets the following
requirements:

1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.

2. Require high availability in the primary region.

3. Data is available even in the event of a data center failure.

4. The solution is optimized for cost.

Solution: You create the storage account with the redundancy setting
GRS (Geo-redundant storage).

Does the solution meet the stated goal?

Yes
Correct answer
No
Overall explanation
From the image in the previous question, it is evident that except LRS, all
other storage redundancy settings, including GRS, satisfy requirement 3.

Since geo-redundant storage asynchronously replicates your data to a secondary region, it protects your data from regional events like a tsunami. So, GRS satisfies requirement 1.

Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#geo-redundant-storage

In the primary region, GRS is very similar to LRS. Since all storage copies
are located in the same data center in the primary region, GRS doesn’t
provide high availability in the primary region. For satisfying requirement
2, a storage redundancy that uses availability zones in the primary region
like, ZRS, GZRS, or RA-GZRS is more suitable.
Option No is the correct answer.

Resources
Create a storage account with the appropriate redundancy setting - 2
Domain
Implement and manage storage

Question 73
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You need to create an Azure Storage account that meets the following
requirements:

1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.

2. Require high availability in the primary region.

3. Data is available even in the event of a data center failure.

4. The solution is optimized for cost.

Solution: You create the storage account with the redundancy setting
RA-GZRS (Read-access geo-zone-redundant storage).

Does the solution meet the stated goal?

Yes
Correct answer
No
Overall explanation
We already concluded that all storage redundancies except LRS satisfy
requirement 3. And all storage redundancies except LRS and ZRS satisfy
requirement 1.

After considering requirement 2, we have only two options left: GZRS and
RA-GZRS.

Between them, GZRS is a more cost-optimized solution (requirement 4).


Given below is a chart that displays Azure blob storage pricing for storing
more than 500 TB/month in a flat file structure in the Central US location.
For any Azure region, GZRS is more cost-efficient than RA-GZRS. So, RA-
GZRS doesn’t satisfy requirement 4.

For the given set of requirements, choosing the GZRS redundancy solution
is the best answer. Option No is the correct answer.

Reference
Link: https://azure.microsoft.com/en-in/pricing/details/storage/blobs/

Note: The given pricing order (in the above chart) is true for most Azure
regions, except for East US, where RAGRS is more costly than RAGZRS.
Nevertheless, even in the East US region, GZRS is a more cost-efficient
solution than RAGZRS. So, for the given set of requirements, this solution
holds true for any Azure region.

Resources
Create a storage account with the appropriate redundancy setting - 3
Domain
Implement and manage storage

Question 74
You have four Azure Storage accounts in your subscription.
In which of the storage accounts can you move a blob to an archive tier?

Only strdev011
Correct answer
Only strdev011 and strdev012
Only strdev011 and strdev013
All the storage accounts
Overall explanation
Storage accounts that support zone redundancy do not support moving
blobs to the archive tier. That is, all storage accounts that are configured
with the replication types, like ZRS, GZRS, and RA-GZRS, do not support
moving data to an archive tier.

So, both strdev013 and strdev014 do not support the archive tier.
strdev011 and strdev012 do not support zone redundancy. So, they
support moving blobs to an archive tier.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-
overview#archive-access-tier

Option B is the correct answer.

Resources
Move blobs to an archive tier
Domain
Implement and manage storage

Question 75
You have two virtual machines in your Azure subscription.

The virtual machine vm01 has two data disks attached to it.
Select and order the steps you would perform to attach the disk01 disk to
vm02, that results in minimal downtime for end users.

Correct answer
Start vm02

Detach disk01 from vm01

Attach disk01 to vm02


Detach disk01 from vm01

Start vm02

Attach disk01 to vm02

Stop vm01

Detach disk01 from vm01

Attach disk01 to vm02

Detach disk01 from vm01

Attach disk01 to vm02

Start vm02

Overall explanation
Since the disk type of both the disks is standard HDD, we cannot enable
shared disks. So, you have to detach the disk from vm01 before attaching
it to vm02.

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared#general-limitations
You can attach/detach a disk to/from a VM in any VM status. That is, you
don’t have to either stop the VM or start the VM to perform the attach disk
or detach disk operation.

So, option C is incorrect because stopping vm01 is not necessary to transfer the disk from vm01 to vm02. Further, since vm02 is in a deallocated state, it has to be started before end users can access its disk contents.

It is important to realize that the downtime associated with detaching disk01 from vm01 and attaching disk01 to vm02 is inevitable to achieve the desired objective. But we should try to minimize the impact of the downtime associated with starting vm02, which does take some time, probably a minute or so.

Option B is not the most optimal solution because after we detach the disk from vm01, the downtime begins. Although starting a VM and attaching a disk can happen simultaneously (they may not be sequential, as shown below), a better solution would completely avoid the impact of the delay caused by starting the VM, especially after the disk is detached.

Option D is also not an optimal solution for the same reason. Although we
can endlessly argue if starting a VM could cause more delay than
attaching a disk, the fact remains that starting the VM should be a
precursor to detaching the disk to prevent any easily avoidable downtime
to end users.
Option A is the most optimal solution. Here, we first start the VM, so vm02
is ready. After detaching the disk from vm01, downtime for end users
begins. However, unlike earlier cases, there is now no additional
downtime associated with starting the VM. Subsequently, we can attach
the disk to vm02, and end users can begin to access the disk.

Option A is the correct answer.
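A hedged PowerShell sketch of that order (the resource group name and LUN are hypothetical; the VM and disk names are from the question):

# 1. Start vm02 first so its start-up time does not add to the downtime.
Start-AzVM -ResourceGroupName 'rg-dev-01' -Name 'vm02'

# 2. Detach disk01 from vm01 (downtime for end users begins here).
$vm01 = Get-AzVM -ResourceGroupName 'rg-dev-01' -Name 'vm01'
Remove-AzVMDataDisk -VM $vm01 -DataDiskNames 'disk01'
Update-AzVM -ResourceGroupName 'rg-dev-01' -VM $vm01

# 3. Attach disk01 to vm02 (assumes LUN 0 is free on vm02).
$disk = Get-AzDisk -ResourceGroupName 'rg-dev-01' -DiskName 'disk01'
$vm02 = Get-AzVM -ResourceGroupName 'rg-dev-01' -Name 'vm02'
Add-AzVMDataDisk -VM $vm02 -Name 'disk01' -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
Update-AzVM -ResourceGroupName 'rg-dev-01' -VM $vm02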

Reference Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/attach-managed-disk-portal

https://learn.microsoft.com/en-us/azure/virtual-machines/windows/detach-
disk

Resources
Attach disk to an Azure VM
Domain
Deploy and manage Azure compute resources

Question 76
In the Dev subscription, there are three resource groups with the below
tags applied:
You create a resource in each resource group. Some of the resources are
also tagged.

Next, you assign the below policy to the Dev subscription scope.
After one week, you modify the access policy to grant access to all
cryptographic operations on the keys to the user principal in the key vault.

Analyze and answer which of the following tags apply to different resources:

kv-ravi-dev01 -> Team: Compliance and Dept: IT

Ip-pub-01 -> Dept: HR

Correct answer
kv-ravi-dev01 -> Team: Compliance and Env: Dev

Ip-pub-01 -> None

kv-ravi-dev01 -> Team: Compliance , Dept: IT and Env: Dev

Ip-pub-01 -> Env: Dev


kv-ravi-dev01 -> Team: Compliance

Ip-pub-01 -> Env: Dev and Dept: HR

Overall explanation
Let’s use the given information to arrange the resources in a hierarchy.

Unlike locks, resources do not inherit the tags applied to a resource group
or a subscription.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources?tabs=json#inherit-tags

Further:

1. When the policy is created, it will not have any effect on the three
existing resources.
2. Also, this built-in policy only affects the resources, not the resource
groups, as indicated by its name.

So, the IP address resource will not have any tag.

So, ip-pub-01 -> None.

But when the access policy of a key vault is modified to grant access to all
cryptographic operations on the keys to the user principal, the policy
detects the resource update and adds the tag as specified in the policy
definition.

So, in addition to the Team: Compliance tag, the Key Vault resource will
also have the Env: Dev tag.

So, kv-ravi-dev01 -> Team: Compliance and Env: Dev

Option B is the correct choice.


Resources
Analyze resource tags
Domain
Manage Azure identities and governance

Question 77
You have a Standard Azure Load Balancer with a backend pool. There are
four Azure VMs in your subscription. The VMs with different images are
deployed across VNets and availability zones as shown:

Which VMs can you add to the load balancer’s backend pool?

vm01 and vm04


vm01 and vm02
Correct answer
vm01 and vm03
All the VMs can be added to the backend pool
Overall explanation
Short Answer for Revision:

For Standard Azure Load Balancers -> All VMs added to the backend pool
must be in the same virtual network.

For Basic Azure Load Balancers (soon to be retired) -> All VMs added to
the backend pool must be in the same availability set / virtual machine
scale sets.
Detailed Explanation:

If you navigate to the backend pool of an Azure Load Balancer and try to
add VMs to the pool, you will realize that, first, you need to select a virtual
network.

So, for a Standard Azure Load Balancer, all VMs added to the backend
pool must be in the same virtual network. Depending on the chosen
virtual network, you can add either only vm01 and vm03 or only vm02
and vm04.
Option C is the correct answer.

Other factors, like the availability zones, availability sets, and the image of the VM, do not matter for a Standard SKU load balancer. So, all other options are incorrect.

Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/skus#skus

GitHub Repo Link: Add VMs to Azure Load Balancer's backend pool

Domain
Implement and manage virtual networking

Question 78
You have data on Product images, Product Inventory, and Customers
related to the business in your Azure Storage account. And Sales
transactions data is stored in the Cosmos DB. You would like to download
the data and host a copy of them in your on-premises servers.

Your colleague suggested you use the Azure Import/Export tool. Which of
the following business entities can you download from Azure using the
tool?

Only Customer data


Only Product images and Customer data
Correct answer
Only Product images
Only Product images, Customer and Sales data
Overall explanation
Downloading the data from Azure is the same as exporting the data from
Azure. So, we create an export job in the Azure Import/Export service.

You can export the data only from Azure blobs with the Azure
Import/export tool.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-requirements#supported-storage-types

So, you can download only the product images from the blob container.
Option C is the correct answer.

Resources
Download data with the Azure Import/Export tool
Domain
Implement and manage storage

Question 79
An organization (ravikirans.com) collaborates with organizations like:
contoso.com and fabrikam.com.
You need to ensure that your admins can add users from these partner
domains into your Microsoft Entra ID tenant but cannot add/invite users
from organizations like gmail.com, yahoo.com, and hotmail.com.

Which of the following settings would you configure?

Add custom domain


Correct answer
Manage external collaboration settings
Microsoft cloud settings
Cross-tenant access settings
Overall explanation
Short Answer for Revision:

In Manage external collaboration settings, you can restrict invitations to specific domains. For other unrestricted partner domains, collaboration is allowed.

Detailed Explanation:

In Microsoft Entra ID, under User Settings -> Manage external collaboration settings, you can allow or block invitations to B2B users from specific organizations.
Under Collaboration restrictions , you can deny invitations to domains
like gmail.com, hotmail.com, and yahoo.com.

Admins cannot send B2B invitations to these domains. When they try to
do so, it displays an error message.
But the blocklist ensures that invitations are sent to other domains
successfully.

Reference
Link: https://learn.microsoft.com/en-us/entra/external-id/allow-deny-list

Option B is the correct choice.

Cross-tenant access settings help to manage collaboration with external Microsoft Entra organizations. For managing collaboration with non-Microsoft Entra organizations like gmail.com, we use external collaboration settings.
Reference
Link: https://learn.microsoft.com/en-us/entra/external-id/cross-tenant-
access-settings-b2b-collaboration

Option D is incorrect.

Microsoft cloud settings enable you to collaborate with organizations from other Microsoft clouds like Microsoft Azure China, etc., not for collaboration with non-Microsoft Entra organizations. You can find this setting within cross-tenant access settings.
Reference
Link: https://learn.microsoft.com/en-us/entra/external-id/cross-tenant-
access-overview#microsoft-cloud-settings

Option C is incorrect.

You can add a custom domain (like ravikirans.com or contoso.com) to your Microsoft Entra ID tenant. This will ensure that you can create users using these domain names. But adding a custom domain will not prevent an admin from inviting users from other external domains like gmail.com.

Option A is incorrect.

Resources
Block user invitations from gmail.com
Domain
Manage Azure identities and governance

Question 80
You have deployed a pilot app in a Shared App Service plan. The client
requires you to enable two new features for the app:

1. Adding a custom domain

2. Collect traces to identify performance issues in the app


Which of the following two actions would you do?

Correct selection
Enable Application Insights Profiler
Set Collection level to Basic
Scale out the App Service plan
Correct selection
Scale up the App Service plan
Overall explanation
Short Answer for Revision:

Scale out - add more VM instances either manually or automatically.

Scale up - get a fatty VM (with more vCPU, RAM, Storage) with additional
features like custom domains.

Detailed Answer:

The Free and the Shared plans do not come with any features like adding
a custom domain, autoscaling, daily backups, and staging slots. So, you
need to scale up the App Service Plan to at least Basic to add custom
domains.

Option D is one of the correct answers.


Option C is incorrect as scaling out adds additional VM instances for
running your app.

Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-scale-
up
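A hedged sketch of the scale-up with Azure PowerShell (the plan and resource group names are hypothetical):

# Scale UP: move the plan from the Shared tier to Basic so custom domains become available.
Set-AzAppServicePlan -ResourceGroupName 'rg-dev-01' -Name 'plan-pilot-01' -Tier 'Basic' -WorkerSize 'Small'

# Scale OUT (for contrast only): this changes the instance count, not the available features.
# Set-AzAppServicePlan -ResourceGroupName 'rg-dev-01' -Name 'plan-pilot-01' -NumberofWorkers 3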

While creating an App Service, if you enable Application Insights, an Application Insights resource is created along with the app.
Within the app, in the Application Insights section, you can set the
Collection level to either Basic or Recommended. If you set the Collection
level as Basic, you cannot enable any of the features like Profiler,
Snapshot debugger, etc.,
If you set the Collection level to Recommended, you can enable Profiler,
which can collect traces to identify performance issues in the app.

So, option B is incorrect. Option A is the correct answer.

Resources
Scaling App Service Plan
Domain
Deploy and manage Azure compute resources

Question 81
You deploy a virtual network with two subnets in a resource group using
the below ARM template.
What would happen if you remove subnet1 and subnet2 from the ARM
template and redeploy the template to the resource group in a complete
deployment mode?

Both subnets will be removed


The resource manager will throw an error
Correct answer
Both subnets will remain intact
The VNet and the subnets will be removed
Overall explanation
While creating a virtual network in the portal, Azure requires the VNet to
have at least one subnet.
But after we create the Virtual Network resource, the Azure Resource
Manager allows you to remove all the subnets via the portal or the ARM
template.

Since the virtual network alone can be a standalone resource without any
subnets, removing both subnets will not delete the VNet.

Option D is incorrect.

Recall from Practice Test 1 (Related lecture video titled: Redeploy a template in different deployment modes) about how the incremental and complete deployment modes treat resources differently that are not defined in the template but live in the resource group.

But that’s how the deployment modes work for top-level resources. The
subnet01 and subnet02 of resource type,
Microsoft.Network/virtualNetworks/subnets are defined here as child
resources of the Virtual Network of resource type,
Microsoft.Network/virtualNetworks. So, the subnets are child resources of
the VNet and are not top-level resources.

Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/child-resource-name-type#outside-parent-resource

Before jumping to the answer, let’s understand how incremental mode would work for a child resource. In incremental mode, new child resources are added to the top-level resource (a), but existing resources are not deleted, even if they are not defined in the template (b), similar to how incremental mode works for top-level resources.

For the complete deployment mode, similar to top-level resources, new child resources defined in the template are added to the top-level resource (a). But unlike for top-level resources, if the child resources are not included in the template, they are not deleted to preserve the parent-child relationship (b).

So, when you remove both the subnets and deploy the template, the
subnets will not be removed from the VNet resource. Option C is the
correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-
manager/templates/deployment-modes#complete-mode
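For reference, a hedged sketch of the redeployment command (the resource group and template file names are hypothetical):

# Redeploy the trimmed template in complete mode; the subnets are child resources of the
# VNet, so they remain intact even though they are no longer defined in the template.
New-AzResourceGroupDeployment -ResourceGroupName 'rg-dev-01' -TemplateFile './vnet-no-subnets.json' -Mode Complete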

Resources
Redeploying child resources in complete mode
Domain
Deploy and manage Azure compute resources
