AZ Question 2
You have three VMs across two subnets in your Azure virtual network. A Standard SKU public IP address is associated with each VM's NIC.
None of the VMs' NICs or the VNet's subnets is associated with a Network Security Group yet. You need to add security rules in the NSG and associate the NSG with either a subnet or a network interface so that the below four objectives are met:
Based on the given information, answer the below two questions. Select
answers that satisfy all the required objectives.
1, 3
Correct answer
1, 2
2, 2
2, 3
Overall explanation
Short Answer for Revision:
Objective 2: Create another security rule in the same NSG to allow web traffic only to vm02.
Detailed Explanation:
When you create a Network Security Group, it comes with three default inbound and three default outbound security rules. So, if the default rules satisfy any of the given objectives, we don't have to add a new security rule for that objective.
Objective 1:
RDP connections are not allowed by any of the default security rules. So, we need to create an inbound security rule to allow the RDP connection. Since RDP should be allowed only for vm01, we use vm01's private IP address in the Destination IP address field.
We can associate this NSG with the subnet or the NIC of vm01. But let’s
associate it with the subnet, so we don’t have to create an additional NSG
for vm02, which is also in the same subnet as vm01.
This ensures that users can RDP to vm01. Since we targeted only vm01, it
also ensures that users cannot RDP to vm02.
Objective 2:
The web server traffic over port 80 is also not allowed by any of the
default security rules. So, in the same NSG, we can create an inbound
security rule that targets vm02 with its private IP address to allow traffic
over port 80.
To test the security rule, we can RDP from vm01 to vm02. Note that
although we did not create a security rule to allow RDP for vm02, the
default security rule AllowVnetInBound allows all traffic from the virtual
network. Since we are doing RDP from vm01, which is in the same virtual
network as vm02, the connection will be allowed.
Once logged into vm02, install the web server role. We can verify that we can access vm02's web server by using its public IP address (check the related lecture video).
Objective 3:
Objective 4:
In the architecture so far, we have only one NSG, which is associated with subnet01. So, no security rules apply to vm03, which is deployed in subnet02. When a VM doesn't inherit any security rules, traffic flow to the VM is governed by its public IP address SKU.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses#sku
A Standard SKU public IP address is associated with each of the VMs. So, by definition, all inbound traffic to vm03 from the Internet is blocked.
To verify this, log in to vm03 using its private IP address from vm01. Note
that the Standard SKU public IP address restricts all inbound traffic from
outside the VNet. So, traffic from vm01, which is in the same VNet as vm03, is allowed.
Again, install the web server role using Server Manager by performing
similar steps as you did for vm02. After the web server is installed, use
the public IP address of vm03 to connect to the web server. Although the
web server role is installed, you wouldn’t be able to connect to the server,
as the Standard SKU public IP address is secure by default and blocks all
traffic from outside the VNet.
But you can access the web server from within vm03 to check if the web
server is running successfully.
So, we don’t have to create any security rule or associate any NSG to
subnet02, where vm03 is deployed. The Standard SKU public IP address
does the magic.
We have created only one NSG with 2 NSG rules so far. So, option B is the
correct answer.
GitHub Repo Link: Minimum no. of network security rules required
Resources
Minimum no. of network security rules required
Domain
Implement and manage virtual networking
Question 2
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
Solution: You turn on IP forwarding within the guest operating system in vm02.
Yes
Correct answer
No
Overall explanation
Short Answer for Revision:
Turning on IP forwarding in the guest OS lets vm02 use its secondary NIC, nic03, for outbound communication, but you still cannot ping vm02's secondary NIC from vm01. The given solution doesn't meet the goal.
Detailed Explanation:
By default, the first NIC attached to the VM is the primary network
interface. So nic02 is the primary network interface card of vm02. All
other network interfaces attached to the VM are secondary network
interfaces. So nic03 is the secondary network interface card attached to
vm02.
So, when you ping vm02 from vm01, the traffic reaches the primary NIC of
vm02, whose private IP address is 10.0.2.4.
From these examples, we can verify that, by default, a VM uses only the primary IP configuration of the primary network interface for all outbound and inbound communications; we cannot use the secondary NIC for communication.
But the question requires that we ping the secondary NIC of vm02. So, let's implement the given solution, which is to turn on IP forwarding within the guest Operating System in vm02.
Before doing that, first check the number of network interface cards attached to vm02 in the guest OS. As expected, there are two network interface cards, as vm02 is attached to two NICs, nic02 and nic03.
Although IP forwarding is enabled in the Azure portal for nic02, the IP
forwarding is disabled in the guest OS for both these NICs.
After IP forwarding is turned on in the guest OS for both the network interface cards, vm02 can forward packets to other VMs using the secondary network interface card. So from vm02, you can use the secondary network interface card to ping vm01.
However, you will still not be able to ping the secondary network interface
card of vm02 from vm01 (Check the video lecture). The given solution
doesn’t satisfy the stated goal. Option No is the correct answer.
Resources
Enable communication with a VM's secondary network interface card - 1
Domain
Implement and manage virtual networking
Question 3
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
Solution: From the Windows command line of vm02, you use the route
add command to add a route with the default gateway address for nic03 in
the IP routing table.
Correct answer
Yes
No
Overall explanation
Short Answer for Revision:
Adding a route with the default gateway address for nic03 ensures that
the default gateway is assigned to nic03. So, any VM from outside the
subnet can communicate with nic03. Also vm02 can use nic03 to
communicate with any VM outside the subnet.
Detailed Explanation:
nic02, the primary network interface attached to vm02 with the private IP 10.0.2.4, has a default gateway assigned to it. But nic03, the secondary network interface attached to vm02 with the private IP 10.0.2.5, doesn't have any default gateway assigned.
This default gateway acts as a router that forwards data packets outside the subnet, subnet02. The reverse is also true: if vm01 from outside the subnet wants to communicate with vm02 inside subnet02, the traffic has to go through both subnet01's and subnet02's default gateways.
The lack of a default gateway assigned to the secondary NIC (nic03) is the
main reason why you cannot ping the secondary NIC of vm02 from vm01.
It is also the reason why you cannot use nic03 to ping vm01.
So, let's use the route add command to add a default route for nic03 with the default gateway, which in Azure is always the first IP address in the subnet, i.e., 10.0.2.1.
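For illustration, the commands might look like the below. The interface index 13 is an assumption; take nic03's actual index from the route print output.

route print
# Persistent default route for nic03 via the subnet's default gateway.
route -p add 0.0.0.0 mask 0.0.0.0 10.0.2.1 metric 5001 if 13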
After the route is added, verify if the route exists in the routing table.
Now you can use the secondary network interface of vm02 to ping vm01.
Reference
Link: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/route_ws2008
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-faq#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets
GitHub Repo Link: Same as the previous question in the question set.
Resources
Enable communication with a VM's secondary network interface card - 2
Domain
Implement and manage virtual networking
Question 4
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
Solution: You move vm01 to subnet02, the subnet where vm02 is deployed.
Correct answer
Yes
No
Overall explanation
Short Answer for Revision:
You need a default gateway only for communication between subnets. For
intra-subnet communication, there is no need for a default gateway. If
vm02 and vm01 are deployed in the same subnet, vm02 can use its
secondary NIC, nic03 (which is not assigned a default gateway) to
communicate with vm01 . Further, vm01 can also communicate with
nic03.
Detailed Explanation:
Let’s first move vm01 to subnet02 and understand the implications of this
move operation. Moving a virtual machine to a different subnet is nothing
but moving the VM’s NIC to a different subnet. So, go to nic01, which is
the Network Interface attached to vm01, and change its subnet from
subnet01 to subnet02.
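As a minimal Azure PowerShell sketch (the resource group and resource names are assumptions), the same move looks like this:

$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-dev-01' -Name 'vnet01'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'subnet02'
$nic = Get-AzNetworkInterface -ResourceGroupName 'rg-dev-01' -Name 'nic01'
# Point the NIC's primary IP configuration at subnet02 and save.
$nic.IpConfigurations[0].Subnet.Id = $subnet.Id
$nic | Set-AzNetworkInterface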
This is Networking 101 we are discussing here, but do note that a default gateway is necessary only if the VM communication needs to traverse subnets or networks.
GitHub Repo Link: Same as the first question in the question set.
Resources
Enable communication with a VM's secondary network interface card - 3
Domain
Implement and manage virtual networking
Question 5
You have an Azure Load Balancer and the below two VMs added to the
load balancer’s backend pool.
Select what you will configure for the below two tasks:
Detailed Explanation:
First, know the difference between a load balancing rule and an inbound
NAT rule. A load balancing rule distributes traffic to backend VMs whereas
an inbound NAT rule forwards traffic to a specific VM.
Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/load-
balancer-faqs#how-are-inbound-nat-rules-different-from-load-balancing-
rules-
Question 1:
To distribute HTTP traffic to either of the VMs in the backend pool, create
a load balancing rule that maps the load balancer’s frontend IP address to
the backend pool. A load balancer rule also requires a health probe that
checks the health status of VMs.
Note that after the rule is created, the load balancer's frontend public IP address serves as the entry point for the individual VMs. So, when the HTTP traffic hits the load balancer's public IP address, the load balancer distributes the traffic to either vm01 or vm02.
Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
load-balancer-rules
Question 2:
To forward the RDP traffic only to specific VMs, create an inbound NAT
rule that targets a specific virtual machine using the RDP protocol.
This rule, too, uses the load balancer's public IP address as the entry point for the targeted VM. So, users can use just the load balancer's public IP address (and the rule's frontend port) to RDP into the targeted VM using the VM's account credentials.
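As a minimal Azure PowerShell sketch (assumed names; the frontend, backend pool, and probe are taken from an existing load balancer lb01), the two rule types look like this:

$lb = Get-AzLoadBalancer -ResourceGroupName 'rg-dev-01' -Name 'lb01'
$fe = $lb.FrontendIpConfigurations[0]
$pool = $lb.BackendAddressPools[0]
$probe = $lb.Probes[0]

# Task 1: a load balancing rule distributing HTTP traffic to the backend pool.
$lb | Add-AzLoadBalancerRuleConfig -Name 'rule-http' -Protocol Tcp `
    -FrontendPort 80 -BackendPort 80 -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe

# Task 2: an inbound NAT rule forwarding RDP on an assumed frontend port
# 50001 to port 3389 of one specific VM (its NIC must then reference the rule).
$lb | Add-AzLoadBalancerInboundNatRuleConfig -Name 'nat-rdp-vm01' -Protocol Tcp `
    -FrontendPort 50001 -BackendPort 3389 -FrontendIpConfiguration $fe

$lb | Set-AzLoadBalancer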
Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
inbound-nat-rules
GitHub Repo Link: Distribute & forward traffic to load balanced VMs
Resources
Distribute and forward traffic to load balanced VMs
Domain
Implement and manage virtual networking
Question 6
Given below is an ARM template that adds a Microsoft Entra domain
extension to an Azure Windows VM to join the virtual machine to the
Microsoft Entra Domain Services managed domain.
ravikiransrinivasulu@ravikirans171.onmicrosoft.com,
“domainjoin”
ravikirans171.onmicrosoft.com\\ravikiransrinivasulu,
“domainjoin”
Correct answer
ravikirans171.onmicrosoft.com\\ravikiransrinivasulu,
“[concat(parameters('vmName'),'/domainjoin')]”
ravikirans171.onmicrosoft.com\ravikiransrinivasulu,
“[concat(parameters('vmName'),'/domainjoin')]”
Overall explanation
Short Answer for Revision:
A single '\' combines with the character 'r' next to it to form the JSON escape sequence \r (a carriage return). So, we get an incorrect username value.
A double '\\' is the JSON escape for a literal backslash, so no other character is consumed. We get the correct username value. Further, using the UPN (which looks similar to an email address) is also correct.
You should format the name and type values of child resources defined
outside the parent resource with a ‘/’ to include the parent resource name
and type. The correct format for the child resource name should be
vmName/extensionName.
Detailed Answer:
The deployment fails when you provide the username with a single '\'. As you can observe below, the single backslash combines with the character ('r') next to it to form an escape sequence, so the template picks up an incorrect username.
Option D is incorrect.
Recognize that, in the given template, the extension resource (of resource
type: Microsoft.Compute/virtualMachines/extensions) is a child resource of
the virtual machine resource (of resource type: Microsoft.Compute/virtualMachines).
So, when a child resource is not defined within the parent resource, i.e.,
outside the parent resource, you should format the name and type values
with slashes to include the parent resource name and type.
So, the resource type for the extension child resource should be
Microsoft.Compute/virtualMachines/extensions, and not just extensions.
And the name for the extensions child resource should be the parent
resource name/child resource name, and not just the child resource name.
Note that this is only the case when the child resource is not defined as a
nested resource of the parent resource.
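For reference, a minimal outline of the extension defined outside (alongside) the VM resource might look like the below; the apiVersion and the empty properties block are illustrative placeholders:

{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2023-03-01",
  "name": "[concat(parameters('vmName'), '/domainjoin')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {}
}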
So, using only the child name for the extension resource will produce the
below error.
Reference Link: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/child-resource-name-type#outside-parent-resource
Only option C picks the correct choices for both boxes. So, if you deploy a template with ravikirans171.onmicrosoft.com\\ravikiransrinivasulu in box1 and "[concat(parameters('vmName'),'/domainjoin')]" in box2, the deployment will be successful.
Note: This template requires you to already have the VM and the domain
services deployed in your subscription.
Resources
Install an extension to domain-join a VM with ARM Template
Domain
Deploy and manage Azure compute resources
Question 7
Given below are the JSON definitions of two custom roles, role-st-dev01 and role-st-dev02, that you assign to User One and User Three, respectively, at the Dev subscription scope.
{
  "properties": {
    "roleName": "role-st-dev01",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "*/read"
        ],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/*"
        ],
        "notDataActions": [
          "Microsoft.Storage/storageAccounts/fileServices/*",
          "Microsoft.Storage/storageAccounts/queueServices/*",
          "Microsoft.Storage/storageAccounts/tableServices/*",
          "Microsoft.Storage/storageAccounts/*/delete",
          "Microsoft.Storage/storageAccounts/*/write",
          "Microsoft.Storage/storageAccounts/*/action"
        ]
      }
    ]
  }
}
{
  "properties": {
    "roleName": "role-st-dev02",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Storage/*"
        ],
        "notActions": [
          "Microsoft.Storage/storageAccounts/blobServices/*"
        ],
        "dataActions": [
          "Microsoft.Storage/*"
        ],
        "notDataActions": [
          "Microsoft.Storage/storageAccounts/fileServices/*",
          "Microsoft.Storage/storageAccounts/queueServices/*",
          "Microsoft.Storage/storageAccounts/tableServices/*",
          "Microsoft.Storage/storageAccounts/*/delete",
          "Microsoft.Storage/storageAccounts/*/write",
          "Microsoft.Storage/storageAccounts/*/action"
        ]
      }
    ]
  }
}
What will be the default access when the two users access the blob data
in the storage account from the Azure portal?
Correct answer
Accesses blob data with Microsoft Entra ID authentication, Cannot
access blob data
Accesses blob data with Microsoft Entra ID authentication,
Accesses blob data with Microsoft Entra ID authentication
Accesses blob data with storage account keys, Accesses blob data
with Microsoft Entra ID authentication
Cannot access blob data, Accesses blob data with storage account
keys
Overall explanation
User One:
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-
data-operations-portal#use-your-microsoft-entra-account
The actions section of the custom role, role-st-dev01, contains the same permissions as the Reader role, providing read access to all the Azure resources. So, using this permission, User One can navigate to the storage account in the Azure portal.
So, when User One accesses the blob data, he accesses it via Microsoft
Entra ID authentication, by default.
User Three:
The DataActions and NotDataActions sections of the custom role role-st-dev02 are the same as those of role-st-dev01. However, its notActions excludes the management permissions under Microsoft.Storage/storageAccounts/blobServices/*, including the permission to list containers. Without that management access, User Three cannot even browse to the blob data in the portal. So, he cannot access blob data.
Resources
Analyze the default user access to blob data
Domain
Implement and manage storage
Question 8
There are three blob containers source1, source2, and source3 with
a Public access level of Container, Blob, and Private, respectively, in the
strdev011 storage account, which is in the rg-dev-01 resource group.
There are two more blob containers, target1 and target2, with a Public access level of Container and Private, respectively, in the strdev012 storage account, which is in the rg-test-01 resource group.
There are two users User One and User Two in the Microsoft Entra ID
tenant, with the following roles assigned at the respective resource group
scopes.
Which of the following statements is correct about users using the azcopy
command to copy the file from the source container to the target
container? Select two options.
1. Even the highest access level (Container) of target1 provides only read
access to the container.
2. The user’s Storage Blob Data Reader role provides only read access to
data, not write permissions.
Option A is incorrect.
In option B, the Blob access level of source2 doesn't permit anonymous read access to the file because, as the command syntax shows, the azcopy tool copies data from a container, not an individual blob file, and listing a container anonymously requires the Container access level. But still, User One can copy the file using the storage account access keys, as the Storage Account Contributor role provides read access to them.
However, he cannot write the file to the target container, as the Storage
Blob Data Reader role provides only read access to data, not write
permissions.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-owner
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-authorize-azure-active-directory
Note: The related lecture video has more details on how to authorize a
user with Microsoft Entra ID.
GitHub Repo Link: Use azcopy to copy data with Azure AD and different container access levels - PS Command.ps1
Resources
Use azcopy to copy data with Microsoft Entra & different container access
levels
Domain
Implement and manage storage
Question 9
A user is assigned the below custom role at the Dev subscription scope.
{
  "properties": {
    "roleName": "role-st-dev01",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "*/read"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
Which additional role would you assign him (at the same scope) so he can
upload your organization’s AHM videos in the blob container using
Microsoft Entra ID authentication from the Azure portal?
The custom role the user is assigned is very much a reader role that
grants access to view all resources. But instead of a role that grants
permission to access data, we need a role with permission to upload/write
data.
The Storage Blob Data Reader role, as the name indicates, gives permission only to read blob data. So, when the user tries to upload a file via Microsoft Entra ID authentication, he will get an access permission error.
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-reader
Option A is incorrect.
The Storage Blob Data Contributor role provides read, write, and delete permissions on the data in the storage account. So, this role allows the user to upload blob data via Microsoft Entra ID authentication.
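As a minimal sketch, assigning that role at the same subscription scope could look like the below (the user's UPN is an assumption; the subscription id follows the role definition above):

New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Storage Blob Data Contributor' `
    -Scope '/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00'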
The contributor role, similar to the owner role, doesn’t provide access to
data (no permissions in the DataActions section). So, the user cannot even
access the blobs in the container via Microsoft Entra ID authentication, let
alone, upload blobs to the container.
Option C is incorrect.
But the contributor role provides access to storage account keys. So, he
can still upload data via account key authentication (check the related
lecture video).
Resources
Upload blobs to storage account via Microsoft Entra ID authentication
Domain
Implement and manage storage
Question 10
You have an App Service app with an Azure SQL database. The app has
one production slot and one staging deployment slot.
In addition to the automatic backup, you have also configured a custom
backup of the app in the production slot to the storage account with the
below settings:
Which of the following restore actions are possible? Select two options.
You can restore the app + database from the automatic backup to
the production slot
You can restore the app + database from the automatic backup to
the staging slot
Correct selection
You can restore the app + database from the custom backup to
the staging slot
Correct selection
You can restore the app + database from the custom backup to
the production slot
Overall explanation
Short Answer for Revision:
2 types of backups possible for App Service: Automatic (only app) and
Custom (both app & database). Further, you can restore any type of
backup to a new app, or current app (same slot or different slot or a new
slot).
Detailed Answer:
For Azure App Service apps, two types of backups can be created:
a. Automatic backups
b. Custom backups
Automatic backups are created every hour by default if you deploy apps in a Basic App Service plan or higher. However, automatic backups do not back up the database. So, options A and B are incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-backup?tabs=portal#back-up--restore-vs-disaster-recovery
With custom backups, you can restore the database in addition to the app, provided the database was selected for inclusion when the backup was configured.
Further, irrespective of the type of backup, whether automatic or custom,
you can restore the backup to the same app or a different app and to the
production (same) slot or other deployment slots or even a new slot.
Options C and D are the correct answer choices.
Resources
Back up App Service App to a deployment slot
Domain
Deploy and manage Azure compute resources
Question 11
You have a virtual machine and its related resources (not shown) in the
South India location. The VM has a private IP address of 192.168.1.4 and
is connected to subnet01 of vnet01.
There is another virtual network, vnet02 in the West Europe location with
three subnets. Their corresponding IPv4 address ranges are shown below:
To meet your Business Continuity & Disaster Recovery (BCDR) needs, you
configure disaster recovery by replicating vm01 to the target region West
Europe and the target VNet, vnet02.
The drill will fail as you can replicate VMs only between Azure
region pairs
Correct answer
The drill will succeed and the test VM will be deployed in
FirstSubnet
The drill will succeed and the test VM will be deployed in
SecondSubnet
The drill will succeed and the test VM will be deployed in
ThirdSubnet
Overall explanation
Short Answer for Revision:
If the target VNet has a subnet with the same name as the source VM’s
subnet, the test VM will be deployed in that subnet. Else, the first subnet
in alphabetical order is set as the target VM’s subnet.
Detailed Answer:
You can replicate VMs between any two Azure regions. Option A is
incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix#region-support
To answer this question, let’s perform a test failover and observe which
subnet the test VM uses. Since we have already mapped vnet02 in the
replication settings, let’s select the mapped virtual network.
Once the test failover process is underway, you can check the test VM's subnet. It is FirstSubnet. The reason it uses FirstSubnet, and not any other, is that if a subnet with the same name as the source VM's subnet doesn't exist in the target virtual network, the first subnet in alphabetical order is set as the target VM's subnet. This sounds strange, but that's how the Site Recovery process works.
If you do not want any surprises, create a subnet with the same name as the source VM's subnet in the target VNet. This ensures that the target VM is deployed in the subnet with the same name.
Resources
Subnet of the test VM after test failover
Domain
Monitor and maintain Azure resources
Question 12
In a managed domain (with Microsoft Entra Domain Services) that
integrates with your Microsoft Entra ID tenant, you enable identity-based
authentication for Azure File share over SMB.
In the Microsoft Entra ID tenant, you create the below two users with the
following role assignments.
The users log into the domain-joined VM to access the file share using
Microsoft Entra ID credentials.
Further, you assign the Storage File Data SMB Share Reader role as a
default share-level permission on your storage account.
Given below are two statements based on the above information. Select
Yes if the statement is correct. Else select No.
Yes, No
Correct answer
Yes, Yes
No, No
No, Yes
Overall explanation
Well, let’s first lay out the architecture to understand the problem better.
2. To enable access to file share over SMB, the storage account is domain-
joined or registered with the Microsoft Entra Domain Services deployment.
4. So, the two users authenticate directly against Microsoft Entra Domain
Services. They send the received token to Azure Files for authorization.
Refer to the links to know more details about the deployment.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-domain-services-enable?tabs=azure-portal
https://learn.microsoft.com/en-us/azure/storage/files/storage-files-active-directory-overview#microsoft-entra-domain-services
Now, the two users need share-level permissions to access the file share. There are two ways you can assign share-level permissions: assign one of the built-in Storage File Data SMB Share roles to specific users or groups, or configure a default share-level permission on the storage account for all authenticated identities.
So, although User Four does not have any specific role assignment, he
will still be able to view files in the file share, thanks to default share-level
permissions.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions?tabs=azure-portal#what-happens-if-you-use-both-configurations
So, with the contributor role, he can access and modify files in the file
share. Statement 1 is Yes.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions?tabs=azure-portal#share-level-permissions
Resources
Grant Azure File share access with Microsoft Entra ID credentials
Domain
Implement and manage storage
Question 13
You have three virtual machines across two different regions in your Azure
subscription.
The virtual machine vm02 stores the project files (the Data Analysis
folder), and the VM is backed up daily to a Recovery Services Vault.
Detailed Answer:
To recover a file, select the backup item of the VM and click File Recovery.
1. First, we select the restore point that contains the deleted folder.
2. Next, we download the script that mounts the disks from the selected
recovery point as local drives on any machine where it is run.
Let’s first connect to vm01 and copy the downloaded executable. And run
the script by copy-pasting the required password. In a few moments, we
can see the mounted drive, where we can browse the project folder.
Let’s also connect to the other two VMs and perform the same set of steps
to mount the drive. As you can observe, we can mount the drive in each of
the given VMs.
Resources
Recover a folder to a VM from a recovery point
Domain
Monitor and maintain Azure resources
Question 14
You have two subscriptions Dev and Prod, under the Apps management
group as shown below:
And you create three resource groups across the two subscriptions.
In each resource group you deploy a virtual machine with the related
resources.
Finally, you create the below four users and assign roles at different
scopes.
Given below are three statements based on the above information. Select
Yes if the statement is correct. Else select No.
No, Yes, No
From image 2 in the question, there are two resource groups, rg-dev-
01 and rg-dev-02 in the Dev subscription and one resource group rg-
prod-01 in the Prod subscription.
Statement 1:
Since Admin one is the owner of the Apps management group, he has
owner permissions on the subscriptions and all the resources created
using those subscriptions.
But the Tenant Root Group is one level above the Apps management
group. Granting owner access at the Apps management group scope
doesn’t grant access to the root management group. Consequently, he
cannot assign policies at the root management group scope.
Admin two, on the other hand, holds the User Access Administrator role at the Tenant Root Group scope. This enables him to access the root management group and all the subscriptions and management groups in the tenant.
Since the user access administrator role provides access to the entire
resource provider Microsoft.Authorization (See the Actions list in the 1st
link below), in addition to role assignments for managing user
access, Admin two can also perform policy assignments (check the related
lecture video).
Reference Link: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#user-access-administrator
https://learn.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftauthorization
So, only Admin two can assign policies at the Tenant Root Group scope.
Statement 2:
User two is assigned the Virtual Machine Contributor role, which provides contributor access only to the virtual machine. This role doesn't grant management access to resources that the VM depends on, i.e., he cannot manage the virtual network where the VM is deployed.
Therefore, User two cannot add a new address space in the VNet.
Statement 2 is incorrect.
Statement 3:
We have granted the Virtual Machine User Login role to User one at
the rg-dev-02 resource group scope. So, he will have login access to only
the vmsqlnode01 virtual machine.
When a user with the user access administrator role assigns policies
at any management group scope, he gets a success message for policy
assignment and an error message for failure to register
Microsoft.PolicyInsights resource provider (as seen in the video).
This error does not occur when this user assigns policies at the
subscription scope if he has registered the subscription with
Microsoft.PolicyInsights resource provider in Azure portal.
This is the response from the Microsoft team on the above issue (contains
details on why Microsoft.PolicyInsights RP is needed):
Link: https://learn.microsoft.com/en-us/answers/questions/1055370/can-user-access-administrator-assign-azure-policie.html
Resources
Managing assignments at Management groups & Subscriptions
Domain
Manage Azure identities and governance
Question 15
You create the below backup policy to back up your Azure VMs to a
Recovery Services Vault.
Assume you create the policy on April 1, Saturday, 00:00 AM. Based on
the given backup schedule, answer the below two questions:
30 Months, 10
6 Weeks, 9
Correct answer
5 years, 8
30 Months, 9
Overall explanation
Short Answer for Revision:
Question 1: If more than one backup retention point exists on a day, Azure
will retain the backup for the longest duration. For the 4th of April, daily,
weekly, monthly, and yearly backup retention points exist. So, the backup
will be retained for 5 years.
Question 2: Since daily backups are retained only for 7 days, on 12th
April, you will have the backups for the last 7 days. Further, on 4th April,
the backup is retained for 5 years. So, in total, 8 backups.
Detailed Answer:
Question 1:
Let’s understand the given backup policy by plotting the timelines from
1st April to 12th April. It is given that a daily backup point occurs at 8:00
AM.
The backup points that occur every Tuesday are retained as weekly
backup points for six weeks. These backups occur on the 4th and the 11th
of April.
The backup points that occur on the 4th of every month are retained as
monthly backup points for thirty months. For April, the monthly backup
point falls on a Tuesday.
Finally, the backup points that occur on the first Tuesday of April month
are retained as yearly backup points for five years. The yearly backup
point also falls on the 4th of April.
Coincidentally, for the April month of the given year, the backup that
occurs on the 4th of April is retained as daily, weekly, monthly, and yearly
backup points.
Question 2:
Each daily backup point will be retained only for seven days. So, on the
12th of April at 6:00 PM, the daily backups from 1st to 3rd April, and the
5th of April will be deleted. The recovery point on the 4th of April is
untouched as it will be retained for five years, as just discussed.
So, there will be a total of 6 daily backups (those of the 6th to the 10th, plus the 12th of April), one weekly backup (the 11th of April), and one yearly backup (the 4th of April). Therefore, on the 12th of April, there will be a total of 8 backups. Question 2 -> 8.
Reference Link: https://learn.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare#create-a-custom-policy
Resources
Backup policy in a Recovery Services Vault
Domain
Monitor and maintain Azure resources
Question 16
Given below are two statements based on Azure Storage object
replication. Select Yes if the statement is correct. Else select No.
Yes, No
No, No
Correct answer
No, Yes
Yes, Yes
Overall explanation
Statement 1:
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-prevent-cross-tenant-policies?tabs=portal
To get a sample JSON for defining replication rules, I create a rule (in the
UI) to replicate objects between two storage accounts, both in the same
Microsoft Entra ID tenant. And download the rule in a JSON file.
In this JSON file, we need to do three things:
2. Replace the values for the target subscription id, target resource group,
and target storage account in the destinationAccount property.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-configure?tabs=portal#configure-object-replication-using-a-json-file
If you download the JSON replication file from the target storage account,
you can view the generated policyId.
Upload this downloaded file to the object replication section in the source
storage account. The same policyId in the JSON file on the source and the
target account ensures that replication takes place, and you can view the
replicated files in the target storage account (Check the related lecture
video).
Statement 2:
The below link explains best why we first uploaded the JSON file to the
destination storage account, then downloaded the JSON file from the
destination account to upload it to the source storage account.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview#replication-policies
If you first upload the JSON file (with the ‘default’ value for policyId) to the
source storage account, you will get this error.
Resources
Azure Storage object replication across tenants
Domain
Implement and manage storage
Question 17
You have three Azure Virtual Machines, two VMs in vnet01, and a VM
hosting a custom DNS server in vnet02. The two VNets peer with each
other.
The vnet01 uses the custom DNS server (on the left), which hosts the
forward lookup zone birdsource.com (on the right). There are a couple of
‘A’ records in the zone pointing to the VM’s private IP as shown:
Also, an Azure Private DNS zone, bigstuff.com is linked with vnet01 with
auto-registration enabled.
Which of the following domain names can vm01 resolve using the
nslookup command?
a. vm02.internal.cloudapp.net
b. vm02.bigstuff.com
c. vm02
d. vm02.birdsource.com
Only a, c, and d
Only b
Correct answer
Only d
Only a, b, and c
Overall explanation
Short Answer for Revision:
Each VNet comes with a default, Azure-provided DNS server, which can
resolve vm02 and vm02.internal.cloudapp.net. They can also forward DNS
queries to the private DNS zones linked to the VNet.
But when you use a custom DNS server, they cannot resolve vm02 or
vm02.internal.cloudapp.net as the lookup zone is different. They can only
resolve domain names like vm02.birdsource.com. They also do not
forward queries to the private DNS zones linked to the VNet.
Detailed Answer:
For questions like these that include a lot of moving pieces, we can start
with organizing the information as a visual architecture map.
vnet01 doesn't use the default, Azure-provided DNS server but rather a custom DNS server hosted in a peered virtual network at 10.0.0.5.
In your command prompt, type nslookup to use the command in interactive mode. Subsequently, whenever you look up domain names, the query first hits the custom DNS server. Generally, it is the Azure-provided DNS server that resolves domain names like vm02 and vm02.internal.cloudapp.net in any virtual network.
But the custom DNS server has no idea of these domain names. So, unless
you set up forwarding that forwards the DNS queries to the Azure-
provided DNS server, vm01 cannot resolve these names.
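As a sketch of what you would see from vm01 (names from the question; the outcomes assume the setup described above):

# Resolve-DnsName queries the DNS server configured for the VNet, i.e.,
# the custom DNS server at 10.0.0.5.
Resolve-DnsName 'vm02.birdsource.com'          # succeeds: 'A' record exists in the zone
Resolve-DnsName 'vm02'                         # fails: name unknown to the custom server
Resolve-DnsName 'vm02.internal.cloudapp.net'   # fails: zone not hosted or forwarded
Resolve-DnsName 'vm02.bigstuff.com'            # fails: no forwarding to the private zone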
For reference, the Azure-provided DNS server is reachable at the virtual IP 168.63.129.16. This public IP address is owned by Azure and facilitates communication with the Azure DNS and the DHCP servers from any virtual network in any region.
Reference
Link: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/nslookup
https://learn.microsoft.com/en-us/azure/dns/dns-faq-private#will-dns-resolution-by-using-the-default-fqdn--internal-cloudapp-net--still-work-even-when-a-private-zone--for-example--private-contoso-com--is-linked-to-a-virtual-network-
GitHub Repo Link: Using custom DNS server with Azure Private DNS zone
Resources
Using custom DNS server with Azure Private DNS zone
Domain
Implement and manage virtual networking
Question 18
You have three VMs, deployed across two VNets in your Azure subscription. In the Windows Server OS of the virtual machines vm02 and vm03, you configure birdsource.com as the primary DNS suffix.
You have an Azure Public DNS zone named birdsource.com. The system-
assigned managed identity of vm03 is assigned the below role to the
public DNS zone.
Further, an Azure Private DNS zone named bigstuff.com is registered with
vnet02, with auto-registration enabled.
Finally, the Virtual Networks vnet01 and vnet02 peer with each other.
Given below are three statements based on the above information. Select
Yes if the statement is correct. Else select No.
Overall explanation
Short Answer for Revision:
The Azure DHCP service ignores the primary DNS suffix configured in the VM's OS when its VNet registers with the private DNS zone. So, vm03 cannot resolve the birdsource domain. Statement 2 -> No.
VNet peering links do not forward the DNS queries to the private zone. Statement 3 -> No.
Detailed Answer:
There are a lot of details going on here, so let’s get the architecture
mapped out:
Statement 1:
The concept of linking a virtual network with a DNS zone and enabling
auto-registration applies only to Azure Private DNS zones. You cannot
even link your VNet with a public DNS zone, let alone enable auto-
registration.
So, an ‘A’ record pointing to vm03’s private IP address is automatically
created in the private zone, not on the public zone, birdsource.com.
The owner role of vm03 on the public DNS zone grants permissions to perform zone-level or record-level operations, like adding a record set or deleting a zone. The role doesn't interfere with the default DNS operations.
Statement 2:
vnet02 is the registration virtual network for the private DNS zone,
bigstuff.com. So vm03 can resolve the domain name of vm02, which is
vm02.bigstuff.com.
Although the primary DNS suffix of both vm02 and vm03 are configured
with birdsource.com, the Azure DHCP service ignores the primary DNS
suffix when it registers the private DNS zone.
Statement 3:
The VNet peering only enables vm01 and vm02 in different virtual
networks to communicate with each other. Since vnet01, where vm01 is
deployed, is not linked to the private DNS zone, it doesn’t have access to
the DNS records.
The VNet peering links do not forward the DNS queries to the private
zone. Since all virtual networks must be linked to the private DNS zone to
support DNS resolution between virtual networks, vm01 cannot resolve
the domain bigstuff.com for vm02.
Reference Link: https://learn.microsoft.com/en-us/azure/dns/dns-faq-private#will-azure-private-dns-zones-work-across-azure-regions-
Resources
Use Azure DNS to resolve domain names
Domain
Implement and manage virtual networking
Question 19
You have created the below resources in your Azure subscription.
There is a blob container and a file share in the Azure storage account as
shown below:
Based on the given information, answer the below two questions:
From the backup center, you can create two types of vaults: a Recovery
Services vault and a Backup vault. Each vault supports backing up a
specific type of data source.
Although both Azure Disks and the blob containers can be backed up to a
backup vault, you cannot reuse a backup policy between the two data
sources.
So, I need a minimum of four backup policies to back up four types of data
sources. Question 2 -> 4. Option D is the correct answer.
Reference Link: https://learn.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare#apply-a-backup-policy
Resources
Backup different data sources to a vault
Domain
Monitor and maintain Azure resources
Question 20
I have a public domain called ravikiransrinivasulu.com that’s already
delegated to Azure DNS. Azure DNS successfully resolves queries for my
domain from the Internet.
Select and order the steps you would perform to delegate a subdomain
called courses to another separate Azure DNS zone.
Correct answer
Create a new zone named courses.ravikiransrinivasulu.com for
the subdomain
Overall explanation
Short Answer for Revision:
Create a new zone, courses.ravikiransrinivasulu.com, for the subdomain. Delegation works because the parent zone holds an NS record pointing to the child zone's name servers.
Detailed Answer:
Recall that when a user visits a website, the domain delegation with the
help of NS records ensures that the DNS query can reach the DNS zone of
the target domain. Note that:
a. All along the DNS hierarchy, the parent zone points to the name server
of the child zone.
Note: Having said that, an NS record that points to the child domain's name servers is automatically created in the parent zone if you create a child DNS zone:
a. Through the parent DNS zone, where the parent zone name is pre-populated, or
b. Through the Create DNS zone page, by marking the new zone as a child of an existing zone hosted in Azure DNS.
Resources
Delegate a subdomain to another DNS zone
Domain
Implement and manage virtual networking
Question 21
You need to establish a site-to-site VPN connection from your Azure Virtual
Network to your on-premises network. For this connection, you plan to
deploy a zone-redundant VPN gateway across different availability zones.
Which of the following public IP addresses would you use to associate with
the Gateway IP configuration?
Detailed Answer:
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses#sku
So, the Basic SKU public IPs support only the regional gateway, which
doesn’t have zone redundancy built into it. Whereas the Standard SKU
public IPs can support either zone redundant or zonal gateways.
Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-redundant-vnet-gateways#piprg
https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-redundant-vnet-gateways#pipzg
https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-redundant-vnet-gateways#pipzrg
All VPN Gateway SKUs that end with AZ, like VpnGw2AZ, support zone redundancy. For these SKUs, you can view the eligible IP addresses for association, which are either zonal or zone-redundant.
Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways#benchmark
If you use the public IP address created as zonal, i.e., if the IP address is
placed in a single zone, the gateway will also be zonal.
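As a sketch (assumed names and region), a zone-redundant Standard SKU public IP spans all three zones; passing a single zone instead would make the IP, and hence the gateway, zonal:

New-AzPublicIpAddress -ResourceGroupName 'rg-network-01' -Name 'pip-vpngw' `
    -Location 'westeurope' -Sku Standard -AllocationMethod Static -Zone 1, 2, 3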
Reference
Link: https://learn.microsoft.com/en-us/azure/vpn-gateway/about-zone-redundant-vnet-gateways#pipskus
Resources
Choose a public IP address for zone-redundant gateway
Domain
Implement and manage virtual networking
Question 22
You have four storage accounts in your Azure subscription.
And two service endpoint policies and a virtual network in the East US
region.
Each service endpoint policy allows access only to specific storage
accounts. You can identify the access from the policy name. For example,
policyaccess0910 grants access only to strdev009 and strdev010.
Yes, No
Yes, Yes
Correct answer
No, No
No, Yes
Overall explanation
Short Answer for Revision:
Consider a service endpoint policy as a whitelisting tool. If more than one
policy is applied to a subnet, access is granted to all the storage accounts
whitelisted in the policy.
Detailed Explanation:
Further, I have also created blobs in all the storage accounts which we will
try to access from the VM.
Statements 1 and 2:
The service endpoint policies deny access to all storage accounts not listed in their definitions.
If more than one policy is associated with the subnet, as is the case here, then access is allowed to all storage accounts whitelisted in either policy.
So, if you log in to the VM, you should be able to access blobs in storage
accounts strdev009, strdev010, and strdev011. However, access to
strdev012 is denied as none of the service endpoint policies whitelist the
account.
So both the given statements are incorrect. Option C is the correct
answer.
GitHub Repo Link: Access storage accounts from a subnet with multiple
service endpoint policies
Resources
Access storage accounts from a subnet with multiple service endpoint
policies
Domain
Implement and manage virtual networking
Question 23
You have a list of public IP addresses in your Azure subscription, as shown
below:
You have to choose the correct IP address for your load balancer Frontend
IP configuration. Based on this information, answer the two statements
below. Select Yes if the statement is correct. Else, select No.
Correct answer
Yes, No
Yes, Yes
No, Yes
No, No
Overall explanation
Short Answer for Revision:
2 types of Public Azure Load Balancer SKUs available: Basic & Standard.
Only Basic SKU public IP address can be associated with Azure Load
Balancer Basic. Similarly, only Standard SKU public IP address can be
associated with Azure Load Balancer Standard. No mix-and-match
allowed.
Azure Load Balancer Basic supports only IPv4. LB Standard supports both
IPv4 and IPv6. Statement 2 -> No.
Detailed Answer:
Statement 1:
Azure Public Load Balancers come in two SKUs: Basic and Standard.
Unlike in the case of an Azure Firewall, you can associate a Basic SKU
public IP address with an Azure Load Balancer. But the caveat is that you
cannot mix and match the SKUs of both types of resources.
So, a Basic SKU public IP address can only be associated with a Basic SKU
Load Balancer. Similarly, a Standard SKU public IP address can only be
associated with a Standard SKU Load Balancer.
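As a sketch (assumed names and region), pairing like with like looks as below; supplying a Basic SKU public IP to a Standard SKU load balancer here would fail:

$pip = New-AzPublicIpAddress -ResourceGroupName 'rg-net-01' -Name 'pip-lb01' `
    -Location 'eastus' -Sku Standard -AllocationMethod Static
$fe = New-AzLoadBalancerFrontendIpConfig -Name 'fe01' -PublicIpAddress $pip
New-AzLoadBalancer -ResourceGroupName 'rg-net-01' -Name 'lb01' -Location 'eastus' `
    -Sku Standard -FrontendIpConfiguration $fe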
So yes, we can associate only a Basic SKU public IP address with an Azure
Load Balancer Basic SKU. Statement 1 -> Yes.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/configure-public-ip-load-balancer
Statement 2:
Standard SKU load balancers support both IPv4 and IPv6 addresses. But
Basic SKU load balancers support only IPv4 addresses. So, you cannot
associate an IPv6 address with the Basic SKU Azure Load Balancer.
GitHub Repo Link: Public IP address SKU for an Azure Load Balancer
Resources
Public IP address SKU for an Azure Load Balancer
Domain
Implement and manage virtual networking
Question 24
You have three web server virtual machines in your Azure subscription.
You need to group all three virtual machines into an application security
group so you can define network policies based on those groups for web
server VMs. How would you proceed?
Add the subnet of the VMs as a member of the application
security group
Associate the VM’s network security group with the application
security group
Correct answer
Add all the network interfaces as members of the application
security group
Add all the VMs as members of the application security group
Overall explanation
Short Answer for Revision:
The NICs of the VMs are the members of an ASG, not any other resource.
Detailed Answer:
Application Security Groups help you group the VMs per your application
architecture. Once you group the VMs, instead of IP addresses, you can
use the group names to filter network traffic to a set of VMs using network
security rules.
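As a minimal Azure PowerShell sketch (assumed names), membership is set on the NIC's IP configuration, which is why the NICs, not the VMs or subnets, are the members:

$asg = Get-AzApplicationSecurityGroup -ResourceGroupName 'rg-web-01' -Name 'asg-web'
$nic = Get-AzNetworkInterface -ResourceGroupName 'rg-web-01' -Name 'nic01'
$nic.IpConfigurations[0].ApplicationSecurityGroups = @($asg)   # repeat for each web server NIC
$nic | Set-AzNetworkInterface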
Reference Link: https://azure.microsoft.com/en-in/blog/applicationsecuritygroups/
Option C is the correct answer. All other options are incorrect.
Resources
Group VMs into an application security group
Domain
Implement and manage virtual networking
Question 25
You deploy two virtual machines in an Azure virtual network subnet.
As shown above, two Network Security Groups (nsg01 and nsg02) are
associated with the Network Interface Cards (nic01 and nic02) attached to
the VMs.
In addition to the default security rules, the nsg01 also has a rule that
denies inbound traffic through the ICMP protocol from any source.
Similarly, in addition to the default rules, the nsg02 also has a rule that
denies outbound traffic through the ICMP protocol to any source.
Based on the given information, answer the below two statements. Select
Yes if the statement is correct. Else select No.
Note: Assume you already enabled ICMP through the Windows firewall on
both VMs.
Yes, No
Yes, Yes
Correct answer
No, Yes
No, No
Overall explanation
Short Answer for Revision:
Statement 2 -> The nsg01 does not deny outbound connections and
nsg02 does not deny inbound connections. So, vm01 can ping vm02 (the
response is automatically allowed).
Statement 1 -> The nsg02 denies outbound connections. So, vm02 cannot
ping vm01.
Detailed Answer:
If ICMP is already enabled for both VMs through the Windows firewall, vm01 and vm02 can ping each other because the default security rules AllowVnetInBound and AllowVnetOutBound allow both inbound and outbound traffic within the VNet.
Well, unless you override those rules with higher-priority rules. The Ping tool uses the ICMP protocol to send an echo request to the target VM. And in this scenario, ICMP traffic is denied outbound from vm02 and inbound to vm01.
Recall from practice test 1 (Related lecture video title: Configure NSG
to allow external traffic) that NSG rules are stateful. This means when a
rule allows outbound traffic from the NIC, you do not require an explicit
inbound rule to allow the response back to the NIC. Similarly, when a rule
allows inbound traffic to the NIC, you do not require an explicit outbound
rule to allow the response from the NIC.
Statement 2:
In the given scenario, two different NSGs are associated with the
individual NICs of both the VMs. When you ping vm02 from vm01:
a. nsg01 denies only the inbound traffic through the ICMP protocol. So, the
outbound request from vm01 will go through.
b. nsg02 denies only the outbound traffic through the ICMP protocol. So, it
doesn’t prevent vm01’s message from reaching vm02.
c. When you ping vm02 from vm01, the message reaches vm02
successfully.
d. The stateful nature of the NSG rules ensures that vm01 receives the
response, even though its NSG blocks the inbound traffic.
So, you can ping vm02 from vm01.
Statement 1:
Since nsg02 denies the outbound traffic through the ICMP protocol, the
outbound request from vm02 will not go through. Consequently, the ping
is unsuccessful.
So, you cannot ping vm01 from vm02.
Resources
Effects of associating NSG with a NIC
Domain
Implement and manage virtual networking
Question 26
In your Microsoft Entra ID tenant, you create three users. The below table
summarizes their access roles in Microsoft Entra ID and Azure
subscription.
Further, User One logs into the Microsoft Entra ID tenant and toggles the
setting to Yes, as indicated below.
A user (User Four) who is no longer in the project should have his access
to the subscription revoked. Which users can remove their access?
Detailed Explanation:
Since User One (a global admin in Microsoft Entra ID) has elevated his
access to manage all Azure subscriptions in the tenant, he can view all the
management groups and the subscriptions within the tenant.
1. User One has the User Access Administrator role on all subscriptions and management groups in the tenant. With this role, he can manage users' access to Azure resources.
2. Per the above point, User One can remove subscription access for User Four.
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#user-access-administrator
https://docs.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin
User Two has only contributor access to the subscription. This role lets you create/modify/delete resources in Azure, but it does not have permissions to manage user access to Azure resources. So, he cannot remove User Four.
Note that all the actions like adding/removing roles are disabled for User Two.
Reference Link: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#contributor
Only User One and User Three can revoke subscription access for User Four.
Resources
Remove subscription access to a user
Domain
Manage Azure identities and governance
Question 27
You would like to host two different static websites (foo.com and boo.com)
on a standalone Virtual Machine. So obviously, the two websites require
different public IP addresses for communication. Based on this scenario,
answer the below two questions:
Correct answer
1, 1
1, 2
2, 1
2, 2
Overall explanation
Short Answer for Revision:
You can create many IP configurations for a NIC. Each IP configuration can
be associated with a different public IP address. So, one NIC is sufficient.
Question 1 -> 1.
Detailed Answer:
Question 1:
You can attach either one or multiple Network Interface Cards to a VM. For
each NIC, you can create multiple IP configurations. Each IP configuration
will have a private IP and an optional public IP.
To host two different websites on a standalone VM, you can add two IP configurations, each holding a public IP address, to a single Network Interface Card.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal
Reference
Link: https://github.com/toddkitta/azure-content/blob/master/articles/virtual-network/virtual-networks-multiple-nics.md
Question 2:
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal
Question 28
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.
You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.
Solution: You peer vnet01 with vnet03.
Note: All the remaining three VNets have the required number of usable
IP addresses.
Yes
Correct answer
No
Overall explanation
Short Answer for Revision:
For VNet peering, the address spaces of the two VNets should not overlap. But the address spaces of vnet01 and vnet03 overlap (check the detailed explanation).
Detailed Answer:
The address spaces of the two virtual networks should not overlap when
you enable peering. From the given address spaces for the two networks
(vnet01 and vnet03), let’s evaluate the address ranges to verify if we can
enable peering between them.
<<You can skip this section if you are well aware of finding address
ranges from a CIDR notation>>
To begin with, we plot the bits as below for the first and the last IP address
for the address space 10.0.0.0/12 (vnet01). /12 indicates that the first 12
bits are for the network, and the remaining bits are for the host.
Now, the network bits (the first 12) stay untouched when calculating the first and the last IP addresses. For the first IP address, set all the host bits to 0 (which they already are). For the last IP, set all the host bits to 1. Converting from binary to decimal then gives you the address range of vnet01, as sketched below.
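Here is a quick sketch of that bit layout for 10.0.0.0/12 (the bar marks the network/host boundary):

            network (12 bits) | host (20 bits)
First IP:   00001010 0000     | 0000 00000000 00000000  ->  10.0.0.0
Last IP:    00001010 0000     | 1111 11111111 11111111  ->  10.15.255.255

So, vnet01 spans 10.0.0.0 to 10.15.255.255.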
Clearly, the IP addresses overlap: vnet01 includes every IP with 0-15 in the second octet, and vnet03's address space covers a subset of these IPs.
So, as expected, when you try to peer vnet03 with vnet01, you will get
this error.
The given solution does not meet the stated goal. Option No is the correct
answer.
Resources
Peer Azure Virtual Networks - 1
Domain
Implement and manage virtual networking
Question 29
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.
You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.
Note: All the remaining three VNets have the required number of usable
IP addresses.
Correct answer
Yes
No
Overall explanation
Short Answer for Revision:
Azure supports global VNet peering, so you can peer VNets in different
regions as long as their address spaces do not overlap.
Detailed Answer:
Similar to the previous question, first, let’s check if the address spaces of
the two networks overlap.
We already know the address range for vnet01. Doing a similar analysis for vnet02, which has the address space 10.16.32.0/22, gives the address range between 10.16.32.0 and 10.16.35.255.
Since vnet01 only includes ranges between 0-15 (the second octet), the
address spaces of vnet02 and vnet01 don’t overlap.
So, although vnet02 is in a different Azure region than vnet01, you can
peer the two virtual networks as Azure supports global virtual network
peering.
Since peering with vnet02 is possible, option yes is the correct answer.
Resources
Peer Azure Virtual Networks - 2
Domain
Implement and manage virtual networking
Question 30
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You have four Virtual Networks across two Azure subscriptions, Dev and
Test, and two locations, East US and West US. You deployed your project’s
existing web apps in vnet01.
You need additional network address spaces to scale your apps. Your
manager suggested peering vnet01 with any of the other available
networks.
Note: All the remaining three VNets have the required number of usable
IP addresses.
Correct answer
Yes
No
Overall explanation
Short Answer for Revision:
The address space of vnet04 does not overlap with vnet01's, and VNet peering works across Azure subscriptions (and even across Microsoft Entra ID tenants).
Detailed Answer:
First, let’s check if the address spaces of the two networks overlap.
We already know the address range for vnet01. Doing a similar analysis
for vnet04, which has the address space 10.16.0.0/12, will give the
address range between 10.16.0.0 and 10.31.255.255.
Since vnet04 includes ranges between 16-31 in the second octet, which is
different from that of vnet01, the address spaces of vnet04 and vnet01
don’t overlap.
So, although vnet04 and vnet01 are in different Azure subscriptions, you
can successfully peer the two virtual networks.
In fact, you can even peer virtual networks in different Microsoft Entra ID
tenants.
Since peering with vnet04 is possible, option yes is the correct answer.
Resources
Peer Azure Virtual Networks - 3
Domain
Implement and manage virtual networking
Question 31
You deploy a Basic SKU Azure Bastion to let your users connect to
Windows virtual machines using their browsers and the Azure portal. Your
colleague associates a Network Security Group to the subnet where the
Bastion service is deployed.
Which of the following ports do you need to open to ensure the subnet’s
egress traffic can reach the target VMs?
22
443
Correct answer
3389
80
Overall explanation
Short Answer for Revision:
Port 443 enables the Bastion service to communicate with Azure services.
Port 3389 (Windows) and 22 (Linux) allow the Bastion service to reach the
target VMs.
Detailed Answer:
First, you don’t have to associate any Network Security Group to the
Bastion subnet. Even after deploying only a Bastion in the virtual network,
you can use it to connect to the VMs.
If you need to associate an NSG with the Bastion subnet, you must ensure the NSG has all the required inbound and outbound security rules.
a. Port 80 allows the Bastion service to communicate with the internet for
establishing sessions and certificate validation. So, option D is incorrect.
b. Port 443 allows the Bastion service to communicate with other Azure
services for storing logs. So, option B is incorrect.
c. And ports 3389 and 22 allow the Bastion service to reach the target VMs. Port 22 is for SSH to Linux VMs, and port 3389 is for RDP connections to Windows VMs. Since we are using a Basic SKU Bastion, there is no possibility of using custom ports; the Bastion service must use port 3389.
Reference
Link: https://learn.microsoft.com/en-us/azure/bastion/configuration-
settings#ports
Further, it is good to know that when you use the Azure portal to connect
with Bastion, the RDP/SSH session is on port 443.
GitHub Repo Link: Open ports with NSG for Azure Bastion
Resources
Open ports with NSG for Azure Bastion
Domain
Implement and manage virtual networking
Question 32
You have an App Service app currently running with one instance. The app
runs on a Standard App Service plan defined with the below autoscale
settings:
Shown above (on the right) is the scale rule for scale-out.
After creating the autoscale rule, assume the Average CPU utilization of
the App Service Plan hits 95% for the first 30 minutes. How many
instances will be running at the end of 20 minutes?
Correct answer
2
3
4
5
Overall explanation
Short Answer for Revision:
This setup begins with one VM instance. After the CPU shoots to 100%,
two VM instances will be running. But since the Enable metric divide by
instance count is checked, the calculated CPU utilization is 100% / 2 ->
50%. The scale-out rule is no longer triggered, and we have 2 instances at
the end of 20 minutes.
Detailed Answer:
To simulate a CPU utilization of 95%, I write a For loop that runs infinitely
in a PowerShell script file. Let’s upload the file as a web job so the script
runs as a background task in the Azure App Service. Since we need to
mimic the Average CPU utilization for the App Service Plan (irrespective of
the number of VM instances actually running in the plan), I set the scale to
Multi-instance, so the web job runs across all the VM instances.
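A minimal sketch of such a script (nothing app-specific is assumed; any tight infinite loop works):

# busy-loop.ps1 - keeps a CPU core busy to drive up the average CPU percentage
for (;;) {
    $i = $i + 1
}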
Although the default number of VM instances in the autoscale rule is 2, it is given that the App Service app is currently running with one VM instance. So even after the autoscale rule is created, we will have 1 running VM, as the default instance count kicks in only if there is a problem reading the metric value.
Reference Link: https://stackoverflow.com/questions/75778407/azure-
app-plan-autoscaling-confused-by-default-settings
Immediately after the web job is uploaded, the CPU utilization hits 100%.
But for the first 10 minutes, nothing happens as the autoscale engine
requires the scale out condition to be true for a duration of at least 10
minutes to trigger a scale action. After 10 minutes, the engine calculates
the Average Percentage CPU, which now is peaking at 100%, and creates
a new VM instance.
Since Enable metric divide by instance count is checked, the engine now evaluates 100% / 2 = 50% per instance. With 50% CPU utilization, the autoscale engine doesn't create another VM. So, only two VM instances will be running after 20 minutes. Option A is the correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/autoscale/
autoscale-troubleshoot#example-2-advanced-autoscaling-for-a-virtual-
machine-scale-set
Resources
Autoscaling an App Service App
Domain
Deploy and manage Azure compute resources
Question 33
While creating an Azure Container Instance in the Azure portal, you
specify a container Restart policy.
Assume that the container you deploy to the Container Instance exits with
an exit code of 1. Which of the policies ensures that the containers run at
most once?
Always
Correct answer
Never
OnFailure
Overall explanation
Short Answer for Revision:
The ‘always’ restart policy will always restart the container, irrespective of
the exit code. The container will run more than once.
The ‘OnFailure’ restart policy will restart the container only if it exits with
a non-zero exit code. Since the container exits with an exit code of 1
every time, the container will also run more than once.
Detailed Answer:
I build an image from this Dockerfile and push the image to the Azure
Container Registry. As these steps are only the preparation steps for this
question, I am not including those details here. If you are interested,
check the related lecture video to know more details about the process.
Using the image in the registry, I create three container instances, each with a different restart policy setting: Always, Never, and OnFailure.
Even before proceeding further, notice the status of the container with the
'never' restart policy is Failed. That’s because the 'never' restart policy
never tries to restart the container after it terminates with a nonzero exit
code, which is generally an indication that the process executed in the
container has failed.
With a ‘never’ restart policy, the Azure Container Instance never attempts
to restart the container, irrespective of the exit code. So, this container
will have a 0 restart count and the State will be terminated after the
container runs once.
So, containers deployed with a 'never' restart policy will run at most once.
Option B is the correct choice.
A container deployed with an ‘always’ restart policy will always restart the
container, irrespective of the exit code. So, this container will have
increasing restart counts with time.
The container with the 'always' restart policy runs many times. So, option A is incorrect.
Reference Link: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli
Resources
Restart policy for Azure Container Instances
Domain
Deploy and manage Azure compute resources
Question 34
In a Microsoft Entra ID tenant, one of the users ( User Four) in your
organization needs to perform billing tasks like purchasing Microsoft 365
products, updating payment information, etc.
From the user accounts page in Microsoft Entra ID, where would you
assign the billing administrator role to the user?
In the Azure role assignments blade, you can only view the user's role assignments to resources in the Azure subscription. We cannot assign Microsoft Entra ID roles like Billing Administrator here.
Option A is an incorrect choice.
Finally, as the name indicates, in the Applications blade, you can view the applications assigned to the user. Option C is incorrect.
Resources
Assign billing administrator role to the user
Domain
Manage Azure identities and governance
Question 35
You deploy two virtual machines in an Azure virtual network subnet.
As shown above, the subnet is also associated with a Network Security
Group. In addition to default rules, the NSG also has a rule that denies
outbound traffic through the ICMP protocol from any source.
Based on the given information, answer the below two statements. Select
Yes if the statement is correct. Else select No.
Note: Assume you already enabled ICMP through the Windows firewall on
both VMs.
Yes, No
Yes, Yes
No, Yes
Correct answer
No, No
Overall explanation
Short Answer for Revision:
Both VMs are in the same subnet, which has an associated NSG. So, both
VMs inherit the same NSG rules.
Outbound ICMP traffic is blocked. So, neither VM can ping the other.
Detailed Answer:
If ICMP is already enabled on both VMs through the Windows firewall, vm01 and vm02 can ordinarily ping each other because the default security rules AllowVnetInBound and AllowVnetOutBound allow both inbound and outbound traffic within the VNet.
That is, unless you override those defaults with a higher-priority rule. The Ping tool uses the ICMP protocol to send an echo request to the target VM, but this ICMP traffic is denied outbound from the subnet by the custom rule.
Recall from practice test 1 (Related lecture video title: Configure NSG
to allow external traffic) that NSG rules are stateful. This means when a
rule allows outbound traffic from the subnet, you do not require an explicit
inbound rule to allow the response back to the subnet. Similarly, when a
rule allows inbound traffic to the subnet, you do not require an explicit
outbound rule to allow the response from the subnet.
In the given scenario, the NSG is associated with the subnet where both
the VMs reside. So, the NICs of both the VMs will inherit all the rules
defined in the NSG. You can verify this by navigating to the Effective
security rules of each VM’s NIC.
So, if you ping vm02 from vm01, the outbound security rule for vm01
ensures that the message does not go through.
Similarly, if you ping vm01 from vm02, the outbound security rule for
vm02 ensures the message does not go through.
The correct option for both statements is No. Option D is the correct
answer.
Resources
Effects of associating NSG with a subnet
Domain
Implement and manage virtual networking
Question 36
You run a Linux container app in Azure Container Instance. Container apps
are stateless, so if the app restarts, all the data is lost. You have to store
the data generated by the app in an external (permanent) storage, so
data is available across container restarts/crashes.
Azure Tables
Azure Blobs
Azure Cosmos DB
Correct answer
Azure Files
Overall explanation
Short Answer for Revision:
Of the given services, you can mount only an Azure file share as a volume
into a container directory.
Detailed Answer:
You can mount an Azure file share, created in Azure Files, as a volume
into a container directory in Azure Container Instances.
For this, first, create a Storage account and a file share where you want to
persist the app data in Azure Files.
Next, while deploying the container instance using the az container create Azure CLI command, ensure you specify:
a. The storage account name,
b. The storage account key,
c. The file share name,
d. And the volume mount point (the directory where you want to mount the file share), as sketched below.
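A rough sketch of that command (the resource group, app name, and account details are placeholders; the image is the sample Microsoft's docs use for this scenario):

az container create --resource-group rg-dev-01 --name aci-app `
  --image mcr.microsoft.com/azuredocs/aci-hellofiles `
  --azure-file-volume-account-name <storage-account-name> `
  --azure-file-volume-account-key <storage-account-key> `
  --azure-file-volume-share-name <file-share-name> `
  --azure-file-volume-mount-path /aci/logs/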
Once the app is created successfully, you can test if the app persists the
data to the file share by entering sample data.
In the file share, a new text file will be created. You can download the file
to verify its contents.
You can also view the created text files by connecting to the container app and running a couple of commands, sketched below.
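A hedged sketch of those commands, assuming the placeholder names from the create command above and a /bin/sh shell in the image:

az container exec --resource-group rg-dev-01 --name aci-app --exec-command "/bin/sh"
# inside the container's shell, list the mounted directory:
ls /aci/logs/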
Reference Link: https://learn.microsoft.com/en-us/azure/container-
instances/container-instances-volume-azure-files
Option D is the correct answer. None of the other Azure storage account
services provide persistent storage for Azure Container Instances.
Resources
Connect persistent storage to a container
Domain
Deploy and manage Azure compute resources
Question 37
You deploy an Azure VM and its related resources in the Dev subscription.
The VM’s Network Interface Card (NIC) is associated with a static,
Standard SKU Public IP address resource.
Select the steps you perform to move the virtual machine and all its
related resources to a different subscription in the same Microsoft Entra ID
tenant.
Correct answer
Disassociate the Public IP from the VM
Overall explanation
If the VM’s NIC is associated with a Standard SKU Public IP address
resource, we cannot move the VM and the related resources across Azure
subscriptions. If you try to do so, you will get this error.
But we can move the VM and the related resources across Azure
subscriptions if the VM’s NIC is associated with a Basic SKU Public IP
resource.
Reference
Link: https://learn.microsoft.com/en-us/answers/questions/410380/public-
ip-tenant-move.html
So, it's tempting to conclude that we can first downgrade the Public IP SKU from Standard to Basic, perform the move operation, and then upgrade the Public IP SKU from Basic to Standard.
But although we can upgrade a Public IP SKU from Basic to Standard, the reverse operation is not possible.
Reference
Link: https://learn.microsoft.com/en-us/answers/questions/298941/change
-std-sku-public-ip-to-basic-sku.html
Moving all resources except the Public IP resource will not work, as all
dependent resources have to be moved together to a different
subscription.
Since the Public IP resource is still associated with the VM’s NIC, you will
get the above error.
The best solution is to first disassociate the Public IP resource from the VM's NIC, move the resources to a different subscription, and associate the Public IP resource with the VM's NIC in the target subscription (check the related lecture video).
Option D talks about deleting the Public IP resource and moving the rest of the resources to the target subscription. Since the IP address is still associated with the VM's NIC, it cannot be deleted.
Reference
Link: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-
kubernetes/cannot-delete-ip-subnet-nsg
Even if Azure allows deletion, the IP address is static only during the
lifecycle of the Public IP resource. So, if you recreate a new static Public IP
resource in the target subscription, it will have a different IP address.
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/
virtual-network-public-ip-address#create-a-public-ip-address
Option D is incorrect.
Resources
Move Azure VM and related resources to a different subscription
Domain
Deploy and manage Azure compute resources
Question 38
You have an ARM template and a parameter file that defines a VM and its
related resources. You want to reuse the template to deploy multiple VMs
after updating the references to the VM’s password in plaintext, so that
the password is stored and retrieved from Azure Key Vault.
What are the two steps you would do to achieve this objective?
Correct selection
Create a secret in Azure Key Vault
Create Keys in Azure Key Vault
Create an access policy to grant access to the user deploying the
template
Correct selection
Assign access to Azure Resource Manager for template
deployment
Overall explanation
Short Answer for Revision:
In Azure Key Vault, VM passwords are stored as secrets, not keys. Option A is one of the correct answers.
To retrieve the Key Vault secret from the template, we need to grant
permission to the ARM template deployment. Option D is the other correct
answer.
Granting access to the user (via access policy) will enable the user to
access the secret, for example, using the Azure portal.
Detailed Answer:
To store the VM’s password in Key Vault, create a Key Vault secret and
enter the password as the Secret value .
Key Vault secrets are better suited for providing secure storage of
passwords, connection strings, etc., Option A is one of the correct
answers.
Keys, as the name indicates, are used for storing software-protected and HSM-protected keys. Option B is incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/key-vault/general/about-
keys-secrets-certificates#object-types
Once we store the VM’s password as a secret in the Key Vault, we can
update the ARM template to point the VM’s password reference to the Key
Vault secret.
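For illustration, a parameter file can reference the secret like this (the subscription ID, resource group, vault name, parameter name, and secret name below are placeholders):

"adminPassword": {
  "reference": {
    "keyVault": {
      "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
    },
    "secretName": "vmAdminPassword"
  }
}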
But if you deploy this template, the deployment process will run into an
error as the template does not have access to the secret defined in the
Key Vault.
Let’s first try to fix this issue by creating a Key Vault access policy that
grants access to the user who will deploy the ARM template.
If you try deploying the template, you will get the same error again: the access policy grants the user permission to retrieve the secret (in the Azure portal, for example), but it doesn't enable the template deployment process to access the secret. Option C is also incorrect.
To enable the template to retrieve the secret defined in the key vault, we
need to enable the Key Vault access policy to allow resource access to
Azure Resource Manager for template deployment.
If you deploy the template now, the deployment process should be
complete without any errors. Once the VM is deployed, the user can use
the Secret value defined in the Key Vault to log into the VM.
GitHub Repo Link: Replace password with Key Vault secrets in an ARM
Template
Resources
Replace password with Key Vault secrets in an ARM Template
Domain
Deploy and manage Azure compute resources
Question 39
You have a Windows Azure Virtual Machine in stopped status.
Which of the following actions can you NOT do when the VM is in a
deallocated status? Select two correct options.
Correct selection
Add a DSC configuration extension
Correct selection
Configure Site Recovery to move VM from availability zone 1 to 2
in the same region
Resize the VM
Attach another NIC to the VM
Overall explanation
<<This is a NOT question>>
In fact, you can’t add a NIC to an Azure VM in running status. So, you have
to stop the VM to attach a NIC.
Option D is incorrect.
To add any extension to an Azure VM, you need the Azure VM agent. This VM agent is a lightweight process running in your virtual machine. For example, a Windows VM will have the WindowsAzureGuestAgent process running when you deploy any image from the marketplace.
When you add an extension to a VM, it's the agent that executes the instructions and configures the extension. So, when the VM is stopped, no VM agent is running, and we cannot add an extension to a VM in a deallocated status. Option A is one of the correct answers.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/extensions/agent-windows
You can use Azure Site Recovery to move a VM between availability zones
in a region.
Azure Site Recovery, a disaster recovery solution, replicates storage from
one zone to the other. So, it creates a disk in the target availability zone,
from where you can spin up a new VM.
Since all we care about in a VM is the data on the disk, moving the data to
a different availability zone is equivalent to moving a VM to a different
zone.
But you can configure site recovery only when the VM is running. When
the VM is stopped, you cannot configure site recovery as it requires
installing an extension to the VM (check the related lecture video).
Since you cannot configure Site Recovery to move the VM to a different
availability zone when the VM is stopped, option B is the other correct
answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-
azure-how-to-enable-zone-to-zone-disaster-recovery#using-availability-
zones-for-disaster-recovery
Resources
Actions that cannot be done when a VM is deallocated
Domain
Deploy and manage Azure compute resources
Question 40
The company Airbus has to add nearly 1000 users, external to Airbus' Microsoft Entra domain, from the 78 supplier organizations that supply airplane manufacturing parts to it.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/new-
azureaduser
Option D is incorrect.
The bulk create operation, too, adds internal users in bulk when you upload the data in a CSV file.
If you add external users to the CSV file and upload it, you get a similar error.
The bulk invite operation is more suitable for inviting external users in bulk.
Reference
Link: https://learn.microsoft.com/en-us/entra/identity/users/users-bulk-
add#to-create-users-in-bulk
https://learn.microsoft.com/en-us/entra/external-id/tutorial-bulk-invite
Option A is incorrect.
The New-AzureADMSInvitation command sends an email invitation to the external user to join your directory.
The user can click the Accept invitation link to join your directory.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/new-
azureadmsinvitation
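A minimal sketch of the cmdlet (the email address, display name, and redirect URL are placeholders):

New-AzureADMSInvitation -InvitedUserEmailAddress "user@supplier.com" `
  -InvitedUserDisplayName "Supplier User" `
  -InviteRedirectUrl "https://myapps.microsoft.com" `
  -SendInvitationMessage $true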
In PowerShell, both New and Add are approved verbs. But cmdlets with the New verb create a new resource: for example, New-AzureADMSInvitation creates a new guest user account.
In contrast, cmdlets with the Add verb add something to an existing resource. For example, the Add-AzureADGroupMember cmdlet adds a member to an existing group.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/azuread/add-
azureadgroupmember
https://learn.microsoft.com/en-us/powershell/scripting/developer/cmdlet/
approved-verbs-for-windows-powershell-commands?view=powershell-
7.2#new-vs-add
Resources
Create external user accounts
Domain
Manage Azure identities and governance
Question 41
You have a storage account in your Azure subscription. You export the
storage account’s ARM template, including the parameters, and add it to
the library.
Your teammate tries to deploy a new storage account from the exported
template in the library. What storage account properties can he
configure/select?
So, apart from region, he can configure the subscription, resource group,
and storage account name. Option B is the correct answer.
Resources
Deploy from an exported template in the library
Domain
Deploy and manage Azure compute resources
Question 42
You need to deploy an Azure Virtual Network with two subnets using the
ARM template. Which of the following ARM templates would you use?
a.
1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. }
11. },
12. "resources": [
13. {
14. "type": "subnets",
15. "name": "[concat('subnet', copyIndex())]",
16. "dependsOn": [
17. "vnet01"
18. ],
19. "properties": {
20. "addressPrefix": "[concat('10.0.', copyIndex(),
'.0/24')]"
21. },
22. "copy": {
23. "name": "subnetCopy",
24. "count": "2"
25. }
26. }
27. ]
28. }
29. ]
b.
1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "copy": [
12. {
13. "name": "subnetloop",
14. "count": 2,
15. "input": {
16. "name": "[concat('subnet',
copyIndex('subnetloop'))]",
17. "properties": {
18. "addressPrefix": "[concat('10.0.',
copyIndex('subnetloop'), '.0/24')]"
19. }
20. }
21. }
22. ]
23. }
24. }
25. ]
c.
1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "copy": [
12. {
13. "name": "subnets",
14. "count": 2,
15. "input": {
16. "name": "[concat('subnet', copyIndex('subnets'))]",
17. "properties": {
18. "addressPrefix": "[concat('10.0.',
copyIndex('subnets'), '.0/24')]"
19. }
20. }
21. }
22. ]
23. }
24. }
25. ]
d.
1. "resources": [
2. {
3. "type": "Microsoft.Network/virtualNetworks",
4. "name": "vnet01",
5. "properties": {
6. "addressSpace": {
7. "addressPrefixes": [
8. "10.0.0.0/16"
9. ]
10. },
11. "subnets": {
12. "name": "[concat('subnet', copyIndex('subnets'))]",
13. "properties": {
14. "addressPrefix": "[concat('10.0.',
copyIndex('subnets'), '.0/24')]"
15. }
16. },
17. "copy": [
18. {
19. "name": "subnets",
20. "count": "2"
21. }
22. ]
23. }
24. }
25. ]
Only a
Only b
Correct answer
Only c
Only d
Overall explanation
You can create multiple instances of a resource by adding a copy loop. You can add a copy loop to any of four sections: resources, properties, variables, and outputs.
Option A applies the copy loop to a nested child resource. ARM templates don't support copy loops on nested child resources; to replicate a child resource, you either promote it to a top-level resource or use a property copy loop, as option C does. So, option A is incorrect.
In a property copy loop, the name indicates the name of the VNet property to populate (i.e., subnets). So, option B is incorrect, as there is no VNet property named subnetloop. There is only a VNet property called subnets.
Option D defines the copy loop separately from the subnets property that needs to be replicated for the VNet resource. If you deploy the template code in option D, you will get this error.
Resources
Using copy to create multiple resources in the ARM template
Domain
Deploy and manage Azure compute resources
Question 43
Given below are two statements based on associating a service endpoint
policy to a virtual network’s subnet. Select Yes if the statement is correct.
Else select No.
Yes, No
Correct answer
Yes, Yes
No, No
No, Yes
Overall explanation
Short Answer for Revision:
A service endpoint policy can be associated only with a subnet where the service endpoint is enabled, and only when the subnet is in the same region as the policy. Both statements -> Yes.
Detailed Explanation:
The service endpoints are enabled per subnet per service, which means
all resources in a subnet can access all instances of a service in your
subscription, for example, the Azure Storage account. A service endpoint
policy enables you to filter traffic to specific Azure service instances over
service endpoints.
Note that service endpoint policies are supported only for Azure Storage
accounts.
Statement 1:
The service endpoint policy is in the Central US region, and only vnet012 is in the same region. So, first, let's try associating the policy with a subnet in vnet012. As you can see, there is no option to associate the policy with this subnet because the service endpoint is not enabled yet.
But when you enable a service endpoint for the service Microsoft.Storage, you see the option to apply the policy.
If you think about it, it makes sense: if no service endpoints are enabled on a subnet, what's the need for a service endpoint policy? Statement 1 -> Yes.
Statement 2:
We have just seen that we can apply the service endpoint policy to a
virtual network subnet that is in the same region as the policy. Now, let’s
try applying the policy to a subnet that is in a different region. The service
endpoint policies dropdown doesn’t populate any policy.
So, we can conclude that the service endpoint policies can only be applied
to VNet subnets in the same region. Statement 2 -> Yes.
Resources
Apply Virtual network service endpoint policy to a subnet
Domain
Implement and manage virtual networking
Question 44
You plan to move a legacy software that uses an older version of SQL
Server to Azure. Since the app doesn’t support application-level
replication, you consider deploying your stack in a single instance of a
virtual machine.
In case of a data center failure, how would you ensure the application’s
availability?
Option D is incorrect.
Both availability sets and availability zones provide resiliency to your app
when you create multiple VM instances.
So, just assigning the VM to a single availability zone doesn’t offer any
protection from data center outages. If the data center where you host the
VM is unavailable, or if the entire zone is unavailable, your app fails.
The ZRS option for managed disks ensures that the data is synchronously replicated across three availability zones in a region.
So, when the data center/availability zone that hosts the VM goes down,
you can use the replicated data in another availability zone to spin up a
new VM.
Resources
Availability of a single instance VM
Domain
Deploy and manage Azure compute resources
Question 45
You have an Azure file share that contains a backup file. You generate a
SAS URL for the individual backup file.
Given below are two statements based on the above information. Select
Yes if the statement is correct. Else select No.
Yes, No
No, No
Yes, Yes
Correct answer
No, Yes
Overall explanation
Statement 1:
Using Azure Storage Explorer, you can use only the file share’s SAS URL to
connect to the file share. If you use the individual backup file’s SAS URL,
you will get this error.
Statement 2:
Similarly, you cannot use the SAS URL for a file share or a container in a browser, as we are not accessing a particular file.
But you can use the individual backup file's SAS URL in a browser to directly download the file to your desktop (check the related lecture video).
Resources
Using SAS URLs with Storage Explorer and web browser
Domain
Implement and manage storage
Question 46
When customers place new orders from the company’s website, the order
details are enqueued as messages in an Azure Storage account queue.
You process the order messages using Azure Virtual Machines Scale Sets
by scaling out or scaling in VMs based on the number of messages in the
queue.
Here is the scaling policy you define for the Virtual Machine Scale Set.
The scale out policy is given below:
And here are the details of the scale in policy:
Answer the below questions based on the given information. Assume the
events described in questions 1 and 2 happen sequentially over time.
3, 2
Correct answer
4, 3
4, 2
5, 1
Overall explanation
The question mentions that the default instance count is two. So, to begin
with, for three messages in the storage account queue, we have two VMs
in the scale set.
Question 1:
Since we enabled metric divide by instance count for the scale out policy, the engine divides the total queue length by the number of instances: each VM can handle up to four queue messages before the policy triggers a scale-out. When the 9th message is added to the queue, the per-instance count crosses the threshold (9 / 2 = 4.5 > 4), the scale out policy kicks off, and a new VM is added to the scale set.
Question 2:
But we have not enabled metric divide by instance count for the scale in policy. So, the scale in policy will trigger only when the total queue message count is less than four, irrespective of the number of running VM instances.
So, when the count of queue messages reduces to three, the scale in policy triggers and deletes a VM in the scale set.
Reference
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/autoscale/
autoscale-best-practices#considerations-for-scaling-threshold-values-for-
special-metrics
https://github.com/hashicorp/terraform-provider-azurerm/issues/7696
Resources
Autoscale VMs based on queue messages
Domain
Deploy and manage Azure compute resources
Question 47
You have two Virtual Machine Scale Sets, each with different orchestration
modes, in the rg-dev-02 resource group.
Azure requires resources that share the same lifecycle to be placed in the same resource group. Although the scale set and the VMs may not share exactly the same lifecycle (a scale in policy can delete the VMs), Azure enforces that these related resources be part of the same resource group.
To test whether there is any bug in the portal, I tried creating a new VM in a different resource group from that of the scale set while adding the VM to the scale set using Azure PowerShell. Consistent with the portal, I was not able to create the VM.
But when I placed the VM in the same resource group as the scale set, I was able to create the VM and add it to the scale set (refer to the related lecture video).
Resources
Resource groups and scale sets for new VMs
Domain
Deploy and manage Azure compute resources
Question 48
You deploy an Azure virtual network with two subnets in a resource group
using the below ARM template.
What would happen if you remove subnet1 and subnet2 from the ARM
template and redeploy the template to the resource group in incremental
deployment mode?
Correct answer
Both subnets will be removed
The resource manager will throw an error
Both subnets will remain intact
The VNet and the subnets will be removed
Overall explanation
While creating a virtual network in the portal, Azure requires the VNet to
have at least one subnet.
But after we create the Virtual Network resource, the Azure Resource
Manager allows you to remove all the subnets via the portal or the ARM
template.
Since the virtual network alone can be a standalone resource without any
subnets, removing both subnets will not delete the VNet.
Option D is incorrect.
In incremental mode, the Resource Manager leaves unchanged any existing resources that are not defined in the template, so you might expect the subnets to survive the redeployment. But that's how the deployment modes work for standalone resources. Here, subnet1 and subnet2 are defined as properties of the VNet resource, not as separate standalone resources, and those rules do not apply to the properties of a resource.
If you update the properties of a resource, irrespective of the deployment
modes, they are updated in the target resource.
So, when you run the template in any deployment mode, both the subnets
will be removed. Option A is the correct answer.
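For reference, a minimal sketch of such a redeployment (the template file name is a placeholder):

New-AzResourceGroupDeployment -ResourceGroupName "rg-dev-01" `
  -TemplateFile ".\vnet-template.json" -Mode Incremental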
Resources
Effect of updating a resource's property in incremental mode
Domain
Deploy and manage Azure compute resources
Question 49
You are creating a Virtual Machine in the Azure portal. You have the
option to select the following VM features (marked in boxes).
Which feature requires that the VM compulsorily use Azure managed
disks?
Correct answer
Availability Zone
Availability set
VM image
OS disk type
Overall explanation
Azure Generation 2 virtual machines have been generally available since 2019. These Gen2 VMs provide advanced features and capabilities compared to Gen1 VMs.
Reference Link: https://learn.microsoft.com/en-us/azure/virtual-
machines/generation-2#features-and-capabilities
https://azure.microsoft.com/en-in/updates/azure-generation-2-virtual-
machines-vms-are-now-generally-available/
Although Gen2 VMs always require managed disks, you can use Gen1 VMs to bypass this restriction. So, the VM image doesn't compulsorily require managed disks, and option C is incorrect.
The OS disks for the VM, similar to Azure Storage accounts, support locally-redundant storage and zone-redundant storage, based on their type. But only the OS disks that support zone-redundancy require managed disks.
You can still use disks that support LRS to create VMs with unmanaged disks. So, the OS disk type, too, doesn't compulsorily require that VMs use managed disks. Option D is also incorrect.
Deploying the VM into an Availability Zone, on the other hand, always requires managed disks, which is why option A is the correct answer.
There are two types of availability sets you can create: managed availability sets and unmanaged availability sets.
Since the user can use an unmanaged availability set to create the VM, even availability sets don't strictly require the use of managed disks. Option B is incorrect.
Resources
Azure VM feature requiring managed disks
Domain
Deploy and manage Azure compute resources
Question 50
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
User One has the following two roles at the rg-dev-01 resource group
scope:
Goal: You need to provide access to a SQL backup file to User One.
Solution: You upload the backup file in a blob container in the strdev011
storage account.
Note: After you upload the backup file, your team has configured the
below setting in both storage accounts:
Can User One access the SQL backup file?
Correct answer
Yes
No
Overall explanation
Since the Allow storage account key access is disabled, User One cannot
use access keys to download the backup file.
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-owner
https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization
However, User One also holds the Storage Blob Data Owner role, which grants access to blob data through Microsoft Entra ID authentication. So, User One can access and download the SQL file using Microsoft Entra ID authentication. Option Yes is the correct answer.
Resources
Provide access to a file in Azure storage account - 1
Domain
Implement and manage storage
Question 51
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
User One has the following two roles at the rg-dev-01 resource group
scope:
Goal: You need to provide access to a SQL backup file to User One.
Solution: You upload the backup file in a file share in the strdev011
storage account.
Note: After you upload the backup file, your team has configured the
below setting in both storage accounts:
Can User One access the SQL backup file?
Yes
Correct answer
No
Overall explanation
When the user accesses data in a file share, the Azure portal first checks if
the user has access to the storage account keys. Although the user is
assigned the storage account contributor role, which provides access to
the storage account keys, he will not be able to access the data using
those keys as the Allow storage account key access is disabled for the
storage account.
Since access was not possible with keys, the Azure portal attempts to
access data using the user’s Microsoft Entra account. However, the
Storage Blob Data Owner role provides access to blob data in the storage
account, not file data. So, he still cannot access the files.
To have access to files in the file share using a Microsoft Entra account,
the user should be assigned a role specific to file data, for example, the
Storage File Data Privileged Contributor role (Check the related lecture
video).
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/authorize-
oauth-rest?tabs=portal#authorize-access-using-filerest-data-plane-api
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles/storage#storage-file-data-privileged-contributor
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-account-contributor
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-
roles#storage-blob-data-owner
https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization
Resources
Provide access to a file in Azure storage account - 2
Domain
Implement and manage storage
Question 52
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
User One has the following two roles at the rg-dev-01 resource group
scope:
Goal: You need to provide access to a SQL backup file to User One.
Solution: You upload the backup file in a file share in the strdev012
storage account and share a Shared Access Signature token to the
individual file.
Note: After you upload the backup file and generate the SAS token, your
team has configured the below setting in both storage accounts:
Can User One access the SQL backup file?
Yes
Correct answer
No
Overall explanation
Since the Allow storage account key access is disabled on both storage accounts, the generated SAS URLs will not work, as they are signed with the access keys.
So, when User One uses the SAS URL, he will get this error.
Option No is the correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/shared-
key-authorization-prevent?tabs=portal#understand-how-disallowing-
shared-key-affects-sas-tokens
https://learn.microsoft.com/en-us/azure/storage/common/shared-key-
authorization-prevent?tabs=portal#disable-shared-key-authorization
Resources
Provide access to a file in Azure storage account - 3
Domain
Implement and manage storage
Question 53
You configure the below scale out policy for your Virtual Machine Scale Set
(VMSS).
This is the scale in policy for the VMSS.
Here are the other details of the scaling policy, like the default, minimum,
and maximum no. of instances in the Virtual Machine Scale Set.
Based on the above information, answer the below two questions, which
depict two scenarios that occur in a sequence:
3, 3
3, 0
Correct answer
2, 2
2, 3
Overall explanation
Refer to Practice Test 1 (Related lecture video title: Autoscaling in
VMSS), where I detailed how autoscaling works. The explanation to this
question assumes you have watched that video.
Question 1:
As the scale set CPU utilization peaks at nearly 100% and there are two
VMs in the scale set, the average percentage CPU utilization for both VMs
is 100%.
Since the time window (duration) is 10 minutes for the scale out policy,
the autoscaling policy waits for 10 minutes to collect sufficient
CPU percentage data from the scale set before performing any scale
action. So, even though the CPU percentage utilization is 100% after 8
minutes, the autoscale doesn't trigger any scale action.
The scale set will have the default number of VMs it was created with. So, question 1 -> 2.
Question 2:
Since the amount of time that the Autoscale engine will look back for
metrics is 10 minutes, the first policy trigger happens after 10 minutes,
creating a new VM.
Since there is a cool down period of 5 minutes after a scale action (per the
scale out policy), there will be no checks by the Autoscale engine for
another 5 minutes. As the scale set average percentage CPU utilization
peaks at 100% for the first 17 minutes, another scale out action occurs
after the cool down period, creating another VM instance.
Since the scale set cools down to 0% CPU utilization after 17 minutes, the
scale in policy kicks in and tries to remove 3 VMs from the scale set.
But since the required minimum number of VMs in the scale set is 2, the
scale action will not remove all 3 VMs. It will remove just 2 VMs to satisfy
the minimum VM condition.
Reference
Link: https://negatblog.wordpress.com/2018/07/06/autoscaling-scale-sets-
based-on-metrics/
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-
machine-scale-sets-autoscale-portal
Resources
Analyze running or removed VMs from a scale set
Domain
Deploy and manage Azure compute resources
Question 54
In the storage account below, your company has stored data for apps and
some company confidential information.
Your team realizes that they are unable to configure a lifecycle
management policy in this storage account for optimizing the storage
cost.
Which of the following actions would you perform so they can create and
manage lifecycle policies? Select an option that minimizes the effort.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-
management-overview
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-overview#types-of-storage-accounts
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/upgrade-to-
data-lake-storage-gen2-how-to?tabs=azure-portal
The v1 account does not support access tiers. So, while uploading files to
the container, you do not see an option to set access tiers.
You also cannot change the access tiers for the files that are already
uploaded.
Since the v1 accounts do not support access tiers, you also cannot
configure lifecycle management policies to transition blobs to the
appropriate access tier for this storage account.
The account upgrade doesn't take much time, and once it is complete, you can create lifecycle management policies.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-upgrade?tabs=azure-portal
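If you prefer PowerShell over the portal, a sketch of the upgrade (the resource group and account names are placeholders):

Set-AzStorageAccount -ResourceGroupName "rg-dev-01" -AccountName "strdevv1" `
  -UpgradeToStorageV2 -AccessTier Hot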
Resources
Update a storage account property based on use case
Domain
Implement and manage storage
Question 55
Your team maintains a virtual machine running custom software in your
on-premises environment. You use the Hyper-V Manager to export a
specialized disk from the VM to lift and shift the on-premises VM with its
data, apps, & user accounts to Azure. Which of the following PowerShell
commands would you use if you need to transfer the disk to the Azure
Storage account?
New-AzImage -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
New-AzDisk -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Add-AzDisk -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Correct answer
Add-AzVhd -ResourceGroupName "rg-dev-03" -Destination "<Storage account path>" -LocalFilePath "<Local path>"
Overall explanation
Before answering the question, let's understand its context. You export the specialized disk from the on-premises VM as a VHD file using Hyper-V Manager.
Once you have the VHD, upload it to the Azure Storage account using the Add-AzVhd PowerShell command. This command uploads the virtual hard disk from your machine to Azure as a page blob.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
add-azvhd?view=azps-9.3.0
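For example, a sketch of the upload (the destination URI and local path are placeholders):

Add-AzVhd -ResourceGroupName "rg-dev-03" `
  -Destination "https://<storage-account>.blob.core.windows.net/vhds/onprem.vhd" `
  -LocalFilePath "C:\Hyper-V\Exports\onprem.vhd"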
Once the VHD is uploaded to the storage account, you can use the New-
AzImage command to build an Azure VM image from the VHD.
So, the command New-AzImage creates an Azure VM image from the VHD
(stored as a page blob in the Azure Storage account), not to upload a VHD
from the on-premises environment to Azure. Option A is incorrect.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
new-azimage?view=azps-9.3.0#example-1
The New-AzDisk command, as the name indicates, creates a new
managed disk in Azure. After the disk is created, you can directly upload
the VHD from the on-premises environment to the managed disk using
the azcopy tool or any other PowerShell command. But New-AzDisk
doesn't upload the VHD to the storage account. Option B is also incorrect.
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
new-azdisk?view=azps-9.3.0
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/disks-
upload-vhd-to-managed-disk-powershell#upload-a-vhd-1
In PowerShell, commands with the New verb create a new resource: for
example, New-AzDisk creates a new managed disk resource.
On the other hand, commands with the Add verb add something to an
existing resource. For example, Add-AzVhd uploads (adds) a VHD file to a
blob storage account or a managed disk (as discussed earlier).
Reference
Link: https://learn.microsoft.com/en-us/powershell/module/az.compute/
add-azvhd?view=azps-9.3.0#example-1-add-a-vhd-file-to-a-blob
https://learn.microsoft.com/en-us/powershell/module/az.compute/add-
azvhd?view=azps-9.3.0#example-5-add-a-vhd-file-directly-to-a-managed-
disk
Reference
Link: https://learn.microsoft.com/en-us/powershell/scripting/developer/
cmdlet/approved-verbs-for-windows-powershell-commands?
view=powershell-7.3#similar-verbs-for-different-actions
Question 56
You plan to move your 3-tier application architecture comprising a web
tier, a business tier, and a data tier (using SQL Server) to Azure. To
increase availability, all three workloads are replicated thrice in Azure with
the help of IaaS Virtual Machines.
Which of the following is the best way to architect the solution using
availability sets to ensure that the application is always running in case of
a power failure affecting a server rack in the data center?
Create three availability sets, each with three fault domains. Each
set contains a web server VM, a VM for processing business logic,
and a SQL Server VM for the data tier.
Correct answer
Create three availability sets, each with three fault domains. Web
server VMs in a set, VMs for processing business logic in another
set, and SQL Server VMs for data tier in the final set.
Create one availability set with three fault domains. Place all 9
VMs (3 web server VMs, 3 VMs for processing business logic, and
3 SQL Server VMs for data tier) in the availability set.
Create nine availability sets with three fault domains. Place a VM
in each availability set.
Overall explanation
Azure data centers are vast, and it's safe to assume that there are many server racks. But each availability set is mapped to only three of those racks, or fault domains. So, all the VMs you place in an availability set are created across those three server racks or fault domains.
If you mix and match workloads in each availability set, you have to be careful to line up the different application components in each fault domain across availability sets. Since mistakes can happen, there is a chance that similar types of workloads end up in the same fault domain.
In case of a power failure to any server rack, one of the component tiers
will completely fail, bringing down the entire application. Option A is
incorrect.
Note: A load balancer will probe the instance health to redirect the traffic
between different application tiers.
Using solutions in options A and C, you can still achieve the desired
architecture, but there is an element of risk if the components are not
deployed in a certain way. Option B removes that element of risk.
Resources
Architect your solution with availability sets
Domain
Deploy and manage Azure compute resources
Question 57
The New-AzSubscriptionDeployment and the New-
AzResourceGroupDeployment PowerShell commands deploy resources to
the subscription and the resource group, respectively. Given below are
two statements based on these commands. Select Yes if the statement is
correct. Else select No.
No, Yes
Yes, No
Correct answer
No, No
Yes, Yes
Overall explanation
Recall from practice test 1 (Related lecture video title: Select resource
type and PowerShell command to deploy the ARM template) where we
create a deployment resource using the resource type
Microsoft.Resources/deployments in the ARM template to deploy
resources at a different scope from that defined by the PowerShell
command.
Statement 1:
2. Provide the location in the ARM template since this command also
requires a location to store the deployment data.
3. Provide the content for the nested template, which will be deployed at
the subscription scope.
https://learn.microsoft.com/en-us/powershell/module/az.resources/new-
azdeployment?view=azps-9.3.0
So, statement 1 -> No.
Statement 2:
https://learn.microsoft.com/en-us/powershell/module/az.resources/new-
azresourcegroupdeployment?view=azps-9.3.0
Question 58
The log files consume a huge amount of space, and it is not necessary to back them up. So, you need to configure the backup such that all files and folders except the LogFiles folder are backed up on an hourly basis.
Detailed Answer:
Question 1:
You can configure partial backups to exclude specific files and folders.
However, partial backups are supported only for custom backups. So,
Question 1 -> Custom.
Question 2:
To exclude the log files folder, navigate to your App Service app's companion app, called the Kudu app, at https://<app-name>.scm.azurewebsites.net/
From there, choose either the Command line or the PowerShell debug console.
But, before understanding how to exclude a folder in the backup process,
let’s first analyze the contents of a full backup stored in the storage
account.
To do so, download the zip file of any full backup. When you extract the
folder, you can view all the folders and files that you saw earlier in the
Kudu app.
So, let’s go back to the Kudu app and proceed to exclude the folder from
the backup process. In the Kudu app, navigate to Site -> wwwroot and
create a _backup.filter file.
In the file, add all directories and files that you want to exclude from the
custom backup in each line. For example, to exclude the LogFiles folder
from backups, add the below entry in the backup filter file.
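For example, assuming the logs live at the default D:\home\LogFiles location (paths in _backup.filter are relative to D:\home), the file would contain the single line:

\LogFiles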
Now, when the next custom backup runs, you will not see the log file
folder in the backup.
Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-
backup?tabs=portal#configure-partial-backups
https://learn.microsoft.com/en-us/azure/app-service/resources-kudu
Resources
Partial backups in Azure App Service
Domain
Deploy and manage Azure compute resources
Question 59
In an Azure subscription, a team has created the following two resources
in the rg-prod-01 resource group:
Select and order the steps you would perform before deleting the
resource group.
Delete the Private DNS zone
Correct answer
Delete the Read-only lock
Overall explanation
Since the virtual network is linked to the Private DNS zone, you cannot
delete the Private DNS zone without first deleting the linked/nested
resources. If you try to do so, you will get this below error:
Reference
Link: https://learn.microsoft.com/en-us/cli/azure/network/private-dns/
zone?view=azure-cli-latest#az-network-private-dns-zone-delete
The correct order is to first delete the read-only lock on the VNet and
delete the linked virtual network. With no virtual network links, you can
remove the Private DNS zone resource when you delete the resource
group (check the related lecture video).
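A hedged PowerShell sketch of those two steps (the lock and VNet names are placeholders):

# Remove the read-only lock on the virtual network
Remove-AzResourceLock -LockName "vnet-readonly-lock" -ResourceGroupName "rg-prod-01" `
  -ResourceName "vnet01" -ResourceType "Microsoft.Network/virtualNetworks"
# Delete the linked virtual network
Remove-AzVirtualNetwork -Name "vnet01" -ResourceGroupName "rg-prod-01" -Force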
Note: In a real-world scenario, the best way to delete this resource group
would be to:
To test the correct order of deleting resources, I have not included the option Delete the resource group; otherwise, the question becomes too easy.
Resources
Order of deletion of resources (with locks)
Domain
Manage Azure identities and governance
Question 60
In your Microsoft Entra ID tenant, you add the below users as members of
the Azure administrative units IT Dept and HR Dept.
Further, you add the below security group to the IT Dept administrative
unit.
This means admin one can reset passwords for users poc one and poc two, who are members of the IT Dept administrative unit.
He cannot do a password reset for the other users, poc three and poc four, who are members of the HR Dept administrative unit.
Options A and E are incorrect.
Further, admin one can only manage the properties of the group added to
the administrative unit, not the individual members of the group. So, he
can manage whatever group properties his role allows; resetting a
password simply isn't relevant in the context of a group.
But since the user poc five is not added directly to the IT
Dept administrative unit, admin one cannot reset his password,
although poc five is a member of the group that's added to
the IT Dept administrative unit.
Resources
Reset password for an Azure Administrative Unit
Domain
Manage Azure identities and governance
Question 61
You receive a requirements document from your client to create ten Azure
storage accounts with the following details.
Just before you deploy the storage accounts, you receive a communication
to update the Account Kind of the storage accounts from StorageV2 to
BlockBlobStorage in the document.
While creating storage accounts in the Azure portal, which two other
property values (from the table) also require a change?
Correct selection
Redundancy
Hierarchical namespace
SkuName
Correct selection
Performance
Overall explanation
We don’t set the Account Kind property while creating the storage
account resource in the Azure portal. To create a storage account with
the Account Kind as StorageV2, we select Performance as Standard,
which creates a standard, general-purpose V2 account.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-overview
Only the standard storage accounts support all six replication types: LRS,
GRS, RA-GRS, ZRS, GZRS, and RA-GZRS. Premium storage accounts
support only LRS and ZRS. Since the initial replication type RA-GRS is not
supported for premium accounts, the update affects the storage account's
redundancy too.
Option A is the other correct answer.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
account-create?tabs=azure-powershell#create-a-storage-account-1
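For instance, a minimal PowerShell sketch (the account and resource group names are made up) where both changed properties, Performance and Redundancy, surface through the SkuName parameter:

# BlockBlobStorage requires Premium performance and supports only LRS/ZRS redundancy
New-AzStorageAccount -ResourceGroupName rg-dev-01 -Name strblockblob01 -Location eastus -Kind BlockBlobStorage -SkuName Premium_LRS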
Resources
Properties affected by the change in account kind in Azure storage
account
Domain
Implement and manage storage
Question 62
You configure the self-service password reset (SSPR) policy for the
following users in your organization:
The policy has the below authentication methods defined.
Based on the given information, select the correct answer choice for the
below two statements:
Only User One , 5
Only User One , User Two & User Three , 4
Correct answer
Only User One and User Three , 5
Only User One , 6
Only User One and User Two , 0
Overall explanation
Statement 1:
Statement 2:
And User Four needs to answer five questions from a total of six to set up
self-service password reset. But when he resets the password, he has to
answer only four (check the related lecture video).
Question 63
Given below is a custom Azure RBAC role for managing storage accounts.
{
  "properties": {
    "roleName": "Cust role",
    "description": "",
    "roletype": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Authorization/*/read",
          "Microsoft.Resources/deployments/*",
          "Microsoft.Resources/subscriptions/resourceGroups/read",
          "Microsoft.Storage/storageAccounts/*",
          "Microsoft.Support/*"
        ],
        "notActions": [],
        "dataActions": [
          "Microsoft.Storage/storageAccounts/*"
        ],
        "notDataActions": []
      }
    ]
  }
}
Answer the below two questions based on the custom role definition:
Actions, assignableScopes
NotDataActions, roleType
Correct answer
NotActions, assignableScopes
DataActions, DataActions
Overall explanation
<<Refer to the question in Practice Test 1 (Related lecture video
title: Create a custom Azure RBAC role - 1) to understand control plane
and data plane operations>>
Question 1:
So, to ensure the custom role doesn't have permission to read the access
keys, add the Microsoft.Storage/storageAccounts/listkeys/action permission
to the NotActions section.
When the user tries to read the access keys, he will get the below error.
Since you need to modify the NotActions section, question 1 ->
NotActions.
Question 2:
The assignableScopes section lists the scopes where the custom role is
available for assignment. In the given JSON, the custom role is available
for assignment across the entire subscription. To ensure the custom role
is available only at the rg-dev-01 scope, modify the assignableScopes
section.
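Putting both answers together, the modified sections of the role definition would look roughly like this (only the changed sections are shown):

"assignableScopes": [
  "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00/resourceGroups/rg-dev-01"
],
...
"notActions": [
  "Microsoft.Storage/storageAccounts/listkeys/action"
],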
Resources
Modify the custom role definition
Domain
Manage Azure identities and governance
Question 64
To move unstructured data in traditional Network-attached storage
(NAS) devices to the cloud, you create a file share in an Azure storage
account.
Which of the following is NOT the correct way to connect to the file share
from a Windows device?
To mount the Azure file share using the storage account key on a
Windows device, copy the PowerShell script provided by Azure Files.
Since the Windows OS uses the SMB protocol to connect to the file share,
the Test-NetConnection command checks if the file share can be accessed
over port 445, the port SMB uses.
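As a quick sketch, using one of this test's storage account names purely for illustration:

# Check whether SMB traffic can reach the file share endpoint over port 445
Test-NetConnection -ComputerName strdev011.file.core.windows.net -Port 445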
If the connection and the mount operation are successful, you will see this
message and the file share connection on your device.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#mount-the-azure-file-share
Option D is incorrect.
You can also copy just the UNC path from the script and map the network
drive from your Windows device.
Right-click on This PC (in Windows 10) and select Map network drive.
Paste the copied UNC path, and click Finish. This is another way to mount
an Azure file share from your device.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#mount-the-azure-file-share-with-file-explorer
Rather than mounting a drive, you can also directly paste the previously
copied UNC path in the file explorer to access the file share.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-
to-use-files-windows#access-an-azure-file-share-via-its-unc-path
Option B is incorrect.
We use the Net share command to share folders from the command line,
not to access a shared folder. The Net use command is more suitable for
accessing/mounting the shared folder.
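For comparison, a typical mount with Net use would look roughly like the below; the account name, share name, and key placeholder are all illustrative:

net use Z: \\strdev011.file.core.windows.net\share01 /user:localhost\strdev011 <storage-account-key>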
Reference
Link: https://learn.microsoft.com/en-us/previous-versions/windows/it-
pro/windows-server-2012-r2-and-2012/hh750728(v=ws.11)#examples
https://techcommunity.microsoft.com/t5/fasttrack-for-azure/mapping-a-
network-drive-to-an-azure-file-share-using-domain/ba-p/2220773
Resources
Connect to Azure file share from a Windows device
Domain
Implement and manage storage
Question 65
You have the following resources created in the rg-dev-01 resource group.
User One is already assigned the below custom role at
the Dev subscription scope.
{
  "properties": {
    "roleName": "Cust role",
    "description": "",
    "assignableScopes": [
      "/subscriptions/0e54b3f0-731c-4755-b0cb-62356bbb2c00"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Resources/*/read"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
Which additional permission would you add to the Actions section of the
custom role to ensure User One can read the following resources in the
Azure portal?
None, None
Microsoft.Authorization/*/read , Microsoft.Authorization/*/read
Correct answer
Microsoft.Storage/*/read , Microsoft.Compute/*/read
Microsoft.Portal/*/read , Microsoft.Subscription/*/read
Overall explanation
The Microsoft.Resources resource provider maps to the Azure Resource
Manager service, which is the deployment and management layer in
Azure. So, Microsoft.Resources/*/read provides read permissions to
resource types like subscriptions, tags, and resource groups, but not to
individual Azure resources like storage accounts or virtual machines.
The current custom role cannot provide read access to any resource.
Option A is incorrect.
Curious about the error message displayed by the Azure portal when
accessing the storage account, I copy-pasted the Azure storage account
link (from a user who has access to the resource) into the browser
window of User One. We get the below error stating that the user does
not have the Microsoft.Storage/storageAccounts/read permission on the
resource.
Similarly, we need Microsoft.Compute/virtualMachines/read permissions
on the resource to access the VM.
Option B is incorrect.
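So, with the correct answer applied, the actions section of the custom role would grow to something like:

"actions": [
  "Microsoft.Resources/*/read",
  "Microsoft.Storage/*/read",
  "Microsoft.Compute/*/read"
],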
Question 66
A company requires that all users' devices that access corporate
resources be joined/registered to Microsoft Entra ID. An employee already
has his corporate Windows 10 and macOS laptops joined/registered to
Microsoft Entra ID. But he is unable to register his personal, iOS-based
mobile device, signed in with a local account, to Microsoft Entra ID.
https://learn.microsoft.com/en-us/entra/identity/devices/concept-directory-
join
From the above understanding, we can conclude that options B and D are
incorrect.
Option A is incorrect too, as the user has already registered the corporate
macOS laptop with Microsoft Entra ID. If the administrator had prevented
the user from registering devices, he wouldn't have been able to register
his macOS device in the first place.
Further, there is no separate option to disable users from registering their
personal devices with Microsoft Entra ID.
If a user tries to register a device beyond the specified limit, he will not be
allowed to do so. He either needs to request the admin to increase the
device limit per user or delete one of his registered/joined devices before
registering the new device.
Reference
Link: https://learn.microsoft.com/en-us/troubleshoot/azure/entra/entra-
id/ad-dmn-services/maximum-number-of-devices-joined-workplace
Resources
Register personal iOS mobile device with Microsoft Entra ID
Domain
Manage Azure identities and governance
Question 67
You have servers in your on-premises data center running Windows 10
Pro. To transfer the data from the servers to an Azure Storage account,
you attach an external disk to the server.
Select and order the steps you will perform to copy the data to Azure Blob
storage using the Azure Import/Export service.
Prepare drives for the import job with the WAImportExport tool
Correct answer
Prepare drives for the import job with the WAImportExport tool
Prepare drives for the import job with the WAImportExport tool
Prepare drives for the import job with the WAImportExport tool
Overall explanation
The first step in importing the data to Azure is to prepare the drives.
Preparing a drive means encrypting it and copying the data to it using
the WAImportExport tool, which ultimately creates a journal file that
stores information such as the drive serial number, encryption key, and
storage account details.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-1-prepare-
the-drives
So, box 1 -> Prepare drives for the import job with the WAImportExport
tool.
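For reference, a drive-preparation run with the (v1) WAImportExport tool looks roughly like the below; every value here is illustrative, and the exact flags depend on the tool version you use:

WAImportExport.exe PrepImport /j:drive01.jrn /id:session#1 /sk:<storage-account-key> /t:e /format /encrypt /srcdir:C:\Data /dstdir:importcontainer/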
In the next step, you create an import job using the Azure Import/Export
service to transfer the data to the target storage account.
In the import job, you upload the journal file created in the previous step.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-2-create-an-
import-job
So, box 2 -> Create an import job in the Azure Import/Export service.
After creating the job, you ship the encrypted disk drives containing your
data to an Azure data center using carriers like FedEx.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-4-ship-the-
drives
So, box 3 -> Ship the hard drives to the Azure data center.
After you ship the drives, it is important to update the job with tracking
information, or else the job will expire. So, return to the job and provide
key information like carrier and tracking number.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-5-update-
the-job-with-tracking-information
After Microsoft receives the disks, they process them and transfer the
data to your storage account. So, the step Data is transferred from the
hard drives to the Azure Storage account is performed by Microsoft,
and not you. Since this step is not one of the steps you need to perform,
all options A, C and E are incorrect.
Once the data is transferred to the storage account, verify the data from
your end.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-data-to-blobs?tabs=azure-portal-preview#step-6-verify-
data-upload-to-azure
Resources
Copy data to Azure Blob storage using the Azure Import/Export service
Domain
Implement and manage storage
Question 68
A SaaS company has to store its customers' data in an Azure Storage
account in a multi-tenant fashion.
Correct answer
Encryption scope
Shared access signatures
Immutable storage
Lifecycle management policies
Overall explanation
By default, the encryption key that encrypts the data at rest in your Azure
Storage account is scoped to the entire account. In other words, all data in
the Azure Storage account is encrypted with a single key.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
service-encryption#about-encryption-key-management
For example, let’s suppose that the SaaS company has two customers. We
can create a key for each customer and store them in Azure Key Vault.
In the Azure Storage account, we create two encryption scopes, scope1,
and scope2, using keys key1 and key2, respectively, for encryption.
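A hedged PowerShell sketch of creating one such scope (the account, resource group, scope, and key URI values are placeholders):

# Create an encryption scope protected by customer A's key in Key Vault
New-AzStorageEncryptionScope -ResourceGroupName rg-dev-01 -StorageAccountName strdev011 -EncryptionScopeName scope1 -KeyvaultEncryption -KeyUri "https://kv-customer-a.vault.azure.net/keys/key1"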
Rather than using a single key vault for storing keys of all the customers
as shown above, you can provision and grant access to a key vault for
each customer so they have complete control of the keys that encrypt
their data.
So, encryption scopes ensure a secure boundary between customer A's
data and customer B's. Option A is the correct choice.
All the other options are also features of the Azure Storage account that
support multitenancy.
Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#shared-access-signatures
Option B is incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#immutable-storage
For example, in the below scenario, if any of the customers is inactive for
three weeks, the lifecycle management policy helps reduce storage costs
by moving the blobs to the cool access tier.
Option D is incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/architecture/guide/multiten
ant/service/storage#lifecycle-management
Resources
A multi-tenant feature for secure boundary of customer's data
Domain
Implement and manage storage
Question 69
You deploy a public load balancer along with other resources as given in
the below architecture:
The public load balancer defines:
a. A load balancing rule that maps the frontend IP configuration to the two
VMs in the backend pool.
b. A health probe that checks the health status of the VM instances in the
backend pool (via port 80).
The Network Security Group (nsg01) associated with the subnet defines
custom rules that allow traffic from only a specific client IP address to
reach the VMs in the backend pool.
You realize that the client IP is unable to access the web server VM. Which
of the following actions will you perform to enable access? Select two
options.
Correct selection
Add an inbound security rule with priority 105 to allow HTTP
traffic from the Azure Load Balancer
Add an inbound security rule with priority 103 to allow traffic
from Azure Load Balancer via port 443
Correct selection
Update the port of the inbound security rule with priority 110
from 80 to 443
Update both port/backend ports in the load balancing rule from
80 to 53
Overall explanation
Short Answer for Revision:
Before the Azure Load Balancer can direct client requests to the backend
VMs, the load balancer checks the health of the VM instances using a
health probe. Only if the VM instances are healthy does the load balancer
direct requests to them.
In this case, the security rule with priority 110 blocks all requests over
port 80 (also used by the load balancer's health probe). So, the health
probe checks fail.
To fix the issue, either change the port in the security rule with priority
110 (so the health checks go through) [option C] or update the port used
by the health probe (so, it bypasses the security rule). Another way would
be to add a security rule that overrides rule 110 (option A).
Detailed Explanation:
From the answer options, we can guess that the issue of client IP being
unable to access the web server VM is due to a misconfiguration in the
network security rules.
Before an Azure Load Balancer sends traffic to the VMs in the backend
pool, it checks whether the VM instances in the pool are healthy and can
receive traffic. Only if the health probe succeeds does the load balancer
send client requests to the healthy VMs.
From the question, the port used by the load balancer's health probe to
check the health of VM instances is 80.
So although one of the custom rules in the NSG allows the client IP to
access the VMs (via the load balancer's frontend IP), the other custom rule
in the NSG prevents the health probe from checking the health of the
VM instances. Since the load balancer is unable to check the health of the
VM instances, it assumes that the instances are not healthy and doesn't
direct any request to the VM. That's the reason why the client IP is unable
to access the web server VM.
To make things work, you can update the port of the custom security rule
with priority 110 from 80 to any other port, for example, 443.
This ensures that the health probe operation by the load balancer is not
affected. Only after the health probe detects a healthy instance can the
client IP access the web server VM in the backend pool.
Instead of updating the security rule, the other way to make things work
would be to change the port the health probe uses to detect a healthy
instance. Since the security rule with priority 110 blocks only port 80,
updating the health probe's port to, say, 53 can still enable the client IP
to access the VM, as the health probe checks go through.
Note: 53 is the default port for a DNS server. Before the probe can
succeed on port 53, you should turn on the DNS Server role on the VMs
using a PowerShell command. Refer to the end for more details.
However, option D doesn’t update the port of the health probe. It just
updates the port the load balancer uses to communicate with the VM.
Performing this change will not allow the client IP to access the VM. Option
D is incorrect.
Since we understand the issue is with the Azure Load Balancer's health
probe, we can also add an inbound security rule with a higher priority
(105) than the deny rule (110) to allow HTTP traffic from the load
balancer. After this rule is created, too, the client can access the VM.
Option A is the other correct answer.
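A PowerShell sketch of option A (nsg01 comes from the question; the rule name and resource group are made up):

Get-AzNetworkSecurityGroup -Name nsg01 -ResourceGroupName rg-dev-01 |
    Add-AzNetworkSecurityRuleConfig -Name AllowAzureLBProbe -Priority 105 `
        -Direction Inbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix AzureLoadBalancer -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 80 |
    Set-AzNetworkSecurityGroup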
Given that the health probe uses port 80, if you are adding a new
inbound security rule, the rule needs to allow traffic from the Azure Load
Balancer over port 80. Allowing traffic over port 443 will not have any
effect. Option B is incorrect.
Note: Sometimes you need to change browsers or restart VMs to see the
desired output.
Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/components#
health-probes
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-
custom-probe-overview#probe-source-ip-address
Run this PowerShell command to turn on the DNS Server role for both
VMs:
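The command itself isn't captured in this export; a plausible sketch, assuming Windows Server VMs, run inside each VM (for example, via the VM's Run Command feature):

# Install the DNS Server role so the VM starts listening on port 53
Install-WindowsFeature -Name DNS -IncludeManagementTools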
Resources
Resolve connections to VMs in a load balancer's backend pool
Domain
Implement and manage virtual networking
Question 70
Given below are four storage accounts in an Azure subscription.
For these storage accounts, answer the below questions related to storage
redundancy.
1. Azure provides all four main types of redundancy options: LRS, GRS,
ZRS, and GZRS.
But all the premium account types (block blobs, file shares, and page
blobs) support only LRS and ZRS. That is, they do not support any
geo-replication.
So, RA-GRS is supported only by the standard, general-purpose storage
account, strdev011. Statement 1 -> Only strdev011.
ZRS, meanwhile, is supported by all the storage account types. Statement 2 ->
All.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#supported-storage-account-types
Resources
Storage redundancy for Azure Storage accounts
Domain
Implement and manage storage
Question 71
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You need to create an Azure Storage account that meets the following
requirements:
1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.
Solution: You create the storage account with the redundancy setting
ZRS (Zone-redundant storage).
Yes
Correct answer
No
Overall explanation
For your Azure Storage account, you can configure any of the six available
redundancy settings: LRS, ZRS, GRS, GZRS, RA-GRS, and RA-GZRS.
Reference
Link: https://learn.microsoft.com/en-us/azure/reliability/availability-zones-
overview#availability-zones
https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#zone-redundant-storage
In the event of a data center failure, the data copy in only one of the
availability zones is unavailable. So, ZRS ensures that data is available
even in the event of a data center failure. So, requirement 3 is satisfied.
But since all the data centers are located in a single region, ZRS doesn’t
protect your data from regional outages like a tsunami. Since requirement
1 is not satisfied, the ZRS redundancy setting doesn’t meet the stated
goal.
Option No is the correct answer.
Resources
Create a storage account with the appropriate redundancy setting - 1
Domain
Implement and manage storage
Question 72
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You need to create an Azure Storage account that meets the following
requirements:
1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.
Solution: You create the storage account with the redundancy setting
GRS (Geo-redundant storage).
Yes
Correct answer
No
Overall explanation
From the image in the previous question, it is evident that, except for LRS,
all other storage redundancy settings, including GRS, satisfy requirement 3.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/common/storage-
redundancy#geo-redundant-storage
In the primary region, GRS is very similar to LRS. Since all storage copies
are located in the same data center in the primary region, GRS doesn't
provide high availability there. To satisfy requirement 2, a redundancy
setting that uses availability zones in the primary region, like ZRS, GZRS,
or RA-GZRS, is more suitable.
Option No is the correct answer.
Resources
Create a storage account with the appropriate redundancy setting - 2
Domain
Implement and manage storage
Question 73
This question is part of repeated scenario questions that contain the same
stem but with a different solution for each question. You need to identify if
the given solution solves a particular problem. Each set of repeated
scenario questions might contain either none, one, or many solutions.
You need to create an Azure Storage account that meets the following
requirements:
1. Protects data from disasters that affect a region, like tsunamis, floods,
earthquakes, etc.
Solution: You create the storage account with the redundancy setting
RA-GZRS (Read-access geo-zone-redundant storage).
Yes
Correct answer
No
Overall explanation
We already concluded that all storage redundancies except LRS satisfy
requirement 3. And all storage redundancies except LRS and ZRS satisfy
requirement 1.
After considering requirement 2, we have only two options left: GZRS and
RA-GZRS.
For the given set of requirements, choosing the GZRS redundancy solution
is the best answer. Option No is the correct answer.
Reference
Link: https://azure.microsoft.com/en-in/pricing/details/storage/blobs/
Note: The given pricing order (in the above chart) is true for most Azure
regions, except for East US, where RA-GRS is more costly than RA-GZRS.
Nevertheless, even in the East US region, GZRS is a more cost-efficient
solution than RA-GZRS. So, for the given set of requirements, this solution
holds true for any Azure region.
Resources
Create a storage account with the appropriate redundancy setting - 3
Domain
Implement and manage storage
Question 74
You have four Azure Storage accounts in your subscription.
In which of the storage accounts can you move a blob to an archive tier?
Only strdev011
Correct answer
Only strdev011 and strdev012
Only strdev011 and strdev013
All the storage accounts
Overall explanation
Storage accounts that use zone redundancy do not support moving blobs
to the archive tier. That is, storage accounts configured with zone-redundant
replication types, like ZRS, GZRS, and RA-GZRS, do not support moving
data to the archive tier.
So, both strdev013 and strdev014 do not support the archive tier.
strdev011 and strdev012 are not configured with zone redundancy. So,
they support moving blobs to an archive tier.
Reference
Link: https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-
overview#archive-access-tier
Resources
Move blobs to an archive tier
Domain
Implement and manage storage
Question 75
You have two virtual machines in your Azure subscription.
The virtual machine vm01 has two data disks attached to it.
Select and order the steps you would perform to attach the disk01 disk to
vm02 with minimal downtime for end users.
Correct answer
Start vm02
Start vm02
Stop vm01
Start vm02
Overall explanation
Since both disks are of the Standard HDD type, we cannot enable shared
disks. So, you have to detach the disk from vm01 before attaching it to
vm02.
Option B is not the most optimal solution because the downtime begins
once we detach the disk from vm01. Although starting a VM and attaching
a disk can run in parallel (the operations may not be sequential, as shown
below), a better solution would completely avoid the delay caused by
starting the VM, especially after the disk is detached.
Option D is also not an optimal solution for the same reason. Although we
can endlessly argue whether starting a VM causes more delay than
attaching a disk, the fact remains that starting the VM should be a
precursor to detaching the disk to prevent any easily avoidable downtime
for end users.
Option A is the most optimal solution. Here, we first start the VM, so vm02
is ready. After detaching the disk from vm01, downtime for end users
begins. However, unlike earlier cases, there is now no additional
downtime associated with starting the VM. Subsequently, we can attach
the disk to vm02, and end users can begin to access the disk.
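The same order in a rough PowerShell sketch (the resource group name rg01 is an assumption):

# 1. Start vm02 first, so its startup delay doesn't add to the downtime
Start-AzVM -ResourceGroupName rg01 -Name vm02
# 2. Detach disk01 from vm01 (end-user downtime starts here)
$vm1 = Get-AzVM -ResourceGroupName rg01 -Name vm01
Remove-AzVMDataDisk -VM $vm1 -DataDiskNames disk01
Update-AzVM -ResourceGroupName rg01 -VM $vm1
# 3. Attach disk01 to the already-running vm02
$disk = Get-AzDisk -ResourceGroupName rg01 -DiskName disk01
$vm2 = Get-AzVM -ResourceGroupName rg01 -Name vm02
Add-AzVMDataDisk -VM $vm2 -Name disk01 -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
Update-AzVM -ResourceGroupName rg01 -VM $vm2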
Reference
Link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/detach-
disk
Resources
Attach disk to an Azure VM
Domain
Deploy and manage Azure compute resources
Question 76
In the Dev subscription, there are three resource groups with the below
tags applied:
You create a resource in each resource group. Some of the resources are
also tagged.
Next, you assign the below policy to the Dev subscription scope.
After one week, you modify the access policy to grant access to all
cryptographic operations on the keys to the user principal in the key vault.
Correct answer
kv-ravi-dev01 -> Team: Compliance and Env: Dev
Overall explanation
Let’s use the given information to arrange the resources in a hierarchy.
Unlike locks, resources do not inherit the tags applied to a resource group
or a subscription.
Further:
1. When the policy is created, it will not have any effect on the three
existing resources.
2. Also, this built-in policy only affects the resources, not the resource
groups, as indicated by its name.
But when the access policy of a key vault is modified to grant access to all
cryptographic operations on the keys to the user principal, the policy
detects the resource update and adds the tag as specified in the policy
definition.
So, in addition to the Team: Compliance tag, the Key Vault resource will
also have the Env: Dev tag.
Question 77
You have a Standard Azure Load Balancer with a backend pool. There are
four Azure VMs in your subscription. The VMs with different images are
deployed across VNets and availability zones as shown:
Which VMs can you add to the load balancer’s backend pool?
For Standard Azure Load Balancers -> All VMs added to the backend pool
must be in the same virtual network.
For Basic Azure Load Balancers (soon to be retired) -> All VMs added to
the backend pool must be in the same availability set / virtual machine
scale set.
Detailed Explanation:
If you navigate to the backend pool of an Azure Load Balancer and try to
add VMs to the pool, you will realize that, first, you need to select a virtual
network.
So, for a Standard Azure Load Balancer, all VMs added to the backend
pool must be in the same virtual network. Depending on the chosen
virtual network, you can add either only vm01 and vm03 or only vm02
and vm04.
Option C is the correct answer.
Other factors, like availability zones, availability sets, and the image of the
VM, do not matter for a Standard SKU load balancer. So, all other
options are incorrect.
Reference
Link: https://learn.microsoft.com/en-us/azure/load-balancer/skus#skus
GitHub Repo Link: Add VMs to Azure Load Balancer's backend pool
Domain
Implement and manage virtual networking
Question 78
You have data on Product images, Product Inventory, and Customers
related to the business in your Azure Storage account, and Sales
transactions data is stored in Cosmos DB. You would like to download
the data and host a copy on your on-premises servers.
Your colleague suggested you use the Azure Import/Export tool. Which of
the following business entities can you download from Azure using the
tool?
You can export data only from Azure blobs with the Azure
Import/Export tool.
Reference
Link: https://learn.microsoft.com/en-us/azure/import-export/storage-
import-export-requirements#supported-storage-types
So, you can download only the product images from the blob container.
Option C is the correct answer.
Resources
Download data with the Azure Import/Export tool
Domain
Implement and manage storage
Question 79
An organization (ravikirans.com) collaborates with organizations like:
contoso.com and fabrikam.com.
You need to ensure that your admins can add users from these partner
domains into your Microsoft Entra ID tenant but cannot add/invite users
from organizations like gmail.com, yahoo.com, and hotmail.com.
Detailed Explanation:
Admins cannot send B2B invitations to these blocked domains. When they
try to do so, an error message is displayed.
But the blocklist ensures that invitations to all other domains are sent
successfully.
Reference
Link: https://learn.microsoft.com/en-us/entra/external-id/allow-deny-list
Option D is incorrect.
Option C is incorrect.
Option A is incorrect.
Resources
Block user invitations from gmail.com
Domain
Manage Azure identities and governance
Question 80
You have deployed a pilot app in a Shared App Service plan. The client
requires you to enable two new features for the app:
Correct selection
Enable Application Insights Profiler
Set Collection level to Basic
Scale out the App Service plan
Correct selection
Scale up the App Service plan
Overall explanation
Short Answer for Revision:
Scale up -> get a bigger VM (with more vCPU, RAM, and storage) and
additional features like custom domains.
Detailed Answer:
The Free and the Shared plans do not come with any features like adding
a custom domain, autoscaling, daily backups, and staging slots. So, you
need to scale up the App Service Plan to at least Basic to add custom
domains.
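As a one-line PowerShell sketch (the plan and resource group names are made up):

# Scale up the plan from Shared to the Basic (B1) tier
Set-AzAppServicePlan -ResourceGroupName rg01 -Name plan-pilot -Tier Basic -WorkerSize Small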
Reference
Link: https://learn.microsoft.com/en-us/azure/app-service/manage-scale-
up
Resources
Scaling App Service Plan
Domain
Deploy and manage Azure compute resources
Question 81
You deploy a virtual network with two subnets in a resource group using
the below ARM template.
What would happen if you remove subnet1 and subnet2 from the ARM
template and redeploy the template to the resource group in complete
deployment mode?
Since the virtual network alone can be a standalone resource without any
subnets, removing both subnets will not delete the VNet.
Option D is incorrect.
But that's how the deployment modes work only for top-level resources.
Here, subnet1 and subnet2, of resource type
Microsoft.Network/virtualNetworks/subnets, are defined as child
resources of the virtual network, which is of resource type
Microsoft.Network/virtualNetworks. So, the subnets are child resources of
the VNet and are not top-level resources.
So, when you remove both subnets and redeploy the template, the
subnets will not be removed from the VNet resource. Option C is the
correct answer.
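To illustrate the distinction, a subnet declared as its own (child) resource, rather than inline in the VNet's properties, looks roughly like this; the names and API version are illustrative:

{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2023-04-01",
  "name": "vnet01/subnet1",
  "dependsOn": [
    "[resourceId('Microsoft.Network/virtualNetworks', 'vnet01')]"
  ],
  "properties": {
    "addressPrefix": "10.0.0.0/24"
  }
}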
Reference
Link: https://learn.microsoft.com/en-us/azure/azure-resource-
manager/templates/deployment-modes#complete-mode
Resources
Redeploying child resources in complete mode
Domain
Deploy and manage Azure compute resources