Azure Storage

Azure Files offers fully managed file shares in the cloud that are accessible via SMB and NFS protocols. Azure file shares can replace or supplement on-premises file servers and enable applications to store file data in the cloud. Key benefits include shared access, fully managed storage, scripting and tooling support, and high resiliency.


Contents

Azure Files documentation


Overview
What is Azure Files?
Quickstarts
Create file shares
Tutorials
Create a Windows SMB file share
Create a Linux NFS file share
Extend Windows file servers with Azure File Sync
Concepts
Planning for an Azure Files deployment
SMB file shares
NFS file shares
Identity-based authentication and authorization
Networking considerations for direct access
Disaster recovery and failover
Azure file share snapshots
Performance and scale
SMB Multichannel performance
Scalability and performance targets
Understanding billing
Prevent accidental data deletion
Back up file shares
Encryption
Encryption at-rest
Encryption at-rest with customer-managed keys
Compliance offerings
FAQ
What's new in Azure Files?
How-to guides
Deploy
Create an SMB file share
Create an NFS file share
Use DFS-N with Azure Files
SQL Server FCI with premium file shares
Purchase storage reservations
Mount
Mount SMB file share to Windows
Mount SMB file share to Linux
Mount NFS file share to Linux
Mount SMB file share to macOS
Network
Configure Azure Files network endpoints
Configuring DNS forwarding for Azure Files
Configure Site-to-Site VPN
Configure Point-to-Site VPN on Windows
Configure Point-to-Site VPN on Linux
Authenticate
Enable on-premises AD DS authentication and authorization
AD DS overview
Enable AD DS authentication
Assign share-level permissions
Assign directory/file-level permissions
Mount file share
Update password
Enable Azure AD DS authentication and authorization
Manage
Manage storage in Azure independent clouds
Initiate an account failover
Data protection
Enable soft delete
Manage data redundancy
Change how data is replicated
Design highly available applications
Manage disaster recovery
Check the Last Sync Time property
Initiate account failover
Back up file shares
Portal
CLI
PowerShell
REST API
Restore file shares
Portal
CLI
PowerShell
REST API
Manage file shares
Portal
CLI
PowerShell
REST API
Monitor
Monitor Azure Files
Migrate
Migrate to Azure file shares
Target a cloud-only deployment
RoboCopy to migrate to Azure file shares
Migrate from an on-premises NAS to Azure file shares with DataBox
Target a hybrid deployment
Migrate from Linux to a hybrid file server with Azure File Sync
Migrate from an on-premises NAS to a hybrid file server with Azure File Sync
Migrate from an on-premises NAS to a hybrid file server with DataBox
From StorSimple
StorSimple 8100 and 8600 migration guide
StorSimple 1200 migration guide
Develop
Configure connection strings
.NET
Java
C++
Python
Secure
Manage encryption keys for the storage account
Check the encryption key model for the account
Configure encryption with customer-managed keys
Store customer-managed keys in a key vault
Store customer-managed keys in a managed HSM
Enable infrastructure encryption for the account
Enable threat protection with Microsoft Defender for Storage
Configure firewalls and virtual networks
Require secure transfer
Disable SMB 1 on the Windows SMB client
Disable SMB on the Linux SMB client
Manage Transport Layer Security (TLS)
Enforce minimum TLS version for incoming requests
Configure TLS version for a client application
Transfer data
Get started with AzCopy
Use AzCopy with Files
Configure, optimize, troubleshoot - AzCopy
Troubleshoot
Troubleshoot Azure Files on Windows (SMB)
Troubleshoot Azure Files on Linux (SMB)
Troubleshoot Azure file shares performance
Troubleshoot Azure Files on Linux (NFS)
Troubleshoot backing up file shares
Reference
Azure Files API reference
Monitoring Azure Files reference
Resource Manager template
Samples
Resources
Azure updates
Azure Files on Microsoft Q&A
Azure Files on Stack Overflow
Pricing for Azure Files
Azure pricing calculator
Videos
Azure Files and Azure NetApp Files comparison
Compare access with NFS to Azure Files, Blob Storage, and NetApp Files
NuGet packages (.NET)
Microsoft.Azure.Storage.Common (version 11.x)
Azure.Storage.Common (version 12.x - preview)
Microsoft.Azure.Storage.File (version 11.x)
Azure.Storage.File (version 12.x - preview)
Azure Configuration Manager
Azure Storage Data Movement library
Storage Resource Provider library
Source code
.NET
Azure Storage client library
Version 12.x (preview)
Version 11.x and earlier
Data Movement library
Storage Resource Provider library
Java
Azure Storage client library version 12.x (preview)
Azure Storage client library version 8.x and earlier
Node.js
Azure Storage client library version 12.x (preview)
Azure Storage client library version 10.x
Python
Azure Storage client library version 12.x (preview)
Azure Storage client library version 2.1
What is Azure Files?
5/20/2022 • 4 minutes to read

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Azure file shares can be mounted
concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux,
and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure
file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being
used.
Here are some videos on the common use cases of Azure Files:
Replace your file server with a serverless Azure file share
Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop leveraging AD
authentication

Why Azure Files is useful


Azure file shares can be used to:
Replace or supplement on-premises file servers:
Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS
devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file
shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync
to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the
data where it's being used. With the recent release of Azure Files AD Authentication, SMB Azure file
shares can continue to work with AD hosted on-premises for access control.
"Lift and shift" applications :
Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file
application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the
application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application
data is moved to Azure Files, and the application continues to run on-premises.
Simplify cloud development:
Azure Files can also be used in numerous ways to simplify new cloud development projects. For example:
Shared application settings:
A common pattern for distributed applications is to have configuration files in a centralized
location where they can be accessed from many application instances. Application instances can
load their configuration through the File REST API, and humans can access them as needed by
mounting the SMB share locally.
Diagnostic share:
An Azure file share is a convenient place for cloud applications to write their logs, metrics, and
crash dumps. Logs can be written by the application instances via the File REST API, and
developers can access them by mounting the file share on their local machine. This enables great
flexibility, as developers can embrace cloud development without having to abandon any existing
tooling they know and love.
Dev/Test/Debug:
When developers or administrators are working on VMs in the cloud, they often need a set of tools
or utilities. Copying such utilities and tools to each VM can be a time-consuming exercise. By
mounting an Azure file share locally on the VMs, developers and administrators can quickly access
their tools and utilities, no copying required.
Containerization:
Azure file shares can be used as persistent volumes for stateful containers. Containers deliver "build once,
run anywhere" capabilities that enable developers to accelerate innovation. For the containers that access
raw data at every start, a shared file system is required to allow these containers to access the file system
no matter which instance they run on.

Key benefits
Shared access. Azure file shares support the industry-standard SMB and NFS protocols, meaning you can
seamlessly replace your on-premises file shares with Azure file shares without worrying about application
compatibility. Being able to share a file system across multiple machines, applications, and instances is a
significant advantage with Azure Files for applications that need shareability.
Fully managed. Azure file shares can be created without the need to manage hardware or an OS. This
means you don't have to deal with patching the server OS with critical security upgrades or replacing faulty
hard disks.
Scripting and tooling. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure
file shares as part of the administration of Azure applications. You can also create and manage Azure file
shares using the Azure portal and Azure Storage Explorer.
Resiliency. Azure Files has been built from the ground up to be always available. Replacing on-premises file
shares with Azure Files means you no longer have to wake up to deal with local power outages or network
issues.
Familiar programmability. Applications running in Azure can access data in the share via file system I/O
APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. In
addition to System IO APIs, you can use Azure Storage Client Libraries or the Azure Storage REST API.

Next steps
Plan for an Azure Files deployment
Create an Azure file share
Connect and mount an SMB share on Windows
Connect and mount an SMB share on Linux
Connect and mount an SMB share on macOS
Connect and mount an NFS share on Linux
Quickstart: Create and use an Azure file share
5/20/2022 • 11 minutes to read

Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Windows, Linux, and
macOS. This guide shows you how to create an SMB Azure file share using either the Azure portal, Azure CLI, or
Azure PowerShell module.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS | Yes | No
Standard file shares (GPv2), GRS/GZRS | Yes | No
Premium file shares (FileStorage), LRS/ZRS | Yes | No

Prerequisites
Portal
PowerShell
Azure CLI

If you don't have an Azure subscription, create a free account before you begin.

Create a storage account


Portal
PowerShell
Azure CLI

A storage account is a shared pool of storage in which you can deploy an Azure file share or other storage
resources, such as blobs or queues. A storage account can contain an unlimited number of shares. A share can
store an unlimited number of files, up to the capacity limits of the storage account.
To create a storage account using the Azure portal:
1. Under Azure services, select + to create a resource.
2. Select Storage account to create a storage account.
3. Under Project details, select the Azure subscription in which to create the storage account. If you have
only one subscription, it should be the default.
4. Select Create new to create a new resource group. For the name, enter myResourceGroup.
5. Under Instance details, provide a name for the storage account such as mystorageacct followed by a
few random numbers to make it a globally unique name. A storage account name must be all lowercase
letters and numbers, and must be between 3 and 24 characters. Make a note of your storage account name;
you will use it later.
6. In Region, select East US.
7. In Performance, keep the default value of Standard.
8. In Redundancy, select Locally redundant storage (LRS).
9. Select Review + Create to review your settings and create the storage account.
10. When you see the Validation passed notification, select Create. You should see a notification that
deployment is in progress.
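The PowerShell and Azure CLI tabs aren't reproduced in this extract. As a rough Azure CLI equivalent (a sketch only; mystorageacct91024 is a made-up example name):

# Create the resource group and a standard general-purpose v2 (GPv2) account.
az group create --name myResourceGroup --location eastus

az storage account create \
    --resource-group myResourceGroup \
    --name mystorageacct91024 \
    --location eastus \
    --kind StorageV2 \
    --sku Standard_LRS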

Create an Azure file share


Portal
PowerShell
Azure CLI

To create an Azure file share:


1. Select the storage account from your dashboard.
2. On the storage account page, in the Services section, select Files.
3. On the menu at the top of the File service page, select + File share. The New file share page drops
down.
4. In Name, type myshare. Leave Transaction optimized selected for Tier.
5. Select Create to create the Azure file share.
Share names must use all lowercase letters, numbers, and single hyphens, and cannot start with a hyphen. For
complete details about naming file shares and files, see Naming and Referencing Shares, Directories, Files, and
Metadata.
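For the Azure CLI tab, a hedged sketch of the equivalent share creation (assuming the example account created above):

# Create a 1 TiB, transaction-optimized SMB share at the management plane.
az storage share-rm create \
    --resource-group myResourceGroup \
    --storage-account mystorageacct91024 \
    --name myshare \
    --access-tier TransactionOptimized \
    --quota 1024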
Create a directory
Portal
PowerShell
Azure CLI

To create a new directory named myDirectory at the root of your Azure file share:
1. On the File share settings page, select the myshare file share. The page for your file share opens,
indicating no files found.
2. On the menu at the top of the page, select + Add directory. The New directory page drops down.
3. Type myDirectory and then click OK.
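From the Azure CLI, the same directory can be created at the data plane; this sketch fetches an account key first (key-based access is assumed to be permitted on the account):

# Grab an account key, then create the directory in the share.
key=$(az storage account keys list \
    --resource-group myResourceGroup \
    --account-name mystorageacct91024 \
    --query "[0].value" --output tsv)

az storage directory create \
    --account-name mystorageacct91024 --account-key "$key" \
    --share-name myshare \
    --name myDirectory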
Upload a file
Portal
PowerShell
Azure CLI

To demonstrate uploading a file, you first need to create or select a file to be uploaded. You may do this by
whatever means you see fit. Once you've decided on the file you would like to upload:
1. Select the myDirectory directory. The myDirectory panel opens.
2. In the menu at the top, select Upload. The Upload files panel opens.
3. Select the folder icon to open a window to browse your local files.
4. Select a file and then select Open.
5. In the Upload files page, verify the file name and then select Upload.
6. When finished, the file should appear in the list on the myDirectory page.
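A CLI sketch of the same upload (reusing the $key variable from the directory step; qsTestFile.txt is assumed to exist in the current directory):

az storage file upload \
    --account-name mystorageacct91024 --account-key "$key" \
    --share-name myshare \
    --source ./qsTestFile.txt \
    --path "myDirectory/qsTestFile.txt"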
Download a file
Portal
PowerShell
Azure CLI

You can download a copy of the file you uploaded by right-clicking on the file and selecting Download. The
exact experience will depend on the operating system and browser you're using.
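The CLI equivalent, sketched with the same names as above:

# Download the uploaded file to a local copy.
az storage file download \
    --account-name mystorageacct91024 --account-key "$key" \
    --share-name myshare \
    --path "myDirectory/qsTestFile.txt" \
    --dest ./qsTestFile-downloaded.txt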

Clean up resources
Portal
PowerShell
Azure CLI

When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.
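From the CLI, the whole cleanup is one command:

# Deletes the resource group and everything in it; --yes skips the prompt.
az group delete --name myResourceGroup --yes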

Next steps
What is Azure Files?
Tutorial: Create an SMB Azure file share and
connect it to a Windows VM using the Azure portal
5/20/2022 • 5 minutes to read

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. In this tutorial, you will learn a few ways
you can use an SMB Azure file share in a Windows virtual machine (VM).
If you don't have an Azure subscription, create a free account before you begin.
In this tutorial, you will:
Create a storage account
Create a file share
Deploy a VM
Connect to the VM
Mount an Azure file share to your VM
Create and delete a share snapshot

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS | Yes | No
Standard file shares (GPv2), GRS/GZRS | Yes | No
Premium file shares (FileStorage), LRS/ZRS | Yes | No

Getting started
Create a storage account
Before you can work with an Azure file share, you have to create an Azure storage account.
1. Sign in to the Azure portal.
2. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you
begin typing, the list filters based on your input. Select Storage Accounts.
3. On the Storage Accounts window that appears, choose + New.
4. On the Basics tab, select the subscription in which to create the storage account.
5. Under the Resource group field, select your desired resource group, or create a new resource group.
6. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name
also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
7. Select a region for your storage account, or use the default region.
8. Select a performance tier. The default tier is Standard.
9. Specify how the storage account will be replicated. The default redundancy option is Geo-redundant storage
(GRS).
10. Select Review + Create to review your storage account settings and create the account.
11. Select Create .

Create an Azure file share


Next, create an SMB Azure file share.
1. When the Azure storage account deployment is complete, select Go to resource.
2. Select File shares from the storage account pane.
3. Select + File share.
4. Name the new file share qsfileshare, enter "1" for the Quota, leave Transaction optimized selected,
and select Create. The quota can be a maximum of 5 TiB (100 TiB with large file shares enabled), but you
only need 1 GiB for this.
5. Create a new .txt file called qsTestFile on your local machine.
6. Select the new file share, then on the file share location, select Upload.
7. Browse to the location where you created your .txt file, select qsTestFile.txt, and select Upload.
Deploy a VM
So far, you've created an Azure storage account and a file share with one file in it. Next, create an Azure VM with
Windows Server 2019 Datacenter to represent the on-premises server.
1. Expand the menu on the left side of the portal and select Create a resource in the upper left-hand
corner of the Azure portal.
2. Under Popular services, select Virtual machine.
3. In the Basics tab, under Project details, select the resource group you created earlier.
4. Under Instance details, name the VM qsVM.
5. For Image, select Windows Server 2019 Datacenter - Gen2.
6. Leave the default settings for Region, Availability options, and Size.
7. Under Administrator account, add a Username and enter a Password for the VM.
8. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP
from the drop-down.
9. Select Review + create.
10. Select Create. Creating a new VM will take a few minutes to complete.
11. Once your VM deployment is complete, select Go to resource.
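If you prefer the CLI, a minimal sketch of the same VM (the password placeholder must be replaced; the open-port rule is shown explicitly even though Windows images typically get an RDP rule by default):

az vm create \
    --resource-group myResourceGroup \
    --name qsVM \
    --image Win2019Datacenter \
    --admin-username azureuser \
    --admin-password "<your-strong-password>"

# Ensure RDP (3389) is reachable.
az vm open-port --resource-group myResourceGroup --name qsVM --port 3389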
Connect to your VM
Now that you've created the VM, connect to it so you can mount your file share.
1. Select Connect on the virtual machine properties page.
2. In the Connect to virtual machine page, keep the default options to connect by IP address over port
number 3389, and select Download RDP file.
3. Open the downloaded RDP file and select Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the
username as localhost\<username>, where <username> is the VM admin username you created for the
virtual machine. Enter the password you created for the virtual machine, and then select OK.
5. You may receive a certificate warning during the sign-in process. Select Yes or Continue to create the
connection.
Map the Azure file share to a Windows drive
1. In the Azure portal, navigate to the qsfileshare file share and select Connect.
2. Select a drive letter, then copy the contents of the second box and paste it in Notepad.
3. In the VM, open PowerShell and paste in the contents from Notepad, then press Enter to run the
command. It should map the drive.
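The script the portal generates embeds one of the storage account's keys. If you need to retrieve a key yourself (for example, to rebuild the mapping command), a CLI sketch:

# Print the first account key; the portal script passes this as the
# password in the cmdkey/New-PSDrive (or net use) commands it runs on the VM.
az storage account keys list \
    --resource-group myResourceGroup \
    --account-name <storage-account-name> \
    --query "[0].value" --output tsv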

Create a share snapshot


Now that you've mapped the drive, create a snapshot.
1. In the portal, navigate to your file share, select Snapshots, then select + Add snapshot.
2. In the VM, open qsTestFile.txt and type "this file has been modified". Save and close the file.
3. Create another snapshot.
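Snapshots can also be taken from the CLI; a hedged sketch using the key retrieved earlier:

# Create a point-in-time snapshot of the share.
az storage share snapshot \
    --account-name <storage-account-name> --account-key "$key" \
    --name qsfileshare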

Browse a share snapshot


1. On your file share, select Snapshots.
2. On the Snapshots tab, select the first snapshot in the list.
3. Open that snapshot, and select qsTestFile.txt.

Restore from a snapshot


1. From the file share snapshot tab, right-click qsTestFile.txt and select the Restore button.
2. Select Overwrite original file.
3. In the VM, open the file. The unmodified version has been restored.

Delete a share snapshot


1. On your file share, select Snapshots.
2. On the Snapshots tab, select the last snapshot in the list and select Delete.
Use a share snapshot in Windows
Just like with on-premises VSS snapshots, you can view the snapshots from your mounted Azure file share by
using the Previous Versions tab.
1. In File Explorer, locate the mounted share.
2. Select qsTestFile.txt, right-click, and select Properties from the menu.
3. Select Previous Versions to see the list of share snapshots for this directory.
4. Select Open to open the snapshot.
Restore from a previous version
1. Select Restore. This copies the contents of the entire directory recursively to the original location at the
time the share snapshot was created.

NOTE
If your file has not changed, you will not see a previous version for that file because that file is the same version as
the snapshot. This is consistent with how this works on a Windows file server.

Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.

Next steps
Use an Azure file share with Windows
Tutorial: Create an NFS Azure file share and mount
it on a Linux VM using the Azure portal
5/20/2022 • 7 minutes to read

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Both NFS and SMB protocols are
supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file
share using the NFS protocol and connect it to a Linux VM.
In this tutorial, you will:
Create a storage account
Deploy a Linux VM
Create an NFS file share
Connect to your VM
Mount the file share to your VM

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS | No | No
Standard file shares (GPv2), GRS/GZRS | No | No
Premium file shares (FileStorage), LRS/ZRS | No | Yes

Getting started
If you don't have an Azure subscription, create a free account before you begin.
Sign in to the Azure portal.
Create a FileStorage storage account
Before you can work with an NFS 4.1 Azure file share, you have to create an Azure storage account with the
premium performance tier. Currently, NFS 4.1 shares are only available as premium file shares.
1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you
begin typing, the list filters based on your input. Select Storage Accounts.
2. On the Storage Accounts window that appears, choose + Create.
3. On the Basics tab, select the subscription in which to create the storage account.
4. Under the Resource group field, select Create new to create a new resource group to use for this tutorial.
5. Enter a name for your storage account. The name you choose must be unique across Azure. The name also
must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the
same regions that support premium file storage.
7. Select the Premium performance tier to store your data on solid-state drives (SSD). Under Premium
account type, select File shares.
8. Leave replication set to its default value of Locally-redundant storage (LRS).
9. Select Review + Create to review your storage account settings and create the account.
10. When you see the Validation passed notification appear, select Create. You should see a notification that
deployment is in progress.
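A CLI sketch of the same premium (FileStorage) account; the name is a made-up example:

az storage account create \
    --resource-group myResourceGroup \
    --name mypremiumacct91024 \
    --location eastus \
    --kind FileStorage \
    --sku Premium_LRS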

Deploy an Azure VM running Linux


Next, create an Azure VM running Linux to represent the on-premises server. When you create the VM, a virtual
network will be created for you. The NFS protocol can only be used from a machine inside of a virtual network.
1. Select Home, and then select Virtual machines under Azure services.
2. Select + Create and then + Virtual machine.
3. In the Basics tab, under Project details, make sure the correct subscription and resource group are
selected. Under Instance details, type myVM for the Virtual machine name, and select the same
region as your storage account. Choose the default Ubuntu Server version for your Image. Leave the
other defaults. The default size and pricing are only shown as an example. Size availability and pricing
depend on your region and subscription.
4. Under Administrator account, select SSH public key. Leave the rest of the defaults.
5. Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select
SSH (22) and HTTP (80) from the drop-down.
IMPORTANT
Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later,
go back to the Basics tab.

6. Select the Review + create button at the bottom of the page.
7. On the Create a virtual machine page, you can see the details about the VM you are about to create.
Note the name of the virtual network. When you are ready, select Create.
8. When the Generate new key pair window opens, select Download private key and create
resource. Your key file will be downloaded as myKey.pem. Make sure you know where the .pem file was
downloaded, because you'll need the path to it to connect to your VM.
You'll see a message that deployment is in progress. Wait a few minutes for deployment to complete.
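A CLI sketch of a comparable Linux VM (the image alias varies by CLI version; Ubuntu2204 is assumed here, while older CLI releases used UbuntuLTS):

az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys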

Create an NFS Azure file share


Now you're ready to create an NFS file share and provide network-level security for your NFS traffic.
Add a file share to your storage account
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select Data storage > File shares from the storage account pane.
4. Select + File share.
5. Name the new file share qsfileshare and enter "100" for the minimum Provisioned capacity, or
provision more capacity (up to 102,400 GiB) to get more performance. Select NFS protocol, leave No
Root Squash selected, and select Create.
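The CLI equivalent, sketched against the premium account created earlier:

# Create a 100 GiB NFS share with root squash disabled.
az storage share-rm create \
    --resource-group myResourceGroup \
    --storage-account mypremiumacct91024 \
    --name qsfileshare \
    --enabled-protocols NFS \
    --root-squash NoRootSquash \
    --quota 100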
Set up a private endpoint
Next, you'll need to set up a private endpoint for your storage account. This gives your storage account a private
IP address from within the address space of your virtual network.
1. Select the file share qsfileshare. You should see a dialog that says Connect to this NFS share from Linux.
Under Network configuration, select Review options.
2. Next, select Setup a private endpoint.
3. Select + Private endpoint.

4. Leave Subscription and Resource group the same. Under Instance, provide a name and select a
region for the new private endpoint. Your private endpoint must be in the same region as your virtual
network, so use the same region as you specified when creating the VM. When all the fields are
complete, select Next: Resource.
5. Confirm that the Subscription, Resource type and Resource are correct, and select File from the
Target sub-resource drop-down. Then select Next: Virtual Network.

6. Under Networking, select the virtual network associated with your VM and leave the default subnet.
Select Yes for Integrate with private DNS zone. Select the correct subscription and resource group,
and then select Next: Tags.
7. You can optionally apply tags to categorize your resources, such as applying the name Environment and
the value Test to all testing resources. Enter name/value pairs if desired, and then select Next: Review +
create.

8. Azure will attempt to validate the private endpoint. When validation is complete, select Create. You'll see
a notification that deployment is in progress. After a few minutes, you should see a notification that
deployment is complete.
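A hedged CLI sketch of the private endpoint itself (the VNet and subnet names are whatever the VM deployment created, shown here as placeholders; on older CLI versions the flag is --group-ids, and the private DNS zone integration the portal performs requires additional az network private-dns commands not shown):

sa_id=$(az storage account show \
    --resource-group myResourceGroup \
    --name mypremiumacct91024 \
    --query id --output tsv)

az network private-endpoint create \
    --resource-group myResourceGroup \
    --name myPrivateEndpoint \
    --vnet-name <vm-vnet-name> \
    --subnet <vm-subnet-name> \
    --private-connection-resource-id "$sa_id" \
    --group-id file \
    --connection-name myConnection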
Disable secure transfer
Because the NFS protocol doesn't support encryption and relies instead on network-level security, you'll need to
disable secure transfer.
1. Select Home and then Storage accounts.
2. Select the storage account you created.
3. Select File shares from the storage account pane.
4. Select the NFS file share that you created. Under Secure transfer setting, select Change setting.
5. Change the Secure transfer required setting to Disabled, and select Save. The setting change may
take up to 30 seconds to take effect.
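The CLI equivalent is a single property update:

# NFS relies on network-level security, so HTTPS-only must be turned off.
az storage account update \
    --resource-group myResourceGroup \
    --name mypremiumacct91024 \
    --https-only false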

Connect to your VM
Create an SSH connection with the VM.
1. Select Home and then Virtual machines.
2. Select the Linux VM you created for this tutorial and ensure that its status is Running. Take note of the
VM's public IP address and copy it to your clipboard.
3. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a
PowerShell prompt.
4. At your prompt, open an SSH connection to your VM. Replace the IP address with the one from your VM,
and replace the path to the .pem file with the path to where the key file was downloaded.

ssh -i .\Downloads\myKey.pem azureuser@20.25.14.85

If you encounter a warning that the authenticity of the host can't be established, type yes to continue connecting
to the VM. Leave the ssh connection open for the next step.

TIP
The SSH key you created can be used the next time you create a VM in Azure. Just select Use a key stored in
Azure for the SSH public key source the next time you create a VM. You already have the private key on your
computer, so you won't need to download anything.

Mount the NFS share


Now that you've created an NFS share, to use it you have to mount it on your Linux client.
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select File shares from the storage account pane and select the NFS file share you created.
4. You should see Connect to this NFS share from Linux along with sample commands to use NFS on
your Linux distribution and a provided mounting script.

5. Select your Linux distribution (Ubuntu).


6. Using the ssh connection you created to your VM, enter the sample commands to use NFS and mount
the file share.
You have now mounted your NFS share, and it's ready to store files.
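The portal-provided commands generally follow this pattern (a sketch using the example account and share names from this tutorial; the NFS client package name varies by distribution):

sudo apt install -y nfs-common   # Ubuntu/Debian; use your distro's NFS client package

sudo mkdir -p /mount/mypremiumacct91024/qsfileshare

sudo mount -t nfs \
    mypremiumacct91024.file.core.windows.net:/mypremiumacct91024/qsfileshare \
    /mount/mypremiumacct91024/qsfileshare \
    -o vers=4,minorversion=1,sec=sys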

Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you created for this tutorial.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.

Next steps
Learn about using NFS Azure file shares
Tutorial: Extend Windows file servers with Azure File
Sync
5/20/2022 • 11 minutes to read

This article demonstrates the basic steps for extending the storage capacity of a Windows server by using Azure
File Sync. Although the tutorial features Windows Server as an Azure virtual machine (VM), you would typically
do this process for your on-premises servers. You can find instructions for deploying Azure File Sync in your
own environment in the Deploy Azure File Sync article. In this tutorial, you will:
Deploy the Storage Sync Service
Prepare Windows Server to use with Azure File Sync
Install the Azure File Sync agent
Register Windows Server with the Storage Sync Service
Create a sync group and a cloud endpoint
Create a server endpoint
If you don't have an Azure subscription, create a free account before you begin.

Sign in to Azure
Sign in to the Azure portal.

Prepare your environment


For this tutorial, you need to do the following before you can deploy Azure File Sync:
Create an Azure storage account and file share
Set up a Windows Server 2016 Datacenter VM
Prepare the Windows Server VM for Azure File Sync
Create a folder and .txt file
On your local computer, create a new folder named FilesToSync and add a text file named mytestdoc.txt. You'll
upload that file to the file share later in this tutorial.
Create a storage account
To create a general-purpose v2 storage account in the Azure portal, follow these steps:
1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you
begin typing, the list filters based on your input. Select Storage Accounts.
2. On the Storage Accounts window that appears, choose + New.
3. On the Basics blade, select the subscription in which to create the storage account.
4. Under the Resource group field, select your desired resource group, or create a new resource group. For
more information on Azure resource groups, see Azure Resource Manager overview.
5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name
also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region.
7. Select a performance tier. The default tier is Standard.
8. Specify how the storage account will be replicated. The default redundancy option is Geo-redundant storage
(GRS). For more information about available replication options, see Azure Storage redundancy.
9. Additional options are available on the Advanced, Networking, Data protection, and Tags blades. To use
Azure Data Lake Storage, choose the Advanced blade, and then set Hierarchical namespace to Enabled.
For more information, see Azure Data Lake Storage Gen2 Introduction.
10. Select Review + Create to review your storage account settings and create the account.
11. Select Create.

Create a file share


After you deploy an Azure storage account, you create a file share.
1. In the Azure portal, select Go to resource.
2. Select Files from the storage account pane.
3. Select + File share.
4. Name the new file share afsfileshare. Enter "5120" for the Quota, and then select Create. The quota can
be a maximum of 100 TiB, but you only need 5 TiB for this tutorial.
5. Select the new file share. On the file share location, select Upload.
6. Browse to the FilesToSync folder where you created your .txt file, select mytestdoc.txt, and select Upload.

At this point, you've created a storage account and a file share with one file in it. Next, you deploy an Azure VM
with Windows Server 2016 Datacenter to represent the on-premises server in this tutorial.
Deploy a VM and attach a data disk
1. Go to the Azure portal and expand the menu on the left. Choose Create a resource in the upper left-hand
corner.
2. In the search box above the list of Azure Marketplace resources, search for Windows Server 2016
Datacenter and select it in the results. Choose Create.
3. Go to the Basics tab. Under Project details, select the resource group you created for this tutorial.
4. Under Instance details, provide a VM name. For example, use myVM.
5. Don't change the default settings for Region, Availability options, Image, and Size.
6. Under Administrator account, provide a Username and Password for the VM.
7. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP
from the drop-down menu.
8. Before you create the VM, you need to create a data disk.
a. Select Next: Disks.

b. On the Disks tab, under Disk options, leave the defaults.
c. Under DATA DISKS, select Create and attach a new disk.
d. Use the default settings except for Size (GiB), which you can change to 1 GiB for this tutorial.
e. Select OK.
9. Select Review + create.
10. Select Create.
You can select the Notifications icon to watch the Deployment progress. Creating a new VM might
take a few minutes to complete.
11. After your VM deployment is complete, select Go to resource.
At this point, you've created a new virtual machine and attached a data disk. Next you connect to the VM.
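A CLI sketch of attaching a comparable new, empty 1 GiB data disk to an existing VM:

az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name myDataDisk \
    --new \
    --size-gb 1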
Connect to your VM
1. In the Azure portal, select Connect on the virtual machine properties page.
2. On the Connect to virtual machine page, keep the default options to connect by IP address over port
3389. Select Download RDP file.
3. Open the downloaded RDP file and select Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the
username as localhost\username, enter the password you created for the virtual machine, and then select
OK.
5. You might receive a certificate warning during the sign-in process. Select Yes or Continue to create the
connection.
Prepare the Windows server
For the Windows Server 2016 Datacenter server, disable Internet Explorer Enhanced Security Configuration. This
step is required only for initial server registration. You can re-enable it after the server has been registered.
In the Windows Server 2016 Datacenter VM, Server Manager opens automatically. If Server Manager doesn't
open by default, search for it in the Start menu.
1. In Server Manager, select Local Server.
2. On the Properties pane, select the link for IE Enhanced Security Configuration.
3. In the Internet Explorer Enhanced Security Configuration dialog box, select Off for
Administrators and Users.
Now you can add the data disk to the VM.
Add the data disk
1. While still in the Windows Server 2016 Datacenter VM, select Files and storage services >
Volumes > Disks.
2. Right-click the 1 GiB disk named Msft Virtual Disk and select New volume.
3. Complete the wizard. Use the default settings and make note of the assigned drive letter.
4. Select Create.
5. Select Close.
At this point, you've brought the disk online and created a volume. Open File Explorer in the Windows
Server VM to confirm the presence of the recently added data disk.
6. In File Explorer in the VM, expand This PC and open the new drive. It's the F: drive in this example.
7. Right-click and select New > Folder. Name the folder FilesToSync.
8. Open the FilesToSync folder.
9. Right-click and select New > Text Document. Name the text file MyTestFile.
10. Close File Explorer and Server Manager.
Download the Azure PowerShell module
Next, in the Windows Server 2016 Datacenter VM, install the Azure PowerShell module on the server.
1. In the VM, open an elevated PowerShell window.
2. Run the following command:

Install-Module -Name Az

NOTE
If you have a NuGet version that is older than 2.8.5.201, you're prompted to download and install the latest
version of NuGet.

By default, the PowerShell gallery isn't configured as a trusted repository for PowerShellGet. The first
time you use the PSGallery, you see the following prompt:

Untrusted repository

You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet.

Are you sure you want to install the modules from 'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"):

3. Answer Yes or Yes to All to continue with the installation.


The Az module is a rollup module for the Azure PowerShell cmdlets. Installing it downloads all the available
Azure Resource Manager modules and makes their cmdlets available for use.
At this point, you've set up your environment for the tutorial. You're ready to deploy the Storage Sync Service.

Deploy the service


To deploy Azure File Sync, you first place a Storage Sync Service resource into a resource group for your
selected subscription. The Storage Sync Service inherits access permissions from its subscription and resource
group.
1. In the Azure portal, select Create a resource and then search for Azure File Sync.
2. In the search results, select Azure File Sync.
3. Select Create to open the Deploy Storage Sync tab.

On the pane that opens, enter the following information:

VALUE | DESCRIPTION
Name | A unique name (per subscription) for the Storage Sync Service. Use afssyncservice02 for this tutorial.
Subscription | The Azure subscription you use for this tutorial.
Resource group | The resource group that contains the Storage Sync Service. Use afsresgroup101918 for this tutorial.
Location | East US

4. When you're finished, select Create to deploy the Storage Sync Service.
5. Select the Notifications tab, then select Go to resource.
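The Storage Sync Service can also be deployed through the storagesync CLI extension; a hedged sketch (flag names follow the extension's documented surface, so verify with az storagesync create --help):

az extension add --name storagesync

az storagesync create \
    --resource-group afsresgroup101918 \
    --name afssyncservice02 \
    --location eastus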

Install the agent


The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure
file share.
1. In the Windows Server 2016 Datacenter VM, open Internet Explorer.
2. Go to the Microsoft Download Center. Scroll down to the Azure File Sync Agent section and select
Download.
3. Select the check box for StorageSyncAgent_V3_WS2016.EXE and select Next.
4. Select Allow once > Run > Open.
5. If you haven't already done so, close the PowerShell window.
6. Accept the defaults in the Storage Sync Agent Setup Wizard.
7. Select Install.
8. Select Finish.
You've deployed the Storage Sync Service and installed the agent on the Windows Server 2016 Datacenter VM.
Now you need to register the VM with the Storage Sync Service.

Register Windows Server


Registering your Windows server with a Storage Sync Service establishes a trust relationship between your
server (or cluster) and the Storage Sync Service. A server can only be registered to one Storage Sync Service. It
can sync with other servers and Azure file shares that are associated with that Storage Sync Service.
The Server Registration UI should open automatically after you install the Azure File Sync agent. If it doesn't, you
can open it manually from its file location: C:\Program Files\Azure\StorageSyncAgent\ServerRegistration.exe.
1. When the Server Registration UI opens in the VM, select OK.
2. Select Sign-in to begin.
3. Sign in with your Azure account credentials and select Sign-in.
4. Provide the following information:
VALUE | DESCRIPTION
Azure Subscription | The subscription that contains the Storage Sync Service for this tutorial.
Resource Group | The resource group that contains the Storage Sync Service. Use afsresgroup101918 for this tutorial.
Storage Sync Service | The name of the Storage Sync Service. Use afssyncservice02 for this tutorial.

5. Select Register to complete the server registration.
6. As part of the registration process, you're prompted for an additional sign-in. Sign in and select Next.
7. Select OK.

Create a sync group


A sync group defines the sync topology for a set of files. A sync group must contain one cloud endpoint, which
represents an Azure file share. A sync group also must contain one or more server endpoints. A server endpoint
represents a path on a registered server. To create a sync group:
1. In the Azure portal, select + Sync group from the Storage Sync Service. Use afssyncservice02 for this
tutorial.

2. Enter the following information to create a sync group with a cloud endpoint:
VALUE | DESCRIPTION
Sync group name | This name must be unique within the Storage Sync Service, but can be any name that is logical for you. Use afssyncgroup for this tutorial.
Subscription | The subscription where you deployed the Storage Sync Service for this tutorial.
Storage account | Choose Select storage account. On the pane that appears, select the storage account that has the Azure file share you created. Use afsstoracct101918 for this tutorial.
Azure file share | The name of the Azure file share you created. Use afsfileshare for this tutorial.

3. Select Create.
If you select your sync group, you can see that you now have one cloud endpoint.

Add a server endpoint


A server endpoint represents a specific location on a registered server. For example, a folder on a server volume.
To add a server endpoint:
1. Select the newly created sync group and then select Add server endpoint.
2. On the Add server endpoint pane, enter the following information to create a server endpoint:

VALUE | DESCRIPTION
Registered server | The name of the server you created. Use afsvm101918 for this tutorial.
Path | The Windows Server path to the drive you created. Use f:\filestosync in this tutorial.
Cloud Tiering | Leave disabled for this tutorial.
Volume Free Space | Leave blank for this tutorial.

3. Select Create .
Your files are now in sync across your Azure file share and Windows Server.
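The same topology can be sketched with the storagesync CLI extension (command and flag names per the extension, so verify with --help; the registered server ID comes from az storagesync registered-server list, and all names are this tutorial's examples):

az storagesync sync-group create \
    --resource-group afsresgroup101918 \
    --storage-sync-service afssyncservice02 \
    --name afssyncgroup

az storagesync sync-group cloud-endpoint create \
    --resource-group afsresgroup101918 \
    --storage-sync-service afssyncservice02 \
    --sync-group-name afssyncgroup \
    --name afscloudendpoint \
    --storage-account afsstoracct101918 \
    --azure-file-share-name afsfileshare

az storagesync sync-group server-endpoint create \
    --resource-group afsresgroup101918 \
    --storage-sync-service afssyncservice02 \
    --sync-group-name afssyncgroup \
    --name afsserverendpoint \
    --registered-server-id <server-guid> \
    --server-local-path "f:\filestosync"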

Clean up resources
If you'd like to clean up the resources you created in this tutorial, first remove the endpoints from the storage
sync service. Then, unregister the server with your storage sync service, remove the sync groups, and delete the
sync service.
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.

Next steps
In this tutorial, you learned the basic steps to extend the storage capacity of a Windows server by using Azure
File Sync. For a more thorough look at planning for an Azure File Sync deployment, see:
Plan for Azure File Sync deployment
Planning for an Azure Files deployment
5/20/2022 • 21 minutes to read

Azure Files can be deployed in two main ways: by directly mounting the serverless Azure file shares or by
caching Azure file shares on-premises using Azure File Sync. Deployment considerations will differ based on
which option you choose.
Direct mount of an Azure file share: Because Azure Files provides either Server Message Block (SMB)
or Network File System (NFS) access, you can mount Azure file shares on-premises or in the cloud using
the standard SMB or NFS clients available in your OS. Because Azure file shares are serverless, deploying
for production scenarios does not require managing a file server or NAS device. This means you don't
have to apply software patches or swap out physical disks.
Cache Azure file share on-premises with Azure File Sync: Azure File Sync enables you to
centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and
compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud)
Windows Server into a quick cache of your SMB Azure file share.
This article primarily addresses deployment considerations for deploying an Azure file share to be directly
mounted by an on-premises or cloud client. To plan for an Azure File Sync deployment, see Planning for an
Azure File Sync deployment.

Available protocols
Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the Server
Message Block (SMB) protocol and the Network File System (NFS) protocol, allowing you to choose the protocol
that is the best fit for your workload. Azure file shares do not support both the SMB and NFS protocols on the
same file share, although you can create SMB and NFS Azure file shares within the same storage account. NFS
4.1 is currently only supported in the newer FileStorage storage account type (premium file shares only).
With both SMB and NFS file shares, Azure Files offers enterprise-grade file shares that can scale up to meet your
storage needs and can be accessed concurrently by thousands of clients.

FEATURE | SMB | NFS
Supported protocol versions | SMB 3.1.1, SMB 3.0, SMB 2.1 | NFS 4.1
Recommended OS | Windows 10, version 21H1+; Windows Server 2019+; Linux kernel version 4.3+ | Linux kernel version 5.3+
Available tiers | Premium, transaction optimized, hot, and cool | Premium
Billing model | Provisioned capacity for premium file shares; pay-as-you-go for standard file shares | Provisioned capacity
Redundancy | LRS, ZRS, GRS, GZRS | LRS, ZRS
File system semantics | Win32 | POSIX
Authentication | Identity-based authentication (Kerberos), shared key authentication (NTLMv2) | Host-based authentication
Authorization | Win32-style access control lists (ACLs) | UNIX-style permissions
Case sensitivity | Case insensitive, case preserving | Case sensitive
Deleting or modifying open files | With lock only | Yes
File sharing | Windows sharing mode | Byte-range advisory network lock manager
Hard link support | Not supported | Supported
Symbolic link support | Not supported | Supported
Optionally internet accessible | Yes (SMB 3.0+ only) | No
Supports FileREST | Yes | Subset: operations on the FileService, FileShares, Directories, and Files

Management concepts
Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of
storage. This pool of storage can be used to deploy multiple file shares, as well as other storage resources such
as blob containers, queues, or tables. All storage resources that are deployed into a storage account share the
limits that apply to that storage account. For current storage account limits, see Azure Files scalability and
performance targets.
There are two main types of storage accounts you will use for Azure Files deployments:
General purpose version 2 (GPv2) storage accounts: GPv2 storage accounts allow you to deploy Azure
file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2
storage accounts can store other storage resources such as blob containers, queues, or tables.
FileStorage storage accounts: FileStorage storage accounts allow you to deploy Azure file shares on
premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure
file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage
account. Only FileStorage accounts can deploy both SMB and NFS file shares.
There are several other storage account types you may come across in the Azure portal, PowerShell, or CLI. Two
storage account types, BlockBlobStorage and BlobStorage storage accounts, cannot contain Azure file shares.
The other two storage account types you may see are general purpose version 1 (GPv1) and classic storage
accounts, both of which can contain Azure file shares. Although GPv1 and classic storage accounts may contain
Azure file shares, most new features of Azure Files are available only in GPv2 and FileStorage storage accounts.
We therefore recommend using only GPv2 and FileStorage storage accounts for new deployments, and
upgrading GPv1 and classic storage accounts if they already exist in your environment.
When deploying Azure file shares into storage accounts, we recommend:
Only deploying Azure file shares into storage accounts with other Azure file shares. Although GPv2
storage accounts allow you to have mixed purpose storage accounts, because storage resources such as
Azure file shares and blob containers share the storage account's limits, mixing resources together may
make it more difficult to troubleshoot performance issues later on.
Paying attention to a storage account's IOPS limitations when deploying Azure file shares. Ideally, you
would map file shares 1:1 with storage accounts. However, this may not always be possible due to various
limits and restrictions, both from your organization and from Azure. When it is not possible to have only
one file share deployed in one storage account, consider which shares will be highly active and which
shares will be less active to ensure that the hottest file shares don't get put in the same storage account
together.
Only deploying GPv2 and FileStorage accounts and upgrading GPv1 and classic storage accounts when
you find them in your environment.

Identity
To access an Azure file share, the user of the file share must be authenticated and authorized to access the share.
This is done based on the identity of the user accessing the file share. Azure Files integrates with three main
identity providers:
On-premises Active Directory Domain Services (AD DS, or on-premises AD DS): Azure storage
accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows
Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even
as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a
storage account is domain-joined, the end user can mount a file share with the user account they signed into
their PC with. AD-based authentication uses the Kerberos authentication protocol.
Azure Active Directory Domain Services (Azure AD DS): Azure AD DS provides a Microsoft-managed
domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD
DS provides similar benefits to domain joining it to a customer-owned Active Directory. This deployment
option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure
AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
Azure storage account key: Azure file shares may also be mounted with an Azure storage account key. To
mount a file share this way, the storage account name is used as the username and the storage account key
is used as a password. Using the storage account key to mount the Azure file share is effectively an
administrator operation, because the mounted file share will have full permissions to all of the files and
folders on the share, even if they have ACLs. When using the storage account key to mount over SMB, the
NTLMv2 authentication protocol is used.
For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to
behave like Windows file servers or NAS appliances, domain joining your storage account to a customer-
owned Active Directory is the recommended option. To learn more about domain joining your storage
account to a customer-owned Active Directory, see Azure Files Active Directory overview.
If you intend to use the storage account key to access your Azure file shares, we recommend using service
endpoints as described in the Networking section.

Networking
Directly mounting your Azure file share often requires some thought about networking configuration because:
The port that SMB file shares use for communication, port 445, is frequently blocked by many organizations
and internet service providers (ISPs) for outbound (internet) traffic.
NFS file shares rely on network-level authentication and are therefore only accessible via restricted networks.
Using an NFS file share always requires some level of networking configuration.
To configure networking, Azure Files provides an internet accessible public endpoint and integration with Azure
networking features like service endpoints, which help restrict the public endpoint to specified virtual networks,
and private endpoints, which give your storage account a private IP address from within a virtual network IP
address space.
From a practical perspective, this means you will need to consider the following network configurations:
If the required protocol is SMB, and all access over SMB is from clients in Azure, no special networking
configuration is required.
If the required protocol is SMB, and the access is from clients on-premises, a VPN or ExpressRoute
connection from on-premises to your Azure network is required, with Azure Files exposed on your internal
network using private endpoints.
If the required protocol is NFS, you can use either service endpoints or private endpoints to restrict the
network to specified virtual networks.
To learn more about how to configure networking for Azure Files, see Azure Files networking considerations.
In addition to directly connecting to the file share using the public endpoint or using a VPN/ExpressRoute
connection with a private endpoint, SMB provides an additional client access strategy: SMB over QUIC. SMB over
QUIC offers zero-config "SMB VPN" for SMB access over the QUIC transport protocol. Although Azure Files does
not directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows
Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see SMB over QUIC with
Azure File Sync.

Encryption
Azure Files supports two different types of encryption: encryption in transit, which relates to the encryption
used when mounting/accessing the Azure file share, and encryption at rest, which relates to how the data is
encrypted when it is stored on disk.
Encryption in transit

IMPORTANT
This section covers encryption in transit details for SMB shares. For details regarding encryption in transit with NFS
shares, see Security and networking.

By default, all Azure storage accounts have encryption in transit enabled. This means that when you mount a file
share over SMB or access it via the FileREST protocol (such as through the Azure portal, PowerShell/CLI, or
Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.x with encryption or HTTPS.
Clients that do not support SMB 3.x or clients that support SMB 3.x but not SMB encryption will not be able to
mount the Azure file share if encryption in transit is enabled. For more information about which operating
systems support SMB 3.x with encryption, see our detailed documentation for Windows, macOS, and Linux. All
current versions of the PowerShell, CLI, and SDKs support HTTPS.
You can disable encryption in transit for an Azure storage account. When encryption is disabled, Azure Files will
also allow SMB 2.1 and SMB 3.x without encryption, and unencrypted FileREST API calls over HTTP. The primary
reason to disable encryption in transit is to support a legacy application that must be run on an older operating
system, such as Windows Server 2008 R2 or an older Linux distribution. Azure Files only allows SMB 2.1
connections within the same Azure region as the Azure file share; an SMB 2.1 client outside of the Azure region
of the Azure file share, such as on-premises or in a different Azure region, will not be able to access the file
share.
We strongly recommend ensuring encryption of data in-transit is enabled.
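As a sketch, you can enforce this setting on a storage account with PowerShell (the resource group and storage account names below are hypothetical):

# Require secure transfer (SMB 3.x with encryption and HTTPS only) on the storage account.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -EnableHttpsTrafficOnly $true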
For more information about encryption in transit, see requiring secure transfer in Azure storage.
Encryption at rest
All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service
encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because
data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access
to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the
SMB and NFS protocols.
By default, data stored in Azure Files is encrypted with Microsoft-managed keys. With Microsoft-managed keys,
Microsoft holds the keys to encrypt/decrypt the data, and is responsible for rotating them on a regular basis.
You can also choose to manage your own keys, which gives you control over the rotation process. If you choose
to encrypt your file shares with customer-managed keys, Azure Files is authorized to access your keys to fulfill
read and write requests from your clients. With customer-managed keys, you can revoke this authorization at
any time, but this means that your Azure file share will no longer be accessible via SMB or the FileREST API.
Azure Files uses the same encryption scheme as the other Azure storage services such as Azure Blob storage. To
learn more about Azure storage service encryption (SSE), see Azure storage encryption for data at rest.

Data protection
Azure Files has a multi-layered approach to ensuring your data is backed up, recoverable, and protected from
security threats.
Soft delete
Soft delete for file shares is a storage-account level setting that allows you to recover your file share when it is
accidentally deleted. When a file share is deleted, it transitions to a soft deleted state instead of being
permanently erased. You can configure the amount of time soft deleted data is recoverable before it's
permanently deleted, and undelete the share anytime during this retention period.
We recommend turning on soft delete for most file shares. If you have a workflow where share deletion is
common and expected, you may decide to have a short retention period or not have soft delete enabled at all.
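For example, a minimal PowerShell sketch that enables soft delete with a 14-day retention period (the resource group and storage account names are hypothetical placeholders):

# Enable soft delete for all file shares in the storage account and retain
# deleted shares for 14 days before they are permanently removed.
Update-AzStorageFileServiceProperty -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -EnableShareDeleteRetentionPolicy $true -ShareRetentionDays 14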
For more information about soft delete, see Prevent accidental data deletion.
Backup
You can back up your Azure file share via share snapshots, which are read-only, point-in-time copies of your
share. Snapshots are incremental, meaning they only contain as much data as has changed since the previous
snapshot. You can have up to 200 snapshots per file share and retain them for up to 10 years. You can either
manually take these snapshots in the Azure portal, via PowerShell, or command-line interface (CLI), or you can
use Azure Backup. Snapshots are stored within your file share, meaning that if you delete your file share, your
snapshots will also be deleted. To protect your snapshot backups from accidental deletion, ensure soft delete is
enabled for your share.
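As a sketch, the following PowerShell takes a manual snapshot of a share (all names are hypothetical placeholders):

# Create a read-only, point-in-time snapshot of the share "myshare".
New-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -Name "myshare" -Snapshot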
Azure Backup for Azure file shares handles the scheduling and retention of snapshots. Its grandfather-father-son
(GFS) capabilities mean that you can take daily, weekly, monthly, and yearly snapshots, each with their own
distinct retention period. Azure Backup also orchestrates the enablement of soft delete and takes a delete lock
on a storage account as soon as any file share within it is configured for backup. Lastly, Azure Backup provides
certain key monitoring and alerting capabilities that allow customers to have a consolidated view of their
backup estate.
You can perform both item-level and share-level restores in the Azure portal using Azure Backup. All you need
to do is choose the restore point (a particular snapshot), the particular file or directory if relevant, and then the
location (original or alternate) you wish to restore to. The backup service handles copying the snapshot data
over and shows your restore progress in the portal.
For more information about backup, see About Azure file share backup.
Protect Azure Files with Microsoft Defender for Storage
Microsoft Defender for Storage provides an additional layer of security intelligence that generates alerts when it
detects anomalous activity on your storage account, for example unusual access attempts. It also runs malware
hash reputation analysis and will alert on known malware. You can configure Microsoft Defender for Storage at
the subscription or storage account level via Microsoft Defender for Cloud.
For more information, see Introduction to Microsoft Defender for Storage.

Storage tiers
Azure Files offers four different tiers of storage: premium, transaction optimized, hot, and cool. These tiers
allow you to tailor your shares to the performance and price requirements of your scenario:
Premium: Premium file shares are backed by solid-state drives (SSDs) and provide consistent high
performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive
workloads. Premium file shares are suitable for a wide variety of workloads like databases, web site hosting,
and development environments. Premium file shares can be used with both Server Message Block (SMB)
and Network File System (NFS) protocols.
Transaction optimized: Transaction optimized file shares enable transaction-heavy workloads that don't
need the latency offered by premium file shares. Transaction optimized file shares are offered on the
standard storage hardware backed by hard disk drives (HDDs). Transaction optimized has historically been
called "standard"; however, this refers to the storage media type rather than the tier itself (the hot and cool
tiers are also "standard" tiers, because they are on standard storage hardware).
Hot: Hot file shares offer storage optimized for general purpose file sharing scenarios such as team shares.
Hot file shares are offered on the standard storage hardware backed by HDDs.
Cool: Cool file shares offer cost-efficient storage optimized for online archive storage scenarios. Cool file
shares are offered on the standard storage hardware backed by HDDs.
Premium file shares are deployed in the FileStorage storage account kind and are only available in a
provisioned billing model. For more information on the provisioned billing model for premium file shares, see
Understanding provisioning for premium file shares. Standard file shares, including transaction optimized, hot,
and cool file shares, are deployed in the general purpose version 2 (GPv2) storage account kind, and are
available through pay as you go billing.
When selecting a storage tier for your workload, consider your performance and usage requirements. If your
workload requires single-digit latency, or you are using SSD storage media on-premises, the premium tier is
probably the best fit. If low latency isn't as much of a concern, for example with team shares mounted on-
premises from Azure or cached on-premises using Azure File Sync, standard storage may be a better fit from a
cost perspective.
Once you've created a file share in a storage account, you cannot move it to tiers exclusive to different storage
account kinds. For example, to move a transaction optimized file share to the premium tier, you must create a
new file share in a FileStorage storage account and copy the data from your original share to a new file share in
the FileStorage account. We recommend using AzCopy to copy data between Azure file shares, but you may also
use tools like robocopy on Windows or rsync for macOS and Linux.
File shares deployed within GPv2 storage accounts can be moved between the standard tiers (transaction
optimized, hot, and cool) without creating a new storage account and migrating data, but you will incur
transaction costs when you change your tier. When you move a share from a hotter tier to a cooler tier, you will
incur the cooler tier's write transaction charge for each file in the share. Moving a file share from a cooler tier to
a hotter tier will incur the cooler tier's read transaction charge for each file in the share.
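For example, a minimal PowerShell sketch that moves a share between standard tiers in place (the share and account names are hypothetical; remember the per-file transaction charges described above):

# Move the share "myshare" to the cool tier; this stays within the same
# GPv2 storage account, so no data migration is needed.
Update-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -Name "myshare" -AccessTier Cool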
See Understanding Azure Files billing for more information.
Limitations
Standard file shares enabled for large file shares (up to 100 TiB capacity) have certain limitations.
Currently, only locally redundant storage (LRS) and zone redundant storage (ZRS) accounts are supported.
Once you enable large file shares, you cannot convert storage accounts to geo-redundant storage (GRS) or
geo-zone-redundant storage (GZRS) accounts.
Once you enable large file shares, you can't disable it.

Redundancy
To protect the data in your Azure file shares against data loss or corruption, all Azure file shares store multiple
copies of each file as they are written. Depending on the requirements of your workload, you can select
additional degrees of redundancy. Azure Files currently supports the following data redundancy options:
Locally-redundant storage (LRS): With LRS, every file is stored three times within an Azure storage
cluster. This protects against loss of data due to hardware faults, such as a bad disk drive. However, if a
disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may
be lost or unrecoverable.
Zone-redundant storage (ZRS): With ZRS, three copies of each file are stored; however, these copies are
physically isolated in three distinct storage clusters in different Azure availability zones. Availability zones are
unique physical locations within an Azure region. Each zone is made up of one or more data centers
equipped with independent power, cooling, and networking. A write to storage is not accepted until it is
written to the storage clusters in all three availability zones.
Geo-redundant storage (GRS): With GRS, you have two regions, a primary and secondary region. Files
are stored three times within an Azure storage cluster in the primary region. Writes are asynchronously
replicated to a Microsoft-defined secondary region. GRS provides six copies of your data spread between two
Azure regions. In the event of a major disaster such as the permanent loss of an Azure region due to a
natural disaster or other similar event, Microsoft will perform a failover and the secondary becomes the
primary, serving all operations. Since the replication between the primary and secondary regions is
asynchronous, in the event of a major disaster, data not yet replicated to the secondary region will be lost.
You can also perform a manual failover of a geo-redundant storage account.
Geo-zone-redundant storage (GZRS): You can think of GZRS as ZRS with added geo-redundancy. With
GZRS, files are stored three times across three distinct storage clusters in the primary
region. All writes are then asynchronously replicated to a Microsoft-defined secondary region. The failover
process for GZRS works the same as GRS.
Standard Azure file shares up to 5 TiB support all four redundancy types. Standard file shares larger than 5 TiB
only support LRS and ZRS. Premium Azure file shares only support LRS and ZRS.
General purpose version 2 (GPv2) storage accounts provide two additional redundancy options that are not
supported by Azure Files: read accessible geo-redundant storage, often referred to as RA-GRS, and read
accessible geo-zone-redundant storage, often referred to as RA-GZRS. You can provision Azure file shares in
storage accounts with these options set, however Azure Files does not support reading from the secondary
region. Azure file shares deployed into read-accessible geo- or geo-zone redundant storage accounts will be
billed as geo-redundant or geo-zone-redundant storage, respectively.
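The redundancy option is chosen through the SKU when you create the storage account. As a PowerShell sketch (the names and region are hypothetical placeholders):

# Create a GPv2 storage account with zone-redundant storage (ZRS).
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -Location "westeurope" -SkuName "Standard_ZRS" -Kind "StorageV2"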
Standard ZRS availability
ZRS for standard general-purpose v2 storage accounts is available for a subset of Azure regions:
(Africa) South Africa North
(Asia Pacific) Australia East
(Asia Pacific) Central India
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Asia Pacific) South India
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) Germany West Central
(Europe) North Europe
(Europe) Norway East
(Europe) Sweden Central
(Europe) UK South
(Europe) West Europe
(North America) Canada Central
(North America) Central US
(North America) East US
(North America) East US 2
(North America) South Central US
(North America) US Gov Virginia
(North America) West US 2
(North America) West US 3
(South America) Brazil South
Premium ZRS availability
ZRS for premium file shares is available for a subset of Azure regions:
(Asia Pacific) Australia East
(Asia Pacific) Japan East
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) North Europe
(Europe) West Europe
(Europe) UK South
(North America) East US
(North America) East US 2
(North America) West US 2
(South America) Brazil South
Standard GZRS availability
GZRS is available for a subset of Azure regions:
(Africa) South Africa North
(Asia Pacific) Australia East
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Asia Pacific) Southeast Asia
(Asia Pacific) Central India
(Europe) France Central
(Europe) North Europe
(Europe) Norway East
(Europe) Sweden Central
(Europe) UK South
(Europe) West Europe
(North America) Canada Central
(North America) Central US
(North America) East US
(North America) East US 2
(North America) South Central US
(North America) West US 2
(North America) West US 3
(North America) US Gov Virginia
(South America) Brazil South

Migration
In many cases, you will not be establishing a net new file share for your organization, but instead migrating an
existing file share from an on-premises file server or NAS device to Azure Files. Picking the right migration
strategy and tool for your scenario is important for the success of your migration.
The migration overview article briefly covers the basics and contains a table that leads you to migration guides
that likely cover your scenario.

Next steps
Planning for an Azure File Sync Deployment
Deploying Azure Files
Deploying Azure File Sync
Check out the migration overview article to find the migration guide for your scenario
SMB file shares in Azure Files

Azure Files offers two industry-standard protocols for mounting Azure file shares: the Server Message Block
(SMB) protocol and the Network File System (NFS) protocol. Azure Files enables you to pick the file system
protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file
share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same
storage account. For all file shares, Azure Files offers enterprise-grade file shares that can scale up to meet your
storage needs and can be accessed concurrently by thousands of clients.
This article covers SMB Azure file shares. For information about NFS Azure file shares, see NFS file shares in
Azure Files.

Common scenarios
SMB file shares are used for a variety of applications including end-user file shares and file shares that back
databases and applications. SMB file shares are often used in the following scenarios:
End-user file shares such as team shares, home directories, etc.
Backing storage for Windows-based applications, such as SQL Server databases or line-of-business
applications written for Win32 or .NET local file system APIs.
New application and service development, particularly if that application or service has a requirement for
random IO and hierarchical storage.

Features
Azure Files supports the major features of SMB and Azure needed for production deployments of SMB file
shares:
AD domain join and discretionary access control lists (DACLs).
Integrated serverless backup with Azure Backup.
Network isolation with Azure private endpoints.
High network throughput using SMB Multichannel (premium file shares only).
SMB channel encryption including AES-256-GCM, AES-128-GCM, and AES-128-CCM.
Previous version support through VSS integrated share snapshots.
Automatic soft delete on Azure file shares to prevent accidental deletes.
Optionally internet-accessible file shares with internet-safe SMB 3.0+.
SMB file shares can be mounted directly on-premises or can also be cached on-premises with Azure File Sync.

Security
All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service
encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because
data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access
to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the
SMB and NFS protocols.
By default, all Azure storage accounts have encryption in transit enabled. This means that when you mount a file
share over SMB (or access it via the FileREST protocol), Azure Files will only allow the connection if it is made
with SMB 3.x with encryption or HTTPS. Clients that do not support SMB 3.x with SMB channel encryption will
not be able to mount the Azure file share if encryption in transit is enabled.
Azure Files supports AES-256-GCM with SMB 3.1.1 when used with Windows Server 2022 or Windows 11. SMB
3.1.1 also supports AES-128-GCM and SMB 3.0 supports AES-128-CCM. AES-128-GCM is negotiated by default
on Windows 10, version 21H1 for performance reasons.
You can disable encryption in transit for an Azure storage account. When encryption is disabled, Azure Files will
also allow SMB 2.1 and SMB 3.x without encryption. The primary reason to disable encryption in transit is to
support a legacy application that must be run on an older operating system, such as Windows Server 2008 R2
or an older Linux distribution. Azure Files only allows SMB 2.1 connections within the same Azure region as the
Azure file share; an SMB 2.1 client outside of the Azure region of the Azure file share, such as on-premises or in
a different Azure region, will not be able to access the file share.

SMB protocol settings


Azure Files offers multiple settings that affect the behavior, performance, and security of the SMB protocol.
These are configured for all Azure file shares within a storage account.
SMB Multichannel
SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share.
Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account
kind). There is no additional cost for enabling SMB Multichannel in Azure Files. SMB Multichannel is disabled by
default.
Portal
PowerShell
Azure CLI

To view the status of SMB Multichannel, navigate to the storage account containing your premium file shares
and select File shares under the Data storage heading in the storage account table of contents. The status of
SMB Multichannel can be seen under the File share settings section.

To enable or disable SMB Multichannel, select the current status (Enabled or Disabled, depending on the
status). The resulting dialog provides a toggle to enable or disable SMB Multichannel. Select the desired state
and select Save.
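As a PowerShell sketch of the same operation (the resource group and storage account names are hypothetical placeholders):

# Enable SMB Multichannel on the FileStorage (premium) storage account.
Update-AzStorageFileServiceProperty -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -EnableSmbMultichannel $true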
SMB security settings
Azure Files exposes settings that let you toggle the SMB protocol to be more compatible or more secure,
depending on your organization's requirements. By default, Azure Files is configured to be maximally
compatible, so keep in mind that restricting these settings may cause some clients not to be able to connect.
Azure Files exposes the following settings:
SMB versions: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0,
and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure
transfer" is enabled, because SMB 2.1 does not support encryption in transit.
Authentication methods: Which SMB authentication methods are allowed. Supported authentication
methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2
disallows using the storage account key to mount the Azure file share.
Kerberos ticket encryption: Which encryption algorithms are allowed. Supported encryption algorithms
are AES-256 (recommended) and RC4-HMAC.
SMB channel encryption: Which SMB channel encryption algorithms are allowed. Supported encryption
algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
The SMB security settings can be viewed and changed using the Azure portal, PowerShell, or CLI. Please select
the desired tab to see the steps on how to get and set the SMB security settings.

Portal
PowerShell
Azure CLI

To view or change the SMB security settings using the Azure portal, follow these steps:
1. Search for Storage accounts and select the storage account for which you want to view the security
settings.
2. Select Data storage > File shares .
3. Under File share settings, select the value associated with Security. If you haven't modified the
security settings, this value defaults to Maximum compatibility.

4. Under Profile, select Maximum compatibility, Maximum security, or Custom. Selecting Custom
allows you to create a custom profile for SMB protocol versions, SMB channel encryption, authentication
mechanisms, and Kerberos ticket encryption.

After you've entered the desired security settings, select Save.
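As a PowerShell sketch, a restrictive custom profile might look like the following (the resource group and account names are hypothetical; restricting these settings may prevent older clients from connecting):

# Allow only SMB 3.1.1 with Kerberos authentication, AES-256 Kerberos ticket
# encryption, and AES-256-GCM channel encryption.
Update-AzStorageFileServiceProperty -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -SmbProtocolVersion SMB3.1.1 -SmbAuthenticationMethod Kerberos -SmbKerberosTicketEncryption AES-256 -SmbChannelEncryption AES-256-GCM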

Limitations
SMB file shares in Azure Files support a subset of the features supported by the SMB protocol and the NTFS file system.
Although most use cases and applications do not require these features, some applications may not work
properly with Azure Files if they rely on unsupported features. The following features are not supported:
SMB Direct
SMB directory leasing
VSS for SMB file shares (this enables VSS providers to flush their data to the SMB file share before a
snapshot is taken)
Alternate data streams
Extended attributes
Object identifiers
Hard links
Soft links
Reparse points
Sparse files
Short file names (8.3 alias)
Compression

Regional availability
SMB Azure file shares are available in every Azure region, including all public and sovereign regions. Premium
SMB file shares are available in a subset of regions.

Next steps
Plan for an Azure Files deployment
Create an Azure file share
Mount SMB file shares on your preferred operating system:
Mounting SMB file shares on Windows
Mounting SMB file shares on Linux
Mounting SMB file shares on macOS
NFS file shares in Azure Files

Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the Server
Message Block (SMB) protocol and the Network File System (NFS) protocol, allowing you to pick the protocol
that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with
both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same storage
account. Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be
accessed concurrently by thousands of clients.
This article covers NFS Azure file shares. For information about SMB Azure file shares, see SMB file shares in
Azure Files.

IMPORTANT
Before using NFS file shares for production, see the Troubleshoot Azure NFS file shares article for a list of known issues.

Common scenarios
NFS file shares are often used in the following scenarios:
Backing storage for Linux/UNIX-based applications, such as line-of-business applications written using Linux
or POSIX file system APIs (even if they don't require POSIX-compliance).
Workloads that require POSIX-compliant file shares, case sensitivity, or Unix style permissions (UID/GID).
New application and service development, particularly if that application or service has a requirement for
random IO and hierarchical storage.

Features
Fully POSIX-compliant file system.
Hard link support.
Symbolic link support.
NFS file shares currently support most features from the 4.1 protocol specification. Some features, such as
delegations and callbacks of all kinds, Kerberos authentication, and encryption in transit, are not supported.

Security and networking


All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service
encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because
data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access
to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the
SMB and NFS protocols.
For encryption in transit, Azure provides a layer of encryption for all data in transit between Azure datacenters
using MACsec, meaning data is encrypted as it is transferred between Azure datacenters. Unlike
Azure Files using the SMB protocol, file shares using the NFS protocol do not offer user-based authentication.
Authentication for NFS shares is based on the configured network security rules. Due to this, to ensure only
secure connections are established to your NFS share, you must use either service endpoints or private
endpoints. If you want to access shares from on-premises, then you must set up a VPN or ExpressRoute in
addition to a private endpoint. Requests that do not originate from the following sources will be rejected:
A private endpoint
Azure VPN Gateway
Point-to-site (P2S) VPN
Site-to-Site
ExpressRoute
A restricted public endpoint
For more details on the available networking options, see Azure Files networking considerations.

Support for Azure Storage features


The following table shows the current level of support for Azure Storage features in accounts that have the NFS
4.1 feature enabled.
The status of items that appear in this table may change over time as support continues to expand.

STORAGE FEATURE                                            SUPPORTED FOR NFS SHARES

File management plane REST API                             Yes
File data plane REST API                                   No
Encryption at rest                                         Yes
Encryption in transit                                      No
LRS or ZRS redundancy types                                Yes
LRS to ZRS conversion                                      No
Private endpoints                                          Yes
Subdirectory mounts                                        Yes
Grant network access to specific Azure virtual networks    Yes
Grant network access to specific IP addresses              No
Premium tier                                               Yes
Standard tiers (Hot, Cool, and Transaction optimized)      No
POSIX permissions                                          Yes
Root squash                                                Yes
Access same data from Windows and Linux clients            No
Identity-based authentication                              No
Azure file share soft delete                               No
Azure File Sync                                            No
Azure file share backups                                   No
Azure file share snapshots                                 No
GRS or GZRS redundancy types                               No
AzCopy                                                     No
Azure Storage Explorer                                     No
Support for more than 16 groups                            No

Regional availability
Azure NFS file shares are supported in all the same regions that support premium file storage.
For the most up-to-date list, see the Premium Files Storage entry on the Azure Products available by region
page.

Performance
NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The
IOPS and throughput of NFS shares scale with the provisioned capacity. See the provisioned model section of
the Understanding billing article to understand the formulas for IOPS, IO bursting, and throughput. The
average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are
high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress may face
additional latencies due to the high number of open and close operations.

Workloads
IMPORTANT
Before using NFS file shares for production, see the Troubleshoot Azure NFS file shares article for a list of known issues.

NFS has been validated to work well with workloads such as SAP application layer, database backups, database
replication, messaging queues, home directories for general purpose file servers, and content repositories for
application workloads.
The following workloads have known issues; see the Troubleshoot Azure NFS file shares article for details:
Oracle Database will experience incompatibility with its dNFS feature.

Next steps
Create an NFS file share
Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS
Overview of Azure Files identity-based
authentication options for SMB access

Azure Files supports identity-based authentication over Server Message Block (SMB) through on-premises
Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS). This
article focuses on how Azure file shares can use domain services, either on-premises or in Azure, to support
identity-based access to Azure file shares over SMB. Enabling identity-based access for your Azure file shares
allows you to replace existing file servers with Azure file shares without replacing your existing directory
service, maintaining seamless user access to shares.
Azure Files enforces authorization on user access to both the share and the directory/file levels. Share-level
permission assignment can be performed on Azure Active Directory (Azure AD) users or groups managed
through the Azure role-based access control (Azure RBAC) model. With RBAC, the credentials you use for file
access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB
Share Reader to users or groups in Azure AD to grant read access to an Azure file share.
At the directory/file level, Azure Files supports preserving, inheriting, and enforcing Windows DACLs just like
any Windows file servers. You can choose to keep Windows DACLs when copying data over SMB between your
existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use
Azure file shares to back up ACLs along with your data.
To learn how to enable on-premises Active Directory Domain Services authentication for Azure file shares, see
Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares.
To learn how to enable Azure AD DS authentication for Azure file shares, see Enable Azure Active Directory
Domain Services authentication on Azure Files.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS          Yes    No
Standard file shares (GPv2), GRS/GZRS         Yes    No
Premium file shares (FileStorage), LRS/ZRS    Yes    No

Glossary
It's helpful to understand some key terms relating to Azure AD Domain Services authentication over SMB for
Azure file shares:
Kerberos authentication
Kerberos is an authentication protocol that is used to verify the identity of a user or host. For more
information on Kerberos, see Kerberos Authentication Overview.
Server Message Block (SMB) protocol
SMB is an industry-standard network file-sharing protocol. SMB is also known as Common Internet File
System or CIFS. For more information on SMB, see Microsoft SMB Protocol and CIFS Protocol Overview.
Azure Active Directory (Azure AD)
Azure Active Directory (Azure AD) is Microsoft's multi-tenant cloud-based directory and identity
management service. Azure AD combines core directory services, application access management, and
identity protection into a single solution. Storing FSLogix profiles on Azure file shares for Azure AD-joined
VMs is currently in public preview. For more information, see Create a profile container with Azure Files
and Azure Active Directory (preview).
Azure Active Directory Domain Services (Azure AD DS)
Azure AD DS provides managed domain services such as domain join, group policies, LDAP, and
Kerberos/NTLM authentication. These services are fully compatible with Active Directory Domain
Services. For more information, see Azure Active Directory Domain Services.
On-premises Active Directory Domain Services (AD DS)
On-premises Active Directory Domain Services (AD DS) integration with Azure Files provides the
methods for storing directory data while making it available to network users and administrators.
Security is integrated with AD DS through logon authentication and access control to objects in the
directory. With a single network logon, administrators can manage directory data and organization
throughout their network, and authorized network users can access resources anywhere on the network.
AD DS is commonly adopted by enterprises in on-premises environments and AD DS credentials are
used as the identity for access control. For more information, see Active Directory Domain Services
Overview.
Azure role-based access control (Azure RBAC)
Azure role-based access control (Azure RBAC) enables fine-grained access management for Azure. Using
Azure RBAC, you can manage access to resources by granting users the fewest permissions needed to
perform their jobs. For more information on Azure RBAC, see What is Azure role-based access control
(Azure RBAC)?.

Common use cases


Identity-based authentication and support for Windows ACLs on Azure Files is best leveraged for the following
use cases:
Replace on-premises file servers
Deprecating and replacing scattered on-premises file servers is a common problem that every enterprise
encounters in their IT modernization journey. Azure file shares with on-premises AD DS authentication is the
best fit here, when you can migrate the data to Azure Files. A complete migration will allow you to take
advantage of the high availability and scalability benefits while also minimizing the client-side changes. It
provides a seamless migration experience to end users, so they can continue to access their data with the same
credentials using their existing domain joined machines.
Lift and shift applications to Azure
When you lift and shift applications to the cloud, you want to keep the same authentication model for your data.
As we extend the identity-based access control experience to Azure file shares, it eliminates the need to change
your application to modern auth methods and expedites cloud adoption. Azure file shares provide the option to
integrate with either Azure AD DS or on-premises AD DS for authentication. If your plan is to be 100% cloud
native and minimize the efforts managing cloud infrastructures, Azure AD DS would be a better fit as a fully
managed domain service. If you need full compatibility with AD DS capabilities, you may want to consider
extending your AD DS environment to cloud by self-hosting domain controllers on VMs. Either way, we provide
the flexibility to choose the domain services that suits your business needs.
Backup and disaster recovery (DR )
If you are keeping your primary file storage on-premises, Azure file shares can serve as an ideal storage for
backup or DR, to improve business continuity. You can use Azure file shares to back up your data from existing
file servers, while preserving Windows DACLs. For DR scenarios, you can configure an authentication option to
support proper access control enforcement at failover.

Supported scenarios
The following table summarizes the supported Azure file shares authentication scenarios for Azure AD DS and
on-premises AD DS. We recommend selecting the domain service that you adopted for your client environment
for integration with Azure Files. If you have AD DS already setup on-premises or in Azure where your devices
are domain joined to your AD, you should choose to leverage AD DS for Azure file shares authentication.
Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.

Azure AD DS authentication: Azure AD DS-joined Windows machines can access Azure file shares with Azure
AD credentials over SMB.

On-premises AD DS authentication: On-premises AD DS-joined or Azure AD DS-joined Windows machines
can access Azure file shares with on-premises Active Directory credentials that are synced to Azure AD over
SMB. Your client must have line of sight to your AD DS.

Restrictions
Azure AD DS and on-premises AD DS authentication do not support authentication against computer
accounts. You can consider using a service logon account instead.
Neither Azure AD DS authentication nor on-premises AD DS authentication is supported against Azure AD-
joined devices or Azure AD-registered devices.
Azure file shares only support identity-based authentication against one of the following domain services,
either Azure Active Directory Domain Services (Azure AD DS) or on-premises Active Directory Domain
Services (AD DS).
Neither identity-based authentication method is supported with Network File System (NFS) shares.

Advantages of identity-based authentication


Identity-based authentication for Azure Files offers several benefits over using Shared Key authentication:
Extend the traditional identity-based file share access experience to the cloud with on-
premises AD DS and Azure AD DS
If you plan to lift and shift your application to the cloud, replacing traditional file servers with Azure file
shares, then you may want your application to authenticate with either on-premises AD DS or Azure AD
DS credentials to access file data. Azure Files supports using both on-premises AD DS or Azure AD DS
credentials to access Azure file shares over SMB from either on-premises AD DS or Azure AD DS domain-
joined VMs.
Enforce granular access control on Azure file shares
You can grant permissions to a specific identity at the share, directory, or file level. For example, suppose
that you have several teams using a single Azure file share for project collaboration. You can grant all
teams access to non-sensitive directories, while limiting access to directories containing sensitive
financial data to your Finance team only.
Back up Windows ACLs (also known as NTFS) along with your data
You can use Azure file shares to back up your existing on-premises file shares. Azure Files preserves your
ACLs along with your data when you back up a file share to Azure file shares over SMB.
How it works
Azure file shares leverage the Kerberos protocol for authenticating with either on-premises AD DS or Azure AD DS.
When an identity associated with a user or application running on a client attempts to access data in Azure file
shares, the request is sent to the domain service, either AD DS or Azure AD DS, to authenticate the identity. If
authentication is successful, it returns a Kerberos token. The client sends a request that includes the Kerberos
token and Azure file shares use that token to authorize the request. Azure file shares only receive the Kerberos
token, not access credentials.
Before you can enable identity-based authentication on Azure file shares, you must first set up your domain
environment.
AD DS
For on-premises AD DS authentication, you must set up your AD domain controllers and domain join your
machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain
joined clients must have line of sight to the domain service, so they must be within the corporate network or
virtual network (VNET) of your domain service.
The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-prem
AD DS must be synced to Azure AD using Azure AD Connect sync. Only hybrid users that exist in both on-
premises AD DS and Azure AD can be authenticated and authorized for Azure file share access. This is because
the share level permission is configured against the identity represented in Azure AD where the directory/file
level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against
the same hybrid user.

Azure AD DS
For Azure AD DS authentication, you should enable Azure AD Domain Services and domain join the VMs you
plan to access file data from. Your domain-joined VM must reside in the same virtual network (VNET) as your
Azure AD DS.
The following diagram represents the workflow for Azure AD DS authentication to Azure file shares over SMB. It
follows a similar pattern to on-prem AD DS authentication to Azure file shares. There are two major differences:
First, you don't need to create the identity in Azure AD DS to represent the storage account. This is
performed by the enablement process in the background.
Second, all users that exist in Azure AD can be authenticated and authorized. The user can be cloud only
or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any
user configuration. However, the client must be domain joined to Azure AD DS; it cannot be Azure AD
joined or registered.
Enable identity-based authentication
You can enable identity-based authentication with either Azure AD DS or on-premises AD DS for Azure file
shares on your new and existing storage accounts. Only one domain service can be used for file access
authentication on the storage account, which applies to all file shares in the account. Detailed guidance on
setting up your file shares for authentication with Azure AD DS can be found in our article Enable Azure Active
Directory Domain Services authentication on Azure Files, and guidance for on-premises AD DS in our other
article, Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares.
Configure share-level permissions for Azure Files
Once either Azure AD DS or on-premises AD DS authentication is enabled, you can use Azure built-in roles or
configure custom roles for Azure AD identities and assign access rights to any file shares in your storage
accounts. The assigned permission allows the granted identity to get access to the share only, nothing else, not
even the root directory. You still need to separately configure directory or file-level permissions for Azure file
shares.
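For example, a minimal PowerShell sketch that grants a user share-level access with a built-in role (the user, subscription ID, and resource names are hypothetical placeholders):

# Assign the built-in "Storage File Data SMB Share Contributor" role,
# scoped to a single file share.
$scope = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default/fileshares/myshare"
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope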
Configure directory or file-level permissions for Azure Files
Azure file shares enforce standard Windows file permissions at both the directory and file level, including the
root directory. Configuration of directory or file-level permissions is supported over both SMB and REST. Mount
the target file share from your VM and configure permissions using Windows File Explorer, Windows icacls, or
the Set-ACL command.
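For example, after mounting the share (assumed here as drive Z:), you could grant a domain user inheritable modify rights with icacls; the user and path are hypothetical placeholders:

# (OI)(CI) makes the grant inherit to child folders and files; M = modify.
icacls "Z:\shared-folder" /grant "CONTOSO\user:(OI)(CI)M"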
Use the storage account key for superuser permissions
A user with the storage account key can access Azure file shares with superuser permissions. Superuser
permissions bypass all access control restrictions.

IMPORTANT
Our recommended security best practice is to avoid sharing your storage account keys and leverage identity-based
authentication whenever possible.

Preserve directory and file ACLs when importing data to Azure file shares
Azure Files supports preserving directory or file level ACLs when copying data to Azure file shares. You can copy
ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets.
For example, you can use robocopy with the /copy:s flag to copy data as well as ACLs to an Azure file share.
ACLs are preserved by default; you are not required to enable identity-based authentication on your storage
account to preserve ACLs.
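As a sketch, the following robocopy invocation mirrors a local folder to a mounted Azure file share while copying security descriptors (the paths and account name are hypothetical placeholders):

# /MIR mirrors the directory tree; /COPY:DATS copies data, attributes,
# timestamps, and security (the "S" is the ACL-copying /copy:s flag).
robocopy "C:\LocalData" "\\mystorageaccount.file.core.windows.net\myshare" /MIR /COPY:DATS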

Pricing
There is no additional service charge to enable identity-based authentication over SMB on your storage account.
For more information on pricing, see Azure Files pricing and Azure AD Domain Services pricing.
Next steps
For more information about Azure Files and identity-based authentication over SMB, see these resources:
Planning for an Azure Files deployment
Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares
Enable Azure Active Directory Domain Services authentication on Azure Files
FAQ
Azure Files networking considerations

You can access your Azure file shares over the public internet accessible endpoint, over one or more private
endpoints on your network(s), or by caching your Azure file share on-premises with Azure File Sync (SMB file
shares only). This article focuses on how to configure Azure Files for direct access over public and/or private
endpoints. To learn how to cache your Azure file share on-premises with Azure File Sync, see Introduction to
Azure File Sync.
We recommend reading Planning for an Azure Files deployment prior to reading this conceptual guide.
Directly accessing the Azure file share often requires additional thought with respect to networking:
SMB file shares communicate over port 445, which many organizations and internet service providers
(ISPs) block for outbound (internet) traffic. This practice originates from legacy security guidance about
deprecated and non-internet safe versions of the SMB protocol. Although SMB 3.x is an internet-safe
protocol, organizational or ISP policies may not be possible to change. Therefore, mounting an SMB file
share often requires additional networking configuration to use outside of Azure.
NFS file shares rely on network-level authentication and are therefore only accessible via restricted
networks. Using an NFS file share always requires some level of networking configuration.
Configuring public and private endpoints for Azure Files is done on the top-level management object for Azure
Files, the Azure storage account. A storage account is a management construct that represents a shared pool of
storage in which you can deploy multiple Azure file shares, as well as the storage resources for other Azure
storage services, such as blob containers or queues.
https://www.youtube-nocookie.com/embed/jd49W33DxkQ
This video is a guide and demo for how to securely expose Azure file shares directly to information workers and
apps in five simple steps. The sections below provide links and additional context to the documentation
referenced in the video.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS          Yes    No
Standard file shares (GPv2), GRS/GZRS         Yes    No
Premium file shares (FileStorage), LRS/ZRS    Yes    Yes

Secure transfer
By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public
or private endpoint. For Azure Files, the require secure transfer setting is enforced for all protocol access to
the data stored on Azure file shares, including SMB, NFS, and FileREST. The require secure transfer setting
may be disabled to allow unencrypted traffic. You may also see this setting mislabeled as "require secure
transfer for REST API operations".
The SMB, NFS, and FileREST protocols have slightly different behavior with respect to the require secure
transfer setting:
When require secure transfer is enabled on a storage account, all SMB file shares in that storage
account will require the SMB 3.x protocol with AES-128-CCM, AES-128-GCM, or AES-256-GCM
encryption algorithms, depending on the available/required encryption negotiation between the SMB
client and Azure Files. You can toggle which SMB encryption algorithms are allowed via the SMB security
settings. Disabling the require secure transfer setting enables SMB 2.1 and SMB 3.x mounts without
encryption.
NFS file shares do not support an encryption mechanism, so in order to use the NFS protocol to access
an Azure file share, you must disable require secure transfer for the storage account.
When secure transfer is required, the FileREST protocol may only be used with HTTPS. FileREST is only
supported on SMB file shares today.

Public endpoint
The public endpoint for the Azure file shares within a storage account is an internet exposed endpoint. The
public endpoint is the default endpoint for a storage account, however, it can be disabled if desired.
The SMB, NFS, and FileREST protocols can all use the public endpoint. However, each has slightly different rules
for access:
SMB file shares are accessible from anywhere in the world via the storage account's public endpoint with
SMB 3.x with encryption. This means that authenticated requests, such as requests authorized by a user's
logon identity, can originate securely from inside or outside of the Azure region. If SMB 2.1 or SMB 3.x
without encryption is desired, two conditions must be met:
1. The storage account's require secure transfer setting must be disabled.
2. The request must originate from inside of the Azure region. As previously mentioned, encrypted SMB
requests are allowed from anywhere, inside or outside of the Azure region.
NFS file shares are accessible from the storage account's public endpoint if and only if the storage
account's public endpoint is restricted to specific virtual networks using service endpoints. See public
endpoint firewall settings for additional information on service endpoints.
FileREST is accessible via the public endpoint. If secure transfer is required, only HTTPS requests are
accepted. If secure transfer is disabled, HTTP requests are accepted by the public endpoint regardless of
origin.
Public endpoint firewall settings
The storage account firewall restricts access to the public endpoint for a storage account. Using the storage
account firewall, you can restrict access to certain IP addresses/IP address ranges, to specific virtual networks, or
disable the public endpoint entirely.
When you restrict the traffic of the public endpoint to one or more virtual networks, you are using a capability of
the virtual network called service endpoints. Requests directed to the service endpoint of Azure Files are still
going to the storage account public IP address; however, the networking layer is doing additional verification of
the request to validate that it is coming from an authorized virtual network. The SMB, NFS, and FileREST
protocols all support service endpoints. Unlike SMB and FileREST, however, NFS file shares can only be accessed
with the public endpoint through use of a service endpoint.
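As a PowerShell sketch, the following locks the public endpoint down to a single subnet using a service endpoint (all resource names are hypothetical placeholders):

# Deny public access by default, then allow one subnet. The subnet must have
# the Microsoft.Storage service endpoint enabled.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -DefaultAction Deny
$subnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet" |
    Get-AzVirtualNetworkSubnetConfig -Name "mySubnet"
Add-AzStorageAccountNetworkRule -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -VirtualNetworkResourceId $subnet.Id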
To learn more about how to configure the storage account firewall, see configure Azure storage firewalls and
virtual networks.
Public endpoint network routing
Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all
Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File
Sync.

Private endpoints
In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or
more private endpoints. A private endpoint is an endpoint that is only accessible within an Azure virtual
network. When you create a private endpoint for your storage account, your storage account gets a private IP
address from within the address space of your virtual network, much like how an on-premises file server or
NAS device receives an IP address within the dedicated address space of your on-premises network.
An individual private endpoint is associated with a specific Azure virtual network subnet. A storage account may
have private endpoints in more than one virtual network.
Using private endpoints with Azure Files enables you to:
Securely connect to your Azure file shares from on-premises networks using a VPN or ExpressRoute
connection with private-peering.
Secure your Azure file shares by configuring the storage account firewall to block all connections on the
public endpoint. By default, creating a private endpoint does not block connections to the public endpoint.
Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network
(and peering boundaries).
To create a private endpoint, see Configuring private endpoints for Azure Files.
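As a minimal PowerShell sketch of that process (all names, the region, and the subnet are hypothetical placeholders; the "file" group ID targets the Azure Files sub-resource of the storage account):

# Create a private endpoint for the storage account's "file" sub-resource
# in an existing virtual network subnet.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
$connection = New-AzPrivateLinkServiceConnection -Name "myConnection" -PrivateLinkServiceId $storageAccount.Id -GroupId "file"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "mySubnet" }
New-AzPrivateEndpoint -ResourceGroupName "myResourceGroup" -Name "myPrivateEndpoint" -Location "westeurope" -Subnet $subnet -PrivateLinkServiceConnection $connection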
Tunneling traffic over a virtual private network or ExpressRoute
To use private endpoints to access SMB or NFS file shares from on-premises, you must establish a network
tunnel between your on-premises network and Azure. A virtual network, or VNet, is similar to a traditional on-
premises network. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed
in a resource group.
Azure Files supports the following mechanisms to tunnel traffic between your on-premises workstations and
servers and Azure SMB/NFS file shares:
Azure VPN Gateway: A VPN gateway is a specific type of virtual network gateway that is used to send
encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the
internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group alongside of
a storage account or other Azure resources. VPN gateways expose two different types of connections:
Point-to-Site (P2S) VPN gateway connections, which are VPN connections between Azure and an
individual client. This solution is primarily useful for devices that are not part of your organization's
on-premises network. A common use case is for telecommuters who want to be able to mount their
Azure file share from home, a coffee shop, or hotel while on the road. To use a P2S VPN connection
with Azure Files, you'll need to configure a P2S VPN connection for each client that wants to connect.
To simplify the deployment of a P2S VPN connection, see Configure a Point-to-Site (P2S) VPN on
Windows for use with Azure Files and Configure a Point-to-Site (P2S) VPN on Linux for use with Azure
Files.
Site-to-Site (S2S) VPN, which are VPN connections between Azure and your organization's network. A
S2S VPN connection enables you to configure a VPN connection once for a VPN server or device
hosted on your organization's network, rather than configuring a connection for every client device
that needs to access your Azure file share. To simplify the deployment of a S2S VPN connection, see
Configure a Site-to-Site (S2S) VPN for use with Azure Files.
ExpressRoute, which enables you to create a defined route between Azure and your on-premises network
that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-
premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration.
ExpressRoute is also a good option when your organization's policy or regulatory requirements require a
deterministic path to your resources in the cloud.

NOTE
Although we recommend using private endpoints to assist in extending your on-premises network into Azure, it is
technically possible to route to the public endpoint over the VPN connection. However, this requires hard-coding the IP
address for the public endpoint for the Azure storage cluster that serves your storage account. Because storage accounts
may be moved between storage clusters at any time and new clusters are frequently added and removed, this requires
regularly hard-coding all the possible Azure storage IP addresses into your routing rules.

DNS configuration
When you create a private endpoint, by default we also create (or update an existing) private DNS zone
corresponding to the privatelink subdomain. Strictly speaking, creating a private DNS zone is not required to
use a private endpoint for your storage account. However, it is highly recommended in general and explicitly
required when mounting your Azure file share with an Active Directory user principal or accessing it from the
FileREST API.

NOTE
This article uses the storage account DNS suffix for the Azure Public regions, core.windows.net . This commentary also
applies to Azure Sovereign clouds such as the Azure US Government cloud and the Azure China cloud - just substitute
the appropriate suffixes for your environment.

In your private DNS zone, we create an A record for storageaccount.privatelink.file.core.windows.net and a
CNAME record for the regular name of the storage account, which follows the pattern
storageaccount.file.core.windows.net . Because your Azure private DNS zone is connected to the virtual
network containing the private endpoint, you can observe the DNS configuration by calling the Resolve-DnsName
cmdlet from PowerShell in an Azure VM (alternately nslookup in Windows and Linux):

Resolve-DnsName -Name "storageaccount.file.core.windows.net"

For this example, the storage account storageaccount.file.core.windows.net resolves to the private IP address of
the private endpoint, which happens to be 192.168.0.4 .
Name                                  Type    TTL   Section   NameHost
----                                  ----    ---   -------   --------
storageaccount.file.core.windows.net  CNAME   29    Answer    storageaccount.privatelink.file.core.windows.net

Name : storageaccount.privatelink.file.core.windows.net
QueryType : A
TTL : 1769
Section : Answer
IP4Address : 192.168.0.4

Name : privatelink.file.core.windows.net
QueryType : SOA
TTL : 269
Section : Authority
NameAdministrator : azureprivatedns-host.microsoft.com
SerialNumber : 1
TimeToZoneRefresh : 3600
TimeToZoneFailureRetry : 300
TimeToExpiration : 2419200
DefaultTTL : 300

If you run the same command from on-premises, you'll see that the same storage account name resolves to the
public IP address of the storage account instead; storageaccount.file.core.windows.net is a CNAME record for
storageaccount.privatelink.file.core.windows.net , which in turn is a CNAME record for the Azure storage
cluster hosting the storage account:

Name                                              Type    TTL   Section   NameHost
----                                              ----    ---   -------   --------
storageaccount.file.core.windows.net              CNAME   60    Answer    storageaccount.privatelink.file.core.windows.net
storageaccount.privatelink.file.core.windows.net  CNAME   60    Answer    file.par20prdstr01a.store.core.windows.net

Name : file.par20prdstr01a.store.core.windows.net
QueryType : A
TTL : 60
Section : Answer
IP4Address : 52.239.194.40

This reflects the fact that the storage account can expose both the public endpoint and one or more private
endpoints. To ensure that the storage account name resolves to the private endpoint's private IP address, you
must change the configuration on your on-premises DNS servers. This can be accomplished in several ways:
Modifying the hosts file on your clients to make storageaccount.file.core.windows.net resolve to the desired
private endpoint's private IP address. This is strongly discouraged for production environments, because you
will need to make these changes to every client that wants to mount your Azure file shares, and changes to
the storage account or private endpoint will not be automatically handled.
Creating an A record for storageaccount.file.core.windows.net in your on-premises DNS servers. This has
the advantage that clients in your on-premises environment will be able to automatically resolve the storage
account without needing to configure each client. However, this solution is similarly brittle to modifying the
hosts file, because changes to the storage account or private endpoint are not automatically reflected in the
record. Although this solution is brittle, it may be the best choice for some environments.
Forward the core.windows.net zone from your on-premises DNS servers to your Azure private DNS zone.
The Azure private DNS host can be reached through a special IP address ( 168.63.129.16 ) that is only
accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this
limitation, you can run additional DNS servers within your virtual network that will forward
core.windows.net on to the Azure private DNS zone. To simplify this setup, we have provided PowerShell
cmdlets that auto-deploy DNS servers in your Azure virtual network and configure them as desired. To
learn how to set up DNS forwarding, see Configuring DNS with Azure Files.
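
For illustration, here is a minimal sketch of the second and third options on a Windows Server DNS server using the DnsServer module. The zone name, record name, IP addresses, and forwarder target below are placeholders, not values from this article:

# Option: host a zone and create an A record for the storage account on-premises.
# Assumes an Active Directory-integrated DNS server; the record must be updated
# by hand if the private endpoint's IP address ever changes.
Add-DnsServerPrimaryZone -Name "file.core.windows.net" -ReplicationScope "Domain"
Add-DnsServerResourceRecordA -ZoneName "file.core.windows.net" -Name "storageaccount" -IPv4Address "192.168.0.4"

# Option: conditionally forward core.windows.net to DNS forwarders running
# inside the virtual network (10.0.0.5 is a placeholder forwarder address).
Add-DnsServerConditionalForwarderZone -Name "core.windows.net" -MasterServers "10.0.0.5"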

SMB over QUIC


Windows Server 2022 Azure Edition supports a new transport protocol called QUIC for the SMB server
provided by the File Server role. QUIC is a replacement for TCP that is built on top of UDP, providing numerous
advantages over TCP while still providing a reliable transport mechanism. Although there are multiple
advantages to QUIC as a transport protocol, one key advantage for the SMB protocol is that all transport is done
over port 443, which is widely open outbound to support HTTPS. This effectively means that SMB over QUIC
offers an "SMB VPN" for file sharing over the public internet. Windows 11 ships with an SMB over QUIC-capable
client.
At this time, Azure Files does not directly support SMB over QUIC. However, you can create a lightweight cache
of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more
about this option, see Deploy Azure File Sync and SMB over QUIC.

See also
Azure Files overview
Planning for an Azure Files deployment
Disaster recovery and storage account failover

Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may
occur. If your application requires resiliency, Microsoft recommends using geo-redundant storage, so that your
data is copied to a second region. Additionally, customers should have a disaster recovery plan in place for
handling a regional service outage. An important part of a disaster recovery plan is preparing to fail over to the
secondary endpoint in the event that the primary endpoint becomes unavailable.
Azure Storage supports account failover for geo-redundant storage accounts. With account failover, you can
initiate the failover process for your storage account if the primary endpoint becomes unavailable. The failover
updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is
complete, clients can begin writing to the new primary endpoint.
Account failover is available for general-purpose v1, general-purpose v2, and Blob storage account types with
Azure Resource Manager deployments. Account failover is not supported for storage accounts with a
hierarchical namespace enabled.
This article describes the concepts and process involved with an account failover and discusses how to prepare
your storage account for recovery with the least amount of customer impact. To learn how to initiate an account
failover in the Azure portal or PowerShell, see Initiate an account failover.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Choose the right redundancy option


Azure Storage maintains multiple copies of your storage account to ensure durability and high availability.
Which redundancy option you choose for your account depends on the degree of resiliency you need. For
protection against regional outages, configure your account for geo-redundant storage, with or without the
option of read access from the secondary region:
Geo-redundant storage (GRS) or geo-zone-redundant storage (GZRS) copies your data
asynchronously in two geographic regions that are at least hundreds of miles apart. If the primary region
suffers an outage, then the secondary region serves as a redundant source for your data. You can initiate a
failover to transform the secondary endpoint into the primary endpoint.
Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-
GZRS) provides geo-redundant storage with the additional benefit of read access to the secondary endpoint. If
an outage occurs in the primary endpoint, applications configured for read access to the secondary and
designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-
GZRS for maximum availability and durability for your applications.
For more information about redundancy in Azure Storage, see Azure Storage redundancy.
WARNING
Geo-redundant storage carries a risk of data loss. Data is copied to the secondary region asynchronously, meaning there
is a delay between when data written to the primary region is written to the secondary region. In the event of an outage,
write operations to the primary endpoint that have not yet been copied to the secondary endpoint will be lost.

Design for high availability


It's important to design your application for high availability from the start. Refer to these Azure resources for
guidance in designing your application and planning for disaster recovery:
Designing resilient applications for Azure: An overview of the key concepts for architecting highly available
applications in Azure.
Resiliency checklist: A checklist for verifying that your application implements the best design practices for
high availability.
Use geo-redundancy to design highly available applications: Design guidance for building applications to
take advantage of geo-redundant storage.
Tutorial: Build a highly available application with Blob storage: A tutorial that shows how to build a highly
available application that automatically switches between endpoints as failures and recoveries are simulated.
Additionally, keep in mind these best practices for maintaining high availability for your Azure Storage data:
Disks: Use Azure Backup to back up the VM disks used by your Azure virtual machines. Also consider using
Azure Site Recovery to protect your VMs in the event of a regional disaster.
Block blobs: Turn on soft delete to protect against object-level deletions and overwrites, or copy block blobs
to another storage account in a different region using AzCopy, Azure PowerShell, or the Azure Data
Movement library.
Files: Use Azure Backup to back up your file shares. Also enable soft delete to protect against accidental file
share deletions. For geo-redundancy when GRS is not available, use AzCopy or Azure PowerShell to copy
your files to another storage account in a different region (see the sketch after this list).
Tables: Use AzCopy to export table data to another storage account in a different region.
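
As a rough sketch of the AzCopy approach mentioned in the Files bullet above; the account names, share name, and SAS tokens are placeholders you would substitute with your own:

# Copy an entire file share to a storage account in another region. Both URLs
# carry SAS tokens authorizing read on the source and write on the destination.
azcopy copy `
    "https://sourceaccount.file.core.windows.net/myshare?<source-SAS>" `
    "https://targetaccount.file.core.windows.net/myshare?<target-SAS>" `
    --recursive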

Track outages
Customers may subscribe to the Azure Service Health Dashboard to track the health and status of Azure Storage
and other Azure services.
Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your
application should expose write failures in a way that alerts you to the possibility of an outage in the primary
region.

Understand the account failover process


Customer-managed account failover enables you to fail your entire storage account over to the secondary
region if the primary becomes unavailable for any reason. When you force a failover to the secondary region,
clients can begin writing data to the secondary endpoint after the failover is complete. The failover typically
takes about an hour.
NOTE
Customer-managed account failover is not yet supported in accounts that have a hierarchical namespace (Azure Data
Lake Storage Gen2). To learn more, see Blob storage features available in Azure Data Lake Storage Gen2.
In the event of a disaster that affects the primary region, Microsoft will manage the failover for accounts with a
hierarchical namespace. For more information, see Microsoft-managed failover.

How an account failover works


Under normal circumstances, a client writes data to an Azure Storage account in the primary region, and that
data is copied asynchronously to the secondary region. The following image shows the scenario when the
primary region is available:

If the primary endpoint becomes unavailable for any reason, the client is no longer able to write to the storage
account. The following image shows the scenario where the primary has become unavailable, but no recovery
has happened yet:

The customer initiates the account failover to the secondary endpoint. The failover process updates the DNS
entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your
storage account, as shown in the following image:

Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are
being directed to the new primary endpoint. Existing storage service endpoints for blobs, tables, queues, and
files remain the same after the failover.

IMPORTANT
After the failover is complete, the storage account is configured to be locally redundant in the new primary region. To
resume replication to the new secondary, configure the account for geo-redundancy again.
Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For
more information, see Important implications of account failover.

Anticipate data loss


CAUTION

An account failover usually involves some data loss. It's important to understand the implications of initiating an
account failover.
Because data is written asynchronously from the primary region to the secondary region, there is always a delay
before a write to the primary region is copied to the secondary region. If the primary region becomes
unavailable, the most recent writes may not yet have been copied to the secondary region.
When you force a failover, all data in the primary region is lost as the secondary region becomes the new
primary region. The new primary region is configured to be locally redundant after the failover.
All data already copied to the secondary is maintained when the failover happens. However, any data written to
the primary that has not also been copied to the secondary is lost permanently.
The Last Sync Time property indicates the most recent time that data from the primary region is guaranteed
to have been written to the secondary region. All data written prior to the last sync time is available on the
secondary, while data written after the last sync time may not have been written to the secondary and may be
lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating
an account failover.
As a best practice, design your application so that you can use the last sync time to evaluate expected data loss.
For example, if you are logging all write operations, then you can compare the time of your last write operations
to the last sync time to determine which writes have not been synced to the secondary.
For more information about checking the Last Sync Time property, see Check the Last Sync Time property for
a storage account.
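
For example, a minimal check using the Az PowerShell module (resource group and account names are placeholders):

# Retrieve geo-replication statistics; LastSyncTime is the most recent time
# that writes to the primary are guaranteed to have reached the secondary.
$account = Get-AzStorageAccount -ResourceGroupName "myresourcegroup" `
    -Name "mystorageaccount" -IncludeGeoReplicationStats
$account.GeoReplicationStats.LastSyncTime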
Use caution when failing back to the original primary
After you fail over from the primary to the secondary region, your storage account is configured to be locally
redundant in the new primary region. You can then configure the account in the new primary region for geo-
redundancy. When the account is configured for geo-redundancy after a failover, the new primary region
immediately begins copying data to the new secondary region, which was the primary before the original
failover. However, it may take some time before existing data in the new primary is fully copied to the new
secondary.
After the storage account is reconfigured for geo-redundancy, it's possible to initiate a failback from the new
primary to the new secondary. In this case, the original primary region prior to the failover becomes the
primary region again, and is configured to be either locally redundant or zone-redundant, depending on
whether the original primary configuration was GRS/RA-GRS or GZRS/RA-GZRS. All data in the post-failover
primary region (the original secondary) is lost during the failback. If most of the data in the storage account has
not been copied to the new secondary before you fail back, you could suffer a major data loss.
To avoid a major data loss, check the value of the Last Sync Time property before failing back. Compare the
last sync time to the last times that data was written to the new primary to evaluate expected data loss.
After a failback operation, you can configure the new primary region to be geo-redundant again. If the original
primary was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was
configured for ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see Change how a
storage account is replicated.

Initiate an account failover


You can initiate an account failover from the Azure portal, PowerShell, Azure CLI, or the Azure Storage resource
provider API. For more information on how to initiate a failover, see Initiate an account failover.
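
For reference, a minimal PowerShell sketch with placeholder names; the cmdlet is long-running and prompts for confirmation unless -Force is supplied:

# Fail the storage account over to its secondary region. After completion,
# the account is locally redundant (LRS) in the new primary region.
Invoke-AzStorageAccountFailover -ResourceGroupName "myresourcegroup" `
    -Name "mystorageaccount"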

Additional considerations
Review the additional considerations described in this section to understand how your applications and services
may be affected when you force a failover.
Storage account containing archived blobs
Storage accounts containing archived blobs support account failover. After failover is complete, all archived
blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
Storage resource provider
Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all
actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in
your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API
enables you to manage the storage account and related resources.
After a failover is complete, clients can again read and write Azure Storage data in the new primary region.
However, the Azure Storage resource provider does not fail over, so resource management operations must still
take place in the primary region. If the primary region is unavailable, you will not be able to perform
management operations on the storage account.
Because the Azure Storage resource provider does not fail over, the Location property will return the original
primary location after the failover is complete.
Azure virtual machines
Azure virtual machines (VMs) do not fail over as part of an account failover. If the primary region becomes
unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover.
Also, there is a potential data loss associated with the account failover. Microsoft recommends the following
high availability and disaster recovery guidance specific to virtual machines in Azure.
Azure unmanaged disks
As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need
to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the
VM before initiating the failover.
Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged
disks attached to the VM are leased. An account failover cannot proceed when there is a lease on a blob. To
perform the failover, follow these steps:
1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to
which they are attached. Doing so will make it easier to reattach the disks after the failover.
2. Shut down the VM.
3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
4. Wait until the Last Sync Time has updated, and is later than the time at which you deleted the VM. This step
is important, because if the secondary endpoint has not been fully updated with the VHD files when the
failover occurs, then the VM may not function properly in the new primary region.
5. Initiate the account failover.
6. Wait until the account failover is complete and the secondary region has become the new primary region.
7. Create a VM in the new primary region and reattach the VHDs.
8. Start the new VM.
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.

Unsupported features and services


The following features and services are not supported for account failover:
Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares
being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop
working and may also cause unexpected data loss in the case of newly tiered files.
Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not
supported at this time.
A storage account containing premium block blobs cannot be failed over. Storage accounts that support
premium block blobs do not currently support geo-redundancy.
A storage account containing any WORM immutability policy enabled containers cannot be failed over.
Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.

Copying data as an alternative to failover


If your storage account is configured for read access to the secondary, then you can design your application to
read from the secondary endpoint. If you prefer not to fail over in the event of an outage in the primary region,
you can use tools such as AzCopy, Azure PowerShell, or the Azure Data Movement library to copy data from
your storage account in the secondary region to another storage account in an unaffected region. You can then
point your applications to that storage account for both read and write availability.
CAUTION

An account failover should not be used as part of your data migration strategy.
Microsoft-managed failover
In extreme circumstances where a region is lost due to a significant disaster, Microsoft may initiate a regional
failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you
won't have write access to your storage account. Your applications can read from the secondary region if your
storage account is configured for RA-GRS or RA-GZRS.

See also
Use geo-redundancy to design highly available applications
Initiate an account failover
Check the Last Sync Time property for a storage account
Tutorial: Build a highly available application with Blob storage
Overview of share snapshots for Azure Files

Azure Files provides the capability to take share snapshots of file shares. Share snapshots capture the share
state at that point in time. In this article, we describe what capabilities share snapshots provide and how you can
take advantage of them in your custom use case.

Applies to
FILE SHARE TYPE                                 SMB    NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

When to use share snapshots


Protection against application error and data corruption
Applications that use file shares perform operations such as writing, reading, storage, transmission, and
processing. If an application is misconfigured or an unintentional bug is introduced, accidental overwrite or
damage can happen to a few blocks. To help protect against these scenarios, you can take a share snapshot
before you deploy new application code. If a bug or application error is introduced with the new deployment,
you can go back to a previous version of your data on that file share.
Protection against accidental deletions or unintended changes
Imagine that you're working on a text file in a file share. After the text file is closed, you lose the ability to undo
your changes. In these cases, you then need to recover a previous version of the file. You can use share
snapshots to recover previous versions of the file if it's accidentally renamed or deleted.
General backup purposes
After you create a file share, you can periodically create a share snapshot of the file share to use it for data
backup. A share snapshot, when taken periodically, helps maintain previous versions of data that can be used for
future audit requirements or disaster recovery. We recommend using Azure file share backup as a backup
solution for taking and managing snapshots. You may also take and manage snapshots yourself, using either CLI
or PowerShell.

Capabilities
A share snapshot is a point-in-time, read-only copy of your data. You can create, delete, and manage snapshots
by using the REST API. The same capabilities are also available in the client library, Azure CLI, and Azure portal.
You can view snapshots of a share by using both the REST API and SMB. You can retrieve the list of versions of
the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see
Limits).
After a share snapshot is created, it can be read, copied, or deleted, but not modified. You can't copy a whole
share snapshot to another storage account. You have to do that file by file, by using AzCopy or other copying
mechanisms.
Share snapshot capability is provided at the file share level. Retrieval is provided at individual file level, to allow
for restoring individual files. You can restore a complete file share by using SMB, the REST API, the portal, the
client library, or PowerShell/CLI tooling.
A share snapshot of a file share is identical to its base file share. The only difference is that a DateTime value is
appended to the share URI to indicate the time at which the share snapshot was taken. For example, if a file
share URI is http://storagesample.file.core.windows.net/myshare, the share snapshot URI is similar to:

http://storagesample.file.core.windows.net/myshare?snapshot=2011-03-09T01:42:34.9360000Z

Share snapshots persist until they are explicitly deleted. A share snapshot cannot outlive its base file share. You
can enumerate the snapshots associated with the base file share to track your current snapshots.
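
As an illustrative sketch of enumerating snapshots from PowerShell; this assumes a recent Az.Storage module in which Get-AzRmStorageShare exposes an -IncludeSnapshot switch and a SnapshotTime property, and the resource group and account names are placeholders:

# List the shares in the account together with their snapshots; snapshots are
# distinguished by a populated SnapshotTime value.
Get-AzRmStorageShare -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "mystorageaccount" -IncludeSnapshot |
    Select-Object Name, SnapshotTime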
When you create a share snapshot of a file share, the share's system properties are copied to the share
snapshot with the same values. The base files and the file share's metadata are also copied to the share
snapshot, unless you specify separate metadata for the share snapshot when you create it.
You cannot delete a share that has share snapshots unless you delete all the share snapshots first.

Space usage
Share snapshots are incremental in nature. Only the data that has changed after your most recent share
snapshot is saved. This minimizes the time required to create the share snapshot and saves on storage costs.
Any write operation to the object or property or metadata update operation is counted toward "changed
content" and is stored in the share snapshot.
To conserve space, you can delete the share snapshot for the period when the churn was highest.
Even though share snapshots are saved incrementally, you need to retain only the most recent share snapshot in
order to restore the share. When you delete a share snapshot, only the data unique to that share snapshot is
removed. Active snapshots contain all the information that you need to browse and restore your data (from the
time the share snapshot was taken) to the original location or an alternate location. You can restore at the item
level.
Snapshots don't count towards the share size limit. There is no limit to how much space share snapshots occupy
in total. Storage account limits still apply.

Limits
The maximum number of share snapshots that Azure Files allows today is 200. After 200 share snapshots, you
have to delete older share snapshots in order to create new ones.
There is no limit to the number of simultaneous calls for creating share snapshots. There is no limit to the
amount of space that share snapshots of a particular file share can consume.
Today, it is not possible to mount share snapshots on Linux. This is because the Linux SMB client does not
support mounting snapshots like Windows does.

Copying data back to a share from share snapshot


Copy operations that involve files and share snapshots follow these rules:
You can copy individual files in a file share snapshot over to its base share or any other location. You can restore
an earlier version of a file or restore the complete file share by copying file by file from the share snapshot. The
share snapshot is not promoted to base share.
The share snapshot remains intact after copying, but the base file share is overwritten with a copy of the data
that was available in the share snapshot. All the restored files count toward "changed content."
You can copy a file in a share snapshot to a different destination with a different name. The resulting destination
file is a writable file and not a share snapshot. In this case, your base file share will remain intact.
When a destination file is overwritten with a copy, any share snapshots associated with the original destination
file remain intact.

General best practices


We recommend using Azure file share backup as a backup solution to automate taking and managing
snapshots. When you're running infrastructure on Azure, automate backups for data recovery
whenever possible. Automated actions are more reliable than manual processes, helping to improve data
protection and recoverability. You can use the Azure file share backup, the REST API, the Client SDK, or scripting
for automation.
Before you deploy the share snapshot scheduler, carefully consider your share snapshot frequency and retention
settings to avoid incurring unnecessary charges.
Share snapshots provide only file-level protection. Share snapshots don't prevent fat-finger deletions on a file
share or storage account. To help protect a storage account from accidental deletions, you can either enable soft
delete, or lock the storage account and/or the resource group.

Next steps
Working with share snapshots in:
Azure file share backup
PowerShell
CLI
Windows
Share snapshot FAQ
SMB Multichannel performance

SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share.
Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account
kind). There is no additional cost for enabling SMB Multichannel in Azure Files. SMB Multichannel is disabled by
default.

Applies to
FILE SHARE TYPE                                 SMB    NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Benefits
SMB Multichannel enables clients to use multiple network connections that provide increased performance
while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over
multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the IO load across multiple
CPUs.
Increased throughput : Multiple connections allow data to be transferred over multiple paths in parallel
and thereby significantly benefits workloads that use larger file sizes with larger IO sizes, and require high
throughput from a single VM or a smaller set of VMs. Some of these workloads include media and
entertainment for content creation or transcoding, genomics, and financial services risk analysis.
Higher IOPS : NIC RSS capability allows effective load distribution across multiple CPUs with multiple
connections. This helps achieve higher IOPS scale and effective utilization of VM CPUs. This is useful for
workloads that have small IO sizes, such as database applications.
Network fault tolerance : Multiple connections mitigate the risk of disruption since clients no longer rely
on an individual connection.
Automatic configuration : When SMB Multichannel is enabled on clients and storage accounts, it allows for
dynamic discovery of existing connections, and can create additional connection paths as necessary.
Cost optimization : Workloads can achieve higher scale from a single VM, or a small set of VMs, while
connecting to premium shares. This could reduce the total cost of ownership by reducing the number of VMs
necessary to run and manage a workload.
To learn more about SMB Multichannel, refer to the Windows documentation.
This feature provides greater performance benefits to multi-threaded applications but typically does not help
single-threaded applications. See the Performance comparison section for more details.

Limitations
SMB Multichannel for Azure file shares currently has the following restrictions:
Only supported on Windows and Linux clients that are using SMB 3.1.1. Ensure SMB client operating systems
are patched to recommended levels.
The maximum number of channels is four. For details, see the Azure file share scale targets.

Configuration
SMB Multichannel only works when the feature is enabled on both client-side (your client) and service-side
(your Azure storage account).
On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the
following PowerShell command:

Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel

On your Azure storage account, you will need to enable SMB Multichannel. See Enable SMB Multichannel.
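
On the service side, enabling it from PowerShell looks roughly like the following; a sketch assuming Az.Storage 3.x or later, with placeholder resource names:

# Enable SMB Multichannel on the file service of a FileStorage (premium) account.
Update-AzStorageFileServiceProperty -ResourceGroupName "myresourcegroup" `
    -StorageAccountName "mystorageaccount" -EnableSmbMultichannel $true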
Disable SMB Multichannel
In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB
Multichannel. However, in some specific scenarios, such as single-threaded workloads or testing, you may want
to disable SMB Multichannel. See Performance comparison for more details.

Verify SMB Multichannel is configured correctly


1. Create a premium file share or use an existing one.
2. Ensure your client supports SMB Multichannel (one or more network adapters has receive-side scaling
enabled). Refer to the Windows documentation for more details.
3. Mount a file share to your client.
4. Generate load with your application. A copy tool such as robocopy /MT, or any performance tool such as
Diskspd to read/write files can generate load.
5. Open PowerShell as an admin and run: Get-SmbMultichannelConnection | Format-List
6. Look for the MaxChannels and CurrentChannels properties.

Performance comparison
There are two categories of read/write workload patterns - single-threaded and multi-threaded. Most workloads
use multiple files, but there could be specific use cases where the workload works with a single file in a share.
This section covers different use cases and the performance impact for each of them. In general, most workloads
are multi-threaded and distribute workload over multiple files so they should observe significant performance
improvements with SMB Multichannel.
Multi-threaded/multiple files : Depending on the workload pattern, you should see significant
performance improvement in read and write IOs over multiple channels. The performance gains vary from
anywhere between 2x to 4x in terms of IOPS, throughput, and latency. For this category, SMB Multichannel
should be enabled for the best performance.
Multi-threaded/single file : For most use cases in this category, workloads will benefit from having SMB
Multichannel enabled, especially if the workload has an average IO size > ~16k. A few example scenarios
that benefit from SMB Multichannel are backup or recovery of a single large file. An exception where you
may want to disable SMB Multichannel is if your workload is heavy in small IOs. In that case, you may observe a
slight performance loss of ~10%. Depending on the use case, consider spreading load across multiple files,
or disable the feature. See the Configuration section for details.
Single-threaded/multiple files or single file : For most single-threaded workloads, there are minimal
performance benefits due to the lack of parallelism; usually there is a slight performance degradation of ~10% if
SMB Multichannel is enabled. In this case, it's ideal to disable SMB Multichannel, with one exception. If the
single-threaded workload can distribute load across multiple files and uses, on average, a larger IO size (>
~16k), then there should be slight performance benefits from SMB Multichannel.
Performance test configuration
For the charts in this article, the following configuration was used: A single Standard D32s v3 VM with a single
RSS enabled NIC with four channels. Load was generated using diskspd.exe, multiple-threaded with IO depth of
10, and random IOs with various IO sizes.

Size: Standard_D32s_v3
vCPUs: 32
Memory: 128 GiB
Temp storage (SSD): 256 GiB
Max data disks: 32
Max cached and temp storage throughput (IOPS/MBps, cache size in GiB): 64,000 / 512 (800)
Max uncached disk throughput (IOPS/MBps): 51,200 / 768
Max NICs: 8
Expected network bandwidth (Mbps): 16,000

Multi-threaded/multiple files with SMB Multichannel


Load was generated against 10 files with various IO sizes. The scale up test results showed significant
improvements in both IOPS and throughput test results with SMB Multichannel enabled. The following
diagrams depict the results:
On a single NIC, for reads, performance increase of 2x-3x was observed and for writes, gains of 3x-4x in
terms of both IOPS and throughput.
SMB Multichannel allowed IOPS and throughput to reach VM limits even with a single NIC and the four
channel limit.
Because reads from storage are not metered against VM network limits (VM bandwidth caps apply to egress
from the VM), read throughput was able to exceed the published VM limit of 16,000 Mbps (2 GiB/s). The test
achieved >2.7 GiB/s. Writes to storage (egress from the VM) are still subject to VM limits.
Spreading load over multiple files allowed for substantial improvements.
An example command that was used in this testing is:

diskspd.exe -W300 -C5 -r -w100 -b4k -t8 -o8 -Sh -d60 -L -c2G -Z1G z:\write0.dat z:\write1.dat z:\write2.dat z:\write3.dat z:\write4.dat z:\write5.dat z:\write6.dat z:\write7.dat z:\write8.dat z:\write9.dat
Multi-threaded/single file workloads with SMB Multichannel
The load was generated against a single 128 GiB file. With SMB Multichannel enabled, the scale up test with
multi-threaded/single files showed improvements in most cases. The following diagrams depict the results:
On a single NIC with larger average IO size (> ~16k), there were significant improvements in both reads and
writes.
For smaller IO sizes, there was a slight impact of ~10% on performance when SMB Multichannel was
enabled. This could be mitigated by spreading the load over multiple files, or disabling the feature.
Performance is still bound by single file limits.

Optimizing performance
The following tips may help you optimize your performance:
Ensure that your storage account and your client are colocated in the same Azure region to reduce network
latency.
Use multi-threaded applications and spread load across multiple files.
Performance benefits of SMB Multichannel increase with the number of files distributing load.
Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits.
For details, see Understanding provisioning for premium file shares.
Maximum performance of a single VM client is still bound by VM limits. For example, Standard_D32s_v3 can
support a maximum network bandwidth of 16,000 Mbps (or 2 GBps); egress from the VM (writes to storage) is
metered, while ingress (reads from storage) is not. File share performance is subject to machine network limits,
CPUs, internal storage, available network bandwidth, IO sizes, parallelism, as well as other factors.
The initial test is usually a warm-up, discard its results and repeat the test.
If performance is limited by a single client and workload is still below provisioned share limits, higher
performance can be achieved by spreading load over multiple clients.
The relationship between IOPS, throughput, and IO sizes
Throughput = IO size * IOPS
Higher IO sizes drive higher throughput and have higher latencies, resulting in a lower number of net IOPS.
Smaller IO sizes drive higher IOPS, but result in lower net throughput and lower latencies.
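For example, at an 8 KiB IO size, 10,000 IOPS yields roughly 78 MiB/sec (8 KiB × 10,000 ÷ 1,024), while achieving the same 78 MiB/sec with 1 MiB IOs requires only about 78 IOPS.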

Next steps
Enable SMB Multichannel
See the Windows documentation to learn more about SMB Multichannel.
Azure Files scalability and performance targets

Azure Files offers fully managed file shares in the cloud that are accessible via the SMB and NFS file system
protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.
The targets listed here might be affected by other variables in your deployment. For example, the performance
of IO for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You
should test your usage pattern to determine whether the scalability and performance of Azure Files meet your
requirements. You should also expect these limits will increase over time.

Applies to
FILE SHARE TYPE                                 SMB    NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Azure Files scale targets


Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of
storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to
consider: storage accounts, Azure file shares, and files.
Storage account scale targets
There are two main types of storage accounts for Azure Files:
General purpose version 2 (GPv2) storage accounts : GPv2 storage accounts allow you to deploy
Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file
shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or
tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.
FileStorage storage accounts : FileStorage storage accounts allow you to deploy Azure file shares on
premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store
Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a
FileStorage account.

ATTRIBUTE | GPV2 STORAGE ACCOUNTS (STANDARD) | FILESTORAGE STORAGE ACCOUNTS (PREMIUM)
Number of storage accounts per region per subscription | 250 | 250
Maximum storage account capacity | 5 PiB¹ | 100 TiB (provisioned)
Maximum number of file shares | Unlimited | Unlimited; the total provisioned size of all shares must be less than the maximum storage account capacity
Maximum concurrent request rate | 20,000 IOPS¹ | 100,000 IOPS
Throughput (ingress + egress) for LRS/GRS in Australia East, Central US, East Asia, East US 2, Japan East, Korea Central, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US | Ingress: 7,152 MiB/sec; Egress: 14,305 MiB/sec | 10,340 MiB/sec
Throughput (ingress + egress) for ZRS in Australia East, Central US, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US 2 | Ingress: 7,152 MiB/sec; Egress: 14,305 MiB/sec | 10,340 MiB/sec
Throughput (ingress + egress) for redundancy/region combinations not listed above | Ingress: 2,980 MiB/sec; Egress: 5,960 MiB/sec | 10,340 MiB/sec
Maximum number of virtual network rules | 200 | 200
Maximum number of IP address rules | 200 | 200
Management read operations | 800 per 5 minutes | 800 per 5 minutes
Management write operations | 10 per second / 1,200 per hour | 10 per second / 1,200 per hour
Management list operations | 100 per 5 minutes | 100 per 5 minutes

¹ General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by
request. To request an increase in account limits, contact Azure Support.
Azure file share scale targets
ATTRIBUTE | STANDARD FILE SHARES¹ | PREMIUM FILE SHARES
Minimum size of a file share | No minimum | 100 GiB (provisioned)
Provisioned size increase/decrease unit | N/A | 1 GiB
Maximum size of a file share | 100 TiB with the large file share feature enabled²; 5 TiB by default | 100 TiB
Maximum number of files in a file share | No limit | No limit
Maximum request rate (Max IOPS) | 20,000 with the large file share feature enabled²; 1,000 or 100 requests per 100 ms by default | Baseline IOPS: 3,000 + 1 IOPS per GiB, up to 100,000; IOPS bursting: Max (10,000, 3x IOPS per GiB), up to 100,000
Throughput (ingress + egress) for a single file share (MiB/sec) | Up to 300 MiB/sec with the large file share feature enabled²; up to 60 MiB/sec by default | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)
Maximum number of share snapshots | 200 snapshots | 200 snapshots
Maximum object name length (total pathname including all directories and filename) | 2,048 characters | 2,048 characters
Maximum individual pathname component length (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters
Hard link limit (NFS only) | N/A | 178
Maximum number of SMB Multichannel channels | N/A | 4
Maximum number of stored access policies per file share | 5 | 5

¹ The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction
optimized, hot, and cool.
² Default on standard file shares is 5 TiB. See Create an Azure file share for details on how to create file shares
with 100 TiB size and how to increase existing standard file shares up to 100 TiB. To take advantage of the larger
scale targets, you must change your quota so that it is larger than 5 TiB.
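
To make the premium formulas concrete, here is a small PowerShell sketch computing the baseline targets for a hypothetical 10 TiB (10,240 GiB) provisioned share:

$provisionedGiB = 10240

# Baseline IOPS: 3,000 + 1 IOPS per provisioned GiB, capped at 100,000.
$baselineIops = [Math]::Min(3000 + $provisionedGiB, 100000)

# Combined ingress + egress throughput (MiB/sec):
# 100 + CEILING(0.04 * GiB) + CEILING(0.06 * GiB)
$throughputMiBps = 100 + [Math]::Ceiling(0.04 * $provisionedGiB) + [Math]::Ceiling(0.06 * $provisionedGiB)

"Baseline IOPS: $baselineIops"            # 13240
"Throughput (MiB/sec): $throughputMiBps"  # 1125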
File scale targets
ATTRIBUTE | FILES IN STANDARD FILE SHARES | FILES IN PREMIUM FILE SHARES
Maximum file size | 4 TiB | 4 TiB
Maximum concurrent request rate | 1,000 IOPS | Up to 8,000¹
Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (up to 1 GiB/s with SMB Multichannel)²
Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (up to 1 GiB/s with SMB Multichannel)²
Maximum concurrent handles | 2,000 handles | 2,000 handles

¹ Applies to read and write IOs (typically smaller IO sizes, less than or equal to 64 KiB). Metadata operations,
other than reads and writes, may be lower.
² Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details,
see SMB Multichannel performance.

Azure File Sync scale targets


The following table indicates which targets are soft, representing the Microsoft-tested boundary, and which are
hard, indicating an enforced maximum:

RESOURCE | TARGET | HARD LIMIT
Storage Sync Services per region | 100 Storage Sync Services | Yes
Sync groups per Storage Sync Service | 200 sync groups | Yes
Registered servers per Storage Sync Service | 99 servers | Yes
Cloud endpoints per sync group | 1 cloud endpoint | Yes
Server endpoints per sync group | 100 server endpoints | Yes
Server endpoints per server | 30 server endpoints | Yes
File system objects (directories and files) per sync group | 100 million objects | No
Maximum number of file system objects (directories and files) in a directory (not recursive) | 5 million objects | Yes
Maximum object (directories and files) security descriptor size | 64 KiB | Yes
File size | 100 GiB | No
Minimum file size for a file to be tiered | Based on file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size will be 8 KiB. | Yes
NOTE
An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync
will not be able to operate.

Azure File Sync performance metrics


Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the
effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the
underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total
dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance
characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and
directories) processed per second.
For Azure File Sync, performance is critical in two stages:
1. Initial one-time provisioning : To optimize performance on initial provisioning, refer to Onboarding with
Azure File Sync for the optimal deployment details.
2. Ongoing sync : After the data is initially seeded in the Azure file shares, Azure File Sync keeps multiple
endpoints in sync.

NOTE
When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service
resources. As a result, upload performance will be impacted. In extreme cases, some sync sessions will fail to access the
resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is
reduced.

To help you plan your deployment for each of the stages, below are the results observed during internal
testing on a system with the following configuration:

SYSTEM CONFIGURATION | DETAILS
CPU | 64 virtual cores with 64 MiB L3 cache
Memory | 128 GiB
Disk | SAS disks with RAID 10 with battery-backed cache
Network | 1 Gbps network
Workload | General purpose file server

INITIAL ONE-TIME PROVISIONING | DETAILS
Number of objects | 25 million objects
Dataset size | ~4.7 TiB
Average file size | ~200 KiB (largest file: 100 GiB)
Initial cloud change enumeration | 20 objects per second
Upload throughput | 20 objects per second per sync group
Namespace download throughput | 400 objects per second

Initial one-time provisioning


Initial cloud change enumeration : When a new sync group is created, initial cloud change enumeration is
the first step that executes. In this process, the system enumerates all the items in the Azure file share.
During this process, there is no sync activity: no items are downloaded from the cloud endpoint to the server
endpoint, and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once
initial cloud change enumeration completes. The rate of performance is 20 objects per second. Customers can
estimate the time it will take to complete initial cloud change enumeration by determining the number of items
in the cloud share and using the following formula to get the time in days:
Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(20 * 60 * 60
* 24)
Initial sync of data from Windows Server to Azure file share : Many Azure File Sync deployments start
with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud
change enumeration is fast and the majority of time will be spent syncing changes from the Windows Server
into the Azure file share(s).
While sync uploads data to the Azure file share, there is no downtime on the local file server, and administrators
can set up network limits to restrict the amount of bandwidth used for background data upload.
Initial sync is typically limited by the initial upload rate of 20 files per second per sync group. Customers can
estimate the time to upload all their data to Azure using the following formula to get time in days:
Time (in days) for uploading files to a sync group = (Number of objects in server endpoint)/(20 *
60 * 60 * 24)
Splitting your data into multiple server endpoints and sync groups can speed up this initial data upload, because
the upload can be done in parallel for multiple sync groups at a rate of 20 items per second each. So, two sync
groups would be running at a combined rate of 40 items per second. The total time to complete would be the
time estimate for the sync group with the most files to sync.
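
As a quick sanity check of these formulas, here is a short PowerShell sketch using a hypothetical 25 million-object namespace:

$objects = 25000000                  # hypothetical object count on the server endpoint
$objectsPerDay = 20 * 60 * 60 * 24   # 20 objects/sec sustained = 1,728,000 objects per day

# One sync group uploading everything:
$days = $objects / $objectsPerDay                 # ~14.5 days

# The same data split evenly across two sync groups uploading in parallel:
$daysParallel = ($objects / 2) / $objectsPerDay   # ~7.2 days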
Namespace download throughput : When a new server endpoint is added to an existing sync group, the
Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full
namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is
enabled, according to the cloud tiering policy set on the server endpoint.

ONGOING SYNC | DETAILS
Number of objects synced | 125,000 objects (~1% churn)
Dataset size | 50 GiB
Average file size | ~500 KiB
Upload throughput | 20 objects per second per sync group
Full download throughput* | 60 objects per second

*If cloud tiering is enabled, you are likely to observe better performance as only some of the file data is
downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the
endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only
syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as
they are accessed by the user.

NOTE
The numbers above are not an indication of the performance that you will experience. The actual performance will depend
on multiple factors as outlined in the beginning of this section.

As a general guide for your deployment, you should keep a few things in mind:
The object throughput approximately scales in proportion to the number of sync groups on the server.
Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the
server and network.
The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will
experience higher throughput in terms of the number of objects processed per second, but lower MiB per
second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher
MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.

See also
Planning for an Azure Files deployment
Planning for an Azure File Sync deployment
Understand Azure Files billing

Azure Files provides two distinct billing models: provisioned and pay-as-you-go. The provisioned model is only
available for premium file shares, which are file shares deployed in the FileStorage storage account kind. The
pay-as-you-go model is only available for standard file shares, which are file shares deployed in the general
purpose version 2 (GPv2) storage account kind. This article explains how both models work in order to help
you understand your monthly Azure Files bill.
https://www.youtube-nocookie.com/embed/m5_-GsKv4-o
This video is an interview that discusses the basics of the Azure Files billing model, including how to optimize
Azure file shares to achieve the lowest costs possible and how to compare Azure Files to other file storage
offerings on-premises and in the cloud.
For Azure Files pricing information, see Azure Files pricing page.

Applies to
FILE SHARE TYPE                                 SMB    NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Storage units
Azure Files uses the base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB.

ACRONYM | DEFINITION | UNIT
KiB | 1,024 bytes | kibibyte
MiB | 1,024 KiB (1,048,576 bytes) | mebibyte
GiB | 1,024 MiB (1,073,741,824 bytes) | gibibyte
TiB | 1,024 GiB (1,099,511,627,776 bytes) | tebibyte
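
For example, a file share containing exactly 1 TiB of data holds 1,099,511,627,776 bytes, which a base-10 tool would report as approximately 1.1 TB.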

Although the base-2 units of measure are commonly used by most operating systems and tools to measure
storage quantities, they are frequently mislabeled as the base-10 units, which you may be more familiar with:
KB, MB, GB, and TB. Although the reasons for the mislabeling may vary, the common reason why operating
systems like Windows mislabel the storage units is because many operating systems began using these
acronyms before they were standardized by the IEC, BIPM, and NIST.
The following table shows how common operating systems measure and label storage:
| Operating system | Measurement system | Labeling |
| --- | --- | --- |
| Windows | Base-2 | Consistently mislabels as base-10. |
| Linux distributions | Commonly base-2; some software may use base-10 | Inconsistent labeling; alignment between measurement and labeling depends on the software package. |
| macOS, iOS, and iPadOS | Base-10 | Consistently labels as base-10. |

Check with your operating system vendor if your operating system is not listed.
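You can see this mislabeling in practice in PowerShell itself: the byte-suffix literals carry base-10 acronyms but evaluate as base-2 quantities, matching the behavior described above.

```powershell
# PowerShell's KB/MB/GB/TB suffixes are base-2 quantities (KiB/MiB/GiB/TiB),
# despite using the base-10 acronyms -- the same mislabeling described above.
1KB   # 1,024                 (1 KiB)
1MB   # 1,048,576             (1 MiB)
1GB   # 1,073,741,824         (1 GiB)
1TB   # 1,099,511,627,776     (1 TiB)
```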

File share total cost of ownership checklist


If you are migrating to Azure Files from on-premises or comparing Azure Files to other cloud storage solutions,
you should consider the following factors to ensure a fair, apples-to-apples comparison:
How do you pay for storage, IOPS, and bandwidth? With Azure Files, the billing model you use
depends on whether you are deploying premium or standard file shares. Most cloud solutions have
models that align with the principles of either provisioned storage, such as price determinism and
simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually
use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning
unit, and the ability to increase and decrease provisioning.
Are there any methods to optimize storage costs? With Azure Files, you can use capacity
reservations to achieve up to a 36% discount on storage. Other solutions may employ storage efficiency
strategies like deduplication or compression to optimize storage, but remember that these storage
optimization strategies often have non-monetary costs, such as reduced performance. Azure Files
capacity reservations have no side effects on performance.
How do you achieve storage resiliency and redundancy? With Azure Files, storage resiliency and
redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly
available and that at least three copies of your data are accessible. When considering other file storage
options, consider whether storage resiliency and redundancy are built in or something you must assemble
yourself.
What do you need to manage? With Azure Files, the basic unit of management is a storage account.
Other solutions may require additional management, such as operating system updates or virtual
resource management (VMs, disks, network IP addresses, etc.).
What are the costs of value-added products, like backup, security, etc.? Azure Files supports
integrations with multiple first- and third-party value-added services. Value-added services such as Azure
Backup, Azure File Sync, and Azure Defender provide backup, replication and caching, and security
functionality for Azure Files. Value-added solutions, whether on-premises or in the cloud, have their own
licensing and product costs, but are often considered part of the total cost of ownership for file storage.

Reserve capacity
Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-
committing to storage utilization. You should consider purchasing reserved instances for any production
workload, or dev/test workloads with consistent footprints. When you purchase reserved capacity, your
reservation must specify the following dimensions:
Capacity size: Capacity reservations can be for either 10 TiB or 100 TiB, with more significant discounts for
purchasing a higher capacity reservation. You can purchase multiple reservations, including reservations of
different capacity sizes to meet your workload requirements. For example, if your production deployment
has 120 TiB of file shares, you could purchase one 100 TiB reservation and two 10 TiB reservations to meet
the total capacity requirements.
Term: Reservations can be purchased for either a one-year or three-year term, with more significant
discounts for purchasing a longer reservation term.
Tier: The tier of Azure Files for the capacity reservation. Reservations for Azure Files are currently available
for the premium, hot, and cool tiers.
Location: The Azure region for the capacity reservation. Capacity reservations are available in a subset of
Azure regions.
Redundancy: The storage redundancy for the capacity reservation. Reservations are supported for all
redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization.
If you use more storage than you have reserved, you will pay list price for the balance not covered by the
capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the
reservation.
For more information on how to purchase storage reservations, see Optimize costs for Azure Files with reserved
capacity.
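Because reservations come only in 10 TiB and 100 TiB sizes, sizing them for a given footprint is simple arithmetic. The following is a minimal sketch of the greedy split used in the 120 TiB example above; it only computes reservation counts and doesn't purchase anything:

```powershell
# Split a required capacity into 100 TiB and 10 TiB reservations.
$requiredTiB = 120

$hundreds  = [math]::Floor($requiredTiB / 100)
$remainder = $requiredTiB - ($hundreds * 100)
$tens      = [math]::Ceiling($remainder / 10)

Write-Host "Purchase $hundreds x 100 TiB and $tens x 10 TiB reservations"
# For 120 TiB: 1 x 100 TiB + 2 x 10 TiB, matching the example above.
```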

Provisioned model
Azure Files uses a provisioned model for premium file shares. In a provisioned business model, you proactively
specify to the Azure Files service what your storage requirements are, rather than being billed based on what
you use. A provisioned model for storage is similar to buying an on-premises storage solution because when
you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity
regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares
can be dynamically scaled up or down depending on your storage and IO performance characteristics.
The provisioned size of the file share can be increased at any time but can be decreased only after 24 hours
since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as
many times as you like, until you increase it again. IOPS/throughput scale changes will be effective within a few
minutes after the provisioned size change.
It is possible to decrease the size of your provisioned share below your used GiB. If you do this, you will not lose
data, but you will still be billed for the size used and receive the performance of the provisioned share, not the
size used.
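For reference, the provisioned size of a premium share can be changed with the Az PowerShell module. This is a minimal sketch; the resource group, storage account, and share names are placeholders:

```powershell
# Change the provisioned size (quota) of a premium file share to 2 TiB.
# Resource names below are placeholders for illustration.
Update-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myshare" `
    -QuotaGiB 2048
# Note: decreases are allowed only after 24 hours have passed since the last increase.
```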
Provisioning method
When you provision a premium file share, you specify how many GiBs your workload requires. Each GiB that
you provision entitles you to additional IOPS and throughput on a fixed ratio. In addition to the baseline IOPS
for which you are guaranteed, each premium file share supports bursting on a best effort basis. The formulas
for IOPS and throughput are as follows:

| Item | Value |
| --- | --- |
| Minimum size of a file share | 100 GiB |
| Provisioning unit | 1 GiB |
| Baseline IOPS formula | MIN(3000 + 1 * ProvisionedGiB, 100000) |
| Burst limit | MIN(MAX(10000, 3 * ProvisionedGiB), 100000) |
| Burst credits | (BurstLimit - BaselineIOPS) * 3600 |
| Throughput rate (ingress + egress) (MiB/sec) | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB) |

The following table illustrates a few examples of these formulae for the provisioned share sizes:

| Capacity (GiB) | Baseline IOPS | Burst IOPS | Burst credits | Throughput (ingress + egress) (MiB/sec) |
| --- | --- | --- | --- | --- |
| 100 | 3,100 | Up to 10,000 | 24,840,000 | 110 |
| 500 | 3,500 | Up to 10,000 | 23,400,000 | 150 |
| 1,024 | 4,024 | Up to 10,000 | 21,513,600 | 203 |
| 5,120 | 8,120 | Up to 15,360 | 26,064,000 | 613 |
| 10,240 | 13,240 | Up to 30,720 | 62,928,000 | 1,125 |
| 33,792 | 36,792 | Up to 100,000 | 227,548,800 | 3,480 |
| 51,200 | 54,200 | Up to 100,000 | 164,880,000 | 5,220 |
| 102,400 | 100,000 | Up to 100,000 | 0 | 10,340 |
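If you want to estimate these values for other share sizes, the published formulas can be evaluated directly. A minimal PowerShell sketch follows; the function name is illustrative, not part of any Azure module:

```powershell
# Evaluate the published premium file share formulas for a given provisioned size.
function Get-PremiumShareTargets {
    param([int]$ProvisionedGiB)

    $baselineIops = [math]::Min(3000 + 1 * $ProvisionedGiB, 100000)
    $burstLimit   = [math]::Min([math]::Max(10000, 3 * $ProvisionedGiB), 100000)
    $burstCredits = ($burstLimit - $baselineIops) * 3600
    $throughput   = 100 + [math]::Ceiling(0.04 * $ProvisionedGiB) +
                          [math]::Ceiling(0.06 * $ProvisionedGiB)

    [pscustomobject]@{
        ProvisionedGiB   = $ProvisionedGiB
        BaselineIops     = $baselineIops
        BurstLimitIops   = $burstLimit
        BurstCredits     = $burstCredits
        ThroughputMiBSec = $throughput
    }
}

Get-PremiumShareTargets -ProvisionedGiB 1024
# BaselineIops: 4024, BurstLimitIops: 10000,
# BurstCredits: 21513600, ThroughputMiBSec: 203 -- matching the table row above.
```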

Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and
parallelism, among many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a
single Windows virtual machine without SMB Multichannel enabled, Standard F16s_v2, connected to premium
file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the
same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. The same client can achieve up to
~3x performance if SMB Multichannel is enabled on the premium shares. To achieve maximum performance
scale, enable SMB Multichannel and spread the load across multiple VMs. Refer to SMB Multichannel
performance and troubleshooting guide for some common performance issues and workarounds.
Bursting
If your workload needs extra performance to meet peak demand, your share can use burst credits to go
above the share's baseline IOPS limit to give the share the performance it needs to meet the demand. Per the
burst limit formula above, premium file shares can burst up to 10,000 IOPS or up to three IOPS per
provisioned GiB, whichever is higher, capped at 100,000 IOPS. Bursting is automated and operates based on a
credit system. Bursting works on a best-effort basis, and the burst limit is not a guarantee.
Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. For example, a
100 GiB share has 3,100 baseline IOPS. If actual traffic on the share was 100 IOPS for a specific 1-second interval,
then the 3,000 unused IOPS are credited to a burst bucket. Similarly, an idle 1 TiB share accrues burst credits at
4,024 IOPS. These credits are then used later when operations would exceed the baseline IOPS.
Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it will burst up to the maximum
allowed peak burst rate. Shares can continue to burst as long as credits are remaining, but this is based on the
number of burst credits accrued. Each IO beyond baseline IOPS consumes one credit, and once all credits are
consumed, the share would return to the baseline IOPS.
Share credits have three states:
Accruing, when the file share is using less than the baseline IOPS.
Declining, when the file share is using more than the baseline IOPS and is in bursting mode.
Constant, when the file share is using exactly the baseline IOPS; credits are neither accrued nor consumed.
New file shares start with the full number of credits in their burst bucket. Burst credits will not accrue if the
share IOPS fall below baseline IOPS due to throttling by the server.
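As a rough illustration of these mechanics, the following sketch simulates a credit bucket for a 1,024 GiB share driven by a list of per-second IOPS samples. The workload numbers are entirely hypothetical:

```powershell
# Rough simulation of burst credit accrual and consumption for a 1,024 GiB share.
# Each entry in the sample list represents one second of observed IOPS.
$baseline   = 4024          # from the baseline IOPS formula for 1,024 GiB
$burstLimit = 10000
$bucketMax  = ($burstLimit - $baseline) * 3600
$credits    = $bucketMax    # new shares start with a full bucket

foreach ($iops in @(1000, 1000, 8000, 8000, 4024)) {
    if ($iops -lt $baseline) {
        # Accruing: unused baseline IOPS are credited, up to the full bucket.
        $credits = [math]::Min($credits + ($baseline - $iops), $bucketMax)
    }
    elseif ($iops -gt $baseline) {
        # Declining: each IO beyond baseline consumes one credit.
        $credits = [math]::Max($credits - ($iops - $baseline), 0)
    }
    # Constant: at exactly baseline IOPS, credits neither accrue nor drain.
    Write-Host "IOPS: $iops, credits remaining: $credits"
}
```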

Pay-as-you-go model
Azure Files uses a pay-as-you-go business model for standard file shares. In a pay-as-you-go business model,
the amount you pay is determined by how much you actually use, rather than based on a provisioned amount.
At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions
based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to
overprovision to account for future growth or performance requirements, or deprovision if your workload and
data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a
budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
Differences in standard tiers
When you create a standard file share, you pick between the following tiers: transaction optimized, hot, and cool.
All three tiers are stored on the exact same standard storage hardware. The main difference for these three tiers
is their data at-rest storage prices, which are lower in cooler tiers, and the transaction prices, which are higher in
the cooler tiers. This means:
Transaction optimized, as the name implies, optimizes the price for high transaction workloads. Transaction
optimized has the highest data at-rest storage price, but the lowest transaction prices.
Hot is for active workloads that do not involve a large number of transactions, and has a slightly lower data
at-rest storage price, but slightly higher transaction prices as compared to transaction optimized. Think of it
as the middle ground between the transaction optimized and cool tiers.
Cool optimizes the price for workloads that do not have much activity, offering the lowest data at-rest
storage price, but the highest transaction prices.
If you put an infrequently accessed workload in the transaction optimized tier, you will pay almost nothing for
the few times in a month that you make transactions against your share, but you will pay a high amount for the
data storage costs. If you were to move this same share to the cool tier, you would still pay almost nothing for
the transaction costs, simply because you are infrequently making transactions for this workload, but the cool
tier has a much cheaper data storage price. Selecting the appropriate tier for your use case allows you to
considerably reduce your costs.
Similarly, if you put a highly accessed workload in the cool tier, you will pay a lot more in transaction costs, but
less for data storage costs. This can lead to a situation where the increased costs from the transaction prices
increase outweigh the savings from the decreased data storage price, leading you to pay more money on cool
than you would have on transaction optimized. For some usage levels, it's possible that the hot tier will be the
most cost efficient, and the cool tier will be more expensive than transaction optimized.
Your workload and activity level will determine the most cost efficient tier for your standard file share. In
practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of
the share (data stored, write transactions, etc.).
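The comparison itself is simple arithmetic once you know your consumption. The following sketch uses entirely hypothetical placeholder prices and lumps all transaction types together (real billing distinguishes write, list, read, and other transactions); substitute actual prices from the Azure Files pricing page for your region:

```powershell
# Compare monthly cost across standard tiers for an observed consumption profile.
# All prices below are hypothetical placeholders; use your region's actual prices.
$usedGiB      = 1024
$transactions = 2000000   # observed transactions per month

$tiers = @(
    @{ Name = "Transaction optimized"; StoragePerGiB = 0.060; TxPer10K = 0.015 }
    @{ Name = "Hot";                   StoragePerGiB = 0.020; TxPer10K = 0.100 }
    @{ Name = "Cool";                  StoragePerGiB = 0.015; TxPer10K = 0.150 }
)

foreach ($tier in $tiers) {
    $cost = ($usedGiB * $tier.StoragePerGiB) +
            (($transactions / 10000) * $tier.TxPer10K)
    Write-Host ("{0}: `${1:N2}/month" -f $tier.Name, $cost)
}
```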
Choosing a tier
Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in
transaction optimized tier due to the large number of transactions incurred during migration. After your
migration is complete and you've operated for a few days/weeks with regular usage, you can plug your
transaction counts into the pricing calculator to figure out which tier is best suited for your workload.
Because standard file shares only show transaction information at the storage account level, using the storage
metrics to estimate which tier is cheaper at the file share level is an imperfect science. If possible, we
recommend deploying only one file share in each storage account to ensure full visibility into billing.
To see previous transactions:
1. Go to your storage account and select Metrics in the left navigation bar.
2. Select Scope as your storage account name, Metric Namespace as "File", Metric as "Transactions", and
Aggregation as "Sum".
3. Select Apply Splitting.
4. Select Values as "API Name". Select your desired Limit and Sort.
5. Select your desired time period.

NOTE
Make sure you view transactions over a period of time to get a better idea of the average number of transactions. Ensure that
the chosen time period does not overlap with initial provisioning. Scale the average number of transactions observed during
this period to a full month to estimate the number of transactions for an entire month.
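You can also pull the same metric programmatically with the Az.Monitor module. A minimal sketch follows; the subscription, resource group, and account names in the resource ID are placeholders:

```powershell
# Retrieve daily transaction totals for a storage account's file service
# over the last 30 days. Resource ID components are placeholders.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup" +
              "/providers/Microsoft.Storage/storageAccounts/mystorageaccount" +
              "/fileServices/default"

$metrics = Get-AzMetric -ResourceId $resourceId `
    -MetricName "Transactions" `
    -StartTime (Get-Date).AddDays(-30) `
    -EndTime (Get-Date) `
    -TimeGrain 1.00:00:00 `
    -AggregationType Total

# Sum the daily totals to approximate a month of transactions.
($metrics.Data | Measure-Object -Property Total -Sum).Sum
```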

What are transactions?


Transactions are operations or requests against Azure Files to upload, download, or otherwise manipulate the
contents of the file share. Every action taken on a file share translates to one or more transactions, and on
standard shares that use the pay-as-you-go billing model, that translates to transaction costs.
There are five basic transaction categories: write, list, read, other, and delete. All operations done via the REST API
or SMB are bucketed into one of these five categories as follows:

| Transaction bucket | Management operations | Data operations |
| --- | --- | --- |
| Write transactions | CreateShare, SetFileServiceProperties, SetShareMetadata, SetShareProperties, SetShareACL | CopyFile, Create, CreateDirectory, CreateFile, PutRange, PutRangeFromURL, SetDirectoryMetadata, SetFileMetadata, SetFileProperties, SetInfo, Write, PutFilePermission |
| List transactions | ListShares | ListFileRanges, ListFiles, ListHandles |
| Read transactions | GetFileServiceProperties, GetShareAcl, GetShareMetadata, GetShareProperties, GetShareStats | FilePreflightRequest, GetDirectoryMetadata, GetDirectoryProperties, GetFile, GetFileCopyInformation, GetFileMetadata, GetFileProperties, QueryDirectory, QueryInfo, Read, GetFilePermission |
| Other/protocol transactions | | AbortCopyFile, Cancel, ChangeNotify, Close, Echo, Ioctl, Lock, Logoff, Negotiate, OplockBreak, SessionSetup, TreeConnect, TreeDisconnect, CloseHandles, AcquireFileLease, BreakFileLease, ChangeFileLease, ReleaseFileLease |
| Delete transactions | DeleteShare | ClearRange, DeleteDirectory, DeleteFile |

NOTE
NFS 4.1 is only available for premium file shares, which use the provisioned billing model. Transactions do not affect billing
for premium file shares.

Provisioned/quota, logical size, and physical size


Azure Files tracks three distinct quantities with respect to share capacity:
Provisioned size or quota: With both premium and standard file shares, you specify the maximum size
that the file share is allowed to grow to. In premium file shares, this value is called the provisioned size,
and whatever amount you provision is what you pay for, regardless of how much you actually use. In
standard file shares, this value is called quota and does not directly affect your bill. Provisioned size is a
required field for premium file shares. For standard file shares, if a quota is not directly specified, the
share defaults to the maximum value supported by the storage account, either 5 TiB or 100 TiB,
depending on the storage account type and settings.
Logical size: The logical size of a file share or file relates to how big it is without considering how it is
actually stored, where additional optimizations may be applied. One way to think about this is that the
logical size of the file is how many KiB/MiB/GiB will be transferred over the wire if you copy it to a
different location. In both premium and standard file shares, the total logical size of the file share is what
is used for enforcement against provisioned size/quota. In standard file shares, the logical size is the
quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows
properties dialog for a file/folder and as "content length" by Azure Files metrics.
Physical size: The physical size of the file relates to the size of the file as encoded on disk. This may align
with the file's logical size, or it may be smaller, depending on how the file has been written to by the
operating system. A common reason for the logical size and physical size to be different is through the
use of sparse files. The physical size of the files in the share is used for snapshot billing, although
allocated ranges are shared between snapshots if they are unchanged (differential storage). To learn more
about how snapshots are billed in Azure Files, see Snapshots.

Snapshots
Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server.
Snapshots are always differential from the live share and from each other, meaning that you are always paying
only for what's different in each snapshot. For more information on share snapshots, see Overview of snapshots
for Azure Files.
Snapshots do not count against file share size limits, although you are limited to a specific number of snapshots.
To see the current snapshot limits, see Azure file share scale targets.
Snapshots are always billed based on the differential storage utilization of each snapshot; however, this looks
slightly different between premium file shares and standard file shares:
In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price
over the provisioned storage price. This means that you will see a separate line item on your bill
representing snapshots for premium file shares for each FileStorage storage account on your bill.
In standard file shares, snapshots are billed as part of the normal used storage meter, although you are
still only billed for the differential cost of the snapshot. This means that you will not see a separate line
item on your bill representing snapshots for each standard storage account containing Azure file shares.
This also means that differential snapshot usage counts against capacity reservations that are purchased
for standard file shares.
Value-added services for Azure Files may use snapshots as part of their value proposition. See value-added
services for Azure Files for more information on how snapshots are used.
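As a rough sketch of what differential billing means in practice, consider a share where a fixed amount of data changes between snapshots. The numbers below are illustrative only:

```powershell
# Illustrative-only arithmetic for differential snapshot storage.
$liveShareGiB        = 100
$churnPerSnapshotGiB = 5     # data changed between consecutive snapshots
$snapshotCount       = 4

# Each snapshot stores only the ranges that changed since the previous one,
# not another full copy of the share.
$snapshotStorageGiB = $churnPerSnapshotGiB * $snapshotCount
$totalBilledGiB     = $liveShareGiB + $snapshotStorageGiB

Write-Host "Live data: $liveShareGiB GiB, snapshots: $snapshotStorageGiB GiB, total: $totalBilledGiB GiB"
```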

Value-added services
Like on-premises storage solutions which offer first- and third-party features/product integrations to bring
additional value to the hosted file shares, Azure Files provides integration points for first- and third-party
products to integrate with customer-owned file shares. Although these solutions may provide considerable extra
value to Azure Files, you should consider the additional costs that these services add to the total cost of an Azure
Files solution.
Costs are generally broken down into three buckets:
Licensing costs for the value-added service. These may come in the form of a fixed cost per
customer, end user (sometimes referred to as a "head cost"), Azure file share or storage account, or in
units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share.
Transaction costs for the value-added service. Some value-added services have their own concept
of transactions distinct from what Azure Files views as a transaction. These transactions will show up on
your bill under the value-added service's charges; however, they relate directly to how you use the value-
added service with your file share.
Azure Files costs for using a value-added service. Azure Files does not directly charge customers
costs for adding value-added services, but as part of adding value to the Azure file share, the value-added
service might increase the costs that you see on your Azure file share. This is easy to see with standard
file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the
value-added service does transactions against the file share on your behalf, they will show up in your
Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to
premium file shares as well, although it may be less noticeable. Additional transactions against premium
file shares from value-added services count against your provisioned IOPS numbers, meaning that value-
added services may require provisioning additional storage to have enough IOPS or throughput available
for your workload.
When computing the total cost of ownership for your file share, you should consider the costs of Azure Files and
of all value-added services that you would like to use with Azure Files.
There are multiple value-added first- and third-party services. This document covers a subset of the common
first-party services customers use with Azure file shares. You can learn more about services not listed here by
reading the pricing page for that service.
Azure File Sync
Azure File Sync is a value-added service for Azure Files that synchronizes one or more on-premises Windows
file shares with an Azure file share. Because the cloud Azure file share has a complete copy of the data in a
synchronized file share that is available on-premises, you can transform your on-premises Windows File Server
into a cache of the Azure file share to reduce your on-premises footprint. Learn more by reading Introduction to
Azure File Sync.
When considering the total cost of ownership for a solution deployed using Azure File Sync, you should
consider the following cost aspects:
Capital and operational costs of Windows File Servers with one or more server endpoints.
Azure File Sync as a replication solution is agnostic of where the Windows File Servers that are
synchronized with Azure Files are; they could be hosted on-premises, in an Azure VM, or even in another
cloud. Unless you are using Azure File Sync with a Windows File Server that is hosted in an Azure VM, the
capital (i.e. the upfront hardware costs of your solution) and operating (i.e. cost of labor, electricity, etc.)
costs will not be part of your Azure bill, but will still be very much a part of your total cost of ownership.
You should consider the amount of data you need to cache on-premises, the number of CPUs and
amount of memory your Windows File Servers need to host Azure File Sync workloads (see
recommended system resources for more information), and other organization-specific costs you might
have.
Per server licensing cost for servers registered with Azure File Sync. To use Azure File Sync with
a specific Windows File Server, you must first register it with Azure File Sync's Azure resource, the Storage
Sync Service. Each server that you register after the first server has a flat monthly fee. Although this fee is
very small, it is one component of your bill to consider. To see the current price of the server registration
fee for your desired region, see the File Sync section on Azure Files pricing page.
Azure Files costs. Because Azure File Sync is a synchronization solution for Azure Files, it will cause you
to consume Azure Files resources. Some of these resources, like storage consumption, are relatively
obvious, while others such as transaction and snapshot utilization may not be. For most customers, we
recommend using standard file shares with Azure File Sync, although Azure File Sync is fully supported
with premium file shares if desired.
Storage utilization. Azure File Sync will replicate any changes you have made to the path on
your Windows File Server specified on your server endpoint to your Azure file share, thus causing
storage to be consumed. On standard file shares, this means that adding or increasing the size of
existing files on server endpoints will cause storage costs to grow, because the changes will be
replicated. On premium file shares, changes will consume provisioned space; it is your
responsibility to periodically increase provisioning as needed to account for file share growth.
Snapshot utilization. Azure File Sync takes share and file-level snapshots as part of regular
usage. Although snapshot utilization is always differential, this can contribute in a noticeable way
to the total Azure Files bill.
Transactions from churn. As files change on server endpoints, the changes are uploaded to the
cloud share, which generates transactions. When cloud tiering is enabled, additional transactions
are generated for managing tiered files, including I/O happening on tiered files, in addition to
egress costs. Although the quantity and type of transactions is difficult to predict due to churn
rates and cache efficiency, you can use your previous transaction patterns to estimate future costs
if you believe your future usage will be similar to your current usage.
Transactions from cloud enumeration. Azure File Sync enumerates the Azure File Share in the
cloud once per day to discover changes that were made directly to the share so that they can sync
down to the server endpoints. This scan generates transactions which are billed to the storage
account at a rate of one ListFiles transaction per directory per day. You can put this number into
the pricing calculator to estimate the scan cost.

TIP
If you don't know how many folders you have, check out the TreeSize tool from JAM Software GmbH.
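Alternatively, you can count directories on a server endpoint with PowerShell and scale that to a month. A quick sketch; the path is a placeholder:

```powershell
# Count directories under a server endpoint path to estimate daily cloud
# enumeration transactions (one ListFiles transaction per directory per day).
$directoryCount = (Get-ChildItem -Path "D:\ShareData" -Recurse -Directory).Count

# Approximate monthly enumeration transactions for this namespace.
$monthlyScanTransactions = $directoryCount * 30
Write-Host "$directoryCount directories => ~$monthlyScanTransactions list transactions/month"
```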

To optimize costs for Azure Files with Azure File Sync, you should consider the tier of your file share. For more
information on how to pick the tier for each file share, see choosing a file share tier.
If you are migrating to Azure File Sync from StorSimple, see Comparing the costs of StorSimple to Azure File
Sync.
Azure Backup
Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file
shares, as well as other value-added services such as Azure File Sync. Azure Backup for Azure Files is a
snapshot-based backup solution, meaning that Azure Backup provides a scheduling mechanism for
automatically taking snapshots on an administrator-defined schedule and a user-friendly interface for restoring
deleted files/folders or the entire share to a particular point in time. To learn more about Azure Backup for Azure
Files, see About Azure file share backup.
When considering the costs of using Azure Backup to back up your Azure file shares, you should consider the
following:
Protected instance licensing cost for Azure file share data. Azure Backup charges a protected
instance licensing cost per storage account containing backed up Azure file shares. A protected instance is
defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB of Azure file
share storage are subject to a fractional protected instance cost. See Azure Backup pricing for more
information (note that you must select Azure Files from the list of services Azure Backup can protect).
Azure Files costs. Azure Backup increases the costs of Azure Files in the following ways:
Differential costs from Azure file share snapshots. Azure Backup automates taking Azure
file share snapshots on an administrator-defined schedule. Snapshots are always differential;
however, the additional cost added to the total bill depends on the length of time snapshots are
kept and the amount of churn on the file share during that time, because that dictates how
different the snapshot is from the live file share and therefore how much additional data is stored
by Azure Files.
Transaction costs from restore operations. Restore operations from the snapshot to the live
share will cause transactions. For standard file shares, this means that reads from snapshots/writes
from restores will be billed as normal file share transactions. For premium file shares, these
operations are counted against the provisioned IOPS for the file share.
Microsoft Defender for Storage
Microsoft Defender provides support for Azure Files as part of its Microsoft Defender for Storage product.
Microsoft Defender for Storage detects unusual and potentially harmful attempts to access or exploit your Azure
file shares over SMB or FileREST. Microsoft Defender for Storage is enabled on the subscription level for all file
shares in storage accounts in that subscription.
Microsoft Defender for Storage does not support antivirus capabilities for Azure file shares.
The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product
levies on top of the transactions that are done against the Azure file share. Although these costs are based on
the transactions incurred in Azure Files, they are not part of the billing for Azure Files, but rather are part of the
Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file
shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be
found on Microsoft Defender for Cloud pricing page under the Microsoft Defender for Storage table row.
Transaction-heavy file shares will incur significant costs when using Microsoft Defender for Storage. Based on these
costs, you may wish to opt out of Microsoft Defender for Storage for specific storage accounts. For more
information, see Exclude a storage account from Microsoft Defender for Storage protections.

See also
Azure Files pricing page.
Planning for an Azure Files deployment and Planning for an Azure File Sync deployment.
Create a file share and Deploy Azure File Sync.
Prevent accidental deletion of Azure file shares

Azure Files offers soft delete for file shares. Soft delete allows you to recover your file share when it is
mistakenly deleted by an application or other storage account user.

Applies to
| File share type | SMB | NFS |
| --- | --- | --- |
| Standard file shares (GPv2), LRS/ZRS | Yes | No |
| Standard file shares (GPv2), GRS/GZRS | Yes | No |
| Premium file shares (FileStorage), LRS/ZRS | Yes | Yes |

How soft delete works


When soft delete for Azure file shares is enabled, if a file share is deleted, it transitions to a soft deleted state
instead of being permanently erased. You can configure the amount of time soft deleted data is recoverable
before it's permanently deleted, and you can undelete the share at any time during this retention period. After being
undeleted, the share and all of its contents, including snapshots, are restored to the state they were in prior to
deletion. Soft delete only works at the file share level; individual files that are deleted will still be permanently
erased.
Soft delete can be enabled on either new or existing file shares. Soft delete is also backwards compatible, so you
don't have to make any changes to your applications to take advantage of the protections of soft delete.
To permanently delete a file share in a soft delete state before its expiry time, you must undelete the share,
disable soft delete, and then delete the share again. Then you should re-enable soft delete, since any other file
shares in that storage account will be vulnerable to accidental deletion while soft delete is off.
For soft-deleted premium file shares, the file share quota (the provisioned size of a file share) is used in the total
storage account quota calculation until the soft-deleted share expiry date, when the share is fully deleted.

Configuration settings
Enabling or disabling soft delete
Soft delete for file shares is enabled at the storage account level; because of this, the soft delete settings apply to
all file shares within a storage account. Soft delete is enabled by default for new storage accounts and can be
disabled or enabled at any time. Soft delete is not automatically enabled for existing storage accounts unless
Azure file share backup was configured for an Azure file share in that storage account. If Azure file share backup
was configured, then soft delete for Azure file shares is automatically enabled on that share's storage account.
If you enable soft delete for file shares, delete some file shares, and then disable soft delete, you can still access
and recover those file shares if they were soft deleted while the feature was on. When you enable soft delete, you
also need to configure the retention period.
Retention period
The retention period is the amount of time that soft deleted file shares are stored and available for recovery. For
file shares that are explicitly deleted, the retention period clock starts when the data is deleted. Currently you can
specify a retention period between 1 and 365 days. You can change the soft delete retention period at any time.
An updated retention period will only apply to shares deleted after the retention period has been updated.
Shares deleted before the retention period update will expire based on the retention period that was configured
when that data was deleted.
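For reference, soft delete and its retention period can be configured with the Az PowerShell module. A minimal sketch with placeholder resource names; see Enable soft delete for the full procedure:

```powershell
# Enable soft delete for all file shares in a storage account, with a
# 14-day retention period. Resource names are placeholders.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -EnableShareDeleteRetentionPolicy $true `
    -ShareRetentionDays 14
```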

Pricing and billing


Both standard and premium file shares are billed on the used capacity when soft deleted, rather than
provisioned capacity. Additionally, premium file shares are billed at the snapshot rate while in the soft delete
state. Standard file shares are billed at the regular rate while in the soft delete state. You won't be charged for
data that is permanently deleted after the configured retention period.
For more information on prices for Azure Files in general, see the Azure Files Pricing Page.
When you initially enable soft delete, we recommend using a small retention period to better understand how
the feature affects your bill.

Next steps
To learn how to enable and use soft delete, continue to Enable soft delete.
To learn how to prevent a storage account from being deleted or modified, see Apply an Azure Resource
Manager lock to a storage account.
To learn how to apply locks to resources and resource groups, see Lock resources to prevent unexpected
changes.
About Azure file share backup

Azure file share backup is a native, cloud based backup solution that protects your data in the cloud and
eliminates additional maintenance overheads involved in on-premises backup solutions. The Azure Backup
service smoothly integrates with Azure File Sync, and allows you to centralize your file share data as well as
your backups. This simple, reliable, and secure solution enables you to configure protection for your enterprise
file shares in a few simple steps with an assurance that you can recover your data in case of any accidental
deletion.

Key benefits of Azure file share backup


Zero infrastructure: No deployment is needed to configure protection for your file shares.
Customized retention: You can configure backups with daily/weekly/monthly/yearly retention according
to your requirements.
Built-in management capabilities: You can schedule backups and specify the desired retention period
without the additional overhead of data pruning.
Instant restore: Azure file share backup uses file share snapshots, so you can select just the files you want
to restore instantly.
Alerting and reporting: You can configure alerts for backup and restore failures and use the reporting
solution provided by Azure Backup to get insights on backups across your file shares.
Protection against accidental deletion of file shares: Azure Backup enables the soft delete feature on a
storage account level with a retention period of 14 days. Even if a malicious actor deletes the file share, the
file share's contents and recovery points (snapshots) are retained for a configurable retention period,
allowing the successful and complete recovery of source contents and snapshots with no data loss.
Protection against accidental deletion of snapshots: Azure Backup acquires a lease on the snapshots
taken by scheduled/on-demand backup jobs. The lease acts as a lock that adds a layer of protection and
secures the snapshots against accidental deletion.

Architecture
How the backup process works
1. The first step in configuring backup for Azure file shares is creating a Recovery Services vault. The vault
gives you a consolidated view of the backups configured across different workloads.
2. Once you create a vault, the Azure Backup service discovers the storage accounts that can be registered
with the vault. You can select the storage account hosting the file shares you want to protect.
3. After you select the storage account, the Azure Backup service lists the set of file shares present in the
storage account and stores their names in the management layer catalog.
4. You then configure the backup policy (schedule and retention) according to your requirements, and select
the file shares to back up. The Azure Backup service registers the schedules in the control plane to do
scheduled backups.
5. Based on the policy specified, the Azure Backup scheduler triggers backups at the scheduled time. As part
of that job, the file share snapshot is created using the File share API. Only the snapshot URL is stored in
the metadata store.

NOTE
The file share data isn't transferred to the Backup service, since the Backup service creates and manages
snapshots that are part of your storage account, and backups aren't transferred to the vault.

6. You can restore the Azure file share contents (individual files or the full share) from snapshots available
on the source file share. Once the operation is triggered, the snapshot URL is retrieved from the metadata
store and the data is listed and transferred from the source snapshot to the target file share of your
choice.
7. If you're using Azure File Sync, the Backup service indicates to the Azure File Sync service the paths of the
files being restored, which then triggers a background change detection process on these files. Any files
that have changed are synced down to the server endpoint. This process happens in parallel with the
original restore to the Azure file share.
8. The backup and restore job monitoring data is pushed to the Azure Backup Monitoring service. This
allows you to monitor cloud backups for your file shares in a single dashboard. In addition, you can also
configure alerts or email notifications when backup health is affected. Emails are sent via the Azure email
service.

Backup costs
There are two costs associated with the Azure file share backup solution:
1. Snapshot storage cost: Storage charges incurred for snapshots are billed along with Azure Files usage
according to the pricing details mentioned here.
2. Protected instance fee: Starting September 1, 2020, customers are charged a protected instance fee
according to the pricing details mentioned here. The protected instance fee depends on the total size of
protected file shares in a storage account.
To get detailed estimates for backing up Azure file shares, you can download the detailed Azure Backup pricing
estimator.

How the snapshot lease works


When Azure Backup takes a snapshot, scheduled or on-demand, it adds a lock on the snapshot using the lease
snapshot capability of the Files platform. The lock protects the snapshots from accidental deletion, and the lock's
duration is infinite. If a file share has leased snapshots, deleting the share is no longer a one-click operation, so
you also get protection against accidental deletion of the backed-up file share.
To protect a snapshot from deletion while restore operation is in progress, Azure Backup checks the lease status
on the snapshot. If it's non-leased, it adds a lock by taking a lease on the snapshot.
The following diagram explains the lifecycle of the lease acquired by Azure Backup:

Next steps
Learn how to Back up Azure file shares
Find answers to Questions about backing up Azure Files
Azure Storage encryption for data at rest

Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the
cloud. Azure Storage encryption protects your data and helps you meet your organizational security and
compliance commitments.

About Azure Storage encryption


Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the
strongest block ciphers available, and is FIPS 140-2 compliant. Azure Storage encryption is similar to BitLocker
encryption on Windows.
Azure Storage encryption is enabled for all storage accounts, including both Resource Manager and classic
storage accounts. Azure Storage encryption cannot be disabled. Because your data is secured by default, you
don't need to modify your code or applications to take advantage of Azure Storage encryption.
Data in a storage account is encrypted regardless of performance tier (standard or premium), access tier (hot or
cool), or deployment model (Azure Resource Manager or classic). All blobs in the archive tier are also encrypted.
All Azure Storage redundancy options support encryption, and all data in both the primary and secondary
regions is encrypted when geo-replication is enabled. All Azure Storage resources are encrypted, including
blobs, disks, files, queues, and tables. All object metadata is also encrypted. There is no additional cost for Azure
Storage encryption.
Every block blob, append blob, or page blob that was written to Azure Storage after October 20, 2017 is
encrypted. Blobs created prior to this date continue to be encrypted by a background process. To force the
encryption of a blob that was created before October 20, 2017, you can rewrite the blob. To learn how to check
the encryption status of a blob, see Check the encryption status of a blob.
For more information about the cryptographic modules underlying Azure Storage encryption, see Cryptography
API: Next Generation.
For information about encryption and key management for Azure managed disks, see Server-side encryption of
Azure managed disks.

About encryption key management


Data in a new storage account is encrypted with Microsoft-managed keys by default. You can continue to rely on
Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. If
you choose to manage encryption with your own keys, you have two options. You can use either type of key
management, or both:
You can specify a customer-managed key to use for encrypting and decrypting data in Blob storage and in
Azure Files.1,2 Customer-managed keys must be stored in Azure Key Vault or Azure Key Vault Managed
Hardware Security Model (HSM) (preview). For more information about customer-managed keys, see Use
customer-managed keys for Azure Storage encryption.
You can specify a customer-provided key on Blob storage operations. A client making a read or write request
against Blob storage can include an encryption key on the request for granular control over how blob data is
encrypted and decrypted. For more information about customer-provided keys, see Provide an encryption
key on a request to Blob storage.
The following table compares key management options for Azure Storage encryption.
K EY M A N A GEM EN T M IC RO SO F T - M A N A GED C USTO M ER- M A N A GED
PA RA M ET ER K EY S K EY S C USTO M ER- P RO VIDED K EY S

Encryption/decryption Azure Azure Azure


operations

Azure Storage services All Blob storage, Azure Files1,2 Blob storage
supported

Key storage Microsoft key store Azure Key Vault or Key Customer's own key store
Vault HSM

Key rotation responsibility Microsoft Customer Customer

Key control Microsoft Customer Customer

1 For information about creating an account that supports using customer-managed keys with Queue storage,
see Create an account that supports customer-managed keys for queues.
2 For information about creating an account that supports using customer-managed keys with Table storage, see

Create an account that supports customer-managed keys for tables.

NOTE
Microsoft-managed keys are rotated appropriately per compliance requirements. If you have specific key rotation
requirements, Microsoft recommends that you move to customer-managed keys so that you can manage and audit the
rotation yourself.

Doubly encrypt data with infrastructure encryption


Customers who require high levels of assurance that their data is secure can also enable 256-bit AES encryption
at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account is
encrypted twice — once at the service level and once at the infrastructure level — with two different encryption
algorithms and two different keys. Double encryption of Azure Storage data protects against a scenario where
one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of
encryption continues to protect your data.
Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with
Azure Key Vault. Infrastructure-level encryption relies on Microsoft-managed keys and always uses a separate
key.
For more information about how to create a storage account that enables infrastructure encryption, see Create a
storage account with infrastructure encryption enabled for double encryption of data.
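As a brief sketch of what that looks like with Az PowerShell (infrastructure encryption can only be specified when the account is created, and the resource names below are placeholders):

```powershell
# Create a storage account with infrastructure (double) encryption enabled.
# Infrastructure encryption must be specified at account creation time.
New-AzStorageAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -Location "westus2" `
    -SkuName "Standard_LRS" `
    -Kind "StorageV2" `
    -RequireInfrastructureEncryption
```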

Next steps
What is Azure Key Vault?
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Provide an encryption key on a request to Blob storage
Customer-managed keys for Azure Storage encryption

You can use your own encryption key to protect the data in your storage account. When you specify a customer-
managed key, that key is used to protect and control access to the key that encrypts your data. Customer-
managed keys offer greater flexibility to manage access controls.
You must use one of the following Azure key stores to store your customer-managed keys:
Azure Key Vault
Azure Key Vault Managed Hardware Security Module (HSM)
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure
Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same
region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.

NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.

About customer-managed keys


The following diagram shows how Azure Storage uses Azure AD and a key vault or managed HSM to make
requests using the customer-managed key:

The following list explains the numbered steps in the diagram:


1. An Azure Key Vault admin grants permissions to encryption keys to a managed identity. The managed
identity may be either a user-assigned managed identity that you create and manage, or a system-assigned
managed identity that is associated with the storage account.
2. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
3. Azure Storage uses the managed identity to which the Azure Key Vault admin granted permissions in step 1
to authenticate access to Azure Key Vault via Azure AD.
4. Azure Storage wraps the account encryption key with the customer-managed key in Azure Key Vault.
5. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account
encryption key to perform encryption and decryption operations.
The managed identity that is associated with the storage account must have these permissions at a minimum to
access a customer-managed key in Azure Key Vault:
wrapkey
unwrapkey
get
For more information about key permissions, see Key types, algorithms, and operations.
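For a vault that uses access policies, these minimum permissions could be granted to the managed identity as follows. This is a sketch; the vault name and the identity's principal ID are placeholders:

```powershell
# Grant the storage account's managed identity the minimum key permissions
# (wrapKey, unwrapKey, get). Vault name and object ID are placeholders.
Set-AzKeyVaultAccessPolicy `
    -VaultName "mykeyvault" `
    -ObjectId "<managed-identity-principal-id>" `
    -PermissionsToKeys wrapKey, unwrapKey, get
```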
Azure Policy provides a built-in policy to require that storage accounts use customer-managed keys for Blob
Storage and Azure Files workloads. For more information, see the Storage section in Azure Policy built-in policy
definitions.

Customer-managed keys for queues and tables


Data stored in Queue and Table storage is not automatically protected by a customer-managed key when
customer-managed keys are enabled for the storage account. You can optionally configure these services to be
included in this protection at the time that you create the storage account.
For more information about how to create a storage account that supports customer-managed keys for queues
and tables, see Create an account that supports customer-managed keys for tables and queues.
Data in Blob storage and Azure Files is always protected by customer-managed keys when customer-managed
keys are configured for the storage account.

Enable customer-managed keys for a storage account


When you configure a customer-managed key, Azure Storage wraps the root data encryption key for the
account with the customer-managed key in the associated key vault or managed HSM. Enabling customer-
managed keys does not impact performance, and takes effect immediately.
When you enable or disable customer-managed keys, or when you modify the key or the key version, the
protection of the root encryption key changes, but the data in your Azure Storage account does not need to be
re-encrypted.
You can enable customer-managed keys on both new and existing storage accounts. When you enable
customer-managed keys, you must specify a managed identity to be used to authorize access to the key vault
that contains the key. The managed identity may be either a user-assigned or system-assigned managed
identity:
When you configure customer-managed keys at the time that you create a storage account, you must use a
user-assigned managed identity.
When you configure customer-managed keys on an existing storage account, you can use either a user-
assigned managed identity or a system-assigned managed identity.
To learn more about system-assigned versus user-assigned managed identities, see Managed identities for
Azure resources.
You can switch between customer-managed keys and Microsoft-managed keys at any time. For more
information about Microsoft-managed keys, see About encryption key management.
To learn how to configure Azure Storage encryption with customer-managed keys in a key vault, see Configure
encryption with customer-managed keys stored in Azure Key Vault. To configure customer-managed keys in a
managed HSM, see Configure encryption with customer-managed keys stored in Azure Key Vault Managed
HSM.
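As a brief sketch of that configuration with Az PowerShell for an existing account using a system-assigned managed identity (resource names are placeholders; see the linked articles for the full procedure, including granting the identity access to the key):

```powershell
# 1. Assign a system-assigned managed identity to the existing storage account.
#    This identity must then be granted key permissions on the vault.
Set-AzStorageAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -AssignIdentity

# 2. Configure encryption with a customer-managed key in Azure Key Vault.
#    Omitting -KeyVersion opts the account into automatic key version updates.
Set-AzStorageAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -KeyvaultEncryption `
    -KeyName "mykey" `
    -KeyVaultUri "https://mykeyvault.vault.azure.net"
```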
IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do
not currently support cross-tenant scenarios. When you configure customer-managed keys in the Azure portal, a
managed identity is automatically assigned to your storage account under the covers. If you subsequently move the
subscription, resource group, or storage account from one Azure AD tenant to another, the managed identity associated
with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more
information, see Transferring a subscription between Azure AD directories in FAQs and known issues with
managed identities for Azure resources.

Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072, and 4096 bits. For more
information about keys, see About keys.
Using a key vault or managed HSM has associated costs. For more information, see Key Vault pricing.

Update the key version


When you configure encryption with customer-managed keys, you have two options for updating the key
version:
Automatically update the key version: To automatically update a customer-managed key when a
new version is available, omit the key version when you enable encryption with customer-managed keys
for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed
HSM daily for a new version of a customer-managed key. If a new key version is available, then Azure
Storage automatically uses the latest version of the key.
Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure
to wait 24 hours before disabling the older version.
Manually update the key version: To use a specific version of a key for Azure Storage encryption,
specify that key version when you enable encryption with customer-managed keys for the storage
account. If you specify the key version, then Azure Storage uses that version for encryption until you
manually update the key version.
When the key version is explicitly specified, then you must manually update the storage account to use
the new key version URI when a new version is created. To learn how to update the storage account to
use a new version of the key, see Configure encryption with customer-managed keys stored in Azure Key
Vault or Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM.
When you update the key version, the protection of the root encryption key changes, but the data in your Azure
Storage account is not re-encrypted. There is no further action required from the user.

NOTE
To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies.
You can rotate your key manually or create a function to rotate it on a schedule.
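For example, a manual rotation can be performed by creating a new version of the existing key in the vault. A sketch with placeholder names:

```powershell
# Create a new version of an existing Key Vault key. If the storage account
# was configured without an explicit key version, Azure Storage picks up the
# new version automatically within a day.
Add-AzKeyVaultKey `
    -VaultName "mykeyvault" `
    -Name "mykey" `
    -Destination "Software"
```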

Revoke access to customer-managed keys


You can revoke the storage account's access to the customer-managed key at any time. After access to customer-
managed keys is revoked, or after the key has been disabled or deleted, clients cannot call operations that read
from or write to a blob or its metadata. Attempts to call any of the following operations will fail with error code
403 (Forbidden) for all users:
List Blobs, when called with the include=metadata parameter on the request URI
Get Blob
Get Blob Properties
Get Blob Metadata
Set Blob Metadata
Snapshot Blob, when called with the x-ms-meta-name request header
Copy Blob
Copy Blob From URL
Set Blob Tier
Put Block
Put Block From URL
Append Block
Append Block From URL
Put Blob
Put Page
Put Page From URL
Incremental Copy Blob
To call these operations again, restore access to the customer-managed key.
All data operations that are not listed in this section may proceed after customer-managed keys are revoked or a
key is disabled or deleted.
To revoke access to customer-managed keys, use PowerShell or Azure CLI.
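One way to do this is to disable the key itself; removing the managed identity's access to the vault also works. A sketch with placeholder names:

```powershell
# Disable the customer-managed key, which revokes the storage account's
# ability to unwrap the account encryption key. Names are placeholders.
Update-AzKeyVaultKey `
    -VaultName "mykeyvault" `
    -Name "mykey" `
    -Enable $false

# Re-enable the key to restore access:
# Update-AzKeyVaultKey -VaultName "mykeyvault" -Name "mykey" -Enable $true
```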

Customer-managed keys for Azure managed disks


Customer-managed keys are also available for managing encryption of Azure managed disks. Customer-
managed keys behave differently for managed disks than for Azure Storage resources. For more information,
see Server-side encryption of Azure managed disks for Windows or Server side encryption of Azure managed
disks for Linux.

Next steps
Azure Storage encryption for data at rest
Configure encryption with customer-managed keys stored in Azure Key Vault
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Azure Storage compliance offerings

To help organizations comply with national, regional, and industry-specific requirements governing the
collection and use of individuals' data, Microsoft Azure & Azure Storage offer the most comprehensive set of
certifications and attestations of any cloud service provider.
The compliance offerings listed below can help you verify that your services using Azure Storage meet
regulatory requirements. They are applicable to the following Azure Storage offerings: Blobs (ADLS Gen2), Files,
Queues, Tables, Disks, Cool Storage, and Premium Storage.

Global
CSA-STAR-Attestation
CSA-Star-Certification
CSA-STAR-Self-Assessment
ISO 20000-1:2011
ISO 22301
ISO 27001
ISO 27017
ISO 27018
ISO 9001
WCAG 2.0

US Government
DoD DISA L2, L4, L5
DoE 10 CFR Part 810
EAR (US Export Administration Regulations)
FDA CFR Title 21 Part 11
FedRAMP
FERPA
FIPS 140-2
NIST 800-171
Section 508 VPATS

Industry
23 NYCRR Part 500
APRA (Australia)
CDSA
DPP (UK)
FACT (UK)
FCA (UK)
FFIEC
FISC (Japan)
GLBA
GxP
HIPAA/HITECH
HITRUST
MARS-E
MAS + ABS (Singapore)
MPAA
NEN-7510 (Netherlands)
PCI DSS
Shared Assessments
SOX

Regional
BIR 2012 (Netherlands)
C5 (Germany)
CCSL/IRAP (Australia)
CS Gold Mark (Japan)
DJCP (China)
ENISA IAF (EU)
ENS (Spain)
EU-Model-Clauses
EU-U.S. Privacy Shield
GB 18030 (China)
GDPR (EU)
IT Grundschutz Workbook (Germany)
LOPD (Spain)
MTCS (Singapore)
My Number (Japan)
NZ CC Framework (New Zealand)
PASF (UK)
PDPA (Argentina)
PIPEDA (Canada)
TRUCS (China)
UK-G-Cloud

Next steps
Microsoft Azure and Azure Storage continue to lead in compliance offerings; you can find the latest coverage
and details in the Microsoft Trust Center.
Frequently asked questions (FAQ) about Azure Files

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server
Message Block (SMB) protocol and the Network File System (NFS) protocol. You can mount Azure file shares
concurrently on cloud or on-premises deployments of Windows, Linux, and macOS. You also can cache Azure
file shares on Windows Server machines by using Azure File Sync for fast access close to where the data is used.

Azure File Sync


Can I have domain-joined and non-domain-joined servers in the same sync group?
Yes. A sync group can contain server endpoints that have different Active Directory memberships, even if
they are not domain-joined. Although this configuration technically works, we do not recommend this as
a typical configuration because access control lists (ACLs) that are defined for files and folders on one
server might not be able to be enforced by other servers in the sync group. For best results, we
recommend syncing between servers that are in the same Active Directory forest, between servers that
are in different Active Directory forests but which have established trust relationships, or between servers
that are not in a domain. We recommend that you avoid using a mix of these configurations.
I created a file directly in my Azure file share by using SMB or in the portal. How long does
it take for the file to sync to the servers in the sync group?
Changes made to the Azure file share by using the Azure portal or SMB are not immediately detected and
replicated like changes to the server endpoint. Azure Files does not yet have change notifications or
journaling, so there's no way to automatically initiate a sync session when files are changed. On Windows
Server, Azure File Sync uses Windows USN journaling to automatically initiate a sync session when files
change.
To detect changes to the Azure file share, Azure File Sync has a scheduled job called a change detection
job. A change detection job enumerates every file in the file share, and then compares it to the sync
version for that file. When the change detection job determines that files have changed, Azure File Sync
initiates a sync session. The change detection job is initiated every 24 hours. Because the change
detection job works by enumerating every file in the Azure file share, change detection takes longer in
larger namespaces than in smaller namespaces. For large namespaces, it might take longer than 24
hours to determine which files have changed.
To immediately sync files that are changed in the Azure file share, the Invoke-
AzStorageSyncChangeDetection PowerShell cmdlet can be used to manually initiate the detection of
changes in the Azure file share. This cmdlet is intended for scenarios where some type of automated
process is making changes in the Azure file share or the changes are done by an administrator (like
moving files and directories into the share). For end user changes, the recommendation is to install the
Azure File Sync agent in an IaaS VM and have end users access the file share through the IaaS VM. This
way all changes will quickly sync to other agents without the need to use the Invoke-
AzStorageSyncChangeDetection cmdlet. To learn more, see the Invoke-AzStorageSyncChangeDetection
documentation.

NOTE
The Invoke-AzStorageSyncChangeDetection PowerShell cmdlet can only detect a maximum of 10,000 items.
For other limitations, see the Invoke-AzStorageSyncChangeDetection documentation.
NOTE
Changes made to an Azure file share using REST does not update the SMB last modified time and will not be seen
as a change by sync.
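For example, a hedged invocation scoped to specific directories might look like the following; all resource names are placeholders, and the cmdlet documentation covers the full parameter set.

# Detect changes under two directories of the Azure file share and trigger sync if needed.
Invoke-AzStorageSyncChangeDetection -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "mySyncGroup" -Path "Data","Reports"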

We are exploring adding change detection for an Azure file share similar to USN for volumes on
Windows Server. Help us prioritize this feature for future development by voting for it at Azure
Community Feedback.
If the same file is changed on two servers at approximately the same time, what happens?
Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed
in two endpoints at the same time. The most recently written change keeps the original file name. The
older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to
the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the
endpoint name is Cloud . The name follows this taxonomy:
<FileNameWithoutExtension>-<endpointName>[-#].<ext>
For example, the first conflict of CompanyReport.docx would become CompanyReport-CentralServer.docx
if CentralServer is where the older write occurred. The second conflict would be named CompanyReport-
CentralServer-1.docx. Azure File Sync supports 100 conflict files per file. Once the maximum number of
conflict files has been reached, the file will fail to sync until the number of conflict files is less than 100.
I have cloud tiering disabled. Why are there tiered files in the server endpoint location?
There are two reasons why tiered files may exist in the server endpoint location:
When adding a new server endpoint to an existing sync group, if you choose either the recall
namespace first option or recall namespace only option for initial download mode, files will show
up as tiered until they're downloaded locally. To avoid this, select the avoid tiered files option for
initial download mode. To manually recall files, use the Invoke-StorageSyncFileRecall cmdlet.
If cloud tiering was enabled on the server endpoint and then disabled, files will remain tiered until
they're accessed.
Why are my tiered files not showing thumbnails or previews in Windows Explorer?
For tiered files, thumbnails and previews won't be visible at your server endpoint. This behavior is
expected since the thumbnail cache feature in Windows intentionally skips reading files with the offline
attribute. With Cloud Tiering enabled, reading through tiered files would cause them to be downloaded
(recalled).
This behavior is not specific to Azure File Sync; Windows Explorer displays a "grey X" for any files that
have the offline attribute set. You will see the X icon when accessing files over SMB. For a detailed
explanation of this behavior, refer to Why don’t I get thumbnails for files that are marked offline?
For questions on how to manage tiered files, please see How to manage tiered files.
Why do tiered files exist outside of the server endpoint namespace?
Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the
server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered
files, and moves of tiered files to other volumes were unaffected. The reason for this behavior was the implicit
assumption that File Explorer and other Windows APIs have that move operations on the same volume
are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move
methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the
data from the cloud. Starting with Azure File Sync agent version 3.0.12.0, Azure File Sync will allow you to
move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by
allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in
the background. This means that moves on the same volume are instantaneous, and we do all the work
to recall the file to disk after the move has completed.
I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I
remove and recreate my server endpoint?
No: removing a server endpoint isn't like rebooting a server! Removing and recreating the server
endpoint is almost never an appropriate solution to fixing issues with sync, cloud tiering, or other aspects
of Azure File Sync. Removing a server endpoint is a destructive operation. It may result in data loss in the
case that tiered files exist outside of the server endpoint namespace (for more information, see Why do
tiered files exist outside of the server endpoint namespace?). Or it may result in
inaccessible files for tiered files that exist within the server endpoint namespace. These issues won't
resolve when the server endpoint is recreated. Tiered files may exist within your server endpoint
namespace even if you never had cloud tiering enabled. That's why we recommend that you don't
remove the server endpoint unless you would like to stop using Azure File Sync with this particular folder
or have been explicitly instructed to do so by a Microsoft engineer. For more information on removing
server endpoints, see Remove a server endpoint.
Can I move the storage sync service and/or storage account to a different resource group,
subscription, or Azure AD tenant?
Yes, the storage sync service and/or storage account can be moved to a different resource group,
subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to
give the Microsoft.StorageSync application access to the storage account (see Ensure Azure File Sync has
access to the storage account).

NOTE
When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD
tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to
different Azure AD tenants.

Does Azure File Sync preserve directory/file-level NTFS ACLs along with data stored in
Azure Files?
As of February 24th, 2020, new and existing ACLs tiered by Azure file sync will be persisted in NTFS
format, and ACL modifications made directly to the Azure file share will sync to all servers in the sync
group. Any ACL changes made to Azure Files will sync down via Azure File Sync. When copying data
to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes,
timestamps and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools,
such as AzCopy, it is important to use the latest version. Check the file copy tools table to get an overview
of Azure copy tools to ensure you can copy all of the important metadata of a file.
If you have enabled Azure Backup on your file sync managed file shares, file ACLs can continue to be
restored as part of the backup restore workflow. This works either for the entire share or individual
files/directories.
If you are using snapshots as part of the self-managed backup solution for file shares managed by file
sync, your ACLs may not be restored properly to NTFS ACLs if the snapshots were taken prior to
February 24th, 2020. If this occurs, consider contacting Azure Support.
Does Azure File Sync sync the LastWriteTime for directories?
No, Azure File Sync does not sync the LastWriteTime for directories.

Security, authentication, and access control


How can I audit file access and changes in Azure Files?
There are two options that provide auditing functionality for Azure Files:
If users are accessing the Azure file share directly, Azure Storage logs can be used to track file changes
and user access. These logs can be used for troubleshooting purposes and the requests are logged on
a best-effort basis.
If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent
installed, use an audit policy or 3rd party product to track file changes and user access on the
Windows Server.
AD DS & Azure AD DS Authentication
Does Azure Active Directory Domain Services (Azure AD DS) support SMB access using
Azure AD credentials from devices joined to or registered with Azure AD?
No, this scenario is not supported.
Can I access Azure file shares with Azure AD credentials from a VM under a different
subscription?
If the subscription under which the file share is deployed is associated with the same Azure AD tenant as
the Azure AD DS deployment to which the VM is domain-joined, you can then access Azure file shares
using the same Azure AD credentials. The limitation is imposed not on the subscription but on the
associated Azure AD tenant.
Can I enable either Azure AD DS or on-premises AD DS authentication for Azure file shares
using an Azure AD tenant that is different from the Azure file share's primary tenant?
No, Azure Files only supports Azure AD DS or on-premises AD DS integration with the Azure AD tenant
associated with the subscription in which the file share is deployed. A subscription can only be associated
with one Azure AD tenant. This limitation applies to both Azure AD DS and on-premises AD DS authentication
methods. When using on-premises AD DS for authentication, the AD DS credential must be synced to the
Azure AD that the storage account is associated with.
Does on-premises AD DS authentication for Azure file shares support integration with an
AD DS environment using multiple forests?
Azure Files on-premises AD DS authentication only integrates with the forest of the domain service that
the storage account is registered to. To support authentication from another forest, your environment
must have a forest trust configured correctly. Azure Files registers in AD DS in almost the same way as a
regular file server: it creates an identity (a computer or service logon account) in AD DS for
authentication. The only difference is that the registered SPN of the storage account ends with
"file.core.windows.net", which does not match the domain suffix. Consult your domain administrator
to see if any update to your suffix routing policy is required to enable multiple-forest authentication due
to the different domain suffix. The example below shows how to configure such a suffix routing policy.
Example: When users in the forest A domain want to reach a file share with the storage account registered
against a domain in forest B, this will not automatically work because the service principal of the storage
account does not have a suffix matching the suffix of any domain in forest A. We can address this issue by
manually configuring a suffix routing rule from forest A to forest B for a custom suffix of
"file.core.windows.net". First, you must add a new custom suffix on forest B. Make sure you have the
appropriate administrative permissions to change the configuration, then follow these steps:
1. Logon to a machine domain joined to forest B
2. Open up "Active Directory Domains and Trusts" console
3. Right click on "Active Directory Domains and Trusts"
4. Click on "Properties"
5. Click on "Add"
6. Add "file.core.windows.net" as the UPN Suffixes
7. Click on "Apply", then "OK" to close the wizard
Next, add the suffix routing rule on forest A, so that it redirects to forest B.
1. Logon to a machine domain joined to forest A
2. Open up "Active Directory Domains and Trusts" console
3. Right-click on the domain that you want to access the file share, then click on the "Trusts" tab and
select the forest B domain from outgoing trusts. If you haven't configured a trust between the two forests,
you need to set up the trust first
4. Click on "Properties…" then "Name Suffix Routing"
5. Check if the "*.file.core.windows.net" suffix shows up. If not, click on 'Refresh'
6. Select "*.file.core.windows.net", then click on "Enable" and "Apply"
Is there any difference in creating a computer account or service logon account to represent
my storage account in AD?
Creating either a computer account (the default) or a service logon account makes no difference to how
authentication works with Azure Files. You can make your own choice on how to represent a
storage account as an identity in your AD environment. The default DomainAccountType set in the Join-
AzStorageAccountForAuth cmdlet is a computer account. However, the password expiration age configured
in your AD environment can be different for computer and service logon accounts, and you need to take that
into consideration when you update the password of your storage account identity in AD.
How do I remove cached credentials associated with the storage account key and delete existing SMB
connections before initializing a new connection with Azure AD or AD credentials?
You can follow the two-step process below to remove the saved credential associated with the storage
account key and remove the SMB connection:
1. Run the cmdlet below in Windows Cmd.exe to remove the credential. If you cannot find one, it
means that you have not persisted the credential and can skip this step.
cmdkey /delete:Domain:target=storage-account-name.file.core.windows.net
2. Delete the existing connection to the file share. You can specify the mount path as either the
mounted drive letter or the storage-account-name.file.core.windows.net path.
net use <drive-letter/share-path> /delete

Network File System (NFS v4.1)


When should I use Azure Files NFS?
See NFS shares.
How do I backup data stored in NFS shares?
Backing up your data on NFS shares can be orchestrated using familiar tooling like rsync or
products from one of our third-party backup partners. Multiple backup partners, including Commvault,
Veeam, and Veritas, have extended their solutions to work with both SMB 3.x and NFS 4.1 for Azure
Files.
Can I migrate existing data to an NFS share?
Within a region, you can use standard tools like scp, rsync, or SSHFS to move data. Because Azure Files
NFS can be accessed from multiple compute instances concurrently, you can improve copying speeds
with parallel uploads. If you want to bring data from outside of a region, use a VPN or ExpressRoute to
mount to your file system from your on-premises data center.
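For example, a minimal rsync-based copy into a mounted NFS share might look like the following. This is a sketch: the source directory and the mount point /mnt/qsfileshare are hypothetical, and running several rsync processes over different subtrees in parallel can improve throughput.

# Preserve permissions and timestamps while copying into the mounted share.
rsync -avh --progress /data/projectx/ /mnt/qsfileshare/projectx/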
Can you run IBM MQ (including multi-instance) on Azure Files NFS?
Azure Files NFS v4.1 file shares meet the three requirements for shared file systems set by IBM MQ:
https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-requirements-shared-file-
systems
Data write integrity
Guaranteed exclusive access to files
Release locks on failure
The following test cases run successfully:
1. https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-verifying-shared-file-
system-behavior
2. https://www.ibm.com/docs/en/ibm-mq/9.2?topic=multiplatforms-running-amqsfhac-test-
message-integrity

Share snapshots
Create share snapshots
Are my share snapshots geo-redundant?
Share snapshots have the same redundancy as the Azure file share for which they were taken. If you have
selected geo-redundant storage for your account, your share snapshot is also stored redundantly in the
paired region.
Clean up share snapshots
Can I delete my share but not delete my share snapshots?
If you have active share snapshots on your share, you cannot delete your share. You can use an API to delete
share snapshots, along with the share. You also can delete both the share snapshots and the share in the
Azure portal.

Billing and pricing


How much do share snapshots cost?
Share snapshots are incremental in nature. The base share snapshot is the share itself. All subsequent share
snapshots are incremental and store only the difference from the preceding share snapshot. You are billed
only for the changed content. If you have a share with 100 GiB of data but only 5 GiB has changed since your
last share snapshot, the share snapshot consumes only 5 additional GiB, and you are billed for 105 GiB. For
more information about transaction and standard egress charges, see the Pricing page.

See also
Troubleshoot Azure Files in Windows
Troubleshoot Azure Files in Linux
Troubleshoot Azure File Sync
What's new in Azure Files

Azure Files is updated regularly to offer new features and enhancements. This article provides detailed
information about what's new in Azure Files.

2021 quarter 4 (October, November, December)


Increased IOPS for premium file shares
Premium Azure file shares now have additional included baseline IOPS and a higher minimum burst IOPS. The
baseline IOPS included with a provisioned share was increased from 400 to 3,000, meaning that a 100 GiB share
(the minimum share size) is guaranteed 3,100 baseline IOPS. Additionally, the floor for burst IOPS was increased
from 4,000 to 10,000, meaning that every premium file share will be able to burst up to at least 10,000 IOPS.
Formula changes:

ITEM                    OLD VALUE                                     NEW VALUE

Baseline IOPS formula   MIN(400 + 1 * ProvisionedGiB, 100000)         MIN(3000 + 1 * ProvisionedGiB, 100000)

Burst limit             MIN(MAX(4000, 3 * ProvisionedGiB), 100000)    MIN(MAX(10000, 3 * ProvisionedGiB), 100000)
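As a worked example, a 1,024 GiB provisioned share now receives MIN(3000 + 1 * 1024, 100000) = 4,024
baseline IOPS (up from MIN(400 + 1024, 100000) = 1,424 under the old formula), and can burst up to
MIN(MAX(10000, 3 * 1024), 100000) = 10,000 IOPS.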

For more information, see:


The provisioned model for premium Azure file shares
Azure Files pricing
NFS 4.1 protocol support is generally available
Premium Azure file shares now support either the SMB or the NFS 4.1 protocols. NFS 4.1 is available in all
regions where Azure Files supports the premium tier, for both locally redundant storage and zone-redundant
storage. Azure file shares created with the NFS 4.1 protocol enabled are fully POSIX-compliant, distributed file
shares that support a wide variety of Linux and container-based workloads. Some example workloads include:
highly available SAP application layer, enterprise messaging, user home directories, custom line-of-business
applications, database backups, database replication, and Azure Pipelines.
For more information, see:
NFS file shares in Azure Files
High availability for SAP NetWeaver on Azure VMs with NFS on Azure Files
Azure Files pricing
Symmetric throughput for premium file shares
Premium Azure file shares now support symmetric throughput provisioning, which enables the provisioned
throughput for an Azure file share to be used for 100% ingress, 100% egress, or some mixture of ingress and
egress. Symmetric throughput provides the flexibility to make full utilization of available throughput and aligns
premium file shares with standard file shares.
Formula changes:
ITEM                   OLD VALUE                                      NEW VALUE

Throughput (MiB/sec)   Ingress: 40 + CEILING(0.04 * ProvisionedGiB)   100 + CEILING(0.04 * ProvisionedGiB)
                       Egress: 60 + CEILING(0.06 * ProvisionedGiB)      + CEILING(0.06 * ProvisionedGiB)

For more information, see:


The provisioned model for premium Azure file shares
Azure Files pricing

2021 quarter 3 (July, August, September)


SMB Multichannel is generally available
SMB Multichannel enables SMB clients to establish multiple parallel connections to an Azure file share. This
allows SMB clients to take full advantage of all available network bandwidth and makes them resilient to
network failures, reducing total cost of ownership and enabling 2-3x performance gains for reads and 3-4x
for writes through a single client. SMB Multichannel is available for premium file shares (file shares deployed
in the FileStorage storage account kind) and is disabled by default.
For more information, see:
SMB Multichannel performance in Azure Files
Enable SMB Multichannel
Overview on SMB Multichannel in the Windows Server documentation
SMB 3.1.1 and SMB security settings
SMB 3.1.1 is the most recent version of the SMB protocol, released with Windows 10, containing important
security and performance updates. Azure Files SMB 3.1.1 ships with two additional encryption modes, AES-128-
GCM and AES-256-GCM, in addition to AES-128-CCM which was already supported. To maximize performance,
AES-128-GCM is negotiated as the default SMB channel encryption option; AES-128-CCM will only be
negotiated on older clients that don't support AES-128-GCM.
Depending on your organization's regulatory and compliance requirements, AES-256-GCM can be negotiated
instead of AES-128-GCM by either restricting allowed SMB channel encryption options on the SMB clients, in
Azure Files, or both. Support for AES-256-GCM was added in Windows Server 2022 and Windows 10, version
21H1.
In addition to SMB 3.1.1, Azure Files exposes security settings that change the behavior of the SMB protocol.
With this release, you may configure allowed SMB protocol versions, SMB channel encryption options,
authentication methods, and Kerberos ticket encryption options. By default, Azure Files enables the most
compatible options; however, these options may be toggled at any time.
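As a hedged sketch of toggling these settings with the Az.Storage PowerShell module (the resource names are placeholders, and the parameter names reflect our understanding of the module and should be verified against the current cmdlet reference):

# Restrict the file service to SMB 3.1.1 with AES-256-GCM channel encryption.
Update-AzStorageFileServiceProperty -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -SMBProtocolVersion "SMB3.1.1" -SMBChannelEncryption "AES-256-GCM"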
For more information, see:
SMB security settings
Windows and Linux SMB version information
Overview of SMB features in the Windows Server documentation

2021 quarter 2 (April, May, June)


Premium, hot, and cool storage capacity reservations
Azure Files supports storage capacity reservations (also referred to as reserve instances). Storage capacity
reservations allow you to achieve a discount on storage by pre-committing to storage utilization. Azure Files
supports capacity reservations on the premium, hot, and cool tiers. Capacity reservations are sold in units of 10
TiB or 100 TiB, for terms of either one year or three years.
For more information, see:
Understanding Azure Files billing
Optimized costs for Azure Files with reserved capacity
Azure Files pricing
Improved portal experience for domain joining to Active Directory
The experience for domain joining an Azure storage account has been improved to help guide first-time Azure
file share admins through the process. When you select Active Directory under File share settings in the File
shares section of the Azure portal, you will be guided through the steps required to domain join.

For more information, see:


Overview of Azure Files identity-based authentication options for SMB access
Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares

2021 quarter 1 (January, February, March)


Azure Files management now available through the control plane
Management APIs for Azure Files resources, the file service and file shares, are now available through control
plane ( Microsoft.Storage resource provider). This enables Azure file shares to be created with an Azure
Resource Manager or Bicep template, to be fully manageable when the data plane (i.e. the FileREST API) is
inaccessible (like when the storage account's public endpoint is disabled), and to support full role-based access
control (RBAC) semantics.
We recommend you manage Azure Files through the control plane in most cases. To support management of
the file service and file shares through the control plane, the Azure portal, Azure storage PowerShell module,
and Azure CLI have been updated to support most management actions through the control plane.
To preserve existing script behavior, the Azure storage PowerShell module and the Azure CLI maintain the
existing commands that use the data plane to manage the file service and file shares, as well as adding new
commands to use the control plane. Portal requests only go through the control plane resource provider.
PowerShell and CLI commands are named as follows:
Az.Storage PowerShell:
Control plane file share cmdlets are prefixed with Rm , for example: New-AzRmStorageShare ,
Get-AzRmStorageShare , Update-AzRmStorageShare , and Remove-AzRmStorageShare .
Traditional data plane file share cmdlets don't have a prefix, for example New-AzStorageShare ,
Get-AzStorageShare , Set-AzStorageShareQuota , and Remove-AzStorageShare .
Cmdlets to manage the file service are only available through the control plane and don't have any
special prefix, for example Get-AzStorageFileServiceProperty and
Update-AzStorageFileServiceProperty .
Azure storage CLI:
Control plane file share commands are available under the az storage share-rm command group, for
example: az storage share-rm create , az storage share-rm update , etc.
Traditional file share commands are available under the az storage share command group, for
example: az storage share create , az storage share update , etc.
Commands to manage the file service are only available through the control plane, and are available
through the az storage account file-service-properties command group, for example:
az storage account file-service-properties show and
az storage account file-service-properties update .
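For instance, creating a share through each plane might look like the following. This is a minimal sketch; the resource group, account, and share names are placeholders, and the data-plane context needs the storage account key.

# Control plane: create a 1 TiB file share via the Microsoft.Storage resource provider.
New-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -Name "myshare" -QuotaGiB 1024

# Data plane: the traditional cmdlet works against a storage context instead.
$context = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
New-AzStorageShare -Name "myshare2" -Context $context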

To learn more about the Azure Files management API, see:


Azure Files REST API overview
Control plane ( Microsoft.Storage resource provider) API for Azure Files resources:
FileService
FileShare
Azure PowerShell and Azure CLI

See also
What is Azure Files?
Planning for an Azure Files deployment
Create an Azure file share
Create an Azure file share

To create an Azure file share, you need to answer three questions about how you will use it:
What are the performance requirements for your Azure file share?
Azure Files offers standard file shares which are hosted on hard disk-based (HDD-based) hardware, and
premium file shares, which are hosted on solid-state disk-based (SSD-based) hardware.
What are your redundancy requirements for your Azure file share?
Standard file shares offer locally redundant (LRS), zone-redundant (ZRS), geo-redundant (GRS), or geo-
zone-redundant (GZRS) storage; however, the large file share feature is only supported on locally
redundant and zone-redundant file shares. Premium file shares do not support any form of geo-
redundancy.
Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To
find out if premium file shares are currently available in your region, see the products available by region
page for Azure. For information about regions that support ZRS, see Azure Storage redundancy.
What size file share do you need?
In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo-
and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
For more information on these three choices, see Planning for an Azure Files deployment.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS          Yes    No

Standard file shares (GPv2), GRS/GZRS         Yes    No

Premium file shares (FileStorage), LRS/ZRS    Yes    Yes

Prerequisites
This article assumes that you have already created an Azure subscription. If you don't already have a
subscription, then create a free account before you begin.
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.

Create a storage account


Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of
storage. This pool of storage can be used to deploy multiple file shares.
Azure supports multiple types of storage accounts for different storage scenarios customers may have, but
there are two main types of storage accounts for Azure Files. Which storage account type you need to create
depends on whether you want to create a standard file share or a premium file share:
General purpose version 2 (GPv2) storage accounts : GPv2 storage accounts allow you to deploy
Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file
shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or
tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.
FileStorage storage accounts : FileStorage storage accounts allow you to deploy Azure file shares on
premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store
Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a
FileStorage account.

Portal
PowerShell
Azure CLI

To create a storage account via the Azure portal, select + Create a resource from the dashboard. In the
resulting Azure Marketplace search window, search for storage account and select the resulting search result.
This will lead to an overview page for storage accounts; select Create to proceed with the storage account
creation wizard.

Basics
The first section to complete to create a storage account is labeled Basics . This contains all of the required fields
to create a storage account. To create a GPv2 storage account, ensure the Performance radio button is set to
Standard and the Account kind drop-down list is set to StorageV2 (general purpose v2).

To create a FileStorage storage account, ensure the Performance radio button is set to Premium and
File shares is selected in the Premium account type drop-down list.
The other basics fields are independent from the choice of storage account:
Storage account name : The name of the storage account resource to be created. This name must be
globally unique. The storage account name will be used as the server name when you mount an Azure file
share via SMB. Storage account names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only.
Location : The region for the storage account to be deployed into. This can be the region associated with the
resource group, or any other available region.
Replication : Although this is labeled replication, this field actually means redundancy ; this is the desired
redundancy level: local redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-
redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-
access geo-zone redundancy (RA-GZRS), which do not apply to Azure file shares; any file share created in a
storage account with these selected will actually be either geo-redundant or geo-zone-redundant,
respectively.
Networking
The networking section allows you to configure networking options. These settings are optional for the creation
of the storage account and can be configured later if desired. For more information on these options, see Azure
Files networking considerations.
Data protection
The data protection section allows you to configure the soft-delete policy for Azure file shares in your storage
account. Other settings related to soft-delete for blobs, containers, point-in-time restore for containers,
versioning, and change feed apply only to Azure Blob storage.
Advanced
The advanced section contains several important settings for Azure file shares:
Secure transfer required : This field indicates whether the storage account requires encryption in
transit for communication to the storage account. If you require SMB 2.1 support, you must disable this.

Large file shares : This field enables the storage account for file shares spanning up to 100 TiB. Enabling
this feature will limit your storage account to only locally redundant and zone redundant storage options.
Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file
share capability. FileStorage storage accounts (storage accounts for premium file shares) do not have this
option, as all premium file shares can scale up to 100 TiB.
The other settings that are available in the advanced tab (hierarchical namespace for Azure Data Lake storage
gen 2, default blob tier, NFSv3 for blob storage, etc.) do not apply to Azure Files.

IMPORTANT
Selecting the blob access tier does not affect the tier of the file share.

Tags
Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the
same tag to multiple resources and resource groups. These are optional and can be applied after storage
account creation.
Review + create
The final step to create the storage account is to select the Create button on the Review + create tab. This
button isn't available until all of the required fields for a storage account are filled in.
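If you prefer scripting over the portal, a minimal PowerShell sketch covers both account types; the resource group, account names, and region here are placeholders.

# GPv2 account for standard file shares.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -Location "westus2" -SkuName Standard_LRS -Kind StorageV2

# FileStorage account for premium file shares.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mypremiumaccount" -Location "westus2" -SkuName Premium_LRS -Kind FileStorage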
Enable large files shares on an existing account
Before you create an Azure file share on an existing storage account, you may want to enable it for large file
shares (up to 100 TiB) if you haven't already. Standard storage accounts using either LRS or ZRS can be
upgraded to support large file shares without causing downtime for existing file shares on the storage account.
If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you will need to convert it to an LRS account before
proceeding.

Portal
PowerShell
Azure CLI

1. Open the Azure portal, and navigate to the storage account where you want to enable large file shares.
2. Open the storage account and select File shares .
3. Select Enabled on Large file shares , and then select Save .
4. Select Overview and select Refresh .
5. Select Share capacity then select 100 TiB and Save .
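The PowerShell equivalent is a single call (the names are placeholders; remember that enabling large file shares can't be undone on a GPv2 account):

# Enable large file shares (up to 100 TiB) on an existing LRS or ZRS account.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount" -EnableLargeFileShare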
Create a file share
Once you've created your storage account, you can create your file share. This process is mostly the same
regardless of whether you're using a premium file share or a standard file share. You should consider the
following differences:
Standard file shares may be deployed into one of the standard tiers: transaction optimized (default), hot, or cool.
This is a per file share tier that is not affected by the blob access tier of the storage account (this property only
relates to Azure Blob storage - it does not relate to Azure Files at all). You can change the tier of the share at any
time after it has been deployed. Premium file shares cannot be directly converted to any standard tier.

IMPORTANT
You can move file shares between tiers within GPv2 storage account types (transaction optimized, hot, and cool). Share
moves between tiers incur transactions: moving from a hotter tier to a cooler tier will incur the cooler tier's write
transaction charge for each file in the share, while a move from a cooler tier to a hotter tier will incur the cooler tier's read
transaction charge for each file in the share.

The quota property means something slightly different between premium and standard file shares:
For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot
go. If a quota is not specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares
property is not set for a storage account). If you did not create your storage account with large file shares
enabled, see Enable large files shares on an existing account for how to enable 100 TiB file shares.
For premium file shares, quota means provisioned size . The provisioned size is the amount that you
will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share is
based on the provisioned size. For more information on how to plan for a premium file share, see
provisioning premium file shares.

Portal
PowerShell
Azure CLI

If you just created your storage account, you can navigate to it from the deployment screen by selecting Go to
resource . Once in the storage account, select the File shares in the table of contents for the storage account.
In the file share listing, you should see any file shares you have previously created in this storage account, or an
empty table if no file shares have been created yet. Select + File share to create a new file share.
The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a
file share:
Name : the name of the file share to be created.
Quota : the quota of the file share for standard file shares; the provisioned size of the file share for premium
file shares. For premium file shares, the quota (provisioned size) also determines what performance you receive.
Tiers : the selected tier for a file share. This field is only available in a general purpose (GPv2) storage
account . You can choose transaction optimized, hot, or cool. The share's tier can be changed at any time. We
recommend picking the hottest tier possible during a migration, to minimize transaction expenses, and then
switching to a lower tier if desired after the migration is complete.
Select Create to finish creating the new share.

NOTE
The name of your file share must be all lowercase. For complete details about naming file shares and files, see Naming and
referencing shares, directories, files, and metadata.

Change the tier of an Azure file share


File shares deployed in general purpose v2 (GPv2) storage account can be in the transaction optimized,
hot, or cool tiers. You can change the tier of the Azure file share at any time, subject to transaction costs as
described above.
Portal
PowerShell
Azure CLI

On the main storage account page, select the tile labeled File shares (you can also navigate
to File shares via the table of contents for the storage account).
In the table list of file shares, select the file share for which you would like to change the tier. On the file share
overview page, select Change tier from the menu.

On the resulting dialog, select the desired tier: transaction optimized, hot, or cool.
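The equivalent control-plane PowerShell call looks like this (the resource group, account, and share names are placeholders):

# Move an existing standard file share to the cool tier.
Update-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount" -Name "myshare" -AccessTier Cool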

Expand existing file shares


If you enable large file shares on an existing storage account, you must expand existing file shares in that
storage account to take advantage of the increased capacity and scale.

Portal
PowerShell
Azure CLI

1. From your storage account, select File shares .


2. Right-click your file share, and then select Quota .
3. Enter the new size that you want, and then select OK .
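A data-plane PowerShell sketch of the same operation (the account, share name, and key are placeholders):

# Expand the share quota to the 100 TiB maximum (102,400 GiB).
$context = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
Set-AzStorageShareQuota -ShareName "myshare" -Quota 102400 -Context $context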

Next steps
Plan for a deployment of Azure Files or Plan for a deployment of Azure File Sync.
Networking overview.
Connect and mount a file share on Windows, macOS, and Linux.
Tutorial: Create an NFS Azure file share and mount
it on a Linux VM using the Azure Portal

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Both NFS and SMB protocols are
supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file
share using the NFS protocol and connect it to a Linux VM.
In this tutorial, you will:
Create a storage account
Deploy a Linux VM
Create an NFS file share
Connect to your VM
Mount the file share to your VM

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS          Yes    No

Standard file shares (GPv2), GRS/GZRS         Yes    No

Premium file shares (FileStorage), LRS/ZRS    Yes    Yes

Getting started
If you don't have an Azure subscription, create a free account before you begin.
Sign in to the Azure portal.
Create a FileStorage storage account
Before you can work with an NFS 4.1 Azure file share, you have to create an Azure storage account with the
premium performance tier. Currently, NFS 4.1 shares are only available as premium file shares.
1. On the Azure portal menu, select All services . In the list of resources, type Storage Accounts . As you
begin typing, the list filters based on your input. Select Storage Accounts .
2. On the Storage Accounts window that appears, choose + Create .
3. On the Basics tab, select the subscription in which to create the storage account.
4. Under the Resource group field, select Create new to create a new resource group to use for this tutorial.
5. Enter a name for your storage account. The name you choose must be unique across Azure. The name also
must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the
same regions that support premium file storage.
7. Select the Premium performance tier to store your data on solid-state drives (SSD). Under Premium
account type , select File shares.
8. Leave replication set to its default value of Locally-redundant storage (LRS).
9. Select Review + Create to review your storage account settings and create the account.
10. When you see the Validation passed notification appear, select Create . You should see a notification that
deployment is in progress.
The following image shows the settings on the Basics tab for a new storage account:

Deploy an Azure VM running Linux


Next, create an Azure VM running Linux to represent the on-premises server. When you create the VM, a virtual
network will be created for you. The NFS protocol can only be used from a machine inside of a virtual network.
1. Select Home , and then select Virtual machines under Azure services .
2. Select + Create and then + Virtual machine .
3. In the Basics tab, under Project details , make sure the correct subscription and resource group are
selected. Under Instance details , type myVM for the Virtual machine name , and select the same
region as your storage account. Choose the default Ubuntu Server version for your Image . Leave the
other defaults. The default size and pricing is only shown as an example. Size availability and pricing is
dependent on your region and subscription.
4. Under Administrator account , select SSH public key . Leave the rest of the defaults.

5. Under Inbound port rules > Public inbound ports , choose Allow selected ports and then select
SSH (22) and HTTP (80) from the drop-down.
IMPORTANT
Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later,
go back to the Basics tab.

6. Select the Review + create button at the bottom of the page.


7. On the Create a virtual machine page, you can see the details about the VM you are about to create.
Note the name of the virtual network. When you are ready, select Create .
8. When the Generate new key pair window opens, select Download private key and create
resource . Your key file will be downloaded as myKey.pem . Make sure you know where the .pem file was
downloaded, because you'll need the path to it to connect to your VM.
You'll see a message that deployment is in progress. Wait a few minutes for deployment to complete.

Create an NFS Azure file share


Now you're ready to create an NFS file share and provide network-level security for your NFS traffic.
Add a file share to your storage account
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select Data storage > File shares from the storage account pane.
4. Select + File Share .
5. Name the new file share qsfileshare and enter "100" for the minimum Provisioned capacity , or
provision more capacity (up to 102,400 GiB) to get more performance. Select NFS protocol, leave No
Root Squash selected, and select Create .
Set up a private endpoint
Next, you'll need to set up a private endpoint for your storage account. This gives your storage account a private
IP address from within the address space of your virtual network.
1. Select the file share qsfileshare. You should see a dialog that says Connect to this NFS share from Linux.
Under Network configuration , select Review options .
2. Next, select Setup a private endpoint .

3. Select + Private endpoint .

4. Leave Subscription and Resource group the same. Under Instance , provide a name and select a
region for the new private endpoint. Your private endpoint must be in the same region as your virtual
network, so use the same region as you specified when creating the VM. When all the fields are
complete, select Next: Resource .
5. Confirm that the Subscription , Resource type and Resource are correct, and select File from the
Target sub-resource drop-down. Then select Next: Virtual Network .

6. Under Networking , select the virtual network associated with your VM and leave the default subnet.
Select Yes for Integrate with private DNS zone . Select the correct subscription and resource group,
and then select Next: Tags .
7. You can optionally apply tags to categorize your resources, such as applying the name Environment and
the value Test to all testing resources. Enter name/value pairs if desired, and then select Next: Review +
create .

8. Azure will attempt to validate the private endpoint. When validation is complete, select Create . You'll see
a notification that deployment is in progress. After a few minutes, you should see a notification that
deployment is complete.
Disable secure transfer
Because the NFS protocol doesn't support encryption and relies instead on network-level security, you'll need to
disable secure transfer.
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select File shares from the storage account pane.
4. Select the NFS file share that you created. Under Secure transfer setting , select Change setting .
5. Change the Secure transfer required setting to Disabled , and select Save . The setting change may
take up to 30 seconds to take effect.

Connect to your VM
Create an SSH connection with the VM.
1. Select Home and then Virtual machines .
2. Select the Linux VM you created for this tutorial and ensure that its status is Running . Take note of the
VM's public IP address and copy it to your clipboard.

3. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a
PowerShell prompt.
4. At your prompt, open an SSH connection to your VM. Replace the IP address with the one from your VM,
and replace the path to the .pem with the path to where the key file was downloaded.

ssh -i .\Downloads\myVM_key.pem azureuser@20.25.14.85

If you encounter a warning that the authenticity of the host can't be established, type yes to continue connecting
to the VM. Leave the ssh connection open for the next step.

TIP
The SSH key you created can be used the next time you create a VM in Azure. Just select Use a key stored in
Azure for the SSH public key source the next time you create a VM. You already have the private key on your computer,
so you won't need to download anything.

Mount the NFS share


Now that you've created an NFS share, to use it you have to mount it on your Linux client.
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select File shares from the storage account pane and select the NFS file share you created.
4. You should see Connect to this NFS share from Linux along with sample commands to use NFS on
your Linux distribution and a provided mounting script.

5. Select your Linux distribution (Ubuntu).


6. Using the ssh connection you created to your VM, enter the sample commands to use NFS and mount
the file share.
You have now mounted your NFS share, and it's ready to store files.
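For reference, the sequence on Ubuntu typically looks like the following. This is a sketch modeled on the portal-provided script; replace storageaccountname and qsfileshare with your own storage account and share names.

# Install the NFS client, create a mount point, and mount the share over NFS 4.1.
sudo apt update && sudo apt install -y nfs-common
sudo mkdir -p /mount/storageaccountname/qsfileshare
sudo mount -t nfs storageaccountname.file.core.windows.net:/storageaccountname/qsfileshare /mount/storageaccountname/qsfileshare -o vers=4,minorversion=1,sec=sys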

Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups .
2. Select the resource group you created for this tutorial.
3. Select Delete resource group . A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete .

Next steps
Learn about using NFS Azure file shares
How to use DFS Namespaces with Azure Files

Distributed File System Namespaces, commonly referred to as DFS Namespaces or DFS-N, is a Windows
Server server role that is widely used to simplify the deployment and maintenance of SMB file shares in
production. DFS Namespaces is a storage namespace virtualization technology, which means that it enables you
to provide a layer of indirection between the UNC path of your file shares and the actual file shares themselves.
DFS Namespaces works with SMB file shares, agnostic of where those file shares are hosted: it can be used with SMB
shares hosted on an on-premises Windows File Server with or without Azure File Sync, Azure file shares directly,
SMB file shares hosted in Azure NetApp Files or other third-party offerings, and even with file shares hosted in
other clouds.
At its core, DFS Namespaces provides a mapping between a user-friendly UNC path, like
\\contoso\shares\ProjectX and the underlying UNC path of the SMB share like \\Server01-Prod\ProjectX or
\\storageaccount.file.core.windows.net\projectx . When the end user wants to navigate to their file share, they
type in the user-friendly UNC path, but their SMB client accesses the underlying SMB path of the mapping. You
can also extend this basic concept to take over an existing file server name, such as \\MyServer\ProjectX . You
can use this capability to achieve the following scenarios:
Provide a migration-proof name for a logical set of data. In this example, you have a mapping like
\\contoso\shares\Engineering that maps to \\OldServer\Engineering . When you complete your
migration to Azure Files, you can change your mapping so your user-friendly UNC path points to
\\storageaccount.file.core.windows.net\engineering . When an end user accesses the user-friendly UNC
path, they will be seamlessly redirected to the Azure file share path.
Establish a common name for a logical set of data that is distributed to multiple servers at different
physical sites, such as through Azure File Sync. In this example, a name such as
\\contoso\shares\FileSyncExample is mapped to multiple UNC paths such as
\\FileSyncServer1\ExampleShare , \\FileSyncServer2\DifferentShareName , \\FileSyncServer3\ExampleShare .
When the user accesses the user-friendly UNC, they are given a list of possible UNC paths and choose the
one closest to them based on Windows Server Active Directory (AD) site definitions.
Extend a logical set of data across size, IO, or other scale thresholds. This is common when dealing with
user directories, where every user gets their own folder on a share, or with scratch shares, where users
get arbitrary space to handle temporary data needs. With DFS Namespaces, you stitch together multiple
folders into a cohesive namespace. For example, \\contoso\shares\UserShares\user1 maps to
\\storageaccount.file.core.windows.net\user1 , \\contoso\shares\UserShares\user2 maps to
\\storageaccount.file.core.windows.net\user2 , and so on.

You can see an example of how to use DFS Namespaces with your Azure Files deployment in the following video
overview.
NOTE
Skip to 10:10 in the video to see how to set up DFS Namespaces.

If you already have a DFS Namespace in place, no special steps are required to use it with Azure Files and File
Sync. If you're accessing your Azure file share from on-premises, normal networking considerations apply; see
Azure Files networking considerations for more information.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS          Yes    No

Standard file shares (GPv2), GRS/GZRS         Yes    No

Premium file shares (FileStorage), LRS/ZRS    Yes    Yes

Namespace types
DFS Namespaces provides two main namespace types:
Domain-based namespace : A namespace hosted as part of your Windows Server AD domain.
Namespaces hosted as part of AD will have a UNC path containing the name of your domain, for example,
\\contoso.com\shares\myshare , if your domain is contoso.com . Domain-based namespaces support larger
scale limits and built-in redundancy through AD. Domain-based namespaces can't be a clustered resource on
a failover cluster.
Standalone namespace : A namespace hosted on an individual server, not hosted as part of Windows
Server AD. Standalone namespaces will have a name based on the name of the standalone server, such as
\\MyStandaloneServer\shares\myshare , where your standalone server is named MyStandaloneServer .
Standalone namespaces support lower scale targets than domain-based namespaces but can be hosted as a
clustered resource on a failover cluster.

Requirements
To use DFS Namespaces with Azure Files and File Sync, you must have the following resources:
An Active Directory domain. This can be hosted anywhere you like, such as on-premises, in an Azure virtual
machine (VM), or even in another cloud.
A Windows Server that can host the namespace. A common deployment pattern for DFS
Namespaces is to use the Active Directory domain controller to host the namespaces; however, the
namespaces can be set up from any server with the DFS Namespaces server role installed. DFS Namespaces
is available on all supported Windows Server versions.
An SMB file share hosted in a domain-joined environment, such as an Azure file share hosted within a
domain-joined storage account, or a file share hosted on a domain-joined Windows File Server using Azure
File Sync. For more on domain-joining your storage account, see Identity-based authentication. Windows File
Servers are domain-joined the same way regardless of whether you are using Azure File Sync.
The SMB file shares you want to use with DFS Namespaces must be reachable from your on-premises
networks. This is primarily a concern for Azure file shares, however, technically applies to any file share
hosted in Azure or any other cloud. For more information on networking, see Networking considerations for
direct access.

Install the DFS Namespaces server role


If you are already using DFS Namespaces, or wish to set up DFS Namespaces on your domain controller, you
may safely skip these steps.


To install the DFS Namespaces server role, open the Server Manager on your server. Select Manage , and then
select Add Roles and Features . The resulting wizard guides you through the installation of the necessary
Windows components to run and manage DFS Namespaces.
In the Installation Type section of the installation wizard, select the Role-based or feature-based
installation radio button and select Next. On the Server Selection section, select the desired server(s) on
which you would like to install the DFS Namespaces server role, and select Next.
In the Server Roles section, select and check the DFS Namespaces role from the role list. You can find this under
File and Storage Services > File and iSCSI Services. When you select the DFS Namespaces server role, it
may also add any required supporting server roles or features that you don't already have installed.
After you have checked the DFS Namespaces role, you may select Next on all subsequent screens, and select
Install as soon as the wizard enables the button. When the installation is complete, you may configure your
namespace.
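If you prefer PowerShell to Server Manager, the role can be installed with a single cmdlet. A minimal sketch, run from an elevated PowerShell session on the server that will host the namespace:

# Install the DFS Namespaces role and its management tools.
Install-WindowsFeature -Name "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"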

Take over existing server names with root consolidation


An important use for DFS Namespaces is to take over an existing server name for the purposes of refactoring
the physical layout of the file shares. For example, you may wish to consolidate file shares from multiple old file
servers together on a single file server during a modernization migration. Traditionally, end-user familiarity and
document links limit your ability to consolidate file shares from disparate file servers onto one host,
but the DFS Namespaces root consolidation feature allows you to stand up a single server that responds to
multiple server names and routes requests to the appropriate share.
Although useful for various datacenter migration scenarios, root consolidation is especially useful for adopting
cloud-native Azure file shares because:
Azure file shares don't allow you to keep existing on-premises server names.
Azure file shares must be accessed via the fully qualified domain name (FQDN) of the storage account. For
example, an Azure file share called share in storage account storageaccount is always accessed through the
\\storageaccount.file.core.windows.net\share UNC path. This can be confusing to end users who expect a
short name (ex. \\MyServer\share ) or a name that is a subdomain of the organization's domain name (ex.
\\MyServer.contoso.com\share ).

Root consolidation may only be used with standalone namespaces. If you already have an existing domain-
based namespace for your file shares, you do not need to create a root consolidated namespace.
Enabling root consolidation
Root consolidation can be enabled by setting the following registry keys from an elevated PowerShell session
(or using PowerShell remoting).
# Create the registry key hierarchy needed for root consolidation, ignoring
# errors if any of the keys already exist.
New-Item `
    -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs" `
    -Type Registry `
    -ErrorAction SilentlyContinue
New-Item `
    -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters" `
    -Type Registry `
    -ErrorAction SilentlyContinue
New-Item `
    -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
    -Type Registry `
    -ErrorAction SilentlyContinue

# Turn on root consolidation by setting ServerConsolidationRetry to 1.
Set-ItemProperty `
    -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
    -Name "ServerConsolidationRetry" `
    -Value 1

Creating DNS entries for existing file server names


In order for DFS Namespaces to respond to existing file server names, create alias (CNAME) records for your
existing file servers that point at the DFS Namespaces server name. The exact procedure for updating your DNS
records may depend on what servers your organization is using and if your organization is using custom
tooling to automate the management of DNS. The following steps are shown for the DNS server included with
Windows Server and automatically used by Windows AD.


On a Windows DNS server, open the DNS management console. This can be found by selecting the Start button
and typing DNS. Navigate to the forward lookup zone for your domain. For example, if your domain is
contoso.com, the forward lookup zone can be found under Forward Lookup Zones > contoso.com in the
management console. The exact hierarchy shown in this dialog will depend on the DNS configuration for your
network.
Right-click on your forward lookup zone and select New Alias (CNAME) . In the resulting dialog, enter the
short name for the file server you're replacing (the fully qualified domain name will be auto-populated in the
textbox labeled Fully qualified domain name ). In the textbox labeled Fully qualified domain name
(FQDN) for the target host , enter the name of the DFS-N server you have set up. You can use the Browse
button to help you select the server if desired. Select OK to create the CNAME record for your server.
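On a Windows DNS server, the same record can also be created from PowerShell with the DnsServer module. A minimal sketch, assuming your zone is contoso.com, the server being replaced is OldServer, and your DFS-N server is DfsServer; substitute your own names:

# Create a CNAME record so that the old server name resolves to the DFS-N server.
Add-DnsServerResourceRecordCName `
    -ZoneName "contoso.com" `
    -Name "OldServer" `
    -HostNameAlias "DfsServer.contoso.com"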
Create a namespace
The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the
starting point of the namespace, such that in the UNC path \\contoso.com\Public\ , the namespace root is
Public .

If you are using DFS Namespaces to take over an existing server name with root consolidation, the name of the
namespace should be the name of the server you want to take over, prepended with the # character. For
example, if you wanted to take over an existing server named MyServer , you would create a DFS-N namespace
called #MyServer . Whether you create the namespace via PowerShell or the DFS Management console,
remember to prepend the # as appropriate.


To create a new namespace, open the DFS Management console. This can be found by selecting the Start
button and typing DFS Management. The resulting management console has two sections, Namespaces and
Replication, which refer to DFS Namespaces and DFS Replication (DFS-R), respectively. Azure File Sync provides
a modern replication and synchronization mechanism that may be used in place of DFS-R if replication is also
desired.
Select the Namespaces section, and select the New Namespace button (you may also right-click on the
Namespaces section). The resulting New Namespace Wizard walks you through creating a namespace.
The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple
servers can host a namespace, but you will need to set up DFS Namespaces with one server at a time. Enter the
name of the desired DFS Namespace server and select Next . In the Namespace Name and Settings section,
you can enter the desired name of your namespace and select Next .
The Namespace Type section allows you to choose between a Domain-based namespace and a Stand-
alone namespace . If you intend to use DFS Namespaces to preserve an existing file server/NAS device name,
you should select the standalone namespace option. For any other scenarios, you should select a domain-based
namespace. Refer to namespace types above for more information on choosing between namespace types.
Select the desired namespace type for your environment and select Next . The wizard will then summarize the
namespace to be created. Select Create to create the namespace and Close when the dialog completes.
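The namespace can also be created with the New-DfsnRoot cmdlet. The following is a minimal sketch for a standalone namespace intended for root consolidation; it assumes an SMB share matching the namespace name already exists on the DFS-N server, and the server name is a placeholder. Note the # prefix discussed above:

# Create a standalone namespace named #MyServer hosted on DfsServer.
New-DfsnRoot `
    -Path "\\DfsServer\#MyServer" `
    -TargetPath "\\DfsServer\#MyServer" `
    -Type Standalone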

Configure folders and folder targets


For a namespace to be useful, it must have folders and folder targets. Each folder can have one or more folder
targets, which are pointers to the SMB file share(s) that host that content. When users browse a folder with
folder targets, the client computer receives a referral that transparently redirects the client computer to one of
the folder targets. You can also have folders without folder targets to add structure and hierarchy to the
namespace.
You can think of DFS Namespaces folders as analogous to file shares.


In the DFS Management console, select the namespace you just created and select New Folder . The resulting
New Folder dialog will allow you to create both the folder and its targets.
In the textbox labeled Name provide the name of the folder. Select Add... to add folder targets for this folder.
The resulting Add Folder Target dialog provides a textbox labeled Path to folder target where you can
provide the UNC path to the desired folder. Select OK on the Add Folder Target dialog. If you are adding a
UNC path to an Azure file share, you may receive a message reporting that the server
storageaccount.file.core.windows.net cannot be contacted. This is expected; select Yes to continue. Finally,
select OK on the New Folder dialog to create the folder and folder targets.
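The same folder and targets can be created with the DFSN cmdlets. A minimal sketch, assuming the domain-based namespace \\contoso\shares from earlier; the folder, share, and server names are placeholders:

# Create the folder with its first target, an Azure file share.
New-DfsnFolder `
    -Path "\\contoso\shares\FinanceDocuments" `
    -TargetPath "\\storageaccount.file.core.windows.net\finance"

# Optionally add a second target, such as a Windows Server replicated with Azure File Sync.
New-DfsnFolderTarget `
    -Path "\\contoso\shares\FinanceDocuments" `
    -TargetPath "\\FileSyncServer1\finance"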
Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file
share through DFS Namespaces. If you are using a domain-based namespace, the full path for your share
should be \\<domain-name>\<namespace>\<share> . If you are using a standalone namespace, the full path for your
share should be \\<DFS-server>\<namespace>\<share> . If you are using a standalone namespace with root
consolidation, you can access directly through your old server name, such as \\<old-server>\<share> .

See also
Deploying an Azure file share: Planning for an Azure Files deployment and How to create a file share.
Configuring file share access: Identity-based authentication and Networking considerations for direct access.
Windows Server Distributed File System Namespaces
Optimize costs for Azure Files with reserved
capacity
5/20/2022 • 7 minutes to read

You can save money on the storage costs for Azure file shares with capacity reservations. Azure Files reserved
capacity offers you a discount on capacity for storage costs when you commit to a reservation for either one
year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation.
Azure Files reserved capacity can significantly reduce your capacity costs for storing data in your Azure file
shares. How much you save will depend on the duration of your reservation, the total capacity you choose to
reserve, and the tier and redundancy settings that you've chosen for your Azure file shares. Reserved capacity
provides a billing discount and doesn't affect the state of your Azure file shares.
For pricing information about reservation capacity for Azure Files, see Azure Files pricing.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   Yes

Reservation terms for Azure Files


The following sections describe the terms of an Azure Files capacity reservation.
Reservation capacity
You can purchase Azure Files reserved capacity in units of 10 TiB and 100 TiB per month for a one-year or three-
year term.
Reservation scope
Azure Files reserved capacity is available for a single subscription, multiple subscriptions (shared scope), and
management groups. When scoped to a single subscription, the reservation discount is applied to the selected
subscription only. When scoped to multiple subscriptions, the reservation discount is shared across those
subscriptions within the customer's billing context. When scoped to a management group, the reservation
discount is applied to subscriptions that are a part of both the management group and billing scope. A
reservation applies to your usage within the purchased scope and cannot be limited to a specific storage
account, container, or object within the subscription.
A capacity reservation for Azure Files covers only the amount of data that is stored in a subscription or shared
resource group. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the
reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are
charged at the discount rates instead of the pay-as-you go rates. For more information on Azure reservations,
see What are Azure Reservations?.
Supported tiers and redundancy options
Azure Files reserved capacity is available for premium, hot, and cool file shares. Reserved capacity is not
available for Azure file shares in the transaction optimized tier. All storage redundancies support reservations.
For more information about redundancy options, see Azure Files redundancy.
Security requirements for purchase
To purchase reserved capacity:
You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go
rates.
For Enterprise subscriptions, Add Reserved Instances must be enabled in the EA portal. Or, if that setting is
disabled, you must be an EA Admin on the subscription.
For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure Files
reserved capacity.

Determine required capacity before purchase


When you purchase an Azure Files reservation, you must choose the region, tier, and redundancy option for the
reservation. Your reservation is valid only for data stored in that region, tier, and redundancy level. For example,
suppose you purchase a reservation for data in US West for the hot tier using zone-redundant storage (ZRS).
That reservation will not apply to data in US East, data in the cool tier, or data in geo-redundant storage (GRS).
However, you can purchase another reservation for your additional needs.
Reservations are available for 10 TiB or 100 TiB blocks, with higher discounts for 100 TiB blocks. When you
purchase a reservation in the Azure portal, Microsoft may provide you with recommendations based on your
previous usage to help determine which reservation you should purchase.

Purchase Azure Files reserved capacity


You can purchase Azure Files reserved capacity through the Azure portal. Pay for the reservation up front or
with monthly payments. For more information about purchasing with monthly payments, see Purchase Azure
reservations with up front or monthly payments.
For help identifying the reservation terms that are right for your scenario, see Understand the Azure Storage
reserved capacity discount.
Follow these steps to purchase reserved capacity:
1. Navigate to the Purchase reservations blade in the Azure portal.
2. Select Azure Files to buy a new reservation.
3. Fill in the required fields as described in the following table:

Scope: Indicates how many subscriptions can use the billing benefit associated with the reservation. It also
controls how the reservation is applied to specific subscriptions. If you select Shared, the reservation discount
is applied to Azure Files capacity in any subscription within your billing context. The billing context is based on
how you signed up for Azure. For enterprise customers, the shared scope is the enrollment and includes all
subscriptions within the enrollment. For pay-as-you-go customers, the shared scope includes all individual
subscriptions with pay-as-you-go rates created by the account administrator. If you select Single subscription,
the reservation discount is applied to Azure Files capacity in the selected subscription. If you select Single
resource group, the reservation discount is applied to Azure Files capacity in the selected subscription and the
selected resource group within that subscription. You can change the reservation scope after you purchase the
reservation.

Subscription: The subscription that's used to pay for the Azure Files reservation. The payment method on the
selected subscription is used in charging the costs. The subscription must be one of the following types: an
Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P), for which the charges are deducted
from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as
overage; or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or
MS-AZR-0023P), for which the charges are billed to the credit card or invoice payment method on the
subscription.

Region: The region where the reservation is in effect.

Tier: The tier for which the reservation is in effect. Options include Premium, Hot, and Cool.

Redundancy: The redundancy option for the reservation. Options include LRS, ZRS, GRS, and GZRS. For more
information about redundancy options, see Azure Files redundancy.

Billing frequency: Indicates how often the account is billed for the reservation. Options include Monthly or
Upfront.

Size: The amount of capacity to reserve.

Term: One year or three years.


4. After you select the parameters for your reservation, the Azure portal displays the cost. The portal also
shows the discount percentage over pay-as-you-go billing.
5. In the Purchase reservations blade, review the total cost of the reservation. You can also provide a
name for the reservation.
After you purchase a reservation, it is automatically applied to any existing Azure file shares that match the
terms of the reservation. If you haven't created any Azure file shares yet, the reservation will apply whenever
you create a resource that matches the terms of the reservation. In either case, the term of the reservation
begins immediately after a successful purchase.

Exchange or refund a reservation


You can exchange or refund a reservation, with certain limitations. These limitations are described in the
following sections.
To exchange or refund a reservation, navigate to the reservation details in the Azure portal. Select Exchange or
Refund , and follow the instructions to submit a support request. When the request has been processed,
Microsoft will send you an email to confirm completion of the request.
For more information about Azure Reservations policies, see Self-service exchanges and refunds for Azure
Reservations.
Exchange a reservation
Exchanging a reservation enables you to receive a prorated refund based on the unused portion of the
reservation. You can then apply the refund to the purchase price of a new Azure Files reservation.
There's no limit on the number of exchanges you can make. Additionally, there's no fee associated with an
exchange. The new reservation that you purchase must be of equal or greater value than the prorated credit
from the original reservation. An Azure Files reservation can be exchanged only for another Azure Files
reservation, and not for a reservation for any other Azure service.
Refund a reservation
You may cancel an Azure Files reservation at any time. When you cancel, you'll receive a prorated refund based
on the remaining term of the reservation, minus a 12 percent early termination fee. The maximum refund per
year is $50,000.
Cancelling a reservation immediately terminates the reservation and returns the remaining months to
Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase.

Expiration of a reservation
When a reservation expires, any Azure Files capacity that you are using under that reservation is billed at the
pay-as-you go rate. Reservations don't renew automatically.
You will receive an email notification 30 days prior to the expiration of the reservation, and again on the
expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later
than the expiration date.

Need help? Contact us


If you have questions or need help, create a support request.

Next steps
What are Azure Reservations?
Understand how reservation discounts are applied to Azure storage services
Mount SMB Azure file share on Windows
5/20/2022 • 6 minutes to read

Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be seamlessly used in Windows
and Windows Server. This article discusses the considerations for using an Azure file share with Windows and
Windows Server.
In order to use an Azure file share via the public endpoint outside of the Azure region it is hosted in, such as on-
premises or in a different Azure region, the OS must support SMB 3.x. Older versions of Windows that support
only SMB 2.1 cannot mount Azure file shares via the public endpoint.

Windows version                SMB version   Azure Files SMB Multichannel                            Maximum SMB channel encryption
Windows Server 2022            SMB 3.1.1     Yes                                                     AES-256-GCM
Windows 11                     SMB 3.1.1     Yes                                                     AES-256-GCM
Windows 10, version 21H1       SMB 3.1.1     Yes, with KB5003690 or newer                            AES-128-GCM
Windows Server, version 20H2   SMB 3.1.1     Yes, with KB5003690 or newer                            AES-128-GCM
Windows 10, version 20H2       SMB 3.1.1     Yes, with KB5003690 or newer                            AES-128-GCM
Windows Server, version 2004   SMB 3.1.1     Yes, with KB5003690 or newer                            AES-128-GCM
Windows 10, version 2004       SMB 3.1.1     Yes, with KB5003690 or newer                            AES-128-GCM
Windows Server 2019            SMB 3.1.1     Yes, with KB5003703 or newer                            AES-128-GCM
Windows 10, version 1809       SMB 3.1.1     Yes, with KB5003703 or newer                            AES-128-GCM
Windows Server 2016            SMB 3.1.1     Yes, with KB5004238 or newer and applied registry key   AES-128-GCM
Windows 10, version 1607       SMB 3.1.1     Yes, with KB5004238 or newer and applied registry key   AES-128-GCM
Windows 10, version 1507       SMB 3.1.1     Yes, with KB5004249 or newer and applied registry key   AES-128-GCM
Windows Server 2012 R2         SMB 3.0       No                                                      AES-128-CCM
Windows 8.1                    SMB 3.0       No                                                      AES-128-CCM
Windows Server 2012            SMB 3.0       No                                                      AES-128-CCM
Windows Server 2008 R2 (1)     SMB 2.1       No                                                      Not supported
Windows 7 (1)                  SMB 2.1       No                                                      Not supported

(1) Regular Microsoft support for Windows 7 and Windows Server 2008 R2 has ended. It is possible to purchase
additional support for security updates only through the Extended Security Update (ESU) program. We strongly
recommend migrating off of these operating systems.

NOTE
We always recommend taking the most recent KB for your version of Windows.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   No

Prerequisites
Ensure port 445 is open: The SMB protocol requires TCP port 445 to be open; connections will fail if port 445 is
blocked. You can check if your firewall is blocking port 445 with the Test-NetConnection cmdlet. To learn about
ways to work around a blocked 445 port, see the Cause 1: Port 445 is blocked section of our Windows
troubleshooting guide.
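For example, the following check, with a placeholder storage account name, should report TcpTestSucceeded : True when port 445 is reachable:

# Test outbound connectivity to the Azure file share endpoint over TCP port 445.
Test-NetConnection -ComputerName storageaccount.file.core.windows.net -Port 445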

Using an Azure file share with Windows


To use an Azure file share with Windows, you must either mount it, which means assigning it a drive letter or
mount point path, or access it via its UNC path.
This article uses the storage account key to access the file share. A storage account key is an administrator key
for a storage account, including administrator permissions to all files and folders within the file share you're
accessing, and for all file shares and other storage resources (blobs, queues, tables, etc.) contained within your
storage account. If this is not sufficient for your workload, Azure File Sync may be used, or you may use identity-
based authentication over SMB.
A common pattern for lifting and shifting line-of-business (LOB) applications that expect an SMB file share to
Azure is to use an Azure file share as an alternative for running a dedicated Windows file server in an Azure VM.
One important consideration for successfully migrating a line-of-business application to use an Azure file share
is that many line-of-business applications run under the context of a dedicated service account with limited
system permissions rather than the VM's administrative account. Therefore, you must ensure that you
mount/save the credentials for the Azure file share from the context of the service account rather than your
administrative account.
Mount the Azure file share
The Azure portal provides you with a script that you can use to mount your file share directly to a host. We
recommend using this provided script.
To get this script:
1. Sign in to the Azure portal.
2. Navigate to the storage account that contains the file share you'd like to mount.
3. Select File shares .
4. Select the file share you'd like to mount.

5. Select Connect .

6. Select the drive letter to mount the share to.


7. Copy the provided script.
8. Paste the script into a shell on the host you'd like to mount the file share to, and run it.
You have now mounted your Azure file share.
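The portal-generated script persists the storage account credentials with cmdkey and then maps the drive with New-PSDrive. If you'd rather write the mount yourself, the following is a minimal sketch along the same lines; the account name, share name, and drive letter are placeholders, and the storage account key should be supplied from a secure location rather than typed inline:

# Persist the storage account credentials so the mapping survives reboots.
cmdkey /add:storageaccount.file.core.windows.net /user:AZURE\storageaccount /pass:<storage-account-key>

# Map drive Z: to the file share; -Persist makes the mapping behave like net use /persistent:yes.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\storageaccount.file.core.windows.net\myshare" -Persist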
Mount the Azure file share with File Explorer

NOTE
Note that the following instructions are shown on Windows 10 and may differ slightly on older releases.

1. Open File Explorer. You can open it from the Start Menu or by pressing the Win+E keyboard shortcut.
2. Navigate to This PC on the left-hand side of the window. This will change the menus available in the
ribbon. Under the Computer menu, select Map network drive .
3. Select the drive letter and enter the UNC path, the UNC path format is
\\<storageAccountName>.file.core.windows.net\<fileShareName> . For example:
\\anexampleaccountname.file.core.windows.net\example-share-name .

4. Use the storage account name prepended with AZURE\ as the username and a storage account key as the
password.

5. Use Azure file share as desired.

6. When you are ready to dismount the Azure file share, you can do so by right-clicking on the entry for the
share under the Network locations in File Explorer and selecting Disconnect .
Accessing share snapshots from Windows
If you have taken a share snapshot, either manually or automatically through a script or service like Azure
Backup, you can view previous versions of a share, a directory, or a particular file from a file share on Windows.
You can take a share snapshot using Azure PowerShell, Azure CLI, or the Azure portal.
List previous versions
Browse to the item or parent item that needs to be restored. Double-click to go to the desired directory. Right-
click and select Proper ties from the menu.

Select Previous Versions to see the list of share snapshots for this directory. The list might take a few seconds
to load, depending on the network speed and the number of share snapshots in the directory.

You can select Open to open a particular snapshot.


Restore from a previous version
Select Restore to copy the contents of the entire directory recursively at the share snapshot creation time to the
original location.

Enable SMB Multichannel


Support for SMB Multichannel in Azure Files requires Windows to be up to date, with all the relevant patches
applied. Several older Windows versions, including Windows Server 2016, Windows 10 version 1607,
and Windows 10 version 1507, require additional registry keys to be set for all relevant SMB Multichannel fixes
to be applied on fully patched installations. If you are running a version of Windows that is newer than these
three versions, no additional action is required.
Windows Server 2016 and Windows 10 version 1607
To enable all SMB Multichannel fixes for Windows Server 2016 and Windows 10 version 1607, run the following
PowerShell command:

Set-ItemProperty `
-Path "HKLM:SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides" `
-Name "2291605642" `
-Value 1 `
-Force

Windows 10 version 1507


To enable all SMB Multichannel fixes for Windows 10 version 1507, run the following PowerShell command:

Set-ItemProperty `
-Path "HKLM:\SYSTEM\CurrentControlSet\Services\MRxSmb\KBSwitch" `
-Name "{FFC376AE-A5D2-47DC-A36F-FE9A46D53D75}" `
-Value 1 `
-Force

Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
FAQ
Troubleshooting on Windows
Mount SMB Azure file share on Linux
5/20/2022 • 8 minutes to read

Azure Files is Microsoft's easy to use cloud file system. Azure file shares can be mounted in Linux distributions
using the SMB kernel client.
The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires
encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support
encryption in transit; for security reasons, you can't mount Azure file shares with SMB 2.1 from another Azure
region or from on-premises. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.

Distribution                      SMB 3.1.1                               SMB 3.0
Linux kernel version              Basic 3.1.1 support: 4.17               Basic 3.0 support: 3.12
                                  Default mount: 5.0                      AES-128-CCM encryption: 4.11
                                  AES-128-GCM encryption: 5.3
Ubuntu                            AES-128-GCM encryption: 18.04.5 LTS+    AES-128-CCM encryption: 16.04.4 LTS+
Red Hat Enterprise Linux (RHEL)   Basic: 8.0+                             7.5+
                                  Default mount: 8.2+
                                  AES-128-GCM encryption: 8.2+
Debian                            Basic: 10+                              AES-128-CCM encryption: 10+
SUSE Linux Enterprise Server      AES-128-GCM encryption: 15 SP2+         AES-128-CCM encryption: 12 SP2+
If your Linux distribution isn't listed in the above table, you can check the Linux kernel version with the uname
command:

uname -r

NOTE
SMB 2.1 support was added to Linux kernel version 3.7. If you are using a version of the Linux kernel after 3.7, it should
support SMB 2.1.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   No

Prerequisites
Ensure the cifs-utils package is installed.
The cifs-utils package can be installed using the package manager on the Linux distribution of your
choice.
On Ubuntu and Debian , use the apt package manager:

sudo apt update


sudo apt install cifs-utils

On Red Hat Enterprise Linux 8+ use the dnf package manager:

sudo dnf install cifs-utils

On older versions of Red Hat Enterprise Linux use the yum package manager:

sudo yum install cifs-utils

On SUSE Linux Enterprise Ser ver , use the zypper package manager:

sudo zypper install cifs-utils

On other distributions, use the appropriate package manager or compile from source.
The most recent version of the Azure Command Line Interface (CLI). For more information on
how to install the Azure CLI, see Install the Azure CLI and select your operating system. If you prefer to
use the Azure PowerShell module in PowerShell 6+, you may; however, the instructions in this article are
written for the Azure CLI.
Ensure port 445 is open: SMB communicates over TCP port 445. Check that your firewall isn't
blocking TCP port 445 from the client machine. Replace <your-resource-group> and <your-storage-account> ,
then run the following script:

resourceGroupName="<your-resource-group>"
storageAccountName="<your-storage-account>"

# This command assumes you have logged in with az login


httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))
fileHost=$(echo $smbPath | tr -d "/")

nc -zvw3 $fileHost 445

If the connection was successful, you should see something similar to the following output:
Connection to <your-storage-account> 445 port [tcp/microsoft-ds] succeeded!

If you are unable to open up port 445 on your corporate network or are blocked from doing so by an ISP,
you may use a VPN connection or ExpressRoute to work around port 445. For more information, see
Networking considerations for direct Azure file share access.

Mount the Azure file share on-demand with mount


When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file
system. You can mount file shares anywhere on your system. The following example mounts under the
/mount path. You can change this to your preferred path by modifying the $mntRoot variable.

Replace <resource-group-name> , <storage-account-name> , and <file-share-name> with the appropriate


information for your environment:

resourceGroupName="<resource-group-name>"
storageAccountName="<storage-account-name>"
fileShareName="<file-share-name>"

mntRoot="/mount"
mntPath="$mntRoot/$storageAccountName/$fileShareName"

sudo mkdir -p $mntPath

Next, mount the file share using the mount command. In the following example, the $smbPath variable is
populated using the fully qualified domain name for the storage account's file endpoint, and $storageAccountKey
is populated with the storage account key.


NOTE
Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. If you're using a version of the Linux
kernel older than 5.0, specify vers=3.1.1 in the mount options list.

# This command assumes you have logged in with az login


httpEndpoint=$(az storage account show \
--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName

storageAccountKey=$(az storage account keys list \


--resource-group $resourceGroupName \
--account-name $storageAccountName \
--query "[0].value" --output tsv | tr -d '"')

sudo mount -t cifs $smbPath $mntPath -o username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30

You can use uid / gid or dir_mode and file_mode in the mount options for the mount command to set
permissions. For more information on how to set permissions, see UNIX numeric notation on Wikipedia.
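As a sketch, a mount that assigns ownership of the files and directories to a specific local user might look like the following, where the uid and gid values of 1000 are placeholders for the desired owner:

sudo mount -t cifs $smbPath $mntPath -o username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30,uid=1000,gid=1000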
You can also mount the same Azure file share to multiple mount points if desired. When you are done using the
Azure file share, use sudo umount $mntPath to unmount the share.

Automatically mount file shares


When you mount a file share on a Linux OS, your remote file share is represented as a folder in your local file
system. You can mount file shares anywhere on your system. The following example mounts under the
/mount path. You can change this to your preferred path by modifying the $mntRoot variable.

mntRoot="/mount"
sudo mkdir -p $mntRoot

To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the
storage account key as the password. Since the storage account credentials may change over time, you should
store the credentials for the storage account separately from the mount configuration.
The following example shows how to create a file to store the credentials. Remember to replace
<resource-group-name> and <storage-account-name> with the appropriate information for your environment.

resourceGroupName="<resource-group-name>"
storageAccountName="<storage-account-name>"

# Create a folder to store the credentials for this storage account and
# any other that you might set up.
credentialRoot="/etc/smbcredentials"
sudo mkdir -p "/etc/smbcredentials"

# Get the storage account key for the indicated storage account.
# You must be logged in with az login and your user identity must have
# permissions to list the storage account keys for this command to work.
storageAccountKey=$(az storage account keys list \
--resource-group $resourceGroupName \
--account-name $storageAccountName \
--query "[0].value" --output tsv | tr -d '"')

# Create the credential file for this individual storage account


smbCredentialFile="$credentialRoot/$storageAccountName.cred"
if [ ! -f $smbCredentialFile ]; then
echo "username=$storageAccountName" | sudo tee $smbCredentialFile > /dev/null
echo "password=$storageAccountKey" | sudo tee -a $smbCredentialFile > /dev/null
else
echo "The credential file $smbCredentialFile already exists, and was not modified."
fi

# Change permissions on the credential file so only root can read or modify the password file.
sudo chmod 600 $smbCredentialFile

To automatically mount a file share, you have a choice between using a static mount via the /etc/fstab file
or using a dynamic mount via the autofs utility.
Static mount with /etc/fstab
Using the earlier environment, create a folder for your storage account/file share under your mount folder.
Replace <file-share-name> with the appropriate name of your Azure file share.

fileShareName="<file-share-name>"

mntPath="$mntRoot/$storageAccountName/$fileShareName"
sudo mkdir -p $mntPath
Finally, create a record in the /etc/fstab file for your Azure file share. In the command below, the default 0755
Linux file and folder permissions are used, which means read, write, and execute for the owner (based on the
file/directory Linux owner), read and execute for users in the owner group, and read and execute for others on the
system. You may wish to set alternate uid and gid or dir_mode and file_mode permissions on mount as
desired. For more information on how to set permissions, see UNIX numeric notation on Wikipedia.

httpEndpoint=$(az storage account show \


--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName

if [ -z "$(grep $smbPath\ $mntPath /etc/fstab)" ]; then


echo "$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,serverino,nosharesock,actimeo=30" |
sudo tee -a /etc/fstab > /dev/null
else
echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already
present. You may want to double check /etc/fstab to ensure the configuration is as desired."
fi

sudo mount -a

NOTE
Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. You can specify alternate protocol
versions using the vers mount option (protocol versions are 3.1.1 , 3.0 , and 2.1 ).

Dynamically mount with autofs


To dynamically mount a file share with the autofs utility, install it using the package manager on the Linux
distribution of your choice.
On Ubuntu and Debian distributions, use the apt package manager:

sudo apt update


sudo apt install autofs

On Red Hat Enterprise Linux 8+ , use the dnf package manager:

sudo dnf install autofs

On older versions of Red Hat Enterprise Linux , use the yum package manager:

sudo yum install autofs

On SUSE Linux Enterprise Ser ver , use the zypper package manager:

sudo zypper install autofs

Next, update the autofs configuration files.


fileShareName="<file-share-name>"

httpEndpoint=$(az storage account show \


--resource-group $resourceGroupName \
--name $storageAccountName \
--query "primaryEndpoints.file" --output tsv | tr -d '"')
smbPath=$(echo $httpEndpoint | cut -c7-$(expr length $httpEndpoint))$fileShareName

echo "$fileShareName -fstype=cifs,credentials=$smbCredentialFile :$smbPath" > /etc/auto.fileshares

echo "/fileshares /etc/auto.fileshares --timeout=60" > /etc/auto.master

The final step is to restart the autofs service.

sudo systemctl restart autofs

Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
Remove SMB 1 on Linux
Troubleshooting
How to mount an NFS file share
5/20/2022 • 2 minutes to read

Azure Files is Microsoft's easy to use cloud file system. Azure file shares can be mounted in Linux distributions
using either the Server Message Block protocol (SMB) or the Network File System (NFS) protocol. This article is
focused on mounting with NFS, for details on mounting with SMB, see Use Azure Files with Linux. For details on
each of the available protocols, see Azure file share protocols.

Limitations
NFS Azure file shares are only available for the premium tier.
Regional availability
Azure NFS file shares are supported in all the same regions that support premium file storage.
For the most up-to-date list, see the Premium Files Storage entry on the Azure Products available by region
page.

Prerequisites
Create an NFS share.
Open port 2049 on any client you want to mount your NFS share to.

IMPORTANT
NFS shares can only be accessed from trusted networks. Connections to your NFS share must originate from one
of the following networking solutions:

Either create a private endpoint (recommended) or restrict access to your public endpoint.
Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files.
Configure a Site-to-Site VPN for use with Azure Files.
Configure ExpressRoute.

Disable secure transfer


1. Sign in to the Azure portal and access the storage account containing the NFS share you created.
2. Select Configuration .
3. Select Disabled for Secure transfer required .
4. Select Save .
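If you'd rather script this step, the following Azure PowerShell sketch toggles the same setting; the resource group and storage account names are placeholders:

# Disable the "secure transfer required" setting on the storage account.
Set-AzStorageAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "storageaccount" `
    -EnableHttpsTrafficOnly $false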
Mount an NFS share
1. Once the file share is created, select the share and select Connect from Linux .
2. Enter the mount path you'd like to use, then copy the script.
3. Connect to your client and use the provided mounting script.

You have now mounted your NFS share.


Validate connectivity
If your mount failed, it's possible that your private endpoint was not set up correctly or is inaccessible. For details
on confirming connectivity, see the Verify connectivity section of the networking endpoints article.

Next steps
Learn more about Azure Files with our article, Planning for an Azure Files deployment.
If you experience any issues, see Troubleshoot Azure NFS file shares.
Mount SMB Azure file share on macOS
5/20/2022 • 2 minutes to read

Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted with the industry
standard SMB 3 protocol by macOS High Sierra 10.13+. This article shows two different ways to mount an
Azure file share on macOS: with the Finder UI and using the Terminal.

Prerequisites for mounting an Azure file share on macOS


Storage account name : To mount an Azure file share, you will need the name of the storage account.
Storage account key : To mount an Azure file share, you will need the primary (or secondary) storage
key. SAS keys are not currently supported for mounting.
Ensure port 445 is open: SMB communicates over TCP port 445. On your client machine (the Mac),
check to make sure your firewall is not blocking TCP port 445.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   No

Mount an Azure file share via Finder


1. Open Finder : Finder is open on macOS by default, but you can ensure it is the currently selected
application by clicking the "macOS face icon" on the dock:

2. Select "Connect to Ser ver" from the "Go" Menu : Using the UNC path from the prerequisites,
convert the beginning double backslash ( \\ ) to smb:// and all other backslashes ( \ ) to forwards
slashes ( / ). Your link should look like the following:
3. Use the storage account name and storage account key when prompted for a username and
password : When you click "Connect" on the "Connect to Server" dialog, you will be prompted for the
username and password (This will be autopopulated with your macOS username). You have the option of
placing the storage account name/storage account key in your macOS Keychain.
4. Use the Azure file share as desired: After supplying the storage account name and storage account key
as the username and password, the share will be mounted. You may use this as you would normally use a
local folder/file share, including dragging and dropping files into the file share.

Mount an Azure file share via Terminal


1. Replace <storage-account-name> , <storage-account-key> , and <share-name> with the appropriate values
for your environment.
open smb://<storage-account-name>:<storage-account-key>@<storage-account-
name>.file.core.windows.net/<share-name>

2. Use the Azure file share as desired : The Azure file share will be mounted at the mount point
specified by the previous command.

Next steps
Connect your Mac to shared computers and servers - Apple Support
Configuring Azure Files network endpoints
5/20/2022 • 17 minutes to read

Azure Files provides two main types of endpoints for accessing Azure file shares:
Public endpoints, which have a public IP address and can be accessed from anywhere in the world.
Private endpoints, which exist within a virtual network and have a private IP address from within the address
space of that virtual network.
Public and private endpoints exist on the Azure storage account. A storage account is a management construct
that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage
resources, such as blob containers or queues.
This article focuses on how to configure a storage account's endpoints for accessing the Azure file share directly.
Most of the detail provided within this document also applies to how Azure File Sync interoperates with public
and private endpoints for the storage account, however for more information about networking considerations
for an Azure File Sync deployment, see configuring Azure File Sync proxy and firewall settings.
We recommend reading Azure Files networking considerations prior to reading this how to guide.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   Yes

Prerequisites
This article assumes that you have already created an Azure subscription. If you don't already have a
subscription, then create a free account before you begin.
This article assumes that you have already created an Azure file share in a storage account that you would
like to connect to from on-premises. To learn how to create an Azure file share, see Create an Azure file share.
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.

Endpoint configurations
You can configure your endpoints to restrict network access to your storage account. There are two approaches
to restricting access to a storage account to a virtual network:
Create one or more private endpoints for the storage account and restrict all access to the public endpoint.
This ensures that only traffic originating from within the desired virtual networks can access the Azure file
shares within the storage account.
Restrict the public endpoint to one or more virtual networks. This works by using a capability of the virtual
network called service endpoints. When you restrict the traffic to a storage account via a service endpoint,
you are still accessing the storage account via the public IP address, but access is only possible from the
locations you specify in your configuration.
Create a private endpoint
Creating a private endpoint for your storage account will result in the following Azure resources being deployed:
A private endpoint : An Azure resource representing the storage account's private endpoint. You can think
of this as a resource that connects a storage account and a network interface.
A network interface (NIC) : The network interface that maintains a private IP address within the specified
virtual network/subnet. This is the exact same resource that gets deployed when you deploy a virtual
machine, however instead of being assigned to a VM, it's owned by the private endpoint.
A private DNS zone : If you've never deployed a private endpoint for this virtual network before, a new
private DNS zone will be deployed for your virtual network. A DNS A record will also be created for the
storage account in this DNS zone. If you've already deployed a private endpoint in this virtual network, a new
A record for the storage account will be added to the existing DNS zone. Deploying a DNS zone is optional,
however highly recommended, and required if you are mounting your Azure file shares with an AD service
principal or using the FileREST API.

NOTE
This article uses the storage account DNS suffix for the Azure Public regions, core.windows.net . This commentary also
applies to Azure Sovereign clouds such as the Azure US Government cloud and the Azure China cloud - just substitute
the appropriate suffixes for your environment.


Navigate to the storage account for which you would like to create a private endpoint. In the table of contents
for the storage account, select Networking , Private endpoint connections , and then + Private endpoint to
create a new private endpoint.
The resulting wizard has multiple pages to complete.
In the Basics blade, select the desired resource group, name, and region for your private endpoint. These can be
whatever you want, they don't have to match the storage account in any way, although you must create the
private endpoint in the same region as the virtual network you wish to create the private endpoint in.

In the Resource blade, select the radio button for Connect to an Azure resource in my directory. Under
Resource type, select Microsoft.Storage/storageAccounts for the resource type. The Resource field is the
storage account with the Azure file share you wish to connect to. Target sub-resource is file , since this is for
Azure Files.
The Configuration blade allows you to select the specific virtual network and subnet you would like to add
your private endpoint to. You must select a distinct subnet from the subnet you added your service endpoint to
above. The Configuration blade also contains the information for creating/updating the private DNS zone. We
recommend using the default privatelink.file.core.windows.net zone.

Click Review + create to create the private endpoint.
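As a rough equivalent in Azure PowerShell, the following sketch creates a private endpoint for the file sub-resource of a storage account; all resource names are placeholders, and it assumes the storage account, virtual network, and subnet already exist (private DNS zone setup is omitted here):

$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "storageaccount"
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVirtualNetwork"
$subnet = $virtualNetwork.Subnets | Where-Object { $_.Name -eq "myPrivateEndpointSubnet" }

# Define the connection from the private endpoint to the storage account's file service.
$connection = New-AzPrivateLinkServiceConnection `
    -Name "storageaccount-connection" `
    -PrivateLinkServiceId $storageAccount.Id `
    -GroupId "file"

# Create the private endpoint in the chosen subnet.
New-AzPrivateEndpoint `
    -ResourceGroupName "myResourceGroup" `
    -Name "storageaccount-pe" `
    -Location $virtualNetwork.Location `
    -Subnet $subnet `
    -PrivateLinkServiceConnection $connection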

Verify connectivity

If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described
in Configuring DNS forwarding for Azure Files, you can test that your private endpoint has been set up correctly
by running the following commands from PowerShell, the command line, or the terminal (works for Windows,
Linux, or macOS). You must replace <storage-account-name> with the appropriate storage account name:

nslookup <storage-account-name>.file.core.windows.net

If everything has worked successfully, you should see the following output, where 192.168.0.5 is the private IP
address of the private endpoint in your virtual network (output shown for Windows):

Server: UnKnown
Address: 10.2.4.4

Non-authoritative answer:
Name: storageaccount.privatelink.file.core.windows.net
Address: 192.168.0.5
Aliases: storageaccount.file.core.windows.net

Restrict public endpoint access


Limiting public endpoint access first requires you to disable general access to the public endpoint. Disabling
access to the public endpoint does not impact private endpoints. After the public endpoint has been disabled,
you can select specific networks or IP addresses that may continue to access it. In general, most firewall policies
for a storage account restrict networking access to one or more virtual networks.
Disable access to the public endpoint
When access to the public endpoint is disabled, the storage account can still be accessed through its private
endpoints. Otherwise valid requests to the storage account's public endpoint will be rejected, unless they are
from a specifically allowed source.


Navigate to the storage account for which you would like to restrict all access to the public endpoint. In the table
of contents for the storage account, select Networking .
At the top of the page, select the Selected networks radio button. This will un-hide a number of settings for
controlling the restriction of the public endpoint. Check Allow trusted Microsoft services to access this
storage account to allow trusted first-party Microsoft services such as Azure File Sync to access the storage
account.
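A hedged PowerShell equivalent of these portal steps, with placeholder resource names:

# Deny public access by default while still allowing trusted Microsoft services.
Update-AzStorageAccountNetworkRuleSet `
    -ResourceGroupName "myResourceGroup" `
    -Name "storageaccount" `
    -DefaultAction Deny `
    -Bypass AzureServices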

Restrict access to the public endpoint to specific virtual networks


When you restrict the storage account to specific virtual networks, you are allowing requests to the public
endpoint from within the specified virtual networks. This works by using a capability of the virtual network
called service endpoints. This can be used with or without private endpoints.


Navigate to the storage account for which you would like to restrict the public endpoint to specific virtual
networks. In the table of contents for the storage account, select Networking .
At the top of the page, select the Selected networks radio button. This will un-hide a number of settings for
controlling the restriction of the public endpoint. Click +Add existing virtual network to select the specific
virtual network that should be allowed to access the storage account via the public endpoint. This will require
selecting a virtual network and a subnet for that virtual network.
Check Allow trusted Microsoft services to access this storage account to allow trusted first-party
Microsoft services such as Azure File Sync to access the storage account.
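In PowerShell, the same rule can be added once the subnet has the Microsoft.Storage service endpoint enabled. A minimal sketch with placeholder names:

# Look up the subnet, then allow it through the storage account firewall.
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVirtualNetwork"
$subnet = $virtualNetwork.Subnets | Where-Object { $_.Name -eq "mySubnet" }

Add-AzStorageAccountNetworkRule `
    -ResourceGroupName "myResourceGroup" `
    -Name "storageaccount" `
    -VirtualNetworkResourceId $subnet.Id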

See also
Azure Files networking considerations
Configuring DNS forwarding for Azure Files
Configuring S2S VPN for Azure Files
Configuring DNS forwarding for Azure Files
5/20/2022 • 6 minutes to read

Azure Files enables you to create private endpoints for the storage accounts containing your file shares.
Although useful for many different applications, private endpoints are especially useful for connecting to your
Azure file shares from your on-premises network using a VPN or ExpressRoute connection using private-
peering.
In order for connections to your storage account to go over your network tunnel, the fully qualified domain
name (FQDN) of your storage account must resolve to your private endpoint's private IP address. To achieve
this, you must forward the storage endpoint suffix ( core.windows.net for public cloud regions) to the Azure
private DNS service accessible from within your virtual network. This guide will show how to set up and
configure DNS forwarding to properly resolve to your storage account's private endpoint IP address.
We strongly recommend that you read Planning for an Azure Files deployment and Azure Files networking
considerations before you complete the steps described in this article.

Applies to

File share type                               SMB   NFS
Standard file shares (GPv2), LRS/ZRS          Yes   No
Standard file shares (GPv2), GRS/GZRS         Yes   No
Premium file shares (FileStorage), LRS/ZRS    Yes   Yes

Overview
Azure Files provides two main types of endpoints for accessing Azure file shares:
Public endpoints, which have a public IP address and can be accessed from anywhere in the world.
Private endpoints, which exist within a virtual network and have a private IP address from within the address
space of that virtual network.
Public and private endpoints exist on the Azure storage account. A storage account is a management construct
that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage
resources, such as blob containers or queues.
Every storage account has a fully qualified domain name (FQDN). For the public cloud regions, this FQDN
follows the pattern storageaccount.file.core.windows.net where storageaccount is the name of the storage
account. When you make requests against this name, such as mounting the share on your workstation, your
operating system performs a DNS lookup to resolve the fully qualified domain name to an IP address.
By default, storageaccount.file.core.windows.net resolves to the public endpoint's IP address. The public
endpoint for a storage account is hosted on an Azure storage cluster which hosts many other storage accounts'
public endpoints. When you create a private endpoint, a private DNS zone is linked to the virtual network it was
added to, with a CNAME record mapping storageaccount.file.core.windows.net to an A record entry for the
private IP address of your storage account's private endpoint. This enables you to use
storageaccount.file.core.windows.net FQDN within the virtual network and have it resolve to the private
endpoint's IP address.
Since our ultimate objective is to access the Azure file shares hosted within the storage account from on-
premises using a network tunnel such as a VPN or ExpressRoute connection, you must configure your on-
premises DNS servers to forward requests made to the Azure Files service to the Azure private DNS service. To
accomplish this, you need to set up conditional forwarding of *.core.windows.net (or the appropriate storage
endpoint suffix for the US Government, Germany, or China national clouds) to a DNS server hosted within your
Azure virtual network. This DNS server will then recursively forward the request on to Azure's private DNS
service that will resolve the fully qualified domain name of the storage account to the appropriate private IP
address.
Configuring DNS forwarding for Azure Files requires running a virtual machine to host a DNS server to
forward the requests; however, this is a one-time step for all the Azure file shares hosted within your virtual
network. Additionally, this requirement isn't exclusive to Azure Files: any Azure service that supports
private endpoints that you want to access from on-premises can make use of the DNS forwarding you will
configure in this guide, including Azure Blob storage, Azure SQL Database, Azure Cosmos DB, and so on.
This guide shows the steps for configuring DNS forwarding for the Azure storage endpoint, so in addition to
Azure Files, DNS name resolution requests for all of the other Azure storage services (Azure Blob storage, Azure
Table storage, Azure Queue storage, etc.) will be forwarded to Azure's private DNS service. Additional endpoints
for other Azure services can also be added if desired. DNS forwarding back to your on-premises DNS servers
will also be configured, enabling cloud resources within your virtual network (such as a DFS-N server) to
resolve on-premises machine names.

Prerequisites
Before you can set up DNS forwarding to Azure Files, you need to have completed the following steps:
A storage account containing an Azure file share you would like to mount. To learn how to create a storage
account and an Azure file share, see Create an Azure file share.
A private endpoint for the storage account. To learn how to create a private endpoint for Azure Files, see
Create a private endpoint.
The latest version of the Azure PowerShell module.
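If you need to install or update the module, a typical approach is the following (assuming the PowerShell Gallery is reachable from your machine):

# Install the Az module for the current user from the PowerShell Gallery.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force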

IMPORTANT
This guide assumes you're using the DNS server within Windows Server in your on-premises environment. All of the steps
described in this guide are possible with any DNS server, not just the Windows DNS Server.

Configuring DNS forwarding


If you already have DNS servers in place within your Azure virtual network, or if you simply prefer to deploy
your own virtual machines to be DNS servers by whatever methodology your organization uses, you can
configure DNS with the built-in DNS server PowerShell cmdlets.
On your on-premises DNS servers, create a conditional forwarder using Add-DnsServerConditionalForwarderZone .
This conditional forwarder must be deployed on all of your on-premises DNS servers to be effective at properly
forwarding traffic to Azure. Remember to replace <azure-dns-server-ip> with the appropriate IP addresses for
your environment.
$vnetDnsServers = "<azure-dns-server-ip>", "<azure-dns-server-ip>"

$storageAccountEndpoint = Get-AzContext | `
Select-Object -ExpandProperty Environment | `
Select-Object -ExpandProperty StorageEndpointSuffix

Add-DnsServerConditionalForwarderZone `
-Name $storageAccountEndpoint `
-MasterServers $vnetDnsServers

On the DNS servers within your Azure virtual network, you also will need to put a forwarder in place such that
requests for the storage account DNS zone are directed to the Azure private DNS service, which is fronted by
the reserved IP address 168.63.129.16 . (Remember to populate $storageAccountEndpoint if you're running the
commands within a different PowerShell session.)

Add-DnsServerConditionalForwarderZone `
-Name $storageAccountEndpoint `
-MasterServers "168.63.129.16"

Confirm DNS forwarders


Before testing to see if the DNS forwarders have successfully been applied, we recommend clearing the DNS
cache on your local workstation using Clear-DnsClientCache . To test to see if you can successfully resolve the
fully qualified domain name of your storage account, use Resolve-DnsName or nslookup .

# Replace storageaccount.file.core.windows.net with the appropriate FQDN for your storage account.
# Note the proper suffix (core.windows.net) depends on the cloud you're deployed in.
Resolve-DnsName -Name storageaccount.file.core.windows.net

If the name resolution is successful, you should see the resolved IP address match the private IP address of your
storage account's private endpoint.

Name                                 Type   TTL  Section  NameHost
----                                 ----   ---  -------  --------
storageaccount.file.core.windows.net CNAME  29   Answer   storageaccount.privatelink.file.core.windows.net

Name       : storageaccount.privatelink.file.core.windows.net
QueryType  : A
TTL        : 1769
Section    : Answer
IP4Address : 192.168.0.4

If you're mounting an SMB file share, you can also use the following Test-NetConnection command to see that a
TCP connection can be successfully made to your storage account.

Test-NetConnection -ComputerName storageaccount.file.core.windows.net -CommonTCPPort SMB
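If the TCP connection succeeds, the output should resemble the following; the addresses shown are illustrative, and RemoteAddress should match your private endpoint's private IP address:

ComputerName     : storageaccount.file.core.windows.net
RemoteAddress    : 192.168.0.4
RemotePort       : 445
InterfaceAlias   : Ethernet
SourceAddress    : 192.168.0.5
TcpTestSucceeded : True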

See also
Planning for an Azure Files deployment
Azure Files networking considerations
Configuring Azure Files network endpoints
Configure a Site-to-Site VPN for use with Azure
Files

You can use a Site-to-Site (S2S) VPN connection to mount your Azure file shares from your on-premises
network, without sending data over the open internet. You can set up a Site-to-Site VPN using Azure VPN
Gateway, which is an Azure resource offering VPN services, and is deployed in a resource group alongside
storage accounts or other Azure resources.

We strongly recommend that you read Azure Files networking overview before continuing with this how-to
article for a complete discussion of the networking options available for Azure Files.
This article details the steps to configure a Site-to-Site VPN to mount Azure file shares directly on-premises. If
you're looking to route sync traffic for Azure File Sync over a Site-to-Site VPN, please see configuring Azure File
Sync proxy and firewall settings.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Prerequisites
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A private endpoint for the storage account containing the Azure file share you want to mount on-
premises. To learn more about how to create a private endpoint, see Configuring Azure Files network
endpoints.
A network appliance or server in your on-premises datacenter that is compatible with Azure VPN
Gateway. Azure Files is agnostic of the on-premises network appliance chosen but Azure VPN Gateway
maintains a list of tested devices. Different network appliances offer different features, performance
characteristics, and management functionalities, so consider these when selecting a network appliance.
If you do not have an existing network appliance, Windows Server contains a built-in Server Role, Routing
and Remote Access (RRAS), which may be used as the on-premises network appliance. To learn more
about how to configure Routing and Remote Access in Windows Server, see RAS Gateway.

Add storage account to VNet


In the Azure portal, navigate to the storage account containing the Azure file share you would like to mount on-
premises. In the table of contents for the storage account, select the Firewalls and virtual networks entry.
Unless you added a virtual network to your storage account when you created it, the resulting pane should have
the Allow access from radio button for All networks selected.
To add your storage account to the desired virtual network, select Selected networks . Under the Virtual
networks subheading, click either +Add existing virtual network or +Add new virtual network
depending on the desired state. Creating a new virtual network will result in a new Azure resource being
created. The new or existing VNet resource does not need to be in the same resource group or subscription as
the storage account; however, it must be in the same region as the storage account, and the resource group and
subscription you deploy your VNet into must match the ones you will deploy your VPN Gateway into.

If you add an existing virtual network, you will be asked to select one or more subnets of that virtual network
to which the storage account should be added. If you select a new virtual network, you will create a subnet as
part of the creation of the virtual network, and you can add more later through the resulting Azure resource for
the virtual network.
If you have not added a storage account to your subscription before, the Microsoft.Storage service endpoint will
need to be added to the virtual network. This may take some time, and until this operation has completed, you
will not be able to access the Azure file shares within that storage account, including via the VPN connection.
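If you'd rather script this step than use the portal, the following Azure PowerShell sketch enables the Microsoft.Storage service endpoint on an existing subnet and adds the corresponding network rule; all names are placeholders for your environment:

# Placeholder values -- replace with your own.
$resourceGroupName = "<resource-group-name>"
$storageAccountName = "<storage-account-name>"
$virtualNetworkName = "<vnet-name>"
$subnetName = "<subnet-name>"

$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $virtualNetworkName
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName

# Enable the Microsoft.Storage service endpoint on the subnet.
Set-AzVirtualNetworkSubnetConfig `
    -VirtualNetwork $virtualNetwork `
    -Name $subnetName `
    -AddressPrefix $subnet.AddressPrefix `
    -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork | Out-Null

# Add the subnet to the storage account's allowed virtual networks.
Add-AzStorageAccountNetworkRule `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -VirtualNetworkResourceId $subnet.Id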
Deploy an Azure VPN Gateway
In the table of contents for the Azure portal, select Create a new resource and search for Virtual network
gateway. Your virtual network gateway must be in the same subscription, Azure region, and resource group as
the virtual network you deployed in the previous step (note that resource group is automatically selected when
the virtual network is picked).
For the purposes of deploying an Azure VPN Gateway, you must populate the following fields:
Name : The name of the Azure resource for the VPN Gateway. This name may be any name you find useful
for your management.
Region : The region into which the VPN Gateway will be deployed.
Gateway type : For the purpose of deploying a Site-to-Site VPN, you must select VPN .
VPN type : You may choose either Route-based or Policy-based depending on your VPN device. Route-
based VPNs support IKEv2, while policy-based VPNs only support IKEv1. To learn more about the two types
of VPN gateways, see About policy-based and route-based VPN gateways.
SKU : The SKU controls the number of allowed Site-to-Site tunnels and the desired performance of the VPN. To
select the appropriate SKU for your use case, consult the Gateway SKU listing. The SKU of the VPN Gateway
may be changed later if necessary.
Virtual network : The virtual network you created in the previous step.
Public IP address : The IP address of the VPN Gateway that will be exposed to the internet. You will likely need
to create a new IP address; however, you may also use an existing unused IP address if that is appropriate. If
you select Create new , a new IP address Azure resource will be created in the same resource group as the
VPN Gateway, and the Public IP address name will be the name of the newly created IP address. If you
select Use existing , you must select the existing unused IP address.
Enable active-active mode : Only select Enabled if you are creating an active-active gateway
configuration; otherwise, leave Disabled selected. To learn more about active-active mode, see Highly
available cross-premises and VNet-to-VNet connectivity.
Configure BGP ASN : Only select Enabled if your configuration specifically requires this setting. To learn
more about this setting, see About BGP with Azure VPN Gateway.
Select Review + create to create the VPN Gateway. A VPN Gateway may take up to 45 minutes to fully create
and deploy.
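A rough Azure PowerShell equivalent of these portal steps follows; the names are placeholders, and the sketch assumes a subnet named GatewaySubnet already exists in your virtual network:

# Placeholder values -- replace with your own.
$resourceGroupName = "<resource-group-name>"
$region = "<region>"
$gatewaySubnetId = "<gateway-subnet-resource-id>"
$vpnGatewayName = "<vpn-gateway-name>"

$publicIp = New-AzPublicIpAddress `
    -ResourceGroupName $resourceGroupName `
    -Name "$vpnGatewayName-PublicIP" `
    -Location $region `
    -AllocationMethod Dynamic

$ipConfig = New-AzVirtualNetworkGatewayIpConfig `
    -Name "gwIpConfig" `
    -SubnetId $gatewaySubnetId `
    -PublicIpAddressId $publicIp.Id

New-AzVirtualNetworkGateway `
    -ResourceGroupName $resourceGroupName `
    -Name $vpnGatewayName `
    -Location $region `
    -IpConfigurations $ipConfig `
    -GatewayType Vpn `
    -VpnType RouteBased `
    -GatewaySku VpnGw1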
Create a local network gateway for your on-premises gateway
A local network gateway is an Azure resource that represents your on-premises network appliance. In the table
of contents for the Azure portal, select Create a new resource and search for local network gateway. The local
network gateway is an Azure resource that will be deployed alongside your storage account, virtual network,
and VPN Gateway, but does not need to be in the same resource group or subscription as the storage account.
For the purposes of deploying the local network gateway resource, you must populate the following fields:
Name : The name of the Azure resource for the local network gateway. This name may be any name you find
useful for your management.
IP address : The public IP address of your local gateway on-premises.
Address space : The address ranges for the network this local network gateway represents. You can add
multiple address space ranges, but make sure that the ranges you specify here do not overlap with ranges of
other networks that you want to connect to.
Configure BGP settings : Only configure BGP settings if your configuration requires this setting. To learn
more about this setting, see About BGP with Azure VPN Gateway.
Subscription : The desired subscription. This does not need to match the subscription used for the VPN
Gateway or the storage account.
Resource group : The desired resource group. This does not need to match the resource group used for the
VPN Gateway or the storage account.
Location : The Azure Region the local network gateway resource should be created in. This should match the
region you selected for the VPN Gateway and the storage account.
Select Create to create the local network gateway resource.
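If you're scripting your deployment, the Azure PowerShell equivalent is roughly the following sketch; all values are placeholders for your environment:

New-AzLocalNetworkGateway `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<local-network-gateway-name>" `
    -Location "<region>" `
    -GatewayIpAddress "<on-premises-public-ip>" `
    -AddressPrefix "<on-premises-address-range>"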

Configure on-premises network appliance


The specific steps to configure your on-premises network appliance depend on the network appliance
your organization has selected. Depending on the device your organization has chosen, the list of tested devices
may link out to your device vendor's instructions for configuring it with Azure VPN Gateway.

Create the Site-to-Site connection


To complete the deployment of a S2S VPN, you must create a connection between your on-premises network
appliance (represented by the local network gateway resource) and the VPN Gateway. To do this, navigate to the
VPN Gateway you created above. In the table of contents for the VPN Gateway, select Connections , and click
Add . The resulting Add connection pane requires the following fields:
Name : The name of the connection. A VPN Gateway can host multiple connections, so pick a name helpful
for your management that will distinguish this particular connection.
Connection type : Since this is a S2S connection, select Site-to-site (IPSec) in the drop-down list.
Virtual network gateway : This field is auto-selected to the VPN Gateway you're making the connection to
and can't be changed.
Local network gateway : This is the local network gateway you want to connect to your VPN Gateway. The
resulting selection pane should have the name of the local network gateway you created above.
Shared key (PSK) : A mixture of letters and numbers, used to establish encryption for the connection. The
same shared key must be used in both the virtual network and local network gateways. If your gateway
device doesn't provide one, you can make one up here and provide it to your device.
Select OK to create the connection. You can verify the connection has been made successfully through the
Connections page.
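A rough Azure PowerShell equivalent of this step is sketched below; the gateway names, region, and shared key are placeholders:

$vpnGateway = Get-AzVirtualNetworkGateway `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<vpn-gateway-name>"
$localGateway = Get-AzLocalNetworkGateway `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<local-network-gateway-name>"

New-AzVirtualNetworkGatewayConnection `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<connection-name>" `
    -Location "<region>" `
    -VirtualNetworkGateway1 $vpnGateway `
    -LocalNetworkGateway2 $localGateway `
    -ConnectionType IPsec `
    -SharedKey "<shared-key>"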

Mount Azure file share


The final step in configuring a S2S VPN is verifying that it works for Azure Files. You can do this by mounting
your Azure file share on-premises with your preferred OS. See the instructions to mount by OS here:
Windows
macOS
Linux (NFS)
Linux (SMB)
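For example, from an on-premises Windows machine, a quick verification over the tunnel might look like the following sketch; the storage account, share name, and key are placeholders, and the FQDN should resolve to your private endpoint across the VPN:

# Verify that port 445 is reachable over the tunnel, then mount the share.
$connectTest = Test-NetConnection -ComputerName "<storage-account-name>.file.core.windows.net" -Port 445
if ($connectTest.TcpTestSucceeded) {
    net use Z: "\\<storage-account-name>.file.core.windows.net\<share-name>" /user:"AZURE\<storage-account-name>" "<storage-account-key>"
}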

See also
Azure Files networking overview
Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Windows
for use with Azure Files

You can use a Point-to-Site (P2S) VPN connection to mount your Azure file shares over SMB from outside of
Azure, without opening up port 445. A Point-to-Site VPN connection is a VPN connection between Azure and an
individual client. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be configured
for each client that wants to connect. If you have many clients that need to connect to your Azure file shares
from your on-premises network, you can use a Site-to-Site (S2S) VPN connection instead of a Point-to-Site
connection for each client. To learn more, see Configure a Site-to-Site VPN for use with Azure Files.
We strongly recommend that you read Networking considerations for direct Azure file share access before
continuing with this how-to article for a complete discussion of the networking options available for Azure Files.
This article details the steps to configure a Point-to-Site VPN on Windows (Windows client and Windows Server)
to mount Azure file shares directly on-premises. If you're looking to route Azure File Sync traffic over a VPN,
please see configuring Azure File Sync proxy and firewall settings.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Prerequisites
The most recent version of the Azure PowerShell module. For more information on how to install
Azure PowerShell, see Install the Azure PowerShell module and select your operating system. If you
prefer to use the Azure CLI on Windows, you may; however, the instructions below are presented for
Azure PowerShell.
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A virtual network with a private endpoint for the storage account containing the Azure file share you
want to mount on-premises. To learn more about how to create a private endpoint, see Configuring Azure
Files network endpoints.

Collect environment information


In order to set up the point-to-site VPN, we first need to collect some information about your environment for
use throughout the guide. See the prerequisites section if you have not already created a storage account,
virtual network, and/or private endpoints.
Remember to replace <resource-group> , <vnet-name> , <subnet-name> , and <storage-account-name> with the
appropriate values for your environment.

$resourceGroupName = "<resource-group-name>"
$virtualNetworkName = "<vnet-name>"
$subnetName = "<subnet-name>"
$storageAccountName = "<storage-account-name>"

$virtualNetwork = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroupName `
-Name $virtualNetworkName

$subnetId = $virtualNetwork | `
    Select-Object -ExpandProperty Subnets | `
    Where-Object { $_.Name -eq $subnetName } | `
    Select-Object -ExpandProperty Id

$storageAccount = Get-AzStorageAccount `
-ResourceGroupName $resourceGroupName `
-Name $storageAccountName

$privateEndpoint = Get-AzPrivateEndpoint | `
Where-Object {
$subnets = $_ | `
Select-Object -ExpandProperty Subnet | `
Where-Object { $_.Id -eq $subnetId }

$connections = $_ | `
Select-Object -ExpandProperty PrivateLinkServiceConnections | `
Where-Object { $_.PrivateLinkServiceId -eq $storageAccount.Id }

$null -ne $subnets -and $null -ne $connections


} | `
Select-Object -First 1

Create root certificate for VPN authentication


In order for VPN connections from your on-premises Windows machines to be authenticated to access your
virtual network, you must create two certificates: a root certificate, which will be provided to the virtual network
gateway, and a client certificate, which will be signed with the root certificate. The following PowerShell creates
the root certificate; the client certificate will be created after the Azure virtual network gateway is created, using
information from the gateway.
$rootcertname = "CN=P2SRootCert"
$certLocation = "Cert:\CurrentUser\My"
$vpnTemp = "C:\vpn-temp\"
$exportedencodedrootcertpath = $vpnTemp + "P2SRootCertencoded.cer"
$exportedrootcertpath = $vpnTemp + "P2SRootCert.cer"

if (-Not (Test-Path $vpnTemp)) {
    New-Item -ItemType Directory -Force -Path $vpnTemp | Out-Null
}

if ($PSVersionTable.PSVersion -ge [System.Version]::new(6, 0)) {
    Install-Module WindowsCompatibility
    Import-WinModule PKI
}

$rootcert = New-SelfSignedCertificate `
-Type Custom `
-KeySpec Signature `
-Subject $rootcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-KeyUsageProperty Sign `
-KeyUsage CertSign

Export-Certificate `
-Cert $rootcert `
-FilePath $exportedencodedrootcertpath `
-NoClobber | Out-Null

certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null

$rawRootCertificate = Get-Content -Path $exportedrootcertpath

[System.String]$rootCertificate = ""
foreach($line in $rawRootCertificate) {
if ($line -notlike "*Certificate*") {
$rootCertificate += $line
}
}

Deploy virtual network gateway


The Azure virtual network gateway is the service that your on-premises Windows machines will connect to.
Deploying this service requires two basic components: a public IP that will identify the gateway to your clients
wherever they are in the world and a root certificate you created earlier which will be used to authenticate your
clients.
Remember to replace <desired-vpn-name-here> with the name you would like for these resources.

NOTE
Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this
PowerShell script will block for the deployment to be completed. This is expected.
$vpnName = "<desired-vpn-name-here>"
$publicIpAddressName = "$vpnName-PublicIP"

# $region and $gatewaySubnet aren't defined earlier in this guide. The following assumes the
# gateway is deployed to the virtual network's region and to a subnet named "GatewaySubnet"
# that already exists in the virtual network; adjust as needed for your environment.
$region = $virtualNetwork.Location
$gatewaySubnet = $virtualNetwork | `
    Select-Object -ExpandProperty Subnets | `
    Where-Object { $_.Name -eq "GatewaySubnet" }

$publicIPAddress = New-AzPublicIpAddress `
-ResourceGroupName $resourceGroupName `
-Name $publicIpAddressName `
-Location $region `
-Sku Basic `
-AllocationMethod Dynamic

$gatewayIpConfig = New-AzVirtualNetworkGatewayIpConfig `
-Name "vnetGatewayConfig" `
-SubnetId $gatewaySubnet.Id `
-PublicIpAddressId $publicIPAddress.Id

$azRootCertificate = New-AzVpnClientRootCertificate `
-Name "P2SRootCert" `
-PublicCertData $rootCertificate

$vpn = New-AzVirtualNetworkGateway `
-ResourceGroupName $resourceGroupName `
-Name $vpnName `
-Location $region `
-GatewaySku VpnGw1 `
-GatewayType Vpn `
-VpnType RouteBased `
-IpConfigurations $gatewayIpConfig `
-VpnClientAddressPool "172.16.201.0/24" `
-VpnClientProtocol IkeV2 `
-VpnClientRootCertificates $azRootCertificate

Create client certificate


The client certificate is created with the URI of the virtual network gateway. This certificate is signed with the
root certificate you created earlier.
$clientcertpassword = "1234"

$vpnClientConfiguration = New-AzVpnClientConfiguration `
-ResourceGroupName $resourceGroupName `
-Name $vpnName `
-AuthenticationMethod EAPTLS

Invoke-WebRequest `
-Uri $vpnClientConfiguration.VpnProfileSASUrl `
-OutFile "$vpnTemp\vpnclientconfiguration.zip"

Expand-Archive `
-Path "$vpnTemp\vpnclientconfiguration.zip" `
-DestinationPath "$vpnTemp\vpnclientconfiguration"

$vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"
$vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile

$exportedclientcertpath = $vpnTemp + "P2SClientCert.pfx"


$clientcertname = "CN=" + $vpnProfile.VpnServer

$clientcert = New-SelfSignedCertificate `
-Type Custom `
-DnsName $vpnProfile.VpnServer `
-KeySpec Signature `
-Subject $clientcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-Signer $rootcert `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

$mypwd = ConvertTo-SecureString -String $clientcertpassword -Force -AsPlainText

Export-PfxCertificate `
-FilePath $exportedclientcertpath `
-Password $mypwd `
-Cert $clientcert | Out-Null

Configure the VPN client


The Azure virtual network gateway will create a downloadable package with configuration files required to
initialize the VPN connection on your on-premises Windows machine. We will configure the VPN connection
using the Always On VPN feature of Windows 10/Windows Server 2016+. This package also contains
executable packages which will configure the legacy Windows VPN client, if so desired. This guide uses Always
On VPN rather than the legacy Windows VPN client as the Always On VPN client allows end-users to
connect/disconnect from the Azure VPN without having administrator permissions to their machine.
The following script will install the client certificate required for authentication against the virtual network
gateway, then download and install the VPN package. Remember to replace <computer1> and <computer2> with the
desired computers. You can run this script on as many machines as you desire by adding more PowerShell
sessions to the $sessions array. Your user account must be an administrator on each of these machines. If one of
these machines is the local machine you are running the script from, you must run the script from an elevated
PowerShell session.

$sessions = [System.Management.Automation.Runspaces.PSSession[]]@()
$sessions += New-PSSession -ComputerName "<computer1>"
$sessions += New-PSSession -ComputerName "<computer2>"

foreach ($session in $sessions) {
    Invoke-Command -Session $session -ArgumentList $vpnTemp -ScriptBlock {
        $vpnTemp = $args[0]
        if (-Not (Test-Path $vpnTemp)) {
            New-Item `
                -ItemType Directory `
                -Force `
                -Path "C:\vpn-temp" | Out-Null
        }
    }

Copy-Item `
-Path $exportedclientcertpath, $exportedrootcertpath, "$vpnTemp\vpnclientconfiguration.zip" `
-Destination $vpnTemp `
-ToSession $session

Invoke-Command `
-Session $session `
-ArgumentList `
$mypwd, `
$vpnTemp, `
$virtualNetworkName `
-ScriptBlock {
$mypwd = $args[0]
$vpnTemp = $args[1]
$virtualNetworkName = $args[2]

Import-PfxCertificate `
-Exportable `
-Password $mypwd `
-CertStoreLocation "Cert:\LocalMachine\My" `
-FilePath "$vpnTemp\P2SClientCert.pfx" | Out-Null

Import-Certificate `
-FilePath "$vpnTemp\P2SRootCert.cer" `
-CertStoreLocation "Cert:\LocalMachine\Root" | Out-Null

Expand-Archive `
-Path "$vpnTemp\vpnclientconfiguration.zip" `
-DestinationPath "$vpnTemp\vpnclientconfiguration"
$vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"

$vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile

Add-VpnConnection `
-Name $virtualNetworkName `
-ServerAddress $vpnProfile.VpnServer `
-TunnelType Ikev2 `
-EncryptionLevel Required `
-AuthenticationMethod MachineCertificate `
-SplitTunneling `
-AllUserConnection

Add-VpnConnectionRoute `
-Name $virtualNetworkName `
-DestinationPrefix $vpnProfile.Routes `
-AllUserConnection

Add-VpnConnectionRoute `
-Name $virtualNetworkName `
-DestinationPrefix $vpnProfile.VpnClientAddressPool `
-AllUserConnection

rasdial $virtualNetworkName
}
}

Remove-Item -Path $vpnTemp -Recurse


Mount Azure file share
Now that you have set up your Point-to-Site VPN, you can use it to mount the Azure file share on the computers
you set up via PowerShell. The following example will mount the share, list the root directory of the share to
prove the share is actually mounted, and then unmount the share. Unfortunately, it is not possible to mount the
share persistently over PowerShell remoting. To mount persistently, see Use an Azure file share with Windows.

$myShareToMount = "<file-share>"

$storageAccountKeys = Get-AzStorageAccountKey `
-ResourceGroupName $resourceGroupName `
-Name $storageAccountName
$storageAccountKey = ConvertTo-SecureString `
-String $storageAccountKeys[0].Value `
-AsPlainText `
-Force

$nic = Get-AzNetworkInterface -ResourceId $privateEndpoint.NetworkInterfaces[0].Id


$storageAccountPrivateIP = $nic.IpConfigurations[0].PrivateIpAddress

Invoke-Command `
-Session $sessions `
-ArgumentList `
$storageAccountName, `
$storageAccountKey, `
$storageAccountPrivateIP, `
$myShareToMount `
-ScriptBlock {
$storageAccountName = $args[0]
$storageAccountKey = $args[1]
$storageAccountPrivateIP = $args[2]
$myShareToMount = $args[3]

$credential = [System.Management.Automation.PSCredential]::new(
"AZURE\$storageAccountName",
$storageAccountKey)

New-PSDrive `
-Name Z `
-PSProvider FileSystem `
-Root "\\$storageAccountPrivateIP\$myShareToMount" `
-Credential $credential `
-Persist | Out-Null
Get-ChildItem -Path Z:\
Remove-PSDrive -Name Z
}

Rotate VPN Root Certificate


If a root certificate needs to be rotated due to expiration or new requirements, you can add a new root certificate
to the existing virtual network gateway without redeploying the virtual network gateway. After the
root certificate is added using the following sample script, you will need to re-create the VPN client certificate.
Replace <resource-group-name> , <desired-vpn-name-here> , and <new-root-cert-name> with your own values, then
run the script.
#Creating the new Root Certificate
$ResourceGroupName = "<resource-group-name>"
$vpnName = "<desired-vpn-name-here>"
$NewRootCertName = "<new-root-cert-name>"

$rootcertname = "CN=$NewRootCertName"
$certLocation = "Cert:\CurrentUser\My"
$date = get-date -Format "MM_yyyy"
$vpnTemp = "C:\vpn-temp_$date\"
$exportedencodedrootcertpath = $vpnTemp + "P2SRootCertencoded.cer"
$exportedrootcertpath = $vpnTemp + "P2SRootCert.cer"

if (-Not (Test-Path $vpnTemp)) {
    New-Item -ItemType Directory -Force -Path $vpnTemp | Out-Null
}

$rootcert = New-SelfSignedCertificate `
-Type Custom `
-KeySpec Signature `
-Subject $rootcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-KeyUsageProperty Sign `
-KeyUsage CertSign

Export-Certificate `
-Cert $rootcert `
-FilePath $exportedencodedrootcertpath `
-NoClobber | Out-Null

certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null

$rawRootCertificate = Get-Content -Path $exportedrootcertpath

[System.String]$rootCertificate = ""
foreach($line in $rawRootCertificate) {
if ($line -notlike "*Certificate*") {
$rootCertificate += $line
}
}

#Fetching gateway details and adding the newly created Root Certificate.
$gateway = Get-AzVirtualNetworkGateway -Name $vpnName -ResourceGroupName $ResourceGroupName

Add-AzVpnClientRootCertificate `
    -PublicCertData $rootCertificate `
    -ResourceGroupName $ResourceGroupName `
    -VirtualNetworkGatewayName $gateway.Name `
    -VpnClientRootCertificateName $NewRootCertName

See also
Networking considerations for direct Azure file share access
Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files
Configure a Site-to-Site (S2S) VPN for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Linux for
use with Azure Files

You can use a Point-to-Site (P2S) VPN connection to mount your Azure file shares from outside of Azure,
without sending data over the open internet. A Point-to-Site VPN connection is a VPN connection between Azure
and an individual client. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be
configured for each client that wants to connect. If you have many clients that need to connect to your Azure file
shares from your on-premises network, you can use a Site-to-Site (S2S) VPN connection instead of a Point-to-
Site connection for each client. To learn more, see Configure a Site-to-Site VPN for use with Azure Files.
We strongly recommend that you read Azure Files networking overview before continuing with this how-to
article for a complete discussion of the networking options available for Azure Files.
This article details the steps to configure a Point-to-Site VPN on Linux to mount Azure file shares directly on-
premises.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Prerequisites
The most recent version of the Azure CLI. For more information on how to install the Azure CLI, see Install
the Azure CLI and select your operating system. If you prefer to use the Azure PowerShell
module on Linux, you may; however, the instructions below are presented for the Azure CLI.
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A private endpoint for the storage account containing the Azure file share you want to mount on-
premises. To learn more about how to create a private endpoint, see Configuring Azure Files network
endpoints.

Install required software


The Azure virtual network gateway can provide VPN connections using several VPN protocols, including IPsec
and OpenVPN. This guide shows how to use IPsec and uses the strongSwan package to provide the support on
Linux.

Verified with Ubuntu 18.10.


sudo apt update
sudo apt install strongswan strongswan-pki libstrongswan-extra-plugins curl libxml2-utils cifs-utils unzip

installDir="/etc/"

Deploy a virtual network


To access your Azure file share and other Azure resources from on-premises via a Point-to-Site VPN, you must
create a virtual network, or VNet. The P2S VPN connection you will automatically create is a bridge between
your on-premises Linux machine and this Azure virtual network.
The following script will create an Azure virtual network with three subnets:
One for your storage account's service endpoint.
One for your storage account's private endpoint, which is required to access the storage account on-premises
without creating custom routing for the public IP of the storage account, which may change.
One for your virtual network gateway that provides the VPN service.
Remember to replace <region> , <resource-group> , and <desired-vnet-name> with the appropriate values for
your environment.

region="<region>"
resourceGroupName="<resource-group>"
virtualNetworkName="<desired-vnet-name>"

virtualNetwork=$(az network vnet create \
    --resource-group $resourceGroupName \
    --name $virtualNetworkName \
    --location $region \
    --address-prefixes "192.168.0.0/16" \
    --query "newVNet.id" | tr -d '"')

serviceEndpointSubnet=$(az network vnet subnet create \
    --resource-group $resourceGroupName \
    --vnet-name $virtualNetworkName \
    --name "ServiceEndpointSubnet" \
    --address-prefixes "192.168.0.0/24" \
    --service-endpoints "Microsoft.Storage" \
    --query "id" | tr -d '"')

privateEndpointSubnet=$(az network vnet subnet create \
    --resource-group $resourceGroupName \
    --vnet-name $virtualNetworkName \
    --name "PrivateEndpointSubnet" \
    --address-prefixes "192.168.1.0/24" \
    --query "id" | tr -d '"')

gatewaySubnet=$(az network vnet subnet create \
    --resource-group $resourceGroupName \
    --vnet-name $virtualNetworkName \
    --name "GatewaySubnet" \
    --address-prefixes "192.168.2.0/24" \
    --query "id" | tr -d '"')

Create certificates for VPN authentication


In order for VPN connections from your on-premises Linux machines to be authenticated to access your virtual
network, you must create two certificates: a root certificate, which will be provided to the virtual network
gateway, and a client certificate, which will be signed with the root certificate. The following script creates the
required certificates.
rootCertName="P2SRootCert"
username="client"
password="1234"

mkdir temp
cd temp

sudo ipsec pki --gen --outform pem > rootKey.pem


sudo ipsec pki --self --in rootKey.pem --dn "CN=$rootCertName" --ca --outform pem > rootCert.pem

rootCertificate=$(openssl x509 -in rootCert.pem -outform der | base64 -w0 ; echo)

sudo ipsec pki --gen --size 4096 --outform pem > "clientKey.pem"
sudo ipsec pki --pub --in "clientKey.pem" | \
sudo ipsec pki \
--issue \
--cacert rootCert.pem \
--cakey rootKey.pem \
--dn "CN=$username" \
--san $username \
--flag clientAuth \
--outform pem > "clientCert.pem"

openssl pkcs12 -in "clientCert.pem" -inkey "clientKey.pem" -certfile rootCert.pem -export -out "client.p12" -password "pass:$password"

Deploy virtual network gateway


The Azure virtual network gateway is the service that your on-premises Linux machines will connect to.
Deploying this service requires two basic components: a public IP that will identify the gateway to your clients
wherever they are in the world and a root certificate you created earlier that will be used to authenticate your
clients.
Remember to replace <desired-vpn-name-here> with the name you would like for these resources.

NOTE
Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this bash
script will block until the deployment is complete.
P2S IKEv2/OpenVPN connections are not supported with the Basic SKU. This script uses the VpnGw1 SKU for the virtual
network gateway, accordingly.
vpnName="<desired-vpn-name-here>"
publicIpAddressName="$vpnName-PublicIP"

publicIpAddress=$(az network public-ip create \
    --resource-group $resourceGroupName \
    --name $publicIpAddressName \
    --location $region \
    --sku "Basic" \
    --allocation-method "Dynamic" \
    --query "publicIp.id" | tr -d '"')

az network vnet-gateway create \
    --resource-group $resourceGroupName \
    --name $vpnName \
    --vnet $virtualNetworkName \
    --public-ip-addresses $publicIpAddress \
    --location $region \
    --sku "VpnGw1" \
    --gateway-type "Vpn" \
    --vpn-type "RouteBased" \
    --address-prefixes "172.16.201.0/24" \
    --client-protocol "IkeV2" > /dev/null

az network vnet-gateway root-cert create \
    --resource-group $resourceGroupName \
    --gateway-name $vpnName \
    --name $rootCertName \
    --public-cert-data $rootCertificate \
    --output none

Configure the VPN client


The Azure virtual network gateway will create a downloadable package with configuration files required to
initialize the VPN connection on your on-premises Linux machine. The following script will place the certificates
you created in the correct spot and configure the ipsec.conf file with the correct values from the configuration
file in the downloadable package.
vpnClient=$(az network vnet-gateway vpn-client generate \
--resource-group $resourceGroupName \
--name $vpnName \
--authentication-method EAPTLS | tr -d '"')

curl $vpnClient --output vpnClient.zip


unzip vpnClient.zip

vpnServer=$(xmllint --xpath "string(/VpnProfile/VpnServer)" Generic/VpnSettings.xml)
vpnType=$(xmllint --xpath "string(/VpnProfile/VpnType)" Generic/VpnSettings.xml | tr '[:upper:]' '[:lower:]')
routes=$(xmllint --xpath "string(/VpnProfile/Routes)" Generic/VpnSettings.xml)

sudo cp "${installDir}ipsec.conf" "${installDir}ipsec.conf.backup"


sudo cp "Generic/VpnServerRoot.cer" "${installDir}ipsec.d/cacerts"
sudo cp "${username}.p12" "${installDir}ipsec.d/private"

echo -e "\nconn $virtualNetworkName" | sudo tee -a "${installDir}ipsec.conf" > /dev/null


echo -e "\tkeyexchange=$vpnType" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\ttype=tunnel" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tleftfirewall=yes" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tleft=%any" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tleftauth=eap-tls" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tleftid=%client" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tright=$vpnServer" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\trightid=%$vpnServer" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\trightsubnet=$routes" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tleftsourceip=%config" | sudo tee -a "${installDir}ipsec.conf" > /dev/null
echo -e "\tauto=add" | sudo tee -a "${installDir}ipsec.conf" > /dev/null

echo ": P12 client.p12 '$password'" | sudo tee -a "${installDir}ipsec.secrets" > /dev/null

sudo ipsec restart


sudo ipsec up $virtualNetworkName

Mount Azure file share


Now that you have set up your Point-to-Site VPN, you can mount your Azure file share. The following example
will mount the share non-persistently. To mount persistently, see Mount SMB file shares to Linux or Mount NFS
file share to Linux.
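A minimal sketch follows, assuming a storage account and share whose placeholder names you replace, and that cifs-utils was installed earlier in this guide; the FQDN resolves to your private endpoint through the VPN:

# Placeholder values -- replace with your own.
resourceGroupName="<resource-group>"
storageAccountName="<storage-account-name>"
fileShareName="<file-share-name>"

# Retrieve a storage account key to use as the mount password.
storageAccountKey=$(az storage account keys list \
    --resource-group $resourceGroupName \
    --account-name $storageAccountName \
    --query "[0].value" | tr -d '"')

# Mount the share over SMB 3.0 (non-persistent; the mount disappears on reboot).
sudo mkdir -p "/mnt/$fileShareName"
sudo mount -t cifs \
    "//$storageAccountName.file.core.windows.net/$fileShareName" \
    "/mnt/$fileShareName" \
    -o "vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino"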

See also
Azure Files networking overview
Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files
Configure a Site-to-Site (S2S) VPN for use with Azure Files
Overview - on-premises Active Directory Domain
Services authentication over SMB for Azure file
shares

Azure Files supports identity-based authentication over Server Message Block (SMB) through two types of
Domain Services: on-premises Active Directory Domain Services (AD DS) and Azure Active Directory Domain
Services (Azure AD DS). We strongly recommend you review the How it works section to select the right
domain service for authentication. The setup is different depending on the domain service you choose. This
series of articles focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
If you are new to Azure file shares, we recommend reading our planning guide before reading the following
series of articles.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Supported scenarios and restrictions


AD DS Identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a
default share-level permission. Password hash synchronization is optional.
Supports Azure file shares managed by Azure File Sync.
Supports Kerberos authentication with AD with AES 256 encryption (recommended) and RC4-HMAC. AES
128 Kerberos encryption is not yet supported.
Supports single sign-on experience.
Only supported on clients running on OS versions newer than Windows 7 or Windows Server 2008 R2.
Only supported against the AD forest that the storage account is registered to. You can only access Azure file
shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share
from a different forest, make sure that you have the proper forest trust configured, see the FAQ for details.
Does not support authentication against computer accounts created in AD DS.
Does not support authentication against Network File System (NFS) file shares.
When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file
shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted
either on on-premises machines or in Azure.

Videos
To help you set up Azure Files AD authentication for some common use cases, we published two videos with
step-by-step guidance for the following scenarios:

Replacing on-premises file servers with Azure Files (including setup on private link for Files and AD authentication)
Using Azure Files as the profile container for Azure Virtual Desktop (including setup on AD authentication and FSLogix configuration)

Prerequisites
Before you enable AD DS authentication for Azure file shares, make sure you have completed the following
prerequisites:
Select or create your AD DS environment and sync it to Azure AD with Azure AD Connect.
You can enable the feature on a new or existing on-premises AD DS environment. Identities used for
access must be synced to Azure AD or use a default share-level permission. The Azure AD tenant and the
file share that you are accessing must be associated with the same subscription.
Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how
to domain-join, refer to Join a Computer to a Domain.
If your machine is not domain joined to an AD DS, you may still be able to leverage AD credentials for
authentication if your machine has line of sight of the AD domain controller.
Select or create an Azure storage account. For optimal performance, we recommend that you deploy the
storage account in the same region as the client from which you plan to access the share. Then, mount
the Azure file share with your storage account key. Mounting with the storage account key verifies
connectivity.
Make sure that the storage account containing your file shares is not already configured for Azure AD DS
Authentication. If Azure Files Azure AD DS authentication is enabled on the storage account, it needs to be
disabled before changing to use on-premises AD DS. This implies that existing ACLs configured in Azure
AD DS environment will need to be reconfigured for proper permission enforcement.
If you experience issues in connecting to Azure Files, refer to the troubleshooting tool we published for
Azure Files mounting errors on Windows.
Make any relevant networking configuration prior to enabling and configuring AD DS authentication to
your Azure file shares. See Azure Files networking considerations for more information.

Regional availability
Azure Files authentication with AD DS is available in all Azure Public, China and Gov regions.

Overview
If you plan to enable any networking configurations on your file share, we recommend you read the
networking considerations article and complete the related configuration before enabling AD DS authentication.
Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares
with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow
granular access control. Doing this requires syncing identities from on-premises AD DS to Azure AD with Azure AD
Connect. You control share-level access with identities synced to Azure AD while managing file/directory-level
access with on-premises AD DS credentials.
Next, follow the steps below to set up Azure Files for AD DS Authentication:
1. Part one: enable AD DS authentication on your storage account
2. Part two: assign access permissions for a share to the Azure AD identity (a user, group, or service
principal) that is in sync with the target AD identity
3. Part three: configure Windows ACLs over SMB for directories and files
4. Part four: mount an Azure file share to a VM joined to your AD DS
5. Update the password of your storage account identity in AD DS
The following diagram illustrates the end-to-end workflow for enabling Azure AD authentication over SMB for
Azure file shares.

Identities used to access Azure file shares must be synced to Azure AD to enforce share level file permissions
through the Azure role-based access control (Azure RBAC) model. Alternatively, you can use a default share-
level permission. Windows-style DACLs on files/directories carried over from existing file servers will be
preserved and enforced. This offers seamless integration with your enterprise AD DS environment. As you
replace on-prem file servers with Azure file shares, existing users can access Azure file shares from their current
clients with a single sign-on experience, without any change to the credentials in use.

Next steps
To enable on-premises AD DS authentication for your Azure file share, continue to the next article:
Part one: enable AD DS authentication for your account
Part one: enable AD DS authentication for your
Azure file shares

This article describes the process for enabling Active Directory Domain Services (AD DS) authentication on your
storage account. After enabling the feature, you must configure both your storage account and your AD DS to use
AD DS credentials for authenticating to your Azure file share.

IMPORTANT
Before you enable AD DS authentication, make sure you understand the supported scenarios and requirements in the
overview article and complete the necessary prerequisites.

To enable AD DS authentication over SMB for Azure file shares, you need to register your storage account with
AD DS and then set the required domain properties on the storage account. To register your storage account
with AD DS, create an account representing it in your AD DS. You can think of this process as creating an account
that represents an on-premises Windows file server in your AD DS. When the feature is
enabled on the storage account, it applies to all new and existing file shares in the account.

Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Option one (recommended): Use AzFilesHybrid PowerShell module


The cmdlets in the AzFilesHybrid PowerShell module make the necessary modifications and enable the feature
for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do,
so you can determine if the changes align with your compliance and security policies, and ensure you have the
proper permissions to execute the cmdlets. Though we recommend using the AzFilesHybrid module, if you are
unable to do so, we provide the steps so that you may perform them manually.
Download AzFilesHybrid module
If you don't have .NET Framework 4.7.2 installed, install it now. It is required for the module to import
successfully.
Download and unzip the AzFilesHybrid module (GA module: v0.2.0+). Note that AES-256 Kerberos encryption
is supported on v0.2.2 or above. If you have enabled the feature with an AzFilesHybrid version below v0.2.2
and want to update to support AES-256 Kerberos encryption, please refer to this article.
Install and execute the module in a device that is domain joined to on-premises AD DS with AD DS
credentials that have permissions to create a service logon account or a computer account in the target AD.
Run the script using an on-premises AD DS credential that is synced to your Azure AD. The on-premises AD
DS credential must have either Owner or Contributor Azure role on the storage account.
Run Join-AzStorageAccount
The Join-AzStorageAccount cmdlet performs the equivalent of an offline domain join on behalf of the specified
storage account. The script uses the cmdlet to create a computer account in your AD domain. If for whatever
reason you cannot use a computer account, you can alter the script to create a service logon account instead. If
you choose to run the command manually, you should select the account best suited for your environment.
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under
an organizational unit (OU) that enforces password expiration, you must update the password before the
maximum password age. Failing to update the account password before that date results in authentication
failures when accessing Azure file shares. To learn how to update the password, see Update AD DS account
password.
Replace the placeholder values with your own in the parameters below before executing it in PowerShell.

IMPORTANT
The domain join cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to
register as a computer account or service logon account, see FAQ for details. For computer accounts, there is a default
password expiration age set in AD at 30 days. Similarly, the service logon account may have a default password expiration
age set on the AD domain or Organizational Unit (OU). For both account types, we recommend you check the password
expiration age configured in your AD environment and plan to update the password of your storage account identity of
the AD account before the maximum password age. You can consider creating a new AD Organizational Unit (OU) in AD
and disabling password expiration policy on computer accounts or service logon accounts accordingly.

# Change the execution policy to unblock importing AzFilesHybrid.psm1 module


Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

# Navigate to where AzFilesHybrid is unzipped and stored and run to copy the files into your path
.\CopyToPSPath.ps1

# Import AzFilesHybrid module


Import-Module -Name AzFilesHybrid

# Login with an Azure AD credential that has either storage account owner or contributor Azure role assignment
# If you are logging into an Azure environment other than Public (ex. AzureUSGovernment) you will need to specify that.
# See https://docs.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps for more information.
Connect-AzAccount

# Define parameters
# $StorageAccountName is the name of an existing storage account that you want to join to AD
# $SamAccountName is an AD object, see https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname for more information.
# If you want to use AES256 encryption (recommended), except for the trailing '$', the storage account name must be the same as the computer object's SamAccountName.
$SubscriptionId = "<your-subscription-id-here>"
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"
$SamAccountName = "<sam-account-name-here>"
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as ComputerAccount
# If you don't provide the OU name as an input parameter, the AD identity that represents the storage account is created under the root directory.
$OuDistinguishedName = "<ou-distinguishedname-here>"
# Specify the encryption algorithm used for Kerberos authentication. Using AES256 is recommended.
$EncryptionType = "<AES256|RC4|AES256,RC4>"

# Select the target subscription for the current session
Select-AzSubscription -SubscriptionId $SubscriptionId

# Register the target storage account with your active directory environment under the target OU (for example: specify the OU with Name as "UserAccounts" or DistinguishedName as "OU=UserAccounts,DC=CONTOSO,DC=COM").
# You can use this PowerShell cmdlet: Get-ADOrganizationalUnit to find the Name and DistinguishedName of your target OU. If you are using the OU Name, specify it with -OrganizationalUnitName as shown below. If you are using the OU DistinguishedName, you can set it with -OrganizationalUnitDistinguishedName. You can choose to provide one of the two names to specify the target OU.
# You can choose to create the identity that represents the storage account as either a Service Logon Account or Computer Account (default parameter value), depending on the AD permission you have and your preference.
# Run Get-Help Join-AzStorageAccount for more details on this cmdlet.

Join-AzStorageAccount `
-ResourceGroupName $ResourceGroupName `
-StorageAccountName $StorageAccountName `
-SamAccountName $SamAccountName `
-DomainAccountType $DomainAccountType `
-OrganizationalUnitDistinguishedName $OuDistinguishedName `
-EncryptionType $EncryptionType

#Run the command below to enable AES256 encryption. If you plan to use RC4, you can skip this step.
Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName

#You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more details on the checks performed in this cmdlet, see the Azure Files Windows troubleshooting guide.
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose

Option two: Manually perform the enablement actions


If you have already executed the Join-AzStorageAccount script above successfully, go to the Confirm the feature
is enabled section. You don't need to perform the following manual steps.
Check the environment
First, you must check the state of your environment. Specifically, you must check if Active Directory PowerShell
is installed, and if the shell is being executed with administrator privileges. Then check to see if the Az.Storage
2.0 module (or newer) is installed, and install it if it isn't. After completing those checks, check your AD DS to see
if there is either a computer account (default) or service logon account that has already been created with
SPN/UPN as "cifs/your-storage-account-name-here.file.core.windows.net". If the account doesn't exist, create
one as described in the following section.
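A hedged sketch of these checks follows; the storage account name is a placeholder, and the commands assume the Active Directory PowerShell module (part of RSAT) and PowerShellGet are available:

# Confirm the Active Directory PowerShell module can be imported.
Import-Module ActiveDirectory

# Confirm the installed Az.Storage module is version 2.0 or newer.
Get-InstalledModule -Name Az.Storage

# Check whether an AD object already exists with the storage account's SPN.
Get-ADObject -Filter 'servicePrincipalName -eq "cifs/<storage-account-name>.file.core.windows.net"'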
Create an identity representing the storage account in your AD manually
To create this account manually, create a new Kerberos key for your storage account. Then, use that Kerberos key
as the password for your account with the PowerShell cmdlets below. This key is only used during setup and
cannot be used for any control or data plane operations against the storage account.

# Create the Kerberos key on the storage account and get the kerb1 key as the password for the AD identity to represent the storage account
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"

New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName kerb1


Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -ListKerbKey |
where-object{$_.Keyname -contains "kerb1"}

Once you have that key, create either a service or computer account under your OU. Use the following specification (remember to replace the example text with your storage account name):
SPN: "cifs/your-storage-account-name-here.file.core.windows.net"
Password: Kerberos key for your storage account.
If your OU enforces password expiration, you must update the password before the maximum password age to prevent authentication failures when accessing Azure file shares. See Update the password of your storage account identity in AD for details.
Keep the SID of the newly created identity; you'll need it for the next step. The identity you've created that represents the storage account doesn't need to be synced to Azure AD.
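As an illustration, the following is a minimal sketch of creating the computer account with the Active Directory PowerShell module; the OU path is a placeholder for your own OU, and $KerbKey is assumed to hold the kerb1 key value retrieved above.

# A minimal sketch, assuming the Active Directory PowerShell module is installed and
# $KerbKey holds the kerb1 key value retrieved above. The OU path is a placeholder.
$SecureKerbKey = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
New-ADComputer `
    -Name "<storage-account-name-here>" `
    -Path "OU=UserAccounts,DC=CONTOSO,DC=COM" `
    -ServicePrincipalNames "cifs/<storage-account-name-here>.file.core.windows.net" `
    -AccountPassword $SecureKerbKey `
    -Enabled $true
# The SID needed in the next step can then be read with:
(Get-ADComputer -Identity "<storage-account-name-here>").SID.Value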
Enable the feature on your storage account
Modify the following command to include configuration details for the domain properties, then run it to enable the feature. The storage account SID required in the command is the SID of the identity you created in your AD DS in the previous section.

# Set the feature flag on the target storage account and provide the required AD domain information
Set-AzStorageAccount `
-ResourceGroupName "<your-resource-group-name-here>" `
-Name "<your-storage-account-name-here>" `
-EnableActiveDirectoryDomainServicesForFile $true `
-ActiveDirectoryDomainName "<your-domain-dns-root-here>" `
-ActiveDirectoryNetBiosDomainName "<your-netbios-domain-name-here>" `
-ActiveDirectoryForestName "<your-forest-name-here>" `
-ActiveDirectoryDomainGuid "<your-guid-here>" `
-ActiveDirectoryDomainsid "<your-domain-sid-here>" `
-ActiveDirectoryAzureStorageSid "<your-storage-account-sid>"

Enable AES-256 encryption (recommended)


To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.
The domain object that represents your storage account must meet the following requirements:
The domain object must be created as a computer object in the on-premises AD domain.
Except for the trailing '$', the storage account name must be the same as the computer object's
SamAccountName.
If your domain object doesn't meet those requirements, delete it and create a new domain object that does.
Replace <domain-object-identity> and <domain-name> with your values, then run the following cmdlet to
configure AES-256 support:

Set-ADComputer -Identity <domain-object-identity> -Server <domain-name> -KerberosEncryptionType "AES256"

After you've run that cmdlet, replace <domain-object-identity> in the following script with your value, then run
the script to refresh your domain object password:

$KeyName = "kerb1" # Could be either the first or second Kerberos key; this script assumes we're refreshing the first
$KerbKeys = New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName $KeyName | Select-Object -ExpandProperty Keys
$KerbKey = $KerbKeys | Where-Object {$_.KeyName -eq $KeyName} | Select-Object -ExpandProperty Value
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force

Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword

Debugging
You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged-on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+. For more information on the checks performed in this cmdlet, see Unable to mount Azure Files with AD credentials in the troubleshooting guide for Windows.

Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose

Confirm the feature is enabled


You can check to confirm whether the feature is enabled on your storage account with the following script:

# Get the target storage account
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName "<your-resource-group-name-here>" `
    -Name "<your-storage-account-name-here>"

# List the directory service of the selected storage account
$storageAccount.AzureFilesIdentityBasedAuth.DirectoryServiceOptions

# List the directory domain information if the storage account has enabled AD DS authentication for file shares
$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties

If successful, the output should look like this:

DomainName:<yourDomainHere>
NetBiosDomainName:<yourNetBiosDomainNameHere>
ForestName:<yourForestNameHere>
DomainGuid:<yourGUIDHere>
DomainSid:<yourSIDHere>
AzureStorageID:<yourStorageSIDHere>

Next steps
You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions. Continue to the next section.
Part two: assign share-level permissions to an identity
Part two: assign share-level permissions to an
identity

Before you begin this article, make sure you've completed the previous article, Enable AD DS authentication for
your account.
Once you've enabled Active Directory Domain Services (AD DS) authentication on your storage account, you must configure share-level permissions in order to get access to your file shares. There are two ways you can assign share-level permissions: assign them to specific Azure AD users or user groups, or assign them to all authenticated identities as a default share-level permission.

IMPORTANT
Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage
account key. Administrative control is not supported with Azure AD credentials.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Which configuration should you use


Most users should assign share-level permissions to specific Azure AD users or groups and then use Windows ACLs for granular access control at the directory and file level. This is the most stringent and secure configuration.
There are three scenarios where we instead recommend using default share-level permissions assigned to all authenticated identities:
If you are unable to sync your on-premises AD DS to Azure AD, you can alternatively use a default share-level permission. Assigning a default share-level permission allows you to work around the sync requirement because you don't need to specify the permission for identities in Azure AD. Then you can use Windows ACLs for granular permission enforcement on your files and directories. Identities that are tied to an AD but aren't syncing to Azure AD can also leverage the default share-level permission. This could include standalone Managed Service Accounts (sMSA), group Managed Service Accounts (gMSA), and computer service accounts.
The on-premises AD DS you're using is synced to a different Azure AD than the Azure AD the file share is deployed in. This is typical when you are managing multi-tenant environments. Using the default share-level permission allows you to bypass the requirement for an Azure AD hybrid identity. You can still use Windows ACLs on your files and directories for granular permission enforcement.
You prefer to enforce authentication only using Windows ACLs at the file and directory level.

Share-level permissions
The following table lists the share-level permissions and how they align with the built-in RBAC roles:

SUPPORTED BUILT-IN ROLES                           DESCRIPTION

Storage File Data SMB Share Reader                 Allows for read access to files and directories in Azure file shares. This role is analogous to a file share ACL of read on Windows file servers. Learn more.

Storage File Data SMB Share Contributor            Allows for read, write, and delete access on files and directories in Azure file shares. Learn more.

Storage File Data SMB Share Elevated Contributor   Allows for read, write, delete, and modify ACLs on files and directories in Azure file shares. This role is analogous to a file share ACL of change on Windows file servers. Learn more.

Share-level permissions for specific Azure AD users or groups


If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a hybrid identity that exists in both on-premises AD DS and Azure AD. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced it to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.

IMPORTANT
Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (*) character. If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards. To mitigate any unintended future impact, we highly recommend declaring actions and data actions explicitly as opposed to using the wildcard.

In order for share-level permissions to work, you must:

Sync the users and the groups from your local AD to Azure AD using Azure AD Connect sync
Add AD-synced groups to an RBAC role so they can access your storage account
Share-level permissions must be assigned to the Azure AD identity representing the same user or group in your AD DS to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Azure AD, such as Azure Managed Identities (MSIs), are not supported with AD DS authentication.
You can use the Azure portal, Azure PowerShell module, or Azure CLI to assign the built-in roles to the Azure AD
identity of a user for granting share-level permissions.

IMPORTANT
The share-level permissions will take up to three hours to take effect once completed. Please wait for the permissions to
sync before connecting to your file share using your credentials.
Portal
Azure PowerShell
Azure CLI

To assign an Azure role to an Azure AD identity, using the Azure portal, follow these steps:
1. In the Azure portal, go to your file share, or create a file share.
2. Select Access Control (IAM).
3. Select Add a role assignment.
4. In the Add role assignment blade, select the appropriate built-in role from the Role list:
a. Storage File Data SMB Share Reader
b. Storage File Data SMB Share Contributor
c. Storage File Data SMB Share Elevated Contributor
5. Leave Assign access to at the default setting: Azure AD user, group, or service principal. Select the target Azure AD identity by name or email address. The selected Azure AD identity must be a hybrid identity and cannot be a cloud-only identity. This means that the same identity is also represented in AD DS.
6. Select Save to complete the role assignment operation.
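If you prefer the Azure PowerShell tab, a minimal sketch of the same role assignment follows; the scope string and sign-in name are placeholders you would replace with your own values.

# A minimal sketch of assigning a built-in share-level role with PowerShell.
# The scope string and sign-in name below are placeholders.
$Scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
New-AzRoleAssignment `
    -SignInName "<user-principal-name>" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope $Scope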

Share-level permissions for all authenticated identities


You can add a default share-level permission on your storage account, instead of configuring share-level permissions for Azure AD users or groups. A default share-level permission assigned to your storage account applies to all file shares contained in the storage account.
When you set a default share-level permission, all authenticated users and groups will have the same permission. Authenticated users or groups are identities that can be authenticated against the on-premises AD DS that the storage account is associated with. The default share-level permission is set to None at initialization, meaning that no access is allowed to files and directories in the Azure file share.

Portal
Azure PowerShell
Azure CLI

You can't currently assign permissions to the storage account with the Azure portal. Use either the Azure
PowerShell module or the Azure CLI, instead.
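For reference, a minimal PowerShell sketch of setting the default share-level permission might look like the following; the resource names are placeholders, and the -DefaultSharePermission values mirror the built-in role names.

# A minimal sketch of setting a default share-level permission on a storage account.
# Valid values include None, StorageFileDataSmbShareReader,
# StorageFileDataSmbShareContributor, and StorageFileDataSmbShareElevatedContributor.
Set-AzStorageAccount `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<storage-account-name>" `
    -DefaultSharePermission StorageFileDataSmbShareReader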

What happens if you use both configurations


You could also assign permissions to all authenticated Azure AD users and specific Azure AD users/groups. With this configuration, a specific user or group will have whichever is the higher-level permission from the default share-level permission and RBAC assignment. In other words, say you granted a user the Storage File Data SMB Share Reader role on the target file share. You also granted the default share-level permission Storage File Data SMB Share Elevated Contributor to all authenticated users. With this configuration, that particular user will have Storage File Data SMB Share Elevated Contributor level of access to the file share. Higher-level permissions always take precedence.

Next steps
Now that you've assigned share-level permissions, you must configure directory and file-level permissions.
Continue to the next article.
Part three: configure directory and file-level permissions over SMB
Part three: configure directory and file level
permissions over SMB

Before you begin this article, make sure you completed the previous article, Assign share-level permissions to
an identity to ensure that your share-level permissions are in place.
After you assign share-level permissions with Azure RBAC, you must configure proper Windows ACLs at the root, directory, or file level to take advantage of granular access control. The Azure RBAC share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, while the Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level. Both share-level and file/directory-level permissions are enforced when a user attempts to access a file/directory, so if there is a difference between them, only the most restrictive one will be applied. For example, if a user has read/write access at the file level, but only read at the share level, then they can only read that file. The same would be true if it was reversed: if a user had read/write access at the share level, but only read at the file level, they can still only read the file.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Azure RBAC permissions


The following table contains the Azure RBAC permissions related to this configuration:

BUILT-IN ROLE                                      NTFS PERMISSION                              RESULTING ACCESS

Storage File Data SMB Share Reader                 Full control, Modify, Read, Write, Execute   Read & execute
                                                   Read                                         Read

Storage File Data SMB Share Contributor            Full control                                 Modify, Read, Write, Execute
                                                   Modify                                       Modify
                                                   Read & execute                               Read & execute
                                                   Read                                         Read
                                                   Write                                        Write

Storage File Data SMB Share Elevated Contributor   Full control                                 Modify, Read, Write, Edit (Change permissions), Execute
                                                   Modify                                       Modify
                                                   Read & execute                               Read & execute
                                                   Read                                         Read
                                                   Write                                        Write

Supported permissions
Azure Files supports the full set of basic and advanced Windows ACLs. You can view and configure Windows
ACLs on directories and files in an Azure file share by mounting the share and then using Windows File Explorer,
running the Windows icacls command, or the Set-ACL command.
To configure ACLs with superuser permissions, you must mount the share by using your storage account key
from your domain-joined VM. Follow the instructions in the next section to mount an Azure file share from the
command prompt and to configure Windows ACLs.
The following permissions are included on the root directory of a file share:
BUILTIN\Administrators:(OI)(CI)(F)
BUILTIN\Users:(RX)
BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
NT AUTHORITY\Authenticated Users:(OI)(CI)(M)
NT AUTHORITY\SYSTEM:(OI)(CI)(F)
NT AUTHORITY\SYSTEM:(F)
CREATOR OWNER:(OI)(CI)(IO)(F)

USERS                                DEFINITION

BUILTIN\Administrators               Built-in security group representing administrators of the file server. This group is empty, and no one can be added to it.

BUILTIN\Users                        Built-in security group representing users of the file server. It includes NT AUTHORITY\Authenticated Users by default. For a traditional file server, you can configure the membership definition per server. For Azure Files, there isn't a hosting server, hence BUILTIN\Users includes the same set of users as NT AUTHORITY\Authenticated Users.

NT AUTHORITY\SYSTEM                  The service account of the operating system of the file server. Such a service account doesn't apply in the Azure Files context. It is included in the root directory to be consistent with the Windows file server experience for hybrid scenarios.

NT AUTHORITY\Authenticated Users     All users in AD that can get a valid Kerberos token.

CREATOR OWNER                        Each object, either directory or file, has an owner. If there are ACLs assigned to CREATOR OWNER on that object, then the user that is the owner of this object has the permissions to the object defined by the ACL.

Mount a file share from the command prompt


Use the Windows net use command to mount the Azure file share. Remember to replace the placeholder
values in the following example with your own values. For more information about mounting file shares, see
Use an Azure file share with Windows.

NOTE
You may see the Full Control ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file level), this is restricted. Only users who have the SMB Elevated Contributor role and create a new file or folder can assign permissions on those specific new files or folders without the use of the storage account key. All other permission assignment requires mounting the share with the storage account key first.

$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded)
{
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /user:Azure\<storage-account-name> <storage-account-key>
}
else
{
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

If you experience issues in connecting to Azure Files, refer to the troubleshooting tool we published for Azure
Files mounting errors on Windows.

Configure Windows ACLs


Once your file share has been mounted with the storage account key, you must configure the Windows ACLs
(also known as NTFS permissions). You can configure the Windows ACLs using either Windows File Explorer or
icacls.
If you have directories or files in on-premises file servers with Windows DACLs configured against the AD DS identities, you can copy them over to Azure Files, persisting the ACLs, with traditional file copy tools like Robocopy or Azure AzCopy v10.4+. If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
Configure Windows ACLs with icacls
Use the following Windows command to grant full permissions to all directories and files under the file share,
including the root directory. Remember to replace the placeholder values in the example with your own values.

icacls <mounted-drive-letter>: /grant <user-upn>:(f)


For more information on how to use icacls to set Windows ACLs and on the different types of supported
permissions, see the command-line reference for icacls.
Configure Windows ACLs with Windows File Explorer
Use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory. If you are not able to load the AD domain information correctly in Windows File Explorer, this is likely due to trust configuration in your on-premises AD environment; the client machine was not able to reach the AD domain controller registered for Azure Files authentication. In this case, use icacls for configuring Windows ACLs.
1. Open Windows File Explorer, right-click on the file/directory, and select Properties.
2. Select the Security tab.
3. Select Edit... to change permissions.
4. You can change the permissions of existing users or select Add... to grant permissions to new users.
5. In the prompt window for adding new users, enter the target username you want to grant permissions to in the Enter the object names to select box, and select Check Names to find the full UPN name of the target user.
6. Select OK.
7. In the Security tab, select all permissions you want to grant your new user.
8. Select Apply.

Next steps
Now that the feature is enabled and configured, continue to the next article, where you mount your Azure file
share from a domain-joined VM.
Part four: mount a file share from a domain-joined VM
Part four: mount a file share from a domain-joined
VM

Before you begin this article, make sure you complete the previous article, configure directory and file level
permissions over SMB.
The process described in this article verifies that your file share and access permissions are set up correctly and that you can access an Azure file share from a domain-joined VM. Be aware that share-level Azure role assignment can take some time to take effect.
Sign in to the client by using the credentials that you granted permissions to.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Mounting prerequisites
Before you can mount the file share, make sure you've gone through the following prerequisites:
If you are mounting the file share from a client that has previously mounted the file share using your storage account key, make sure that you have disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on clearing a share mounted with the storage account key, refer to the FAQ page.
Your client must have line of sight to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you will need to enable VPN to reach AD DS for authentication.
Replace the placeholder values with your own values, then use the following command to mount the Azure file share. You always need to mount using the path shown below. Using CNAME for file mount is not supported for identity-based authentication (AD DS or Azure AD DS).

# Always mount your share using .file.core.windows.net, even if you set up a private endpoint for your share.
$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded)
{
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<fileshare-name>
}
else
{
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

If you run into issues mounting with AD DS credentials, refer to Unable to mount Azure Files with AD credentials
for guidance.
If mounting your file share succeeded, then you have successfully enabled and configured on-premises AD DS
authentication for your Azure file shares.

Next steps
If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces
password rotation, continue to the next article for instructions on updating your password:
Update the password of your storage account identity in AD DS
Update the password of your storage account
identity in AD DS

If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage
account in an organizational unit or domain that enforces password expiration time, you must change the
password before the maximum password age. Your organization may run automated cleanup scripts that delete
accounts once their password expires. Because of this, if you do not change your password before it expires,
your account could be deleted, which will cause you to lose access to your Azure file shares.
To trigger password rotation, you can run the Update-AzStorageAccountADObjectPassword command from the AzFilesHybrid module. This command must be run in an on-premises AD DS-joined environment, using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account and updates the password of the registered account in AD DS.
To prevent the need for password rotation, during the onboarding of the Azure storage account in the domain, make sure to place the Azure storage account into a separate organizational unit in AD DS. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies from being applied.

# Update the password of the AD DS account registered for the storage account
# You may use either kerb1 or kerb2
Update-AzStorageAccountADObjectPassword `
-RotateToKerbKey kerb2 `
-ResourceGroupName "<your-resource-group-name-here>" `
-StorageAccountName "<your-storage-account-name-here>"

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No
Enable Azure Active Directory Domain Services
authentication on Azure Files

Azure Files supports identity-based authentication over Server Message Block (SMB) through two types of Domain Services: on-premises Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS). We strongly recommend you review the How it works section to select the right domain service for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
If you are new to Azure file shares, we recommend reading our planning guide before reading the following series of articles.

NOTE
Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption. We recommend using AES-256.
Azure Files supports authentication for Azure AD DS with full synchronization with Azure AD. If you have enabled scoped synchronization in Azure AD DS, which only syncs a limited set of identities from Azure AD, authentication and authorization are not supported.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Prerequisites
Before you enable Azure AD over SMB for Azure file shares, make sure you have completed the following
prerequisites:
1. Select or create an Azure AD tenant.
You can use a new or existing tenant for Azure AD authentication over SMB. The tenant and the file share
that you want to access must be associated with the same subscription.
To create a new Azure AD tenant, you can Add an Azure AD tenant and an Azure AD subscription. If you
have an existing Azure AD tenant but want to create a new tenant for use with Azure file shares, see
Create an Azure Active Directory tenant.
2. Enable Azure AD Domain Services on the Azure AD tenant.
To support authentication with Azure AD credentials, you must enable Azure AD Domain Services for
your Azure AD tenant. If you aren't the administrator of the Azure AD tenant, contact the administrator
and follow the step-by-step guidance to Enable Azure Active Directory Domain Services using the Azure
portal.
It typically takes about 15 minutes for an Azure AD DS deployment to complete. Verify that the health status of Azure AD DS shows Running, with password hash synchronization enabled, before proceeding to the next step.
3. Domain-join an Azure VM with Azure AD DS.
To access a file share by using Azure AD credentials from a VM, your VM must be domain-joined to Azure
AD DS. For more information about how to domain-join a VM, see Join a Windows Server virtual
machine to a managed domain.

NOTE
Azure AD DS authentication over SMB with Azure file shares is supported only on Azure VMs running on OS
versions above Windows 7 or Windows Server 2008 R2.

4. Select or create an Azure file share.


Select a new or existing file share that's associated with the same subscription as your Azure AD tenant.
For information about creating a new file share, see Create a file share in Azure Files. For optimal
performance, we recommend that your file share be in the same region as the VM from which you plan
to access the share.
5. Verify Azure Files connectivity by mounting Azure file shares using your storage account
key.
To verify that your VM and file share are properly configured, try mounting the file share using your
storage account key. For more information, see Mount an Azure file share and access the share in
Windows.

Regional availability
Azure Files authentication with Azure AD DS is available in all Azure Public, Gov, and China regions.

Overview of the workflow


Before you enable Azure AD DS Authentication over SMB for Azure file shares, verify that your Azure AD and
Azure Storage environments are properly configured. We recommend that you walk through the prerequisites
to make sure you've completed all the required steps.
Next, do the following things to grant access to Azure Files resources with Azure AD credentials:
1. Enable Azure AD DS authentication over SMB for your storage account to register the storage account with
the associated Azure AD DS deployment.
2. Assign access permissions for a share to an Azure AD identity (a user, group, or service principal).
3. Configure NTFS permissions over SMB for directories and files.
4. Mount an Azure file share from a domain-joined VM.
The following diagram illustrates the end-to-end workflow for enabling Azure AD DS authentication over SMB
for Azure Files.
Enable Azure AD DS authentication for your account
To enable Azure AD DS authentication over SMB for Azure Files, you can set a property on storage accounts by
using the Azure portal, Azure PowerShell, or Azure CLI. Setting this property implicitly "domain joins" the
storage account with the associated Azure AD DS deployment. Azure AD DS authentication over SMB is then
enabled for all new and existing file shares in the storage account.
Keep in mind that you can enable Azure AD DS authentication over SMB only after you have successfully
deployed Azure AD DS to your Azure AD tenant. For more information, see the prerequisites.
Portal
PowerShell
Azure CLI

To enable Azure AD DS authentication over SMB with the Azure portal, follow these steps:
1. In the Azure portal, go to your existing storage account, or create a storage account.
2. In the File shares section, select Active directory: Not Configured.
3. Select Azure Active Directory Domain Services, then switch the toggle to Enabled.
4. Select Save.
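On the PowerShell tab, the equivalent step is a single property change on the storage account; a minimal sketch follows, with the resource names as placeholders.

# A minimal sketch of enabling Azure AD DS authentication for Azure Files with PowerShell.
Set-AzStorageAccount `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<storage-account-name>" `
    -EnableAzureActiveDirectoryDomainServicesForFile $true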

Recommended: Use AES-256 encryption


By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use
Kerberos AES-256 encryption instead by following these steps:
As an Azure AD DS user with the required permissions (typically, members of the AAD DC Administrators
group will have the necessary permissions), open the Azure Cloud Shell.
Execute the following commands:
# 1. Find the service account in your managed domain that represents the storage account.
$storageAccountName = "<InsertStorageAccountNameHere>"
$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
$userObject = Get-ADUser -Filter $searchFilter

if ($userObject -eq $null)
{
    Write-Error "Cannot find AD object for storage account:$storageAccountName" -ErrorAction Stop
}

# 2. Set the KerberosEncryptionType of the object.
Set-ADUser $userObject -KerberosEncryptionType AES256

# 3. Validate that the object now has the expected (AES256) encryption type.
Get-ADUser $userObject -Properties KerberosEncryptionType

Assign access permissions to an identity


To access Azure Files resources with identity-based authentication, an identity (a user, group, or service principal) must have the necessary permissions at the share level. This process is similar to specifying Windows share permissions, where you specify the type of access that a particular user has to a file share. The guidance in this section demonstrates how to assign read, write, or delete permissions for a file share to an identity. We highly recommend assigning permissions by declaring actions and data actions explicitly as opposed to using the wildcard (*) character.
We have introduced three Azure built-in roles for granting share-level permissions to users:
Storage File Data SMB Share Reader allows read access in Azure Storage file shares over SMB.
Storage File Data SMB Share Contributor allows read, write, and delete access in Azure Storage file shares over SMB.
Storage File Data SMB Share Elevated Contributor allows read, write, delete, and modify NTFS permissions in Azure Storage file shares over SMB.

IMPORTANT
Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage
account key. Administrative control is not supported with Azure AD credentials.

You can use the Azure portal, PowerShell, or Azure CLI to assign the built-in roles to the Azure AD identity of a user for granting share-level permissions. Be aware that the share-level Azure role assignment can take some time to take effect.

NOTE
Remember to sync your AD DS credentials to Azure AD if you plan to use your on-premises AD DS for authentication. Password hash sync from AD DS to Azure AD is optional. Share-level permissions will be granted to the Azure AD identity that is synced from your on-premises AD DS.

The general recommendation is to use share-level permissions for high-level access management, assigned to an AD group representing a group of users and identities, then leverage NTFS permissions for granular access control at the directory/file level.
Assign an Azure role to an AD identity
IMPORTANT
Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (*) character. If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards.

Portal
PowerShell
Azure CLI

To assign an Azure role to an Azure AD identity, using the Azure portal, follow these steps:
1. In the Azure portal, go to your file share, or create a file share.
2. Select Access Control (IAM).
3. Select Add a role assignment.
4. In the Add role assignment blade, select the appropriate built-in role (Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor) from the Role list. Leave Assign access to at the default setting: Azure AD user, group, or service principal. Select the target Azure AD identity by name or email address.
5. Select Save to complete the role assignment operation.
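As a quick check from PowerShell, a sketch like the following lists the existing role assignments at the share scope; the scope string is a placeholder you would replace with your own values.

# A minimal sketch of listing share-level role assignments to verify the result.
$Scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
Get-AzRoleAssignment -Scope $Scope | Select-Object DisplayName, RoleDefinitionName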

Configure NTFS permissions over SMB


After you assign share-level permissions with RBAC, you must assign proper NTFS permissions at the root,
directory, or file level. Think of share-level permissions as the high-level gatekeeper that determines whether a
user can access the share. Whereas NTFS permissions act at a more granular level to determine what operations
the user can do at the directory or file level.
Azure Files supports the full set of NTFS basic and advanced permissions. You can view and configure NTFS
permissions on directories and files in an Azure file share by mounting the share and then using Windows File
Explorer or running the Windows icacls or Set-ACL command.
To configure NTFS with superuser permissions, you must mount the share by using your storage account key
from your domain-joined VM. Follow the instructions in the next section to mount an Azure file share from the
command prompt and to configure NTFS permissions accordingly.
The following sets of permissions are supported on the root directory of a file share:
BUILTIN\Administrators:(OI)(CI)(F)
NT AUTHORITY\SYSTEM:(OI)(CI)(F)
BUILTIN\Users:(RX)
BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
NT AUTHORITY\Authenticated Users:(OI)(CI)(M)
NT AUTHORITY\SYSTEM:(F)
CREATOR OWNER:(OI)(CI)(IO)(F)
Mount a file share from the command prompt
Use the Windows net use command to mount the Azure file share. Remember to replace the placeholder values
in the following example with your own values. For more information about mounting file shares, see Use an
Azure file share with Windows.
$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded)
{
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<fileshare-name>
}
else
{
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

If you experience issues in connecting to Azure Files, please refer to the troubleshooting tool we published for
Azure Files mounting errors on Windows.
Configure NTFS permissions with Windows File Explorer
Use Windows File Explorer to grant full permission to all directories and files under the file share, including the
root directory.
1. Open Windows File Explorer, right-click on the file/directory, and select Properties.
2. Select the Security tab.
3. Select Edit... to change permissions.
4. You can change the permissions of existing users or select Add... to grant permissions to new users.
5. In the prompt window for adding new users, enter the target user name you want to grant permission to in the Enter the object names to select box, and select Check Names to find the full UPN name of the target user.
6. Select OK.
7. In the Security tab, select all permissions you want to grant your new user.
8. Select Apply.
Configure NTFS permissions with icacls
Use the following Windows command to grant full permissions to all directories and files under the file share,
including the root directory. Remember to replace the placeholder values in the example with your own values.

icacls <mounted-drive-letter>: /grant <user-email>:(f)

For more information on how to use icacls to set NTFS permissions and on the different types of supported
permissions, see the command-line reference for icacls.

Mount a file share from a domain-joined VM


The following process verifies that your file share and access permissions were set up correctly and that you can access an Azure file share from a domain-joined VM. Be aware that share-level Azure role assignment can take some time to take effect.
Sign in to the VM by using the Azure AD identity to which you have granted permissions. If you have enabled on-premises AD DS authentication for Azure Files, use your AD DS credentials. For Azure AD DS authentication, sign in with Azure AD credentials.
Use the following command to mount the Azure file share. Remember to replace the placeholder values with your own values. Because you've been authenticated, you don't need to provide the storage account key, the on-premises AD DS credentials, or the Azure AD DS credentials. Single sign-on is supported for authentication with either on-premises AD DS or Azure AD DS. If you run into issues mounting with AD DS credentials, refer to Troubleshoot Azure Files problems in Windows for guidance.

$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded)
{
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<fileshare-name>
}
else
{
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

You have now successfully enabled Azure AD DS authentication over SMB and assigned a built-in role that provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file share, follow the instructions in the Assign access permissions to an identity and Configure NTFS permissions over SMB sections.

Next steps
For more information about Azure Files and how to use Azure AD over SMB, see these resources:
Overview of Azure Files identity-based authentication support for SMB access
FAQ
Managing Storage in the Azure independent clouds
using PowerShell

Most people use Azure Public Cloud for their global Azure deployment. There are also some independent
deployments of Microsoft Azure for reasons of sovereignty and so on. These independent deployments are
referred to as "environments." The following list details the independent clouds currently available.
Azure Government Cloud
Azure China 21Vianet Cloud operated by 21Vianet in China
Azure German Cloud

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Using an independent cloud


To use Azure Storage in one of the independent clouds, you connect to that cloud instead of Azure Public. To use
one of the independent clouds rather than Azure Public:
You specify the environment to which to connect.
You determine and use the available regions.
You use the correct endpoint suffix, which is different from Azure Public.
The examples require Azure PowerShell module Az version 0.7 or later. In a PowerShell window, run
Get-Module -ListAvailable Az to find the version. If nothing is listed, or you need to upgrade, see Install Azure
PowerShell module.

Log in to Azure
Run the Get-AzEnvironment cmdlet to see the available Azure environments:

Get-AzEnvironment

Sign in to your account that has access to the cloud to which you want to connect and set the environment. This
example shows how to sign into an account that uses the Azure Government Cloud.

Connect-AzAccount -Environment AzureUSGovernment

To access the China Cloud, use the environment AzureChinaCloud . To access the German Cloud, use
AzureGermanCloud .
At this point, if you need the list of locations to create a storage account or another resource, you can query the
locations available for the selected cloud using Get-AzLocation.
Get-AzLocation | select Location, DisplayName

The following table shows the locations returned for the German cloud.

LOCATION                DISPLAY NAME

germanycentral          Germany Central
germanynortheast        Germany Northeast

Endpoint suffix
The endpoint suffix for each of these environments is different from the Azure Public endpoint. For example, the
blob endpoint suffix for Azure Public is blob.core.windows.net . For the Government Cloud, the blob endpoint
suffix is blob.core.usgovcloudapi.net .
Get endpoint using Get-AzEnvironment
Retrieve the endpoint suffix using Get-AzEnvironment. The endpoint is the StorageEndpointSuffix property of
the environment.
The following code snippets show how to retrieve the endpoint suffix. All of these commands return something like "core.windows.net" or "core.cloudapi.de", etc. Append the suffix to the storage service to access that service. For example, "queue.core.cloudapi.de" will access the queue service in the German Cloud.
This code snippet retrieves all of the environments and the endpoint suffix for each one.

Get-AzEnvironment | select Name, StorageEndpointSuffix

This command returns the following results.

NAME                    STORAGE ENDPOINT SUFFIX

AzureChinaCloud         core.chinacloudapi.cn
AzureCloud              core.windows.net
AzureGermanCloud        core.cloudapi.de
AzureUSGovernment       core.usgovcloudapi.net

To retrieve all of the properties for the specified environment, call Get-AzEnvironment and specify the cloud
name. This code snippet returns a list of properties; look for StorageEndpointSuffix in the list. The following
example is for the German Cloud.

Get-AzEnvironment -Name AzureGermanCloud

The results are similar to the following values:

PROPERTY NAME                                   VALUE

Name                                            AzureGermanCloud
EnableAdfsAuthentication                        False
ActiveDirectoryServiceEndpointResourceId        http://management.core.cloudapi.de/
GalleryURL                                      https://gallery.cloudapi.de/
ManagementPortalUrl                             https://portal.microsoftazure.de/
ServiceManagementUrl                            https://manage.core.cloudapi.de/
PublishSettingsFileUrl                          https://manage.microsoftazure.de/publishsettings/index
ResourceManagerUrl                              http://management.microsoftazure.de/
SqlDatabaseDnsSuffix                            .database.cloudapi.de
StorageEndpointSuffix                           core.cloudapi.de
...                                             ...

To retrieve just the storage endpoint suffix property, retrieve the specific cloud and ask for just that one property.

$environment = Get-AzEnvironment -Name AzureGermanCloud
Write-Host "Storage EndPoint Suffix = " $environment.StorageEndpointSuffix

This command returns the following information:


Storage Endpoint Suffix = core.cloudapi.de

Get endpoint from a storage account


You can also examine the properties of a storage account to retrieve the endpoints:

# Get a reference to the storage account.
$resourceGroup = "myexistingresourcegroup"
$storageAccountName = "myexistingstorageaccount"
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName $resourceGroup `
    -Name $storageAccountName

# Output the endpoints.
Write-Host "blob endpoint = " $storageAccount.PrimaryEndPoints.Blob
Write-Host "file endpoint = " $storageAccount.PrimaryEndPoints.File
Write-Host "queue endpoint = " $storageAccount.PrimaryEndPoints.Queue
Write-Host "table endpoint = " $storageAccount.PrimaryEndPoints.Table

For a storage account in the Government Cloud, this command returns the following output:

blob endpoint = https://myexistingstorageaccount.blob.core.usgovcloudapi.net/
file endpoint = https://myexistingstorageaccount.file.core.usgovcloudapi.net/
queue endpoint = https://myexistingstorageaccount.queue.core.usgovcloudapi.net/
table endpoint = https://myexistingstorageaccount.table.core.usgovcloudapi.net/
After setting the environment
You can now use PowerShell to manage your storage accounts and access blob, queue, file, and table data. For
more information, see Az.Storage.

Clean up resources
If you created a new resource group and a storage account for this exercise, you can remove both assets by
deleting the resource group. Deleting the resource group deletes all resources contained within the group.

Remove-AzResourceGroup -Name $resourceGroup

Next steps
Persisting user logins across PowerShell sessions
Azure Government storage
Microsoft Azure Government Developer Guide
Developer Notes for Azure China 21Vianet Applications
Azure Germany Documentation
Initiate a storage account failover

If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can
initiate an account failover. An account failover updates the secondary endpoint to become the primary
endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary
region. Forced failover enables you to maintain high availability for your applications.
This article shows how to initiate an account failover for your storage account using the Azure portal,
PowerShell, or Azure CLI. To learn more about account failover, see Disaster recovery and storage account
failover.

WARNING
An account failover typically results in some data loss. To understand the implications of an account failover and to
prepare for data loss, review Understand the account failover process.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Prerequisites
Before you can perform an account failover on your storage account, make sure that your storage account is
configured for geo-replication. Your storage account can use any of the following redundancy options:
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For more information about Azure Storage redundancy, see Azure Storage redundancy.
Keep in mind that the following features and services are not supported for account failover:
Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares
being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop
working and may also cause unexpected data loss in the case of newly tiered files.
Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not
supported at this time.
A storage account containing premium block blobs cannot be failed over. Storage accounts that support
premium block blobs do not currently support geo-redundancy.
A storage account containing any WORM immutability policy enabled containers cannot be failed over.
Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.

Initiate the failover


Portal
PowerShell
Azure CLI

To initiate an account failover from the Azure portal, follow these steps:
1. Navigate to your storage account.
2. Under Settings, select Geo-replication to view the geo-replication and failover status of the storage account.

3. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-
redundant storage (RA-GRS). If it's not, then select Configuration under Settings to update your
account to be geo-redundant.
4. The Last Sync Time property indicates how far the secondary is behind from the primary. Last Sync
Time provides an estimate of the extent of data loss that you will experience after the failover is
completed. For more information about checking the Last Sync Time property, see Check the Last Sync
Time property for a storage account.
5. Select Prepare for failover .
6. Review the confirmation dialog. When you are ready, enter Yes to confirm and initiate the failover.
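If you're working from the PowerShell tab instead, a minimal sketch of initiating the failover looks like the following; the resource names are placeholders.

# A minimal sketch of initiating an account failover with PowerShell.
# -Force skips the interactive confirmation prompt.
Invoke-AzStorageAccountFailover `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<storage-account-name>" `
    -Force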
Important implications of account failover
When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are
updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the
potential impact to your storage account before you initiate a failover.
To estimate the extent of likely data loss before you initiate a failover, check the Last Sync Time property. For
more information about checking the Last Sync Time property, see Check the Last Sync Time property for a
storage account.
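A minimal PowerShell sketch for reading the Last Sync Time follows; the resource names are placeholders.

# A minimal sketch of reading the Last Sync Time from the geo-replication statistics.
$account = Get-AzStorageAccount `
    -ResourceGroupName "<resource-group-name>" `
    -Name "<storage-account-name>" `
    -IncludeGeoReplicationStats
$account.GeoReplicationStats.LastSyncTime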
The time it takes to fail over after initiation can vary, though it typically takes less than one hour.
After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the
new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage
(RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost
is due to the network egress charges to re-replicate the data to the new secondary region. For additional
information, see Bandwidth Pricing Details.
After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the
new secondary region. Replication time depends on many factors, which include:
The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer, larger objects.
The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo-replication.
If using Blob storage, the number of snapshots per blob.
If using Table storage, the data partitioning strategy. The replication process can't scale beyond the number of partition keys that you use.

Next steps
Disaster recovery and storage account failover
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Tutorial: Build a highly available application with Blob storage
Enable soft delete on Azure file shares

Azure Files offers soft delete for file shares so that you can more easily recover your data when it's mistakenly
deleted by an application or other storage account user. To learn more about soft delete, see How to prevent
accidental deletion of Azure file shares.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    Yes

Prerequisites
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.

Getting started
The following sections show how to enable and use soft delete for Azure file shares on an existing storage
account:
Portal
PowerShell
Azure CLI

1. Sign in to the Azure portal.
2. Navigate to your storage account and select File shares under Data storage.
3. Select Enabled next to Soft delete .
4. Select Enabled for Soft delete for all file shares .
5. Select File share retention period in days and enter a number of your choosing.
6. Select Save to confirm your data retention settings.
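On the PowerShell tab, a minimal sketch of the same configuration might look like the following; the seven-day retention period and resource names are placeholders.

# A minimal sketch of enabling soft delete for all file shares in a storage account.
Update-AzStorageFileServiceProperty `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>" `
    -EnableShareDeleteRetentionPolicy $true `
    -ShareRetentionDays 7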
Restore soft deleted file share
Portal
PowerShell
Azure CLI

To restore a soft-deleted file share:

1. Navigate to your storage account and select File shares.
2. On the file share blade, enable Show deleted shares to display any shares that have been soft deleted. This will display any shares currently in a Deleted state.
3. Select the share and select Undelete; this will restore the share.
You can confirm the share is restored because its status switches to Active.
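From the PowerShell tab, a sketch of the same restore first lists deleted shares to find the deleted share version, then undeletes the share; all names below are placeholders.

# A minimal sketch of restoring a soft-deleted file share with PowerShell.
# List shares, including deleted ones, to find the share's version identifier.
Get-AzRmStorageShare `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>" `
    -IncludeDeleted

# Restore the share using the version value found above.
Restore-AzRmStorageShare `
    -ResourceGroupName "<resource-group-name>" `
    -StorageAccountName "<storage-account-name>" `
    -Name "<share-name>" `
    -DeletedShareVersion "<deleted-share-version>"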
Disable soft delete
If you wish to stop using soft delete, follow these instructions. To permanently delete a file share that has been
soft deleted, you must undelete it, disable soft delete, and then delete it again.
Portal
PowerShell
Azure CLI

1. Navigate to your storage account and select File shares under Data storage .
2. Select the link next to Soft delete .
3. Select Disabled for Soft delete for all file shares .
4. Select Save to confirm your data retention settings.

Next steps
To learn about another form of data protection and recovery, see our article Overview of share snapshots for
Azure Files.
Change how a storage account is replicated

Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned
events, including transient hardware failures, network or power outages, and massive natural disasters.
Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage
even in the face of failures.
Azure Storage offers the following types of replication:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For an overview of each of these options, see Azure Storage redundancy.

Switch between types of replication


You can switch a storage account from one type of replication to any other type, but some scenarios are more
straightforward than others. If you want to add or remove geo-replication or read access to the secondary
region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting in some
scenarios; other scenarios require a manual or live migration. If you want to change how data is replicated in the
primary region, by moving from LRS to ZRS or vice versa, then you must either perform a manual migration or
request a live migration. And if you want to move from ZRS to GZRS or RA-GZRS, then you must perform a live
migration, unless you are performing a failback operation after failover.
The following overview shows how to switch from each type of replication to another:

…from LRS:
To LRS: N/A.
To GRS/RA-GRS: Use the Azure portal, PowerShell, or CLI to change the replication setting (see notes 1 and 2).
To ZRS: Perform a manual migration, or request a live migration (see note 5).
To GZRS/RA-GZRS: Perform a manual migration, or switch to GRS/RA-GRS first and then request a live migration (see note 3).

…from GRS/RA-GRS:
To LRS: Use the Azure portal, PowerShell, or CLI to change the replication setting.
To GRS/RA-GRS: N/A.
To ZRS: Perform a manual migration, or switch to LRS first and then request a live migration (see note 3).
To GZRS/RA-GZRS: Perform a manual migration, or request a live migration (see note 3).

…from ZRS:
To LRS: Perform a manual migration.
To GRS/RA-GRS: Perform a manual migration.
To ZRS: N/A.
To GZRS/RA-GZRS: Request a live migration (see note 3), or use PowerShell or Azure CLI to change the replication setting as part of a failback operation only (see note 4).

…from GZRS/RA-GZRS:
To LRS: Perform a manual migration.
To GRS/RA-GRS: Perform a manual migration.
To ZRS: Use the Azure portal, PowerShell, or CLI to change the replication setting.
To GZRS/RA-GZRS: N/A.

Notes:
1 Incurs a one-time egress charge.
2 Migrating from LRS to GRS is not supported if the storage account contains blobs in the archive tier.
3 Live migration is supported for standard general-purpose v2 and premium file share storage accounts. Live migration is not supported for premium block blob or page blob storage accounts.
4 After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see Use caution when failing back to the original primary.
5 Migrating from LRS to ZRS is not supported if NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares.


Caution

If you performed an account failover for your (RA-)GRS or (RA-)GZRS account, the account is locally redundant (LRS) in the new primary region after the failover. Live migration to ZRS or GZRS for an LRS account resulting from a failover is not supported, even for so-called failback operations. For example, if you perform an account failover from RA-GZRS to LRS in the secondary region, and then configure it again as RA-GRS and perform another account failover to the original primary region, you can't request a live migration back to RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to ZRS or GZRS.
To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must
first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the
redundancy configuration for a storage account that contains archived blobs if at all possible, because
rehydration operations can be costly and time-consuming.

Change the replication setting


You can use the Azure portal, PowerShell, or Azure CLI to change the replication setting for a storage account, as
long as you are not changing how data is replicated in the primary region. If you are migrating from LRS in the
primary region to ZRS in the primary region or vice versa, then you must perform either a manual migration or
a live migration.
Changing how your storage account is replicated does not result in downtime for your applications.

Portal
PowerShell
Azure CLI

To change the redundancy option for your storage account in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings , select Configuration .
3. Update the Replication setting.
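With the Azure CLI, updating the replication setting amounts to changing the account's SKU, as in the following sketch; the names are placeholders, and Standard_GRS is only an example target (any supported redundancy SKU can be used, subject to the rules above):

# Change the storage account's redundancy to geo-redundant storage (GRS)
az storage account update \
    --resource-group <resource-group> \
    --name <storage-account> \
    --sku Standard_GRS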

Perform a manual migration to ZRS, GZRS, or RA-GZRS


If you want to change how data in your storage account is replicated in the primary region, by moving from LRS
to ZRS or vice versa, then you may opt to perform a manual migration. A manual migration provides more
flexibility than a live migration. You control the timing of a manual migration, so use this option if you need the
migration to complete by a certain date.
When you perform a manual migration from LRS to ZRS in the primary region or vice versa, the destination
storage account can be geo-redundant and can also be configured for read access to the secondary region. For
example, you can migrate an LRS account to a GZRS or RA-GZRS account in one step.
You cannot use a manual migration to migrate from ZRS to GZRS or RA-GZRS. You must request a live
migration.
A manual migration can result in application downtime. If your application requires high availability, Microsoft
also provides a live migration option. A live migration is an in-place migration with no downtime.
With a manual migration, you copy the data from your existing storage account to a new storage account that
uses ZRS in the primary region. To perform a manual migration, you can use one of the following options:
Copy data by using an existing tool such as AzCopy, one of the Azure Storage client libraries, or a reliable third-party tool (a minimal AzCopy sketch follows this list).
If you're familiar with Hadoop or HDInsight, you can attach both the source storage account and destination storage account to your cluster. Then, parallelize the data copy process with a tool like DistCp.
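As a minimal AzCopy sketch of the first option, the following performs a recursive copy between two file shares; the account names, share name, and SAS tokens are placeholders, and both SAS tokens need sufficient read and write permissions:

# Copy the contents of a file share from the source account to the destination account
azcopy copy \
    "https://<source-account>.file.core.windows.net/<share>?<source-sas>" \
    "https://<destination-account>.file.core.windows.net/<share>?<destination-sas>" \
    --recursive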

Request a live migration to ZRS, GZRS, or RA-GZRS


If you need to migrate your storage account from LRS to ZRS in the primary region with no application
downtime, you can request a live migration from Microsoft. To migrate from LRS to GZRS or RA-GZRS, first
switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from
ZRS, GRS, or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then
request a live migration.
During a live migration, you can access data in your storage account with no loss of durability or availability. The
Azure Storage SLA is maintained during the migration process. There is no data loss associated with a live
migration. Service endpoints, access keys, shared access signatures, and other account options remain
unchanged after the migration.
For standard performance, ZRS supports general-purpose v2 accounts only, so make sure to upgrade your
storage account if it is a general-purpose v1 account prior to submitting a request for a live migration to ZRS.
For more information, see Upgrade to a general-purpose v2 storage account. A storage account must contain
data to be migrated via live migration.
For premium performance, live migration is supported for premium file share accounts, but not for premium
block blob or premium page blob accounts.
If your account uses RA-GRS, then you need to first change your account's replication type to either LRS or GRS
before proceeding with a live migration. This intermediary step removes the secondary read-only endpoint
provided by RA-GRS.
While Microsoft handles your request for live migration promptly, there's no guarantee as to when a live
migration will complete. If you need your data migrated to ZRS by a certain date, then Microsoft recommends
that you perform a manual migration instead. Generally, the more data you have in your account, the longer it
takes to migrate that data.
You must perform a manual migration if:
You want to migrate your data into a ZRS storage account that is located in a region different than the source
account.
Your storage account is a premium page blob or block blob account.
You want to migrate data from ZRS to LRS, GRS or RA-GRS.
Your storage account includes data in the archive tier.
You can request live migration through the Azure Support portal.

IMPORTANT
If you need to migrate more than one storage account, create a single support ticket and specify the names of the
accounts to convert on the Details tab.

Follow these steps to request a live migration:


1. In the Azure portal, navigate to a storage account that you want to migrate.
2. Under Support + troubleshooting , select New Support Request .
3. Complete the Basics tab based on your account information:
Issue type : Select Technical .
Service : Select My Services , then Storage Account Management .
Resource : Select a storage account to migrate. If you need to specify multiple storage accounts, you
can do so in the Details section.
Problem type : Choose Data Migration .
Problem subtype : Choose Migrate to ZRS, GZRS, or RA-GZRS .
4. Select Next . On the Solutions tab, you can check the eligibility of your storage accounts for migration.
5. Select Next . If you have more than one storage account to migrate, then on the Details tab, specify the
name for each account, separated by a semicolon.

6. Fill out the additional required information on the Details tab, then select Review + create to review
and submit your support ticket. A support person will contact you to provide any assistance you may
need.
NOTE
Premium file shares are available only for LRS and ZRS.
GZRS storage accounts do not currently support the archive tier. See Hot, Cool, and Archive access tiers for blob data for
more details.
Managed disks are only available for LRS and cannot be migrated to ZRS. You can store snapshots and images for
standard SSD managed disks on standard HDD storage and choose between LRS and ZRS options. For information about
integration with availability sets, see Introduction to Azure managed disks.

Switch from ZRS Classic


IMPORTANT
Microsoft will deprecate and migrate ZRS Classic accounts on March 31, 2021. More details will be provided to ZRS Classic
customers before deprecation.
After ZRS becomes generally available in a given region, customers will no longer be able to create ZRS Classic accounts
from the Azure portal in that region. Using Microsoft PowerShell and Azure CLI to create ZRS Classic accounts is an
option until ZRS Classic is deprecated. For information about where ZRS is available, see Azure Storage redundancy.

ZRS Classic asynchronously replicates data across data centers within one to two regions. Replicated data may
not be available unless Microsoft initiates failover to the secondary. A ZRS Classic account can't be converted to
or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
ZRS Classic is available only for block blobs in general-purpose V1 (GPv1) storage accounts. For more
information about storage accounts, see Azure storage account overview.
To manually migrate ZRS account data to or from an LRS, GRS, RA-GRS, or ZRS Classic account, use one of the
following tools: AzCopy, Azure Storage Explorer, PowerShell, or Azure CLI. You can also build your own migration
solution with one of the Azure Storage client libraries.
You can also upgrade your ZRS Classic storage account to ZRS by using the Azure portal, PowerShell, or Azure
CLI in regions where ZRS is available.

Portal
PowerShell
Azure CLI

To upgrade to ZRS in the Azure portal, navigate to the Configuration settings of the account and choose
Upgrade :
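With PowerShell or the Azure CLI, the upgrade amounts to updating the account's SKU. A minimal CLI sketch follows, assuming ZRS is available in the account's region; the names are placeholders:

# Upgrade a ZRS Classic account to ZRS by setting the SKU
az storage account update \
    --resource-group <resource-group> \
    --name <storage-account> \
    --sku Standard_ZRS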
Costs associated with changing how data is replicated
The costs associated with changing how data is replicated depend on your conversion path. Ordering from least
to the most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-
GZRS.
For example, going from LRS to any other type of replication will incur additional charges because you are
moving to a more sophisticated redundancy level. Migrating to GRS or RA-GRS will incur an egress bandwidth
charge at the time of migration because your entire storage account is being replicated to the secondary region.
All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the
secondary region. For details on bandwidth charges, see Azure Storage Pricing page.
If you migrate your storage account from GRS to LRS, there is no additional cost, but your replicated data is
deleted from the secondary location.

IMPORTANT
If you migrate your storage account from RA-GRS to GRS or LRS, that account is billed as RA-GRS for an additional 30
days beyond the date that it was converted.

See also
Azure Storage redundancy
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Use geo-redundancy to design highly available
applications
5/20/2022 • 19 minutes to read • Edit Online

A common feature of cloud-based infrastructures like Azure Storage is that they provide a highly available and
durable platform for hosting data and applications. Developers of cloud-based applications must consider
carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-
redundant storage to ensure high availability even in the event of a regional outage. Storage accounts
configured for geo-redundant replication are synchronously replicated in the primary region, and then
asynchronously replicated to a secondary region that is hundreds of miles away.
Azure Storage offers two options for geo-redundant replication. The only difference between these two options
is how data is replicated in the primary region:
Geo-zone-redundant storage (GZRS): Data is replicated synchronously across three Azure availability
zones in the primary region using zone-redundant storage (ZRS), then replicated asynchronously to the
secondary region. For read access to data in the secondary region, enable read-access geo-zone-
redundant storage (RA-GZRS).
Microsoft recommends using GZRS/RA-GZRS for scenarios that require maximum availability and
durability.
Geo-redundant storage (GRS): Data is replicated synchronously three times in the primary region using
locally redundant storage (LRS), then replicated asynchronously to the secondary region. For read access
to data in the secondary region, enable read-access geo-redundant storage (RA-GRS).
This article shows how to design your application to handle an outage in the primary region. If the primary
region becomes unavailable, your application can adapt to perform read operations against the secondary
region instead. Make sure that your storage account is configured for RA-GRS or RA-GZRS before you get
started.

Application design considerations when reading from the secondary


The purpose of this article is to show you how to design an application that will continue to function (albeit in a
limited capacity) even in the event of a major disaster at the primary data center. You can design your
application to handle transient or long-running issues by reading from the secondary region when there is a
problem that interferes with reading from the primary region. When the primary region is available again, your
application can return to reading from the primary region.
Keep in mind these key points when designing your application for RA-GRS or RA-GZRS:
Azure Storage maintains a read-only copy of the data you store in your primary region in a secondary
region. As noted above, the storage service determines the location of the secondary region.
The read-only copy is eventually consistent with the data in the primary region.
For blobs, tables, and queues, you can query the secondary region for a Last Sync Time value that tells
you when the last replication from the primary to the secondary region occurred. (This is not supported
for Azure Files, which doesn't have RA-GRS redundancy at this time.)
You can use the Storage Client Library to read and write data in either the primary or secondary region.
You can also redirect read requests automatically to the secondary region if a read request to the primary
region times out.
If the primary region becomes unavailable, you can initiate an account failover. When you fail over to the
secondary region, the DNS entries pointing to the primary region are changed to point to the secondary
region. After the failover is complete, write access is restored for GRS and RA-GRS accounts. For more
information, see Disaster recovery and storage account failover.
Using eventually consistent data
The proposed solution assumes that it is acceptable to return potentially stale data to the calling application.
Because data in the secondary region is eventually consistent, it is possible the primary region may become
inaccessible before an update to the secondary region has finished replicating.
For example, suppose your customer submits an update successfully, but the primary region fails before the
update is propagated to the secondary region. When the customer asks to read the data back, they receive the
stale data from the secondary region instead of the updated data. When designing your application, you must
decide whether this is acceptable, and if so, how you will message the customer.
Later in this article, we show how to use the Last Sync Time for the secondary data to determine whether the secondary is up-to-date.
Handling services separately or all together
While unlikely, it is possible for one service to become unavailable while the other services are still fully
functional. You can handle the retries and read-only mode for each service separately (blobs, queues, tables), or
you can handle retries generically for all the storage services together.
For example, if you use queues and blobs in your application, you may decide to put in separate code to handle
retryable errors for each of these. Then if you get a retry from the blob service, but the queue service is still
working, only the part of your application that handles blobs will be impacted. If you decide to handle all storage
service retries generically and a call to the blob service returns a retryable error, then requests to both the blob
service and the queue service will be impacted.
Ultimately, this depends on the complexity of your application. You may decide not to handle the failures by
service, but instead to redirect read requests for all storage services to the secondary region and run the
application in read-only mode when you detect a problem with any storage service in the primary region.
Other considerations
These are the other considerations we will discuss in the rest of this article.
Handling retries of read requests using the Circuit Breaker pattern
Eventually-consistent data and the Last Sync Time
Testing

Running your application in read-only mode


To effectively prepare for an outage in the primary region, you must be able to handle both failed read requests
and failed update requests (with update in this case meaning inserts, updates, and deletions). If the primary
region fails, read requests can be redirected to the secondary region. However, update requests cannot be
redirected to the secondary because the secondary is read-only. For this reason, you need to design your
application to run in read-only mode.
For example, you can set a flag that is checked before any update requests are submitted to Azure Storage.
When one of the update requests comes through, you can skip it and return an appropriate response to the
customer. You may even want to disable certain features altogether until the problem is resolved and notify
users that those features are temporarily unavailable.
If you decide to handle errors for each service separately, you will also need to handle the ability to run your
application in read-only mode by service. For example, you may have read-only flags for each service that can
be enabled and disabled. Then you can handle the flag in the appropriate places in your code.
Being able to run your application in read-only mode has another side benefit: it gives you the ability to ensure limited functionality during a major application upgrade. You can trigger your application to run in read-only mode and point to the secondary data center, ensuring nobody is accessing the data in the primary region while you're making upgrades.

Handling updates when running in read-only mode


There are many ways to handle update requests when running in read-only mode. We won't cover this comprehensively, but generally, there are a couple of patterns to consider.
You can respond to your user and tell them you are not currently accepting updates. For example, a
contact management system could enable customers to access contact information but not make
updates.
You can enqueue your updates in another region. In this case, you would write your pending update
requests to a queue in a different region, and then have a way to process those requests after the primary
data center comes online again. In this scenario, you should let the customer know that the update
requested is queued for later processing.
You can write your updates to a storage account in another region. Then when the primary data center
comes back online, you can have a way to merge those updates into the primary data, depending on the
structure of the data. For example, if you are creating separate files with a date/time stamp in the name,
you can copy those files back to the primary region. This works for some workloads such as logging and IoT data.

Handling retries
The Azure Storage client library helps you determine which errors can be retried. For example, a 404 error
(resource not found) would not be retried because retrying it is not likely to result in success. On the other hand,
a 500 error can be retried because it is a server error, and the problem may simply be a transient issue. For
more details, check out the open source code for the ExponentialRetry class in the .NET storage client library.
(Look for the ShouldRetry method.)
Read requests
Read requests can be redirected to secondary storage if there is a problem with primary storage. As noted
above in Using Eventually Consistent Data, it must be acceptable for your application to potentially read stale
data. If you are using the storage client library to access data from the secondary, you can specify the retry
behavior of a read request by setting a value for the LocationMode property to one of the following:
PrimaryOnly (the default)
PrimaryThenSecondary
SecondaryOnly
SecondaryThenPrimary
When you set the LocationMode to PrimaryThenSecondary , if the initial read request to the primary endpoint fails with an error that can be retried, the client automatically makes another read request to the secondary endpoint. If the error is a server timeout, then the client will have to wait for the timeout to expire before it receives a retryable error from the service.
There are basically two scenarios to consider when you are deciding how to respond to a retryable error:
This is an isolated problem and subsequent requests to the primary endpoint will not return a retryable
error. An example of where this might happen is when there is a transient network error.
In this scenario, there is no significant performance penalty in having LocationMode set to
Primar yThenSecondar y as this only happens infrequently.
This is a problem with at least one of the storage services in the primary region and all subsequent
requests to that service in the primary region are likely to return retryable errors for a period of time. An
example of this is if the primary region is completely inaccessible.
In this scenario, there is a performance penalty because all your read requests will try the primary
endpoint first, wait for the timeout to expire, then switch to the secondary endpoint.
For these scenarios, you should identify that there is an ongoing issue with the primary endpoint and send all read requests directly to the secondary endpoint by setting the LocationMode property to SecondaryOnly . At this time, you should also change the application to run in read-only mode. This approach is known as the Circuit Breaker Pattern.
Update requests
The Circuit Breaker pattern can also be applied to update requests. However, update requests cannot be redirected to secondary storage, which is read-only. For these requests, you should leave the LocationMode property set to PrimaryOnly (the default). To handle these errors, you can apply a metric to these requests, such as 10 failures in a row, and when your threshold is met, switch the application into read-only mode. You can use the same methods for returning to update mode as those described below in the next section about the Circuit Breaker pattern.

Circuit Breaker pattern


Using the Circuit Breaker pattern in your application can prevent it from retrying an operation that is likely to
fail repeatedly. It allows the application to continue to run rather than taking up time while the operation is
retried exponentially. It also detects when the fault has been fixed, at which time the application can try the
operation again.
How to implement the circuit breaker pattern
To identify that there is an ongoing problem with a primary endpoint, you can monitor how frequently the client
encounters retryable errors. Because each case is different, you have to decide on the threshold you want to use
for the decision to switch to the secondary endpoint and run the application in read-only mode. For example,
you could decide to perform the switch if there are 10 failures in a row with no successes. Another example is to
switch if 90% of the requests in a 2-minute period fail.
For the first scenario, you can simply keep a count of the failures, and if there is a success before reaching the
maximum, set the count back to zero. For the second scenario, one way to implement it is to use the
MemoryCache object (in .NET). For each request, add a CacheItem to the cache, set the value to success (1) or
fail (0), and set the expiration time to 2 minutes from now (or whatever your time constraint is). When an entry's
expiration time is reached, the entry is automatically removed. This will give you a rolling 2-minute window.
Each time you make a request to the storage service, you first use a Linq query across the MemoryCache object
to calculate the percent success by summing the values and dividing by the count. When the percent success
drops below some threshold (such as 10%), set the LocationMode property for read requests to SecondaryOnly and switch the application into read-only mode before continuing.
The threshold of errors used to determine when to make the switch may vary from service to service in your
application, so you should consider making them configurable parameters. This is also where you decide to
handle retryable errors from each service separately or as one, as discussed previously.
Another consideration is how to handle multiple instances of an application, and what to do when you detect
retryable errors in each instance. For example, you may have 20 VMs running with the same application loaded.
Do you handle each instance separately? If one instance starts having problems, do you want to limit the
response to just that one instance, or do you want to try to have all instances respond in the same way when
one instance has a problem? Handling the instances separately is much simpler than trying to coordinate the
response across them, but how you do this depends on your application's architecture.
Options for monitoring the error frequency
You have three main options for monitoring the frequency of retries in the primary region in order to determine
when to switch over to the secondary region and change the application to run in read-only mode.
Add a handler for the Retrying event on the OperationContext object you pass to your storage requests. This is the method displayed in this article and used in the accompanying sample. These events fire whenever the client retries a request, enabling you to track how often the client encounters retryable errors on a primary endpoint.
.NET v12 SDK
.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client
libraries. For more information, see Announcing the Azure Storage v12 Client Libraries.
In the Evaluate method in a custom retry policy, you can run custom code whenever a retry takes place.
In addition to recording when a retry happens, this also gives you the opportunity to modify your retry
behavior.

.NET v12 SDK


.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client
libraries. For more information, see Announcing the Azure Storage v12 Client Libraries.
The third approach is to implement a custom monitoring component in your application that continually
pings your primary storage endpoint with dummy read requests (such as reading a small blob) to
determine its health. This would take up some resources, but not a significant amount. When a problem is
discovered that reaches your threshold, you would then perform the switch to SecondaryOnly and read-only mode.
At some point, you will want to switch back to using the primary endpoint and allowing updates. If using one of
the first two methods listed above, you could simply switch back to the primary endpoint and enable update
mode after an arbitrarily selected amount of time or number of operations has been performed. You can then let
it go through the retry logic again. If the problem has been fixed, it will continue to use the primary endpoint
and allow updates. If there is still a problem, it will once more switch back to the secondary endpoint and read-
only mode after failing the criteria you've set.
For the third scenario, when pinging the primary storage endpoint becomes successful again, you can trigger the switch back to PrimaryOnly and continue allowing updates.

Handling eventually consistent data


Geo-redundant storage works by replicating transactions from the primary to the secondary region. This
replication process guarantees that the data in the secondary region is eventually consistent. This means that all
the transactions in the primary region will eventually appear in the secondary region, but that there may be a
lag before they appear, and that there is no guarantee the transactions arrive in the secondary region in the
same order as that in which they were originally applied in the primary region. If your transactions arrive in the
secondary region out of order, you may consider your data in the secondary region to be in an inconsistent state
until the service catches up.
The following timeline shows an example of what might happen when you update the details of an employee to make them a member of the administrators role. For the sake of this example, this requires you to update the employee entity and update an administrator role entity with a count of the total number of administrators. Notice how the updates are applied out of order in the secondary region.

T0: Transaction A (insert employee entity) is written to the primary; it is not replicated yet.
T1: Transaction A is replicated to the secondary. The Last Sync Time is updated to T1.
T2: Transaction B (update employee entity) is written to the primary; it is not replicated yet. The Last Sync Time remains T1.
T3: Transaction C (update administrator role entity) is written to the primary; it is not replicated yet. The Last Sync Time remains T1.
T4: Transaction C is replicated to the secondary. The Last Sync Time remains T1, because transaction B has not been replicated yet.
T5: Entities are read from the secondary. You get the stale value for the employee entity because transaction B hasn't replicated yet, but the new value for the administrator role entity because transaction C has replicated. The Last Sync Time is still T1 because transaction B hasn't replicated. You can tell the administrator role entity is inconsistent because the entity date/time is after the Last Sync Time.
T6: Transaction B is replicated to the secondary. The Last Sync Time is updated to T6; all transactions through C have now been replicated.

In this example, assume the client switches to reading from the secondary region at T5. It can successfully read
the administrator role entity at this time, but the entity contains a value for the count of administrators that is
not consistent with the number of employee entities that are marked as administrators in the secondary region
at this time. Your client could simply display this value, with the risk that it is inconsistent information.
Alternatively, the client could attempt to determine that the administrator role is in a potentially inconsistent
state because the updates have happened out of order, and then inform the user of this fact.
To recognize that it has potentially inconsistent data, the client can use the value of the Last Sync Time that you
can get at any time by querying a storage service. This tells you the time when the data in the secondary region
was last consistent and when the service had applied all the transactions prior to that point in time. In the
example shown above, after the service inserts the employee entity in the secondary region, the last sync time
is set to T1. It remains at T1 until the service updates the employee entity in the secondary region when it is set
to T6. If the client retrieves the last sync time when it reads the entity at T5, it can compare it with the timestamp
on the entity. If the timestamp on the entity is later than the last sync time, then the entity is in a potentially
inconsistent state, and you can take whatever is the appropriate action for your application. Using this field
requires that you know when the last update to the primary was completed.
To learn how to check the last sync time, see Check the Last Sync Time property for a storage account.

Testing
It's important to test that your application behaves as expected when it encounters retryable errors. For
example, you need to test that the application switches to the secondary and into read-only mode when it
detects a problem, and switches back when the primary region becomes available again. To do this, you need a
way to simulate retryable errors and control how often they occur.
You can use Fiddler to intercept and modify HTTP responses in a script. This script can identify responses that
come from your primary endpoint and change the HTTP status code to one that the Storage Client Library
recognizes as a retryable error. This code snippet shows a simple example of a Fiddler script that intercepts
responses to read requests against the employeedata table to return a 502 status:

Java v12 SDK


Java v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Next steps
For a complete sample showing how to make the switch back and forth between the primary and secondary
endpoints, see Azure Samples – Using the Circuit Breaker Pattern with RA-GRS storage.
Check the Last Sync Time property for a storage
account
5/20/2022 • 2 minutes to read • Edit Online

When you configure a storage account, you can specify that your data is copied to a secondary region that is
hundreds of miles from the primary region. Geo-replication offers durability for your data in the event of a
significant outage in the primary region, such as a natural disaster. If you additionally enable read access to the
secondary region, your data remains available for read operations if the primary region becomes unavailable.
You can design your application to switch seamlessly to reading from the secondary region if the primary
region is unresponsive.
Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) both replicate your data asynchronously
to a secondary region. For read access to the secondary region, enable read-access geo-redundant storage (RA-
GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information about the various options
for redundancy offered by Azure Storage, see Azure Storage redundancy.
This article describes how to check the Last Sync Time property for your storage account so that you can
evaluate any discrepancy between the primary and secondary regions.

About the Last Sync Time property


Because geo-replication is asynchronous, it is possible that data written to the primary region has not yet been
written to the secondary region at the time an outage occurs. The Last Sync Time property indicates the last
time that data from the primary region was written successfully to the secondary region. All writes made to the
primary region before the last sync time are available to be read from the secondary location. Writes made to
the primary region after the last sync time property may or may not be available for reads yet.
The Last Sync Time property is a GMT date/time value.

Get the Last Sync Time property


You can use PowerShell or Azure CLI to retrieve the value of the Last Sync Time property.

PowerShell
Azure CLI

To get the last sync time for the storage account with PowerShell, install version 1.11.0 or later of the Az.Storage
module. Then check the storage account's GeoReplicationStats.LastSyncTime property. Remember to
replace the placeholder values with your own values:

$lastSyncTime = $(Get-AzStorageAccount -ResourceGroupName <resource-group> `
    -Name <storage-account> `
    -IncludeGeoReplicationStats).GeoReplicationStats.LastSyncTime
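For the Azure CLI, a sketch of the equivalent query follows; it expands the geoReplicationStats property on the account and reads its lastSyncTime value (the names are placeholders):

# Retrieve the Last Sync Time for a geo-redundant storage account
az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv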

See also
Azure Storage redundancy
Change the redundancy option for a storage account
Use geo-redundancy to design highly available applications
Initiate a storage account failover
5/20/2022 • 5 minutes to read • Edit Online

If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can
initiate an account failover. An account failover updates the secondary endpoint to become the primary
endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary
region. Forced failover enables you to maintain high availability for your applications.
This article shows how to initiate an account failover for your storage account using the Azure portal,
PowerShell, or Azure CLI. To learn more about account failover, see Disaster recovery and storage account
failover.

WARNING
An account failover typically results in some data loss. To understand the implications of an account failover and to
prepare for data loss, review Understand the account failover process.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Prerequisites
Before you can perform an account failover on your storage account, make sure that your storage account is
configured for geo-replication. Your storage account can use any of the following redundancy options:
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For more information about Azure Storage redundancy, see Azure Storage redundancy.
Keep in mind that the following features and services are not supported for account failover:
Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares
being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop
working and may also cause unexpected data loss in the case of newly tiered files.
Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not
supported at this time.
A storage account containing premium block blobs cannot be failed over. Storage accounts that support
premium block blobs do not currently support geo-redundancy.
A storage account containing any WORM immutability policy enabled containers cannot be failed over.
Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.

Initiate the failover


Portal
PowerShell
Azure CLI

To initiate an account failover from the Azure portal, follow these steps:
1. Navigate to your storage account.
2. Under Settings , select Geo-replication . The following image shows the geo-replication and failover
status of a storage account.

3. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-
redundant storage (RA-GRS). If it's not, then select Configuration under Settings to update your
account to be geo-redundant.
4. The Last Sync Time property indicates how far the secondary is behind the primary. Last Sync Time provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the Last Sync Time property, see Check the Last Sync Time property for a storage account.
5. Select Prepare for failover .
6. Review the confirmation dialog. When you are ready, enter Yes to confirm and initiate the failover.
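If you're scripting the failover instead of using the portal, a minimal Azure CLI sketch follows; the names are placeholders:

# Check the replication status and Last Sync Time before failing over
az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --expand geoReplicationStats

# Initiate the account failover to the secondary region
az storage account failover \
    --name <storage-account> \
    --resource-group <resource-group>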
Important implications of account failover
When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are
updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the
potential impact to your storage account before you initiate a failover.
To estimate the extent of likely data loss before you initiate a failover, check the Last Sync Time property. For
more information about checking the Last Sync Time property, see Check the Last Sync Time property for a
storage account.
The time it takes to fail over after initiation can vary, but it typically takes less than one hour.
After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the
new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage
(RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost
is due to the network egress charges to re-replicate the data to the new secondary region. For additional
information, see Bandwidth Pricing Details.
After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the
new secondary region. Replication time depends on many factors, which include:
The number and size of the objects in the storage account. Many small objects can take longer than fewer
and larger objects.
The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live
traffic takes priority over geo replication.
If using Blob storage, the number of snapshots per blob.
If using Table storage, the data partitioning strategy. The replication process can't scale beyond the number
of partition keys that you use.

Next steps
Disaster recovery and storage account failover
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Tutorial: Build a highly available application with Blob storage
Back up Azure file shares
5/20/2022 • 8 minutes to read • Edit Online

This article explains how to back up Azure file shares from the Azure portal.
In this article, you'll learn how to:
Create a Recovery Services vault.
Configure backup from the Recovery Services vault
Configure backup from the file share pane
Run an on-demand backup job to create a restore point

Prerequisites
Learn about the Azure file share snapshot-based backup solution.
Ensure that the file share is present in one of the supported storage account types.
Identify or create a Recovery Services vault in the same region and subscription as the storage account that
hosts the file share.

Create a Recovery Services vault


A Recovery Services vault is a management entity that stores recovery points created over time and provides an
interface to perform backup-related operations. These operations include taking on-demand backups,
performing restores, and creating backup policies.
To create a Recovery Services vault:
1. Sign in to your subscription in the Azure portal.
2. Search for Backup center in the Azure portal, and go to the Backup Center dashboard.

3. Select +Vault from the Overview tab.

4. Select Recovery Services vault > Continue .

5. The Recovery Services vault dialog opens. Provide the following values:
Subscription : Choose the subscription to use. If you're a member of only one subscription, you'll
see that name. If you're not sure which subscription to use, use the default (suggested)
subscription. There are multiple choices only if your work or school account is associated with
more than one Azure subscription.
Resource group : Use an existing resource group or create a new one. To see the list of available
resource groups in your subscription, select Use existing , and then select a resource from the
dropdown list. To create a new resource group, select Create new and enter the name. For more
information about resource groups, see Azure Resource Manager overview.
Vault name : Enter a friendly name to identify the vault. The name must be unique to the Azure
subscription. Specify a name that has at least 2 but not more than 50 characters. The name must
start with a letter and consist only of letters, numbers, and hyphens.
Region : Select the geographic region for the vault. For you to create a vault to help protect any
data source, the vault must be in the same region as the data source.

IMPORTANT
If you're not sure of the location of your data source, close the dialog. Go to the list of your resources in
the portal. If you have data sources in multiple regions, create a Recovery Services vault for each region.
Create the vault in the first location before you create the vault for another location. There's no need to
specify storage accounts to store the backup data. The Recovery Services vault and Azure Backup handle
that automatically.

6. After you provide the values, select Review + create .

7. When you're ready to create the Recovery Services vault, select Create .

8. It can take a while to create the Recovery Services vault. Monitor the status notifications in the
Notifications area at the upper-right corner of the portal. After your vault is created, it's visible in the list
of Recovery Services vaults. If you don't see your vault, select Refresh .
Configure backup from the Recovery Services vault
To configure backup for multiple file shares from the Recovery Services vault pane, follow these steps:
1. In the Azure portal, go to Backup center and click +Backup .

2. Select Azure Files (Azure Storage) as the datasource type, select the vault that you wish to protect the
file shares with, and then click Continue .

3. Click Select to select the storage account that contains the file shares to be backed up.
The Select Storage Account Pane opens on the right, listing a set of discovered supported storage
accounts. They're either associated with this vault or present in the same region as the vault, but not yet
associated to any Recovery Services vault.
4. From the list of discovered storage accounts, select an account, and select OK .

NOTE
If a storage account is present in a different region than the vault, it won't be present in the list of discovered
storage accounts.

5. The next step is to select the file shares you want to back up. Select the Add button in the FileShares to
Backup section.
6. The Select File Shares context pane opens on the right. Azure searches the storage account for file
shares that can be backed up. If you recently added your file shares and don't see them in the list, allow
some time for the file shares to appear.
7. From the Select File Shares list, select one or more of the file shares you want to back up. Select OK .
8. To choose a backup policy for your file share, you have three options:
Choose the default policy.
This option allows you to enable daily backup that will be retained for 30 days. If you don’t have an
existing backup policy in the vault, the backup pane opens with the default policy settings. If you
want to choose the default settings, you can directly select Enable backup .
Create a new policy
a. To create a new backup policy for your file share, select the link text below the drop-down
list in the Backup Policy section.
b. Follow steps 3-7 in the Create a new policy section.
c. After defining all attributes of the policy, click OK .

Choose one of the existing backup policies


To choose one of the existing backup policies for configuring protection, select the desired policy
from the Backup policy drop-down list.
9. Select Enable Backup to start protecting the file share.
After you set a backup policy, a snapshot of the file shares is taken at the scheduled time. The recovery point is
also retained for the chosen period.

Configure backup from the file share pane


The following steps explain how you can configure backup for individual file shares from the respective file
share pane:
1. In the Azure portal, open the storage account hosting the file share you want to back up.
2. Once in the storage account, select the tile labeled File shares . You can also navigate to File shares via
the table of contents for the storage account.
3. In the file share listing, you should see all the file shares present in the storage account. Select the file
share you want to back up.

4. Select Backup under the Operations section of the file share pane. The Configure backup pane will
load on the right.
5. For the Recovery Services vault selection, do one of the following:
If you already have a vault, select the Select existing Recovery Services vault radio button, and
choose one of the existing vaults from Vault Name drop down menu.

If you don't have a vault, select the Create new Recovery Services vault radio button. Specify a
name for the vault. It's created in the same region as the file share. By default, the vault is created
in the same resource group as the file share. If you want to choose a different resource group,
select Create New link below the Resource Type drop down and specify a name for the
resource group. Select OK to continue.
IMPORTANT
If the storage account is registered with a vault, or there are a few protected shares within the storage account hosting the file share you're trying to protect, the Recovery Services vault name will be pre-populated and you won't be allowed to edit it. Learn more here.

6. For the Backup Policy selection, do one of the following:


Leave the default policy. It will schedule daily backups with a retention of 30 days.
Select an existing backup policy, if you have one, from the Backup Policy drop-down menu.

Create a new policy with daily/weekly/monthly/yearly retention according to your requirement.


a. Select the Create a new policy link text.
b. Follow steps 3-7 in the Create a new policy section.
c. After defining all attributes of the policy, click OK .
7. Select Enable backup to start protecting the file share.

8. You can track the configuration progress in the portal notifications, or by monitoring the backup jobs
under the vault you're using to protect the file share.
9. After the completion of the configure backup operation, select Backup under the Operations section of
the file share pane. The context pane listing Vault Essentials will load on the right. From there, you can
trigger on-demand backup and restore operations.

Run an on-demand backup job


Occasionally, you might want to generate a backup snapshot, or recovery point, outside of the times scheduled
in the backup policy. A common reason to generate an on-demand backup is right after you've configured the
backup policy. Based on the schedule in the backup policy, it might be hours or days until a snapshot is taken. To
protect your data until the backup policy engages, initiate an on-demand backup. Creating an on-demand
backup is often required before you make planned changes to your file shares.
From Backup center
1. Go to Backup center and click Backup Instances from the menu.
Filter for Azure Files (Azure Storage) as the datasource type.
2. Select the item for which you want to run an on-demand backup job.
3. In the Backup Item menu, select Backup now . Because this backup job is on demand, there's no
retention policy associated with the recovery point.

4. The Backup Now pane opens. Specify the last day you want to retain the recovery point. You can have a
maximum retention of 10 years for an on-demand backup.

5. Select OK to confirm the on-demand backup job that runs.


6. Monitor the portal notifications to keep a track of backup job run completion.
To monitor the job progress in the Backup center dashboard, select Backup center -> Backup Jobs -
> In progress .
From the file share pane
1. Open the Overview pane of the file share for which you want to take an on-demand backup.
2. Select Backup under the Operations section. The context pane listing Vault Essentials will load on the right. Select Backup Now to take an on-demand backup.

3. The Backup Now pane opens. Specify the retention for the recovery point. You can have a maximum
retention of 10 years for an on-demand backup.

4. Select OK to confirm.

NOTE
Azure Backup locks the storage account when you configure protection for any file share in the corresponding account.
This provides protection against accidental deletion of a storage account with backed up file shares.

Best practices
Don't delete snapshots created by Azure Backup. Deleting snapshots can result in loss of recovery points
and/or restore failures.
Don't remove the lock taken on the storage account by Azure Backup. If you delete the lock, your storage
account will be prone to accidental deletion and if it's deleted, you'll lose your snapshots or backups.

Next steps
Learn how to:
Restore Azure file shares
Manage Azure file share backups
Back up Azure file shares with Azure CLI
5/20/2022 • 4 minutes to read • Edit Online

The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article details how to back up Azure file shares with Azure CLI.
You can also perform these steps via Azure PowerShell or the Azure portal.
By the end of this tutorial, you'll learn how to perform the operations below with Azure CLI:
Create a Recovery Services vault
Enable backup for Azure file shares
Trigger an on-demand backup for file shares

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.

Create a Recovery Services vault


A Recovery Services vault is an entity that gives you a consolidated view and management capability across all
backup items. When the backup job for a protected resource runs, it creates a recovery point inside the
Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
Follow these steps to create a Recovery Services vault:
1. A vault is placed in a resource group. If you don’t have an existing resource group, create a new one with
az group create . In this tutorial, we create the new resource group azurefiles in the East US region.

az group create --name AzureFiles --location eastus --output table


Location Name
---------- ----------
eastus AzureFiles

2. Use the az backup vault create cmdlet to create the vault. Specify the same location for the vault as was
used for the resource group.
The following example creates a Recovery Services vault named azurefilesvault in the East US region.

az backup vault create --resource-group azurefiles --name azurefilesvault --location eastus --output table

Location    Name             ResourceGroup
----------  ---------------  ---------------
eastus      azurefilesvault  azurefiles

Enable backup for Azure file shares


This section assumes that you already have an Azure file share for which you want to configure backup. If you
don't have one, create an Azure file share using the az storage share create command.
To enable backup for file shares, you need to create a protection policy that defines when a backup job runs and
how long recovery points are stored. You can create a backup policy using the az backup policy create cmdlet.
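If the vault already contains policies (such as the schedule1 policy used below), you can list them first to confirm the policy name; a quick sketch against this tutorial's vault:

az backup policy list --vault-name azurefilesvault --resource-group azurefiles --output table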
The following example uses the az backup protection enable-for-azurefileshare cmdlet to enable backup for the azurefiles file share in the afsaccount storage account using the schedule1 backup policy:

az backup protection enable-for-azurefileshare --vault-name azurefilesvault --resource-group azurefiles --policy-name schedule1 --storage-account afsaccount --azure-file-share azurefiles --output table

Name ResourceGroup
------------------------------------ ---------------
0caa93f4-460b-4328-ac1d-8293521dd928 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your enable backup operation. To track status of the job, use the az backup job show cmdlet.
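For example, to check on the job by using the Name value from the output above (a sketch; substitute your own job name):

az backup job show --vault-name azurefilesvault --resource-group azurefiles --name 0caa93f4-460b-4328-ac1d-8293521dd928 --output table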

Trigger an on-demand backup for file share


If you want to trigger an on-demand backup for your file share instead of waiting for the backup policy to run
the job at the scheduled time, use the az backup protection backup-now cmdlet.
You need to define the following parameters to trigger an on-demand backup:
--container-name is the name of the storage account hosting the file share. To retrieve the name or
friendly name of your container, use the az backup container list command.
--item-name is the name of the file share for which you want to trigger an on-demand backup. To retrieve
the name or friendly name of your backed-up item, use the az backup item list command.
--retain-until specifies the date until when you want to retain the recovery point. The value should be set in
UTC time format (dd-mm-yyyy).
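For example, the container and item names can be retrieved as follows (a sketch against this tutorial's vault):

az backup container list --vault-name azurefilesvault --resource-group azurefiles --backup-management-type azurestorage --output table
az backup item list --vault-name azurefilesvault --resource-group azurefiles --backup-management-type azurestorage --workload-type azurefileshare --output table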
The following example triggers an on-demand backup for the azurefiles fileshare in the afsaccount storage
account with retention until 20-01-2020.
az backup protection backup-now --vault-name azurefilesvault --resource-group azurefiles --container-name
"StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --retain-until 20-
01-2020 --output table

Name ResourceGroup
------------------------------------ ---------------
9f026b4f-295b-4fb8-aae0-4f058124cb12 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your “on-demand backup” operation. To track the status of a job, use the az backup job show cmdlet.

Next steps
Learn how to Restore Azure file shares with CLI
Learn how to Manage Azure file share backups with CLI
Back up an Azure file share by using PowerShell
5/20/2022 • 9 minutes to read

This article describes how to use Azure PowerShell to back up an Azure Files file share through an Azure Backup
Recovery Services vault.
This article explains how to:
Set up PowerShell and register the Recovery Services provider.
Create a Recovery Services vault.
Configure backup for an Azure file share.
Run a backup job.

Before you start


Learn more about Recovery Services vaults.
Review the Az.RecoveryServices cmdlet reference in the Azure library.
Review the following PowerShell object hierarchy for Recovery Services:

Set up PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

NOTE
Azure PowerShell currently doesn't support backup policies with an hourly schedule. Use the Azure portal to leverage this feature. Learn more

Set up PowerShell as follows:


1. Download the latest version of Azure PowerShell.
NOTE
The minimum PowerShell version required for backup of Azure file shares is Az.RecoveryServices 2.6.0. Using the
latest version, or at least the minimum version, helps you avoid issues with existing scripts. Install the minimum
version by using the following PowerShell command:

Install-module -Name Az.RecoveryServices -RequiredVersion 2.6.0

2. Find the PowerShell cmdlets for Azure Backup by using this command:

Get-Command *azrecoveryservices*

3. Review the aliases and cmdlets for Azure Backup, Azure Site Recovery, and the Recovery Services vault.
Here's an example of what you might see. It's not a complete list of cmdlets.
4. Sign in to your Azure account by using Connect-AzAccount .
5. On the webpage that appears, you're prompted to enter your account credentials.
Alternatively, you can include your account credentials as a parameter in the Connect-AzAccount
cmdlet by using -Credential .
If you're a CSP partner working on behalf of a tenant, specify the customer as a tenant. Use their tenant ID
or tenant primary domain name. An example is Connect-AzAccount -Tenant "fabrikam.com" .
6. Associate the subscription that you want to use with the account, because an account can have several
subscriptions:
Select-AzSubscription -SubscriptionName $SubscriptionName

7. If you're using Azure Backup for the first time, use the Register-AzResourceProvider cmdlet to register
the Azure Recovery Services provider with your subscription:

Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"

8. Verify that the providers registered successfully:

Get-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"

9. In the command output, verify that RegistrationState changes to Registered . If it doesn't, run the
Register-AzResourceProvider cmdlet again.

Create a Recovery Services vault


The Recovery Services vault is a Resource Manager resource, so you must place it in a resource group. You can
use an existing resource group, or you can create a resource group by using the New-AzResourceGroup
cmdlet. When you create a resource group, specify the name and location for it.
Follow these steps to create a Recovery Services vault:
1. If you don't have an existing resource group, create a new one by using the New-AzResourceGroup
cmdlet. In this example, we create a resource group in the West US region:

New-AzResourceGroup -Name "test-rg" -Location "West US"

2. Use the New-AzRecoveryServicesVault cmdlet to create the vault. Specify the same location for the vault
that you used for the resource group.

New-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "West US"

View the vaults in a subscription


To view all vaults in the subscription, use Get-AzRecoveryServicesVault:

Get-AzRecoveryServicesVault

The output is similar to the following. Note that the output provides the associated resource group and location.

Name : Contoso-vault
ID : /subscriptions/1234
Type : Microsoft.RecoveryServices/vaults
Location : WestUS
ResourceGroupName : Contoso-docs-rg
SubscriptionId : 1234-567f-8910-abc
Properties : Microsoft.Azure.Commands.RecoveryServices.ARSVaultProperties

Set the vault context


Store the vault object in a variable, and set the vault context.
Many Azure Backup cmdlets require the Recovery Services vault object as an input, so it's convenient to store
the vault object in a variable.
The vault context is the type of data protected in the vault. Set it by using Set-AzRecoveryServicesVaultContext.
After the context is set, it applies to all subsequent cmdlets.
The following example sets the vault context for testvault :

Get-AzRecoveryServicesVault -Name "testvault" | Set-AzRecoveryServicesVaultContext

Fetch the vault ID


We plan to deprecate the vault context setting in accordance with Azure PowerShell guidelines. Instead, you can
store or fetch the vault ID, and pass it to relevant commands. If you haven't set the vault context or you want to
specify the command to run for a certain vault, pass the vault ID as -vaultID to all relevant commands as
follows:

$vaultID = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault" | select -ExpandProperty ID
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType "AzureFiles" -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultID $vaultID

Configure a backup policy


A backup policy specifies the schedule for backups, and how long backup recovery points should be kept.
A backup policy is associated with at least one retention policy. A retention policy defines how long a recovery
point is kept before it's deleted. You can configure backups with daily, weekly, monthly, or yearly retention. With
the multiple backups policy, you can also configure hourly retention.
Choose a policy type:
Daily backup policy
Multiple backups policy

Here are some cmdlets for backup policies:


View the default backup policy retention by using Get-AzRecoveryServicesBackupRetentionPolicyObject.
View the default backup policy schedule by using Get-AzRecoveryServicesBackupSchedulePolicyObject.
Create a new backup policy by using New-AzRecoveryServicesBackupProtectionPolicy. You enter the
schedule and retention policy objects as input.
By default, a start time is defined in the schedule policy object. Use the following example to change the start time to the desired start time. The desired start time should be in Coordinated Universal Time (UTC). The example assumes that the desired start time is 01:30 AM UTC for daily backups.

$schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureFiles"
$UtcTime = Get-Date -Date "2019-03-20 01:30:00Z"
$UtcTime = $UtcTime.ToUniversalTime()
$schPol.ScheduleRunTimes[0] = $UtcTime

IMPORTANT
You need to provide the start time in 30-minute multiples only. In the preceding example, it can be only "01:00:00" or
"02:30:00". The start time can't be "01:15:00".
The following example stores the schedule policy and the retention policy in variables. It then uses those
variables as parameters for a new policy (NewAFSPolicy ). NewAFSPolicy takes a daily backup and retains it
for 30 days.

$schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureFiles"
$retPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType "AzureFiles"
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType "AzureFiles" -RetentionPolicy $retPol -SchedulePolicy $schPol

The output is similar to the following:

Name          WorkloadType   BackupManagementType   BackupTime              DaysOfWeek
----          ------------   --------------------   ----------              ----------
NewAFSPolicy  AzureFiles     AzureStorage           10/24/2019 1:30:00 AM

Enable backup
After you define the backup policy, you can enable protection for the Azure file share by using the policy.
Retrieve a backup policy
You fetch the relevant policy object by using Get-AzRecoveryServicesBackupProtectionPolicy. Use this cmdlet to
view the policies associated with a workload type, or to get a specific policy.
Retrieve a policy for a workload type
The following example retrieves policies for the workload type AzureFiles :

Get-AzRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureFiles"

The output is similar to the following:

Name       WorkloadType   BackupManagementType   BackupTime              DaysOfWeek
----       ------------   --------------------   ----------              ----------
dailyafs   AzureFiles     AzureStorage           1/10/2018 12:30:00 AM

NOTE
The time zone of the BackupTime field in PowerShell is in UTC. When the backup time is shown in the Azure portal, the
time is adjusted to your local time zone.

Retrieve a specific policy


The following command retrieves the backup policy named dailyafs:

$afsPol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "dailyafs"

Enable protection and apply the policy


Enable protection by using Enable-AzRecoveryServicesBackupProtection. After the policy is associated with the
vault, backups are triggered in accordance with the policy schedule.
The following example enables protection for the Azure file share testAzureFS in storage account
testStorageAcct , with the policy dailyafs :
Enable-AzRecoveryServicesBackupProtection -StorageAccountName "testStorageAcct" -Name "testAzureFS" -Policy $afsPol

The command waits until the configure-protection job is finished and gives an output that's similar to the
following example:

WorkloadName   Operation         Status      StartTime               EndTime                 JobID
------------   ---------         ------      ---------               -------                 -----
testAzureFS    ConfigureBackup   Completed   11/12/2018 2:15:26 PM   11/12/2018 2:16:11 PM   ec7d4f1d-40bd-46a4-9edb-3193c41f6bf6

For more information on how to get a list of file shares for a storage account, see this article.
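As a quick check, you can also list the file shares in a storage account with the Az.Storage module; a minimal sketch using the names from this article:

# List the file shares in the storage account that hosts the backed-up share
Get-AzRmStorageShare -ResourceGroupName "test-rg" -StorageAccountName "testStorageAcct"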

Important notice: Backup item identification


This section outlines an important change in backups of Azure file shares in preparation for general availability.
When you enable backup for an Azure file share, you provide the file-share name as the entity name, and a backup item is created. The backup item's name is a unique identifier that the Azure Backup service creates. Usually the identifier is a user-friendly name. But to handle the soft-delete scenario, where a file share can be deleted and another file share created with the same name, the unique identity of an Azure file share is now an ID.
To know the unique ID of each item, run the Get-AzRecoveryServicesBackupItem command with the relevant filters for backupManagementType and WorkloadType to get all the relevant items. Then observe the name field in the returned PowerShell object/response.
We recommend that you list items and then retrieve their unique name from the name field in the response. Use this value to filter the items with the Name parameter. Otherwise, use the FriendlyName parameter to retrieve the item by its friendly name.
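A minimal sketch of this lookup, assuming the vault ID variable from earlier (the filter values match this article's examples):

# List all Azure file share backup items in the vault and inspect the unique Name
# alongside the user-friendly FriendlyName
Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -VaultId $vaultID | Select-Object Name, FriendlyName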

IMPORTANT
Make sure that PowerShell is upgraded to the minimum version (Az.RecoveryServices 2.6.0) for backups of Azure file shares. With this version, the FriendlyName filter is available for the Get-AzRecoveryServicesBackupItem command. Pass the name of the Azure file share to the FriendlyName parameter. If you pass the name of the file share to the Name parameter, this version throws a warning to pass the name to the FriendlyName parameter.
Not installing the minimum version might result in a failure of existing scripts. Install the minimum version of PowerShell
by using the following command:

Install-module -Name Az.RecoveryServices -RequiredVersion 2.6.0

Trigger an on-demand backup


Use Backup-AzRecoveryServicesBackupItem to run an on-demand backup for a protected Azure file share:
1. Retrieve the storage account from the container in the vault that holds your backup data by using Get-
AzRecoveryServicesBackupContainer.
2. To start a backup job, obtain information about the Azure file share by using Get-
AzRecoveryServicesBackupItem.
3. Run an on-demand backup by using Backup-AzRecoveryServicesBackupItem.
Run the on-demand backup as follows:

$afsContainer = Get-AzRecoveryServicesBackupContainer -FriendlyName "testStorageAcct" -ContainerType AzureStorage
$afsBkpItem = Get-AzRecoveryServicesBackupItem -Container $afsContainer -WorkloadType "AzureFiles" -FriendlyName "testAzureFS"
$job = Backup-AzRecoveryServicesBackupItem -Item $afsBkpItem

The command returns a job with an ID to be tracked, as shown in the following example:

WorkloadName   Operation   Status      StartTime               EndTime                 JobID
------------   ---------   ------      ---------               -------                 -----
testAzureFS    Backup      Completed   11/12/2018 2:42:07 PM   11/12/2018 2:42:11 PM   8bdfe3ab-9bf7-4be6-83d6-37ff1ca13ab6

Azure file share snapshots are used while the backups are taken. Usually the job finishes by the time the
command returns this output.
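If the job is still running, you can track it with the job object returned above; a sketch:

# Wait for the on-demand backup job to finish, then re-check its status
Wait-AzRecoveryServicesBackupJob -Job $job
Get-AzRecoveryServicesBackupJob -JobId $job.JobId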

Next steps
Learn about backing up Azure Files in the Azure portal.
Refer to the sample script on GitHub for using an Azure Automation runbook to schedule backups.
Back up an Azure file share using Azure Backup via REST API
5/20/2022 • 9 minutes to read

This article describes how to back up an Azure File share using Azure Backup via REST API.
This article assumes you've already created a Recovery Services vault and policy for configuring backup for your
file share. If you haven’t, refer to the create vault and create policy REST API tutorials for creating new vaults and
policies.
For this article, we'll use the following resources:
Recover ySer vicesVault : azurefilesvault
Policy: schedule1
Resource group : azurefiles
Storage Account : testvault2
File Share : testshare

Configure backup for an unprotected Azure file share using REST API
Discover storage accounts with unprotected Azure file shares
The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up
to the Recovery Services vault. This is triggered using the refresh operation. It's an asynchronous POST
operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current
subscription and 'caches' them. Once the file share is 'cached', Recovery services can access the file share and
protect it.

POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/provider
s/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-
version=2016-12-01&$filter={$filter}

The POST URI has {subscriptionId} , {vaultName} , {vaultresourceGroupName} , and {fabricName} parameters. In
our example, the value for the different parameters will be as follows:
{fabricName} is Azure
{vaultName} is azurefilesvault
{vaultresourceGroupName} is azurefiles
$filter=backupManagementType eq 'AzureStorage'
Since all the required parameters are given in the URI, there's no need for a separate request body.

POST https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/refreshContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
Responses to the refresh operation
The 'refresh' operation is an asynchronous operation. It means this operation creates another operation that
needs to be tracked separately.
It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation
completes.
Example responses to the refresh operation

Once the POST request is submitted, a 202 (Accepted) response is returned.

HTTP/1.1 202 Accepted


'Pragma': 'no-cache'
'Expires': '-1'
'Location': ‘https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/ResourceGroups
/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/operationResults
/
cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01’
'Retry-After': '60'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'x-ms-client-request-id': ‘3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61’
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-reads': '11996'
'x-ms-correlation-request-id': '6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'x-ms-routing-request-id': CENTRALUSEUAP:20200203T091326Z:6cc12ceb-90a2-430d-a1ec-9b6b6fdea92b'
'Date': 'Mon, 03 Feb 2020 09:13:25 GMT'

Track the resulting operation using the "Location" header with a simple GET command

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/operationResults/cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01

Once all the Azure Storage accounts are discovered, the GET command returns a 204 (No Content) response.
The vault is now able to discover any storage account with file shares that can be backed up within the
subscription.

HTTP/1.1 204 NoContent


Cache-Control : no-cache
Pragma : no-cache
X-Content-Type-Options : nosniff
x-ms-request-id : d9bdb266-8349-4dbd-9688-de52f07648b2
x-ms-client-request-id : 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security : max-age=31536000; includeSubDomains
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-reads: 11933
x-ms-correlation-request-id : d9bdb266-8349-4dbd-9688-de52f07648b2
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105304Z:d9bdb266-8349-4dbd-9688-de52f07648b2
Date : Mon, 27 Jan 2020 10:53:04 GMT

Get List of storage accounts with file shares that can be backed up with Recovery Services vault
To confirm that “caching” is done, list all the storage accounts in the subscription with file shares that can be
backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is
done using the GET ProtectableContainers operation.
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectableContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'

The GET URI has all the required parameters. No additional request body is needed.
Example of response body:

"value": [

"id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers
/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/

protectableContainers/StorageContainer;Storage;AzureFiles;testvault2",

"name": "StorageContainer;Storage;AzureFiles;testvault2",

"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectableContainers",

"properties": {

"friendlyName": "testvault2",

"backupManagementType": "AzureStorage",

"protectableContainerType": "StorageContainer",

"healthStatus": "Healthy",

"containerId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/
AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2"

Since we can locate the testvault2 storage account in the response body with the friendly name, the refresh
operation performed above was successful. The Recovery Services vault can now successfully discover storage
accounts with unprotected files shares in the same subscription.
Register storage account with Recovery Services vault
This step is only needed if you didn't register the storage account with the vault earlier. You can register the storage account via the ProtectionContainers-Register operation.

PUT
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}?
api-version=2016-12-01

Set the variables for the URI as follows:


{resourceGroupName} - azurefiles
{fabricName} - Azure
{vaultName} - azurefilesvault
{containerName} - This is the name attribute in the response body of the GET ProtectableContainers
operation. In our example, it's StorageContainer;Storage;AzureFiles;testvault2

NOTE
Always take the name attribute of the response and fill it in this request. Don't hard-code or create the container-name
format. If you create or hard-code it, the API call will fail if the container-name format changes in the future.

PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2?api-version=2016-12-01

The create request body is as follows:

"properties": {

"containerType": "StorageContainer",

"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",

"resourceGroup": "AzureFiles",

"friendlyName": "testvault2",

"backupManagementType": "AzureStorage"

}
}

For the complete list of definitions of the request body and other details, refer to ProtectionContainers-Register.
This is an asynchronous operation and returns two responses: "202 Accepted" when the operation is accepted
and "200 OK" when the operation is complete. To track the operation status, use the location header to get the
latest status of the operation.

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/operationresults/1a3c8ee7-
e0e5-43ed-b8b3-73cc992b6db9?api-version=2016-12-01

Example of response body when operation is complete:


{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/
protectionContainers/StorageContainer;Storage;AzureFiles;testvault2",
"name": "StorageContainer;Storage;AzureFiles;testvault2",
"properties": {
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"protectedItemCount": 0,
"friendlyName": "testvault2",
"backupManagementType": "AzureStorage",
"registrationStatus": "Registered",
"healthStatus": "Healthy",
"containerType": "StorageContainer",
"protectableObjectType": "StorageContainer"
}
}

You can verify whether the registration was successful from the value of the registrationStatus parameter in the response body. In our case, it shows the status as Registered for testvault2, so the registration operation was successful.
Inquire all unprotected files shares under a storage account
You can inquire about protectable items in a storage account using the ProtectionContainers-Inquire operation. It's an asynchronous operation whose results should be tracked using the location header.

POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/i
nquire?api-version=2016-12-01

Set the variables for the above URI as follows:


{vaultName} - azurefilesvault
{fabricName} - Azure
{containerName}- Refer to the name attribute in the response body of the GET ProtectableContainers
operation. In our example, it's StorageContainer;Storage;AzureFiles;testvault2

https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/inquire?api-version=2016-12-
01

Once the request is successful, it returns the status code OK:

Cache-Control : no-cache
Pragma : no-cache
X-Content-Type-Options: nosniff
x-ms-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-client-request-id : 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security: max-age=31536000; includeSubDomains
Server : Microsoft-IIS/10.0
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-reads: 11932
x-ms-correlation-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105305Z:68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
Date : Mon, 27 Jan 2020 10:53:05 GMT
Select the file share you want to back up
You can list all protectable items under the subscription and locate the desired file share to be backed up using
the GET backupprotectableItems operation.

GET
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter={$filter}

Construct the URI as follows:


{vaultName} - azurefilesvault
{$filter} - backupManagementType eq 'AzureStorage'

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPro
tectableItems?$filter=backupManagementType eq 'AzureStorage'&api-version=2016-12-01

Sample response:
Status Code:200

{
"value": [
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afaccount1/protectableItems/azurefilesha
re;azurefiles1",
"name": "azurefileshare;azurefiles1",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1",
"parentContainerFriendlyName": "afaccount1",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "azurefiles1",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afsaccount/protectableItems/azurefilesha
re;afsresource",
"name": "azurefileshare;afsresource",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"parentContainerFriendlyName": "afsaccount",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "afsresource",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefilesha
re;testshare",
"name": "azurefileshare;testshare",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"parentContainerFriendlyName": "testvault2",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "testshare",
"protectionState": "NotProtected"
}
}
]
}

The response contains the list of all unprotected file shares and contains all the information required by the
Azure Recovery Service to configure the backup.
Enable backup for the file share
After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more
about existing policies in the vault, refer to list Policy API. Then select the relevant policy by referring to the
policy name. To create policies, refer to create policy tutorial.
Enabling protection is an asynchronous PUT operation that creates a "protected item".

PUT
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupName}/provider
s/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerNa
me}/protectedItems/{protectedItemName}?api-version=2019-05-13

Set the {containerName} and {protectedItemName} variables using the id attribute in the response body of the GET backupprotectableitems operation.
In our example, the ID of file share we want to protect is:

"/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefilesha
re;testshare

{containername} - storagecontainer;storage;azurefiles;testvault2
{protectedItemName} - azurefileshare;testshare
Or you can refer to the name attribute of the protection container and protectable item responses.

NOTE
Always take the name attribute of the response and fill it in this request. Don't hard-code or create the container-name
format or protected item name format. If you create or hard-code it, the API call will fail if the container-name format or
protected item name format changes in the future.

PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare
;testshare?api-version=2016-12-01

Create a request body:


The following request body defines properties required to create a protected item.

{
"properties": {
"protectedItemType": "AzureFileShareProtectedItem",
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"policyId": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPol
icies/schedule1"
}
}

The sourceResourceId is the parentContainerFabricId in the response of GET backupprotectableItems.


Sample Response
The creation of a protected item is an asynchronous operation, which creates another operation that needs to be
tracked. It returns two responses: 202 (Accepted) when another operation is created and 200 (OK) when that
operation completes.
Once you submit the PUT request for protected item creation or update, the initial response is 202 (Accepted)
with a location header.

HTTP/1.1 202 Accepted


Cache-Control : no-cache
Pragma : no-cache
Location : https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare
;testshare/operationResults/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
Retry-After : 60
Azure-AsyncOperation : https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare
;testshare/operationResults/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
X-Content-Type-Options : nosniff
x-ms-request-id : b55527fa-f473-4f09-b169-9cc3a7a39065
x-ms-client-request-id: 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security : max-age=31536000; includeSubDomains
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-writes: 1198
x-ms-correlation-request-id : b55527fa-f473-4f09-b169-9cc3a7a39065
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105412Z:b55527fa-f473-4f09-b169-9cc3a7a39065
Date : Mon, 27 Jan 2020 10:54:12 GMT

Then track the resulting operation using the location header or Azure-AsyncOperation header with a GET
command.

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01

Once the operation completes, it returns 200 (OK) with the protected item content in the response body.
Sample Response Body:

{
"id": "c3a52d1d-0853-4211-8141-477c65740264",
"name": "c3a52d1d-0853-4211-8141-477c65740264",
"status": "Succeeded",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:10:48.296012Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"
}
}

This confirms that protection is enabled for the file share and the first backup will be triggered according to the
policy schedule.

Trigger an on-demand backup for file share


Once an Azure file share is configured for backup, backups run according to the policy schedule. You can wait for
the first scheduled backup or trigger an on-demand backup anytime.
Triggering an on-demand backup is a POST operation.

POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/backup?api-version=2016-12-01

{containerName} and {protectedItemName} are as constructed above while enabling backup. For our example,
this translates to:

POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/backup?api-version=2017-07-01

Create request body


To trigger an on-demand backup, the following are the components of the request body.

Name         Type                          Description
-----------  ----------------------------  --------------------------------
properties   AzureFileShareBackupRequest   BackupRequestResource properties

For the complete list of definitions of the request body and other details, refer to trigger backups for protected
items REST API document.
Request Body example

"properties":{

"objectType":"AzureFileShareBackupRequest",
"recoveryPointExpiryTimeInUTC":"2020-03-07T18:29:59.000Z"
}

Responses to the on-demand backup operation


Triggering an on-demand backup is an asynchronous operation. It means this operation creates another
operation that needs to be tracked separately.
It returns two responses: 202 (Accepted) when another operation is created and 200 (OK) when that operation
completes.
Example responses to the on-demand backup operation
Once you submit the POST request for an on-demand backup, the initial response is 202 (Accepted) with a Location header or Azure-AsyncOperation header.
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': https://management.azure.com/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/operationResults/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2017-07-01
'Retry-After': '60'
'Azure-AsyncOperation': https://management.azure.com/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/operationsStatus/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2017-07-01
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'x-ms-client-request-id': 'a644712a-4895-11ea-ba57-0a580af42708, a644712a-4895-11ea-ba57-0a580af42708'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1199'
'x-ms-correlation-request-id': '2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T040339Z:2e03b8d4-66b1-48cf-8094-aa8bff57e8fb'
'Date': 'Thu, 06 Feb 2020 04:03:38 GMT'
'Content-Length': '0'

Then track the resulting operation using the location header or Azure-AsyncOperation header with a GET
command.

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2016-12-01

Once the operation completes, it returns 200 (OK) with the ID of the resulting backup job in the response body.
Sample response body

{
"id": "dc62d524-427a-4093-968d-e951c0a0726e",
"name": "dc62d524-427a-4093-968d-e951c0a0726e",
"status": "Succeeded",
"startTime": "2020-02-06T11:06:02.1327954Z",
"endTime": "2020-02-06T11:06:02.1327954Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "39282261-cb52-43f5-9dd0-ffaf66beeaef"
}
}

Since the backup job is a long-running operation, it needs to be tracked as explained in the monitor jobs using REST API document.
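As a sketch of what that tracking looks like, the jobId returned above can be polled on the backupJobs endpoint (see the jobs REST API reference for the exact contract and supported api-version):

GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJobs/39282261-cb52-43f5-9dd0-ffaf66beeaef?api-version=2019-05-13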

Next steps
Learn how to restore Azure file shares using REST API.
Restore Azure file shares
5/20/2022 • 5 minutes to read

This article explains how to use the Azure portal to restore an entire file share or specific files from a restore
point created by Azure Backup.
In this article, you'll learn how to:
Restore a full Azure file share.
Restore individual files or folders.
Track the restore operation status.

Steps to perform a restore operation


To perform a restore operation, follow these steps.
Select the file share to restore
1. In the Azure portal, go to Backup center and click Restore .

2. Select Azure Files (Azure Storage) as the datasource type, select the file share that you wish to restore,
and then click Continue .
Full share recovery
You can use this restore option to restore the complete file share in the original location or an alternate location.
1. After you select Continue in the previous step, the Restore pane opens. To select the restore point you
want to use for performing the restore operation, choose the Select link text below the Restore Point
text box.

2. The Select Restore Point context pane opens on the right, listing the restore points available for the
selected file share. Select the restore point you want to use to perform the restore operation, and select
OK .
NOTE
By default, the Select Restore Point pane lists restore points from the last 30 days. If you want to look at the restore points created during a specific duration, specify the range by selecting the appropriate Start Time and End Time and select the Refresh button.

3. The next step is to choose the Restore Location. In the Recovery Destination section, specify where or how to restore the data. Select one of the following two options by using the toggle button:
Original Location: Restore the complete file share to the same location as the original source.
Alternate Location: Restore the complete file share to an alternate location and keep the original file share as is.
Restore to the original location (full share recovery)
1. Select Original Location as the Recovery Destination, and select whether to skip or overwrite if there are conflicts, by choosing the appropriate option from the In case of Conflicts drop-down list.
2. Select Restore to start the restore operation.

Restore to an alternate location (full share recovery)

1. Select Alternate Location as the Recovery Destination.
2. Select the destination storage account where you want to restore the backed-up content from the
Storage Account drop-down list.
3. The Select File Share drop-down list displays the file shares present in the storage account you selected
in step 2. Select the file share where you want to restore the backed-up contents.
4. In the Folder Name box, specify a folder name you want to create in the destination file share with the
restored contents.
5. Select whether to skip or overwrite if there are conflicts.
6. After you enter the appropriate values in all boxes, select Restore to start the restore operation.
Item-level recovery
You can use this restore option to restore individual files or folders in the original location or an alternate
location.
1. Go to Backup center and select Backup Instances from the menu, with the datasource type selected as
Azure Storage (Azure Files) .
2. Select the file share for which you want to perform an item-level recovery.
The backup item menu appears with a File Recovery option.

3. After you select File Recovery, the Restore pane opens. To select the restore point you want to use for performing the restore operation, select the Select link text below the Restore Point text box.

4. The Select Restore Point context pane opens on the right, listing the restore points available for the
selected file share. Select the restore point you want to use to perform the restore operation, and select
OK .
5. The next step is to choose the Restore Location. In the Recovery Destination section, specify where or how to restore the data. Select one of the following two options by using the toggle button:
Original Location: Restore selected files or folders to the same file share as the original source.
Alternate Location: Restore selected files or folders to an alternate location and keep the original file share contents as is.
Restore to the original location (item-level recovery)
1. Select Original Location as the Recovery Destination, and select whether to skip or overwrite if there are conflicts by choosing the appropriate option from the In case of conflicts drop-down list.

2. To select the files or folders you want to restore, select the Add File button. This will open a context pane
on the right, displaying the contents of the file share recovery point you selected for restore.

3. Select the check box that corresponds to the file or folder you want to restore, and choose Select .

4. Repeat steps 2 through 4 to select multiple files or folders for restore.


5. After you select all the items you want to restore, select Restore to start the restore operation.
Restore to an alternate location (item-level recovery)
1. Select Alternate Location as the Recovery Destination.
2. Select the destination storage account where you want to restore the backed-up content from the
Storage Account drop-down list.
3. The Select File Share drop-down list displays the file shares present in the storage account you selected
in step 2. Select the file share where you want to restore the backed-up contents.
4. In the Folder Name box, specify a folder name you want to create in the destination file share with the
restored contents.
5. Select whether to skip or overwrite if there are conflicts.
6. To select the files or folders you want to restore, select the Add File button. This will open a context pane
on the right displaying the contents of the file share recovery point you selected for restore.
7. Select the check box that corresponds to the file or folder you want to restore, and choose Select .
8. Repeat steps 6 through 8 to select multiple files or folders for restore.
9. After you select all the items you want to restore, select Restore to start the restore operation.
Track a restore operation
After you trigger the restore operation, the backup service creates a job for tracking. Azure Backup displays
notifications about the job in the portal. To view operations for the job, select the notifications hyperlink.

You can also monitor restore progress from the Recovery Services vault:
1. Go to Backup center and click Backup Jobs from the menu.
2. Filter for jobs for the required datasource type and job status.
3. Select the workload name that corresponds to your file share to view more details about the restore
operation, like Data Transferred and Number of Restored Files .

NOTE
Folders will be restored with original permissions if there is at least one file present in them.

NOTE
Trailing dots in any directory path can lead to failures in the restore.

Next steps
Learn how to Manage Azure file share backups.
Restore Azure file shares with the Azure CLI
5/20/2022 • 8 minutes to read

The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article explains how to restore an entire file share or specific
files from a restore point created by Azure Backup by using the Azure CLI. You can also perform these steps with
Azure PowerShell or in the Azure portal.
By the end of this article, you'll learn how to perform the following operations with the Azure CLI:
View restore points for a backed-up Azure file share.
Restore a full Azure file share.
Restore individual files or folders.

NOTE
Azure Backup now supports restoring multiple files or folders to the original or an alternate location using Azure CLI.
Refer to the Restore multiple files or folders to original or alternate location section of this document to learn more.

Prerequisites
This article assumes that you already have an Azure file share that's backed up by Azure Backup. If you don't
have one, see Back up Azure file shares with the CLI to configure backup for your file share. For this article, you
use the following resources:

File share    Storage account   Region   Details
-----------   ---------------   ------   --------------------------------------------------------
azurefiles    afsaccount        EastUS   Original source backed up by using Azure Backup
azurefiles1   afaccount1        EastUS   Destination source used for alternate location recovery

You can use a similar structure for your file shares to try out the different types of restores explained in this
article.
Prepare your environment for the Azure CLI
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.

Fetch recovery points for the Azure file share


Use the az backup recoverypoint list cmdlet to list all recovery points for the backed-up file share.
The following example fetches the list of recovery points for the azurefiles file share in the afsaccount storage
account.

az backup recoverypoint list --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --backup-management-type azurestorage --item-name "AzureFileShare;azurefiles" --workload-type azurefileshare --out table

You can also run the previous cmdlet by using the friendly name for the container and the item by providing the
following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare

az backup recoverypoint list --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --backup-management-type azurestorage --item-name azurefiles --workload-type azurefileshare --out table

The result set is a list of recovery points with time and consistency details for each restore point.

Name                Time                       Consistency
------------------  -------------------------  --------------------
932887541532871865  2020-01-05T07:08:23+00:00  FileSystemConsistent
932885927361238054  2020-01-05T07:08:10+00:00  FileSystemConsistent
932879614553967772  2020-01-04T21:33:04+00:00  FileSystemConsistent

The Name attribute in the output corresponds to the recovery point name that can be used as a value for the --rp-name parameter in recovery operations.

Full share recovery by using the Azure CLI


You can use this restore option to restore the complete file share in the original or an alternate location.
Define the following parameters to perform restore operations:
--container-name : The name of the storage account that hosts the backed-up original file share. To retrieve
the name or friendly name of your container, use the az backup container list command.
--item-name : The name of the backed-up original file share you want to use for the restore operation. To
retrieve the name or friendly name of your backed-up item, use the az backup item list command.
Restore a full share to the original location
When you restore to an original location, you don't need to specify target-related parameters. Only the --resolve-conflict parameter must be provided.
The following example uses the az backup restore restore-azurefileshare cmdlet with restore mode set to originallocation to restore the azurefiles file share in the original location. You use the recovery point 932887541532871865, which you obtained in Fetch recovery points for the Azure file share:

az backup restore restore-azurefileshare --vault-name azurefilesvault --resource-group azurefiles --rp-name 932887541532871865 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode originallocation --resolve-conflict overwrite --out table

Name ResourceGroup
------------------------------------ ---------------
6a27cc23-9283-4310-9c27-dcfb81b7b4bb azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
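For example, using the job name from the output above:

az backup job show --vault-name azurefilesvault --resource-group azurefiles --name 6a27cc23-9283-4310-9c27-dcfb81b7b4bb --output table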
Restore a full share to an alternate location
You can use this option to restore a file share to an alternate location and keep the original file share as is.
Specify the following parameters for alternate location recovery:
--target-storage-account : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
--target-file-share : The file share within the target storage account to which the backed-up content is
restored.
--target-folder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, give the target folder values as an empty string.
--resolve-conflict : Instruction if there's a conflict with the restored data. Accepts Over write or Skip .
The following example uses az backup restore restore-azurefileshare with restore mode as alternatelocation to restore the azurefiles file share in the afsaccount storage account to the azurefiles1 file share in the afaccount1 storage account.

az backup restore restore-azurefileshare --vault-name azurefilesvault --resource-group azurefiles --rp-name 932883129628959823 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode alternatelocation --target-storage-account afaccount1 --target-file-share azurefiles1 --target-folder restoredata --resolve-conflict overwrite --out table

Name ResourceGroup
------------------------------------ ---------------
babeb61c-d73d-4b91-9830-b8bfa83c349a azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.

Item-level recovery
You can use this restore option to restore individual files or folders in the original or an alternate location.
Define the following parameters to perform restore operations:
--container-name : The name of the storage account that hosts the backed-up original file share. To retrieve
the name or friendly name of your container, use the az backup container list command.
--item-name : The name of the backed-up original file share you want to use for the restore operation. To
retrieve the name or friendly name of your backed-up item, use the az backup item list command.
Specify the following parameters for the items you want to recover:
--source-file-path : The absolute path of the file to be restored within the file share, as a string. This path is the
same path used in the az storage file download or az storage file show CLI commands.
--source-file-type : Choose whether a directory or a file is selected. Accepts Directory or File.
--resolve-conflict : Instruction if there's a conflict with the restored data. Accepts Overwrite or Skip.
Restore individual files or folders to the original location
Use the az backup restore restore-azurefiles command with restore mode set to originallocation to restore specific
files or folders to their original location.
The following example restores the RestoreTest.txt file in its original location: the azurefiles file share.

az backup restore restore-azurefiles --vault-name azurefilesvault --resource-group azurefiles --rp-name 932881556234035474 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode originallocation --source-file-type file --source-file-path "Restore/RestoreTest.txt" --resolve-conflict overwrite --out table

Name ResourceGroup
------------------------------------ ---------------
df4d9024-0dcb-4edc-bf8c-0a3d18a25319 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show command.
Restore individual files or folders to an alternate location
To restore specific files or folders to an alternate location, use the az backup restore restore-azurefiles command
with restore mode set to alternatelocation and specify the following target-related parameters:
--target-storage-account : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
--target-file-share : The file share within the target storage account to which the backed-up content is
restored.
--target-folder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, pass an empty string as the target folder value.
The following example restores the RestoreTest.txt file originally present in the azurefiles file share to an
alternate location: the restoredata folder in the azurefiles1 file share hosted in the afaccount1 storage account.

az backup restore restore-azurefiles --vault-name azurefilesvault --resource-group azurefiles --rp-name 932881556234035474 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode alternatelocation --target-storage-account afaccount1 --target-file-share azurefiles1 --target-folder restoredata --resolve-conflict overwrite --source-file-type file --source-file-path "Restore/RestoreTest.txt" --out table

Name ResourceGroup
------------------------------------ ---------------
df4d9024-0dcb-4edc-bf8c-0a3d18a25319 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show command.

Restore multiple files or folders to original or alternate location


To restore multiple items, pass the value for the --source-file-path parameter as space-separated
paths of all the files or folders you want to restore.
The following example restores the Restore Test.txt and AFS Testing Report.docx files to their original location.

az backup restore restore-azurefiles --vault-name azurefilesvault --resource-group azurefiles --rp-name 932889937058317910 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode originallocation --source-file-type file --source-file-path "Restore Test.txt" "AFS Testing Report.docx" --resolve-conflict overwrite --out table

The output will be similar to the following:

Name ResourceGroup
------------------------------------ ---------------
649b0c14-4a94-4945-995a-19e2aace0305 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show command.
If you want to restore multiple items to an alternate location, use the command above with the target-related
parameters explained in the Restore individual files or folders to an alternate location section, as shown in the sketch below.
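For example, a sketch that combines the multi-item paths above with the alternate-location parameters (the target account, share, and folder are reused from the earlier alternate-location example):

az backup restore restore-azurefiles --vault-name azurefilesvault --resource-group azurefiles --rp-name 932889937058317910 --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --restore-mode alternatelocation --target-storage-account afaccount1 --target-file-share azurefiles1 --target-folder restoredata --resolve-conflict overwrite --source-file-type file --source-file-path "Restore Test.txt" "AFS Testing Report.docx" --out table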

Next steps
Learn how to Manage Azure file share backups with the Azure CLI.
Restore Azure Files with PowerShell

This article explains how to restore an entire file share, or specific files, from a restore point created by the Azure
Backup service using Azure PowerShell.
You can restore an entire file share or specific files on the share. You can restore to the original location, or to an
alternate location.

WARNING
Make sure the Az.RecoveryServices PowerShell module is upgraded to version 2.6.0 or later for AFS backups.
For more information, see the section outlining the requirement for this change.

NOTE
Azure Backup now supports restoring multiple files or folders to the original or an alternate location using PowerShell. Refer
to this section of the document to learn how.

Fetch recovery points


Use Get-AzRecoveryServicesBackupRecoveryPoint to list all recovery points for the backed-up item.
In the following script:
The variable $rp is an array of recovery points for the selected backup item from the past seven days.
The array is sorted in reverse order of time with the latest recovery point at index 0 .
Use standard PowerShell array indexing to pick the recovery point.
In the example, $rp[0] selects the latest recovery point.

$vault = Get-AzRecoveryServicesVault -ResourceGroupName "azurefiles" -Name "azurefilesvault"
$Container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -FriendlyName "afsaccount" -VaultId $vault.ID
$BackupItem = Get-AzRecoveryServicesBackupItem -Container $Container -WorkloadType AzureFiles -VaultId $vault.ID -FriendlyName "azurefiles"
$startDate = (Get-Date).AddDays(-7)
$endDate = Get-Date
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -VaultId $vault.ID -StartDate $startDate.ToUniversalTime() -EndDate $endDate.ToUniversalTime()
$rp[0] | fl

The output is similar to the following.


FileShareSnapshotUri : https://testStorageAcct.file.core.windows.net/testAzureFS?sharesnapshot=2018-11-20T00:31:04.0000000Z
RecoveryPointType : FileSystemConsistent
RecoveryPointTime : 11/20/2018 12:31:05 AM
RecoveryPointId : 86593702401459
ItemName : testAzureFS
Id : /Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testVaultRG/providers/Microsoft.RecoveryServices/vaults/testVault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;teststorageRG;testStorageAcct/protectedItems/AzureFileShare;testAzureFS/recoveryPoints/86593702401462
WorkloadType : AzureFiles
ContainerName : storage;teststorageRG;testStorageAcct
ContainerType : AzureStorage
BackupManagementType : AzureStorage

After the relevant recovery point is selected, you restore the file share or file to the original location, or to an
alternate location.

Restore an Azure file share to an alternate location


Use the Restore-AzRecoveryServicesBackupItem cmdlet to restore to the selected recovery point. Specify these
parameters to identify the alternate location:
TargetStorageAccountName : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
TargetFileShareName : The file share within the target storage account to which the backed-up content is
restored.
TargetFolder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, pass an empty string as the target folder value.
ResolveConflict : Instruction if there's a conflict with the restored data. Accepts Overwrite or Skip.
Run the cmdlet with the parameters as follows:

Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -TargetStorageAccountName "TargetStorageAcct" -TargetFileShareName "DestAFS" -TargetFolder "testAzureFS_restored" -ResolveConflict Overwrite

The command returns a job with an ID to be tracked, as shown in the following example.

WorkloadName Operation Status     StartTime             EndTime JobID
------------ --------- ------     ---------             ------- -----
testAzureFS  Restore   InProgress 12/10/2018 9:56:38 AM         9fd34525-6c46-496e-980a-3740ccb2ad75
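To track or block on the job, a minimal sketch (assuming the JobID from the sample output above and the $vault variable defined earlier, with an assumed timeout in seconds):

$job = Get-AzRecoveryServicesBackupJob -JobId 9fd34525-6c46-496e-980a-3740ccb2ad75 -VaultId $vault.ID
Wait-AzRecoveryServicesBackupJob -Job $job -VaultId $vault.ID -Timeout 600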

Restore an Azure file to an alternate location


Use the Restore-AzRecoveryServicesBackupItem cmdlet to restore to the selected recovery point. Specify these
parameters to identify the alternate location, and to uniquely identify the file you want to restore.
TargetStorageAccountName : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
TargetFileShareName : The file share within the target storage account to which the backed-up content is
restored.
TargetFolder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, pass an empty string as the target folder value.
SourceFilePath : The absolute path of the file to be restored within the file share, as a string. This path is the
same path used in the Get-AzStorageFile PowerShell cmdlet.
SourceFileType : Whether a directory or a file is selected. Accepts Directory or File.
ResolveConflict : Instruction if there's a conflict with the restored data. Accepts Overwrite or Skip.
The additional parameters (SourceFilePath and SourceFileType) are related only to the individual file you want to
restore.

Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -TargetStorageAccountName "TargetStorageAcct" -TargetFileShareName "DestAFS" -TargetFolder "testAzureFS_restored" -SourceFileType File -SourceFilePath "TestDir/TestDoc.docx" -ResolveConflict Overwrite

This command returns a job with an ID to be tracked, as shown in the previous section.

Restore Azure file shares and files to the original location


When you restore to an original location, you don't need to specify destination- and target-related parameters.
Only ResolveConflict must be provided.
Overwrite an Azure file share

Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -ResolveConflict Overwrite

Overwrite an Azure file

Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -SourceFileType File -SourceFilePath "TestDir/TestDoc.docx" -ResolveConflict Overwrite

Restore multiple files or folders to original or alternate location


Use the Restore-AzRecoveryServicesBackupItem cmdlet, passing the paths of all files or folders you want to
restore as the value of the MultipleSourceFilePath parameter.
Restore multiple files
The following script restores the FileSharePage.png and MyTestFile.txt files.

$vault = Get-AzRecoveryServicesVault -ResourceGroupName "azurefiles" -Name "azurefilesvault"
$Container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -FriendlyName "afsaccount" -VaultId $vault.ID
$BackupItem = Get-AzRecoveryServicesBackupItem -Container $Container -WorkloadType AzureFiles -VaultId $vault.ID -FriendlyName "azurefiles"
$RP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -VaultId $vault.ID
$files = ("FileSharePage.png", "MyTestFile.txt")
Restore-AzRecoveryServicesBackupItem -RecoveryPoint $RP[0] -MultipleSourceFilePath $files -SourceFileType File -ResolveConflict Overwrite -VaultId $vault.ID -VaultLocation $vault.Location

Restore multiple directories

The following script restores the zrs1_restore and Restore directories.

$vault = Get-AzRecoveryServicesVault -ResourceGroupName "azurefiles" -Name "azurefilesvault"
$Container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage -Status Registered -FriendlyName "afsaccount" -VaultId $vault.ID
$BackupItem = Get-AzRecoveryServicesBackupItem -Container $Container -WorkloadType AzureFiles -VaultId $vault.ID -FriendlyName "azurefiles"
$RP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -VaultId $vault.ID
$files = ("Restore","zrs1_restore")
Restore-AzRecoveryServicesBackupItem -RecoveryPoint $RP[0] -MultipleSourceFilePath $files -SourceFileType Directory -ResolveConflict Overwrite -VaultId $vault.ID -VaultLocation $vault.Location

The output will be similar to the following:

WorkloadName Operation Status     StartTime           EndTime JobID
------------ --------- ------     ---------           ------- -----
azurefiles   Restore   InProgress 4/5/2020 8:01:24 AM         cd36abc3-0242-44b1-9964-0a9102b74d57

If you want to restore multiple files or folders to an alternate location, use the scripts above and specify the target
location-related parameter values, as explained in Restore an Azure file to an alternate location and sketched below.
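A minimal sketch, reusing the $files and $RP variables from the scripts above together with the alternate-location target values used earlier in this article:

Restore-AzRecoveryServicesBackupItem -RecoveryPoint $RP[0] -MultipleSourceFilePath $files -SourceFileType File -TargetStorageAccountName "afaccount1" -TargetFileShareName "azurefiles1" -TargetFolder "restoredata" -ResolveConflict Overwrite -VaultId $vault.ID -VaultLocation $vault.Location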

Next steps
Learn about restoring Azure Files in the Azure portal.
Restore Azure File Shares using REST API

This article explains how to restore an entire file share or specific files from a restore point created by Azure
Backup by using the REST API.
By the end of this article, you'll learn how to perform the following operations using REST API:
View restore points for a backed-up Azure file share.
Restore a full Azure file share.
Restore individual files or folders.

Prerequisites
We assume that you already have a backed-up file share you want to restore. If you don’t, check Backup Azure
file share using REST API to learn how to create one.
For this article, we'll use the following resources:
RecoveryServicesVault : azurefilesvault
Resource group : azurefiles
Storage Account : afsaccount
File Share : azurefiles

Fetch ContainerName and ProtectedItemName


For most of the restore-related API calls, you need to pass values for the {containerName} and
{protectedItemName} URI parameters. Use the ID attribute in the response body of the GET
backupprotectableitems operation to retrieve values for these parameters. In our example, the ID of the file
share we want to protect is:
"/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azure

So the values translate as follows:


{containername} - storagecontainer;storage;azurefiles;afsaccount
{protectedItemName} - azurefileshare;azurefiles
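For reference, here's a sketch of the GET call that returns these IDs; the $filter value shown is an assumption for Azure Storage workloads (see the Backup Protectable Items - List REST API for the full contract):

GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupProtectableItems?api-version=2019-05-13&$filter=backupManagementType eq 'AzureStorage'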

Fetch recovery points for backed up Azure file share


To restore any backed-up file share or files, first select a recovery point to perform the restore operation. The
available recovery points of a backed-up item can be listed using the Recovery Point-List REST API call. It's a GET
operation with all the relevant values.

GET
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints?api-version=2019-05-13&$filter={$filter}

Set the URI values as follows:


{fabricName}: Azure
{vaultName}: azurefilesvault
{containername}: storagecontainer;storage;azurefiles;afsaccount
{protectedItemName}: azurefileshare;azurefiles
{ResourceGroupName}: azurefiles
The GET URI has all the required parameters. There's no need for another request body.

GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/recoveryPoints?api-version=2019-05-13

Example response for fetch recovery points


Once the GET URI is submitted, a 200 response is returned:
HTTP/1.1" 200 None
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/json'
'Content-Encoding': 'gzip'
'Expires': '-1'
'Vary': 'Accept-Encoding'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': 'd68d7951-7d97-4c49-9a2d-7fbaab55233a'
'x-ms-client-request-id': '4edb5a58-47ea-11ea-a27a-0a580af41908, 4edb5a58-47ea-11ea-a27a-0a580af41908'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'Server': 'Microsoft-IIS/10.0'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-reads': '11998'
'x-ms-correlation-request-id': 'd68d7951-7d97-4c49-9a2d-7fbaab55233a'
'x-ms-routing-request-id': 'WESTEUROPE:20200205T073708Z:d68d7951-7d97-4c49-9a2d-7fbaab55233a'
'Date': 'Wed, 05 Feb 2020 07:37:08 GMT'
{
"value": [
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/recoveryPoints/932881138555802864",
"location": null,
"name": "932881138555802864",
"properties": {
"fileShareSnapshotUri": "https://afsaccount.file.core.windows.net/azurefiles?sharesnapshot=2020-02-
04T08:01:35.0000000Z",
"objectType": "AzureFileShareRecoveryPoint",
"recoveryPointSizeInGb": 1,
"recoveryPointTime": "2020-02-04T08:01:35+00:00",
"recoveryPointType": "FileSystemConsistent"
},
"resourceGroup": "azurefiles",
"tags": null,
"type":
"Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints"
},
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/recoveryPoints/932878582606969225",
"location": null,
"name": "932878582606969225",
"properties": {
"fileShareSnapshotUri": "https://afsaccount.file.core.windows.net/azurefiles?sharesnapshot=2020-02-
03T08:05:30.0000000Z",
"objectType": "AzureFileShareRecoveryPoint",
"recoveryPointSizeInGb": 1,
"recoveryPointTime": "2020-02-03T08:05:30+00:00",
"recoveryPointType": "FileSystemConsistent"
},
"resourceGroup": "azurefiles",
"tags": null,
"type":
"Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints"
},
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/recoveryPoints/932890167574511261",
"location": null,
"name": "932890167574511261",
"properties": {
"fileShareSnapshotUri": "https://afsaccount.file.core.windows.net/azurefiles?sharesnapshot=2020-02-
02T08:03:50.0000000Z",
"objectType": "AzureFileShareRecoveryPoint",
"recoveryPointSizeInGb": 1,
"recoveryPointTime": "2020-02-02T08:03:50+00:00",
"recoveryPointType": "FileSystemConsistent"
},
"resourceGroup": "azurefiles",
"tags": null,
"type":
"Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints"
}
]
}

The recovery point is identified with the {name} field in the response above.

Full share recovery using REST API


Use this restore option to restore the complete file share to the original or an alternate location. Triggering a
restore is a POST request, and you can perform this operation using the Trigger Restore REST API.

POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13

The values {containerName} and {protectedItemName} are as set here, and {recoveryPointId} is the {name} field of
the recovery point mentioned above.
POST https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
%3Bazurefiles/recoveryPoints/932886657837421071/restore?api-version=2019-05-13

Create request body


To trigger a restore for an Azure file share, the following are the components of the request body:

NAME       TYPE                         DESCRIPTION
Properties AzureFileShareRestoreRequest RestoreRequestResource properties

For the complete list of definitions of the request body and other details, refer to the trigger Restore REST API
document.
Restore to original location
Request body example for restore to original location
The following request body defines properties required to trigger an Azure file share restore:

{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"OriginalLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"FullShareRestore"
}
}

Restore to alternate location


Specify the following parameters for alternate location recovery:
targetResourceId : The storage account to which the backed-up content is restored. The target storage
account must be in the same location as the vault.
name : The file share within the target storage account to which the backed-up content is restored.
targetFolderPath : The folder under the file share to which data is restored.
Request body example for restore to alternate location
The following request body restores the azurefiles file share in the afsaccount storage account to the azurefiles1
file share in the afaccount1 storage account.

{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"AlternateLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"FullShareRestore",
"restoreFileSpecs":[
{
"targetFolderPath":"restoredata"
}
],
"targetDetails":{
"name":"azurefiles1",
"targetResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1"
}
}
}

Response
The triggering of a restore operation is an asynchronous operation. This operation creates another operation
that needs to be tracked separately. It returns two responses: 202 (Accepted) when another operation is created,
and 200 (OK) when that operation completes.
Response example
Once you submit the POST URI for triggering a restore, the initial response is 202 (Accepted) with a Location
header or an Azure-AsyncOperation header.

HTTP/1.1" 202
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/operationResults/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2019-05-13'
'Retry-After': '60'
'Azure-AsyncOperation': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/operationsStatus/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2019-05-13'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '2426777d-c5ec-44b6-a324-384f8947460c'
'x-ms-client-request-id': '3c743096-47eb-11ea-ae90-0a580af41908, 3c743096-47eb-11ea-ae90-0a580af41908'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1198'
'x-ms-correlation-request-id': '2426777d-c5ec-44b6-a324-384f8947460c'
'x-ms-routing-request-id': 'WESTEUROPE:20200205T074347Z:2426777d-c5ec-44b6-a324-384f8947460c'
'Date': 'Wed, 05 Feb 2020 07:43:47 GMT'
Then track the resulting operation using the location header or the Azure-AsyncOperation header with a GET
command.

GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2016-12-01

Once the operation completes, it returns 200 (OK) with the ID of the resulting restore job in the response body.

HTTP/1.1" 200
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/json'
'Content-Encoding': 'gzip'
'Expires': '-1'
'Vary': 'Accept-Encoding'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'x-ms-client-request-id': '3c743096-47eb-11ea-ae90-0a580af41908, 3c743096-47eb-11ea-ae90-0a580af41908'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'Server': 'Microsoft-IIS/10.0'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-reads': '11998'
'x-ms-correlation-request-id': '41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'x-ms-routing-request-id': 'WESTEUROPE:20200205T074348Z:41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'Date': 'Wed, 05 Feb 2020 07:43:47 GMT'
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/a7e97e42-4e54-4d4b-b449-26fcf946f42c",
"location": null,
"name": "a7e97e42-4e54-4d4b-b449-26fcf946f42c",
"properties": {
"actionsInfo": [
"Cancellable"
],
"activityId": "3c743096-47eb-11ea-ae90-0a580af41908",
"backupManagementType": "AzureStorage",
"duration": "0:00:01.863098",
"endTime": null,
"entityFriendlyName": "azurefiles",
"errorDetails": null,
"extendedInfo": {
"dynamicErrorMessage": null,
"propertyBag": {},
"tasksList": []
},
"jobType": "AzureStorageJob",
"operation": "Restore",
"startTime": "2020-02-05T07:43:47.144961+00:00",
"status": "InProgress",
"storageAccountName": "afsaccount",
"storageAccountVersion": "Storage"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
}

For alternate location recovery, the response body will be like this:
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/7e0ee41e-6e31-4728-a25c-98ff6b777641",
"location": null,
"name": "7e0ee41e-6e31-4728-a25c-98ff6b777641",
"properties": {
"actionsInfo": [
"Cancellable"
],
"activityId": "6077be6e-483a-11ea-a915-0a580af4ad72",
"backupManagementType": "AzureStorage",
"duration": "0:00:02.171965",
"endTime": null,
"entityFriendlyName": "azurefiles",
"errorDetails": null,
"extendedInfo": {
"dynamicErrorMessage": null,
"propertyBag": {
"Data Transferred (in MB)": "0",
"Job Type": "Recover to an alternate file share",
"Number Of Failed Files": "0",
"Number Of Restored Files": "0",
"Number Of Skipped Files": "0",
"RestoreDestination": "afaccount1/azurefiles1/restoredata",
"Source File Share Name": "azurefiles",
"Source Storage Account Name": "afsaccount",
"Target File Share Name": "azurefiles1",
"Target Storage Account Name": "afaccount1"
},
"tasksList": []
},
"jobType": "AzureStorageJob",
"operation": "Restore",
"startTime": "2020-02-05T17:10:18.106532+00:00",
"status": "InProgress",
"storageAccountName": "afsaccount",
"storageAccountVersion": "ClassicCompute"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
}

Since the backup job is a long-running operation, it should be tracked as explained in the monitor jobs using
REST API document; a sketch follows.
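As a sketch, you can also fetch the job directly by its name ({jobName} is the name field from the job response above, for example a7e97e42-4e54-4d4b-b449-26fcf946f42c):

GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJobs/{jobName}?api-version=2019-05-13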

Item level recovery using REST API


You can use this restore option to restore individual files or folders in the original or an alternate location.

POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13

The values {containerName} and {protectedItemName} are as set here, and {recoveryPointId} is the {name} field of
the recovery point mentioned above.

POST https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
%3Bazurefiles/recoveryPoints/932886657837421071/restore?api-version=2019-05-13

Create request body for item-level recovery using REST API


To trigger a restore for an Azure file share, the following are the components of the request body:

NAME       TYPE                         DESCRIPTION
Properties AzureFileShareRestoreRequest RestoreRequestResource properties

For the complete list of definitions of the request body and other details, refer to the trigger Restore REST API
document.
Restore to original location for item-level recovery using REST API
The following request body restores the RestoreTest.txt file in the azurefiles file share in the afsaccount
storage account.
Create request body
{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"copyOptions":"Overwrite",
"recoveryType":"OriginalLocation",
"restoreFileSpecs":[
{
"fileSpecType":"File",
"path":"RestoreTest.txt",
"targetFolderPath":null
}
],
"restoreRequestType":"ItemLevelRestore",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.storage/storageAccounts/afsaccount",
"targetDetails":null
}
}

Restore to alternate location for item-level recovery using REST API


The following request body restores the RestoreTest.txt file in the azurefiles file share in the afsaccount
storage account to the restoredata folder of the azurefiles1 file share in the afaccount1 storage account.
Create request body

{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"AlternateLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"ItemLevelRestore",
"restoreFileSpecs":[
{
"path":"Restore/RestoreTest.txt",
"fileSpecType":"File",
"targetFolderPath":"restoredata"
}
],
"targetDetails":{
"name":"azurefiles1",
"targetResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1"
}
}
}

The response should be handled in the same way as explained above for full share restores.

Next steps
Learn how to manage Azure file shares backup using REST API.
Manage Azure file share backups

This article describes common tasks for managing and monitoring the Azure file shares that are backed up by
Azure Backup. You'll learn how to do management tasks in Backup center.

Monitor jobs
When you trigger a backup or restore operation, the backup service creates a job for tracking. You can monitor
the progress of all jobs on the Backup Jobs page.
To open the Backup Jobs page:
1. Go to Backup center and select Backup Jobs under the Monitoring section.

The Backup Jobs pane lists the status of all jobs.


2. Select Azure Files (Azure Storage) as the datasource type and select any row to see details of the
particular job.

NOTE
Since there is no data transferred to the vault, the data transferred in MB is 0 for backup jobs corresponding to
Azure Files.
Monitor using Azure Backup reports
Azure Backup provides a reporting solution that uses Azure Monitor logs and Azure workbooks. These
resources help you get rich insights into your backups. You can leverage these reports to gain visibility into
Azure Files backup items, jobs at item level and details of active policies. Using the Email Report feature
available in Backup Reports, you can create automated tasks to receive periodic reports via email. Learn how to
configure and view Azure Backup reports.

Create a new policy


You can create a new policy to back up Azure file shares from the Backup policies section of Backup center.
All policies created when you configured backup for file shares show up with the Policy Type as Azure File Share.
To create a new backup policy, follow these steps:
1. In the Backup policies pane of the Backup center, select +Add.

2. Select Azure Files (Azure Storage) as the datasource type, select the vault under which the policy
should be created, and then click Continue.
3. As the Backup policy pane for Azure File Share opens, specify the policy name.
4. In Backup schedule, select an appropriate frequency for the backups - Daily or Hourly.

Daily : Triggers one backup per day. For daily frequency, select the appropriate values for:
Time : The timestamp when the backup job needs to be triggered.
Time zone : The corresponding time zone for the backup job.
Hourly : Triggers multiple backups per day. For hourly frequency, select the appropriate values for:
Schedule : The time interval (in hours) between consecutive backups.
Start time : The time when the first backup job of the day needs to be triggered.
Duration : Represents the backup window (in hours), that is, the time span in which the backup
jobs need to be triggered as per the selected schedule.
Time zone : The corresponding time zone for the backup job.
For example, if you have an RPO (recovery point objective) requirement of 4 hours and your working
hours are 9 AM to 9 PM, the backup schedule configuration to meet these requirements would be:
Schedule: Every 4 hours
Start time: 9 AM
Duration: 12 hours

Based on your selection, the backup job details (the timestamps when the backup jobs would be
triggered) display on the backup policy blade.
5. In Retention range, specify appropriate retention values for backups - tagged as daily, weekly, monthly,
or yearly.
6. After defining all attributes of the policy, click Create.
View policy
To view the existing backup policies:
1. Go to Backup center and select Backup policies under the Manage section.
All Backup policies configured across your vault appear.
2. To view policies specific to Azure Files (Azure Storage), select Azure File Share as the datasource
type.

Modify policy
You can modify a backup policy to change the backup frequency or retention range.
To modify a policy:
1. Go to Backup center and select Backup policies under the Manage section.
All Backup policies configured across your vaults appear.

2. To view policies specific to an Azure file share, select Azure Files (Azure Storage) as the datasource
type.
Click the policy you want to update.
3. The Schedule pane opens. Edit the Backup schedule and Retention range as required, and select
Save.
You'll see an Update in Progress message in the pane. After the policy changes update successfully, you'll
see the message Successfully updated the backup policy.

Stop protection on a file share


There are two ways to stop protecting Azure file shares:
Stop all future backup jobs, and delete all recovery points.
Stop all future backup jobs, but leave the recovery points.
There might be a cost associated with leaving the recovery points in storage, because the underlying snapshots
created by Azure Backup will be retained. The benefit of leaving the recovery points is that you can restore the
file share later. For information about the cost of leaving the recovery points, see the pricing details. If you
decide to delete all the recovery points, you can't restore the file share.
To stop protection for an Azure file share:
1. Go to Backup center, select Backup Instances from the menu, and then select Azure Files (Azure
Storage) as the datasource type.

2. Select the backup item for which you want to stop protection.
3. Select the Stop backup option.
4. In the Stop Backup pane, select Retain Backup Data or Delete Backup Data. Then select Stop
backup.

Resume protection on a file share


If the Retain Backup Data option was selected when protection for the file share was stopped, it's possible to
resume protection. If the Delete Backup Data option was selected, protection for the file share can't resume.
To resume protection for the Azure file share:
1. Go to Backup center, select Backup Instances from the menu, and then select Azure Files (Azure
Storage) as the datasource type.

2. Select the backup item for which you want to resume protection.
3. Select the Resume backup option.
4. The Backup Policy pane opens. Select a policy of your choice to resume backup.
5. After you select a backup policy, select Save.
You'll see an Update in Progress message in the portal. After the backup successfully resumes, you'll see
the message Successfully updated backup policy for the Protected Azure File Share.

Delete backup data


You can delete the backup of a file share during the Stop backup job, or anytime after you stop protection. It
might be beneficial to wait days or even weeks before you delete the recovery points. When you delete backup
data, you can't choose specific recovery points to delete. If you decide to delete your backup data, you delete all
recovery points associated with the file share.
The following procedure assumes that the protection was stopped for the file share.
To delete backup data for the Azure file share:
1. After the backup job is stopped, the Resume backup and Delete backup data options are available in
the Backup Item dashboard. Select the Delete backup data option.

2. The Delete Backup Data pane opens. Enter the name of the file share to confirm deletion. Optionally,
provide more information in the Reason or Comments boxes. After you're sure about deleting the
backup data, select Delete.
Unregister a storage account
To protect your file shares in a particular storage account by using a different Recovery Services vault, first stop
protection for all file shares in that storage account. Then unregister the account from the current Recovery
Services vault used for protection.
The following procedure assumes that the protection was stopped for all file shares in the storage account you
want to unregister.
To unregister the storage account:
1. Open the Recovery Services vault where your storage account is registered.
2. On the Overview pane, select the Backup Infrastructure option under the Manage section.
3. The Backup Infrastructure pane opens. Select Storage Accounts under the Azure Storage
Accounts section.

4. After you select Storage Accounts , a list of storage accounts registered with the vault appears.
5. Right-click the storage account you want to unregister, and select Unregister .

Next steps
For more information, see Troubleshoot Azure file shares backup.
Manage Azure file share backups with the Azure CLI

The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article explains how to perform tasks for managing and
monitoring the Azure file shares that are backed up by Azure Backup. You can also perform these steps with the
Azure portal.

Prerequisites
This article assumes you already have an Azure file share backed up by Azure Backup. If you don't have one, see
Back up Azure file shares with the CLI to configure backup for your file shares. For this article, you use the
following resources:
Resource group : azurefiles
RecoveryServicesVault : azurefilesvault
Storage Account : afsaccount
File Share : azurefiles
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.

Monitor jobs
When you trigger backup or restore operations, the backup service creates a job for tracking. To monitor
completed or currently running jobs, use the az backup job list command. With the CLI, you can also suspend a
currently running job or wait until a job finishes, as sketched after the example output below.
The following example displays the status of backup jobs for the azurefilesvault Recovery Services vault:

az backup job list --resource-group azurefiles --vault-name azurefilesvault


[
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/d477dfb6-b292-4f24-bb43-6b14e9d06ab5",
"location": null,
"name": "d477dfb6-b292-4f24-bb43-6b14e9d06ab5",
"properties": {
"actionsInfo": null,
"activityId": "3cef43ed-0af4-43e2-b9cb-1322c496ccb4",
"backupManagementType": "AzureStorage",
"duration": "0:00:29.718011",
"endTime": "2020-01-13T08:05:29.180606+00:00",
"entityFriendlyName": "azurefiles",
"errorDetails": null,
"extendedInfo": null,
"jobType": "AzureStorageJob",
"operation": "Backup",
"startTime": "2020-01-13T08:04:59.462595+00:00",
"status": "Completed",
"storageAccountName": "afsaccount",
"storageAccountVersion": "MicrosoftStorage"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
},
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/1b9399bf-c23c-4caa-933a-5fc2bf884519",
"location": null,
"name": "1b9399bf-c23c-4caa-933a-5fc2bf884519",
"properties": {
"actionsInfo": null,
"activityId": "2663449c-94f1-4735-aaf9-5bb991e7e00c",
"backupManagementType": "AzureStorage",
"duration": "0:00:28.145216",
"endTime": "2020-01-13T08:05:27.519826+00:00",
"entityFriendlyName": "azurefilesresource",
"errorDetails": null,
"extendedInfo": null,
"jobType": "AzureStorageJob",
"operation": "Backup",
"startTime": "2020-01-13T08:04:59.374610+00:00",
"status": "Completed",
"storageAccountName": "afsaccount",
"storageAccountVersion": "MicrosoftStorage"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
}
]
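For example, a minimal sketch that waits for a job to finish or cancels it (the job name is taken from the sample output above; the timeout in seconds is an assumed value):

az backup job wait --resource-group azurefiles --vault-name azurefilesvault --name d477dfb6-b292-4f24-bb43-6b14e9d06ab5 --timeout 600

az backup job stop --resource-group azurefiles --vault-name azurefilesvault --name d477dfb6-b292-4f24-bb43-6b14e9d06ab5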

Create policy
You can create a backup policy by executing the az backup policy create command with the following
parameters:
--backup-management-type – Azure Storage
--workload-type - AzureFileShare
--name – Name of the policy
--policy - JSON file with appropriate details for schedule and retention
--resource-group - Resource group of the vault
--vault-name – Name of the vault
Example

az backup policy create --resource-group azurefiles --vault-name azurefilesvault --name schedule20 --backup-
management-type AzureStorage --policy samplepolicy.json --workload-type AzureFileShare

Sample JSON (samplepolicy.json)

{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPol
icies/schedule20",
"location": null,
"name": "schedule20",
"properties": {
"backupManagementType": "AzureStorage",
"protectedItemsCount": 0,
"retentionPolicy": {
"dailySchedule": {
"retentionDuration": {
"count": 30,
"durationType": "Days"
},
"retentionTimes": [
"2020-01-05T08:00:00+00:00"
]
},
"monthlySchedule": null,
"retentionPolicyType": "LongTermRetentionPolicy",
"weeklySchedule": null,
"yearlySchedule": null
},
"schedulePolicy": {
"schedulePolicyType": "SimpleSchedulePolicy",
"scheduleRunDays": null,
"scheduleRunFrequency": "Daily",
"scheduleRunTimes": [
"2020-01-05T08:00:00+00:00"
],
"scheduleWeeklyFrequency": 0
},
"timeZone": "UTC",
"workLoadType": “AzureFileShare”
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupPolicies"
}

Example to create a backup policy that configures multiple backups a day


This sample JSON is for the following requirements:
Schedule : Back up every 4 hours starting from 8 AM (UTC) for the next 12 hours.
Retention : Daily - 5 days, Weekly - Every Sunday for 12 weeks, Monthly - First Sunday of every month for
60 months, and Yearly - First Sunday of January for 10 years.
{
"properties":{
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"schedulePolicy": {
"schedulePolicyType": "SimpleSchedulePolicy",
"scheduleRunFrequency": "Hourly",
"hourlySchedule": {
"interval": 4,
"scheduleWindowStartTime": "2021-09-29T08:00:00.000Z",
"scheduleWindowDuration": 12
}
},
"timeZone": "UTC",
"retentionPolicy": {
"retentionPolicyType": "LongTermRetentionPolicy",
"dailySchedule": {
"retentionTimes": null,
"retentionDuration": {
"count": 5,
"durationType": "Days"
}
},
"weeklySchedule": {
"daysOfTheWeek": [
"Sunday"
],
"retentionTimes": null,
"retentionDuration": {
"count": 12,
"durationType": "Weeks"
}
},
"monthlySchedule": {
"retentionScheduleFormatType": "Weekly",
"retentionScheduleDaily": null,
"retentionScheduleWeekly": {
"daysOfTheWeek": [
"Sunday"
],
"weeksOfTheMonth": [
"First"
]
},
"retentionTimes": null,
"retentionDuration": {
"count": 60,
"durationType": "Months"
}
},
"yearlySchedule": {
"retentionScheduleFormatType": "Weekly",
"monthsOfYear": [
"January"
],
"retentionScheduleDaily": null,
"retentionScheduleWeekly": {
"daysOfTheWeek": [
"Sunday"
],
"weeksOfTheMonth": [
"First"
]
},
"retentionTimes": null,
"retentionDuration": {
"count": 10,
"durationType": "Years"
"durationType": "Years"
}
}
}
}
}

Once the policy is created successfully, the command output displays the policy JSON that you
passed as a parameter while executing the command.
You can modify the schedule and retention section of the policy as required.
Example
If you want to retain the backup of the first Sunday of every month for two months, update the monthly schedule
as follows:

"monthlySchedule": {
"retentionDuration": {
"count": 2,
"durationType": "Months"
},
"retentionScheduleDaily": null,
"retentionScheduleFormatType": "Weekly",
"retentionScheduleWeekly": {
"daysOfTheWeek": [
"Sunday"
],
"weeksOfTheMonth": [
"First"
]
},
"retentionTimes": [
"2020-01-05T08:00:00+00:00"
]
}

Modify policy
You can modify a backup policy to change backup frequency or retention range by using az backup item set-
policy.
To change the policy, define the following parameters:
--container-name : The name of the storage account that hosts the file share. To retrieve the name or
friendly name of your container, use the az backup container list command.
--name : The name of the file share for which you want to change the policy. To retrieve the name or
friendly name of your backed-up item, use the az backup item list command.
--policy-name : The name of the backup policy you want to set for your file share. You can use az backup
policy list to view all the policies for your vault.
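For example, a quick sketch that lists the policies in the vault used throughout this article:

az backup policy list --resource-group azurefiles --vault-name azurefilesvault --out table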
The following example sets the schedule2 backup policy for the azurefiles file share present in the afsaccount
storage account.

az backup item set-policy --policy-name schedule2 --name "AzureFileShare;azurefiles" --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --out table
You can also run the previous command by using the friendly names for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare

az backup item set-policy --policy-name schedule2 --name azurefiles --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --backup-management-type azurestorage --workload-type azurefileshare --out table

Name ResourceGroup
------------------------------------ ---------------
fec6f004-0e35-407f-9928-10a163f123e5 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your change policy operation. To track the status of the job, use the az backup job show command.

Stop protection on a file share


There are two ways to stop protecting Azure file shares:
Stop all future backup jobs and delete all recovery points.
Stop all future backup jobs but leave the recovery points.
There might be a cost associated with leaving the recovery points in storage, because the underlying snapshots
created by Azure Backup will be retained. The benefit of leaving the recovery points is the option to restore the
file share later, if you want. For information about the cost of leaving the recovery points, see the pricing details.
If you choose to delete all recovery points, you can't restore the file share.
To stop protection for the file share, define the following parameters:
--container-name : The name of the storage account that hosts the file share. To retrieve the name or
friendly name of your container, use the az backup container list command.
--item-name : The name of the file share for which you want to stop protection. To retrieve the name or
friendly name of your backed-up item, use the az backup item list command.
Stop protection and retain recovery points
To stop protection while retaining data, use the az backup protection disable command.
The following example stops protection for the azurefiles file share but retains all recovery points.

az backup protection disable --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --out table

You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare

az backup protection disable --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --item-name azurefiles --workload-type azurefileshare --backup-management-type azurestorage --out table
Name ResourceGroup
------------------------------------ ---------------
fec6f004-0e35-407f-9928-10a163f123e5 azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your stop protection operation. To track the status of the job, use the az backup job show command.
Stop protection without retaining recovery points
To stop protection without retaining recovery points, use the az backup protection disable command with the
--delete-backup-data option set to true.
The following example stops protection for the azurefiles file share without retaining recovery points.

az backup protection disable --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --delete-backup-data true --out table

You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare

az backup protection disable --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --item-name azurefiles --workload-type azurefileshare --backup-management-type azurestorage --delete-backup-data true --out table

Resume protection on a file share


If you stopped protection for an Azure file share but retained recovery points, you can resume protection later. If
you don't retain the recovery points, you can't resume protection.
To resume protection for the file share, define the following parameters:
--container-name : The name of the storage account that hosts the file share. To retrieve the name or
friendly name of your container, use the az backup container list command.
--item-name : The name of the file share for which you want to resume protection. To retrieve the name or
friendly name of your backed-up item, use the az backup item list command.
--policy-name : The name of the backup policy for which you want to resume the protection for the file
share.
The following example uses the az backup protection resume command to resume protection for the azurefiles file
share by using the schedule2 backup policy.

az backup protection resume --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --item-name "AzureFileShare;azurefiles" --policy-name schedule2 --out table

You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
az backup protection resume --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --item-name azurefiles --workload-type azurefileshare --backup-management-type azurestorage --policy-name schedule2 --out table

Name ResourceGroup
------------------------------------ ---------------
75115ab0-43b0-4065-8698-55022a234b7f azurefiles

The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your resume protection operation. To track the status of the job, use the az backup job show command.

Unregister a storage account


If you want to protect your file shares in a particular storage account by using a different Recovery Services
vault, first stop protection for all file shares in that storage account. Then unregister the account from the
Recovery Services vault currently used for protection.
You need to provide a container name to unregister the storage account. To retrieve the name or the friendly
name of your container, use the az backup container list command.
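For example, a minimal sketch that lists the registered containers for the vault used in this article:

az backup container list --resource-group azurefiles --vault-name azurefilesvault --backup-management-type azurestorage --out table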
The following example unregisters the afsaccount storage account from azurefilesvault by using the az backup
container unregister command.

az backup container unregister --vault-name azurefilesvault --resource-group azurefiles --container-name "StorageContainer;Storage;AzureFiles;afsaccount" --out table

You can also run the previous cmdlet by using the friendly name for the container by providing the following
additional parameter:
--backup-management-type : azurestorage

az backup container unregister --vault-name azurefilesvault --resource-group azurefiles --container-name afsaccount --backup-management-type azurestorage --out table

Next steps
For more information, see Troubleshoot Azure file shares backup.
Manage Azure file share backups with PowerShell

This article describes how to use Azure PowerShell to manage and monitor the Azure file shares that are backed
up by the Azure Backup service.

WARNING
Make sure the Az.RecoveryServices PowerShell module is upgraded to version 2.6.0 or later for AFS backups.
For more details, refer to the section outlining the requirement for this change.

Modify the protection policy


To change the policy used for backing up the Azure file share, use Enable-AzRecoveryServicesBackupProtection.
Specify the relevant backup item and the new backup policy.
The following example changes the testAzureFS protection policy from dailyafs to monthlyafs .

$monthlyafsPol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "monthlyafs"
$afsContainer = Get-AzRecoveryServicesBackupContainer -FriendlyName "testStorageAcct" -ContainerType AzureStorage
$afsBkpItem = Get-AzRecoveryServicesBackupItem -Container $afsContainer -WorkloadType AzureFiles -Name "testAzureFS"
Enable-AzRecoveryServicesBackupProtection -Item $afsBkpItem -Policy $monthlyafsPol

Track backup and restore jobs


On-demand backup and restore operations return a job along with an ID, as shown when you run an on-
demand backup. Use the Get-AzRecoveryServicesBackupJobDetails cmdlet to track the job progress and details.
$job = Get-AzRecoveryServicesBackupJob -JobId 00000000-6c46-496e-980a-3740ccb2ad75 -VaultId $vaultID

$job | fl

IsCancellable : False
IsRetriable : False
ErrorDetails :
{Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.AzureFileShareJobErrorInfo}
ActivityId : 00000000-5b71-4d73-9465-8a4a91f13a36
JobId : 00000000-6c46-496e-980a-3740ccb2ad75
Operation : Restore
Status : Failed
WorkloadName : testAFS
StartTime : 12/10/2018 9:56:38 AM
EndTime : 12/10/2018 11:03:03 AM
Duration : 01:06:24.4660027
BackupManagementType : AzureStorage

$job.ErrorDetails

ErrorCode   ErrorMessage                                           Recommendations
---------   ------------                                           ---------------
1073871825  Microsoft Azure Backup encountered an internal error.  Wait for a few minutes and then try
                                                                   the operation again. If the issue
                                                                   persists, please contact Microsoft
                                                                   support.

Stop protection on a file share


There are two ways to stop protecting Azure file shares:
Stop all future backup jobs and delete all recovery points
Stop all future backup jobs but leave the recovery points
There may be a cost associated with leaving the recovery points in storage, as the underlying snapshots created
by Azure Backup will be retained. However, the benefit of leaving the recovery points is you can restore the file
share later, if desired. For information about the cost of leaving the recovery points, see the pricing details. If you
choose to delete all recovery points, you can't restore the file share.

Stop protection and retain recovery points


To stop protection while retaining data, use the Disable-AzRecoveryServicesBackupProtection cmdlet.
The following example stops protection for the afsfileshare file share but retains all recovery points:

$vaultID = Get-AzRecoveryServicesVault -ResourceGroupName "afstesting" -Name "afstest" | select -ExpandProperty ID
$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -Name "afsfileshare" -VaultId $vaultID
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $vaultID

WorkloadName  Operation      Status     StartTime             EndTime               JobID
------------  ---------      ------     ---------             -------               -----
afsfileshare  DisableBackup  Completed  1/26/2020 2:43:59 PM  1/26/2020 2:44:21 PM  98d9f8a1-54f2-4d85-8433-c32eafbd793f

The JobID attribute in the output corresponds to the ID of the job that's created by the backup service for
your "stop protection" operation. To track the status of the job, use the Get-AzRecoveryServicesBackupJob
cmdlet.
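
For example, a minimal sketch reusing the JobID from the sample output above (with -VaultId as defined earlier):

Get-AzRecoveryServicesBackupJob -JobId 98d9f8a1-54f2-4d85-8433-c32eafbd793f -VaultId $vaultID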
Stop protection without retaining recovery points
To stop protection without retaining recovery points, use the Disable-AzRecoveryServicesBackupProtection
cmdlet and add the -RemoveRecoveryPoints parameter.
The following example stops protection for the afsfileshare file share without retaining recovery points:

$vaultID = Get-AzRecoveryServicesVault -ResourceGroupName "afstesting" -Name "afstest" | select -ExpandProperty ID
$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureStorage -WorkloadType AzureFiles -Name "afsfileshare" -VaultId $vaultID
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $vaultID -RemoveRecoveryPoints

WorkloadName  Operation         Status     StartTime             EndTime               JobID
------------  ---------         ------     ---------             -------               -----
afsfileshare  DeleteBackupData  Completed  1/26/2020 2:50:57 PM  1/26/2020 2:51:39 PM  b1a61c0b-548a-4687-9d15-9db1cc5bcc85

Next steps
Learn about managing Azure file share backups in the Azure portal.
Manage Azure File share backup with REST API
5/20/2022 • 3 minutes to read

This article explains how to use the REST API to manage and monitor the Azure file shares that are backed up
by Azure Backup.

Monitor jobs
The Azure Backup service triggers jobs that run in the background. This includes scenarios such as triggering
backup, restore operations, and disabling backup. These jobs can be tracked using their IDs.
Fetch job information from operations
An operation such as triggering backup will always return a jobID in the response.
For example, the final response of a trigger backup REST API operation is as follows:

{
"id": "c3a52d1d-0853-4211-8141-477c65740264",
"name": "c3a52d1d-0853-4211-8141-477c65740264",
"status": "Succeeded",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:10:48.296012Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"
}
}

The Azure file share backup job is identified by the jobId field and can be tracked as mentioned here using a
GET request.
Tracking the job

GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupJobs/{jobName}?api-version=2019-05-13

The {jobName} is the "jobId" mentioned above. The response is always "200 OK", with the status field indicating
the status of the job. Once it's "Completed" or "CompletedWithWarnings", the extendedInfo section reveals
more details about the job.

GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJobs/e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b?api-version=2019-05-13

Response

NAME      TYPE         DESCRIPTION

200 OK    JobResource  OK

Response example
Once the GET URI is submitted, a 200 response is returned.
HTTP/1.1" 200
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/json'
'Content-Encoding': 'gzip'
'Expires': '-1'
'Vary': 'Accept-Encoding'
'Server': 'Microsoft-IIS/10.0, Microsoft-IIS/10.0'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': 'dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'x-ms-client-request-id': 'a644712a-4895-11ea-ba57-0a580af42708, a644712a-4895-11ea-ba57-0a580af42708'
'X-Powered-By': 'ASP.NET'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'x-ms-ratelimit-remaining-subscription-reads': '11999'
'x-ms-correlation-request-id': 'dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T040341Z:dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'Date': 'Thu, 06 Feb 2020 04:03:40 GMT'
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b",
"name": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b",
"type": "Microsoft.RecoveryServices/vaults/backupJobs",
"properties": {
"jobType": "AzureStorageJob",
"duration": "00:00:43.1809140",
"storageAccountName": "testvault2",
"storageAccountVersion": "Storage",
"extendedInfo": {
"tasksList": [],
"propertyBag": {
"File Share Name": "testshare",
"Storage Account Name": "testvault2",
"Policy Name": "schedule1"
}
},
"entityFriendlyName": "testshare",
"backupManagementType": "AzureStorage",
"operation": "ConfigureBackup",
"status": "Completed",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:11:31.476926Z",
"activityId": "3677cec0-942d-4eac-921f-8f3c873140d7"
}
}

Modify policy
To change the policy with which the file share is protected, use the same request format as enabling protection.
Just provide the new policy ID in the request body and submit the request.
For example, to change the protection policy of testshare from schedule1 to schedule2, provide the schedule2 ID
in the request body:
{
"properties": {
"protectedItemType": "AzureFileShareProtectedItem",
"sourceResourceId": "/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"policyId": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPol
icies/schedule2"
}
}
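
For illustration, this request body is submitted with a PUT against the protected item, following the same URI pattern as the delete operation shown later in this article (a sketch; substitute your own values):

PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13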

Stop protection but retain existing data


You can remove protection on a protected file share but retain the data already backed up. To do so, remove the
policy in the request body you used to enable backup and submit the request. Once the association with the
policy is removed, backups are no longer triggered, and no new recovery points are created.

{
"properties": {
"protectedItemType": "AzureFileShareProtectedItem",
"sourceResourceId": "/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"policyId": "" ,
"protectionState":"ProtectionStopped"
}
}

Sample response
Stopping protection for a file share is an asynchronous operation. The operation creates another operation that
needs to be tracked. It returns two responses: 202 (Accepted) when another operation is created, and 200 when
that operation completes.
Response header when operation is successfully accepted:

HTTP/1.1" 202
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/operationResults/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2019-05-13'
'Retry-After': '60'
msrest.http_logger : 'Azure-AsyncOperation': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-
4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/operationsStatus/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2019-05-13'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'x-ms-client-request-id': 'd331c15e-48ab-11ea-84c0-0a580af46a50, d331c15e-48ab-11ea-84c0-0a580af46a50'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1199'
'x-ms-correlation-request-id': '3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T064224Z:3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'Date': 'Thu, 06 Feb 2020 06:42:24 GMT'
'Content-Length': '0'

Then track the resulting operation using the location header or Azure-AsyncOperation header with a GET
command:

GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupope
rations/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2016-12-01

Response body

{
"id": "b300922a-ad9c-4181-b4cd-d42ea780ad77",
"name": "b300922a-ad9c-4181-b4cd-d42ea780ad77",
"status": "Succeeded",
"startTime": "2020-02-06T06:42:24.4001299Z",
"endTime": "2020-02-06T06:42:24.4001299Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "7816fca8-d5be-4c41-b911-1bbd922e5826"
}
}

Stop protection and delete data


To remove the protection on a protected file share and delete the backup data as well, perform a delete
operation as detailed here.

DELETE
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}?api-version=2019-05-13

The parameters {containerName} and {protectedItemName} are as set here.


The following example triggers an operation to stop protection for the testshare file share protected with
azurefilesvault.

DELETE https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare
;testshare?api-version=2016-12-01

Responses
Delete protection is an asynchronous operation. The operation creates another operation that needs to be
tracked separately. It returns two responses: 202 (Accepted) when another operation is created and 204
(NoContent) when that operation completes.

Next steps
Learn how to troubleshoot problems while configuring backup for Azure File shares.
Monitoring Azure Files
5/20/2022 • 25 minutes to read

When you have critical applications and business processes that rely on Azure resources, you want to monitor
those resources for their availability, performance, and operation. This article describes the monitoring data
that's generated by Azure Files and how you can use the features of Azure Monitor to analyze and alert on this data.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Monitor overview
The Overview page in the Azure portal for each Azure Files resource includes a brief view of the resource
usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring
data is available. Some of this data is collected automatically and is available for analysis as soon as you create
the resource. You can enable additional types of data collection with some configuration.

What is Azure Monitor?


Azure Files creates monitoring data by using Azure Monitor, which is a full stack monitoring service in Azure.
Azure Monitor provides a complete set of features to monitor your Azure resources and resources in other
clouds and on-premises.
Start with the article Monitoring Azure resources with Azure Monitor, which describes the following:
What is Azure Monitor?
Costs associated with monitoring
Monitoring data collected in Azure
Configuring data collection
Standard tools in Azure for analyzing and alerting on monitoring data
The following sections build on this article by describing the specific data gathered from Azure Files. Examples
show how to configure data collection and analyze this data with Azure tools.

Monitoring data
Azure Files collects the same kinds of monitoring data as other Azure resources, which are described in
Monitoring data from Azure resources.
See the Azure Files monitoring data reference for detailed information on the metrics and logs created by
Azure Files.
Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor
doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you
need to migrate to an Azure Resource Manager storage account. See Migrate to Azure Resource Manager.

Collection and routing


Platform metrics and the Activity log are collected automatically, but can be routed to other locations by using a
diagnostic setting.
To collect resource logs, you must create a diagnostic setting. When you create the setting, choose file as the
type of storage that you want to enable logs for. Then, specify one of the following categories of operations for
which you want to collect logs.

CATEGORY       DESCRIPTION

StorageRead    Read operations on objects.

StorageWrite   Write operations on objects.

StorageDelete  Delete operations on objects.

To get the list of SMB and REST operations that are logged, see Storage logged operations and status messages
and Azure Files monitoring data reference.

Creating a diagnostic setting


This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure
CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic
setting, see Create diagnostic setting to collect platform logs and metrics in Azure.

TIP
You can also create a diagnostic setting by using an Azure Resource manager template or by using a policy definition. A
policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
This section doesn't describe templates or policy definitions.
To view an Azure Resource Manager template that creates a diagnostic setting, see Diagnostic setting for Azure
Storage.
To learn how to create a diagnostic setting by using a policy definition, see Azure Policy built-in definitions for
Azure Storage.

Azure portal
PowerShell
Azure CLI

1. Sign in to the Azure portal.


2. Navigate to your storage account.
3. In the Monitoring section, click Diagnostic settings.

4. Choose file as the type of storage that you want to enable logs for.
5. Click Add diagnostic setting.
The Diagnostic settings page appears.
6. In the Name field of the page, enter a name for this Resource log setting. Then, select which operations
you want logged (read, write, and delete operations), and where you want the logs to be sent.
Archive logs to a storage account
If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the
storage account. For specific pricing, see the Platform Logs section of the Azure Monitor pricing page. You
can't send logs to the same storage account that you are monitoring with this setting. This would lead to
recursive logs in which a log entry describes the writing of another log entry. You must create an account or use
another existing account to store log information.
1. Select the Archive to a storage account checkbox, and then click the Configure button.
2. In the Storage account drop-down list, select the storage account that you want to archive your logs to,
click the OK button, and then click the Save button.

IMPORTANT
You can't set a retention policy. However, you can manage the retention policy of a log container by defining a
lifecycle management policy. To learn how, see Optimize costs by automating Azure Blob Storage access tiers.

NOTE
Before you choose a storage account as the export destination, see Archive Azure resource logs to understand
prerequisites on the storage account.

Stream logs to Azure Event Hubs


If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event
hub. For specific pricing, see the Platform Logs section of the Azure Monitor pricing page. You'll need access to
an existing event hub, or you'll need to create one before you complete this step.
1. Select the Stream to an event hub checkbox, and then click the Configure button.
2. In the Select an event hub pane, choose the namespace, name, and policy name of the event hub that
you want to stream your logs to.
3. Click the OK button, and then click the Save button.
Send logs to Azure Log Analytics
1. Select the Send to Log Analytics checkbox, select a log analytics workspace, and then click the Save
button. You'll need access to an existing log analytics workspace, or you'll need to create one before you
complete this step.
IMPORTANT
You can't set a retention policy. However, you can manage the data retention period of Log Analytics at the workspace
level or even specify different retention settings by data type. To learn how, see Change the data retention period.

Send to a partner solution


You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner
integration into your subscription. Configuration options will vary by partner. Check the Azure Monitor partner
integrations documentation for details.
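
If you prefer scripting over the portal steps above, the following is a minimal Azure CLI sketch of creating a diagnostic setting that sends the file service logs to a Log Analytics workspace (the setting name is hypothetical, and the resource IDs are placeholders you must substitute):

az monitor diagnostic-settings create \
    --name FileShareDiagnostics \
    --resource "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default" \
    --workspace "<log-analytics-workspace-resource-ID>" \
    --logs '[{"category": "StorageRead", "enabled": true}, {"category": "StorageWrite", "enabled": true}, {"category": "StorageDelete", "enabled": true}]'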

Analyzing metrics
For a list of all Azure Monitor supported metrics, which includes Azure Files, see Azure Monitor supported metrics.

Azure portal
PowerShell
Azure CLI

You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer.
Open Metrics Explorer by choosing Metrics from the Azure Monitor menu. For details on using this tool, see
Getting started with Azure Metrics Explorer.
For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete
list of the dimensions that Azure Storage supports, see Metrics dimensions. Metrics for Azure Files are in these
namespaces:
Microsoft.Storage/storageAccounts
Microsoft.Storage/storageAccounts/fileServices
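
As a minimal sketch, you can also query these metrics from the Azure CLI (the resource ID placeholders must be replaced with your own values):

az monitor metrics list \
    --resource "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default" \
    --metric Transactions \
    --interval PT1H \
    --aggregation Total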

Analyze metrics by using code


Azure Monitor provides the .NET SDK to read metric definition and values. The sample code shows how to use
the SDK with different parameters. You need to use 0.18.0-preview or a later version for storage metrics.
In these examples, replace the <resource-ID> placeholder with the resource ID of the entire storage account or
the Azure Files service. You can find these resource IDs on the Properties pages of your storage account in the
Azure portal.
Replace the <subscription-ID> variable with the ID of your subscription. For guidance on how to obtain values
for <tenant-ID> , <application-ID> , and <AccessKey> , see Use the portal to create an Azure AD application and
service principal that can access resources.
List the account-level metric definition
The following example shows how to list a metric definition at the account level:

public static async Task ListStorageMetricDefinition()
{
    var resourceId = "<resource-ID>";
    var subscriptionId = "<subscription-ID>";
    var tenantId = "<tenant-ID>";
    var applicationId = "<application-ID>";
    var accessKey = "<AccessKey>";

    MonitorManagementClient readOnlyClient = AuthenticateWithReadOnlyClient(tenantId, applicationId, accessKey, subscriptionId).Result;
    IEnumerable<MetricDefinition> metricDefinitions = await readOnlyClient.MetricDefinitions.ListAsync(resourceUri: resourceId, cancellationToken: new CancellationToken());

    foreach (var metricDefinition in metricDefinitions)
    {
        // Enumerate metric definition:
        //   Id
        //   ResourceId
        //   Name
        //   Unit
        //   MetricAvailabilities
        //   PrimaryAggregationType
        //   Dimensions
        //   IsDimensionRequired
    }
}

Reading account-level metric values


The following example shows how to read UsedCapacity data at the account level:

public static async Task ReadStorageMetricValue()
{
    var resourceId = "<resource-ID>";
    var subscriptionId = "<subscription-ID>";
    var tenantId = "<tenant-ID>";
    var applicationId = "<application-ID>";
    var accessKey = "<AccessKey>";

    MonitorManagementClient readOnlyClient = AuthenticateWithReadOnlyClient(tenantId, applicationId, accessKey, subscriptionId).Result;

    Microsoft.Azure.Management.Monitor.Models.Response Response;

    string startDate = DateTime.Now.AddHours(-3).ToUniversalTime().ToString("o");
    string endDate = DateTime.Now.ToUniversalTime().ToString("o");
    string timeSpan = startDate + "/" + endDate;

    Response = await readOnlyClient.Metrics.ListAsync(
        resourceUri: resourceId,
        timespan: timeSpan,
        interval: System.TimeSpan.FromHours(1),
        metricnames: "UsedCapacity",
        aggregation: "Average",
        resultType: ResultType.Data,
        cancellationToken: CancellationToken.None);

    foreach (var metric in Response.Value)
    {
        // Enumerate metric value:
        //   Id
        //   Name
        //   Type
        //   Unit
        //   Timeseries
        //     - Data
        //     - Metadatavalues
    }
}

Reading multidimensional metric values


For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific
dimension values.
The following example shows how to read metric data on the metric supporting multidimension:
public static async Task ReadStorageMetricValueTest()
{
    // Resource ID for Azure Files
    var resourceId = "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}/fileServices/default";
    var subscriptionId = "<subscription-ID>";
    // How to identify tenant ID, application ID, and access key:
    // https://azure.microsoft.com/documentation/articles/resource-group-create-service-principal-portal/
    var tenantId = "<tenant-ID>";
    var applicationId = "<application-ID>";
    var accessKey = "<AccessKey>";

    MonitorManagementClient readOnlyClient = AuthenticateWithReadOnlyClient(tenantId, applicationId, accessKey, subscriptionId).Result;

    Microsoft.Azure.Management.Monitor.Models.Response Response;

    string startDate = DateTime.Now.AddHours(-3).ToUniversalTime().ToString("o");
    string endDate = DateTime.Now.ToUniversalTime().ToString("o");
    string timeSpan = startDate + "/" + endDate;

    // It's applicable to define a metadata filter when a metric supports dimensions.
    // More conditions can be added with the 'or' and 'and' operators, for example:
    // BlobType eq 'BlockBlob' or BlobType eq 'PageBlob'
    ODataQuery<MetadataValue> odataFilterMetrics = new ODataQuery<MetadataValue>(
        string.Format("BlobType eq '{0}'", "BlockBlob"));

    Response = readOnlyClient.Metrics.List(
        resourceUri: resourceId,
        timespan: timeSpan,
        interval: System.TimeSpan.FromHours(1),
        metricnames: "BlobCapacity",
        odataQuery: odataFilterMetrics,
        aggregation: "Average",
        resultType: ResultType.Data);

    foreach (var metric in Response.Value)
    {
        // Enumerate metric value:
        //   Id
        //   Name
        //   Type
        //   Unit
        //   Timeseries
        //     - Data
        //     - Metadatavalues
    }
}

Analyzing logs
You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics
queries.
To get the list of SMB and REST operations that are logged, see Storage logged operations and status messages
and Azure Files monitoring data reference.
Log entries are created only if there are requests made against the service endpoint. For example, if a storage
account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure
File service are created. Azure Storage logs contain detailed information about successful and failed requests to
a storage service. This information can be used to monitor individual requests and to diagnose issues with a
storage service. Requests are logged on a best-effort basis.
Log authenticated requests
The following types of authenticated requests are logged:
Successful requests
Failed requests, including timeout, throttling, network, authorization, and other errors
Requests that use Kerberos, NTLM or shared access signature (SAS), including failed and successful requests
Requests to analytics data (classic log data in the $logs container and classic metric data in the $metric
tables)
Requests made by the Azure Files service itself, such as log creation or deletion, aren't logged. For a full list of
the SMB and REST requests that are logged, see Storage logged operations and status messages and Azure Files
monitoring data reference.
Accessing logs in a storage account
Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a
single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-
operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-
name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/fileServices/default/y=<year>/m=
<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json

Here's an example:
https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/
208841be-a4v3-4234-9450-
08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default/y=2019/m=07/d=30/h=23/m=12

Accessing logs in an event hub


Logs sent to an event hub aren't stored as a file, but you can verify that the event hub received the log
information. In the Azure portal, go to your event hub and verify that the incoming messages count is greater
than zero.

You can access and read log data that's sent to your event hub by using security information and event
management and monitoring tools. For more information, see What can I do with the monitoring data being
sent to my event hub?.
Accessing logs in a Log Analytics workspace
You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries. Data is stored in the
StorageFileLogs table.
For more information, see Log Analytics tutorial.
Sample Kusto queries
Here are some queries that you can enter in the Log search bar to help you monitor your Azure file shares. These
queries use the Kusto query language.

IMPORTANT
When you select Logs from the storage account resource group menu, Log Analytics is opened with the query scope set
to the current resource group. This means that log queries will only include data from that resource group. If you want to
run a query that includes data from other resources or data from other Azure services, select Logs from the Azure
Monitor menu. See Log query scope and time range in Azure Monitor Log Analytics for details.

Use these queries to help you monitor your Azure file shares:
View SMB errors over the last week

StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d) and StatusCode contains "-"
| sort by StatusCode

Create a pie chart of SMB operations over the last week

StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d)
| summarize count() by OperationName
| sort by count_ desc
| render piechart

View REST errors over the last week


StorageFileLogs
| where Protocol == "HTTPS" and TimeGenerated >= ago(7d) and StatusText !contains "Success"
| sort by StatusText asc

Create a pie chart of REST operations over the last week

StorageFileLogs
| where Protocol == "HTTPS" and TimeGenerated >= ago(7d)
| summarize count() by OperationName
| sort by count_ desc
| render piechart
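
Identify the top 10 clients by SMB request count over the last week (a sketch that assumes the CallerIpAddress column is populated for your requests)

StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d)
| summarize RequestCount = count() by CallerIpAddress
| top 10 by RequestCount desc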

To view the list of column names and descriptions for Azure Files, see StorageFileLogs.
For more information on how to write queries, see Log Analytics tutorial.

Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They
allow you to identify and address issues in your system before your customers notice them. You can set alerts
on metrics, logs, and the activity log.
The following table lists some example scenarios to monitor and the proper metric to use for the alert:

SCENARIO                                              METRIC TO USE FOR ALERT

File share is throttled.                              Metric: Transactions
                                                      Dimension name: Response type
                                                      Dimension name: FileShare (premium file share only)

File share size is 80% of capacity.                   Metric: File Capacity
                                                      Dimension name: FileShare (premium file share only)

File share egress has exceeded 500 GiB in one day.    Metric: Egress
                                                      Dimension name: FileShare (premium file share only)

How to create alerts for Azure Files


1. Go to your storage account in the Azure portal.
2. Click Alerts and then click + New alert rule.
3. Click Edit resource, select the File resource type and then click Done.
4. Click Add condition and provide the following information for the alert:
Metric
Dimension name
Alert logic
5. Click Add action groups and add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
6. Fill in the Alert details like Alert rule name, Description, and Severity.
7. Click Create alert rule to create the alert.

NOTE
If you create an alert and it's too noisy, adjust the threshold value and alert logic.
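
If you prefer scripting, the following Azure CLI sketch creates a comparable throttling alert (the rule name, response type, and resource IDs are illustrative placeholders; adjust the condition to the response types applicable to your share, as described in the next section):

az monitor metrics alert create \
    --name FileShareThrottlingAlert \
    --resource-group <resource-group> \
    --scopes "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default" \
    --condition "total Transactions > 0 where ResponseType includes SuccessWithThrottling" \
    --window-size 1h \
    --evaluation-frequency 15m \
    --action <action-group-resource-ID>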

How to create an alert if a file share is throttled


1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts, and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the Transactions metric.
6. On the Configure signal logic blade, click the Dimension name drop-down and select Response type.
7. Click the Dimension values drop-down and select the appropriate response types for your file share.
For standard file shares, select the following response types:
SuccessWithShareIopsThrottling
SuccessWithThrottling
ClientShareIopsThrottlingError
For premium file shares, select the following response types:
SuccessWithShareEgressThrottling
SuccessWithShareIngressThrottling
SuccessWithShareIopsThrottling
ClientShareEgressThrottlingError
ClientShareIngressThrottlingError
ClientShareIopsThrottlingError

NOTE
If the response types are not listed in the Dimension values drop-down, this means the resource has not been
throttled. To add the dimension values, next to the Dimension values drop-down list, select Add custom
value, enter the response type (for example, SuccessWithThrottling), select OK, and then repeat these steps to
add all applicable response types for your file share.

8. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #10.

NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Throttling alerts for standard file shares will be triggered if any
file share within the storage account is throttled and the alert will not identify which file share was throttled. Since
per-share metrics are not available for standard file shares, the recommendation is to have one file share per
storage account.

9. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
10. Define the alert parameters (threshold value, operator, aggregation granularity and frequency of
evaluation) and click Done.

TIP
If you are using a static threshold, the metric chart can help determine a reasonable threshold value if the file
share is currently being throttled. If you are using a dynamic threshold, the metric chart will display the calculated
thresholds based on recent data.

11. Click Add action groups to add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name, Description, and Severity.
13. Click Create alert rule to create the alert.
How to create an alert if the Azure file share size is 80% of capacity
1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the File Capacity metric.
6. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #8.

NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Alerts for standard file shares are based on all file shares in the
storage account. Since per-share metrics are not available for standard file shares, the recommendation is to have
one file share per storage account.

7. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
8. Enter the Threshold value in bytes. For example, if the file share size is 100 TiB and you want to receive
an alert when the file share size is 80% of capacity, the threshold value in bytes is 87960930222080.
9. Define the rest of the alert parameters (aggregation granularity and frequency of evaluation) and click
Done.
10. Click Add action groups to add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
11. Fill in the Alert details like Alert rule name, Description, and Severity.
12. Click Create alert rule to create the alert.
How to create an alert if the Azure file share egress has exceeded 500 GiB in a day
1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the Egress metric.
6. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #8.

NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Alerts for standard file shares are based on all file shares in the
storage account. Since per-share metrics are not available for standard file shares, the recommendation is to have
one file share per storage account.

7. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
8. Enter 536870912000 bytes for the Threshold value.
9. Click the Aggregation granularity drop-down and select 24 hours.
10. Select the Frequency of evaluation and click Done.
11. Click Add action groups to add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name, Description, and Severity.
13. Click Create alert rule to create the alert.

Next steps
Azure Files monitoring data reference
Monitor Azure resources with Azure Monitor
Azure Storage metrics migration
Planning for an Azure Files deployment
How to deploy Azure Files
Troubleshoot Azure Files on Windows
Troubleshoot Azure Files on Linux
Migrate to Azure file shares
5/20/2022 • 7 minutes to read

This article covers the basic aspects of a migration to Azure file shares.
This article contains migration basics and a table of migration guides. These guides help you move your files
into Azure file shares. The guides are organized based on where your data is and what deployment model
(cloud-only or hybrid) you're moving to.

Migration basics
Azure has multiple available types of cloud storage. A fundamental aspect of file migrations to Azure is
determining which Azure storage option is right for your data.
Azure file shares are suitable for general-purpose file data. This data includes anything you use an on-premises
SMB or NFS share for. With Azure File Sync, you can cache the contents of several Azure file shares on servers
running Windows Server on-premises.
For an app that currently runs on an on-premises server, storing files in an Azure file share might be a good
choice. You can move the app to Azure and use Azure file shares as shared storage. You can also consider Azure
Disks for this scenario.
Some cloud apps don't depend on SMB or on machine-local data access or shared access. For those apps, object
storage like Azure blobs is often the best choice.
The key in any migration is to capture all the applicable file fidelity when moving your files from their current
storage location to Azure. How much fidelity the Azure storage option supports and how much your scenario
requires also helps you pick the right Azure storage. General-purpose file data traditionally depends on file
metadata. App data might not.
Here are the two basic components of a file:
Data stream : The data stream of a file stores the file content.
File metadata : The file metadata has these subcomponents:
File attributes like read-only
File permissions, which can be referred to as NTFS permissions or file and folder ACLs
Timestamps, most notably the creation and last-modified timestamps
An alternative data stream, which is a space to store larger amounts of nonstandard properties
File fidelity in a migration can be defined as the ability to:
Store all applicable file information on the source.
Transfer files with the migration tool.
Store files in the target storage of the migration.
Ultimately, the target for the migration guides on this page is one or more Azure file shares. Consider the list of
features and file fidelity aspects that Azure file shares don't support.
To ensure your migration proceeds smoothly, identify the best copy tool for your needs and match a storage
target to your source.
Taking the previous information into account, you can see that the target storage for general-purpose files in
Azure is Azure file shares.
Unlike object storage in Azure blobs, an Azure file share can natively store file metadata. Azure file shares also
preserve the file and folder hierarchy, attributes, and permissions. NTFS permissions can be stored on files and
folders just as they are on-premises.
A user of Active Directory, their on-premises domain controller, can natively access an Azure file share.
So can a user of Azure Active Directory Domain Services (Azure AD DS). Each uses their current identity to get
access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to
an on-premises file share.
The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure
file share. It's preserved on-premises when Azure File Sync is used.
Learn more about on-premises Active Directory authentication and Azure AD DS authentication for Azure file
shares.

Migration guides
The following table lists detailed migration guides.
How to use the table:
1. Locate the row for the source system your files are currently stored on.
2. Choose one of these targets:
A hybrid deployment using Azure File Sync to cache the content of Azure file shares on-premises
Azure file shares in the cloud
Select the target column that matches your choice.
3. Within the intersection of source and target, a table cell lists available migration scenarios. Select one to
directly link to the detailed migration guide.
A scenario without a link doesn't yet have a published migration guide. Check this table occasionally for updates.
New guides will be published when they're available.

SOURCE                                    TARGET: HYBRID DEPLOYMENT             TARGET: CLOUD-ONLY DEPLOYMENT
                                          Tool combination:                     Tool combination:

Windows Server 2012 R2 and later          - Azure File Sync                     - Via RoboCopy to a mounted
                                          - Azure File Sync and Azure             Azure file share
                                            DataBox                             - Via Azure File Sync

Windows Server 2012 and earlier           - Via DataBox and Azure File Sync     - Via Storage Migration Service
                                            to recent server OS                   to recent server with Azure
                                          - Via Storage Migration Service         File Sync
                                            to recent server with Azure File    - Via RoboCopy to a mounted
                                            Sync, then upload                     Azure file share

Network-attached storage (NAS)            - Via Azure File Sync upload          - Via DataBox
                                          - Via DataBox + Azure File Sync       - Via RoboCopy to a mounted
                                                                                  Azure file share

Linux / Samba                             - Azure File Sync and RoboCopy        - Via RoboCopy to a mounted
                                                                                  Azure file share

Microsoft Azure StorSimple 8100 or        - Via dedicated data migration        - Via dedicated data migration
8600 series appliances                      cloud service                         cloud service

StorSimple 1200 virtual appliance         - Via Azure File Sync

Migration toolbox
File -copy tools
There are several file-copy tools available from Microsoft and others. To select the right tool for your migration
scenario, you must consider these fundamental questions:
Does the tool support the source and target locations for your file copy?
Does the tool support your network path or available protocols (such as REST, SMB, or NFS) between the
source and target storage locations?
Does the tool preserve the necessary file fidelity supported by your source and target locations?
In some cases, your target storage doesn't support the same fidelity as your source. If the target storage
is sufficient for your needs, the tool must match only the target's file-fidelity capabilities.
Does the tool have features that let it fit into your migration strategy?
For example, consider whether the tool lets you minimize your downtime.
When a tool supports an option to mirror a source to a target, you can often run it multiple times on the
same source and target while the source stays accessible.
The first time you run the tool, it copies the bulk of the data. This initial run might last a while. It often
lasts longer than you want for taking the data source offline for your business processes.
By mirroring a source to a target (as with robocopy /MIR ), you can run the tool again on that same
source and target. The run is much faster because it needs to transport only source changes that occur
after the previous run. Rerunning a copy tool this way can reduce downtime significantly.
The following table classifies Microsoft tools and their current suitability for Azure file shares:

RECOMMENDED TOOL                SUPPORT FOR AZURE FILE SHARES            PRESERVATION OF FILE FIDELITY

RoboCopy                        Supported. Azure file shares can be      Full fidelity.*
                                mounted as network drives.

Azure File Sync                 Natively integrated into Azure file      Full fidelity.*
                                shares.

Storage Migration Service       Indirectly supported. Azure file         Full fidelity.*
                                shares can be mounted as network
                                drives on SMS target servers.

Data Box (including the data    Supported. (Data Box Disks does not      Data Box and Data Box Heavy fully
copy service to load files      support large file shares.)              support metadata.
onto the device)                                                         Data Box Disks does not preserve
                                                                         file metadata.

AzCopy (latest version)         Supported but not fully recommended.     Doesn't support differential copies
                                                                         at scale, and some file fidelity
                                                                         might be lost. Learn how to use
                                                                         AzCopy with Azure file shares.

Azure Storage Explorer          Supported but not recommended.           Loses most file fidelity, like
(latest version)                                                         ACLs. Supports timestamps.

Azure Data Factory              Supported.                               Doesn't copy metadata.

* Full fidelity: meets or exceeds Azure file-share capabilities.


Migration helper tools
This section describes tools that help you plan and run migrations.
RoboCopy from Microsoft Corporation
RoboCopy is one of the tools most applicable to file migrations. It comes as part of Windows. The main
RoboCopy documentation is a helpful resource for this tool's many options.
TreeSize from JAM Software GmbH
Azure File Sync scales primarily with the number of items (files and folders) and not with the total storage
amount. The TreeSize tool lets you determine the number of items on your Windows Server volumes.
You can use the tool to create a perspective before an Azure File Sync deployment. You can also use it when
cloud tiering is engaged after deployment. In that scenario, you see the number of items and which directories
use your server cache the most.
The tested version of the tool is version 4.4.1. It's compatible with cloud-tiered files. The tool won't cause recall
of tiered files during its normal operation.

Next steps
1. Create a plan for which deployment of Azure file shares (cloud-only or hybrid) you want.
2. Review the list of available migration guides to find the detailed guide that matches your source and
deployment of Azure file shares.
More information about the Azure Files technologies mentioned in this article:
Azure file share overview
Planning for an Azure File Sync deployment
Azure File Sync: Cloud tiering
Use RoboCopy to migrate to Azure file shares
5/20/2022 • 26 minutes to read

This migration article describes the use of RoboCopy to move or migrate files to an Azure file share. RoboCopy
is a trusted and well-known file copy utility with a feature set that makes it well suited for migrations. It uses the
SMB protocol, which makes it broadly applicable to any source and target combination that supports SMB.
Data sources: Any source supporting the SMB protocol, such as network-attached storage (NAS), Windows
or Linux servers, another Azure file share, and many more
Migration route: From source storage ⇒ Windows machine with RoboCopy ⇒ Azure file share
There are many different migration routes for different source and deployment combinations. Look through the
table of migration guides to find the migration that best suits your needs.

Applies to
FILE SHARE TYPE                               SMB    NFS

Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

AzCopy vs. RoboCopy


AzCopy and RoboCopy are two fundamentally different file copy tools. RoboCopy uses any version of the SMB
protocol. AzCopy is a "born-in-the-cloud" tool that can be used to move data as long as the target is in Azure
storage. AzCopy depends on a REST protocol.
RoboCopy, as a trusted, Windows-based copy tool, has the home-turf advantage when it comes to copying files
at full fidelity. RoboCopy supports many migration scenarios due to its rich set of features and the ability to copy
files and folders in full fidelity. Check out the file fidelity section in the migration overview article to learn more
about the importance of copying files at maximum possible fidelity.
AzCopy, on the other hand, has only recently expanded to support file copy with some fidelity and added the
first features needed to be considered as a migration tool. However, there are still gaps and there can easily be
misunderstandings of functionality when comparing AzCopy flags to RoboCopy flags.
An example: RoboCopy /MIR will mirror source to target - that means added, changed, and deleted files are
considered. An important difference in using AzCopy -sync is that deleted files on the source are not removed
on the target. That makes for an incomplete differential-copy feature set. AzCopy will continue to evolve. At this
time, AzCopy is not a recommended tool for migration scenarios with Azure file shares as the target.
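
As a minimal sketch of the mirroring behavior described above (the source path, storage account, and share name are placeholders; this assumes the share is reachable and you've already authenticated to it, and the fidelity and performance flags should be verified against the RoboCopy documentation for your scenario):

robocopy C:\Shares\HR \\<storage-account>.file.core.windows.net\hr-share /MIR /COPY:DATS /DCOPY:DAT /MT:16 /R:2 /W:1 /LOG+:robocopy-hr.log

Because /MIR transfers only the differences on repeated runs, you can run this command several times while the source stays online and reserve a short final run for the cut-over window.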

Migration goals
The goal is to move the data from existing file share locations to Azure. In Azure, you'll store your data in native
Azure file shares that you can use without needing a Windows Server. This migration needs to be done in a way
that guarantees the integrity of the production data and its availability during the migration. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.

Migration overview
The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares.
Furthermore, you'll configure networking, consider a DFS Namespace deployment (DFS-N) or update your
existing one. Once it's time for the actual data copy, you'll need to consider repeated, differential RoboCopy runs
to minimize downtime, and finally, cut-over your users to the newly created Azure file shares. The following
sections describe the phases of the migration process in detail.

TIP
If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.

Phase 1: Identify how many Azure file shares you need


In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
A structured approach to a deployment map
Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.

Phase 2: Deploy Azure storage resources


In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure
storage accounts and file shares within them.
An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
CAUTION

If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
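As a sketch of this deployment step with the Az PowerShell module (the resource group, account name, share name, and region are hypothetical placeholders), the following creates a storage account with large file shares enabled and one 100-TiB file share in it:

# Storage account with large file shares (up to 100 TiB); ZRS stays compatible with the 100-TiB limit
New-AzStorageAccount -ResourceGroupName "rg-file-migration" -Name "hrdeptstorage" -Location "westeurope" -SkuName Standard_ZRS -Kind StorageV2 -EnableLargeFileShare

# One Azure file share in that account with a 100-TiB (102,400 GiB) quota
New-AzRmStorageShare -ResourceGroupName "rg-file-migration" -StorageAccountName "hrdeptstorage" -Name "hr-share" -QuotaGiB 102400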

Phase 3: Preparing to use Azure file shares


The information in this phase will help you decide how your servers and users, in Azure and outside of
Azure, will use your Azure file shares. The most critical decisions are:
Networking: Enable your networks to route SMB traffic.
Authentication: Configure Azure storage accounts for Kerberos authentication. Azure AD Connect and domain
joining your storage account will allow your apps and users to use their AD identity for authentication.
Authorization: Share-level ACLs for each Azure file share will allow AD users and groups to access a given
share. Within an Azure file share, native NTFS ACLs take over, so authorization based on file and folder
ACLs works like it does for on-premises SMB shares.
Business continuity: Integrating Azure file shares into an existing environment often entails preserving
existing share addresses. If you aren't already using DFS-Namespaces, consider establishing them in your
environment. You'd be able to keep the share addresses your users and scripts use unchanged. DFS-N provides
a namespace routing service for SMB by redirecting clients to Azure file shares.
https://www.youtube-nocookie.com/embed/jd49W33DxkQ
This video is a guide and demo for how to securely expose Azure file shares directly to information workers and
apps in five simple steps.
The video references dedicated documentation for some topics:
Identity overview
How to domain join a storage account
Networking overview for Azure file shares
How to configure public and private endpoints
How to configure a S2S VPN
How to configure a Windows P2S VPN
How to configure a Linux P2S VPN
How to configure DNS forwarding
Configure DFS-N
Mounting an Azure file share
Before you can use RoboCopy, you need to make the Azure file share accessible over SMB. The easiest way is to
mount the share as a local network drive on the Windows Server you plan to use for RoboCopy.
IMPORTANT
Make sure you mount the Azure file share using the storage account access key. Don't use a domain identity. Before you
can successfully mount an Azure file share to a local Windows Server, you need to have completed Phase 3: Preparing to
use Azure file shares.

Once you are ready, review the Use an Azure file share with Windows how-to article. Then mount the Azure file
share you want to use as the RoboCopy target.
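A minimal sketch of that mount, assuming a storage account named mystorageaccount and a share named hr-share (both hypothetical) and authenticating with the access key:

net use Z: \\mystorageaccount.file.core.windows.net\hr-share /user:Azure\mystorageaccount <storage-account-key>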

Phase 4: RoboCopy
The following RoboCopy command will copy only the differences (updated files and folders) from your source
storage to your Azure file share.

robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>

Switches and their meanings:

/MT:n
Allows Robocopy to run multithreaded. Default for n is 8. The maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads deliver balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n
Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: Choose a value of two or three if you believe timeout issues caused failures in the past. This may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n
Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B
Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR
(Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT
Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags]
The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags]
Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP
Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL
Specifies that file names aren't logged. Improves copy performance.

/NDL
Specifies that directory names aren't logged. Improves copy performance.

/XD
Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name>
Writes status to the log file as Unicode. (Overwrites the existing log.)

/L
Only for a test run. Files are listed only. They won't be copied, deleted, or time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM
Only for targets with tiered storage. Not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering. It can be used independently of Azure File Sync. In this mode, Robocopy will pause whenever a file copy would cause the destination volume's free space to go below a "floor" value. This value can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z
Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB
Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.

IMPORTANT
We recommend using Windows Server 2022. When using Windows Server 2019, ensure it's at the latest patch level, or
that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.

TIP
Check out the Troubleshooting section if RoboCopy is impacting your production environment, reports lots of errors, or
isn't progressing as fast as expected.

Phase 5: User cut-over


When you run the RoboCopy command for the first time, your users and applications are still accessing files on
the source of your migration and potentially changing them. It's possible that RoboCopy processes a
directory and moves on to the next, and then a user on the source location adds, changes, or deletes a file that
won't be processed in the current RoboCopy run. This behavior is expected.
The first run is about moving the bulk of the churned data to your Azure file share. This first copy can take a
while. Check out the Troubleshooting section for more insight into what can affect RoboCopy speeds.
Once the initial run is complete, run the command again.
The second time you run RoboCopy for the same share, it will finish faster, because it only needs to transport
changes that happened since the last run. You can run repeated jobs for the same share.
When you consider the downtime acceptable, you need to remove user access to your source shares. You
can do that by any steps that prevent users from changing the file and folder structure and content. An example
is to point your DFS-Namespace to a non-existing location or to change the ACLs on each share.
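For example, one blunt but effective way to cut off SMB access on a Windows source server is to remove the share definition itself. The share name is hypothetical; the folder and its data stay untouched:

net share HRShare /delete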
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final step
takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your
downtime) by measuring how long the previous run took.
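To get that estimate, you could wrap the RoboCopy invocation, for example with PowerShell's Measure-Command (paths are hypothetical placeholders):

# Returns a TimeSpan with the total duration of the catch-up run
Measure-Command { robocopy D:\Shares\HR \\mystorageaccount.file.core.windows.net\hr-share /MIR /R:2 /W:1 /NP /NFL /NDL }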
In a previous section, you configured your users to access the share with their identity, and you should have
established a strategy for your users to use established paths to your new Azure file shares (DFS-N).
You can try to run a few of these copies between different source and target shares in parallel. When doing so,
keep your network throughput and core-to-thread-count ratio in mind to avoid overtaxing the system.

Troubleshoot and optimize


Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider the abilities of the source storage, the target storage, and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
CAUTION

While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolization of resources on both IO-
constrained devices and crowded network links.
/IPG:n can't be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy relies entirely on the SMB protocol for all networking needs. Using SMB is the reason why
RoboCopy can't influence the network throughput itself, but it can slow down its use.
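As a sketch of the QoS alternative, assuming your RoboCopy jobs run on a Windows Server with the NetQos PowerShell cmdlets available, a policy like the following could cap robocopy.exe traffic. The policy name and the rate are hypothetical values to tune for your environment:

# Cap all traffic generated by robocopy.exe at roughly 800 megabits per second
New-NetQosPolicy -Name "ThrottleRoboCopy" -AppPathNameMatchCondition "robocopy.exe" -ThrottleRateActionBitsPerSecond 800MB

# Remove the policy again after the migration window:
# Remove-NetQosPolicy -Name "ThrottleRoboCopy"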
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
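For instance, a catch-up run with an inter-packet gap could look like this sketch, where the paths and the 20-millisecond value are placeholders to tune:

robocopy D:\Shares\HR \\mystorageaccount.file.core.windows.net\hr-share /MIR /IPG:20 /R:2 /W:1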
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file will be
evaluated during an initial copy and during catch-up copies, for example during repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of migrated files.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.
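As a quick check, this PowerShell one-liner shows the logical processor count you can weigh against your planned /MT:n value and the number of parallel RoboCopy jobs:

# Logical processors = cores x threads per core
[Environment]::ProcessorCount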

TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs will copy fewer differences, and you are more likely to shift from
being network-throughput constrained to compute constrained. Under these circumstances, it's often better to match the
RoboCopy thread count to the actually available threads on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.

Avoid unnecessary work


Avoid large-scale changes in your namespace, such as moving files between directories, changing
properties at a large scale, or changing permissions (NTFS ACLs). Especially ACL changes can have a high
impact because they often have a cascading change effect on files lower in the folder hierarchy. Consequences
can be:
Extended RoboCopy job run time, because each file and folder affected by an ACL change needs to be
updated.
Data moved earlier may need to be recopied. For instance, more data will need to be copied when
folder structures change after files had already been copied earlier. A RoboCopy job can't "play back" a
namespace change. The next job must purge the files previously transported to the old folder structure and
upload the files in the new folder structure again.
Another important aspect is to use the RoboCopy tool effectively. With the recommended RoboCopy script,
you'll create and save a log file for errors. Copy errors can occur - that is normal. These errors often make it
necessary to run multiple rounds of a copy tool like RoboCopy. An initial run, say from a NAS to DataBox or a
server to an Azure file share. And one or more extra runs with the /MIR switch to catch and retry files that didn't
get copied.
You should be prepared to run multiple rounds of RoboCopy against a given namespace scope. Successive runs
will finish faster as they have less to copy but are constrained increasingly by the speed of processing the
namespace. When you run multiple rounds, you can speed up each round by not having RoboCopy try
unreasonably hard to copy everything in a given run. These RoboCopy switches can make a significant
difference:
/R:n where n = how often to retry copying a failed file, and
/W:n where n = how many seconds to wait between retries.
/R:5 /W:5 is a reasonable setting that you can adjust to your liking. In this example, a failed file will be retried
five times, with a five-second wait between retries. If the file still fails to copy, the next RoboCopy job will try
again. Often, files that failed because they were in use or because of timeout issues might eventually be copied
successfully this way.
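Putting this together, a later catch-up round might look like this sketch, where the paths, thread count, and log file are hypothetical placeholders:

robocopy D:\Shares\HR \\mystorageaccount.file.core.windows.net\hr-share /MT:16 /R:5 /W:5 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /UNILOG:C:\Logs\hr-catchup.log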

Next steps
There is more to discover about Azure file shares. The following articles help you understand advanced options
and best practices, and they also contain troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Migration overview
Backup: Azure file share snapshots
How to use DFS Namespaces with Azure Files
Use DataBox to migrate from Network Attached
Storage (NAS) to Azure file shares

This migration article is one of several involving the keywords NAS and Azure DataBox. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ DataBox ⇒ Azure file share
Caching files on-premises: No, the final goal is to use the Azure file shares directly in the cloud. There is no
plan to use Azure File Sync.
If your scenario is different, look through the table of migration guides.
This article guides you end-to-end through the planning, deployment, and networking configurations needed to
migrate from your NAS appliance to functional Azure file shares. This guide uses Azure DataBox for bulk data
transport (offline data transport).

Applies to
FILE SHARE TYPE (SMB / NFS)

Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS

Migration goals
The goal is to move the shares on your NAS appliance to Azure and have them become native Azure file shares.
You can use native Azure file shares without a need for a Windows Server. This migration needs to be done in a
way that guarantees the integrity of the production data and availability during the migration. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.

Migration overview
The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares
and configure networking. Then you'll migrate your files using Azure DataBox and use RoboCopy to catch up with
changes. Finally, you'll cut over your users and apps to the newly created Azure file shares. The following
sections describe the phases of the migration process in detail.

TIP
If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.

Phase 1: Identify how many Azure file shares you need


In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.

A structured approach to a deployment map


Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.

Phase 2: Deploy Azure storage resources


In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure
storage accounts and file shares within them.
An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
CAUTION

If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.

Phase 3: Determine how many Azure DataBox appliances you need


Start this step only when you have completed the previous phase. Your Azure storage resources (storage
accounts and file shares) should be created at this time. During your DataBox order, you need to specify into
which storage accounts the DataBox is moving data.
In this phase, you need to map the results of the migration plan from the previous phase to the limits of the
available DataBox options. These considerations will help you make a plan for which DataBox options you
should choose and how many of them you will need to move your NAS shares to Azure file shares.
To determine how many devices of which type you need, consider these important limits:
Any Azure DataBox can move data into up to 10 storage accounts.
Each DataBox option comes with its own usable capacity. See DataBox options.
Consult your migration plan for the number of storage accounts you have decided to create and the shares in
each one. Then look at the size of each of the shares on your NAS. Combining this information will allow you to
optimize and decide which appliance should be sending data to which storage accounts. You can have two
DataBox devices move files into the same storage account, but don't split the content of a single file share across
two DataBoxes. A hypothetical sizing example follows the next section.
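As a hypothetical sizing example: suppose your mapping plan calls for three storage accounts holding 25 TiB, 30 TiB, and 15 TiB of share data, or 70 TiB in total. A single 80-TiB DataBox could serve all three accounts, because it stays within both the usable capacity and the 10-storage-account limit. If the total were instead 120 TiB, you could order two DataBoxes and assign each one complete storage accounts (or at least complete file shares), never splitting one file share across the two devices.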
DataBox options
For a standard migration, one or a combination of these three DataBox options should be chosen:
DataBox Disks: Microsoft will send you between one and five SSD disks with a capacity of 8 TiB each, for a
maximum total of 40 TiB. The usable capacity is about 20% less, due to encryption and file system overhead.
For more information, see the DataBox Disks documentation.
DataBox: This is the most common option. A ruggedized DataBox appliance that works similarly to a NAS will
be shipped to you. It has a usable capacity of 80 TiB. For more information, see the DataBox documentation.
DataBox Heavy: This option features a ruggedized DataBox appliance on wheels that works similarly to a NAS,
with a capacity of 1 PiB. The usable capacity is about 20% less, due to encryption and file system overhead.
For more information, see the DataBox Heavy documentation.

Phase 4: Provision a temporary Windows Server


While you wait for your Azure DataBox(es) to arrive, you can already deploy one or more Windows Servers you
will need for running RoboCopy jobs.
The first use of these servers will be to copy files onto the DataBox.
The second use of these servers will be to catch up with changes that have occurred on the NAS appliance
while the DataBox was in transport. This approach keeps downtime on the source side to a minimum.
The speed at which your RoboCopy jobs work depends mainly on these factors:
IOPS on the source and target storage
the available network bandwidth between them
Find more details in the Troubleshooting section: IOPS and Bandwidth considerations
the ability to quickly process files and folders in a namespace
Find more details in the Troubleshooting section: Processing speed
the number of changes between RoboCopy runs
Find more details in the Troubleshooting section: Avoid unnecessary work
It is important to keep the referenced details in mind when deciding on the RAM and thread count you will
provide to your temporary Windows Server(s).

Phase 5: Preparing to use Azure file shares


To save time, you should proceed with this phase while you wait for your DataBox to arrive. The
information in this phase will help you decide how your servers and users, in Azure and outside of Azure,
will use your Azure file shares. The most critical decisions are:
Networking: Enable your networks to route SMB traffic.
Authentication: Configure Azure storage accounts for Kerberos authentication. Azure AD Connect and domain
joining your storage account will allow your apps and users to use their AD identity for authentication.
Authorization: Share-level ACLs for each Azure file share will allow AD users and groups to access a given
share. Within an Azure file share, native NTFS ACLs take over, so authorization based on file and folder
ACLs works like it does for on-premises SMB shares.
Business continuity: Integrating Azure file shares into an existing environment often entails preserving
existing share addresses. If you aren't already using DFS-Namespaces, consider establishing them in your
environment. You'd be able to keep the share addresses your users and scripts use unchanged. You would use
DFS-N as a namespace routing service for SMB, by redirecting DFS-Namespace targets to Azure file shares
after their migration.
https://www.youtube-nocookie.com/embed/jd49W33DxkQ
This video is a guide and demo for how to securely expose Azure file shares directly to information workers and
apps in five simple steps.
The video references dedicated documentation for some topics:
Identity overview
How to domain join a storage account
Networking overview for Azure file shares
How to configure public and private endpoints
How to configure a S2S VPN
How to configure a Windows P2S VPN
How to configure a Linux P2S VPN
How to configure DNS forwarding
Configure DFS-N

Phase 6: Copy files onto your DataBox


When your DataBox arrives, you need to set up your DataBox in a line of sight to your NAS appliance. Follow the
setup documentation for the DataBox type you ordered.
Set up Data Box
Set up Data Box Disk
Set up Data Box Heavy
Depending on the DataBox type, there may be DataBox copy tools available to you. At this point, they aren't
recommended for migrations to Azure file shares, because they don't copy your files with full fidelity to the DataBox.
Use RoboCopy instead.
When your DataBox arrives, it will have pre-provisioned SMB shares available for each storage account you
specified at the time of ordering it.
If your files go into a premium Azure file share, there will be one SMB share per premium "File storage"
storage account.
If your files go into a standard storage account, there will be three SMB shares per standard (GPv1 and GPv2)
storage account. Only the file shares ending with _AzFiles are relevant for your migration. Ignore any block
and page blob shares.
Follow the steps in the Azure DataBox documentation:
1. Connect to Data Box
2. Copy data to Data Box
3. Prepare your DataBox for departure to Azure
The linked DataBox documentation specifies a RoboCopy command. However, the command is not suitable to
preserve the full file and folder fidelity. Use this command instead:

Robocopy /MT:32 /NP /NFL /NDL /B /MIR /IT /COPY:DATSO /DCOPY:DAT /UNILOG:<FilePathAndName> <SourcePath>
<Dest.Path>

To learn more about the details of the individual RoboCopy flags, check out the table in the upcoming
RoboCopy section.
To learn more about how to appropriately size the thread count /MT:n , optimize RoboCopy speed, and make
RoboCopy a good neighbor in your data center, take a look at the RoboCopy troubleshooting section.
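For illustration only, with a DataBox reachable at \\10.128.5.40 and a pre-provisioned share named mystorageaccount_AzFiles (all names and paths are hypothetical), a full-fidelity copy of one source share could look like this:

Robocopy /MT:32 /NP /NFL /NDL /B /MIR /IT /COPY:DATSO /DCOPY:DAT /UNILOG:C:\Logs\hr-to-databox.log D:\Shares\HR \\10.128.5.40\mystorageaccount_AzFiles\hr-share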

TIP
As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your
Data Box with full fidelity. Follow this data copy service tutorial and make sure to set the correct Azure file share target.

Phase 7: Catch-up RoboCopy from your NAS


Once your DataBox reports that all files and folders have been placed into the planned Azure file shares, you can
continue with this phase. A catch-up RoboCopy is only needed if the data on the NAS may have changed since
the DataBox copy was started. In certain scenarios where you use a share for archiving purposes, you might be
able to stop changes to the share on your NAS until the migration is complete. You might also have the ability to
serve your business requirements by setting NAS shares to read-only during the migration.
In cases where you need a share to be read-write during the migration and can only absorb a small downtime
window, this catch-up RoboCopy step will be important to complete before the failover of user access directly
to the Azure file share.
In this step, you will run RoboCopy jobs to catch up your cloud shares with the latest changes on your NAS since
the time you forked your shares onto the DataBox. This catch-up RoboCopy may finish quickly or take a while,
depending on the amount of churn that happened on your NAS shares.
Run the first catch-up copy to your Azure file share target:
1. Identify the first location on your NAS appliance.
2. Identify the matching Azure file share.
3. Mount the Azure file share as a local network drive on your temporary Windows Server.
4. Start the copy using RoboCopy, as described in the following sections.
Mounting an Azure file share
Before you can use RoboCopy, you need to make the Azure file share accessible over SMB. The easiest way is to
mount the share as a local network drive to the Windows Server you are planning on using for RoboCopy.

IMPORTANT
Before you can successfully mount an Azure file share to a local Windows Server, you need to have completed Phase 5:
Preparing to use Azure file shares.

Once you are ready, review the Use an Azure file share with Windows how-to article and mount the Azure file
share you want to start the NAS catch-up RoboCopy for.
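One common sketch for mounting with the access key from PowerShell, where the account, share, and key are hypothetical placeholders:

# Store the storage account key so the mapped drive survives sign-out
cmdkey /add:mystorageaccount.file.core.windows.net /user:AZURE\mystorageaccount /pass:<storage-account-key>

# Map the share as drive Y: using the stored credential
New-PSDrive -Name Y -PSProvider FileSystem -Root "\\mystorageaccount.file.core.windows.net\hr-share" -Persist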
RoboCopy
The following RoboCopy command will copy only the differences (updated files and folders) from your NAS
storage to your Azure file share.

robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>

Switches and their meanings:

/MT:n
Allows Robocopy to run multithreaded. Default for n is 8. The maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads deliver balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n
Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: Choose a value of two or three if you believe timeout issues caused failures in the past. This may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n
Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B
Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR
(Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT
Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags]
The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags]
Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP
Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL
Specifies that file names aren't logged. Improves copy performance.

/NDL
Specifies that directory names aren't logged. Improves copy performance.

/XD
Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name>
Writes status to the log file as Unicode. (Overwrites the existing log.)

/L
Only for a test run. Files are listed only. They won't be copied, deleted, or time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM
Only for targets with tiered storage. Not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering. It can be used independently of Azure File Sync. In this mode, Robocopy will pause whenever a file copy would cause the destination volume's free space to go below a "floor" value. This value can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z
Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB
Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.

IMPORTANT
We recommend using Windows Server 2022. When using Windows Server 2019, ensure it's at the latest patch level, or
that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.

TIP
Check out the Troubleshooting section if RoboCopy is impacting your production environment, reports lots of errors, or
isn't progressing as fast as expected.

User cut-over
When you run the RoboCopy command for the first time, your users and applications are still accessing files on
the NAS and potentially changing them. It's possible that RoboCopy processes a directory and moves on to the
next, and then a user on the source location (NAS) adds, changes, or deletes a file that won't be processed
in the current RoboCopy run. This behavior is expected.
The first run is about moving the bulk of the churned data to your Azure file share. This first copy can take a
while. Check out the Troubleshooting section for more insight into what can affect RoboCopy speeds.
Once the initial run is complete, run the command again.
The second time you run RoboCopy for the same share, it will finish faster, because it only needs to transport
changes that happened since the last run. You can run repeated jobs for the same share.
When you consider the downtime acceptable, you need to remove user access to your NAS-based shares.
You can do that by any steps that prevent users from changing the file and folder structure and content. An
example is to point your DFS-Namespace to a non-existing location or to change the root ACLs on the share.
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final step
takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your
downtime) by measuring how long the previous run took.
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions as on your NAS SMB share. If you had an enterprise-class, domain-
joined NAS, then the user SIDs will automatically match, because the users exist in Active Directory and RoboCopy
copies files and metadata at full fidelity. If you used local users on your NAS, you need to re-create these
users as Windows Server local users and map the existing SIDs RoboCopy moved over to your Windows Server
to the SIDs of your new Windows Server local users.
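If you're publishing the folder from a Windows Server, a minimal sketch with the SmbShare cmdlets, where the share name, path, and group are hypothetical placeholders:

# Publish the folder as an SMB share and grant share-level access to an AD group
New-SmbShare -Name "HR" -Path "D:\Shares\HR" -FullAccess "CONTOSO\HR-Users"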
You have now finished migrating a share or group of shares into a common root or volume, depending on your
mapping from Phase 1.
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share
at a time.

Troubleshoot
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider the abilities of the source storage, the target storage, and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
CAUTION

While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolization of resources on both IO-
constrained devices and crowded network links.
/IPG:n can't be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy relies entirely on the SMB protocol for all networking needs. Using SMB is the reason why
RoboCopy can't influence the network throughput itself, but it can slow down its use.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file will be
evaluated during an initial copy and during catch-up copies, for example during repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of migrated files.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.

TIP
Rule of thumb: The first RoboCopy run, which will move a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs will copy fewer differences, and you are more likely to shift from
being network-throughput constrained to compute constrained. Under these circumstances, it is often better to match the
RoboCopy thread count to the actually available threads on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
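To find the core and thread counts that inform your /MT:n choice for those subsequent runs, you can query the
machine with the standard CIM cmdlets; a small sketch:

# List physical cores and logical processors (threads) per CPU
Get-CimInstance Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors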

Avoid unnecessary work


Avoid large-scale changes in your namespace. For example, moving files between directories, changing
properties at a large scale, or changing permissions (NTFS ACLs). Especially ACL changes can have a high
impact because they often have a cascading change effect on files lower in the folder hierarchy. Consequences
can be:
Extended RoboCopy job run time, because each file and folder affected by an ACL change needs to be
updated.
Data moved earlier may need to be recopied. For instance, more data will need to be copied when
folder structures change after files had already been copied earlier. A RoboCopy job can't "play back" a
namespace change. The next job must purge the files previously transported to the old folder structure and
upload the files in the new folder structure again.
Another important aspect is to use the RoboCopy tool effectively. With the recommended RoboCopy script,
you'll create and save a log file for errors. Copy errors can occur - that is normal. These errors often make it
necessary to run multiple rounds of a copy tool like RoboCopy: an initial run, say from a NAS to DataBox or
from a server to an Azure file share, and one or more extra runs with the /MIR switch to catch and retry files
that didn't get copied.
You should be prepared to run multiple rounds of RoboCopy against a given namespace scope. Successive runs
will finish faster as they have less to copy but are constrained increasingly by the speed of processing the
namespace. When you run multiple rounds, you can speed up each round by not having RoboCopy try
unreasonably hard to copy everything in a given run. These RoboCopy switches can make a significant
difference:
/R:n where n = the number of times to retry copying a failed file
/W:n where n = the number of seconds to wait between retries
/R:5 /W:5 is a reasonable setting that you can adjust to your liking. In this example, a failed file will be retried
five times, with five-second wait time between retries. If the file still fails to copy, the next RoboCopy job will try
again. Often files that failed because they are in use or because of timeout issues might eventually be copied
successfully this way.
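Put together, a catch-up run with these retry settings might look like the following sketch; the paths and log
file name are placeholders:

robocopy \\nas01\share1 D:\shares\share1 /MIR /R:5 /W:5 /UNILOG:C:\logs\share1-pass2.log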

Next steps
There is more to discover about Azure file shares. The following articles help you understand advanced options
and best practices, and they also contain troubleshooting help. These articles link to Azure file share
documentation as appropriate.
Migration overview
Monitor, diagnose, and troubleshoot Microsoft Azure Storage
Networking considerations for direct access
Backup: Azure file share snapshots
Migrate from Linux to a hybrid cloud deployment
with Azure File Sync

This migration article is one of several involving the keywords NFS and Azure File Sync. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: Linux Server with Samba ⇒ Windows Server 2012 R2 or later ⇒ sync with Azure file
share(s)
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment.
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Windows Server instances with direct attached storage (DAS). It doesn't support sync
to or from Linux clients, remote Server Message Block (SMB) shares, or Network File System (NFS) shares.
As a result, transforming your file services into a hybrid deployment requires a migration to Windows Server.
This article guides you through the planning and execution of such a migration.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Migration goals
The goal is to move the shares that you have on your Linux Samba server to a Windows Server instance. Then
use Azure File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees
the integrity of the production data and availability during the migration. The latter requires keeping downtime
to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.

Migration overview
As mentioned in the Azure Files migration overview article, using the correct copy tool and approach is
important. Your Linux Samba server is exposing SMB shares directly on your local network. Robocopy, built into
Windows Server, is the best way to move your files in this migration scenario.
If you're not running Samba on your Linux server and rather want to migrate folders to a hybrid deployment on
Windows Server, you can use Linux copy tools instead of Robocopy. Be aware of the fidelity capabilities of your
copy tool. Review the migration basics section in the migration overview article to learn what to look for in a
copy tool.
Phase 1: Identify how many Azure file shares you need
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.

A structured approach to a deployment map


Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.

Phase 2: Provision a suitable Windows Server instance on-premises


Create a Windows Server 2019 instance as a virtual machine or physical server. Windows Server 2012
R2 is the minimum requirement. A Windows Server failover cluster is also supported.
Provision or add direct attached storage (DAS). Network attached storage (NAS) is not supported.
The amount of storage that you provision can be smaller than what you're currently using on your Linux
Samba server. This configuration choice requires that you also make use of Azure File Sync's cloud tiering
feature.
However, when you copy your files from the larger Linux Samba server space to the smaller Windows Server
volume in a later phase, you'll need to work in batches:
1. Move a set of files that fits onto the disk.
2. Let file sync and cloud tiering engage.
3. When more free space is created on the volume, proceed with the next batch of files. Alternatively, review the
RoboCopy command in the upcoming RoboCopy section for use of the new /LFSM switch. Using /LFSM can
significantly simplify your RoboCopy jobs, but it is not compatible with some other RoboCopy switches you
might depend on.
You can avoid this batching approach by provisioning the equivalent space on the Windows Server instance that
your files occupy on the Linux Samba server. Consider enabling deduplication on Windows. If you don't want to
permanently commit this high amount of storage to your Windows Server instance, you can reduce the volume
size after the migration and before adjusting the cloud tiering policies. That creates a smaller on-premises cache
of your Azure file shares.
The resource configuration (compute and RAM) of the Windows Server instance that you deploy depends
mostly on the number of items (files and folders) you'll be syncing. We recommend going with a higher-
performance configuration if you have any concerns.
Learn how to size a Windows Server instance based on the number of items (files and folders) you need to sync.

NOTE
The previously linked article presents a table with a range for server memory (RAM). You can orient toward the smaller
number for your server, but expect that initial sync can take significantly more time.

Phase 3: Deploy the Azure File Sync cloud resource


To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.

Phase 4: Deploy Azure storage resources


In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure
storage accounts and file shares within them.
An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
Caution

If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
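If you prefer scripting this deployment over using the portal, the Az PowerShell module can create both
resources. A minimal sketch, assuming placeholder names and a region that matches your Storage Sync Service:

# Create a storage account for a group of shares (all names are placeholders)
New-AzStorageAccount -ResourceGroupName "myRG" -Name "hrstorageaccount" -SkuName Standard_LRS -Kind StorageV2 -Location "westeurope"

# Create the Azure file share in that account with a 5 TiB quota
New-AzRmStorageShare -ResourceGroupName "myRG" -StorageAccountName "hrstorageaccount" -Name "hr-share" -QuotaGiB 5120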

Phase 5: Deploy the Azure File Sync agent


In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration . This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.

Install-Module -Name Az -AllowClobber


Install-Module -Name Az.StorageSync

If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
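If you'd rather script the registration than use the wizard, the Az.StorageSync module provides a cmdlet for it.
A minimal sketch with placeholder resource names:

# Sign in, then register this server with your Storage Sync Service
Connect-AzAccount
Register-AzStorageSyncServer -ResourceGroupName "myRG" -StorageSyncServiceName "mySyncService"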
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
Phase 6: Configure Azure File Sync on the Windows Server
deployment
Your registered on-premises Windows Server instance must be ready and connected to the internet for this
process.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.
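These portal steps can also be scripted with the Az.StorageSync module. The following is a hedged sketch rather
than the article's prescribed method; all resource names and paths are placeholders, and it assumes a single
registered server:

# Create the sync group (names are placeholders)
$rg = "myRG"
$sss = "mySyncService"
$sg = "hr-share-sync"
New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $sss -Name $sg

# The Azure file share becomes the cloud endpoint of the sync group
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss -SyncGroupName $sg -Name "hr-cloud" -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/hrstorageaccount" -AzureFileShareName "hr-share"

# The provisioned local folder becomes the server endpoint; tier at 99 percent free space for the migration
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $sss
New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss -SyncGroupName $sg -Name "hr-server" -ServerResourceId $server.ResourceId -ServerLocalPath "D:\shares\HR" -CloudTiering -VolumeFreeSpacePercent 99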

IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the
cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast access performance.
Cloud tiering is an optional feature for each Azure File Sync server endpoint.

WARNING
If you provisioned less storage on your Windows Server volumes than your data used on the Linux Samba server, then
cloud tiering is mandatory. If you don't turn on cloud tiering, your server will not free up space to store all files. Set your
tiering policy, temporarily for the migration, to 99 percent free space for a volume. Be sure to return to your cloud tiering
settings after the migration is complete, and set the policy to a more useful level for the long term.

Repeat the steps of sync group creation and the addition of the matching server folder as a server endpoint for
all Azure file shares and server locations that need to be configured for sync.
After the creation of all server endpoints, sync is working. You can create a test file and see it sync up from your
server location to the connected Azure file share (as described by the cloud endpoint in the sync group).
Both locations, the server folders and the Azure file shares, are otherwise empty and awaiting data. In the next
step, you'll begin to copy files into the Windows Server instance for Azure File Sync to move them up to the
cloud. If you've enabled cloud tiering, the server will then begin to tier files if you run out of capacity on the local
volumes.

Phase 7: Robocopy
The basic migration approach is to use Robocopy to copy files and use Azure File Sync to do the syncing.
Run the first local copy to your Windows Server target folder:
1. Identify the first location on your Linux Samba server.
2. Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on
it.
3. Start the copy by using Robocopy.
The following Robocopy command will copy files from your Linux Samba server's storage to your Windows
Server target folder. Windows Server will sync it to the Azure file shares.
If you provisioned less storage on your Windows Server instance than your files take up on the Linux Samba
server, then you have configured cloud tiering. As the local Windows Server volume gets full, cloud tiering will
start and tier files that have successfully synced already. Cloud tiering will generate enough space to continue
the copy from the Linux Samba server. Cloud tiering checks once an hour to see what has synced and to free up
disk space to reach the policy of 99 percent free space for a volume.
It's possible that Robocopy moves files faster than you can sync to the cloud and tier locally, causing you to run
out of local disk space. Robocopy will then fail. We recommend that you work through the shares in a sequence
that prevents the problem. For example, consider not starting Robocopy jobs for all shares at the same time. Or
consider moving shares that fit on the current amount of free space on the Windows Server instance. If your
Robocopy job does fail, you can always rerun the command as long as you use the following mirror/purge
option:

robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>

SWITCH    MEANING

/MT:n    Allows Robocopy to run multithreaded. Default for n is 8. The maximum is 128 threads. While a high
thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with
more threads. Tests with Azure Files indicate that between 8 and 20 threads deliver balanced performance for an
initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network
bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and
thread count per core. Consider whether cores need to be reserved for other tasks that a production server might
have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your
processors can keep them alive at the same time.

/R:n    Maximum retry count for a file that fails to copy on first attempt. Robocopy will try n times before the
file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two
or three if you believe timeout issues caused failures in the past. This may be more common over WAN links.
Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying
again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps
holding the file open may need hours more time. In this case, accepting that the file wasn't copied, and catching
it in one of your planned, subsequent Robocopy runs, may eventually copy the file successfully. That helps the
current run finish faster without being prolonged by many retries that ultimately end up as copy failures
because the files are still open past the retry timeout.

/W:n    Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a
previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n .

/B    Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to
move files that the current user doesn't have permissions for. The backup switch depends on running the
Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure
Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If
you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR    (Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty
subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be
copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you
use this switch, match the source and target folder structures exactly. Matching means copying from the correct
source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful.
When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT    Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change
and an attribute update between two Robocopy runs, it's marked hidden. Without /IT , the ACL change might be
missed by Robocopy and not transferred to the target location.

/COPY:[copyflags]    The fidelity of the file copy. Default: /COPY:DAT . Copy flags: D = Data, A = Attributes, T =
Timestamps, S = Security (NTFS ACLs), O = Owner information, U = Auditing information. Auditing information
can't be stored in an Azure file share.

/DCOPY:[copyflags]    Fidelity for the copy of directories. Default: /DCOPY:DA . Copy flags: D = Data, A =
Attributes, T = Timestamps.

/NP    Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress
significantly lowers copy performance.

/NFL    Specifies that file names aren't logged. Improves copy performance.

/NDL    Specifies that directory names aren't logged. Improves copy performance.

/XD    Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding
the hidden System Volume Information folder. If used as designed, all information in there is specific to the
exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in
the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should
not be considered data loss.

/UNILOG:<file name>    Writes status to the log file as Unicode. (Overwrites the existing log.)

/L    Only for a test run. Files are listed only. They won't be copied, deleted, or time stamped. Often used with
/TEE for console output. Flags from the sample script, like /NP , /NFL , and /NDL , might need to be removed to
achieve properly documented test results.

/LFSM    Only for targets with tiered storage. Not suppor ted when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered
storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a
target enabled for Azure File Sync cloud tiering. It can be used independently of Azure File Sync. In this mode,
Robocopy will pause whenever a file copy would cause the destination volume's free space to go below a "floor"
value. This value can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB ,
nMB , or nGB . If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination
volume's size. Low free space mode isn't compatible with /MT , /EFSRAW , or /ZB . Support for /B was added in
Windows Server 2022.

/Z    Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network
environment. It significantly reduces copy performance because of extra logging.

/ZB    Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option
significantly reduces copy performance because of checkpointing.

IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, ensure that it's at the latest patch level or
that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.

Phase 8: User cut-over


When you run the Robocopy command for the first time, your users and applications are still accessing files on
the Linux Samba server and potentially changing them. It's possible that Robocopy processes a directory and
moves on to the next, and a user in the source location (Linux) then adds, changes, or deletes a file that won't
be processed in the current Robocopy run. This behavior is expected.
The first run is about moving the bulk of the data to your Windows Server instance and into the cloud via Azure
File Sync. This first copy can take a long time, depending on:
Your download bandwidth.
The upload bandwidth.
The local network speed, and how optimally the number of Robocopy threads matches it.
The number of items (files and folders) that Robocopy and Azure File Sync need to process.
After the initial run is complete, run the command again.
It finishes faster the second time because it needs to transport only changes that happened since the last run.
Even during this second run, new changes can still accumulate.
Repeat this process until you're satisfied that the amount of time it takes to complete a Robocopy operation for a
specific location is within an acceptable window for downtime.
When you consider the downtime acceptable and you're prepared to take the Linux location offline, you can
change ACLs on the share root such that users can no longer access the location. Or you can take any other
appropriate step that prevents content from changing in this folder on your Linux server.
Run one last Robocopy round. It will pick up any changes that might have been missed. How long this final step
takes depends on the speed of the Robocopy scan. You can estimate the time (which is equal to your downtime)
by measuring how long the previous run took.
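One simple way to take that measurement is PowerShell's Measure-Command; a sketch with placeholder paths:

# Time a differential run to estimate the final cut-over window
Measure-Command { robocopy \\linux-smb\share1 D:\shares\share1 /MIR /R:5 /W:5 /NP /NFL /NDL } | Select-Object TotalMinutes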
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions as on your Linux Samba server SMB shares. If you have used local users
on your Linux Samba server, you need to re-create these users as Windows Server local users. You also need to
map the existing SIDs that Robocopy moved over to your Windows Server instance to the SIDs of your new
Windows Server local users. If you used Active Directory accounts and ACLs, Robocopy will move them as is,
and no further action is necessary.
You have finished migrating a share or a group of shares into a common root or volume (depending on your
mapping from Phase 1).
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share
at a time.

WARNING
After you've moved all the data from your Linux Samba server to the Windows Server instance, and your migration is
complete, return to all sync groups in the Azure portal. Adjust the percentage of free space for cloud tiering volume to
something better suited for cache utilization, such as 20 percent.

The cloud tiering volume free space policy acts at the volume level, with potentially multiple server
endpoints syncing from it. If you forget to adjust the free space setting on even one server endpoint, sync will
continue to apply the most restrictive rule and attempt to keep free disk space at 99 percent. The local cache
then might
not perform as you expect. The performance might be acceptable if your goal is to have the namespace for a
volume that contains only rarely accessed archival data, and you're reserving the rest of the storage space for
another scenario.

Troubleshoot
The most common problem is that the Robocopy command fails with Volume full on the Windows Server side.
Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has synced. Its
goal is to reach free space of 99 percent on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on Windows Server.
When your Windows Server instance has enough available capacity, rerunning the command will resolve the
problem. Nothing breaks when you get into this situation, and you can move forward with confidence. The
inconvenience of running the command again is the only consequence.
Check the link in the following section for troubleshooting Azure File Sync problems.

Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles contain advanced
options, best practices, and troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting
Migrate from Network Attached Storage (NAS) to a
hybrid cloud deployment with Azure File Sync

This migration article is one of several involving the keywords NAS and Azure File Sync. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ Windows Server ⇒ upload and sync with Azure file share(s)
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment.
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Direct Attached Storage (DAS) locations and does not support sync to Network
Attached Storage (NAS) locations. This fact makes a migration of your files necessary and this article guides you
through the planning and execution of such a migration.

Applies to
FILE SHARE TYPE                                SMB    NFS
Standard file shares (GPv2), LRS/ZRS           Yes    No
Standard file shares (GPv2), GRS/GZRS          Yes    No
Premium file shares (FileStorage), LRS/ZRS     Yes    No

Migration goals
The goal is to move the shares that you have on your NAS appliance to a Windows Server. Then utilize Azure
File Sync for a hybrid cloud deployment. Generally, migrations need to be done in a way that guarantees the
integrity of the production data and its availability during the migration. The latter requires keeping downtime
to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.

Migration overview
As mentioned in the Azure Files migration overview article, using the correct copy tool and approach is
important. Your NAS appliance is exposing SMB shares directly on your local network. RoboCopy, built into
Windows Server, is the best way to move your files in this migration scenario.
Phase 1: Identify how many Azure file shares you need
Phase 2: Provision a suitable Windows Server on-premises
Phase 3: Deploy the Azure File Sync cloud resource
Phase 4: Deploy Azure storage resources
Phase 5: Deploy the Azure File Sync agent
Phase 6: Configure Azure File Sync on the Windows Server
Phase 7: RoboCopy
Phase 8: User cut-over

Phase 1: Identify how many Azure file shares you need


In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.

A structured approach to a deployment map


Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.

Phase 2: Provision a suitable Windows Server on-premises


Create a Windows Server 2022 or Windows Server 2019 virtual machine, or deploy a physical server. A
Windows Server failover cluster is also supported.
Provision or add Direct Attached Storage (DAS). Network Attached Storage (NAS) is not supported.
The amount of storage you provision can be smaller than what you are currently using on your NAS
appliance. This configuration choice requires that you also make use of Azure File Sync's cloud tiering
feature. However, when you copy your files from the larger NAS space to the smaller Windows Server
volume in a later phase, you'll need to work in batches:
1. Move a set of files that fits onto the disk.
2. Let file sync and cloud tiering engage.
3. When more free space is created on the volume, proceed with the next batch of files. Alternatively,
review the RoboCopy command in the RoboCopy section of this article for use of the new /LFSM
switch. Using /LFSM can significantly simplify your RoboCopy jobs, but it is not compatible with some
other RoboCopy switches you might depend on. Only use the /LFSM switch when the migration
destination is local storage. It's not supported when the destination is a remote SMB share.
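For illustration, a batched copy in low free space mode might look like the following sketch. The paths and
floor value are placeholders; remember that /LFSM isn't compatible with /MT and that combining /B with /LFSM
requires Windows Server 2022:

robocopy \\nas01\share1 D:\shares\share1 /MIR /B /LFSM:20GB /R:2 /W:1 /UNILOG:C:\logs\share1-lfsm.log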
You can avoid this batching approach by provisioning the equivalent space on the Windows Server that
your files occupy on the NAS appliance. Consider deduplication on NAS / Windows. If you don't want to
permanently commit this high amount of storage to your Windows Server, you can reduce the volume
size after the migration and before you adjust the cloud tiering policies. That creates a smaller on-
premises cache of your Azure file shares.
The resource configuration (compute and RAM) of the Windows Server you deploy depends mostly on the
number of items (files and folders) you will be syncing. We recommend going with a higher performance
configuration if you have any concerns.
Learn how to size a Windows Server based on the number of items (files and folders) you need to sync.

NOTE
The previously linked article presents a table with a range for server memory (RAM). You can orient towards the smaller
number for your server but expect that initial sync can take significantly more time.

Phase 3: Deploy the Azure File Sync cloud resource


To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.

Phase 4: Deploy Azure storage resources


In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure
storage accounts and file shares within them.
An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
Caution
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.

Phase 5: Deploy the Azure File Sync agent


In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration . This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.

Install-Module -Name Az -AllowClobber


Install-Module -Name Az.StorageSync

If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
Phase 6: Configure Azure File Sync on the Windows Server
Your registered on-premises Windows Server must be ready and connected to the internet for this process.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.

IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the
cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast access performance.
Cloud tiering is an optional feature for each Azure File Sync server endpoint.

WARNING
If you provisioned less storage on your Windows Server volumes than your data used on the NAS appliance, then cloud
tiering is mandatory. If you don't turn on cloud tiering, your server will not free up space to store all files. Set your
tiering policy, temporarily for the migration, to 99 percent free space for a volume. Be sure to return to your cloud tiering
settings after the migration is complete, and set the policy to a more useful level for the long term.

Repeat the steps of sync group creation and the addition of the matching server folder as a server endpoint for
all Azure file shares and server locations that need to be configured for sync.
After the creation of all server endpoints, sync is working. You can create a test file and see it sync up from your
server location to the connected Azure file share (as described by the cloud endpoint in the sync group).
Both locations, the server folders and the Azure file shares, are otherwise empty and awaiting data. In the next
step, you'll begin to copy files into the Windows Server for Azure File Sync to move them up to the cloud. If
you've enabled cloud tiering, the server will then begin to tier files, should you run out of capacity on the local
volumes.

Phase 7: RoboCopy
The basic migration approach is to use RoboCopy to copy files from your NAS appliance to your Windows
Server, and Azure File Sync to sync them to the Azure file shares.
Run the first local copy to your Windows Server target folder:
Identify the first location on your NAS appliance.
Identify the matching folder on the Windows Server that already has Azure File Sync configured on it.
Start the copy using RoboCopy.
The following RoboCopy command will copy files from your NAS storage to your Windows Server target folder.
The Windows Server will sync it to the Azure file share(s).
If you provisioned less storage on your Windows Server than your files take up on the NAS appliance, then you
have configured cloud tiering. As the local Windows Server volume gets full, cloud tiering will kick in and tier
files that have successfully synced already. Cloud tiering will generate enough space to continue the copy from
the NAS appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach
the policy of 99 percent free space for the volume. It's possible that RoboCopy moves files faster than you can
sync to the cloud and tier locally, thus running out of local disk space and causing RoboCopy to fail. We
recommend that you work through the shares in a sequence that prevents this problem. For example, consider
not starting RoboCopy jobs for all shares at the same time, or moving only shares that fit on the current amount
of free space on the Windows Server.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>
SWITCH | MEANING

/MT:n | Allows Robocopy to run multithreaded. The default for n is 8; the maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads show balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n | Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two or three if you believe timeout issues caused failures in the past, which may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n | Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B | Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR | (Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT | Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags] | The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags] | Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP | Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL | Specifies that file names aren't logged. Improves copy performance.

/NDL | Specifies that directory names aren't logged. Improves copy performance.

/XD | Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name> | Writes status to the log file as Unicode. (Overwrites the existing log.)

/L | Only for a test run. Files are listed only; they won't be copied, deleted, or time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM | Only for targets with tiered storage; not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering, but it can be used independently of Azure File Sync. In this mode, Robocopy pauses whenever a file copy would cause the destination volume's free space to go below a "floor" value, which can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z | Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB | Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.
IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, make sure it's at the latest patch level, or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
Phase 8: User cut-over
When you run the RoboCopy command for the first time, your users and applications are still accessing files on the NAS and potentially changing them. It's possible that RoboCopy processes a directory, moves on to the next, and then a user on the source location (NAS) adds, changes, or deletes a file that won't be processed in this current RoboCopy run. This behavior is expected.
The first run is about moving the bulk of the data to your Windows Server and into the cloud via Azure File Sync. This first copy can take a long time, depending on:
your download bandwidth
the upload bandwidth
the local network speed and how optimally the number of RoboCopy threads matches it
the number of items (files and folders) that need to be processed by RoboCopy and Azure File Sync
Once the initial run is complete, run the command again.
The second run will finish faster, because it only needs to transport changes that happened since the last run. During this second run, new changes can still accumulate.
Repeat this process until you're satisfied that the amount of time it takes to complete a RoboCopy for a specific location is within an acceptable window for downtime.
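To estimate that window, you can time a delta run. A minimal sketch that wraps the same RoboCopy command in Measure-Command, assuming hypothetical source, target, and log paths:

# Times a delta (/MIR) run; the elapsed time approximates the downtime a final catch-up copy would need.
$elapsed = Measure-Command {
    robocopy "\\nas01\hr" "D:\shares\HR" /MT:16 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:"C:\logs\hr-robocopy.log"
}
Write-Output "Delta run took $([math]::Round($elapsed.TotalMinutes, 1)) minutes"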
When you consider the downtime acceptable, you need to remove user access to your NAS-based shares. You can do that by any steps that prevent users from changing the file and folder structure and content. Examples are pointing your DFS Namespace to a non-existing location or changing the root ACLs on the share.
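For example, if you publish the NAS share through DFS-N, you can take the folder target offline. A sketch using the DFSN PowerShell module, with hypothetical namespace and NAS paths:

# Take the DFS-N folder target offline so clients can no longer reach the NAS share.
Set-DfsnFolderTarget -Path "\\contoso.com\public\HR" -TargetPath "\\nas01\hr" -State Offline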
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final step takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your downtime) by measuring how long the previous run took.
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure to set the same share-level permissions as on your NAS SMB share. If you had an enterprise-class, domain-joined NAS, the user SIDs will automatically match because the users exist in Active Directory and RoboCopy copies files and metadata at full fidelity. If you used local users on your NAS, you need to re-create these users as Windows Server local users and map the existing SIDs that RoboCopy moved over to the SIDs of your new Windows Server local users.
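A minimal sketch of creating that share, assuming a hypothetical local path and domain groups (set the access rights to mirror your NAS share-level permissions):

# Create the SMB share on the synced folder and apply share-level permissions.
New-SmbShare -Name "HR" -Path "D:\shares\HR" -FullAccess "CONTOSO\HR-Admins" -ChangeAccess "CONTOSO\HR-Users"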
You've finished migrating a share or group of shares into a common root or volume (depending on your mapping from Phase 1).
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share at a time.
WARNING
Once you've moved all the data from your NAS to the Windows Server and your migration is complete, return to all sync groups in the Azure portal and adjust the cloud tiering volume free space percent value to something better suited for cache utilization, for example 20%.
The cloud tiering volume free space policy acts on a volume level, with potentially multiple server endpoints syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue to apply the most restrictive rule and attempt to keep 99% of disk space free, and the local cache won't perform as you might expect. The exception is if it's your goal to have a volume that contains only the namespace for rarely accessed, archival data, and you're reserving the rest of the storage space for another scenario.
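If you prefer scripting over the portal, the policy can be adjusted per server endpoint with the Az.StorageSync module. A sketch with hypothetical resource names (repeat for every server endpoint on the volume):

# Lower the volume free space policy from the temporary 99% migration setting to 20%.
Set-AzStorageSyncServerEndpoint -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "hr-sync" -Name "srv01-hr" -CloudTiering -VolumeFreeSpacePercent 20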
Troubleshoot
The most likely issue you can run into is that the RoboCopy command fails with "Volume full" on the Windows Server side. Cloud tiering acts once every hour to evacuate synced content from the local Windows Server disk. Its goal is to reach your 99% free space on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows
Server.
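You can also check from PowerShell. A quick sketch, assuming the volume uses drive letter D:

# Shows remaining free space on the volume that hosts the server endpoint.
Get-Volume -DriveLetter D | Select-Object DriveLetter, @{Name = 'FreeSpaceGB'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 1) }}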
When your Windows Server has sufficient available capacity, rerunning the command will resolve the problem. Nothing breaks when you get into this situation, and you can move forward with confidence. The inconvenience of running the command again is the only consequence.
Check the link in the following section for troubleshooting Azure File Sync issues.
Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles will help you understand advanced options and best practices, and they also contain troubleshooting help. These articles link to Azure file share documentation as appropriate.
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting
Use Data Box to migrate from Network Attached
Storage (NAS) to a hybrid cloud deployment by
using Azure File Sync
5/20/2022 • 36 minutes to read
This migration article is one of several that apply to the keywords NAS, Azure File Sync, and Azure Data Box.
Check if this article applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ Data Box ⇒ Azure file share ⇒ sync with Windows Server
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Direct Attached Storage (DAS) locations. It doesn't support sync to Network Attached
Storage (NAS) locations. So you need to migrate your files. This article guides you through the planning and
implementation of that migration.
Applies to
FILE SHARE TYPE | SMB | NFS
Standard file shares (GPv2), LRS/ZRS | Yes | No
Standard file shares (GPv2), GRS/GZRS | Yes | No
Premium file shares (FileStorage), LRS/ZRS | Yes | No
Migration goals
The goal is to move the shares that you have on your NAS appliance to Windows Server. You'll then use Azure
File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees the integrity
of the production data and availability during the migration. The latter requires keeping downtime to a
minimum so that it meets or only slightly exceeds regular maintenance windows.
Migration overview
The migration process consists of several phases. You'll need to:
Deploy Azure storage accounts and file shares.
Deploy an on-premises computer running Windows Server.
Configure Azure File Sync.
Migrate files by using Robocopy.
Do the cutover.
The following sections describe the phases of the migration process in detail.
TIP
If you're returning to this article, use the navigation on the right side of the screen to jump to the migration phase where
you left off.
Phase 1: Determine how many Azure file shares you need
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't only beneficial for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
A structured approach to a deployment map
Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
Download a namespace-mapping template.
Phase 2: Deploy Azure storage resources
In this phase, consult the mapping table from Phase 1 and use it to provision the correct number of Azure
storage accounts and file shares within them.
An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
Caution
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
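If you'd rather script the deployment than use the portal, here's a minimal Azure PowerShell sketch; the resource group, account, and share names are hypothetical placeholders for the names in your mapping table:

# One storage account per highly active share is the safest default.
New-AzStorageAccount -ResourceGroupName "rg-afs-migration" -Name "stcontosohr001" -Location "eastus" -SkuName "Standard_LRS" -Kind "StorageV2"

# Create the Azure file share in that account (quota in GiB).
New-AzRmStorageShare -ResourceGroupName "rg-afs-migration" -StorageAccountName "stcontosohr001" -Name "hr-share" -QuotaGiB 5120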
Phase 3: Determine how many Azure Data Box appliances you need
Start this step only after you've finished the previous phase. Your Azure storage resources (storage accounts and
file shares) should be created at this time. When you order your Data Box, you need to specify the storage
accounts into which the Data Box is moving data.
In this phase, you need to map the results of the migration plan from the previous phase to the limits of the
available Data Box options. These considerations will help you make a plan for which Data Box options to choose
and how many of them you'll need to move your NAS shares to Azure file shares.
To determine how many devices you need and their types, consider these important limits:
Any Azure Data Box appliance can move data into up to 10 storage accounts.
Each Data Box option comes with its own usable capacity. See Data Box options.
Consult your migration plan to find the number of storage accounts you've decided to create and the shares in
each one. Then look at the size of each of the shares on your NAS. Combining this information will allow you to
optimize and decide which appliance should be sending data to which storage accounts. Two Data Box devices
can move files into the same storage account, but don't split content of a single file share across two Data Boxes.
Data Box options
For a standard migration, choose one or a combination of these Data Box options:
Data Box Disk. Microsoft will send you between one and five SSD disks that have a capacity of 8 TiB each, for a maximum total of 40 TiB. The usable capacity is about 20 percent less because of encryption and file-system overhead. For more information, see Data Box Disk documentation.
Data Box. This option is the most common one. Microsoft will send you a ruggedized Data Box appliance that works similar to a NAS. It has a usable capacity of 80 TiB. For more information, see Data Box documentation.
Data Box Heavy. This option features a ruggedized Data Box appliance on wheels that works similar to a NAS. It has a capacity of 1 PiB. The usable capacity is about 20 percent less because of encryption and file-system overhead. For more information, see Data Box Heavy documentation.
Phase 4: Provision a suitable Windows Server instance on-premises
While you wait for your Azure Data Box devices to arrive, you can start reviewing the needs of one or more
Windows Server instances you'll be using with Azure File Sync.
Create a Windows Server 2019 instance (at a minimum, Windows Server 2012 R2) as a virtual machine or
physical server. A Windows Server failover cluster is also supported.
Provision or add Direct Attached Storage. (DAS, as opposed to NAS, which isn't supported.)
The resource configuration (compute and RAM) of the Windows Server instance you deploy depends mostly on
the number of files and folders you'll be syncing. We recommend a higher performance configuration if you
have any concerns.
Learn how to size a Windows Server instance based on the number of items you need to sync.
NOTE
The previously linked article includes a table with a range for server memory (RAM). You can use numbers at the lower
end of the range for your server, but expect the initial sync to take significantly longer.
Phase 5: Copy files onto your Data Box
When your Data Box arrives, you need to set it up in the line of sight to your NAS appliance. Follow the setup
documentation for the type of Data Box you ordered:
Set up Data Box.
Set up Data Box Disk.
Set up Data Box Heavy.
Depending on the type of Data Box, Data Box copy tools might be available. At this point, we don't recommend
them for migrations to Azure file shares because they don't copy your files to the Data Box with full fidelity. Use
Robocopy instead.
When your Data Box arrives, it will have pre-provisioned SMB shares available for each storage account you
specified when you ordered it.
If your files go into a premium Azure file share, there will be one SMB share per premium "File storage"
storage account.
If your files go into a standard storage account, there will be three SMB shares per standard (GPv1 and GPv2)
storage account. Only the file shares that end with _AzFiles are relevant for your migration. Ignore any
block and page blob shares.
Follow the steps in the Azure Data Box documentation:
1. Connect to Data Box.
2. Copy data to Data Box.
You can use Robocopy (follow the instructions below) or the new Data Box data copy service.
3. Prepare your Data Box for upload to Azure.
TIP
As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your
Data Box with full fidelity. Follow this data copy service tutorial and make sure to set the correct Azure file share target.
Data Box documentation specifies a Robocopy command. That command isn't suitable for preserving the full file
and folder fidelity. Use this command instead:
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>
SWITCH | MEANING

/MT:n | Allows Robocopy to run multithreaded. The default for n is 8; the maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads show balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n | Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two or three if you believe timeout issues caused failures in the past, which may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n | Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B | Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR | (Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT | Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags] | The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags] | Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP | Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL | Specifies that file names aren't logged. Improves copy performance.

/NDL | Specifies that directory names aren't logged. Improves copy performance.

/XD | Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name> | Writes status to the log file as Unicode. (Overwrites the existing log.)

/L | Only for a test run. Files are listed only; they won't be copied, deleted, or time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM | Only for targets with tiered storage; not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering, but it can be used independently of Azure File Sync. In this mode, Robocopy pauses whenever a file copy would cause the destination volume's free space to go below a "floor" value, which can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z | Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB | Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.
IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, make sure it's at the latest patch level, or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
Phase 6: Deploy the Azure File Sync cloud resource
Before you continue with this guide, wait until all of your files have arrived in the correct Azure file shares. The
process of shipping and ingesting Data Box data will take time.
To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.
Phase 7: Deploy the Azure File Sync agent
In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration . This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.

Install-Module -Name Az -AllowClobber
Install-Module -Name Az.StorageSync
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers. You'll see your server listed there.
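If you script the registration instead of using the wizard, a sketch with hypothetical resource names (run it on the server after the agent is installed):

# Registers this server with the Storage Sync Service.
Register-AzStorageSyncServer -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService"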
Phase 8: Configure Azure File Sync on the Windows Server instance
Your registered on-premises Windows Server instance must be ready and connected to the internet for this
process.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to display the contents of the sync group. You'll see your Azure file share under Cloud endpoints.
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become the path for this server endpoint. Turn on the cloud tiering feature and select Namespace only in the initial download section. If you prefer scripting, see the sketch that follows.
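The same topology can also be scripted with the Az.StorageSync module. A sketch, with all resource names and paths as hypothetical placeholders:

# Create the sync group and add the Azure file share as its cloud endpoint.
New-AzStorageSyncGroup -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService" -Name "hr-sync"
$storageAccount = Get-AzStorageAccount -ResourceGroupName "rg-afs-migration" -Name "stcontosohr001"
New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "hr-sync" -Name "hr-cloud" -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "hr-share"

# Add the local folder as a server endpoint with cloud tiering and namespace-only initial download.
# Assumes a single registered server; index into the result if you have more.
# 99% volume free space is the temporary migration-time policy; lower it after cut-over.
$server = Get-AzStorageSyncServer -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService"
New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-afs-migration" -StorageSyncServiceName "myStorageSyncService" -SyncGroupName "hr-sync" -Name "srv01-hr" -ServerResourceId $server.ResourceId -ServerLocalPath "D:\shares\HR" -CloudTiering -VolumeFreeSpacePercent 99 -InitialDownloadPolicy NamespaceOnly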
IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud but have the full namespace available. Locally interesting data is also cached locally for fast access performance. Cloud tiering is optional. You can set it individually for each Azure File Sync server endpoint. You need to use this feature if you don't have enough local disk capacity on the Windows Server instance to hold all cloud data and you want to avoid downloading all data from the cloud.
For all Azure file shares / server locations that you need to configure for sync, repeat the steps to create sync
groups and to add the matching server folders as server endpoints. Wait until the sync of the namespace is
complete. The following section will explain how you can ensure the sync is complete.
NOTE
After you create a server endpoint, sync is working. But sync needs to enumerate (discover) the files and folders you
moved via Data Box into the Azure file share. Depending on the size of the namespace, it can take a long time before the
namespace from the cloud appears on the server.
Phase 9: Wait for the namespace to fully appear on the server
Before you continue with the next steps of your migration, wait until the server has fully downloaded the
namespace from the cloud share. If you start moving files onto the server too early, you risk unnecessary
uploads and even file sync conflicts.
To determine if your server has completed the initial download sync, open Event Viewer on your syncing
Windows Server instance and use the Azure File Sync telemetry event log. The telemetry event log is in Event
Viewer under Applications and Services\Microsoft\FileSync\Agent.
Search for the most recent 9102 event. Event ID 9102 is logged when a sync session completes. In the event text,
there's a field for the download sync direction. ( HResult needs to be zero, and files need to be downloaded.)
You want to see two consecutive events of this type, with this content, to ensure that the server has finished
downloading the namespace. It's OK if there are other events between the two 9102 events.
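You can also check this from PowerShell. A sketch that assumes the agent's telemetry channel name, which may vary by agent version:

# Look at the two most recent sync-session results (Event ID 9102).
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-FileSync-Agent/Telemetry'; Id = 9102 } -MaxEvents 2 | Format-List TimeCreated, Message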
Phase 10: Run Robocopy from your NAS
After your server completes the initial sync of the entire namespace from the cloud share, you can continue with
this step. The initial sync must be complete before you continue with this step. See the previous section for
details.
In this step, you'll run Robocopy jobs to sync your cloud shares with the latest changes on your NAS that
occurred since you forked your shares onto the Data Box. This Robocopy run might finish quickly or take a while,
depending on the amount of churn that happened on your NAS shares.
WARNING
Because of regressed Robocopy behavior in Windows Server 2019, the Robocopy /MIR switch isn't compatible with tiered target directories. You can't use Windows Server 2019 or a Windows 10 client for this phase of the migration. Use Robocopy on an intermediate Windows Server 2016 instance.
Here's the basic migration approach:
Run Robocopy from your NAS appliance to sync your Windows Server instance.
Use Azure File Sync to sync the Azure file shares from Windows Server.
Run the first local copy to your Windows Server target folder:
1. Identify the first location on your NAS appliance.
2. Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on
it.
3. Start the copy by using Robocopy.
The following Robocopy command will copy only the differences (updated files and folders) from your NAS
storage to your Windows Server target folder. The Windows Server instance will then sync them to the Azure
file shares.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>
SWITCH | MEANING

/MT:n | Allows Robocopy to run multithreaded. The default for n is 8; the maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads show balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n | Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two or three if you believe timeout issues caused failures in the past, which may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n | Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B | Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR | (Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT | Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags] | The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags] | Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP | Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL | Specifies that file names aren't logged. Improves copy performance.

/NDL | Specifies that directory names aren't logged. Improves copy performance.

/XD | Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name> | Writes status to the log file as Unicode. (Overwrites the existing log.)

/L | Only for a test run. Files are listed only; they won't be copied, deleted, or time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM | Only for targets with tiered storage; not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering, but it can be used independently of Azure File Sync. In this mode, Robocopy pauses whenever a file copy would cause the destination volume's free space to go below a "floor" value, which can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z | Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB | Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.
IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, make sure it's at the latest patch level, or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
If you provisioned less storage on your Windows Server instance than your files use on the NAS appliance,
you've configured cloud tiering. As the local Windows Server volume becomes full, cloud tiering will kick in and
tier files that have already successfully synced. Cloud tiering will generate enough space to continue the copy
from the NAS appliance. Cloud tiering checks once an hour to determine what has synced and to free up disk
space to reach the 99 percent volume free space.
Robocopy might need to move more files than you can store locally on the Windows Server instance. You can
expect Robocopy to move faster than Azure File Sync can upload your files and tier them off your local volume.
In this situation, Robocopy will fail. We recommend that you work through the shares in a sequence that
prevents this scenario. For example, move only shares that fit in the free space available on the Windows Server
instance. Or avoid starting Robocopy jobs for all shares at the same time. The good news is that the /MIR
switch will ensure that only deltas are moved. After a delta has been moved, a restarted job won't need to move
the file again.
Do the cutover
When you run the Robocopy command for the first time, your users and applications will still be accessing files
on the NAS and potentially changing them. Robocopy will process a directory and then move on to the next one.
A user on the NAS might then add, change, or delete a file on the first directory that won't be processed during
the current Robocopy run. This behavior is expected.
The first run is about moving the bulk of the churned data to your Windows Server instance and into the cloud
via Azure File Sync. This first copy can take a long time, depending on:
The upload bandwidth.
The local network speed and how optimally the number of Robocopy threads matches it.
The number of items (files and folders) that need to be processed by Robocopy and Azure File Sync.
After the initial run is complete, run the command again.
Robocopy will finish faster the second time you run it for a share. It needs to transport only changes that
happened since the last run. You can run repeated jobs for the same share.
When you consider downtime acceptable, you need to remove user access to your NAS-based shares. You can
do that in any way that prevents users from changing the file and folder structure and the content. For example,
you can point your DFS namespace to a location that doesn't exist or change the root ACLs on the share.
Run Robocopy one last time. It will pick up any changes that have been missed. How long this final step takes
depends on the speed of the Robocopy scan. You can estimate the time (which is equal to your downtime) by
measuring the length of the previous run.
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions that are on your NAS SMB share. If you had an enterprise-class,
domain-joined NAS, the user SIDs will automatically match because the users are in Active Directory and
Robocopy copies files and metadata at full fidelity. If you have used local users on your NAS, you need to:
Re-create these users as Windows Server local users.
Map the existing SIDs that Robocopy moved over to your Windows Server instance to the SIDs of your new
Windows Server local users.
You've finished migrating a share or group of shares into a common root or volume (depending on your
mapping from Phase 1).
You can try to run a few of these copies in parallel. We recommend that you process the scope of one Azure file
share at a time.
Deprecated option: "offline data transfer"
Before Azure File Sync agent version 13 released, Data Box integration was accomplished through a process
called "offline data transfer". This process is deprecated. With agent version 13, it was replaced with the much
simpler and faster steps described in this article. If you know you want to use the deprecated "offline data
transfer" functionality, you can still do so. It is still available by using a specific, older AFS PowerShell module:
Install-Module Az.StorageSync -RequiredVersion 1.4.0
Import-Module Az.StorageSync -RequiredVersion 1.4.0
# Verify the specific version is loaded:
Get-Module Az.StorageSync
WARNING
After May 15, 2022 you will no longer be able to create a server endpoint in the "offline data transfer" mode. Migrations
in progress with this method must finish before July 15, 2022. If your migration continues to run with an "offline data
transfer" enabled server endpoint, the server will begin to upload remaining files from the server on July 15, 2022 and no
longer leverage files transferred with Azure Data Box to the staging share.
Troubleshooting
The most common problem is for the Robocopy command to fail with "Volume full" on the Windows Server
side. Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has
synced. Its goal is to reach your 99 percent free space on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows
Server instance.
When your Windows Server instance has enough available capacity, run the command again to resolve the
problem. Nothing breaks in this situation. You can move forward with confidence. The inconvenience of running
the command again is the only consequence.
To troubleshoot Azure File Sync problems, see the article listed in the next section.
Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles will help you
understand advanced options and best practices. They also provide help with troubleshooting. These articles
contain links to the Azure file share documentation where appropriate.
Migration overview
Planning for an Azure File Sync deployment
Create a file share
Troubleshoot Azure File Sync
StorSimple 8100 and 8600 migration to Azure File
Sync
5/20/2022 • 68 minutes to read
The StorSimple 8000 series is represented by either the 8100 or the 8600 physical, on-premises appliances and
their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration
guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure
File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-
premises functionality.
The StorSimple 8000 series will reach its end of life in December 2022. It's important to begin planning your
migration as soon as possible. This article provides the necessary background knowledge and migration steps
for a successful migration to Azure File Sync.
Phase 1: Prepare for migration
This section contains the steps you should take at the beginning of your migration from StorSimple volumes to
Azure file shares.
Inventory
When you begin planning your migration, first identify all the StorSimple appliances and volumes you need to
migrate. After you've done that, you can decide on the best migration path for you.
StorSimple physical appliances (8000 series) use this migration guide.
Virtual appliances, StorSimple 1200 series, use a different migration guide.
Migration cost summary
Migrations to Azure file shares from StorSimple volumes via migration jobs in a StorSimple Data Manager
resource are free of charge. Other costs might be incurred during and after a migration:
Network egress: Your StorSimple files live in a storage account within a specific Azure region. If you
provision the Azure file shares you migrate into a storage account that's located in the same Azure region, no
egress cost will occur. You can move your files to a storage account in a different region as part of this
migration. In that case, egress costs will apply to you.
Azure file share transactions: When files are copied into an Azure file share (as part of a migration or
outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your
Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the
migration is finished. The following phases will call this out at the appropriate point.
Change an Azure file share tier: Changing the tier of an Azure file share costs transactions. In most cases, it will be more cost-efficient to follow the advice from the previous point.
Storage cost: When this migration starts copying files into an Azure file share, storage is consumed and
billed. Migrated backups will become Azure file share snapshots. File share snapshots only consume storage
capacity for the differences they contain.
StorSimple: Until you have a chance to deprovision the StorSimple devices and storage accounts,
StorSimple cost for storage, backups, and appliances will continue to occur.
Direct-share-access vs. Azure File Sync
Azure file shares open up a whole new world of opportunities for structuring your file services deployment. An
Azure file share is just an SMB share in the cloud that you can set up to have users access directly over the SMB
protocol with the familiar Kerberos authentication and existing NTFS permissions (file and folder ACLs) working
natively. Learn more about identity-based access to Azure file shares.
An alternative to direct access is Azure File Sync. Azure File Sync is a direct analog for StorSimple's ability to
cache frequently used files on-premises.
Azure File Sync is a Microsoft cloud service, based on two main components:
File synchronization and cloud tiering to create a performance access cache on any Windows Server.
File shares as native storage in Azure that can be accessed over multiple protocols like SMB and file REST.
Azure file shares retain important file fidelity aspects on stored files like attributes, permissions, and timestamps.
With Azure file shares, there's no longer a need for an application or service to interpret the files and folders
stored in the cloud. You can access them natively over familiar protocols and clients like Windows File Explorer.
Azure file shares allow you to store general-purpose file server data and application data in the cloud. Backup of
an Azure file share is a built-in functionality and can be further enhanced by Azure Backup.
This article focuses on the migration steps. If you want to learn more about Azure File Sync before migrating,
see the following articles:
Azure File Sync overview
Azure File Sync deployment guide
StorSimple service data encryption key
When you first set up your StorSimple appliance, it generated a "service data encryption key" and instructed you
to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the
StorSimple appliance stores your files.
The "service data encryption key" is necessary for a successful migration. Now is a good time to retrieve this key
from your records, one for each of the appliances in your inventory.
If you can't find the keys in your records, you can generate a new key from the appliance. Each appliance has a
unique encryption key.
Change the service data encryption key
Service data encryption keys are used to encrypt confidential customer data, such as storage account
credentials, that are sent from your StorSimple Manager service to the StorSimple device. You will need to
change these keys periodically if your IT organization has a key rotation policy on the storage devices. The key
change process can be slightly different depending on whether there is a single device or multiple devices
managed by the StorSimple Manager service. For more information, go to StorSimple security and data
protection.
Changing the service data encryption key is a 3-step process:
1. Using Windows PowerShell scripts for Azure Resource Manager, authorize a device to change the service
data encryption key.
2. Using Windows PowerShell for StorSimple, initiate the service data encryption key change.
3. If you have more than one StorSimple device, update the service data encryption key on the other devices.
Step 1: Use a Windows PowerShell script to authorize a device to change the service data encryption key
Typically, the device administrator will request that the service administrator authorize a device to change
service data encryption keys. The service administrator will then authorize the device to change the key.
This step is performed using the Azure Resource Manager based script. The service administrator can select a
device that is eligible to be authorized. The device is then authorized to start the service data encryption key
change process.
For more information about using the script, go to Authorize-ServiceEncryptionRollover.ps1
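If it helps to see the shape of such an invocation, the following is a hypothetical sketch only; the parameter names mirror those of the Update-CloudApplianceServiceEncryptionKey.ps1 script shown later in this article and are assumptions, so verify them against the script's own documentation before running it.

# Hypothetical invocation sketch - parameter names are assumptions:
.\Authorize-ServiceEncryptionRollover.ps1 -SubscriptionId <subscription> `
    -TenantId <tenant-id> `
    -ResourceGroupName <resource-group> `
    -ManagerName <device-manager>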
Which devices can be authorized to change service data encryption keys?
A device must meet the following criteria before it can be authorized to initiate service data encryption key
changes:
The device must be online to be eligible for service data encryption key change authorization.
You can authorize the same device again after 30 minutes if the key change has not been initiated.
You can authorize a different device, provided that the key change has not been initiated by the previously
authorized device. After the new device has been authorized, the old device cannot initiate the change.
You cannot authorize a device while the rollover of the service data encryption key is in progress.
You can authorize a device when some of the devices registered with the service have rolled over the
encryption while others have not.
Step 2: Use Windows PowerShell for StorSimple to initiate the service data encryption key change
This step is performed in the Windows PowerShell for StorSimple interface on the authorized StorSimple device.

NOTE
No operations can be performed in the Azure portal of your StorSimple Manager service until the key rollover is
completed.

If you are using the device serial console to connect to the Windows PowerShell interface, perform the following
steps.
To initiate the service data encryption key change
1. Select option 1 to log on with full access.
2. At the command prompt, type:
Invoke-HcsmServiceDataEncryptionKeyChange

3. After the cmdlet has successfully completed, you will get a new service data encryption key. Copy and
save this key for use in step 3 of this process. This key will be used to update all the remaining devices
registered with the StorSimple Manager service.

NOTE
This process must be initiated within four hours of authorizing a StorSimple device.

This new key is then sent to the service to be pushed to all the devices that are registered with the
service. An alert will then appear on the service dashboard. The service will disable all the operations on
the registered devices, and the device administrator will then need to update the service data encryption
key on the other devices. However, the I/Os (hosts sending data to the cloud) will not be disrupted.
If you have a single device registered to your service, the rollover process is now complete and you can
skip the next step. If you have multiple devices registered to your service, proceed to step 3.
Step 3: Update the service data encryption key on other StorSimple devices
These steps must be performed in the Windows PowerShell interface of your StorSimple device if you have
multiple devices registered to your StorSimple Manager service. The key that you obtained in Step 2 must be used to update all the remaining StorSimple devices registered with the StorSimple Manager service.
Perform the following steps to update the service data encryption on your device.
To update the service data encryption key on physical devices
1. Use Windows PowerShell for StorSimple to connect to the console. Select option 1 to log on with full access.
2. At the command prompt, type: Invoke-HcsmServiceDataEncryptionKeyChange -ServiceDataEncryptionKey
3. Provide the service data encryption key that you obtained in Step 2: Use Windows PowerShell for StorSimple
to initiate the service data encryption key change.
To update the service data encryption key on all the 8010/8020 cloud appliances
1. Download and set up the Update-CloudApplianceServiceEncryptionKey.ps1 PowerShell script.
2. Open PowerShell and at the command prompt, type:
Update-CloudApplianceServiceEncryptionKey.ps1 -SubscriptionId [subscription] -TenantId [tenantid] -
ResourceGroupName [resource group] -ManagerName [device manager]

This script ensures that the service data encryption key is set on all the 8010/8020 cloud appliances under the device manager.
Caution

When you're deciding how to connect to your StorSimple appliance, consider the following:
Connecting through an HTTPS session is the most secure and recommended option.
Connecting directly to the device serial console is secure, but connecting to the serial console over network
switches is not.
HTTP session connections are an option but are not encrypted. They're not recommended unless they're used within a closed, trusted network.
Known limitations
The StorSimple Data Manager and Azure file shares have a few limitations you should consider before you
begin your migration, as they can prevent a migration:
Only NTFS volumes from your StorSimple appliance are supported. ReFS volumes are not supported.
Any volume placed on Windows Server Dynamic Disks is not supported. (Dynamic Disks were deprecated before Windows Server 2012.)
The service doesn't work with volumes that are BitLocker encrypted or have Data Deduplication enabled.
Corrupted StorSimple backups can't be migrated.
Special networking options, such as firewalls or private endpoint-only communication, can't be enabled on either the source storage account where StorSimple backups are stored or the target storage account that holds your Azure file shares.
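A few of these limitations can be checked up front from a Windows host that mounts the StorSimple iSCSI volume. The following is a minimal pre-flight sketch, assuming the volume is mounted as drive E and that the BitLocker and Data Deduplication management tools are installed; adjust to your environment.

# Expect NTFS here; ReFS volumes can't be migrated.
Get-Volume -DriveLetter E | Select-Object DriveLetter, FileSystemType

# No output means Data Deduplication isn't enabled on the volume.
Get-DedupVolume -Volume "E:"

# ProtectionStatus should be Off; BitLocker-encrypted volumes aren't supported.
Get-BitLockerVolume -MountPoint "E:" | Select-Object MountPoint, ProtectionStatus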
File fidelity
Even if none of the limitations in Known limitations prevent a migration, there are still limitations on what can be stored in Azure file shares that you need to be aware of. File fidelity refers to the multitude of attributes,
timestamps, and data that compose a file. In a migration, file fidelity is a measure of how well the information on
the source (StorSimple volume) can be translated (migrated) to the target (Azure file share). Azure Files supports
a subset of the NTFS file properties. ACLs, common metadata, and some timestamps will be migrated. The
following items won't prevent a migration but will cause per-item issues during a migration:
Timestamps: File change time will not be set - it is currently read-only over the REST protocol. The last access timestamp on a file will not be moved, because it currently isn't a supported attribute on files stored in an Azure file share.
Alternate Data Streams can't be stored in Azure file shares. Files holding Alternate Data Streams will be copied, but the Alternate Data Streams will be stripped from the files in the process.
Symbolic links, hard links, junctions, and reparse points are skipped during a migration. The migration copy
logs will list each skipped item and a reason.
EFS encrypted files will fail to copy. Copy logs will show the item failed to copy with "Access is denied".
Corrupt files are skipped. The copy logs may list different errors for each item that is corrupt on the
StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is
corrupted or unreadable" or "The access control list (ACL) structure is invalid".
Individual files larger than 4 TiB are skipped.
File path lengths need to be equal to or fewer than 2048 characters. Files and folders with longer paths will
be skipped.
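Two of these per-item limits (file size and path length) can be scanned for before the migration. The following rough sketch assumes the volume content is reachable under an illustrative path; note that very long paths can also trip up Get-ChildItem itself on older Windows PowerShell versions, so treat the output as a best-effort list.

# Best-effort scan for files larger than 4 TiB or paths longer than 2048 characters.
$root = "E:\"
Get-ChildItem -Path $root -Recurse -Force -ErrorAction SilentlyContinue |
    Where-Object { $_.FullName.Length -gt 2048 -or $_.Length -gt 4TB } |
    Select-Object FullName, Length |
    Export-Csv -Path ".\per-item-risks.csv" -NoTypeInformation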
StorSimple volume backups
StorSimple offers differential backups on the volume level. Azure file shares also have this ability, called share
snapshots. Your migration jobs can only move backups, not data from the live volume. So the most recent
backup should always be on the list of backups moved in a migration.
Decide if you need to move any older backups during your migration. Best practice is to keep this list as small as
possible, so your migration jobs complete faster.
To identify critical backups that must be migrated, make a checklist of your backup policies. For instance:
The most recent backup. (Note: The most recent backup should always be part of this list.)
One backup a month for 12 months.
One backup a year for three years.
Later on, when you create your migration jobs, you can use this list to identify the exact StorSimple volume
backups that must be migrated to satisfy the requirements on your list.
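If you have many backups, a small script can help thin the list down to a GFS-style selection like the checklist above. The following is a minimal sketch, assuming you've exported your backup timestamps (one per line) to a text file; the file name is an assumption.

# Thin a list of backup timestamps to: most recent + one per month (12) + one per year (3).
$allBackups = Get-Content .\backup-dates.txt |
    ForEach-Object { [datetime]$_ } | Sort-Object -Descending

$selection  = @($allBackups[0])                                    # most recent: always required
$selection += $allBackups |
    Group-Object { $_.ToString('yyyy-MM') } |
    ForEach-Object { $_.Group[0] } | Select-Object -First 12       # newest backup of each month
$selection += $allBackups |
    Group-Object Year |
    ForEach-Object { $_.Group[0] } | Select-Object -First 3        # newest backup of each year

$selection | Sort-Object -Unique                                   # stays well under the 50-backup limit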
Caution

Selecting more than 50 StorSimple volume backups is not supported. Your migration jobs can only move
backups, never data from the live volume. Therefore the most recent backup is closest to the live data and thus
should always be part of the list of backups to be moved in a migration.
Caution

It's best to suspend all StorSimple backup retention policies before you select a backup for migration.
Migrating your backups takes several days or weeks. StorSimple offers backup retention policies that will delete
backups. Backups you have selected for this migration may get deleted before they have had a chance to be migrated.
Map your existing StorSimple volumes to Azure file shares
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
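If you'd rather not install a tool, a quick PowerShell count works too. This is a simple sketch; the share path is illustrative, and a recursive enumeration of a large namespace can take a while.

# Count files and folders under a share root (path is illustrative).
(Get-ChildItem -Path "D:\shares\HR" -Recurse -Force -ErrorAction SilentlyContinue |
    Measure-Object).Count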

A structured approach to a deployment map


Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table

Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.
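To give you a feel for what the template records, a minimal plain-text version of such a mapping might look like this (all folder, share, and account names are invented):

Local folder (volume\path)     Azure file share     Storage account
D:\shares\HR                   hr-data              contosomigration01
D:\shares\Finance              finance-data         contosomigration01
E:\shares\Engineering          engineering-data     contosomigration02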

Number of storage accounts


Your migration will likely benefit from a deployment of multiple storage accounts that each hold a smaller
number of Azure file shares.
If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the
performance limit of your storage account. Because of this, the best practice is to migrate to multiple storage
accounts, each with their own individual file shares, and typically no more than two or three shares per storage
account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account, if you have archival shares in them.
These considerations apply more to direct cloud access (through an Azure VM or service) than to Azure File
Sync. If you plan to exclusively use Azure File Sync on these shares, grouping several into a single Azure storage
account is fine. In the future, you may want to lift and shift an app into the cloud that would then directly access a file share; this scenario would benefit from having higher IOPS and throughput. Or you could start using a service in Azure that would also benefit from having higher IOPS and throughput.
If you've made a list of your shares, map each share to the storage account where it will reside.

IMPORTANT
Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you
selected. Don't configure network and firewall settings for the storage accounts now. Making these configurations at this
point would make a migration impossible. Configure these Azure storage settings after the migration is complete.

Storage account settings


There are many configurations you can make on a storage account. Use the following checklist to confirm your storage account configurations. You can change the networking configuration, for instance, after your migration is complete.
Large file shares: Enabled - Large file shares improve performance and allow you to store up to 100 TiB in a share. This setting applies to target storage accounts with Azure file shares.
Firewall and virtual networks: Disabled - Don't configure any IP restrictions or limit storage account access to a specific VNET. The public endpoint of the storage account is used during the migration. All IP addresses from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the migration. Configure both your source and target storage accounts this way.
Private Endpoints: Supported - You can enable private endpoints, but the public endpoint is used for the migration and must remain available. This consideration applies to both your source and target storage accounts.
Phase 1 summary
At the end of Phase 1:
You have a good overview of your StorSimple devices and volumes.
The Data Manager service is ready to access your StorSimple volumes in the cloud because you've retrieved
your "service data encryption key" for each StorSimple device.
You have a plan for which volumes and backups (if any beyond the most recent) need to be migrated.
You know how to map your volumes to the appropriate number of Azure file shares and storage accounts.

Phase 2: Deploy Azure storage and migration resources


This section discusses considerations around deploying the different resource types that are needed in Azure.
Some will hold your data post migration, and some are needed solely for the migration. Don't start deploying
resources until you've finalized your deployment plan. It's difficult, sometimes impossible, to change certain
aspects of your Azure resources after they've been deployed.
Deploy storage accounts
You'll likely need to deploy several Azure storage accounts. Each one will hold a smaller number of Azure file
shares, as per your deployment plan, completed in the previous section of this article. Go to the Azure portal to
deploy your planned storage accounts. Consider adhering to the following basic settings for any new storage
account.
IMPORTANT
Do not configure network and firewall settings for your storage accounts now. Making those configurations at this point
would make a migration impossible. Configure these Azure storage settings after the migration is complete.

Subscription
You can use the same subscription you used for your StorSimple deployment or a different one. The only
limitation is that your subscription must be in the same Azure Active Directory tenant as the StorSimple
subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a
migration. You can only move the entire subscription; individual StorSimple resources can't be moved to a different tenant or subscription.
Resource group
Resource groups assist with organizing resources and with admin management permissions. Find out more about resource groups in Azure.
Storage account name
The name of your storage account will become part of a URL and has certain character limitations. In your
naming convention, consider that storage account names have to be unique in the world, allow only lowercase
letters and numbers, require between 3 and 24 characters, and don't allow special characters like hyphens or
underscores. For more information, see Azure storage resource naming rules.
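A quick local check of a candidate name against these rules, plus an Azure-side availability check, might look like the following sketch (Az.Storage module; the name is illustrative):

# 3-24 characters, lowercase letters and numbers only.
$name = "contosomigration01"
if ($name -cmatch '^[a-z0-9]{3,24}$') { "Name format OK" } else { "Invalid name format" }

# Check global uniqueness against Azure.
Get-AzStorageAccountNameAvailability -Name $name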
Location
The location or Azure region of a storage account is very important. If you use Azure File Sync, all of your
storage accounts must be in the same region as your Storage Sync Service resource. The Azure region you pick
should be close or central to your local servers and users. After your resource has been deployed, you can't
change its region.
You can pick a different region from where your StorSimple data (storage account) currently resides.

IMPORTANT
If you pick a different region from your current StorSimple storage account location, egress charges will apply during the
migration. Data will leave the StorSimple region and enter your new storage account region. No bandwidth charges apply
if you stay within the same Azure region.

Performance
You have the option to pick premium storage (SSD) for Azure file shares or standard storage. Standard storage
includes several tiers for a file share. Standard storage is the right option for most customers migrating from
StorSimple.
Still not sure?
Choose premium storage if you need the performance of a premium Azure file share.
Choose standard storage for general-purpose file server workloads, which includes hot data and archive
data. Also choose standard storage if the only workload on the share in the cloud will be Azure File Sync.
For premium file shares, choose File shares in the create storage account wizard.
Replication
There are several replication settings available. Learn more about the different replication types.
Only choose from either of the following two options:
Locally redundant storage (LRS).
Zone redundant storage (ZRS), which isn't available in all Azure regions.
NOTE
Only LRS and ZRS redundancy types are compatible with the large 100 TiB capacity Azure file shares.

Geo redundant storage (GRS) in all variations is currently not supported. You can switch your redundancy type
later, and switch to GRS when support for it arrives in Azure.
Enable 100 TiB capacity file shares

Under the Advanced section of the new storage account wizard in the Azure portal, you can enable Large file
shares support in this storage account. If this option isn't available to you, you most likely selected the wrong
redundancy type. Ensure you only select LRS or ZRS for this option to become available.
Opting for the large, 100 TiB capacity file shares has several benefits:
Your performance is greatly increased as compared to the smaller 5 TiB-capacity file shares (for example, 10
times the IOPS).
Your migration will finish significantly faster.
You ensure that a file share will have enough capacity to hold all the data you'll migrate into it, including the
storage capacity differential backups require.
Future growth is covered.
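If you prefer to script the account creation, a hedged sketch with Azure PowerShell could look like this; the resource group, name, and region are illustrative, and LRS is used so the large file share option stays available:

New-AzStorageAccount `
    -ResourceGroupName "rg-storsimple-migration" `
    -Name "contosomigration01" `
    -Location "westeurope" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -EnableLargeFileShare   # opt in to 100 TiB file shares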

IMPORTANT
Do not apply special networking to your storage account before or during your migration. The public endpoint must be
accessible on source and target storage accounts. No limiting to specific IP ranges or VNETs is supported. You can change
the storage account networking configurations after the migration.

Azure file shares


After your storage accounts are created, go to the File share section of the storage account and deploy the
appropriate number of Azure file shares as per your migration plan from Phase 1. Consider adhering to the
following basic settings for your new file shares in Azure.
Name
Lowercase letters, numbers, and hyphens are supported.

Quota
Quota here is comparable to an SMB hard quota on a Windows Server instance. The best practice is to not set a
quota here because your migration and other services will fail when the quota is reached.

Tiers
Select Transaction optimized for your new file share. During the migration, many transactions will occur. It's more cost-efficient to change your tier later to the tier best suited to your workload.
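Scripted, creating the share on the transaction optimized tier with the full 100 TiB quota could look like this sketch (Az.Storage module; all names are illustrative):

New-AzRmStorageShare `
    -ResourceGroupName "rg-storsimple-migration" `
    -StorageAccountName "contosomigration01" `
    -Name "hr-data" `
    -AccessTier TransactionOptimized `
    -QuotaGiB 102400   # full 100 TiB quota; requires large file share support

After the migration completes, the same share can be moved to its long-term tier, for example with Update-AzRmStorageShare -AccessTier Cool.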
StorSimple Data Manager
The Azure resource that will hold your migration jobs is called a StorSimple Data Manager. Select New resource, and search for it. Then select Create.
This temporary resource is used for orchestration. You deprovision it after your migration completes. It should
be deployed in the same subscription, resource group, and region as your StorSimple storage account.
Azure File Sync
With Azure File Sync, you can add on-premises caching of the most frequently accessed files. Similar to the
caching abilities of StorSimple, the Azure File Sync cloud tiering feature offers local-access latency in
combination with improved control over the available cache capacity on the Windows Server instance and
multi-site sync. If having an on-premises cache is your goal, then in your local network, prepare a Windows
Server VM (physical servers and failover clusters are also supported) with sufficient direct-attached storage
capacity.

IMPORTANT
Don't set up Azure File Sync yet. It's best to set up Azure File Sync after the migration of your share is complete.
Deploying Azure File Sync shouldn't start before Phase 4 of a migration.

Phase 2 summary
At the end of Phase 2, you'll have deployed your storage accounts and all Azure file shares across them. You'll
also have a StorSimple Data Manager resource. You'll use the latter in Phase 3 when you configure your
migration jobs.

Phase 3: Create and run a migration job


This section describes how to set up a migration job and carefully map the directories on a StorSimple volume
that should be copied into the target Azure file share you select. To get started, go to your StorSimple Data
Manager, find Job definitions on the menu, and select + Job definition. The correct target storage type is the default: Azure file share.
Job definition name
This name should indicate the set of files you're moving. Giving it a similar name as your Azure file share is a
good practice.

Location where the job runs


When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't
available, then a region close to it.

Source
Source subscription
Select the subscription in which you store your StorSimple Device Manager resource.

StorSimple resource
Select the StorSimple Device Manager that your appliance is registered with.

Service data encryption key


See this prior section in this article if you can't locate the key in your records.

Device
Select the StorSimple device that holds the volume you want to migrate.

Volume
Select the source volume. Later you'll decide whether you want to migrate the whole volume or only subdirectories into the target Azure file share.

Volume backups
You can select Select volume backups to choose specific backups to move as part of this job. An upcoming,
dedicated section in this article covers the process in detail.
Target
Select the subscription, storage account, and Azure file share as the target of this migration job.

Directory mapping
A dedicated section in this article discusses all relevant details.
Selecting volume backups to migrate
There are important aspects around choosing backups that need to be migrated:
Your migration jobs can only move backups, not live volume data. So the most recent backup is closest to the
live data and should always be on the list of backups moved in a migration. When you open the Backup
selection dialog, it is selected by default.
Make sure your latest backup is recent to keep the delta to the live share as small as possible. It could be
worth manually triggering and completing another volume backup before creating a migration job. A small
delta to the live share will improve your migration experience. If this delta is zero (no more changes happened on the StorSimple volume after the newest backup in your list was taken), then Phase 5: User cut-over will be drastically simplified and sped up.
Backups must be played back into the Azure file share from oldest to newest. An older backup cannot be
"sorted into" the list of backups on the Azure file share after a migration job has run. Therefore you must
ensure that your list of backups is complete before you create a job.
This list of backups in a job cannot be modified once the job is created - even if the job never ran.
In order to select backups, the StorSimple volume you want to migrate must be online.

To select backups of your StorSimple volume for your migration job, select Select volume backups on the job creation form.
When the backup selection blade opens, it is separated into two lists. In the first list, all available backups are displayed. You can expand and narrow the result set by filtering for a specific time range (see the next section).

A selected backup will display as grayed out, and it is added to a second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can also be removed again.
Caution

You must select all backups you wish to migrate. You cannot add older backups later on. You cannot modify the
job to change your selection once the job is created.

By default, the list is filtered to show the StorSimple volume backups within the past seven days. The most
recent backup is selected by default, even if it didn't occur in the past seven days. For older backups, use the time
range filter at the top of the blade. You can either select from an existing filter or set a custom time range to filter
for only the backups taken during this period.
Caution

Selecting more than 50 StorSimple volume backups is not supported. Jobs with a large number of backups may
fail. Make sure your backup retention policies don't delete a selected backup before it has had a chance to be migrated!
Directory mapping
Directory mapping is optional for your migration job. If you leave the section empty, all the files and folders on
the root of your StorSimple volume will be moved into the root of your target Azure file share. In most cases,
storing an entire volume's content in an Azure file share isn't the best approach. It's often better to split a
volume's content across multiple file shares in Azure. If you haven't made a plan already, see Map your
StorSimple volume to Azure file shares first.
As part of your migration plan, you might have decided that the folders on a StorSimple volume need to be split
across multiple Azure file shares. If that's the case, you can accomplish that split by:
1. Defining multiple jobs to migrate the folders on one volume. Each will have the same StorSimple volume
source but a different Azure file share as the target.
2. Specifying precisely which folders from the StorSimple volume need to be migrated into the specified file
share by using the Directory-mapping section of the job creation form and following the specific mapping
semantics.

IMPORTANT
The paths and mapping expressions in this form can't be validated when the form is submitted. If mappings are specified
incorrectly, a job might either fail completely or produce an undesirable result. In that case, it's usually best to delete the
Azure file share, re-create it, and then fix the mapping statements in a new migration job for the share. Running a new job
with fixed mapping statements can fix omitted folders and bring them into the existing share. However, only folders that
were omitted because of path misspellings can be addressed this way.

Semantic elements
A mapping is expressed from left to right: [\source path] > [\target path].

SEMANTIC CHARACTER        MEANING

\                         Root level indicator.

>                         [Source] and [target-mapping] operator.

| or RETURN (new line)    Separator of two folder-mapping instructions. Alternatively, you can omit this character and select Enter to get the next mapping expression on its own line.

Examples
Moves the content of folder User data to the root of the target file share:

\User data > \

Moves the entire volume content into a new path on the target file share:

\ > \Apps\HR tracker

Moves the source folder content into a new path on the target file share:

\HR resumes-Backup > \Backups\HR\resumes

Sorts multiple source locations into a new directory structure:


\HR\Candidate Tracker\v1.0 > \Apps\Candidate tracker
\HR\Candidates\Resumes > \HR\Candidates\New
\Archive\HR\Old Resumes > \HR\Candidates\Archived

Semantic rules
Always specify folder paths relative to the root level.
Begin each folder path with a root level indicator "\".
Don't include drive letters.
When specifying multiple paths, source or target paths can't overlap:
Invalid source path overlap example:
\folder\1 > \folder
\folder\1\2 > \folder2
Invalid target path overlap example:
\folder > \
\folder2 > \
Source folders that don't exist will be ignored.
Folder structures that don't exist on the target will be created.
As in Windows, folder names are case insensitive but case preserving.

NOTE
Contents of the \System Volume Information folder and the $Recycle.Bin on your StorSimple volume won't be copied by
the migration job.

Run a migration job


Your migration jobs are listed under Job definitions in the Data Manager resource you've deployed to a resource
group. From the list of job definitions, select the job you want to run.
In the job blade that opens, you can see your job's current status and a list of the backups you've selected. The list of backups is sorted from oldest to newest and will be migrated to your Azure file share in this order.

Initially, the migration job will have the status: Never ran.
When you are ready, you can start this migration job.
When a backup has been successfully migrated, an automatic Azure file share snapshot is taken. The original backup date of your StorSimple backup is placed in the Comments section of the Azure file share snapshot. This field allows you to see when the data was originally backed up, as compared to the time the file share snapshot was taken.
Caution

Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of
selected StorSimple volume backups. Don't start the job if the list of backups is incorrect or incomplete. Delete the job and create a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups may get deleted by one or more of your retention policies before they get a chance to be migrated!
Per-item errors
The migration jobs have two columns in the list of backups that list any issues that may have occurred during
the copy:
Copy errors
This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select Retry backup. This option becomes available once the backup has finished processing. The Managing a migration job section explains your options in more detail.
Unsupported files
This column lists files or folders that can't be migrated. Azure Storage has limitations on file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup, and you found in the copy log that the failure was due to a file name, path length, or other issue you have influence over, you may want to remedy the issue on the live StorSimple volume, take a StorSimple volume backup, and create a new migration job with just that backup. You will then migrate this remedied namespace, and it will become the most recent (live) version of the Azure file share. This is a manual and time-consuming process. Review the copy logs carefully and evaluate whether it's worth it.
These copy logs are *.csv files listing namespace items that succeeded and items that failed to be copied. The errors are further split into the previously discussed categories. From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There may be extra logs produced that are 17 bytes in size. They're empty and can be ignored. With a sort, you can focus on the logs with content.
The same process applies for log files recording successful copies.
Manage a migration job
Migration jobs have the following states:
Never ran
A new job that has been defined but hasn't run yet.
Waiting
A job in this state is waiting for resources to be provisioned in the migration service. It will automatically
switch to a different state when ready.
Failed
A failed job hit a fatal error that prevents it from processing more backups. A job is not expected to enter this
state. A support request is the best course of action.
Canceled / Canceling
Either an entire migration job or individual backups within the job can be canceled. Canceled backups won't be processed; a canceled migration job will stop processing more backups. Expect that canceling a job will take a long time. This doesn't prevent you from creating a new job. The best course of action is patience: let a job fully arrive in the Canceled state. You can either ignore failed or canceled jobs or delete them at a later
time. You won't have to delete jobs before you can delete the Data Manager resource at the end of your
StorSimple migration.
Running

A running job is currently processing a backup. Refer to the table on the bottom half of the blade to see which
backup is currently being processed and which ones might have been migrated already.
Already migrated backups have a column with a link to a copy log. If there are any errors reported for a backup,
you should review its copy log.

Paused

A migration job is paused when a decision is needed. This condition enables two command buttons on the
top of the blade:
Choose Retry backup when the backup shows files that were supposed to move but didn't (the Copy errors column).
Choose Skip backup when the backup is missing (it was deleted by policy since you created the migration job) or when the backup is corrupt. You can find detailed error information in the blade that opens when you select the failed backup.

When you skip or retry the current backup, the migration service will create a new snapshot in your target Azure file share. You may want to delete the previous one later; it is likely incomplete.
Complete and Complete with warnings

A migration job is listed as Complete when all backups in the job have been successfully processed.
Complete with warnings is a state that occurs when:
A backup ran into a recoverable issue. This backup is marked as partial success or failed.
You decided to continue the paused job by skipping the backup with said issues. (You chose Skip backup instead of Retry backup.)
If the migration job completes with warnings, you should always review the copy log for the relevant backups.
Run jobs in parallel
You will likely have multiple StorSimple volumes, each with its own shares that need to be migrated to an Azure file share. It's important that you understand how much you can do in parallel. There are limitations that
aren't enforced in the user experience and will either degrade or inhibit a complete migration if jobs are
executed at the same time.
There are no limits on defining migration jobs. You can define jobs with the same StorSimple source volume or the same Azure file share, across the same or different StorSimple appliances. However, running them has limitations:
Only one migration job with the same StorSimple source volume can run at the same time.
Only one migration job with the same target Azure file share can run at the same time.
Before starting the next job, ensure that any previously started jobs are in the copy stage and have shown progress moving files for at least 30 minutes.
You can run up to four migration jobs in parallel per StorSimple device manager, as long as you also abide by
the previous rules.
When you attempt to start a migration job, the previous rules are checked. If other jobs are running, you might not be able to start a new job. You'll receive an alert that lists the names of currently running jobs that must finish before you can start the new job.

TIP
It's a good idea to regularly check your migration jobs in the Job definition tab of your Data Manager resource, to see if
any of them have paused and need your input to complete.

Phase 3 summary
At the end of Phase 3, you'll have run at least one of your migration jobs from StorSimple volumes into Azure
file share(s). With your run, you will have migrated your specified backups into Azure file share snapshots. You
can now focus on either setting up Azure File Sync for the share (once migration jobs for a share have
completed) or direct-share-access for your information workers and apps to the Azure file share.

Phase 4: Access your Azure file shares


There are two main strategies for accessing your Azure file shares:
Azure File Sync: Deploy Azure File Sync to an on-premises Windows Server instance. Azure File Sync has
all the advantages of a local cache, just like StorSimple.
Direct-share-access: Deploy direct-share-access. Use this strategy if your access scenario for a given Azure
file share won't benefit from local caching, or you no longer have an ability to host an on-premises Windows
Server instance. Here, your users and apps will continue to access SMB shares over the SMB protocol. These
shares are no longer on an on-premises server but directly in the cloud.
You should have already decided which option is best for you in Phase 1 of this guide.
The remainder of this section focuses on deployment instructions.
Deploy Azure File Sync
It's time to deploy a part of Azure File Sync.
1. Create the Azure File Sync cloud resource.
2. Deploy the Azure File Sync agent on your on-premises server.
3. Register the server with the cloud resource.
Don't create any sync groups yet. Setting up sync with an Azure file share should only occur after your migration
jobs to an Azure file share have completed. If you started using Azure File Sync before your migration
completed, it would make your migration unnecessarily difficult since you couldn't easily tell when it was time to
initiate a cut-over.
Deploy the Azure File Sync cloud resource
To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.

TIP
If you want to change the Azure region your data resides in after the migration is finished, deploy the Storage Sync
Service in the same region as the target storage accounts for this migration.

Deploy an on-premises Windows Server instance


Create a Windows Server 2019 instance (Windows Server 2012 R2 at a minimum) as a virtual machine or physical server. A Windows Server failover cluster is also supported. Don't reuse the server fronting the StorSimple 8100 or 8600.
Provision or add direct-attached storage. Network-attached storage isn't supported.
It's best practice to give your new Windows Server instance an equal or larger amount of storage than your
StorSimple 8100 or 8600 appliance has locally available for caching. You'll use the Windows Server instance the
same way you used the StorSimple appliance. If it has the same amount of storage as the appliance, the caching
experience should be similar, if not the same. You can add or remove storage from your Windows Server
instance at will. This capability enables you to scale your local volume size and the amount of local storage
available for caching.
Prepare the Windows Server instance for file sync
In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration. This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.

Install-Module -Name Az -AllowClobber


Install-Module -Name Az.StorageSync

If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers. You'll see your server listed there.
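Registration can also be confirmed from PowerShell with the Az.StorageSync module; the resource names below are illustrative:

Get-AzStorageSyncServer `
    -ResourceGroupName "rg-storsimple-migration" `
    -StorageSyncServiceName "mysyncservice" |
    Select-Object FriendlyName, LastHeartBeat   # your server should be listed here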
Configure Azure File Sync on the Windows Server instance
Your registered on-premises Windows Server instance must be ready and connected to the internet for this
process.

IMPORTANT
Your StorSimple migration of files and folders into the Azure file share must be complete before you proceed. Make sure
there are no more changes done to the file share.

This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become the path for this server endpoint.
IMPORTANT
Be sure to turn on cloud tiering. Cloud tiering is the Azure File Sync feature that allows the local server to have less
storage capacity than is stored in the cloud, yet have the full namespace available. Locally interesting data is also cached
locally for fast, local access performance. Another reason to turn on cloud tiering at this step is that we don't want to sync
file content at this stage. Only the namespace should be moving at this time.
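The same steps can be scripted with the Az.StorageSync module. The following is a hedged sketch with illustrative names; note the -CloudTiering switch and a high initial volume free space percentage so that mostly the namespace, not file content, comes down first:

# Create the sync group.
New-AzStorageSyncGroup -ResourceGroupName "rg-storsimple-migration" `
    -StorageSyncServiceName "mysyncservice" -Name "hr-data-sync"

# Add the Azure file share as the cloud endpoint.
New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-storsimple-migration" `
    -StorageSyncServiceName "mysyncservice" -SyncGroupName "hr-data-sync" `
    -Name "hr-data-cloud" `
    -StorageAccountResourceId "<storage-account-resource-id>" `
    -AzureFileShareName "hr-data"

# Add the local server folder as the server endpoint, with cloud tiering on.
$server = Get-AzStorageSyncServer -ResourceGroupName "rg-storsimple-migration" `
    -StorageSyncServiceName "mysyncservice"
New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-storsimple-migration" `
    -StorageSyncServiceName "mysyncservice" -SyncGroupName "hr-data-sync" `
    -Name "hr-data-server" -ServerResourceId $server.ResourceId `
    -ServerLocalPath "D:\shares\HR" `
    -CloudTiering -VolumeFreeSpacePercent 99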

Deploy direct-share-access


https://www.youtube-nocookie.com/embed/jd49W33DxkQ
This video is a guide and demo for how to securely expose Azure file shares directly to information workers and
apps in five simple steps.
The video references dedicated documentation for some topics:
Identity overview
How to domain join a storage account
Networking overview for Azure file shares
How to configure public and private endpoints
How to configure a S2S VPN
How to configure a Windows P2S VPN
How to configure a Linux P2S VPN
How to configure DNS forwarding
Configure DFS-N
Phase 4 summary
At the end of this phase, you've created and run multiple migration jobs in your StorSimple Data Manager. Those
jobs have migrated your files and folders and their backups to Azure file shares. You've also deployed Azure File
Sync or prepared your network and storage accounts for direct-share-access.

Phase 5: User cut-over


This phase is all about wrapping up your migration:
Plan your downtime.
Catch up with any changes your users and apps produced on the StorSimple side while the migration jobs in
Phase 3 have been running.
Fail over your users to the new Windows Server instance with Azure File Sync or to the Azure file shares via
direct-share-access.
Plan your downtime
This migration approach requires some downtime for your users and apps. The goal is to keep downtime to a
minimum. The following considerations can help:
Keep your StorSimple volumes available while running your migration jobs.
When you've finished running your data migration jobs for a share, it's time to remove user access (at least
write access) from the StorSimple volumes or shares. A final RoboCopy will catch up your Azure file share.
Then you can cut over your users. Where you run RoboCopy depends on whether you chose to use Azure
File Sync or direct-share-access. The upcoming section on RoboCopy covers that subject.
After you've completed the RoboCopy catch-up, you're ready to expose the new location to your users via
either the Azure file share directly or an SMB share on a Windows Server instance with Azure File Sync. Often
a DFS-N deployment will help accomplish a cut-over quickly and efficiently. It will keep your existing share
addresses consistent and repoint them to a new location that contains your migrated files and folders.
For archival data, it is a fully viable approach to take downtime on your StorSimple volume (or subfolder), take
one more StorSimple volume backup, migrate and then open up the migration destination for access by users
and apps. This will spare you the need for a catch-up RoboCopy as described in this section. However, this
approach comes at the cost of a prolonged downtime window that might stretch to several days or longer
depending on the number of files and backups you need to migrate. This is likely only an option for archival
workloads that can do without write access for prolonged periods of time.
Determine when your namespace has fully synced to your server
When you use Azure File Sync for an Azure file share, it's important that you determine your entire namespace
has finished downloading to the server before you start any local RoboCopy. The time it takes to download your
namespace depends on the number of items in your Azure file share. There are two methods for determining
whether your namespace has fully arrived on the server.
Azure portal
You can use the Azure portal to see when your namespace has fully arrived.
Sign in to the Azure portal, and go to your sync group. Check the sync status of your sync group and server
endpoint.
The interesting direction is download. If the server endpoint is newly provisioned, it will show Initial sync,
which indicates the namespace is still coming down. After that state changes to anything but Initial sync,
your namespace will be fully populated on the server. You can now proceed with a local RoboCopy.
Windows Server Event Viewer
You can also use the Event Viewer on your Windows Server instance to tell when the namespace has fully
arrived.
1. Open the Event Viewer, and go to Applications and Services.
2. Go to and open Microsoft\FileSync\Agent\Telemetry.
3. Look for the most recent event 9102, which corresponds to a completed sync session.
4. Select Details, and confirm that you're looking at an event where the SyncDirection value is Download.
5. A single event with Scenario = FullGhostedSync and HResult = 0 indicates that your namespace has
completed downloading to the server.
6. If you miss that event, you can also look for other 9102 events with SyncDirection = Download and
Scenario = "RegularSync" . Finding one of these events also indicates that the namespace has finished
downloading and sync progressed to regular sync sessions, whether there's anything to sync or not at this
time.
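If you'd rather script this check than browse the Event Viewer, the following PowerShell sketch queries the same telemetry channel. The channel name is derived from the Event Viewer path above, and the message filtering is an assumption about the event text, so verify it against an actual 9102 event on your server.

# Sketch: find the most recent completed download sync session (event 9102).
Get-WinEvent -LogName "Microsoft-FileSync-Agent/Telemetry" -MaxEvents 500 |
    Where-Object { $_.Id -eq 9102 -and $_.Message -match "Download" } |
    Select-Object -First 1 TimeCreated, Message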
A final RoboCopy
At this point, there are differences between your on-premises Windows Server instance and the StorSimple
8100 or 8600 appliance.
1. You need to catch up with the changes that users or apps produced on the StorSimple side while the
migration was ongoing.
2. For cases where you use Azure File Sync: The StorSimple appliance has a populated cache versus the
Windows Server instance with just a namespace with no file content stored locally at this time. The final
RoboCopy can help jump-start your local Azure File Sync cache by pulling over locally cached file content as
much as is available and can fit on the Azure File Sync server.
3. Some files might have been left behind by the migration job because of invalid characters. If so, copy them to
the Azure File Sync-enabled Windows Server instance. Later on, you can adjust them so that they will sync. If
you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid
characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
WARNING
Robocopy in Windows Server 2019 currently experiences an issue that will cause files tiered by Azure File Sync on the
target server to be recopied from the source and re-uploaded to Azure when using the /MIR function of robocopy. It is
imperative that you use Robocopy on a Windows Server other than 2019. A preferred choice is Windows Server 2016.
This note will be updated should the issue be resolved via Windows Update.

WARNING
You must not start the RoboCopy before the server has the namespace for an Azure file share downloaded fully. For more
information, see Determine when your namespace has fully downloaded to your server.

You only want to copy files that were changed after the migration job last ran and files that haven't moved
through these jobs before. You can solve the problem as to why they didn't move later on the server, after the
migration is complete. For more information, see Azure File Sync troubleshooting.
RoboCopy has several parameters. The following example showcases a finished command and a list of reasons
for choosing these parameters.

robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>

SWITCH  MEANING

/MT:n
Allows Robocopy to run multithreaded. The default for n is 8; the maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads show balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n
Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two or three if you believe timeout issues caused failures in the past, which may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster, without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n
Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B
Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR
(Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT
Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags]
The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags]
Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP
Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL
Specifies that file names aren't logged. Improves copy performance.

/NDL
Specifies that directory names aren't logged. Improves copy performance.

/XD
Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name>
Writes status to the log file as Unicode. (Overwrites the existing log.)

/L
Only for a test run. Files will be listed only. They won't be copied or deleted, and they won't be time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM
Only for targets with tiered storage. Not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering. It can be used independently of Azure File Sync. In this mode, Robocopy pauses whenever a file copy would cause the destination volume's free space to go below a "floor" value, which can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z
Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB
Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.

IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that the latest patch level or at
least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.

When you configure source and target locations of the RoboCopy command, make sure you review the
structure of the source and target to ensure they match. If you used the directory-mapping feature of the
migration job, your root-directory structure might be different than the structure of your StorSimple volume. If
that's the case, you might need multiple RoboCopy jobs, one for each subdirectory. If you're unsure whether the
command will perform as expected, you can use the /L parameter, which simulates the command without
actually making any changes.
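As a sketch, a dry run of the earlier command keeps the copy and purge logic but only lists what would happen; the placeholders follow the sample above.

# Sketch: simulate the mirror run and log what would be copied or purged, without changing anything.
robocopy <SourcePath> <Dest.Path> /MIR /L /TEE /UNILOG:<FilePathAndName>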
This RoboCopy command uses /MIR, so it won't move files that are the same (tiered files, for instance). But if you
get the source and target path wrong, /MIR also purges directory structures on your Windows Server instance
or Azure file share that aren't present on the StorSimple source path. They must match exactly for the RoboCopy
job to reach its intended goal of updating your migrated content with the latest changes made while the
migration is ongoing.
Consult the RoboCopy log file to see if files have been left behind. If issues exist, fix them, and rerun the
RoboCopy command. Don't deprovision any StorSimple resources before you fix outstanding issues for files or
folders you care about.
If you don't use Azure File Sync to cache the particular Azure file share in question but instead opted for direct-
share-access:
1. Mount your Azure file share as a network drive to a local Windows machine.
2. Perform the RoboCopy between your StorSimple and the mounted Azure file share. If files don't copy, fix up
their names on the StorSimple side to remove invalid characters. Then retry RoboCopy. The previously listed
RoboCopy command can be run multiple times without causing unnecessary recall to StorSimple.
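A minimal sketch of that flow follows, assuming you mount with the storage account key so the /B switch behaves as described in the table above; the account, share, key, and paths are placeholders.

# Sketch: mount the Azure file share with the storage account key, then run the catch-up copy.
net use Z: \\<storage-account>.file.core.windows.net\<share-name> /user:AZURE\<storage-account> <storage-account-key>
robocopy <StorSimpleSharePath> Z:\ /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>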
Troubleshoot and optimize
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider abilities of the source storage , the target storage , and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
Caution

While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolization of resources on both IO-
constrained devices and crowded network links.
/IPG:n cannot be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy entirely relies on the SMB protocol for all networking needs. Using SMB is the reason why
RoboCopy can't influence the network throughput itself, but it can slow down its use.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
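The following sketch shows both throttling options side by side; the 20-millisecond gap and the QoS rate are example values to tune for your environment, not recommendations.

# Sketch: option 1 - insert a 20 ms inter-packet gap to ease the load on the NAS and the network.
robocopy <SourcePath> <Dest.Path> /MIR /IPG:20

# Sketch: option 2 - throttle Robocopy's SMB traffic with Windows Server network QoS (elevated session).
New-NetQosPolicy -Name "RoboCopyThrottle" -AppPathNameMatchCondition "robocopy.exe" -ThrottleRateActionBitsPerSecond 250MB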
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file will be
evaluated during an initial copy and during catch-up copies. For example, repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of files migrated.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.

TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs will copy fewer differences, and you're more likely to shift from
being network-throughput constrained to being compute constrained. Under these circumstances, it's often better to match the
Robocopy thread count to the actually available threads on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
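A small sketch for picking a starting thread count for subsequent runs from the machine's logical processor count; reduce the value if the server has other production duties, and the paths are placeholders.

# Sketch: match /MT to the logical processors on this machine for differential runs.
$threads = (Get-CimInstance Win32_ComputerSystem).NumberOfLogicalProcessors
robocopy <SourcePath> <Dest.Path> /MIR /MT:$threads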

Avoid unnecessary work


Avoid large-scale changes in your namespace. For example, moving files between directories, changing
properties at a large scale, or changing permissions (NTFS ACLs). Especially ACL changes can have a high
impact because they often have a cascading change effect on files lower in the folder hierarchy. Consequences
can be:
Extended RoboCopy job run time, because each file and folder affected by an ACL change needs to be
updated.
Data moved earlier may need to be recopied. For instance, more data will need to be copied when
folder structures change after files had already been copied earlier. A RoboCopy job can't "play back" a
namespace change. The next job must purge the files previously transported to the old folder structure and
upload the files in the new folder structure again.
Another important aspect is to use the RoboCopy tool effectively. With the recommended RoboCopy script,
you'll create and save a log file for errors. Copy errors can occur - that is normal. These errors often make it
necessary to run multiple rounds of a copy tool like RoboCopy. An initial run, say from a NAS to DataBox or a
server to an Azure file share. And one or more extra runs with the /MIR switch to catch and retry files that didn't
get copied.
You should be prepared to run multiple rounds of RoboCopy against a given namespace scope. Successive runs
will finish faster as they have less to copy but are constrained increasingly by the speed of processing the
namespace. When you run multiple rounds, you can speed up each round by not having RoboCopy try
unreasonably hard to copy everything in a given run. These RoboCopy switches can make a significant
difference:
/R:n n = how often you retry to copy a failed file and
/W:n n = how many seconds to wait between retries
/R:5 /W:5 is a reasonable setting that you can adjust to your liking. In this example, a failed file will be retried
five times, with five-second wait time between retries. If the file still fails to copy, the next RoboCopy job will try
again. Often files that failed because they are in use or because of timeout issues might eventually be copied
successfully this way.
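A sketch of driving those repeated rounds from PowerShell follows. It relies on the common Robocopy convention that exit codes of 8 or higher indicate failures; verify this against your own log files before relying on it, and the paths are placeholders.

# Sketch: repeat catch-up runs until a round completes without copy failures (exit code < 8).
do {
    robocopy <SourcePath> <Dest.Path> /MIR /R:5 /W:5 /UNILOG:<FilePathAndName>
} while ($LASTEXITCODE -ge 8)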
User cut-over
If you use Azure File Sync, you likely need to create the SMB shares on that Azure File Sync-enabled Windows
Server instance that match the shares you had on the StorSimple volumes. You can front-load this step and do it
earlier to not lose time here. But you must ensure that before this point, nobody has access to cause changes to
the Windows Server instance.
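A minimal sketch for re-creating such a share; the share name, path, and groups are placeholders, and the share-level permissions should mirror what you had on the StorSimple side.

# Sketch: re-create an SMB share with the name users and scripts already expect.
New-SmbShare -Name "HRShare" -Path "D:\shares\hr" -FullAccess "CONTOSO\HR-Admins" -ChangeAccess "CONTOSO\HR-Users"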
If you have a DFS-N deployment, you can point the DFS-N namespaces to the new server folder locations. If you
don't have a DFS-N deployment, and you fronted your 8100 or 8600 appliance locally with a Windows Server
instance, you can take that server off the domain. Then domain-join your new Azure File Sync-enabled Windows
Server instance. During that process, give the server the same server name and share names as the old server
so that cut-over remains transparent for your users, group policy, and scripts.
Learn more about DFS-N.

Phase 6: Deprovision
When you deprovision a resource, you lose access to the configuration of that resource and its data.
Deprovisioning can't be undone. Don't proceed until you've confirmed that:
Your migration is complete.
There are no dependencies whatsoever on the StorSimple files, folders, or volume backups that you're about
to deprovision.
Before you begin, it's a best practice to observe your new Azure File Sync deployment in production for a while.
That time gives you the opportunity to fix any problems you might encounter. After you've observed your Azure
File Sync deployment for at least a few days, you can begin to deprovision resources in this order:
1. Deprovision your StorSimple Data Manager resource via the Azure portal. All of your DTS jobs will be deleted
with it. You won't be able to easily retrieve the copy logs. If they're important for your records, retrieve them
before you deprovision.
2. Make sure that your StorSimple physical appliances have been migrated, and then unregister them. If you
aren't sure that they've been migrated, don't proceed. If you deprovision these resources while they're still
necessary, you won't be able to recover the data or their configuration.
Optionally you can first deprovision the StorSimple volume resource, which will clean up the data on the
appliance. This process can take several days and won't forensically zero out the data on the appliance. If this
is important to you, handle disk zeroing separately from the resource deprovisioning and according to your
policies.
3. If there are no more registered devices left in a StorSimple Device Manager, you can proceed to remove that
Device Manager resource itself.
4. It's now time to delete the StorSimple storage account in Azure. Again, stop and confirm your migration is
complete and that nothing and no one depends on this data before you proceed.
5. Unplug the StorSimple physical appliance from your data center.
6. If you own the StorSimple appliance, you're free to PC Recycle it. If your device is leased, inform the lessor
and return the device as appropriate.
Your migration is complete.


Next steps
Get more familiar with Azure File Sync: aka.ms/AFS.
Understand the flexibility of cloud tiering policies.
Enable Azure Backup on your Azure file shares to schedule snapshots and define backup retention schedules.
If you see in the Azure portal that some files are permanently not syncing, review the Troubleshooting guide
for steps to resolve these issues.
StorSimple 1200 migration to Azure File Sync
5/20/2022 • 33 minutes to read

StorSimple 1200 series is a virtual appliance that runs in an on-premises data center. It's possible to migrate
the data from this appliance to an Azure File Sync environment. Azure File Sync is the default and strategic long-
term Azure service that StorSimple appliances can be migrated to.
StorSimple 1200 series will reach its end of life in December 2022. It's important to begin planning your
migration as soon as possible. This article provides the necessary background knowledge and migration steps
for a successful migration to Azure File Sync.

Applies to
FILE SHARE TYPE  SMB  NFS

Standard file shares (GPv2), LRS/ZRS
Standard file shares (GPv2), GRS/GZRS
Premium file shares (FileStorage), LRS/ZRS
Azure File Sync


Azure File Sync is a Microsoft cloud service, based on two main components:
File synchronization and cloud tiering.
File shares as native storage in Azure that can be accessed over multiple protocols like SMB and file REST. An
Azure file share is comparable to a file share on a Windows Server that you can natively mount as a network
drive. It supports important file fidelity aspects like attributes, permissions, and timestamps. Unlike with
StorSimple, no application or service is required to interpret the files and folders stored in the cloud. This makes
it the ideal and most flexible approach to store general-purpose file server data, and some application data, in the cloud.
This article focuses on the migration steps. If before migrating you'd like to learn more about Azure File Sync,
we recommend the following articles:
Azure File Sync - overview
Azure File Sync - deployment guide

Migration goals
The goal is to guarantee the integrity of the production data and guaranteeing availability. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.

StorSimple 1200 migration path to Azure File Sync


A local Windows Server is required to run an Azure File Sync agent. The Windows Server can be at a minimum a
Windows Server 2012 R2 instance, but ideally it's a Windows Server 2019 instance.
There are numerous alternative migration paths, and it would create too long of an article to document all of
them and to illustrate why they bear risks or disadvantages over the route we recommend as a best practice in this
article.

The previous image depicts steps that correspond to sections in this article.
Step 1: Provision your on-premises Windows Server and storage
1. Create a Windows Server 2019 - at a minimum 2012R2 - as a virtual machine or physical server. A Windows
Server fail-over cluster is also supported.
2. Provision or add Direct Attached Storage (DAS as compared to NAS, which is not supported). The size of the
Windows Server storage must be equal to or larger than the size of the available capacity of your virtual
StorSimple 1200 appliance.
Step 2: Configure your Windows Server storage
In this step, you map your StorSimple storage structure (volumes and shares) to your Windows Server storage
structure. If you plan to make changes to your storage structure, meaning the number of volumes, the
association of data folders to volumes, or the subfolder structure above or below your current SMB/NFS shares,
then now is the time to take these changes into consideration. Changing your file and folder structure after
Azure File Sync is configured, is cumbersome, and should be avoided. This article assumes you are mapping 1:1,
so you must take your mapping changes into consideration when you follow the steps in this article.
None of your production data should end up on the Windows Server system volume. Cloud tiering is not
supported on system volumes. However, this feature is required for the migration as well as continuous
operations as a StorSimple replacement.
Provision the same number of volumes on your Windows Server as you have on your StorSimple 1200
virtual appliance.
Configure any Windows Server roles, features, and settings you need. We recommend you opt into Windows
Server updates to keep your OS safe and up to date. Similarly, we recommend opting into Microsoft Update
to keep Microsoft applications up to date, including the Azure File Sync agent.
Do not configure any folders or shares before reading the following steps.
Step 3: Deploy the first Azure File Sync cloud resource
To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.
Step 4: Match your local volume and folder structure to Azure File Sync and Azure file share resources
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.

TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.

A structured approach to a deployment map


Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.

TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.

IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.

It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.

Download a namespace-mapping template.

Step 5: Provision Azure file shares


An Azure file share is stored in the cloud in an Azure storage account. Another level of performance
considerations applies here.
If you have highly active shares (shares used by many users and/or applications), two Azure file shares might
reach the performance limit of a storage account.
A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares
into the same storage account if you have archival shares or you expect low day-to-day activity in them.
These considerations apply more to direct cloud access (through an Azure VM) than to Azure File Sync. If you
plan to use only Azure File Sync on these shares, grouping several into a single Azure storage account is fine.
If you've made a list of your shares, you should map each share to the storage account it will be in.
In the previous phase, you determined the appropriate number of shares. In this step, you have a mapping of
storage accounts to file shares. Now deploy the appropriate number of Azure storage accounts with the
appropriate number of Azure file shares in them.
Make sure the region of each of your storage accounts is the same and matches the region of the Storage Sync
Service resource you've already deployed.
Caution

If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
Storage account settings
There are many configurations you can make on a storage account. Use the following checklist for
your storage account configurations. You can change some settings, for instance the networking configuration, after your
migration is complete.
Large file shares: Enabled. Large file shares improve performance and allow you to store up to 100 TiB in a
share.
Firewall and virtual networks: Disabled. Don't configure any IP restrictions or limit storage account access
to a specific VNET. The public endpoint of the storage account is used during the migration. All IP addresses
from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the
migration.
Private endpoints: Supported. You can enable private endpoints, but the public endpoint is used for the
migration and must remain available.
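If you'd like to script the provisioning against this checklist, the following Azure PowerShell sketch creates one storage account with large file shares enabled and one file share in it; the names, region, redundancy, and quota are placeholders to replace with the values from your mapping table.

# Sketch: a storage account with large file shares enabled, plus one Azure file share.
New-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageaccount" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2 -EnableLargeFileShare
New-AzRmStorageShare -ResourceGroupName "myRG" -StorageAccountName "mystorageaccount" -Name "hr-share" -QuotaGiB 102400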
Step 6: Configure Windows Server target folders
In previous steps, you considered all the aspects that determine the components of your sync topologies. It's
now time to prepare the server to receive files for upload.
Create all folders that will each sync to their own Azure file share. It's important that you follow the folder
structure you've documented earlier. If, for instance, you have decided to sync multiple local SMB shares
together into a single Azure file share, then you need to place them under a common root folder on the volume.
Create this target root folder on the volume now.
The number of Azure file shares you have provisioned should match the number of folders you've created in
this step, plus the number of volumes you will sync at the root level.
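A quick sketch of this folder preparation; the paths are placeholders that should follow the mapping table you created earlier.

# Sketch: one common root folder per planned Azure file share, with the grouped shares as subfolders.
New-Item -ItemType Directory -Path "D:\shares\hr"
New-Item -ItemType Directory -Path "D:\shares\hr\recruiting", "D:\shares\hr\payroll"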
Step 7: Deploy the Azure File Sync agent
In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration. This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.

Install-Module -Name Az -AllowClobber
Install-Module -Name Az.StorageSync

If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
Step 8: Configure sync
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints.
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.

WARNING
Be sure to turn on cloud tiering! This is required if your local server does not have enough space to store the total
size of your data in the StorSimple cloud storage. Set your tiering policy, temporarily for the migration, to 99% volume
free space.

Repeat the steps of sync group creation and adding the matching server folder as a server endpoint for all
Azure file shares and server locations that need to be configured for sync.
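When many shares are involved, scripting these repetitive steps can help. The following is a minimal sketch with the Az.StorageSync module, assuming the storage account, registered server, and local folders from the previous steps exist; all names are placeholders, and the 99 percent free space value matches the temporary migration policy from the warning above.

# Sketch: per Azure file share, create a sync group, a cloud endpoint, and a tiered server endpoint.
$syncGroup = New-AzStorageSyncGroup -ResourceGroupName "myRG" -StorageSyncServiceName "mySyncService" -Name "hr-sync"
$account = Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageaccount"
New-AzStorageSyncCloudEndpoint -ParentObject $syncGroup -Name "hr-cloud" -StorageAccountResourceId $account.Id -AzureFileShareName "hr-share"
$server = Get-AzStorageSyncServer -ResourceGroupName "myRG" -StorageSyncServiceName "mySyncService"
New-AzStorageSyncServerEndpoint -ParentObject $syncGroup -Name "hr-server" -ServerResourceId $server.ResourceId -ServerLocalPath "D:\shares\hr" -CloudTiering -VolumeFreeSpacePercent 99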
Step 9: Copy your files
The basic migration approach is a RoboCopy from your StorSimple virtual appliance to your Windows Server,
and Azure File Sync to Azure file shares.
Run the first local copy to your Windows Server target folder:
Identify the first location on your virtual StorSimple appliance.
Identify the matching folder on the Windows Server that already has Azure File Sync configured on it.
Start the copy using RoboCopy.
The following RoboCopy command will recall files from your StorSimple Azure storage to your local StorSimple
appliance and then move them over to the Windows Server target folder. The Windows Server will sync them to the
Azure file share(s). As the local Windows Server volume gets full, cloud tiering will kick in and tier files that have
successfully synced already. Cloud tiering will generate enough space to continue the copy from the StorSimple
virtual appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach
the 99% volume free space policy.

robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>

SWITCH  MEANING

/MT:n
Allows Robocopy to run multithreaded. The default for n is 8; the maximum is 128 threads. While a high thread count helps saturate the available bandwidth, it doesn't mean your migration will always be faster with more threads. Tests with Azure Files indicate that between 8 and 20 threads show balanced performance for an initial copy run. Subsequent /MIR runs are progressively affected by available compute vs. available network bandwidth. For subsequent runs, match your thread count value more closely to your processor core count and thread count per core. Consider whether cores need to be reserved for other tasks that a production server might have. Tests with Azure Files have shown that up to 64 threads produce good performance, but only if your processors can keep them alive at the same time.

/R:n
Maximum retry count for a file that fails to copy on the first attempt. Robocopy will try n times before the file permanently fails to copy in the run. You can optimize the performance of your run: choose a value of two or three if you believe timeout issues caused failures in the past, which may be more common over WAN links. Choose no retry or a value of one if you believe the file failed to copy because it was actively in use. Trying again a few seconds later may not be enough time for the in-use state of the file to change. Users or apps holding the file open may need hours more time. In this case, accepting that the file wasn't copied and catching it in one of your planned, subsequent Robocopy runs may eventually copy the file successfully. That helps the current run finish faster, without being prolonged by many retries that ultimately end in a majority of copy failures due to files still open past the retry timeout.

/W:n
Specifies the time Robocopy waits before attempting to copy a file that didn't successfully copy during a previous attempt. n is the number of seconds to wait between retries. /W:n is often used together with /R:n.

/B
Runs Robocopy in the same mode that a backup application would use. This switch allows Robocopy to move files that the current user doesn't have permissions for. The backup switch depends on running the Robocopy command in an administrator-elevated console or PowerShell window. If you use Robocopy for Azure Files, make sure you mount the Azure file share using the storage account access key vs. a domain identity. If you don't, the error messages might not intuitively lead you to a resolution of the problem.

/MIR
(Mirror source to target.) Allows Robocopy to copy only deltas between source and target. Empty subdirectories will be copied. Items (files or folders) that have changed or don't exist on the target will be copied. Items that exist on the target but not on the source will be purged (deleted) from the target. When you use this switch, match the source and target folder structures exactly. Matching means copying from the correct source and folder level to the matching folder level on the target. Only then can a "catch up" copy be successful. When source and target are mismatched, using /MIR will lead to large-scale deletions and recopies.

/IT
Ensures fidelity is preserved in certain mirror scenarios. For example, if a file experiences an ACL change and an attribute update between two Robocopy runs, it's marked hidden. Without /IT, the ACL change might be missed by Robocopy and not transferred to the target location.

/COPY:[copyflags]
The fidelity of the file copy. Default: /COPY:DAT. Copy flags: D = Data, A = Attributes, T = Timestamps, S = Security = NTFS ACLs, O = Owner information, U = Auditing information. Auditing information can't be stored in an Azure file share.

/DCOPY:[copyflags]
Fidelity for the copy of directories. Default: /DCOPY:DA. Copy flags: D = Data, A = Attributes, T = Timestamps.

/NP
Specifies that the progress of the copy for each file and folder won't be displayed. Displaying the progress significantly lowers copy performance.

/NFL
Specifies that file names aren't logged. Improves copy performance.

/NDL
Specifies that directory names aren't logged. Improves copy performance.

/XD
Specifies directories to be excluded. When running Robocopy on the root of a volume, consider excluding the hidden System Volume Information folder. If used as designed, all information in there is specific to the exact volume on this exact system and can be rebuilt on demand. Copying this information won't be helpful in the cloud or when the data is ever copied back to another Windows volume. Leaving this content behind should not be considered data loss.

/UNILOG:<file name>
Writes status to the log file as Unicode. (Overwrites the existing log.)

/L
Only for a test run. Files will be listed only. They won't be copied or deleted, and they won't be time stamped. Often used with /TEE for console output. Flags from the sample script, like /NP, /NFL, and /NDL, might need to be removed to achieve properly documented test results.

/LFSM
Only for targets with tiered storage. Not supported when the destination is a remote SMB share. Specifies that Robocopy operates in "low free space mode." This switch is useful only for targets with tiered storage that might run out of local capacity before Robocopy finishes. It was added specifically for use with a target enabled for Azure File Sync cloud tiering. It can be used independently of Azure File Sync. In this mode, Robocopy pauses whenever a file copy would cause the destination volume's free space to go below a "floor" value, which can be specified by the /LFSM:n form of the flag. The parameter n is specified in base 2: nKB, nMB, or nGB. If /LFSM is specified with no explicit floor value, the floor is set to 10 percent of the destination volume's size. Low free space mode isn't compatible with /MT, /EFSRAW, or /ZB. Support for /B was added in Windows Server 2022.

/Z
Use cautiously. Copies files in restart mode. This switch is recommended only in an unstable network environment. It significantly reduces copy performance because of extra logging.

/ZB
Use cautiously. Uses restart mode. If access is denied, this option uses backup mode. This option significantly reduces copy performance because of checkpointing.

IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that the latest patch level or at
least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.

When you run the RoboCopy command for the first time, your users and applications are still accessing the
StorSimple files and folders and potentially changing them. It's possible that RoboCopy has processed a directory
and moved on to the next, and then a user on the source location (StorSimple) adds, changes, or deletes a file that
won't be processed in this current RoboCopy run. That's expected.
The first run is about moving the bulk of the data back to on-premises, over to your Windows Server, and
backing it up into the cloud via Azure File Sync. This can take a long time, depending on:
your download bandwidth
the recall speed of the StorSimple cloud service
the upload bandwidth
the number of items (files and folders), that need to be processed by either service
Once the initial run is complete, run the command again.
The second run will finish faster, because it only needs to transport changes that happened since the last run.
Those changes are likely local to the StorSimple already, because they're recent, which further reduces the
time because the need to recall from the cloud is reduced. During this second run, new changes can still
accumulate.
Repeat this process until you're satisfied that the amount of time it takes to complete is an acceptable
downtime.
When you consider the downtime acceptable and you're prepared to take the StorSimple location offline, do
so now: for example, remove the SMB share so that no user can access the folder, or take any other
appropriate step that prevents content from changing in this folder on StorSimple.
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final
step takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your
downtime) by measuring how long the previous run took.
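A sketch for timing a run so you can estimate that final downtime window; the paths are the placeholders from the command above.

# Sketch: time a catch-up run; the next round over the same scope will take about as long or less.
Measure-Command { robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /UNILOG:<FilePathAndName> } | Select-Object Hours, Minutes, Seconds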
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions as on your StorSimple SMB share.
You have finished migrating a share, or a group of shares, into a common root or volume, depending on what
you mapped and decided needed to go into the same Azure file share.
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share
at a time.

WARNING
Once you have moved all the data from your StorSimple to the Windows Server, and your migration is complete: Return to
all sync groups in the Azure portal and adjust the cloud tiering volume free space percent value to something better
suited for cache utilization, say 20%.

The cloud tiering volume free space policy acts on a volume level, with potentially multiple server endpoints
syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue to apply
the most restrictive rule and attempt to keep 99% of the volume free, and the local cache won't perform as you
might expect. This is only appropriate if your goal is to have just the namespace on a volume that contains only
rarely accessed, archival data.

Troubleshoot
The most likely issue you can run into is that the RoboCopy command fails with "Volume full" on the Windows
Server side. If that's the case, your download speed is likely better than your upload speed. Cloud tiering
acts once every hour to evacuate content that has synced from the local Windows Server disk.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows
Server.
When your Windows Server has sufficient available capacity, rerunning the command will resolve the problem.
Nothing breaks when you get into this situation, and you can move forward with confidence. The inconvenience
of running the command again is the only consequence.
You can also run into other Azure File Sync issues. As unlikely as they may be, if that happens, take a look at the
Azure File Sync troubleshooting guide.
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider abilities of the source storage , the target storage , and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
Caution

While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolization of resources on both IO
constrained devices and crowded network links.
/IPG:n can't be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy relies entirely on the SMB protocol for all networking needs. Using SMB is the reason why
RoboCopy can't influence the network throughput itself, but it can slow down its use.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
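For example, a throttled catch-up run might look like the following sketch. The source path, target path, and
switch values are assumptions for illustration; adjust them to your environment:

robocopy \\<nas-device>\share D:\MigrationRoot /MIR /COPYALL /R:5 /W:5 /IPG:20 /LOG+:C:\Logs\robocopy-share.log

Here, /IPG:20 inserts a 20-millisecond delay between packets. Increase the value if your NAS or network links
remain under too much load.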
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file will be
evaluated during an initial copy and during catch-up copies. For example, repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of files migrated.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.

TIP
Rule of thumb: The first RoboCopy run, which will move a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs will copy fewer differences, and you are more likely to shift from
being network throughput constrained to compute constrained. Under these circumstances, it is often better to match the
RoboCopy thread count to the actually available threads on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.

Avoid unnecessary work


Avoid large-scale changes in your namespace. For example, moving files between directories, changing
properties at a large scale, or changing permissions (NTFS ACLs). Especially ACL changes can have a high
impact because they often have a cascading change effect on files lower in the folder hierarchy. Consequences
can be:
extended RoboCopy job run time, because each file and folder affected by an ACL change needs to be
updated
data moved earlier may need to be recopied. For instance, more data will need to be copied when
folder structures change after files have already been copied. A RoboCopy job can't "play back" a
namespace change. The next job must purge the files previously transported to the old folder structure and
upload the files in the new folder structure again.
Another important aspect is to use the RoboCopy tool effectively. With the recommended RoboCopy script,
you'll create and save a log file for errors. Copy errors can occur - that is normal. These errors often make it
necessary to run multiple rounds of a copy tool like RoboCopy. An initial run, say from a NAS to DataBox or a
server to an Azure file share. And one or more extra runs with the /MIR switch to catch and retry files that didn't
get copied.
You should be prepared to run multiple rounds of RoboCopy against a given namespace scope. Successive runs
will finish faster as they have less to copy but are constrained increasingly by the speed of processing the
namespace. When you run multiple rounds, you can speed up each round by not having RoboCopy try
unreasonably hard to copy everything in a given run. These RoboCopy switches can make a significant
difference:
/R:n where n = the number of times to retry copying a failed file
/W:n where n = the number of seconds to wait between retries
/R:5 /W:5 is a reasonable setting that you can adjust to your liking. In this example, a failed file will be retried
five times, with a five-second wait between retries. If the file still fails to copy, the next RoboCopy job will try
again. Often, files that failed because they were in use or because of timeout issues might eventually be copied
successfully this way.
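Putting these switches together, a full catch-up pass might look like this sketch. The paths, thread count, and
log file location are hypothetical:

robocopy \\<source-server>\share Z:\TargetFolder /MIR /COPYALL /MT:16 /R:5 /W:5 /LOG+:C:\Logs\robocopy-catchup.log

Files that still fail are recorded in the log and can be picked up by the next /MIR pass.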

Relevant links
Migration content:
StorSimple 8000 series migration guide
Azure File Sync content:
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting guide
Configure Azure Storage connection strings

A connection string includes the authorization information required for your application to access data in an
Azure Storage account at runtime using Shared Key authorization. You can configure connection strings to:
Connect to the Azurite storage emulator.
Access a storage account in Azure.
Access specified resources in Azure via a shared access signature (SAS).
To learn how to view your account access keys and copy a connection string, see Manage storage account access
keys.

Protect your access keys


Your storage account access keys are similar to a root password for your storage account. Always be careful to
protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. Avoid distributing
access keys to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others.
Rotate your keys if you believe they may have been compromised.

NOTE
Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests against blob and queue data if
possible, rather than using the account keys (Shared Key authorization). Authorization with Azure AD provides superior
security and ease of use over Shared Key authorization.
To protect an Azure Storage account with Azure AD Conditional Access policies, you must disallow Shared Key
authorization for the storage account. For more information about how to disallow Shared Key authorization, see Prevent
Shared Key authorization for an Azure Storage account.

Store a connection string


Your application needs to access the connection string at runtime to authorize requests made to Azure Storage.
You have several options for storing your connection string:
You can store your connection string in an environment variable.
An application running on the desktop or on a device can store the connection string in an app.config or
web.config file. Add the connection string to the AppSettings section in these files.
An application running in an Azure cloud service can store the connection string in the Azure service
configuration schema (.cscfg) file. Add the connection string to the ConfigurationSettings section of the
service configuration file.
Storing your connection string in a configuration file makes it easy to update the connection string to switch
between the Azurite storage emulator and an Azure storage account in the cloud. You only need to edit the
connection string to point to your target environment.
You can use the Microsoft Azure Configuration Manager to access your connection string at runtime regardless
of where your application is running.
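As a minimal sketch, reading a connection string from an environment variable in .NET might look like the
following. The variable name AZURE_STORAGE_CONNECTION_STRING is an assumption for illustration:

using System;

// Read the connection string from an environment variable at runtime.
// The variable name below is an assumption; use whatever name you set.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

if (string.IsNullOrEmpty(connectionString))
{
    throw new InvalidOperationException("The connection string environment variable is not set.");
}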

Configure a connection string for Azurite


The emulator supports a single fixed account and a well-known authentication key for Shared Key
authentication. This account and key are the only Shared Key credentials permitted for use with the emulator.
They are:

Account name: devstoreaccount1


Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

NOTE
The authentication key supported by the emulator is intended only for testing the functionality of your client
authentication code. It does not serve any security purpose. You cannot use your production storage account and key
with the emulator. You should not use the development account with production data.
The emulator supports connection via HTTP only. However, HTTPS is the recommended protocol for accessing resources
in a production Azure storage account.

Connect to the emulator account using the shortcut


The easiest way to connect to the emulator from your application is to configure a connection string in your
application's configuration file that references the shortcut UseDevelopmentStorage=true . The shortcut is
equivalent to the full connection string for the emulator, which specifies the account name, the account key, and
the emulator endpoints for each of the Azure Storage services:

DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;
AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;
TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;

The following .NET code snippet shows how you can use the shortcut from a method that takes a connection
string. For example, the BlobContainerClient(String, String) constructor takes a connection string.

BlobContainerClient blobContainerClient = new BlobContainerClient("UseDevelopmentStorage=true", "sample-container");
blobContainerClient.CreateIfNotExists();

Make sure that the emulator is running before calling the code in the snippet.
For more information about Azurite, see Use the Azurite emulator for local Azure Storage development.

Configure a connection string for an Azure storage account


To create a connection string for your Azure storage account, use the following format. Indicate whether you
want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName with the
name of your storage account, and replace myAccountKey with your account access key:
DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey

For example, your connection string might look similar to:


DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=<account-key>

Although Azure Storage supports both HTTP and HTTPS in a connection string, HTTPS is highly recommended.
TIP
You can find your storage account's connection strings in the Azure portal. Navigate to SETTINGS > Access keys in
your storage account's menu blade to see connection strings for both primary and secondary access keys.

Create a connection string using a shared access signature


If you possess a shared access signature (SAS) URL that grants you access to resources in a storage account, you
can use the SAS in a connection string. Because the SAS contains the information required to authenticate the
request, a connection string with a SAS provides the protocol, the service endpoint, and the necessary
credentials to access the resource.
To create a connection string that includes a shared access signature, specify the string in the following format:

BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken

Each service endpoint is optional, although the connection string must contain at least one.

NOTE
Using HTTPS with a SAS is recommended as a best practice.
If you are specifying a SAS in a connection string in a configuration file, you may need to encode special characters in the
URL.

Service SAS example


Here's an example of a connection string that includes a service SAS for Blob storage:

BlobEndpoint=https://storagesample.blob.core.windows.net;
SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-
635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D

And here's an example of the same connection string with encoding of special characters:

BlobEndpoint=https://storagesample.blob.core.windows.net;
SharedAccessSignature=sv=2015-04-05&amp;sr=b&amp;si=tutorial-policy-
635959936145100803&amp;sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D

Account SAS example


Here's an example of a connection string that includes an account SAS for Blob and File storage. Note that
endpoints for both services are specified:

BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-
08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-
13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl

And here's an example of the same connection string with URL encoding:
BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-
08&amp;sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&amp;spr=https&amp;st=2016-04-
12T03%3A24%3A31Z&amp;se=2016-04-13T03%3A29%3A31Z&amp;srt=s&amp;ss=bf&amp;sp=rwl

Create a connection string for an explicit storage endpoint


You can specify explicit service endpoints in your connection string instead of using the default endpoints. To
create a connection string that specifies an explicit endpoint, specify the complete service endpoint for each
service, including the protocol specification (HTTPS (recommended) or HTTP), in the following format:

DefaultEndpointsProtocol=[http|https];
BlobEndpoint=myBlobEndpoint;
FileEndpoint=myFileEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
AccountName=myAccountName;
AccountKey=myAccountKey

One scenario where you might wish to specify an explicit endpoint is when you've mapped your Blob storage
endpoint to a custom domain. In that case, you can specify your custom endpoint for Blob storage in your
connection string. You can optionally specify the default endpoints for the other services if your application uses
them.
Here is an example of a connection string that specifies an explicit endpoint for the Blob service:

# Blob endpoint only


DefaultEndpointsProtocol=https;
BlobEndpoint=http://www.mydomain.com;
AccountName=storagesample;
AccountKey=<account-key>

This example specifies explicit endpoints for all services, including a custom domain for the Blob service:

# All service endpoints


DefaultEndpointsProtocol=https;
BlobEndpoint=http://www.mydomain.com;
FileEndpoint=https://myaccount.file.core.windows.net;
QueueEndpoint=https://myaccount.queue.core.windows.net;
TableEndpoint=https://myaccount.table.core.windows.net;
AccountName=storagesample;
AccountKey=<account-key>

The endpoint values in a connection string are used to construct the request URIs to the storage services, and
dictate the form of any URIs that are returned to your code.
If you've mapped a storage endpoint to a custom domain and omit that endpoint from a connection string, then
you will not be able to use that connection string to access data in that service from your code.
For more information about configuring a custom domain for Azure Storage, see Map a custom domain to an
Azure Blob Storage endpoint.
IMPORTANT
Service endpoint values in your connection strings must be well-formed URIs, including https:// (recommended) or
http:// .

Create a connection string with an endpoint suffix


To create a connection string for a storage service in regions or instances with different endpoint suffixes, such
as for Azure China 21Vianet or Azure Government, use the following connection string format. Indicate whether
you want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName
with the name of your storage account, replace myAccountKey with your account access key, and replace
mySuffix with the URI suffix:

DefaultEndpointsProtocol=[http|https];
AccountName=myAccountName;
AccountKey=myAccountKey;
EndpointSuffix=mySuffix;

Here's an example connection string for storage services in Azure China 21Vianet:

DefaultEndpointsProtocol=https;
AccountName=storagesample;
AccountKey=<account-key>;
EndpointSuffix=core.chinacloudapi.cn;

Parsing a connection string


The Microsoft Azure Configuration Manager Library for .NET provides a class for parsing a connection string
from a configuration file. The CloudConfigurationManager class parses configuration settings for client
applications that run on the desktop, on a mobile device, in an Azure virtual machine, or in an Azure
cloud service.
To reference the CloudConfigurationManager package, add the following using directives:

using Microsoft.Azure; // Namespace for CloudConfigurationManager
using Microsoft.Azure.Storage;

Here's an example that shows how to retrieve a connection string from a configuration file:

// Parse the connection string and return a reference to the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

Using the Azure Configuration Manager is optional. You can also use an API such as the .NET Framework's
ConfigurationManager Class.
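For example, here's a sketch of reading the same setting with the .NET Framework's ConfigurationManager class,
assuming the setting exists in the AppSettings section of your App.config or web.config file:

using System.Configuration;

// Read the connection string from the AppSettings section of the configuration file.
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];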

Next steps
Use the Azurite emulator for local Azure Storage development
Grant limited access to Azure Storage resources using shared access signatures (SAS)
Develop for Azure Files with .NET

Learn the basics of developing .NET applications that use Azure Files to store data. This article shows how to
create a simple console application to do the following with .NET and Azure Files:
Get the contents of a file.
Set the maximum size, or quota, for a file share.
Create a shared access signature (SAS) for a file.
Copy a file to another file in the same storage account.
Copy a file to a blob in the same storage account.
Create a snapshot of a file share.
Restore a file from a share snapshot.
Use Azure Storage Metrics for troubleshooting.
To learn more about Azure Files, see What is Azure Files?

TIP
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of
Azure Storage Samples.

Applies to
FILE SHARE TYPE                               SMB   NFS

Standard file shares (GPv2), LRS/ZRS          Yes   No

Standard file shares (GPv2), GRS/GZRS         Yes   No

Premium file shares (FileStorage), LRS/ZRS    Yes   No

Understanding the .NET APIs


Azure Files provides two broad approaches to client applications: Server Message Block (SMB) and REST. Within
.NET, the System.IO and Azure.Storage.Files.Shares APIs abstract these approaches.

API: System.IO
When to use: Your application needs to read/write files by using SMB, is running on a device that has access
over port 445 to your Azure Files account, and doesn't need to manage any of the administrative settings of
the file share.
Notes: File I/O implemented with Azure Files over SMB is generally the same as I/O with any network file share
or local storage device. For an introduction to a number of features in .NET, including file I/O, see the
Console Application tutorial.

API: Azure.Storage.Files.Shares
When to use: Your application can't access Azure Files by using SMB on port 445 because of firewall or ISP
constraints, or requires administrative functionality, such as the ability to set a file share's quota or
create a shared access signature.
Notes: This article demonstrates the use of Azure.Storage.Files.Shares for file I/O using REST instead of SMB
and management of the file share.
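To illustrate the System.IO approach, here's a minimal sketch that reads and writes a file on a mounted SMB
share. The drive letter and path are assumptions for illustration; once the share is mounted, any standard
.NET file I/O applies:

using System.IO;

// Assume the Azure file share is mounted as drive Z: over SMB (port 445).
// The directory and file names below are hypothetical.
Directory.CreateDirectory(@"Z:\CustomLogs");
string path = @"Z:\CustomLogs\Log1.txt";

// Standard file I/O works the same as with any network file share.
File.WriteAllText(path, "Hello over SMB!");
string contents = File.ReadAllText(path);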

Create the console application and obtain the assembly


You can use the Azure Files client library in any type of .NET app. These apps include Azure cloud, web, desktop,
and mobile apps. In this guide, we create a console application for simplicity.
In Visual Studio, create a new Windows console application. The following steps show you how to create a
console application in Visual Studio 2019. The steps are similar in other versions of Visual Studio.
1. Start Visual Studio and select Create a new project .
2. In Create a new project , choose Console App (.NET Framework) for C#, and then select Next .
3. In Configure your new project , enter a name for the app, and select Create .
Add all the code examples in this article to the Program class in the Program.cs file.

Use NuGet to install the required packages


Refer to these packages in your project:
Azure .NET SDK v12
Azure .NET SDK v11

Azure core library for .NET: This package is the implementation of the Azure client pipeline.
Azure Storage Blob client library for .NET: This package provides programmatic access to blob resources in
your storage account.
Azure Storage Files client library for .NET: This package provides programmatic access to file resources in
your storage account.
System Configuration Manager library for .NET: This package provides a class for storing and retrieving values
in a configuration file.
You can use NuGet to obtain the packages. Follow these steps:
1. In Solution Explorer , right-click your project and choose Manage NuGet Packages .
2. In NuGet Package Manager , select Browse . Then search for and choose Azure.Core , and then select
Install .
This step installs the package and its dependencies.
3. Search for and install these packages:
Azure.Storage.Blobs
Azure.Storage.Files.Shares
System.Configuration.ConfigurationManager
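Alternatively, you can install the same packages from the Package Manager Console. These are the same
package IDs listed above; omitting a version installs the latest stable release:

Install-Package Azure.Core
Install-Package Azure.Storage.Blobs
Install-Package Azure.Storage.Files.Shares
Install-Package System.Configuration.ConfigurationManager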

Save your storage account credentials to the App.config file


Next, save your credentials in your project's App.config file. In Solution Explorer , double-click App.config and
edit the file so that it is similar to the following example.
Azure .NET SDK v12
Azure .NET SDK v11

Replace myaccount with your storage account name and mykey with your storage account key.

<?xml version="1.0" encoding="utf-8"?>


<configuration>
<appSettings>
<add key="StorageConnectionString"

value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;EndpointSuffix=core.windows.net
" />
<add key="StorageAccountName" value="myaccount" />
<add key="StorageAccountKey" value="mykey" />
</appSettings>
</configuration>

NOTE
The Azurite storage emulator does not currently support Azure Files. Your connection string must target an Azure storage
account in the cloud to work with Azure Files.

Add using directives


In Solution Explorer , open the Program.cs file, and add the following using directives to the top of the file.

Azure .NET SDK v12


Azure .NET SDK v11

using System;
using System.Configuration;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;
using Azure.Storage.Sas;
Access the file share programmatically
In the Program.cs file, add the following code to access the file share programmatically.
Azure .NET SDK v12
Azure .NET SDK v11

The following method creates a file share if it doesn't already exist. The method starts by creating a ShareClient
object from a connection string. The sample then attempts to download a file we created earlier. Call this method
from Main() .
//-------------------------------------------------
// Create a file share
//-------------------------------------------------
public async Task CreateShareAsync(string shareName)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareClient which will be used to create and manipulate the file share
ShareClient share = new ShareClient(connectionString, shareName);

// Create the share if it doesn't already exist


await share.CreateIfNotExistsAsync();

// Ensure that the share exists


if (await share.ExistsAsync())
{
Console.WriteLine($"Share created: {share.Name}");

// Get a reference to the sample directory


ShareDirectoryClient directory = share.GetDirectoryClient("CustomLogs");

// Create the directory if it doesn't already exist


await directory.CreateIfNotExistsAsync();

// Ensure that the directory exists


if (await directory.ExistsAsync())
{
// Get a reference to a file object
ShareFileClient file = directory.GetFileClient("Log1.txt");

// Ensure that the file exists


if (await file.ExistsAsync())
{
Console.WriteLine($"File exists: {file.Name}");

// Download the file


ShareFileDownloadInfo download = await file.DownloadAsync();

// Save the data to a local file, overwrite if the file already exists
using (FileStream stream = File.OpenWrite(@"downloadedLog1.txt"))
{
await download.Content.CopyToAsync(stream);
await stream.FlushAsync();
stream.Close();

// Display where the file was saved


Console.WriteLine($"File downloaded: {stream.Name}");
}
}
}
}
else
{
Console.WriteLine($"CreateShareAsync failed");
}
}
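To drive the sample, call the method from Main. Here's a minimal sketch; the share name sample-share is an
arbitrary value used for illustration:

static async Task Main(string[] args)
{
    // "sample-share" is an arbitrary share name used for illustration.
    Program program = new Program();
    await program.CreateShareAsync("sample-share");
}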

Set the maximum size for a file share


Beginning with version 5.x of the Azure Files client library, you can set the quota (maximum size) for a file share.
You can also check to see how much data is currently stored on the share.
Setting the quota for a share limits the total size of the files stored on the share. If the total size of files on the
share exceeds the quota, clients can't increase the size of existing files. Clients also can't create new files, unless
those files are empty.
The example below shows how to check the current usage for a share and how to set the quota for the share.
Azure .NET SDK v12
Azure .NET SDK v11

//-------------------------------------------------
// Set the maximum size of a share
//-------------------------------------------------
public async Task SetMaxShareSizeAsync(string shareName, int increaseSizeInGiB)
{
const long ONE_GIBIBYTE = 1073741824; // Number of bytes in 1 gibibyte

// Get the connection string from app settings


string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareClient which will be used to access the file share


ShareClient share = new ShareClient(connectionString, shareName);

// Create the share if it doesn't already exist


await share.CreateIfNotExistsAsync();

// Ensure that the share exists


if (await share.ExistsAsync())
{
// Get and display current share quota
ShareProperties properties = await share.GetPropertiesAsync();
Console.WriteLine($"Current share quota: {properties.QuotaInGB} GiB");

// Get and display current usage stats for the share


ShareStatistics stats = await share.GetStatisticsAsync();
Console.WriteLine($"Current share usage: {stats.ShareUsageInBytes} bytes");

// Convert current usage from bytes into GiB


int currentGiB = (int)(stats.ShareUsageInBytes / ONE_GIBIBYTE);

// This line sets the quota to be the current


// usage of the share plus the increase amount
await share.SetQuotaAsync(currentGiB + increaseSizeInGiB);

// Get the new quota and display it


properties = await share.GetPropertiesAsync();
Console.WriteLine($"New share quota: {properties.QuotaInGB} GiB");
}
}

Generate a shared access signature for a file or file share


Beginning with version 5.x of the Azure Files client library, you can generate a shared access signature (SAS) for
a file share or for an individual file.

Azure .NET SDK v12


Azure .NET SDK v11

The following example method returns a SAS on a file in the specified share.
//-------------------------------------------------
// Create a SAS URI for a file
//-------------------------------------------------
public Uri GetFileSasUri(string shareName, string filePath, DateTime expiration, ShareFileSasPermissions
permissions)
{
// Get the account details from app settings
string accountName = ConfigurationManager.AppSettings["StorageAccountName"];
string accountKey = ConfigurationManager.AppSettings["StorageAccountKey"];

ShareSasBuilder fileSAS = new ShareSasBuilder()


{
ShareName = shareName,
FilePath = filePath,

// Specify an Azure file resource


Resource = "f",

// Expires in 24 hours
ExpiresOn = expiration
};

// Set the permissions for the SAS


fileSAS.SetPermissions(permissions);

// Create a SharedKeyCredential that we can use to sign the SAS token


StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);

// Build a SAS URI


UriBuilder fileSasUri = new
UriBuilder($"https://{accountName}.file.core.windows.net/{fileSAS.ShareName}/{fileSAS.FilePath}");
fileSasUri.Query = fileSAS.ToSasQueryParameters(credential).ToString();

// Return the URI


return fileSasUri.Uri;
}

For more information about creating and using shared access signatures, see How a shared access signature
works.

Copy files
Beginning with version 5.x of the Azure Files client library, you can copy a file to another file, a file to a blob, or a
blob to a file.
You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See Get
started with AzCopy.

NOTE
If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to
the source object, even if you are copying within the same storage account.

Copy a file to another file


The following example copies a file to another file in the same share. You can use Shared Key authentication to
do the copy because this operation copies files within the same storage account.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// Copy file within a directory
//-------------------------------------------------
public async Task CopyFileAsync(string shareName, string sourceFilePath, string destFilePath)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Get a reference to the file we created previously


ShareFileClient sourceFile = new ShareFileClient(connectionString, shareName, sourceFilePath);

// Ensure that the source file exists


if (await sourceFile.ExistsAsync())
{
// Get a reference to the destination file
ShareFileClient destFile = new ShareFileClient(connectionString, shareName, destFilePath);

// Start the copy operation


await destFile.StartCopyAsync(sourceFile.Uri);

if (await destFile.ExistsAsync())
{
Console.WriteLine($"{sourceFile.Uri} copied to {destFile.Uri}");
}
}
}

Copy a file to a blob


The following example creates a file and copies it to a blob within the same storage account. The example
creates a SAS for the source file, which the service uses to authorize access to the source file during the copy
operation.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// Copy a file from a share to a blob
//-------------------------------------------------
public async Task CopyFileToBlobAsync(string shareName, string sourceFilePath, string containerName, string
blobName)
{
// Get a file SAS from the method created earlier
Uri fileSasUri = GetFileSasUri(shareName, sourceFilePath, DateTime.UtcNow.AddHours(24),
ShareFileSasPermissions.Read);

// Get a reference to the file we created previously


ShareFileClient sourceFile = new ShareFileClient(fileSasUri);

// Ensure that the source file exists


if (await sourceFile.ExistsAsync())
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Get a reference to the destination container


BlobContainerClient container = new BlobContainerClient(connectionString, containerName);

// Create the container if it doesn't already exist


await container.CreateIfNotExistsAsync();

BlobClient destBlob = container.GetBlobClient(blobName);

await destBlob.StartCopyFromUriAsync(sourceFile.Uri);

if (await destBlob.ExistsAsync())
{
Console.WriteLine($"File {sourceFile.Name} copied to blob {destBlob.Name}");
}
}
}

You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access
to that blob during the copy operation.

Share snapshots
Beginning with version 8.5 of the Azure Files client library, you can create a share snapshot. You can also list or
browse share snapshots and delete share snapshots. Once created, share snapshots are read-only.
Create share snapshots
The following example creates a file share snapshot.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// Create a share snapshot
//-------------------------------------------------
public async Task CreateShareSnapshotAsync(string shareName)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);

// Instantiate a ShareClient which will be used to access the file share


ShareClient share = shareServiceClient.GetShareClient(shareName);

// Ensure that the share exists


if (await share.ExistsAsync())
{
// Create a snapshot
ShareSnapshotInfo snapshotInfo = await share.CreateSnapshotAsync();
Console.WriteLine($"Snapshot created: {snapshotInfo.Snapshot}");
}
}

List share snapshots


The following example lists the snapshots on a share.

Azure .NET SDK v12


Azure .NET SDK v11

//-------------------------------------------------
// List the snapshots on a share
//-------------------------------------------------
public void ListShareSnapshots()
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);

// Display each share and the snapshots on each share


foreach (ShareItem item in shareServiceClient.GetShares(ShareTraits.All, ShareStates.Snapshots))
{
if (null != item.Snapshot)
{
Console.WriteLine($"Share: {item.Name}\tSnapshot: {item.Snapshot}");
}
}
}

List files and directories within share snapshots


The following example browses files and directories within share snapshots.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// List files and directories within a share snapshot
//-------------------------------------------------
public void ListSnapshotContents(string shareName, string snapshotTime)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);

// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);

Console.WriteLine($"Share: {share.Name}");

// Get as ShareClient that points to a snapshot


ShareClient snapshot = share.WithSnapshot(snapshotTime);

// Get the root directory in the snapshot share


ShareDirectoryClient rootDir = snapshot.GetRootDirectoryClient();

// Recursively list the directory tree


ListDirTree(rootDir);
}

//-------------------------------------------------
// Recursively list a directory tree
//-------------------------------------------------
public void ListDirTree(ShareDirectoryClient dir)
{
// List the files and directories in the snapshot
foreach (ShareFileItem item in dir.GetFilesAndDirectories())
{
if (item.IsDirectory)
{
Console.WriteLine($"Directory: {item.Name}");
ShareDirectoryClient subDir = dir.GetSubdirectoryClient(item.Name);
ListDirTree(subDir);
}
else
{
Console.WriteLine($"File: {dir.Name}\\{item.Name}");
}
}
}

Restore file shares or files from share snapshots


Taking a snapshot of a file share enables you to recover individual files or the entire file share.
You can restore a file from a file share snapshot by querying the share snapshots of a file share. You can then
retrieve a file that belongs to a particular share snapshot. Use that version to directly read or to restore the file.
Azure .NET SDK v12
Azure .NET SDK v11
//-------------------------------------------------
// Restore file from snapshot
//-------------------------------------------------
public async Task RestoreFileFromSnapshot(string shareName, string directoryName, string fileName, string
snapshotTime)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);

// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);

// Get as ShareClient that points to a snapshot


ShareClient snapshot = share.WithSnapshot(snapshotTime);

// Get a ShareDirectoryClient, then a ShareFileClient to the snapshot file


ShareDirectoryClient snapshotDir = snapshot.GetDirectoryClient(directoryName);
ShareFileClient snapshotFile = snapshotDir.GetFileClient(fileName);

// Get a ShareDirectoryClient, then a ShareFileClient to the live file


ShareDirectoryClient liveDir = share.GetDirectoryClient(directoryName);
ShareFileClient liveFile = liveDir.GetFileClient(fileName);

// Restore the file from the snapshot


ShareFileCopyInfo copyInfo = await liveFile.StartCopyAsync(snapshotFile.Uri);

// Display the status of the operation


Console.WriteLine($"Restore status: {copyInfo.CopyStatus}");
}

Delete share snapshots


The following example deletes a file share snapshot.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// Delete a snapshot
//-------------------------------------------------
public async Task DeleteSnapshotAsync(string shareName, string snapshotTime)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);

// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);

// Get a ShareClient that points to a snapshot


ShareClient snapshotShare = share.WithSnapshot(snapshotTime);

try
{
// Delete the snapshot
await snapshotShare.DeleteIfExistsAsync();
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Exception: {ex.Message}");
Console.WriteLine($"Error code: {ex.Status}\t{ex.ErrorCode}");
}
}

Troubleshoot Azure Files by using metrics


Azure Storage Analytics supports metrics for Azure Files. With metrics data, you can trace requests and diagnose
issues.
You can enable metrics for Azure Files from the Azure portal. You can also enable metrics programmatically by
calling the Set File Service Properties operation with the REST API or one of its analogs in the Azure Files client
library.
The following code example shows how to use the .NET client library to enable metrics for Azure Files.

Azure .NET SDK v12


Azure .NET SDK v11
//-------------------------------------------------
// Use metrics
//-------------------------------------------------
public async Task UseMetricsAsync()
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);

// Set metrics properties for File service


await shareService.SetPropertiesAsync(new ShareServiceProperties()
{
// Set hour metrics
HourMetrics = new ShareMetrics()
{
Enabled = true,
IncludeApis = true,
Version = "1.0",

RetentionPolicy = new ShareRetentionPolicy()


{
Enabled = true,
Days = 14
}
},

// Set minute metrics


MinuteMetrics = new ShareMetrics()
{
Enabled = true,
IncludeApis = true,
Version = "1.0",

RetentionPolicy = new ShareRetentionPolicy()


{
Enabled = true,
Days = 7
}
}
});

// Read the metrics properties we just set


ShareServiceProperties serviceProperties = await shareService.GetPropertiesAsync();

// Display the properties


Console.WriteLine();
Console.WriteLine($"HourMetrics.InludeApis: {serviceProperties.HourMetrics.IncludeApis}");
Console.WriteLine($"HourMetrics.RetentionPolicy.Days:
{serviceProperties.HourMetrics.RetentionPolicy.Days}");
Console.WriteLine($"HourMetrics.Version: {serviceProperties.HourMetrics.Version}");
Console.WriteLine();
Console.WriteLine($"MinuteMetrics.InludeApis: {serviceProperties.MinuteMetrics.IncludeApis}");
Console.WriteLine($"MinuteMetrics.RetentionPolicy.Days:
{serviceProperties.MinuteMetrics.RetentionPolicy.Days}");
Console.WriteLine($"MinuteMetrics.Version: {serviceProperties.MinuteMetrics.Version}");
Console.WriteLine();
}

If you encounter any problems, you can refer to Troubleshoot Azure Files problems in Windows.

Next steps
For more information about Azure Files, see the following resources:
Conceptual articles and videos
Azure Files: a frictionless cloud SMB file system for Windows and Linux
Use Azure Files with Linux
Tooling support for File storage
Get started with AzCopy
Troubleshoot Azure Files problems in Windows
Reference
Azure Storage APIs for .NET
File Service REST API
Develop for Azure Files with Java

Learn the basics of developing Java applications that use Azure Files to store data. Create a console application and
learn basic actions using Azure Files APIs:
Create and delete Azure file shares
Create and delete directories
Enumerate files and directories in an Azure file share
Upload, download, and delete a file

TIP
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of
Azure Storage Samples.

Applies to
FILE SHARE TYPE                               SMB   NFS

Standard file shares (GPv2), LRS/ZRS          Yes   No

Standard file shares (GPv2), GRS/GZRS         Yes   No

Premium file shares (FileStorage), LRS/ZRS    Yes   No

Create a Java application


To build the samples, you'll need the Java Development Kit (JDK) and the Azure Storage SDK for Java. You should
also have created an Azure storage account.

Set up your application to use Azure Files


To use the Azure Files APIs, add the following code to the top of the Java file from where you intend to access
Azure Files.
Azure Java SDK v12
Azure Java SDK v8

// Include the following imports to use Azure Files APIs


import com.azure.storage.file.share.*;

Set up an Azure storage connection string


To use Azure Files, you need to connect to your Azure storage account. Configure a connection string and use it
to connect to your storage account. Define a static variable to hold the connection string.
Azure Java SDK v12
Azure Java SDK v8

Replace <storage_account_name> and <storage_account_key> with the actual values for your storage account.

// Define the connection-string.


// Replace the values, including <>, with
// the values from your storage account.
public static final String connectStr =
"DefaultEndpointsProtocol=https;" +
"AccountName=<storage_account_name>;" +
"AccountKey=<storage_account_key>";

Access an Azure file share


Azure Java SDK v12
Azure Java SDK v8

To access Azure Files, create a ShareClient object. Use the ShareClientBuilder class to build a new ShareClient
object.

ShareClient shareClient = new ShareClientBuilder()


.connectionString(connectStr).shareName(shareName)
.buildClient();

Create a file share


All files and directories in Azure Files are stored in a container called a share.

Azure Java SDK v12


Azure Java SDK v8

The ShareClient.create method throws an exception if the share already exists. Put the call to create in a
try/catch block and handle the exception.

public static Boolean createFileShare(String connectStr, String shareName)


{
try
{
ShareClient shareClient = new ShareClientBuilder()
.connectionString(connectStr).shareName(shareName)
.buildClient();

shareClient.create();
return true;
}
catch (Exception e)
{
System.out.println("createFileShare exception: " + e.getMessage());
return false;
}
}

Delete a file share


The following sample code deletes a file share.

Azure Java SDK v12


Azure Java SDK v8

Delete a share by calling the ShareClient.delete method.

public static Boolean deleteFileShare(String connectStr, String shareName)


{
try
{
ShareClient shareClient = new ShareClientBuilder()
.connectionString(connectStr).shareName(shareName)
.buildClient();

shareClient.delete();
return true;
}
catch (Exception e)
{
System.out.println("deleteFileShare exception: " + e.getMessage());
return false;
}
}

Create a directory
Organize storage by putting files inside subdirectories instead of having all of them in the root directory.
Azure Java SDK v12
Azure Java SDK v8

The following code creates a directory by calling ShareDirectoryClient.create. The example method returns a
Boolean value indicating if it successfully created the directory.

public static Boolean createDirectory(String connectStr, String shareName,


String dirName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

dirClient.create();
return true;
}
catch (Exception e)
{
System.out.println("createDirectory exception: " + e.getMessage());
return false;
}
}

Delete a directory
Deleting a directory is a straightforward task. You can't delete a directory that still contains files or
subdirectories.
Azure Java SDK v12
Azure Java SDK v8

The ShareDirectoryClient.delete method throws an exception if the directory doesn't exist or isn't empty. Put the
call to delete in a try/catch block and handle the exception.

public static Boolean deleteDirectory(String connectStr, String shareName,


String dirName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

dirClient.delete();
return true;
}
catch (Exception e)
{
System.out.println("deleteDirectory exception: " + e.getMessage());
return false;
}
}

Enumerate files and directories in an Azure file share


Azure Java SDK v12
Azure Java SDK v8

Get a list of files and directories by calling ShareDirectoryClient.listFilesAndDirectories. The method returns a list
of ShareFileItem objects on which you can iterate. The following code lists files and directories inside the
directory specified by the dirName parameter.

public static Boolean enumerateFilesAndDirs(String connectStr, String shareName,


String dirName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

dirClient.listFilesAndDirectories().forEach(
fileRef -> System.out.printf("Resource: %s\t Directory? %b\n",
fileRef.getName(), fileRef.isDirectory())
);

return true;
}
catch (Exception e)
{
System.out.println("enumerateFilesAndDirs exception: " + e.getMessage());
return false;
}
}
Upload a file
Learn how to upload a file from local storage.

Azure Java SDK v12


Azure Java SDK v8

The following code uploads a local file to Azure Files by calling the ShareFileClient.uploadFromFile method. The
following example method returns a Boolean value indicating if it successfully uploaded the specified file.

public static Boolean uploadFile(String connectStr, String shareName,


String dirName, String fileName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

ShareFileClient fileClient = dirClient.getFileClient(fileName);


fileClient.create(1024);
fileClient.uploadFromFile(fileName);
return true;
}
catch (Exception e)
{
System.out.println("uploadFile exception: " + e.getMessage());
return false;
}
}

Download a file
One of the more frequent operations is to download files from an Azure file share.

Azure Java SDK v12


Azure Java SDK v8

The following example downloads the specified file to the local directory specified in the destDir parameter. The
example method makes the downloaded filename unique by prepending the date and time.
public static Boolean downloadFile(String connectStr, String shareName,
String dirName, String destDir,
String fileName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

ShareFileClient fileClient = dirClient.getFileClient(fileName);

// Create a unique file name


String date = new java.text.SimpleDateFormat("yyyyMMdd-HHmmss").format(new java.util.Date());
String destPath = destDir + "/"+ date + "_" + fileName;

fileClient.downloadToFile(destPath);
return true;
}
catch (Exception e)
{
System.out.println("downloadFile exception: " + e.getMessage());
return false;
}
}

Delete a file
Another common Azure Files operation is file deletion.

Azure Java SDK v12


Azure Java SDK v8

The following code deletes the specified file. First, the example creates a ShareDirectoryClient based on
the dirName parameter. Then, the code gets a ShareFileClient from the directory client, based on the fileName
parameter. Finally, the example method calls ShareFileClient.delete to delete the file.

public static Boolean deleteFile(String connectStr, String shareName,


String dirName, String fileName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();

ShareFileClient fileClient = dirClient.getFileClient(fileName);


fileClient.delete();
return true;
}
catch (Exception e)
{
System.out.println("deleteFile exception: " + e.getMessage());
return false;
}
}

Next steps
If you would like to learn more about other Azure storage APIs, follow these links.
Azure for Java developers
Azure SDK for Java
Azure SDK for Android
Azure File Share client library for Java SDK Reference
Azure Storage Services REST API
Azure Storage Team Blog
Transfer data with the AzCopy Command-Line Utility
Troubleshooting Azure Files problems - Windows
Develop for Azure Files with C++

TIP
Tr y the Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure
Storage data on Windows, macOS, and Linux.

Applies to
FILE SHARE TYPE                               SMB   NFS

Standard file shares (GPv2), LRS/ZRS          Yes   No

Standard file shares (GPv2), GRS/GZRS         Yes   No

Premium file shares (FileStorage), LRS/ZRS    Yes   No

About this tutorial


In this tutorial, you'll learn how to do basic operations on Azure Files using C++. If you're new to Azure Files,
going through the concepts in the sections that follow will be helpful in understanding the samples. Some of the
samples covered are:
Create and delete Azure file shares
Create and delete directories
Upload, download, and delete a file
Set and list the metadata for a file

NOTE
Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share
using the standard C++ I/O classes and functions. This article will describe how to write applications that use the Azure
Storage C++ SDK, which uses the File REST API to talk to Azure Files.

Prerequisites
Azure subscription
Azure storage account
C++ compiler
CMake
Vcpkg - C and C++ package manager

Setting up
This section walks you through preparing a project to work with the Azure Files Share client library v12 for
C++.
Install the packages
The vcpkg install command will install the Azure Storage Files Shares SDK for C++ and necessary dependencies:

vcpkg.exe install azure-storage-files-shares-cpp:x64-windows

For more information, visit GitHub to acquire and build the Azure SDK for C++.
Create the project
In Visual Studio, create a new C++ console application for Windows called FilesShareQuickstartV12.

Copy your credentials from the Azure portal


When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking , select Access keys . Here, you can
view the account access keys and the complete connection string for each key.
4. In the Access keys pane, select Show keys .
5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You'll add the connection string value to an environment variable in the next section.

Configure your storage connection string


After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.

Windows
Linux and macOS

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
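On Linux and macOS, the equivalent (a sketch assuming a Bash-style shell) is:

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"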
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Code examples
These example code snippets show you how to do the following tasks with the Azure Files Share client library
for C++:
Add include files
Get the connection string
Create a files share
Upload files to a files share
Set the metadata of a file
List the metadata of a file
Download files
Delete a file
Delete a files share
Add include files
From the project directory:
1. Open the FilesShareQuickstartV12.sln solution file in Visual Studio.
2. Inside Visual Studio, open the FilesShareQuickstartV12.cpp source file.
3. Remove any code inside main that was autogenerated.
4. Add #include statements.
#include <iostream>
#include <stdlib.h>
#include <vector>

#include <azure/storage/files/shares.hpp>

Get the connection string


The code below retrieves the connection string for your storage account from the environment variable created
in Configure your storage connection string.
Add this code inside main() :

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING.
// Note that _MSC_VER is set when using MSVC compiler.
static const char* AZURE_STORAGE_CONNECTION_STRING = "AZURE_STORAGE_CONNECTION_STRING";
#if !defined(_MSC_VER)
const char* connectionString = std::getenv(AZURE_STORAGE_CONNECTION_STRING);
#else
// Use getenv_s for MSVC
size_t requiredSize;
getenv_s(&requiredSize, NULL, NULL, AZURE_STORAGE_CONNECTION_STRING);
if (requiredSize == 0) {
throw std::runtime_error("missing connection string from env.");
}
std::vector<char> value(requiredSize);
getenv_s(&requiredSize, value.data(), value.size(), AZURE_STORAGE_CONNECTION_STRING);
std::string connectionStringStr = std::string(value.begin(), value.end());
const char* connectionString = connectionStringStr.c_str();
#endif

Create a Files Share


Create an instance of the ShareClient class by calling the CreateFromConnectionString function. Then call
CreateIfNotExists to create the actual files share in your storage account.
Add this code to the end of main() :

using namespace Azure::Storage::Files::Shares;

std::string shareName = "sample-share";

// Initialize a new instance of ShareClient
auto shareClient = ShareClient::CreateFromConnectionString(connectionString, shareName);

// Create the files share. This will do nothing if the files share already exists.
std::cout << "Creating files share: " << shareName << std::endl;
shareClient.CreateIfNotExists();

Upload files to a Files Share


The following code snippet:
1. Declares a string containing "Hello Azure!".
2. Gets a reference to a ShareFileClient object by getting the root ShareDirectoryClient and then calling
GetFileClient on the files share from the Create a Files Share section.
3. Uploads the string to the file by calling the UploadFrom function. This function creates the file if it doesn't
already exist, or updates it if it does.
Add this code to the end of main() :

std::string fileName = "sample-file";
uint8_t fileContent[] = "Hello Azure!";

// Create the ShareFileClient
ShareFileClient fileClient = shareClient.GetRootDirectoryClient().GetFileClient(fileName);

// Upload the file
std::cout << "Uploading file: " << fileName << std::endl;
fileClient.UploadFrom(fileContent, sizeof(fileContent));

Set the metadata of a File


Set the metadata properties for a file by calling the ShareFileClient.SetMetadata function.
Add this code to the end of main() :

Azure::Storage::Metadata fileMetadata = { {"key1", "value1"}, {"key2", "value2"} };
fileClient.SetMetadata(fileMetadata);

List the metadata of a File


Get the metadata properties for a file by calling the ShareFileClient.GetProperties function. The metadata is
under the Metadata field of the returned Value . The metadata will be a key-value pair, similar to the example in
Set the metadata of a File.

// Retrieve the file properties
auto properties = fileClient.GetProperties().Value;
std::cout << "Listing file metadata..." << std::endl;
for (auto metadata : properties.Metadata)
{
    std::cout << metadata.first << ":" << metadata.second << std::endl;
}

Download files
After retrieving the properties of the file in List the metadata of a File, create a new std::vector<uint8_t> object
sized by using the properties of the uploaded file. Then download the previously created file into the new
std::vector<uint8_t> object by calling the DownloadTo function in the ShareFileClient base class. Finally,
display the downloaded file data.
Add this code to the end of main() :

std::vector<uint8_t> fileDownloaded(properties.FileSize);
fileClient.DownloadTo(fileDownloaded.data(), fileDownloaded.size());

std::cout << "Downloaded file contents: " << std::string(fileDownloaded.begin(), fileDownloaded.end()) << std::endl;

Delete a file
The following code deletes the file from the Azure files share by calling the ShareFileClient.DeleteIfExists
function.
Add this code to the end of main() :

std::cout << "Deleting file: " << fileName << std::endl;
fileClient.DeleteIfExists();
Delete a files share
The following code cleans up the resources the app created by deleting the entire files share by using
ShareClient.DeleteIfExists.
Add this code to the end of main() :

std::cout << "Deleting files share: " << shareName << std::endl;
shareClient.DeleteIfExists();

Run the code


This app creates an Azure files share and uploads a text file to it. The example then sets and lists the file
metadata, downloads the file, and displays the file contents. Finally, the app deletes the file and the files share.
The output of the app is similar to the following example:

Azure Files Shares storage v12 - C++ quickstart sample


Creating files share: sample-share
Uploading file: sample-file
Listing file metadata...
key1:value1
key2:value2
Downloaded file contents: Hello Azure!
Deleting file: sample-file
Deleting files share: sample-share

Next steps
In this quickstart, you learned how to upload, download, and list files using C++. You also learned how to create
and delete an Azure Storage Files Share.
To see a C++ Azure Files sample, continue to:
Azure Storage Files Share SDK v12 for C++ samples
Develop for Azure Files with Python
5/20/2022 • 8 minutes to read • Edit Online

Learn the basics of using Python to develop apps or services that use Azure Files to store file data. Create a
simple console app and learn how to perform basic actions with Python and Azure Files:
Create Azure file shares
Create directories
Enumerate files and directories in an Azure file share
Upload, download, and delete a file
Create file share backups by using snapshots

NOTE
Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share
using the standard Python I/O classes and functions. This article will describe how to write apps that use the Azure
Storage SDK for Python, which uses the Azure Files REST API to talk to Azure Files.
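
As a minimal illustration of that note, plain Python file I/O works directly against a mounted share. This is a sketch only; the mount path /mnt/myshare is a hypothetical example and is not created anywhere in this article:

# Plain Python I/O against an SMB-mounted Azure file share.
# Assumes the share is already mounted at /mnt/myshare (hypothetical path).
mounted_share_path = "/mnt/myshare"

# Write a small text file to the share.
with open(mounted_share_path + "/hello.txt", "w") as f:
    f.write("Hello Azure!")

# Read it back.
with open(mounted_share_path + "/hello.txt") as f:
    print(f.read())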

Applies to
File share type                               SMB   NFS

Standard file shares (GPv2), LRS/ZRS          Yes   No

Standard file shares (GPv2), GRS/GZRS         Yes   No

Premium file shares (FileStorage), LRS/ZRS    Yes   Yes

Download and Install Azure Storage SDK for Python


NOTE
If you are upgrading from the Azure Storage SDK for Python version 0.36 or earlier, uninstall the older SDK using
pip uninstall azure-storage before installing the latest package.

Azure Python SDK v12


Azure Python SDK v2

The Azure Files client library v12.x for Python requires Python 2.7 or 3.6+.

Install via PyPI


To install via the Python Package Index (PyPI), type:
Azure Python SDK v12
Azure Python SDK v2
pip install azure-storage-file-share

Set up your application to use Azure Files


Add the following near the top of a Python source file to use the code snippets in this article.
Azure Python SDK v12
Azure Python SDK v2

from azure.core.exceptions import (
    ResourceExistsError,
    ResourceNotFoundError
)

from azure.storage.fileshare import (
    ShareServiceClient,
    ShareClient,
    ShareDirectoryClient,
    ShareFileClient
)

Set up a connection to Azure Files


Azure Python SDK v12
Azure Python SDK v2

ShareServiceClient lets you work with shares, directories, and files. The following code creates a
ShareServiceClient object using the storage account connection string.
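
The snippets in this article assume that a connection_string variable is already defined. One way to supply it, shown here as a hedged sketch that mirrors the AZURE_STORAGE_CONNECTION_STRING environment variable convention used earlier in this document, is:

import os

# Read the connection string from an environment variable rather than
# hard-coding it in source. The variable name is an assumption that
# matches the convention used elsewhere in this documentation.
connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]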

# Create a ShareServiceClient from a connection string
service_client = ShareServiceClient.from_connection_string(connection_string)

Create an Azure file share


Azure Python SDK v12
Azure Python SDK v2

The following code example uses a ShareClient object to create the share if it doesn't exist.

def create_file_share(self, connection_string, share_name):
    try:
        # Create a ShareClient from a connection string
        share_client = ShareClient.from_connection_string(
            connection_string, share_name)

        print("Creating share:", share_name)
        share_client.create_share()

    except ResourceExistsError as ex:
        print("ResourceExistsError:", ex.message)

Create a directory
You can organize storage by putting files inside subdirectories instead of having all of them in the root directory.

Azure Python SDK v12


Azure Python SDK v2

The following method creates a directory in the root of the specified file share by using a ShareDirectoryClient
object.

def create_directory(self, connection_string, share_name, dir_name):
    try:
        # Create a ShareDirectoryClient from a connection string
        dir_client = ShareDirectoryClient.from_connection_string(
            connection_string, share_name, dir_name)

        print("Creating directory:", share_name + "/" + dir_name)
        dir_client.create_directory()

    except ResourceExistsError as ex:
        print("ResourceExistsError:", ex.message)

Upload a file
In this section, you'll learn how to upload a file from local storage into Azure Files.
Azure Python SDK v12
Azure Python SDK v2

The following method uploads the contents of the specified file into the specified directory in the specified Azure
file share.

def upload_local_file(self, connection_string, local_file_path, share_name, dest_file_path):
    try:
        source_file = open(local_file_path, "rb")
        data = source_file.read()

        # Create a ShareFileClient from a connection string
        file_client = ShareFileClient.from_connection_string(
            connection_string, share_name, dest_file_path)

        print("Uploading to:", share_name + "/" + dest_file_path)
        file_client.upload_file(data)

    except ResourceExistsError as ex:
        print("ResourceExistsError:", ex.message)

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Enumerate files and directories in an Azure file share


Azure Python SDK v12
Azure Python SDK v2

To list the files and directories in a subdirectory, use the list_directories_and_files method. This method returns
an auto-paging iterable. The following code outputs the name of each file and subdirectory in the specified
directory to the console.
def list_files_and_dirs(self, connection_string, share_name, dir_name):
    try:
        # Create a ShareClient from a connection string
        share_client = ShareClient.from_connection_string(
            connection_string, share_name)

        for item in list(share_client.list_directories_and_files(dir_name)):
            if item["is_directory"]:
                print("Directory:", item["name"])
            else:
                print("File:", dir_name + "/" + item["name"])

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Download a file
Azure Python SDK v12
Azure Python SDK v2

To download data from a file, use download_file.


The following example demonstrates using download_file to get the contents of the specified file and store it
locally with DOWNLOADED- prepended to the filename.

def download_azure_file(self, connection_string, share_name, dir_name, file_name):
    try:
        # Build the remote path
        source_file_path = dir_name + "/" + file_name

        # Add a prefix to the filename to
        # distinguish it from the uploaded file
        dest_file_name = "DOWNLOADED-" + file_name

        # Create a ShareFileClient from a connection string
        file_client = ShareFileClient.from_connection_string(
            connection_string, share_name, source_file_path)

        print("Downloading to:", dest_file_name)

        # Open a file for writing bytes on the local system
        with open(dest_file_name, "wb") as data:
            # Download the file from Azure into a stream
            stream = file_client.download_file()
            # Write the stream to the local file
            data.write(stream.readall())

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Create a share snapshot


You can create a point-in-time copy of your entire file share.

Azure Python SDK v12


Azure Python SDK v2
def create_snapshot(self, connection_string, share_name):
    try:
        # Create a ShareClient from a connection string
        share_client = ShareClient.from_connection_string(
            connection_string, share_name)

        # Create a snapshot
        snapshot = share_client.create_snapshot()
        print("Created snapshot:", snapshot["snapshot"])

        # Return the snapshot time so
        # it can be accessed later
        return snapshot["snapshot"]

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

List shares and snapshots


You can list all the snapshots for a particular share.

Azure Python SDK v12


Azure Python SDK v2

def list_shares_snapshots(self, connection_string):
    try:
        # Create a ShareServiceClient from a connection string
        service_client = ShareServiceClient.from_connection_string(connection_string)

        # List the shares in the file service
        shares = list(service_client.list_shares(include_snapshots=True))

        for share in shares:
            if share["snapshot"]:
                print("Share:", share["name"], "Snapshot:", share["snapshot"])
            else:
                print("Share:", share["name"])

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Browse share snapshot


You can browse each share snapshot to retrieve files and directories from that point in time.

Azure Python SDK v12


Azure Python SDK v2
def browse_snapshot_dir(self, connection_string, share_name, snapshot_time, dir_name):
    try:
        # Create a ShareClient from a connection string
        snapshot = ShareClient.from_connection_string(
            conn_str=connection_string, share_name=share_name, snapshot=snapshot_time)

        print("Snapshot:", snapshot_time)

        for item in list(snapshot.list_directories_and_files(dir_name)):
            if item["is_directory"]:
                print("Directory:", item["name"])
            else:
                print("File:", dir_name + "/" + item["name"])

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Get file from share snapshot


You can download a file from a share snapshot. This enables you to restore a previous version of a file.

Azure Python SDK v12


Azure Python SDK v2

def download_snapshot_file(self, connection_string, share_name, snapshot_time, dir_name, file_name):
    try:
        # Build the remote path
        source_file_path = dir_name + "/" + file_name

        # Add a prefix to the local filename to
        # indicate it's a file from a snapshot
        dest_file_name = "SNAPSHOT-" + file_name

        # Create a ShareFileClient from a connection string
        snapshot_file_client = ShareFileClient.from_connection_string(
            conn_str=connection_string, share_name=share_name,
            file_path=source_file_path, snapshot=snapshot_time)

        print("Downloading to:", dest_file_name)

        # Open a file for writing bytes on the local system
        with open(dest_file_name, "wb") as data:
            # Download the file from Azure into a stream
            stream = snapshot_file_client.download_file()
            # Write the stream to the local file
            data.write(stream.readall())

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Delete a single share snapshot


You can delete a single share snapshot.

Azure Python SDK v12


Azure Python SDK v2
def delete_snapshot(self, connection_string, share_name, snapshot_time):
    try:
        # Create a ShareClient for a snapshot
        snapshot_client = ShareClient.from_connection_string(
            conn_str=connection_string, share_name=share_name, snapshot=snapshot_time)

        print("Deleting snapshot:", snapshot_time)

        # Delete the snapshot
        snapshot_client.delete_share()

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Delete a file
Azure Python SDK v12
Azure Python SDK v2

To delete a file, call delete_file.

def delete_azure_file(self, connection_string, share_name, file_path):
    try:
        # Create a ShareFileClient from a connection string
        file_client = ShareFileClient.from_connection_string(
            connection_string, share_name, file_path)

        print("Deleting file:", share_name + "/" + file_path)

        # Delete the file
        file_client.delete_file()

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Delete share when share snapshots exist


Azure Python SDK v12
Azure Python SDK v2

To delete a share that contains snapshots, call delete_share with delete_snapshots=True .

def delete_share(self, connection_string, share_name):
    try:
        # Create a ShareClient from a connection string
        share_client = ShareClient.from_connection_string(
            connection_string, share_name)

        print("Deleting share:", share_name)

        # Delete the share and snapshots
        share_client.delete_share(delete_snapshots=True)

    except ResourceNotFoundError as ex:
        print("ResourceNotFoundError:", ex.message)

Next steps
Now that you've learned how to manipulate Azure Files with Python, follow these links to learn more.
Python Developer Center
Azure Storage Services REST API
Microsoft Azure Storage SDK for Python
Determine which Azure Storage encryption key
model is in use for the storage account
5/20/2022 • 2 minutes to read • Edit Online

Data in your storage account is automatically encrypted by Azure Storage. Azure Storage encryption offers two
options for managing encryption keys at the level of the storage account:
Microsoft-managed keys. By default, Microsoft manages the keys used to encrypt your storage account.
Customer-managed keys. You can optionally choose to manage encryption keys for your storage account.
Customer-managed keys must be stored in Azure Key Vault.
Additionally, you can provide an encryption key at the level of an individual request for some Blob storage
operations. When an encryption key is specified on the request, that key overrides the encryption key that is
active on the storage account. For more information, see Specify a customer-provided key on a request to Blob
storage.
For more information about encryption keys, see Azure Storage encryption for data at rest.

Check the encryption key model for the storage account


To determine whether a storage account is using Microsoft-managed keys or customer-managed keys for
encryption, use one of the following approaches.
Azure portal
PowerShell
Azure CLI

To check the encryption model for the storage account by using the Azure portal, follow these steps:
1. In the Azure portal, navigate to your storage account.
2. Select the Encryption setting and note the setting.
The following image shows a storage account that is encrypted with Microsoft-managed keys:

And the following image shows a storage account that is encrypted with customer-managed keys:
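
If you're scripting, the same information is available from the command line. A minimal Azure CLI sketch (the placeholder names are assumptions) queries the account's key source:

az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --query encryption.keySource \
    --output tsv

The command returns Microsoft.Storage when the account uses Microsoft-managed keys and Microsoft.Keyvault when it uses customer-managed keys.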
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Configure encryption with customer-managed keys
stored in Azure Key Vault
5/20/2022 • 18 minutes to read • Edit Online

Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-
managed keys. For additional control over encryption keys, you can manage your own keys. Customer-
managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
This article shows how to configure encryption with customer-managed keys stored in a key vault by using the
Azure portal, PowerShell, or Azure CLI. To learn how to configure encryption with customer-managed keys
stored in a managed HSM, see Configure encryption with customer-managed keys stored in Azure Key Vault
Managed HSM.

NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.

Configure a key vault


You can use a new or existing key vault to store customer-managed keys. The storage account and key vault may
be in different regions or subscriptions in the same tenant. To learn more about Azure Key Vault, see Azure Key
Vault Overview and What is Azure Key Vault?.
Using customer-managed keys with Azure Storage encryption requires that both soft delete and purge
protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and
cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
Azure portal
PowerShell
Azure CLI

To learn how to create a key vault with the Azure portal, see Quickstart: Create a key vault using the Azure
portal. When you create the key vault, select Enable purge protection , as shown in the following image.
To enable purge protection on an existing key vault, follow these steps:
1. Navigate to your key vault in the Azure portal.
2. Under Settings, choose Properties.
3. In the Purge protection section, choose Enable purge protection .
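
A hedged Azure CLI sketch of the same setup (vault name, resource group, and location are placeholders) creates a new vault with purge protection enabled, or enables it on an existing vault:

az keyvault create \
    --name <vault-name> \
    --resource-group <resource-group> \
    --location <location> \
    --enable-purge-protection true

az keyvault update \
    --name <vault-name> \
    --resource-group <resource-group> \
    --enable-purge-protection true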

Add a key
Next, add a key to the key vault.
Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more
information about supported key types, see About keys.
Azure portal
PowerShell
Azure CLI

To learn how to add a key with the Azure portal, see Quickstart: Set and retrieve a key from Azure Key Vault
using the Azure portal.
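
For reference, a minimal Azure CLI sketch (placeholder names) that creates a 2048-bit RSA key in the vault:

az keyvault key create \
    --vault-name <vault-name> \
    --name <key-name> \
    --kty RSA \
    --size 2048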

Choose a managed identity to authorize access to the key vault


When you enable customer-managed keys for a storage account, you must specify a managed identity that will
be used to authorize access to the key vault that contains the key. The managed identity must have permissions
to access the key in the key vault.
The managed identity that authorizes access to the key vault may be either a user-assigned or system-assigned
managed identity, depending on your scenario:
When you configure customer-managed keys at the time that you create a storage account, you must specify
a user-assigned managed identity.
When you configure customer-managed keys on an existing storage account, you can specify either a user-
assigned managed identity or a system-assigned managed identity.
To learn more about system-assigned versus user-assigned managed identities, see Managed identity types.
Use a user-assigned managed identity to authorize access
A user-assigned managed identity is a standalone Azure resource. To learn how to create and manage a user-assigned managed
identity, see Manage user-assigned managed identities.
Both new and existing storage accounts can use a user-assigned identity to authorize access to the key vault. You
must create the user-assigned identity before you configure customer-managed keys.
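
If you're working from the command line, a user-assigned managed identity can be created with a single call; this sketch uses placeholder names:

az identity create \
    --name <identity-name> \
    --resource-group <resource-group>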
Azure portal
PowerShell
Azure CLI

When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned
identity through the portal user interface. For details, see one of the following sections:
Configure customer-managed keys for a new account
Configure customer-managed keys for an existing account
Use a system-assigned managed identity to authorize access
A system-assigned managed identity is associated with an instance of an Azure service, in this case an Azure
Storage account. You must explicitly assign a system-assigned managed identity to a storage account before you
can use the system-assigned managed identity to authorize access to the key vault that contains your customer-
managed key.
Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New
storage accounts must use a user-assigned identity, if customer-managed keys are configured on account
creation.
Azure portal
PowerShell
Azure CLI

When you configure customer-managed keys with the Azure portal with a system-assigned managed identity,
the system-assigned managed identity is assigned to the storage account for you under the covers. For details,
see Configure customer-managed keys for an existing account.

Configure the key vault access policy


The next step is to configure the key vault access policy. The key vault access policy grants permissions to the
managed identity that will be used to authorize access to the key vault. To learn more about key vault access
policies, see Azure Key Vault Overview and Azure Key Vault security overview.
Azure portal
PowerShell
Azure CLI

To learn how to configure the key vault access policy with the Azure portal, see Assign an Azure Key Vault access
policy.
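
A CLI sketch of an equivalent access policy assignment (the vault name and principal ID are placeholders; get, unwrapKey, and wrapKey are the key permissions Azure Storage encryption requires):

az keyvault set-policy \
    --name <vault-name> \
    --object-id <managed-identity-principal-id> \
    --key-permissions get unwrapKey wrapKey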

Configure customer-managed keys for a new account


When you configure encryption with customer-managed keys for a new storage account, you can choose to
automatically update the key version used for Azure Storage encryption whenever a new version is available in
the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the
key version is manually updated.
You must use an existing user-assigned managed identity to authorize access to the key vault when you
configure customer-managed keys while creating the storage account. The user-assigned managed identity
must have appropriate permissions to access the key vault.

Azure portal
PowerShell
Azure CLI

To configure customer-managed keys for a new storage account with automatic updating of the key version,
follow these steps:
1. In the Azure portal, navigate to the Storage accounts page, and select the Create button to create a
new account.
2. Follow the steps outlined in Create a storage account to fill out the fields on the Basics , Advanced ,
Networking , and Data Protection tabs.
3. On the Encryption tab, indicate for which services you want to enable support for customer-managed
keys in the Enable support for customer-managed keys field.
4. In the Encryption type field, select Customer-managed keys (CMK).
5. In the Encryption key field, choose Select a key vault and key, and specify the key vault and key.
6. For the User-assigned identity field, select an existing user-assigned managed identity.
7. Select Review + create to validate and create the new account.
You can also configure customer-managed keys with manual updating of the key version when you create a new
storage account. Follow the steps described in Configure encryption for manual updating of key versions.

Configure customer-managed keys for an existing account


When you configure encryption with customer-managed keys for an existing storage account, you can choose
to automatically update the key version used for Azure Storage encryption whenever a new version is available
in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the
key version is manually updated.
You can use either a system-assigned or user-assigned managed identity to authorize access to the key vault
when you configure customer-managed keys for an existing storage account.

NOTE
To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle key rotation, so you will
need to manage rotation of the key in the key vault. You can configure key auto-rotation in Azure Key Vault or rotate
your key manually.
Configure encryption for automatic updating of key versions
Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest
key version from the key vault. Azure Storage checks the key vault daily for a new version of the key. When a
new version becomes available, then Azure Storage automatically begins using the latest version of the key for
encryption.

IMPORTANT
Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours
before disabling the older version.

Azure portal
PowerShell
Azure CLI

To configure customer-managed keys for an existing account with automatic updating of the key version in the
Azure portal, follow these steps:
1. Navigate to your storage account.
2. On the Settings blade for the storage account, click Encryption. By default, key management is set to
Microsoft Managed Keys, as shown in the following image.

3. Select the Customer Managed Keys option.


4. Choose the Select from Key Vault option.
5. Select Select a key vault and key .
6. Select the key vault containing the key you want to use. You can also create a new key vault.
7. Select the key from the key vault. You can also create a new key.
8. Select the type of identity to use to authenticate access to the key vault. The options include System-
assigned (the default) or User-assigned . To learn more about each type of managed identity, see
Managed identity types.
a. If you select System-assigned , the system-assigned managed identity for the storage account is
created under the covers, if it does not already exist.
b. If you select User-assigned , then you must select an existing user-assigned identity that has
permissions to access the key vault. To learn how to create a user-assigned identity, see Manage user-
assigned managed identities.

9. Save your changes.


After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled
and displays the key version currently in use for encryption. The portal also displays the type of managed
identity used to authorize access to the key vault and the principal ID for the managed identity.
Configure encryption for manual updating of key versions
If you prefer to manually update the key version, then explicitly specify the version at the time that you
configure encryption with customer-managed keys. In this case, Azure Storage will not automatically update the
key version when a new version is created in the key vault. To use a new key version, you must manually update
the version used for Azure Storage encryption.

Azure portal
PowerShell
Azure CLI

To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the
key URI, including the version. To specify a key as a URI, follow these steps:
1. To locate the key URI in the Azure portal, navigate to your key vault, and select the Keys setting. Select
the desired key, then click the key to view its versions. Select a key version to view the settings for that
version.
2. Copy the value of the Key Identifier field, which provides the URI.
3. In the Encryption key settings for your storage account, choose the Enter key URI option.
4. Paste the URI that you copied into the Key URI field. Include the key version in the URI to pin encryption
to that version; omitting the version from the URI enables automatic updating of the key version instead.
5. Specify the subscription that contains the key vault.
6. Specify either a system-assigned or user-assigned managed identity.
7. Save your changes.

Change the key


You can change the key that you are using for Azure Storage encryption at any time.

Azure portal
PowerShell
Azure CLI

To change the key with the Azure portal, follow these steps:
1. Navigate to your storage account and display the Encryption settings.
2. Select the key vault and choose a new key.
3. Save your changes.

Revoke customer-managed keys


Revoking a customer-managed key removes the association between the storage account and the key vault.

Azure portal
PowerShell
Azure CLI

To revoke customer-managed keys with the Azure portal, disable the key as described in Disable customer-
managed keys.

Disable customer-managed keys


When you disable customer-managed keys, your storage account is once again encrypted with Microsoft-
managed keys.

Azure portal
PowerShell
Azure CLI

To disable customer-managed keys in the Azure portal, follow these steps:


1. Navigate to your storage account and display the Encryption settings.
2. Deselect the checkbox next to the Use your own key setting.

Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Configure encryption with customer-managed keys
stored in Azure Key Vault Managed HSM
5/20/2022 • 3 minutes to read • Edit Online

Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-
managed keys. For additional control over encryption keys, you can manage your own keys. Customer-
managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
Azure Key Vault Managed HSM is a FIPS 140-2 Level 3 validated HSM.
This article shows how to configure encryption with customer-managed keys stored in a managed HSM by
using Azure CLI. To learn how to configure encryption with customer-managed keys stored in a key vault, see
Configure encryption with customer-managed keys stored in Azure Key Vault.

NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.

Assign an identity to the storage account


First, assign a system-assigned managed identity to the storage account. You'll use this managed identity to
grant the storage account permissions to access the managed HSM. For more information about system-
assigned managed identities, see What are managed identities for Azure resources?.
To assign a managed identity using Azure CLI, call az storage account update. Remember to replace the
placeholder values in brackets with your own values:

az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --assign-identity

Assign a role to the storage account for access to the managed HSM
Next, assign the Managed HSM Crypto Service Encryption User role to the storage account's managed
identity so that the storage account has permissions to the managed HSM. Microsoft recommends that you
scope the role assignment to the level of the individual key in order to grant the fewest possible privileges to the
managed identity.
To create the role assignment for the storage account, call az keyvault role assignment create. Remember to
replace the placeholder values in brackets with your own values.

storage_account_principal=$(az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --query identity.principalId \
    --output tsv)

az keyvault role assignment create \
    --hsm-name <hsm-name> \
    --role "Managed HSM Crypto Service Encryption User" \
    --assignee $storage_account_principal \
    --scope /keys/<key-name>

Configure encryption with a key in the managed HSM


Finally, configure Azure Storage encryption with customer-managed keys to use a key stored in the managed
HSM. Supported key types include RSA-HSM keys of sizes 2048, 3072 and 4096. To learn how to create a key in
a managed HSM, see Create an HSM key.
Install Azure CLI 2.12.0 or later to configure encryption to use a customer-managed key in a managed HSM. For
more information, see Install the Azure CLI.
To automatically update the key version for a customer-managed key, omit the key version when you configure
encryption with customer-managed keys for the storage account. For more information about configuring
encryption for automatic key rotation, see Update the key version.
Next, call az storage account update to update the storage account's encryption settings, as shown in the
following example. Include the --encryption-key-source parameter and set it to Microsoft.Keyvault to enable
customer-managed keys for the account. Remember to replace the placeholder values in brackets with your own
values.

hsmurl=$(az keyvault show \
    --hsm-name <hsm-name> \
    --query properties.hsmUri \
    --output tsv)

az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-name <key> \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault $hsmurl

To manually update the version for a customer-managed key, include the key version when you configure
encryption for the storage account:

az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-name <key> \
    --encryption-key-version $key_version \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault $hsmurl

When you manually update the key version, you'll need to update the storage account's encryption settings to
use the new version. First, query for the key vault URI by calling az keyvault show, and for the key version by
calling az keyvault key list-versions. Then call az storage account update to update the storage account's
encryption settings to use the new version of the key, as shown in the previous example.
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Enable infrastructure encryption for double
encryption of data
5/20/2022 • 5 minutes to read • Edit Online

Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES
encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require
higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage
infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario
where one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of
encryption continues to protect your data.
Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an
account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is
encrypted twice — once at the service level and once at the infrastructure level — with two different encryption
algorithms and two different keys.
Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with
Azure Key Vault or Key Vault Managed Hardware Security Model (HSM) (preview). Infrastructure-level
encryption relies on Microsoft-managed keys and always uses a separate key. For more information about key
management with Azure Storage encryption, see About encryption key management.
To doubly encrypt your data, you must first create a storage account or an encryption scope that is configured
for infrastructure encryption. This article describes how to enable infrastructure encryption.

Create an account with infrastructure encryption enabled


To enable infrastructure encryption for a storage account, you must configure a storage account to use
infrastructure encryption at the time that you create the account. Infrastructure encryption cannot be enabled or
disabled after the account has been created. The storage account must be of type general-purpose v2 or
premium block blob.
Azure portal
PowerShell
Azure CLI
Template

To use the Azure portal to create a storage account with infrastructure encryption enabled, follow these steps:
1. In the Azure portal, navigate to the Storage accounts page.
2. Choose the Add button to add a new general-purpose v2 or premium block blob storage account.
3. On the Advanced tab, locate Infrastructure encryption, and select Enabled .
4. Select Review + create to finish creating the storage account.
To verify that infrastructure encryption is enabled for a storage account with the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings, choose Encryption.

Azure Policy provides a built-in policy to require that infrastructure encryption be enabled for a storage account.
For more information, see the Storage section in Azure Policy built-in policy definitions.
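
A minimal CLI sketch of creating such an account (placeholder names; a general-purpose v2 account is assumed):

az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --kind StorageV2 \
    --require-infrastructure-encryption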

Create an encryption scope with infrastructure encryption enabled


If infrastructure encryption is enabled for an account, then any encryption scope created on that account
automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then
you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure
encryption setting for an encryption scope cannot be changed after the scope is created. For more information,
see Create an encryption scope.

Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Configure Microsoft Defender for Storage
5/20/2022 • 4 minutes to read • Edit Online

Microsoft Defender for Storage provides an additional layer of security intelligence that detects unusual and
potentially harmful attempts to access or exploit storage accounts. This layer of protection allows you to address
threats without being a security expert or managing security monitoring systems.
Security alerts are triggered when anomalies in activity occur. These security alerts are integrated with Microsoft
Defender for Cloud, and are also sent via email to subscription administrators, with details of suspicious activity
and recommendations on how to investigate and remediate threats.
The service ingests resource logs of read, write, and delete requests to Blob storage and to Azure Files for threat
detection. To investigate alerts from Microsoft Defender for Cloud, you can view related storage activity using
Storage Analytics Logging. For more information, see Configure logging in Monitor a storage account in the
Azure portal.

Availability
Microsoft Defender for Storage is currently available for Blob storage, Azure Files, and Azure Data Lake Storage
Gen2. Account types that support Microsoft Defender for Storage include general-purpose v2, block blob, and
Blob storage accounts. Microsoft Defender for Storage is available in all public clouds and US government
clouds, but not in other sovereign clouds such as Azure China 21Vianet.
Accounts with hierarchical namespaces enabled for Data Lake Storage support transactions using both the
Azure Blob storage APIs and the Data Lake Storage APIs. Azure file shares support transactions over SMB.
For pricing details, including a free 30 day trial, see the Microsoft Defender for Cloud pricing page.
The following list summarizes the availability of Microsoft Defender for Storage:
Release state:
Blob Storage (general availability)
Azure Files (general availability)
Azure Data Lake Storage Gen2 (general availability)
Clouds: ✔ Commercial clouds
✔ Azure Government
✘ Azure China 21Vianet

Set up Microsoft Defender for Cloud


You can configure Microsoft Defender for Storage in any of several ways, described in the following sections.
Microsoft Defender for Cloud
Portal
Template
Azure Policy
PowerShell
Azure CLI

Microsoft Defender for Storage is built into Microsoft Defender for Cloud. When you enable Microsoft Defender
for Cloud's enhanced security features on your subscription, Microsoft Defender for Storage is automatically
enabled for all of your storage accounts. To enable or disable Defender for Storage for individual storage
accounts under a specific subscription:
1. Launch Microsoft Defender for Cloud in the Azure portal.
2. From Defender for Cloud's main menu, select Environment settings .
3. Select the subscription for which you want to enable or disable Microsoft Defender for Cloud.
4. Select Enable all Microsoft Defender plans to enable Microsoft Defender for Cloud in the
subscription.
5. Under Select Microsoft Defender plans by resource type , locate the Storage row, and select
Enabled in the Plan column.
6. Save your changes.

Microsoft Defender for Storage is now enabled for all storage accounts in this subscription.
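
If you'd rather script the subscription-level plan, a hedged Azure CLI sketch (requires a CLI version that includes the az security commands) is:

az security pricing create \
    --name StorageAccounts \
    --tier Standard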

Explore security anomalies


When storage activity anomalies occur, you receive an email notification with information about the suspicious
security event. Details of the event include:
The nature of the anomaly
The storage account name
The event time
The storage type
The potential causes
The investigation steps
The remediation steps
The email also includes details on possible causes and recommended actions to investigate and mitigate the
potential threat.

You can review and manage your current security alerts from Microsoft Defender for Cloud's Security alerts tile.
Select an alert for details and actions for investigating the current threat and addressing future threats.
Security alerts
Alerts are generated by unusual and potentially harmful attempts to access or exploit storage accounts. For a list
of alerts for Azure Storage, see Alerts for Azure Storage.

Next steps
Introduction to Microsoft Defender for Storage
Microsoft Defender for Cloud
Logs in Azure Storage accounts
Configure Azure Storage firewalls and virtual
networks
5/20/2022 • 27 minutes to read • Edit Online

Azure Storage provides a layered security model. This model enables you to secure and control the level of
access to your storage accounts that your applications and enterprise environments demand, based on the type
and subset of networks or resources used. When network rules are configured, only applications requesting
data over the specified set of networks or through the specified set of Azure resources can access a storage
account. You can limit access to your storage account to requests originating from specified IP addresses, IP
ranges, subnets in an Azure Virtual Network (VNet), or resource instances of some Azure services.
Storage accounts have a public endpoint that is accessible through the internet. You can also create Private
Endpoints for your storage account, which assigns a private IP address from your VNet to the storage account,
and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall
provides access control for the public endpoint of your storage account. You can also use the firewall to block all
access through the public endpoint when using private endpoints. Your storage firewall configuration also
enables select trusted Azure platform services to access the storage account securely.
An application that accesses a storage account when network rules are in effect still requires proper
authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for
blobs and queues, with a valid account access key, or with a SAS token. When a blob container is configured for
anonymous public access, requests to read data in that container do not need to be authorized, but the firewall
rules remain in effect and will block anonymous traffic.

IMPORTANT
Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests
originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests
that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and
so on.
You can grant access to Azure services that operate from within a VNet by allowing traffic from the subnet hosting the
service instance. You can also enable a limited number of scenarios through the exceptions mechanism described below.
To access data from the storage account through the Azure portal, you would need to be on a machine within the trusted
boundary (either IP or VNet) that you set up.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Scenarios
To secure your storage account, you should first configure a rule to deny access to traffic from all networks
(including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access
to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public internet
IP address ranges, enabling connections from specific internet or on-premises clients. This configuration enables
you to build a secure network boundary for your applications.
You can combine firewall rules that allow access from specific virtual networks and from public IP address
ranges on the same storage account. Storage firewall rules can be applied to existing storage accounts, or when
creating new storage accounts.
Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules
to allow traffic for private endpoints of a storage account. The process of approving the creation of a private
endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
Network rules are enforced on all network protocols for Azure storage, including REST and SMB. To access data
using tools such as the Azure portal, Storage Explorer, and AzCopy, explicit network rules must be configured.
Once network rules are applied, they're enforced for all requests. SAS tokens that grant access to a specific IP
address serve to limit the access of the token holder, but don't grant new access beyond configured network
rules.
Virtual machine disk traffic (including mount and unmount operations, and disk IO) is not affected by network
rules. REST access to page blobs is protected by network rules.
Classic storage accounts do not support firewalls and virtual networks.
You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by
creating an exception. This process is documented in the Manage Exceptions section of this article. Firewall
exceptions aren't applicable with managed disks as they're already managed by Azure.

Change the default network access rule


By default, storage accounts accept connections from clients on any network. You can limit access to selected
networks or prevent traffic from all networks and permit access only through a private endpoint.

WARNING
Changing this setting can impact your application's ability to connect to Azure Storage. Make sure to grant access to any
allowed networks or set up access through a private endpoint before you change this setting.

Portal
PowerShell
Azure CLI

1. Go to the storage account you want to secure.


2. Locate the Networking settings under Security + networking .
3. Choose which type of public network access you want to allow.
To allow traffic from all networks, select Enabled from all networks.
To allow traffic only from specific virtual networks, select Enabled from selected virtual
networks and IP addresses.
To block traffic from all networks, select Disabled.
To block traffic from all networks, select Disabled .
4. Select Save to apply your changes.
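
The same default rule can be changed from the command line. This hedged sketch (placeholder names) denies traffic on the public endpoint by default:

az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --default-action Deny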

Grant access from a virtual network


You can configure storage accounts to allow access only from specific subnets. The allowed subnets may belong
to a VNet in the same subscription, or those in a different subscription, including subscriptions belonging to a
different Azure Active Directory tenant.
You can enable a Service endpoint for Azure Storage within the VNet. The service endpoint routes traffic from
the VNet through an optimal path to the Azure Storage service. The identities of the subnet and the virtual
network are also transmitted with each request. Administrators can then configure network rules for the storage
account that allow requests to be received from specific subnets in a VNet. Clients granted access via these
network rules must continue to meet the authorization requirements of the storage account to access the data.
Each storage account supports up to 200 virtual network rules, which may be combined with IP network rules.

IMPORTANT
If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the storage
account. If you create a new subnet by the same name, it will not have access to the storage account. To allow access, you
must explicitly authorize the new subnet in the network rules for the storage account.

Required permissions
To apply a virtual network rule to a storage account, the user must have the appropriate permissions for the
subnets being added. Applying a rule can be performed by a Storage Account Contributor or a user that has
been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action Azure
resource provider operation via a custom Azure role.
Storage account and the virtual networks granted access may be in different subscriptions, including
subscriptions that are a part of a different Azure AD tenant.

NOTE
Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory
tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the
Azure portal, though they may be viewed in the portal.

Available virtual network regions


By default, service endpoints work between virtual networks and service instances in the same Azure region.
When using service endpoints with Azure Storage, service endpoints also work between virtual networks and
service instances in a paired region. If you want to use a service endpoint to grant access to virtual networks in
other regions, you must register the AllowGlobalTagsForStorage feature in the subscription of the virtual
network. This capability is currently in public preview.
Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage
(RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant
access to any RA-GRS instance.
When planning for disaster recovery during a regional outage, you should create the VNets in the paired region
in advance. Enable service endpoints for Azure Storage, with network rules granting access from these
alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
Enabling access to virtual networks in other regions (preview)
To enable access from a virtual network that is located in another region, register the AllowGlobalTagsForStorage
feature in the subscription of the virtual network. All the subnets in the subscription that has the
AllowGlobalTagsForStorage feature enabled will no longer use a public IP address to communicate with any
storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a
source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no
longer have an effect.
IMPORTANT
This capability is currently in PREVIEW.
See the Supplemental Terms of Use for Microsoft Azure Previews for legal terms that apply to Azure features that are in
beta, preview, or otherwise not yet released into general availability.

Portal
PowerShell
Azure CLI

During the preview you must use either PowerShell or the Azure CLI to enable this feature.
Managing virtual network rules
You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.

NOTE
If you registered the AllowGlobalTagsForStorage feature, and you want to enable access to your storage account from
a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its
paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD
tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to
configure access rules for virtual networks in other regions.

Portal
PowerShell
Azure CLI

1. Go to the storage account you want to secure.


2. Select the Networking settings menu.
3. Check that you've selected to allow access from Selected networks.
4. To grant access to a virtual network with a new network rule, under Virtual networks, select Add
existing virtual network, select the Virtual networks and Subnets options, and then select Add. To
create a new virtual network and grant it access, select Add new virtual network. Provide the
information necessary to create the new virtual network, and then select Create.

NOTE
If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets,
you can configure it as part of this operation.
Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection
during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use
PowerShell, CLI, or REST APIs.
Even if you registered the AllowGlobalTagsForStorage feature, subnets in regions other than the region of
the storage account or its paired region aren't shown for selection. If you want to enable access to your storage
account from a virtual network/subnet in a different region, use the instructions in the PowerShell or Azure CLI
tabs.

5. To remove a virtual network or subnet rule, select ... to open the context menu for the virtual network or
subnet, and select Remove .
6. Select Save to apply your changes.
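
For automation, an equivalent pair of CLI calls (placeholder names) first enables the Microsoft.Storage service endpoint on the subnet and then adds the network rule:

az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --service-endpoints Microsoft.Storage

az storage account network-rule add \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name>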

Grant access from an internet IP range


You can use IP network rules to allow access from specific public internet IP address ranges by creating IP
network rules. Each storage account supports up to 200 rules. These rules grant access to specific internet-based
services and on-premises networks and blocks general internet traffic.
The following restrictions apply to IP address ranges.
IP network rules are allowed only for public internet IP addresses.
IP address ranges reserved for private networks (as defined in RFC 1918) aren't allowed in IP rules.
Private networks include addresses that start with 10.*, 172.16.* through 172.31.*, and 192.168.*.
You must provide allowed internet address ranges using CIDR notation in the form 16.17.18.0/24 or as
individual IP addresses like 16.17.18.19.
Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be
configured using individual IP address rules.
Only IPV4 addresses are supported for configuration of storage firewall rules.
IP network rules can't be used in the following cases:
To restrict access to clients in same Azure region as the storage account.
IP network rules have no effect on requests originating from the same Azure region as the storage
account. Use Virtual network rules to allow same-region requests.
To restrict access to clients in a paired region which are in a VNet that has a service endpoint.
To restrict access to Azure services deployed in the same region as the storage account.
Services deployed in the same region as the storage account use private Azure IP addresses for
communication. Thus, you can't restrict access to specific Azure services based on their public outbound
IP address range.
Configuring access from on-premises networks
To grant access from your on-premises networks to your storage account with an IP network rule, you must
identify the internet facing IP addresses used by your network. Contact your network administrator for help.
If you are using ExpressRoute from your premises, for public peering or Microsoft peering, you will need to
identify the NAT IP addresses that are used. For public peering, each ExpressRoute circuit by default uses two
NAT IP addresses applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone.
For Microsoft peering, the NAT IP addresses used are either customer provided or are provided by the service
provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP
firewall setting. To find your public peering ExpressRoute circuit IP addresses, open a support ticket with
ExpressRoute via the Azure portal. Learn more about NAT for ExpressRoute public and Microsoft peering.
Managing IP network rules
You can manage IP network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.

Portal
PowerShell
Azure CLI

1. Go to the storage account you want to secure.


2. Select Networking from the settings menu.
3. Check that you've selected to allow access from Selected networks .
4. To grant access to an internet IP range, enter the IP address or address range (in CIDR format) under
Firewall > Address Range .
5. To remove an IP network rule, select the trash can icon next to the address range.
6. Select Save to apply your changes.
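
The equivalent IP rules can be scripted as well. A minimal Azure CLI sketch, assuming placeholder resource
group and account names and the example range used earlier in this article:

# Deny traffic that doesn't match a network rule.
az storage account update --resource-group myResourceGroup --name mystorageaccount --default-action Deny

# Allow a public internet address range, specified in CIDR notation.
az storage account network-rule add --resource-group myResourceGroup --account-name mystorageaccount --ip-address 16.17.18.0/24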

Grant access from Azure resource instances (preview)


In some cases, an application might depend on Azure resources that cannot be isolated through a virtual
network or an IP address rule. However, you'd still like to secure and restrict storage account access to only your
application's Azure resources. You can configure storage accounts to allow access to specific resource instances
of some Azure services by creating a resource instance rule.
The types of operations that a resource instance can perform on storage account data are determined by the
Azure role assignments of the resource instance. Resource instances must be from the same tenant as your
storage account, but they can belong to any subscription in the tenant.

NOTE
This feature is in public preview and is available in all public cloud regions.

Portal
PowerShell
Azure CLI

You can add or remove resource network rules in the Azure portal.
1. Sign in to the Azure portal to get started.
2. Locate your storage account and display the account overview.
3. Select Networking to display the configuration page for networking.
4. Under Firewalls and virtual networks, for Selected networks, select to allow access.
5. Scroll down to find Resource instances , and in the Resource type dropdown list, choose the resource
type of your resource instance.
6. In the Instance name dropdown list, choose the resource instance. You can also choose to include all
resource instances in the active tenant, subscription, or resource group.
7. Select Save to apply your changes. The resource instance appears in the Resource instances section of
the network settings page.

To remove the resource instance, select the delete icon next to the resource instance.
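
Resource instance rules can also be managed from the command line. A minimal Azure CLI sketch, assuming a
hypothetical Data Factory instance named myFactory in the same tenant (all names and IDs below are
placeholders):

# Allow a specific resource instance, identified by its full resource ID, through the storage firewall.
az storage account network-rule add --resource-group myResourceGroup --account-name mystorageaccount --resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/factories/myFactory" --tenant-id "<tenant-id>"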

Grant access to trusted Azure services


Some Azure services operate from networks that can't be included in your network rules. You can grant a subset
of such trusted Azure services access to the storage account, while maintaining network rules for other apps.
These trusted services will then use strong authentication to securely connect to your storage account.
You can grant access to trusted Azure services by creating a network rule exception. For step-by-step guidance,
see the Manage exceptions section of this article.
When you grant access to trusted Azure services, you grant the following types of access:
Trusted access for select operations to resources that are registered in your subscription.
Trusted access to resources based on a managed identity.

Trusted access for resources registered in your subscription


Resources of some services, when registered in your subscription , can access your storage account in the
same subscription for select operations, such as writing logs or backup. The following table describes each
service and the operations allowed.

SERVICE | RESOURCE PROVIDER NAME | OPERATIONS ALLOWED

Azure Backup | Microsoft.RecoveryServices | Run backups and restores of unmanaged disks in IaaS virtual machines (not required for managed disks). Learn more.
Azure Data Box | Microsoft.DataBox | Enables import of data to Azure using Data Box. Learn more.
Azure DevTest Labs | Microsoft.DevTestLab | Custom image creation and artifact installation. Learn more.
Azure Event Grid | Microsoft.EventGrid | Enable Blob Storage event publishing and allow Event Grid to publish to storage queues. Learn about blob storage events and publishing to queues.
Azure Event Hubs | Microsoft.EventHub | Archive data with Event Hubs Capture. Learn more.
Azure File Sync | Microsoft.StorageSync | Enables you to transform your on-premises file server into a cache for Azure file shares, allowing for multi-site sync, fast disaster recovery, and cloud-side backup. Learn more.
Azure HDInsight | Microsoft.HDInsight | Provision the initial contents of the default file system for a new HDInsight cluster. Learn more.
Azure Import Export | Microsoft.ImportExport | Enables import of data to Azure Storage or export of data from Azure Storage using the Azure Storage Import/Export service. Learn more.
Azure Monitor | Microsoft.Insights | Allows writing of monitoring data to a secured storage account, including resource logs, Azure Active Directory sign-in and audit logs, and Microsoft Intune logs. Learn more.
Azure Networking | Microsoft.Network | Store and analyze network traffic logs, including through the Network Watcher and Traffic Analytics services. Learn more.
Azure Site Recovery | Microsoft.SiteRecovery | Enable replication for disaster recovery of Azure IaaS virtual machines when using firewall-enabled cache, source, or target storage accounts. Learn more.

Trusted access based on a managed identity


The following table lists services that can have access to your storage account data if the resource instances of
those services are given the appropriate permission.
If your account does not have the hierarchical namespace feature enabled on it, you can grant permission, by
explicitly assigning an Azure role to the managed identity for each resource instance. In this case, the scope of
access for the instance corresponds to the Azure role assigned to the managed identity.
You can use the same technique for an account that has the hierarchical namespace feature enabled on it.
However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL)
of any directory or blob contained in the storage account. In that case, the scope of access for the instance
corresponds to the directory or file to which the managed identity has been granted access. You can also
combine Azure roles and ACLs together. To learn more about how to combine them together to grant access, see
Access control model in Azure Data Lake Storage Gen2.
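
Granting that permission typically means assigning an Azure role to the service's managed identity at the scope
of the storage account. A minimal Azure CLI sketch, where the principal ID, role, and resource IDs are
illustrative placeholders rather than a prescription for any particular service:

# Assign a data-plane role to the resource instance's managed identity.
az role assignment create --assignee "<principal-id-of-managed-identity>" --role "Storage Blob Data Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"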

TIP
The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific
resource instances, see the Grant access from Azure resource instances (preview) section of this article.

SERVICE | RESOURCE PROVIDER NAME | PURPOSE

Azure API Management | Microsoft.ApiManagement/service | Enables API Management service access to storage accounts behind firewall using policies. Learn more.
Azure Cache for Redis | Microsoft.Cache/Redis | Allows access to storage accounts through Azure Cache for Redis. Learn more.
Azure Cognitive Search | Microsoft.Search/searchServices | Enables Cognitive Search services to access storage accounts for indexing, processing, and querying.
Azure Cognitive Services | Microsoft.CognitiveService/accounts | Enables Cognitive Services to access storage accounts. Learn more.
Azure Container Registry Tasks | Microsoft.ContainerRegistry/registries | ACR Tasks can access storage accounts when building container images.
Azure Data Factory | Microsoft.DataFactory/factories | Allows access to storage accounts through the ADF runtime.
Azure Data Share | Microsoft.DataShare/accounts | Allows access to storage accounts through Data Share.
Azure DevTest Labs | Microsoft.DevTestLab/labs | Allows access to storage accounts through DevTest Labs.
Azure Event Grid | Microsoft.EventGrid/topics | Allows access to storage accounts through Azure Event Grid.
Azure Healthcare APIs | Microsoft.HealthcareApis/services | Allows access to storage accounts through Azure Healthcare APIs.
Azure IoT Central Applications | Microsoft.IoTCentral/IoTApps | Allows access to storage accounts through Azure IoT Central applications.
Azure IoT Hub | Microsoft.Devices/IotHubs | Allows data from an IoT hub to be written to Blob storage. Learn more.
Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. Learn more.
Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. Learn more.
Azure Media Services | Microsoft.Media/mediaservices | Allows access to storage accounts through Media Services.
Azure Migrate | Microsoft.Migrate/migrateprojects | Allows access to storage accounts through Azure Migrate.
Microsoft Purview | Microsoft.Purview/accounts | Allows Microsoft Purview to access storage accounts.
Azure Remote Rendering | Microsoft.MixedReality/remoteRenderingAccounts | Allows access to storage accounts through Remote Rendering.
Azure Site Recovery | Microsoft.RecoveryServices/vaults | Allows access to storage accounts through Site Recovery.
Azure SQL Database | Microsoft.Sql | Allows writing audit data to storage accounts behind firewall.
Azure Synapse Analytics | Microsoft.Sql | Allows import and export of data from specific SQL databases using the COPY statement or PolyBase (in dedicated pool), or the openrowset function and external tables in serverless pool. Learn more.
Azure Stream Analytics | Microsoft.StreamAnalytics | Allows data from a streaming job to be written to Blob storage. Learn more.
Azure Synapse Analytics | Microsoft.Synapse/workspaces | Enables access to data in Azure Storage from Azure Synapse Analytics.

Grant access to storage analytics


In some cases, access to read resource logs and metrics is required from outside the network boundary. When
configuring trusted services access to the storage account, you can allow read-access for the log files, metrics
tables, or both by creating a network rule exception. For step-by-step guidance, see the Manage exceptions
section below. To learn more about working with storage analytics, see Use Azure Storage analytics to collect
logs and metrics data.

Manage exceptions
You can manage network rule exceptions through the Azure portal, PowerShell, or Azure CLI v2.

Portal
PowerShell
Azure CLI

1. Go to the storage account you want to secure.


2. Select Networking from the settings menu.
3. Check that you've selected to allow access from Selected networks .
4. Under Exceptions , select the exceptions you wish to grant.
5. Select Save to apply your changes.
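
Exceptions can also be set from the command line. A minimal Azure CLI sketch, assuming placeholder names,
that allows trusted Azure services plus read access to logging and metrics from outside the network boundary:

# Bypass network rules for trusted Azure services and for reading logs and metrics.
az storage account update --resource-group myResourceGroup --name mystorageaccount --bypass AzureServices Logging Metrics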

Next steps
Learn more about Azure Network service endpoints in Service endpoints.
Dig deeper into Azure Storage security in Azure Storage security guide.
Require secure transfer to ensure secure
connections

You can configure your storage account to accept requests from secure connections only by setting the Secure
transfer required property for the storage account. When you require secure transfer, any requests originating
from an insecure connection are rejected. Microsoft recommends that you always require secure transfer for all
of your storage accounts.
When secure transfer is required, a call to an Azure Storage REST API operation must be made over HTTPS. Any
request made over HTTP is rejected. By default, the Secure transfer required property is enabled when you
create a storage account.
Azure Policy provides a built-in policy to ensure that secure transfer is required for your storage accounts. For
more information, see the Storage section in Azure Policy built-in policy definitions.
Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the
storage account. Examples of insecure connections include those made over SMB 2.1 or SMB 3.x without
encryption.

NOTE
Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when you're using a
custom domain name.
This secure transfer setting does not apply to the NFS 3.0 protocol. Connections made via NFS 3.0 protocol support
in Azure Blob Storage use TCP, which is not secured, and will succeed.

Require secure transfer in the Azure portal


You can turn on the Secure transfer required property when you create a storage account in the Azure portal.
You can also enable it for existing storage accounts.
Require secure transfer for a new storage account
1. Open the Create storage account pane in the Azure portal.
2. In the Advanced page, select the Enable secure transfer checkbox.
Require secure transfer for an existing storage account
1. Select an existing storage account in the Azure portal.
2. In the storage account menu pane, under Settings , select Configuration .
3. Under Secure transfer required , select Enabled .

Require secure transfer from code


To require secure transfer programmatically, set the enableHttpsTrafficOnly property to True on the storage
account. You can set this property by using the Storage Resource Provider REST API, client libraries, or tools:
REST API
PowerShell
CLI
NodeJS
.NET SDK
Python SDK
Ruby SDK
Require secure transfer with PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

This sample requires the Azure PowerShell module Az version 0.7 or later. Run Get-Module -ListAvailable Az to
find the version. If you need to install or upgrade, see Install Azure PowerShell module.
Run Connect-AzAccount to create a connection with Azure.
Use the following command line to check the setting:

Get-AzStorageAccount -Name "{StorageAccountName}" -ResourceGroupName "{ResourceGroupName}"


StorageAccountName : {StorageAccountName}
Kind : Storage
EnableHttpsTrafficOnly : False
...

Use the following command line to enable the setting:

Set-AzStorageAccount -Name "{StorageAccountName}" -ResourceGroupName "{ResourceGroupName}" -EnableHttpsTrafficOnly $True


StorageAccountName : {StorageAccountName}
Kind : Storage
EnableHttpsTrafficOnly : True
...

Require secure transfer with Azure CLI


To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or
Command Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create an Azure free account before you begin.
Use the following command to check the setting:

az storage account show -g {ResourceGroupName} -n {StorageAccountName}


{
"name": "{StorageAccountName}",
"enableHttpsTrafficOnly": false,
"type": "Microsoft.Storage/storageAccounts"
...
}

Use the following command to enable the setting:


az storage account update -g {ResourceGroupName} -n {StorageAccountName} --https-only true


{
"name": "{StorageAccountName}",
"enableHttpsTrafficOnly": true,
"type": "Microsoft.Storage/storageAccounts"
...
}

Next steps
Security recommendations for Blob storage
Remove SMB 1 on Linux

Many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port
445. This practice originates from security guidance about legacy and deprecated versions of the SMB protocol.
Although SMB 3.x is an internet-safe protocol, older versions of SMB, especially SMB 1, aren't. SMB 1, also known
as CIFS (Common Internet File System), is included with many Linux distributions.
SMB 1 is an outdated, inefficient, and insecure protocol. The good news is that Azure Files does not support SMB
1, and starting with Linux kernel version 4.18, Linux makes it possible to disable SMB 1. We strongly
recommend disabling SMB 1 on your Linux clients before using SMB file shares in production.

Linux distribution status


Starting with Linux kernel 4.18, the SMB kernel module, called cifs for legacy reasons, exposes a new module
parameter (often referred to as parm by various external documentation), called disable_legacy_dialects .
Although introduced in Linux kernel 4.18, some vendors have backported this change to older kernels that they
support. For convenience, the following table details the availability of this module parameter on common Linux
distributions.

DISTRIBUTION | CAN DISABLE SMB 1

Ubuntu 14.04-16.04 | No
Ubuntu 18.04 | Yes
Ubuntu 19.04+ | Yes
Debian 8-9 | No
Debian 10+ | Yes
Fedora 29+ | Yes
CentOS 7 | No
CentOS 8+ | Yes
Red Hat Enterprise Linux 6.x-7.x | No
Red Hat Enterprise Linux 8+ | Yes
openSUSE Leap 15.0 | No
openSUSE Leap 15.1+ | Yes
openSUSE Tumbleweed | Yes
SUSE Linux Enterprise 11.x-12.x | No
SUSE Linux Enterprise 15 | No
SUSE Linux Enterprise 15.1 | No
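
Because the module parameter was introduced in Linux kernel 4.18, it can be useful to confirm your kernel
version first. For example:

uname -r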

You can check to see if your Linux distribution supports the disable_legacy_dialects module parameter via the
following command:

sudo modinfo -p cifs | grep disable_legacy_dialects

This command should output the following message:

disable_legacy_dialects: To improve security it may be helpful to restrict the ability to override the
default dialects (SMB2.1, SMB3 and SMB3.02) on mount with old dialects (CIFS/SMB1 and SMB2) since vers=1.0
(CIFS/SMB1) and vers=2.0 are weaker and less secure. Default: n/N/0 (bool)

Remove SMB 1
Before disabling SMB 1, confirm that the SMB module is not currently loaded on your system (the module is
loaded automatically if you have mounted an SMB share). You can check with the following command, which
should output nothing if SMB is not loaded:

lsmod | grep cifs

To unload the module, first unmount all SMB shares (using the umount command as described above). You can
identify all the mounted SMB shares on your system with the following command:

mount | grep cifs

Once you have unmounted all SMB file shares, it's safe to unload the module. You can do this with the modprobe
command:

sudo modprobe -r cifs

You can then manually load the module with SMB 1 disabled by using the modprobe command:

sudo modprobe cifs disable_legacy_dialects=Y

Finally, you can check that the SMB module has been loaded with the parameter by looking at the loaded
parameters in /sys/module/cifs/parameters :

cat /sys/module/cifs/parameters/disable_legacy_dialects

To persistently disable SMB 1 on Ubuntu and Debian-based distributions, you must create a new file (if you don't
already have custom options for other modules) called /etc/modprobe.d/local.conf with the setting. You can do
this with the following command:
echo "options cifs disable_legacy_dialects=Y" | sudo tee -a /etc/modprobe.d/local.conf > /dev/null

You can verify that this has worked by loading the SMB module:

sudo modprobe cifs


cat /sys/module/cifs/parameters/disable_legacy_dialects

Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
Use Azure Files with Linux
Troubleshooting
Enforce a minimum required version of Transport
Layer Security (TLS) for requests to a storage
account

Communication between a client application and an Azure Storage account is encrypted using Transport Layer
Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients
and services over the Internet. For more information about TLS, see Transport Layer Security.
Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. Azure Storage uses TLS 1.2
on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
Azure Storage accounts permit clients to send and receive data with the oldest version of TLS, TLS 1.0, and
above. To enforce stricter security measures, you can configure your storage account to require that clients send
and receive data with a newer version of TLS. If a storage account requires a minimum version of TLS, then any
requests made with an older version will fail.
This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to
continuously manage secure TLS for your storage accounts.
For information about how to specify a particular version of TLS when sending a request from a client
application, see Configure Transport Layer Security (TLS) for a client application.

Detect the TLS version used by client applications


When you enforce a minimum TLS version for your storage account, you risk rejecting requests from clients that
are sending data with an older version of TLS. To understand how configuring the minimum TLS version may
affect client applications, Microsoft recommends that you enable logging for your Azure Storage account and
analyze the logs after an interval of time to detect what versions of TLS client applications are using.
To log requests to your Azure Storage account and determine the TLS version used by the client, you can use
Azure Storage logging in Azure Monitor (preview). For more information, see Monitor Azure Storage.
Azure Storage logging in Azure Monitor supports using log queries to analyze log data. To query logs, you can
use an Azure Log Analytics workspace. To learn more about log queries, see Tutorial: Get started with Log
Analytics queries.
To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analytics, you must first create a
diagnostic setting that indicates what types of requests and for which storage services you want to log data.
Azure Storage logs in Azure Monitor is in public preview and is available for preview testing in all public cloud
regions. This preview enables logs for blobs (including Azure Data Lake Storage Gen2), files, queues, and tables.
To create a diagnostic setting in the Azure portal, follow these steps:
1. Create a new Log Analytics workspace in the subscription that contains your Azure Storage account. After
you configure logging for your storage account, the logs will be available in the Log Analytics workspace.
For more information, see Create a Log Analytics workspace in the Azure portal.
2. Navigate to your storage account in the Azure portal.
3. In the Monitoring section, select Diagnostic settings (preview) .
4. Select the Azure Storage service for which you want to log requests. For example, choose Blob to log
requests to Blob storage.
5. Select Add diagnostic setting .
6. Provide a name for the diagnostic setting.
7. Under Categor y details , in the log section, choose which types of requests to log. You can log read,
write, and delete requests. For example, choosing StorageRead and StorageWrite will log read and
write requests to the selected service.
8. Under Destination details, select Send to Log Analytics. Select your subscription and the Log
Analytics workspace you created earlier.

After you create the diagnostic setting, requests to the storage account are subsequently logged according to
that setting. For more information, see Create diagnostic setting to collect resource logs and metrics in Azure.
For a reference of fields available in Azure Storage logs in Azure Monitor, see Resource logs (preview).
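
If you prefer to script the diagnostic setting, the following is a minimal Azure CLI sketch. The setting name,
resource IDs, and workspace ID are placeholders; the target resource is the blob service of the storage account:

# Send blob read and write logs to a Log Analytics workspace.
az monitor diagnostic-settings create --name tls-version-audit --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default" --workspace "<log-analytics-workspace-resource-id>" --logs '[{"category": "StorageRead", "enabled": true}, {"category": "StorageWrite", "enabled": true}]'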
Query logged requests by TLS version
Azure Storage logs in Azure Monitor include the TLS version used to send a request to a storage account. Use
the TlsVersion property to check the TLS version of a logged request.
To determine how many requests were made against Blob storage with different versions of TLS over the past
seven days, open your Log Analytics workspace. Next, paste the following query into a new log query and run it.
Remember to replace the placeholder values in brackets with your own values:

StorageBlobLogs
| where TimeGenerated > ago(7d) and AccountName == "<account-name>"
| summarize count() by TlsVersion

The results show the number of requests made with each version of TLS.
Query logged requests by caller IP address and user agent header
Azure Storage logs in Azure Monitor also include the caller IP address and user agent header to help you to
evaluate which client applications accessed the storage account. You can analyze these values to decide whether
client applications must be updated to use a newer version of TLS, or whether it's acceptable to fail a client's
request if it is not sent with the minimum TLS version.
To determine which clients made requests with a version of TLS older than TLS 1.2 over the past seven days,
paste the following query into a new log query and run it. Remember to replace the placeholder values in
brackets with your own values:

StorageBlobLogs
| where TimeGenerated > ago(7d) and AccountName == "<account-name>" and TlsVersion != "TLS 1.2"
| project TlsVersion, CallerIpAddress, UserAgentHeader

Remediate security risks with a minimum version of TLS


When you are confident that traffic from clients using older versions of TLS is minimal, or that it's acceptable to
fail requests made with an older version of TLS, then you can begin enforcement of a minimum TLS version on
your storage account. Requiring that clients use a minimum version of TLS to make requests against a storage
account is part of a strategy to minimize security risks to your data.

IMPORTANT
If you are using a service that connects to Azure Storage, make sure that the service is using the appropriate version of
TLS to send requests to Azure Storage before you set the required minimum version for a storage account.

Configure the minimum TLS version for a storage account


To configure the minimum TLS version for a storage account, set the MinimumTlsVersion property for the
account. This property is available for all storage accounts that are created with the Azure Resource Manager
deployment model. For more information about the Azure Resource Manager deployment model, see Storage
account overview.
The default value of the MinimumTlsVersion property is different depending on how you set it. When you
create a storage account with the Azure portal, the minimum TLS version is set to 1.2 by default. When you
create a storage account with PowerShell, Azure CLI, or an Azure Resource Manager template, the
MinimumTlsVersion property is not set by default and does not return a value until you explicitly set it.
When the MinimumTlsVersion property is not set, its value may be displayed as either null or an empty
string, depending on the context. The storage account will permit requests sent with TLS version 1.0 or greater if
the property is not set.
Portal
PowerShell
Azure CLI
Template

When you create a storage account with the Azure portal, the minimum TLS version is set to 1.2 by default.
To configure the minimum TLS version for an existing storage account with the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings , select Configuration .
3. Under Minimum TLS version , use the drop-down to select the minimum version of TLS required to
access data in this storage account.

NOTE
After you update the minimum TLS version for the storage account, it may take up to 30 seconds before the change is
fully propagated.

Configuring the minimum TLS version requires version 2019-04-01 or later of the Azure Storage resource
provider. For more information, see Azure Storage Resource Provider REST API.
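
For reference, a minimal Azure CLI sketch (placeholder names) that sets and then verifies the minimum TLS
version for an existing account:

# Require TLS 1.2 as the minimum version for requests.
az storage account update --resource-group myResourceGroup --name mystorageaccount --min-tls-version TLS1_2

# Confirm the setting.
az storage account show --resource-group myResourceGroup --name mystorageaccount --query minimumTlsVersion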
Check the minimum required TLS version for multiple accounts
To check the minimum required TLS version across a set of storage accounts with optimal performance, you can
use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph
Explorer, see Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer.
Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays the
minimum TLS version for each account:
resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
| project subscriptionId, resourceGroup, name, minimumTlsVersion

Test the minimum TLS version from a client


To test that the minimum required TLS version for a storage account forbids calls made with an older version,
you can configure a client to use an older version of TLS. For more information about configuring a client to use
a specific version of TLS, see Configure Transport Layer Security (TLS) for a client application.
When a client accesses a storage account using a TLS version that does not meet the minimum TLS version
configured for the account, Azure Storage returns error code 400 (Bad Request) and a message indicating
that the TLS version that was used is not permitted for making requests against this storage account.

NOTE
When you configure a minimum TLS version for a storage account, that minimum version is enforced at the application
layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the
minimum required version when run directly against the storage account endpoint.

Use Azure Policy to audit for compliance


If you have a large number of storage accounts, you may want to perform an audit to make sure that all
accounts are configured for the minimum version of TLS that your organization requires. To audit a set of
storage accounts for their compliance, use Azure Policy. Azure Policy is a service that you can use to create,
assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources
compliant with your corporate standards and service level agreements. For more information, see Overview of
Azure Policy.
Create a policy with an Audit effect
Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource.
The Audit effect creates a warning when a resource is not in compliance, but does not stop the request. For more
information about effects, see Understand Azure Policy effects.
To create a policy with an Audit effect for the minimum TLS version with the Azure portal, follow these steps:
1. In the Azure portal, navigate to the Azure Policy service.
2. Under the Authoring section, select Definitions .
3. Select Add policy definition to create a new policy definition.
4. For the Definition location field, select the More button to specify where the audit policy resource is
located.
5. Specify a name for the policy. You can optionally specify a description and category.
6. Under Policy rule , add the following policy definition to the policyRule section.
{
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Storage/storageAccounts"
},
{
"anyOf": [
{
"field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
"notEquals": "TLS1_2"
},
{
"field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
"exists": "false"
}
]
}
]
},
"then": {
"effect": "audit"
}
}
}

7. Save the policy.


Assign the policy
Next, assign the policy to a resource. The scope of the policy corresponds to that resource and any resources
beneath it. For more information on policy assignment, see Azure Policy assignment structure.
To assign the policy with the Azure portal, follow these steps:
1. In the Azure portal, navigate to the Azure Policy service.
2. Under the Authoring section, select Assignments .
3. Select Assign policy to create a new policy assignment.
4. For the Scope field, select the scope of the policy assignment.
5. For the Policy definition field, select the More button, then select the policy you defined in the previous
section from the list.
6. Provide a name for the policy assignment. The description is optional.
7. Leave Policy enforcement set to Enabled. This setting has no effect on the audit policy.
8. Select Review + create to create the assignment.
View compliance report
After you've assigned the policy, you can view the compliance report. The compliance report for an audit policy
provides information on which storage accounts are not in compliance with the policy. For more information,
see Get policy compliance data.
It may take several minutes for the compliance report to become available after the policy assignment is
created.
To view the compliance report in the Azure portal, follow these steps:
1. In the Azure portal, navigate to the Azure Policy service.
2. Select Compliance .
3. Filter the results for the name of the policy assignment that you created in the previous step. The report
shows how many resources are not in compliance with the policy.
4. You can drill down into the report for additional details, including a list of storage accounts that are not in
compliance.

Use Azure Policy to enforce the minimum TLS version


Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and
standards. To enforce a minimum TLS version requirement for the storage accounts in your organization, you
can create a policy that prevents the creation of a new storage account that sets the minimum TLS requirement
to an older version of TLS than that which is dictated by the policy. This policy will also prevent all configuration
changes to an existing account if the minimum TLS version setting for that account is not compliant with the
policy.
The enforcement policy uses the Deny effect to prevent a request that would create or modify a storage account
so that the minimum TLS version no longer adheres to your organization's standards. For more information
about effects, see Understand Azure Policy effects.
To create a policy with a Deny effect for a minimum TLS version that is less than TLS 1.2, follow the same steps
described in Use Azure Policy to audit for compliance, but provide the following JSON in the policyRule section
of the policy definition:
{
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Storage/storageAccounts"
},
{
"anyOf": [
{
"field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
"notEquals": "TLS1_2"
},
{
"field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
"exists": "false"
}
]
}
]
},
"then": {
"effect": "deny"
}
}
}

After you create the policy with the Deny effect and assign it to a scope, a user cannot create a storage account
with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an
existing storage account that currently requires a minimum TLS version that is older than 1.2. Attempting to do
so results in an error. The required minimum TLS version for the storage account must be set to 1.2 to proceed
with account creation or configuration.
For example, an error occurs if you try to create a storage account with the minimum TLS version set to TLS 1.0
when a policy with a Deny effect requires that the minimum TLS version be set to TLS 1.2.
Permissions necessary to require a minimum version of TLS
To set the MinimumTlsVersion property for the storage account, a user must have permissions to create and
manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions
include the Microsoft.Storage/storageAccounts/write or Microsoft.Storage/storageAccounts/* action.
Built-in roles with this action include:
The Azure Resource Manager Owner role
The Azure Resource Manager Contributor role
The Storage Account Contributor role
These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However,
they include the Microsoft.Storage/storageAccounts/listkeys/action , which grants access to the account
access keys. With this permission, a user can use the account access keys to access all data in a storage account.
Role assignments must be scoped to the level of the storage account or higher to permit a user to require a
minimum version of TLS for the storage account. For more information about role scope, see Understand scope
for Azure RBAC.
Be careful to restrict assignment of these roles only to those who require the ability to create a storage account
or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions
that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see
Best practices for Azure RBAC.

NOTE
The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the
Azure Resource Manager Owner role. The Owner role includes all actions, so a user with one of these administrative roles
can also create and manage storage accounts. For more information, see Classic subscription administrator roles, Azure
roles, and Azure AD administrator roles.

Network considerations
When a client sends a request to a storage account, the client establishes a connection with the public endpoint of
the storage account first, before processing any requests. The minimum TLS version setting is checked after the
connection is established. If the request uses an earlier version of TLS than that specified by the setting, the
connection will continue to succeed, but the request will eventually fail. For more information about public
endpoints for Azure Storage, see Resource URI syntax.

Next steps
Configure Transport Layer Security (TLS) for a client application
Security recommendations for Blob storage
Configure Transport Layer Security (TLS) for a client
application

For security purposes, an Azure Storage account may require that clients use a minimum version of Transport
Layer Security (TLS) to send requests. Calls to Azure Storage will fail if the client is using a version of TLS that is
lower than the minimum required version. For example, if a storage account requires TLS 1.2, then a request
sent by a client that is using TLS 1.1 will fail.
This article describes how to configure a client application to use a particular version of TLS. For information
about how to configure a minimum required version of TLS for an Azure Storage account, see Configure
minimum required version of Transport Layer Security (TLS) for a storage account.

Configure the client TLS version


In order for a client to send a request with a particular version of TLS, the operating system must support that
version.
The following examples show how to set the client's TLS version to 1.2 from PowerShell or .NET. The .NET
Framework used by the client must support TLS 1.2. For more information, see Support for TLS 1.2.
PowerShell
.NET v12 SDK
.NET v11 SDK

The following sample shows how to enable TLS 1.2 in a PowerShell client:

# Set the TLS version used by the PowerShell client to TLS 1.2.
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;

# Create a new container.
$storageAccount = Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName
$ctx = $storageAccount.Context
New-AzStorageContainer -Name "sample-container" -Context $ctx

Verify the TLS version used by a client


To verify that the specified version of TLS was used by the client to send a request, you can use Fiddler or a
similar tool. Open Fiddler to start capturing client network traffic, then execute one of the examples in the
previous section. Look at the Fiddler trace to confirm that the correct version of TLS was used to send the
request.
Next steps
Configure minimum required version of Transport Layer Security (TLS) for a storage account
Security recommendations for Blob storage
Get started with AzCopy

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This
article helps you download AzCopy, connect to your storage account, and then transfer data.

NOTE
AzCopy V10 is the currently supported version of AzCopy.
If you need to use a previous version of AzCopy, see the Use the previous version of AzCopy section of this article.

Download AzCopy
First, download the AzCopy V10 executable file to any directory on your computer. AzCopy V10 is just an
executable file, so there's nothing to install.
Windows 64-bit (zip)
Windows 32-bit (zip)
Linux x86-64 (tar)
Linux ARM64 Preview (tar)
macOS (zip)
These files are compressed as a zip file (Windows and Mac) or a tar file (Linux). To download and decompress
the tar file on Linux, see the documentation for your Linux distribution.
For detailed information on AzCopy releases see the AzCopy release page.

NOTE
If you want to copy data to and from your Azure Table storage service, then install AzCopy version 7.3.

Run AzCopy
For convenience, consider adding the directory location of the AzCopy executable to your system path. That way
you can type azcopy from any directory on your system.
If you choose not to add the AzCopy directory to your path, you'll have to change directories to the location of
your AzCopy executable and type azcopy or .\azcopy in Windows PowerShell command prompts.
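
For example, on Linux or macOS you might append the AzCopy directory to your shell profile. A minimal
sketch, assuming you extracted AzCopy to ~/azcopy_v10 (a placeholder path):

# Make azcopy available from any directory in future shell sessions.
echo 'export PATH=$PATH:~/azcopy_v10' >> ~/.bashrc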
As an owner of your Azure Storage account, you aren't automatically assigned permissions to access data.
Before you can do anything meaningful with AzCopy, you need to decide how you'll provide authorization
credentials to the storage service.

Authorize AzCopy
You can provide authorization credentials by using Azure Active Directory (AD), or by using a Shared Access
Signature (SAS) token.
Use this table as a guide:

STORAGE TYPE | CURRENTLY SUPPORTED METHOD OF AUTHORIZATION

Blob storage | Azure AD & SAS
Blob storage (hierarchical namespace) | Azure AD & SAS
File storage | SAS only


Option 1: Use Azure Active Directory
This option is available for Blob storage only. By using Azure Active Directory, you can provide credentials once
instead of having to append a SAS token to each command.

NOTE
In the current release, if you plan to copy blobs between storage accounts, you'll have to append a SAS token to each
source URL. You can omit the SAS token only from the destination URL. For examples, see Copy blobs between storage
accounts.

To authorize access by using Azure AD, see Authorize access to blobs with AzCopy and Azure Active Directory
(Azure AD).
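
In practice, the Azure AD flow typically starts with the azcopy login command, which walks you through a
device-code sign-in. A minimal sketch; the tenant ID flag is optional and its value here is a placeholder:

# Sign in with Azure AD, then follow the device-code prompt in a browser.
azcopy login --tenant-id=<tenant-id>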
Option 2: Use a SAS token
You can append a SAS token to each source or destination URL that you use in your AzCopy commands.
This example command recursively copies data from a local directory to a blob container. A fictitious SAS token
is appended to the end of the container URL.

azcopy copy "C:\local\path" "https://account.blob.core.windows.net/mycontainer1/?sv=2018-03-


28&ss=bjqt&srt=sco&sp=rwddgcup&se=2019-05-01T05:01:17Z&st=2019-04-
30T21:01:17Z&spr=https&sig=MGCXiyEzbtttkr3ewJIh2AR8KrghSy1DGM9ovN734bQF4%3D" --recursive=true

To learn more about SAS tokens and how to obtain one, see Using shared access signatures (SAS).

NOTE
The Secure transfer required setting of a storage account determines whether the connection to a storage account is
secured with Transport Layer Security (TLS). This setting is enabled by default.

Transfer data
After you've authorized your identity or obtained a SAS token, you can begin transferring data.
To find example commands, see any of these articles.

SERVICE | ARTICLE

Azure Blob Storage | Upload files to Azure Blob Storage
Azure Blob Storage | Download blobs from Azure Blob Storage
Azure Blob Storage | Copy blobs between Azure storage accounts
Azure Blob Storage | Synchronize with Azure Blob Storage
Azure Files | Transfer data with AzCopy and file storage
Amazon S3 | Copy data from Amazon S3 to Azure Storage
Google Cloud Storage | Copy data from Google Cloud Storage to Azure Storage (preview)
Azure Stack storage | Transfer data with AzCopy and Azure Stack storage

Get command help


To see a list of commands, type azcopy -h and then press the ENTER key.
To learn about a specific command, just include the name of the command (For example: azcopy list -h ).
List of commands
The following table lists all AzCopy v10 commands. Each command links to a reference article.

COMMAND | DESCRIPTION

azcopy bench | Runs a performance benchmark by uploading or downloading test data to or from a specified location.
azcopy copy | Copies source data to a destination location.
azcopy doc | Generates documentation for the tool in Markdown format.
azcopy env | Shows the environment variables that can configure AzCopy's behavior.
azcopy jobs | Subcommands related to managing jobs.
azcopy jobs clean | Remove all log and plan files for all jobs.
azcopy jobs list | Displays information on all jobs.
azcopy jobs remove | Remove all files associated with the given job ID.
azcopy jobs resume | Resumes the existing job with the given job ID.
azcopy jobs show | Shows detailed information for the given job ID.
azcopy list | Lists the entities in a given resource.
azcopy login | Logs in to Azure Active Directory to access Azure Storage resources.
azcopy logout | Logs the user out and terminates access to Azure Storage resources.
azcopy make | Creates a container or file share.
azcopy remove | Delete blobs or files from an Azure storage account.
azcopy sync | Replicates the source location to the destination location.

NOTE
AzCopy does not have a command to rename files.

Use in a script
Obtain a static download link
Over time, the AzCopy download link will point to new versions of AzCopy. If your script downloads AzCopy, the
script might stop working if a newer version of AzCopy modifies features that your script depends upon.
To avoid these issues, obtain a static (unchanging) link to the current version of AzCopy. That way, your script
downloads the same exact version of AzCopy each time that it runs.
To obtain the link, run this command:

Linux:

curl -s -D- https://aka.ms/downloadazcopy-v10-linux | grep ^Location

Windows (PowerShell):

(curl https://aka.ms/downloadazcopy-v10-windows -MaximumRedirection 0 -ErrorAction SilentlyContinue).headers.location

NOTE
For Linux, --strip-components=1 on the tar command removes the top-level folder that contains the version name,
and instead extracts the binary directly into the current folder. This allows the script to be updated with a new version of
azcopy by only updating the wget URL.

The URL appears in the output of this command. Your script can then download AzCopy by using that URL.

Linux:

wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux && tar -xf azcopy_v10.tar.gz --strip-components=1

Windows (PowerShell):

Invoke-WebRequest https://azcopyvnext.azureedge.net/release20190517/azcopy_windows_amd64_10.1.2.zip -OutFile azcopyv10.zip <<Unzip here>>

Escape special characters in SAS tokens


In batch files that have the .cmd extension, you'll have to escape the % characters that appear in SAS tokens.
You can do that by adding an additional % character next to existing % characters in the SAS token string.
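
For example, a fictitious SAS token fragment such as sig=abc%2Fdef would be written with doubled percent
signs inside a .cmd batch file. An illustrative sketch (the URL and token are not real):

azcopy copy "C:\local\path" "https://account.blob.core.windows.net/mycontainer?sv=2018-03-28&sig=abc%%2Fdef" --recursive=true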
Run scripts by using Jenkins
If you plan to use Jenkins to run scripts, make sure to place the following command at the beginning of the
script.

/usr/bin/keyctl new_session

Use in Azure Storage Explorer


Storage Explorer uses AzCopy to perform all of its data transfer operations. You can use Storage Explorer if you
want to leverage the performance advantages of AzCopy, but you prefer to use a graphical user interface rather
than the command line to interact with your files.
Storage Explorer uses your account key to perform operations, so after you sign into Storage Explorer, you won't
need to provide additional authorization credentials.

Configure, optimize, and fix


See any of the following resources:
AzCopy configuration settings
Optimize the performance of AzCopy
Troubleshoot AzCopy V10 issues in Azure Storage by using log files

Use a previous version


If you need to use the previous version of AzCopy, see either of the following links:
AzCopy on Windows (v8)
AzCopy on Linux (v7)

Next steps
If you have questions, issues, or general feedback, submit them on the GitHub page.
Transfer data with AzCopy and file storage

AzCopy is a command-line utility that you can use to copy files to or from a storage account. This article
contains example commands that work with Azure Files.
Before you begin, see the Get started with AzCopy article to download AzCopy and familiarize yourself with the
tool.

TIP
The examples in this article enclose path arguments with single quotes (''). Use single quotes in all command shells except
for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments
with double quotes ("") instead of single quotes ('').

Create file shares


You can use the azcopy make command to create a file share. The example in this section creates a file share
named myfileshare .
Syntax
azcopy make 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'

Example

azcopy make 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-


28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'

For detailed reference docs, see azcopy make.

Upload files
You can use the azcopy copy command to upload files and directories from your local computer.
This section contains the following examples:
Upload a file
Upload a directory
Upload the contents of a directory
Upload specific files
TIP
You can tweak your upload operation by using optional flags. Here are a few examples.

Copy access control lists (ACLs) along with the files: --preserve-smb-permissions=[true|false]
Copy SMB property information along with the files: --preserve-smb-info=[true|false]

For a complete list, see options.

NOTE
AzCopy doesn't automatically calculate and store the file's md5 hash code. If you want AzCopy to do that, then append
the --put-md5 flag to each copy command. That way, when the file is downloaded, AzCopy calculates an MD5 hash for
downloaded data and verifies that the MD5 hash stored in the file's Content-md5 property matches the calculated hash.

Upload a file
Syntax
azcopy copy '<local-file-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-
name>/<file-name><SAS-token>'

Example

azcopy copy 'C:\myDirectory\myTextFile.txt'


'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-
28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --preserve-smb-
permissions=true --preserve-smb-info=true

You can also upload a file by using a wildcard symbol (*) anywhere in the file path or file name. For example:
'C:\myDirectory\*.txt' , or C:\my*\*.txt .

Upload a directory
This example copies a directory (and all of the files in that directory) to a file share. The result is a directory in
the file share by the same name.
Syntax
azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>
<SAS-token>' --recursive

Example

azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-


28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-
smb-permissions=true --preserve-smb-info=true

To copy to a directory within the file share, just specify the name of that directory in your command string.
Example
azcopy copy 'C:\myDirectory'
'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-
28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-
smb-permissions=true --preserve-smb-info=true

If you specify the name of a directory that does not exist in the file share, AzCopy creates a new directory by that
name.
Upload the contents of a directory
You can upload the contents of a directory without copying the containing directory itself by using the wildcard
symbol (*).
Syntax
azcopy copy '<local-directory-path>/*' 'https://<storage-account-name>.file.core.windows.net/<file-share-
name>/<directory-path><SAS-token>'

Example

azcopy copy 'C:\myDirectory\*'


'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-
28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --preserve-smb-
permissions=true --preserve-smb-info=true

NOTE
Append the --recursive flag to upload files in all sub-directories.

Upload specific files


You can upload specific files by using complete file names, partial names with wildcard characters (*), or by
using dates and times.
Specify multiple complete file names
Use the azcopy copy command with the --include-path option. Separate individual file names by using a
semicolon ( ; ).
Syntax
azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-
directory-name><SAS-token>' --include-path <semicolon-separated-file-list>

Example

azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-


28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-path
'photos;documents\myFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true

In this example, AzCopy transfers the C:\myDirectory\photos directory and the
C:\myDirectory\documents\myFile.txt file. You need to include the --recursive option to transfer all files in the
C:\myDirectory\photos directory.
You can also exclude files by using the --exclude-path option. To learn more, see azcopy copy reference docs.
Use wildcard characters
Use the azcopy copy command with the --include-pattern option. Specify partial names that include the
wildcard characters. Separate names by using a semicolon ( ; ).
Syntax
azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-
directory-name><SAS-token>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>

Example

azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-


28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-pattern
'myFile*.txt;*.pdf*' --preserve-smb-permissions=true --preserve-smb-info=true

You can also exclude files by using the --exclude-pattern option. To learn more, see azcopy copy reference docs.
The --include-pattern and --exclude-pattern options apply only to filenames and not to the path. If you want
to copy all of the text files that exist in a directory tree, use the --recursive option to get the entire directory
tree, and then use the --include-pattern and specify *.txt to get all of the text files.
Upload files that were modified after a date and time
Use the azcopy copy command with the --include-after option. Specify a date and time in ISO 8601 format
(For example: 2020-08-19T15:04:00Z ).
Syntax
azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-
directory-name><SAS-token>' --include-after <Date-Time-in-ISO-8601-format>

Example

azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-after '2020-08-19T15:04:00Z' --preserve-smb-permissions=true --preserve-smb-info=true

For detailed reference, see the azcopy copy reference docs.

Download files
You can use the azcopy copy command to download files, directories, and file shares to your local computer.
This section contains the following examples:
Download a file
Download a directory
Download the contents of a directory
Download specific files
TIP
You can tweak your download operation by using optional flags. Here are a few examples:

SCENARIO                                                      FLAG

Copy access control lists (ACLs) along with the files.        --preserve-smb-permissions=[true|false]

Copy SMB property information along with the files.           --preserve-smb-info=[true|false]

Automatically decompress files.                               --decompress

For a complete list, see options.

NOTE
If the Content-md5 property value of a file contains a hash, AzCopy calculates an MD5 hash for downloaded data and
verifies that the MD5 hash stored in the file's Content-md5 property matches the calculated hash. If these values don't
match, the download fails unless you override this behavior by appending --check-md5=NoCheck or
--check-md5=LogOnly to the copy command.
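As a sketch (placeholder file path and SAS token), the following command downloads a file while skipping MD5 validation entirely:

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt<SAS-token>' 'C:\myDirectory\myTextFile.txt' --check-md5=NoCheck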

Download a file
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-token>'
'<local-file-path>'

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true

Download a directory
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path><SAS-
token>' '<local-directory-path>' --recursive

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

This example results in a directory named C:\myDirectory\myFileShareDirectory that contains all of the
downloaded files.
Download the contents of a directory
You can download the contents of a directory without copying the containing directory itself by using the
wildcard symbol (*).
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/*<SAS-token>' '<local-
directory-path>/'

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory/*?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory' --preserve-smb-permissions=true --preserve-smb-info=true

NOTE
Append the --recursive flag to download files in all sub-directories.

Download specific files


You can download specific files by using complete file names, partial names with wildcard characters (*), or by
using dates and times.
Specify multiple complete file names
Use the azcopy copy command with the --include-path option. Separate individual file names by using a
semicolon ( ; ).
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>'
'<local-directory-path>' --include-path <semicolon-separated-file-list>

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-path 'photos;documents\myFile.txt' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

In this example, AzCopy transfers the https://mystorageaccount.file.core.windows.net/myFileShare/myDirectory/photos directory and the https://mystorageaccount.file.core.windows.net/myFileShare/myDirectory/documents/myFile.txt file. Include the --recursive option to transfer all files in the https://mystorageaccount.file.core.windows.net/myFileShare/myDirectory/photos directory.
You can also exclude files by using the --exclude-path option. To learn more, see azcopy copy reference docs.
Use wildcard characters
Use the azcopy copy command with the --include-pattern option. Specify partial names that include the
wildcard characters. Separate names by using a semicolon ( ; ).
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>'
'<local-directory-path>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-pattern 'myFile*.txt;*.pdf*' --preserve-smb-permissions=true --preserve-smb-info=true

You can also exclude files by using the --exclude-pattern option. To learn more, see azcopy copy reference docs.
The --include-pattern and --exclude-pattern options apply only to filenames and not to the path. If you want
to copy all of the text files that exist in a directory tree, use the --recursive option to get the entire directory
tree, and then use the --include-pattern and specify *.txt to get all of the text files.
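For example, the following sketch (placeholder account, directory, and SAS token) downloads every text file in the directory tree:

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myDirectory<SAS-token>' 'C:\myDirectory' --recursive --include-pattern '*.txt'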
Download files that were modified after a date and time
Use the azcopy copy command with the --include-after option. Specify a date and time in ISO-8601 format
(For example: 2020-08-19T15:04:00Z ).
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name>/*<SAS-
token>' '<local-directory-path>' --include-after <Date-Time-in-ISO-8601-format>

Example

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/*?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-after '2020-08-19T15:04:00Z' --preserve-smb-permissions=true --preserve-smb-info=true

For detailed reference, see the azcopy copy reference docs.


Download from a share snapshot
You can download a specific version of a file or directory by referencing the DateTime value of a share
snapshot. To learn more about share snapshots see Overview of share snapshots for Azure Files.
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-path-or-directory-
name><SAS-token>&sharesnapshot=<DateTime-of-snapshot>' '<local-file-or-directory-path>'

Example (Download a file)

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt' --preserve-smb-permissions=true --preserve-smb-info=true

Example (Download a directory)

azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Copy files between storage accounts


You can use AzCopy to copy files to other storage accounts. The copy operation is synchronous, so when the command returns, all files have been copied.
AzCopy uses server-to-server APIs, so data is copied directly between storage servers. These copy operations
don't use the network bandwidth of your computer. You can increase the throughput of these operations by
setting the value of the AZCOPY_CONCURRENCY_VALUE environment variable. To learn more, see Increase
Concurrency.
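For example, you might set the variable before starting a large copy job; the value of 32 below is only illustrative, not a recommendation:

Windows (PowerShell): $env:AZCOPY_CONCURRENCY_VALUE = "32"
Linux and macOS: export AZCOPY_CONCURRENCY_VALUE=32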
You can also copy specific versions of a file by referencing the DateTime value of a share snapshot. To learn more about share snapshots, see Overview of share snapshots for Azure Files.
This section contains the following examples:
Copy a file to another storage account
Copy a directory to another storage account
Copy a file share to another storage account
Copy all file shares, directories, and files to another storage account

TIP
You can tweak your copy operation by using optional flags. Here are a few examples.

SCENARIO                                                      FLAG

Copy access control lists (ACLs) along with the files.        --preserve-smb-permissions=[true|false]

Copy SMB property information along with the files.           --preserve-smb-info=[true|false]

For a complete list, see options.

Copy a file to another storage account


Syntax
azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-
token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-
token>'

Example

azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --preserve-smb-permissions=true --preserve-smb-info=true

Example (share snapshot)

azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'https://mydestinationaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --preserve-smb-permissions=true --preserve-smb-info=true

Copy a directory to another storage account


Syntax
azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path>
<SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'
--recursive

Example
azcopy copy 'https://mysourceaccount.file.core.windows.net/myFileShare/myFileDirectory?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D'
'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-
permissions=true --preserve-smb-info=true

Example (share snapshot)

azcopy copy 'https://mysourceaccount.file.core.windows.net/myFileShare/myFileDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Copy a file share to another storage account


Syntax
azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'
'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive

Example

azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Example (share snapshot)

azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Copy all file shares, directories, and files to another storage account
Syntax
azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<SAS-token>' --recursive

Example

azcopy copy 'https://mysourceaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Example (share snapshot)

azcopy copy 'https://mysourceaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'https://mydestinationaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Synchronize files
You can synchronize the contents of a local file system with a file share or synchronize the contents of a file
share with another file share. You can also synchronize the contents of a directory in a file share with the
contents of a directory that is located in another file share. Synchronization is one way. In other words, you
choose which of these two endpoints is the source and which one is the destination. Synchronization also uses server-to-server APIs.

NOTE
Currently, this scenario is supported for accounts that have enabled hierarchical namespace via the blob endpoint.

WARNING
AzCopy sync is supported but not fully recommended for Azure Files. AzCopy sync doesn't support differential copies at
scale, and some file fidelity might be lost. To learn more, see Migrate to Azure file shares.

Guidelines
The sync command compares file names and last modified timestamps. Set the --delete-destination
optional flag to a value of true or prompt to delete files in the destination directory if those files no
longer exist in the source directory.
If you set the --delete-destination flag to true , AzCopy deletes files without providing a prompt. If you
want a prompt to appear before AzCopy deletes a file, set the --delete-destination flag to prompt .
If you plan to set the --delete-destination flag to prompt or false , consider using the copy command instead of the sync command and set the --overwrite parameter to ifSourceNewer (see the example after these guidelines). The copy command consumes less memory and incurs lower billing costs because a copy operation doesn't have to index the source or destination prior to moving files.
The machine on which you run the sync command should have an accurate system clock because the last
modified times are critical in determining whether a file should be transferred. If your system has
significant clock skew, avoid modifying files at the destination too close to the time that you plan to run a
sync command.
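As an example of this guidance, the following sketch (placeholder account and SAS token) copies only files that are newer at the source, without indexing the destination first:

azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare<SAS-token>' --recursive --overwrite=ifSourceNewer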
TIP
You can tweak your sync operation by using optional flags. Here are a few examples.

SCENARIO                                                      FLAG

Copy access control lists (ACLs) along with the files.        --preserve-smb-permissions=[true|false]

Copy SMB property information along with the files.           --preserve-smb-info=[true|false]

Exclude files based on a pattern.                             --exclude-path

Specify how detailed you want your sync-related log entries to be.    --log-level=[WARNING|ERROR|INFO|NONE]

For a complete list, see options.

Update a file share with changes to a local file system


In this case, the file share is the destination, and the local file system is the source.

TIP
This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the
Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with
double quotes ("") instead of single quotes ('').

Syntax
azcopy sync '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>
<SAS-token>' --recursive

Example

azcopy sync 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileShare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive

Update a local file system with changes to a file share


In this case, the local file system is the destination, and the file share is the source.

TIP
This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the
Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with
double quotes ("") instead of single quotes ('').

Syntax
azcopy sync 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'
'C:\myDirectory' --recursive

Example
azcopy sync 'https://mystorageaccount.file.core.windows.net/myfileShare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --recursive

Update a file share with changes to another file share


The first file share that appears in this command is the source. The second one is the destination.
Syntax
azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'
'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive

Example

azcopy sync 'https://mysourceaccount.file.core.windows.net/myfileShare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Update a directory with changes to a directory in another file share


The first directory that appears in this command is the source. The second one is the destination.
Syntax
azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-name>
<SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-
name><SAS-token>' --recursive

Example

azcopy sync 'https://mysourceaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true

Update a file share to match the contents of a share snapshot


The first file share that appears in this command is the source. At the end of the URI, append the string
&sharesnapshot= followed by the DateTime value of the snapshot.

Syntax
azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>&sharesnapshot=<snapshot-ID>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive

Example
azcopy sync 'https://mysourceaccount.file.core.windows.net/myfileShare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-03-
03T20%3A24%3A13.0000000Z' 'https://mydestinationaccount.file.core.windows.net/myfileshare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-
permissions=true --preserve-smb-info=true

To learn more about share snapshots, see Overview of share snapshots for Azure Files.

Next steps
Find more examples in any of these articles:
Get started with AzCopy
Transfer data
See these articles to configure settings, optimize performance, and troubleshoot issues:
AzCopy configuration settings
Optimize the performance of AzCopy
Troubleshoot AzCopy V10 issues in Azure Storage by using log files
Find errors and resume jobs by using log and plan files in AzCopy

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This
article helps you use logs to diagnose errors, and then use plan files to resume jobs. This article also shows how
to configure log and plan files by changing their verbosity level, and the default location where they are stored.

NOTE
If you're looking for content to help you get started with AzCopy, see Get started with AzCopy. This article applies to
AzCopy V10, as this is the currently supported version of AzCopy. If you need to use a previous version of AzCopy, see
Use the previous version of AzCopy.

Log and plan files


AzCopy creates log and plan files for every job. You can use these logs to investigate and troubleshoot any
potential problems.
The logs will contain the status of failure ( UPLOADFAILED , COPYFAILED , and DOWNLOADFAILED ), the full path, and the reason for the failure.
By default, the log and plan files are located in the %USERPROFILE%\.azcopy directory on Windows or the $HOME/.azcopy directory on Mac and Linux, but you can change that location.

The relevant error isn't necessarily the first error that appears in the file. For errors such as network errors,
timeouts and Server Busy errors, AzCopy will retry up to 20 times and usually the retry process succeeds. The
first error that you see might be something harmless that was successfully retried. So instead of looking at the
first error in the file, look for the errors that are near UPLOADFAILED , COPYFAILED , or DOWNLOADFAILED .

IMPORTANT
When submitting a request to Microsoft Support (or troubleshooting the issue involving any third party), share the
redacted version of the command you want to execute. This ensures the SAS isn't accidentally shared with anybody. You
can find the redacted version at the start of the log file.

Review the logs for errors


The following command will get all errors with UPLOADFAILED status from the
04dc9ca9-158f-7945-5933-564021086c79 log:

Windows (PowerShell)

Select-String UPLOADFAILED .\04dc9ca9-158f-7945-5933-564021086c79.log

Linux
grep UPLOADFAILED ./04dc9ca9-158f-7945-5933-564021086c79.log

View and resume jobs


Each transfer operation will create an AzCopy job. Use the following command to view the history of jobs:

azcopy jobs list

To view the job statistics, use the following command:

azcopy jobs show <job-id>

To filter the transfers by status, use the following command:

azcopy jobs show <job-id> --with-status=Failed

TIP
The value of the --with-status flag is case-sensitive.

Use the following command to resume a failed/canceled job. Specify the job's identifier along with the SAS tokens, because SAS tokens aren't persisted for security reasons:

azcopy jobs resume <job-id> --source-sas="<sas-token>" --destination-sas="<sas-token>"

TIP
Enclose path arguments such as the SAS token with single quotes (''). Use single quotes in all command shells except for
the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments
with double quotes ("") instead of single quotes ('').

When you resume a job, AzCopy looks at the job plan file. The plan file lists all the files that were identified for
processing when the job was first created. When you resume a job, AzCopy will attempt to transfer all of the
files that are listed in the plan file which weren't already transferred.

Change the location of plan files


Use any of these commands.

OPERATING SYSTEM       COMMAND

Windows                PowerShell: $env:AZCOPY_JOB_PLAN_LOCATION="<value>"
                       In a command prompt, use: set AZCOPY_JOB_PLAN_LOCATION=<value>

Linux                  export AZCOPY_JOB_PLAN_LOCATION=<value>

macOS                  export AZCOPY_JOB_PLAN_LOCATION=<value>


Use the azcopy env command to check the current value of this variable. If the value is blank, then plan files are written to the default location.
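For example, on Windows you might set the variable to a hypothetical path and then confirm it:

$env:AZCOPY_JOB_PLAN_LOCATION="D:\azcopy\plans"
azcopy env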

Change the location of log files


Use any of these commands.

OPERATING SYSTEM       COMMAND

Windows                PowerShell: $env:AZCOPY_LOG_LOCATION="<value>"
                       In a command prompt, use: set AZCOPY_LOG_LOCATION=<value>

Linux                  export AZCOPY_LOG_LOCATION=<value>

macOS                  export AZCOPY_LOG_LOCATION=<value>

Use the azcopy env command to check the current value of this variable. If the value is blank, then logs are written to the default location.

Change the default log level


By default, the AzCopy log level is set to INFO . If you would like to reduce the log verbosity to save disk space, override this setting by using the --log-level option.
Available log levels are: DEBUG , INFO , WARNING , ERROR , and NONE .
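For example, the following sketch (placeholder paths and SAS token) records only errors in the log:

azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare<SAS-token>' --recursive --log-level=ERROR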

Remove plan and log files


If you want to remove all plan and log files from your local machine to save disk space, use the
azcopy jobs clean command.

To remove the plan and log files associated with only one job, use azcopy jobs rm <job-id> . Replace the
<job-id> placeholder in this example with the job ID of the job.

See also
Get started with AzCopy
Troubleshoot Azure Files problems in Windows (SMB)

This article lists common problems that are related to Microsoft Azure Files when you connect from Windows
clients. It also provides possible causes and resolutions for these problems. In addition to the troubleshooting
steps in this article, you can also use AzFileDiagnostics to ensure that the Windows client environment has
correct prerequisites. AzFileDiagnostics automates detection of most of the symptoms mentioned in this article
and helps set up your environment to get optimal performance.

IMPORTANT
The content of this article only applies to SMB shares. For details on NFS shares, see Troubleshoot Azure NFS file shares.

Applies to
FILE SHARE TYPE                                SMB    NFS

Standard file shares (GPv2), LRS/ZRS           Yes    No

Standard file shares (GPv2), GRS/GZRS          Yes    No

Premium file shares (FileStorage), LRS/ZRS     Yes    No

Error 5 when you mount an Azure file share


When you try to mount a file share, you might receive the following error:
System error 5 has occurred. Access is denied.
Cause 1: Unencrypted communication channel
For security reasons, connections to Azure file shares are blocked if the communication channel isn't encrypted
and if the connection attempt isn't made from the same datacenter where the Azure file shares reside. If the
Secure transfer required setting is enabled on the storage account, unencrypted connections within the same
datacenter are also blocked. An encrypted communication channel is provided only if the end-user's client OS
supports SMB encryption.
Windows 8, Windows Server 2012, and later versions of each system negotiate requests that include SMB 3.x,
which supports encryption.
Solution for cause 1
1. Connect from a client that supports SMB encryption (Windows 8/Windows Server 2012 or later).
2. Connect from a virtual machine in the same datacenter as the Azure storage account that is used for the
Azure file share.
3. Verify the Secure transfer required setting is disabled on the storage account if the client does not support
SMB encryption.
Cause 2: Virtual network or firewall rules are enabled on the storage account
Network traffic is denied if virtual network (VNET) and firewall rules are configured on the storage account, unless the client IP address or virtual network is allowlisted.
Solution for cause 2
Verify that virtual network and firewall rules are configured properly on the storage account. To test if virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account to Allow access from all networks . To learn more, see Configure Azure Storage firewalls and virtual networks.
Cause 3: Share-level permissions are incorrect when using identity-based authentication
If end users are accessing the Azure file share using Active Directory (AD) or Azure Active Directory Domain Services (Azure AD DS) authentication, access to the file share fails with an "Access is denied" error if share-level permissions are incorrect.
Solution for cause 3
Validate that permissions are configured correctly:
Active Directory (AD) : see Assign share-level permissions to an identity.
Share-level permission assignments are supported for groups and users that have been synced from Active Directory (AD) to Azure Active Directory (Azure AD) using Azure AD Connect. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups.
Azure Active Directory Domain Services (Azure AD DS) : see Assign access permissions to an identity.

Error 53, Error 67, or Error 87 when you mount or unmount an Azure
file share
When you try to mount a file share from on-premises or from a different datacenter, you might receive the
following errors:
System error 53 has occurred. The network path was not found.
System error 67 has occurred. The network name cannot be found.
System error 87 has occurred. The parameter is incorrect.
Cause 1: Port 445 is blocked
System error 53 or system error 67 can occur if port 445 outbound communication to an Azure Files datacenter
is blocked. To see the summary of ISPs that allow or disallow access from port 445, go to TechNet.
To check if your firewall or ISP is blocking port 445, use the AzFileDiagnostics tool or Test-NetConnection
cmdlet.
To use the Test-NetConnection cmdlet, the Azure PowerShell module must be installed; see Install Azure PowerShell module for more information. Remember to replace <your-storage-account-name> and <your-resource-group-name> with the relevant names for your storage account.
$resourceGroupName = "<your-resource-group-name>"
$storageAccountName = "<your-storage-account-name>"

# This command requires you to be logged into your Azure account with the subscription
# that contains your storage account selected. If you haven't already logged in, run:
# Connect-AzAccount -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName

# The ComputerName, or host, is <storage-account>.file.core.windows.net for Azure Public regions.
# $storageAccount.Context.FileEndpoint is used because non-Public Azure regions, such as sovereign clouds
# or Azure Stack deployments, will have different hosts for Azure file shares (and other storage resources).
Test-NetConnection -ComputerName ([System.Uri]::new($storageAccount.Context.FileEndPoint).Host) -Port 445

If the connection was successful, you should see the following output:

ComputerName : <your-storage-account-name>
RemoteAddress : <storage-account-ip-address>
RemotePort : 445
InterfaceAlias : <your-network-interface>
SourceAddress : <your-ip-address>
TcpTestSucceeded : True

NOTE
The above command returns the current IP address of the storage account. This IP address is not guaranteed to remain
the same, and may change at any time. Do not hardcode this IP address into any scripts, or into a firewall configuration.

Solution for cause 1


Solution 1 — Use Azure File Sync
Azure File Sync can transform your on-premises Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. Azure File Sync works over port 443 and can thus be used as a workaround to access Azure Files from clients that have port 445 blocked. Learn how to set up Azure File Sync.
Solution 2 — Use VPN
By setting up a VPN to your specific storage account, the traffic goes through a secure tunnel as opposed to over the internet. Follow the instructions to set up a VPN to access Azure Files from Windows.
Solution 3 — Unblock port 445 with help of your ISP/IT Admin
Work with your IT department or ISP to open port 445 outbound to Azure IP ranges.
Solution 4 — Use REST API-based tools like Storage Explorer/PowerShell
Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard TCP). There are various tools written using the REST API that provide a rich UI experience. Storage Explorer is one of them. Download and install Storage Explorer and connect to your file share backed by Azure Files. You can also use PowerShell, which also uses the REST API.
Cause 2: NTLMv1 is enabled
System error 53 or system error 87 can occur if NTLMv1 communication is enabled on the client. Azure Files
supports only NTLMv2 authentication. Having NTLMv1 enabled creates a less-secure client. Therefore,
communication is blocked for Azure Files.
To determine whether this is the cause of the error, verify that the following registry subkey is not set to a value
less than 3:
HKLM\SYSTEM\CurrentControlSet\Control\Lsa > LmCompatibilityLevel
For more information, see the LmCompatibilityLevel topic on TechNet.
Solution for cause 2
Revert the LmCompatibilityLevel value to the default value of 3 in the following registry subkey:
HKLM\SYSTEM\CurrentControlSet\Control\Lsa
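As a sketch, you can check and reset the value from an elevated PowerShell prompt; a reboot may be required for the change to take effect:

# Check the current value (the property may not exist if it was never changed)
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name "LmCompatibilityLevel" -ErrorAction SilentlyContinue

# Revert to the default value of 3
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name "LmCompatibilityLevel" -Value 3 -Type DWord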

Error 1816 - Not enough quota is available to process this command


Cause
Error 1816 happens when you reach the upper limit of concurrent open handles that are allowed for a file or
directory on the Azure file share. For more information, see Azure Files scale targets.
Solution
Reduce the number of concurrent open handles by closing some handles, and then retry. For more information,
see Microsoft Azure Storage performance and scalability checklist.
To view open handles for a file share, directory or file, use the Get-AzStorageFileHandle PowerShell cmdlet.
To close open handles for a file share, directory or file, use the Close-AzStorageFileHandle PowerShell cmdlet.

NOTE
The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or
later. To install the latest Az PowerShell module, see Install the Azure PowerShell module.
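For example, assuming a share named myfileshare and a storage context already stored in $storageAccount.Context (both placeholders), you could list and then close all open handles:

# List open handles across the entire share
Get-AzStorageFileHandle -ShareName "myfileshare" -Recursive -Context $storageAccount.Context

# Close all open handles across the entire share
Close-AzStorageFileHandle -ShareName "myfileshare" -CloseAll -Recursive -Context $storageAccount.Context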

Error "No access" when you try to access or delete an Azure File
Share
When you try to access or delete an Azure file share in the portal, you might receive the following error:
No access
Error code: 403
Cause 1: Virtual network or firewall rules are enabled on the storage account
Solution for cause 1
Verify virtual network and firewall rules are configured properly on the storage account. To test if virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account to Allow access from all networks . To learn more, see Configure Azure Storage firewalls and virtual networks.
Cause 2: Your user account does not have access to the storage account
Solution for cause 2
Browse to the storage account where the Azure file share is located, click Access control (IAM) and verify your
user account has access to the storage account. To learn more, see How to secure your storage account with
Azure role-based access control (Azure RBAC).

Unable to modify or delete an Azure file share (or share snapshots) because of locks or leases
Azure Files provides two ways to prevent accidental modification or deletion of Azure file shares and share
snapshots:
Storage account resource locks : All Azure resources, including the storage account, support resource locks. Locks might be put on the storage account by an administrator, or by value-added services such as Azure Backup. Two variations of resource locks exist: modify, which prevents all modifications to the storage account and its resources, and delete, which only prevents deletion of the storage account and its
resources. When modifying or deleting shares through the Microsoft.Storage resource provider,
resource locks are enforced on Azure file shares and share snapshots. Most portal operations, Azure
PowerShell cmdlets for Azure Files with Rm in the name (i.e. Get-AzRmStorageShare ), and Azure CLI
commands in the share-rm command group (i.e. az storage share-rm list ) use the Microsoft.Storage
resource provider. Some tools and utilities such as Storage Explorer, legacy Azure Files PowerShell
management cmdlets without Rm in the name (i.e. Get-AzStorageShare ), and legacy Azure Files CLI
commands under the share command group (i.e. az storage share list ) use legacy APIs in the
FileREST API that bypass the Microsoft.Storage resource provider and resource locks. For more
information on legacy management APIs exposed in the FileREST API, see control plane in Azure Files.
Share/share snapshot leases : Share leases are a kind of proprietary lock for Azure file shares and file
share snapshots. Leases might be put on individual Azure file shares or file share snapshots by
administrators by calling the API through a script, or by value-added services such as Azure Backup.
When a lease is put on an Azure file share or file share snapshot, modifying or deleting the file
share/share snapshot can be done with the lease ID. Admins can also release the lease before
modification operations, which requires the lease ID, or break the lease, which does not require the lease
ID. For more information on share leases, see lease share.
Since resource locks and leases might interfere with intended administrator operations on your storage
account/Azure file shares, you might wish to remove any resource locks/leases that have been put on your
resources manually or automatically by value-added services such as Azure Backup. The following script
removes all resource locks and leases. Remember to replace <resource-group> and <storage-account> with the
appropriate values for your environment.
To run the following script, you must install the 3.10.1-preview version of the Azure Storage PowerShell module.

IMPORTANT
Value-added services that take resource locks and share/share snapshot leases on your Azure Files resources may
periodically reapply locks and leases. Modifying or deleting locked resources by value-added services may impact regular
operation of those services, such as deleting share snapshots that were managed by Azure Backup.
# Parameters for storage account resource
$resourceGroupName = "<resource-group>"
$storageAccountName = "<storage-account>"

# Get reference to storage account
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName

# Remove resource locks
Get-AzResourceLock `
        -ResourceType "Microsoft.Storage/storageAccounts" `
        -ResourceGroupName $storageAccount.ResourceGroupName `
        -ResourceName $storageAccount.StorageAccountName | `
    Remove-AzResourceLock -Force | `
    Out-Null

# Break leases on all file shares in the storage account
Get-AzStorageShare -Context $storageAccount.Context | `
    ForEach-Object {
        try {
            $leaseClient = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($_.ShareClient)
            $leaseClient.Break() | Out-Null
        } catch { }
    }

Unable to modify, move/rename, or delete a file or directory


One of the key purposes of a file share is that multiple users and applications may simultaneously interact with
files and directories in the share. To assist with this interaction, file shares provide several ways of mediating
access to files and directories.
When you open a file from a mounted Azure file share over SMB, your application or operating system requests a file handle, which is a reference to the file. Among other things, your application specifies a file sharing mode when it requests a file handle, which specifies the level of exclusivity of your access to the file that Azure Files enforces:
None : you have exclusive access.
Read : others may read the file while you have it open.
Write : others may write to the file while you have it open.
ReadWrite : a combination of both the Read and Write sharing modes.
Delete : others may delete the file while you have it open.

Although the FileREST protocol, as a stateless protocol, does not have a concept of file handles, it does provide a similar mechanism to mediate access to files and folders that your script, application, or service may use: file leases. When a file is leased, it is treated as equivalent to a file handle with a file sharing mode of None .
Although file handles and leases serve an important purpose, sometimes they become orphaned. When this happens, it can cause problems when modifying or deleting files. You may see error messages like:
The process cannot access the file because it is being used by another process.
The action can't be completed because the file is open in another program.
The document is locked for editing by another user.
The specified resource is marked for deletion by an SMB client.
The resolution to this issue depends on whether this is being caused by an orphaned file handle or lease.
Cause 1
A file handle is preventing a file/directory from being modified or deleted. You can use the Get-AzStorageFileHandle PowerShell cmdlet to view open handles.
If all SMB clients have closed their open handles on a file/directory and the issue continues to occur, you can
force close a file handle.
Solution 1
To force a file handle to be closed, use the Close-AzStorageFileHandle PowerShell cmdlet.

NOTE
The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or
later. To install the latest Az PowerShell module, see Install the Azure PowerShell module.

Cause 2
A file lease is preventing a file from being modified or deleted. You can check if a file has a file lease with the
following PowerShell, replacing <resource-group> , <storage-account> , <file-share> , and <path-to-file> with
the appropriate values for your environment:

# Set variables
$resourceGroupName = "<resource-group>"
$storageAccountName = "<storage-account>"
$fileShareName = "<file-share>"
$fileForLease = "<path-to-file>"

# Get reference to storage account
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName

# Get reference to file
$file = Get-AzStorageFile `
    -Context $storageAccount.Context `
    -ShareName $fileShareName `
    -Path $fileForLease

$fileClient = $file.ShareFileClient

# Check if the file has a file lease
$fileClient.GetProperties().Value

If a file has a lease, the returned object should contain the following properties:

LeaseDuration : Infinite
LeaseState : Leased
LeaseStatus : Locked

Solution 2
To remove a lease from a file, you can release the lease or break the lease. To release the lease, you need the
LeaseId of the lease, which you set when you create the lease. You do not need the LeaseId to break the lease.
The following example shows how to break the lease for the file indicated in cause 2 (this example continues
with the PowerShell variables from cause 2):
$leaseClient = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($fileClient)
$leaseClient.Break() | Out-Null

Slow file copying to and from Azure Files in Windows


You might see slow performance when you try to transfer files to the Azure File service.
If you don't have a specific minimum I/O size requirement, we recommend that you use 1 MiB as the I/O size
for optimal performance.
If you know the final size of a file that you are extending with writes, and your software doesn't have
compatibility problems when the unwritten tail on the file contains zeros, then set the file size in advance
instead of making every write an extending write.
Use the right copy method:
Use AzCopy for any transfer between two file shares.
Use Robocopy between file shares on an on-premises computer.
Considerations for Windows 8.1 or Windows Server 2012 R2
For clients that are running Windows 8.1 or Windows Server 2012 R2, make sure that the KB3114025 hotfix is
installed. This hotfix improves the performance of create and close handles.
You can run the following script to check whether the hotfix has been installed:
reg query HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\Policies

If the hotfix is installed, the following output is displayed:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\Policies {96c345ef-3cac-477b-8fcd-bea1a564241c} REG_DWORD 0x1

NOTE
Windows Server 2012 R2 images in Azure Marketplace have hotfix KB3114025 installed by default, starting in December
2015.

No folder with a drive letter in "My Computer" or "This PC"


If you map an Azure file share as an administrator by using net use, the share appears to be missing.
Cause
By default, Windows File Explorer does not run as an administrator. If you run net use from an administrative
command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the
user account that is logged in does not display the drives if they are mounted under a different user account.
Solution
Mount the share from a non-administrator command line. Alternatively, you can follow this TechNet topic to
configure the EnableLinkedConnections registry value.
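As a sketch, the EnableLinkedConnections value can be set from an elevated command prompt as follows; a restart is typically required for the change to take effect:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f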

Net use command fails if the storage account contains a forward slash
Cause
The net use command interprets a forward slash (/) as a command-line option. If your user account name starts
with a forward slash, the drive mapping fails.
Solution
You can use either of the following steps to work around the problem:
Run the following PowerShell command:
New-SmbMapping -LocalPath y: -RemotePath \\server\share -UserName accountName -Password "password can
contain / and \ etc"

From a batch file, you can run the command this way:
Echo new-smbMapping ... | powershell -command -

Put double quotation marks around the key to work around this problem--unless the forward slash is the
first character. If it is, either use the interactive mode and enter your password separately or regenerate
your keys to get a key that doesn't start with a forward slash.

Application or service cannot access a mounted Azure Files drive


Cause
Drives are mounted per user. If your application or service is running under a different user account than the
one that mounted the drive, the application will not see the drive.
Solution
Use one of the following solutions:
Mount the drive from the same user account that contains the application. You can use a tool such as
PsExec.
Pass the storage account name and key in the user name and password parameters of the net use
command.
Use the cmdkey command to add the credentials into Credential Manager. Perform this from a command
line under the service account context, either through an interactive login or by using runas .
cmdkey /add:<storage-account-name>.file.core.windows.net /user:AZURE\<storage-account-name> /pass:
<storage-account-key>

Map the share directly without using a mapped drive letter. Some applications may not reconnect to the drive letter properly, so using the full UNC path might be more reliable.
net use * \\storage-account-name.file.core.windows.net\share

After you follow these instructions, you might receive the following error message when you run net use for the
system/network service account: "System error 1312 has occurred. A specified logon session does not exist. It
may already have been terminated." If this occurs, make sure that the username that is passed to net use
includes domain information (for example: "[storage account name].file.core.windows.net").

Error "You are copying a file to a destination that does not support
encryption"
When a file is copied over the network, the file is decrypted on the source computer, transmitted in plaintext,
and re-encrypted at the destination. However, you might see the following error when you're trying to copy an
encrypted file: "You are copying the file to a destination that does not support encryption."
Cause
This problem can occur if you are using Encrypting File System (EFS). BitLocker-encrypted files can be copied to
Azure Files. However, Azure Files does not support NTFS EFS.
Workaround
To copy a file over the network, you must first decrypt it. Use one of the following methods:
Use the copy /d command. It allows the encrypted files to be saved as decrypted files at the destination.
Set the following registry key:
Path = HKLM\Software\Policies\Microsoft\Windows\System
Value type = DWORD
Name = CopyFileAllowDecryptedRemoteDestination
Value = 1
Be aware that setting the registry key affects all copy operations that are made to network shares.
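As a sketch, the registry key described above can be set from an elevated command prompt:

reg add "HKLM\Software\Policies\Microsoft\Windows\System" /v CopyFileAllowDecryptedRemoteDestination /t REG_DWORD /d 1 /f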

Slow enumeration of files and folders


Cause
This problem can occur if there is not enough cache on the client machine for large directories.
Solution
To resolve this problem, adjust the DirectoryCacheEntrySizeMax registry value to allow caching of larger directory listings on the client machine:
Location: HKLM\System\CCS\Services\Lanmanworkstation\Parameters
Value name: DirectoryCacheEntrySizeMax
Value type: DWORD

For example, you can set it to 0x100000 and see if the performance improves.
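As a sketch, you could set this value from an elevated PowerShell prompt (0x100000 is the example size suggested above):

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name "DirectoryCacheEntrySizeMax" -Value 0x100000 -Type DWord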

Error AadDsTenantNotFound in enabling Azure Active Directory Domain Services (Azure AD DS) authentication for Azure Files: "Unable to locate active tenants with tenant ID aad-tenant-id"
Cause
Error AadDsTenantNotFound happens when you try to enable Azure Active Directory Domain Services (Azure AD DS) authentication on Azure Files on a storage account where Azure AD DS is not created on the Azure AD tenant of the associated subscription.
Solution
Enable Azure AD DS on the Azure AD tenant of the subscription that your storage account is deployed to. You
need administrator privileges of the Azure AD tenant to create a managed domain. If you aren't the
administrator of the Azure AD tenant, contact the administrator and follow the step-by-step guidance to Create
and configure an Azure Active Directory Domain Services managed domain.

Error ConditionHeadersNotSupported from a web application using Azure Files from a browser
The ConditionHeadersNotSupported error occurs when content hosted in Azure Files is accessed through an application that makes use of conditional headers, such as a web browser. The error states that conditional headers are not supported.
Cause
Conditional headers are not yet supported. Applications implementing them will need to request the full file
every time the file is accessed.
Workaround
When a new file is uploaded, the cache-control property by default is "no-cache". To force the application to request the file every time, the file's cache-control property needs to be updated from "no-cache" to "no-cache, no-store, must-revalidate". This can be achieved using Azure Storage Explorer.

Unable to mount Azure Files with AD credentials


Self diagnostics steps
First, make sure that you have followed all four steps to enable Azure Files AD authentication.
Second, try mounting the Azure file share with the storage account key. If the mount fails, download AzFileDiagnostics to help you validate the client running environment, detect incompatible client configurations that would cause access failures for Azure Files, get prescriptive guidance on self-fixes, and collect diagnostics traces.
Third, you can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD
configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. You need
to run this cmdlet with an AD user that has owner permission on the target storage account.
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"

Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose

The cmdlet performs these checks below in sequence and provides guidance for failures:
1. CheckADObjectPasswordIsCorrect: Ensure that the password configured on the AD identity that represents
the storage account is matching that of the storage account kerb1 or kerb2 key. If the password is incorrect,
you can run Update-AzStorageAccountADObjectPassword to reset the password.
2. CheckADObject: Confirm that there is an object in the Active Directory that represents the storage account
and has the correct SPN (service principal name). If the SPN isn't set up correctly, please run the Set-AD cmdlet returned in the debug cmdlet to configure the SPN.
3. CheckDomainJoined: Validate that the client machine is domain joined to AD. If your machine is not domain
joined to AD, please refer to this article for domain join instructions.
4. CheckPort445Connectivity: Check that Port 445 is opened for SMB connection. If the required Port is not
open, please refer to the troubleshooting tool AzFileDiagnostics for connectivity issues with Azure Files.
5. CheckSidHasAadUser: Check that the logged on AD user is synced to Azure AD. If you want to look up
whether a specific AD user is synchronized to Azure AD, you can specify the -UserName and -Domain in the
input parameters.
6. CheckGetKerberosTicket: Attempt to get a Kerberos ticket to connect to the storage account. If there isn't a
valid Kerberos token, run the klist get cifs/storage-account-name.file.core.windows.net cmdlet and examine
the error code to root-cause the ticket retrieval failure.
7. CheckStorageAccountDomainJoined: Check if the AD authentication has been enabled and the account's AD
properties are populated. If not, refer to the instruction here to enable AD DS authentication on Azure Files.
8. CheckUserRbacAssignment: Check if the AD identity has the proper RBAC role assignment to provide share
level permission to access Azure Files. If not, refer to the instruction here to configure the share level
permission. (Supported on AzFilesHybrid v0.2.3+ version)
9. CheckUserFileAccess: Check if the AD identity has the proper directory/file permission (Windows ACLs) to
access Azure Files. If not, refer to the instruction here to configure the directory/file level permission.
(Supported on AzFilesHybrid v0.2.3+ version)

Unable to configure directory/file level permissions (Windows ACLs) with Windows File Explorer
Symptom
You may experience either of the symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share:
After you click on Edit permission under the Security tab, the Permission wizard does not load.
When you try to select a new user or group, the domain location does not display the right AD DS domain.
Solution
We recommend using the icacls tool to configure the directory/file level permissions as a workaround.

Errors when running Join-AzStorageAccountForAuth cmdlet


Error: "The directory service was unable to allocate a relative identifier"
This error may occur if a domain controller that holds the RID Master FSMO role is unavailable or was removed
from the domain and restored from backup. Confirm that all Domain Controllers are running and available.
Error: "Cannot bind positional parameters because no names were given"
This error is most likely triggered by a syntax error in the Join-AzStorageAccountforAuth command. Check the
command for misspellings or syntax errors and verify that the latest version of the AzFilesHybrid module
(https://github.com/Azure-Samples/azure-files-samples/releases) is installed.

Azure Files on-premises AD DS authentication support for AES-256 Kerberos encryption
Azure Files supports AES-256 Kerberos encryption for AD DS authentication with the AzFilesHybrid module
v0.2.2 and later. AES-256 is the recommended encryption method. If you enabled AD DS authentication with a
module version lower than v0.2.2, you need to download the latest AzFilesHybrid module (v0.2.2+) and run
the PowerShell commands below. If you have not yet enabled AD DS authentication on your storage account, you can follow
this guidance for enablement.

$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"

Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName

User identity formerly having the Owner or Contributor role assignment still has storage account key access
The storage account Owner and Contributor roles grant the ability to list the storage account keys. The storage
account key enables full access to the storage account's data including file shares, blob containers, tables, and
queues, and limited access to the Azure Files management operations via the legacy management APIs exposed
through the FileREST API. If you're changing role assignments, you should consider that the users being
removed from the Owner or Contributor roles may continue to maintain access to the storage account through
saved storage account keys.
Solution 1
You can remedy this issue easily by rotating the storage account keys. We recommend rotating the keys one at a
time, switching access from one to the other as they are rotated. There are two types of shared keys the storage
account provides: the storage account keys, which provide super-administrator access to the storage account's
data, and the Kerberos keys, which function as a shared secret between the storage account and the Windows
Server Active Directory domain controller for Windows Server Active Directory scenarios.
To rotate the Kerberos keys of a storage account, see Update the password of your storage account identity in
AD DS.
Portal

Navigate to the desired storage account in the Azure portal. In the table of contents for the storage
account, select Access keys under the Security + networking heading. In the Access keys pane, select Rotate
key above the desired key.
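If you prefer PowerShell over the portal, a minimal sketch of rotating one key with New-AzStorageAccountKey (rotate key1 first, switch your workloads over to the other key, then repeat for key2):

# Regenerate (rotate) the first storage account key
New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName key1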
Need help? Contact support.
If you still need help, contact support to get your problem resolved quickly.
Troubleshoot Azure Files problems in Linux (SMB)

This article lists common problems that are related to Azure Files when you connect from Linux clients. It also
provides possible causes and resolutions for these problems.
In addition to the troubleshooting steps in this article, you can use AzFileDiagnostics to ensure that the Linux
client has correct prerequisites. AzFileDiagnostics automates the detection of most of the symptoms mentioned
in this article. It helps set up your environment to get optimal performance. You can also find this information in
the Azure Files shares troubleshooter. The troubleshooter provides steps to help you with problems connecting,
mapping, and mounting Azure Files shares.

IMPORTANT
The content of this article only applies to SMB shares. For details on NFS shares, see Troubleshoot Azure NFS file shares.

Applies to
File share type                              SMB    NFS
Standard file shares (GPv2), LRS/ZRS         Yes    No
Standard file shares (GPv2), GRS/GZRS        Yes    No
Premium file shares (FileStorage), LRS/ZRS   Yes    No

Cannot connect to or mount an Azure file share


Cause
Common causes for this problem are:
You're using a Linux distribution with an outdated SMB client. See Use Azure Files with Linux for more
information on common Linux distributions available in Azure that have compatible clients.
SMB utilities (cifs-utils) are not installed on the client.
The minimum SMB version, 2.1, is not available on the client.
SMB 3.x encryption is not supported on the client. The preceding table provides a list of Linux distributions
that support mounting from on-premises and cross-region using encryption. Other distributions require
kernel 4.11 and later versions.
You're trying to connect to an Azure file share from an Azure VM, and the VM is not in the same region as the
storage account.
If the Secure transfer required setting is enabled on the storage account, Azure Files will allow only
connections that use SMB 3.x with encryption.
Solution
To resolve the problem, use the troubleshooting tool for Azure Files mounting errors on Linux. This tool:
Helps you to validate the client running environment.
Detects the incompatible client configuration that would cause access failure for Azure Files.
Gives prescriptive guidance on self-fixing.
Collects the diagnostics traces.

"Mount error(13): Permission denied" when you mount an Azure file


share
Cause 1: Unencrypted communication channel
For security reasons, connections to Azure file shares are blocked if the communication channel isn't encrypted
and if the connection attempt isn't made from the same datacenter where the Azure file shares reside.
Unencrypted connections within the same datacenter can also be blocked if the Secure transfer required setting
is enabled on the storage account. An encrypted communication channel is provided only if the user's client OS
supports SMB encryption.
To learn more, see Prerequisites for mounting an Azure file share with Linux and the cifs-utils package.
Solution for cause 1
1. Connect from a client that supports SMB encryption or connect from a virtual machine in the same
datacenter as the Azure storage account that is used for the Azure file share.
2. Verify the Secure transfer required setting is disabled on the storage account if the client does not support
SMB encryption.
Cause 2: Virtual network or firewall rules are enabled on the storage account
If virtual network (VNET) and firewall rules are configured on the storage account, network traffic will be denied
access unless the client IP address or virtual network is allowed access.
Solution for cause 2
Verify that virtual network and firewall rules are configured properly on the storage account. To test whether virtual
network or firewall rules are causing the issue, temporarily change the setting on the storage account to Allow
access from all networks. To learn more, see Configure Azure Storage firewalls and virtual networks.

"[permission denied] Disk quota exceeded" when you try to open a


file
In Linux, you receive an error message that resembles the following:
<filename> [permission denied] Disk quota exceeded
Cause
You have reached the upper limit of concurrent open handles that are allowed for a file or directory.
There is a quota of 2,000 open handles on a single file or directory. When you have 2,000 open handles, an error
message is displayed that says the quota is reached.
Solution
Reduce the number of concurrent open handles by closing some handles, and then retry the operation.
To view open handles for a file share, directory or file, use the Get-AzStorageFileHandle PowerShell cmdlet.
To close open handles for a file share, directory or file, use the Close-AzStorageFileHandle PowerShell cmdlet.
NOTE
The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or
later. To install the latest Az PowerShell module, see Install the Azure PowerShell module.
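
For example, a minimal sketch of listing and closing handles (the account, key, and share names are placeholders):

$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -StorageAccountKey "<storage-account-key>"
# View open handles on the share, recursing into directories
Get-AzStorageFileHandle -ShareName "<share-name>" -Recursive -Context $ctx
# Close all open handles on the share, recursively
Close-AzStorageFileHandle -ShareName "<share-name>" -Recursive -CloseAll -Context $ctx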

Slow file copying to and from Azure Files in Linux


If you don't have a specific minimum I/O size requirement, we recommend that you use 1 MiB as the I/O size
for optimal performance.
Use the right copy method:
Use AzCopy for any transfer between two file shares.
Using cp or dd with parallel can improve copy speed; the number of threads depends on your use
case and workload. The following examples use six:
cp example (cp will use the default block size of the file system as the chunk size):
find * -type f | parallel --will-cite -j 6 cp {} /mntpremium/ &
dd example (this command explicitly sets the chunk size to 1 MiB):
find * -type f | parallel --will-cite -j 6 dd if={} of=/mnt/share/{} bs=1M
Open source third party tools such as:
GNU Parallel.
Fpart - Sorts files and packs them into partitions.
Fpsync - Uses Fpart and a copy tool to spawn multiple instances to migrate data from src_dir to
dst_url.
Multi - Multi-threaded cp and md5sum based on GNU coreutils.
Setting the file size in advance, instead of making every write an extending write, helps improve copy speed
in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file
size with the truncate --size <size> <file> command. After that, the
dd if=<source> of=<target> bs=1M conv=notrunc command will copy a source file without having to repeatedly
update the size of the target file. For example, you can set the destination file size for every file you want to
copy (assume a share is mounted under /mnt/share):
for i in `find * -type f`; do truncate --size `stat -c%s $i` /mnt/share/$i; done
and then copy the files without extending writes in parallel:
find * -type f | parallel -j6 dd if={} of=/mnt/share/{} bs=1M conv=notrunc

"Mount error(115): Operation now in progress" when you mount


Azure Files by using SMB 3.x
Cause
Some Linux distributions don't yet support encryption features in SMB 3.x. Users might receive a "115" error
message if they try to mount Azure Files by using SMB 3.x because of a missing feature. SMB 3.x with full
encryption is supported only when you're using Ubuntu 16.04 or later.
Solution
The encryption feature for SMB 3.x for Linux was introduced in the 4.11 kernel. This feature enables mounting of
an Azure file share from on-premises or from a different Azure region. Some Linux distributions may have
backported changes from the 4.11 kernel to older versions of the Linux kernel which they maintain. To assist in
determining if your version of Linux supports SMB 3.x with encryption, consult with Use Azure Files with Linux.
If your Linux SMB client doesn't support encryption, mount Azure Files by using SMB 2.1 from an Azure Linux
VM that's in the same datacenter as the file share. Verify that the Secure transfer required setting is disabled on
the storage account.

Error "No access" when you try to access or delete an Azure File
Share
When you try to access or delete an Azure file share in the portal, you may receive the following error:
No access
Error code: 403
Cause 1: Virtual network or firewall rules are enabled on the storage account
Solution for cause 1
Verify that virtual network and firewall rules are configured properly on the storage account. To test whether virtual
network or firewall rules are causing the issue, temporarily change the setting on the storage account to Allow
access from all networks. To learn more, see Configure Azure Storage firewalls and virtual networks.
Cause 2: Your user account does not have access to the storage account
Solution for cause 2
Browse to the storage account where the Azure file share is located, click Access control (IAM) and verify your
user account has access to the storage account. To learn more, see How to secure your storage account with
Azure role-based access control (Azure RBAC).

Unable to delete a file or directory in an Azure file share


Cause
This issue typically occurs if the file or directory has an open handle.
Solution
If the SMB clients have closed all open handles and the issue continues to occur, perform the following:
Use the Get-AzStorageFileHandle PowerShell cmdlet to view open handles.
Use the Close-AzStorageFileHandle PowerShell cmdlet to close open handles.

NOTE
The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or
later. To install the latest Az PowerShell module, see Install the Azure PowerShell module.

Slow performance on an Azure file share mounted on a Linux VM


Cause 1: Caching
One possible cause of slow performance is disabled caching. Caching can be useful if you are accessing a file
repeatedly, otherwise, it can be an overhead. Check if you are using the cache before disabling it.
Solution for cause 1
To check whether caching is disabled, look for the cache= entry.
Cache=none indicates that caching is disabled. Remount the share by using the default mount command or by
explicitly adding the cache=strict option to the mount command to ensure that default caching or "strict"
caching mode is enabled.
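For example, a minimal remount sketch with strict caching explicitly enabled (the account, share, and mount point are placeholders):

sudo umount /mnt/share
# cache=strict restores the default caching behavior
sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mnt/share -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,serverino,cache=strict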
In some scenarios, the serverino mount option can cause the ls command to run stat against every directory
entry. This behavior results in performance degradation when you're listing a large directory. You can check the
mount options in your /etc/fstab entry:
//azureuser.file.core.windows.net/cifs /cifs cifs vers=2.1,serverino,username=xxx,password=xxx,dir_mode=0777,file_mode=0777

You can also check whether the correct options are being used by running the sudo mount | grep cifs
command and checking its output. The following is example output:

//azureuser.file.core.windows.net/cifs on /cifs type cifs (rw,relatime,vers=2.1,sec=ntlmssp,cache=strict,username=xxx,domain=X,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.10.1,file_mode=0777,dir_mode=0777,persistenthandles,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,actimeo=1)

If the cache=strict or serverino option is not present, unmount and mount Azure Files again by running the
mount command from the documentation. Then, recheck that the /etc/fstab entry has the correct options.
Cause 2: Throttling
It is possible you are experiencing throttling and your requests are being sent to a queue. You can verify this by
leveraging Azure Storage metrics in Azure Monitor.
Solution for cause 2
Ensure your app is within the Azure Files scale targets.

Time stamps were lost in copying files from Windows to Linux


On Linux/Unix platforms, the cp -p command fails if different users own file 1 and file 2.
Cause
The force flag f in COPYFILE results in executing cp -p -f on Unix. This command also fails to preserve the time
stamp of the file that you don't own.
Workaround
Use the storage account user for copying the files:
str_acc_name=[storage account name]
sudo useradd $str_acc_name
sudo passwd $str_acc_name
su $str_acc_name
cp -p filename.txt /share

ls: cannot access '<path>': Input/output error


When you try to list files in an Azure file share by using the ls command, the command hangs.
You get the following error:
ls: cannot access '<path>': Input/output error
Solution
Upgrade the Linux kernel to the following versions that have a fix for this problem:
4.4.87+
4.9.48+
4.12.11+
All versions that are greater than or equal to 4.13

Cannot create symbolic links - ln: failed to create symbolic link 't':
Operation not supported
Cause
By default, mounting Azure file shares on Linux by using CIFS doesn't enable support for symbolic links
(symlinks). You see an error like this:

ln -s linked -n t
ln: failed to create symbolic link 't': Operation not supported

Solution
The Linux CIFS client doesn't support creation of Windows-style symbolic links over the SMB 2 or 3 protocol.
Currently, the Linux client supports another style of symbolic links called Minshall+French symlinks for both
create and follow operations. Customers who need symbolic links can use the "mfsymlinks" mount option. We
recommend "mfsymlinks" because it's also the format that Macs use.
To use symlinks, add the following to the end of your CIFS mount command:

,mfsymlinks

So the command looks something like:

sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> <mount-point> -o vers=<smb-version>,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino,mfsymlinks

You can then create symlinks as suggested on the wiki.

Error ConditionHeadersNotSupported from a Web Application using Azure Files from Browser
The ConditionHeadersNotSupported error occurs when content hosted in Azure Files is accessed through an
application that makes use of conditional headers, such as a web browser. The error states that
conditional headers are not supported.

Cause
Conditional headers are not yet supported. Applications implementing them will need to request the full file
every time the file is accessed.
Workaround
When a new file is uploaded, the cache-control property is "no-cache" by default. To force the application to
request the file every time, the file's cache-control property needs to be updated from "no-cache" to "no-cache,
no-store, must-revalidate". This can be achieved using Azure Storage Explorer.
"Mount error(112): Host is down" because of a reconnection time-out
A "112" mount error occurs on the Linux client when the client has been idle for a long time. After an extended
idle time, the client disconnects and the connection times out.
Cause
The connection can be idle for the following reasons:
Network communication failures that prevent re-establishing a TCP connection to the server when the
default "soft" mount option is used
Recent reconnection fixes that are not present in older kernels
Solution
This reconnection problem in the Linux kernel is now fixed as part of the following changes:
Fix reconnect to not defer smb3 session reconnect long after socket reconnect
Call echo service immediately after socket reconnect
CIFS: Fix a possible memory corruption during reconnect
CIFS: Fix a possible double locking of mutex during reconnect (for kernel v4.9 and later)
However, these changes might not be ported yet to all the Linux distributions. If you're using a popular Linux
distribution, you can check on the Use Azure Files with Linux to see which version of your distribution has the
necessary kernel changes.
Workaround
You can work around this problem by specifying a hard mount. A hard mount forces the client to wait until a
connection is established or until it's explicitly interrupted. You can use it to prevent errors because of network
time-outs. However, this workaround might cause indefinite waits. Be prepared to stop connections as
necessary.
If you can't upgrade to the latest kernel versions, you can work around this problem by keeping a file in the
Azure file share that you write to every 30 seconds or less. This must be a write operation, such as rewriting the
created or modified date on the file. Otherwise, you might get cached results, and your operation might not
trigger the reconnection.
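
A minimal sketch of such a keep-alive write, assuming the share is mounted at /mnt/share:

# touch rewrites the file's modification time, which is a write operation
while true; do touch /mnt/share/.keepalive; sleep 30; done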

"CIFS VFS: error -22 on ioctl to get interface list" when you mount an
Azure file share by using SMB 3.x
Cause
This error is logged because Azure Files does not currently support SMB multichannel.
Solution
This error can be ignored.
Unable to access folders or files whose name has a space or a dot at the end
You are unable to access folders or files in the Azure file share while it's mounted on Linux. Commands like du
and ls and/or third-party applications may fail with a "No such file or directory" error while accessing the share;
however, you are able to upload files to those folders via the portal.
Cause
The folders or files were uploaded from a system that encodes the characters at the end of the name to a
different character. For example, files uploaded from a Macintosh computer may have a "0xF028" or "0xF029" character
instead of 0x20 (space) or 0x2E (dot).
Solution
Use the mapchars option while mounting the share on Linux.
Instead of:

sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino

use:

sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino,mapchars

Need help? Contact support.


If you still need help, contact support to get your problem resolved quickly.
Troubleshoot Azure file shares performance issues

This article lists some common problems related to Azure file shares. It provides potential causes and
workarounds for when you encounter these problems.

Applies to
File share type                              SMB    NFS
Standard file shares (GPv2), LRS/ZRS         Yes    No
Standard file shares (GPv2), GRS/GZRS        Yes    No
Premium file shares (FileStorage), LRS/ZRS   Yes    Yes

High latency, low throughput, and general performance issues


Cause 1: Share was throttled
Requests are throttled when the I/O operations per second (IOPS), ingress, or egress limits for a file share are
reached. To understand the limits for standard and premium file shares, see File share and file scale targets.
To confirm whether your share is being throttled, you can access and use Azure metrics in the portal.
1. In the Azure portal, go to your storage account.
2. On the left pane, under Monitoring , select Metrics .
3. Select File as the metric namespace for your storage account scope.
4. Select Transactions as the metric.
5. Add a filter for Response type , and then check to see whether any requests have been throttled.
For standard file shares, the following response types are logged if a request is throttled:
SuccessWithThrottling
SuccessWithShareIopsThrottling
ClientShareIopsThrottlingError
For premium file shares, the following response types are logged if a request is throttled:
SuccessWithShareEgressThrottling
SuccessWithShareIngressThrottling
SuccessWithShareIopsThrottling
ClientShareEgressThrottlingError
ClientShareIngressThrottlingError
ClientShareIopsThrottlingError
To learn more about each response type, see Metric dimensions.
NOTE
To receive an alert, see the "How to create an alert if a file share is throttled" section later in this article.

Solution
If you're using a standard file share, enable large file shares on your storage account and increase the size of the
file share quota to take advantage of large file share support. Large file shares support greater IOPS and
bandwidth limits; see Azure Files scalability and performance targets for details.
If you're using a premium file share, increase the provisioned file share size to increase the IOPS limit. To
learn more, see Understanding provisioning for premium file shares.
Cause 2: Metadata or namespace heavy workload
If the majority of your requests are metadata-centric (such as createfile , openfile , closefile , queryinfo , or
querydirectory ), the latency will be worse than that of read/write operations.

To determine whether most of your requests are metadata-centric, start by following steps 1-4 as previously
outlined in Cause 1. For step 5, instead of adding a filter for Response type , add a property filter for API
name .

Workaround
Check to see whether the application can be modified to reduce the number of metadata operations.
Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations
against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers
and no writers. Because the file system is owned by the client rather than Azure Files, metadata
operations stay local. The setup offers performance similar to that of local directly attached storage.
To mount a VHD on a Windows client, use the Mount-DiskImage PowerShell cmdlet.
To mount a VHD on Linux, consult the documentation for your Linux distribution.
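For example, a minimal Windows sketch, assuming the file share is mapped as drive Z: and the VHD path is a placeholder:

# Mount the VHD that lives on the file share; Windows assigns its volume a drive letter
Mount-DiskImage -ImagePath "Z:\disks\data.vhd"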
Cause 3: Single-threaded application
If the application that you're using is single-threaded, this setup can result in significantly lower IOPS throughput
than the maximum possible throughput, depending on your provisioned share size.
Solution
Increase application parallelism by increasing the number of threads.
Switch to applications where parallelism is possible. For example, for copy operations, you could use AzCopy
or RoboCopy from Windows clients or the parallel command from Linux clients.
Cause 4: Number of SMB channels exceeds four
If you're using SMB Multichannel and the number of channels exceeds four, this will result in poor
performance. To determine whether your connection count exceeds four, use the
Get-SmbClientConfiguration PowerShell cmdlet to view the current connection count settings.
Solution
Set the Windows per NIC setting for SMB so that the total channels don't exceed four. For example, if you have
two NICs, you can set the maximum per NIC to two using the following PowerShell cmdlet:
Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 2 .

Very high latency for requests


Cause
The client virtual machine (VM) could be located in a different region than the file share. Another reason for high
latency could be latency caused by the client or the network.
Solution
Run the application from a VM that's located in the same region as the file share.
For your storage account, review the transaction metrics SuccessE2ELatency and SuccessServerLatency via
Azure Monitor in the Azure portal. A high difference between the SuccessE2ELatency and SuccessServerLatency
metric values is an indication of latency that is likely caused by the network or the client. See Transaction
metrics in the Azure Files monitoring data reference.

Client unable to achieve maximum throughput supported by the network
Cause
One potential cause is a lack of SMB multi-channel support for standard file shares. Currently, Azure Files
supports only single channel, so there's only one connection from the client VM to the server. This single
connection is pegged to a single core on the client VM, so the maximum throughput achievable from a VM is
bound by a single core.
Workaround
For premium file shares, Enable SMB Multichannel.
Obtaining a VM with a bigger core might help improve throughput.
Running the client application from multiple VMs will increase throughput.
Use REST APIs where possible.
For NFS file shares, nconnect is available in preview. It's not recommended for production workloads.

Throughput on Linux clients is significantly lower than that of Windows clients
Cause
This is a known issue with the implementation of the SMB client on Linux.
Workaround
Spread the load across multiple VMs.
On the same VM, use multiple mount points with the nosharesock option, and spread the load across these
mount points (see the sketch after this list).
On Linux, try mounting with a nostrictsync option to avoid forcing an SMB flush on every fsync call. For
Azure Files, this option doesn't interfere with data consistency, but it might result in stale file metadata on
directory listings ( ls -l command). Directly querying file metadata by using the stat command will
return the most up-to-date file metadata.
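
A minimal sketch of the nosharesock approach mentioned above (account, key, share, and mount points are placeholders); each mount gets its own TCP socket:

sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mnt/share1 -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,serverino,nosharesock
sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mnt/share2 -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,serverino,nosharesock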

High latencies for metadata-heavy workloads involving extensive open/close operations
Cause
Lack of support for directory leases.
Workaround
If possible, avoid using an excessive opening/closing handle on the same directory within a short period of
time.
For Linux VMs, increase the directory entry cache timeout by specifying actimeo=<sec> as a mount option
(see the sketch after this list). By default, the timeout is 1 second, so a larger value, such as 3 or 5 seconds, might help.
For CentOS Linux or Red Hat Enterprise Linux (RHEL) VMs, upgrade the system to CentOS Linux 8.2 or RHEL
8.2. For other Linux VMs, upgrade the kernel to 5.0 or later.
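
A minimal sketch of the actimeo mount option mentioned in the list above (account, key, share, and mount point are placeholders; 3 seconds of attribute caching):

sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mnt/share -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,serverino,actimeo=3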

Low IOPS on CentOS Linux or RHEL


Cause
An I/O depth of greater than 1 is not supported on CentOS Linux or RHEL.
Workaround
Upgrade to CentOS Linux 8 or RHEL 8.
Change to Ubuntu.

Slow file copying to and from Azure file shares in Linux


If you're experiencing slow file copying, see the "Slow file copying to and from Azure file shares in Linux" section
in the Linux troubleshooting guide.

Jittery or sawtooth pattern for IOPS


Cause
The client application consistently exceeds baseline IOPS. Currently, there's no service-side smoothing of the
request load. If the client exceeds baseline IOPS, it will get throttled by the service. The throttling can result in the
client experiencing a jittery or sawtooth IOPS pattern. In this case, the average IOPS achieved by the client might
be lower than the baseline IOPS.
Workaround
Reduce the request load from the client application, so that the share doesn't get throttled.
Increase the quota of the share so that the share doesn't get throttled.

Excessive DirectoryOpen/DirectoryClose calls


Cause
If the number of DirectoryOpen/DirectoryClose calls is among the top API calls and you don't expect the
client to make that many calls, the issue might be caused by the antivirus software that's installed on the Azure
client VM.
Workaround
A fix for this issue is available in the April Platform Update for Windows.

File creation is slower than expected


Cause
Workloads that rely on creating a large number of files won't see a substantial difference in performance
between premium file shares and standard file shares.
Workaround
None.

Slow performance from Windows 8.1 or Server 2012 R2


Cause
Higher than expected latency accessing Azure file shares for I/O-intensive workloads.
Workaround
Install the available hotfix.

SMB Multichannel is not being triggered


Cause
Recent changes to SMB Multichannel config settings without a remount.
Solution
After any changes to the Windows SMB client or account SMB Multichannel configuration settings, you have to
unmount the share, wait 60 seconds, and remount the share to trigger multichannel.
For a Windows client OS, generate I/O load with a high queue depth, say QD=8 (for example, by copying a file), to
trigger SMB Multichannel. For a server OS, SMB Multichannel is triggered with QD=1, which means as soon as
you start any I/O to the share.

High latency on web sites hosted on file shares


Cause
A high number of file change notifications on file shares can result in significantly high latencies. This typically occurs
with web sites hosted on file shares with a deeply nested directory structure. A typical scenario is an IIS-hosted web
application where file change notification is set up for each directory in the default configuration. Each change
(ReadDirectoryChangesW) on the share that the client is registered for pushes a change notification from the file
service to the client, which takes system resources, and the issue worsens with the number of changes. This can
cause share throttling and thus result in higher client-side latency.
To confirm, you can use Azure metrics in the portal:
1. In the Azure portal, go to your storage account.
2. In the left menu, under Monitoring, select Metrics.
3. Select File as the metric namespace for your storage account scope.
4. Select Transactions as the metric.
5. Add a filter for ResponseType and check to see if any requests have a response code
of SuccessWithThrottling (for SMB or NFS) or ClientThrottlingError (for REST).
Solution
If file change notification is not used, disable file change notification (preferred).
Disable file change notification by updating FCNMode.
Update the IIS Worker Process (W3WP) polling interval to 0 by setting
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\W3SVC\Parameters\ConfigPollMilliSeconds in
your registry and restart the W3WP process. To learn about this setting, see Common registry keys
that are used by many parts of IIS.
Increase the file change notification polling interval to reduce the notification volume (see the sketch after this list).
Update the W3WP worker process polling interval to a higher value (for example, 10 or 30 minutes) based on
your requirements. Set
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\W3SVC\Parameters\ConfigPollMilliSeconds in
your registry and restart the W3WP process.
If your web site's mapped physical directory has nested directory structure, you can try to limit scope of file
change notification to reduce the notification volume. By default, IIS uses configuration from Web.config files
in the physical directory to which the virtual directory is mapped, as well as in any child directories in that
physical directory. If you do not want to use Web.config files in child directories, specify false for the
allowSubDirConfig attribute on the virtual directory. More details can be found here.
Set IIS virtual directory "allowSubDirConfig" setting in Web.Config to false to exclude mapped
physical child directories from the scope.
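
A minimal PowerShell sketch of the registry change referenced in the list above (the 600000 ms value, 10 minutes, is an assumption to adapt to your requirements):

# Set the IIS configuration polling interval, then restart the IIS service to pick it up
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\W3SVC\Parameters" -Name ConfigPollMilliSeconds -Value 600000 -Type DWord
Restart-Service W3SVC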

How to create an alert if a file share is throttled


1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts, and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the Transactions metric.
6. On the Configure signal logic blade, click the Dimension name drop-down and select Response type.
7. Click the Dimension values drop-down and select the appropriate response types for your file share.
For standard file shares, select the following response types:
SuccessWithThrottling
SuccessWithShareIopsThrottling
ClientShareIopsThrottlingError
For premium file shares, select the following response types:
SuccessWithShareEgressThrottling
SuccessWithShareIngressThrottling
SuccessWithShareIopsThrottling
ClientShareEgressThrottlingError
ClientShareIngressThrottlingError
ClientShareIopsThrottlingError

NOTE
If the response types are not listed in the Dimension values drop-down, this means the resource has not been
throttled. To add the dimension values, next to the Dimension values drop-down list, select Add custom
value, enter the response type (for example, SuccessWithThrottling), select OK, and then repeat these steps to
add all applicable response types for your file share.

8. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #10.
NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Throttling alerts for standard file shares will be triggered if any
file share within the storage account is throttled and the alert will not identify which file share was throttled. Since
per-share metrics are not available for standard file shares, the recommendation is to have one file share per
storage account.

9. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
10. Define the alert parameters (threshold value, operator, aggregation granularity, and frequency of
evaluation) and click Done.

TIP
If you are using a static threshold, the metric chart can help determine a reasonable threshold value if the file
share is currently being throttled. If you are using a dynamic threshold, the metric chart will display the calculated
thresholds based on recent data.

11. Click Add action groups to add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name, Description, and Severity.
13. Click Create alert rule to create the alert.
To learn more about configuring alerts in Azure Monitor, see Overview of alerts in Microsoft Azure.

Slow performance when unzipping files in SMB file shares


Depending on the exact compression method and unzip operation used, decompression operations may
perform more slowly on an Azure file share than on your local disk. This is often because unzipping tools
perform a number of metadata operations in the process of performing the decompression of a compressed
archive. For the best performance, we recommend copying the compressed archive from the Azure file share to
your local disk, unzipping there, and then using a copy tool such as Robocopy (or AzCopy) to copy back to the
Azure file share. Using a copy tool like Robocopy can compensate for the decreased performance of metadata
operations in Azure Files relative to your local disk by using multiple threads to copy data in parallel.

How to create alerts if a premium file share is trending toward being throttled
1. In the Azure portal, go to your storage account.
2. In the Monitoring section, select Alerts, and then select New alert rule.
3. Select Edit resource, select the File resource type for the storage account, and then select Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Select Select Condition to add a condition.
5. In the list of signals that are supported for the storage account, select the Egress metric.
NOTE
You have to create three separate alerts to be alerted when the ingress, egress, or transaction values exceed the
thresholds you set. This is because an alert is triggered only when all conditions are met. For example, if you put
all the conditions in one alert, you would be alerted only if ingress, egress, and transactions exceed their threshold
amounts.

6. Scroll down. In the Dimension name drop-down list, select File Share .
7. In the Dimension values drop-down list, select the file share or shares that you want to alert on.
8. Define the alert parameters by selecting values in the Operator , Threshold value , Aggregation
granularity , and Frequency of evaluation drop-down lists, and then select Done .
Egress, ingress, and transactions metrics are expressed per minute, though egress, ingress, and I/O are
provisioned per second. Therefore, for example, if your provisioned egress is 90 MiB/s and you want
your threshold to be 80 percent of provisioned egress, select the following alert parameters:
For Threshold value : 75497472
For Operator : greater than or equal to
For Aggregation type : average
Depending on how noisy you want your alert to be, you can also select values for Aggregation
granularity and Frequency of evaluation . For example, if you want your alert to look at the average
ingress over the time period of 1 hour, and you want your alert rule to be run every hour, select the
following:
For Aggregation granularity : 1 hour
For Frequency of evaluation : 1 hour
9. Select Add action groups , and then add an action group (for example, email or SMS) to the alert either
by selecting an existing action group or by creating a new one.
10. Enter the alert details, such as Alert rule name, Description, and Severity.
11. Select Create alert rule to create the alert.

NOTE
To be notified that your premium file share is close to being throttled because of provisioned ingress,
follow the preceding instructions, but with the following change:
In step 5, select the Ingress metric instead of Egress .
To be notified that your premium file share is close to being throttled because of provisioned IOPS, follow
the preceding instructions, but with the following changes:
In step 5, select the Transactions metric instead of Egress .
In step 10, the only option for Aggregation type is Total. Therefore, the threshold value depends on
your selected aggregation granularity. For example, if you want your threshold to be 80 percent of
provisioned baseline IOPS and you select 1 hour for Aggregation granularity , your Threshold
value would be your baseline IOPS (in bytes) × 0.8 × 3600.

To learn more about configuring alerts in Azure Monitor, see Overview of alerts in Microsoft Azure.

See also
Troubleshoot Azure Files in Windows
Troubleshoot Azure Files in Linux
Azure Files FAQ
Troubleshoot Azure NFS file share problems

This article lists some common problems and known issues related to Azure NFS file shares. It provides
potential causes and workarounds when these problems are encountered.

Applies to
File share type                              SMB    NFS
Standard file shares (GPv2), LRS/ZRS         No     No
Standard file shares (GPv2), GRS/GZRS        No     No
Premium file shares (FileStorage), LRS/ZRS   No     Yes

chgrp "filename" failed: Invalid argument (22)


Cause 1: idmapping is not disabled
Azure Files disallows alphanumeric UID/GID, so idmapping must be disabled.
Cause 2: idmapping was disabled, but got re-enabled after encountering a bad file/dir name
Even if idmapping has been correctly disabled, the setting for disabling idmapping gets overridden in some
cases. For example, when Azure Files encounters a bad file name, it sends back an error. Upon seeing this
particular error code, the NFS v4.1 Linux client decides to re-enable idmapping, and future requests are sent
again with alphanumeric UID/GID. For a list of unsupported characters on Azure Files, see this article. Colon is
one of the unsupported characters.
Workaround
Check that idmapping is disabled and nothing is re-enabling it, then perform the following steps (a combined sketch follows this list):
Unmount the share
Disable idmapping with # echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping
Mount the share back
If running rsync, run rsync with the "--numeric-ids" argument from a directory that does not have a bad
dir/file name.
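
A minimal sketch of that sequence (the account, share, and mount-point names are placeholders):

# Unmount, disable idmapping, and remount the NFS share
sudo umount /mnt/nfsshare
echo Y | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
sudo mount -t nfs <storage-account-name>.file.core.windows.net:/<storage-account-name>/<share-name> /mnt/nfsshare -o vers=4,minorversion=1,sec=sys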

Unable to create an NFS share


Cause 1: Unsupported storage account settings
NFS is only available on storage accounts with the following configuration:
Tier - Premium
Account Kind - FileStorage
Regions - List of supported regions
Solution
Follow the instructions in our article: How to create an NFS share.
Cannot connect to or mount an Azure NFS file share
Cause 1: Request originates from a client in an untrusted network/untrusted IP
Unlike SMB, NFS does not have user-based authentication. The authentication for a share is based on your
network security rule configuration. Due to this, to ensure only secure connections are established to your NFS
share, you must use either the service endpoint or private endpoints. To access shares from on-premises in
addition to private endpoints, you must set up VPN or ExpressRoute. IPs added to the storage account's allowlist
for the firewall are ignored. You must use one of the following methods to set up access to an NFS share:
Service endpoint
Accessed by the public endpoint
Only available in the same region.
VNet peering won't give access to your share.
Each virtual network or subnet must be individually added to the allowlist.
For on-premises access, service endpoints can be used with ExpressRoute, point-to-site, and site-to-
site VPNs, but we recommend using private endpoints because they're more secure.
The following diagram depicts connectivity using public endpoints.

Private endpoint
Access is more secure than the service endpoint.
Access to the NFS share via private link is available from within and outside the storage account's Azure
region (cross-region, on-premises).
Virtual network peering with the virtual network hosting the private endpoint gives NFS share access to
the clients in the peered virtual networks.
Private endpoints can be used with ExpressRoute, point-to-site, and site-to-site VPNs.
Cause 2: Secure transfer required is enabled
Double encryption isn't supported for NFS shares yet. Azure provides a layer of encryption for all data in transit
between Azure datacenters using MACsec. NFS shares can only be accessed from trusted virtual networks and
over VPN tunnels. No additional transport-layer encryption is available on NFS shares.
Solution
Disable secure transfer required in your storage account's configuration blade.

Cause 3: nfs-common package is not installed


Before running the mount command, install the package by running the distro-specific command below.
To check if the NFS package is installed, run: rpm -qa | grep nfs-utils

Solution
If the package isn't installed, install the package on your distribution.
Ubuntu or Debian
sudo apt update
sudo apt install nfs-common

Fedora, Red Hat Enterprise Linux 8+, CentOS 8+

Use the dnf package manager: sudo dnf install nfs-utils .


Older versions of Red Hat Enterprise Linux and CentOS use the yum package manager:
sudo yum install nfs-utils .

openSUSE

Use the zypper package manager: sudo zypper install nfs-client .


Cause 4: Firewall blocking port 2049
The NFS protocol communicates with its server over port 2049. Make sure that this port is open to the storage
account (the NFS server).
Solution
Verify that port 2049 is open on your client by running the following command:
telnet <storageaccountnamehere>.file.core.windows.net 2049 . If the port isn't open, open it.

ls hangs for large directory enumeration on some kernels


Cause: A bug was introduced in Linux kernel v5.11 and was fixed in v5.12.5.
Some kernel versions have a bug that causes directory listings to result in an endless READDIR sequence. Very
small directories where all entries can be shipped in one call won't have the problem. The bug was introduced in
Linux kernel v5.11 and was fixed in v5.12.5. So anything in between has the bug. RHEL 8.4 is known to have this
kernel version.
Workaround: Downgrading or upgrading the kernel
Downgrading or upgrading the kernel to any version outside the affected range will resolve the issue.

Need help? Contact support.


If you still need help, contact support to get your problem resolved quickly.
Troubleshoot problems while backing up Azure file
shares

This article provides troubleshooting information to address any issues you come across while configuring
backup or restoring Azure file shares using the Azure Backup Service.

Common configuration issues


Could not find my storage account to configure backup for the Azure file share
Wait until discovery is complete.
Check if any file share under the storage account is already protected with another Recovery Services
vault.

NOTE
All file shares in a Storage Account can be protected only under one Recovery Services vault. You can use this
script to find the Recovery Services vault where your storage account is registered.

Ensure that the file share isn't present in any of the unsupported Storage Accounts. You can refer to the
Support matrix for Azure file share backup to find supported Storage Accounts.
Ensure that the storage account and recovery services vault are present in the same region.
Ensure that the combined length of the storage account name and the resource group name don't exceed
84 characters in the case of new Storage accounts and 77 characters in the case of classic storage
accounts.
Check the firewall settings of the storage account to ensure that the exception "Allow Azure services on the
trusted services list to access this storage account" is granted. You can refer to this link for the steps to grant
the exception.
Error in portal states discovery of storage accounts failed
If you have a partner subscription (CSP-enabled), ignore the error. If your subscription isn't CSP-enabled, and
your storage accounts can't be discovered, contact support.
Selected storage account validation or registration failed
Retry the registration. If the problem persists, contact support.
Could not list or find file shares in the selected storage account
Ensure that the Storage Account exists in the Resource Group and hasn't been deleted or moved after the last
validation or registration in the vault.
Ensure that the file share you're looking to protect hasn't been deleted.
Ensure that the Storage Account is a supported storage account for file share backup. You can refer to the
Support matrix for Azure file share backup to find supported Storage Accounts.
Check if the file share is already protected in the same Recovery Services vault.
Check the Network Routing setting of the storage account to ensure that the routing preference is set to Microsoft
network routing.
Backup file share configuration (or the protection policy configuration) is failing
Retry the configuration to see if the issue persists.
Ensure that the file share you want to protect hasn't been deleted.
If you're trying to protect multiple file shares at once, and some of the file shares are failing, try configuring
backup for the failed file shares again.
Unable to delete the Recovery Services vault after unprotecting a file share
In the Azure portal, open your Vault > Backup Infrastructure > Storage accounts . Select Unregister to
remove the storage accounts from the Recovery Services vault.

NOTE
A Recovery Services vault can only be deleted after unregistering all storage accounts registered with the vault.

Common backup or restore errors


NOTE
Refer to this document to ensure you have sufficient permissions for performing backup or restore operations.

FileShareNotFound- Operation failed as the file share is not found


Error Code: FileShareNotFound
Error Message: Operation failed as the file share is not found
Ensure that the file share you're trying to protect hasn't been deleted.
UserErrorFileShareEndpointUnreachable - Storage account not found or not supported
Error Code: UserErrorFileShareEndpointUnreachable
Error Message: Storage account not found or not supported
Ensure that the storage account exists in the Resource Group and wasn't deleted or removed from the
Resource Group after the last validation.
Ensure that the Storage account is a supported Storage account for file share backup.
AFSMaxSnapshotReached- You have reached the max limit of snapshots for this file share; you will be able to
take more once the older ones expire
Error Code: AFSMaxSnapshotReached
Error Message: You have reached the max limit of snapshots for this file share; you will be able to take more
once the older ones expire.
This error can occur when you create multiple on-demand backups for a file share.
There's a limit of 200 snapshots per file share including the ones taken by Azure Backup. Older scheduled
backups (or snapshots) are cleaned up automatically. On-demand backups (or snapshots) must be deleted if
the maximum limit is reached.
Delete the on-demand backups (Azure file share snapshots) from the Azure Files portal.

NOTE
You lose the recovery points if you delete snapshots created by Azure Backup.
UserErrorStorageAccountNotFound- Operation failed as the specified storage account does not exist
anymore
Error Code: UserErrorStorageAccountNotFound
Error Message: Operation failed as the specified storage account does not exist anymore.
Ensure that the storage account still exists and isn't deleted.
UserErrorDTSStorageAccountNotFound- The storage account details provided are incorrect
Error Code: UserErrorDTSStorageAccountNotFound
Error Message: The storage account details provided are incorrect.
Ensure that the storage account still exists and isn't deleted.
UserErrorResourceGroupNotFound- Resource group doesn't exist
Error Code: UserErrorResourceGroupNotFound
Error Message: Resource group doesn't exist
Select an existing resource group or create a new resource group.
ParallelSnapshotRequest- A backup job is already in progress for this file share
Error Code: ParallelSnapshotRequest
Error Message: A backup job is already in progress for this file share.
File share backup doesn't support parallel snapshot requests against the same file share.
Wait for the existing backup job to finish and then try again. If you can’t find a backup job in the Recovery
Services vault, check other Recovery Services vaults in the same subscription.
UserErrorStorageAccountInternetRoutingNotSupported- Storage accounts with Internet routing
configuration are not supported by Azure Backup
Error Code: UserErrorStorageAccountInternetRoutingNotSupported
Error Message: Storage accounts with Internet routing configuration are not supported by Azure Backup
Ensure that the routing preference set for the storage account hosting backed up file share is Microsoft network
routing.
FileshareBackupFailedWithAzureRpRequestThrottling/
FileshareRestoreFailedWithAzureRpRequestThrottling- File share backup or restore failed due to storage
service throttling. This may be because the storage service is busy processing other requests for the given
storage account
Error Code: FileshareBackupFailedWithAzureRpRequestThrottling/
FileshareRestoreFailedWithAzureRpRequestThrottling
Error Message: File share backup or restore failed due to storage service throttling. This may be because the
storage service is busy processing other requests for the given storage account.
Try the backup/restore operation at a later time.
TargetFileShareNotFound- Target file share not found
Error Code: TargetFileShareNotFound
Error Message: Target file share not found.
Ensure that the selected Storage Account exists, and the target file share isn't deleted.
Ensure that the Storage Account is a supported storage account for file share backup.
UserErrorStorageAccountIsLocked- Backup or restore jobs failed due to storage account being in locked
state
Error Code: UserErrorStorageAccountIsLocked
Error Message: Backup or restore jobs failed due to storage account being in locked state.
Remove the lock on the Storage Account or use delete lock instead of read lock and retry the backup or
restore operation.
DataTransferServiceCoFLimitReached- Recovery failed because number of failed files are more than the
threshold
Error Code: DataTransferServiceCoFLimitReached
Error Message: Recovery failed because number of failed files are more than the threshold.
Recovery failure reasons are listed in a file (path provided in the job details). Address the failures and
retry the restore operation for the failed files only.
Common reasons for file restore failures:
files that failed are currently in use
a directory with the same name as the failed file exists in the parent directory.
DataTransferServiceAllFilesFailedToRecover- Recovery failed as no file could be recovered
Error Code: DataTransferServiceAllFilesFailedToRecover
Error Message: Recovery failed as no file could be recovered.
Recovery failure reasons are listed in a file (path provided in the job details). Address the failures and
retry the restore operations for the failed files only.
Common reasons for file restore failures:
files that failed are currently in use
a directory with the same name as the failed file exists in the parent directory.
UserErrorDTSSourceUriNotValid - Restore fails because one of the files in the source does not exist
Error Code: DataTransferServiceSourceUriNotValid
Error Message: Restore fails because one of the files in the source does not exist.
The selected items aren't present in the recovery point data. To recover the files, provide the correct file list.
The file share snapshot that corresponds to the recovery point is manually deleted. Select a different
recovery point and retry the restore operation.
UserErrorDTSDestLocked- A recovery job is in process to the same destination
Error Code: UserErrorDTSDestLocked
Error Message: A recovery job is in process to the same destination.
File share backup doesn't support parallel recovery to the same target file share.
Wait for the existing recovery to finish and then try again. If you can’t find a recovery job in the Recovery
Services vault, check other Recovery Services vaults in the same subscription.
UserErrorTargetFileShareFull- Restore operation failed as target file share is full
Error code: UserErrorTargetFileShareFull
Error Message: Restore operation failed as target file share is full.
Increase the target file share size quota to accommodate the restore data and retry the restore operation.
UserErrorTargetFileShareQuotaNotSufficient- Target file share does not have sufficient storage size quota
for restore
Error Code: UserErrorTargetFileShareQuotaNotSufficient
Error Message: Target File share does not have sufficient storage size quota for restore
Increase the target file share size quota to accommodate the restore data and retry the operation
File Sync PreRestoreFailed- Restore operation failed as an error occurred while performing pre restore
operations on File Sync Service resources associated with the target file share
Error Code: File Sync PreRestoreFailed
Error Message: Restore operation failed as an error occurred while performing pre restore operations on File
Sync Service resources associated with the target file share.
Try restoring the data at a later time. If the issue persists, contact Microsoft support.
AzureFileSyncChangeDetectionInProgress- Azure File Sync Service change detection is in progress for the
target file share. The change detection was triggered by a previous restore to the target file share
Error Code: AzureFileSyncChangeDetectionInProgress
Error Message: Azure File Sync Service change detection is in progress for the target file share. The change
detection was triggered by a previous restore to the target file share.
Use a different target file share. Alternatively, you can wait for Azure File Sync Service change detection to
complete for the target file share before retrying the restore.
UserErrorAFSRecoverySomeFilesNotRestored - One or more files could not be recovered successfully. For more information, check the failed file list in the path given above
Error Code: UserErrorAFSRecoverySomeFilesNotRestored
Error Message: One or more files could not be recovered successfully. For more information, check the failed file
list in the path given above.
Recovery failure reasons are listed in the file (path provided in the Job details). Address the reasons and
retry the restore operation for the failed files only.
Common reasons for file restore failures:
The files that failed are currently in use.
A directory with the same name as the failed file exists in the parent directory.
UserErrorAFSSourceSnapshotNotFound - Azure file share snapshot corresponding to recovery point cannot be found
Error Code: UserErrorAFSSourceSnapshotNotFound
Error Message: Azure file share snapshot corresponding to recovery point cannot be found
Ensure that the file share snapshot, corresponding to the recovery point you're trying to use for recovery,
still exists.

NOTE
If you delete a file share snapshot that was created by Azure Backup, the corresponding recovery points become
unusable. We recommend not deleting snapshots to ensure guaranteed recovery.

Try selecting another restore point to recover your data.


UserErrorAnotherRestoreInProgressOnSameTarget - Another restore job is in progress on the same target file share
Error Code: UserErrorAnotherRestoreInProgressOnSameTarget
Error Message: Another restore job is in progress on the same target file share
Use a different target file share. Alternatively, you can cancel or wait for the other restore to complete.

Common modify policy errors


BMSUserErrorConflictingProtectionOperation - Another configure protection operation is in progress for this item
Error Code: BMSUserErrorConflictingProtectionOperation
Error Message: Another configure protection operation is in progress for this item.
Wait for the previous modify policy operation to finish and retry at a later time.
BMSUserErrorObjectLocked - Another operation is in progress on the selected item
Error Code: BMSUserErrorObjectLocked
Error Message: Another operation is in progress on the selected item.
Wait for the other in-progress operation to complete and retry at a later time.

Common soft delete-related errors


UserErrorRestoreAFSInSoftDeleteState - This restore point is not available as the snapshot associated with
this point is in a File Share that is in soft-deleted state
Error Code: UserErrorRestoreAFSInSoftDeleteState
Error Message: This restore point is not available as the snapshot associated with this point is in a File Share that
is in soft-deleted state.
You can't perform a restore operation when the file share is in a soft-deleted state. Undelete the file share from the Files portal or by using the Undelete script, and then try to restore.
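If you'd rather script the undelete than use the portal or the provided Undelete script, a minimal sketch using the Azure Files Python client library (azure-storage-file-share) is shown below; the connection string and share name are placeholders.

```python
from azure.storage.fileshare import ShareServiceClient

service = ShareServiceClient.from_connection_string(
    "<storage-account-connection-string>"
)

# Find the soft-deleted share and restore it, then retry the restore job.
for share in service.list_shares(include_deleted=True):
    if share.name == "<backed-up-share>" and share.deleted:
        service.undelete_share(share.name, share.version)
```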
UserErrorRestoreAFSInDeleteState - Listed restore points are not available as the associated file share
containing the restore point snapshots has been deleted permanently
Error Code: UserErrorRestoreAFSInDeleteState
Error Message: Listed restore points are not available as the associated file share containing the restore point
snapshots has been deleted permanently.
Check whether the backed-up file share is deleted. If it was in a soft-deleted state, check whether the soft delete retention period has expired without the share being recovered. In either case, the snapshots are permanently lost and the data can't be recovered.

NOTE
We recommend that you don't delete the backed-up file share. If it's in a soft-deleted state, undelete it before the soft delete retention period ends to avoid losing all your restore points.

UserErrorBackupAFSInSoftDeleteState - Backup failed as the Azure File Share is in soft-deleted state


Error Code: UserErrorBackupAFSInSoftDeleteState
Error Message: Backup failed as the Azure File Share is in soft-deleted state
Undelete the file share from the Files portal or by using the Undelete script to continue the backup and prevent permanent deletion of data.
UserErrorBackupAFSInDeleteState - Backup failed as the associated Azure File Share is permanently deleted
Error Code: UserErrorBackupAFSInDeleteState
Error Message: Backup failed as the associated Azure File Share is permanently deleted
Check whether the backed-up file share is permanently deleted. If it is, stop the backup for the file share to avoid repeated backup failures. To learn how to stop protection, see Stop Protection for Azure file share.

Next steps
For more information about backing up Azure file shares, see:
Back up Azure file shares
Back up Azure file share FAQ
Azure Files API reference

Find Azure Files API reference, library packages, readme files, and getting started articles.

.NET client libraries


The following table lists reference information for Azure Files .NET APIs.

Version | Reference documentation | Package | Quickstart
12.x | Azure Files client library v12 for .NET | Package (NuGet) | -
11.x | Microsoft.Azure.Storage.File Namespace | Package (NuGet) | Develop for Azure Files with .NET

Storage management .NET APIs


The following table lists reference information for Azure Storage management .NET APIs.

Version | Reference documentation | Package
16.x | Microsoft.Azure.Management.Storage | Package (NuGet)

Data movement .NET APIs


The following table lists reference information for Azure Storage data movement .NET APIs.

Version | Reference documentation | Package
1.x | Data movement | Package (NuGet)

Java client libraries


The following table lists reference information for Azure Files Java APIs.

Version | Reference documentation | Package | Quickstart
12.x | Azure Files client library for Java | Package (Maven) | -
8.x | com.microsoft.azure.storage.file | Package (Maven) | Develop for Azure Files with Java

Storage management Java APIs


The following table lists reference information for Azure Storage management Java APIs.
Version | Reference documentation | Package
0.9.x | com.microsoft.azure.management.storage | Package (Maven)

Python client libraries


The following table lists reference information for Azure Files Python APIs.

Version | Reference documentation | Package | Quickstart
12.x | Azure Storage client libraries v12 for Python | Package (PyPI) | Examples
2.x | Azure Storage client libraries v2 for Python | Package (PyPI) | Develop for Azure Files with Python
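As a quick, hedged illustration of the v12 Python library listed above, the following sketch creates a share and uploads a file; the connection string, share name, and file name are placeholder values.

```python
from azure.storage.fileshare import ShareClient

# Placeholder connection string and share name.
share = ShareClient.from_connection_string(
    conn_str="<storage-account-connection-string>", share_name="sample-share"
)
share.create_share()

# Upload a local file into the root of the share.
file_client = share.get_file_client("report.txt")
with open("report.txt", "rb") as source:
    file_client.upload_file(source)
```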

JavaScript client libraries


The following table lists reference information for Azure Files JavaScript APIs.

Version | Reference documentation | Package | Quickstart
12.x | Azure Files client library for JavaScript | Package (npm) | Examples
10.x | @azure/storage-file | Package (npm) | -

C++ client libraries


The following table lists reference information for Azure Files C++ APIs.

Version | Reference documentation | Source code/Readme | Quickstart
12.x | Azure SDK for C++ APIs | Library source code | Develop for Azure Files with C++

REST APIs
The following table lists reference information for Azure Files REST APIs.

Reference documentation | Overview
File service REST API | File service concepts

Other REST reference


The Azure Storage import-export REST API helps you manage import/export jobs to transfer data to or from Blob storage.

Other languages and platforms


The following list contains links to libraries for other programming languages and platforms.
Ruby
PHP
iOS
Android

Azure PowerShell
Azure PowerShell reference

Azure CLI
Azure CLI reference
Azure Files monitoring data reference

See Monitoring Azure Files for details on collecting and analyzing monitoring data for Azure Files.

Applies to
File share type | SMB | NFS
Standard file shares (GPv2), LRS/ZRS | Yes | No
Standard file shares (GPv2), GRS/GZRS | Yes | No
Premium file shares (FileStorage), LRS/ZRS | Yes | Yes

Metrics
The following tables list the platform metrics collected for Azure Files.
Capacity metrics
Capacity metric values are refreshed daily (up to 24 hours). The time grain defines the time interval for which metric values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
Azure Files provides the following capacity metrics in Azure Monitor.
Account Level
This table shows account-level metrics.

UsedCapacity: The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity.
Unit: Bytes
Aggregation Type: Average
Value example: 1024

Azure Files
This table shows Azure Files metrics.

FileCapacity: The amount of File storage used by the storage account.
Unit: Bytes
Aggregation Type: Average
Value example: 1024

FileCount: The number of files in the storage account.
Unit: Count
Aggregation Type: Average
Value example: 1024

FileShareCapacityQuota: The upper limit on the amount of storage that can be used by the Azure Files service, in bytes.
Unit: Bytes
Aggregation Type: Average
Value example: 1024

FileShareCount: The number of file shares in the storage account.
Unit: Count
Aggregation Type: Average
Value example: 1024

FileShareProvisionedIOPS: The number of provisioned IOPS on a file share. This metric is applicable to premium file storage only.
Unit: CountPerSecond
Aggregation Type: Average

FileShareSnapshotCount: The number of snapshots present on the share in the storage account's Azure Files service.
Unit: Count
Aggregation Type: Average

FileShareSnapshotSize: The amount of storage used by the snapshots in the storage account's Azure Files service.
Unit: Bytes
Aggregation Type: Average

Transaction metrics
Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. If there is no activity on your storage account, there is no transaction metric data for that period. All transaction metrics are available at both the account and Azure Files service level. The time grain defines the time interval for which metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
Azure Storage provides the following transaction metrics in Azure Monitor.

Transactions: The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests that produced errors.
Unit: Count
Aggregation Type: Total
Applicable dimensions: ResponseType, GeoType, ApiName, and Authentication (Definition)
Value example: 1024

Ingress: The amount of ingress data. This number includes ingress from an external client into Azure Storage as well as ingress within Azure.
Unit: Bytes
Aggregation Type: Total
Applicable dimensions: GeoType, ApiName, and Authentication (Definition)
Value example: 1024

Egress: The amount of egress data. This number includes egress to external clients from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.
Unit: Bytes
Aggregation Type: Total
Applicable dimensions: GeoType, ApiName, and Authentication (Definition)
Value example: 1024

SuccessServerLatency: The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.
Unit: Milliseconds
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and Authentication (Definition)
Value example: 1024

SuccessE2ELatency: The average end-to-end latency of successful requests made to a storage service or the specified API operation. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response. The difference between the SuccessE2ELatency and SuccessServerLatency values is the latency likely caused by the network and the client.
Unit: Milliseconds
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and Authentication (Definition)
Value example: 1024

Availability: The percentage of availability for the storage service or the specified API operation. Availability is calculated by taking the total billable requests value and dividing it by the number of applicable requests, including those requests that produced unexpected errors. All unexpected errors result in reduced availability for the storage service or the specified API operation.
Unit: Percent
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and Authentication (Definition)
Value example: 99.99
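These metrics can also be read programmatically. The following minimal sketch uses the azure-monitor-query Python package to pull the Transactions metric; the subscription, resource group, and account names in the resource ID are placeholders, and it assumes the signed-in identity has permission to read metrics on the resource.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Resource ID of the storage account's file service; all names are placeholders.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default"
)

# Pull the Transactions metric at one-minute granularity for the last hour.
response = client.query_resource(
    resource_id,
    metric_names=["Transactions"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```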

Metrics dimensions
Azure Files supports the following dimensions for metrics in Azure Monitor.

NOTE
The File Share dimension is not available for standard file shares (only premium file shares). When using standard file
shares, the metrics provided are for all files shares in the storage account. To get per-share metrics for standard file shares,
create one file share per storage account.

GeoType: Transaction from the primary or secondary cluster. The available values include Primary and Secondary. It applies to Read Access Geo-Redundant Storage (RA-GRS) when reading objects from the secondary tenant.

ResponseType: Transaction response type. The available values include:
ServerOtherError: All other server-side errors except described ones.
ServerBusyError: Authenticated request that returned an HTTP 503 status code.
ServerTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. The timeout occurred due to a server error.
AuthenticationError: The request failed to be authenticated by the server.
AuthorizationError: Authenticated request that failed due to unauthorized access of data or an authorization failure.
NetworkError: Authenticated request that failed due to network errors. Most commonly occurs when a client prematurely closes a connection before timeout expiration.
ClientAccountBandwidthThrottlingError: The request is throttled on bandwidth for exceeding storage account scalability limits.
ClientAccountRequestThrottlingError: The request is throttled on request rate for exceeding storage account scalability limits.
ClientThrottlingError: Other client-side throttling error. ClientAccountBandwidthThrottlingError and ClientAccountRequestThrottlingError are excluded.
ClientShareEgressThrottlingError: Applicable to premium file shares only. Other client-side throttling error. The request failed due to egress bandwidth throttling for exceeding share limits. ClientAccountBandwidthThrottlingError is excluded.
ClientShareIngressThrottlingError: Applicable to premium file shares only. Other client-side throttling error. The request failed due to ingress bandwidth throttling for exceeding share limits. ClientAccountBandwidthThrottlingError is excluded.
ClientShareIopsThrottlingError: Other client-side throttling error. The request failed due to IOPS throttling. ClientAccountRequestThrottlingError is excluded.
ClientTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. If the client's network timeout or the request timeout is set to a lower value than expected by the storage service, it is an expected timeout. Otherwise, it is reported as a ServerTimeoutError.
ClientOtherError: All other client-side errors except described ones.
Success: Successful request.
SuccessWithThrottling: Successful request when an SMB client gets throttled in the first attempt(s) but succeeds after retries.
SuccessWithShareEgressThrottling: Applicable to premium file shares only. Successful request when an SMB client gets throttled due to egress bandwidth throttling in the first attempt(s) but succeeds after retries.
SuccessWithShareIngressThrottling: Applicable to premium file shares only. Successful request when an SMB client gets throttled due to ingress bandwidth throttling in the first attempt(s) but succeeds after retries.
SuccessWithShareIopsThrottling: Successful request when an SMB client gets throttled due to IOPS throttling in the first attempt(s) but succeeds after retries.

ApiName: The name of the operation. If a failure occurs before the name of the operation is identified, the name appears as "Unknown". You can use the value of the ResponseType dimension to learn more about the failure.

Authentication: Authentication type used in transactions. The available values include:
AccountKey: The transaction is authenticated with the storage account key.
SAS: The transaction is authenticated with shared access signatures.
OAuth: The transaction is authenticated with OAuth access tokens.
Anonymous: The transaction is requested anonymously. It doesn't include preflight requests.
AnonymousPreflight: The transaction is a preflight request.

TransactionType: Type of transaction. The available values include:
User: The transaction was made by the customer.
System: The transaction was made by a system process.

Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor
Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was
used to perform the operation.
Fields that describe the operation
time: The Universal Time Coordinated (UTC) time when the request was received by storage. For example: 2018/11/08 21:09:36.6900118.
resourceId: The resource ID of the storage account. For example: /subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/storageAccounts/blobServices/default
category: The category of the requested operation. For example: StorageRead, StorageWrite, or StorageDelete.
operationName: The type of REST operation that was performed. For a complete list of operations, see the Storage Analytics Logged Operations and Status Messages topic.
operationVersion: The storage service version that was specified when the request was made. This is equivalent to the value of the x-ms-version header. For example: 2017-04-17.
schemaVersion: The schema version of the log. For example: 1.0.
statusCode: The HTTP or SMB status code for the request. If the HTTP request is interrupted, this value might be set to Unknown. For example: 206.
statusText: The status of the requested operation. For a complete list of status messages, see the Storage Analytics Logged Operations and Status Messages topic. In version 2017-04-17 and later, the status message ClientOtherError isn't used. Instead, this field contains an error code. For example: SASSuccess.
durationMs: The total time, expressed in milliseconds, to perform the requested operation. This includes the time to read the incoming request and to send the response to the requester. For example: 12.
callerIpAddress: The IP address of the requester, including the port number. For example: 192.100.0.102:4362.
correlationId: The ID that is used to correlate logs across resources. For example: b99ba45e-a01e-0042-4ea6-772bbb000000.
location: The location of the storage account. For example: North Europe.
protocol: The protocol that is used in the operation. For example: HTTP, HTTPS, SMB, or NFS.
uri: The uniform resource identifier that is requested.

Fields that describe how the operation was authenticated


identity / type: The type of authentication that was used to make the request. For example: OAuth, Kerberos, SAS Key, Account Key, or Anonymous.
identity / tokenHash: The SHA-256 hash of the authentication token used on the request. When the authentication type is Account Key, the format is "key1 | key2 (SHA-256 hash of the key)". For example: key1(5RTE343A6FEB12342672AFD40072B70D4A91BGH5CDF797EC56BF82B2C3635CE). When the authentication type is SAS Key, the format is "key1 | key2 (SHA-256 hash of the key),SasSignature(SHA-256 hash of the SAS token)". For example: key1(0A0XE8AADA354H19722ED12342443F0DC8FAF3E6GF8C8AD805DE6D563E0E5F8A),SasSignature(04D64C2B3A704145C9F1664F201). When the authentication type is OAuth, the format is "SHA-256 hash of the OAuth token". For example: B3CC9D5C64B3351573D806751312317FE4E910877E7CBAFA9D95E0BE923DD25C. For other authentication types, there is no tokenHash field.
authorization / action: The action that is assigned to the request.
authorization / roleAssignmentId: The role assignment ID. For example: 4e2521b7-13be-4363-aeda-111111111111.
authorization / roleDefinitionId: The role definition ID. For example: ba92f5b4-2d11-453d-a403-111111111111.
principals / id: The ID of the security principal. For example: a4711f3a-254f-4cfb-8a2d-111111111111.
principals / type: The type of security principal. For example: ServicePrincipal.
requester / appID: The Open Authorization (OAuth) application ID that is used as the requester. For example: d3f7d5fe-e64a-4e4e-871d-333333333333.
requester / audience: The OAuth audience of the request. For example: https://storage.azure.com.
requester / objectId: The OAuth object ID of the requester. In the case of Kerberos authentication, this represents the object identifier of the Kerberos-authenticated user. For example: 0e0bf547-55e5-465c-91b7-2873712b249c.
requester / tenantId: The OAuth tenant ID of the identity. For example: 72f988bf-86f1-41af-91ab-222222222222.
requester / tokenIssuer: The OAuth token issuer. For example: https://sts.windows.net/72f988bf-86f1-41af-91ab-222222222222/.
requester / upn: The User Principal Name (UPN) of the requester. For example: someone@contoso.com.
requester / userName: This field is reserved for internal use only.

Fields that describe the service


accountName: The name of the storage account. For example: mystorageaccount.
requestUrl: The URL that is requested.
userAgentHeader: The User-Agent header value, in quotes. For example: "WA-Storage/6.2.0 (.NET CLR 4.0.30319.42000; Win32NT 6.2.9200.0)".
referrerHeader: The Referrer header value. For example: http://contoso.com/about.html.
clientRequestId: The x-ms-client-request-id header value of the request. For example: 360b66a6-ad4f-4c4a-84a4-0ad7cb44f7a6.
etag: The ETag identifier for the returned object, in quotes. For example: 0x8D101F7E4B662C4.
serverLatencyMs: The total time expressed in milliseconds to perform the requested operation. This value doesn't include network latency (the time to read the incoming request and send the response to the requester). For example: 22.
serviceType: The service associated with this request. For example: blob, table, files, or queue.
operationCount: The number of each logged operation that is involved in the request. This count starts with an index of 0. Some requests require more than one operation; most requests perform only one operation. For example: 1.
requestHeaderSize: The size of the request header, expressed in bytes. For example: 578. If a request is unsuccessful, this value might be empty.
requestBodySize: The size of the request packets, expressed in bytes, that are read by the storage service. For example: 0. If a request is unsuccessful, this value might be empty.
responseHeaderSize: The size of the response header, expressed in bytes. For example: 216. If a request is unsuccessful, this value might be empty.
responseBodySize: The size of the response packets written by the storage service, in bytes. For example: 216. If a request is unsuccessful, this value may be empty.
requestMd5: The value of either the Content-MD5 header or the x-ms-content-md5 header in the request. The MD5 hash value specified in this field represents the content in the request. For example: 788815fd0198be0d275ad329cafd1830. This field can be empty.
serverMd5: The value of the MD5 hash calculated by the storage service. For example: 3228b3cf1069a5489b298446321f8521. This field can be empty.
lastModifiedTime: The Last Modified Time (LMT) for the returned object. For example: Tuesday, 09-Aug-11 21:13:26 GMT. This field is empty for operations that can return multiple objects.
conditionsUsed: A semicolon-separated list of key-value pairs that represent a condition. The conditions can be any of the following: If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match. For example: If-Modified-Since=Friday, 05-Aug-11 19:11:54 GMT.
contentLengthHeader: The value of the Content-Length header for the request sent to the storage service. If the request was successful, this value is equal to requestBodySize. If a request is unsuccessful, this value may not be equal to requestBodySize, or it might be empty.
tlsVersion: The TLS version used in the connection of the request. For example: TLS 1.2.
smbTreeConnectID: The Server Message Block (SMB) treeConnectId established at tree connect time. For example: 0x3.
smbPersistentHandleID: The persistent handle ID from an SMB2 CREATE request that survives network reconnects. Referenced in MS-SMB2 2.2.14.1 as SMB2_FILEID.Persistent. For example: 0x6003f.
smbVolatileHandleID: The volatile handle ID from an SMB2 CREATE request that is recycled on network reconnects. Referenced in MS-SMB2 2.2.14.1 as SMB2_FILEID.Volatile. For example: 0xFFFFFFFF00000065.
smbMessageID: The connection-relative MessageId. For example: 0x3b165.
smbCreditsConsumed: The ingress or egress consumed by the request, in units of 64k. For example: 0x3.
smbCommandDetail: More information about this specific request rather than the general type of request. For example: 0x2000 bytes at offset 0xf2000.
smbFileId: The FileId associated with the file or directory. Roughly analogous to an NTFS FileId. For example: 0x9223442405598953.
smbSessionID: The SMB2 SessionId established at session setup time. For example: 0x8530280128000049.
smbCommandMajor: The uint32 value in SMB2_HEADER.Command. Currently, this is a number between 0 and 18 inclusive. For example: 0x6.
smbCommandMinor: The subclass of SmbCommandMajor, where appropriate. For example: DirectoryCloseAndDelete.
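If you route these resource logs to a Log Analytics workspace, you can query them with the azure-monitor-query Python package. The following minimal sketch summarizes Azure Files operations by status; the workspace ID is a placeholder, and the StorageFileLogs table assumes diagnostic settings for Azure Files are enabled on the account.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Summarize Azure Files operations by status over the last day.
query = """
StorageFileLogs
| summarize Count = count() by StatusText, OperationName
| order by Count desc
"""

response = client.query_workspace(
    workspace_id="<workspace-id>", query=query, timespan=timedelta(days=1)
)

for table in response.tables:
    for row in table.rows:
        print(row)
```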

See also
See Monitoring Azure Files for a description of monitoring Azure Files.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Azure Files and Azure NetApp Files comparison

This article provides a comparison of Azure Files and Azure NetApp Files.
Most workloads that require cloud file storage work well on either Azure Files or Azure NetApp Files. To help determine the best fit for your workload, review the information provided in this article. For more information, see the Azure Files and Azure NetApp Files documentation and the Shared storage for all enterprise file workloads session, which covers choosing between Azure Files and Azure NetApp Files.

Features
Description
Azure Files: Azure Files is a fully managed, highly available, enterprise-grade service that is optimized for random access workloads with in-place data updates. Azure Files is built on the same Azure storage platform as other services like Azure Blobs.
Azure NetApp Files: Azure NetApp Files is a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. It enables the migration of workloads that would otherwise be deemed "un-migratable". ANF is built on NetApp's bare metal with the ONTAP storage OS running inside the Azure datacenter, for a consistent Azure experience and on-premises-like performance.

Protocols
Azure Files: Premium: SMB 2.1, 3.0, 3.1.1; NFSv4.1; REST. Standard: SMB 2.1, 3.0, 3.1.1; REST. To learn more, see available file share protocols.
Azure NetApp Files: All tiers: SMB 2.1, 3.x (optionally including SMB Continuous Availability); NFSv3, NFSv4.1; dual-protocol access (NFSv3/SMB and NFSv4.1/SMB). To learn more, see how to create NFS, SMB, or dual-protocol volumes.

Region Availability
Azure Files: Premium: 30+ regions. Standard: all regions. To learn more, see Products available by region.
Azure NetApp Files: All tiers: 35+ regions. To learn more, see Products available by region.

Redundancy
Azure Files: Premium: LRS, ZRS. Standard: LRS, ZRS, GRS, GZRS. To learn more, see redundancy.
Azure NetApp Files: All tiers: built-in local HA, cross-region replication.

Service-Level Agreement (SLA)
Azure Files: SLA for Azure Files.
Azure NetApp Files: SLA for Azure NetApp Files.
Note that SLAs for Azure Files and Azure NetApp Files are calculated differently.

Identity-Based Authentication and Authorization
Azure Files: SMB: Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS). Note that identity-based authentication is only supported when using the SMB protocol. To learn more, see the FAQ.
Azure NetApp Files: SMB: Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS). NFS/SMB dual protocol: AD DS/LDAP integration, AD DS/LDAP over TLS. NFSv3/NFSv4.1: AD DS/LDAP integration with NFS extended groups. To learn more, see the Azure NetApp Files NFS FAQ and the Azure NetApp Files SMB FAQ.

Encryption
Azure Files: All protocols: encryption at rest (AES-256) with customer- or Microsoft-managed keys. SMB: Kerberos encryption using AES-256 (recommended) or RC4-HMAC, and encryption in transit. REST: encryption in transit. To learn more, see Security and networking.
Azure NetApp Files: All protocols: encryption at rest (AES-256) with Microsoft-managed keys. SMB: encryption in transit using AES-CCM (SMB 3.0) and AES-GCM (SMB 3.1.1). NFSv4.1: encryption in transit using Kerberos with AES-256. To learn more, see the security FAQ.

Access Options
Azure Files: Internet, secure VNet access, VPN Gateway, ExpressRoute, Azure File Sync. To learn more, see network considerations.
Azure NetApp Files: Secure VNet access, VPN Gateway, ExpressRoute, Global File Cache, HPC Cache, Standard network features. To learn more, see network considerations.

Data Protection
Azure Files: Incremental snapshots, file/directory user self-restore, restore to new location, in-place revert, share-level soft delete, Azure Backup integration. To learn more, see how Azure Files enhances data protection capabilities.
Azure NetApp Files: Azure NetApp Files backup, snapshots (255/volume), file/directory user self-restore, restore to new volume, in-place revert, cross-region replication. To learn more, see How Azure NetApp Files snapshots work.

Migration Tools
Azure Files: Azure Data Box, Azure File Sync, Storage Migration Service, AzCopy, Robocopy. To learn more, see Migrate to Azure file shares.
Azure NetApp Files: Global File Cache, CloudSync, XCP, Storage Migration Service, AzCopy, Robocopy, application-based tools (for example, HSR, Data Guard, AOAG).

Tiers
Azure Files: Premium, Transaction Optimized, Hot, Cool. To learn more, see storage tiers.
Azure NetApp Files: Ultra, Premium, Standard. All tiers provide sub-millisecond minimum latency. To learn more, see Service Levels and Performance Considerations.

Pricing
Azure Files: Azure Files pricing.
Azure NetApp Files: Azure NetApp Files pricing.

Scalability and performance


Minimum Share/Volume Size
Azure Files: Premium: 100 GiB. Standard: no minimum.
Azure NetApp Files: All tiers: 100 GiB (minimum capacity pool size: 4 TiB).

Maximum Share/Volume Size
Azure Files: 100 TiB.
Azure NetApp Files: All tiers: 100 TiB (500 TiB capacity pool limit); up to 12.5 PiB per Azure NetApp account.

Maximum Share/Volume IOPS
Azure Files: Premium: up to 100k. Standard: up to 20k.
Azure NetApp Files: Ultra and Premium: up to 450k. Standard: up to 320k.

Maximum Share/Volume Throughput
Azure Files: Premium: up to 10 GiB/s. Standard: up to 300 MiB/s.
Azure NetApp Files: Ultra and Premium: up to 4.5 GiB/s. Standard: up to 3.2 GiB/s.

Maximum File Size
Azure Files: 4 TiB.
Azure NetApp Files: 16 TiB.

Maximum IOPS Per File
Azure Files: Premium: up to 8,000. Standard: 1,000.
Azure NetApp Files: All tiers: up to the volume limit.

Maximum Throughput Per File
Azure Files: Premium: 300 MiB/s (up to 1 GiB/s with SMB Multichannel). Standard: 60 MiB/s.
Azure NetApp Files: All tiers: up to the volume limit.

SMB Multichannel
Azure Files: Yes.
Azure NetApp Files: Yes.

Latency
Azure Files: Single-millisecond minimum latency (2 ms to 3 ms for small IO).
Azure NetApp Files: Sub-millisecond minimum latency (<1 ms for random IO). To learn more, see performance benchmarks.
For more information on scalability and performance targets, see Azure Files and Azure NetApp Files.

Next Steps
Azure Files documentation
Azure NetApp Files documentation
Shared storage for all enterprise file-workloads session
Compare access to Azure Files, Blob Storage, and
Azure NetApp Files with NFS
5/20/2022 • 2 minutes to read • Edit Online

This article compares these offerings when you access them through the Network File System (NFS) protocol. The comparison doesn't apply if you access them through any other method. For more general comparisons, see this article to compare Azure Blob Storage and Azure Files, or this article to compare Azure Files and Azure NetApp Files.

Comparison
Use cases
Azure Blob Storage: Blob Storage is best suited for large-scale, read-heavy, sequential access workloads where data is ingested once and minimally modified further. Blob Storage offers the lowest total cost of ownership if there is little or no maintenance. Some example scenarios are: large-scale analytical data, throughput-sensitive high-performance computing, backup and archive, autonomous driving, media rendering, or genomic sequencing.
Azure Files: Azure Files is a highly available service best suited for random access workloads. For NFS shares, Azure Files provides full POSIX file system support and can easily be used from container platforms like Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) with the built-in CSI driver, in addition to VM-based platforms. Some example scenarios are: shared files, databases, home directories, traditional applications, ERP, CMS, NAS migrations that don't require advanced management, and custom applications requiring scale-out file storage.
Azure NetApp Files: A fully managed file service in the cloud, powered by NetApp, with advanced management capabilities. NetApp Files is suited for workloads that require random access and provides broad protocol support and data protection capabilities. Some example scenarios are: on-premises enterprise NAS migration that requires rich management capabilities, latency-sensitive workloads like SAP HANA, latency-sensitive or IOPS-intensive high-performance compute, or workloads that require simultaneous multi-protocol access.

Available protocols
Azure Blob Storage: NFS 3.0, REST, Data Lake Storage Gen2.
Azure Files: SMB, NFS 4.1 (no interoperability between either protocol).
Azure NetApp Files: NFS 3.0 and 4.1, SMB.

Key features
Azure Blob Storage: Integrated with HPC Cache for low-latency workloads. Integrated management, including lifecycle, immutable blobs, data failover, and metadata index.
Azure Files: Zonally redundant for high availability. Consistent single-digit millisecond latency. Predictable performance and cost that scales with capacity.
Azure NetApp Files: Extremely low latency (as low as sub-millisecond). Rich NetApp ONTAP management capability, such as SnapMirror in the cloud. Consistent hybrid cloud experience.

Performance (per volume)
Azure Blob Storage: Up to 20,000 IOPS, up to 100 GiB/s throughput.
Azure Files: Up to 100,000 IOPS, up to 80 GiB/s throughput.
Azure NetApp Files: Up to 460,000 IOPS, up to 36 GiB/s throughput.

Scale
Azure Blob Storage: Up to 2 PiB for a single volume. Up to ~4.75 TiB max for a single file. No minimum capacity requirements.
Azure Files: Up to 100 TiB for a single file share. Up to 4 TiB for a single file. 100 GiB minimum capacity.
Azure NetApp Files: Up to 100 TiB for a single volume. Up to 16 TiB for a single file. Consistent hybrid cloud experience.

Pricing
Azure Blob Storage: Azure Blob Storage pricing.
Azure Files: Azure Files pricing.
Azure NetApp Files: Azure NetApp Files pricing.

Next steps
To access Blob storage with NFS, see Network File System (NFS) 3.0 protocol support in Azure Blob Storage.
To access Azure Files with NFS, see NFS file shares in Azure Files.
To access Azure NetApp Files with NFS, see Quickstart: Set up Azure NetApp Files and create an NFS volume.
