Azure Storage
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files file shares can be mounted
concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux,
and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure
file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being
used.
Here are some videos on the common use cases of Azure Files:
Replace your file server with a serverless Azure file share
Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop leveraging AD
authentication
Key benefits
Shared access. Azure file shares support the industry standard SMB and NFS protocols, meaning you can
seamlessly replace your on-premises file shares with Azure file shares without worrying about application
compatibility. Being able to share a file system across multiple machines, applications, and service instances is a
significant advantage of Azure Files for applications that need shareability.
Fully managed. Azure file shares can be created without the need to manage hardware or an OS. This
means you don't have to deal with patching the server OS with critical security upgrades or replacing faulty
hard disks.
Scripting and tooling. PowerShell cmdlets and the Azure CLI can be used to create, mount, and manage Azure
file shares as part of the administration of Azure applications. You can also create and manage Azure file shares
using the Azure portal and Azure Storage Explorer.
Resiliency. Azure Files has been built from the ground up to be always available. Replacing on-premises file
shares with Azure Files means you no longer have to wake up to deal with local power outages or network
issues.
Familiar programmability. Applications running in Azure can access data in the share via file system I/O
APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. In
addition to file system I/O APIs, you can use the Azure Storage Client Libraries or the Azure Storage REST API.
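As a small illustration of that point, the snippet below performs ordinary file operations against a directory. Against a mounted Azure file share the commands would be identical; only the path would change. A local temp directory stands in for the mount point here, so the example runs anywhere:

```shell
# Ordinary file I/O -- nothing Azure-specific is needed once a share is mounted.
# A local temp directory stands in for a mount point such as /mnt/myshare.
share=$(mktemp -d)
echo "hello from azure files" > "$share/greeting.txt"
cat "$share/greeting.txt"      # prints: hello from azure files
rm -r "$share"
```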
Next steps
Plan for an Azure Files deployment
Create an Azure file share
Connect and mount an SMB share on Windows
Connect and mount an SMB share on Linux
Connect and mount an SMB share on macOS
Connect and mount an NFS share on Linux
Quickstart: Create and use an Azure file share
5/20/2022 • 11 minutes to read
Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Windows, Linux, and
macOS. This guide shows you how to create an SMB Azure file share using either the Azure portal, Azure CLI, or
Azure PowerShell module.
Applies to
File share type: SMB, NFS
Prerequisites
Portal
PowerShell
Azure CLI
If you don't have an Azure subscription, create a free account before you begin.
A storage account is a shared pool of storage in which you can deploy an Azure file share or other storage
resources, such as blobs or queues. A storage account can contain an unlimited number of shares. A share can
store an unlimited number of files, up to the capacity limits of the storage account.
To create a storage account using the Azure portal:
1. Under Azure services, select + to create a resource.
2. Select Storage account to create a storage account.
3. Under Project details, select the Azure subscription in which to create the storage account. If you have
only one subscription, it should be the default.
4. Select Create new to create a new resource group. For the name, enter myResourceGroup.
5. Under Instance details, provide a name for the storage account, such as mystorageacct followed by a
few random numbers to make it globally unique. A storage account name must contain only lowercase
letters and numbers, and must be between 3 and 24 characters long. Make a note of your storage account
name; you will use it later.
6. In Region, select East US.
7. In Performance, keep the default value of Standard.
8. In Redundancy, select Locally redundant storage (LRS).
9. Select Review + Create to review your settings and create the storage account.
10. When you see the Validation passed notification, select Create. You should see a notification that
deployment is in progress.
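The same storage account can be created from the Azure CLI instead of the portal. The sketch below mirrors the steps above; the names are placeholders, and it assumes the Azure CLI is installed and you have already signed in with az login:

```shell
# Sketch only -- names are placeholders; requires an authenticated Azure CLI session.
az group create --name myResourceGroup --location eastus

# Storage account names must be globally unique: 3-24 lowercase letters and digits.
az storage account create \
  --name mystorageacct91042 \
  --resource-group myResourceGroup \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_LRS
```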
To create a new directory named myDirectory at the root of your Azure file share:
1. On the File share settings page, select the myshare file share. The page for your file share opens,
indicating no files found.
2. On the menu at the top of the page, select + Add directory. The New directory page drops down.
3. Type myDirectory and then click OK.
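If you prefer the Azure CLI, creating the same directory can be sketched as follows (the account and share names are the quickstart's placeholders; authentication details are elided):

```shell
# Sketch: create myDirectory at the root of the share. Assumes the account key is
# available to the CLI; in practice you might fetch it with `az storage account keys list`.
az storage directory create \
  --account-name mystorageacct \
  --share-name myshare \
  --name myDirectory
```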
Upload a file
Portal
PowerShell
Azure CLI
To demonstrate uploading a file, you first need to create or select a file to be uploaded. You may do this by
whatever means you see fit. Once you've decided on the file you would like to upload:
1. Select the myDirectory directory. The myDirectory panel opens.
2. In the menu at the top, select Upload. The Upload files panel opens.
3. Select the folder icon to open a window to browse your local files.
4. Select a file and then select Open.
5. In the Upload files page, verify the file name and then select Upload.
6. When finished, the file should appear in the list on the myDirectory page.
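From the Azure CLI tab, the equivalent upload might look like this (names are the quickstart's placeholders; assumes an authenticated CLI session with access to the account key):

```shell
# Sketch: upload a local file into myDirectory on the share.
az storage file upload \
  --account-name mystorageacct \
  --share-name myshare \
  --source ./qsTestFile.txt \
  --path myDirectory/qsTestFile.txt
```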
Download a file
Portal
PowerShell
Azure CLI
You can download a copy of the file you uploaded by right-clicking the file and selecting Download. The
exact experience will depend on the operating system and browser you're using.
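The same download can be sketched with the Azure CLI (placeholder names, authenticated session assumed):

```shell
# Sketch: download the uploaded file to the current directory.
az storage file download \
  --account-name mystorageacct \
  --share-name myshare \
  --path myDirectory/qsTestFile.txt \
  --dest ./qsTestFile-downloaded.txt
```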
Clean up resources
Portal
PowerShell
Azure CLI
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.
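From the CLI, cleanup is a single command. Note that deleting the resource group is irreversible:

```shell
# Sketch: delete the resource group and everything in it (irreversible).
az group delete --name myResourceGroup --yes --no-wait
```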
Next steps
What is Azure Files?
Tutorial: Create an SMB Azure file share and
connect it to a Windows VM using the Azure portal
5/20/2022 • 5 minutes to read
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. In this tutorial, you will learn a few ways
you can use an SMB Azure file share in a Windows virtual machine (VM).
If you don't have an Azure subscription, create a free account before you begin.
Create a storage account
Create a file share
Deploy a VM
Connect to the VM
Mount an Azure file share to your VM
Create and delete a share snapshot
Applies to
File share type: SMB, NFS
Getting started
Create a storage account
Before you can work with an Azure file share, you have to create an Azure storage account.
1. Sign in to the Azure portal.
2. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you
begin typing, the list filters based on your input. Select Storage Accounts.
3. On the Storage Accounts window that appears, choose + New.
4. On the Basics tab, select the subscription in which to create the storage account.
5. Under the Resource group field, select your desired resource group, or create a new resource group.
6. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name
also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
7. Select a region for your storage account, or use the default region.
8. Select a performance tier. The default tier is Standard.
9. Specify how the storage account will be replicated. The default redundancy option is Geo-redundant storage
(GRS).
10. Select Review + Create to review your storage account settings and create the account.
11. Select Create .
The following image shows the settings on the Basics tab for a new storage account:
4. Name the new file share qsfileshare, enter "1" for the Quota, leave Transaction optimized selected,
and select Create. The quota can be a maximum of 5 TiB (100 TiB with large file shares enabled), but you
only need 1 GiB for this quickstart.
5. Create a new .txt file called qsTestFile on your local machine.
6. Select the new file share, then on the file share location, select Upload.
7. Browse to the location where you created your .txt file, select qsTestFile.txt, and select Upload.
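The share-creation and upload steps above can be sketched with the Azure CLI as follows (names follow the quickstart; assumes an authenticated CLI session):

```shell
# Sketch: create the 1 GiB share, then upload the test file from the current directory.
az storage share-rm create \
  --resource-group myResourceGroup \
  --storage-account mystorageacct \
  --name qsfileshare \
  --quota 1

az storage file upload \
  --account-name mystorageacct \
  --share-name qsfileshare \
  --source ./qsTestFile.txt
```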
Deploy a VM
So far, you've created an Azure storage account and a file share with one file in it. Next, create an Azure VM with
Windows Server 2019 Datacenter to represent the on-premises server.
1. Expand the menu on the left side of the portal and select Create a resource in the upper left-hand
corner of the Azure portal.
2. Under Popular services, select Virtual machine.
3. In the Basics tab, under Project details , select the resource group you created earlier.
5. You may receive a certificate warning during the sign-in process. Select Yes or Continue to create the
connection.
Map the Azure file share to a Windows drive
1. In the Azure portal, navigate to the qsfileshare file share and select Connect.
2. Select a drive letter, then copy the contents of the second box and paste it into Notepad.
3. In the VM, open PowerShell, paste in the contents from Notepad, and then press Enter to run the
command. It should map the drive.
3. In the VM, open the file. The unmodified version has been restored.
2. Right-click qsTestFile.txt and select Properties from the menu.
3. Select Previous Versions to see the list of share snapshots for this directory.
4. Select Open to open the snapshot.
Restore from a previous version
1. Select Restore. This recursively copies the entire directory, as it existed at the time the share snapshot
was taken, back to its original location.
NOTE
If your file has not changed, you will not see a previous version for that file because that file is the same version as
the snapshot. This is consistent with how this works on a Windows file server.
Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.
Next steps
Use an Azure file share with Windows
Tutorial: Create an NFS Azure file share and mount
it on a Linux VM using the Azure portal
5/20/2022 • 7 minutes to read
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Both NFS and SMB protocols are
supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file
share using the NFS protocol and connect it to a Linux VM.
In this tutorial, you will:
Create a storage account
Deploy a Linux VM
Create an NFS file share
Connect to your VM
Mount the file share to your VM
Applies to
File share type: SMB, NFS
Getting started
If you don't have an Azure subscription, create a free account before you begin.
Sign in to the Azure portal.
Create a FileStorage storage account
Before you can work with an NFS 4.1 Azure file share, you have to create an Azure storage account with the
premium performance tier. Currently, NFS 4.1 shares are only available as premium file shares.
1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you
begin typing, the list filters based on your input. Select Storage Accounts.
2. On the Storage Accounts window that appears, choose + Create.
3. On the Basics tab, select the subscription in which to create the storage account.
4. Under the Resource group field, select Create new to create a new resource group to use for this tutorial.
5. Enter a name for your storage account. The name you choose must be unique across Azure. The name also
must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the
same regions that support premium file storage.
7. Select the Premium performance tier to store your data on solid-state drives (SSDs). Under Premium
account type, select File shares.
8. Leave replication set to its default value of Locally-redundant storage (LRS).
9. Select Review + Create to review your storage account settings and create the account.
10. When you see the Validation passed notification appear, select Create . You should see a notification that
deployment is in progress.
The following image shows the settings on the Basics tab for a new storage account:
5. Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select
SSH (22) and HTTP (80) from the drop-down.
IMPORTANT
Opening SSH ports to the internet is recommended only for testing. If you want to change this setting later,
go back to the Basics tab.
4. Leave Subscription and Resource group the same. Under Instance, provide a name and select a
region for the new private endpoint. Your private endpoint must be in the same region as your virtual
network, so use the same region you specified when creating the VM. When all the fields are
complete, select Next: Resource.
5. Confirm that the Subscription, Resource type, and Resource are correct, and select File from the
Target sub-resource drop-down. Then select Next: Virtual Network.
6. Under Networking , select the virtual network associated with your VM and leave the default subnet.
Select Yes for Integrate with private DNS zone . Select the correct subscription and resource group,
and then select Next: Tags .
7. You can optionally apply tags to categorize your resources, such as applying the name Environment and
the value Test to all testing resources. Enter name/value pairs if desired, and then select Next: Review +
create .
8. Azure will attempt to validate the private endpoint. When validation is complete, select Create . You'll see
a notification that deployment is in progress. After a few minutes, you should see a notification that
deployment is complete.
Disable secure transfer
Because the NFS protocol doesn't support encryption and relies instead on network-level security, you'll need to
disable secure transfer.
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select File shares from the storage account pane.
4. Select the NFS file share that you created. Under Secure transfer setting, select Change setting.
5. Change the Secure transfer required setting to Disabled, and select Save. The setting change may
take up to 30 seconds to take effect.
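From the CLI, this step roughly corresponds to turning off the account-level "secure transfer required" flag. A hedged sketch with placeholder names:

```shell
# Sketch: disable the "secure transfer required" (HTTPS-only) flag on the account,
# which NFS shares need because NFS traffic is not encrypted in transit.
az storage account update \
  --name mystorageacct \
  --resource-group myResourceGroup \
  --https-only false
```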
Connect to your VM
Create an SSH connection with the VM.
1. Select Home and then Virtual machines.
2. Select the Linux VM you created for this tutorial and ensure that its status is Running . Take note of the
VM's public IP address and copy it to your clipboard.
3. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a
PowerShell prompt.
4. At your prompt, open an SSH connection to your VM. Replace the IP address with the one from your VM,
and replace the path to the .pem with the path to where the key file was downloaded.
If you encounter a warning that the authenticity of the host can't be established, type yes to continue connecting
to the VM. Leave the SSH connection open for the next step.
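The SSH command in step 4 might look like the following; the IP address, user name, and key path are hypothetical placeholders to be replaced with your own values:

```shell
# Sketch: connect to the VM with the downloaded private key.
# Substitute your VM's public IP, admin user name, and key file path.
ssh -i ~/Downloads/myVmKey.pem azureuser@20.120.10.5
```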
TIP
The SSH key you created can be used the next time you create a VM in Azure. Just select Use a key stored in
Azure as the SSH public key source the next time you create a VM. You already have the private key on your
computer, so you won't need to download anything.
Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you created for this tutorial.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.
Next steps
Learn about using NFS Azure file shares
Tutorial: Extend Windows file servers with Azure File
Sync
5/20/2022 • 11 minutes to read
This article demonstrates the basic steps for extending the storage capacity of a Windows server by using Azure
File Sync. Although the tutorial features Windows Server as an Azure virtual machine (VM), you would typically
do this process for your on-premises servers. You can find instructions for deploying Azure File Sync in your
own environment in the Deploy Azure File Sync article.
Deploy the Storage Sync Service
Prepare Windows Server to use with Azure File Sync
Install the Azure File Sync agent
Register Windows Server with the Storage Sync Service
Create a sync group and a cloud endpoint
Create a server endpoint
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to the Azure portal.
5. Select the new file share. On the file share location, select Upload .
6. Browse to the FilesToSync folder where you created your .txt file, select mytestdoc.txt and select Upload .
At this point, you've created a storage account and a file share with one file in it. Next, you deploy an Azure VM
with Windows Server 2016 Datacenter to represent the on-premises server in this tutorial.
Deploy a VM and attach a data disk
1. Go to the Azure portal and expand the menu on the left. Choose Create a resource in the upper left-
hand corner.
2. In the search box above the list of Azure Marketplace resources, search for Windows Server 2016
Datacenter and select it in the results. Choose Create.
3. Go to the Basics tab. Under Project details , select the resource group you created for this tutorial.
e. Select OK .
9. Select Review + create .
10. Select Create .
You can select the Notifications icon to watch the Deployment progress . Creating a new VM might
take a few minutes to complete.
11. After your VM deployment is complete, select Go to resource .
At this point, you've created a new virtual machine and attached a data disk. Next you connect to the VM.
Connect to your VM
1. In the Azure portal, select Connect on the virtual machine properties page.
2. On the Connect to virtual machine page, keep the default options to connect by IP address over port
3389. Select Download RDP file.
3. Open the downloaded RDP file and select Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account . Type the
username as localhost\username, enter the password you created for the virtual machine, and then select
OK .
5. You might receive a certificate warning during the sign-in process. Select Yes or Continue to create the
connection.
Prepare the Windows server
For the Windows Server 2016 Datacenter server, disable Internet Explorer Enhanced Security Configuration. This
step is required only for initial server registration. You can re-enable it after the server has been registered.
In the Windows Server 2016 Datacenter VM, Server Manager opens automatically. If Server Manager doesn't
open by default, search for it in Start Menu.
1. In Server Manager, select Local Server.
2. On the Properties pane, select the link for IE Enhanced Security Configuration.
3. In the Internet Explorer Enhanced Security Configuration dialog box, select Off for
Administrators and Users.
Now you can add the data disk to the VM.
Add the data disk
1. While still in the Windows Server 2016 Datacenter VM, select Files and storage services >
Volumes > Disks.
2. Right-click the 1 GiB disk named Msft Virtual Disk and select New volume.
3. Complete the wizard. Use the default settings and make note of the assigned drive letter.
4. Select Create .
5. Select Close .
At this point, you've brought the disk online and created a volume. Open File Explorer in the Windows
Server VM to confirm the presence of the recently added data disk.
6. In File Explorer in the VM, expand This PC and open the new drive. It's the F: drive in this example.
7. Right-click and select New > Folder . Name the folder FilesToSync.
8. Open the FilesToSync folder.
9. Right-click and select New > Text Document . Name the text file MyTestFile.
10. Close File Explorer and Server Manager.
Download the Azure PowerShell module
Next, in the Windows Server 2016 Datacenter VM, install the Azure PowerShell module on the server.
1. In the VM, open an elevated PowerShell window.
2. Run the following command:
Install-Module -Name Az
NOTE
If you have a NuGet version that is older than 2.8.5.201, you're prompted to download and install the latest
version of NuGet.
By default, the PowerShell gallery isn't configured as a trusted repository for PowerShellGet. The first
time you use the PSGallery, you see the following prompt:
Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet.
Are you sure you want to install the modules from 'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"):
Resource group: The resource group that contains the Storage Sync Service.
Location: East US
4. When you're finished, select Create to deploy the Storage Sync Service.
5. Select the Notifications tab > Go to resource.
Azure Subscription: The subscription that contains the Storage Sync Service for this tutorial.
Resource Group: The resource group that contains the Storage Sync Service. Use afsresgroup101918 for this
tutorial.
Storage Sync Service: The name of the Storage Sync Service. Use afssyncservice02 for this tutorial.
2. Enter the following information to create a sync group with a cloud endpoint:
Sync group name: This name must be unique within the Storage Sync Service, but can be any name that is
logical for you. Use afssyncgroup for this tutorial.
Azure file share: The name of the Azure file share you created. Use afsfileshare for this tutorial.
3. Select Create .
If you select your sync group, you can see that you now have one cloud endpoint.
2. On the Add server endpoint pane, enter the following information to create a server endpoint:
Registered server: The name of the server you created. Use afsvm101918 for this tutorial.
Path: The Windows Server path to the drive you created. Use f:\filestosync in this tutorial.
3. Select Create .
Your files are now in sync across your Azure file share and Windows Server.
Clean up resources
If you'd like to clean up the resources you created in this tutorial, first remove the endpoints from the Storage
Sync Service. Then unregister the server from your Storage Sync Service, remove the sync groups, and delete
the sync service.
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups.
2. Select the resource group you want to delete.
3. Select Delete resource group. A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete.
Next steps
In this tutorial, you learned the basic steps to extend the storage capacity of a Windows server by using Azure
File Sync. For a more thorough look at planning for an Azure File Sync deployment, see:
Plan for Azure File Sync deployment
Planning for an Azure Files deployment
5/20/2022 • 21 minutes to read
Azure Files can be deployed in two main ways: by directly mounting the serverless Azure file shares or by
caching Azure file shares on-premises using Azure File Sync. Deployment considerations will differ based on
which option you choose.
Direct mount of an Azure file share : Because Azure Files provides either Server Message Block (SMB)
or Network File System (NFS) access, you can mount Azure file shares on-premises or in the cloud using
the standard SMB or NFS clients available in your OS. Because Azure file shares are serverless, deploying
for production scenarios does not require managing a file server or NAS device. This means you don't
have to apply software patches or swap out physical disks.
Cache Azure file share on-premises with Azure File Sync : Azure File Sync enables you to
centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and
compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud)
Windows Server into a quick cache of your SMB Azure file share.
This article primarily addresses deployment considerations for deploying an Azure file share to be directly
mounted by an on-premises or cloud client. To plan for an Azure File Sync deployment, see Planning for an
Azure File Sync deployment.
Available protocols
Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the Server
Message Block (SMB) protocol and the Network File System (NFS) protocol, allowing you to choose the protocol
that is the best fit for your workload. Azure file shares do not support both the SMB and NFS protocols on the
same file share, although you can create SMB and NFS Azure file shares within the same storage account. NFS
4.1 is currently only supported within the FileStorage storage account type (premium file shares only).
With both SMB and NFS file shares, Azure Files offers enterprise-grade file shares that can scale up to meet your
storage needs and can be accessed concurrently by thousands of clients.
Supported protocol versions: SMB 3.1.1, SMB 3.0, and SMB 2.1 (SMB shares); NFS 4.1 (NFS shares)
Management concepts
Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of
storage. This pool of storage can be used to deploy multiple file shares, as well as other storage resources such
as blob containers, queues, or tables. All storage resources that are deployed into a storage account share the
limits that apply to that storage account. For current storage account limits, see Azure Files scalability and
performance targets.
There are two main types of storage accounts you will use for Azure Files deployments:
General purpose version 2 (GPv2) storage accounts : GPv2 storage accounts allow you to deploy Azure
file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2
storage accounts can store other storage resources such as blob containers, queues, or tables.
FileStorage storage accounts : FileStorage storage accounts allow you to deploy Azure file shares on
premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure
file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage
account. Only FileStorage accounts can deploy both SMB and NFS file shares.
There are several other storage account types you may come across in the Azure portal, PowerShell, or CLI. Two
storage account types, BlockBlobStorage and BlobStorage storage accounts, cannot contain Azure file shares.
The other two storage account types you may see are general purpose version 1 (GPv1) and classic storage
accounts, both of which can contain Azure file shares. Although GPv1 and classic storage accounts may contain
Azure file shares, most new features of Azure Files are available only in GPv2 and FileStorage storage accounts.
We therefore recommend using only GPv2 and FileStorage storage accounts for new deployments, and
upgrading GPv1 and classic storage accounts if they already exist in your environment.
When deploying Azure file shares into storage accounts, we recommend:
Only deploying Azure file shares into storage accounts with other Azure file shares. Although GPv2
storage accounts allow you to have mixed purpose storage accounts, because storage resources such as
Azure file shares and blob containers share the storage account's limits, mixing resources together may
make it more difficult to troubleshoot performance issues later on.
Paying attention to a storage account's IOPS limitations when deploying Azure file shares. Ideally, you
would map file shares 1:1 with storage accounts. However, this may not always be possible due to various
limits and restrictions, both from your organization and from Azure. When it is not possible to have only
one file share deployed in one storage account, consider which shares will be highly active and which
shares will be less active to ensure that the hottest file shares don't get put in the same storage account
together.
Only deploying GPv2 and FileStorage accounts and upgrading GPv1 and classic storage accounts when
you find them in your environment.
Identity
To access an Azure file share, the user of the file share must be authenticated and authorized to access the share.
This is done based on the identity of the user accessing the file share. Azure Files integrates with three main
identity providers:
On-premises Active Directory Domain Services (AD DS, or on-premises AD DS): Azure storage
accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows
Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even
as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a
storage account is domain-joined, the end user can mount a file share with the user account they signed into
their PC with. AD-based authentication uses the Kerberos authentication protocol.
Azure Active Directory Domain Services (Azure AD DS): Azure AD DS provides a Microsoft-managed
domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD
DS provides similar benefits to domain joining it to a customer-owned Active Directory. This deployment
option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure
AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
Azure storage account key: Azure file shares may also be mounted with an Azure storage account key. To
mount a file share this way, the storage account name is used as the username and the storage account key
is used as a password. Using the storage account key to mount the Azure file share is effectively an
administrator operation, because the mounted file share will have full permissions to all of the files and
folders on the share, even if they have ACLs. When using the storage account key to mount over SMB, the
NTLMv2 authentication protocol is used.
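Mounting with the storage account key as described above might look like the following on a Linux client. The angle-bracket values are placeholders, and the example assumes the cifs-utils package is installed and port 445 is reachable:

```shell
# Sketch: mount an SMB Azure file share using the storage account key as credentials.
# <account>, <share>, and <account-key> are placeholders.
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/myshare \
  -o vers=3.1.1,username=<account>,password='<account-key>',serverino
```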
For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to
behave like Windows file servers or NAS appliances, domain joining your storage account to a customer-owned
Active Directory is the recommended option. To learn more about domain joining your storage
account to a customer-owned Active Directory, see Azure Files Active Directory overview.
If you intend to use the storage account key to access your Azure file shares, we recommend using service
endpoints as described in the Networking section.
Networking
Directly mounting your Azure file share often requires some thought about networking configuration because:
The port that SMB file shares use for communication, port 445, is frequently blocked by many organizations
and internet service providers (ISPs) for outbound (internet) traffic.
NFS file shares rely on network-level authentication and are therefore only accessible via restricted networks.
Using an NFS file share always requires some level of networking configuration.
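Before planning around VPNs or private endpoints, it can be worth checking whether outbound port 445 is actually blocked from a given client. A minimal sketch (the storage account name is a placeholder; `nc` flag support varies slightly between netcat variants):

```shell
# Test outbound TCP connectivity to the SMB endpoint on port 445.
# Replace <storage-account> with your storage account name.
nc -zvw3 <storage-account>.file.core.windows.net 445 \
  && echo "Port 445 reachable" \
  || echo "Port 445 blocked - consider VPN/ExpressRoute or private endpoints"
```

On Windows, `Test-NetConnection -ComputerName <storage-account>.file.core.windows.net -Port 445` serves the same purpose.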
To configure networking, Azure Files provides an internet accessible public endpoint and integration with Azure
networking features like service endpoints, which help restrict the public endpoint to specified virtual networks,
and private endpoints, which give your storage account a private IP address from within a virtual network IP
address space.
From a practical perspective, this means you will need to consider the following network configurations:
If the required protocol is SMB, and all access over SMB is from clients in Azure, no special networking
configuration is required.
If the required protocol is SMB, and the access is from clients on-premises, a VPN or ExpressRoute
connection from on-premises to your Azure network is required, with Azure Files exposed on your internal
network using private endpoints.
If the required protocol is NFS, you can use either service endpoints or private endpoints to restrict the
network to specified virtual networks.
To learn more about how to configure networking for Azure Files, see Azure Files networking considerations.
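A private endpoint for the file service can be created with the Azure CLI. This is a sketch; the resource group, virtual network, subnet, and endpoint names are illustrative placeholders, and `--group-id file` selects the file sub-resource of the storage account:

```shell
# Look up the storage account resource ID (names are placeholders).
STORAGE_ID=$(az storage account show \
    --name mystorageacct --resource-group my-rg \
    --query id --output tsv)

# Create a private endpoint for the storage account's file service
# inside an existing virtual network and subnet.
az network private-endpoint create \
    --name myfiles-pe \
    --resource-group my-rg \
    --vnet-name my-vnet \
    --subnet my-subnet \
    --private-connection-resource-id "$STORAGE_ID" \
    --group-id file \
    --connection-name myfiles-pe-conn
```

You would typically pair this with a private DNS zone so clients resolve the storage account name to the private IP address.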
In addition to directly connecting to the file share using the public endpoint or using a VPN/ExpressRoute
connection with a private endpoint, SMB provides an additional client access strategy: SMB over QUIC. SMB over
QUIC offers a zero-config "SMB VPN" for SMB access over the QUIC transport protocol. Although Azure Files does
not directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows
Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see SMB over QUIC with
Azure File Sync.
Encryption
Azure Files supports two different types of encryption: encryption in transit, which relates to the encryption
used when mounting/accessing the Azure file share, and encryption at rest, which relates to how the data is
encrypted when it is stored on disk.
Encryption in transit
IMPORTANT
This section covers encryption in transit details for SMB shares. For details regarding encryption in transit with NFS
shares, see Security and networking.
By default, all Azure storage accounts have encryption in transit enabled. This means that when you mount a file
share over SMB or access it via the FileREST protocol (such as through the Azure portal, PowerShell/CLI, or
Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.x with encryption or HTTPS.
Clients that do not support SMB 3.x or clients that support SMB 3.x but not SMB encryption will not be able to
mount the Azure file share if encryption in transit is enabled. For more information about which operating
systems support SMB 3.x with encryption, see our detailed documentation for Windows, macOS, and Linux. All
current versions of the PowerShell, CLI, and SDKs support HTTPS.
You can disable encryption in transit for an Azure storage account. When encryption is disabled, Azure Files will
also allow SMB 2.1 and SMB 3.x without encryption, and unencrypted FileREST API calls over HTTP. The primary
reason to disable encryption in transit is to support a legacy application that must be run on an older operating
system, such as Windows Server 2008 R2 or an older Linux distribution. Azure Files only allows SMB 2.1
connections within the same Azure region as the Azure file share; an SMB 2.1 client outside of the Azure region
of the Azure file share, such as on-premises or in a different Azure region, will not be able to access the file
share.
We strongly recommend ensuring encryption of data in-transit is enabled.
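You can check and enforce this setting ("secure transfer required") with the Azure CLI; account and resource group names below are placeholders:

```shell
# Check whether secure transfer (encryption in transit) is required.
az storage account show \
    --name mystorageacct --resource-group my-rg \
    --query enableHttpsTrafficOnly --output tsv

# Enforce encryption in transit for the storage account.
az storage account update \
    --name mystorageacct --resource-group my-rg \
    --https-only true
```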
For more information about encryption in transit, see requiring secure transfer in Azure storage.
Encryption at rest
All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service
encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because
data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access
to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the
SMB and NFS protocols.
By default, data stored in Azure Files is encrypted with Microsoft-managed keys. With Microsoft-managed keys,
Microsoft holds the keys to encrypt/decrypt the data, and is responsible for rotating them on a regular basis.
You can also choose to manage your own keys, which gives you control over the rotation process. If you choose
to encrypt your file shares with customer-managed keys, Azure Files is authorized to access your keys to fulfill
read and write requests from your clients. With customer-managed keys, you can revoke this authorization at
any time, but this means that your Azure file share will no longer be accessible via SMB or the FileREST API.
Azure Files uses the same encryption scheme as the other Azure storage services such as Azure Blob storage. To
learn more about Azure storage service encryption (SSE), see Azure storage encryption for data at rest.
Data protection
Azure Files has a multi-layered approach to ensuring your data is backed up, recoverable, and protected from
security threats.
Soft delete
Soft delete for file shares is a storage-account level setting that allows you to recover your file share when it is
accidentally deleted. When a file share is deleted, it transitions to a soft deleted state instead of being
permanently erased. You can configure the amount of time soft deleted data is recoverable before it's
permanently deleted, and undelete the share anytime during this retention period.
We recommend turning on soft delete for most file shares. If you have a workflow where share deletion is
common and expected, you may decide to have a short retention period or not have soft delete enabled at all.
For more information about soft delete, see Prevent accidental data deletion.
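As a sketch of enabling soft delete from the Azure CLI (the resource group and account names are placeholders, and 14 days is an illustrative retention period):

```shell
# Enable soft delete for file shares with a 14-day retention period.
az storage account file-service-properties update \
    --resource-group my-rg \
    --account-name mystorageacct \
    --enable-delete-retention true \
    --delete-retention-days 14
```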
Backup
You can back up your Azure file share via share snapshots, which are read-only, point-in-time copies of your
share. Snapshots are incremental, meaning they only contain as much data as has changed since the previous
snapshot. You can have up to 200 snapshots per file share and retain them for up to 10 years. You can take
these snapshots manually via the Azure portal, PowerShell, or the Azure CLI, or you can
use Azure Backup. Snapshots are stored within your file share, meaning that if you delete your file share, your
snapshots will also be deleted. To protect your snapshot backups from accidental deletion, ensure soft delete is
enabled for your share.
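Taking a manual snapshot from the Azure CLI is a one-line operation; the account and share names below are placeholders:

```shell
# Create a read-only, point-in-time snapshot of a file share.
az storage share snapshot \
    --account-name mystorageacct \
    --name myshare
```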
Azure Backup for Azure file shares handles the scheduling and retention of snapshots. Its grandfather-father-son
(GFS) capabilities mean that you can take daily, weekly, monthly, and yearly snapshots, each with their own
distinct retention period. Azure Backup also orchestrates the enablement of soft delete and takes a delete lock
on a storage account as soon as any file share within it is configured for backup. Lastly, Azure Backup provides
certain key monitoring and alerting capabilities that allow customers to have a consolidated view of their
backup estate.
You can perform both item-level and share-level restores in the Azure portal using Azure Backup. All you need
to do is choose the restore point (a particular snapshot), the particular file or directory if relevant, and then the
location (original or alternate) you wish to restore to. The backup service handles copying the snapshot data
over and shows your restore progress in the portal.
For more information about backup, see About Azure file share backup.
Protect Azure Files with Microsoft Defender for Storage
Microsoft Defender for Storage provides an additional layer of security intelligence that generates alerts when it
detects anomalous activity on your storage account, for example unusual access attempts. It also runs malware
hash reputation analysis and will alert on known malware. You can configure Microsoft Defender for Storage at
the subscription or storage account level via Microsoft Defender for Cloud.
For more information, see Introduction to Microsoft Defender for Storage.
Storage tiers
Azure Files offers four tiers of storage: premium, transaction optimized, hot, and cool. These allow you to
tailor your shares to the performance and price requirements of your scenario:
Premium : Premium file shares are backed by solid-state drives (SSDs) and provide consistent high
performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive
workloads. Premium file shares are suitable for a wide variety of workloads like databases, web site hosting,
and development environments. Premium file shares can be used with both Server Message Block (SMB)
and Network File System (NFS) protocols.
Transaction optimized : Transaction optimized file shares enable transaction heavy workloads that don't
need the latency offered by premium file shares. Transaction optimized file shares are offered on the
standard storage hardware backed by hard disk drives (HDDs). Transaction optimized has historically been
called "standard"; however, "standard" refers to the storage media type rather than the tier itself (the hot and
cool tiers are also "standard" tiers, because they run on standard storage hardware).
Hot : Hot file shares offer storage optimized for general purpose file sharing scenarios such as team shares.
Hot file shares are offered on the standard storage hardware backed by HDDs.
Cool : Cool file shares offer cost-efficient storage optimized for online archive storage scenarios. Cool file
shares are offered on the standard storage hardware backed by HDDs.
Premium file shares are deployed in the FileStorage storage account kind and are only available in a
provisioned billing model. For more information on the provisioned billing model for premium file shares, see
Understanding provisioning for premium file shares. Standard file shares, including transaction optimized, hot,
and cool file shares, are deployed in the general purpose version 2 (GPv2) storage account kind, and are
available through pay as you go billing.
When selecting a storage tier for your workload, consider your performance and usage requirements. If your
workload requires single-digit latency, or you are using SSD storage media on-premises, the premium tier is
probably the best fit. If low latency isn't as much of a concern, for example with team shares mounted on-
premises from Azure or cached on-premises using Azure File Sync, standard storage may be a better fit from a
cost perspective.
Once you've created a file share in a storage account, you cannot move it to tiers exclusive to different storage
account kinds. For example, to move a transaction optimized file share to the premium tier, you must create a
new file share in a FileStorage storage account and copy the data from your original share to a new file share in
the FileStorage account. We recommend using AzCopy to copy data between Azure file shares, but you may also
use tools like robocopy on Windows or rsync for macOS and Linux.
File shares deployed within GPv2 storage accounts can be moved between the standard tiers (transaction
optimized, hot, and cool) without creating a new storage account and migrating data, but you will incur
transaction costs when you change your tier. When you move a share from a hotter tier to a cooler tier, you will
incur the cooler tier's write transaction charge for each file in the share. Moving a file share from a cooler tier to
a hotter tier will incur the cool tier's read transaction charge for each file in the share.
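Moving a standard share between tiers can be sketched with the Azure CLI; the account, resource group, and share names are placeholders, and the per-file transaction charges described above still apply:

```shell
# Move a standard file share to the cool tier within the same GPv2 account.
az storage share-rm update \
    --storage-account mystorageacct \
    --resource-group my-rg \
    --name myshare \
    --access-tier Cool
```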
See Understanding Azure Files billing for more information.
Limitations
Standard file shares scaled up to 100 TiB capacity (large file shares) have certain limitations.
Currently, only locally redundant storage (LRS) and zone redundant storage (ZRS) accounts are supported.
Once you enable large file shares, you cannot convert storage accounts to geo-redundant storage (GRS) or
geo-zone-redundant storage (GZRS) accounts.
Once you enable large file shares, you can't disable it.
Redundancy
To protect the data in your Azure file shares against data loss or corruption, all Azure file shares store multiple
copies of each file as they are written. Depending on the requirements of your workload, you can select
additional degrees of redundancy. Azure Files currently supports the following data redundancy options:
Locally-redundant storage (LRS) : With LRS, every file is stored three times within an Azure storage
cluster. This protects against loss of data due to hardware faults, such as a bad disk drive. However, if a
disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may
be lost or unrecoverable.
Zone-redundant storage (ZRS) : With ZRS, three copies of each file are stored; however, these copies are
physically isolated in three distinct storage clusters in different Azure availability zones. Availability zones are
unique physical locations within an Azure region. Each zone is made up of one or more data centers
equipped with independent power, cooling, and networking. A write to storage is not accepted until it is
written to the storage clusters in all three availability zones.
Geo-redundant storage (GRS) : With GRS, you have two regions, a primary and secondary region. Files
are stored three times within an Azure storage cluster in the primary region. Writes are asynchronously
replicated to a Microsoft-defined secondary region. GRS provides six copies of your data spread between two
Azure regions. In the event of a major disaster such as the permanent loss of an Azure region due to a
natural disaster or other similar event, Microsoft will perform a failover and the secondary becomes the
primary, serving all operations. Because replication between the primary and secondary regions is
asynchronous, in the event of a major disaster, data not yet replicated to the secondary region will be lost.
You can also perform a manual failover of a geo-redundant storage account.
Geo-zone-redundant storage (GZRS) : You can think of GZRS as ZRS with added geo-redundancy. With
GZRS, files are stored three times across three distinct storage clusters in the primary
region. All writes are then asynchronously replicated to a Microsoft-defined secondary region. The failover
process for GZRS works the same as GRS.
Standard Azure file shares up to 5 TiB support all four redundancy types. Standard file shares larger than 5 TiB
only support LRS and ZRS. Premium Azure file shares only support LRS and ZRS.
General purpose version 2 (GPv2) storage accounts provide two additional redundancy options that are not
supported by Azure Files: read accessible geo-redundant storage, often referred to as RA-GRS, and read
accessible geo-zone-redundant storage, often referred to as RA-GZRS. You can provision Azure file shares in
storage accounts with these options set, however Azure Files does not support reading from the secondary
region. Azure file shares deployed into read-accessible geo- or geo-zone redundant storage accounts will be
billed as geo-redundant or geo-zone-redundant storage, respectively.
Standard ZRS availability
ZRS for standard general-purpose v2 storage accounts is available for a subset of Azure regions:
(Africa) South Africa North
(Asia Pacific) Australia East
(Asia Pacific) Central India
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Asia Pacific) South India
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) Germany West Central
(Europe) North Europe
(Europe) Norway East
(Europe) Sweden Central
(Europe) UK South
(Europe) West Europe
(North America) Canada Central
(North America) Central US
(North America) East US
(North America) East US 2
(North America) South Central US
(North America) US Gov Virginia
(North America) West US 2
(North America) West US 3
(South America) Brazil South
Premium ZRS availability
ZRS for premium file shares is available for a subset of Azure regions:
(Asia Pacific) Australia East
(Asia Pacific) Japan East
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) North Europe
(Europe) West Europe
(Europe) UK South
(North America) East US
(North America) East US 2
(North America) West US 2
(South America) Brazil South
Standard GZRS availability
GZRS is available for a subset of Azure regions:
(Africa) South Africa North
(Asia Pacific) Australia East
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Asia Pacific) Southeast Asia
(Asia Pacific) Central India
(Europe) France Central
(Europe) North Europe
(Europe) Norway East
(Europe) Sweden Central
(Europe) UK South
(Europe) West Europe
(North America) Canada Central
(North America) Central US
(North America) East US
(North America) East US 2
(North America) South Central US
(North America) West US 2
(North America) West US 3
(North America) US Gov Virginia
(South America) Brazil South
Migration
In many cases, you will not be establishing a net new file share for your organization, but instead migrating an
existing file share from an on-premises file server or NAS device to Azure Files. Picking the right migration
strategy and tool for your scenario is important for the success of your migration.
The migration overview article briefly covers the basics and contains a table that leads you to migration guides
that likely cover your scenario.
Next steps
Planning for an Azure File Sync Deployment
Deploying Azure Files
Deploying Azure File Sync
Check out the migration overview article to find the migration guide for your scenario
SMB file shares in Azure Files
5/20/2022
Azure Files offers two industry-standard protocols for mounting Azure file shares: the Server Message Block
(SMB) protocol and the Network File System (NFS) protocol. Azure Files enables you to pick the file system
protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file
share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same
storage account. For all file shares, Azure Files offers enterprise-grade file shares that can scale up to meet your
storage needs and can be accessed concurrently by thousands of clients.
This article covers SMB Azure file shares. For information about NFS Azure file shares, see NFS file shares in
Azure Files.
Common scenarios
SMB file shares are used for a variety of applications including end-user file shares and file shares that back
databases and applications. SMB file shares are often used in the following scenarios:
End-user file shares such as team shares, home directories, etc.
Backing storage for Windows-based applications, such as SQL Server databases or line-of-business
applications written for Win32 or .NET local file system APIs.
New application and service development, particularly if that application or service has a requirement for
random IO and hierarchical storage.
Features
Azure Files supports the major features of SMB and Azure needed for production deployments of SMB file
shares:
AD domain join and discretionary access control lists (DACLs).
Integrated serverless backup with Azure Backup.
Network isolation with Azure private endpoints.
High network throughput using SMB Multichannel (premium file shares only).
SMB channel encryption including AES-256-GCM, AES-128-GCM, and AES-128-CCM.
Previous version support through VSS integrated share snapshots.
Automatic soft delete on Azure file shares to prevent accidental deletes.
Optionally internet-accessible file shares with internet-safe SMB 3.0+.
SMB file shares can be mounted directly on-premises or can also be cached on-premises with Azure File Sync.
Security
All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service
encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because
data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access
to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the
SMB and NFS protocols.
By default, all Azure storage accounts have encryption in transit enabled. This means that when you mount a file
share over SMB (or access it via the FileREST protocol), Azure Files will only allow the connection if it is made
with SMB 3.x with encryption or HTTPS. Clients that do not support SMB 3.x with SMB channel encryption will
not be able to mount the Azure file share if encryption in transit is enabled.
Azure Files supports AES-256-GCM with SMB 3.1.1 when used with Windows Server 2022 or Windows 11. SMB
3.1.1 also supports AES-128-GCM and SMB 3.0 supports AES-128-CCM. AES-128-GCM is negotiated by default
on Windows 10, version 21H1 for performance reasons.
You can disable encryption in transit for an Azure storage account. When encryption is disabled, Azure Files will
also allow SMB 2.1 and SMB 3.x without encryption. The primary reason to disable encryption in transit is to
support a legacy application that must be run on an older operating system, such as Windows Server 2008 R2
or an older Linux distribution. Azure Files only allows SMB 2.1 connections within the same Azure region as the
Azure file share; an SMB 2.1 client outside of the Azure region of the Azure file share, such as on-premises or in
a different Azure region, will not be able to access the file share.
To view the status of SMB Multichannel, navigate to the storage account containing your premium file shares
and select File shares under the Data storage heading in the storage account table of contents. The status of
SMB Multichannel can be seen under the File share settings section.
To enable or disable SMB Multichannel, select the current status (Enabled or Disabled depending on the
status). The resulting dialog provides a toggle to enable or disable SMB Multichannel. Select the desired state
and select Save .
SMB security settings
Azure Files exposes settings that let you toggle the SMB protocol to be more compatible or more secure,
depending on your organization's requirements. By default, Azure Files is configured to be maximally
compatible, so keep in mind that restricting these settings may prevent some clients from connecting.
Azure Files exposes the following settings:
SMB versions : Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0,
and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure
transfer" is enabled, because SMB 2.1 does not support encryption in transit.
Authentication methods : Which SMB authentication methods are allowed. Supported authentication
methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2
disallows using the storage account key to mount the Azure file share.
Kerberos ticket encryption : Which encryption algorithms are allowed. Supported encryption algorithms
are AES-256 (recommended) and RC4-HMAC.
SMB channel encryption : Which SMB channel encryption algorithms are allowed. Supported encryption
algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
The SMB security settings can be viewed and changed using the Azure portal, PowerShell, or the Azure CLI. Select
the desired tab to see the steps for viewing and setting the SMB security settings.
Portal
PowerShell
Azure CLI
To view or change the SMB security settings using the Azure portal, follow these steps:
1. Search for Storage accounts and select the storage account for which you want to view the security
settings.
2. Select Data storage > File shares .
3. Under File share settings , select the value associated with Security . If you haven't modified the
security settings, this value defaults to Maximum compatibility .
4. Under Profile , select Maximum compatibility , Maximum security , or Custom . Selecting Custom
allows you to create a custom profile for SMB protocol versions, SMB channel encryption, authentication
mechanisms, and Kerberos ticket encryption.
Limitations
SMB file shares in Azure Files support a subset of the features supported by the SMB protocol and the NTFS file system.
Although most use cases and applications do not require these features, some applications may not work
properly with Azure Files if they rely on unsupported features. The following features are not supported:
SMB Direct
SMB directory leasing
VSS for SMB file shares (this enables VSS providers to flush their data to the SMB file share before a
snapshot is taken)
Alternate data streams
Extended attributes
Object identifiers
Hard links
Soft links
Reparse points
Sparse files
Short file names (8.3 alias)
Compression
Regional availability
SMB Azure file shares are available in every Azure region, including all public and sovereign regions. Premium
SMB file shares are available in a subset of regions.
Next steps
Plan for an Azure Files deployment
Create an Azure file share
Mount SMB file shares on your preferred operating system:
Mounting SMB file shares on Windows
Mounting SMB file shares on Linux
Mounting SMB file shares on macOS
NFS file shares in Azure Files
5/20/2022
Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the Server
Message Block (SMB) protocol and the Network File System (NFS) protocol, allowing you to pick the protocol
that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with
both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same storage
account. Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be
accessed concurrently by thousands of clients.
This article covers NFS Azure file shares. For information about SMB Azure file shares, see SMB file shares in
Azure Files.
IMPORTANT
Before using NFS file shares for production, see the Troubleshoot Azure NFS file shares article for a list of known issues.
Common scenarios
NFS file shares are often used in the following scenarios:
Backing storage for Linux/UNIX-based applications, such as line-of-business applications written using Linux
or POSIX file system APIs (even if they don't require POSIX-compliance).
Workloads that require POSIX-compliant file shares, case sensitivity, or Unix style permissions (UID/GID).
New application and service development, particularly if that application or service has a requirement for
random IO and hierarchical storage.
Features
Fully POSIX-compliant file system.
Hard link support.
Symbolic link support.
NFS file shares currently support most features of the NFS 4.1 protocol specification. Some features, such
as delegations and callbacks of all kinds, Kerberos authentication, and encryption in transit, are not supported.
Encryption at rest ✔
Encryption in transit ⛔
Private endpoints ✔
Subdirectory mounts ✔
Premium tier ✔
POSIX-permissions ✔
Root squash ✔
Identity-based authentication ⛔
AzCopy ⛔
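Mounting an NFS Azure file share on a Linux client follows the standard NFS 4.1 mount syntax; the account, share, and mount point names below are placeholders:

```shell
# Mount an NFS Azure file share on Linux (NFS 4.1 only).
# Replace mystorageacct and myshare with your own names.
sudo mkdir -p /mnt/myshare
sudo mount -t nfs mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/myshare \
    -o vers=4,minorversion=1,sec=sys
```

Because NFS shares rely on network-level security, this mount only succeeds from a virtual network permitted via service endpoints or private endpoints.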
Regional availability
Azure NFS file shares are supported in all the same regions that support premium file storage.
For the most up-to-date list, see the Premium Files Storage entry on the Azure Products available by region
page.
Performance
NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The
IOPS and throughput of NFS shares scale with the provisioned capacity. See the provisioned model section of
the Understanding billing article to understand the formulas for IOPS, IO bursting, and throughput. The
average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are
high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress may face
additional latencies due to the high number of open and close operations.
Workloads
IMPORTANT
Before using NFS file shares for production, see the Troubleshoot Azure NFS file shares article for a list of known issues.
NFS has been validated to work well with workloads such as SAP application layer, database backups, database
replication, messaging queues, home directories for general purpose file servers, and content repositories for
application workloads.
The following workloads have known issues (see the Troubleshoot Azure NFS file shares article for details):
Oracle Database will experience incompatibility with its dNFS feature.
Next steps
Create an NFS file share
Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS
Overview of Azure Files identity-based
authentication options for SMB access
5/20/2022
Azure Files supports identity-based authentication over Server Message Block (SMB) through on-premises
Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS). This
article focuses on how Azure file shares can use domain services, either on-premises or in Azure, to support
identity-based access to Azure file shares over SMB. Enabling identity-based access for your Azure file shares
allows you to replace existing file servers with Azure file shares without replacing your existing directory
service, maintaining seamless user access to shares.
Azure Files enforces authorization on user access at both the share and directory/file levels. Share-level
permission assignment can be performed on Azure Active Directory (Azure AD) users or groups managed
through the Azure role-based access control (Azure RBAC) model. With RBAC, the credentials you use for file
access should be available or synced to Azure AD. You can assign Azure built-in roles like Storage File Data SMB
Share Reader to users or groups in Azure AD to grant read access to an Azure file share.
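A share-level role assignment can be sketched with the Azure CLI; the assignee, subscription ID, resource group, account, and share names are all placeholders, and the scope follows the documented file share resource path:

```shell
# Grant a user read access to a single file share via a built-in RBAC role.
az role assignment create \
    --role "Storage File Data SMB Share Reader" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageacct/fileServices/default/fileshares/myshare"
```

Assigning the role at the storage account scope instead applies it to every share in the account.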
At the directory/file level, Azure Files supports preserving, inheriting, and enforcing Windows DACLs just like
any Windows file servers. You can choose to keep Windows DACLs when copying data over SMB between your
existing file share and your Azure file shares. Whether you plan to enforce authorization or not, you can use
Azure file shares to back up ACLs along with your data.
To learn how to enable on-premises Active Directory Domain Services authentication for Azure file shares, see
Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares.
To learn how to enable Azure AD DS authentication for Azure file shares, see Enable Azure Active Directory
Domain Services authentication on Azure Files.
Applies to
FILE SHARE TYPE | SMB | NFS
Glossary
It's helpful to understand some key terms relating to Azure AD Domain Service authentication over SMB for
Azure file shares:
Kerberos authentication
Kerberos is an authentication protocol that is used to verify the identity of a user or host. For more
information on Kerberos, see Kerberos Authentication Overview.
Server Message Block (SMB) protocol
SMB is an industry-standard network file-sharing protocol. SMB is also known as Common Internet File
System or CIFS. For more information on SMB, see Microsoft SMB Protocol and CIFS Protocol Overview.
Azure Active Directory (Azure AD)
Azure Active Directory (Azure AD) is Microsoft's multi-tenant cloud-based directory and identity
management service. Azure AD combines core directory services, application access management, and
identity protection into a single solution. Storing FSLogix profiles on Azure file shares for Azure AD-joined
VMs is currently in public preview. For more information, see Create a profile container with Azure Files
and Azure Active Directory (preview).
Azure Active Directory Domain Services (Azure AD DS)
Azure AD DS provides managed domain services such as domain join, group policies, LDAP, and
Kerberos/NTLM authentication. These services are fully compatible with Active Directory Domain
Services. For more information, see Azure Active Directory Domain Services.
On-premises Active Directory Domain Services (AD DS)
On-premises Active Directory Domain Services (AD DS) integration with Azure Files provides the
methods for storing directory data while making it available to network users and administrators.
Security is integrated with AD DS through logon authentication and access control to objects in the
directory. With a single network logon, administrators can manage directory data and organization
throughout their network, and authorized network users can access resources anywhere on the network.
AD DS is commonly adopted by enterprises in on-premises environments and AD DS credentials are
used as the identity for access control. For more information, see Active Directory Domain Services
Overview.
Azure role-based access control (Azure RBAC)
Azure role-based access control (Azure RBAC) enables fine-grained access management for Azure. Using
Azure RBAC, you can manage access to resources by granting users the fewest permissions needed to
perform their jobs. For more information on Azure RBAC, see What is Azure role-based access control
(Azure RBAC)?.
Supported scenarios
The following table summarizes the supported Azure file shares authentication scenarios for Azure AD DS and
on-premises AD DS. We recommend selecting the domain service that you adopted for your client environment
for integration with Azure Files. If you already have AD DS set up on-premises or in Azure, with your devices
domain joined to your AD, you should use AD DS for Azure file shares authentication.
Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
Azure AD DS authentication : Azure AD DS-joined Windows machines can access Azure file shares with Azure
AD credentials over SMB.
On-premises AD DS authentication : On-premises AD DS-joined or Azure AD DS-joined Windows machines
can access Azure file shares with on-premises Active Directory credentials that are synced to Azure AD over
SMB. Your client must have line of sight to your AD DS.
Restrictions
Azure AD DS and on-premises AD DS authentication do not support authentication against computer
accounts. You can consider using a service logon account instead.
Neither Azure AD DS authentication nor on-premises AD DS authentication is supported against Azure AD-
joined devices or Azure AD-registered devices.
Azure file shares only support identity-based authentication against one of the following domain services,
either Azure Active Directory Domain Services (Azure AD DS) or on-premises Active Directory Domain
Services (AD DS).
Neither identity-based authentication method is supported with Network File System (NFS) shares.
Azure AD DS
For Azure AD DS authentication, you should enable Azure AD Domain Services and domain join the VMs you
plan to access file data from. Your domain-joined VM must reside in the same virtual network (VNET) as your
Azure AD DS.
The following diagram represents the workflow for Azure AD DS authentication to Azure file shares over SMB. It
follows a similar pattern to on-premises AD DS authentication to Azure file shares. There are two major differences:
First, you don't need to create the identity in Azure AD DS to represent the storage account. This is
performed by the enablement process in the background.
Second, all users that exist in Azure AD can be authenticated and authorized. The user can be cloud only
or hybrid. The sync from Azure AD to Azure AD DS is managed by the platform without requiring any
user configuration. However, the client must be domain joined to Azure AD DS; it cannot be Azure AD
joined or registered.
Enable identity-based authentication
You can enable identity-based authentication with either Azure AD DS or on-premises AD DS for Azure file
shares on your new and existing storage accounts. Only one domain service can be used for file access
authentication on the storage account, which applies to all file shares in the account. Detailed guidance on
setting up your file shares for authentication with Azure AD DS is available in our article Enable Azure Active
Directory Domain Services authentication on Azure Files, and guidance for on-premises AD DS is in our other
article, Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares.
Configure share-level permissions for Azure Files
Once either Azure AD DS or on-premises AD DS authentication is enabled, you can use Azure built-in roles or
configure custom roles for Azure AD identities and assign access rights to any file shares in your storage
accounts. The assigned permission grants the identity access to the share itself and nothing else, not even the
root directory. You still need to separately configure directory or file-level permissions for Azure file
shares.
Configure directory or file-level permissions for Azure Files
Azure file shares enforce standard Windows file permissions at both the directory and file level, including the
root directory. Configuration of directory or file-level permissions is supported over both SMB and REST. Mount
the target file share from your VM and configure permissions using Windows File Explorer, Windows icacls, or
the Set-Acl PowerShell cmdlet.
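As a hedged illustration, after mounting the share (here assumed at drive Z:, with a hypothetical folder and domain user), directory-level Windows ACLs could be set with icacls:

```
:: Hypothetical drive letter, folder, and domain user; (OI)(CI)M grants
:: Modify rights that are inherited by child objects and containers.
icacls Z:\shared-folder /grant "CONTOSO\user:(OI)(CI)M"
```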
Use the storage account key for superuser permissions
A user with the storage account key can access Azure file shares with superuser permissions. Superuser
permissions bypass all access control restrictions.
IMPORTANT
Our recommended security best practice is to avoid sharing your storage account keys and leverage identity-based
authentication whenever possible.
Preserve directory and file ACLs when importing data to Azure file shares
Azure Files supports preserving directory or file level ACLs when copying data to Azure file shares. You can copy
ACLs on a directory or file to Azure file shares using either Azure File Sync or common file movement toolsets.
For example, you can use robocopy with the /copy:s flag to copy data as well as ACLs to an Azure file share.
ACLs are preserved by default; you are not required to enable identity-based authentication on your storage
account to preserve ACLs.
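As a sketch (source path, storage account, and share name are placeholders), a robocopy invocation that mirrors a directory tree and carries NTFS ACLs via the security (S) copy flag might look like:

```
:: /MIR mirrors the source tree; /COPY:DATS copies Data, Attributes,
:: Timestamps, and Security (NTFS ACLs). Placeholder paths and names.
robocopy C:\data \\<storage-account>.file.core.windows.net\<share-name> /MIR /COPY:DATS
```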
Pricing
There is no additional service charge to enable identity-based authentication over SMB on your storage account.
For more information on pricing, see Azure Files pricing and Azure AD Domain Services pricing.
Next steps
For more information about Azure Files and identity-based authentication over SMB, see these resources:
Planning for an Azure Files deployment
Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares
Enable Azure Active Directory Domain Services authentication on Azure Files
FAQ
Azure Files networking considerations
5/20/2022 • 12 minutes to read
You can access your Azure file shares over the public, internet-accessible endpoint, over one or more private
endpoints on your network(s), or by caching your Azure file share on-premises with Azure File Sync (SMB file
shares only). This article focuses on how to configure Azure Files for direct access over public and/or private
endpoints. To learn how to cache your Azure file share on-premises with Azure File Sync, see Introduction to
Azure File Sync.
We recommend reading Planning for an Azure Files deployment prior to reading this conceptual guide.
Directly accessing the Azure file share often requires additional thought with respect to networking:
SMB file shares communicate over port 445, which many organizations and internet service providers
(ISPs) block for outbound (internet) traffic. This practice originates from legacy security guidance about
deprecated and non-internet-safe versions of the SMB protocol. Although SMB 3.x is an internet-safe
protocol, it may not be possible to change organizational or ISP policies. Therefore, mounting an SMB file
share often requires additional networking configuration to use outside of Azure.
NFS file shares rely on network-level authentication and are therefore only accessible via restricted
networks. Using an NFS file share always requires some level of networking configuration.
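Before attempting to mount an SMB share from outside Azure, it can help to verify that outbound port 445 isn't blocked. One way to check from PowerShell (the storage account name is a placeholder):

```
# Placeholder storage account name; a TcpTestSucceeded value of True
# in the output indicates that port 445 is reachable.
Test-NetConnection -ComputerName <storage-account>.file.core.windows.net -Port 445
```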
Configuring public and private endpoints for Azure Files is done on the top-level management object for Azure
Files, the Azure storage account. A storage account is a management construct that represents a shared pool of
storage in which you can deploy multiple Azure file shares, as well as the storage resources for other Azure
storage services, such as blob containers or queues.
https://www.youtube-nocookie.com/embed/jd49W33DxkQ
This video is a guide and demo for how to securely expose Azure file shares directly to information workers and
apps in five simple steps. The sections below provide links and additional context to the documentation
referenced in the video.
Applies to
FILE SHARE TYPE    SMB    NFS
Secure transfer
By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public
or private endpoint. For Azure Files, the require secure transfer setting is enforced for all protocol access to
the data stored on Azure file shares, including SMB, NFS, and FileREST. The require secure transfer setting
may be disabled to allow unencrypted traffic. You may also see this setting mislabeled as "require secure
transfer for REST API operations".
The SMB, NFS, and FileREST protocols have slightly different behavior with respect to the require secure
transfer setting:
When require secure transfer is enabled on a storage account, all SMB file shares in that storage
account will require the SMB 3.x protocol with AES-128-CCM, AES-128-GCM, or AES-256-GCM
encryption algorithms, depending on the available/required encryption negotiation between the SMB
client and Azure Files. You can toggle which SMB encryption algorithms are allowed via the SMB security
settings. Disabling the require secure transfer setting enables SMB 2.1 and SMB 3.x mounts without
encryption.
NFS file shares do not support an encryption mechanism, so in order to use the NFS protocol to access
an Azure file share, you must disable require secure transfer for the storage account.
When secure transfer is required, the FileREST protocol may only be used with HTTPS. FileREST is only
supported on SMB file shares today.
Public endpoint
The public endpoint for the Azure file shares within a storage account is an internet exposed endpoint. The
public endpoint is the default endpoint for a storage account, however, it can be disabled if desired.
The SMB, NFS, and FileREST protocols can all use the public endpoint. However, each has slightly different rules
for access:
SMB file shares are accessible from anywhere in the world via the storage account's public endpoint with
SMB 3.x with encryption. This means that authenticated requests, such as requests authorized by a user's
logon identity, can originate securely from inside or outside of the Azure region. If SMB 2.1 or SMB 3.x
without encryption is desired, two conditions must be met:
1. The storage account's require secure transfer setting must be disabled.
2. The request must originate from inside of the Azure region. As previously mentioned, encrypted SMB
requests are allowed from anywhere, inside or outside of the Azure region.
NFS file shares are accessible from the storage account's public endpoint if and only if the storage
account's public endpoint is restricted to specific virtual networks using service endpoints. See public
endpoint firewall settings for additional information on service endpoints.
FileREST is accessible via the public endpoint. If secure transfer is required, only HTTPS requests are
accepted. If secure transfer is disabled, HTTP requests are accepted by the public endpoint regardless of
origin.
Public endpoint firewall settings
The storage account firewall restricts access to the public endpoint for a storage account. Using the storage
account firewall, you can restrict access to certain IP addresses/IP address ranges, to specific virtual networks, or
disable the public endpoint entirely.
When you restrict the traffic of the public endpoint to one or more virtual networks, you are using a capability of
the virtual network called service endpoints. Requests directed to the service endpoint of Azure Files are still
going to the storage account public IP address; however, the networking layer is doing additional verification of
the request to validate that it is coming from an authorized virtual network. The SMB, NFS, and FileREST
protocols all support service endpoints. Unlike SMB and FileREST, however, NFS file shares can only be accessed
with the public endpoint through use of a service endpoint.
To learn more about how to configure the storage account firewall, see configure Azure storage firewalls and
virtual networks.
Public endpoint network routing
Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all
Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File
Sync.
Private endpoints
In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or
more private endpoints. A private endpoint is an endpoint that is only accessible within an Azure virtual
network. When you create a private endpoint for your storage account, your storage account gets a private IP
address from within the address space of your virtual network, much like how an on-premises file server or
NAS device receives an IP address within the dedicated address space of your on-premises network.
An individual private endpoint is associated with a specific Azure virtual network subnet. A storage account may
have private endpoints in more than one virtual network.
Using private endpoints with Azure Files enables you to:
Securely connect to your Azure file shares from on-premises networks using a VPN or ExpressRoute
connection with private-peering.
Secure your Azure file shares by configuring the storage account firewall to block all connections on the
public endpoint. By default, creating a private endpoint does not block connections to the public endpoint.
Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network
(and peering boundaries).
To create a private endpoint, see Configuring private endpoints for Azure Files.
Tunneling traffic over a virtual private network or ExpressRoute
To use private endpoints to access SMB or NFS file shares from on-premises, you must establish a network
tunnel between your on-premises network and Azure. A virtual network, or VNet, is similar to a traditional on-
premises network. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed
in a resource group.
Azure Files supports the following mechanisms to tunnel traffic between your on-premises workstations and
servers and Azure SMB/NFS file shares:
Azure VPN Gateway: A VPN gateway is a specific type of virtual network gateway that is used to send
encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the
internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group alongside of
a storage account or other Azure resources. VPN gateways expose two different types of connections:
Point-to-Site (P2S) VPN gateway connections, which are VPN connections between Azure and an
individual client. This solution is primarily useful for devices that are not part of your organization's
on-premises network. A common use case is for telecommuters who want to be able to mount their
Azure file share from home, a coffee shop, or hotel while on the road. To use a P2S VPN connection
with Azure Files, you'll need to configure a P2S VPN connection for each client that wants to connect.
To simplify the deployment of a P2S VPN connection, see Configure a Point-to-Site (P2S) VPN on
Windows for use with Azure Files and Configure a Point-to-Site (P2S) VPN on Linux for use with Azure
Files.
Site-to-Site (S2S) VPN, which are VPN connections between Azure and your organization's network. A
S2S VPN connection enables you to configure a VPN connection once for a VPN server or device
hosted on your organization's network, rather than configuring a connection for every client device
that needs to access your Azure file share. To simplify the deployment of a S2S VPN connection, see
Configure a Site-to-Site (S2S) VPN for use with Azure Files.
ExpressRoute, which enables you to create a defined route between Azure and your on-premises network
that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-
premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration.
ExpressRoute is also a good option when your organization's policy or regulatory requirements require a
deterministic path to your resources in the cloud.
NOTE
Although we recommend using private endpoints to assist in extending your on-premises network into Azure, it is
technically possible to route to the public endpoint over the VPN connection. However, this requires hard-coding the IP
address for the public endpoint for the Azure storage cluster that serves your storage account. Because storage accounts
may be moved between storage clusters at any time and new clusters are frequently added and removed, this requires
regularly hard-coding all the possible Azure storage IP addresses into your routing rules.
DNS configuration
When you create a private endpoint, by default we also create (or update an existing) private DNS zone
corresponding to the privatelink subdomain. Strictly speaking, creating a private DNS zone is not required to
use a private endpoint for your storage account. However, it is highly recommended in general and explicitly
required when mounting your Azure file share with an Active Directory user principal or accessing it from the
FileREST API.
NOTE
This article uses the storage account DNS suffix for the Azure Public regions, core.windows.net . This commentary also
applies to Azure Sovereign clouds such as the Azure US Government cloud and the Azure China cloud - just substitute
the appropriate suffixes for your environment.
For this example, the storage account storageaccount.file.core.windows.net resolves to the private IP address of
the private endpoint, which happens to be 192.168.0.4 .
Name                                   Type   TTL  Section  NameHost
----                                   ----   ---  -------  --------
storageaccount.file.core.windows.net   CNAME  29   Answer   csostoracct.privatelink.file.core.windows.net
Name : storageaccount.privatelink.file.core.windows.net
QueryType : A
TTL : 1769
Section : Answer
IP4Address : 192.168.0.4
Name : privatelink.file.core.windows.net
QueryType : SOA
TTL : 269
Section : Authority
NameAdministrator : azureprivatedns-host.microsoft.com
SerialNumber : 1
TimeToZoneRefresh : 3600
TimeToZoneFailureRetry : 300
TimeToExpiration : 2419200
DefaultTTL : 300
If you run the same command from on-premises, you'll see that the same storage account name resolves to the
public IP address of the storage account instead; storageaccount.file.core.windows.net is a CNAME record for
storageaccount.privatelink.file.core.windows.net , which in turn is a CNAME record for the Azure storage
cluster hosting the storage account:
Name : file.par20prdstr01a.store.core.windows.net
QueryType : A
TTL : 60
Section : Answer
IP4Address : 52.239.194.40
This reflects the fact that the storage account can expose both the public endpoint and one or more private
endpoints. To ensure that the storage account name resolves to the private endpoint's private IP address, you
must change the configuration on your on-premises DNS servers. This can be accomplished in several ways:
Modifying the hosts file on your clients to make storageaccount.file.core.windows.net resolve to the desired
private endpoint's private IP address. This is strongly discouraged for production environments, because you
will need to make these changes to every client that wants to mount your Azure file shares, and changes to
the storage account or private endpoint will not be automatically handled.
Creating an A record for storageaccount.file.core.windows.net in your on-premises DNS servers. This has
the advantage that clients in your on-premises environment will be able to automatically resolve the storage
account without needing to configure each client. However, this solution is similarly brittle to modifying the
hosts file, because changes to the storage account or private endpoint are not automatically reflected.
Although this solution is brittle, it may be the best choice for some environments.
Forward the core.windows.net zone from your on-premises DNS servers to your Azure private DNS zone.
The Azure private DNS host can be reached through a special IP address ( 168.63.129.16 ) that is only
accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this
limitation, you can run additional DNS servers within your virtual network that will forward
core.windows.net on to the Azure private DNS zone. To simplify this setup, we have provided PowerShell
cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To
learn how to set up DNS forwarding, see Configuring DNS with Azure Files.
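On a Windows Server DNS server, a conditional forwarder along the lines of the following sketch could send core.windows.net queries to a DNS forwarder running inside the Azure virtual network (the forwarder IP address is a placeholder):

```
# Hypothetical IP of a DNS forwarder VM inside the Azure virtual network;
# requires the DnsServer PowerShell module on a Windows Server DNS server.
Add-DnsServerConditionalForwarderZone -Name "core.windows.net" -MasterServers 10.0.0.5
```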
See also
Azure Files overview
Planning for an Azure Files deployment
Disaster recovery and storage account failover
5/20/2022 • 13 minutes to read
Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may
occur. If your application requires resiliency, Microsoft recommends using geo-redundant storage, so that your
data is copied to a second region. Additionally, customers should have a disaster recovery plan in place for
handling a regional service outage. An important part of a disaster recovery plan is preparing to fail over to the
secondary endpoint in the event that the primary endpoint becomes unavailable.
Azure Storage supports account failover for geo-redundant storage accounts. With account failover, you can
initiate the failover process for your storage account if the primary endpoint becomes unavailable. The failover
updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is
complete, clients can begin writing to the new primary endpoint.
Account failover is available for general-purpose v1, general-purpose v2, and Blob storage account types with
Azure Resource Manager deployments. Account failover is not supported for storage accounts with a
hierarchical namespace enabled.
This article describes the concepts and process involved with an account failover and discusses how to prepare
your storage account for recovery with the least amount of customer impact. To learn how to initiate an account
failover in the Azure portal or PowerShell, see Initiate an account failover.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Track outages
Customers may subscribe to the Azure Service Health Dashboard to track the health and status of Azure Storage
and other Azure services.
Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your
application should expose write failures in a way that alerts you to the possibility of an outage in the primary
region.
If the primary endpoint becomes unavailable for any reason, the client is no longer able to write to the storage
account. The following image shows the scenario where the primary has become unavailable, but no recovery
has happened yet:
The customer initiates the account failover to the secondary endpoint. The failover process updates the DNS
entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your
storage account, as shown in the following image:
Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are
being directed to the new primary endpoint. Existing storage service endpoints for blobs, tables, queues, and
files remain the same after the failover.
IMPORTANT
After the failover is complete, the storage account is configured to be locally redundant in the new primary region. To
resume replication to the new secondary, configure the account for geo-redundancy again.
Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For
more information, see Important implications of account failover.
An account failover usually involves some data loss. It's important to understand the implications of initiating an
account failover.
Because data is written asynchronously from the primary region to the secondary region, there is always a delay
before a write to the primary region is copied to the secondary region. If the primary region becomes
unavailable, the most recent writes may not yet have been copied to the secondary region.
When you force a failover, all data in the primary region is lost as the secondary region becomes the new
primary region. The new primary region is configured to be locally redundant after the failover.
All data already copied to the secondary is maintained when the failover happens. However, any data written to
the primary that has not also been copied to the secondary is lost permanently.
The Last Sync Time property indicates the most recent time that data from the primary region is guaranteed
to have been written to the secondary region. All data written prior to the last sync time is available on the
secondary, while data written after the last sync time may not have been written to the secondary and may be
lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating
an account failover.
As a best practice, design your application so that you can use the last sync time to evaluate expected data loss.
For example, if you are logging all write operations, then you can compare the time of your last write operations
to the last sync time to determine which writes have not been synced to the secondary.
For more information about checking the Last Sync Time property, see Check the Last Sync Time property for
a storage account.
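As a sketch of the best practice above, the following Python snippet (with hypothetical timestamps) compares an application's write log against the Last Sync Time to identify writes that may not have reached the secondary region:

```python
from datetime import datetime, timezone

def writes_at_risk(write_times, last_sync_time):
    """Return the write timestamps that occurred after the Last Sync Time.

    Writes before last_sync_time are guaranteed to be on the secondary;
    anything after it may be lost if you initiate an account failover.
    """
    return [t for t in write_times if t > last_sync_time]

# Hypothetical Last Sync Time and application write log.
last_sync = datetime(2022, 5, 20, 12, 0, tzinfo=timezone.utc)
writes = [
    datetime(2022, 5, 20, 11, 58, tzinfo=timezone.utc),  # already replicated
    datetime(2022, 5, 20, 12, 1, tzinfo=timezone.utc),   # may be lost
    datetime(2022, 5, 20, 12, 5, tzinfo=timezone.utc),   # may be lost
]

at_risk = writes_at_risk(writes, last_sync)
print(len(at_risk))  # prints 2
```

In a real deployment, last_sync would come from the storage account's geo-replication statistics rather than a hard-coded value.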
Use caution when failing back to the original primary
After you fail over from the primary to the secondary region, your storage account is configured to be locally
redundant in the new primary region. You can then configure the account in the new primary region for geo-
redundancy. When the account is configured for geo-redundancy after a failover, the new primary region
immediately begins copying data to the new secondary region, which was the primary before the original
failover. However, it may take some time before existing data in the new primary is fully copied to the new
secondary.
After the storage account is reconfigured for geo-redundancy, it's possible to initiate a failback from the new
primary to the new secondary. In this case, the original primary region prior to the failover becomes the
primary region again, and is configured to be either locally redundant or zone-redundant, depending on
whether the original primary configuration was GRS/RA-GRS or GZRS/RA-GZRS. All data in the post-failover
primary region (the original secondary) is lost during the failback. If most of the data in the storage account has
not been copied to the new secondary before you fail back, you could suffer a major data loss.
To avoid a major data loss, check the value of the Last Sync Time property before failing back. Compare the
last sync time to the last times that data was written to the new primary to evaluate expected data loss.
After a failback operation, you can configure the new primary region to be geo-redundant again. If the original
primary was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was
configured for ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see Change how a
storage account is replicated.
Additional considerations
Review the additional considerations described in this section to understand how your applications and services
may be affected when you force a failover.
Storage account containing archived blobs
Storage accounts containing archived blobs support account failover. After failover is complete, all archived
blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
Storage resource provider
Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all
actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in
your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API
enables you to manage the storage account and related resources.
After a failover is complete, clients can again read and write Azure Storage data in the new primary region.
However, the Azure Storage resource provider does not fail over, so resource management operations must still
take place in the primary region. If the primary region is unavailable, you will not be able to perform
management operations on the storage account.
Because the Azure Storage resource provider does not fail over, the Location property will return the original
primary location after the failover is complete.
Azure virtual machines
Azure virtual machines (VMs) do not fail over as part of an account failover. If the primary region becomes
unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover.
Also, there is a potential data loss associated with the account failover. Microsoft recommends the following
high availability and disaster recovery guidance specific to virtual machines in Azure.
Azure unmanaged disks
As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need
to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the
VM before initiating the failover.
Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged
disks attached to the VM are leased. An account failover cannot proceed when there is a lease on a blob. To
perform the failover, follow these steps:
1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to
which they are attached. Doing so will make it easier to reattach the disks after the failover.
2. Shut down the VM.
3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
4. Wait until the Last Sync Time has updated, and is later than the time at which you deleted the VM. This step
is important, because if the secondary endpoint has not been fully updated with the VHD files when the
failover occurs, then the VM may not function properly in the new primary region.
5. Initiate the account failover.
6. Wait until the account failover is complete and the secondary region has become the new primary region.
7. Create a VM in the new primary region and reattach the VHDs.
8. Start the new VM.
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
An account failover should not be used as part of your data migration strategy.
Microsoft-managed failover
In extreme circumstances where a region is lost due to a significant disaster, Microsoft may initiate a regional
failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you
won't have write access to your storage account. Your applications can read from the secondary region if your
storage account is configured for RA-GRS or RA-GZRS.
See also
Use geo-redundancy to design highly available applications
Initiate an account failover
Check the Last Sync Time property for a storage account
Tutorial: Build a highly available application with Blob storage
Overview of share snapshots for Azure Files
5/20/2022 • 6 minutes to read
Azure Files provides the capability to take share snapshots of file shares. Share snapshots capture the share
state at that point in time. In this article, we describe what capabilities share snapshots provide and how you can
take advantage of them in your custom use case.
Applies to
FILE SHARE TYPE | SMB | NFS
Capabilities
A share snapshot is a point-in-time, read-only copy of your data. You can create, delete, and manage snapshots
by using the REST API. The same capabilities are also available in the client library, Azure CLI, and Azure portal.
You can view snapshots of a share by using both the REST API and SMB. You can retrieve the list of versions of
the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see
Limits).
After a share snapshot is created, it can be read, copied, or deleted, but not modified. You can't copy a whole
share snapshot to another storage account. You have to do that file by file, by using AzCopy or other copying
mechanisms.
Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to
allow for restoring individual files. You can restore a complete file share by using SMB, the REST API, the portal,
the client library, or PowerShell/CLI tooling.
A share snapshot of a file share is identical to its base file share. The only difference is that a DateTime value is
appended to the share URI to indicate the time at which the share snapshot was taken. For example, if a file
share URI is http://storagesample.file.core.windows.net/myshare, the share snapshot URI is similar to:
http://storagesample.file.core.windows.net/myshare?snapshot=2011-03-09T01:42:34.9360000Z
Share snapshots persist until they are explicitly deleted. A share snapshot cannot outlive its base file share. You
can enumerate the snapshots associated with the base file share to track your current snapshots.
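The URI convention above can be illustrated with a short Python sketch. The helper names are hypothetical, introduced only for illustration, and the account name is the sample one from the example:

```python
from urllib.parse import urlparse, parse_qs

def snapshot_uri(share_uri: str, snapshot_time: str) -> str:
    """Append the snapshot DateTime query parameter to a base share URI."""
    return f"{share_uri}?snapshot={snapshot_time}"

def snapshot_time_of(uri: str) -> str:
    """Extract the snapshot timestamp back out of a snapshot URI."""
    return parse_qs(urlparse(uri).query)["snapshot"][0]

base = "http://storagesample.file.core.windows.net/myshare"
uri = snapshot_uri(base, "2011-03-09T01:42:34.9360000Z")
print(uri)
print(snapshot_time_of(uri))  # -> 2011-03-09T01:42:34.9360000Z
```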
When you create a share snapshot of a file share, the share's system properties are copied to the share
snapshot with the same values. The base files and the file share's metadata are also copied to the share
snapshot, unless you specify separate metadata for the share snapshot when you create it.
You cannot delete a share that has share snapshots unless you delete all the share snapshots first.
Space usage
Share snapshots are incremental in nature. Only the data that has changed after your most recent share
snapshot is saved. This minimizes the time required to create the share snapshot and saves on storage costs.
Any write operation to the object or property or metadata update operation is counted toward "changed
content" and is stored in the share snapshot.
To conserve space, you can delete the share snapshot for the period when the churn was highest.
Even though share snapshots are saved incrementally, you need to retain only the most recent share snapshot in
order to restore the share. When you delete a share snapshot, only the data unique to that share snapshot is
removed. Active snapshots contain all the information that you need to browse and restore your data (from the
time the share snapshot was taken) to the original location or an alternate location. You can restore at the item
level.
Snapshots don't count towards the share size limit. There is no limit to how much space share snapshots occupy
in total. Storage account limits still apply.
Limits
The maximum number of share snapshots that Azure Files allows today is 200. After 200 share snapshots, you
have to delete older share snapshots in order to create new ones.
There is no limit to the number of simultaneous calls for creating share snapshots. There is no limit to the
amount of space that share snapshots of a particular file share can consume.
Today, it is not possible to mount share snapshots on Linux. This is because the Linux SMB client does not
support mounting snapshots like Windows does.
Next steps
Working with share snapshots in:
Azure file share backup
PowerShell
CLI
Windows
Share snapshot FAQ
SMB Multichannel performance
5/20/2022 • 7 minutes to read
SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share.
Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account
kind). There is no additional cost for enabling SMB Multichannel in Azure Files. SMB Multichannel is disabled by
default.
Applies to
FILE SHARE TYPE | SMB | NFS
Benefits
SMB Multichannel enables clients to use multiple network connections that provide increased performance
while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over
multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the IO load across multiple
CPUs.
Increased throughput : Multiple connections allow data to be transferred over multiple paths in parallel,
which significantly benefits workloads that use larger file sizes with larger IO sizes and require high
throughput from a single VM or a smaller set of VMs. Some of these workloads include media and
entertainment for content creation or transcoding, genomics, and financial services risk analysis.
Higher IOPS : NIC RSS capability allows effective load distribution across multiple CPUs with multiple
connections. This helps achieve higher IOPS scale and effective utilization of VM CPUs. This is useful for
workloads that have small IO sizes, such as database applications.
Network fault tolerance : Multiple connections mitigate the risk of disruption since clients no longer rely
on an individual connection.
Automatic configuration : When SMB Multichannel is enabled on clients and storage accounts, it allows for
dynamic discovery of existing connections, and can create additional connection paths as necessary.
Cost optimization : Workloads can achieve higher scale from a single VM, or a small set of VMs, while
connecting to premium shares. This could reduce the total cost of ownership by reducing the number of VMs
necessary to run and manage a workload.
To learn more about SMB Multichannel, refer to the Windows documentation.
This feature provides greater performance benefits to multi-threaded applications but typically does not help
single-threaded applications. See the Performance comparison section for more details.
Limitations
SMB Multichannel for Azure file shares currently has the following restrictions:
Only supported on Windows and Linux clients that are using SMB 3.1.1. Ensure SMB client operating systems
are patched to recommended levels.
The maximum number of channels is four; for details, see here.
Configuration
SMB Multichannel only works when the feature is enabled on both client-side (your client) and service-side
(your Azure storage account).
On Windows clients, SMB Multichannel is enabled by default. You can verify your configuration by running the
following PowerShell command:
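For example, on Windows the client-side setting can be checked with the standard SMB client cmdlet (shown as a sketch; confirm the property name against your Windows version):

```powershell
# Returns True when SMB Multichannel is enabled on this client
Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
```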
On your Azure storage account, you will need to enable SMB Multichannel. See Enable SMB Multichannel.
Disable SMB Multichannel
In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB
Multichannel. However, in some specific scenarios, such as single-threaded workloads or testing, you may want
to disable SMB Multichannel. See Performance comparison for more details.
Performance comparison
There are two categories of read/write workload patterns - single-threaded and multi-threaded. Most workloads
use multiple files, but there could be specific use cases where the workload works with a single file in a share.
This section covers different use cases and the performance impact for each of them. In general, most workloads
are multi-threaded and distribute workload over multiple files so they should observe significant performance
improvements with SMB Multichannel.
Multi-threaded/multiple files : Depending on the workload pattern, you should see significant
performance improvement in read and write IOs over multiple channels. The performance gains vary from
anywhere between 2x to 4x in terms of IOPS, throughput, and latency. For this category, SMB Multichannel
should be enabled for the best performance.
Multi-threaded/single file : For most use cases in this category, workloads will benefit from having SMB
Multichannel enabled, especially if the workload has an average IO size > ~16k. A few example scenarios
that benefit from SMB Multichannel are backup or recovery of a single large file. An exception where you
may want to disable SMB Multichannel is if your workload is small IOs heavy. In that case, you may observe a
slight performance loss of ~10%. Depending on the use case, consider spreading load across multiple files,
or disable the feature. See the Configuration section for details.
Single-threaded/multiple files or single file : For most single-threaded workloads, there are minimal
performance benefits due to the lack of parallelism; usually there is a slight performance degradation of ~10%
if SMB Multichannel is enabled. In this case, it's ideal to disable SMB Multichannel, with one exception: if the
single-threaded workload can distribute load across multiple files and uses a larger average IO size (>
~16k), then there should be slight performance benefits from SMB Multichannel.
Performance test configuration
For the charts in this article, the following configuration was used: a single Standard D32s v3 VM with a single
RSS-enabled NIC with four channels. Load was generated using diskspd.exe, multi-threaded with an IO depth of
10, and random IOs with various IO sizes.
[Table: performance test VM specifications, including size, vCPUs, memory (GiB), temp storage SSD size (GiB),
max data disks, max cached and temp storage throughput (IOPS/MBps, with cache size in GiB), max uncached
disk throughput (IOPS/MBps), and max NICs / expected network bandwidth (Mbps)]
Optimizing performance
The following tips may help you optimize your performance:
Ensure that your storage account and your client are colocated in the same Azure region to reduce network
latency.
Use multi-threaded applications and spread load across multiple files.
Performance benefits of SMB Multichannel increase with the number of files distributing load.
Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits.
For details, see Understanding provisioning for premium file shares.
Maximum performance of a single VM client is still bound to VM limits. For example, Standard_D32s_v3 can
support a maximum network bandwidth of 16,000 Mbps (2 GBps); egress from the VM (writes to storage) is
metered, while ingress (reads from storage) is not. File share performance is subject to machine network limits,
CPUs, available network bandwidth, IO sizes, parallelism, and other factors.
The initial test is usually a warm-up; discard its results and repeat the test.
If performance is limited by a single client and workload is still below provisioned share limits, higher
performance can be achieved by spreading load over multiple clients.
The relationship between IOPS, throughput, and IO sizes
Throughput = IO size * IOPS
Larger IO sizes drive higher throughput but have higher latencies, resulting in a lower number of net IOPS.
Smaller IO sizes drive higher IOPS but result in lower net throughput and lower latencies.
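The formula can be illustrated with a short Python sketch; the IO sizes and IOPS figures below are illustrative examples, not Azure limits:

```python
def throughput_mib_per_sec(io_size_kib: float, iops: float) -> float:
    """Throughput = IO size * IOPS, expressed in MiB/sec."""
    return io_size_kib * iops / 1024

# Small IOs: high IOPS, modest throughput.
print(throughput_mib_per_sec(4, 20_000))    # 4 KiB * 20,000 IOPS -> 78.125 MiB/sec
# Large IOs: far fewer IOPS yield much higher throughput.
print(throughput_mib_per_sec(1024, 500))    # 1 MiB * 500 IOPS -> 500.0 MiB/sec
```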
Next steps
Enable SMB Multichannel
See the Windows documentation to learn more about SMB Multichannel.
Azure Files scalability and performance targets
5/20/2022 • 10 minutes to read
Azure Files offers fully managed file shares in the cloud that are accessible via the SMB and NFS file system
protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.
The targets listed here might be affected by other variables in your deployment. For example, the performance
of IO for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You
should test your usage pattern to determine whether the scalability and performance of Azure Files meet your
requirements. You should also expect these limits will increase over time.
Applies to
FILE SHARE TYPE | SMB | NFS
Maximum number of file shares: unlimited (standard); unlimited (premium), although the total provisioned size
of all shares must be less than the maximum storage account capacity.
Throughput (ingress + egress) for ZRS in Australia East, Central US, East US, East US 2, Japan East, North
Europe, South Central US, Southeast Asia, UK South, West Europe, and West US 2: ingress 7,152 MiB/sec and
egress 14,305 MiB/sec (standard); 10,340 MiB/sec (premium).
Management write operations: 10 per second / 1,200 per hour (standard and premium).
1 General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by
request. To request an increase in account limits, contact Azure Support.
Azure file share scale targets
Maximum size of a file share: 5 TiB by default, or 100 TiB with the large file share feature enabled2
(standard1); 100 TiB (premium).
Maximum request rate (max IOPS): 1,000 or 100 requests per 100 ms by default, or 20,000 with the large file
share feature enabled2 (standard1); baseline IOPS of 3,000 + 1 IOPS per GiB, up to 100,000, with IOPS
bursting up to Max(10,000, 3x baseline IOPS), up to 100,000 (premium).
Throughput (ingress + egress) for a single file share: up to 60 MiB/sec by default, or up to 300 MiB/sec with
the large file share feature enabled2 (standard1); 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 *
ProvisionedGiB) MiB/sec (premium).
1 The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction
optimized, hot, and cool.
2 The default on standard file shares is 5 TiB; see Create an Azure file share for details on how to create file
shares with 100 TiB size and increase existing standard file shares up to 100 TiB. To take advantage of the larger
scale targets, you must change your quota so that it is larger than 5 TiB.
File scale targets
Maximum ingress for a file: 60 MiB/sec (standard); 200 MiB/sec, up to 1 GiB/s with SMB Multichannel2
(premium).
Maximum egress for a file: 60 MiB/sec (standard); 300 MiB/sec, up to 1 GiB/s with SMB Multichannel2
(premium).
1 Applies to read and write IOs (typically smaller IO sizes, less than or equal to 64 KiB). Metadata operations,
other than reads and writes, may be lower.
2 Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details,
see SMB Multichannel performance.
Azure File Sync scale targets
Storage Sync Services per region: 100 Storage Sync Services (hard limit: yes)
Sync groups per Storage Sync Service: 200 sync groups (hard limit: yes)
Minimum file size for a file to be tiered: based on the file system cluster size (double the file system cluster
size). For example, if the file system cluster size is 4 KiB, the minimum file size will be 8 KiB. (hard limit: yes)
NOTE
An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync
will not be able to operate.
NOTE
When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service
resources. As a result, upload performance will be impacted. In extreme cases, some sync sessions will fail to access the
resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is
reduced.
To help you plan your deployment for each of the stages, below are the results observed during internal
testing.
[Tables: system configuration details, initial one-time provisioning details, and ongoing sync details]
*If cloud tiering is enabled, you are likely to observe better performance as only some of the file data is
downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the
endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only
syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as
they are accessed by the user.
NOTE
The numbers above are not an indication of the performance that you will experience. The actual performance will depend
on multiple factors as outlined in the beginning of this section.
As a general guide for your deployment, you should keep a few things in mind:
The object throughput approximately scales in proportion to the number of sync groups on the server.
Splitting data into multiple sync groups on a server yields better throughput, which is also limited by the
server and network.
The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will
experience higher throughput in terms of the number of objects processed per second, but lower MiB per
second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher
MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
See also
Planning for an Azure Files deployment
Planning for an Azure File Sync deployment
Understand Azure Files billing
5/20/2022 • 25 minutes to read
Azure Files provides two distinct billing models: provisioned and pay-as-you-go. The provisioned model is only
available for premium file shares, which are file shares deployed in the FileStorage storage account kind. The
pay-as-you-go model is only available for standard file shares, which are file shares deployed in the general
purpose version 2 (GPv2) storage account kind. This article explains how both models work in order to help
you understand your monthly Azure Files bill.
https://www.youtube-nocookie.com/embed/m5_-GsKv4-o
This video is an interview that discusses the basics of the Azure Files billing model, including how to optimize
Azure file shares to achieve the lowest costs possible and how to compare Azure Files to other file storage
offerings on-premises and in the cloud.
For Azure Files pricing information, see Azure Files pricing page.
Applies to
FILE SHARE TYPE | SMB | NFS
Storage units
Azure Files uses the base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB.
ACRONYM | DEFINITION | UNIT
KiB | 1,024 bytes | kibibyte
MiB | 1,024 KiB (1,048,576 bytes) | mebibyte
GiB | 1,024 MiB (1,073,741,824 bytes) | gibibyte
TiB | 1,024 GiB (1,099,511,627,776 bytes) | tebibyte
Although the base-2 units of measure are commonly used by most operating systems and tools to measure
storage quantities, they are frequently mislabeled as the base-10 units, which you may be more familiar with:
KB, MB, GB, and TB. Although the reasons for the mislabeling may vary, the common reason why operating
systems like Windows mislabel the storage units is because many operating systems began using these
acronyms before they were standardized by the IEC, BIPM, and NIST.
The following table shows how common operating systems measure and label storage:
OPERATING SYSTEM | MEASUREMENT SYSTEM | LABELING
Linux distributions | Commonly base-2, although some software may use base-10 | Inconsistent labeling;
alignment between measurement and labeling depends on the software package
Check with your operating system vendor if your operating system is not listed.
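The gap between the two measurement systems can be seen with a short, purely illustrative Python sketch:

```python
KIB, MIB, GIB, TIB = 1024, 1024**2, 1024**3, 1024**4   # base-2 (IEC) units
KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12           # base-10 (SI) units

# A "1 TB" drive as labeled in base-10 holds fewer TiB than the label suggests:
print(TB / TIB)  # ~0.909 TiB

# The discrepancy grows with unit size:
for name, base2, base10 in [("K", KIB, KB), ("M", MIB, MB), ("G", GIB, GB), ("T", TIB, TB)]:
    print(f"{name}: base-2 unit is {base2 / base10:.3f}x the base-10 unit")
```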
Reserve capacity
Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-
committing to storage utilization. You should consider purchasing reserved instances for any production
workload, or dev/test workloads with consistent footprints. When you purchase reserved capacity, your
reservation must specify the following dimensions:
Capacity size : Capacity reservations can be for either 10 TiB or 100 TiB, with more significant discounts for
purchasing a higher capacity reservation. You can purchase multiple reservations, including reservations of
different capacity sizes to meet your workload requirements. For example, if your production deployment
has 120 TiB of file shares, you could purchase one 100 TiB reservation and two 10 TiB reservations to meet
the total capacity requirements.
Term : Reservations can be purchased for either a one-year or three-year term, with more significant
discounts for purchasing a longer reservation term.
Tier : The tier of Azure Files for the capacity reservation. Reservations for Azure Files currently are available
for the premium, hot, and cool tiers.
Location : The Azure region for the capacity reservation. Capacity reservations are available in a subset of
Azure regions.
Redundancy : The storage redundancy for the capacity reservation. Reservations are supported for all
redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
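The 120 TiB example above generalizes into a simple covering calculation. The sketch below is illustrative only; `reservation_split` is a hypothetical helper, not part of any Azure tooling, and it assumes the two published reservation sizes of 100 TiB and 10 TiB:

```python
import math

def reservation_split(required_tib: int) -> tuple[int, int]:
    """Cover the required capacity with 100 TiB and 10 TiB reservations.

    Uses as many 100 TiB reservations as fit, then rounds the remainder
    up to whole 10 TiB units (reservations come only in those two sizes).
    """
    hundreds, remainder = divmod(required_tib, 100)
    tens = math.ceil(remainder / 10)
    return hundreds, tens

print(reservation_split(120))  # -> (1, 2): one 100 TiB plus two 10 TiB reservations
```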
Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization.
If you use more storage than you have reserved, you will pay list price for the balance not covered by the
capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the
reservation.
For more information on how to purchase storage reservations, see Optimize costs for Azure Files with reserved
capacity.
Provisioned model
Azure Files uses a provisioned model for premium file shares. In a provisioned business model, you proactively
specify to the Azure Files service what your storage requirements are, rather than being billed based on what
you use. A provisioned model for storage is similar to buying an on-premises storage solution because when
you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity
regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares
can be dynamically scaled up or down depending on your storage and IO performance characteristics.
The provisioned size of the file share can be increased at any time but can be decreased only after 24 hours
since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as
many times as you like, until you increase it again. IOPS/throughput scale changes will be effective within a few
minutes after the provisioned size change.
It is possible to decrease the size of your provisioned share below your used GiB. If you do this, you will not lose
data, but you will still be billed for the size used and receive the performance of the provisioned share, not the
size used.
Provisioning method
When you provision a premium file share, you specify how many GiBs your workload requires. Each GiB that
you provision entitles you to additional IOPS and throughput on a fixed ratio. In addition to the baseline IOPS
for which you are guaranteed, each premium file share supports bursting on a best effort basis. The formulas
for IOPS and throughput are as follows:
ITEM | VALUE
Minimum size of a file share | 100 GiB (provisioned)
Baseline IOPS | 3,000 + 1 IOPS per GiB, up to a maximum of 100,000
Burst limit | MAX(10,000, 3x baseline IOPS), up to a maximum of 100,000
Throughput rate (ingress + egress, MiB/sec) | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)
The following table illustrates a few examples of these formulae for the provisioned share sizes:
[Table: example provisioned share sizes, showing capacity (GiB), baseline IOPS, burst IOPS, burst credits, and
throughput (ingress + egress) in MiB/sec]
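Putting the published ratios together (baseline IOPS of 3,000 + 1 per GiB capped at 100,000, bursting reconstructed as Max(10,000, 3x baseline IOPS) capped at 100,000, and the throughput formula above), the provisioning math can be sketched in Python. `premium_share_targets` is a hypothetical helper for illustration, not an official calculator:

```python
import math

def premium_share_targets(provisioned_gib: int) -> dict:
    """Compute baseline IOPS, burst IOPS, and throughput for a premium share."""
    baseline_iops = min(3000 + provisioned_gib, 100_000)
    burst_iops = min(max(10_000, 3 * baseline_iops), 100_000)
    # Throughput (ingress + egress), in MiB/sec:
    throughput = 100 + math.ceil(0.04 * provisioned_gib) + math.ceil(0.06 * provisioned_gib)
    return {"baseline_iops": baseline_iops, "burst_iops": burst_iops,
            "throughput_mib_per_sec": throughput}

# A 1 TiB (1,024 GiB) provisioned share:
print(premium_share_targets(1024))
# -> {'baseline_iops': 4024, 'burst_iops': 12072, 'throughput_mib_per_sec': 203}
```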
Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and
parallelism, among many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a
single Windows virtual machine without SMB Multichannel enabled, Standard F16s_v2, connected to premium
file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the
same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. The same client can achieve up to
~3x performance if SMB Multichannel is enabled on the premium shares. To achieve maximum performance
scale, enable SMB Multichannel and spread the load across multiple VMs. Refer to SMB Multichannel
performance and troubleshooting guide for some common performance issues and workarounds.
Bursting
If your workload needs extra performance to meet peak demand, your share can use burst credits to go
above the share's baseline IOPS limit and give the share the performance it needs to meet the demand. Premium
file shares can burst their IOPS up to 10,000 or up to three times the baseline, whichever is higher. Bursting is
automated and operates based on a credit system. Bursting works on a best effort basis, and the burst limit is
not a guarantee.
Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. For example, a
100 GiB share has 3,100 baseline IOPS. If actual traffic on the share was 100 IOPS for a specific 1-second
interval, then the 3,000 unused IOPS are credited to a burst bucket. Similarly, an idle 1 TiB share accrues burst
credits at 4,024 IOPS. These credits will then be used later when operations would exceed the baseline IOPS.
Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it will burst up to the maximum
allowed peak burst rate. Shares can continue to burst as long as credits are remaining, but this is based on the
number of burst credits accrued. Each IO beyond baseline IOPS consumes one credit, and once all credits are
consumed, the share would return to the baseline IOPS.
Share credits have three states:
Accruing, when the file share is using less than the baseline IOPS.
Declining, when the file share is using more than the baseline IOPS and in the bursting mode.
Constant, when the file share is using exactly the baseline IOPS; there are either no credits accrued or used.
New file shares start with the full number of credits in their burst bucket. Burst credits will not be accrued if the
share IOPS fall below baseline IOPS due to throttling by the server.
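The credit mechanics above can be sketched as a toy simulation over 1-second intervals. This illustration ignores the bucket's maximum size; it uses a 100 GiB share, whose baseline works out to 3,100 IOPS under the 3,000 + 1 IOPS per GiB formula, and `simulate_bursting` is a hypothetical helper:

```python
def simulate_bursting(baseline_iops, demand_per_sec, initial_credits):
    """Track burst credits second by second.

    Credits accrue when demand is below baseline and are spent one per IO
    above baseline; service above baseline stops when credits run out.
    """
    credits = initial_credits
    served = []
    for demand in demand_per_sec:
        if demand <= baseline_iops:
            credits += baseline_iops - demand     # accruing state
            served.append(demand)
        else:
            burst = min(demand - baseline_iops, credits)
            credits -= burst                      # declining state
            served.append(baseline_iops + burst)
    return served, credits

# A 100 GiB share (3,100 baseline IOPS) idling for two seconds, then spiking:
served, remaining = simulate_bursting(3100, [100, 100, 9000, 9000], initial_credits=0)
print(served, remaining)  # -> [100, 100, 9000, 3200] 0
```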
Pay-as-you-go model
Azure Files uses a pay-as-you-go business model for standard file shares. In a pay-as-you-go business model,
the amount you pay is determined by how much you actually use, rather than based on a provisioned amount.
At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions
based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to
overprovision to account for future growth or performance requirements, or deprovision if your workload and
data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a
budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
Differences in standard tiers
When you create a standard file share, you pick between the following tiers: transaction optimized, hot, and cool.
All three tiers are stored on the exact same standard storage hardware. The main difference for these three tiers
is their data at-rest storage prices, which are lower in cooler tiers, and the transaction prices, which are higher in
the cooler tiers. This means:
Transaction optimized, as the name implies, optimizes the price for high transaction workloads. Transaction
optimized has the highest data at-rest storage price, but the lowest transaction prices.
Hot is for active workloads that do not involve a large number of transactions, and has a slightly lower data
at-rest storage price, but slightly higher transaction prices as compared to transaction optimized. Think of it
as the middle ground between the transaction optimized and cool tiers.
Cool optimizes the price for workloads that do not have much activity, offering the lowest data at-rest
storage price, but the highest transaction prices.
If you put an infrequently accessed workload in the transaction optimized tier, you will pay almost nothing for
the few times in a month that you make transactions against your share, but you will pay a high amount for the
data storage costs. If you were to move this same share to the cool tier, you would still pay almost nothing for
the transaction costs, simply because you are infrequently making transactions for this workload, but the cool
tier has a much cheaper data storage price. Selecting the appropriate tier for your use case allows you to
considerably reduce your costs.
Similarly, if you put a highly accessed workload in the cool tier, you will pay a lot more in transaction costs, but
less for data storage costs. This can lead to a situation where the increased transaction costs outweigh the
savings from the decreased data storage price, leading you to pay more for cool than you would have for
transaction optimized. For some usage levels, it's possible that the hot tier will be the most cost efficient, and
the cool tier will be more expensive than transaction optimized.
Your workload and activity level will determine the most cost efficient tier for your standard file share. In
practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of
the share (data stored, write transactions, etc.).
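To make that comparison concrete, the sketch below prices one share's monthly consumption under each tier. The per-GiB and per-transaction prices are placeholders, not Azure's published rates; check the Azure Files pricing page for real figures in your region:

```python
# Hypothetical per-GiB and per-10,000-transaction prices, for illustration only.
TIERS = {
    "transaction optimized": {"storage": 0.06,  "transactions": 0.015},
    "hot":                   {"storage": 0.02,  "transactions": 0.055},
    "cool":                  {"storage": 0.015, "transactions": 0.13},
}

def monthly_cost(tier: str, stored_gib: float, transactions: int) -> float:
    """Storage cost plus transaction cost for one month under one tier."""
    p = TIERS[tier]
    return stored_gib * p["storage"] + (transactions / 10_000) * p["transactions"]

# An infrequently accessed 2 TiB share: the cool tier wins under these prices.
stored_gib, txns = 2048, 50_000
for tier in TIERS:
    print(f"{tier}: ${monthly_cost(tier, stored_gib, txns):.2f}")
```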
Choosing a tier
Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in
transaction optimized tier due to the large number of transactions incurred during migration. After your
migration is complete and you've operated for a few days/weeks with regular usage, you can plug your
transaction counts into the pricing calculator to figure out which tier is best suited for your workload.
Because standard file shares only show transaction information at the storage account level, using the storage
metrics to estimate which tier is cheaper at the file share level is an imperfect science. If possible, we
recommend deploying only one file share in each storage account to ensure full visibility into billing.
To see previous transactions:
1. Go to your storage account and select Metrics in the left navigation bar.
2. Select Scope as your storage account name, Metric Namespace as "File", Metric as "Transactions", and
Aggregation as "Sum".
3. Select Apply Splitting .
4. Select Values as "API Name". Select your desired Limit and Sort .
5. Select your desired time period.
NOTE
Make sure you view transactions over a period of time to get a better idea of the average number of transactions.
Ensure that the chosen time period does not overlap with initial provisioning. Extrapolate the average number of
transactions over this time period to estimate the number of transactions for an entire month.
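The extrapolation described in the note amounts to simple pro-rating, sketched below under the assumption that the sampled window is representative of regular usage:

```python
def estimate_monthly_transactions(observed_transactions: int, window_days: float,
                                  days_in_month: int = 30) -> int:
    """Scale transactions observed over a sample window to a full month."""
    per_day = observed_transactions / window_days
    return round(per_day * days_in_month)

# 1.4 million transactions observed over a representative 7-day window:
print(estimate_monthly_transactions(1_400_000, 7))  # -> 6000000
```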
NOTE
NFS 4.1 is only available for premium file shares, which use the provisioned billing model. Transactions do not affect billing
for premium file shares.
Snapshots
Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server.
Snapshots are always differential from the live share and from each other, meaning that you are always paying
only for what's different in each snapshot. For more information on share snapshots, see Overview of snapshots
for Azure Files.
Snapshots do not count against file share size limits, although you are limited to a specific number of snapshots.
To see the current snapshot limits, see Azure file share scale targets.
Snapshots are always billed based on the differential storage utilization of each snapshot, however this looks
slightly different between premium file shares and standard file shares:
In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price
over the provisioned storage price. This means that you will see a separate line item on your bill
representing snapshots for premium file shares for each FileStorage storage account.
In standard file shares, snapshots are billed as part of the normal used storage meter, although you are
still only billed for the differential cost of the snapshot. This means that you will not see a separate line
item on your bill representing snapshots for each standard storage account containing Azure file shares.
This also means that differential snapshot usage counts against capacity reservations that are purchased
for standard file shares.
Value-added services for Azure Files may use snapshots as part of their value proposition. See value-added
services for Azure Files for more information on how snapshots are used.
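As a rough illustration of the standard file share model described above, billable used storage is the live share data plus each snapshot's differential. This is a simplified sketch; the function name and figures are hypothetical:

```python
# Simplified model of standard-share snapshot billing: snapshots add only
# their differential storage to the normal used storage meter.

def standard_share_billable_gib(live_share_gib: float,
                                snapshot_differential_gib: list[float]) -> float:
    """Billable used storage = live share data plus each snapshot's differential."""
    return live_share_gib + sum(snapshot_differential_gib)

# Example: a 1,024 GiB share with three snapshots holding small diffs.
print(standard_share_billable_gib(1024.0, [12.5, 3.0, 0.5]))  # 1040.0
```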
Value-added services
Like on-premises storage solutions which offer first- and third-party features/product integrations to bring
additional value to the hosted file shares, Azure Files provides integration points for first- and third-party
products to integrate with customer-owned file shares. Although these solutions may provide considerable extra
value to Azure Files, you should consider the additional costs that these services add to the total cost of an Azure
Files solution.
Costs are generally broken down into three buckets:
Licensing costs for the value-added service. These may come in the form of a fixed cost per
customer, end user (sometimes referred to as a "head cost"), Azure file share or storage account, or in
units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share.
Transaction costs for the value-added service. Some value-added services have their own concept
of transactions distinct from what Azure Files views as a transaction. These transactions will show up on
your bill under the value-added service's charges; however, they relate directly to how you use the value-
added service with your file share.
Azure Files costs for using a value-added service. Azure Files does not directly charge customers
costs for adding value-added services, but as part of adding value to the Azure file share, the value-added
service might increase the costs that you see on your Azure file share. This is easy to see with standard
file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the
value-added service does transactions against the file share on your behalf, they will show up in your
Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to
premium file shares as well, although it may be less noticeable. Additional transactions against premium
file shares from value-added services count against your provisioned IOPS numbers, meaning that value-
added services may require provisioning additional storage to have enough IOPS or throughput available
for your workload.
When computing the total cost of ownership for your file share, you should consider the costs of Azure Files and
of all value-added services that you would like to use with Azure Files.
There are multiple value-added first- and third-party services. This document covers a subset of the common
first-party services customers use with Azure file shares. You can learn more about services not listed here by
reading the pricing page for that service.
Azure File Sync
Azure File Sync is a value-added service for Azure Files that synchronizes one or more on-premises Windows
file shares with an Azure file share. Because the cloud Azure file share has a complete copy of the data in a
synchronized file share that is available on-premises, you can transform your on-premises Windows File Server
into a cache of the Azure file share to reduce your on-premises footprint. Learn more by reading Introduction to
Azure File Sync.
When considering the total cost of ownership for a solution deployed using Azure File Sync, you should
consider the following cost aspects:
Capital and operational costs of Windows File Servers with one or more server endpoints.
Azure File Sync as a replication solution is agnostic of where the Windows File Servers that are
synchronized with Azure Files are; they could be hosted on-premises, in an Azure VM, or even in another
cloud. Unless you are using Azure File Sync with a Windows File Server that is hosted in an Azure VM, the
capital (i.e. the upfront hardware costs of your solution) and operating (i.e. cost of labor, electricity, etc.)
costs will not be part of your Azure bill, but will still be very much a part of your total cost of ownership.
You should consider the amount of data you need to cache on-premises, the number of CPUs and
amount of memory your Windows File Servers need to host Azure File Sync workloads (see
recommended system resources for more information), and other organization-specific costs you might
have.
Per server licensing cost for servers registered with Azure File Sync. To use Azure File Sync with
a specific Windows File Server, you must first register it with Azure File Sync's Azure resource, the Storage
Sync Service. Each server that you register after the first server has a flat monthly fee. Although this fee is
very small, it is one component of your bill to consider. To see the current price of the server registration
fee for your desired region, see the File Sync section on Azure Files pricing page.
Azure Files costs. Because Azure File Sync is a synchronization solution for Azure Files, it will cause you
to consume Azure Files resources. Some of these resources, like storage consumption, are relatively
obvious, while others such as transaction and snapshot utilization may not be. For most customers, we
recommend using standard file shares with Azure File Sync, although Azure File Sync is fully supported
with premium file shares if desired.
Storage utilization. Azure File Sync will replicate any changes you have made to the path on
your Windows File Server specified on your server endpoint to your Azure file share, thus causing
storage to be consumed. On standard file shares, this means that adding or increasing the size of
existing files on server endpoints will cause storage costs to grow, because the changes will be
replicated. On premium file shares, changes will consume provisioned space; it is your
responsibility to periodically increase provisioning as needed to account for file share growth.
Snapshot utilization. Azure File Sync takes share and file-level snapshots as part of regular
usage. Although snapshot utilization is always differential, this can contribute in a noticeable way
to the total Azure Files bill.
Transactions from churn. As files change on server endpoints, the changes are uploaded to the
cloud share, which generates transactions. When cloud tiering is enabled, additional transactions
are generated for managing tiered files, including I/O happening on tiered files, in addition to
egress costs. Although the quantity and type of transactions is difficult to predict due to churn
rates and cache efficiency, you can use your previous transaction patterns to estimate future costs
if you believe your future usage will be similar to your current usage.
Transactions from cloud enumeration. Azure File Sync enumerates the Azure File Share in the
cloud once per day to discover changes that were made directly to the share so that they can sync
down to the server endpoints. This scan generates transactions which are billed to the storage
account at a rate of one ListFiles transaction per directory per day. You can put this number into
the pricing calculator to estimate the scan cost.
TIP
If you don't know how many folders you have, check out the TreeSize tool from JAM Software GmbH.
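The daily enumeration scan described above can be estimated with simple arithmetic. A minimal sketch, with an illustrative directory count:

```python
# Sketch of the cloud-enumeration cost model: one ListFiles transaction
# per directory per daily change-detection scan.

def monthly_enumeration_transactions(directory_count: int,
                                     days_in_month: int = 30) -> int:
    """Estimate ListFiles transactions from the once-daily namespace scan."""
    return directory_count * days_in_month

# Example: a share with 50,000 directories.
print(monthly_enumeration_transactions(50_000))  # 1500000
```

The resulting count is the number you would enter into the pricing calculator to estimate the scan cost.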
To optimize costs for Azure Files with Azure File Sync, you should consider the tier of your file share. For more
information on how to pick the tier for each file share, see choosing a file share tier.
If you are migrating to Azure File Sync from StorSimple, see Comparing the costs of StorSimple to Azure File
Sync.
Azure Backup
Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file
shares, as well as other value-added services such as Azure File Sync. Azure Backup for Azure Files is a
snapshot-based backup solution, meaning that Azure Backup provides a scheduling mechanism for
automatically taking snapshots on an administrator-defined schedule and a user-friendly interface for restoring
deleted files/folders or the entire share to a particular point in time. To learn more about Azure Backup for Azure
Files, see About Azure file share backup.
When considering the costs of using Azure Backup to back up your Azure file shares, you should consider the
following:
Protected instance licensing cost for Azure file share data. Azure Backup charges a protected
instance licensing cost per storage account containing backed up Azure file shares. A protected instance is
defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB of Azure file
share storage are subject to a fractional protected instance cost. See Azure Backup pricing for more
information (note that you must select Azure Files from the list of services Azure Backup can protect).
Azure Files costs. Azure Backup increases the costs of Azure Files in the following ways:
Differential costs from Azure file share snapshots. Azure Backup automates taking Azure
file share snapshots on an administrator-defined schedule. Snapshots are always differential;
however, the additional cost added to the total bill depends on the length of time snapshots are
kept and the amount of churn on the file share during that time, because that dictates how
different the snapshot is from the live file share and therefore how much additional data is stored
by Azure Files.
Transaction costs from restore operations. Restore operations from the snapshot to the live
share will cause transactions. For standard file shares, this means that reads from snapshots/writes
from restores will be billed as normal file share transactions. For premium file shares, these
operations are counted against the provisioned IOPS for the file share.
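The protected instance model described above can be sketched as follows. This is a simplified illustration; the 250 GiB unit comes from the text, but the actual rounding rules may differ, so confirm against Azure Backup pricing:

```python
# Simplified model of Azure Backup protected-instance billing for Azure Files:
# one protected instance per 250 GiB, with fractional billing below one unit.

def protected_instances(file_share_gib: float, unit_gib: float = 250.0) -> float:
    """Protected instances billed for backed-up Azure file share storage."""
    return file_share_gib / unit_gib

print(protected_instances(100.0))   # 0.4 (fractional instance)
print(protected_instances(1000.0))  # 4.0
```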
Microsoft Defender for Storage
Microsoft Defender provides support for Azure Files as part of its Microsoft Defender for Storage product.
Microsoft Defender for Storage detects unusual and potentially harmful attempts to access or exploit your Azure
file shares over SMB or FileREST. Microsoft Defender for Storage is enabled on the subscription level for all file
shares in storage accounts in that subscription.
Microsoft Defender for Storage does not support antivirus capabilities for Azure file shares.
The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product
levies on top of the transactions that are done against the Azure file share. Although these costs are based on
the transactions incurred in Azure Files, they are not part of the billing for Azure Files, but rather are part of the
Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file
shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be
found on Microsoft Defender for Cloud pricing page under the Microsoft Defender for Storage table row.
Transaction heavy file shares will incur significant costs using Microsoft Defender for Storage. Based on these
costs, you may wish to opt-out of Microsoft Defender for Storage for specific storage accounts. For more
information, see Exclude a storage account from Microsoft Defender for Storage protections.
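To gauge this overhead, a rough estimate multiplies your transaction volume by the Defender rate. This is a hedged sketch: the per-10,000-transactions pricing unit and the rate used here are assumptions, so take the real figures from the Microsoft Defender for Cloud pricing page:

```python
# Hypothetical sketch of Defender for Storage transaction overhead.
# The per-10,000-transactions unit and the rate are placeholder assumptions.

def defender_transaction_cost(transactions: int, price_per_10k: float) -> float:
    """Estimate the Defender charge levied on top of Azure Files transactions."""
    return (transactions / 10_000) * price_per_10k

# Example: 5 million transactions at a hypothetical rate of $0.02 per 10k.
print(round(defender_transaction_cost(5_000_000, 0.02), 2))  # 10.0
```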
See also
Azure Files pricing page.
Planning for an Azure Files deployment and Planning for an Azure File Sync deployment.
Create a file share and Deploy Azure File Sync.
Prevent accidental deletion of Azure file shares
5/20/2022 • 3 minutes to read
Azure Files offers soft delete for file shares. Soft delete allows you to recover your file share when it is
mistakenly deleted by an application or other storage account user.
Applies to
File share type: SMB, NFS
Configuration settings
Enabling or disabling soft delete
Soft delete for file shares is enabled at the storage account level; because of this, the soft delete settings apply to
all file shares within a storage account. Soft delete is enabled by default for new storage accounts and can be
disabled or enabled at any time. Soft delete is not automatically enabled for existing storage accounts unless
Azure file share backup was configured for an Azure file share in that storage account. If Azure file share backup
was configured, then soft delete for Azure file shares is automatically enabled on that share's storage account.
If you enable soft delete for file shares, delete some file shares, and then disable soft delete, you can still access
and recover any file shares that were soft deleted during that period. When you enable soft delete, you also
need to configure the retention period.
Retention period
The retention period is the amount of time that soft deleted file shares are stored and available for recovery. For
file shares that are explicitly deleted, the retention period clock starts when the data is deleted. Currently you can
specify a retention period between 1 and 365 days. You can change the soft delete retention period at any time.
An updated retention period will only apply to shares deleted after the retention period has been updated.
Shares deleted before the retention period update will expire based on the retention period that was configured
when that data was deleted.
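The retention behavior described above (a share's expiry is fixed by the retention period in effect when it was deleted) can be sketched as:

```python
# Sketch of soft-delete expiry: the retention period captured at deletion
# time determines expiry; later retention changes do not apply retroactively.
from datetime import datetime, timedelta

def soft_delete_expiry(deleted_at: datetime,
                       retention_days_at_deletion: int) -> datetime:
    """Compute when a soft-deleted share expires."""
    return deleted_at + timedelta(days=retention_days_at_deletion)

expiry = soft_delete_expiry(datetime(2022, 5, 1), 14)
print(expiry.date())  # 2022-05-15
```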
Next steps
To learn how to enable and use soft delete, continue to Enable soft delete.
To learn how to prevent a storage account from being deleted or modified, see Apply an Azure Resource
Manager lock to a storage account.
To learn how to apply locks to resources and resource groups, see Lock resources to prevent unexpected
changes.
About Azure file share backup
5/20/2022 • 4 minutes to read
Azure file share backup is a native, cloud based backup solution that protects your data in the cloud and
eliminates additional maintenance overheads involved in on-premises backup solutions. The Azure Backup
service smoothly integrates with Azure File Sync, and allows you to centralize your file share data as well as
your backups. This simple, reliable, and secure solution enables you to configure protection for your enterprise
file shares in a few simple steps with an assurance that you can recover your data in case of any accidental
deletion.
Architecture
How the backup process works
1. The first step in configuring backup for Azure file shares is creating a Recovery Services vault. The vault
gives you a consolidated view of the backups configured across different workloads.
2. Once you create a vault, the Azure Backup service discovers the storage accounts that can be registered
with the vault. You can select the storage account hosting the file shares you want to protect.
3. After you select the storage account, the Azure Backup service lists the set of file shares present in the
storage account and stores their names in the management layer catalog.
4. You then configure the backup policy (schedule and retention) according to your requirements, and select
the file shares to back up. The Azure Backup service registers the schedules in the control plane to do
scheduled backups.
5. Based on the policy specified, the Azure Backup scheduler triggers backups at the scheduled time. As part
of that job, the file share snapshot is created using the File share API. Only the snapshot URL is stored in
the metadata store.
NOTE
The file share data isn't transferred to the Backup service, since the Backup service creates and manages
snapshots that are part of your storage account, and backups aren't transferred to the vault.
6. You can restore the Azure file share contents (individual files or the full share) from snapshots available
on the source file share. Once the operation is triggered, the snapshot URL is retrieved from the metadata
store and the data is listed and transferred from the source snapshot to the target file share of your
choice.
7. If you're using Azure File Sync, the Backup service indicates to the Azure File Sync service the paths of the
files being restored, which then triggers a background change detection process on these files. Any files
that have changed are synced down to the server endpoint. This process happens in parallel with the
original restore to the Azure file share.
8. The backup and restore job monitoring data is pushed to the Azure Backup Monitoring service. This
allows you to monitor cloud backups for your file shares in a single dashboard. In addition, you can also
configure alerts or email notifications when backup health is affected. Emails are sent via the Azure email
service.
Backup costs
There are two costs associated with Azure file share backup solution:
1. Snapshot storage cost : Storage charges incurred for snapshots are billed along with Azure Files usage
according to the pricing details mentioned here
2. Protected Instance fee : Starting September 1, 2020, customers will be charged a protected instance fee
according to the pricing details mentioned here. The protected instance fee depends on the total size of
protected file shares in a storage account.
To get detailed estimates for backing up Azure file shares, you can download the detailed Azure Backup pricing
estimator.
Next steps
Learn how to Back up Azure file shares
Find answers to Questions about backing up Azure Files
Azure Storage encryption for data at rest
5/20/2022 • 4 minutes to read
Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the
cloud. Azure Storage encryption protects your data and helps you meet your organizational security and
compliance commitments.
Azure Storage services supported: All (Microsoft-managed keys); Blob storage, Azure Files1,2 (customer-managed
keys); Blob storage (customer-provided keys)
Key storage: Microsoft key store (Microsoft-managed keys); Azure Key Vault or Key Vault Managed HSM
(customer-managed keys); customer's own key store (customer-provided keys)
1 For information about creating an account that supports using customer-managed keys with Queue storage,
see Create an account that supports customer-managed keys for queues.
2 For information about creating an account that supports using customer-managed keys with Table storage, see
Create an account that supports customer-managed keys for tables.
NOTE
Microsoft-managed keys are rotated appropriately per compliance requirements. If you have specific key rotation
requirements, Microsoft recommends that you move to customer-managed keys so that you can manage and audit the
rotation yourself.
Next steps
What is Azure Key Vault?
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Provide an encryption key on a request to Blob storage
Customer-managed keys for Azure Storage
encryption
5/20/2022 • 7 minutes to read
You can use your own encryption key to protect the data in your storage account. When you specify a customer-
managed key, that key is used to protect and control access to the key that encrypts your data. Customer-
managed keys offer greater flexibility to manage access controls.
You must use one of the following Azure key stores to store your customer-managed keys:
Azure Key Vault
Azure Key Vault Managed Hardware Security Module (HSM)
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure
Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same
region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.
Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more
information about keys, see About keys.
Using a key vault or managed HSM has associated costs. For more information, see Key Vault pricing.
NOTE
To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies.
You can rotate your key manually or create a function to rotate it on a schedule.
Next steps
Azure Storage encryption for data at rest
Configure encryption with customer-managed keys stored in Azure Key Vault
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Azure Storage compliance offerings
5/20/2022 • 2 minutes to read
To help organizations comply with national, regional, and industry-specific requirements governing the
collection and use of individuals' data, Microsoft Azure & Azure Storage offer the most comprehensive set of
certifications and attestations of any cloud service provider.
The compliance offerings below apply to Azure Storage and can help you verify that your regulated workloads are
covered when using the Azure Storage service. They are applicable to the following Azure Storage offerings:
Blobs (ADLS Gen2), Files, Queues, Tables, Disks, Cool Storage, and Premium Storage.
Global
CSA-STAR-Attestation
CSA-Star-Certification
CSA-STAR-Self-Assessment
ISO 20000-1:2011
ISO 22301
ISO 27001
ISO 27017
ISO 27018
ISO 9001
WCAG 2.0
US Government
DoD DISA L2, L4, L5
DoE 10 CFR Part 810
EAR (US Export Administration Regulations)
FDA CFR Title 21 Part 11
FedRAMP
FERPA
FIPS 140-2
NIST 800-171
Section 508 VPATS
Industry
23 NYCRR Part 500
APRA (Australia)
CDSA
DPP (UK)
FACT (UK)
FCA (UK)
FFIEC
FISC (Japan)
GLBA
GxP
HIPAA/HITECH
HITRUST
MARS-E
MAS + ABS (Singapore)
MPAA
NEN-7510 (Netherlands)
PCI DSS
Shared Assessments
SOX
Regional
BIR 2012 (Netherlands)
C5 (Germany)
CCSL/IRAP (Australia)
CS Gold Mark (Japan)
DJCP (China)
ENISA IAF (EU)
ENS (Spain)
EU-Model-Clauses
EU-U.S. Privacy Shield
GB 18030 (China)
GDPR (EU)
IT Grundschutz Workbook (Germany)
LOPD (Spain)
MTCS (Singapore)
My Number (Japan)
NZ CC Framework (New Zealand)
PASF (UK)
PDPA (Argentina)
PIPEDA (Canada)
TRUCS (China)
UK-G-Cloud
Next steps
Microsoft Azure and Azure Storage continue to lead in compliance offerings. You can find the latest coverage and
details in the Microsoft Trust Center.
Frequently asked questions (FAQ) about Azure Files
5/20/2022 • 15 minutes to read
Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server
Message Block (SMB) protocol and the Network File System (NFS) protocol. You can mount Azure file shares
concurrently on cloud or on-premises deployments of Windows, Linux, and macOS. You also can cache Azure
file shares on Windows Server machines by using Azure File Sync for fast access close to where the data is used.
NOTE
The Invoke-AzStorageSyncChangeDetection PowerShell cmdlet can only detect a maximum of 10,000 items.
For other limitations, see the Invoke-AzStorageSyncChangeDetection documentation.
NOTE
Changes made to an Azure file share using REST do not update the SMB last modified time and will not be seen
as changes by sync.
We are exploring adding change detection for an Azure file share similar to USN for volumes on
Windows Server. Help us prioritize this feature for future development by voting for it at Azure
Community Feedback.
If the same file is changed on two servers at approximately the same time, what happens?
Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed
in two endpoints at the same time. The most recently written change keeps the original file name. The
older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to
the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the
endpoint name is Cloud . The name follows this taxonomy:
<FileNameWithoutExtension>-<endpointName>[-#].<ext>
For example, the first conflict of CompanyReport.docx would become CompanyReport-CentralServer.docx
if CentralServer is where the older write occurred. The second conflict would be named CompanyReport-
CentralServer-1.docx. Azure File Sync supports 100 conflict files per file. Once the maximum number of
conflict files has been reached, the file will fail to sync until the number of conflict files is less than 100.
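The conflict-file taxonomy above can be sketched as a small helper (the function name is hypothetical; the naming rule is from the text):

```python
# Sketch of the Azure File Sync conflict-file naming taxonomy:
# <FileNameWithoutExtension>-<endpointName>[-#].<ext>
import os

def conflict_file_name(original: str, endpoint_name: str,
                       conflict_index: int) -> str:
    """Build a conflict file name; index 0 omits the numeric suffix."""
    stem, ext = os.path.splitext(original)
    suffix = "" if conflict_index == 0 else f"-{conflict_index}"
    return f"{stem}-{endpoint_name}{suffix}{ext}"

print(conflict_file_name("CompanyReport.docx", "CentralServer", 0))
# CompanyReport-CentralServer.docx
print(conflict_file_name("CompanyReport.docx", "CentralServer", 1))
# CompanyReport-CentralServer-1.docx
```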
I have cloud tiering disabled, why are there tiered files in the server endpoint location?
There are two reasons why tiered files may exist in the server endpoint location:
When adding a new server endpoint to an existing sync group, if you choose either the recall
namespace first option or recall namespace only option for initial download mode, files will show
up as tiered until they're downloaded locally. To avoid this, select the avoid tiered files option for
initial download mode. To manually recall files, use the Invoke-StorageSyncFileRecall cmdlet.
If cloud tiering was enabled on the server endpoint and then disabled, files will remain tiered until
they're accessed.
Why are my tiered files not showing thumbnails or previews in Windows Explorer?
For tiered files, thumbnails and previews won't be visible at your server endpoint. This behavior is
expected since the thumbnail cache feature in Windows intentionally skips reading files with the offline
attribute. With Cloud Tiering enabled, reading through tiered files would cause them to be downloaded
(recalled).
This behavior isn't specific to Azure File Sync; Windows Explorer displays a "grey X" for any files that
have the offline attribute set. You will see the X icon when accessing files over SMB. For a detailed
explanation of this behavior, refer to Why don’t I get thumbnails for files that are marked offline?
For questions on how to manage tiered files, please see How to manage tiered files.
Why do tiered files exist outside of the server endpoint namespace?
Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the
server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered
files, and moves of tiered files to other volumes were unaffected. The reason for this behavior was the implicit
assumption that File Explorer and other Windows APIs have that move operations on the same volume
are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move
methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the
data from the cloud. Starting with Azure File Sync agent version 3.0.12.0, Azure File Sync will allow you to
move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by
allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in
the background. This means that moves on the same volume are instantaneous, and we do all the work
to recall the file to disk after the move has completed.
I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I
remove and recreate my server endpoint?
No: removing a server endpoint isn't like rebooting a server! Removing and recreating the server
endpoint is almost never an appropriate solution to fixing issues with sync, cloud tiering, or other aspects
of Azure File Sync. Removing a server endpoint is a destructive operation. It may result in data loss in the
case that tiered files exist outside of the server endpoint namespace; for more information, see Why do
tiered files exist outside of the server endpoint namespace? It may also result in
inaccessible files for tiered files that exist within the server endpoint namespace. These issues won't
resolve when the server endpoint is recreated. Tiered files may exist within your server endpoint
namespace even if you never had cloud tiering enabled. That's why we recommend that you don't
remove the server endpoint unless you would like to stop using Azure File Sync with this particular folder
or have been explicitly instructed to do so by a Microsoft engineer. For more information on removing
server endpoints, see Remove a server endpoint.
Can I move the storage sync service and/or storage account to a different resource group,
subscription, or Azure AD tenant?
Yes, the storage sync service and/or storage account can be moved to a different resource group,
subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to
give the Microsoft.StorageSync application access to the storage account (see Ensure Azure File Sync has
access to the storage account).
NOTE
When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD
tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to
different Azure AD tenants.
Does Azure File Sync preserve directory/file level NTFS ACLs along with data stored in
Azure Files?
As of February 24th, 2020, new and existing ACLs tiered by Azure File Sync are persisted in NTFS
format, and ACL modifications made directly to the Azure file share sync to all servers in the sync
group. Any changes to ACLs made in Azure Files sync down via Azure File Sync. When copying data
to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes,
timestamps and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools,
such as AzCopy, it is important to use the latest version. Check the file copy tools table to get an overview
of Azure copy tools to ensure you can copy all of the important metadata of a file.
If you have enabled Azure Backup on your file sync managed file shares, file ACLs can continue to be
restored as part of the backup restore workflow. This works either for the entire share or individual
files/directories.
If you are using snapshots as part of the self-managed backup solution for file shares managed by file
sync, your ACLs may not be restored properly to NTFS ACLs if the snapshots were taken prior to
February 24th, 2020. If this occurs, consider contacting Azure Support.
Does Azure File Sync sync the LastWriteTime for directories?
No, Azure File Sync does not sync the LastWriteTime for directories.
Share snapshots
Create share snapshots
Are my share snapshots geo-redundant?
Share snapshots have the same redundancy as the Azure file share for which they were taken. If you have
selected geo-redundant storage for your account, your share snapshot also is stored redundantly in the
paired region.
Clean up share snapshots
Can I delete my share but not delete my share snapshots?
If you have active share snapshots on your share, you cannot delete your share. You can use an API to delete
share snapshots, along with the share. You also can delete both the share snapshots and the share in the
Azure portal.
See also
Troubleshoot Azure Files in Windows
Troubleshoot Azure Files in Linux
Troubleshoot Azure File Sync
What's new in Azure Files
5/20/2022 • 6 minutes to read
Azure Files is updated regularly to offer new features and enhancements. This article provides detailed
information about what's new in Azure Files.
Item: Egress. New value: 60 + CEILING(0.06 * ProvisionedGiB)
See also
What is Azure Files?
Planning for an Azure Files deployment
Create an Azure file share
Create an Azure file share
5/20/2022 • 15 minutes to read
To create an Azure file share, you need to answer three questions about how you will use it:
What are the performance requirements for your Azure file share?
Azure Files offers standard file shares which are hosted on hard disk-based (HDD-based) hardware, and
premium file shares, which are hosted on solid-state disk-based (SSD-based) hardware.
What are your redundancy requirements for your Azure file share?
Standard file shares offer locally redundant (LRS), zone-redundant (ZRS), geo-redundant (GRS), or geo-
zone-redundant (GZRS) storage; however, the large file share feature is only supported on locally
redundant and zone-redundant file shares. Premium file shares do not support any form of geo-
redundancy.
Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To
find out if premium file shares are currently available in your region, see the products available by region
page for Azure. For information about regions that support ZRS, see Azure Storage redundancy.
What size file share do you need?
In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo-
and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB.
For more information on these three choices, see Planning for an Azure Files deployment.
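The size limits above can be captured in a small lookup table; a sketch (redundancy names and limits are as stated in this article, and should be verified against current limits):

```python
# Maximum Azure file share size (TiB) by storage account redundancy, per the
# limits described above: 100 TiB for LRS/ZRS (with large file shares enabled),
# 5 TiB for geo- and geo-zone-redundant accounts.
MAX_SHARE_TIB = {
    "LRS": 100,
    "ZRS": 100,
    "GRS": 5,
    "GZRS": 5,
}

def max_share_size_tib(redundancy):
    return MAX_SHARE_TIB[redundancy.upper()]

print(max_share_size_tib("zrs"))  # -> 100
```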
Applies to
FILE SHARE TYPE | SMB | NFS
Prerequisites
This article assumes that you have already created an Azure subscription. If you don't already have a
subscription, then create a free account before you begin.
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.
Portal
PowerShell
Azure CLI
To create a storage account via the Azure portal, select + Create a resource from the dashboard. In the
resulting Azure Marketplace search window, search for storage account and select the resulting search result.
This will lead to an overview page for storage accounts; select Create to proceed with the storage account
creation wizard.
Basics
The first section to complete to create a storage account is labeled Basics . This contains all of the required fields
to create a storage account. To create a GPv2 storage account, ensure the Performance radio button is set to
Standard and the Account kind drop-down list is set to StorageV2 (general purpose v2).
To create a FileStorage storage account, ensure the Performance radio button is set to Premium and
File shares is selected in the Premium account type drop-down list.
The other basics fields are independent from the choice of storage account:
Storage account name : The name of the storage account resource to be created. This name must be
globally unique. The storage account name will be used as the server name when you mount an Azure file
share via SMB. Storage account names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only.
Location : The region for the storage account to be deployed into. This can be the region associated with the
resource group, or any other available region.
Replication : Although this field is labeled replication, it actually means redundancy ; this is the desired
redundancy level: local redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-
redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-
access geo-zone redundancy (RA-GZRS), which do not apply to Azure file shares; any file share created in a
storage account with these selected will actually be either geo-redundant or geo-zone-redundant,
respectively.
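The storage account naming rules above (3 to 24 characters, lowercase letters and numbers only) are easy to pre-check before attempting account creation. A sketch (global uniqueness can only be verified by Azure itself):

```python
import re

# 3-24 characters, lowercase letters and numbers only. Global uniqueness
# is enforced server-side and not checked here.
ACCOUNT_NAME_RE = re.compile(r"[a-z0-9]{3,24}")

def is_valid_account_name(name):
    return ACCOUNT_NAME_RE.fullmatch(name) is not None

print(is_valid_account_name("examplestorageacct1"))  # -> True
print(is_valid_account_name("My-Storage-Account"))   # -> False
```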
Networking
The networking section allows you to configure networking options. These settings are optional for the creation
of the storage account and can be configured later if desired. For more information on these options, see Azure
Files networking considerations.
Data protection
The data protection section allows you to configure the soft-delete policy for Azure file shares in your storage
account. Other settings related to soft-delete for blobs, containers, point-in-time restore for containers,
versioning, and change feed apply only to Azure Blob storage.
Advanced
The advanced section contains several important settings for Azure file shares:
Secure transfer required : This field indicates whether the storage account requires encryption in
transit for communication to the storage account. If you require SMB 2.1 support, you must disable this.
Large file shares : This field enables the storage account for file shares spanning up to 100 TiB. Enabling
this feature will limit your storage account to only locally redundant and zone redundant storage options.
Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file
share capability. FileStorage storage accounts (storage accounts for premium file shares) do not have this
option, as all premium file shares can scale up to 100 TiB.
The other settings that are available in the advanced tab (hierarchical namespace for Azure Data Lake storage
gen 2, default blob tier, NFSv3 for blob storage, etc.) do not apply to Azure Files.
IMPORTANT
Selecting the blob access tier does not affect the tier of the file share.
Tags
Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the
same tag to multiple resources and resource groups. These are optional and can be applied after storage
account creation.
Review + create
The final step to create the storage account is to select the Create button on the Review + create tab. This
button won't be available until all of the required fields for a storage account are filled.
Enable large file shares on an existing account
Before you create an Azure file share on an existing storage account, you may want to enable it for large file
shares (up to 100 TiB) if you haven't already. Standard storage accounts using either LRS or ZRS can be
upgraded to support large file shares without causing downtime for existing file shares on the storage account.
If you have a GRS, GZRS, RA-GRS, or RA-GZRS account, you will need to convert it to an LRS account before
proceeding.
Portal
PowerShell
Azure CLI
1. Open the Azure portal, and navigate to the storage account where you want to enable large file shares.
2. Open the storage account and select File shares .
3. Select Enabled on Large file shares , and then select Save .
4. Select Overview and select Refresh .
5. Select Share capacity , then select 100 TiB and Save .
Create a file share
Once you've created your storage account, you can create your file share. This process is mostly the same
regardless of whether you're using a premium file share or a standard file share. You should consider the
following differences:
Standard file shares may be deployed into one of the standard tiers: transaction optimized (default), hot, or cool.
This is a per file share tier that is not affected by the blob access tier of the storage account (this property only
relates to Azure Blob storage - it does not relate to Azure Files at all). You can change the tier of the share at any
time after it has been deployed. Premium file shares cannot be directly converted to any standard tier.
IMPORTANT
You can move file shares between tiers within GPv2 storage account types (transaction optimized, hot, and cool). Share
moves between tiers incur transactions: moving from a hotter tier to a cooler tier will incur the cooler tier's write
transaction charge for each file in the share, while a move from a cooler tier to a hotter tier will incur the cooler tier's
read transaction charge for each file in the share.
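The one-time transaction cost described in the note can be estimated before moving a share. A sketch with placeholder prices (the per-10,000-transaction rates below are hypothetical; substitute your region's actual rates):

```python
def tier_move_cost(file_count, cooler_write_price, cooler_read_price, to_cooler):
    """Estimate the one-time transaction cost of moving a share between tiers.

    Moving hot -> cool bills the cooler tier's write transaction price per file;
    cool -> hot bills the cooler tier's read transaction price per file.
    Prices are expressed per 10,000 transactions.
    """
    per_txn = (cooler_write_price if to_cooler else cooler_read_price) / 10_000
    return file_count * per_txn

# Hypothetical rates: $0.13 per 10k writes, $0.013 per 10k reads in the cooler tier.
print(tier_move_cost(1_000_000, 0.13, 0.013, to_cooler=True))
```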
The quota property means something slightly different between premium and standard file shares:
For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot
go. If a quota is not specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares
property is not set for a storage account). If you did not create your storage account with large file shares
enabled, see Enable large file shares on an existing account for how to enable 100 TiB file shares.
For premium file shares, quota means provisioned size . The provisioned size is the amount that you
will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share is
based on the provisioned size. For more information on how to plan for a premium file share, see
provisioning premium file shares.
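The billing difference between the two meanings of quota can be sketched as follows (the per-GiB prices are hypothetical placeholders, and this models capacity charges only, not transactions):

```python
def monthly_capacity_bill(share_kind, provisioned_gib, used_gib, price_per_gib_month):
    """Premium shares bill on the provisioned (quota) size regardless of usage;
    standard shares bill on actual used capacity."""
    billable_gib = provisioned_gib if share_kind == "premium" else used_gib
    return billable_gib * price_per_gib_month

# A 1 TiB premium share that is only 10 GiB full still bills for 1024 GiB.
print(monthly_capacity_bill("premium", 1024, 10, 0.16))   # hypothetical rate
print(monthly_capacity_bill("standard", 1024, 10, 0.06))  # hypothetical rate
```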
Portal
PowerShell
Azure CLI
If you just created your storage account, you can navigate to it from the deployment screen by selecting Go to
resource . Once in the storage account, select the File shares in the table of contents for the storage account.
In the file share listing, you should see any file shares you have previously created in this storage account, or
an empty table if no file shares have been created yet. Select + File share to create a new file share.
The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a
file share:
Name : the name of the file share to be created.
Quota : the quota of the file share for standard file shares; the provisioned size of the file share for premium
file shares. For standard file shares, the quota will also determine what performance you receive.
Tiers : the selected tier for a file share. This field is only available in a general purpose (GPv2) storage
account . You can choose transaction optimized, hot, or cool. The share's tier can be changed at any time. We
recommend picking the hottest tier possible during a migration, to minimize transaction expenses, and then
switching to a lower tier if desired after the migration is complete.
Select Create to finish creating the new share.
NOTE
The name of your file share must be all lowercase. For complete details about naming file shares and files, see Naming and
referencing shares, directories, files, and metadata.
On the main storage account page, select the tile labeled File shares (you can also navigate to File
shares via the table of contents for the storage account).
In the table list of file shares, select the file share for which you would like to change the tier. On the file share
overview page, select Change tier from the menu.
On the resulting dialog, select the desired tier: transaction optimized, hot, or cool.
Portal
PowerShell
Azure CLI
Next steps
Plan for a deployment of Azure Files or Plan for a deployment of Azure File Sync.
Networking overview.
Connect and mount a file share on Windows, macOS, and Linux.
Tutorial: Create an NFS Azure file share and mount
it on a Linux VM using the Azure Portal
5/20/2022 • 7 minutes to read • Edit Online
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol or Network File System (NFS) protocol. Both NFS and SMB protocols are
supported on Azure virtual machines (VMs) running Linux. This tutorial shows you how to create an Azure file
share using the NFS protocol and connect it to a Linux VM.
In this tutorial, you will:
Create a storage account
Deploy a Linux VM
Create an NFS file share
Connect to your VM
Mount the file share to your VM
Applies to
FILE SHARE TYPE | SMB | NFS
Getting started
If you don't have an Azure subscription, create a free account before you begin.
Sign in to the Azure portal.
Create a FileStorage storage account
Before you can work with an NFS 4.1 Azure file share, you have to create an Azure storage account with the
premium performance tier. Currently, NFS 4.1 shares are only available as premium file shares.
1. On the Azure portal menu, select All services . In the list of resources, type Storage Accounts . As you
begin typing, the list filters based on your input. Select Storage Accounts .
2. On the Storage Accounts window that appears, choose + Create .
3. On the Basics tab, select the subscription in which to create the storage account.
4. Under the Resource group field, select Create new to create a new resource group to use for this tutorial.
5. Enter a name for your storage account. The name you choose must be unique across Azure. The name also
must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region. Azure supports NFS file shares in all the
same regions that support premium file storage.
7. Select the Premium performance tier to store your data on solid-state drives (SSD). Under Premium
account type , select File shares.
8. Leave replication set to its default value of Locally-redundant storage (LRS).
9. Select Review + Create to review your storage account settings and create the account.
10. When you see the Validation passed notification appear, select Create . You should see a notification that
deployment is in progress.
The following image shows the settings on the Basics tab for a new storage account:
5. Under Inbound port rules > Public inbound ports , choose Allow selected ports and then select
SSH (22) and HTTP (80) from the drop-down.
IMPORTANT
Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later,
go back to the Basics tab.
4. Leave Subscription and Resource group the same. Under Instance , provide a name and select a
region for the new private endpoint. Your private endpoint must be in the same region as your virtual
network, so use the same region as you specified when creating the VM. When all the fields are
complete, select Next: Resource .
5. Confirm that the Subscription , Resource type and Resource are correct, and select File from the
Target sub-resource drop-down. Then select Next: Virtual Network .
6. Under Networking , select the virtual network associated with your VM and leave the default subnet.
Select Yes for Integrate with private DNS zone . Select the correct subscription and resource group,
and then select Next: Tags .
7. You can optionally apply tags to categorize your resources, such as applying the name Environment and
the value Test to all testing resources. Enter name/value pairs if desired, and then select Next: Review +
create .
8. Azure will attempt to validate the private endpoint. When validation is complete, select Create . You'll see
a notification that deployment is in progress. After a few minutes, you should see a notification that
deployment is complete.
Disable secure transfer
Because the NFS protocol doesn't support encryption and relies instead on network-level security, you'll need to
disable secure transfer.
1. Select Home and then Storage accounts .
2. Select the storage account you created.
3. Select File shares from the storage account pane.
4. Select the NFS file share that you created. Under Secure transfer setting , select Change setting .
5. Change the Secure transfer required setting to Disabled , and select Save . The setting change may
take up to 30 seconds to take effect.
Connect to your VM
Create an SSH connection with the VM.
1. Select Home and then Virtual machines .
2. Select the Linux VM you created for this tutorial and ensure that its status is Running . Take note of the
VM's public IP address and copy it to your clipboard.
3. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a
PowerShell prompt.
4. At your prompt, open an SSH connection to your VM. Replace the IP address with the one from your VM,
and replace the path to the .pem with the path to where the key file was downloaded.
If you encounter a warning that the authenticity of the host can't be established, type yes to continue connecting
to the VM. Leave the ssh connection open for the next step.
TIP
The SSH key you created can be used the next time you create a VM in Azure. Just select Use a key stored in
Azure for the SSH public key source the next time you create a VM. You already have the private key on your computer,
so you won't need to download anything.
Clean up resources
When you're done, delete the resource group. Deleting the resource group deletes the storage account, the
Azure file share, and any other resources that you deployed inside the resource group.
1. Select Home and then Resource groups .
2. Select the resource group you created for this tutorial.
3. Select Delete resource group . A window opens and displays a warning about the resources that will be
deleted with the resource group.
4. Enter the name of the resource group, and then select Delete .
Next steps
Learn about using NFS Azure file shares
How to use DFS Namespaces with Azure Files
5/20/2022 • 12 minutes to read • Edit Online
Distributed File System Namespaces, commonly referred to as DFS Namespaces or DFS-N, is a Windows
Server server role that is widely used to simplify the deployment and maintenance of SMB file shares in
production. DFS Namespaces is a storage namespace virtualization technology, which means that it enables you
to provide a layer of indirection between the UNC path of your file shares and the actual file shares themselves.
DFS Namespaces works with SMB file shares, agnostic of where those file shares are hosted: it can be used with SMB
shares hosted on an on-premises Windows File Server with or without Azure File Sync, Azure file shares directly,
SMB file shares hosted in Azure NetApp Files or other third-party offerings, and even with file shares hosted in
other clouds.
At its core, DFS Namespaces provides a mapping between a user-friendly UNC path, like
\\contoso\shares\ProjectX and the underlying UNC path of the SMB share like \\Server01-Prod\ProjectX or
\\storageaccount.file.core.windows.net\projectx . When the end user wants to navigate to their file share, they
type in the user-friendly UNC path, but their SMB client accesses the underlying SMB path of the mapping. You
can also extend this basic concept to take over an existing file server name, such as \\MyServer\ProjectX . You
can use this capability to achieve the following scenarios:
Provide a migration-proof name for a logical set of data. In this example, you have a mapping like
\\contoso\shares\Engineering that maps to \\OldServer\Engineering . When you complete your
migration to Azure Files, you can change your mapping so your user-friendly UNC path points to
\\storageaccount.file.core.windows.net\engineering . When an end user accesses the user-friendly UNC
path, they will be seamlessly redirected to the Azure file share path.
Establish a common name for a logical set of data that is distributed to multiple servers at different
physical sites, such as through Azure File Sync. In this example, a name such as
\\contoso\shares\FileSyncExample is mapped to multiple UNC paths such as
\\FileSyncServer1\ExampleShare , \\FileSyncServer2\DifferentShareName , \\FileSyncServer3\ExampleShare .
When the user accesses the user-friendly UNC, they are given a list of possible UNC paths and choose the
one closest to them based on Windows Server Active Directory (AD) site definitions.
Extend a logical set of data across size, IO, or other scale thresholds. This is common when dealing with
user directories, where every user gets their own folder on a share, or with scratch shares, where users
get arbitrary space to handle temporary data needs. With DFS Namespaces, you stitch together multiple
folders into a cohesive namespace. For example, \\contoso\shares\UserShares\user1 maps to
\\storageaccount.file.core.windows.net\user1 , \\contoso\shares\UserShares\user2 maps to
\\storageaccount.file.core.windows.net\user2 , and so on.
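Conceptually, DFS-N is a lookup from a user-friendly UNC path to one or more folder targets. A toy resolver using the article's example paths (this is an illustration of the mapping only, not how DFS-N is actually implemented):

```python
# Friendly UNC path -> list of folder targets. Multi-target entries model the
# distributed scenario, where the client picks the closest target by AD site.
NAMESPACE = {
    r"\\contoso\shares\ProjectX": [
        r"\\storageaccount.file.core.windows.net\projectx",
    ],
    r"\\contoso\shares\FileSyncExample": [
        r"\\FileSyncServer1\ExampleShare",
        r"\\FileSyncServer2\DifferentShareName",
        r"\\FileSyncServer3\ExampleShare",
    ],
}

def resolve(friendly_path):
    """Return the folder targets for a friendly UNC path (empty if unmapped)."""
    return NAMESPACE.get(friendly_path, [])

print(resolve(r"\\contoso\shares\ProjectX"))
```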
You can see an example of how to use DFS Namespaces with your Azure Files deployment in the following video
overview.
NOTE
Skip to 10:10 in the video to see how to set up DFS Namespaces.
If you already have a DFS Namespace in place, no special steps are required to use it with Azure Files and File
Sync. If you're accessing your Azure file share from on-premises, normal networking considerations apply; see
Azure Files networking considerations for more information.
Applies to
F IL E SH A RE T Y P E SM B NFS
Namespace types
DFS Namespaces provides two main namespace types:
Domain-based namespace : A namespace hosted as part of your Windows Server AD domain.
Namespaces hosted as part of AD will have a UNC path containing the name of your domain, for example,
\\contoso.com\shares\myshare , if your domain is contoso.com . Domain-based namespaces support larger
scale limits and built-in redundancy through AD. Domain-based namespaces can't be a clustered resource on
a failover cluster.
Standalone namespace : A namespace hosted on an individual server, not hosted as part of Windows
Server AD. Standalone namespaces will have a name based on the name of the standalone server, such as
\\MyStandaloneServer\shares\myshare , where your standalone server is named MyStandaloneServer .
Standalone namespaces support lower scale targets than domain-based namespaces but can be hosted as a
clustered resource on a failover cluster.
Requirements
To use DFS Namespaces with Azure Files and File Sync, you must have the following resources:
An Active Directory domain. This can be hosted anywhere you like, such as on-premises, in an Azure virtual
machine (VM), or even in another cloud.
A Windows Server that can host the namespace. A common deployment pattern for DFS
Namespaces is to use the Active Directory domain controller to host the namespaces; however, the
namespaces can be set up from any server with the DFS Namespaces server role installed. DFS Namespaces
is available on all supported Windows Server versions.
An SMB file share hosted in a domain-joined environment, such as an Azure file share hosted within a
domain-joined storage account, or a file share hosted on a domain-joined Windows File Server using Azure
File Sync. For more on domain-joining your storage account, see Identity-based authentication. Windows File
Servers are domain-joined the same way regardless of whether you are using Azure File Sync.
The SMB file shares you want to use with DFS Namespaces must be reachable from your on-premises
networks. This is primarily a concern for Azure file shares; however, it technically applies to any file share
hosted in Azure or any other cloud. For more information on networking, see Networking considerations for
direct access.
Portal
PowerShell
To install the DFS Namespaces server role, open the Server Manager on your server. Select Manage , and then
select Add Roles and Features . The resulting wizard guides you through the installation of the necessary
Windows components to run and manage DFS Namespaces.
In the Installation Type section of the installation wizard, select the Role-based or feature-based
installation radio button and select Next . On the Server Selection section, select the desired server(s) on
which you would like to install the DFS Namespaces server role, and select Next .
In the Server Roles section, select and check the DFS Namespaces role from the role list. You can find this under
File and Storage Services > File and iSCSI Services . When you select the DFS Namespaces server role, it
may also add any required supporting server roles or features that you don't already have installed.
After you have checked the DFS Namespaces role, you may select Next on all subsequent screens, and select
Install as soon as the wizard enables the button. When the installation is complete, you may configure your
namespace.
Root consolidation may only be used with standalone namespaces. If you already have an existing domain-
based namespace for your file shares, you do not need to create a root consolidated namespace.
Enabling root consolidation
Root consolidation can be enabled by setting the following registry keys from an elevated PowerShell session
(or using PowerShell remoting).
New-Item `
-Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs" `
-Type Registry `
-ErrorAction SilentlyContinue
New-Item `
-Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters" `
-Type Registry `
-ErrorAction SilentlyContinue
New-Item `
-Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
-Type Registry `
-ErrorAction SilentlyContinue
Set-ItemProperty `
-Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
-Name "ServerConsolidationRetry" `
-Value 1
Portal
PowerShell
On a Windows DNS server, open the DNS management console. This can be found by selecting the Start button
and typing DNS . Navigate to the forward lookup zone for your domain. For example, if your domain is
contoso.com , the forward lookup zone can be found under Forward Lookup Zones > contoso.com in the
management console. The exact hierarchy shown in this dialog will depend on the DNS configuration for your
network.
Right-click on your forward lookup zone and select New Alias (CNAME) . In the resulting dialog, enter the
short name for the file server you're replacing (the fully qualified domain name will be auto-populated in the
textbox labeled Fully qualified domain name ). In the textbox labeled Fully qualified domain name
(FQDN) for the target host , enter the name of the DFS-N server you have set up. You can use the Browse
button to help you select the server if desired. Select OK to create the CNAME record for your server.
Create a namespace
The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the
starting point of the namespace, such that in the UNC path \\contoso.com\Public\ , the namespace root is
Public .
If you are using DFS Namespaces to take over an existing server name with root consolidation, the name of the
namespace should be the name of the server you want to take over, prepended with the # character. For
example, if you wanted to take over an existing server named MyServer , you would create a DFS-N namespace
called #MyServer . The PowerShell section below takes care of prepending the # , but if you create via the DFS
Management console, you will need to prepend as appropriate.
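The naming rule for root consolidation is mechanical enough to script if you are taking over many servers; a small sketch:

```python
def consolidated_namespace_name(server_name):
    """Namespace name for taking over an existing server with root
    consolidation: the server name prepended with '#'."""
    return server_name if server_name.startswith("#") else "#" + server_name

print(consolidated_namespace_name("MyServer"))  # -> #MyServer
```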
Portal
PowerShell
To create a new namespace, open the DFS Management console. This can be found by selecting the Start
button and typing DFS Management . The resulting management console has two sections, Namespaces and
Replication , which refer to DFS Namespaces and DFS Replication (DFS-R) respectively. Azure File Sync provides
a modern replication and synchronization mechanism that may be used in place of DFS-R if replication is also
desired.
Select the Namespaces section, and select the New Namespace button (you may also right-click on the
Namespaces section). The resulting New Namespace Wizard walks you through creating a namespace.
The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple
servers can host a namespace, but you will need to set up DFS Namespaces with one server at a time. Enter the
name of the desired DFS Namespace server and select Next . In the Namespace Name and Settings section,
you can enter the desired name of your namespace and select Next .
The Namespace Type section allows you to choose between a Domain-based namespace and a Stand-
alone namespace . If you intend to use DFS Namespaces to preserve an existing file server/NAS device name,
you should select the standalone namespace option. For any other scenarios, you should select a domain-based
namespace. Refer to namespace types above for more information on choosing between namespace types.
Select the desired namespace type for your environment and select Next . The wizard will then summarize the
namespace to be created. Select Create to create the namespace and Close when the dialog completes.
Portal
PowerShell
In the DFS Management console, select the namespace you just created and select New Folder . The resulting
New Folder dialog will allow you to create both the folder and its targets.
In the textbox labeled Name provide the name of the folder. Select Add... to add folder targets for this folder.
The resulting Add Folder Target dialog provides a textbox labeled Path to folder target where you can
provide the UNC path to the desired folder. Select OK on the Add Folder Target dialog. If you are adding a
UNC path to an Azure file share, you may receive a message reporting that the server
storageaccount.file.core.windows.net cannot be contacted. This is expected; select Yes to continue. Finally,
select OK on the New Folder dialog to create the folder and folder targets.
Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file
share through DFS Namespaces. If you are using a domain-based namespace, the full path for your share
should be \\<domain-name>\<namespace>\<share> . If you are using a standalone namespace, the full path for your
share should be \\<DFS-server>\<namespace>\<share> . If you are using a standalone namespace with root
consolidation, you can access directly through your old server name, such as \\<old-server>\<share> .
See also
Deploying an Azure file share: Planning for an Azure Files deployment and How to create a file share.
Configuring file share access: Identity-based authentication and Networking considerations for direct access.
Windows Server Distributed File System Namespaces
Optimize costs for Azure Files with reserved
capacity
5/20/2022 • 7 minutes to read • Edit Online
You can save money on the storage costs for Azure file shares with capacity reservations. Azure Files reserved
capacity offers you a discount on capacity for storage costs when you commit to a reservation for either one
year or three years. A reservation provides a fixed amount of storage capacity for the term of the reservation.
Azure Files reserved capacity can significantly reduce your capacity costs for storing data in your Azure file
shares. How much you save will depend on the duration of your reservation, the total capacity you choose to
reserve, and the tier and redundancy settings that you've chosen for your Azure file shares. Reserved capacity
provides a billing discount and doesn't affect the state of your Azure file shares.
For pricing information about reservation capacity for Azure Files, see Azure Files pricing.
Applies to
FILE SHARE TYPE | SMB | NFS
Subscription : The subscription that's used to pay for the Azure Files reservation. The payment method on
the selected subscription is used in charging the costs. The subscription must be one of the following types:
Tier : The tier for which the reservation is in effect. Options include Premium, Hot, and Cool.
Billing frequency : Indicates how often the account is billed for the reservation. Options include Monthly
or Upfront.
Expiration of a reservation
When a reservation expires, any Azure Files capacity that you are using under that reservation is billed at the
pay-as-you go rate. Reservations don't renew automatically.
You will receive an email notification 30 days prior to the expiration of the reservation, and again on the
expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later
than the expiration date.
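The value of renewing before expiration can be framed as a rough savings estimate. A sketch with placeholder numbers (the pay-as-you-go rate and discount below are hypothetical; use your actual quote):

```python
def annual_cost(gib, payg_rate_per_gib_month, discount=0.0):
    """Twelve months of capacity cost at a given per-GiB-month rate,
    optionally reduced by a reservation discount."""
    return gib * payg_rate_per_gib_month * (1 - discount) * 12

def annual_savings(gib, payg_rate_per_gib_month, discount):
    return annual_cost(gib, payg_rate_per_gib_month) - annual_cost(
        gib, payg_rate_per_gib_month, discount
    )

# Hypothetical: 10 TiB at $0.06/GiB-month with a 25% reservation discount.
print(round(annual_savings(10 * 1024, 0.06, 0.25), 2))
```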
Next steps
What are Azure Reservations?
Understand how reservation discounts are applied to Azure storage services
Mount SMB Azure file share on Windows
5/20/2022 • 6 minutes to read • Edit Online
Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be seamlessly used in Windows
and Windows Server. This article discusses the considerations for using an Azure file share with Windows and
Windows Server.
In order to use an Azure file share via the public endpoint outside of the Azure region it is hosted in, such as on-
premises or in a different Azure region, the OS must support SMB 3.x. Older versions of Windows that support
only SMB 2.1 cannot mount Azure file shares via the public endpoint.
WINDOWS VERSION | SMB VERSION | AZURE FILES SMB MULTICHANNEL | MAXIMUM SMB CHANNEL ENCRYPTION
Windows 10, version 21H1 | SMB 3.1.1 | Yes, with KB5003690 or newer | AES-128-GCM
Windows 10, version 20H2 | SMB 3.1.1 | Yes, with KB5003690 or newer | AES-128-GCM
Windows 10, version 2004 | SMB 3.1.1 | Yes, with KB5003690 or newer | AES-128-GCM
Windows 10, version 1809 | SMB 3.1.1 | Yes, with KB5003703 or newer | AES-128-GCM
Windows 10, version 1607 | SMB 3.1.1 | Yes, with KB5004238 or newer and applied registry key | AES-128-GCM
Windows 10, version 1507 | SMB 3.1.1 | Yes, with KB5004249 or newer and applied registry key | AES-128-GCM
¹ Regular Microsoft support for Windows 7 and Windows Server 2008 R2 has ended. It is possible to purchase additional support for security updates only through the Extended Security Update (ESU) program. We strongly recommend migrating off of these operating systems.
NOTE
We always recommend taking the most recent KB for your version of Windows.
Applies to
File share type | SMB | NFS
Prerequisites
Ensure port 445 is open: The SMB protocol requires TCP port 445 to be open; connections will fail if port 445 is
blocked. You can check if your firewall is blocking port 445 with the Test-NetConnection cmdlet. To learn about
ways to work around a blocked 445 port, see the Cause 1: Port 445 is blocked section of our Windows
troubleshooting guide.
NOTE
The following instructions are shown on Windows 10 and may differ slightly on older releases.
1. Open File Explorer, either from the Start Menu or by pressing the Win+E shortcut.
2. Navigate to This PC on the left-hand side of the window. This will change the menus available in the ribbon. Under the Computer menu, select Map network drive.
3. Select the drive letter and enter the UNC path. The UNC path format is \\<storageAccountName>.file.core.windows.net\<fileShareName>. For example: \\anexampleaccountname.file.core.windows.net\example-share-name.
4. Use the storage account name prepended with AZURE\ as the username and a storage account key as the password.
5. Select Connect.
6. When you are ready to dismount the Azure file share, right-click on the entry for the share under Network locations in File Explorer and select Disconnect.
Accessing share snapshots from Windows
If you have taken a share snapshot, either manually or automatically through a script or service like Azure
Backup, you can view previous versions of a share, a directory, or a particular file from the file share on Windows.
You can take a share snapshot using Azure PowerShell, Azure CLI, or the Azure portal.
List previous versions
Browse to the item or parent item that needs to be restored. Double-click to go to the desired directory. Right-click and select Properties from the menu.
Select Previous Versions to see the list of share snapshots for this directory. The list might take a few seconds
to load, depending on the network speed and the number of share snapshots in the directory.
Set-ItemProperty `
-Path "HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides" `
-Name "2291605642" `
-Value 1 `
-Force
Set-ItemProperty `
-Path "HKLM:\SYSTEM\CurrentControlSet\Services\MRxSmb\KBSwitch" `
-Name "{FFC376AE-A5D2-47DC-A36F-FE9A46D53D75}" `
-Value 1 `
-Force
Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
FAQ
Troubleshooting on Windows
Mount SMB Azure file share on Linux
5/20/2022 • 8 minutes to read • Edit Online
Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Linux distributions
using the SMB kernel client.
The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires
encryption in transit, which is supported by SMB 3.0 and later. Azure Files also supports SMB 2.1, which doesn't support encryption in transit; for security reasons, you can't mount Azure file shares with SMB 2.1 from another Azure region or from on-premises. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
Distribution | SMB 3.1.1 | SMB 3.0
Linux kernel version | Basic 3.1.1 support: 4.17; Default mount: 5.0; AES-128-GCM encryption: 5.3 | Basic 3.0 support: 3.12; AES-128-CCM encryption: 4.11
SUSE Linux Enterprise Server | AES-128-GCM encryption: 15 SP2+ | AES-128-CCM encryption: 12 SP2+
If your Linux distribution isn't listed in the above table, you can check the Linux kernel version with the uname
command:
uname -r
NOTE
SMB 2.1 support was added to Linux kernel version 3.7. If you are using a version of the Linux kernel after 3.7, it should
support SMB 2.1.
Applies to
File share type | SMB | NFS
Prerequisites
Ensure the cifs-utils package is installed.
The cifs-utils package can be installed using the package manager on the Linux distribution of your
choice.
On Ubuntu and Debian, use the apt package manager:
On older versions of Red Hat Enterprise Linux, use the yum package manager:
On SUSE Linux Enterprise Server, use the zypper package manager:
On other distributions, use the appropriate package manager or compile from source.
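The exact invocation differs per distribution; a small sketch that detects the available package manager and prints the matching install command (the `-y` flag is a convenience assumption, and the package is named `cifs-utils` on all three families):

```shell
#!/usr/bin/env bash
# Print the cifs-utils install command for the package manager found on this system.
if command -v apt-get >/dev/null 2>&1; then
    installCmd="sudo apt-get install -y cifs-utils"      # Ubuntu, Debian
elif command -v yum >/dev/null 2>&1; then
    installCmd="sudo yum install -y cifs-utils"          # older Red Hat Enterprise Linux
elif command -v zypper >/dev/null 2>&1; then
    installCmd="sudo zypper install -y cifs-utils"       # SUSE Linux Enterprise Server
else
    installCmd="# install cifs-utils with your distribution's package manager"
fi
echo "$installCmd"
```

Run the printed command with root privileges to install the package.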
The most recent version of the Azure Command Line Interface (CLI). For more information on
how to install the Azure CLI, see Install the Azure CLI and select your operating system. If you prefer to
use the Azure PowerShell module in PowerShell 6+, you may; however, the instructions in this article are for the Azure CLI.
Ensure port 445 is open: SMB communicates over TCP port 445. Check that your firewall isn't blocking TCP port 445 from the client machine. Replace <your-resource-group> and <your-storage-account>, then run the following script:
resourceGroupName="<your-resource-group>"
storageAccountName="<your-storage-account>"
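Those variables feed a connectivity check against the storage account's file endpoint; a sketch using nc (the endpoint value below is a hypothetical placeholder, and in practice you would query it with `az storage account show --query "primaryEndpoints.file"`):

```shell
# Hypothetical file endpoint; replace with the value returned by:
#   az storage account show --resource-group $resourceGroupName \
#       --name $storageAccountName --query "primaryEndpoints.file" --output tsv
httpEndpoint="https://mystorageacct.file.core.windows.net/"

# Strip the https:// prefix and the trailing slash to get the SMB host name.
fileHost=$(printf '%s' "$httpEndpoint" | sed -e 's|^https://||' -e 's|/$||')

# Probe TCP port 445 with a 3-second timeout; nc exits nonzero if the port is blocked.
nc -zvw3 "$fileHost" 445 || echo "port 445 unreachable from this machine"
```

A successful probe produces the "succeeded!" output shown below.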
If the connection was successful, you should see something similar to the following output:
Connection to <your-storage-account> 445 port [tcp/microsoft-ds] succeeded!
If you are unable to open up port 445 on your corporate network or are blocked from doing so by an ISP,
you may use a VPN connection or ExpressRoute to work around port 445. For more information, see
Networking considerations for direct Azure file share access.
resourceGroupName="<resource-group-name>"
storageAccountName="<storage-account-name>"
fileShareName="<file-share-name>"
mntRoot="/mount"
mntPath="$mntRoot/$storageAccountName/$fileShareName"
Next, mount the file share using the mount command. In the following example, the $smbPath variable is populated using the fully qualified domain name for the storage account's file endpoint, and $storageAccountKey is populated with the storage account key.
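Put together, a mount invocation might look like the sketch below (the account name, share name, and key are placeholders; the mount only executes once the placeholder key has been replaced with a real one):

```shell
storageAccountName="mystorageacct"            # placeholder
fileShareName="myshare"                       # placeholder
storageAccountKey="<storage-account-key>"     # placeholder: a real account key

smbPath="//$storageAccountName.file.core.windows.net/$fileShareName"
mntPath="/mount/$storageAccountName/$fileShareName"

# Only attempt the mount when a real key has been supplied.
if [ "$storageAccountKey" != "<storage-account-key>" ]; then
    sudo mkdir -p "$mntPath"
    sudo mount -t cifs "$smbPath" "$mntPath" \
        -o "vers=3.1.1,username=$storageAccountName,password=$storageAccountKey,serverino,nosharesock,actimeo=30"
fi
echo "$smbPath"
```

The serverino, nosharesock, and actimeo options are common choices for Azure Files; tune them for your workload.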
SMB 3.1.1
NOTE
Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. If you're using a version of the Linux
kernel older than 5.0, specify vers=3.1.1 in the mount options list.
You can use uid / gid or dir_mode and file_mode in the mount options for the mount command to set
permissions. For more information on how to set permissions, see UNIX numeric notation on Wikipedia.
You can also mount the same Azure file share to multiple mount points if desired. When you are done using the
Azure file share, use sudo umount $mntPath to unmount the share.
mntRoot="/mount"
sudo mkdir -p $mntRoot
To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the
storage account key as the password. Since the storage account credentials may change over time, you should
store the credentials for the storage account separately from the mount configuration.
The following example shows how to create a file to store the credentials. Remember to replace
<resource-group-name> and <storage-account-name> with the appropriate information for your environment.
resourceGroupName="<resource-group-name>"
storageAccountName="<storage-account-name>"
# Create a folder to store the credentials for this storage account and
# any other that you might set up.
credentialRoot="/etc/smbcredentials"
sudo mkdir -p "/etc/smbcredentials"
# Get the storage account key for the indicated storage account.
# You must be logged in with az login and your user identity must have
# permissions to list the storage account keys for this command to work.
storageAccountKey=$(az storage account keys list \
--resource-group $resourceGroupName \
--account-name $storageAccountName \
--query "[0].value" --output tsv | tr -d '"')
# Change permissions on the credential file so only root can read or modify the password file.
sudo chmod 600 $smbCredentialFile
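The credential file referenced above can be created as sketched below (a scratch directory is used here so the sketch runs without root; in practice, write under /etc/smbcredentials with sudo, and note that the .cred file name is an assumption):

```shell
storageAccountName="mystorageacct"          # placeholder
storageAccountKey="<storage-account-key>"   # placeholder

# Scratch directory for illustration; production scripts use /etc/smbcredentials.
credentialRoot="$(mktemp -d)"
smbCredentialFile="$credentialRoot/$storageAccountName.cred"

# cifs-utils reads credentials in key=value form, one entry per line.
printf 'username=%s\npassword=%s\n' "$storageAccountName" "$storageAccountKey" > "$smbCredentialFile"

# Lock the file down so only the owner can read or modify it.
chmod 600 "$smbCredentialFile"
```

The resulting file is what the credentials= mount option points at.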
To automatically mount a file share, you have a choice between using a static mount via the /etc/fstab file or using a dynamic mount via the autofs utility.
Static mount with /etc/fstab
Using the earlier environment, create a folder for your storage account/file share under your mount folder.
Replace <file-share-name> with the appropriate name of your Azure file share.
fileShareName="<file-share-name>"
mntPath="$mntRoot/$storageAccountName/$fileShareName"
sudo mkdir -p $mntPath
Finally, create a record in the /etc/fstab file for your Azure file share. In the command below, the default 0755
Linux file and folder permissions are used, which means read, write, and execute for the owner (based on the file/directory Linux owner), read and execute for users in the owner group, and read and execute for others on the
system. You may wish to set alternate uid and gid or dir_mode and file_mode permissions on mount as
desired. For more information on how to set permissions, see UNIX numeric notation on Wikipedia.
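Such a record can be sketched as follows (names mirror the variables above; the entry is echoed rather than written so the placeholders don't end up in a real /etc/fstab):

```shell
storageAccountName="mystorageacct"   # placeholder
fileShareName="myshare"              # placeholder
smbPath="//$storageAccountName.file.core.windows.net/$fileShareName"
mntPath="/mount/$storageAccountName/$fileShareName"
smbCredentialFile="/etc/smbcredentials/$storageAccountName.cred"

# nofail keeps boot from hanging if the share is unreachable;
# dir_mode/file_mode apply the default 0755 permissions discussed above.
fstabEntry="$smbPath $mntPath cifs nofail,credentials=$smbCredentialFile,dir_mode=0755,file_mode=0755,serverino,nosharesock,actimeo=30"
echo "$fstabEntry"
# To install it:  echo "$fstabEntry" | sudo tee -a /etc/fstab > /dev/null
```

After appending the entry, `sudo mount -a` mounts everything listed in /etc/fstab.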
sudo mount -a
NOTE
Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. You can specify alternate protocol
versions using the vers mount option (protocol versions are 3.1.1 , 3.0 , and 2.1 ).
On older versions of Red Hat Enterprise Linux, use the yum package manager:
On SUSE Linux Enterprise Server, use the zypper package manager:
Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
Remove SMB 1 on Linux
Troubleshooting
How to mount an NFS file share
5/20/2022 • 2 minutes to read • Edit Online
Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Linux distributions using either the Server Message Block (SMB) protocol or the Network File System (NFS) protocol. This article focuses on mounting with NFS; for details on mounting with SMB, see Use Azure Files with Linux. For details on
each of the available protocols, see Azure file share protocols.
Limitations
NFS Azure file shares are only available for the premium tier.
Regional availability
Azure NFS file shares are supported in all the same regions that support premium file storage.
For the most up-to-date list, see the Premium Files Storage entry on the Azure Products available by region
page.
Prerequisites
Create an NFS share.
Open port 2049 on any client you want to mount your NFS share to.
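With the prerequisites in place, mounting follows the usual NFS pattern; a sketch (account and share names are placeholders, the command is printed rather than executed, and the `vers=4,minorversion=1` options reflect that NFS Azure file shares use NFS 4.1):

```shell
storageAccountName="mystorageacct"   # placeholder
fileShareName="myshare"              # placeholder
mntPath="/mount/$storageAccountName/$fileShareName"

# NFS shares are exported as /<storage-account>/<share> from the file endpoint.
nfsSource="$storageAccountName.file.core.windows.net:/$storageAccountName/$fileShareName"

mountCmd="sudo mount -t nfs $nfsSource $mntPath -o vers=4,minorversion=1,sec=sys"
echo "$mountCmd"
```

Create the mount point first (`sudo mkdir -p "$mntPath"`), then run the printed command once real names are in place.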
IMPORTANT
NFS shares can only be accessed from trusted networks. Connections to your NFS share must originate from one
of the following sources:
Next steps
Learn more about Azure Files with our article, Planning for an Azure Files deployment.
If you experience any issues, see Troubleshoot Azure NFS file shares.
Mount SMB Azure file share on macOS
5/20/2022 • 2 minutes to read • Edit Online
Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted with the industry
standard SMB 3 protocol by macOS High Sierra 10.13+. This article shows two different ways to mount an
Azure file share on macOS: with the Finder UI and using the Terminal.
Applies to
File share type | SMB | NFS
2. Select "Connect to Server" from the "Go" Menu : Using the UNC path from the prerequisites, convert the beginning double backslash ( \\ ) to smb:// and all other backslashes ( \ ) to forward slashes ( / ). Your link should look like the following:
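That conversion can also be done with a shell one-liner, which may help when scripting; the account and share names below are hypothetical:

```shell
# UNC path from the prerequisites (single quotes keep the backslashes literal).
uncPath='\\mystorageacct.file.core.windows.net\myshare'

# Turn the leading \\ into smb:// and every remaining \ into /.
smbUrl=$(printf '%s' "$uncPath" | sed -e 's|^\\\\|smb://|' -e 's|\\|/|g')
echo "$smbUrl"
```

The resulting smb:// URL is what you paste into the Connect to Server dialog.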
3. Use the storage account name and storage account key when prompted for a username and password : When you click "Connect" on the "Connect to Server" dialog, you will be prompted for a username and password (this will be prepopulated with your macOS username). You have the option of placing the storage account name/storage account key in your macOS Keychain.
4. Use the Azure file share as desired : After substituting the share name and storage account key in for
the username and password, the share will be mounted. You may use this as you would normally use a
local folder/file share, including dragging and dropping files into the file share:
2. Use the Azure file share as desired : The Azure file share will be mounted at the mount point
specified by the previous command.
Next steps
Connect your Mac to shared computers and servers - Apple Support
Configuring Azure Files network endpoints
5/20/2022 • 17 minutes to read • Edit Online
Azure Files provides two main types of endpoints for accessing Azure file shares:
Public endpoints, which have a public IP address and can be accessed from anywhere in the world.
Private endpoints, which exist within a virtual network and have a private IP address from within the address
space of that virtual network.
Public and private endpoints exist on the Azure storage account. A storage account is a management construct
that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage
resources, such as blob containers or queues.
This article focuses on how to configure a storage account's endpoints for accessing the Azure file share directly.
Most of the detail provided within this document also applies to how Azure File Sync interoperates with public and private endpoints for the storage account; however, for more information about networking considerations for an Azure File Sync deployment, see configuring Azure File Sync proxy and firewall settings.
We recommend reading Azure Files networking considerations prior to reading this how to guide.
Applies to
File share type | SMB | NFS
Prerequisites
This article assumes that you have already created an Azure subscription. If you don't already have a
subscription, then create a free account before you begin.
This article assumes that you have already created an Azure file share in a storage account that you would
like to connect to from on-premises. To learn how to create an Azure file share, see Create an Azure file share.
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.
Endpoint configurations
You can configure your endpoints to restrict network access to your storage account. There are two approaches
to restricting access to a storage account to a virtual network:
Create one or more private endpoints for the storage account and restrict all access to the public endpoint.
This ensures that only traffic originating from within the desired virtual networks can access the Azure file
shares within the storage account.
Restrict the public endpoint to one or more virtual networks. This works by using a capability of the virtual
network called service endpoints. When you restrict the traffic to a storage account via a service endpoint,
you are still accessing the storage account via the public IP address, but access is only possible from the
locations you specify in your configuration.
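As an illustration, the second approach might be scripted with the Azure CLI along these lines (resource names are placeholders, and the commands are printed rather than executed so the sketch is safe to run as-is):

```shell
resourceGroupName="myResourceGroup"      # placeholder
storageAccountName="mystorageacct"       # placeholder
virtualNetworkName="myVnet"              # placeholder
subnetName="mySubnet"                    # placeholder

# 1. Deny traffic to the public endpoint by default.
cmd1="az storage account update --resource-group $resourceGroupName --name $storageAccountName --default-action Deny"

# 2. Allow the chosen subnet back in through a service-endpoint network rule.
cmd2="az storage account network-rule add --resource-group $resourceGroupName --account-name $storageAccountName --vnet-name $virtualNetworkName --subnet $subnetName"

printf '%s\n%s\n' "$cmd1" "$cmd2"
```

Note that the subnet must have the Microsoft.Storage service endpoint enabled (for example, via `az network vnet subnet update --service-endpoints Microsoft.Storage`) before the network rule takes effect.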
Create a private endpoint
Creating a private endpoint for your storage account will result in the following Azure resources being deployed:
A private endpoint : An Azure resource representing the storage account's private endpoint. You can think
of this as a resource that connects a storage account and a network interface.
A network interface (NIC) : The network interface that maintains a private IP address within the specified
virtual network/subnet. This is the same resource that gets deployed when you deploy a virtual machine; however, instead of being assigned to a VM, it's owned by the private endpoint.
A private DNS zone : If you've never deployed a private endpoint for this virtual network before, a new
private DNS zone will be deployed for your virtual network. A DNS A record will also be created for the
storage account in this DNS zone. If you've already deployed a private endpoint in this virtual network, a new
A record for the storage account will be added to the existing DNS zone. Deploying a DNS zone is optional but highly recommended, and it is required if you are mounting your Azure file shares with an AD service principal or using the FileREST API.
NOTE
This article uses the storage account DNS suffix for the Azure public regions, core.windows.net. This commentary also applies to Azure sovereign clouds such as the Azure US Government cloud and the Azure China cloud; just substitute the appropriate suffix for your environment.
Portal
Navigate to the storage account for which you would like to create a private endpoint. In the table of contents
for the storage account, select Networking , Private endpoint connections , and then + Private endpoint to
create a new private endpoint.
The resulting wizard has multiple pages to complete.
In the Basics blade, select the desired resource group, name, and region for your private endpoint. These can be whatever you want; they don't have to match the storage account in any way, although you must create the private endpoint in the same region as the virtual network you wish to create the private endpoint in.
In the Resource blade, select the radio button for Connect to an Azure resource in my directory. Under
Resource type , select Microsoft.Storage/storageAccounts for the resource type. The Resource field is the
storage account with the Azure file share you wish to connect to. Target sub-resource is file , since this is for
Azure Files.
The Configuration blade allows you to select the specific virtual network and subnet you would like to add
your private endpoint to. You must select a distinct subnet from the subnet you added your service endpoint to
above. The Configuration blade also contains the information for creating or updating the private DNS zone. We
recommend using the default privatelink.file.core.windows.net zone.
Verify connectivity
Portal
If you have a virtual machine inside of your virtual network, or you've configured DNS forwarding as described
in Configuring DNS forwarding for Azure Files, you can test that your private endpoint has been set up correctly
by running the following commands from PowerShell, the command line, or the terminal (works for Windows,
Linux, or macOS). You must replace <storage-account-name> with the appropriate storage account name:
nslookup <storage-account-name>.file.core.windows.net
If everything has worked successfully, you should see the following output, where 192.168.0.5 is the private IP
address of the private endpoint in your virtual network (output shown for Windows):
Server: UnKnown
Address: 10.2.4.4
Non-authoritative answer:
Name: storageaccount.privatelink.file.core.windows.net
Address: 192.168.0.5
Aliases: storageaccount.file.core.windows.net
Portal
Navigate to the storage account for which you would like to restrict all access to the public endpoint. In the table
of contents for the storage account, select Networking .
At the top of the page, select the Selected networks radio button. This will reveal a number of settings for controlling the restriction of the public endpoint. Check Allow trusted Microsoft services to access this storage account to allow trusted first-party Microsoft services such as Azure File Sync to access the storage account.
Portal
Navigate to the storage account for which you would like to restrict the public endpoint to specific virtual
networks. In the table of contents for the storage account, select Networking .
At the top of the page, select the Selected networks radio button. This will reveal a number of settings for controlling the restriction of the public endpoint. Click +Add existing virtual network to select the specific virtual network that should be allowed to access the storage account via the public endpoint. This will require selecting a virtual network and a subnet for that virtual network.
Check Allow trusted Microsoft services to access this storage account to allow trusted first-party Microsoft services such as Azure File Sync to access the storage account.
See also
Azure Files networking considerations
Configuring DNS forwarding for Azure Files
Configuring S2S VPN for Azure Files
Configuring DNS forwarding for Azure Files
5/20/2022 • 6 minutes to read • Edit Online
Azure Files enables you to create private endpoints for the storage accounts containing your file shares.
Although useful for many different applications, private endpoints are especially useful for connecting to your
Azure file shares from your on-premises network using a VPN or ExpressRoute connection using private-
peering.
In order for connections to your storage account to go over your network tunnel, the fully qualified domain
name (FQDN) of your storage account must resolve to your private endpoint's private IP address. To achieve
this, you must forward the storage endpoint suffix ( core.windows.net for public cloud regions) to the Azure
private DNS service accessible from within your virtual network. This guide will show how to set up and configure DNS forwarding to properly resolve to your storage account's private endpoint IP address.
We strongly recommend that you read Planning for an Azure Files deployment and Azure Files networking
considerations before you complete the steps described in this article.
Applies to
File share type | SMB | NFS
Overview
Azure Files provides two main types of endpoints for accessing Azure file shares:
Public endpoints, which have a public IP address and can be accessed from anywhere in the world.
Private endpoints, which exist within a virtual network and have a private IP address from within the address
space of that virtual network.
Public and private endpoints exist on the Azure storage account. A storage account is a management construct
that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage
resources, such as blob containers or queues.
Every storage account has a fully qualified domain name (FQDN). For the public cloud regions, this FQDN
follows the pattern storageaccount.file.core.windows.net where storageaccount is the name of the storage
account. When you make requests against this name, such as mounting the share on your workstation, your
operating system performs a DNS lookup to resolve the fully qualified domain name to an IP address.
By default, storageaccount.file.core.windows.net resolves to the public endpoint's IP address. The public
endpoint for a storage account is hosted on an Azure storage cluster which hosts many other storage accounts'
public endpoints. When you create a private endpoint, a private DNS zone is linked to the virtual network it was
added to, with a CNAME record mapping storageaccount.file.core.windows.net to an A record entry for the
private IP address of your storage account's private endpoint. This enables you to use
storageaccount.file.core.windows.net FQDN within the virtual network and have it resolve to the private
endpoint's IP address.
Since our ultimate objective is to access the Azure file shares hosted within the storage account from on-
premises using a network tunnel such as a VPN or ExpressRoute connection, you must configure your on-
premises DNS servers to forward requests made to the Azure Files service to the Azure private DNS service. To
accomplish this, you need to set up conditional forwarding of *.core.windows.net (or the appropriate storage
endpoint suffix for the US Government, Germany, or China national clouds) to a DNS server hosted within your
Azure virtual network. This DNS server will then recursively forward the request on to Azure's private DNS
service that will resolve the fully qualified domain name of the storage account to the appropriate private IP
address.
Configuring DNS forwarding for Azure Files requires running a virtual machine to host a DNS server to forward the requests; however, this is a one-time step for all the Azure file shares hosted within your virtual network. Additionally, this requirement is not exclusive to Azure Files: any Azure service that supports
private endpoints that you want to access from on-premises can make use of the DNS forwarding you will
configure in this guide: Azure Blob storage, SQL Azure, Cosmos DB, etc.
This guide shows the steps for configuring DNS forwarding for the Azure storage endpoint, so in addition to
Azure Files, DNS name resolution requests for all of the other Azure storage services (Azure Blob storage, Azure
Table storage, Azure Queue storage, etc.) will be forwarded to Azure's private DNS service. Additional endpoints
for other Azure services can also be added if desired. DNS forwarding back to your on-premises DNS servers
will also be configured, enabling cloud resources within your virtual network (such as a DFS-N server) to
resolve on-premises machine names.
Prerequisites
Before you can set up DNS forwarding to Azure Files, you need to have completed the following steps:
A storage account containing an Azure file share you would like to mount. To learn how to create a storage
account and an Azure file share, see Create an Azure file share.
A private endpoint for the storage account. To learn how to create a private endpoint for Azure Files, see
Create a private endpoint.
The latest version of the Azure PowerShell module.
IMPORTANT
This guide assumes you're using the DNS server within Windows Server in your on-premises environment. All of the steps
described in this guide are possible with any DNS server, not just the Windows DNS Server.
$storageAccountEndpoint = Get-AzContext | `
Select-Object -ExpandProperty Environment | `
Select-Object -ExpandProperty StorageEndpointSuffix
Add-DnsServerConditionalForwarderZone `
-Name $storageAccountEndpoint `
-MasterServers $vnetDnsServers
On the DNS servers within your Azure virtual network, you also will need to put a forwarder in place such that
requests for the storage account DNS zone are directed to the Azure private DNS service, which is fronted by
the reserved IP address 168.63.129.16 . (Remember to populate $storageAccountEndpoint if you're running the
commands within a different PowerShell session.)
Add-DnsServerConditionalForwarderZone `
-Name $storageAccountEndpoint `
-MasterServers "168.63.129.16"
# Replace storageaccount.file.core.windows.net with the appropriate FQDN for your storage account.
# Note the proper suffix (core.windows.net) depends on the cloud you're deployed in.
Resolve-DnsName -Name storageaccount.file.core.windows.net
If the name resolution is successful, you should see the resolved IP address match the IP address of your storage
account.
Name : storageaccount.privatelink.file.core.windows.net
QueryType : A
TTL : 1769
Section : Answer
IP4Address : 192.168.0.4
If you're mounting an SMB file share, you can also use the following Test-NetConnection command to see that a
TCP connection can be successfully made to your storage account.
See also
Planning for an Azure Files deployment
Azure Files networking considerations
Configuring Azure Files network endpoints
Configure a Site-to-Site VPN for use with Azure
Files
5/20/2022 • 8 minutes to read • Edit Online
You can use a Site-to-Site (S2S) VPN connection to mount your Azure file shares from your on-premises
network, without sending data over the open internet. You can set up a Site-to-Site VPN using Azure VPN
Gateway, which is an Azure resource offering VPN services, and is deployed in a resource group alongside
storage accounts or other Azure resources.
We strongly recommend that you read Azure Files networking overview before continuing with this how to
article for a complete discussion of the networking options available for Azure Files.
The article details the steps to configure a Site-to-Site VPN to mount Azure file shares directly on-premises. If
you're looking to route sync traffic for Azure File Sync over a Site-to-Site VPN, please see configuring Azure File
Sync proxy and firewall settings.
Applies to
File share type | SMB | NFS
Prerequisites
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A private endpoint for the storage account containing the Azure file share you want to mount on-
premises. To learn more about how to create a private endpoint, see Configuring Azure Files network
endpoints.
A network appliance or server in your on-premises datacenter that is compatible with Azure VPN
Gateway. Azure Files is agnostic of the on-premises network appliance chosen but Azure VPN Gateway
maintains a list of tested devices. Different network appliances offer different features, performance
characteristics, and management functionalities, so consider these when selecting a network appliance.
If you do not have an existing network appliance, Windows Server contains a built-in Server Role, Routing
and Remote Access (RRAS), which may be used as the on-premises network appliance. To learn more
about how to configure Routing and Remote Access in Windows Server, see RAS Gateway.
If you add an existing virtual network, you will be asked to select one or more subnets of that virtual network to which the storage account should be added. If you select a new virtual network, you will create a subnet as part of the creation of the virtual network, and you can add more later through the resulting Azure resource for the virtual network.
If you have not added a storage account to your subscription before, the Microsoft.Storage service endpoint will
need to be added to the virtual network. This may take some time, and until this operation has completed, you
will not be able to access the Azure file shares within that storage account, including via the VPN connection.
Deploy an Azure VPN Gateway
In the table of contents for the Azure portal, select Create a new resource and search for Virtual network
gateway. Your virtual network gateway must be in the same subscription, Azure region, and resource group as
the virtual network you deployed in the previous step (note that resource group is automatically selected when
the virtual network is picked).
For the purposes of deploying an Azure VPN Gateway, you must populate the following fields:
Name : The name of the Azure resource for the VPN Gateway. This name may be any name you find useful
for your management.
Region : The region into which the VPN Gateway will be deployed.
Gateway type : For the purpose of deploying a Site-to-Site VPN, you must select VPN .
VPN type : You may choose either Route-based or Policy-based depending on your VPN device. Route-based VPNs support IKEv2, while policy-based VPNs only support IKEv1. To learn more about the two types of VPN gateways, see About policy-based and route-based VPN gateways.
SKU : The SKU controls the number of allowed Site-to-Site tunnels and desired performance of the VPN. To
select the appropriate SKU for your use case, consult the Gateway SKU listing. The SKU of the VPN Gateway
may be changed later if necessary.
Virtual network : The virtual network you created in the previous step.
Public IP address : The IP address of VPN Gateway that will be exposed to the internet. Likely, you will need
to create a new IP address, however you may also use an existing unused IP address if that is appropriate. If
you select to Create new , a new IP address Azure resource will be created in the same resource group as the
VPN Gateway and the Public IP address name will be the name of the newly created IP address. If you
select Use existing , you must select the existing unused IP address.
Enable active-active mode : Only select Enabled if you are creating an active-active gateway
configuration, otherwise leave Disabled selected. To learn more about active-active mode, see Highly
available cross-premises and VNet-to-VNet connectivity.
Configure BGP ASN : Only select Enabled if your configuration specifically requires this setting. To learn
more about this setting, see About BGP with Azure VPN Gateway.
Select Review + create to create the VPN Gateway. A VPN Gateway may take up to 45 minutes to fully create
and deploy.
Create a local network gateway for your on-premises gateway
A local network gateway is an Azure resource that represents your on-premises network appliance. In the table
of contents for the Azure portal, select Create a new resource and search for local network gateway. The local
network gateway is an Azure resource that will be deployed alongside your storage account, virtual network,
and VPN Gateway, but does not need to be in the same resource group or subscription as the storage account.
For the purposes of deploying the local network gateway resource, you must populate the following fields:
Name : The name of the Azure resource for the local network gateway. This name may be any name you find
useful for your management.
IP address : The public IP address of your local gateway on-premises.
Address space : The address ranges for the network this local network gateway represents. You can add
multiple address space ranges, but make sure that the ranges you specify here do not overlap with ranges of
other networks that you want to connect to.
Configure BGP settings : Only configure BGP settings if your configuration requires this setting. To learn
more about this setting, see About BGP with Azure VPN Gateway.
Subscription : The desired subscription. This does not need to match the subscription used for the VPN
Gateway or the storage account.
Resource group : The desired resource group. This does not need to match the resource group used for the
VPN Gateway or the storage account.
Location : The Azure Region the local network gateway resource should be created in. This should match the
region you selected for the VPN Gateway and the storage account.
Select Create to create the local network gateway resource.
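Because overlapping address spaces are a common cause of failed Site-to-Site connections, it can help to sanity-check your ranges before entering them in the Address space field. The following bash sketch is illustrative only (the helper names are made up, and the logic assumes IPv4 CIDR notation):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "overlap" if two CIDR ranges intersect, "ok" otherwise.
cidr_overlap() {
  local net1="${1%/*}" len1="${1#*/}" net2="${2%/*}" len2="${2#*/}"
  local start1=$(( $(ip_to_int "$net1") & ~((1 << (32 - len1)) - 1) ))
  local end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  local start2=$(( $(ip_to_int "$net2") & ~((1 << (32 - len2)) - 1) ))
  local end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  if [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]; then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap "192.168.0.0/24" "192.168.0.128/25"   # overlap
cidr_overlap "192.168.0.0/24" "10.0.0.0/16"        # ok
```

Run the check against each pair of on-premises and Azure virtual network ranges before creating the local network gateway.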
See also
Azure Files networking overview
Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Windows
for use with Azure Files
5/20/2022
You can use a Point-to-Site (P2S) VPN connection to mount your Azure file shares over SMB from outside of
Azure, without opening up port 445. A Point-to-Site VPN connection is a VPN connection between Azure and an
individual client. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be configured
for each client that wants to connect. If you have many clients that need to connect to your Azure file shares
from your on-premises network, you can use a Site-to-Site (S2S) VPN connection instead of a Point-to-Site
connection for each client. To learn more, see Configure a Site-to-Site VPN for use with Azure Files.
We strongly recommend that you read Networking considerations for direct Azure file share access before
continuing with this how-to article for a complete discussion of the networking options available for Azure Files.
The article details the steps to configure a Point-to-Site VPN on Windows (Windows client and Windows Server)
to mount Azure file shares directly on-premises. If you're looking to route Azure File Sync traffic over a VPN,
please see configuring Azure File Sync proxy and firewall settings.
Applies to
FILE SHARE TYPE    SMB    NFS
Prerequisites
The most recent version of the Azure PowerShell module. For more information on how to install
Azure PowerShell, see Install the Azure PowerShell module and select your operating system. If you
prefer to use the Azure CLI on Windows, you may; however, the instructions below are presented for
Azure PowerShell.
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A virtual network with a private endpoint for the storage account containing the Azure file share you
want to mount on-premises. To learn more about how to create a private endpoint, see Configuring Azure
Files network endpoints.
$resourceGroupName = "<resource-group-name>"
$virtualNetworkName = "<vnet-name>"
$subnetName = "<subnet-name>"
$storageAccountName = "<storage-account-name>"
$virtualNetwork = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroupName `
-Name $virtualNetworkName
$subnetId = $virtualNetwork | `
Select-Object -ExpandProperty Subnets | `
Where-Object { $_.Name -eq "StorageAccountSubnet" } | `
Select-Object -ExpandProperty Id
$storageAccount = Get-AzStorageAccount `
-ResourceGroupName $resourceGroupName `
-Name $storageAccountName
$privateEndpoint = Get-AzPrivateEndpoint | `
    Where-Object {
        $subnets = $_ | `
            Select-Object -ExpandProperty Subnet | `
            Where-Object { $_.Id -eq $subnetId }
        $connections = $_ | `
            Select-Object -ExpandProperty PrivateLinkServiceConnections | `
            Where-Object { $_.PrivateLinkServiceId -eq $storageAccount.Id }
        ($null -ne $subnets) -and ($null -ne $connections)
    }
$rootcert = New-SelfSignedCertificate `
-Type Custom `
-KeySpec Signature `
-Subject $rootcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-KeyUsageProperty Sign `
-KeyUsage CertSign
Export-Certificate `
-Cert $rootcert `
-FilePath $exportedencodedrootcertpath `
-NoClobber | Out-Null
certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null
$rawRootCertificate = Get-Content -Path $exportedrootcertpath
[System.String]$rootCertificate = ""
foreach($line in $rawRootCertificate) {
if ($line -notlike "*Certificate*") {
$rootCertificate += $line
}
}
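The loop above drops the -----BEGIN/END CERTIFICATE----- markers so that only the Base64 payload is passed as PublicCertData. The same transformation in shell, run against a hypothetical exported certificate (the Base64 body below is fake, purely for illustration):

```shell
# Hypothetical stand-in for the Base64-encoded certificate export.
cat > P2SRootCert.cer <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBBjCBrAIBATANBgkq
aGVsbG8gd29ybGQhIQ==
-----END CERTIFICATE-----
EOF

# Keep only lines that don't contain "CERTIFICATE", joined into one string,
# mirroring the PowerShell foreach above.
rootCertificate=$(grep -v 'CERTIFICATE' P2SRootCert.cer | tr -d '\n')
echo "$rootCertificate"
```

The resulting single-line Base64 string is what the gateway expects for its client root certificate data.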
NOTE
Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this
PowerShell script will block until the deployment completes. This is expected.
$vpnName = "<desired-vpn-name-here>"
$publicIpAddressName = "$vpnName-PublicIP"
$publicIPAddress = New-AzPublicIpAddress `
-ResourceGroupName $resourceGroupName `
-Name $publicIpAddressName `
-Location $region `
-Sku Basic `
-AllocationMethod Dynamic
$gatewayIpConfig = New-AzVirtualNetworkGatewayIpConfig `
-Name "vnetGatewayConfig" `
-SubnetId $gatewaySubnet.Id `
-PublicIpAddressId $publicIPAddress.Id
$azRootCertificate = New-AzVpnClientRootCertificate `
-Name "P2SRootCert" `
-PublicCertData $rootCertificate
$vpn = New-AzVirtualNetworkGateway `
-ResourceGroupName $resourceGroupName `
-Name $vpnName `
-Location $region `
-GatewaySku VpnGw1 `
-GatewayType Vpn `
-VpnType RouteBased `
-IpConfigurations $gatewayIpConfig `
-VpnClientAddressPool "172.16.201.0/24" `
-VpnClientProtocol IkeV2 `
-VpnClientRootCertificates $azRootCertificate
$vpnClientConfiguration = New-AzVpnClientConfiguration `
-ResourceGroupName $resourceGroupName `
-Name $vpnName `
-AuthenticationMethod EAPTLS
Invoke-WebRequest `
-Uri $vpnClientConfiguration.VpnProfileSASUrl `
-OutFile "$vpnTemp\vpnclientconfiguration.zip"
Expand-Archive `
-Path "$vpnTemp\vpnclientconfiguration.zip" `
-DestinationPath "$vpnTemp\vpnclientconfiguration"
$vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"
$vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile
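The line above reads the gateway's public FQDN out of the downloaded VpnSettings.xml. For comparison, the same value can be pulled with standard shell tools; the minimal XML below is a hypothetical stand-in for the real profile file, which contains many more elements:

```shell
# Hypothetical minimal VpnSettings.xml (real files contain more elements).
cat > VpnSettings.xml <<'EOF'
<VpnProfile>
  <VpnServer>azuregateway-0000.vpn.azure.com</VpnServer>
</VpnProfile>
EOF

# Extract the text between the <VpnServer> tags.
vpnServer=$(sed -n 's:.*<VpnServer>\(.*\)</VpnServer>.*:\1:p' VpnSettings.xml)
echo "$vpnServer"
```

The extracted FQDN is the server address the VPN client must dial.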
$clientcert = New-SelfSignedCertificate `
-Type Custom `
-DnsName $vpnProfile.VpnServer `
-KeySpec Signature `
-Subject $clientcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-Signer $rootcert `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
Export-PfxCertificate `
-FilePath $exportedclientcertpath `
-Password $mypwd `
-Cert $clientcert | Out-Null
$sessions = [System.Management.Automation.Runspaces.PSSession[]]@()
$sessions += New-PSSession -ComputerName "<computer1>"
$sessions += New-PSSession -ComputerName "<computer2>"
foreach ($session in $sessions) {
Copy-Item `
    -Path $exportedclientcertpath, $exportedrootcertpath, "$vpnTemp\vpnclientconfiguration.zip" `
    -Destination $vpnTemp `
    -ToSession $session
Invoke-Command `
-Session $session `
-ArgumentList `
$mypwd, `
$vpnTemp, `
$virtualNetworkName `
-ScriptBlock {
$mypwd = $args[0]
$vpnTemp = $args[1]
$virtualNetworkName = $args[2]
Import-PfxCertificate `
-Exportable `
-Password $mypwd `
-CertStoreLocation "Cert:\LocalMachine\My" `
-FilePath "$vpnTemp\P2SClientCert.pfx" | Out-Null
Import-Certificate `
-FilePath "$vpnTemp\P2SRootCert.cer" `
-CertStoreLocation "Cert:\LocalMachine\Root" | Out-Null
Expand-Archive `
-Path "$vpnTemp\vpnclientconfiguration.zip" `
-DestinationPath "$vpnTemp\vpnclientconfiguration"
$vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"
$vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile
Add-VpnConnection `
-Name $virtualNetworkName `
-ServerAddress $vpnProfile.VpnServer `
-TunnelType Ikev2 `
-EncryptionLevel Required `
-AuthenticationMethod MachineCertificate `
-SplitTunneling `
-AllUserConnection
Add-VpnConnectionRoute `
-Name $virtualNetworkName `
-DestinationPrefix $vpnProfile.Routes `
-AllUserConnection
Add-VpnConnectionRoute `
-Name $virtualNetworkName `
-DestinationPrefix $vpnProfile.VpnClientAddressPool `
-AllUserConnection
rasdial $virtualNetworkName
}
}
$myShareToMount = "<file-share>"
$storageAccountKeys = Get-AzStorageAccountKey `
-ResourceGroupName $resourceGroupName `
-Name $storageAccountName
$storageAccountKey = ConvertTo-SecureString `
-String $storageAccountKeys[0].Value `
-AsPlainText `
-Force
Invoke-Command `
-Session $sessions `
-ArgumentList `
$storageAccountName, `
$storageAccountKey, `
$storageAccountPrivateIP, `
$myShareToMount `
-ScriptBlock {
$storageAccountName = $args[0]
$storageAccountKey = $args[1]
$storageAccountPrivateIP = $args[2]
$myShareToMount = $args[3]
$credential = [System.Management.Automation.PSCredential]::new(
"AZURE\$storageAccountName",
$storageAccountKey)
New-PSDrive `
-Name Z `
-PSProvider FileSystem `
-Root "\\$storageAccountPrivateIP\$myShareToMount" `
-Credential $credential `
-Persist | Out-Null
Get-ChildItem -Path Z:\
Remove-PSDrive -Name Z
}
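The script above builds the credential as AZURE\&lt;storage-account-name&gt; with the storage account key as the password. On a Linux client, the analogous step is a mount.cifs credentials file; this sketch only writes the file (the account name and key are placeholders), and the mount command itself is left as a comment because it needs root privileges and network access:

```shell
storageAccountName="mystorageacct"          # placeholder
storageAccountKey="<storage-account-key>"   # placeholder; keep this value secret
credFile="$(mktemp)"

# Write the credentials file that mount.cifs reads via -o credentials=...
cat > "$credFile" <<EOF
username=$storageAccountName
password=$storageAccountKey
EOF
chmod 600 "$credFile"

# The mount itself would then be (requires root and cifs-utils; not run here):
#   mount -t cifs //<private-ip>/<share> /mnt/share -o credentials=$credFile,serverino
grep -c '^username=' "$credFile"
```

Restricting the file to mode 600 keeps the account key from being readable by other local users.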
$rootcertname = "CN=$NewRootCertName"
$certLocation = "Cert:\CurrentUser\My"
$date = get-date -Format "MM_yyyy"
$vpnTemp = "C:\vpn-temp_$date\"
$exportedencodedrootcertpath = $vpnTemp + "P2SRootCertencoded.cer"
$exportedrootcertpath = $vpnTemp + "P2SRootCert.cer"
$rootcert = New-SelfSignedCertificate `
-Type Custom `
-KeySpec Signature `
-Subject $rootcertname `
-KeyExportPolicy Exportable `
-HashAlgorithm sha256 `
-KeyLength 2048 `
-CertStoreLocation $certLocation `
-KeyUsageProperty Sign `
-KeyUsage CertSign
Export-Certificate `
-Cert $rootcert `
-FilePath $exportedencodedrootcertpath `
-NoClobber | Out-Null
certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null
$rawRootCertificate = Get-Content -Path $exportedrootcertpath
[System.String]$rootCertificate = ""
foreach($line in $rawRootCertificate) {
if ($line -notlike "*Certificate*") {
$rootCertificate += $line
}
}
#Fetching gateway details and adding the newly created Root Certificate.
$gateway = Get-AzVirtualNetworkGateway -Name $vpnName -ResourceGroupName $ResourceGroupName
Add-AzVpnClientRootCertificate `
-PublicCertData $rootCertificate `
-ResourceGroupName $ResourceGroupName `
-VirtualNetworkGatewayName $gateway.Name `
-VpnClientRootCertificateName $NewRootCertName
See also
Networking considerations for direct Azure file share access
Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files
Configure a Site-to-Site (S2S) VPN for use with Azure Files
Configure a Point-to-Site (P2S) VPN on Linux for
use with Azure Files
5/20/2022
You can use a Point-to-Site (P2S) VPN connection to mount your Azure file shares from outside of Azure,
without sending data over the open internet. A Point-to-Site VPN connection is a VPN connection between Azure
and an individual client. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be
configured for each client that wants to connect. If you have many clients that need to connect to your Azure file
shares from your on-premises network, you can use a Site-to-Site (S2S) VPN connection instead of a Point-to-
Site connection for each client. To learn more, see Configure a Site-to-Site VPN for use with Azure Files.
We strongly recommend that you read the Azure Files networking overview before continuing with this how-to
article for a complete discussion of the networking options available for Azure Files.
The article details the steps to configure a Point-to-Site VPN on Linux to mount Azure file shares directly on-
premises.
Applies to
FILE SHARE TYPE    SMB    NFS
Prerequisites
The most recent version of the Azure CLI. For more information on how to install the Azure CLI, see Install
the Azure CLI and select your operating system. If you prefer to use the Azure PowerShell module on
Linux, you may; however, the instructions below are presented for the Azure CLI.
An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage
accounts, which are management constructs that represent a shared pool of storage in which you can
deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can
learn more about how to deploy Azure file shares and storage accounts in Create an Azure file share.
A private endpoint for the storage account containing the Azure file share you want to mount on-
premises. To learn more about how to create a private endpoint, see Configuring Azure Files network
endpoints.
installDir="/etc/"
region="<region>"
resourceGroupName="<resource-group>"
virtualNetworkName="<desired-vnet-name>"
mkdir temp
cd temp
sudo ipsec pki --gen --size 4096 --outform pem > "clientKey.pem"
sudo ipsec pki --pub --in "clientKey.pem" | \
sudo ipsec pki \
--issue \
--cacert rootCert.pem \
--cakey rootKey.pem \
--dn "CN=$username" \
--san $username \
--flag clientAuth \
--outform pem > "clientCert.pem"
openssl pkcs12 -in "clientCert.pem" -inkey "clientKey.pem" -certfile rootCert.pem \
    -export -out "client.p12" -password "pass:$password"
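If strongSwan's ipsec pki tool isn't available, a comparable root and client certificate pair can be produced with plain openssl. This is a sketch under the assumption that openssl is installed; extensions such as the clientAuth EKU, which ipsec pki adds via --flag clientAuth, are omitted for brevity:

```shell
workdir="$(mktemp -d)"
cd "$workdir"

# Self-signed root CA (stand-in for rootCert.pem / rootKey.pem above).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout rootKey.pem -out rootCert.pem -subj "/CN=P2SRootCert"

# Client key and certificate request, then sign it with the root.
openssl req -newkey rsa:2048 -nodes \
  -keyout clientKey.pem -out client.csr -subj "/CN=client1"
openssl x509 -req -in client.csr -CA rootCert.pem -CAkey rootKey.pem \
  -CAcreateserial -days 365 -out clientCert.pem 2>/dev/null

echo "created rootCert.pem and clientCert.pem"
```

The resulting PEM files can then feed the same openssl pkcs12 export shown above.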
NOTE
Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this
bash script will block until the deployment completes.
P2S IKEv2/OpenVPN connections are not supported with the Basic SKU. This script uses the VpnGw1 SKU for the virtual
network gateway, accordingly.
vpnName="<desired-vpn-name-here>"
publicIpAddressName="$vpnName-PublicIP"
echo ": P12 client.p12 '$password'" | sudo tee -a "${installDir}ipsec.secrets" > /dev/null
See also
Azure Files networking overview
Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files
Configure a Site-to-Site (S2S) VPN for use with Azure Files
Overview - on-premises Active Directory Domain
Services authentication over SMB for Azure file
shares
5/20/2022
Azure Files supports identity-based authentication over Server Message Block (SMB) through two types of
Domain Services: on-premises Active Directory Domain Services (AD DS) and Azure Active Directory Domain
Services (Azure AD DS). We strongly recommend you review the How it works section to select the right
domain service for authentication. The setup differs depending on the domain service you choose. This
series of articles focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
If you are new to Azure file shares, we recommend reading our planning guide before reading the following
series of articles.
Applies to
FILE SHARE TYPE    SMB    NFS
Videos
To help you set up Azure Files AD authentication for some common use cases, we published two videos with
step-by-step guidance for the following scenarios:
Prerequisites
Before you enable AD DS authentication for Azure file shares, make sure you have completed the following
prerequisites:
Select or create your AD DS environment and sync it to Azure AD with Azure AD Connect.
You can enable the feature on a new or existing on-premises AD DS environment. Identities used for
access must be synced to Azure AD or use a default share-level permission. The Azure AD tenant and the
file share that you are accessing must be associated with the same subscription.
Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how
to domain-join, refer to Join a Computer to a Domain.
If your machine is not domain-joined to an AD DS, you may still be able to leverage AD credentials for
authentication if your machine has line of sight to the AD domain controller.
Select or create an Azure storage account. For optimal performance, we recommend that you deploy the
storage account in the same region as the client from which you plan to access the share. Then, mount
the Azure file share with your storage account key. Mounting with the storage account key verifies
connectivity.
Make sure that the storage account containing your file shares is not already configured for Azure AD DS
authentication. If Azure AD DS authentication is enabled on the storage account, it needs to be
disabled before changing to use on-premises AD DS. This implies that existing ACLs configured in the Azure
AD DS environment will need to be reconfigured for proper permission enforcement.
If you experience issues in connecting to Azure Files, refer to the troubleshooting tool we published for
Azure Files mounting errors on Windows.
Make any relevant networking configuration prior to enabling and configuring AD DS authentication to
your Azure file shares. See Azure Files networking considerations for more information.
Regional availability
Azure Files authentication with AD DS is available in all Azure Public, China, and Government regions.
Overview
If you plan to enable any networking configurations on your file share, we recommend reading the
networking considerations article and completing the related configuration before enabling AD DS authentication.
Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares
with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow
granular access control. Doing this requires syncing identities from on-premises AD DS to Azure AD with Azure
AD Connect. You control share-level access with identities synced to Azure AD while managing file/directory-level
access with on-premises AD DS credentials.
Next, follow the steps below to set up Azure Files for AD DS Authentication:
1. Part one: enable AD DS authentication on your storage account
2. Part two: assign access permissions for a share to the Azure AD identity (a user, group, or service
principal) that is in sync with the target AD identity
3. Part three: configure Windows ACLs over SMB for directories and files
4. Part four: mount an Azure file share to a VM joined to your AD DS
5. Update the password of your storage account identity in AD DS
The following diagram illustrates the end-to-end workflow for enabling Azure AD authentication over SMB for
Azure file shares.
Identities used to access Azure file shares must be synced to Azure AD to enforce share level file permissions
through the Azure role-based access control (Azure RBAC) model. Alternatively, you can use a default share-
level permission. Windows-style DACLs on files/directories carried over from existing file servers will be
preserved and enforced. This offers seamless integration with your enterprise AD DS environment. As you
replace on-prem file servers with Azure file shares, existing users can access Azure file shares from their current
clients with a single sign-on experience, without any change to the credentials in use.
Next steps
To enable on-premises AD DS authentication for your Azure file share, continue to the next article:
Part one: enable AD DS authentication for your account
Part one: enable AD DS authentication for your
Azure file shares
5/20/2022
This article describes the process for enabling Active Directory Domain Services (AD DS) authentication on your
storage account. After enabling the feature, you must configure your storage account and your AD DS, to use AD
DS credentials for authenticating to your Azure file share.
IMPORTANT
Before you enable AD DS authentication, make sure you understand the supported scenarios and requirements in the
overview article and complete the necessary prerequisites.
To enable AD DS authentication over SMB for Azure file shares, you need to register your storage account with
AD DS and then set the required domain properties on the storage account. To register your storage account
with AD DS, create an account representing it in your AD DS. You can think of this process as if it were like
creating an account representing an on-premises Windows file server in your AD DS. When the feature is
enabled on the storage account, it applies to all new and existing file shares in the account.
Applies to
FILE SHARE TYPE    SMB    NFS
IMPORTANT
The domain join cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to
register it as a computer account or a service logon account; see the FAQ for details. Computer accounts have a default
password expiration age of 30 days set in AD. Similarly, the service logon account may have a default password expiration
age set on the AD domain or Organizational Unit (OU). For both account types, we recommend that you check the password
expiration age configured in your AD environment and plan to update the password of the AD account representing your
storage account before the maximum password age is reached. You can consider creating a new AD Organizational Unit (OU)
and disabling the password expiration policy on computer accounts or service logon accounts accordingly.
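The rotation deadline described above is simple date arithmetic. A sketch using GNU date with example values (your password-last-set date and maximum password age will differ):

```shell
# Example values only: when the AD account's password was last set, and the
# domain's maximum password age.
pwdLastSet="2022-05-01"
maxPasswordAgeDays=30

# Compute the date by which the storage account identity's password
# must be rotated (requires GNU date for the relative-date syntax).
rotateBy=$(date -u -d "$pwdLastSet + $maxPasswordAgeDays days" +%Y-%m-%d)
echo "rotate the storage account identity's password by: $rotateBy"
```

Scheduling the rotation a few days before this date leaves slack for failures.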
# Navigate to where AzFilesHybrid is unzipped and stored and run to copy the files into your path
.\CopyToPSPath.ps1
# Login with an Azure AD credential that has either storage account owner or contributor Azure role assignment
# If you are logging into an Azure environment other than Public (ex. AzureUSGovernment) you will need to specify that.
# See https://docs.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps for more information.
Connect-AzAccount
# Define parameters
# $StorageAccountName is the name of an existing storage account that you want to join to AD
# $SamAccountName is an AD object; see https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname for more information.
# If you want to use AES256 encryption (recommended), the storage account name must be the same as the computer object's SamAccountName, except for the trailing '$'.
$SubscriptionId = "<your-subscription-id-here>"
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"
$SamAccountName = "<sam-account-name-here>"
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as ComputerAccount
# If you don't provide the OU name as an input parameter, the AD identity that represents the storage account is created under the root directory.
$OuDistinguishedName = "<ou-distinguishedname-here>"
# Specify the encryption algorithm used for Kerberos authentication. Using AES256 is recommended.
$EncryptionType = "<AES256|RC4|AES256,RC4>"

# Register the target storage account with your active directory environment under the target OU (for example: specify the OU with Name as "UserAccounts" or DistinguishedName as "OU=UserAccounts,DC=CONTOSO,DC=COM").
# You can use the Get-ADOrganizationalUnit PowerShell cmdlet to find the Name and DistinguishedName of your target OU. If you are using the OU Name, specify it with -OrganizationalUnitName as shown below. If you are using the OU DistinguishedName, you can set it with -OrganizationalUnitDistinguishedName. You only need to provide one of the two to specify the target OU.
# You can choose to create the identity that represents the storage account as either a Service Logon Account or a Computer Account (the default), depending on the AD permissions you have and your preference.
# Run Get-Help Join-AzStorageAccountForAuth for more details on this cmdlet.
Join-AzStorageAccount `
-ResourceGroupName $ResourceGroupName `
-StorageAccountName $StorageAccountName `
-SamAccountName $SamAccountName `
-DomainAccountType $DomainAccountType `
-OrganizationalUnitDistinguishedName $OuDistinguishedName `
-EncryptionType $EncryptionType
# Run the command below to enable AES256 encryption. If you plan to use RC4, you can skip this step.
Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName

# You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged-on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+. For more details on the checks performed in this cmdlet, see the Azure Files Windows troubleshooting guide.
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose
# Create the Kerberos key on the storage account and get the Kerb1 key as the password for the AD identity to represent the storage account
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"
Once you have that key, create either a service or computer account under your OU. Use the following
specification (remember to replace the example text with your storage account name):

SPN: "cifs/your-storage-account-name-here.file.core.windows.net"
Password: Kerberos key for your storage account
If your OU enforces password expiration, you must update the password before the maximum password age to
prevent authentication failures when accessing Azure file shares. See Update the password of your storage
account identity in AD for details.
Keep the SID of the newly created identity; you'll need it for the next step. The identity you've created that
represents the storage account doesn't need to be synced to Azure AD.
Enable the feature on your storage account
Modify the following command to include the configuration details for the domain properties, then run it to
enable the feature. The storage account SID required in the command is the SID of the identity you created in
your AD DS in the previous section.
# Set the feature flag on the target storage account and provide the required AD domain information
Set-AzStorageAccount `
-ResourceGroupName "<your-resource-group-name-here>" `
-Name "<your-storage-account-name-here>" `
-EnableActiveDirectoryDomainServicesForFile $true `
-ActiveDirectoryDomainName "<your-domain-dns-root-here>" `
-ActiveDirectoryNetBiosDomainName "<your-netbios-domain-name-here>" `
-ActiveDirectoryForestName "<your-forest-name-here>" `
-ActiveDirectoryDomainGuid "<your-guid-here>" `
-ActiveDirectoryDomainsid "<your-domain-sid-here>" `
-ActiveDirectoryAzureStorageSid "<your-storage-account-sid>"
After you've run that cmdlet, replace <domain-object-identity> in the following script with your value, then run
the script to refresh your domain object password:
$KeyName = "kerb1" # Could be either the first or second Kerberos key; this script assumes we're refreshing the first
$KerbKeys = New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -KeyName $KeyName
$KerbKey = $KerbKeys.Keys | Where-Object { $_.KeyName -eq $KeyName } | Select-Object -ExpandProperty Value
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
Debugging
You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration
with the logged-on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+. For more information on the
checks performed in this cmdlet, see Unable to mount Azure Files with AD credentials in the troubleshooting
guide for Windows.
# List the directory domain information if the storage account has enabled AD DS authentication for file shares
$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
DomainName:<yourDomainHere>
NetBiosDomainName:<yourNetBiosDomainNameHere>
ForestName:<yourForestNameHere>
DomainGuid:<yourGUIDHere>
DomainSid:<yourSIDHere>
AzureStorageID:<yourStorageSIDHere>
Next steps
You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-
level permissions. Continue to the next section.
Part two: assign share-level permissions to an identity
Part two: assign share-level permissions to an
identity
5/20/2022
Before you begin this article, make sure you've completed the previous article, Enable AD DS authentication for
your account.
Once you've enabled Active Directory Domain Services (AD DS) authentication on your storage account, you
must configure share-level permissions in order to get access to your file shares. There are two ways you can
assign share-level permissions: assign them to specific Azure AD users or user groups, or assign them to all
authenticated identities as a default share-level permission.
IMPORTANT
Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage
account key. Administrative control is not supported with Azure AD credentials.
Applies to
FILE SHARE TYPE    SMB    NFS
Share-level permissions
The following table lists the share-level permissions and how they align with the built-in RBAC roles:
Storage File Data SMB Share Reader: Allows for read access to files and directories in Azure file shares. This role is analogous to a file share ACL of read on Windows file servers. Learn more.
Storage File Data SMB Share Contributor: Allows for read, write, and delete access on files and directories in Azure file shares. Learn more.
Storage File Data SMB Share Elevated Contributor: Allows for read, write, delete, and modify ACLs on files and directories in Azure file shares. This role is analogous to a file share ACL of change on Windows file servers. Learn more.
IMPORTANT
Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (*)
character. If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are
granted access for all possible data actions. This means that all such identities will also be granted any new data action
added to the platform. The additional access and permissions granted through new actions or data actions may be
unwanted behavior for customers using wildcards. To mitigate any unintended future impact, we highly recommend
declaring actions and data actions explicitly as opposed to using the wildcard.
IMPORTANT
Share-level permissions can take up to three hours to take effect. Please wait for the permissions to
sync before connecting to your file share with your credentials.
Portal
Azure PowerShell
Azure CLI
To assign an Azure role to an Azure AD identity, using the Azure portal, follow these steps:
1. In the Azure portal, go to your file share, or create a file share.
2. Select Access Control (IAM) .
3. Select Add a role assignment .
4. In the Add role assignment blade, select the appropriate built-in role from the Role list.
a. Storage File Data SMB Share Reader
b. Storage File Data SMB Share Contributor
c. Storage File Data SMB Share Elevated Contributor
5. Leave Assign access to at the default setting: Azure AD user, group, or service principal . Select the
target Azure AD identity by name or email address. The selected Azure AD identity must be a hybrid
identity and cannot be a cloud-only identity. This means that the same identity is also represented in
AD DS.
6. Select Save to complete the role assignment operation.
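The same assignment can also be done from the command line against the file share's scope. The sketch below only builds that scope string; every value is a placeholder, and the az command in the comment assumes the Azure CLI is installed and signed in:

```shell
# Sketch: build the resource ID (scope) for a share-level role assignment.
# All values are placeholders; pass the result to, e.g.:
#   az role assignment create --role "Storage File Data SMB Share Contributor" \
#     --assignee <user-principal-name> --scope "$scope"
sub="00000000-0000-0000-0000-000000000000"   # subscription ID (placeholder)
rg="myResourceGroup"                         # resource group (placeholder)
account="mystorageaccount"                   # storage account (placeholder)
share="myshare"                              # file share (placeholder)
scope="/subscriptions/${sub}/resourceGroups/${rg}/providers/Microsoft.Storage/storageAccounts/${account}/fileServices/default/fileshares/${share}"
echo "$scope"
```

The scope pins the assignment to a single file share rather than the whole storage account, which is the narrowest grant the built-in SMB share roles support.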
Portal
Azure PowerShell
Azure CLI
You can't currently assign permissions to the storage account with the Azure portal. Use either the Azure PowerShell module or the Azure CLI instead.
Next steps
Now that you've assigned share-level permissions, you must configure directory and file-level permissions.
Continue to the next article.
Part three: configure directory and file-level permissions over SMB
Part three: configure directory and file-level permissions over SMB
5/20/2022 • 6 minutes to read • Edit Online
Before you begin this article, make sure you completed the previous article, Assign share-level permissions to
an identity to ensure that your share-level permissions are in place.
After you assign share-level permissions with Azure RBAC, you must configure proper Windows ACLs at the root, directory, or file level to take advantage of granular access control. The Azure RBAC share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, while the Windows ACLs operate at a more granular level to control which operations the user can perform at the directory or file level. Both share-level and file/directory-level permissions are enforced when a user attempts to access a file or directory, so if they differ, only the most restrictive one applies. For example, if a user has read/write access at the file level but only read at the share level, they can only read that file. The same is true in reverse: a user with read/write access at the share level but only read at the file level can still only read the file.
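The most-restrictive rule described above can be sketched as a tiny check. This is illustrative only, not an Azure tool; the permission names are simplified placeholders:

```shell
# Illustrative sketch: the effective access is the more restrictive of the
# share-level (Azure RBAC) and file-level (NTFS) permissions.
share_level="read"        # e.g., via the Storage File Data SMB Share Reader role
file_level="read-write"   # e.g., via an NTFS ACL granting read/write
if [ "$share_level" = "read-write" ] && [ "$file_level" = "read-write" ]; then
  effective="read-write"  # write is allowed only when both levels allow it
else
  effective="read"        # either level restricts access to read-only
fi
echo "effective access: $effective"
```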
Applies to
FILE SHARE TYPE | SMB | NFS

BUILT-IN ROLE | NTFS PERMISSION | RESULTING ACCESS
Storage File Data SMB Share Reader | Full control, Modify, Read, Write, Execute | Read & execute
Storage File Data SMB Share Reader | Read | Read
Storage File Data SMB Share Contributor | Full control | Modify, Read, Write, Execute
Storage File Data SMB Share Contributor | Modify | Modify
Storage File Data SMB Share Contributor | Read | Read
Storage File Data SMB Share Contributor | Write | Write
Storage File Data SMB Share Elevated Contributor | Full control | Modify, Read, Write, Edit (Change permissions), Execute
Storage File Data SMB Share Elevated Contributor | Modify | Modify
Storage File Data SMB Share Elevated Contributor | Read | Read
Storage File Data SMB Share Elevated Contributor | Write | Write
Supported permissions
Azure Files supports the full set of basic and advanced Windows ACLs. You can view and configure Windows
ACLs on directories and files in an Azure file share by mounting the share and then using Windows File Explorer,
running the Windows icacls command, or the Set-Acl PowerShell cmdlet.
To configure ACLs with superuser permissions, you must mount the share by using your storage account key
from your domain-joined VM. Follow the instructions in the next section to mount an Azure file share from the
command prompt and to configure Windows ACLs.
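The key-based mount that this step refers to generally takes the following shape on Windows. Every angle-bracket value is a placeholder, and the /user name is the storage account name prefixed with Azure\:

```shell
net use Z: \\<storage-account-name>.file.core.windows.net\<share-name> /user:Azure\<storage-account-name> <storage-account-key>
```

Mounting with the account key grants superuser rights on the share, which is what allows ACLs to be set on objects you don't own.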
The following permissions are included on the root directory of a file share:
BUILTIN\Administrators:(OI)(CI)(F)
BUILTIN\Users:(RX)
BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
NT AUTHORITY\Authenticated Users:(OI)(CI)(M)
NT AUTHORITY\SYSTEM:(OI)(CI)(F)
NT AUTHORITY\SYSTEM:(F)
CREATOR OWNER:(OI)(CI)(IO)(F)
USERS | DEFINITION
NT AUTHORITY\Authenticated Users | All users in AD that can get a valid Kerberos token.
CREATOR OWNER | Each object (directory or file) has an owner. If ACLs are assigned to CREATOR OWNER on an object, then the user that owns the object has the permissions defined by that ACL.
NOTE
You may see the Full Control ACL already applied to a role. This typically offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file level), this is restricted. Only users who have the SMB Elevated Contributor role and create a new file or folder can assign permissions on those specific new files or folders without using the storage account key. All other permission assignment requires mounting the share with the storage account key first.
If you experience issues in connecting to Azure Files, refer to the troubleshooting tool we published for Azure
Files mounting errors on Windows.
Next steps
Now that the feature is enabled and configured, continue to the next article, where you mount your Azure file
share from a domain-joined VM.
Part four: mount a file share from a domain-joined VM
Part four: mount a file share from a domain-joined
VM
5/20/2022 • 2 minutes to read • Edit Online
Before you begin this article, make sure you complete the previous article, Configure directory and file-level permissions over SMB.
The process described in this article verifies that your file share and access permissions are set up correctly and
that you can access an Azure File share from a domain-joined VM. Share-level Azure role assignment can take
some time to take effect.
Sign in to the client by using the credentials that you granted permissions to.
Applies to
FILE SHARE TYPE | SMB | NFS
Mounting prerequisites
Before you can mount the file share, make sure you've met the following prerequisites:
If you are mounting the file share from a client that has previously mounted it using your storage account key, make sure that you have disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on clearing a share mounted with the storage account key, refer to the FAQ page.
Your client must have line of sight to your AD DS. If your machine or VM is outside the network managed by your AD DS, you will need a VPN to reach AD DS for authentication.
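Disconnecting a previously key-mounted share and removing its persisted credentials is typically done on Windows with net use and cmdkey. The drive letter and target name below are illustrative:

```shell
net use Z: /delete
cmdkey /list
cmdkey /delete:Domain:target=<storage-account-name>.file.core.windows.net
```

cmdkey /list shows the exact stored target name to pass to /delete; the form above is a common shape, not a value from this article.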
Replace the placeholder values with your own values, then use the following command to mount the Azure file share. You always need to mount using the path shown below; using a CNAME for file mount is not supported for identity-based authentication (AD DS or Azure AD DS).
# Always mount your share using <storage-account-name>.file.core.windows.net, even if you set up a private endpoint for your share.
$connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
if ($connectTestResult.TcpTestSucceeded)
{
    net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<fileshare-name>
}
else
{
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or ExpressRoute to tunnel SMB traffic over a different port."
}
If you run into issues mounting with AD DS credentials, refer to Unable to mount Azure Files with AD credentials
for guidance.
If mounting your file share succeeded, then you have successfully enabled and configured on-premises AD DS
authentication for your Azure file shares.
Next steps
If the identity you created in AD DS to represent the storage account is in a domain or OU that enforces
password rotation, continue to the next article for instructions on updating your password:
Update the password of your storage account identity in AD DS
Update the password of your storage account
identity in AD DS
5/20/2022 • 2 minutes to read • Edit Online
If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage
account in an organizational unit or domain that enforces password expiration time, you must change the
password before the maximum password age. Your organization may run automated cleanup scripts that delete
accounts once their password expires. Because of this, if you do not change your password before it expires,
your account could be deleted, which will cause you to lose access to your Azure file shares.
To trigger password rotation, you can run the Update-AzStorageAccountADObjectPassword command from the AzFilesHybrid module. The command must be run in an on-premises AD DS-joined environment by a hybrid user with owner permissions to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account and uses it to update the password of the registered account in AD DS. Then it regenerates the target Kerberos key of the storage account and updates the password of the registered account in AD DS.
To prevent password rotation, make sure to place the Azure storage account into a separate organizational unit in AD DS when you onboard it to the domain. Disable Group Policy inheritance on this organizational unit to prevent default domain policies or specific password policies from being applied.
# Update the password of the AD DS account registered for the storage account
# You may use either kerb1 or kerb2
Update-AzStorageAccountADObjectPassword `
-RotateToKerbKey kerb2 `
-ResourceGroupName "<your-resource-group-name-here>" `
-StorageAccountName "<your-storage-account-name-here>"
Applies to
F IL E SH A RE T Y P E SM B NFS
Azure Files supports identity-based authentication over Server Message Block (SMB) through two types of Domain Services: on-premises Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS). We strongly recommend that you review the How it works section to select the right domain service for authentication. The setup differs depending on the domain service you choose. This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
If you are new to Azure file shares, we recommend reading our planning guide before reading the following
series of articles.
NOTE
Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption. We
recommend using AES-256.
Azure Files supports authentication for Azure AD DS with full synchronization with Azure AD. If you have enabled scoped synchronization in Azure AD DS, which only syncs a limited set of identities from Azure AD, authentication and authorization are not supported.
Applies to
FILE SHARE TYPE | SMB | NFS
Prerequisites
Before you enable Azure AD over SMB for Azure file shares, make sure you have completed the following
prerequisites:
1. Select or create an Azure AD tenant.
You can use a new or existing tenant for Azure AD authentication over SMB. The tenant and the file share
that you want to access must be associated with the same subscription.
To create a new Azure AD tenant, you can Add an Azure AD tenant and an Azure AD subscription. If you
have an existing Azure AD tenant but want to create a new tenant for use with Azure file shares, see
Create an Azure Active Directory tenant.
2. Enable Azure AD Domain Services on the Azure AD tenant.
To support authentication with Azure AD credentials, you must enable Azure AD Domain Services for
your Azure AD tenant. If you aren't the administrator of the Azure AD tenant, contact the administrator
and follow the step-by-step guidance to Enable Azure Active Directory Domain Services using the Azure
portal.
It typically takes about 15 minutes for an Azure AD DS deployment to complete. Verify that the health
status of Azure AD DS shows Running , with password hash synchronization enabled, before proceeding
to the next step.
3. Domain-join an Azure VM with Azure AD DS.
To access a file share by using Azure AD credentials from a VM, your VM must be domain-joined to Azure
AD DS. For more information about how to domain-join a VM, see Join a Windows Server virtual
machine to a managed domain.
NOTE
Azure AD DS authentication over SMB with Azure file shares is supported only on Azure VMs running OS versions newer than Windows 7 or Windows Server 2008 R2.
Regional availability
Azure Files authentication with Azure AD DS is available in all Azure Public, Gov, and China regions.
To enable Azure AD DS authentication over SMB with the Azure portal, follow these steps:
1. In the Azure portal, go to your existing storage account, or create a storage account.
2. In the File shares section, select Active directory: Not Configured.
3. Select Azure Active Directory Domain Services, then switch the toggle to Enabled.
4. Select Save.
$storageAccountName = "<InsertStorageAccountNameHere>"
$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
$userObject = Get-ADUser -filter $searchFilter
# 3. Validate that the object now has the expected (AES256) encryption type.
IMPORTANT
Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage
account key. Administrative control is not supported with Azure AD credentials.
You can use the Azure portal, PowerShell, or Azure CLI to assign the built-in roles to the Azure AD identity of a user to grant share-level permissions. Be aware that the share-level Azure role assignment can take some time to take effect.
NOTE
Remember to sync your AD DS credentials to Azure AD if you plan to use your on-premises AD DS for authentication. Password hash sync from AD DS to Azure AD is optional. Share-level permissions will be granted to the Azure AD identity that is synced from your on-premises AD DS.
The general recommendation is to use share-level permissions for high-level access management, assigned to an AD group representing a set of users and identities, and then use NTFS permissions for granular access control at the directory/file level.
Assign an Azure role to an AD identity
IMPORTANT
Assign permissions by explicitly declaring actions and data actions rather than using a wildcard (*) character. If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards.
Portal
PowerShell
Azure CLI
To assign an Azure role to an Azure AD identity, using the Azure portal, follow these steps:
1. In the Azure portal, go to your file share, or create a file share.
2. Select Access Control (IAM) .
3. Select Add a role assignment.
4. In the Add role assignment blade, select the appropriate built-in role (Storage File Data SMB Share Reader or Storage File Data SMB Share Contributor) from the Role list. Leave Assign access to at the default setting: Azure AD user, group, or service principal. Select the target Azure AD identity by name or email address.
5. Select Save to complete the role assignment operation.
If you experience issues in connecting to Azure Files, please refer to the troubleshooting tool we published for
Azure Files mounting errors on Windows.
Configure NTFS permissions with Windows File Explorer
Use Windows File Explorer to grant full permission to all directories and files under the file share, including the
root directory.
1. Open Windows File Explorer, right-click the file or directory, and select Properties.
2. Select the Security tab.
3. Select Edit... to change permissions.
4. You can change the permissions of existing users or select Add... to grant permissions to new users.
5. In the prompt window for adding new users, enter the target user name you want to grant permission to in
the Enter the object names to select box, and select Check Names to find the full UPN name of the
target user.
6. Select OK .
7. In the Security tab, select all permissions you want to grant your new user.
8. Select Apply .
Configure NTFS permissions with icacls
Use the following Windows command to grant full permissions to all directories and files under the file share,
including the root directory. Remember to replace the placeholder values in the example with your own values.
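The Windows command referenced above did not survive in this copy; it is likely of the following form, where the mounted drive letter and user UPN are placeholders:

```shell
icacls <mounted-drive-letter>: /grant <user-upn>:(f)
```

(f) grants full control; see the icacls reference linked below for the other permission letters and inheritance flags such as (OI) and (CI).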
For more information on how to use icacls to set NTFS permissions and on the different types of supported
permissions, see the command-line reference for icacls.
You have now successfully enabled Azure AD DS authentication over SMB and assigned a custom role that
provides access to an Azure file share with an Azure AD identity. To grant additional users access to your file
share, follow the instructions in the Assign access permissions to use an identity and Configure NTFS
permissions over SMB sections.
Next steps
For more information about Azure Files and how to use Azure AD over SMB, see these resources:
Overview of Azure Files identity-based authentication support for SMB access
FAQ
Managing Storage in the Azure independent clouds
using PowerShell
5/20/2022 • 3 minutes to read • Edit Online
Most people use the Azure Public Cloud for their global Azure deployment. There are also some independent deployments of Microsoft Azure for reasons such as sovereignty. These independent deployments are referred to as "environments." The following list details the independent clouds currently available.
Azure Government Cloud
Azure China Cloud, operated by 21Vianet in China
Azure German Cloud
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Log in to Azure
Run the Get-AzEnvironment cmdlet to see the available Azure environments:
Get-AzEnvironment
Sign in to your account that has access to the cloud to which you want to connect and set the environment. This example shows how to sign in to an account that uses the Azure Government Cloud:
Connect-AzAccount -Environment AzureUSGovernment
To access the China Cloud, use the environment AzureChinaCloud. To access the German Cloud, use AzureGermanCloud.
At this point, if you need the list of locations to create a storage account or another resource, you can query the
locations available for the selected cloud using Get-AzLocation.
Get-AzLocation | select Location, DisplayName
The following table shows the locations returned for the German cloud.
LOCATION | DISPLAY NAME
Endpoint suffix
The endpoint suffix for each of these environments is different from the Azure Public endpoint. For example, the
blob endpoint suffix for Azure Public is blob.core.windows.net . For the Government Cloud, the blob endpoint
suffix is blob.core.usgovcloudapi.net .
Get endpoint using Get-AzEnvironment
Retrieve the endpoint suffix using Get-AzEnvironment. The endpoint is the StorageEndpointSuffix property of
the environment.
The following code snippets show how to retrieve the endpoint suffix. All of these commands return something like "core.windows.net" or "core.cloudapi.de". Append the suffix to the storage service name to access that service. For example, "queue.core.cloudapi.de" will access the queue service in the German Cloud.
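The suffix composition described above can be sketched as follows; the suffix value used here is the German cloud's StorageEndpointSuffix, and the loop simply concatenates each service name onto it:

```shell
# Illustrative: build per-service endpoints from an environment's storage
# endpoint suffix (the value reported by Get-AzEnvironment).
suffix="core.cloudapi.de"   # StorageEndpointSuffix for AzureGermanCloud
for service in blob file queue table; do
  echo "${service}.${suffix}"
done
```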
This code snippet retrieves all of the environments and the endpoint suffix for each one:
Get-AzEnvironment | select Name, StorageEndpointSuffix
NAME | STORAGE ENDPOINT SUFFIX
AzureChinaCloud | core.chinacloudapi.cn
AzureCloud | core.windows.net
AzureGermanCloud | core.cloudapi.de
AzureUSGovernment | core.usgovcloudapi.net
To retrieve all of the properties for the specified environment, call Get-AzEnvironment and specify the cloud
name. This code snippet returns a list of properties; look for StorageEndpointSuffix in the list. The following
example is for the German Cloud.
PROPERTY NAME | VALUE
Name | AzureGermanCloud
EnableAdfsAuthentication | False
ActiveDirectoryServiceEndpointResourceId | http://management.core.cloudapi.de/
GalleryURL | https://gallery.cloudapi.de/
ManagementPortalUrl | https://portal.microsoftazure.de/
ServiceManagementUrl | https://manage.core.cloudapi.de/
PublishSettingsFileUrl | https://manage.microsoftazure.de/publishsettings/index
ResourceManagerUrl | http://management.microsoftazure.de/
SqlDatabaseDnsSuffix | .database.cloudapi.de
StorageEndpointSuffix | core.cloudapi.de
... | ...
To retrieve just the storage endpoint suffix property, retrieve the specific cloud and ask for only that property:
(Get-AzEnvironment -Name AzureUSGovernment).StorageEndpointSuffix
For a storage account in the Government Cloud, this command returns the following output:
core.usgovcloudapi.net
Clean up resources
If you created a new resource group and a storage account for this exercise, you can remove both assets by
deleting the resource group. Deleting the resource group deletes all resources contained within the group.
Next steps
Persisting user logins across PowerShell sessions
Azure Government storage
Microsoft Azure Government Developer Guide
Developer Notes for Azure China 21Vianet Applications
Azure Germany Documentation
Initiate a storage account failover
5/20/2022 • 5 minutes to read • Edit Online
If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can
initiate an account failover. An account failover updates the secondary endpoint to become the primary
endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary
region. Forced failover enables you to maintain high availability for your applications.
This article shows how to initiate an account failover for your storage account using the Azure portal,
PowerShell, or Azure CLI. To learn more about account failover, see Disaster recovery and storage account
failover.
WARNING
An account failover typically results in some data loss. To understand the implications of an account failover and to
prepare for data loss, review Understand the account failover process.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Prerequisites
Before you can perform an account failover on your storage account, make sure that your storage account is
configured for geo-replication. Your storage account can use any of the following redundancy options:
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For more information about Azure Storage redundancy, see Azure Storage redundancy.
Keep in mind that the following features and services are not supported for account failover:
Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares
being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop
working and may also cause unexpected data loss in the case of newly tiered files.
Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not
supported at this time.
A storage account containing premium block blobs cannot be failed over. Storage accounts that support
premium block blobs do not currently support geo-redundancy.
A storage account containing containers with a WORM immutability policy enabled cannot be failed over. Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
To initiate an account failover from the Azure portal, follow these steps:
1. Navigate to your storage account.
2. Under Settings , select Geo-replication . The following image shows the geo-replication and failover
status of a storage account.
3. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-
redundant storage (RA-GRS). If it's not, then select Configuration under Settings to update your
account to be geo-redundant.
4. The Last Sync Time property indicates how far the secondary is behind the primary. Last Sync Time provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the Last Sync Time property, see Check the Last Sync Time property for a storage account.
5. Select Prepare for failover .
6. Review the confirmation dialog. When you are ready, enter Yes to confirm and initiate the failover.
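On the PowerShell and Azure CLI tabs, initiating the failover can be sketched like this; account and group names are placeholders, and the operation is long-running, so confirm it deliberately:

```shell
# Azure CLI sketch: initiate an account failover to the secondary region.
az storage account failover \
    --name <storage-account-name> \
    --resource-group <resource-group>
```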
Important implications of account failover
When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are
updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the
potential impact to your storage account before you initiate a failover.
To estimate the extent of likely data loss before you initiate a failover, check the Last Sync Time property. For
more information about checking the Last Sync Time property, see Check the Last Sync Time property for a
storage account.
The time it takes for failover to complete after it is initiated can vary, though it is typically less than one hour.
After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the
new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage
(RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost
is due to the network egress charges to re-replicate the data to the new secondary region. For additional
information, see Bandwidth Pricing Details.
After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the
new secondary region. Replication time depends on many factors, which include:
The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer, larger objects.
The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo-replication.
If using Blob storage, the number of snapshots per blob.
If using Table storage, the data partitioning strategy. The replication process can't scale beyond the number
of partition keys that you use.
Next steps
Disaster recovery and storage account failover
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Tutorial: Build a highly available application with Blob storage
Enable soft delete on Azure file shares
5/20/2022 • 3 minutes to read • Edit Online
Azure Files offers soft delete for file shares so that you can more easily recover your data when it's mistakenly
deleted by an application or other storage account user. To learn more about soft delete, see How to prevent
accidental deletion of Azure file shares.
Applies to
FILE SHARE TYPE | SMB | NFS
Prerequisites
If you intend to use Azure PowerShell, install the latest version.
If you intend to use the Azure CLI, install the latest version.
Getting started
The following sections show how to enable and use soft delete for Azure file shares on an existing storage
account:
Portal
PowerShell
Azure CLI
3. Select the share, then select Undelete to restore the share.
You can confirm the share is restored since its status switches to Active .
Disable soft delete
If you wish to stop using soft delete, follow these instructions. To permanently delete a file share that has been
soft deleted, you must undelete it, disable soft delete, and then delete it again.
Portal
PowerShell
Azure CLI
1. Navigate to your storage account and select File shares under Data storage .
2. Select the link next to Soft delete .
3. Select Disabled for Soft delete for all file shares .
4. Select Save to confirm your data retention settings.
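On the Azure CLI tab, the equivalent setting change can be sketched as follows (names are placeholders):

```shell
# Azure CLI sketch: disable soft delete for all file shares in the account.
az storage account file-service-properties update \
    --resource-group <resource-group> \
    --account-name <storage-account-name> \
    --enable-delete-retention false
```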
Next steps
To learn about another form of data protection and recovery, see our article Overview of share snapshots for
Azure Files.
Change how a storage account is replicated
5/20/2022 • 11 minutes to read • Edit Online
Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned
events, including transient hardware failures, network or power outages, and massive natural disasters.
Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage
even in the face of failures.
Azure Storage offers the following types of replication:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For an overview of each of these options, see Azure Storage redundancy.
SWITCHING | …TO LRS | …TO GRS/RA-GRS | …TO ZRS | …TO GZRS/RA-GZRS
…from LRS | N/A | Use Azure portal, PowerShell, or CLI to change the replication setting1,2 | Perform a manual migration OR request a live migration | Perform a manual migration OR request a live migration
…from GRS/RA-GRS | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | Perform a manual migration OR request a live migration | Perform a manual migration OR request a live migration OR use PowerShell or Azure CLI to change the replication setting as part of a failback operation only4
…from GZRS/RA-GZRS | Perform a manual migration | Perform a manual migration | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A
to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see Use
caution when failing back to the original primary.
5 Migrating from LRS to ZRS is not supported if NFSv3 protocol support is enabled for Azure Blob Storage.
If you performed an account failover for your (RA-)GRS or (RA-)GZRS account, the account is locally redundant (LRS) in the new primary region after the failover. Live migration to ZRS or GZRS for an LRS account resulting from a failover is not supported, even for so-called failback operations. For example, if you perform an account failover from RA-GZRS to LRS in the secondary region, then configure it again as RA-GRS and perform another account failover to the original primary region, you can't request the original live migration to RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to ZRS or GZRS.
To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must
first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the
redundancy configuration for a storage account that contains archived blobs if at all possible, because
rehydration operations can be costly and time-consuming.
Portal
PowerShell
Azure CLI
To change the redundancy option for your storage account in the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings select Configuration .
3. Update the Replication setting.
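On the PowerShell and Azure CLI tabs, the same change amounts to updating the account SKU. A hedged Azure CLI sketch, where the names and target SKU are placeholders:

```shell
# Azure CLI sketch: change redundancy by updating the storage account SKU.
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --sku Standard_GRS
```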
IMPORTANT
If you need to migrate more than one storage account, create a single support ticket and specify the names of the
accounts to convert on the Details tab.
6. Fill out the additional required information on the Details tab, then select Review + create to review
and submit your support ticket. A support person will contact you to provide any assistance you may
need.
NOTE
Premium file shares are available only for LRS and ZRS.
GZRS storage accounts do not currently support the archive tier. See Hot, Cool, and Archive access tiers for blob data for
more details.
Managed disks are only available for LRS and cannot be migrated to ZRS. You can store snapshots and images for
standard SSD managed disks on standard HDD storage and choose between LRS and ZRS options. For information about
integration with availability sets, see Introduction to Azure managed disks.
ZRS Classic asynchronously replicates data across data centers within one to two regions. Replicated data may
not be available unless Microsoft initiates failover to the secondary. A ZRS Classic account can't be converted to
or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
ZRS Classic is available only for block blobs in general-purpose V1 (GPv1) storage accounts. For more
information about storage accounts, see Azure storage account overview.
To manually migrate ZRS account data to or from an LRS, GRS, RA-GRS, or ZRS Classic account, use one of the
following tools: AzCopy, Azure Storage Explorer, PowerShell, or Azure CLI. You can also build your own migration
solution with one of the Azure Storage client libraries.
You can also upgrade your ZRS Classic storage account to ZRS by using the Azure portal, PowerShell, or Azure
CLI in regions where ZRS is available.
Portal
PowerShell
Azure CLI
To upgrade to ZRS in the Azure portal, navigate to the Configuration settings of the account and choose
Upgrade :
Costs associated with changing how data is replicated
The costs associated with changing how data is replicated depend on your conversion path. Ordered from least
to most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS.
For example, going from LRS to any other type of replication will incur additional charges because you are
moving to a more sophisticated redundancy level. Migrating to GRS or RA-GRS will incur an egress bandwidth
charge at the time of migration because your entire storage account is being replicated to the secondary region.
All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the
secondary region. For details on bandwidth charges, see Azure Storage Pricing page.
If you migrate your storage account from GRS to LRS, there is no additional cost, but your replicated data is
deleted from the secondary location.
IMPORTANT
If you migrate your storage account from RA-GRS to GRS or LRS, that account is billed as RA-GRS for an additional 30
days beyond the date that it was converted.
See also
Azure Storage redundancy
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Use geo-redundancy to design highly available
applications
5/20/2022 • 19 minutes to read • Edit Online
A common feature of cloud-based infrastructures like Azure Storage is that they provide a highly available and
durable platform for hosting data and applications. Developers of cloud-based applications must consider
carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-
redundant storage to ensure high availability even in the event of a regional outage. Storage accounts
configured for geo-redundant replication are synchronously replicated in the primary region, and then
asynchronously replicated to a secondary region that is hundreds of miles away.
Azure Storage offers two options for geo-redundant replication. The only difference between these two options
is how data is replicated in the primary region:
Geo-zone-redundant storage (GZRS): Data is replicated synchronously across three Azure availability
zones in the primary region using zone-redundant storage (ZRS), then replicated asynchronously to the
secondary region. For read access to data in the secondary region, enable read-access geo-zone-
redundant storage (RA-GZRS).
Microsoft recommends using GZRS/RA-GZRS for scenarios that require maximum availability and
durability.
Geo-redundant storage (GRS): Data is replicated synchronously three times in the primary region using
locally redundant storage (LRS), then replicated asynchronously to the secondary region. For read access
to data in the secondary region, enable read-access geo-redundant storage (RA-GRS).
This article shows how to design your application to handle an outage in the primary region. If the primary
region becomes unavailable, your application can adapt to perform read operations against the secondary
region instead. Make sure that your storage account is configured for RA-GRS or RA-GZRS before you get
started.
Handling retries
The Azure Storage client library helps you determine which errors can be retried. For example, a 404 error
(resource not found) would not be retried because retrying it is not likely to result in success. On the other hand,
a 500 error can be retried because it is a server error, and the problem may simply be a transient issue. For
more details, check out the open source code for the ExponentialRetry class in the .NET storage client library.
(Look for the ShouldRetry method.)
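The kind of decision ShouldRetry makes can be sketched as a simple status-code check. This is an illustrative simplification, not the client library's actual implementation:

```python
def is_retryable(status_code: int) -> bool:
    """Decide whether a failed request is worth retrying.

    Client errors such as 404 (resource not found) won't succeed on a
    retry; timeouts and server errors may be transient, so retry those.
    """
    if status_code == 408:          # request timeout: likely transient
        return True
    if 500 <= status_code < 600:    # server error: possibly transient
        return True
    return False                    # other client errors: retrying won't help

print(is_retryable(500))  # True
print(is_retryable(404))  # False
```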
Read requests
Read requests can be redirected to secondary storage if there is a problem with primary storage. As noted
above in Using Eventually Consistent Data, it must be acceptable for your application to potentially read stale
data. If you are using the storage client library to access data from the secondary, you can specify the retry
behavior of a read request by setting a value for the LocationMode property to one of the following:
PrimaryOnly (the default)
PrimaryThenSecondary
SecondaryOnly
SecondaryThenPrimary
When you set the LocationMode to PrimaryThenSecondary, if the initial read request to the primary
endpoint fails with an error that can be retried, the client automatically makes another read request to the
secondary endpoint. If the error is a server timeout, then the client will have to wait for the timeout to expire
before it receives a retryable error from the service.
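The PrimaryThenSecondary behavior amounts to the following sketch, where read_primary and read_secondary are hypothetical stand-ins for reads against the two endpoints, not actual storage client APIs:

```python
def read_with_fallback(read_primary, read_secondary):
    """Try the primary endpoint first; on a retryable failure (here
    modeled as TimeoutError), fall back to the secondary endpoint."""
    try:
        return read_primary()
    except TimeoutError:
        return read_secondary()

def primary():
    # Simulate a primary endpoint that is timing out.
    raise TimeoutError("primary endpoint timed out")

def secondary():
    return "value from secondary"

print(read_with_fallback(primary, secondary))  # value from secondary
```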
There are two main scenarios to consider when deciding how to respond to a retryable error:
This is an isolated problem and subsequent requests to the primary endpoint will not return a retryable
error. An example of where this might happen is when there is a transient network error.
In this scenario, there is no significant performance penalty in having LocationMode set to
PrimaryThenSecondary, as this only happens infrequently.
This is a problem with at least one of the storage services in the primary region and all subsequent
requests to that service in the primary region are likely to return retryable errors for a period of time. An
example of this is if the primary region is completely inaccessible.
In this scenario, there is a performance penalty because all your read requests will try the primary
endpoint first, wait for the timeout to expire, then switch to the secondary endpoint.
For these scenarios, you should identify that there is an ongoing issue with the primary endpoint and send all
read requests directly to the secondary endpoint by setting the LocationMode property to SecondaryOnly.
At this time, you should also change the application to run in read-only mode. This approach is known as the
Circuit Breaker Pattern.
Update requests
The Circuit Breaker pattern can also be applied to update requests. However, update requests cannot be
redirected to secondary storage, which is read-only. For these requests, you should leave the LocationMode
property set to PrimaryOnly (the default). To handle these errors, you can apply a metric to these requests –
such as 10 failures in a row – and when your threshold is met, switch the application into read-only mode. You
can use the same methods for returning to update mode as those described below in the next section about the
Circuit Breaker pattern.
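A minimal sketch of such a failure metric, assuming the example threshold of 10 consecutive failures:

```python
class ReadOnlySwitch:
    """Track consecutive update failures and flip the application into
    read-only mode once a threshold is reached (a simple circuit breaker)."""

    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.consecutive_failures = 0
        self.read_only = False

    def record_success(self):
        self.consecutive_failures = 0   # any success resets the streak

    def record_failure(self):
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.threshold:
            self.read_only = True       # threshold met: stop updates

breaker = ReadOnlySwitch()
for _ in range(10):
    breaker.record_failure()
print(breaker.read_only)  # True
```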
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client
libraries. For more information, see Announcing the Azure Storage v12 Client Libraries.
In the Evaluate method in a custom retry policy, you can run custom code whenever a retry takes place.
In addition to recording when a retry happens, this also gives you the opportunity to modify your retry
behavior.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client
libraries. For more information, see Announcing the Azure Storage v12 Client Libraries.
The third approach is to implement a custom monitoring component in your application that continually
pings your primary storage endpoint with dummy read requests (such as reading a small blob) to
determine its health. This would take up some resources, but not a significant amount. When a problem is
discovered that reaches your threshold, you would then perform the switch to SecondaryOnly and
read-only mode.
At some point, you will want to switch back to using the primary endpoint and allowing updates. If using one of
the first two methods listed above, you could simply switch back to the primary endpoint and enable update
mode after an arbitrarily selected amount of time or number of operations has been performed. You can then let
it go through the retry logic again. If the problem has been fixed, it will continue to use the primary endpoint
and allow updates. If there is still a problem, it will once more switch back to the secondary endpoint and read-
only mode after failing the criteria you've set.
For the third scenario, when pinging the primary storage endpoint becomes successful again, you can trigger
the switch back to PrimaryOnly and continue allowing updates.
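Taken together, the monitoring component and the switch back can be sketched as a small state machine over health-probe results. The thresholds here are illustrative choices, not recommended values:

```python
def choose_mode(probe_results, fail_threshold=3, recover_threshold=3):
    """Walk a sequence of primary-endpoint probe results (True = healthy)
    and return the final location mode."""
    mode = "PrimaryOnly"
    fails = successes = 0
    for healthy in probe_results:
        if healthy:
            fails, successes = 0, successes + 1
            if mode == "SecondaryOnly" and successes >= recover_threshold:
                mode = "PrimaryOnly"    # primary recovered: allow updates again
        else:
            successes, fails = 0, fails + 1
            if mode == "PrimaryOnly" and fails >= fail_threshold:
                mode = "SecondaryOnly"  # ongoing outage: go read-only
    return mode

print(choose_mode([False, False, False]))                    # SecondaryOnly
print(choose_mode([False, False, False, True, True, True]))  # PrimaryOnly
```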
TIME | TRANSACTION | REPLICATION | LAST SYNC TIME | RESULT
T0 | Transaction A: Insert employee entity in primary | | | Transaction A inserted to primary, not replicated yet.
T1 | | Transaction A replicated to secondary | T1 | Transaction A replicated to secondary. Last Sync Time updated.
T4 | | Transaction C replicated to secondary | T1 | Transaction C replicated to secondary. Last Sync Time not updated because transaction B has not been replicated yet.
In this example, assume the client switches to reading from the secondary region at T5. It can successfully read
the administrator role entity at this time, but the entity contains a value for the count of administrators that is
not consistent with the number of employee entities that are marked as administrators in the secondary region
at this time. Your client could simply display this value, with the risk that it is inconsistent information.
Alternatively, the client could attempt to determine that the administrator role is in a potentially inconsistent
state because the updates have happened out of order, and then inform the user of this fact.
To recognize that it has potentially inconsistent data, the client can use the value of the Last Sync Time that you
can get at any time by querying a storage service. This tells you the time when the data in the secondary region
was last consistent and when the service had applied all the transactions prior to that point in time. In the
example shown above, after the service inserts the employee entity in the secondary region, the last sync time
is set to T1. It remains at T1 until the service updates the employee entity in the secondary region when it is set
to T6. If the client retrieves the last sync time when it reads the entity at T5, it can compare it with the timestamp
on the entity. If the timestamp on the entity is later than the last sync time, then the entity is in a potentially
inconsistent state, and you can take whatever is the appropriate action for your application. Using this field
requires that you know when the last update to the primary was completed.
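The comparison described above reduces to checking whether the entity's timestamp is later than the Last Sync Time, for example (timestamps here are illustrative):

```python
from datetime import datetime, timezone

def is_potentially_inconsistent(entity_timestamp, last_sync_time):
    """An entity read from the secondary may be inconsistent if it was
    written to the primary after the secondary's Last Sync Time."""
    return entity_timestamp > last_sync_time

# Illustrative timestamps: the entity was updated after the last sync.
last_sync = datetime(2022, 5, 20, 12, 0, tzinfo=timezone.utc)
entity_ts = datetime(2022, 5, 20, 12, 5, tzinfo=timezone.utc)
print(is_potentially_inconsistent(entity_ts, last_sync))  # True
```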
To learn how to check the last sync time, see Check the Last Sync Time property for a storage account.
Testing
It's important to test that your application behaves as expected when it encounters retryable errors. For
example, you need to test that the application switches to the secondary and into read-only mode when it
detects a problem, and switches back when the primary region becomes available again. To do this, you need a
way to simulate retryable errors and control how often they occur.
You can use Fiddler to intercept and modify HTTP responses in a script. This script can identify responses that
come from your primary endpoint and change the HTTP status code to one that the Storage Client Library
recognizes as a retryable error. This code snippet shows a simple example of a Fiddler script that intercepts
responses to read requests against the employeedata table to return a 502 status:
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
Next steps
For a complete sample showing how to make the switch back and forth between the primary and secondary
endpoints, see Azure Samples – Using the Circuit Breaker Pattern with RA-GRS storage.
Check the Last Sync Time property for a storage
account
5/20/2022 • 2 minutes to read • Edit Online
When you configure a storage account, you can specify that your data is copied to a secondary region that is
hundreds of miles from the primary region. Geo-replication offers durability for your data in the event of a
significant outage in the primary region, such as a natural disaster. If you additionally enable read access to the
secondary region, your data remains available for read operations if the primary region becomes unavailable.
You can design your application to switch seamlessly to reading from the secondary region if the primary
region is unresponsive.
Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) both replicate your data asynchronously
to a secondary region. For read access to the secondary region, enable read-access geo-redundant storage (RA-
GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information about the various options
for redundancy offered by Azure Storage, see Azure Storage redundancy.
This article describes how to check the Last Sync Time property for your storage account so that you can
evaluate any discrepancy between the primary and secondary regions.
PowerShell
Azure CLI
To get the last sync time for the storage account with PowerShell, install version 1.11.0 or later of the Az.Storage
module. Then check the storage account's GeoReplicationStats.LastSyncTime property. Remember to
replace the placeholder values with your own values:
See also
Azure Storage redundancy
Change the redundancy option for a storage account
Use geo-redundancy to design highly available applications
Initiate a storage account failover
5/20/2022 • 5 minutes to read • Edit Online
If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can
initiate an account failover. An account failover updates the secondary endpoint to become the primary
endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary
region. Forced failover enables you to maintain high availability for your applications.
This article shows how to initiate an account failover for your storage account using the Azure portal,
PowerShell, or Azure CLI. To learn more about account failover, see Disaster recovery and storage account
failover.
WARNING
An account failover typically results in some data loss. To understand the implications of an account failover and to
prepare for data loss, review Understand the account failover process.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Prerequisites
Before you can perform an account failover on your storage account, make sure that your storage account is
configured for geo-replication. Your storage account can use any of the following redundancy options:
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For more information about Azure Storage redundancy, see Azure Storage redundancy.
Keep in mind that the following features and services are not supported for account failover:
Azure File Sync does not support storage account failover. Storage accounts containing Azure file shares
being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop
working and may also cause unexpected data loss in the case of newly tiered files.
Storage accounts that have hierarchical namespace enabled (such as for Data Lake Storage Gen2) are not
supported at this time.
A storage account containing premium block blobs cannot be failed over. Storage accounts that support
premium block blobs do not currently support geo-redundancy.
A storage account containing any WORM immutability policy enabled containers cannot be failed over.
Unlocked/locked time-based retention or legal hold policies prevent failover in order to maintain compliance.
To initiate an account failover from the Azure portal, follow these steps:
1. Navigate to your storage account.
2. Under Settings , select Geo-replication . The following image shows the geo-replication and failover
status of a storage account.
3. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-
redundant storage (RA-GRS). If it's not, then select Configuration under Settings to update your
account to be geo-redundant.
4. The Last Sync Time property indicates how far the secondary is behind the primary. Last Sync
Time provides an estimate of the extent of data loss that you will experience after the failover is
completed. For more information about checking the Last Sync Time property, see Check the Last Sync
Time property for a storage account.
5. Select Prepare for failover .
6. Review the confirmation dialog. When you are ready, enter Yes to confirm and initiate the failover.
Important implications of account failover
When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are
updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the
potential impact to your storage account before you initiate a failover.
To estimate the extent of likely data loss before you initiate a failover, check the Last Sync Time property. For
more information about checking the Last Sync Time property, see Check the Last Sync Time property for a
storage account.
The time it takes to fail over after initiation can vary, but it typically takes less than one hour.
After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the
new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage
(RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost
is due to the network egress charges to re-replicate the data to the new secondary region. For additional
information, see Bandwidth Pricing Details.
After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the
new secondary region. Replication time depends on many factors, which include:
The number and size of the objects in the storage account. Replicating many small objects can take longer
than replicating fewer large objects.
The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live
traffic takes priority over geo-replication.
If using Blob storage, the number of snapshots per blob.
If using Table storage, the data partitioning strategy. The replication process can't scale beyond the number
of partition keys that you use.
Next steps
Disaster recovery and storage account failover
Check the Last Sync Time property for a storage account
Use geo-redundancy to design highly available applications
Tutorial: Build a highly available application with Blob storage
Back up Azure file shares
5/20/2022 • 8 minutes to read • Edit Online
This article explains how to back up Azure file shares from the Azure portal.
In this article, you'll learn how to:
Create a Recovery Services vault.
Configure backup from the Recovery Services vault
Configure backup from the file share pane
Run an on-demand backup job to create a restore point
Prerequisites
Learn about the Azure file share snapshot-based backup solution.
Ensure that the file share is present in one of the supported storage account types.
Identify or create a Recovery Services vault in the same region and subscription as the storage account that
hosts the file share.
5. The Recovery Services vault dialog opens. Provide the following values:
Subscription : Choose the subscription to use. If you're a member of only one subscription, you'll
see that name. If you're not sure which subscription to use, use the default (suggested)
subscription. There are multiple choices only if your work or school account is associated with
more than one Azure subscription.
Resource group : Use an existing resource group or create a new one. To see the list of available
resource groups in your subscription, select Use existing , and then select a resource from the
dropdown list. To create a new resource group, select Create new and enter the name. For more
information about resource groups, see Azure Resource Manager overview.
Vault name : Enter a friendly name to identify the vault. The name must be unique to the Azure
subscription. Specify a name that has at least 2 but not more than 50 characters. The name must
start with a letter and consist only of letters, numbers, and hyphens.
Region : Select the geographic region for the vault. For you to create a vault to help protect any
data source, the vault must be in the same region as the data source.
IMPORTANT
If you're not sure of the location of your data source, close the dialog. Go to the list of your resources in
the portal. If you have data sources in multiple regions, create a Recovery Services vault for each region.
Create the vault in the first location before you create the vault for another location. There's no need to
specify storage accounts to store the backup data. The Recovery Services vault and Azure Backup handle
that automatically.
7. When you're ready to create the Recovery Services vault, select Create .
8. It can take a while to create the Recovery Services vault. Monitor the status notifications in the
Notifications area at the upper-right corner of the portal. After your vault is created, it's visible in the list
of Recovery Services vaults. If you don't see your vault, select Refresh .
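As an illustration, the vault name rules from step 5 (2 to 50 characters, starting with a letter, containing only letters, numbers, and hyphens) can be expressed as a quick check:

```python
import re

def valid_vault_name(name: str) -> bool:
    """Vault names must be 2-50 characters, start with a letter, and
    contain only letters, numbers, and hyphens."""
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]{1,49}", name) is not None

print(valid_vault_name("contoso-vault-01"))  # True
print(valid_vault_name("1vault"))            # False (starts with a digit)
```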
Configure backup from the Recovery Services vault
To configure backup for multiple file shares from the Recovery Services vault pane, follow these steps:
1. In the Azure portal, go to Backup center and click +Backup .
2. Select Azure Files (Azure Storage) as the datasource type, select the vault that you wish to protect the
file shares with, and then click Continue .
3. Click Select to select the storage account that contains the file shares to be backed up.
The Select Storage Account pane opens on the right, listing a set of discovered supported storage
accounts. They're either associated with this vault or present in the same region as the vault, but not yet
associated with any Recovery Services vault.
4. From the list of discovered storage accounts, select an account, and select OK .
NOTE
If a storage account is present in a different region than the vault, it won't be present in the list of discovered
storage accounts.
5. The next step is to select the file shares you want to back up. Select the Add button in the FileShares to
Backup section.
6. The Select File Shares context pane opens on the right. Azure searches the storage account for file
shares that can be backed up. If you recently added your file shares and don't see them in the list, allow
some time for the file shares to appear.
7. From the Select File Shares list, select one or more of the file shares you want to back up. Select OK .
8. To choose a backup policy for your file share, you have three options:
Choose the default policy.
This option allows you to enable a daily backup that will be retained for 30 days. If you don’t have an
existing backup policy in the vault, the backup pane opens with the default policy settings. If you
want to choose the default settings, you can directly select Enable backup .
Create a new policy
a. To create a new backup policy for your file share, select the link text below the drop-down
list in the Backup Policy section.
b. Follow steps 3-7 in the Create a new policy section.
c. After defining all attributes of the policy, click OK .
4. Select Backup under the Operations section of the file share pane. The Configure backup pane will
load on the right.
5. For the Recovery Services vault selection, do one of the following:
If you already have a vault, select the Select existing Recovery Services vault radio button, and
choose one of the existing vaults from Vault Name drop down menu.
If you don't have a vault, select the Create new Recovery Services vault radio button. Specify a
name for the vault. It's created in the same region as the file share. By default, the vault is created
in the same resource group as the file share. If you want to choose a different resource group,
select the Create new link below the Resource Type drop-down and specify a name for the
resource group. Select OK to continue.
IMPORTANT
If the storage account is registered with a vault, or there are already protected shares within the storage
account hosting the file share you're trying to protect, the Recovery Services vault name will be pre-populated
and you won't be allowed to edit it. Learn more here.
8. You can track the configuration progress in the portal notifications, or by monitoring the backup jobs
under the vault you're using to protect the file share.
9. After the completion of the configure backup operation, select Backup under the Operations section of
the file share pane. The context pane listing Vault Essentials will load on the right. From there, you can
trigger on-demand backup and restore operations.
4. The Backup Now pane opens. Specify the last day you want to retain the recovery point. You can have a
maximum retention of 10 years for an on-demand backup.
3. The Backup Now pane opens. Specify the retention for the recovery point. You can have a maximum
retention of 10 years for an on-demand backup.
4. Select OK to confirm.
NOTE
Azure Backup locks the storage account when you configure protection for any file share in the corresponding account.
This provides protection against accidental deletion of a storage account with backed up file shares.
Best practices
Don't delete snapshots created by Azure Backup. Deleting snapshots can result in loss of recovery points
and/or restore failures.
Don't remove the lock taken on the storage account by Azure Backup. If you delete the lock, your storage
account will be prone to accidental deletion and if it's deleted, you'll lose your snapshots or backups.
Next steps
Learn how to:
Restore Azure file shares
Manage Azure file share backups
Back up Azure file shares with Azure CLI
5/20/2022 • 4 minutes to read • Edit Online
The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article details how to back up Azure file shares with Azure CLI.
You can also perform these steps via Azure PowerShell or the Azure portal.
By the end of this tutorial, you'll learn how to perform the operations below with Azure CLI:
Create a Recovery Services vault
Enable backup for Azure file shares
Trigger an on-demand backup for file shares
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.
2. Use the az backup vault create command to create the vault. Specify the same location for the vault as was
used for the resource group.
The following example creates a Recovery Services vault named azurefilesvault in the East US region.
az backup vault create --resource-group azurefiles --name azurefilesvault --location eastus --output table
Name ResourceGroup
------------------------------------ ---------------
0caa93f4-460b-4328-ac1d-8293521dd928 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your enable backup operation. To track the status of the job, use the az backup job show command.
Name ResourceGroup
------------------------------------ ---------------
9f026b4f-295b-4fb8-aae0-4f058124cb12 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your “on-demand backup” operation. To track the status of a job, use the az backup job show command.
Next steps
Learn how to Restore Azure file shares with CLI
Learn how to Manage Azure file share backups with CLI
Back up an Azure file share by using PowerShell
5/20/2022 • 9 minutes to read • Edit Online
This article describes how to use Azure PowerShell to back up an Azure Files file share through an Azure Backup
Recovery Services vault.
This article explains how to:
Set up PowerShell and register the Recovery Services provider.
Create a Recovery Services vault.
Configure backup for an Azure file share.
Run a backup job.
Set up PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
NOTE
Azure PowerShell currently doesn't support backup policies with an hourly schedule. Use the Azure portal to
leverage this feature. Learn more
2. Find the PowerShell cmdlets for Azure Backup by using this command:
Get-Command *azrecoveryservices*
3. Review the aliases and cmdlets for Azure Backup, Azure Site Recovery, and the Recovery Services vault.
Here's an example of what you might see. It's not a complete list of cmdlets.
4. Sign in to your Azure account by using Connect-AzAccount .
5. On the webpage that appears, you're prompted to enter your account credentials.
Alternatively, you can include your account credentials as a parameter in the Connect-AzAccount
cmdlet by using -Credential .
If you're a CSP partner working on behalf of a tenant, specify the customer as a tenant. Use their tenant ID
or tenant primary domain name. An example is Connect-AzAccount -Tenant "fabrikam.com" .
6. Associate the subscription that you want to use with the account, because an account can have several
subscriptions:
Select-AzSubscription -SubscriptionName $SubscriptionName
7. If you're using Azure Backup for the first time, use the Register-AzResourceProvider cmdlet to register
the Azure Recovery Services provider with your subscription:
9. In the command output, verify that RegistrationState changes to Registered . If it doesn't, run the
Register-AzResourceProvider cmdlet again.
2. Use the New-AzRecoveryServicesVault cmdlet to create the vault. Specify the same location for the vault
that you used for the resource group.
Get-AzRecoveryServicesVault
The output is similar to the following. Note that the output provides the associated resource group and location.
Name : Contoso-vault
ID : /subscriptions/1234
Type : Microsoft.RecoveryServices/vaults
Location : WestUS
ResourceGroupName : Contoso-docs-rg
SubscriptionId : 1234-567f-8910-abc
Properties : Microsoft.Azure.Commands.RecoveryServices.ARSVaultProperties
IMPORTANT
You need to provide the start time in 30-minute multiples only. In the preceding example, it can be only "01:00:00" or
"02:30:00". The start time can't be "01:15:00".
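The 30-minute rule above can be checked before submitting a policy. The following is a minimal sketch (the helper name is hypothetical, not part of the Azure Backup cmdlets):

```python
from datetime import datetime

def is_valid_start_time(start_time: str) -> bool:
    """Check that a schedule start time falls on a 30-minute boundary."""
    t = datetime.strptime(start_time, "%H:%M:%S")
    return t.second == 0 and t.minute in (0, 30)

print(is_valid_start_time("02:30:00"))  # True
print(is_valid_start_time("01:15:00"))  # False
```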
The following example stores the schedule policy and the retention policy in variables. It then uses those
variables as parameters for a new policy (NewAFSPolicy ). NewAFSPolicy takes a daily backup and retains it
for 30 days.
Enable backup
After you define the backup policy, you can enable protection for the Azure file share by using the policy.
Retrieve a backup policy
You fetch the relevant policy object by using Get-AzRecoveryServicesBackupProtectionPolicy. Use this cmdlet to
view the policies associated with a workload type, or to get a specific policy.
Retrieve a policy for a workload type
The following example retrieves policies for the workload type AzureFiles :
NOTE
The time zone of the BackupTime field in PowerShell is in UTC. When the backup time is shown in the Azure portal, the
time is adjusted to your local time zone.
The command waits until the configure-protection job is finished and gives an output that's similar to the
following example:
For more information on how to get a list of file shares for a storage account, see this article.
IMPORTANT
Make sure the Az.RecoveryServices module is upgraded to the minimum version (2.6.0) for backups of Azure file
shares. With this version, the FriendlyName filter is available for the Get-AzRecoveryServicesBackupItem command.
Pass the name of the Azure file share to the FriendlyName parameter. If you pass the name of the file share to the Name
parameter, this version throws a warning to pass the name to the FriendlyName parameter.
Not installing the minimum version might result in a failure of existing scripts. Install the minimum version by
using the following command:
Install-Module -Name Az.RecoveryServices -MinimumVersion 2.6.0 -Force
The command returns a job with an ID to be tracked, as shown in the following example:
Azure file share snapshots are used while the backups are taken. Usually the job finishes by the time the
command returns this output.
Next steps
Learn about backing up Azure Files in the Azure portal.
Refer to the sample script on GitHub for using an Azure Automation runbook to schedule backups.
Backup Azure file share using Azure Backup via
REST API
5/20/2022 • 9 minutes to read
This article describes how to back up an Azure File share using Azure Backup via REST API.
This article assumes you've already created a Recovery Services vault and policy for configuring backup for your
file share. If you haven’t, refer to the create vault and create policy REST API tutorials for creating new vaults and
policies.
For this article, we'll use the following resources:
RecoveryServicesVault: azurefilesvault
Policy: schedule1
Resource group: azurefiles
Storage account: testvault2
File share: testshare
Configure backup for an unprotected Azure file share using REST API
Discover storage accounts with unprotected Azure file shares
The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up
to the Recovery Services vault. This is triggered using the refresh operation. It's an asynchronous POST
operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current
subscription and 'caches' them. Once the file share is 'cached', Recovery Services can access the file share and
protect it.
POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/provider
s/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-
version=2016-12-01&$filter={$filter}
The POST URI has {subscriptionId} , {vaultName} , {vaultresourceGroupName} , and {fabricName} parameters. In
our example, the value for the different parameters will be as follows:
{fabricName} is Azure
{vaultName} is azurefilesvault
{vaultresourceGroupName} is azurefiles
$filter=backupManagementType eq 'AzureStorage'
Since all the required parameters are given in the URI, there's no need for a separate request body.
POST https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/refreshContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
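The URI above can be assembled from its parameters. The following sketch mirrors the article's example values; note that the `$filter` expression must be URL-encoded in a real request:

```python
from urllib.parse import quote

def refresh_containers_uri(subscription_id, vault_rg, vault_name,
                           fabric_name="Azure", api_version="2016-12-01"):
    """Build the refreshContainers POST URI for a Recovery Services vault."""
    filter_expr = quote("backupManagementType eq 'AzureStorage'")
    return (
        f"https://management.azure.com/Subscriptions/{subscription_id}"
        f"/resourceGroups/{vault_rg}"
        f"/providers/Microsoft.RecoveryServices/vaults/{vault_name}"
        f"/backupFabrics/{fabric_name}/refreshContainers"
        f"?api-version={api_version}&$filter={filter_expr}"
    )

url = refresh_containers_uri("00000000-0000-0000-0000-000000000000",
                             "azurefiles", "azurefilesvault")
print(url)
```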
Responses to the refresh operation
The 'refresh' operation is an asynchronous operation. It means this operation creates another operation that
needs to be tracked separately.
It returns two responses: 202 (Accepted) when another operation is created, and 200 (OK) when that operation
completes.
Example responses to the refresh operation
Track the resulting operation using the "Location" header with a simple GET command
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/operationResults/cca47745-12d2-42f9-b3a4-75335f18fdf6?api-version=2016-12-01
Once all the Azure Storage accounts are discovered, the GET command returns a 204 (No Content) response.
The vault is now able to discover any storage account with file shares that can be backed up within the
subscription.
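The 202-then-poll pattern above recurs throughout this article. A minimal, transport-agnostic sketch of tracking such an operation (the `fetch` callable is an assumption standing in for an authenticated HTTP GET):

```python
import time

def track_async_operation(fetch, location_url, interval=0.0, max_tries=60):
    """Poll the operation-results URL from the Location header until the
    service stops answering 202 (Accepted)."""
    for _ in range(max_tries):
        status = fetch(location_url)
        if status != 202:
            return status  # e.g. 200 (OK), or 204 (No Content) once discovery completes
        time.sleep(interval)
    raise TimeoutError("operation did not complete in time")

# Stand-in fetch that reports 'in progress' twice, then completes:
responses = iter([202, 202, 204])
print(track_async_operation(lambda url: next(responses), "https://contoso/op"))
# 204
```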
Get List of storage accounts with file shares that can be backed up with Recovery Services vault
To confirm that “caching” is done, list all the storage accounts in the subscription with file shares that can be
backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is
done using the GET ProtectableContainers operation.
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectableContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
The GET URI has all the required parameters. No additional request body is needed.
Example of response body:
{
  "value": [
    {
      "id": "/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectableContainers/StorageContainer;Storage;AzureFiles;testvault2",
      "name": "StorageContainer;Storage;AzureFiles;testvault2",
      "type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectableContainers",
      "properties": {
        "friendlyName": "testvault2",
        "backupManagementType": "AzureStorage",
        "protectableContainerType": "StorageContainer",
        "healthStatus": "Healthy",
        "containerId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2"
      }
    }
  ]
}
Since we can locate the testvault2 storage account in the response body with the friendly name, the refresh
operation performed above was successful. The Recovery Services vault can now successfully discover storage
accounts with unprotected file shares in the same subscription.
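As the NOTE below stresses, the container name must be taken verbatim from the response rather than constructed. A sketch of locating it by friendly name in the parsed response (trimmed to the fields this needs):

```python
import json

# Trimmed, illustrative response shape from the example above.
response = json.loads("""
{
  "value": [
    {
      "name": "StorageContainer;Storage;AzureFiles;testvault2",
      "properties": {"friendlyName": "testvault2"}
    }
  ]
}
""")

def find_container_name(response, friendly_name):
    """Return the exact 'name' attribute to use as {containerName} later."""
    for container in response["value"]:
        if container["properties"]["friendlyName"] == friendly_name:
            return container["name"]
    return None

print(find_container_name(response, "testvault2"))
# StorageContainer;Storage;AzureFiles;testvault2
```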
Register storage account with Recovery Services vault
This step is only needed if you didn't register the storage account with the vault earlier. You can register the
storage account via the ProtectionContainers-Register operation.
PUT
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}?
api-version=2016-12-01
NOTE
Always take the name attribute of the response and fill it in this request. Don't hard-code or create the container-name
format. If you create or hard-code it, the API call will fail if the container-name format changes in the future.
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2?api-version=2016-12-01
{
  "properties": {
    "containerType": "StorageContainer",
    "sourceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
    "resourceGroup": "AzureFiles",
    "friendlyName": "testvault2",
    "backupManagementType": "AzureStorage"
  }
}
For the complete list of definitions of the request body and other details, refer to ProtectionContainers-Register.
This is an asynchronous operation and returns two responses: "202 Accepted" when the operation is accepted
and "200 OK" when the operation is complete. To track the operation status, use the location header to get the
latest status of the operation.
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/operationresults/1a3c8ee7-
e0e5-43ed-b8b3-73cc992b6db9?api-version=2016-12-01
You can verify whether the registration was successful from the value of the registrationStatus parameter in the
response body. In our case, it shows the status as registered for testvault2, so the registration operation was
successful.
Inquire all unprotected file shares under a storage account
You can inquire about protectable items in a storage account using the Protection Containers-Inquire operation.
It's an asynchronous operation and the results should be tracked using the location header.
POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/i
nquire?api-version=2016-12-01
https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/inquire?api-version=2016-12-
01
Cache-Control : no-cache
Pragma : no-cache
X-Content-Type-Options: nosniff
x-ms-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-client-request-id : 3da383a5-d66d-4b7c-982a-bc8d94798d61,3da383a5-d66d-4b7c-982a-bc8d94798d61
Strict-Transport-Security: max-age=31536000; includeSubDomains
Server : Microsoft-IIS/10.0
X-Powered-By : ASP.NET
x-ms-ratelimit-remaining-subscription-reads: 11932
x-ms-correlation-request-id : 68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
x-ms-routing-request-id : CENTRALUSEUAP:20200127T105305Z:68727f1e-b8cf-4bf1-bf92-8e03a9d96c46
Date : Mon, 27 Jan 2020 10:53:05 GMT
Select the file share you want to back up
You can list all protectable items under the subscription and locate the desired file share to be backed up using
the GET backupprotectableItems operation.
GET
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter={$filter}
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPro
tectableItems?$filter=backupManagementType eq 'AzureStorage'&api-version=2016-12-01
Sample response:
Status Code:200
{
"value": [
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afaccount1/protectableItems/azurefilesha
re;azurefiles1",
"name": "azurefileshare;azurefiles1",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1",
"parentContainerFriendlyName": "afaccount1",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "azurefiles1",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afsaccount/protectableItems/azurefilesha
re;afsresource",
"name": "azurefileshare;afsresource",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"parentContainerFriendlyName": "afsaccount",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "afsresource",
"protectionState": "NotProtected"
}
},
{
"id": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefilesha
re;testshare",
"name": "azurefileshare;testshare",
"type": "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectableItems",
"properties": {
"parentContainerFabricId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"parentContainerFriendlyName": "testvault2",
"azureFileShareType": "XSMB",
"backupManagementType": "AzureStorage",
"workloadType": "AzureFileShare",
"protectableItemType": "AzureFileShare",
"friendlyName": "testshare",
"protectionState": "NotProtected"
}
}
]
}
The response contains the list of all unprotected file shares, along with all the information required by the
Azure Recovery Services vault to configure the backup.
Enable backup for the file share
After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more
about existing policies in the vault, refer to list Policy API. Then select the relevant policy by referring to the
policy name. To create policies, refer to create policy tutorial.
Enabling protection is an asynchronous PUT operation that creates a "protected item".
PUT
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupName}/provider
s/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerNa
me}/protectedItems/{protectedItemName}?api-version=2019-05-13
Set the {containerName} and {protectedItemName} variables using the ID attribute in the response body of the
GET backupprotectableitems operation.
In our example, the ID of file share we want to protect is:
"/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/storagecontainer;storage;azurefiles;testvault2/protectableItems/azurefilesha
re;testshare"
{containerName} - storagecontainer;storage;azurefiles;testvault2
{protectedItemName} - azurefileshare;testshare
Or you can refer to the name attribute of the protection container and protectable item responses.
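The mapping above is a simple string split on the ID. A sketch:

```python
def parse_protectable_item_id(item_id):
    """Split a protectable-item ID into ({containerName}, {protectedItemName})."""
    _, rest = item_id.split("/protectionContainers/", 1)
    container_name, protected_item_name = rest.split("/protectableItems/", 1)
    return container_name, protected_item_name

item_id = ("/Subscriptions/00000000-0000-0000-0000-000000000000"
           "/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices"
           "/vaults/azurefilesvault/backupFabrics/Azure"
           "/protectionContainers/storagecontainer;storage;azurefiles;testvault2"
           "/protectableItems/azurefileshare;testshare")

print(parse_protectable_item_id(item_id))
# ('storagecontainer;storage;azurefiles;testvault2', 'azurefileshare;testshare')
```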
NOTE
Always take the name attribute of the response and fill it in this request. Don't hard-code or create the container-name
format or protected item name format. If you create or hard-code it, the API call will fail if the container-name format or
protected item name format changes in the future.
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare
;testshare?api-version=2016-12-01
{
"properties": {
"protectedItemType": "AzureFileShareProtectedItem",
"sourceResourceId": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
"policyId": "/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPol
icies/schedule1"
}
}
Then track the resulting operation using the location header or Azure-AsyncOperation header with a GET
command.
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/c3a52d1d-0853-4211-8141-477c65740264?api-version=2016-12-01
Once the operation completes, it returns 200 (OK) with the protected item content in the response body.
Sample Response Body:
{
"id": "c3a52d1d-0853-4211-8141-477c65740264",
"name": "c3a52d1d-0853-4211-8141-477c65740264",
"status": "Succeeded",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:10:48.296012Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"
}
}
This confirms that protection is enabled for the file share and the first backup will be triggered according to the
policy schedule.
Trigger an on-demand backup for the file share
You can also trigger an on-demand backup for the protected file share by using the following POST operation:
POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/backup?api-version=2016-12-01
{containerName} and {protectedItemName} are as constructed above while enabling backup. For our example,
this translates to:
POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare
;testshare/backup?api-version=2017-07-01
For the complete list of definitions of the request body and other details, refer to trigger backups for protected
items REST API document.
Request Body example
{
  "properties": {
    "objectType": "AzureFileShareBackupRequest",
    "recoveryPointExpiryTimeInUTC": "2020-03-07T18:29:59.000Z"
  }
}
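The expiry timestamp in the request body can be computed for a desired retention window. A sketch that produces the same shape as the example value (the helper name is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def recovery_point_expiry(days_from_now):
    """Format recoveryPointExpiryTimeInUTC as the request body shows it,
    e.g. '2020-03-07T18:29:59.000Z'."""
    expiry = datetime.now(timezone.utc) + timedelta(days=days_from_now)
    return expiry.strftime("%Y-%m-%dT%H:%M:%S.000Z")

print(recovery_point_expiry(30))
```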
Then track the resulting operation using the location header or Azure-AsyncOperation header with a GET
command.
GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/dc62d524-427a-4093-968d-e951c0a0726e?api-version=2016-12-01
Once the operation completes, it returns 200 (OK) with the ID of the resulting backup job in the response body.
Sample response body
{
"id": "dc62d524-427a-4093-968d-e951c0a0726e",
"name": "dc62d524-427a-4093-968d-e951c0a0726e",
"status": "Succeeded",
"startTime": "2020-02-06T11:06:02.1327954Z",
"endTime": "2020-02-06T11:06:02.1327954Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "39282261-cb52-43f5-9dd0-ffaf66beeaef"
}
}
Since the backup job is a long running operation, it needs to be tracked as explained in the monitor jobs using
REST API document.
Next steps
Learn how to restore Azure file shares using REST API.
Restore Azure file shares
5/20/2022 • 5 minutes to read
This article explains how to use the Azure portal to restore an entire file share or specific files from a restore
point created by Azure Backup.
In this article, you'll learn how to:
Restore a full Azure file share.
Restore individual files or folders.
Track the restore operation status.
2. Select Azure Files (Azure Storage) as the datasource type, select the file share that you wish to restore,
and then click Continue .
Full share recovery
You can use this restore option to restore the complete file share in the original location or an alternate location.
1. After you select Continue in the previous step, the Restore pane opens. To select the restore point you
want to use for performing the restore operation, choose the Select link text below the Restore Point
text box.
2. The Select Restore Point context pane opens on the right, listing the restore points available for the
selected file share. Select the restore point you want to use to perform the restore operation, and select
OK .
NOTE
By default, the Select Restore Point pane lists restore points from the last 30 days. If you want to look at the
restore points created during a specific duration, specify the range by selecting the appropriate Start Time and
End Time and select the Refresh button.
3. The next step is to choose the Restore Location . In the Recovery Destination section, specify where
or how to restore the data. Select one of the following two options by using the toggle button:
Original Location : Restore the complete file share to the same location as the original source.
Alternate Location : Restore the complete file share to an alternate location and keep the original file
share as is.
Restore to the original location (full share recovery)
1. Select Original Location as the Recovery Destination , and select whether to skip or overwrite if there
are conflicts, by choosing the appropriate option from the In case of Conflicts drop-down list.
2. Select Restore to start the restore operation.
3. After you select File Recovery , the Restore pane opens. To select the restore point you want to use for
performing the restore operation, select the Select link text below the Restore Point text box.
4. The Select Restore Point context pane opens on the right, listing the restore points available for the
selected file share. Select the restore point you want to use to perform the restore operation, and select
OK .
5. The next step is to choose the Restore Location . In the Recovery Destination section, specify where
or how to restore the data. Select one of the following two options by using the toggle button:
Original Location : Restore selected files or folders to the same file share as the original source.
Alternate Location : Restore selected files or folders to an alternate location and keep the original file
share contents as is.
Restore to the original location (item-level recovery)
1. Select Original Location as the Recovery Destination , and select whether to skip or overwrite if there
are conflicts by choosing the appropriate option from the In case of conflicts drop-down list.
2. To select the files or folders you want to restore, select the Add File button. This will open a context pane
on the right, displaying the contents of the file share recovery point you selected for restore.
3. Select the check box that corresponds to the file or folder you want to restore, and choose Select .
You can also monitor restore progress from the Recovery Services vault:
1. Go to Backup center and click Backup Jobs from the menu.
2. Filter for jobs for the required datasource type and job status.
3. Select the workload name that corresponds to your file share to view more details about the restore
operation, like Data Transferred and Number of Restored Files .
NOTE
Folders will be restored with their original permissions if there is at least one file present in them.
NOTE
Trailing dots in any directory path can lead to failures in the restore.
Next steps
Learn how to Manage Azure file share backups.
Restore Azure file shares with the Azure CLI
5/20/2022 • 8 minutes to read
The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article explains how to restore an entire file share or specific
files from a restore point created by Azure Backup by using the Azure CLI. You can also perform these steps with
Azure PowerShell or in the Azure portal.
By the end of this article, you'll learn how to perform the following operations with the Azure CLI:
View restore points for a backed-up Azure file share.
Restore a full Azure file share.
Restore individual files or folders.
NOTE
Azure Backup now supports restoring multiple files or folders to the original or an alternate location using Azure CLI.
Refer to the Restore multiple files or folders to original or alternate location section of this document to learn more.
Prerequisites
This article assumes that you already have an Azure file share that's backed up by Azure Backup. If you don't
have one, see Back up Azure file shares with the CLI to configure backup for your file share. For this article, you
use the following resources:
You can use a similar structure for your file shares to try out the different types of restores explained in this
article.
Prepare your environment for the Azure CLI
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.
You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
The result set is a list of recovery points with time and consistency details for each restore point.
The Name attribute in the output corresponds to the recovery point name that can be used as a value for the
--rp-name parameter in recovery operations.
Name ResourceGroup
------------------------------------ ---------------
6a27cc23-9283-4310-9c27-dcfb81b7b4bb azurefiles
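When scripting a restore, you typically want the most recent restore point from that list. A sketch assuming the JSON output of the list command has been parsed into a list of dicts (the field shape shown here is an assumption, trimmed to what the sketch needs):

```python
# Hypothetical, trimmed shape of `az backup recoverypoint list -o json` output.
points = [
    {"name": "6a27cc23-9283-4310-9c27-dcfb81b7b4bb",
     "properties": {"recoveryPointTime": "2020-01-20T10:00:00+00:00"}},
    {"name": "0e7e3b99-1111-2222-3333-444444444444",
     "properties": {"recoveryPointTime": "2020-01-21T10:00:00+00:00"}},
]

def latest_recovery_point(points):
    """Pick the name of the most recent restore point to pass as --rp-name.
    ISO-8601 timestamps with the same offset compare correctly as strings."""
    newest = max(points, key=lambda p: p["properties"]["recoveryPointTime"])
    return newest["name"]

print(latest_recovery_point(points))
# 0e7e3b99-1111-2222-3333-444444444444
```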
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
Restore a full share to an alternate location
You can use this option to restore a file share to an alternate location and keep the original file share as is.
Specify the following parameters for alternate location recovery:
--target-storage-account : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
--target-file-share : The file share within the target storage account to which the backed-up content is
restored.
--target-folder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, give the target folder values as an empty string.
--resolve-conflict : Instruction if there's a conflict with the restored data. Accepts Overwrite or Skip .
The following example uses az backup restore restore-azurefileshare with restore mode as alternatelocation to
restore the azurefiles file share in the afsaccount storage account to the azurefiles1 file share in the afaccount1
storage account.
Name ResourceGroup
------------------------------------ ---------------
babeb61c-d73d-4b91-9830-b8bfa83c349a azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
Item-level recovery
You can use this restore option to restore individual files or folders in the original or an alternate location.
Define the following parameters to perform restore operations:
--container-name : The name of the storage account that hosts the backed-up original file share. To retrieve
the name or friendly name of your container, use the az backup container list command.
--item-name : The name of the backed-up original file share you want to use for the restore operation. To
retrieve the name or friendly name of your backed-up item, use the az backup item list command.
Specify the following parameters for the items you want to recover:
SourceFilePath : The absolute path of the file, to be restored within the file share, as a string. This path is the
same path used in the az storage file download or az storage file show CLI commands.
SourceFileType : Choose whether a directory or a file is selected. Accepts Directory or File .
ResolveConflict : Instruction if there's a conflict with the restored data. Accepts Overwrite or Skip .
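Putting the parameters above together, a script can assemble the full argument list for the restore command. This is a sketch: the vault and resource-group values are the article's examples, and the flag spellings follow the parameters described above but should be verified against `az backup restore restore-azurefiles -h`:

```python
def restore_azurefiles_args(container, item, rp_name, paths,
                            file_type="File", resolve_conflict="Overwrite"):
    """Assemble an argument list for `az backup restore restore-azurefiles`
    (item-level restore to the original location)."""
    return ["az", "backup", "restore", "restore-azurefiles",
            "--vault-name", "azurefilesvault",
            "--resource-group", "azurefiles",
            "--container-name", container,
            "--item-name", item,
            "--rp-name", rp_name,
            "--restore-mode", "OriginalLocation",
            "--source-file-type", file_type,
            "--resolve-conflict", resolve_conflict,
            "--source-file-path", *paths]

args = restore_azurefiles_args(
    "StorageContainer;Storage;AzureFiles;afsaccount",  # hypothetical container name
    "AzureFileShare;azurefiles",                       # hypothetical item name
    "6a27cc23-9283-4310-9c27-dcfb81b7b4bb",
    ["RestoreTest.txt"],
)
print(" ".join(args))
```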
Restore individual files or folders to the original location
Use the az backup restore restore-azurefiles cmdlet with restore mode set to originallocation to restore specific
files or folders to their original location.
The following example restores the RestoreTest.txt file in its original location: the azurefiles file share.
Name ResourceGroup
------------------------------------ ---------------
df4d9024-0dcb-4edc-bf8c-0a3d18a25319 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
Restore individual files or folders to an alternate location
To restore specific files or folders to an alternate location, use the az backup restore restore-azurefiles cmdlet
with restore mode set to alternatelocation and specify the following target-related parameters:
--target-storage-account : The storage account to which the backed-up content is restored. The target
storage account must be in the same location as the vault.
--target-file-share : The file share within the target storage account to which the backed-up content is
restored.
--target-folder : The folder under the file share to which data is restored. If the backed-up content is to be
restored to a root folder, give the target folder's value as an empty string.
The following example restores the RestoreTest.txt file originally present in the azurefiles file share to an
alternate location: the restoredata folder in the azurefiles1 file share hosted in the afaccount1 storage account.
Name ResourceGroup
------------------------------------ ---------------
df4d9024-0dcb-4edc-bf8c-0a3d18a25319 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
Name ResourceGroup
------------------------------------ ---------------
649b0c14-4a94-4945-995a-19e2aace0305 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your restore operation. To track the status of the job, use the az backup job show cmdlet.
If you want to restore multiple items to an alternate location, use the command above by specifying
target-related parameters as explained in the Restore individual files or folders to an alternate location section.
Next steps
Learn how to Manage Azure file share backups with the Azure CLI.
Restore Azure Files with PowerShell
5/20/2022 • 4 minutes to read
This article explains how to restore an entire file share, or specific files, from a restore point created by the Azure
Backup service using Azure PowerShell.
You can restore an entire file share or specific files on the share. You can restore to the original location, or to an
alternate location.
WARNING
Make sure you're using at least the minimum required version of the Az.RecoveryServices module (2.6.0) for AFS backups.
For more information, see the section outlining the requirement for this change.
NOTE
Azure Backup now supports restoring multiple files or folders to the original or an alternate location using PowerShell.
Refer to this section of the document to learn how.
After the relevant recovery point is selected, you restore the file share or file to the original location, or to an
alternate location.
The command returns a job with an ID to be tracked, as shown in the following example.
This command returns a job with an ID to be tracked, as shown in the previous section.
$files = ("Restore","zrs1_restore")
If you want to restore multiple files or folders to an alternate location, use the preceding scripts and specify the
target location-related parameter values, as explained earlier in Restore an Azure file to an alternate location.
Next steps
Learn about restoring Azure Files in the Azure portal.
Restore Azure File Shares using REST API
5/20/2022 • 6 minutes to read
This article explains how to restore an entire file share or specific files from a restore point created by Azure
Backup by using the REST API.
By the end of this article, you'll learn how to perform the following operations using REST API:
View restore points for a backed-up Azure file share.
Restore a full Azure file share.
Restore individual files or folders.
Prerequisites
We assume that you already have a backed-up file share you want to restore. If you don’t, check Backup Azure
file share using REST API to learn how to create one.
For this article, we'll use the following resources:
RecoveryServicesVault : azurefilesvault
Resource group : azurefiles
Storage Account : afsaccount
File Share : azurefiles
GET
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints?api-version=2019-05-13&$filter={$filter}
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/recoveryPoints?api-version=2019-05-13
The recovery point is identified with the {name} field in the response above.
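The URI pattern can be assembled programmatically. The following sketch only builds the string from this article's sample resources; no request is sent. Note how the container and protected item names embed the storage account and file share, separated by semicolons.

```python
# This article's example resources.
subscription = "ef4ab5a7-c2c0-4304-af80-af49f48af3d1"
resource_group = "azurefiles"
vault = "azurefilesvault"
container = "StorageContainer;storage;azurefiles;afsaccount"
protected_item = "AzureFileShare;azurefiles"

# Assemble the recovery-points GET URI piece by piece.
uri = (
    "https://management.azure.com"
    f"/Subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.RecoveryServices"
    f"/vaults/{vault}"
    "/backupFabrics/Azure"
    f"/protectionContainers/{container}"
    f"/protectedItems/{protected_item}"
    "/recoveryPoints?api-version=2019-05-13"
)
print(uri)
```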
POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
The {containerName} and {protectedItemName} values are as set here, and {recoveryPointId} is the {name} field of
the recovery point mentioned above.
POST https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
%3Bazurefiles/recoveryPoints/932886657837421071/restore?api-version=2019-05-13
For the complete list of definitions of the request body and other details, refer to the trigger Restore REST API
document.
Restore to original location
Request body example for restore to original location
The following request body defines properties required to trigger an Azure file share restore:
{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"OriginalLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"FullShareRestore"
}
}
Restore to alternate location
Request body example for restore to alternate location
{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"AlternateLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"FullShareRestore",
"restoreFileSpecs":[
{
"targetFolderPath":"restoredata"
}
],
"targetDetails":{
"name":"azurefiles1",
"targetResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1"
}
}
}
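As a local sanity check on the body above, the same structure can be built as a dictionary and serialized before sending it; this is a sketch using the article's example resources.

```python
import json

# Alternate-location full-share restore request body, mirroring the JSON above.
body = {
    "properties": {
        "objectType": "AzureFileShareRestoreRequest",
        "recoveryType": "AlternateLocation",
        "sourceResourceId": (
            "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1"
            "/resourceGroups/AzureFiles/providers/Microsoft.Storage"
            "/storageAccounts/afsaccount"
        ),
        "copyOptions": "Overwrite",
        "restoreRequestType": "FullShareRestore",
        "restoreFileSpecs": [{"targetFolderPath": "restoredata"}],
        "targetDetails": {
            "name": "azurefiles1",
            "targetResourceId": (
                "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1"
                "/resourceGroups/AzureFiles/providers/Microsoft.Storage"
                "/storageAccounts/afaccount1"
            ),
        },
    }
}
print(json.dumps(body, indent=2))
```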
Response
The triggering of a restore operation is an asynchronous operation. This operation creates another operation
that needs to be tracked separately. It returns two responses: 202 (Accepted) when another operation is created,
and 200 (OK) when that operation completes.
Response example
Once you submit the POST URI to trigger a restore, the initial response is 202 (Accepted) with a Location
header or an Azure-AsyncOperation header.
HTTP/1.1" 202
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/operationResults/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2019-05-13'
'Retry-After': '60'
'Azure-AsyncOperation': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
;azurefiles/operationsStatus/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2019-05-13'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '2426777d-c5ec-44b6-a324-384f8947460c'
'x-ms-client-request-id': '3c743096-47eb-11ea-ae90-0a580af41908, 3c743096-47eb-11ea-ae90-0a580af41908'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1198'
'x-ms-correlation-request-id': '2426777d-c5ec-44b6-a324-384f8947460c'
'x-ms-routing-request-id': 'WESTEUROPE:20200205T074347Z:2426777d-c5ec-44b6-a324-384f8947460c'
'Date': 'Wed, 05 Feb 2020 07:43:47 GMT'
Then track the resulting operation using the location header or the Azure-AsyncOperation header with a GET
command.
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupOpe
rations/68ccfbc1-a64f-4b29-b955-314b5790cfa9?api-version=2016-12-01
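A minimal sketch of picking which header to poll from the 202 response. The preference for Azure-AsyncOperation over Location is an assumption (either URL can be tracked); honor the Retry-After interval between polls.

```python
# Choose the polling URL from the 202 response headers.
def tracking_url(headers: dict) -> str:
    """Prefer Azure-AsyncOperation when present; fall back to Location."""
    return headers.get("Azure-AsyncOperation") or headers["Location"]

# Abbreviated headers in the shape of the response above.
headers = {
    "Location": "https://management.azure.com/.../operationResults/68ccfbc1",
    "Azure-AsyncOperation": "https://management.azure.com/.../operationsStatus/68ccfbc1",
    "Retry-After": "60",
}
print(tracking_url(headers))
```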
Once the operation completes, it returns 200 (OK) with the ID of the resulting restore job in the response body.
HTTP/1.1" 200
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/json'
'Content-Encoding': 'gzip'
'Expires': '-1'
'Vary': 'Accept-Encoding'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'x-ms-client-request-id': '3c743096-47eb-11ea-ae90-0a580af41908, 3c743096-47eb-11ea-ae90-0a580af41908'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'Server': 'Microsoft-IIS/10.0'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-reads': '11998'
'x-ms-correlation-request-id': '41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'x-ms-routing-request-id': 'WESTEUROPE:20200205T074348Z:41ee89b2-3be4-40d8-8ff6-f5592c2571e3'
'Date': 'Wed, 05 Feb 2020 07:43:47 GMT'
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/a7e97e42-4e54-4d4b-b449-26fcf946f42c",
"location": null,
"name": "a7e97e42-4e54-4d4b-b449-26fcf946f42c",
"properties": {
"actionsInfo": [
"Cancellable"
],
"activityId": "3c743096-47eb-11ea-ae90-0a580af41908",
"backupManagementType": "AzureStorage",
"duration": "0:00:01.863098",
"endTime": null,
"entityFriendlyName": "azurefiles",
"errorDetails": null,
"extendedInfo": {
"dynamicErrorMessage": null,
"propertyBag": {},
"tasksList": []
},
"jobType": "AzureStorageJob",
"operation": "Restore",
"startTime": "2020-02-05T07:43:47.144961+00:00",
"status": "InProgress",
"storageAccountName": "afsaccount",
"storageAccountVersion": "Storage"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
}
For alternate location recovery, the response body will be similar to the following:
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/7e0ee41e-6e31-4728-a25c-98ff6b777641",
"location": null,
"name": "7e0ee41e-6e31-4728-a25c-98ff6b777641",
"properties": {
"actionsInfo": [
"Cancellable"
],
"activityId": "6077be6e-483a-11ea-a915-0a580af4ad72",
"backupManagementType": "AzureStorage",
"duration": "0:00:02.171965",
"endTime": null,
"entityFriendlyName": "azurefiles",
"errorDetails": null,
"extendedInfo": {
"dynamicErrorMessage": null,
"propertyBag": {
"Data Transferred (in MB)": "0",
"Job Type": "Recover to an alternate file share",
"Number Of Failed Files": "0",
"Number Of Restored Files": "0",
"Number Of Skipped Files": "0",
"RestoreDestination": "afaccount1/azurefiles1/restoredata",
"Source File Share Name": "azurefiles",
"Source Storage Account Name": "afsaccount",
"Target File Share Name": "azurefiles1",
"Target Storage Account Name": "afaccount1"
},
"tasksList": []
},
"jobType": "AzureStorageJob",
"operation": "Restore",
"startTime": "2020-02-05T17:10:18.106532+00:00",
"status": "InProgress",
"storageAccountName": "afsaccount",
"storageAccountVersion": "ClassicCompute"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupJobs"
}
Since the backup job is a long-running operation, it should be tracked as explained in the monitor jobs using
REST API document.
POST
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/p
rotectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
The {containerName} and {protectedItemName} values are as set here, and {recoveryPointId} is the {name} field of
the recovery point mentioned above.
POST https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFab
rics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare
%3Bazurefiles/recoveryPoints/932886657837421071/restore?api-version=2019-05-13
For the complete list of definitions of the request body and other details, refer to the trigger Restore REST API
document.
Restore to original location for item-level recovery using REST API
The following request body restores the RestoreTest.txt file in the azurefiles file share in the afsaccount
storage account.
Create Request Body
{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"copyOptions":"Overwrite",
"recoveryType":"OriginalLocation",
"restoreFileSpecs":[
{
"fileSpecType":"File",
"path":"RestoreTest.txt",
"targetFolderPath":null
}
],
"restoreRequestType":"ItemLevelRestore",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.storage/storageAccounts/afsaccount",
"targetDetails":null
}
}
Restore to alternate location for item-level recovery using REST API
{
"properties":{
"objectType":"AzureFileShareRestoreRequest",
"recoveryType":"AlternateLocation",
"sourceResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afsaccount",
"copyOptions":"Overwrite",
"restoreRequestType":"ItemLevelRestore",
"restoreFileSpecs":[
{
"path":"Restore/RestoreTest.txt",
"fileSpecType":"File",
"targetFolderPath":"restoredata"
}
],
"targetDetails":{
"name":"azurefiles1",
"targetResourceId":"/subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/afaccount1"
}
}
}
The response should be handled in the same way as explained above for full share restores.
Next steps
Learn how to manage Azure file shares backup using REST API.
Manage Azure file share backups
5/20/2022 • 6 minutes to read
This article describes common tasks for managing and monitoring the Azure file shares that are backed up by
Azure Backup. You'll learn how to do management tasks in Backup center.
Monitor jobs
When you trigger a backup or restore operation, the backup service creates a job for tracking. You can monitor
the progress of all jobs on the Backup Jobs page.
To open the Backup Jobs page:
1. Go to Backup center and select Backup Jobs under the Monitoring section.
NOTE
Because no data is transferred to the vault, the data transferred in MB is 0 for backup jobs corresponding to
Azure Files.
Monitor using Azure Backup reports
Azure Backup provides a reporting solution that uses Azure Monitor logs and Azure workbooks. These
resources help you get rich insights into your backups. You can use these reports to gain visibility into
Azure Files backup items, item-level jobs, and details of active policies. Using the Email Report feature
available in Backup Reports, you can create automated tasks to receive periodic reports via email. Learn how to
configure and view Azure Backup reports.
2. Select Azure Files (Azure Storage) as the datasource type, select the vault under which the policy
should be created, and then click Continue.
3. When the Backup policy pane for Azure File Share opens, specify the policy name.
4. In Backup schedule , select an appropriate frequency for the backups: Daily or Hourly.
Daily : Triggers one backup per day. For daily frequency, select the appropriate values for:
Time : The timestamp when the backup job needs to be triggered.
Time zone : The corresponding time zone for the backup job.
Hourly : Triggers multiple backups per day. For hourly frequency, select the appropriate values for:
Schedule : The time interval (in hours) between the consecutive backups.
Start time : The time when the first backup job of the day needs to be triggered.
Duration : Represents the backup window (in hours), that is, the time span in which the backup
jobs need to be triggered as per the selected schedule.
Time zone : The corresponding time zone for the backup job.
For example, suppose you have a recovery point objective (RPO) requirement of 4 hours and your
working hours are 9 AM to 9 PM. To meet these requirements, the backup schedule configuration
would be:
Schedule: Every 4 hours
Start time: 9 AM
Duration: 12 hours
Based on your selection, the backup job details (the timestamps when the backup jobs would be
triggered) are displayed on the backup policy blade.
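The example schedule above can be sanity-checked with a short calculation. This sketch assumes the window endpoint is inclusive, which matches the 9 PM backup implied by the example.

```python
# Derive the trigger hours implied by an hourly backup schedule.
def trigger_hours(start_hour: int, interval_hours: int, duration_hours: int):
    """Backups fire at the start time, then every interval, within the window."""
    return [
        start_hour + offset
        for offset in range(0, duration_hours + 1, interval_hours)
    ]

# RPO example from above: every 4 hours, starting 9 AM, over a 12-hour window.
print(trigger_hours(9, 4, 12))  # [9, 13, 17, 21] -> 9 AM, 1 PM, 5 PM, 9 PM
```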
5. In Retention range , specify appropriate retention values for backups tagged as daily, weekly, monthly,
or yearly.
6. After defining all attributes of the policy, click Create .
View policy
To view the existing backup policies:
1. Go to Backup center and select Backup policies under the Manage section.
All Backup policies configured across your vault appear.
2. To view policies specific to Azure Files (Azure Storage) , select Azure File Share as the datasource
type.
Modify policy
You can modify a backup policy to change the backup frequency or retention range.
To modify a policy:
1. Go to Backup center and select Backup policies under the Manage section.
All Backup policies configured across your vaults appear.
2. To view policies specific to an Azure file share, select Azure Files (Azure Storage) as the datasource
type.
Click the policy you want to update.
3. The Schedule pane opens. Edit the Backup schedule and Retention range as required, and select
Save .
You'll see an Update in Progress message in the pane. After the policy changes update successfully, you'll
see the message Successfully updated the backup policy.
2. Select the backup item for which you want to stop protection.
3. Select the Stop backup option.
4. In the Stop Backup pane, select Retain Backup Data or Delete Backup Data. Then select Stop
backup.
2. Select the backup item for which you want to resume protection.
3. Select the Resume backup option.
4. The Backup Policy pane opens. Select a policy of your choice to resume backup.
5. After you select a backup policy, select Save .
You'll see an Update in Progress message in the portal. After the backup successfully resumes, you'll see
the message Successfully updated backup policy for the Protected Azure File Share.
2. The Delete Backup Data pane opens. Enter the name of the file share to confirm deletion. Optionally,
provide more information in the Reason or Comments boxes. After you're sure about deleting the
backup data, select Delete .
Unregister a storage account
To protect your file shares in a particular storage account by using a different Recovery Services vault, first stop
protection for all file shares in that storage account. Then unregister the account from the current Recovery
Services vault used for protection.
The following procedure assumes that the protection was stopped for all file shares in the storage account you
want to unregister.
To unregister the storage account:
1. Open the Recovery Services vault where your storage account is registered.
2. On the Overview pane, select the Backup Infrastructure option under the Manage section.
3. The Backup Infrastructure pane opens. Select Storage Accounts under the Azure Storage
Accounts section.
4. After you select Storage Accounts , a list of storage accounts registered with the vault appears.
5. Right-click the storage account you want to unregister, and select Unregister .
Next steps
For more information, see Troubleshoot Azure file shares backup.
Manage Azure file share backups with the Azure CLI
5/20/2022 • 9 minutes to read
The Azure CLI provides a command-line experience for managing Azure resources. It's a great tool for building
custom automation to use Azure resources. This article explains how to perform tasks for managing and
monitoring the Azure file shares that are backed up by Azure Backup. You can also perform these steps with the
Azure portal.
Prerequisites
This article assumes you already have an Azure file share backed up by Azure Backup. If you don't have one, see
Back up Azure file shares with the CLI to configure backup for your file shares. For this article, you use the
following resources:
Resource group : azurefiles
RecoveryServicesVault : azurefilesvault
Storage Account : afsaccount
File Share : azurefiles
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.
Monitor jobs
When you trigger backup or restore operations, the backup service creates a job for tracking. To monitor
completed or currently running jobs, use the az backup job list command. With the CLI, you can also suspend a
currently running job or wait until a job finishes.
The following example displays the status of backup jobs for the azurefilesvault Recovery Services vault:
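The example command isn't reproduced above; as a sketch, it could be composed as follows (assuming the `az backup job list` command and its `--out table` formatting option; the script only builds the string).

```python
# This article's example vault and resource group.
vault, resource_group = "azurefilesvault", "azurefiles"

# Compose the job-listing command for the vault.
command = (
    f"az backup job list --resource-group {resource_group} "
    f"--vault-name {vault} --out table"
)
print(command)
```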
Create policy
You can create a backup policy by executing the az backup policy create command with the following
parameters:
--backup-management-type : AzureStorage
--workload-type : AzureFileShare
--name : Name of the policy
--policy : JSON file with appropriate details for schedule and retention
--resource-group : Resource group of the vault
--vault-name : Name of the vault
Example
az backup policy create --resource-group azurefiles --vault-name azurefilesvault --name schedule20 --backup-
management-type AzureStorage --policy samplepolicy.json --workload-type AzureFileShare
{
"eTag": null,
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPol
icies/schedule20",
"location": null,
"name": "schedule20",
"properties": {
"backupManagementType": "AzureStorage",
"protectedItemsCount": 0,
"retentionPolicy": {
"dailySchedule": {
"retentionDuration": {
"count": 30,
"durationType": "Days"
},
"retentionTimes": [
"2020-01-05T08:00:00+00:00"
]
},
"monthlySchedule": null,
"retentionPolicyType": "LongTermRetentionPolicy",
"weeklySchedule": null,
"yearlySchedule": null
},
"schedulePolicy": {
"schedulePolicyType": "SimpleSchedulePolicy",
"scheduleRunDays": null,
"scheduleRunFrequency": "Daily",
"scheduleRunTimes": [
"2020-01-05T08:00:00+00:00"
],
"scheduleWeeklyFrequency": 0
},
"timeZone": "UTC",
"workLoadType": "AzureFileShare"
},
"resourceGroup": "azurefiles",
"tags": null,
"type": "Microsoft.RecoveryServices/vaults/backupPolicies"
}
Once the policy is created successfully, the command output displays the policy JSON that you passed as a
parameter while executing the command.
You can modify the schedule and retention section of the policy as required.
Example
If you want to retain the backup taken on the first Sunday of every month for two months, update the monthly
schedule as follows:
"monthlySchedule": {
"retentionDuration": {
"count": 2,
"durationType": "Months"
},
"retentionScheduleDaily": null,
"retentionScheduleFormatType": "Weekly",
"retentionScheduleWeekly": {
"daysOfTheWeek": [
"Sunday"
],
"weeksOfTheMonth": [
"First"
]
},
"retentionTimes": [
"2020-01-05T08:00:00+00:00"
]
}
Modify policy
You can modify a backup policy to change backup frequency or retention range by using az backup item set-
policy.
To change the policy, define the following parameters:
--container-name : The name of the storage account that hosts the file share. To retrieve the name or
friendly name of your container, use the az backup container list command.
--name : The name of the file share for which you want to change the policy. To retrieve the name or
friendly name of your backed-up item, use the az backup item list command.
--policy-name : The name of the backup policy you want to set for your file share. You can use az backup
policy list to view all the policies for your vault.
The following example sets the schedule2 backup policy for the azurefiles file share present in the afsaccount
storage account.
az backup item set-policy --policy-name schedule2 --vault-name azurefilesvault --resource-group azurefiles --
container-name "StorageContainer;Storage;AzureFiles;afsaccount" --name "AzureFileShare;azurefiles" --backup-
management-type azurestorage --out table
You can also run the previous command by using the friendly names for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
az backup item set-policy --policy-name schedule2 --vault-name azurefilesvault --resource-group azurefiles --
container-name afsaccount --name azurefiles --backup-management-type azurestorage --workload-type
azurefileshare --out table
Name ResourceGroup
------------------------------------ ---------------
fec6f004-0e35-407f-9928-10a163f123e5 azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your change policy operation. To track the status of the job, use the az backup job show command.
You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your stop protection operation. To track the status of the job, use the az backup job show command.
Stop protection without retaining recovery points
To stop protection without retaining recovery points, use the az backup protection disable command with the
--delete-backup-data option set to true.
The following example stops protection for the azurefiles file share without retaining recovery points.
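The command itself isn't shown above; the following sketch composes a plausible shape for it. The parameter names are assumptions consistent with the friendly-name parameters listed in this article, and `--yes` is assumed to skip the interactive confirmation prompt.

```python
# Compose a stop-protection command that also deletes backup data.
command = (
    "az backup protection disable"
    " --vault-name azurefilesvault --resource-group azurefiles"
    " --container-name afsaccount --item-name azurefiles"
    " --backup-management-type azurestorage --workload-type azurefileshare"
    " --delete-backup-data true --yes --out table"
)
print(command)
```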
You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
You can also run the previous command by using the friendly name for the container and the item by providing
the following two additional parameters:
--backup-management-type : azurestorage
--workload-type : azurefileshare
az backup protection resume --vault-name azurefilesvault --resource-group azurefiles --container-name
afsaccount --item-name azurefiles --workload-type azurefileshare --backup-management-type azurestorage --
policy-name schedule2 --out table
Name ResourceGroup
------------------------------------ ---------------
75115ab0-43b0-4065-8698-55022a234b7f azurefiles
The Name attribute in the output corresponds to the name of the job that's created by the backup service for
your resume protection operation. To track the status of the job, use the az backup job show command.
You can also run the previous command by using the friendly name for the container by providing the following
additional parameter:
--backup-management-type : azurestorage
Next steps
For more information, see Troubleshoot Azure file shares backup.
Manage Azure file share backups with PowerShell
5/20/2022 • 2 minutes to read
This article describes how to use Azure PowerShell to manage and monitor the Azure file shares that are backed
up by the Azure Backup service.
WARNING
Make sure you're using at least the minimum required version of the Az.RecoveryServices module (2.6.0) for AFS backups.
For more details, refer to the section outlining the requirement for this change.
$job | fl
IsCancellable : False
IsRetriable : False
ErrorDetails :
{Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.AzureFileShareJobErrorInfo}
ActivityId : 00000000-5b71-4d73-9465-8a4a91f13a36
JobId : 00000000-6c46-496e-980a-3740ccb2ad75
Operation : Restore
Status : Failed
WorkloadName : testAFS
StartTime : 12/10/2018 9:56:38 AM
EndTime : 12/10/2018 11:03:03 AM
Duration : 01:06:24.4660027
BackupManagementType : AzureStorage
$job.ErrorDetails
The JobId attribute in the output corresponds to the ID of the job that's created by the backup service for
your "stop protection" operation. To track the status of the job, use the Get-AzRecoveryServicesBackupJob
cmdlet.
Stop protection without retaining recovery points
To stop protection without retaining recovery points, use the Disable-AzRecoveryServicesBackupProtection
cmdlet and add the -RemoveRecoveryPoints parameter.
The following example stops protection for the afsfileshare file share without retaining recovery points:
Next steps
Learn about managing Azure file share backups in the Azure portal.
Manage Azure File share backup with REST API
5/20/2022 • 3 minutes to read
This article explains how to perform tasks for managing and monitoring the Azure file shares that are backed up
by Azure Backup.
Monitor jobs
The Azure Backup service triggers jobs that run in the background. This includes scenarios such as triggering
backup, restore operations, and disabling backup. These jobs can be tracked using their IDs.
Fetch job information from operations
An operation such as triggering backup will always return a jobID in the response.
For example, the final response of a trigger backup REST API operation is as follows:
{
"id": "c3a52d1d-0853-4211-8141-477c65740264",
"name": "c3a52d1d-0853-4211-8141-477c65740264",
"status": "Succeeded",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:10:48.296012Z",
"properties": {
"objectType": "OperationStatusJobExtendedInfo",
"jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"
}
}
The Azure file share backup job is identified by the jobId field and can be tracked as mentioned here using a
GET request.
Tracking the job
GET
https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.RecoveryServices/vaults/{vaultName}/backupJobs/{jobName}?api-version=2019-05-13
The {jobName} is the jobId mentioned above. The response is always "200 OK", with the status field indicating
the current status of the job. Once it's "Completed" or "CompletedWithWarnings", the extendedInfo section
reveals more details about the job.
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b?api-version=2019-05-13
Response
200 OK JobResource OK
Response example
Once the GET URI is submitted, a 200 response is returned.
HTTP/1.1" 200
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Transfer-Encoding': 'chunked'
'Content-Type': 'application/json'
'Content-Encoding': 'gzip'
'Expires': '-1'
'Vary': 'Accept-Encoding'
'Server': 'Microsoft-IIS/10.0, Microsoft-IIS/10.0'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': 'dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'x-ms-client-request-id': 'a644712a-4895-11ea-ba57-0a580af42708, a644712a-4895-11ea-ba57-0a580af42708'
'X-Powered-By': 'ASP.NET'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'x-ms-ratelimit-remaining-subscription-reads': '11999'
'x-ms-correlation-request-id': 'dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T040341Z:dba43f00-5cdb-43b1-a9ec-23e419db67c5'
'Date': 'Thu, 06 Feb 2020 04:03:40 GMT'
{
"id": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-
af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupJob
s/e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b",
"name": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b",
"type": "Microsoft.RecoveryServices/vaults/backupJobs",
"properties": {
"jobType": "AzureStorageJob",
"duration": "00:00:43.1809140",
"storageAccountName": "testvault2",
"storageAccountVersion": "Storage",
"extendedInfo": {
"tasksList": [],
"propertyBag": {
"File Share Name": "testshare",
"Storage Account Name": "testvault2",
"Policy Name": "schedule1"
}
},
"entityFriendlyName": "testshare",
"backupManagementType": "AzureStorage",
"operation": "ConfigureBackup",
"status": "Completed",
"startTime": "2020-02-03T18:10:48.296012Z",
"endTime": "2020-02-03T18:11:31.476926Z",
"activityId": "3677cec0-942d-4eac-921f-8f3c873140d7"
}
}
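The job-tracking flow above can be sketched as two small helpers: one pulls the job ID out of the trigger-operation response, and one decides when a tracked job no longer needs polling. "Failed" and "Cancelled" are assumed terminal states alongside the two mentioned above.

```python
# Status values after which polling can stop; the last two are assumptions.
TERMINAL_STATES = {"Completed", "CompletedWithWarnings", "Failed", "Cancelled"}

def job_id_from_trigger(response: dict) -> str:
    """The final trigger response carries the job ID in its extended info."""
    return response["properties"]["jobId"]

def is_terminal(job: dict) -> bool:
    """True once the tracked backupJobs resource has reached a final status."""
    return job["properties"]["status"] in TERMINAL_STATES

# Abbreviated versions of the responses shown above.
trigger = {"properties": {"objectType": "OperationStatusJobExtendedInfo",
                          "jobId": "e2ca2cf4-2eb9-4d4b-b16a-8e592d2a658b"}}
job = {"properties": {"status": "Completed", "operation": "ConfigureBackup"}}
print(job_id_from_trigger(trigger))
print(is_terminal(job))  # True
```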
Modify policy
To change the policy with which the file share is protected, use the same request format as for enabling
protection. Provide the new policy ID in the request body and submit the request.
For example, to change the protection policy of testshare from schedule1 to schedule2, provide the schedule2 ID
in the request body:
{
  "properties": {
    "protectedItemType": "AzureFileShareProtectedItem",
    "sourceResourceId": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
    "policyId": "/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupPolicies/schedule2"
  }
}
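The request body above can also be assembled programmatically. The Python sketch below builds the same JSON shape; the function name is a hypothetical helper for illustration, not part of any Azure SDK:

```python
def modify_policy_body(source_resource_id, policy_id):
    """Build the request body for switching a protected file share to a
    new backup policy (same shape as the enable-protection request)."""
    return {
        "properties": {
            "protectedItemType": "AzureFileShareProtectedItem",
            "sourceResourceId": source_resource_id,
            # To change the policy, only the policy ID differs from the
            # original enable-protection request.
            "policyId": policy_id,
        }
    }
```

Submit the resulting dictionary as the JSON body of the same PUT request used to enable protection.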
Stop protection
To stop protecting a file share while retaining the data that's already backed up, remove the policy ID from the request body and set protectionState to ProtectionStopped:
{
  "properties": {
    "protectedItemType": "AzureFileShareProtectedItem",
    "sourceResourceId": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
    "policyId": "",
    "protectionState": "ProtectionStopped"
  }
}
Sample response
Stopping protection for a file share is an asynchronous operation. The operation creates another operation that
needs to be tracked. It returns two responses: 202 (Accepted) when another operation is created, and 200 when
that operation completes.
Response header when operation is successfully accepted:
HTTP/1.1 202
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Expires': '-1'
'Location': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare;testshare/operationResults/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2019-05-13'
'Retry-After': '60'
'Azure-AsyncOperation': 'https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;testvault2/protectedItems/AzureFileShare;testshare/operationsStatus/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2019-05-13'
'X-Content-Type-Options': 'nosniff'
'x-ms-request-id': '3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'x-ms-client-request-id': 'd331c15e-48ab-11ea-84c0-0a580af46a50, d331c15e-48ab-11ea-84c0-0a580af46a50'
'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
'X-Powered-By': 'ASP.NET'
'x-ms-ratelimit-remaining-subscription-writes': '1199'
'x-ms-correlation-request-id': '3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'x-ms-routing-request-id': 'WESTEUROPE:20200206T064224Z:3895e8a1-e4b9-4da5-bec7-2cf0266405f8'
'Date': 'Thu, 06 Feb 2020 06:42:24 GMT'
'Content-Length': '0'
Then track the resulting operation using the Location header or Azure-AsyncOperation header with a GET command:
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupoperations/b300922a-ad9c-4181-b4cd-d42ea780ad77?api-version=2016-12-01
Response body
{
  "id": "b300922a-ad9c-4181-b4cd-d42ea780ad77",
  "name": "b300922a-ad9c-4181-b4cd-d42ea780ad77",
  "status": "Succeeded",
  "startTime": "2020-02-06T06:42:24.4001299Z",
  "endTime": "2020-02-06T06:42:24.4001299Z",
  "properties": {
    "objectType": "OperationStatusJobExtendedInfo",
    "jobId": "7816fca8-d5be-4c41-b911-1bbd922e5826"
  }
}
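The 202-then-poll pattern above (and the equivalent one for deleting protection) can be sketched generically. In this hypothetical Python helper, `fetch_status` stands in for an authenticated GET against the Azure-AsyncOperation or Location URL:

```python
import time

def wait_for_operation(fetch_status, retry_after=60, timeout=3600):
    """Poll a tracked Azure operation until it reaches a terminal state.
    fetch_status is a stand-in for an authenticated GET that returns the
    parsed JSON status document; this is an illustrative sketch, not an
    Azure SDK function."""
    waited = 0
    while waited <= timeout:
        status = fetch_status()
        if status.get("status") in ("Succeeded", "Failed", "Canceled"):
            return status
        time.sleep(retry_after)  # honor the Retry-After response header
        waited += retry_after
    raise TimeoutError("operation did not complete in time")
```

The interval should follow the Retry-After header returned with the 202 response (60 seconds in the sample above).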
Delete protection
To delete protection for a file share, use this DELETE request format:
DELETE https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13
For example:
DELETE https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;Storage;AzureFiles;testvault2/protectedItems/azurefileshare;testshare?api-version=2016-12-01
Responses
Delete protection is an asynchronous operation. The operation creates another operation that needs to be
tracked separately. It returns two responses: 202 (Accepted) when another operation is created and 204
(NoContent) when that operation completes.
Next steps
Learn how to troubleshoot problems while configuring backup for Azure File shares.
Monitoring Azure Files
5/20/2022 • 25 minutes to read
When you have critical applications and business processes that rely on Azure resources, you want to monitor
those resources for their availability, performance, and operation. This article describes the monitoring data
that's generated by Azure Files and how you can use the features of Azure Monitor to analyze alerts on this data.
Applies to
File share type: SMB, NFS
Monitor overview
The Overview page in the Azure portal for each Azure Files resource includes a brief view of the resource
usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring
data is available. Some of this data is collected automatically and is available for analysis as soon as you create
the resource. You can enable additional types of data collection with some configuration.
Monitoring data
Azure Files collects the same kinds of monitoring data as other Azure resources, which are described in
Monitoring data from Azure resources.
See the Azure Files monitoring data reference for detailed information on the metrics and logs created by
Azure Files.
Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor
doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you
need to migrate to an Azure Resource Manager storage account. See Migrate to Azure Resource Manager.
To get the list of SMB and REST operations that are logged, see Storage logged operations and status messages
and Azure Files monitoring data reference.
TIP
You can also create a diagnostic setting by using an Azure Resource Manager template or by using a policy definition. A
policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
This section doesn't describe templates or policy definitions.
To view an Azure Resource Manager template that creates a diagnostic setting, see Diagnostic setting for Azure
Storage.
To learn how to create a diagnostic setting by using a policy definition, see Azure Policy built-in definitions for
Azure Storage.
Azure portal
PowerShell
Azure CLI
4. Choose file as the type of storage that you want to enable logs for.
5. Click Add diagnostic setting.
The Diagnostic settings page appears.
6. In the Name field of the page, enter a name for this Resource log setting. Then, select which operations
you want logged (read, write, and delete operations), and where you want the logs to be sent.
Archive logs to a storage account
If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the
storage account. For specific pricing, see the Platform Logs section of the Azure Monitor pricing page. You
can't send logs to the same storage account that you are monitoring with this setting. This would lead to
recursive logs in which a log entry describes the writing of another log entry. You must create an account or use
another existing account to store log information.
1. Select the Archive to a storage account checkbox, and then click the Configure button.
2. In the Storage account drop-down list, select the storage account that you want to archive your logs to,
click the OK button, and then click the Save button.
IMPORTANT
You can't set a retention policy. However, you can manage the retention policy of a log container by defining a
lifecycle management policy. To learn how, see Optimize costs by automating Azure Blob Storage access tiers.
NOTE
Before you choose a storage account as the export destination, see Archive Azure resource logs to understand
prerequisites on the storage account.
Analyzing metrics
For a list of all Azure Monitor supported metrics, which includes Azure Files, see Azure Monitor supported metrics.
Azure portal
PowerShell
Azure CLI
You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer.
Open Metrics Explorer by choosing Metrics from the Azure Monitor menu. For details on using this tool, see
Getting started with Azure Metrics Explorer.
For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete
list of the dimensions that Azure Storage supports, see Metrics dimensions. Metrics for Azure Files are in these
namespaces:
Microsoft.Storage/storageAccounts
Microsoft.Storage/storageAccounts/fileServices
Microsoft.Azure.Management.Monitor.Models.Response Response;

Response = readOnlyClient.Metrics.List(
    resourceUri: resourceId,
    timespan: timeSpan,
    interval: System.TimeSpan.FromHours(1),
    metricnames: "BlobCapacity",
    odataQuery: odataFilterMetrics,
    aggregation: "Average",
    resultType: ResultType.Data,
    cancellationToken: CancellationToken.None);
Analyzing logs
You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic
queries.
To get the list of SMB and REST operations that are logged, see Storage logged operations and status messages
and Azure Files monitoring data reference.
Log entries are created only if there are requests made against the service endpoint. For example, if a storage
account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure
File service are created. Azure Storage logs contain detailed information about successful and failed requests to
a storage service. This information can be used to monitor individual requests and to diagnose issues with a
storage service. Requests are logged on a best-effort basis.
Log authenticated requests
The following types of authenticated requests are logged:
Successful requests
Failed requests, including timeout, throttling, network, authorization, and other errors
Requests that use Kerberos, NTLM or shared access signature (SAS), including failed and successful requests
Requests to analytics data (classic log data in the $logs container and classic metric data in the $metrics
tables)
Requests made by the Azure Files service itself, such as log creation or deletion, aren't logged. For a full list of
the SMB and REST requests that are logged, see Storage logged operations and status messages and Azure Files
monitoring data reference.
Accessing logs in a storage account
Logs appear as blobs stored to a container in the target storage account. Data is collected and stored inside a
single blob as a line-delimited JSON payload. The name of the blob follows this naming convention:
https://<destination-storage-account>.blob.core.windows.net/insights-logs-<storage-operation>/resourceId=/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<source-storage-account>/fileServices/default/y=<year>/m=<month>/d=<day>/h=<hour>/m=<minute>/PT1H.json
Here's an example:
https://mylogstorageaccount.blob.core.windows.net/insights-logs-storagewrite/resourceId=/subscriptions/208841be-a4v3-4234-9450-08b90c09f4/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount/fileServices/default/y=2019/m=07/d=30/h=23/m=12
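As an illustration of this naming convention, the following hypothetical Python helper (not part of any Azure SDK) extracts the source storage account and the hour window from such a blob path:

```python
import re

def parse_log_blob_path(path):
    """Pull the source storage account and the time window out of an
    insights-logs blob path that follows the documented layout.
    Illustrative helper only; segment names match the convention above."""
    pattern = (r"/storageAccounts/(?P<account>[^/]+)/fileServices/default"
               r"/y=(?P<year>\d+)/m=(?P<month>\d+)/d=(?P<day>\d+)"
               r"/h=(?P<hour>\d+)/m=(?P<minute>\d+)/PT1H\.json$")
    m = re.search(pattern, path)
    if m is None:
        raise ValueError("path does not match the insights-logs layout")
    g = m.groupdict()
    # Note the two m= segments: the first is the month, the second the minute.
    window = "{year}-{month}-{day} {hour}:{minute}".format(**g)
    return g["account"], window
```

Each PT1H.json blob holds one hour of line-delimited JSON log records for the named source account.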
You can access and read log data that's sent to your event hub by using security information and event
management and monitoring tools. For more information, see What can I do with the monitoring data being
sent to my event hub?.
Accessing logs in a Log Analytics workspace
You can access logs sent to a Log Analytics workspace by using Azure Monitor log queries. Data is stored in the
StorageFileLogs table.
For more information, see Log Analytics tutorial.
Sample Kusto queries
Here are some queries that you can enter in the Log search bar to help you monitor your Azure file shares. These
queries use the Kusto Query Language (KQL).
IMPORTANT
When you select Logs from the storage account resource group menu, Log Analytics is opened with the query scope set
to the current resource group. This means that log queries will only include data from that resource group. If you want to
run a query that includes data from other resources or data from other Azure services, select Logs from the Azure
Monitor menu. See Log query scope and time range in Azure Monitor Log Analytics for details.
Use these queries to help you monitor your Azure file shares:
View SMB errors over the last week
StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d) and StatusCode contains "-"
| sort by StatusCode
Create a pie chart of SMB operations over the last week
StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d)
| summarize count() by OperationName
| sort by count_ desc
| render piechart
Create a pie chart of HTTPS operations over the last week
StorageFileLogs
| where Protocol == "HTTPS" and TimeGenerated >= ago(7d)
| summarize count() by OperationName
| sort by count_ desc
| render piechart
To view the list of column names and descriptions for Azure Files, see StorageFileLogs.
For more information on how to write queries, see Log Analytics tutorial.
Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They
allow you to identify and address issues in your system before your customers notice them. You can set alerts
on metrics, logs, and the activity log.
The following table lists some example scenarios to monitor and the proper metric to use for the alert:
Scenario: File share egress has exceeded 500 GiB in one day.
Metric: Egress
Dimension name: FileShare (premium file share only)
NOTE
If you create an alert and it's too noisy, adjust the threshold value and alert logic.
NOTE
If the response types are not listed in the Dimension values drop-down, this means the resource has not been
throttled. To add the dimension values, next to the Dimension values drop-down list, select Add custom
value, enter the response type (for example, SuccessWithThrottling), select OK, and then repeat these steps to
add all applicable response types for your file share.
8. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #10.
NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Throttling alerts for standard file shares will be triggered if any
file share within the storage account is throttled, and the alert will not identify which file share was throttled. Since
per-share metrics are not available for standard file shares, the recommendation is to have one file share per
storage account.
9. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
10. Define the alert parameters (threshold value, operator, aggregation granularity, and frequency of
evaluation) and click Done.
TIP
If you are using a static threshold, the metric chart can help determine a reasonable threshold value if the file
share is currently being throttled. If you are using a dynamic threshold, the metric chart will display the calculated
thresholds based on recent data.
11. Click Add action groups to add an action group (email, SMS, etc.) to the alert, either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name, Description, and Severity.
13. Click Create alert rule to create the alert.
How to create an alert if the Azure file share size is 80% of capacity
1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account, and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the File Capacity metric.
6. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #8.
NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Alerts for standard file shares are based on all file shares in the
storage account. Since per-share metrics are not available for standard file shares, the recommendation is to have
one file share per storage account.
7. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
8. Enter the Threshold value in bytes. For example, if the file share size is 100 TiB and you want to receive
an alert when the file share size is 80% of capacity, the threshold value in bytes is 87960930222080.
9. Define the rest of the alert parameters (aggregation granularity and frequency of evaluation) and click
Done.
10. Click Add action groups to add an action group (email, SMS, etc.) to the alert, either by selecting an
existing action group or creating a new action group.
11. Fill in the Alert details like Alert rule name, Description, and Severity.
12. Click Create alert rule to create the alert.
How to create an alert if the Azure file share egress has exceeded 500 GiB in a day
1. Go to your storage account in the Azure portal.
2. In the Monitoring section, click Alerts and then click + New alert rule.
3. Click Edit resource, select the File resource type for the storage account, and then click Done. For
example, if the storage account name is contoso, select the contoso/file resource.
4. Click Add condition to add a condition.
5. You will see a list of signals supported for the storage account; select the Egress metric.
6. For premium file shares, click the Dimension name drop-down and select File Share. For standard
file shares, skip to step #8.
NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Alerts for standard file shares are based on all file shares in the
storage account. Since per-share metrics are not available for standard file shares, the recommendation is to have
one file share per storage account.
7. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
8. Enter 536870912000 bytes for Threshold value.
9. Click the Aggregation granularity drop-down and select 24 hours.
10. Select the Frequency of evaluation and click Done.
11. Click Add action groups to add an action group (email, SMS, etc.) to the alert, either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name, Description, and Severity.
13. Click Create alert rule to create the alert.
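The threshold in step 8 is simply 500 GiB expressed in bytes, assuming binary units (1 GiB = 2^30 bytes):

```python
GIB = 2 ** 30  # one binary gibibyte in bytes

# 500 GiB egress threshold, matching step 8 above:
egress_threshold_bytes = 500 * GIB
print(egress_threshold_bytes)  # 536870912000
```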
Next steps
Azure Files monitoring data reference
Monitor Azure resources with Azure Monitor
Azure Storage metrics migration
Planning for an Azure Files deployment
How to deploy Azure Files
Troubleshoot Azure Files on Windows
Troubleshoot Azure Files on Linux
Migrate to Azure file shares
5/20/2022 • 7 minutes to read
This article covers the basic aspects of a migration to Azure file shares.
This article contains migration basics and a table of migration guides. These guides help you move your files
into Azure file shares. The guides are organized based on where your data is and what deployment model
(cloud-only or hybrid) you're moving to.
Migration basics
Azure has multiple available types of cloud storage. A fundamental aspect of file migrations to Azure is
determining which Azure storage option is right for your data.
Azure file shares are suitable for general-purpose file data. This data includes anything you use an on-premises
SMB or NFS share for. With Azure File Sync, you can cache the contents of several Azure file shares on servers
running Windows Server on-premises.
For an app that currently runs on an on-premises server, storing files in an Azure file share might be a good
choice. You can move the app to Azure and use Azure file shares as shared storage. You can also consider Azure
Disks for this scenario.
Some cloud apps don't depend on SMB or on machine-local data access or shared access. For those apps, object
storage like Azure blobs is often the best choice.
The key in any migration is to capture all the applicable file fidelity when moving your files from their current
storage location to Azure. How much fidelity the Azure storage option supports and how much your scenario
requires also helps you pick the right Azure storage. General-purpose file data traditionally depends on file
metadata. App data might not.
Here are the two basic components of a file:
Data stream : The data stream of a file stores the file content.
File metadata : The file metadata has these subcomponents:
File attributes like read-only
File permissions, which can be referred to as NTFS permissions or file and folder ACLs
Timestamps, most notably the creation and last-modified timestamps
An alternative data stream, which is a space to store larger amounts of nonstandard properties
File fidelity in a migration can be defined as the ability to:
Store all applicable file information on the source.
Transfer files with the migration tool.
Store files in the target storage of the migration.
Ultimately, the target for migration guides on this page is one or more Azure file shares. Consider this list of
features / file fidelity that Azure file shares don't support.
To ensure your migration proceeds smoothly, identify the best copy tool for your needs and match a storage
target to your source.
Taking the previous information into account, you can see that the target storage for general-purpose files in
Azure is Azure file shares.
Unlike object storage in Azure blobs, an Azure file share can natively store file metadata. Azure file shares also
preserve the file and folder hierarchy, attributes, and permissions. NTFS permissions can be stored on files and
folders just as they are on-premises.
A user of Active Directory, their on-premises domain controller, can natively access an Azure file share.
So can a user of Azure Active Directory Domain Services (Azure AD DS). Each uses their current identity to get
access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to
an on-premises file share.
The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure
file share. It's preserved on-premises when Azure File Sync is used.
Learn more about on-premises Active Directory authentication and Azure AD DS authentication for Azure file
shares.
Migration guides
The following table lists detailed migration guides.
How to use the table:
1. Locate the row for the source system your files are currently stored on.
2. Choose one of these targets:
A hybrid deployment using Azure File Sync to cache the content of Azure file shares on-premises
Azure file shares in the cloud
Select the target column that matches your choice.
3. Within the intersection of source and target, a table cell lists available migration scenarios. Select one to
directly link to the detailed migration guide.
A scenario without a link doesn't yet have a published migration guide. Check this table occasionally for updates.
New guides will be published when they're available.
Source: Windows Server 2012 R2 and later
Target, hybrid deployment: Azure File Sync; Azure File Sync and Azure DataBox
Target, cloud-only deployment: Via RoboCopy to a mounted Azure file share; Via Azure File Sync

Source: Windows Server 2012 and earlier
Target, hybrid deployment: Via DataBox and Azure File Sync to recent server OS; Via Storage Migration Service to recent server with Azure File Sync, then upload
Target, cloud-only deployment: Via Storage Migration Service to recent server with Azure File Sync; Via RoboCopy to a mounted Azure file share

Source: Network-attached storage (NAS)
Target, hybrid deployment: Via Azure File Sync upload; Via DataBox + Azure File Sync
Target, cloud-only deployment: Via DataBox; Via RoboCopy to a mounted Azure file share

Source: Linux / Samba
Target, hybrid deployment: Azure File Sync and RoboCopy
Target, cloud-only deployment: Via RoboCopy to a mounted Azure file share

Source: Microsoft Azure StorSimple 8100 or 8600 series appliances
Target, hybrid deployment: Via dedicated data migration cloud service
Target, cloud-only deployment: Via dedicated data migration cloud service
Migration toolbox
File -copy tools
There are several file-copy tools available from Microsoft and others. To select the right tool for your migration
scenario, you must consider these fundamental questions:
Does the tool support the source and target locations for your file copy?
Does the tool support your network path or available protocols (such as REST, SMB, or NFS) between the
source and target storage locations?
Does the tool preserve the necessary file fidelity supported by your source and target locations?
In some cases, your target storage doesn't support the same fidelity as your source. If the target storage
is sufficient for your needs, the tool must match only the target's file-fidelity capabilities.
Does the tool have features that let it fit into your migration strategy?
For example, consider whether the tool lets you minimize your downtime.
When a tool supports an option to mirror a source to a target, you can often run it multiple times on the
same source and target while the source stays accessible.
The first time you run the tool, it copies the bulk of the data. This initial run might last a while. It often
lasts longer than you want for taking the data source offline for your business processes.
By mirroring a source to a target (as with robocopy /MIR ), you can run the tool again on that same
source and target. The run is much faster because it needs to transport only source changes that occur
after the previous run. Rerunning a copy tool this way can reduce downtime significantly.
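As a rough illustration of why mirrored reruns are fast, here is a simplified Python sketch of the mirror idea. It is not a substitute for robocopy /MIR (it does not copy ACLs or alternate data streams, for example); it only shows that a second pass touches nothing that is already up to date:

```python
import os
import shutil

def mirror(src, dst):
    """Minimal mirror sketch: copy new or changed files from src to dst
    and delete extras in dst. Returns the number of files copied, so a
    rerun over an unchanged source reports 0."""
    copied = 0
    os.makedirs(dst, exist_ok=True)
    src_names = set(os.listdir(src))
    for name in src_names:
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isdir(s):
            copied += mirror(s, d)
        elif (not os.path.exists(d)
              or os.path.getmtime(s) != os.path.getmtime(d)
              or os.path.getsize(s) != os.path.getsize(d)):
            shutil.copy2(s, d)  # copy2 preserves timestamps
            copied += 1
    # Mirror semantics: anything in dst that no longer exists in src is removed.
    for name in set(os.listdir(dst)) - src_names:
        p = os.path.join(dst, name)
        shutil.rmtree(p) if os.path.isdir(p) else os.remove(p)
    return copied
```

Because the first run transports the bulk of the data and later runs transport only changes, the final cut-over run fits into a short downtime window.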
The following table classifies Microsoft tools and their current suitability for Azure file shares:
Tool: Data Box (including the data copy service to load files onto the device)
Suitability for Azure file shares: Supported (Data Box Disks does not support large file shares).
File fidelity: Data Box and Data Box Heavy fully support metadata. Data Box Disks does not preserve file metadata.

Tool: Azure Storage Explorer, latest version
Suitability for Azure file shares: Supported but not recommended.
File fidelity: Loses most file fidelity, like ACLs. Supports timestamps.
Next steps
1. Create a plan for which deployment of Azure file shares (cloud-only or hybrid) you want.
2. Review the list of available migration guides to find the detailed guide that matches your source and
deployment of Azure file shares.
More information about the Azure Files technologies mentioned in this article:
Azure file share overview
Planning for an Azure File Sync deployment
Azure File Sync: Cloud tiering
Use RoboCopy to migrate to Azure file shares
5/20/2022 • 26 minutes to read
This migration article describes the use of RoboCopy to move or migrate files to an Azure file share. RoboCopy
is a trusted and well-known file copy utility with a feature set that makes it well suited for migrations. It uses the
SMB protocol, which makes it broadly applicable to any source and target combination that supports SMB.
Data sources: Any source supporting the SMB protocol, such as Network Attached Storage (NAS), Windows
or Linux servers, another Azure file share and many more
Migration route: From source storage ⇒ Windows machine with RoboCopy ⇒ Azure file share
There are many different migration routes for different source and deployment combinations. Look through the
table of migration guides to find the migration that best suits your needs.
Applies to
File share type: SMB, NFS
Migration goals
The goal is to move the data from existing file share locations to Azure. In Azure, you'll store your data in native
Azure file shares that you can use without needing a Windows Server. This migration needs to be done in a way
that guarantees the integrity of the production data and its availability during the migration. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
Migration overview
The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares.
Furthermore, you'll configure networking, consider a DFS Namespace deployment (DFS-N) or update your
existing one. Once it's time for the actual data copy, you'll need to consider repeated, differential RoboCopy runs
to minimize downtime, and finally, cut-over your users to the newly created Azure file shares. The following
sections describe the phases of the migration process in detail.
TIP
If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
A structured approach to a deployment map
Before you deploy cloud storage in a later step, it's important to create a map between on-premises folders and
Azure file shares. This mapping will inform how many and which Azure File Sync sync group resources you'll
provision. A sync group ties the Azure file share and the folder on your server together and establishes a sync
connection.
To decide how many Azure file shares you need, review the following limits and best practices. Doing so will
help you optimize your map.
A server on which the Azure File Sync agent is installed can sync with up to 30 Azure file shares.
An Azure file share is deployed in a storage account. That arrangement makes the storage account a scale
target for performance numbers like IOPS and throughput.
One standard Azure file share can theoretically saturate the maximum performance that a storage
account can deliver. If you place multiple shares in a single storage account, you're creating a shared pool
of IOPS and throughput for these shares. If you plan to only attach Azure File Sync to these file shares,
grouping several Azure file shares into the same storage account won't create a problem. Review the
Azure file share performance targets for deeper insight into the relevant metrics. These limitations don't
apply to premium storage, where performance is explicitly provisioned and guaranteed for each share.
If you plan to lift an app to Azure that will use the Azure file share natively, you might need more
performance from your Azure file share. If this type of use is a possibility, even in the future, it's best to
create a single standard Azure file share in its own storage account.
There's a limit of 250 storage accounts per subscription per Azure region.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two Azure file shares instead of one. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
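As a quick sketch of this mapping exercise, the check below groups hypothetical on-premises shares into target Azure file shares and flags plans that exceed the 30-shares-per-server sync limit or the recommended item ceiling. All share names and item counts are illustrative assumptions, not values from your environment.

```python
# Sketch: validate a share-to-Azure-file-share mapping plan.
# Share names and item counts are hypothetical examples.

MAX_SYNCS_PER_SERVER = 30                  # Azure File Sync shares per server
RECOMMENDED_ITEMS_PER_SHARE = 30_000_000   # best-practice ceiling per share

# on-premises share -> (target Azure file share, estimated item count)
mapping = {
    r"D:\Shares\HR-Payroll":  ("hr-data", 2_000_000),
    r"D:\Shares\HR-Benefits": ("hr-data", 500_000),
    r"E:\Shares\Engineering": ("eng-data", 18_000_000),
}

# Aggregate estimated item counts per target Azure file share.
items_per_share = {}
for _source, (target, items) in mapping.items():
    items_per_share[target] = items_per_share.get(target, 0) + items

assert len(items_per_share) <= MAX_SYNCS_PER_SERVER, "too many shares for one server"
for share, items in items_per_share.items():
    if items > RECOMMENDED_ITEMS_PER_SHARE:
        print(f"{share}: {items:,} items - consider splitting this share")
```

A spreadsheet works just as well; the point is simply to record each source path, its target share, and an item estimate in one place before provisioning.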
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
Once you are ready, review the Use an Azure file share with Windows how-to article. Then mount the Azure file
share you want to start the RoboCopy for.
Phase 4: RoboCopy
The following RoboCopy command will copy only the differences (updated files and folders) from your source
storage to your Azure file share.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SWITCH | MEANING
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In this case, accepting the file wasn't copied and
catching it in one of your planned, subsequent Robocopy
runs, may succeed in eventually copying the file successfully.
That helps the current run to finish faster without being
prolonged by many retries that ultimately end up in a
majority of copy failures due to files still open past the retry
timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
(NTFS ACLs), O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
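Two of the switch values above lend themselves to quick back-of-the-envelope arithmetic: the wall-clock overhead that retries of permanently failing files add (/R:n with /W:1), and the default /LFSM floor of 10 percent of the destination volume. The sketch below shows both calculations; the file counts and volume size are illustrative assumptions.

```python
# Sketch: estimate the extra wall-clock time retries can add, and the
# default /LFSM floor. All input numbers are illustrative assumptions.

def retry_overhead_seconds(failing_files: int, retries: int, wait_s: int) -> int:
    """Worst-case extra waiting added by files that never copy: each
    permanently failing file waits `wait_s` seconds before each retry."""
    return failing_files * retries * wait_s

def lfsm_default_floor_gib(volume_size_gib: float) -> float:
    """With /LFSM and no explicit value, the floor is 10% of the volume size."""
    return volume_size_gib * 0.10

# With /R:2 /W:1, 1,000 files held open add roughly 2,000 s of waiting.
print(retry_overhead_seconds(1_000, retries=2, wait_s=1))  # 2000
print(lfsm_default_floor_gib(2_000))                       # 200.0 (GiB)
```

This is why the table recommends low retry counts for in-use files: the retries rarely succeed, and their cost scales linearly with the number of open files.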
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure it's at the latest patch level or that at
least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
TIP
Check out the Troubleshooting section if RoboCopy is impacting your production environment, reports lots of errors, or isn't
progressing as fast as expected.
While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolizing resources on both I/O-constrained
devices and crowded network links.
/IPG:n can't be used to throttle the network precisely to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy relies entirely on the SMB protocol for all networking, which is why it can't shape network
throughput itself; it can only slow down its own use of the network.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
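To get a feel for why /IPG is a coarse control rather than a precise throttle, the rough model below bounds single-stream throughput by adding a fixed gap after each packet. The 64 KiB packet size is an assumption for illustration; actual SMB transfer sizes vary, so treat the output as an order-of-magnitude estimate only.

```python
# Sketch: rough upper bound on a single RoboCopy stream's throughput with
# /IPG:n. The 64 KiB packet size is an assumption; real SMB sizes vary.

def ipg_throughput_mbps(link_mbps: float, gap_ms: float,
                        packet_kib: float = 64) -> float:
    packet_bits = packet_kib * 1024 * 8
    transfer_s = packet_bits / (link_mbps * 1_000_000)  # time on the wire
    # Each packet pays the wire time plus the inter-packet gap.
    return packet_bits / (transfer_s + gap_ms / 1000) / 1_000_000

# On a 1 Gbps link, /IPG:20 caps a single stream far below line rate.
print(round(ipg_throughput_mbps(1000, 20), 1))
```

The model also shows why the same gap value throttles a fast link proportionally harder than a slow one: the gap dominates once the per-packet wire time becomes small.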
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file is
evaluated during the initial copy and during catch-up copies, such as repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of migrated files.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.
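The sizing guidance above can be condensed into two small helpers: over-provision threads for the network-bound initial run, and match the hardware thread count (minus any reserved cores) for scan-bound catch-up runs. The core counts, the two-threads-per-core figure, and the 2x over-subscription factor are illustrative assumptions to adapt to your machine.

```python
# Sketch: choose /MT:n values from machine topology. Core counts and the
# over-subscription factor are hypothetical starting points, not rules.

def mt_for_initial_run(cores: int, threads_per_core: int = 2,
                       oversub: int = 2) -> int:
    """Over-provision on the first, network-bound run to keep the link busy."""
    return cores * threads_per_core * oversub

def mt_for_catchup(cores: int, threads_per_core: int = 2,
                   reserved_cores: int = 0) -> int:
    """Match available hardware threads for scan-bound differential runs."""
    return max(1, (cores - reserved_cores) * threads_per_core)

print(mt_for_initial_run(8))                   # 32 -> robocopy ... /MT:32
print(mt_for_catchup(8, reserved_cores=2))     # 12 -> robocopy ... /MT:12
```

If several RoboCopy jobs run in parallel on the same machine, divide the resulting thread budget across them rather than using the full value per job.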
TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs copy fewer differences, and you're more likely to shift from being
constrained by network throughput to being constrained by compute. Under these circumstances, it's often better to match the
RoboCopy thread count to the threads actually available on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
Next steps
There's more to discover about Azure file shares. The following articles help you understand advanced options and best
practices, and they also contain troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Migration overview
Backup: Azure file share snapshots
How to use DFS Namespaces with Azure Files
Use DataBox to migrate from Network Attached
Storage (NAS) to Azure file shares
5/20/2022 • 31 minutes to read
This migration article is one of several involving the keywords NAS and Azure DataBox. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ DataBox ⇒ Azure file share
Caching files on-premises: No, the final goal is to use the Azure file shares directly in the cloud. There is no
plan to use Azure File Sync.
If your scenario is different, look through the table of migration guides.
This article guides you end-to-end through the planning, deployment, and networking configurations needed to
migrate from your NAS appliance to functional Azure file shares. This guide uses Azure DataBox for bulk data
transport (offline data transport).
Applies to
FILE SHARE TYPE: SMB, NFS
Migration goals
The goal is to move the shares on your NAS appliance to Azure and have them become native Azure file shares.
You can use native Azure file shares without a need for a Windows Server. This migration needs to be done in a
way that guarantees the integrity of the production data and availability during the migration. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
Migration overview
The migration process consists of several phases. You'll need to deploy Azure storage accounts and file shares
and configure networking. Then you'll migrate your files using Azure DataBox and catch up with
changes using RoboCopy. Finally, you'll cut over your users and apps to the newly created Azure file shares. The following
sections describe the phases of the migration process in detail.
TIP
If you are returning to this article, use the navigation on the right side to jump to the migration phase where you left off.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two Azure file shares instead of one. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
Robocopy /MT:32 /NP /NFL /NDL /B /MIR /IT /COPY:DATSO /DCOPY:DAT /UNILOG:<FilePathAndName> <SourcePath>
<Dest.Path>
To learn more about the details of the individual RoboCopy flags, check out the table in the upcoming
RoboCopy section.
To learn more about how to appropriately size the thread count /MT:n , optimize RoboCopy speed, and make
RoboCopy a good neighbor in your data center, take a look at the RoboCopy troubleshooting section.
TIP
As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your
Data Box with full fidelity. Follow this data copy service tutorial and make sure to set the correct Azure file share target.
IMPORTANT
Before you can successfully mount an Azure file share to a local Windows Server, you need to have completed Phase :
Preparing to use Azure file shares!
Once you are ready, review the Use an Azure file share with Windows how-to article and mount the Azure file
share you want to start the NAS catch-up RoboCopy for.
RoboCopy
The following RoboCopy command will copy only the differences (updated files and folders) from your NAS
storage to your Azure file share.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SWITCH | MEANING
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In this case, accepting the file wasn't copied and
catching it in one of your planned, subsequent Robocopy
runs, may succeed in eventually copying the file successfully.
That helps the current run to finish faster without being
prolonged by many retries that ultimately end up in a
majority of copy failures due to files still open past the retry
timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
(NTFS ACLs), O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure it's at the latest patch level or that at
least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
TIP
Check out the Troubleshooting section if RoboCopy is impacting your production environment, reports lots of errors, or isn't
progressing as fast as expected.
User cut-over
When you run the RoboCopy command for the first time, your users and applications are still accessing files on
the NAS and potentially changing them. It's possible that RoboCopy processes a directory and moves on to the
next, and then a user on the source location (NAS) adds, changes, or deletes a file that won't be processed
in this current RoboCopy run. This behavior is expected.
The first run is about moving the bulk of the churned data to your Azure file share. This first copy can take a
while. Check out the Troubleshooting section for more insight into what can affect RoboCopy speeds.
Once the initial run is complete, run the command again.
The second time you run RoboCopy for the same share, it finishes faster because it only needs to transport
changes that happened since the last run. You can run repeated jobs for the same share.
When you consider the downtime acceptable, you need to remove user access to your NAS-based shares.
You can do that with any steps that prevent users from changing the file and folder structure and content.
Examples are pointing your DFS Namespace to a non-existing location or changing the root ACLs on the share.
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final step
takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your
downtime) by measuring how long the previous run took.
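As a simple sketch of that estimate: because the final pass scans the same namespace as the previous catch-up run and transfers little data, its duration tracks the previous run's duration. The safety factor below is an assumption to pad for last-minute churn.

```python
# Sketch: estimate cut-over downtime from the last catch-up run, assuming
# the final pass is dominated by the namespace scan, not data transfer.

def estimated_downtime_minutes(last_run_minutes: float,
                               safety_factor: float = 1.25) -> float:
    """Pad the previous run's duration a little for last-minute changes."""
    return last_run_minutes * safety_factor

# If the last differential run took 40 minutes, plan for roughly 50.
print(estimated_downtime_minutes(40))  # 50.0
```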
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions as on your NAS SMB share. If you had an enterprise-class domain-
joined NAS, then the user SIDs will automatically match as the users exist in Active Directory and RoboCopy
copies files and metadata at full fidelity. If you have used local users on your NAS, you need to re-create these
users as Windows Server local users and map the existing SIDs RoboCopy moved over to your Windows Server
to the SIDs of your new, Windows Server local users.
You have finished migrating a share or group of shares into a common root or volume (depending on your
mapping from Phase 1).
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share
at a time.
Troubleshoot
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider abilities of the source storage , the target storage , and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support the best possible transfer speeds.
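The slowest-component rule can be written down directly: measure each leg, and the minimum is your realistic ceiling. The MiB/s figures below are hypothetical measurements, not targets.

```python
# Sketch: effective migration throughput is bounded by the slowest of the
# three components. Values are hypothetical MiB/s measurements.

source_read = 180    # NAS sequential read
network = 110        # usable WAN bandwidth
target_write = 250   # Azure file share ingest

bottleneck = min(source_read, network, target_write)
print(f"expect at most ~{bottleneck} MiB/s (the network is the limit here)")
```

Whichever leg comes out smallest is where tuning effort (or a different transport, such as DataBox) pays off first.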
Caution
While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolizing resources on both I/O-constrained
devices and crowded network links.
/IPG:n can't be used to throttle the network precisely to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy relies entirely on the SMB protocol for all networking, which is why it can't shape network
throughput itself; it can only slow down its own use of the network.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file is
evaluated during the initial copy and during catch-up copies, such as repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of migrated files.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.
TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs copy fewer differences, and you're more likely to shift from being
constrained by network throughput to being constrained by compute. Under these circumstances, it's often better to match the
RoboCopy thread count to the threads actually available on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
Next steps
There's more to discover about Azure file shares. The following articles help you understand advanced options and best
practices, and they also contain troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Migration overview
Monitor, diagnose, and troubleshoot Microsoft Azure Storage
Networking considerations for direct access
Backup: Azure file share snapshots
Migrate from Linux to a hybrid cloud deployment
with Azure File Sync
5/20/2022 • 27 minutes to read
This migration article is one of several involving the keywords NFS and Azure File Sync. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: Linux Server with SAMBA ⇒ Windows Server 2012R2 or later ⇒ sync with Azure file
share(s)
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment.
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Windows Server instances with direct attached storage (DAS). It doesn't support syncing
to or from Linux clients, remote Server Message Block (SMB) shares, or Network File System (NFS) shares.
As a result, transforming your file services into a hybrid deployment makes a migration to Windows Server
necessary. This article guides you through the planning and execution of such a migration.
Applies to
FILE SHARE TYPE: SMB, NFS
Migration goals
The goal is to move the shares that you have on your Linux Samba server to a Windows Server instance. Then
use Azure File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees
the integrity of the production data and availability during the migration. The latter requires keeping downtime
to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
Migration overview
As mentioned in the Azure Files migration overview article, using the correct copy tool and approach is
important. Your Linux Samba server is exposing SMB shares directly on your local network. Robocopy, built into
Windows Server, is the best way to move your files in this migration scenario.
If you're not running Samba on your Linux server and rather want to migrate folders to a hybrid deployment on
Windows Server, you can use Linux copy tools instead of Robocopy. Be aware of the fidelity capabilities of your
copy tool. Review the migration basics section in the migration overview article to learn what to look for in a
copy tool.
Phase 1: Identify how many Azure file shares you need
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.
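One quick way to get an item count (files plus folders) for a prospective sync scope on a Linux server is to enumerate it, for example:

```shell
# Count items (files and folders) under a share root, to compare against
# the 20-30 million per-share guidance. Run from the root of the folder
# you plan to sync; the count includes the root folder itself.
find . | wc -l
```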
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
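Any simple format works for the mapping table. As a sketch, a CSV kept next to your migration notes (all share, account, and file share names below are hypothetical):

```shell
# Record which on-premises shares land in which Azure file share.
cat > mapping.csv <<'EOF'
OnPremShare,StorageAccount,AzureFileShare
\\fs01\hr-payroll,hrstorage,hr
\\fs01\hr-benefits,hrstorage,hr
\\fs01\engineering,engstorage,engineering
EOF
cat mapping.csv
```

Here two HR shares group into one `hr` Azure file share, following the share-grouping approach described earlier.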
NOTE
The previously linked article presents a table with a range for server memory (RAM). You can orient toward the smaller
number for your server, but expect that initial sync can take significantly more time.
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
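Azure file share quotas are specified in GiB, so the two limits mentioned above translate as follows (simple arithmetic, shown as a shell sketch):

```shell
# 5 TiB default limit and 100 TiB large-file-share limit, in GiB.
echo "default limit: $((5 * 1024)) GiB"
echo "large share:   $((100 * 1024)) GiB"
```

That is, 5,120 GiB by default and 102,400 GiB for a large file share.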
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
Phase 6: Configure Azure File Sync on the Windows Server
deployment
Your registered on-premises Windows Server instance must be ready and connected to the internet for this
process.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.
IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the
cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast access performance.
Cloud tiering is an optional feature for each Azure File Sync server endpoint.
WARNING
If you provisioned less storage on your Windows Server volumes than your data used on the Linux Samba server, then
cloud tiering is mandatory. If you don't turn on cloud tiering, your server will not free up space to store all files. Set your
tiering policy, temporarily for the migration, to 99 percent free space for a volume. Be sure to return to your cloud tiering
settings after the migration is complete, and set the policy to a more useful level for the long term.
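To see what the temporary 99 percent policy implies, consider a hypothetical 2 TiB (2,048 GiB) volume; only about 1 percent of it holds cached file data during the migration:

```shell
# With a volume free space policy of 99 percent, roughly 1 percent of the
# volume remains available as local cache. Example: a 2048 GiB volume.
VOLUME_GIB=2048   # hypothetical volume size
echo "$(( VOLUME_GIB / 100 )) GiB of local cache during migration"
```

About 20 GiB in this example, which is why the policy is meant to be temporary.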
Repeat the steps of sync group creation and the addition of the matching server folder as a server endpoint for
all Azure file shares and server locations that need to be configured for sync.
After the creation of all server endpoints, sync is working. You can create a test file and see it sync up from your
server location to the connected Azure file share (as described by the cloud endpoint in the sync group).
Both locations, the server folders and the Azure file shares, are otherwise empty and awaiting data. In the next
step, you'll begin to copy files into the Windows Server instance for Azure File Sync to move them up to the
cloud. If you've enabled cloud tiering, the server will then begin to tier files if you run out of capacity on the local
volumes.
Phase 7: Robocopy
The basic migration approach is to use Robocopy to copy files and use Azure File Sync to do the syncing.
Run the first local copy to your Windows Server target folder:
1. Identify the first location on your Linux Samba server.
2. Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on
it.
3. Start the copy by using Robocopy.
The following Robocopy command will copy files from your Linux Samba server's storage to your Windows
Server target folder. Windows Server will sync it to the Azure file shares.
If you provisioned less storage on your Windows Server instance than your files take up on the Linux Samba
server, then you have configured cloud tiering. As the local Windows Server volume gets full, cloud tiering will
start and tier files that have successfully synced already. Cloud tiering will generate enough space to continue
the copy from the Linux Samba server. Cloud tiering checks once an hour to see what has synced and to free up
disk space to reach the policy of 99 percent free space for a volume.
It's possible that Robocopy moves files faster than you can sync to the cloud and tier locally, causing you to run
out of local disk space. Robocopy will then fail. We recommend that you work through the shares in a sequence
that prevents the problem. For example, consider not starting Robocopy jobs for all shares at the same time. Or
consider moving shares that fit on the current amount of free space on the Windows Server instance. If your
Robocopy job does fail, you can always rerun the command as long as you use the following mirror/purge
option:
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SWITCH    MEANING
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In that case, accept that the file wasn't copied, and
catch it in one of your planned, subsequent Robocopy runs.
That helps the current run finish faster, without being
prolonged by many retries that ultimately fail because the
files are still open past the retry timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
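Because a failed job can safely be rerun with the /MIR option, a thin retry wrapper around the copy step can wait out the hourly cloud tiering job. This is a sketch only: `run_with_retry` is a hypothetical helper, and on Windows you'd express the same idea in PowerShell or a batch file rather than POSIX shell.

```shell
# Retry a command, pausing between attempts so cloud tiering (which runs
# hourly) has a chance to free local disk space before the rerun.
run_with_retry() {
  max=$1; pause=$2; shift 2
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    "$@" && return 0
    echo "attempt $attempt failed; waiting ${pause}s before rerunning" >&2
    sleep "$pause"
    attempt=$((attempt + 1))
  done
  return 1
}
# Hypothetical usage:
# run_with_retry 5 3600 robocopy <SourcePath> <Dest.Path> /MIR ...
```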
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that the server is at the
latest patch level or that at least OS update KB5005103 is installed. That update contains important fixes for
certain Robocopy scenarios.
WARNING
After you've moved all the data from your Linux Samba server to the Windows Server instance, and your migration is
complete, return to all sync groups in the Azure portal. Adjust the percentage of free space for cloud tiering volume to
something better suited for cache utilization, such as 20 percent.
The policy for free space in cloud tiering volume acts on a volume level with potentially multiple server
endpoints syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue
to apply the most restrictive rule and attempt to keep free disk space at 99 percent. The local cache then might
not perform as you expect. The performance might be acceptable if your goal is to have the namespace for a
volume that contains only rarely accessed archival data, and you're reserving the rest of the storage space for
another scenario.
Troubleshoot
The most common problem is that the Robocopy command fails with Volume full on the Windows Server side.
Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has synced. Its
goal is to reach free space of 99 percent on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on Windows Server.
When your Windows Server instance has enough available capacity, rerunning the command will resolve the
problem. Nothing breaks when you get into this situation, and you can move forward with confidence. The
inconvenience of running the command again is the only consequence.
Check the link in the following section for troubleshooting Azure File Sync problems.
Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles contain advanced
options, best practices, and troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting
Migrate from Network Attached Storage (NAS) to a
hybrid cloud deployment with Azure File Sync
5/20/2022 • 27 minutes to read
This migration article is one of several involving the keywords NAS and Azure File Sync. Check if this article
applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ Windows Server ⇒ upload and sync with Azure file share(s)
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment.
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Direct Attached Storage (DAS) locations and doesn't support sync to Network
Attached Storage (NAS) locations. This fact makes a migration of your files necessary, and this article guides you
through the planning and execution of such a migration.
Applies to
FILE SHARE TYPE    SMB    NFS
Migration goals
The goal is to move the shares that you have on your NAS appliance to a Windows Server instance. Then utilize
Azure File Sync for a hybrid cloud deployment. Generally, migrations need to be done in a way that guarantees
the integrity of the production data and its availability during the migration. The latter requires keeping
downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
Migration overview
As mentioned in the Azure Files migration overview article, using the correct copy tool and approach is
important. Your NAS appliance exposes SMB shares directly on your local network. Robocopy, built into
Windows Server, is the best way to move your files in this migration scenario.
Phase 1: Identify how many Azure file shares you need
Phase 2: Provision a suitable Windows Server on-premises
Phase 3: Deploy the Azure File Sync cloud resource
Phase 4: Deploy Azure storage resources
Phase 5: Deploy the Azure File Sync agent
Phase 6: Configure Azure File Sync on the Windows Server
Phase 7: Robocopy
Phase 8: User cut-over
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
NOTE
The previously linked article presents a table with a range for server memory (RAM). You can orient toward the
smaller number for your server, but expect that initial sync can take significantly more time.
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
Phase 6: Configure Azure File Sync on the Windows Server
Your registered on-premises Windows Server must be ready and connected to the internet for this process.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.
IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the
cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast access performance.
Cloud tiering is an optional feature for each Azure File Sync server endpoint.
WARNING
If you provisioned less storage on your Windows Server volumes than your data used on the NAS appliance, then cloud
tiering is mandatory. If you don't turn on cloud tiering, your server will not free up space to store all files. Set your
tiering policy, temporarily for the migration, to 99 percent free space for a volume. Be sure to return to your cloud tiering
settings after the migration is complete, and set the policy to a more useful level for the long term.
Repeat the steps of sync group creation and the addition of the matching server folder as a server endpoint for
all Azure file shares and server locations that need to be configured for sync.
After the creation of all server endpoints, sync is working. You can create a test file and see it sync up from your
server location to the connected Azure file share (as described by the cloud endpoint in the sync group).
Both locations, the server folders and the Azure file shares, are otherwise empty and awaiting data. In the next
step, you'll begin to copy files into the Windows Server instance for Azure File Sync to move them up to the
cloud. If you've enabled cloud tiering, the server will then begin to tier files if you run out of capacity on the local
volumes.
Phase 7: Robocopy
The basic migration approach is to use Robocopy to copy files from your NAS appliance to your Windows
Server instance, and Azure File Sync to sync them to the Azure file shares.
Run the first local copy to your Windows Server target folder:
Identify the first location on your NAS appliance.
Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on it.
Start the copy by using Robocopy.
The following Robocopy command will copy files from your NAS storage to your Windows Server target folder.
Windows Server will sync it to the Azure file shares.
If you provisioned less storage on your Windows Server instance than your files take up on the NAS appliance,
then you have configured cloud tiering. As the local Windows Server volume gets full, cloud tiering will start
and tier files that have successfully synced already. Cloud tiering will generate enough space to continue the
copy from the NAS appliance. Cloud tiering checks once an hour to see what has synced and to free up disk
space to reach the policy of 99 percent free space for a volume.
It's possible that Robocopy moves files faster than you can sync to the cloud and tier locally, causing you to run
out of local disk space. Robocopy will then fail. We recommend that you work through the shares in a sequence
that prevents the problem. For example, consider not starting Robocopy jobs for all shares at the same time, or
consider moving only shares that fit on the current amount of free space on the Windows Server instance.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SWITCH    MEANING
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In that case, accept that the file wasn't copied, and
catch it in one of your planned, subsequent Robocopy runs.
That helps the current run finish faster, without being
prolonged by many retries that ultimately fail because the
files are still open past the retry timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that the server is at the
latest patch level or that at least OS update KB5005103 is installed. That update contains important fixes for
certain Robocopy scenarios.
WARNING
After you've moved all the data from your NAS to the Windows Server instance, and your migration is complete, return
to all sync groups in the Azure portal. Adjust the percentage of free space for cloud tiering volume to something better
suited for cache utilization, such as 20 percent.
The policy for free space in cloud tiering volume acts on a volume level with potentially multiple server
endpoints syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue
to apply the most restrictive rule and attempt to keep free disk space at 99 percent. The local cache then might
not perform as you expect. The performance might be acceptable if your goal is to have the namespace for a
volume that contains only rarely accessed archival data, and you're reserving the rest of the storage space for
another scenario.
Troubleshoot
The most common problem is that the Robocopy command fails with Volume full on the Windows Server side.
Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has synced. Its
goal is to reach free space of 99 percent on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on Windows Server.
When your Windows Server instance has enough available capacity, rerunning the command will resolve the
problem. Nothing breaks when you get into this situation, and you can move forward with confidence. The
inconvenience of running the command again is the only consequence.
Check the link in the following section for troubleshooting Azure File Sync issues.
Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles contain advanced
options, best practices, and troubleshooting help. These articles link to Azure file share documentation as
appropriate.
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting
Use Data Box to migrate from Network Attached
Storage (NAS) to a hybrid cloud deployment by
using Azure File Sync
5/20/2022 • 36 minutes to read
This migration article is one of several that apply to the keywords NAS, Azure File Sync, and Azure Data Box.
Check if this article applies to your scenario:
Data source: Network Attached Storage (NAS)
Migration route: NAS ⇒ Data Box ⇒ Azure file share ⇒ sync with Windows Server
Caching files on-premises: Yes, the final goal is an Azure File Sync deployment
If your scenario is different, look through the table of migration guides.
Azure File Sync works on Direct Attached Storage (DAS) locations. It doesn't support sync to Network Attached
Storage (NAS) locations. So you need to migrate your files. This article guides you through the planning and
implementation of that migration.
Applies to
FILE SHARE TYPE    SMB    NFS
Migration goals
The goal is to move the shares that you have on your NAS appliance to Windows Server. You'll then use Azure
File Sync for a hybrid cloud deployment. This migration needs to be done in a way that guarantees the integrity
of the production data and availability during the migration. The latter requires keeping downtime to a
minimum so that it meets or only slightly exceeds regular maintenance windows.
Migration overview
The migration process consists of several phases. You'll need to:
Deploy Azure storage accounts and file shares.
Deploy an on-premises computer running Windows Server.
Configure Azure File Sync.
Migrate files by using Robocopy.
Do the cutover.
The following sections describe the phases of the migration process in detail.
TIP
If you're returning to this article, use the navigation on the right side of the screen to jump to the migration phase where
you left off.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
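As a planning aid, the grouping logic above can be sketched in a few lines. This is a hedged illustration, not an Azure tool: the share names and item counts are hypothetical, and the soft limit of 20 million items follows the guidance above.

```python
# Planning sketch: group on-premises shares into Azure file shares while
# keeping each share's item count below a soft limit. Share names and item
# counts are hypothetical; the soft limit follows the guidance above.

SOFT_LIMIT = 20_000_000  # items (files + folders) per Azure file share

def plan_groups(shares, limit=SOFT_LIMIT):
    """First-fit-decreasing: pack shares into groups without exceeding limit."""
    groups = []  # each entry: [total_items, [share names]]
    for name, items in sorted(shares.items(), key=lambda kv: -kv[1]):
        for group in groups:
            if group[0] + items <= limit:
                group[0] += items
                group[1].append(name)
                break
        else:
            # Start a new group; a single share already over the limit gets
            # its own group and should be split further upstream.
            groups.append([items, [name]])
    return groups

# Hypothetical item counts, gathered with a tool like TreeSize:
shares = {"hr": 4_000_000, "finance": 9_000_000,
          "eng": 18_000_000, "scratch": 2_500_000}
for total, names in plan_groups(shares):
    print(f"{total:>12,} items -> {names}")
```

First-fit-decreasing is only a simple heuristic here; any grouping that keeps each Azure file share below the soft limit works.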
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
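A quick way to sanity-check a share plan against these quota and redundancy rules is a small validation helper. This is a hypothetical sketch, not part of any Azure SDK; the redundancy option names (LRS, ZRS, GRS, GZRS) are the standard Azure ones.

```python
# Sketch: validate a planned Azure file share against the quota and
# redundancy constraints described above. The helper and its warning
# strings are hypothetical; only the limits come from the text.

def validate_share(size_tib, redundancy):
    """Return a list of planning warnings for one planned file share."""
    warnings = []
    if size_tib > 5:
        warnings.append("needs a large file share (default quota is 5 TiB)")
        if redundancy not in ("LRS", "ZRS"):
            warnings.append("large (100 TiB) shares support only LRS or ZRS")
    if size_tib > 100:
        warnings.append("exceeds the 100 TiB maximum; split the namespace")
    return warnings

print(validate_share(3, "GRS"))   # fits the default 5 TiB quota: no warnings
print(validate_share(40, "GRS"))  # needs a large share, and GRS isn't allowed
```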
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
Phase 3: Determine how many Azure Data Box appliances you need
Start this step only after you've finished the previous phase. Your Azure storage resources (storage accounts and
file shares) should be created at this time. When you order your Data Box, you need to specify the storage
accounts into which the Data Box is moving data.
In this phase, you need to map the results of the migration plan from the previous phase to the limits of the
available Data Box options. These considerations will help you make a plan for which Data Box options to choose
and how many of them you'll need to move your NAS shares to Azure file shares.
To determine how many devices you need and their types, consider these important limits:
Any Azure Data Box appliance can move data into up to 10 storage accounts.
Each Data Box option comes with its own usable capacity. See Data Box options.
Consult your migration plan to find the number of storage accounts you've decided to create and the shares in
each one. Then look at the size of each of the shares on your NAS. Combining this information will allow you to
optimize and decide which appliance should be sending data to which storage accounts. Two Data Box devices
can move files into the same storage account, but don't split content of a single file share across two Data Boxes.
Data Box options
For a standard migration, choose one or a combination of these Data Box options:
Data Box Disk . Microsoft will send you between one and five SSD disks that have a capacity of 8 TiB each,
for a maximum total of 40 TiB. The usable capacity is about 20 percent less because of encryption and
file-system overhead. For more information, see the Data Box Disk documentation.
Data Box . This option is the most common one. Microsoft will send you a ruggedized Data Box appliance
that works similarly to a NAS. It has a usable capacity of 80 TiB. For more information, see the Data Box
documentation.
Data Box Heavy . This option features a ruggedized Data Box appliance on wheels that works similarly to a
NAS. It has a capacity of 1 PiB. The usable capacity is about 20 percent less because of encryption and
file-system overhead. For more information, see the Data Box Heavy documentation.
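To compare these options for a given migration, you can estimate device counts from the usable capacities. A hedged sketch: the raw capacities come from the list above, the roughly 20 percent overhead is approximate, and the 120 TiB total is hypothetical.

```python
# Sketch: estimate how many appliances of each Data Box option a migration
# needs. Capacities follow the text above; the ~20 percent overhead applies
# to Data Box Disk and Data Box Heavy. The 120 TiB total is made up.
import math

USABLE_TIB = {
    "Data Box Disk":  5 * 8 * 0.8,  # up to five 8-TiB disks, ~20% overhead
    "Data Box":       80,           # 80 TiB is already the usable figure
    "Data Box Heavy": 1024 * 0.8,   # 1 PiB raw, ~20% overhead
}

def devices_needed(total_tib, option):
    return math.ceil(total_tib / USABLE_TIB[option])

total = 120  # hypothetical sum of all NAS share sizes, in TiB
for option in USABLE_TIB:
    print(f"{option}: {devices_needed(total, option)} device(s)")
```

Remember the other constraint from above: each device can target at most 10 storage accounts, and a single file share's content shouldn't be split across two Data Boxes.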
NOTE
The previously linked article includes a table with a range for server memory (RAM). You can use numbers at the lower
end of the range for your server, but expect the initial sync to take significantly longer.
TIP
As an alternative to Robocopy, Data Box has created a data copy service. You can use this service to load files onto your
Data Box with full fidelity. Follow this data copy service tutorial and make sure to set the correct Azure file share target.
Data Box documentation specifies a Robocopy command. That command isn't suitable for preserving the full file
and folder fidelity. Use this command instead:
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SW ITC H M EA N IN G
/R:n Maximum retry count for a file that fails to copy on the first
attempt. Robocopy tries n times before the file permanently
fails to copy in that run. You can optimize the performance of
your run: choose a value of two or three if you believe
timeout issues caused failures in the past, which may be more
common over WAN links. Choose no retry, or a value of one, if
you believe the file failed to copy because it was actively in
use. Trying again a few seconds later may not be enough time
for the in-use state of the file to change; users or apps
holding the file open may need hours. In that case, accept
that the file wasn't copied and let one of your planned,
subsequent Robocopy runs pick it up. That helps the current
run finish faster, instead of being prolonged by many retries
that ultimately fail because files are still open past the
retry timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not suppor ted
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that it's at the latest
patch level or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy
scenarios.
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered ser vers .
You'll see your server listed there.
IMPORTANT
Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the
cloud but have the full namespace available. Locally interesting data is also cached locally for fast access performance.
Cloud tiering is optional. You can set it individually for each Azure File Sync server endpoint. You need to use this feature if
you don't have enough local disk capacity on the Windows Server instance to hold all cloud data and you want to avoid
downloading all data from the cloud.
For all Azure file shares / server locations that you need to configure for sync, repeat the steps to create sync
groups and to add the matching server folders as server endpoints. Wait until the sync of the namespace is
complete. The following section will explain how you can ensure the sync is complete.
NOTE
After you create a server endpoint, sync is working. But sync needs to enumerate (discover) the files and folders you
moved via Data Box into the Azure file share. Depending on the size of the namespace, it can take a long time before the
namespace from the cloud appears on the server.
WARNING
Because of regressed Robocopy behavior in Windows Server 2019, the Robocopy /MIR switch isn't compatible with
tiered target directories. You can't use Windows Server 2019 or Windows 10 client for this phase of the migration. Use
Robocopy on an intermediate Windows Server 2016 instance.
Here's the basic migration approach:
Run Robocopy from your NAS appliance to sync your Windows Server instance.
Use Azure File Sync to sync the Azure file shares from Windows Server.
Run the first local copy to your Windows Server target folder:
1. Identify the first location on your NAS appliance.
2. Identify the matching folder on the Windows Server instance that already has Azure File Sync configured on
it.
3. Start the copy by using Robocopy.
The following Robocopy command will copy only the differences (updated files and folders) from your NAS
storage to your Windows Server target folder. The Windows Server instance will then sync them to the Azure
file shares.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SW ITC H M EA N IN G
/R:n Maximum retry count for a file that fails to copy on the first
attempt. Robocopy tries n times before the file permanently
fails to copy in that run. You can optimize the performance of
your run: choose a value of two or three if you believe
timeout issues caused failures in the past, which may be more
common over WAN links. Choose no retry, or a value of one, if
you believe the file failed to copy because it was actively in
use. Trying again a few seconds later may not be enough time
for the in-use state of the file to change; users or apps
holding the file open may need hours. In that case, accept
that the file wasn't copied and let one of your planned,
subsequent Robocopy runs pick it up. That helps the current
run finish faster, instead of being prolonged by many retries
that ultimately fail because files are still open past the
retry timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not suppor ted
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you're using Windows Server 2019, ensure that it's at the latest
patch level or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy
scenarios.
If you provisioned less storage on your Windows Server instance than your files use on the NAS appliance,
you've configured cloud tiering. As the local Windows Server volume becomes full, cloud tiering will kick in and
tier files that have already successfully synced. Cloud tiering will generate enough space to continue the copy
from the NAS appliance. Cloud tiering checks once an hour to determine what has synced and frees up disk
space to reach 99 percent free space on the volume.
Robocopy might need to move more files than you can store locally on the Windows Server instance. You can
expect Robocopy to move faster than Azure File Sync can upload your files and tier them off your local volume.
In this situation, Robocopy will fail. We recommend that you work through the shares in a sequence that
prevents this scenario. For example, move only shares that fit in the free space available on the Windows Server
instance. Or avoid starting Robocopy jobs for all shares at the same time. The good news is that the /MIR
switch will ensure that only deltas are moved. After a delta has been moved, a restarted job won't need to move
the file again.
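One way to apply this sequencing advice is to pick, before each round, the set of shares that together fits the currently available free space. A hypothetical sketch (share names and sizes in GiB are made up):

```python
# Sketch: choose a batch of Robocopy jobs that fits the free space currently
# available on the Windows Server volume, per the guidance above. Shares
# that don't fit wait for a later batch, after cloud tiering frees space.

def next_batch(shares, free_gib):
    """Return the shares (largest first) that together fit into free_gib."""
    batch, used = [], 0
    for name, size in sorted(shares.items(), key=lambda kv: -kv[1]):
        if used + size <= free_gib:
            batch.append(name)
            used += size
    return batch

shares = {"hr": 400, "finance": 900, "eng": 1800, "scratch": 250}
print(next_batch(shares, 1500))  # "eng" is too big for this round and waits
```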
Do the cutover
When you run the Robocopy command for the first time, your users and applications will still be accessing files
on the NAS and potentially changing them. Robocopy will process a directory and then move on to the next one.
A user on the NAS might then add, change, or delete a file on the first directory that won't be processed during
the current Robocopy run. This behavior is expected.
The first run is about moving the bulk of the churned data to your Windows Server instance and into the cloud
via Azure File Sync. This first copy can take a long time, depending on:
The upload bandwidth.
The local network speed and how optimally the number of Robocopy threads matches it.
The number of items (files and folders) that need to be processed by Robocopy and Azure File Sync.
After the initial run is complete, run the command again.
Robocopy will finish faster the second time you run it for a share. It needs to transport only changes that
happened since the last run. You can run repeated jobs for the same share.
When you consider downtime acceptable, you need to remove user access to your NAS-based shares. You can
do that in any way that prevents users from changing the file and folder structure and the content. For example,
you can point your DFS namespace to a location that doesn't exist or change the root ACLs on the share.
Run Robocopy one last time. It will pick up any changes that have been missed. How long this final step takes
depends on the speed of the Robocopy scan. You can estimate the time (which is equal to your downtime) by
measuring the length of the previous run.
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions that are on your NAS SMB share. If you had an enterprise-class,
domain-joined NAS, the user SIDs will automatically match because the users are in Active Directory and
Robocopy copies files and metadata at full fidelity. If you have used local users on your NAS, you need to:
Re-create these users as Windows Server local users.
Map the existing SIDs that Robocopy moved over to your Windows Server instance to the SIDs of your new
Windows Server local users.
You've finished migrating a share or group of shares into a common root or volume (depending on your
mapping from Phase 1).
You can try to run a few of these copies in parallel. We recommend that you process the scope of one Azure file
share at a time.
Troubleshooting
The most common problem is for the Robocopy command to fail with "Volume full" on the Windows Server
side. Cloud tiering acts once every hour to evacuate content from the local Windows Server disk that has
synced. Its goal is to reach 99 percent free space on the volume.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows
Server instance.
When your Windows Server instance has enough available capacity, run the command again to resolve the
problem. Nothing breaks in this situation. You can move forward with confidence. The inconvenience of running
the command again is the only consequence.
To troubleshoot Azure File Sync problems, see the article listed in the next section.
Next steps
There's more to discover about Azure file shares and Azure File Sync. The following articles will help you
understand advanced options and best practices. They also provide help with troubleshooting. These articles
contain links to the Azure file share documentation where appropriate.
Migration overview
Planning for an Azure File Sync deployment
Create a file share
Troubleshoot Azure File Sync
StorSimple 8100 and 8600 migration to Azure File
Sync
5/20/2022 • 68 minutes to read
The StorSimple 8000 series is represented by either the 8100 or the 8600 physical, on-premises appliances and
their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration
guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure
File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-
premises functionality.
The StorSimple 8000 series will reach its end of life in December 2022. It's important to begin planning your
migration as soon as possible. This article provides the necessary background knowledge and migration steps
for a successful migration to Azure File Sync.
NOTE
No operations can be performed in the Azure portal of your StorSimple Manager service until the key rollover is
completed.
If you are using the device serial console to connect to the Windows PowerShell interface, perform the following
steps.
To initiate the service data encryption key change
1. Select option 1 to log on with full access.
2. At the command prompt, type:
Invoke-HcsmServiceDataEncryptionKeyChange
3. After the cmdlet has successfully completed, you will get a new service data encryption key. Copy and
save this key for use in step 3 of this process. This key will be used to update all the remaining devices
registered with the StorSimple Manager service.
NOTE
This process must be initiated within four hours of authorizing a StorSimple device.
This new key is then sent to the service to be pushed to all the devices that are registered with the
service. An alert will then appear on the service dashboard. The service will disable all the operations on
the registered devices, and the device administrator will then need to update the service data encryption
key on the other devices. However, the I/Os (hosts sending data to the cloud) will not be disrupted.
If you have a single device registered to your service, the rollover process is now complete and you can
skip the next step. If you have multiple devices registered to your service, proceed to step 3.
Step 3: Update the service data encryption key on other StorSimple devices
These steps must be performed in the Windows PowerShell interface of your StorSimple device if you have
multiple devices registered to your StorSimple Manager service. The key that you obtained in Step 2 must be
used to update all the remaining StorSimple devices registered with the StorSimple Manager service.
Perform the following steps to update the service data encryption on your device.
To update the service data encryption key on physical devices
1. Use Windows PowerShell for StorSimple to connect to the console. Select option 1 to log on with full access.
2. At the command prompt, type: Invoke-HcsmServiceDataEncryptionKeyChange -ServiceDataEncryptionKey
3. Provide the service data encryption key that you obtained in Step 2: Use Windows PowerShell for StorSimple
to initiate the service data encryption key change.
To update the service data encryption key on all the 8010/8020 cloud appliances
1. Download and set up the Update-CloudApplianceServiceEncryptionKey.ps1 PowerShell script.
2. Open PowerShell and at the command prompt, type:
Update-CloudApplianceServiceEncryptionKey.ps1 -SubscriptionId [subscription] -TenantId [tenantid] -
ResourceGroupName [resource group] -ManagerName [device manager]
This script will ensure that the service data encryption key is set on all the 8010/8020 cloud appliances under
the device manager.
Caution
When you're deciding how to connect to your StorSimple appliance, consider the following:
Connecting through an HTTPS session is the most secure and recommended option.
Connecting directly to the device serial console is secure, but connecting to the serial console over network
switches is not.
HTTP session connections are an option but are not encrypted. They're not recommended unless they're used
within a closed, trusted network.
Known limitations
The StorSimple Data Manager and Azure file shares have a few limitations that you should consider before you
begin your migration, because they can prevent a migration:
Only NTFS volumes from your StorSimple appliance are supported. ReFS volumes are not supported.
Any volume placed on Windows Server dynamic disks is not supported. (Dynamic disks were deprecated
before Windows Server 2012.)
The service doesn't work with volumes that are BitLocker encrypted or have Data Deduplication enabled.
Corrupted StorSimple backups can't be migrated.
Special networking options, such as firewalls or private endpoint-only communication, can't be enabled on
either the source storage account where StorSimple backups are stored or the target storage account that
holds your Azure file shares.
File fidelity
Even if none of the limitations in Known limitations prevent your migration, there are still limits on what can be
stored in Azure file shares that you need to be aware of. File fidelity refers to the multitude of attributes,
timestamps, and data that compose a file. In a migration, file fidelity is a measure of how well the information on
the source (StorSimple volume) can be translated (migrated) to the target (Azure file share). Azure Files supports
a subset of the NTFS file properties. ACLs, common metadata, and some timestamps will be migrated. The
following items won't prevent a migration but will cause per-item issues during a migration:
Timestamps: File change time won't be set; it's currently read-only over the REST protocol. The last-access
timestamp on a file won't be moved; it currently isn't a supported attribute on files stored in an Azure file
share.
Alternative Data Streams can't be stored in Azure file shares. Files holding Alternate Data Streams will be
copied, but Alternate Data Streams will be stripped from the file in the process.
Symbolic links, hard links, junctions, and reparse points are skipped during a migration. The migration copy
logs will list each skipped item and a reason.
EFS encrypted files will fail to copy. Copy logs will show the item failed to copy with "Access is denied".
Corrupt files are skipped. The copy logs may list different errors for each item that is corrupt on the
StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is
corrupted or unreadable" or "The access control list (ACL) structure is invalid".
Individual files larger than 4 TiB are skipped.
File path lengths need to be equal to or fewer than 2048 characters. Files and folders with longer paths will
be skipped.
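Before migrating, you can pre-scan an inventory for items that will hit these per-item issues. A minimal sketch: the limits are the ones listed above, and the checker is a hypothetical helper rather than a StorSimple or Azure tool.

```python
# Sketch: flag items that a migration job will skip, per the fidelity limits
# above (paths over 2048 characters, files over 4 TiB, symbolic links). A
# real scan would walk the source volume; the example items are made up.

MAX_PATH = 2048            # characters
MAX_SIZE = 4 * 1024**4     # 4 TiB, in bytes

def fidelity_issues(path, size_bytes, is_symlink=False):
    issues = []
    if len(path) > MAX_PATH:
        issues.append("path longer than 2048 characters: will be skipped")
    if size_bytes > MAX_SIZE:
        issues.append("file larger than 4 TiB: will be skipped")
    if is_symlink:
        issues.append("symbolic link: skipped during migration")
    return issues

print(fidelity_issues("\\\\nas\\share\\ok.txt", 1024))            # no issues
print(fidelity_issues("\\\\nas\\share\\huge.vhdx", 5 * 1024**4))  # too large
```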
StorSimple volume backups
StorSimple offers differential backups on the volume level. Azure file shares also have this ability, called share
snapshots. Your migration jobs can only move backups, not data from the live volume. So the most recent
backup should always be on the list of backups moved in a migration.
Decide if you need to move any older backups during your migration. Best practice is to keep this list as small as
possible, so your migration jobs complete faster.
To identify critical backups that must be migrated, make a checklist of your backup policies. For instance:
The most recent backup. (Note: The most recent backup should always be part of this list.)
One backup a month for 12 months.
One backup a year for three years.
Later on, when you create your migration jobs, you can use this list to identify the exact StorSimple volume
backups that must be migrated to satisfy the requirements on your list.
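The checklist above can be turned into a selection rule over your backup catalog. A hedged sketch: the retention windows mirror the example list (most recent, monthly for 12 months, yearly for 3 years), and the backup dates are hypothetical.

```python
# Sketch: pick which StorSimple volume backups to migrate, following the
# example checklist above. Keeps the most recent backup, the last backup of
# each of the past 12 months, and the last backup of each of the past 3
# years. Backup dates are hypothetical.
from datetime import date

def select_backups(backups, today):
    """backups is a date-sorted list (oldest first); returns dates to keep."""
    keep = {backups[-1]}               # the most recent backup, always kept
    monthly, yearly = {}, {}
    for b in backups:
        monthly[(b.year, b.month)] = b  # last backup seen in each month
        yearly[b.year] = b              # last backup seen in each year
    for (y, m), b in monthly.items():
        if (today.year - y) * 12 + (today.month - m) < 12:
            keep.add(b)
    for y, b in yearly.items():
        if today.year - y < 3:
            keep.add(b)
    return sorted(keep)

backups = [date(2020, 6, 30), date(2021, 12, 31),
           date(2022, 4, 30), date(2022, 5, 15)]
print(select_backups(backups, today=date(2022, 5, 20)))
```

Keep the resulting list small, and under the 50-backup limit noted below, so your migration jobs complete faster.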
Caution
Selecting more than 50 StorSimple volume backups is not supported. Your migration jobs can move only
backups, never data from the live volume. Therefore, the most recent backup is closest to the live data and
should always be part of the list of backups to be moved in a migration.
Caution
It's best to suspend all StorSimple backup retention policies before you select a backup for migration.
Migrating your backups takes several days or weeks. StorSimple offers backup retention policies that will delete
backups. Backups you've selected for this migration might get deleted before they've had a chance to be
migrated.
Map your existing StorSimple volumes to Azure file shares
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync to two Azure file shares instead of one. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
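As an alternative to the Excel template, the mapping table can be kept in any format you can script against. The following Python sketch writes such a table as CSV; all volume, account, and share names are hypothetical.

```python
import csv
import io

# Hypothetical mapping plan: which on-premises folder lands in which
# Azure file share. Replace the names with your own plan.
plan = [
    # (StorSimple volume, source folder, storage account, Azure file share)
    ("vol01", "\\HR", "contosomigration01", "hr-data"),
    ("vol01", "\\Finance", "contosomigration01", "finance-data"),
    ("vol02", "\\", "contosomigration02", "engineering-data"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Volume", "SourceFolder", "StorageAccount", "FileShare"])
writer.writerows(plan)
print(buf.getvalue())
```

Keeping the plan in a machine-readable file makes it easier to cross-check later that every source folder ended up in exactly one migration job.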
IMPORTANT
Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you
selected. Don't configure network and firewall settings for the storage accounts now. Making these configurations at this
point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
Subscription
You can use the same subscription you used for your StorSimple deployment or a different one. The only
limitation is that your subscription must be in the same Azure Active Directory tenant as the StorSimple
subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a
migration. You can only move the entire subscription; individual StorSimple resources can't be moved to a different tenant or subscription.
Resource group
Resource groups assist with the organization of resources and the management of admin permissions. Find out more about resource groups in Azure.
Storage account name
The name of your storage account will become part of a URL and has certain character limitations. In your naming convention, consider that storage account names must be globally unique, contain only lowercase letters and numbers, be between 3 and 24 characters long, and can't include special characters like hyphens or underscores. For more information, see Azure storage resource naming rules.
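The character rules above can be checked locally before you try to create the account. This is a minimal sketch of those stated rules; global uniqueness can only be verified against Azure itself.

```python
import re

def is_valid_storage_account_name(name: str) -> bool:
    # Rules stated above: 3-24 characters, lowercase letters and numbers
    # only, no hyphens or underscores. Global uniqueness isn't checked here.
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

assert is_valid_storage_account_name("contosomigration01")
assert not is_valid_storage_account_name("Contoso-Migration")  # uppercase, hyphen
assert not is_valid_storage_account_name("ab")                 # too short
```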
Location
The location or Azure region of a storage account is very important. If you use Azure File Sync, all of your
storage accounts must be in the same region as your Storage Sync Service resource. The Azure region you pick
should be close or central to your local servers and users. After your resource has been deployed, you can't
change its region.
You can pick a different region from where your StorSimple data (storage account) currently resides.
IMPORTANT
If you pick a different region from your current StorSimple storage account location, egress charges will apply during the
migration. Data will leave the StorSimple region and enter your new storage account region. No bandwidth charges apply
if you stay within the same Azure region.
Performance
You have the option to pick premium storage (SSD) for Azure file shares or standard storage. Standard storage
includes several tiers for a file share. Standard storage is the right option for most customers migrating from
StorSimple.
Still not sure?
Choose premium storage if you need the performance of a premium Azure file share.
Choose standard storage for general-purpose file server workloads, which includes hot data and archive
data. Also choose standard storage if the only workload on the share in the cloud will be Azure File Sync.
For premium file shares, choose File shares in the create storage account wizard.
Replication
There are several replication settings available. Learn more about the different replication types.
Only choose from either of the following two options:
Locally redundant storage (LRS).
Zone redundant storage (ZRS), which isn't available in all Azure regions.
NOTE
Only LRS and ZRS redundancy types are compatible with the large 100 TiB capacity Azure file shares.
Geo redundant storage (GRS) in all variations is currently not supported. You can switch your redundancy type
later, and switch to GRS when support for it arrives in Azure.
Enable 100 TiB capacity file shares
Under the Advanced section of the new storage account wizard in the Azure portal, you can enable Large file
shares support in this storage account. If this option isn't available to you, you most likely selected the wrong
redundancy type. Ensure you only select LRS or ZRS for this option to become available.
Opting for the large, 100 TiB capacity file shares has several benefits:
Your performance is greatly increased as compared to the smaller 5 TiB-capacity file shares (for example, 10
times the IOPS).
Your migration will finish significantly faster.
You ensure that a file share will have enough capacity to hold all the data you'll migrate into it, including the
storage capacity differential backups require.
Future growth is covered.
IMPORTANT
Do not apply special networking to your storage account before or during your migration. The public endpoint must be
accessible on source and target storage accounts. No limiting to specific IP ranges or VNETs is supported. You can change
the storage account networking configurations after the migration.
Quota
Quota here is comparable to an SMB hard quota on a Windows Server instance. The best practice is to not set a
quota here because your migration and other services will fail when the quota is reached.
Tiers
Select Transaction optimized for your new file share. Many transactions will occur during the migration. It's more cost-efficient to change your tier later to the tier best suited to your workload.
StorSimple Data Manager
The Azure resource that will hold your migration jobs is called a StorSimple Data Manager. Select New resource, and search for it. Then select Create.
This temporary resource is used for orchestration. You deprovision it after your migration completes. It should
be deployed in the same subscription, resource group, and region as your StorSimple storage account.
Azure File Sync
With Azure File Sync, you can add on-premises caching of the most frequently accessed files. Similar to the
caching abilities of StorSimple, the Azure File Sync cloud tiering feature offers local-access latency in
combination with improved control over the available cache capacity on the Windows Server instance and
multi-site sync. If having an on-premises cache is your goal, then in your local network, prepare a Windows
Server VM (physical servers and failover clusters are also supported) with sufficient direct-attached storage
capacity.
IMPORTANT
Don't set up Azure File Sync yet. It's best to set up Azure File Sync after the migration of your share is complete.
Deploying Azure File Sync shouldn't start before Phase 4 of a migration.
Phase 2 summary
At the end of Phase 2, you'll have deployed your storage accounts and all Azure file shares across them. You'll
also have a StorSimple Data Manager resource. You'll use the latter in Phase 3 when you configure your
migration jobs.
Source
Source subscription
Select the subscription in which you store your StorSimple Device Manager resource.
StorSimple resource
Select the StorSimple Device Manager that your appliance is registered with.
Device
Select the StorSimple device that holds the volume you want to migrate.
Volume
Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the
target Azure file share.
Volume backups
You can select Select volume backups to choose specific backups to move as part of this job. An upcoming,
dedicated section in this article covers the process in detail.
Target
Select the subscription, storage account, and Azure file share as the target of this migration job.
Directory mapping
A dedicated section in this article discusses all relevant details.
Selecting volume backups to migrate
There are important aspects to consider when choosing the backups to migrate:
Your migration jobs can only move backups, not live volume data. So the most recent backup is closest to the
live data and should always be on the list of backups moved in a migration. When you open the Backup
selection dialog, it is selected by default.
Make sure your latest backup is recent to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup before creating a migration job. A small delta to the live share will improve your migration experience. If the delta is zero (that is, no changes were made to the StorSimple volume after the newest backup in your list was taken), then Phase 5: User cut-over will be drastically simplified and sped up.
Backups must be played back into the Azure file share from oldest to newest . An older backup cannot be
"sorted into" the list of backups on the Azure file share after a migration job has run. Therefore you must
ensure that your list of backups is complete before you create a job.
This list of backups in a job cannot be modified once the job is created - even if the job never ran.
In order to select backups, the StorSimple volume you want to migrate must be online.
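The ordering and count rules above lend themselves to a quick sanity check before you create a job. This is an illustrative sketch; the backup names and dates are hypothetical.

```python
from datetime import datetime

def order_backups(backups):
    """Backups must be migrated oldest to newest, and selecting more than
    50 backups per job isn't supported. backups: list of
    (name, taken_at) tuples. Returns the list in migration order."""
    if len(backups) > 50:
        raise ValueError("Selecting more than 50 backups per job isn't supported")
    return sorted(backups, key=lambda b: b[1])

backups = [("weekly", datetime(2023, 1, 8)),
           ("daily", datetime(2023, 1, 14)),
           ("monthly", datetime(2023, 1, 1))]
ordered = order_backups(backups)
assert [name for name, _ in ordered] == ["monthly", "weekly", "daily"]
```

Because an older backup can't be sorted in after a job has run, validating the full, ordered list up front is the safest point to catch omissions.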
To select backups of your StorSimple volume for your migration job, select Select volume backups on the job creation form.
When the backup selection blade opens, it is separated into two lists. The first list displays all available backups. You can expand or narrow the result set by filtering for a specific time range (see the next section). A selected backup displays as grayed out and is added to the second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can be removed again.
Caution
You must select all backups you wish to migrate. You cannot add older backups later on. You cannot modify the
job to change your selection once the job is created.
By default, the list is filtered to show the StorSimple volume backups within the past seven days. The most
recent backup is selected by default, even if it didn't occur in the past seven days. For older backups, use the time
range filter at the top of the blade. You can either select from an existing filter or set a custom time range to filter
for only the backups taken during this period.
Caution
Selecting more than 50 StorSimple volume backups isn't supported. Jobs with a large number of backups may fail. Make sure your backup retention policies don't delete a selected backup before it gets a chance to be migrated!
Directory mapping
Directory mapping is optional for your migration job. If you leave the section empty, all the files and folders on
the root of your StorSimple volume will be moved into the root of your target Azure file share. In most cases,
storing an entire volume's content in an Azure file share isn't the best approach. It's often better to split a
volume's content across multiple file shares in Azure. If you haven't made a plan already, see Map your
StorSimple volume to Azure file shares first.
As part of your migration plan, you might have decided that the folders on a StorSimple volume need to be split
across multiple Azure file shares. If that's the case, you can accomplish that split by:
1. Defining multiple jobs to migrate the folders on one volume. Each will have the same StorSimple volume
source but a different Azure file share as the target.
2. Specifying precisely which folders from the StorSimple volume need to be migrated into the specified file
share by using the Directory-mapping section of the job creation form and following the specific mapping
semantics.
IMPORTANT
The paths and mapping expressions in this form can't be validated when the form is submitted. If mappings are specified
incorrectly, a job might either fail completely or produce an undesirable result. In that case, it's usually best to delete the
Azure file share, re-create it, and then fix the mapping statements in a new migration job for the share. Running a new job
with fixed mapping statements can fix omitted folders and bring them into the existing share. However, only folders that
were omitted because of path misspellings can be addressed this way.
Semantic elements
A mapping is expressed from left to right: [\source path] > [\target path].
Semantic characters:
\ : Root level indicator and path separator.
> : Mapping operator between [\source path] and [\target path].
Examples
Moves the content of folder User data to the root of the target file share:
\User data > \
Moves the entire volume content into a new path on the target file share:
\ > \Apps\HR data
Moves the source folder content into a new path on the target file share:
\HR data > \HR
Semantic rules
Always specify folder paths relative to the root level.
Begin each folder path with a root level indicator "\".
Don't include drive letters.
When specifying multiple paths, source or target paths can't overlap:
Invalid source path overlap example:
\folder\1 > \folder
\folder\1\2 > \folder2
Invalid target path overlap example:
\folder > \
\folder2 > \
Source folders that don't exist will be ignored.
Folder structures that don't exist on the target will be created.
Like Windows, folder names are case insensitive but case preserving.
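Because mapping expressions aren't validated when the form is submitted, it can be worth checking your planned mappings against the rules above before creating the job. The following is an informal sketch of those rules, not official tooling; the mapping strings are illustrative.

```python
def _overlaps(a: str, b: str) -> bool:
    # True if one path is nested under the other, or they're identical
    # (case-insensitive, like Windows folder names).
    a, b = a.lower().rstrip("\\"), b.lower().rstrip("\\")
    return a == b or b.startswith(a + "\\") or a.startswith(b + "\\")

def validate_mappings(mappings):
    """mappings: list of 'source > target' strings, per the rules above.
    Returns a list of problems found (empty list means plausible)."""
    problems = []
    pairs = [[p.strip() for p in m.split(">")] for m in mappings]
    for src, dst in pairs:
        for path in (src, dst):
            if not path.startswith("\\"):
                problems.append(f"path must start with a root indicator: {path!r}")
            if ":" in path:
                problems.append(f"drive letters aren't allowed: {path!r}")
    for side in (0, 1):  # 0 = source paths, 1 = target paths
        paths = [p[side] for p in pairs]
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                if _overlaps(paths[i], paths[j]):
                    problems.append(f"overlap: {paths[i]!r} and {paths[j]!r}")
    return problems

# The invalid overlap examples above are flagged:
assert validate_mappings(["\\folder\\1 > \\folder", "\\folder\\1\\2 > \\folder2"])
assert validate_mappings(["\\folder > \\", "\\folder2 > \\"])
# A non-overlapping plan passes:
assert validate_mappings(["\\HR > \\hr", "\\Finance > \\finance"]) == []
```

A check like this can't catch misspelled source folders (those are silently ignored by the job), but it does catch the overlap and root-indicator mistakes that would otherwise surface only after the job runs.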
NOTE
Contents of the \System Volume Information folder and the $Recycle.Bin on your StorSimple volume won't be copied by
the migration job.
Initially, the migration job will have the status Never ran.
When you're ready, you can start this migration job.
When a backup was successfully migrated, an automatic Azure file share snapshot will be taken. The original
backup date of your StorSimple backup will be placed in the Comments section of the Azure file share snapshot.
Utilizing this field will allow you to see when the data was originally backed up as compared to the time the file
share snapshot was taken.
Caution
Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups may get deleted by one or more of your retention policies before they get a chance to be migrated!
Per-item errors
The migration jobs have two columns in the list of backups that list any issues that may have occurred during
the copy:
Copy errors
This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select Retry backup. This option becomes available once the backup has finished processing. The Manage a migration job section explains your options in more detail.
Unsupported files
This column lists files or folders that can't be migrated. Azure Storage has limitations on file names and path lengths, and some file types currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors, and retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup, and you find in the copy log that the failure was due to a file name, path length, or other issue you can influence, you may want to remedy the issue on the live StorSimple volume, take a StorSimple volume backup, and create a new migration job with just that backup. You will then migrate this remedied namespace, and it will become the most recent (live) version of the Azure file share. This is a manual and time-consuming process. Review the copy logs carefully and evaluate whether it's worth it.
These copy logs are *.csv files listing namespace items that succeeded and items that failed to get copied. The errors are further split into the previously discussed categories. From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There may be extra logs of 17 bytes in size; they are empty and can be ignored. With a sort, you can focus on the logs with content.
The same process applies for log files recording successful copies.
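The log triage described above can be scripted. This sketch assumes the logs sit in a single folder and that "failed" appears in the relevant file names; adjust the pattern to match your actual log naming.

```python
import tempfile
from pathlib import Path

def interesting_failed_logs(log_dir):
    """Scan a copy-log folder for logs of failed items and drop the
    empty ~17-byte placeholder logs, as suggested above.
    The file-naming pattern here is illustrative."""
    logs = [p for p in Path(log_dir).glob("*.csv") if "failed" in p.name.lower()]
    return sorted(p for p in logs if p.stat().st_size > 17)

# Demonstration with made-up log files:
with tempfile.TemporaryDirectory() as d:
    Path(d, "backup1-failed.csv").write_text("Path,Error\nshare-a.txt,Timeout\n")
    Path(d, "backup2-failed.csv").write_text("x" * 17)  # empty placeholder size
    Path(d, "backup1-succeeded.csv").write_text("Path\nshare-b.txt\n")
    assert [p.name for p in interesting_failed_logs(d)] == ["backup1-failed.csv"]
```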
Manage a migration job
Migration jobs have the following states:
Never ran
A new job that has been defined but hasn't run yet.
Waiting
A job in this state is waiting for resources to be provisioned in the migration service. It will automatically
switch to a different state when ready.
Failed
A failed job hit a fatal error that prevents it from processing more backups. A job is not expected to enter this
state. A support request is the best course of action.
Canceled / Canceling
Either an entire migration job or individual backups within the job can be canceled. Canceled backups won't be processed, and a canceled migration job will stop processing more backups. Expect that canceling a job can take a long time. This doesn't prevent you from creating a new job. The best course of action is to be patient and let the job fully arrive in the Canceled state. You can either ignore failed or canceled jobs or delete them later. You won't have to delete jobs before you can delete the Data Manager resource at the end of your StorSimple migration.
Running
A running job is currently processing a backup. Refer to the table on the bottom half of the blade to see which
backup is currently being processed and which ones might have been migrated already.
Already migrated backups have a column with a link to a copy log. If there are any errors reported for a backup,
you should review its copy log.
Paused
A migration job is paused when a decision is needed. This condition enables two command buttons at the top of the blade:
Choose Retry backup when the backup shows files that were supposed to move but didn't (the Copy errors column).
Choose Skip backup when the backup is missing (it was deleted by policy since you created the migration job) or when the backup is corrupt. You can find detailed error information in the blade that opens when you select the failed backup.
When you skip or retry the current backup, the migration service will create a new snapshot in your target Azure file share. You may want to delete the previous snapshot later; it is likely incomplete.
Complete and Complete with warnings
A migration job is listed as Complete when all backups in the job have been successfully processed.
Complete with warnings is a state that occurs when:
A backup ran into a recoverable issue. This backup is marked as partial success or failed.
You decided to continue on the paused job by skipping the backup with said issues. (You chose Skip backup
instead of Retry backup)
If the migration job completes with warnings, you should always review the copy log for the relevant backups.
Run jobs in parallel
You will likely have multiple StorSimple volumes, each with their own shares that need to be migrated to an
Azure file share. It's important that you understand how much you can do in parallel. There are limitations that
aren't enforced in the user experience and will either degrade or inhibit a complete migration if jobs are
executed at the same time.
There are no limits on defining migration jobs. You can define jobs with the same StorSimple source volume or the same Azure file share, across the same or different StorSimple appliances. However, running them has limitations:
Only one migration job with the same StorSimple source volume can run at a time.
Only one migration job with the same target Azure file share can run at a time.
Before starting the next job, ensure that any previous jobs are in the copy stage and have shown progress moving files for at least 30 minutes.
You can run up to four migration jobs in parallel per StorSimple device manager, as long as you also abide by
the previous rules.
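The service checks these rules for you, but they can also be sketched as a quick pre-flight check when you're planning which jobs to start together. The job representation below is hypothetical; it only mirrors the rules stated above.

```python
def can_start(job, running_jobs):
    """Apply the run rules above. job and running_jobs entries are dicts
    with 'source_volume', 'target_share', and 'device_manager' keys
    (an illustrative representation, not an Azure API)."""
    if any(r["source_volume"] == job["source_volume"] for r in running_jobs):
        return False  # same StorSimple source volume already running
    if any(r["target_share"] == job["target_share"] for r in running_jobs):
        return False  # same target Azure file share already running
    same_dm = [r for r in running_jobs
               if r["device_manager"] == job["device_manager"]]
    return len(same_dm) < 4  # at most four parallel jobs per device manager

running = [{"source_volume": "vol01", "target_share": "hr-data",
            "device_manager": "dm1"}]
assert not can_start({"source_volume": "vol01", "target_share": "finance-data",
                      "device_manager": "dm1"}, running)
assert can_start({"source_volume": "vol02", "target_share": "finance-data",
                  "device_manager": "dm1"}, running)
```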
When you attempt to start a migration job, the previous rules are checked. If other jobs are running, you might not be able to start the current job. You'll receive an alert that lists the names of the currently running jobs that must finish before you can start the new job.
TIP
It's a good idea to regularly check your migration jobs in the Job definition tab of your Data Manager resource, to see if
any of them have paused and need your input to complete.
Phase 3 summary
At the end of Phase 3, you'll have run at least one of your migration jobs from StorSimple volumes into Azure
file share(s). With your run, you will have migrated your specified backups into Azure file share snapshots. You
can now focus on either setting up Azure File Sync for the share (once migration jobs for a share have
completed) or direct-share-access for your information workers and apps to the Azure file share.
TIP
If you want to change the Azure region your data resides in after the migration is finished, deploy the Storage Sync
Service in the same region as the target storage accounts for this migration.
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers. You'll see your server listed there.
Configure Azure File Sync on the Windows Server instance
Your registered on-premises Windows Server instance must be ready and connected to the internet for this
process.
IMPORTANT
Your StorSimple migration of files and folders into the Azure file share must be complete before you proceed. Make sure
there are no more changes done to the file share.
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become the path for this server endpoint.
IMPORTANT
Be sure to turn on cloud tiering. Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud, yet have the full namespace available. Frequently accessed data is also cached locally for fast access. Another reason to turn on cloud tiering at this step is that we don't want to sync file content at this stage; only the namespace should be moving at this time.
WARNING
You must not start the RoboCopy before the server has the namespace for an Azure file share downloaded fully. For more
information, see Determine when your namespace has fully downloaded to your server.
You only want to copy files that were changed after the migration job last ran and files that haven't moved
through these jobs before. You can solve the problem as to why they didn't move later on the server, after the
migration is complete. For more information, see Azure File Sync troubleshooting.
RoboCopy has several parameters. The following example showcases a finished command and a list of reasons
for choosing these parameters.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>
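If you script the copy (for example, one RoboCopy job per subdirectory), it can help to assemble the command shown above as an argument list rather than a single string. This is a small sketch for that purpose; the paths are illustrative, and the switch set mirrors the command above.

```python
def build_robocopy_command(source, dest, log_path):
    """Assemble the RoboCopy invocation shown above as an argument list,
    suitable for subprocess.run on Windows. Paths are illustrative."""
    return ["robocopy", source, dest,
            "/MT:20", "/R:2", "/W:1", "/B", "/MIR", "/IT",
            "/COPY:DATSO", "/DCOPY:DAT", "/NP", "/NFL", "/NDL",
            "/XD", "System Volume Information",
            f"/UNILOG:{log_path}"]

cmd = build_robocopy_command(r"D:\HR", r"\\server\hr", r"C:\logs\hr.log")
assert cmd[0] == "robocopy" and "/MIR" in cmd and "/COPY:DATSO" in cmd
```

Passing the switches as a list avoids quoting mistakes around paths with spaces, such as the "System Volume Information" exclusion.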
Switch and meaning:
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In this case, accepting the file wasn't copied and
catching it in one of your planned, subsequent Robocopy
runs, may succeed in eventually copying the file successfully.
That helps the current run to finish faster without being
prolonged by many retries that ultimately end up in a
majority of copy failures due to files still open past the retry
timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, ensure it's at the latest patch level or that at least OS update KB5005103 is installed. It contains important fixes for certain Robocopy scenarios.
When you configure source and target locations of the RoboCopy command, make sure you review the
structure of the source and target to ensure they match. If you used the directory-mapping feature of the
migration job, your root-directory structure might be different than the structure of your StorSimple volume. If
that's the case, you might need multiple RoboCopy jobs, one for each subdirectory. If you're unsure whether the command will perform as expected, you can use the /L parameter, which simulates the command without actually making any changes.
This RoboCopy command uses /MIR, so it won't move files that are the same (tiered files, for instance). But if you
get the source and target path wrong, /MIR also purges directory structures on your Windows Server instance
or Azure file share that aren't present on the StorSimple source path. They must match exactly for the RoboCopy
job to reach its intended goal of updating your migrated content with the latest changes made while the
migration is ongoing.
Consult the RoboCopy log file to see if files have been left behind. If issues exist, fix them, and rerun the
RoboCopy command. Don't deprovision any StorSimple resources before you fix outstanding issues for files or
folders you care about.
If you don't use Azure File Sync to cache the particular Azure file share in question but instead opted for direct-
share-access:
1. Mount your Azure file share as a network drive to a local Windows machine.
2. Perform the RoboCopy between your StorSimple and the mounted Azure file share. If files don't copy, fix up
their names on the StorSimple side to remove invalid characters. Then retry RoboCopy. The previously listed
RoboCopy command can be run multiple times without causing unnecessary recall to StorSimple.
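As a sketch of step 1, mounting the Azure file share as a network drive might look like the following. The storage account name, share name, and key are placeholders for your own values:

```powershell
# Mount the Azure file share as drive Z: over SMB (outbound port 445 must be open).
# Replace mystorageaccount, myshare, and <storage-account-key> with your own values.
net use Z: \\mystorageaccount.file.core.windows.net\myshare /user:Azure\mystorageaccount <storage-account-key>
```

With the share mounted, the RoboCopy in step 2 targets the mounted drive letter as its destination path.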
Troubleshoot and optimize
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider abilities of the source storage , the target storage , and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to its best abilities.
Caution
While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical, tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays by specifying the /IPG:n switch, where n is measured in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolizing resources on both IO-constrained
devices and crowded network links.
/IPG:n cannot be used for precise network throttling to a certain Mbps. Use Windows Server Network QoS
instead. RoboCopy entirely relies on the SMB protocol for all networking needs. Using SMB is the reason why
RoboCopy can't influence the network throughput itself, but it can slow down its use.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
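As an illustration, a throttled run might add the switch like this. The delay value is an example starting point to tune for your environment, not a recommendation:

```powershell
# Insert a 20 ms inter-packet gap to reduce pressure on the NAS and the network.
# Increase n in steps (e.g. 40, 60) until other workloads perform acceptably.
robocopy <SourcePath> <Dest.Path> /MIR /IPG:20 /NP /UNILOG:C:\Robocopy.log
```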
Processing speed
RoboCopy will traverse the namespace it's pointed to and evaluate each file and folder for copy. Every file will be
evaluated during an initial copy and during catch-up copies, for example repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of files migrated.
We often default to considering bandwidth as the most limiting factor in a migration - and that can be true. But
the ability to enumerate a namespace can influence the total time to copy even more for larger namespaces with
smaller files. Consider that copying 1 TiB of small files will take considerably longer than copying 1 TiB of fewer
but larger files, assuming that all other variables remain the same.
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide. Most common are two threads per core. The core and thread
count of a machine is an important data point to decide what multi-thread values /MT:n you should specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.
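To pick a starting /MT:n value for these differential runs, you can check the logical processor count on the machine. A hedged sketch, with placeholder paths:

```powershell
# Logical processors = cores x threads per core (commonly two threads per core).
$threads = [Environment]::ProcessorCount

# For catch-up /MIR runs, match the thread count to what's actually available.
robocopy <SourcePath> <Dest.Path> /MIR /MT:$threads /NP /UNILOG:C:\Robocopy.log
```

If the server runs other production workloads, subtract the cores you want to reserve before setting /MT:n.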
TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs will copy fewer differences, and you're more likely to shift from
being network-throughput constrained to compute constrained. Under these circumstances, it's often better to match the
RoboCopy thread count to the threads actually available on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
Phase 6: Deprovision
When you deprovision a resource, you lose access to the configuration of that resource and its data.
Deprovisioning can't be undone. Don't proceed until you've confirmed that:
Your migration is complete.
There are no dependencies whatsoever on the StorSimple files, folders, or volume backups that you're about
to deprovision.
Before you begin, it's a best practice to observe your new Azure File Sync deployment in production for a while.
That time gives you the opportunity to fix any problems you might encounter. After you've observed your Azure
File Sync deployment for at least a few days, you can begin to deprovision resources in this order:
1. Deprovision your StorSimple Data Manager resource via the Azure portal. All of your DTS jobs will be deleted
with it. You won't be able to easily retrieve the copy logs. If they're important for your records, retrieve them
before you deprovision.
2. Make sure that your StorSimple physical appliances have been migrated, and then unregister them. If you
aren't sure that they've been migrated, don't proceed. If you deprovision these resources while they're still
necessary, you won't be able to recover the data or their configuration.
Optionally you can first deprovision the StorSimple volume resource, which will clean up the data on the
appliance. This process can take several days and won't forensically zero out the data on the appliance. If this
is important to you, handle disk zeroing separately from the resource deprovisioning and according to your
policies.
3. If there are no more registered devices left in a StorSimple Device Manager, you can proceed to remove that
Device Manager resource itself.
4. It's now time to delete the StorSimple storage account in Azure. Again, stop and confirm your migration is
complete and that nothing and no one depends on this data before you proceed.
5. Unplug the StorSimple physical appliance from your data center.
6. If you own the StorSimple appliance, you're free to PC Recycle it. If your device is leased, inform the lessor
and return the device as appropriate.
Your migration is complete.
NOTE
Still have questions or encountered any issues?
We're here to help:
Next steps
Get more familiar with Azure File Sync: aka.ms/AFS.
Understand the flexibility of cloud tiering policies.
Enable Azure Backup on your Azure file shares to schedule snapshots and define backup retention schedules.
If you see in the Azure portal that some files are permanently not syncing, review the Troubleshooting guide
for steps to resolve these issues.
StorSimple 1200 migration to Azure File Sync
5/20/2022 • 33 minutes to read
StorSimple 1200 series is a virtual appliance that is run in an on-premises data center. It is possible to migrate
the data from this appliance to an Azure File Sync environment. Azure File Sync is the default and strategic long-
term Azure service that StorSimple appliances can be migrated to.
StorSimple 1200 series will reach its end of life in December 2022. It's important to begin planning your
migration as soon as possible. This article provides the necessary background knowledge and migration steps
for a successful migration to Azure File Sync.
Applies to
File share type: SMB, NFS
Migration goals
The goal is to guarantee the integrity of the production data and to guarantee its availability. The latter requires
keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
The previous image depicts steps that correspond to sections in this article.
Step 1: Provision your on-premises Windows Server and storage
1. Create a Windows Server 2019 instance - at a minimum Windows Server 2012 R2 - as a virtual machine or
physical server. A Windows Server failover cluster is also supported.
2. Provision or add Direct Attached Storage (DAS as compared to NAS, which is not supported). The size of the
Windows Server storage must be equal to or larger than the size of the available capacity of your virtual
StorSimple 1200 appliance.
Step 2: Configure your Windows Server storage
In this step, you map your StorSimple storage structure (volumes and shares) to your Windows Server storage
structure. If you plan to make changes to your storage structure, meaning the number of volumes, the
association of data folders to volumes, or the subfolder structure above or below your current SMB/NFS shares,
then now is the time to take those changes into consideration. Changing your file and folder structure after
Azure File Sync is configured is cumbersome and should be avoided. This article assumes you're mapping 1:1,
so you must take your mapping changes into consideration when you follow the steps in this article.
None of your production data should end up on the Windows Server system volume, because cloud tiering is
not supported on system volumes. However, this feature is required for the migration as well as for continuous
operations as a StorSimple replacement.
Provision the same number of volumes on your Windows Server as you have on your StorSimple 1200
virtual appliance.
Configure any Windows Server roles, features, and settings you need. We recommend you opt into Windows
Server updates to keep your OS safe and up to date. Similarly, we recommend opting into Microsoft Update
to keep Microsoft applications up to date, including the Azure File Sync agent.
Do not configure any folders or shares before reading the following steps.
Step 3: Deploy the first Azure File Sync cloud resource
To complete this step, you need your Azure subscription credentials.
The core resource to configure for Azure File Sync is called a Storage Sync Service. We recommend that you
deploy only one for all servers that are syncing the same set of files now or in the future. Create multiple
Storage Sync Services only if you have distinct sets of servers that must never exchange data. For example, you
might have servers that must never sync the same Azure file share. Otherwise, using a single Storage Sync
Service is the best practice.
Choose an Azure region for your Storage Sync Service that's close to your location. All other cloud resources
must be deployed in the same region. To simplify management, create a new resource group in your
subscription that houses sync and storage resources.
For more information, see the section about deploying the Storage Sync Service in the article about deploying
Azure File Sync. Follow only this section of the article. There will be links to other sections of the article in later
steps.
Step 4: Match your local volume and folder structure to Azure File Sync and Azure file share resources
In this step, you'll determine how many Azure file shares you need. A single Windows Server instance (or
cluster) can sync up to 30 Azure file shares.
You might have more folders on your volumes that you currently share out locally as SMB shares to your users
and apps. The easiest way to picture this scenario is to envision an on-premises share that maps 1:1 to an Azure
file share. If you have a small enough number of shares, below 30 for a single Windows Server instance, we
recommend a 1:1 mapping.
If you have more than 30 shares, mapping an on-premises share 1:1 to an Azure file share is often unnecessary.
Consider the following options.
Share grouping
For example, if your human resources (HR) department has 15 shares, you might consider storing all the HR
data in a single Azure file share. Storing multiple on-premises shares in one Azure file share doesn't prevent you
from creating the usual 15 SMB shares on your local Windows Server instance. It only means that you organize
the root folders of these 15 shares as subfolders under a common folder. You then sync this common folder to
an Azure file share. That way, only a single Azure file share in the cloud is needed for this group of on-premises
shares.
Volume sync
Azure File Sync supports syncing the root of a volume to an Azure file share. If you sync the volume root, all
subfolders and files will go to the same Azure file share.
Syncing the root of the volume isn't always the best option. There are benefits to syncing multiple locations. For
example, doing so helps keep the number of items lower per sync scope. We test Azure file shares and Azure
File Sync with 100 million items (files and folders) per share. But a best practice is to try to keep the number
below 20 million or 30 million in a single share. Setting up Azure File Sync with a lower number of items isn't
beneficial only for file sync. A lower number of items also benefits scenarios like these:
Initial scan of the cloud content can complete faster, which in turn decreases the wait for the namespace to
appear on a server enabled for Azure File Sync.
Cloud-side restore from an Azure file share snapshot will be faster.
Disaster recovery of an on-premises server can speed up significantly.
Changes made directly in an Azure file share (outside of sync) can be detected and synced faster.
TIP
If you don't know how many files and folders you have, check out the TreeSize tool from JAM Software GmbH.
TIP
Given this information, it often becomes necessary to group multiple top-level folders on your volumes into a new
common root directory. You then sync this new root directory, and all the folders you grouped into it, to a single Azure
file share. This technique allows you to stay within the limit of 30 Azure file share syncs per server.
This grouping under a common root doesn't affect access to your data. Your ACLs stay as they are. You only need to
adjust any share paths (like SMB or NFS shares) you might have on the local server folders that you now changed into a
common root. Nothing else changes.
IMPORTANT
The most important scale vector for Azure File Sync is the number of items (files and folders) that need to be synced.
Review the Azure File Sync scale targets for more details.
It's a best practice to keep the number of items per sync scope low. That's an important factor to consider in
your mapping of folders to Azure file shares. Azure File Sync is tested with 100 million items (files and folders)
per share. But it's often best to keep the number of items below 20 million or 30 million in a single share. Split
your namespace into multiple shares if you start to exceed these numbers. You can continue to group multiple
on-premises shares into the same Azure file share if you stay roughly below these numbers. This practice will
provide you with room to grow.
It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the
new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync
to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file
share balanced across the server. You can also split your on-premises shares and sync across more on-premises
servers, adding the ability to sync with 30 more Azure file shares per extra server.
Create a mapping table
Use the previous information to determine how many Azure file shares you need and which parts of your
existing data will end up in which Azure file share.
Create a table that records your thoughts so you can refer to it when you need to. Staying organized is
important because it can be easy to lose details of your mapping plan when you're provisioning many Azure
resources at once. Download the following Excel file to use as a template to help create your mapping.
If you create an Azure file share that has a 100 TiB limit, that share can use only locally redundant storage or
zone-redundant storage redundancy options. Consider your storage redundancy needs before using 100-TiB file
shares.
Azure file shares are still created with a 5 TiB limit by default. Follow the steps in Create an Azure file share to
create a large file share.
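As a sketch, a large file share can also be created from PowerShell, assuming the storage account already has large file shares enabled. The resource names are placeholders:

```powershell
# Create an Azure file share with a 100 TiB (102,400 GiB) quota in an existing
# storage account. Replace the resource group, account, and share names with yours.
New-AzRmStorageShare -ResourceGroupName "myRG" -StorageAccountName "mystorageaccount" `
    -Name "myshare" -QuotaGiB 102400
```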
Another consideration when you're deploying a storage account is the redundancy of Azure Storage. See Azure
Storage redundancy options.
The names of your resources are also important. For example, if you group multiple shares for the HR
department into an Azure storage account, you should name the storage account appropriately. Similarly, when
you name your Azure file shares, you should use names similar to the ones used for their on-premises
counterparts.
Storage account settings
There are many configurations you can make on a storage account. Use the following checklist for your storage
account configuration. You can change some settings, for instance the networking configuration, after your
migration is complete.
Large file shares: Enabled - Large file shares improve performance and allow you to store up to 100 TiB in a
share.
Firewall and virtual networks: Disabled - do not configure any IP restrictions or limit storage account access
to a specific VNET. The public endpoint of the storage account is used during the migration. All IP addresses
from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the
migration.
Private Endpoints: Supported - You can enable private endpoints but the public endpoint is used for the
migration and must remain available.
Step 6: Configure Windows Server target folders
In previous steps, you considered all aspects that will determine the components of your sync topology. It's
now time to prepare the server to receive files for upload.
Create all the folders that will each sync to their own Azure file share. It's important that you follow the folder
structure you've documented earlier. If, for instance, you've decided to sync multiple local SMB shares
together into a single Azure file share, then you need to place them under a common root folder on the volume.
Create this target root folder on the volume now.
The number of Azure file shares you've provisioned should match the number of folders you've created in
this step plus the number of volumes you'll sync at the root level.
Step 7: Deploy the Azure File Sync agent
In this section, you install the Azure File Sync agent on your Windows Server instance.
The deployment guide explains that you need to turn off Internet Explorer Enhanced Security
Configuration . This security measure isn't applicable with Azure File Sync. Turning it off allows you to
authenticate to Azure without any problems.
Open PowerShell. Install the required PowerShell modules by using the following commands. Be sure to install
the full module and the NuGet provider when you're prompted to do so.
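The deployment guide lists the exact modules to install; as a sketch, installing the Az module (which includes the Az.StorageSync cmdlets) typically looks like this:

```powershell
# Install the Az PowerShell module for the current user.
# Confirm the NuGet provider prompt when asked.
Install-Module -Name Az -AllowClobber -Scope CurrentUser
```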
If you have any problems reaching the internet from your server, now is the time to solve them. Azure File Sync
uses any available network connection to the internet. Requiring a proxy server to reach the internet is also
supported. You can either configure a machine-wide proxy now or, during agent installation, specify a proxy that
only Azure File Sync will use.
If configuring a proxy means you need to open your firewalls for the server, that approach might be acceptable
to you. At the end of the server installation, after you've completed server registration, a network connectivity
report will show you the exact endpoint URLs in Azure that Azure File Sync needs to communicate with for the
region you've selected. The report also tells you why communication is needed. You can use the report to lock
down the firewalls around the server to specific URLs.
You can also take a more conservative approach in which you don't open the firewalls wide. You can instead
limit the server to communicate with higher-level DNS namespaces. For more information, see Azure File Sync
proxy and firewall settings. Follow your own networking best practices.
At the end of the server installation wizard, a server registration wizard will open. Register the server to your
Storage Sync Service's Azure resource from earlier.
These steps are described in more detail in the deployment guide, which includes the PowerShell modules that
you should install first: Azure File Sync agent installation.
Use the latest agent. You can download it from the Microsoft Download Center: Azure File Sync Agent.
After a successful installation and server registration, you can confirm that you've successfully completed this
step. Go to the Storage Sync Service resource in the Azure portal. In the left menu, go to Registered servers.
You'll see your server listed there.
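You can also confirm the registration from PowerShell. A sketch assuming a resource group named myRG and a Storage Sync Service named mySSS (both placeholders):

```powershell
# Sign in, then list all servers registered to the Storage Sync Service.
Connect-AzAccount
Get-AzStorageSyncServer -ResourceGroupName "myRG" -StorageSyncServiceName "mySSS"
```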
Step 8: Configure sync
This step ties together all the resources and folders you've set up on your Windows Server instance during the
previous steps.
1. Sign in to the Azure portal.
2. Locate your Storage Sync Service resource.
3. Create a new sync group within the Storage Sync Service resource for each Azure file share. In Azure File
Sync terminology, the Azure file share will become a cloud endpoint in the sync topology that you're
describing with the creation of a sync group. When you create the sync group, give it a familiar name so that
you recognize which set of files syncs there. Make sure you reference the Azure file share with a matching
name.
4. After you create the sync group, a row for it will appear in the list of sync groups. Select the name (a link) to
display the contents of the sync group. You'll see your Azure file share under Cloud endpoints .
5. Locate the Add Server Endpoint button. The folder on the local server that you've provisioned will become
the path for this server endpoint.
WARNING
Be sure to turn on cloud tiering! This is required if your local server does not have enough space to store the total
size of your data in the StorSimple cloud storage. Set your tiering policy, temporarily for the migration, to 99% volume
free space.
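The same steps can be scripted with the Az.StorageSync cmdlets. A hedged sketch with placeholder names; $storageAccount and $registeredServer are assumed to hold objects retrieved earlier, for example via Get-AzStorageAccount and Get-AzStorageSyncServer:

```powershell
# Create the sync group, the cloud endpoint (the Azure file share), and the server
# endpoint, with cloud tiering at 99% volume free space for the migration phase.
New-AzStorageSyncGroup -ResourceGroupName "myRG" -StorageSyncServiceName "mySSS" `
    -Name "mysyncgroup"

New-AzStorageSyncCloudEndpoint -ResourceGroupName "myRG" -StorageSyncServiceName "mySSS" `
    -SyncGroupName "mysyncgroup" -Name "mycloudendpoint" `
    -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "myshare"

New-AzStorageSyncServerEndpoint -ResourceGroupName "myRG" -StorageSyncServiceName "mySSS" `
    -SyncGroupName "mysyncgroup" -Name "myserverendpoint" `
    -ServerResourceId $registeredServer.ResourceId -ServerLocalPath "D:\SyncRoot" `
    -CloudTiering -VolumeFreeSpacePercent 99
```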
Repeat the steps of sync group creation and addition of the matching server folder as a server endpoint for all
Azure file shares and server locations that need to be configured for sync.
Step 9: Copy your files
The basic migration approach is a RoboCopy from your StorSimple virtual appliance to your Windows Server,
and Azure File Sync to Azure file shares.
Run the first local copy to your Windows Server target folder:
Identify the first location on your virtual StorSimple appliance.
Identify the matching folder on the Windows Server that already has Azure File Sync configured on it.
Start the copy using RoboCopy.
The following RoboCopy command will recall files from your StorSimple Azure storage to your local StorSimple
and then move them over to the Windows Server target folder. The Windows Server will sync it to the Azure file
share(s). As the local Windows Server volume gets full, cloud tiering will kick in and tier files that have
successfully synced already. Cloud tiering will generate enough space to continue the copy from the StorSimple
virtual appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach
the 99% volume free space.
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD
"System Volume Information" /UNILOG:<FilePathAndName>
SWITCH    MEANING
/R:n Maximum retry count for a file that fails to copy on first
attempt. Robocopy will try n times before the file
permanently fails to copy in the run. You can optimize the
performance of your run: Choose a value of two or three if
you believe timeout issues caused failures in the past. This
may be more common over WAN links. Choose no retry or a
value of one if you believe the file failed to copy because it
was actively in use. Trying again a few seconds later may not
be enough time for the in-use state of the file to change.
Users or apps holding the file open may need hours more
time. In this case, accepting the file wasn't copied and
catching it in one of your planned, subsequent Robocopy
runs, may succeed in eventually copying the file successfully.
That helps the current run to finish faster without being
prolonged by many retries that ultimately end up in a
majority of copy failures due to files still open past the retry
timeout.
/COPY:[copyflags] The fidelity of the file copy. Default: /COPY:DAT . Copy flags:
D = Data, A = Attributes, T = Timestamps, S = Security
= NTFS ACLs, O = Owner information, U = Auditing
information. Auditing information can't be stored in an Azure
file share.
/NP Specifies that the progress of the copy for each file and
folder won't be displayed. Displaying the progress
significantly lowers copy performance.
/UNILOG:<file name> Writes status to the log file as Unicode. (Overwrites the
existing log.)
/LFSM Only for targets with tiered storage. Not supported
when the destination is a remote SMB share.
Specifies that Robocopy operates in "low free space mode."
This switch is useful only for targets with tiered storage that
might run out of local capacity before Robocopy finishes. It
was added specifically for use with a target enabled for Azure
File Sync cloud tiering. It can be used independently of Azure
File Sync. In this mode, Robocopy will pause whenever a file
copy would cause the destination volume's free space to go
below a "floor" value. This value can be specified by the
/LFSM:n form of the flag. The parameter n is specified in
base 2: nKB , nMB , or nGB . If /LFSM is specified with no
explicit floor value, the floor is set to 10 percent of the
destination volume's size. Low free space mode isn't
compatible with /MT , /EFSRAW , or /ZB . Support for /B
was added in Windows Server 2022.
/Z Use cautiously
Copies files in restart mode. This switch is recommended
only in an unstable network environment. It significantly
reduces copy performance because of extra logging.
IMPORTANT
We recommend using Windows Server 2022. If you use Windows Server 2019, make sure it's at the latest patch level or at
least has OS update KB5005103 installed. It contains important fixes for certain Robocopy scenarios.
When you run the RoboCopy command for the first time, your users and applications are still accessing the
StorSimple files and folders and potentially changing them. It's possible that RoboCopy has processed a
directory and moved on to the next, and then a user on the source location (StorSimple) adds, changes, or
deletes a file that won't be processed in this current RoboCopy run. That's fine.
The first run is about moving the bulk of the data back to on-premises, over to your Windows Server, and
backing it up into the cloud via Azure File Sync. This can take a long time, depending on:
your download bandwidth
the recall speed of the StorSimple cloud service
the upload bandwidth
the number of items (files and folders) that need to be processed by either service
Once the initial run is complete, run the command again.
The second time, it will finish faster because it only needs to transport changes that happened since the last run.
Those changes are likely local to the StorSimple already, because they're recent. That further reduces the
time, because the need for recall from the cloud is reduced. During this second run, new changes can still
accumulate.
Repeat this process until you are satisfied that the amount of time it takes to complete is an acceptable
downtime.
When you consider the downtime acceptable and you're prepared to take the StorSimple location offline, do so
now: For example, remove the SMB share so that no user can access the folder, or take any other appropriate
step that prevents content from changing in this folder on StorSimple.
Run one last RoboCopy round. It will pick up any changes that might have been missed. How long this final
step takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your
downtime) by measuring how long the previous run took.
Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure
to set the same share-level permissions as on your StorSimple SMB share.
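A sketch of creating the share, with a placeholder path and security group; mirror the share-level permissions your StorSimple SMB share used:

```powershell
# Expose the migrated folder as an SMB share on the Windows Server.
# Replace the share name, path, and group with your own values.
New-SmbShare -Name "HR" -Path "D:\SyncRoot\HR" -FullAccess "CONTOSO\HR-Users"
```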
You have finished migrating a share or a group of shares into a common root or volume, depending on what
you mapped and decided needed to go into the same Azure file share.
You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share
at a time.
WARNING
Once you've moved all the data from your StorSimple to the Windows Server, and your migration is complete: Return to
all sync groups in the Azure portal and adjust the cloud tiering volume free space percent value to something better
suited for cache utilization, say 20%.
The cloud tiering volume free space policy acts on a volume level, with potentially multiple server endpoints
syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue to apply
the most restrictive rule and attempt to keep 99% free disk space, and the local cache won't perform as you
might expect - unless it's your goal to only have the namespace for a volume that only contains rarely accessed,
archival data.
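The free space policy can also be adjusted per server endpoint from PowerShell. A sketch with the same placeholder names as before:

```powershell
# Relax the migration-time 99% policy to a cache-friendly 20% on a server endpoint.
# Repeat for every server endpoint syncing from the volume.
Set-AzStorageSyncServerEndpoint -ResourceGroupName "myRG" -StorageSyncServiceName "mySSS" `
    -SyncGroupName "mysyncgroup" -Name "myserverendpoint" `
    -CloudTiering -VolumeFreeSpacePercent 20
```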
Troubleshoot
The most likely issue you can run into is that the RoboCopy command fails with "Volume full" on the Windows
Server side. If that's the case, your download speed is likely better than your upload speed. Cloud tiering
acts once every hour to evacuate content that has synced from the local Windows Server disk.
Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows
Server.
When your Windows Server has sufficient available capacity, rerunning the command will resolve the problem.
Nothing breaks when you get into this situation, and you can move forward with confidence. The inconvenience
of running the command again is the only consequence.
You can also run into other Azure File Sync issues. As unlikely as they may be, if that happens, take a look at the
Azure File Sync troubleshooting guide.
Speed and success rate of a given RoboCopy run will depend on several factors:
IOPS on the source and target storage
the available network bandwidth between source and target
the ability to quickly process files and folders in a namespace
the number of changes between RoboCopy runs
IOPS and bandwidth considerations
In this category, you need to consider the capabilities of the source storage , the target storage , and the network
connecting them. The maximum possible throughput is determined by the slowest of these three components.
Make sure your network infrastructure is configured to support optimal transfer speeds to the best of its abilities.
Caution
While copying as fast as possible is often most desirable, consider the utilization of your local network and
NAS appliance for other, often business-critical, tasks.
Copying as fast as possible might not be desirable when there's a risk that the migration could monopolize
available resources.
Consider when it's best in your environment to run migrations: during the day, off-hours, or during
weekends.
Also consider networking QoS on a Windows Server to throttle the RoboCopy speed.
Avoid unnecessary work for the migration tools.
RoboCopy can insert inter-packet delays via the /IPG:n switch, where n is the gap in milliseconds
between RoboCopy packets. Using this switch can help avoid monopolizing resources on both I/O-
constrained devices and crowded network links.
/IPG:n cannot be used for precise network throttling to a certain Mbps. Use Windows Server network QoS
instead. RoboCopy relies entirely on the SMB protocol for all its networking needs, which is why
it can't influence network throughput itself; it can only slow down its own use of the network.
A similar line of thought applies to the IOPS observed on the NAS. The cluster size on the NAS volume, packet
sizes, and an array of other factors influence the observed IOPS. Introducing inter-packet delay is often the
easiest way to control the load on the NAS. Test multiple values, for instance from about 20 milliseconds (n=20)
to multiples of that number. Once you introduce a delay, you can evaluate if your other apps can now work as
expected. This optimization strategy will allow you to find the optimal RoboCopy speed in your environment.
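The effect of /IPG:n can be estimated with simple arithmetic. The sketch below assumes RoboCopy sends one 64 KiB packet per interval (an illustrative assumption; the actual transfer unit varies) and computes the throughput ceiling that a given inter-packet gap imposes:

```python
def max_throughput_mbps(ipg_ms: float, packet_kib: int = 64) -> float:
    """Approximate throughput ceiling imposed by an inter-packet gap.

    Ignores network and disk latency, so this is a best case:
    at most one packet every ipg_ms milliseconds.
    """
    packets_per_second = 1000.0 / ipg_ms
    bytes_per_second = packets_per_second * packet_kib * 1024
    return bytes_per_second * 8 / 1_000_000  # megabits per second

# With /IPG:20, roughly 26 Mbps can flow at most, regardless of link speed.
print(f"{max_throughput_mbps(20):.1f} Mbps")
```

Doubling the gap roughly halves the ceiling, which is why testing a few values around 20 milliseconds quickly reveals a setting that leaves headroom for other workloads.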
Processing speed
RoboCopy traverses the namespace it's pointed to and evaluates each file and folder for copy. Every file is
evaluated during the initial copy and during each catch-up copy, for example on repeated runs of RoboCopy /MIR
against the same source and target storage locations. These repeated runs are useful to minimize downtime for
users and apps, and to improve the overall success rate of migrated files.
We often default to considering bandwidth as the most limiting factor in a migration, and that can be true. But
the ability to enumerate a namespace can influence the total copy time even more for larger namespaces with
smaller files. Copying 1 TiB of small files takes considerably longer than copying 1 TiB of fewer
but larger files, assuming all other variables remain the same.
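This effect can be made concrete with a rough model: total copy time is per-item processing overhead times the item count, plus bytes over bandwidth. The overhead and bandwidth figures below are illustrative assumptions, not measurements:

```python
def copy_time_hours(total_bytes: int, file_count: int,
                    per_file_overhead_s: float = 0.01,
                    bandwidth_bytes_per_s: float = 100 * 1024 * 1024) -> float:
    """Rough copy-time model: per-item processing cost plus transfer cost."""
    processing = file_count * per_file_overhead_s
    transfer = total_bytes / bandwidth_bytes_per_s
    return (processing + transfer) / 3600

one_tib = 1024 ** 4
# 1 TiB as ten million ~100-KiB files vs. one thousand ~1-GiB files
print(f"small: {copy_time_hours(one_tib, 10_000_000):.1f} h")
print(f"large: {copy_time_hours(one_tib, 1_000):.1f} h")
```

With these assumptions the small-file copy takes roughly ten times as long, and the gap comes almost entirely from the per-item term.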
The cause for this difference is the processing power needed to walk through a namespace. RoboCopy supports
multi-threaded copies through the /MT:n parameter where n stands for the number of threads to be used. So
when provisioning a machine specifically for RoboCopy, consider the number of processor cores and their
relationship to the thread count they provide (two threads per core is most common). The core and thread
count of a machine is an important data point for deciding what /MT:n value to specify.
Also consider how many RoboCopy jobs you plan to run in parallel on a given machine.
More threads will copy our 1-TiB example of small files considerably faster than fewer threads. At the same time,
the extra resource investment on our 1 TiB of larger files may not yield proportional benefits. A high thread
count will attempt to copy more of the large files over the network simultaneously. This extra network activity
increases the probability of getting constrained by throughput or storage IOPS.
During a first RoboCopy into an empty target or a differential run with lots of changed files, you are likely
constrained by your network throughput. Start with a high thread count for an initial run. A high thread count,
even beyond your currently available threads on the machine, helps saturate the available network bandwidth.
Subsequent /MIR runs are progressively impacted by processing items. Fewer changes in a differential run
mean less transport of data over the network. Your speed is now more dependent on your ability to process
namespace items than to move them over the network link. For subsequent runs, match your thread count
value to your processor core count and thread count per core. Consider if cores need to be reserved for other
tasks a production server may have.
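A hedged sketch of that sizing logic; the two-threads-per-core figure, the reserved-core count, and the over-provisioning factor are assumptions to adjust for your hardware:

```python
import os

def robocopy_thread_count(initial_run: bool, reserved_cores: int = 2,
                          threads_per_core: int = 2) -> int:
    """Suggest a value for RoboCopy's /MT:n switch (valid range 1-128).

    Initial runs are usually network-bound, so over-provision threads;
    differential /MIR runs are compute-bound, so match the hardware
    threads actually available after reserving cores for other tasks.
    """
    cores = os.cpu_count() or 4
    available = max(1, (cores - reserved_cores) * threads_per_core)
    if initial_run:
        # Over-provision to keep a high-latency network link saturated.
        return min(128, available * 4)
    return available

print(robocopy_thread_count(initial_run=True))
```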
TIP
Rule of thumb: The first RoboCopy run, which moves a lot of data over a higher-latency network, benefits from over-
provisioning the thread count ( /MT:n ). Subsequent runs copy fewer differences, and you are more likely to shift from
being network-throughput constrained to compute constrained. Under those circumstances, it is often better to match the
RoboCopy thread count to the threads actually available on the machine. Over-provisioning in that scenario can lead to
more context switches in the processor, possibly slowing down your copy.
Relevant links
Migration content:
StorSimple 8000 series migration guide
Azure File Sync content:
Azure File Sync overview
Deploy Azure File Sync
Azure File Sync troubleshooting guide
Configure Azure Storage connection strings
5/20/2022 • 7 minutes to read • Edit Online
A connection string includes the authorization information required for your application to access data in an
Azure Storage account at runtime using Shared Key authorization. You can configure connection strings to:
Connect to the Azurite storage emulator.
Access a storage account in Azure.
Access specified resources in Azure via a shared access signature (SAS).
To learn how to view your account access keys and copy a connection string, see Manage storage account access
keys.
NOTE
Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests against blob and queue data if
possible, rather than using the account keys (Shared Key authorization). Authorization with Azure AD provides superior
security and ease of use over Shared Key authorization.
To protect an Azure Storage account with Azure AD Conditional Access policies, you must disallow Shared Key
authorization for the storage account. For more information about how to disallow Shared Key authorization, see Prevent
Shared Key authorization for an Azure Storage account.
NOTE
The authentication key supported by the emulator is intended only for testing the functionality of your client
authentication code. It does not serve any security purpose. You cannot use your production storage account and key
with the emulator. You should not use the development account with production data.
The emulator supports connection via HTTP only. However, HTTPS is the recommended protocol for accessing resources
in a production Azure storage account.
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;
AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;
TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
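A connection string like the one above is a semicolon-delimited list of key=value pairs. A minimal parsing sketch (assuming only that format; the Azure SDKs parse it for you):

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split a storage connection string into its settings."""
    parts = {}
    for segment in conn_str.split(";"):
        if not segment:
            continue  # tolerate a trailing semicolon
        key, _, value = segment.partition("=")
        parts[key] = value  # the value itself may contain '=' (base64 keys)
    return parts

emulator = ("DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;"
            "BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1")
print(parse_connection_string(emulator)["AccountName"])  # devstoreaccount1
```

The partition("=") call splits at the first equals sign only, which matters because account keys are base64-encoded and can themselves end in '='.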
The following .NET code snippet shows how you can use the shortcut connection string UseDevelopmentStorage=true
from a method that takes a connection string. For example, the BlobContainerClient(String, String) constructor
takes a connection string. Make sure that the emulator is running before calling the code in the snippet.
For more information about Azurite, see Use the Azurite emulator for local Azure Storage development.
Although Azure Storage supports both HTTP and HTTPS in a connection string, HTTPS is highly recommended.
TIP
You can find your storage account's connection strings in the Azure portal. Navigate to SETTINGS > Access keys in
your storage account's menu blade to see connection strings for both primary and secondary access keys.
BlobEndpoint=myBlobEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
FileEndpoint=myFileEndpoint;
SharedAccessSignature=sasToken
Each service endpoint is optional, although the connection string must contain at least one.
NOTE
Using HTTPS with a SAS is recommended as a best practice.
If you are specifying a SAS in a connection string in a configuration file, you may need to encode special characters in the
URL.
BlobEndpoint=https://storagesample.blob.core.windows.net;
SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-
635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX+XXIU=
And here's an example of the same connection string with encoding of special characters:
BlobEndpoint=https://storagesample.blob.core.windows.net;
SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-
635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
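The transformation applied in the encoded example is standard percent-encoding: + becomes %2B, / becomes %2F, and = becomes %3D. A quick Python illustration of the mapping (shown only to demonstrate the encoding, not part of any SDK workflow):

```python
from urllib.parse import quote

signature = "9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX+XXIU="
# safe="" forces '/', '+' and '=' to be percent-encoded as well
encoded = quote(signature, safe="")
print(encoded)  # 9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D
```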
BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-
08&sig=iCvQmdZngZNW/4vw43j6+Vz6fndHF5LI639QJba4r8o=&spr=https&st=2016-04-12T03:24:31Z&se=2016-04-
13T03:29:31Z&srt=s&ss=bf&sp=rwl
And here's an example of the same connection string with URL encoding:
BlobEndpoint=https://storagesample.blob.core.windows.net;
FileEndpoint=https://storagesample.file.core.windows.net;
SharedAccessSignature=sv=2015-07-
08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-
12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl
DefaultEndpointsProtocol=[http|https];
BlobEndpoint=myBlobEndpoint;
FileEndpoint=myFileEndpoint;
QueueEndpoint=myQueueEndpoint;
TableEndpoint=myTableEndpoint;
AccountName=myAccountName;
AccountKey=myAccountKey
One scenario where you might wish to specify an explicit endpoint is when you've mapped your Blob storage
endpoint to a custom domain. In that case, you can specify your custom endpoint for Blob storage in your
connection string. You can optionally specify the default endpoints for the other services if your application uses
them.
Here is an example of a connection string that specifies an explicit endpoint for the Blob service:
This example specifies explicit endpoints for all services, including a custom domain for the Blob service:
The endpoint values in a connection string are used to construct the request URIs to the storage services, and
dictate the form of any URIs that are returned to your code.
If you've mapped a storage endpoint to a custom domain and omit that endpoint from a connection string, then
you will not be able to use that connection string to access data in that service from your code.
For more information about configuring a custom domain for Azure Storage, see Map a custom domain to an
Azure Blob Storage endpoint.
IMPORTANT
Service endpoint values in your connection strings must be well-formed URIs, including https:// (recommended) or
http:// .
DefaultEndpointsProtocol=[http|https];
AccountName=myAccountName;
AccountKey=myAccountKey;
EndpointSuffix=mySuffix;
Here's an example connection string for storage services in Azure China 21Vianet:
DefaultEndpointsProtocol=https;
AccountName=storagesample;
AccountKey=<account-key>;
EndpointSuffix=core.chinacloudapi.cn;
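When only AccountName and EndpointSuffix are given, each service endpoint follows the pattern https://&lt;account&gt;.&lt;service&gt;.&lt;suffix&gt;. A sketch of that construction rule (an illustration of the pattern, not SDK code):

```python
def service_endpoints(account_name: str,
                      endpoint_suffix: str = "core.windows.net") -> dict:
    """Build the default HTTPS endpoint for each storage service."""
    return {
        service: f"https://{account_name}.{service}.{endpoint_suffix}"
        for service in ("blob", "queue", "table", "file")
    }

print(service_endpoints("storagesample", "core.chinacloudapi.cn")["blob"])
# https://storagesample.blob.core.chinacloudapi.cn
```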
Here's an example that shows how to retrieve a connection string from a configuration file:
// Parse the connection string and return a reference to the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
Using the Azure Configuration Manager is optional. You can also use an API such as the .NET Framework's
ConfigurationManager Class.
Next steps
Use the Azurite emulator for local Azure Storage development
Grant limited access to Azure Storage resources using shared access signatures (SAS)
Develop for Azure Files with .NET
Learn the basics of developing .NET applications that use Azure Files to store data. This article shows how to
create a simple console application to do the following with .NET and Azure Files:
Get the contents of a file.
Set the maximum size, or quota, for a file share.
Create a shared access signature (SAS) for a file.
Copy a file to another file in the same storage account.
Copy a file to a blob in the same storage account.
Create a snapshot of a file share.
Restore a file from a share snapshot.
Use Azure Storage Metrics for troubleshooting.
To learn more about Azure Files, see What is Azure Files?
TIP
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of
Azure Storage Samples.
Applies to
File share type: SMB, NFS
API | When to use | Notes
Azure core library for .NET: This package is the implementation of the Azure client pipeline.
Azure Storage Blob client library for .NET: This package provides programmatic access to blob resources in
your storage account.
Azure Storage Files client library for .NET: This package provides programmatic access to file resources in
your storage account.
System Configuration Manager library for .NET: This package provides a class for storing and retrieving values
in a configuration file.
You can use NuGet to obtain the packages. Follow these steps:
1. In Solution Explorer , right-click your project and choose Manage NuGet Packages .
2. In NuGet Package Manager , select Browse . Then search for and choose Azure.Core , and then select
Install .
This step installs the package and its dependencies.
3. Search for and install these packages:
Azure.Storage.Blobs
Azure.Storage.Files.Shares
System.Configuration.ConfigurationManager
Replace myaccount with your storage account name and mykey with your storage account key.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <appSettings>
        <add key="StorageConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;EndpointSuffix=core.windows.net" />
        <add key="StorageAccountName" value="myaccount" />
        <add key="StorageAccountKey" value="mykey" />
    </appSettings>
</configuration>
NOTE
The Azurite storage emulator does not currently support Azure Files. Your connection string must target an Azure storage
account in the cloud to work with Azure Files.
using System;
using System.Configuration;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;
using Azure.Storage.Sas;
Access the file share programmatically
In the Program.cs file, add the following code to access the file share programmatically.
Azure .NET SDK v12
Azure .NET SDK v11
The following method creates a file share if it doesn't already exist. The method starts by creating a ShareClient
object from a connection string. Call this method from Main() .
//-------------------------------------------------
// Create a file share
//-------------------------------------------------
public async Task CreateShareAsync(string shareName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
    // Instantiate a ShareClient which will be used to create and manipulate the file share
    ShareClient share = new ShareClient(connectionString, shareName);
    // Create the share if it doesn't already exist
    await share.CreateIfNotExistsAsync();
}
A later sample downloads a file we created earlier and saves it locally:
// Save the data to a local file, overwrite if the file already exists
using (FileStream stream = File.OpenWrite(@"downloadedLog1.txt"))
{
    await download.Content.CopyToAsync(stream);
    await stream.FlushAsync();
    stream.Close();
}
//-------------------------------------------------
// Set the maximum size of a share
//-------------------------------------------------
public async Task SetMaxShareSizeAsync(string shareName, int increaseSizeInGiB)
{
const long ONE_GIBIBYTE = 1073741824; // Number of bytes in 1 gibibyte (2^30)
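For reference, 1 gibibyte is 2^30 = 1,073,741,824 bytes. The quota increase itself is simple arithmetic, sketched here with hypothetical values (the helper name and rounding choice are illustrative, not the SDK's):

```python
ONE_GIBIBYTE = 2 ** 30  # 1,073,741,824 bytes

def new_quota_gib(current_usage_bytes: int, increase_gib: int) -> int:
    """Round current usage up to whole GiB, then add the requested increase."""
    used_gib = -(-current_usage_bytes // ONE_GIBIBYTE)  # ceiling division
    return used_gib + increase_gib

print(new_quota_gib(5 * ONE_GIBIBYTE + 1, 10))  # 16
```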
The following example method returns a SAS on a file in the specified share.
//-------------------------------------------------
// Create a SAS URI for a file
//-------------------------------------------------
public Uri GetFileSasUri(string shareName, string filePath, DateTime expiration, ShareFileSasPermissions
permissions)
{
// Get the account details from app settings
string accountName = ConfigurationManager.AppSettings["StorageAccountName"];
string accountKey = ConfigurationManager.AppSettings["StorageAccountKey"];
// Expires in 24 hours
ExpiresOn = expiration
};
For more information about creating and using shared access signatures, see How a shared access signature
works.
Copy files
Beginning with version 5.x of the Azure Files client library, you can copy a file to another file, a file to a blob, or a
blob to a file.
You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See Get
started with AzCopy.
NOTE
If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to
the source object, even if you are copying within the same storage account.
if (await destFile.ExistsAsync())
{
Console.WriteLine($"{sourceFile.Uri} copied to {destFile.Uri}");
}
}
}
await destBlob.StartCopyFromUriAsync(sourceFile.Uri);
if (await destBlob.ExistsAsync())
{
Console.WriteLine($"File {sourceFile.Name} copied to blob {destBlob.Name}");
}
}
}
You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access
to that blob during the copy operation.
Share snapshots
Beginning with version 8.5 of the Azure Files client library, you can create a share snapshot. You can also list or
browse share snapshots and delete share snapshots. Once created, share snapshots are read-only.
Create share snapshots
The following example creates a file share snapshot.
// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);
//-------------------------------------------------
// List the snapshots on a share
//-------------------------------------------------
public void ListShareSnapshots()
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
Console.WriteLine($"Share: {share.Name}");
//-------------------------------------------------
// Recursively list a directory tree
//-------------------------------------------------
public void ListDirTree(ShareDirectoryClient dir)
{
// List the files and directories in the snapshot
foreach (ShareFileItem item in dir.GetFilesAndDirectories())
{
if (item.IsDirectory)
{
Console.WriteLine($"Directory: {item.Name}");
ShareDirectoryClient subDir = dir.GetSubdirectoryClient(item.Name);
ListDirTree(subDir);
}
else
{
Console.WriteLine($"File: {dir.Name}\\{item.Name}");
}
}
}
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
try
{
// Delete the snapshot
await snapshotShare.DeleteIfExistsAsync();
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Exception: {ex.Message}");
Console.WriteLine($"Error code: {ex.Status}\t{ex.ErrorCode}");
}
}
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
If you encounter any problems, you can refer to Troubleshoot Azure Files problems in Windows.
Next steps
For more information about Azure Files, see the following resources:
Conceptual articles and videos
Azure Files: a frictionless cloud SMB file system for Windows and Linux
Use Azure Files with Linux
Tooling support for File storage
Get started with AzCopy
Troubleshoot Azure Files problems in Windows
Reference
Azure Storage APIs for .NET
File Service REST API
Develop for Azure Files with Java
Learn the basics of developing Java applications that use Azure Files to store data. Create a console application and
learn basic actions using Azure Files APIs:
Create and delete Azure file shares
Create and delete directories
Enumerate files and directories in an Azure file share
Upload, download, and delete a file
TIP
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of
Azure Storage Samples.
Applies to
File share type: SMB, NFS
Replace <storage_account_name> and <storage_account_key> with the actual values for your storage account.
To access Azure Files, create a ShareClient object. Use the ShareClientBuilder class to build a new ShareClient
object.
The ShareClient.create method throws an exception if the share already exists. Put the call to create in a
try/catch block and handle the exception.
shareClient.create();
return true;
}
catch (Exception e)
{
System.out.println("createFileShare exception: " + e.getMessage());
return false;
}
}
shareClient.delete();
return true;
}
catch (Exception e)
{
System.out.println("deleteFileShare exception: " + e.getMessage());
return false;
}
}
Create a directory
Organize storage by putting files inside subdirectories instead of having all of them in the root directory.
Azure Java SDK v12
Azure Java SDK v8
The following code creates a directory by calling ShareDirectoryClient.create. The example method returns a
Boolean value indicating if it successfully created the directory.
dirClient.create();
return true;
}
catch (Exception e)
{
System.out.println("createDirectory exception: " + e.getMessage());
return false;
}
}
Delete a directory
Deleting a directory is a straightforward task, but note that you can't delete a directory that still contains files or
subdirectories.
Azure Java SDK v12
Azure Java SDK v8
The ShareDirectoryClient.delete method throws an exception if the directory doesn't exist or isn't empty. Put the
call to delete in a try/catch block and handle the exception.
dirClient.delete();
return true;
}
catch (Exception e)
{
System.out.println("deleteDirectory exception: " + e.getMessage());
return false;
}
}
Get a list of files and directories by calling ShareDirectoryClient.listFilesAndDirectories. The method returns a list
of ShareFileItem objects on which you can iterate. The following code lists files and directories inside the
directory specified by the dirName parameter.
dirClient.listFilesAndDirectories().forEach(
fileRef -> System.out.printf("Resource: %s\t Directory? %b\n",
fileRef.getName(), fileRef.isDirectory())
);
return true;
}
catch (Exception e)
{
System.out.println("enumerateFilesAndDirs exception: " + e.getMessage());
return false;
}
}
Upload a file
Learn how to upload a file from local storage.
The following code uploads a local file to Azure Files by calling the ShareFileClient.uploadFromFile method. The
following example method returns a Boolean value indicating if it successfully uploaded the specified file.
Download a file
One of the more frequent operations is to download files from an Azure file share.
The following example downloads the specified file to the local directory specified in the destDir parameter. The
example method makes the downloaded filename unique by prepending the date and time.
public static Boolean downloadFile(String connectStr, String shareName,
String dirName, String destDir,
String fileName)
{
try
{
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
.connectionString(connectStr).shareName(shareName)
.resourcePath(dirName)
.buildDirectoryClient();
fileClient.downloadToFile(destPath);
return true;
}
catch (Exception e)
{
System.out.println("downloadFile exception: " + e.getMessage());
return false;
}
}
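The unique-name trick used above (prepending the date and time) is easy to reproduce; this Python sketch uses an illustrative timestamp format:

```python
from datetime import datetime
from typing import Optional

def unique_download_name(file_name: str,
                         when: Optional[datetime] = None) -> str:
    """Prefix a timestamp so repeated downloads never collide."""
    when = when or datetime.now()
    return f"{when:%Y%m%d-%H%M%S}-{file_name}"

print(unique_download_name("Log1.txt", datetime(2022, 5, 20, 9, 30, 0)))
# 20220520-093000-Log1.txt
```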
Delete a file
Another common Azure Files operation is file deletion.
The following code deletes the specified file. First, the example creates a ShareDirectoryClient based on
the dirName parameter. Then, the code gets a ShareFileClient from the directory client, based on the fileName
parameter. Finally, the example method calls ShareFileClient.delete to delete the file.
Next steps
If you would like to learn more about other Azure storage APIs, follow these links.
Azure for Java developers
Azure SDK for Java
Azure SDK for Android
Azure File Share client library for Java SDK Reference
Azure Storage Services REST API
Azure Storage Team Blog
Transfer data with the AzCopy Command-Line Utility
Troubleshooting Azure Files problems - Windows
Develop for Azure Files with C++
TIP
Try the Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure
Storage data on Windows, macOS, and Linux.
Applies to
File share type: SMB, NFS
NOTE
Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share
using the standard C++ I/O classes and functions. This article will describe how to write applications that use the Azure
Storage C++ SDK, which uses the File REST API to talk to Azure Files.
Prerequisites
Azure subscription
Azure storage account
C++ compiler
CMake
Vcpkg - C and C++ package manager
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
C++.
Install the packages
The vcpkg install command will install the Azure Storage Blobs SDK for C++ and necessary dependencies:
For more information, visit GitHub to acquire and build the Azure SDK for C++.
Create the project
In Visual Studio, create a new C++ console application for Windows called FilesShareQuickstartV12.
Windows
Linux and macOS
After you add the environment variable in Windows, you must start a new instance of the command window.
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Code examples
These example code snippets show you how to do the following tasks with the Azure Files Share client library
for C++:
Add include files
Get the connection string
Create a files share
Upload files to a files share
Set the metadata of a file
List the metadata of a file
Download files
Delete a file
Delete a files share
Add include files
From the project directory:
1. Open the FilesShareQuickstartV12.sln solution file in Visual Studio.
2. Inside Visual Studio, open the FilesShareQuickstartV12.cpp source file.
3. Remove any code inside main that was autogenerated.
4. Add #include statements.
#include <iostream>
#include <stdlib.h>
#include <vector>
#include <azure/storage/files/shares.hpp>
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING.
// Note that _MSC_VER is set when using MSVC compiler.
static const char* AZURE_STORAGE_CONNECTION_STRING = "AZURE_STORAGE_CONNECTION_STRING";
#if !defined(_MSC_VER)
const char* connectionString = std::getenv(AZURE_STORAGE_CONNECTION_STRING);
#else
// Use getenv_s for MSVC
size_t requiredSize;
getenv_s(&requiredSize, NULL, NULL, AZURE_STORAGE_CONNECTION_STRING);
if (requiredSize == 0) {
throw std::runtime_error("missing connection string from env.");
}
std::vector<char> value(requiredSize);
getenv_s(&requiredSize, value.data(), value.size(), AZURE_STORAGE_CONNECTION_STRING);
std::string connectionStringStr = std::string(value.begin(), value.end());
const char* connectionString = connectionStringStr.c_str();
#endif
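The same environment-variable lookup, sketched in Python for comparison (the variable name matches the C++ sample; the helper itself is illustrative):

```python
import os

def get_connection_string() -> str:
    """Read AZURE_STORAGE_CONNECTION_STRING or fail with a clear error."""
    value = os.environ.get("AZURE_STORAGE_CONNECTION_STRING")
    if not value:
        raise RuntimeError("missing connection string from env.")
    return value
```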
// Create the files share. This will do nothing if the files share already exists.
std::cout << "Creating files share: " << shareName << std::endl;
shareClient.CreateIfNotExists();
Download files
Having retrieved the properties of the file in List the metadata of a file, create a new std::vector<uint8_t>
object sized using the properties of the uploaded file. Download the previously created file into the new
std::vector<uint8_t> object by calling the DownloadTo function in the ShareFileClient base class. Finally,
display the downloaded file data.
Add this code to the end of main() :
std::vector<uint8_t> fileDownloaded(properties.FileSize);
fileClient.DownloadTo(fileDownloaded.data(), fileDownloaded.size());
std::cout << "Downloaded file contents: " << std::string(fileDownloaded.begin(), fileDownloaded.end()) <<
std::endl;
Delete a file
The following code deletes the file from the Azure file share by calling the ShareFileClient.Delete
function.
std::cout << "Deleting files share: " << shareName << std::endl;
shareClient.DeleteIfExists();
Next steps
In this quickstart, you learned how to upload, download, and list files using C++. You also learned how to create
and delete an Azure Storage Files Share.
To see a C++ Blob Storage sample, continue to:
Azure Storage Files Share SDK v12 for C++ samples
Develop for Azure Files with Python
Learn the basics of using Python to develop apps or services that use Azure Files to store file data. Create a
simple console app and learn how to perform basic actions with Python and Azure Files:
Create Azure file shares
Create directories
Enumerate files and directories in an Azure file share
Upload, download, and delete a file
Create file share backups by using snapshots
NOTE
Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share
using the standard Python I/O classes and functions. This article will describe how to write apps that use the Azure
Storage SDK for Python, which uses the Azure Files REST API to talk to Azure Files.
Applies to
File share type: SMB, NFS
The Azure Files client library v12.x for Python requires Python 2.7 or 3.6+.
ShareServiceClient lets you work with shares, directories, and files. The following code creates a
ShareServiceClient object using the storage account connection string.
The following code example uses a ShareClient object to create the share if it doesn't exist.
Create a directory
You can organize storage by putting files inside subdirectories instead of having all of them in the root directory.
The following method creates a directory in the root of the specified file share by using a ShareDirectoryClient
object.
Upload a file
In this section, you'll learn how to upload a file from local storage into Azure Files.
Azure Python SDK v12
Azure Python SDK v2
The following method uploads the contents of the specified file into the specified directory in the specified Azure
file share.
To list the files and directories in a subdirectory, use the list_directories_and_files method. This method returns
an auto-paging iterable. The following code outputs the name of each file and subdirectory in the specified
directory to the console.
def list_files_and_dirs(self, connection_string, share_name, dir_name):
try:
# Create a ShareClient from a connection string
share_client = ShareClient.from_connection_string(
connection_string, share_name)
Download a file
Azure Python SDK v12
Azure Python SDK v2
# Create a snapshot
snapshot = share_client.create_snapshot()
print("Created snapshot:", snapshot["snapshot"])
print("Snapshot:", snapshot_time)
Delete a file
Azure Python SDK v12
Azure Python SDK v2
Next steps
Now that you've learned how to manipulate Azure Files with Python, follow these links to learn more.
Python Developer Center
Azure Storage Services REST API
Microsoft Azure Storage SDK for Python
Determine which Azure Storage encryption key
model is in use for the storage account
Data in your storage account is automatically encrypted by Azure Storage. Azure Storage encryption offers two
options for managing encryption keys at the level of the storage account:
Microsoft-managed keys. By default, Microsoft manages the keys used to encrypt your storage account.
Customer-managed keys. You can optionally choose to manage encryption keys for your storage account.
Customer-managed keys must be stored in Azure Key Vault.
Additionally, you can provide an encryption key at the level of an individual request for some Blob storage
operations. When an encryption key is specified on the request, that key overrides the encryption key that is
active on the storage account. For more information, see Specify a customer-provided key on a request to Blob
storage.
For more information about encryption keys, see Azure Storage encryption for data at rest.
To check the encryption model for the storage account by using the Azure portal, follow these steps:
1. In the Azure portal, navigate to your storage account.
2. Select the Encryption setting and note which key model is in use.
The following image shows a storage account that is encrypted with Microsoft-managed keys:
And the following image shows a storage account that is encrypted with customer-managed keys:
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Configure encryption with customer-managed keys
stored in Azure Key Vault
5/20/2022 • 18 minutes to read
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-
managed keys. For additional control over encryption keys, you can manage your own keys. Customer-
managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
This article shows how to configure encryption with customer-managed keys stored in a key vault by using the
Azure portal, PowerShell, or Azure CLI. To learn how to configure encryption with customer-managed keys
stored in a managed HSM, see Configure encryption with customer-managed keys stored in Azure Key Vault
Managed HSM.
NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.
To learn how to create a key vault with the Azure portal, see Quickstart: Create a key vault using the Azure
portal. When you create the key vault, select Enable purge protection , as shown in the following image.
To enable purge protection on an existing key vault, follow these steps:
1. Navigate to your key vault in the Azure portal.
2. Under Settings, choose Properties.
3. In the Purge protection section, choose Enable purge protection.
Add a key
Next, add a key to the key vault.
Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more
information about supported key types, see About keys.
Azure portal
PowerShell
Azure CLI
To learn how to add a key with the Azure portal, see Quickstart: Set and retrieve a key from Azure Key Vault
using the Azure portal.
When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned
identity through the portal user interface. For details, see one of the following sections:
Configure customer-managed keys for a new account
Configure customer-managed keys for an existing account
Use a system-assigned managed identity to authorize access
A system-assigned managed identity is associated with an instance of an Azure service, in this case an Azure
Storage account. You must explicitly assign a system-assigned managed identity to a storage account before you
can use the system-assigned managed identity to authorize access to the key vault that contains your customer-
managed key.
Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New
storage accounts must use a user-assigned identity, if customer-managed keys are configured on account
creation.
Azure portal
PowerShell
Azure CLI
When you configure customer-managed keys with the Azure portal with a system-assigned managed identity,
the system-assigned managed identity is assigned to the storage account for you under the covers. For details,
see Configure customer-managed keys for an existing account.
To learn how to configure the key vault access policy with the Azure portal, see Assign an Azure Key Vault access
policy.
Azure portal
PowerShell
Azure CLI
To configure customer-managed keys for a new storage account with automatic updating of the key version,
follow these steps:
1. In the Azure portal, navigate to the Storage accounts page, and select the Create button to create a
new account.
2. Follow the steps outlined in Create a storage account to fill out the fields on the Basics , Advanced ,
Networking , and Data Protection tabs.
3. On the Encryption tab, indicate for which services you want to enable support for customer-managed
keys in the Enable support for customer-managed keys field.
4. In the Encryption type field, select Customer-managed keys (CMK).
5. In the Encryption key field, choose Select a key vault and key, and specify the key vault and key.
6. For the User-assigned identity field, select an existing user-assigned managed identity.
7. Select Review + create to validate and create the new account.
You can also configure customer-managed keys with manual updating of the key version when you create a new
storage account. Follow the steps described in Configure encryption for manual updating of key versions.
NOTE
To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle key rotation, so you will
need to manage rotation of the key in the key vault. You can configure key auto-rotation in Azure Key Vault or rotate
your key manually.
Configure encryption for automatic updating of key versions
Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest
key version from the key vault. Azure Storage checks the key vault daily for a new version of the key. When a
new version becomes available, then Azure Storage automatically begins using the latest version of the key for
encryption.
IMPORTANT
Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours
before disabling the older version.
Azure portal
PowerShell
Azure CLI
To configure customer-managed keys for an existing account with automatic updating of the key version in the
Azure portal, follow these steps:
1. Navigate to your storage account.
2. On the Settings blade for the storage account, click Encryption. By default, key management is set to
Microsoft Managed Keys, as shown in the following image.
Azure portal
PowerShell
Azure CLI
To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the
key URI, including the version. To specify a key as a URI, follow these steps:
1. To locate the key URI in the Azure portal, navigate to your key vault, and select the Keys setting. Select
the desired key, then click the key to view its versions. Select a key version to view the settings for that
version.
2. Copy the value of the Key Identifier field, which provides the URI.
3. In the Encryption key settings for your storage account, choose the Enter key URI option.
4. Paste the URI that you copied into the Key URI field. Omit the key version from the URI to enable
automatic updating of the key version.
5. Specify the subscription that contains the key vault.
6. Specify either a system-assigned or user-assigned managed identity.
7. Save your changes.
Azure portal
PowerShell
Azure CLI
To change the key with the Azure portal, follow these steps:
1. Navigate to your storage account and display the Encryption settings.
2. Select the key vault and choose a new key.
3. Save your changes.
Azure portal
PowerShell
Azure CLI
To revoke customer-managed keys with the Azure portal, disable the key as described in Disable customer-
managed keys.
Azure portal
PowerShell
Azure CLI
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Configure encryption with customer-managed keys
stored in Azure Key Vault Managed HSM
5/20/2022 • 3 minutes to read
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-
managed keys. For additional control over encryption keys, you can manage your own keys. Customer-
managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
Azure Key Vault Managed HSM is a FIPS 140-2 Level 3 validated HSM.
This article shows how to configure encryption with customer-managed keys stored in a managed HSM by
using Azure CLI. To learn how to configure encryption with customer-managed keys stored in a key vault, see
Configure encryption with customer-managed keys stored in Azure Key Vault.
NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.
Assign a role to the storage account for access to the managed HSM
Next, assign the Managed HSM Crypto Service Encryption User role to the storage account's managed
identity so that the storage account has permissions to the managed HSM. Microsoft recommends that you
scope the role assignment to the level of the individual key in order to grant the fewest possible privileges to the
managed identity.
To create the role assignment for the storage account, call az keyvault role assignment create. Remember to
replace the placeholder values in brackets with your own values.
storage_account_principal=$(az storage account show \
--name <storage-account> \
--resource-group <resource-group> \
--query identity.principalId \
--output tsv)
To manually update the version for a customer-managed key, include the key version when you configure
encryption for the storage account:
When you manually update the key version, you'll need to update the storage account's encryption settings to
use the new version. First, query for the key vault URI by calling az keyvault show, and for the key version by
calling az keyvault key list-versions. Then call az storage account update to update the storage account's
encryption settings to use the new version of the key, as shown in the previous example.
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Enable infrastructure encryption for double
encryption of data
5/20/2022 • 5 minutes to read
Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES
encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require
higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage
infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario
where one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of
encryption continues to protect your data.
Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an
account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is
encrypted twice — once at the service level and once at the infrastructure level — with two different encryption
algorithms and two different keys.
Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with
Azure Key Vault or Key Vault Managed Hardware Security Model (HSM) (preview). Infrastructure-level
encryption relies on Microsoft-managed keys and always uses a separate key. For more information about key
management with Azure Storage encryption, see About encryption key management.
To doubly encrypt your data, you must first create a storage account or an encryption scope that is configured
for infrastructure encryption. This article describes how to enable infrastructure encryption.
To use the Azure portal to create a storage account with infrastructure encryption enabled, follow these steps:
1. In the Azure portal, navigate to the Storage accounts page.
2. Choose the Add button to add a new general-purpose v2 or premium block blob storage account.
3. On the Advanced tab, locate Infrastructure encryption, and select Enabled .
4. Select Review + create to finish creating the storage account.
To verify that infrastructure encryption is enabled for a storage account with the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings, choose Encryption.
Azure Policy provides a built-in policy to require that infrastructure encryption be enabled for a storage account.
For more information, see the Storage section in Azure Policy built-in policy definitions.
Next steps
Azure Storage encryption for data at rest
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Configure Microsoft Defender for Storage
5/20/2022 • 4 minutes to read
Microsoft Defender for Storage provides an additional layer of security intelligence that detects unusual and
potentially harmful attempts to access or exploit storage accounts. This layer of protection allows you to address
threats without being a security expert or managing security monitoring systems.
Security alerts are triggered when anomalies in activity occur. These security alerts are integrated with Microsoft
Defender for Cloud, and are also sent via email to subscription administrators, with details of suspicious activity
and recommendations on how to investigate and remediate threats.
The service ingests resource logs of read, write, and delete requests to Blob storage and to Azure Files for threat
detection. To investigate alerts from Microsoft Defender for Cloud, you can view related storage activity using
Storage Analytics Logging. For more information, see Configure logging in Monitor a storage account in the
Azure portal.
Availability
Microsoft Defender for Storage is currently available for Blob storage, Azure Files, and Azure Data Lake Storage
Gen2. Account types that support Microsoft Defender for Storage include general-purpose v2, block blob, and
Blob storage accounts. Microsoft Defender for Storage is available in all public and US government clouds,
but not in other sovereign clouds such as Azure China.
Accounts with hierarchical namespaces enabled for Data Lake Storage support transactions using both the
Azure Blob storage APIs and the Data Lake Storage APIs. Azure file shares support transactions over SMB.
For pricing details, including a 30-day free trial, see the Microsoft Defender for Cloud pricing page.
The following list summarizes the availability of Microsoft Defender for Storage:
Release state:
Blob Storage (general availability)
Azure Files (general availability)
Azure Data Lake Storage Gen2 (general availability)
Clouds: ✔ Commercial clouds
✔ Azure Government
✘ Azure China 21Vianet
Microsoft Defender for Storage is built into Microsoft Defender for Cloud. When you enable Microsoft Defender
for Cloud's enhanced security features on your subscription, Microsoft Defender for Storage is automatically
enabled for all of your storage accounts. To enable or disable Defender for Storage for individual storage
accounts under a specific subscription:
1. Launch Microsoft Defender for Cloud in the Azure portal.
2. From Defender for Cloud's main menu, select Environment settings .
3. Select the subscription for which you want to enable or disable Microsoft Defender for Cloud.
4. Select Enable all Microsoft Defender plans to enable Microsoft Defender for Cloud in the
subscription.
5. Under Select Microsoft Defender plans by resource type , locate the Storage row, and select
Enabled in the Plan column.
6. Save your changes.
Microsoft Defender for Storage is now enabled for all storage accounts in this subscription.
You can review and manage your current security alerts from Microsoft Defender for Cloud's Security alerts tile.
Select an alert for details and actions for investigating the current threat and addressing future threats.
Security alerts
Alerts are generated by unusual and potentially harmful attempts to access or exploit storage accounts. For a list
of alerts for Azure Storage, see Alerts for Azure Storage.
Next steps
Introduction to Microsoft Defender for Storage
Microsoft Defender for Cloud
Logs in Azure Storage accounts
Configure Azure Storage firewalls and virtual
networks
5/20/2022 • 27 minutes to read
Azure Storage provides a layered security model. This model enables you to secure and control the level of
access to your storage accounts that your applications and enterprise environments demand, based on the type
and subset of networks or resources used. When network rules are configured, only applications requesting
data over the specified set of networks or through the specified set of Azure resources can access a storage
account. You can limit access to your storage account to requests originating from specified IP addresses, IP
ranges, subnets in an Azure Virtual Network (VNet), or resource instances of some Azure services.
Storage accounts have a public endpoint that is accessible through the internet. You can also create Private
Endpoints for your storage account, which assigns a private IP address from your VNet to the storage account,
and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall
provides access control for the public endpoint of your storage account. You can also use the firewall to block all
access through the public endpoint when using private endpoints. Your storage firewall configuration also
enables select trusted Azure platform services to access the storage account securely.
An application that accesses a storage account when network rules are in effect still requires proper
authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for
blobs and queues, with a valid account access key, or with a SAS token. When a blob container is configured for
anonymous public access, requests to read data in that container do not need to be authorized, but the firewall
rules remain in effect and will block anonymous traffic.
IMPORTANT
Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests
originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests
that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and
so on.
You can grant access to Azure services that operate from within a VNet by allowing traffic from the subnet hosting the
service instance. You can also enable a limited number of scenarios through the exceptions mechanism described below.
To access data from the storage account through the Azure portal, you would need to be on a machine within the trusted
boundary (either IP or VNet) that you set up.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Scenarios
To secure your storage account, you should first configure a rule to deny access to traffic from all networks
(including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access
to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public internet
IP address ranges, enabling connections from specific internet or on-premises clients. This configuration enables
you to build a secure network boundary for your applications.
You can combine firewall rules that allow access from specific virtual networks and from public IP address
ranges on the same storage account. Storage firewall rules can be applied to existing storage accounts, or when
creating new storage accounts.
Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules
to allow traffic for private endpoints of a storage account. The process of approving the creation of a private
endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
Network rules are enforced on all network protocols for Azure storage, including REST and SMB. To access data
using tools such as the Azure portal, Storage Explorer, and AzCopy, explicit network rules must be configured.
Once network rules are applied, they're enforced for all requests. SAS tokens that grant access to a specific IP
address serve to limit the access of the token holder, but don't grant new access beyond configured network
rules.
Virtual machine disk traffic (including mount and unmount operations, and disk IO) is not affected by network
rules. REST access to page blobs is protected by network rules.
Classic storage accounts do not support firewalls and virtual networks.
You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by
creating an exception. This process is documented in the Manage Exceptions section of this article. Firewall
exceptions aren't applicable with managed disks as they're already managed by Azure.
WARNING
Changing this setting can impact your application's ability to connect to Azure Storage. Make sure to grant access to any
allowed networks or set up access through a private endpoint before you change this setting.
Portal
PowerShell
Azure CLI
IMPORTANT
If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the storage
account. If you create a new subnet by the same name, it will not have access to the storage account. To allow access, you
must explicitly authorize the new subnet in the network rules for the storage account.
Required permissions
To apply a virtual network rule to a storage account, the user must have the appropriate permissions for the
subnets being added. Applying a rule can be performed by a Storage Account Contributor or a user that has
been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action Azure
resource provider operation via a custom Azure role.
The storage account and the virtual networks granted access may be in different subscriptions, including
subscriptions that are part of a different Azure AD tenant.
NOTE
Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory
tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the
Azure portal, though they may be viewed in the portal.
Portal
PowerShell
Azure CLI
During the preview you must use either PowerShell or the Azure CLI to enable this feature.
Managing virtual network rules
You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
NOTE
If you registered the AllowGlobalTagsForStorage feature, and you want to enable access to your storage account from
a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its
paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD
tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to
configure access rules for virtual networks in other regions.
Portal
PowerShell
Azure CLI
NOTE
If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets,
you can configure it as part of this operation.
Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection
during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, use
PowerShell, the Azure CLI, or REST APIs.
Even if you registered the AllowGlobalTagsForStorage feature, subnets in regions other than the region of
the storage account or its paired region aren't shown for selection. If you want to enable access to your storage
account from a virtual network/subnet in a different region, use the instructions in the PowerShell or Azure CLI
tabs.
5. To remove a virtual network or subnet rule, select ... to open the context menu for the virtual network or
subnet, and select Remove .
6. Select Save to apply your changes.
Portal
PowerShell
Azure CLI
NOTE
This feature is in public preview and is available in all public cloud regions.
Portal
PowerShell
Azure CLI
You can add or remove resource network rules in the Azure portal.
1. Sign in to the Azure portal to get started.
2. Locate your storage account and display the account overview.
3. Select Networking to display the configuration page for networking.
4. Under Firewalls and virtual networks, for Selected networks, select to allow access.
5. Scroll down to find Resource instances , and in the Resource type dropdown list, choose the resource
type of your resource instance.
6. In the Instance name dropdown list, choose the resource instance. You can also choose to include all
resource instances in the active tenant, subscription, or resource group.
7. Select Save to apply your changes. The resource instance appears in the Resource instances section of
the network settings page.
To remove the resource instance, select the delete icon next to the resource instance.
Azure Event Hubs (Microsoft.EventHub): Archive data with Event Hubs Capture. Learn more.
TIP
The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific
resource instances, see the Grant access from Azure resource instances (preview) section of this article.
Azure Container Registry Tasks (Microsoft.ContainerRegistry/registries): ACR Tasks can access storage
accounts when building container images.
Azure Synapse Analytics (Microsoft.Sql): Allows import and export of data from specific SQL databases using
the COPY statement or PolyBase (in dedicated pool), or the openrowset function and external tables in
serverless pool. Learn more.
Manage exceptions
You can manage network rule exceptions through the Azure portal, PowerShell, or Azure CLI v2.
Portal
PowerShell
Azure CLI
Next steps
Learn more about Azure Network service endpoints in Service endpoints.
Dig deeper into Azure Storage security in Azure Storage security guide.
Require secure transfer to ensure secure
connections
5/20/2022 • 3 minutes to read
You can configure your storage account to accept requests from secure connections only by setting the Secure
transfer required property for the storage account. When you require secure transfer, any requests originating
from an insecure connection are rejected. Microsoft recommends that you always require secure transfer for all
of your storage accounts.
When secure transfer is required, a call to an Azure Storage REST API operation must be made over HTTPS. Any
request made over HTTP is rejected. By default, the Secure transfer required property is enabled when you
create a storage account.
Azure Policy provides a built-in policy to ensure that secure transfer is required for your storage accounts. For
more information, see the Storage section in Azure Policy built-in policy definitions.
Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the
storage account. Examples of insecure connections include those made over SMB 2.1 or SMB 3.x without
encryption.
NOTE
Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when you're using a
custom domain name.
This secure transfer setting does not apply to the NFS 3.0 protocol. Connections made via NFS 3.0 protocol support in
Azure Blob Storage, which is not secured, will succeed.
This sample requires the Azure PowerShell module Az version 0.7 or later. Run Get-Module -ListAvailable Az to
find the version. If you need to install or upgrade, see Install Azure PowerShell module.
Run Connect-AzAccount to create a connection with Azure.
Use the following command line to check the setting:
Next steps
Security recommendations for Blob storage
Remove SMB 1 on Linux
5/20/2022 • 2 minutes to read
Many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port
445. This practice originates from security guidance about legacy and deprecated versions of the SMB protocol.
Although SMB 3.x is an internet-safe protocol, older versions of SMB, especially SMB 1, aren't. SMB 1, also known
as CIFS (Common Internet File System), is included with many Linux distributions.
SMB 1 is an outdated, inefficient, and insecure protocol. The good news is that Azure Files does not support SMB
1, and starting with Linux kernel version 4.18, Linux makes it possible to disable SMB 1. We strongly
recommend disabling SMB 1 on your Linux clients before using SMB file shares in production.
Ubuntu 14.04-16.04: No
Debian 8-9: No
CentOS 7: No
CentOS 8+: Yes
You can check to see if your Linux distribution supports the disable_legacy_dialects module parameter via the
following command:
disable_legacy_dialects: To improve security it may be helpful to restrict the ability to override the
default dialects (SMB2.1, SMB3 and SMB3.02) on mount with old dialects (CIFS/SMB1 and SMB2) since vers=1.0
(CIFS/SMB1) and vers=2.0 are weaker and less secure. Default: n/N/0 (bool)
Remove SMB 1
Before disabling SMB 1, confirm that the SMB module is not currently loaded on your system (this happens
automatically if you have mounted an SMB share). You can do this with the following command, which should
output nothing if SMB is not loaded:
To unload the module, first unmount all SMB shares using the umount command. You can
identify all the mounted SMB shares on your system with the following command:
Once you have unmounted all SMB file shares, it's safe to unload the module. You can do this with the modprobe
command:
You can manually reload the module with SMB 1 disabled using the modprobe command:
Finally, you can check the SMB module has been loaded with the parameter by looking at the loaded parameters
in /sys/module/cifs/parameters :
cat /sys/module/cifs/parameters/disable_legacy_dialects
To persistently disable SMB 1 on Ubuntu and Debian-based distributions, you must create a new file (if you don't
already have custom options for other modules) called /etc/modprobe.d/local.conf with the setting. You can do
this with the following command:
echo "options cifs disable_legacy_dialects=Y" | sudo tee -a /etc/modprobe.d/local.conf > /dev/null
You can verify that this has worked by loading the SMB module:
Next steps
See these links for more information about Azure Files:
Planning for an Azure Files deployment
Use Azure Files with Linux
Troubleshooting
Enforce a minimum required version of Transport
Layer Security (TLS) for requests to a storage
account
5/20/2022 • 16 minutes to read
Communication between a client application and an Azure Storage account is encrypted using Transport Layer
Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients
and services over the Internet. For more information about TLS, see Transport Layer Security.
Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. Azure Storage uses TLS 1.2
on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
Azure Storage accounts permit clients to send and receive data with the oldest version of TLS, TLS 1.0, and
above. To enforce stricter security measures, you can configure your storage account to require that clients send
and receive data with a newer version of TLS. If a storage account requires a minimum version of TLS, then any
requests made with an older version will fail.
This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to
continuously manage secure TLS for your storage accounts.
For information about how to specify a particular version of TLS when sending a request from a client
application, see Configure Transport Layer Security (TLS) for a client application.
After you create the diagnostic setting, requests to the storage account are subsequently logged according to
that setting. For more information, see Create diagnostic setting to collect resource logs and metrics in Azure.
For a reference of fields available in Azure Storage logs in Azure Monitor, see Resource logs (preview).
Query logged requests by TLS version
Azure Storage logs in Azure Monitor include the TLS version used to send a request to a storage account. Use
the TlsVersion property to check the TLS version of a logged request.
To determine how many requests were made against Blob storage with different versions of TLS over the past
seven days, open your Log Analytics workspace. Next, paste the following query into a new log query and run it.
Remember to replace the placeholder values in brackets with your own values:
StorageBlobLogs
| where TimeGenerated > ago(7d) and AccountName == "<account-name>"
| summarize count() by TlsVersion
The results show the count of the number of requests made with each version of TLS:
Query logged requests by caller IP address and user agent header
Azure Storage logs in Azure Monitor also include the caller IP address and user agent header to help you to
evaluate which client applications accessed the storage account. You can analyze these values to decide whether
client applications must be updated to use a newer version of TLS, or whether it's acceptable to fail a client's
request if it is not sent with the minimum TLS version.
To determine which clients made requests with a version of TLS older than TLS 1.2 over the past seven days,
paste the following query into a new log query and run it. Remember to replace the placeholder values in
brackets with your own values:
StorageBlobLogs
| where TimeGenerated > ago(7d) and AccountName == "<account-name>" and TlsVersion != "TLS 1.2"
| project TlsVersion, CallerIpAddress, UserAgentHeader
IMPORTANT
If you are using a service that connects to Azure Storage, make sure that the service uses the appropriate version
of TLS to send requests to Azure Storage before you set the required minimum version for a storage account.
When you create a storage account with the Azure portal, the minimum TLS version is set to 1.2 by default.
To configure the minimum TLS version for an existing storage account with the Azure portal, follow these steps:
1. Navigate to your storage account in the Azure portal.
2. Under Settings , select Configuration .
3. Under Minimum TLS version , use the drop-down to select the minimum version of TLS required to
access data in this storage account.
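The same setting can be scripted; a sketch using the Azure CLI (the account and resource group names are placeholders):

```shell
# Require TLS 1.2 as the minimum version for requests to the storage account.
az storage account update \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --min-tls-version TLS1_2
```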
NOTE
After you update the minimum TLS version for the storage account, it may take up to 30 seconds before the change is
fully propagated.
Configuring the minimum TLS version requires version 2019-04-01 or later of the Azure Storage resource
provider. For more information, see Azure Storage Resource Provider REST API.
Check the minimum required TLS version for multiple accounts
To check the minimum required TLS version across a set of storage accounts with optimal performance, you can
use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph
Explorer, see Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer.
Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays the
minimum TLS version for each account:
resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
| project subscriptionId, resourceGroup, name, minimumTlsVersion
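You can also run the same query from a script with the Azure CLI, assuming the resource-graph extension is installed (az extension add --name resource-graph):

```shell
# Query the minimum TLS version of all storage accounts in scope.
az graph query -q "resources | where type =~ 'Microsoft.Storage/storageAccounts' | extend minimumTlsVersion = parse_json(properties).minimumTlsVersion | project subscriptionId, resourceGroup, name, minimumTlsVersion"
```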
NOTE
When you configure a minimum TLS version for a storage account, that minimum version is enforced at the application
layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the
minimum required version when run directly against the storage account endpoint.
After you create the policy with the Deny effect and assign it to a scope, a user cannot create a storage account
with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an
existing storage account that currently requires a minimum TLS version that is older than 1.2. Attempting to do
so results in an error. The required minimum TLS version for the storage account must be set to 1.2 to proceed
with account creation or configuration.
The following image shows the error that occurs if you try to create a storage account with the minimum TLS
version set to TLS 1.0 (the default for a new account) when a policy with a Deny effect requires that the
minimum TLS version be set to TLS 1.2.
Permissions necessary to require a minimum version of TLS
To set the MinimumTlsVersion property for the storage account, a user must have permissions to create and
manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions
include the Microsoft.Storage/storageAccounts/write or Microsoft.Storage/storageAccounts/* action.
Built-in roles with this action include:
The Azure Resource Manager Owner role
The Azure Resource Manager Contributor role
The Storage Account Contributor role
These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However,
they include the Microsoft.Storage/storageAccounts/listkeys/action , which grants access to the account
access keys. With this permission, a user can use the account access keys to access all data in a storage account.
Role assignments must be scoped to the level of the storage account or higher to permit a user to require a
minimum version of TLS for the storage account. For more information about role scope, see Understand scope
for Azure RBAC.
Be careful to restrict assignment of these roles only to those who require the ability to create a storage account
or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions
that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see
Best practices for Azure RBAC.
NOTE
The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the
Azure Resource Manager Owner role. The Owner role includes all actions, so a user with one of these administrative roles
can also create and manage storage accounts. For more information, see Classic subscription administrator roles, Azure
roles, and Azure AD administrator roles.
Network considerations
When a client sends a request to a storage account, the client establishes a connection with the public endpoint of
the storage account first, before processing any requests. The minimum TLS version setting is checked after the
connection is established. If the request uses an earlier version of TLS than that specified by the setting, the
connection will continue to succeed, but the request will eventually fail. For more information about public
endpoints for Azure Storage, see Resource URI syntax.
Next steps
Configure Transport Layer Security (TLS) for a client application
Security recommendations for Blob storage
Configure Transport Layer Security (TLS) for a client
application
5/20/2022 • 2 minutes to read
For security purposes, an Azure Storage account may require that clients use a minimum version of Transport
Layer Security (TLS) to send requests. Calls to Azure Storage will fail if the client is using a version of TLS that is
lower than the minimum required version. For example, if a storage account requires TLS 1.2, then a request
sent by a client that is using TLS 1.1 will fail.
This article describes how to configure a client application to use a particular version of TLS. For information
about how to configure a minimum required version of TLS for an Azure Storage account, see Configure
minimum required version of Transport Layer Security (TLS) for a storage account.
The following sample shows how to enable TLS 1.2 in a PowerShell client:
# Set the TLS version used by the PowerShell client to TLS 1.2.
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;
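As a quick way to test an endpoint from the command line, you can also pin the TLS version with curl (this is an illustration, not part of the PowerShell sample; the account URL is a placeholder):

```shell
# Force the request to use exactly TLS 1.2; requires curl 7.54 or later.
curl --tlsv1.2 --tls-max 1.2 'https://mystorageaccount.blob.core.windows.net/'
```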
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This
article helps you download AzCopy, connect to your storage account, and then transfer data.
NOTE
AzCopy V10 is the currently supported version of AzCopy.
If you need to use a previous version of AzCopy, see the Use the previous version of AzCopy section of this article.
Download AzCopy
First, download the AzCopy V10 executable file to any directory on your computer. AzCopy V10 is just an
executable file, so there's nothing to install.
Windows 64-bit (zip)
Windows 32-bit (zip)
Linux x86-64 (tar)
Linux ARM64 Preview (tar)
macOS (zip)
These files are compressed as a zip file (Windows and Mac) or a tar file (Linux). To download and decompress
the tar file on Linux, see the documentation for your Linux distribution.
For detailed information on AzCopy releases, see the AzCopy release page.
NOTE
If you want to copy data to and from your Azure Table storage service, then install AzCopy version 7.3.
Run AzCopy
For convenience, consider adding the directory location of the AzCopy executable to your system path. That way,
you can type azcopy from any directory on your system.
If you choose not to add the AzCopy directory to your path, you'll have to change directories to the location of
your AzCopy executable and type azcopy or .\azcopy in Windows PowerShell command prompts.
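On Linux or macOS, adding AzCopy to the path is one line (the install directory /opt/azcopy is an assumption):

```shell
# Make azcopy resolvable from any directory in the current shell session;
# add the same line to ~/.profile to make it persistent.
export PATH=$PATH:/opt/azcopy
```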
As an owner of your Azure Storage account, you aren't automatically assigned permissions to access data.
Before you can do anything meaningful with AzCopy, you need to decide how you'll provide authorization
credentials to the storage service.
Authorize AzCopy
You can provide authorization credentials by using Azure Active Directory (AD), or by using a Shared Access
Signature (SAS) token.
Use this table as a guide:
NOTE
In the current release, if you plan to copy blobs between storage accounts, you'll have to append a SAS token to each
source URL. You can omit the SAS token only from the destination URL. For examples, see Copy blobs between storage
accounts.
To authorize access by using Azure AD, see Authorize access to blobs with AzCopy and Azure Active Directory
(Azure AD).
Option 2: Use a SAS token
You can append a SAS token to each source or destination URL that you use in your AzCopy commands.
This example command recursively copies data from a local directory to a blob container. A fictitious SAS token
is appended to the end of the container URL.
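The command itself is not shown above; a sketch with placeholder names and a placeholder SAS token:

```shell
# Recursively upload a local directory to a blob container,
# authorizing with the SAS token appended to the container URL.
azcopy copy 'C:\myDirectory' \
  'https://mystorageaccount.blob.core.windows.net/mycontainer?<SAS-token>' \
  --recursive
```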
To learn more about SAS tokens and how to obtain one, see Using shared access signatures (SAS).
NOTE
The Secure transfer required setting of a storage account determines whether the connection to a storage account is
secured with Transport Layer Security (TLS). This setting is enabled by default.
Transfer data
After you've authorized your identity or obtained a SAS token, you can begin transferring data.
To find example commands, see any of these articles.
SERVICE    ARTICLE
Google Cloud Storage    Copy data from Google Cloud Storage to Azure Storage (preview)
Azure Stack storage Transfer data with AzCopy and Azure Stack storage
azcopy jobs clean Remove all log and plan files for all jobs.
azcopy jobs remove Remove all files associated with the given job ID.
azcopy jobs resume Resumes the existing job with the given job ID.
azcopy jobs show Shows detailed information for the given job ID.
azcopy logout Logs the user out and terminates access to Azure Storage
resources.
NOTE
AzCopy does not have a command to rename files.
Use in a script
Obtain a static download link
Over time, the AzCopy download link will point to new versions of AzCopy. If your script downloads AzCopy, the
script might stop working if a newer version of AzCopy modifies features that your script depends upon.
To avoid these issues, obtain a static (unchanging) link to the current version of AzCopy. That way, your script
downloads the same exact version of AzCopy each time that it runs.
To obtain the link, run this command:
OPERATING SYSTEM    COMMAND
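For Linux, the command is along these lines (the evergreen shortlink is an assumption; the versioned URL appears in the Location header):

```shell
# Follow the evergreen link's redirect headers to discover the static,
# versioned download URL for the current AzCopy release.
curl -s -D- https://aka.ms/downloadazcopy-v10-linux | grep -i '^Location'
```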
NOTE
For Linux, --strip-components=1 on the tar command removes the top-level folder that contains the version name,
and instead extracts the binary directly into the current folder. This allows the script to be updated with a new version of
azcopy by only updating the wget URL.
The URL appears in the output of this command. Your script can then download AzCopy by using that URL.
OPERATING SYSTEM    COMMAND
Windows Invoke-WebRequest
https://azcopyvnext.azureedge.net/release20190517/azcopy_windows_amd64_10.1.2.zi
-OutFile azcopyv10.zip <<Unzip here>>
/usr/bin/keyctl new_session
Next steps
If you have questions, issues, or general feedback, submit them on the AzCopy GitHub page.
Transfer data with AzCopy and file storage
5/20/2022 • 12 minutes to read
AzCopy is a command-line utility that you can use to copy files to or from a storage account. This article
contains example commands that work with Azure Files.
Before you begin, see the Get started with AzCopy article to download AzCopy and familiarize yourself with the
tool.
TIP
The examples in this article enclose path arguments with single quotes (''). Use single quotes in all command shells except
for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments
with double quotes ("") instead of single quotes ('').
Example
Upload files
You can use the azcopy copy command to upload files and directories from your local computer.
This section contains the following examples:
Upload a file
Upload a directory
Upload the contents of a directory
Upload a specific file
TIP
You can tweak your upload operation by using optional flags. Here are a few examples.
SCENARIO    FLAG
Copy access control lists (ACLs) along with the files. --preserve-smb-permissions=[true|false]
Copy SMB property information along with the files. --preserve-smb-info=[true|false]
NOTE
AzCopy doesn't automatically calculate and store the file's MD5 hash. If you want AzCopy to do that, append
the --put-md5 flag to each copy command. That way, when the file is downloaded, AzCopy calculates an MD5 hash for
downloaded data and verifies that the MD5 hash stored in the file's Content-md5 property matches the calculated hash.
Upload a file
Syntax
azcopy copy '<local-file-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-
name>/<file-name><SAS-token>'
Example
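Filled in with placeholder names and a placeholder SAS token, the upload command might read:

```shell
# Upload a single local file to an Azure file share.
azcopy copy 'C:\myDirectory\myTextFile.txt' \
  'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?<SAS-token>'
```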
You can also upload a file by using a wildcard symbol (*) anywhere in the file path or file name. For example:
'C:\myDirectory\*.txt' or 'C:\my*\*.txt'.
Upload a directory
This example copies a directory (and all of the files in that directory) to a file share. The result is a directory in
the file share by the same name.
Syntax
azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>
<SAS-token>' --recursive
Example
To copy to a directory within the file share, just specify the name of that directory in your command string.
Example
azcopy copy 'C:\myDirectory'
'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-
28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-
09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-
smb-permissions=true --preserve-smb-info=true
If you specify the name of a directory that does not exist in the file share, AzCopy creates a new directory by that
name.
Upload the contents of a directory
You can upload the contents of a directory without copying the containing directory itself by using the wildcard
symbol (*).
Syntax
azcopy copy '<local-directory-path>/*' 'https://<storage-account-name>.file.core.windows.net/<file-share-
name>/<directory-path><SAS-token>'
Example
NOTE
Append the --recursive flag to upload files in all sub-directories.
Example
Example
You can also exclude files by using the --exclude-pattern option. To learn more, see azcopy copy reference docs.
The --include-pattern and --exclude-pattern options apply only to filenames and not to the path. If you want
to copy all of the text files that exist in a directory tree, use the --recursive option to get the entire directory
tree, and then use the --include-pattern and specify *.txt to get all of the text files.
Upload files that were modified after a date and time
Use the azcopy copy command with the --include-after option. Specify a date and time in ISO 8601 format
(For example: 2020-08-19T15:04:00Z ).
Syntax
azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-
directory-name><SAS-token>' --include-after <Date-Time-in-ISO-8601-format>
Example
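A sketch with placeholder names and the example timestamp from above:

```shell
# Upload only files modified after the given ISO 8601 date and time.
azcopy copy 'C:\myDirectory\*' \
  'https://mystorageaccount.file.core.windows.net/myfileshare?<SAS-token>' \
  --include-after '2020-08-19T15:04:00Z'
```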
Download files
You can use the azcopy copy command to download files, directories, and file shares to your local computer.
This section contains the following examples:
Download a file
Download a directory
Download the contents of a directory
Download specific files
TIP
You can tweak your download operation by using optional flags. Here are a few examples:
SCENARIO    FLAG
Copy access control lists (ACLs) along with the files. --preserve-smb-permissions=[true|false]
Copy SMB property information along with the files. --preserve-smb-info=[true|false]
NOTE
If the Content-md5 property value of a file contains a hash, AzCopy calculates an MD5 hash for downloaded data and
verifies that the MD5 hash stored in the file's Content-md5 property matches the calculated hash. If these values don't
match, the download fails unless you override this behavior by appending --check-md5=NoCheck or
--check-md5=LogOnly to the copy command.
Download a file
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-token>'
'<local-file-path>'
Example
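With placeholder names and a placeholder SAS token, a download command might read:

```shell
# Download a single file from an Azure file share to a local path.
azcopy copy \
  'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?<SAS-token>' \
  'C:\myDirectory\myTextFile.txt'
```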
Download a directory
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path><SAS-
token>' '<local-directory-path>' --recursive
Example
This example results in a directory named C:\myDirectory\myFileShareDirectory that contains all of the
downloaded files.
Download the contents of a directory
You can download the contents of a directory without copying the containing directory itself by using the
wildcard symbol (*).
Syntax
azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/*<SAS-token>' '<local-
directory-path>/'
Example
NOTE
Append the --recursive flag to download files in all sub-directories.
Example
Example
Example
TIP
You can tweak your copy operation by using optional flags. Here are a few examples.
SCENARIO    FLAG
Copy access control lists (ACLs) along with the files. --preserve-smb-permissions=[true|false]
Copy SMB property information along with the files. --preserve-smb-info=[true|false]
Example
Example
azcopy copy 'https://mysourceaccount.file.core.windows.net/myFileShare/myFileDirectory?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D'
'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-
permissions=true --preserve-smb-info=true
Example
Copy all file shares, directories, and files to another storage account
Syntax
azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<SAS-token>' 'https://<destination-
storage-account-name>.file.core.windows.net/<SAS-token>' --recursive
Example
Synchronize files
You can synchronize the contents of a local file system with a file share or synchronize the contents of a file
share with another file share. You can also synchronize the contents of a directory in a file share with the
contents of a directory that is located in another file share. Synchronization is one-way. In other words, you
choose which of these two endpoints is the source and which one is the destination. Synchronization also uses
server-to-server APIs.
NOTE
Currently, this scenario is supported for accounts that have enabled hierarchical namespace via the blob endpoint.
WARNING
AzCopy sync is supported but not fully recommended for Azure Files. AzCopy sync doesn't support differential copies at
scale, and some file fidelity might be lost. To learn more, see Migrate to Azure file shares.
Guidelines
The sync command compares file names and last modified timestamps. Set the --delete-destination
optional flag to a value of true or prompt to delete files in the destination directory if those files no
longer exist in the source directory.
If you set the --delete-destination flag to true , AzCopy deletes files without providing a prompt. If you
want a prompt to appear before AzCopy deletes a file, set the --delete-destination flag to prompt .
If you plan to set the --delete-destination flag to prompt or false , consider using the copy command
instead of the sync command and set the --overwrite parameter to ifSourceNewer . The copy command
consumes less memory and incurs lower billing costs because a copy operation doesn't have to index the
source or destination prior to moving files.
The machine on which you run the sync command should have an accurate system clock because the last
modified times are critical in determining whether a file should be transferred. If your system has
significant clock skew, avoid modifying files at the destination too close to the time that you plan to run a
sync command.
TIP
You can tweak your sync operation by using optional flags. Here are a few examples.
SCENARIO    FLAG
Copy access control lists (ACLs) along with the files. --preserve-smb-permissions=[true|false]
Copy SMB property information along with the files. --preserve-smb-info=[true|false]
Specify how detailed you want your sync-related log entries to be. --log-level=[WARNING|ERROR|INFO|NONE]
TIP
This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the
Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with
double quotes ("") instead of single quotes ('').
Syntax
azcopy sync '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>
<SAS-token>' --recursive
Example
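With placeholder names and a placeholder SAS token, the sync command might read:

```shell
# One-way sync: the local directory is the source, the file share the destination.
azcopy sync 'C:\myDirectory' \
  'https://mystorageaccount.file.core.windows.net/myfileshare?<SAS-token>' \
  --recursive
```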
TIP
This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the
Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with
double quotes ("") instead of single quotes ('').
Syntax
azcopy sync 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'
'C:\myDirectory' --recursive
Example
azcopy sync 'https://mystorageaccount.file.core.windows.net/myfileShare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --recursive
Example
Example
Syntax
azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-
token>&sharesnapshot=<snapshot-ID>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-
share-name><SAS-token>' --recursive
Example
azcopy sync 'https://mysourceaccount.file.core.windows.net/myfileShare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-03-
03T20%3A24%3A13.0000000Z' 'https://mydestinationaccount.file.core.windows.net/myfileshare?sv=2018-03-
28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-
03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --preserve-smb-
permissions=true --preserve-smb-info=true
To learn more about share snapshots, see Overview of share snapshots for Azure Files.
Next steps
Find more examples in any of these articles:
Get started with AzCopy
Transfer data
See these articles to configure settings, optimize performance, and troubleshoot issues:
AzCopy configuration settings
Optimize the performance of AzCopy
Troubleshoot AzCopy V10 issues in Azure Storage by using log files
Find errors and resume jobs by using log and plan
files in AzCopy
5/20/2022 • 3 minutes to read
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This
article helps you use logs to diagnose errors, and then use plan files to resume jobs. This article also shows how
to configure log and plan files by changing their verbosity level, and the default location where they are stored.
NOTE
If you're looking for content to help you get started with AzCopy, see Get started with AzCopy. This article
applies to AzCopy V10, as this is the currently supported version of AzCopy. If you need to use a previous
version of AzCopy, see Use the previous version of AzCopy.
The relevant error isn't necessarily the first error that appears in the file. For errors such as network errors,
timeouts and Server Busy errors, AzCopy will retry up to 20 times and usually the retry process succeeds. The
first error that you see might be something harmless that was successfully retried. So instead of looking at the
first error in the file, look for the errors that are near UPLOADFAILED , COPYFAILED , or DOWNLOADFAILED .
IMPORTANT
When submitting a request to Microsoft Support (or troubleshooting the issue involving any third party), share the
redacted version of the command you want to execute. This ensures the SAS isn't accidentally shared with anybody. You
can find the redacted version at the start of the log file.
Windows (PowerShell)
Linux
grep UPLOADFAILED .\04dc9ca9-158f-7945-5933-564021086c79.log
TIP
The value of the --with-status flag is case-sensitive.
Use the following command to resume a failed or canceled job. This command requires the job's identifier along
with the SAS token, because the SAS token isn't persisted for security reasons:
TIP
Enclose path arguments such as the SAS token with single quotes (''). Use single quotes in all command shells except for
the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments
with double quotes ("") instead of single quotes ('').
When you resume a job, AzCopy looks at the job plan file. The plan file lists all the files that were identified for
processing when the job was first created. When you resume a job, AzCopy will attempt to transfer all of the
files that are listed in the plan file which weren't already transferred.
OPERATING SYSTEM    COMMAND
Use the azcopy env command to check the current value of this variable. If the value is blank, then logs are written to the
default location.
To remove the plan and log files associated with only one job, use azcopy jobs rm <job-id> . Replace the
<job-id> placeholder in this example with the job ID of the job.
See also
Get started with AzCopy
Troubleshoot Azure Files problems in Windows
(SMB)
5/20/2022 • 25 minutes to read
This article lists common problems that are related to Microsoft Azure Files when you connect from Windows
clients. It also provides possible causes and resolutions for these problems. In addition to the troubleshooting
steps in this article, you can also use AzFileDiagnostics to ensure that the Windows client environment has
correct prerequisites. AzFileDiagnostics automates detection of most of the symptoms mentioned in this article
and helps set up your environment to get optimal performance.
IMPORTANT
The content of this article only applies to SMB shares. For details on NFS shares, see Troubleshoot Azure NFS file shares.
Applies to
FILE SHARE TYPE    SMB    NFS
Error 53, Error 67, or Error 87 when you mount or unmount an Azure
file share
When you try to mount a file share from on-premises or from a different datacenter, you might receive the
following errors:
System error 53 has occurred. The network path was not found.
System error 67 has occurred. The network name cannot be found.
System error 87 has occurred. The parameter is incorrect.
Cause 1: Port 445 is blocked
System error 53 or system error 67 can occur if port 445 outbound communication to an Azure Files datacenter
is blocked. To see the summary of ISPs that allow or disallow access from port 445, go to TechNet.
To check if your firewall or ISP is blocking port 445, use the AzFileDiagnostics tool or Test-NetConnection
cmdlet.
To use the Test-NetConnection cmdlet, the Azure PowerShell module must be installed. For more information, see
Install Azure PowerShell module. Remember to replace <your-storage-account-name> and
<your-resource-group-name> with the relevant names for your storage account.
$resourceGroupName = "<your-resource-group-name>"
$storageAccountName = "<your-storage-account-name>"
# This command requires you to be logged into your Azure account and set the subscription your storage
account is under, run:
# Connect-AzAccount -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
# if you haven't already logged in.
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
# Probe TCP port 445 on the account's file endpoint (endpoint property name assumed):
Test-NetConnection -ComputerName ([System.Uri]$storageAccount.Context.FileEndPoint).Host -Port 445
If the connection was successful, you should see the following output:
ComputerName : <your-storage-account-name>
RemoteAddress : <storage-account-ip-address>
RemotePort : 445
InterfaceAlias : <your-network-interface>
SourceAddress : <your-ip-address>
TcpTestSucceeded : True
NOTE
The above command returns the current IP address of the storage account. This IP address is not guaranteed to remain
the same, and may change at any time. Do not hardcode this IP address into any scripts, or into a firewall configuration.
Error "No access" when you try to access or delete an Azure File
Share
When you try to access or delete an Azure file share in the portal, you might receive the following error:
No access
Error code: 403
Cause 1: Virtual network or firewall rules are enabled on the storage account
Solution for cause 1
Verify that virtual network and firewall rules are configured properly on the storage account. To test whether
virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account
to Allow access from all networks . To learn more, see Configure Azure Storage firewalls and virtual networks.
Cause 2: Your user account does not have access to the storage account
Solution for cause 2
Browse to the storage account where the Azure file share is located, select Access control (IAM) , and verify
that your user account has access to the storage account. To learn more, see How to secure your storage account
with Azure role-based access control (Azure RBAC).
IMPORTANT
Value-added services that take resource locks and share/share snapshot leases on your Azure Files resources may
periodically reapply those locks and leases. Modifying or deleting resources locked by value-added services may impact
the regular operation of those services, for example by deleting share snapshots that are managed by Azure Backup.
Because FileREST is a stateless protocol, it has no concept of file handles, but it does provide a
similar mechanism to mediate access to files and folders that your script, application, or service may use: file
leases. When a file is leased, it is treated as equivalent to a file handle with a file sharing mode of None.
Although file handles and leases serve an important purpose, they can sometimes become
orphaned. When this happens, modifying or deleting files can fail. You may see error messages
like:
The process cannot access the file because it is being used by another process.
The action can't be completed because the file is open in another program.
The document is locked for editing by another user.
The specified resource is marked for deletion by an SMB client.
The resolution to this issue depends on whether this is being caused by an orphaned file handle or lease.
Cause 1
A file handle is preventing a file/directory from being modified or deleted. You can use the Get-AzStorageFileHandle PowerShell cmdlet to view open handles.
If all SMB clients have closed their open handles on a file/directory and the issue continues to occur, you can
force close a file handle.
Solution 1
To force a file handle to be closed, use the Close-AzStorageFileHandle PowerShell cmdlet.
NOTE
The Get-AzStorageFileHandle and Close-AzStorageFileHandle cmdlets are included in Az PowerShell module version 2.4 or
later. To install the latest Az PowerShell module, see Install the Azure PowerShell module.
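As a sketch of the force-close flow (the share name, file path, and context retrieval here are illustrative placeholders; adjust them to your environment):

```powershell
# List open handles on a file share, then force-close all handles on one file
$context = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>").Context
Get-AzStorageFileHandle -Context $context -ShareName "<file-share>" -Recursive
Close-AzStorageFileHandle -Context $context -ShareName "<file-share>" -Path "<path-to-file>" -CloseAll
```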
Cause 2
A file lease is preventing a file from being modified or deleted. You can check if a file has a file lease with the
following PowerShell, replacing <resource-group> , <storage-account> , <file-share> , and <path-to-file> with
the appropriate values for your environment:
# Set variables
$resourceGroupName = "<resource-group>"
$storageAccountName = "<storage-account>"
$fileShareName = "<file-share>"
$fileForLease = "<path-to-file>"

# Get a reference to the file and read its properties, including lease information
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
$file = Get-AzStorageFile -Context $storageAccount.Context -ShareName $fileShareName -Path $fileForLease
$fileClient = $file.ShareFileClient
$fileClient.GetProperties().Value
If a file has a lease, the returned object should contain the following properties:
LeaseDuration : Infinite
LeaseState : Leased
LeaseStatus : Locked
Solution 2
To remove a lease from a file, you can release the lease or break the lease. To release the lease, you need the
LeaseId of the lease, which you set when you create the lease. You do not need the LeaseId to break the lease.
The following example shows how to break the lease for the file indicated in cause 2 (this example continues
with the PowerShell variables from cause 2):
$leaseClient = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($fileClient)
$leaseClient.Break() | Out-Null
NOTE
Windows Server 2012 R2 images in Azure Marketplace have hotfix KB3114025 installed by default, starting in December
2015.
Net use command fails if the storage account key contains a forward slash
Cause
The net use command interprets a forward slash (/) as a command-line option. If the storage account key that is
passed as the password starts with a forward slash, the drive mapping fails.
Solution
You can use either of the following steps to work around the problem:
Run the following PowerShell command:
New-SmbMapping -LocalPath y: -RemotePath \\server\share -UserName accountName -Password "password can contain / and \ etc"
From a batch file, you can run the command this way:
Echo new-smbMapping ... | powershell -command -
Put double quotation marks around the key to work around this problem, unless the forward slash is its
first character. If it is, either use interactive mode and enter your password separately, or regenerate
your keys to get a key that doesn't start with a forward slash.
Map the share directly without using a mapped drive letter. Some applications may not reconnect to the
drive letter properly, so using the full UNC path might be more reliable.
net use * \\storage-account-name.file.core.windows.net\share
After you follow these instructions, you might receive the following error message when you run net use for the
system/network service account: "System error 1312 has occurred. A specified logon session does not exist. It
may already have been terminated." If this occurs, make sure that the username that is passed to net use
includes domain information (for example: "[storage account name].file.core.windows.net").
Error "You are copying a file to a destination that does not support encryption"
When a file is copied over the network, the file is decrypted on the source computer, transmitted in plaintext,
and re-encrypted at the destination. However, you might see the following error when you're trying to copy an
encrypted file: "You are copying the file to a destination that does not support encryption."
Cause
This problem can occur if you are using Encrypting File System (EFS). BitLocker-encrypted files can be copied to
Azure Files. However, Azure Files does not support NTFS EFS.
Workaround
To copy a file over the network, you must first decrypt it. Use one of the following methods:
Use the copy /d command. It allows the encrypted files to be saved as decrypted files at the destination.
Set the following registry key:
Path = HKLM\Software\Policies\Microsoft\Windows\System
Value type = DWORD
Name = CopyFileAllowDecryptedRemoteDestination
Value = 1
Be aware that setting the registry key affects all copy operations that are made to network shares.
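As an illustration, the value described above can be created from an elevated PowerShell session (a sketch; the path and value name are exactly those listed above):

```powershell
# Creates/overwrites the DWORD value that allows decrypted copies to remote destinations
New-ItemProperty -Path "HKLM:\Software\Policies\Microsoft\Windows\System" `
    -Name "CopyFileAllowDecryptedRemoteDestination" -PropertyType DWord -Value 1 -Force
```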
The cmdlet performs the checks below in sequence and provides guidance for failures:
1. CheckADObjectPasswordIsCorrect: Ensure that the password configured on the AD identity that represents
the storage account matches that of the storage account's kerb1 or kerb2 key. If the password is incorrect,
you can run Update-AzStorageAccountADObjectPassword to reset the password.
2. CheckADObject: Confirm that there is an object in Active Directory that represents the storage account
and has the correct SPN (service principal name). If the SPN isn't correctly set up, run the Set-AD
cmdlet returned by the debug cmdlet to configure the SPN.
3. CheckDomainJoined: Validate that the client machine is domain joined to AD. If your machine isn't domain
joined to AD, refer to this article for domain join instructions.
4. CheckPort445Connectivity: Check that port 445 is open for the SMB connection. If the required port isn't
open, refer to the troubleshooting tool AzFileDiagnostics for connectivity issues with Azure Files.
5. CheckSidHasAadUser: Check that the logged-on AD user is synced to Azure AD. If you want to look up
whether a specific AD user is synchronized to Azure AD, you can specify the -UserName and -Domain in the
input parameters.
6. CheckGetKerberosTicket: Attempt to get a Kerberos ticket to connect to the storage account. If there isn't a
valid Kerberos token, run the klist get cifs/storage-account-name.file.core.windows.net cmdlet and examine
the error code to root-cause the ticket retrieval failure.
7. CheckStorageAccountDomainJoined: Check whether AD authentication has been enabled and the account's AD
properties are populated. If not, refer to the instructions here to enable AD DS authentication on Azure Files.
8. CheckUserRbacAssignment: Check whether the AD identity has the proper RBAC role assignment to provide share-level
permissions to access Azure Files. If not, refer to the instructions here to configure share-level
permissions. (Supported on AzFilesHybrid v0.2.3+)
9. CheckUserFileAccess: Check whether the AD identity has the proper directory/file permissions (Windows ACLs) to
access Azure Files. If not, refer to the instructions here to configure directory/file-level permissions.
(Supported on AzFilesHybrid v0.2.3+)
$ResourceGroupName = "<resource-group-name-here>"
$StorageAccountName = "<storage-account-name-here>"
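With these variables set, the checks described above can be run via the AzFilesHybrid module's debug cmdlet (a sketch, assuming the module is installed and imported):

```powershell
# Run the AD DS authentication checks against the storage account
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName `
    -ResourceGroupName $ResourceGroupName -Verbose
```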
Navigate to the desired storage account in the Azure portal. In the table of contents for the storage
account, select Access keys under the Security + networking heading. In the Access keys pane, select Rotate
key above the desired key.
Need help? Contact support.
If you still need help, contact support to get your problem resolved quickly.
Troubleshoot Azure Files problems in Linux (SMB)
5/20/2022 • 13 minutes to read • Edit Online
This article lists common problems that are related to Azure Files when you connect from Linux clients. It also
provides possible causes and resolutions for these problems.
In addition to the troubleshooting steps in this article, you can use AzFileDiagnostics to ensure that the Linux
client has correct prerequisites. AzFileDiagnostics automates the detection of most of the symptoms mentioned
in this article. It helps set up your environment to get optimal performance. You can also find this information in
the Azure Files shares troubleshooter. The troubleshooter provides steps to help you with problems connecting,
mapping, and mounting Azure Files shares.
IMPORTANT
The content of this article only applies to SMB shares. For details on NFS shares, see Troubleshoot Azure NFS file shares.
Applies to
FILE SHARE TYPE | SMB | NFS
Error "No access" when you try to access or delete an Azure file share
When you try to access or delete an Azure file share in the portal, you may receive the following error:
No access
Error code: 403
Cause 1: Virtual network or firewall rules are enabled on the storage account
Solution for cause 1
Verify that virtual network and firewall rules are configured properly on the storage account. To test whether virtual
network or firewall rules are causing the issue, temporarily change the setting on the storage account to Allow
access from all networks. To learn more, see Configure Azure Storage firewalls and virtual networks.
Cause 2: Your user account does not have access to the storage account
Solution for cause 2
Browse to the storage account where the Azure file share is located, click Access control (IAM) and verify your
user account has access to the storage account. To learn more, see How to secure your storage account with
Azure role-based access control (Azure RBAC).
You can also check whether the correct options are being used by running the sudo mount | grep cifs
command and checking its output. The following is example output:
If the cache=strict or serverino option is not present, unmount and mount Azure Files again by running the
mount command from the documentation. Then, recheck that the /etc/fstab entry has the correct options.
Cause 2: Throttling
It is possible you are experiencing throttling and your requests are being sent to a queue. You can verify this by
leveraging Azure Storage metrics in Azure Monitor.
Solution for cause 2
Ensure your app is within the Azure Files scale targets.
Cannot create symbolic links - ln: failed to create symbolic link 't': Operation not supported
Cause
By default, mounting Azure file shares on Linux by using CIFS doesn't enable support for symbolic links
(symlinks). You see an error like this:
ln -s linked -n t
ln: failed to create symbolic link 't': Operation not supported
Solution
The Linux CIFS client doesn't support creation of Windows-style symbolic links over the SMB 2 or 3 protocol.
Currently, the Linux client supports another style of symbolic links called Minshall+French symlinks for both
create and follow operations. Customers who need symbolic links can use the "mfsymlinks" mount option. We
recommend "mfsymlinks" because it's also the format that Macs use.
To use symlinks, add the following to the end of your CIFS mount command:
,mfsymlinks
Cause
Conditional headers are not yet supported. Applications implementing them will need to request the full file
every time the file is accessed.
Workaround
When a new file is uploaded, the cache-control property by default is "no-cache". To force the application to
request the file every time, the file's cache-control property needs to be updated from "no-cache" to "no-cache,
no-store, must-revalidate". This can be achieved using Azure Storage Explorer.
"Mount error(112): Host is down" because of a reconnection time-out
A "112" mount error occurs on the Linux client when the client has been idle for a long time. After an extended
idle time, the client disconnects and the connection times out.
Cause
The connection can be idle for the following reasons:
Network communication failures that prevent re-establishing a TCP connection to the server when the
default "soft" mount option is used
Recent reconnection fixes that are not present in older kernels
Solution
This reconnection problem in the Linux kernel is now fixed as part of the following changes:
Fix reconnect to not defer smb3 session reconnect long after socket reconnect
Call echo service immediately after socket reconnect
CIFS: Fix a possible memory corruption during reconnect
CIFS: Fix a possible double locking of mutex during reconnect (for kernel v4.9 and later)
However, these changes might not be ported yet to all Linux distributions. If you're using a popular Linux
distribution, you can check Use Azure Files with Linux to see which version of your distribution has the
necessary kernel changes.
Workaround
You can work around this problem by specifying a hard mount. A hard mount forces the client to wait until a
connection is established or until it's explicitly interrupted. You can use it to prevent errors because of network
time-outs. However, this workaround might cause indefinite waits. Be prepared to stop connections as
necessary.
If you can't upgrade to the latest kernel versions, you can work around this problem by keeping a file in the
Azure file share that you write to every 30 seconds or less. This must be a write operation, such as rewriting the
created or modified date on the file. Otherwise, you might get cached results, and your operation might not
trigger the reconnection.
"CIFS VFS: error -22 on ioctl to get interface list" when you mount an Azure file share by using SMB 3.x
Cause
This error is logged because Azure Files does not currently support SMB multichannel.
Solution
This error can be ignored.
Unable to access folders or files whose name has a space or a dot at the end
You are unable to access folders or files in the Azure file share while it is mounted on Linux. Commands like du
and ls and/or third-party applications may fail with a "No such file or directory" error when accessing the share,
although you are able to upload files to those folders via the portal.
Cause
The folders or files were uploaded from a system that encodes the characters at the end of the name as a
different character. Files uploaded from a Macintosh computer may have a 0xF028 or 0xF029 character
instead of 0x20 (space) or 0x2E (dot).
Solution
Use the mapchars option when mounting the share on Linux: append ,mapchars to the list of options (-o) in your existing mount command.
This article lists some common problems related to Azure file shares. It provides potential causes and
workarounds for when you encounter these problems.
Applies to
FILE SHARE TYPE | SMB | NFS
Solution
If you're using a standard file share, enable large file shares on your storage account and increase the size of
the file share quota to take advantage of large file share support. Large file shares support greater IOPS and
bandwidth limits; see Azure Files scalability and performance targets for details.
If you're using a premium file share, increase the provisioned file share size to increase the IOPS limit. To
learn more, see Understanding provisioning for premium file shares.
Cause 2: Metadata or namespace heavy workload
If the majority of your requests are metadata-centric (such as createfile , openfile , closefile , queryinfo , or
querydirectory ), the latency will be worse than that of read/write operations.
To determine whether most of your requests are metadata-centric, start by following steps 1-4 as previously
outlined in Cause 1. For step 5, instead of adding a filter for Response type , add a property filter for API
name .
Workaround
Check to see whether the application can be modified to reduce the number of metadata operations.
Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations
against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers
and no writers. Because the file system is owned by the client rather than Azure Files, this allows metadata
operations to be local. The setup offers performance similar to that of local directly attached storage.
To mount a VHD on a Windows client, use the Mount-DiskImage PowerShell cmdlet.
To mount a VHD on Linux, consult the documentation for your Linux distribution.
Cause 3: Single-threaded application
If the application that you're using is single-threaded, this setup can result in significantly lower IOPS throughput
than the maximum possible throughput, depending on your provisioned share size.
Solution
Increase application parallelism by increasing the number of threads.
Switch to applications where parallelism is possible. For example, for copy operations, you could use AzCopy
or RoboCopy from Windows clients or the parallel command from Linux clients.
Cause 4: Number of SMB channels exceeds four
If you're using SMB MultiChannel and the number of channels you have exceeds four, this will result in poor
performance. To determine if your connection count exceeds four, use the PowerShell cmdlet
Get-SmbClientConfiguration to view the current connection count settings.
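For example, the relevant client setting can be read like this:

```powershell
# View the per-NIC SMB connection count currently configured on the client
(Get-SmbClientConfiguration).ConnectionCountPerRssNetworkInterface
```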
Solution
Set the Windows per NIC setting for SMB so that the total channels don't exceed four. For example, if you have
two NICs, you can set the maximum per NIC to two using the following PowerShell cmdlet:
Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 2 .
NOTE
If the response types are not listed in the Dimension values drop-down, this means the resource has not been
throttled. To add the dimension values, next to the Dimension values drop-down list, select Add custom
value , enter the response type (for example, SuccessWithThrottling ), select OK , and then repeat these steps to
add all applicable response types for your file share.
8. For premium file shares , click the Dimension name drop-down and select File Share . For standard
file shares , skip to step #10 .
NOTE
If the file share is a standard file share, the File Share dimension will not list the file share(s) because per-share
metrics are not available for standard file shares. Throttling alerts for standard file shares will be triggered if any
file share within the storage account is throttled and the alert will not identify which file share was throttled. Since
per-share metrics are not available for standard file shares, the recommendation is to have one file share per
storage account.
9. Click the Dimension values drop-down and select the file share(s) that you want to alert on.
10. Define the alert parameters (threshold value, operator, aggregation granularity and frequency of
evaluation) and click Done .
TIP
If you are using a static threshold, the metric chart can help determine a reasonable threshold value if the file
share is currently being throttled. If you are using a dynamic threshold, the metric chart will display the calculated
thresholds based on recent data.
11. Click Add action groups to add an action group (email, SMS, etc.) to the alert either by selecting an
existing action group or creating a new action group.
12. Fill in the Alert details like Alert rule name , Description , and Severity .
13. Click Create alert rule to create the alert.
To learn more about configuring alerts in Azure Monitor, see Overview of alerts in Microsoft Azure.
6. Scroll down. In the Dimension name drop-down list, select File Share .
7. In the Dimension values drop-down list, select the file share or shares that you want to alert on.
8. Define the alert parameters by selecting values in the Operator , Threshold value , Aggregation
granularity , and Frequency of evaluation drop-down lists, and then select Done .
Egress, ingress, and transactions metrics are expressed per minute, though egress, ingress, and I/O
are provisioned per second. Therefore, for example, if your provisioned egress is 90 MiB/s and you want
your threshold to be 80 percent of provisioned egress, select the following alert parameters:
For Threshold value : 75497472
For Operator : greater than or equal to
For Aggregation type : average
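The 75497472 figure is simply 80 percent of 90 MiB/s converted to bytes:

```powershell
# 90 MiB/s provisioned egress, threshold at 80 percent
$provisionedEgressBytes = 90 * 1024 * 1024    # = 94371840
$thresholdBytes = $provisionedEgressBytes * 0.8    # = 75497472
```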
Depending on how noisy you want your alert to be, you can also select values for Aggregation
granularity and Frequency of evaluation . For example, if you want your alert to look at the average
ingress over the time period of 1 hour, and you want your alert rule to be run every hour, select the
following:
For Aggregation granularity : 1 hour
For Frequency of evaluation : 1 hour
9. Select Add action groups , and then add an action group (for example, email or SMS) to the alert either
by selecting an existing action group or by creating a new one.
10. Enter the alert details, such as Alert rule name , Description , and Severity .
11. Select Create alert rule to create the alert.
NOTE
To be notified that your premium file share is close to being throttled because of provisioned ingress,
follow the preceding instructions, but with the following change:
In step 5, select the Ingress metric instead of Egress .
To be notified that your premium file share is close to being throttled because of provisioned IOPS, follow
the preceding instructions, but with the following changes:
In step 5, select the Transactions metric instead of Egress .
In step 10, the only option for Aggregation type is Total. Therefore, the threshold value depends on
your selected aggregation granularity. For example, if you want your threshold to be 80 percent of
provisioned baseline IOPS and you select 1 hour for Aggregation granularity , your Threshold
value would be your baseline IOPS × 0.8 × 3600.
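As a worked example with a hypothetical baseline of 1,000 IOPS:

```powershell
# 1,000 baseline IOPS (hypothetical), 80 percent threshold, 1-hour aggregation granularity
$baselineIops = 1000
$thresholdTransactions = $baselineIops * 0.8 * 3600    # = 2880000 transactions per hour
```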
To learn more about configuring alerts in Azure Monitor, see Overview of alerts in Microsoft Azure.
See also
Troubleshoot Azure Files in Windows
Troubleshoot Azure Files in Linux
Azure Files FAQ
Troubleshoot Azure NFS file share problems
5/20/2022 • 4 minutes to read • Edit Online
This article lists some common problems and known issues related to Azure NFS file shares. It provides
potential causes and workarounds when these problems are encountered.
Applies to
FILE SHARE TYPE | SMB | NFS
Private endpoint
Access is more secure than the service endpoint.
Access to NFS shares via private link is available from within and outside the storage account's Azure
region (cross-region, on-premises).
Virtual networks peered with the virtual network that hosts the private endpoint give NFS share access to
clients in the peered virtual networks.
Private endpoints can be used with ExpressRoute, point-to-site, and site-to-site VPNs.
Cause 2: Secure transfer required is enabled
Double encryption isn't supported for NFS shares yet. Azure provides a layer of encryption for all data in transit
between Azure datacenters using MACsec. NFS shares can only be accessed from trusted virtual networks and
over VPN tunnels. No additional transport layer encryption is available on NFS shares.
Solution
Disable secure transfer required in your storage account's configuration blade.
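This can also be done with Azure PowerShell (a sketch; the resource names are placeholders):

```powershell
# Disable the "secure transfer required" setting so NFS clients can connect
Set-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>" `
    -EnableHttpsTrafficOnly $false
```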
Solution
If the package isn't installed, install the package on your distribution.
Ubu n t u o r Debi an
sudo apt update
sudo apt install nfs-common
Fedora, Red Hat Enterprise Linux 8+, CentOS 8+
openSUSE
This article provides troubleshooting information to address any issues you come across while configuring
backup or restoring Azure file shares using the Azure Backup Service.
NOTE
All file shares in a Storage Account can be protected only under one Recovery Services vault. You can use this
script to find the Recovery Services vault where your storage account is registered.
Ensure that the file share isn't present in any of the unsupported Storage Accounts. You can refer to the
Support matrix for Azure file share backup to find supported Storage Accounts.
Ensure that the storage account and recovery services vault are present in the same region.
Ensure that the combined length of the storage account name and the resource group name don't exceed
84 characters in the case of new Storage accounts and 77 characters in the case of classic storage
accounts.
Check the firewall settings of the storage account to ensure that the exception "Allow Azure services on the
trusted services list to access this storage account" is granted. You can refer to this link for the steps to grant
the exception.
Error in portal states discovery of storage accounts failed
If you have a partner subscription (CSP-enabled), ignore the error. If your subscription isn't CSP-enabled, and
your storage accounts can't be discovered, contact support.
Selected storage account validation or registration failed
Retry the registration. If the problem persists, contact support.
Could not list or find file shares in the selected storage account
Ensure that the Storage Account exists in the Resource Group and hasn't been deleted or moved after the last
validation or registration in the vault.
Ensure that the file share you're looking to protect hasn't been deleted.
Ensure that the Storage Account is a supported storage account for file share backup. You can refer to the
Support matrix for Azure file share backup to find supported Storage Accounts.
Check if the file share is already protected in the same Recovery Services vault.
Check the Network Routing setting of storage account to ensure that routing preference is set as Microsoft
network routing .
Backup file share configuration (or the protection policy configuration) is failing
Retry the configuration to see if the issue persists.
Ensure that the file share you want to protect hasn't been deleted.
If you're trying to protect multiple file shares at once, and some of the file shares are failing, try configuring
backup for the failed file shares again.
Unable to delete the Recovery Services vault after unprotecting a file share
In the Azure portal, open your Vault > Backup Infrastructure > Storage accounts . Select Unregister to
remove the storage accounts from the Recovery Services vault.
NOTE
A Recovery Services vault can only be deleted after unregistering all storage accounts registered with the vault.
NOTE
You lose the recovery points if you delete snapshots created by Azure Backup.
UserErrorStorageAccountNotFound- Operation failed as the specified storage account does not exist
anymore
Error Code: UserErrorStorageAccountNotFound
Error Message: Operation failed as the specified storage account does not exist anymore.
Ensure that the storage account still exists and isn't deleted.
UserErrorDTSStorageAccountNotFound- The storage account details provided are incorrect
Error Code: UserErrorDTSStorageAccountNotFound
Error Message: The storage account details provided are incorrect.
Ensure that the storage account still exists and isn't deleted.
UserErrorResourceGroupNotFound- Resource group doesn't exist
Error Code: UserErrorResourceGroupNotFound
Error Message: Resource group doesn't exist
Select an existing resource group or create a new resource group.
ParallelSnapshotRequest- A backup job is already in progress for this file share
Error Code: ParallelSnapshotRequest
Error Message: A backup job is already in progress for this file share.
File share backup doesn't support parallel snapshot requests against the same file share.
Wait for the existing backup job to finish and then try again. If you can’t find a backup job in the Recovery
Services vault, check other Recovery Services vaults in the same subscription.
UserErrorStorageAccountInternetRoutingNotSupported- Storage accounts with Internet routing
configuration are not supported by Azure Backup
Error Code: UserErrorStorageAccountInternetRoutingNotSupported
Error Message: Storage accounts with Internet routing configuration are not supported by Azure Backup
Ensure that the routing preference set for the storage account hosting backed up file share is Microsoft network
routing.
FileshareBackupFailedWithAzureRpRequestThrottling/
FileshareRestoreFailedWithAzureRpRequestThrottling- File share backup or restore failed due to storage
service throttling. This may be because the storage service is busy processing other requests for the given
storage account
Error Code: FileshareBackupFailedWithAzureRpRequestThrottling/
FileshareRestoreFailedWithAzureRpRequestThrottling
Error Message: File share backup or restore failed due to storage service throttling. This may be because the
storage service is busy processing other requests for the given storage account.
Try the backup/restore operation at a later time.
TargetFileShareNotFound- Target file share not found
Error Code: TargetFileShareNotFound
Error Message: Target file share not found.
Ensure that the selected Storage Account exists, and the target file share isn't deleted.
Ensure that the Storage Account is a supported storage account for file share backup.
UserErrorStorageAccountIsLocked- Backup or restore jobs failed due to storage account being in locked
state
Error Code: UserErrorStorageAccountIsLocked
Error Message: Backup or restore jobs failed due to storage account being in locked state.
Remove the lock on the Storage Account or use delete lock instead of read lock and retry the backup or
restore operation.
DataTransferServiceCoFLimitReached- Recovery failed because number of failed files are more than the
threshold
Error Code: DataTransferServiceCoFLimitReached
Error Message: Recovery failed because number of failed files are more than the threshold.
Recovery failure reasons are listed in a file (path provided in the job details). Address the failures and
retry the restore operation for the failed files only.
Common reasons for file restore failures:
files that failed are currently in use
a directory with the same name as the failed file exists in the parent directory.
DataTransferServiceAllFilesFailedToRecover- Recovery failed as no file could be recovered
Error Code: DataTransferServiceAllFilesFailedToRecover
Error Message: Recovery failed as no file could be recovered.
Recovery failure reasons are listed in a file (path provided in the job details). Address the failures and
retry the restore operations for the failed files only.
Common reasons for file restore failures:
files that failed are currently in use
a directory with the same name as the failed file exists in the parent directory.
UserErrorDTSSourceUriNotValid - Restore fails because one of the files in the source does not exist
Error Code: DataTransferServiceSourceUriNotValid
Error Message: Restore fails because one of the files in the source does not exist.
The selected items aren't present in the recovery point data. To recover the files, provide the correct file list.
The file share snapshot that corresponds to the recovery point is manually deleted. Select a different
recovery point and retry the restore operation.
UserErrorDTSDestLocked- A recovery job is in process to the same destination
Error Code: UserErrorDTSDestLocked
Error Message: A recovery job is in process to the same destination.
File share backup doesn't support parallel recovery to the same target file share.
Wait for the existing recovery to finish and then try again. If you can’t find a recovery job in the Recovery
Services vault, check other Recovery Services vaults in the same subscription.
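While waiting for the existing recovery job, a simple poll-until-terminal loop is usually enough. This is a sketch only: `get_status` stands in for whatever call you use to query the job (for example, via Azure PowerShell or the CLI), and the backoff schedule is illustrative.

```python
import itertools

def wait_for_job(get_status, backoff=(1, 2, 4, 8), sleep=lambda s: None):
    """Poll a job-status callable until it reports a terminal state, with
    increasing delays. The terminal-state names and backoff schedule are
    illustrative, not an Azure SDK contract."""
    for delay in itertools.chain(backoff, itertools.repeat(backoff[-1])):
        status = get_status()
        if status in ("Completed", "Failed", "Cancelled"):
            return status
        sleep(delay)  # no-op here; time.sleep(delay) in real use

# Simulate a job that completes on the third poll.
states = iter(["InProgress", "InProgress", "Completed"])
print(wait_for_job(lambda: next(states)))  # Completed
```

Once the loop returns a terminal state, the new restore to the same target can be started.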
UserErrorTargetFileShareFull- Restore operation failed as target file share is full
Error code: UserErrorTargetFileShareFull
Error Message: Restore operation failed as target file share is full.
Increase the target file share size quota to accommodate the restore data and retry the restore operation.
UserErrorTargetFileShareQuotaNotSufficient- Target file share does not have sufficient storage size quota
for restore
Error Code: UserErrorTargetFileShareQuotaNotSufficient
Error Message: Target file share does not have sufficient storage size quota for restore.
Increase the target file share size quota to accommodate the restore data and retry the operation.
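Sizing the new quota is simple arithmetic: it must cover current usage plus the restore payload. A minimal sketch, with an illustrative 10% headroom (Azure file share quotas are set in whole GiB):

```python
import math

GIB = 1024 ** 3

def required_quota_gib(used_bytes, restore_bytes, headroom=0.10):
    """Smallest whole-GiB quota that fits current usage plus the restore
    payload. The headroom default is an illustrative safety margin."""
    needed = (used_bytes + restore_bytes) * (1 + headroom)
    return math.ceil(needed / GIB)

# e.g. 40 GiB already used, restoring another 25 GiB
print(required_quota_gib(40 * GIB, 25 * GIB))  # -> 72
```

The resulting figure is what you would pass to your quota-update command before retrying the restore.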
File Sync PreRestoreFailed- Restore operation failed as an error occurred while performing pre restore
operations on File Sync Service resources associated with the target file share
Error Code: File Sync PreRestoreFailed
Error Message: Restore operation failed as an error occurred while performing pre restore operations on File
Sync Service resources associated with the target file share.
Try restoring the data at a later time. If the issue persists, contact Microsoft support.
AzureFileSyncChangeDetectionInProgress- Azure File Sync Service change detection is in progress for the
target file share. The change detection was triggered by a previous restore to the target file share
Error Code: AzureFileSyncChangeDetectionInProgress
Error Message: Azure File Sync Service change detection is in progress for the target file share. The change
detection was triggered by a previous restore to the target file share.
Use a different target file share. Alternatively, you can wait for Azure File Sync Service change detection to
complete for the target file share before retrying the restore.
UserErrorAFSRecoverySomeFilesNotRestored- One or more files could not be recovered successfully. For
more information, check the failed file list in the path given above
Error Code: UserErrorAFSRecoverySomeFilesNotRestored
Error Message: One or more files could not be recovered successfully. For more information, check the failed file
list in the path given above.
Recovery failure reasons are listed in the file (path provided in the Job details). Address the reasons and
retry the restore operation for the failed files only.
Common reasons for file restore failures:
files that failed are currently in use
a directory with the same name as the failed file exists in the parent directory.
UserErrorAFSSourceSnapshotNotFound- Azure file share snapshot corresponding to recovery point cannot
be found
Error Code: UserErrorAFSSourceSnapshotNotFound
Error Message: Azure file share snapshot corresponding to recovery point cannot be found
Ensure that the file share snapshot, corresponding to the recovery point you're trying to use for recovery,
still exists.
NOTE
If you delete a file share snapshot that was created by Azure Backup, the corresponding recovery points become
unusable. We recommend not deleting snapshots to ensure guaranteed recovery.
NOTE
We recommend that you don't delete the backed-up file share. If it's in a soft-deleted state, undelete it before the soft-delete
retention period ends to avoid losing all your restore points.
Next steps
For more information about backing up Azure file shares, see:
Back up Azure file shares
Back up Azure file share FAQ
Azure Files API reference
5/20/2022 • 2 minutes to read
Find Azure Files API reference, library packages, readme files, and getting started articles.
VERSION | REFERENCE DOCUMENTATION | PACKAGE | QUICKSTART
2.x | Azure Storage client libraries v2 for Python | Package (PyPI) | Develop for Azure Files with Python

VERSION | REFERENCE DOCUMENTATION | SOURCE CODE/README | QUICKSTART
12.x | Azure SDK for C++ APIs | Library source code | Develop for Azure Files with C++
REST APIs
The following table lists reference information for Azure Files REST APIs.
Azure PowerShell
Azure PowerShell reference
Azure CLI
Azure CLI reference
Azure Files monitoring data reference
5/20/2022 • 12 minutes to read
See Monitoring Azure Files for details on collecting and analyzing monitoring data for Azure Files.
Applies to
FILE SHARE TYPE | SMB | NFS
Metrics
The following tables list the platform metrics collected for Azure Files.
Capacity metrics
Capacity metric values are refreshed daily (up to every 24 hours). The time grain defines the time interval for which
metric values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
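The PT1H/PT1M notation is an ISO 8601 duration. A minimal parser, covering only the time components these metrics use:

```python
import re

def grain_seconds(grain):
    """Convert a simple ISO 8601 duration such as 'PT1H' or 'PT1M' to
    seconds. Handles only hour/minute/second components; full ISO 8601
    durations (days, months) are out of scope for this sketch."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", grain)
    if not m or not any(m.groups()):
        raise ValueError(f"unsupported time grain: {grain}")
    h, mi, s = (int(g) if g else 0 for g in m.groups())
    return h * 3600 + mi * 60 + s

print(grain_seconds("PT1H"))  # 3600
print(grain_seconds("PT1M"))  # 60
```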
Azure Files provides the following capacity metrics in Azure Monitor.
Account Level
This table shows account-level metrics.
UsedCapacity: The amount of storage used by the storage account, in bytes.
Unit: Bytes
Aggregation Type: Average
Value example: 1024
Azure Files
This table shows Azure Files metrics.
FileCapacity: The amount of File storage used by the storage account, in bytes.
Unit: Bytes
Aggregation Type: Average
Value example: 1024
FileCount: The number of files in the storage account.
Unit: Count
Aggregation Type: Average
Value example: 1024
FileShareCapacityQuota: The upper limit on the amount of storage that can be used
by the Azure Files service, in bytes.
Unit: Bytes
Aggregation Type: Average
Value example: 1024
FileShareCount: The number of file shares in the storage account.
Unit: Count
Aggregation Type: Average
Value example: 1024
FileShareProvisionedIOPS: The baseline number of provisioned IOPS for the premium file share.
Unit: CountPerSecond
Aggregation Type: Average
FileShareSnapshotCount: The number of snapshots present on the share.
Unit: Count
Aggregation Type: Average
FileShareSnapshotSize: The amount of storage used by the snapshots of the share, in bytes.
Unit: Bytes
Aggregation Type: Average
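As a small worked example, pairing FileShareCapacityQuota with a used-capacity value gives share utilization. The helper below is a sketch, not part of any Azure SDK:

```python
def quota_utilization(used_bytes, quota_bytes):
    """Percent of the FileShareCapacityQuota consumed; a zero or missing
    quota yields 0.0 rather than a division error."""
    if quota_bytes <= 0:
        return 0.0
    return round(100.0 * used_bytes / quota_bytes, 2)

# e.g. 80 GiB used of a 100 GiB quota
GIB = 1024 ** 3
print(quota_utilization(80 * GIB, 100 * GIB))  # 80.0
```

A value approaching 100 is the condition behind the UserErrorTargetFileShareFull restore failure described earlier.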
Transaction metrics
Azure Storage emits transaction metrics to Azure Monitor on every request to a storage account. If there's no
activity on your storage account, there's no transaction metric data for that period. All
transaction metrics are available at both the account and Azure Files service level. The time grain defines the time
interval for which metric values are presented. The supported time grains for all transaction metrics are PT1H and
PT1M.
Azure Storage provides the following transaction metrics in Azure Monitor.
Transactions: The number of requests made to the storage service or the specified API operation.
Unit: Count
Aggregation Type: Total
Applicable dimensions: ResponseType, GeoType, ApiName,
and Authentication (Definition)
Value example: 1024
Ingress: The amount of ingress data, in bytes.
Unit: Bytes
Aggregation Type: Total
Applicable dimensions: GeoType, ApiName, and
Authentication (Definition)
Value example: 1024
Egress: The amount of egress data, in bytes.
Unit: Bytes
Aggregation Type: Total
Applicable dimensions: GeoType, ApiName, and
Authentication (Definition)
Value example: 1024
SuccessServerLatency: The average time used to process a successful request.
Unit: Milliseconds
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and
Authentication (Definition)
Value example: 1024
SuccessE2ELatency: The average end-to-end latency of successful requests.
Unit: Milliseconds
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and
Authentication (Definition)
Value example: 1024
Availability: The percentage of availability for the storage service or the specified API operation.
Unit: Percent
Aggregation Type: Average
Applicable dimensions: GeoType, ApiName, and
Authentication (Definition)
Value example: 99.99
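The aggregation types above matter when rolling PT1M samples up to PT1H: Total metrics (such as transaction counts) are summed, while Average metrics (such as latencies) are averaged. A sketch:

```python
def aggregate(samples, how):
    """Roll finer-grained samples up to one coarser value, honoring the
    metric's aggregation type: 'Total' metrics sum, 'Average' metrics
    average. Returns 0.0 for an empty period (no activity, no data)."""
    if not samples:
        return 0.0
    if how == "Total":
        return float(sum(samples))
    if how == "Average":
        return sum(samples) / len(samples)
    raise ValueError(f"unknown aggregation type: {how}")

transactions_per_min = [120, 95, 130]  # a Count metric, aggregated as Total
latency_ms = [4.0, 6.0, 5.0]           # a Milliseconds metric, aggregated as Average
print(aggregate(transactions_per_min, "Total"))  # 345.0
print(aggregate(latency_ms, "Average"))          # 5.0
```

Summing a latency metric or averaging a count metric gives misleading charts, which is why each metric's aggregation type is listed alongside its unit.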
Metrics dimensions
Azure Files supports the following dimensions for metrics in Azure Monitor.
NOTE
The File Share dimension is not available for standard file shares (only premium file shares). When using standard file
shares, the metrics provided are for all file shares in the storage account. To get per-share metrics for standard file shares,
create one file share per storage account.
Resource logs
The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor
Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was
used to perform the operation.
Fields that describe the operation
PROPERTY | DESCRIPTION
operationVersion The storage service version that was specified when the
request was made. This is equivalent to the value of the
x-ms-version header. For example: 2017-04-17 .
statusCode The HTTP or SMB status code for the request. If the HTTP
request is interrupted, this value might be set to Unknown .
For example: 206
identity / type The type of authentication that was used to make the
request. For example: OAuth , Kerberos , SAS Key ,
Account Key , or Anonymous
identity / tokenHash The SHA-256 hash of the authentication token used on the
request.
When the authentication type is Account Key , the format
is "key1 | key2 (SHA256 hash of the key)". For example:
key1(5RTE343A6FEB12342672AFD40072B70D4A91BGH5CDF797EC56BF82B2C3635CE)
.
When authentication type is SAS Key , the format is "key1 |
key2 (SHA 256 hash of the key),SasSignature(SHA 256 hash
of the SAS token)". For example:
key1(0A0XE8AADA354H19722ED12342443F0DC8FAF3E6GF8C8AD805DE6D563E0E5F8A),SasSignature(04D64C2B3A704145C9F1664F201
. When authentication type is OAuth , the format is "SHA
256 hash of the OAuth token". For example:
B3CC9D5C64B3351573D806751312317FE4E910877E7CBAFA9D95E0BE923DD25C
For other authentication types, there is no tokenHash field.
requester / upn The User Principal Name (UPN) of the requestor. For example:
someone@contoso.com .
etag The ETag identifier for the returned object, in quotes. For
example: 0x8D101F7E4B662C4 .
serviceType The service associated with this request. For example: blob ,
table , files , or queue .
requestBodySize The size of the request packets, expressed in bytes, that are
read by the storage service.
For example: 0 .
If a request is unsuccessful, this value might be empty.
serverMd5 The value of the MD5 hash calculated by the storage service.
For example: 3228b3cf1069a5489b298446321f8521 .
This field can be empty.
lastModifiedTime The Last Modified Time (LMT) for the returned object. For
example: Tuesday, 09-Aug-11 21:13:26 GMT .
This field is empty for operations that can return multiple
objects.
contentLengthHeader The value of the Content-Length header for the request sent
to the storage service. If the request was successful, this
value is equal to requestBodySize. If a request is
unsuccessful, this value may not be equal to
requestBodySize, or it might be empty.
smbCommandDetail More information about this specific request rather than the
general type of request. For example:
0x2000 bytes at offset 0xf2000
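Two of the log fields above lend themselves to a short sketch. `token_hash` only reproduces the *shape* of the tokenHash field (an uppercase SHA-256 hex digest in parentheses); the exact input the service hashes internally is not documented here. `body_size_consistent` applies the requestBodySize/contentLengthHeader relationship described above to a hypothetical parsed log record.

```python
import hashlib

def token_hash(label, secret):
    """Illustrative only: format a secret as 'label(<uppercase SHA-256 hex>)',
    mirroring the shape of the tokenHash examples above."""
    digest = hashlib.sha256(secret.encode("utf-8")).hexdigest().upper()
    return f"{label}({digest})"

def body_size_consistent(record):
    """Cross-check per the field descriptions above: on a successful request,
    contentLengthHeader equals requestBodySize; on a failed request the two
    may differ or be empty, so the check is skipped. Assumes a numeric
    statusCode for simplicity (the log may also carry 'Unknown')."""
    success = 200 <= int(record.get("statusCode", 0)) < 400
    cl, body = record.get("contentLengthHeader"), record.get("requestBodySize")
    if not success or cl in (None, "") or body in (None, ""):
        return True
    return int(cl) == int(body)

print(token_hash("key1", "example-account-key"))
print(body_size_consistent({"statusCode": 200,
                            "contentLengthHeader": "1024",
                            "requestBodySize": "1024"}))  # True
```

Checks like these are handy when sanity-testing log-ingestion pipelines that consume these resource logs.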
See also
See Monitoring Azure Files for a description of monitoring Azure Files.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Azure Files and Azure NetApp Files comparison
5/20/2022 • 4 minutes to read
This article provides a comparison of Azure Files and Azure NetApp Files.
Most workloads that require cloud file storage work well on either Azure Files or Azure NetApp Files. To help
determine the best fit for your workload, review the information provided in this article. For more information,
see the Azure Files and Azure NetApp Files documentation and the Shared storage for all enterprise file-
workloads session which covers choosing between Azure Files and Azure NetApp Files.
Features
CATEGORY | AZURE FILES | AZURE NETAPP FILES
Description
Azure Files: A fully managed, highly available, enterprise-grade service that is optimized for random access
workloads with in-place data updates. Azure Files is built on the same Azure storage platform as other services,
such as Azure Blobs.
Azure NetApp Files: A fully managed, highly available, enterprise-grade NAS service that can handle the most
demanding, high-performance, low-latency workloads requiring advanced data management capabilities. It
enables the migration of workloads that are otherwise deemed "un-migratable".
Standard: LRS, ZRS, GRS, GZRS
Service-Level Agreement (SLA): SLA for Azure Files | SLA for Azure NetApp Files
NFSv3/NFSv4.1
ADDS/LDAP integration with
NFS extended groups
To learn more, see Azure Files enhances data protection capabilities. | To learn more, see How Azure NetApp Files snapshots work.
Standard: Up to 20k | Standard: Up to 320k
Standard: Up to 300 MiB/s | Standard: Up to 3.2 GiB/s
Standard: 1,000
Standard: 60 MiB/s
For more information on scalability and performance targets, see Azure Files and Azure NetApp Files.
Next Steps
Azure Files documentation
Azure NetApp Files documentation
Shared storage for all enterprise file-workloads session
Compare access to Azure Files, Blob Storage, and
Azure NetApp Files with NFS
5/20/2022 • 2 minutes to read
This article provides a comparison between each of these offerings if you access them through the Network File
System (NFS) protocol. This comparison doesn't apply if you access them through any other method.
For more general comparisons, see this article to compare Azure Blob Storage and Azure Files, or this article
to compare Azure Files and Azure NetApp Files.
Comparison
CATEGORY | AZURE BLOB STORAGE | AZURE FILES | AZURE NETAPP FILES
Use cases
Azure Blob Storage: Blob Storage is best suited for large-scale, read-heavy, sequential access workloads where
data is ingested once and minimally modified further. Blob Storage offers the lowest total cost of ownership if
there is little or no maintenance. Some example scenarios are: large-scale analytical data, throughput-sensitive
high-performance computing, backup and archive, autonomous driving, media rendering, or genomic
sequencing.
Azure Files: Azure Files is a highly available service best suited for random access workloads. For NFS shares,
Azure Files provides full POSIX file system support and can easily be used from container platforms like Azure
Container Instances (ACI) and Azure Kubernetes Service (AKS) with the built-in CSI driver, in addition to
VM-based platforms. Some example scenarios are: shared files, databases, home directories, traditional
applications, ERP, CMS, NAS migrations that don't require advanced management, and custom applications
requiring scale-out file storage.
Azure NetApp Files: A fully managed file service in the cloud, powered by NetApp, with advanced management
capabilities. NetApp Files is suited for workloads that require random access and provides broad protocol
support and data protection capabilities. Some example scenarios are: on-premises enterprise NAS migration
that requires rich management capabilities, latency-sensitive workloads like SAP HANA, latency-sensitive or
IOPS-intensive high-performance compute, or workloads that require simultaneous multi-protocol access.
Key features
Azure Blob Storage: Integrated with HPC Cache for low-latency workloads.
Azure Files: Zonally redundant for high availability.
Azure NetApp Files: Extremely low latency (as low as sub-ms).
Scale
Azure Blob Storage: Up to 2 PiB for a single volume. Up to ~4.75 TiB max for a single file. No minimum
capacity requirements.
Azure Files: Up to 100 TiB for a single file share. Up to 4 TiB for a single file. 100 GiB minimum capacity.
Azure NetApp Files: Up to 100 TiB for a single volume. Up to 16 TiB for a single file. Consistent hybrid cloud
experience.
Pricing: Azure Blob Storage pricing | Azure Files pricing | Azure NetApp Files pricing
Next steps
To access Blob storage with NFS, see Network File System (NFS) 3.0 protocol support in Azure Blob Storage.
To access Azure Files with NFS, see NFS file shares in Azure Files.
To access Azure NetApp Files with NFS, see Quickstart: Set up Azure NetApp Files and create an NFS volume.