Data mobility from an on-premises FlashBlade® system to a cloud-adjacent location like Equinix is central to cloud bursting: data is staged on the Equinix-hosted FlashBlade so that compute resources in Azure Cloud can operate on it. Our recent post, Data Mobility for HPC and EDA Workloads from On-premises to Azure Cloud, walks through data mobility between an on-premises FlashBlade and an Equinix-hosted FlashBlade using file system replication.
However, at the end of a project, you can choose to move the data back to the on-premises FlashBlade, or from the FlashBlade in Equinix to a cold storage solution like Azure Blob storage for long-term retention. While array-level replication can move data from the Equinix-hosted FlashBlade back into the local data center, a host-based tool like fpsync is useful for moving data from an NFS share on FlashBlade to blob storage in Azure.
What Is Fpsync?
Fpsync is a powerful open source migration tool that combines “fpart” and “rsync” to migrate small and large files across heterogeneous storage endpoints and data formats. Fpart partitions the source directory tree into work units, and fpsync dispatches parallel rsync jobs that copy those units from the source to the target location. Compared with the standard UNIX “cp” command and other open source tools, fpsync achieves a faster transfer rate regardless of file sizes or the total amount of data copied.
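As a minimal sketch of a parallel copy (the paths, worker count, and work-unit size here are illustrative, not taken from the workflow in this post):

```
# Partition the source tree into work units of at most 2048 files (-f)
# and copy them with 8 concurrent rsync workers (-n); -v prints progress.
fpsync -n 8 -f 2048 -v /mnt/src/ /mnt/dst/
```

Because each work unit is an independent rsync job, throughput scales with the number of workers instead of being limited by a single copy stream.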
A typical semiconductor chip design environment has high file counts, deep directory structures, and millions of small files with soft and hard links. Fpsync is a very effective host-based tool for migrating design and simulation data across heterogeneous data platforms.
As shown in the diagram above, the design and simulation workflow in the Equinix location can leave a lot of residual data on FlashBlade that does not need to be retained on primary high-performance storage. Organizations may decide to retain that data in cheap and deep blob storage in Azure Cloud.
How to Use NFS with Azure Blob Storage
Azure Data Lake Storage Gen2 provides a hierarchical namespace, which is enabled on the storage account (a provisioning sketch follows the list below). This feature offers admins and end users two main advantages:
- Arranges the objects in the storage container as files and directories.
- Allows the container to be mounted on a compute host as an NFS share with a mount path.
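For illustration, provisioning a storage account with these capabilities via the Azure CLI might look like the following sketch (the resource group, region, SKU, and kind are placeholder assumptions; note that NFS 3.0 on blob storage also requires the account’s network access to be restricted, for example to a virtual network):

```
# Create a storage account with hierarchical namespace and NFS 3.0 enabled
# (resource group, location, SKU, and kind are illustrative choices)
az storage account create \
  --name purenfsblobacct \
  --resource-group example-rg \
  --location eastus \
  --sku Premium_LRS \
  --kind BlockBlobStorage \
  --enable-hierarchical-namespace true \
  --enable-nfs-v3 true

# Create the container that will be exported over NFS
az storage container create \
  --account-name purenfsblobacct \
  --name purenfsblobcont
```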
The following output shows the NFS mount path of the Azure blob storage on a Linux compute resource. It also shows the mount path of the NFS share exported from FlashBlade in the Equinix data center, mounted on the same Linux host:
```
NFS share from FlashBlade:

192.168.100.103:/vcs-01 on /mnt/vcs-01 type nfs
(rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,nolock,proto=tcp,
timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.103,mountvers=3,mountport=2049,
mountproto=tcp,local_lock=all,addr=192.168.100.103)

Blob storage over NFS in Azure Cloud:

purenfsblobacct.blob.core.windows.net:/purenfsblobacct/purenfsblobcont on /mnt/nfsblob type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,port=2048,
timeo=600,retrans=2,sec=sys,mountaddr=20.150.91.100,mountvers=3,mountport=2048,
mountproto=tcp,local_lock=all,addr=20.150.91.100)
```
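The mount invocations behind that output might look like the following sketch (the options mirror the mount flags shown above; Azure exposes the NFS endpoint as account.blob.core.windows.net:/account/container):

```
# Mount the FlashBlade NFS share from the Equinix data center
mount -t nfs -o vers=3,proto=tcp 192.168.100.103:/vcs-01 /mnt/vcs-01

# Mount the blob container over NFS 3.0
mount -t nfs -o vers=3,sec=sys,nolock,proto=tcp \
  purenfsblobacct.blob.core.windows.net:/purenfsblobacct/purenfsblobcont /mnt/nfsblob
```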
Microsoft has shared a known issue in which file metadata is lost when data is copied from a source NFS share to a blob container mounted over NFS using the standard UNIX “cp” command. The following test validates the problem.
How Fpsync Preserves Metadata Between FlashBlade and Azure Blob Storage
The files listed under the source subdirectory have “azhpcadmin” as the user and “packer” as the group ownership. After the files are copied from the NFS share on FlashBlade with the standard UNIX “cp” command, the user and group ownership change to “root” on the target share that has blob storage on the back end.
```
Source NFS share from FlashBlade:

[root@azpr-bikash3000 dir1]# ls -al /mnt/vcs-01/ravi/50_50/dir0/dir8/dir1/dir1
total 18
drwxrwxrwx. 2 azhpcadmin packer    0 Mar 15 18:39 .
drwxrwxrwx. 5 azhpcadmin packer    0 Mar 15 18:38 ..
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15 18:39 file1
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15 18:39 file2
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15 18:39 file3
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15 18:39 file4

Target NFS share mounting the Azure Blob storage:

[root@azpr-bikash3000 dir1]# pwd
/mnt/nfsblob/ravi/50_50/dir0/dir8/dir1/dir1
[root@azpr-bikash3000 dir1]# ls -la
total 18
drwxr-xr-x. 2 root root    0 Sep 13 17:21 .
drwxr-xr-x. 2 root root    0 Sep 13 17:21 ..
-rwxr-xr-x. 1 root root 4129 Sep 13 17:21 file1
-rwxr-xr-x. 1 root root 4129 Sep 13 17:21 file2
-rwxr-xr-x. 1 root root 4129 Sep 13 17:21 file3
-rwxr-xr-x. 1 root root 4129 Sep 13 17:21 file4
```
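A quick way to spot the discrepancy is to compare owner, group, and mode on both mounts with stat (a sketch reusing the paths from the test above):

```
# Print owner, group, octal mode, and name for the same file on each mount
stat -c '%U %G %a %n' /mnt/vcs-01/ravi/50_50/dir0/dir8/dir1/dir1/file1
stat -c '%U %G %a %n' /mnt/nfsblob/ravi/50_50/dir0/dir8/dir1/dir1/file1
```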
The tests confirm the issue reported by Microsoft. A similar test was performed using fpsync. Fpsync was not only faster than the standard UNIX “cp” command but also preserved file and directory metadata at the target location.
```
[root@azpr-bikash3000 ~]# fpsync /mnt/vcs-01/ravi/50_50/dir0/dir0/dir0/dir0/ /mnt/nfsblob/test2
[root@azpr-bikash3000 ~]# cd /mnt/nfsblob/test2
[root@azpr-bikash3000 test2]# ls -la
total 18
drwxrwxrwx. 2 azhpcadmin packer    0 Sep 27 18:38 .
drwxr-x---. 2 root       root     0 Sep 27 18:38 ..
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15  2021 file1
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15  2021 file2
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15  2021 file3
-rwxrwxrwx. 1 azhpcadmin packer 4129 Mar 15  2021 file4
```
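To verify that data and metadata match after the copy, an itemized rsync dry run can serve as a check (a sketch; no output below the command means source and target agree):

```
# -a compares attributes, -n makes it a dry run, -i itemizes any differences
rsync -ani /mnt/vcs-01/ravi/50_50/dir0/dir0/dir0/dir0/ /mnt/nfsblob/test2/
```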
Data tiering from FlashBlade in Equinix into blob storage in Azure Cloud allows organizations to archive data for long-term retention. With fpsync, data can be moved in both directions: from FlashBlade to Azure Blob storage, and back again on demand. Reading from blob storage is straightforward with the hierarchical namespace and the NFS mount path, as the objects are arranged as files and directories. Fpsync thus provides data continuity between files and objects across heterogeneous data platforms.
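A restore in the opposite direction is the same invocation with source and target swapped (the restore path below is hypothetical):

```
# Pull the archived tree back from blob storage to FlashBlade on demand
mkdir -p /mnt/vcs-01/restore/test2
fpsync -n 8 /mnt/nfsblob/test2/ /mnt/vcs-01/restore/test2/
```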
Read Connected Cloud with FlashBlade and Microsoft Azure HPC for EDA Workloads to learn more about the value of cloud-connected storage in partnership with Microsoft Azure.