
Released earlier this year, VMware vSphere 7 can run virtual machines (VMs) and multiple Kubernetes clusters in a hybrid mode for better manageability. vSphere 7 also provides the building blocks for cloud-native applications and simplifies the administration of security, performance, resiliency, and lifecycle management for different applications. In this post, I’ll focus on how to use VMware Tanzu for distributed machine learning (ML) and high-performance computing (HPC) applications.

ML/HPC applications can be a mix of interactive and batch workloads. End users run jobs in parallel that finish once the test results converge. Many other modern applications take part in the ML pipeline, generating heterogeneous workloads with scalable performance requirements. Most ML/HPC applications need easy deployment and simple manageability, along with on-demand storage provisioning and capacity scaling at predictable performance, so that the jobs running in parallel complete on time.

Most ML/HPC applications, like JupyterHub/Jupyter Notebook, require the ReadWriteMany (RWX) access mode, which allows multiple pods to read and write the same volume, for capacity scaling and data sharing in the Tanzu Kubernetes cluster. The native Tanzu Kubernetes Cloud Native Storage (CNS) driver supports the ReadWriteOnce (RWO) access mode. As shown in Figure 1, the CNS driver supports the vSphere datastores VMFS, vVols, and vSAN.


Figure 1: The CNS driver supports vSphere datastores.

vSAN File Services lets the Tanzu Kubernetes cluster mount file systems over NFSv3 and NFSv4.1. vSAN File Services is coupled with the CNS driver, but the setup is complicated and the provisioned persistent storage isn’t elastic.

Pure Storage® FlashBlade® provides a standard data platform with Unified Fast File and Object (UFFO) storage that supports heterogeneous workloads from ML/HPC and modern applications that are part of single or multiple workflow pipelines. While vSAN File Services can offer RWX access for HPC applications, FlashBlade scales both capacity and performance, serving massively parallel RWX access from distributed HPC compute over the network file system (NFSv3).


Figure 2: VMware Tanzu Kubernetes cluster supports many types of applications.

The VMware Tanzu Kubernetes cluster supports several modern applications for a wide variety of use cases (Figure 2). Most of these applications use file access (NFSv3/v4.1) for resiliency and to make data shareable among end users.

Pure Service Orchestrator is an abstracted control plane that dynamically provisions persistent storage on demand through the default or custom storage classes used by the stateful applications running on the Tanzu Kubernetes cluster. By default, Pure Service Orchestrator provides the storage classes pure-block for FlashArray and pure-file for FlashBlade. For the purposes of this blog post, the ML/HPC applications require the default or a custom file-based storage class to provision persistent storage on FlashBlade.
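As a minimal sketch of how a stateful application requests RWX storage on FlashBlade (the claim name and size here are illustrative), a persistent volume claim simply references the pure-file storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-scratch            # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # multiple pods can read and write the same volume
  storageClassName: pure-file     # PSO's default file storage class backed by FlashBlade
  resources:
    requests:
      storage: 100Gi              # example size
```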

In this example, I’m using the Tanzu Kubernetes guest cluster hpc2-dev-cluster5, shown in the vSphere client, for the ML/HPC workload validation on FlashBlade with Pure Service Orchestrator.


Figure 3: The guest cluster “hpc2-dev-cluster5” is listed under namespace “hpc2” along with the control plane.

The Tanzu Kubernetes cluster consists of three master and four worker VM nodes. Before installing Pure Service Orchestrator with Helm, I need to update the pod security policy in the Tanzu Kubernetes guest cluster hpc2-dev-cluster5, because by default Tanzu Kubernetes clusters don’t allow privileged pods to run. Apply a psp.yaml that creates the appropriate bindings.
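A minimal sketch of such a binding, assuming the vmware-system-privileged pod security policy that ships with Tanzu Kubernetes clusters (the binding name is illustrative, and the subjects should be scoped to suit your environment):

```yaml
# psp.yaml: let authenticated users and service accounts use the privileged PSP
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-privileged-binding          # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged    # ClusterRole bundled with Tanzu Kubernetes clusters
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated          # broad for a lab; narrow this in production
```

Apply it with kubectl apply -f psp.yaml.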

Next, we’ll look at how to set up JupyterHub/Jupyter Notebook in the Tanzu Kubernetes cluster with the appropriate privileges, and how to create a custom storage class in Pure Service Orchestrator that provisions persistent volumes (PVs) with the RWX access mode.

Version 6.0.x of Pure Service Orchestrator is stateful and has a database of its own. The database stores metadata, such as the name, size, NFS endpoints, and NFS export rules, for the file systems created by Pure Service Orchestrator on FlashBlade. Version 6.0.x allows you to provision PVs with a generic set of NFS export rules in the values.yaml or to create a custom storage class with specific export rules for certain ML/HPC applications. In this example, “noatime” is added as an export rule in the values.yaml so that it applies to all the PVs provisioned by Pure Service Orchestrator on FlashBlade.
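A sketch of the relevant parts of values.yaml for the pso-csi Helm chart; the arrays section uses the chart’s documented keys, while the exportRules key name is an assumption to verify against your chart version, and all endpoints and tokens are placeholders:

```yaml
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.0.0.10"        # FlashBlade management VIP (placeholder)
      APIToken: "T-xxxxxxxx"           # API token created on the FlashBlade (placeholder)
      NFSEndPoint: "10.0.0.20"         # data VIP used for NFS mounts (placeholder)

flashblade:
  exportRules: "*(rw,no_root_squash,noatime)"   # assumed key; adds noatime to the export rules of every PV
```

With the values file in place, the chart is installed with Helm into the pure-csi namespace (for example, helm install pure-pso pure/pure-pso --namespace pure-csi -f values.yaml; repository and chart names vary by PSO release).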

After successfully installing Pure Service Orchestrator in its namespace “pure-csi,” all the database pods and the pso-csi driver pods are running on the cluster nodes.
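You can confirm this with kubectl (namespace name assumed from above):

```console
kubectl get pods -n pure-csi -o wide
```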

By default, applications using the default storage class pure-file mount the PV over NFSv4.1. 

However, most HPC applications use NFSv3 to mount file systems from FlashBlade. A custom storage class, “pure-file-v3,” is created with the NFS mount option set to version 3 and marked as the default storage class.
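A sketch of such a storage class, assuming the pure-csi provisioner name and the backend: file parameter used by Pure Service Orchestrator’s file storage classes (verify both against your PSO release):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pure-file-v3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the default storage class
provisioner: pure-csi               # CSI driver installed by Pure Service Orchestrator
parameters:
  backend: file                     # provision file systems on FlashBlade
mountOptions:
  - nfsvers=3                       # mount the PVs over NFSv3
```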

Now the Tanzu Kubernetes cluster is ready for stateful ML/HPC applications like JupyterHub/Jupyter Notebook to be configured on FlashBlade using Pure Service Orchestrator. You can add other applications for monitoring and alerting, log analytics, and more to the Jupyter-as-a-Service pipeline.


Figure 4: Various applications in the Jupyter pipeline.
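JupyterHub is typically deployed with the Zero to JupyterHub Helm chart; a minimal sketch of the installation, assuming a jhub namespace, a release named jhub, and a config.yaml holding the snippets shown below (all of these names are illustrative):

```console
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace \
  --values config.yaml
```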

The JupyterHub proxy service automatically picks up an external IP address. NSX, which is part of vSphere 7, provides the external IP address, so you don’t need an external load balancer like MetalLB for the Tanzu Kubernetes cluster.
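Because the chart’s public proxy service (proxy-public) is of type LoadBalancer by default, NSX assigns it an external IP; you can confirm with kubectl (namespace assumed from the install sketch above):

```console
kubectl get svc proxy-public -n jhub
```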

The persistent volume claims (PVCs) for the different users starting Jupyter notebooks are set to the RWX access mode and use the default storage class “pure-file-v3” when provisioning persistent storage on FlashBlade.
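In the Zero to JupyterHub chart, this is driven by the single-user storage settings in config.yaml; a sketch under that assumption (the per-user capacity is illustrative, and the key names follow the chart’s singleuser.storage.dynamic options):

```yaml
singleuser:
  storage:
    type: dynamic
    capacity: 10Gi                   # illustrative per-user quota
    dynamic:
      storageClass: pure-file-v3     # custom PSO storage class defined above
      storageAccessModes:
        - ReadWriteMany              # RWX lets notebook pods share data on FlashBlade
```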

End users like data scientists using Jupyter notebooks create their own persistent work areas on FlashBlade through Pure Service Orchestrator. Jupyter-as-a-Service on Tanzu Kubernetes running on CPUs allows data scientists to carry out the preliminary exploration and validation of data-intensive ML/HPC pipelines before they move to more specialized hardware like GPUs.

Monitoring, alerting, log analytics, and other HPC applications can be added to the ML/HPC pipeline, storing, reusing, and sharing data on FlashBlade using Pure Service Orchestrator. In the next part of this blog series, I’ll highlight the use of monitoring and alerting tools like Prometheus and Grafana on the Tanzu Kubernetes cluster and FlashBlade.

For more information on Tanzu Kubernetes for ML/HPC applications and Pure Storage FlashBlade using Pure Service Orchestrator, be sure to check out my VMworld 2020 session: Modernize and Optimize ML/AI/HPC Applications with vSphere Tanzu (HCP1545).