There’s complexity in an AI model that your data scientists will have to work out. There’s complexity in data management that your data engineers will have to plan. There are even complexities in aligning an organization around an AI strategy. And only you and your executive team can work through those hurdles.

Infrastructure Complexity Is Removable 

Are you drowning in aggregates, caches, pools, balancing, and all that other management nonsense? Pure Storage® FlashBlade is simple: one system that delivers a massive scale-out namespace, multiprotocol access, and a simple web-based GUI and REST API. Anyone can manage it. At any scale.

And to pair with our industry-leading storage simplicity, NVIDIA offers a highly flexible GPU system that brings unmatched compute simplicity. NVIDIA DGX™ A100 replaces the multiple specialized legacy-compute systems needed to support an entire AI workflow, including training, inference, and analytics. 

Perfect for Scaling AI Projects 

As a company’s AI strategy evolves, the infrastructure must evolve with it. In the early stages of model development, teams may be primarily executing training tasks. Deep learning training generally has a random-access storage pattern and is sped up by using multiple GPUs for a single task. On the other hand, inference on incoming data generally has a sequential read pattern, and the compute work is usually performed on a single GPU. 

Support for the Entire AI Pipeline as Workloads Grow 

FlashBlade delivers high performance across all data access types and patterns. Plus, a single, centralized storage platform drastically reduces the time spent copying data between storage systems.

NVIDIA DGX A100 contains eight NVIDIA A100 Tensor Core GPUs and supports Multi-Instance GPU (MIG) partitioning, which allows many separate, fully isolated jobs to run within the same DGX A100.
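As an illustrative sketch of how that partitioning is driven in practice, MIG instances are typically created with the `nvidia-smi` command-line tool. The profile chosen below (`3g.20gb`) is just one example; available profiles and counts depend on your configuration:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# Create two GPU instances on GPU 0 using the 3g.20gb profile,
# and a compute instance (-C) inside each
sudo nvidia-smi mig -i 0 -cgi 3g.20gb,3g.20gb -C

# List the resulting MIG devices visible to workloads
nvidia-smi -L
```

Each MIG instance appears as its own device, so separate training or inference jobs can be scheduled against them in full isolation.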

Now, administrators can right-size storage and compute resources to every workload—from the smallest to the largest. For enterprise deployments, DGX A100 can support the unique resource requirements of training, inference, and analytics, all on the same system. And FlashBlade can provide scaling performance for each workflow’s unique IO profile. 

The AIRI solutions, with Pure Storage FlashBlade and NVIDIA DGX systems, enable data scientists to “work smarter, together” by providing an enterprise-grade infrastructure solution that IT can manage. AIRI solutions help eliminate infrastructure design guesswork, speed deployment, and simplify management.

Impossibly Simple Management Across the AI Platform 

As the saying goes, “models are a tiny part of an end-to-end ML system.”

At Pure Storage, our vision is for IT teams to provide scalable, self-service platforms to their development teams, enabling organizations to deploy valuable models in production. Together, FlashBlade and DGX A100 form the backbone of a stable, end-to-end AI environment.

By backing a Kubernetes-based AI platform with FlashBlade, IT teams can easily add new applications as necessary. They can use Pure Service Orchestrator to auto-provision persistent volumes on FlashBlade for new users. And as a bonus, cross-pipeline monitoring and alerting applications can use that same FlashBlade for performant storage.
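As a minimal sketch of what that self-service provisioning looks like, a user (or their tooling) submits an ordinary Kubernetes persistent volume claim, and Pure Service Orchestrator carves out the storage on FlashBlade automatically. The claim name and capacity here are examples; `pure-file` is the file-backed storage class that a typical Pure Service Orchestrator install provides, though names can differ per deployment:

```yaml
# Example PersistentVolumeClaim: requests shared file storage that
# Pure Service Orchestrator auto-provisions on FlashBlade.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-datasets        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany              # shared across many training pods
  resources:
    requests:
      storage: 10Ti              # example capacity
  storageClassName: pure-file    # file-backed class from PSO
```

Once bound, any pod in the pipeline can mount the claim, so training jobs, notebooks, and monitoring tools all see the same data without copies.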

Kubernetes-based AI platform with FlashBlade Object Storage

Combine the Prometheus exporters from Pure and from NVIDIA for single-pane-of-glass management across an entire cluster.
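A sketch of what that combination can look like in a Prometheus scrape configuration, assuming both exporters are already deployed. The target hostnames are hypothetical, and the exact ports and metrics paths depend on the exporter versions and how they are configured in your environment:

```yaml
# prometheus.yml fragment: scrape storage and GPU metrics
# into one Prometheus instance for unified dashboards and alerts.
scrape_configs:
  - job_name: flashblade
    static_configs:
      - targets: ['pure-exporter:9491']   # hypothetical Pure exporter service
  - job_name: dgx-gpus
    static_configs:
      - targets: ['dgx-01:9400']          # hypothetical DCGM exporter endpoint
```

With both jobs feeding one Prometheus instance, a single Grafana dashboard can correlate GPU utilization on the DGX systems with throughput on FlashBlade.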


FlashBlade’s data hub architecture delivers an ideal storage platform to power analytics and AI.

NVIDIA DGX A100 offers unprecedented compute density, performance, and flexibility. 

To learn more, contact us with any questions, or read about our partnership with NVIDIA. We’d love for you to see how FlashBlade and AIRI enable new possibilities.