Container adoption is increasing rapidly, but the lack of persistent storage support for dynamic containerized applications threatens to slow this trend. We announced Pure Service Orchestrator at the Pure//Accelerate conference in May 2018 and were thrilled with the response. Check out our launch blog here. Today, we are excited to announce that Pure Service Orchestrator is now Generally Available (GA)!


This new product, Pure Service Orchestrator (PSO), is a software layer that allows fleets of Pure Storage FlashArray and FlashBlade™ storage to be federated together and consumed through a simple Storage-as-a-Service API.


The initial release of PSO is coming to market as a part of our container orchestration integration effort. We support a broad array of orchestration flavors through a Docker Volume Plugin and Kubernetes FlexVolume Driver. The latest releases of both of these plugins have the new Service Orchestrator layer built into them, ready for use.


Container orchestrators and the cloud-native community have a structural preference for consuming services and isolating the application layer from the underlying infrastructure. Given that preference, we think the new fleet API concepts we’re introducing will make using Pure Storage in containerized environments a truly differentiated, cloud-native experience.

Getting Started With PSO

Pure Service Orchestrator is super simple to get started with if you have an existing container environment. Given that PSO has just launched, most of you reading this will need to download and install the plugin for your environment before you can see the workflows I’m about to describe in action. If you don’t have a container environment set up yet, feel free to choose any orchestrator and return here once you’ve got a cluster up and running. You can also check out our demo videos to get a preview of the functionality we’re launching.


To get access to our latest plugins, please head over to the links below.


Kubernetes FlexVolume Driver:

Docker Volume Plugin:

Once you’ve got the plugin downloaded, follow the orchestrator-specific instructions available alongside the plugins to complete the installation. We also have a new Helm-based installation method for Kubernetes that you can check out at this link:
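To give a feel for what the Helm-based installation consumes, here’s a sketch of a values file describing a small mixed fleet. The key names (`MgmtEndPoint`, `APIToken`, `NfsEndPoint`) and overall shape are illustrative assumptions — check the chart’s own documented values before using them.

```yaml
# Hypothetical values.yaml for the PSO Helm chart.
# Key names are illustrative assumptions; consult the chart's
# documented values reference for the real schema.
arrays:
  FlashArrays:
    - MgmtEndPoint: "10.0.0.10"        # FlashArray management address
      APIToken: "<flasharray-api-token>"
  FlashBlades:
    - MgmtEndPoint: "10.0.0.20"        # FlashBlade management address
      NfsEndPoint: "10.0.0.21"         # data-path address for NFS mounts
      APIToken: "<flashblade-api-token>"
```

With a file like this, installation would reduce to a single `helm install` pointing at the chart and your values file, and adding an array to the fleet later is just an edit plus an upgrade.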

How Do We Turn Distinct Arrays Into A Single Service?

OK, it’s all well and good to say that we can take a group of FlashArray and FlashBlade storage arrays and turn them into a fleet from which we provision a single storage service, but how does that actually work in practice?


At a high level, it’s fairly simple: we keep track of the resources the cluster has access to and make the best provisioning and connectivity decisions we can at any given time. However, let’s unpack that a bit and get into the details of how we keep a cluster happily supplied with persistent volumes not only from multiple arrays, but from multiple different types of storage arrays that can be added to or removed from the cluster on demand.

Keeping track of the Fleet

Before getting into exactly how we track and manage the fleet sitting behind PSO, I want to establish a few things that define a fleet for us:


  1. A fleet consists of at least one storage array
  2. A fleet can consist of one or more FlashArrays, one or more FlashBlades, or a mix of both
  3. All Pure Storage hardware versions currently in the field are supported as part of a PSO fleet
  4. Any Purity Operating Environment (OE) version 4.7.0+ on FlashArray and 2.1.6+ on FlashBlade is supported as part of a PSO fleet
  5. Arrays in the fleet can be used for multiple workloads, including more than one PSO fleet at the same time
  6. Arrays can be added to a new fleet at any time without impacting array or fleet performance


The core design decision that allows us to have such scalable and dynamic fleets is that we do not require any special software or proprietary access on our arrays. All communication with the arrays is done using our standard, public REST APIs.


This abstraction allows arrays to operate without any awareness that they are part of the fleet. This is important because it forces PSO to behave in safe, standard ways that allow other activities on the arrays to continue uninterrupted. For example, an array could be plugged into a VMware environment, providing datastores and IO to large numbers of VMs, while at the same time being added into a PSO fleet and beginning to provision and service persistent volume claims for containers. As long as the array is sized to handle the performance requirements of both environments, the workloads will happily run side-by-side.

How We Make Provisioning Decisions

What happens if your containerized environment starts to reach the performance ceiling of an array? What happens if you have multiple arrays in a fleet running a variety of workloads? How can you be sure that we’ll make the best provisioning decision for any new volume claim coming down from your container environment?


All of these are good questions, and they speak to the core reason PSO exists. Given the complexity of customer environments and the speed and flexibility a container orchestrator requires to serve cloud-native applications, a better, smarter, cross-fleet provisioning approach is needed to deliver an as-a-service experience. Here’s how we provide that optimal experience.


Monitoring the Fleet

At all times, PSO keeps track of the state of the world by monitoring the fleet along several axes:

  1. Keep track of all the arrays in the fleet
    1. The user could add new resources at any time.
  2. Track health, capacity, and performance load of all systems in the fleet
    1. An array could be upgraded, or expanded at any time
    2. Other workloads sharing the array could change their consumption pattern
    3. An issue in the environment or the array could degrade performance for one of the arrays
    4. Background array processes could reduce the existing load
  3. Maintain an understanding of the capabilities of each array


Handling a Storage Request

When we get a request for storage from the container orchestrator, we use our understanding of the state of the fleet to make an optimal decision, for example:

  1. Understand the requirements of the inbound request.
    1. Could be as simple as “Put 100GB of storage at /mnt/disk”
    2. Could contain more specifics about the requirements of the workload
  2. Identify the arrays having features capable of serving the request
  3. Prioritize the arrays based on their state
    1. Make sure the array is in good health
    2. Make sure the array has capacity available
    3. Make sure the array has performance headroom available
  4. Provision to the most desirable array


Using this methodology we’re able to deal with a constantly shifting environment while still providing a clean, simple Storage-as-a-Service experience to our customers.

How To Inject Business Policy Into Our Process

One additional question that might come up when thinking about deploying a PSO fleet: what happens if there are business requirements or datacenter architecture constraints that need to be applied to the provisioning decision?


The likelihood of situations like this coming up is very high, so we’ve made sure to address them in the initial PSO release as well. Here we’ve taken direction from the way container orchestrators allow additional information to be added to their provisioning process: specifically, we allow storage arrays to be tagged to indicate specific business policies.


One common example is a customer who wants to provide a public-cloud-style availability zone system to their applications to help deliver higher availability to end users. We support such an environment just like the container orchestrators do. You can tag arrays to identify the specific availability zone or zones they are able to serve. Then, when a pod is provisioned, all you need to do is indicate which availability zone it needs to live in, and we adjust our provisioning process accordingly.
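To make that concrete, a zone-pinned claim might look like the sketch below. The label key `pure/az` and the storage class name `pure-block` are illustrative assumptions — the actual keys and class names are defined by the plugin’s documentation:

```yaml
# Hypothetical PersistentVolumeClaim pinned to availability zone "az-1".
# The "pure/az" label key and "pure-block" class name are assumptions
# for illustration only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  labels:
    pure/az: az-1            # matched against the zone tags on the arrays
spec:
  storageClassName: pure-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

With the arrays tagged for their zones, a claim like this would only ever be satisfied by an array serving `az-1`.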


One important thing to note here is that our business logic support and our smart provisioning work together. Once we’ve narrowed the fleet to arrays that satisfy the correct zone or other business requirements, we make a smart provisioning decision among that qualifying set.



Hopefully this discussion has shed some light on the way Pure is able to provide Storage-as-a-Service for containerized environments with our new Pure Service Orchestrator. From managing a dynamic fleet of arrays, to smart provisioning across the fleet, we’re thrilled to launch this new product and we hope you’ll love it.


To learn more about Pure Service Orchestrator, visit our new Containers web page –