Running Cloud-Native AI and IoT Workloads at the Edge

Edge computing is accelerating the next generation of automation and analytics. Portworx is a container-native storage platform engineered for the edge.

Edge computing is one of the core components of the Industrial Internet of Things (IIoT) and plays a critical role in accelerating the journey toward Industry 4.0. Enterprises in manufacturing, transportation, oil and gas, and smart buildings rely on edge-computing platforms to implement the next generation of automation and analytics.

Kubernetes has become the foundation for modern infrastructure, including the edge. Whether running on-premises, in the cloud, or at the edge, Kubernetes provides a consistent experience and workflow for building, deploying, and managing applications.

Modern applications running on Kubernetes are packaged and deployed as microservices. Microservices are an architectural pattern in which an application is assembled from multiple smaller, reusable services. Kubernetes-based edge infrastructure runs two types of microservices:

  • Stateless: services without persistent data, such as load balancers or web servers
  • Stateful: services with persistent data, such as streaming apps and databases (sketched below)
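
As a minimal sketch of the stateful pattern (the claim name, capacity, and access mode are illustrative assumptions, not from a specific deployment), a stateful microservice requests persistent storage through a PersistentVolumeClaim:

```yaml
# Illustrative PersistentVolumeClaim for a stateful microservice.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: telemetry-data
spec:
  accessModes:
    - ReadWriteOnce          # a dedicated volume, mounted by one node at a time
  resources:
    requests:
      storage: 10Gi          # assumed capacity
```

Pods then reference the claim as a volume, and Kubernetes binds it to actual storage through a StorageClass.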

A pod is the fundamental unit of execution in Kubernetes. In a typical IIoT scenario, each sensor and actuator connected to the edge is represented by a pod. A message broker orchestrates communication between the sensors and the data-processing service. The telemetry ingested from the sensors through the message broker is stored in a time-series database used for real-time processing and analytics; the database is a stateful microservice that needs persistence.
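
For instance, the time-series database could run as a pod that mounts the claim sketched above (the InfluxDB image and mount path are assumptions; any time-series database fits the pattern):

```yaml
# Hypothetical time-series database pod backed by persistent storage.
apiVersion: v1
kind: Pod
metadata:
  name: tsdb
spec:
  containers:
    - name: influxdb
      image: influxdb:2.7                # example time-series database
      volumeMounts:
        - name: data
          mountPath: /var/lib/influxdb2  # InfluxDB 2.x data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: telemetry-data        # the claim from the earlier sketch
```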

Eventually, the telemetry is moved to the cloud, where the historical data is used to train machine-learning models. Trained models are deployed at the edge for inference, enabling anomaly detection and predictive maintenance of equipment. The AI models are also packaged as a Kubernetes Deployment object, which may run one or more pods serving the inference service.
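
A sketch of such a Deployment follows; the image and claim names are placeholders, and the shared (RWX) claim it mounts is sketched later in this article:

```yaml
# Hypothetical inference Deployment; replicas serve the same trained model
# from a shared (ReadWriteMany) volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anomaly-inference
spec:
  replicas: 3
  selector:
    matchLabels:
      app: anomaly-inference
  template:
    metadata:
      labels:
        app: anomaly-inference
    spec:
      containers:
        - name: inference
          image: registry.example.com/iiot/inference:v1   # placeholder image
          volumeMounts:
            - name: model-store
              mountPath: /models
              readOnly: true
      volumes:
        - name: model-store
          persistentVolumeClaim:
            claimName: shared-models   # an RWX claim (see the PVC sketch below)
```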

The scenario in Figure 1 highlights diverse types of microservices:

  • Stateless services representing the sensors
  • A stateful message broker that persists logs and raw telemetry
  • A stateful, write-intensive time-series database
  • A stateful, read-heavy AI-inference engine that relies on shared storage to serve the model

The Edge Computing Layer

The edge computing layer runs the Kubernetes-based container infrastructure, and it requires a robust storage engine that meets the requirements of different workloads. Most storage options deliver persistence to Kubernetes workloads, but they're optimized either for read-heavy workloads (e.g., NFS) or for write-intensive workloads (e.g., block storage). They aren't optimized for shared storage, which is critical for AI and IoT workloads. As a result, you need an overlay storage layer for stateful applications plus a separate, shared file system such as NFS to run AI inference.

Unlike a data center, the edge runs in remote locations that aren’t easily accessible. This significantly increases the cost of support and maintenance. It isn’t practical for a DevOps engineer to SSH (Secure Shell) into one of the cluster nodes to troubleshoot and resolve issues. The edge computing infrastructure needs to be self-driving, autonomous, and self-healing.

Without a robust storage engine, you’ll need to deploy two storage layers: a dedicated Read-Write-Once (RWO) layer and a shared Read-Write-Many (RWX) layer. This means you’ll have to deal with two independent and disjointed storage layers. With multiple moving parts, it becomes difficult to isolate and pinpoint problems.
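
Concretely, the two layers show up as two unrelated StorageClasses, each with its own provisioner, failure modes, and operational tooling (the class names and CSI drivers below are illustrative):

```yaml
# Layer 1: dedicated block storage for write-intensive databases (RWO).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-rwo
provisioner: ebs.csi.aws.com    # example block-storage CSI driver
---
# Layer 2: a separate NFS layer for shared access (RWX).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-rwx
provisioner: nfs.csi.k8s.io     # example NFS CSI driver
```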

A unified storage engine can handle both dedicated (RWO) and shared (RWX) volumes, as the sketch after this list shows:

  • Dedicated volumes deliver the throughput write-intensive databases require while optimizing read operations.
  • Shared volumes let multiple pods access the same file with no additional configuration.
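
With a unified engine, the difference collapses to an access mode on otherwise identical claims against one StorageClass. A sketch, with an assumed class name:

```yaml
# Both claims target the same storage engine; only the access mode differs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-volume
spec:
  storageClassName: unified-storage   # hypothetical unified class
  accessModes:
    - ReadWriteOnce                   # dedicated volume for the database
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-models
spec:
  storageClassName: unified-storage
  accessModes:
    - ReadWriteMany                   # shared volume for the inference pods
  resources:
    requests:
      storage: 5Gi
```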

Portworx® is a container-native storage engine optimized for read-heavy operations, write-intensive workloads, and applications requiring shared access. It enables you to run workloads with different characteristics on the same storage layer, dramatically reducing the overhead that's typically needed to run a separate shared file system.
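
As a sketch, a Portworx StorageClass for shared volumes might look like the following; the replication factor is an assumption, and the exact parameters should be checked against the Portworx documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4
provisioner: pxd.portworx.com   # Portworx CSI provisioner
parameters:
  repl: "2"                     # keep two replicas of each volume
  sharedv4: "true"              # allow shared (RWX) access to the volume
```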

Portworx reduces the support and maintenance cost of running workloads at the edge. Integrations with industry-standard observability platforms such as Prometheus, Alertmanager, and Grafana provide rich insights into the storage platform.
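
For example, if the cluster runs the Prometheus Operator, a ServiceMonitor can point Prometheus at the Portworx metrics endpoint; the label selector and port name here are assumptions about how the service is exposed:

```yaml
# Hypothetical ServiceMonitor scraping Portworx metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: portworx-metrics
spec:
  selector:
    matchLabels:
      name: portworx        # assumed label on the Portworx service
  endpoints:
    - port: px-api          # assumed name of the metrics port
```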

The edge requires reliable compute, storage, and network layers. Portworx is an ideal storage layer for running cloud-native workloads at the edge.