High-performance computing (HPC) has evolved rapidly since its genesis in 1964 with the introduction of the CDC 6600, the world’s first supercomputer. Since then, the amount of data the world generates has exploded, and accordingly, the need for HPC to be able to process data more rapidly and efficiently has become paramount.
This requirement to process data more efficiently has forced HPC innovators and designers to think outside the box about not just how data is processed but where it's processed and what gets processed.
There are no standards or clear best practices to follow yet, but the innovations and breakthroughs are coming thick and fast, especially in areas like AI.
“The world, ‘the possible,’ keeps expanding dramatically, almost geometrically. The amount of data that we have and the amount of compute that we have, and it’s not just about the raw compute but also the advances in terms of algorithms and transfer learning. There’s so much amazing work going on, it’s this Cambrian explosion of innovation.” – Justin Emerson, Principal Technology Evangelist, Pure Storage
With cloud computing now firmly established, the floodgates have opened up for a whole new world of supercomputing innovation and experimentation. Here are the top five trends in HPC—what they are and what they mean for the potential of the modern enterprise to fully capitalize on its new wealth of data.
1. Artificial Intelligence Will Be Used to Improve HPC.
It would be hard to talk about current HPC trends without mentioning artificial intelligence (AI). Over the last five years or so, with the advent of the internet of things (IoT), 5G, and other data-driven technologies, the amount of data available has grown to the point where AI can have a meaningful impact on HPC, and vice versa.
High-performance computers are needed to power AI workloads, but it turns out AI itself can now be used to improve HPC data centers. For example, AI can monitor overall system health, including the health of storage, servers, and networking gear, ensuring correct configuration and predicting equipment failure. Companies can also use AI to reduce electricity consumption and improve efficiency by optimizing heating and cooling systems.
AI is also important for security in HPC systems, as it can be used to screen incoming and outgoing data for malware. It can also protect data through behavioral analytics and anomaly detection.
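To make this concrete, here's a minimal sketch of the kind of anomaly detection that could underpin such monitoring, using scikit-learn's IsolationForest on simulated node telemetry. The metric names and values here are illustrative assumptions, not a description of any particular HPC monitoring product.

```python
# Minimal sketch: flag anomalous HPC node telemetry with an Isolation Forest.
# The metrics and data are hypothetical; a real deployment would stream
# telemetry from hardware sensors and out-of-band management interfaces.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated healthy telemetry: [cpu_temp_c, fan_rpm, disk_io_latency_ms]
healthy = np.column_stack([
    rng.normal(55, 3, 1000),      # CPU temperature around 55 C
    rng.normal(4000, 200, 1000),  # fan speed around 4,000 RPM
    rng.normal(2.0, 0.3, 1000),   # disk I/O latency around 2 ms
])

# Fit an unsupervised model of "normal" node behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# Score fresh readings: a hot node with a degraded fan should stand out.
new_readings = np.array([
    [56.0, 3980.0, 2.1],   # nominal
    [78.0, 1200.0, 9.5],   # overheating node, failing fan
])
predictions = model.predict(new_readings)  # +1 = normal, -1 = anomaly
for reading, label in zip(new_readings, predictions):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{reading} -> {status}")
```

The same unsupervised approach extends naturally to the security use case: a model trained on normal traffic or access patterns can surface the behavioral anomalies described above.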
Natural language processing (NLP) is another exciting, HPC-related area of AI. AI is all about deriving insights and value from data, and NLP applies that to human language, helping to create more accessible and inclusive technologies.
2. Edge Computing Will Add Value and Speed.
Companies can deploy their HPC data center on premises, in the cloud, at the “edge” (however that edge may be defined), or with some combination of these. However, more and more organizations are choosing distributed (i.e., edge) deployments for the faster response times and bandwidth-saving benefits they bring.
Centralized data centers are often too slow for modern applications, which require computation and storage to take place as close to the application or device as possible to meet increasingly stringent, 5G-enabled latency SLAs.
Speed is, of course, a key component of high-performance computing: the faster an HPC system can process data, the more data it can handle and the more complex the problems it can solve. As edge computing becomes increasingly popular, high-performance computers will become even more powerful and valuable.
Read up on 7 trends in edge computing to watch >>
3. HPC Will Become Even More Accessible as a Service.
The emergence of the cloud ushered in an as-a-service revolution, and HPC is now joining the movement. Many vendors have switched from selling HPC equipment to providing HPC as a service (HPCaaS). This allows companies that don’t have the in-house knowledge, resources, or infrastructure to create their own HPC platform to take advantage of HPC via the cloud.
Now, many major cloud providers, such as Amazon Web Services (AWS), Google, and Alibaba, offer HPCaaS. The benefits of HPCaaS include ease of deployment, scalability, and predictability of costs.
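As a rough illustration of the ease-of-deployment point, the sketch below submits a compute job to a cloud batch service (AWS Batch, via the boto3 SDK). The region, queue, and job definition names are hypothetical placeholders you would create in your own account, and other providers offer comparable APIs.

```python
# Minimal sketch: submitting a compute job to a cloud HPC/batch service
# (AWS Batch via boto3). Queue and job-definition names are hypothetical.
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # assumed region

response = batch.submit_job(
    jobName="cfd-simulation-run-001",
    jobQueue="hpc-spot-queue",        # hypothetical queue
    jobDefinition="cfd-solver:3",     # hypothetical job definition
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},
            {"type": "MEMORY", "value": "131072"},  # MiB
        ],
        "command": ["./solver", "--mesh", "wing.msh"],
    },
)
print("Submitted job:", response["jobId"])
```

The appeal is that the provider handles cluster provisioning, scheduling, and teardown; you pay for the compute you consume, which is where the cost predictability comes from.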
4. GPU Computing Will Be on the Rise.
Originally designed for gaming, graphics processing units (GPUs) have evolved into one of the most important types of computing technology. A GPU is a specialized processing unit capable of processing many pieces of data simultaneously, making GPUs useful for machine learning, video editing, and gaming applications.
HPC applications that rely on GPUs include weather forecasting, data mining, and other workloads that demand this kind of fast, massively parallel data processing. NVIDIA is the largest maker of GPUs.
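To make "processing many pieces of data simultaneously" concrete, here's a minimal sketch using CuPy, an open-source, NumPy-compatible array library for NVIDIA GPUs (an assumption for illustration; it isn't mentioned in this article). The same elementwise expression runs as serial host code on the CPU and as parallel kernels across thousands of GPU cores.

```python
# Minimal sketch of data-parallel GPU computation with CuPy
# (assumes the cupy package and a CUDA-capable NVIDIA GPU).
import numpy as np
import cupy as cp

n = 10_000_000

# CPU: NumPy evaluates the expression in host code.
x_cpu = np.random.rand(n).astype(np.float32)
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

# GPU: the same expression launches kernels that process
# millions of elements in parallel.
x_gpu = cp.asarray(x_cpu)           # copy data host -> device
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0  # elementwise kernels on the GPU
cp.cuda.Stream.null.synchronize()   # wait for the GPU to finish

# Verify both paths computed the same result.
assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)
print("CPU and GPU results match for", n, "elements")
```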
Learn how Pure redefines and simplifies AI workflows with NVIDIA >>
Note: GPUs are sometimes confused with central processing units (CPUs) and tensor processing units (TPUs). A CPU is a general-purpose processor that manages all the logic, calculations, and I/O of a computer. A TPU is Google's custom-made, application-specific processor used to accelerate machine learning workloads. A GPU, on the other hand, is an additional processor designed to accelerate graphics rendering and other highly parallel, high-end tasks.
5. Modern Data Storage Will Be a Critical Investment.
The three key components of an HPC system are computing, networking, and storage. Because storage is one of the most important elements, it’s key to have a powerful, modern data storage solution if you’re using or plan to use HPC.
To accommodate the vast amounts of data involved in HPC, an HPC system's data storage should be able to:
- Make data from any node available at any time
- Handle any size of data request
- Support performance-oriented protocols
- Scale rapidly to keep up with increasingly demanding latency SLAs
You’ll want to choose a data storage solution that keeps your HPC system genuinely future-proof.
How FlashBlade Delivers the Scale-out Storage Needed to Support HPC Projects
Pure Storage® FlashBlade® is the industry’s most advanced solution to help eliminate storage bottlenecks by delivering a unified fast file and object (UFFO) platform for modern data and applications. Best of all, it’s simple to install, provision, and manage. See for yourself how you can power your high-performance computing projects with FlashBlade—take a test drive today.