HPC – what is it?

High Performance Computing – what is it, and how will it impact the design and operation of data centers?

High Performance Computing, or HPC, is a new buzzword in the lexicon of corporate IT. But what exactly is it, who needs it, and how will it impact decisions about how and where to locate your core IT infrastructure? As a leading designer, builder and operator of sustainable and secure data centers, DigiPlex partners with a wide range of customers and is helping many to integrate HPC into their IT resources. From these discussions I have distilled some thoughts on what HPC means for the delivery of IT in today’s digital businesses.

Let’s start with a definition. HPC is really about density. It uses the latest processor, cooling and power technologies to pack vast amounts of compute capacity into a small area. This density allows HPC systems to process massive amounts of data exceptionally fast. According to analyst firm Hyperion Research, the HPC server market grew more than 15% to reach a record $13.7 billion in 2018, and is forecast to reach $44 billion by 2023.
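To put that density in context, here is a rough back-of-the-envelope comparison. The rack power figures below are illustrative assumptions of my own, not DigiPlex specifications: a conventional enterprise rack is assumed to draw a few kilowatts, while a densely packed HPC rack can draw many times that.

```python
# Illustrative rack-density arithmetic (all figures are assumptions).
traditional_rack_kw = 5   # assumed draw of a typical enterprise rack
hpc_rack_kw = 40          # assumed draw of a densely packed HPC rack

hall_capacity_kw = 1000   # a hypothetical 1 MW data hall

racks_traditional = hall_capacity_kw // traditional_rack_kw
racks_hpc = hall_capacity_kw // hpc_rack_kw

print(racks_traditional)  # 200 conventional racks consume the hall's power
print(racks_hpc)          # just 25 HPC racks draw the same 1 MW
```

The point of the sketch is not the exact numbers but the ratio: the same electrical capacity is concentrated into a fraction of the floor space, which is exactly why HPC demands specialist power distribution and cooling.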

AI leads the way
Several trends, all emanating from the widespread digitisation of the economy, are driving this demand. Virtual Reality, IoT and autonomous vehicles all demand the high-data, high-speed capabilities of HPC, but it is AI that is the real impetus. All of these applications, and many others, increasingly rely on the ability of computers to learn and make decisions based on goal-oriented algorithms rather than simple instruction sets. AI and Machine Learning require the capability to analyse huge amounts of data extremely quickly and then act on that analysis in near-real time.

Currently, only the largest and wealthiest organisations, whether companies or nations, are able to invest in and build HPC infrastructures to support AI at scale. However, as costs fall, more and more businesses will consider integrating AI capabilities into their operations. Indeed, the advantages AI could bestow on an organisation may mean that it is quickly seen as essential to remaining competitive. Even as the market grows and prices fall, it is unlikely that most businesses will be able to make the investment in technology, skills and infrastructure needed to support dedicated HPC facilities for AI.

Specialist support
Many will turn to specialist data center operators to provide dedicated HPC capacity, and there are a number of advantages to this approach. The density and power of HPC require specialist energy distribution and cooling. Data center specialists have the high-capacity connections to energy distribution networks and the expertise to ensure consistent, resilient and, if necessary, redundant power to individual racks of HPC servers. They understand the demands these super-dense server racks put on power capacity.

They also understand cooling. HPC requires specialist cooling to prevent densely packed GPUs and CPUs from overheating. Water-cooled racks are already widely used, and emerging technologies such as micro-convection cooling and embedded cooling will soon be seen in commercial applications. The experience and skills needed to deploy these technologies successfully will most often be found among those used to the engineering challenges of designing, building and operating large-scale data centers.

What about the cloud?
Some may choose to turn to the cloud for HPC capacity, and hyperscale cloud providers are beginning to offer these services. Microsoft has announced a partnership with supercomputer company Cray to offer its HPC capabilities within the Azure cloud. Google and Amazon also offer HPC in the cloud. However, hyperscalers and enterprises alike see the benefits of owned and dedicated HPC resources. Whether driven by commercial sensitivity, data security, or availability and latency, many will want to retain ownership of HPC capacity. As a result, we are seeing interest from both hyperscalers and enterprises in housing HPC within our data centers.

I believe that we’ll quickly see real benefits from a hybrid approach to HPC. The engineering challenges associated with HPC make it most efficient to create dedicated data halls for HPC workloads. However, organisations will also want fast and direct connections to other enterprise resources, cloud infrastructure and storage. I envision modular data center designs with data halls, each with different cooling and energy profiles, to meet these different requirements. Proximity, direct connectivity and single-view management will allow customers to integrate HPC fully into their IT estate.

Listening and learning
It is clear to me that HPC will radically change the way we design data centers. Indeed, it already is. Through conversations with customers we are learning about their needs and advising on the best approaches. The combination of deep expertise, skills and agile approaches helps us to help them put HPC at the heart of their organisations.

But what sort of organisations need HPC, how will they use it, and where do they need their HPC capacity to be located? These are important and interesting questions which I’ll address in my next blog.

Article by Tim Bawtree, VP of International Sales. Read our other blogs on topics such as Edge, AI and Speed to Market.