
INTEL BRACES FOR DPU HIT, AWAITS JEVONS PARADOX BOUNCE

Rsdaa 31/07/2021

We said this a long time ago, and we are going to say it again now. One big reason that Intel paid $16.7 billion to buy FPGA maker Altera was that it was hedging its bets on the future of compute in the datacenter: it could see how the hyperscalers and cloud builders might offload a lot of network, storage, and security functions from its Xeon CPU cores to devices like FPGAs. Given this, it was important to have FPGAs to catch that business, even if at reduced revenues and profits.

It has been six years since that deal went down, and our analysis of why Intel might buy Altera, which we did ahead of the deal, still stands on its merits. It has perhaps taken more time than expected for the Data Processing Unit, or DPU, to evolve from the SmartNIC, which is a network interface card with some brains on it to handle specific tasks offloaded from pricey CPU cores. Just to be different, Intel is now calling these advanced SmartNICs Infrastructure Processing Units, or IPUs, and they are distinct from CPUs, GPUs, and other XPUs like machine learning accelerators or other custom ASICs.

In the end, IPUs might in fact have a mix of CPUs, XPUs, and other custom ASICs, based on the presentation given today by Navin Shenoy, general manager of the Data Platforms Group at the chip maker, at the online Six Five Summit hosted by Moor Insights and Futurum.

Let’s be frank for a second. Way back in the dawn of time, when transistors were a lot more scarce than they are today – but maybe as relatively scarce as they will soon become – the central processing unit was just that: one of a very large number of devices that managed a data processing flow across interconnected hardware. It’s called a main frame for a reason, and that is because there were lots of other frames performing dedicated tasks that were offloaded from that very expensive CPU inside that main frame. It is only with the economic benefits of decades of Moore’s Law advances from the 1960s through the 2000s that we could finally turn what we still call a CPU into something that more accurately can be thought of as a baby server-on-chip. Just about everything but main memory and auxiliary memory is now within the server CPU socket. But as Moore’s Law advances are slowing and CPU cores are expensive and can’t be wasted, the time has come to turn back the clock and offload as much of the crufty overhead of virtualized, containerized, securitized workloads from those CPU cores as possible. The DPU is not new, but represents a return to scarcity, a new twist on old ideas. (Which are the best kind.)

Shenoy did not tip Intel’s hand too much in talking about its DPU strategy – we are not calling them IPUs until there is a consensus across all glorified SmartNIC makers to do so – but he did provide some insight into Intel’s thinking and how the DPU that takes on offloaded work should, if Jevons Paradox holds, result in an increase in CPU demand in the long run even if it causes a hit in the short run.

Back in 1865 in England, William Stanley Jevons observed that as machinery was created that more efficiently burned coal for various industrial uses, coal demand did not drop, but rather increased – and importantly, increased non-linearly. This is counter-intuitive to many, but what it really showed is that there was a demand for cheap, high-density energy that could do what muscle power or water power could not. We have some contrarian thoughts about the applicability of Jevons Paradox to the datacenter, which we brought up six years ago when Intel first started applying this paradox to the long term capacity needs of compute in the datacenter. There are only so many people on Earth, there are only so many hours in a day, and therefore only so much data processing that can ever need to be done. We aren’t there yet for computing overall, but certain classes of online transaction processing have been growing at only GDP rates for two decades now.
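
To make the intuition concrete, here is a toy model of that elasticity – our own sketch, not anything Jevons or Intel published, with every number invented. Demand follows a constant-elasticity curve, demand = k × price^(−e): when e is greater than 1, halving the unit cost of compute more than doubles consumption and grows total spend, and when e is less than 1, you get the saturating, OLTP-style market described above.

```python
# Toy constant-elasticity demand model for compute (all numbers invented).

def demand(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    """Units of compute consumed at a given unit price."""
    return k * price ** (-elasticity)

for e in (0.5, 1.0, 1.5):                 # inelastic, unit, elastic demand
    before = demand(1.00, elasticity=e)   # baseline unit price
    after = demand(0.50, elasticity=e)    # offload halves the unit cost
    print(f"e={e}: demand x{after / before:.2f}, "
          f"spend x{0.50 * after / (1.00 * before):.2f}")

# e=0.5: demand x1.41, spend x0.71  -> efficiency shrinks the market
# e=1.0: demand x2.00, spend x1.00  -> a wash
# e=1.5: demand x2.83, spend x1.41  -> the Jevons Paradox bounce
```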

But there definitely is a kind of elasticity of demand in compute thus far, and Shenoy actually gave out some data showing that as the cost came down, demand went up, with some exceptions where efficiencies caused a flattening in demand. The example Shenoy gave was the introduction of server virtualization on VMware platforms in the early 2000s, which, combined with the dot-com bust, definitely flatlined server shipments for two years.

There are so many factors that caused the flatlining of server sales at that time that it is hard to argue that virtualization on the X86 platform – which was nascent at the time, with VMware just getting started with server products – was the main cause, much less the only one. Virtualization was definitely a big factor in 2008 and 2009, when the Great Recession hit and server CPUs had features added to them to accelerate virtualization hypervisors and radically cut their overhead. But that just became a built-in assumption from that point forward, until containers came along in earnest about five years ago. The advent of containers and DPUs, we think, is going to push a lot of workloads back to bare metal servers and off VMs.
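
To see why virtualization flattens server demand, the consolidation arithmetic takes only a few lines – a back-of-the-envelope sketch, with all figures below our own illustrative assumptions rather than Shenoy’s data:

```python
# Back-of-the-envelope server consolidation math (illustrative numbers).
physical_boxes = 1_000        # one workload per box, pre-virtualization
avg_utilization = 0.12        # typical CPU utilization of those boxes
target_utilization = 0.60     # what a hypervisor lets you safely run at
hypervisor_tax = 0.10         # share of each host lost to the VMM itself

work = physical_boxes * avg_utilization            # total useful work
usable_per_host = target_utilization * (1.0 - hypervisor_tax)
hosts_needed = work / usable_per_host
print(f"{hosts_needed:.0f} virtualized hosts replace {physical_boxes} boxes")
# -> 222 hosts do the same work: a ~4.5:1 cut in server demand
```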

While counting server processors is interesting, trying to reckon server capacity is more interesting, and we have done this, as we showed in a story last week discussing Q1 2021 server sales.

It is well known that Intel is the FPGA supplier for Microsoft’s SmartNICs, and Shenoy reminded everyone of that, and also hinted that Intel would be delivering SmartNICs that have Xeon D processors on them, much as Amazon Web Services has multi-core Arm CPUs on its “Nitro” DPUs, which virtualize all network and storage for AWS server nodes as well as run almost all of the KVM hypervisor functions, essentially converting the host CPU into a bare metal server processor for shared applications.
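
The economics of that offload are easy to sketch. Figures cited around the industry put the infrastructure overhead on a pre-offload cloud host at anywhere from 15 percent to 30 percent of its cores; taking the high end as a working assumption – the numbers below are ours and purely illustrative – the arithmetic of handing those cycles back looks like this:

```python
# What a DPU offload hands back to rentable capacity (toy numbers).
cores_per_host = 64
infra_overhead = 0.30         # assumed share of cores burned on network,
                              # storage, and hypervisor duty pre-offload

sellable_before = cores_per_host * (1.0 - infra_overhead)
sellable_after = cores_per_host   # the DPU absorbs the infrastructure work

gain = sellable_after / sellable_before - 1.0
print(f"rentable cores per host: {sellable_before:.0f} -> {sellable_after}")
print(f"capacity handed back: {gain:.0%}")
# -> 45 -> 64 rentable cores, a ~43% gain per host before counting
#    the cost of the DPU itself
```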

Shenoy did not provide any insight into what components and capacities these future Intel DPUs would have, but walked through the usual scenario we have heard from Mellanox/Nvidia, Fungible, Pensando, Xilinx, and others in recent years.

“We call this silicon solution a new unit of computing, the infrastructure processing unit or the IPU,” explained Shenoy. “It’s an evolution of our SmartNIC product line that, when coupled with a Xeon microprocessor, will deliver highly intelligent infrastructure acceleration. And it enables new levels of system security, control, and isolation to be delivered in a more predictable manner. FPGAs can be attached for workload customization, and over time these solutions become more and more tightly coupled. So blending this capability of the IPU with the ongoing trend in microservices is a unique opportunity for a function-based infrastructure to achieve more optimal hardware and software, to deal with the overhead tax, and to more effectively orchestrate the software landscape on a complex datacenter infrastructure.”

The newsy bit is that Chinese hyperscalers Baidu and JD Cloud are working on DPUs with Intel, apparently, and the fact that Intel is working with VMware, much as Nvidia is, is no surprise at all. What is surprising is that it didn’t happen before the Nvidia deal, to be honest, and we strongly suspect it is something that Pat Gelsinger, the former CEO at VMware and the current CEO at Intel, fixed shortly after he took his new job. Nvidia is working, through Project Monterey, to get the ESXi hypervisor ported to its Arm-based BlueField-2 line of DPUs, which will also have GPU acceleration for AI and hashing functions, among other things. We would not be surprised, as we have said, to see Intel buy VMware outright once Dell lets go of it later this month.

Whatever Intel is going to do with DPUs, we will find out more in October at its Intel On event, formerly known as Intel Developer Forum, which Gelsinger revived when he came back to the chip maker.

