If the workload-specific datacenter dominates in the near term, it could be RISC-V’s time to shine. While most often associated with embedded devices, there is a push to use RISC-V as the base for AI and targeted workloads, giving the ISA a springboard into much larger systems.
So far, there is not much momentum for RISC-V in the datacenter, but it could serve as the underlying engine for various accelerators. For instance, AI chip startup Tenstorrent based its inference chip on RISC-V, and a much more ambitious project at the Barcelona Supercomputing Center will use it (via commercial RISC-V company SiFive) to build an indigenous datacenter—from processors to accelerators.
At a time when options for building or buying cores are plentiful, what’s next for RISC-V’s reach into the datacenter? We talked to SiFive’s James Prior, who told us it’s highly unlikely that we’ll see an end-to-end RISC-V datacenter in the next five years, but there is certainly a big opportunity for custom accelerators, where it can outpace Arm, particularly in terms of software, tooling, and support.
“We are taking a software-first approach to providing IP cores. We’re going to develop software and tools alongside the cores in a way that makes sense for programmers instead of saying, here’s the core, go figure it out.” He adds that another difference is that SiFive can do this without competing with its customers since it isn’t in the silicon business. “We have some boards to let partners and companies evaluate before they build a big design. The Nvidia acquisition of Arm has taken the RISC-V trajectory and accelerated decisions—people are moving from asking if they should have a RISC-V strategy to thinking about what their plan is right now. Then we can come in as the leader of commercial RISC-V IP with experience in co-developing specific architectures for specific needs.”
While most of SiFive’s commercial RISC-V business is embedded, over the last six years the company has secured over 200 design wins across 80 companies, including seven of the top ten tech companies, with over a billion cores shipped. “But as we’re growing and attacking the application core space, we’re moving into AI with new product lines that focus on more general purpose processing and specific functions.”
In fact, SiFive is looking at AI as something of a Trojan horse to enter the datacenter in large numbers. It has developed custom IP that customers can use to build their own accelerators, an approach that fits with SiFive’s view of the datacenter shifting toward purpose-built rather than general purpose designs.
For those developing next-gen AI processors, Prior says the key is pairing software and tooling with an application-specific processor whose vector capabilities can handle modern data types. Aligning this with a custom AI accelerator for pre/post-processing and AI math is more flexible too, given the quickly changing AI model space. “There’s a massive amount of churn in AI models, way faster than silicon creation processes, which means while you need a dedicated piece of silicon for acceleration to be efficient, you also need general purpose programmability within a set type of model.”
While AI is a nice entry point to datacenter design wins, Prior says there are other opportunities SiFive is eyeing, from the edge to top of rack networking, all of which could benefit from a mature ecosystem with a common set of programming tools.
“If you look at how x86 scaled from simple microarchitectures to complex out-of-order processors, to multicore, it took a long time. Arm too was marketed as the cores you never heard of but were surrounded by in the kitchen. Now it’s ubiquitous because they’re in our phones,” Prior says when asked why RISC-V doesn’t seem to have quite the same legs as Arm.
“That killer app is coming and it’s going to be in AI and accelerators or even purpose-built systems. If you look at how the datacenter is changing, people are saying they don’t need a bland bunch of cores, they need all the compute balanced in the socket and that’s where RISC-V can deliver value—even if the main OS on the processor isn’t running on the RISC-V core, it is what’s doing the accelerated work of value.”