With artificial intelligence already sweeping headlines in nearly every industry, it's hard to imagine the technology becoming even more sophisticated – but it has. Announced in early April 2019, Qualcomm's Cloud AI 100 is a server-oriented iteration of the company's already proven AI technology, and Qualcomm claims it delivers 10 times the performance per watt of its closest competitor.
Keith Kressin, senior vice president of product management at Qualcomm Technologies, spoke enthusiastically about the launch: "Today, Qualcomm Snapdragon mobile platforms bring leading AI acceleration to over a billion client devices. Our all new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today's data centers. Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high-speed and low-latency 5G connectivity."
Beyond the claimed tenfold performance-per-watt advantage over its competition, the Qualcomm Cloud AI 100 provides a number of benefits, including:
- Greater efficiency when processing AI inference workloads thanks to an all-new chip design
- Improved performance and power utilization on account of a 7nm process node
- Compatibility with some of the most popular software stacks, including PyTorch, Glow, TensorFlow, Keras, and ONNX
Beyond that, however, details on the Qualcomm Cloud AI 100 are hard to come by. Some experts predict the technology could be vital to the mainstream rollout of 5G, and that it could even catapult Qualcomm to the top of the 5G market, but for now these are optimistic guesses.
But we do know that the Qualcomm Cloud AI 100 comprises an entire family of AI inference accelerators. Aimed specifically at data center owners and operators, the devices come in various form factors and TDPs to ensure compatibility with most modern facilities.
Kressin went into further detail about how the Qualcomm Cloud AI 100 differs from existing hardware: "FPGA or GPUs [can often do] AI inference processing more efficiently, [because] a GPU is a much more parallel machine, [while] the CPU is [a] more serial machine, [and] the parallel machines are better for AI processing. But still, a GPU is more so designed for graphics, and you can get a significant improvement if you design a chip from the ground up for AI acceleration. There’s about an order of magnitude improvement for a CPU to FPGA or GPU. There’s another order of magnitude improvement opportunity for [a] custom-built AI accelerator."
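Kressin's serial-versus-parallel point can be illustrated with a small sketch: the same matrix-vector product (the core operation in neural-network inference) computed element by element, the way a serial CPU core would, and as a single vectorized call that parallel hardware can fan out across many execution units. The sizes and numbers below are illustrative only, not Cloud AI 100 benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))  # a layer's weight matrix
x = rng.standard_normal(256)         # one input activation vector

def serial_matvec(W, x):
    """Compute W @ x one multiply-add at a time, CPU-style."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            out[i] += W[i, j] * x[j]
    return out

serial = serial_matvec(W, x)
parallel = W @ x  # one vectorized op; parallel hardware executes
                  # the 65,536 multiply-adds largely concurrently

# Both paths produce the same result; only the execution model differs.
assert np.allclose(serial, parallel)
```

The arithmetic is identical either way; the gains Kressin describes come from how much of it the hardware can do at once, which is why a chip designed from the ground up for these operations can claim a further order-of-magnitude improvement.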
The timing of the Qualcomm Cloud AI 100 is also worth noting: it comes shortly after one of Qualcomm's biggest competitors, Huawei, released its Arm-based Kunpeng 920 processor. Several other chips are in play as well, including Intel's Nervana Neural Network Processor (NNP-I), Google's Edge TPU, and Amazon's AWS Inferentia. The market for next-gen AI hardware is clearly beginning to heat up.
For more information about Qualcomm, including details on the Cloud AI 100, visit the company's official website at www.qualcomm.com.
Qualcomm's Cloud AI 100 Set to Bolster the AI of Cloud Computing