The AI chip wars escalate: Intel unveils a new Movidius VPU, and the NNP finally ships for commercial use.

Editor’s note: This article is from the WeChat public account “Core” (ID: aichip001), by author Wei Shizhen.

Core, November 13th: At 2 a.m. Beijing time today, Intel held its 2019 Artificial Intelligence Summit in San Francisco and launched the next-generation Movidius VPU, code-named Keem Bay, for edge media, computer vision, and inference applications. It is scheduled to ship in the first half of next year.

In addition, Intel demonstrated the Nervana Neural Network Processor (NNP) on site and officially announced its commercial availability. This marks the official start of commercial delivery for Intel’s NNP R&D project, three years after it was announced.

Naveen Rao, Intel vice president and general manager of the Artificial Intelligence Products Group, and Jonathan Ballon, Intel vice president in the IoT Group and general manager of its visual markets and channels department, both spoke at the conference, introducing Intel’s latest AI products and related technical progress.

Naveen Rao said in his presentation that with these product updates and releases, Intel’s portfolio of AI solutions will be further strengthened and optimized, and that the portfolio is expected to generate more than $3.5 billion in revenue in 2019.


01 Movidius Myriad: 10x the inference performance, 6x the energy efficiency of competing products

Let’s start with the latest heavyweight product: the Intel Movidius Myriad vision processing unit (VPU), code-named Keem Bay, which is optimized for inference workloads at the edge.


On performance, compared with the previous-generation VPU, Keem Bay’s inference performance has increased by more than 10 times, and its energy efficiency can reach 6 times that of competing products.

Intel also said that Keem Bay’s power consumption is about 30 W, and that it is four times faster than NVIDIA’s TX2 and 1.25 times faster than Huawei’s HiSilicon Ascend 310.
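If both quoted ratios hold at once, they also fix the relative position of the two competitors against each other. A back-of-envelope sanity check of that arithmetic (the normalization to TX2 = 1.0 is ours, not Intel’s):

```python
# Back-of-envelope check of the quoted speedup claims.
# Normalize inference throughput so that NVIDIA TX2 = 1.0.
keem_bay_vs_tx2 = 4.0    # "four times faster than NVIDIA's TX2" (quoted)
keem_bay_vs_310 = 1.25   # "1.25 times faster than HiSilicon Ascend 310" (quoted)

tx2 = 1.0
keem_bay = keem_bay_vs_tx2 * tx2          # 4.0x a TX2
ascend_310 = keem_bay / keem_bay_vs_310   # 3.2x a TX2

print(f"TX2={tx2:.1f}  Ascend 310={ascend_310:.1f}  Keem Bay={keem_bay:.1f}")
```

Taken together, the two claims would imply the Ascend 310 is about 3.2 times a TX2 on this workload, which is a consequence of the quoted ratios rather than a figure Intel stated directly.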


Jonathan Ballon noted on stage that the chip has a new on-chip memory architecture, and that Keem Bay delivers four times the TOPS (tera-operations per second) of inference of NVIDIA’s Xavier, which lets customers get 50% additional performance when the chip is fully utilized.

“Compared with competing products, Keem Bay outperforms GPUs, not only with reductions in power, size, and cost, but also by complementing our full combination of products, tools, and services,” Jonathan Ballon added.

Keem Bay is scheduled to be available in the first half of 2020.


02 Nervana Series: In production and officially shipping to customers

This year, Intel introduced the Nervana neural network processors NNP-T, for AI training, and NNP-I, for AI inference, both designed for large data centers.

The Nervana neural network processors are also the first dedicated ASICs Intel has developed for complex deep learning, aimed mainly at cloud and data center customers.

In fact, Intel announced the project to develop the Nervana neural network processors as early as 2016. But Intel did not lift the veil on the series at last year’s AI conference; it was only unveiled this year, and it is now finally being delivered.


Naveen Rao said that, as part of system-level AI solutions, the Nervana neural network training processor is now in production and has completed customer deliveries.

Among them, NNP-T is built on TSMC’s 16nm process, with 27 billion transistors and a total die area of 680 square millimeters.

It is highly programmable and supports all major deep learning frameworks, including the TensorFlow and PyTorch training frameworks as well as C++ deep learning software libraries.


It also strikes a balance between compute, communication, and memory, and it can scale nearly linearly and energy-efficiently from small clusters up to the largest pod supercomputers.
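“Nearly linear” scaling has a standard quantitative reading: the speedup from N accelerators divided by the ideal speedup N stays close to 1 as N grows. A minimal sketch of that metric, using hypothetical throughput numbers chosen purely for illustration (they are not NNP-T measurements):

```python
# Scaling efficiency = measured speedup / ideal (linear) speedup.
# The throughput figures below are hypothetical, for illustration only.
def scaling_efficiency(throughput_by_nodes):
    """Map node count -> fraction of linear scaling achieved."""
    base_nodes, base_tput = throughput_by_nodes[0]
    result = {}
    for nodes, tput in throughput_by_nodes:
        speedup = tput / base_tput
        ideal = nodes / base_nodes
        result[nodes] = speedup / ideal
    return result

# hypothetical images/sec for 1, 8, and 32 accelerators
eff = scaling_efficiency([(1, 100.0), (8, 760.0), (32, 2880.0)])
for n, e in eff.items():
    print(f"{n:3d} accelerators: {e:.0%} of linear")
```

A system would be called near-linear if these fractions stay high (here 95% at 8 nodes and 90% at 32) rather than collapsing as the cluster grows.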


NNP-I, on the other hand, is based on Intel’s 10nm Ice Lake processor architecture and likewise supports all major deep learning frameworks. Its efficiency on ResNet-50 is 4.8 TOPS/W, with a power envelope of 10 W to 50 W.
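Those two quoted figures jointly imply a throughput range, if one assumes the ResNet-50 efficiency figure held constant across the whole power envelope (an assumption made only for this back-of-envelope estimate; real efficiency typically varies with operating power):

```python
# Implied throughput = efficiency (TOPS/W) x power (W),
# assuming the quoted ResNet-50 efficiency holds across the envelope.
EFFICIENCY_TOPS_PER_W = 4.8   # quoted for ResNet-50
POWER_RANGE_W = (10, 50)      # quoted power envelope

low = EFFICIENCY_TOPS_PER_W * POWER_RANGE_W[0]
high = EFFICIENCY_TOPS_PER_W * POWER_RANGE_W[1]
print(f"Implied throughput: {low:.0f} to {high:.0f} TOPS")
```

Under that assumption the envelope would correspond to roughly 48 to 240 TOPS, though Intel stated only the efficiency and power figures, not this derived range.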

It is also energy-efficient and low-cost, combining general-purpose compute with multiple dedicated engines to run AI inference workloads with high efficiency, making it suitable for high-intensity, multimodal inference at real-world scale.


In Naveen Rao’s opinion,