AI chip comparison: GPU / FPGA / ASIC / brain-like chips

In 2017, "artificial intelligence" dominated media headlines. Driven by media attention and capital, AI arrived at lightning speed. At the policy level, three major AI-related events occurred in 2017: first, a national artificial intelligence development plan was released; second, artificial intelligence was written into the report of the 19th National Congress for the first time; and third, many cities, including Shanghai and Chongqing, began drawing up their own artificial intelligence plans.

The data tells a similar story: in 2013, only slightly more than 100 organizations were studying deep learning and artificial intelligence; by 2015 the figure had soared to 3,409, a more-than-thirtyfold increase in two years. Even Tesla, which started as an electric-car maker, announced that it would design its own AI chip, and Musk recruited Jim Keller, architect of AMD's Zen microarchitecture, as vice president of Autopilot hardware.

Amid this AI buzz, we also need to think seriously: does artificial intelligence need specialized chips, and what advantages do the existing technology architectures offer? Recently, Beijing Jianguang Asset Management Co., Ltd. hosted a salon themed "Architecture, Core, Ecology," at which Lin Yu, deputy director of the Institute of Semiconductors, analyzed AI chips in terms of their definition, classification, ecosystem, and investment outlook.

Viewed architecturally, artificial intelligence has three essential elements: data, algorithms, and computing power. Computing power is supplied by the chip: computing power is the foundation, the algorithm is the core, and data is the guarantee. Start with the definition. Broadly speaking, any chip that can run artificial intelligence algorithms could be called an AI chip. On closer analysis, however, Lin Yu argues: "Only chips whose architecture is specially designed to accelerate artificial intelligence algorithms can be called AI chips, and very few companies on the market have truly built such acceleration into their chip architectures."

Three dimensions for classifying AI chips

AI chips can be classified by function, application scenario, and technical architecture:

From a functional perspective, artificial intelligence involves both training and inference. In the training phase, a complex neural network model is trained on big data; today this is done mainly on clusters of Nvidia GPUs, and Google's TPU 2.0 also supports training and deep-network acceleration. In the inference phase, the trained model is used to draw conclusions from new data. In general, training places high demands on a chip's raw computing performance, while inference demands efficient repeated computation and low latency.
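The training/inference split described above can be sketched with a toy model (an illustration, not from the article): training makes many compute-heavy passes over data to fit parameters, while inference is a single cheap forward computation with the learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))            # training data
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                           # noiseless targets, for simplicity

# Training: repeated, compute-heavy gradient-descent updates.
w = np.zeros(3)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

# Inference: one matrix-vector product on new data.
x_new = np.array([1.0, 1.0, 1.0])
prediction = x_new @ w                   # close to 2 - 1 + 0.5 = 1.5
```

The loop is why training runs on GPU/TPU clusters, while the single product at the end is what a low-latency inference chip must serve.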

From the perspective of application scenarios, AI chips are deployed in the cloud and on devices. The training phase of deep learning requires massive data and massive computation that no single processor can complete alone, so training can only be carried out in the cloud. On the device side, smart terminals are enormous in number and their requirements vary widely. VR devices, for example, demand such tight real-time performance that inference cannot be sent to the cloud; the device needs independent inference and computing capability. The demand for dedicated chips on the device side is therefore high.

From the perspective of technical architecture, there are four categories: first, general-purpose chips such as GPUs; second, semi-custom chips represented by FPGAs, such as the DPU of Shenjian Technology; third, fully custom ASICs such as Google's TPU; and fourth, brain-like (neuromorphic) chips.

What are the advantages of GPU / FPGA / ASIC / brain-like chips?

To handle big data, current solutions generally pair high-performance processors with an MCU to assist in computation. But as Moore's Law slows, the number of devices that can be integrated on a processor is approaching its limit while data volumes keep growing, so further gains must come from changes in architecture. This is the background against which AI chips emerged.

Currently, there are four types of artificial intelligence chips: GPUs, FPGAs, ASICs, and brain-like chips.

GPU: a single-instruction, multiple-data (SIMD) processor built from a large number of compute units and long pipelines. As its name suggests, the graphics processor excels at accelerating image-domain workloads. A GPU cannot work alone, however; it must be controlled by a CPU. The CPU can act alone, handling complex logic and mixed data types, and when the same operation must be applied to a large volume of uniform data, it calls on the GPU for parallel computation.
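The division of labor above can be illustrated on the CPU alone (an analogy, with no real GPU involved): general-purpose code processes one element at a time with full branching freedom, while a data-parallel engine applies one instruction to a whole block of uniform data. NumPy's vectorized kernels stand in for the GPU here.

```python
import numpy as np

def cpu_style(values):
    # Scalar loop: one element at a time, like general-purpose CPU code.
    out = []
    for v in values:
        out.append(v * v + 1.0)
    return out

def gpu_style(values):
    # Single instruction applied to many data elements at once (SIMD).
    arr = np.asarray(values, dtype=float)
    return arr * arr + 1.0

data = [0.0, 1.0, 2.0, 3.0]
assert cpu_style(data) == list(gpu_style(data))  # same results, different style
```

The scalar path can branch per element; the vectorized path cannot, but it processes the whole batch in one sweep, which is exactly the trade the GPU makes.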

FPGA: in contrast to the GPU, the FPGA suits a multiple-instruction, single-data style of processing and is therefore often used in the inference stage, for example in the cloud. An FPGA implements software algorithms directly in hardware, so complex algorithms are difficult to implement and the price is relatively high. Compared with a GPU, the FPGA omits the memory and control logic behind instruction storage and fetch, so it is faster; and because it does not fetch instructions, its power consumption is lower. Its disadvantage is limited computing capacity. One solution that combines the strengths of the CPU and the FPGA is heterogeneous computing.
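A hedged sketch of the heterogeneous idea: the host CPU keeps all control flow and offloads only large, regular workloads to an accelerator. Below, the "accelerator" is just a vectorized NumPy routine standing in for an FPGA or GPU kernel, and the offload threshold is an arbitrary illustrative value, not from the article.

```python
import numpy as np

OFFLOAD_THRESHOLD = 64  # hypothetical cutoff for offloading, chosen for illustration

def accelerator_dot(a, b):
    # Stand-in for a kernel running on the accelerator (FPGA/GPU).
    return float(np.dot(a, b))

def host_dot(a, b):
    # CPU-side scalar path for small or irregular workloads.
    return float(sum(x * y for x, y in zip(a, b)))

def heterogeneous_dot(a, b):
    # Control logic stays on the "CPU"; only bulk math is offloaded.
    if len(a) >= OFFLOAD_THRESHOLD:
        return accelerator_dot(np.asarray(a, dtype=float), np.asarray(b, dtype=float))
    return host_dot(a, b)
```

The point of the pattern is that each side does what it is good at: the host handles decisions and small cases, the accelerator handles large uniform arithmetic.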

ASIC: a fully custom chip designed for a specific requirement. Although it cannot be repurposed after fabrication, it has advantages in power consumption, reliability, and size, making it especially suited to mobile terminals that demand high performance at low power. Google's TPU, Cambricon's NPU, and Horizon Robotics' BPU are all ASICs. Google's TPU is reported to be 30-80 times faster than CPU and GPU solutions. Compared with CPUs and GPUs, the TPU strips out control logic, which reduces chip area and power consumption.
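Part of the TPU's efficiency comes from performing matrix multiplies on 8-bit integers rather than 32-bit floats. The round trip can be sketched as quantize, integer multiply with 32-bit accumulation, then rescale; the per-tensor scales below are assumed values for illustration, not Google's actual scheme.

```python
import numpy as np

def quantize(x, scale):
    # Map floats onto signed 8-bit integers at the given scale.
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

a = np.array([[0.5, -1.0], [2.0, 0.25]], dtype=np.float32)
b = np.array([[1.0, 0.0], [0.5, -0.5]], dtype=np.float32)
sa, sb = 0.02, 0.02                      # per-tensor scales (assumed)

qa, qb = quantize(a, sa), quantize(b, sb)
# Accumulate in int32, as integer matrix units do, then rescale to float.
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (sa * sb)
exact = a @ b
# The quantized result tracks the float result to within a small error.
```

Narrower arithmetic means smaller multipliers and less data movement per operation, which is where the area and power savings come from.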
