
AI accelerator


An AI accelerator is (as of 2016) an emerging class of microprocessor (or coprocessor) designed to accelerate artificial neural networks, machine vision and other machine learning algorithms for robotics, the Internet of Things and other data-intensive or sensor-driven tasks. They are frequently manycore designs, mirroring the massively parallel nature of biological neural networks. They target practical narrow AI applications rather than artificial general intelligence research. Many vendor-specific terms exist for devices in this space.

They are distinct from GPUs (which are commonly used for the same role) in that they lack fixed-function units for graphics and generally focus on low-precision arithmetic.
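The appeal of low-precision arithmetic can be illustrated with a minimal sketch of 8-bit integer quantization, the scheme many such accelerators use for inference: floating-point weights and activations are mapped to int8, the multiply-accumulate runs entirely in integers (cheap in silicon), and only the final result is rescaled. The function names and scale values below are illustrative assumptions, not taken from any particular device.

```python
def quantize(values, scale):
    """Map floats to int8 by dividing by a per-tensor scale and rounding.
    Values are clamped to the signed 8-bit range [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(weights, activations, w_scale, a_scale):
    """Integer multiply-accumulate, then one rescale back to float.
    Real hardware would use a wide (e.g. 32-bit) integer accumulator."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a          # pure integer math: the cheap part
    return acc * w_scale * a_scale  # single float rescale at the end

# Illustrative values only
w = [0.5, -0.25, 0.75]
a = [1.0, 2.0, -1.0]
w_scale, a_scale = 0.01, 0.02

wq = quantize(w, w_scale)     # [50, -25, 75]
aq = quantize(a, a_scale)     # [50, 100, -50]
approx = int8_dot(wq, aq, w_scale, a_scale)
exact = sum(x * y for x, y in zip(w, a))
```

For these values the quantized result matches the float dot product exactly; in general quantization introduces a small rounding error, which neural networks tolerate well, and this tolerance is what lets accelerators trade precision for density and power.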

Computer systems have frequently complemented the CPU with special-purpose accelerators for intensive tasks, most notably graphics, but also sound, video and so on. Over time, various accelerators applicable to AI workloads have appeared.

In the early days, DSPs (such as the AT&T DSP32C) were used as neural network accelerators, e.g. to accelerate OCR software, and there were attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations (e.g. TetraSpert in the 1990s, a parallel fixed-point vector processor). ANNA was a neural network CMOS accelerator developed by Yann LeCun. Another attempt to build a neural network workstation was Synapse-1 (not to be confused with the current IBM SyNAPSE project).

