A Survey of Neural Network Hardware Accelerators in Machine Learning


Fatimah Jasem and Manar AlSaraf, Kuwait University, Kuwait


Machine Learning, a cornerstone of Artificial Intelligence, has shaped technology as we know it today and has the power to greatly simplify our lives. Improvements in speech recognition and language understanding help people interact more naturally with technology. The popularity of machine learning opens up opportunities for optimizing the design of computing platforms using well-defined hardware accelerators. In the coming years, cameras will increasingly be utilised as sensors for a wide range of applications. For ease of use and to satisfy privacy restrictions, the required image processing should be confined to a local embedded computing platform, should achieve high accuracy, and should consume as little energy as possible. Dedicated acceleration of Convolutional Neural Networks can achieve these targets while retaining the flexibility to perform multiple vision tasks. However, due to the exponential growth in technology constraints (especially in terms of energy), which could lead to heterogeneous multi-cores, and the increasing number of hardware defects, defect-tolerant accelerators for heterogeneous multi-cores may become a major micro-architecture research issue. State-of-the-art accelerators still face performance issues such as memory limitations, bandwidth, and speed. This survey summarizes recent work on accelerators, including their advantages and disadvantages, to make it easier for developers with an interest in neural networks to further improve what has already been established.


Artificial Intelligence, ASIC, Convolutional Neural Network, CPU, Deep Learning, Deep Neural Network, FPGA, GPU, Hardware Accelerators, Intel, Machine Learning, Nvidia, Performance.