Trends and Challenges in Computing-in-Memory for Neural Network Model: A Review From Device Design to Application-Side Optimization

Research output: Contribution to journal › Review article › peer-reviewed

Abstract

Neural network models are the principal means of solving problems in the current artificial intelligence (AI) field and have been widely applied across many domains. Executing them efficiently requires devices capable of massively parallel multiply-accumulate (MAC) and matrix-vector multiplication (MVM) computation. However, existing computing devices based on the von Neumann architecture suffer from a bottleneck: because the memory and computation modules are separated, moving data between them wastes a large amount of time and energy without performing any useful computation. Computing-in-memory (CIM), which performs MAC computation inside the memory itself, is considered a promising direction for solving this problem. Nevertheless, large-scale adoption of CIM still faces challenges, owing to the non-idealities of current CIM devices and the lack of a common, reliable programming interface on the application side. In this paper, we comprehensively analyze the problems currently facing CIM from multiple perspectives, including CIM memory arrays, peripheral circuits, and application-side design, and discuss possible future development opportunities for CIM.
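The abstract identifies MAC and MVM as the core workload that CIM accelerates. As an illustrative sketch (not taken from the article), the short Python snippet below shows how an MVM decomposes into per-column MAC operations, the pattern a CIM crossbar evaluates in parallel inside the memory array; the function name mvm_as_macs and the example dimensions are hypothetical.

import numpy as np

# Illustrative sketch: express y = W^T x as explicit per-column MAC operations.
# In a CIM crossbar, the weights are assumed to be stored in the memory array,
# the input vector drives the rows, and each column accumulates its MACs in place.

def mvm_as_macs(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute the MVM column by column, making the per-column MACs explicit."""
    rows, cols = weights.shape
    y = np.zeros(cols)
    for j in range(cols):          # one output per column (bit line)
        acc = 0.0
        for i in range(rows):      # MACs along a column; done in parallel in CIM
            acc += weights[i, j] * x[i]
        y[j] = acc
    return y

# Hypothetical example: a 4x3 weight array and a 4-element input vector.
W = np.arange(12, dtype=float).reshape(4, 3)
x = np.array([1.0, 0.5, -1.0, 2.0])
assert np.allclose(mvm_as_macs(W, x), x @ W)

The nested loops make the data-movement cost of a von Neumann machine visible: every MAC fetches a weight from memory, whereas a CIM array performs the column-wise accumulation where the weights already reside.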

Original language: English
Pages (from-to): 186679-186702
Number of pages: 24
Journal: IEEE Access
Volume: 12
State: Published - 2024

Keywords

  • computing-in-memory
  • matrix-vector multiplication
  • memory array
  • multiply-accumulate
  • Neural network model
  • peripheral circuits
  • von Neumann architecture
