Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator

Kyungmi Noh, Hyunjeong Kwak, Jeonghoon Son, Seungkun Kim, Minseong Um, Minil Kang, Doyoon Kim, Wonjae Ji, Junyong Lee, Hwi Jeong Jo, Jiyong Woo, Hyung Min Lee, Seyoung Kim

Research output: Contribution to journal › Article › peer-review


Abstract

We present the fabrication of 4 K-scale electrochemical random-access memory (ECRAM) cross-point arrays for analog neural network training accelerators, together with the electrical characterization of an 8 × 8 ECRAM array with 100% yield, which shows excellent switching characteristics and low cycle-to-cycle and device-to-device variations. Leveraging the advantages of the ECRAM array, we demonstrate its efficacy in neural network training using the Tiki-Taka version 2 (TTv2) algorithm, which is tailored for non-ideal analog memory devices. Through an experimental study using ECRAM devices, we investigate the influence of retention characteristics on the training performance of TTv2, revealing that the relative location of the retention convergence point critically determines the available weight range and, consequently, the training accuracy. We propose a retention-aware zero-shifting technique designed to optimize neural network training performance, particularly in scenarios involving cross-point devices with limited retention times. This technique enables robust and efficient analog neural network training despite the practical constraints posed by analog cross-point devices.
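The core idea in the abstract can be illustrated with a toy model. The sketch below (an assumption-laden illustration, not the paper's implementation) models a device whose stored weight decays exponentially toward a retention convergence point `w_c`; the hypothetical `retained` function, the time constant `tau`, and all numeric values are made up for illustration. Zero-shifting here means reading weights relative to a reference placed at the convergence point, so retention drift pulls the effective weight toward zero instead of toward a nonzero bias.

```python
import math

def retained(w0, w_c, t, tau):
    """Weight after retention time t, decaying exponentially toward w_c.
    (Illustrative retention model; the paper characterizes real ECRAM devices.)"""
    return w_c + (w0 - w_c) * math.exp(-t / tau)

w_c, tau = 0.3, 10.0   # assumed convergence point and retention time constant
w0 = 0.8               # assumed programmed weight (normalized conductance)

# Without zero-shifting: after long retention the effective weight drifts
# toward w_c != 0, introducing a systematic bias into the stored network.
w_plain = retained(w0, w_c, t=50.0, tau=tau)

# Retention-aware zero-shifting: set the reference at the convergence point,
# so the same physical drift decays the *effective* weight toward 0.
ref = w_c
w_shifted = retained(w0, w_c, t=50.0, tau=tau) - ref

print(round(w_plain, 4), round(w_shifted, 4))  # → 0.3034 0.0034
```

In this toy model the shifted readout decays toward 0, which is benign for training (it behaves like weight decay), whereas the unshifted readout converges to a fixed offset that corrupts the available weight range.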

Original language: English
Article number: eadl3350
Journal: Science Advances
Volume: 10
Issue number: 24
DOIs
State: Published - Jun 2024

