Training and inference using approximate floating-point arithmetic for energy efficient spiking neural network processors

Myeongjin Kwak, Jungwon Lee, Hyoju Seo, Mingyu Sung, Yongtae Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

This paper presents a systematic analysis of spiking neural network (SNN) performance at reduced computation precision using approximate adders. We propose an IEEE 754-based approximate floating-point adder and apply it to the leaky integrate-and-fire (LIF) neuron-based SNN operations for both training and inference. Experimental results on a two-layer SNN for the MNIST handwritten digit recognition task show that a 4-bit exact mantissa adder combined with a 19-bit lower-part OR adder (LOA) approximation, instead of a 23-bit full-precision mantissa adder, can be exploited while maintaining good classification accuracy. When the LOA is adopted as the mantissa adder, it achieves power and energy savings of up to 74.1% and 96.5%, respectively.
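As a rough, hypothetical sketch (not code from the paper), the lower-part OR adder idea behind the reported 4-bit exact / 19-bit approximate split can be illustrated in Python: the low bits of the two mantissa operands are merged with a bitwise OR, eliminating the carry chain that dominates adder power, while only the top bits are summed exactly. The `loa_add` function name, the operand values, and the carry-generation scheme are illustrative assumptions, not details confirmed by the abstract.

```python
def loa_add(a: int, b: int, width: int = 23, approx_bits: int = 19) -> int:
    """Approximate the sum of two `width`-bit mantissas with a
    lower-part OR adder (LOA): the low `approx_bits` bits are formed
    by a bitwise OR (no carry propagation), and only the upper bits
    use an exact adder. The carry into the exact part is generated by
    ANDing the most significant bits of the two lower parts -- the
    common LOA carry scheme (an assumption; the paper may use a
    different variant).
    """
    mask = (1 << approx_bits) - 1
    lower = (a | b) & mask                       # approximate lower part
    carry = (a >> (approx_bits - 1)) & (b >> (approx_bits - 1)) & 1
    upper = (a >> approx_bits) + (b >> approx_bits) + carry  # exact part
    return ((upper << approx_bits) | lower) & ((1 << (width + 1)) - 1)

# Example: compare the LOA result against the exact 24-bit-wide sum.
a, b = 0x5A3C21, 0x13F0F7                        # arbitrary 23-bit operands
print(f"LOA:   {loa_add(a, b):#08x}")
print(f"Exact: {(a + b) & ((1 << 24) - 1):#08x}")
```

Because the OR only ever loses carries, the approximate sum is close to the exact one in the high-order bits, which is why shrinking the exact part to 4 bits can still preserve classification accuracy.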

Original language: English
Title of host publication: 2021 International Conference on Electronics, Information, and Communication, ICEIC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728191614
DOIs
State: Published - 31 Jan 2021
Event: 2021 International Conference on Electronics, Information, and Communication, ICEIC 2021 - Jeju, Korea, Republic of
Duration: 31 Jan 2021 – 3 Feb 2021

Publication series

Name: 2021 International Conference on Electronics, Information, and Communication, ICEIC 2021

Conference

Conference: 2021 International Conference on Electronics, Information, and Communication, ICEIC 2021
Country/Territory: Korea, Republic of
City: Jeju
Period: 31/01/21 – 03/02/21

Keywords

  • Approximate adder
  • Floating-point arithmetic
  • Leaky integrate-and-fire (LIF) neuron
  • Spiking neural network (SNN)

