A comprehensive exploration of approximate DNN models with a novel floating-point simulation framework

Myeongjin Kwak, Jeonggeun Kim, Yongtae Kim

Research output: Contribution to journal › Article › peer-review

Abstract

This paper introduces TorchAxf, a framework for fast simulation of diverse approximate deep neural network (DNN) models, including spiking neural networks (SNNs). The proposed framework utilizes various approximate adders and multipliers, supports industry-standard reduced-precision floating-point formats, such as bfloat16, and accommodates user-customized precision representations. Leveraging GPU acceleration on the PyTorch framework, TorchAxf accelerates approximate DNN training and inference. In addition, it allows seamless integration of arbitrary approximate arithmetic algorithms with C/C++ behavioral models to emulate approximate DNN hardware accelerators. We utilize the proposed TorchAxf framework to assess twelve popular DNN models under approximate multiply-and-accumulate (MAC) operations. Through comprehensive experiments, we determine the degree of floating-point arithmetic approximation each DNN model tolerates without significant accuracy loss and identify the optimal reduced-precision format for each model. Additionally, we demonstrate that approximation-aware re-training can rectify errors and enhance pre-trained DNN models under reduced-precision formats. Furthermore, TorchAxf, operating on GPU, reduces simulation time for complex DNN models using approximate arithmetic by up to 131.38× compared to an optimized baseline CPU implementation. Finally, we compare the proposed framework with state-of-the-art frameworks to highlight its advantages.
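The abstract mentions emulating reduced-precision formats such as bfloat16 in software. As a rough illustration of the underlying idea (not TorchAxf's actual implementation), bfloat16 can be emulated in pure Python by rounding a value's float32 bit pattern to its top 16 bits, i.e., keeping the sign, the full 8-bit exponent, and a 7-bit mantissa:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 by keeping the top 16 bits of the float32
    bit pattern, with round-to-nearest-even on the dropped bits.
    Illustrative sketch only, not TorchAxf's implementation."""
    # Reinterpret the float32 value as a 32-bit unsigned integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round to nearest: add half of the dropped LSB range, plus one
    # more when the tie would otherwise round toward an odd mantissa.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + rounding_bias) & 0xFFFF0000
    # Reinterpret the truncated pattern back as a float.
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

Because bfloat16 retains float32's exponent range and only shortens the mantissa, this kind of bit-level emulation lets a simulator inject reduced-precision rounding error into MAC operations while still computing in native floats; for example, `to_bfloat16(0.1)` returns a value close to, but not exactly, 0.1.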

Original language: English
Article number: 102423
Journal: Performance Evaluation
Volume: 165
State: Published - Aug 2024

Keywords

  • Approximate computing
  • Deep neural network (DNN)
  • Floating-point
  • GPU
  • PyTorch
  • Spiking neural network (SNN)
