TY - GEN
T1 - A Power-Efficient Reconfigurable Hybrid CNN-SNN Accelerator for High Performance AI Applications
AU - Yun, Heuijee
AU - Park, Daejin
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Deep learning-based object detection requires high computation, making real-time processing difficult due to excessive power consumption and irregular workloads in conventional accelerators. Event-driven hybrid model training has been explored as a method to reduce power consumption. However, its implementation on traditional hardware remains challenging due to the lack of efficient sparse computation optimization. To address this issue, this paper proposes a power-efficient CNN-SNN hybrid accelerator that leverages event-driven spiking computation and adaptive reconfiguration. Unlike conventional CNN accelerators that rely on continuous activation functions and fixed processing pipelines, the proposed architecture selectively converts energy-intensive layers into SNNs. This hybrid approach minimizes power-hungry multiply-accumulate operations by leveraging sparse, event-driven spike processing. The accelerator uses a reconfigurable dual-lane processor that switches between CNN and SNN operations for efficient workload distribution. To efficiently manage the dynamic switching between CNN and SNN operations, the accelerator employs adaptive dynamic memory optimization to minimize data movement overhead, while a multistage pipeline optimizes temporal accumulation to maximize the benefits of event-driven SNN processing. The proposed hybrid CNN-SNN accelerator reduces power consumption by 32% while maintaining 97.5% accuracy, improving FPS per watt by 47-67% over conventional CNN architectures. Its dynamic workload adaptation increases inference speed by up to 16%, making it highly efficient for real-time edge AI.
AB - Deep learning-based object detection requires high computation, making real-time processing difficult due to excessive power consumption and irregular workloads in conventional accelerators. Event-driven hybrid model training has been explored as a method to reduce power consumption. However, its implementation on traditional hardware remains challenging due to the lack of efficient sparse computation optimization. To address this issue, this paper proposes a power-efficient CNN-SNN hybrid accelerator that leverages event-driven spiking computation and adaptive reconfiguration. Unlike conventional CNN accelerators that rely on continuous activation functions and fixed processing pipelines, the proposed architecture selectively converts energy-intensive layers into SNNs. This hybrid approach minimizes power-hungry multiply-accumulate operations by leveraging sparse, event-driven spike processing. The accelerator uses a reconfigurable dual-lane processor that switches between CNN and SNN operations for efficient workload distribution. To efficiently manage the dynamic switching between CNN and SNN operations, the accelerator employs adaptive dynamic memory optimization to minimize data movement overhead, while a multistage pipeline optimizes temporal accumulation to maximize the benefits of event-driven SNN processing. The proposed hybrid CNN-SNN accelerator reduces power consumption by 32% while maintaining 97.5% accuracy, improving FPS per watt by 47-67% over conventional CNN architectures. Its dynamic workload adaptation increases inference speed by up to 16%, making it highly efficient for real-time edge AI.
KW - Event-driven processing
KW - Hybrid CNN/SNN accelerator
KW - Low-power deep learning
KW - Reconfigurable computing
UR - https://www.scopus.com/pages/publications/105008490034
U2 - 10.1109/COOLCHIPS65488.2025.11018586
DO - 10.1109/COOLCHIPS65488.2025.11018586
M3 - Conference contribution
AN - SCOPUS:105008490034
T3 - IEEE Symposium on Low-Power and High-Speed Chips and Systems, COOL CHIPS 2025 - Proceedings
BT - IEEE Symposium on Low-Power and High-Speed Chips and Systems, COOL CHIPS 2025 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 28th IEEE Symposium on Low-Power and High-Speed Chips and Systems, COOL CHIPS 2025
Y2 - 16 April 2025 through 18 April 2025
ER -