High-throughput PIM (Processing in-Memory) for DRAM using Bank-level Pipelined Architecture

Hyunsoo Lee, Hyundong Lee, Minseung Shin, Gyuri Shin, Sumin Jeon, Taigon Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Artificial intelligence (AI) is a technology that requires massive computation. Among the many solutions that accelerate AI for faster computation and lower power, processing-in-memory (PIM) is a promising candidate. In this paper, we propose a PIM architecture for DRAM via custom pipelining. Our architecture pipelines operation units, each consisting of eight banks, which leads to massive throughput. Our optimized pipelined architecture shows a 19.16% reduction in power-delay-product (PDP) per area and 24.1% better throughput compared to the latest DRAM PIM architecture, with only 0.7% area overhead.
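
The throughput claim follows from standard pipeline timing. Below is a minimal Python sketch, not the authors' implementation: it assumes, purely for illustration, that each of an operation unit's eight banks acts as one pipeline stage, and the stage latency and operation count are hypothetical placeholders. It contrasts serial execution, where one operation occupies the whole unit end to end, with pipelined execution; PDP here follows the usual definition of power multiplied by execution time.

# Illustrative sketch of bank-level pipelined throughput (assumption:
# each of the eight banks in an operation unit is one pipeline stage;
# latency and operation count are hypothetical placeholders).

NUM_STAGES = 8          # banks per operation unit (from the abstract)
STAGE_LATENCY_NS = 10   # hypothetical per-stage latency
NUM_OPS = 1000          # hypothetical number of in-memory operations

def serial_time_ns(num_ops, num_stages, stage_latency_ns):
    # Without pipelining, each operation occupies all stages before
    # the next one may enter the unit.
    return num_ops * num_stages * stage_latency_ns

def pipelined_time_ns(num_ops, num_stages, stage_latency_ns):
    # Classic pipeline timing: fill the stages once, then retire one
    # operation every stage latency.
    return (num_stages + num_ops - 1) * stage_latency_ns

t_serial = serial_time_ns(NUM_OPS, NUM_STAGES, STAGE_LATENCY_NS)
t_pipe = pipelined_time_ns(NUM_OPS, NUM_STAGES, STAGE_LATENCY_NS)
print(f"serial:    {t_serial} ns")
print(f"pipelined: {t_pipe} ns")
print(f"throughput gain: {t_serial / t_pipe:.2f}x")

With these placeholder numbers the pipelined unit approaches one completed operation per stage latency once the pipeline is full, which illustrates where a pipelined organization gets its throughput advantage.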

Original language: English
Title of host publication: Proceedings - International SoC Design Conference 2023, ISOCC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 101-102
Number of pages: 2
ISBN (Electronic): 9798350327038
DOIs
State: Published - 2023
Event: 20th International SoC Design Conference, ISOCC 2023 - Jeju, Korea, Republic of
Duration: 25 Oct 2023 - 28 Oct 2023

Publication series

Name: Proceedings - International SoC Design Conference 2023, ISOCC 2023

Conference

Conference: 20th International SoC Design Conference, ISOCC 2023
Country/Territory: Korea, Republic of
City: Jeju
Period: 25/10/23 - 28/10/23

Keywords

  • DRAM
  • Pipelining
  • processing in-memory (PIM)
