Abstract
This work presents a general-purpose compute-in-memory (GPCIM) processor that combines DNN operations with a vector CPU. Leveraging specialized reconfigurability, dataflow, and an instruction set, the 65nm test chip demonstrates 28.5 TOPS/W DNN macro efficiency and a best-in-class peak CPU efficiency of 802 GOPS/W. A data-locality-aware dataflow eliminates inter-core data transfers, achieving a 37% to 55% end-to-end latency improvement on AI-related applications.
Original language | English (US) |
---|---|
Title of host publication | 2023 IEEE Symposium on VLSI Technology and Circuits, VLSI Technology and Circuits 2023 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9784863488069 |
DOIs | |
State | Published - 2023 |
Event | 2023 IEEE Symposium on VLSI Technology and Circuits, VLSI Technology and Circuits 2023 - Kyoto, Japan. Duration: Jun 11, 2023 → Jun 16, 2023 |
Publication series
Name | Digest of Technical Papers - Symposium on VLSI Technology |
---|---|
Volume | 2023-June |
ISSN (Print) | 0743-1562 |
Conference
Conference | 2023 IEEE Symposium on VLSI Technology and Circuits, VLSI Technology and Circuits 2023 |
---|---|
Country/Territory | Japan |
City | Kyoto |
Period | 6/11/23 → 6/16/23 |
Funding
Fig. 8: A detailed case study on the SLAM application from GPCIM.
Acknowledgements: This work is supported in part by NSF grant CCF-2008906.
ASJC Scopus subject areas
- Electrical and Electronic Engineering