Lift: Exploiting Hybrid Stacked Memory for Energy-Efficient Processing of Graph Convolutional Networks
Description: Graph Convolutional Networks (GCNs) are powerful learning approaches for graph-structured data. The emerging 3D-stacked computation-in-memory (CIM) architecture offers a promising way to process GCNs efficiently. However, previous works do not fully exploit the CIM architecture, leading to significant energy consumption. This paper presents Lift, an energy-efficient GCN accelerator built on a 3D CIM architecture through software-hardware co-design. At the software level, Lift adopts a push-based dataflow and a hybrid mapping to reduce data movement. At the hardware level, Lift adds dedicated logic to high-bandwidth memory (HBM) to support the hybrid execution of GCNs. Evaluation shows that the proposed scheme outperforms the baselines.
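The push-based dataflow mentioned in the abstract can be sketched as follows: each source vertex pushes its transformed feature vector to its neighbors' accumulators, rather than each destination pulling from its neighbors. This is a minimal illustration of the general pattern only; the toy graph, function name, and ordering of the combination and aggregation phases are assumptions for illustration, not Lift's actual design.

```python
import numpy as np

def gcn_layer_push(edges, features, weight):
    """One GCN layer using a push-based dataflow.

    Sketch of the general computation pattern only, not Lift's
    hardware mapping. Normalization and activation are omitted.
    """
    # Combination phase: transform features first (X @ W), so the
    # pushes move the (typically smaller) transformed vectors.
    transformed = features @ weight
    out = np.zeros_like(transformed)
    # Aggregation phase: each source pushes its vector to its
    # destinations' accumulators.
    for src, dst in edges:
        out[dst] += transformed[src]
    return out

# Toy directed graph on 3 vertices (illustrative only).
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
X = np.eye(3)                     # one-hot vertex features
W = np.arange(9.0).reshape(3, 3)  # layer weights
H = gcn_layer_push(edges, X, W)
```

The push formulation turns aggregation into scatter-style updates along outgoing edges, which is the irregular memory-access pattern that near-memory logic in a CIM stack is well placed to absorb.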
Time: Thursday, July 13th, 10:40am - 10:55am PDT
Location: 3006, 3rd Floor
Embedded Memory, Storage and Networking