DBPS: Dynamic Block Size and Precision Scaling for Efficient DNN Training Supported by RISC-V ISA Extensions
Description: In this work, we utilize a block floating point (BFP) format that reduces the size of tensors and the power consumption of arithmetic units when training deep neural networks (DNNs). Unfortunately, prior work on BFP-based DNN training selects the block size and precision that maintain training accuracy empirically. To make BFP-based training more practical, we propose dynamic block size and precision scaling (DBPS), which significantly reduces energy consumption. We also present a hardware accelerator, called the DBPS core, which flexibly configures its precision and block size through custom instructions added as RISC-V ISA extensions.
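To illustrate the BFP format the abstract refers to, the sketch below quantizes a tensor so that each block of values shares a single exponent while per-element mantissas are rounded to a few bits. This is a generic BFP quantizer, not the authors' DBPS algorithm; the block size and mantissa width are free parameters here, whereas DBPS scales them dynamically during training.

```python
import numpy as np

def bfp_quantize(x, block_size=16, mantissa_bits=4):
    """Quantize a 1-D array to block floating point (BFP):
    each block of `block_size` values shares one exponent,
    and each element keeps a `mantissa_bits`-bit integer mantissa."""
    x = np.asarray(x, dtype=np.float64)
    n = len(x)
    pad = (-n) % block_size                      # pad so length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    out = np.empty_like(blocks)
    for i, blk in enumerate(blocks):
        max_abs = np.max(np.abs(blk))
        if max_abs == 0.0:
            out[i] = 0.0
            continue
        # Shared exponent: chosen from the largest magnitude in the block.
        exp = np.floor(np.log2(max_abs))
        scale = 2.0 ** (exp - (mantissa_bits - 1))
        # Round each element to an integer mantissa at the shared scale,
        # clipping to the representable mantissa range.
        limit = 2 ** mantissa_bits - 1
        m = np.clip(np.round(blk / scale), -limit, limit)
        out[i] = m * scale
    return out.reshape(-1)[:n]
```

Values that are exact sums of powers of two near the block maximum survive quantization unchanged, e.g. `bfp_quantize([0.5, 0.25, 0.125, 0.0], block_size=4)` returns the input exactly, while small values far below the block maximum lose precision, which is why block size and mantissa width trade accuracy against energy.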
Time: Tuesday, July 11th, 4:55pm - 5:10pm PDT
Location: 3003, 3rd Floor
Session: AI/ML Architecture Design