

FlexNAS: Flexible Hardware Aware Training-Less Neural Architecture Search for FPGAs
Description
Concurrent neural architecture search (NAS) and design space exploration (DSE) play a fundamental role in AI democratization. However, the length and cost of NAS and DSE processes, which require hundreds of GPU and CPU hours, limit the adoption of automated design flows.
We present FlexNAS, a hardware-aware, training-less NAS framework for FPGAs targeting the NASBench201 space. FlexNAS explores ~2 million implementations and achieves up to 89.18%, 63.79%, and 42.79% accuracy on the CIFAR-10, CIFAR-100, and ImageNet16-120 benchmarks while targeting the Alveo, Kintex7, and Virtex7 FPGA families, in less than 4.5 hours of search time with a 0.5 ms target inference latency.
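To illustrate the general idea of a hardware-aware, training-less search over the NASBench201 space, the sketch below ranks randomly sampled cells with a zero-cost proxy and discards candidates that violate a latency budget. The operation set, proxy score, and latency model are placeholders for illustration only, not FlexNAS internals.

```python
import random

# Illustrative hardware-aware, training-free NAS loop (not the FlexNAS implementation):
# candidates are scored with a zero-cost proxy (no training) and filtered by an
# estimated latency against a target budget.

OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]
NUM_EDGES = 6  # a NAS-Bench-201 cell assigns one operation to each of its 6 edges


def sample_architecture(rng):
    """Sample a random cell: one operation per edge."""
    return tuple(rng.choice(OPS) for _ in range(NUM_EDGES))


def zero_cost_proxy(arch):
    """Placeholder training-free score (higher assumed better).
    A real framework would compute a proxy such as synaptic saliency or
    Jacobian covariance from a single forward/backward pass."""
    weights = {"none": 0.0, "skip_connect": 0.5, "avg_pool_3x3": 0.7,
               "nor_conv_1x1": 1.0, "nor_conv_3x3": 1.5}
    return sum(weights[op] for op in arch)


def estimate_latency_ms(arch):
    """Placeholder latency model; a real flow would query an FPGA
    performance model or lookup table for the target device family."""
    costs = {"none": 0.0, "skip_connect": 0.01, "avg_pool_3x3": 0.03,
             "nor_conv_1x1": 0.06, "nor_conv_3x3": 0.12}
    return sum(costs[op] for op in arch)


def search(num_samples=10_000, latency_budget_ms=0.5, seed=0):
    """Keep only candidates meeting the latency target; return the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_samples):
        arch = sample_architecture(rng)
        if estimate_latency_ms(arch) > latency_budget_ms:
            continue  # violates the target inference latency
        score = zero_cost_proxy(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score


if __name__ == "__main__":
    arch, score = search()
    print("best feasible cell:", arch, "proxy score:", round(score, 2))
```

Because no candidate is ever trained, the cost per architecture is a proxy evaluation plus a latency estimate, which is what makes exploring millions of implementations in a few hours plausible.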
Event Type
Work-in-Progress Poster
Time
Tuesday, July 11th, 6:00pm - 7:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
RISC-V
Security