Hardware and Software Co-Design for Edge AI
Description
Many real-world edge applications, including self-driving cars, mobile health, robotics, and augmented reality (AR) / virtual reality (VR), are enabled by complex AI algorithms (e.g., deep neural networks). Currently, much of the computation for these applications happens in the cloud, but there are several good reasons to perform the processing on local edge platforms (e.g., smartphones). First, it increases the accessibility of AI applications across the world by reducing dependence on the communication fabric and cloud infrastructure. Second, many of these applications, such as robotics and AR/VR, require low latency; their critical real-time requirements may not be achievable without performing the computation directly on the edge platform. Third, for many applications (e.g., mobile health), the privacy and security of sensitive data (e.g., medical records) are major concerns. Since edge platforms are constrained in resources (power, computation, and memory), there is a great need for innovative solutions to realize the vision of practically useful Edge AI.

This tutorial will address inference and training tasks using different neural networks, including CNNs and GNNs, for diverse edge applications. The presenters will describe the most compelling research advances in the design of hardware accelerators using emerging technologies (e.g., ReRAM, monolithic 3D, heterogeneous cores, chiplets); software approaches (e.g., model pruning, weight quantization, adaptive computation); and various approaches to synergistic co-design that push the Pareto front of energy, performance, accuracy, and privacy.
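To illustrate one of the software approaches mentioned above, the sketch below shows symmetric per-tensor int8 weight quantization, a common way to shrink model size and memory traffic on edge platforms. This is a minimal, self-contained example written for this listing, not material from the tutorial itself; the function names and the choice of symmetric post-training quantization are assumptions for illustration only.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to int8 in [-127, 127].

    Illustrative sketch only; real deployments often use per-channel scales
    and calibration data.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights to check the accuracy impact."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)  # toy weight matrix
    q, scale = quantize_int8(w)
    err = np.mean(np.abs(w - dequantize(q, scale)))
    print(f"mean abs quantization error: {err:.6f}")  # small relative to typical weight magnitudes
```

The int8 representation cuts weight storage by 4x versus float32; the hardware/software co-design question is whether the accelerator's datapath and memory hierarchy can exploit that reduction without hurting accuracy beyond the application's tolerance.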
Event Type
Tutorial
Time
Monday, July 10th, 10:30am - 12:00pm PDT
Location
3003, 3rd Floor
Topics
AI