Real-time Deep Visual Tracking on a Tight Budget
Description
Existing visual multiple-object tracking (MOT) methods are computationally intensive and usually infeasible for embedded systems. We propose an algorithm-hardware co-design methodology that combines novel algorithmic augmentations with architecture mapping of state-of-the-art visual MOT methods onto low-cost heterogeneous multi-core processor platforms. We applied the proposed methodology to two deep visual MOT pipelines; our experiments on an embedded device (ODROID N2+) with widely used datasets demonstrate that the proposed methods outperform the baselines. In addition, we show that in some cases the proposed method on the ODROID N2+ achieves performance comparable to an implementation on a high-end workstation.
Time
Tuesday, July 11th, 6:00pm - 7:00pm PDT
Location
Level 2 Lobby