Privacy-Preserving DNN Training with Prefetched Meta-Keys on Heterogeneous Neural Network Accelerators
Description: Embedded software may offload private data to servers to accelerate DNN computation, which risks privacy leakage. We propose a DNN computation framework that combines a trusted execution environment (TEE) with a neural network accelerator (NNA) to address this problem. We design an NNA-friendly encryption method that enables the NNA to compute correctly on encrypted linear inputs. To mitigate the cost of TEE-NNA interaction, we design a pipeline-based prefetch mechanism that reduces the type-conversion and TEE-interaction overhead by 87.1%. Experiments show that our approach is compatible with a wide range of NPUs and TPUs and improves performance by 7-20x over a TEE-only scheme.
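The abstract does not detail the encryption method. One common way to let an untrusted accelerator compute correctly on protected linear inputs is additive masking: the TEE blinds the input with a random vector, the accelerator runs the ordinary linear layer on the blinded input, and the TEE subtracts a precomputable correction term (this correction is the kind of quantity a prefetch mechanism could stage ahead of time). A minimal sketch under that assumption; all names and dimensions are hypothetical, and this is not necessarily the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
W = rng.standard_normal((4, 8))   # linear-layer weights, visible to the accelerator
x = rng.standard_normal(8)        # private input, held inside the TEE

# TEE side: additively mask the input with a random blind r.
r = rng.standard_normal(8)
x_masked = x + r                  # only the masked input leaves the TEE

# Accelerator (NNA) side: an ordinary linear computation on the masked input.
y_masked = W @ x_masked           # equals W @ x + W @ r

# TEE side: remove the mask using the correction term W @ r,
# which can be precomputed/prefetched before the result arrives.
y = y_masked - W @ r

assert np.allclose(y, W @ x)      # recovered plaintext output of the linear layer
```

Because the blinding is additive and the layer is linear, the accelerator never sees `x` yet its output is exactly recoverable inside the TEE.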
Time: Thursday, July 13th, 4:55pm - 5:10pm PDT
Location: 3006, 3rd Floor