Paper Title
EF-Train: Enable Efficient On-device CNN Training on FPGA Through Data Reshaping for Online Adaptation or Personalization
Paper Authors
Paper Abstract
Conventionally, DNN models are trained once in the cloud and deployed to edge devices such as cars, robots, or unmanned aerial vehicles (UAVs) for real-time inference. However, many scenarios require the models to adapt to new environments, domains, or users. To realize such domain adaptation or personalization, the deployed models need to be trained continuously on the device. In this work, we design EF-Train, an efficient DNN training accelerator with a unified channel-level-parallelism-based convolution kernel that achieves end-to-end training on resource-limited, low-power edge FPGAs. Implementing on-device training on such FPGAs is challenging because of the low efficiency caused by the differing memory access patterns of forward propagation, backward propagation, and weight update. We therefore develop a data reshaping approach with intra-tile continuous memory allocation and weight reuse. An analytical model is established to automatically schedule computation and memory resources for high energy efficiency on edge FPGAs. Experimental results show that our design achieves a throughput of 46.99 GFLOPS and an energy efficiency of 6.09 GFLOPS/W.
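To illustrate the intra-tile continuous memory allocation idea mentioned in the abstract, here is a minimal NumPy sketch (not the paper's actual implementation; tile sizes `Tc`, `Th`, `Tw` and the layout are assumptions for illustration). It rearranges a channel-major feature map so that every tile occupies one contiguous block of memory, the property that lets an accelerator fetch a whole tile in a single DRAM burst:

```python
import numpy as np

def tile_contiguous(fmap, Tc, Th, Tw):
    """Reshape a (C, H, W) feature map so each (Tc, Th, Tw) tile
    is stored contiguously (hypothetical sketch of data reshaping)."""
    C, H, W = fmap.shape
    assert C % Tc == 0 and H % Th == 0 and W % Tw == 0
    # Split each axis into (number_of_tiles, tile_size) ...
    t = fmap.reshape(C // Tc, Tc, H // Th, Th, W // Tw, Tw)
    # ... move the tile-index axes first and intra-tile axes last ...
    t = t.transpose(0, 2, 4, 1, 3, 5)
    # ... and copy so intra-tile elements become physically contiguous.
    return np.ascontiguousarray(t)

fmap = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
tiled = tile_contiguous(fmap, Tc=2, Th=2, Tw=2)
# tiled[ci, hi, wi] is now one contiguous (2, 2, 2) block in memory.
print(tiled.shape)  # (2, 2, 2, 2, 2, 2)
```

The same reshaped buffer can then serve forward propagation, backward propagation, and weight update with a single layout, which is the kind of unification the abstract attributes to the data reshaping approach.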