Paper Title


Injective Domain Knowledge in Neural Networks for Transprecision Computing

Authors

Andrea Borghesi, Federico Baldo, Michele Lombardi, Michela Milano

Abstract


Machine Learning (ML) models are very effective in many learning tasks, due to the capability to extract meaningful information from large data sets. Nevertheless, there are learning problems that cannot be easily solved relying on pure data, e.g. scarce data or very complex functions to be approximated. Fortunately, in many contexts domain knowledge is explicitly available and can be used to train better ML models. This paper studies the improvements that can be obtained by integrating prior knowledge when dealing with a non-trivial learning task, namely precision tuning of transprecision computing applications. The domain information is injected in the ML models in different ways: I) additional features, II) ad-hoc graph-based network topology, III) regularization schemes. The results clearly show that ML models exploiting problem-specific information outperform the purely data-driven ones, with an average accuracy improvement around 38%.
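The abstract names three injection strategies only at a high level. As an illustration of the third one (regularization schemes), the sketch below augments a standard training loss with a penalty for violating a known property of the target function. Everything here is an assumption for illustration: the constraint (monotonicity of the target in one input feature) and all function names are hypothetical, not the paper's actual formulation for precision tuning.

```python
import numpy as np

def data_loss(y_pred, y_true):
    # Standard mean-squared-error term (the purely data-driven objective).
    return np.mean((y_pred - y_true) ** 2)

def knowledge_penalty(y_pred, x, feature_idx=0):
    # Hypothetical domain constraint: the target is known to be
    # non-decreasing in feature `feature_idx`. Sort predictions by that
    # feature and penalize every decreasing step.
    order = np.argsort(x[:, feature_idx])
    diffs = np.diff(y_pred[order])
    return np.mean(np.maximum(-diffs, 0.0) ** 2)

def total_loss(y_pred, y_true, x, lam=0.1):
    # Combined objective: data fit plus a weighted knowledge term,
    # so the model is pulled toward solutions consistent with the prior.
    return data_loss(y_pred, y_true) + lam * knowledge_penalty(y_pred, x)
```

In this scheme a model whose predictions respect the prior pays no extra cost, while violations increase the loss in proportion to their size; the weight `lam` trades off data fit against consistency with the domain knowledge.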
