Paper Title
Fixed-Point Code Synthesis For Neural Networks
Paper Authors
Paper Abstract
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources. They frequently use fixed-point arithmetic because of its many advantages (speed, compatibility with devices having little memory). In this article, a new technique is introduced to tune the formats (precision) of an already trained neural network using fixed-point arithmetic, which can be implemented with integer operations only. The new, optimized neural network computes its output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we perform a preliminary analysis of the floating-point neural network to determine the worst cases, and then we generate a system of linear constraints among integer variables, which we solve by linear programming. The solution of this system gives the new fixed-point format of each neuron. The experimental results show the efficiency of our method, which ensures that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
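To make the abstract's claim concrete, the sketch below shows how a single neuron (dot product, bias, ReLU) can be evaluated with integer operations only once values are stored in a fixed-point format. This is a minimal illustration, not the code synthesized by the paper: it assumes one global format with FRAC_BITS fractional bits, whereas the paper's analysis assigns a tuned format to each neuron. All names (fix_t, fix_mul, neuron, FRAC_BITS) are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed fixed-point convention: a real value x is stored as the integer
   round(x * 2^FRAC_BITS). The paper tunes such formats per neuron; here a
   single global format is used for simplicity. */
#define FRAC_BITS 12

typedef int32_t fix_t;

static fix_t fix_from_double(double x) {
    return (fix_t)(x * (1 << FRAC_BITS) + (x >= 0 ? 0.5 : -0.5));
}

static double fix_to_double(fix_t x) {
    return (double)x / (1 << FRAC_BITS);
}

/* Fixed-point multiply: the 64-bit intermediate product carries 2*FRAC_BITS
   fractional bits, so it is shifted back by FRAC_BITS to stay in format. */
static fix_t fix_mul(fix_t a, fix_t b) {
    return (fix_t)(((int64_t)a * (int64_t)b) >> FRAC_BITS);
}

/* One neuron: dot product of weights and inputs plus a bias, followed by a
   ReLU, computed with integer operations only. The accumulator keeps
   2*FRAC_BITS fractional bits to limit intermediate rounding. */
static fix_t neuron(const fix_t *w, const fix_t *x, fix_t b, int n) {
    int64_t acc = (int64_t)b << FRAC_BITS;
    for (int i = 0; i < n; ++i)
        acc += (int64_t)w[i] * (int64_t)x[i];
    fix_t y = (fix_t)(acc >> FRAC_BITS);
    return y > 0 ? y : 0;   /* ReLU activation */
}

int main(void) {
    fix_t w[3] = { fix_from_double(0.5), fix_from_double(-1.25), fix_from_double(2.0) };
    fix_t x[3] = { fix_from_double(1.0), fix_from_double(0.75), fix_from_double(0.5) };
    fix_t b    = fix_from_double(0.1);
    printf("y = %f\n", fix_to_double(neuron(w, x, b, 3)));
    return 0;
}
```

In the paper's setting, the number of integer and fractional bits of each such accumulator and output would be chosen by solving the linear constraint system mentioned above, so that the final output stays within the user-fixed error threshold for all inputs in [xmin, xmax].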