Paper Title

Deep Learning Algorithms for Rotating Machinery Intelligent Diagnosis: An Open Source Benchmark Study

Authors

Zhibin Zhao, Tianfu Li, Jingyao Wu, Chuang Sun, Shibin Wang, Ruqiang Yan, Xuefeng Chen

Abstract

With the development of deep learning (DL) techniques, rotating machinery intelligent diagnosis has made tremendous, well-verified progress, and the classification accuracies of many DL-based intelligent diagnosis algorithms are approaching 100%. However, different datasets, configurations, and hyper-parameters are often used to verify the performance of different types of models, and little open-source code is made public for evaluation and comparison. As a result, unfair comparisons and ineffective improvements may exist in rotating machinery intelligent diagnosis, which limits the advancement of this field. To address these issues, we perform an extensive evaluation of four kinds of models, including the multi-layer perceptron (MLP), auto-encoder (AE), convolutional neural network (CNN), and recurrent neural network (RNN), on various datasets to provide a benchmark study within the same framework. We first gather most of the publicly available datasets and provide a complete benchmark study of DL-based intelligent diagnosis algorithms under two data split strategies, five input formats, three normalization methods, and four augmentation methods. Second, we integrate all evaluation code into a code library and release it to the public for the better development of this field. Third, we use specifically designed cases to point out existing issues, including class imbalance, generalization ability, interpretability, few-shot learning, and model selection. Through this work, we release a unified code framework for comparing and testing models fairly and quickly, emphasize the importance of open-source code, provide a baseline accuracy (a lower bound) to avoid meaningless improvements, and discuss potential future directions in this field. The code library is available at https://github.com/ZhaoZhibin/DL-based-Intelligent-Diagnosis-Benchmark.
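The abstract lists the axes the benchmark sweeps over (data split strategies, input formats, normalization methods, and augmentation methods). As a rough illustration only, the sketch below shows what one arm of such a preprocessing pipeline could look like under assumed choices: time-domain slicing, per-sample z-score normalization, and additive Gaussian noise augmentation. The function names and parameters are hypothetical and are not taken from the released code library; see the repository linked above for the actual configurations and model implementations.

```python
# A minimal, hypothetical preprocessing sketch in the spirit of the pipelines
# the benchmark sweeps over (input format, normalization, augmentation).
# All function names and parameter choices here are illustrative assumptions,
# not the actual API of the released code library.
import numpy as np

def slice_signal(signal, length=1024, stride=1024):
    """Cut a long 1-D vibration signal into fixed-length samples (a time-domain input format)."""
    n = (len(signal) - length) // stride + 1
    return np.stack([signal[i * stride : i * stride + length] for i in range(n)])

def zscore(x):
    """One possible normalization method: zero mean, unit variance per sample."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-8)

def add_gaussian_noise(x, snr_db=20.0):
    """One possible augmentation method: additive Gaussian noise at a target SNR."""
    signal_power = np.mean(x ** 2, axis=-1, keepdims=True)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return x + np.random.randn(*x.shape) * np.sqrt(noise_power)

# Synthetic stand-in for a raw vibration recording.
raw = np.sin(np.linspace(0.0, 200.0 * np.pi, 100_000)) + 0.1 * np.random.randn(100_000)
samples = add_gaussian_noise(zscore(slice_signal(raw)))
print(samples.shape)  # e.g. (97, 1024)
```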
