Paper Title
Method for classifying a noisy Raman spectrum based on a wavelet transform and a deep neural network
Authors
Abstract
This paper proposes a new framework based on a wavelet transform and a deep neural network for identifying noisy Raman spectra, since, in practice, it is relatively difficult to classify spectra under baseline-noise and additive-white-Gaussian-noise environments. The framework consists of two main engines. A wavelet transform is proposed as the framework front-end, transforming a 1-D noisy Raman spectrum into two-dimensional data. This two-dimensional data is then fed to the framework back-end, which is a classifier. The optimum classifier is chosen by implementing several traditional machine learning (ML) and deep learning (DL) algorithms and investigating their classification accuracy and robustness. The four ML classifiers we chose were Naive Bayes (NB), Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbors (KNN), while a deep convolutional neural network (DCNN) was chosen as the DL classifier. Noise-free, Gaussian-noise, baseline-noise, and mixed-noise Raman spectra were used to train and validate the ML and DCNN models. The optimum back-end classifier was obtained by testing the ML and DCNN models with several noisy Raman spectra (10-30 dB noise power). Based on the simulations, the accuracy of the DCNN classifier is 9% higher than that of the NB classifier, 3.5% higher than the RF classifier, 1% higher than the KNN classifier, and 0.5% higher than the SVM classifier. In terms of robustness to mixed-noise scenarios, the framework with the DCNN back-end outperformed the other ML back-ends: the DCNN back-end achieved 90% accuracy at 3 dB SNR, while the NB, SVM, RF, and KNN back-ends required 27 dB, 22 dB, 27 dB, and 23 dB SNR, respectively. In addition, on the low-noise test data set, the F-measure score of the DCNN back-end exceeded 99.1%, while the F-measure scores of the other ML engines were below 98.7%.
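The front-end step described above (a 1-D spectrum transformed into a 2-D wavelet representation suitable for a CNN) can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the wavelet family (a Ricker/Mexican-hat wavelet here), the scale range, and the synthetic two-peak spectrum with added white Gaussian noise are all assumptions chosen for demonstration.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a, sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def cwt_scalogram(signal, scales):
    """Continuous wavelet transform by direct convolution:
    maps a 1-D signal to a 2-D (n_scales x n_samples) scalogram."""
    wavelet_len = min(10 * int(max(scales)), len(signal))
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        out[i] = np.convolve(signal, ricker(wavelet_len, a), mode="same")
    return out

# Synthetic stand-in for a noisy Raman spectrum (hypothetical data):
# two Gaussian peaks plus additive white Gaussian noise.
x = np.linspace(0.0, 1.0, 1024)
spectrum = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.02) ** 2)
spectrum += 0.05 * np.random.default_rng(0).normal(size=x.size)

scalogram = cwt_scalogram(spectrum, scales=np.arange(1, 65))
print(scalogram.shape)  # (64, 1024) -- 2-D data ready for the classifier back-end
```

The 2-D array produced here is the kind of input the paper's back-end classifiers (NB, SVM, RF, KNN, or the DCNN) would consume, typically after normalization or resizing to the network's expected input shape.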