Paper Title

Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition

Authors

Xun Wu, Wei-Long Zheng, and Bao-Liang Lu

Abstract

Compared with the rich studies on the motor brain-computer interface (BCI), the recently emerging affective BCI presents distinct challenges since the brain functional connectivity networks involving emotion are not well investigated. Previous studies on emotion recognition based on electroencephalography (EEG) signals mainly rely on single-channel-based feature extraction methods. In this paper, we propose a novel emotion-relevant critical subnetwork selection algorithm and investigate three EEG functional connectivity network features: strength, clustering coefficient, and eigenvector centrality. The discrimination ability of the EEG connectivity features in emotion recognition is evaluated on three public emotion EEG datasets: SEED, SEED-V, and DEAP. The strength feature achieves the best classification performance and outperforms the state-of-the-art differential entropy feature based on single-channel analysis. The experimental results reveal that distinct functional connectivity patterns are exhibited for the five emotions of disgust, fear, sadness, happiness, and neutrality. Furthermore, we construct a multimodal emotion recognition model by combining the functional connectivity features from EEG and the features from eye movements or physiological signals using deep canonical correlation analysis. The classification accuracies of multimodal emotion recognition are 95.08/6.42% on the SEED dataset, 84.51/5.11% on the SEED-V dataset, and 85.34/2.90% and 86.61/3.76% for arousal and valence on the DEAP dataset, respectively. The results demonstrate the complementary representation properties of the EEG connectivity features with eye movement data. In addition, we find that the brain networks constructed with 18 channels achieve comparable performance with that of the 62-channel network in multimodal emotion recognition and enable easier setups for BCI systems in real scenarios.
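To make the three network features named above concrete, the sketch below shows one way to compute node strength, clustering coefficient, and eigenvector centrality from an EEG connectivity matrix. The connectivity measure used here (absolute Pearson correlation between channels) and the 62-channel, single-trial setup are illustrative assumptions, not necessarily the exact pipeline of the paper.

```python
# Minimal sketch: strength, clustering coefficient, and eigenvector centrality
# from a channel-by-channel EEG connectivity matrix. The correlation-based
# connectivity is an assumption for illustration only.
import numpy as np
import networkx as nx

def connectivity_features(eeg):
    """eeg: array of shape (n_channels, n_samples) for one trial/band."""
    # Connectivity matrix: absolute Pearson correlation between channels.
    conn = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(conn, 0.0)              # ignore self-connections

    graph = nx.from_numpy_array(conn)        # weighted, undirected graph

    strength = conn.sum(axis=1)              # node strength: sum of edge weights
    clustering = np.array(list(nx.clustering(graph, weight="weight").values()))
    centrality = np.array(list(
        nx.eigenvector_centrality_numpy(graph, weight="weight").values()))

    # One feature vector per trial: concatenate the three node-level features.
    return np.concatenate([strength, clustering, centrality])

# Example usage with random data standing in for one 62-channel EEG trial.
rng = np.random.default_rng(0)
trial = rng.standard_normal((62, 1000))
print(connectivity_features(trial).shape)    # (186,) = 3 features x 62 channels
```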
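The multimodal fusion step in the paper uses deep canonical correlation analysis (DCCA). The sketch below substitutes a plain linear CCA from scikit-learn to illustrate the underlying idea: projecting EEG connectivity features and eye-movement features into a shared, maximally correlated space before classification. The feature dimensions, random placeholder data, and SVM classifier are hypothetical and not taken from the paper.

```python
# Simplified stand-in for DCCA-based fusion: linear CCA projects the two
# modalities into a correlated subspace, then a classifier is trained on the
# fused representation. All shapes and data below are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.standard_normal((n_trials, 186))   # e.g. EEG connectivity features
eye_feats = rng.standard_normal((n_trials, 33))    # e.g. eye-movement features
labels = rng.integers(0, 5, size=n_trials)         # five emotion classes

# Project both views into a shared 20-dimensional correlated space.
cca = CCA(n_components=20)
eeg_proj, eye_proj = cca.fit_transform(eeg_feats, eye_feats)

# Fuse the projected views by concatenation and train a classifier.
fused = np.concatenate([eeg_proj, eye_proj], axis=1)
clf = SVC(kernel="linear").fit(fused, labels)
print(clf.score(fused, labels))
```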
