Frontiers and Hot Topics

Affective Computing for E-Learning Based on Multimodal Data Fusion*

  • (1. Zhengzhou University of Aeronautics, Zhengzhou, Henan 450046, China)
Si Junyong (b. 1999), male, master's student, Zhengzhou University of Aeronautics; Fu Yonghua (b. 1979), male, professor, Zhengzhou University of Aeronautics; research interests: human-machine affect and intelligent processing of educational information.

Received date: 2024-05-10

  Online published: 2024-07-23

Funding

*This paper is a product of the 2024 Henan Province Higher Education Teaching Reform Research and Practice Project "Exploration of Intelligent-Technology-Driven Teaching Models for New Liberal Arts Education" (Project No. 2024SJGLX0413).


Abstract

Online learning, thanks to its intelligence and personalization, has increasingly become a favored mainstream learning method. However, the "affective gap" severely hampers the deeper development of online teaching, making it imperative to study how to perceive learners' emotions instantly and accurately so as to inform improvements in learning performance. This paper constructs a multimodal data fusion model for affective computing in online learning. Facial expression, speech, and text data are collected from subjects, and emotion recognition models are employed to obtain recognition results for each modality. On this basis, decision-level fusion is used to achieve multimodal affective computing for online learning, and the optimal affective computing model is determined. The study finds that the average recognition accuracy of the optimal model is 14.51% higher than that of single-modal emotion recognition, confirming the feasibility and effectiveness of the model for affective computing in online learning scenarios.

Cite this article

Si Junyong, Fu Yonghua. Affective Computing for E-Learning Based on Multimodal Data Fusion[J]. 图书与情报, 2024, 44(03): 69-80. DOI: 10.11968/tsyqb.1003-6938.2024034
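The decision-level fusion described in the abstract can be sketched as follows: each modality's classifier outputs a probability distribution over emotion categories, and the fused decision is a weighted average of those distributions. This is a minimal illustrative sketch only; the emotion label set, modality names, and equal default weights are assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical emotion categories; the paper's actual label set may differ.
EMOTIONS = ["happy", "confused", "bored", "neutral"]

def decision_level_fusion(modal_probs, weights=None):
    """Fuse per-modality emotion probability vectors by weighted averaging.

    modal_probs: dict mapping modality name -> probability vector over EMOTIONS
    weights:     dict mapping modality name -> fusion weight (defaults to equal)
    Returns the fused probability vector and the predicted emotion label.
    """
    names = list(modal_probs)
    if weights is None:
        weights = {m: 1.0 / len(names) for m in names}
    total = sum(weights[m] for m in names)
    fused = sum(weights[m] * np.asarray(modal_probs[m]) for m in names) / total
    return fused, EMOTIONS[int(np.argmax(fused))]

# Example: three single-modality classifiers disagree on the top emotion;
# fusion aggregates their confidence into one decision.
probs = {
    "face":   [0.6, 0.2, 0.1, 0.1],
    "speech": [0.3, 0.4, 0.2, 0.1],
    "text":   [0.5, 0.1, 0.2, 0.2],
}
fused, label = decision_level_fusion(probs)
print(label)  # prints "happy"
```

In practice, the per-modality weights would be tuned (e.g., on a validation set) rather than left equal, which is one way an "optimal" fusion model can be selected among candidates.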