Title: The effects of crossmodal semantic reliability on the audiovisual immersion experience in virtual reality
Authors: Hongtao Yu; Qiong Wu; Mengni Zhou; Qi Li; Jiajia Yang; Satoshi Takahashi; Yoshimichi Ejima; Jinglong Wu
Addresses:
Hongtao Yu: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Qiong Wu: Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
Mengni Zhou: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Qi Li: School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089 Weixing Road, Changchun 130022, Jilin Province, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, Guangdong, China
Jiajia Yang: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Satoshi Takahashi: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Yoshimichi Ejima: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Jinglong Wu: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, Guangdong, China; Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, 7008530, Japan
Abstract: Previous studies have reported that the immersion experience can be improved by pairing visual content with auditory stimuli; however, whether the semantic relationship between auditory and visual stimuli can also modulate the visual virtual experience remains unclear. Using a psychophysical method, this study investigated categorisation performance under three crossmodal semantic-reliability conditions: semantically reliable, semantically unreliable and semantically uncertain. The results revealed faster categorisation under the semantically reliable condition regardless of object category, indicating that crossmodal semantic reliability facilitated multisensory integration. In particular, under the semantically unreliable condition, categorisation was faster for non-living stimuli, indicating a more robust representation of non-living objects. These results indicate that adopting semantically reliable visual and auditory stimuli as multisensory inputs can effectively improve the multisensory immersion experience.
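For context, the following minimal Python sketch (not the authors' code; all reaction-time values, effect sizes and condition offsets are invented for illustration) shows how categorisation speed might be compared across the three crossmodal semantic-reliability conditions and the two object categories described in the abstract.

```python
# Hypothetical sketch: mean categorisation reaction times (RTs) per
# condition x category cell, with synthetic data standing in for the
# psychophysical measurements reported in the paper.
import random
import statistics

CONDITIONS = ["reliable", "unreliable", "uncertain"]
CATEGORIES = ["living", "non-living"]

def simulate_trial(condition, category):
    """Return a synthetic RT in ms; the offsets below are illustrative assumptions."""
    base = 550.0
    if condition == "reliable":
        base -= 60.0   # assumed facilitation from semantically reliable audiovisual pairing
    if condition == "unreliable" and category == "non-living":
        base -= 25.0   # assumed non-living advantage under semantically unreliable pairing
    return random.gauss(base, 40.0)

# Collect synthetic RTs for each condition x category cell.
rts = {(cond, cat): [simulate_trial(cond, cat) for _ in range(100)]
       for cond in CONDITIONS for cat in CATEGORIES}

for (cond, cat), values in sorted(rts.items()):
    print(f"{cond:10s} {cat:10s} mean RT = {statistics.mean(values):6.1f} ms")
```

Averaging RTs over each condition-by-category cell in this way would reproduce the pattern summarised above (fastest responses under the reliable condition, and a non-living advantage under the unreliable condition) if those assumed offsets held.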
Keywords: audiovisual integration; semantic reliability; semantic category; selective attention; multisensory presence; virtual reality.
International Journal of Mechatronics and Automation, 2022, Vol. 9, No. 4, pp. 161-171
Received: 28 Sep 2021
Accepted: 27 Jan 2022
Published online: 19 Apr 2023