Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

Vikranth Rao Bejjanki, Meghan Clayards, David C. Knill, Richard N. Aslin

Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America

Abstract

Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during c…
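The two models contrasted in the abstract have a compact form. Under the standard normative model for continuous dimensions, each cue is weighted by its reliability (inverse sensory variance). For a categorical task, the environmental within-category variance along each cue adds to that cue's sensory variance, so both terms shape the optimal weights. The following is a minimal Python sketch of this idea, assuming Gaussian sensory noise and Gaussian within-category distributions; the function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def reliability_weights(sigma_a, sigma_v):
    """Standard normative model for continuous dimensions:
    weight each cue in proportion to its reliability (1 / sensory variance)."""
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    return r_a / (r_a + r_v), r_v / (r_a + r_v)

def categorical_weights(sigma_a, sigma_v, tau_a, tau_v):
    """Categorical extension: the effective variance of each cue is its sensory
    variance plus the environmental (within-category) variance along that cue,
    so category structure also determines the optimal weights."""
    eff_a = sigma_a**2 + tau_a**2
    eff_v = sigma_v**2 + tau_v**2
    r_a, r_v = 1.0 / eff_a, 1.0 / eff_v
    return r_a / (r_a + r_v), r_v / (r_a + r_v)

# Example: equal sensory noise on both cues, but the category is more
# tightly distributed along the visual cue (tau_v < tau_a).
print(reliability_weights(1.0, 1.0))             # (0.5, 0.5): equal weights
print(categorical_weights(1.0, 1.0, 2.0, 0.5))   # (0.2, 0.8): visual dominates
```

Note how the second call departs from the continuous-dimension prediction: even with identical sensory noise, the cue with less within-category variability receives more weight, which is the qualitative signature the abstract attributes to optimal categorical cue integration.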