Meaning is communicated in spoken language through words and their arrangement in syntactic structures, and also through the way the words and sentences are said: their intonation. Intonation is expressed through the complex patterning of pitch, loudness, voice quality, and tempo. The goal of this project is to identify which patterns of modulation among these acoustic dimensions are perceptually salient for listeners and form cognitive representations that are stored in long-term memory as the basis for subsequent speech production.

A key question is whether intonation patterns are perceived and represented in memory in fine phonetic detail, or as abstract patterns of tonally marked prominence that may be variously realized, possibly depending on the discourse or social context of an utterance. A second question is whether the intonation patterns at the end of a sentence, which convey meaning about the speaker’s intention (e.g., to ask vs. to inform) and attitude (e.g., surprise, certainty), are more reliably perceived and stored in memory representations than intonation patterns earlier in a sentence, which are described as “ornamental.”

These questions are investigated through acoustic analysis of the intonation that a listener reproduces from pre-recorded model sentences. Experiments will test 24 distinct intonational “tunes” proposed for English in the Autosegmental-Metrical theory of intonation (Pierrehumbert 1980). In a series of 8 experiments, the participant’s task of reproducing intonation is manipulated in three ways: using model sentences with more complex intonational patterns and variable lexical and syntactic content; introducing a time delay or an intervening speech task between hearing the model sentence and reproducing its intonation pattern; and providing information about the immediately preceding discourse and social context that supports the interpretation of intonational meaning.
Computational tools will be used to automatically extract a rich set of dynamic acoustic measures from the reproduced sentences, and statistical modeling through factor analysis and Bayesian regression will evaluate the similarity of the reproduced intonation patterns to the corresponding stimuli.
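The abstract does not specify the project’s actual analysis pipeline, but one common building block for comparing a reproduced intonation pattern to its model stimulus is a similarity score over time-normalized fundamental-frequency (f0) contours. The sketch below is purely illustrative, not the project’s method: it resamples two pitch contours of different durations onto a common time axis, z-scores each to factor out a speaker’s overall pitch level and range, and returns the Pearson correlation between them. The function name and parameters are assumptions introduced for this example.

```python
import numpy as np

def contour_similarity(f0_model, f0_reproduced, n_points=100):
    """Illustrative (assumed, not the project's actual tool) similarity
    score for two f0 contours of possibly different lengths.

    Each contour is resampled to n_points samples on a normalized 0..1
    time axis, then z-scored so that absolute pitch level and range do
    not dominate; the score is the Pearson correlation of the results.
    """
    def normalize(f0):
        f0 = np.asarray(f0, dtype=float)
        # Resample onto a shared normalized time axis.
        t_old = np.linspace(0.0, 1.0, len(f0))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, f0)
        # Z-score: compare contour shape, not pitch level or span.
        return (resampled - resampled.mean()) / resampled.std()

    a = normalize(f0_model)
    b = normalize(f0_reproduced)
    return float(np.corrcoef(a, b)[0, 1])

# A rising model contour reproduced by a higher-pitched speaker at a
# different duration still scores near 1; a falling reproduction of a
# rising model scores near -1.
model = [100, 110, 125, 150, 180]                    # Hz
reproduction = [180, 190, 205, 225, 250, 275, 300]   # Hz, longer and higher
print(contour_similarity(model, reproduction))
```

Under this (assumed) scoring scheme, identical contours score exactly 1.0 regardless of pitch register, which is one simple way to operationalize "same tune, different voice."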
Effective start/end date: 3/1/20 → 8/31/23
- National Science Foundation (BCS-1944773)