It is often easy to tell, on the basis of the spoken language a person produces, whether that person experiences a specific emotion. If a speaker is angry, for instance, (s)he may speak with a louder voice and a higher pitch, while sadness may make speech softer and lower (Scherer, 2003; Bachorowski & Owren, 2006). Word use may vary as well: even though only a limited number of the words that speakers produce can be classified as emotional (Pennebaker, Mehl, & Niederhoffer, 2003), word use has been shown to be indicative of a speaker’s feelings. For instance, Stirman and Pennebaker (2001) compared the word use of suicidal and non-suicidal poets and found that the former used relatively more first person singular pronouns and more words referring to death, but made fewer references to other people in their poems.
While the effects of emotion on speech prosody and word use are well established, the impact of emotion on other aspects of the speech production process has received surprisingly little scholarly attention. In this project we set out to test our conjecture that emotion can influence the early content selection and message formulation stages of language production. In particular, we study how state-of-the-art language production models can be interfaced with state-of-the-art emotion models, and we test predictions made by such a combined model in a series of experiments that zoom in on referential communication. In addition, we develop a computational model that is capable of generating different linguistic realizations of the same content as a function of emotional state. Such a model has important practical applications, for example in games (allowing game characters to express themselves in a dynamic, contextually appropriate way) and in automatic news reporting (enabling sophisticated expressions of sentiment in “robo-journalism”).