IncReD aims to investigate incremental human reasoning in dialogue by combining insights from the different fields involved in dialogue research (e.g. Artificial Intelligence, Linguistics, Psychology). We will experimentally test how dialogue participants dynamically and incrementally update their common ground and reason using this information, and develop a model to account for our findings. The project will integrate state-of-the-art experimental techniques, corpus methods and formal models from the syntactic, semantic and pragmatic domains into a model of dialogue.
Specifically, we will investigate:
(1) How do people respond to why-questions in dialogue? What does this tell us about the reasoning people do in dialogue, and the resources they use?
(2) What happens in a dialogue (linguistically and interactionally) when there is a mismatch between participants in the resources they use for reasoning?
(3) How can this incremental human reasoning ability be formally modelled?
Our research into incremental reasoning will address foundational questions in dialogue research and feed into AI, where it may be applied in areas such as artificial companions for the elderly. The project also has important implications for understanding reasoning in practical dialogic situations, for example in therapy dialogues, where a mismatch in reasoning between psychiatrist and patient can have potentially catastrophic consequences.
The project is funded by the Swedish Research Council (VR, grant 2016-01162), running from 2017 to 2020.
The project plan can be accessed here.
Howes, C. & Eshghi, A. (2021). Feedback relevance spaces: Interactional constraints on processing contexts in Dynamic Syntax. Journal of Logic, Language and Information, 30(2), 331-362. [More] [Digital version] [Bibtex]
Breitholtz, E. & Howes, C. (2020). Communicable reasons: How children learn topoi through dialogue. In Proceedings of the 24th Workshop on the Semantics and Pragmatics of Dialogue. Waltham, MA: SEMDIAL. [More] [Slides] [Digital version] [Bibtex]
Gregoromichelaki, E., Mills, G. J., Howes, C., Eshghi, A., Chatzikyriakidis, S., Purver, M. et al. (2020). Completability vs (In)completeness. Acta Linguistica Hafniensia. [More] [Digital version] [Bibtex]
Ginzburg, J., Cooper, R., Hough, J. & Schlangen, D. (2018). Incrementality and clarification/sluicing potential. In Trueswell, R., Cummins, C., Heycock, C. et al. (editors), Proceedings of Sinn und Bedeutung 21, pages 463-480. Linguistic Society of America. [More] [Digital version] [Bibtex]
Breitholtz, E., Howes, C. & Cooper, R. (2017). Incrementality all the way up. In Computing Natural Language Inference Workshop at the International Conference on Computational Semantics (IWCS). [More] [Digital version] [Slides] [Bibtex]
Howes, C. & Eshghi, A. (2017). Feedback relevance spaces: The organisation of increments in conversation. In Proceedings of the 12th International Conference on Computational Semantics (IWCS 2017). Association for Computational Linguistics. [More] [Digital version] [Poster] [Lightning Slide] [Bibtex]
Howes, C. & Rieser, H., editors (2017). Proceedings of the workshop on Formal Approaches to the Dynamics of Linguistic Interaction (FADLI), number 1863 in CEUR Workshop Proceedings, Aachen. [More] [Digital version] [Bibtex]