---
lecturer: "Noortje J. Venhuizen from Tilburg University"
title: "Expectation-based Semantics in Language Comprehension"
type: seminar
date: 2023-11-08
duration: "2 hours"
venue: "Online"
slides: "Noortje J. Venhuizen 8.11.2023.pdf"
---

Abstract:

The processing difficulty of each word we encounter in a sentence is affected by both our prior linguistic experience and our general knowledge about the world. Computational models of incremental language processing have, however, been limited in accounting for the influence of world knowledge. We develop an incremental model of language comprehension that integrates linguistic experience and world knowledge at the level of utterance interpretation. To this end, our model constructs, on a word-by-word basis, rich, distributed representations that capture utterance meaning in terms of propositional co-occurrence across formal model structures. These representations implement a Distributional Formal Semantics and are inherently compositional and probabilistic, capturing entailment and probabilistic inference. To quantify linguistic processing effort in the model, we adopt Surprisal Theory, which asserts that the processing difficulty incurred by a word is inversely proportional to its expectancy. In contrast with typical language model implementations of surprisal, our model instantiates surprisal as a comprehension-centric metric that reflects the likelihood of the unfolding utterance meaning as established after processing each word. I will present simulations that illustrate how the model captures processing effects from various semantic phenomena, such as presupposition, quantification, and reference resolution, and how linguistic experience and world knowledge combine in determining online expectations. Finally, I will discuss the implications of our approach for neurocognitive theories and models of language comprehension.
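
To make the contrast drawn in the abstract concrete, the following is a minimal sketch in Python of the two notions of surprisal: the classic language-model formulation, defined over the conditional probability of a word, and a comprehension-centric variant, defined over the likelihood of the unfolding utterance meaning. This is an illustrative assumption, not the model presented in the talk; the function names and the toy probabilities are invented for exposition.

```python
import math

# Classic (language-model) surprisal: the difficulty of word w_t is
# -log P(w_t | w_1 .. w_{t-1}); low-probability words are costly.
def lm_surprisal(p_word_given_context: float) -> float:
    return -math.log(p_word_given_context)

# Comprehension-centric surprisal (in the spirit of the abstract):
# cost reflects how expected the utterance meaning established after
# w_t is, given the meaning established after w_{t-1}. Here meanings
# are reduced to toy probability masses over formal model structures.
def meaning_surprisal(p_meaning_after: float, p_meaning_before: float) -> float:
    # -log of the conditional likelihood of the updated meaning
    return -math.log(p_meaning_after / p_meaning_before)

# Toy example: a word that is individually rare (high LM surprisal)
# but barely shifts the established meaning (low meaning surprisal).
print(lm_surprisal(0.01))             # ~4.61 nats
print(meaning_surprisal(0.18, 0.20))  # ~0.11 nats
```

The point of the sketch is only the shift in what the probability ranges over: from word forms in context to interpretations established incrementally, which is how world knowledge can enter the expectation.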