\[ \definecolor{firebrick}{RGB}{178,34,34} \newcommand{\red}[1]{{\color{firebrick}{#1}}} \] \[ \definecolor{green}{RGB}{107,142,35} \newcommand{\green}[1]{{\color{green}{#1}}} \] \[ \definecolor{blue}{RGB}{0,0,205} \newcommand{\blue}[1]{{\color{blue}{#1}}} \] \[ \newcommand{\den}[1]{[\![#1]\!]} \] \[ \newcommand{\set}[1]{\{#1\}} \] \[ \newcommand{\tuple}[1]{\langle#1\rangle} \]
\[\newcommand{\States}{{T}}\] \[\newcommand{\state}{{t}}\] \[\newcommand{\Messgs}{{M}}\] \[\newcommand{\messg}{{m}}\]
processing
incrementality
build syntactic & semantic representations as the sentence comes in
what's the increment size? fixed or variable?
how to deal with ambiguity? singular guess or parallel hypotheses?
predictive
minimal sense: processing behavior is a function of current state
strong(est) sense: comprehender entertains hypotheses about the future
(Kuperberg & Jaeger 2016)
assign a partial parse \(p\) to word sequence \(w_1, \dots, w_i\)
as \(w_{i+1}\) comes in, update \(p\) to \(p'\) via parsing heuristics
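a minimal sketch of this update loop; the `update` heuristic is a hypothetical placeholder, not a concrete parser:

```python
def incremental_parse(words, update, initial=None):
    """Assign a partial parse p to w_1, ..., w_i and, as w_{i+1} comes in,
    update p to p' via the given parsing heuristic `update` (assumed)."""
    p = initial
    for w in words:
        p = update(p, w)   # p' = update(p, w_{i+1})
        yield p            # partial parse after each increment

# Toy usage: the "parse" is just the accumulated word sequence.
parses = list(incremental_parse(["the", "dot", "is", "blue"],
                                lambda p, w: (p or ()) + (w,)))
print(parses[-1])
```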
(Beck & Tiemann 2017, Towards a model of incremental composition, SuB)
theory follows data
(e.g., Jurafsky 1996, Hale 2006, Levy 2008)
literal listener picks literal interpretation (uniformly at random):
\[ P_{LL}(t \mid m) \propto P(t \mid [\![m]\!]) \]
Gricean speaker approximates informativity-maximization:
\[ P_{S}(m \mid t) \propto \exp( \lambda P_{LL}(t \mid m)) \]
pragmatic listener uses Bayes' rule to infer likely world states:
\[ P_L(t \mid m ) \propto P(t) \cdot P_S(m \mid t) \]
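a toy sketch of the three RSA equations above, for an assumed scalar-implicature setting (the states, messages, and \(\lambda\) are illustrative assumptions, not from the slides):

```python
import numpy as np

# Hypothetical toy setup: two states, two scalar messages.
states = ["some-not-all", "all"]   # t in T
messages = ["some", "all"]         # m in M
# Literal semantics [[m]](t): "some" true in both states, "all" only in "all".
sem = np.array([[1.0, 1.0],        # [[some]]
                [0.0, 1.0]])       # [[all]]
prior = np.array([0.5, 0.5])       # P(t)
lam = 4.0                          # rationality parameter (assumed value)

# Literal listener: P_LL(t | m) proportional to P(t | [[m]])
P_LL = sem * prior
P_LL /= P_LL.sum(axis=1, keepdims=True)

# Speaker: P_S(m | t) proportional to exp(lam * P_LL(t | m)),
# restricted to literally true messages.
P_S = np.exp(lam * P_LL) * sem
P_S /= P_S.sum(axis=0, keepdims=True)

# Pragmatic listener: P_L(t | m) proportional to P(t) * P_S(m | t)
P_L = prior * P_S
P_L /= P_L.sum(axis=1, keepdims=True)

print(P_L[0])  # P_L(t | "some"): most mass on "some-not-all"
```

hearing "some" shifts the pragmatic listener toward the "some-not-all" state, i.e. the scalar implicature falls out of the recursion.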
interpretation is holistic: based on the full & complete utterance
messages are word sequences: \(\messg = w_1, \dots, w_n\)
initial subsequence of \(\messg\): \(\messg_{\rightarrow i} = w_1, \dots, w_i\)
all messages sharing initial subsequence: \(\Messgs(\messg_{\rightarrow i}) = \set{\messg' \in \Messgs \mid \messg'_{\rightarrow i} = \messg_{\rightarrow i}}\)
next-word expectation:
\[P_L(w_{i+1} \mid \messg_{\rightarrow i}) \propto \sum_{\state} P(\state) \ \sum_{\messg' \in \Messgs(\messg_{\rightarrow i}, w_{i+1})} P_S(\messg' \mid \state)\]
\[P_L(\state \mid \messg_{\rightarrow i}) \propto P(\state) \ \sum_{\messg' \in \Messgs(\messg_{\rightarrow i})} P_S(\messg' \mid \state)\]
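the two incremental quantities above can be sketched as follows; the states, messages, and speaker probabilities are toy assumptions for illustration (e.g., outputs of the RSA speaker for complete utterances):

```python
# Assumed toy microcosm of states and word-sequence messages.
states = ["some-not-all", "all"]
messages = [("some", "blue"), ("all", "blue")]   # m = w_1, ..., w_n
prior = {t: 0.5 for t in states}                 # P(t)
# Hypothetical speaker distribution P_S(m | t) over full messages.
P_S = {("some-not-all", ("some", "blue")): 1.00,
       ("some-not-all", ("all", "blue")):  0.00,
       ("all",          ("some", "blue")): 0.12,
       ("all",          ("all", "blue")):  0.88}

def M(prefix):
    """M(m_{->i}): all messages sharing the initial subsequence `prefix`."""
    return [m for m in messages if m[:len(prefix)] == prefix]

def next_word(prefix):
    """P_L(w_{i+1} | m_{->i}): sum over states and messages extending prefix."""
    words = {m[len(prefix)] for m in M(prefix) if len(m) > len(prefix)}
    scores = {w: sum(prior[t] * sum(P_S[(t, m)] for m in M(prefix + (w,)))
                     for t in states)
              for w in words}
    Z = sum(scores.values())
    return {w: s / Z for w, s in scores.items()}

def interpret(prefix):
    """P_L(t | m_{->i}): prior times summed speaker probability of prefix."""
    scores = {t: prior[t] * sum(P_S[(t, m)] for m in M(prefix)) for t in states}
    Z = sum(scores.values())
    return {t: s / Z for t, s in scores.items()}

print(next_word(()))         # expectation over the first word
print(interpret(("some",)))  # state posterior after hearing "some"
```

already after the first word, the listener's state posterior and next-word expectations are defined, which is what links the model to word-by-word measures.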
next-word
self-paced reading
eye-tracked reading
ERPs
…?
interpretation
visual worlds
mouse-tracking
…?
Noveck & Posada (2013)
Nieuwland et al. (2010)
Hunt et al. (2013)
participants & procedure
sentence material
visual stimuli
work by Petra Augurzky
role of context?
SI computed by default vs. when contextually supported
when is SI computed online?
on the word, as soon as NP (possibly) complete, as soon as S (possibly) complete …
when is SI content checked against context?
as soon as possible vs. end of sentence
what incurs processing costs?
SI computation, SI cancellation, SI violation, …
Alle/Einige\(_1\) Punkte sind blau\(_2\), die im Kreis/Quadrat\(_3\) sind ('All/Some\(_1\) dots are blue\(_2\) that are in the circle/square\(_3\)')
general assumptions
specific assumptions
main issue
how to fix reasonable \(\States\) and \(\Messgs\)?
experimental microcosm assumption
all (and only?) meanings and forms that occur in the experiment
prediction
massive influence of filler material
participants & procedure
sentence material
visual stimuli
work by Petra Augurzky
behavioral data
only one participant consistently gave pragmatic judgements
ERP responses
no trace of pragmatic infelicity / expectations