Using eyetracking to investigate language comprehension in autism

First published in Cracking the Enigma, September 2010

In her classic book, Autism: Explaining the Enigma, Uta Frith coined the term ‘weak central coherence’ to describe the tendency of people with autism to focus on details at the expense of pulling together different sources of information and seeing the big picture. Frith described this as the “red thread” running through many of the symptoms of autism, including both the difficulties with social interaction and the strengths in attention to detail.

Frith argued that the ability to pull together different sources of information is particularly important for language comprehension and that weak central coherence could explain many of the comprehension difficulties faced by children with autism. Kanner had first noted such difficulties almost half a century earlier, in his original description of autism, observing that “stories are experienced in unintegrated portions”. Subsequent studies have confirmed that children with autism often struggle on reading comprehension tests, even when they are able to sound out the words quite fluently. They also tend to perform poorly on tests that require them to ‘read between the lines’ to make inferences about events that are implied but are not explicitly stated.

The worry with these more traditional tests of language comprehension is that people with autism may struggle for a host of reasons unrelated to their comprehension skills. For example, they may understand the material itself but have difficulty processing the question and providing an answer. To address these concerns, Frith and her colleague, Maggie Snowling, developed an ingenious test in which children were asked to read aloud sentences that contained ambiguous words – technically referred to as ‘homographs’ – that have two distinct meanings. Crucially, the pronunciation of the homograph depends on the meaning that is assigned. If the child has understood the word correctly, they should pronounce it correctly. For example, in the sentence

“In her dress was a large tear”

they should pronounce the final word “tare” rather than “teer”. So by looking at how children pronounced the homograph, Frith and Snowling could infer how well they had understood the sentence. As predicted, children with autism performed very poorly. They would tend to give the more common pronunciation of the word (in this case “teer”), regardless of the context.

This finding has since been replicated on a number of occasions and insensitivity to linguistic context has become one of the central planks of the weak central coherence account. However, there are a number of important points to note. First off, participants have to know both meanings of each homograph in order to pass, and in none of the studies has this actually been checked. Secondly, while people with autism as a group perform statistically worse than control participants, many autistic individuals pass with flying colours. This could be because the test isn’t sensitive enough to show up their difficulties (there are usually only four or five homographs to read), but it could also mean that there are individual differences within autism.

In a recent study conducted with colleagues at Oxford University, we developed a new task to further investigate the effect of sentence context on language comprehension in autism. We wanted to look at spoken language so that we could include a broader range of participants and wouldn’t have to worry about potential differences in reading ability. But we needed a test that, like the homographs task, would allow us to measure the child’s understanding and their use of sentence context without them having to reflect on the meaning or answer an explicit question. Eventually, we came up with the idea of using ‘language mediated eye-movements’. This approach relies on the fact that, when you listen to spoken language, you tend to look at objects in front of you that correspond to what you’re hearing. These eye-movements occur automatically and we’re usually unaware of how much our eyes are moving around the ‘visual world’ in front of us.

In our task, people heard sentences while looking at a computer screen that showed a number of different objects. A small camera below the screen was used to work out where the person was looking. The figure below shows the performance of a group of Oxford University undergraduates on our test. In a neutral sentence such as “Joe chose the button”, the students tended to look at objects whose names sounded like the words they were hearing. In this case, they would look at the butter, one of the objects on the screen. The effect only lasted a fraction of a second, because once they’d heard the whole word, they knew that it wasn’t “butter” after all and looked away again. Nevertheless, their eye-movements showed that, for that split second, they thought that the speaker was perhaps going to say “butter”.

Crucially, however, this effect disappeared if they heard “Sam fastened the button”. We called this the constraining condition because the sentence context constrained the possible words that could be referred to. In this example, the listeners were able to use the context of the sentence and their knowledge of fastenable and non-fastenable objects to rule out the possibility that the word could be “butter” – and so they didn’t look at the butter any more than they looked at the other objects.
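The competitor effect described above is typically quantified as the proportion of fixation samples that land on the competitor object (here, the butter) in each sentence condition. As a rough illustration of the idea – the data format, object names, and function here are invented for this sketch, not taken from the actual study’s analysis pipeline – one might summarise eye-tracker samples like this:

```python
from collections import defaultdict

# Toy fixation samples: (condition, trial_id, time_ms, object_fixated).
# In a real experiment these would come from the eye-tracker, with many
# samples per trial across the time window after word onset.
fixations = [
    ("neutral",      1, 200, "butter"),
    ("neutral",      2, 200, "button"),
    ("neutral",      3, 200, "butter"),
    ("neutral",      4, 200, "hammer"),
    ("constraining", 1, 200, "button"),
    ("constraining", 2, 200, "hammer"),
    ("constraining", 3, 200, "button"),
    ("constraining", 4, 200, "coat"),
]

def competitor_proportion(samples, competitor="butter"):
    """Proportion of fixation samples on the competitor, per condition."""
    looks = defaultdict(int)
    totals = defaultdict(int)
    for condition, _trial, _time, obj in samples:
        totals[condition] += 1
        if obj == competitor:
            looks[condition] += 1
    return {cond: looks[cond] / totals[cond] for cond in totals}

props = competitor_proportion(fixations)
print(props)
```

With these toy numbers, the competitor attracts half the looks in the neutral condition and none in the constraining condition – the pattern a context effect would produce. The real analysis compares such proportions across the time course of the word and between participant groups.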

Our prediction, based on the weak central coherence account, was that children with autism would fail to show this context effect. In fact, this only turned out to be true of the autistic children who also had language difficulties. In the figure below, this is shown by the orange line. These kids would look at the butter even when the sentence was about fastening. In contrast, the autistic children who had age-appropriate language skills (brown line) showed very similar patterns of eye-movements to the typically developing children (dark grey) – and to the Oxford undergraduates!

Interestingly, when we ran the same test with non-autistic children with language impairments (the light grey line), their eye-movements were remarkably similar to those of the autistic kids with language impairment. In other words, difficulties processing language in context were related to children’s language ability, regardless of whether or not they had autism.

We also included a few checks to make sure the children knew and understood all the words. For example, when there actually was a button on the screen, all of the kids looked at it straight away on hearing the word “fastened”. So we can be confident that they knew what the word meant and that group differences in eye-movements were just down to their ability to use the sentence context.

In line with the weak central coherence account, we’d predicted that insensitivity to context would be an autism-specific phenomenon. But it turned out to be related to children’s language ability, irrespective of their diagnosis. With a bit of hindsight, our results actually make quite a lot of sense. After all, as Frith herself argued, the ability to make use of context is critical to language comprehension. So perhaps we should have expected a link between difficulties in using context and language impairment.

More generally, these findings highlight the need to consider individual differences within autism and not just look at group averages, assuming that everyone with autism is the same. They also highlight the importance of looking at the overlap between autism and other developmental disorders. In particular, they provide further evidence relating to the ongoing debate surrounding similarities between language difficulties affecting children with and without autism.


Jon Brock, Courtenay Frazier Norbury, Shiri Einav, & Kate Nation (2008). Do individuals with autism process words in context? Evidence from language-mediated eye-movements. Cognition. DOI: 10.1016/j.cognition.2008.06.007