Different layers of LLMs capture different aspects of human sentence processing: early layers model easy, naturalistic reading, while deeper layers better capture the cognitive effort required by syntactically complex sentences.
This paper investigates how different layers of large language models align with human brain activity during reading. The researchers found that early LLM layers match patterns of easy, natural reading, while later layers better capture how humans process syntactically complex sentences. This suggests that humans recruit different cognitive strategies depending on reading difficulty.