You can decode what someone imagines saying by training a decoder on their brain activity while they listen to speech, then mapping imagined-speech brain patterns onto listened-speech patterns — sidestepping the data-scarcity problem in brain-computer interfaces.
This paper shows how to decode imagined speech from MEG brain recordings by leveraging the more abundant listened-speech data. The researchers learned a mapping from brain activity recorded during imagined speech to brain activity recorded during listening, then applied a decoder trained only on listened speech to identify imagined words. This approach avoids the need for large imagined-speech datasets, which are slow and difficult to collect.
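The two-stage idea — learn an imagined-to-listened mapping, then reuse a listened-speech decoder — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual pipeline: the feature matrices, the linear (ridge) mapping, and the logistic-regression word decoder are all assumptions standing in for real MEG features and the authors' models.

```python
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)
n_listen, n_imagine, n_features, n_words = 200, 20, 64, 5

# Abundant listened-speech trials: one MEG feature vector per trial,
# each labeled with the word that was heard (synthetic stand-ins here).
listen_X = rng.normal(size=(n_listen, n_features))
listen_y = rng.integers(0, n_words, size=n_listen)

# A small paired calibration set: imagined-speech activity assumed to be
# a noisy linear transform of the corresponding listened activity.
A = 0.1 * rng.normal(size=(n_features, n_features)) + np.eye(n_features)
paired_listen = rng.normal(size=(n_imagine, n_features))
imagine_X = paired_listen @ A + 0.1 * rng.normal(size=(n_imagine, n_features))

# Stage 1: learn the mapping from imagined-space to listened-space features.
mapper = Ridge(alpha=1.0).fit(imagine_X, paired_listen)

# Stage 2: train the word decoder on listened-speech data only.
decoder = LogisticRegression(max_iter=1000).fit(listen_X, listen_y)

# At test time: project imagined activity into listened space, then decode.
projected = mapper.predict(imagine_X)
predicted_words = decoder.predict(projected)
print(predicted_words.shape)  # one predicted word label per imagined trial
```

The key point the sketch captures is that the decoder never sees imagined-speech data during training; only the small calibration mapping does.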