You can train dense retrievers to match LLM utility by distilling perplexity-based signals into embeddings during training, eliminating expensive test-time LLM re-ranking while improving retrieval quality.
This paper proposes Utility-Aligned Embeddings (UAE), a method that trains dense retrievers to match the ranking quality of LLM-based re-ranking without incurring its test-time computational cost.
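The summary does not spell out the training objective, but one plausible form of perplexity distillation is a KL loss that pulls the retriever's score distribution toward a teacher distribution derived from LLM perplexities (lower perplexity on the answer given a passage implies higher utility). The function names, the softmax temperature, and the KL formulation below are all assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def softmax(x, temp=1.0):
    # Numerically stable softmax over a 1-D score vector.
    z = np.asarray(x, dtype=float) / temp
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def utility_distillation_loss(retriever_scores, llm_perplexities, temp=1.0):
    """Hypothetical distillation objective: KL(teacher || student).

    teacher: distribution over candidate passages from negated LLM
             perplexities (lower perplexity -> higher utility).
    student: distribution from the dense retriever's similarity scores.
    Minimizing this loss aligns retriever rankings with LLM utility.
    """
    teacher = softmax(-np.asarray(llm_perplexities, dtype=float), temp)
    student = softmax(retriever_scores, temp)
    return float(np.sum(teacher * (np.log(teacher) - np.log(student))))

# Toy example: three candidate passages for one query.
scores_aligned = [2.0, 1.0, 0.2]      # retriever agrees with the LLM ranking
scores_misaligned = [0.2, 1.0, 2.0]   # retriever inverts the LLM ranking
perplexities = [5.0, 9.0, 30.0]       # passage 0 is most useful to the LLM

loss_good = utility_distillation_loss(scores_aligned, perplexities)
loss_bad = utility_distillation_loss(scores_misaligned, perplexities)
```

A retriever whose scores already agree with the perplexity-derived ranking incurs a smaller loss than one that inverts it, which is the behavior a distillation objective of this shape is meant to reward.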