The Compression Gap: Why Discrete Tokenization Limits Vision-Language-Action Model Scaling