Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models