A vision-capable reasoning model whose refusal behaviors have been surgically removed through abliteration, making it notably more permissive than its base counterpart. It accepts both text and image inputs, building on Qwen3.5's 35B-parameter foundation while activating only ~3B parameters per forward pass via mixture-of-experts routing. Expect fewer content blocks, but also fewer built-in safety guardrails.
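The core idea behind abliteration can be sketched in a few lines: a "refusal direction" is estimated from the model's activations (typically as the difference of mean activations on refused vs. answered prompts), then projected out of weight matrices that write into the residual stream, so the model can no longer express that direction. The sketch below uses a random toy matrix and direction purely for illustration; it is not the actual procedure applied to this model's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8                        # toy hidden size
W = rng.normal(size=(d, d))  # a weight matrix writing into the residual stream

# Stand-in for the estimated refusal direction, unit-normalized.
r = rng.normal(size=d)
r /= np.linalg.norm(r)

# Orthogonalize W against r: subtract the component of W's output
# that lies along r (W' = W - r r^T W).
W_abl = W - np.outer(r, r) @ W

# After ablation, no input can produce output along r.
x = rng.normal(size=d)
out = W_abl @ x
print(abs(r @ out))  # effectively zero (up to floating-point error)
```

Applying this projection across the relevant layers removes the refusal behavior without retraining, which is why the abliterated model otherwise behaves like its base counterpart.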