A heavily modified, uncensored variant of Qwen3.5 35B, aggressively fine-tuned by HauhauCS. It operates with significantly reduced content filtering and will engage with topics and requests that standard models decline, a double-edged trait that demands careful consideration of deployment context. The mixture-of-experts architecture activates only about 3B of the 35B total parameters per forward pass, keeping inference lean.
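The sparse activation described above comes from learned gating: each token is routed to a small subset of expert sub-networks, so compute scales with the number of experts selected rather than the total parameter count. A minimal sketch of top-k routing follows; all sizes, names, and the toy linear "experts" are illustrative assumptions, not details of this model's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, experts, k=2):
    """Route token vector x to the top-k experts by gate score.

    Only the k selected experts run, so per-token compute is
    proportional to k, not to the total number of experts.
    """
    scores = softmax(gate_w @ x)               # one score per expert
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    weights = scores[top] / scores[top].sum()  # renormalize over the selected
    # Weighted sum of only the chosen experts' outputs
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(n_experts, d))
# Toy stand-ins for expert FFN blocks: simple linear maps
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, m=m: m @ x for m in mats]

y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With 4 experts and k=2, only half the expert parameters participate in any single forward pass, which is the same principle behind activating roughly 3B of 35B parameters, just at a much smaller scale.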