A heavily modified, uncensored variant of Qwen3.5 122B, aggressively fine-tuned by HauhauCS, this model prioritizes unfiltered output over safety guardrails. It accepts both text and image inputs, making it multimodal, but the aggressive tuning means responses can be blunt, unrestrained, and unpredictable in tone. Its Mixture-of-Experts (MoE) architecture activates only a sparse 10B of its 122B parameters per token, keeping inference costs well below what the full parameter count would suggest.
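To illustrate why sparse activation matters, a rough back-of-the-envelope sketch (an assumption, not from the model card) uses the common rule of thumb that a forward pass costs about 2 FLOPs per active parameter per token:

```python
# Rough per-token compute comparison: dense activation vs. MoE sparse activation.
# Assumption: forward-pass cost ~= 2 FLOPs per *active* parameter per token.
def flops_per_token(active_params: float) -> float:
    return 2 * active_params

dense = flops_per_token(122e9)  # hypothetical dense model: all 122B params active
moe = flops_per_token(10e9)     # MoE: only ~10B params active per token

print(f"dense: {dense:.2e} FLOPs/token")
print(f"moe:   {moe:.2e} FLOPs/token")
print(f"compute reduction: {dense / moe:.1f}x")
```

By this estimate, per-token compute tracks the 10B active parameters rather than the 122B total, though memory requirements still scale with total parameters since all experts must be loaded.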