A community-modified version of Gemma 4 26B that has undergone 'abliteration' — a technique that identifies the model's internal refusal direction and edits the weights to suppress it, removing built-in refusal behaviors and making the model willing to respond to prompts the base model would typically decline. With only ~4B of its 26B total parameters active at inference time, it runs efficiently. Trade-off: safety guardrails are intentionally removed.
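The core idea behind abliteration can be sketched in a few lines. In this simplified, hypothetical example (random data standing in for real residual-stream activations), a "refusal direction" is estimated as the normalized difference between mean activations on refusal-inducing and benign prompts, and a weight matrix is then orthogonalized so it can no longer write along that direction. Real implementations operate on specific transformer layers and collected activations; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy residual-stream width

# Hypothetical mean activations gathered on two prompt sets.
mean_refusal = rng.normal(size=d)   # prompts the model refuses
mean_benign = rng.normal(size=d)    # prompts it answers normally

# Estimated refusal direction: normalized difference of means.
r = mean_refusal - mean_benign
r /= np.linalg.norm(r)

# A weight matrix that writes into the residual stream.
W = rng.normal(size=(d, d))

# Abliteration step: remove the component of W along r,
# so the layer can no longer write in the refusal direction.
W_abliterated = W - np.outer(r, r) @ W

# After the edit, W has no output component along r.
print(np.allclose(r @ W_abliterated, 0.0))  # prints True
```

Because the edit is a direct projection applied to the weights, no gradient-based fine-tuning is required; the rest of the model's behavior is largely preserved.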