A community-quantized version of Google's Gemma 4 26B multimodal model, packaged in GGUF format by publisher 'nohurry' for local inference. It accepts both text and image inputs, enabling visual reasoning tasks in addition to standard language tasks. As a quantized derivative, it trades some numerical precision for a smaller memory footprint and easier deployment on consumer hardware.