A compact multimodal workhorse that punches above its weight class. Gemma 4's 31B parameters, quantized to 8-bit, make it memory-efficient without sacrificing too much capability: it handles both text and image inputs with reasonable fluency. Expect solid general reasoning and vision understanding, though it won't match larger unquantized models on complex tasks.
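
A back-of-the-envelope sketch of why the 8-bit quantization matters at this scale (weights-only estimate; the KV cache, activations, and runtime overhead add more on top):

```python
# Weights-only memory estimate for a 31B-parameter model at several
# precisions. Bytes per parameter = bits / 8; result in GiB.
PARAMS = 31e9  # parameter count stated for the model above

def weight_memory_gib(params: float, bits: int) -> float:
    return params * bits / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gib(PARAMS, bits):.1f} GiB")
# 16-bit: ~57.7 GiB
# 8-bit: ~28.9 GiB
# 4-bit: ~14.4 GiB
```

At 8-bit the weights fit in roughly half the memory of a 16-bit checkpoint, which is what brings a 31B model within reach of a single high-memory GPU or workstation.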