A compact, open-weight model from Google's Gemma 4 family, quantized to 6-bit precision and packaged by lmstudio-community for local MLX inference. The reduced bit-width keeps the memory footprint small at the cost of some precision relative to the full-weight variants. It behaves as a general text-in, text-out model and is suited to running locally on Apple Silicon hardware via the MLX framework.
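For local inference, a model like this can be driven with the `mlx-lm` Python package (`pip install mlx-lm`). The sketch below is illustrative only: the repository id is a placeholder since the exact repo name is not given here, and the prompt is arbitrary.

```python
# Minimal sketch, assuming the mlx-lm package on an Apple Silicon Mac.
# The repo id below is a HYPOTHETICAL placeholder -- substitute the
# actual lmstudio-community MLX repository for this model.
from mlx_lm import load, generate

# Download (or load from the local cache) the quantized weights and tokenizer.
model, tokenizer = load("lmstudio-community/<model-repo>")

# Use the tokenizer's chat template if one is provided, so the prompt
# matches the format the model was tuned on.
prompt = "Summarize the trade-offs of 6-bit quantization in two sentences."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Text in, text out: generation runs on the local GPU via MLX.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

Because the weights are 6-bit quantized, peak memory use stays well below that of the bf16 variant, which is what makes on-device inference practical on consumer Apple Silicon machines.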