A quantized variant of Qwen's 3.6 27B model, compressed by publisher rdtand to 5.5-bit precision for inference with vLLM. Quantization shrinks the weight footprint to roughly a third of a 16-bit baseline, making the model practical on consumer and mid-range hardware, at the cost of potential minor quality loss from compression. The model accepts both text and image inputs and produces text output.
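The memory savings can be sketched with a back-of-envelope estimate. This is illustrative only, assuming ~27B parameters and a bf16/fp16 baseline; it ignores KV cache, activations, and quantization overhead such as scales and zero-points:

```python
def weight_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

PARAMS = 27e9  # assumed parameter count

full = weight_gb(PARAMS, 16)     # 16-bit (bf16/fp16) baseline
quant = weight_gb(PARAMS, 5.5)   # average 5.5 bits per weight

print(f"16-bit:  {full:.1f} GB")   # -> 54.0 GB
print(f"5.5-bit: {quant:.1f} GB")  # -> 18.6 GB
```

At roughly 18.6 GB for weights alone, the quantized model fits on a single 24 GB consumer GPU, whereas the 16-bit weights would require multi-GPU or server-class hardware.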