A community-published quantized variant of Qwen3's 32B model, reduced to a 27B-scale footprint by storing weights in NVFP4 precision. It trades a small amount of numerical accuracy for a much smaller memory footprint, making the weights practical to run on consumer hardware. Behavior and reasoning capability track the base Qwen3 architecture, with the usual caveat that quantization can alter edge-case outputs.
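To make the precision trade-off concrete, the sketch below quantizes a block of weights to the FP4 E2M1 value grid with a single per-block scale, in the spirit of NVFP4's block-scaled layout. This is a simplified illustration, not the exact NVFP4 specification: real NVFP4 uses fixed 16-element blocks with FP8 scale factors, and the block size and scale handling here are assumptions for clarity.

```python
import numpy as np

# Non-negative magnitudes representable in FP4 E2M1 (sign is a separate bit).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])


def quantize_block(block: np.ndarray) -> tuple[np.ndarray, float]:
    """Round a block of floats to the nearest E2M1 value after scaling.

    A per-block scale maps the largest magnitude onto the grid maximum
    (6.0), loosely mimicking NVFP4's block scaling (simplified here).
    """
    scale = np.max(np.abs(block)) / E2M1_GRID[-1]
    if scale == 0.0:
        return np.zeros_like(block), 1.0
    scaled = block / scale
    # Nearest-neighbour rounding of each magnitude onto the E2M1 grid.
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]), axis=1)
    return np.sign(scaled) * E2M1_GRID[idx], scale


def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate full-precision values from FP4 codes."""
    return q * scale


# Illustrative weight block (made-up values, not real Qwen3 weights).
weights = np.array([0.02, -0.31, 0.75, -1.2, 0.0, 0.5, 2.4, -0.9])
q, s = quantize_block(weights)
recon = dequantize_block(q, s)
# Worst-case error stays within half a grid step times the block scale.
err = np.max(np.abs(weights - recon))
```

The interesting property is that the error is relative to the block's dynamic range: outliers set the scale, so blocks with a few large weights lose more precision on the small ones, which is exactly the edge-case degradation quantized checkpoints can exhibit.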