A large mixture-of-experts model that punches above its weight: of its 122B total parameters, only 10B are activated per forward pass. The int4 AutoRound quantization keeps the memory footprint manageable while preserving reasoning quality. It handles both text and image inputs, making it a versatile open-weight option for multimodal tasks.
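
The parameter and quantization figures above imply rough memory numbers worth spelling out. A minimal back-of-the-envelope sketch (illustrative arithmetic only; real deployments also need room for activations, the KV cache, and quantization scales/zero-points, which are ignored here):

```python
# Approximate weight-storage arithmetic for a 122B-parameter MoE
# checkpoint, using the figures quoted in the description above.
TOTAL_PARAMS = 122e9   # total parameters
ACTIVE_PARAMS = 10e9   # parameters activated per forward pass

def weight_gb(params: float, bits: int) -> float:
    """Approximate weight storage in gigabytes (1e9 bytes) at the given bit width."""
    return params * bits / 8 / 1e9

fp16_gb = weight_gb(TOTAL_PARAMS, 16)  # ~244 GB unquantized
int4_gb = weight_gb(TOTAL_PARAMS, 4)   # ~61 GB at int4

print(f"fp16 weights: ~{fp16_gb:.0f} GB")
print(f"int4 weights: ~{int4_gb:.0f} GB")
print(f"active per forward pass: {ACTIVE_PARAMS / TOTAL_PARAMS:.0%} of total")
```

The two levers compound: sparse expert routing cuts per-token compute to roughly the cost of a 10B dense model, while int4 quantization cuts the weight footprint to about a quarter of fp16.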