Gemma 4 E4B is a compact, open-weight model that punches above its parameter count through a mixture-of-experts architecture, activating only a fraction of its weights per inference. It handles text tasks efficiently and runs on modest hardware, making it practical for local deployment. The trade-off is that it's text-only and may lack the depth of larger dense models on complex reasoning tasks.