A lightweight embedding specialist built for speed and efficiency. This nano-scale model punches above its weight for retrieval tasks, trading raw capacity for a lean footprint that fits comfortably in resource-constrained environments. It handles up to 8K tokens of context, making it practical for chunked document retrieval pipelines without heavy infrastructure.
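To illustrate the kind of chunked retrieval pipeline described above, here is a minimal sketch. The model ID (`example-org/nano-embed`) and the sentence-transformers loading path are assumptions, not part of this model's documentation; substitute whatever identifier and loader the model actually ships with.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical model ID -- replace with the real one for this model.
model = SentenceTransformer("example-org/nano-embed")

def chunk_text(text: str, max_words: int = 1000) -> list[str]:
    """Split a long document into word-bounded chunks that stay
    comfortably under the model's 8K-token context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """Embed the query and all chunks, then rank chunks by cosine similarity."""
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ query_vec  # dot product of unit vectors = cosine similarity
    top = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), chunks[i]) for i in top]

# Example usage with an arbitrary long document.
document = open("handbook.txt").read()
chunks = chunk_text(document)
for score, chunk in retrieve("What is the vacation policy?", chunks):
    print(f"{score:.3f}  {chunk[:80]}...")
```

Chunk boundaries here are word counts chosen as a rough proxy for tokens; a production pipeline would typically chunk by the model's own tokenizer to make full use of the 8K window.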