A mid-sized instruction-tuned model with a generous 131K-token context window, large enough to handle long documents and extended multi-turn conversations comfortably. Built on an open-weight foundation and licensed under Apache 2.0, it trades raw scale for accessibility and ease of deployment. Its standard PyTorch checkpoint format makes it straightforward to integrate into common ML pipelines.
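To get an intuitive sense of what a 131K-token window holds, a quick back-of-the-envelope calculation helps. The figures below rest on rule-of-thumb assumptions (roughly 4 characters per token for English text, and about 5 characters per word); actual capacity varies with the tokenizer and the language of the input.

```python
# Rough estimate of how much English text fits in a 131K-token context.
# Assumptions (heuristics, not exact): ~4 chars/token, ~5 chars/word.
CONTEXT_TOKENS = 131_072   # 2**17, i.e. "131K"
CHARS_PER_TOKEN = 4        # common rule of thumb for English text
CHARS_PER_WORD = 5         # average word length including the space

chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
words = chars // CHARS_PER_WORD
print(f"~{chars:,} characters, roughly {words:,} words")
```

By this estimate the window spans on the order of half a million characters, or a book-length ~100K words, which is why such models can take whole reports or long chat histories in a single pass.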