LLM choice matters for generating ontology requirements: different models have distinct strengths depending on the domain, so practitioners should test multiple models rather than assuming one works universally.
This paper evaluates how different large language models generate Competency Questions, the natural language questions that express an ontology's requirements (for example, "Which wines pair with a given dish?"). The researchers tested open and closed models across multiple domains, measuring readability, relevance, and structural complexity to understand which kinds of questions each model produces best.
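The readability dimension mentioned above can be quantified with standard text metrics. A minimal self-contained sketch using the Flesch Reading Ease formula is below; note that the paper's actual metrics are not specified here, and the naive syllable counter and the sample Competency Question are illustrative assumptions:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups;
    # every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    # Higher scores mean easier-to-read text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical Competency Question, scored for readability.
cq = "Which properties describe the provenance of a dataset?"
score = flesch_reading_ease(cq)
```

A real evaluation would run such a metric over every generated question per model and domain, then compare score distributions; relevance and structural complexity would need separate measures.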