Towards an Ontology of Normative Role Design for Multi-Agent LLM Systems
Large Language Models (LLMs) are increasingly deployed as autonomous agents in Multi-Agent Systems (MAS). A key mechanism for shaping agent behavior is the assignment of roles that carry normative expectations, such as "you are a fair negotiator" or "consider what happens if everyone acts as you do." Such role-based instructions strongly influence both individual agent behavior and collective dynamics. However, the field is still emerging, and there is no systematic approach to designing and operationalizing such roles for LLM agents. We propose a prototype ontology that maps normative roles to prompting methods, ethical framings, and outcomes as they unfold in multi-agent simulations. Preliminary results point to recurring role–outcome patterns, domain-dependent affordances across different role types, and design contradictions in which agent capabilities undermine intended normative specifications. The paper contributes a conceptual frame for normative role design in LLM agents, supporting more systematic comparison, evaluation, and governance of multi-agent systems.
