
Some Team/Agent like SocietyOfMindAgent is trying to use multiple/separated system messages, but the current system does not allow that. #6290

@SongChiYoung

Description


What happened?

Describe the bug
Some Teams/Agents, like SocietyOfMindAgent or SelectorGroupChat, try to use multiple or separated system_messages, but the current system does not support that. As a result, non-OpenAI model backends may break or crash depending on how they handle system_messages.

Currently:

  • SelectorGroupChat manually downgrades the system_message to a user message unless the model is OpenAI-compatible (a simplified sketch of this follows below).
  • Other agents (such as SocietyOfMindAgent) have no such fallback, and will fail silently or raise exceptions depending on the model vendor.
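
For illustration, a minimal sketch of the kind of downgrade SelectorGroupChat performs (this is a paraphrase, not the actual implementation; the helper name and the family check are assumptions):

    from autogen_core.models import ChatCompletionClient, LLMMessage, SystemMessage, UserMessage

    def downgrade_system_message(message: SystemMessage, client: ChatCompletionClient) -> LLMMessage:
        """Hypothetical helper: keep the SystemMessage only when the backend is known
        to accept it; otherwise re-send its content as a UserMessage."""
        if client.model_info["family"].startswith("gpt"):  # simplified stand-in for a real compatibility check
            return message
        return UserMessage(content=message.content, source="system")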

This issue is especially problematic for providers like Anthropic, whose models support multiple/separated system_messages only when invoked via OpenAI-compatible APIs, not via their native SDK.

This makes some teams or agents effectively OpenAI-only, which is not the intended design.

To Reproduce
Use SocietyOfMindAgent or any GroupChat-style agent that passes multiple/separated system_messages to sub-agents, and test against various model backends.

Example:

    import asyncio

    from autogen_agentchat.agents import AssistantAgent, SocietyOfMindAgent
    from autogen_agentchat.teams import RoundRobinGroupChat
    from autogen_agentchat.ui import Console
    from autogen_ext.models.anthropic import AnthropicChatCompletionClient  # native SDK, not the OpenAI-compatible API

    client = AnthropicChatCompletionClient(model="claude-3-5-haiku-20241022")

    # Each inner agent carries its own system_message.
    agent1 = AssistantAgent(name="agent1", system_message="You are a writer.", model_client=client)
    agent2 = AssistantAgent(name="agent2", system_message="You are a critic.", model_client=client)

    group = RoundRobinGroupChat([agent1, agent2], max_turns=1)

    # SocietyOfMindAgent wraps the inner team and sends its own system messages to the model client.
    society = SocietyOfMindAgent(name="society", team=group, model_client=client)

    async def team_run():
        await Console(
            society.run_stream(
                task="Write a short story with critique."
            )
        )

    asyncio.run(team_run())

Depending on the SDK, this may throw an error or silently misbehave (e.g., ignore one of the system_messages).

Expected behavior
Agents and teams that wish to use multiple/separated system_messages should:

  • Check model compatibility before applying that structure
  • Fall back gracefully (e.g., convert to a user message) when unsupported
  • Not crash or behave incorrectly on non-OpenAI models

Additional context
We propose the following as a longer-term structural fix:

  1. Survey required
    a. Which models (by SDK / backend) support multiple/separated system_messages
    b. Which Team/Agent types want to make use of this feature (e.g., SocietyOfMindAgent, SelectorGroupChat, custom GroupChats)

  2. Introduce capability flags
    A new field (e.g., supports_multi_system_message) should be added to model_info or similar, and exposed through the model registry.

  3. Propagate model capabilities to agents
    Agent constructors or transformers should receive this flag and adjust behavior accordingly.

  4. Conditional behavior per agent
    Each agent or team should gracefully fall back or reformat messages based on the capability flag (see the sketch after this list).
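
A minimal sketch of how steps 2 and 4 could fit together, assuming a supports_multi_system_message capability flag in model_info (the flag, the helper name, and the merge strategy below are illustrative; none of this is an existing API):

    from autogen_core.models import ChatCompletionClient, LLMMessage, SystemMessage

    def prepare_system_messages(
        messages: list[LLMMessage], client: ChatCompletionClient
    ) -> list[LLMMessage]:
        """Hypothetical transformer: if the backend cannot handle multiple/separated
        system messages, merge them into a single leading SystemMessage."""
        # The capability flag is proposed, not real; .get() keeps this tolerant of today's model_info.
        if client.model_info.get("supports_multi_system_message", False):
            return messages  # backend accepts the structure as-is

        system_parts = [m.content for m in messages if isinstance(m, SystemMessage)]
        other_messages = [m for m in messages if not isinstance(m, SystemMessage)]
        if not system_parts:
            return other_messages
        merged = SystemMessage(content="\n\n".join(system_parts))
        return [merged, *other_messages]

Whether to merge into one SystemMessage or to downgrade to user messages (as SelectorGroupChat does today) would be decided per agent in step 4.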

These changes should be implemented across two or more separate PRs, and steps 1a and 1b will likely require community input; it is probably too much to handle alone.

However, this issue is critical because some agents/teams silently become OpenAI-only, and we need to address this explicitly in the design.


TODO...

Checklist: does each Agent/Team need multiple/separated system messages? (O = yes, X = no, ? = unknown)

Agents

  • assistant_agent : X
  • base_chat_agent : X
  • code_executor_agent : X
  • society_of_mind_agent : O
  • user_proxy_agent : X

Teams

  • magentic_one : ?
  • base_group_chat : X
  • round_robin_group_chat : X
  • selector_group_chat : X
  • swarm_group_chat : ?

Model SDKs (supports multiple/separated system messages?)

  • anthropic : X
  • azure : ?
  • cache : ?
  • llama_cpp : ?
  • ollama : ?
  • replay : ?
  • semantic_kernel : ?
  • openai : see the OpenAI-compatible list below

OpenAI-compatible API models (supports multiple/separated system messages?)

  • openai : O
  • anthropic : O
  • gemini : X

Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python dev (main branch)

Other library version.

No response

Model used

No response

Model provider

None

Other model provider

No response

Python version

None

.NET version

None

Operating system

None
