
[Bug]: Exception "TypeError: sequence item 0: expected str instance, dict found" is thrown due to a different return format when running Gemini #2860


Description

@hemanoid

Describe the bug

The function __post_carryover_processing(chat_info: Dict[str, Any]) in chat.py (agentchat folder) throws the above exception when running Google Gemini.

The cause of the problem is a difference in the return format when using models other than OpenAI. In this case, Gemini returned {"Content": "{'Reviewer': 'SEO Reviewer', 'Review': ' .......'}", 'role': 'assistant', 'function_call': None, 'tool_calls': None}, whereas OpenAI returned {'Reviewer': 'SEO Reviewer', 'Review': ' .......'}.
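For context, the error message itself points at str.join() receiving a dict instead of a string. Below is a minimal sketch of one possible defensive fix, assuming the TypeError originates from joining carryover entries that arrive as dicts (Gemini) rather than plain strings (OpenAI). normalize_carryover is a hypothetical helper for illustration only, not part of autogen's chat.py.

import json
from typing import Any, List

def normalize_carryover(carryover: Any) -> List[str]:
    # Hypothetical helper (not autogen code): coerce every carryover entry to a
    # string so that str.join() cannot raise "expected str instance, dict found".
    items = carryover if isinstance(carryover, list) else [carryover]
    normalized = []
    for item in items:
        if isinstance(item, str):
            normalized.append(item)
        elif isinstance(item, dict):
            # Gemini-style payload: prefer the "content" field, otherwise dump the dict.
            normalized.append(str(item.get("content") or json.dumps(item)))
        else:
            normalized.append(str(item))
    return normalized

# The reported failure in isolation: joining a list that contains a dict fails.
# "\n".join([{"content": "review", "role": "assistant"}])  ->  TypeError
# After normalization the join succeeds:
print("\n".join(normalize_carryover([{"content": "review text", "role": "assistant"}])))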

Steps to reproduce

#Examples from DeepLearning.ai - almost a direct copy, autogen 0.2.25, python 3.12.2

from myutils import get_openai_api_key, get_gemini_api_key
from autogen import ConversableAgent
import autogen
import pprint

GEMINI_API_KEY = get_gemini_api_key()
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gemini-pro", "api_key": GEMINI_API_KEY, "api_type": "google"}
#llm_config ={"model": "gpt-3.5-turbo", "api_key": OPENAI_API_KEY}

task = '''
Write an engaging blog post about why a locally deployed LLM
is important to AI's future. Make sure the blog post is
within 100 words.
'''

writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You are a writer. You write engaging and intelligent "
    "blog posts (with title) on given topics. You must polish your "
    "writing based on the feedback you receive and give a refined "
    "version. Only return your final work without additional comments.",
    llm_config=llm_config
)

reply = writer.generate_reply(messages=[{"content": task, "role": "user"}])
print(reply)

critic = autogen.AssistantAgent(
    name="Critic",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config=llm_config,
    system_message="You are a critic. You review the work of "
    "the writer and provide constructive "
    "feedback to help improve the quality of the content."
)

""" res = critic.initiate_chat(
recipient=writer,
message=task,
max_turns=2,
summary_method="last_msg"
) """

SEO_reviewer = autogen.AssistantAgent(
    name="SEO Reviewer",
    llm_config=llm_config,
    system_message="You are an SEO reviewer, known for "
    "your ability to optimize content for search engines, "
    "ensuring that it ranks well and attracts organic traffic. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role."
)

legal_reviewer = autogen.AssistantAgent(
    name="Legal Reviewer",
    llm_config=llm_config,
    system_message="You are a legal reviewer, known for "
    "your ability to ensure that content is legally compliant "
    "and free from any potential legal issues. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role."
)

ethics_reviewer = autogen.AssistantAgent(
    name="Ethics Reviewer",
    llm_config=llm_config,
    system_message="You are an ethics reviewer, known for "
    "your ability to ensure that content is ethically sound "
    "and free from any potential ethical issues. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role."
)

meta_reviewer = autogen.AssistantAgent(
    name="Meta Reviewer",
    llm_config=llm_config,
    system_message="You are a meta reviewer. You aggregate and review "
    "the work of other reviewers and give a final suggestion on the content."
)

def reflection_message(recipient, messages, sender, config):
    return f'''Review the following content.
    \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}'''

review_chats = [
    {
        "recipient": SEO_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return the review as a JSON object only: "
            "{'Reviewer': '', 'Review': ''}. Here Reviewer should be your role",
        },
        "max_turns": 1
    },
    {
        "recipient": legal_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return the review as a JSON object only: "
            "{'Reviewer': '', 'Review': ''}.",
        },
        "max_turns": 1
    },
    {
        "recipient": ethics_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return the review as a JSON object only: "
            "{'reviewer': '', 'review': ''}",
        },
        "max_turns": 1
    },
    {
        "recipient": meta_reviewer,
        "message": "Aggregate feedback from all reviewers and give final suggestions on the writing, also suggesting to double the use of the verb in the writing.",
        "max_turns": 1
    }
]

critic.register_nested_chats(
    review_chats,
    trigger=writer
)

res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)

print(res.summary)

Model Used

Gemini-pro

Expected Behavior

No response

Screenshots and logs

No response

Additional Information

No response


Labels

0.2 (Issues which are related to the pre 0.4 codebase), needs-triage
