
[Bug] Temporal Workflows Require Model Name to be String - LitellmModel Not Supported #1134

@databill86

Description


What are you really trying to do?

I'm building a multi-tenant application using Temporal workflows with OpenAI agents where different agents need to use different proxy configurations (different API keys and base URLs). I want each agent to have its own model configuration rather than being forced to use a global OpenAI client configuration.

Describe the bug

Temporal workflows with OpenAI agents require the model name to be a string, but this prevents using custom model objects like LitellmModel with specific API keys and base URLs for individual agents. This limitation forces all agents to use the same global OpenAI client configuration, making it impossible to have different proxy configurations per agent.

When using Temporal workflows with OpenAI agents, the convert_agent function in temporalio/contrib/openai_agents/_openai_runner.py enforces that the model must be a string:

def _model_name(agent):
    # ...
    raise ValueError(
        "Temporal workflows require a model name to be a string in the agent."
    )

This prevents using custom model objects like LitellmModel that allow per-agent configuration of API keys and base URLs.

Expected Behavior:
Agents should be able to use custom model objects that implement the Model interface, allowing for:

  • Per-agent API key configuration
  • Per-agent base URL configuration (for proxy setups)
  • Flexible model provider selection per agent

Current Behavior:
Using a custom LitellmModel object in an agent results in:

ValueError: Temporal workflows require a model name to be a string in the agent.

Minimal Reproduction

1. Activity Definition (Tool → Activity conversion):

import logging

from temporalio import activity

logger = logging.getLogger(__name__)

@activity.defn
async def simple_tool_activity(message: str) -> str:
    """Simple tool activity that processes a message."""
    logger.info(f"🔧 Simple tool activity processing: {message}")
    response = f"Tool processed: '{message}' - Response from activity!"
    return response

# The activity is automatically converted to a tool when passed to Agent
tools = [simple_tool_activity]

2. Workflow with Agent:

from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel
from temporalio import workflow

@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> str:
        # Inside the workflow
        
        agent = Agent(
            name="Triage Agent",
            instructions="Your instructions here",
            model=LitellmModel(model="model_name", api_key="proxy_api_key", base_url="proxy_base_url"),
            # or this: model=OpenAIChatCompletionsModel(model="model_name", openai_client=AsyncOpenAI(api_key=key, base_url=base_url)),
            # model="model_name", # This works if the client is created with plugins
            tools=tools,  # Activities are automatically converted to tools
            handoffs=[],  # handoff agents, if any
        )
        
        # This is where the error occurs - Runner.run() goes through the Temporal
        # plugin's convert_agent(), which rejects the non-string model
        result = await Runner.run(agent, "Your message here")
        return result.final_output

3. Worker Setup:

import concurrent.futures

from temporalio.worker import Worker

async def main():
    # Create Temporal client with plugins (required for agents)
    client = await create_temporal_client(include_plugins=True)
    
    # Run the worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as activity_executor:
        worker = Worker(
            client,
            task_queue="demo-task-queue",
            workflows=[MyWorkflow],
            activities=[simple_tool_activity],  # Register the activity
            activity_executor=activity_executor,
        )
        await worker.run()

# This fails when the client is created without plugins
from temporalio.client import Client

client = await Client.connect(
    "localhost:7233",
    namespace="default",
    tls=False,
    plugins=[],  # Empty plugins causes the error
)

# Start the workflow
workflow_handle = await client.start_workflow(
    MyWorkflow.run,
    id="my-workflow-id",
    task_queue="demo-task-queue",
)
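
For reference, here is a minimal sketch of a create_temporal_client helper along the lines of the one used above (the connection parameters and timeout are illustrative, not my exact settings):

from datetime import timedelta

from temporalio.client import Client
from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin


async def create_temporal_client(include_plugins: bool = True) -> Client:
    """Connect to Temporal, optionally registering the OpenAI Agents plugin."""
    plugins = []
    if include_plugins:
        plugins = [
            OpenAIAgentsPlugin(
                model_params=ModelActivityParameters(
                    start_to_close_timeout=timedelta(seconds=30)
                )
            )
        ]
    return await Client.connect(
        "localhost:7233",
        namespace="default",
        tls=False,
        plugins=plugins,
    )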

Workaround that works:

from datetime import timedelta

from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin

plugins = [
    OpenAIAgentsPlugin(
        model_params=ModelActivityParameters(
            start_to_close_timeout=timedelta(seconds=30)
        ),
        model_provider=CustomLitellmProvider(
            base_url=PROXY_BASE_URL,
            api_key=PROXY_API_KEY,
        ),
    ),
]

client = await Client.connect(
    "localhost:7233",
    namespace="default",
    tls=False,
    plugins=plugins,  # With plugins it works
)

# In the workflow: use model="model_name" instead of LitellmModel
# The Runner.run() call succeeds because convert_agent() can handle string model names
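
For clarity, this is roughly what the workflow body looks like under the workaround: the agent only carries a string model name, which the plugin resolves to a LitellmModel via CustomLitellmProvider.get_model() when the model call runs as an activity. A simplified sketch (tools reduced to the single activity from step 1):

from agents import Agent, Runner
from temporalio import workflow


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self) -> str:
        agent = Agent(
            name="Triage Agent",
            instructions="Your instructions here",
            model="model_name",  # plain string; resolved by CustomLitellmProvider.get_model()
            tools=[simple_tool_activity],  # activity defined in step 1 above
        )
        result = await Runner.run(agent, "Your message here")
        return result.final_output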

This is the CustomLitellmProvider implementation:

from agents.extensions.models.litellm_model import LitellmModel
from agents.models.interface import Model, ModelProvider


class CustomLitellmProvider(ModelProvider):
    """
    A custom ModelProvider that uses LiteLLM with configurable base_url and api_key.
    """

    def __init__(self, base_url: str | None = None, api_key: str | None = None):
        """
        Initialize the custom Litellm provider.

        Args:
            base_url: The base URL for the API (e.g., proxy endpoint)
            api_key: The API key for authentication
        """
        self.base_url = base_url
        self.api_key = api_key

    @property
    def model_class(self) -> type[Model]:
        """Get the model class used by this provider."""
        return LitellmModel

    @property
    def provider_name(self) -> str:
        """Get the name of this provider."""
        return "CustomLitellmProvider"

    @property
    def is_proxy_configured(self) -> bool:
        """Check if proxy configuration is set."""
        return self.base_url is not None and self.api_key is not None

    def get_model(self, model_name: str) -> Model:
        """
        Get a LitellmModel instance with the configured base_url and api_key.

        Args:
            model_name: The name of the model to use

        Returns:
            A LitellmModel instance configured with the provider's settings
        """
        if not model_name:
            raise ValueError("Model name is required")
        return LitellmModel(
            model=model_name,
            base_url=self.base_url,
            api_key=self.api_key,
        )

    def __repr__(self) -> str:
        """Return a string representation of the provider."""
        proxy_status = "configured" if self.is_proxy_configured else "not configured"
        return (
            f"CustomLitellmProvider("
            f"base_url={self.base_url!r}, "
            f"api_key={'***' if self.api_key else None!r}, "
            f"proxy={proxy_status})"
        )

    def __str__(self) -> str:
        """Return a user-friendly string representation."""
        return (
            f"CustomLitellmProvider with "
            f"base_url={self.base_url or 'default'}, "
            f"proxy={'enabled' if self.is_proxy_configured else 'disabled'}"
        )
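
Outside of Temporal, the provider can be exercised directly (the endpoint, key, and model name below are placeholders):

provider = CustomLitellmProvider(
    base_url="https://proxy.example.com/v1",  # placeholder proxy endpoint
    api_key="proxy-api-key",                  # placeholder key
)
model = provider.get_model("gpt-4o-mini")  # returns a LitellmModel bound to the proxy
print(provider)  # prints the masked repr-style summary with proxy=configured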

Environment/Versions

  • OS and processor: Linux
  • Temporal Version: temporalio==1.18.0
  • OpenAI SDK: openai==1.109.0
  • OpenAI Agents: openai-agents==0.3.2
  • Python: 3.11
  • Are you using Docker or Kubernetes or building Temporal from source? Using Docker

Additional context

Use Case:
In a multi-tenant application with different proxy configurations per agent:

  • Agent A needs to use Proxy 1 with specific API key
  • Agent B needs to use Proxy 2 with different API key
  • Agent C needs to use direct OpenAI API

Currently, this is impossible because all agents must use the same global OpenAI client configuration.
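
A sketch of what I would like to be able to write inside the workflow (the proxy constants are placeholders; this is the desired behavior, not something that works today):

from agents import Agent
from agents.extensions.models.litellm_model import LitellmModel

agent_a = Agent(
    name="Agent A",
    instructions="...",
    model=LitellmModel(model="model_a", api_key=PROXY_1_API_KEY, base_url=PROXY_1_BASE_URL),
)
agent_b = Agent(
    name="Agent B",
    instructions="...",
    model=LitellmModel(model="model_b", api_key=PROXY_2_API_KEY, base_url=PROXY_2_BASE_URL),
)
agent_c = Agent(
    name="Agent C",
    instructions="...",
    model="gpt-4o",  # direct OpenAI API via the default provider
)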

Important Note:
One of the main reasons I don't want to use the global OpenAI client configuration is that I use logfire to trace the agents. When I used the plugins, logfire tracing did not work at all.

Tracing Conflict Issue

The plugin approach conflicts with observability/tracing setup. Here's the tracing configuration that doesn't work with plugins:

import logging
import os

import logfire
import nest_asyncio
from agents import set_trace_processors

logger = logging.getLogger(__name__)

def init_tracing():
    """Initialize tracing and observability."""
    # Set Langfuse env vars from settings (user can override via real env)
    os.environ.setdefault("LANGFUSE_PUBLIC_KEY", LANGFUSE_PUBLIC_KEY)
    os.environ.setdefault("LANGFUSE_SECRET_KEY", LANGFUSE_SECRET_KEY)
    os.environ.setdefault("LANGFUSE_HOST", LANGFUSE_HOST)

    set_trace_processors([])  # only disable OpenAI tracing

    # Instrument the OpenAI Agents SDK via Pydantic Logfire
    try:
        nest_asyncio.apply()
        logfire.configure(service_name="proxy", send_to_logfire=False)
        # This method automatically patches the OpenAI Agents SDK to send logs via OTLP to Langfuse.
        logfire.instrument_openai_agents()
    except Exception as exc:  # noqa: BLE001
        logger.error(f"Logfire instrumentation not available: {exc}")

# Worker setup with tracing
async def main():
    # Initialize tracing (conflicts with plugins)
    init_tracing()  # Sets up logfire → OTLP → Langfuse
    
    # Create Temporal client with plugins (required for agents)
    client = await create_temporal_client(include_plugins=True)
    
    # Run the worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as activity_executor:
        worker = Worker(
            client,
            task_queue="demo-task-queue",
            workflows=[MyWorkflow],
            activities=[simple_tool_activity],
            activity_executor=activity_executor,
        )
        await worker.run()

The Problem: When using plugins, the logfire tracing instrumentation doesn't work properly, making it impossible to trace agent execution in Langfuse. This forces a choice between:

  1. Using plugins (works with string models) but losing observability
  2. Using custom model objects (potentially better observability, but untested because of the string-model requirement).

If you've run into this same issue or have any insights on how to work around it, I'd really appreciate hearing from you.

Thanks 🙏
