
Conversation

@dsfaccini (Contributor) commented Nov 15, 2025

summary

hey guys, this is a draft PR to add a devcontainer config for AI coding agents (Claude Code, Cursor, et al.) and plain ol' flesh-and-bone contributors.

I included an install.sh that initializes the devcontainer either in standard mode (the default) or in full ML dev mode (PyTorch etc.).

standard mode takes a couple of minutes on my machine (M4, 24 GB); full ML dev mode takes more like 10 minutes.
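
to give an idea of what the two modes do, here's a simplified sketch of the dispatch (illustrative only; the real flag names and package sets live in .devcontainer/install.sh):

```bash
#!/usr/bin/env bash
# Sketch of the install.sh mode dispatch (illustrative, not a verbatim copy).
set -euo pipefail

MODE="${1:-standard}"   # "standard" (default) or "full"

# Base toolchain used by both modes.
uv sync                 # project dependencies
pre-commit install      # git hooks

if [ "$MODE" = "full" ]; then
  # Full ML dev mode pulls in the heavy optional extras (PyTorch etc.),
  # which is why it takes closer to 10 minutes.
  uv sync --all-extras
fi
```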

I recommend OrbStack instead of Docker Desktop (it seems to be easier on my system's resources), but only if you already know how to set it up.

lastly, I'm not an expert in devcontainers, so this was largely built by Claude itself, but I rebuilt it on my machine multiple times until it was smooth enough. I hope that's your experience too; otherwise let me know!

facts

  • Python 3.12 environment with uv, pre-commit, and deno (a quick smoke test is sketched after this list)
  • Commented-out placeholders for optional services: Ollama (local models), PostgreSQL with pgvector
  • Modern git configuration (environment variables, automatic credential forwarding; details under "more facts" below)
  • Platform compatibility: x86_64 via emulation on ARM64, which matters for the full ML setup
  • Docs with troubleshooting and setup instructions
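
once the container is up, a quick smoke test for the toolchain above (nothing project-specific here):

```bash
# Run inside the devcontainer terminal to confirm the core tools are on PATH.
python --version        # expect Python 3.12.x
uv --version
deno --version
pre-commit --version
```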

files

  • .devcontainer/Dockerfile - container image with all dev tools
  • .devcontainer/devcontainer.json - VS Code configuration and settings
  • .devcontainer/docker-compose.yml - service orchestration
  • .devcontainer/.env.example - API keys template
  • .devcontainer/mcp-proxy-config.json - MCP server integration
  • .devcontainer/README.md and .devcontainer/AGENTS.md - docs and agent guidelines

more facts

  • Uses modern git configuration (environment variables, auto credential forwarding; see the sketch after this list)
  • Includes optional services for examples (Ollama, PostgreSQL, pgvector)
  • Platform-specific configuration for ARM64/x86_64 compatibility
  • Latest stable versions (Node 24, PostgreSQL 18, pgvector pg18)
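
to illustrate what "modern git configuration" means here (a sketch; the exact commands baked into the image may differ):

```bash
# Git identity via standard environment variables instead of a baked-in .gitconfig.
export GIT_AUTHOR_NAME="Your Name"
export GIT_AUTHOR_EMAIL="you@example.com"
export GIT_COMMITTER_NAME="$GIT_AUTHOR_NAME"
export GIT_COMMITTER_EMAIL="$GIT_AUTHOR_EMAIL"

# If the bind-mounted repo trips git's "dubious ownership" check, mark it as safe.
git config --global --add safe.directory /workspace

# Credential forwarding itself is handled by the Dev Containers extension,
# which reuses the host's credential helper / SSH agent, so no tokens end up
# in the image.
```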

testing

  • Builds successfully on macOS Apple Silicon (via x86_64 emulation)
  • All dependencies install correctly via `make install` (the commands are sketched after this list)
  • Pre-commit hooks setup works
  • Git operations work with modern configuration
  • Configurations for optional services (Ollama, PostgreSQL) are available in the compose, but I haven't tested them
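
roughly how to reproduce the checklist above from a terminal inside the container:

```bash
make install                  # install all dev dependencies
pre-commit run --all-files    # exercise the git hooks once
make test                     # run the test suite
```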

references

This implementation follows devcontainer best practices:

- Dockerfile with Python 3.12, uv, pre-commit, deno
- docker-compose with optional services (Ollama, PostgreSQL, pgvector)
- docs with setup instructions
- platform compatibility (x86_64) to support all dependencies (tested on an M4 chip)
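
and in case the emulation doesn't kick in automatically on your machine, it can also be forced from the host shell (hedged sketch; the compose file may already pin the platform):

```bash
# Force x86_64 images on an ARM64 host (Apple Silicon) via emulation.
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker compose -f .devcontainer/docker-compose.yml build
```
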
@lars20070 (Contributor) commented Nov 18, 2025

@dsfaccini I have been working on a similar dev container here. It is based on the official Python 3 dev container template. I try to keep the container fairly unopinionated and mount .vscode, .cursor, .claude, etc. externally. These folders contain IDE settings, MCP servers, and coding agent rules. But this setup might differ significantly between developers; see my setup, for example.

I would also suggest using Ollama from the host system. Inside the container it is much slower.

@dsfaccini (Contributor, Author) commented

hey @lars20070! thank you for your input. this setup isn't mounting .vscode, but it's a good observation! I believe the makefile extension in the devcontainer.json is creating it. for the record, the pydantic-ai repo itself has a .claude folder but not a .cursor one.

about ollama, I agree with you that running it on the host is the better alternative. this setup doesn't remove that possibility; the commented-out placeholder is there to let users spin up an ollama container quickly if they don't already have one running.
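
for completeness, this is roughly how the container reaches an Ollama instance on the host (assuming Docker Desktop or OrbStack, both of which resolve host.docker.internal):

```bash
# From inside the devcontainer, talk to Ollama running on the host
# (11434 is Ollama's default port).
curl http://host.docker.internal:11434/api/tags    # list the host's models

# Point the ollama CLI (or anything using it) at the host instance.
export OLLAMA_HOST=http://host.docker.internal:11434
```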

I ran `make test` in the devcontainer after you mentioned it on slack and did see some errors related to logfire/otel. I'll come back to those this week, thank you for making me aware of that!

make test logs
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

msg = 'Failed to introspect calling code. Please report this issue to Logfire. Falling back to normal message formatting whi.... This happens when running in an interactive shell, using exec(), or running .pyc files without the source .py files.'
stacklevel = 6

def warn_inspect_arguments(msg: str, stacklevel: int):
    msg = (
        'Failed to introspect calling code. '
        'Please report this issue to Logfire. '
        'Falling back to normal message formatting '
        'which may result in loss of information if using an f-string. '
        'Set inspect_arguments=False in logfire.configure() to suppress this warning. '
        'The problem was:\n'
    ) + msg
  warnings.warn(msg, InspectArgumentsFailedWarning, stacklevel=stacklevel)

E logfire._internal.formatter.InspectArgumentsFailedWarning: Failed to introspect calling code. Please report this issue to Logfire. Falling back to normal message formatting which may result in loss of information if using an f-string. Set inspect_arguments=False in logfire.configure() to suppress this warning. The problem was:
E No source code available. This happens when running in an interactive shell, using exec(), or running .pyc files without the source .py files.

.venv/lib/python3.12/site-packages/logfire/_internal/formatter.py:433: InspectArgumentsFailedWarning
----------------------------------------------------------------- Captured log setup -----------------------------------------------------------------
WARNING opentelemetry.sdk.metrics._internal:init.py:511 shutdown can only be called once
================================================================ slowest 20 durations ================================================================
17.64s setup tests/models/test_openai_responses.py::test_openai_responses_image_generation_stream
12.02s setup tests/models/test_openai_responses.py::test_openai_responses_image_and_text_output
11.95s call tests/providers/test_provider_names.py::test_infer_provider[google-vertex-GoogleProvider-Your default credentials were not found]
11.42s call tests/test_temporal.py::test_multiple_agents@temporal
11.40s call tests/test_temporal.py::test_complex_agent_run_in_workflow@temporal
11.34s setup tests/models/test_google.py::test_google_image_generation
8.46s setup tests/models/test_openai_responses.py::test_openai_responses_image_generation
7.26s setup tests/models/test_openai_responses.py::test_openai_responses_multiple_images
6.85s call tests/test_temporal.py::test_complex_agent_run@temporal
5.78s setup tests/models/test_openai_responses.py::test_openai_responses_image_generation_tool_without_image_output
5.54s setup tests/models/test_google.py::test_google_image_generation_stream
5.12s setup tests/test_temporal.py::test_simple_agent_run_in_workflow@temporal
4.70s call tests/test_tenacity.py::TestConnectionPool::test_connection_pool
4.56s setup tests/test_mcp.py::test_tool_returning_multiple_items
4.51s setup tests/test_mcp.py::test_tool_returning_image_resource_link
4.50s setup tests/test_mcp.py::test_tool_returning_image
4.48s setup tests/models/test_anthropic.py::test_image_as_binary_content_tool_response
4.43s setup tests/models/test_openai_responses.py::test_openai_responses_image_or_text_output
4.26s setup tests/models/test_openai_responses.py::test_image_as_binary_content_tool_response
4.21s setup tests/models/test_openai.py::test_image_as_binary_content_tool_response
Summary of Failures
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ File ┃ Function ┃ Function Line ┃ Error Line ┃ Error ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ ../Users/david/projects/forks/pydan… │ test_genai_attribute_collection │ 760 │ 777 │ AssertionError │
│ ../Users/david/projects/forks/pydan… │ test_span_query_evaluator │ 396 │ 400 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_context_subtree_concurrent │ 28 │ 51 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_evaluator │ 544 │ 551 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_tree_descendants_methods │ 396 │ 400 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_logical_combinations │ 589 │ 593 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_negation │ 557 │ 562 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_tree_ancestors_methods │ 337 │ 341 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_matches_function_directly │ 754 │ 759 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_log_levels_and_exceptions │ 483 │ 487 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_complex_hierarchica… │ 713 │ 717 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_child_count │ 789 │ 794 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_descendant_conditio… │ 674 │ 678 │ InspectArgumentsFailedWarning │
│ ../Users/david/projects/forks/pydan… │ test_span_query_timing_conditions │ 628 │ 633 │ InspectArgumentsFailedWarning │
└────────────────────────────────────────┴────────────────────────────────────────┴─────────────────┴──────────────┴─────────────────────────────────┘
Results (112.24s):
14 failed
2105 passed
726 skipped
12 errors
make: *** [Makefile:56: test] Error 1
(pydantic-ai) vscode ➜ /workspace (feat/add-devcontainer-setup) $
