
Conversation


@strawgate strawgate commented Nov 16, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Stores no longer close resources supplied by callers; externally-provided clients/caches are preserved so callers manage cleanup.
  • Refactor

    • Centralized context-manager lifecycle for stores with exit-stack based enter/exit/close semantics to improve cleanup reliability.
    • Added an API-stability flag that emits a warning when unstable behavior is used.
  • Tests

    • RocksDB, DuckDB and disk tests now use shared persistent path fixtures and return pre-wiped stores for stable setup.
  • Chores

    • Sync build mappings updated for async↔sync lifecycle helper names.

github-actions bot and others added 15 commits November 10, 2025 16:59
Stores now only manage the lifecycle of clients they create internally.
When a user provides a client to a store constructor, the store no longer
enters/exits the client's context manager or closes it. This enables:

- Clear ownership semantics ("you create it, you manage it")
- Prevention of double-context entry issues
- Client reuse across multiple stores
- Better alignment with Python resource management patterns

Changes:
- Add _client_provided_by_user attribute to BaseContextManagerStore
- Update __aexit__ and close() to conditionally call _close()
- Update all affected stores (Redis, MongoDB, DynamoDB, Valkey, Memcached,
  Disk, DuckDB, RocksDB, Elasticsearch) to track client ownership
- Remove duplicate __aexit__ override in RocksDB store
- Update all docstrings to clarify lifecycle management expectations
- Generate corresponding sync package changes

This is a breaking change for users who relied on stores closing
user-provided clients, but improves correctness and enables important
use cases like connection pooling and client reuse.
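A minimal sketch of the ownership rule described above (the class and client names here are illustrative stand-ins, not the library's real API):

```python
class FakeClient:
    """Stand-in for a real client (Redis, MongoDB, etc.)."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Store:
    def __init__(self, client=None):
        # If the caller supplied the client, they own its lifecycle.
        self._client_provided_by_user = client is not None
        self._client = client if client is not None else FakeClient()

    def close(self):
        # Only close clients the store created internally.
        if not self._client_provided_by_user:
            self._client.close()
```

Under this rule a shared client can safely back multiple stores: closing each store leaves the caller-supplied client open for reuse.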

Closes #232

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: William Easton <[email protected]>
- Initialize _client_provided_by_user = False in BaseContextManagerStore.__init__()
- Remove redundant = False assignments from all store implementations
- Add ownership checks in _close() methods to prevent closing user-provided clients
- Add ownership checks in __del__() methods where applicable (RocksDB, Disk)
- Fixes test failures where stores were closing user-provided clients

This simplifies the ownership pattern: stores only set the flag to True
when users provide a client, relying on the base class default of False
for internally-created clients.

Co-authored-by: William Easton <[email protected]>
…arguments

Changes:
- Updated BaseStore to accept stable_api parameter (default: False)
- Updated BaseContextManagerStore to accept client_provided_by_user parameter (default: False)
- Removed magic attribute setting in favor of explicit constructor arguments
- Updated all store implementations to pass these flags to base class constructors
- Removed redundant getattr calls in base class methods since attributes are always initialized
- Fixes initialization order bug where base class __init__ was resetting flags

This eliminates the bug where stores would set _client_provided_by_user = True,
then call super().__init__() which would reset it back to False, causing
user-provided clients to be incorrectly closed during cleanup.

Stores updated:
- Redis, DynamoDB, MongoDB, Elasticsearch, Memcached
- Valkey, RocksDB, Disk, DuckDB, Memory, MultiDiskStore
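The initialization-order bug can be reduced to a small sketch (class names here are stand-ins, not the library's real classes):

```python
class BuggyBase:
    def __init__(self):
        # Unconditionally resets the flag, clobbering whatever the
        # subclass set before calling super().__init__().
        self._client_provided_by_user = False


class BuggyStore(BuggyBase):
    def __init__(self, client=None):
        self._client_provided_by_user = client is not None
        super().__init__()  # runs after, so the flag is reset to False


class FixedBase:
    def __init__(self, *, client_provided_by_user: bool = False):
        self._client_provided_by_user = client_provided_by_user


class FixedStore(FixedBase):
    def __init__(self, client=None):
        # Pass ownership explicitly through the constructor; no ordering hazard.
        super().__init__(client_provided_by_user=client is not None)
```

Making the flag an explicit constructor argument removes the dependency on where in `__init__` the subclass happens to call `super().__init__()`.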

Co-authored-by: William Easton <[email protected]>
… fix test

- Remove redundant _client_provided_by_user checks from all store _close() methods
  since base class already guards these calls
- Add defensive getattr in __del__ methods to prevent AttributeError during cleanup
- Fix RocksDB test to properly close user-provided database before temp cleanup
- This addresses CodeRabbit feedback and fixes Windows test failures

Stores updated:
- RocksDB, Disk, DuckDB (both async and sync)
- Redis, Valkey, MongoDB, Elasticsearch, Memcached, DynamoDB (async only)
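The defensive `getattr` in `__del__` guards against `__init__` raising before the attribute exists. A hedged sketch (illustrative names):

```python
class Resource:
    """Stand-in for a handle that must be released."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Store:
    def __init__(self, path):
        if not isinstance(path, str):
            # Raises before _client is ever assigned.
            raise TypeError("path must be a string")
        self._client = Resource()

    def __del__(self):
        # getattr instead of direct attribute access: if __init__ raised
        # early, _client was never set and plain access would raise
        # AttributeError during garbage collection.
        client = getattr(self, "_client", None)
        if client is not None:
            client.close()
```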

Co-authored-by: William Easton <[email protected]>
- MongoDB: Remove duplicate client cleanup from __aexit__, update _close() to call __aexit__(None, None, None) for symmetry with __aenter__
- DynamoDB: Move client cleanup before super().__aexit__() to prevent double-cleanup, set _client = None after closing
- RocksDB: Add try-except in __del__ for best-effort cleanup with appropriate noqa comments
- DuckDB: Use getattr for _is_closed check in __del__ for consistency
- Disk: Set _client_provided_by_user early for exception safety
- Memcached: Change truthiness check to explicit 'is not None' for consistency
- Valkey: Fix _close() to close the correct client (_connected_client) and set it to None for idempotency

Co-authored-by: William Easton <[email protected]>
…ment

- MongoDB: Fix double-cleanup by clearing client ref before super().__aexit__()
- MongoDB: Change _close() to use client.close() instead of __aexit__()
- MongoDB: Add null check in _close() since client may be None after context exit
- DynamoDB: Implement _close() to handle non-context-manager cleanup path
- Now properly passes exception info when exiting client's context manager

Co-authored-by: William Easton <[email protected]>
Replace self._client = None pattern with _client_context_entered flag
to track whether the client context has been entered. This eliminates
the need for type: ignore[assignment] comments and maintains type
safety throughout the store's lifetime.

Changes:
- Added _client_context_entered boolean flag to MongoDB and DynamoDB stores
- Set flag to True in __aenter__ when entering client context
- Check flag in __aexit__ before exiting client context
- Update _close() to only close if context wasn't already exited
- Remove self._client = None assignments and type ignore comments

Co-authored-by: William Easton <[email protected]>
Previously, `_client_context_entered` was set to False BEFORE calling
`super().__aexit__()`. This caused `_close()` to think the context wasn't
already exited, leading to double cleanup attempts.

Now the flag is reset in a finally block AFTER all cleanup completes,
ensuring `_close()` correctly knows the context was already exited.

Affected stores:
- MongoDBStore (async and sync)
- DynamoDBStore (async only, no sync version)
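The flag-plus-finally pattern can be sketched as follows (the client class is a stand-in; the real stores wrap AsyncMongoClient / aioboto3 clients):

```python
import asyncio


class FakeAsyncClient:
    def __init__(self):
        self.entered = False
        self.exited = False

    async def __aenter__(self):
        self.entered = True
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.exited = True


class Store:
    def __init__(self, client):
        self._client = client
        self._client_context_entered = False

    async def __aenter__(self):
        await self._client.__aenter__()
        self._client_context_entered = True
        return self

    async def __aexit__(self, exc_type, exc, tb):
        try:
            if self._client_context_entered:
                await self._client.__aexit__(exc_type, exc, tb)
        finally:
            # Reset only AFTER all cleanup completes, so a later close()
            # call correctly sees that the context was already exited.
            self._client_context_entered = False
```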

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Implements automatic client context entry at the base class level,
allowing stores to work correctly even when not used with `async with`.

Changes:
- Add `_enter_client_context()` helper in BaseContextManagerStore
- Add `_get_client_for_context()` override point for stores to specify their client
- Add `_get_entered_client()` helper to retrieve the entered client
- Call `_enter_client_context()` in both `__aenter__()` and `setup()`
- Update MongoDB and DynamoDB stores to use the new base class pattern
- Remove manual `__aenter__`/`__aexit__` overrides from store implementations

This ensures that clients requiring context manager entry (like MongoDB's
AsyncMongoClient and DynamoDB's aioboto3 client) are properly entered
whether the store is used with `async with` or called directly.

Fixes #232

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Simplified client lifecycle management by:
- Removed _enter_client_context() method from base classes
- Replaced manual context management with AsyncExitStack/ExitStack
- Added _ensure_exit_stack() method that creates exit stack and registers client contexts
- Removed complex _close() methods from MongoDB and DynamoDB stores
- Exit stack now handles all cleanup automatically through its internal stack
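A hedged sketch of the exit-stack approach (`_ensure_exit_stack` and the store/client names are illustrative, not the library's exact API):

```python
import asyncio
from contextlib import AsyncExitStack


class FakeAsyncClient:
    def __init__(self):
        self.entered = False
        self.exited = False

    async def __aenter__(self):
        self.entered = True
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.exited = True


class Store:
    def __init__(self, client):
        self._client = client
        self._exit_stack = None

    async def _ensure_exit_stack(self):
        # Create the stack lazily and register the client's context with it.
        if self._exit_stack is None:
            self._exit_stack = AsyncExitStack()
            await self._exit_stack.enter_async_context(self._client)

    async def __aenter__(self):
        await self._ensure_exit_stack()
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # aclose() unwinds every registered context and callback in LIFO
        # order, replacing hand-written _close() logic.
        await self._exit_stack.aclose()
```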

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Previously, stores returned clients via `_get_client_for_context()` and the base
class managed them. Now stores directly register their cleanup callbacks with the
exit stack via `_register_cleanup_callbacks(stack)`.

This gives stores more control over their cleanup registration and simplifies
the base class by removing an abstraction layer.

Also updated codegen to map AsyncExitStack -> ExitStack and
enter_async_context -> enter_context for sync code generation.
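The codegen mapping amounts to a token-for-token substitution. An illustrative sketch (the real `scripts/build_sync_library.py` is more involved than a plain string replace):

```python
# Async→sync name mapping, as described in the commit message.
names_map = {
    "AsyncExitStack": "ExitStack",
    "enter_async_context": "enter_context",
}


def async_to_sync(source: str) -> str:
    # Naive textual substitution, for demonstration only.
    for async_name, sync_name in names_map.items():
        source = source.replace(async_name, sync_name)
    return source
```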

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Simplified the BaseContextManagerStore lifecycle management pattern by:
- Removing abstract _register_cleanup_callbacks method
- Initializing AsyncExitStack in __init__ instead of lazily
- Allowing stores to directly register cleanup callbacks via _exit_stack
- Stores now call _exit_stack.push_async_callback() or _exit_stack.callback()
  in their _setup() method to register cleanup operations

This gives stores direct control over exit stack management while keeping
the base class responsible for entering and exiting the stack.

Updated all stores that inherit from BaseContextManagerStore:
- MongoDB, DynamoDB: register client context manager
- Redis, Valkey, Memcached, Elasticsearch: register async close callbacks
- RocksDB, Disk, MultiDisk, DuckDB: register sync close callbacks
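Putting the pieces together, the final pattern might look like this sketch (names are illustrative; the base class owns the stack, the store registers cleanup in `_setup()` only when it owns the client):

```python
import asyncio
from contextlib import AsyncExitStack


class BaseContextManagerStore:
    def __init__(self, *, client_provided_by_user: bool = False):
        self._client_provided_by_user = client_provided_by_user
        self._exit_stack = AsyncExitStack()

    async def _setup(self):
        """Override point: stores register cleanup callbacks here."""

    async def __aenter__(self):
        await self._setup()
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # The base class is responsible for unwinding the stack.
        await self._exit_stack.aclose()


class FakeClient:
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True


class RedisLikeStore(BaseContextManagerStore):
    def __init__(self, client=None):
        super().__init__(client_provided_by_user=client is not None)
        self._client = client if client is not None else FakeClient()

    async def _setup(self):
        # Register the close callback only for clients we created.
        if not self._client_provided_by_user:
            self._exit_stack.push_async_callback(self._client.close)
```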

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

coderabbitai bot commented Nov 16, 2025

Warning

Rate limit exceeded

@strawgate has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 5 minutes and 5 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 4bb78cf and 7942382.

📒 Files selected for processing (13)
  • key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (4 hunks)
  • key-value/key-value-aio/tests/stores/base.py (3 hunks)
  • key-value/key-value-aio/tests/stores/disk/test_disk.py (2 hunks)
  • key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1 hunks)
  • key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-aio/tests/stores/filetree/test_filetree.py (1 hunks)
  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (4 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/base.py (3 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (1 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
📝 Walkthrough

Walkthrough

Adds a stable_api flag to BaseStore and introduces BaseContextManagerStore using AsyncExitStack/ExitStack for centralized enter/exit lifecycle. Many stores now track whether a client/cache was provided by the user and register conditional cleanup via new _setup hooks and exit-stack callbacks; explicit per-store close overrides removed.

Changes

Cohort / File(s) Change Summary
Base lifecycle & API
key-value/key-value-aio/src/key_value/aio/stores/base.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py
Add stable_api: bool = False to BaseStore.__init__; add BaseContextManagerStore with client_provided_by_user, _exit_stack/AsyncExitStack, lazy-enter helper, updated context methods (__aenter__/__aexit__/__enter__/__exit__), close() and setup() hooks, and stability warning emission.
Disk stores
key-value/key-value-aio/src/key_value/aio/stores/disk/*, key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/*
Pass stable_api=True to base; add _setup() to register cache cleanup via exit stack when store owns cache; remove unconditional _close() overrides; make __del__ conditional on ownership.
MultiDisk / Multi-store
key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py
Add async/sync _setup() registering _sync_close via exit stack; remove _close() bridge; pass stable_api=True to base.
DB / connection-backed stores
key-value/key-value-aio/src/key_value/aio/stores/{duckdb,mongodb,dynamodb,elasticsearch}/store.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/{duckdb,mongodb,elasticsearch}/store.py
Propagate client_provided_by_user to base; replace per-store enter/exit/close overrides with _setup() that conditionally registers client close when store owns the client; remove _close/finalizer logic where applicable; do not close user-provided connections/clients.
Cache & KV adapters
key-value/key-value-aio/src/key_value/aio/stores/{memcached,redis,rocksdb,valkey}/store.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/{memcached,redis,rocksdb,valkey}/store.py
Track _client_provided_by_user, add _setup() to register client/db close only if owned, remove explicit _close() overrides, and guard __del__ cleanup on ownership.
Memory / simple stores
key-value/key-value-aio/src/key_value/aio/stores/memory/store.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/memory/store.py
Pass stable_api=True into base constructor instead of assigning _stable_api locally.
Valkey / Glide client
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py, key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py
Pass client_provided_by_user and stable_api=True to base; register async cleanup in _setup() when store owns client; remove _close() override.
Tests — fixtures (RocksDB, DuckDB, Disk, MultiDisk)
key-value/*/tests/stores/{rocksdb,duckdb,disk,multi_disk}/*.py, key-value/*/tests/code_gen/stores/{rocksdb,duckdb,disk,multi_disk}/*.py
Add path-scoped fixtures (rocksdb_path, duckdb_path, disk_path, multi_disk_path); convert store fixtures to accept path and construct store instances; wipe DB/cache before returning; ensure user-provided DBs are closed in tests.
Tooling
scripts/build_sync_library.py
Extend async→sync name mapping: "AsyncExitStack": "ExitStack", "enter_async_context": "enter_context", "push_async_callback": "callback".

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Store
    participant Base as BaseContextManagerStore
    participant Stack as AsyncExitStack/ExitStack
    participant Resource

    User->>Store: enter context
    Store->>Base: _ensure_exit_stack_entered()
    Base->>Stack: enter_async_context / enter_context
    Base->>Store: setup()
    alt Store owns Resource
        Store->>Stack: push_async_callback / callback(close_resource)
    end
    Store-->>User: ready
    User->>Store: use store
    User->>Store: exit context
    Base->>Stack: __aexit__ / close
    Stack->>Resource: invoke cleanup callbacks (owned only)
    Base-->>User: exited
sequenceDiagram
    participant Caller
    participant Store
    participant Base

    Caller->>Store: __init__(resource_provided?)
    alt resource provided
        Store->>Base: super(..., client_provided_by_user=True)
        Note right of Base: no automatic cleanup registered
    else resource created
        Store->>Base: super(..., client_provided_by_user=False)
        Note right of Base: cleanup registered during setup
    end


Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 50.86% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely describes the main objective of the PR: implementing a client ownership split mechanism for store lifecycle management across multiple store implementations.



claude bot commented Nov 16, 2025

Test Failure Analysis

Summary: The static analysis for key-value-sync is failing because the code generation script is missing a mapping for push_async_callback → callback, causing type errors in the generated synchronous code.

Root Cause: The PR adds AsyncExitStack to ExitStack mapping in scripts/build_sync_library.py (line 227), but forgot to add the corresponding method mapping:

  • AsyncExitStack.push_async_callback() needs to be mapped to ExitStack.callback()
  • The ExitStack class (used in sync code) doesn't have a push_async_callback method - it only has callback()

The type checker correctly identifies this issue:

Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
  Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)

Suggested Solution: Add the missing method name mapping to scripts/build_sync_library.py:

  1. File to modify: scripts/build_sync_library.py
  2. Change needed: In the names_map dictionary (around line 248), add:
    "push_async_callback": "callback",
  3. Location: Add it near the other callback-related mappings, such as after "enter_async_context": "enter_context" (line 242)
  4. Re-run codegen: After making this change, run make codegen to regenerate the sync library

Why this fixes the issue:

  • AsyncExitStack.push_async_callback() is used in async stores to register cleanup callbacks for async functions
  • ExitStack.callback() is the synchronous equivalent that registers cleanup callbacks for sync functions
  • The codegen script needs to know to transform push_async_callback → callback when converting async code to sync
Detailed Analysis

Failed Job: static_analysis (key-value/key-value-sync)

Error Log Excerpts:

/home/runner/work/py-key-value/py-key-value/key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py:236:30 - error: Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
  Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)

/home/runner/work/py-key-value/py-key-value/key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py:95:30 - error: Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
  Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)

Affected Files (all generated sync code):

  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py:236
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py:95
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py:212 (likely)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/memcached/store.py (likely)

Code Examples:

Async version (key-value-aio/src/key_value/aio/stores/elasticsearch/store.py:264):

async def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.push_async_callback(self._client.close)  # ✓ Valid for AsyncExitStack

Generated sync version (key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py:236):

def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.push_async_callback(self._client.close)  # ✗ Invalid for ExitStack

Should be:

def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.callback(self._client.close)  # ✓ Valid for ExitStack
Related Files

To Fix:

  • scripts/build_sync_library.py - Add "push_async_callback": "callback" to names_map dictionary

Affected by Generated Code (will be auto-fixed by re-running codegen):

  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/memcached/store.py

Related:

  • key-value/key-value-aio/src/key_value/aio/stores/base.py - Introduces the client ownership pattern using AsyncExitStack


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)

47-81: Client-ownership flag and setup-time cleanup registration look correct

The introduction of client_provided = db is not None, propagation via client_provided_by_user to the base class, and registering _close_and_flush in _setup only when we own the DB aligns RocksDBStore with the shared ownership model used in other stores and ensures close() only tears down internally created databases.

Minor readability tweak you might consider (not introduced in this PR): changing if db: at Line 67 to if db is not None: to decouple control flow from any potential truthiness semantics of Rdict and make the ownership check intent explicit.

Also applies to: 83-87

key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)

95-110: stable_api flag wiring is straightforward; confirm warning strategy is intentional

Adding the stable_api: bool = False parameter and emitting a UserWarning whenever _stable_api is false makes the stability contract explicit at construction time. This will cause every store that doesn’t explicitly pass stable_api=True to warn on instantiation; that may be exactly what you want, but it’s worth confirming you’re comfortable with the noise level for end users, or alternatively flipping the default to True and opting out from unstable store constructors.
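A minimal sketch of the construction-time warning being discussed (this `BaseStore` is a stand-in for the real class, not its actual implementation):

```python
import warnings


class BaseStore:
    def __init__(self, *, stable_api: bool = False):
        self._stable_api = stable_api
        if not self._stable_api:
            # Every store that doesn't opt in with stable_api=True warns
            # once per instantiation.
            warnings.warn(
                "This store's API is not yet stable and may change.",
                UserWarning,
                stacklevel=2,
            )
```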

Also applies to: 126-130

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ab83ea and 2fdb360.

📒 Files selected for processing (25)
  • key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (1 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (4 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (6 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (6 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (3 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (1 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/memory/store.py (1 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (4 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (3 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (5 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (3 hunks)
  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (4 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (6 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (3 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/memory/store.py (1 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (4 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (3 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (5 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1 hunks)
  • scripts/build_sync_library.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (22)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (2)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (480-484)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (7)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (2)
  • _client (100-106)
  • _setup (86-97)
key-value/key-value-aio/src/key_value/aio/stores/base.py (2)
  • _setup (127-128)
  • close (480-484)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1)
  • _setup (90-93)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • _setup (252-283)
key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)
  • _setup (261-268)
key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (1)
  • _setup (196-199)
key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (1)
  • _setup (222-225)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (4)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • _setup (89-92)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
  • _setup (221-252)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (1)
  • _setup (194-197)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (1)
  • _setup (84-95)
key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (1)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (1)
  • _client (100-106)
key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (3)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • _setup (127-128)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1)
  • _setup (90-93)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (2)
  • _setup (83-87)
  • _close_and_flush (89-93)
key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (3)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • _setup (127-128)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1)
  • _setup (90-93)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (2)
  • _setup (110-112)
  • _sync_close (146-148)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (3)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • close_connection (269-272)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (480-484)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (3)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
  • close_connection (238-241)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (480-484)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (6)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • _setup (133-134)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • _setup (89-92)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)
  • _setup (233-240)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (1)
  • _setup (209-212)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • _setup (83-87)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (2)
  • _setup (84-95)
  • _client (98-104)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (5)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (2)
  • _setup (133-134)
  • close (450-454)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (1)
  • _setup (194-197)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (1)
  • _setup (209-212)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • _setup (83-87)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (2)
  • _setup (84-95)
  • _client (98-104)
key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (7)
key-value/key-value-aio/src/key_value/aio/stores/base.py (2)
  • _setup (127-128)
  • close (480-484)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • _setup (252-283)
key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (1)
  • _setup (85-88)
key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (1)
  • _setup (196-199)
key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (1)
  • _setup (222-225)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (2)
  • _setup (86-97)
  • _client (100-106)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)
  • _setup (233-240)
key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (8)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • _setup (127-128)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1)
  • _setup (90-93)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • _setup (252-283)
key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (1)
  • _setup (133-173)
key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)
  • _setup (261-268)
key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (1)
  • _setup (85-88)
key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (1)
  • _setup (222-225)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (1)
  • _setup (194-197)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (480-484)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (4)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (3)
  • _setup (133-134)
  • close (450-454)
  • _delete_store (500-502)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • _setup (89-92)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (1)
  • _setup (194-197)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • _setup (83-87)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (480-484)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (3)
key-value/key-value-aio/src/key_value/aio/stores/base.py (2)
  • _setup (127-128)
  • close (480-484)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • _setup (252-283)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • _setup (89-92)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (3)
key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (2)
  • _setup (110-112)
  • _sync_close (152-154)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • _setup (133-134)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • _setup (89-92)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (7)
key-value/key-value-aio/tests/stores/disk/test_disk.py (1)
  • disk_cache (31-32)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (1)
  • disk_cache (34-35)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
  • _setup (221-252)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (1)
  • _setup (194-197)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (1)
  • _setup (209-212)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • _setup (83-87)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (450-454)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (4)
  • _ensure_exit_stack_entered (432-436)
  • setup (157-179)
  • setup (456-465)
  • close (450-454)
key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (5)
key-value/key-value-aio/src/key_value/aio/stores/base.py (2)
  • _setup (127-128)
  • _delete_store (530-532)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • _setup (252-283)
key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)
  • _setup (261-268)
key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (2)
  • _setup (85-88)
  • _delete_store (157-159)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (2)
  • _setup (209-212)
  • _delete_store (215-216)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
key-value/key-value-aio/src/key_value/aio/stores/base.py (4)
  • _ensure_exit_stack_entered (460-464)
  • setup (151-173)
  • setup (486-495)
  • close (480-484)
🪛 GitHub Actions: Run Tests
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py

[error] 236-236: Type of "push_async_callback" is unknown (reportUnknownMemberType). Command failed: uv run basedpyright .

🪛 GitHub Check: static_analysis (key-value/key-value-sync)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py

[failure] 95-95:
Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
  Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)


[failure] 95-95:
Type of "push_async_callback" is unknown (reportUnknownMemberType)

key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py

[failure] 236-236:
Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
  Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)


[failure] 236-236:
Type of "push_async_callback" is unknown (reportUnknownMemberType)

🔇 Additional comments (43)
key-value/key-value-aio/src/key_value/aio/stores/memory/store.py (1)

118-122: LGTM: Correct stable_api configuration for in-memory store.

Passing stable_api=True is appropriate since MemoryStore manages only in-memory data structures without external clients or resources requiring lifecycle cleanup.

scripts/build_sync_library.py (1)

227-227: Correct mapping for AsyncExitStack.

The mapping correctly converts AsyncExitStack to its sync counterpart ExitStack.
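The substitution the build script performs can be pictured as a small name-mapping table applied during code generation. The fragment below is a hypothetical illustration, not the actual contents of scripts/build_sync_library.py (which may operate on the AST or use word-boundary-aware replacement):

```python
# Hypothetical fragment of an async-to-sync token substitution table;
# the real scripts/build_sync_library.py may differ in mechanism and scope.
SUBSTITUTIONS = {
    "AsyncExitStack": "ExitStack",
    "push_async_callback": "callback",
    "enter_async_context": "enter_context",
    "aclose": "close",
}

def async_to_sync(source: str) -> str:
    # Naive string replacement, purely for illustration.
    for async_name, sync_name in SUBSTITUTIONS.items():
        source = source.replace(async_name, sync_name)
    return source
```

Note the second entry: mapping push_async_callback to callback is what keeps the generated sync stores from calling an AsyncExitStack-only method on ExitStack, which is exactly the failure the static-analysis findings above report.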

key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (6)

92-93: LGTM: Clear lifecycle documentation.

The docstring accurately reflects the new ownership semantics—user-provided connections remain the caller's responsibility.


132-134: LGTM: Consistent documentation.

The main __init__ docstring correctly mirrors the ownership semantics from the overload.


144-144: LGTM: Ownership tracking initialized.

The flag correctly identifies whether the connection was user-provided.


168-173: LGTM: Base class integration.

The ownership flag is correctly passed to BaseContextManagerStore, enabling centralized lifecycle management.


266-274: LGTM: Conditional cleanup properly implemented.

The ownership check ensures user-provided connections aren't closed. The synchronous callback is appropriate since duckdb.DuckDBPyConnection.close() is not async. AsyncExitStack handles this correctly via LIFO execution during aclose().


378-382: LGTM: Defensive cleanup with ownership check.

The __del__ method correctly respects ownership via _client_provided_by_user and uses defensive getattr calls.

Minor observation: self._connection is assigned before super().__init__() (which sets _client_provided_by_user). If the base __init__ fails mid-execution, getattr(self, "_client_provided_by_user", False) defaults to False, potentially closing a user connection. However, this is an unlikely edge case and the outer exception handler mitigates risk.

key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (6)

11-11: LGTM: Clean import removal.

Removing unused Self import while retaining override.


162-164: LGTM: Explicit lifecycle documentation.

Clear explanation that user-provided clients remain under caller control. This aligns with ownership semantics across other stores.


172-173: LGTM: Correct ownership tracking.

Properly captures whether the client was user-provided. The is not None check is appropriate for parameter presence detection.


192-192: LGTM: Ownership flag propagated to base class.

Correctly passes the ownership flag to enable conditional cleanup in the base lifecycle management.


337-337: LGTM: Helpful maintenance comment.

Documents the lifecycle pattern shift, clarifying that explicit _close override is no longer needed since the exit stack handles cleanup.


196-199: LGTM: Clean exit-stack-based lifecycle.

Conditionally registers the client for cleanup only when the store owns it. The implementation correctly uses enter_async_context to manage the AsyncMongoClient's lifecycle. PyMongo AsyncMongoClient implements the async context manager protocol, so this pattern properly handles resource cleanup.

key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (4)

55-67: LGTM: Clear lifecycle documentation.

The docstring accurately describes parameter semantics and lifecycle management responsibilities.


68-69: LGTM: Straightforward ownership tracking.


79-83: LGTM: Lifecycle integration with base class.

Correctly passes ownership flag and enables stable API for exit-stack-based lifecycle management.


95-97: Cleanup registration verified.

The valkey-glide Python library v2.1.0 exposes an async close method, confirming the async callback registration at lines 95–97 is correctly implemented. The pattern properly handles client lifecycle management.

key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (2)

104-107: LGTM: Constructor properly configures stable API flag.

The addition of stable_api=True aligns with the new lifecycle management approach.


109-112: No fixes needed — verified that diskcache.Cache.close() safely handles multiple calls.

The concern about potentially calling cache.close() twice (via both exit stack cleanup and __del__) is not an issue. Diskcache Cache objects are idempotent/safe to close multiple times, and closed Cache objects automatically re-open when accessed, making the double-cleanup pattern safe.

key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (4)

53-55: LGTM: Clear lifecycle ownership documentation.

The docstring explicitly states ownership semantics, preventing confusion about resource management responsibilities.


68-69: LGTM: Ownership tracking properly initialized and propagated.

The ownership flag is correctly set based on whether disk_cache was provided and properly passed to the base class constructor for lifecycle management.

Also applies to: 83-87


89-93: LGTM: Conditional cleanup registration respects ownership.

The _setup method correctly registers cache cleanup only when the store owns the cache, preventing inappropriate closure of user-provided resources.


136-137: Defensive checks are appropriate and safe.

Cache.close() is safe to call multiple times, and closed Cache objects will automatically re-open when accessed. The dual cleanup paths (both _setup at line 93 and __del__) pose no risk.

key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (2)

63-82: Client ownership handling in MemcachedStore.__init__ is consistent and correct

Deriving client_provided from client is not None and passing client_provided_by_user=client_provided into BaseContextManagerStore cleanly separates user‑provided vs internally created clients. The control flow around client vs host/port is clear and side‑effect free; ignoring host/port when a client is supplied is acceptable and matches the docstring semantics. No changes needed.


84-88: _setup correctly hooks Memcached client cleanup into the exit stack

Conditionally registering self._client.close with push_async_callback when _client_provided_by_user is false ensures that only store‑owned aiomcache clients are closed on teardown, while user‑provided clients are left alone. This aligns with the new lifecycle model used by the other stores and avoids double‑closing.
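The ownership-conditional registration described above can be sketched with a stand-in client (FakeClient and run_store are illustrative names, not the library's API):

```python
import asyncio
from contextlib import AsyncExitStack

class FakeClient:
    """Stand-in for an aiomcache-style client with an async close()."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        self.closed = True

async def run_store(client_provided_by_user: bool) -> FakeClient:
    client = FakeClient()
    async with AsyncExitStack() as stack:
        # Mirror of the _setup pattern: register cleanup only when
        # the store created the client itself.
        if not client_provided_by_user:
            stack.push_async_callback(client.close)
        # ... store operations would go here ...
    return client
```

Exiting the `async with` block awaits the registered callback for store-owned clients, while user-provided clients pass through untouched.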

key-value/key-value-aio/src/key_value/aio/stores/base.py (2)

89-123: stable_api flag and stability warning behavior look sound

Capturing stable_api in BaseStore.__init__ and emitting a warning only when it is False gives a simple, opt‑in way for mature stores to suppress the instability warning without changing existing call sites. The behavior is straightforward and keeps the warning at construction time, which is a reasonable point for users to see it.
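A minimal sketch of that construction-time warning, with illustrative names (the real BaseStore signature and message differ):

```python
import warnings

class StoreSketch:
    """Sketch of an opt-in API-stability flag; names are illustrative."""

    def __init__(self, *, stable_api: bool = False) -> None:
        self._stable_api = stable_api
        # Warn at construction time so the user sees it where the
        # store is created, not deep inside a later operation.
        if not self._stable_api:
            self._warn_about_stability()

    def _warn_about_stability(self) -> None:
        warnings.warn(
            f"{type(self).__name__}'s API is not yet stable and may change.",
            stacklevel=3,
        )
```

Mature stores pass `stable_api=True` to suppress the warning; everything else keeps the default and warns once per instantiation.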


430-495: AsyncExitStack-based lifecycle in BaseContextManagerStore is well-structured

The combination of _client_provided_by_user, an AsyncExitStack, and _exit_stack_entered plus _ensure_exit_stack_entered() provides a clear, reusable pattern for resource cleanup:

  • __aenter__ and the overridden setup() both ensure the exit stack is entered before _setup, allowing subclasses to register cleanup callbacks there.
  • __aexit__ and close() both close the exit stack and reset _exit_stack_entered, making context‑manager usage and explicit close() calls equivalent from a cleanup perspective.
  • The design is compatible with the existing BaseStore.setup locking and seeding logic and doesn’t introduce obvious race conditions.

This is a solid foundation for the per‑store lifecycle handling added elsewhere in the PR.
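The lifecycle described in those bullets can be condensed into a sketch (class and flag names follow the review's description, but this is a simplified model, not the library's exact implementation):

```python
import asyncio
from contextlib import AsyncExitStack

class LifecycleSketch:
    """Simplified model of the exit-stack lifecycle; not the real base class."""

    def __init__(self, *, client_provided_by_user: bool = False) -> None:
        self._client_provided_by_user = client_provided_by_user
        self._exit_stack = AsyncExitStack()
        self._exit_stack_entered = False

    async def _ensure_exit_stack_entered(self) -> None:
        if not self._exit_stack_entered:
            await self._exit_stack.__aenter__()
            self._exit_stack_entered = True

    async def _setup(self) -> None:
        """Subclasses register cleanup callbacks on self._exit_stack here."""

    async def __aenter__(self) -> "LifecycleSketch":
        # Enter the stack before _setup so subclasses can register callbacks.
        await self._ensure_exit_stack_entered()
        await self._setup()
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        await self.close()

    async def close(self) -> None:
        # aclose() runs registered callbacks in LIFO order; resetting the
        # flag makes explicit close() and context-manager exit equivalent.
        await self._exit_stack.aclose()
        self._exit_stack_entered = False
```

A subclass that registers cleanup only for store-owned clients then gets correct teardown from either `async with` or an explicit `close()`.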

key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (2)

57-95: RedisStore constructor cleanly separates owned vs user-provided clients

Using client_provided = client is not None and branching between client, url, and host/port is straightforward and ensures only internally created Redis instances are considered store‑owned. Passing client_provided_by_user=client_provided and stable_api=True into the base initializer ties RedisStore into the new lifecycle and stability model without changing the public surface. This looks correct.


221-229: Redis client cleanup and destroy behavior align with the new lifecycle model

Registering self._client.aclose on the exit stack when _client_provided_by_user is false ensures that only store‑owned redis.asyncio.Redis instances are closed during teardown, matching redis‑py’s recommended async close pattern. (redis.readthedocs.io)

_delete_store delegating to await self._client.flushdb() is consistent with the sync implementation and satisfies the expected boolean contract via the truthiness of the return value. Overall, the lifecycle changes are coherent and non‑breaking.

key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (2)

92-123: DynamoDBStore constructor correctly wires client ownership into the base class

The client parameter docstring matches the implementation: when a client is provided, it’s assigned directly to _client, and client_provided_by_user=True is propagated to BaseContextManagerStore, so this store does not attempt to manage its lifecycle. When client is None, you create a raw aioboto3 DynamoDB client (self._raw_client) and leave _client unset, ready to be initialized during _setup. The separation between _raw_client (context-managed by the store) and _client (the connected client used by operations) is clear and aligns with the ownership model in the rest of the PR.


135-175: _setup integrates DynamoDB client acquisition and cleanup with the exit stack

Conditionally calling await self._exit_stack.enter_async_context(self._raw_client) when the client is not user‑provided and _raw_client exists ensures that:

  • Store‑created clients are entered exactly once under the AsyncExitStack and will be cleaned up automatically on __aexit__/close().
  • User‑provided clients skip this path and are used directly via _connected_client, avoiding double context management.

The subsequent table existence check, creation, waiter, and TTL configuration continue to use _connected_client and are unaffected by the lifecycle changes. The explanatory comment about not needing _close anymore is accurate given the exit stack handling.

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)

61-64: Explicitly closing the user-provided RocksDB instance matches the new ownership model

With the store no longer responsible for closing user‑provided Rdict instances, calling db.close() in the test after await store.close() is the right way to ensure resources are released before cleaning up the temporary directory. This keeps the test aligned with the revised lifecycle semantics and avoids potential file‑handle leaks.

key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

64-67: Explicitly closing user-provided RocksDB instance matches new ownership semantics

Calling db.close() after store.close() and before temp_dir.cleanup() correctly reflects that the store no longer owns user-provided DB handles and prevents lingering file handles in the temporary directory.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/memory/store.py (1)

111-116: Passing stable_api=True through the base initializer is a clean integration

Wiring stable_api=True directly into super().__init__ makes the MemoryStore’s stability declaration explicit at construction time and avoids ad-hoc mutation of base-class state later.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1)

107-113: ExitStack-based cleanup for MultiDiskStore is consistent with the new lifecycle model

Registering _sync_close in _setup and passing stable_api=True into super().__init__ cleanly hooks MultiDiskStore into the shared ExitStack-driven lifecycle while preserving the existing best-effort __del__ cleanup path.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)

186-191: Finalizer now respects user ownership and degrades gracefully

Guarding __del__ with _client_provided_by_user and wrapping _close_and_flush() in a broad try/except gives you best-effort cleanup for internally owned DBs while correctly avoiding interference with user-managed ones and staying robust to partially initialized instances during GC.
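The guarded-finalizer shape can be sketched as follows; attribute and method names are assumed for illustration, and the conservative `getattr` default of True ("if unsure, assume user-owned and do nothing") is a deliberate choice in this sketch:

```python
class RocksDBSketch:
    """Illustrative finalizer guard; not the library's exact implementation."""

    def __init__(self, db=None) -> None:
        self._client_provided_by_user = db is not None
        self._db = db if db is not None else object()  # stand-in for Rdict

    def _close_and_flush(self) -> None:
        """Close the underlying database handle (stubbed here)."""

    def __del__(self) -> None:
        # getattr guards against partially initialized instances during GC;
        # defaulting to True means "never close a handle we might not own".
        if getattr(self, "_client_provided_by_user", True):
            return
        try:
            self._close_and_flush()
        except Exception:
            pass  # best-effort cleanup only; never raise from __del__
```

The broad `except` is intentional: during interpreter shutdown, module globals the cleanup relies on may already be gone.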

key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)

218-220: Async client ownership wiring for Elasticsearch looks consistent

The new client_provided flag, the client_provided_by_user plumbing into BaseContextManagerStore, and the _setup registration via push_async_callback(self._client.close) align with the ownership pattern used in other async stores (memcached/redis/mongodb). Behavior matches the docstring: user‑provided clients are never auto‑closed; internally created ones are cleaned up via the exit stack.

Also applies to: 232-232, 257-258, 260-265

key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1)

4-4: RocksDB ownership and async lifecycle integration look correct

The RocksDB store cleanly distinguishes between user‑provided and internally created Rdict instances via client_provided, passes this into client_provided_by_user, and uses _setup + AsyncExitStack.callback(self._close_and_flush) to close only store‑owned databases. The updated docstring and guarded __del__ (best‑effort, only when we own the DB) are consistent with that contract and with the sync RocksDB implementation.

Also applies to: 54-56, 68-69, 83-86, 88-93, 198-203

key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (2)

95-97: Client ownership flag and base constructor wiring look solid

The client_provided = connection is not None flag, updated docstrings, and forwarding via client_provided_by_user=client_provided into BaseContextManagerStore cleanly implement the “user-provided connection is not closed by the store” semantics, while keeping stable_api=False behavior explicit.

Also applies to: 134-137, 147-148, 170-171


235-244: ExitStack callback correctly scopes connection cleanup to owned connections

Registering a close_connection callback only when self._client_provided_by_user is false, and guarding with _is_closed, integrates properly with the new BaseContextManagerStore lifecycle so that owned connections are closed exactly once via close() / context-manager exit.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (2)

14-14: MongoDB client ownership handling is consistent with the new base class

The updated client docstring, client_provided = client is not None flag, and forwarding via client_provided_by_user=client_provided into BaseContextManagerStore correctly ensure that user-provided clients are never closed by the store, while URL/default clients are owned and managed. The added @override usage for methods is also appropriate here.

Also applies to: 159-168, 170-171, 187-191


194-198: Context manager support verified; implementation is correct

PyMongo 4.x MongoClient implements the context manager protocol and automatically calls client.close() when exiting the with-block, confirming that the _exit_stack.enter_context(self._client) approach in lines 194-198 (and 312-313) is sound for PyMongo 4.0.0. The implementation properly centralizes cleanup under the context manager lifecycle.

Repository owner deleted a comment from claude bot Nov 16, 2025


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (1)

93-95: Critical: Callback registers async method to synchronous ExitStack.

The code registers self._client.close (an async method) with ExitStack.callback(), which is synchronous. During context exit, invoking the callback merely creates a coroutine object that is never awaited, so no cleanup actually runs and the client connection is left open.

For a synchronous store, you must use a synchronous cleanup approach:

         # Register client cleanup if we own the client
         if not self._client_provided_by_user:
-            self._exit_stack.callback(self._client.close)
+            def _close_client() -> None:
+                if self._connected_client is not None:
+                    self._connected_client.close()
+
+            self._exit_stack.callback(_close_client)

Verify that glide_sync.BaseClient exposes a synchronous close() method (not async). If only async close is available, consider using an async context manager pattern instead.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2fdb360 and e96b709.

📒 Files selected for processing (11)
  • key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (5 hunks)
  • key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (6 hunks)
  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (5 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (3 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (3 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
  • scripts/build_sync_library.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (4)
  • _ensure_exit_stack_entered (432-436)
  • setup (157-179)
  • setup (459-468)
  • close (453-457)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (453-457)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (2)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (1)
  • _client (100-106)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
  • close (453-457)
key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (2)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (1)
  • _client (98-104)
key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (1)
  • _client (100-106)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • RocksDBStore (23-191)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (3)
  • keys (389-395)
  • delete (356-360)
  • close (453-457)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
  • close (483-487)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1)
  • RocksDBStore (20-203)
key-value/key-value-aio/src/key_value/aio/stores/base.py (3)
  • keys (417-423)
  • delete (384-388)
  • close (483-487)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (3)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (3)
  • _setup (133-134)
  • close (453-457)
  • _delete_store (503-505)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
  • _setup (219-244)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)
  • _setup (233-240)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
key-value/key-value-aio/src/key_value/aio/stores/base.py (4)
  • _ensure_exit_stack_entered (460-464)
  • setup (151-173)
  • setup (489-498)
  • close (483-487)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (3)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (2)
  • _setup (133-134)
  • close (453-457)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)
  • _setup (219-244)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (1)
  • _setup (209-212)
🪛 GitHub Actions: Run Tests
key-value/key-value-aio/src/key_value/aio/stores/base.py

[error] 480-480: pyright: Type "bool | None" is not assignable to return type "None".

key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py

[error] 450-450: Pyright: Type 'bool | None' is not assignable to return type 'None' (reportReturnType).

🪛 GitHub Check: static_analysis (key-value/key-value-aio)
key-value/key-value-aio/src/key_value/aio/stores/base.py

[failure] 480-480:
Type "bool | None" is not assignable to return type "None"
  Type "bool | None" is not assignable to type "None"
    "bool" is not assignable to "None" (reportReturnType)

🪛 GitHub Check: static_analysis (key-value/key-value-sync)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py

[failure] 450-450:
Type "bool | None" is not assignable to return type "None"
  Type "bool | None" is not assignable to type "None"
    "bool" is not assignable to "None" (reportReturnType)

🔇 Additional comments (21)
scripts/build_sync_library.py (1)

227-227: LGTM! Previous feedback addressed.

The AsyncExitStack mappings are correct and complete for the methods used in the codebase. The mapping from push_async_callback to callback (line 249) properly addresses the prior review finding.

Also applies to: 242-242, 249-249

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)

27-36: LGTM: Store fixture correctly refactored.

Conversion to synchronous factory with explicit path parameter and pre-wipe logic aligns with the new lifecycle management model where stores register cleanup via setup hooks.


71-72: LGTM: Correct cleanup of user-provided client.

Since the store was created with db=db, the client_provided_by_user flag prevents automatic cleanup, so manual db.close() is required.

key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (4)

91-92: Documentation clearly states lifecycle ownership.

The updated docstring appropriately warns users that provided connections are caller-managed.


143-143: Clean ownership tracking.

The boolean flag correctly captures whether the connection was user-provided.


264-266: Lifecycle management correctly delegates to exit stack.

The conditional cleanup registration ensures connections are only closed when created by the store, not when user-provided. The synchronous close() callback is compatible with AsyncExitStack.callback().


166-171: No issues found. The stable_api=False parameter is correct and intentional.

Verification confirms stable_api is a base class parameter that flags API stability status. When False, it triggers _warn_about_stability() warnings. DuckDB is consistently marked as unstable (stable_api=False) across both sync and async implementations, while production-ready stores (Redis, Valkey, Memory, Disk) use stable_api=True. The parameter and its value are correct.

key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (4)

5-5: LGTM: Override decorator added for clarity.

Adding override from typing_extensions properly marks method overrides.


77-124: Client ownership tracking implemented correctly.

The initialization properly handles both cases:

  • When client is provided: stores it directly in _client and sets _raw_client = None
  • When client is not provided: creates _raw_client for later entry in _setup

The client_provided flag is correctly passed to the base class to coordinate lifecycle management.


137-138: Client lifecycle registration is correct.

The condition properly checks ownership before registering cleanup. The _raw_client is not None check on line 137 is defensive—based on initialization logic, if client_provided_by_user is False, _raw_client is always set. Redundant but acceptable defensive programming.


248-248: Accurate comment about lifecycle management.

Correctly notes that the exit stack pattern eliminates the need for explicit _close override.

key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

74-75: Correct cleanup of user-provided resource.

Properly closes the user-provided database after store.close(), aligning with the ownership model where stores don't manage user-provided clients (see store.py where client_provided_by_user=True prevents store from closing user-provided db).

Based on learnings.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3)

57-69: LGTM: Clear lifecycle documentation.

The docstring clearly explains client ownership semantics—when the store manages cleanup vs. when the caller retains responsibility.


70-70: LGTM: Ownership flag correctly computed.


81-81: Code changes verified. The super().__init__() call at line 81 correctly passes all parameters:

  • client_provided_by_user is accepted by BaseContextManagerStore.__init__
  • default_collection and stable_api are accepted by BaseStore.__init__ via **kwargs

All parameter signatures match the base class definitions.

key-value/key-value-aio/src/key_value/aio/stores/base.py (1)

81-123: Stable API flag wiring in BaseStore looks sound

Adding the kw‑only stable_api: bool = False parameter, storing it on _stable_api, and centralizing the warning via _warn_about_stability() is consistent and backward‑compatible with existing callers.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)

87-131: Stable API flag addition mirrors the async base cleanly

The new stable_api kw‑only parameter, backing field, and warning hook are wired the same way as in the async base and remain backward‑compatible with existing sync stores.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)

189-237: Elasticsearch client ownership and cleanup are correctly integrated with the exit stack

The updated elasticsearch_client docs, the client_provided = elasticsearch_client is not None flag, passing client_provided_by_user into super().__init__, and registering self._client.close via self._exit_stack.callback only when the client is owned by the store all look consistent with the new BaseContextManagerStore lifecycle.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)

94-99: DuckDB connection ownership and cleanup via ExitStack are wired correctly

Connection ownership is clearly documented, client_provided is derived from the presence of connection and passed as client_provided_by_user, and _setup registers self._connection.close on the exit stack only when the store owns the connection. This matches the intended lifecycle pattern.

Also applies to: 133-141, 146-147, 168-169, 233-235

key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (2)

60-71: Redis client ownership split is clearly expressed and correctly forwarded to the base

The ctor doc updates, client_provided = client is not None, and passing client_provided_by_user=client_provided (with stable_api=True) cleanly separate user‑managed clients from store‑owned ones and avoid spurious stability warnings.

Also applies to: 88-89


209-216: Redis cleanup and destroy hooks align with the new context-manager lifecycle

Registering self._client.close on the exit stack only when the client is store‑owned, and implementing _delete_store via flushdb(), fits the BaseContextManagerStore + BaseDestroyStore lifecycle model and keeps ownership semantics consistent.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)

87-131: BaseStore stable_api flag wiring looks correct; consider annotating the field

The new stable_api kw-only parameter, _stable_api assignment, and _warn_about_stability() call are coherent and preserve backward compatibility for existing keyword callers. For consistency with other attributes, you might also add a class-level type annotation for _stable_api: bool.
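A compact sketch of that wiring, including the suggested class-level annotation, might look like the following. Class and message names are illustrative, not taken from the library:

```python
import warnings


class SketchBaseStore:
    """Sketch of a kw-only stable_api flag with the suggested
    class-level annotation (names are hypothetical)."""

    _stable_api: bool  # class-level annotation, as suggested above

    def __init__(self, *, stable_api: bool = False) -> None:
        self._stable_api = stable_api
        self._warn_about_stability()

    def _warn_about_stability(self) -> None:
        # Warn only when the store's API is flagged as unstable.
        if not self._stable_api:
            warnings.warn(
                "This store's API is not yet stable and may change.",
                UserWarning,
                stacklevel=3,
            )


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    SketchBaseStore()                  # unstable: emits a warning
    SketchBaseStore(stable_api=True)   # stable: silent
```

Keeping the parameter kw-only preserves backward compatibility for existing keyword callers, as the comment notes.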

♻️ Duplicate comments (2)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)

22-27: Session-scoped fixture with TemporaryDirectory will delete the directory prematurely.

This issue was already flagged in a previous review. Use tmp_path_factory instead for session-scoped temporary paths, as recommended by pytest.


35-38: Fix incorrect Rdict.delete() API usage.

Line 38 uses client.delete(key=key) but Rdict.delete expects a positional argument. This was flagged in a previous review.

Apply this diff:

-        for key in client.keys(backwards=True):
-            client.delete(key=key)
+        for key in client.keys(backwards=True):
+            client.delete(key)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e96b709 and 895072c.

📒 Files selected for processing (6)
  • key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
  • key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1)
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (4)
  • duckdb_path (36-40)
  • store (23-27)
  • store (44-48)
  • store (59-63)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
  • rocksdb_path (23-27)
  • store (31-40)
key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1)
  • RocksDBStore (20-203)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)
  • rocksdb_path (20-24)
  • store (28-37)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • RocksDBStore (23-191)
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (4)
  • _ensure_exit_stack_entered (432-436)
  • setup (157-179)
  • setup (461-470)
  • close (455-459)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
key-value/key-value-aio/src/key_value/aio/stores/base.py (4)
  • _ensure_exit_stack_entered (460-464)
  • setup (151-173)
  • setup (489-498)
  • close (483-487)
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (2)
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (4)
  • duckdb_path (39-43)
  • store (26-30)
  • store (47-51)
  • store (62-66)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • DuckDBStore (62-353)
🔇 Additional comments (10)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

75-76: Correct handling of user-provided DB lifecycle.

Explicitly closing the user-provided database before cleanup aligns with the ownership semantics in the store implementation. This prevents resource leaks.

key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)

44-48: LGTM!

The fixture correctly accepts the duckdb_path dependency, initializes the store with the provided path, and ensures cleanup via close(). This aligns with the PR's lifecycle management refactor.

key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1)

47-51: LGTM once duckdb_path fixture is fixed.

The store fixture correctly accepts and uses the duckdb_path parameter, consistent with the async version and lifecycle management patterns.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)

8-16: ExitStack import usage is consistent

ExitStack is imported and used only in BaseContextManagerStore, which cleanly centralizes sync resource cleanup; no issues here.

key-value/key-value-aio/src/key_value/aio/stores/base.py (5)

9-9: LGTM: AsyncExitStack import

Correct import for async context stack management.


89-90: LGTM: stable_api flag implementation

The flag is properly initialized, documented, and triggers appropriate warnings for unstable stores.

Also applies to: 102-103, 120-123


446-458: LGTM: Constructor correctly initializes exit stack and ownership flag

The client_provided_by_user attribute is stored for subclass use, as documented.


466-470: LGTM: Context manager lifecycle is correctly implemented

  • __aexit__ return type bool | None correctly matches AsyncExitStack (past review addressed)
  • aclose() usage is correct for async exit stacks
  • Setup override properly ensures exit stack entry before _setup(), enabling cleanup registration

Also applies to: 472-481, 483-487, 489-498


460-464: Async context manager protocol does not support concurrent entry—original concern is invalid

The Python async context manager protocol (__aenter__/__aexit__) expects a single entry per instance. Concurrent __aenter__ calls violate this contract and are not a supported usage pattern. The check-then-act pattern here is defensive programming: it ensures the exit stack is entered exactly once while allowing setup() to be called multiple times. No concurrent entry patterns exist in tests or usage.

Likely an incorrect or invalid review comment.
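The single-entry guard discussed above can be sketched as follows. This is an illustrative standalone version (class name and the `enter_count` instrumentation are hypothetical), not the library's implementation:

```python
import asyncio
from contextlib import AsyncExitStack


class SketchAsyncStore:
    """Sketch of the check-then-act guard: the exit stack is entered
    at most once even if setup() runs repeatedly."""

    def __init__(self) -> None:
        self._exit_stack = AsyncExitStack()
        self._exit_stack_entered = False
        self.enter_count = 0  # instrumentation for this demo only

    async def _ensure_exit_stack_entered(self) -> None:
        # Check-then-act: enter the stack only on the first call.
        if not self._exit_stack_entered:
            await self._exit_stack.__aenter__()
            self._exit_stack_entered = True
            self.enter_count += 1

    async def setup(self) -> None:
        await self._ensure_exit_stack_entered()

    async def close(self) -> None:
        if self._exit_stack_entered:
            await self._exit_stack.aclose()
            self._exit_stack_entered = False


async def demo() -> int:
    store = SketchAsyncStore()
    await store.setup()
    await store.setup()  # repeated setup is a no-op for the stack
    await store.close()
    return store.enter_count


entries = asyncio.run(demo())
```

Because entry happens inside a single task, the check-then-act is safe; it guards against repeated calls, not concurrent ones.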

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)

72-73: Good: proper cleanup of user-provided database.

Correctly closes the user-provided db instance after the store is closed, preventing resource leaks. This aligns with the ownership model where the caller manages the database lifecycle when providing it explicitly.

Comment on lines 19 to 24
@pytest.fixture(scope="session")
async def rocksdb_path(self) -> AsyncGenerator[Path, None]:
with TemporaryDirectory() as temp_dir:
db_path = Path(temp_dir) / "test_db"
db_path.mkdir(parents=True, exist_ok=True)
yield db_path

⚠️ Potential issue | 🟠 Major

Remove unnecessary async from session-scoped generator fixture.

The fixture uses synchronous operations only (TemporaryDirectory, Path.mkdir) and contains no await calls. Making it async is unnecessary and may cause compatibility issues with pytest's session scope. The sync variant correctly uses a regular generator.

Apply this diff:

     @pytest.fixture(scope="session")
-    async def rocksdb_path(self) -> AsyncGenerator[Path, None]:
+    def rocksdb_path(self) -> Generator[Path, None, None]:
         with TemporaryDirectory() as temp_dir:
             db_path = Path(temp_dir) / "test_db"
             db_path.mkdir(parents=True, exist_ok=True)
             yield db_path

Add the import:

-from collections.abc import AsyncGenerator
+from collections.abc import Generator

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py around lines 19
to 24, the fixture is unnecessarily declared async and typed as AsyncGenerator
while it performs only synchronous operations; change the fixture to a regular
(non-async) generator by removing the async keyword, update its return type to a
synchronous Generator (e.g., Generator[Path, None, None] or Iterator[Path]), and
add the corresponding import from typing at the top of the file; keep the
TemporaryDirectory context, Path.mkdir, and yield as-is.

Comment on lines 28 to 37
def store(self, rocksdb_path: Path) -> RocksDBStore:
"""Create a RocksDB store for testing."""
# Create a temporary directory for the RocksDB database
with TemporaryDirectory() as temp_dir:
db_path = Path(temp_dir) / "test_db"
rocksdb_store = RocksDBStore(path=db_path)
yield rocksdb_store
store = RocksDBStore(path=rocksdb_path)

# Wipe the store before returning it
client = store._db
for key in client.keys(backwards=True):
client.delete(key=key)

return store

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Verify RocksDB locking with session-scoped path and function-scoped store.

Each test creates a new RocksDBStore instance pointing to the same session-scoped path. RocksDB enforces single-writer constraints, so if a previous test's store isn't properly closed before the next test starts, lock errors may occur. The fixture relies on __del__ cleanup or explicit close() calls, which is timing-dependent.

Confirm that inherited tests from BaseStoreTests properly close the store, or consider making the fixture yield and add explicit cleanup.


🏁 Script executed:

#!/bin/bash
# Check if BaseStoreTests fixtures or teardown handle store cleanup

# Search for store cleanup in base test class
ast-grep --pattern 'class BaseStoreTests {
  $$$
  close
  $$$
}'

# Check for fixture cleanup patterns
rg -n -A5 'def store' key-value/key-value-aio/tests/stores/base.py

# Look for __aexit__ or close calls in base tests
rg -n -C3 'async with.*store|await.*\.close\(\)' key-value/key-value-aio/tests/stores/base.py

Length of output: 539


Make the store fixture yield and add explicit cleanup to match base fixture pattern.

The base fixture (lines 266–273 in base.py) uses yield with either async with or explicit await store.close(). Your RocksDB fixture uses return with no cleanup, relying on unreliable __del__ timing. Since each test gets a new store instance pointing to the same session-scoped path, stale locks will accumulate and cause intermittent failures.

Change to:

@pytest.fixture
async def store(self, rocksdb_path: Path):
    """Create a RocksDB store for testing."""
    store = RocksDBStore(path=rocksdb_path)
    
    # Wipe the store before returning it
    client = store._db
    for key in client.keys(backwards=True):
        client.delete(key=key)
    
    yield store
    await store.close()
🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py around lines
28–37, the fixture currently returns a RocksDBStore without performing explicit
cleanup, causing stale locks; change the fixture to be an async pytest fixture
that yields the store (keep the pre-yield wipe loop) and after the yield call
await store.close() so the store is deterministically closed and cleaned up
between tests.

Comment on lines 399 to +470
class BaseContextManagerStore(BaseStore, ABC):
"""An abstract base class for context manager stores."""
"""An abstract base class for context manager stores.
Stores that accept a client parameter should pass `client_provided_by_user=True` to
the constructor. This ensures the store does not manage the lifecycle of user-provided
clients (i.e., does not close them).
The base class provides an ExitStack that stores can use to register cleanup
callbacks. Stores should add their cleanup operations to the exit stack as needed.
The base class handles entering and exiting the exit stack.
"""

_client_provided_by_user: bool
_exit_stack: ExitStack
_exit_stack_entered: bool

def __init__(self, *, client_provided_by_user: bool = False, **kwargs: Any) -> None:
"""Initialize the context manager store with client ownership configuration.
Args:
client_provided_by_user: Whether the client was provided by the user. If True,
the store will not manage the client's lifecycle (will not close it).
Defaults to False.
**kwargs: Additional arguments to pass to the base store constructor.
"""
self._client_provided_by_user = client_provided_by_user
self._exit_stack = ExitStack()
self._exit_stack_entered = False
super().__init__(**kwargs)

def _ensure_exit_stack_entered(self) -> None:
"""Ensure the exit stack has been entered."""
if not self._exit_stack_entered:
self._exit_stack.__enter__()
self._exit_stack_entered = True

def __enter__(self) -> Self:
# Enter the exit stack
self._ensure_exit_stack_entered()
self.setup()
return self

def __exit__(self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None) -> None:
self._close()
def __exit__(
self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
) -> bool | None:
# Close the exit stack, which handles all cleanup
if self._exit_stack_entered:
result = self._exit_stack.__exit__(exc_type, exc_value, traceback)
self._exit_stack_entered = False

return result
return None

def close(self) -> None:
self._close()
# Close the exit stack if it has been entered
if self._exit_stack_entered:
self._exit_stack.close()
self._exit_stack_entered = False

@abstractmethod
def _close(self) -> None:
"""Close the store."""
...
def setup(self) -> None:
"""Initialize the store if not already initialized.
This override ensures the exit stack is entered before the store's _setup()
method is called, allowing stores to register cleanup callbacks during setup.
"""
# Ensure exit stack is entered
self._ensure_exit_stack_entered()
# Call parent setup
super().setup()

🧹 Nitpick | 🔵 Trivial

BaseContextManagerStore ExitStack lifecycle is sound; __enter__ can be simplified

The ExitStack-based lifecycle (_ensure_exit_stack_entered, __enter__/__exit__, close, and setup override) correctly ensures callbacks are registered before setup and cleaned up once, and returning the result of ExitStack.__exit__ preserves exception-suppression semantics. One minor simplification: __enter__ currently calls _ensure_exit_stack_entered() and then setup(), but setup() already calls _ensure_exit_stack_entered(), so you could safely collapse __enter__ to:

def __enter__(self) -> Self:
    self.setup()
    return self

to avoid redundant checks.

🤖 Prompt for AI Agents
In key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py around
lines 402-470, __enter__ redundantly calls _ensure_exit_stack_entered() before
calling setup(), but setup() already calls _ensure_exit_stack_entered();
simplify __enter__ to just call self.setup() and return self, removing the
duplicate _ensure_exit_stack_entered() call to avoid an unnecessary check while
preserving behavior.


claude bot commented Nov 16, 2025

Test Failure Analysis

Summary: The DuckDB persistent store tests are failing because the session-scoped duckdb_path fixture's temporary directory is being deleted immediately after creation, leaving an invalid database path.

Root Cause: In the test fixture refactoring (lines 35-42 of key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py), the duckdb_path fixture uses a session-scoped fixture with TemporaryDirectory() as a context manager. The issue is:

  1. The fixture creates a temporary directory with with TemporaryDirectory() as temp_dir:
  2. It yields Path(temp_dir) / "test.db"
  3. The with block exits immediately after the first yield, deleting the temp directory
  4. All subsequent tests in the session try to use the deleted path, resulting in:
    _duckdb.IOException: IO Error: Could not read from file "/tmp/tmpzzmx3rmm/test.db": Is a directory
    

The temp directory is cleaned up, but the path is still being used by tests, pointing to a non-existent or invalid location.

Suggested Solution: Follow the RocksDB pattern and ensure the temp directory persists for the entire session:

  1. File to modify: key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (lines 35-50)

  2. Change the fixture to create a real file path instead of just a directory:

@pytest.fixture(scope="session")
async def duckdb_path(self) -> AsyncGenerator[Path, None]:
    with TemporaryDirectory() as temp_dir:
        db_path = Path(temp_dir) / "test.db"
        yield db_path

@override
@pytest.fixture
async def store(self, duckdb_path: Path) -> AsyncGenerator[DuckDBStore, None]:
    """Test with persistent DuckDB database file."""
    duckdb_store = DuckDBStore(database_path=duckdb_path)
    
    # Wipe existing data before each test
    try:
        # Clear all data from the database
        duckdb_store._connection.execute(f"DROP TABLE IF EXISTS {duckdb_store._table_name}")
    except Exception:
        pass  # Table might not exist on first run
    
    yield duckdb_store
    await duckdb_store.close()

Alternative approach (if wiping is complex): Change the scope to function instead of session, so each test gets its own temp directory:

@pytest.fixture  # Remove scope="session"
async def duckdb_path(self) -> AsyncGenerator[Path, None]:
    with TemporaryDirectory() as temp_dir:
        db_path = Path(temp_dir) / "test.db"
        yield db_path
  3. Also apply the same fix to the sync version in key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py

Why this fixes the issue:

  • Currently, the session-scoped fixture's context manager exits too early, deleting the directory before tests run
  • By either wiping the DB between tests (keeping session scope) OR changing to function scope, the temp directory will exist when tests need it
  • The RocksDB tests use the same session-scoped pattern but wipe the database between tests, which works correctly
Detailed Analysis

Error from logs:

_duckdb.IOException: IO Error: Could not read from file "/tmp/tmpzzmx3rmm/test.db": Is a directory

src/key_value/aio/stores/duckdb/store.py:156: IOException

This error occurs at:

self._connection = duckdb.connect(database=database_path)

Affected tests (all DuckDB persistent tests):

  • TestDuckDBStorePersistent::test_store[with_ctx_manager]
  • TestDuckDBStorePersistent::test_store[no_ctx_manager]
  • TestDuckDBStorePersistent::test_empty_get[with_ctx_manager]
  • TestDuckDBStorePersistent::test_empty_get[no_ctx_manager]
  • TestDuckDBStorePersistent::test_empty_put[with_ctx_manager]
  • And all other tests in TestDuckDBStorePersistent class

Why RocksDB tests pass:
RocksDB tests were updated similarly but include database wiping logic:

@override
@pytest.fixture
def store(self, rocksdb_path: Path) -> RocksDBStore:
    store = RocksDBStore(path=rocksdb_path)
    
    # Wipe the store before returning it
    client = store._db
    for key in client.keys(backwards=True):
        client.delete(key=key)
    
    return store

This approach ensures the path exists (from session fixture) and data is cleared between tests (from store fixture).

Related Files

Primary Files to Fix:

  • key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py:35-50 - Async DuckDB test fixtures
  • key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py:38-50 - Sync DuckDB test fixtures (generated)

For Reference (working pattern):

  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py:19-36 - RocksDB uses session-scoped path + database wiping

Source Code:

  • key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py:156 - Where the error occurs (database connection)


claude bot commented Nov 16, 2025

Test Failure Analysis

Summary: The Windows test run is failing during teardown with a PermissionError when trying to delete the diskcache database file, caused by the file still being held open by a DiskStore instance that wasn't properly closed.

Root Cause:

The PR introduces client ownership tracking (via _client_provided_by_user) which prevents stores from closing user-provided clients. However, the test fixture creates a store-owned cache (not user-provided), but the fixture itself doesn't properly close the store before the TemporaryDirectory context manager tries to clean up.

Looking at key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py:22-24:

@pytest.fixture(scope="session")
def disk_store(self) -> Generator[DiskStore, None, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        yield DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)

The problem:

  1. The DiskStore creates its own cache (via directory= parameter), so _client_provided_by_user = False
  2. The fixture yields the store but never calls store.close() before the with block exits
  3. When the TemporaryDirectory context manager exits, it tries to delete the directory
  4. On Windows, the diskcache database file is still held open by the DiskStore
  5. Windows can't delete files that are open → PermissionError: [WinError 32]

Error from logs:

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmp2k5mcu5p\\cache.db'

Suggested Solution: Update the test fixture to properly close the store before the TemporaryDirectory cleanup:

File to modify: key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (lines 21-24)

@pytest.fixture(scope="session")
def disk_store(self) -> Generator[DiskStore, None, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        store = DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)
        yield store
        store.close()  # Close the store before TemporaryDirectory cleanup

Also update the async version: key-value/key-value-aio/tests/stores/disk/test_disk.py (lines 21-24)

@pytest.fixture(scope="session")
async def disk_store(self) -> AsyncGenerator[DiskStore, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        store = DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)
        yield store
        await store.close()  # Close the store before TemporaryDirectory cleanup

Why this fixes the issue:

  • The store.close() call will invoke _close() (because _client_provided_by_user = False)
  • _close() closes the diskcache Cache object, releasing the file handles
  • Windows can then successfully delete the temporary directory

Related: This is the same Windows file locking issue that PR #235 is attempting to address for other store tests.

Detailed Analysis

Full error traceback:

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmp2k5mcu5p\\cache.db'
  File "D:\\a\\py-key-value\\py-key-value\\key-value\\key-value-sync\\tests\\code_gen\\stores\\disk\\test_disk.py", line 23, in disk_store
    with tempfile.TemporaryDirectory() as temp_dir:

Key changes from PR #245:

The PR added _client_provided_by_user tracking to BaseContextManagerStore:

async def __aexit__(self, ...) -> None:
    # Only close the client if the store created it
    if not getattr(self, "_client_provided_by_user", False):
        await self._close()

async def close(self) -> None:
    # Only close the client if the store created it
    if not getattr(self, "_client_provided_by_user", False):
        await self._close()

And DiskStore.__init__ sets:

if disk_cache:
    self._cache = disk_cache
    self._client_provided_by_user = True
elif directory:
    # Creates cache internally
    self._cache = Cache(directory=directory, ...)
    self._client_provided_by_user = False  # Store owns this cache

The test creates a store with directory=, so the store SHOULD close it, but the test fixture never calls close() or uses the context manager, so the cache is never closed.

Related Files

To Fix:

  • key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py:21-24 - Sync test fixture
  • key-value/key-value-aio/tests/stores/disk/test_disk.py:21-24 - Async test fixture

Root Cause:

  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py:444-450 - BaseContextManagerStore.close() only calls _close() if store owns client
  • key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py:65-83 - DiskStore sets _client_provided_by_user flag

Related Issue:

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (4)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

31-40: Fix Rdict.delete() API usage and add proper cleanup.

Two critical issues:

  1. Line 38: client.delete(key=key) uses a keyword argument, but Rdict.delete() expects a positional argument.
  2. The fixture returns the store without cleanup, which can cause RocksDB lock errors when multiple tests use the same session-scoped path.

Apply this diff:

 @override
 @pytest.fixture
-def store(self, rocksdb_path: Path) -> RocksDBStore:
+def store(self, rocksdb_path: Path) -> Generator[RocksDBStore, None, None]:
     """Create a RocksDB store for testing."""
     store = RocksDBStore(path=rocksdb_path)
 
     # Wipe the store before returning it
     client = store._db
     for key in client.keys(backwards=True):
-        client.delete(key=key)
+        client.delete(key)
 
-    return store
+    yield store
+    store.close()
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)

19-24: Remove unnecessary async from fixture with only synchronous operations.

The fixture performs only synchronous operations (TemporaryDirectory, Path.mkdir) with no await calls. Making it async is unnecessary and may cause pytest compatibility issues with class-scoped fixtures.

Apply this diff:

+from collections.abc import Generator
+
 @pytest.fixture(scope="class")
-async def rocksdb_path(self) -> AsyncGenerator[Path, None]:
+def rocksdb_path(self) -> Generator[Path, None, None]:
     with TemporaryDirectory() as temp_dir:
         db_path = Path(temp_dir) / "test_db"
         db_path.mkdir(parents=True, exist_ok=True)
         yield db_path

28-37: Fix Rdict.delete() API usage and convert to async yield pattern.

Two critical issues:

  1. Line 35: client.delete(key=key) uses a keyword argument, but Rdict.delete() expects a positional argument.
  2. The fixture is synchronous and returns the store without cleanup. For async stores, the fixture must be async, yield the store, then await store.close() to avoid RocksDB lock errors between tests.

Apply this diff:

+from collections.abc import AsyncGenerator
+
 @override
 @pytest.fixture
-def store(self, rocksdb_path: Path) -> RocksDBStore:
+async def store(self, rocksdb_path: Path) -> AsyncGenerator[RocksDBStore, None]:
     """Create a RocksDB store for testing."""
     store = RocksDBStore(path=rocksdb_path)
 
     # Wipe the store before returning it
     client = store._db
     for key in client.keys(backwards=True):
-        client.delete(key=key)
+        client.delete(key)
 
-    return store
+    yield store
+    await store.close()
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)

35-39: Past review issue incompletely addressed: fixture should be a regular generator.

This fixture contains no async operations yet is declared async. The past review comment correctly identified this and was marked as addressed, but the fixture is still async. Since TemporaryDirectory is synchronous and there are no await statements, this should be a regular generator matching the sync version pattern.

Apply this diff:

-    @pytest.fixture
-    async def duckdb_path(self) -> AsyncGenerator[Path, None]:
+    @pytest.fixture
+    def duckdb_path(self) -> Generator[Path, None, None]:
         with TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
             db_path = Path(temp_dir) / "test.db"
             yield db_path

Add the missing import at the top of the file:

-from collections.abc import AsyncGenerator
+from collections.abc import AsyncGenerator, Generator
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 895072c and 4bb78cf.

📒 Files selected for processing (8)
  • key-value/key-value-aio/tests/stores/disk/test_disk.py (2 hunks)
  • key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1 hunks)
  • key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (1 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1 hunks)
  • key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (8)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
  • RocksDBStore (23-191)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (3)
  • keys (389-395)
  • delete (356-360)
  • close (455-459)
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (2)
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (4)
  • duckdb_path (39-42)
  • store (26-30)
  • store (46-52)
  • store (63-67)
key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1)
  • DuckDBStore (62-353)
key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (3)
key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (2)
  • multi_disk_path (23-25)
  • store (29-36)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (1)
  • store (29-34)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1)
  • MultiDiskStore (31-151)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (3)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
  • rocksdb_path (23-27)
  • store (31-40)
key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1)
  • RocksDBStore (20-203)
key-value/key-value-aio/src/key_value/aio/stores/base.py (3)
  • keys (417-423)
  • delete (384-388)
  • close (483-487)
key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1)
key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (2)
  • multi_disk_path (26-28)
  • store (32-39)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2)
key-value/key-value-aio/tests/stores/disk/test_disk.py (3)
  • disk_path (20-22)
  • store (26-31)
  • disk_cache (34-36)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1)
  • DiskStore (21-130)
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1)
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (4)
  • duckdb_path (36-39)
  • store (23-27)
  • store (43-49)
  • store (60-64)
key-value/key-value-aio/tests/stores/disk/test_disk.py (2)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (3)
  • disk_path (23-25)
  • store (29-34)
  • disk_cache (37-39)
key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1)
  • DiskStore (18-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-aio)
  • GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-aio)
  • GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, macos-14, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-aio)
  • GitHub Check: test_quick (3.10, macos-14, key-value/key-value-aio)
  • GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-aio)
  • GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-sync)
  • GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-aio)
🔇 Additional comments (14)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2)

7-7: LGTM! Clean fixture refactoring.

The Path-based disk_path fixture aligns with the new lifecycle management pattern and matches the async variant. Class scope appropriately reduces temp directory overhead while the TemporaryDirectory context manager ensures proper cleanup.

Also applies to: 22-25


29-32: LGTM! Fixture dependencies correctly structured.

The store fixture now constructs DiskStore with the directory parameter, enabling store-managed lifecycle. The disk_cache fixture properly extracts and validates the cache instance. Function-scoped store with cache.clear() ensures test isolation.

Also applies to: 37-39

key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (2)

38-42: Previous issues correctly addressed.

The fixture now properly yields the database file path without calling mkdir(), and uses function scope (default) for test isolation. Implementation matches the async version and correctly lets DuckDB create the database file.


46-52: Proper lifecycle management.

The fixture correctly creates the store with the provided path and explicitly closes it after yielding, ensuring the connection is cleaned up before the temporary directory is removed.
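The ordering concern here (close the connection before the temporary directory holding the database file is removed) can be illustrated with stdlib `sqlite3` standing in for DuckDB; table and column names are illustrative:

```python
import sqlite3
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as temp_dir:
    # Only the path is created up front; the engine creates the file itself,
    # matching the fixture that yields a path without calling mkdir().
    db_path = Path(temp_dir) / "test.db"
    conn = sqlite3.connect(str(db_path))
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("INSERT INTO kv VALUES ('a', '1')")
    assert conn.execute("SELECT v FROM kv WHERE k='a'").fetchone() == ("1",)
    conn.close()  # explicit close BEFORE the directory is torn down
# Directory (and database file) are gone after the with-block.
assert not db_path.exists()
```

Closing in the other order risks a cleanup error (notably on Windows, where an open file handle blocks directory removal).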

key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)

22-27: LGTM: Fixture lifecycle is correct.

The yield-based pattern correctly keeps the temporary directory alive for all class-scoped tests.


75-76: LGTM: Correct lifecycle management for user-provided resources.

Explicitly closing the user-provided database instance aligns with the ownership model where the store only manages resources it creates.
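The ownership rule the review refers to ("close only what you created") can be sketched with a minimal store; `FakeClient` and the `_client_provided_by_user` flag are illustrative stand-ins for the real client classes:

```python
from typing import Optional


class FakeClient:
    """Stand-in for a database client/connection."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


class Store:
    """Minimal sketch: the store closes only clients it created itself."""

    def __init__(self, client: Optional[FakeClient] = None) -> None:
        self._client_provided_by_user = client is not None
        self._client = client if client is not None else FakeClient()

    def close(self) -> None:
        if not self._client_provided_by_user:
            self._client.close()


# Store-created client: closed by the store.
s1 = Store()
s1.close()
assert s1._client.closed

# User-provided client: the store leaves it open; the caller owns cleanup,
# exactly as the test teardown closes the database instance itself.
external = FakeClient()
s2 = Store(client=external)
s2.close()
assert not external.closed
external.close()
assert external.closed
```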

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)

72-73: LGTM: Correct lifecycle management for user-provided resources.

Explicitly closing the user-provided database instance aligns with the ownership model where the store only manages resources it creates.

key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)

43-49: LGTM!

The fixture correctly accepts the external duckdb_path, delegates connection lifecycle to the store (which will register cleanup via exit stack since no connection is provided), and properly awaits close(). This aligns with the PR's client ownership pattern.

key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1)

29-36: Manual cache clearing is the correct teardown approach for this fixture pattern.

The fixture intentionally creates a bare store instance without invoking the store's lifecycle management (setup/context manager). Since _exit_stack is never entered, calling close() or relying on __aexit__() would be no-ops. Manual cache clearing achieves the test's goal—preserve the directory while wiping data between tests—and aligns with the ownership semantics where the test provides the path and owns cleanup.

This pattern is consistent across both sync and async test fixtures.
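The "wipe data, keep the path" teardown described above reduces to this shape; `BareStore` and its `_cache` dict are illustrative stand-ins for the real store and cache types:

```python
class BareStore:
    """Sketch of a store built without entering its lifecycle management."""

    def __init__(self, path: str) -> None:
        self.path = path                       # shared, class-scoped resource
        self._cache: dict[str, str] = {}       # per-test data


store = BareStore("shared-dir")  # hypothetical path
store._cache["key"] = "value"

# Teardown: clear the data, preserve the directory for the next test.
store._cache.clear()
assert store._cache == {}
assert store.path == "shared-dir"
```

Because `_exit_stack` was never entered, calling `close()` would do nothing here, which is why the fixture clears the cache manually.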

key-value/key-value-aio/tests/stores/disk/test_disk.py (3)

24-31: LGTM! Store construction aligns with new lifecycle ownership.

The fixture now constructs DiskStore internally from a Path, properly utilizing the new directory parameter. Cache clearing via private attribute access is acceptable in tests and correctly suppressed.


33-36: LGTM! Fixture dependency aligns with verification pattern.

The disk_cache fixture now derives from store, creating appropriate coupling for tests that verify internal storage format. Runtime type assertion provides safety despite type hints.


19-22: The async test file is auto-generated from the sync source; review concerns misidentify intentional design patterns.

The file header confirms: "this file is auto-generated by 'build_sync_library.py'". The pattern concerns mirror the sync source exactly:

  • Class-scoped directory + function-scoped store: Both sync and async versions use this design. Each test method receives a fresh store instance; _cache.clear() provides per-test cleanup.
  • AsyncGenerator with sync context manager: The sync source uses identical Generator + sync context manager. The async version preserves this parity by design.

Suggesting changes to the async file misses that it's a generated artifact—edits will be overwritten. Any design improvements belong in the sync source file instead.

Likely an incorrect or invalid review comment.

key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (2)

25-28: Clean path fixture pattern.

Good separation of concerns - the path fixture handles temp directory lifecycle independently. The ignore_cleanup_errors=True flag appropriately handles potential cleanup issues (file locks, etc.).


32-39: Fixture pattern matches async version - good consistency.

The post-yield cache clearing is consistent with the async implementation. However, given the PR's focus on "client ownership split for store lifecycle management" and that MultiDiskStore now registers _sync_close() via the exit stack in _setup(), the manual cache clearing may be redundant. The store's lifecycle management should handle cleanup via __del__ and the exit stack.

Consider whether this manual cleanup is still necessary, or if the store's lifecycle management is sufficient now.

Based on learnings from the PR objectives about lifecycle management changes.
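The exit-stack registration mentioned here can be sketched as follows; the class and method names mirror the PR's pattern but are simplified stand-ins:

```python
from contextlib import ExitStack


class StoreWithExitStack:
    """Sketch: _setup() registers cleanup on an exit stack; close() unwinds it."""

    def __init__(self) -> None:
        self._exit_stack = ExitStack()
        self._setup_done = False
        self.closed = False

    def _setup(self) -> None:
        if not self._setup_done:
            # Cleanup is registered only once setup has actually run.
            self._exit_stack.callback(self._close)
            self._setup_done = True

    def _close(self) -> None:
        self.closed = True

    def close(self) -> None:
        # Runs all registered callbacks (LIFO); a no-op if nothing registered.
        self._exit_stack.close()


store = StoreWithExitStack()
store.close()            # before _setup(): nothing to unwind
assert not store.closed

store._setup()
store.close()            # after _setup(): registered cleanup runs
assert store.closed
```

This is why manual cache clearing may now be redundant: once `_setup()` has run, `close()` (or the exit stack during `__del__`) already performs the cleanup.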

Comment on lines 22 to 25
@pytest.fixture
async def multi_disk_path(self) -> AsyncGenerator[Path, None]:
with tempfile.TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
yield Path(temp_dir)

🧹 Nitpick | 🔵 Trivial

Async fixture performs no async operations.

The fixture is declared async def with return type AsyncGenerator[Path, None], but uses only synchronous operations (tempfile.TemporaryDirectory with regular with). The sync version in the relevant snippets correctly uses Generator[Path, None, None].

If pytest-asyncio requires async fixtures in this context, consider clarifying with a comment. Otherwise, use def with Generator[Path, None, None] for accuracy.

     @pytest.fixture
-    async def multi_disk_path(self) -> AsyncGenerator[Path, None]:
+    def multi_disk_path(self) -> Generator[Path, None, None]:
         with tempfile.TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
             yield Path(temp_dir)

Add the import:

-from collections.abc import AsyncGenerator
+from collections.abc import AsyncGenerator, Generator

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/disk/test_multi_disk.py around lines 22
to 25, the fixture is declared async and typed as AsyncGenerator but performs
only synchronous operations; change it to a synchronous fixture by replacing
"async def" with "def" and use the correct typing Generator[Path, None, None],
and add the necessary import from collections.abc (Generator) at the top of the file; if
pytest-asyncio truly requires an async fixture, instead leave async and add a
clarifying comment explaining no await is needed.

@claude

claude bot commented Nov 16, 2025

Test Failure Analysis

Summary: Tests are failing across all platforms due to a missing pytest fixture per_test_temp_dir that was referenced in the FileTreeStore, DiskStore, and other test refactorings but never defined.

Root Cause: The PR refactored multiple store test fixtures to use a common per_test_temp_dir fixture instead of creating their own temporary directories, but the fixture definition was not added to any conftest.py file.

The error appears in tests like:

  • key-value/key-value-aio/tests/stores/filetree/test_filetree.py:20
  • key-value/key-value-aio/tests/stores/disk/test_disk.py

Suggested Solution: Add the per_test_temp_dir fixture to key-value/key-value-aio/tests/conftest.py:

import tempfile
from pathlib import Path
from collections.abc import Generator

@pytest.fixture
def per_test_temp_dir() -> Generator[Path, None, None]:
    """Create a temporary directory that is cleaned up after each test."""
    with tempfile.TemporaryDirectory() as temp_dir:
        yield Path(temp_dir)

This fixture should also be added to the sync package's conftest:

  • key-value/key-value-sync/tests/conftest.py

After adding the fixture, run make codegen to propagate changes to the sync package.

Detailed Analysis

Error Details

The test failure shows:

E       fixture 'per_test_temp_dir' not found
>       available fixtures: _class_scoped_runner, _function_scoped_runner, ..., tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, ...

Files Affected

The following test files were refactored to use per_test_temp_dir:

  • key-value/key-value-aio/tests/stores/filetree/test_filetree.py - line 20
  • key-value/key-value-aio/tests/stores/disk/test_disk.py - line 19
  • Similar changes in other store tests based on the PR description

Why This Happened

The refactoring changed from:

@pytest.fixture
async def store(self) -> AsyncGenerator[FileTreeStore, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        temp_path = Path(temp_dir)
        yield FileTreeStore(data_directory=temp_path, ...)

To:

@pytest.fixture
async def store(self, per_test_temp_dir: Path) -> FileTreeStore:
    return FileTreeStore(data_directory=per_test_temp_dir, ...)

This pattern is cleaner and allows for fixture reuse, but requires the per_test_temp_dir fixture to be defined globally.

Related Files

Files that need the fixture:

  • key-value/key-value-aio/tests/conftest.py - Add the fixture definition here
  • key-value/key-value-sync/tests/conftest.py - Add for the sync package (or generate via codegen)

Files using the fixture:

  • key-value/key-value-aio/tests/stores/filetree/test_filetree.py:20 - FileTreeStore tests
  • key-value/key-value-aio/tests/stores/disk/test_disk.py:19 - DiskStore tests
  • Any other store tests that were refactored in this PR

Note: There's also a separate Windows-specific PermissionError during test cleanup, but that appears to be a secondary issue related to file locking. The missing fixture is the primary blocker preventing tests from running at all.


@strawgate strawgate merged commit 17add25 into main Nov 17, 2025
79 checks passed
@strawgate strawgate deleted the claude/issue-232-20251110-1652 branch November 17, 2025 16:10