Implement client ownership split for store lifecycle management #245
Conversation
Stores now only manage the lifecycle of clients they create internally.
When a user provides a client to a store constructor, the store no longer
enters/exits the client's context manager or closes it. This enables:
- Clear ownership semantics ("you create it, you manage it")
- Prevention of double-context entry issues
- Client reuse across multiple stores
- Better alignment with Python resource management patterns
Changes:
- Add _client_provided_by_user attribute to BaseContextManagerStore
- Update __aexit__ and close() to conditionally call _close()
- Update all affected stores (Redis, MongoDB, DynamoDB, Valkey, Memcached,
Disk, DuckDB, RocksDB, Elasticsearch) to track client ownership
- Remove duplicate __aexit__ override in RocksDB store
- Update all docstrings to clarify lifecycle management expectations
- Generate corresponding sync package changes
This is a breaking change for users who relied on stores closing
user-provided clients, but improves correctness and enables important
use cases like connection pooling and client reuse.
Closes #232
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: William Easton <[email protected]>
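In practice, the split looks like this from the caller's side. A minimal sketch assuming the async RedisStore from this PR, which accepts both `url=` and `client=` per the diff; `Redis.from_url` and `aclose()` are redis-py APIs, but the exact import path is assumed from the file layout:

```python
import asyncio

from redis.asyncio import Redis

from key_value.aio.stores.redis.store import RedisStore  # path assumed from the diff


async def main() -> None:
    # Store-created client: the store owns it and closes it on context exit.
    async with RedisStore(url="redis://localhost:6379") as owned_store:
        ...  # use owned_store

    # User-provided client: "you create it, you manage it" -- the store never
    # closes it, so one client can safely back several stores.
    client = Redis.from_url("redis://localhost:6379")
    store_a = RedisStore(client=client)
    store_b = RedisStore(client=client)
    await store_a.close()
    await store_b.close()
    await client.aclose()  # the caller's responsibility


asyncio.run(main())
```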
- Initialize _client_provided_by_user = False in BaseContextManagerStore.__init__()
- Remove redundant = False assignments from all store implementations
- Add ownership checks in _close() methods to prevent closing user-provided clients
- Add ownership checks in __del__() methods where applicable (RocksDB, Disk)
- Fixes test failures where stores were closing user-provided clients

This simplifies the ownership pattern: stores only set the flag to True when users provide a client, relying on the base class default of False for internally-created clients.

Co-authored-by: William Easton <[email protected]>

…arguments

Changes:
- Updated BaseStore to accept stable_api parameter (default: False)
- Updated BaseContextManagerStore to accept client_provided_by_user parameter (default: False)
- Removed magic attribute setting in favor of explicit constructor arguments
- Updated all store implementations to pass these flags to base class constructors
- Removed redundant getattr calls in base class methods since attributes are always initialized
- Fixes initialization order bug where base class __init__ was resetting flags

This eliminates the bug where stores would set _client_provided_by_user = True, then call super().__init__() which would reset it back to False, causing user-provided clients to be incorrectly closed during cleanup.

Stores updated:
- Redis, DynamoDB, MongoDB, Elasticsearch, Memcached
- Valkey, RocksDB, Disk, DuckDB, Memory, MultiDiskStore

Co-authored-by: William Easton <[email protected]>

… fix test

- Remove redundant _client_provided_by_user checks from all store _close() methods since base class already guards these calls
- Add defensive getattr in __del__ methods to prevent AttributeError during cleanup
- Fix RocksDB test to properly close user-provided database before temp cleanup
- This addresses CodeRabbit feedback and fixes Windows test failures

Stores updated:
- RocksDB, Disk, DuckDB (both async and sync)
- Redis, Valkey, MongoDB, Elasticsearch, Memcached, DynamoDB (async only)

Co-authored-by: William Easton <[email protected]>

- MongoDB: Remove duplicate client cleanup from __aexit__, update _close() to call __aexit__(None, None, None) for symmetry with __aenter__
- DynamoDB: Move client cleanup before super().__aexit__() to prevent double-cleanup, set _client = None after closing
- RocksDB: Add try-except in __del__ for best-effort cleanup with appropriate noqa comments
- DuckDB: Use getattr for _is_closed check in __del__ for consistency
- Disk: Set _client_provided_by_user early for exception safety
- Memcached: Change truthiness check to explicit 'is not None' for consistency
- Valkey: Fix _close() to close the correct client (_connected_client) and set it to None for idempotency

Co-authored-by: William Easton <[email protected]>

…ment

- MongoDB: Fix double-cleanup by clearing client ref before super().__aexit__()
- MongoDB: Change _close() to use client.close() instead of __aexit__()
- MongoDB: Add null check in _close() since client may be None after context exit
- DynamoDB: Implement _close() to handle non-context-manager cleanup path
- Now properly passes exception info when exiting client's context manager

Co-authored-by: William Easton <[email protected]>

Replace self._client = None pattern with _client_context_entered flag to track whether the client context has been entered. This eliminates the need for type: ignore[assignment] comments and maintains type safety throughout the store's lifetime.

Changes:
- Added _client_context_entered boolean flag to MongoDB and DynamoDB stores
- Set flag to True in __aenter__ when entering client context
- Check flag in __aexit__ before exiting client context
- Update _close() to only close if context wasn't already exited
- Remove self._client = None assignments and type ignore comments

Co-authored-by: William Easton <[email protected]>

Previously, `_client_context_entered` was set to False BEFORE calling `super().__aexit__()`. This caused `_close()` to think the context wasn't already exited, leading to double cleanup attempts. Now the flag is reset in a finally block AFTER all cleanup completes, ensuring `_close()` correctly knows the context was already exited.

Affected stores:
- MongoDBStore (async and sync)
- DynamoDBStore (async only, no sync version)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Implements automatic client context entry at the base class level, allowing stores to work correctly even when not used with `async with`.

Changes:
- Add `_enter_client_context()` helper in BaseContextManagerStore
- Add `_get_client_for_context()` override point for stores to specify their client
- Add `_get_entered_client()` helper to retrieve the entered client
- Call `_enter_client_context()` in both `__aenter__()` and `setup()`
- Update MongoDB and DynamoDB stores to use the new base class pattern
- Remove manual `__aenter__`/`__aexit__` overrides from store implementations

This ensures that clients requiring context manager entry (like MongoDB's AsyncMongoClient and DynamoDB's aioboto3 client) are properly entered whether the store is used with `async with` or called directly.

Fixes #232

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Simplified client lifecycle management by:
- Removed _enter_client_context() method from base classes
- Replaced manual context management with AsyncExitStack/ExitStack
- Added _ensure_exit_stack() method that creates exit stack and registers client contexts
- Removed complex _close() methods from MongoDB and DynamoDB stores
- Exit stack now handles all cleanup automatically through its internal stack

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Previously, stores returned clients via `_get_client_for_context()` and the base class managed them. Now stores directly register their cleanup callbacks with the exit stack via `_register_cleanup_callbacks(stack)`. This gives stores more control over their cleanup registration and simplifies the base class by removing an abstraction layer.

Also updated codegen to map AsyncExitStack -> ExitStack and enter_async_context -> enter_context for sync code generation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Simplified the BaseContextManagerStore lifecycle management pattern by:
- Removing abstract _register_cleanup_callbacks method
- Initializing AsyncExitStack in __init__ instead of lazily
- Allowing stores to directly register cleanup callbacks via _exit_stack
- Stores now call _exit_stack.push_async_callback() or _exit_stack.callback() in their _setup() method to register cleanup operations

This gives stores direct control over exit stack management while keeping the base class responsible for entering and exiting the stack.

Updated all stores that inherit from BaseContextManagerStore:
- MongoDB, DynamoDB: register client context manager
- Redis, Valkey, Memcached, Elasticsearch: register async close callbacks
- RocksDB, Disk, MultiDisk, DuckDB: register sync close callbacks

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
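A condensed sketch of the final pattern described in the last few commits. `ExampleStore` and `FakeAsyncClient` are stand-ins, not classes from this repo; only the `_client_provided_by_user` / `_exit_stack` shape mirrors the PR:

```python
from contextlib import AsyncExitStack


class FakeAsyncClient:
    """Stand-in for a driver client with an async close()."""

    async def close(self) -> None:
        print("client closed")


class ExampleStore:
    def __init__(self, client: FakeAsyncClient | None = None) -> None:
        self._client = client if client is not None else FakeAsyncClient()
        self._client_provided_by_user = client is not None
        self._exit_stack = AsyncExitStack()  # created eagerly, per the last commit

    async def __aenter__(self) -> "ExampleStore":
        await self._exit_stack.__aenter__()
        await self._setup()
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        await self._exit_stack.aclose()

    async def _setup(self) -> None:
        # Cleanup is registered only when the store owns the client.
        if not self._client_provided_by_user:
            self._exit_stack.push_async_callback(self._client.close)
```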
📝 Walkthrough

Adds a stable_api flag to BaseStore and introduces BaseContextManagerStore using AsyncExitStack/ExitStack for centralized enter/exit lifecycle. Many stores now track whether a client/cache was provided by the user and register conditional cleanup via new _setup hooks and exit-stack callbacks; explicit per-store close overrides removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Store
    participant Base as BaseContextManagerStore
    participant Stack as AsyncExitStack/ExitStack
    participant Resource
    User->>Store: enter context
    Store->>Base: _ensure_exit_stack_entered()
    Base->>Stack: enter_async_context / enter_context
    Base->>Store: setup()
    alt Store owns Resource
        Store->>Stack: push_async_callback / callback(close_resource)
    end
    Store-->>User: ready
    User->>Store: use store
    User->>Store: exit context
    Base->>Stack: __aexit__ / close
    Stack->>Resource: invoke cleanup callbacks (owned only)
    Base-->>User: exited
```

```mermaid
sequenceDiagram
    participant Caller
    participant Store
    participant Base
    Caller->>Store: __init__(resource_provided?)
    alt resource provided
        Store->>Base: super(..., client_provided_by_user=True)
        Note right of Base: no automatic cleanup registered
    else resource created
        Store->>Base: super(..., client_provided_by_user=False)
        Note right of Base: cleanup registered during setup
    end
```
Test Failure Analysis

Summary: The static analysis check for the generated sync package fails: the generated sync stores call `push_async_callback` on `ExitStack`, a method that only exists on `AsyncExitStack`.

Root Cause: The PR adds `self._exit_stack.push_async_callback(...)` registrations in the async stores' `_setup()` methods, but the async-to-sync codegen does not map `push_async_callback` to its sync equivalent, so the generated sync code keeps the async-only call.

The type checker correctly identifies this issue: `ExitStack` has no `push_async_callback` attribute.

Suggested Solution: Add the missing method name mapping (`push_async_callback` -> `callback`) to the codegen mappings in `scripts/build_sync_library.py` and regenerate the sync package.

Why this fixes the issue: with the mapping in place, the regenerated sync stores call `ExitStack.callback(...)`, which exists and runs the cleanup synchronously at close time.

Detailed Analysis

Failed Job: static_analysis (key-value/key-value-sync), which runs `uv run basedpyright .`

Error Log Excerpts: `Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]" (reportAttributeAccessIssue)`

Affected Files (all generated sync code):
Code Examples:

Async version (valid):

```python
async def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.push_async_callback(self._client.close)  # ✓ Valid for AsyncExitStack
```

Generated sync version (invalid):

```python
def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.push_async_callback(self._client.close)  # ✗ Invalid for ExitStack
```

Should be:

```python
def _setup(self) -> None:
    # Register client cleanup if we own the client
    if not self._client_provided_by_user:
        self._exit_stack.callback(self._client.close)  # ✓ Valid for ExitStack
```

Related Files

To Fix: scripts/build_sync_library.py (see the sketch after this section)

Affected by Generated Code (will be auto-fixed by re-running codegen):

Related:
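One hypothetical shape the fix could take, assuming build_sync_library.py keeps a simple name-substitution table; the real script's data structure may differ, and `ASYNC_TO_SYNC_NAMES` / `to_sync_source` are illustrative names, with only the mappings themselves taken from this PR:

```python
# Hypothetical async-to-sync rewrite table for the codegen step.
ASYNC_TO_SYNC_NAMES: dict[str, str] = {
    "AsyncExitStack": "ExitStack",
    "enter_async_context": "enter_context",
    "push_async_callback": "callback",  # the mapping whose absence broke codegen
}


def to_sync_source(source: str) -> str:
    """Apply the name substitutions to one generated module's source text."""
    for async_name, sync_name in ASYNC_TO_SYNC_NAMES.items():
        source = source.replace(async_name, sync_name)
    return source
```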
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)
47-81: Client-ownership flag and setup-time cleanup registration look correct

The introduction of `client_provided = db is not None`, propagation via `client_provided_by_user` to the base class, and registering `_close_and_flush` in `_setup` only when we own the DB aligns RocksDBStore with the shared ownership model used in other stores and ensures `close()` only tears down internally created databases.

Minor readability tweak you might consider (not introduced in this PR): changing `if db:` at Line 67 to `if db is not None:` to decouple control flow from any potential truthiness semantics of `Rdict` and make the ownership check intent explicit.

Also applies to: 83-87
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
95-110: `stable_api` flag wiring is straightforward; confirm warning strategy is intentional

Adding the `stable_api: bool = False` parameter and emitting a `UserWarning` whenever `_stable_api` is false makes the stability contract explicit at construction time. This will cause every store that doesn't explicitly pass `stable_api=True` to warn on instantiation; that may be exactly what you want, but it's worth confirming you're comfortable with the noise level for end users, or alternatively flipping the default to `True` and opting out from unstable store constructors.

Also applies to: 126-130
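A sketch of the wiring this comment describes; the warning text is invented here, and the real BaseStore takes more parameters:

```python
import warnings


class BaseStore:
    def __init__(self, *, stable_api: bool = False) -> None:
        self._stable_api = stable_api
        self._warn_about_stability()

    def _warn_about_stability(self) -> None:
        # Fires once per instantiation for every store that does not opt in
        # with stable_api=True -- the "noise level" the comment asks about.
        if not self._stable_api:
            warnings.warn(
                "This store's API is not yet stable and may change.",
                UserWarning,
                stacklevel=2,
            )
```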
📜 Review details
📒 Files selected for processing (25)
- key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (1 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (4 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (6 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (6 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (3 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (1 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/memory/store.py (1 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (4 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (3 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (5 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (3 hunks)
- key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (4 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (6 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (3 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/memory/store.py (1 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (4 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (3 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (5 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3 hunks)
- key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1 hunks)
- scripts/build_sync_library.py (2 hunks)
🪛 GitHub Actions: Run Tests
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py
[error] 236-236: Type of "push_async_callback" is unknown (reportUnknownMemberType). Command failed: uv run basedpyright .
🪛 GitHub Check: static_analysis (key-value/key-value-sync)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py
[failure] 95-95:
Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)
[failure] 95-95:
Type of "push_async_callback" is unknown (reportUnknownMemberType)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py
[failure] 236-236:
Cannot access attribute "push_async_callback" for class "ExitStack[bool | None]"
Attribute "push_async_callback" is unknown (reportAttributeAccessIssue)
[failure] 236-236:
Type of "push_async_callback" is unknown (reportUnknownMemberType)
🔇 Additional comments (43)
key-value/key-value-aio/src/key_value/aio/stores/memory/store.py (1)
118-122: LGTM: Correct stable_api configuration for in-memory store.

Passing `stable_api=True` is appropriate since MemoryStore manages only in-memory data structures without external clients or resources requiring lifecycle cleanup.

scripts/build_sync_library.py (1)

227-227: Correct mapping for AsyncExitStack.

The mapping correctly converts `AsyncExitStack` to its sync counterpart `ExitStack`.

key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (6)

92-93: LGTM: Clear lifecycle documentation.

The docstring accurately reflects the new ownership semantics: user-provided connections remain the caller's responsibility.

132-134: LGTM: Consistent documentation.

The main `__init__` docstring correctly mirrors the ownership semantics from the overload.

144-144: LGTM: Ownership tracking initialized.

The flag correctly identifies whether the connection was user-provided.

168-173: LGTM: Base class integration.

The ownership flag is correctly passed to `BaseContextManagerStore`, enabling centralized lifecycle management.

266-274: LGTM: Conditional cleanup properly implemented.

The ownership check ensures user-provided connections aren't closed. The synchronous callback is appropriate since `duckdb.DuckDBPyConnection.close()` is not async. AsyncExitStack handles this correctly via LIFO execution during `aclose()`.

378-382: LGTM: Defensive cleanup with ownership check.

The `__del__` method correctly respects ownership via `_client_provided_by_user` and uses defensive `getattr` calls.

Minor observation: `self._connection` is assigned before `super().__init__()` (which sets `_client_provided_by_user`). If the base `__init__` fails mid-execution, `getattr(self, "_client_provided_by_user", False)` defaults to `False`, potentially closing a user connection. However, this is an unlikely edge case and the outer exception handler mitigates risk.
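The guarded finalizer pattern under review, compressed into a sketch; the attribute names follow the DuckDB store, but the class itself is illustrative:

```python
class ConnectionOwningStore:
    def __del__(self) -> None:
        # getattr defends against partially initialized instances: if __init__
        # raised before these attributes were set, __del__ still runs.
        if getattr(self, "_client_provided_by_user", False):
            return  # never close a user-provided connection
        connection = getattr(self, "_connection", None)
        if connection is not None and not getattr(self, "_is_closed", False):
            try:
                connection.close()
            except Exception:  # noqa: BLE001 - best-effort cleanup during GC
                pass
```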
key-value/key-value-aio/src/key_value/aio/stores/mongodb/store.py (6)

11-11: LGTM: Clean import removal.

Removing unused `Self` import while retaining `override`.

162-164: LGTM: Explicit lifecycle documentation.

Clear explanation that user-provided clients remain under caller control. This aligns with ownership semantics across other stores.

172-173: LGTM: Correct ownership tracking.

Properly captures whether the client was user-provided. The `is not None` check is appropriate for parameter presence detection.

192-192: LGTM: Ownership flag propagated to base class.

Correctly passes the ownership flag to enable conditional cleanup in the base lifecycle management.

337-337: LGTM: Helpful maintenance comment.

Documents the lifecycle pattern shift, clarifying that explicit `_close` override is no longer needed since the exit stack handles cleanup.

196-199: LGTM: Clean exit-stack-based lifecycle.

Conditionally registers the client for cleanup only when the store owns it. The implementation correctly uses `enter_async_context` to manage the AsyncMongoClient's lifecycle. PyMongo AsyncMongoClient implements the async context manager protocol, so this pattern properly handles resource cleanup.

key-value/key-value-aio/src/key_value/aio/stores/valkey/store.py (4)
55-67: LGTM: Clear lifecycle documentation.

The docstring accurately describes parameter semantics and lifecycle management responsibilities.

68-69: LGTM: Straightforward ownership tracking.

79-83: LGTM: Lifecycle integration with base class.

Correctly passes ownership flag and enables stable API for exit-stack-based lifecycle management.

95-97: Cleanup registration verified.

The valkey-glide Python library v2.1.0 exposes an async close method, confirming the async callback registration at lines 95-97 is correctly implemented. The pattern properly handles client lifecycle management.
key-value/key-value-aio/src/key_value/aio/stores/disk/multi_store.py (2)
104-107: LGTM: Constructor properly configures stable API flag.

The addition of `stable_api=True` aligns with the new lifecycle management approach.

109-112: No fixes needed: verified that diskcache.Cache.close() safely handles multiple calls.

The concern about potentially calling `cache.close()` twice (via both exit stack cleanup and `__del__`) is not an issue. Diskcache Cache objects are idempotent/safe to close multiple times, and closed Cache objects automatically re-open when accessed, making the double-cleanup pattern safe.

key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (4)
53-55: LGTM: Clear lifecycle ownership documentation.

The docstring explicitly states ownership semantics, preventing confusion about resource management responsibilities.

68-69: LGTM: Ownership tracking properly initialized and propagated.

The ownership flag is correctly set based on whether `disk_cache` was provided and properly passed to the base class constructor for lifecycle management.

Also applies to: 83-87

89-93: LGTM: Conditional cleanup registration respects ownership.

The `_setup` method correctly registers cache cleanup only when the store owns the cache, preventing inappropriate closure of user-provided resources.

136-137: Defensive checks are appropriate and safe.

Cache.close() is safe to call multiple times, and closed Cache objects will automatically re-open when accessed. The dual cleanup paths (both `_setup` at line 93 and `__del__`) pose no risk.

key-value/key-value-aio/src/key_value/aio/stores/memcached/store.py (2)
63-82: Client ownership handling in `MemcachedStore.__init__` is consistent and correct

Deriving `client_provided` from `client is not None` and passing `client_provided_by_user=client_provided` into `BaseContextManagerStore` cleanly separates user-provided vs internally created clients. The control flow around `client` vs `host`/`port` is clear and side-effect free; ignoring `host`/`port` when a `client` is supplied is acceptable and matches the docstring semantics. No changes needed.

84-88: `_setup` correctly hooks Memcached client cleanup into the exit stack

Conditionally registering `self._client.close` with `push_async_callback` when `_client_provided_by_user` is false ensures that only store-owned aiomcache clients are closed on teardown, while user-provided clients are left alone. This aligns with the new lifecycle model used by the other stores and avoids double-closing.

key-value/key-value-aio/src/key_value/aio/stores/base.py (2)
89-123: `stable_api` flag and stability warning behavior look sound

Capturing `stable_api` in `BaseStore.__init__` and emitting a warning only when it is `False` gives a simple, opt-in way for mature stores to suppress the instability warning without changing existing call sites. The behavior is straightforward and keeps the warning at construction time, which is a reasonable point for users to see it.

430-495: AsyncExitStack-based lifecycle in `BaseContextManagerStore` is well-structured

The combination of `_client_provided_by_user`, an `AsyncExitStack`, and `_exit_stack_entered` plus `_ensure_exit_stack_entered()` provides a clear, reusable pattern for resource cleanup:

- `__aenter__` and the overridden `setup()` both ensure the exit stack is entered before `_setup`, allowing subclasses to register cleanup callbacks there.
- `__aexit__` and `close()` both close the exit stack and reset `_exit_stack_entered`, making context-manager usage and explicit `close()` calls equivalent from a cleanup perspective.
- The design is compatible with the existing `BaseStore.setup` locking and seeding logic and doesn't introduce obvious race conditions.

This is a solid foundation for the per-store lifecycle handling added elsewhere in the PR.
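The lifecycle described above, compressed into a sketch; the real base class also handles setup locking and seeding, so treat this as a simplified model rather than the actual implementation:

```python
from contextlib import AsyncExitStack


class ContextManagerStoreSketch:
    def __init__(self, *, client_provided_by_user: bool = False) -> None:
        self._client_provided_by_user = client_provided_by_user
        self._exit_stack = AsyncExitStack()
        self._exit_stack_entered = False

    async def _ensure_exit_stack_entered(self) -> None:
        if not self._exit_stack_entered:
            await self._exit_stack.__aenter__()
            self._exit_stack_entered = True

    async def setup(self) -> None:
        # Explicit setup() and `async with` are equivalent entry points.
        await self._ensure_exit_stack_entered()
        await self._setup()

    async def _setup(self) -> None:
        """Subclasses register cleanup callbacks on self._exit_stack here."""

    async def close(self) -> None:
        await self._exit_stack.aclose()
        self._exit_stack_entered = False

    async def __aenter__(self) -> "ContextManagerStoreSketch":
        await self.setup()
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        await self.close()
```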
key-value/key-value-aio/src/key_value/aio/stores/redis/store.py (2)
57-95: RedisStore constructor cleanly separates owned vs user-provided clients

Using `client_provided = client is not None` and branching between `client`, `url`, and `host`/`port` is straightforward and ensures only internally created `Redis` instances are considered store-owned. Passing `client_provided_by_user=client_provided` and `stable_api=True` into the base initializer ties RedisStore into the new lifecycle and stability model without changing the public surface. This looks correct.

221-229: Redis client cleanup and destroy behavior align with the new lifecycle model

Registering `self._client.aclose` on the exit stack when `_client_provided_by_user` is false ensures that only store-owned `redis.asyncio.Redis` instances are closed during teardown, matching redis-py's recommended async close pattern. (redis.readthedocs.io)

`_delete_store` delegating to `await self._client.flushdb()` is consistent with the sync implementation and satisfies the expected boolean contract via the truthiness of the return value. Overall, the lifecycle changes are coherent and non-breaking.
92-123: DynamoDBStore constructor correctly wires client ownership into the base classThe
clientparameter docstring matches the implementation: when a client is provided, it’s assigned directly to_client, andclient_provided_by_user=Trueis propagated toBaseContextManagerStore, so this store does not attempt to manage its lifecycle. WhenclientisNone, you create a raw aioboto3 DynamoDB client (self._raw_client) and leave_clientunset, ready to be initialized during_setup. The separation between_raw_client(context-managed by the store) and_client(the connected client used by operations) is clear and aligns with the ownership model in the rest of the PR.
135-175:_setupintegrates DynamoDB client acquisition and cleanup with the exit stackConditionally calling
await self._exit_stack.enter_async_context(self._raw_client)when the client is not user‑provided and_raw_clientexists ensures that:
- Store‑created clients are entered exactly once under the AsyncExitStack and will be cleaned up automatically on
__aexit__/close().- User‑provided clients skip this path and are used directly via
_connected_client, avoiding double context management.The subsequent table existence check, creation, waiter, and TTL configuration continue to use
_connected_clientand are unaffected by the lifecycle changes. The explanatory comment about not needing_closeanymore is accurate given the exit stack handling.key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)

61-64: Explicitly closing the user-provided RocksDB instance matches the new ownership model

With the store no longer responsible for closing user-provided `Rdict` instances, calling `db.close()` in the test after `await store.close()` is the right way to ensure resources are released before cleaning up the temporary directory. This keeps the test aligned with the revised lifecycle semantics and avoids potential file-handle leaks.

key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

64-67: Explicitly closing user-provided RocksDB instance matches new ownership semantics

Calling `db.close()` after `store.close()` and before `temp_dir.cleanup()` correctly reflects that the store no longer owns user-provided DB handles and prevents lingering file handles in the temporary directory.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/memory/store.py (1)
111-116: Passing `stable_api=True` through the base initializer is a clean integration

Wiring `stable_api=True` directly into `super().__init__` makes the MemoryStore's stability declaration explicit at construction time and avoids ad-hoc mutation of base-class state later.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1)

107-113: ExitStack-based cleanup for MultiDiskStore is consistent with the new lifecycle model

Registering `_sync_close` in `_setup` and passing `stable_api=True` into `super().__init__` cleanly hooks MultiDiskStore into the shared ExitStack-driven lifecycle while preserving the existing best-effort `__del__` cleanup path.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1)

186-191: Finalizer now respects user ownership and degrades gracefully

Guarding `__del__` with `_client_provided_by_user` and wrapping `_close_and_flush()` in a broad try/except gives you best-effort cleanup for internally owned DBs while correctly avoiding interference with user-managed ones and staying robust to partially initialized instances during GC.

key-value/key-value-aio/src/key_value/aio/stores/elasticsearch/store.py (1)

218-220: Async client ownership wiring for Elasticsearch looks consistent

The new `client_provided` flag, the `client_provided_by_user` plumbing into `BaseContextManagerStore`, and the `_setup` registration via `push_async_callback(self._client.close)` align with the ownership pattern used in other async stores (memcached/redis/mongodb). Behavior matches the docstring: user-provided clients are never auto-closed; internally created ones are cleaned up via the exit stack.

Also applies to: 232-232, 257-258, 260-265
key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1)
4-4: RocksDB ownership and async lifecycle integration look correct

The RocksDB store cleanly distinguishes between user-provided and internally created `Rdict` instances via `client_provided`, passes this into `client_provided_by_user`, and uses `_setup` + `AsyncExitStack.callback(self._close_and_flush)` to close only store-owned databases. The updated docstring and guarded `__del__` (best-effort, only when we own the DB) are consistent with that contract and with the sync RocksDB implementation.

Also applies to: 54-56, 68-69, 83-86, 88-93, 198-203
key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (2)
95-97: Client ownership flag and base constructor wiring look solid

The `client_provided = connection is not None` flag, updated docstrings, and forwarding via `client_provided_by_user=client_provided` into `BaseContextManagerStore` cleanly implement the "user-provided connection is not closed by the store" semantics, while keeping `stable_api=False` behavior explicit.

Also applies to: 134-137, 147-148, 170-171

235-244: ExitStack callback correctly scopes connection cleanup to owned connections

Registering a `close_connection` callback only when `self._client_provided_by_user` is false, and guarding with `_is_closed`, integrates properly with the new `BaseContextManagerStore` lifecycle so that owned connections are closed exactly once via `close()` / context-manager exit.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/mongodb/store.py (2)

14-14: MongoDB client ownership handling is consistent with the new base class

The updated `client` docstring, `client_provided = client is not None` flag, and forwarding via `client_provided_by_user=client_provided` into `BaseContextManagerStore` correctly ensure that user-provided clients are never closed by the store, while URL/default clients are owned and managed. The added `@override` usage for methods is also appropriate here.

Also applies to: 159-168, 170-171, 187-191

194-198: Context manager support verified; implementation is correct

PyMongo 4.x MongoClient implements the context manager protocol and automatically calls client.close() when exiting the with-block, confirming that the `_exit_stack.enter_context(self._client)` approach in lines 194-198 (and 312-313) is sound for PyMongo 4.0.0. The implementation properly centralizes cleanup under the context manager lifecycle.
Actionable comments posted: 5
♻️ Duplicate comments (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (1)
93-95: Critical: Callback registers async method to synchronous ExitStack.

The code registers `self._client.close` (an async method) to `ExitStack.callback()`, which is synchronous. During context exit, the callback will receive a coroutine object instead of executing cleanup, leaving the client connection open.

For a synchronous store, you must use a synchronous cleanup approach:

```diff
 # Register client cleanup if we own the client
 if not self._client_provided_by_user:
-    self._exit_stack.callback(self._client.close)
+    def _close_client() -> None:
+        if self._connected_client is not None:
+            self._connected_client.close()
+
+    self._exit_stack.callback(_close_client)
```

Verify that `glide_sync.BaseClient` exposes a synchronous `close()` method (not async). If only async close is available, consider using an async context manager pattern instead.
📜 Review details
📒 Files selected for processing (11)
- key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (5 hunks)
- key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (6 hunks)
- key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (5 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (3 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (3 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3 hunks)
- key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
- scripts/build_sync_library.py (2 hunks)
🪛 GitHub Actions: Run Tests
key-value/key-value-aio/src/key_value/aio/stores/base.py
[error] 480-480: pyright: Type "bool | None" is not assignable to return type "None".
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py
[error] 450-450: Pyright: Type 'bool | None' is not assignable to return type 'None' (reportReturnType).
🪛 GitHub Check: static_analysis (key-value/key-value-aio)
key-value/key-value-aio/src/key_value/aio/stores/base.py
[failure] 480-480:
Type "bool | None" is not assignable to return type "None"
Type "bool | None" is not assignable to type "None"
"bool" is not assignable to "None" (reportReturnType)
🪛 GitHub Check: static_analysis (key-value/key-value-sync)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py
[failure] 450-450:
Type "bool | None" is not assignable to return type "None"
Type "bool | None" is not assignable to type "None"
"bool" is not assignable to "None" (reportReturnType)
🔇 Additional comments (21)
scripts/build_sync_library.py (1)
227-227: LGTM! Previous feedback addressed.

The AsyncExitStack mappings are correct and complete for methods used in the codebase. The `push_async_callback` -> `callback` mapping (line 249) properly addresses the prior review finding.

Also applies to: 242-242, 249-249

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)

27-36: LGTM: Store fixture correctly refactored.

Conversion to synchronous factory with explicit path parameter and pre-wipe logic aligns with the new lifecycle management model where stores register cleanup via setup hooks.

71-72: LGTM: Correct cleanup of user-provided client.

Since the store was created with `db=db`, the `client_provided_by_user` flag prevents automatic cleanup, so manual `db.close()` is required.

key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (4)

91-92: Documentation clearly states lifecycle ownership.

The updated docstring appropriately warns users that provided connections are caller-managed.

143-143: Clean ownership tracking.

The boolean flag correctly captures whether the connection was user-provided.

264-266: Lifecycle management correctly delegates to exit stack.

The conditional cleanup registration ensures connections are only closed when created by the store, not when user-provided. The synchronous `close()` callback is compatible with `AsyncExitStack.callback()`.

166-171: No issues found. The `stable_api=False` parameter is correct and intentional.

Verification confirms `stable_api` is a base class parameter that flags API stability status. When `False`, it triggers `_warn_about_stability()` warnings. DuckDB is consistently marked as unstable (`stable_api=False`) across both sync and async implementations, while production-ready stores (Redis, Valkey, Memory, Disk) use `stable_api=True`. The parameter and its value are correct.

key-value/key-value-aio/src/key_value/aio/stores/dynamodb/store.py (4)
5-5: LGTM: Override decorator added for clarity.

Adding `override` from typing_extensions properly marks method overrides.

77-124: Client ownership tracking implemented correctly.

The initialization properly handles both cases:

- When client is provided: stores it directly in `_client` and sets `_raw_client = None`
- When client is not provided: creates `_raw_client` for later entry in `_setup`

The `client_provided` flag is correctly passed to the base class to coordinate lifecycle management.

137-138: Client lifecycle registration is correct.

The condition properly checks ownership before registering cleanup. The `_raw_client is not None` check on line 137 is defensive: based on initialization logic, if `client_provided_by_user` is False, `_raw_client` is always set. Redundant but acceptable defensive programming.

248-248: Accurate comment about lifecycle management.

Correctly notes that the exit stack pattern eliminates the need for explicit `_close` override.

key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)

74-75: Correct cleanup of user-provided resource.

Properly closes the user-provided database after `store.close()`, aligning with the ownership model where stores don't manage user-provided clients (see store.py where `client_provided_by_user=True` prevents the store from closing a user-provided db).

Based on learnings.
key-value/key-value-sync/src/key_value/sync/code_gen/stores/valkey/store.py (3)
57-69: LGTM: Clear lifecycle documentation.

The docstring clearly explains client ownership semantics: when the store manages cleanup vs. when the caller retains responsibility.

70-70: LGTM: Ownership flag correctly computed.

81-81: Code changes verified. The `super().__init__()` call at line 81 correctly passes all parameters:

- `client_provided_by_user` is accepted by `BaseContextManagerStore.__init__`
- `default_collection` and `stable_api` are accepted by `BaseStore.__init__` via `**kwargs`

All parameter signatures match the base class definitions.
key-value/key-value-aio/src/key_value/aio/stores/base.py (1)
81-123: Stable API flag wiring in BaseStore looks sound

Adding the kw-only `stable_api: bool = False` parameter, storing it on `_stable_api`, and centralizing the warning via `_warn_about_stability()` is consistent and backward-compatible with existing callers.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)

87-131: Stable API flag addition mirrors the async base cleanly

The new `stable_api` kw-only parameter, backing field, and warning hook are wired the same way as in the async base and remain backward-compatible with existing sync stores.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/elasticsearch/store.py (1)

189-237: Elasticsearch client ownership and cleanup are correctly integrated with the exit stack

The updated `elasticsearch_client` docs, the `client_provided = elasticsearch_client is not None` flag, passing `client_provided_by_user` into `super().__init__`, and registering `self._client.close` via `self._exit_stack.callback` only when the client is owned by the store all look consistent with the new BaseContextManagerStore lifecycle.

key-value/key-value-sync/src/key_value/sync/code_gen/stores/duckdb/store.py (1)

94-99: DuckDB connection ownership and cleanup via ExitStack are wired correctly

Connection ownership is clearly documented, `client_provided` is derived from the presence of `connection` and passed as `client_provided_by_user`, and `_setup` registers `self._connection.close` on the exit stack only when the store owns the connection. This matches the intended lifecycle pattern.

Also applies to: 133-141, 146-147, 168-169, 233-235
key-value/key-value-sync/src/key_value/sync/code_gen/stores/redis/store.py (2)
60-71: Redis client ownership split is clearly expressed and correctly forwarded to the base

The ctor doc updates, `client_provided = client is not None`, and passing `client_provided_by_user=client_provided` (with `stable_api=True`) cleanly separate user-managed clients from store-owned ones and avoid spurious stability warnings.

Also applies to: 88-89

209-216: Redis cleanup and destroy hooks align with the new context-manager lifecycle

Registering `self._client.close` on the exit stack only when the client is store-owned, and implementing `_delete_store` via `flushdb()`, fits the BaseContextManagerStore + BaseDestroyStore lifecycle model and keeps ownership semantics consistent.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
87-131: `BaseStore` `stable_api` flag wiring looks correct; consider annotating the field

The new `stable_api` kw-only parameter, `_stable_api` assignment, and `_warn_about_stability()` call are coherent and preserve backward compatibility for existing keyword callers. For consistency with other attributes, you might also add a class-level type annotation for `_stable_api: bool`.
♻️ Duplicate comments (2)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
22-27: Session-scoped fixture with TemporaryDirectory will delete the directory prematurely.

This issue was already flagged in a previous review. Use `tmp_path_factory` instead for session-scoped temporary paths, as recommended by pytest.
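What the suggested `tmp_path_factory` variant could look like; a sketch whose fixture name mirrors the test under review:

```python
from pathlib import Path

import pytest


@pytest.fixture(scope="session")
def rocksdb_path(tmp_path_factory: pytest.TempPathFactory) -> Path:
    # tmp_path_factory is pytest's recommended source of session-scoped
    # temporary paths; cleanup happens at session end, not fixture teardown.
    db_path = tmp_path_factory.mktemp("rocksdb") / "test_db"
    db_path.mkdir(parents=True, exist_ok=True)
    return db_path
```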
35-38: Fix incorrect Rdict.delete() API usage.

Line 38 uses `client.delete(key=key)` but Rdict.delete expects a positional argument. This was flagged in a previous review.

Apply this diff:

```diff
-for key in client.keys(backwards=True):
-    client.delete(key=key)
+for key in client.keys(backwards=True):
+    client.delete(key)
```
📜 Review details
📒 Files selected for processing (6)
- key-value/key-value-aio/src/key_value/aio/stores/base.py (5 hunks)
- key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1 hunks)
- key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (5 hunks)
- key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1 hunks)
- key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
🔇 Additional comments (10)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)
75-76: Correct handling of user-provided DB lifecycle.
Explicitly closing the user-provided database before cleanup aligns with the ownership semantics in the store implementation. This prevents resource leaks.
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)
44-48: LGTM!
The fixture correctly accepts the `duckdb_path` dependency, initializes the store with the provided path, and ensures cleanup via `close()`. This aligns with the PR's lifecycle management refactor.
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1)
47-51: LGTM once the `duckdb_path` fixture is fixed.
The store fixture correctly accepts and uses the `duckdb_path` parameter, consistent with the async version and lifecycle management patterns.
key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (1)
8-16: ExitStack import usage is consistent.
`ExitStack` is imported and used only in `BaseContextManagerStore`, which cleanly centralizes sync resource cleanup; no issues here.
key-value/key-value-aio/src/key_value/aio/stores/base.py (5)
9-9: LGTM: AsyncExitStack import.
Correct import for async context stack management.
89-90: LGTM: stable_api flag implementation.
The flag is properly initialized, documented, and triggers appropriate warnings for unstable stores.
Also applies to: 102-103, 120-123
446-458: LGTM: Constructor correctly initializes exit stack and ownership flag.
The `client_provided_by_user` attribute is stored for subclass use, as documented.
466-470: LGTM: Context manager lifecycle is correctly implemented.
- `__aexit__` return type `bool | None` correctly matches `AsyncExitStack` (past review addressed)
- `aclose()` usage is correct for async exit stacks
- Setup override properly ensures exit stack entry before `_setup()`, enabling cleanup registration

Also applies to: 472-481, 483-487, 489-498
460-464: Async context manager protocol does not support concurrent entry; the original concern is invalid.
The Python async context manager protocol (`__aenter__`/`__aexit__`) expects a single entry per instance. Concurrent `__aenter__` calls violate this contract and are not a supported usage pattern. The check-then-act pattern here is defensive programming: it ensures the exit stack is entered exactly once while allowing `setup()` to be called multiple times. No concurrent entry patterns exist in tests or usage.
Likely an incorrect or invalid review comment.
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)
72-73: Good: proper cleanup of user-provided database.
Correctly closes the user-provided `db` instance after the store is closed, preventing resource leaks. This aligns with the ownership model where the caller manages the database lifecycle when providing it explicitly.
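A hedged sketch of that contract; the constructor parameter name (`db`) and the import path are assumptions drawn from the repo layout, not confirmed by this diff:

```python
import asyncio

from rocksdict import Rdict

from key_value.aio.stores.rocksdb import RocksDBStore  # import path assumed


async def main() -> None:
    db = Rdict("/tmp/example_db")  # the caller creates and owns the client
    store = RocksDBStore(db=db)    # `db` parameter name assumed
    try:
        pass  # ... exercise the store ...
    finally:
        await store.close()  # releases only store-owned resources
        db.close()           # the caller closes what it created


asyncio.run(main())
```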
| @pytest.fixture(scope="session") | ||
| async def rocksdb_path(self) -> AsyncGenerator[Path, None]: | ||
| with TemporaryDirectory() as temp_dir: | ||
| db_path = Path(temp_dir) / "test_db" | ||
| db_path.mkdir(parents=True, exist_ok=True) | ||
| yield db_path |
Remove unnecessary `async` from session-scoped generator fixture.
The fixture uses synchronous operations only (`TemporaryDirectory`, `Path.mkdir`) and contains no `await` calls. Making it async is unnecessary and may cause compatibility issues with pytest's session scope. The sync variant correctly uses a regular generator.
Apply this diff:

```diff
 @pytest.fixture(scope="session")
-async def rocksdb_path(self) -> AsyncGenerator[Path, None]:
+def rocksdb_path(self) -> Generator[Path, None, None]:
     with TemporaryDirectory() as temp_dir:
         db_path = Path(temp_dir) / "test_db"
         db_path.mkdir(parents=True, exist_ok=True)
         yield db_path
```

Add the import:

```diff
-from collections.abc import AsyncGenerator
+from collections.abc import Generator
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py around lines 19
to 24, the fixture is unnecessarily declared async and typed as AsyncGenerator
while it performs only synchronous operations; change the fixture to a regular
(non-async) generator by removing the async keyword, update its return type to a
synchronous Generator (e.g., Generator[Path, None, None] or Iterator[Path]), and
add the corresponding import from typing at the top of the file; keep the
TemporaryDirectory context, Path.mkdir, and yield as-is.
```diff
 def store(self, rocksdb_path: Path) -> RocksDBStore:
     """Create a RocksDB store for testing."""
-    # Create a temporary directory for the RocksDB database
-    with TemporaryDirectory() as temp_dir:
-        db_path = Path(temp_dir) / "test_db"
-        rocksdb_store = RocksDBStore(path=db_path)
-        yield rocksdb_store
+    store = RocksDBStore(path=rocksdb_path)
+
+    # Wipe the store before returning it
+    client = store._db
+    for key in client.keys(backwards=True):
+        client.delete(key=key)
+
+    return store
```
🧩 Analysis chain
Verify RocksDB locking with session-scoped path and function-scoped store.
Each test creates a new `RocksDBStore` instance pointing to the same session-scoped path. RocksDB enforces single-writer constraints, so if a previous test's store isn't properly closed before the next test starts, lock errors may occur. The fixture relies on `__del__` cleanup or explicit `close()` calls, which is timing-dependent.
Confirm that inherited tests from `BaseStoreTests` properly close the store, or consider making the fixture yield and adding explicit cleanup.
🏁 Script executed:

```bash
#!/bin/bash
# Check if BaseStoreTests fixtures or teardown handle store cleanup

# Search for store cleanup in base test class
ast-grep --pattern 'class BaseStoreTests {
  $$$
  close
  $$$
}'

# Check for fixture cleanup patterns
rg -n -A5 'def store' key-value/key-value-aio/tests/stores/base.py

# Look for __aexit__ or close calls in base tests
rg -n -C3 'async with.*store|await.*\.close\(\)' key-value/key-value-aio/tests/stores/base.py
```

Length of output: 539
Make the store fixture yield and add explicit cleanup to match the base fixture pattern.
The base fixture (lines 266–273 in base.py) uses `yield` with either `async with` or an explicit `await store.close()`. Your RocksDB fixture uses `return` with no cleanup, relying on unreliable `__del__` timing. Since each test gets a new store instance pointing to the same session-scoped path, stale locks will accumulate and cause intermittent failures.
Change to:

```python
@pytest.fixture
async def store(self, rocksdb_path: Path):
    """Create a RocksDB store for testing."""
    store = RocksDBStore(path=rocksdb_path)
    # Wipe the store before returning it
    client = store._db
    for key in client.keys(backwards=True):
        client.delete(key)
    yield store
    await store.close()
```

🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py around lines
28–37, the fixture currently returns a RocksDBStore without performing explicit
cleanup, causing stale locks; change the fixture to be an async pytest fixture
that yields the store (keep the pre-yield wipe loop) and after the yield call
await store.close() so the store is deterministically closed and cleaned up
between tests.
```diff
 class BaseContextManagerStore(BaseStore, ABC):
-    """An abstract base class for context manager stores."""
+    """An abstract base class for context manager stores.
+
+    Stores that accept a client parameter should pass `client_provided_by_user=True` to
+    the constructor. This ensures the store does not manage the lifecycle of user-provided
+    clients (i.e., does not close them).
+
+    The base class provides an ExitStack that stores can use to register cleanup
+    callbacks. Stores should add their cleanup operations to the exit stack as needed.
+    The base class handles entering and exiting the exit stack.
+    """
+
+    _client_provided_by_user: bool
+    _exit_stack: ExitStack
+    _exit_stack_entered: bool
+
+    def __init__(self, *, client_provided_by_user: bool = False, **kwargs: Any) -> None:
+        """Initialize the context manager store with client ownership configuration.
+
+        Args:
+            client_provided_by_user: Whether the client was provided by the user. If True,
+                the store will not manage the client's lifecycle (will not close it).
+                Defaults to False.
+            **kwargs: Additional arguments to pass to the base store constructor.
+        """
+        self._client_provided_by_user = client_provided_by_user
+        self._exit_stack = ExitStack()
+        self._exit_stack_entered = False
+        super().__init__(**kwargs)
+
+    def _ensure_exit_stack_entered(self) -> None:
+        """Ensure the exit stack has been entered."""
+        if not self._exit_stack_entered:
+            self._exit_stack.__enter__()
+            self._exit_stack_entered = True
 
     def __enter__(self) -> Self:
+        # Enter the exit stack
+        self._ensure_exit_stack_entered()
         self.setup()
         return self
 
-    def __exit__(self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None) -> None:
-        self._close()
+    def __exit__(
+        self, exc_type: type[BaseException] | None, exc_value: BaseException | None, traceback: TracebackType | None
+    ) -> bool | None:
+        # Close the exit stack, which handles all cleanup
+        if self._exit_stack_entered:
+            result = self._exit_stack.__exit__(exc_type, exc_value, traceback)
+            self._exit_stack_entered = False
+            return result
+        return None
 
     def close(self) -> None:
-        self._close()
+        # Close the exit stack if it has been entered
+        if self._exit_stack_entered:
+            self._exit_stack.close()
+            self._exit_stack_entered = False
 
-    @abstractmethod
-    def _close(self) -> None:
-        """Close the store."""
-        ...
+    def setup(self) -> None:
+        """Initialize the store if not already initialized.
+
+        This override ensures the exit stack is entered before the store's _setup()
+        method is called, allowing stores to register cleanup callbacks during setup.
+        """
+        # Ensure exit stack is entered
+        self._ensure_exit_stack_entered()
+        # Call parent setup
+        super().setup()
```
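To make the new contract concrete, here is a hedged sketch of how a subclass might plug into this lifecycle; everything except `BaseContextManagerStore` and the documented flag is illustrative:

```python
from typing import Any


class FakeClient:
    """Illustrative client; a real store would hold a Redis/Mongo/etc. client."""

    def close(self) -> None: ...


class ExampleStore(BaseContextManagerStore):
    """Illustrative subclass, not part of the PR."""

    def __init__(self, *, client: FakeClient | None = None, **kwargs: Any) -> None:
        self._client = client if client is not None else FakeClient()
        super().__init__(client_provided_by_user=client is not None, **kwargs)

    def _setup(self) -> None:
        # Runs after the base class has entered the exit stack (see the setup()
        # override above), so this callback is guaranteed to fire on close()/__exit__
        if not self._client_provided_by_user:
            self._exit_stack.callback(self._client.close)
```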
🧹 Nitpick | 🔵 Trivial
BaseContextManagerStore ExitStack lifecycle is sound; `__enter__` can be simplified.
The ExitStack-based lifecycle (`_ensure_exit_stack_entered`, `__enter__`/`__exit__`, `close`, and the `setup` override) correctly ensures callbacks are registered before setup and cleaned up once, and returning the result of `ExitStack.__exit__` preserves exception-suppression semantics. One minor simplification: `__enter__` currently calls `_ensure_exit_stack_entered()` and then `setup()`, but `setup()` already calls `_ensure_exit_stack_entered()`, so you could safely collapse `__enter__` to:

```python
def __enter__(self) -> Self:
    self.setup()
    return self
```

to avoid redundant checks.
🤖 Prompt for AI Agents
In key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py around
lines 402-470, __enter__ redundantly calls _ensure_exit_stack_entered() before
calling setup(), but setup() already calls _ensure_exit_stack_entered();
simplify __enter__ to just call self.setup() and return self, removing the
duplicate _ensure_exit_stack_entered() call to avoid an unnecessary check while
preserving behavior.
Test Failure Analysis
Summary: The DuckDB persistent store tests are failing because the session-scoped `duckdb_path` fixture yields a path inside a temporary directory that is torn down too early.
Root Cause: In the test fixture refactoring (lines 35-42 of the DuckDB test module), the temp directory is cleaned up while the path is still being used by tests, so it points to a non-existent or invalid location.
Suggested Solution: Follow the RocksDB pattern and ensure the temp directory persists for the entire session:

```python
@pytest.fixture(scope="session")
async def duckdb_path(self) -> AsyncGenerator[Path, None]:
    with TemporaryDirectory() as temp_dir:
        db_path = Path(temp_dir) / "test.db"
        yield db_path
```
```python
@override
@pytest.fixture
async def store(self, duckdb_path: Path) -> AsyncGenerator[DuckDBStore, None]:
    """Test with persistent DuckDB database file."""
    duckdb_store = DuckDBStore(database_path=duckdb_path)
    # Wipe existing data before each test
    try:
        # Clear all data from the database
        duckdb_store._connection.execute(f"DROP TABLE IF EXISTS {duckdb_store._table_name}")
    except Exception:
        pass  # Table might not exist on first run
    yield duckdb_store
    await duckdb_store.close()
```

Alternative approach (if wiping is complex): Change the `duckdb_path` fixture scope to function (the default):

```python
@pytest.fixture  # Remove scope="session"
async def duckdb_path(self) -> AsyncGenerator[Path, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        db_path = Path(temp_dir) / "test.db"
        yield db_path
```

Why this fixes the issue: the temporary directory now lives as long as the tests that use the path, so DuckDB never connects to a deleted location.
Detailed Analysis
Error from logs: the failure occurs at:

```python
self._connection = duckdb.connect(database=database_path)
```

Affected tests: all DuckDB persistent tests.
Why RocksDB tests pass:

```python
@override
@pytest.fixture
def store(self, rocksdb_path: Path) -> RocksDBStore:
    store = RocksDBStore(path=rocksdb_path)
    # Wipe the store before returning it
    client = store._db
    for key in client.keys(backwards=True):
        client.delete(key=key)
    return store
```

This approach ensures the path exists (from the session fixture) and data is cleared between tests (from the store fixture).
Related Files: the async and sync DuckDB test modules are the primary files to fix; the RocksDB test modules show the working pattern, and the DuckDB store source is relevant for the connection behavior.
Test Failure Analysis
Summary: The Windows test run is failing during teardown with a PermissionError when trying to delete the diskcache database file, caused by the file still being held open by a DiskStore instance that wasn't properly closed.
Root Cause: The PR introduces client ownership tracking (via `_client_provided_by_user`), so stores only close clients they created. Looking at the session-scoped disk fixture:

```python
@pytest.fixture(scope="session")
def disk_store(self) -> Generator[DiskStore, None, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        yield DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)
```

The problem: the fixture yields the store without ever closing it, so the underlying diskcache file is still open when TemporaryDirectory attempts cleanup.
Error from logs: PermissionError during teardown while deleting the diskcache database file.
Suggested Solution: Update the test fixture to properly close the store before the TemporaryDirectory cleanup. Sync version:

```python
@pytest.fixture(scope="session")
def disk_store(self) -> Generator[DiskStore, None, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        store = DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)
        yield store
        store.close()  # Close the store before TemporaryDirectory cleanup
```

Also update the async version:

```python
@pytest.fixture(scope="session")
async def disk_store(self) -> AsyncGenerator[DiskStore, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        store = DiskStore(directory=temp_dir, max_size=TEST_SIZE_LIMIT)
        yield store
        await store.close()  # Close the store before TemporaryDirectory cleanup
```

Why this fixes the issue: the store releases its file handle before the temporary directory is removed, so Windows can delete the file.
Related: This is the same Windows file locking issue that PR #235 is attempting to address for other store tests.
Detailed Analysis
Key changes from PR #245: The PR added ownership-aware cleanup:

```python
async def __aexit__(self, ...) -> None:
    # Only close the client if the store created it
    if not getattr(self, "_client_provided_by_user", False):
        await self._close()

async def close(self) -> None:
    # Only close the client if the store created it
    if not getattr(self, "_client_provided_by_user", False):
        await self._close()
```

And the DiskStore constructor tracks ownership:

```python
if disk_cache:
    self._cache = disk_cache
    self._client_provided_by_user = True
elif directory:
    # Creates cache internally
    self._cache = Cache(directory=directory, ...)
    self._client_provided_by_user = False  # Store owns this cache
```

The test creates a store with `directory=temp_dir`, so the cache is store-owned, but the session-scoped fixture never calls `close()`, leaving the file handle open at teardown.
Root Cause: the fixture's missing `close()` call, not the ownership logic itself.
Related Issue: the Windows file-locking teardown failures also targeted by PR #235.
Actionable comments posted: 1
♻️ Duplicate comments (4)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (1)
31-40: Fix Rdict.delete() API usage and add proper cleanup.
Two critical issues:
- Line 38: `client.delete(key=key)` uses a keyword argument, but Rdict.delete() expects a positional argument.
- The fixture returns the store without cleanup, which can cause RocksDB lock errors when multiple tests use the same session-scoped path.

Apply this diff:

```diff
 @override
 @pytest.fixture
-def store(self, rocksdb_path: Path) -> RocksDBStore:
+def store(self, rocksdb_path: Path) -> Generator[RocksDBStore, None, None]:
     """Create a RocksDB store for testing."""
     store = RocksDBStore(path=rocksdb_path)

     # Wipe the store before returning it
     client = store._db
     for key in client.keys(backwards=True):
-        client.delete(key=key)
+        client.delete(key)

-    return store
+    yield store
+    store.close()
```

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2)
19-24: Remove unnecessary `async` from fixture with only synchronous operations.
The fixture performs only synchronous operations (`TemporaryDirectory`, `Path.mkdir`) with no `await` calls. Making it async is unnecessary and may cause pytest compatibility issues with class-scoped fixtures.
Apply this diff:

```diff
+from collections.abc import Generator
+
 @pytest.fixture(scope="class")
-async def rocksdb_path(self) -> AsyncGenerator[Path, None]:
+def rocksdb_path(self) -> Generator[Path, None, None]:
     with TemporaryDirectory() as temp_dir:
         db_path = Path(temp_dir) / "test_db"
         db_path.mkdir(parents=True, exist_ok=True)
         yield db_path
```

28-37: Fix Rdict.delete() API usage and convert to async yield pattern.
Two critical issues:
- Line 35: `client.delete(key=key)` uses a keyword argument, but Rdict.delete() expects a positional argument.
- The fixture is synchronous and returns the store without cleanup. For async stores, the fixture must be async, yield the store, then `await store.close()` to avoid RocksDB lock errors between tests.

Apply this diff:

```diff
+from collections.abc import AsyncGenerator
+
 @override
 @pytest.fixture
-def store(self, rocksdb_path: Path) -> RocksDBStore:
+async def store(self, rocksdb_path: Path) -> AsyncGenerator[RocksDBStore, None]:
     """Create a RocksDB store for testing."""
     store = RocksDBStore(path=rocksdb_path)

     # Wipe the store before returning it
     client = store._db
     for key in client.keys(backwards=True):
-        client.delete(key=key)
+        client.delete(key)

-    return store
+    yield store
+    await store.close()
```

key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)
35-39: Past review issue incompletely addressed: fixture should be a regular generator.
This fixture contains no async operations yet is declared `async`. The past review comment correctly identified this and was marked as addressed, but the fixture is still async. Since `TemporaryDirectory` is synchronous and there are no `await` statements, this should be a regular generator matching the sync version pattern.
Apply this diff:

```diff
 @pytest.fixture
-async def duckdb_path(self) -> AsyncGenerator[Path, None]:
+def duckdb_path(self) -> Generator[Path, None, None]:
     with TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
         db_path = Path(temp_dir) / "test.db"
         yield db_path
```

Add the missing import at the top of the file:

```diff
-from collections.abc import AsyncGenerator
+from collections.abc import AsyncGenerator, Generator
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (8)
- key-value/key-value-aio/tests/stores/disk/test_disk.py (2 hunks)
- key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1 hunks)
- key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1 hunks)
- key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (2 hunks)
- key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2 hunks)
- key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (1 hunks)
- key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1 hunks)
- key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (8)
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/rocksdb/store.py (1): RocksDBStore (23-191)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/base.py (3): keys (389-395), delete (356-360), close (455-459)

key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (2)
- key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (4): duckdb_path (39-42), store (26-30), store (46-52), store (63-67)
- key-value/key-value-aio/src/key_value/aio/stores/duckdb/store.py (1): DuckDBStore (62-353)

key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (3)
- key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (2): multi_disk_path (23-25), store (29-36)
- key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (1): store (29-34)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/multi_store.py (1): MultiDiskStore (31-151)

key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (3)
- key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2): rocksdb_path (23-27), store (31-40)
- key-value/key-value-aio/src/key_value/aio/stores/rocksdb/store.py (1): RocksDBStore (20-203)
- key-value/key-value-aio/src/key_value/aio/stores/base.py (3): keys (417-423), delete (384-388), close (483-487)

key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1)
- key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (2): multi_disk_path (26-28), store (32-39)

key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2)
- key-value/key-value-aio/tests/stores/disk/test_disk.py (3): disk_path (20-22), store (26-31), disk_cache (34-36)
- key-value/key-value-sync/src/key_value/sync/code_gen/stores/disk/store.py (1): DiskStore (21-130)

key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (1)
- key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (4): duckdb_path (36-39), store (23-27), store (43-49), store (60-64)

key-value/key-value-aio/tests/stores/disk/test_disk.py (2)
- key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (3): disk_path (23-25), store (29-34), disk_cache (37-39)
- key-value/key-value-aio/src/key_value/aio/stores/disk/store.py (1): DiskStore (18-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, windows-2022, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, windows-latest, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, macos-14, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, macos-latest, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, macos-14, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-aio)
- GitHub Check: test_quick (3.10, ubuntu-latest, key-value/key-value-sync)
- GitHub Check: test_quick (3.10, ubuntu-22.04, key-value/key-value-aio)
🔇 Additional comments (14)
key-value/key-value-sync/tests/code_gen/stores/disk/test_disk.py (2)
7-7: LGTM! Clean fixture refactoring.
The Path-based `disk_path` fixture aligns with the new lifecycle management pattern and matches the async variant. Class scope appropriately reduces temp directory overhead while the TemporaryDirectory context manager ensures proper cleanup.
Also applies to: 22-25
29-32: LGTM! Fixture dependencies correctly structured.
The store fixture now constructs DiskStore with the `directory` parameter, enabling store-managed lifecycle. The `disk_cache` fixture properly extracts and validates the cache instance. A function-scoped store with `cache.clear()` ensures test isolation, as sketched below.
Also applies to: 37-39
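A sketch of the pattern being praised, with the fixture body assumed rather than quoted and the import path inferred from the repo layout:

```python
from pathlib import Path

import pytest

from key_value.sync.code_gen.stores.disk import DiskStore  # import path assumed


@pytest.fixture
def store(disk_path: Path) -> DiskStore:
    store = DiskStore(directory=str(disk_path))  # store-managed lifecycle
    store._cache.clear()  # per-test isolation on a class-scoped directory
    return store
```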
key-value/key-value-sync/tests/code_gen/stores/duckdb/test_duckdb.py (2)
38-42: Previous issues correctly addressed.
The fixture now properly yields the database file path without calling `mkdir()`, and uses function scope (the default) for test isolation. The implementation matches the async version and correctly lets DuckDB create the database file.
46-52: Proper lifecycle management.
The fixture correctly creates the store with the provided path and explicitly closes it after yielding, ensuring the connection is cleaned up before the temporary directory is removed.
key-value/key-value-sync/tests/code_gen/stores/rocksdb/test_rocksdb.py (2)
22-27: LGTM: Fixture lifecycle is correct.
The yield-based pattern correctly keeps the temporary directory alive for all class-scoped tests.
75-76: LGTM: Correct lifecycle management for user-provided resources.
Explicitly closing the user-provided database instance aligns with the ownership model where the store only manages resources it creates.
key-value/key-value-aio/tests/stores/rocksdb/test_rocksdb.py (1)
72-73: LGTM: Correct lifecycle management for user-provided resources.
Explicitly closing the user-provided database instance aligns with the ownership model where the store only manages resources it creates.
key-value/key-value-aio/tests/stores/duckdb/test_duckdb.py (1)
43-49: LGTM!
The fixture correctly accepts the external `duckdb_path`, delegates connection lifecycle to the store (which will register cleanup via the exit stack since no connection is provided), and properly awaits `close()`. This aligns with the PR's client ownership pattern, illustrated in the sketch below.
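A hedged sketch of the ownership rule this fixture relies on; the `connection` parameter name and the import path are assumptions, not confirmed by the diff:

```python
import asyncio

import duckdb

from key_value.aio.stores.duckdb import DuckDBStore  # import path assumed


async def main() -> None:
    # No connection passed: the store creates one and registers its cleanup
    store = DuckDBStore(database_path="owned.db")
    await store.close()  # closes the connection the store created

    # Caller-owned connection: the store must not close it
    conn = duckdb.connect(database="shared.db")
    store2 = DuckDBStore(connection=conn)  # `connection` parameter assumed
    await store2.close()  # leaves conn usable
    conn.close()


asyncio.run(main())
```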
key-value/key-value-aio/tests/stores/disk/test_multi_disk.py (1)
29-36: Manual cache clearing is the correct teardown approach for this fixture pattern.
The fixture intentionally creates a bare store instance without invoking the store's lifecycle management (setup/context manager). Since `_exit_stack` is never entered, calling `close()` or relying on `__aexit__()` would be no-ops. Manual cache clearing achieves the test's goal (preserve the directory while wiping data between tests) and aligns with the ownership semantics where the test provides the path and owns cleanup.
This pattern is consistent across both sync and async test fixtures.
key-value/key-value-aio/tests/stores/disk/test_disk.py (3)
24-31: LGTM! Store construction aligns with new lifecycle ownership.
The fixture now constructs `DiskStore` internally from a `Path`, properly utilizing the new `directory` parameter. Cache clearing via private attribute access is acceptable in tests and correctly suppressed.
33-36: LGTM! Fixture dependency aligns with verification pattern.
The `disk_cache` fixture now derives from `store`, creating appropriate coupling for tests that verify the internal storage format. The runtime type assertion provides safety despite type hints.
19-22: The async test file is auto-generated from the sync source; review concerns misidentify intentional design patterns.
The file header confirms: "this file is auto-generated by 'build_sync_library.py'". The pattern concerns mirror the sync source exactly:
- Class-scoped directory + function-scoped store: both sync and async versions use this design. Each test method receives a fresh store instance; `_cache.clear()` provides per-test cleanup.
- AsyncGenerator with sync context manager: the sync source uses an identical `Generator` + sync context manager. The async version preserves this parity by design.

Suggesting changes to the async file misses that it's a generated artifact; edits will be overwritten. Any design improvements belong in the sync source file instead.
Likely an incorrect or invalid review comment.
key-value/key-value-sync/tests/code_gen/stores/disk/test_multi_disk.py (2)
25-28: Clean path fixture pattern.
Good separation of concerns: the path fixture handles the temp directory lifecycle independently. The `ignore_cleanup_errors=True` flag appropriately handles potential cleanup issues (file locks, etc.).
32-39: Fixture pattern matches the async version; good consistency.
The post-yield cache clearing is consistent with the async implementation. However, given the PR's focus on "client ownership split for store lifecycle management" and that `MultiDiskStore` now registers `_sync_close()` via the exit stack in `_setup()`, the manual cache clearing may be redundant. The store's lifecycle management should handle cleanup via `__del__` and the exit stack.
Consider whether this manual cleanup is still necessary, or if the store's lifecycle management is sufficient now.
Based on learnings from the PR objectives about lifecycle management changes.
```python
@pytest.fixture
async def multi_disk_path(self) -> AsyncGenerator[Path, None]:
    with tempfile.TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
        yield Path(temp_dir)
```
🧹 Nitpick | 🔵 Trivial
Async fixture performs no async operations.
The fixture is declared `async def` with return type `AsyncGenerator[Path, None]`, but uses only synchronous operations (`tempfile.TemporaryDirectory` with a regular `with`). The sync version in the relevant snippets correctly uses `Generator[Path, None, None]`.
If pytest-asyncio requires async fixtures in this context, consider clarifying with a comment. Otherwise, use `def` with `Generator[Path, None, None]` for accuracy.

```diff
 @pytest.fixture
-async def multi_disk_path(self) -> AsyncGenerator[Path, None]:
+def multi_disk_path(self) -> Generator[Path, None, None]:
     with tempfile.TemporaryDirectory(ignore_cleanup_errors=True) as temp_dir:
         yield Path(temp_dir)
```

Add the import:

```diff
-from collections.abc import AsyncGenerator
+from collections.abc import AsyncGenerator, Generator
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In key-value/key-value-aio/tests/stores/disk/test_multi_disk.py around lines 22
to 25, the fixture is declared async and typed as AsyncGenerator but performs
only synchronous operations; change it to a synchronous fixture by replacing
"async def" with "def" and use the correct typing Generator[Path, None, None],
and add the necessary import from typing (Generator) at top of the file; if
pytest-asyncio truly requires an async fixture, instead leave async and add a
clarifying comment explaining no await is needed.
Test Failure Analysis
Summary: Tests are failing across all platforms due to a missing pytest fixture, `per_test_temp_dir`.
Root Cause: The PR refactored multiple store test fixtures to use a common `per_test_temp_dir` fixture, but that fixture was never added to a conftest visible to those tests, so the refactored tests fail with a fixture-not-found error.
Suggested Solution: Add the fixture to the shared conftest:

```python
import tempfile
from collections.abc import Generator
from pathlib import Path

import pytest


@pytest.fixture
def per_test_temp_dir() -> Generator[Path, None, None]:
    """Create a temporary directory that is cleaned up after each test."""
    with tempfile.TemporaryDirectory() as temp_dir:
        yield Path(temp_dir)
```

This fixture should also be added to the sync package's conftest. After adding the fixture, regenerate the sync package so both conftests stay in sync.
Detailed Analysis
Several store test files across both packages were refactored to take `per_test_temp_dir` as a fixture parameter; every one of them fails the same way until the fixture exists.
Why This Happened
The refactoring changed from:

```python
@pytest.fixture
async def store(self) -> AsyncGenerator[FileTreeStore, None]:
    with tempfile.TemporaryDirectory() as temp_dir:
        temp_path = Path(temp_dir)
        yield FileTreeStore(data_directory=temp_path, ...)
```

To:

```python
@pytest.fixture
async def store(self, per_test_temp_dir: Path) -> FileTreeStore:
    return FileTreeStore(data_directory=per_test_temp_dir, ...)
```

This pattern is cleaner and allows for fixture reuse, but requires the `per_test_temp_dir` fixture to exist in a conftest visible to the tests that use it.
Note: There's also a separate Windows-specific PermissionError during test cleanup, but that appears to be a secondary issue related to file locking. The missing fixture is the primary blocker preventing tests from running at all.