fix: create stream before merge schema #1381
Conversation
Issue: the schema merge and commit to memory happened before the stream was created from storage.
Fix: first create the stream from storage if it is not present, then merge the schema and commit it to memory.
Walkthrough: The query flow is simplified by removing the early schema update step and the fallback logic for missing streams.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant QueryHandler
    participant StreamCreator
    participant LogicalPlanner
    participant Executor
    Client->>QueryHandler: Send Query Request
    QueryHandler->>StreamCreator: create_streams_for_distributed(stream_names)
    StreamCreator-->>QueryHandler: Streams Created
    QueryHandler->>LogicalPlanner: into_query(query, session_state, time_range)
    LogicalPlanner-->>QueryHandler: Logical Plan
    QueryHandler->>Executor: Execute Logical Plan
    Executor-->>QueryHandler: Query Results
    QueryHandler-->>Client: Return Results
```
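The ordering in the diagram can be sketched as a minimal Rust flow. This is a hypothetical, simplified model: `run_query_flow` and the step labels are illustrative stand-ins, not Parseable's actual API.

```rust
// Hypothetical sketch of the fixed ordering shown in the diagram above:
// ensure streams exist in memory first, then build the logical plan,
// then merge and commit the schema. All names here are illustrative
// stand-ins, not Parseable's real types or functions.

fn run_query_flow(streams: &[&str]) -> Vec<String> {
    let mut steps = Vec::new();

    // 1. Create each stream from storage if it is not already in memory
    //    (the `create_streams_for_distributed` step in the diagram).
    for s in streams {
        steps.push(format!("create:{s}"));
    }

    // 2. Build the logical plan (`into_query`); table resolution can now
    //    find every stream it references.
    steps.push("plan".to_string());

    // 3. Merge the latest schema from storage and commit it to memory.
    for s in streams {
        steps.push(format!("merge_schema:{s}"));
    }

    steps
}

fn main() {
    for step in run_query_flow(&["app_logs"]) {
        println!("{step}");
    }
}
```

The point of the reordering is that step 3 can only succeed once step 1 has run, which is exactly the bug the PR fixes.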
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/handlers/http/query.rs (2)

Line 100: Consider removing the empty line; it appears to be leftover from the code reorganization.

Line 118: Consider removing the empty line; it appears to be leftover from the code reorganization.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
src/handlers/http/query.rs (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: de-sh
PR: parseablehq/parseable#1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1340
File: src/query/mod.rs:64-66
Timestamp: 2025-06-18T06:39:04.775Z
Learning: In src/query/mod.rs, QUERY_SESSION_STATE and QUERY_SESSION serve different architectural purposes: QUERY_SESSION_STATE is used for stats calculation and allows dynamic registration of individual parquet files from the staging path (files created every minute), while QUERY_SESSION is used for object store queries with the global schema provider. Session contexts with schema providers don't support registering individual tables/parquets, so both session objects are necessary for their respective use cases.
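The stream-existence pattern quoted in the first learning above can be sketched as a small boolean check. This is a hypothetical simplification: `Mode`, `in_memory`, and `created_from_storage` are stand-ins for `PARSEABLE.options.mode`, `PARSEABLE.streams.contains(..)`, and the result of `create_stream_and_schema_from_storage(..)`.

```rust
// Hypothetical sketch of the stream-existence check described in the
// learning above. The stream counts as missing when it is not in memory
// AND either we are not in query mode, or the attempt to pull it from
// object storage failed.

#[derive(PartialEq)]
enum Mode {
    Query,
    Standalone,
}

fn stream_is_missing(in_memory: bool, mode: Mode, created_from_storage: bool) -> bool {
    !in_memory && (mode != Mode::Query || !created_from_storage)
}

fn main() {
    // In query mode, a successful pull from storage rescues a missing stream.
    println!("{}", stream_is_missing(false, Mode::Query, true));
    // In standalone mode there is no storage fallback, so the stream stays missing.
    println!("{}", stream_is_missing(false, Mode::Standalone, true));
}
```

The short-circuit ordering matters: the storage fallback is only attempted when running in query mode, which is why both modes need the combined check.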
src/handlers/http/query.rs (2)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1340
File: src/query/mod.rs:64-66
Timestamp: 2025-06-18T06:39:04.775Z
Learning: In src/query/mod.rs, QUERY_SESSION_STATE and QUERY_SESSION serve different architectural purposes: QUERY_SESSION_STATE is used for stats calculation and allows dynamic registration of individual parquet files from the staging path (files created every minute), while QUERY_SESSION is used for object store queries with the global schema provider. Session contexts with schema providers don't support registering individual tables/parquets, so both session objects are necessary for their respective use cases.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
🧬 Code Graph Analysis (1)
src/handlers/http/query.rs (2)
src/utils/actix.rs (1)
extract_session_key_from_req (51-71)
src/utils/mod.rs (1)
user_auth_for_datasets (91-150)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: coverage
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Build Default x86_64-apple-darwin
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-unknown-linux-gnu
🔇 Additional comments (2)
src/handlers/http/query.rs (2)

Line 92: Good fix: the schema update now happens after logical query construction. This repositioning ensures that streams are created from storage (if needed) during into_query() before the schema is merged and committed to memory, which addresses the core issue described in the PR objectives.

Line 121: Good fix: consistent schema update positioning. This mirrors the fix in get_records_and_fields and enforces the same ordering — logical query construction, then schema update, then authentication/authorization — keeping both query functions consistent.
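Why the ordering in these two comments matters can be shown with a toy in-memory model. This is a hypothetical sketch: `Memory`, `create_stream_from_storage`, and `merge_schema` are illustrative stand-ins, not Parseable's real structures.

```rust
use std::collections::HashMap;

// Hypothetical toy model of the bug this PR fixes: merging a schema into a
// stream that does not yet exist in memory fails, so stream creation must
// come first. All names here are illustrative stand-ins.

struct Memory {
    // stream name -> schema fields committed to memory
    streams: HashMap<String, Vec<String>>,
}

impl Memory {
    fn new() -> Self {
        Memory { streams: HashMap::new() }
    }

    // Create the stream in memory (as if loaded from object storage).
    fn create_stream_from_storage(&mut self, name: &str) {
        self.streams.entry(name.to_string()).or_default();
    }

    // Merging only succeeds for a stream that already exists in memory.
    fn merge_schema(&mut self, name: &str, field: &str) -> Result<(), String> {
        match self.streams.get_mut(name) {
            Some(fields) => {
                fields.push(field.to_string());
                Ok(())
            }
            None => Err(format!("stream '{name}' not found in memory")),
        }
    }
}

fn main() {
    let mut memory = Memory::new();
    // Old ordering: merge before create -> error.
    println!("{:?}", memory.merge_schema("logs", "level"));
    // Fixed ordering: create first, then merge -> ok.
    memory.create_stream_from_storage("logs");
    println!("{:?}", memory.merge_schema("logs", "level"));
}
```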