update: Prism home changes #1371
Conversation
Walkthrough

The alert summary system is refactored to provide detailed, state-partitioned alert information with top alerts by severity. The home response replaces the previous alerts info with this new summary and adds a top-five ingestion streams field computed by aggregating stream stats. Dashboard listing is updated to support a limit parameter parsed from HTTP requests and applied in dashboard retrieval.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant HTTPHandler
    participant DashboardsModule
    Client->>HTTPHandler: GET /users/dashboards?limit=5
    HTTPHandler->>HTTPHandler: Parse query param "limit" = 5
    HTTPHandler->>DashboardsModule: list_dashboards(limit=5)
    DashboardsModule-->>HTTPHandler: Return top 5 dashboards sorted by modified date
    HTTPHandler-->>Client: HTTP 200 with dashboard list
```
```mermaid
sequenceDiagram
    participant Caller
    participant AlertsModule
    Caller->>AlertsModule: get_alerts_summary()
    AlertsModule->>AlertsModule: Iterate alerts, partition by state
    AlertsModule->>AlertsModule: Sort alerts by severity, truncate top 5 per state
    AlertsModule-->>Caller: AlertsSummary with totals and top alerts
```
```mermaid
sequenceDiagram
    participant Caller
    participant HomeModule
    Caller->>HomeModule: generate_home_response(stream_metadata)
    HomeModule->>AlertsModule: get_alerts_summary()
    HomeModule->>HomeModule: get_top_5_streams_by_ingestion(stream_metadata)
    HomeModule-->>Caller: HomeResponse with alerts_summary and top_five_ingestion
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/prism/home/mod.rs (1)

43-49: Consider removing the unused struct or documenting its intended usage.

The `DatasetStats` struct is defined but not used anywhere in the current implementation. This could indicate either an incomplete implementation or dead code. If this struct is intended for future use, consider adding a comment explaining its purpose. Otherwise, remove it to avoid confusion:
```diff
-#[derive(Debug, Serialize, Default)]
-pub struct DatasetStats {
-    dataset_name: String,
-    events: u64,
-    ingestion_size: u64,
-    storage_size: u64,
-}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/prism/home/mod.rs (5 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/prism/home/mod.rs (2)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1340
File: src/query/mod.rs:64-66
Timestamp: 2025-06-18T06:39:04.775Z
Learning: In src/query/mod.rs, QUERY_SESSION_STATE and QUERY_SESSION serve different architectural purposes: QUERY_SESSION_STATE is used for stats calculation and allows dynamic registration of individual parquet files from the staging path (files created every minute), while QUERY_SESSION is used for object store queries with the global schema provider. Session contexts with schema providers don't support registering individual tables/parquets, so both session objects are necessary for their respective use cases.
Learnt from: de-sh
PR: parseablehq/parseable#1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Default x86_64-apple-darwin
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: coverage
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
🔇 Additional comments (5)
src/prism/home/mod.rs (5)
77-77: LGTM: Appropriate field addition for top ingestion streams.

The new field `top_five_ingestion` is well-typed and follows the established pattern of the `HomeResponse` struct.
130-130: LGTM: Explicit typing improves code clarity.

Adding an explicit type annotation for `stream_wise_stream_json` enhances readability and makes the code more self-documenting.
157-158: LGTM: Clean integration of top streams calculation.

The function call is well placed and follows the pattern of computing data before constructing the response.
184-184: LGTM: Proper inclusion of the computed field in the response.

The field is correctly included in the `HomeResponse` construction.
188-215: LGTM: Efficient and correct implementation of top streams aggregation.

The implementation correctly:

- Aggregates stats across all `ObjectStoreFormat` entries per stream
- Uses `fold` for efficient accumulation
- Sorts by ingestion size in descending order
- Limits results to the top 5 streams

The algorithm is sound and should perform well for typical dataset sizes.
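The aggregation pattern described above can be sketched in isolation. Note the `StreamStats` type below is a hypothetical, simplified stand-in for the stats carried by `ObjectStoreFormat`; only the fold-then-sort-then-truncate shape mirrors what the review describes.

```rust
use std::collections::HashMap;

// Hypothetical simplified per-stream stats record (stand-in for the real type).
#[derive(Clone, Copy, Default)]
struct StreamStats {
    events: u64,
    ingestion_size: u64,
    storage_size: u64,
}

// Sketch of the aggregation: fold each stream's entries into a total,
// sort descending by ingestion size, keep only the top five streams.
fn top_5_streams_by_ingestion(
    stream_stats: &HashMap<String, Vec<StreamStats>>,
) -> Vec<(String, StreamStats)> {
    let mut totals: Vec<(String, StreamStats)> = stream_stats
        .iter()
        .map(|(name, entries)| {
            let total = entries.iter().fold(StreamStats::default(), |mut acc, s| {
                acc.events += s.events;
                acc.ingestion_size += s.ingestion_size;
                acc.storage_size += s.storage_size;
                acc
            });
            (name.clone(), total)
        })
        .collect();
    totals.sort_by_key(|(_, s)| std::cmp::Reverse(s.ingestion_size));
    totals.truncate(5);
    totals
}
```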
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/users/dashboards.rs (1)
347-361: LGTM! Consider using iterator chaining for better performance.

The implementation correctly adds limit support with proper filtering and sorting. However, for better performance with large dashboard collections, consider avoiding the intermediate vector allocation:

```diff
-    pub async fn list_dashboards(&self, limit: usize) -> Vec<Dashboard> {
-        // limit the number of dashboards returned in order of modified date
-        // if limit is 0, return all dashboards
-        let dashboards = self.0.read().await;
-        let mut sorted_dashboards = dashboards
-            .iter()
-            .filter(|d| d.dashboard_id.is_some())
-            .cloned()
-            .collect::<Vec<Dashboard>>();
-        sorted_dashboards.sort_by_key(|d| std::cmp::Reverse(d.modified));
-        if limit > 0 {
-            sorted_dashboards.truncate(limit);
-        }
-        sorted_dashboards
+    pub async fn list_dashboards(&self, limit: usize) -> Vec<Dashboard> {
+        let dashboards = self.0.read().await;
+        let mut sorted_dashboards: Vec<_> = dashboards
+            .iter()
+            .filter(|d| d.dashboard_id.is_some())
+            .collect();
+        sorted_dashboards.sort_by_key(|d| std::cmp::Reverse(d.modified));
+
+        let iter = sorted_dashboards.into_iter();
+        let limited_iter = if limit > 0 {
+            Box::new(iter.take(limit)) as Box<dyn Iterator<Item = _>>
+        } else {
+            Box::new(iter) as Box<dyn Iterator<Item = _>>
+        };
+
+        limited_iter.cloned().collect()
     }
```

src/handlers/http/users/dashboards.rs (1)
35-48: Implementation looks good! Consider enhancing error messages for better UX.

The query parameter parsing and error handling are properly implemented. The default value of 0 (no limit) is appropriate.

Consider providing a more specific error message:

```diff
     if let Some(limit) = query_map.get("limit") {
         if let Ok(parsed_limit) = limit.parse::<usize>() {
             dashboard_limit = parsed_limit;
         } else {
-            return Err(DashboardError::Metadata("Invalid limit value"));
+            return Err(DashboardError::Metadata("Invalid limit value - must be a non-negative integer"));
         }
     }
```
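The parse-or-reject pattern above can be sketched as a standalone helper. `parse_limit` and its string error type are hypothetical illustrations, not the handler's actual signature; the behavior shown (missing `limit` defaults to 0, meaning "no limit", and non-numeric values are rejected) matches the handler logic under review.

```rust
use std::collections::HashMap;

// Hypothetical helper mirroring the handler's limit parsing:
// absent -> Ok(0) (no limit); non-numeric -> error.
fn parse_limit(query_map: &HashMap<String, String>) -> Result<usize, String> {
    match query_map.get("limit") {
        None => Ok(0),
        Some(raw) => raw
            .parse::<usize>()
            .map_err(|_| "Invalid limit value - must be a non-negative integer".to_string()),
    }
}
```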
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/alerts/mod.rs (1 hunks)
- src/handlers/http/users/dashboards.rs (1 hunks)
- src/prism/home/mod.rs (9 hunks)
- src/users/dashboards.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
src/users/dashboards.rs (1)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
src/handlers/http/users/dashboards.rs (2)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/users/dashboards.rs:154-165
Timestamp: 2025-05-01T12:22:42.363Z
Learning: Title validation for dashboards is performed in the `create_dashboard` HTTP handler function rather than in the `DASHBOARDS.create` method, avoiding redundant validation.
src/prism/home/mod.rs (2)
Learnt from: de-sh
PR: parseablehq/parseable#1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Default x86_64-apple-darwin
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: coverage
🔇 Additional comments (3)
src/alerts/mod.rs (1)
1034-1131: Well-structured refactoring of the alert summary!

The new state-partitioned structure with severity-based prioritization provides better organization and insights. The implementation correctly handles all alert states and maintains the top 5 alerts per state based on severity priority.
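A minimal, self-contained sketch of the partition-by-state plus top-5-by-severity logic this comment describes. The enum variants and `Alert` fields here are hypothetical simplifications; the real types live in src/alerts/mod.rs.

```rust
use std::collections::HashMap;

// Hypothetical minimal alert model (stand-ins for the real types).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum AlertState { Triggered, Silenced, Resolved }

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Severity { Low, Medium, High, Critical }

#[derive(Clone, Debug)]
struct Alert {
    title: String,
    state: AlertState,
    severity: Severity,
}

// Partition alerts by state, then within each state sort by severity
// (highest first) and keep only the top five.
fn summarize(alerts: &[Alert]) -> HashMap<AlertState, Vec<Alert>> {
    let mut by_state: HashMap<AlertState, Vec<Alert>> = HashMap::new();
    for a in alerts {
        by_state.entry(a.state).or_default().push(a.clone());
    }
    for list in by_state.values_mut() {
        list.sort_by_key(|a| std::cmp::Reverse(a.severity));
        list.truncate(5);
    }
    by_state
}
```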
src/prism/home/mod.rs (2)
180-207: Excellent implementation of top 5 streams aggregation!

The function correctly:

- Aggregates stats across all formats per stream
- Sorts by ingestion in descending order
- Limits to the top 5 streams
- Returns an efficient HashMap structure
98-103: Proper integration with the refactored alert summary and dashboard listing.

The changes correctly integrate with:

- The new `AlertsSummary` structure from the alerts module
- The updated `list_dashboards(0)` call to get all dashboards
- Field renames for consistency (`ingestion_size` → `ingestion`, `storage_size` → `storage`)

Also applies to: 175-176, 408-408