update: Prism home changes #1371

Merged
3 commits merged into parseablehq:main on Jul 14, 2025

Conversation

@nikhilsinhaparseable (Contributor) commented Jul 12, 2025

Summary by CodeRabbit

  • New Features
    • Added a new section displaying the top five streams by ingestion size on the home page, offering quick insights into the most active data streams.
    • Enhanced alert summaries to show detailed top alerts by state and severity for better alert visibility.
    • Improved dashboard listing with sorting by modification date and optional limit on the number of dashboards returned.

coderabbitai bot (Contributor) commented Jul 12, 2025

Walkthrough

The alert summary system is refactored to provide detailed, state-partitioned alert information with the top alerts by severity. The home response replaces the alerts info field with this new summary and adds a top-five ingestion streams field computed by aggregating stream stats. Dashboard listing now supports a limit parameter parsed from the HTTP request and applied during dashboard retrieval.

Changes

File(s) and change summary:
  • src/prism/home/mod.rs: Replaced alerts_info with alerts_summary in HomeResponse; renamed fields in DatedStats; added the top_five_ingestion field and a helper function to compute it; updated generate_home_response.
  • src/alerts/mod.rs: Replaced the AlertsInfo struct with a detailed AlertsSummary struct partitioned by alert state; refactored get_alerts_info into get_alerts_summary, which returns detailed summaries with the top alerts by severity.
  • src/handlers/http/users/dashboards.rs: Updated list_dashboards to accept an HttpRequest, parse an optional "limit" query parameter, and pass it to dashboard listing; added error handling for query parsing.
  • src/users/dashboards.rs: Modified the list_dashboards method to accept a limit parameter; it filters, sorts by modification date descending, and truncates the dashboard list accordingly.
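
A minimal sketch of the reshaped home response under these changes; the alerts_summary and top_five_ingestion field names come from this PR, while StreamTotals and the map's value type are illustrative assumptions.

```rust
use std::collections::HashMap;

use serde::Serialize;

// Stand-in for the new state-partitioned alert summary (sketched further below).
#[derive(Serialize, Default)]
pub struct AlertsSummary {}

// Hypothetical per-stream aggregate; the change summary does not show the
// exact value type stored in top_five_ingestion.
#[derive(Serialize, Default)]
pub struct StreamTotals {
    events: u64,
    ingestion_size: u64,
    storage_size: u64,
}

#[derive(Serialize, Default)]
pub struct HomeResponse {
    // replaces the former alerts_info field
    alerts_summary: AlertsSummary,
    // new field: top five streams by ingestion size, keyed by stream name
    top_five_ingestion: HashMap<String, StreamTotals>,
    // ...the remaining existing fields are elided here
}
```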

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant HTTPHandler
    participant DashboardsModule

    Client->>HTTPHandler: GET /users/dashboards?limit=5
    HTTPHandler->>HTTPHandler: Parse query param "limit" = 5
    HTTPHandler->>DashboardsModule: list_dashboards(limit=5)
    DashboardsModule-->>HTTPHandler: Return top 5 dashboards sorted by modified date
    HTTPHandler-->>Client: HTTP 200 with dashboard list
sequenceDiagram
    participant Caller
    participant AlertsModule

    Caller->>AlertsModule: get_alerts_summary()
    AlertsModule->>AlertsModule: Iterate alerts, partition by state
    AlertsModule->>AlertsModule: Sort alerts by severity, truncate top 5 per state
    AlertsModule-->>Caller: AlertsSummary with totals and top alerts
sequenceDiagram
    participant Caller
    participant HomeModule

    Caller->>HomeModule: generate_home_response(stream_metadata)
    HomeModule->>AlertsModule: get_alerts_summary()
    HomeModule->>HomeModule: get_top_5_streams_by_ingestion(stream_metadata)
    HomeModule-->>Caller: HomeResponse with alerts_summary and top_five_ingestion
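
The first diagram shows the new limit query parameter flowing through the dashboards handler. Below is a minimal, std-only sketch of that parsing step, assuming a limit of 0 means "return all dashboards"; the real handler reads the parameter from an HttpRequest query map and returns a DashboardError rather than a String.

```rust
// Hypothetical standalone helper mirroring the handler's limit parsing;
// not the actual function from src/handlers/http/users/dashboards.rs.
fn parse_limit(query: &str) -> Result<usize, String> {
    for pair in query.split('&') {
        if let Some(value) = pair.strip_prefix("limit=") {
            // a non-numeric value is rejected, matching the handler's error path
            return value
                .parse::<usize>()
                .map_err(|_| "Invalid limit value".to_string());
        }
    }
    // no limit supplied: 0 means "no limit"
    Ok(0)
}

fn main() {
    assert_eq!(parse_limit("limit=5"), Ok(5));
    assert_eq!(parse_limit(""), Ok(0));
    assert!(parse_limit("limit=abc").is_err());
}
```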

Possibly related PRs

Suggested reviewers

  • parmesant

Poem

🐇 Alerts sorted, states aligned,
Top fives by severity defined.
Streams ingest with stats so bright,
Dashboards limited, shown just right.
A rabbit hops with code anew,
Bringing order, fresh and true!
🥕✨📈

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
src/prism/home/mod.rs (1)

43-49: Consider removing unused struct or document its intended usage.

The DatasetStats struct is defined but not used anywhere in the current implementation. This could indicate either incomplete implementation or dead code.

If this struct is intended for future use, consider adding a comment explaining its purpose. Otherwise, remove it to avoid confusion:

-#[derive(Debug, Serialize, Default)]
-pub struct DatasetStats {
-    dataset_name: String,
-    events: u64,
-    ingestion_size: u64,
-    storage_size: u64,
-}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c4533be and 34b149b.

📒 Files selected for processing (1)
  • src/prism/home/mod.rs (5 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/prism/home/mod.rs (2)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1340
File: src/query/mod.rs:64-66
Timestamp: 2025-06-18T06:39:04.775Z
Learning: In src/query/mod.rs, QUERY_SESSION_STATE and QUERY_SESSION serve different architectural purposes: QUERY_SESSION_STATE is used for stats calculation and allows dynamic registration of individual parquet files from the staging path (files created every minute), while QUERY_SESSION is used for object store queries with the global schema provider. Session contexts with schema providers don't support registering individual tables/parquets, so both session objects are necessary for their respective use cases.
Learnt from: de-sh
PR: parseablehq/parseable#1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Build Kafka x86_64-unknown-linux-gnu
  • GitHub Check: Build Default x86_64-pc-windows-msvc
  • GitHub Check: Build Default x86_64-apple-darwin
  • GitHub Check: Build Default aarch64-apple-darwin
  • GitHub Check: Build Default x86_64-unknown-linux-gnu
  • GitHub Check: Build Kafka aarch64-apple-darwin
  • GitHub Check: Build Default aarch64-unknown-linux-gnu
  • GitHub Check: coverage
  • GitHub Check: Quest Smoke and Load Tests for Distributed deployments
  • GitHub Check: Quest Smoke and Load Tests for Standalone deployments
🔇 Additional comments (5)
src/prism/home/mod.rs (5)

77-77: LGTM: Appropriate field addition for top ingestion streams.

The new field top_five_ingestion is well-typed and follows the established pattern of the HomeResponse struct.


130-130: LGTM: Explicit typing improves code clarity.

Adding explicit type annotation for stream_wise_stream_json enhances readability and makes the code more self-documenting.


157-158: LGTM: Clean integration of top streams calculation.

The function call is well-placed and follows the pattern of computing data before constructing the response.


184-184: LGTM: Proper inclusion of computed field in response.

The field is correctly included in the HomeResponse construction.


188-215: LGTM: Efficient and correct implementation of top streams aggregation.

The implementation correctly:

  • Aggregates stats across all ObjectStoreFormat entries per stream
  • Uses fold for efficient accumulation
  • Sorts by ingestion size in descending order
  • Limits results to top 5 streams

The algorithm is sound and should perform well for typical dataset sizes.
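
As a hedged, self-contained illustration of the pattern described above, the sketch below aggregates per-stream stats with fold, sorts by ingestion size descending, and keeps the top five; StatTotals and the input map are simplified stand-ins for Parseable's actual ObjectStoreFormat-based types, not the PR's real function.

```rust
use std::collections::HashMap;

// Simplified stand-in for the per-snapshot stats carried by each
// ObjectStoreFormat entry for a stream.
#[derive(Clone, Copy, Default)]
struct StatTotals {
    events: u64,
    ingestion_size: u64,
    storage_size: u64,
}

// Aggregate per-stream stats, sort by ingestion size (largest first), keep five.
fn top_5_streams_by_ingestion(
    per_stream: &HashMap<String, Vec<StatTotals>>,
) -> Vec<(String, StatTotals)> {
    let mut totals: Vec<(String, StatTotals)> = per_stream
        .iter()
        .map(|(name, entries)| {
            // fold all entries for a stream into one accumulated total
            let sum = entries.iter().fold(StatTotals::default(), |mut acc, s| {
                acc.events += s.events;
                acc.ingestion_size += s.ingestion_size;
                acc.storage_size += s.storage_size;
                acc
            });
            (name.clone(), sum)
        })
        .collect();

    totals.sort_by_key(|(_, s)| std::cmp::Reverse(s.ingestion_size));
    totals.truncate(5);
    totals
}
```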

coderabbitai bot previously approved these changes Jul 12, 2025
coderabbitai bot previously approved these changes Jul 12, 2025
@nikhilsinhaparseable changed the title from "update: add top 5 ingested streams in home api" to "update: Prism home changes" on Jul 14, 2025
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
src/users/dashboards.rs (1)

347-361: LGTM! Consider using iterator chaining for better performance.

The implementation correctly adds limit support with proper filtering and sorting. However, for better performance with large dashboard collections, consider avoiding the intermediate vector allocation.

-    pub async fn list_dashboards(&self, limit: usize) -> Vec<Dashboard> {
-        // limit the number of dashboards returned in order of modified date
-        // if limit is 0, return all dashboards
-        let dashboards = self.0.read().await;
-        let mut sorted_dashboards = dashboards
-            .iter()
-            .filter(|d| d.dashboard_id.is_some())
-            .cloned()
-            .collect::<Vec<Dashboard>>();
-        sorted_dashboards.sort_by_key(|d| std::cmp::Reverse(d.modified));
-        if limit > 0 {
-            sorted_dashboards.truncate(limit);
-        }
-        sorted_dashboards
+    pub async fn list_dashboards(&self, limit: usize) -> Vec<Dashboard> {
+        let dashboards = self.0.read().await;
+        let mut sorted_dashboards: Vec<_> = dashboards
+            .iter()
+            .filter(|d| d.dashboard_id.is_some())
+            .collect();
+        sorted_dashboards.sort_by_key(|d| std::cmp::Reverse(d.modified));
+        
+        let iter = sorted_dashboards.into_iter();
+        let limited_iter = if limit > 0 {
+            Box::new(iter.take(limit)) as Box<dyn Iterator<Item = _>>
+        } else {
+            Box::new(iter) as Box<dyn Iterator<Item = _>>
+        };
+        
+        limited_iter.cloned().collect()
     }
src/handlers/http/users/dashboards.rs (1)

35-48: Implementation looks good! Consider enhancing error messages for better UX.

The query parameter parsing and error handling are properly implemented. The default value of 0 (no limit) is appropriate.

Consider providing more specific error messages:

         if let Some(limit) = query_map.get("limit") {
             if let Ok(parsed_limit) = limit.parse::<usize>() {
                 dashboard_limit = parsed_limit;
             } else {
-                return Err(DashboardError::Metadata("Invalid limit value"));
+                return Err(DashboardError::Metadata("Invalid limit value - must be a non-negative integer"));
             }
         }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c67be9a and fff948f.

📒 Files selected for processing (4)
  • src/alerts/mod.rs (1 hunks)
  • src/handlers/http/users/dashboards.rs (1 hunks)
  • src/prism/home/mod.rs (9 hunks)
  • src/users/dashboards.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
src/users/dashboards.rs (1)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
src/handlers/http/users/dashboards.rs (2)
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/users/dashboards.rs:154-165
Timestamp: 2025-05-01T12:22:42.363Z
Learning: Title validation for dashboards is performed in the `create_dashboard` HTTP handler function rather than in the `DASHBOARDS.create` method, avoiding redundant validation.
src/prism/home/mod.rs (2)
Learnt from: de-sh
PR: parseablehq/parseable#1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
Learnt from: nikhilsinhaparseable
PR: parseablehq/parseable#1305
File: src/handlers/http/users/dashboards.rs:0-0
Timestamp: 2025-05-01T10:27:56.858Z
Learning: The `add_tile()` function in `src/handlers/http/users/dashboards.rs` should use `get_dashboard_by_user(dashboard_id, &user_id)` instead of `get_dashboard(dashboard_id)` to ensure proper authorization checks when modifying a dashboard.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Quest Smoke and Load Tests for Distributed deployments
  • GitHub Check: Quest Smoke and Load Tests for Standalone deployments
  • GitHub Check: Build Default x86_64-unknown-linux-gnu
  • GitHub Check: Build Default aarch64-apple-darwin
  • GitHub Check: Build Default x86_64-pc-windows-msvc
  • GitHub Check: Build Default x86_64-apple-darwin
  • GitHub Check: Build Default aarch64-unknown-linux-gnu
  • GitHub Check: Build Kafka aarch64-apple-darwin
  • GitHub Check: Build Kafka x86_64-unknown-linux-gnu
  • GitHub Check: coverage
🔇 Additional comments (3)
src/alerts/mod.rs (1)

1034-1131: Well-structured refactoring of alert summary!

The new state-partitioned structure with severity-based prioritization provides better organization and insights. The implementation correctly handles all alert states and maintains the top 5 alerts per state based on severity priority.
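
A hedged sketch of what such a state-partitioned summary can look like; the state keys, severity ordering, and AlertInfo fields below are illustrative assumptions, not the actual definitions in src/alerts/mod.rs.

```rust
use std::collections::HashMap;

// Severity ordered from most to least severe, so an ascending sort
// puts the most severe alerts first (assumed names).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Critical,
    High,
    Medium,
    Low,
}

#[derive(Clone)]
struct AlertInfo {
    title: String,
    severity: Severity,
}

struct StateSummary {
    total: usize,
    top: Vec<AlertInfo>, // at most five, most severe first
}

// Input is assumed to already be keyed by alert state (e.g. "triggered");
// for each state, record the total and keep the top five by severity.
fn summarize(alerts_by_state: HashMap<String, Vec<AlertInfo>>) -> HashMap<String, StateSummary> {
    alerts_by_state
        .into_iter()
        .map(|(state, mut alerts)| {
            let total = alerts.len();
            alerts.sort_by_key(|a| a.severity);
            alerts.truncate(5);
            (state, StateSummary { total, top: alerts })
        })
        .collect()
}
```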

src/prism/home/mod.rs (2)

180-207: Excellent implementation of top 5 streams aggregation!

The function correctly:

  • Aggregates stats across all formats per stream
  • Sorts by ingestion in descending order
  • Limits to top 5 streams
  • Returns an efficient HashMap structure

98-103: Proper integration with refactored alert summary and dashboard listing.

The changes correctly integrate with:

  • New AlertsSummary structure from the alerts module
  • Updated list_dashboards(0) call to get all dashboards
  • Field renames for consistency (ingestion_size → ingestion, storage_size → storage)

Also applies to: 175-176, 408-408
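
For reference, a hedged illustration of the DatedStats rename mentioned above; the date and events fields are assumptions about the rest of the struct.

```rust
use serde::Serialize;

// Illustrative only; see src/prism/home/mod.rs for the real definition.
#[derive(Serialize)]
struct DatedStats {
    date: String,
    events: u64,
    ingestion: u64, // previously ingestion_size
    storage: u64,   // previously storage_size
}
```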

@nitisht merged commit 2bd8f2f into parseablehq:main on Jul 14, 2025
13 of 14 checks passed