Conversation

@mtrezza (Member) commented Nov 8, 2025

Pull Request

Issue

Closes: #7610

Approach

Tasks

  • Add tests
  • Add changes to documentation (guides, repository pages, code comments)

Summary by CodeRabbit

  • New Features
    • Automated performance benchmarking runs on pull requests and posts a Performance Impact Report with comparisons and summaries.
    • Comprehensive performance test suite covering object create/read/update, queries, batch operations, and authentication flows.
    • New npm scripts to run performance benchmarks locally for development and testing.

@parse-github-assistant bot commented Nov 8, 2025

🚀 Thanks for opening this pull request!

@coderabbitai bot commented Nov 8, 2025

Warning

Rate limit exceeded

@mtrezza has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 19 minutes and 38 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 66422cf and 2a0d1e8.

📒 Files selected for processing (3)
  • .github/workflows/ci-performance.yml (1 hunks)
  • CONTRIBUTING.md (2 hunks)
  • benchmark/performance.js (1 hunks)
📝 Walkthrough

Adds a CI performance benchmarking pipeline: a GitHub Actions workflow that runs baseline and PR benchmarks, collects JSON results, compares them, produces a Markdown report, and posts a PR comment; plus a new Node.js benchmark suite and npm scripts for local runs.

Changes

Cohort / File(s) — Summary

  • GitHub Actions Workflow — .github/workflows/ci-performance.yml: New workflow that runs baseline and PR benchmark phases on PRs, captures baseline.json and pr.json, validates/normalizes results, compares them via an inline Node script producing comparison.md, uploads artifacts, and posts a Performance Impact Report as a PR comment and job summary.
  • Benchmark Suite — benchmark/performance.js: New Node.js benchmark suite that boots an in-process Parse Server with MongoDB, provides cleanup and measurement utilities, implements benchmarks (Create, Read, Update, Query, Batch Save, User Signup, User Login), outputs github-action-benchmark JSON (shape sketched below), exports runBenchmarks(), and runs when executed directly.
  • NPM Scripts / package manifest — package.json: Adds benchmark, benchmark:only, and benchmark:quick npm scripts, plus a separator comma on the madge:circular script line to make room for the new entries.
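
For reference, github-action-benchmark's customSmallerIsBetter tool (the tool the workflow configures) consumes a JSON array of entries shaped like the following; shown here as a JavaScript literal with illustrative, non-measured values:

```js
// Shape of the results JSON the suite emits for github-action-benchmark's
// `customSmallerIsBetter` tool: one entry per benchmark, smaller `value` wins.
const exampleResults = [
  { name: 'Object Create', unit: 'ms', value: 12.3, extra: 'p95: 18.1 ms' },
  { name: 'User Login', unit: 'ms', value: 45.6, extra: 'p95: 60.2 ms' },
];
console.log(JSON.stringify(exampleResults, null, 2));
```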

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant PR as Pull Request
    participant GHA as GitHub Actions
    participant Base as Base Checkout
    participant Branch as PR Checkout
    participant Bench as Benchmark Runner
    participant Compare as Comparator
    participant Art as Artifacts
    participant Comment as PR Comment

    GHA->>Base: Checkout base branch
    Base->>Bench: Install, build, run benchmarks (baseline)
    Bench-->>GHA: Upload `baseline.json` artifact
    GHA->>Branch: Checkout PR branch
    Branch->>Bench: Install, build, run benchmarks (PR)
    Bench-->>GHA: Upload `pr.json` artifact
    GHA->>Compare: Fetch artifacts, normalize JSON, compare results
    Compare-->>GHA: Produce `comparison.md` (status, regressions/improvements)
    GHA->>Art: Upload artifacts (baseline/pr/comparison)
    GHA->>Comment: Post Performance Impact Report + JSON payload
    GHA->>PR: Append comparison to job summary

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Areas to inspect closely (a minimal sketch of the inline comparison follows this list):
    • .github/workflows/ci-performance.yml: inline Node comparison script correctness, artifact upload/download logic, branch filtering and trigger patterns, secret/env handling.
    • benchmark/performance.js: Parse Server init/teardown, MongoDB connection handling, measurement accuracy/statistics, cleanup routine effects on shared DBs.
    • package.json: verify that the scripts invoke the expected binaries (mongodb-runner) and exit behaviour for CI reproducibility.
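
A minimal sketch of what such an inline comparison script can look like, using the ±10% / +20% bands the report applies; the file names match the workflow's artifacts, but everything else is an assumption rather than the actual embedded script:

```js
// Compare baseline.json and pr.json (github-action-benchmark custom format)
// and emit a Markdown table; thresholds mirror the report's status bands.
const fs = require('fs');

function compare(baselinePath, prPath) {
  const base = JSON.parse(fs.readFileSync(baselinePath, 'utf8'));
  const pr = JSON.parse(fs.readFileSync(prPath, 'utf8'));
  const baseByName = new Map(base.map(b => [b.name, b.value]));
  const rows = pr.map(({ name, value }) => {
    const baseline = baseByName.get(name);
    if (baseline === undefined) { return `| ${name} | – | ${value} | new |`; }
    const change = ((value - baseline) / baseline) * 100;
    const status = change > 20 ? '❌ Much Slower'
      : change > 10 ? '⚠️ Slower'
      : change < -10 ? '🚀 Faster'
      : '✅';
    return `| ${name} | ${baseline} | ${value} | ${change.toFixed(1)}% ${status} |`;
  });
  return ['| Benchmark | Baseline (ms) | PR (ms) | Change |', '|---|---|---|---|', ...rows].join('\n');
}

fs.writeFileSync('comparison.md', compare('baseline.json', 'pr.json'));
```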

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description check — ⚠️ Warning
    Explanation: The PR description is largely incomplete with missing key sections. The Approach section is empty, and the documentation task is marked incomplete despite documentation being relevant for this feature.
    Resolution: Complete the Approach section explaining the CI workflow design, benchmark implementation, and comparison logic. Also mark or document the changes made to guides/comments as indicated by the implementation.
✅ Passed checks (4 passed)
  • Title check — ✅ Passed: The title clearly and concisely describes the main change: adding a performance impact step to the CI workflow.
  • Linked Issues check — ✅ Passed: The PR implements the core requirement from #7610 by adding a GitHub Actions workflow that runs performance benchmarks, compares baseline vs PR results, and displays the impact in the PR thread via comments and the job summary.
  • Out of Scope Changes check — ✅ Passed: All changes are directly scoped to implementing the performance CI feature: workflow file, benchmark script, and npm scripts for running benchmarks.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the check.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@parseplatformorg (Contributor) commented Nov 8, 2025

Snyk checks have passed. No issues have been found so far.

| Status | Scanner | Critical | High | Medium | Low | Total |
|--------|---------|----------|------|--------|-----|-------|
| ✅ | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |

💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.

@coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (5)
benchmark/performance.js (4)

46-53: Consider adding error handling for server initialization.

The server.listen callback never receives an error argument; listen failures (e.g., port already in use) surface as an 'error' event on the returned server, so the promise currently never rejects. While this may be acceptable for a benchmark script, handling the event gives better diagnostics.

   return new Promise((resolve, reject) => {
     const server = app.listen(1337, () => {
       Parse.initialize(APP_ID);
       Parse.masterKey = MASTER_KEY;
       Parse.serverURL = SERVER_URL;
       resolve(server);
     });
+    server.once('error', reject);
   });

59-71: Add error handling for MongoDB connection.

The MongoClient.connect call can throw an error if the connection fails, but there's no try-catch wrapper. This could cause the benchmark to fail ungracefully.

 async function cleanupDatabase() {
-  if (!mongoClient) {
-    mongoClient = await MongoClient.connect(MONGODB_URI);
+  try {
+    if (!mongoClient) {
+      mongoClient = await MongoClient.connect(MONGODB_URI);
+    }
+    const db = mongoClient.db();
+    const collections = await db.listCollections().toArray();
+
+    for (const collection of collections) {
+      if (!collection.name.startsWith('system.')) {
+        await db.collection(collection.name).deleteMany({});
+      }
+    }
+  } catch (error) {
+    console.error('Error cleaning database:', error);
+    throw error;
   }
-  const db = mongoClient.db();
-  const collections = await db.listCollections().toArray();
-
-  for (const collection of collections) {
-    if (!collection.name.startsWith('system.')) {
-      await db.collection(collection.name).deleteMany({});
-    }
-  }
 }

222-233: Inline the counter increment for clarity.

The counter already makes same-millisecond usernames unique; inlining the increment ties the username and email to the same iteration value and drops the trailing counter++.

 async function benchmarkUserSignup() {
   let counter = 0;
 
   return measureOperation('User Signup', async () => {
     const user = new Parse.User();
-    user.set('username', `benchmark_user_${Date.now()}_${counter}`);
+    user.set('username', `benchmark_user_${Date.now()}_${counter++}`);
     user.set('password', 'benchmark_password');
-    user.set('email', `benchmark${counter}@example.com`);
-    counter++;
+    user.set('email', `benchmark${counter - 1}@example.com`);
     await user.signUp();
   }, Math.floor(ITERATIONS / 10)); // Fewer iterations for user operations
 }

277-277: Consider making server startup wait time configurable.

The hard-coded 2-second wait might not be sufficient in slower CI environments or under load. Consider making this configurable via environment variable, or adding a health check instead (a readiness-poll sketch follows the diff).

+    const STARTUP_WAIT = parseInt(process.env.BENCHMARK_STARTUP_WAIT || '2000', 10);
     // Wait for server to be ready
-    await new Promise(resolve => setTimeout(resolve, 2000));
+    await new Promise(resolve => setTimeout(resolve, STARTUP_WAIT));
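
As an alternative to a configurable sleep, a readiness poll is more robust. A minimal sketch, assuming Node 18+ global fetch; the health endpoint path is an assumption based on the route Parse Server mounts under its server URL:

```js
// Poll the health endpoint until the server responds, instead of sleeping a
// fixed interval. Assumes global fetch (Node 18+) and the /parse/health route.
async function waitForServer(url = 'http://localhost:1337/parse/health', timeoutMs = 30000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) { return; }
    } catch (_) {
      // Server not accepting connections yet; keep polling.
    }
    await new Promise(resolve => setTimeout(resolve, 250));
  }
  throw new Error(`Server did not become healthy within ${timeoutMs} ms`);
}
```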
.github/workflows/ci-performance.yml (1)

173-183: Fix the unreachable `change > 20` branch.

In the current logic the `else if (change > 20)` branch can never run: any value above 20 is also above 10, so the first branch always wins and '❌ Much Slower' is never reported. Check the larger threshold first:

           let status = '✅';
-          if (change > 10) {
-            status = '⚠️ Slower';
-            hasRegression = true;
-          } else if (change > 20) {
+          if (change > 20) {
             status = '❌ Much Slower';
             hasRegression = true;
+          } else if (change > 10) {
+            status = '⚠️ Slower';
+            hasRegression = true;
           } else if (change < -10) {
             status = '🚀 Faster';
             hasImprovement = true;
           }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7682644 and 2595892.

📒 Files selected for processing (3)
  • .github/workflows/ci-performance.yml (1 hunks)
  • benchmark/performance.js (1 hunks)
  • package.json (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: When reviewing Parse Server PRs that add new features, always check whether the feature is documented in the README.md file, though for new Parse Server options this is optional rather than required.
📚 Learning: 2025-11-08T13:46:04.917Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: For new Parse Server options, verify that the option is documented in src/Options/index.js and that npm run definitions has been executed to reflect changes in src/Options/docs.js and src/Options/Definitions.js. README.md documentation is a bonus but not required for new options.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-04-30T19:31:35.344Z
Learnt from: RahulLanjewar93
Repo: parse-community/parse-server PR: 9744
File: spec/ParseLiveQuery.spec.js:0-0
Timestamp: 2025-04-30T19:31:35.344Z
Learning: In the Parse Server codebase, the functions in QueryTools.js are typically tested through end-to-end behavior tests rather than direct unit tests, even though the functions are exported from the module.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`. The preferred pattern is to create a Promise that resolves when an expected event occurs, then await that Promise.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: Tests in the parse-server repository should use promise-based approaches rather than callback patterns with `done()`. Use a pattern where a Promise is created that resolves when the event occurs, then await that promise.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-04T20:41:05.147Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1312-1338
Timestamp: 2025-05-04T20:41:05.147Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`.

Applied to files:

  • benchmark/performance.js
🧬 Code graph analysis (1)
benchmark/performance.js (1)
src/Adapters/Storage/Mongo/MongoStorageAdapter.js (1)
  • MongoClient (25-25)
🪛 GitHub Check: Lint
benchmark/performance.js

[warning] Unexpected console statement (10 occurrences) — lines 265, 266, 267, 273, 282, 286, 290, 294, 298, 302

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: Redis Cache
  • GitHub Check: Node 18
  • GitHub Check: PostgreSQL 16, PostGIS 3.5
  • GitHub Check: PostgreSQL 15, PostGIS 3.3
  • GitHub Check: PostgreSQL 18, PostGIS 3.6
  • GitHub Check: Node 22
  • GitHub Check: PostgreSQL 17, PostGIS 3.5
  • GitHub Check: PostgreSQL 15, PostGIS 3.5
  • GitHub Check: PostgreSQL 15, PostGIS 3.4
  • GitHub Check: Node 20
  • GitHub Check: MongoDB 8, ReplicaSet
  • GitHub Check: MongoDB 7, ReplicaSet
  • GitHub Check: MongoDB 6, ReplicaSet
  • GitHub Check: Docker Build
🔇 Additional comments (9)
package.json (2)

141-141: LGTM: Comma added to allow new entries.

A comma was appended to the previously last madge:circular script line so the new benchmark entries can follow. (Strict JSON does not permit dangling trailing commas, so this is a separator for the added scripts rather than a true trailing comma.)


142-144: LGTM: Benchmark scripts properly configured.

The benchmark scripts follow the same pattern as existing test scripts and correctly integrate with mongodb-runner. The MongoDB version (8.0.4) is consistent with the CI workflow configuration.

benchmark/performance.js (5)

11-24: LGTM: Configuration is well-structured.

The configuration uses environment variables appropriately and provides sensible defaults for local development and CI execution.
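
A plausible shape of the configuration block the review refers to; the exact names and defaults below are assumptions, not the file's actual contents:

```js
// Environment-driven configuration with local-development defaults (assumed).
const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/benchmark';
const SERVER_URL = process.env.SERVER_URL || 'http://localhost:1337/parse';
const ITERATIONS = parseInt(process.env.BENCHMARK_ITERATIONS || '100', 10);
```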


76-103: LGTM: Performance measurement implementation is solid.

The measurement function correctly captures timing data and calculates relevant statistics (mean, percentiles, min/max). The output format is compatible with github-action-benchmark.
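
A minimal sketch of a measurement helper matching what the review describes (mean, percentiles, min/max, github-action-benchmark output shape); the actual implementation in benchmark/performance.js may differ:

```js
const { performance } = require('perf_hooks');

// Time an async operation over N iterations and summarize the samples in
// github-action-benchmark's `customSmallerIsBetter` entry format.
async function measureOperation(name, fn, iterations) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fn();                               // the operation under test
    samples.push(performance.now() - start);  // per-iteration latency in ms
  }
  samples.sort((a, b) => a - b);
  const mean = samples.reduce((sum, t) => sum + t, 0) / samples.length;
  const p95 = samples[Math.floor(0.95 * (samples.length - 1))];
  return {
    name,
    unit: 'ms',
    value: Number(mean.toFixed(3)),           // smaller is better
    extra: `p95: ${p95.toFixed(3)} ms, min: ${samples[0].toFixed(3)} ms, max: ${samples[samples.length - 1].toFixed(3)} ms`,
  };
}

// Hypothetical usage, mirroring one of the benchmark names in this review:
// const result = await measureOperation('Object Create', async () => {
//   const obj = new Parse.Object('BenchmarkItem'); // class name is illustrative
//   await obj.save();
// }, 100);
```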


108-259: LGTM: Benchmark suite covers key Parse Server operations.

The benchmark implementations are well-designed:

  • Appropriate setup and cleanup for each test
  • Reduced iterations for expensive operations (user ops, batch saves)
  • Good coverage of core functionality (CRUD, queries, auth)

264-335: LGTM: Orchestration logic is well-structured.

The runBenchmarks function properly:

  • Separates status output (stderr) from results (stdout) — sketched below
  • Cleans up database between benchmarks for isolation
  • Handles errors gracefully with appropriate exit codes
  • Cleans up resources in finally block
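
A sketch of that stderr/stdout split, so that `npm run benchmark > results.json` captures pure JSON; runBenchmarks() is the export this review describes, but the wiring shown is an assumption:

```js
// Progress and errors go to stderr; only the results JSON goes to stdout.
const { runBenchmarks } = require('./performance');

console.error('Starting benchmark run...');        // progress → stderr
runBenchmarks()
  .then(results => {
    console.log(JSON.stringify(results, null, 2)); // results → stdout
  })
  .catch(err => {
    console.error('Benchmark run failed:', err);   // errors → stderr
    process.exit(1);
  });
```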

338-342: LGTM: Module exports follow Node.js best practices.

The module correctly supports both direct execution via node benchmark/performance.js and programmatic usage via require().
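
The dual-use behaviour rests on the standard Node idiom below; a sketch, as the file's exact wiring is assumed:

```js
// Runs the suite and returns the results array (body elided in this sketch).
async function runBenchmarks() {
  return [];
}

module.exports = { runBenchmarks };

if (require.main === module) {
  // Reached via `node benchmark/performance.js`, but not via require().
  runBenchmarks().then(results => console.log(JSON.stringify(results)));
}
```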

.github/workflows/ci-performance.yml (2)

205-269: LGTM: PR comment logic properly updates existing comments.

The implementation correctly:

  • Finds existing performance comments by bot user and content
  • Updates existing comments instead of creating duplicates — see the sketch below
  • Includes detailed results in a collapsible section
  • Provides iteration count and environment info
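
A sketch of that find-or-update pattern as it would appear in an actions/github-script step; the marker string and the reportMarkdown variable are hypothetical stand-ins for the workflow's actual identifiers:

```js
// `github` and `context` are injected by actions/github-script.
const marker = '<!-- performance-impact-report -->'; // hypothetical marker
const body = marker + '\n' + reportMarkdown;         // reportMarkdown assumed in scope

const { data: comments } = await github.rest.issues.listComments({
  owner: context.repo.owner,
  repo: context.repo.repo,
  issue_number: context.issue.number,
});
// Find a prior bot comment containing the marker, if any.
const existing = comments.find(c => c.user.type === 'Bot' && c.body.includes(marker));
if (existing) {
  await github.rest.issues.updateComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    comment_id: existing.id,
    body,
  });
} else {
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body,
  });
}
```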

271-278: LGTM: Job summary provides quick visibility.

Adding the comparison to the GitHub job summary gives maintainers immediate visibility into performance impact without opening the PR.

@coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/ci-performance.yml (1)

10-14: Path filter still references the wrong workflow filename — issue not resolved.

Line 14 still references performance.yml, but the actual workflow file is ci-performance.yml. This prevents the workflow from triggering when this file itself is modified.

Apply this diff:

     paths-ignore:
       - '**.md'
       - 'docs/**'
       - '.github/**'
-      - '!.github/workflows/performance.yml'
+      - '!.github/workflows/ci-performance.yml'
🧹 Nitpick comments (3)
.github/workflows/ci-performance.yml (3)

49-57: JSON extraction improved but could be simplified further.

The switch from grep to sed -n '/^\[/,/^\]/p' is a solid improvement over the previous fragile approach. However, the simplest and most reliable method is to redirect benchmark stdout directly to the JSON file during command execution, avoiding post-processing entirely.

Consider this alternative approach:

       - name: Run baseline benchmarks
         id: baseline
         run: |
-          npm run benchmark > baseline-output.txt 2>&1 || true
-          # Extract JSON from output (everything between first [ and last ])
-          sed -n '/^\[/,/^\]/p' baseline-output.txt > baseline.json || echo '[]' > baseline.json
+          npm run benchmark > baseline.json 2> baseline-output.txt || true
           echo "Baseline benchmark results:"
           cat baseline.json
         continue-on-error: true

This separates JSON output (stdout) from human-readable logs (stderr) at capture time, eliminating the need for post-processing.


87-95: PR benchmark JSON extraction — apply the same simplification suggested for baseline.

This step mirrors the baseline extraction pattern. For consistency and reliability, apply the same stdout/stderr separation:

       - name: Run PR benchmarks
         id: pr-bench
         run: |
-          npm run benchmark > pr-output.txt 2>&1 || true
-          # Extract JSON from output (everything between first [ and last ])
-          sed -n '/^\[/,/^\]/p' pr-output.txt > pr.json || echo '[]' > pr.json
+          npm run benchmark > pr.json 2> pr-output.txt || true
           echo "PR benchmark results:"
           cat pr.json
         continue-on-error: true

120-135: Add benchmark storage step for baseline results to match PR symmetry.

Currently, only PR benchmark results are stored via benchmark-action/github-action-benchmark@v1 (lines 120–135). The baseline results are only uploaded as an artifact but not stored via the benchmark action. For consistency and to enable the action's built-in comparison/alerting features on baseline data, add a corresponding step:

Insert this step before line 120 to store baseline results:

      - name: Store benchmark result (baseline)
        uses: benchmark-action/github-action-benchmark@v1
        if: github.event_name == 'pull_request'
        with:
          name: Parse Server Performance (baseline)
          tool: 'customSmallerIsBetter'
          output-file-path: baseline.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: false
          save-data-file: false
          comment-on-alert: false

This ensures both baseline and PR runs are tracked consistently.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2595892 and 66422cf.

📒 Files selected for processing (1)
  • .github/workflows/ci-performance.yml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: When reviewing Parse Server PRs that add new features, always check whether the feature is documented in the README.md file, though for new Parse Server options this is optional rather than required.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: PostgreSQL 15, PostGIS 3.4
  • GitHub Check: PostgreSQL 16, PostGIS 3.5
  • GitHub Check: PostgreSQL 15, PostGIS 3.3
  • GitHub Check: Node 22
  • GitHub Check: PostgreSQL 18, PostGIS 3.6
  • GitHub Check: PostgreSQL 15, PostGIS 3.5
  • GitHub Check: PostgreSQL 17, PostGIS 3.5
  • GitHub Check: MongoDB 8, ReplicaSet
  • GitHub Check: Node 18
  • GitHub Check: Node 20
  • GitHub Check: Redis Cache
  • GitHub Check: MongoDB 6, ReplicaSet
  • GitHub Check: MongoDB 7, ReplicaSet
  • GitHub Check: Docker Build

@codecov bot commented Nov 8, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.07%. Comparing base (3c9af48) to head (2a0d1e8).
⚠️ Report is 13 commits behind head on alpha.

Additional details and impacted files
@@           Coverage Diff           @@
##            alpha    #9916   +/-   ##
=======================================
  Coverage   93.07%   93.07%           
=======================================
  Files         187      187           
  Lines       15224    15224           
  Branches      177      177           
=======================================
  Hits        14170    14170           
  Misses       1042     1042           
  Partials       12       12           

☔ View full report in Codecov by Sentry.

@parseplatformorg (Contributor) commented

🎉 This change has been released in version 8.5.0-alpha.8

@parseplatformorg added the state:released-alpha (Released as alpha version) label Nov 17, 2025