ci: Add performance impact step to CI #9916
Conversation
🚀 Thanks for opening this pull request!
📝 Walkthrough
Adds a CI performance benchmarking pipeline: a GitHub Actions workflow that runs baseline and PR benchmarks, collects JSON results, compares them, produces a Markdown report, posts a PR comment, and includes a new Node.js benchmark suite plus npm scripts for local runs.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    autonumber
    participant PR as Pull Request
    participant GHA as GitHub Actions
    participant Base as Base Checkout
    participant Branch as PR Checkout
    participant Bench as Benchmark Runner
    participant Compare as Comparator
    participant Art as Artifacts
    participant Comment as PR Comment

    GHA->>Base: Checkout base branch
    Base->>Bench: Install, build, run benchmarks (baseline)
    Bench-->>GHA: Upload `baseline.json` artifact
    GHA->>Branch: Checkout PR branch
    Branch->>Bench: Install, build, run benchmarks (PR)
    Bench-->>GHA: Upload `pr.json` artifact
    GHA->>Compare: Fetch artifacts, normalize JSON, compare results
    Compare-->>GHA: Produce `comparison.md` (status, regressions/improvements)
    GHA->>Art: Upload artifacts (baseline/pr/comparison)
    GHA->>Comment: Post Performance Impact Report + JSON payload
    GHA->>PR: Append comparison to job summary
```
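As a rough illustration of the comparison step shown above, the sketch below assumes both result files follow the github-action-benchmark `customSmallerIsBetter` shape (`[{ name, unit, value }]`); the field names, thresholds, and output format of the actual workflow script may differ.

```js
// Illustrative comparator sketch (not the workflow's actual script): read both
// result files, compute per-benchmark percentage change, and emit a Markdown table.
const fs = require('fs');

function buildComparison(baselinePath, prPath) {
  const baseline = JSON.parse(fs.readFileSync(baselinePath, 'utf8'));
  const pr = JSON.parse(fs.readFileSync(prPath, 'utf8'));
  const baseByName = new Map(baseline.map(entry => [entry.name, entry]));

  const rows = [
    '| Benchmark | Baseline (ms) | PR (ms) | Change | Status |',
    '| --- | --- | --- | --- | --- |',
  ];
  for (const result of pr) {
    const base = baseByName.get(result.name);
    if (!base) { continue; } // new benchmark with nothing to compare against
    const change = ((result.value - base.value) / base.value) * 100;
    let status = '✅';
    if (change > 20) { status = '❌ Much Slower'; }
    else if (change > 10) { status = '⚠️ Slower'; }
    else if (change < -10) { status = '🚀 Faster'; }
    rows.push(`| ${result.name} | ${base.value.toFixed(2)} | ${result.value.toFixed(2)} | ${change.toFixed(1)}% | ${status} |`);
  }
  return rows.join('\n');
}

fs.writeFileSync('comparison.md', buildComparison('baseline.json', 'pr.json'));
```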
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✅ Snyk checks have passed. No issues have been found so far.
Actionable comments posted: 4
🧹 Nitpick comments (5)
benchmark/performance.js (4)
46-53: Consider adding error handling for server initialization.
The server.listen callback doesn't handle potential errors (e.g., port already in use). While this may be acceptable for a benchmark script, consider adding error handling for better diagnostics.
```diff
 return new Promise((resolve, reject) => {
-  const server = app.listen(1337, () => {
+  const server = app.listen(1337, (err) => {
+    if (err) {
+      reject(err);
+      return;
+    }
     Parse.initialize(APP_ID);
     Parse.masterKey = MASTER_KEY;
     Parse.serverURL = SERVER_URL;
     resolve(server);
   });
 });
```
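For context, Node's server.listen callback does not receive an error argument; startup failures such as EADDRINUSE are instead emitted as an 'error' event on the returned http.Server. A minimal alternative sketch, assuming the same app, Parse, and config constants as the benchmark script:

```js
// Alternative sketch: reject on the server 'error' event rather than in the
// listen callback, which is how listen failures (e.g. EADDRINUSE) surface in Node.
function startServer() {
  return new Promise((resolve, reject) => {
    const server = app.listen(1337, () => {
      Parse.initialize(APP_ID);
      Parse.masterKey = MASTER_KEY;
      Parse.serverURL = SERVER_URL;
      resolve(server);
    });
    server.once('error', reject); // e.g. port 1337 already in use
  });
}
```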
59-71: Add error handling for MongoDB connection.
The MongoClient.connect call can throw an error if the connection fails, but there's no try-catch wrapper. This could cause the benchmark to fail ungracefully.
```diff
 async function cleanupDatabase() {
-  if (!mongoClient) {
-    mongoClient = await MongoClient.connect(MONGODB_URI);
+  try {
+    if (!mongoClient) {
+      mongoClient = await MongoClient.connect(MONGODB_URI);
+    }
+    const db = mongoClient.db();
+    const collections = await db.listCollections().toArray();
+
+    for (const collection of collections) {
+      if (!collection.name.startsWith('system.')) {
+        await db.collection(collection.name).deleteMany({});
+      }
+    }
+  } catch (error) {
+    console.error('Error cleaning database:', error);
+    throw error;
   }
-  const db = mongoClient.db();
-  const collections = await db.listCollections().toArray();
-
-  for (const collection of collections) {
-    if (!collection.name.startsWith('system.')) {
-      await db.collection(collection.name).deleteMany({});
-    }
-  }
 }
```
222-233: Consider adding counter to username for collision safety.
While unlikely, using only Date.now() in the username could theoretically cause collisions if iterations happen within the same millisecond. Including the counter would guarantee uniqueness.
```diff
 async function benchmarkUserSignup() {
   let counter = 0;
   return measureOperation('User Signup', async () => {
     const user = new Parse.User();
-    user.set('username', `benchmark_user_${Date.now()}_${counter}`);
+    user.set('username', `benchmark_user_${Date.now()}_${counter++}`);
     user.set('password', 'benchmark_password');
-    user.set('email', `benchmark${counter}@example.com`);
-    counter++;
+    user.set('email', `benchmark${counter - 1}@example.com`);
     await user.signUp();
   }, Math.floor(ITERATIONS / 10)); // Fewer iterations for user operations
 }
```
277-277: Consider making server startup wait time configurable.
The hard-coded 2-second wait might not be sufficient in slower CI environments or under load. Consider making this configurable via environment variable or adding a health check instead.
```diff
+  const STARTUP_WAIT = parseInt(process.env.BENCHMARK_STARTUP_WAIT || '2000', 10);
   // Wait for server to be ready
-  await new Promise(resolve => setTimeout(resolve, 2000));
+  await new Promise(resolve => setTimeout(resolve, STARTUP_WAIT));
```
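If a health check is preferred over a fixed delay, one option is to poll the server until it answers. A rough sketch, assuming Parse Server's /health endpoint, Node 18+ global fetch, and a local default URL; adjust the path and timeouts for the actual setup:

```js
// Rough readiness poll instead of a fixed sleep; URL, path, and timing are assumptions.
async function waitForServer(url = 'http://localhost:1337/parse/health', timeoutMs = 30000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const response = await fetch(url);
      if (response.ok) { return; } // server is accepting requests
    } catch (_) {
      // not listening yet; retry after a short pause
    }
    await new Promise(resolve => setTimeout(resolve, 250));
  }
  throw new Error(`Server did not become ready within ${timeoutMs} ms`);
}
```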
.github/workflows/ci-performance.yml (1)
173-183: Clarify overlapping threshold conditions with else-if.
The current ordering checks change > 10 before change > 20, so in an else-if chain the "❌ Much Slower" branch can never be reached. Checking the stricter threshold first (or restructuring the chain) makes the intent explicit and the branch reachable.
```diff
 let status = '✅';
 if (change > 10) {
   status = '⚠️ Slower';
   hasRegression = true;
-} else if (change > 20) {
+}
+if (change > 20) {
   status = '❌ Much Slower';
   hasRegression = true;
 } else if (change < -10) {
```
Better approach:
```diff
 let status = '✅';
-if (change > 10) {
-  status = '⚠️ Slower';
-  hasRegression = true;
-} else if (change > 20) {
+if (change > 20) {
   status = '❌ Much Slower';
   hasRegression = true;
+} else if (change > 10) {
+  status = '⚠️ Slower';
+  hasRegression = true;
 } else if (change < -10) {
   status = '🚀 Faster';
   hasImprovement = true;
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/ci-performance.yml (1 hunks)
- benchmark/performance.js (1 hunks)
- package.json (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: When reviewing Parse Server PRs that add new features, always check whether the feature is documented in the README.md file, though for new Parse Server options this is optional rather than required.
📚 Learning: 2025-11-08T13:46:04.917Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: For new Parse Server options, verify that the option is documented in src/Options/index.js and that npm run definitions has been executed to reflect changes in src/Options/docs.js and src/Options/Definitions.js. README.md documentation is a bonus but not required for new options.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-04-30T19:31:35.344Z
Learnt from: RahulLanjewar93
Repo: parse-community/parse-server PR: 9744
File: spec/ParseLiveQuery.spec.js:0-0
Timestamp: 2025-04-30T19:31:35.344Z
Learning: In the Parse Server codebase, the functions in QueryTools.js are typically tested through end-to-end behavior tests rather than direct unit tests, even though the functions are exported from the module.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`. The preferred pattern is to create a Promise that resolves when an expected event occurs, then await that Promise.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: Tests in the parse-server repository should use promise-based approaches rather than callback patterns with `done()`. Use a pattern where a Promise is created that resolves when the event occurs, then await that promise.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-04T20:41:05.147Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1312-1338
Timestamp: 2025-05-04T20:41:05.147Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`.
Applied to files:
benchmark/performance.js
🧬 Code graph analysis (1)
benchmark/performance.js (1)
src/Adapters/Storage/Mongo/MongoStorageAdapter.js (1)
MongoClient(25-25)
🪛 GitHub Check: Lint
benchmark/performance.js
[warning] 302-302:
Unexpected console statement
[warning] 298-298:
Unexpected console statement
[warning] 294-294:
Unexpected console statement
[warning] 290-290:
Unexpected console statement
[warning] 286-286:
Unexpected console statement
[warning] 282-282:
Unexpected console statement
[warning] 273-273:
Unexpected console statement
[warning] 267-267:
Unexpected console statement
[warning] 266-266:
Unexpected console statement
[warning] 265-265:
Unexpected console statement
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: Redis Cache
- GitHub Check: Node 18
- GitHub Check: PostgreSQL 16, PostGIS 3.5
- GitHub Check: PostgreSQL 15, PostGIS 3.3
- GitHub Check: PostgreSQL 18, PostGIS 3.6
- GitHub Check: Node 22
- GitHub Check: PostgreSQL 17, PostGIS 3.5
- GitHub Check: PostgreSQL 15, PostGIS 3.5
- GitHub Check: PostgreSQL 15, PostGIS 3.4
- GitHub Check: Node 20
- GitHub Check: MongoDB 8, ReplicaSet
- GitHub Check: MongoDB 7, ReplicaSet
- GitHub Check: MongoDB 6, ReplicaSet
- GitHub Check: Docker Build
🔇 Additional comments (9)
package.json (2)
141-141: LGTM: Trailing comma added for maintainability.
The comma added after the previous entry makes it easier to append subsequent script entries.
142-144: LGTM: Benchmark scripts properly configured.
The benchmark scripts follow the same pattern as existing test scripts and correctly integrate with mongodb-runner. The MongoDB version (8.0.4) is consistent with the CI workflow configuration.
benchmark/performance.js (5)
11-24: LGTM: Configuration is well-structured.
The configuration uses environment variables appropriately and provides sensible defaults for local development and CI execution.
76-103: LGTM: Performance measurement implementation is solid.
The measurement function correctly captures timing data and calculates relevant statistics (mean, percentiles, min/max). The output format is compatible with github-action-benchmark.
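For readers without the file open, a condensed sketch of a timing helper with this output shape; it is illustrative only, and the actual measureOperation in benchmark/performance.js may differ in detail:

```js
// Illustrative timing helper emitting the github-action-benchmark
// "customSmallerIsBetter" shape: { name, unit, value, extra }.
async function measureOperation(name, operation, iterations = 100) {
  const durations = [];
  for (let i = 0; i < iterations; i++) {
    const start = process.hrtime.bigint();
    await operation();
    durations.push(Number(process.hrtime.bigint() - start) / 1e6); // nanoseconds -> ms
  }
  durations.sort((a, b) => a - b);
  const mean = durations.reduce((sum, d) => sum + d, 0) / durations.length;
  const p95 = durations[Math.floor(durations.length * 0.95)];
  return {
    name,            // label shown in the report
    unit: 'ms',
    value: mean,     // smaller is better
    extra: `p95: ${p95.toFixed(2)} ms, min: ${durations[0].toFixed(2)} ms, max: ${durations[durations.length - 1].toFixed(2)} ms`,
  };
}
```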
108-259: LGTM: Benchmark suite covers key Parse Server operations.
The benchmark implementations are well-designed (see the example after this list):
- Appropriate setup and cleanup for each test
- Reduced iterations for expensive operations (user ops, batch saves)
- Good coverage of core functionality (CRUD, queries, auth)
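An illustrative example of one such benchmark, a plain object save through the JS SDK; the real functions may be named and structured differently:

```js
// Hypothetical benchmark in the style described above; class name and options are illustrative.
async function benchmarkObjectSave() {
  return measureOperation('Object Save', async () => {
    const obj = new Parse.Object('BenchmarkItem');
    obj.set('value', Math.random());
    await obj.save(null, { useMasterKey: true });
  }, ITERATIONS);
}
```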
264-335: LGTM: Orchestration logic is well-structured.
The runBenchmarks function properly (see the skeleton after this list):
- Separates status output (stderr) from results (stdout)
- Cleans up database between benchmarks for isolation
- Handles errors gracefully with appropriate exit codes
- Cleans up resources in finally block
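A condensed, illustrative skeleton of that orchestration pattern; the helper names mirror those mentioned in this review, but the actual script may differ:

```js
// Illustrative skeleton: progress on stderr, JSON results on stdout,
// cleanup in finally, and a require.main guard so the file is also importable.
async function runBenchmarks() {
  const results = [];
  let server;
  try {
    server = await startServer();
    for (const benchmark of [benchmarkObjectSave /* , ...other benchmarks */]) {
      await cleanupDatabase();                        // isolate each benchmark
      console.error(`Running ${benchmark.name}...`);  // status output on stderr
      results.push(await benchmark());
    }
    console.log(JSON.stringify(results, null, 2));    // machine-readable output on stdout
    return results;
  } catch (error) {
    console.error('Benchmark run failed:', error);
    process.exitCode = 1;
  } finally {
    if (server) { server.close(); }
    if (mongoClient) { await mongoClient.close(); }
  }
}

if (require.main === module) {
  runBenchmarks();
}

module.exports = { runBenchmarks };
```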
338-342: LGTM: Module exports follow Node.js best practices.
The module correctly supports both direct execution via node benchmark/performance.js and programmatic usage via require().
.github/workflows/ci-performance.yml (2)
205-269: LGTM: PR comment logic properly updates existing comments.
The implementation correctly (see the sketch after this list):
- Finds existing performance comments by bot user and content
- Updates existing comments instead of creating duplicates
- Includes detailed results in a collapsible section
- Provides iteration count and environment info
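For reference, the find-and-update pattern typically looks like the following when implemented with actions/github-script; the marker heading and the comparison variable are placeholders, and the actual step in ci-performance.yml may differ:

```js
// Sketch of an upsert-style PR comment via actions/github-script; `github` and
// `context` are provided by the action, `comparison` is a placeholder for the report.
const marker = '## 📊 Performance Impact Report'; // placeholder heading used to find our comment
const body = `${marker}\n\n${comparison}`;

const { data: comments } = await github.rest.issues.listComments({
  owner: context.repo.owner,
  repo: context.repo.repo,
  issue_number: context.issue.number,
  per_page: 100,
});

const existing = comments.find(
  comment => comment.user.type === 'Bot' && comment.body.startsWith(marker)
);

if (existing) {
  await github.rest.issues.updateComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    comment_id: existing.id,
    body,
  });
} else {
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body,
  });
}
```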
271-278: LGTM: Job summary provides quick visibility.
Adding the comparison to the GitHub job summary gives maintainers immediate visibility into performance impact without opening the PR.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/ci-performance.yml (1)
10-14: Path filter still references the wrong workflow filename — issue not resolved.
Line 14 still references performance.yml, but the actual workflow file is ci-performance.yml. This prevents the workflow from triggering when this file itself is modified.
Apply this diff:
```diff
 paths-ignore:
   - '**.md'
   - 'docs/**'
   - '.github/**'
-  - '!.github/workflows/performance.yml'
+  - '!.github/workflows/ci-performance.yml'
```
🧹 Nitpick comments (3)
.github/workflows/ci-performance.yml (3)
49-57: JSON extraction improved but could be simplified further.
The switch from grep to sed -n '/^\[/,/^\]/p' is a solid improvement over the previous fragile approach. However, the simplest and most reliable method is to redirect benchmark stdout directly to the JSON file during command execution, avoiding post-processing entirely.
Consider this alternative approach:
```diff
 - name: Run baseline benchmarks
   id: baseline
   run: |
-    npm run benchmark > baseline-output.txt 2>&1 || true
-    # Extract JSON from output (everything between first [ and last ])
-    sed -n '/^\[/,/^\]/p' baseline-output.txt > baseline.json || echo '[]' > baseline.json
+    npm run benchmark > baseline.json 2> baseline-output.txt || true
     echo "Baseline benchmark results:"
     cat baseline.json
   continue-on-error: true
```
This separates JSON output (stdout) from human-readable logs (stderr) at capture time, eliminating the need for post-processing.
87-95: PR benchmark JSON extraction — apply the same simplification suggested for baseline.
This step mirrors the baseline extraction pattern. For consistency and reliability, apply the same stdout/stderr separation:
```diff
 - name: Run PR benchmarks
   id: pr-bench
   run: |
-    npm run benchmark > pr-output.txt 2>&1 || true
-    # Extract JSON from output (everything between first [ and last ])
-    sed -n '/^\[/,/^\]/p' pr-output.txt > pr.json || echo '[]' > pr.json
+    npm run benchmark > pr.json 2> pr-output.txt || true
     echo "PR benchmark results:"
     cat pr.json
   continue-on-error: true
```
120-135: Add benchmark storage step for baseline results to match PR symmetry.
Currently, only PR benchmark results are stored via benchmark-action/github-action-benchmark@v1 (lines 120–135). The baseline results are only uploaded as an artifact but not stored via the benchmark action. For consistency and to enable the action's built-in comparison/alerting features on baseline data, add a corresponding step.
Insert this step before line 120 to store baseline results:
```yaml
- name: Store benchmark result (baseline)
  uses: benchmark-action/github-action-benchmark@v1
  if: github.event_name == 'pull_request'
  with:
    name: Parse Server Performance (baseline)
    tool: 'customSmallerIsBetter'
    output-file-path: baseline.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: false
    save-data-file: false
    comment-on-alert: false
```
This ensures both baseline and PR runs are tracked consistently.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- .github/workflows/ci-performance.yml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.917Z
Learning: When reviewing Parse Server PRs that add new features, always check whether the feature is documented in the README.md file, though for new Parse Server options this is optional rather than required.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: PostgreSQL 15, PostGIS 3.4
- GitHub Check: PostgreSQL 16, PostGIS 3.5
- GitHub Check: PostgreSQL 15, PostGIS 3.3
- GitHub Check: Node 22
- GitHub Check: PostgreSQL 18, PostGIS 3.6
- GitHub Check: PostgreSQL 15, PostGIS 3.5
- GitHub Check: PostgreSQL 17, PostGIS 3.5
- GitHub Check: MongoDB 8, ReplicaSet
- GitHub Check: Node 18
- GitHub Check: Node 20
- GitHub Check: Redis Cache
- GitHub Check: MongoDB 6, ReplicaSet
- GitHub Check: MongoDB 7, ReplicaSet
- GitHub Check: Docker Build
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files
```
@@            Coverage Diff            @@
##             alpha    #9916    +/-   ##
=========================================
  Coverage    93.07%   93.07%
=========================================
  Files          187      187
  Lines        15224    15224
  Branches       177      177
=========================================
  Hits         14170    14170
  Misses        1042     1042
  Partials        12       12
```
☔ View full report in Codecov by Sentry.
🎉 This change has been released in version 8.5.0-alpha.8
Pull Request
Issue
Closes: #7610
Approach
Tasks
Summary by CodeRabbit