feat(async): add async team mode for external event-driven multi-turn conversation #218
Conversation
… conversation

This commit implements the async team mode feature, which enables external event-driven multi-turn agent conversations. Key changes include:

Backend:
- Add WAITING status to SubtaskStatus enum
- Add new fields to Subtask model: waiting_for, waiting_since, waiting_timeout, resume_count, max_resume_count
- Create database migration for the new subtask fields
- Implement GitHub/GitLab webhook endpoints for CI/CD event reception
- Implement generic callback endpoint for custom integrations
- Add AsyncResumeService for managing session recovery
- Add configuration options: ASYNC_MODE_ENABLED, DEFAULT_MAX_RESUME_COUNT, DEFAULT_WAITING_TIMEOUT, GITHUB_WEBHOOK_SECRET, GITLAB_WEBHOOK_TOKEN

Executor:
- Add output_parser.py for detecting waiting signals (git push, PR/MR creation)
- Integrate output parsing into response_processor for automatic WAITING detection
- Add WAITING status to shared TaskStatus enum

Frontend:
- Add 'async' to TeamMode type
- Add WAITING to TaskStatus type
- Update team mode agent filters to support async mode (ClaudeCode only)
- Update task context to handle WAITING status in refresh logic

The async mode allows agents to:
1. Execute tasks and detect CI-triggering actions
2. Enter WAITING state when external events are expected
3. Resume execution when webhooks arrive with CI results
4. Support multiple resume cycles with configurable limits
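The resume-cycle limit described in step 4 can be sketched as a small state transition. This is an illustrative model only; the field names mirror the Subtask columns listed above (`resume_count`, `max_resume_count`), but the actual logic lives in `AsyncResumeService` and differs in detail.

```python
from dataclasses import dataclass
from enum import Enum


class SubtaskStatus(str, Enum):
    RUNNING = "RUNNING"
    WAITING = "WAITING"
    FAILED = "FAILED"


@dataclass
class Subtask:
    # Simplified stand-in for the real ORM model
    status: SubtaskStatus = SubtaskStatus.RUNNING
    resume_count: int = 0
    max_resume_count: int = 5


def resume(subtask: Subtask) -> Subtask:
    """Resume a WAITING subtask unless the configured limit is exhausted."""
    if subtask.status is not SubtaskStatus.WAITING:
        return subtask  # only WAITING subtasks can be resumed
    if subtask.resume_count >= subtask.max_resume_count:
        subtask.status = SubtaskStatus.FAILED  # give up after too many cycles
        return subtask
    subtask.resume_count += 1
    subtask.status = SubtaskStatus.RUNNING
    return subtask
```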
Walkthrough

This PR introduces async-mode support for subtasks, enabling external event-driven resumption via webhooks. It adds database schema changes, webhook handlers (GitHub, GitLab, generic callback), an async resume service, waiting-signal detection in agent output, and a new team mode option.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Agent as Agent<br/>(Executor)
participant Parser as Output<br/>Parser
participant Processor as Response<br/>Processor
participant Service as Async<br/>Resume Service
participant DB as Database
participant Notify as Notification<br/>Service
Agent->>Parser: output with waiting signal
Parser->>Parser: detect_waiting_signal()<br/>pattern match + confidence
alt Signal Detected & Above Threshold
Parser-->>Processor: WaitingSignal
Processor->>Service: set_waiting_state(subtask_id)
Service->>DB: UPDATE subtask<br/>status=WAITING<br/>waiting_for, waiting_since
DB-->>Service: ✓
Service->>Notify: _send_notification<br/>event="subtask_waiting"
Notify-->>Service: ✓
Processor-->>Agent: return RUNNING<br/>(keep processing)
else No Signal
Processor-->>Agent: return COMPLETED
end
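The `detect_waiting_signal()` step in the diagram above can be sketched as simple pattern matching over agent output. The patterns and event-type names here are illustrative assumptions; the real `output_parser.py` uses different patterns and adds confidence scoring.

```python
import re
from typing import Optional

# Illustrative patterns only; the actual output_parser.py patterns differ.
WAITING_PATTERNS = {
    "ci_pipeline": re.compile(r"git push|pipeline (queued|started)", re.IGNORECASE),
    "pull_request": re.compile(r"(pull|merge) request .*created", re.IGNORECASE),
}


def detect_waiting_signal(output: str) -> Optional[str]:
    """Return the waiting event type if agent output suggests an external wait."""
    for event_type, pattern in WAITING_PATTERNS.items():
        if pattern.search(output):
            return event_type
    return None  # no signal: the subtask can complete normally
```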
sequenceDiagram
participant Webhook as External<br/>Webhook<br/>(GitHub/GitLab)
participant Handler as Webhook<br/>Handler
participant Service as Async<br/>Resume Service
participant DB as Database
participant Notify as Notification<br/>Service
Webhook->>Handler: POST event<br/>(repo, branch, status)
Handler->>Handler: verify_signature()<br/>or verify_token()
alt Verification Failed
Handler-->>Webhook: 401 Unauthorized
else Verification Success
Handler->>Handler: extract_event_info()
Handler->>Service: resume_from_webhook<br/>(repo, branch, payload)
Service->>DB: SELECT * FROM subtasks<br/>WHERE status=WAITING<br/>AND waiting_for=ci_pipeline
DB-->>Service: [subtasks]
loop For Each Matching Subtask
Service->>Service: match task by repo/branch<br/>(case-insensitive)
alt Match Found
Service->>Service: _resume_subtask()
Service->>DB: UPDATE subtask<br/>status=RUNNING<br/>resume_count++<br/>clear waiting_*
DB-->>Service: ✓
Service->>Notify: _send_notification<br/>event="subtask_resumed"
Notify-->>Service: ✓
end
end
Handler-->>Webhook: 200 OK<br/>resumed_count
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Use absolute import path 'executor.agents.claude_code.output_parser' instead of relative 'agents.claude_code.output_parser' to avoid circular import through agents/__init__.py.
Actionable comments posted: 7
🧹 Nitpick comments (14)
executor/agents/claude_code/response_processor.py (1)
496-502: Use `TaskStatus.WAITING.value` for consistency.

The status string should use the enum value for type safety and consistency with other status reports in this file (e.g., `TaskStatus.COMPLETED.value` on line 530, `TaskStatus.FAILED.value` on line 549).

```diff
 state_manager.report_progress(
     progress=50,  # Not fully complete yet
-    status="WAITING",
+    status=TaskStatus.WAITING.value,
     message=f"Waiting for {waiting_for}",
     extra_result=result_dict
 )
```

executor/agents/claude_code/output_parser.py (3)
43-71: Consider extracting magic numbers for confidence thresholds.

The base confidence values (0.7-0.95) are inline. For maintainability, consider defining named constants like `HIGH_CONFIDENCE = 0.9` for commonly used thresholds. However, this is optional given the current scope.
166-181: Consider documenting the asymmetric confidence adjustment.

Positive indicators add 0.05 while negative indicators subtract 0.1. This conservative approach is sensible but worth a brief comment explaining the rationale (e.g., "penalize negatives more heavily to reduce false positives").
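The two nitpicks above can be addressed together: name the thresholds as constants and document the asymmetry in one place. A minimal sketch, assuming the adjustment works additively as described; the constant names (`HIGH_CONFIDENCE`, `POSITIVE_BOOST`, `NEGATIVE_PENALTY`) are hypothetical and not taken from the actual `output_parser.py`.

```python
# Named constants in place of inline magic numbers.
HIGH_CONFIDENCE = 0.9
POSITIVE_BOOST = 0.05    # small reward per supporting indicator
# Penalize negatives more heavily than positives reward,
# to reduce false-positive WAITING transitions.
NEGATIVE_PENALTY = 0.10


def adjust_confidence(base: float, positives: int, negatives: int) -> float:
    """Apply the asymmetric adjustment and clamp into [0.0, 1.0]."""
    score = base + positives * POSITIVE_BOOST - negatives * NEGATIVE_PENALTY
    return max(0.0, min(1.0, score))
```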
1-6: Remove unnecessary shebang for library module.

This file is a library module, not an executable script. The shebang on line 1 is unnecessary and flagged by static analysis (EXE001).

```diff
-#!/usr/bin/env python
 # SPDX-FileCopyrightText: 2025 Weibo, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
```

frontend/src/features/settings/components/team-modes/index.ts (1)
13-13: Duplicate `TeamMode` definition - consolidate with `types.ts`.

This duplicates the `TeamMode` type already exported from `types.ts` (line 7). Since line 11 exports everything from `types.ts`, consumers could import `TeamMode` from either location, risking inconsistency.

Remove this duplicate and rely on the re-export from `types.ts`:

```diff
-export type TeamMode = 'solo' | 'pipeline' | 'route' | 'coordinate' | 'collaborate' | 'async';
```

backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py (1)
37-66: Consider using Alembic's `op.add_column` for better cross-database support.

Raw SQL with `ADD COLUMN IF NOT EXISTS` is MySQL 8.0+ specific. Using Alembic operations provides:

- Cross-database compatibility
- Better migration tracking
- Type safety

```diff
-    op.execute("""
-        ALTER TABLE subtasks
-        ADD COLUMN IF NOT EXISTS waiting_for VARCHAR(50) NULL COMMENT 'Event type being waited for (e.g., ci_pipeline, approval)'
-    """)
+    op.add_column('subtasks', sa.Column('waiting_for', sa.String(50), nullable=True,
+        comment='Event type being waited for (e.g., ci_pipeline, approval)'))
```

However, `op.add_column` will fail if the column already exists. If idempotency is required for re-runnable migrations, document this clearly.

backend/app/api/endpoints/webhooks/__init__.py (1)
5-9: Sort `__all__` for consistency.

Static analysis indicates `__all__` is not sorted. Per coding guidelines, apply isort-style sorting.

```diff
-from app.api.endpoints.webhooks.github import router as github_router
-from app.api.endpoints.webhooks.gitlab import router as gitlab_router
 from app.api.endpoints.webhooks.callback import router as callback_router
+from app.api.endpoints.webhooks.github import router as github_router
+from app.api.endpoints.webhooks.gitlab import router as gitlab_router

-__all__ = ["github_router", "gitlab_router", "callback_router"]
+__all__ = ["callback_router", "github_router", "gitlab_router"]
```

backend/app/api/endpoints/webhooks/github.py (2)
10-15: Remove unused import `datetime`.

The `datetime` module is imported but not used in this file.

```diff
 import hashlib
 import hmac
 import json
 import logging
-from datetime import datetime
 from typing import Any, Dict, Optional
```
156-161: Improve exception handling per best practices.

Use `logging.exception` to include the traceback and chain exceptions with `raise ... from`.

```diff
 # Parse payload
 try:
     payload = json.loads(body)
 except json.JSONDecodeError:
-    logger.error("Failed to parse GitHub webhook payload")
-    raise HTTPException(status_code=400, detail="Invalid JSON payload")
+    logger.exception("Failed to parse GitHub webhook payload")
+    raise HTTPException(status_code=400, detail="Invalid JSON payload") from None
```

backend/app/api/endpoints/webhooks/callback.py (2)
39-47: Inconsistent variable naming and deterministic token concern.

The variable `SECRET_key` uses mixed casing; use `secret_key` for PEP 8 compliance. Additionally, since tokens are deterministic and never expire, once a subtask_id is known, the token remains valid indefinitely. Consider whether this is acceptable for your security model.

```diff
 def generate_callback_token(subtask_id: int) -> str:
     """
     Generate a secure callback token for a subtask.
     This should be stored with the subtask when entering WAITING state.
     """
     # Use HMAC with SECRET key and subtask_id to generate deterministic token
-    SECRET_key = settings.SECRET_KEY.encode("utf-8")
+    secret_key = settings.SECRET_KEY.encode("utf-8")
     message = f"callback:{subtask_id}".encode("utf-8")
-    return hmac.new(SECRET_key, message, hashlib.sha256).hexdigest()[:32]
+    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()[:32]
```
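If indefinite token validity is not acceptable, one option is to sign an expiry timestamp into the token. This is a sketch under stated assumptions, not the PR's implementation: `SECRET_KEY` stands in for `settings.SECRET_KEY`, and the caller is assumed to transmit `expires_at` alongside the token (e.g., in the callback URL) so the verifier can recompute the HMAC.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"example-secret"  # placeholder for settings.SECRET_KEY


def generate_callback_token(subtask_id: int, expires_at: int) -> str:
    """Embed an expiry timestamp in the signed message so tokens can lapse."""
    message = f"callback:{subtask_id}:{expires_at}".encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:32]


def verify_callback_token(subtask_id: int, expires_at: int, token: str) -> bool:
    """Reject expired tokens, then compare HMACs in constant time."""
    if time.time() > expires_at:
        return False  # token expired
    expected = generate_callback_token(subtask_id, expires_at)
    return hmac.compare_digest(expected, token)
```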
84-91: Improve exception handling in JSON parsing.

Use `logging.exception` to capture the stack trace and chain the exception with `raise ... from` for better debugging.

```diff
 try:
     if body:
         payload = json.loads(body)
     else:
         payload = {}
 except json.JSONDecodeError:
-    logger.error("Failed to parse callback payload")
-    raise HTTPException(status_code=400, detail="Invalid JSON payload")
+    logger.exception("Failed to parse callback payload")
+    raise HTTPException(status_code=400, detail="Invalid JSON payload") from None
```

backend/app/api/endpoints/webhooks/gitlab.py (1)
139-143: Improve exception handling in JSON parsing.

Use `logging.exception` to capture the stack trace and chain the exception with `raise ... from` for better debugging. This is consistent with the pattern needed in callback.py.

```diff
 try:
     payload = json.loads(body)
 except json.JSONDecodeError:
-    logger.error("Failed to parse GitLab webhook payload")
-    raise HTTPException(status_code=400, detail="Invalid JSON payload")
+    logger.exception("Failed to parse GitLab webhook payload")
+    raise HTTPException(status_code=400, detail="Invalid JSON payload") from None
```

backend/app/services/async_resume_service.py (2)
262-266: Remove unnecessary f-string prefix.

Lines 265 and 363 use f-strings without any placeholders.

```diff
 # Send notification
 await self._send_notification(
     subtask=subtask,
     event="subtask.max_resume_reached",
-    description=f"Subtask failed: max resume count reached",
+    description="Subtask failed: max resume count reached",
 )
```

And on line 363:

```diff
 await self._send_notification(
     subtask=subtask,
     event="subtask.waiting_timeout",
-    description=f"Subtask failed: waiting timeout exceeded",
+    description="Subtask failed: waiting timeout exceeded",
 )
```
374-395: Improve exception handling in notification sending.

Catching broad `Exception` is acceptable here for resilience, but use `logging.exception` to include the full stack trace for debugging.

```diff
 async def _send_notification(
     self,
     subtask: Subtask,
     event: str,
     description: str,
 ) -> None:
     """Send webhook notification for subtask events."""
     try:
         notification = Notification(
             user_name="system",
             event=event,
             id=str(subtask.id),
             start_time=subtask.created_at.isoformat() if subtask.created_at else "",
             end_time=datetime.now().isoformat(),
             description=description,
             status=subtask.status.value,
             detail_url="",
         )
         await webhook_notification_service.send_notification(notification)
     except Exception as e:
-        logger.error(f"Failed to send notification: {e}")
+        logger.exception("Failed to send notification for subtask %s", subtask.id)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py (1 hunks)
- backend/app/api/api.py (2 hunks)
- backend/app/api/endpoints/webhooks/__init__.py (1 hunks)
- backend/app/api/endpoints/webhooks/callback.py (1 hunks)
- backend/app/api/endpoints/webhooks/github.py (1 hunks)
- backend/app/api/endpoints/webhooks/gitlab.py (1 hunks)
- backend/app/core/config.py (1 hunks)
- backend/app/models/subtask.py (2 hunks)
- backend/app/schemas/subtask.py (3 hunks)
- backend/app/services/async_resume_service.py (1 hunks)
- executor/agents/claude_code/output_parser.py (1 hunks)
- executor/agents/claude_code/response_processor.py (2 hunks)
- frontend/src/features/settings/components/team-modes/index.ts (2 hunks)
- frontend/src/features/settings/components/team-modes/types.ts (1 hunks)
- frontend/src/features/tasks/contexts/taskContext.tsx (1 hunks)
- frontend/src/types/api.ts (1 hunks)
- shared/status.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: TypeScript must be in strict mode with type hints required
Use functional components with hooks in React/TypeScript
Use Prettier formatter with single quotes and no semicolons in TypeScript/React
Use `const` over `let`, never use `var` in TypeScript/React
Use functional patterns over class-based patterns in React/TypeScript

Files:
- frontend/src/features/settings/components/team-modes/types.ts
- frontend/src/features/settings/components/team-modes/index.ts
- frontend/src/types/api.ts
- frontend/src/features/tasks/contexts/taskContext.tsx
frontend/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
frontend/**/*.{ts,tsx}: Run ESLint with Next.js configuration on TypeScript/React code: `npm run lint`
Run Prettier formatter on TypeScript/React code: `npm run format`
Name React components in PascalCase and component files in kebab-case
Use conditional rendering and mobile-first responsive design with Tailwind CSS in React
Files:
- frontend/src/features/settings/components/team-modes/types.ts
- frontend/src/features/settings/components/team-modes/index.ts
- frontend/src/types/api.ts
- frontend/src/features/tasks/contexts/taskContext.tsx
frontend/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use only `NEXT_PUBLIC_*` environment variables for client-safe values in frontend
Files:
- frontend/src/features/settings/components/team-modes/types.ts
- frontend/src/features/settings/components/team-modes/index.ts
- frontend/src/types/api.ts
- frontend/src/features/tasks/contexts/taskContext.tsx
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Python code must be PEP 8 compliant with Black formatter (line length: 88), isort for imports, and type hints required
Run Black formatter and isort on Python code: `black . && isort .`
Use descriptive names for Python functions and variables
Write docstrings for public Python functions and classes
Extract magic numbers to constants in Python code
Keep Python functions to a maximum of 50 lines (preferred)
Run pylint and flake8 on Python code: `pylint app/ && flake8 app/`
Files:
- backend/app/api/endpoints/webhooks/__init__.py
- backend/app/api/api.py
- backend/app/models/subtask.py
- executor/agents/claude_code/response_processor.py
- backend/app/core/config.py
- backend/app/api/endpoints/webhooks/callback.py
- backend/app/api/endpoints/webhooks/github.py
- shared/status.py
- executor/agents/claude_code/output_parser.py
- backend/app/schemas/subtask.py
- backend/app/api/endpoints/webhooks/gitlab.py
- backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py
- backend/app/services/async_resume_service.py
backend/app/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
backend/app/**/*.py: Add endpoint by creating route in `app/api/`, schema in `app/schemas/`, and logic in `app/services/`
Backend encrypts Git tokens before database storage
Files:
- backend/app/api/endpoints/webhooks/__init__.py
- backend/app/api/api.py
- backend/app/models/subtask.py
- backend/app/core/config.py
- backend/app/api/endpoints/webhooks/callback.py
- backend/app/api/endpoints/webhooks/github.py
- backend/app/schemas/subtask.py
- backend/app/api/endpoints/webhooks/gitlab.py
- backend/app/services/async_resume_service.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
backend/app/models/**/*.py: Create database migrations with Alembic after model changes: `alembic revision --autogenerate -m 'description'`
Add model by creating in `app/models/` and generating migration with Alembic
Files:
backend/app/models/subtask.py
frontend/src/types/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Define TypeScript types in the `src/types/` directory
Files:
frontend/src/types/api.ts
backend/alembic/versions/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
backend/alembic/versions/**/*.py: Always review auto-generated Alembic migrations before applying them
Never edit applied migrations - create new migrations instead
Keep database migrations small and focused
Files:
backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py
🧠 Learnings (1)
📚 Learning: 2025-11-29T16:31:31.304Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-29T16:31:31.304Z
Learning: Applies to backend/app/**/*.py : Add endpoint by creating route in `app/api/`, schema in `app/schemas/`, and logic in `app/services/`
Applied to files:
- backend/app/api/endpoints/webhooks/__init__.py
- backend/app/api/api.py
🧬 Code graph analysis (7)
executor/agents/claude_code/response_processor.py (7)
executor/agents/claude_code/output_parser.py (2)
- detect_waiting_signal (96-142)
- detect_waiting_signal (213-224)
executor/agents/claude_code/claude_code_agent.py (1)
- add_thinking_step (275-298)
executor/agents/agno/thinking_step_manager.py (1)
- add_thinking_step (38-72)
executor/agents/claude_code/progress_state_manager.py (2)
- update_workbench_status (82-101)
- report_progress (103-135)
executor/agents/base.py (1)
- report_progress (103-126)
frontend/src/types/api.ts (1)
- TaskStatus (120-128)
shared/status.py (1)
- TaskStatus (7-19)
backend/app/api/endpoints/webhooks/callback.py (1)
backend/app/services/async_resume_service.py (1)
- resume_subtask_by_id (188-226)
frontend/src/features/settings/components/team-modes/index.ts (2)
frontend/src/features/settings/components/team-modes/types.ts (1)
- TeamMode (7-7)
frontend/src/features/settings/components/BotEdit.tsx (1)
- AgentType (35-35)
backend/app/api/endpoints/webhooks/github.py (2)
frontend/src/apis/client.ts (2)
- post (86-91)
- request (22-80)
backend/app/services/async_resume_service.py (1)
- resume_from_webhook (94-186)
executor/agents/claude_code/output_parser.py (2)
shared/logger.py (1)
- setup_logger (37-107)
executor_manager/scheduler/scheduler.py (1)
- start (183-201)
backend/app/api/endpoints/webhooks/gitlab.py (1)
backend/app/services/async_resume_service.py (1)
- resume_from_webhook (94-186)
backend/app/services/async_resume_service.py (4)
backend/app/models/subtask.py (2)
- Subtask (32-72)
- SubtaskStatus (17-24)
backend/app/schemas/subtask.py (1)
- SubtaskStatus (25-32)
backend/app/services/webhook_notification.py (2)
- Notification (18-28)
- send_notification (96-134)
backend/app/models/kind.py (1)
- Kind (25-42)
🪛 Ruff (0.14.6)
backend/app/api/endpoints/webhooks/__init__.py
9-9: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
backend/app/api/endpoints/webhooks/callback.py
61-61: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
90-90: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
91-91: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
118-118: Unused function argument: db
(ARG001)
118-118: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
backend/app/api/endpoints/webhooks/github.py
138-138: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
160-160: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
161-161: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
executor/agents/claude_code/output_parser.py
1-1: Shebang is present but file is not executable
(EXE001)
backend/app/api/endpoints/webhooks/gitlab.py
122-122: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
142-142: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
143-143: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
backend/app/services/async_resume_service.py
265-265: f-string without any placeholders
Remove extraneous f prefix
(F541)
363-363: f-string without any placeholders
Remove extraneous f prefix
(F541)
393-393: Do not catch blind exception: Exception
(BLE001)
394-394: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Test wegent CLI Integration
- GitHub Check: e2e-tests
🔇 Additional comments (24)
executor/agents/claude_code/response_processor.py (2)
28-28: LGTM! Import correctly references the convenience function from the new output_parser module.
468-502: Update the docstring for `update_workbench_status` to document "waiting" as a valid status value.

The method correctly handles the "waiting" status by not stopping monitoring (line 96 only calls `_stop_monitoring()` when `status == "completed"`), which allows the frontend to periodically refresh WAITING tasks as intended. However, the docstring at line 84 lists only `"running" | "completed" | "failed"` while the code accepts `"waiting"` (passed at line 486 of response_processor.py). Additionally, `initialize_workbench` (line 66) has the same incomplete docstring. Update both docstrings to include `"waiting"` as a documented valid status, and add a comment explaining why "waiting" intentionally does not stop monitoring, unlike "completed."

executor/agents/claude_code/output_parser.py (5)
24-38: LGTM! Well-structured enum and dataclass definitions with appropriate type hints. The `str` mixin in `WaitingSignalType` enables direct serialization.
83-94: LGTM! Pre-compiling regex patterns in `__init__` is a good performance optimization. The configurable `confidence_threshold` provides flexibility.
96-142: LGTM! The method correctly finds the highest-confidence match above the threshold and extracts relevant context. The early return for empty input is a good guard.

191-206: LGTM! Clean wrapper providing a simpler boolean API for callers.

209-223: LGTM! The module-level singleton and convenience function provide a clean API for consumers. The default confidence threshold of 0.7 is reasonable.
frontend/src/features/tasks/contexts/taskContext.tsx (1)
137-160: LGTM! The refresh logic correctly handles WAITING tasks.

The existing `hasIncompleteTasks` check already treats `WAITING` as incomplete since it only excludes terminal states (`COMPLETED`, `FAILED`, `CANCELLED`, `DELETE`). The added comment on lines 146-147 clearly documents the rationale for periodic refresh of WAITING tasks that may be resumed by external webhooks.

frontend/src/features/settings/components/team-modes/index.ts (1)
25-33: LGTM! Async mode configuration is well-documented and correctly implemented.

The `MODE_AGENT_FILTER` correctly restricts `async` mode to `ClaudeCode` only, matching the PR description's single-bot scenario requirement. The comment clearly explains the use case.

backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py (2)
103-113: Downgrade ordering is correct - data migration before schema change.

The UPDATE to convert WAITING → PENDING (lines 105-107) correctly precedes the enum modification, preventing foreign key/constraint violations. This is the proper pattern for reversible enum migrations.
30-34: MySQL-specific syntax is appropriate for this MySQL-only project - no portability concern.

The project exclusively uses MySQL (verified by docker-compose.yml, .env.example, and requirements.txt, which contain only `pymysql` and `asyncmy` drivers with no PostgreSQL or SQLite support). The migration's `MODIFY COLUMN` and `ENUM` syntax is correct for MySQL and aligns with the project's database architecture. The migration is well-structured with proper upgrade/downgrade logic and idempotency guards via `IF NOT EXISTS` / `IF EXISTS`.

Likely an incorrect or invalid review comment.
shared/status.py (1)
13-14: LGTM! WAITING status correctly added.

The new `WAITING` enum member is properly documented and follows the existing pattern. The comment clearly indicates its purpose for async mode external event handling.
16-20: LGTM! Webhook routers integrated correctly.

The import and registration of the three webhook routers follow the existing patterns in the codebase. The `/webhooks` prefix and consistent tagging align with the PR objectives for async team mode. Based on learnings, this follows the guideline of adding endpoints by creating routes in `app/api/`.

Also applies to: 38-41
frontend/src/types/api.ts (1)
120-128: LGTM! TaskStatus extended correctly.

The `WAITING` status is appropriately placed after `RUNNING` and before the terminal states, aligning with the backend `SubtaskStatus` enum. The inline comment clarifies its purpose for async mode. As per coding guidelines, this correctly defines types in the `src/types/` directory.
17-24: LGTM! SubtaskStatus enum extended correctly.

The `WAITING` status is appropriately positioned and follows the existing enum pattern.
58-63: Alembic migration exists and is properly structured.

The migration file `./backend/alembic/versions/2a3b4c5d6e7f_add_subtask_async_mode_fields.py` has been successfully generated with all five async-mode columns correctly defined:

- `waiting_for` (VARCHAR(50), nullable)
- `waiting_since` (DATETIME, nullable)
- `waiting_timeout` (INT, nullable)
- `resume_count` (INT, default 0)
- `max_resume_count` (INT, default 5)

The migration includes a proper downgrade path, updates the status ENUM to include 'WAITING', and adds a performance index on (status, waiting_for). All column definitions align with the model changes.
backend/app/schemas/subtask.py (3)
25-32: LGTM! Schema enum extended correctly.

The `WAITING` status mirrors the model enum and follows the existing pattern.
57-62: LGTM! Async-mode fields added to SubtaskBase.

The new fields have appropriate types and defaults that align with the model layer. The defaults match the configuration values (`max_resume_count=5`).
84-89: LGTM! Optional fields for SubtaskUpdate.

All async-mode fields are correctly marked as `Optional` to allow partial updates, following the existing pattern in this schema.
29-48: LGTM! Signature verification is correctly implemented.

The `verify_github_signature` function properly:

- Uses `hmac.compare_digest` for timing-safe comparison
- Validates the `sha256=` prefix
- Logs a warning when no secret is configured
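The properties listed above can be sketched as a standalone function. GitHub's `X-Hub-Signature-256` header carries `sha256=` followed by the hex HMAC-SHA256 of the raw request body; the function name and signature below are illustrative, not the PR's exact code.

```python
import hashlib
import hmac


def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Timing-safe check of the X-Hub-Signature-256 header."""
    if not signature_header.startswith("sha256="):
        return False  # reject malformed or sha1-only headers
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position via timing
    return hmac.compare_digest(expected, signature_header)
```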
133-139: Note: `Depends()` in default arguments is an accepted FastAPI pattern.

Static analysis flags `Depends(get_db)` as B008, but this is the standard FastAPI dependency injection pattern and is safe to ignore here.

backend/app/api/endpoints/webhooks/gitlab.py (2)
160-168: Hardcoded `waiting_for="ci_pipeline"` may not match MR events.

The webhook always passes `waiting_for="ci_pipeline"`, but Merge Request events might have subtasks waiting for `"merge_request"` or similar. Consider using `event_info["event_type"]` to match appropriately.

```diff
 # Try to resume any waiting subtasks
 result = await async_resume_service.resume_from_webhook(
     db=db,
     git_repo=event_info["repo"],
     branch_name=event_info["branch"],
     webhook_payload=event_info["raw_payload"],
-    waiting_for="ci_pipeline",
+    waiting_for=event_info["event_type"],  # or map to expected waiting_for values
     source="gitlab",
 )
```
93-112: The review comment's core concern is factually incorrect.

According to GitLab's official webhook documentation, the `project_name` field in Job Hook payloads already includes the namespace in the format "namespace/project" (e.g., "gitlab-org/gitlab-test"). This is consistent with the `path_with_namespace` field used in Pipeline and Merge Request hooks.

The Job Hook payload structure is legitimately different from Pipeline/MR hooks: it uses top-level fields (`project_name`, `ref`) rather than accessing a nested `project` object. The current implementation is correct for this payload structure.

While there is a minor code style inconsistency (different field names across hook types), this does not cause matching failures. The `resume_from_webhook` function's substring matching logic (lines 165-169) provides additional safety and should work correctly with both formats, since both use the "namespace/project" format.

Repo identifier is consistent across GitLab webhook types: Job Hook uses `project_name`, which already includes the namespace (e.g., "group/project"), matching the `path_with_namespace` format used by Pipeline and Merge Request hooks. No functional matching failures will occur in `resume_from_webhook`.

backend/app/services/async_resume_service.py (1)
120-134: The composite index on `(status, waiting_for)` exists and is properly configured.

The migration `2a3b4c5d6e7f_add_subtask_async_mode_fields.py` creates the index `ix_subtasks_waiting_status` on `subtasks(status, waiting_for)` at lines 64-66, which directly supports the filtering criteria in the query. The index will ensure efficient lookups for WAITING subtasks filtered by `waiting_for`.
```python
@router.get("/callback/token/{subtask_id}")
async def get_callback_url(
    subtask_id: int,
    db: Session = Depends(get_db),
):
    """
    Get the callback URL for a subtask.
    This is a utility endpoint to generate callback URLs for external systems.
    Should only be accessible to authorized users.
    """
    token = generate_callback_token(subtask_id)

    # Build callback URL
    # Note: In production, this should use the actual public URL
    base_url = settings.FRONTEND_URL.rstrip("/")
    callback_url = f"{base_url}/api/webhooks/callback/{subtask_id}/{token}"

    return {
        "subtask_id": subtask_id,
        "token": token,
        "callback_url": callback_url,
    }
```
Unused `db` parameter and missing authorization.

The `db` parameter is unused; remove it, or use it to validate that the subtask exists before generating a token. The docstring notes this endpoint "should only be accessible to authorized users," but no authorization is implemented.
```diff
 @router.get("/callback/token/{subtask_id}")
 async def get_callback_url(
     subtask_id: int,
-    db: Session = Depends(get_db),
 ):
     """
     Get the callback URL for a subtask.
     This is a utility endpoint to generate callback URLs for external systems.
-    Should only be accessible to authorized users.
+    TODO: Add authorization to restrict access to authorized users only.
     """
```

🧰 Tools
🪛 Ruff (0.14.6)
118-118: Unused function argument: db
(ARG001)
118-118: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/webhooks/callback.py around lines 115 to 137, the
get_callback_url endpoint currently accepts an unused db parameter and lacks
authorization; either remove the db parameter or use it to look up and validate
that the subtask exists (and belongs to the requesting user) before generating a
token, and add an authentication/authorization dependency (e.g.,
Depends(get_current_user) or an admin check) to the endpoint signature so only
authorized users can call it; return a 404 if the subtask is not found or a 403
if the user is not permitted.
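If the token is derived rather than stored, validating it needs no database round-trip at all. A minimal sketch, assuming an HMAC-based `generate_callback_token` keyed with a server-side secret (the PR's actual implementation may differ):

```python
import hashlib
import hmac

# Assumption: in the real app this secret comes from settings, not a literal.
SECRET = b"server-side-secret"

def generate_callback_token(subtask_id: int) -> str:
    """Derive a stable token from the subtask id and a server secret."""
    return hmac.new(SECRET, str(subtask_id).encode(), hashlib.sha256).hexdigest()

def verify_callback_token(subtask_id: int, token: str) -> bool:
    """Constant-time check against the expected token for this subtask."""
    expected = generate_callback_token(subtask_id)
    return hmac.compare_digest(expected, token)

token = generate_callback_token(42)
assert verify_callback_token(42, token)
assert not verify_callback_token(43, token)
```

With a derived token the endpoint can still keep the `db` dependency to return 404 on missing subtasks, while a separate auth dependency restricts who may mint callback URLs.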
```python
elif event_type == "pull_request":
    pr = payload.get("pull_request", {})
    action = payload.get("action")

    # Process PR events that might be relevant
    if action not in ["opened", "synchronize", "closed", "merged"]:
        return None

    repo = payload.get("repository", {})
    repo_full_name = repo.get("full_name", "")
    head_branch = pr.get("head", {}).get("ref", "")

    return {
        "repo": repo_full_name,
        "branch": head_branch,
        "status": "completed" if action in ["closed", "merged"] else "in_progress",
        "conclusion": "success" if pr.get("merged") else action,
        "event_type": "pull_request",
        "name": f"PR #{pr.get('number')}: {pr.get('title', '')}",
        "html_url": pr.get("html_url"),
        "raw_payload": payload,
    }
```
🧩 Analysis chain
🌐 Web query:
GitHub pull request webhook event actions documentation
💡 Result:
GitHub’s webhook "pull_request" event includes an action field that indicates what happened. Common action values (per GitHub docs) include:
- opened, edited, closed, reopened
- assigned, unassigned
- review_requested, review_request_removed
- labeled, unlabelled (unlabeled)
- synchronize
- converted_to_draft, ready_for_review
- locked, unlocked
- milestoned, demilestoned
- auto_merge_enabled, auto_merge_disabled
(Exact sets and names may vary by product/version; see official docs below.)
Relevant documentation
- Webhook event types and pull_request payload (lists pull_request action values and payload shape). [1]
- Actions workflows: which pull_request activity types trigger workflows and how to restrict them with types. [2]
Sources
[1] GitHub — Webhook event types / PullRequestEvent payload.
[2] GitHub — Events that trigger workflows (pull_request activity types and workflow behavior).
🌐 Web query:
GitHub webhook pull request merged closed action payload
💡 Result:
Short answer: GitHub sends a pull_request webhook with action "closed" for both merges and non-merge closes; the payload's pull_request object contains merged (boolean), merged_at (timestamp or null), merge_commit_sha and merged_by to tell you if it was merged and by whom.
Important fields to check in the payload
- action: "closed". [Check X-GitHub-Event header = "pull_request".] [1]
- pull_request.merged: true => PR was merged; false => PR was closed without merge. [1]
- pull_request.merged_at: timestamp when merged (or null). [1]
- pull_request.merge_commit_sha: commit SHA of the merge (or null). [1]
- pull_request.merged_by: user object for who merged (or null). [1]
- number: PR number. [1]
- repository, sender, installation, etc. (standard webhook fields). [1][2]
Example logic (pseudocode)
- if payload.action == "closed" and payload.pull_request.merged == true → handle merged
- else if payload.action == "closed" and payload.pull_request.merged == false → handle closed/unmerged
Sources:
[1] GitHub: Webhook events and payloads — pull_request (payload fields: action, pull_request.merged, merged_at, merge_commit_sha, merged_by).
[2] AWS CodeBuild docs (mapping: pull_request action "closed" + merged==true → merged event).
GitHub sends "closed" action with a merged boolean field, not a separate "merged" action—line 112's condition is dead code.
The verification confirms your concern is correct. GitHub's pull_request webhook always sends action: "closed" for merged PRs; the merge status is indicated by the pull_request.merged boolean field in the payload, not a separate action value. The condition on line 112 that includes "merged" in the allowed actions will never match an actual GitHub event.
While the code still works correctly because line 118 checks pr.get("merged") directly, the condition is misleading and should be:
```python
if action not in ["opened", "synchronize", "closed"]:
    return None
```

🤖 Prompt for AI Agents
In backend/app/api/endpoints/webhooks/github.py around lines 107 to 128, the
allowed actions list incorrectly includes "merged" which GitHub never sends as
an action (merged PRs come as action "closed" with pull_request.merged=true);
update the action check to only include ["opened", "synchronize", "closed"] so
the dead "merged" branch is removed and behavior matches actual webhook
payloads.
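The merged-vs-closed distinction above can be folded into one small classifier; this is a hedged sketch of the corrected logic, not the handler's actual code:

```python
from typing import Optional

def classify_pr_event(payload: dict) -> Optional[str]:
    """Map a GitHub pull_request webhook payload to a coarse status.

    GitHub never sends action == "merged": merged PRs arrive as
    action == "closed" with pull_request.merged == True.
    """
    action = payload.get("action")
    if action not in ("opened", "synchronize", "closed"):
        return None
    pr = payload.get("pull_request", {})
    if action == "closed":
        return "merged" if pr.get("merged") else "closed"
    return "in_progress"

assert classify_pr_event({"action": "closed", "pull_request": {"merged": True}}) == "merged"
assert classify_pr_event({"action": "closed", "pull_request": {"merged": False}}) == "closed"
assert classify_pr_event({"action": "merged"}) is None  # never sent by GitHub anyway
```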
```python
def verify_gitlab_token(token: Optional[str], expected_token: str) -> bool:
    """Verify GitLab webhook token."""
    if not expected_token:
        # If no token configured, skip verification (not recommended for production)
        logger.warning("GitLab webhook token not configured, skipping verification")
        return True

    return token == expected_token
```
Use timing-safe comparison for token verification.
Direct string comparison with `==` is vulnerable to timing attacks. Use `secrets.compare_digest` for constant-time comparison, consistent with the `callback.py` implementation.
```diff
+import secrets
+
 def verify_gitlab_token(token: Optional[str], expected_token: str) -> bool:
     """Verify GitLab webhook token."""
     if not expected_token:
         # If no token configured, skip verification (not recommended for production)
         logger.warning("GitLab webhook token not configured, skipping verification")
         return True
-    return token == expected_token
+    if token is None:
+        return False
+    return secrets.compare_digest(token, expected_token)
```

🤖 Prompt for AI Agents
In backend/app/api/endpoints/webhooks/gitlab.py around lines 26 to 33, the token
comparison uses a direct == which is vulnerable to timing attacks; replace it
with a timing-safe comparison using secrets.compare_digest. Ensure you import
the secrets module at top of the file, handle the case where token may be None
by comparing empty string or returning False appropriately, and return the
result of secrets.compare_digest(token, expected_token) instead of the direct
equality check.
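Extracted as a standalone function (with the logging call replaced by a comment so the sketch is self-contained), the timing-safe version behaves like this:

```python
import secrets
from typing import Optional

def verify_gitlab_token(token: Optional[str], expected_token: str) -> bool:
    """Timing-safe verification of the X-Gitlab-Token header value."""
    if not expected_token:
        # No token configured: skip verification (not recommended for production)
        return True
    if token is None:
        return False
    return secrets.compare_digest(token, expected_token)

assert verify_gitlab_token("s3cret", "s3cret")
assert not verify_gitlab_token("wrong", "s3cret")
assert not verify_gitlab_token(None, "s3cret")
```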
```python
# Async team mode configuration
ASYNC_MODE_ENABLED: bool = True
DEFAULT_MAX_RESUME_COUNT: int = 5
DEFAULT_WAITING_TIMEOUT: int = 3600  # Default timeout in seconds, 0 means no timeout
GITHUB_WEBHOOK_SECRET: str = ""
GITLAB_WEBHOOK_TOKEN: str = ""
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate the github.py file mentioned in the review
find . -name "github.py" -type f | head -20
```

🏁 Script executed:

```shell
# Check the github webhook handler for signature verification logic
wc -l ./backend/app/api/endpoints/webhooks/github.py
```

🏁 Script executed:

```shell
# Read the github webhook handler
cat -n ./backend/app/api/endpoints/webhooks/github.py
```

🏁 Script executed:

```shell
# Look for gitlab webhook handler
find . -path "*/webhooks/*gitlab*" -type f
```

🏁 Script executed:

```shell
# Check gitlab webhook handler
wc -l ./backend/app/api/endpoints/webhooks/gitlab.py
```

🏁 Script executed:

```shell
# Read gitlab webhook handler - focus on token verification
cat -n ./backend/app/api/endpoints/webhooks/gitlab.py | head -80
```

🏁 Script executed:

```shell
# Check if there's any startup validation in config.py or main.py
find . -name "main.py" -o -name "__main__.py" | head -10
```

🏁 Script executed:

```shell
# Check backend main.py for startup validation
wc -l ./backend/app/main.py
```

🏁 Script executed:

```shell
# Read main.py - search for webhook secret or startup validation
cat -n ./backend/app/main.py | grep -i "webhook\|secret\|startup\|lifespan" -A 2 -B 2
```

🏁 Script executed:

```shell
# Read full startup block
sed -n '108,140p' ./backend/app/main.py
```

🏁 Script executed:

```shell
# Check config.py for production mode and any validation
wc -l ./backend/app/core/config.py
```

🏁 Script executed:

```shell
# Read full config.py
cat -n ./backend/app/core/config.py
```

🏁 Script executed:

```shell
# Search for deployment or documentation files
find . -name "README*" -o -name "DEPLOY*" -o -name "docker-compose*" -o -name "Dockerfile*" | grep -i "backend\|deploy" | head -10
```

🏁 Script executed:

```shell
# Search for webhook documentation
grep -i "webhook\|github.*secret\|gitlab.*token" ./backend/README.md
```

🏁 Script executed:

```shell
# Search for .env or environment variable documentation
find . -name ".env*" -o -name "*env.example*" | head -20
```

🏁 Script executed:

```shell
# Check .env.example for webhook configuration
grep -i "webhook\|github.*secret\|gitlab.*token" ./backend/.env.example
```
Security concern about empty webhook secret defaults is valid and confirmed.

The code allows GitHub and GitLab webhooks to skip signature verification when secrets are not configured. Both `github.py` (lines 33–36) and `gitlab.py` (lines 28–31) check for empty secrets and return True (accepting the webhook) with only a `logger.warning()`.

Verification confirms:

- `GITHUB_WEBHOOK_SECRET` and `GITLAB_WEBHOOK_TOKEN` default to empty strings in `config.py` (lines 100–101)
- No startup validation enforces secrets in production mode
- No deployment documentation addresses this configuration requirement

The original suggestion to document this prominently or enforce in production is sound. Consider adding a startup check that logs a critical warning or fails if `ENVIRONMENT == "production"` and either secret is empty.
🤖 Prompt for AI Agents
In backend/app/core/config.py around lines 96 to 101, the GITHUB_WEBHOOK_SECRET
and GITLAB_WEBHOOK_TOKEN default to empty strings which allows webhook signature
checks to be skipped; add a startup validation that checks if ENVIRONMENT ==
"production" and if either secret/token is empty then log a critical error and
abort startup (raise SystemExit or similar) to prevent running without secrets;
for non-production environments log a prominent warning that secrets are missing
and point to documentation; update the deployment docs to state that both
webhook secrets must be set in production.
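For reference, GitHub signs each delivery with HMAC-SHA256 over the raw request body, sent in the `X-Hub-Signature-256` header as `sha256=<hex digest>`. A self-contained sketch of that verification (independent of the PR's `github.py`):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header: "sha256=" + hex HMAC-SHA256 of the body."""
    prefix = "sha256="
    if not signature_header.startswith(prefix):
        return False
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(signature_header[len(prefix):], expected)

body = b'{"action": "opened"}'
good = "sha256=" + hmac.new(b"topsecret", body, hashlib.sha256).hexdigest()
assert verify_github_signature("topsecret", body, good)
assert not verify_github_signature("topsecret", body, "sha256=deadbeef")
```

Note that the digest must be computed over the raw bytes of the request body, before any JSON parsing or re-serialization.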
```python
if (
    git_repo.lower() == task_git_repo.lower()
    or git_repo.lower() in task_git_repo.lower()
    or task_git_repo.lower() in git_repo.lower()
):
    if branch_name.lower() == task_branch.lower():
```
Overly permissive repository matching could cause false positives.
The substring matching logic (`git_repo.lower() in task_git_repo.lower()` and vice versa) could incorrectly match unrelated repositories. For example, "org/app" would match "org/app-extended" or "another-org/app".
Consider stricter matching or normalizing repo identifiers:

```diff
-    # Match by repo and branch (case-insensitive)
-    if (
-        git_repo.lower() == task_git_repo.lower()
-        or git_repo.lower() in task_git_repo.lower()
-        or task_git_repo.lower() in git_repo.lower()
-    ):
+    # Match by repo (case-insensitive, exact match on repo name)
+    # Normalize by extracting owner/repo or just repo name
+    webhook_repo_normalized = git_repo.lower().rstrip("/")
+    task_repo_normalized = task_git_repo.lower().rstrip("/")
+
+    # Allow matching with or without host prefix (e.g., github.com/owner/repo vs owner/repo)
+    webhook_repo_parts = webhook_repo_normalized.split("/")[-2:]  # last two segments
+    task_repo_parts = task_repo_normalized.split("/")[-2:]
+
+    if webhook_repo_parts == task_repo_parts:
         if branch_name.lower() == task_branch.lower():
```

🤖 Prompt for AI Agents
In backend/app/services/async_resume_service.py around lines 165 to 170, the
current repo-matching uses substring checks which can produce false positives
(e.g., "org/app" matching "org/app-extended"); replace the permissive substring
logic with strict normalized comparisons: normalize both repo identifiers to a
canonical "owner/repo" form (trim whitespace and slashes, lowercase, and if URLs
strip .git and host prefix), parse out owner and repo name and compare owner ==
owner and repo == repo (or fallback to full equality after normalization);
remove the git_repo in task_git_repo and vice-versa substring checks so only
exact normalized matches (plus the existing branch_name lowercase comparison)
determine a match.
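The normalization the prompt describes can be done in a few lines. This sketch (names are illustrative, not the service's actual helpers) reduces URLs and paths to a comparable `(owner, repo)` pair:

```python
from typing import Tuple

def normalize_repo(identifier: str) -> Tuple[str, ...]:
    """Reduce a repo URL or path to a lowercase (owner, name) pair.

    Strips scheme/host segments and a trailing ".git" so that
    "https://github.com/Org/App.git" and "org/app" compare equal.
    """
    ident = identifier.strip().lower().rstrip("/")
    if ident.endswith(".git"):
        ident = ident[: -len(".git")]
    parts = [p for p in ident.split("/") if p]
    return tuple(parts[-2:])  # last two segments: owner, repo

assert normalize_repo("https://github.com/Org/App.git") == ("org", "app")
assert normalize_repo("org/app") == ("org", "app")
assert normalize_repo("org/app") != normalize_repo("org/app-extended")
```

Exact comparison of the normalized pairs eliminates the substring false positives while still tolerating host prefixes and case differences.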
```python
if subtask.resume_count >= subtask.max_resume_count:
    logger.warning(
        f"Subtask {subtask.id} reached max resume count ({subtask.max_resume_count})"
    )

    # Mark as failed
    subtask.status = SubtaskStatus.FAILED
    subtask.error_message = f"Max resume count ({subtask.max_resume_count}) reached"
    subtask.waiting_for = None
    subtask.waiting_since = None

    db.commit()

    # Send notification
    await self._send_notification(
        subtask=subtask,
        event="subtask.max_resume_reached",
        description=f"Subtask failed: max resume count reached",
    )

    return {
        "success": False,
        "resumed": False,
        "subtask_id": subtask.id,
        "error": "Max resume count reached",
    }
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# First, check for database locking mechanisms
echo "=== Searching for database locking mechanisms ==="
rg -n "with_for_update|FOR UPDATE|select_for_update" --type py

# Also check for session/transaction patterns
echo -e "\n=== Searching for transaction patterns ==="
rg -n "db\.session|begin\(\)|commit\(\)" --type py | head -30
```

🏁 Script executed:

```shell
#!/bin/bash
# Check file size first
wc -l backend/app/services/async_resume_service.py

# Read around the problem area (lines 248-273)
echo -e "\n=== Code around lines 248-273 ==="
sed -n '230,290p' backend/app/services/async_resume_service.py | cat -n

# Check how subtask is retrieved
echo -e "\n=== Looking for subtask retrieval patterns ==="
rg -n "def.*resume|Subtask\.query|db\.query.*[Ss]ubtask" backend/app/services/async_resume_service.py | head -20
```

🏁 Script executed:

```shell
#!/bin/bash
# Read the full _resume_subtask method and the calling methods
echo "=== Full context of _resume_subtask method ==="
sed -n '228,300p' backend/app/services/async_resume_service.py | cat -n

# Look at the resume_from_webhook and resume_subtask_by_id methods
echo -e "\n=== resume_from_webhook method ==="
sed -n '94,160p' backend/app/services/async_resume_service.py | cat -n
echo -e "\n=== resume_subtask_by_id method ==="
sed -n '188,225p' backend/app/services/async_resume_service.py | cat -n
```

🏁 Script executed:

```shell
#!/bin/bash
# Find the Subtask model
echo "=== Finding Subtask model ==="
fd -t f "*.py" -path "*/models/*" | xargs rg -l "class Subtask" 2>/dev/null

# Read the Subtask model
echo -e "\n=== Subtask model definition ==="
rg -n "class Subtask" --type py -A 50 | head -80
```

🏁 Script executed:

```shell
#!/bin/bash
# Read the Subtask model
echo "=== Subtask model definition ==="
cat backend/app/models/subtask.py
```

🏁 Script executed:

```shell
#!/bin/bash
# Check where resume_from_webhook and resume_subtask_by_id are called
echo "=== API endpoints that call resume methods ==="
rg -n "resume_from_webhook|resume_subtask_by_id" --type py | grep -v "def resume" | head -20

# Check if this is called from async tasks or webhooks
echo -e "\n=== Checking for webhook endpoint definitions ==="
rg -n "webhook|@app\.post|@app\.get" backend/app/api/ --type py -A 2 | grep -B2 -A2 "resume" | head -40
```

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are any unique constraints or check constraints on resume_count
echo "=== Looking for database migrations or constraints ==="
fd -t f "*.py" backend/app/db/migrations 2>/dev/null | head -5

# Check if there's an alembic migration that adds constraints
echo -e "\n=== Looking for alembic versions ==="
find backend -name "versions" -type d 2>/dev/null

# Check the actual endpoint code to see full context
echo -e "\n=== Callback endpoint code ==="
sed -n '80,110p' backend/app/api/endpoints/webhooks/callback.py
```
Based on my verification, I have confirmed the race condition concern is legitimate and valid. Here's what I found:

1. No locking mechanisms exist: the codebase has zero database-level locking (`with_for_update`, `FOR UPDATE`, `select_for_update`)
2. The Subtask model has no optimistic locking: no version fields or similar mechanisms for concurrency control
3. Concurrent requests are definitely possible: the resume methods are called from webhook endpoints (GitHub, GitLab, callback), which are HTTP endpoints that can receive multiple concurrent requests from external systems
4. The race condition window is real: between querying the subtask without locking and checking `resume_count >= max_resume_count`, another concurrent request could load the same subtask and both would pass the check

The vulnerability: two concurrent requests both load a subtask with `resume_count=4`, `max_resume_count=5`. Both pass the check (4 >= 5 is false), both increment to 5, and both succeed when only the first should have succeeded.

Add database-level locking to prevent the race condition on the max_resume_count check.

Use `db.query(Subtask).filter(...).with_for_update().first()` when retrieving the subtask in `resume_subtask_by_id` (line 210) and `resume_from_webhook` (line 126) to acquire an exclusive database lock. This ensures only one request can modify the subtask's `resume_count` at a time. Alternatively, implement an atomic database update using a trigger or stored procedure that both checks the limit and increments in a single transaction.
🧰 Tools
🪛 Ruff (0.14.6)
265-265: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In backend/app/services/async_resume_service.py around lines 248-273, the
max-resume-count branch can suffer a race because the subtask is read/checked
without a DB lock; change the retrieval logic in resume_subtask_by_id (around
line 210) and resume_from_webhook (around line 126) to fetch the subtask inside
the same transaction using a SELECT ... FOR UPDATE (e.g.,
db.query(Subtask).filter(...).with_for_update().first()), re-check resume_count
>= max_resume_count after acquiring the lock, perform the increment/status
changes and db.commit() while holding the lock, and return the response; this
ensures only one concurrent request can pass the check and modify resume_count
atomically.
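Besides `SELECT ... FOR UPDATE`, a single conditional `UPDATE` achieves the same atomicity and works even where row locks are awkward to reach from the ORM. A sketch using SQLite (table layout simplified from the `Subtask` model):

```python
import sqlite3

# The WHERE clause and the increment run as one statement, so two concurrent
# resume requests cannot both pass the resume_count check.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE subtasks (id INTEGER PRIMARY KEY, "
    "resume_count INTEGER, max_resume_count INTEGER)"
)
conn.execute("INSERT INTO subtasks VALUES (1, 4, 5)")

def try_resume(conn, subtask_id):
    """Return True if this caller won the right to resume the subtask."""
    cur = conn.execute(
        "UPDATE subtasks SET resume_count = resume_count + 1 "
        "WHERE id = ? AND resume_count < max_resume_count",
        (subtask_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # exactly one row updated => this caller won

assert try_resume(conn, 1)      # resume_count 4 -> 5: allowed
assert not try_resume(conn, 1)  # resume_count == max: rejected
```

The caller then branches on the return value: only the winner proceeds to restart the agent session; losers return a "max resume count reached" response.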
```diff
 import { Bot, Team } from '@/types/api';

-export type TeamMode = 'solo' | 'pipeline' | 'route' | 'coordinate' | 'collaborate';
+export type TeamMode = 'solo' | 'pipeline' | 'route' | 'coordinate' | 'collaborate' | 'async';
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check the contents of index.ts and types.ts in the team-modes directory
# First, find the files
fd "index.ts|types.ts" frontend/src/features/settings/components/team-modes/

# Read both files to verify the claim
echo "=== Contents of types.ts ==="
cat frontend/src/features/settings/components/team-modes/types.ts
echo -e "\n=== Contents of index.ts ==="
cat frontend/src/features/settings/components/team-modes/index.ts
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for how TeamMode is imported/used across the codebase
rg -n "import.*TeamMode|from.*team-modes" --type=ts --type=tsx -g '!node_modules' -C 2
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for how TeamMode is imported/used across the codebase
rg -n "import.*TeamMode|from.*team-modes" --type ts -C 2
```
Based on my verification, I can now provide the rewritten review comment:
Remove duplicate TeamMode type definition from index.ts.
`TeamMode` is defined in `types.ts` (line 8) and then re-exported via `export * from './types'` in `index.ts` (line 10). However, `index.ts` also defines `TeamMode` locally (line 12), creating a redundant duplicate. Since both definitions are identical, this doesn't cause a runtime conflict, but it violates the DRY principle and creates a maintenance burden: if one definition is updated in the future without updating the other, they could diverge.
Remove the `TeamMode` definition from `index.ts` (line 12) and rely solely on the re-export from `types.ts`.
🤖 Prompt for AI Agents
In frontend/src/features/settings/components/team-modes/index.ts around lines 10
to 12, remove the duplicate local TeamMode type definition (the one at line 12)
and rely on the existing export * from './types' which already provides TeamMode
from types.ts; simply delete the redundant type declaration and ensure there are
no other local references requiring modification so the re-exported type from
types.ts is used everywhere.
Summary

Changes

Backend

- Add `WAITING` status to `SubtaskStatus` enum
- Add new fields to the `Subtask` model: `waiting_for`, `waiting_since`, `waiting_timeout`, `resume_count`, `max_resume_count`
- `POST /api/webhooks/github` - GitHub check_run/workflow_run events
- `POST /api/webhooks/gitlab` - GitLab Pipeline/MR hooks
- `POST /api/webhooks/callback/{subtask_id}/{token}` - generic callback
- `AsyncResumeService` for managing the session recovery workflow

Executor

- `output_parser.py` for detecting waiting signals (git push, PR/MR creation)
- Integrate output parsing into `response_processor` for automatic WAITING state detection
- Add `WAITING` status to shared `TaskStatus` enum

Frontend

- Add `async` to `TeamMode` type (ClaudeCode only, single-bot scenario)
- Add `WAITING` to `TaskStatus` type

Test Plan
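The waiting-signal detection described for `output_parser.py` might look roughly like the following; the patterns and signal names here are illustrative assumptions, not the PR's actual regexes:

```python
import re
from typing import Optional

# Illustrative patterns for CI-triggering actions in agent output.
WAITING_SIGNALS = {
    "ci_pipeline": re.compile(r"git push\b"),
    "pr_review": re.compile(r"(pull request|merge request) .*(created|opened)", re.I),
}

def detect_waiting_signal(output: str) -> Optional[str]:
    """Return the kind of external event the agent should wait for, or None."""
    for kind, pattern in WAITING_SIGNALS.items():
        if pattern.search(output):
            return kind
    return None

assert detect_waiting_signal("Running `git push origin feature`") == "ci_pipeline"
assert detect_waiting_signal("Merge request !5 opened for review") == "pr_review"
assert detect_waiting_signal("no relevant action") is None
```

When a signal is detected, the response processor would mark the subtask WAITING with `waiting_for` set to the signal kind, so the matching webhook can later resume it.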
Summary by CodeRabbit
New Features