feat(backend): add Completion Condition mechanism for CI monitoring #215
Conversation
Add a generic Completion Condition system to track async external conditions (e.g., CI pipelines) and trigger auto-fix workflows:

- Add CompletionCondition model with status tracking and retry logic
- Add GitHub/GitLab webhook endpoints for receiving CI events
- Add CIMonitorService for handling CI events and auto-fix workflow
- Add CompletionConditionService for CRUD operations
- Add new webhook notification events: condition.satisfied, condition.ci_failed, condition.max_retry_reached, task.fully_completed
- Add configuration for CI monitoring (CI_MONITOR_ENABLED, CI_MAX_RETRIES)
- Add database migration for completion_conditions table

The system automatically detects CI failures and creates fix subtasks with relevant logs, up to a configurable max retry limit.
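To make the auto-fix loop above concrete, here is a minimal, self-contained sketch of the retry decision it describes. The helper and type names (`plan_fix_attempt`, `FixAttempt`) are illustrative assumptions, not the service API added by this PR; only the retry-budget idea and the "Auto-fix CI failure" title format mirror the description.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FixAttempt:
    """Describes one auto-fix subtask to be created (illustrative only)."""
    title: str
    prompt: str


def plan_fix_attempt(
    retry_count: int,
    max_retries: int,
    ci_logs: Optional[str],
) -> Optional[FixAttempt]:
    """Return the next fix attempt, or None once the retry budget is spent."""
    if retry_count >= max_retries:
        # Corresponds to the condition.max_retry_reached notification case.
        return None
    attempt_no = retry_count + 1
    log_excerpt = (ci_logs or "no logs available")[:2000]
    return FixAttempt(
        title=f"Auto-fix CI failure (attempt {attempt_no})",
        prompt=f"CI failed. Relevant logs:\n{log_excerpt}\nPlease fix the failure.",
    )


if __name__ == "__main__":
    print(plan_fix_attempt(retry_count=1, max_retries=5, ci_logs="pytest: 2 failed"))
    print(plan_fix_attempt(retry_count=5, max_retries=5, ci_logs=None))  # None
```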
Walkthrough

This PR introduces an asynchronous completion conditions feature to track external CI/CD pipeline status and other async workflows. It includes a new database table, ORM models, Pydantic schemas, two services for managing completion conditions and CI monitoring, API endpoints for CRUD operations and webhook handlers for GitHub and GitLab, plus the necessary configuration settings and database migration.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant GitHub as GitHub/GitLab
participant Webhook as Webhook Handler
participant CIMonitor as CI Monitor Service
participant CompCond as Completion Condition<br/>Service
participant DB as Database
participant Notifier as Webhook Notifier
GitHub->>Webhook: POST webhook event<br/>(check_run/workflow_run/pipeline)
Webhook->>Webhook: Verify signature & parse payload
Webhook->>CIMonitor: handle_ci_started() or<br/>handle_ci_success() or<br/>handle_ci_failure()
CIMonitor->>CompCond: find_by_repo_and_branch()
CompCond->>DB: Query completion_conditions
DB-->>CompCond: Return matching conditions
CompCond-->>CIMonitor: Condition list
CIMonitor->>CompCond: update_status(SATISFIED/FAILED)
CompCond->>DB: Update condition status & timestamp
DB-->>CompCond: Confirmed
CompCond-->>CIMonitor: Updated condition
CIMonitor->>Notifier: Send notification<br/>(condition satisfied/failed)
Notifier->>DB: Log notification
alt Retry needed
CIMonitor->>CompCond: increment_retry()
CompCond->>DB: Increment retry_count
end
CIMonitor-->>Webhook: Done
Webhook-->>GitHub: 200 OK
```
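As a rough illustration of the "Verify signature & parse payload" step in the diagram, the sketch below shows GitHub-style HMAC verification of a webhook body using only the standard library. It is a simplified, framework-free example; the function names and error handling are assumptions, not the handler code in this PR.

```python
import hashlib
import hmac
import json


def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check an X-Hub-Signature-256 header ("sha256=<hexdigest>") against the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")


def handle_webhook(secret: str, body: bytes, signature_header: str) -> dict:
    """Verify the signature, then parse and return the JSON payload."""
    if not verify_github_signature(secret, body, signature_header):
        raise PermissionError("invalid webhook signature")
    return json.loads(body)


if __name__ == "__main__":
    secret, body = "s3cret", b'{"action": "completed"}'
    sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    print(handle_webhook(secret, body, sig)["action"])  # completed
```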
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 8
🧹 Nitpick comments (20)
backend/alembic/versions/2b3c4d5e6f7g_add_completion_conditions_table.py (2)
46-52: Redundant index on primary key. Line 47 creates `ix_completion_conditions_id` on the `id` column, but `id` is already the `PRIMARY KEY` (line 46), which implicitly creates a unique index. This index is unnecessary and wastes storage.

```diff
 PRIMARY KEY (id),
-KEY ix_completion_conditions_id (id),
 KEY ix_completion_conditions_subtask_id (subtask_id),
```
25-54: Raw SQL reduces portability across database backends. Using `op.execute()` with raw MySQL-specific SQL (e.g., `ENGINE=InnoDB`, `AUTO_INCREMENT`, `KEY` syntax) limits portability. If multi-database support is needed, consider using SQLAlchemy's `op.create_table()` with dialect-agnostic constructs.

For better maintainability, Alembic's schema operations could be used:

```python
op.create_table(
    'completion_conditions',
    sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True),
    sa.Column('subtask_id', sa.Integer(), nullable=False, index=True),
    # ... additional columns
)
```

backend/app/models/__init__.py (1)
18-18: `__all__` is not sorted. Static analysis (RUF022) flags that `__all__` should be sorted alphabetically for consistency. This is a minor style issue.

```diff
-__all__ = ["User", "Kind", "Subtask", "SharedTeam", "SkillBinary", "CompletionCondition"]
+__all__ = ["CompletionCondition", "Kind", "SharedTeam", "SkillBinary", "Subtask", "User"]
```

backend/app/models/completion_condition.py (2)
18-41: Duplicate enum definitions with schemas module. These enums (`ConditionType`, `ConditionStatus`, `GitPlatform`) are duplicated in `backend/app/schemas/completion_condition.py`. This violates DRY and risks divergence. Define the enums in one place (the model file is appropriate) and import them in the schemas:

```python
# In backend/app/schemas/completion_condition.py
from app.models.completion_condition import ConditionType, ConditionStatus, GitPlatform
```
89-96: Missing composite index defined in migration. The migration (line 52) defines a composite index `ix_completion_conditions_repo_branch (repo_full_name, branch_name)`, but this index is not declared in the model's `__table_args__`. While the migration will create it, the model should reflect the schema for accuracy. Add the composite index to `__table_args__`:

```diff
+from sqlalchemy import Index

 __table_args__ = (
+    Index("ix_completion_conditions_repo_branch", "repo_full_name", "branch_name"),
     {
         "sqlite_autoincrement": True,
         "mysql_engine": "InnoDB",
         "mysql_charset": "utf8mb4",
         "mysql_collate": "utf8mb4_unicode_ci",
     },
 )
```

backend/app/api/endpoints/webhooks/gitlab.py (3)
41-45: Improve exception handling for JSON parsing. Use `logging.exception` to include the traceback, catch a more specific exception, and chain the raised exception.

```diff
+import json
+
 # Parse JSON payload
 try:
     payload = await request.json()
-except Exception as e:
-    logger.error(f"Failed to parse GitLab webhook payload: {e}")
-    raise HTTPException(status_code=400, detail="Invalid JSON payload")
+except json.JSONDecodeError as e:
+    logger.exception("Failed to parse GitLab webhook payload")
+    raise HTTPException(status_code=400, detail="Invalid JSON payload") from e
```
69-75: Remove unused variable and add a safety check for the `git_domain` extraction. `sha` is assigned but never used. Also, the `git_domain` extraction can raise `IndexError` if `web_url` is malformed.

```diff
 ref = object_attributes.get("ref", "")  # Branch name
-sha = object_attributes.get("sha", "")
 pipeline_url = object_attributes.get("url", "")

 # Extract project information
 repo_full_name = project.get("path_with_namespace", "")
-git_domain = project.get("web_url", "").split("/")[2] if project.get("web_url") else ""
+web_url = project.get("web_url", "")
+web_url_parts = web_url.split("/")
+git_domain = web_url_parts[2] if len(web_url_parts) > 2 else ""
```
109-129: Remove unused variables in `handle_build_event`. `build_id` and `pipeline_id` are assigned but never used. If they're not needed for the current implementation, remove them to avoid confusion.

```diff
 async def handle_build_event(payload: Dict[str, Any]):
     """Handle GitLab build/job event"""
     object_attributes = payload.get("object_attributes", {})
     project = payload.get("project", {})

     # Extract build information
-    build_id = str(object_attributes.get("id", ""))
     build_name = object_attributes.get("name", "")
     status = object_attributes.get("status", "")
     ref = object_attributes.get("ref", "")
-    pipeline_id = str(object_attributes.get("pipeline_id", ""))

     # Extract project information
     repo_full_name = project.get("path_with_namespace", "")
```

backend/app/services/ci_monitor.py (3)
35-75: Unused `git_domain` parameter in `handle_ci_started`. The `git_domain` parameter is accepted but never used in the method body. Either remove it or store it in the condition if needed for future use.
211-234: Placeholder implementation with unused variables and exception handling issues. The `provider` variable is assigned but never used (dead code). Also, use `logger.exception` to include the traceback for debugging.

```diff
     async def _fetch_ci_logs(
         self,
         condition: CompletionCondition,
         git_platform: GitPlatform,
     ) -> Optional[str]:
         """Fetch CI logs from the git platform"""
         try:
             if git_platform == GitPlatform.GITHUB:
-                from app.repository.github_provider import GitHubProvider
-
-                provider = GitHubProvider()
                 # Note: This would need user context for authentication
                 # For now, return a placeholder
                 return f"CI logs for GitHub run {condition.external_id} - implement log fetching"
             elif git_platform == GitPlatform.GITLAB:
-                from app.repository.gitlab_provider import GitLabProvider
-
-                provider = GitLabProvider()
                 return f"CI logs for GitLab pipeline {condition.external_id} - implement log fetching"
+            return None
         except Exception as e:
-            logger.error(f"Failed to fetch CI logs: {e}")
+            logger.exception("Failed to fetch CI logs")
             return None
```
325-395: Use `logger.exception` in notification error handlers. All notification methods use `logger.error`, which doesn't include the traceback. Replace with `logger.exception` for better debugging (applies to lines 347, 371, 395, 425).

```diff
         except Exception as e:
-            logger.error(f"Failed to send condition satisfied notification: {e}")
+            logger.exception("Failed to send condition satisfied notification")
```

backend/app/api/endpoints/webhooks/github.py (4)
15-19: Remove unused imports. `get_db`, `ConditionStatus`, and `completion_condition_service` are imported but never used in this file.

```diff
-from app.api.dependencies import get_db
 from app.core.config import settings
-from app.models.completion_condition import ConditionStatus, GitPlatform
+from app.models.completion_condition import GitPlatform
-from app.services.completion_condition import completion_condition_service
 from app.services.ci_monitor import ci_monitor_service
```
82-87: Improve exception handling for JSON parsing. Same pattern as the GitLab webhook—use a specific exception type, `logger.exception`, and chain the exception.

```diff
+import json
+
 # Parse JSON payload
 try:
     payload = await request.json()
-except Exception as e:
-    logger.error(f"Failed to parse GitHub webhook payload: {e}")
-    raise HTTPException(status_code=400, detail="Invalid JSON payload")
+except json.JSONDecodeError as e:
+    logger.exception("Failed to parse GitHub webhook payload")
+    raise HTTPException(status_code=400, detail="Invalid JSON payload") from e
```
117-120: Remove unused variable `head_sha`. `head_sha` is extracted but never used.

```diff
 conclusion = check_run.get("conclusion")
-head_sha = check_run.get("head_sha", "")
 head_branch = check_run.get("check_suite", {}).get("head_branch", "")
```
174-176: Remove unused variable `run_number`. `run_number` is extracted but never used.

```diff
 html_url = workflow_run.get("html_url", "")
 external_id = str(workflow_run.get("id", ""))
-run_number = workflow_run.get("run_number", "")
```

backend/app/schemas/completion_condition.py (2)
14-38: Consider consolidating enums with the model definitions. `ConditionType`, `ConditionStatus`, and `GitPlatform` are defined in both this file and `backend/app/models/completion_condition.py`. Consider importing from one location to avoid duplication and potential drift.

Option 1: Import from models:

```python
from app.models.completion_condition import ConditionType, ConditionStatus, GitPlatform
```

Option 2: Keep the enums in schemas (canonical source) and import them in models.
135-156: Webhook event schemas are defined but not used for validation. `GitHubCheckRunEvent`, `GitHubWorkflowRunEvent`, and `GitLabPipelineEvent` are well-defined, but the webhook handlers parse payloads as raw `Dict[str, Any]`. Consider using these schemas for request validation to catch malformed payloads early.
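As a sketch of what that validation could look like (the event model below is a hypothetical stand-in with made-up fields, not the schema defined in this PR):

```python
from pydantic import BaseModel, ValidationError


class CheckRunEventSketch(BaseModel):
    """Hypothetical minimal event shape, for illustration only."""

    action: str
    check_run: dict


def parse_event(payload: dict) -> CheckRunEventSketch | None:
    """Validate a raw webhook payload; return None (or map to a 400) when malformed."""
    try:
        return CheckRunEventSketch.model_validate(payload)
    except ValidationError:
        return None


if __name__ == "__main__":
    print(parse_event({"action": "completed", "check_run": {"status": "completed"}}))
    print(parse_event({"unexpected": True}))  # None
```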
backend/app/services/completion_condition.py (3)

15-15: Unused import: `settings`. The `settings` import from `app.core.config` is not used in this module.

```diff
-from app.core.config import settings
```
190-217: Consider using bulk update for better performance. Loading all matching records into memory and updating them individually is inefficient for large datasets. A single `UPDATE` statement would be more performant.

```diff
     def cancel_by_subtask(
         self,
         db: Session,
         *,
         subtask_id: int,
     ) -> int:
         """Cancel all pending/in_progress conditions for a subtask"""
-        conditions = (
-            db.query(CompletionCondition)
-            .filter(
-                and_(
-                    CompletionCondition.subtask_id == subtask_id,
-                    CompletionCondition.status.in_(
-                        [ConditionStatus.PENDING, ConditionStatus.IN_PROGRESS]
-                    ),
-                )
+        count = (
+            db.query(CompletionCondition)
+            .filter(
+                and_(
+                    CompletionCondition.subtask_id == subtask_id,
+                    CompletionCondition.status.in_(
+                        [ConditionStatus.PENDING, ConditionStatus.IN_PROGRESS]
+                    ),
+                )
             )
-            .all()
+            .update(
+                {CompletionCondition.status: ConditionStatus.CANCELLED},
+                synchronize_session="fetch",
+            )
         )
-
-        count = 0
-        for condition in conditions:
-            condition.status = ConditionStatus.CANCELLED
-            count += 1
-
         db.commit()
         logger.info(f"Cancelled {count} conditions for subtask {subtask_id}")
         return count
```

The same optimization applies to `cancel_by_task` (lines 219-246).
258-265: Multiple iterations over the conditions list can be consolidated. Using `collections.Counter` or a single loop would reduce from 4 iterations to 1.

```diff
+from collections import Counter
```

```diff
     def get_task_completion_status(
         self,
         db: Session,
         *,
         task_id: int,
         user_id: Optional[int] = None,
     ) -> Dict[str, Any]:
         """Get overall completion status for a task"""
         conditions = self.get_by_task_id(db, task_id=task_id, user_id=user_id)

-        pending = sum(1 for c in conditions if c.status == ConditionStatus.PENDING)
-        in_progress = sum(
-            1 for c in conditions if c.status == ConditionStatus.IN_PROGRESS
-        )
-        satisfied = sum(
-            1 for c in conditions if c.status == ConditionStatus.SATISFIED
-        )
-        failed = sum(1 for c in conditions if c.status == ConditionStatus.FAILED)
+        status_counts = Counter(c.status for c in conditions)
+        pending = status_counts.get(ConditionStatus.PENDING, 0)
+        in_progress = status_counts.get(ConditionStatus.IN_PROGRESS, 0)
+        satisfied = status_counts.get(ConditionStatus.SATISFIED, 0)
+        failed = status_counts.get(ConditionStatus.FAILED, 0)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- backend/alembic/versions/2b3c4d5e6f7g_add_completion_conditions_table.py (1 hunks)
- backend/app/api/api.py (3 hunks)
- backend/app/api/dependencies.py (2 hunks)
- backend/app/api/endpoints/completion_conditions.py (1 hunks)
- backend/app/api/endpoints/webhooks/__init__.py (1 hunks)
- backend/app/api/endpoints/webhooks/github.py (1 hunks)
- backend/app/api/endpoints/webhooks/gitlab.py (1 hunks)
- backend/app/core/config.py (1 hunks)
- backend/app/models/__init__.py (2 hunks)
- backend/app/models/completion_condition.py (1 hunks)
- backend/app/schemas/completion_condition.py (1 hunks)
- backend/app/services/ci_monitor.py (1 hunks)
- backend/app/services/completion_condition.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: PEP 8 compliant code with Black formatter (line length: 88) and isort for imports
Type hints required in Python code
Run pylint and flake8 for Python linting
Use descriptive names for Python functions and variables
Add docstrings for public Python functions and classes
Extract magic numbers to constants in Python
Keep Python functions to a maximum of 50 lines (preferred)
Files:
- backend/app/models/completion_condition.py
- backend/app/api/dependencies.py
- backend/app/models/__init__.py
- backend/app/core/config.py
- backend/app/services/ci_monitor.py
- backend/app/api/api.py
- backend/app/api/endpoints/webhooks/gitlab.py
- backend/app/api/endpoints/webhooks/github.py
- backend/app/api/endpoints/webhooks/__init__.py
- backend/app/api/endpoints/completion_conditions.py
- backend/alembic/versions/2b3c4d5e6f7g_add_completion_conditions_table.py
- backend/app/schemas/completion_condition.py
- backend/app/services/completion_condition.py
backend/app/models/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use database migrations via Alembic for backend model changes
Files:
- backend/app/models/completion_condition.py
- backend/app/models/__init__.py
🧬 Code graph analysis (8)
backend/app/models/completion_condition.py (1)
- backend/app/schemas/completion_condition.py (3): ConditionType (15-21), ConditionStatus (24-31), GitPlatform (34-38)

backend/app/models/__init__.py (1)
- backend/app/models/completion_condition.py (1): CompletionCondition (44-96)

backend/app/services/ci_monitor.py (11)
- backend/app/api/dependencies.py (1): get_db_context (26-35)
- backend/app/models/completion_condition.py (4): CompletionCondition (44-96), ConditionStatus (27-34), ConditionType (18-24), GitPlatform (37-41)
- backend/app/schemas/completion_condition.py (3): ConditionStatus (24-31), ConditionType (15-21), GitPlatform (34-38)
- backend/app/services/webhook_notification.py (2): Notification (18-28), send_notification (96-134)
- backend/app/services/completion_condition.py (4): find_by_repo_and_branch (115-135), can_retry (186-188), increment_retry (162-184), get_task_completion_status (248-284)
- backend/app/repository/github_provider.py (1): GitHubProvider (25-825)
- backend/app/repository/gitlab_provider.py (1): GitLabProvider (23-861)
- backend/app/services/subtask.py (1): get_subtask_by_id (81-94)
- backend/app/schemas/subtask.py (1): SubtaskCreate (58-61)
- backend/app/api/endpoints/completion_conditions.py (1): get_task_completion_status (112-141)
- backend/app/services/adapters/task_kinds.py (1): get_task_by_id (469-490)

backend/app/api/endpoints/webhooks/gitlab.py (3)
- backend/app/models/completion_condition.py (1): GitPlatform (37-41)
- backend/app/schemas/completion_condition.py (1): GitPlatform (34-38)
- backend/app/services/ci_monitor.py (3): handle_ci_started (35-75), handle_ci_success (77-120), handle_ci_failure (122-153)

backend/app/api/endpoints/webhooks/github.py (4)
- backend/app/api/dependencies.py (1): get_db (13-22)
- backend/app/models/completion_condition.py (2): ConditionStatus (27-34), GitPlatform (37-41)
- backend/app/schemas/completion_condition.py (2): ConditionStatus (24-31), GitPlatform (34-38)
- backend/app/services/ci_monitor.py (3): handle_ci_started (35-75), handle_ci_success (77-120), handle_ci_failure (122-153)

backend/app/api/endpoints/completion_conditions.py (5)
- backend/app/api/dependencies.py (1): get_db (13-22)
- backend/app/schemas/completion_condition.py (5): CompletionConditionCreate (58-61), CompletionConditionInDB (76-88), CompletionConditionListResponse (91-95), TaskCompletionStatus (98-108), ConditionStatus (24-31)
- backend/app/core/security.py (1): get_current_user (28-50)
- backend/app/services/completion_condition.py (6): get_by_subtask_id (85-98), get_by_task_id (100-113), get_by_id (70-83), create_condition (39-68), update_status (137-160), get_task_completion_status (248-284)
- backend/app/models/completion_condition.py (1): ConditionStatus (27-34)

backend/app/schemas/completion_condition.py (1)
- backend/app/models/completion_condition.py (3): ConditionType (18-24), ConditionStatus (27-34), GitPlatform (37-41)

backend/app/services/completion_condition.py (4)
- backend/app/models/completion_condition.py (4): CompletionCondition (44-96), ConditionStatus (27-34), ConditionType (18-24), GitPlatform (37-41)
- backend/app/schemas/completion_condition.py (5): ConditionStatus (24-31), ConditionType (15-21), GitPlatform (34-38), CompletionConditionCreate (58-61), CompletionConditionUpdate (64-73)
- backend/app/services/base.py (2): BaseService (18-99), count (48-56)
- backend/app/api/endpoints/completion_conditions.py (1): get_task_completion_status (112-141)
🪛 Ruff (0.14.6)
backend/app/models/__init__.py
18-18: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
backend/app/services/ci_monitor.py
42-42: Unused method argument: git_domain
(ARG002)
81-81: Unused method argument: external_id
(ARG002)
126-126: Unused method argument: external_id
(ARG002)
229-229: Local variable provider is assigned to but never used
Remove assignment to unused variable provider
(F841)
232-232: Do not catch blind exception: Exception
(BLE001)
233-233: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
346-346: Do not catch blind exception: Exception
(BLE001)
347-347: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
370-370: Do not catch blind exception: Exception
(BLE001)
371-371: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
394-394: Do not catch blind exception: Exception
(BLE001)
395-395: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
424-424: Do not catch blind exception: Exception
(BLE001)
425-425: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
backend/app/api/endpoints/webhooks/gitlab.py
43-43: Do not catch blind exception: Exception
(BLE001)
44-44: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
45-45: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
70-70: Local variable sha is assigned to but never used
Remove assignment to unused variable sha
(F841)
115-115: Local variable build_id is assigned to but never used
Remove assignment to unused variable build_id
(F841)
119-119: Local variable pipeline_id is assigned to but never used
Remove assignment to unused variable pipeline_id
(F841)
backend/app/api/endpoints/webhooks/github.py
85-85: Do not catch blind exception: Exception
(BLE001)
86-86: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
87-87: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
117-117: Local variable head_sha is assigned to but never used
Remove assignment to unused variable head_sha
(F841)
176-176: Local variable run_number is assigned to but never used
Remove assignment to unused variable run_number
(F841)
backend/app/api/endpoints/completion_conditions.py
31-31: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
32-32: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
59-59: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
60-60: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
74-74: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
75-75: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
87-87: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
88-88: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
114-114: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
115-115: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Test wegent CLI Integration
- GitHub Check: e2e-tests
🔇 Additional comments (14)
backend/alembic/versions/2b3c4d5e6f7g_add_completion_conditions_table.py (1)
28-30: Consider adding foreign key constraints. The columns `subtask_id`, `task_id`, and `user_id` reference other tables but lack `FOREIGN KEY` constraints. While this may be intentional for flexibility or performance, adding FK constraints would enforce referential integrity and prevent orphan records.

If referential integrity is desired, consider adding:

```sql
FOREIGN KEY (subtask_id) REFERENCES subtasks(id) ON DELETE CASCADE,
FOREIGN KEY (task_id) REFERENCES tasks(id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
```

backend/app/api/dependencies.py (1)
25-35: LGTM - useful context manager for background tasks. The new `get_db_context()` function properly implements a context manager for database sessions outside FastAPI's dependency injection. The implementation is correct, with proper cleanup in the `finally` block.

Minor note: the logic duplicates `get_db()`. You could optionally refactor to reuse it:

```python
@contextmanager
def get_db_context() -> Generator[Session, None, None]:
    yield from get_db()
```

backend/app/api/endpoints/webhooks/__init__.py (1)
1-11: LGTM! Clean package initialization with proper re-exports for the webhook routers.
backend/app/core/config.py (1)
92-97: Review default for `CI_MONITOR_ENABLED`. Defaulting `CI_MONITOR_ENABLED` to `True` may cause unexpected behavior if webhook secrets (`GITHUB_WEBHOOK_SECRET`, `GITLAB_WEBHOOK_TOKEN`) are not configured. Consider defaulting to `False` so CI monitoring is opt-in.

Also, `CI_CHECK_TYPES` as a comma-separated string requires parsing logic elsewhere. A `list[str]` with a custom validator could be cleaner:

```python
from pydantic import field_validator

CI_CHECK_TYPES: list[str] = ["test", "lint", "build"]

@field_validator("CI_CHECK_TYPES", mode="before")
@classmethod
def parse_check_types(cls, v):
    if isinstance(v, str):
        return [t.strip() for t in v.split(",") if t.strip()]
    return v
```

backend/app/models/completion_condition.py (1)
44-88: LGTM - well-structured model. The `CompletionCondition` model is well-designed with clear field groupings, appropriate types, sensible defaults, and good documentation. The use of `func.now()` for timestamps and the JSON metadata field for extensibility are good patterns.

backend/app/api/api.py (1)
33-47: LGTM! Route registrations follow existing patterns. The new routes for completion conditions and external webhooks are properly structured. The comment on line 41 correctly notes that webhooks don't use auth middleware, since GitHub/GitLab webhooks rely on signature/token verification instead (implemented in the respective webhook handlers).
backend/app/api/endpoints/webhooks/github.py (1)
26-52: LGTM! Secure signature verification implementation. Correct use of `hmac.compare_digest` for timing-safe comparison and proper handling of both SHA-256 and SHA-1 signature formats.

backend/app/schemas/completion_condition.py (1)
76-89: LGTM! Well-structured database schema with ORM mode enabled. The `CompletionConditionInDB` schema properly extends the base and configures `from_attributes = True` for ORM compatibility. For Pydantic v2, you could optionally use `model_config = ConfigDict(from_attributes=True)` as the modern syntax, but the current approach works correctly.

backend/app/api/endpoints/completion_conditions.py (1)
1-24: Imports and router setup look good. The module structure follows FastAPI conventions. The B008 static analysis warnings about `Depends()` in default arguments are false positives—this is the standard and documented pattern for FastAPI dependency injection.

backend/app/services/completion_condition.py (5)
31-37: Service class structure looks good. Extending `BaseService` provides consistent CRUD patterns while allowing custom methods for domain-specific operations.

39-68: `create_condition` implementation is correct. Fields are properly mapped from the schema to the ORM model, and the transaction is committed with appropriate logging.

70-113: Query methods are well-structured with proper user scoping. The optional `user_id` filter provides flexibility for both user-scoped and admin access patterns.

286-305: `has_unsatisfied_conditions` is efficient. Using `.count()` directly on the query avoids loading records into memory.
308-309: Global singleton instance is appropriate for this service pattern.
```python
@router.get("/{condition_id}", response_model=CompletionConditionInDB)
def get_completion_condition(
    condition_id: int,
    db: Session = Depends(get_db),
    current_user: User = Depends(security.get_current_user),
):
    """Get a specific completion condition by ID"""
    condition = completion_condition_service.get_by_id(
        db, condition_id=condition_id, user_id=current_user.id
    )
    if not condition:
        raise HTTPException(status_code=404, detail="Completion condition not found")
    return condition
```
Critical: Route path collision—/tasks/{task_id}/... will be captured by /{condition_id}.
FastAPI matches routes in declaration order. The /{condition_id} route (line 56) will match requests intended for /tasks/{task_id}/completion-status (line 111), treating "tasks" as the condition_id. This will result in a validation error or 404.
Option 1 (preferred): Reorder routes so the more specific path comes first:
```diff
 router = APIRouter()

+@router.get("/tasks/{task_id}/completion-status", response_model=TaskCompletionStatus)
+def get_task_completion_status(
+    task_id: int,
+    db: Session = Depends(get_db),
+    current_user: User = Depends(security.get_current_user),
+):
+    # ... implementation
+
 @router.get("", response_model=CompletionConditionListResponse)
 def list_completion_conditions(...):
     ...

 @router.get("/{condition_id}", response_model=CompletionConditionInDB)
 def get_completion_condition(...):
     ...
-
-@router.get("/tasks/{task_id}/completion-status", response_model=TaskCompletionStatus)
-def get_task_completion_status(...):
-    ...
```

Option 2: Change the path to avoid collision, e.g., use a query parameter or a different path structure like /task-status/{task_id}.
Also applies to: 111-140
🧰 Tools
🪛 Ruff (0.14.6)
59-59: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
60-60: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/completion_conditions.py around lines 56-68 (and
related handlers at 111-140), the GET route path "/{condition_id}" will greedily
match requests like "/tasks/{task_id}/completion-status", causing route
collision; fix by reordering routes so the more specific
"/tasks/{task_id}/completion-status" (lines 111-140) is declared before the
generic "/{condition_id}" route, or alternatively change the generic route to a
non-conflicting path (e.g., "/by-id/{condition_id}" or
"/condition/{condition_id}") and update usages/tests accordingly.
```python
@router.delete("/{condition_id}/cancel")
def cancel_completion_condition(
    condition_id: int,
    db: Session = Depends(get_db),
    current_user: User = Depends(security.get_current_user),
):
    """Cancel a completion condition"""
    from app.models.completion_condition import ConditionStatus

    condition = completion_condition_service.get_by_id(
        db, condition_id=condition_id, user_id=current_user.id
    )
    if not condition:
        raise HTTPException(status_code=404, detail="Completion condition not found")

    if condition.status in [ConditionStatus.SATISFIED, ConditionStatus.FAILED]:
        raise HTTPException(
            status_code=400,
            detail=f"Cannot cancel condition in {condition.status} status",
        )

    condition = completion_condition_service.update_status(
        db, condition_id=condition_id, status=ConditionStatus.CANCELLED
    )
    return {"status": "cancelled", "id": condition_id}
```
🛠️ Refactor suggestion | 🟠 Major
Move import to module level.
The ConditionStatus import on line 91 should be at the top of the file with other imports for consistency and to avoid repeated import overhead on each request.
```diff
 from app.schemas.completion_condition import (
     CompletionConditionCreate,
     CompletionConditionInDB,
     CompletionConditionListResponse,
     TaskCompletionStatus,
 )
+from app.models.completion_condition import ConditionStatus
 from app.services.completion_condition import completion_condition_service


 @router.delete("/{condition_id}/cancel")
 def cancel_completion_condition(
     condition_id: int,
     db: Session = Depends(get_db),
     current_user: User = Depends(security.get_current_user),
 ):
     """Cancel a completion condition"""
-    from app.models.completion_condition import ConditionStatus
-
     condition = completion_condition_service.get_by_id(
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```python
@router.delete("/{condition_id}/cancel")
def cancel_completion_condition(
    condition_id: int,
    db: Session = Depends(get_db),
    current_user: User = Depends(security.get_current_user),
):
    """Cancel a completion condition"""
    condition = completion_condition_service.get_by_id(
        db, condition_id=condition_id, user_id=current_user.id
    )
    if not condition:
        raise HTTPException(status_code=404, detail="Completion condition not found")

    if condition.status in [ConditionStatus.SATISFIED, ConditionStatus.FAILED]:
        raise HTTPException(
            status_code=400,
            detail=f"Cannot cancel condition in {condition.status} status",
        )

    condition = completion_condition_service.update_status(
        db, condition_id=condition_id, status=ConditionStatus.CANCELLED
    )
    return {"status": "cancelled", "id": condition_id}
```
🧰 Tools
🪛 Ruff (0.14.6)
87-87: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
88-88: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In backend/app/api/endpoints/completion_conditions.py around lines 84 to 108,
the code does a local import of ConditionStatus inside the
cancel_completion_condition function; move "from app.models.completion_condition
import ConditionStatus" to the module-level imports at the top of the file
alongside the other imports and remove the in-function import so the function
uses the top-level name, keeping import order consistent with the file's import
grouping.
```python
    # Convert conditions to schema objects
    from app.schemas.completion_condition import CompletionConditionInDB

    conditions_in_db = [
        CompletionConditionInDB.model_validate(c) for c in status["conditions"]
    ]
```
🛠️ Refactor suggestion | 🟠 Major
Move import to module level.
Same issue as above—the CompletionConditionInDB import on line 126 is already imported at the top of the file (line 18), making this local import redundant.
```diff
-    # Convert conditions to schema objects
-    from app.schemas.completion_condition import CompletionConditionInDB
-
     conditions_in_db = [
         CompletionConditionInDB.model_validate(c) for c in status["conditions"]
     ]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```python
    conditions_in_db = [
        CompletionConditionInDB.model_validate(c) for c in status["conditions"]
    ]
```
🤖 Prompt for AI Agents
In backend/app/api/endpoints/completion_conditions.py around lines 125 to 130,
there is a redundant local import of CompletionConditionInDB that is already
imported at the top of the module; remove the local import statement and rely on
the existing top-level import so the list comprehension uses the module-level
CompletionConditionInDB, then run linters/tests to ensure no unused-import
warnings or name errors remain.
```python
    return TaskCompletionStatus(
        task_id=task_id,
        subtask_completed=True,  # This would need to be checked from subtask status
        all_conditions_satisfied=status["all_conditions_satisfied"],
        pending_conditions=status["pending_conditions"],
        in_progress_conditions=status["in_progress_conditions"],
        satisfied_conditions=status["satisfied_conditions"],
        failed_conditions=status["failed_conditions"],
        conditions=conditions_in_db,
```
Hardcoded subtask_completed=True may return incorrect status.
The comment acknowledges this needs to be checked from subtask status. Returning a hardcoded True could mislead API consumers into thinking a task is further along than it actually is.
Would you like me to generate an implementation that fetches the actual subtask completion status, or should I open an issue to track this as a follow-up?
🤖 Prompt for AI Agents
In backend/app/api/endpoints/completion_conditions.py around lines 132-140, the
code returns subtask_completed=True hardcoded; replace that with a real
computation that checks subtask status. Query the DB (or use the existing status
payload) to fetch subtask records for the given task_id, determine whether the
relevant subtask(s) are completed (e.g., all subtasks have status "COMPLETED" or
the specific subtask has status "COMPLETED"), set subtask_completed accordingly,
await the DB call if async, and default to False when no subtask records exist
or on error; update the returned TaskCompletionStatus to use this computed value
instead of the literal True.
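One possible shape for that follow-up, shown purely as a sketch: derive `subtask_completed` from the statuses of the task's subtasks instead of hardcoding it. The `COMPLETED` status value and the `fetch_subtask_statuses` callable below are assumptions standing in for the project's actual model and service layer.

```python
from typing import Callable, Iterable

# Hypothetical terminal status; the real project may use a different enum or value.
COMPLETED = "COMPLETED"


def compute_subtask_completed(
    task_id: int,
    fetch_subtask_statuses: Callable[[int], Iterable[str]],
) -> bool:
    """True only when the task has subtasks and every one of them is completed."""
    statuses = list(fetch_subtask_statuses(task_id))
    if not statuses:
        # Mirror the reviewer's suggestion: default to False when nothing exists.
        return False
    return all(status == COMPLETED for status in statuses)


if __name__ == "__main__":
    print(compute_subtask_completed(1, lambda _tid: ["COMPLETED", "COMPLETED"]))  # True
    print(compute_subtask_completed(2, lambda _tid: []))  # False
```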
```python
    # Verify token if configured
    if settings.GITLAB_WEBHOOK_TOKEN:
        if x_gitlab_token != settings.GITLAB_WEBHOOK_TOKEN:
            logger.warning("GitLab webhook token verification failed")
            raise HTTPException(status_code=401, detail="Invalid token")
```
Use constant-time comparison for token verification.
Simple string comparison (!=) is vulnerable to timing attacks. Use hmac.compare_digest for secure token verification.
```diff
+import hmac
+
     # Verify token if configured
     if settings.GITLAB_WEBHOOK_TOKEN:
-        if x_gitlab_token != settings.GITLAB_WEBHOOK_TOKEN:
+        if not hmac.compare_digest(x_gitlab_token or "", settings.GITLAB_WEBHOOK_TOKEN):
             logger.warning("GitLab webhook token verification failed")
             raise HTTPException(status_code=401, detail="Invalid token")
```

🤖 Prompt for AI Agents
In backend/app/api/endpoints/webhooks/gitlab.py around lines 34 to 38, the
webhook token is compared using a plain != string comparison which is vulnerable
to timing attacks; replace that comparison with hmac.compare_digest to perform a
constant-time comparison (import hmac at top), ensure both operands are the same
type (both str or both bytes) before calling compare_digest, and use the result
to raise the HTTPException if the comparison fails.
```python
        fix_subtask_data = SubtaskCreate(
            task_id=condition.task_id,
            team_id=original_subtask.team_id,
            title=f"Auto-fix CI failure (attempt {condition.retry_count})",
            bot_ids=original_subtask.bot_ids,
            prompt=fix_prompt,
            parent_id=original_subtask.id,
            message_id=original_subtask.message_id + 1,
        )

        fix_subtask = subtask_service.create_subtask(
            db, obj_in=fix_subtask_data, user_id=condition.user_id
        )
```
Potential TypeError if message_id is None.
If original_subtask.message_id is None, the + 1 operation will raise a TypeError. Add a fallback.
```diff
         fix_subtask_data = SubtaskCreate(
             task_id=condition.task_id,
             team_id=original_subtask.team_id,
             title=f"Auto-fix CI failure (attempt {condition.retry_count})",
             bot_ids=original_subtask.bot_ids,
             prompt=fix_prompt,
             parent_id=original_subtask.id,
-            message_id=original_subtask.message_id + 1,
+            message_id=(original_subtask.message_id or 0) + 1,
         )
```

🤖 Prompt for AI Agents
In backend/app/services/ci_monitor.py around lines 261 to 273, the code does
original_subtask.message_id + 1 which will raise a TypeError if message_id is
None; change the expression to use a safe fallback (e.g.
(original_subtask.message_id or 0) + 1 or int(original_subtask.message_id or 0)
+ 1) so message_id defaults to 0 when None before incrementing, then pass that
computed value into SubtaskCreate.
```python
        # Get task info
        task = task_kinds_service.get_task_by_id(db, task_id=task_id)
        if not task:
            return
```
Missing required user_id argument in get_task_by_id call.
Based on backend/app/services/adapters/task_kinds.py, get_task_by_id requires a user_id parameter. This call will raise a TypeError at runtime.
```diff
-        # Get task info
-        task = task_kinds_service.get_task_by_id(db, task_id=task_id)
-        if not task:
-            return
+        # Get task info - need to get conditions first to find user_id
+        conditions = completion_condition_service.get_by_task_id(db, task_id=task_id)
+        if not conditions:
+            return
+        user_id = conditions[0].user_id
+
+        task = task_kinds_service.get_task_by_id(db, task_id=task_id, user_id=user_id)
+        if not task:
+            return
```

Committable suggestion skipped: line range outside the PR's diff.
```python
        if status == ConditionStatus.SATISFIED:
            condition.satisfied_at = datetime.utcnow()
```
🧩 Analysis chain
🌐 Web query:
Python datetime.utcnow() deprecated 3.12
💡 Result:
Yes — in Python 3.12 datetime.datetime.utcnow() (and utcfromtimestamp()) were deprecated. The docs say to use timezone-aware datetimes instead: datetime.datetime.now(tz=datetime.UTC) (and datetime.datetime.fromtimestamp(ts, tz=datetime.UTC)). [1]
If you must support older Python (pre-3.11), use datetime.timezone.utc as the tz arg:

- Python ≥3.11 (preferred):
  - datetime.datetime.now(datetime.UTC)
  - datetime.datetime.fromtimestamp(ts, datetime.UTC)
- Older versions:
  - datetime.datetime.now(datetime.timezone.utc)
  - datetime.datetime.fromtimestamp(ts, datetime.timezone.utc)
Examples:
- datetime.datetime.now(datetime.UTC) # Python 3.11+
- datetime.datetime.now(datetime.timezone.utc) # works on older Pythons
References:
[1] Python 3.12 “What’s New” (datetime deprecations).
[2] Discussion/issue notes showing replacement and compatibility (datetime.UTC added in 3.11).
🏁 Script executed:

```bash
#!/bin/bash
# Check if the file exists and examine the relevant lines
cat -n backend/app/services/completion_condition.py | sed -n '150,160p'
```

Repository: wecode-ai/Wegent

Length of output: 503

🏁 Script executed:

```bash
#!/bin/bash
# Check for Python version requirements in the project
find . -maxdepth 2 -type f \( -name "pyproject.toml" -o -name "setup.py" -o -name "requirements.txt" -o -name ".python-version" \) | head -20
```

Repository: wecode-ai/Wegent

Length of output: 261

🏁 Script executed:

```bash
#!/bin/bash
# Check Python version requirements in backend
cat backend/pyproject.toml | grep -A5 -B5 "python"
```

Repository: wecode-ai/Wegent

Length of output: 146

🏁 Script executed:

```bash
#!/bin/bash
# Check the current imports in the file
head -30 backend/app/services/completion_condition.py
```

Repository: wecode-ai/Wegent

Length of output: 736

🏁 Script executed:

```bash
#!/bin/bash
# Check Python version from requirements files
cat backend/requirements.txt | grep -i "python"
```

Repository: wecode-ai/Wegent

Length of output: 167

🏁 Script executed:

```bash
#!/bin/bash
# Try different approach to read pyproject.toml - check just the first 50 lines
head -50 backend/pyproject.toml
```

Repository: wecode-ai/Wegent

Length of output: 585
Replace deprecated datetime.utcnow() with timezone-aware alternative for Python 3.12+ compatibility.
The project targets Python 3.10 (target-version = ['py310'] in pyproject.toml), and datetime.utcnow() is deprecated in Python 3.12+. Replace with datetime.now(timezone.utc) for forward compatibility:
```diff
-from datetime import datetime
+from datetime import datetime, timezone

         if status == ConditionStatus.SATISFIED:
-            condition.satisfied_at = datetime.utcnow()
+            condition.satisfied_at = datetime.now(timezone.utc)
```

📝 Committable suggestion
+ condition.satisfied_at = datetime.now(timezone.utc)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```python
        if status == ConditionStatus.SATISFIED:
            condition.satisfied_at = datetime.now(timezone.utc)
```
🤖 Prompt for AI Agents
In backend/app/services/completion_condition.py around lines 154 to 155, replace
the deprecated use of datetime.utcnow() with a timezone-aware call: use
datetime.now(timezone.utc) and ensure timezone is imported from datetime (or
import datetime as dt and use dt.timezone.utc) so the assigned satisfied_at is
UTC-aware and compatible with Python 3.12+.
Summary
Changes
New Files
- backend/app/models/completion_condition.py - CompletionCondition model with status tracking
- backend/app/schemas/completion_condition.py - Pydantic schemas for API requests/responses
- backend/app/services/completion_condition.py - CRUD operations for completion conditions
- backend/app/services/ci_monitor.py - CI event handling and auto-fix logic
- backend/app/api/endpoints/webhooks/github.py - GitHub webhook endpoint
- backend/app/api/endpoints/webhooks/gitlab.py - GitLab webhook endpoint
- backend/app/api/endpoints/completion_conditions.py - REST API for completion conditions
- backend/alembic/versions/2b3c4d5e6f7g_add_completion_conditions_table.py - Database migration

Modified Files

- backend/app/api/api.py - Register new routes
- backend/app/api/dependencies.py - Add get_db_context helper
- backend/app/core/config.py - Add CI monitoring configuration
- backend/app/models/__init__.py - Export CompletionCondition model

Configuration
New environment variables:
- CI_MONITOR_ENABLED (default: true) - Enable/disable CI monitoring
- CI_MAX_RETRIES (default: 5) - Maximum auto-fix retry attempts
- CI_CHECK_TYPES (default: "test,lint,build") - CI check types to monitor
- GITHUB_WEBHOOK_SECRET - GitHub webhook signature verification
- GITLAB_WEBHOOK_TOKEN - GitLab webhook token verification

Test plan
Summary by CodeRabbit
Release Notes
New Features
Chores