# Added jira integration #8
base: main
## Conversation
### Walkthrough

This update introduces comprehensive Jira Cloud integration to the CLI tool: new modules for fetching, modeling, and displaying Jira metrics, configuration enhancements, documentation, and a test script. The changes enable users to configure Jira credentials, retrieve and analyze Jira issue metrics, and view them in a styled CLI dashboard, with full documentation and test coverage.
### Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Config
    participant JiraAPI
    participant Metrics
    participant Display
    User->>CLI: Run config command
    CLI->>Config: Prompt for Jira domain, email, API key
    Config->>JiraAPI: Test connection
    JiraAPI-->>Config: Success/Failure
    Config-->>CLI: Save credentials if successful
    User->>CLI: Run review command
    CLI->>Config: Retrieve Jira credentials
    CLI->>JiraAPI: Fetch issues and metrics
    JiraAPI-->>Metrics: Return issue data
    Metrics-->>CLI: Aggregate metrics
    CLI->>Display: Render Jira metrics dashboard
    Display-->>User: Show styled metrics output
```
### PR Feedback
```python
response = requests.get(search_url, headers=headers, params=params, timeout=30)
```
Consider adding a retry mechanism for API requests to handle transient network issues or rate limiting from Jira's API. This can improve the robustness of the metrics collection process. [important]
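For example, a `requests.Session` with urllib3's `Retry` can back off and retry on 429s and transient 5xx responses. A minimal sketch (the retry counts and backoff factor are illustrative; `search_url`, `headers`, and `params` come from the surrounding code):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=3,                                     # up to 3 retries
    backoff_factor=1,                            # sleep 1s, 2s, 4s between attempts
    status_forcelist=[429, 500, 502, 503, 504],  # Jira rate limiting and transient errors
    allowed_methods=["GET"],
)
session.mount("https://", HTTPAdapter(max_retries=retries))

# search_url, headers, and params as built in get_jira_metrics()
response = session.get(search_url, headers=headers, params=params, timeout=30)
```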
```python
# Create authentication header
auth_string = f"{email}:{api_key}"
```
Use a more secure method for handling API keys, such as environment variables or a secure vault, to avoid storing sensitive information in plain text. [important]
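A minimal sketch of reading the key from the environment instead of plain-text config (the `JIRA_API_KEY` variable name is an assumption for illustration):

```python
import os

# Read the Jira API key from the environment rather than plain-text config
api_key = os.environ.get("JIRA_API_KEY")
if not api_key:
    raise RuntimeError("JIRA_API_KEY is not set")
```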
```python
# Sort priorities by count in descending order
sorted_priorities = sorted(priority_counts.items(), key=lambda x: x[1], reverse=True)
```
Consider using a more efficient data structure for storing and accessing metrics, such as a named tuple or a custom class, to improve readability and performance when displaying metrics. [medium]
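A minimal sketch of that idea using a `NamedTuple` so fields are accessed by name rather than tuple index (the `PriorityStat` name and sample data are hypothetical):

```python
from typing import NamedTuple

class PriorityStat(NamedTuple):
    name: str
    count: int

priority_counts = {"High": 12, "Medium": 30, "Low": 7}  # sample data
sorted_priorities = [
    PriorityStat(name, count)
    for name, count in sorted(priority_counts.items(), key=lambda x: x[1], reverse=True)
]
for stat in sorted_priorities:
    print(f"{stat.name}: {stat.count}")  # fields accessed by name, not index
```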
```python
logger.error(f"Jira API error: {response.status_code} - {response.text}")
return None

data = response.json()
```
Add exception handling for JSON decoding errors when processing API responses to prevent crashes due to unexpected data formats. [medium]
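A minimal sketch of such a guard around the decode step, reusing `response` and `logger` from the surrounding code; `requests` raises a `ValueError` subclass (`requests.exceptions.JSONDecodeError`) when the body is not valid JSON:

```python
try:
    data = response.json()
except ValueError as e:  # covers requests' JSONDecodeError on malformed bodies
    logger.error(f"Failed to decode Jira API response: {e}")
    return None
```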
### Pull Request Overview
This PR adds Jira Cloud integration to the Wellcode CLI, covering metrics collection, display, configuration, documentation, and a sample test script.
- Introduces `jira_metrics.py` and `jira_display.py` for Jira data retrieval and Rich-based reporting
- Extends CLI commands (`config` and `review`) to support Jira configuration and inclusion in the review workflow
- Adds documentation (`JIRA_INTEGRATION.md`, README updates) and an example integration test script
### Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test_jira_integration.py | Example script for manual Jira integration testing |
| src/wellcode_cli/jira/jira_metrics.py | Implements Jira API calls, paging, and metrics models |
| src/wellcode_cli/jira/jira_display.py | Renders Jira metrics using Rich panels |
| src/wellcode_cli/jira/models/__init__.py | Initializes Jira models package |
| src/wellcode_cli/jira/__init__.py | Initializes Jira integration package |
| src/wellcode_cli/github/github_format_ai.py | Includes Jira metrics in AI analysis summary |
| src/wellcode_cli/config.py | Adds getters for Jira API key, domain, and email |
| src/wellcode_cli/commands/review.py | Integrates Jira into the review command workflow |
| src/wellcode_cli/commands/config.py | Adds interactive Jira configuration handlers |
| README.md | Updates Optional Integrations list to include Jira |
| JIRA_INTEGRATION.md | New documentation on setting up and using Jira integration |
Comments suppressed due to low confidence (2)
test_jira_integration.py:17

- This script uses print statements instead of assertions and isn't integrated with a test runner. Consider converting these to pytest/unit tests with proper assertions and placing them in a dedicated tests/ directory for automated coverage.

```python
def test_jira_integration():
```
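A minimal pytest sketch of that conversion; the sample issue payload and assertions are illustrative, and it assumes `JiraOrgMetrics` default-constructs its `issues` sub-metrics, as the PR's own usage implies:

```python
from wellcode_cli.jira.models.metrics import JiraOrgMetrics

def test_org_metrics_completion_counts():
    # JiraOrgMetrics(name=...) mirrors how get_jira_metrics constructs it
    metrics = JiraOrgMetrics(name="example")
    sample_issue = {
        "fields": {
            "issuetype": {"name": "Bug"},
            "status": {"name": "Done", "statusCategory": {"key": "done"}},
        }
    }
    metrics.issues.update_from_issue(sample_issue)
    assert metrics.issues.total_created == 1
    assert metrics.issues.bugs_created == 1
    assert metrics.issues.total_completed == 1
```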
src/wellcode_cli/jira/jira_display.py:173

- The panel title styling tag is missing a closing `[/]`, which may break the intended Rich formatting. Update the title to `"[bold cyan]Priority Distribution[/]"`.

```python
Panel("\n".join(priority_lines), title="[bold cyan]Priority Distribution", box=ROUNDED
```
```python
jql_query += f" AND assignee = '{user_filter}'"

# Get all issues with pagination
all_issues = []
```
**Copilot AI** commented on May 23, 2025:
Collecting all issues into memory before processing may lead to high memory usage for large datasets. Consider processing issues page-by-page (e.g., updating metrics per page) to reduce memory footprint.
```python
all_issues = []
```
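A minimal sketch of that page-by-page approach; `fetch_page` is a hypothetical wrapper around the paginated `/search` call that returns one page of issues (possibly empty):

```python
def iter_issues(fetch_page, page_size=100):
    start_at = 0
    while True:
        issues = fetch_page(start_at=start_at, max_results=page_size)
        if not issues:
            break
        yield from issues          # hand issues to the caller one at a time
        if len(issues) < page_size:
            break                  # short page means we reached the end
        start_at += page_size

# Metrics can then be updated per issue without holding every page in memory:
# for issue in iter_issues(fetch_page):
#     org_metrics.issues.update_from_issue(issue)
```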
```python
# Estimate actual work time as 25% of total time (accounting for weekends, etc.)
estimated_work_hours = total_hours * 0.25

return max(0.5, min(estimated_work_hours, 40))  # Cap between 0.5 and 40 hours
```
**Copilot AI** commented on May 23, 2025:
[nitpick] The use of magic numbers (0.25, 0.5, 40) for time estimation is not self-explanatory. Extract these values into named constants or configuration parameters to improve clarity and maintainability.
Suggested change:

```diff
-        # Estimate actual work time as 25% of total time (accounting for weekends, etc.)
-        estimated_work_hours = total_hours * 0.25
-        return max(0.5, min(estimated_work_hours, 40))  # Cap between 0.5 and 40 hours
+        # Estimate actual work time using the defined factor (accounting for weekends, etc.)
+        estimated_work_hours = total_hours * WORK_HOURS_ESTIMATION_FACTOR
+        return max(MIN_WORK_HOURS, min(estimated_work_hours, MAX_WORK_HOURS))  # Cap between min and max hours
```
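A minimal sketch of the module-level constants this suggestion assumes; the names and values simply mirror the original magic numbers:

```python
WORK_HOURS_ESTIMATION_FACTOR = 0.25  # fraction of elapsed time treated as actual work
MIN_WORK_HOURS = 0.5                 # floor for a single issue, in hours
MAX_WORK_HOURS = 40                  # cap for a single issue, in hours
```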
| console.print("[yellow]⚠️ Linear integration not configured[/]") | ||
|
|
||
| # Jira metrics | ||
| if get_jira_api_key(): |
**Copilot AI** commented on May 23, 2025:
The check only verifies the presence of the API key but not the Jira domain or email. Consider validating all required configuration values (domain, email, and API key) before attempting to fetch metrics.
Suggested change:

```diff
-    if get_jira_api_key():
+    if is_jira_configured():
```
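A minimal sketch of the `is_jira_configured` helper this suggestion assumes, built on the getters this PR adds to `src/wellcode_cli/config.py`:

```python
from wellcode_cli.config import get_jira_api_key, get_jira_domain, get_jira_email

def is_jira_configured() -> bool:
    """True only when the Jira domain, email, and API key are all set."""
    return all([get_jira_domain(), get_jira_email(), get_jira_api_key()])
```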
```markdown
### Optional Integrations
- **Linear**: Issue tracking metrics
- **Jira Cloud**: Issue tracking metrics (alternative to Linear)
```
**Copilot AI** commented on May 23, 2025:
The README mentions Jira Cloud integration but lacks a link to the detailed JIRA_INTEGRATION.md guide. Add a reference or hyperlink under Optional Integrations or Documentation for users to access the full instructions.
Suggested change:

```diff
-- **Jira Cloud**: Issue tracking metrics (alternative to Linear)
+- **Jira Cloud**: Issue tracking metrics (alternative to Linear). See the [Jira Cloud Integration Guide](JIRA_INTEGRATION.md) for setup instructions.
```
Actionable comments posted: 10
🔭 Outside diff range comments (1)
src/wellcode_cli/commands/config.py (1)
1-264: ⚠️ Potential issue: Fix Black formatting issues.
The pipeline check indicates that this file needs Black formatting.
Run the following command to fix the formatting:
```bash
black src/wellcode_cli/commands/config.py
```

🧰 Tools
🪛 GitHub Actions: Build and Quality Check
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🧹 Nitpick comments (5)
test_jira_integration.py (1)
63-63: Remove unused import. The `IssueMetrics` class is imported but never used in the test script.

```diff
-from wellcode_cli.jira.models.metrics import JiraOrgMetrics, IssueMetrics, ProjectMetrics
+from wellcode_cli.jira.models.metrics import JiraOrgMetrics, ProjectMetrics
```

🧰 Tools
🪛 Ruff (0.11.9)
63-63: `wellcode_cli.jira.models.metrics.IssueMetrics` imported but unused

Remove unused import: `wellcode_cli.jira.models.metrics.IssueMetrics` (F401)
JIRA_INTEGRATION.md (1)
107-107: Remove duplicate word "Project". The word "Project" appears twice in succession.

Apply this diff:

```diff
-- **Assignee Involvement**: Number of people working on each project
-- **Project Lead**: Project lead information
+- **Assignee Involvement**: Number of people working on each project
+- **Project Lead**: Lead information
```

🧰 Tools
🪛 LanguageTool
[duplication] ~107-~107: Possible typo: you repeated a word.
Context: "...ent**: Number of people working on each project - Project Lead: Project lead information - **Pr..." (ENGLISH_WORD_REPEAT_RULE)
src/wellcode_cli/jira/jira_metrics.py (2)
211-214: Consider making the work time estimation factor configurable. The 25% factor for estimating actual work time from total elapsed time appears arbitrary. Different teams may have different work patterns.

Consider making this estimation factor configurable, or at least document the rationale:

```diff
-        # Estimate actual work time as 25% of total time (accounting for weekends, etc.)
-        estimated_work_hours = total_hours * 0.25
+        # Estimate actual work time as a fraction of total time (accounting for weekends, meetings, etc.)
+        work_time_factor = 0.25  # TODO: Make this configurable based on team patterns
+        estimated_work_hours = total_hours * work_time_factor
```
232-243: Make business hours configurable. The function assumes a 9-5 workday, which may not be appropriate for all organizations or regions.

Consider making the business hours configurable:

```python
# At the module level or from config
BUSINESS_START_HOUR = 9   # Make configurable
BUSINESS_END_HOUR = 17    # Make configurable

# Then use in the function:
day_end = min(
    current_date.replace(hour=BUSINESS_END_HOUR, minute=0, second=0, microsecond=0),
    end_date,
)
day_start = max(
    current_date.replace(hour=BUSINESS_START_HOUR, minute=0, second=0, microsecond=0),
    start_date,
)
```

src/wellcode_cli/jira/models/metrics.py (1)
25-28: Consider more specific error handling in the JSON encoder. The broad exception handling might mask serialization issues during development.

Consider logging when falling back to string representation:

```diff
         try:
             return super().default(obj)
-        except Exception:
+        except Exception as e:
+            # Log unexpected serialization issues for debugging
+            import logging
+            logging.debug(f"Failed to serialize {type(obj)}: {e}")
             return str(obj)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- JIRA_INTEGRATION.md (1 hunks)
- README.md (3 hunks)
- requirements.txt (1 hunks)
- src/wellcode_cli/commands/config.py (4 hunks)
- src/wellcode_cli/commands/review.py (3 hunks)
- src/wellcode_cli/config.py (1 hunks)
- src/wellcode_cli/github/github_format_ai.py (1 hunks)
- src/wellcode_cli/jira/__init__.py (1 hunks)
- src/wellcode_cli/jira/jira_display.py (1 hunks)
- src/wellcode_cli/jira/jira_metrics.py (1 hunks)
- src/wellcode_cli/jira/models/__init__.py (1 hunks)
- src/wellcode_cli/jira/models/metrics.py (1 hunks)
- test_jira_integration.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/wellcode_cli/commands/config.py (1)
src/wellcode_cli/jira/jira_metrics.py (1)
test_jira_connection(278-303)
src/wellcode_cli/jira/jira_display.py (1)
src/wellcode_cli/jira/models/metrics.py (5)
get_stats (90-120), get_stats (207-240), get_stats (300-346), get_stats (424-445), get_stats (485-496)
🪛 GitHub Actions: Build and Quality Check
src/wellcode_cli/jira/models/__init__.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
src/wellcode_cli/jira/__init__.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
src/wellcode_cli/commands/config.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
src/wellcode_cli/jira/jira_display.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
src/wellcode_cli/jira/jira_metrics.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
src/wellcode_cli/jira/models/metrics.py
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🪛 Ruff (0.11.9)
test_jira_integration.py
63-63: wellcode_cli.jira.models.metrics.IssueMetrics imported but unused
Remove unused import: wellcode_cli.jira.models.metrics.IssueMetrics
(F401)
src/wellcode_cli/jira/models/metrics.py
277-277: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
🪛 LanguageTool
JIRA_INTEGRATION.md
[uncategorized] ~76-~76: Loose punctuation mark.
Context: ..." ``` ### Filtering Options - --user: Filter by assignee (use email address o...
(UNLIKELY_OPENING_PUNCTUATION)
[duplication] ~107-~107: Possible typo: you repeated a word.
Context: ...ent**: Number of people working on each project - Project Lead: Project lead information - **Pr...
(ENGLISH_WORD_REPEAT_RULE)
🔇 Additional comments (11)
src/wellcode_cli/config.py (1)
51-60: LGTM! Consistent implementation following established patterns. The three new Jira configuration functions follow the exact same pattern as existing configuration getters, maintaining consistency in:
- Function naming conventions
- Return type annotations
- Implementation approach
- Code formatting
README.md (1)
11-11: LGTM! Clear and consistent documentation updates. The documentation properly positions Jira Cloud as an optional integration alongside existing tools, and correctly identifies it as an alternative to Linear for issue tracking. The updates are well-integrated into the existing documentation structure.
Also applies to: 30-30, 92-92
src/wellcode_cli/github/github_format_ai.py (1)
104-106: LGTM! Consistent integration with the existing AI analysis flow. The addition of Jira metrics to the AI analysis follows the same pattern as other metric sources (GitHub, Linear, Split), ensuring comprehensive analysis across all configured integrations.
src/wellcode_cli/commands/review.py (2)
13-13: LGTM! Proper imports for Jira integration. The imports are correctly added to support Jira configuration checking, metrics fetching, and display functionality.
Also applies to: 22-23
127-137: LGTM! Well-structured Jira metrics integration. The Jira metrics fetching follows the established pattern used by other integrations (Linear, Split):
- Configuration check with appropriate warning if not configured
- Status updates during fetching
- Error handling with user-friendly messages
- Conditional metrics display only on successful retrieval
The implementation is consistent and provides good user experience.
test_jira_integration.py (2)
17-59: LGTM! Comprehensive connection and metrics testing. The test properly validates the integration structure with fake credentials, which is appropriate for a demo script. The error handling gracefully manages expected failures while still testing the function interfaces.
60-123: LGTM! Thorough data model and display testing. The test script excellently validates:
- Data model instantiation and updates
- Sample issue processing
- Project metrics tracking
- Component and version counting
- Display functionality
The sample Jira issue data structure accurately reflects the Jira API response format, making this a valuable integration test.
🧰 Tools
🪛 Ruff (0.11.9)
63-63:
`wellcode_cli.jira.models.metrics.IssueMetrics` imported but unused

Remove unused import: `wellcode_cli.jira.models.metrics.IssueMetrics` (F401)
src/wellcode_cli/commands/config.py (1)
202-263: Well-structured Jira configuration implementation! The implementation correctly:
- Handles all three required Jira fields (domain, email, API key)
- Tests the connection before saving
- Provides clear user prompts and error messages
- Handles reconfiguration and clearing of existing settings
JIRA_INTEGRATION.md (1)
1-223: Comprehensive and well-structured documentation! The documentation provides excellent coverage of:
- Clear setup instructions with step-by-step guidance
- Detailed metrics explanations
- Troubleshooting guidance
- Security considerations
- Advanced configuration options
🧰 Tools
🪛 LanguageTool
[uncategorized] ~76-~76: Loose punctuation mark.
Context: ..." ``` ### Filtering Options ---user: Filter by assignee (use email address o...(UNLIKELY_OPENING_PUNCTUATION)
[duplication] ~107-~107: Possible typo: you repeated a word.
Context: "...ent**: Number of people working on each project - Project Lead: Project lead information - **Pr..." (ENGLISH_WORD_REPEAT_RULE)
src/wellcode_cli/jira/jira_display.py (2)
11-20: Clean time formatting implementation! The `format_time` function provides excellent human-readable time formatting with appropriate unit selection (minutes, hours, days).
22-136: Excellent visual metrics dashboard! The main display function provides:
- Clear health indicators (🟢🟡🔴) for quick status assessment
- Well-organized panels for different metric categories
- Proper handling of empty data scenarios
- Good use of Rich styling for visual hierarchy
```text
plotly
markdown
cryptography>=43.0.1
requests
```
💡 Verification agent
🧩 Analysis chain
Pin the requests dependency version for security and stability.
Adding requests without version constraints poses security and stability risks. Most other dependencies in this file are properly pinned (e.g., setuptools==75.6.0, cryptography>=43.0.1).
Apply this diff to pin to a recent stable version:
```diff
-requests
+requests>=2.31.0,<3.0.0
```

Let me verify the latest stable version of requests:
🌐 Web query:
What is the latest stable version of requests library and any recent security advisories?
💡 Result:
Latest Stable Version of Requests
The latest stable version of the Python Requests library is 2.32.3[4]. This version is compatible with Python 3.8 and above, as officially supported[1][5].
Recent Security Advisories
2024–2025 Security Advisories
- CVE-2024-35195: A recent moderate severity vulnerability in Requests was addressed where "subsequent requests to the same host ignore cert verification", potentially exposing connections to man-in-the-middle attacks if certificate verification is bypassed. This issue affected some distributions (e.g., Red Hat Enterprise Linux 8) and has been patched in recent updates[6].
Significant Past Vulnerabilities
- CVE-2023-32681: Requests versions 2.3.0 to 2.30.0 had a flaw that could leak Proxy-Authorization headers to destination servers when following HTTPS redirects. This exposed proxy credentials if proxies were configured with authentication. The issue was patched in version 2.31.0, and users relying on proxies with credentials were strongly advised to upgrade and rotate credentials[7][9].
No additional Requests-specific vulnerabilities have been reported in 2025 as of this writing[9].
Recommendations
- Upgrade to the latest version, 2.32.3, to benefit from security and bug fixes.
- If you use authenticated proxies, ensure you’re running at least version 2.31.0 to avoid credential leaks[7][9].
- Check your operating system’s package repositories for patched versions if installing via system packages (e.g., RHEL)[6].
Summary Table
| Version | Release Date | Key Security Patches |
|---|---|---|
| 2.32.3 | 2025 | Latest stable, moderate cert verification fix[4][6] |
| 2.31.0+ | 2023-05 | Proxy-Authorization leak fixed (CVE-2023-32681)[7][9] |
Stay up to date with the latest releases to ensure security and stability.
Citations:
- 1: https://pypi.org/project/requests/
- 2: https://iproyal.com/blog/python-requests-library/
- 3: https://www.activestate.com/resources/quick-reads/how-to-pip-install-requests-python-package/
- 4: https://www.youtube.com/watch?v=VHRyoFk_CNg
- 5: https://github.com/psf/requests
- 6: https://access.redhat.com/errata/RHSA-2025:0012
- 7: https://requests.readthedocs.io/en/latest/community/updates/
- 8: https://scrapfly.io/blog/how-to-install-requests-python/
- 9: https://stack.watch/product/python/requests/
- 10: https://devguide.python.org/versions/
Pin requests to the latest secure version (≥2.32.3,<3.0.0)
Unpinned dependencies expose you to security risks and breaking changes. The Requests library has recent advisories—CVE-2024-35195 (cert-verification bypass) and CVE-2023-32681 (proxy-auth header leak)—that are fixed in 2.32.3. We should pin Requests accordingly.
• File: requirements.txt, line 13
• Change from unpinned to a range that includes the latest patch but excludes a potential v3 release
```diff
-requests
+requests>=2.32.3,<3.0.0
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-requests
+requests>=2.32.3,<3.0.0
```
🤖 Prompt for AI Agents
In requirements.txt at line 13, the requests dependency is currently unpinned,
which poses security and stability risks. Update the line to pin requests to a
secure and stable version range by specifying "requests>=2.32.3,<3.0.0" to
include the latest patched version 2.32.3 and exclude any potential breaking
changes from a major version 3 release.
```diff
@@ -0,0 +1 @@
+# Jira models package
\ No newline at end of file
```
Fix Black formatting issue.
The pipeline indicates a Black formatting failure. The comment appears to have trailing whitespace.
Apply this diff to fix the formatting:
```diff
-# Jira models package
+# Jira models package
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 GitHub Actions: Build and Quality Check
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🤖 Prompt for AI Agents
In src/wellcode_cli/jira/models/__init__.py at line 1, remove any trailing
whitespace after the comment to fix the Black formatting issue. Ensure the line
ends immediately after the comment text with no extra spaces or tabs.
```diff
@@ -0,0 +1 @@
+# Jira Cloud integration package
\ No newline at end of file
```
Fix Black formatting issue.
The pipeline indicates a Black formatting failure. The comment appears to have trailing whitespace.
Apply this diff to fix the formatting:
```diff
-# Jira Cloud integration package
+# Jira Cloud integration package
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 GitHub Actions: Build and Quality Check
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🤖 Prompt for AI Agents
In src/wellcode_cli/jira/__init__.py at line 1, remove any trailing whitespace
after the comment "# Jira Cloud integration package" to fix the Black formatting
issue reported by the pipeline.
```python
# Optional integrations with secret masking
optional_configs = {
    "Linear": ("LINEAR_API_KEY", "Enter your Linear API key"),
    "Jira": ("JIRA_API_KEY", "Enter your Jira API key"),
```
🛠️ Refactor suggestion
Update the dictionary entry to reflect Jira's multi-field requirement.
The entry for Jira in optional_configs is misleading as it shows "JIRA_API_KEY" as the key, but the actual implementation requires three fields (JIRA_DOMAIN, JIRA_EMAIL, JIRA_API_KEY). Consider updating this to better reflect the actual requirement.
Apply this diff to clarify the Jira configuration entry:
- "Jira": ("JIRA_API_KEY", "Enter your Jira API key"),
+ "Jira": ("JIRA_CONFIG", "Configure Jira integration"),Alternatively, you could document this as a comment:
"Linear": ("LINEAR_API_KEY", "Enter your Linear API key"),
- "Jira": ("JIRA_API_KEY", "Enter your Jira API key"),
+ "Jira": ("JIRA_API_KEY", "Enter your Jira API key"), # Note: Jira requires 3 fields
"Split.io": ("SPLIT_API_KEY", "Enter your Split.io API key"),Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/wellcode_cli/commands/config.py at line 115, the Jira entry in the
optional_configs dictionary currently lists only "JIRA_API_KEY" as the key,
which is misleading since the implementation requires three fields:
"JIRA_DOMAIN", "JIRA_EMAIL", and "JIRA_API_KEY". Update this dictionary entry to
either list all three required keys or add a clarifying comment explaining the
multi-field requirement for Jira configuration to accurately reflect the actual
setup.
src/wellcode_cli/jira/jira_display.py:

```python
import statistics
from datetime import datetime, timezone

from rich.box import ROUNDED
from rich.console import Console
from rich.panel import Panel

console = Console()


def format_time(hours: float) -> str:
    """Format time in hours to a human-readable string"""
    if hours < 1:
        return f"{hours * 60:.0f}m"
    elif hours < 24:
        return f"{hours:.1f}h"
    else:
        days = hours / 24
        return f"{days:.1f}d"


def display_jira_metrics(org_metrics):
    """Display Jira metrics with a modern UI using Rich components."""
    # Header with organization info and time range
    now = datetime.now(timezone.utc)
    console.print(
        Panel(
            "[bold cyan]Jira Engineering Analytics[/]\n"
            + f"[dim]Organization: {org_metrics.name}[/]\n"
            + f"[dim]Report Generated: {now.strftime('%Y-%m-%d %H:%M')} UTC[/]",
            box=ROUNDED,
            style="cyan",
        )
    )

    # 1. Core Issue Metrics with health indicators
    total_issues = org_metrics.issues.total_created
    completed_issues = org_metrics.issues.total_completed
    completion_rate = (completed_issues / total_issues * 100) if total_issues > 0 else 0

    health_indicator = (
        "🟢" if completion_rate > 80 else "🟡" if completion_rate > 60 else "🔴"
    )

    console.print(
        Panel(
            f"{health_indicator} [bold green]Issues Created:[/] {total_issues}\n"
            + f"[bold yellow]Issues Completed:[/] {completed_issues} ({completion_rate:.1f}% completion rate)\n"
            + f"[bold red]Bugs Created:[/] {org_metrics.issues.bugs_created}\n"
            + f"[bold blue]Stories Created:[/] {org_metrics.issues.stories_created}\n"
            + f"[bold magenta]Tasks Created:[/] {org_metrics.issues.tasks_created}\n"
            + f"[bold cyan]Epics Created:[/] {org_metrics.issues.epics_created}",
            title="[bold]Issue Flow",
            box=ROUNDED,
        )
    )

    # 2. Time Metrics with visual indicators
    cycle = org_metrics.cycle_time
    avg_cycle_time = statistics.mean(cycle.cycle_times) if cycle.cycle_times else 0
    cycle_health = (
        "🟢" if avg_cycle_time < 24 else "🟡" if avg_cycle_time < 72 else "🔴"
    )

    console.print(
        Panel(
            f"{cycle_health} [bold]Average Cycle Time:[/] {format_time(avg_cycle_time)}\n"
            + f"[bold]Median Cycle Time:[/] {format_time(statistics.median(cycle.cycle_times) if cycle.cycle_times else 0)}\n"
            + f"[bold]95th Percentile:[/] {format_time(cycle.get_stats()['p95_cycle_time'])}\n"
            + f"[bold]Average Resolution Time:[/] {format_time(statistics.mean(cycle.resolution_times) if cycle.resolution_times else 0)}",
            title="[bold blue]Time Metrics",
            box=ROUNDED,
        )
    )

    # 3. Estimation Accuracy
    est = org_metrics.estimation
    if est.total_estimated > 0:
        accuracy_rate = est.accurate_estimates / est.total_estimated * 100
        accuracy_health = (
            "🟢" if accuracy_rate > 80 else "🟡" if accuracy_rate > 60 else "🔴"
        )

        console.print(
            Panel(
                f"{accuracy_health} [bold]Estimation Accuracy:[/] {accuracy_rate:.1f}%\n"
                + f"[bold green]Accurate Estimates:[/] {est.accurate_estimates}\n"
                + f"[bold red]Underestimates:[/] {est.underestimates}\n"
                + f"[bold yellow]Overestimates:[/] {est.overestimates}\n"
                + f"[bold]Average Variance:[/] {statistics.mean(est.estimation_variance) if est.estimation_variance else 0:.1f}%",
                title="[bold yellow]Estimation Health",
                box=ROUNDED,
            )
        )

    # 4. Project Performance
    if org_metrics.projects:
        project_panels = []
        for project_key, project in org_metrics.projects.items():
            completion_rate = (
                (project.completed_issues / project.total_issues * 100)
                if project.total_issues > 0
                else 0
            )
            project_health = (
                "🟢" if completion_rate > 80 else "🟡" if completion_rate > 60 else "🔴"
            )

            project_panels.append(
                f"{project_health} [bold cyan]{project.name} ({project_key})[/]\n"
                + f"Issues: {project.total_issues} total, {project.completed_issues} completed ({completion_rate:.1f}%)\n"
                + f"Bugs: {project.bugs_count} | Stories: {project.stories_count} | Tasks: {project.tasks_count} | Epics: {project.epics_count}\n"
                + f"Assignees: {len(project.assignees_involved)}\n"
                + f"Lead: {project.lead or 'Not set'} | Type: {project.project_type or 'Unknown'}"
            )

        console.print(
            Panel(
                "\n\n".join(project_panels),
                title="[bold magenta]Project Health",
                box=ROUNDED,
            )
        )

    # 5. Priority Distribution
    if org_metrics.issues.by_priority:
        display_priority_distribution(org_metrics.issues.by_priority)

    # 6. Assignee Performance
    if org_metrics.issues.by_assignee:
        display_assignee_performance(org_metrics.issues.by_assignee, org_metrics.cycle_time.by_assignee)

    # 7. Component and Version Distribution
    if org_metrics.component_counts or org_metrics.version_counts:
        display_component_version_summary(org_metrics.component_counts, org_metrics.version_counts)


def display_priority_distribution(priority_counts):
    """Display a visual summary of issue priorities."""
    if not priority_counts:
        return

    # Sort priorities by count in descending order
    sorted_priorities = sorted(priority_counts.items(), key=lambda x: x[1], reverse=True)

    # Calculate the maximum count for scaling
    max_count = max(count for _, count in sorted_priorities)
    max_bar_length = 30  # Maximum length of the bar in characters

    # Create the priority summary
    priority_lines = []
    for priority, count in sorted_priorities:
        # Calculate bar length proportional to count
        bar_length = int((count / max_count) * max_bar_length)
        bar = "█" * bar_length

        # Choose color based on priority name
        color = (
            "red"
            if "highest" in priority.lower() or "critical" in priority.lower()
            else (
                "yellow"
                if "high" in priority.lower()
                else "blue" if "medium" in priority.lower() else "green"
            )
        )

        priority_lines.append(f"[{color}]{priority:<15}[/] {bar} ({count})")

    console.print(
        Panel(
            "\n".join(priority_lines), title="[bold cyan]Priority Distribution", box=ROUNDED
        )
    )


def display_assignee_performance(assignee_counts, assignee_cycle_times):
    """Display assignee performance metrics."""
    if not assignee_counts:
        return

    # Sort assignees by issue count in descending order
    sorted_assignees = sorted(assignee_counts.items(), key=lambda x: x[1], reverse=True)

    # Take top 10 assignees
    top_assignees = sorted_assignees[:10]

    assignee_lines = []
    for assignee, count in top_assignees:
        avg_cycle_time = 0
        if assignee in assignee_cycle_times and assignee_cycle_times[assignee]:
            avg_cycle_time = statistics.mean(assignee_cycle_times[assignee])

        # Performance indicator based on cycle time
        performance_indicator = (
            "🟢" if avg_cycle_time < 24 else "🟡" if avg_cycle_time < 72 else "🔴"
        )

        assignee_lines.append(
            f"{performance_indicator} [bold]{assignee:<20}[/] Issues: {count:>3} | Avg Cycle: {format_time(avg_cycle_time)}"
        )

    console.print(
        Panel(
            "\n".join(assignee_lines),
            title="[bold green]Top Assignee Performance",
            box=ROUNDED,
        )
    )


def display_component_version_summary(component_counts, version_counts):
    """Display a summary of components and versions."""
    panels = []

    if component_counts:
        # Sort components by count in descending order
        sorted_components = sorted(component_counts.items(), key=lambda x: x[1], reverse=True)
        top_components = sorted_components[:5]  # Top 5 components

        component_lines = []
        for component, count in top_components:
            component_lines.append(f"[cyan]{component:<25}[/] ({count})")

        panels.append(
            Panel(
                "\n".join(component_lines),
                title="[bold cyan]Top Components",
                box=ROUNDED,
            )
        )

    if version_counts:
        # Sort versions by count in descending order
        sorted_versions = sorted(version_counts.items(), key=lambda x: x[1], reverse=True)
        top_versions = sorted_versions[:5]  # Top 5 versions

        version_lines = []
        for version, count in top_versions:
            version_lines.append(f"[magenta]{version:<25}[/] ({count})")

        panels.append(
            Panel(
                "\n".join(version_lines),
                title="[bold magenta]Top Fix Versions",
                box=ROUNDED,
            )
        )

    # Display panels side by side if both exist
    if len(panels) == 2:
        from rich.columns import Columns
        console.print(Columns(panels))
    elif panels:
        console.print(panels[0])
```
Fix Black formatting issues.
The pipeline check indicates that this file needs Black formatting.
Run the following command to fix the formatting:
```bash
black src/wellcode_cli/jira/jira_display.py
```

🧰 Tools
🪛 GitHub Actions: Build and Quality Check
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🤖 Prompt for AI Agents
In src/wellcode_cli/jira/jira_display.py from lines 1 to 255, the code
formatting does not comply with Black's style guidelines. To fix this, run the
command "black src/wellcode_cli/jira/jira_display.py" in the terminal to
automatically reformat the file according to Black's standards, ensuring
consistent indentation, spacing, and line breaks.
src/wellcode_cli/jira/jira_metrics.py:

```python
import logging
import base64
from datetime import datetime, timedelta
from typing import Optional

import requests
from rich.console import Console

from ..config import get_jira_api_key, get_jira_domain, get_jira_email
from .models.metrics import JiraOrgMetrics, ProjectMetrics

console = Console()

logger = logging.getLogger(__name__)


def get_jira_metrics(start_date, end_date, user_filter=None) -> Optional[JiraOrgMetrics]:
    """Get Jira metrics for the specified date range"""

    # Get configuration
    api_key = get_jira_api_key()
    domain = get_jira_domain()
    email = get_jira_email()

    if not all([api_key, domain, email]):
        logger.error("Jira configuration incomplete. Missing API key, domain, or email.")
        return None

    # Create authentication header
    auth_string = f"{email}:{api_key}"
    auth_bytes = auth_string.encode('ascii')
    auth_b64 = base64.b64encode(auth_bytes).decode('ascii')

    headers = {
        "Authorization": f"Basic {auth_b64}",
        "Accept": "application/json",
        "Content-Type": "application/json"
    }

    base_url = f"https://{domain}.atlassian.net/rest/api/3"

    org_metrics = JiraOrgMetrics(name=domain)

    try:
        # Build JQL query for date range
        start_date_str = start_date.strftime("%Y-%m-%d")
        end_date_str = end_date.strftime("%Y-%m-%d")

        jql_query = f"created >= '{start_date_str}' AND created <= '{end_date_str}'"

        # Add user filter if specified
        if user_filter:
            jql_query += f" AND assignee = '{user_filter}'"

        # Get all issues with pagination
        all_issues = []
        start_at = 0
        max_results = 100
        total_issues = None

        while total_issues is None or start_at < total_issues:
            search_url = f"{base_url}/search"
            params = {
                "jql": jql_query,
                "startAt": start_at,
                "maxResults": max_results,
                "fields": [
                    "summary",
                    "status",
                    "issuetype",
                    "priority",
                    "assignee",
                    "project",
                    "created",
                    "resolutiondate",
                    "components",
                    "fixVersions",
                    "customfield_10016",  # Story Points (common field ID)
                    "timeoriginalestimate",
                    "timespent",
                    "worklog"
                ]
            }

            response = requests.get(search_url, headers=headers, params=params, timeout=30)

            if response.status_code != 200:
                logger.error(f"Jira API error: {response.status_code} - {response.text}")
                return None

            data = response.json()

            if total_issues is None:
                total_issues = data.get("total", 0)
                console.print(f"Found {total_issues} issues to process...")

            issues = data.get("issues", [])
            all_issues.extend(issues)

            start_at += max_results

            if len(issues) < max_results:
                break

        console.print(f"Processing {len(all_issues)} issues...")

        # Process all issues
        for issue in all_issues:
            # Update issue metrics
            org_metrics.issues.update_from_issue(issue)

            # Update cycle time metrics
            org_metrics.cycle_time.update_from_issue(issue)

            # Calculate actual time for estimation metrics
            actual_time = calculate_actual_time(issue)
            if actual_time > 0:
                org_metrics.estimation.update_from_issue(issue, actual_time)

            # Update project metrics
            project_data = issue.get("fields", {}).get("project", {})
            if project_data:
                project_key = project_data.get("key")
                project_name = project_data.get("name", "")

                if project_key not in org_metrics.projects:
                    # Get additional project details
                    project_details = get_project_details(base_url, headers, project_key)
                    org_metrics.projects[project_key] = ProjectMetrics(
                        key=project_key,
                        name=project_name,
                        lead=project_details.get("lead"),
                        project_type=project_details.get("projectTypeKey")
                    )

                org_metrics.projects[project_key].update_from_issue(issue)

            # Update component metrics
            components = issue.get("fields", {}).get("components", [])
            for component in components:
                component_name = component.get("name", "")
                if component_name:
                    if component_name not in org_metrics.component_counts:
                        org_metrics.component_counts[component_name] = 0
                    org_metrics.component_counts[component_name] += 1

            # Update version metrics
            fix_versions = issue.get("fields", {}).get("fixVersions", [])
            for version in fix_versions:
                version_name = version.get("name", "")
                if version_name:
                    if version_name not in org_metrics.version_counts:
                        org_metrics.version_counts[version_name] = 0
                    org_metrics.version_counts[version_name] += 1

        # Aggregate metrics after processing all issues
        org_metrics.aggregate_metrics()

        return org_metrics

    except requests.exceptions.RequestException as e:
        logger.error(f"Network error while fetching Jira metrics: {str(e)}")
        return None
    except Exception as e:
        logger.error(f"Unexpected error while fetching Jira metrics: {str(e)}")
        return None


def get_project_details(base_url: str, headers: dict, project_key: str) -> dict:
    """Get additional project details from Jira API"""
    try:
        project_url = f"{base_url}/project/{project_key}"
        response = requests.get(project_url, headers=headers, timeout=30)

        if response.status_code == 200:
            project_data = response.json()
            return {
                "lead": project_data.get("lead", {}).get("displayName"),
                "projectTypeKey": project_data.get("projectTypeKey"),
                "description": project_data.get("description", ""),
            }
    except Exception as e:
        logger.warning(f"Could not fetch project details for {project_key}: {str(e)}")

    return {}


def calculate_actual_time(issue: dict) -> float:
    """Calculate actual time spent on an issue in hours"""
    fields = issue.get("fields", {})

    # Try to get time spent from the issue
    time_spent = fields.get("timespent")  # Time in seconds
    if time_spent:
        return time_spent / 3600  # Convert to hours

    # If no time spent recorded, try to estimate from worklogs
    try:
        # Note: This would require additional API call to get worklogs
        # For now, we'll use a simple estimation based on resolution time
        created = fields.get("created")
        resolved = fields.get("resolutiondate")

        if created and resolved:
            created_dt = datetime.fromisoformat(created.replace("Z", "+00:00"))
            resolved_dt = datetime.fromisoformat(resolved.replace("Z", "+00:00"))

            # Calculate business hours between dates (rough estimation)
            total_hours = (resolved_dt - created_dt).total_seconds() / 3600

            # Estimate actual work time as 25% of total time (accounting for weekends, etc.)
            estimated_work_hours = total_hours * 0.25

            return max(0.5, min(estimated_work_hours, 40))  # Cap between 0.5 and 40 hours

    except (ValueError, TypeError):
        pass

    return 0


def calculate_work_hours(start_date: datetime, end_date: datetime) -> float:
    """Calculate work hours between two dates, excluding weekends"""
    if not start_date or not end_date:
        return 0

    total_hours = 0
    current_date = start_date

    while current_date < end_date:
        if current_date.weekday() < 5:  # Monday to Friday
            day_end = min(
                current_date.replace(hour=17, minute=0, second=0, microsecond=0),
                end_date,
            )
            day_start = max(
                current_date.replace(hour=9, minute=0, second=0, microsecond=0),
                start_date,
            )

            if day_end > day_start:
                work_hours = (day_end - day_start).total_seconds() / 3600
                total_hours += min(8, work_hours)  # Cap at 8 hours per day

        current_date = current_date.replace(
            hour=9, minute=0, second=0, microsecond=0
        ) + timedelta(days=1)

    return total_hours


def get_jira_projects(domain: str, email: str, api_key: str) -> list:
    """Get list of accessible Jira projects"""
    auth_string = f"{email}:{api_key}"
    auth_bytes = auth_string.encode('ascii')
    auth_b64 = base64.b64encode(auth_bytes).decode('ascii')

    headers = {
        "Authorization": f"Basic {auth_b64}",
        "Accept": "application/json"
    }

    try:
        url = f"https://{domain}.atlassian.net/rest/api/3/project"
        response = requests.get(url, headers=headers, timeout=30)

        if response.status_code == 200:
            return response.json()
        else:
            logger.error(f"Failed to fetch projects: {response.status_code}")
            return []

    except Exception as e:
        logger.error(f"Error fetching Jira projects: {str(e)}")
        return []


def test_jira_connection(domain: str, email: str, api_key: str) -> bool:
    """Test Jira connection with provided credentials"""
    auth_string = f"{email}:{api_key}"
    auth_bytes = auth_string.encode('ascii')
    auth_b64 = base64.b64encode(auth_bytes).decode('ascii')

    headers = {
        "Authorization": f"Basic {auth_b64}",
        "Accept": "application/json"
    }

    try:
        url = f"https://{domain}.atlassian.net/rest/api/3/myself"
        response = requests.get(url, headers=headers, timeout=10)

        if response.status_code == 200:
            user_data = response.json()
            console.print(f"[green]✓ Connected to Jira as {user_data.get('displayName', email)}[/]")
            return True
        else:
            console.print(f"[red]✗ Jira connection failed: {response.status_code}[/]")
            return False

    except Exception as e:
        console.print(f"[red]✗ Jira connection error: {str(e)}[/]")
        return False
```
Fix Black formatting issues
The pipeline indicates that this file needs to be reformatted with Black. Please run black on this file to ensure consistent code style.
```bash
#!/bin/bash
# Verify Black formatting issues
black --check --diff src/wellcode_cli/jira/jira_metrics.py
```

🧰 Tools
🪛 GitHub Actions: Build and Quality Check
[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.
🤖 Prompt for AI Agents
In src/wellcode_cli/jira/jira_metrics.py from lines 1 to 303, the code does not
comply with Black formatting standards. Run the Black formatter on this file to
automatically fix all style and formatting issues, ensuring consistent
indentation, line length, spacing, and other style conventions as per Black's
defaults.
| "resolutiondate", | ||
| "components", | ||
| "fixVersions", | ||
| "customfield_10016", # Story Points (common field ID) |
🛠️ Refactor suggestion
Make story points field ID configurable
The hardcoded custom field ID customfield_10016 for story points may not be consistent across different Jira instances. This should be configurable.
Consider adding a configuration option for the story points field ID:
- "customfield_10016", # Story Points (common field ID)
+ get_jira_story_points_field() or "customfield_10016", # Story Points fieldAnd add the corresponding getter function in your config module.
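A minimal sketch of such a getter, following the pattern of the existing config getters; reading from an environment variable and the `JIRA_STORY_POINTS_FIELD` name are illustrative assumptions, not part of this PR:

```python
import os
from typing import Optional

def get_jira_story_points_field() -> Optional[str]:
    """Return the configured story points field ID, or None if unset.

    The environment-variable lookup here is illustrative; the real getter
    would follow however config.py stores its other Jira values.
    """
    return os.environ.get("JIRA_STORY_POINTS_FIELD")
```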
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/wellcode_cli/jira/jira_metrics.py at line 78, the story points field ID
is hardcoded as "customfield_10016", which may vary between Jira instances. To
fix this, remove the hardcoded value and instead retrieve the story points field
ID from a configuration setting. Add a new configuration option for the story
points field ID in your config module and implement a getter function to access
it. Then update the code to use this getter function to obtain the field ID
dynamically.
src/wellcode_cli/jira/models/metrics.py:

```python
import json
import statistics
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Set, Optional


class MetricsJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.isoformat()
        if isinstance(obj, set):
            return list(obj)
        if isinstance(obj, defaultdict):
            return dict(obj)
        if callable(obj):
            return None
        if hasattr(obj, "__dict__"):
            return {
                k: v
                for k, v in obj.__dict__.items()
                if not k.startswith("_") and not callable(v)
            }
        try:
            return super().default(obj)
        except Exception:
            return str(obj)


@dataclass
class BaseMetrics:
    def to_dict(self):
        def convert(obj):
            if isinstance(obj, datetime):
                return obj.isoformat()
            if isinstance(obj, set):
                return list(obj)
            if isinstance(obj, defaultdict):
                return dict(obj)
            if callable(obj):
                return None
            if hasattr(obj, "to_dict"):
                return obj.to_dict()
            if hasattr(obj, "__dict__"):
                return {
                    k: convert(v)
                    for k, v in obj.__dict__.items()
                    if not k.startswith("_") and not callable(v)
                }
            return obj

        return {
            k: convert(v)
            for k, v in self.__dict__.items()
            if not k.startswith("_") and not callable(v)
        }


@dataclass
class IssueMetrics(BaseMetrics):
    total_created: int = 0
    total_completed: int = 0
    total_in_progress: int = 0
    bugs_created: int = 0
    bugs_completed: int = 0
    stories_created: int = 0
    stories_completed: int = 0
    tasks_created: int = 0
    tasks_completed: int = 0
    epics_created: int = 0
    epics_completed: int = 0
    by_priority: Dict[str, int] = field(default_factory=lambda: defaultdict(int))
    by_status: Dict[str, int] = field(default_factory=lambda: defaultdict(int))
    by_assignee: Dict[str, int] = field(default_factory=lambda: defaultdict(int))
    by_project: Dict[str, Dict] = field(
        default_factory=lambda: defaultdict(
            lambda: {
                "total": 0,
                "bugs": 0,
                "stories": 0,
                "tasks": 0,
                "epics": 0,
                "completed": 0,
                "in_progress": 0,
            }
        )
    )

    def get_stats(self) -> Dict:
        completion_rate = (
            (self.total_completed / self.total_created * 100)
            if self.total_created > 0
            else 0
        )
        bug_rate = (
            (self.bugs_created / self.total_created * 100)
            if self.total_created > 0
            else 0
        )

        return {
            "total_issues": self.total_created,
            "completion_rate": completion_rate,
            "bug_rate": bug_rate,
            "stories_to_bugs_ratio": (
                self.stories_created / self.bugs_created
                if self.bugs_created > 0
                else 0
            ),
            "in_progress_rate": (
                (self.total_in_progress / self.total_created * 100)
                if self.total_created > 0
                else 0
            ),
            "priority_distribution": dict(self.by_priority),
            "status_distribution": dict(self.by_status),
            "assignee_distribution": dict(self.by_assignee),
            "project_metrics": dict(self.by_project),
        }

    def update_from_issue(self, issue: dict):
        self.total_created += 1

        # Get issue type and status
        issue_type = issue.get("fields", {}).get("issuetype", {}).get("name", "").lower()
        status_name = issue.get("fields", {}).get("status", {}).get("name", "Unknown")
        status_category = issue.get("fields", {}).get("status", {}).get("statusCategory", {}).get("key", "")

        # Update status metrics
        self.by_status[status_name] += 1

        # Update completion status based on status category
        if status_category == "done":
            self.total_completed += 1
        elif status_category == "indeterminate":
            self.total_in_progress += 1

        # Update issue type metrics
        if "bug" in issue_type:
            self.bugs_created += 1
            if status_category == "done":
                self.bugs_completed += 1
        elif "story" in issue_type:
            self.stories_created += 1
            if status_category == "done":
                self.stories_completed += 1
        elif "task" in issue_type:
            self.tasks_created += 1
            if status_category == "done":
                self.tasks_completed += 1
        elif "epic" in issue_type:
            self.epics_created += 1
            if status_category == "done":
                self.epics_completed += 1

        # Update priority metrics
        priority = issue.get("fields", {}).get("priority", {})
        if priority:
            priority_name = priority.get("name", "Unknown")
            self.by_priority[priority_name] += 1

        # Update assignee metrics
        assignee = issue.get("fields", {}).get("assignee", {})
        if assignee:
            assignee_name = assignee.get("displayName", "Unassigned")
            self.by_assignee[assignee_name] += 1
        else:
            self.by_assignee["Unassigned"] += 1

        # Update project metrics
        project = issue.get("fields", {}).get("project", {})
        if project:
            project_key = project.get("key")
            if project_key:
                self.by_project[project_key]["total"] += 1
                if "bug" in issue_type:
                    self.by_project[project_key]["bugs"] += 1
                elif "story" in issue_type:
                    self.by_project[project_key]["stories"] += 1
                elif "task" in issue_type:
                    self.by_project[project_key]["tasks"] += 1
                elif "epic" in issue_type:
                    self.by_project[project_key]["epics"] += 1

                if status_category == "done":
                    self.by_project[project_key]["completed"] += 1
                elif status_category == "indeterminate":
                    self.by_project[project_key]["in_progress"] += 1


@dataclass
class CycleTimeMetrics(BaseMetrics):
    cycle_times: List[float] = field(default_factory=list)
    time_to_start: List[float] = field(default_factory=list)
    time_in_progress: List[float] = field(default_factory=list)
    time_in_review: List[float] = field(default_factory=list)
    resolution_times: List[float] = field(default_factory=list)
    by_assignee: Dict[str, List[float]] = field(default_factory=lambda: defaultdict(list))
    by_priority: Dict[str, List[float]] = field(
        default_factory=lambda: defaultdict(list)
    )
    by_issue_type: Dict[str, List[float]] = field(
        default_factory=lambda: defaultdict(list)
    )

    def get_stats(self) -> Dict:
        def safe_mean(lst: List[float]) -> float:
            return statistics.mean(lst) if lst else 0

        def safe_median(lst: List[float]) -> float:
            return statistics.median(lst) if lst else 0

        def safe_p95(lst: List[float]) -> float:
            if not lst:
                return 0
            sorted_list = sorted(lst)
            index = int(0.95 * len(sorted_list))
            return sorted_list[min(index, len(sorted_list) - 1)]

        return {
            "avg_cycle_time": safe_mean(self.cycle_times),
            "median_cycle_time": safe_median(self.cycle_times),
            "p95_cycle_time": safe_p95(self.cycle_times),
            "avg_time_to_start": safe_mean(self.time_to_start),
            "avg_time_in_progress": safe_mean(self.time_in_progress),
            "avg_time_in_review": safe_mean(self.time_in_review),
            "avg_resolution_time": safe_mean(self.resolution_times),
            "assignee_cycle_times": {
                assignee: safe_mean(times) for assignee, times in self.by_assignee.items()
            },
            "priority_cycle_times": {
                priority: safe_mean(times)
                for priority, times in self.by_priority.items()
            },
            "issue_type_cycle_times": {
                issue_type: safe_mean(times)
                for issue_type, times in self.by_issue_type.items()
            },
        }

    def update_from_issue(self, issue: dict):
        fields = issue.get("fields", {})
        created = fields.get("created")
        resolved = fields.get("resolutiondate")

        if not created:
            return

        try:
            created_dt = datetime.fromisoformat(created.replace("Z", "+00:00"))

            if resolved:
                resolved_dt = datetime.fromisoformat(resolved.replace("Z", "+00:00"))
                cycle_time = (resolved_dt - created_dt).total_seconds() / 3600  # hours
                self.cycle_times.append(cycle_time)
                self.resolution_times.append(cycle_time)

                # Track by assignee
                assignee = fields.get("assignee", {})
                if assignee:
                    assignee_name = assignee.get("displayName", "Unassigned")
                    self.by_assignee[assignee_name].append(cycle_time)

                # Track by priority
                priority = fields.get("priority", {})
                if priority:
                    priority_name = priority.get("name", "Unknown")
                    self.by_priority[priority_name].append(cycle_time)

                # Track by issue type
                issue_type = fields.get("issuetype", {})
```
| if issue_type: | ||
| type_name = issue_type.get("name", "Unknown") | ||
| self.by_issue_type[type_name].append(cycle_time) | ||
|
|
||
| except (ValueError, TypeError) as e: | ||
| # Skip issues with invalid date formats | ||
| pass | ||
|
|
||
|
|
||
| @dataclass | ||
| class EstimationMetrics(BaseMetrics): | ||
| total_estimated: int = 0 | ||
| accurate_estimates: int = 0 | ||
| underestimates: int = 0 | ||
| overestimates: int = 0 | ||
| estimation_variance: List[float] = field(default_factory=list) | ||
| by_assignee: Dict[str, Dict] = field( | ||
| default_factory=lambda: defaultdict( | ||
| lambda: {"total": 0, "accurate": 0, "under": 0, "over": 0, "variance": []} | ||
| ) | ||
| ) | ||
| by_issue_type: Dict[str, Dict] = field( | ||
| default_factory=lambda: defaultdict( | ||
| lambda: {"total": 0, "accurate": 0, "under": 0, "over": 0, "variance": []} | ||
| ) | ||
| ) | ||
|
|
||
| def get_stats(self) -> Dict: | ||
| def safe_mean(lst: List[float]) -> float: | ||
| return statistics.mean(lst) if lst else 0 | ||
|
|
||
| accuracy_rate = ( | ||
| (self.accurate_estimates / self.total_estimated * 100) | ||
| if self.total_estimated > 0 | ||
| else 0 | ||
| ) | ||
|
|
||
| return { | ||
| "total_estimated": self.total_estimated, | ||
| "accuracy_rate": accuracy_rate, | ||
| "underestimate_rate": ( | ||
| (self.underestimates / self.total_estimated * 100) | ||
| if self.total_estimated > 0 | ||
| else 0 | ||
| ), | ||
| "overestimate_rate": ( | ||
| (self.overestimates / self.total_estimated * 100) | ||
| if self.total_estimated > 0 | ||
| else 0 | ||
| ), | ||
| "avg_variance": safe_mean(self.estimation_variance), | ||
| "assignee_accuracy": { | ||
| assignee: { | ||
| "accuracy_rate": ( | ||
| (stats["accurate"] / stats["total"] * 100) | ||
| if stats["total"] > 0 | ||
| else 0 | ||
| ), | ||
| "avg_variance": safe_mean(stats["variance"]), | ||
| } | ||
| for assignee, stats in self.by_assignee.items() | ||
| }, | ||
| "issue_type_accuracy": { | ||
| issue_type: { | ||
| "accuracy_rate": ( | ||
| (stats["accurate"] / stats["total"] * 100) | ||
| if stats["total"] > 0 | ||
| else 0 | ||
| ), | ||
| "avg_variance": safe_mean(stats["variance"]), | ||
| } | ||
| for issue_type, stats in self.by_issue_type.items() | ||
| }, | ||
| } | ||
|
|
||
| def update_from_issue(self, issue: dict, actual_time: float): | ||
| fields = issue.get("fields", {}) | ||
|
|
||
| # Try to get story points or time estimate | ||
| story_points = fields.get("customfield_10016") # Common story points field | ||
| original_estimate = fields.get("timeoriginalestimate") # Time estimate in seconds | ||
|
|
||
| estimate_hours = None | ||
| if story_points: | ||
| # Convert story points to hours (assuming 1 point = 4 hours) | ||
| estimate_hours = story_points * 4 | ||
| elif original_estimate: | ||
| # Convert seconds to hours | ||
| estimate_hours = original_estimate / 3600 | ||
|
|
||
| if not estimate_hours or actual_time <= 0: | ||
| return | ||
|
|
||
| variance_percent = ((actual_time - estimate_hours) / estimate_hours) * 100 | ||
|
|
||
| self.total_estimated += 1 | ||
| self.estimation_variance.append(variance_percent) | ||
|
|
||
| # Categorize accuracy (within 25% is considered accurate) | ||
| if abs(variance_percent) <= 25: | ||
| self.accurate_estimates += 1 | ||
| elif variance_percent > 25: | ||
| self.underestimates += 1 | ||
| else: | ||
| self.overestimates += 1 | ||
|
|
||
| # Track by assignee | ||
| assignee = fields.get("assignee", {}) | ||
| if assignee: | ||
| assignee_name = assignee.get("displayName", "Unassigned") | ||
| assignee_stats = self.by_assignee[assignee_name] | ||
| assignee_stats["total"] += 1 | ||
| assignee_stats["variance"].append(variance_percent) | ||
| if abs(variance_percent) <= 25: | ||
| assignee_stats["accurate"] += 1 | ||
| elif variance_percent > 25: | ||
| assignee_stats["under"] += 1 | ||
| else: | ||
| assignee_stats["over"] += 1 | ||
|
|
||
| # Track by issue type | ||
| issue_type = fields.get("issuetype", {}) | ||
| if issue_type: | ||
| type_name = issue_type.get("name", "Unknown") | ||
| type_stats = self.by_issue_type[type_name] | ||
| type_stats["total"] += 1 | ||
| type_stats["variance"].append(variance_percent) | ||
| if abs(variance_percent) <= 25: | ||
| type_stats["accurate"] += 1 | ||
| elif variance_percent > 25: | ||
| type_stats["under"] += 1 | ||
| else: | ||
| type_stats["over"] += 1 | ||
|
|
||
|
|
||
| @dataclass | ||
| class ProjectMetrics(BaseMetrics): | ||
| key: str | ||
| name: str | ||
| total_issues: int = 0 | ||
| completed_issues: int = 0 | ||
| bugs_count: int = 0 | ||
| stories_count: int = 0 | ||
| tasks_count: int = 0 | ||
| epics_count: int = 0 | ||
| avg_cycle_time: float = 0 | ||
| assignees_involved: Set[str] = field(default_factory=set) | ||
| estimation_accuracy: float = 0 | ||
| lead: Optional[str] = None | ||
| project_type: Optional[str] = None | ||
|
|
||
| def get_stats(self) -> Dict: | ||
| completion_rate = ( | ||
| (self.completed_issues / self.total_issues * 100) | ||
| if self.total_issues > 0 | ||
| else 0 | ||
| ) | ||
| return { | ||
| "key": self.key, | ||
| "name": self.name, | ||
| "total_issues": self.total_issues, | ||
| "completed_issues": self.completed_issues, | ||
| "completion_rate": completion_rate, | ||
| "bugs_count": self.bugs_count, | ||
| "stories_count": self.stories_count, | ||
| "tasks_count": self.tasks_count, | ||
| "epics_count": self.epics_count, | ||
| "avg_cycle_time": self.avg_cycle_time, | ||
| "assignees_involved": list(self.assignees_involved), | ||
| "estimation_accuracy": self.estimation_accuracy, | ||
| "lead": self.lead, | ||
| "project_type": self.project_type, | ||
| } | ||
|
|
||
| def update_from_issue(self, issue: dict): | ||
| self.total_issues += 1 | ||
|
|
||
| fields = issue.get("fields", {}) | ||
| status_category = fields.get("status", {}).get("statusCategory", {}).get("key", "") | ||
|
|
||
| if status_category == "done": | ||
| self.completed_issues += 1 | ||
|
|
||
| # Update issue type counts | ||
| issue_type = fields.get("issuetype", {}).get("name", "").lower() | ||
| if "bug" in issue_type: | ||
| self.bugs_count += 1 | ||
| elif "story" in issue_type: | ||
| self.stories_count += 1 | ||
| elif "task" in issue_type: | ||
| self.tasks_count += 1 | ||
| elif "epic" in issue_type: | ||
| self.epics_count += 1 | ||
|
|
||
| # Track assignee involvement | ||
| assignee = fields.get("assignee", {}) | ||
| if assignee: | ||
| assignee_name = assignee.get("displayName") | ||
| if assignee_name: | ||
| self.assignees_involved.add(assignee_name) | ||
|
|
||
|
|
||
| @dataclass | ||
| class JiraOrgMetrics(BaseMetrics): | ||
| name: str | ||
| issues: IssueMetrics = field(default_factory=IssueMetrics) | ||
| projects: Dict[str, ProjectMetrics] = field(default_factory=dict) | ||
| cycle_time: CycleTimeMetrics = field(default_factory=CycleTimeMetrics) | ||
| estimation: EstimationMetrics = field(default_factory=EstimationMetrics) | ||
| component_counts: Dict[str, int] = field(default_factory=dict) | ||
| version_counts: Dict[str, int] = field(default_factory=dict) | ||
|
|
||
| def get_stats(self) -> Dict: | ||
| return { | ||
| "name": self.name, | ||
| "projects": { | ||
| key: project.get_stats() for key, project in self.projects.items() | ||
| }, | ||
| "issues": self.issues.get_stats(), | ||
| "cycle_time": self.cycle_time.get_stats(), | ||
| "estimation": self.estimation.get_stats(), | ||
| "component_distribution": self.component_counts, | ||
| "version_distribution": self.version_counts, | ||
| } | ||
|
|
||
| def aggregate_metrics(self): | ||
| """Aggregate metrics across all projects""" | ||
| if self.projects: | ||
| # Calculate average cycle time across projects | ||
| project_cycle_times = [ | ||
| p.avg_cycle_time for p in self.projects.values() if p.avg_cycle_time > 0 | ||
| ] | ||
| if project_cycle_times: | ||
| avg_cycle_time = statistics.mean(project_cycle_times) | ||
| for project in self.projects.values(): | ||
| if project.avg_cycle_time == 0: | ||
| project.avg_cycle_time = avg_cycle_time | ||
|
|
||
| # Calculate estimation accuracy across projects | ||
| project_accuracies = [ | ||
| p.estimation_accuracy for p in self.projects.values() if p.estimation_accuracy > 0 | ||
| ] | ||
| if project_accuracies: | ||
| avg_accuracy = statistics.mean(project_accuracies) | ||
| for project in self.projects.values(): | ||
| if project.estimation_accuracy == 0: | ||
| project.estimation_accuracy = avg_accuracy No newline at end of file |
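Before the inline review comments, a quick orientation for reviewers: each model above is a plain dataclass that consumes one raw Jira issue dict at a time via `update_from_issue` and summarizes itself via `get_stats`. A minimal smoke-test sketch follows; the payload is fabricated to mirror the fields the code reads, not a real Jira response:

```python
# Fabricated issue payload covering the fields read by update_from_issue:
# issuetype, status/statusCategory, priority, assignee, project, created,
# and resolutiondate.
from wellcode_cli.jira.models.metrics import CycleTimeMetrics, IssueMetrics

sample_issue = {
    "fields": {
        "issuetype": {"name": "Bug"},
        "status": {"name": "Done", "statusCategory": {"key": "done"}},
        "priority": {"name": "High"},
        "assignee": {"displayName": "Ada Lovelace"},
        "project": {"key": "WEL"},
        "created": "2024-01-01T09:00:00.000Z",
        "resolutiondate": "2024-01-03T17:00:00.000Z",
    }
}

issues = IssueMetrics()
issues.update_from_issue(sample_issue)
print(issues.get_stats()["completion_rate"])  # 100.0 — the one issue is done

cycle = CycleTimeMetrics()
cycle.update_from_issue(sample_issue)
print(cycle.get_stats()["avg_cycle_time"])  # 56.0 hours from created to resolved
```

Because every collector exposes the same `update_from_issue`/`get_stats` pair, a caller can fan each fetched issue out to all of them in a single loop.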
**Fix Black formatting issues**

The pipeline indicates that this file needs to be reformatted with Black. Please run `black` on this file to ensure consistent code style.

```bash
#!/bin/bash
# Verify Black formatting issues
black --check --diff src/wellcode_cli/jira/models/metrics.py
```

🧰 Tools

🪛 Ruff (0.11.9)

277-277: Local variable `e` is assigned to but never used. Remove assignment to unused variable `e`. (F841)

🪛 GitHub Actions: Build and Quality Check

[error] 1-1: Black formatting check failed. File would be reformatted. Run 'black' to fix code style issues.

🤖 Prompt for AI Agents

In src/wellcode_cli/jira/models/metrics.py lines 1 to 519, the code formatting does not comply with Black's style guidelines. To fix this, run the Black formatter on the file by executing `black src/wellcode_cli/jira/models/metrics.py` in the terminal. This will automatically reformat the code to ensure consistent style and pass the pipeline checks.
```python
        except (ValueError, TypeError) as e:
            # Skip issues with invalid date formats
            pass
```
**Remove unused variable in exception handler**

The variable `e` is assigned but never used. Since you're not logging the exception details, you can simplify the `except` clause.

```diff
-        except (ValueError, TypeError) as e:
+        except (ValueError, TypeError):
             # Skip issues with invalid date formats
             pass
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
        except (ValueError, TypeError):
            # Skip issues with invalid date formats
            pass
```
🧰 Tools

🪛 Ruff (0.11.9)

277-277: Local variable `e` is assigned to but never used. Remove assignment to unused variable `e`. (F841)
🤖 Prompt for AI Agents

In src/wellcode_cli/jira/models/metrics.py around lines 277 to 279, the exception handler assigns the caught exception to the variable 'e' but does not use it. Remove the variable 'e' from the except clause to simplify the code by changing 'except (ValueError, TypeError) as e:' to 'except (ValueError, TypeError):' since the exception details are not needed.
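For anyone applying that fix, `contextlib.suppress` from the standard library expresses the same "parse or silently skip" intent even more directly. A standalone sketch, not a committable diff for this file:

```python
from contextlib import suppress
from datetime import datetime

def parse_jira_timestamp(raw: str):
    """Return a datetime, or None when the timestamp string is malformed."""
    with suppress(ValueError, TypeError):
        return datetime.fromisoformat(raw.replace("Z", "+00:00"))
    return None  # reached only when parsing raised and was suppressed

print(parse_jira_timestamp("2024-01-01T09:00:00+00:00"))  # datetime(...)
print(parse_jira_timestamp("not-a-date"))                 # None
```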
```python
            # Convert story points to hours (assuming 1 point = 4 hours)
            estimate_hours = story_points * 4
```
🛠️ Refactor suggestion

**Make story points to hours conversion configurable**

The hardcoded conversion of 1 story point = 4 hours may not be appropriate for all teams, as story point scales vary significantly.

Consider making this conversion rate configurable:

```diff
         if story_points:
-            # Convert story points to hours (assuming 1 point = 4 hours)
-            estimate_hours = story_points * 4
+            # Convert story points to hours using configurable rate
+            hours_per_point = 4  # TODO: Get from config - this varies by team
+            estimate_hours = story_points * hours_per_point
```

This allows teams to adjust the conversion based on their specific velocity and estimation practices.

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
        if story_points:
            # Convert story points to hours using configurable rate
            hours_per_point = 4  # TODO: Get from config - this varies by team
            estimate_hours = story_points * hours_per_point
```
🤖 Prompt for AI Agents

In src/wellcode_cli/jira/models/metrics.py around lines 357 to 358, the conversion from story points to hours is hardcoded as 1 point = 4 hours, which may not suit all teams. Refactor this by introducing a configurable parameter for the conversion rate, such as a constant or a setting that can be passed or loaded from configuration. Replace the hardcoded multiplier with this configurable value to allow teams to adjust the conversion according to their estimation practices.
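If the rate does get promoted into configuration, a minimal sketch of one possible shape (the `jira` config section, the `hours_per_story_point` key, and the `load_config()` call are hypothetical names, not part of this PR):

```python
DEFAULT_HOURS_PER_POINT = 4.0  # fallback when no team-specific rate is set

def hours_per_point(config: dict) -> float:
    """Read the story-point-to-hours rate from config, with a safe fallback."""
    try:
        return float(
            config.get("jira", {}).get("hours_per_story_point", DEFAULT_HOURS_PER_POINT)
        )
    except (TypeError, ValueError):
        return DEFAULT_HOURS_PER_POINT

# Inside EstimationMetrics.update_from_issue the hardcoded multiplier becomes:
#     estimate_hours = story_points * hours_per_point(load_config())
# where load_config() stands in for however the CLI reads user settings.

print(hours_per_point({"jira": {"hours_per_story_point": 6}}))  # 6.0
print(hours_per_point({}))                                      # 4.0
```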
**PR Type:** Enhancement

**PR Description:**

**PR Main Files Walkthrough:**

files:

- `src/wellcode_cli/commands/config.py`: Added Jira to the list of optional integrations. Introduced special handling for Jira configuration, requiring domain, email, and API key. Implemented a function to handle Jira-specific configuration and connection testing.
- `src/wellcode_cli/commands/review.py`: Integrated Jira metrics into the review command. Added functions to fetch and display Jira metrics if Jira is configured.
- `src/wellcode_cli/config.py`: Added functions to retrieve Jira-specific configuration values such as API key, domain, and email.
- `src/wellcode_cli/jira/jira_display.py`: Implemented functions to display Jira metrics using Rich components, including issue flow, cycle time, estimation accuracy, and project performance.
- `src/wellcode_cli/jira/jira_metrics.py`: Developed functions to fetch Jira metrics using the Jira API. Implemented logic to handle authentication, issue processing, and metrics aggregation (a generic sketch of the auth header follows this list).
- `src/wellcode_cli/jira/models/metrics.py`: Created data models for Jira metrics, including issue metrics, cycle time metrics, estimation metrics, and project metrics. Implemented methods to update metrics from Jira issues.
- `test_jira_integration.py`: Added a test script to verify Jira integration, including connection testing and metrics collection with sample data.
- `JIRA_INTEGRATION.md`: Added documentation for Jira Cloud integration, including setup instructions, usage, metrics explanation, and troubleshooting.
- `README.md`: Updated README to include Jira Cloud as an optional integration for issue tracking metrics.
- `requirements.txt`: Added the `requests` library to handle HTTP requests for Jira API integration.
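On the authentication point above: Jira Cloud's REST API accepts HTTP Basic auth built from the account email and an API token. A generic sketch of that header construction (illustrative only; the actual helper in `jira_metrics.py` may be shaped differently):

```python
import base64

def jira_auth_headers(email: str, api_key: str) -> dict:
    """Build Basic-auth headers for Jira Cloud from email + API token."""
    token = base64.b64encode(f"{email}:{api_key}".encode("utf-8")).decode("ascii")
    return {
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    }

# Example with fake credentials:
# headers = jira_auth_headers("dev@example.com", "atlassian-api-token")
# requests.get(f"https://{domain}.atlassian.net/rest/api/3/myself", headers=headers)
```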
**User Description:**

**Description**

**Related Issue**

Fixes #

**Type of Change**

**Testing**

**Checklist**
**Summary by CodeRabbit**

- **New Features**
- **Documentation**
- **Chores**
  - `requests` package as a new dependency.
- **Tests**