Iterate PR
Added March 5, 2026 · Source: Sentry team
Automates getting your pull request to pass all CI checks and addressing review feedback. It fetches CI failures and categorized feedback, then iteratively applies fixes and pushes changes until the PR is ready to merge.
Installation
This skill is self-contained. Copy the SKILL.md below directly into your project to get started.
```
.claude/skills/iterate-pr/SKILL.md   # Claude Code
.cursor/skills/iterate-pr/SKILL.md   # Cursor
```
Or install as a personal skill (available across all your projects):
```
~/.claude/skills/iterate-pr/SKILL.md
```
You can also install using the skills CLI:
```
npx skills add getsentry/skills --skill iterate-pr
```
Requires Node.js 18+.
SKILL.md
---
name: iterate-pr
description: Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automates the feedback-fix-push-wait cycle.
---
# Iterate on PR Until CI Passes
Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.
**Requires**: GitHub CLI (`gh`) authenticated.
**Important**: All scripts must be run from the repository root directory (where `.git` is located), not from the skill directory. Use the full path to the script via `${CLAUDE_SKILL_ROOT}`.
## Bundled Scripts
### `scripts/fetch_pr_checks.py` ([source](https://raw.githubusercontent.com/getsentry/skills/main/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_checks.py))
Fetches CI check status and extracts failure snippets from logs.
```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py [--pr NUMBER]
```
Returns JSON:
```json
{
"pr": {"number": 123, "branch": "feat/foo"},
"summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
"checks": [
{"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
{"name": "lint", "status": "pass"}
]
}
```
### `scripts/fetch_pr_feedback.py` ([source](https://raw.githubusercontent.com/getsentry/skills/main/plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_feedback.py))
Fetches and categorizes PR review feedback using the [LOGAF scale](https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale).
```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py [--pr NUMBER]
```
Returns JSON with feedback categorized as:
- `high` - Must address before merge (`h:`, blocker, changes requested)
- `medium` - Should address (`m:`, standard feedback)
- `low` - Optional (`l:`, nit, style, suggestion)
- `bot` - Informational automated comments (Codecov, Dependabot, etc.)
- `resolved` - Already resolved threads
Review bot feedback (from Sentry, Warden, Cursor, Bugbot, CodeQL, etc.) appears in `high`/`medium`/`low` with `review_bot: true` — it is NOT placed in the `bot` bucket.
Each feedback item may also include:
- `thread_id` - GraphQL node ID for inline review comments (used for replies)
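For illustration, a trimmed example of the output shape (values are hypothetical, `full_body` fields and empty buckets are omitted for brevity):
```json
{
  "pr": {"number": 123, "author": "octocat", "review_decision": "CHANGES_REQUESTED"},
  "summary": {"high": 1, "medium": 0, "low": 1, "bot_comments": 1, "resolved": 2,
              "review_bot_feedback": 1, "needs_attention": 1},
  "feedback": {
    "high": [{"author": "sentry[bot]", "body": "Possible nil deref in handler",
              "path": "api.py", "line": 42, "review_bot": true, "thread_id": "PRRT_abc"}],
    "low": [{"author": "reviewer", "body": "nit: rename this variable",
             "path": "utils.py", "line": 18, "thread_id": "PRRT_def"}]
  },
  "action_required": "Address high-priority feedback before merge"
}
```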
## Workflow
### 1. Identify PR
```bash
gh pr view --json number,url,headRefName
```
Stop if no PR exists for the current branch.
### 2. Gather Review Feedback
Run `${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py` to get categorized feedback already posted on the PR.
### 3. Handle Feedback by LOGAF Priority
**Auto-fix (no prompt):**
- `high` - must address (blockers, security, changes requested)
- `medium` - should address (standard feedback)
When fixing feedback:
- Understand the root cause, not just the surface symptom
- Check for similar issues in nearby code or related files
- Fix all instances, not just the one mentioned
This includes review bot feedback (items with `review_bot: true`). Treat it the same as human feedback:
- Real issue found → fix it
- False positive → skip, but explain why in a brief comment
- Never silently ignore review bot feedback — always verify the finding
**Prompt user for selection:**
- `low` - present numbered list and ask which to address:
```
Found 3 low-priority suggestions:
1. [l] "Consider renaming this variable" - @reviewer in api.py:42
2. [nit] "Could use a list comprehension" - @reviewer in utils.py:18
3. [style] "Add a docstring" - @reviewer in models.py:55
Which would you like to address? (e.g., "1,3" or "all" or "none")
```
**Skip silently:**
- `resolved` threads
- `bot` comments (informational only — Codecov, Dependabot, etc.)
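The triage above can be restated as a small helper over `fetch_pr_feedback.py`'s `feedback` buckets (a sketch; `triage` itself is not part of the bundled scripts):

```python
def triage(feedback: dict) -> dict:
    """Split categorized feedback into auto-fix, ask-user, and skip piles."""
    return {
        # high + medium (including review_bot items) are fixed without prompting
        "auto_fix": feedback["high"] + feedback["medium"],
        # low items are presented as a numbered list for the user to pick from
        "ask_user": feedback["low"],
        # resolved threads and informational bot comments are skipped silently
        "skip": feedback["bot"] + feedback["resolved"],
    }
```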
#### Replying to Comments
After processing each inline review comment, reply on the PR thread to acknowledge the action taken. Only reply to items with a `thread_id` (inline review comments).
**When to reply:**
- `high` and `medium` items — whether fixed or determined to be false positives
- `low` items — whether fixed or declined by the user
**How to reply:** Use the `addPullRequestReviewThreadReply` GraphQL mutation with `pullRequestReviewThreadId` and `body` inputs.
**Reply format:**
- 1-2 sentences: what was changed, why it's not an issue, or acknowledgment of declined items
- End every reply with `\n\n*— Claude Code*`
- Before replying, check if the thread already has a reply ending with `*- Claude Code*` or `*— Claude Code*` to avoid duplicates on re-loops
- If the `gh api` call fails, log and continue — do not block the workflow
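The reply call can be sketched as below. The mutation name and its inputs come from the GitHub GraphQL API; the helper function and the example thread ID are illustrative:

```python
# GraphQL mutation for replying to an inline review thread.
REPLY_MUTATION = """
mutation($threadId: ID!, $body: String!) {
  addPullRequestReviewThreadReply(
    input: {pullRequestReviewThreadId: $threadId, body: $body}
  ) { comment { id } }
}
"""

def build_reply_command(thread_id: str, message: str) -> list[str]:
    """Build the gh invocation that replies to an inline review thread."""
    # Every reply ends with the required signature line.
    body = f"{message}\n\n*— Claude Code*"
    return [
        "gh", "api", "graphql",
        "-f", f"query={REPLY_MUTATION}",
        "-f", f"threadId={thread_id}",
        "-f", f"body={body}",
    ]
```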
### 4. Check CI Status
Run `${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py` to get structured failure data.
**Wait if pending:** If review bot checks (sentry, warden, cursor, bugbot, seer, codeql) are still running, wait before proceeding—they post actionable feedback that must be evaluated. Informational bots (codecov) are not worth waiting for.
### 5. Fix CI Failures
For each failure in the script output:
1. Read the `log_snippet` and trace backwards from the error to understand WHY it failed — not just what failed
2. Read the relevant code and check for related issues (e.g., if a type error in one call site, check other call sites)
3. Fix the root cause with minimal, targeted changes
4. Find existing tests for the affected code and run them. If the fix introduces behavior not covered by existing tests, extend them to cover it (add a test case, not a whole new test file)
Do NOT assume what failed based on check name alone—always read the logs. Do NOT "quick fix and hope" — understand the failure thoroughly before changing code.
### 6. Verify Locally, Then Commit and Push
Before committing, verify your fixes locally:
- If you fixed a test failure: re-run that specific test locally
- If you fixed a lint/type error: re-run the linter or type checker on affected files
- For any code fix: run existing tests covering the changed code
If local verification fails, fix before proceeding — do not push known-broken code.
```bash
git add <files>
git commit -m "fix: <descriptive message>"
git push
```
### 7. Monitor CI and Address Feedback
Poll CI status and review feedback in a loop instead of blocking:
1. Run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py` to get current CI status
2. If all checks passed → proceed to exit conditions
3. If any checks failed (none pending) → return to step 5
4. If checks are still pending:
a. Run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py` for new review feedback
b. Address any new high/medium feedback immediately (same as step 3)
c. If changes were needed, commit and push (this restarts CI), then continue polling
d. Sleep 30 seconds, then repeat from sub-step 1
5. After all checks pass, do a final feedback check: `sleep 10`, then run `uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py`. Address any new high/medium feedback — if changes are needed, return to step 6.
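The branching in this loop can be sketched as a helper over `fetch_pr_checks.py`'s `summary` block (illustrative, not part of the bundled scripts):

```python
def next_action(summary: dict) -> str:
    """Map the checks summary to the next step in the polling loop."""
    if summary["pending"] > 0:
        return "poll"         # sub-step 4: fetch new feedback, sleep 30s, re-check
    if summary["failed"] > 0:
        return "fix"          # sub-step 3: return to step 5 and fix failures
    return "final_check"      # sub-step 5: post-pass feedback re-check, then exit
```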
### 8. Repeat
If step 7 required code changes (from new feedback after CI passed), return to step 2 for a fresh cycle. CI failures during monitoring are already handled within step 7's polling loop.
## Exit Conditions
**Success:** All checks pass, post-CI feedback re-check is clean (no new unaddressed high/medium feedback including review bot findings), user has decided on low-priority items.
**Ask for help:** Same failure after 2 attempts, feedback needs clarification, infrastructure issues.
**Stop:** No PR exists, branch needs rebase.
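The "same failure after 2 attempts" rule can be tracked with a simple per-check counter (a sketch; names are illustrative):

```python
from collections import Counter

# Attempts so far, keyed by check name.
fix_attempts: Counter = Counter()

def should_ask_for_help(check_name: str, max_attempts: int = 2) -> bool:
    """Record a fix attempt for a check; escalate once it has failed twice."""
    fix_attempts[check_name] += 1
    return fix_attempts[check_name] > max_attempts
```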
## Fallback
If scripts fail, use `gh` CLI directly:
- `gh pr checks --json name,state,bucket,link`
- `gh run view <run-id> --log-failed`
- `gh api repos/{owner}/{repo}/pulls/{number}/comments`
## Companion Files
The following companion files are referenced above and included here for standalone use.
### scripts/fetch_pr_checks.py
```python
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# ///
"""
Fetch PR CI checks and extract relevant failure snippets.
Usage:
python fetch_pr_checks.py [--pr PR_NUMBER]
If --pr is not specified, uses the PR for the current branch.
Output: JSON to stdout with structured check data.
"""
from __future__ import annotations
import argparse
import json
import re
import subprocess
import sys
from typing import Any
def run_gh(args: list[str]) -> dict[str, Any] | list[Any] | None:
"""Run a gh CLI command and return parsed JSON output."""
try:
result = subprocess.run(
["gh"] + args,
capture_output=True,
text=True,
check=True,
)
return json.loads(result.stdout) if result.stdout.strip() else None
except subprocess.CalledProcessError as e:
print(f"Error running gh {' '.join(args)}: {e.stderr}", file=sys.stderr)
return None
except json.JSONDecodeError:
return None
def get_pr_info(pr_number: int | None = None) -> dict[str, Any] | None:
"""Get PR info, optionally by number or for current branch."""
args = ["pr", "view", "--json", "number,url,headRefName,baseRefName"]
if pr_number:
args.insert(2, str(pr_number))
return run_gh(args)
def get_checks(pr_number: int | None = None) -> list[dict[str, Any]]:
"""Get all checks for a PR by parsing tab-separated gh output."""
args = ["gh", "pr", "checks"]
if pr_number:
args.append(str(pr_number))
try:
result = subprocess.run(
args,
capture_output=True,
text=True,
)
if not result.stdout.strip():
return []
checks = []
for line in result.stdout.strip().split("\n"):
if not line.strip():
continue
parts = line.split("\t")
if len(parts) >= 2:
checks.append({
"name": parts[0].strip(),
"bucket": parts[1].strip(),
"link": parts[3].strip() if len(parts) > 3 else "",
"workflow": "",
})
return checks
except Exception:
return []
def get_failed_runs(branch: str) -> list[dict[str, Any]]:
"""Get recent failed workflow runs for a branch."""
result = run_gh([
"run", "list",
"--branch", branch,
"--limit", "10",
"--json", "databaseId,name,status,conclusion,headSha"
])
if not isinstance(result, list):
return []
    # Return only runs that concluded in failure
    return [r for r in result if r.get("conclusion") == "failure"]
def extract_failure_snippet(log_text: str, max_lines: int = 50) -> str:
"""Extract relevant failure snippet from log text.
Looks for common failure markers and extracts surrounding context.
"""
lines = log_text.split("\n")
# Patterns that indicate failure points (case-insensitive via re.IGNORECASE)
failure_patterns = [
r"error[:\s]",
r"failed[:\s]",
r"failure[:\s]",
r"traceback",
r"exception",
r"assert(ion)?.*failed",
r"FAILED",
r"panic:",
r"fatal:",
r"npm ERR!",
r"yarn error",
r"ModuleNotFoundError",
r"ImportError",
r"SyntaxError",
r"TypeError",
r"ValueError",
r"KeyError",
r"AttributeError",
r"NameError",
r"IndentationError",
r"===.*FAILURES.*===",
r"___.*___", # pytest failure separators
]
combined_pattern = "|".join(failure_patterns)
# Find lines matching failure patterns
failure_indices = []
for i, line in enumerate(lines):
if re.search(combined_pattern, line, re.IGNORECASE):
failure_indices.append(i)
if not failure_indices:
# No clear failure point, return last N lines
return "\n".join(lines[-max_lines:])
# Extract context around first failure point
# Include some context before and after
first_failure = failure_indices[0]
start = max(0, first_failure - 5)
end = min(len(lines), first_failure + max_lines - 5)
snippet_lines = lines[start:end]
# If there are more failures after our snippet, note it
remaining_failures = [i for i in failure_indices if i >= end]
if remaining_failures:
snippet_lines.append(f"\n... ({len(remaining_failures)} more error(s) follow)")
return "\n".join(snippet_lines)
def get_run_logs(run_id: int) -> str | None:
"""Get failed logs for a workflow run."""
try:
result = subprocess.run(
["gh", "run", "view", str(run_id), "--log-failed"],
capture_output=True,
text=True,
timeout=60,
)
return result.stdout if result.stdout else result.stderr
except subprocess.TimeoutExpired:
return None
except subprocess.CalledProcessError:
return None
def main():
parser = argparse.ArgumentParser(description="Fetch PR CI checks with failure snippets")
parser.add_argument("--pr", type=int, help="PR number (defaults to current branch PR)")
args = parser.parse_args()
# Get PR info
pr_info = get_pr_info(args.pr)
if not pr_info:
print(json.dumps({"error": "No PR found for current branch"}))
sys.exit(1)
pr_number = pr_info["number"]
branch = pr_info["headRefName"]
# Get checks
checks = get_checks(pr_number)
# Process checks and add failure snippets
processed_checks = []
failed_runs = None # Lazy load
for check in checks:
processed = {
"name": check.get("name", "unknown"),
"status": check.get("bucket", check.get("state", "unknown")),
"link": check.get("link", ""),
"workflow": check.get("workflow", ""),
}
# For failures, try to get log snippet
if processed["status"] == "fail":
if failed_runs is None:
failed_runs = get_failed_runs(branch)
# Find matching run by workflow name
workflow_name = processed["workflow"] or processed["name"]
matching_run = next(
(r for r in failed_runs if workflow_name in r.get("name", "")),
None
)
if matching_run:
logs = get_run_logs(matching_run["databaseId"])
if logs:
processed["log_snippet"] = extract_failure_snippet(logs)
processed["run_id"] = matching_run["databaseId"]
processed_checks.append(processed)
# Build output
output = {
"pr": {
"number": pr_number,
"url": pr_info.get("url", ""),
"branch": branch,
"base": pr_info.get("baseRefName", ""),
},
"summary": {
"total": len(processed_checks),
"passed": sum(1 for c in processed_checks if c["status"] == "pass"),
"failed": sum(1 for c in processed_checks if c["status"] == "fail"),
"pending": sum(1 for c in processed_checks if c["status"] == "pending"),
"skipped": sum(1 for c in processed_checks if c["status"] in ("skipping", "cancel")),
},
"checks": processed_checks,
}
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()
```
### scripts/fetch_pr_feedback.py
```python
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# ///
"""
Fetch and categorize PR review feedback.
Usage:
python fetch_pr_feedback.py [--pr PR_NUMBER]
If --pr is not specified, uses the PR for the current branch.
Output: JSON to stdout with categorized feedback.
Categories (using LOGAF scale - see https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale):
- high: Must address before merge (h:, blocker, changes requested)
- medium: Should address (m:, standard feedback)
- low: Optional suggestions (l:, nit, style)
- bot: Informational automated comments (Codecov, Dependabot, etc.)
- resolved: Already resolved threads
Bot classification:
- Review bots (Sentry, Warden, Cursor, Bugbot, etc.) provide actionable code
feedback. Their comments are categorized by content into high/medium/low with
a ``review_bot: true`` flag — they are NOT placed in the ``bot`` bucket.
- Info bots (Codecov, Dependabot, Renovate, etc.) post status reports and are
placed in the ``bot`` bucket for silent skipping.
"""
from __future__ import annotations
import argparse
import json
import re
import subprocess
import sys
from typing import Any
# Bots that provide actionable code review feedback (security issues, lint
# violations, bugs). Their comments are categorized by content, not skipped.
REVIEW_BOT_PATTERNS = [
r"(?i)^sentry",
r"(?i)^warden",
r"(?i)^cursor",
r"(?i)^bugbot",
r"(?i)^seer",
r"(?i)^copilot",
r"(?i)^codex",
r"(?i)^claude",
r"(?i)^codeql",
]
# Bots that post informational status reports (coverage, dependency updates).
# These are placed in the ``bot`` bucket and skipped silently.
INFO_BOT_PATTERNS = [
r"(?i)^codecov",
r"(?i)^dependabot",
r"(?i)^renovate",
r"(?i)^github-actions",
r"(?i)^mergify",
r"(?i)^semantic-release",
r"(?i)^sonarcloud",
r"(?i)^snyk",
r"(?i)bot$",
r"(?i)\[bot\]$",
]
def run_gh(args: list[str]) -> dict[str, Any] | list[Any] | None:
"""Run a gh CLI command and return parsed JSON output."""
try:
result = subprocess.run(
["gh"] + args,
capture_output=True,
text=True,
check=True,
)
return json.loads(result.stdout) if result.stdout.strip() else None
except subprocess.CalledProcessError as e:
print(f"Error running gh {' '.join(args)}: {e.stderr}", file=sys.stderr)
return None
except json.JSONDecodeError:
return None
def get_repo_info() -> tuple[str, str] | None:
"""Get owner and repo name from current directory."""
result = run_gh(["repo", "view", "--json", "owner,name"])
if result:
return result.get("owner", {}).get("login"), result.get("name")
return None
def get_pr_info(pr_number: int | None = None) -> dict[str, Any] | None:
"""Get PR info, optionally by number or for current branch."""
args = ["pr", "view", "--json", "number,url,headRefName,author,reviews,reviewDecision"]
if pr_number:
args.insert(2, str(pr_number))
return run_gh(args)
def is_review_bot(username: str) -> bool:
"""Check if username matches a review bot that posts actionable feedback."""
return any(re.search(p, username) for p in REVIEW_BOT_PATTERNS)
def is_info_bot(username: str) -> bool:
"""Check if username matches an informational bot (skip silently)."""
return any(re.search(p, username) for p in INFO_BOT_PATTERNS)
def is_bot(username: str) -> bool:
"""Check if username matches any known bot pattern."""
return is_review_bot(username) or is_info_bot(username)
def get_review_comments(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]:
"""Get inline code review comments via API."""
result = run_gh([
"api",
f"repos/{owner}/{repo}/pulls/{pr_number}/comments",
"--paginate",
])
return result if isinstance(result, list) else []
def get_issue_comments(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]:
"""Get PR conversation comments (includes bot comments)."""
result = run_gh([
"api",
f"repos/{owner}/{repo}/issues/{pr_number}/comments",
"--paginate",
])
return result if isinstance(result, list) else []
def get_review_threads(owner: str, repo: str, pr_number: int) -> list[dict[str, Any]]:
"""Get review threads with resolution status via GraphQL."""
query = """
query($owner: String!, $repo: String!, $pr: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $pr) {
reviewThreads(first: 100) {
nodes {
id
isResolved
isOutdated
path
line
comments(first: 10) {
nodes {
id
body
author {
login
}
createdAt
}
}
}
}
}
}
}
"""
try:
result = subprocess.run(
[
"gh", "api", "graphql",
"-f", f"query={query}",
"-F", f"owner={owner}",
"-F", f"repo={repo}",
"-F", f"pr={pr_number}",
],
capture_output=True,
text=True,
check=True,
)
data = json.loads(result.stdout)
threads = data.get("data", {}).get("repository", {}).get("pullRequest", {}).get("reviewThreads", {}).get("nodes", [])
return threads
except (subprocess.CalledProcessError, json.JSONDecodeError):
return []
def detect_logaf(body: str) -> str | None:
"""Detect LOGAF scale markers in comment body.
LOGAF scale (https://develop.sentry.dev/engineering-practices/code-review/#logaf-scale):
- l: / [l] / low: → low priority (optional)
- m: / [m] / medium: → medium priority (should address)
- h: / [h] / high: → high priority (must address)
Returns 'high', 'medium', 'low', or None if no marker found.
"""
# Check for LOGAF markers at start of comment (with optional whitespace)
logaf_patterns = [
# h: or [h] or high: patterns
(r"^\s*(?:h:|h\s*:|high:|\[h\])", "high"),
# m: or [m] or medium: patterns
(r"^\s*(?:m:|m\s*:|medium:|\[m\])", "medium"),
# l: or [l] or low: patterns
(r"^\s*(?:l:|l\s*:|low:|\[l\])", "low"),
]
for pattern, level in logaf_patterns:
if re.search(pattern, body, re.IGNORECASE):
return level
return None
def categorize_comment(comment: dict[str, Any], body: str) -> str:
"""Categorize a comment based on content and author.
Uses LOGAF scale: high (must fix), medium (should fix), low (optional).
"""
author = comment.get("author", {}).get("login", "") or comment.get("user", {}).get("login", "")
# Info bots are skipped silently; review bots fall through to content
# categorization so their actionable feedback is not lost.
if is_info_bot(author) and not is_review_bot(author):
return "bot"
# Check for explicit LOGAF markers first
logaf_level = detect_logaf(body)
if logaf_level:
return logaf_level
# Look for high-priority (blocking) indicators
high_patterns = [
r"(?i)must\s+(fix|change|update|address)",
r"(?i)this\s+(is\s+)?(wrong|incorrect|broken|buggy)",
r"(?i)security\s+(issue|vulnerability|concern)",
r"(?i)will\s+(break|cause|fail)",
r"(?i)critical",
r"(?i)blocker",
]
for pattern in high_patterns:
if re.search(pattern, body):
return "high"
# Look for low-priority (suggestion) indicators
low_patterns = [
r"(?i)nit[:\s]",
r"(?i)nitpick",
r"(?i)suggestion[:\s]",
r"(?i)consider\s+",
r"(?i)could\s+(also\s+)?",
r"(?i)might\s+(want\s+to|be\s+better)",
r"(?i)optional[:\s]",
r"(?i)minor[:\s]",
r"(?i)style[:\s]",
r"(?i)prefer\s+",
r"(?i)what\s+do\s+you\s+think",
r"(?i)up\s+to\s+you",
r"(?i)take\s+it\s+or\s+leave",
r"(?i)fwiw",
]
for pattern in low_patterns:
if re.search(pattern, body):
return "low"
# Default to medium for non-bot comments without clear indicators
return "medium"
def extract_feedback_item(
body: str,
author: str,
path: str | None = None,
line: int | None = None,
url: str | None = None,
is_resolved: bool = False,
is_outdated: bool = False,
review_bot: bool = False,
thread_id: str | None = None,
) -> dict[str, Any]:
"""Create a standardized feedback item."""
# Truncate long bodies for summary
summary = body[:200] + "..." if len(body) > 200 else body
summary = summary.replace("\n", " ").strip()
item = {
"author": author,
"body": summary,
"full_body": body,
}
if path:
item["path"] = path
if line:
item["line"] = line
if url:
item["url"] = url
if is_resolved:
item["resolved"] = True
if is_outdated:
item["outdated"] = True
if review_bot:
item["review_bot"] = True
if thread_id:
item["thread_id"] = thread_id
return item
def main():
parser = argparse.ArgumentParser(description="Fetch and categorize PR feedback")
parser.add_argument("--pr", type=int, help="PR number (defaults to current branch PR)")
args = parser.parse_args()
# Get repo info
repo_info = get_repo_info()
if not repo_info:
print(json.dumps({"error": "Could not determine repository"}))
sys.exit(1)
owner, repo = repo_info
# Get PR info
pr_info = get_pr_info(args.pr)
if not pr_info:
print(json.dumps({"error": "No PR found for current branch"}))
sys.exit(1)
pr_number = pr_info["number"]
pr_author = pr_info.get("author", {}).get("login", "")
# Get review decision
review_decision = pr_info.get("reviewDecision", "")
# Categorized feedback using LOGAF scale
feedback = {
"high": [], # Must address before merge
"medium": [], # Should address
"low": [], # Optional suggestions
"bot": [],
"resolved": [],
}
# Process reviews for overall status
reviews = pr_info.get("reviews", [])
for review in reviews:
if review.get("state") == "CHANGES_REQUESTED":
author = review.get("author", {}).get("login", "")
body = review.get("body", "")
if body and author != pr_author:
item = extract_feedback_item(body, author)
item["type"] = "changes_requested"
feedback["high"].append(item)
# Get review threads (inline comments with resolution status)
threads = get_review_threads(owner, repo, pr_number)
seen_thread_ids = set()
for thread in threads:
if not thread.get("comments", {}).get("nodes"):
continue
first_comment = thread["comments"]["nodes"][0]
author = first_comment.get("author", {}).get("login", "")
body = first_comment.get("body", "")
# Skip if author is PR author (self-comments)
if author == pr_author:
continue
# Skip empty or very short comments
if not body or len(body.strip()) < 3:
continue
is_resolved = thread.get("isResolved", False)
is_outdated = thread.get("isOutdated", False)
thread_id = thread.get("id")
item = extract_feedback_item(
body=body,
author=author,
path=thread.get("path"),
line=thread.get("line"),
is_resolved=is_resolved,
is_outdated=is_outdated,
thread_id=thread_id,
)
if thread_id:
seen_thread_ids.add(thread_id)
if is_resolved:
feedback["resolved"].append(item)
elif is_review_bot(author):
category = categorize_comment(first_comment, body)
item["review_bot"] = True
feedback[category].append(item)
elif is_info_bot(author):
feedback["bot"].append(item)
else:
category = categorize_comment(first_comment, body)
feedback[category].append(item)
# Get issue comments (general PR conversation)
issue_comments = get_issue_comments(owner, repo, pr_number)
for comment in issue_comments:
author = comment.get("user", {}).get("login", "")
body = comment.get("body", "")
# Skip if author is PR author
if author == pr_author:
continue
# Skip empty comments
if not body or len(body.strip()) < 3:
continue
item = extract_feedback_item(
body=body,
author=author,
url=comment.get("html_url"),
)
if is_review_bot(author):
category = categorize_comment(comment, body)
item["review_bot"] = True
feedback[category].append(item)
elif is_info_bot(author):
feedback["bot"].append(item)
else:
category = categorize_comment(comment, body)
feedback[category].append(item)
# Count review bot items across priority buckets
review_bot_count = sum(
1 for bucket in ("high", "medium", "low")
for item in feedback[bucket]
if item.get("review_bot")
)
# Build output
output = {
"pr": {
"number": pr_number,
"url": pr_info.get("url", ""),
"author": pr_author,
"review_decision": review_decision,
},
"summary": {
"high": len(feedback["high"]),
"medium": len(feedback["medium"]),
"low": len(feedback["low"]),
"bot_comments": len(feedback["bot"]),
"resolved": len(feedback["resolved"]),
"review_bot_feedback": review_bot_count,
"needs_attention": len(feedback["high"]) + len(feedback["medium"]),
},
"feedback": feedback,
}
# Add actionable summary based on LOGAF priorities
if feedback["high"]:
output["action_required"] = "Address high-priority feedback before merge"
elif feedback["medium"]:
output["action_required"] = "Address medium-priority feedback"
elif feedback["low"]:
output["action_required"] = "Review low-priority suggestions - ask user which to address"
else:
output["action_required"] = None
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()
```
Originally by the Sentry team, adapted here as an Agent Skills-compatible SKILL.md.
This skill follows the Agent Skills open standard, supported by Claude Code, Cursor, Codex, Gemini CLI, and 20+ more editors.