feat(iterate-pr): Add --summary flag to check and feedback scripts #97
Conversation
Adds a `--summary` flag to both `fetch_pr_checks.py` and `fetch_pr_feedback.py` that outputs compact human-readable text instead of full JSON. This avoids the need for `python3 -c` piping to extract key info during polling loops.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 3 potential issues.
```python
elif s["pending"]:
    lines.append(f" {s['pending']} pending")
else:
    lines.append(" All checks passed")
```
Summary shows "All checks passed" with skipped checks
Medium Severity
When checks have "skipping" or "cancel" status, s["failed"] and s["pending"] are both 0, so format_summary prints "All checks passed" even though skipped/cancelled checks exist. Meanwhile the passed/total ratio (e.g. "3/5 passed") contradicts this message. The branching logic doesn't account for the skipped count tracked in the summary dict.
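A minimal reproduction of the contradiction described above (the summary dict shape and the header line are assumptions based on the review comment, not the full source):

```python
# Assumed summary shape: 5 checks total, 3 passed, 2 skipped,
# with no failures and nothing pending.
s = {"total": 5, "passed": 3, "failed": 0, "pending": 0, "skipped": 2}

lines = [f"  {s['passed']}/{s['total']} passed"]  # assumed header line
if s["failed"]:
    lines.append(f"  {s['failed']} failed, {s['pending']} pending")
elif s["pending"]:
    lines.append(f"  {s['pending']} pending")
else:
    # Bug: reached even though 2 checks were skipped,
    # contradicting the "3/5 passed" header above.
    lines.append("  All checks passed")

print("\n".join(lines))
```

With both `failed` and `pending` at 0, the `else` branch fires regardless of the skipped count, so the output reads "3/5 passed" followed by "All checks passed".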
| """Format a compact summary of PR feedback.""" | ||
| s = output["summary"] | ||
| pr = output["pr"] | ||
| lines = [f"PR #{pr['number']} — {s['needs_attention']} need attention ({s['high']} high, {s['medium']} medium, {s['low']} low)"] |
Possible missing needs_attention key causes KeyError
High Severity
format_summary accesses s['needs_attention'] from the summary dict, but the summary construction (not shown in full in the diff) may not include a needs_attention key. If this key doesn't exist, calling --summary will raise a KeyError at runtime, making the entire feature non-functional.
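One defensive sketch is to fall back to a derived count via `dict.get` instead of indexing directly. The field names mirror the diff; whether `needs_attention` is actually absent depends on the summary construction not shown here:

```python
# Sketch: summary dict missing the "needs_attention" key (assumed shape).
s = {"high": 1, "medium": 2, "low": 0}
pr = {"number": 97}

# Derive the count when the key is absent instead of raising KeyError.
needs_attention = s.get("needs_attention", s["high"] + s["medium"] + s["low"])
line = (f"PR #{pr['number']} — {needs_attention} need attention "
        f"({s['high']} high, {s['medium']} medium, {s['low']} low)")
print(line)
```

The cleaner fix is of course to guarantee the key in the summary construction itself; the `.get` fallback just keeps `--summary` from crashing in the meantime.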
```python
elif s["pending"]:
    lines.append(f" {s['pending']} pending")
else:
    lines.append(" All checks passed")
```
Summary incorrectly says "All checks passed" when checks skipped
Medium Severity
When checks have "skipping" or "cancel" status, both s["failed"] and s["pending"] are 0, so the else branch prints "All checks passed." However, the header line would show something like "3/5 passed" since total includes skipped checks. This contradiction is misleading — the summary claims all checks passed while the counts indicate otherwise, and the non-pass list below would still show [SKIPPING]/[CANCEL] entries.
| if s["failed"]: | ||
| lines.append(f" {s['failed']} failed, {s['pending']} pending") | ||
| elif s["pending"]: | ||
| lines.append(f" {s['pending']} pending") | ||
| else: | ||
| lines.append(" All checks passed") |
Bug: The summary logic reports "All checks passed" when checks are skipped or cancelled, as it only considers failed and pending statuses, creating a contradictory output.
Severity: MEDIUM
Suggested Fix
Modify the conditional logic in `format_summary` to also check for `s["skipped"] > 0`. If there are skipped checks, report them in the summary line instead of defaulting to "All checks passed". For example, add a condition like `elif s["skipped"]:` and report the number of skipped checks.
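A sketch of that suggestion, assuming the summary dict tracks a `skipped` count as the review comment states:

```python
# Assumed summary shape with a tracked "skipped" count.
s = {"total": 5, "passed": 3, "failed": 0, "pending": 0, "skipped": 2}

lines = []
if s["failed"]:
    lines.append(f"  {s['failed']} failed, {s['pending']} pending")
elif s["pending"]:
    lines.append(f"  {s['pending']} pending")
elif s["skipped"]:
    # New branch: surface skipped/cancelled checks rather than
    # falling through to "All checks passed".
    lines.append(f"  {s['skipped']} skipped")
else:
    lines.append("  All checks passed")

print("\n".join(lines))
```

With this branch in place, "All checks passed" is only printed when failed, pending, and skipped are all zero, so the summary line agrees with the passed/total ratio.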
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.

Location: plugins/sentry-skills/skills/iterate-pr/scripts/fetch_pr_checks.py#L177-L182

Potential issue: The script's summary generation in `format_summary` incorrectly reports "All checks passed" when there are no `failed` or `pending` checks, even if some checks were `skipped` or `cancelled`. The conditional logic at lines 177-182 does not account for the `skipped` status. This leads to a contradictory output where the summary says all checks passed, but the detailed list shows skipped checks. This could mislead an automated agent or user into believing the CI process is complete and successful when it is not.


Changed to draft: I'll first try an alternative approach of convincing Claude not to use the intermediate parsing.
Summary
- Adds a `--summary` flag to `fetch_pr_checks.py` and `fetch_pr_feedback.py` that outputs compact human-readable text instead of full JSON
- Use `--summary` for status checks, falling back to full JSON when details are needed

Why
During the iterate-pr polling loop, the agent pipes full JSON output through `python3 -c "import json,sys; d=json.load(sys.stdin)..."` just to extract the summary and non-passing checks. This wastes context-window tokens on throwaway formatting code and requires an extra tool call and permission check each time. A built-in `--summary` flag lets the scripts output exactly what's needed for a quick status check, keeping the polling loop lean.

Test plan
- `fetch_pr_checks.py --summary` on a PR with mixed pass/fail checks
- `fetch_pr_feedback.py --summary` on a PR with review comments
- `--summary`

🤖 Generated with Claude Code