Harden CI for self-hosted SLURM runners #1135
sbryngelson wants to merge 29 commits into MFlowCode:master from
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
read -t 0.1 (sub-second timeout) in a loop with process substitution file descriptors triggers a bash internal error (unwind_frame_run: read_builtin: frame not found) leading to a segfault. Use integer timeout (read -t 1) instead. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
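For context, here is a minimal sketch of the failing pattern and its fix, assuming an illustrative output file name and loop structure (not a verbatim excerpt of monitor_slurm_job.sh):

```bash
#!/usr/bin/env bash
# Illustrative only: read from a process-substitution FD with an *integer*
# timeout. A fractional timeout (read -t 0.1) in this kind of loop is what
# triggered the "unwind_frame_run: read_builtin: frame not found" crash.
output_file="job.out"   # assumed name for illustration

# Attach a tail process to a dedicated file descriptor.
exec 3< <(tail -n +1 -F "$output_file" 2>/dev/null)

while true; do
    if read -r -t 1 line <&3; then      # integer timeout, not 0.1
        echo "$line"
    else
        status=$?
        # >128 means the read timed out (no new output yet); otherwise EOF/error.
        [ "$status" -gt 128 ] || break
        # ... heartbeat / job-state checks would go here ...
    fi
done

exec 3<&-   # close the tail FD
```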
CodeAnt AI is reviewing your PR.
CodeAnt AI finished reviewing your PR.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough: Refactors SLURM monitoring to use state-driven polling, improves exit-code extraction and tailing behavior, and adds cancellation on abnormal exit; updates GPU SBATCH requests to target H200 GPUs. A sketch of the terminal-state handling follows the sequence diagram below.
Sequence Diagram(s)
sequenceDiagram
participant Monitor as monitor_slurm_job.sh
participant SlurmCtl as squeue/sacct/scontrol
participant JobFS as Job output file (filesystem)
participant Tail as tail process
Monitor->>SlurmCtl: get_job_state(job_id) (squeue → sacct fallback)
SlurmCtl-->>Monitor: STATE (PENDING/RUNNING/COMPLETED/UNKNOWN)
alt STATE PENDING/CONFIGURING
Monitor->>Monitor: sleep (~10s), continue polling
else STATE RUNNING/COMPLETING
Monitor->>Tail: start non-blocking tail (1s timeout, burst cap)
Tail->>JobFS: read new lines
JobFS-->>Tail: new output
Tail-->>Monitor: heartbeat + output
else STATE TERMINAL
Monitor->>Tail: stop tail, drain remaining lines (1s timeout, cap)
Monitor->>SlurmCtl: scontrol show job -> ExitCode (fallback sacct)
SlurmCtl-->>Monitor: ExitCode or UNKNOWN
alt ExitCode == 0
Monitor-->>Monitor: set monitor_success, exit 0
else
Monitor->>SlurmCtl: scancel job_id (if needed)
Monitor-->>Monitor: exit non-zero
end
else STATE UNKNOWN
Monitor->>Monitor: periodic warning, longer sleep, retry
end
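The terminal-state branch in the diagram can be read as roughly the following shape. This is a hedged sketch: the exact scontrol/sacct parsing, variable names, and control flow in monitor_slurm_job.sh may differ.

```bash
#!/usr/bin/env bash
# Sketch of orphan cleanup + exit-code extraction; not a verbatim excerpt.
set -euo pipefail

job_id="$1"
monitor_success=0

cleanup() {
    # Cancel the job if the monitor exits without a clean finish.
    if [ "$monitor_success" -ne 1 ]; then
        scancel "$job_id" 2>/dev/null || true
    fi
}
trap cleanup EXIT

get_exit_code() {
    local code
    # scontrol reports "ExitCode=N:M" while the job is still known to slurmctld.
    code=$(scontrol show job "$job_id" 2>/dev/null \
        | grep -o 'ExitCode=[0-9]*:[0-9]*' | cut -d= -f2 | cut -d: -f1 || true)
    if [ -z "$code" ]; then
        # Fallback: accounting data, job-level only (-X), parsable (-P).
        code=$(sacct -j "$job_id" -X -P -n -o ExitCode 2>/dev/null \
            | head -n1 | cut -d: -f1 || true)
    fi
    echo "${code:-UNKNOWN}"
}

exit_code=$(get_exit_code)
if [ "$exit_code" = "0" ]; then
    monitor_success=1
    exit 0
fi
echo "Job $job_id finished with exit code $exit_code"
exit 1
```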
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
Pull request overview
Updates the Phoenix (GT) GitHub Actions SLURM submission scripts to target H200 GPU resources for improved scheduling, and adjusts the SLURM job monitor script to avoid a Bash segfault caused by fractional read -t timeouts.
Changes:
- Update Phoenix GPU `sbatch` directives to request H200 GPUs and adjust task count per node.
- Replace fractional `read -t` timeouts with integer timeouts in the SLURM job monitor script.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| .github/workflows/phoenix/submit.sh | Switch GPU submission options to request H200 GPUs and increase --ntasks-per-node. |
| .github/workflows/phoenix/submit-bench.sh | Same as above for benchmark submissions. |
| .github/scripts/monitor_slurm_job.sh | Use integer read -t timeouts to prevent Bash segfaults during output streaming. |
PR MFlowCode#1124 changed bench.yml to use workflow_run (triggered after Test Suite completes), which broke the approve-to-run flow for fork PRs. Revert to the original pull_request + pull_request_review triggers while keeping improvements (frontier_amd matrix, concurrency group, timeout, run_parallel_benchmarks.sh). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Codecov Report: ✅ All modified and coverable lines are covered by tests. Additional details and impacted files:
@@ Coverage Diff @@
## master #1135 +/- ##
=======================================
Coverage 44.07% 44.07%
=======================================
Files 70 70
Lines 20431 20431
Branches 1974 1974
=======================================
Hits 9004 9004
Misses 10291 10291
Partials 1136 1136
☔ View full report in Codecov by Sentry.
Write failed test UUIDs to tests/failed_uuids.txt after a test run. In CI, if 1-5 tests fail, automatically re-run just those tests. If 6+ fail, treat it as a real issue and fail immediately. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
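A rough sketch of that retry policy as a CI step, assuming the file path and thresholds from the commit message; the exact `./mfc.sh test` re-run flags and selector are assumptions, not the real workflow:

```bash
# Hypothetical retry step; thresholds (1-5 retry, 6+ fail) from the commit message.
if [ -f tests/failed_uuids.txt ]; then
    NUM_FAILED=$(wc -l < tests/failed_uuids.txt)
    if [ "$NUM_FAILED" -ge 1 ] && [ "$NUM_FAILED" -le 5 ]; then
        # A handful of failures: likely flaky, re-run just those UUIDs.
        FAILED=$(tr '\n' ' ' < tests/failed_uuids.txt)
        # shellcheck disable=SC2086  # word-splitting of UUIDs is intentional
        ./mfc.sh test -v $FAILED || exit $?   # exact re-run flags are an assumption
    else
        echo "Too many failed tests ($NUM_FAILED); treating as a real failure."
        exit 1
    fi
fi
```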
CodeAnt AI is running Incremental review.
CodeAnt AI Incremental review completed.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name=".github/workflows/test.yml">
<violation number="1" location=".github/workflows/test.yml:138">
P2: The initial test run is forced to succeed with `|| true`, so real failures can be masked when `tests/failed_uuids.txt` is missing. Preserve the exit code and fail the job if no retry is performed.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/test.yml:
- Around line 137-153: The test step currently uses "|| true" after invoking
mfc.sh which hides crashes; remove that and capture the test command's exit
status (run mfc.sh test -v ... and set TEST_EXIT=$? or use if ! ...; then
TEST_EXIT=$? else TEST_EXIT=0) so you can decide later; keep the existing retry
logic that checks tests/failed_uuids.txt (NUM_FAILED, FAILED) and, after that
block, if tests/failed_uuids.txt does not exist and TEST_EXIT is non‑zero then
exit with TEST_EXIT (or otherwise propagate the original non‑zero exit code) to
avoid reporting success on a crash. Ensure you reference the same mfc.sh
invocation and tests/failed_uuids.txt and use TEST_EXIT when making the final
exit decision.
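A sketch of what that fix could look like; the actual test arguments are elided, and `$TEST_ARGS` below is a stand-in for them:

```bash
# Capture the first run's status instead of masking it with "|| true".
TEST_EXIT=0
./mfc.sh test -v $TEST_ARGS || TEST_EXIT=$?   # $TEST_ARGS stands in for the elided arguments

if [ ! -f tests/failed_uuids.txt ] && [ "$TEST_EXIT" -ne 0 ]; then
    # The framework crashed before writing the failure list: propagate the code.
    exit "$TEST_EXIT"
fi
# Otherwise the targeted retry block decides the final pass/fail.
```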
🧹 Nitpick comments (2)
toolchain/mfc/test/test.py (1)
209-216: Race condition: `failed_tests` is mutated from worker threads without synchronization.
`failed_tests` is appended to in `handle_case` (line 505), which runs in worker threads. While CPython's GIL makes `list.append` atomic, iterating over `failed_tests` here (line 213) is safe only because `sched.sched` (line 179) has already joined all workers by this point. This is fine but fragile — a future refactor that moves this block could introduce a bug. A brief comment would help.
.github/workflows/test.yml (1)
144-144: Minor: useless use of `cat`.
`tr '\n' ' ' < tests/failed_uuids.txt` is slightly cleaner, though this is purely cosmetic.
Don't mask non-zero exit codes when tests crash before writing failed_uuids.txt. Only suppress the exit code when the file exists (meaning the test framework ran to completion and we can retry). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace squeue exit-code polling with get_job_state() that parses the actual state string (squeue + sacct fallback). Never give up on UNKNOWN state — CI timeout is the backstop. Cancel orphaned SLURM jobs on abnormal monitor exit. Include job state in heartbeats. Incorporates changes from PR MFlowCode#1140. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
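A minimal sketch of such a state query, assuming standard squeue/sacct output formats; the real get_job_state() in monitor_slurm_job.sh may differ in detail:

```bash
# Hardened for `set -euo pipefail`: `|| true` keeps transient SLURM outages
# from killing the monitor; an empty result becomes "UNKNOWN".
get_job_state() {
    local job_id="$1" state
    state=$(squeue -j "$job_id" -h -o '%T' 2>/dev/null || true)
    if [ -z "$state" ]; then
        # Job left the queue: fall back to accounting (job-level only, parsable).
        state=$(sacct -j "$job_id" -X -P -n -o State 2>/dev/null | head -n1 || true)
        state=${state%% *}   # strip e.g. "CANCELLED by 12345" down to "CANCELLED"
    fi
    echo "${state:-UNKNOWN}"
}

is_terminal_state() {
    case "$1" in
        COMPLETED|FAILED|CANCELLED*|TIMEOUT|OUT_OF_MEMORY|NODE_FAIL|PREEMPTED|BOOT_FAIL) return 0 ;;
        *) return 1 ;;
    esac
}
```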
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name=".github/scripts/monitor_slurm_job.sh">
<violation number="1" location=".github/scripts/monitor_slurm_job.sh:40">
P2: Guard the `squeue` pipeline so transient command failures don't abort the script under `set -euo pipefail`; otherwise a temporary SLURM outage exits the monitor instead of returning "UNKNOWN".</violation>
</file>
Without || exit $?, a failed retry would silently exit 0. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CodeAnt AI is running Incremental review.
CodeAnt AI Incremental review completed.
Revert test.yml to clean: true (default) — the corrupted build cache from the ioctl failure was causing 100% test failures. The Lustre-safe cleanup is only needed for bench.yml where pr/master are separate trees. Also tune qodo PR reviewer: reduce max findings to 5, lower suggestion depth to medium, and add instructions to focus on correctness over style for CI scripts. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Lustre-safe cleanup step was wiping the build cache (pr/build/, master/build/), forcing full rebuilds every run. This added ~32 min of build time and pushed NVHPC gpu-omp benchmarks past the 4h SLURM limit. Restore default checkout behavior to preserve build cache across runs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CodeAnt AI is running Incremental review.
CodeAnt AI Incremental review completed.
Split Frontier GPU test configs into 2 shards (~75 min each) so they fit within the batch partition's 2h wall time limit. This allows all Frontier SLURM jobs to run concurrently instead of serially on the extended partition (which has a 1-job-per-user limit), reducing total CI wall clock from ~4.5h to ~2h.

Changes:
- Add --shard CLI argument (e.g., --shard 1/2) with modulo-based round-robin distribution across shards (see the sketch below)
- Switch Frontier submit scripts from extended to batch/hackathon (CFD154 account, 1h59m wall time)
- Shard the 3 Frontier GPU matrix entries into 6 (2 shards each)
- CPU entries remain unsharded

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
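A sketch of the modulo-based distribution; the function names and argument handling below are illustrative, not the actual toolchain/mfc/test/test.py code, and the shard spec is validated up front:

```python
# Illustrative sketch of --shard N/M handling; names are assumptions.
import argparse

def parse_shard(spec: str) -> tuple[int, int]:
    """Parse '1/2' into (index, count) and validate both parts."""
    try:
        index_s, count_s = spec.split("/")
        index, count = int(index_s), int(count_s)
    except ValueError as exc:
        raise argparse.ArgumentTypeError(f"invalid --shard '{spec}', expected N/M") from exc
    if count < 1 or not 1 <= index <= count:
        raise argparse.ArgumentTypeError(f"--shard index must be within 1..{count}")
    return index, count

def select_shard(cases: list, index: int, count: int) -> list:
    """Round-robin: case i belongs to shard (i % count) + 1."""
    return [c for i, c in enumerate(cases) if i % count == index - 1]

# Example: with 6 cases, --shard 1/2 gets cases 0, 2, 4 and --shard 2/2 gets 1, 3, 5.
```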
1 issue found across 7 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="toolchain/mfc/test/test.py">
<violation number="1" location="toolchain/mfc/test/test.py:103">
P2: Validate the `--shard` argument before using it; as written, invalid or `0` shard counts will raise exceptions (including ZeroDivisionError) during modulo operations.</violation>
</file>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move build retry logic from shell scripts to GHA using nick-fields/retry with 60s backoff between attempts. This gives better visibility into retries and lets login node memory pressure subside between attempts. Also reduce build parallelism from -j 8 to -j 4 to lower peak memory on shared Frontier login nodes, and remove the outdated Node 16 version overrides from self-hosted runner env. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
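A hedged sketch of that retry step; the step name, build command, and timeout value are assumptions, while `retry_wait_seconds: 60` and `-j 4` come from the commit message:

```yaml
- name: Build (retry on login-node flakiness)
  uses: nick-fields/retry@v3
  with:
    max_attempts: 3
    retry_wait_seconds: 60   # let login-node memory pressure subside between attempts
    timeout_minutes: 120     # the action requires a timeout; this value is an assumption
    command: |
      # actual build invocation assumed; -j 4 matches the reduced parallelism
      ./mfc.sh build -j 4
```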
CodeAnt AI is running Incremental review.
CodeAnt AI Incremental review completed.
Without set -e, the benchmark build loop could silently ignore failures of earlier benchmarks if a later one succeeded, since only the last command's exit code would propagate. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
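In shell terms, the concern is roughly the difference below (a sketch, not the exact build.sh contents):

```bash
#!/usr/bin/env bash
# With `set -e`, the loop aborts on the first failing benchmark; without it,
# only the last iteration's exit code would decide the script's status.
set -e

# $build_opts is set earlier in the real script.
for dir in benchmarks/*/; do
    ./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
done
```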
3 issues found across 4 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".github/workflows/frontier/build.sh">
<violation number="1" location=".github/workflows/frontier/build.sh:23">
P2: Benchmark loop no longer checks for errors, so a failing benchmark can be masked by later successful runs (script has no `set -e`). Add explicit failure handling to stop on the first failing benchmark.</violation>
</file>
<file name=".github/workflows/frontier_amd/build.sh">
<violation number="1" location=".github/workflows/frontier_amd/build.sh:23">
P2: Failures in benchmark runs can be silently ignored because the loop doesn't check each `./mfc.sh run` exit code (the script doesn't use `set -e`). A failing benchmark earlier in the loop can be masked by a later successful run, letting CI pass incorrectly. Consider exiting non-zero as soon as a benchmark run fails.</violation>
</file>
<file name=".github/workflows/bench.yml">
<violation number="1" location=".github/workflows/bench.yml:106">
P2: `nick-fields/retry@v3` requires `timeout_minutes` or `timeout_seconds`. Without one, the step will fail before running the build command. Add a timeout input to keep the retry step valid.</violation>
</file>
```diff
-exit 1
+if [ "$run_bench" == "bench" ]; then
+    for dir in benchmarks/*/; do
+        ./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
```
P2: Benchmark loop no longer checks for errors, so a failing benchmark can be masked by later successful runs (script has no set -e). Add explicit failure handling to stop on the first failing benchmark.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/frontier/build.sh, line 23:
<comment>Benchmark loop no longer checks for errors, so a failing benchmark can be masked by later successful runs (script has no `set -e`). Add explicit failure handling to stop on the first failing benchmark.</comment>
<file context>
@@ -18,39 +18,10 @@ fi
-exit 1
+if [ "$run_bench" == "bench" ]; then
+ for dir in benchmarks/*/; do
+ ./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
+ done
+else
</file context>
Suggested change:
```diff
-./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
+./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts || exit $?
```
```diff
-exit 1
+if [ "$run_bench" == "bench" ]; then
+    for dir in benchmarks/*/; do
+        ./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
```
P2: Failures in benchmark runs can be silently ignored because the loop doesn't check each ./mfc.sh run exit code (the script doesn't use set -e). A failing benchmark earlier in the loop can be masked by a later successful run, letting CI pass incorrectly. Consider exiting non-zero as soon as a benchmark run fails.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/frontier_amd/build.sh, line 23:
<comment>Failures in benchmark runs can be silently ignored because the loop doesn't check each `./mfc.sh run` exit code (the script doesn't use `set -e`). A failing benchmark earlier in the loop can be masked by a later successful run, letting CI pass incorrectly. Consider exiting non-zero as soon as a benchmark run fails.</comment>
<file context>
@@ -18,39 +18,10 @@ fi
-exit 1
+if [ "$run_bench" == "bench" ]; then
+ for dir in benchmarks/*/; do
+ ./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
+ done
+else
</file context>
Suggested change:
```diff
-./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts
+./mfc.sh run -v "$dir/case.py" --case-optimization -j 4 --dry-run $build_opts || exit 1
```
```diff
-wait %1 && wait %2
+uses: nick-fields/retry@v3
+with:
+  max_attempts: 3
```
P2: nick-fields/retry@v3 requires timeout_minutes or timeout_seconds. Without one, the step will fail before running the build command. Add a timeout input to keep the retry step valid.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/bench.yml, line 106:
<comment>`nick-fields/retry@v3` requires `timeout_minutes` or `timeout_seconds`. Without one, the step will fail before running the build command. Add a timeout input to keep the retry step valid.</comment>
<file context>
@@ -104,10 +101,14 @@ jobs:
- wait %1 && wait %2
+ uses: nick-fields/retry@v3
+ with:
+ max_attempts: 3
+ retry_wait_seconds: 60
+ command: |
</file context>
Suggested change:
```diff
-  max_attempts: 3
+  timeout_minutes: 480
+  max_attempts: 3
```
Code context (diff region under review, old and new lines interleaved):
```sh
while [ ! -f "$output_file" ]; do
    # Check if job is still queued/running
    if squeue -j "$job_id" &>/dev/null; then
        squeue_retries=0 # Reset on success
        sleep 5
    else
        squeue_retries=$((squeue_retries + 1))
        if [ $squeue_retries -ge $max_squeue_retries ]; then
            # Job not in queue and output file doesn't exist
            if [ ! -f "$output_file" ]; then
                echo "ERROR: Job $job_id not in queue and output file not created"
    state=$(get_job_state "$job_id")
    case "$state" in
        PENDING|CONFIGURING)
            unknown_count=0
            sleep 5
            ;;
        RUNNING|COMPLETING)
            unknown_count=0
            # Job is running but output file not yet visible (NFS delay)
            sleep 2
            ;;
        UNKNOWN)
            unknown_count=$((unknown_count + 1))
            # Only print warning periodically to avoid log spam
            if [ $((unknown_count % 12)) -eq 1 ]; then
                echo "Warning: Could not query job $job_id state (SLURM may be temporarily unavailable)..."
            fi
            sleep 5
            ;;
        *)
            # Terminal state — job finished without creating output
            if is_terminal_state "$state"; then
                echo "ERROR: Job $job_id reached terminal state ($state) without creating output file"
                exit 1
            fi
            break
        fi
        # Exponential backoff
        sleep_time=$((2 ** squeue_retries))
        echo "Warning: squeue check failed, retrying in ${sleep_time}s..."
        sleep $sleep_time
    fi
            # Unrecognized state, keep waiting
            sleep 5
            ;;
    esac
done
```
Suggestion: Add a grace period after a job reaches a terminal state to wait for the output file to appear, preventing premature failures caused by filesystem delays. [possible issue, importance: 8]
Suggested change (replacing the block above):
```sh
terminal_grace_seconds=45
terminal_seen_at=0
while [ ! -f "$output_file" ]; do
    state=$(get_job_state "$job_id")
    case "$state" in
        PENDING|CONFIGURING)
            unknown_count=0
            terminal_seen_at=0
            sleep 5
            ;;
        RUNNING|COMPLETING)
            unknown_count=0
            terminal_seen_at=0
            # Job is running but output file not yet visible (NFS delay)
            sleep 2
            ;;
        UNKNOWN)
            unknown_count=$((unknown_count + 1))
            terminal_seen_at=0
            # Only print warning periodically to avoid log spam
            if [ $((unknown_count % 12)) -eq 1 ]; then
                echo "Warning: Could not query job $job_id state (SLURM may be temporarily unavailable)..."
            fi
            sleep 5
            ;;
        *)
            if is_terminal_state "$state"; then
                now=$(date +%s)
                if [ "$terminal_seen_at" -eq 0 ]; then
                    terminal_seen_at=$now
                fi
                if [ $((now - terminal_seen_at)) -lt "$terminal_grace_seconds" ]; then
                    # Give NFS/FS a chance to publish the output file
                    sleep 2
                    continue
                fi
                echo "ERROR: Job $job_id reached terminal state ($state) without creating output file after ${terminal_grace_seconds}s grace"
                exit 1
            fi
            terminal_seen_at=0
            # Unrecognized state, keep waiting
            sleep 5
            ;;
    esac
done
```
User description
Summary
Systematic hardening of CI infrastructure for self-hosted runners (Phoenix, Frontier) on Lustre filesystems:
- `get_job_state()` (squeue primary, sacct fallback). Adds orphan job cleanup via EXIT trap, terminal state detection, and informative heartbeat messages
- Change fractional `read -t 0.1` to integer `read -t 1` to avoid process substitution FD corruption
- `rm -rf` before checkout with `clean: false` to avoid ESTALE/ENOTEMPTY errors from `git clean` on Lustre
- sacct `-X -P` flags for parsable, job-level-only output
- `|| true` on squeue/sacct command substitutions
- Retry `shutil.rmtree` up to 5 times with backoff in the Python toolchain (see the sketch below)
- Ensure `pull_request_review` doesn't cancel `pull_request` runs
- Revert bench.yml from `workflow_run` back to direct `pull_request` + `pull_request_review` triggers
- `@page`/`@ref` patterns

Test plan
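As a rough illustration of the `shutil.rmtree` retry item in the summary above (the function name and delays are assumptions, not the toolchain implementation):

```python
# Illustrative sketch of "retry shutil.rmtree with backoff".
import shutil
import time

def rmtree_with_retry(path: str, attempts: int = 5, delay: float = 1.0) -> None:
    """Remove a directory tree, retrying transient errors (common on Lustre/NFS)."""
    for attempt in range(1, attempts + 1):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2  # simple exponential backoff
```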
🤖 Generated with Claude Code
CodeAnt-AI Description
Harden CI for self-hosted SLURM runners, add test sharding and automatic test retries
What Changed
Impact
✅ Fewer CI flakes due to targeted automatic test retries
✅ Fewer orphaned SLURM jobs when the monitor exits unexpectedly
✅ Faster/safer GPU test runs by splitting suites across runners