Conversation
Note: Reviews paused — It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough
This PR updates container image references across multiple packages from v1.0.0-beta.3 to v1.0.0-beta.4, with new digest hashes. Additionally, the kubevirt-csi-driver and cluster-autoscaler images are pinned to specific versions with updated digests. All changes are configuration-only updates to image references in values files and tag files.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 4 checks passed.
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Summary of Changes
Hello @cozystack-bot, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request is dedicated to preparing the v1.0.0-beta.4 release.
Activity
Code Review
This pull request prepares for the v1.0.0-beta.4 release by updating various version tags and image digests. While many updates are correct, I've found several critical issues where image tags were updated but their corresponding SHA256 digests were not. This means that these components would still be using old images despite the version bump. This is critical for a release and must be fixed. I've left specific comments on the affected files. Additionally, there's a minor concern about an unconventional version tag 0.0.0 being used.
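This class of mistake (a tag bumped while its digest is left untouched) can be caught mechanically. The following is an illustrative sketch, not existing Cozystack tooling: it scans a unified diff for an added image line whose sha256 digest is identical to the digest on the preceding removed line. The sample diff and digests are made up for demonstration; in real use the input would come from `git diff`.

```shell
#!/bin/sh
# Flag '+' diff lines whose sha256 digest equals the digest on the
# preceding '-' line, i.e. a tag bump without a digest update.
# sample_diff is a fabricated example for demonstration only.
sample_diff='-  image: ghcr.io/example/app:v1.0.0-beta.3@sha256:aaaa
+  image: ghcr.io/example/app:v1.0.0-beta.4@sha256:aaaa
-  image: ghcr.io/example/other:v1.0.0-beta.3@sha256:bbbb
+  image: ghcr.io/example/other:v1.0.0-beta.4@sha256:cccc'

printf '%s\n' "$sample_diff" | awk '
  /^-.*@sha256:/  { split($0, a, "@sha256:"); prev = a[2]; next }
  /^\+.*@sha256:/ { split($0, b, "@sha256:"); if (b[2] == prev) print "digest unchanged:" $0 }
'
```

Run against this PR's diff, such a check would have flagged the e2e-sandbox, token-proxy, grafana-dashboards, kamaji, kubeovn-webhook, and objectstorage-controller lines below.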
@@ -1,2 +1,2 @@
 e2e:
-  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-beta.3@sha256:eac71ef0de3450fce96255629e77903630c63ade62b81e7055f1a689f92ee153
+  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-beta.4@sha256:eac71ef0de3450fce96255629e77903630c63ade62b81e7055f1a689f92ee153
+  image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.0-beta.4@sha256:1f7827a1978bd9c81ac924dd0e78f6a3ce834a9a64af55047e220812bc15a944
 tokenProxy:
-  image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0-beta.3@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
+  image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0-beta.4@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0-beta.3@sha256:e866b5b3874b9d390b341183d2ee070e1387440c14cfe51af831695def6dc2ec
+ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0-beta.4@sha256:e866b5b3874b9d390b341183d2ee070e1387440c14cfe51af831695def6dc2ec
packages/system/kamaji/values.yaml (Outdated)
 image:
   pullPolicy: IfNotPresent
-  tag: v1.0.0-beta.3@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
+  tag: v1.0.0-beta.4@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
packages/system/kamaji/values.yaml (Outdated)
   memory: 100Mi
 extraArgs:
-  - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.0.0-beta.3@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
+  - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.0.0-beta.4@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
   portSecurity: true
   routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.0.0-beta.3@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a
+image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.0.0-beta.4@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a
 objectstorage:
   controller:
-    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.0.0-beta.3@sha256:bb2b2b95cbc3d613b077a87a6c281a3ceff8ef8655d770fb2f8fd6b5f1d0c588"
+    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.0.0-beta.4@sha256:bb2b2b95cbc3d613b077a87a6c281a3ceff8ef8655d770fb2f8fd6b5f1d0c588"
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/kubevirt-csi-driver:latest@sha256:cd58760c97ba50ef74ff940cdcda4b9a6d8554eec8305fc96c2e8bbea72b75a9
+ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:b52fe4c54483ccc4de7d8fedebbb684fdd70eb9c8e5d83d2b605e9801fe85062
The image tag is being changed from latest to 0.0.0. While pinning from latest is a good practice, using 0.0.0 as a version tag is unconventional and can be confusing as it doesn't clearly indicate the version of the software. Consider using a more descriptive version tag. This is also updated in packages/system/kubevirt-csi-node/values.yaml.
 storageClass: replicated
 csiDriver:
-  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:latest@sha256:cd58760c97ba50ef74ff940cdcda4b9a6d8554eec8305fc96c2e8bbea72b75a9
+  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:b52fe4c54483ccc4de7d8fedebbb684fdd70eb9c8e5d83d2b605e9801fe85062
The image tag is being changed from latest to 0.0.0. While pinning from latest is a good practice, using 0.0.0 as a version tag is unconventional and can be confusing. It doesn't clearly indicate the version of the software. Consider using a more descriptive version tag. This change is also present in packages/apps/kubernetes/images/kubevirt-csi-driver.tag.
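A guard along these lines could enforce the reviewer's suggestion in CI by rejecting placeholder or mutable tags before they land. This is a hedged sketch: the policy, the function name, and the example tags are assumptions, not existing Cozystack tooling; a real check would read tags from the repository's *.tag and values files.

```shell
#!/bin/sh
# Illustrative tag policy check: reject mutable or placeholder tags
# such as "latest" and "0.0.0"; accept v-prefixed version tags.
check_tag() {
  case "$1" in
    latest|0.0.0) echo "$1: rejected (placeholder/mutable tag)"; return 1 ;;
    v[0-9]*)      echo "$1: ok" ;;
    *)            echo "$1: rejected (unrecognized format)"; return 1 ;;
  esac
}

check_tag v1.0.0-beta.4          # accepted
check_tag 0.0.0  || true         # rejected, as flagged in this review
check_tag latest || true         # rejected
```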
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
hack/cozyreport.sh (1)
139-148: ⚠️ Potential issue | 🟠 Major
Pre-existing: the Kamaji guard checks for `linstor-controller` instead of the kamaji deployment. The condition at line 139 gates the kamaji resource collection block on the existence of `linstor-controller` in `cozy-linstor`, but the block collects resources from the `cozy-kamaji` namespace and explicitly references `deployment/kamaji`. The guard should check for the kamaji deployment instead:

if kubectl get deploy -n cozy-kamaji kamaji >/dev/null 2>&1; then

This appears to be a copy-paste error from the linstor block that follows (line 189).
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@packages/system/cozystack-api/templates/deployment.yaml`:
- Around line 29-40: The init container "wait-for-applicationdefinitions"
currently loops indefinitely; modify its command in the initContainers block to
enforce a deadline (e.g., track a start time or loop counter and exit non‑zero
after a configurable timeout) so the pod fails and surfaces errors via
restart/backoff, and ensure the exit path logs the timeout reason; also replace
the hardcoded image "docker.io/alpine/k8s:1.33.4" with a templated value from
values.yaml (e.g., an initContainer image and optional digest field) so the
image/tag/digest is managed consistently with the main container.
initContainers:
- name: wait-for-applicationdefinitions
  image: docker.io/alpine/k8s:1.33.4
  command:
  - sh
  - -c
  - |
    until kubectl get applicationdefinitions.cozystack.io --no-headers 2>/dev/null | grep -q .; do
      echo "Waiting for ApplicationDefinitions to appear..."
      sleep 2
    done
    echo "ApplicationDefinitions found"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find cozystack-api service account and RBAC rules
echo "=== Looking for cozystack-api service account definition ==="
fd -t f -e yaml | xargs grep -l "cozystack-api" | head -20
echo ""
echo "=== Looking for ClusterRole/Role definitions related to applicationdefinitions ==="
fd -t f -e yaml | xargs grep -l "applicationdefinitions" | head -20
echo ""
echo "=== Checking the deployment file and surrounding context ==="
cat -n packages/system/cozystack-api/templates/deployment.yaml | head -45
Repository: cozystack/cozystack
Length of output: 2854
🏁 Script executed:
#!/bin/bash
# Search for RBAC configuration in the cozystack-api chart
echo "=== RBAC files in cozystack-api chart ==="
find packages/system/cozystack-api -type f -name "*.yaml" | xargs basename -a | sort | uniq
echo ""
echo "=== Full file listing ==="
find packages/system/cozystack-api/templates -type f -name "*.yaml" | sort
Repository: cozystack/cozystack
Length of output: 1006
🏁 Script executed:
#!/bin/bash
# Check for image references to see templating patterns
echo "=== Image references in deployment.yaml ==="
grep -n "image:" packages/system/cozystack-api/templates/deployment.yaml
echo ""
echo "=== Check values.yaml for image templating ==="
if [ -f packages/system/cozystack-api/values.yaml ]; then
  grep -n "image" packages/system/cozystack-api/values.yaml | head -20
fi
Repository: cozystack/cozystack
Length of output: 392
🌐 Web query:
alpine/k8s 1.33.4 docker hub image
💡 Result:
The Docker Hub image/tag you want is:
docker pull alpine/k8s:1.33.4
It's from the alpine/k8s repository on Docker Hub (Kubernetes toolbox image). [1] The 1.33.4 tag is listed as a released version (dated Aug 17, 2025). [2]
🏁 Script executed:
#!/bin/bash
# Check the RBAC files for applicationdefinitions permissions
echo "=== rbac.yaml content ==="
cat -n packages/system/cozystack-api/templates/rbac.yaml
echo ""
echo "=== rbac-bind.yaml content ==="
cat -n packages/system/cozystack-api/templates/rbac-bind.yaml
echo ""
echo "=== sa.yaml content ==="
cat -n packages/system/cozystack-api/templates/sa.yaml
Repository: cozystack/cozystack
Length of output: 1726
🏁 Script executed:
#!/bin/bash
# Search for any ClusterRole that grants access to applicationdefinitions
echo "=== Searching for ClusterRole with applicationdefinitions resource ==="
find packages/system -type f -name "*.yaml" | xargs grep -l "applicationdefinitions" | while read f; do
echo "File: $f"
grep -B5 -A5 "applicationdefinitions" "$f" | head -20
echo ""
done
Repository: cozystack/cozystack
Length of output: 1579
Init container loop has no timeout — pod can hang indefinitely.
If ApplicationDefinitions never appear (e.g., a CRD is missing, the HelmRelease that creates them is broken, or a fresh cluster is misconfigured), this init container will block forever with no visibility beyond its log line. Consider adding a deadline so the pod eventually fails and surfaces the issue through normal pod-restart backoff and alerting.
Additionally, the image tag is hardcoded in the template rather than being templated through values.yaml like the main container image (which also uses digest pinning). This makes version management inconsistent and harder to track across releases.
⏱️ Proposed fix: add a timeout

   - sh
   - -c
   - |
-    until kubectl get applicationdefinitions.cozystack.io --no-headers 2>/dev/null | grep -q .; do
+    TIMEOUT=300
+    ELAPSED=0
+    until kubectl get applicationdefinitions.cozystack.io --no-headers 2>/dev/null | grep -q .; do
+      if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
+        echo "Timed out waiting for ApplicationDefinitions after ${TIMEOUT}s"
+        exit 1
+      fi
       echo "Waiting for ApplicationDefinitions to appear..."
       sleep 2
+      ELAPSED=$((ELAPSED + 2))
     done
     echo "ApplicationDefinitions found"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
initContainers:
- name: wait-for-applicationdefinitions
  image: docker.io/alpine/k8s:1.33.4
  command:
  - sh
  - -c
  - |
    TIMEOUT=300
    ELAPSED=0
    until kubectl get applicationdefinitions.cozystack.io --no-headers 2>/dev/null | grep -q .; do
      if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
        echo "Timed out waiting for ApplicationDefinitions after ${TIMEOUT}s"
        exit 1
      fi
      echo "Waiting for ApplicationDefinitions to appear..."
      sleep 2
      ELAPSED=$((ELAPSED + 2))
    done
    echo "ApplicationDefinitions found"
Force-pushed: d64a5cb → a8657e2 (compare)
Force-pushed: a8657e2 → 8127d90 (compare)
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
Force-pushed: 8127d90 → b6a840e (compare)
This PR prepares the release v1.0.0-beta.4.