- Dockerfile.allinone: single container with Rust engine + Kali tools + Node.js + Python
- entrypoint_allinone.sh: entrypoint for the all-in-one image
- src/main.rs: support SHIELDCI_LOCAL_TOOLS env for in-process tool execution, configurable OLLAMA_HOST
- kali_mcp.py: skip host.docker.internal rewriting in local mode
- docker-compose.yml: add allinone service profile
- .dockerignore: updated for cleaner builds
- Control plane (k8s/base/): hardened deployments for dispatcher, results-collector, TTL controller, Redis, PostgreSQL, Ollama, registry
- All pods: runAsNonRoot, seccomp RuntimeDefault, readOnlyRootFilesystem, drop ALL capabilities, resource limits
- Per-scan ephemeral namespaces (k8s/templates/): engine + target pods with gVisor runtime, NetworkPolicy zero-trust isolation
- Kaniko image builds in separate build namespace (no Docker daemon)
- Scope verification: blocklist enforcement (cloud metadata, RFC 1918, K8s internals) integrated into orchestrator Phase 0
- Results encryption: AES-256-GCM at rest, mTLS for submission
- Internal OCI registry with NetworkPolicy restricting access
- GitHub App (github-app/): webhook handler with K8s dispatch mode
- BullMQ job queue with rate limiting and concurrency controls
- TTL controller safety net for abandoned scan namespace cleanup
- Helm chart (k8s/helm/) for parameterized deployment
- Scope verification module (scope_verify.py) for DNS/file ownership proof
Pull request overview
This PR expands ShieldCI from a local Docker-based scanner into a more complete “containerized/Kubernetes” platform by adding SAST/SCA tools, scope/authorization verification, and a K8s control-plane (dispatcher, results collector, TTL controller) with Helm/Kustomize deployment manifests.
Changes:
- Add config support for scope enforcement, SAST (Semgrep), and SCA (Trivy); expand tool routing and add K8s-mode results push in the Rust engine.
- Introduce K8s control-plane services (dispatcher, results-collector, TTL controller) plus network policies, quotas, and pod templates for isolated scan/build namespaces.
- Add GitHub App orchestration and update CI workflow / container images for “all-in-one” + K8s engine variants.
Reviewed changes
Copilot reviewed 57 out of 60 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| tests/shieldci.yml | Adds example config blocks for auth, SAST/SCA toggles, and scope constraints. |
| tests/shield_results.json | Updates expected fixture output markdown. |
| tests/scan_output.log | Updates logged output fixture to match new orchestrator output. |
| tests/repo | Updates submodule pointer for the test repo fixture. |
| src/main.rs | Adds new config structs, extra tool args routing, Ollama host env support, local-tools mode, expanded test plan, and result push endpoint support. |
| shield_results.json | Adds a sample structured results JSON output at repo root. |
| scope_verify.py | Adds Python scope ownership verification + blocklist logic (DNS TXT / well-known file). |
| kali_mcp.py | Expands MCP toolset (nuclei/semgrep/trivy/zap), adds local vs docker host resolution, and parses tool outputs. |
| k8s/ttl-controller/src/index.js | Adds a namespace TTL sweeper controller. |
| k8s/ttl-controller/package.json | Declares TTL controller dependencies. |
| k8s/ttl-controller/Dockerfile | Containerizes TTL controller. |
| k8s/templates/target-pod.yaml | Adds hardened target pod + service template for scan namespaces. |
| k8s/templates/rbac-quota.yaml | Adds scan/build service accounts and per-namespace quotas/limits. |
| k8s/templates/network-policy.yaml | Adds scan-namespace isolation network policies. |
| k8s/templates/namespace.yaml | Adds per-scan namespace template with TTL annotations. |
| k8s/templates/kaniko-build.yaml | Adds Kaniko build pod + build namespace + build netpol template. |
| k8s/templates/engine-pod.yaml | Adds hardened engine pod template with env wiring for K8s services. |
| k8s/results-collector/src/index.js | Adds results-collector service with encrypted storage + optional mTLS listener. |
| k8s/results-collector/package.json | Declares results-collector dependencies. |
| k8s/results-collector/Dockerfile | Containerizes results-collector. |
| k8s/helm/values.yaml | Defines Helm values for control-plane services/resources/secrets. |
| k8s/helm/templates/ttl-controller.yaml | Helm deployment template for TTL controller. |
| k8s/helm/templates/secrets.yaml | Helm templates for DB/encryption secrets. |
| k8s/helm/templates/results-collector.yaml | Helm deployment/service for results collector. |
| k8s/helm/templates/redis.yaml | Helm deployment/service for Redis. |
| k8s/helm/templates/rbac.yaml | Helm RBAC for dispatcher + TTL controller. |
| k8s/helm/templates/postgresql.yaml | Helm deployment/service for PostgreSQL. |
| k8s/helm/templates/ollama.yaml | Helm deployment/service for Ollama. |
| k8s/helm/templates/namespace.yaml | Helm namespace bootstrap. |
| k8s/helm/templates/dispatcher.yaml | Helm deployment/service for dispatcher. |
| k8s/helm/Chart.yaml | Adds Helm chart metadata. |
| k8s/dispatcher/src/scope-check.js | Adds K8s-mode scope/blocklist validation for clone URLs. |
| k8s/dispatcher/src/orchestrator.js | Adds K8s scan orchestration (namespaces, Kaniko build, engine/target pods, cleanup, result fetch). |
| k8s/dispatcher/src/index.js | Adds dispatcher API + BullMQ worker to execute scan jobs. |
| k8s/dispatcher/package.json | Declares dispatcher dependencies. |
| k8s/dispatcher/Dockerfile | Containerizes dispatcher. |
| k8s/base/ttl-controller.yaml | Kustomize base deployment for TTL controller. |
| k8s/base/secrets.yaml | Kustomize base secrets (placeholders) for control plane services. |
| k8s/base/results-collector.yaml | Kustomize base deployment/service for results collector. |
| k8s/base/registry.yaml | Kustomize base internal registry deployment/service/netpol. |
| k8s/base/redis.yaml | Kustomize base Redis deployment/service with auth. |
| k8s/base/rbac.yaml | Kustomize base RBAC for dispatcher + TTL controller. |
| k8s/base/postgresql.yaml | Kustomize base PostgreSQL deployment/service. |
| k8s/base/ollama.yaml | Kustomize base Ollama deployment/service. |
| k8s/base/network-policy.yaml | Kustomize base control-plane network policies. |
| k8s/base/namespace.yaml | Kustomize base namespace for control plane. |
| k8s/base/kustomization.yaml | Kustomize assembly of control-plane resources. |
| k8s/base/dispatcher.yaml | Kustomize base dispatcher deployment/service. |
| github-app/src/scope.js | Adds GitHub App-side scope verification (blocklist + DNS/file proof). |
| github-app/src/scanner.js | Adds GitHub App scan orchestration (K8s dispatcher or local Docker). |
| github-app/src/index.js | Adds GitHub App webhook server + check run + PR comment integration. |
| github-app/package.json | Declares GitHub App dependencies. |
| github-app/README.md | Documents GitHub App env vars and setup. |
| github-app/.env.example | Adds example env configuration for GitHub App. |
| entrypoint_allinone.sh | Adds entrypoint for all-in-one container workflow. |
| docker-compose.yml | Adds compose setup for engine + kali image + all-in-one profile. |
| Dockerfile.k8s | Adds K8s-oriented engine image (no Docker socket; tools baked in). |
| Dockerfile.engine | Updates Rust builder version and engine env defaults. |
| Dockerfile.allinone | Adds all-in-one image packaging for local/dev usage. |
| Dockerfile | Expands Kali tools image with nuclei/semgrep/trivy installs. |
| .github/workflows/shieldci.yml | Updates GitHub Actions workflow behavior, paths, permissions, and results push step. |
| .dockerignore | Refines Docker build context ignores to reduce test artifacts and keep built binary. |
Comments suppressed due to low confidence (3)
k8s/helm/templates/results-collector.yaml:1 - The results-collector implementation reads `TLS_CERT_PATH`, `TLS_KEY_PATH`, and `TLS_CA_PATH`, but the Helm manifest sets `TLS_CERT`, `TLS_KEY`, and `TLS_CA`. As written, mTLS will be unintentionally disabled at runtime; update either the code to read the current variable names, or update the chart to export the `*_PATH` variables expected by the service.
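One possible code-side fix is sketched below. The helper name is illustrative, not from the PR; it accepts either set of variable names from the comment above and fails loudly on a partial configuration instead of silently disabling mTLS:

```javascript
// Hypothetical helper: read the *_PATH names the service expects, fall back
// to the names the chart currently sets, and refuse to start with a partial
// TLS configuration (all-or-nothing, so a mismatch cannot silently skip mTLS).
function resolveTlsPaths(env = process.env) {
  const pick = (a, b) => env[a] || env[b] || null;
  const cert = pick("TLS_CERT_PATH", "TLS_CERT");
  const key = pick("TLS_KEY_PATH", "TLS_KEY");
  const ca = pick("TLS_CA_PATH", "TLS_CA");
  const present = [cert, key, ca].filter(Boolean).length;
  if (present === 0) return null; // mTLS genuinely not configured
  if (present < 3) {
    throw new Error("Partial TLS configuration: cert, key, and CA are all required for mTLS");
  }
  return { cert, key, ca };
}
```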
k8s/results-collector/src/index.js:1 - Defaulting `ENCRYPTION_KEY` to a random value on boot means previously stored results become undecryptable after a restart/redeploy, and `Buffer.from(..., 'hex')` may produce an invalid key length if the env var isn’t exactly 64 hex chars (which will break AES-256-GCM). Require an explicit encryption key in non-dev environments and validate it (hex + 32 bytes) during startup with a clear fatal error.
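A minimal startup-validation sketch (the function name is illustrative, not the collector's actual code):

```javascript
// Hypothetical startup check: require ENCRYPTION_KEY, verify it is exactly
// 64 hex chars (32 bytes, as AES-256-GCM requires), and return the key
// buffer. Never default to a random key: results encrypted before a restart
// would become undecryptable.
function loadEncryptionKey(env = process.env) {
  const hex = env.ENCRYPTION_KEY;
  if (!hex) {
    throw new Error("ENCRYPTION_KEY must be set explicitly (64 hex chars)");
  }
  if (!/^[0-9a-fA-F]{64}$/.test(hex)) {
    throw new Error("ENCRYPTION_KEY must be exactly 64 hex chars (32 bytes) for AES-256-GCM");
  }
  return Buffer.from(hex, "hex");
}
```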
src/main.rs:1 - In K8s mode the results-collector is configured as an mTLS endpoint, and the engine pod template sets `SHIELDCI_CLIENT_CERT`/`SHIELDCI_CLIENT_KEY`, but `push_results_to_collector` doesn’t load a client identity/certificate or CA bundle into the reqwest client. This will make result submission fail when `requestCert: true`/`rejectUnauthorized: true` is enabled; update the client builder to use the mounted client cert+key and CA (and fail loudly if missing in mTLS mode).
```yaml
SHIELDCI_API_URL: http://localhost:3000
SHIELDCI_API_KEY: fc09420a3737855a3094ff7831a6219565cee6777a0fbeec
```
Hardcoding an API key (and even a URL) directly in the workflow will leak credentials via repo history/logs and makes forks unsafe. Use GitHub Actions secrets (e.g., ${{ secrets.SHIELDCI_API_KEY }} / ${{ secrets.SHIELDCI_API_URL }}) and fail the step with a clear error if they’re not set.
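A sketch of the suggested rewrite (the secret names are suggestions, not existing repository secrets):

```yaml
env:
  SHIELDCI_API_URL: ${{ secrets.SHIELDCI_API_URL }}
  SHIELDCI_API_KEY: ${{ secrets.SHIELDCI_API_KEY }}
steps:
  - name: Check required secrets
    run: |
      if [ -z "$SHIELDCI_API_KEY" ]; then
        echo "ERROR: SHIELDCI_API_KEY secret is not set" >&2
        exit 1
      fi
```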
```diff
  - name: Build ShieldCI engine
    run: |
-     cd "$GITHUB_WORKSPACE"
+     cd "$HOME/Desktop/ShieldCI"
```
The workflow depends on a developer-machine-specific path ($HOME/Desktop/ShieldCI) rather than the checked-out repo ($GITHUB_WORKSPACE). This will break on most self-hosted runners and makes the workflow non-portable; run builds/scans from $GITHUB_WORKSPACE and write artifacts to $RUNNER_TEMP or $GITHUB_WORKSPACE.
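A portable version of the affected steps could look like the following (the `cargo build --release` invocation is an assumption based on the Rust engine; it is not taken from the workflow):

```yaml
- name: Build ShieldCI engine
  run: |
    cd "$GITHUB_WORKSPACE"
    cargo build --release
- name: Verify engine binary
  run: |
    if [ ! -f "$GITHUB_WORKSPACE/target/release/shield-ci" ]; then
      echo "ERROR: ShieldCI engine not found after build" >&2
      exit 1
    fi
```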
```diff
    run: |
-     if [ ! -f "$GITHUB_WORKSPACE/target/release/shield-ci" ]; then
+     if [ ! -f "$HOME/Desktop/ShieldCI/target/release/shield-ci" ]; then
        echo "ERROR: ShieldCI engine not found after build"
```
The workflow depends on a developer-machine-specific path ($HOME/Desktop/ShieldCI) rather than the checked-out repo ($GITHUB_WORKSPACE). This will break on most self-hosted runners and makes the workflow non-portable; run builds/scans from $GITHUB_WORKSPACE and write artifacts to $RUNNER_TEMP or $GITHUB_WORKSPACE.
```diff
-     cd "$GITHUB_WORKSPACE/tests"
-     "$GITHUB_WORKSPACE/target/release/shield-ci" 2>&1 | tee scan_output.log || true
+     cd "$HOME/Desktop/ShieldCI/tests"
+     "$HOME/Desktop/ShieldCI/target/release/shield-ci" 2>&1 | tee scan_output.log || true
```
The workflow depends on a developer-machine-specific path ($HOME/Desktop/ShieldCI) rather than the checked-out repo ($GITHUB_WORKSPACE). This will break on most self-hosted runners and makes the workflow non-portable; run builds/scans from $GITHUB_WORKSPACE and write artifacts to $RUNNER_TEMP or $GITHUB_WORKSPACE.
```shell
export SHIELDCI_RESULTS_FILE="$HOME/Desktop/ShieldCI/tests/shield_results.json"
python3 "$HOME/Desktop/ShieldCI/push_results.py"
```
The workflow depends on a developer-machine-specific path ($HOME/Desktop/ShieldCI) rather than the checked-out repo ($GITHUB_WORKSPACE). This will break on most self-hosted runners and makes the workflow non-portable; run builds/scans from $GITHUB_WORKSPACE and write artifacts to $RUNNER_TEMP or $GITHUB_WORKSPACE.
```js
const BLOCKED_CIDRS = [
  // AWS / cloud metadata
  { prefix: "169.254.169.254", mask: 32 },
  // RFC1918 private ranges
  { prefix: "10.0.0.0", mask: 8 },
  { prefix: "172.16.0.0", mask: 12 },
  { prefix: "192.168.0.0", mask: 16 },
  // Link-local
  { prefix: "169.254.0.0", mask: 16 },
  // Loopback (except explicitly allowed)
  { prefix: "127.0.0.0", mask: 8 },
  // IPv6 link-local
  { prefix: "fe80::", mask: 10 },
  // IPv6 loopback
  { prefix: "::1", mask: 128 },
];
```
The blocklist declares IPv6 CIDRs but ipInCidr explicitly only supports IPv4, so IPv6 resolutions won’t be blocked (even though the list suggests they are). Either implement IPv6 CIDR checks (and resolve AAAA records) or remove the IPv6 entries to avoid a false sense of SSRF protection.
```js
function ipInCidr(ip, prefix, maskBits) {
  if (!net.isIPv4(ip) || !net.isIPv4(prefix)) return false;
```
The blocklist declares IPv6 CIDRs but ipInCidr explicitly only supports IPv4, so IPv6 resolutions won’t be blocked (even though the list suggests they are). Either implement IPv6 CIDR checks (and resolve AAAA records) or remove the IPv6 entries to avoid a false sense of SSRF protection.
```js
const content = fs.readFileSync(configPath, "utf8");

// Extract port from build section
const portMatch = content.match(/port:\s*(\d+)/);
```
Parsing YAML with regexes is fragile (indentation, comments, multi-doc YAML, quoted strings, nested keys) and can easily mis-parse shieldci.yml in real repos. Since js-yaml is already used elsewhere in this PR (dispatcher), consider using a YAML parser here as well and reading structured keys (build.port, scope.allowed_targets, scope.authorization_proof, etc.).
```js
const scopeSection = content.match(/scope:\s*\n([\s\S]*?)(?=\n\S|\Z)/);
const authProof = content.match(/authorization_proof:\s*["']?(\w+)["']?/);
```
Parsing YAML with regexes is fragile (indentation, comments, multi-doc YAML, quoted strings, nested keys) and can easily mis-parse shieldci.yml in real repos. Since js-yaml is already used elsewhere in this PR (dispatcher), consider using a YAML parser here as well and reading structured keys (build.port, scope.allowed_targets, scope.authorization_proof, etc.).
```js
const method = authProof ? authProof[1] : "none";

// Determine target URL — check for explicit target_url in scope config
const targetUrlMatch = content.match(/target_url:\s*["']?([^\s"']+)["']?/);
```
Parsing YAML with regexes is fragile (indentation, comments, multi-doc YAML, quoted strings, nested keys) and can easily mis-parse shieldci.yml in real repos. Since js-yaml is already used elsewhere in this PR (dispatcher), consider using a YAML parser here as well and reading structured keys (build.port, scope.allowed_targets, scope.authorization_proof, etc.).