fix(server): implement multi-workflow mode API wiring #131
Replace the dead TODO block in `cmd/server/main.go` with a working implementation that:
- connects to PostgreSQL via `store.NewPGStore` and runs database migrations,
- bootstraps an optional admin user on first run,
- starts the `api.NewRouter` on a dedicated port (`--multi-workflow-addr`, default `:8090`),
- shuts down cleanly alongside the single-config engine.

New flags: `--multi-workflow-addr`
New imports: `apihandler` (api package), `bcrypt`, `time`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Implements the previously TODO multi-workflow (PostgreSQL-backed) startup path in cmd/server/main.go, wiring database initialization and starting the REST API server when --database-dsn is provided.
Changes:
- Adds multi-workflow mode wiring: PG store creation, migrations, and optional bootstrap admin creation.
- Starts the multi-workflow REST API server on a dedicated address/port (`--multi-workflow-addr`).
- Introduces `WorkflowEngineManager` construction for multi-workflow mode and adds related imports/flags.
```go
if *adminEmail != "" && *adminPassword != "" {
	_, lookupErr := pgStore.Users().GetByEmail(context.Background(), *adminEmail)
	if lookupErr != nil {
		hash, hashErr := bcrypt.GenerateFromPassword([]byte(*adminPassword), bcrypt.DefaultCost)
```
Admin bootstrap treats any error from GetByEmail as “user not found” and attempts creation. This can mask real DB/query errors and lead to misleading logs. Please only bootstrap when errors.Is(lookupErr, evstore.ErrNotFound); otherwise handle/return the unexpected error.
```go
go func() {
	logger.Info("multi-workflow API listening", "addr", *multiWorkflowAddr)
	if sErr := apiServer.ListenAndServe(); sErr != nil && sErr != http.ErrServerClosed {
		logger.Error("multi-workflow API server error", "error", sErr)
	}
}()
```
ListenAndServe runs in a goroutine and errors are only logged. If the multi-workflow API port is already in use, startup will continue in single-config mode even though multi-workflow mode was requested. Consider failing fast on serve/bind errors (e.g., net.Listen first and exit on failure, or propagate the error via a channel).
```go
// Multi-workflow mode: connect to PostgreSQL, run migrations, start the
// REST API router on a dedicated port alongside the single-config engine.
logger.Info("Starting in multi-workflow mode",
```
PR description mentions mounting the API router at /api/v1 alongside existing routes, but the implementation starts a separate HTTP server on --multi-workflow-addr (dedicated port). Please align the description with the actual behavior, or adjust the implementation to mount the router into the existing server if that’s the intended UX.
```go
logger.Info("Starting in multi-workflow mode",
	"database_dsn_set", true,
	"jwt_secret_set", *jwtSecret != "",
	"admin_email_set", *adminEmail != "",
	"api_addr", *multiWorkflowAddr,
)
```
Multi-workflow mode starts the REST API even when --jwt-secret is empty. In this codebase the API signs/validates JWTs with the configured HMAC secret; an empty secret makes tokens forgeable. Please fail fast when *jwtSecret is empty (or generate a secure random secret explicitly for dev-only) before starting the API server.
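Both options the comment offers (fail fast, or an explicit dev-only random secret) can be captured in one small resolver. `resolveJWTSecret` and the `allowDevSecret` knob are illustrative names, not part of the real flag set:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
)

// resolveJWTSecret refuses an empty HMAC secret unless a dev-only
// random secret is explicitly allowed.
func resolveJWTSecret(flagValue string, allowDevSecret bool) (string, error) {
	if flagValue != "" {
		return flagValue, nil
	}
	if !allowDevSecret {
		return "", errors.New("jwt secret is required in multi-workflow mode; set -jwt-secret")
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	// Ephemeral: tokens will not survive a restart, which is fine for dev.
	return hex.EncodeToString(buf), nil
}

func main() {
	_, err := resolveJWTSecret("", false)
	fmt.Println("empty secret rejected:", err != nil)

	s, err := resolveJWTSecret("", true)
	fmt.Println("dev secret generated:", err == nil && len(s) == 64)
}
```

Either branch prevents the dangerous state: an API that signs and validates tokens with `""`, making every token forgeable.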
|
@copilot open a new pull request to apply changes based on the comments in this thread |
* Initial plan
* fix(server): address review comments on multi-workflow mode wiring

Co-authored-by: intel352 <77607+intel352@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: intel352 <77607+intel352@users.noreply.github.com>
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
Comments suppressed due to low confidence (2)
api/workflow_handler.go:286
`Deploy` calls `h.engine.DeployWorkflow(...)` and then independently updates the workflow status in the store. If the injected engine is `workflow.WorkflowEngineManager`, it already updates the workflow’s DB status internally, so this causes duplicate updates/version bumps and can create inconsistencies if one update succeeds and the other fails. Prefer a single source of truth: either let the engine runner own the status update and re-fetch/return the record, or remove the engine manager’s internal store mutation and keep the handler update.
```go
if h.engine != nil {
	if err := h.engine.DeployWorkflow(r.Context(), id); err != nil {
		WriteError(w, http.StatusInternalServerError, "failed to deploy workflow engine")
		return
	}
}
wf.Status = store.WorkflowStatusActive
wf.UpdatedAt = time.Now()
if err := h.workflows.Update(r.Context(), wf); err != nil {
	WriteError(w, http.StatusInternalServerError, "internal error")
```
api/workflow_handler.go:317
`Stop` has the same double-write issue as `Deploy`: calling the engine runner and then updating the workflow status in the handler can lead to duplicate DB updates/version bumps and mismatched state if one step fails. Consolidate status persistence into either the engine runner or the handler (and return the store’s canonical record).
```go
if h.engine != nil {
	if err := h.engine.StopWorkflow(r.Context(), id); err != nil {
		WriteError(w, http.StatusInternalServerError, "failed to stop workflow engine")
		return
	}
}
wf.Status = store.WorkflowStatusStopped
wf.UpdatedAt = time.Now()
if err := h.workflows.Update(r.Context(), wf); err != nil {
	WriteError(w, http.StatusInternalServerError, "internal error")
```
```go
secret = "dev-secret-change-me"
logger.Error("No JWT secret configured — using insecure default; set JWT_SECRET env var or -jwt-secret flag")
```
runMultiWorkflow still exists and is no longer referenced from main(), but it’s being kept partially updated (e.g., switching to apihandler.*). This risks the inlined multi-workflow startup logic and runMultiWorkflow drifting further apart (and runMultiWorkflow currently allows an insecure default JWT secret). Consider deleting runMultiWorkflow or refactoring main() to call it so there’s a single implementation to maintain.
Suggested change:

```diff
-secret = "dev-secret-change-me"
-logger.Error("No JWT secret configured — using insecure default; set JWT_SECRET env var or -jwt-secret flag")
+log.Fatal("No JWT secret configured — refusing to start; set JWT_SECRET env var or -jwt-secret flag")
```
```go
"encoding/json"
"errors"
"flag"
"errors"
```
The import block includes errors twice, which will make this file fail to compile (imported and not used/duplicate import). Remove the duplicate and keep a single errors import.
Suggested change:

```diff
-"errors"
```
```go
if *adminEmail != "" && *adminPassword != "" {
	_, lookupErr := pgStore.Users().GetByEmail(context.Background(), *adminEmail)
	switch {
	case errors.Is(lookupErr, evstore.ErrNotFound):
```
Admin bootstrap looks up the user with context.Background() and uses the raw --admin-email value. Consider reusing the existing timed context (dbCtx) and normalizing the email (trim + lower-case) to match the auth handlers’ behavior, otherwise the bootstrap user may not be able to log in if the stored email casing differs.
```go
if createErr := pgStore.Users().Create(context.Background(), adminUser); createErr != nil {
	logger.Warn("multi-workflow mode: failed to create admin user (may already exist)", "error", createErr)
```
Admin bootstrap creates the user with context.Background(), so these DB operations ignore the connection/migration timeout and any shutdown cancellation. Use the same bounded context used for startup (or a derived context) so a hung DB call can’t block shutdown indefinitely.
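The two fixes asked for in these comments, a bounded context and a normalized email, can be sketched together. `normalizeEmail` is a hypothetical helper mirroring what the auth handlers are said to do, and the 30-second timeout is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"
)

// normalizeEmail trims whitespace and lower-cases the address before
// any lookup or insert, matching the auth handlers' behavior.
func normalizeEmail(raw string) string {
	return strings.ToLower(strings.TrimSpace(raw))
}

func main() {
	// Derive the bootstrap context from the bounded startup context
	// instead of context.Background(), so a hung DB call cannot
	// outlive the startup deadline or block shutdown.
	dbCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	email := normalizeEmail("  Admin@Example.COM ")
	fmt.Println("normalized:", email)

	// pgStore.Users().GetByEmail(dbCtx, email) and
	// pgStore.Users().Create(dbCtx, adminUser) would both use dbCtx here.
	_ = dbCtx
}
```

Without the normalization, a user bootstrapped as `Admin@Example.COM` can never log in if the auth path lower-cases the submitted email before lookup.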
|
|
```go
// OAuth providers keyed by provider name (e.g. "google", "okta").
OAuthProviders map[string]*OAuthProviderConfig

// Engine is an optional engine lifecycle manager used by the workflow
// deploy/stop endpoints to actually start and stop workflow engines.
Engine EngineRunner
}
```
The PR description says only cmd/server/main.go is modified, but this PR also changes the api/ package (router + workflow handler). Please update the PR description/test plan to reflect the additional files/behavior being changed, since it impacts review scope and release notes.
```go
if h.engine != nil {
	if err := h.engine.DeployWorkflow(r.Context(), id); err != nil {
		WriteError(w, http.StatusInternalServerError, "failed to deploy workflow engine")
		return
	}
}
```
New behavior when h.engine != nil (calling DeployWorkflow/StopWorkflow and handling errors) isn’t covered by existing tests in api/workflow_handler_test.go (which only exercises the nil-engine path). Add tests for the engine-runner path, including success and failure cases, to prevent regressions and to validate HTTP status mapping.
Summary
- Replaces the dead TODO block in `cmd/server/main.go` with a working multi-workflow mode implementation
- Wires in the `store/`, `api/`, and `engine_manager.go` packages
- Multi-workflow mode activates when `--database-dsn` is provided

Test plan
- `go build ./cmd/server/` compiles cleanly
- `go test ./cmd/server/` passes all tests
- `golangci-lint run` passes with 0 issues
- Scope: `cmd/server/main.go` (1 file, 99 insertions, 29 deletions)

Closes #71
🤖 Generated with Claude Code