
fix: reflect request Origin in CORS allow-origin for specific origin config #43

Merged
KochC merged 12 commits into main from dev
Mar 27, 2026
Conversation

@KochC (Owner) commented Mar 27, 2026

Summary

  • Fixes CORS Access-Control-Allow-Origin header: when OPENCODE_LLM_PROXY_CORS_ORIGIN is set to a specific origin, the header now reflects the actual request Origin value (if it matches) rather than always echoing the configured value verbatim. This is the correct CORS behaviour and ensures browsers accept the response.
  • Previously const allowOrigin = configuredOrigin === "*" ? "*" : configuredOrigin was a no-op — both branches returned configuredOrigin unchanged.
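A minimal sketch of the corrected behaviour (the function and variable names here are illustrative, not the actual source): with a wildcard config the header stays `*`; with a specific configured origin, the request's Origin is reflected back only when it matches, and the header is omitted otherwise.

```javascript
// Illustrative sketch of the fixed logic, not the project's actual code.
// Returning null stands in for "do not set Access-Control-Allow-Origin",
// which causes the browser to block the cross-origin response.
function resolveAllowOrigin(configuredOrigin, requestOrigin) {
  if (configuredOrigin === "*") return "*";
  // Reflect the request's Origin only when it matches the configured value.
  return requestOrigin === configuredOrigin ? requestOrigin : null;
}
```

Contrast with the old ternary, where both branches evaluated to the configured value, so a mismatched request still received an allow-origin header echoing the config.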

KochC added 12 commits March 27, 2026 15:44
- Add 17 new integration tests: CORS edge cases (disallowed origins,
  no-origin header, OPTIONS for disallowed origin), auth (401/pass-through),
  and error handling (400/502/404) for /v1/chat/completions
  Closes #14, closes #16
- Add ESLint with flat config, npm run lint script, and Lint job in CI
  Closes #15
- Improve README with quickstart section, npm install instructions, and
  corrected package name; add type column to env vars table
  Closes #17
- Implement streaming for POST /v1/chat/completions (issue #11):
  subscribe to opencode event stream, pipe message.part.updated deltas
  as SSE chat.completion.chunk events, finish on session.idle
- Implement streaming for POST /v1/responses (issue #11):
  emit response.created / output_text.delta / response.completed events
- Fix provider-agnostic system prompt hint (issue #12): remove
  'OpenAI-compatible' wording so non-OpenAI models are not confused
- Add TextEncoder and ReadableStream to ESLint globals
- Add streaming integration tests (happy path, unknown model, session.error)
- Extract createSseQueue() helper, eliminating duplicated SSE queue pattern
  in /v1/chat/completions and /v1/responses streaming branches (closes #34)
- Add tests for GET /v1/models happy path, empty providers, and error path (closes #33)
- Add tests for POST /v1/responses: happy path, validation, streaming, session.error (closes #32)
- Fix package.json description to be provider-agnostic (closes #35)
- Add engines field declaring bun >=1.0.0 requirement (closes #35)
- Line coverage: 55% -> 89%, function coverage: 83% -> 94%
- POST /v1/messages — Anthropic Messages API with streaming (SSE)
- POST /v1beta/models/:model:generateContent — Gemini non-streaming
- POST /v1beta/models/:model:streamGenerateContent — Gemini NDJSON streaming
- New helpers: normalizeAnthropicMessages, normalizeGeminiContents,
  extractGeminiSystemInstruction, mapFinishReasonToAnthropic/Gemini
- 35 new tests (77 -> 112 total, all passing)
- Update README to document all supported API formats

Closes #38, #39
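The streaming commits above pipe message.part.updated deltas to the client as SSE chat.completion.chunk events. A hedged sketch of that framing (field values and the helper name are illustrative; the payload shape follows the OpenAI streaming chunk format):

```javascript
// Wrap one text delta in an OpenAI-style chat.completion.chunk object and
// frame it as a single SSE "data:" event; the trailing blank line is what
// terminates an SSE event.
function formatChunkEvent(id, model, deltaText) {
  const chunk = {
    id,
    object: "chat.completion.chunk",
    model,
    choices: [
      { index: 0, delta: { content: deltaText }, finish_reason: null },
    ],
  };
  return `data: ${JSON.stringify(chunk)}\n\n`;
}
```

On session.idle the stream would then emit a final chunk with a non-null finish_reason followed by the `data: [DONE]` sentinel, per the OpenAI streaming convention.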
- Lead with value proposition, ASCII diagram, and feature table
- Quickstart reduced to 4 steps; works in under 60 seconds
- SDK examples for OpenAI, Anthropic, Gemini (JS+Python), LangChain
- UI integration guides: Open WebUI, Chatbox, Continue, Zed
- Reference section kept concise; full prose docs moved inline
- package.json: sharper description, 20 keywords covering all search terms
  (openai-compatible, anthropic, gemini, ollama, langchain, open-webui,
   llm-proxy, ai-gateway, local-llm, github-copilot, model-router, …)
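The commit log above mentions a createSseQueue() helper extracted to deduplicate the SSE plumbing in the two streaming branches. A minimal, illustrative version (the real helper presumably feeds a ReadableStream; this sketch just buffers encoded frames until the consumer drains them):

```javascript
// Illustrative sketch, not the project's actual helper. TextEncoder is a
// global in Bun and modern Node, matching the ESLint-globals commit above.
function createSseQueue() {
  const encoder = new TextEncoder();
  const frames = [];
  let closed = false;
  return {
    // Serialize an event object into an encoded SSE "data:" frame.
    push(event) {
      if (closed) throw new Error("queue already closed");
      frames.push(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
    },
    // Mark the stream finished; further pushes are rejected.
    close() {
      closed = true;
    },
    // Hand all buffered frames to the consumer and empty the buffer.
    drain() {
      return frames.splice(0, frames.length);
    },
    get closed() {
      return closed;
    },
  };
}
```

Centralizing this in one helper means both /v1/chat/completions and /v1/responses share a single tested encode-and-enqueue path instead of two copies that can drift apart.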
@KochC merged commit f8da40a into main on Mar 27, 2026
6 checks passed
