feat: add Google Gemini API endpoint (POST /v1beta/models/:model:generateContent) #39

@KochC

Summary

Add support for the Google Gemini API format so clients using the Gemini SDK or REST API can point at this proxy without changes.

Endpoints

  • POST /v1beta/models/:model:generateContent — non-streaming
  • POST /v1beta/models/:model:streamGenerateContent — streaming (newline-delimited JSON)

Request format

{
  "contents": [{ "role": "user", "parts": [{ "text": "Hello" }] }],
  "systemInstruction": { "parts": [{ "text": "You are helpful." }] },
  "generationConfig": { "maxOutputTokens": 1024, "temperature": 0.7 }
}

The model is extracted from the URL path parameter (e.g. gemini-2.0-flash).
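The path format above can be parsed with a plain string match; a minimal sketch, assuming no particular routing framework (the helper name and regex are illustrative, not from the proxy code):

```typescript
// Hypothetical parser for Gemini-style paths such as
// /v1beta/models/gemini-2.0-flash:generateContent
interface GeminiRoute {
  model: string;
  action: "generateContent" | "streamGenerateContent";
}

function parseGeminiPath(path: string): GeminiRoute | null {
  // The ":" separating model and action is literal, so the model
  // segment must not itself contain ":" or "/".
  const m = path.match(
    /^\/v1beta\/models\/([^:/]+):(generateContent|streamGenerateContent)$/
  );
  if (!m) return null;
  return { model: m[1], action: m[2] as GeminiRoute["action"] };
}
```

Frameworks whose routers split parameters on `/` only (so `:model:generateContent` arrives as one segment) can apply the same match to that single segment instead of the full path.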

Non-streaming response

{
  "candidates": [{
    "content": { "role": "model", "parts": [{ "text": "..." }] },
    "finishReason": "STOP",
    "index": 0
  }],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 20,
    "totalTokenCount": 30
  }
}
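Shaping an internal completion result into this response could look as follows; the `InternalResult` fields are assumptions (the real shape returned by `executePrompt` may differ):

```typescript
// Assumed internal result shape; illustrative only.
interface InternalResult {
  text: string;
  finishReason: "stop" | "length";
  inputTokens: number;
  outputTokens: number;
}

function toGeminiResponse(r: InternalResult) {
  return {
    candidates: [{
      content: { role: "model", parts: [{ text: r.text }] },
      // Map the internal reason to Gemini's enum values.
      finishReason: r.finishReason === "length" ? "MAX_TOKENS" : "STOP",
      index: 0,
    }],
    usageMetadata: {
      promptTokenCount: r.inputTokens,
      candidatesTokenCount: r.outputTokens,
      totalTokenCount: r.inputTokens + r.outputTokens,
    },
  };
}
```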

Streaming response

Newline-delimited JSON — each line is a partial GenerateContentResponse (same shape as above, with each chunk carrying only the incremental text in its parts).
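The NDJSON framing can be sketched as one serialized JSON object per line; both helpers below are hypothetical (the proxy's actual streaming path goes through executePromptStreaming, not shown here):

```typescript
// Server side: serialize one partial GenerateContentResponse per line.
function encodeChunk(text: string, done: boolean): string {
  const body = {
    candidates: [{
      content: { role: "model", parts: [{ text }] },
      // finishReason appears only on the final chunk.
      ...(done ? { finishReason: "STOP" } : {}),
      index: 0,
    }],
  };
  return JSON.stringify(body) + "\n";
}

// Client side: reassemble the full text by parsing each line and
// concatenating the partial parts.
function decodeStream(raw: string): string {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line).candidates[0].content.parts[0].text)
    .join("");
}
```

Because each chunk is a complete JSON document terminated by a newline, clients can parse the stream line by line without buffering the whole response.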

Notes

  • Model extracted from URL path
  • Reuse existing resolveModel, executePrompt, executePromptStreaming, buildPrompt, buildSystemPrompt internals
  • Normalize Gemini contents → internal message format; systemInstruction → system prompt string
  • Map Gemini finishReason (STOP, MAX_TOKENS) appropriately
  • Auth via existing OPENCODE_LLM_PROXY_TOKEN / Authorization: Bearer header
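The normalization steps in the notes above can be sketched as below; the internal `Message` shape and role names are assumptions, and the existing internals (`resolveModel`, `buildPrompt`, etc.) are deliberately not reproduced:

```typescript
// Gemini wire shapes (subset relevant to text prompts).
interface GeminiContent { role: string; parts: { text: string }[]; }
interface SystemInstruction { parts: { text: string }[]; }

// Assumed internal message shape; illustrative only.
interface Message { role: "user" | "assistant"; content: string; }

// Gemini uses "user"/"model" roles; map "model" to the internal
// "assistant" role and join multi-part text into one string.
function normalizeContents(contents: GeminiContent[]): Message[] {
  return contents.map((c) => ({
    role: c.role === "model" ? "assistant" : "user",
    content: c.parts.map((p) => p.text).join(""),
  }));
}

// systemInstruction collapses into a single system prompt string.
function systemPromptFrom(si?: SystemInstruction): string {
  return si ? si.parts.map((p) => p.text).join("\n") : "";
}
```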
