feat: add Google Gemini API endpoint (POST /v1beta/models/:model:generateContent) #39
Summary
Add support for the Google Gemini API format so clients using the Gemini SDK or REST API can point at this proxy without changes.
Endpoints
- POST /v1beta/models/:model:generateContent (non-streaming)
- POST /v1beta/models/:model:streamGenerateContent (streaming, newline-delimited JSON)
Request format
{
"contents": [{ "role": "user", "parts": [{ "text": "Hello" }] }],
"systemInstruction": { "parts": [{ "text": "You are helpful." }] },
"generationConfig": { "maxOutputTokens": 1024, "temperature": 0.7 }
}
The model is extracted from the URL path parameter (e.g. gemini-2.0-flash).
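Extracting the model from the path is slightly unusual because Gemini uses a colon (`:generateContent`) rather than a path segment for the action. A minimal routing sketch (the `parseGeminiPath` name and return shape are illustrative assumptions, not code from this proxy):

```typescript
interface GeminiRoute {
  model: string;
  action: "generateContent" | "streamGenerateContent";
}

// Parse a Gemini-style path such as
// /v1beta/models/gemini-2.0-flash:generateContent
// into its model name and action. Returns null if the path
// does not match the expected shape.
function parseGeminiPath(pathname: string): GeminiRoute | null {
  const m = pathname.match(
    /^\/v1beta\/models\/([^:/]+):(generateContent|streamGenerateContent)$/
  );
  if (!m) return null;
  return { model: m[1], action: m[2] as GeminiRoute["action"] };
}
```

A plain regex like this avoids fighting routers that treat `:` as a parameter marker in route patterns.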
Non-streaming response
{
"candidates": [{
"content": { "role": "model", "parts": [{ "text": "..." }] },
"finishReason": "STOP",
"index": 0
}],
"usageMetadata": {
"promptTokenCount": 10,
"candidatesTokenCount": 20,
"totalTokenCount": 30
}
}
Streaming response
Newline-delimited JSON — each line is a partial GenerateContentResponse (same shape as above, with partial text in the delta chunk).
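A sketch of how each partial response could be serialized for the NDJSON stream, assuming the internal streaming layer yields plain text deltas (the `toStreamChunk` helper is hypothetical, not existing proxy code):

```typescript
// Wrap one text delta as a partial GenerateContentResponse line.
// Only the final chunk carries a finishReason.
function toStreamChunk(textDelta: string, done: boolean): string {
  const chunk = {
    candidates: [
      {
        content: { role: "model", parts: [{ text: textDelta }] },
        ...(done ? { finishReason: "STOP" } : {}),
        index: 0,
      },
    ],
  };
  // NDJSON: one JSON object per line, newline-terminated.
  return JSON.stringify(chunk) + "\n";
}
```

Clients then reconstruct the full text by concatenating `candidates[0].content.parts[*].text` across lines.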
Notes
- Model extracted from URL path
- Reuse existing `resolveModel`, `executePrompt`, `executePromptStreaming`, `buildPrompt`, `buildSystemPrompt` internals
- Normalize Gemini `contents` → internal message format; `systemInstruction` → system prompt string
- Map Gemini `finishReason` (STOP, MAX_TOKENS) appropriately
- Auth via existing `OPENCODE_LLM_PROXY_TOKEN` / `Authorization: Bearer` header
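The normalization and finish-reason steps above could look roughly like this. The internal message shape and the helper names are assumptions for illustration; only the Gemini-side types follow the request format shown earlier:

```typescript
interface GeminiPart { text?: string }
interface GeminiContent { role?: string; parts: GeminiPart[] }

// Assumed internal message shape used by buildPrompt and friends.
interface InternalMessage { role: "user" | "assistant"; content: string }

// Flatten Gemini contents into internal messages. Gemini uses the
// role "model" where the internal format presumably uses "assistant".
function normalizeContents(contents: GeminiContent[]): InternalMessage[] {
  return contents.map((c) => ({
    role: c.role === "model" ? "assistant" : "user",
    content: c.parts.map((p) => p.text ?? "").join(""),
  }));
}

// Map an internal stop reason to Gemini's finishReason values.
// "length" (token limit hit) becomes MAX_TOKENS; everything else STOP.
function mapFinishReason(internal: string): "STOP" | "MAX_TOKENS" {
  return internal === "length" ? "MAX_TOKENS" : "STOP";
}
```

`systemInstruction` would be handled separately: join its `parts` texts into a single string and pass it to `buildSystemPrompt`.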