diff --git a/docs/index.html b/docs/index.html
index 54b7b34..5271394 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -475,13 +475,15 @@

Optional. Set to true for Android Play Integrity; the Authorization header then carries an MLPA token obtained from /verify/play.
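A minimal sketch of the header setup this implies. The boolean flag's exact header name is not shown in this excerpt, so "X-Play-Integrity" is a hypothetical placeholder, and the Bearer scheme is assumed:

```python
# Hedged sketch: request headers for an Android Play Integrity client.
# "X-Play-Integrity" is a hypothetical header name; the Bearer scheme
# for the MLPA token is an assumption, not confirmed by this excerpt.
def build_headers(mlpa_token=None):
    headers = {"Content-Type": "application/json"}
    if mlpa_token is not None:
        headers["X-Play-Integrity"] = "true"               # hypothetical name
        headers["Authorization"] = f"Bearer {mlpa_token}"  # MLPA token from /verify/play
    return headers
```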

Request Body schema: application/json
required
stream (boolean or null)
Default: false
messages (array of objects)
Default: []
model (string or null)
Default: "openai/gpt-4o"
temperature (number or null)
Default: 0.1
max_completion_tokens (integer or null)
Default: 8192
top_p (number or null)
Default: 0.01
mock_response (string or null)
tools (array of any, or null)
tool_choice (string, object, or null)
n (integer or null)
stream_options (object or null)
stop (string, array of strings, or null)
max_tokens (integer or null)
presence_penalty (number or null)
frequency_penalty (number or null)
logit_bias (object or null)
response_format (object or null)
seed (integer or null)
parallel_tool_calls (boolean or null)
logprobs (boolean or null)
top_logprobs (integer or null)
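The schema above can be sketched as a payload builder that merges caller overrides onto the documented defaults. The helper name and structure are illustrative, not part of the API:

```python
# Hedged sketch: building a request body for the chat-completion endpoint
# using the defaults documented above. All other fields are nullable/optional.
DEFAULTS = {
    "stream": False,
    "model": "openai/gpt-4o",
    "temperature": 0.1,
    "max_completion_tokens": 8192,
    "top_p": 0.01,
}

def build_payload(messages, **overrides):
    """Merge caller overrides onto the documented defaults."""
    payload = {**DEFAULTS, "messages": list(messages)}
    payload.update(overrides)
    return payload

body = build_payload([{"role": "user", "content": "Hello"}], temperature=0.7)
```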

Responses

Request samples

Content type
application/json
{
  "stream": false,
  "messages": [],
  "model": "openai/gpt-4o",
  "temperature": 0.1,
  "max_completion_tokens": 8192,
  "top_p": 0.01,
  "mock_response": "string",
  "tools": [],
  "tool_choice": "string",
  "n": 0,
  "stream_options": {},
  "stop": "string",
  "max_tokens": 0,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "logit_bias": {},
  "response_format": {},
  "seed": 0,
  "parallel_tool_calls": true,
  "logprobs": true,
  "top_logprobs": 0
}

Response samples

Content type
application/json
null

Mock

Mock endpoints for testing purposes.

Chat Completion

Mock LiteLLM endpoint with simulated latency.
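A sketch of how a client might prepare a call to this mock endpoint. The base URL and the "/mock/chat/completions" path are placeholders, since the real route is not visible in this excerpt; because the mock simulates latency, an explicit timeout is worth passing when sending:

```python
import json
import urllib.request

# Hedged sketch: preparing a POST to the mock Chat Completion endpoint.
# BASE_URL and the path below are hypothetical placeholders.
BASE_URL = "https://api.example.com"  # placeholder

def make_request(payload):
    return urllib.request.Request(
        BASE_URL + "/mock/chat/completions",  # hypothetical path
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_request({"messages": [], "mock_response": "pong"})
# To actually send (network call): urllib.request.urlopen(req, timeout=30)
```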

@@ -533,7 +535,7 @@

Validation Error

Response samples

Content type
application/json
null