diff --git a/docs.json b/docs.json
index 4386fbd0..c0d65039 100644
--- a/docs.json
+++ b/docs.json
@@ -425,6 +425,7 @@
         "integrations/llms/featherless",
         "integrations/llms/jina-ai",
         "integrations/llms/lambda",
+        "integrations/llms/latitude",
         "integrations/llms/lemon-fox",
         "integrations/llms/lepton",
         "integrations/llms/lingyi-01.ai",
diff --git a/integrations/guardrails/lasso.mdx b/integrations/guardrails/lasso.mdx
index d23b6766..f8a8b82b 100644
--- a/integrations/guardrails/lasso.mdx
+++ b/integrations/guardrails/lasso.mdx
@@ -10,17 +10,17 @@ To get started with Lasso Security, visit their documentation:
 
 ## Using Lasso with Portkey
 
-### 1. Add Lasso Credentials to Portkey
+### 1. Add Lasso credentials to Portkey
 
 * Navigate to the `Integrations` page under `Settings`
 * Click on the edit button for the Lasso integration
 * Add your Lasso API Key (obtain this from your Lasso Security account)
+* Optionally, set a custom **API Endpoint** if you use a dedicated Lasso deployment (defaults to `https://server.lasso.security`)
 
-### 2. Add Lasso's Guardrail Check
+### 2. Add Lasso's guardrail check
 
 * Navigate to the `Guardrails` page and click the `Create` button
-* Search for "Scan Content" and click `Add`
-* Set the timeout in milliseconds (default: 10000ms)
+* Search for "Classifier" and click `Add`
 * Set any `actions` you want on your check, and create the Guardrail!
 
@@ -29,7 +29,7 @@ To get started with Lasso Security, visit their documentation:
 
 | Check Name | Description | Parameters | Supported Hooks |
 |------------|-------------|------------|-----------------|
-| Scan Content | Lasso Security's Deputies analyze content for various security risks including jailbreak attempts, custom policy violations, sexual content, hate speech, illegal content, and more. | `Timeout` (number) | `beforeRequestHook` |
+| Classifier | Classifies content for security risks using Lasso Security's Deputies v3 API. Returns detailed findings with action types (BLOCK, WARN, AUTO_MASKING) and severity levels. | `messages` (array), `conversationId` (string, optional), `userId` (string, optional) | `beforeRequestHook`, `afterRequestHook` |
 
@@ -117,7 +117,19 @@ For more, refer to the [Config documentation](/product/ai-gateway/configs).
 
 Your requests are now guarded by Lasso Security's protective measures, and you can see the verdict and any actions taken directly in your Portkey logs!
 
-## Key Security Features
+## Verdict behavior
+
+The Lasso plugin uses the Deputies v3 API and determines whether to block a request based on `violations_detected` and the `action` field in findings:
+
+| Scenario | Verdict | Behavior |
+|----------|---------|----------|
+| No violations detected | Allow | Request passes through |
+| Violations with `BLOCK` action | Block | Request is blocked |
+| Violations with only `WARN` actions | Allow | Request passes through, findings included in response data |
+| Violations with only `AUTO_MASKING` actions | Allow | Request passes through, findings included in response data |
+| API error | Block | Request is blocked (fail-safe) |
+
+## Key security features
 
 Lasso Security's Deputies analyze content for various security risks across multiple categories:
diff --git a/integrations/llms/latitude.mdx b/integrations/llms/latitude.mdx
new file mode 100644
index 00000000..21a65546
--- /dev/null
+++ b/integrations/llms/latitude.mdx
@@ -0,0 +1,285 @@
+---
+title: "Latitude AI"
+description: Use Latitude AI's OpenAI-compatible inference for chat completions, tool calling, and structured output through Portkey.
+---
+
+## Quick start
+
+Get started with Latitude AI in under 2 minutes:
+
+<CodeGroup>
+
+```python Python icon="python"
+from portkey_ai import Portkey
+
+# 1. Install: pip install portkey-ai
+# 2. Add @latitude provider in model catalog
+# 3. Use it:
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+response = portkey.chat.completions.create(
+    model="@latitude/qwen-2.5-7b",
+    messages=[{"role": "user", "content": "Hello!"}]
+)
+
+print(response.choices[0].message.content)
+```
+
+```js Javascript icon="square-js"
+import Portkey from 'portkey-ai'
+
+// 1. Install: npm install portkey-ai
+// 2. Add @latitude provider in model catalog
+// 3. Use it:
+
+const portkey = new Portkey({
+  apiKey: "PORTKEY_API_KEY"
+})
+
+const response = await portkey.chat.completions.create({
+  model: "@latitude/qwen-2.5-7b",
+  messages: [{ role: "user", content: "Hello!" }]
+})
+
+console.log(response.choices[0].message.content)
+```
+
+```python OpenAI Py icon="python"
+from openai import OpenAI
+from portkey_ai import PORTKEY_GATEWAY_URL
+
+# 1. Install: pip install openai portkey-ai
+# 2. Add @latitude provider in model catalog
+# 3. Use it:
+
+client = OpenAI(
+    api_key="PORTKEY_API_KEY",  # Portkey API key
+    base_url=PORTKEY_GATEWAY_URL
+)
+
+response = client.chat.completions.create(
+    model="@latitude/qwen-2.5-7b",
+    messages=[{"role": "user", "content": "Hello!"}]
+)
+
+print(response.choices[0].message.content)
+```
+
+```js OpenAI JS icon="square-js"
+import OpenAI from "openai"
+import { PORTKEY_GATEWAY_URL } from "portkey-ai"
+
+// 1. Install: npm install openai portkey-ai
+// 2. Add @latitude provider in model catalog
+// 3. Use it:
+
+const client = new OpenAI({
+  apiKey: "PORTKEY_API_KEY", // Portkey API key
+  baseURL: PORTKEY_GATEWAY_URL
+})
+
+const response = await client.chat.completions.create({
+  model: "@latitude/qwen-2.5-7b",
+  messages: [{ role: "user", content: "Hello!" }]
+})
+
+console.log(response.choices[0].message.content)
+```
+
+```sh cURL icon="square-terminal"
+# 1. Add @latitude provider in model catalog
+# 2. Use it:
+
+curl https://api.portkey.ai/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
+  -d '{
+    "model": "@latitude/qwen-2.5-7b",
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```
+
+</CodeGroup>
+
+## Add provider in model catalog
+
+Before making requests, add Latitude AI to your Model Catalog:
+
+1. Go to [**Model Catalog → Add Provider**](https://app.portkey.ai/model-catalog/providers)
+2. Select **Latitude**
+3. Enter your [Latitude AI API key](https://ai.latitude.sh)
+4. Name your provider (e.g., `latitude`)
+
+See all setup options and detailed configuration instructions
+
+---
+
+## Latitude AI capabilities
+
+### Tool calling
+
+Use Latitude AI's tool calling feature to trigger external functions:
+
+<CodeGroup>
+
+```python Python
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+tools = [{
+    "type": "function",
+    "function": {
+        "name": "getWeather",
+        "description": "Get the current weather",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "location": {"type": "string", "description": "City and state"},
+                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
+            },
+            "required": ["location"]
+        }
+    }
+}]
+
+response = portkey.chat.completions.create(
+    model="@latitude/qwen-2.5-7b",
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "What's the weather like in Delhi?"}
+    ],
+    tools=tools,
+    tool_choice="auto"
+)
+
+print(response.choices[0].finish_reason)
+```
+
+```javascript Node.js
+import Portkey from 'portkey-ai';
+
+const portkey = new Portkey({
+  apiKey: 'PORTKEY_API_KEY'
+});
+
+const tools = [{
+  type: "function",
+  function: {
+    name: "getWeather",
+    description: "Get the current weather",
+    parameters: {
+      type: "object",
+      properties: {
+        location: { type: "string", description: "City and state" },
+        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
+      },
+      required: ["location"]
+    }
+  }
+}];
+
+const response = await portkey.chat.completions.create({
+  model: "@latitude/qwen-2.5-7b",
+  messages: [
+    { role: "system", content: "You are a helpful assistant." },
+    { role: "user", content: "What's the weather like in Delhi?" }
+  ],
+  tools,
+  tool_choice: "auto"
+});
+
+console.log(response.choices[0].finish_reason);
+```
+
+</CodeGroup>
+
+### JSON output
+
+Force structured JSON responses from Latitude AI models:
+
+<CodeGroup>
+
+```python Python
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+response = portkey.chat.completions.create(
+    model="@latitude/qwen-2.5-7b",
+    messages=[
+        {"role": "system", "content": "Respond in JSON format with keys: answer, confidence"},
+        {"role": "user", "content": "What is the capital of France?"}
+    ],
+    response_format={"type": "json_object"}
+)
+
+print(response.choices[0].message.content)
+```
+
+```javascript Node.js
+import Portkey from 'portkey-ai';
+
+const portkey = new Portkey({
+  apiKey: 'PORTKEY_API_KEY'
+});
+
+const response = await portkey.chat.completions.create({
+  model: "@latitude/qwen-2.5-7b",
+  messages: [
+    { role: "system", content: "Respond in JSON format with keys: answer, confidence" },
+    { role: "user", content: "What is the capital of France?" }
+  ],
+  response_format: { type: "json_object" }
+});
+
+console.log(response.choices[0].message.content);
+```
+
+</CodeGroup>
+
+---
+
+## Supported models
+
+| Model | Context | Features |
+|-------|---------|----------|
+| `qwen-2.5-7b` | 131K | Tools, JSON mode |
+| `llama-3.1-8b` | 128K | Tools, JSON mode |
+| `qwen3-32b` | 131K | Tools, JSON mode |
+| `gemma-2-27b` | 8K | Tools, JSON mode |
+| `deepseek-r1-distill-14b` | 64K | Tools, JSON mode, Reasoning |
+| `qwen2.5-coder-32b` | 131K | Tools, JSON mode |
+| `qwen-2.5-vl-7b` | 32K | Tools, JSON mode, Vision |
+
+View the complete list of models available on Latitude AI
+
+---
+
+## Next steps
+
+Add fallbacks, load balancing, and more
+
+Monitor and trace your Latitude AI requests
+
+Manage and version your prompts
+
+Add custom metadata to requests
+
+For complete SDK documentation:
+
+Complete Portkey SDK documentation