diff --git a/docs.json b/docs.json
index aee7f148..d7fcde98 100644
--- a/docs.json
+++ b/docs.json
@@ -421,6 +421,7 @@
"integrations/llms/github",
"integrations/llms/groq",
"integrations/llms/huggingface",
+ "integrations/llms/hpc-ai",
"integrations/llms/inference.net",
"integrations/llms/featherless",
"integrations/llms/jina-ai",
@@ -1229,6 +1230,7 @@
"guides/integrations/mistral",
"guides/integrations/vercel-ai",
"guides/integrations/deepinfra",
+ "guides/integrations/hpc-ai",
"guides/integrations/groq",
"guides/integrations/langchain",
"guides/integrations/mixtral-8x22b",
@@ -2218,6 +2220,7 @@
"guides/integrations/mistral",
"guides/integrations/vercel-ai",
"guides/integrations/deepinfra",
+ "guides/integrations/hpc-ai",
"guides/integrations/groq",
"guides/integrations/langchain",
"guides/integrations/mixtral-8x22b",
diff --git a/guides/integrations/hpc-ai.mdx b/guides/integrations/hpc-ai.mdx
new file mode 100644
index 00000000..8a8d9ed8
--- /dev/null
+++ b/guides/integrations/hpc-ai.mdx
@@ -0,0 +1,51 @@
+---
+title: "HPC-AI"
+---
+
+## Portkey + HPC-AI
+
+[Portkey](https://app.portkey.ai/) is the control plane for AI apps: AI Gateway, observability, prompt management, and more.
+
+[HPC-AI](https://www.hpc-ai.com/) offers **Model APIs** with an **OpenAI-compatible** interface for chat completions.
+
+### Quickstart
+
+Portkey is compatible with the OpenAI request shape: point the OpenAI client at the Portkey gateway and pass the `provider` name and your Portkey API key via `createHeaders`.
+
+You need:
+
+* A Portkey API key from [the Portkey app](https://app.portkey.ai/)
+* An HPC-AI API key from [HPC-AI](https://www.hpc-ai.com/)
+
+```sh
+pip install -qU portkey-ai openai
+```
+
+### With OpenAI Client
+
+```python
+from openai import OpenAI
+from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
+
+client = OpenAI(
+ api_key="YOUR_HPC_AI_API_KEY",
+ base_url=PORTKEY_GATEWAY_URL,
+ default_headers=createHeaders(
+ provider="hpc-ai",
+ api_key="YOUR_PORTKEY_API_KEY",
+ ),
+)
+
+chat_complete = client.chat.completions.create(
+ model="minimax/minimax-m2.5",
+ messages=[{"role": "user", "content": "Who are you?"}],
+)
+
+print(chat_complete.choices[0].message.content)
+```
+
+### Observability
+
+Routing through Portkey lets you track tokens, latency, cost, and more in the Portkey dashboard.
+
+For the full integration page (Model Catalog, cURL, and SDK examples), see [/integrations/llms/hpc-ai](/integrations/llms/hpc-ai).
diff --git a/integrations/llms.mdx b/integrations/llms.mdx
index 940ef516..29cb3587 100644
--- a/integrations/llms.mdx
+++ b/integrations/llms.mdx
@@ -90,6 +90,10 @@ description: "Portkey connects with all major LLM providers and orchestration fr
+
+
+
+
diff --git a/integrations/llms/hpc-ai.mdx b/integrations/llms/hpc-ai.mdx
new file mode 100644
index 00000000..7b15a035
--- /dev/null
+++ b/integrations/llms/hpc-ai.mdx
@@ -0,0 +1,166 @@
+---
+title: "HPC-AI"
+description: "Integrate HPC-AI Model APIs (OpenAI-compatible) with Portkey's AI Gateway"
+---
+
+Portkey provides a gateway to [HPC-AI](https://www.hpc-ai.com/) **Model APIs**, which expose an **OpenAI-compatible** HTTP API for chat completions.
+
+With Portkey, you get fast routing, observability, prompt management, and secure API key storage through the [Model Catalog](/product/model-catalog).
+
+## Quick Start
+
+Get HPC-AI working in 3 steps:
+
+
+```python Python icon="python"
+from portkey_ai import Portkey
+
+# 1. Install: pip install portkey-ai
+# 2. Add @hpc-ai provider in model catalog
+# 3. Use it:
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+response = portkey.chat.completions.create(
+ model="@hpc-ai/minimax/minimax-m2.5",
+ messages=[{"role": "user", "content": "Say this is a test"}]
+)
+
+print(response.choices[0].message.content)
+```
+
+```js Javascript icon="square-js"
+import Portkey from 'portkey-ai'
+
+// 1. Install: npm install portkey-ai
+// 2. Add @hpc-ai provider in model catalog
+// 3. Use it:
+
+const portkey = new Portkey({
+ apiKey: "PORTKEY_API_KEY"
+})
+
+const response = await portkey.chat.completions.create({
+ model: "@hpc-ai/minimax/minimax-m2.5",
+ messages: [{ role: "user", content: "Say this is a test" }]
+})
+
+console.log(response.choices[0].message.content)
+```
+
+```python OpenAI Py icon="python"
+from openai import OpenAI
+from portkey_ai import PORTKEY_GATEWAY_URL
+
+# 1. Install: pip install openai portkey-ai
+# 2. Add @hpc-ai provider in model catalog
+# 3. Use it:
+
+client = OpenAI(
+ api_key="PORTKEY_API_KEY", # Portkey API key
+ base_url=PORTKEY_GATEWAY_URL
+)
+
+response = client.chat.completions.create(
+ model="@hpc-ai/minimax/minimax-m2.5",
+ messages=[{"role": "user", "content": "Say this is a test"}]
+)
+
+print(response.choices[0].message.content)
+```
+
+```js OpenAI JS icon="square-js"
+import OpenAI from "openai"
+import { PORTKEY_GATEWAY_URL } from "portkey-ai"
+
+// 1. Install: npm install openai portkey-ai
+// 2. Add @hpc-ai provider in model catalog
+// 3. Use it:
+
+const client = new OpenAI({
+ apiKey: "PORTKEY_API_KEY", // Portkey API key
+ baseURL: PORTKEY_GATEWAY_URL
+})
+
+const response = await client.chat.completions.create({
+ model: "@hpc-ai/minimax/minimax-m2.5",
+ messages: [{ role: "user", content: "Say this is a test" }]
+})
+
+console.log(response.choices[0].message.content)
+```
+
+```sh cURL icon="square-terminal"
+# 1. Add @hpc-ai provider in model catalog
+# 2. Use it:
+
+curl https://api.portkey.ai/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "x-portkey-api-key: $PORTKEY_API_KEY" \
+ -d '{
+ "model": "@hpc-ai/minimax/minimax-m2.5",
+ "messages": [
+ { "role": "user", "content": "Say this is a test" }
+ ]
+ }'
+```
+
+
+
+**Tip:** You can also set `provider="@hpc-ai"` in `Portkey()` and use `model="minimax/minimax-m2.5"` or `model="moonshotai/kimi-k2.5"` in the request.
+
+
+## Open-source AI Gateway
+
+When self-hosting the [Portkey AI Gateway](https://github.com/Portkey-AI/gateway), set `x-portkey-provider: hpc-ai` and pass your HPC-AI API key as `Authorization: Bearer $HPC_AI_API_KEY`. The default upstream base URL is `https://api.hpc-ai.com/inference/v1`. Override it with `x-portkey-custom-host` or `custom_host` in your config when needed.
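+
+As a sketch, a gateway config that pins the provider and overrides the upstream host could look like this (the `custom_host` value shown is simply the default base URL noted above, included for illustration):
+
+```json
+{
+  "provider": "hpc-ai",
+  "custom_host": "https://api.hpc-ai.com/inference/v1"
+}
+```
+
+A config like this can be passed via the `x-portkey-config` header or the `config` parameter in the Portkey SDKs.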
+
+## Add Provider in Model Catalog
+
+1. Go to [**Model Catalog → Add Provider**](https://app.portkey.ai/model-catalog/providers)
+2. Select **HPC-AI** if it is listed, or add credentials under the provider slug `hpc-ai`
+3. Enter your HPC-AI API key from the [HPC-AI console](https://www.hpc-ai.com/)
+4. Name your provider (e.g., `hpc-ai-prod`)
+
+
+ See all setup options, code examples, and detailed instructions
+
+
+## Supported Endpoints
+
+| Endpoint | Supported |
+|----------|-----------|
+| `/chat/completions` | ✅ |
+
+Other OpenAI-compatible endpoints may be added as the gateway integration expands.
+
+## Supported Models
+
+Example models available through this integration:
+
+- `minimax/minimax-m2.5`
+- `moonshotai/kimi-k2.5`
+
+
+ Product home, pricing, and developer documentation
+
+
+## Next Steps
+
+
+
+ Add metadata to your HPC-AI requests
+
+
+ Add gateway configs to your requests
+
+
+ Trace your requests
+
+
+ Set up fallbacks across providers
+
+
+
+
+ Complete Portkey SDK documentation
+
diff --git a/openapi/portkey-models.json b/openapi/portkey-models.json
index d40bf616..101eb581 100644
--- a/openapi/portkey-models.json
+++ b/openapi/portkey-models.json
@@ -34,7 +34,7 @@
"/model-configs/pricing/{provider}/{model}": {
"get": {
"summary": "Get Model Pricing",
- "description": "Returns pricing configuration for a specific model.\n\n**Note:** Prices are in USD cents per token.\n\n## Supported Providers\n\nopenai, anthropic, google, azure-openai, bedrock, mistral-ai, cohere, together-ai, groq, deepseek, fireworks-ai, perplexity-ai, anyscale, deepinfra, cerebras, x-ai, and 25+ more.\n\n## Response Fields\n\n| Field | Description | Unit |\n|-------|-------------|------|\n| `request_token.price` | Input token cost | cents/token |\n| `response_token.price` | Output token cost | cents/token |\n| `cache_write_input_token.price` | Cache write cost | cents/token |\n| `cache_read_input_token.price` | Cache read cost | cents/token |\n| `additional_units.*` | Provider-specific features | cents/unit |",
+ "description": "Returns pricing configuration for a specific model.\n\n**Note:** Prices are in USD cents per token.\n\n## Supported Providers\n\nopenai, anthropic, google, azure-openai, bedrock, mistral-ai, cohere, together-ai, groq, deepseek, fireworks-ai, perplexity-ai, anyscale, deepinfra, hpc-ai, cerebras, x-ai, and 25+ more.\n\n## Response Fields\n\n| Field | Description | Unit |\n|-------|-------------|------|\n| `request_token.price` | Input token cost | cents/token |\n| `response_token.price` | Output token cost | cents/token |\n| `cache_write_input_token.price` | Cache write cost | cents/token |\n| `cache_read_input_token.price` | Cache read cost | cents/token |\n| `additional_units.*` | Provider-specific features | cents/unit |",
"operationId": "getModelPricing",
"tags": ["Pricing"],
"parameters": [
diff --git a/product/model-catalog/portkey-models.mdx b/product/model-catalog/portkey-models.mdx
index 49a3af63..26d62c8f 100644
--- a/product/model-catalog/portkey-models.mdx
+++ b/product/model-catalog/portkey-models.mdx
@@ -103,7 +103,7 @@ GET https://api.portkey.ai/model-configs/pricing/{provider}/{model}
Provider identifier. Use lowercase with hyphens.
- Examples: `openai`, `anthropic`, `google`, `azure-openai`, `bedrock`, `together-ai`, `groq`, `deepseek`, `x-ai`, `mistral-ai`, `cohere`, `fireworks-ai`, `perplexity-ai`, `anyscale`, `deepinfra`, `cerebras`
+ Examples: `openai`, `anthropic`, `google`, `azure-openai`, `bedrock`, `together-ai`, `groq`, `deepseek`, `x-ai`, `mistral-ai`, `cohere`, `fireworks-ai`, `perplexity-ai`, `anyscale`, `deepinfra`, `hpc-ai`, `cerebras`
@@ -456,7 +456,7 @@ Use this endpoint to discover all available models and their pricing for a provi
Provider identifier. Use lowercase with hyphens.
- Examples: `openai`, `anthropic`, `google`, `bedrock`, `azure-openai`, `together-ai`, `groq`, `deepseek`, `x-ai`, `mistral-ai`
+ Examples: `openai`, `anthropic`, `google`, `bedrock`, `azure-openai`, `together-ai`, `groq`, `deepseek`, `x-ai`, `mistral-ai`, `hpc-ai`
#### Response Schema