
Add explicit context caching section to Gemini page #821

Open

mintlify[bot] wants to merge 1 commit into main from mintlify/gemini-context-caching-1774427736

Conversation

Contributor

@mintlify mintlify bot commented Mar 25, 2026

Summary

  • Added a new Explicit context caching section to the Google Gemini integration page (integrations/llms/gemini.mdx)
  • Documents how to create a cache using the cachedContents API endpoint through Portkey
  • Documents how to use the cache in chat completions via the cached_content parameter
  • Includes code examples in cURL, Python, and Node.js for both creating a cache and using it in requests
  • Notes the 1024-token minimum requirement and model matching constraint
  • Matches the structure and style of the existing Vertex AI explicit caching section
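The two-step flow the summary describes (create a cache via the `cachedContents` endpoint, then reference it via `cached_content` in a chat completion) can be sketched as follows. This is an illustrative outline of the request payloads only, not the diff's actual examples: the model id `gemini-2.0-flash-001`, the `ttl` value, and the placeholder cache name are assumptions.

```python
# Step 1: payload for creating a cache via the Gemini cachedContents API
# (sent through Portkey). The cached content must total at least 1,024 tokens.
cache_create_payload = {
    "model": "gemini-2.0-flash-001",  # assumed model id for illustration
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "<large document, at least 1,024 tokens>"}],
        }
    ],
    "ttl": "3600s",  # keep the cache alive for one hour (assumed value)
}

# Step 2: a chat-completion request that references the cache by the name
# returned from step 1 (placeholder shown here), using the cached_content
# parameter documented in the PR.
chat_payload = {
    "model": "gemini-2.0-flash-001",
    "cached_content": "cachedContents/EXAMPLE_ID",  # name returned by step 1
    "messages": [
        {"role": "user", "content": "Summarize the cached document."}
    ],
}

# The model-matching constraint noted in the summary: the completion request
# must name the same model the cache was created with.
assert cache_create_payload["model"] == chat_payload["model"]
```

The constraint in the last assertion mirrors the PR's note that a cache can only be used with the model it was created for; a mismatch is rejected by the API.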

Contributor Author

mintlify bot commented Mar 25, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

| Project | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| portkey-docs | 🟢 Ready | View Preview | Mar 25, 2026, 8:39 AM |

