# Groq LLM Plugin for elizaOS

Fast inference with Llama, Qwen, and other models.

This plugin provides Groq API integration for elizaOS agents, enabling ultra-fast text generation, audio transcription, and text-to-speech synthesis.
## Features

- **Fast Inference** - Leverage Groq's LPU for industry-leading inference speeds
- **Text Generation** - Generate text with Llama, Qwen, and other models
- **Audio Transcription** - Transcribe audio with Whisper models
- **Text-to-Speech** - Generate speech with PlayAI voices
- **Object Generation** - Generate structured JSON objects
- **Tokenization** - Tokenize and detokenize text
## Packages

This plugin is available for three languages:

| Language | Package | Registry |
|---|---|---|
| TypeScript/JavaScript | `@elizaos/plugin-groq` | npm |
| Python | `elizaos-plugin-groq` | PyPI |
| Rust | `elizaos-plugin-groq` | crates.io |
## Installation

**TypeScript/JavaScript:**

```bash
npm install @elizaos/plugin-groq
# or
bun add @elizaos/plugin-groq
```

**Python:**

```bash
pip install elizaos-plugin-groq
```

**Rust:**

```bash
cargo add elizaos-plugin-groq
```

## Usage

**TypeScript:**

```typescript
import { groqPlugin } from "@elizaos/plugin-groq";

// Add to your agent's plugins
const agent = new Agent({
  plugins: [groqPlugin],
});
```

**Python:**

```python
from elizaos_plugin_groq import GroqClient, GenerateTextParams

async with GroqClient(api_key="your-api-key") as client:
    response = await client.generate_text_large(
        GenerateTextParams(prompt="What is the nature of reality?")
    )
    print(response)
```

**Rust:**

```rust
use elizaos_plugin_groq::{GroqClient, GenerateTextParams};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = GroqClient::new("your-api-key", None)?;
    let response = client
        .generate_text_large(GenerateTextParams {
            prompt: "What is the nature of reality?".to_string(),
            ..Default::default()
        })
        .await?;
    println!("{}", response);
    Ok(())
}
```

## Configuration

Set the following environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| `GROQ_API_KEY` | Yes | - | Your Groq API key |
| `GROQ_BASE_URL` | No | `https://api.groq.com/openai/v1` | Custom API base URL |
| `GROQ_SMALL_MODEL` | No | `openai/gpt-oss-20b` | Model for small tasks |
| `GROQ_LARGE_MODEL` | No | `llama-3.3-70b-versatile` | Model for large tasks |
| `GROQ_TTS_MODEL` | No | `canopylabs/orpheus-v1-english` | Text-to-speech model |
| `GROQ_TTS_VOICE` | No | `troy` | TTS voice name |
| `GROQ_TTS_RESPONSE_FORMAT` | No | `wav` | TTS response format |
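For illustration, here is a minimal Python sketch (a hypothetical helper, not part of the plugin's actual API) showing how these variables might be resolved against their documented defaults, with environment values taking precedence:

```python
import os

# Documented defaults from the table above; GROQ_API_KEY is required and has none.
DEFAULTS = {
    "GROQ_BASE_URL": "https://api.groq.com/openai/v1",
    "GROQ_SMALL_MODEL": "openai/gpt-oss-20b",
    "GROQ_LARGE_MODEL": "llama-3.3-70b-versatile",
    "GROQ_TTS_MODEL": "canopylabs/orpheus-v1-english",
    "GROQ_TTS_VOICE": "troy",
    "GROQ_TTS_RESPONSE_FORMAT": "wav",
}


def resolve_setting(name: str) -> str:
    """Return the environment value if set, otherwise the documented default."""
    value = os.environ.get(name)
    if value:
        return value
    if name in DEFAULTS:
        return DEFAULTS[name]
    # No default exists (e.g. GROQ_API_KEY), so the variable must be set.
    raise KeyError(f"{name} is required and has no default")
```

With no overrides in the environment, `resolve_setting("GROQ_TTS_VOICE")` returns `"troy"`, while a missing `GROQ_API_KEY` raises an error.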
## Model Types

This plugin provides handlers for the following elizaOS model types:

| Model Type | Description |
|---|---|
| `TEXT_SMALL` | Fast text generation with smaller models |
| `TEXT_LARGE` | High-quality text generation with larger models |
| `OBJECT_SMALL` | JSON object generation (small) |
| `OBJECT_LARGE` | JSON object generation (large) |
| `TRANSCRIPTION` | Audio transcription with Whisper |
| `TEXT_TO_SPEECH` | Speech synthesis with PlayAI |
| `TEXT_TOKENIZER_ENCODE` | Tokenize text into tokens |
| `TEXT_TOKENIZER_DECODE` | Decode tokens back into text |
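To make the handler table concrete, here is a toy Python sketch of how a runtime could dispatch a model-type request to the matching handler. The handler bodies are placeholders, not the plugin's real implementations:

```python
from typing import Callable, Dict

# Placeholder handlers keyed by elizaOS model type; real handlers would call Groq.
HANDLERS: Dict[str, Callable[..., str]] = {
    "TEXT_SMALL": lambda prompt: f"[small completion for: {prompt}]",
    "TEXT_LARGE": lambda prompt: f"[large completion for: {prompt}]",
    "TEXT_TOKENIZER_ENCODE": lambda text: " ".join(str(ord(c)) for c in text),
}


def use_model(model_type: str, **params: str) -> str:
    """Route a request to the handler registered for the given model type."""
    handler = HANDLERS.get(model_type)
    if handler is None:
        raise ValueError(f"no handler registered for {model_type}")
    return handler(**params)
```

Registering handlers per model type keeps the runtime decoupled from any one provider: swapping Groq for another backend only changes the entries in the table.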
## Development

### Building

```bash
# TypeScript
bun install
bun run build

# Rust
cd rust && cargo build --release

# Python
cd python && pip install -e ".[dev]"
```

### Testing

```bash
# TypeScript
bun run test

# Rust
cd rust && cargo test

# Python
cd python && pytest
```

### Linting

```bash
# TypeScript
bun run format:check

# Rust
cd rust && cargo clippy

# Python
cd python && ruff check .
```

## License

MIT License - see LICENSE for details.