# Build AI Agents with Groq

Groq runs open models on its custom LPU (Language Processing Unit) hardware, delivering very fast inference with sub-second response times.
## Available Groq Models
| Model | ID |
|---|---|
| Llama 3.3 70B | `llama-3.3-70b-versatile` |
| Llama 3.1 8B Instant | `llama-3.1-8b-instant` |
| Mixtral 8x7B | `mixtral-8x7b-32768` |
| Gemma 2 9B | `gemma2-9b-it` |
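If you select models programmatically, the table above can be mirrored as a small lookup. This is a convenience sketch, not part of any Groq or Agno API; the names and IDs are copied directly from the table.

```python
# Friendly model names mapped to Groq model IDs (copied from the table above).
GROQ_MODELS = {
    "Llama 3.3 70B": "llama-3.3-70b-versatile",
    "Llama 3.1 8B Instant": "llama-3.1-8b-instant",
    "Mixtral 8x7B": "mixtral-8x7b-32768",
    "Gemma 2 9B": "gemma2-9b-it",
}


def model_id(name: str) -> str:
    """Look up a Groq model ID by its friendly name, with a helpful error."""
    try:
        return GROQ_MODELS[name]
    except KeyError:
        raise ValueError(
            f"Unknown model {name!r}. Choose from: {sorted(GROQ_MODELS)}"
        ) from None
```

A helper like this keeps model IDs in one place, so swapping models later is a one-line change.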
## Why use Groq for AI agents?

- Very fast inference speeds
- Low latency, well suited to real-time interaction
- Runs popular open-source models (Llama, Mixtral, Gemma)
- Free tier available for development
## Best use cases
- Real-time chatbots
- Low-latency agents
- Rapid prototyping
- Interactive tools
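For the real-time and low-latency cases above, you typically want to stream tokens as they arrive rather than wait for the full completion. Below is a minimal sketch using the official `groq` Python SDK's Chat Completions streaming interface; the helper names (`build_chat`, `stream_reply`) and the choice of `llama-3.1-8b-instant` for lowest latency are this document's assumptions, not Groq requirements.

```python
def build_chat(history, user_msg, system="You are a helpful assistant."):
    """Assemble the messages list the Chat Completions API expects."""
    return [
        {"role": "system", "content": system},
        *history,
        {"role": "user", "content": user_msg},
    ]


def stream_reply(prompt: str, model: str = "llama-3.1-8b-instant") -> str:
    """Stream a reply token by token, printing as it arrives; returns the full text.

    Requires `pip install groq` and GROQ_API_KEY in the environment. The import
    lives inside the function so build_chat stays usable without the SDK.
    """
    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment
    stream = client.chat.completions.create(
        model=model,
        messages=build_chat([], prompt),
        stream=True,  # chunks arrive as tokens are generated
    )
    parts = []
    for chunk in stream:
        token = chunk.choices[0].delta.content or ""
        print(token, end="", flush=True)
        parts.append(token)
    return "".join(parts)
```

Streaming does not make generation faster, but it cuts perceived latency to the time-to-first-token, which is where Groq's hardware is strongest.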
## Quick start code

`agent.py`:

```python
from agno.agent import Agent
from agno.models.groq import Groq

agent = Agent(
    name="My Groq Agent",
    model=Groq(id="llama-3.3-70b-versatile"),
    instructions=["You are a helpful assistant."],
    markdown=True,
)

agent.print_response("Hello! What can you help me with?")
```

Install the dependencies and set your Groq API key before running:

```shell
pip install agno groq
export GROQ_API_KEY=your-api-key
```