Quick Start
Get your first AI agent running in 5 minutes
Get a working AI agent with REST API endpoints and structured AI outputs.
Prerequisites: Python 3.10-3.13. No Python? Run:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh && uv venv --python 3.13 && source .venv/bin/activate
```
1. Install Agentfield
Install the CLI:
```bash
curl -sSf https://agentfield.ai/get | sh
```

Reload your shell and verify:

```bash
source ~/.zshrc
af --version
```

You should see:

```
AgentField Control Plane
Version: 0.1.8
Commit: fb9f5e2
Built: 2025-11-17T23:03:17Z
Go version: go1.24.2
OS/Arch: darwin/arm64
```

Install the Python SDK:

```bash
pip install agentfield
```

2. Create and Start Your Agent
```bash
# Create your agent (uses sensible defaults)
af init my-agent --defaults

# Start the control plane (in a new terminal)
af server

# Run your agent (in your project directory)
cd my-agent
pip install -r requirements.txt
python main.py
```

You'll see:

```
Agent 'my-agent' starting...
Connected to Agentfield server
Registered endpoints:
  - my-agent.demo_echo
Agent ready at http://localhost:8001
```

3. Test Your Agent
Your agent is now a REST API. Test it:
```bash
curl -X POST http://localhost:8080/api/v1/execute/my-agent.demo_echo \
  -H "Content-Type: application/json" \
  -d '{"input": {"message": "Hello, Agentfield!"}}'
```

Response:

```json
{
  "execution_id": "exec_20251117_161256_3d1irv83",
  "status": "succeeded",
  "result": {
    "original": "Hello, Agentfield!",
    "echoed": "Hello, Agentfield!",
    "length": 18
  },
  "duration_ms": 8
}
```

It works! Your agent is running and responding to requests.
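Prefer calling the endpoint from Python instead of curl? Here is a minimal client sketch using only the standard library. The execute_url and call_agent helpers are hypothetical (not part of the Agentfield SDK), and it assumes the control plane is listening on localhost:8080 as in this guide:

```python
import json
from urllib import request

# Hypothetical helpers: not part of the Agentfield SDK.
def execute_url(base: str, target: str) -> str:
    """Build the execute URL the control plane exposes for an endpoint."""
    return f"{base}/api/v1/execute/{target}"

def call_agent(base: str, target: str, payload: dict) -> dict:
    """POST an input payload to an agent endpoint and decode the JSON reply."""
    req = request.Request(
        execute_url(base, target),
        data=json.dumps({"input": payload}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Usage (with the agent from this guide running):
# reply = call_agent("http://localhost:8080", "my-agent.demo_echo",
#                    {"message": "Hello, Agentfield!"})
# print(reply["status"], reply["result"]["echoed"])
```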
4. Enable AI Features
The demo_echo endpoint works without any API keys. Now let's enable AI-powered capabilities.
Set your API key
Create a .env file in your my-agent directory:
```bash
# my-agent/.env
OPENAI_API_KEY=sk-...
```

Using a different provider? Set OPENROUTER_API_KEY or ANTHROPIC_API_KEY instead. See supported models.
Install python-dotenv
```bash
pip install python-dotenv
```

Update main.py
Open main.py and make these changes:
- Add imports at the top:

```python
import os
from dotenv import load_dotenv

load_dotenv()
```

- Uncomment and update the ai_config section:
```python
app = Agent(
    node_id="my-agent",
    agentfield_server="http://localhost:8080",
    version="1.0.0",
    dev_mode=True,
    ai_config=AIConfig(
        model="openai/gpt-4o",  # LiteLLM format: provider/model
        temperature=0.7,
    ),
)
```

Agentfield uses LiteLLM under the hood. Model names follow the provider/model format (e.g., openai/gpt-4o, anthropic/claude-sonnet-4-20250514, openrouter/meta-llama/llama-3-70b).
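Because the model string is passed through to LiteLLM as-is, a typo only surfaces on the first AI call. If you want to fail fast, a small sanity check at startup can help. This check_model_name helper is a sketch, not part of the SDK; extend KNOWN_PROVIDERS to whatever providers you actually use:

```python
# Hypothetical startup check: not part of the Agentfield SDK.
# LiteLLM model strings look like "provider/model"; some providers
# (e.g. openrouter) nest a further org/model path after the provider.
KNOWN_PROVIDERS = {"openai", "anthropic", "openrouter"}  # extend as needed

def check_model_name(model: str) -> str:
    """Raise ValueError early if a model string is not provider/model shaped."""
    provider, sep, rest = model.partition("/")
    if not sep or not rest:
        raise ValueError(f"expected 'provider/model', got {model!r}")
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"unknown provider {provider!r} in {model!r}")
    return model

# check_model_name("openai/gpt-4o") passes; check_model_name("gpt-4o") raises.
```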
Uncomment the AI reasoner
Open reasoners.py and uncomment the SentimentAnalysis class and analyze_sentiment function (around line 15).
Restart and test
```bash
# Stop your agent (Ctrl+C) and restart it
python main.py
```

Now test the AI-powered endpoint:
```bash
curl -X POST http://localhost:8080/api/v1/execute/my-agent.demo_analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "I love building with Agentfield!"}}'
```

Response:

```json
{
  "execution_id": "exec_20251117_161355_dd2rdzzb",
  "status": "succeeded",
  "result": {
    "sentiment": "positive",
    "confidence": 0.95,
    "key_phrases": ["love building", "Agentfield"],
    "reasoning": "The text expresses strong positive emotion with enthusiastic language."
  },
  "duration_ms": 2943
}
```

AI is working! Your agent now has structured AI outputs with type-safe schemas.
What Just Happened?
You created an agent with two working endpoints:
- ✅ demo_echo - Works immediately (no API keys needed)
- ✅ demo_analyze_sentiment - AI-powered with structured output
- ✅ Auto-generated REST API endpoints
- ✅ Type-safe JSON responses via Pydantic schemas
The AI reasoner uses structured output—the response always matches your schema, making it easy to integrate into real applications.
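To make that concrete, here is a sketch of what "matches your schema" means for the sentiment result shown above. Server-side, Agentfield enforces the shape with a Pydantic model; this validate_sentiment function is just a hypothetical client-side mirror of that schema for callers that want a quick sanity check without a dependency:

```python
# Hypothetical client-side check: mirrors the sentiment schema shown above.
def validate_sentiment(result: dict) -> bool:
    """Return True if a demo_analyze_sentiment result has the expected shape."""
    return (
        result.get("sentiment") in {"positive", "negative", "neutral"}
        and isinstance(result.get("confidence"), (int, float))
        and 0.0 <= result["confidence"] <= 1.0
        and isinstance(result.get("key_phrases"), list)
        and all(isinstance(p, str) for p in result["key_phrases"])
        and isinstance(result.get("reasoning"), str)
    )
```

Because the AI output is structured, this check should always pass for successful executions; a plain free-text LLM response would offer no such guarantee.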
Next Steps
Build Your First Agent
Multi-step reasoning, cross-agent communication, and custom reasoners
Core Concepts
Learn about reasoners, skills, and multi-agent communication
Deploy to Production
Docker, Kubernetes, and production deployment guides
Want to use Go instead of Python? Check out the Build Your First Agent guide for language options and detailed explanations.