# Build Any Agent (Python)

Complete guide to building production-grade Agentfield agents in Python — architecture, patterns, and best practices.

This guide covers the complete mental model and patterns for building production Agentfield agents in Python. For the raw system prompt used by AI coding assistants, see `/llms/python-guide.txt`.

**For AI coding assistants:** Load `/llms/python-guide.txt` as context for complete agent architecture guidance.
## The Core Philosophy
Individual LLM calls have limited reasoning capacity. By composing multiple focused LLM calls in strategic architectures, you can build systems that reason at a much higher level than any single call.
The key insight: decompose every complex task into 3–7 granular, independent sub-tasks. The constraint that every AI call must use a simple, flat Pydantic schema with 2–4 attributes isn't a limitation — it's a forcing function for proper decomposition.
## Reasoners vs Skills
Every function in your agent is either a reasoner (AI-powered) or a skill (deterministic):
| | Reasoner | Skill |
|---|---|---|
| Decorator | `@app.reasoner()` | `@app.skill()` |
| Uses AI | Yes — calls `app.ai()` | No — pure logic |
| Return type | Pydantic model or dict | dict |
| Examples | sentiment analysis, routing decisions | DB queries, API calls, calculations |
```python
from agentfield import Agent, AIConfig
from pydantic import BaseModel
import os

app = Agent(
    node_id="my-agent",
    agentfield_server=os.getenv("AGENTFIELD_SERVER", "http://localhost:8080"),
    ai_config=AIConfig(model=os.getenv("SMALL_MODEL", "gpt-4o-mini")),
)

class SentimentResult(BaseModel):
    sentiment: str     # "positive" | "negative" | "neutral"
    confidence: float  # 0.0 - 1.0
    reasoning: str

@app.skill(tags=["database"])
def get_ticket(ticket_id: int) -> dict:
    return {"id": ticket_id, "message": "I can't log in!"}

@app.reasoner()
async def analyze_sentiment(message: str) -> SentimentResult:
    return await app.ai(
        system="You analyze customer sentiment.",
        user=f"Analyze: {message}",
        schema=SentimentResult,
        temperature=0.3,
    )

if __name__ == "__main__":
    app.run(auto_port=True)
```

## Schema Design Rules
Keep schemas simple: 2–4 attributes, flat structure. This is required for compatibility with smaller, faster models.
```python
from typing import Literal
from pydantic import BaseModel

# GOOD — simple, flat, 3 attributes
class PriorityResult(BaseModel):
    priority: Literal["low", "medium", "high", "critical"]
    needs_escalation: bool
    reasoning: str

# BAD — too complex, nested
class ComplexResult(BaseModel):
    analysis: dict         # avoid dicts in schemas
    metadata: NestedModel  # avoid nesting
    tags: list[TagModel]   # avoid lists of models
```

For complex output, break into multiple simple AI calls and combine programmatically.
## Orchestration Pattern
The main pattern is orchestrator → specialized reasoners → actions:
```python
import asyncio

@app.reasoner()
async def orchestrate(ticket_id: int) -> dict:
    ticket = get_ticket(ticket_id)

    # Run independent analyses in parallel
    sentiment, topics = await asyncio.gather(
        analyze_sentiment(ticket["message"]),
        extract_topics(ticket["message"]),
        return_exceptions=True,
    )

    # Handle partial failures gracefully
    if isinstance(sentiment, Exception):
        sentiment = SentimentResult(sentiment="neutral", confidence=0.0, reasoning="failed")

    # Sequential step that depends on prior results
    priority = await assess_priority(
        sentiment=sentiment.sentiment,
        topic=topics.primary_topic if not isinstance(topics, Exception) else "unknown",
    )

    return {
        "sentiment": sentiment.dict(),
        "priority": priority.dict(),
    }
```

## Cross-Agent Calls
Call other agents through the control plane using app.call():
```python
result = await app.call(
    "research-agent.deep_search",
    query="AI agent architectures",
    max_results=10,
)
```

Direct reasoner calls (within the same agent) are also valid and faster for intra-agent orchestration.
## Progress Updates with `app.note()`
For long-running reasoners, emit progress notes that stream to the UI:
```python
@app.reasoner()
async def long_analysis(document: str) -> dict:
    app.note("Starting document analysis", tags=["analysis", "start"])
    result = await app.ai(user=f"Analyze: {document}", schema=AnalysisResult)
    app.note(f"Analysis complete: {result.summary[:100]}", tags=["analysis", "done"])
    return result.dict()
```

## Organizing Large Agents with AgentRouter
For agents with many reasoners, use `AgentRouter` to split them across files:
```python
# reasoners/analysis.py
from agentfield import AgentRouter

router = AgentRouter(prefix="/analysis", tags=["analysis"])

@router.reasoner()
async def analyze_sentiment(text: str) -> SentimentResult:
    from main import app  # deferred import avoids a circular dependency
    return await app.ai(user=f"Analyze: {text}", schema=SentimentResult)
```

```python
# main.py
from agentfield import Agent, AIConfig
from reasoners.analysis import router as analysis_router

app = Agent(node_id="my-agent", ...)
app.include_router(analysis_router)

if __name__ == "__main__":
    app.run(auto_port=True)
```

## Memory API
Access persistent memory across scopes:
```python
# Scopes: "global", "agent", "session", "run"
await app.memory.set("session", "user_preferences", {"theme": "dark"})
prefs = await app.memory.get("session", "user_preferences")
```

## Model Selection
Use environment variables so models can be configured per deployment:
```python
ai_config=AIConfig(
    model=os.getenv("SMALL_MODEL", "openai/gpt-4o-mini"),  # default: fast + cheap
    fallback_models=[os.getenv("FALLBACK_MODEL", "anthropic/claude-haiku")],
)

# Override per call for complex reasoning
result = await app.ai(
    user="Complex multi-step reasoning task...",
    model=os.getenv("LARGE_MODEL", "openai/gpt-4o"),  # override: powerful + slow
    temperature=0.2,
)
```

## Project Structure
For production agents, follow this convention:
```
my-agent/
├── main.py              # Agent init + user-facing orchestrators
├── models.py            # All Pydantic schemas
├── reasoners/
│   ├── __init__.py      # AgentRouter setup + exports
│   ├── analysis.py      # Analysis reasoners
│   └── synthesis.py     # Synthesis reasoners
├── requirements.txt
└── .env                 # AGENTFIELD_SERVER, SMALL_MODEL, etc.
```

## Pre-Implementation Checklist
Before writing code:
- Task broken into 3–7 logical steps
- Each schema has 2–4 attributes, no nesting
- Clear orchestrator → sub-reasoner call graph defined
- Independent steps will use `asyncio.gather()`
- AI tasks are reasoners, deterministic tasks are skills
- Error handling: critical steps fail-fast, optional steps degrade gracefully
- Model env vars: `SMALL_MODEL` for most calls, `LARGE_MODEL` for complex reasoning
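The concurrency and error-handling items in the checklist can be sketched as a skeleton orchestrator. The step functions below are stubs standing in for real reasoners; their names and return values are illustrative only:

```python
import asyncio

async def step_a(x: str) -> dict:   # optional step — may degrade
    return {"a": x.upper()}

async def step_b(x: str) -> dict:   # optional step — fails here
    raise RuntimeError("model timeout")

async def critical_step(x: str) -> dict:  # must succeed
    return {"ok": True}

async def orchestrate(x: str) -> dict:
    # Independent steps run in parallel; exceptions come back as values
    a, b = await asyncio.gather(step_a(x), step_b(x), return_exceptions=True)

    # Optional steps degrade gracefully to a safe default
    if isinstance(a, Exception):
        a = {"a": "unknown"}
    if isinstance(b, Exception):
        b = {"b": "unknown"}

    # Critical step is fail-fast: any exception propagates to the caller
    final = await critical_step(x)
    return {**a, **b, **final}

result = asyncio.run(orchestrate("hi"))
```

Here `step_b`'s failure is absorbed into a default value, while a failure in `critical_step` would abort the whole run, matching the fail-fast versus degrade-gracefully split above.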