Reasoners & Skills
The fundamental primitives for building production-ready autonomous software
Production AI systems face a fundamental question: when should software use AI judgment versus deterministic code?
You can't make everything an LLM call. That's slow, expensive, and unpredictable. But you can't avoid AI entirely either. That's rigid and can't adapt to nuance.
Traditional frameworks force you to choose. Agentfield gives you both, formalized as reasoners (AI-guided decision making) and skills (reliable execution). Together, they create what we call guided autonomy: software that combines AI intelligence with programmatic control.
What You'd Otherwise Build
Traditional Approach
What you build:
- Manual decision: when to use AI vs code
- Custom routing logic for each case
- Error handling for both paths
- Separate testing strategies
- Documentation for each pattern
- API endpoints for each function
- Workflow tracking system
Then you write business logic.
Agentfield Approach
What you write:
```python
@app.reasoner()  # AI-powered
async def analyze_sentiment(text: str):
    return await app.ai(...)

@app.skill()  # Deterministic
def calculate_tax(amount: float):
    return amount * 0.08
```
Agentfield provides:
- ✓ Clear AI vs deterministic separation
- ✓ Unified execution model
- ✓ Automatic API endpoints
- ✓ Workflow tracking
- ✓ Cryptographic identity
- ✓ Consistent error handling
- ✓ Full observability
The Production Dilemma
Here's the problem every production AI system hits:
Scenario: You're building a customer support system. A ticket comes in: "Your platform keeps crashing and I'm losing data!"
You need to:
- Analyze sentiment (is the customer frustrated? angry? confused?)
- Determine priority (is this urgent? can it wait?)
- Route to the right team (technical? billing? account management?)
- Update your database with the decision
- Send notifications to the assigned team
Which parts need AI? Which parts need code?
If you wrap everything in LLM calls:
- Database updates become unpredictable
- Notifications might get malformed
- You're paying for AI to do arithmetic
- Response times are inconsistent (200ms to 10 seconds)
If you avoid AI entirely:
- You're writing brittle if/else chains for sentiment
- Priority rules become unmaintainable
- You can't adapt to new ticket types
- Edge cases break your logic
Agentfield's answer: Use AI where you need judgment. Use code where you need reliability. Make them work together seamlessly.
Reasoners: AI-Guided Decision Making
A reasoner is a microservice that combines AI intelligence with your business logic. It's not just "a function that calls an LLM." It's a production component that uses AI for analysis, then your code for routing, validation, and action.
Long-Running Reasoners
Reasoners can run for arbitrarily long periods—hours or even days—when executed asynchronously. This is critical for complex workflows where one agent calls another, which calls another, creating nested reasoning chains.
Unlike traditional frameworks that time out after minutes, Agentfield's control plane tracks long-running executions without limits. Perfect for:
- Multi-step research and analysis
- Nested agent-to-agent coordination
- Complex decision workflows with multiple AI calls
- Batch processing that requires extended reasoning
Learn more in Async Execution & Webhooks.
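To make the nested-chain idea concrete, here is a minimal sketch of a reasoner that coordinates other agents. It uses `app.call()`, which is covered later in this guide; the `research-agent.summarize_source` target is hypothetical.

```python
@app.reasoner()
async def deep_research(topic: str, sources: list[str]) -> dict:
    """Coordinates nested agent-to-agent calls; may run for hours when async."""
    summaries = []
    for source in sources:
        # Each cross-agent call is tracked by the control plane, so the
        # parent execution can wait on long-running children without a timeout.
        summary = await app.call(
            "research-agent.summarize_source",  # hypothetical downstream agent
            topic=topic,
            source=source,
        )
        summaries.append(summary)
    return {"topic": topic, "summaries": summaries}
```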
Here's what a single reasoner looks like in practice:
```python
from agentfield import Agent
from pydantic import BaseModel

app = Agent("support-system")

class TriageDecision(BaseModel):
    sentiment: str         # "frustrated", "angry", "confused", "neutral"
    priority: str          # "low", "medium", "high", "critical"
    category: str          # "technical", "billing", "account"
    needs_escalation: bool
    reasoning: str         # Why did we make this decision?

@app.reasoner()
async def triage_support_ticket(ticket: dict) -> dict:
    """
    Analyzes a support ticket and routes it autonomously.
    AI provides judgment. Code provides control.
    """
    # Step 1: AI analyzes the ticket
    analysis = await app.ai(
        system="You are a support triage expert. Analyze tickets for sentiment, priority, and category.",
        user=f"Ticket #{ticket['id']}: {ticket['message']}\nCustomer tier: {ticket['tier']}",
        schema=TriageDecision
    )

    # Step 2: Your code makes routing decisions based on AI insights
    if analysis.needs_escalation:
        # High-stakes decision: route to senior team
        queue = await escalate_to_human(ticket, analysis.reasoning)
        app.note(f"⚠️ Escalated ticket {ticket['id']} to senior team", tags=["escalation"])
    elif analysis.priority == "critical":
        # Urgent but not escalation-worthy: fast-track queue
        queue = await assign_to_senior_team(ticket)
        app.note(f"🚨 Critical priority: {analysis.reasoning}", tags=["critical"])
    else:
        # Standard routing based on category
        queue = await add_to_standard_queue(ticket, analysis.category)

    # Step 3: Update systems (deterministic operations)
    await update_ticket_metadata(ticket['id'], {
        'priority': analysis.priority,
        'category': analysis.category,
        'sentiment': analysis.sentiment,
        'ai_reasoning': analysis.reasoning,
        'assigned_queue': queue
    })

    # Step 4: Send notifications
    await notify_team(queue, ticket, analysis)

    return {
        'ticket_id': ticket['id'],
        'routed_to': queue,
        'analysis': analysis.dict()
    }
```
What just happened?
- AI provided judgment: Sentiment analysis, priority assessment, category classification
- Code provided control: Routing logic, database updates, notifications
- Pydantic enforced structure: AI output is typed and validated
- Agentfield made it infrastructure: Automatic API endpoint, workflow tracking, cryptographic identity
This is guided autonomy. The reasoner doesn't just call AI and return the result. It orchestrates: AI analyzes, code decides what to do with that analysis, then executes reliably.
What Agentfield Does Automatically
When you define a reasoner with @app.reasoner(), Agentfield:
- Generates a cryptographic identity (DID) for this specific reasoner
- Registers it with the control plane for service discovery
- Exposes it as a REST API: `POST /api/v1/execute/support-system.triage_support_ticket`
- Tracks every execution in workflow DAGs with inputs, outputs, and timing
- Issues verifiable credentials proving what ran, when, and by whom
You wrote a Python function. Agentfield turned it into production infrastructure.
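To see the generated endpoint in action, you can call it from any HTTP client. A minimal sketch using Python's requests library, assuming the same localhost:8080 control plane and `{"input": {...}}` envelope used in the curl examples later in this guide:

```python
import requests

# The {"input": {...}} envelope matches the execute API used throughout this guide
response = requests.post(
    "http://localhost:8080/api/v1/execute/support-system.triage_support_ticket",
    json={
        "input": {
            "ticket": {
                "id": 12345,
                "message": "Your platform keeps crashing and I'm losing data!",
                "tier": "enterprise",
            }
        }
    },
)
print(response.json())
```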
Skills: Reliable Execution
Skills are deterministic functions. They handle everything that doesn't need AI judgment: database queries, calculations, API calls, data transformations.
```python
@app.skill(tags=["database", "users"])
def get_user_profile(user_id: int) -> dict:
    """Retrieves user profile from database."""
    user = db.query(User).filter_by(id=user_id).first()
    return user.to_dict() if user else None

@app.skill(tags=["calculations", "pricing"])
def calculate_discount(price: float, tier: str) -> float:
    """Calculates discount based on customer tier."""
    discount_rates = {
        "gold": 0.20,
        "silver": 0.10,
        "bronze": 0.05
    }
    return price * (1 - discount_rates.get(tier, 0))

@app.skill(tags=["notifications"])
async def send_slack_alert(channel: str, message: str) -> bool:
    """Sends alert to Slack channel."""
    response = await slack_client.post_message(channel, message)
    return response.ok
```
Skills can be sync or async. They work with your existing code. The `tags` parameter helps with organization and discovery in the Agentfield UI.
Why separate skills from reasoners?
Because in production, you need to know what's deterministic and what's not. Skills are:
- Predictable: Same input, same output
- Fast: No LLM latency
- Testable: Standard unit tests work
- Debuggable: No AI black box
When something breaks at 3am, you want to know if it's your database query (skill) or your AI analysis (reasoner).
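Because skills are plain functions, they test like plain functions. A minimal sketch with pytest, exercising the `calculate_discount` skill defined above (the import path is hypothetical, matching wherever you define your skills):

```python
import pytest

from my_agent import calculate_discount  # hypothetical module containing the skill

def test_gold_tier_gets_twenty_percent_off():
    assert calculate_discount(100.0, "gold") == pytest.approx(80.0)

def test_unknown_tier_pays_full_price():
    # Unrecognized tiers fall through to a 0% discount
    assert calculate_discount(100.0, "platinum") == pytest.approx(100.0)
```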
Structured Output: Making AI Predictable
The key to guided autonomy is making AI output predictable enough for your code to act on. That's where Pydantic schemas come in.
```python
from pydantic import BaseModel, Field

class FeedbackAnalysis(BaseModel):
    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(ge=0.0, le=1.0, description="Confidence score")
    keywords: list[str] = Field(description="Key topics mentioned")
    is_feature_request: bool
    urgency: str = Field(description="low, medium, or high")
    reasoning: str = Field(description="Explanation of the analysis")

@app.reasoner()
async def analyze_customer_feedback(feedback: dict) -> dict:
    """
    Analyzes customer feedback and takes autonomous action.
    Pydantic ensures AI output is structured and reliable.
    """
    # AI provides structured analysis
    analysis = await app.ai(
        system="Analyze customer feedback for sentiment, topics, and urgency.",
        user=feedback['message'],
        schema=FeedbackAnalysis
    )

    # Your code can now reliably act on AI insights
    actions_taken = []

    if analysis.sentiment == "negative" and analysis.confidence > 0.8:
        # High-confidence negative feedback: immediate action
        await notify_customer_success_team(feedback, analysis)
        await create_follow_up_task(feedback['customer_id'], analysis.keywords)
        actions_taken.extend(["notified_cs_team", "created_follow_up"])

    if analysis.is_feature_request:
        # Route to product team
        await route_to_product_team(feedback, analysis.keywords)
        actions_taken.append("routed_to_product")

    if analysis.urgency == "high":
        # Flag for immediate review
        await flag_for_review(feedback['id'], analysis.reasoning)
        actions_taken.append("flagged_urgent")

    return {
        'feedback_id': feedback['id'],
        'analysis': analysis.dict(),
        'actions_taken': actions_taken
    }
```
Why this matters:
- Type safety: Your IDE knows `analysis.sentiment` is a string
- Validation: Pydantic ensures `confidence` is between 0 and 1
- Reliability: Your routing logic won't break on unexpected AI output
- Auditability: The `reasoning` field explains every decision
- Testability: You can mock `analysis` objects in tests
The app.ai() method instructs the LLM to format its response according to your schema. You get back a fully typed Pydantic object that your code can immediately use.
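That last point is worth showing. Because `FeedbackAnalysis` is an ordinary Pydantic model, a test can construct one by hand and exercise your routing logic without ever calling an LLM; a minimal sketch:

```python
# Build an analysis object directly; no LLM call needed
fake_analysis = FeedbackAnalysis(
    sentiment="negative",
    confidence=0.92,
    keywords=["crash", "data loss"],
    is_feature_request=False,
    urgency="high",
    reasoning="Customer reports repeated crashes with data loss.",
)

# Any branch of the routing logic can now be tested deterministically
assert fake_analysis.confidence > 0.8
assert fake_analysis.urgency == "high"
```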
Combining Reasoners and Skills
The real power emerges when you combine them. Here's a production pattern:
```python
class PricingRecommendation(BaseModel):
    recommended_price: float
    discount_tier: str
    upsell_opportunity: bool
    reasoning: str

@app.reasoner()
async def recommend_pricing(user_id: int, product_id: int) -> dict:
    """
    AI-powered pricing recommendation using skills for data.
    Pattern: Skills gather data → AI analyzes → Skills execute.
    """
    # Skills: Get deterministic data
    user = get_user_profile(user_id)                   # Skill
    base_price = get_product_price(product_id)         # Skill
    purchase_history = get_purchase_history(user_id)   # Skill

    # Reasoner: AI analyzes and recommends
    recommendation = await app.ai(
        system="You are a pricing strategist. Recommend optimal pricing based on user data.",
        user=f"""
        User tier: {user['tier']}
        Purchase history: {purchase_history}
        Base price: ${base_price}
        Lifetime value: ${user['lifetime_value']}
        """,
        schema=PricingRecommendation
    )

    # Skills: Calculate final price and execute
    final_price = calculate_discount(
        recommendation.recommended_price,
        recommendation.discount_tier
    )  # Skill

    if recommendation.upsell_opportunity:
        # AI detected upsell potential: trigger marketing automation
        await trigger_upsell_campaign(user_id, product_id)  # Skill

    # Update pricing in database
    await update_user_pricing(user_id, product_id, final_price)  # Skill

    app.note(f"""
    ## Pricing Decision
    **User:** {user_id}
    **Product:** {product_id}
    **Final Price:** ${final_price}
    **Reasoning:** {recommendation.reasoning}
    """, tags=["pricing", "decision"])

    return {
        'user_id': user_id,
        'product_id': product_id,
        'final_price': final_price,
        'recommendation': recommendation.dict()
    }
```
The pattern:
- Skills gather data (database queries, API calls)
- Reasoner uses AI to analyze and decide
- Skills execute actions (calculations, updates, notifications)
This is how you build autonomous software that's both intelligent and reliable.
The app.ai() Method
app.ai() is your interface to language models. It supports multiple calling patterns.
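A minimal sketch of the two styles used on this page: keyword arguments with a Pydantic schema for structured output, and a shorter positional form (system prompt first, then user prompt). `TriageDecision` and `RiskAssessment` are the schemas used elsewhere in this guide; combining both calls in one function is purely illustrative.

```python
@app.reasoner()
async def demo_ai_patterns(ticket_text: str, application: dict) -> dict:
    # Keyword style: explicit system/user prompts plus a Pydantic schema.
    # Returns a fully typed TriageDecision instance.
    decision = await app.ai(
        system="You are a support triage expert.",
        user=ticket_text,
        schema=TriageDecision,
    )

    # Positional style (system prompt, then user prompt), as used in the
    # loan-evaluation example later on this page.
    risk = await app.ai(
        "Evaluate loan risk based on application data",
        f"Application: {application}",
        schema=RiskAssessment,
    )

    return {"decision": decision.dict(), "risk": risk.dict()}
```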
The SDK routes all requests through the control plane, which handles:
- Model selection and routing
- Rate limiting and cost tracking
- Credential issuance for audit trails
- Automatic retries on failures
Every Function Becomes a Microservice
Here's the paradigm shift: every reasoner and skill automatically becomes a REST API endpoint.
When you write:
```python
@app.reasoner()
async def analyze_support_ticket(ticket: dict) -> TicketAnalysis:
    ...
```
You can immediately call it from anywhere:
```javascript
// Call from React, Vue, Angular, etc.
const analysis = await fetch(
  '/api/v1/execute/support-system.analyze_support_ticket',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      input: {
        ticket: {
          id: 12345,
          message: "Platform keeps crashing!",
          tier: "enterprise"
        }
      }
    })
  }
);
const result = await analysis.json();
console.log(result.result.analysis);
```

```python
# Call from another Python service
import requests

response = requests.post(
    'http://af-server/api/v1/execute/support-system.analyze_support_ticket',
    json={
        'input': {
            'ticket': {
                'id': 12345,
                'message': 'Platform keeps crashing!',
                'tier': 'enterprise'
            }
        }
    }
)
result = response.json()
```

```swift
// Call from iOS, Android, React Native
let url = URL(string: "http://af-server/api/v1/execute/support-system.analyze_support_ticket")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

let body = [
    "input": [
        "ticket": [
            "id": 12345,
            "message": "Platform keeps crashing!",
            "tier": "enterprise"
        ]
    ]
]
request.httpBody = try? JSONSerialization.data(withJSONObject: body)
// Execute request...
```

Or from .NET, Go, Ruby, PHP, or anything else that speaks HTTP.
No SDK required for consumers. No custom integration code. Just standard REST APIs that work like any other backend service.
This is why Agentfield treats agents as microservices, not scripts.
Long-Running Tasks? Use Async Execution
The examples above use synchronous execution (/execute/), which blocks until the call completes (90-second timeout). For LLM reasoning, research, or any task that takes more than about 10 seconds, use async execution:
```bash
# Returns immediately with an execution ID
curl -X POST http://localhost:8080/api/v1/execute/async/support-system.analyze_support_ticket \
  -H "Content-Type: application/json" \
  -d '{
    "input": {...},
    "webhook": {
      "url": "https://your-app.com/webhooks",
      "secret": "your-secret"
    }
  }'
```
Get results via webhook callbacks or polling. See Make Your Agent Async for patterns and examples.
Observability with Agent Notes
Production systems need observability. Agentfield provides app.note() for emitting structured, markdown-formatted notes during execution.
```python
@app.reasoner()
async def evaluate_loan_application(application: dict) -> dict:
    """
    Evaluates loan application with full audit trail.
    Notes create a timeline of decisions visible in Agentfield UI.
    """
    # Step 1: Risk assessment
    risk = await app.ai(
        "Evaluate loan risk based on application data",
        f"Application: {application}",
        schema=RiskAssessment
    )

    app.note(f"""
    ## Risk Assessment Complete
    **Application ID:** {application['id']}
    **Risk Score:** {risk.score}/10
    **Decision:** {risk.decision}

    **Key Factors:**
    - Credit score: {application['credit_score']}
    - Income: ${application['income']}
    - Debt ratio: {application['debt_ratio']}%

    **AI Reasoning:** {risk.reasoning}
    """, tags=["risk-assessment", "decision"])

    # Step 2: Check for high-risk scenarios
    if risk.score > 7.5:
        app.note("⚠️ High risk detected - flagging for manual review", tags=["alert", "manual-review"])
        await flag_for_manual_review(application['id'])

        # Notify risk team
        await send_slack_alert(
            channel="#risk-team",
            message=f"High-risk loan application {application['id']} requires review"
        )
        app.note("✅ Risk team notified via Slack", tags=["notification"])

    # Step 3: Final decision
    if risk.decision == "approve":
        await approve_loan(application['id'], risk.approved_amount)
        app.note(f"✅ Loan approved for ${risk.approved_amount}", tags=["approval"])
    else:
        await reject_loan(application['id'], risk.reasoning)
        app.note(f"❌ Loan rejected: {risk.reasoning}", tags=["rejection"])

    return {
        'application_id': application['id'],
        'decision': risk.decision,
        'risk_score': risk.score,
        'reasoning': risk.reasoning
    }
```
Where notes appear:
- Workflow Timeline: Chronological view of all decisions
- Execution Details: Notes specific to each step
- Audit Exports: Included in verifiable credential exports
Best practices:
- Use markdown for clarity
- Emit notes at key decision points
- Tag consistently for filtering
- Include relevant context (IDs, scores, amounts)
- Use emoji for visual scanning (⚠️ warnings, ✅ success, 🔍 analysis)
Organizing with Routers
As your agent grows, organize reasoners and skills into logical groups using routers. This provides FastAPI-style ergonomics and affects how you call functions via HTTP.
```python
from agentfield.router import AgentRouter

# Create routers for different domains
users = AgentRouter(prefix="users")
billing = AgentRouter(prefix="billing")
support = AgentRouter(prefix="support")

# User operations
@users.reasoner()
async def analyze_user_behavior(user_id: str) -> BehaviorAnalysis:
    """Analyzes user behavior patterns."""
    ...

@users.skill()
def get_user_settings(user_id: str) -> dict:
    """Retrieves user settings from database."""
    ...

# Billing operations
@billing.reasoner()
async def recommend_plan(user_id: str) -> PlanRecommendation:
    """Recommends subscription plan based on usage."""
    ...

@billing.skill()
def calculate_invoice(user_id: str, period: str) -> float:
    """Calculates invoice amount for billing period."""
    ...

# Support operations
@support.reasoner()
async def triage_ticket(ticket: dict) -> TriageDecision:
    """Triages support ticket and routes appropriately."""
    ...

# Include all routers in your agent
app.include_router(users)
app.include_router(billing)
app.include_router(support)
```
How Routers Affect HTTP Endpoints
The router prefix becomes part of the API endpoint. This is crucial for understanding how to call your functions:
```bash
# Without a router, analyze_user_behavior becomes:
curl -X POST http://localhost:8080/api/v1/execute/user-agent.analyze_user_behavior

# With router prefix "users", analyze_user_behavior becomes:
curl -X POST http://localhost:8080/api/v1/execute/user-agent.users_analyze_user_behavior
```
Prefix translation examples:
Router prefixes are automatically converted to valid identifiers:
| Router Prefix | Function Name | HTTP Endpoint |
|---|---|---|
| `"users"` | `analyze_behavior` | `user-agent.users_analyze_behavior` |
| `"Billing"` | `calculate_cost` | `user-agent.billing_calculate_cost` |
| `"Support/Inbox"` | `route_ticket` | `user-agent.support_inbox_route_ticket` |
Calling Router Functions
```bash
# Call users router reasoner
curl -X POST http://localhost:8080/api/v1/execute/user-agent.users_analyze_user_behavior \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "user_id": "abc123"
    }
  }'

# Call billing router reasoner
curl -X POST http://localhost:8080/api/v1/execute/user-agent.billing_recommend_plan \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "user_id": "abc123"
    }
  }'

# Call support router reasoner
curl -X POST http://localhost:8080/api/v1/execute/user-agent.support_triage_ticket \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "ticket": {
        "id": 456,
        "message": "Need help with billing"
      }
    }
  }'
```

```python
# Call from another agent via the Agentfield control plane
result = await app.call(
    "user-agent.users_analyze_user_behavior",
    user_id="abc123"
)

# Call billing function
plan = await app.call(
    "user-agent.billing_recommend_plan",
    user_id="abc123"
)

# Call support function
triage = await app.call(
    "user-agent.support_triage_ticket",
    ticket={"id": 456, "message": "Need help"}
)
```

```python
# Within the same agent (direct import)
from skills.user_operations import get_user_settings

@app.reasoner()
async def process_user(user_id: str):
    settings = get_user_settings(user_id)  # Direct call
    ...
```

Key Insight: Routers organize your code AND namespace your API endpoints. The prefix you choose directly affects how external systems call your functions.
Learn more in the AgentRouter documentation.
What This Enables
For Developers
Write agents like FastAPI services. Familiar patterns, production infrastructure built-in. Focus on business logic, not plumbing.
For Production
Every function is a microservice. Automatic API endpoints, workflow tracking, cryptographic audit trails. Deploy with confidence.
For AI Systems
Guided autonomy: AI provides judgment, code provides control. Structured output makes AI predictable. Combine intelligence with reliability.
For Teams
Clear separation of concerns. Skills are testable and deterministic. Reasoners are auditable and observable. Debug with confidence.
Next Steps
You now understand the building blocks of autonomous software with Agentfield:
- Make Your Agent Async - Execute long-running tasks with webhooks or polling
- Cross-Agent Communication - Make agents coordinate across process boundaries
- Shared Memory - Share state automatically between agents
- Identity & Trust - Verify execution with cryptographic proof
Or jump straight to building with the Quick Start Guide.