# AI Tool Calling Example

Build an orchestrator that lets LLMs discover and invoke worker agent capabilities.
This example demonstrates the full tool-calling pipeline: a worker agent provides deterministic skills (weather lookup, math), and an orchestrator agent lets the LLM automatically discover and invoke them.
## Architecture

```text
User → Orchestrator Agent → LLM (decides tools) → Control Plane → Worker Agent
                          ← tool results        ←               ← execute skill
                          → LLM (final answer)
```

The orchestrator never hardcodes which tools to call. The LLM discovers what's available and decides what to invoke based on the user's question.
## Worker Agent

The worker registers deterministic skills that the orchestrator's LLM can discover and call.
"""
Worker Agent - Provides utility skills for the orchestrator to discover and call.
Start this agent first, then run the orchestrator.
"""
import os
from agentfield import Agent, AIConfig
app = Agent(
node_id="utility-worker",
agentfield_server=os.getenv("AGENTFIELD_URL", "http://localhost:8080"),
ai_config=AIConfig(
model=os.getenv("MODEL", "openrouter/openai/gpt-4o-mini"),
temperature=0.3,
),
)
@app.skill(tags=["weather"])
def get_weather(city: str) -> dict:
"""Get the current weather for a city. Returns temperature, conditions, and humidity."""
weather_data = {
"new york": {"temp_f": 72, "conditions": "Partly cloudy", "humidity": 65},
"london": {"temp_f": 58, "conditions": "Overcast", "humidity": 80},
"tokyo": {"temp_f": 81, "conditions": "Sunny", "humidity": 55},
"paris": {"temp_f": 64, "conditions": "Light rain", "humidity": 75},
"sydney": {"temp_f": 68, "conditions": "Clear", "humidity": 50},
}
key = city.lower().strip()
data = weather_data.get(key, {"temp_f": 70, "conditions": "Clear", "humidity": 60})
return {"city": city, **data}
@app.skill(tags=["math"])
def calculate(operation: str, a: float, b: float) -> dict:
"""Perform a basic math operation. Supports: add, subtract, multiply, divide."""
ops = {
"add": a + b,
"subtract": a - b,
"multiply": a * b,
"divide": a / b if b != 0 else float("inf"),
}
result = ops.get(operation.lower())
if result is None:
return {"error": f"Unknown operation: {operation}. Use: add, subtract, multiply, divide"}
return {"operation": operation, "a": a, "b": b, "result": result}
@app.reasoner(tags=["text"])
async def summarize(text: str) -> dict:
"""Use AI to create a concise summary of the given text."""
result = await app.ai(
system="You are a concise summarizer. Respond with only the summary, no preamble.",
user=f"Summarize this in 1-2 sentences:\n\n{text}",
)
return {"summary": str(result)}
if __name__ == "__main__":
app.run(port=8001)/**
* Worker Agent - Provides utility skills for the orchestrator to discover and call.
* Start this agent first, then run the orchestrator.
*/
import { Agent } from '@agentfield/sdk';
const app = new Agent({
nodeId: 'utility-worker-ts',
agentFieldUrl: process.env.AGENTFIELD_URL ?? 'http://localhost:8080',
port: 8003,
aiConfig: {
provider: 'openrouter',
model: process.env.MODEL ?? 'openai/gpt-4o-mini',
apiKey: process.env.OPENROUTER_API_KEY,
temperature: 0.3,
},
});
app.skill('get_weather', async (ctx) => {
const input = ctx.input as { city: string };
const weatherData: Record<string, { temp_f: number; conditions: string; humidity: number }> = {
'new york': { temp_f: 72, conditions: 'Partly cloudy', humidity: 65 },
'london': { temp_f: 58, conditions: 'Overcast', humidity: 80 },
'tokyo': { temp_f: 81, conditions: 'Sunny', humidity: 55 },
'paris': { temp_f: 64, conditions: 'Light rain', humidity: 75 },
'sydney': { temp_f: 68, conditions: 'Clear', humidity: 50 },
};
const key = input.city.toLowerCase().trim();
const data = weatherData[key] ?? { temp_f: 70, conditions: 'Clear', humidity: 60 };
return { city: input.city, ...data };
}, {
tags: ['weather'],
description: 'Get the current weather for a city. Returns temperature, conditions, and humidity.',
inputSchema: {
type: 'object',
properties: { city: { type: 'string', description: 'The city to get weather for' } },
required: ['city'],
},
});
app.skill('calculate', async (ctx) => {
const input = ctx.input as { operation: string; a: number; b: number };
const ops: Record<string, number> = {
add: input.a + input.b,
subtract: input.a - input.b,
multiply: input.a * input.b,
divide: input.b !== 0 ? input.a / input.b : Infinity,
};
const result = ops[input.operation.toLowerCase()];
if (result === undefined) {
return { error: `Unknown operation: ${input.operation}` };
}
return { operation: input.operation, a: input.a, b: input.b, result };
}, {
tags: ['math'],
description: 'Perform a basic math operation. Supports: add, subtract, multiply, divide.',
inputSchema: {
type: 'object',
properties: {
operation: { type: 'string', description: 'Math operation: add, subtract, multiply, divide' },
a: { type: 'number', description: 'First operand' },
b: { type: 'number', description: 'Second operand' },
},
required: ['operation', 'a', 'b'],
},
});
app.reasoner('summarize', async (ctx) => {
const text = ctx.input?.text ?? ctx.input;
const result = await ctx.ai(
`Summarize this in 1-2 sentences:\n\n${text}`,
{ system: 'You are a concise summarizer.' }
);
return { summary: result };
}, {
tags: ['text'],
description: 'Use AI to create a concise summary of the given text.',
});
app.serve();Orchestrator Agent
The orchestrator uses `tools="discover"` to let the LLM find and call worker capabilities automatically.
"""
Orchestrator Agent - Demonstrates four tool-calling patterns.
Requires: Control plane running + worker agent registered.
"""
import os
from agentfield import Agent, AIConfig, ToolCallConfig
app = Agent(
node_id="orchestrator",
agentfield_server=os.getenv("AGENTFIELD_URL", "http://localhost:8080"),
ai_config=AIConfig(
model=os.getenv("MODEL", "openrouter/openai/gpt-4o-mini"),
temperature=0.2,
),
)
# Pattern 1: Simple discover-all
@app.reasoner(tags=["demo"])
async def ask_with_tools(question: str) -> dict:
"""Auto-discover ALL tools and let the LLM use them."""
result = await app.ai(
system="You are a helpful assistant. Use tools to answer accurately.",
user=question,
tools="discover",
)
return {"answer": str(result)}
# Pattern 2: Filtered discovery by tags
@app.reasoner(tags=["demo"])
async def weather_report(cities: str) -> dict:
"""Discover only weather-tagged tools."""
result = await app.ai(
system="You are a weather reporter. Get weather for each city.",
user=f"What's the weather like in: {cities}?",
tools=ToolCallConfig(tags=["weather"]),
)
return {"report": str(result)}
# Pattern 3: Progressive/lazy discovery
@app.reasoner(tags=["demo"])
async def smart_query(question: str) -> dict:
"""Use lazy hydration for large capability catalogs."""
result = await app.ai(
system="You are a helpful assistant with access to tools.",
user=question,
tools=ToolCallConfig(
schema_hydration="lazy",
max_candidate_tools=30,
max_hydrated_tools=8,
),
)
return {"answer": str(result)}
# Pattern 4: Guardrailed execution
@app.reasoner(tags=["demo"])
async def guarded_query(question: str) -> dict:
"""Strict limits on tool usage."""
result = await app.ai(
system="You are a helpful assistant. Be efficient with tool usage.",
user=question,
tools="discover",
max_turns=3,
max_tool_calls=5,
)
return {"answer": str(result)}
if __name__ == "__main__":
app.run(port=8002)/**
* Orchestrator Agent - Demonstrates four tool-calling patterns.
* Requires: Control plane running + worker agent registered.
*/
import { Agent } from '@agentfield/sdk';
import type { ToolCallConfig } from '@agentfield/sdk';
const app = new Agent({
nodeId: 'orchestrator-ts',
agentFieldUrl: process.env.AGENTFIELD_URL ?? 'http://localhost:8080',
port: 8004,
aiConfig: {
provider: 'openrouter',
model: process.env.MODEL ?? 'openai/gpt-4o-mini',
apiKey: process.env.OPENROUTER_API_KEY,
temperature: 0.2,
},
});
// Pattern 1: Simple discover-all
app.reasoner('ask_with_tools', async (ctx) => {
const { text, trace } = await ctx.aiWithTools(ctx.input.question, {
tools: 'discover',
system: 'You are a helpful assistant. Use tools to answer accurately.',
});
console.log(`Tool calls: ${trace.totalToolCalls}, Turns: ${trace.totalTurns}`);
return { answer: text, trace };
}, { tags: ['demo'] });
// Pattern 2: Filtered discovery by tags
app.reasoner('weather_report', async (ctx) => {
const { text } = await ctx.aiWithTools(
`What's the weather in: ${ctx.input.cities}?`,
{
tools: { tags: ['weather'] } satisfies ToolCallConfig,
system: 'You are a weather reporter.',
}
);
return { report: text };
}, { tags: ['demo'] });
// Pattern 3: Progressive/lazy discovery
app.reasoner('smart_query', async (ctx) => {
const { text } = await ctx.aiWithTools(ctx.input.question, {
tools: {
schemaHydration: 'lazy',
maxCandidateTools: 30,
maxHydratedTools: 8,
} satisfies ToolCallConfig,
});
return { answer: text };
}, { tags: ['demo'] });
// Pattern 4: Guardrailed execution
app.reasoner('guarded_query', async (ctx) => {
const { text } = await ctx.aiWithTools(ctx.input.question, {
tools: 'discover',
maxTurns: 3,
maxToolCalls: 5,
});
return { answer: text };
}, { tags: ['demo'] });
app.run();Running the Example
### Start the control plane

```bash
af server
```

### Start the worker agent

Python:

```bash
cd examples/python_agent_nodes/tool_calling
pip install agentfield
python worker.py
```

TypeScript:

```bash
cd examples/ts_agent_nodes/tool_calling
npm install
npx tsx worker.ts
```

### Start the orchestrator

Python:

```bash
python orchestrator.py
```

TypeScript:

```bash
npx tsx orchestrator.ts
```

### Test it
```bash
# Simple tool discovery
curl -X POST http://localhost:8080/api/v1/execute/orchestrator.ask_with_tools \
  -H "Content-Type: application/json" \
  -d '{"input": {"question": "What is the weather in Tokyo and what is 42 * 17?"}}'

# Filtered by tags
curl -X POST http://localhost:8080/api/v1/execute/orchestrator.weather_report \
  -H "Content-Type: application/json" \
  -d '{"input": {"cities": "London, Paris, Sydney"}}'

# With guardrails
curl -X POST http://localhost:8080/api/v1/execute/orchestrator.guarded_query \
  -H "Content-Type: application/json" \
  -d '{"input": {"question": "What is 100 + 200?"}}'
```

## What Happens Under the Hood
When you call `orchestrator.ask_with_tools` with "What is the weather in Tokyo and what is 42 * 17?":

1. **Discovery**: The orchestrator queries the control plane and finds `utility-worker.get_weather` and `utility-worker.calculate`.
2. **Schema conversion**: These capabilities are converted to OpenAI function-calling format:

   ```json
   [
     {"type": "function", "function": {"name": "utility-worker__get_weather",
       "description": "Get the current weather for a city.",
       "parameters": {"type": "object", "properties": {"city": {"type": "string"}},
         "required": ["city"]}}},
     {"type": "function", "function": {"name": "utility-worker__calculate",
       "description": "Perform a basic math operation.",
       "parameters": {"type": "object", "properties": {"operation": {"type": "string"},
         "a": {"type": "number"}, "b": {"type": "number"}},
         "required": ["operation", "a", "b"]}}}
   ]
   ```

3. **LLM decides**: The LLM sees both tools and decides to call both: `get_weather(city="Tokyo")` and `calculate(operation="multiply", a=42, b=17)`.
4. **Dispatch**: Each call is routed through the control plane to the worker agent via `app.call()`.
5. **Results**: The worker returns `{"city": "Tokyo", "temp_f": 81, "conditions": "Sunny"}` and `{"result": 714}`.
6. **Final answer**: The LLM sees both results and produces: "The weather in Tokyo is 81°F and Sunny. 42 × 17 = 714."
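The six steps above can be condensed into a plain-Python sketch. Everything here is illustrative: `ControlPlane`, `FakeLLM`, and the `__`-for-`.` name mangling are stand-ins chosen for the sketch, not AgentField's actual internals.

```python
# Illustrative sketch of the discover -> decide -> dispatch -> answer loop.
# ControlPlane and FakeLLM are stand-ins, not the real AgentField classes.

class ControlPlane:
    def __init__(self, skills):
        self.skills = skills  # {"utility-worker.get_weather": callable, ...}

    def discover(self):
        # Expose each capability under an LLM-safe name ("." -> "__").
        return [name.replace(".", "__") for name in self.skills]

    def execute(self, tool_name, kwargs):
        # Map the mangled name back to the registered capability and run it.
        return self.skills[tool_name.replace("__", ".")](**kwargs)


class FakeLLM:
    """Deterministic stand-in: asks for both tools, then answers."""

    def chat(self, messages, tools):
        tool_results = [m for m in messages if m["role"] == "tool"]
        if not tool_results:
            return {"tool_calls": [
                {"name": "utility-worker__get_weather", "args": {"city": "Tokyo"}},
                {"name": "utility-worker__calculate",
                 "args": {"operation": "multiply", "a": 42, "b": 17}},
            ]}
        return {"content": f"Done with {len(tool_results)} tool results."}


def run_with_tools(llm, plane, question, max_turns=3):
    messages = [{"role": "user", "content": question}]
    tools = plane.discover()                        # step 1: discovery
    for _ in range(max_turns):
        reply = llm.chat(messages, tools=tools)     # steps 2-3: LLM decides
        if "tool_calls" not in reply:
            return reply["content"]                 # step 6: final answer
        for call in reply["tool_calls"]:            # step 4: dispatch
            result = plane.execute(call["name"], call["args"])
            messages.append({"role": "tool", "content": str(result)})  # step 5
    return "Turn limit reached."


plane = ControlPlane({
    "utility-worker.get_weather": lambda city: {"city": city, "temp_f": 81},
    "utility-worker.calculate": lambda operation, a, b: {"result": a * b},
})
answer = run_with_tools(FakeLLM(), plane, "Weather in Tokyo and 42 * 17?")
```

The loop terminates either when the LLM replies without tool calls or when the turn budget runs out, which is exactly what `max_turns` guards against in Pattern 4.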
## Key Concepts Demonstrated

| Pattern | Description | When to Use |
|---|---|---|
| `tools="discover"` | Auto-discover all capabilities | Simple setups, few agents |
| `ToolCallConfig(tags=[...])` | Filter by tags | Large systems, domain separation |
| `schema_hydration="lazy"` | Progressive schema loading | Many capabilities (50+) |
| `max_turns` / `max_tool_calls` | Execution guardrails | Cost-sensitive, production workloads |
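To make the tag-filtering and lazy-hydration rows concrete, here is a minimal sketch of how a capability catalog might be narrowed before any schema is sent to the LLM. The selection logic and field names are assumptions for illustration, not the SDK's actual implementation:

```python
# Illustrative selection pass over a capability catalog (not the real SDK logic).
def select_tools(catalog, tags=None, max_candidate_tools=30, max_hydrated_tools=8):
    # Tag filter: keep capabilities sharing at least one requested tag.
    candidates = [c for c in catalog if tags is None or set(tags) & set(c["tags"])]
    candidates = candidates[:max_candidate_tools]   # candidate budget
    # Lazy hydration: only the first few tools get full JSON schemas up front;
    # the rest stay name-and-description-only until the LLM selects one.
    hydrated = candidates[:max_hydrated_tools]
    deferred = candidates[max_hydrated_tools:]
    return hydrated, deferred

catalog = [
    {"name": "utility-worker.get_weather", "tags": ["weather"]},
    {"name": "utility-worker.calculate", "tags": ["math"]},
    {"name": "utility-worker.summarize", "tags": ["text"]},
]
hydrated, deferred = select_tools(catalog, tags=["weather"])
# hydrated -> only the weather-tagged capability; deferred -> empty here
```

The point of the two budgets is cost: tag filtering trims what the LLM can see at all, while lazy hydration trims how much schema text is paid for up front.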
## Related

- AI Tool Calling Concept — how the discover → ai → call pipeline works
- `app.ai()` Reference — full Python API reference
- `ctx.ai()` Reference — full TypeScript API reference
- Cross-Agent Communication — how `app.call()` routes through the control plane