# Helm Deployment

Values-driven Kubernetes install with full customization
Helm provides the most flexible way to deploy Agentfield on Kubernetes. Use `values.yaml` to customize storage, authentication, scaling, and demo agents.
For plain Kubernetes manifests without Helm, see the Kustomize guide.
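All of the `--set` flags shown below can instead live in a values file. A minimal sketch, assuming the keys mirror the flags used on this page (the file name `my-values.yaml` is just an example):

```bash
# Write a values file mirroring the --set flags used throughout this guide
cat > my-values.yaml <<'EOF'
postgres:
  enabled: true
controlPlane:
  storage:
    mode: postgres
  replicas: 3
apiAuth:
  enabled: true
  apiKey: your-secret-key
demoPythonAgent:
  enabled: true
EOF

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  -f my-values.yaml
```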
## Quick Start
### Install with PostgreSQL and Demo Agent

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set postgres.enabled=true \
--set controlPlane.storage.mode=postgres \
--set demoPythonAgent.enabled=true
```

### Port-Forward the UI/API

```bash
kubectl -n agentfield port-forward svc/agentfield-control-plane 8080:8080
```
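With the port-forward running, a quick smoke test in a second terminal (this hits the same `/api/v1/nodes` endpoint used in the authentication example below; with auth disabled it should answer without a key):

```bash
curl http://localhost:8080/api/v1/nodes
```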
### Wait for Demo Agent

The Python agent installs dependencies on first boot:

```bash
kubectl -n agentfield wait --for=condition=Ready pod -l app.kubernetes.io/component=demo-python-agent --timeout=600s
```

### Execute an Agent

```bash
curl -X POST http://localhost:8080/api/v1/execute/demo-python-agent.hello \
-H "Content-Type: application/json" \
-d '{"input":{"name":"World"}}'Storage Options
### Local Storage (SQLite/BoltDB)

Best for development and single-node deployments:

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set controlPlane.storage.mode=local
```

### PostgreSQL (Recommended for Production)
Deploy with the bundled PostgreSQL StatefulSet:

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set postgres.enabled=true \
--set controlPlane.storage.mode=postgres
```

Or use an external managed database (AWS RDS, Cloud SQL, etc.):

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set controlPlane.storage.mode=postgres \
--set controlPlane.env.AGENTFIELD_POSTGRES_HOST=your-db.rds.amazonaws.com \
--set controlPlane.env.AGENTFIELD_POSTGRES_USER=agentfield \
--set controlPlane.env.AGENTFIELD_POSTGRES_PASSWORD=your-password \
--set controlPlane.env.AGENTFIELD_POSTGRES_DB=agentfield
```
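Passing the password with `--set` leaves it in your shell history. One way around that, as a plain-shell sketch (the variable name is arbitrary):

```bash
# Prompt for the password without echoing it
read -rs -p "Postgres password: " AGENTFIELD_DB_PASSWORD && echo

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set controlPlane.storage.mode=postgres \
  --set controlPlane.env.AGENTFIELD_POSTGRES_HOST=your-db.rds.amazonaws.com \
  --set controlPlane.env.AGENTFIELD_POSTGRES_USER=agentfield \
  --set controlPlane.env.AGENTFIELD_POSTGRES_PASSWORD="$AGENTFIELD_DB_PASSWORD" \
  --set controlPlane.env.AGENTFIELD_POSTGRES_DB=agentfield
```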
## Demo Agents

### Python Demo Agent (No Build Required)

Installs the SDK from PyPI at startup:

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set demoPythonAgent.enabled=true
```

### Go Demo Agent (Requires Custom Image)
Build and load the image first (see `Dockerfile.demo-go-agent`). For minikube:

```bash
docker build -t agentfield-demo-go-agent:local -f deployments/docker/Dockerfile.demo-go-agent .
minikube image load agentfield-demo-go-agent:local
```

For kind:

```bash
docker build -t agentfield-demo-go-agent:local -f deployments/docker/Dockerfile.demo-go-agent .
kind load docker-image agentfield-demo-go-agent:local --name <cluster-name>
```

Then enable in Helm:

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set demoAgent.enabled=true
```
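Before troubleshooting the pod, it can help to confirm the image actually made it into the cluster. On minikube, for example:

```bash
# The tag must match what was built and loaded above
minikube image ls | grep agentfield-demo-go-agent
kubectl -n agentfield get pods
```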
## API Authentication

Enable API key authentication for the control plane:

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set apiAuth.enabled=true \
--set apiAuth.apiKey='your-secret-key'
```

When enabled, API calls require the key header (UI remains accessible):

```bash
curl -H "X-API-Key: your-secret-key" http://localhost:8080/api/v1/nodesCommon Configurations
## Common Configurations

### Full Production Setup

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set postgres.enabled=true \
--set controlPlane.storage.mode=postgres \
--set controlPlane.replicas=3 \
--set apiAuth.enabled=true \
--set apiAuth.apiKey='production-secret-key'
```
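Rather than inventing a production key by hand, one option is to generate it, as a sketch using `openssl`:

```bash
# Generate a random 32-byte hex key and pass it through a shell variable
API_KEY="$(openssl rand -hex 32)"

helm upgrade --install agentfield deployments/helm/agentfield \
  -n agentfield --create-namespace \
  --set postgres.enabled=true \
  --set controlPlane.storage.mode=postgres \
  --set apiAuth.enabled=true \
  --set apiAuth.apiKey="$API_KEY"

# Record the key somewhere safe; API clients will need it
echo "$API_KEY"
```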
### Development with Local Storage

```bash
helm upgrade --install agentfield deployments/helm/agentfield \
-n agentfield --create-namespace \
--set controlPlane.storage.mode=local \
--set demoPythonAgent.enabled=true
```

## Useful Commands

```bash
# Check release status
helm status agentfield -n agentfield
# View computed values
helm get values agentfield -n agentfield
# Upgrade with new values
helm upgrade agentfield deployments/helm/agentfield \
-n agentfield \
--reuse-values \
--set controlPlane.replicas=5
# Uninstall
helm uninstall agentfield -n agentfield
```
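Two standard Helm commands that pair well with the above (not specific to this chart):

```bash
# List release revisions
helm history agentfield -n agentfield

# Roll back to an earlier revision, e.g. revision 1
helm rollback agentfield 1 -n agentfield
```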
## Notes

- The chart sets `AGENTFIELD_CONFIG_FILE=/dev/null`, so the control plane uses built-in defaults plus environment variables.
- Admin gRPC listens on port 8180 (HTTP port + 100) and is exposed via the Service port named `grpc` (see the port-forward sketch below).
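To reach the admin gRPC endpoint locally, a port-forward sketch against the same Service used earlier (assuming the default HTTP port of 8080, so gRPC lands on 8180):

```bash
kubectl -n agentfield port-forward svc/agentfield-control-plane 8180:8180
```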
Source files:

- `Chart.yaml` — Chart metadata and version
- `values.yaml` — All configurable options
- `templates/` — Kubernetes resource templates
## Related Documentation
- Kubernetes (Kustomize) — Plain manifests without Helm
- Docker Deployment — Local development with Docker Compose
- Monitoring — Prometheus metrics and observability