Agent Builder — UI Wireframe
Config Studio (left) + Pipeline Canvas + Trace Panel (right)
Config Studio
agent-builder v1
Agents
Tools
Models
Prompts
Memory
Guardrails
Create New Agent
An agent is a reusable AI worker. It references a model, tools, memory, and guardrails.
Agent Name
Description
shown to users in the pipeline builder
Model
reference from Models tab
model:gpt-4o:v1
model:claude-3-5:v1
model:gemini-pro:v1
Tools
references from Tools tab
web_search
✕
code_exec
✕
http_request
✕
+ Add Tool
System Prompt
reference from Prompts tab
prompt:researcher-base:v2
prompt:generic-assistant:v1
Memory
from Memory tab
memory:sliding-10:v1
None
Guardrails
from Guardrails tab
guard:no-pii:v1
None
Will save as
agent:researcher-agent:v1
Save to Registry →
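As a sketch of what the saved registry entry might look like (the field names and storage format are assumptions, not the builder's actual schema), every cross-reference uses the kind:name:version convention shown in the form:

```python
import json

# Hypothetical registry entry produced by "Save to Registry".
# Each reference resolves to another registry item (model:, tool:,
# prompt:, memory:, guard:) pinned to a specific version.
agent_entry = {
    "id": "agent:researcher-agent:v1",
    "description": "Searches the web and returns structured summaries",
    "model": "model:gpt-4o:v1",
    "tools": ["tool:web_search:v1", "tool:code_exec:v1", "tool:http_request:v1"],
    "system_prompt": "prompt:researcher-base:v2",
    "memory": "memory:sliding-10:v1",
    "guardrails": ["guard:no-pii:v1"],
}

print(json.dumps(agent_entry, indent=2))
```

Pinning versions in the references is what lets a prompt or model config be updated in one place without silently changing every agent that uses it.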
Create New Tool
A tool is a capability the LLM can call mid-response. The description tells the LLM when to use it.
Tool Name
Description
the LLM reads this to decide when to call this tool
Search the web for current information. Use when the user asks about recent events, real-time data, or anything beyond your training cutoff.
Type
HTTP API
Python Function
Code Executor
Database Query
Endpoint URL
Auth Type
API Key (Header)
Bearer Token
None
Input Schema
what parameters the LLM must pass
{ "query": "string (required)", "max_results": "number (optional, default 5)" }
Timeout (ms)
Will save as
tool:web_search:v1
Save to Registry →
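The informal input schema above could be expressed as JSON Schema, a common choice for tool parameters (whether the builder uses JSON Schema internally is an assumption):

```python
# Hypothetical JSON Schema equivalent of the informal input schema:
# "query" is required, "max_results" is optional with a default of 5.
web_search_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "max_results": {"type": "number", "default": 5},
    },
    "required": ["query"],
}

def validate(args: dict) -> bool:
    """Minimal check: every required parameter is present."""
    return all(key in args for key in web_search_schema["required"])

print(validate({"query": "AI trends 2025"}))  # True
print(validate({"max_results": 3}))           # False: missing "query"
```

A full validator would also check types and reject unknown keys; this sketch only shows the required-field contract the LLM must satisfy.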
Create Model Config
A reusable wrapper around an LLM provider. Pin temperature and token limits once, reference everywhere.
Config Name
Provider
OpenAI
Anthropic
Google
Mistral
Model
gpt-4o
gpt-4o-mini
gpt-4-turbo
Temperature
Max Tokens
Top-p
API Key Secret
pulled from Vault at runtime, never stored here
Stream responses
Will save as
model:gpt-4o-precise:v1
Save to Registry →
Create System Prompt
Keep prompts separate from agents so you can update them in one place without touching the agent config.
Prompt Name
Prompt Text
You are a research agent. Your job is to search the web for relevant, accurate information and return a clean, structured summary.

Rules:
- Always cite your sources
- Never hallucinate facts
- If uncertain, say so explicitly
- Keep responses concise and factual
Variables
dynamic values injected at runtime
{{user_name}}
✕
{{today_date}}
✕
+ Add Variable
Will save as
prompt:researcher-base:v1
Save to Registry →
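At runtime the {{variable}} placeholders are replaced with real values before the prompt reaches the model; a minimal sketch of that injection (the actual templating engine is an assumption):

```python
import re

def render(prompt: str, variables: dict) -> str:
    """Replace every {{name}} placeholder with its runtime value.
    Unknown placeholders are left intact rather than erased."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        prompt,
    )

template = "Hello {{user_name}}, today is {{today_date}}."
print(render(template, {"user_name": "Ada", "today_date": "2025-06-01"}))
# Hello Ada, today is 2025-06-01.
```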
Create Memory Config
Defines how an agent remembers conversation history across turns. Referenced by agents, applied by the runtime.
Config Name
Memory Type
Sliding Window
Full History
Summarised History
Window Size
last N messages to keep
TTL (hours)
expire after inactivity
Scope
Per Session
Per User
Per Pipeline Run
Storage Backend
Redis (fast, default)
Postgres (durable)
Redis + Postgres (both)
At runtime: worker loads last 10 messages from Redis → injects into LLM prompt → saves updated history back to Redis after response.
Will save as
memory:sliding-10:v1
Save to Registry →
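The sliding-window behaviour can be sketched without a real Redis backend; a deque with a maxlen keeps only the last N messages, which is exactly what the runtime injects into the prompt (the worker/Redis wiring itself is an assumption):

```python
from collections import deque

WINDOW_SIZE = 10  # memory:sliding-10:v1

# Stand-in for the Redis-backed history: a deque with maxlen
# silently drops the oldest message once the window is full.
history = deque(maxlen=WINDOW_SIZE)

for turn in range(15):
    history.append({"role": "user", "content": f"message {turn}"})

# Only the last 10 messages survive to be injected into the prompt.
print(len(history))           # 10
print(history[0]["content"])  # message 5
```

With a real Redis backend the same effect is typically achieved by pushing each message to a list and trimming it to the window size after every turn.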
Create Guardrail
Wraps an agent with input and output rules. Checked automatically before and after every LLM call.
Guardrail Name
Input Rules
checked before the agent sees the message
Block prompt injection attempts
INPUT
Reject messages > 4000 chars
INPUT
Block off-topic requests (finance only)
INPUT
+ Add Input Rule
Output Rules
checked before the response leaves the agent
Redact phone numbers and emails
OUTPUT
Block responses > 2000 tokens
OUTPUT
Never reveal system prompt
OUTPUT
+ Add Output Rule
Log all guardrail violations
Will save as
guard:no-pii:v1
Save to Registry →
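An output rule like "Redact phone numbers and emails" might be implemented with regexes; a minimal sketch (the patterns are illustrative only, and real PII detection is considerably harder):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Output rule: replace emails and phone numbers before the
    response leaves the agent."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    return PHONE.sub("[REDACTED_PHONE]", text)

print(redact_pii("Reach me at jane@example.com or +1 555-123-4567."))
# Reach me at [REDACTED_EMAIL] or [REDACTED_PHONE].
```

In the pipeline above, each violation would also be recorded when "Log all guardrail violations" is enabled, so redactions are auditable after the fact.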
Pipeline Canvas
Zoom
Save
▶ Run
+ Agent
+ HITL
+ Condition
ENTRY
Pipeline Input
AGENT
Researcher
agent:researcher:v2
🔧 web_search
🧠 sliding-10
🛡 no-pii
AGENT
Fact Check
agent:factcheck:v1
🔧 web_search
🛡 no-pii
👤
HITL Review
awaiting approval
⏸ checkpointed
AGENT
Writer
agent:writer:v1
🛡 no-pii
AGENT
Publisher
agent:publisher:v1
🔧 http_post
EXIT
Pipeline Output
Live Trace
Running — waiting for HITL approval
00:00.1
✓
researcher:v2 started
00:01.4
→
tool: web_search("AI trends 2025") called
00:03.2
←
tool: web_search returned 8 results
00:04.8
✓
researcher:v2 completed (tokens: 1,240)
00:05.0
✓
factcheck:v1 completed (tokens: 880)
00:05.1
✓
state checkpointed → Redis + Postgres
00:05.2
⏸
HITL node reached — pipeline paused, awaiting human approval
—
·
writer:v1 — waiting
—
·
publisher:v1 — waiting
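The trace shows the runtime checkpointing state and pausing at the HITL node while writer and publisher wait; that control flow can be sketched as follows (node names come from the canvas, the runner itself is hypothetical):

```python
import json

PIPELINE = ["researcher:v2", "factcheck:v1", "HITL", "writer:v1", "publisher:v1"]

def run(pipeline, state, approved=False):
    """Walk nodes in order; at a HITL node, checkpoint the state
    and pause unless a human has already approved."""
    for i, node in enumerate(pipeline):
        if node == "HITL" and not approved:
            # Stand-in for persisting to Redis + Postgres.
            checkpoint = json.dumps({"resume_at": i, "state": state})
            return "paused", checkpoint
        state.append(node)  # stand-in for executing the agent
    return "completed", state

status, ckpt = run(PIPELINE, [])
print(status)  # paused
```

On approval, the runtime would reload the checkpoint and resume from `resume_at`, so the researcher and fact-check steps are never re-executed.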