synth agent
The agent command runs and manages Research Agent jobs for automated prompt optimization. Research Agents spawn sandboxed environments that analyze your code and apply MIPRO optimization to improve prompt performance.
Commands
agent run
Start a new Research Agent job.
| Option | Description |
|---|---|
| --config, -c <path> | Path to TOML configuration file (recommended) |
| --repo, -r <url> | Repository URL (alternative to config file) |
| --branch, -b <branch> | Repository branch (default: main) |
| --task, -t <text> | Task description for the agent |
| --dataset, -d <id> | HuggingFace dataset ID (e.g., PolyAI/banking77) |
| --tool <tool> | Optimization tool: mipro (default: mipro) |
| --model, -m <model> | Agent model (default: gpt-5.1-codex-mini) |
| --reasoning-effort <level> | Reasoning effort: low, medium, high (default: medium) |
| --iterations, -n <n> | Number of optimization iterations (default: 10) |
| --max-agent-spend <usd> | Max agent LLM spend in USD (default: 25.0) |
| --max-synth-spend <usd> | Max optimization spend in USD (default: 150.0) |
| --poll/--no-poll | Wait for completion and stream events (default: --poll) |
| --timeout <seconds> | Timeout when polling (default: 3600) |
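For example, a config-driven run that waits for completion might look like the following sketch (the synth entry point is taken from this page's title and the config path is illustrative; the flags are those documented above):

```bash
# Launch a Research Agent job from a TOML config and stream events until it finishes.
export SYNTH_API_KEY="your-api-key"   # required; see Environment Variables below

synth agent run \
  --config configs/research_agent.toml \
  --poll \
  --timeout 3600
```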
agent status
Check the status of an existing job.
agent list
List recent Research Agent jobs.
| Option | Description |
|---|---|
| --limit <n> | Number of jobs to show (default: 10) |
| --status <status> | Filter by status: queued, running, succeeded, failed |
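For instance, to show only the five most recent jobs that are still running (entry point assumed as above):

```bash
# Show the five most recent jobs, filtered to those still running.
synth agent list --limit 5 --status running
```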
agent events
Stream events from a Research Agent job.
| Option | Description |
|---|---|
| --since <n> | Show events after this sequence number (default: 0) |
| --follow, -f | Follow events in real-time |
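A sketch of following a job's event stream; JOB_ID is a placeholder for the job identifier returned by agent run, and its positional placement here is an assumption:

```bash
# Follow events in real time, starting after sequence number 100.
# JOB_ID is a placeholder; how the job is referenced is an assumption.
synth agent events JOB_ID --since 100 --follow
```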
agent results
Get results from a completed Research Agent job.
| Option | Description |
|---|---|
| --output, -o <path> | Write results to file (JSON) |
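A sketch of saving a completed job's results to a file; as above, JOB_ID and its positional placement are assumptions:

```bash
# Write the job's results to a JSON file.
synth agent results JOB_ID --output results.json
```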
agent cancel
Cancel a running job.
Configuration File
Research Agent jobs are configured via TOML files.
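The sketch below assembles a representative configuration from the fields documented in the Configuration Reference that follows; the repository URL, task description, dataset, and trial counts are illustrative values, not requirements:

```toml
[research_agent]
repo_url = "https://github.com/your-org/your-repo"   # illustrative
repo_branch = "main"
model = "gpt-5.1-codex-mini"
reasoning_effort = "medium"
max_agent_spend_usd = 25.0
max_synth_spend_usd = 150.0

[research_agent.research]
task_description = "Improve the intent-classification prompt used by the support bot."
tools = ["mipro"]
primary_metric = "accuracy"
num_iterations = 10

[[research_agent.research.datasets]]
source_type = "huggingface"
hf_repo_id = "PolyAI/banking77"
hf_split = "train"

[research_agent.research.mipro_config]
meta_model = "gpt-5.1-codex-mini"   # illustrative choice of proposal model
meta_provider = "openai"
num_trials = 20
proposer_effort = "MEDIUM"
```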
Configuration Reference
[research_agent]
| Field | Type | Description |
|---|---|---|
| repo_url | string | GitHub repository URL |
| repo_branch | string | Branch to use (default: "main") |
| model | string | Agent model (e.g., "gpt-5.1-codex-mini") |
| reasoning_effort | string | low, medium, or high |
| max_agent_spend_usd | float | Max agent inference spend |
| max_synth_spend_usd | float | Max optimization spend |
[research_agent.research]
| Field | Type | Description |
|---|---|---|
| task_description | string | Detailed optimization instructions |
| tools | array | Optimization tools: ["mipro"] |
| primary_metric | string | Metric to optimize (default: "accuracy") |
| num_iterations | int | Number of optimization iterations |
[[research_agent.research.datasets]]
| Field | Type | Description |
|---|---|---|
| source_type | string | "huggingface", "upload", or "inline" |
| hf_repo_id | string | HuggingFace dataset ID |
| hf_split | string | Dataset split (default: "train") |
[research_agent.research.mipro_config]
| Field | Type | Description |
|---|---|---|
| meta_model | string | Model for generating proposals |
| meta_provider | string | Provider: "groq", "openai", or "google" |
| num_trials | int | Number of optimization trials |
| proposer_effort | string | LOW_CONTEXT, LOW, MEDIUM, or HIGH |
Environment Variables
| Variable | Description |
|---|---|
| SYNTH_API_KEY | Your Synth API key (required) |
Examples
Banking77 Intent Classification
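A sketch of a Banking77 run using only the flags documented under agent run; the repository URL is illustrative and the synth entry point is assumed from this page's title:

```bash
synth agent run \
  --repo https://github.com/your-org/banking-intent-bot \
  --branch main \
  --dataset PolyAI/banking77 \
  --task "Improve the intent-classification prompt for Banking77" \
  --tool mipro \
  --iterations 10 \
  --poll
```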
Quick Iteration (Iris Dataset)
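A lower-budget sketch for quick feedback: fewer iterations, low reasoning effort, and reduced spend caps. The scikit-learn/iris dataset ID and the repository URL are assumptions:

```bash
synth agent run \
  --repo https://github.com/your-org/iris-classifier \
  --dataset scikit-learn/iris \
  --task "Classify iris species from sepal and petal measurements" \
  --iterations 3 \
  --reasoning-effort low \
  --max-agent-spend 5 \
  --max-synth-spend 20
```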
CI/CD Integration
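In CI, a blocking run with a shorter timeout lets the pipeline fail when optimization does not succeed in time; the config path and timeout are illustrative, and SYNTH_API_KEY is expected to come from the CI secret store:

```bash
#!/usr/bin/env bash
set -euo pipefail

# SYNTH_API_KEY should be injected as a CI secret.
: "${SYNTH_API_KEY:?SYNTH_API_KEY must be set}"

# Block until the job finishes; exit non-zero after 30 minutes.
synth agent run --config configs/research_agent.toml --poll --timeout 1800
```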
Output
When using --poll, the CLI streams job events and shows real-time progress until the job completes or the polling timeout is reached.
See Also
- Research Agent SDK - Python SDK for programmatic access
- Research Agent Dashboard - Web interface guide
- MIPRO Quickstart - Understanding MIPRO optimization