1) Prepare an ADASTaskSet

Minimal dataset:
{
  "metadata": { "name": "support-intents" },
  "initial_prompt": "You classify customer requests into intents.",
  "tasks": [
    { "id": "t1", "input": { "query": "Cancel my plan" } },
    { "id": "t2", "input": { "query": "Update my card" } }
  ],
  "gold_outputs": [
    { "task_id": "t1", "output": { "intent": "cancellation" } },
    { "task_id": "t2", "output": { "intent": "billing" } }
  ],
  "judge_config": { "mode": "rubric" }
}
tasks[].input and gold_outputs[].output can be any JSON shape. If your workflow speaks JSON, it fits here.
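If you assemble the dataset programmatically, the same structure maps onto a plain dict. A minimal Python sketch, mirroring the JSON above (the example pairs and pairing logic are illustrative):

# Sketch: build an ADASTaskSet from (input, gold_output) pairs.
# Field names mirror the JSON example above; the pairing logic is illustrative.
examples = [
    ({"query": "Cancel my plan"}, {"intent": "cancellation"}),
    ({"query": "Update my card"}, {"intent": "billing"}),
]

dataset = {
    "metadata": {"name": "support-intents"},
    "initial_prompt": "You classify customer requests into intents.",
    "tasks": [],
    "gold_outputs": [],
    "judge_config": {"mode": "rubric"},
}

for i, (task_input, gold) in enumerate(examples, start=1):
    task_id = f"t{i}"
    dataset["tasks"].append({"id": task_id, "input": task_input})
    dataset["gold_outputs"].append({"task_id": task_id, "output": gold})

# Every gold output should reference an existing task id.
task_ids = {t["id"] for t in dataset["tasks"]}
assert all(g["task_id"] in task_ids for g in dataset["gold_outputs"])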

2) Create a training job

curl -X POST $HOST/api/adas/jobs \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "dataset": { ...ADASTaskSet... },
    "policy_model": "gpt-4o-mini",
    "rollout_budget": 200,
    "proposer_effort": "medium",
    "auto_start": true
  }'
The response includes an adas_job_id; use it in the calls below.
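The same request as a Python sketch, assuming the requests library and the same HOST and SYNTH_API_KEY environment variables used in the curl examples:

# Sketch: create an ADAS training job (mirrors the curl call above).
import os
import requests

HOST = os.environ["HOST"]
HEADERS = {"Authorization": f"Bearer {os.environ['SYNTH_API_KEY']}"}

resp = requests.post(
    f"{HOST}/api/adas/jobs",
    headers=HEADERS,
    json={
        "dataset": dataset,  # the ADASTaskSet from step 1
        "policy_model": "gpt-4o-mini",
        "rollout_budget": 200,
        "proposer_effort": "medium",
        "auto_start": True,
    },
)
resp.raise_for_status()
job_id = resp.json()["adas_job_id"]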

3) Monitor training

Poll:
curl -H "Authorization: Bearer $SYNTH_API_KEY" \
  $HOST/api/adas/jobs/adas_XXXX
Stream events:
curl -N -H "Authorization: Bearer $SYNTH_API_KEY" \
  $HOST/api/adas/jobs/adas_XXXX/events/stream
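A minimal polling loop in Python, reusing HEADERS and job_id from step 2. It assumes the poll response exposes a status field; which terminal statuses exist besides succeeded is also an assumption:

# Sketch: poll the job until it reaches a terminal status.
import time

while True:
    job = requests.get(f"{HOST}/api/adas/jobs/{job_id}", headers=HEADERS).json()
    status = job.get("status")  # field name assumed from the poll response
    print(status)
    if status in ("succeeded", "failed", "cancelled"):  # terminal states assumed
        break
    time.sleep(10)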

4) Download the best graph

When the job reaches a terminal status (succeeded), download the artifact:
curl -H "Authorization: Bearer $SYNTH_API_KEY" \
  $HOST/api/adas/jobs/adas_XXXX/download
You’ll get the best prompt snapshot plus a ready‑to‑run Python snippet.
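A Python sketch of the download, reusing HEADERS and job_id from the earlier steps; the output filename is illustrative:

# Sketch: fetch the best prompt snapshot once the job has succeeded.
artifact = requests.get(f"{HOST}/api/adas/jobs/{job_id}/download", headers=HEADERS)
artifact.raise_for_status()
with open(f"{job_id}_best_graph.json", "wb") as f:  # filename is illustrative
    f.write(artifact.content)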

5) Run inference

curl -X POST $HOST/api/adas/graph/completions \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "adas_XXXX",
    "input": { "query": "Upgrade my plan" }
  }'
To target a specific snapshot or override the model, add prompt_snapshot_id or model to the request body.
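The same call as a Python sketch, with the optional overrides shown commented out:

# Sketch: run inference against the trained graph (mirrors the curl call above).
payload = {
    "job_id": job_id,
    "input": {"query": "Upgrade my plan"},
    # Optional overrides, as noted above:
    # "prompt_snapshot_id": "...",
    # "model": "gpt-4o-mini",
}
result = requests.post(
    f"{HOST}/api/adas/graph/completions",
    headers=HEADERS,
    json=payload,
)
result.raise_for_status()
print(result.json())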

Next steps

  • Learn how judging works: product/workflows/judging.
  • Try real examples: cookbooks/workflows/overview.