The Python SDK exposes Workflows (Graphs API) via the GraphGenJob class. It wraps:
  • creating a training job from a GraphGenTaskSet,
  • monitoring progress,
  • downloading the best prompt snapshot,
  • and running inference against the trained graph.

Create from a dataset

from synth_ai.sdk.api.train.graphgen import GraphGenJob

job = GraphGenJob.from_dataset(
    "my_tasks.json",
    policy_model="gpt-4o-mini",
    rollout_budget=200,
    proposer_effort="medium",
)
submit_result = job.submit()
print(submit_result.graph_gen_job_id)
from_dataset accepts:
  • a file path,
  • a raw dict,
  • or a GraphGenTaskSet object.

Monitor training

status = job.get_status()
print(status["status"], status.get("best_score"))
For live progress, use the streaming helper:
final_status = job.stream_until_complete()
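Streaming until completion amounts to polling get_status until the job reaches a terminal state. A rough, hypothetical sketch of that loop (the real helper may stream events instead, and the terminal status names here are assumptions):

```python
import time

TERMINAL_STATES = {"completed", "failed", "cancelled"}  # assumed terminal states

def stream_until_complete(get_status, poll_interval=5.0, max_polls=1000):
    """Poll get_status() until the job reaches a terminal state, then return it.

    get_status: a callable returning a dict like {"status": ..., "best_score": ...}.
    """
    for _ in range(max_polls):
        status = get_status()
        print(status["status"], status.get("best_score"))
        if status["status"] in TERMINAL_STATES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish within the polling budget")
```

Passing `job.get_status` as the callable would reproduce the SDK helper's observable behavior under these assumptions.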

Download the best prompt

prompt = job.download_prompt()
print(prompt)
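The downloaded snapshot only lives in memory; persisting it is a one-liner. A small sketch, assuming the prompt is a string (the `save_prompt` helper is hypothetical, not an SDK method):

```python
from pathlib import Path

def save_prompt(prompt, path="best_prompt.txt"):
    """Write the downloaded prompt snapshot to disk so it survives the session."""
    Path(path).write_text(str(prompt))
    return path
```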

Run inference

result = job.run_inference({"query": "Upgrade my plan"})
print(result["output"])
Optional inference args:
  • model: override the policy model for this call.
  • graph_snapshot_id: run a specific snapshot instead of the best.
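The optional arguments behave as per-call overrides: included when set, omitted otherwise. A hypothetical sketch of how a wrapper might assemble the request (parameter names are from the list above; the payload shape is an assumption, not the SDK's wire format):

```python
def build_inference_payload(inputs, model=None, graph_snapshot_id=None):
    """Assemble an inference request, including only the overrides that were set."""
    payload = {"inputs": inputs}
    if model is not None:
        payload["model"] = model                          # override the policy model
    if graph_snapshot_id is not None:
        payload["graph_snapshot_id"] = graph_snapshot_id  # pin a specific snapshot
    return payload
```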

Run judge (Verifier Graphs)

For graphs trained with graph_type="verifier", use run_judge to evaluate execution traces:
# Pass a V3 trace dict or SessionTraceInput object
judgment = job.run_judge(session_trace)
print(f"Score: {judgment.score}")
print(f"Reasoning: {judgment.reasoning}")
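To score a batch of traces, run_judge can be applied per trace and the scores aggregated. A sketch with a hypothetical `mean_judge_score` helper, assuming only that each judgment exposes a `.score` attribute as shown above:

```python
def mean_judge_score(judge, traces):
    """Judge each trace and return the mean score (0.0 for an empty batch).

    judge: a callable like job.run_judge, returning an object with a .score attribute.
    """
    scores = [judge(trace).score for trace in traces]
    return sum(scores) / len(scores) if scores else 0.0
```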

See also

  • Product overview: product/workflows
  • Dataset + judging: product/workflows/judging
  • Examples: cookbooks/workflows/overview