Training
When you train a graph via POST /api/adas/jobs, we bill:
- Policy rollouts: tokens consumed by your chosen policy model while evaluating candidates. These use the same underlying model rates as other Synth training APIs.
- Learning compute: proposer and judge tokens used to improve the graph. Because Workflows is a product surface, this portion carries a product‑level margin.
Total training cost scales with:
- rollout_budget (how many evaluations you run),
- the average input/output size of your tasks,
- and the policy/judge models you select.
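As a rough illustration, submitting a training job could look like the sketch below. Only the POST /api/adas/jobs path and the rollout_budget knob come from this page; the base URL, auth scheme, and the other field names (policy_model, judge_model) are assumptions and may not match the real request schema.

```typescript
// Sketch: submit a Workflows training job via POST /api/adas/jobs.
// Field names other than rollout_budget are illustrative assumptions,
// not the confirmed request schema.
const API_BASE = process.env.SYNTH_API_BASE!; // your Synth API base URL

async function submitTrainingJob(apiKey: string) {
  const response = await fetch(`${API_BASE}/api/adas/jobs`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      policy_model: "gpt-4o-mini", // policy evaluated in rollouts; billed at model rates (name illustrative)
      judge_model: "o3-mini",      // judge used for learning compute; carries product margin (name illustrative)
      rollout_budget: 200,         // how many candidate evaluations to run
      // task dataset / graph spec would go here in the real schema
    }),
  });
  if (!response.ok) throw new Error(`Job submission failed: ${response.status}`);
  return response.json(); // response shape (e.g. a job id) is assumed
}
```

Raising rollout_budget, using longer task inputs/outputs, or picking more expensive policy/judge models all increase the billed token volume.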
Inference
When you serve a trained graph via POST /api/adas/graph/completions, each request is metered and billed based on the policy model tokens it consumes. This lets you ship a graph into production without a separate inference pricing path.
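A serving call might look like the following sketch. Only the POST /api/adas/graph/completions path comes from this page; the graph identifier field, the input shape, and the usage block in the response are assumptions.

```typescript
// Sketch: call a trained graph via POST /api/adas/graph/completions.
// Request/response field names below are illustrative assumptions.
async function runGraphCompletion(apiKey: string, graphId: string, input: string) {
  const response = await fetch(`${process.env.SYNTH_API_BASE}/api/adas/graph/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      graph_id: graphId, // which trained graph to serve (field name assumed)
      input,             // task input for this request
    }),
  });
  if (!response.ok) throw new Error(`Completion failed: ${response.status}`);
  const data = await response.json();
  // Billing for this call is metered on the policy model tokens it consumed;
  // a usage block like this one is assumed, not confirmed.
  console.log(data.usage?.policy_tokens);
  return data;
}
```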
Detailed per‑model pricing tables for Workflows inference and judges will be published alongside the rest of the pricing reference. Until then, expect Workflows to follow the same model rates as the underlying provider plus a small product margin on hosted learning and inference.
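Until those tables land, you can approximate a bill from token counts with arithmetic like the sketch below. The rates and margin are placeholders for illustration, not published Synth prices.

```typescript
// Sketch: approximate Workflows cost as provider model rates plus a product margin.
// All numbers are placeholders, not published prices.
interface ModelRates {
  inputPerMTok: number;  // USD per 1M input tokens at the underlying provider
  outputPerMTok: number; // USD per 1M output tokens at the underlying provider
}

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  rates: ModelRates,
  productMargin = 0.1, // hypothetical margin on hosted learning/inference
): number {
  const providerCost =
    (inputTokens / 1_000_000) * rates.inputPerMTok +
    (outputTokens / 1_000_000) * rates.outputPerMTok;
  return providerCost * (1 + productMargin);
}

// Example with placeholder rates: 50k input / 10k output tokens at $0.50 / $1.50 per 1M tokens.
console.log(estimateCostUSD(50_000, 10_000, { inputPerMTok: 0.5, outputPerMTok: 1.5 }));
```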