A successful fine-tuning job produces a model identifier (for example `ft:org-id:job-id`). Deploying that model simply means wiring it back into your inference stack, usually a task app or a hosted API.
Need CLI specifics? See Run Task Apps Locally, Run Task Apps on Modal, Deploy Task Apps, and Run Evaluations for the full flag reference.
1. Capture the fine-tuned model id
When the CLI reports success it prints the final payload, including the fine-tuned model id. Copy that id and store it (e.g. in `.env` as `FINE_TUNED_MODEL_ID`) so the steps below can reference it.
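For example, appending it to `.env` keeps it available to later steps (the id shown is the placeholder format from above):

```bash
# Store the id the CLI printed so configs and scripts can reference it.
echo 'FINE_TUNED_MODEL_ID=ft:org-id:job-id' >> .env
```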
2. Update configs to use the new model
- Task apps – set the default policy or evaluation model to the new identifier, e.g. by pointing the app at `FINE_TUNED_MODEL_ID` from `.env` (step 1).
- Eval configs – override the model flag when calling the CLI (first sketch below).
- Downstream services – whenever you call Synth’s inference API, swap the `model` field to the fine-tuned id (second sketch below).
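A one-off eval needs no config edits; the `--model` flag (also used in step 3) overrides the model per run. `TASK_APP_URL` is a placeholder for wherever your task app is serving:

```bash
source .env   # loads FINE_TUNED_MODEL_ID stored in step 1
uvx synth-ai eval --task-url "$TASK_APP_URL" --model "$FINE_TUNED_MODEL_ID"
```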
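For downstream services the change is just the `model` field in the request body. A minimal sketch, assuming an OpenAI-compatible chat completions route; the endpoint path and environment variable names here are illustrative, not the documented API:

```bash
# Placeholder endpoint and auth; substitute your actual Synth inference URL and key.
curl -s "$SYNTH_INFERENCE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "ft:org-id:job-id", "messages": [{"role": "user", "content": "smoke test"}]}'
```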
3. Redeploy task apps (if hosted)
If you run on Modal, redeploy so the new model id ships with the container (see Deploy Task Apps for the exact command), then point an eval at the fresh deployment to confirm it serves the new model:
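```bash
# Swap in your Modal deployment URL and fine-tuned model id.
uvx synth-ai eval --task-url https://<modal>.modal.run --model ft:...
```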
4. Share the model within your organization
List all fine-tuned models so teammates can discover and reuse the id; the exact listing command is covered in the CLI references linked above.

5. Keep iterating
- Re-run rollouts to verify improvements.
- Archive the training TOML, datasets, and job output for reproducibility.
- Set up an automated pipeline: collect traces → filter → train → redeploy → evaluate.
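A skeleton for that last loop, with the trace-collection and training steps as stand-ins for your own tooling (see the guides linked at the top); only the final eval command comes from this guide:

```bash
#!/usr/bin/env bash
set -euo pipefail

./collect_traces.sh      # 1. collect rollout traces (your tooling)
./filter_traces.sh       # 2. filter to the examples worth training on
./train_and_redeploy.sh  # 3-4. launch the fine-tune job, then redeploy

source .env              # pick up the FINE_TUNED_MODEL_ID written in step 1
# 5. evaluate the fresh deployment before promoting it
uvx synth-ai eval --task-url "$TASK_APP_URL" --model "$FINE_TUNED_MODEL_ID"
```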
By default the CLI reads the `.env` produced during setup. Use `--env-file` only when you need to override or layer additional secrets.