SFT jobs finish with a new model identifier (for example ft:org-id:job-id). Deploying that model simply means wiring it back into your inference stack—usually a task app or a hosted API. Need CLI specifics? See Run Task Apps Locally, Run Task Apps on Modal, Deploy Task Apps, and Run Evaluations for the full flag reference.

1. Capture the fine-tuned model id

When the CLI reports success it prints the final payload:
{
  "status": "succeeded",
  "fine_tuned_model": "ft:abc123:2024-09-18-034500",
  ...
}
If you skipped polling, query the job directly:
curl -H "Authorization: Bearer $SYNTH_API_KEY" \
  https://api.usesynth.ai/api/learning/jobs/<job_id> \
  | jq '.fine_tuned_model'
Store the value somewhere safe (for example in .env as FINE_TUNED_MODEL_ID).
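If you script this step, the payload above can be parsed and turned into a .env entry programmatically. A minimal sketch, assuming the response shape shown above (the helper names are illustrative):

```python
import json

def extract_model_id(payload: str) -> str:
    """Pull fine_tuned_model out of a job-status response, failing loudly
    if the job has not actually succeeded."""
    data = json.loads(payload)
    if data.get("status") != "succeeded":
        raise RuntimeError(f"job not finished: {data.get('status')!r}")
    return data["fine_tuned_model"]

def env_line(model_id: str) -> str:
    """Format the id as a line you can append to .env."""
    return f"FINE_TUNED_MODEL_ID={model_id}"

# Example using the payload shape shown above.
response = '{"status": "succeeded", "fine_tuned_model": "ft:abc123:2024-09-18-034500"}'
print(env_line(extract_model_id(response)))
# → FINE_TUNED_MODEL_ID=ft:abc123:2024-09-18-034500
```

Failing on any non-succeeded status keeps a half-finished job's id from silently landing in your configs.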

2. Update configs to use the new model

  • Task apps – set the default policy or evaluation model to the new identifier. Example:
    [rollout]
    model = "ft:abc123:2024-09-18-034500"
    
  • Eval configs – override the model flag when calling the CLI:
    uvx synth-ai eval \
      --app-id your-task-id \
      --model ft:abc123:2024-09-18-034500 \
      --seeds 1-25
    
  • Downstream services – whenever you call Synth’s inference API, swap the model field to the fine-tuned id.
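For downstream services, the swap can be as small as overriding the model field on the request payload before it is sent. A minimal sketch (the payload shape here is illustrative, not Synth's exact request schema):

```python
def point_at_fine_tune(payload: dict, model_id: str) -> dict:
    """Return a copy of an inference request that targets the fine-tuned
    model, leaving the original payload untouched."""
    updated = dict(payload)
    updated["model"] = model_id
    return updated

# Hypothetical base request; only the "model" field matters for the swap.
request = {"model": "base-model", "messages": [{"role": "user", "content": "hi"}]}
swapped = point_at_fine_tune(request, "ft:abc123:2024-09-18-034500")
print(swapped["model"])  # → ft:abc123:2024-09-18-034500
print(request["model"])  # original still points at the base model
```

Copy-then-override keeps the base-model request intact, which is handy when you run A/B comparisons between the base and fine-tuned models.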

3. Redeploy task apps (if hosted)

If you run on Modal, redeploy so the new model id ships with the container:
uvx synth-ai deploy your-task-id \
  --name my-task-app
Ensure the deployment can reach the model (Modal workers need access to the Synth API), then run a smoke rollout against the hosted URL:
uvx synth-ai eval --task-url https://<modal>.modal.run --model ft:...

4. Share the model within your organization

List all fine-tuned models:
curl -H "Authorization: Bearer $SYNTH_API_KEY" \
  https://api.usesynth.ai/api/learning/models
This endpoint returns metadata (base model, status, job id, timestamps) so other team members or automations can reference the correct identifier.
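Automations often just want "the newest model that actually finished". A sketch of that selection over the listing, assuming field names like status, fine_tuned_model, and created_at (check the real endpoint output before relying on them):

```python
import json

# Example listing payload; the field names are assumptions about the
# response shape, not a documented schema.
listing = json.loads("""
{"models": [
  {"fine_tuned_model": "ft:abc123:2024-09-18-034500", "status": "succeeded",
   "created_at": "2024-09-18T03:45:00Z"},
  {"fine_tuned_model": "ft:abc123:2024-09-10-120000", "status": "failed",
   "created_at": "2024-09-10T12:00:00Z"}
]}
""")

def latest_succeeded(models: list[dict]) -> str:
    """Pick the newest successfully trained model id.
    ISO-8601 timestamps sort correctly as plain strings."""
    done = [m for m in models if m["status"] == "succeeded"]
    if not done:
        raise RuntimeError("no succeeded fine-tunes yet")
    return max(done, key=lambda m: m["created_at"])["fine_tuned_model"]

print(latest_succeeded(listing["models"]))
# → ft:abc123:2024-09-18-034500
```

Filtering on status first means a failed retry can never shadow your last good model.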

5. Keep iterating

  • Re-run rollouts to verify improvements.
  • Archive the training TOML, datasets, and job output for reproducibility.
  • Set up an automated pipeline: collect traces → filter → train → redeploy → evaluate.
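That loop can be orchestrated with a small runner that executes each stage and stops at the first failure. The stage list below is a placeholder sketch: only the eval and deploy invocations shown in this guide are real commands; substitute your own collect, filter, and train steps:

```python
import subprocess

def run_stage(name: str, cmd: list[str]) -> None:
    """Run one pipeline stage; raise so later stages never run on a failed step."""
    print(f"[pipeline] {name}: {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

def run_pipeline(stages: list[tuple[str, list[str]]]) -> None:
    for name, cmd in stages:
        run_stage(name, cmd)

# Placeholder stages -- fill in real commands for each arrow in the loop above.
stages = [
    # ("collect", [...]), ("filter", [...]), ("train", [...]),
    ("redeploy", ["uvx", "synth-ai", "deploy", "your-task-id",
                  "--name", "my-task-app"]),
    ("evaluate", ["uvx", "synth-ai", "eval", "--app-id", "your-task-id",
                  "--model", "ft:abc123:2024-09-18-034500", "--seeds", "1-25"]),
]
```

Because run_stage uses check=True, a failing redeploy raises before the evaluation runs, so you never score a stale deployment.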
Tip: the CLI auto-loads the .env produced during setup. Use --env-file only when you need to override or layer additional secrets.