Downloading Graphs

After training, you can download your optimized graph for local use, inspection, or integration into your own systems.

Download Formats

| Format      | Contents             | Use Case                  |
|-------------|----------------------|---------------------------|
| Text export | Readable prompt text | Inspection, documentation |
| JSON        | Full graph structure | Programmatic use          |
| YAML        | Graph definition     | Configuration files       |

API Endpoint

GET /api/adas/jobs/{adas_job_id}/download

Response

{
  "graph_id": "graph_abc123",
  "best_snapshot_id": "snap_xyz789",
  "best_score": 0.87,
  "prompt_text": "You are a helpful assistant that answers questions...",
  "graph_yaml": "nodes:\n  - id: main\n    type: llm\n    ...",
  "metadata": {
    "policy_model": "gpt-4o-mini",
    "training_generations": 5,
    "total_evaluations": 200
  }
}
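
If you prefer raw HTTP over the SDK, the endpoint can be called with any HTTP client. A minimal sketch using the requests library, assuming SYNTH_API_KEY and SYNTH_BACKEND_URL are set in your environment and adas_abc123 is your job ID:

import os
import requests

api_key = os.environ["SYNTH_API_KEY"]
base_url = os.environ["SYNTH_BACKEND_URL"]

# Call the download endpoint for a finished ADAS job
resp = requests.get(
    f"{base_url}/api/adas/jobs/adas_abc123/download",
    headers={"Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()

payload = resp.json()
print(payload["best_score"])          # e.g. 0.87
print(payload["prompt_text"][:200])   # preview of the optimized prompt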

Python SDK

Download with ADASJob

from synth_ai.sdk.api.train.adas import ADASJob

# After training completes
job = ADASJob.from_existing("adas_abc123", api_key=api_key)

# Download the best prompt
prompt = job.download_prompt()
print(prompt)

Download Text Export

Get a readable text version of the optimized prompts:
text_export = job.download_graph_txt()
print(text_export)
Output:
=== Graph: intent_classifier ===
Best Score: 0.87 (validation)
Policy Model: gpt-4o-mini

--- Node: classifier ---
System Prompt:
You are an intent classifier for customer support queries.
Classify each query into one of: billing, technical, account, other.

Respond with just the intent category.

--- Node: confidence_check ---
...

Download Full Graph

Get the complete graph definition for programmatic use:
graph_data = job.download_full_graph()

# Access graph structure
nodes = graph_data["nodes"]
edges = graph_data["edges"]
prompts = graph_data["prompts"]

# Save to file
import json
with open("my_graph.json", "w") as f:
    json.dump(graph_data, f, indent=2)

CLI Download

# Download best prompt as text
uvx synth-ai artifacts download adas_abc123

# Download as JSON
uvx synth-ai artifacts download adas_abc123 --format json

# Save to file
uvx synth-ai artifacts download adas_abc123 --output my_graph.json

# Download specific snapshot
uvx synth-ai artifacts download adas_abc123 --snapshot snap_xyz789

cURL Example

curl -H "Authorization: Bearer $SYNTH_API_KEY" \
  "$SYNTH_BACKEND_URL/api/adas/jobs/adas_abc123/download"

Running Downloaded Graphs Locally

Once downloaded, you can run graphs without the Synth API:

Using the Prompt Text

For simple single-prompt graphs:
import openai

# Downloaded prompt
system_prompt = """You are an intent classifier..."""

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I want to cancel my subscription"}
    ]
)
print(response.choices[0].message.content)

Using the Full Graph

For multi-node graphs, you need to implement the graph execution logic yourself:
import json
from openai import OpenAI

# Load downloaded graph
with open("my_graph.json") as f:
    graph = json.load(f)

client = OpenAI()

def execute_node(node, inputs):
    """Execute a single graph node."""
    prompt = node["prompt"].format(**inputs)
    response = client.chat.completions.create(
        model=node.get("model", "gpt-4o-mini"),
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def run_graph(graph, initial_input):
    """Execute the full graph."""
    state = initial_input.copy()

    for node in graph["nodes"]:
        result = execute_node(node, state)
        state[node["output_key"]] = result

    return state[graph["output_key"]]

# Run locally
result = run_graph(graph, {"question": "What is 2+2?"})
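
The loop above runs nodes in the order they appear in the nodes list. If you want to derive the execution order from the declared edges instead, a topological sort works; here is a minimal sketch using the standard library's graphlib, assuming the node and edge shapes shown in the Graph Schema section below:

from graphlib import TopologicalSorter

def run_graph_dag(graph, initial_input):
    """Execute nodes in dependency order derived from the edges."""
    # Map each node id to the set of node ids it depends on
    deps = {node["id"]: set() for node in graph["nodes"]}
    for edge in graph["edges"]:
        deps[edge["to"]].add(edge["from"])

    nodes_by_id = {node["id"]: node for node in graph["nodes"]}
    state = initial_input.copy()

    # static_order() yields each node only after all of its dependencies
    for node_id in TopologicalSorter(deps).static_order():
        node = nodes_by_id[node_id]
        state[node["output_key"]] = execute_node(node, state)

    return state[graph["output_key"]]

result = run_graph_dag(graph, {"query": "I want to cancel my subscription"})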

Graph Schema

Downloaded graphs follow this structure:
# graph.yaml
id: graph_abc123
name: intent_classifier
type: policy
structure: dag

nodes:
  - id: classifier
    type: llm
    model: gpt-4o-mini
    prompt: |
      You are an intent classifier...
    input_keys: [query]
    output_key: intent

  - id: response_generator
    type: llm
    model: gpt-4o-mini
    prompt: |
      Given the intent: {intent}
      Generate a response for: {query}
    input_keys: [query, intent]
    output_key: response

edges:
  - from: classifier
    to: response_generator

input_key: query
output_key: response

metadata:
  training_job: adas_abc123
  best_score: 0.87
  created_at: 2024-01-15T10:30:00Z
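
To work with the YAML export in Python, here is a minimal sketch using PyYAML (an assumption; any YAML parser works) that loads the file and lists each node's model and optimized prompt:

import yaml

with open("graph.yaml") as f:
    graph = yaml.safe_load(f)

print(graph["name"], graph["metadata"]["best_score"])
for node in graph["nodes"]:
    # Each node carries its own model and optimized prompt
    print(node["id"], node["model"])
    print(node["prompt"])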

Version Control

Track your graphs in version control:
# Download and commit
uvx synth-ai artifacts download adas_abc123 --output graphs/intent_v1.json
git add graphs/intent_v1.json
git commit -m "Add optimized intent classifier v1 (score: 0.87)"