GEPA and MIPRO Prompt Learning on Banking77

Run either walkthrough end to end with a single command:
# GEPA (Banking77)
cd synth-ai
uv run python /path/to/cookbooks/code/training/prompt_learning/gepa/run_walkthrough.py

# MIPRO (Banking77)
cd synth-ai
uv run python /path/to/cookbooks/code/training/prompt_learning/mipro/run_walkthrough.py

Prerequisites

  • Python 3.11+
  • uv package manager
  • Accounts and API keys set up for Synth and your model provider
  • Environment variables:
    • SYNTH_API_KEY
    • ENVIRONMENT_API_KEY
    • GROQ_API_KEY (or other model provider key, depending on config)
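
Before launching either script, you can sanity-check that these variables are set. A minimal sketch in Python, assuming the three variable names listed above (swap GROQ_API_KEY for a different key if your config uses another provider):
import os

# Variable names come from the prerequisites list above; adjust the provider
# key to match your config if you are not using Groq.
required = ["SYNTH_API_KEY", "ENVIRONMENT_API_KEY", "GROQ_API_KEY"]

missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")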

GEPA: Banking77 (In-Process)

Run a full GEPA optimization loop from a single script:
cd synth-ai
uv run python /path/to/cookbooks/code/training/prompt_learning/gepa/run_walkthrough.py
Source:
run_walkthrough.py · config.toml · walkthrough.md

What happens

  1. An in-process task app is started in a background thread.
  2. A Cloudflare tunnel is created automatically and registered with Synth.
  3. A GEPA prompt-learning job is submitted, monitored, and polled until completion.
  4. Final results are returned to Python, and the task app + tunnel are cleaned up automatically.

Sample results

Job pl_5ea04259c2fd4c7a reached 83.33% accuracy on Banking77 in ~35 seconds.
{
  "job_id": "pl_5ea04259c2fd4c7a",
  "algorithm": "gepa",
  "dataset": "banking77",
  "best_score": 0.8333,
  "best_prompt_rank": 1,
  "num_generations": 8,
  "total_time_seconds": 35.6
}
See full run output in:
results.json
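
The results file mirrors the dictionary shown above. A small sketch for loading it and printing the headline numbers (the path is a placeholder; point it at the results.json written by your run):
import json
from pathlib import Path

# Placeholder path; use the location of the results.json from your own run.
results_path = Path("code/training/prompt_learning/gepa/results.json")
results = json.loads(results_path.read_text())

print(
    f"job {results['job_id']} ({results['algorithm']}) scored "
    f"{results['best_score']:.2%} on {results['dataset']} "
    f"in {results['total_time_seconds']:.1f}s"
)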

MIPRO: Banking77 (In-Process)

Swap the algorithm to MIPRO, keep the same in-process pattern:
cd synth-ai
uv run python /path/to/cookbooks/code/training/prompt_learning/mipro/run_walkthrough.py
Source:
run_walkthrough.py · config.toml · walkthrough.md

Sample results

Job pl_e95cc778c0fb4742 reached 60.0% accuracy on Banking77 in ~130 seconds.
{
  "job_id": "pl_e95cc778c0fb4742",
  "algorithm": "mipro",
  "dataset": "banking77",
  "best_score": 0.60,
  "best_prompt_rank": 1,
  "total_time_seconds": 130.4
}
See full run output in:
results.json

Core pattern (GEPA + MIPRO)

Both walkthroughs share the same in-process pattern: start a task app, create a tunnel, run a prompt-learning job pointed at the tunnel URL, and clean everything up when done.
from synth_ai.task import InProcessTaskApp
from synth_ai.sdk.api.train.prompt_learning import PromptLearningJob


async def optimize_prompt(
    task_app_path: str,
    config_path: str,
    port: int = 8001,
):
    async with InProcessTaskApp(
        task_app_path=task_app_path,
        port=port,
    ) as task_app:
        # Build job from TOML config
        job = PromptLearningJob.from_config(
            config_path=config_path,
            task_app_url=task_app.url,
        )

        # Run until complete and fetch best prompt + metrics
        results = await job.poll_until_complete()

    return results
Minimal GEPA and MIPRO calls using the helper above:
# GEPA
results = await optimize_prompt(
    task_app_path="task_app.py",
    config_path="code/training/prompt_learning/gepa/config.toml",
)

# MIPRO
results = await optimize_prompt(
    task_app_path="task_app.py",
    config_path="code/training/prompt_learning/mipro/config.toml",
)
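optimize_prompt is a coroutine, so a standalone script needs an event loop to drive it. A minimal driver using only the standard library (paths are the same placeholders as above, and it assumes optimize_prompt from the core pattern is defined in the same module):
import asyncio

# Run the GEPA optimization end to end and print the returned results dict.
if __name__ == "__main__":
    results = asyncio.run(
        optimize_prompt(
            task_app_path="task_app.py",
            config_path="code/training/prompt_learning/gepa/config.toml",
        )
    )
    print(results)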
Full GEPA walkthrough:
gepa/walkthrough.md
Full MIPRO walkthrough:
mipro/walkthrough.md