Task Apps don’t need to be in Python. You can implement them in any language that can serve HTTP requests and make LLM calls.

Why Polyglot Task Apps?

  • Use your preferred language - No need to rewrite existing code in Python
  • Better performance - Compiled languages can be faster for CPU-intensive tasks
  • Smaller deployments - Single binaries with no runtime dependencies
  • Existing codebases - Integrate directly with your current infrastructure
  • No Python required - Start optimization jobs via API calls

How It Works

┌─────────────────┐         ┌──────────────────┐
│  MIPRO/GEPA     │  HTTP   │  Your Task App   │
│  Optimizer      │ ──────> │  (any language)  │
│                 │         │                  │
│  Proposes new   │         │  Evaluates the   │
│  prompts        │ <────── │  prompt, returns │
│                 │  reward │  reward          │
└─────────────────┘         └──────────────────┘
The optimizer calls your /rollout endpoint with candidate prompts, and you return a reward indicating how well each prompt performed.

The Contract

All Task Apps implement the same OpenAPI contract, regardless of language.

Required Endpoints:
  • GET /health - Health check (unauthenticated OK)
  • POST /rollout - Evaluate a prompt (authenticated)
Optional Endpoints:
  • GET /task_info - Dataset metadata (authenticated)
Key Request Fields:
  • env.seed - Dataset index
  • policy.config.inference_url - LLM endpoint
  • policy.config.prompt_template - The prompt to evaluate
Key Response Fields:
  • metrics.mean_return - Reward (0.0-1.0) that drives optimization
  • trajectories[].steps[].reward - Per-step reward
See the full OpenAPI specification for complete details.
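To make the contract concrete, here is a minimal Go sketch of the request and response shapes described above. The struct fields mirror only the key fields listed here (env.seed, policy.config.*, metrics.mean_return, per-step reward); the authoritative schema is the OpenAPI contract, so treat these types as an illustration, not a complete definition. The HTTP wiring is omitted.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RolloutRequest mirrors the key request fields from the contract.
// Fields not listed in this doc are omitted.
type RolloutRequest struct {
	RunID string `json:"run_id"`
	Env   struct {
		Seed int `json:"seed"` // dataset index
	} `json:"env"`
	Policy struct {
		Config struct {
			Model          string `json:"model"`
			InferenceURL   string `json:"inference_url"`
			PromptTemplate string `json:"prompt_template"`
		} `json:"config"`
	} `json:"policy"`
	Mode string `json:"mode"`
}

// Step carries the per-step reward.
type Step struct {
	Reward float64 `json:"reward"`
}

// RolloutResponse mirrors the key response fields:
// metrics.mean_return is the number that drives optimization.
type RolloutResponse struct {
	Trajectories []struct {
		Steps []Step `json:"steps"`
	} `json:"trajectories"`
	Metrics struct {
		MeanReturn float64 `json:"mean_return"`
	} `json:"metrics"`
}

// buildResponse packages per-step rewards into a single trajectory
// and computes mean_return as their average.
func buildResponse(stepRewards []float64) RolloutResponse {
	var resp RolloutResponse
	var traj struct {
		Steps []Step `json:"steps"`
	}
	total := 0.0
	for _, r := range stepRewards {
		traj.Steps = append(traj.Steps, Step{Reward: r})
		total += r
	}
	resp.Trajectories = append(resp.Trajectories, traj)
	if n := len(stepRewards); n > 0 {
		resp.Metrics.MeanReturn = total / float64(n)
	}
	return resp
}

func main() {
	out, _ := json.Marshal(buildResponse([]float64{1.0, 0.0, 1.0}))
	fmt.Println(string(out))
}
```

However you serve it, the invariant is the same: the optimizer only acts on metrics.mean_return, so that field must always be populated.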

Accessing the Contract

Via CLI

# View the contract
synth contracts show task-app

# Get the file path for code generators
synth contracts path task-app

Direct Download

curl -O https://raw.githubusercontent.com/synth-laboratories/synth-ai/main/synth_ai/contracts/task_app.yaml

Generate Types

# Rust
openapi-generator generate -i task_app.yaml -g rust -o ./types

# Go
openapi-generator generate -i task_app.yaml -g go -o ./types

# TypeScript
openapi-generator generate -i task_app.yaml -g typescript-axios -o ./types

Authentication

Task Apps involve two separate authentication flows:

1. Task App Authentication (X-API-Key)

Requests to your task app from the optimizer include an X-API-Key header:
export ENVIRONMENT_API_KEY=your-secret-key
Your task app should verify X-API-Key matches ENVIRONMENT_API_KEY.

2. LLM API Authentication (Authorization: Bearer)

When your task app makes requests to OpenAI/Groq/etc:
export OPENAI_API_KEY=sk-...    # or
export GROQ_API_KEY=gsk_...
Important: The X-API-Key header from the optimizer is for task app auth only - do NOT forward it to the LLM API.

Running Optimization (No Python Required)

Start optimization jobs directly via API calls:
# 1. Start your task app
ENVIRONMENT_API_KEY=my-secret ./synth-task-app

# 2. Expose via tunnel
cloudflared tunnel --url http://localhost:8001

# 3. Start optimization
curl -X POST https://agent-learning.onrender.com/api/prompt-learning/online/jobs \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "algorithm": "mipro",
    "config_body": {
      "prompt_learning": {
        "task_app_url": "https://random-words.trycloudflare.com",
        "task_app_api_key": "my-secret"
      }
    }
  }'

Language Implementations

Performance Comparison

Language     Binary Size   Dependencies   Startup Time        Cross-Compile
Rust         ~5-10MB       Some           Fast (~50ms)        Yes (via rustup)
Go           ~8-12MB       None           Very fast (~10ms)   Yes (built-in)
TypeScript   N/A (Node)    Many           Medium (~200ms)     N/A
Zig          ~1-5MB        None           Very fast (~10ms)   Yes (trivial)

Debugging Tips

Testing Locally

# Health check
curl http://localhost:8001/health

# Manual rollout
curl -X POST http://localhost:8001/rollout \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secret" \
  -d '{
    "run_id": "test-1",
    "env": {"seed": 0},
    "policy": {
      "config": {
        "model": "gpt-4o-mini",
        "inference_url": "https://api.openai.com/v1"
      }
    },
    "mode": "eval"
  }'

Common Issues

  1. 404 errors from LLM endpoint: Check URL construction - query parameters and trailing slashes on inference_url are common culprits
  2. Authentication failures: Verify X-API-Key matches ENVIRONMENT_API_KEY
  3. Missing rewards: Ensure reward field is present in each step
  4. Tool call parsing: Extract predictions from tool_calls or content correctly
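For issue 4, the fiddly part is that the model's answer can arrive either as a tool call's arguments or as plain message content, depending on the model and prompt. A hedged Go sketch of that fallback logic, assuming an OpenAI-style chat completion shape (the exact response schema depends on your provider):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// chatCompletion models only the fields needed for extraction
// from an OpenAI-style chat completion response.
type chatCompletion struct {
	Choices []struct {
		Message struct {
			Content   string `json:"content"`
			ToolCalls []struct {
				Function struct {
					Arguments string `json:"arguments"`
				} `json:"function"`
			} `json:"tool_calls"`
		} `json:"message"`
	} `json:"choices"`
}

// extractPrediction prefers the first tool call's arguments and
// falls back to the plain message content.
func extractPrediction(raw []byte) (string, error) {
	var cc chatCompletion
	if err := json.Unmarshal(raw, &cc); err != nil {
		return "", err
	}
	if len(cc.Choices) == 0 {
		return "", errors.New("no choices in response")
	}
	msg := cc.Choices[0].Message
	if len(msg.ToolCalls) > 0 {
		return msg.ToolCalls[0].Function.Arguments, nil
	}
	if msg.Content != "" {
		return msg.Content, nil
	}
	return "", errors.New("no tool_calls or content in message")
}

func main() {
	raw := []byte(`{"choices":[{"message":{"tool_calls":[{"function":{"arguments":"{\"answer\":\"42\"}"}}]}}]}`)
	pred, _ := extractPrediction(raw)
	fmt.Println(pred) // {"answer":"42"}
}
```

Handling both paths (and returning an error rather than an empty prediction when neither is present) makes the "missing rewards" and "tool call parsing" issues much easier to diagnose.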