Task Apps don’t need to be in Python. You can implement them in any language that can serve HTTP requests and make LLM calls. This guide shows you how to build Task Apps in Rust, Go, TypeScript, and Zig.

Why Polyglot Task Apps?

  • Use your preferred language - No need to rewrite existing code in Python
  • Better performance - Compiled languages can be faster for CPU-intensive tasks
  • Smaller deployments - Single binaries with no runtime dependencies
  • Existing codebases - Integrate directly with your current infrastructure
  • No Python required - Start optimization jobs via API calls

How It Works

┌─────────────────┐         ┌──────────────────┐
│  MIPRO/GEPA     │  HTTP   │  Your Task App   │
│  Optimizer      │ ──────> │  (any language)  │
│                 │         │                  │
│  Proposes new   │         │  Evaluates the   │
│  prompts        │ <────── │  prompt, returns │
│                 │  reward │  reward          │
└─────────────────┘         └──────────────────┘
The optimizer calls your /rollout endpoint with candidate prompts, and you return a reward indicating how well each prompt performed. That’s it.

The Contract

All Task Apps implement the same OpenAPI contract, regardless of language.
Required Endpoints:
  • GET /health - Health check (unauthenticated OK)
  • POST /rollout - Evaluate a prompt (authenticated)
Optional Endpoints:
  • GET /task_info - Dataset metadata (authenticated)
Key Request Fields:
  • env.seed - Dataset index
  • policy.config.inference_url - LLM endpoint (see notes below)
  • policy.config.prompt_template - The prompt to evaluate
Key Response Fields:
  • metrics.mean_return - Reward (0.0-1.0) that drives optimization
  • trajectories[].steps[].reward - Per-step reward
See the full OpenAPI specification for complete details.
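For orientation, here is a minimal Go sketch covering only the fields listed above. The JSON field names come from the contract; the struct names and everything omitted are illustrative, so generate the real types from the OpenAPI spec for production use.
// Illustrative Go types covering only the contract fields listed above.
// Generate complete types from the OpenAPI spec for real use.
package taskapp

import "encoding/json"

type RolloutRequest struct {
    RunID string `json:"run_id"`
    Env   struct {
        Seed int `json:"seed"` // dataset index
    } `json:"env"`
    Policy struct {
        Config struct {
            Model          string          `json:"model"`
            InferenceURL   string          `json:"inference_url"`   // LLM endpoint (may carry query params)
            PromptTemplate json.RawMessage `json:"prompt_template"` // the candidate prompt to evaluate
        } `json:"config"`
    } `json:"policy"`
    Mode string `json:"mode"`
}

type RolloutResponse struct {
    Metrics struct {
        MeanReturn float64 `json:"mean_return"` // 0.0-1.0, drives optimization
    } `json:"metrics"`
    Trajectories []struct {
        Steps []struct {
            Reward float64 `json:"reward"` // per-step reward
        } `json:"steps"`
    } `json:"trajectories"`
}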

Accessing the OpenAPI Contract

The Task App contract is available through multiple methods, so you can access it without installing Python or any dependencies.

1. Via the Synth CLI

If you have the Synth CLI installed:
# View the contract in your terminal
synth contracts show task-app

# Get the file path (for use with code generators)
synth contracts path task-app

# Example: Generate types for your language
CONTRACT_PATH=$(synth contracts path task-app)
openapi-generator generate -i "$CONTRACT_PATH" -g rust -o ./generated
The CLI commands are available after installing synth-ai:
# Install via uv (recommended)
uv tool install synth-ai

# Or via pip
pip install synth-ai

2. Direct Download (No Installation Required)

Download the contract directly from GitHub:
# Download to current directory
curl -O https://raw.githubusercontent.com/synth-laboratories/synth-ai/main/synth_ai/contracts/task_app.yaml

# Or download and view
curl https://raw.githubusercontent.com/synth-laboratories/synth-ai/main/synth_ai/contracts/task_app.yaml

3. Via Python SDK

If you’re working in Python:
from synth_ai.contracts import get_task_app_contract, TASK_APP_CONTRACT_PATH

# Get as string
contract_yaml = get_task_app_contract()

# Get file path
print(TASK_APP_CONTRACT_PATH)

4. View in Browser

You can also view the contract in your browser: open synth_ai/contracts/task_app.yaml in the synth-laboratories/synth-ai repository on GitHub.

Using the Contract with Code Generators

Once you have the contract, you can generate types for your language.
For Rust:
openapi-generator generate -i task_app.yaml -g rust -o ./types
For Go:
openapi-generator generate -i task_app.yaml -g go -o ./types
For TypeScript:
openapi-generator generate -i task_app.yaml -g typescript-axios -o ./types
For Other Languages:
# See available generators
openapi-generator list

# Generate for your language
openapi-generator generate -i task_app.yaml -g [generator-name] -o ./types

Contract Contents

The OpenAPI specification includes:
  1. Complete type definitions for all request/response schemas
  2. Detailed field descriptions explaining what each field means
  3. Behavioral documentation describing how to implement /rollout
  4. Authentication requirements for each endpoint
  5. Example payloads for common scenarios
  6. Error response schemas for proper error handling
All the examples in this guide (Rust, Go, TypeScript, Zig) implement this exact contract.

Authentication

Task Apps involve two separate authentication flows:

1. Task App Authentication (X-API-Key)

Requests to your task app from the optimizer include an X-API-Key header. This authenticates the optimizer to your task app.
# Set this when starting your task app
export ENVIRONMENT_API_KEY=your-secret-key
Your task app should verify X-API-Key matches ENVIRONMENT_API_KEY on /rollout and /task_info endpoints.
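A minimal sketch of that check in Go (the wrapper name requireAPIKey and the use of net/http are illustrative, not part of the contract):
// Sketch: reject requests whose X-API-Key does not match ENVIRONMENT_API_KEY.
// Apply this to /rollout and /task_info; /health can stay unauthenticated.
package taskapp

import (
    "crypto/subtle"
    "net/http"
    "os"
)

func requireAPIKey(next http.HandlerFunc) http.HandlerFunc {
    expected := os.Getenv("ENVIRONMENT_API_KEY")
    return func(w http.ResponseWriter, r *http.Request) {
        got := r.Header.Get("X-API-Key")
        // Constant-time comparison avoids leaking the key through timing.
        if expected == "" || subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next(w, r)
    }
}
You would then wrap your own handlers, for example http.HandleFunc("/rollout", requireAPIKey(handleRollout)), where handleRollout is whatever rollout handler you wrote.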

2. LLM API Authentication (Authorization: Bearer)

When your task app makes requests to an LLM provider (OpenAI, Groq, etc.), you need to add a Bearer token:
# Set this when starting your task app
export OPENAI_API_KEY=sk-...    # or
export GROQ_API_KEY=gsk_...
Your task app should read this from the environment and add it to LLM requests:
Authorization: Bearer sk-...
Important: The X-API-Key header from the optimizer is for task app auth only - do NOT forward it to the LLM API.
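A minimal Go sketch of that split (the env-var fallback order is an assumption; use whichever provider key your task app actually needs):
// Sketch: attach the LLM provider key as a Bearer token. The incoming
// X-API-Key from the optimizer is never forwarded to the LLM API.
package taskapp

import (
    "bytes"
    "net/http"
    "os"
)

func newLLMRequest(url string, body []byte) (*http.Request, error) {
    req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/json")

    // Assumption: prefer GROQ_API_KEY when set, otherwise fall back to OPENAI_API_KEY.
    key := os.Getenv("GROQ_API_KEY")
    if key == "" {
        key = os.Getenv("OPENAI_API_KEY")
    }
    req.Header.Set("Authorization", "Bearer "+key)
    return req, nil
}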

Banking77 Examples

All examples below implement the same Banking77 intent classification task. They share a common dataset file and produce identical results.

Rust

Features:
  • Fast, type-safe implementation using Axum
  • Async/await with Tokio runtime
  • Strong compile-time guarantees
  • Tested end-to-end with MIPRO - achieves 100% accuracy
Quick Start:
cd examples/polyglot/rust/
cargo run --release
With authentication:
ENVIRONMENT_API_KEY=your-secret cargo run --release
Dependencies:
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.11", features = ["json"] }
anyhow = "1"
tracing = "0.1"
tracing-subscriber = "0.3"
Key Implementation Details: The most critical part is URL construction with query parameters. The inference_url from the optimizer includes query params for tracing:
// CORRECT: Insert path before query string
let url = if let Some(query_start) = inference_url.find('?') {
    let (base, query) = inference_url.split_at(query_start);
    format!("{}/chat/completions{}", base.trim_end_matches('/'), query)
} else {
    format!("{}/chat/completions", inference_url.trim_end_matches('/'))
};

// Result: http://host/path/chat/completions?cid=xxx
Full Example: rust/src/main.rs

Go

Features:
  • Zero external dependencies (uses only Go standard library)
  • Single static binary
  • Built-in cross-compilation
  • Excellent concurrency with goroutines
  • Tested end-to-end with MIPRO - achieves 100% accuracy
Quick Start:
cd examples/polyglot/go/
go build -o synth-task-app
./synth-task-app
With authentication:
ENVIRONMENT_API_KEY=your-secret ./synth-task-app
Cross-Compilation:
# Linux AMD64
GOOS=linux GOARCH=amd64 go build -o synth-task-app-linux-amd64

# Linux ARM64
GOOS=linux GOARCH=arm64 go build -o synth-task-app-linux-arm64

# macOS ARM64 (Apple Silicon)
GOOS=darwin GOARCH=arm64 go build -o synth-task-app-macos-arm64

# Windows
GOOS=windows GOARCH=amd64 go build -o synth-task-app.exe
Key Implementation Details: URL construction in Go:
// CORRECT: Insert path before query string
var url string
if queryIdx := strings.Index(inferenceURL, "?"); queryIdx != -1 {
    base := strings.TrimSuffix(inferenceURL[:queryIdx], "/")
    query := inferenceURL[queryIdx:]
    url = base + "/chat/completions" + query
} else {
    url = strings.TrimSuffix(inferenceURL, "/") + "/chat/completions"
}

// Result: http://host/path/chat/completions?cid=xxx
Full Example: go/main.go

TypeScript

Features:
  • Uses Hono - fast, lightweight web framework
  • Works with Node.js, Deno, Bun, and Cloudflare Workers
  • Type-safe with full TypeScript support
  • Easy deployment to edge platforms
Quick Start:
cd examples/polyglot/typescript/
npm install
npm run dev
With authentication:
ENVIRONMENT_API_KEY=your-secret npm run dev
Build for production:
npm run build
npm start
Dependencies:
{
  "dependencies": {
    "hono": "^4.0.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "typescript": "^5.0.0",
    "tsx": "^4.0.0"
  }
}
Key Implementation Details: URL construction in TypeScript:
// CORRECT: Insert path before query string
let url: string;
const queryIndex = inferenceUrl.indexOf("?");
if (queryIndex !== -1) {
  const base = inferenceUrl.slice(0, queryIndex).replace(/\/$/, "");
  const query = inferenceUrl.slice(queryIndex);
  url = `${base}/chat/completions${query}`;
} else {
  url = `${inferenceUrl.replace(/\/$/, "")}/chat/completions`;
}

// Result: http://host/path/chat/completions?cid=xxx
Deploying to Cloudflare Workers: The same Hono app runs on Cloudflare Workers with minimal changes:
  1. Install Wrangler: npm install -g wrangler
  2. Create wrangler.toml:
    name = "synth-task-app"
    main = "src/index.ts"
    compatibility_date = "2024-01-01"
    
    [vars]
    ENVIRONMENT_API_KEY = "your-secret"
    
  3. Modify for Workers (replace the serve() call with export default app;)
  4. Deploy: wrangler deploy
Full Example: typescript/src/index.ts

Zig

Features:
  • Zero external dependencies (uses only Zig standard library)
  • Single static binary (~1-5MB optimized)
  • Trivial cross-compilation to any target
  • No garbage collection (predictable latency)
  • Requires Zig 0.15+ (uses new HTTP client and I/O APIs)
Quick Start:
cd examples/polyglot/zig/
zig build -Doptimize=ReleaseFast
./zig-out/bin/synth-task-app
With authentication:
ENVIRONMENT_API_KEY=your-secret ./zig-out/bin/synth-task-app
Cross-Compilation: Zig makes cross-compilation trivial:
# Linux (static musl)
zig build -Doptimize=ReleaseFast -Dtarget=x86_64-linux-musl

# Linux ARM64
zig build -Doptimize=ReleaseFast -Dtarget=aarch64-linux-musl

# macOS ARM64
zig build -Doptimize=ReleaseFast -Dtarget=aarch64-macos

# Windows
zig build -Doptimize=ReleaseFast -Dtarget=x86_64-windows
Key Implementation Details: Zig requires manual HTTP request handling, but the core logic is straightforward. The implementation uses Zig’s standard library HTTP client for LLM calls.
Full Example: zig/src/main.zig

Running Optimization (No Python Required!)

You can start optimization jobs directly via API calls, without installing the Python SDK:
# 1. Start your task app
ENVIRONMENT_API_KEY=my-secret ./synth-task-app

# 2. Expose via tunnel
cloudflared tunnel --url http://localhost:8001
# Note the URL: https://random-words.trycloudflare.com

# 3. Start optimization
curl -X POST https://api.usesynth.ai/api/prompt-learning/online/jobs \
  -H "Authorization: Bearer $SYNTH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "algorithm": "mipro",
    "config_body": {
      "prompt_learning": {
        "task_app_url": "https://random-words.trycloudflare.com",
        "task_app_api_key": "my-secret"
      }
    }
  }'
The API returns a job ID that you can use to poll for results:
# Check job status
curl https://api.usesynth.ai/api/prompt-learning/online/jobs/{job_id} \
  -H "Authorization: Bearer $SYNTH_API_KEY"

Shared Dataset

All examples load from the same dataset file for consistency:
examples/polyglot/data/banking77.json
This contains 100 Banking77-style samples for intent classification. Each sample has:
{
  "query": "How do I reset my PIN?",
  "label": "change_pin"
}
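One way to use it: load the file once at startup and index into it with env.seed. A minimal Go sketch (treating the file as a JSON array and wrapping seeds with modulo are assumptions, not contract requirements):
// Sketch: load the shared dataset and pick the sample for a given seed.
package taskapp

import (
    "encoding/json"
    "os"
)

type Sample struct {
    Query string `json:"query"`
    Label string `json:"label"`
}

func loadDataset(path string) ([]Sample, error) {
    // Assumption: the file is a JSON array of {query, label} objects.
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    var samples []Sample
    err = json.Unmarshal(data, &samples)
    return samples, err
}

func sampleForSeed(samples []Sample, seed int) Sample {
    // Assumption: wrap around so any seed maps onto one of the 100 samples.
    return samples[seed%len(samples)]
}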

Project Structure

examples/polyglot/
├── README.md              # Overview and contract documentation
├── data/
│   └── banking77.json     # Shared dataset (100 samples)
├── rust/
│   ├── Cargo.toml
│   ├── src/main.rs
│   └── README.md
├── go/
│   ├── go.mod
│   ├── main.go
│   └── README.md
├── typescript/
│   ├── package.json
│   ├── src/index.ts
│   └── README.md
└── zig/
    ├── build.zig
    ├── src/main.zig
    └── README.md

Customizing for Your Task

To adapt these examples for your own task:
  1. Replace the dataset - Load your own data in the appropriate format
  2. Update the tool schema - Modify the classify tool to match your output format
  3. Adjust reward computation - Change how you compare predictions to ground truth
  4. Update task metadata - Modify the /task_info response to reflect your task
The core structure remains the same:
  1. Parse the rollout request
  2. Extract the seed and load your data
  3. Render the prompt template with your data
  4. Call the LLM via inference_url/chat/completions
  5. Parse the response and compute reward (see the sketch below)
  6. Return the rollout response
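For step 5, the Banking77 examples reduce reward computation to an exact label match. A minimal Go sketch (the trim/lowercase normalization is an assumption; adapt the comparison to your own task):
// Sketch: binary reward for intent classification. Returns 1.0 on an
// exact label match after light normalization, 0.0 otherwise.
package taskapp

import "strings"

func computeReward(predicted, expected string) float64 {
    norm := func(s string) string { return strings.ToLower(strings.TrimSpace(s)) }
    if norm(predicted) == norm(expected) {
        return 1.0
    }
    return 0.0
}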

Performance Characteristics

| Language   | Binary Size | Dependencies | Startup Time      | Cross-Compile    |
|------------|-------------|--------------|-------------------|------------------|
| Rust       | ~5-10MB     | Some         | Fast (~50ms)      | Yes (via rustup) |
| Go         | ~8-12MB     | None         | Very Fast (~10ms) | Yes (built-in)   |
| TypeScript | N/A (Node)  | Many         | Medium (~200ms)   | N/A              |
| Zig        | ~1-5MB      | None         | Very Fast (~10ms) | Yes (trivial)    |

Language Recommendations

Choose Rust if:
  • You want strong type safety and compile-time guarantees
  • You need async/await for concurrent processing
  • You’re comfortable with Rust’s ownership model
Choose Go if:
  • You want zero dependencies and simple deployment
  • You need excellent concurrency with goroutines
  • You prefer a simpler learning curve
Choose TypeScript if:
  • You’re already in the Node.js ecosystem
  • You want to deploy to edge platforms (Cloudflare Workers)
  • You prefer JavaScript’s familiar syntax
Choose Zig if:
  • You want the smallest possible binaries
  • You need trivial cross-compilation
  • You want no garbage collection for predictable latency
  • You’re comfortable with a newer language

Debugging Tips

Testing Locally

Test your task app with curl before connecting to the optimizer:
# Health check
curl http://localhost:8001/health

# Manual rollout (requires mock inference_url)
curl -X POST http://localhost:8001/rollout \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-secret" \
  -d '{
    "run_id": "test-1",
    "env": {"seed": 0},
    "policy": {
      "config": {
        "model": "gpt-4o-mini",
        "inference_url": "https://api.openai.com/v1"
      }
    },
    "mode": "eval"
  }'

Common Issues

  1. 404 errors from LLM endpoint: Check your URL construction - insert /chat/completions BEFORE the query string, so the query parameters end up after the path (e.g. .../chat/completions?cid=xxx)
  2. Authentication failures: Verify that X-API-Key matches ENVIRONMENT_API_KEY
  3. Missing rewards: Ensure reward field is present in each step’s response
  4. Tool call parsing: Make sure you’re extracting predictions from tool_calls or content correctly (see the sketch below)
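For issue 4, here is a hedged Go sketch of one way to extract the prediction, preferring the classify tool call and falling back to plain message content. The struct shape follows the OpenAI-style chat completion format and assumes the classify tool takes a single label argument; adjust it to your provider and tool schema.
// Sketch: pull the predicted label out of an OpenAI-style chat completion,
// preferring tool_calls and falling back to the plain message content.
package taskapp

import "encoding/json"

type chatCompletion struct {
    Choices []struct {
        Message struct {
            Content   string `json:"content"`
            ToolCalls []struct {
                Function struct {
                    Arguments string `json:"arguments"` // JSON string, e.g. {"label":"change_pin"}
                } `json:"function"`
            } `json:"tool_calls"`
        } `json:"message"`
    } `json:"choices"`
}

func extractPrediction(body []byte) string {
    var resp chatCompletion
    if err := json.Unmarshal(body, &resp); err != nil || len(resp.Choices) == 0 {
        return ""
    }
    msg := resp.Choices[0].Message
    if len(msg.ToolCalls) > 0 {
        // Assumption: the classify tool takes a single "label" argument.
        var args struct {
            Label string `json:"label"`
        }
        if json.Unmarshal([]byte(msg.ToolCalls[0].Function.Arguments), &args) == nil && args.Label != "" {
            return args.Label
        }
    }
    return msg.Content
}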

Examples Repository

All polyglot examples are available in the synth-ai repository under examples/polyglot/.

Next Steps

  • Try running one of the examples locally
  • Customize the dataset for your use case
  • Deploy to production using Cloudflare Tunnel
  • Start optimization via the API
  • Check out the Deployed Task App Walkthrough for a step-by-step guide