The trace_event decorators allow you to track and analyze interactions with AI models in your software. They come in two variants:

  • trace_event_sync for synchronous functions
  • trace_event_async for asynchronous functions

Requirements

The decorated method must belong to a class that defines:

  • system_name: A string identifying the type of agent (e.g., “Math_Agent”, “Translation_Agent”)
  • system_instance_id: A UUID string identifying a specific instance of the agent

Note: The system_instance_id is particularly important when running multiple instances in parallel, as it helps differentiate between records from different runs. The system_name allows you to group and analyze data across all instances of the same type of agent.
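A minimal sketch of a class satisfying these requirements (the class name and agent type here are illustrative, not part of the SDK):

```python
import uuid

class TranslationAgent:
    """Illustrative class exposing the attributes the trace_event decorators require."""

    def __init__(self):
        # A fresh UUID per instance distinguishes records from parallel runs.
        self.system_instance_id = str(uuid.uuid4())
        # A stable name groups records across all instances of this agent type.
        self.system_name = "Translation_Agent"
```

Methods defined on such a class can then be decorated with trace_event_sync or trace_event_async.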

Usage

The decorators can be used with either the Synth OpenAI or Anthropic client, or in combination with the track_messages trackers.

Usage with Anthropic

from synth_sdk import Anthropic
from synth_sdk import trace_event_sync
import os
import uuid

class MathAgent:
    def __init__(self):
        self.system_instance_id = str(uuid.uuid4())
        self.system_name = "Math_Agent"
        # Read the API key from the environment rather than hard-coding it
        self.client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

    @trace_event_sync(
        event_type="math_solution",
    )
    def solve_math_problem(self, problem: str) -> str:
        response = self.client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=1000,
            system="You are a math problem solver. Provide step-by-step solutions.",
            messages=[
                {"role": "user", "content": problem},
            ],
            temperature=0,
        )
        return response.content[0].text

The traced events can later be analyzed and uploaded using the Synth SDK’s dataset and upload functionality.

Usage with track_messages

You can combine the trace_event decorators with the track_messages trackers for more detailed tracking of model interactions. Here’s an example using the async variant:

from synth_sdk.tracing.decorators import trace_event_async
from synth_sdk.tracing.trackers import track_messages_async
from anthropic import AsyncAnthropic
import uuid

class Agent:
    def __init__(self):
        self.system_instance_id = str(uuid.uuid4())
        self.system_name = "Example_System"
        self.client = AsyncAnthropic()

    @trace_event_async(
        event_type="solve_problem",
    )
    async def generate_response(self):
        # Define messages
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What's 2+2?"}
        ]

        # Make API call
        response = await self.client.messages.create(
            model="claude-3-haiku-20240307",
            system=messages[0]["content"],
            messages=messages[1:],
            temperature=0.7,
            max_tokens=1000,
        )

        assistant_message = [{"role": "assistant", "content": response.content[0].text}]

        # Track the messages explicitly
        await track_messages_async(
            input_messages=messages,
            output_messages=assistant_message,
            model_name="claude-3-haiku-20240307",
            model_params={"temperature": 0.7, "max_tokens": 1000},
            finetune=False,
        )

        return response.content[0].text

This approach allows you to:

  1. Track the overall function execution with trace_event_async
  2. Explicitly track the messages and model parameters with track_messages_async
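The example above peels the system prompt off the front of the message list before calling the client (system=messages[0]["content"], messages=messages[1:]), because Anthropic's Messages API takes the system prompt as a separate parameter rather than as a message with role "system". That pattern can be sketched as a small helper (split_system is an illustrative name, not part of the SDK):

```python
def split_system(messages):
    """Separate a leading system message from the rest of a conversation.

    Returns (system_content, remaining_messages); system_content is None
    when the conversation has no leading system message.
    """
    if messages and messages[0]["role"] == "system":
        return messages[0]["content"], messages[1:]
    return None, messages
```

The results can then be passed as the `system` and `messages` arguments of `client.messages.create`, omitting `system` when it is None.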

Parameters

  • event_type (str, required): A descriptive name for the type of AI interaction being traced. This should reflect what is happening when the AI models are called (e.g., “math_solution”, “content_generation”, “translation”).


Event management options

The trace_event decorators also accept event-management keyword arguments. This variant of the async example passes manage_event="create_and_end" and increment_partition=True to the decorator:

from synth_sdk.tracing.decorators import trace_event_async
from synth_sdk.tracing.trackers import track_messages_async
from anthropic import AsyncAnthropic

class Agent:
    def __init__(self, system_instance_id: str):
        self.system_instance_id = system_instance_id
        self.system_name = "example_system"
        self.client = AsyncAnthropic()

    @trace_event_async(
        event_type="lm_call",
        manage_event="create_and_end",
        increment_partition=True,
    )
    async def generate_response(self):
        # Define messages
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What's 2+2?"}
        ]

        # Make API call
        response = await self.client.messages.create(
            model="claude-3-haiku-20240307",
            system=messages[0]["content"],
            messages=messages[1:],
            temperature=0.7,
            max_tokens=1000,
        )

        assistant_message = [{"role": "assistant", "content": response.content[0].text}]

        # Track the messages explicitly
        await track_messages_async(
            input_messages=messages,
            output_messages=assistant_message,
            model_name="claude-3-haiku-20240307",
            model_params={"temperature": 0.7, "max_tokens": 1000},
            finetune=False,
        )

        return response.content[0].text
