The track_messages functions let you record message exchanges with language models. They must be called inside methods decorated with a trace_event decorator.

They come in two variants:

  • track_messages_sync for synchronous operations
  • track_messages_async for asynchronous operations

Requirements

  • Must be used within methods decorated with trace_event
  • The containing class must define:
    • system_name: A string identifying the type of agent (e.g., “Math_Agent”, “Translation_Agent”)
    • system_instance_id: A UUID string identifying a specific instance of the agent
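A minimal class skeleton satisfying these requirements might look like the following (the class name and attribute values are illustrative, not prescribed by the SDK):

```python
import uuid

class TranslationAgent:
    """Illustrative agent exposing the attributes track_messages expects."""

    def __init__(self):
        # Identifies the *type* of agent.
        self.system_name = "Translation_Agent"
        # Identifies this specific running instance as a UUID string.
        self.system_instance_id = str(uuid.uuid4())

agent = TranslationAgent()
print(agent.system_name)              # Translation_Agent
print(len(agent.system_instance_id))  # 36 — canonical UUID string length
```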

Parameters

  • input_messages (List[Dict], required): List of input messages in the conversation
  • output_messages (List[Dict], required): List of output/response messages from the model
  • model_name (str, required): Name of the language model used
  • model_params (Dict, optional): Parameters used for the model call (e.g., temperature, max_tokens)
  • finetune (bool, optional): Set to False in the example below

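The message lists use the familiar chat format of role/content dicts. Here is a sketch of payloads matching these parameter types; the model name and parameter values are illustrative:

```python
# Illustrative payloads for a track_messages call.
input_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's 2+2?"},
]
output_messages = [
    {"role": "assistant", "content": "2 + 2 = 4."},
]
model_name = "claude-3-haiku-20240307"
model_params = {"temperature": 0.7, "max_tokens": 1000}
```

These are the same shapes used in the full example below.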
Usage Example

Here’s an async example showing the required usage with trace_event_async:

from synth_sdk import trace_event_async
from synth_sdk import track_messages_async
from anthropic import AsyncAnthropic
import uuid

class Agent:
    def __init__(self):
        self.system_instance_id = str(uuid.uuid4())
        self.system_name = "Example_System"
        self.client = AsyncAnthropic()

    @trace_event_async(
        event_type="lm_call",
    )
    async def generate_response(self):
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What's 2+2?"}
        ]

        response = await self.client.messages.create(
            model="claude-3-haiku-20240307",
            system=messages[0]["content"],
            messages=messages[1:],
            temperature=0.7,
            max_tokens=1000,
        )

        assistant_message = [{"role": "assistant", "content": response.content[0].text}]

        # Must be called within a trace_event decorated method
        await track_messages_async(
            input_messages=messages,
            output_messages=assistant_message,
            model_name="claude-3-haiku-20240307",
            model_params={"temperature": 0.7, "max_tokens": 1000},
            finetune=False,
        )

        return response.content[0].text