
synth_ai.data.llm_calls

Unified abstractions for recording LLM API calls (inputs and results). These records normalize different provider API shapes (Chat Completions, Completions, Responses) into a single schema suitable for storage and analysis, and are intended to be attached to an LMCAISEvent as a list of call records. Integration proposal:
  • Update LMCAISEvent to store call_records: list[LLMCallRecord] and remove per-call fields such as model_name, provider, and token counts from the event itself; those belong on each LLMCallRecord. Aggregates (e.g., total_tokens across records, cost_usd) can remain on LMCAISEvent and be derived from the records, as in the sketch below.
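
A minimal sketch of that proposal, assuming Python dataclasses. The stub fields on LLMUsage and LLMCallRecord stand in for the classes documented below, and any name not stated above (e.g., usage) is a hypothetical placeholder:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMUsage:  # stub; see LLMUsage below
    total_tokens: Optional[int] = None

@dataclass
class LLMCallRecord:  # stub; see LLMCallRecord below
    model_name: str
    provider: str
    usage: Optional[LLMUsage] = None

@dataclass
class LMCAISEvent:
    # Per-call fields (model_name, provider, token counts) live on the records.
    call_records: list[LLMCallRecord] = field(default_factory=list)
    cost_usd: Optional[float] = None  # aggregate; can be derived from records

    @property
    def total_tokens(self) -> int:
        # Derive the aggregate from the attached records; missing usage counts as 0.
        return sum(
            r.usage.total_tokens or 0
            for r in self.call_records
            if r.usage is not None
        )
```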
Design goals:
  • Capture both input and output payloads in a provider-agnostic way.
  • Preserve provider-specific request params for auditability.
  • Represent tool calls (requested by the model) and tool results distinctly.
  • Support streaming (optionally via chunks), but emphasize a final collapsed LLMCallRecord for most analytics and fine-tuning data extraction.

Classes

LLMUsage

Token usage reported by the provider. All fields are optional because some providers or stages may omit them.
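
A plausible shape, as a sketch: the page only states that all fields are optional, so these particular token fields are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMUsage:
    # All fields optional: some providers or stages omit usage entirely.
    input_tokens: Optional[int] = None   # assumed field name
    output_tokens: Optional[int] = None  # assumed field name
    total_tokens: Optional[int] = None   # assumed field name
```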

LLMRequestParams

Provider request parameters. Store provider-agnostic params explicitly and keep a raw_params map for anything provider-specific (top_k, frequency_penalty, etc.).
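
For example, a sketch with a couple of hypothetical explicit params; only raw_params (and the top_k / frequency_penalty examples) come from the description above.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class LLMRequestParams:
    # Provider-agnostic params stored explicitly (hypothetical selection).
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    # Anything provider-specific goes in raw_params, e.g.:
    # {"top_k": 40, "frequency_penalty": 0.5}
    raw_params: dict[str, Any] = field(default_factory=dict)
```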

LLMContentPart

A content item within a message (text, tool-structured JSON, image, etc.).

LLMMessage

A message in a chat-style exchange. For Completions-style calls, role="user" with one text part is typical for input, and role="assistant" for output. The Responses API can emit multiple parts; use parts for generality.
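
A sketch of both classes together, showing the typical Completions-style input (one user message with a single text part); the type and text attributes on LLMContentPart are assumed names.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMContentPart:
    type: str                    # e.g. "text", "image", tool-structured JSON
    text: Optional[str] = None   # populated for text parts

@dataclass
class LLMMessage:
    role: str                                     # "user", "assistant", ...
    parts: list[LLMContentPart] = field(default_factory=list)

# Completions-style input: role="user" with a single text part.
prompt = LLMMessage(role="user", parts=[LLMContentPart(type="text", text="Hello")])
```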

ToolCallSpec

A tool/function call requested by the model (not yet executed).

ToolCallResult

The result of executing a tool/function call outside the model. This is distinct from the model’s own output. Attach execution details for auditability.
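
A sketch of the distinction: a ToolCallSpec is only a request, while a ToolCallResult records what actually happened when the call was executed. Field names other than the class names are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ToolCallSpec:
    # A call the model requested; nothing has executed yet.
    call_id: str
    name: str
    arguments_json: str  # raw JSON arguments as emitted by the model

@dataclass
class ToolCallResult:
    # The outcome of executing the call outside the model.
    call_id: str                         # links back to the ToolCallSpec
    output: Any = None
    error: Optional[str] = None
    duration_ms: Optional[float] = None  # execution detail for auditability
```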

LLMChunk

Optional streaming chunk representation (for Responses/Chat streaming).
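
A sketch of how chunks might collapse into a final text, matching the design goal of emphasizing a collapsed LLMCallRecord over raw chunks; the index and delta_text fields are assumed names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMChunk:
    index: int                        # position within the stream
    delta_text: Optional[str] = None  # incremental text, if any

def collapse_chunks(chunks: list[LLMChunk]) -> str:
    # Analytics and fine-tuning extraction should read the collapsed text,
    # not the individual chunks.
    return "".join(c.delta_text or "" for c in sorted(chunks, key=lambda c: c.index))
```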

LLMCallRecord

Normalized record of a single LLM API call. Fields capture both the request (input) and the response (output), with optional tool calls and tool results as emitted by the model or executed through the agent runtime.
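
Putting the pieces together, a sketch of a record for a single Chat Completions call, reusing the dataclasses sketched above; every field name here beyond the class names is an assumption.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMCallRecord:
    # Assumes the LLMMessage, LLMRequestParams, LLMUsage, ToolCallSpec,
    # and ToolCallResult sketches above are in scope.
    api_type: str                 # "chat_completions" | "completions" | "responses"
    model_name: str
    provider: str
    request_params: LLMRequestParams
    input_messages: list[LLMMessage] = field(default_factory=list)
    output_messages: list[LLMMessage] = field(default_factory=list)
    tool_calls: list[ToolCallSpec] = field(default_factory=list)      # requested by model
    tool_results: list[ToolCallResult] = field(default_factory=list)  # executed by runtime
    usage: Optional[LLMUsage] = None
```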