Introduction
Composo’s tracing SDK enables you to capture and evaluate LLM calls from your agent applications in real time. It currently supports DIY agents built on OpenAI, Anthropic, and Google GenAI, with support for LangChain/LangGraph and other SDKs to come.

Why Tracing Matters
Many agent frameworks abstract away the underlying LLM calls, making it difficult to understand what’s happening under the hood and to evaluate performance effectively. Many evaluation platforms only let you send traces to a remote system and wait to view results later. Composo gives you the best of both worlds: trace and evaluate immediately, or view your traces seamlessly in our platform or in your own observability tooling, spreadsheets, or CI/CD. By instrumenting your LLM calls and marking agent boundaries, you can evaluate performance in real time and act on the results immediately, adjusting behavior before it reaches your users.

Key Features
- Mark Agent Boundaries: Use the AgentTracer context manager or the @agent_tracer decorator to define which LLM calls belong to which agent
- Hierarchical Tracing: Support for nested agents to model complex multi-agent architectures
- Independent Evaluation: Each agent’s performance is evaluated separately with average, min, max and standard-deviation statistics reported per agent
- Flexible Evaluation: Get evaluation results instantly in your code, or view traces in the Composo platform for deeper analysis (or through seamless sync with any observability platform like Grafana, Sentry, Langfuse, LangSmith, Braintrust)
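To make the per-agent statistics concrete, here is a minimal stdlib-only sketch of how such summaries could be computed. The agent names and scores below are invented for illustration; the real SDK collects and computes these for you.

```python
import statistics

# Hypothetical per-agent evaluation scores, as a tracing SDK might collect them.
scores_by_agent = {
    "researcher": [0.82, 0.91, 0.77],
    "writer": [0.88, 0.84, 0.90, 0.86],
}

def summarize(scores_by_agent):
    """Report average, min, max, and standard deviation per agent."""
    return {
        agent: {
            "avg": statistics.mean(scores),
            "min": min(scores),
            "max": max(scores),
            "stdev": statistics.stdev(scores) if len(scores) > 1 else 0.0,
        }
        for agent, scores in scores_by_agent.items()
    }

summary = summarize(scores_by_agent)
```

Because each agent is summarized independently, a weak agent's scores can't hide behind a strong one's average.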
Framework Support
- Currently Supported:
- Agents built on OpenAI LLMs
- Agents built on Anthropic LLMs
- Agents built on Google GenAI LLMs
- Coming Soon: LangChain, OpenAI Agents, and other popular frameworks
Quickstart
This guide walks you through adding tracing to your agent application in 3 steps. We’ll start with a simple multi-agent application and add tracing incrementally.

Starting Code
Here’s a simple multi-agent application we want to trace:

Step 1: Install and Initialize
Install the Composo SDK and initialize tracing for your LLM provider (OpenAI or Anthropic).

Step 2: Mark Your Agent Boundaries
Wrap your agent logic with AgentTracer or @agent_tracer to mark boundaries.
For the function-based agent, add the decorator:
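The real decorator ships with the Composo SDK; purely to illustrate the pattern, here is a toy stand-in that attributes every call made inside the decorated function to that agent. All names below, including fake_llm_call, are invented for this sketch and are not Composo’s API.

```python
import functools

current_agent = []   # stack of active agent names
traced_calls = []    # (agent, prompt) records

def agent_tracer(name):
    """Toy stand-in: attribute LLM calls inside the function to `name`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            current_agent.append(name)
            try:
                return func(*args, **kwargs)
            finally:
                current_agent.pop()
        return wrapper
    return decorator

def fake_llm_call(prompt):
    # Record which agent made this call, then return a canned response.
    traced_calls.append((current_agent[-1] if current_agent else None, prompt))
    return "response"

@agent_tracer("research_agent")
def research(topic):
    return fake_llm_call(f"Research: {topic}")

research("tracing")
```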
For inline agent logic, use the AgentTracer context manager:
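Here is a similarly minimal, purely illustrative stand-in for the context-manager form, showing how a tracer object from the root tracer can be threaded through nested agents. This is not Composo’s actual API, just a sketch of the shape.

```python
import contextlib

class ToyTracer:
    """Stand-in for the object the root AgentTracer would yield."""
    def __init__(self):
        self.agents = []  # record of every agent entered under this root

@contextlib.contextmanager
def AgentTracer(name, parent=None):
    # The root call creates the tracer; nested calls reuse the parent's.
    tracer = parent if parent is not None else ToyTracer()
    tracer.agents.append(name)
    yield tracer
    # a real SDK would close the agent's span here

with AgentTracer("orchestrator") as tracer:
    with AgentTracer("search_agent", parent=tracer):
        pass  # LLM calls made here would belong to search_agent
```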
The tracer object from the root AgentTracer is needed for evaluation in Step 3.
Step 3: Evaluate Your Trace
Add evaluation after your agents complete:

Complete Example
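Because the real SDK calls are project-specific, here is a self-contained toy version of the full flow, with stand-in names throughout (none of these are Composo’s actual API): agent boundaries, a nested agent, and per-agent evaluation at the end.

```python
import statistics
from contextlib import contextmanager

class ToyTracer:
    """Stand-in tracer: collects scores per agent for later evaluation."""
    def __init__(self):
        self.scores = {}   # agent name -> list of scores
        self.stack = []    # currently active agents

    def record(self, score):
        self.scores.setdefault(self.stack[-1], []).append(score)

    def evaluate(self):
        return {
            agent: {"avg": statistics.mean(s), "min": min(s), "max": max(s)}
            for agent, s in self.scores.items()
        }

@contextmanager
def agent(tracer, name):
    tracer.stack.append(name)
    try:
        yield
    finally:
        tracer.stack.pop()

tracer = ToyTracer()
with agent(tracer, "planner"):
    tracer.record(0.9)            # pretend this scores an LLM call
    with agent(tracer, "coder"):  # nested agent
        tracer.record(0.8)
        tracer.record(0.7)

results = tracer.evaluate()
```

Note how the nested agent's scores stay separate from its parent's: that separation is what lets each agent be evaluated independently.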
You can also instrument multiple providers simultaneously:
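Conceptually, instrumenting a provider means wrapping its client’s call method so every LLM call is recorded, and the same wrapper can be applied to several clients at once. A toy sketch with fake clients (not the real provider SDKs or Composo’s instrumentation):

```python
calls = []  # (client class, prompt) records shared by all instrumented clients

class FakeOpenAIClient:
    def complete(self, prompt):
        return f"openai: {prompt}"

class FakeAnthropicClient:
    def complete(self, prompt):
        return f"anthropic: {prompt}"

def instrument(client):
    """Wrap client.complete so each call is recorded before returning."""
    original = client.complete
    def traced(prompt):
        result = original(prompt)
        calls.append((type(client).__name__, prompt))
        return result
    client.complete = traced
    return client

openai_client = instrument(FakeOpenAIClient())
anthropic_client = instrument(FakeAnthropicClient())
openai_client.complete("hi")
anthropic_client.complete("hi")
```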
Next Steps
- Read our Agent Evaluation Blog - Deep dive into evaluation strategies
- Explore the Criteria Library - Find more pre-built criteria