LangSmith gives you end-to-end visibility into your LLM application by capturing traces: a complete record of every step that ran during a request, from the inputs passed in to the final output returned. In this quickstart, you will add tracing to an AI assistant and view the results in LangSmith.
If you’re building with LangChain or LangGraph, you can enable LangSmith tracing with a single environment variable. Refer to trace with LangChain or trace with LangGraph.

Prerequisites

Before you begin, make sure you have a LangSmith API key and an OpenAI API key. This example uses OpenAI as the LLM provider; you can adapt it for your own provider.

1. Set up your environment

  1. Create a project directory, install the dependencies, and configure the required environment variables:
    mkdir ls-quickstart && cd ls-quickstart
    python -m venv .venv && source .venv/bin/activate
    pip install -U langsmith openai
    
  2. Export your environment variables in your shell:
    export LANGSMITH_TRACING=true
    export LANGSMITH_API_KEY="<your-langsmith-api-key>"
    export OPENAI_API_KEY="<your-openai-api-key>"
    
    To send traces to a specific project, use the LANGSMITH_PROJECT environment variable. If this is not set, LangSmith will create a default tracing project automatically on trace ingestion.
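    For example, to route this quickstart's traces to a dedicated project (the project name here is arbitrary):

    ```shell
    # Traces will appear under this project instead of the auto-created default
    export LANGSMITH_PROJECT="ls-quickstart"
    ```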
    If you are using Anthropic, use the Anthropic wrapper. If you are using Google Gemini, use the Gemini wrapper. For other providers, use the @traceable decorator to trace calls manually.
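    The manual path for other providers can be sketched like this. This is a minimal illustration: `call_my_provider` is a hypothetical stand-in for your provider's SDK call, not a real API. When `LANGSMITH_TRACING` is enabled, the decorated call is logged as an LLM span; otherwise the function simply runs as normal.

    ```python
    from langsmith import traceable

    @traceable(run_type="llm")  # log this call as an LLM span in LangSmith
    def call_my_provider(prompt: str) -> str:
        # Hypothetical stand-in for your provider's SDK call
        return f"response to: {prompt}"

    print(call_my_provider("Hello"))  # → response to: Hello
    ```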

2. Build the app

The following app uses two LangSmith tools to add tracing:
  • wrap_openai: wraps the OpenAI client so every LLM call is automatically logged as a nested span.
  • @traceable: wraps a function so its inputs, outputs, and any nested spans appear as a single trace in LangSmith.
The assistant function calls a tool (get_context) to retrieve relevant context, then passes that context to the model. Using @traceable on both functions captures the full pipeline in one trace, with the tool call and LLM call as nested spans. Create a file called app.py with the following code:
from openai import OpenAI
from langsmith.wrappers import wrap_openai
from langsmith import traceable

client = wrap_openai(OpenAI())  # log every OpenAI call automatically

@traceable(run_type="tool")  # trace this as a tool span
def get_context(question: str) -> str:
    # In a real app, this would query a knowledge base or vector store
    return "LangSmith traces are stored for 14 days on the Developer plan."

@traceable  # capture the full pipeline as a single trace
def assistant(question: str) -> str:
    context = get_context(question)
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {
                "role": "system",
                "content": f"Answer using the context below.\n\nContext: {context}",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assistant("How long are LangSmith traces stored?"))

3. Run the app

python app.py

4. View your trace

In the LangSmith UI, go to Tracing and select your default project. Click the assistant row to open the Trace details panel, which shows the assistant function with the get_context tool call and the OpenAI call nested inside it.

[Screenshot: LangSmith UI showing a trace with an outer application span and a nested LLM call span.]

The outer span captures your assistant function's inputs and outputs. The nested get_context span records the tool call, and the ChatOpenAI span records the exact prompt sent to the model and the response returned.

Next steps

After logging traces, use Polly to analyze them and get AI-powered insights into your application’s performance.