LangChain defines its core component interfaces in langchain-core. Examples include chat models, tools, retrievers, and more.
Your integration package will typically implement a subclass of at least one of these components. The sections below describe each in detail.
- Chat Models
- Embeddings
- Tools
- Middleware
- Checkpointers
- Sandboxes
Chat models are subclasses of the BaseChatModel class. They implement methods for generating chat completions, handling message formatting, and managing model parameters.

The chat model integration guide is currently WIP. In the meantime, read the chat model conceptual guide for details on how LangChain chat models function. You may also refer to existing integrations in the LangChain repo.
Embedding models are subclasses of the Embeddings class.

The embedding model integration guide is currently WIP. In the meantime, read the embedding model conceptual guide for details on how LangChain embedding models function.
Tools are used in two main ways:
- To define an “input schema” or “args schema” that is passed to a chat model’s tool-calling feature along with a text request, so that the chat model can generate a “tool call”, i.e. the parameters to call the tool with.
- To take a “tool call” generated as above, perform some action, and return a response that can be passed back to the chat model as a ToolMessage.
Tools subclass the BaseTool base class. This interface has three properties and two methods that should be implemented in a subclass.

The tools integration guide is currently WIP. In the meantime, read the tools conceptual guide for details on how LangChain tools function.
Middleware lets you customize agent behavior by hooking into model calls, tool calls, and agent lifecycle events. Middleware classes subclass the AgentMiddleware base class. Read the custom middleware guide to understand hooks, state updates, and middleware patterns before building an integration.

Middleware integrations typically fall into two categories:

| Type | Description | Examples |
|---|---|---|
| Provider-specific | Leverages a provider’s unique capabilities | Prompt caching, native tool execution, content moderation |
| Cross-provider | Works with any model or tool | Rate limiting, PII detection, logging, guardrails |

Provider-specific middleware lives in the provider’s integration package (for example, langchain-anthropic). Cross-provider middleware can be published as a standalone package.

You can also use these existing middleware integrations as reference:

- OpenAI content moderation: a single middleware with configuration options and exit behaviors.
- Anthropic middleware: multiple middleware classes for prompt caching, tools, memory, and file search.
- AWS prompt caching: provider-specific prompt caching with model behavior tables.
- Custom middleware guide: full reference for hooks, state updates, and patterns.
Checkpointers enable persistence in LangGraph, allowing agents to save and resume state across interactions.

See existing checkpointer integrations in the LangGraph repo for implementation examples.
Sandbox integrations enable Deep Agents to run code in isolated environments. Implement the SandboxBackendProtocol from Deep Agents. This protocol includes execute(), async variants, and the filesystem tool methods such as ls, read, write, edit, glob, and grep.

In practice, if your sandbox environment can run shell commands and has python3 available, you should usually subclass BaseSandbox. BaseSandbox provides the filesystem operations through python3, so you mainly need to implement execute(), upload_files(), download_files(), and id.

Example BaseSandbox scaffold:
```python
from __future__ import annotations

from deepagents.backends.protocol import (
    ExecuteResponse,
    FileDownloadResponse,
    FileUploadResponse,
)
from deepagents.backends.sandbox import BaseSandbox


class MySandbox(BaseSandbox):
    def __init__(self, client: MySandboxSdkClient) -> None:
        self._client = client

    @property
    def id(self) -> str:
        return self._client.sandbox_id

    def execute(
        self,
        command: str,
        *,
        timeout: int | None = None,
    ) -> ExecuteResponse:
        # Execute `command` in your sandbox and map the provider response
        # into ExecuteResponse.
        result = self._client.run(command=command, timeout=timeout)
        output = result.stdout or ""
        if result.stderr:
            output += f"\n<stderr>{result.stderr}</stderr>"
        return ExecuteResponse(
            output=output,
            exit_code=result.exit_code,
            truncated=False,
        )

    def upload_files(
        self,
        files: list[tuple[str, bytes]],
    ) -> list[FileUploadResponse]:
        # Validate paths, batch requests where possible, and map provider
        # results back into FileUploadResponse objects in input order.
        # Only catch and normalize errors that an LLM can plausibly retry
        # or fix, such as invalid_path or file_not_found.
        return self._client.upload_files(files)

    def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
        # Validate paths, batch requests where possible, and map provider
        # results back into FileDownloadResponse objects in input order.
        # Only catch and normalize errors that an LLM can plausibly retry
        # or fix, such as invalid_path or file_not_found.
        return self._client.download_files(paths)

    async def aexecute(
        self,
        command: str,
        *,
        timeout: int | None = None,
    ) -> ExecuteResponse:
        ...

    async def aupload_files(
        self,
        files: list[tuple[str, bytes]],
    ) -> list[FileUploadResponse]:
        ...

    async def adownload_files(
        self,
        paths: list[str],
    ) -> list[FileDownloadResponse]:
        ...
```
Test your integration
Validate your integration with the sandbox standard test suite. The Python suite uses SandboxIntegrationTests from langchain_tests.integration_tests; subclass it and provide a sandbox fixture that yields a clean SandboxBackendProtocol instance.

Example sandbox standard test setup:
```python
from __future__ import annotations

from collections.abc import Iterator

import pytest
from deepagents.backends.protocol import SandboxBackendProtocol
from langchain_tests.integration_tests import SandboxIntegrationTests

from langchain_myprovider import MySandbox
from myprovider_sdk import MySandboxSdkClient


class TestMySandboxStandard(SandboxIntegrationTests):
    @pytest.fixture(scope="class")
    def sandbox(self) -> Iterator[SandboxBackendProtocol]:
        client = MySandboxSdkClient()
        backend = MySandbox(client=client)
        try:
            yield backend
        finally:
            # Replace this with your provider's cleanup logic.
            client.delete_sandbox(backend.id)
```
Put this in a file such as tests/integration_tests/test_sandbox.py. The standard suite will handle the actual filesystem and command-execution assertions for you.

Reference implementation: See the Daytona partner integration, which subclasses BaseSandbox and implements execute(), upload_files(), download_files(), and id.

