LangChain: Framework for building LLM applications
Modular framework for chaining LLMs with external data.
LangChain is a Python framework for building applications that integrate large language models with external data sources and computational tools. It implements a modular architecture with standardized interfaces for models, embeddings, vector stores, and retrievers, enabling different component implementations to be substituted without modifying core application logic. The framework uses a chain-based composition pattern where operations are linked together sequentially or conditionally to create complex workflows such as retrieval-augmented generation systems and multi-step reasoning agents. Components communicate through a common data structure that preserves context and intermediate results as information flows through the processing pipeline. This abstraction layer allows developers to combine various language models, data retrieval mechanisms, and external APIs into unified applications while maintaining separation between business logic and infrastructure concerns.
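The chain-based composition pattern described above can be sketched in plain Python. This is a conceptual illustration only, not actual LangChain code: the Chain class and step functions are invented names, and the shared dict stands in for the framework's common data structure that carries context and intermediate results through the pipeline.

```python
class Chain:
    """Runs a sequence of steps, each reading and extending a shared context."""

    def __init__(self, *steps):
        self.steps = steps

    def invoke(self, context: dict) -> dict:
        for step in self.steps:
            # Each step returns new keys that are merged into the context,
            # preserving intermediate results for downstream steps.
            context = {**context, **step(context)}
        return context


def retrieve(ctx):
    # Stand-in for a retriever component in a RAG pipeline.
    return {"documents": [f"doc about {ctx['question']}"]}


def generate(ctx):
    # Stand-in for an LLM call that consumes the retrieved context.
    return {"answer": f"answer grounded in {len(ctx['documents'])} document(s)"}


rag = Chain(retrieve, generate)
result = rag.invoke({"question": "vector stores"})
# result now holds the original question, the retrieved documents,
# and the generated answer side by side.
```

Because every component reads from and writes to the same structure, a retriever or model can be swapped out without the neighboring steps noticing, which is the separation between business logic and infrastructure the paragraph above describes.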
Provider-Agnostic Interfaces
Standardized abstractions let you swap LLM providers, vector stores, and embeddings without rewriting application logic. Switch between OpenAI, Anthropic, or local models by changing configuration, not code.
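A minimal sketch of the provider-agnostic idea, using hypothetical FakeOpenAIModel and FakeLocalModel classes rather than LangChain's real integrations: both satisfy one interface, so selecting a provider becomes a configuration lookup.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Common interface every provider implementation must satisfy."""

    def invoke(self, prompt: str) -> str: ...


class FakeOpenAIModel:
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class FakeLocalModel:
    def invoke(self, prompt: str) -> str:
        return f"[local] {prompt}"


PROVIDERS = {"openai": FakeOpenAIModel, "local": FakeLocalModel}


def build_model(provider: str) -> ChatModel:
    # Swapping providers is a configuration change, not a code change:
    # callers only ever see the ChatModel interface.
    return PROVIDERS[provider]()
```

In real LangChain the same effect comes from its standardized chat-model abstraction: application code targets the shared interface while the concrete provider class is chosen at configuration time.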
Pre-Built Connector Ecosystem
Includes native integrations for 100+ model providers, vector databases, and external APIs. Eliminates boilerplate for common data sources and reduces time spent on integration code.
Composable Chain Architecture
Links LLM calls, tools, and data retrieval into reusable chains with typed inputs and outputs. Build complex reasoning pipelines by composing simple, testable components.
from langgraph.graph import StateGraph, MessagesState, START, END

def chatbot(state: MessagesState):
    # Produce a new AI message to append to the conversation state.
    return {"messages": [{"role": "ai", "content": "Hello! How can I help?"}]}

# Wire a single-node graph: START -> chatbot -> END.
graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()
response = app.invoke({"messages": [{"role": "user", "content": "Hi"}]})

Adds support for Anthropic's code_execution_20250825 tool; no breaking changes or new requirements specified.
- Use the code_execution_20250825 tool type to enable Claude's native code execution capabilities in your chains.
- Release notes do not specify breaking changes, dependency updates, or migration steps beyond the new tool support.
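Based on the note above, one plausible shape for enabling the tool is a tool-spec dict passed to a chat model. Only the tool type comes from the release notes; the "name" field, the bind_tools call, and the model name in the comment are assumptions, shown here as a hedged sketch rather than a confirmed API.

```python
# Tool spec for Claude's native code execution, per the release note.
# The "name" field is an assumption, not confirmed by the notes.
code_execution_tool = {
    "type": "code_execution_20250825",
    "name": "code_execution",
}

# Hypothetical usage (requires langchain-anthropic and an API key);
# the model identifier below is illustrative:
# llm = ChatAnthropic(model="claude-...").bind_tools([code_execution_tool])
```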
Release notes do not specify breaking changes, new requirements, or functional updates beyond internal style cleanup.
- Review internal code style changes that may affect custom extensions or forks of the model-profiles package.
- No action required for standard users; this release contains only maintenance and code organization improvements.
Maintenance release reverting a SystemMessage feature and raising the default recursion limit; no breaking changes specified.
- Reverted SystemMessage support in create_agent; avoid relying on this capability if upgrading from 1.0.4.
- Default recursion limit increased; check logs if deep call stacks previously failed.
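The recursion limit mentioned above guards against unbounded graph loops by capping the number of execution steps. A minimal pure-Python sketch of the mechanism, not LangGraph's actual internals; the function names, error class, and default of 25 are all illustrative assumptions:

```python
class RecursionLimitError(RuntimeError):
    """Raised when a graph run exceeds its step budget."""


def run_graph(step, state, recursion_limit=25):
    # Each iteration is one "super-step"; the step function returns
    # the next state and whether the graph has reached its end node.
    for _ in range(recursion_limit):
        state, done = step(state)
        if done:
            return state
    raise RecursionLimitError(f"limit of {recursion_limit} steps exceeded")


# A toy node that increments a counter until it reaches 5.
count_up = lambda s: (s + 1, s + 1 >= 5)
print(run_graph(count_up, 0))  # 5
```

Raising the default limit, as this release does, simply gives deep call stacks more iterations before the error fires, which is why previously failing runs may now succeed.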
Related Repositories
Discover similar tools and frameworks used by developers
streamlit
Python framework for reactive data web applications.
stablediffusion
Text-to-image diffusion in compressed latent space.
dinov2
PyTorch vision transformers pretrained on 142M unlabeled images.
opencv
Cross-platform C++ library for real-time computer vision algorithms.
mmdetection
Modular PyTorch framework for object detection research and deployment.