
LangChain: Framework for building LLM applications

Modular framework for chaining LLMs with external data.

Live rankings (06:51 AM, steady):

  • Overall rank: #25 (30-day change: 2)
  • AI & ML rank: #15 (30-day change: 4)
  • Stars: 123.8K (+233 over 7 days)
  • Forks: 20.4K (+43 over 7 days)
  • Downloads: 1.2M

Learn more about langchain

LangChain is a Python framework for building applications that integrate large language models with external data sources and computational tools. It implements a modular architecture with standardized interfaces for models, embeddings, vector stores, and retrievers, enabling different component implementations to be substituted without modifying core application logic. The framework uses a chain-based composition pattern where operations are linked together sequentially or conditionally to create complex workflows such as retrieval-augmented generation systems and multi-step reasoning agents. Components communicate through a common data structure that preserves context and intermediate results as information flows through the processing pipeline. This abstraction layer allows developers to combine various language models, data retrieval mechanisms, and external APIs into unified applications while maintaining separation between business logic and infrastructure concerns.
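The shared-state idea described above can be sketched in plain Python (this is a conceptual illustration, not LangChain's actual API): each step reads from and writes to a common dict, so context and intermediate results flow through the pipeline.

```python
# Conceptual sketch of chain-based composition with a shared state dict.
# The step functions and run_chain helper are invented for illustration.

def retrieve(state):
    # Stand-in for retrieval: attach documents relevant to the question.
    state["documents"] = [f"doc about {state['question']}"]
    return state

def generate(state):
    # Stand-in for generation: produce an answer grounded in the documents.
    state["answer"] = f"Based on {state['documents'][0]}, here is an answer."
    return state

def run_chain(steps, state):
    # Sequential composition: each step's output state feeds the next step.
    for step in steps:
        state = step(state)
    return state

result = run_chain([retrieve, generate], {"question": "vector stores"})
```

The same pattern generalizes to conditional branching: a step can inspect the state and decide which step runs next.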


1. Provider-Agnostic Interfaces

Standardized abstractions let you swap LLM providers, vector stores, and embeddings without rewriting application logic. Switch between OpenAI, Anthropic, or local models by changing configuration, not code.
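A minimal sketch of that pattern in plain Python (the class names here are invented for illustration, not LangChain's real classes): application code targets one interface, and the concrete provider is selected by configuration.

```python
# Conceptual sketch of a provider-agnostic chat-model interface.
from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def invoke(self, prompt: str) -> str: ...

class OpenAIChat(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[openai] reply to: {prompt}"   # stand-in for a real API call

class LocalChat(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[local] reply to: {prompt}"    # stand-in for a local model

PROVIDERS = {"openai": OpenAIChat, "local": LocalChat}

def answer(provider_name: str, prompt: str) -> str:
    # The provider is chosen by configuration; the calling code is unchanged.
    model: ChatModel = PROVIDERS[provider_name]()
    return model.invoke(prompt)
```

Swapping `"openai"` for `"local"` changes the backend without touching `answer`'s logic.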

2. Pre-Built Connector Ecosystem

Includes native integrations for 100+ model providers, vector databases, and external APIs. Eliminates boilerplate for common data sources and reduces time spent on integration code.
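The boilerplate savings come from a registry-style design, sketched here in plain Python (the registry and loaders below are invented for illustration, not LangChain's loader API): every data source exposes the same entry point, so call sites need no per-source glue code.

```python
# Conceptual sketch of a connector registry keyed by source type.
LOADERS = {}

def register_loader(source_type):
    def wrap(fn):
        LOADERS[source_type] = fn
        return fn
    return wrap

@register_loader("text")
def load_text(path):
    return {"source": path, "content": f"contents of {path}"}  # stand-in

@register_loader("web")
def load_web(url):
    return {"source": url, "content": f"html from {url}"}      # stand-in

def load(source_type, location):
    # Uniform entry point: adding a source means registering one function.
    return LOADERS[source_type](location)

doc = load("web", "https://example.com")
```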

3. Composable Chain Architecture

Links LLM calls, tools, and data retrieval into reusable chains with typed inputs and outputs. Build complex reasoning pipelines by composing simple, testable components.
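Pipe-style composition of that kind can be sketched in a few lines of plain Python; the `Step` class below is invented for illustration and only mirrors the spirit of LangChain's `|` operator for runnables.

```python
# Conceptual sketch of composing small, testable steps with "|".
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Composing two steps yields a new step that runs them in sequence.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

format_prompt = Step(lambda q: f"Q: {q}")
fake_llm = Step(lambda p: p.upper())       # stand-in for a model call
parse = Step(lambda s: {"text": s})

chain = format_prompt | fake_llm | parse
```

Each step stays independently testable, and `chain.invoke("hi")` runs the whole pipeline end to end.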


from langgraph.graph import StateGraph, MessagesState, START, END

# A node is a function that receives the current state and returns a
# partial update; this one always replies with a fixed greeting.
def chatbot(state: MessagesState):
    return {"messages": [{"role": "ai", "content": "Hello! How can I help?"}]}

# Wire the node into a graph: START -> chatbot -> END.
graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()

# Invoke with an initial message list; the reply is appended to "messages".
response = app.invoke({"messages": [{"role": "user", "content": "Hi"}]})


v1.0.3

Adds support for Anthropic's code_execution_20250825 tool; no breaking changes or new requirements specified.

  • Use code_execution_20250825 tool type to enable Claude's native code execution capabilities in your chains.
  • Release notes do not specify breaking changes, dependency updates, or migration steps beyond the new tool support.
v0.0.4

Release notes do not specify breaking changes, new requirements, or functional updates beyond internal style cleanup.

  • Review internal code style changes that may affect custom extensions or forks of the model-profiles package.
  • No action required for standard users; this release contains only maintenance and code organization improvements.
v1.0.5

Maintenance release reverting a SystemMessage feature and raising the default recursion limit; no breaking changes specified.

  • Reverted SystemMessage support in create_agent; avoid relying on this capability if upgrading from 1.0.4.
  • Default recursion limit increased; check logs if deep call stacks previously failed.


