LlamaIndex: Data framework for LLM applications
Connect LLMs to external data via RAG workflows.
LlamaIndex is a Python data framework designed to integrate large language models with custom data sources. It works by ingesting data, structuring it into indexes, and enabling retrieval-augmented generation (RAG) workflows where relevant data is fetched to augment LLM prompts. The framework supports multiple LLM providers, embedding models, and vector databases through a modular integration system. Common applications include building question-answering systems over documents, creating agents that reason over structured data, and implementing multi-agent systems that coordinate across different data sources.
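The retrieve-then-augment loop described above can be illustrated with a plain-Python sketch (a conceptual toy, not the llama_index API): documents are turned into vectors, the closest ones to a query are retrieved, and the retrieved text is prepended to the prompt sent to the LLM.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real pipelines use learned embedding models.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "The 2023 report found revenue grew 12 percent.",
    "Employee headcount stayed flat year over year.",
    "The new office opened in Berlin in March.",
]

# Indexing step: store one embedding per document.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    # Augment the LLM prompt with the retrieved context.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How much did revenue grow?"))
```

LlamaIndex replaces each of these toy pieces with production components: loaders for ingestion, embedding models for vectorization, vector stores for the index, and response synthesizers for the final LLM call.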
Modular Integration System
Core functionality is separated from provider integrations, which span 300+ packages on LlamaHub. Install only the components required for your chosen LLM, embedding model, and vector store instead of bundling all dependencies.
Flexible Dependency Options
Choose between a starter package with common integrations or a core-only package for custom setups. This balances convenience for quick starts with granular control for production deployments.
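As a sketch of the two install paths (package names are from PyPI; integration packages follow the `llama-index-<type>-<provider>` naming pattern):

```shell
# Option 1: starter bundle with a default set of integrations
pip install llama-index

# Option 2: core only, then add the exact integrations your stack needs
pip install llama-index-core
pip install llama-index-llms-openai llama-index-embeddings-huggingface
```

The second path keeps production images smaller and avoids pulling in provider SDKs you never call.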
Multi-Agent Orchestration
Coordinates multiple specialized agents that collaborate on complex tasks. Define agent workflows with delegation, tool use, and memory sharing for sophisticated reasoning pipelines beyond single-agent capabilities.
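The delegation pattern can be illustrated with a plain-Python sketch (a conceptual illustration, not the llama_index agent API): a root orchestrator routes a task to a specialized agent, then hands that agent's output to the next one in the pipeline.

```python
from typing import Callable, Dict

# An "agent" here is just a function from a task string to a result string.
Agent = Callable[[str], str]

def research_agent(task: str) -> str:
    # Hypothetical specialist: gathers background material for the task.
    return f"research notes on '{task}'"

def writer_agent(material: str) -> str:
    # Hypothetical specialist: turns gathered material into a draft.
    return f"draft based on {material}"

AGENTS: Dict[str, Agent] = {"research": research_agent, "write": writer_agent}

def orchestrate(task: str) -> str:
    # Root agent: delegate research first, then pass the result to the writer.
    notes = AGENTS["research"](task)
    return AGENTS["write"](notes)

print(orchestrate("Q3 earnings"))
```

In LlamaIndex the routing, tool calls, and shared memory are handled by the framework's workflow machinery rather than hard-coded as above, but the division of labor between a coordinating agent and specialists is the same.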
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every file in ./documents into Document objects
documents = SimpleDirectoryReader("./documents").load_data()

# Build an in-memory vector index over the documents
index = VectorStoreIndex.from_documents(documents)

# Retrieve relevant chunks and synthesize an answer with the LLM
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings in these documents?")
print(response)

v0.14.13
- feat: add earlystoppingmethod parameter to agent workflows (#20389)
- feat: Add token-based code splitting support to CodeSplitter (#20438)
- Add RayIngestionPipeline integration for distributed data ingestion (#20443)
- Added the multi-modal version of the Condensed Conversation & Context… (#20446)
- Replace ChatMemoryBuffer with Memory (#20458)
v0.14.12
- Feat/async tool spec support (#20338)
- Improve `MockFunctionCallingLLM` (#20356)
- fix(openai): sanitize generic Pydantic model schema names (#20371)
- Element node parser (#20399)
- improve llama dev logging (#20411)
v0.14.10
- feat: add mock function calling llm (#20331)
- test: fix typo 'reponse' to 'response' in variable names (#20329)
- feat: add Airweave tool integration with advanced search features (#20111)
Related Repositories
video2x
ML-powered video upscaling, frame interpolation, and restoration with multiple backend support.
evo2
Foundation model for DNA sequence generation and scoring.
koboldcpp
Self-contained llama.cpp distribution with KoboldAI API for running LLMs on consumer hardware.
DeepSpeed
PyTorch library for training billion-parameter models efficiently.
context7
MCP server delivering version-specific library documentation to LLMs.