
LlamaIndex: Data framework for LLM applications

Connect LLMs to external data via RAG workflows.

LIVE RANKINGS
OVERALL: #95 · AI & ML: #45
STARS: 46.2K (+42 in 7 days)
FORKS: 6.7K (+9 in 7 days)

Learn more about llama_index

LlamaIndex is a Python data framework designed to integrate large language models with custom data sources. It works by ingesting data, structuring it into indexes, and enabling retrieval-augmented generation (RAG) workflows where relevant data is fetched to augment LLM prompts. The framework supports multiple LLM providers, embedding models, and vector databases through a modular integration system. Common applications include building question-answering systems over documents, creating agents that reason over structured data, and implementing multi-agent systems that coordinate across different data sources.


1. Modular Integration System

Core functionality is separated from provider integrations, which are distributed across 300+ packages on LlamaHub. Install only the components required for your chosen LLM, embedding model, and vector store instead of bundling every dependency.
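For example, a minimal sketch of swapping providers through the global Settings object (assumes the llama-index-llms-openai and llama-index-embeddings-huggingface packages are installed; both model names are illustrative):

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Swap the default LLM and embedding model globally; indexes and
# query engines built afterwards pick these up automatically.
Settings.llm = OpenAI(model="gpt-4o-mini")  # illustrative model name
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"  # illustrative model name
)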

2. Flexible Dependency Options

Choose between a starter package with common integrations and a core-only package for custom setups. This balances convenience for quick starts with granular control for production deployments.
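As a rough sketch, the two install paths look like this (package names as published on PyPI; the core-only selection is an illustrative pick, not a prescribed set):

# Option 1 - starter bundle with common defaults (OpenAI LLM/embeddings, file readers):
#   pip install llama-index
#
# Option 2 - core only, adding just the integrations you need:
#   pip install llama-index-core
#   pip install llama-index-llms-ollama llama-index-vector-stores-chroma

# The core import paths are identical in both setups:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader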

3. Multi-Agent Orchestration

Coordinates multiple specialized agents that collaborate on complex tasks. Define agent workflows with delegation, tool use, and memory sharing for sophisticated reasoning pipelines beyond single-agent capabilities.
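A minimal two-agent sketch using AgentWorkflow from llama_index.core (the agent names, toy tool, and model choice are illustrative; assumes the llama-index-llms-openai package is installed):

import asyncio

from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice

def search_docs(query: str) -> str:
    """Toy stand-in for a real retrieval tool."""
    return f"Results for: {query}"

# The researcher gathers facts and can hand off to the writer.
researcher = FunctionAgent(
    name="researcher",
    description="Looks up facts in the document store.",
    system_prompt="Gather facts, then hand off to the writer.",
    tools=[search_docs],
    llm=llm,
    can_handoff_to=["writer"],
)

# The writer turns the researcher's notes into a final answer.
writer = FunctionAgent(
    name="writer",
    description="Summarizes gathered facts into a final answer.",
    system_prompt="Write a concise answer from the researcher's notes.",
    llm=llm,
)

workflow = AgentWorkflow(agents=[researcher, writer], root_agent="researcher")

async def main():
    response = await workflow.run(user_msg="Summarize the key findings.")
    print(response)

asyncio.run(main())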


from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Ingest every file under ./documents into Document objects
documents = SimpleDirectoryReader("./documents").load_data()
# Embed the documents and build an in-memory vector index
index = VectorStoreIndex.from_documents(documents)

# Retrieve relevant chunks and pass them to the LLM with the question
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings in these documents?")
print(response)


v0.14.8

Maintenance release fixing ReActAgent parsing bugs and adding OpenAI v2 SDK support across multiple integrations.

  • Update OpenAI-dependent packages (llms-openai, llms-upstage, readers-whisper, packs) to support the OpenAI v2 SDK.
  • Fix the ReActAgent parser getting stuck when an 'Answer:' block contains 'Action:', and fix multi-block ChatMessage handling in core.
v0.14.7

Maintenance release adding tool-call block support to Anthropic, Mistral, and Ollama LLMs, plus new Serpex search tool and GitHub App auth.

  • Upgrade the Anthropic, Mistral, or Ollama integrations to use the new tool-call block feature for improved function calling.
  • Add GitHub App authentication to the GitHub reader, or enable optional SVG processing in the Confluence reader if needed.
v0.14.6

Maintenance release fixing streaming token duplication in Anthropic LLM, SQL injection risk in PostgresKVStore, and adding parallel tool call support for non-streaming workflows.

  • Replace raw SQL interpolation with parameterized queries in PostgresKVStore to prevent injection vulnerabilities.
  • Set allow_parallel_tool_calls in core to enable parallel tool calls in non-streaming agent workflows; the Anthropic fix removes duplicated tokens from streamed output.
