llama_index

LlamaIndex: Data framework for LLM applications

Connect LLMs to external data via RAG workflows.

Overall rank: #108
AI & ML rank: #47
Stars: 46.9K (+142 in the last 7 days)
Forks: 6.8K (+23 in the last 7 days)

Learn more about llama_index

LlamaIndex is a Python data framework designed to integrate large language models with custom data sources. It works by ingesting data, structuring it into indexes, and enabling retrieval-augmented generation (RAG) workflows where relevant data is fetched to augment LLM prompts. The framework supports multiple LLM providers, embedding models, and vector databases through a modular integration system. Common applications include building question-answering systems over documents, creating agents that reason over structured data, and implementing multi-agent systems that coordinate across different data sources.
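The RAG workflow described above can be illustrated with a minimal, self-contained sketch. This is not LlamaIndex code — it uses crude keyword overlap in place of vector embeddings — but it shows the same three steps: score documents against a query, retrieve the most relevant ones, and splice them into the prompt as context.

```python
# Toy illustration of the RAG pattern (plain Python, not the LlamaIndex API):
# retrieve the most relevant documents for a question, then augment the
# LLM prompt with them as context.

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents ranked by keyword overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the user question before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "LlamaIndex ingests documents and builds vector indexes.",
    "The weather today is sunny.",
    "Query engines fetch relevant documents to answer questions.",
]
prompt = build_prompt("How are documents indexed and queried?", docs)
print(prompt)
```

In LlamaIndex proper, the scoring step is embedding similarity against a vector store and the prompt assembly is handled by a query engine, but the data flow is the same.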


1. Modular Integration System

Core functionality is separated from provider integrations, which are distributed as 300+ packages on LlamaHub. Install only the components required for your chosen LLM, embedding model, and vector store instead of pulling in every dependency.
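As a sketch of that selective install, a project targeting OpenAI models with a Chroma vector store would pull in only those integration packages (package names shown are examples of the per-provider packages on LlamaHub):

```shell
# Install the core plus only the integrations this project actually uses,
# instead of bundling every provider dependency.
pip install llama-index-core \
            llama-index-llms-openai \
            llama-index-embeddings-openai \
            llama-index-vector-stores-chroma
```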

2. Flexible Dependency Options

Choose between a starter package that bundles common integrations and a core-only package for custom setups. This balances convenience for quick starts with granular control for production deployments.
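The two options look like this in practice (a sketch of the trade-off, not an exhaustive install guide):

```shell
# Quick start: the starter bundle with common default integrations.
pip install llama-index

# Production: core only, then add each integration explicitly
# so the dependency tree stays minimal and auditable.
pip install llama-index-core
```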

3. Multi-Agent Orchestration

Coordinates multiple specialized agents that collaborate on complex tasks. Define agent workflows with delegation, tool use, and shared memory to build reasoning pipelines beyond what a single agent can do.
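The delegation-plus-shared-memory idea can be sketched in plain Python. This is a conceptual illustration only, not the llama_index agent API: a coordinator runs specialized agents in sequence, and each agent reads from and writes to a shared memory dict.

```python
# Conceptual sketch of multi-agent coordination (plain Python, hypothetical
# agent functions): a coordinator delegates a task to specialized agents
# that communicate through shared memory.

from typing import Callable

Memory = dict[str, str]

def research_agent(task: str, memory: Memory) -> str:
    """Gathers information and records it in shared memory."""
    memory["notes"] = f"findings for: {task}"
    return memory["notes"]

def writer_agent(task: str, memory: Memory) -> str:
    """Produces a report using what the research agent stored."""
    return f"Report on '{task}' based on {memory.get('notes', 'nothing')}"

def coordinator(task: str, agents: list[Callable[[str, Memory], str]]) -> str:
    """Run agents in order, threading shared memory between them."""
    memory: Memory = {}
    result = ""
    for agent in agents:
        result = agent(task, memory)
    return result

report = coordinator("market trends", [research_agent, writer_agent])
print(report)
```

In LlamaIndex itself the agents would wrap LLM calls and tools, and delegation can be dynamic rather than a fixed sequence, but the memory-sharing pattern is the same.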


from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every file in ./documents into Document objects
documents = SimpleDirectoryReader("./documents").load_data()
# Embed the documents and build a vector index for similarity search
index = VectorStoreIndex.from_documents(documents)

# Ask a question: relevant chunks are retrieved and passed to the LLM
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings in these documents?")
print(response)


v0.14.13

  • feat: add earlystoppingmethod parameter to agent workflows (#20389)
  • feat: Add token-based code splitting support to CodeSplitter (#20438)
  • Add RayIngestionPipeline integration for distributed data ingestion (#20443)
  • Added the multi-modal version of the Condensed Conversation & Context… (#20446)
  • Replace ChatMemoryBuffer with Memory (#20458)
v0.14.12

  • Feat/async tool spec support (#20338)
  • Improve `MockFunctionCallingLLM` (#20356)
  • fix(openai): sanitize generic Pydantic model schema names (#20371)
  • Element node parser (#20399)
  • improve llama dev logging (#20411)
v0.14.10

  • feat: add mock function calling llm (#20331)
  • test: fix typo 'reponse' to 'response' in variable names (#20329)
  • feat: add Airweave tool integration with advanced search features (#20111)
