llama_index

LlamaIndex: Data framework for LLM applications

Connect LLMs to external data via RAG workflows.

OVERALL RANKING: #118
AI & ML RANKING: #50
STARS: 47.2K (+199 in the last 7 days)
FORKS: 6.9K (+45 in the last 7 days)

Learn more about llama_index

LlamaIndex is a Python data framework designed to integrate large language models with custom data sources. It works by ingesting data, structuring it into indexes, and enabling retrieval-augmented generation (RAG) workflows where relevant data is fetched to augment LLM prompts. The framework supports multiple LLM providers, embedding models, and vector databases through a modular integration system. Common applications include building question-answering systems over documents, creating agents that reason over structured data, and implementing multi-agent systems that coordinate across different data sources.


1. Modular Integration System

Core functionality is separated from provider integrations, which are distributed across 300+ packages in LlamaHub. You install only the components required for your chosen LLM, embedding model, and vector store instead of pulling in every dependency.

2. Flexible Dependency Options

Choose between a starter package bundling common integrations and a core-only package for custom setups. This balances convenience for quick starts with granular control for production deployments.
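For example, the two install paths look like this (the integration package names shown are illustrative picks from LlamaHub):

```shell
# Starter bundle: core plus a default set of common integrations.
pip install llama-index

# Core-only: install the framework, then add one package per provider.
pip install llama-index-core
pip install llama-index-llms-ollama llama-index-vector-stores-chroma
```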

3. Multi-Agent Orchestration

Coordinates multiple specialized agents that collaborate on complex tasks. Agent workflows can be defined with delegation, tool use, and shared memory, enabling reasoning pipelines beyond what a single agent can handle.


from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every file in the directory into Document objects.
documents = SimpleDirectoryReader("./documents").load_data()
# Embed the documents and build an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Retrieve relevant chunks and synthesize an answer with the LLM.
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings in these documents?")
print(response)


v0.14.13

Adds early stopping for agent workflows and distributed data ingestion support.

  • feat: add earlystoppingmethod parameter to agent workflows
  • feat: Add token-based code splitting support to CodeSplitter
  • Add RayIngestionPipeline integration for distributed data ingestion
  • Added the multi-modal version of the Condensed Conversation & Context
  • Replace ChatMemoryBuffer with Memory
v0.14.12

Introduces async tool spec support with improved function calling and node parsing.

  • Feat/async tool spec support
  • Improve `MockFunctionCallingLLM`
  • fix(openai): sanitize generic Pydantic model schema names
  • Element node parser
  • improve llama dev logging
v0.14.10

Adds mock function calling LLM and Airweave tool integration with advanced search.

  • feat: add mock function calling llm
  • test: fix typo 'reponse' to 'response' in variable names
  • feat: add Airweave tool integration with advanced search features


