CrewAI: Multi-agent orchestration framework for AI agents
Python framework for autonomous multi-agent AI collaboration.
CrewAI is a Python framework for orchestrating multiple autonomous AI agents that collaborate to complete complex tasks through role-based interaction patterns. The framework implements a hierarchical task delegation system where agents are instantiated with defined roles, goals, and backstories, then assigned to specific tasks that are coordinated by a crew orchestrator. Each agent operates as an independent decision-making entity powered by large language models, with the crew managing inter-agent communication, task sequencing, and result aggregation. The architecture supports both sequential and hierarchical process flows, allowing agents to work in predefined pipelines or dynamically delegate subtasks based on their specialized capabilities. The framework abstracts the complexity of multi-agent coordination by providing a declarative interface for defining agent behaviors and task dependencies without requiring manual prompt engineering or state management.
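The role/task/crew split described above can be illustrated with a minimal plain-Python sketch. The names below mirror the framework's concepts (agent, task, crew, kickoff), but this toy sequential orchestrator is our own illustration of the idea, not the crewai API:

```python
from dataclasses import dataclass

# Toy stand-ins for the framework's core concepts: an agent has a role and
# goal, a task names the agent responsible for it, and the "crew" runs tasks
# sequentially, feeding each result into the next task's context.

@dataclass
class ToyAgent:
    role: str
    goal: str

    def perform(self, description: str, context: str) -> str:
        # A real agent would call an LLM here; we just format a string.
        return f"[{self.role}] {description} (context: {context or 'none'})"

@dataclass
class ToyTask:
    description: str
    agent: ToyAgent

@dataclass
class ToyCrew:
    tasks: list

    def kickoff(self) -> str:
        context = ""
        for task in self.tasks:
            # Each task sees the previous task's output as its context.
            context = task.agent.perform(task.description, context)
        return context  # result of the final task

researcher = ToyAgent(role="Researcher", goal="Gather facts")
writer = ToyAgent(role="Writer", goal="Summarize findings")
crew = ToyCrew(tasks=[
    ToyTask("collect sources", researcher),
    ToyTask("draft summary", writer),
])
print(crew.kickoff())
```

The point of the sketch is the data flow: the crew, not the agents, owns sequencing and result passing, which is what lets the real framework swap in hierarchical delegation without changing agent definitions.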
Framework-Independent Architecture
Built without LangChain or external agent framework dependencies, providing a self-contained implementation. Eliminates version conflicts and reduces dependency bloat while maintaining full control over multi-agent orchestration logic.
Dual Execution Models
Crews enable autonomous multi-agent collaboration while Flows provide event-driven control with granular LLM calls. Developers choose between full autonomy for complex workflows or precise deterministic execution for predictable outcomes.
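The Crews-versus-Flows distinction can be sketched in plain Python: a flow is essentially an event-driven pipeline where each step fires when the step it listens to has produced a result. The start/listen vocabulary below echoes the Flows concept, but this mini-runner is a simplified illustration under our own assumptions, not the framework's implementation:

```python
# A simplified event-driven "flow": steps register which prior step they
# listen to, and the runner fires each step once its trigger has a result.
# This mirrors the deterministic, granular execution model of Flows.

class MiniFlow:
    def __init__(self):
        self._steps = []  # (name, trigger_name_or_None, fn)

    def start(self, fn):
        # Entry point: runs unconditionally when the flow kicks off.
        self._steps.append((fn.__name__, None, fn))
        return fn

    def listen(self, trigger):
        # Step that runs only after `trigger` has produced its result.
        def register(fn):
            self._steps.append((fn.__name__, trigger.__name__, fn))
            return fn
        return register

    def kickoff(self):
        results = {}
        pending = list(self._steps)
        while pending:
            name, trigger, fn = pending.pop(0)
            if trigger is None:
                results[name] = fn()
            elif trigger in results:
                results[name] = fn(results[trigger])
            else:
                pending.append((name, trigger, fn))  # trigger not ready yet
        return results

flow = MiniFlow()

@flow.start
def fetch():
    return "raw data"

@flow.listen(fetch)
def summarize(data):
    return f"summary of {data}"

print(flow.kickoff()["summarize"])  # prints "summary of raw data"
```

Unlike the crew example, control here is fully deterministic: the graph of triggers decides execution order, which is the trade-off the text describes against autonomous multi-agent collaboration.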
Integrated Observability Platform
Built-in tracing, monitoring, and centralized control plane for agent workflow management. Real-time metrics, logs, and performance analytics enable debugging and optimization without third-party tools.
from crewai import Agent, Task, Crew
researcher = Agent(
    role='Researcher',
    goal='Find latest AI news',
    backstory='Expert at gathering information'
)

task = Task(
    description='Research AI trends in 2024',
    agent=researcher,
    expected_output='Summary of AI trends'
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()

Patch release fixing agent iteration limits and LLM provider routing bugs; no breaking changes or new requirements noted.
- Update to resolve agent max_iterations parameter handling that may have caused premature task termination.
- Apply fix for LLM model syntax routing errors that prevented correct provider selection in multi-provider setups.
Adds MCP first-class support and LLM message interceptor hooks; docs now require vectordb for tools and rename embedder to embedding_model.
- Update tool configurations to use embedding_model instead of embedder and ensure vectordb is specified per new documentation requirements.
- Leverage new LLM message interceptor hooks and MCP integration for custom processing; flow state pickling and RAG URL handling are now fixed.
Breaking change: embedder renamed to embedding_model; vectordb now required across all tool docs.
- Update all tool configurations to use embedding_model instead of embedder and ensure vectordb is configured.
- Verify Firecrawl tool integrations after bug fixes; stop_words handling refactored to property with validation.
Related Repositories
Discover similar tools and frameworks used by developers
gemini-cli
Access Google's powerful Gemini AI models directly from your terminal with an intuitive command-line interface for text, image, and multimodal interactions.
AI-Trader
LLM agent benchmarking framework for autonomous market trading.
goose
LLM-powered agent automating local software engineering workflows.
bolt.new
LLM-powered browser IDE with integrated WebContainers runtime.
vllm
Fast and memory-efficient inference engine for large language models with PagedAttention optimization for production deployments at scale.