
Crush: Terminal-based AI coding agent

LLM-powered coding agent with LSP and MCP integration.

Live rankings: #3 overall, #2 in Developer Tools. Stars: 17.3K (+367 over 7 days). Forks: 1.0K (+43 over 7 days).

Learn more about crush

Crush is a terminal-based coding agent that runs on macOS, Linux, Windows, and BSD systems. It connects to language models through configurable providers and integrates with LSPs for code context, while supporting Model Context Protocol extensions via HTTP, stdio, and SSE transports. The tool maintains separate work sessions per project and allows switching between different LLM models within a session while preserving conversation history. Common deployment involves local terminal usage with remote or local LLM backends, supporting workflows that combine code editing, analysis, and generation tasks.


1. Multi-Provider LLM Support

Switch between OpenAI, Anthropic, and custom LLM providers mid-session without losing conversation context. Configure multiple models through simple configuration files for different tasks or cost optimization.

2. LSP and MCP Integration

Native Language Server Protocol integration provides deep code understanding and navigation. Model Context Protocol support with HTTP, stdio, and SSE transports enables extensibility through external tools and services.
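Both integrations are driven from Crush's JSON configuration file. A minimal sketch is below; the `lsp` and `mcp` section shapes follow the project's documented config format, but the server names, commands, and URL here are placeholders, and the current schema should be checked against the repo:

```json
{
  "lsp": {
    "go": { "command": "gopls" }
  },
  "mcp": {
    "local-tools": {
      "type": "stdio",
      "command": "example-mcp-server",
      "args": ["--stdio"]
    },
    "remote-tools": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

Each MCP entry declares its transport (`stdio`, `http`, or `sse` per the description above), so external tools can be attached without changing how Crush itself is invoked.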

3. Per-Project Session Management

Independent work sessions maintain separate conversation contexts for each project. Switch between multiple coding tasks without state collision or context loss across different codebases.


Crush is configured through a JSON file rather than a programmatic API. The sketch below adds a custom OpenAI-compatible provider; key names follow the project's documented config format, but verify against the repo's current schema before relying on them:

{
  "$schema": "https://charm.land/crush.json",
  "providers": {
    "openai": {
      "name": "OpenAI",
      "type": "openai",
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key-here",
      "models": [{ "id": "gpt-4o", "name": "GPT-4o" }]
    }
  }
}

v0.17.0

Adds opt-in AI attribution trailers and surfaces recently used models at the top of the picker; no breaking changes noted.

  • Configure `trailer_style` to append 'Assisted-by' attribution lines to commits following Fedora AI contribution guidelines.
  • Recent models now appear at the top of the model chooser; bash completion now uses model names instead of IDs.
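The attribution trailer above is presumably enabled in the config file. A hedged sketch follows; the exact key path and the accepted values for `trailer_style` are assumptions based on the changelog wording, not confirmed schema:

```json
{
  "options": {
    "trailer_style": "assisted-by"
  }
}
```

When enabled, commits made through the agent would carry an 'Assisted-by' trailer line, matching the Fedora AI contribution guidelines the release notes reference.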
v0.16.1

Fixes error reporting for OpenAI-compatible providers and caps retries at 2 additional attempts on rate-limit errors.

  • Expect clearer error messages when API calls to providers fail, especially for OpenAI-compatible endpoints that previously returned blank errors.
  • Retry logic now caps at 2 additional attempts for rate-limit and similar transient errors, down from an unspecified higher count.
v0.16.0

Adds background job management; removes persistent shell feature (breaking change).

  • Use new background job spawning to run servers or parallel builds asynchronously.
  • Persistent shell feature has been removed; update workflows relying on shell state persistence.


