Continue: Open-source AI coding agent CLI
Multi-LLM coding agent with interactive and automated modes.
Continue is a CLI-based coding agent that runs in multiple modes: TUI (terminal user interface) for interactive use, headless for background automation, and as IDE extensions for VS Code and JetBrains. The tool connects to various LLM providers including Claude, GPT, Gemini, and Qwen to execute coding tasks and workflows. It supports event-driven automation through PR triggers, scheduled execution, and custom event handlers. The architecture allows agents to execute workflows step-by-step with optional human approval at decision points.
Multi-Mode Execution
Run the same agent logic in TUI for interactive workflows, headless for CI/CD automation, or as IDE plugins for VS Code and JetBrains. No code changes required when switching between modes—agents adapt to the execution context automatically.
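For example, a CI job can run the same task headlessly by shelling out to the CLI, as in the minimal sketch below. The cn binary name and the -p (non-interactive) flag are assumptions about the CLI invocation rather than verified documentation.

// Minimal sketch: drive the agent headlessly from a Node-based CI step.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

async function headlessFix(): Promise<void> {
  // '-p' is assumed to run the prompt without the TUI and print the result to stdout.
  const { stdout } = await run('cn', ['-p', 'Fix all TypeScript errors in src/utils']);
  console.log(stdout);
}

headlessFix().catch((err) => {
  console.error('Agent run failed:', err);
  process.exit(1);
});

Running the same prompt inside an interactive TUI session requires no change to the task itself; only the invocation differs.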
Event-Driven Automation
Workflows trigger on PR events, scheduled intervals, or custom event sources with configurable approval gates. Agents execute autonomously for trusted operations or require step-by-step human approval for sensitive changes.
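A PR-triggered run with an approval gate could be wired up as in the sketch below, which reuses the client shape from the quick-start snippet further down. The webhook payload shape and the AUTO_APPROVE environment gate are illustrative assumptions, not part of a documented API.

// Sketch: create a task when a pull request event arrives and gate its execution.
import { createServer } from 'node:http';
import { ContinueClient } from '@continuedev/cli';

const client = new ContinueClient();

createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', async () => {
    const event = body ? JSON.parse(body) : {};
    // Only react to pull request events; everything else is acknowledged and ignored.
    if (event.pull_request) {
      const task = await client.createTask({
        prompt: `Fix lint and type errors introduced by PR #${event.pull_request.number}`,
        rules: ['Follow existing code style'],
        tools: ['read_file', 'write_file']
      });
      // Trusted operations run autonomously; otherwise execution waits for a human
      // to approve out of band (the environment flag here is only an illustration).
      if (process.env.AUTO_APPROVE === 'true') {
        await task.execute();
      } else {
        console.log('Task created; awaiting human approval before execution.');
      }
    }
    res.end('ok');
  });
}).listen(3000);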
Multi-Provider LLM Support
Connects to multiple LLM providers including OpenAI, Anthropic, local models, and custom endpoints through a unified interface. Switch between providers without changing code, optimizing for cost, latency, or capability requirements.
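Provider selection typically lives in configuration rather than in task code; the mapping below is an assumed sketch of such a setup (the provider, model, and apiBase fields are not the tool's actual config schema).

// Sketch: choose a provider/model per profile without touching the task definition.
interface ModelConfig {
  provider: 'openai' | 'anthropic' | 'ollama' | 'custom';
  model: string;
  apiBase?: string; // only needed for local models or custom endpoints
}

const profiles: Record<string, ModelConfig> = {
  capability: { provider: 'anthropic', model: 'claude-sonnet-4-20250514' },
  cost: { provider: 'openai', model: 'gpt-4o-mini' },
  local: { provider: 'ollama', model: 'qwen2.5-coder', apiBase: 'http://localhost:11434' }
};

const selected = profiles[process.env.AGENT_PROFILE ?? 'capability'] ?? profiles.capability;
console.log(`Routing agent calls to ${selected.provider}/${selected.model}`);

Swapping the active profile changes only this configuration; the task definition in the quick-start below stays the same.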
import { ContinueClient } from '@continuedev/cli';

// Describe the task: the prompt states the goal, rules constrain how the agent
// works, and tools limit which operations it may perform.
const client = new ContinueClient();
const task = await client.createTask({
  prompt: 'Fix all TypeScript errors in src/utils',
  rules: ['Follow existing code style', 'Add unit tests'],
  tools: ['read_file', 'write_file', 'run_terminal']
});
// Run the task to completion (top-level await requires an ES module context).
await task.execute();

Releases
Daily beta build for testing; no specific changes are documented in the release notes.
- Release notes do not specify breaking changes, new features, or bug fixes included in this beta.
- Expect promotion to stable after 7 days if no critical issues surface during the testing period.
Daily beta build for testing; no release notes detail the changes, fixes, or breaking updates it contains.
- Release notes do not specify any changes, breaking updates, or new requirements for this beta.
- Wait 7 days for the stable promotion, or consult the commit history before deploying this beta to production.
Stable release built directly from main branch with no documented changes, breaking updates, or migration steps.
- Release notes do not specify any feature additions, bug fixes, or behavioral changes in this version.
- No breaking changes, dependency updates, or configuration requirements are documented for this release.
Related Repositories
Discover similar tools and frameworks used by developers
fastmcp
Build Model Context Protocol servers with decorators.
LLaMA-Factory
Parameter-efficient fine-tuning framework for 100+ LLMs.
paperless-ngx
Self-hosted OCR document archive with ML classification.
cai
LLM-powered Python framework for automated penetration testing workflows.
onnxruntime
Cross-platform engine for optimized ONNX model execution.