Continue CLI (cn) is a headless, model-agnostic AI coding agent that runs tasks asynchronously in the cloud or background — without blocking your terminal. Unlike interactive tools such as Cursor or GitHub Copilot Chat, cn executes entire workflows (PR reviews, code migrations, issue triage) as background jobs you can trigger from a shell, a GitHub Actions workflow, or a cron schedule. With 10M+ VS Code extension installs and a growing open-source CLI in Alpha as of 2026, Continue is positioning itself as the automation layer for AI-assisted development at team scale.
What Is Continue CLI (cn) and Why Async Cloud Agents Matter
Continue CLI (cn) is the command-line interface for Continue.dev — a fully open-source, model-agnostic AI coding platform. The cn binary lets you invoke AI agents from the terminal, in CI pipelines, and inside cloud automation workflows without any graphical IDE. The “async cloud agent” model means the agent picks up a task, runs it to completion in the background (or in Continue’s hosted cloud), and delivers results asynchronously — you don’t need to sit and watch a spinner. This is a fundamentally different interaction model from interactive tools like Cursor, where the developer and AI collaborate in real time.
Why does async matter in 2026? Because 84–92% of developers now use or plan to use AI coding tools, and 41% of all code written this year is AI-generated (up from 27% in 2024). The bottleneck has shifted from “can AI help me code?” to “how do I run AI at scale across dozens of PRs and repositories without babysitting every run?” Continue CLI answers that directly: trigger once, get results asynchronously, and plug the output into your existing Git-based review workflow. Compared to Claude Code (which scores 80.8% on SWE-bench Verified as an interactive CLI) or Cursor’s agent mode, Continue CLI is the only major option that is simultaneously free/open-source, model-agnostic across 20+ providers, and designed from the ground up for headless, unattended execution.
Installing and Setting Up Continue CLI
Installing Continue CLI takes under two minutes on macOS, Linux, or Windows (WSL2). The binary is named cn and distributed via npm, Homebrew, and direct download.
npm (recommended for CI environments):
npm install -g @continue/cn
cn --version
Homebrew (macOS/Linux):
brew install continuedev/tap/cn
Verify installation:
cn --help
After installation, authenticate by setting your CONTINUE_API_KEY environment variable. You can generate a key from the Continue Mission Control dashboard at app.continue.dev. For local use without a cloud account, you can skip this and supply a model API key (e.g., ANTHROPIC_API_KEY or OPENAI_API_KEY) directly in config.yaml.
export CONTINUE_API_KEY=your-key-here
# or add to ~/.bashrc / ~/.zshrc for persistence
Run cn auth status to confirm your credentials are valid:
cn auth status
# → Authenticated as user@example.com
Once authenticated, initialize a project config:
cd your-project
cn init
# Creates .continue/config.yaml in the current directory
The init command scaffolds a minimal config.yaml and a .continue/rules/ directory. From here you can start running agents interactively or switch to headless mode for automation.
Interactive vs Headless Mode — When to Use Each
Continue CLI supports two execution modes: interactive and headless. Choosing between them is one of the first decisions developers face when adopting cn at scale.
Interactive mode launches a terminal UI (TUI) where you type prompts and the agent responds in a conversational loop — similar to running Claude Code or Aider. Use interactive mode when you’re exploring a codebase, debugging a tricky problem, or prototyping a new prompt before automating it. Launch it with:
cn # or: cn --interactive
Headless mode is the async-first path. Pass a prompt with the -p flag and the agent executes without waiting for user input. This is what powers CI/CD automation:
cn -p "Review the diff in this PR for security vulnerabilities. Output a JSON array of findings."
When to use each:
| Scenario | Mode |
|---|---|
| Exploring a new codebase | Interactive |
| Writing code with step-by-step guidance | Interactive |
| PR review bot in GitHub Actions | Headless |
| Nightly code quality report | Headless |
| Issue triage on every new GitHub issue | Headless |
| Migration guide generation for a new API version | Headless |
| Pair programming session | Interactive |
Headless mode also supports the --allow flag to pre-authorize file write operations — essential when the agent needs to actually modify code rather than just report:
cn -p "Fix all ESLint errors in src/" --allow 'Write()'   # quote the parentheses so the shell doesn't interpret them
Without --allow Write(), Continue CLI defaults to read-only mode for all file system operations, which is the safe default for auditing and reporting tasks.
Configuring Continue CLI: config.yaml, Models, and Rules
The config.yaml file in .continue/ is the central configuration for Continue CLI. It defines which LLM models to use, what rules govern agent behavior, and which prompt shortcuts are available. Getting this right is what separates a throwaway demo from a reliable team-wide automation.
A typical config.yaml for a TypeScript project using Claude Sonnet:
models:
  - provider: anthropic
    model: claude-sonnet-4-6
    apiKey: ${ANTHROPIC_API_KEY}
    contextLength: 200000

rules:
  - name: TypeScript Best Practices
    path: .continue/rules/typescript.md
  - name: Security Policy
    path: .continue/rules/security.md

prompts:
  pr-review:
    description: "Full security and quality PR review"
    prompt: |
      Review the following git diff for:
      1. Security vulnerabilities (OWASP Top 10)
      2. Performance regressions
      3. Type safety issues
      Output findings as a markdown list grouped by severity.
Model providers supported include: Anthropic (Claude), OpenAI (GPT-4o), Google (Gemini), Mistral, local Ollama models, Azure OpenAI, Cohere, and 15+ others. This model-agnosticism is a key differentiator — you’re not locked into a single vendor, and you can swap models per task type (e.g., use a fast Haiku model for triage, Opus for deep reviews).
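Swapping models per task is just a matter of listing more than one entry under models. A sketch reusing the schema from the example above; the model identifiers are illustrative placeholders, and how a given run selects among them (for example via a --model flag) may shift while the CLI is in Alpha:

```yaml
models:
  # fast, inexpensive model for high-volume work like issue triage
  - provider: anthropic
    model: claude-haiku-4-5        # placeholder: use your provider's current fast tier
    apiKey: ${ANTHROPIC_API_KEY}
  # deeper model reserved for security reviews and migrations
  - provider: anthropic
    model: claude-sonnet-4-6
    apiKey: ${ANTHROPIC_API_KEY}
```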
Rules files (.continue/rules/*.md) let you encode team conventions as instructions the agent always follows. Think of them as a persistent system prompt baked into every agent run. A rules file might include:
# TypeScript Rules
- Always use strict null checks
- Prefer functional array methods over for loops
- All async functions must handle errors with try/catch
- Never use `any` — use `unknown` and narrow the type
Rules are version-controlled alongside your code, making them reviewable, diffable, and enforceable across the whole team — a feature most competing tools lack.
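For completeness, the Security Policy file referenced in the config above follows the same shape. An illustrative .continue/rules/security.md (the specific rules are examples, not Continue defaults):

```markdown
# Security Rules
- Never log secrets, tokens, or PII, even at debug level
- Validate all user input before it reaches a query or shell command
- Flag any use of eval, exec, or dynamic require on user-controlled data
- New HTTP endpoints must declare their authentication requirement explicitly
```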
Building Your First Async Cloud Agent
An async cloud agent in Continue is a headless cn invocation that runs to completion without human intervention and delivers its output as structured data or a Git action. Here’s a minimal end-to-end example: a PR code review agent.
Step 1: Write the agent prompt
Create .continue/prompts/pr-review.md:
You are a senior software engineer reviewing a pull request.
Review the provided git diff for:
1. Bugs and logic errors
2. Security vulnerabilities
3. Missing test coverage
4. Code style violations against our rules
Output a JSON object with this structure:
{
  "summary": "one-sentence summary",
  "severity": "low|medium|high|critical",
  "findings": [{"type": "...", "file": "...", "line": 0, "description": "..."}],
  "approved": true|false
}
Step 2: Run it headlessly against a diff
git diff main..feature/my-branch > /tmp/diff.txt
# Note: bash does not expand \n inside double quotes, so assemble the
# prompt with printf rather than embedding literal \n sequences.
cn -p "$(printf '%s\n\nDiff:\n%s' "$(cat .continue/prompts/pr-review.md)" "$(cat /tmp/diff.txt)")" \
  --output json \
  > /tmp/review.json
cat /tmp/review.json
Step 3: Act on the output
APPROVED=$(python3 -c "import json; d=json.load(open('/tmp/review.json')); print(d['approved'])")
if [ "$APPROVED" = "False" ]; then
  echo "Agent flagged issues — blocking merge"
  exit 1
fi
This three-step pattern — prompt, run headlessly, parse output — is the foundation of every Continue cloud agent. More complex agents add tool use (reading files, running tests, posting to GitHub), but the core loop stays the same. For cloud execution (running the agent on Continue’s hosted infrastructure rather than locally), you deploy the agent through Mission Control and trigger it via webhook or the Continue API.
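One caveat on Step 3: the inline string comparison passes silently if the model emits malformed JSON. A gate that fails closed can be sketched in Python (review_gate and the severity policy are illustrative naming, not part of cn):

```python
import json
import sys

# Keys the pr-review prompt asks the model to emit.
REQUIRED_KEYS = {"summary", "severity", "findings", "approved"}

def review_gate(review_path, fail_on=("high", "critical")):
    """Return 0 to allow the merge, 1 to block it.

    Fails closed: unreadable files, malformed JSON, or missing keys
    block the merge instead of silently passing.
    """
    try:
        with open(review_path) as f:
            review = json.load(f)
    except (OSError, ValueError):
        return 1
    if not isinstance(review, dict) or not REQUIRED_KEYS.issubset(review):
        return 1
    if not review["approved"] or review["severity"] in fail_on:
        return 1
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(review_gate(sys.argv[1]))
```

Saved as, say, gate.py, it replaces the inline check with python3 gate.py /tmp/review.json.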
Integrating Continue CLI into GitHub Actions (CI/CD)
GitHub Actions is the most common deployment target for Continue CLI cloud agents. The combination lets you run AI-powered PR reviews, issue triage, and automated documentation on every push — with zero infrastructure to manage. Here’s a production-ready GitHub Actions workflow for a PR review agent:
name: AI PR Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: read
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Continue CLI
        run: npm install -g @continue/cn
      - name: Generate git diff
        run: git diff origin/main..HEAD > /tmp/pr_diff.txt
      - name: Run AI review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          CONTINUE_API_KEY: ${{ secrets.CONTINUE_API_KEY }}
        run: |
          cn -p "Review this PR diff for bugs, security issues, and code quality.
          Output findings as a markdown report suitable for a GitHub PR comment.
          Diff:
          $(cat /tmp/pr_diff.txt)" \
            > /tmp/review_output.md
      - name: Post review as PR comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('/tmp/review_output.md', 'utf8');
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `## AI Code Review\n\n${review}`
            });
Key points for production GitHub Actions integration:
- Store ANTHROPIC_API_KEY and CONTINUE_API_KEY as GitHub Actions secrets, never in the YAML
- Use fetch-depth: 0 on checkout to ensure the full Git history is available for diffing
- The --allow Write() flag is not needed here since the agent is only reading code and writing to stdout; keep the default read-only posture
- For large PRs (>500 changed lines), consider chunking the diff to stay within model context limits
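Chunking oversized diffs is easiest on file boundaries, so no file's hunks are split across model calls. An illustrative helper (chunk_diff is hypothetical naming, not part of cn; the 500-line budget mirrors the threshold above):

```python
def chunk_diff(diff_text, max_lines=500):
    """Split a unified diff into chunks of roughly max_lines lines,
    breaking only at file boundaries (lines starting with 'diff --git')."""
    # First, group the diff into per-file blocks.
    file_blocks, block = [], []
    for line in diff_text.splitlines():
        if line.startswith("diff --git") and block:
            file_blocks.append(block)
            block = []
        block.append(line)
    if block:
        file_blocks.append(block)

    # Then pack whole files into chunks under the line budget.
    chunks, current, count = [], [], 0
    for fb in file_blocks:
        if current and count + len(fb) > max_lines:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.extend(fb)
        count += len(fb)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk can then be passed to its own cn -p invocation and the findings merged afterwards.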
You can extend this pattern for issue triage (on: issues: types: [opened]), nightly security scans (on: schedule: - cron: '0 2 * * *'), and release note generation (on: release: types: [created]).
Running Parallel Agents for Faster Automation
One of Continue CLI’s most powerful but underused features is parallel agent execution. Because cn is a CLI, you can run multiple agents simultaneously using standard shell background jobs — no special orchestration layer needed. This is dramatically faster than running agents sequentially when the tasks are independent.
Example: Parallel analysis on a codebase
#!/bin/bash
# Run three agents in parallel
cn -p "Analyze src/ for security vulnerabilities. Output JSON." \
  > /tmp/security_report.json &
SECURITY_PID=$!

cn -p "Find performance bottlenecks in src/api/. Output JSON." \
  > /tmp/perf_report.json &
PERF_PID=$!

cn -p "Generate a migration guide from v1 to v2 API changes in CHANGELOG.md." \
  > /tmp/migration_guide.md &
MIGRATION_PID=$!

# Wait for all three to complete
wait $SECURITY_PID $PERF_PID $MIGRATION_PID
echo "All agents complete."
echo "Security findings: $(python3 -c "import json; d=json.load(open('/tmp/security_report.json')); print(len(d.get('findings', [])))")"
Parallel agents in GitHub Actions use a matrix strategy:
jobs:
  ai-analysis:
    strategy:
      matrix:
        task: [security, performance, documentation]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g @continue/cn
      - name: Run ${{ matrix.task }} agent
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          case "${{ matrix.task }}" in
            security)      cn -p "Security audit of src/" > output.md ;;
            performance)   cn -p "Performance analysis of src/" > output.md ;;
            documentation) cn -p "Generate API docs from src/" > output.md ;;
          esac
      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.task }}-report
          path: output.md
Running three agents in parallel on a medium-sized codebase typically completes in 90–120 seconds versus 4–6 minutes sequentially. For teams running these analyses on every PR, the time saving compounds quickly. Developers already save approximately 3.6 hours per week using AI coding tools — parallel agent execution pushes that number higher for automation-heavy workflows.
Mission Control: Managing Cloud Agents at Team Scale
Mission Control is Continue’s hosted control plane at app.continue.dev. It’s the team-facing UI for deploying, monitoring, and governing cloud agents — especially important once you move beyond individual developer use to team-wide or organization-wide automation. Mission Control lets you share agents, set access controls, monitor run history, and trigger agents via webhook without anyone needing to install the CLI locally.
Core Mission Control features:
| Feature | What It Does |
|---|---|
| Agent Library | Share and version agent definitions across the team |
| Run History | View inputs, outputs, and logs for every agent execution |
| Webhooks | Trigger agents from GitHub, Linear, Slack, or any HTTP event |
| Access Controls | Restrict which agents team members can trigger or modify |
| Usage Analytics | Track model costs and token consumption per agent |
| Secrets Management | Store API keys centrally, injected at run time |
Deploying an agent to Mission Control:
# From your project root
cn deploy --name "pr-reviewer" \
  --prompt .continue/prompts/pr-review.md \
  --model claude-sonnet-4-6 \
  --allow 'ReadFile()'   # quote the parentheses so the shell doesn't interpret them
This registers the agent in your team’s Mission Control workspace. Other team members can trigger it via:
cn run pr-reviewer --input "$(git diff main..HEAD)"
Or via the Mission Control webhook URL (suitable for GitHub Actions or Zapier integrations). For enterprise teams — 78% of Fortune 500 companies have AI-assisted development in production in 2026 — Mission Control’s centralized governance and audit trail are often prerequisites before rolling out AI automation to the broader engineering organization.
Continue CLI vs Claude Code vs Cursor CLI — Which Should You Use?
All three tools can run AI coding agents from the command line, but they’re optimized for different workflows. Here’s an honest comparison for developers choosing in 2026.
Continue CLI (cn) is the right choice if: you need full async/headless automation, you want model flexibility (avoid vendor lock-in), you’re building CI/CD pipelines, or you work in an open-source project that can’t pay for per-seat licenses. The trade-off is maturity — it’s in Alpha, documentation gaps exist, and some enterprise features are still coming.
Claude Code is the right choice if: you want the highest benchmark performance (80.8% SWE-bench), you’re doing intensive single-developer sessions, or you value Anthropic’s safety-focused approach to agentic behavior. The trade-off is cost (Claude API pricing, no free tier) and model lock-in (Anthropic only).
Cursor CLI / Cursor Agent is the right choice if: you’re already in the Cursor IDE ecosystem and want to extend your editor workflow to the terminal. The trade-off is that it’s primarily designed for the interactive desktop experience — headless CI/CD use is secondary.
| Dimension | Continue CLI | Claude Code | Cursor CLI |
|---|---|---|---|
| License | Open source (free) | Proprietary (paid API) | Proprietary (paid) |
| Model support | 20+ providers | Anthropic only | Multiple (IDE-centric) |
| Headless/CI mode | First-class | Good | Limited |
| Interactive TUI | Yes (improving) | Excellent | Via IDE |
| SWE-bench score | Model-dependent | 80.8% (Opus) | N/A |
| Team governance | Mission Control | Limited | Team plan |
| Config as code | config.yaml + rules | CLAUDE.md | .cursorrules |
| Parallel agents | Shell background | Worktrees | No native support |
| Alpha/Beta status | Alpha (2026) | GA | GA |
The most common real-world pattern: teams use Claude Code or Cursor for interactive development sessions and Continue CLI for automated pipeline tasks. They’re not mutually exclusive — they’re complementary layers of the AI developer toolkit.
Real-World Use Cases and Practical Examples
Understanding Continue CLI’s architecture is useful, but seeing it applied to concrete problems is what makes it click. Here are five real-world patterns that teams are running in production with Continue CLI in 2026.
1. Automated PR Review Bot
Trigger a security + quality review on every pull request. The agent reads the diff, applies team-specific rules from .continue/rules/, and posts a structured comment to GitHub. Teams report this catches ~30% of issues that slip through manual review, with a typical run time of 45–90 seconds per PR.
2. Nightly Dependency Audit
# Runs nightly via cron in GitHub Actions
cn -p "Audit package.json and package-lock.json for:
1. Known CVEs in dependencies
2. Outdated major versions with breaking changes
3. Unused dependencies that add attack surface
Output JSON with fix recommendations." \
> /tmp/dep_audit.json
3. Issue Triage Bot
On every new GitHub issue, a Continue CLI agent reads the issue body, classifies it (bug/feature/question), assigns severity, and adds appropriate labels — all without human intervention. The GitHub Actions trigger:
on:
  issues:
    types: [opened]
4. Release Notes Generator
Before every release, run an agent that reads the git log since the last tag, groups commits by type (feat/fix/chore), and generates a human-readable CHANGELOG.md entry in your project’s voice and style.
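The feat/fix/chore bucketing can also be done deterministically before the agent runs, leaving the model to polish the prose. A sketch (group_commits and render_changelog are hypothetical helpers; feed them the output of git log --pretty=%s since the last tag):

```python
import re
from collections import defaultdict

# Matches conventional-commit subjects: "feat: ...", "fix(api)!: ...", etc.
SUBJECT_RE = re.compile(r"^(\w+)(?:\([^)]*\))?!?:\s*(.+)$")

def group_commits(subjects):
    """Bucket commit subject lines by conventional-commit type.

    Subjects that don't match the conventional format land in "other".
    """
    groups = defaultdict(list)
    for subject in subjects:
        m = SUBJECT_RE.match(subject.strip())
        if m:
            groups[m.group(1)].append(m.group(2))
        else:
            groups["other"].append(subject.strip())
    return dict(groups)

def render_changelog(groups, order=("feat", "fix", "chore")):
    """Render grouped commits as a markdown changelog fragment."""
    sections = []
    for kind in order:
        if groups.get(kind):
            entries = "\n".join(f"- {msg}" for msg in groups[kind])
            sections.append(f"### {kind}\n{entries}")
    return "\n\n".join(sections)
```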
5. Migration Guide on Dependency Update
When a PR bumps a major dependency version (e.g., React 18 → 19), trigger an agent that reads the upstream changelog and generates a project-specific migration checklist — highlighting which of your files will be affected and what changes are needed.
Each of these follows the same pattern: headless cn -p "...", structured output, downstream action. The unlock is that Continue CLI is stateless and composable — it fits into the tools your team already uses (Git, GitHub Actions, Slack webhooks) rather than requiring a new platform.
FAQ
Q: Is Continue CLI free to use?
A: Yes. The cn CLI and Continue.dev platform are open source. You bring your own API key for whichever LLM you choose (Anthropic, OpenAI, Gemini, etc.) and pay that provider’s standard rates. Mission Control has a hosted tier with usage limits; the self-hosted path is entirely free.
Q: Does Continue CLI work with local models (Ollama)?
A: Yes. Set provider: ollama and model: llama3.2 (or any Ollama-supported model) in your config.yaml. Local models are useful for sensitive codebases where you don’t want code leaving your network. The trade-off is speed and capability compared to frontier models.
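A minimal config.yaml fragment for this setup, assuming the same models schema as the Anthropic example earlier:

```yaml
models:
  - provider: ollama
    model: llama3.2   # any model you have pulled locally with ollama pull
```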
Q: How does Continue CLI handle large codebases with many files?
A: Continue CLI uses the same context management as the IDE extension — it retrieves relevant file snippets using embeddings rather than loading the entire codebase. For very large repos, the --context flag lets you specify which files or directories to scope to, preventing context window overflow.
Q: Can I run Continue CLI in a Docker container for CI/CD?
A: Yes. Use the official Node.js base image, install @continue/cn via npm, inject your API keys as environment variables, and run cn -p "..." as a standard step. The CLI has no GUI dependencies and works in any headless Linux environment.
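A sketch of such a CI image; the package name comes from the install section above, and the rest is conventional Node image setup rather than an official Continue recipe:

```dockerfile
# Pin a specific CLI version in real use, since the Alpha channel may break.
FROM node:20-slim
RUN npm install -g @continue/cn
# API keys are injected at run time, never baked into the image:
#   docker run --rm -e ANTHROPIC_API_KEY -e CONTINUE_API_KEY ci-agent \
#     cn -p "Audit src/ for security issues."
ENTRYPOINT ["cn"]
```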
Q: What’s the difference between Continue CLI Alpha and the stable VS Code extension?
A: The VS Code extension (10M+ installs) is the stable, production-ready product used for interactive pair programming. The CLI (cn) is in Alpha — it has the same core capabilities but fewer polish features, less documentation, and the API may change before Beta. For production CI/CD use, pin to a specific version (@continue/cn@x.y.z) to avoid unexpected breaking changes.
