```
agent [OPTIONS]
```

## Options

| Flag | Default | Description |
| --- | --- | --- |
| `-p, --prompt <TEXT>` | — | Execute a single prompt and exit (non-interactive) |
| `-m, --model <MODEL>` | `claude-sonnet-4-20250514` | Model to use |
| `--api-base-url <URL>` | auto-detected | API endpoint URL |
| `--api-key <KEY>` | from env | API key (prefer the env var) |
| `--provider <NAME>` | `auto` | LLM provider: `anthropic`, `openai`, or `auto` |
| `--permission-mode <MODE>` | `ask` | Permission mode: `ask`, `allow`, `deny`, `plan`, `accept_edits` |
| `--dangerously-skip-permissions` | `false` | Skip all permission checks |
| `-C, --cwd <DIR>` | current dir | Working directory |
| `--max-turns <N>` | `50` | Maximum agent turns per request |
| `-v, --verbose` | `false` | Enable verbose output |
| `--dump-system-prompt` | `false` | Print the system prompt and exit |
| `-h, --help` | — | Show help |
| `--version` | — | Show version |
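With `--provider auto`, the tool picks a provider on its own. The table does not spell out how; the sketch below is one plausible heuristic (model-name prefix first, then whichever provider API key is set), with all names and rules being assumptions for illustration, not the tool's actual logic.

```python
def detect_provider(model: str, env: dict) -> str:
    """Illustrative guess at --provider auto resolution (assumed, not documented).

    First try the model-name prefix, then fall back to whichever
    provider API key is present in the environment.
    """
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith(("gpt-", "o1", "o3")):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    raise ValueError("cannot auto-detect provider; pass --provider explicitly")
```

If both keys are set and the model name is unrecognized, a real implementation would need a documented tiebreak; here Anthropic wins arbitrarily.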

## Environment variables

| Variable | Equivalent flag | Description |
| --- | --- | --- |
| `AGENT_CODE_API_KEY` | `--api-key` | API key (highest priority) |
| `ANTHROPIC_API_KEY` | `--api-key` | Anthropic API key |
| `OPENAI_API_KEY` | `--api-key` | OpenAI API key |
| `AGENT_CODE_API_BASE_URL` | `--api-base-url` | API endpoint URL |
| `AGENT_CODE_MODEL` | `--model` | Model name |
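The table says `AGENT_CODE_API_KEY` has the highest priority among environment variables, and an explicit `--api-key` flag presumably overrides all of them. A minimal sketch of that resolution order, assuming flag > tool-specific var > provider vars:

```python
def resolve_api_key(cli_key, env: dict):
    """Assumed key-resolution order: --api-key flag first,
    then AGENT_CODE_API_KEY, then the provider-specific vars."""
    if cli_key:
        return cli_key
    for var in ("AGENT_CODE_API_KEY", "ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
        if env.get(var):
            return env[var]
    return None  # no key found; the CLI would report an error here
```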

## Examples

```sh
# Interactive mode with Anthropic
ANTHROPIC_API_KEY=sk-ant-... agent

# One-shot with OpenAI
OPENAI_API_KEY=sk-... agent --model gpt-4o --prompt "explain main.rs"

# Local Ollama
agent --api-base-url http://localhost:11434/v1 --model llama3 --api-key x

# CI: fix tests without asking
agent --dangerously-skip-permissions --prompt "fix the failing tests"

# Read-only exploration
agent --permission-mode plan

# Debug: see what the LLM receives
agent --dump-system-prompt
```