Support per-mode default model configuration (plan mode vs. autopilot) #2958
Open
Labels
- area:agents — Sub-agents, fleet, autopilot, plan mode, background agents, and custom agents
- area:configuration — Config files, instruction files, settings, and environment variables
- area:models — Model selection, availability, switching, rate limits, and model-specific behavior
Describe the feature or problem you'd like to solve
Per-mode default model configuration (plan mode vs. autopilot)
Proposed solution
Allow users to configure a default AI model per interaction mode (specifically plan mode and autopilot mode) via the CLI config or a config file (e.g., ~/.copilot/config.json). Today, /model sets a single model for the entire session regardless of mode. Since plan mode is reasoning-heavy (architecture, task decomposition) and autopilot is execution-heavy (many sequential tool calls), users benefit from a different model optimized for each purpose: for example, a premium reasoning model for planning and a faster, cheaper model for autopilot execution.
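One possible shape for such a setting in ~/.copilot/config.json. Note that the `modeModels` key and the model IDs below are purely illustrative, not an existing Copilot CLI option; the top-level `model` would act as the fallback for any mode without an explicit entry:

```json
{
  "model": "claude-sonnet-4",
  "modeModels": {
    "plan": "claude-sonnet-4",
    "autopilot": "gpt-4.1-mini"
  }
}
```

A per-mode `/model` command variant (e.g., scoped to the current mode) could layer session-level overrides on top of these defaults.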
Example prompts or workflows
Additional context
This mirrors how some IDEs let you configure different models for different tasks (e.g., chat vs. inline completion). It would reduce the friction of manually switching models when changing modes, and help users manage their premium request quota more intentionally.