Describe the bug
When using Opus 4.7 with a Copilot Pro+ subscription, the effective available context window appears much smaller than that of comparable models such as GPT 5.4 under the same conditions.
In practice, this causes auto-compact to trigger very frequently, including multiple times within a single prompt/session, which makes the model difficult to use even for medium-complexity tasks.
This is especially noticeable because Opus-class models are generally better suited for more complex tasks, which often require a larger working context. I understand that a 1M-token context allocation may not be feasible for cost or product reasons, but the current limit seems too restrictive to be practical.
In my case, a large portion of the context appears to be consumed by System/Tools, leaving much less room for actual task context than with GPT 5.4.
Opus 4.7 context

GPT 5.4 context (for comparison, using the same plugins and tools on the same machine)

Affected version
GitHub Copilot CLI 1.0.36
Steps to reproduce the behavior
- Start a fresh Copilot CLI session on a Pro+ subscription.
- Switch to the Opus 4.7 model.
- Run a simple prompt to populate the context and observe that a large share of the context is already occupied by System/Tools.
- In the same setup, read a moderate amount of content (for example, around 10 markdown files of roughly 200-300 lines / 15-20 KB each) and ask for a summary (or any other medium-effort task).
- Observe that the context fills up and auto-compact triggers quickly, often multiple times during the same session or even during a single prompt.
- Repeat the same workflow with GPT 5.4 (or any other comparable model, such as Sonnet) and compare the effective remaining context and auto-compact frequency.
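To make the "moderate amount of content" step repeatable, here is a small sketch that generates sample markdown files matching the sizes described above (~10 files of roughly 200-300 lines / 15-20 KB each). The function name, file names, and padding text are arbitrary choices for illustration, not part of Copilot CLI:

```python
import os

def make_sample_docs(directory, count=10, lines=250):
    """Generate `count` placeholder markdown files of ~15-16 KB each."""
    paths = []
    for i in range(count):
        path = os.path.join(directory, f"doc_{i:02d}.md")
        with open(path, "w", encoding="utf-8") as f:
            f.write(f"# Sample document {i}\n\n")
            for n in range(lines):
                # ~60 characters per line keeps each file near the 15-20 KB range
                f.write(f"Line {n:03d}: placeholder prose padding for context-size testing.\n")
        paths.append(path)
    return paths
```

Pointing the CLI at a directory populated this way and asking for a summary should reproduce the auto-compact behavior described above.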
Expected behavior
Opus 4.7 should reserve a smaller portion of the window for System/Tools, closer to GPT 5.4 (or any other comparable model, such as Sonnet) under the same setup.
- A medium-complexity prompt should fit without repeated auto-compact during normal use.
- If the model must have a smaller context budget than GPT 5.4, it should still be large enough to handle moderate multi-file workflows reliably.
Additional context
- OS: Windows 11
- Shell: PowerShell
- Subscription: Copilot Pro+
- Comparison was made using the same plugins/tools setup as GPT 5.4