GPT 5.2 Codex - Compaction happens way too early and context limit is wrong #11086

@Simplereally

Description

GPT 5.2 Codex via GitHub Copilot compacts the session at around 100k tokens even though the model's context window is 272k.
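For reference, a per-model override in opencode.json might work around this until the limit is fixed upstream. This is a minimal sketch assuming the config accepts a `limit` override under `provider.<id>.models.<model>`; the exact keys are my assumption and not verified against the schema:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "github-copilot": {
      "models": {
        "gpt-5.2-codex": {
          "limit": {
            "context": 272000
          }
        }
      }
    }
  }
}
```

If no such override is supported, the 100k figure is presumably coming from the bundled model metadata for github-copilot and would need a fix there.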

Plugins

None

OpenCode version

latest

Steps to reproduce

  1. Use gpt-5.2-codex via GitHub Copilot
  2. The context indicator in the top right shows a 100k maximum when the model's actual context window is 272k
  3. Reach roughly 90k tokens in the session and compaction is triggered

Screenshot and/or share link

[screenshot]

Operating System

Windows 10 latest

Terminal

Windows Terminal

Metadata

Labels

bug (Something isn't working), windows
