V0.98.0 #472

Merged
dwash96 merged 56 commits into main from v0.98.0
Mar 31, 2026

Conversation

Collaborator

@dwash96 dwash96 commented Mar 27, 2026

Addresses:
#469

Includes:

  • feat: Add support for multiple --mcp-servers-files arguments #473
  • feat: Add agent model display and configuration improvements #470
  • Modify ShowNumberedContext into ShowContext for content-based targeting, enabling stable hashline/hashpos-based edits
  • Add "rules"-like file support to allow dynamically loadable files, similar to how AGENTS.md or CLAUDE.md are used
  • Add --rules argument and /rules command to support the above
  • Agent mode lints files as it edits them, detecting syntax errors and fixing them during operation
  • Modify similarity detection so it is less intrusive
  • Change the hashline/hashpos separator, since most LLM tokenizers see double colons as a single token (nice on the eyes, nice for the machine)
  • Decompose the conversation system into an instantiable set of base classes that will support multiple context streams for sub-agent development
  • Memory usage improvements by using GitPython's GitCmdObjectDB and general caching when searching the underlying git repository to generate the repository map
  • Agent mode prompt updates to make the steps easier for models to accomplish while using slightly fewer tokens
  • Make available commands easier to search by matching substrings in addition to prefixes
  • Fix quoted file names being incorrectly split in /add-like commands
  • Allow sending/injecting messages while a generation task is running in the TUI to increase general real-time steerability
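
The command-search change above (substring matches ranked after prefix matches) can be sketched as follows. This is a minimal illustration, not the actual cecli implementation; `find_commands` and the command list are hypothetical names.

```python
def find_commands(query, commands):
    """Rank candidate commands: exact prefix matches first,
    then commands that merely contain the query as a substring."""
    prefix = [c for c in commands if c.startswith(query)]
    substring = [c for c in commands if query in c and not c.startswith(query)]
    return prefix + substring

commands = ["/add", "/read-only", "/rules", "/drop", "/model"]
find_commands("/r", commands)      # prefix matches first: /read-only, /rules
find_commands("rules", commands)   # substring match still finds /rules
```

Ranking prefix hits before substring hits keeps the old completion behavior intact while letting a partial word anywhere in the command still resolve.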

Your Name and others added 30 commits March 21, 2026 05:58
- ShowContext tool reimagined to work on patterns in the file in question rather than line numbers
- Enforce inability to edit until ShowContext is called
- Add more obscurity to the content bits of the hashpos representation so the LLM does not accidentally learn patterns in the ending bits
- Don't allow hashlines to approximate anywhere in the file; use a 30-line cap (which may become configurable) on how tolerant content-based non-exact matching is
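
The capped content-based matching described above can be sketched roughly like this: trust the hinted position first, then search outward for the expected content, but never beyond a fixed drift. The function name, signature, and the 30-line constant placement are illustrative assumptions, not the actual cecli code.

```python
MAX_DRIFT = 30  # tolerance cap for non-exact positions (per the commit note)

def locate_anchor(lines, expected, hint_idx, max_drift=MAX_DRIFT):
    """Find `expected` content near `hint_idx` instead of trusting a raw
    line number; give up rather than matching anywhere in the file."""
    if 0 <= hint_idx < len(lines) and lines[hint_idx] == expected:
        return hint_idx
    # Search outward from the hint, nearest lines first.
    for drift in range(1, max_drift + 1):
        for idx in (hint_idx - drift, hint_idx + drift):
            if 0 <= idx < len(lines) and lines[idx] == expected:
                return idx
    return None  # no confident match within the cap
```

Refusing to match beyond the cap is what keeps edits stable: a hashline that has drifted slightly still resolves, while one that would only match a distant, probably unrelated line fails loudly instead.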
…ther long persistent context, like project plans)

- /rules command
- --rules argument
- General support for token counting, context management, hot reloading
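
The hot-reloading mentioned above could be done with a simple mtime check before each use, along these lines. This is a hypothetical sketch of the idea, not the actual cecli rules implementation; the class and method names are invented for illustration.

```python
import os

class RulesFile:
    """Serve a rules file's content, reloading it from disk
    whenever its modification time changes (hot reload)."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._content = ""

    def content(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:  # file changed on disk: reload it
            with open(self.path) as f:
                self._content = f.read()
            self._mtime = mtime
        return self._content
```

Checking the mtime on each access avoids a file watcher while still picking up edits to the rules file mid-session.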
…M less while fixing mistakes

- Remove indent_text tool since it almost never gets used
Co-authored-by: cecli (openai/openai_gemini_cli/gemini-2.5-pro)
Co-authored-by: cecli (openai/gemini_cli/gemini-2.5-pro)
szmania and others added 26 commits March 26, 2026 11:27
Co-authored-by: cecli (openai/gemini_cli/gemini-2.5-pro)
…s conversation system per coder class as another step towards sub-agent functionality
feat: Add support for multiple --mcp-servers-files arguments
feat: Add agent model display and configuration improvements
… running a generation task for greater steerability
@dwash96 dwash96 merged commit 57029fa into main Mar 31, 2026
13 checks passed
