Conversation
… (#10880)
Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: Roo Code <roomote@roocode.com>
…codex handlers (#10888)
Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <roomote@roocode.com>
* fix(condense): remove custom condensing model option

Remove the ability to specify a different model/API configuration for condensing conversations. Modern conversations include provider-specific data (tool calls, reasoning blocks, thought signatures) that only the originating model can properly understand and summarize.

Changes:
- Remove condensingApiHandler parameter from summarizeConversation()
- Remove condensingApiConfigId from context management and Task
- Remove API config dropdown for CONDENSE in settings UI
- Update telemetry to remove usedCustomApiHandler parameter
- Update related tests

Users can still customize the CONDENSE prompt text; only model selection is removed.

* fix: remove condensingApiConfigId from types and test fixtures

---------

Co-authored-by: Roo Code <roomote@roocode.com>
…ile copying (#10905)

* Fix EXT-553: Remove percentage-based progress tracking for worktree file copying

- Removed totalBytes from the CopyProgress interface
- Removed the Math.min() clamping that caused the stuck-at-100% issue
- Changed the UI from a progress bar to a spinner with an activity indicator
- Shows 'item — X MB copied' instead of a percentage
- Updated all 18 locale files
- Uses native cp with polling (no new dependencies); see the sketch below

* fix: translate copyingProgress text in all 17 non-English locale files

---------

Co-authored-by: Roo Code <roomote@roocode.com>
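A minimal sketch of the cp-plus-polling approach, assuming a Node environment; copyWithProgress, the 500 ms interval, and the flat fs.stat size check are illustrative stand-ins, not the PR's actual implementation (which reports per-item MB copied):

```typescript
// Hypothetical sketch: spawn native `cp` and poll the destination while it
// runs, reporting bytes copied rather than a percentage (no total needed).
import { spawn } from "node:child_process"
import { promises as fs } from "node:fs"

async function copyWithProgress(src: string, dest: string, onProgress: (bytesCopied: number) => void): Promise<void> {
	const cp = spawn("cp", ["-R", src, dest])
	const timer = setInterval(async () => {
		try {
			const { size } = await fs.stat(dest) // a real implementation would walk the tree recursively
			onProgress(size)
		} catch {
			// destination may not exist yet; ignore and poll again
		}
	}, 500)
	try {
		await new Promise<void>((resolve, reject) => {
			cp.on("close", (code) => (code === 0 ? resolve() : reject(new Error(`cp exited with code ${code}`))))
			cp.on("error", reject)
		})
	} finally {
		clearInterval(timer)
	}
}
```

Because only bytes copied are shown, there is no total to divide by, which is what removes the stuck-at-100% failure mode entirely.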
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
Co-authored-by: erdemgoksel <erdemgoksel@MAU-BILISIM42>
…ch checkboxes (#11253)

Remove the "Enable URL context" and "Enable Grounding with Google search" checkboxes from Gemini and Vertex provider settings, along with:
- enableUrlContext and enableGrounding fields from provider settings schemas
- URL context and Google Search tool injection in completePrompt methods
- Associated translation keys from all 18 locale files
- Related test cases updated to reflect the removal
- simplifySettings prop removed from Gemini and Vertex components (it was only used for the removed checkboxes in those components)

Co-authored-by: Roo Code <roomote@roocode.com>
…ks" (#11256)

Revert "refactor(task): append environment details into existing blocks (#11198)"

This reverts commit b0dc6ae.
chore: add changeset for v3.47.3
* changeset version bump

* Update CHANGELOG for version 3.47.3

Updated version number and removed redundant patch changes for 3.47.3.

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
Remove two console.warn messages that fire excessively when loading tasks from history:
- 'Attempting to finalize unknown tool call' in finalizeStreamingToolCall()
- 'Received chunk for unknown tool call' in processStreamingChunk()

The defensive null-return behavior is preserved; only the log output is removed.
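A minimal sketch of the preserved behavior; the function signature and the Map-based store are simplified stand-ins for the real streaming state:

```typescript
// Hypothetical sketch: the unknown-tool-call guard stays, only the warn is gone.
function finalizeStreamingToolCall(toolCallId: string, activeToolCalls: Map<string, unknown>): unknown | null {
	const toolCall = activeToolCalls.get(toolCallId)
	if (!toolCall) {
		// Previously: console.warn("Attempting to finalize unknown tool call", toolCallId)
		return null // defensive null-return is preserved
	}
	return toolCall
}
```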
Co-authored-by: Roo Code <roomote@roocode.com>
Migrates the IO Intelligence provider from the legacy BaseOpenAiCompatibleProvider (direct openai SDK) to OpenAICompatibleHandler (Vercel AI SDK).

- Extends OpenAICompatibleHandler instead of BaseOpenAiCompatibleProvider
- Uses getModelParams for model parameter resolution
- Updates tests to mock the ai module's streamText/generateText
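A sketch of that test-mocking pattern, assuming the repo's tests use vitest; the partial-mock shape shown is illustrative:

```typescript
// Hypothetical sketch: replace streamText/generateText from the `ai` module
// with spies while keeping the rest of the module intact.
import { vi } from "vitest"

vi.mock("ai", async (importOriginal) => {
	const actual = await importOriginal<typeof import("ai")>()
	return {
		...actual,
		streamText: vi.fn(),
		generateText: vi.fn(),
	}
})
```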
* refactor: migrate featherless provider to AI SDK

* fix: merge consecutive same-role messages in featherless R1 path

convertToAiSdkMessages does not merge consecutive same-role messages the way convertToR1Format did. When the system prompt is prepended as a user message and the conversation already starts with a user message, DeepSeek R1 can reject the request.

Add a mergeConsecutiveSameRoleMessages helper that collapses adjacent Anthropic messages sharing the same role before AI SDK conversion (see the sketch below). Includes a test that verifies no two successive messages share a role.
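A minimal sketch of what such a helper could look like, using the Anthropic SDK's MessageParam type; block handling is simplified relative to the real helper:

```typescript
import { Anthropic } from "@anthropic-ai/sdk"

type MessageParam = Anthropic.Messages.MessageParam
type ContentBlocks = Exclude<MessageParam["content"], string>

// Hypothetical sketch: collapse adjacent messages that share a role by
// concatenating their content blocks, so no two successive messages match.
function mergeConsecutiveSameRoleMessages(messages: MessageParam[]): MessageParam[] {
	const merged: MessageParam[] = []
	for (const message of messages) {
		const prev = merged[merged.length - 1]
		if (prev && prev.role === message.role) {
			// Normalize string content to blocks so the two parts can be concatenated.
			const toBlocks = (content: MessageParam["content"]): ContentBlocks =>
				typeof content === "string" ? [{ type: "text", text: content }] : content
			merged[merged.length - 1] = {
				role: prev.role,
				content: [...toBlocks(prev.content), ...toBlocks(message.content)],
			}
		} else {
			merged.push(message)
		}
	}
	return merged
}
```

Running this before conversion means a prepended system-as-user message and an initial user message collapse into one, which is what keeps DeepSeek R1 from rejecting the request.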
…lent temperature overrides (#11218)

* fix: DeepSeek temperature defaulting to 0 instead of 0.3

Pass defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE to getModelParams() in DeepSeekHandler.getModel() to ensure the correct default temperature (0.3) is used when no user configuration is provided.

Closes #11194

* refactor: make defaultTemperature required in getModelParams

Make the defaultTemperature parameter required in getModelParams() instead of defaulting to 0. This prevents providers with their own non-zero default temperature (like DeepSeek's 0.3) from being silently overridden by the implicit 0 default. Every provider now explicitly declares its temperature default, making the temperature resolution chain clear: user setting → model default → provider default.

---------

Co-authored-by: Roo Code <roomote@roocode.com>
Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
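A sketch of that resolution chain; resolveTemperature is a hypothetical reduction of getModelParams() to the temperature logic only:

```typescript
// Hypothetical sketch: defaultTemperature is required, so no provider can be
// silently overridden by an implicit 0.
interface TemperatureParams {
	userTemperature?: number // explicit user setting, if any
	modelTemperature?: number // per-model default, if the model defines one
	defaultTemperature: number // provider-wide default; every caller must pass it
}

function resolveTemperature({ userTemperature, modelTemperature, defaultTemperature }: TemperatureParams): number {
	// user setting → model default → provider default
	return userTemperature ?? modelTemperature ?? defaultTemperature
}

const DEEP_SEEK_DEFAULT_TEMPERATURE = 0.3
// With no user or model value, DeepSeek now resolves to 0.3 instead of 0.
const temperature = resolveTemperature({ defaultTemperature: DEEP_SEEK_DEFAULT_TEMPERATURE })
```

Making the parameter required turns a silent wrong default into a compile error at every call site, which is why the fix is in the signature rather than in each provider.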
* feat: migrate Bedrock provider to AI SDK

Replace the raw AWS SDK (@aws-sdk/client-bedrock-runtime) Bedrock handler with the Vercel AI SDK (@ai-sdk/amazon-bedrock). Reduces the provider from 1,633 lines to 575 lines (a 65% reduction).

Key changes:
- Use streamText()/generateText() instead of ConverseStreamCommand/ConverseCommand
- Use createAmazonBedrock() with native auth (access key, secret, session, profile via credentialProvider, API key, VPC endpoint as baseURL)
- Reasoning config via providerOptions.bedrock.reasoningConfig
- Anthropic beta headers via providerOptions.bedrock.anthropicBeta
- Thinking signature captured from providerMetadata.bedrock.signature on reasoning-delta stream events
- Thinking signature round-tripped via providerOptions.bedrock.signature on reasoning parts in convertToAiSdkMessages()
- Redacted thinking captured from providerMetadata.bedrock.redactedData
- isAiSdkProvider() returns true for reasoning block preservation
- Keep: getModel, ARN parsing, cross-region inference, cost calculation, service tier pricing, 1M context beta

Tests: 83 tests skipped (they mock old AWS SDK internals and need a rewrite for AI SDK mocking). 106 tests pass. 0 tests fail.

* fix: address review feedback for Bedrock AI SDK migration

- Wire usePromptCache into the AI SDK via providerOptions.bedrock.cachePoint on the system prompt and the last two user messages
- Remove a debug logger.info that fired on every stream event with providerMetadata
- Tighten isThrottlingError to match 'rate limit' instead of the broad 'rate'/'limit' substrings that false-positive on context length errors
- Use the shared handleAiSdkError utility for consistent error handling with status code preservation for retry logic

* fix: bedrock AI SDK migration - fix usage metrics, rewrite tests, remove dead code

- Fix reasoningTokens always 0 (usage.details?.reasoningTokens → usage.reasoningTokens)
- Fix cacheReadInputTokens always 0 (read from usage.inputTokenDetails instead of providerMetadata)
- Fix invokedModelId not extracted for prompt router cost calculation
- Rewrite all 6 skipped bedrock test suites for the AI SDK mocking pattern (140 tests pass)
- Remove dead code: bedrock-converse-format.ts, cache-strategy/ (6 files, ~2700 lines)

* chore: remove dead @anthropic-ai/bedrock-sdk dep and stale AWS SDK mocks

* chore: update pnpm-lock.yaml after removing @anthropic-ai/bedrock-sdk

* fix: compute cache point indices from original Anthropic messages before AI SDK conversion

The previous approach naively targeted the last 2 user messages in the post-conversion AI SDK array, but convertToAiSdkMessages() splits user messages containing tool_results into separate tool + user messages, causing cache points to land on the wrong messages (tiny text fragments instead of the intended meaty user turns). Now we identify the last 2 user messages in the original Anthropic message array (matching the Anthropic provider's caching strategy) and build a parallel-walk mapping to apply cachePoint to the correct corresponding AI SDK message.

* perf: optimize prompt caching with 3-point message strategy + anchor for 20-block window

The previous approach only cached the last 2 user messages (using 2 of the 4 available cache checkpoints for messages), leaving significant cache savings on the table for longer conversations.

The new strategy uses up to 3 message cache points (+ 1 system = 4 total), as sketched below:
- Last user message: write to cache for the next request
- Second-to-last user message: read from cache for the current request
- Anchor message at ~1/3 position: ensures the 20-block lookback window from the second-to-last breakpoint hits a stable cache entry, covering all assistant/tool messages in the middle of the conversation

Also extracted the parallel-walk mapping logic into a reusable applyCachePointsToAiSdkMessages() helper method.

Industry benchmarks show 70-95% token cache rates are achievable; this change should significantly improve our 39% baseline for longer multi-turn conversations.

* chore: remove stale bedrock-sdk external, fix arnInfo property name, remove unused exports

---------

Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
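A sketch of the 3-point selection under stated assumptions: it operates on the original Anthropic-style message array (before conversion splits tool_result-bearing user messages apart), and selectCachePointIndices with its 1/3 heuristic is illustrative, not the exact applyCachePointsToAiSdkMessages() logic:

```typescript
// Hypothetical sketch: choose up to 3 message cache points in the original
// Anthropic message array (a 4th checkpoint goes on the system prompt).
function selectCachePointIndices(messages: { role: "user" | "assistant" }[]): number[] {
	const userIndices = messages.flatMap((m, i) => (m.role === "user" ? [i] : []))
	const last = userIndices[userIndices.length - 1] // write point for the next request
	const secondToLast = userIndices[userIndices.length - 2] // read point for this request
	// Anchor near the 1/3 mark so the 20-block lookback window from the
	// second-to-last breakpoint still hits a stable cache entry.
	const anchor = userIndices.find((i) => i >= messages.length / 3)
	const points = new Set([anchor, secondToLast, last].filter((i): i is number => i !== undefined))
	return [...points].sort((a, b) => a - b)
}
```

Selecting indices before conversion and mapping them across with the parallel walk is what keeps cache points on the intended meaty user turns rather than on split-off fragments.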
…277)

* feat: add disabledTools setting to globally disable native tools

Add a disabledTools field to GlobalSettings that allows disabling specific native tools by name. This enables cloud agents to be configured with restricted tool access.

Schema:
- Add disabledTools: z.array(toolNamesSchema).optional() to globalSettingsSchema
- Add disabledTools to organizationDefaultSettingsSchema.pick()
- Add disabledTools to the ExtensionState Pick type

Prompt generation (tool filtering):
- Add disabledTools to the BuildToolsOptions interface
- Pass disabledTools through filterSettings to filterNativeToolsForMode()
- Remove disabled tools from the allowedToolNames set in filterNativeToolsForMode() (see the sketch below)

Execution-time validation (safety net):
- Extract disabledTools from state in presentAssistantMessage
- Convert disabledTools to toolRequirements format for validateToolUse()

Wiring:
- Add disabledTools to ClineProvider getState() and getStateToPostToWebview()
- Pass disabledTools to all buildNativeToolsArrayWithRestrictions() call sites

EXT-778

* fix: check toolRequirements before ALWAYS_AVAILABLE_TOOLS

Moves the toolRequirements check before the ALWAYS_AVAILABLE_TOOLS early-return in isToolAllowedForMode(). This ensures disabledTools can block always-available tools (switch_mode, new_task, etc.) at execution time, making the validation layer consistent with the filtering layer.
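A minimal sketch of the filtering step; filterDisabledTools is a hypothetical reduction of filterNativeToolsForMode() to just the disabledTools rule:

```typescript
// Hypothetical sketch: disabled tools are simply removed from the allowed set,
// and the same list is enforced again at execution time as a safety net.
function filterDisabledTools(allowedToolNames: Set<string>, disabledTools: string[] = []): Set<string> {
	const filtered = new Set(allowedToolNames)
	for (const name of disabledTools) {
		filtered.delete(name)
	}
	return filtered
}

// Once toolRequirements is checked before the ALWAYS_AVAILABLE_TOOLS
// early-return, even always-available tools can be blocked:
const allowed = filterDisabledTools(new Set(["read_file", "switch_mode", "new_task"]), ["new_task"])
// allowed → Set { "read_file", "switch_mode" }
```

The two layers are deliberately redundant: filtering keeps disabled tools out of the prompt, and execution-time validation rejects them even if the model calls one anyway.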
Add GetCommands, GetModes, and GetModels to the IPC protocol so external clients can fetch slash commands, available modes, and Roo provider models without going through the internal webview message channel.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
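A sketch of what the new request and response shapes might look like; the discriminated-union style and every field name here are assumptions, since the actual IPC schema isn't shown:

```typescript
// Hypothetical sketch of the three new IPC requests and their responses.
type IpcRequest =
	| { type: "GetCommands" } // fetch slash commands
	| { type: "GetModes" } // fetch available modes
	| { type: "GetModels" } // fetch Roo provider models

type IpcResponse =
	| { type: "Commands"; commands: { name: string; description?: string }[] }
	| { type: "Modes"; modes: { slug: string; name: string }[] }
	| { type: "Models"; models: string[] }
```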
…eatures grid (#11280) Co-authored-by: Roo Code <roomote@roocode.com>
* refactor: migrate baseten provider to AI SDK

* refactor(baseten): migrate to native @ai-sdk/baseten package

Replace OpenAICompatibleHandler with the dedicated @ai-sdk/baseten package, following the same pattern used by other native AI SDK providers (groq, deepseek, etc.). This uses createBaseten() for provider initialization and extends BaseProvider directly instead of the generic OpenAI-compatible handler.
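A minimal sketch of the native-provider pattern described above, assuming @ai-sdk/baseten exposes createBaseten() the way other AI SDK provider packages do; the model slug is illustrative:

```typescript
// Hypothetical sketch: initialize the provider once, then pass its models
// to the AI SDK's streamText, as native providers like groq/deepseek do.
import { createBaseten } from "@ai-sdk/baseten"
import { streamText } from "ai"

const baseten = createBaseten({ apiKey: process.env.BASETEN_API_KEY ?? "" })

async function main() {
	const result = streamText({
		model: baseten("deepseek-ai/DeepSeek-V3"), // model slug is illustrative
		prompt: "Say hello",
	})
	for await (const text of result.textStream) {
		process.stdout.write(text)
	}
}

main()
```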
…#11285) Co-authored-by: Roo Code <roomote@roocode.com>
* refactor: migrate zai provider to AI SDK using zhipu-ai-provider

* Update src/api/providers/zai.ts

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>

* fix: remove unused zai-format.ts (knip)

---------

Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Related GitHub Issue
Closes: #
Description
f179ba1
Test Procedure
Type of Change
src or test files.

Pre-Submission Checklist

- Lint passes (npm run lint).
- Debug code (console.log) has been removed.
- Tests pass (npm test).
- Branch is up to date with the main branch.
- npm run changeset has been run if this PR includes user-facing changes or dependency updates.

Screenshots / Videos
Documentation Updates
Additional Notes
Get in Touch