Fix: MCP stdio bridge - Make tool responses MCP-compliant and add debugging#11
Conversation
Add a Python stdio-based MCP server that bridges to ProxySQL's HTTPS MCP endpoint, enabling Claude Code to use ProxySQL MCP tools directly. The bridge:
- Implements the stdio MCP server protocol (for Claude Code)
- Acts as an MCP client to ProxySQL's HTTPS endpoint
- Supports the initialize, tools/list, and tools/call methods
- Handles authentication via Bearer tokens
- Is configurable via environment variables

Usage:
- Configure in Claude Code MCP settings
- Set the PROXYSQL_MCP_ENDPOINT environment variable
- Optional: PROXYSQL_MCP_TOKEN for auth
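The relay step at the heart of such a bridge can be sketched in a few lines. This is a minimal illustration only, not the bridge's actual code: it uses the stdlib `urllib` instead of an async HTTP client, and the `post` parameter is a hypothetical injection point so the framing logic can be exercised without a live endpoint.

```python
import json
import urllib.request

def relay(line: str, endpoint: str, token: str = "", post=None) -> str:
    """Forward one JSON-RPC line read from stdin to the HTTPS endpoint
    and return the response serialized as a single stdout line."""
    request = json.loads(line)
    if post is None:
        def post(payload):
            headers = {"Content-Type": "application/json"}
            if token:
                headers["Authorization"] = f"Bearer {token}"
            req = urllib.request.Request(
                endpoint,
                data=json.dumps(payload).encode("utf-8"),
                headers=headers,
            )
            with urllib.request.urlopen(req) as resp:  # blocking HTTP POST
                return json.loads(resp.read())
    return json.dumps(post(request), ensure_ascii=False)
```

The real bridge runs this in an async loop; the point here is only the stdin-line → HTTP POST → stdout-line framing.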
Add PROXYSQL_MCP_DEBUG environment variable to enable verbose logging of all stdio communication and ProxySQL HTTP requests/responses.
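A minimal sketch of how such a debug switch is typically parsed and used; the helper names are illustrative, not the bridge's actual functions.

```python
import os
import sys
from datetime import datetime

def debug_enabled(value: str) -> bool:
    """Interpret a debug env-var value the usual truthy way."""
    return value.strip().lower() in ("1", "true", "yes")

DEBUG = debug_enabled(os.getenv("PROXYSQL_MCP_DEBUG", ""))

def debug_log(direction: str, payload: str) -> None:
    """Trace one message to stderr with a millisecond timestamp,
    but only when PROXYSQL_MCP_DEBUG is set."""
    if DEBUG:
        ts = datetime.now().strftime("%H:%M:%S.%f")[:-3]
        sys.stderr.write(f"[{ts}] {direction}: {payload}\n")
```

Writing the trace to stderr (not stdout) matters for a stdio bridge: stdout carries the JSON-RPC protocol and must stay clean.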
ProxySQL MCP wraps tool responses in {"result": {...}, "success": true}.
The bridge now unwraps this to return just the actual result to Claude Code.
This fixes the LLM error 'The prompt parameter was not received normally'
which occurred because the LLM was receiving the malformed nested structure.
- Unwrap ProxySQL's {"success": ..., "result": ...} wrapper in tool responses for MCP protocol compliance
- Fix proxysql.cfg missing closing brace for the mcp_variables section
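The unwrapping step can be illustrated with a small helper. This is a sketch of the technique, not the bridge's actual code; the function name and the error-raising behavior are assumptions.

```python
from typing import Any

def unwrap_tool_response(response: Any) -> Any:
    """Strip ProxySQL's {"success": ..., "result": ...} envelope when
    present, so the client sees only the actual result; pass anything
    else through unchanged."""
    if isinstance(response, dict) and "success" in response and "result" in response:
        if not response["success"]:
            raise RuntimeError(response.get("error", "Tool execution failed"))
        return response["result"]
    return response
```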
…bridge
- Redirect stderr to /tmp/proxysql_mcp_bridge.log for debugging
- Add extreme verbosity with timestamps for all stdin/stdout/HTTP traffic
- CRITICAL FIX: Set stdout to line-buffered mode to prevent responses from being buffered and never reaching Claude Code (causing timeouts)
- Log all HTTP requests/responses to the ProxySQL MCP server
- Log all message handling and unwrapping operations
… buffering
The previous sys.stdout.reconfigure(line_buffering=True) didn't work when stderr is redirected. Now we create a new io.TextIOWrapper around sys.stdout.buffer, ensuring immediate flush. Also sets PYTHONUNBUFFERED=1 for extra safety.
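For reference, one common way to force immediate delivery on a wrapped binary stream looks like the sketch below. It is illustrative only; it uses `write_through=True`, which is one of several buffering knobs the commits above experiment with.

```python
import io

def line_flushed_writer(binary_stream) -> io.TextIOWrapper:
    """Wrap a binary stream so text written to it is pushed through
    immediately, regardless of where stderr points."""
    return io.TextIOWrapper(binary_stream, encoding="utf-8",
                            line_buffering=True, write_through=True)
```

In the bridge this would wrap `sys.stdout.buffer`; a `BytesIO` stands in for it when testing.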
…timeout
- Revert the stdout replacement changes (this was probably not the issue)
- Add detailed logging to _writeline to see exactly what happens when writing to stdout
The MCP protocol requires tool call results to be wrapped in content items
with type and text fields. This matches what other MCP servers do.
Before: {"result": [{"name": "testdb", ...}]}
After: {"result": [{"type": "text", "text": "[{\"name\": \"testdb\", ...}]"}]}
This should fix the issue where Claude Code was timing out waiting for responses.
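The wrapping described above can be sketched as a small helper (illustrative only; `to_mcp_tool_result` is not a function from the actual bridge).

```python
import json
from typing import Any

def to_mcp_tool_result(result: Any) -> dict:
    """Wrap a raw tool result in the MCP tool-call shape:
    a "content" array of typed text items plus an "isError" flag."""
    text = result if isinstance(result, str) else json.dumps(result)
    return {"content": [{"type": "text", "text": text}], "isError": False}
```

Note that a string result is used as-is; serializing it again would add a second layer of quotes.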
Remove all the unwrapping and TextContent wrapping logic that was added. Go back to the original simple pass-through that just returns the result from ProxySQL directly. The original format was correct.
The TextIOWrapper may have buffering issues when stderr is redirected. Writing directly to the binary buffer with encoded bytes ensures immediate delivery of responses to Claude Code.
The sys.stderr redirection may be interfering with stdout/stdin pipes. Now logging writes directly to a file handle instead of through sys.stderr.
Restore to exact original code from commit 01c182c. The original code is clean and simple - just passes through responses. Only added minimal file-based logging for debugging.
Added _log() calls to track:
- stdout writes (bytes and content preview)
- tools/call handler (name, response, result)
- main startup

The log is written to /tmp/proxysql_mcp_bridge.log
Per the MCP spec (https://modelcontextprotocol.io/specification/2025-11-25/server/tools):
- Tool call responses MUST have a "content" array
- Each content item has "type" and "text" fields
- The response includes an "isError" boolean

This was the root cause: Claude Code was waiting for the correct format.
The ProxySQL MCP server now wraps tool results in the correct MCP format:
- result.content: array of content items (type: "text", text: "...")
- result.isError: boolean

Per MCP spec: https://modelcontextprotocol.io/specification/2025-11-25/server/tools

Also simplified the bridge to pass through results directly, since the server now returns the correct format.
Added documentation for:
- Log file location (/tmp/proxysql_mcp_bridge.log)
- What information is logged
- How to use the logs for debugging
📝 Walkthrough

Summary by CodeRabbit

The pull request adds a new Python-based stdio bridge that relays MCP messages between Claude Code and ProxySQL's HTTPS endpoint, modifies the C++ tool handler to wrap results in MCP-compliant format, includes documentation for the bridge setup, and fixes a configuration syntax issue.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Claude as Claude Code
    participant Bridge as Stdio Bridge
    participant ProxySQL as ProxySQL MCP

    Claude->>Bridge: JSON-RPC initialize (stdin)
    Bridge->>ProxySQL: HTTP POST initialize (with token)
    ProxySQL-->>Bridge: Initialize response
    Bridge->>Claude: JSON-RPC initialize response (stdout)

    Claude->>Bridge: JSON-RPC tools/list (stdin)
    Bridge->>ProxySQL: HTTP POST tools/list
    ProxySQL-->>Bridge: Tools list response
    Bridge->>Claude: JSON-RPC tools/list response (stdout)

    Claude->>Bridge: JSON-RPC tools/call (stdin)
    Bridge->>ProxySQL: HTTP POST tools/call
    ProxySQL->>ProxySQL: Execute tool<br/>Wrap in MCP format
    ProxySQL-->>Bridge: Tool result (MCP-wrapped)
    Bridge->>Claude: JSON-RPC tools/call response (stdout)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes

Hello @renecannao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the integration between Claude Code and ProxySQL's Model Context Protocol (MCP) endpoint. The core issue of tool call responses not adhering to the MCP specification, which caused timeouts in Claude Code, has been addressed. By standardizing the response format directly within the ProxySQL C++ server and enhancing the Python stdio bridge with debugging logs, this change ensures reliable and transparent communication, facilitating smoother operation and easier troubleshooting for users.
Code Review
This pull request makes critical fixes to the MCP stdio bridge for Claude Code compatibility by ensuring tool responses adhere to the MCP specification. It also adds valuable debugging capabilities through logging and documentation updates.
My review focuses on improving the maintainability and robustness of the new C++ and Python code. I've suggested refactoring the C++ code to reduce duplication and improve clarity. For the new Python bridge script, my recommendations include making the logging more flexible, improving exception handling to be more specific, and updating to modern asyncio practices to ensure future compatibility and prevent potential issues with task cancellation.
```python
except json.JSONDecodeError as e:
    await self._write_error(-32700, f"Parse error: {e}", "")
except Exception as e:
    await self._write_error(-32603, f"Internal error: {e}", "")
```
The broad except Exception can catch and suppress asyncio.CancelledError, which would prevent the task from being properly cancelled. This can lead to resource leaks or unexpected behavior during shutdown. It's better to explicitly catch asyncio.CancelledError and re-raise it to allow for proper task cancellation.
Suggested change:

```python
except json.JSONDecodeError as e:
    await self._write_error(-32700, f"Parse error: {e}", "")
except asyncio.CancelledError:
    raise
except Exception as e:
    await self._write_error(-32603, f"Internal error: {e}", "")
```
```cpp
if (response.is_object() && response.contains("success") && response.contains("result")) {
    bool success = response["success"].get<bool>();
    if (!success) {
        // Tool execution failed - return error in MCP format
        json mcp_result;
        mcp_result["content"] = json::array();
        json error_content;
        error_content["type"] = "text";
        std::string error_msg = response.contains("error") ? response["error"].get<std::string>() : "Tool execution failed";
        error_content["text"] = error_msg;
        mcp_result["content"].push_back(error_content);
        mcp_result["isError"] = true;
        return mcp_result;
    }
    // Success - wrap result in MCP-compliant format with content array
    // Per MCP spec: https://modelcontextprotocol.io/specification/2025-11-25/server/tools
    json actual_result = response["result"];
    json mcp_result;
    mcp_result["content"] = json::array();
    json text_content;
    text_content["type"] = "text";
    text_content["text"] = actual_result.dump(2); // Pretty-print JSON with 2-space indent
    mcp_result["content"].push_back(text_content);
    mcp_result["isError"] = false;
    return mcp_result;
}

// Fallback: wrap response in MCP format (for compatibility with non-standard handlers)
json mcp_result;
mcp_result["content"] = json::array();
json text_content;
text_content["type"] = "text";
text_content["text"] = response.dump(2);
mcp_result["content"].push_back(text_content);
mcp_result["isError"] = false;
return mcp_result;
```
The logic for handling successful tool executions and the fallback case contains duplicated code for wrapping the response in the MCP format. This can be refactored to reduce redundancy and improve maintainability.
Additionally, if the tool's result is already a string, calling .dump() on it will add extra quotes, which is likely not the desired behavior. It's better to check if the result is a string and use its value directly in that case.
The suggested refactoring addresses both of these points by first determining the content to be wrapped and then performing the wrapping in a single, shared block of code.
```cpp
if (response.is_object() && response.contains("success") && response.contains("result")) {
    bool success = response["success"].get<bool>();
    if (!success) {
        // Tool execution failed - return error in MCP format
        json mcp_result;
        mcp_result["content"] = json::array();
        json error_content;
        error_content["type"] = "text";
        std::string error_msg = response.contains("error") ? response["error"].get<std::string>() : "Tool execution failed";
        error_content["text"] = error_msg;
        mcp_result["content"].push_back(error_content);
        mcp_result["isError"] = true;
        return mcp_result;
    }
    // Success - use the "result" field as the content to be wrapped
    response = response["result"];
}
// Wrap the response (or the 'result' field) in MCP-compliant format
// Per MCP spec: https://modelcontextprotocol.io/specification/2025-11-25/server/tools
json mcp_result;
json text_content;
text_content["type"] = "text";
if (response.is_string()) {
    text_content["text"] = response.get<std::string>();
} else {
    text_content["text"] = response.dump(2); // Pretty-print JSON with 2-space indent
}
mcp_result["content"] = json::array({text_content});
mcp_result["isError"] = false;
return mcp_result;
```

```python
_log_file = open("/tmp/proxysql_mcp_bridge.log", "a", buffering=1)

def _log(msg):
    _log_file.write(f"[{datetime.now().strftime('%H:%M:%S.%f')[:-3]}] {msg}\n")
    _log_file.flush()
```
Hardcoding the log file path to /tmp/ can be brittle, as this path may not exist or be writable on all systems (e.g., Windows). It's better practice to make this configurable.
Consider using an environment variable to specify the log file path, falling back to the current default if it's not set. This would make the script more portable and flexible for users in different environments.
Suggested change:

```python
_log_file_path = os.getenv("PROXYSQL_MCP_BRIDGE_LOG", "/tmp/proxysql_mcp_bridge.log")
_log_file = open(_log_file_path, "a", buffering=1)

def _log(msg):
    _log_file.write(f"[{datetime.now().strftime('%H:%M:%S.%f')[:-3]}] {msg}\n")
    _log_file.flush()
```
```python
except httpx.HTTPStatusError as e:
    return {
        "jsonrpc": "2.0",
        "error": {
            "code": -32000,
            "message": f"HTTP error: {e.response.status_code}",
            "data": str(e)
        },
        "id": request.get("id", "")
    }
except Exception as e:
    return {
        "jsonrpc": "2.0",
        "error": {
            "code": -32603,
            "message": f"Internal error: {str(e)}"
        },
        "id": request.get("id", "")
    }
```
Catching a broad Exception can hide bugs and make debugging more difficult. It's better to handle specific exceptions that you expect from httpx.
I suggest adding a specific except block for httpx.RequestError to handle network-related issues like connection errors or timeouts. This will provide more specific and helpful error messages to the user and improve the robustness of the client.
```python
except httpx.HTTPStatusError as e:
    return {
        "jsonrpc": "2.0",
        "error": {
            "code": -32000,
            "message": f"HTTP error: {e.response.status_code}",
            "data": str(e)
        },
        "id": request.get("id", "")
    }
except httpx.RequestError as e:
    return {
        "jsonrpc": "2.0",
        "error": {
            "code": -32002,
            "message": f"Request to ProxySQL failed: {e}"
        },
        "id": request.get("id", "")
    }
except Exception as e:
    return {
        "jsonrpc": "2.0",
        "error": {
            "code": -32603,
            "message": f"Internal error: {str(e)}"
        },
        "id": request.get("id", "")
    }
```

```python
async def _readline(self) -> Optional[str]:
    """Read a line from stdin."""
    loop = asyncio.get_event_loop()
    line = await loop.run_in_executor(None, sys.stdin.readline)
```
asyncio.get_event_loop() is deprecated since Python 3.10 and its behavior can be unexpected in some contexts. You should use asyncio.get_running_loop() instead, which is available since Python 3.7 and is the recommended way to get the current running event loop.
Suggested change:

```python
loop = asyncio.get_running_loop()
line = await loop.run_in_executor(None, sys.stdin.readline)
```
```python
async def _writeline(self, data: Any):
    """Write JSON data to stdout."""
    loop = asyncio.get_event_loop()
    output = json.dumps(data, ensure_ascii=False) + "\n"
```
Python bridge (scripts/mcp/proxysql_mcp_stdio_bridge.py):
- Make the log file path configurable via the PROXYSQL_MCP_BRIDGE_LOG env var
- Add httpx.RequestError exception handling for network issues
- Fix asyncio.CancelledError not being re-raised (HIGH priority)
- Replace deprecated asyncio.get_event_loop() with get_running_loop()

C++ server (lib/MCP_Endpoint.cpp):
- Refactor handle_tools_call() to reduce code duplication
- Handle string responses directly without calling .dump()
- Use a single shared wrapping block for all response types

Per review: #11
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @scripts/mcp/proxysql_mcp_stdio_bridge.py:
- Around line 336-345: The endpoint validation is unreachable because endpoint
is assigned a default; change the os.getenv call that sets endpoint (currently
using "https://127.0.0.1:6071/mcp/query") to not supply a default (so it returns
None/empty when unset) and keep the existing if not endpoint check to exit with
an error; ensure any logging of endpoint (the _log call) tolerates a None/empty
value or adjust it to log that the env var is missing.
🧹 Nitpick comments (5)
scripts/mcp/STDIO_BRIDGE_README.md (2)
43-57: Replace the hardcoded user-specific path with a placeholder.

Line 48 contains a hardcoded path, /home/rene/proxysql-vec/scripts/mcp/..., which is specific to a particular development environment. Use a generic placeholder to make the documentation more portable.

📝 Suggested change

```diff
 "command": "python3",
-"args": ["/home/rene/proxysql-vec/scripts/mcp/proxysql_mcp_stdio_bridge.py"],
+"args": ["/path/to/proxysql-vec/scripts/mcp/proxysql_mcp_stdio_bridge.py"],
 "env": {
```
7-12: Consider adding a language identifier to the ASCII diagram code block.

Per markdownlint MD040, fenced code blocks should have a language specified. For ASCII diagrams, you can use text or plaintext as the language identifier for consistency.

📝 Suggested change: open the fence as ```text instead of a bare fence:

```text
┌─────────────┐    stdio    ┌──────────────────┐    HTTPS    ┌──────────┐
│ Claude Code │ ──────────> │   stdio Bridge   │ ──────────> │ ProxySQL │
│ (MCP Client)│             │  (this script)   │             │   MCP    │
└─────────────┘             └──────────────────┘             └──────────┘
```

scripts/mcp/proxysql_mcp_stdio_bridge.py (3)
38-42: The log file is never explicitly closed.

The file opened at line 39 remains open for the lifetime of the process. While this typically works because the OS closes file handles on process exit, consider registering an atexit handler for cleaner resource management, especially if the bridge is imported as a module rather than run directly.

📝 Suggested improvement

```diff
+import atexit
+
 # Minimal logging to file for debugging
 # Log path can be configured via PROXYSQL_MCP_BRIDGE_LOG environment variable
 _log_file_path = os.getenv("PROXYSQL_MCP_BRIDGE_LOG", "/tmp/proxysql_mcp_bridge.log")
 _log_file = open(_log_file_path, "a", buffering=1)
+atexit.register(_log_file.close)
+
 def _log(msg):
     _log_file.write(f"[{datetime.now().strftime('%H:%M:%S.%f')[:-3]}] {msg}\n")
     _log_file.flush()
```
199-206: Minor style issues flagged by Ruff.

Line 206 uses an f-string without placeholders. This is a minor style issue.

📝 Suggested fix

```diff
-    _log(f"WRITE stdout: flushed")
+    _log("WRITE stdout: flushed")
```
257-272: Unused params argument in _handle_initialize.

The params argument is not used. Consider logging the client info from params for debugging purposes, or remove the argument if it's not needed.

📝 Option: Log client info for debugging

```diff
 async def _handle_initialize(self, req_id: str, params: Dict[str, Any]) -> Dict[str, Any]:
     """Handle initialize request."""
+    client_info = params.get("clientInfo", {})
+    _log(f"initialize: client={client_info.get('name', 'unknown')}")
     return {
         "jsonrpc": "2.0",
```
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- lib/MCP_Endpoint.cpp
- scripts/mcp/STDIO_BRIDGE_README.md
- scripts/mcp/proxysql_mcp_stdio_bridge.py
- src/proxysql.cfg
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
scripts/mcp/STDIO_BRIDGE_README.md
7-7: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Ruff (0.14.11)
scripts/mcp/proxysql_mcp_stdio_bridge.py
38-38: Probable insecure usage of temporary file or directory: "/tmp/proxysql_mcp_bridge.log"
(S108)
90-90: Avoid specifying long messages outside the exception class
(TRY003)
119-119: Do not catch blind exception: Exception
(BLE001)
124-124: Use explicit conversion flag
Replace with conversion flag
(RUF010)
188-188: Do not catch blind exception: Exception
(BLE001)
203-203: Use explicit conversion flag
Replace with conversion flag
(RUF010)
206-206: f-string without any placeholders
Remove extraneous f prefix
(F541)
257-257: Unused method argument: params
(ARG002)
355-355: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (4)
src/proxysql.cfg (1)
60-71: LGTM!

The mcp_variables block is now properly closed with the brace, and the blank line separator before genai_variables improves readability and follows the pattern used elsewhere in this config file.

lib/MCP_Endpoint.cpp (1)
344-380: MCP-compliant response wrapping looks correct.

The logic properly:
- Unwraps ProxySQL's internal {"success": ..., "result": ...} format
- Handles failure cases with isError: true
- Wraps successful results in the MCP content array format

One minor note: the spec URL comment on line 367 references 2025-11-25, but the protocol version used elsewhere is 2024-11-05. Consider updating the comment to match, or use the correct specification date.

scripts/mcp/proxysql_mcp_stdio_bridge.py (2)
163-189: The main server loop and error handling look good.

The loop correctly:
- Uses an async context manager for the ProxySQL client
- Sends the notifications/initialized notification on startup
- Re-raises asyncio.CancelledError for proper task cancellation (lines 186-187)
- Catches JSON parse errors separately from general exceptions

45-127: The ProxySQLMCPEndpoint class implementation is solid.

Good use of the async context manager pattern, proper error handling with JSON-RPC compliant error responses, and an appropriate timeout configuration (120s). The broad Exception catch at line 119 is acceptable here as a catch-all for unexpected errors in an RPC context.
```python
endpoint = os.getenv("PROXYSQL_MCP_ENDPOINT", "https://127.0.0.1:6071/mcp/query")
token = os.getenv("PROXYSQL_MCP_TOKEN", "")
insecure_ssl = os.getenv("PROXYSQL_MCP_INSECURE_SSL", "0").lower() in ("1", "true", "yes")

_log(f"START: endpoint={endpoint}, insecure_ssl={insecure_ssl}")

# Validate endpoint
if not endpoint:
    sys.stderr.write("Error: PROXYSQL_MCP_ENDPOINT environment variable is required\n")
    sys.exit(1)
```
Unreachable code: endpoint validation always passes.
Line 336 provides a default value for endpoint, so the check on line 343 (if not endpoint) will never be true. Either remove the default to make the environment variable truly required, or remove the dead check.
🐛 Option A: Make the endpoint truly required (recommended)

```diff
-endpoint = os.getenv("PROXYSQL_MCP_ENDPOINT", "https://127.0.0.1:6071/mcp/query")
+endpoint = os.getenv("PROXYSQL_MCP_ENDPOINT")
 token = os.getenv("PROXYSQL_MCP_TOKEN", "")
 insecure_ssl = os.getenv("PROXYSQL_MCP_INSECURE_SSL", "0").lower() in ("1", "true", "yes")
```

Option B: Remove the dead check and keep the default

```diff
 _log(f"START: endpoint={endpoint}, insecure_ssl={insecure_ssl}")

-# Validate endpoint
-if not endpoint:
-    sys.stderr.write("Error: PROXYSQL_MCP_ENDPOINT environment variable is required\n")
-    sys.exit(1)
-
 # Run the server
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
endpoint = os.getenv("PROXYSQL_MCP_ENDPOINT")
token = os.getenv("PROXYSQL_MCP_TOKEN", "")
insecure_ssl = os.getenv("PROXYSQL_MCP_INSECURE_SSL", "0").lower() in ("1", "true", "yes")

_log(f"START: endpoint={endpoint}, insecure_ssl={insecure_ssl}")

# Validate endpoint
if not endpoint:
    sys.stderr.write("Error: PROXYSQL_MCP_ENDPOINT environment variable is required\n")
    sys.exit(1)
```
🤖 Prompt for AI Agents
In @scripts/mcp/proxysql_mcp_stdio_bridge.py around lines 336 - 345, The
endpoint validation is unreachable because endpoint is assigned a default;
change the os.getenv call that sets endpoint (currently using
"https://127.0.0.1:6071/mcp/query") to not supply a default (so it returns
None/empty when unset) and keep the existing if not endpoint check to exit with
an error; ensure any logging of endpoint (the _log call) tolerates a None/empty
value or adjust it to log that the env var is missing.
Summary
This PR contains critical fixes and improvements to the MCP stdio bridge to make it work properly with Claude Code.
Root Cause & Fix
The main issue was that tool call responses were not following the MCP specification format. Per MCP spec, tool responses must have:
```json
{
  "result": {
    "content": [{"type": "text", "text": "..."}],
    "isError": false
  }
}
```

The ProxySQL MCP server was returning raw data instead, causing Claude Code to time out waiting for properly formatted responses.
Changes
C++ Server (lib/MCP_Endpoint.cpp):
- handle_tools_call() now wraps tool results in the MCP-compliant content array format
- Returns isError: true for failed tool executions

Python Bridge (scripts/mcp/proxysql_mcp_stdio_bridge.py):
- Logs to /tmp/proxysql_mcp_bridge.log for debugging

Documentation:
Test Plan
- list_schemas tool works
- list_tables tool works
- describe_table tool works

Commits Overview
This branch includes the debugging journey: