feat: add InnoDB Cluster topology (#58) #63
Conversation
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused this review to avoid an influx of comments from new commits. This behavior is configurable in the review settings.
📝 Walkthrough

Adds an InnoDB Cluster replication topology: a new CLI flag (--skip-router), defaults and validation entries, globals/template constants, embedded cluster templates, a sandbox implementation that creates and bootstraps clusters via mysqlsh with optional mysqlrouter, and CI workflow coverage for cluster tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as dbdeployer CLI
    participant Sandbox as CreateInnoDBCluster
    participant Bin as mysqlsh/mysqlrouter
    participant Nodes as mysqld nodes
    participant Router as mysqlrouter process
    CLI->>Sandbox: deploy --topology=innodb-cluster (nodes, skipRouter)
    Sandbox->>Bin: locate mysqlsh (required) and mysqlrouter (optional)
    Sandbox->>Nodes: allocate ports & create node sandboxes
    Nodes-->>Sandbox: node sandboxes created
    Sandbox->>Nodes: start nodes and prepare GR scripts
    Sandbox->>Bin: run mysqlsh init_cluster script to create InnoDB Cluster
    Bin-->>Sandbox: cluster initialized / status
    alt skipRouter == false
        Sandbox->>Bin: run mysqlrouter --bootstrap against primary
        Bin-->>Router: router files created
        Sandbox->>Router: start router (start.sh)
        Router-->>Sandbox: router running (or return warning)
    end
    Sandbox-->>CLI: deployment complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Code Review
This pull request introduces support for deploying InnoDB Cluster topologies using MySQL Shell and optional MySQL Router. The review identified several critical issues, including potential runtime panics due to unsafe slice access, missing application of InnoDB Cluster configuration templates, incorrect initialization order that conflicts with MySQL Shell's AdminAPI, redundant capability checks within loops, and unused function parameters.
```go
if err != nil {
	return err
}
rev := vList[2]
```
The code accesses vList[2] without verifying that the slice contains at least three elements. If a version string with fewer than three parts (e.g., "8.4") is provided, this will cause a runtime panic.
```go
if len(vList) < 3 {
	return fmt.Errorf("invalid version format: %s (expected x.y.z)", sandboxDef.Version)
}
rev := vList[2]
```

```go
if err != nil {
	return err
}
sandboxDef.ReplOptions = SingleTemplates[globals.TmplReplicationOptions].Contents + "\n" + replOptionsText
```
The TmplInnoDBClusterOptions template is defined in the collection but never applied to the SandboxDef. This means specific InnoDB Cluster options, such as disabled_storage_engines, will be missing from the generated my.cnf files.
```diff
-sandboxDef.ReplOptions = SingleTemplates[globals.TmplReplicationOptions].Contents + "\n" + replOptionsText
+icData := common.StringMap{
+	"ServerId": sandboxDef.ServerId,
+	"MasterIp": masterIp,
+	"Port":     sandboxDef.Port,
+}
+icOptions, err := common.SafeTemplateFill("innodb_cluster_options",
+	InnoDBClusterTemplates[globals.TmplInnoDBClusterOptions].Contents, icData)
+if err != nil {
+	return err
+}
+sandboxDef.ReplOptions = SingleTemplates[globals.TmplReplicationOptions].Contents + "\n" + replOptionsText + "\n" + icOptions
```
```go
common.CondPrintln(path.Join(common.ReplaceLiteralHome(sandboxDef.SandboxDir), globals.ScriptInitializeNodes))
logger.Printf("Running group replication initialization script\n")
_, err := common.RunCmd(path.Join(sandboxDef.SandboxDir, globals.ScriptInitializeNodes))
if err != nil {
	return fmt.Errorf("error initializing group replication for InnoDB Cluster: %s", err)
}
```
Calling globals.ScriptInitializeNodes (standard Group Replication initialization) before globals.ScriptInitCluster (MySQL Shell AdminAPI) will likely cause the deployment to fail. Standard GR initialization bootstraps the cluster manually, but MySQL Shell's dba.createCluster() expects nodes to be available but not already part of an active Group Replication cluster. You should skip the standard GR initialization and let MySQL Shell handle the bootstrap.
```go
isMinimumMySQLXDefault, err := common.HasCapability(sandboxDef.Flavor, common.MySQLXDefault, sandboxDef.Version)
if err != nil {
	return err
}
```
```go
}

// bootstrapRouter bootstraps MySQL Router against the InnoDB Cluster.
func bootstrapRouter(mysqlrouterPath, routerDir string, primaryPort int, dbPassword string, logger *defaults.Logger) error {
```
The dbPassword parameter is passed to bootstrapRouter but is never used within the function body. It should be removed if it's not required for the bootstrap process.
```diff
-func bootstrapRouter(mysqlrouterPath, routerDir string, primaryPort int, dbPassword string, logger *defaults.Logger) error {
+func bootstrapRouter(mysqlrouterPath, routerDir string, primaryPort int, logger *defaults.Logger) error {
```
Pull request overview
Adds a new replication topology, --topology=innodb-cluster, to deploy an InnoDB Cluster stack (Group Replication bootstrapped via MySQL Shell AdminAPI) with optional MySQL Router support controlled by --skip-router.
Changes:
- Introduces CreateInnoDBCluster() to provision nodes, initialize Group Replication, bootstrap the cluster via mysqlsh, and optionally bootstrap/start mysqlrouter.
- Adds a new embedded template collection and scripts for cluster init/status and router start/stop.
- Extends CLI/defaults/globals to support the new topology label, scripts, and default prefix/base port, plus a --skip-router flag.
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| sandbox/templates/cluster/router_stop.gotxt | Adds router stop helper script template |
| sandbox/templates/cluster/router_start.gotxt | Adds router start helper script template |
| sandbox/templates/cluster/innodb_cluster_options.gotxt | Adds InnoDB Cluster-specific options template |
| sandbox/templates/cluster/init_cluster.gotxt | Adds mysqlsh-driven cluster bootstrap template |
| sandbox/templates/cluster/check_cluster.gotxt | Adds mysqlsh-driven cluster status template |
| sandbox/templates.go | Registers the new cluster template collection |
| sandbox/replication.go | Adds SkipRouter and wires innodb-cluster topology dispatch |
| sandbox/innodb_cluster.go | Implements the new InnoDB Cluster deployment flow |
| sandbox/cluster_templates.go | Embeds new cluster templates into InnoDBClusterTemplates |
| globals/template_names.go | Defines template name constants for cluster templates |
| globals/globals.go | Adds topology/script/flag constants and updates allowed topologies |
| defaults/defaults.go | Adds defaults for ic_msb_ prefix and base port |
| cmd/replication.go | Adds --skip-router, documents new topology, forwards flag to sandbox layer |
```go
sandboxDef.Multi = true
sandboxDef.LoadGrants = true
sandboxDef.Prompt = fmt.Sprintf("%s%d", nodeLabel, i)
sandboxDef.SBType = "innodb-cluster-node"
```
sandboxDef.SBType is set to "innodb-cluster-node", which means sandbox/sandbox.go will emit a report-host = node-N line in my.sandbox.cnf. At the same time, the Group Replication options template being appended via sandboxDef.ReplOptions includes its own report-host=... line, so the generated config ends up with duplicate (and potentially conflicting) report-host settings. Consider setting sandboxDef.SkipReportHost = true for this topology, or reusing SBType="group-node" (as CreateGroupReplication does) so ReportHost is omitted and only the GR template controls it.
```diff
 sandboxDef.SBType = "innodb-cluster-node"
+sandboxDef.SkipReportHost = true
```
```go
}

common.CondPrintf("InnoDB Cluster directory installed in %s\n", common.ReplaceLiteralHome(sandboxDef.SandboxDir))
common.CondPrintf("run 'dbdeployer usage multiple' for basic instructions'\n")
```
The final user message has a stray trailing quote: run 'dbdeployer usage multiple' for basic instructions'. This looks like a typo and will show up in CLI output; please remove the extra ' (or adjust the wording if a different command should be suggested for this topology).
```diff
-common.CondPrintf("run 'dbdeployer usage multiple' for basic instructions'\n")
+common.CondPrintf("run 'dbdeployer usage multiple' for basic instructions\n")
```
```sh
# Configure the first instance for cluster use
$MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false
```
init_cluster embeds the root password directly in the --uri root:{{.DbPassword}}@... connection string. If --db-password contains URI-special characters (notably :, @, /, ?, #), mysqlsh parsing can break and cluster init will fail. Prefer passing the password via a safer mechanism (e.g., mysqlsh password option / stdin / environment), or ensure proper URI escaping/encoding before inserting it into the template.
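The escaping this comment asks for can be sketched with the Go standard library. This is only an illustration: buildShellURI is a hypothetical helper, not part of dbdeployer, and note that url.QueryEscape encodes a space as '+', so a password containing spaces would still need %20 encoding for strict URI use.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildShellURI assembles a mysqlsh-style connection URI with a
// percent-encoded password, so characters such as ':', '@', '/', '?'
// and '#' cannot break URI parsing.
func buildShellURI(user, password, host string, port int) string {
	return fmt.Sprintf("%s:%s@%s:%d", user, url.QueryEscape(password), host, port)
}

func main() {
	// Unescaped, "p@ss:w/rd" would make the parser misread host and port.
	fmt.Println(buildShellURI("root", "p@ss:w/rd", "127.0.0.1", 3307))
	// → root:p%40ss%3Aw%2Frd@127.0.0.1:3307
}
```

The template would then interpolate the pre-escaped value instead of the raw {{.DbPassword}}.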
```sh
# Configure the first instance for cluster use
$MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false

sleep 2

# Create the cluster on the primary
$MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e "
var cluster = dba.createCluster('{{.ClusterName}}', {memberWeight: 90});
print('Cluster created successfully');
```
init_cluster hard-codes the cluster admin credentials to icadmin/icadmin (both for dba configure-instance and subsequent --uri usage). This introduces a weak, non-configurable credential into every deployed sandbox. Consider deriving these credentials from existing config (e.g., reuse --db-user/--db-password, or generate/store a random password in the sandbox directory and reference it consistently in the scripts).
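One way to avoid the fixed icadmin/icadmin pair is to generate a random secret at deploy time. This is a sketch under the assumption that the value would also be persisted in the sandbox directory so init_cluster/check_cluster can read it back; randomClusterPassword is an illustrative name, not a dbdeployer function.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomClusterPassword returns 2*n hex characters drawn from a CSPRNG,
// suitable as a one-off cluster admin password.
func randomClusterPassword(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	pwd, err := randomClusterPassword(16)
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster admin password: %s (%d chars)\n", pwd, len(pwd))
}
```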
```go
	globals.NdbLabel)

}
skipRouter, _ := flags.GetBool(globals.SkipRouterLabel)
```
--skip-router is accepted unconditionally and forwarded into ReplicationData, but it only has an effect for the innodb-cluster topology. For consistency with existing option validation (e.g., --ndb-nodes, --single-primary), consider erroring out when --skip-router is provided with any other topology to avoid surprising no-op flags.
```diff
 skipRouter, _ := flags.GetBool(globals.SkipRouterLabel)
+if skipRouter && topology != globals.InnodbClusterLabel {
+	common.Exitf(1, "option '%s' can only be used with '%s' topology ",
+		globals.SkipRouterLabel,
+		globals.InnodbClusterLabel)
+}
```
```diff
 Long: `The replication command allows you to deploy several nodes in replication.
 Allowed topologies are "master-slave" for all versions, and "group", "all-masters", "fan-in"
 for 5.7.17+.
-Topologies "pcx" and "ndb" are available for binaries of type Percona Xtradb Cluster and MySQL Cluster.
+Topologies "pxc" and "ndb" are available for binaries of type Percona Xtradb Cluster and MySQL Cluster.
+Topology "innodb-cluster" deploys Group Replication managed by MySQL Shell AdminAPI with optional
+MySQL Router for connection routing (requires MySQL 8.0.11+ and mysqlsh).
```
--with-proxysql wiring later in this function computes the created sandbox directory using the master/slave prefix (MasterSlavePrefix). With the new innodb-cluster topology using InnoDBClusterPrefix, ProxySQL deployment will look in the wrong directory and fail when both --topology=innodb-cluster and --with-proxysql are used. Consider deriving sandboxDir from the selected topology (or reading it from the sandbox catalog/description after deployment) instead of assuming master/slave.
```diff
@@ -0,0 +1,10 @@
+# InnoDB Cluster options
+# Generated by dbdeployer {{.AppVersion}}
+server-id={{.ServerId}}
```
innodb_cluster_options.gotxt uses server-id={{.ServerId}}, but ServerId is already rendered as server-id=<n> by sandbox/sandbox.go (used by templates/single/my_cnf.gotxt). If this template is ever included, it would produce server-id=server-id=<n>. Also, this template does not appear to be referenced anywhere in the new topology code, so it may be dead/unusable as-is. Consider either wiring it into the node config generation (and fixing placeholders), or removing it until it’s needed.
```diff
-server-id={{.ServerId}}
+{{.ServerId}}
```
```go
replicationCmd.PersistentFlags().BoolP(globals.SuperReadOnlyLabel, "", false, "Set super-read-only for slaves")
replicationCmd.PersistentFlags().Bool(globals.ReplHistoryDirLabel, false, "uses the replication directory to store mysql client history")
setPflag(replicationCmd, globals.ChangeMasterOptions, "", "CHANGE_MASTER_OPTIONS", "", "options to add to CHANGE MASTER TO", true)
replicationCmd.PersistentFlags().Bool(globals.SkipRouterLabel, false, "Skip MySQL Router deployment for InnoDB Cluster topology")
replicationCmd.PersistentFlags().Bool("with-proxysql", false, "Deploy ProxySQL alongside the replication sandbox")
```
There is extensive testscript coverage for existing replication topologies (e.g., ts/templates/group/group.tmpl, ts/templates/replication/replication.tmpl), but no automated coverage for the new --topology=innodb-cluster flow. Consider adding a testscript smoke test that deploys with --topology=innodb-cluster --skip-router --skip-start using a mock basedir that includes a stub mysqlsh binary, and asserts that the expected scripts (init_cluster, check_cluster, etc.) are generated.
```go
// bootstrapRouter bootstraps MySQL Router against the InnoDB Cluster.
func bootstrapRouter(mysqlrouterPath, routerDir string, primaryPort int, dbPassword string, logger *defaults.Logger) error {
	err := os.MkdirAll(routerDir, globals.PublicDirectoryAttr)
```
bootstrapRouter takes a dbPassword parameter but never uses it, making the API misleading. Consider removing this parameter (and the corresponding argument at the call site) or using it intentionally if credentials need to be passed differently.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cmd/replication.go (1)
244-258: ⚠️ Potential issue | 🟠 Major

The new Router-free path still breaks with --with-proxysql.

SkipRouter is passed through correctly here, but the with-proxysql block at Line 266 still rebuilds the sandbox directory with MasterSlavePrefix and then reads master/nodeN children. For innodb-cluster, that means the intended Router-free + ProxySQL path will fail right after the sandbox is created. Please derive that path/naming from topology (or return it from CreateReplicationSandbox) instead of hardcoding the master-slave layout.

Also applies to: 302-304, 329-330, 349-349
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 244-258: the with-proxysql code assumes a master/slave layout and rebuilds sandbox paths using MasterSlavePrefix and node names (e.g., "master", "nodeN"), which breaks when SkipRouter is true or for topologies like innodb-cluster. Update the logic in the with-proxysql block to derive the sandbox path/naming from the topology variable (or modify CreateReplicationSandbox to return the actual created path/naming) instead of hardcoding master/slave names; use the topology, nodes, ndbNodes and/or the returned value from CreateReplicationSandbox to compute the correct service directories for ProxySQL setup.

defaults/defaults.go (1)

240-258: ⚠️ Potential issue | 🟡 Minor

Add InnoDBClusterBasePort to the existing base-port collision check.

The new prefix is compared against MultiplePrefix, but the matching base port is not. ValidateDefaults will currently accept multiple-base-port == innodb-cluster-base-port, which makes the new topology the only one missing that guard.

🩹 Proposed fix
```diff
 noConflicts := nd.MultipleBasePort != nd.GroupReplicationSpBasePort &&
 	nd.MultipleBasePort != nd.GroupReplicationBasePort &&
 	nd.MultipleBasePort != nd.MasterSlaveBasePort &&
 	nd.MultipleBasePort != nd.FanInReplicationBasePort &&
 	nd.MultipleBasePort != nd.AllMastersReplicationBasePort &&
 	nd.MultipleBasePort != nd.NdbBasePort &&
 	nd.MultipleBasePort != nd.NdbClusterPort &&
 	nd.MultipleBasePort != nd.PxcBasePort &&
+	nd.MultipleBasePort != nd.InnoDBClusterBasePort &&
 	nd.MultiplePrefix != nd.GroupSpPrefix &&
 	nd.MultiplePrefix != nd.GroupPrefix &&
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@defaults/defaults.go` around lines 240 - 258, The base-port collision check in the noConflicts expression omits InnoDBClusterBasePort so ValidateDefaults can wrongly allow nd.MultipleBasePort == nd.InnoDBClusterBasePort; update the noConflicts boolean (the block comparing nd.MultipleBasePort against other base ports) to also compare nd.MultipleBasePort != nd.InnoDBClusterBasePort so the new InnoDB cluster topology is guarded the same as the other topologies (look for the noConflicts declaration in defaults.go that checks MultipleBasePort).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sandbox/innodb_cluster.go`:
- Around line 531-583: The router bootstrap is only run inside the
!sandboxDef.SkipStart block so with --skip-start the generated router scripts
(ScriptRouterStart/ScriptRouterStop) lack the config directory; change the logic
to run bootstrapRouter whenever router is enabled (when !skipRouter &&
mysqlrouterPath != ""), not just when starting the cluster: after scripts are
written (after writeScripts loop) call bootstrapRouter(mysqlrouterPath,
routerDir, basePort+1, sandboxDef.DbPassword, logger) and treat failures as
non-fatal (log warning via common.CondPrintf/common.CondPrintln) so the config
dir is created for the generated router scripts even if SkipStart is true; keep
the existing bootstrap call inside the start branch or deduplicate by extracting
a small helper that logs warnings on error.
- Around line 193-204: The variables masterList and slaveList are initialized by
calling makeNodesList(nodes) but are immediately overwritten, causing a dead
assignment (SA4006); remove the unnecessary makeNodesList(nodes) initialization
and set masterList and slaveList directly where they are assigned later
(preserve the existing assignments to masterList = "1" and slaveList = ""),
keeping the related variables slaveLabel, slaveAbbr, masterAbbr, and masterLabel
unchanged.
In `@sandbox/templates/cluster/check_cluster.gotxt`:
- Around line 4-5: The MYSQLSH variable is assigned and executed unquoted which
breaks when the injected {{.MysqlShell}} contains spaces; change the assignment
to quote the value (e.g., MYSQLSH="{{.MysqlShell}}") and invoke it quoted (use
"$MYSQLSH" --uri ...) so the path is treated as a single token; apply the same
change to the other template that sets and executes MYSQLSH (init_cluster.gotxt)
to ensure both assignment and expansion are quoted.
In `@sandbox/templates/cluster/init_cluster.gotxt`:
- Around line 9-22: The template currently inlines DbPassword into shell URIs
(seen in MYSQLSH invocations and configure-instance/createCluster calls), which
is unsafe for passwords with shell/URI metacharacters; change the template to
avoid emitting the raw password by driving mysqlsh with a runtime environment
variable or external secret file instead of embedding {{.DbPassword}}
directly—e.g., have the bootstrap script expect an exported DB_PASSWORD (or read
from a secure file) and call MYSQLSH with URIs that omit the password (or use a
--password option read from the env/file) for both the root/configure-instance
calls and the icadmin/createCluster call so the template no longer renders the
secret inline.
- Around line 4-38: The template lacks a fail-fast setting so a failing mysqlsh
call (e.g., the dba configure-instance or cluster.addInstance invocations) can
be masked by later commands; update the init_cluster.gotxt template to enable
shell errexit (e.g., add a fail-fast setting such as "set -e" or "set -o errexit
-o pipefail") near the top (after the MYSQLSH variable) so any non-zero exit
from $MYSQLSH (used in dba configure-instance, dba.createCluster,
cluster.addInstance, cluster.status) will immediately stop the script and
surface the error back to CreateInnoDBCluster.
In `@sandbox/templates/cluster/innodb_cluster_options.gotxt`:
- Line 7: The template sets log_replica_updates (unsupported before MySQL
8.0.26) which breaks nodes 8.0.11–8.0.25; update the innodb_cluster_options
template to emit log_slave_updates for MySQL versions < 8.0.26 and
log_replica_updates for >= 8.0.26 (use the version variable/lookup available to
the template and conditional logic around the
log_replica_updates/log_slave_updates directive so the correct option is
rendered for each MySQL version).
In `@sandbox/templates/cluster/router_start.gotxt`:
- Around line 4-10: The script currently runs $ROUTER_DIR/start.sh but always
echoes "MySQL Router started" regardless of the start.sh exit status; change the
invocation to check the command's exit code and only print success on zero,
otherwise print the failure message and exit with the start.sh exit code.
Concretely, replace the plain call to $ROUTER_DIR/start.sh inside the ROUTER_DIR
guard with a conditional (e.g., if "$ROUTER_DIR/start.sh"; then echo "MySQL
Router started"; else echo "MySQL Router not bootstrapped"; exit $?. fi) so
failures from start.sh are propagated.
In `@sandbox/templates/cluster/router_stop.gotxt`:
- Around line 4-10: The stop helper should not trust mysqlrouter.pid blindly and
must propagate failures: update the logic around ROUTER_DIR so that when
$ROUTER_DIR/stop.sh is present you run it and check its exit status and return a
non-zero code on failure; when falling back to $ROUTER_DIR/mysqlrouter.pid, read
and validate the PID from the file, verify it corresponds to a running
mysqlrouter process (e.g., check /proc/<pid>/cmdline or use pgrep -f with the
pid) before sending SIGTERM, check the kill exit status and only remove the pid
file and print “MySQL Router stopped” if validation and kill succeed, otherwise
print an error and exit non‑zero; ensure all branches return proper exit codes
rather than unconditionally echoing success.
---
Outside diff comments:
In `@cmd/replication.go`:
- Around line 244-258: The with-proxysql code assumes a master/slave layout and
rebuilds sandbox paths using MasterSlavePrefix and node names (e.g., "master",
"nodeN"), which breaks when SkipRouter is true or for topologies like
innodb-cluster; update the logic in the with-proxysql block to derive the
sandbox path/naming from the topology variable (or modify
CreateReplicationSandbox to return the actual created path/naming) instead of
hardcoding master/slave names—use the topology, nodes, ndbNodes and/or the
returned value from CreateReplicationSandbox to compute the correct service
directories for ProxySQL setup (replace references to MasterSlavePrefix and
fixed "master"/"nodeN" names at the mentioned locations).
In `@defaults/defaults.go`:
- Around line 240-258: The base-port collision check in the noConflicts
expression omits InnoDBClusterBasePort so ValidateDefaults can wrongly allow
nd.MultipleBasePort == nd.InnoDBClusterBasePort; update the noConflicts boolean
(the block comparing nd.MultipleBasePort against other base ports) to also
compare nd.MultipleBasePort != nd.InnoDBClusterBasePort so the new InnoDB
cluster topology is guarded the same as the other topologies (look for the
noConflicts declaration in defaults.go that checks MultipleBasePort).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a949cf9a-f7b6-49a9-9f1b-d64f1abadce0
📒 Files selected for processing (13)
- cmd/replication.go
- defaults/defaults.go
- globals/globals.go
- globals/template_names.go
- sandbox/cluster_templates.go
- sandbox/innodb_cluster.go
- sandbox/replication.go
- sandbox/templates.go
- sandbox/templates/cluster/check_cluster.gotxt
- sandbox/templates/cluster/init_cluster.gotxt
- sandbox/templates/cluster/innodb_cluster_options.gotxt
- sandbox/templates/cluster/router_start.gotxt
- sandbox/templates/cluster/router_stop.gotxt
```go
slaveLabel := defaults.Defaults().SlavePrefix
slaveAbbr := defaults.Defaults().SlaveAbbr
masterAbbr := defaults.Defaults().MasterAbbr
masterLabel := defaults.Defaults().MasterName
masterList := makeNodesList(nodes)
slaveList := masterList

// InnoDB Cluster always uses single-primary mode by default
// The primary is node 1, the rest are secondaries
masterList = "1"
slaveList = ""
for N := 2; N <= nodes; N++ {
```
Remove the dead makeNodesList initialization.
Both variables are overwritten before first use, which is the current SA4006 failure in CI.
🧹 Proposed fix
```diff
-	masterList := makeNodesList(nodes)
-	slaveList := masterList
+	masterList := "1"
+	slaveList := ""
 	// InnoDB Cluster always uses single-primary mode by default
 	// The primary is node 1, the rest are secondaries
-	masterList = "1"
-	slaveList = ""
 	for N := 2; N <= nodes; N++ {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
slaveLabel := defaults.Defaults().SlavePrefix
slaveAbbr := defaults.Defaults().SlaveAbbr
masterAbbr := defaults.Defaults().MasterAbbr
masterLabel := defaults.Defaults().MasterName
masterList := "1"
slaveList := ""
// InnoDB Cluster always uses single-primary mode by default
// The primary is node 1, the rest are secondaries
for N := 2; N <= nodes; N++ {
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 197-197: golangci-lint reported SA4006 (staticcheck): this value of masterList is never used.
🪛 GitHub Check: Lint
[failure] 198-198:
SA4006: this value of slaveList is never used (staticcheck)
[failure] 197-197:
SA4006: this value of masterList is never used (staticcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/innodb_cluster.go` around lines 193 - 204, The variables masterList
and slaveList are initialized by calling makeNodesList(nodes) but are
immediately overwritten, causing a dead assignment (SA4006); remove the
unnecessary makeNodesList(nodes) initialization and set masterList and slaveList
directly where they are assigned later (preserve the existing assignments to
masterList = "1" and slaveList = ""), keeping the related variables slaveLabel,
slaveAbbr, masterAbbr, and masterLabel unchanged.
```go
if !skipRouter {
	sbCluster.scripts = append(sbCluster.scripts,
		ScriptDef{globals.ScriptRouterStart, globals.TmplRouterStart, true},
		ScriptDef{globals.ScriptRouterStop, globals.TmplRouterStop, true},
	)
}

for _, sb := range []ScriptBatch{sbMultiple, sbRepl, sbGroup, sbCluster} {
	err := writeScripts(sb)
	if err != nil {
		return err
	}
}

if sandboxDef.EnableAdminAddress {
	logger.Printf("Creating admin script for all nodes\n")
	err = writeScript(logger, MultipleTemplates, globals.ScriptUseAllAdmin,
		globals.TmplUseMultiAdmin, sandboxDef.SandboxDir, data, true)
	if err != nil {
		return err
	}
}

logger.Printf("Running parallel tasks\n")
concurrent.RunParallelTasksByPriority(execLists)

if !sandboxDef.SkipStart {
	// First, run the standard GR initialization
	common.CondPrintln(path.Join(common.ReplaceLiteralHome(sandboxDef.SandboxDir), globals.ScriptInitializeNodes))
	logger.Printf("Running group replication initialization script\n")
	_, err := common.RunCmd(path.Join(sandboxDef.SandboxDir, globals.ScriptInitializeNodes))
	if err != nil {
		return fmt.Errorf("error initializing group replication for InnoDB Cluster: %s", err)
	}

	// Then bootstrap the cluster via MySQL Shell
	common.CondPrintln(path.Join(common.ReplaceLiteralHome(sandboxDef.SandboxDir), globals.ScriptInitCluster))
	logger.Printf("Running InnoDB Cluster initialization script\n")
	_, err = common.RunCmd(path.Join(sandboxDef.SandboxDir, globals.ScriptInitCluster))
	if err != nil {
		return fmt.Errorf("error initializing InnoDB Cluster: %s", err)
	}

	// Bootstrap MySQL Router if requested
	if !skipRouter && mysqlrouterPath != "" {
		logger.Printf("Bootstrapping MySQL Router\n")
		err = bootstrapRouter(mysqlrouterPath, routerDir, basePort+1, sandboxDef.DbPassword, logger)
		if err != nil {
			common.CondPrintf("WARNING: MySQL Router bootstrap failed: %s\n", err)
			common.CondPrintln("The cluster is functional without Router. Use mysqlsh to connect directly.")
		}
	}
}
```
--skip-start never bootstraps Router.
The cluster can still be initialized later via the generated init_cluster script, but Router bootstrap only exists in this !sandboxDef.SkipStart branch. With Router enabled and --skip-start, router_start/router_stop are generated without the config directory they need.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/innodb_cluster.go` around lines 531 - 583, The router bootstrap is
only run inside the !sandboxDef.SkipStart block so with --skip-start the
generated router scripts (ScriptRouterStart/ScriptRouterStop) lack the config
directory; change the logic to run bootstrapRouter whenever router is enabled
(when !skipRouter && mysqlrouterPath != ""), not just when starting the cluster:
after scripts are written (after writeScripts loop) call
bootstrapRouter(mysqlrouterPath, routerDir, basePort+1, sandboxDef.DbPassword,
logger) and treat failures as non-fatal (log warning via
common.CondPrintf/common.CondPrintln) so the config dir is created for the
generated router scripts even if SkipStart is true; keep the existing bootstrap
call inside the start branch or deduplicate by extracting a small helper that
logs warnings on error.
| MYSQLSH={{.MysqlShell}} | ||
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " |
🧩 Analysis chain
🏁 Script executed:
cat -n sandbox/templates/cluster/check_cluster.gotxt
Repository: ProxySQL/dbdeployer
Length of output: 384
🏁 Script executed:
rg "MysqlShell" --type-list | head -20
Repository: ProxySQL/dbdeployer
Length of output: 452
🏁 Script executed:
fd -e "gotxt" -e "template" | head -20
Repository: ProxySQL/dbdeployer
Length of output: 855
🏁 Script executed:
rg "MysqlShell" -t txt
Repository: ProxySQL/dbdeployer
Length of output: 45
🏁 Script executed:
rg "MysqlShell"
Repository: ProxySQL/dbdeployer
Length of output: 782
🏁 Script executed:
cat -n sandbox/templates/cluster/init_cluster.gotxt | head -15
Repository: ProxySQL/dbdeployer
Length of output: 755
🏁 Script executed:
cat -n sandbox/templates/import/import_mysqlsh.gotxt | head -20
Repository: ProxySQL/dbdeployer
Length of output: 595
🏁 Script executed:
cat -n sandbox/templates/single/mysqlsh.gotxt | head -20
Repository: ProxySQL/dbdeployer
Length of output: 721
Quote the mysqlsh path before executing it.
{{.MysqlShell}} is injected into shell source unquoted and then expanded unquoted again. A valid --mysqlsh-path containing spaces will break this helper script.
🔧 Suggested fix
-MYSQLSH={{.MysqlShell}}
-$MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e "
+MYSQLSH="{{.MysqlShell}}"
+"$MYSQLSH" --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e "

Note: The same issue exists in sandbox/templates/cluster/init_cluster.gotxt at line 4, which also uses unquoted variable assignment and expansion.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| MYSQLSH={{.MysqlShell}} | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | |
| MYSQLSH="{{.MysqlShell}}" | |
| "$MYSQLSH" --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/check_cluster.gotxt` around lines 4 - 5, The
MYSQLSH variable is assigned and executed unquoted which breaks when the
injected {{.MysqlShell}} contains spaces; change the assignment to quote the
value (e.g., MYSQLSH="{{.MysqlShell}}") and invoke it quoted (use "$MYSQLSH"
--uri ...) so the path is treated as a single token; apply the same change to
the other template that sets and executes MYSQLSH (init_cluster.gotxt) to ensure
both assignment and expansion are quoted.
| MYSQLSH={{.MysqlShell}} | ||
| | ||
| echo "Creating InnoDB Cluster..." | ||
| | ||
| # Configure the first instance for cluster use | ||
| $MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | ||
| | ||
| sleep 2 | ||
| | ||
| # Create the cluster on the primary | ||
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | ||
| var cluster = dba.createCluster('{{.ClusterName}}', {memberWeight: 90}); | ||
| print('Cluster created successfully'); | ||
| " | ||
| | ||
| {{range .Replicas}} | ||
| echo "Adding instance 127.0.0.1:{{.Port}}..." | ||
| # Configure each replica | ||
| $MYSQLSH --uri root:{{$.DbPassword}}@127.0.0.1:{{.Port}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | ||
| | ||
| sleep 2 | ||
| | ||
| # Add to cluster | ||
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{$.PrimaryPort}} --js -e " | ||
| var cluster = dba.getCluster(); | ||
| cluster.addInstance('icadmin:icadmin@127.0.0.1:{{.Port}}', {recoveryMethod: 'incremental'}); | ||
| " | ||
| sleep 3 | ||
| {{end}} | ||
| | ||
| echo "Checking cluster status..." | ||
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | ||
| var cluster = dba.getCluster(); | ||
| print(cluster.status()); | ||
| " |
There was a problem hiding this comment.
Stop the script on the first mysqlsh failure.
CreateInnoDBCluster only checks this script's exit code. Without set -e, an early configure-instance or addInstance failure can be masked by a later successful cluster.status().
💡 Proposed fix
MYSQLSH={{.MysqlShell}}
+set -e
echo "Creating InnoDB Cluster..."

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| MYSQLSH={{.MysqlShell}} | |
| echo "Creating InnoDB Cluster..." | |
| # Configure the first instance for cluster use | |
| $MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | |
| sleep 2 | |
| # Create the cluster on the primary | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | |
| var cluster = dba.createCluster('{{.ClusterName}}', {memberWeight: 90}); | |
| print('Cluster created successfully'); | |
| " | |
| {{range .Replicas}} | |
| echo "Adding instance 127.0.0.1:{{.Port}}..." | |
| # Configure each replica | |
| $MYSQLSH --uri root:{{$.DbPassword}}@127.0.0.1:{{.Port}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | |
| sleep 2 | |
| # Add to cluster | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{$.PrimaryPort}} --js -e " | |
| var cluster = dba.getCluster(); | |
| cluster.addInstance('icadmin:icadmin@127.0.0.1:{{.Port}}', {recoveryMethod: 'incremental'}); | |
| " | |
| sleep 3 | |
| {{end}} | |
| echo "Checking cluster status..." | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | |
| var cluster = dba.getCluster(); | |
| print(cluster.status()); | |
| " | |
| MYSQLSH={{.MysqlShell}} | |
| set -e | |
| echo "Creating InnoDB Cluster..." | |
| # Configure the first instance for cluster use | |
| $MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | |
| sleep 2 | |
| # Create the cluster on the primary | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | |
| var cluster = dba.createCluster('{{.ClusterName}}', {memberWeight: 90}); | |
| print('Cluster created successfully'); | |
| " | |
| {{range .Replicas}} | |
| echo "Adding instance 127.0.0.1:{{.Port}}..." | |
| # Configure each replica | |
| $MYSQLSH --uri root:{{$.DbPassword}}@127.0.0.1:{{.Port}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | |
| sleep 2 | |
| # Add to cluster | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{$.PrimaryPort}} --js -e " | |
| var cluster = dba.getCluster(); | |
| cluster.addInstance('icadmin:icadmin@127.0.0.1:{{.Port}}', {recoveryMethod: 'incremental'}); | |
| " | |
| sleep 3 | |
| {{end}} | |
| echo "Checking cluster status..." | |
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | |
| var cluster = dba.getCluster(); | |
| print(cluster.status()); | |
| " |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/init_cluster.gotxt` around lines 4 - 38, The
template lacks a fail-fast setting so a failing mysqlsh call (e.g., the dba
configure-instance or cluster.addInstance invocations) can be masked by later
commands; update the init_cluster.gotxt template to enable shell errexit (e.g.,
add a fail-fast setting such as "set -e" or "set -o errexit -o pipefail") near
the top (after the MYSQLSH variable) so any non-zero exit from $MYSQLSH (used in
dba configure-instance, dba.createCluster, cluster.addInstance, cluster.status)
will immediately stop the script and surface the error back to
CreateInnoDBCluster.
| $MYSQLSH --uri root:{{.DbPassword}}@127.0.0.1:{{.PrimaryPort}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false | ||
| | ||
| sleep 2 | ||
| | ||
| # Create the cluster on the primary | ||
| $MYSQLSH --uri icadmin:icadmin@127.0.0.1:{{.PrimaryPort}} --js -e " | ||
| var cluster = dba.createCluster('{{.ClusterName}}', {memberWeight: 90}); | ||
| print('Cluster created successfully'); | ||
| " | ||
| | ||
| {{range .Replicas}} | ||
| echo "Adding instance 127.0.0.1:{{.Port}}..." | ||
| # Configure each replica | ||
| $MYSQLSH --uri root:{{$.DbPassword}}@127.0.0.1:{{.Port}} -- dba configure-instance --clusterAdmin=icadmin --clusterAdminPassword=icadmin --interactive=false --restart=false |
There was a problem hiding this comment.
Don’t inline DbPassword into the shell command.
These lines render the password directly into shell and URI syntax. A custom password with shell metacharacters can alter the script, and ordinary URI separators like @ or : will also break bootstrap.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/init_cluster.gotxt` around lines 9 - 22, The
template currently inlines DbPassword into shell URIs (seen in MYSQLSH
invocations and configure-instance/createCluster calls), which is unsafe for
passwords with shell/URI metacharacters; change the template to avoid emitting
the raw password by driving mysqlsh with a runtime environment variable or
external secret file instead of embedding {{.DbPassword}} directly—e.g., have
the bootstrap script expect an exported DB_PASSWORD (or read from a secure file)
and call MYSQLSH with URIs that omit the password (or use a --password option
read from the env/file) for both the root/configure-instance calls and the
icadmin/createCluster call so the template no longer renders the secret inline.
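Beyond keeping the secret out of the rendered script, the URI form itself needs care. As a mitigation sketch (assuming python3 is available on the host; the sample password is invented), the secret can be percent-encoded before it is placed in a URI, since `:` and `@` are reserved separators there:

```shell
# Hypothetical sketch: percent-encode a password before using it in a
# user:password@host URI. ':' and '@' are reserved URI separators, so the
# raw value would otherwise be mis-parsed as extra user/host fields.
DB_PASSWORD='p@ss:w/rd'   # invented example secret with URI metacharacters

ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$DB_PASSWORD")

echo "root:${ENCODED}@127.0.0.1:3306"
```

This only addresses URI parsing; keeping the raw secret out of the generated script (env variable or secret file, as the prompt suggests) is still the primary fix.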
| gtid_mode=ON | ||
| enforce_gtid_consistency=ON | ||
| binlog_checksum=NONE | ||
| log_replica_updates=ON |
🧩 Analysis chain
🌐 Web query:
MySQL log_replica_updates introduced version history 8.0
💡 Result:
The MySQL system variable log_replica_updates was introduced in MySQL 8.0.26 (released 2021-07-20) as the new preferred name and alias for the existing log_slave_updates variable, which it deprecates. Prior to 8.0.26, only log_slave_updates was used (with the same default value of ON). log_replica_updates is a non-dynamic, global boolean variable that controls whether updates received by a replica from its source are logged to the replica's own binary log, enabling chained replication topologies.
Citations:
- 1: https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-options-binary-log.html
- 2: https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-26.html
- 3: https://docs.oracle.com/cd/E17952_01/mysql-8.0-en/replication-howto-slavebaseconfig.html
🌐 Web query:
MySQL log_slave_updates log_replica_updates compatibility 8.0.26
💡 Result:
In MySQL 8.0.26, log_slave_updates is deprecated and log_replica_updates is introduced as its direct replacement (alias). Both control whether updates received by a replica from a replication source are logged to the replica's own binary log, enabling replication chains. They are fully compatible and interchangeable in MySQL 8.0.26: setting either has the same effect, with a default value of ON for both (assuming binary logging is enabled). Use log_replica_updates for new configurations and forward compatibility, as log_slave_updates may be removed in future releases. This change aligns with MySQL 8.0.26's terminology updates replacing "slave" with "replica".
Citations:
- 1: https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-options-binary-log.html
- 2: https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-26.html
🌐 Web query:
MySQL 8.0.11 8.0.26 replication options deprecated renamed
💡 Result:
In MySQL 8.0, numerous replication-related options, variables, and commands containing "master" and "slave" terminology were deprecated and renamed to use "source" and "replica". This change began progressively from MySQL 8.0.11 and was significantly expanded in 8.0.26 as part of a broader terminology update (WL#14171, WL#14189, WL#14194). Key deprecated/renamed replication items include:
Commands (from MySQL 8.0.22/8.0.23):
- CHANGE MASTER TO -> CHANGE REPLICATION SOURCE TO
- START SLAVE -> START REPLICA
- STOP SLAVE -> STOP REPLICA
- RESET SLAVE -> RESET REPLICA
- SHOW SLAVE STATUS -> SHOW REPLICA STATUS
- SHOW SLAVE HOSTS -> SHOW REPLICAS
System variables and options (many deprecated in 8.0.26):
- slave_* -> replica_* (e.g., slave_parallel_workers -> replica_parallel_workers, slave_net_timeout -> replica_net_timeout, slave_skip_errors -> replica_skip_errors)
- master_* -> source_* (e.g., master_info_repository -> source_info_repository, though some repositories deprecated earlier)
- rpl_semi_sync_master_* -> rpl_semi_sync_source_* (e.g., rpl_semi_sync_master_enabled -> rpl_semi_sync_source_enabled)
- Specific examples: init_slave -> init_replica, skip-slave-start -> skip-replica-start, log_slow_slave_statements -> log_slow_replica_statements, slave_load_tmpdir -> replica_load_tmpdir, slave_compressed_protocol -> replica_compressed_protocol
Status variables:
- Slave_* -> Replica_* (e.g., Slave_open_temp_tables -> Replica_open_temp_tables)
- Rpl_semi_sync_master_* -> Rpl_semi_sync_source_*
Other notable deprecations around this period:
- In 8.0.11: expire_logs_days (use binlog_expire_logs_seconds)
- Various semi-sync variables marked deprecated in 8.0.26
Old names remain as aliases (with deprecation warnings) for backward compatibility, but new names should be used. Full lists in official docs: https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html and replication option pages.
Citations:
- 1: https://dev.mysql.com/doc/refman/8.0/en/replication-options-source.html
- 2: https://docs.oracle.com/cd/E17952_01/mysql-8.0-en/added-deprecated-removed.html
- 3: https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-26.html
- 4: https://dev.mysql.com/doc/mysqld-version-reference/en/optvar-changes-8-0.html
- 5: https://docs.oracle.com/cd/E17952_01/mysql-8.0-en/replication-options-source.html
- 6: https://dev.mysql.com/worklog/task/?id=14189
- 7: https://dev.mysql.com/worklog/task/?id=14194
- 8: https://docs.oracle.com/cd/E17952_01/mysql-8.0-en/replication-options-replica.html
- 9: https://dev.mysql.com/doc/refman/8.0/en/change-master-to.html
Version incompatibility: log_replica_updates introduced in MySQL 8.0.26, not 8.0.11
The template advertises 8.0.11+ compatibility but uses log_replica_updates, which was not introduced until 8.0.26. Nodes running 8.0.11–8.0.25 will reject this option at startup. Use log_slave_updates for versions before 8.0.26 and log_replica_updates from 8.0.26 onward.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/innodb_cluster_options.gotxt` at line 7, The
template sets log_replica_updates (unsupported before MySQL 8.0.26) which breaks
nodes 8.0.11–8.0.25; update the innodb_cluster_options template to emit
log_slave_updates for MySQL versions < 8.0.26 and log_replica_updates for >=
8.0.26 (use the version variable/lookup available to the template and
conditional logic around the log_replica_updates/log_slave_updates directive so
the correct option is rendered for each MySQL version).
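The version cutover the prompt describes boils down to a single comparison against 8.0.26. A hedged shell sketch of the decision (the helper name is invented; dbdeployer's real templates would express this with Go template conditionals and its own version-comparison helpers instead):

```shell
# Hypothetical helper: choose the option name by server version.
# log_replica_updates exists only from 8.0.26 onward; earlier 8.0.x
# servers need the old name log_slave_updates.
log_updates_option() {
  ver="$1"
  major=${ver%%.*}
  rest=${ver#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  if [ "$major" -gt 8 ] || { [ "$major" -eq 8 ] && { [ "$minor" -gt 0 ] || [ "$patch" -ge 26 ]; }; }; then
    echo "log_replica_updates=ON"
  else
    echo "log_slave_updates=ON"
  fi
}

log_updates_option 8.0.11   # -> log_slave_updates=ON
log_updates_option 8.0.26   # -> log_replica_updates=ON
log_updates_option 8.4.0    # -> log_replica_updates=ON
```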
| ROUTER_DIR={{.RouterDir}} | ||
| if [ -f $ROUTER_DIR/start.sh ]; then | ||
| $ROUTER_DIR/start.sh | ||
| echo "MySQL Router started" | ||
| else | ||
| echo "MySQL Router not bootstrapped" | ||
| exit 1 |
🧩 Analysis chain
🏁 Script executed:
cat -n sandbox/templates/cluster/router_start.gotxt
Repository: ProxySQL/dbdeployer
Length of output: 434
Guard the startup invocation to propagate failures.
If start.sh exits non-zero, this script still reports success because echo executes unconditionally. This masks startup/bootstrap failures from the deployment flow.
🔧 Suggested fix
-ROUTER_DIR={{.RouterDir}}
-if [ -f $ROUTER_DIR/start.sh ]; then
- $ROUTER_DIR/start.sh
- echo "MySQL Router started"
+ROUTER_DIR="{{.RouterDir}}"
+if [ -x "$ROUTER_DIR/start.sh" ]; then
+ if "$ROUTER_DIR/start.sh"; then
+ echo "MySQL Router started"
+ else
+ echo "MySQL Router failed to start"
+ exit 1
+ fi
else
echo "MySQL Router not bootstrapped"
exit 1
fi

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ROUTER_DIR={{.RouterDir}} | |
| if [ -f $ROUTER_DIR/start.sh ]; then | |
| $ROUTER_DIR/start.sh | |
| echo "MySQL Router started" | |
| else | |
| echo "MySQL Router not bootstrapped" | |
| exit 1 | |
| ROUTER_DIR="{{.RouterDir}}" | |
| if [ -x "$ROUTER_DIR/start.sh" ]; then | |
| if "$ROUTER_DIR/start.sh"; then | |
| echo "MySQL Router started" | |
| else | |
| echo "MySQL Router failed to start" | |
| exit 1 | |
| fi | |
| else | |
| echo "MySQL Router not bootstrapped" | |
| exit 1 | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/router_start.gotxt` around lines 4 - 10, The script
currently runs $ROUTER_DIR/start.sh but always echoes "MySQL Router started"
regardless of the start.sh exit status; change the invocation to check the
command's exit code and only print success on zero, otherwise print the failure
message and exit with the start.sh exit code. Concretely, replace the plain call
to $ROUTER_DIR/start.sh inside the ROUTER_DIR guard with a conditional (e.g., if
"$ROUTER_DIR/start.sh"; then echo "MySQL Router started"; else echo "MySQL
Router not bootstrapped"; exit 1; fi) so failures from start.sh are propagated.
| ROUTER_DIR={{.RouterDir}} | ||
| if [ -f $ROUTER_DIR/stop.sh ]; then | ||
| $ROUTER_DIR/stop.sh | ||
| echo "MySQL Router stopped" | ||
| elif [ -f $ROUTER_DIR/mysqlrouter.pid ]; then | ||
| kill $(cat $ROUTER_DIR/mysqlrouter.pid) 2>/dev/null | ||
| echo "MySQL Router stopped" |
🧩 Analysis chain
🏁 Script executed:
# First, confirm the file exists and show its contents
fd -t f "router_stop.gotxt" --exec cat -n {}
Repository: ProxySQL/dbdeployer
Length of output: 564
Don't trust the PID file blindly in the stop helper.
The fallback kills whatever PID is stored in mysqlrouter.pid without validating it belongs to the MySQL Router process, and both branches unconditionally report success even if the underlying stop/kill fails. A stale or corrupted PID file can terminate the wrong process, while this script still reports success when Router remains running.
Additionally, the script does not check the exit status of $ROUTER_DIR/stop.sh or kill, masking any errors. Both commands should propagate their failures, and the PID file should be verified to contain a valid, currently-running mysqlrouter process before attempting to kill it.
🔧 Possible fix
-ROUTER_DIR={{.RouterDir}}
-if [ -f $ROUTER_DIR/stop.sh ]; then
- $ROUTER_DIR/stop.sh
- echo "MySQL Router stopped"
-elif [ -f $ROUTER_DIR/mysqlrouter.pid ]; then
- kill $(cat $ROUTER_DIR/mysqlrouter.pid) 2>/dev/null
- echo "MySQL Router stopped"
+ROUTER_DIR="{{.RouterDir}}"
+if [ -x "$ROUTER_DIR/stop.sh" ]; then
+ "$ROUTER_DIR/stop.sh" || exit 1
+ echo "MySQL Router stopped"
+elif [ -f "$ROUTER_DIR/mysqlrouter.pid" ]; then
+ pid=$(cat "$ROUTER_DIR/mysqlrouter.pid")
+ if [ -n "$pid" ] && ps -p "$pid" -o comm= 2>/dev/null | grep -q '^mysqlrouter$'; then
+ kill "$pid" || exit 1
+ rm -f "$ROUTER_DIR/mysqlrouter.pid"
+ echo "MySQL Router stopped"
+ else
+ echo "MySQL Router not running"
+ exit 1
+ fi
else
echo "MySQL Router not running"
fi

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ROUTER_DIR={{.RouterDir}} | |
| if [ -f $ROUTER_DIR/stop.sh ]; then | |
| $ROUTER_DIR/stop.sh | |
| echo "MySQL Router stopped" | |
| elif [ -f $ROUTER_DIR/mysqlrouter.pid ]; then | |
| kill $(cat $ROUTER_DIR/mysqlrouter.pid) 2>/dev/null | |
| echo "MySQL Router stopped" | |
| ROUTER_DIR="{{.RouterDir}}" | |
| if [ -x "$ROUTER_DIR/stop.sh" ]; then | |
| "$ROUTER_DIR/stop.sh" || exit 1 | |
| echo "MySQL Router stopped" | |
| elif [ -f "$ROUTER_DIR/mysqlrouter.pid" ]; then | |
| pid=$(cat "$ROUTER_DIR/mysqlrouter.pid") | |
| if [ -n "$pid" ] && ps -p "$pid" -o comm= 2>/dev/null | grep -q '^mysqlrouter$'; then | |
| kill "$pid" || exit 1 | |
| rm -f "$ROUTER_DIR/mysqlrouter.pid" | |
| echo "MySQL Router stopped" | |
| else | |
| echo "MySQL Router not running" | |
| exit 1 | |
| fi | |
| else | |
| echo "MySQL Router not running" | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/templates/cluster/router_stop.gotxt` around lines 4 - 10, The stop
helper should not trust mysqlrouter.pid blindly and must propagate failures:
update the logic around ROUTER_DIR so that when $ROUTER_DIR/stop.sh is present
you run it and check its exit status and return a non-zero code on failure; when
falling back to $ROUTER_DIR/mysqlrouter.pid, read and validate the PID from the
file, verify it corresponds to a running mysqlrouter process (e.g., check
/proc/<pid>/cmdline or use pgrep -f with the pid) before sending SIGTERM, check
the kill exit status and only remove the pid file and print “MySQL Router
stopped” if validation and kill succeed, otherwise print an error and exit
non‑zero; ensure all branches return proper exit codes rather than
unconditionally echoing success.
Actionable comments posted: 2
🧹 Nitpick comments (3)
.github/workflows/integration_tests.yml (3)
300-302: Minor: Use curl instead of wget for consistency.

The rest of the workflow uses curl for downloads. Using wget here is a minor style inconsistency.

♻️ Suggested fix
- wget -nv -O /tmp/proxysql.deb \
+ curl -L -f -o /tmp/proxysql.deb \
    "https://github.com/sysown/proxysql/releases/download/v${PROXYSQL_VERSION}/proxysql_${PROXYSQL_VERSION}-ubuntu22_amd64.deb"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration_tests.yml around lines 300 - 302, Replace the wget invocation that downloads the proxysql deb (the line referencing PROXYSQL_VERSION and /tmp/proxysql.deb) with a curl-based download for consistency with the rest of the workflow; use curl with silent/fail/progress flags (e.g., -fsSL) and -o /tmp/proxysql.deb to save the same URL built from PROXYSQL_VERSION so behavior and destination remain unchanged.
229-241: Consider adding fallback download URL for reliability.

Unlike the sandbox-test job (lines 56-59), this step lacks a fallback to the MySQL archives URL. Newer versions like 9.5.0 may require the archives URL, and this could cause intermittent CI failures.

♻️ Suggested fix to add fallback URL
  if [ ! -f "/tmp/mysql-tarball/$TARBALL" ]; then
    curl -L -f -o "/tmp/mysql-tarball/$TARBALL" \
-     "https://dev.mysql.com/get/Downloads/MySQL-${SHORT_VER}/$TARBALL"
+     "https://dev.mysql.com/get/Downloads/MySQL-${SHORT_VER}/$TARBALL" \
+     || curl -L -f -o "/tmp/mysql-tarball/$TARBALL" \
+     "https://downloads.mysql.com/archives/get/p/23/file/$TARBALL"
  fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration_tests.yml around lines 229 - 241, The "Download MySQL Server" step should add a fallback download from the MySQL archives URL if the primary URL fails: wrap the curl that fetches "/tmp/mysql-tarball/$TARBALL" (using SHORT_VER_ENV/SHORT_VER, MYSQL_VERSION and TARBALL) so that on failure you attempt the archives URL (same path but host "https://downloads.mysql.com/archives/get/p/23/file/$TARBALL" or the archives pattern used in the sandbox-test job), and only then error out; ensure the logic still creates /tmp/mysql-tarball and uses ./dbdeployer unpack with SANDBOX_BINARY unchanged, and preserve -L -f behavior and the existing existence check for the tarball.
243-270: Consider caching MySQL Shell and Router downloads.

The MySQL Server tarball is cached, but Shell and Router are downloaded on every run. Caching these could reduce CI time by 1-2 minutes per run.
♻️ Example caching approach
+      - name: Cache MySQL Shell tarball
+        uses: actions/cache@v4
+        with:
+          path: /tmp/mysql-shell-tarball
+          key: mysql-shell-${{ matrix.mysql-version }}-linux-x86_64-v1
+
       - name: Download and install MySQL Shell
         env:
           SHORT_VER_ENV: ${{ matrix.mysql-version }}
         run: |
           SHORT_VER="${SHORT_VER_ENV%.*}"
           SHELL_TARBALL="mysql-shell-${MYSQL_VERSION}-linux-glibc2.17-x86-64bit.tar.gz"
+          mkdir -p /tmp/mysql-shell-tarball
+          if [ ! -f "/tmp/mysql-shell-tarball/$SHELL_TARBALL" ]; then
-          echo "Downloading MySQL Shell ${MYSQL_VERSION}..."
-          curl -L -f -o "/tmp/$SHELL_TARBALL" \
-            "https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT_VER}/$SHELL_TARBALL"
+            echo "Downloading MySQL Shell ${MYSQL_VERSION}..."
+            curl -L -f -o "/tmp/mysql-shell-tarball/$SHELL_TARBALL" \
+              "https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT_VER}/$SHELL_TARBALL"
+          fi
-          tar xzf "/tmp/$SHELL_TARBALL" -C /tmp/
+          tar xzf "/tmp/mysql-shell-tarball/$SHELL_TARBALL" -C /tmp/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration_tests.yml around lines 243 - 270, Add caching for the MySQL Shell and Router downloads by reusing the CI cache pattern used for the MySQL Server tarball: before downloading SHELL_TARBALL and ROUTER_TARBALL, restore a cache keyed by matrix.mysql-version (and short ver) and paths that store the downloaded tarballs or the installed directories under SANDBOX_BINARY/${MYSQL_VERSION}; after extraction/copy, save the cache so subsequent runs skip curl for SHELL_TARBALL and ROUTER_TARBALL. Reference the existing variables and steps that set SHORT_VER_ENV/matrix.mysql-version, SHELL_TARBALL, ROUTER_TARBALL, and SANDBOX_BINARY when adding the actions/cache restore and save steps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/integration_tests.yml:
- Around line 297-320: Remove the trailing "|| true" from the ProxySQL query
command so the CI step fails on errors; specifically edit the workflow step that
runs the command "~/sandboxes/ic_msb_*/proxysql/use -e \"SELECT * FROM
runtime_mysql_servers;\"" and ensure it is invoked without "|| true" so any
ProxySQL query failure causes the job to fail.
- Around line 243-255: The MySQL Shell install step only copies the mysqlsh
binary from SHELL_DIR but omits the bundled libraries; update that step (the
"Download and install MySQL Shell" block that sets SHELL_TARBALL and SHELL_DIR)
to also copy the lib directory from SHELL_DIR into the sandbox (same destination
used for the binary) so authentication plugins and the Python runtime are
available at runtime — e.g., after resolving SHELL_DIR, add a recursive copy of
the libraries from SHELL_DIR/lib into the target sandbox path
(${SANDBOX_BINARY}/${MYSQL_VERSION}/lib) to mirror how Router’s install copies
its lib directory.
---
Nitpick comments:
In @.github/workflows/integration_tests.yml:
- Around line 300-302: Replace the wget invocation that downloads the proxysql
deb (the line referencing PROXYSQL_VERSION and /tmp/proxysql.deb) with a
curl-based download for consistency with the rest of the workflow; use curl with
silent/fail/progress flags (e.g., -fsSL) and -o /tmp/proxysql.deb to save the
same URL built from PROXYSQL_VERSION so behavior and destination remain
unchanged.
- Around line 229-241: The "Download MySQL Server" step should add a fallback
download from the MySQL archives URL if the primary URL fails: wrap the curl
that fetches "/tmp/mysql-tarball/$TARBALL" (using SHORT_VER_ENV/SHORT_VER,
MYSQL_VERSION and TARBALL) so that on failure you attempt the archives URL (same
path but host "https://downloads.mysql.com/archives/get/p/23/file/$TARBALL" or
the archives pattern used in the sandbox-test job), and only then error out;
ensure the logic still creates /tmp/mysql-tarball and uses ./dbdeployer unpack
with SANDBOX_BINARY unchanged, and preserve -L -f behavior and the existing
existence check for the tarball.
- Around line 243-270: Add caching for the MySQL Shell and Router downloads by
reusing the CI cache pattern used for the MySQL Server tarball: before
downloading SHELL_TARBALL and ROUTER_TARBALL, restore a cache keyed by
matrix.mysql-version (and short ver) and paths that store the downloaded
tarballs or the installed directories under SANDBOX_BINARY/${MYSQL_VERSION};
after extraction/copy, save the cache so subsequent runs skip curl for
SHELL_TARBALL and ROUTER_TARBALL. Reference the existing variables and steps
that set SHORT_VER_ENV/matrix.mysql-version, SHELL_TARBALL, ROUTER_TARBALL, and
SANDBOX_BINARY when adding the actions/cache restore and save steps.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9a8880df-061c-4b86-b010-a1cc3f7ee642
📒 Files selected for processing (1)
.github/workflows/integration_tests.yml
| - name: Download and install MySQL Shell | ||
| env: | ||
| SHORT_VER_ENV: ${{ matrix.mysql-version }} | ||
| run: | | ||
| SHORT_VER="${SHORT_VER_ENV%.*}" | ||
| SHELL_TARBALL="mysql-shell-${MYSQL_VERSION}-linux-glibc2.17-x86-64bit.tar.gz" | ||
| echo "Downloading MySQL Shell ${MYSQL_VERSION}..." | ||
| curl -L -f -o "/tmp/$SHELL_TARBALL" \ | ||
| "https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT_VER}/$SHELL_TARBALL" | ||
| tar xzf "/tmp/$SHELL_TARBALL" -C /tmp/ | ||
| SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1) | ||
| cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/" | ||
| echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check what libraries MySQL Shell depends on
curl -L -s "https://dev.mysql.com/get/Downloads/MySQL-Shell-8.4/mysql-shell-8.4.8-linux-glibc2.17-x86-64bit.tar.gz" -o /tmp/shell.tar.gz
tar tzf /tmp/shell.tar.gz | head -30
# Check if there's a lib directory
tar tzf /tmp/shell.tar.gz | grep -E '^[^/]+/lib/' | head -10
Repository: ProxySQL/dbdeployer
Length of output: 3092
🏁 Script executed:
# View the workflow file around the MySQL Shell installation step
head -300 .github/workflows/integration_tests.yml | tail -100
Repository: ProxySQL/dbdeployer
Length of output: 4158
🏁 Script executed:
# Also search for the Router step mentioned in the review (around line 269)
sed -n '260,280p' .github/workflows/integration_tests.ymlRepository: ProxySQL/dbdeployer
Length of output: 1215
🏁 Script executed:
# Check if there are any test failures or issues related to MySQL Shell
rg -i "mysqlsh|mysql.*shell" --type yaml -C 3Repository: ProxySQL/dbdeployer
Length of output: 45
🏁 Script executed:
# Look for any documentation or comments about MySQL Shell setup
rg -i "shell.*lib|shell.*library|shell.*depend" --type yamlRepository: ProxySQL/dbdeployer
Length of output: 45
MySQL Shell libraries are not copied during installation.
The MySQL Shell step only copies the mysqlsh binary but omits the bundled libraries (lib/ directory), which include authentication plugins and the Python 3.9 runtime. In contrast, the MySQL Router step (line 269) copies both the binary and libraries using cp -r "$ROUTER_DIR/lib/.". This inconsistency should be corrected to ensure MySQL Shell has all required dependencies at runtime.
Apply consistent library copying:

      SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
      cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
    + cp -r "$SHELL_DIR/lib/." "$SANDBOX_BINARY/${MYSQL_VERSION}/lib/" 2>/dev/null || true
      echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

    - name: Download and install MySQL Shell
      env:
        SHORT_VER_ENV: ${{ matrix.mysql-version }}
      run: |
        SHORT_VER="${SHORT_VER_ENV%.*}"
        SHELL_TARBALL="mysql-shell-${MYSQL_VERSION}-linux-glibc2.17-x86-64bit.tar.gz"
        echo "Downloading MySQL Shell ${MYSQL_VERSION}..."
        curl -L -f -o "/tmp/$SHELL_TARBALL" \
          "https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT_VER}/$SHELL_TARBALL"
        tar xzf "/tmp/$SHELL_TARBALL" -C /tmp/
        SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
        cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
        echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"

Suggested:

    - name: Download and install MySQL Shell
      env:
        SHORT_VER_ENV: ${{ matrix.mysql-version }}
      run: |
        SHORT_VER="${SHORT_VER_ENV%.*}"
        SHELL_TARBALL="mysql-shell-${MYSQL_VERSION}-linux-glibc2.17-x86-64bit.tar.gz"
        echo "Downloading MySQL Shell ${MYSQL_VERSION}..."
        curl -L -f -o "/tmp/$SHELL_TARBALL" \
          "https://dev.mysql.com/get/Downloads/MySQL-Shell-${MYSQL_VERSION}/$SHELL_TARBALL"
        tar xzf "/tmp/$SHELL_TARBALL" -C /tmp/
        SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
        cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
        cp -r "$SHELL_DIR/lib/." "$SANDBOX_BINARY/${MYSQL_VERSION}/lib/" 2>/dev/null || true
        echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/integration_tests.yml around lines 243 - 255, The MySQL
Shell install step only copies the mysqlsh binary from SHELL_DIR but omits the
bundled libraries; update that step (the "Download and install MySQL Shell"
block that sets SHELL_TARBALL and SHELL_DIR) to also copy the lib directory from
SHELL_DIR into the sandbox (same destination used for the binary) so
authentication plugins and the Python runtime are available at runtime — e.g.,
after resolving SHELL_DIR, add a recursive copy of the libraries from
SHELL_DIR/lib into the target sandbox path
(${SANDBOX_BINARY}/${MYSQL_VERSION}/lib) to mirror how Router’s install copies
its lib directory.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/integration_tests.yml (1)
243-255: ⚠️ Potential issue | 🟠 Major — MySQL Shell install is incomplete (binary copied, bundled libs omitted).

At line 254, only `mysqlsh` is copied. The shell package ships runtime libraries/plugins that should be staged too (as you already do for Router at line 269).

Proposed fix:

      SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
      cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
    + if [ -d "$SHELL_DIR/lib" ]; then
    +   mkdir -p "$SANDBOX_BINARY/${MYSQL_VERSION}/lib"
    +   cp -r "$SHELL_DIR/lib/." "$SANDBOX_BINARY/${MYSQL_VERSION}/lib/"
    + else
    +   echo "ERROR: mysqlsh lib directory not found in $SHELL_DIR"
    +   exit 1
    + fi
      echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"

Verification script:

    #!/bin/bash
    set -euo pipefail
    echo "=== Current mysqlsh install block ==="
    sed -n '243,256p' .github/workflows/integration_tests.yml
    VER="8.4.8"
    SHORT="${VER%.*}"
    TARBALL="/tmp/mysql-shell-${VER}.tar.gz"
    URL="https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT}/mysql-shell-${VER}-linux-glibc2.17-x86-64bit.tar.gz"
    echo "=== Downloading sample mysql-shell tarball ==="
    curl -L -f -o "$TARBALL" "$URL"
    echo "=== Verifying tarball contains lib payload ==="
    tar tzf "$TARBALL" | grep '/lib/' | head -20

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration_tests.yml around lines 243 - 255, The MySQL Shell install copies only the mysqlsh binary (SHELL_TARBALL / SHELL_DIR -> cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/") but omits the bundled runtime libraries/plugins; update the step that sets SHELL_TARBALL/SHELL_DIR to also copy the library and plugin directories (e.g., "$SHELL_DIR/lib/" and any plugin folders) into the sandbox (parallel to how the Router packaging copies its runtime files), preserving relative layout and executable permissions so mysqlsh can run from "$SANDBOX_BINARY/${MYSQL_VERSION}".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/integration_tests.yml:
- Around line 351-363: The test assumes ProxySQL is provisioned for
innodb-cluster but the command wiring uses MasterSlavePrefix (see
cmd/replication.go around the MasterSlavePrefix handling) and
sandbox/innodb_cluster.go does not provision ProxySQL for the innodb-cluster
topology; either update the replication command wiring to treat --with-proxysql
for innodb-cluster (add topology-specific proxy provisioning similar to
MasterSlave handling) or change the test to not expect ProxySQL (remove/guard
the proxysql/use call or only pass --with-proxysql when the plumbing supports
it). Inspect cmd/replication.go (lines handling topology and flags) and
sandbox/innodb_cluster.go (proxy provisioning) and implement consistent ProxySQL
provisioning or change the test invocation to match current wiring.
---
Duplicate comments:
In @.github/workflows/integration_tests.yml:
- Around line 243-255: The MySQL Shell install copies only the mysqlsh binary
(SHELL_TARBALL / SHELL_DIR -> cp "$SHELL_DIR/bin/mysqlsh"
"$SANDBOX_BINARY/${MYSQL_VERSION}/bin/") but omits the bundled runtime
libraries/plugins; update the step that sets SHELL_TARBALL/SHELL_DIR to also
copy the library and plugin directories (e.g., "$SHELL_DIR/lib/" and any plugin
folders) into the sandbox (parallel to how the Router packaging copies its
runtime files), preserving relative layout and executable permissions so mysqlsh
can run from "$SANDBOX_BINARY/${MYSQL_VERSION}".
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 68765020-09ef-4616-bbab-f4fcb182d336
📒 Files selected for processing (2)
- .github/workflows/integration_tests.yml
- .github/workflows/proxysql_integration_tests.yml
Force-pushed 04c195c to ae17bd7 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (2)
sandbox/innodb_cluster.go (2)
197-203: ⚠️ Potential issue | 🟠 Major — Still have a dead `makeNodesList` initialization here.

`masterList` and `slaveList` are overwritten immediately below, so staticcheck will keep reporting SA4006 and CI stays red.

🧹 Minimal fix:

    - masterList := makeNodesList(nodes)
    - slaveList := masterList
    + masterList := "1"
    + slaveList := ""
      // InnoDB Cluster always uses single-primary mode by default
      // The primary is node 1, the rest are secondaries
    - masterList = "1"
    - slaveList = ""
      for N := 2; N <= nodes; N++ {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sandbox/innodb_cluster.go` around lines 197 - 203, The local variables masterList and slaveList are being initialized by calling makeNodesList(nodes) but then immediately overwritten, causing an unused-value warning (SA4006); remove the dead initialization by not calling makeNodesList and instead declare and assign the intended values directly (set masterList = "1" and slaveList = ""), or if you need the original list logic, use makeNodesList(nodes) instead of the hardcoded overrides; target the masterList, slaveList declarations and the makeNodesList(nodes) call to eliminate the unnecessary initialization.
531-583: ⚠️ Potential issue | 🟠 Major — Router bootstrap is still skipped by `--skip-start`.

The only call to `bootstrapRouter` lives inside the `!sandboxDef.SkipStart` branch. With Router enabled and `--skip-start`, `router_start` and `router_stop` are still generated, but the bootstrap directory/config they depend on is never created.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sandbox/innodb_cluster.go` around lines 531 - 583, The router bootstrap currently only runs inside the !sandboxDef.SkipStart branch, so when users pass --skip-start the router start/stop scripts (added to sbCluster.scripts) are generated but the Router bootstrap directory/config is never created; to fix, invoke bootstrapRouter(mysqlrouterPath, routerDir, basePort+1, sandboxDef.DbPassword, logger) in a separate branch that runs when skipRouter is false and mysqlrouterPath != "" even if sandboxDef.SkipStart is true (e.g., after concurrent.RunParallelTasksByPriority(execLists) or add an else block for sandboxDef.SkipStart); keep the existing error handling (log warning via common.CondPrintf/CondPrintln) and reference the symbols bootstrapRouter, skipRouter, mysqlrouterPath, routerDir, basePort, and sandboxDef.DbPassword so the change is applied in the correct place.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/proxysql_integration_tests.yml:
- Around line 144-152: Make the pgsql_servers admin check fatal by removing the
"|| true" so the workflow fails if ProxySQL has no pgsql_servers; for the
replication verification replace the fixed "sleep 2" and single SELECT with a
polling loop that repeatedly runs $SBDIR/replica1/use -c "SELECT val FROM
proxy_test;" (capturing RESULT) and greps for "pg_proxysql_test" until it
appears or a timeout elapses (e.g., poll every 1s for up to 30s), then exit
non‑zero on timeout; update the script locations referencing
$SBDIR/proxysql/use, the SELECT * FROM pgsql_servers admin command,
$SBDIR/primary/use CREATE/INSERT step, and the RESULT/grep logic to implement
the fatal admin check and timed polling.
In `@sandbox/innodb_cluster.go`:
- Around line 597-603: The Router bootstrap is hardcoding the admin credentials
in bootstrapURI ("icadmin:icadmin") which ignores the password passed into
CreateInnoDBCluster; update the bootstrap URI construction (bootstrapURI
variable) to use the actual db user/password parameters (the dbPassword value
passed into the helper) instead of "icadmin:icadmin", and ensure the args slice
for the bootstrap command includes the effective --db-password (or encodes the
correct password in the URI) so Router bootstrap authenticates with the same
credentials used for the rest of the cluster (refer to bootstrapURI, args,
CreateInnoDBCluster and routerDir/primaryPort to locate the code to change).
---
Duplicate comments:
In `@sandbox/innodb_cluster.go`:
- Around line 197-203: The local variables masterList and slaveList are being
initialized by calling makeNodesList(nodes) but then immediately overwritten,
causing an unused-value warning (SA4006); remove the dead initialization by not
calling makeNodesList and instead declare and assign the intended values
directly (set masterList = "1" and slaveList = ""), or if you need the original
list logic, use makeNodesList(nodes) instead of the hardcoded overrides; target
the masterList, slaveList declarations and the makeNodesList(nodes) call to
eliminate the unnecessary initialization.
- Around line 531-583: The router bootstrap currently only runs inside the
!sandboxDef.SkipStart branch, so when users pass --skip-start the router
start/stop scripts (added to sbCluster.scripts) are generated but the Router
bootstrap directory/config is never created; to fix, invoke
bootstrapRouter(mysqlrouterPath, routerDir, basePort+1, sandboxDef.DbPassword,
logger) in a separate branch that runs when skipRouter is false and
mysqlrouterPath != "" even if sandboxDef.SkipStart is true (e.g., after
concurrent.RunParallelTasksByPriority(execLists) or add an else block for
sandboxDef.SkipStart); keep the existing error handling (log warning via
common.CondPrintf/CondPrintln) and reference the symbols bootstrapRouter,
skipRouter, mysqlrouterPath, routerDir, basePort, and sandboxDef.DbPassword so
the change is applied in the correct place.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d4ff8ee7-6bfe-4692-8b35-37f6c95834cb
📒 Files selected for processing (15)
- .github/workflows/integration_tests.yml
- .github/workflows/proxysql_integration_tests.yml
- cmd/replication.go
- defaults/defaults.go
- globals/globals.go
- globals/template_names.go
- sandbox/cluster_templates.go
- sandbox/innodb_cluster.go
- sandbox/replication.go
- sandbox/templates.go
- sandbox/templates/cluster/check_cluster.gotxt
- sandbox/templates/cluster/init_cluster.gotxt
- sandbox/templates/cluster/innodb_cluster_options.gotxt
- sandbox/templates/cluster/router_start.gotxt
- sandbox/templates/cluster/router_stop.gotxt
✅ Files skipped from review due to trivial changes (6)
- sandbox/templates/cluster/check_cluster.gotxt
- sandbox/templates/cluster/router_stop.gotxt
- globals/template_names.go
- sandbox/templates/cluster/router_start.gotxt
- .github/workflows/integration_tests.yml
- sandbox/templates/cluster/innodb_cluster_options.gotxt
🚧 Files skipped from review as they are similar to previous changes (5)
- sandbox/templates.go
- cmd/replication.go
- sandbox/templates/cluster/init_cluster.gotxt
- sandbox/cluster_templates.go
- defaults/defaults.go
    echo "=== Check ProxySQL has pgsql_servers configured ==="
    $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;" || true

    echo "=== Functional test: verify PostgreSQL replication works ==="
    $SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
    sleep 2
    RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1)
    echo "$RESULT"
    echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }
Tighten this verification block so it actually catches regressions.
The pgsql_servers query is still best-effort because of || true, so missing ProxySQL backend registration will not fail the job. The fixed sleep 2 also makes the replication assertion timing-dependent. Make the admin query fatal and poll the replica with a timeout instead.
🧪 One way to harden the check
    - $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;" || true
    + $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;"
      echo "=== Functional test: verify PostgreSQL replication works ==="
      $SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
    - sleep 2
    - RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1)
    + for _ in $(seq 1 15); do
    +   RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1 || true)
    +   echo "$RESULT" | grep -q "pg_proxysql_test" && break
    +   sleep 1
    + done
      echo "$RESULT"
      echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

    echo "=== Check ProxySQL has pgsql_servers configured ==="
    $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;" || true
    echo "=== Functional test: verify PostgreSQL replication works ==="
    $SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
    sleep 2
    RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1)
    echo "$RESULT"
    echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }

Suggested:

    echo "=== Check ProxySQL has pgsql_servers configured ==="
    $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;"
    echo "=== Functional test: verify PostgreSQL replication works ==="
    $SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
    for _ in $(seq 1 15); do
      RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1 || true)
      echo "$RESULT" | grep -q "pg_proxysql_test" && break
      sleep 1
    done
    echo "$RESULT"
    echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/proxysql_integration_tests.yml around lines 144 - 152,
Make the pgsql_servers admin check fatal by removing the "|| true" so the
workflow fails if ProxySQL has no pgsql_servers; for the replication
verification replace the fixed "sleep 2" and single SELECT with a polling loop
that repeatedly runs $SBDIR/replica1/use -c "SELECT val FROM proxy_test;"
(capturing RESULT) and greps for "pg_proxysql_test" until it appears or a
timeout elapses (e.g., poll every 1s for up to 30s), then exit non‑zero on
timeout; update the script locations referencing $SBDIR/proxysql/use, the SELECT
* FROM pgsql_servers admin command, $SBDIR/primary/use CREATE/INSERT step, and
the RESULT/grep logic to implement the fatal admin check and timed polling.
    bootstrapURI := fmt.Sprintf("icadmin:icadmin@127.0.0.1:%d", primaryPort)
    args := []string{
        "--bootstrap", bootstrapURI,
        "--directory", routerDir,
        "--force",
        "--conf-use-sockets",
    }
Don't ignore the configured password during Router bootstrap.
CreateInnoDBCluster passes sandboxDef.DbPassword into this helper, but the bootstrap URI still hardcodes icadmin:icadmin. That makes --db-password ineffective for Router bootstrap and can authenticate with a different credential than the rest of the cluster setup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sandbox/innodb_cluster.go` around lines 597 - 603, The Router bootstrap is
hardcoding the admin credentials in bootstrapURI ("icadmin:icadmin") which
ignores the password passed into CreateInnoDBCluster; update the bootstrap URI
construction (bootstrapURI variable) to use the actual db user/password
parameters (the dbPassword value passed into the helper) instead of
"icadmin:icadmin", and ensure the args slice for the bootstrap command includes
the effective --db-password (or encodes the correct password in the URI) so
Router bootstrap authenticates with the same credentials used for the rest of
the cluster (refer to bootstrapURI, args, CreateInnoDBCluster and
routerDir/primaryPort to locate the code to change).
Deploy Group Replication + MySQL Shell cluster management + MySQL Router:

    dbdeployer deploy replication 8.4.4 --topology=innodb-cluster

- MySQL Shell bootstraps the cluster via AdminAPI (dba.createCluster)
- MySQL Router provides transparent connection routing
- --skip-router flag to deploy without Router (useful for ProxySQL)
- Requires mysqlsh binary; mysqlrouter optional
- Minimum MySQL 8.0.11 required (MySQLX default capability)
- Reuses existing GR node creation and template infrastructure

New files:
- sandbox/innodb_cluster.go - CreateInnoDBCluster() implementation
- sandbox/cluster_templates.go - template embeds for cluster scripts
- sandbox/templates/cluster/ - init_cluster, check_cluster, router scripts

Modified:
- globals/globals.go - InnoDBClusterLabel, AllowedTopologies, script names
- globals/template_names.go - template name constants
- defaults/defaults.go - InnoDBClusterPrefix, InnoDBClusterBasePort
- sandbox/replication.go - dispatch for innodb-cluster topology
- sandbox/templates.go - register InnoDBClusterTemplates in AllTemplates
- cmd/replication.go - --skip-router flag, help text, examples
Tests three deployment modes for each MySQL version:
1. InnoDB Cluster with MySQL Router (full stack)
2. InnoDB Cluster with --skip-router (cluster only)
3. InnoDB Cluster with ProxySQL instead of Router

Downloads MySQL Shell and MySQL Router tarballs, installs them into the MySQL basedir alongside the server binaries.
Every non-trivial CI job now verifies data flow, not just deployment:

InnoDB Cluster + Router:
- Write on primary, verify replication to node2 and node3
- Write through MySQL Router R/W port, verify replicated

InnoDB Cluster + skip-router:
- Write on primary, verify replication to node3

InnoDB Cluster + ProxySQL:
- Verify ProxySQL has ONLINE backends
- Write through ProxySQL, verify replicated to node2

ProxySQL + PostgreSQL:
- Write on primary, verify replication to replica1
- ProxySQL proxy connection test (WIP, non-fatal)

MySQL replication and ProxySQL+MySQL already had functional tests (check_slaves, test_replication, proxysql-integration-tests.sh).
Force-pushed ae17bd7 to f3c7ba9 (Compare)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cmd/replication.go (1)
244-258: ⚠️ Potential issue | 🟠 Major — `--with-proxysql` is still incompatible with `innodb-cluster`.

This new `SkipRouter` plumbing makes ProxySQL a supported alternative, but the post-create `withProxySQL` path below still reconstructs a master-slave layout. `CreateInnoDBCluster` creates `ic_msb_*` with `node1..N`, so `deploy replication ... --topology=innodb-cluster --with-proxysql --skip-router` will fail after the cluster is created.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/replication.go` around lines 244 - 258, The post-create withProxySQL path still assumes a master/slave layout which breaks when topology is "innodb-cluster" (CreateInnoDBCluster creates ic_msb_* nodes named node1..N) despite SkipRouter being passed via sandbox.CreateReplicationSandbox(…, ReplicationData{SkipRouter: skipRouter}); update the withProxySQL post-create logic to check the topology (or ReplicationData.SkipRouter) and for "innodb-cluster" either (a) skip the master/slave reconstruction entirely when SkipRouter is true, or (b) adapt to use the ic_msb_* / node1..N naming produced by CreateInnoDBCluster instead of expecting traditional master/slave hostnames so ProxySQL wiring succeeds. Ensure the change references the same symbols: CreateReplicationSandbox, ReplicationData.SkipRouter, CreateInnoDBCluster, and the withProxySQL post-create path.
♻️ Duplicate comments (4)
sandbox/innodb_cluster.go (3)
197-203: ⚠️ Potential issue | 🟠 Major — Remove the dead `makeNodesList` assignments.

Both variables are overwritten before first use, which is the current SA4006 lint failure.

🧹 Minimal fix:

    - masterList := makeNodesList(nodes)
    - slaveList := masterList
    + masterList := "1"
    + slaveList := ""
      // InnoDB Cluster always uses single-primary mode by default
      // The primary is node 1, the rest are secondaries
    - masterList = "1"
    - slaveList = ""
      for N := 2; N <= nodes; N++ {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sandbox/innodb_cluster.go` around lines 197 - 203, The initializations that call makeNodesList and copy to slaveList are dead code because masterList and slaveList are immediately overwritten; remove the two lines "masterList := makeNodesList(nodes)" and "slaveList := masterList" and keep the explicit assignments "masterList = \"1\"" and "slaveList = \"\"" so the variables are only set where they are actually used (references: masterList, slaveList, makeNodesList).
557-582: ⚠️ Potential issue | 🟠 Major — Router bootstrap should not depend on `--skip-start`.

When Router is enabled but `--skip-start` is used, this branch never runs, so `router_start`/`router_stop` are generated without a bootstrapped router directory. Please bootstrap Router whenever it is requested, not only when the cluster is started immediately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sandbox/innodb_cluster.go` around lines 557 - 582, The router bootstrap is currently gated by sandboxDef.SkipStart, so when --skip-start is used we never call bootstrapRouter and the generated router_start/router_stop scripts point at a non-bootstrapped router; move the bootstrap logic out of the if !sandboxDef.SkipStart block and run it whenever mysqlrouterPath != "" and !skipRouter regardless of SkipStart; specifically call bootstrapRouter(mysqlrouterPath, routerDir, basePort+1, sandboxDef.DbPassword, logger) after the SkipStart block (or add a separate conditional) and preserve the existing error handling (common.CondPrintf warning and CondPrintln message) so the Router is created when requested even if the cluster startup was skipped.
597-605: ⚠️ Potential issue | 🟠 Major — Use the configured Router password and avoid logging it.

`CreateInnoDBCluster` passes `sandboxDef.DbPassword` into this helper, but `bootstrapURI` still hardcodes `icadmin:icadmin`, so `--db-password` is ignored. The next `logger.Printf` also emits the full credential-bearing args, which will leak the admin secret once this is fixed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sandbox/innodb_cluster.go` around lines 597 - 605, The bootstrap URI currently hardcodes "icadmin:icadmin" and the log prints full args (leaking secrets); update the bootstrapURI in this helper to use the actual password passed from CreateInnoDBCluster (use sandboxDef.DbPassword when building bootstrapURI, e.g. "icadmin:%s@127.0.0.1:%d"), and stop logging sensitive data by changing the logger.Printf call: either log only mysqlrouterPath and a redacted/filtered version of args or omit args entirely before printing; ensure any use of args, bootstrapURI, mysqlrouterPath, and logger.Printf no longer exposes the admin password.

.github/workflows/proxysql_integration_tests.yml (1)

144-152: ⚠️ Potential issue | 🟡 Minor — Harden the ProxySQL/PostgreSQL verification block.

This is still best-effort: `|| true` masks admin-query failures, and the fixed `sleep 2` makes the replication assertion timing-dependent. Please make the `pgsql_servers` check fatal, assert it returns backends, and poll `replica1` until the marker row appears or a timeout expires.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/proxysql_integration_tests.yml around lines 144 - 152, Make the ProxySQL/Postgres checks fatal and replace the timing-dependent sleep with a poll-and-timeout loop: run the admin query via $SBDIR/proxysql/use with "SELECT * FROM pgsql_servers;" and fail the script if it returns no rows (remove the "|| true" masking), then after inserting the marker via $SBDIR/primary/use ("CREATE TABLE proxy_test...; INSERT INTO proxy_test..."), loop polling $SBDIR/replica1/use for "SELECT val FROM proxy_test;" until the expected "pg_proxysql_test" appears or a configurable timeout elapses, and if the timeout is reached exit non-zero with a clear failure message.
🧹 Nitpick comments (1)
globals/globals.go (1)
182-184: Use the new topology constant in ProxySQL topology checks.
`providers/proxysql/config.go` still matches `"innodb-cluster"` by string literal. Switching that call site to `globals.InnoDBClusterLabel` keeps ProxySQL addon detection in sync with `AllowedTopologies`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@globals/globals.go` around lines 182 - 184, Replace the hard-coded "innodb-cluster" literal in the ProxySQL topology detection with the shared constant to keep addon detection consistent with AllowedTopologies: locate the check in providers/proxysql/config.go where the code compares topology or labels to "innodb-cluster" and change it to use globals.InnoDBClusterLabel (ensure you import the globals package if not already imported) so ProxySQL uses the canonical constant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@cmd/replication.go`:
- Around line 244-258: The post-create withProxySQL path still assumes a
master/slave layout which breaks when topology is "innodb-cluster"
(CreateInnoDBCluster creates ic_msb_* nodes named node1..N) despite SkipRouter
being passed via sandbox.CreateReplicationSandbox(…, ReplicationData{SkipRouter:
skipRouter}); update the withProxySQL post-create logic to check the topology
(or ReplicationData.SkipRouter) and for "innodb-cluster" either (a) skip the
master/slave reconstruction entirely when SkipRouter is true, or (b) adapt to
use the ic_msb_* / node1..N naming produced by CreateInnoDBCluster instead of
expecting traditional master/slave hostnames so ProxySQL wiring succeeds. Ensure
the change references the same symbols: CreateReplicationSandbox,
ReplicationData.SkipRouter, CreateInnoDBCluster, and the withProxySQL
post-create path.
---
Duplicate comments:
In @.github/workflows/proxysql_integration_tests.yml:
- Around line 144-152: Make the ProxySQL/Postgres checks fatal and replace the
timing-dependent sleep with a poll-and-timeout loop: run the admin query via
$SBDIR/proxysql/use with "SELECT * FROM pgsql_servers;" and fail the script if
it returns no rows (remove the "|| true" masking), then after inserting the
marker via $SBDIR/primary/use ("CREATE TABLE proxy_test...; INSERT INTO
proxy_test..."), loop polling $SBDIR/replica1/use for "SELECT val FROM
proxy_test;" until the expected "pg_proxysql_test" appears or a configurable
timeout elapses, and if the timeout is reached exit non-zero with a clear
failure message.
In `@sandbox/innodb_cluster.go`:
- Around line 197-203: The initializations that call makeNodesList and copy to
slaveList are dead code because masterList and slaveList are immediately
overwritten; remove the two lines "masterList := makeNodesList(nodes)" and
"slaveList := masterList" and keep the explicit assignments "masterList = \"1\""
and "slaveList = \"\"" so the variables are only set where they are actually
used (references: masterList, slaveList, makeNodesList).
- Around line 557-582: The router bootstrap is currently gated by
sandboxDef.SkipStart, so when --skip-start is used we never call bootstrapRouter
and the generated router_start/router_stop scripts point at a non-bootstrapped
router; move the bootstrap logic out of the if !sandboxDef.SkipStart block and
run it whenever mysqlrouterPath != "" and !skipRouter regardless of SkipStart;
specifically call bootstrapRouter(mysqlrouterPath, routerDir, basePort+1,
sandboxDef.DbPassword, logger) after the SkipStart block (or add a separate
conditional) and preserve the existing error handling (common.CondPrintf warning
and CondPrintln message) so the Router is created when requested even if the
cluster startup was skipped.
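The proposed gating can be condensed into a predicate like the following. This is a sketch, not the repository's code: `shouldBootstrapRouter` is an invented helper that captures the review's point that the bootstrap decision depends only on the router binary being found and `--skip-router` not being set, never on `SkipStart`.

```go
package main

import "fmt"

// shouldBootstrapRouter captures the proposed gating: bootstrap whenever a
// router binary was located and --skip-router was not given, independently
// of SkipStart, so --skip-start deployments still get a bootstrapped router.
func shouldBootstrapRouter(mysqlrouterPath string, skipRouter, skipStart bool) bool {
	_ = skipStart // deliberately ignored, per the review suggestion
	return mysqlrouterPath != "" && !skipRouter
}

func main() {
	fmt.Println(shouldBootstrapRouter("/usr/local/bin/mysqlrouter", false, true))
	fmt.Println(shouldBootstrapRouter("", false, false))
}
```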
- Around line 597-605: The bootstrap URI currently hardcodes "icadmin:icadmin"
and the log prints full args (leaking secrets); update the bootstrapURI in this
helper to use the actual password passed from CreateInnoDBCluster (use
sandboxDef.DbPassword when building bootstrapURI, e.g.
"icadmin:%s@127.0.0.1:%d"), and stop logging sensitive data by changing the
logger.Printf call: either log only mysqlrouterPath and a redacted/filtered
version of args or omit args entirely before printing; ensure any use of args,
bootstrapURI, mysqlrouterPath, and logger.Printf no longer exposes the admin
password.
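One way to implement both fixes is sketched below, assuming the helper signatures invented here (`buildBootstrapURI`, `redactURI`) rather than the project's actual ones: the URI is built from the password passed in, and a redacted copy is what gets logged.

```go
package main

import (
	"fmt"
	"strings"
)

// buildBootstrapURI builds the router bootstrap URI from the actual sandbox
// password instead of the hard-coded "icadmin:icadmin" pair.
func buildBootstrapURI(password string, port int) string {
	return fmt.Sprintf("icadmin:%s@127.0.0.1:%d", password, port)
}

// redactURI masks the password portion of a user:pass@host URI so the value
// can be logged without leaking credentials.
func redactURI(uri string) string {
	colon := strings.Index(uri, ":")
	at := strings.LastIndex(uri, "@")
	if colon < 0 || at < colon {
		return uri // no user:pass section to mask
	}
	return uri[:colon+1] + "***" + uri[at:]
}

func main() {
	uri := buildBootstrapURI("s3cret", 6447)
	fmt.Println(redactURI(uri)) // log the redacted form, never the raw URI
}
```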
---
Nitpick comments:
In `@globals/globals.go`:
- Around line 182-184: Replace the hard-coded "innodb-cluster" literal in the
ProxySQL topology detection with the shared constant to keep addon detection
consistent with AllowedTopologies: locate the check in
providers/proxysql/config.go where the code compares topology or labels to
"innodb-cluster" and change it to use globals.InnoDBClusterLabel (ensure you
import the globals package if not already imported) so ProxySQL uses the
canonical constant.
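The substitution the nitpick asks for looks roughly like this. The constant below is a local stand-in for `globals.InnoDBClusterLabel` (assumed to hold the string "innodb-cluster"), and `isInnoDBCluster` is an invented helper showing the comparison against the shared constant instead of a raw literal.

```go
package main

import "fmt"

// InnoDBClusterLabel stands in for globals.InnoDBClusterLabel; sharing one
// constant keeps ProxySQL's addon detection aligned with AllowedTopologies.
const InnoDBClusterLabel = "innodb-cluster"

// isInnoDBCluster replaces comparisons against the raw "innodb-cluster"
// string literal in the ProxySQL topology detection.
func isInnoDBCluster(topology string) bool {
	return topology == InnoDBClusterLabel
}

func main() {
	fmt.Println(isInnoDBCluster(InnoDBClusterLabel))
	fmt.Println(isInnoDBCluster("master-slave"))
}
```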
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b5fa2fec-30a4-401e-bc07-6b495f6bd601
📒 Files selected for processing (15)
- .github/workflows/integration_tests.yml
- .github/workflows/proxysql_integration_tests.yml
- cmd/replication.go
- defaults/defaults.go
- globals/globals.go
- globals/template_names.go
- sandbox/cluster_templates.go
- sandbox/innodb_cluster.go
- sandbox/replication.go
- sandbox/templates.go
- sandbox/templates/cluster/check_cluster.gotxt
- sandbox/templates/cluster/init_cluster.gotxt
- sandbox/templates/cluster/innodb_cluster_options.gotxt
- sandbox/templates/cluster/router_start.gotxt
- sandbox/templates/cluster/router_stop.gotxt
✅ Files skipped from review due to trivial changes (6)
- sandbox/templates/cluster/router_start.gotxt
- sandbox/templates/cluster/check_cluster.gotxt
- sandbox/templates/cluster/innodb_cluster_options.gotxt
- sandbox/templates/cluster/router_stop.gotxt
- sandbox/cluster_templates.go
- sandbox/templates/cluster/init_cluster.gotxt
🚧 Files skipped from review as they are similar to previous changes (4)
- sandbox/templates.go
- globals/template_names.go
- defaults/defaults.go
- .github/workflows/integration_tests.yml
Summary
New topology:
dbdeployer deploy replication 8.4.8 --topology=innodb-cluster
Deploys Group Replication + MySQL Shell cluster management + optional MySQL Router:
- Cluster is created via dba.createCluster() + cluster.addInstance()
- New --skip-router flag to deploy without Router (useful when using ProxySQL instead)
New files (829 lines)
- sandbox/innodb_cluster.go: CreateInnoDBCluster() with mysqlsh/mysqlrouter detection, node creation, cluster bootstrap
- sandbox/cluster_templates.go: template embeds
- sandbox/templates/cluster/
Usage
Requirements
- mysqlsh (MySQL Shell): checked in basedir/bin/ then PATH
- mysqlrouter (optional): checked in basedir/bin/ then PATH
Closes #58
Test Plan
Summary by CodeRabbit
New Features
Bug Fixes
Tests