Merged
193 changes: 193 additions & 0 deletions .github/workflows/integration_tests.yml
@@ -191,6 +191,199 @@ jobs:
done 2>/dev/null || true
pkill -9 -u "$USER" postgres 2>/dev/null || true

# Test InnoDB Cluster topology with MySQL Shell + MySQL Router
innodb-cluster-test:
name: InnoDB Cluster (${{ matrix.mysql-version }})
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
mysql-version:
- '8.4.8'
- '9.5.0'
env:
GO111MODULE: on
SANDBOX_BINARY: ${{ github.workspace }}/opt/mysql
MYSQL_VERSION: ${{ matrix.mysql-version }}
steps:
- uses: actions/checkout@v4

- uses: actions/setup-go@v5
with:
go-version: '1.22'

- name: Install system libraries
run: |
sudo apt-get update
sudo apt-get install -y libaio1 libnuma1 libncurses5

- name: Build dbdeployer
run: go build -o dbdeployer .

- name: Cache MySQL tarball
uses: actions/cache@v4
with:
path: /tmp/mysql-tarball
key: mysql-${{ matrix.mysql-version }}-linux-x86_64-v2

- name: Download MySQL Server
env:
SHORT_VER_ENV: ${{ matrix.mysql-version }}
run: |
SHORT_VER="${SHORT_VER_ENV%.*}"
TARBALL="mysql-${MYSQL_VERSION}-linux-glibc2.17-x86_64.tar.xz"
mkdir -p /tmp/mysql-tarball
if [ ! -f "/tmp/mysql-tarball/$TARBALL" ]; then
curl -L -f -o "/tmp/mysql-tarball/$TARBALL" \
"https://dev.mysql.com/get/Downloads/MySQL-${SHORT_VER}/$TARBALL"
fi
mkdir -p "$SANDBOX_BINARY"
./dbdeployer unpack "/tmp/mysql-tarball/$TARBALL" --sandbox-binary="$SANDBOX_BINARY"
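The `SHORT_VER="${SHORT_VER_ENV%.*}"` line above derives the download directory component (e.g. `MySQL-8.4`) from the full version; a minimal sketch of how the `%.*` expansion behaves:

```shell
#!/bin/sh
# ${var%.*} removes the shortest suffix matching ".*",
# i.e. it drops the patch component of an x.y.z version.
MYSQL_VERSION="8.4.8"
SHORT_VER="${MYSQL_VERSION%.*}"
echo "$SHORT_VER"       # prints 8.4
# Applying it again would also drop the minor component:
echo "${SHORT_VER%.*}"  # prints 8
```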

- name: Download and install MySQL Shell
env:
SHORT_VER_ENV: ${{ matrix.mysql-version }}
run: |
SHORT_VER="${SHORT_VER_ENV%.*}"
SHELL_TARBALL="mysql-shell-${MYSQL_VERSION}-linux-glibc2.17-x86-64bit.tar.gz"
echo "Downloading MySQL Shell ${MYSQL_VERSION}..."
curl -L -f -o "/tmp/$SHELL_TARBALL" \
"https://dev.mysql.com/get/Downloads/MySQL-Shell-${SHORT_VER}/$SHELL_TARBALL"
tar xzf "/tmp/$SHELL_TARBALL" -C /tmp/
SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"
Comment on lines +243 to +255
⚠️ Potential issue | 🟠 Major

MySQL Shell libraries are not copied during installation.

The MySQL Shell step copies only the mysqlsh binary and omits the bundled libraries (the lib/ directory), which include the authentication plugins and the Python 3.9 runtime. By contrast, the MySQL Router step (line 269) copies both the binary and its libraries with `cp -r "$ROUTER_DIR/lib/."`. The Shell step should do the same so that mysqlsh has all of its runtime dependencies.

Apply consistent library copying
           SHELL_DIR=$(ls -d /tmp/mysql-shell-${MYSQL_VERSION}* | head -1)
           cp "$SHELL_DIR/bin/mysqlsh" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
+          cp -r "$SHELL_DIR/lib/." "$SANDBOX_BINARY/${MYSQL_VERSION}/lib/" 2>/dev/null || true
           echo "mysqlsh installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"
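As a sanity check after relocating binaries this way, one could verify that every shared-library dependency still resolves (a hypothetical helper, not part of the PR; the commented path is illustrative):

```shell
#!/bin/sh
# Fail if `ldd` reports any dependency of the binary as "not found".
check_libs() {
  bin=$1
  missing=$(ldd "$bin" 2>/dev/null | grep -c "not found" || true)
  missing=${missing:-0}
  if [ "$missing" -gt 0 ]; then
    echo "FAIL: $bin has $missing unresolved shared libraries"
    ldd "$bin" | grep "not found"
    return 1
  fi
  echo "OK: all shared libraries for $bin resolved"
}

# Example (path assumed from the workflow above):
# check_libs "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlsh"
```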


- name: Download and install MySQL Router
env:
SHORT_VER_ENV: ${{ matrix.mysql-version }}
run: |
SHORT_VER="${SHORT_VER_ENV%.*}"
ROUTER_TARBALL="mysql-router-${MYSQL_VERSION}-linux-glibc2.17-x86_64.tar.xz"
echo "Downloading MySQL Router ${MYSQL_VERSION}..."
curl -L -f -o "/tmp/$ROUTER_TARBALL" \
"https://dev.mysql.com/get/Downloads/MySQL-Router-${SHORT_VER}/$ROUTER_TARBALL"
tar xJf "/tmp/$ROUTER_TARBALL" -C /tmp/
ROUTER_DIR=$(ls -d /tmp/mysql-router-${MYSQL_VERSION}* | head -1)
cp "$ROUTER_DIR/bin/mysqlrouter" "$SANDBOX_BINARY/${MYSQL_VERSION}/bin/"
cp -r "$ROUTER_DIR/lib/." "$SANDBOX_BINARY/${MYSQL_VERSION}/lib/" 2>/dev/null || true
echo "mysqlrouter installed at $SANDBOX_BINARY/${MYSQL_VERSION}/bin/mysqlrouter"

- name: Install ProxySQL
run: |
PROXYSQL_VERSION="3.0.6"
wget -nv -O /tmp/proxysql.deb \
"https://github.com/sysown/proxysql/releases/download/v${PROXYSQL_VERSION}/proxysql_${PROXYSQL_VERSION}-ubuntu22_amd64.deb"
mkdir -p /tmp/proxysql-extract
dpkg-deb -x /tmp/proxysql.deb /tmp/proxysql-extract
sudo cp /tmp/proxysql-extract/usr/bin/proxysql /usr/local/bin/proxysql
sudo chmod +x /usr/local/bin/proxysql

- name: Test InnoDB Cluster with MySQL Router
run: |
echo "=== Deploy InnoDB Cluster ${MYSQL_VERSION} with Router ==="
./dbdeployer deploy replication "$MYSQL_VERSION" \
--topology=innodb-cluster \
--sandbox-binary="$SANDBOX_BINARY" \
--nodes=3

echo "=== Verify cluster status ==="
~/sandboxes/ic_msb_*/check_cluster

echo "=== Functional test: write on primary, read on all nodes ==="
# Find the primary node (node1)
SBDIR=$(ls -d ~/sandboxes/ic_msb_*)
$SBDIR/node1/use -e "CREATE DATABASE ic_test; USE ic_test; CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, val VARCHAR(100)); INSERT INTO t1 (val) VALUES ('hello_from_primary');"
sleep 3
echo "--- Read from node2 (should see replicated data) ---"
RESULT=$($SBDIR/node2/use -e "SELECT val FROM ic_test.t1;" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "hello_from_primary" || { echo "FAIL: data not replicated to node2"; exit 1; }
echo "--- Read from node3 ---"
RESULT=$($SBDIR/node3/use -e "SELECT val FROM ic_test.t1;" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "hello_from_primary" || { echo "FAIL: data not replicated to node3"; exit 1; }

echo "=== Functional test: connect through MySQL Router ==="
ROUTER_RW_PORT=$([ -f "$SBDIR/router/mysqlrouter.conf" ] && grep -A5 '\[routing:bootstrap_rw\]' "$SBDIR/router/mysqlrouter.conf" | grep 'bind_port' | awk -F= '{print $2}' | tr -d ' ' || echo "")
if [ -n "$ROUTER_RW_PORT" ]; then
echo "Router R/W port: $ROUTER_RW_PORT"
$SBDIR/node1/use -h 127.0.0.1 -P "$ROUTER_RW_PORT" -e "INSERT INTO ic_test.t1 (val) VALUES ('via_router');"
sleep 2
RESULT=$($SBDIR/node2/use -e "SELECT val FROM ic_test.t1 WHERE val='via_router';" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "via_router" || { echo "FAIL: write through Router not replicated"; exit 1; }
echo "OK: Router R/W connection works and replication verified"
else
echo "WARN: Could not determine Router R/W port, skipping Router connection test"
fi

echo "=== Cleanup ==="
./dbdeployer delete all --skip-confirm

- name: Test InnoDB Cluster with --skip-router + write/read verification
run: |
echo "=== Deploy InnoDB Cluster ${MYSQL_VERSION} without Router ==="
./dbdeployer deploy replication "$MYSQL_VERSION" \
--topology=innodb-cluster \
--skip-router \
--sandbox-binary="$SANDBOX_BINARY" \
--nodes=3

echo "=== Verify cluster status ==="
~/sandboxes/ic_msb_*/check_cluster

echo "=== Functional test: write/read across cluster ==="
SBDIR=$(ls -d ~/sandboxes/ic_msb_*)
$SBDIR/node1/use -e "CREATE DATABASE skiprt_test; USE skiprt_test; CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, msg TEXT); INSERT INTO t1 (msg) VALUES ('skip_router_test');"
sleep 3
RESULT=$($SBDIR/node3/use -e "SELECT msg FROM skiprt_test.t1;" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "skip_router_test" || { echo "FAIL: data not replicated"; exit 1; }
echo "OK: InnoDB Cluster replication works without Router"

echo "=== Cleanup ==="
./dbdeployer delete all --skip-confirm

- name: Test InnoDB Cluster with ProxySQL (instead of Router)
run: |
echo "=== Deploy InnoDB Cluster ${MYSQL_VERSION} + ProxySQL ==="
./dbdeployer deploy replication "$MYSQL_VERSION" \
--topology=innodb-cluster \
--skip-router \
--with-proxysql \
--sandbox-binary="$SANDBOX_BINARY" \
--nodes=3

echo "=== Verify cluster status ==="
~/sandboxes/ic_msb_*/check_cluster

echo "=== Verify ProxySQL sees the backend servers ==="
SBDIR=$(ls -d ~/sandboxes/ic_msb_*)
SERVERS=$($SBDIR/proxysql/use -e "SELECT hostname, port, hostgroup_id, status FROM runtime_mysql_servers;" 2>&1)
echo "$SERVERS"
# Verify at least 2 servers are ONLINE
ONLINE_COUNT=$(echo "$SERVERS" | grep -c "ONLINE" || true)
echo "Online servers: $ONLINE_COUNT"
[ "$ONLINE_COUNT" -ge 2 ] || { echo "FAIL: expected at least 2 ONLINE servers in ProxySQL"; exit 1; }

echo "=== Functional test: write through ProxySQL ==="
$SBDIR/proxysql/use_proxy -e "CREATE DATABASE proxy_ic_test; USE proxy_ic_test; CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, val VARCHAR(100)); INSERT INTO t1 (val) VALUES ('via_proxysql');"
sleep 3
echo "--- Verify on node2 directly ---"
RESULT=$($SBDIR/node2/use -e "SELECT val FROM proxy_ic_test.t1;" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "via_proxysql" || { echo "FAIL: write through ProxySQL not replicated"; exit 1; }
echo "OK: ProxySQL -> InnoDB Cluster write + replication verified"

- name: Cleanup
if: always()
run: |
./dbdeployer delete all --skip-confirm 2>/dev/null || true
pkill -9 -u "$USER" mysqld 2>/dev/null || true
pkill -9 -u "$USER" mysqlrouter 2>/dev/null || true
pkill -9 -u "$USER" proxysql 2>/dev/null || true
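The `bind_port` extraction in the Router test above chains `grep -A5`, which assumes the key sits within five lines of the section header; an alternative sketch that tracks the INI section boundary explicitly (the config contents below are made up for illustration):

```shell
#!/bin/sh
# Extract bind_port from one section of an INI-style mysqlrouter.conf.
# Stops at the next "[...]" header instead of assuming the key appears
# within a fixed number of lines after the section name.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[routing:bootstrap_rw]
bind_address=127.0.0.1
bind_port=6446
[routing:bootstrap_ro]
bind_port=6447
EOF
port=$(awk -F= '
  /^\[routing:bootstrap_rw\]/ { in_sec = 1; next }
  /^\[/                       { in_sec = 0 }
  in_sec && $1 == "bind_port" { gsub(/ /, "", $2); print $2; exit }
' "$conf")
echo "$port"   # prints 6446
rm -f "$conf"
```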

# Test the "downloads get-by-version" + "unpack" flow that users follow
# from the quickstart guide. This catches registry gaps and download issues.
downloads-test:
22 changes: 17 additions & 5 deletions .github/workflows/proxysql_integration_tests.yml
@@ -136,12 +136,24 @@ jobs:
PG_FULL=$(ls ~/opt/postgresql/ | head -1)
echo "=== Deploying PostgreSQL $PG_FULL replication + ProxySQL ==="
./dbdeployer deploy replication "$PG_FULL" --provider=postgresql --with-proxysql
SBDIR=$(ls -d ~/sandboxes/postgresql_repl_*)

echo "=== Check ProxySQL is running ==="
~/sandboxes/postgresql_repl_*/proxysql/status
echo "=== Check ProxySQL admin interface ==="
~/sandboxes/postgresql_repl_*/proxysql/use -e "SELECT * FROM pgsql_servers;" || true
echo "=== Connect through ProxySQL (may fail - pgsql auth config is WIP) ==="
~/sandboxes/postgresql_repl_*/proxysql/use_proxy -c "SELECT 1;" || echo "WARN: ProxySQL pgsql proxy connection failed (expected - auth config WIP)"
$SBDIR/proxysql/status

echo "=== Check ProxySQL has pgsql_servers configured ==="
$SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;" || true

echo "=== Functional test: verify PostgreSQL replication works ==="
$SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
sleep 2
RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1)
echo "$RESULT"
echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }
Comment on lines +144 to +152
⚠️ Potential issue | 🟡 Minor

Tighten this verification block so it actually catches regressions.

The pgsql_servers query is still best-effort because of || true, so missing ProxySQL backend registration will not fail the job. The fixed sleep 2 also makes the replication assertion timing-dependent. Make the admin query fatal and poll the replica with a timeout instead.

🧪 One way to harden the check
-          $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;" || true
+          $SBDIR/proxysql/use -e "SELECT * FROM pgsql_servers;"

           echo "=== Functional test: verify PostgreSQL replication works ==="
           $SBDIR/primary/use -c "CREATE TABLE proxy_test(id serial, val text); INSERT INTO proxy_test(val) VALUES ('pg_proxysql_test');"
-          sleep 2
-          RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1)
+          for _ in $(seq 1 15); do
+            RESULT=$($SBDIR/replica1/use -c "SELECT val FROM proxy_test;" 2>&1 || true)
+            echo "$RESULT" | grep -q "pg_proxysql_test" && break
+            sleep 1
+          done
           echo "$RESULT"
           echo "$RESULT" | grep -q "pg_proxysql_test" || { echo "FAIL: PG replication not working"; exit 1; }

echo "OK: PostgreSQL replication verified with ProxySQL deployed"

echo "=== ProxySQL proxy connection test (pgsql auth WIP) ==="
$SBDIR/proxysql/use_proxy -c "SELECT 1;" || echo "WARN: ProxySQL pgsql proxy connection failed (expected - auth config WIP)"
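Several steps in these workflows pair a fixed `sleep` with a single `grep`; the review above suggests polling instead. A sketch of a reusable poll-until-match helper (hypothetical, not part of the PR; the commented sandbox path is illustrative):

```shell
#!/bin/sh
# retry_grep CMD PATTERN [TIMEOUT]
# Re-runs CMD (a shell string) once per second until its combined
# output matches PATTERN or TIMEOUT seconds (default 30) elapse.
# Returns 0 on match, 1 on timeout; echoes the last output either way.
retry_grep() {
  cmd=$1; pattern=$2; timeout=${3:-30}
  i=0
  out=""
  while [ "$i" -lt "$timeout" ]; do
    out=$(eval "$cmd" 2>&1 || true)
    if printf '%s\n' "$out" | grep -q -- "$pattern"; then
      printf '%s\n' "$out"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  printf '%s\n' "$out"
  return 1
}

# Usage (sandbox path is illustrative):
# retry_grep "$SBDIR/replica1/use -c 'SELECT val FROM proxy_test;'" pg_proxysql_test 30
```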

- name: Cleanup
if: always()
12 changes: 10 additions & 2 deletions cmd/replication.go
@@ -241,6 +241,8 @@ func replicationSandbox(cmd *cobra.Command, args []string) {
globals.NdbLabel)

}
skipRouter, _ := flags.GetBool(globals.SkipRouterLabel)

Copilot AI Apr 2, 2026


--skip-router is accepted unconditionally and forwarded into ReplicationData, but it only has an effect for the innodb-cluster topology. For consistency with existing option validation (e.g., --ndb-nodes, --single-primary), consider erroring out when --skip-router is provided with any other topology to avoid surprising no-op flags.

Suggested change
skipRouter, _ := flags.GetBool(globals.SkipRouterLabel)
skipRouter, _ := flags.GetBool(globals.SkipRouterLabel)
if skipRouter && topology != globals.InnodbClusterLabel {
common.Exitf(1, "option '%s' can only be used with '%s' topology ",
globals.SkipRouterLabel,
globals.InnodbClusterLabel)
}


origin := args[0]
if args[0] != sd.BasedirName {
origin = sd.BasedirName
@@ -252,7 +254,8 @@ func replicationSandbox(cmd *cobra.Command, args []string) {
NdbNodes: ndbNodes,
MasterIp: masterIp,
MasterList: masterList,
SlaveList: slaveList})
SlaveList: slaveList,
SkipRouter: skipRouter})
if err != nil {
common.Exitf(1, globals.ErrCreatingSandbox, err)
}
@@ -296,7 +299,9 @@ var replicationCmd = &cobra.Command{
Long: `The replication command allows you to deploy several nodes in replication.
Allowed topologies are "master-slave" for all versions, and "group", "all-masters", "fan-in"
for 5.7.17+.
Topologies "pcx" and "ndb" are available for binaries of type Percona Xtradb Cluster and MySQL Cluster.
Topologies "pxc" and "ndb" are available for binaries of type Percona Xtradb Cluster and MySQL Cluster.
Topology "innodb-cluster" deploys Group Replication managed by MySQL Shell AdminAPI with optional
MySQL Router for connection routing (requires MySQL 8.0.11+ and mysqlsh).
Comment on lines 299 to +304

Copilot AI Apr 2, 2026


--with-proxysql wiring later in this function computes the created sandbox directory using the master/slave prefix (MasterSlavePrefix). With the new innodb-cluster topology using InnoDBClusterPrefix, ProxySQL deployment will look in the wrong directory and fail when both --topology=innodb-cluster and --with-proxysql are used. Consider deriving sandboxDir from the selected topology (or reading it from the sandbox catalog/description after deployment) instead of assuming master/slave.

For this command to work, there must be a directory $HOME/opt/mysql/5.7.21, containing
the binary files from mysql-5.7.21-$YOUR_OS-x86_64.tar.gz
Use the "unpack" command to get the tarball into the right directory.
@@ -321,6 +326,8 @@ Use the "unpack" command to get the tarball into the right directory.
$ dbdeployer deploy --topology=fan-in replication 5.7
$ dbdeployer deploy --topology=pxc replication pxc5.7.25
$ dbdeployer deploy --topology=ndb replication ndb8.0.14
$ dbdeployer deploy --topology=innodb-cluster replication 8.4.4
$ dbdeployer deploy --topology=innodb-cluster replication 8.4.4 --skip-router
`,
Annotations: map[string]string{"export": ExportAnnotationToJson(ReplicationExport)},
}
Expand All @@ -339,6 +346,7 @@ func init() {
replicationCmd.PersistentFlags().BoolP(globals.SuperReadOnlyLabel, "", false, "Set super-read-only for slaves")
replicationCmd.PersistentFlags().Bool(globals.ReplHistoryDirLabel, false, "uses the replication directory to store mysql client history")
setPflag(replicationCmd, globals.ChangeMasterOptions, "", "CHANGE_MASTER_OPTIONS", "", "options to add to CHANGE MASTER TO", true)
replicationCmd.PersistentFlags().Bool(globals.SkipRouterLabel, false, "Skip MySQL Router deployment for InnoDB Cluster topology")
replicationCmd.PersistentFlags().Bool("with-proxysql", false, "Deploy ProxySQL alongside the replication sandbox")
Comment on lines 346 to 350

Copilot AI Apr 2, 2026


There is extensive testscript coverage for existing replication topologies (e.g., ts/templates/group/group.tmpl, ts/templates/replication/replication.tmpl), but no automated coverage for the new --topology=innodb-cluster flow. Consider adding a testscript smoke test that deploys with --topology=innodb-cluster --skip-router --skip-start using a mock basedir that includes a stub mysqlsh binary, and asserts that the expected scripts (init_cluster, check_cluster, etc.) are generated.

replicationCmd.PersistentFlags().String(globals.ProviderLabel, globals.ProviderValue, "Database provider (mysql, postgresql)")
}
15 changes: 15 additions & 0 deletions defaults/defaults.go
@@ -66,6 +66,8 @@ type DbdeployerDefaults struct {
RemoteTarballUrl string `json:"remote-tarball-url"`
PxcPrefix string `json:"pxc-prefix"`
NdbPrefix string `json:"ndb-prefix"`
InnoDBClusterPrefix string `json:"innodb-cluster-prefix"`
InnoDBClusterBasePort int `json:"innodb-cluster-base-port"`
DefaultSandboxExecutable string `json:"default-sandbox-executable"`
DownloadNameLinux string `json:"download-name-linux"`
DownloadNameMacOs string `json:"download-name-macos"`
@@ -134,6 +136,8 @@ var (
RemoteTarballUrl: "https://github.com/datacharmer/dbdeployer/master/downloads/tarball_list.json",
NdbPrefix: "ndb_msb_",
PxcPrefix: "pxc_msb_",
InnoDBClusterPrefix: "ic_msb_",
InnoDBClusterBasePort: 21000,
DefaultSandboxExecutable: "default",
DownloadNameLinux: "mysql-{{.Version}}-linux-glibc2.17-x86_64{{.Minimal}}.{{.Ext}}",
DownloadNameMacOs: "mysql-{{.Version}}-macos11-x86_64.{{.Ext}}",
@@ -226,6 +230,7 @@ func ValidateDefaults(nd DbdeployerDefaults) bool {
checkInt("pxc-base-port", nd.PxcBasePort, minPortValue, maxPortValue) &&
checkInt("ndb-base-port", nd.NdbBasePort, minPortValue, maxPortValue) &&
checkInt("ndb-cluster-port", nd.NdbClusterPort, minPortValue, maxPortValue) &&
checkInt("innodb-cluster-base-port", nd.InnoDBClusterBasePort, minPortValue, maxPortValue) &&
checkInt("group-port-delta", nd.GroupPortDelta, 101, 299) &&
checkInt("mysqlx-port-delta", nd.MysqlXPortDelta, 2000, 15000) &&
checkInt("admin-port-delta", nd.AdminPortDelta, 2000, 15000)
@@ -250,6 +255,7 @@ func ValidateDefaults(nd DbdeployerDefaults) bool {
nd.MasterAbbr != nd.SlaveAbbr &&
nd.MultiplePrefix != nd.NdbPrefix &&
nd.MultiplePrefix != nd.PxcPrefix &&
nd.MultiplePrefix != nd.InnoDBClusterPrefix &&
nd.SandboxHome != nd.SandboxBinary
if !noConflicts {
common.CondPrintf("Conflicts found in defaults values:\n")
@@ -270,6 +276,7 @@ func ValidateDefaults(nd DbdeployerDefaults) bool {
nd.MultiplePrefix != "" &&
nd.PxcPrefix != "" &&
nd.NdbPrefix != "" &&
nd.InnoDBClusterPrefix != "" &&
nd.DefaultSandboxExecutable != "" &&
nd.DownloadUrl != "" &&
nd.DownloadNameLinux != "" &&
@@ -403,6 +410,10 @@ func UpdateDefaults(label, value string, storeDefaults bool) {
newDefaults.PxcPrefix = value
case "ndb-prefix":
newDefaults.NdbPrefix = value
case "innodb-cluster-prefix":
newDefaults.InnoDBClusterPrefix = value
case "innodb-cluster-base-port":
newDefaults.InnoDBClusterBasePort = common.Atoi(value)
case "default-sandbox-executable":
newDefaults.DefaultSandboxExecutable = value
case "download-url":
@@ -538,6 +549,10 @@ func DefaultsToMap() common.StringMap {
"pxc-prefix": currentDefaults.PxcPrefix,
"NdbPrefix": currentDefaults.NdbPrefix,
"ndb-prefix": currentDefaults.NdbPrefix,
"InnoDBClusterPrefix": currentDefaults.InnoDBClusterPrefix,
"innodb-cluster-prefix": currentDefaults.InnoDBClusterPrefix,
"InnoDBClusterBasePort": currentDefaults.InnoDBClusterBasePort,
"innodb-cluster-base-port": currentDefaults.InnoDBClusterBasePort,
"DefaultSandboxExecutable": currentDefaults.DefaultSandboxExecutable,
"default-sandbox-executable": currentDefaults.DefaultSandboxExecutable,
"download-url": currentDefaults.DownloadUrl,
9 changes: 9 additions & 0 deletions globals/globals.go
@@ -179,7 +179,9 @@ const (
TopologyValue = "master-slave"
PxcLabel = "pxc"
NdbLabel = "ndb"
InnoDBClusterLabel = "innodb-cluster"
ChangeMasterOptions = "change-master-options"
SkipRouterLabel = "skip-router"

// Instantiated in cmd/unpack.go and unpack/unpack.go
GzExt = ".gz"
@@ -320,6 +322,12 @@ const (
ScriptCheckSlaves = "check_slaves"
ScriptUseAllMasters = "use_all_masters"
ScriptUseAllSlaves = "use_all_slaves"

// InnoDB Cluster scripts
ScriptInitCluster = "init_cluster"
ScriptCheckCluster = "check_cluster"
ScriptRouterStart = "router_start"
ScriptRouterStop = "router_stop"
)

// Common error messages
@@ -475,6 +483,7 @@ var AllowedTopologies = []string{
FanInLabel,
AllMastersLabel,
NdbLabel,
InnoDBClusterLabel,
}

// This structure is not used directly by dbdeployer.
7 changes: 7 additions & 0 deletions globals/template_names.go
@@ -171,6 +171,13 @@ const (
TmplInitNodes84 = "init_nodes84"
TmplGroupReplOptions84 = "group_repl_options84"

// innodb_cluster
TmplInnoDBClusterOptions = "innodb_cluster_options"
TmplInitCluster = "init_cluster"
TmplCheckCluster = "check_cluster"
TmplRouterStart = "router_start"
TmplRouterStop = "router_stop"

// MySQL 8.4+ specific templates
TmplInitSlaves84 = "init_slaves_84"
TmplReplCrashSafeOptions84 = "repl_crash_safe_options84"