Merged
1 change: 1 addition & 0 deletions website/astro.config.mjs
Original file line number Diff line number Diff line change
@@ -41,6 +41,7 @@ export default defineConfig({
{ label: 'Group Replication', slug: 'deploying/group-replication' },
{ label: 'Fan-In & All-Masters', slug: 'deploying/fan-in-all-masters' },
{ label: 'NDB Cluster', slug: 'deploying/ndb-cluster' },
{ label: 'InnoDB Cluster', slug: 'deploying/innodb-cluster' },
],
},
{
98 changes: 96 additions & 2 deletions website/src/content/docs/deploying/fan-in-all-masters.md
@@ -1,6 +1,100 @@
---
title: Fan-In & All-Masters
description: Fan-In & All-Masters documentation
description: Deploy multi-source replication topologies with dbdeployer — fan-in and all-masters.
---

Coming soon.
dbdeployer supports two multi-source replication topologies where nodes receive writes from more than one master. Both require MySQL 5.7.9 or later.

## Fan-In

Fan-in inverts the usual one-master, many-slaves layout: multiple masters feed into a single slave. This is useful for consolidating writes from many sources into one replica — for example, aggregating data from multiple application databases.

```bash
dbdeployer deploy replication 8.4.8 --topology=fan-in
```

Default layout: nodes 1 and 2 are masters, node 3 is the slave.

```
~/sandboxes/fan_in_msb_8_4_8/
├── node1/ # master
├── node2/ # master
├── node3/ # slave (replicates from both masters)
├── check_slaves
├── test_replication
└── use_all
```

### Custom Master and Slave Lists

Use `--master-list` and `--slave-list` with `--nodes` to define any layout:

```bash
dbdeployer deploy replication 8.4.8 --topology=fan-in \
--nodes=5 \
--master-list="1,2,3" \
--slave-list="4,5" \
--concurrent
```

This creates 5 nodes where nodes 1–3 are masters and nodes 4–5 each replicate from all three masters.
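Each slave runs one replication channel per master, so you can sanity-check a custom layout by counting channels in `SHOW REPLICA STATUS\G` output. A minimal sketch — the helper name is ours, and it simply counts the `Source_Host:` line that appears once in each per-channel status block:

```shell
# Count replication channels by counting Source_Host lines
# (one appears in each per-channel status block of \G output).
count_channels() {
  grep -c "Source_Host:"
}

# On nodes 4 and 5 of the layout above, this should report 3:
# ~/sandboxes/fan_in_msb_8_4_8/node4/use -e "SHOW REPLICA STATUS\G" | count_channels
```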

### Verifying Fan-In Replication

```bash
~/sandboxes/fan_in_msb_8_4_8/test_replication
# master 1
# master 2
# slave 3
# ok - '2' == '2' - Slaves received tables from all masters
# pass: 1
# fail: 0
```

## All-Masters

In the all-masters topology, every node is simultaneously a master and a slave of every other node. This creates a fully connected replication graph where a write on any node propagates to all others.

```bash
dbdeployer deploy replication 8.4.8 --topology=all-masters
```

Default: 3 nodes, each replicating from the other two.

```
~/sandboxes/all_masters_msb_8_4_8/
├── node1/ # master + slave
├── node2/ # master + slave
├── node3/ # master + slave
├── check_slaves
├── test_replication
└── use_all
```

### Use Cases

**Fan-in** is suited for:
- Data warehouses that consolidate writes from multiple OLTP sources
- Centralized audit or logging replicas
- Cross-shard aggregation in sharded setups

**All-masters** is suited for:
- Testing multi-source conflict scenarios
- Active-active setups where all nodes need to accept writes and stay in sync
- Exploring MySQL's multi-source replication capabilities

## Running Queries on All Nodes

```bash
~/sandboxes/all_masters_msb_8_4_8/use_all -e "SHOW REPLICA STATUS\G" | grep -E "Source_Host|Running"
```

## Minimum Version

Both topologies require MySQL 5.7.9 or later. Use `dbdeployer versions` to see what is available.

## Related Pages

- [Replication overview](/dbdeployer/deploying/replication)
- [Group Replication](/dbdeployer/deploying/group-replication)
- [Topology reference](/dbdeployer/reference/topology-reference)
100 changes: 98 additions & 2 deletions website/src/content/docs/deploying/group-replication.md
@@ -1,6 +1,102 @@
---
title: Group Replication
description: Group Replication documentation
description: Deploy MySQL Group Replication clusters with dbdeployer — single-primary and multi-primary topologies.
---

Coming soon.
MySQL Group Replication (GR) is MySQL's built-in multi-master clustering technology. It provides automatic failover, conflict detection, and distributed recovery without external tools. dbdeployer makes it easy to spin up GR clusters for testing and development.

**Minimum version:** MySQL 5.7.17+

## Single-Primary Mode

In single-primary mode, one node is the primary (read/write) and the rest are secondaries (read-only). Failover is automatic — if the primary fails, the group elects a new one.

```bash
dbdeployer deploy replication 8.4.8 --topology=group --single-primary
```

This creates three nodes by default:

```
~/sandboxes/group_sp_msb_8_4_8/
├── node1/ # primary
├── node2/ # secondary
├── node3/ # secondary
├── check_nodes
├── start_all
├── stop_all
└── use_all
```

Connect to the primary:

```bash
~/sandboxes/group_sp_msb_8_4_8/n1 -e "SELECT @@port, @@read_only"
```

## Multi-Primary Mode

In multi-primary mode, all nodes accept writes simultaneously. Conflict detection handles concurrent updates to the same rows.

```bash
dbdeployer deploy replication 8.4.8 --topology=group
```

All nodes are writable:

```bash
~/sandboxes/group_msb_8_4_8/n1 -e "CREATE DATABASE test1"
~/sandboxes/group_msb_8_4_8/n2 -e "CREATE DATABASE test2"
~/sandboxes/group_msb_8_4_8/n3 -e "SELECT schema_name FROM information_schema.schemata"
```

## Monitoring: check_nodes

The `check_nodes` script runs `select * from performance_schema.replication_group_members` on each node and prints the raw result, so you can compare the membership list — member host, port, state, and role — as seen by every member:

```bash
~/sandboxes/group_msb_8_4_8/check_nodes
```
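If you prefer a one-line summary per member over the raw table, you can post-process the query output yourself. A minimal sketch, assuming the default column order of `replication_group_members` (channel, id, host, port, state, role, version) and tab-separated batch output from the `mysql` client:

```shell
# Print "host:port ROLE STATE" for each group member, skipping the header.
# Column positions assume the default replication_group_members layout.
summarize_members() {
  awk -F'\t' 'NR > 1 { print $3 ":" $4, $6, $5 }'
}

# Example (sandbox paths and ports vary):
# ~/sandboxes/group_msb_8_4_8/n1 -e \
#   "select * from performance_schema.replication_group_members" | summarize_members
```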

## Available Scripts

| Script | Purpose |
|--------|---------|
| `n1`, `n2`, `n3` | Connect to node 1, 2, 3 |
| `check_nodes` | Show group membership and role of each node |
| `start_all` | Start all nodes |
| `stop_all` | Stop all nodes |
| `use_all` | Run a query on all nodes |
| `test_replication` | Verify data propagates across all nodes |

## Controlling the Number of Nodes

Use `--nodes` to deploy more than three nodes:

```bash
dbdeployer deploy replication 8.4.8 --topology=group --nodes=5
```

## Concurrent Deployment

Large clusters start faster with `--concurrent`:

```bash
dbdeployer deploy replication 8.4.8 --topology=group --nodes=5 --concurrent
```

## InnoDB Cluster: the Managed Alternative

MySQL InnoDB Cluster wraps Group Replication with MySQL Shell (for orchestration) and MySQL Router (for transparent failover routing). If you need the full managed stack, see [InnoDB Cluster](/dbdeployer/deploying/innodb-cluster).

For plain Group Replication without the Shell/Router overhead, the `--topology=group` approach on this page is sufficient.

## Related Pages

- [Replication overview](/dbdeployer/deploying/replication)
- [InnoDB Cluster](/dbdeployer/deploying/innodb-cluster)
- [ProxySQL integration](/dbdeployer/providers/proxysql)
- [Topology reference](/dbdeployer/reference/topology-reference)
136 changes: 136 additions & 0 deletions website/src/content/docs/deploying/innodb-cluster.md
@@ -0,0 +1,136 @@
---
title: InnoDB Cluster
description: Deploy MySQL InnoDB Cluster with dbdeployer — Group Replication managed by MySQL Shell and routed by MySQL Router or ProxySQL.
---

MySQL InnoDB Cluster combines three components into a fully managed HA solution:

- **Group Replication** — synchronous multi-master replication with automatic failover
- **MySQL Shell** (`mysqlsh`) — orchestrates cluster bootstrapping and management
- **MySQL Router** — transparent connection routing that directs reads/writes to the right node

dbdeployer automates the entire setup. You get a working cluster with a router in one command.

**Minimum version:** MySQL 8.0.11+

## Requirements

Before deploying, ensure the following are installed and in your `PATH`:

- `mysqlsh` (MySQL Shell) — required for cluster bootstrapping
- `mysqlrouter` (MySQL Router) — required unless you use `--skip-router`

```bash
which mysqlsh mysqlrouter
mysqlsh --version
mysqlrouter --version
```

## Deploy an InnoDB Cluster

```bash
dbdeployer deploy replication 8.4.8 --topology=innodb-cluster
```

This bootstraps a 3-node Group Replication cluster via MySQL Shell, then starts MySQL Router pointed at it.

```
~/sandboxes/ic_msb_8_4_8/
├── node1/ # GR node (primary)
├── node2/ # GR node (secondary)
├── node3/ # GR node (secondary)
├── router/ # MySQL Router instance
│ ├── router_start
│ ├── router_stop
│ └── router.conf
├── check_cluster
├── start_all
├── stop_all
└── use_all
```

## MySQL Router Ports

| Port | Purpose |
|------|---------|
| 6446 | Read/Write — routes to the current primary |
| 6447 | Read-Only — routes to secondaries (round-robin) |

Connect through the router:

```bash
# Writes (goes to primary)
mysql -h 127.0.0.1 -P 6446 -u msandbox -pmsandbox

# Reads (goes to a secondary)
mysql -h 127.0.0.1 -P 6447 -u msandbox -pmsandbox
```
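Under the hood, the router's configuration contains routing sections along these lines. This is an illustrative sketch, not the exact file dbdeployer writes — the cluster name, addresses, and strategies vary per deployment:

```ini
# Hypothetical routing sections; a bootstrapped router.conf also
# contains metadata_cache settings that track the cluster topology.
[routing:primary]
bind_address = 127.0.0.1
bind_port = 6446
destinations = metadata-cache://mycluster/?role=PRIMARY
routing_strategy = first-available

[routing:secondary]
bind_address = 127.0.0.1
bind_port = 6447
destinations = metadata-cache://mycluster/?role=SECONDARY
routing_strategy = round-robin
```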

## Deploy Without MySQL Router

If you don't have MySQL Router installed, or want to manage routing yourself:

```bash
dbdeployer deploy replication 8.4.8 --topology=innodb-cluster --skip-router
```

No `router/` directory is created. Nodes are still bootstrapped as a Group Replication cluster via MySQL Shell.

## Deploy with ProxySQL Instead of MySQL Router

ProxySQL can serve as the connection router for InnoDB Cluster:

```bash
dbdeployer deploy replication 8.4.8 --topology=innodb-cluster \
--skip-router \
--with-proxysql
```

ProxySQL is deployed alongside the cluster and configured with the cluster nodes as backends.

For a comparison of MySQL Router vs ProxySQL for InnoDB Cluster routing, see [Topology reference](/dbdeployer/reference/topology-reference).

## Checking Cluster Status

```bash
~/sandboxes/ic_msb_8_4_8/check_cluster
# Cluster members:
# node1 PRIMARY ONLINE
# node2 SECONDARY ONLINE
# node3 SECONDARY ONLINE
```

Or query the cluster via MySQL Shell:

```bash
~/sandboxes/ic_msb_8_4_8/n1 -e \
"SELECT member_host, member_port, member_role, member_state
FROM performance_schema.replication_group_members"
```

## Router Management

```bash
# Start router
~/sandboxes/ic_msb_8_4_8/router/router_start

# Stop router
~/sandboxes/ic_msb_8_4_8/router/router_stop
```

## Available Scripts

| Script | Purpose |
|--------|---------|
| `n1`, `n2`, `n3` | Connect to cluster node 1, 2, 3 |
| `check_cluster` | Show cluster member status and roles |
| `start_all` / `stop_all` | Start or stop all cluster nodes |
| `use_all` | Run a query on every node |
| `router/router_start` | Start the MySQL Router |
| `router/router_stop` | Stop the MySQL Router |

## Related Pages

- [Group Replication](/dbdeployer/deploying/group-replication)
- [ProxySQL integration](/dbdeployer/providers/proxysql)
- [Topology reference](/dbdeployer/reference/topology-reference)